[tor-relays] Middle relays stopping because of "SYN flood"?

r1610091651 r1610091651 at telenet.be
Thu Jan 25 19:53:17 UTC 2018


This is probably related to the regular load spikes relays have been seeing
lately. You should configure tor to protect itself and the machine it is
running on:

Limit MaxMemInQueues to <= 1 GB (in torrc)
Limit the virtual memory addressable by tor to <= 2 GB (limits on the process)
Limit the number of open files to your usual load (limits on the process)
In the firewall, limit the number of connections per IP
In the firewall, rate-limit new connections per IP

Rough sketches for each of these follow below.
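
The torrc part is a single line; a minimal sketch, assuming 1 GB is the
right cap for a 2 GB VM like yours:

    # torrc: cap the memory tor may hold in its queues; when the cap is
    # hit, tor frees circuits itself instead of letting the kernel's OOM
    # killer pick a victim
    MaxMemInQueues 1 GB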
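
For the process limits, assuming your instances run under systemd (the
systemd lines in your logs suggest they do), a drop-in is one way to do it;
the unit name below is a placeholder for whatever your instances are called:

    # /etc/systemd/system/tor@example.service.d/limits.conf (path illustrative)
    [Service]
    # cap the virtual address space of the process at ~2 GB
    LimitAS=2G
    # cap open file descriptors; size this to your usual connection load
    LimitNOFILE=16384

then systemctl daemon-reload and restart the instance. Without systemd,
ulimit -v / ulimit -n in the startup script achieve the same.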
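
For the firewall limits, an iptables sketch; port 443 matches your ORPort,
and the thresholds are placeholders to tune against what your relays
normally see (too tight and you cut off legitimate clients behind big NATs):

    # reject a source IP once it holds more than 50 open connections
    iptables -A INPUT -p tcp --dport 443 --syn \
        -m connlimit --connlimit-above 50 -j REJECT --reject-with tcp-reset
    # drop new connections from a source IP arriving faster than
    # ~10/minute after an initial burst of 20
    iptables -A INPUT -p tcp --dport 443 --syn \
        -m hashlimit --hashlimit-name tor-syn --hashlimit-mode srcip \
        --hashlimit-above 10/minute --hashlimit-burst 20 -j DROP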

This might not be enough for tor to survive, but your system should.
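
The kernel messages in the logs ("too many orphaned sockets", SYN cookies
on port 443) mean the kernel is hitting its own ceilings, which is also
where Conrad's net.core.somaxconn question below points. A sysctl sketch,
with illustrative rather than tuned values:

    # /etc/sysctl.d/99-tor.conf -- apply with: sysctl --system
    # accept-queue backlog cap; tor's listen() backlog is clipped to this
    net.core.somaxconn = 1024
    # half-open SYN queue; overflow is what triggers the SYN-cookie message
    net.ipv4.tcp_max_syn_backlog = 2048
    # ceiling on orphaned sockets (closed by the app, not yet freed)
    net.ipv4.tcp_max_orphans = 65536

Note each orphan can pin up to ~64 KB of unswappable kernel memory, so
raise tcp_max_orphans with your RAM in mind.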
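
And to see what the "Check SNMP counters" hint in the log refers to: the
counters live in /proc/net/netstat, and with nstat from iproute2 something
like this shows them:

    # -a: ignore history (absolute counters), -z: include zero counters
    nstat -az | grep -i -e Syncookies -e ListenDrops

A steadily climbing SyncookiesSent means the SYN queue really is
overflowing rather than a one-off spike.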

There have been quite a few mails about that recently.

Regards

On Thu, 25 Jan 2018 at 20:30 Conrad Rockenhaus <conrad at rockenhaus.com> wrote:

> Have you checked your kernel socket backlog limit? (net.core.somaxconn)
>
> > On Jan 25, 2018, at 1:19 PM, Christian Krbusek <christian at ph3x.at> wrote:
> >
> > Dear list,
> >
> > recently I'm facing issues with my 4 middle relays:
> >
> > B1E889E8EA604D81326F5071E0ABF1B2B16D5DAB
> > FC9AC8EA0160D88BCCFDE066940D7DD9FA45495B
> > BDB9EBCB1C2A424973A240C91EEC082C3EB61626
> > ACD889D86E02EDDAB1AFD81F598C0936238DC6D0
> >
> > All of them are running v0.3.2.9 on Debian stretch (kvm, 2 cores, 2 GB RAM) and stopping with the log entries below. I can't remember when this started, but I'm quite sure it did not happen a few months ago.
> >
> > There have not been any recent changes on the hardware node or on the VMs.
> >
> > All of them are running the "same" configuration created by ansible-relayor - one example below.
> >
> > The VMs are still running at the time the relays go offline, and even a network restart doesn't fix the issue, so I have to restart the VM to get the relays up and running again.
> >
> > Any hints would be much appreciated; if any more info is needed, just ask.
> >
> > Cheers,
> > Christian
> >
> >>>>>> CONFIG START <<<<<
> >
> > # ansible-relayor generated torrc configuration file
> > # Note: manual changes will be OVERWRITTEN on the next ansible-playbook run
> >
> > OfflineMasterKey 1
> > RunAsDaemon 0
> > Log notice syslog
> > OutboundBindAddress 188.118.198.244
> > SocksPort 0
> > User _tor-188.118.198.244_443
> > DataDirectory /var/lib/tor-instances/188.118.198.244_443
> > ORPort 188.118.198.244:443
> >
> >
> > DirPort 188.118.198.244:80
> >
> > SyslogIdentityTag 188.118.198.244_443
> >
> > ControlSocket /var/lib/tor-instances/188.118.198.244_443/controlsocket
> >
> > Nickname ph3x
> >
> > Sandbox 1
> > ExitRelay 0
> > ExitPolicy reject *:*
> >
> > ContactInfo 4096R/0x73538126032AD297 Christian Krbusek <abuse AT ph3x DOT at> - 1NtneNxb8awwH8woJ7oL7UVj1SJDpAn8Kc
> > RelayBandwidthRate 15 MB
> > RelayBandwidthBurst 30 MB
> > NumCPUs 2
> >
> > MyFamily acd889d86e02eddab1afd81f598c0936238dc6d0,b1e889e8ea604d81326f5071e0abf1b2b16d5dab,bdb9ebcb1c2a424973a240c91eec082c3eb61626,fc9ac8ea0160d88bccfde066940d7dd9fa45495b
> > # end of torrc
> >
> >>>>>> CONFIG END <<<<<
> >
> >>>>>> LOGS START <<<<<
> >
> > at-vie01a-tor01.ph3x.at:
> >
> > Jan 25 17:58:24 at-vie01a-tor01 Tor-86.59.119.83_443[540]: Since startup, we have initiated 0 v1 connections, 0 v2 connections, 0 v3 connections, and 8629 v4 connections; and received 60 v1 connections, 4568 v2 connections, 9249 v3 connections, and 403685 v4 connections.
> > Jan 25 18:00:17 at-vie01a-tor01 kernel: [64938.876706] TCP: too many orphaned sockets
> > Jan 25 18:01:06 at-vie01a-tor01 kernel: [64987.773119] TCP: too many orphaned sockets
> > Jan 25 18:01:06 at-vie01a-tor01 kernel: [64987.774838] TCP: too many orphaned sockets
> > Jan 25 18:01:06 at-vie01a-tor01 kernel: [64987.786154] TCP: too many orphaned sockets
> > Jan 25 18:01:06 at-vie01a-tor01 kernel: [64987.786189] TCP: too many orphaned sockets
> > Jan 25 18:01:06 at-vie01a-tor01 kernel: [64987.786235] TCP: too many orphaned sockets
> > Jan 25 18:01:06 at-vie01a-tor01 kernel: [64987.786250] TCP: too many orphaned sockets
> > Jan 25 18:01:06 at-vie01a-tor01 kernel: [64987.786315] TCP: too many orphaned sockets
> > Jan 25 18:01:06 at-vie01a-tor01 kernel: [64987.786333] TCP: too many orphaned sockets
> > Jan 25 18:01:06 at-vie01a-tor01 kernel: [64987.786391] TCP: too many orphaned sockets
> > Jan 25 18:01:06 at-vie01a-tor01 kernel: [64987.786411] TCP: too many orphaned sockets
> > Jan 25 18:01:15 at-vie01a-tor01 kernel: [64997.053055] net_ratelimit: 2 callbacks suppressed
> > Jan 25 18:01:15 at-vie01a-tor01 kernel: [64997.053056] TCP: too many orphaned sockets
> > Jan 25 18:01:16 at-vie01a-tor01 kernel: [64998.301030] TCP: too many orphaned sockets
> > Jan 25 18:01:16 at-vie01a-tor01 kernel: [64998.365099] TCP: too many orphaned sockets
> > Jan 25 18:01:17 at-vie01a-tor01 kernel: [64998.557033] TCP: too many orphaned sockets
> > Jan 25 18:01:18 at-vie01a-tor01 kernel: [64999.805082] TCP: too many orphaned sockets
> > Jan 25 18:01:27 at-vie01a-tor01 kernel: [65009.277179] TCP: too many orphaned sockets
> > Jan 25 18:01:28 at-vie01a-tor01 kernel: [65009.533307] TCP: too many orphaned sockets
> > Jan 25 18:01:28 at-vie01a-tor01 kernel: [65009.533678] TCP: too many orphaned sockets
> > Jan 25 18:01:28 at-vie01a-tor01 kernel: [65009.533697] TCP: too many orphaned sockets
> > Jan 25 18:01:28 at-vie01a-tor01 kernel: [65009.533820] TCP: too many orphaned sockets
> > Jan 25 18:01:28 at-vie01a-tor01 kernel: [65009.534615] TCP: too many orphaned sockets
> > Jan 25 18:01:28 at-vie01a-tor01 kernel: [65009.535200] TCP: too many orphaned sockets
> > Jan 25 18:01:28 at-vie01a-tor01 kernel: [65009.535460] TCP: too many orphaned sockets
> > Jan 25 18:01:28 at-vie01a-tor01 kernel: [65009.535708] TCP: too many orphaned sockets
> > Jan 25 18:01:28 at-vie01a-tor01 kernel: [65009.536685] TCP: too many orphaned sockets
> > Jan 25 18:01:34 at-vie01a-tor01 kernel: [65015.677629] net_ratelimit: 11 callbacks suppressed
> > Jan 25 18:01:34 at-vie01a-tor01 kernel: [65015.677661] TCP: too many orphaned sockets
> > Jan 25 18:01:34 at-vie01a-tor01 kernel: [65015.677910] TCP: too many orphaned sockets
> > Jan 25 18:01:34 at-vie01a-tor01 kernel: [65015.677989] TCP: too many orphaned sockets
> > Jan 25 18:01:34 at-vie01a-tor01 kernel: [65015.680478] TCP: too many orphaned sockets
> > Jan 25 18:01:34 at-vie01a-tor01 kernel: [65015.680857] TCP: too many orphaned sockets
> > Jan 25 18:01:34 at-vie01a-tor01 kernel: [65015.682767] TCP: too many orphaned sockets
> > Jan 25 18:01:34 at-vie01a-tor01 kernel: [65015.682832] TCP: too many orphaned sockets
> > Jan 25 18:01:34 at-vie01a-tor01 kernel: [65015.682953] TCP: too many orphaned sockets
> > Jan 25 18:01:34 at-vie01a-tor01 kernel: [65015.683033] TCP: too many orphaned sockets
> > Jan 25 18:01:34 at-vie01a-tor01 kernel: [65015.683041] TCP: too many orphaned sockets
> > Jan 25 18:02:13 at-vie01a-tor01 kernel: [65054.590190] net_ratelimit: 6 callbacks suppressed
> > Jan 25 18:02:13 at-vie01a-tor01 kernel: [65054.590194] TCP: too many orphaned sockets
> > Jan 25 18:02:13 at-vie01a-tor01 kernel: [65054.597927] TCP: too many orphaned sockets
> > Jan 25 18:05:15 at-vie01a-tor01 kernel: [65237.086196] TCP: request_sock_TCP: Possible SYN flooding on port 443. Sending cookies. Check SNMP counters.
> > Jan 25 18:17:01 at-vie01a-tor01 CRON[24416]: (root) CMD (   cd / && run-parts --report /etc/cron.hourly)
> > Jan 25 18:57:09 at-vie01a-tor01 systemd[1]: Stopped target Timers.
> >
> >
> > at-vie01a-tor02.ph3x.at:
> >
> > Jan 25 10:17:01 at-vie01a-tor02 CRON[19123]: (root) CMD (   cd / && run-parts --report /etc/cron.hourly)
> > Jan 25 11:17:01 at-vie01a-tor02 CRON[23992]: (root) CMD (   cd / && run-parts --report /etc/cron.hourly)
> > Jan 25 11:59:10 at-vie01a-tor02 Tor-86.59.119.88_443[509]: Heartbeat: Tor's uptime is 11:59 hours, with 38342 circuits open. I've sent 218.94 GB and received 192.87 GB.
> > Jan 25 11:59:10 at-vie01a-tor02 Tor-86.59.119.88_443[509]: Circuit handshake stats since last time: 88566/90206 TAP, 12299213/12353484 NTor.
> > Jan 25 11:59:10 at-vie01a-tor02 Tor-86.59.119.88_443[509]: Since startup, we have initiated 0 v1 connections, 0 v2 connections, 0 v3 connections, and 6295 v4 connections; and received 60 v1 connections, 3085 v2 connections, 6040 v3 connections, and 300693 v4 connections.
> > Jan 25 12:17:01 at-vie01a-tor02 CRON[28856]: (root) CMD (   cd / && run-parts --report /etc/cron.hourly)
> > Jan 25 12:18:28 at-vie01a-tor02 kernel: [44384.957658] TCP: request_sock_TCP: Possible SYN flooding on port 443. Sending cookies. Check SNMP counters.
> > Jan 25 13:17:01 at-vie01a-tor02 CRON[28998]: (root) CMD (   cd / && run-parts --report /etc/cron.hourly)
> > Jan 25 14:17:01 at-vie01a-tor02 CRON[29061]: (root) CMD (   cd / && run-parts --report /etc/cron.hourly)
> > Jan 25 14:36:57 at-vie01a-tor02 ntpd[490]: Soliciting pool server 185.144.161.170
> > Jan 25 14:37:05 at-vie01a-tor02 ntpd[490]: Soliciting pool server 2a02:1748:0:1500:3::2005
> > Jan 25 14:37:16 at-vie01a-tor02 ntpd[490]: error resolving pool 1.debian.pool.ntp.org: Temporary failure in name resolution (-3)
> > Jan 25 14:37:36 at-vie01a-tor02 ntpd[490]: error resolving pool 0.debian.pool.ntp.org: Temporary failure in name resolution (-3)
> >
> >
> > at-vie01a-tor03.ph3x.at:
> >
> > Jan 24 13:21:18 at-vie01a-tor03 systemd[1]: apt-daily.timer: Adding 9h 35min 45.841113s random time.
> > Jan 24 13:24:24 at-vie01a-tor03 kernel: [62694.054136] TCP: request_sock_TCP: Possible SYN flooding on port 443. Sending cookies. Check SNMP counters.
> > Jan 24 13:59:57 at-vie01a-tor03 Tor-78.142.140.242_443[517]: Heartbeat: Tor's uptime is 17:59 hours, with 0 circuits open. I've sent 314.59 GB and received 317.28 GB.
> > Jan 24 13:59:57 at-vie01a-tor03 Tor-78.142.140.242_443[517]: Circuit handshake stats since last time: 35924/36525 TAP, 10006455/10027375 NTor.
> > Jan 24 13:59:57 at-vie01a-tor03 Tor-78.142.140.242_443[517]: Since startup, we have initiated 0 v1 connections, 0 v2 connections, 0 v3 connections, and 11906 v4 connections; and received 47 v1 connections, 4092 v2 connections, 8578 v3 connections, and 409960 v4 connections.
> > Jan 24 14:17:01 at-vie01a-tor03 CRON[21604]: (root) CMD (   cd / && run-parts --report /etc/cron.hourly)
> > Jan 24 15:17:01 at-vie01a-tor03 CRON[21670]: (root) CMD (   cd / && run-parts --report /etc/cron.hourly)
> > Jan 24 15:41:25 at-vie01a-tor03 ntpd[492]: error resolving pool 1.debian.pool.ntp.org: Temporary failure in name resolution (-3)
> >
> >
> > at-vie01a-tor04.ph3x.at:
> >
> > Jan 25 12:00:52 at-vie01a-tor04 Tor-188.118.198.244_443[510]: Heartbeat: Tor's uptime is 11:59 hours, with 36596 circuits open. I've sent 192.29 GB and received 194.32 GB.
> > Jan 25 12:00:52 at-vie01a-tor04 Tor-188.118.198.244_443[510]: Circuit handshake stats since last time: 84713/86100 TAP, 11892935/11941101 NTor.
> > Jan 25 12:00:52 at-vie01a-tor04 Tor-188.118.198.244_443[510]: Since startup, we have initiated 0 v1 connections, 0 v2 connections, 0 v3 connections, and 5938 v4 connections; and received 44 v1 connections, 2949 v2 connections, 5395 v3 connections, and 287856 v4 connections.
> > Jan 25 12:17:01 at-vie01a-tor04 CRON[29820]: (root) CMD (   cd / && run-parts --report /etc/cron.hourly)
> > Jan 25 12:29:17 at-vie01a-tor04 kernel: [44936.694001] hrtimer: interrupt took 12598111 ns
> > Jan 25 13:17:01 at-vie01a-tor04 CRON[2275]: (root) CMD (   cd / && run-parts --report /etc/cron.hourly)
> > Jan 25 14:17:01 at-vie01a-tor04 CRON[7180]: (root) CMD (   cd / && run-parts --report /etc/cron.hourly)
> > Jan 25 14:27:14 at-vie01a-tor04 kernel: [52013.316026] TCP: request_sock_TCP: Possible SYN flooding on port 443. Sending cookies. Check SNMP counters.
> > Jan 25 15:17:01 at-vie01a-tor04 CRON[7920]: (root) CMD (   cd / && run-parts --report /etc/cron.hourly)
> > Jan 25 16:17:01 at-vie01a-tor04 CRON[8023]: (root) CMD (   cd / && run-parts --report /etc/cron.hourly)
> > Jan 25 16:42:38 at-vie01a-tor04 ntpd[492]: Soliciting pool server 81.16.38.161
> > Jan 25 16:42:46 at-vie01a-tor04 ntpd[492]: error resolving pool 3.debian.pool.ntp.org: Temporary failure in name resolution (-3)
> >
> >>>>>> LOGS END <<<<<
> >

