My little guard node (855BC2DABE24C861CD887DB9B2E950424B49FC34) has suddenly started to behave strangely. iftop (my "bandwidth monitor") shows twice as much sent traffic as received traffic. The traffic is spread across a lot of IP addresses; no single address stands out as receiving very much: https://imgur.com/a/dAUzc
Given the last few days of DDoS attacks (my node is still targeted by them) I naturally assume this is another attack. First it was lots of connections (mitigated with connection limits), then massive amounts of memory per circuit (MaxMemInQueues fixes that), and now this.
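(For reference, the mitigations mentioned above look roughly like this; 9001 is an assumed ORPort and the numbers are only the kind of values I mean, not exact settings:)

    # iptables: cap simultaneous connections per source IP to the ORPort.
    iptables -A INPUT -p tcp --dport 9001 --syn -m connlimit --connlimit-above 4 -j DROP

    # torrc: cap the memory Tor may spend on queued cells and buffers.
    MaxMemInQueues 512 MB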
Could this be a third attack vector, or am I seeing something "normal" (though I check my bandwidth often and I've never seen this before)? My node recently got the HSDir flag after the last crash. Could the network be starved for HSDir machines, and that's what I'm seeing?
Being a Linux noob, I don't know how to figure out exactly what kind of traffic this is. Suggestions gratefully accepted.
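(A rough starting point for that, assuming the relay's ORPort is 9001; ports and tool flags may differ per system:)

    # Show per-connection traffic with port numbers and raw IPs, so the
    # ORPort traffic can be told apart from anything else on the box.
    iftop -n -P

    # Count established connections to the ORPort (the first output line is a header).
    ss -tn state established '( sport = :9001 )' | wc -l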
On 21 Dec 2017, at 06:29, Logforme m7527@abc.se wrote:
> My little guard node (855BC2DABE24C861CD887DB9B2E950424B49FC34) has suddenly started to behave strangely. iftop (my "bandwidth monitor") shows twice as much sent traffic as received traffic. The traffic is spread across a lot of IP addresses; no single address stands out as receiving very much: https://imgur.com/a/dAUzc
> Given the last few days of DDoS attacks (my node is still targeted by them) I naturally assume this is another attack. First it was lots of connections (mitigated with connection limits), then massive amounts of memory per circuit (MaxMemInQueues fixes that), and now this.
> Could this be a third attack vector, or am I seeing something "normal" (though I check my bandwidth often and I've never seen this before)? My node recently got the HSDir flag after the last crash. Could the network be starved for HSDir machines, and that's what I'm seeing?
This is normal for HSDirs and directory mirrors, because the requests are smaller than the responses.
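(A rough way to see that asymmetry, using a placeholder relay address and DirPort rather than this relay's: the request is a short HTTP GET, but the directory document that comes back is megabytes.)

    # Fetch the current consensus from a relay's DirPort and measure its size.
    curl -s "http://203.0.113.1:9030/tor/status-vote/current/consensus" | wc -c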
> Being a Linux noob, I don't know how to figure out exactly what kind of traffic this is. Suggestions gratefully accepted.
Check the logs, but they won't tell you much, and that's deliberate.
T
> Check the logs, but they won't tell you much, and that's deliberate.
So I checked the tor log.
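(For anyone wanting to do the same, something like this pulls out the heartbeat lines, assuming Tor logs to /var/log/tor/log via a "Log notice file" line in torrc; the path is a guess and may differ:)

    # Heartbeat lines appear every 6 hours at notice level by default.
    grep "Heartbeat:" /var/log/tor/log | tail -n 4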
First part is before the "weirdness":

Dec 20 16:00:08.000 [notice] Heartbeat: Tor's uptime is 4 days 23:59 hours, with 36191 circuits open. I've sent 3686.92 GB and received 3646.75 GB.
Dec 20 16:00:08.000 [notice] Circuit handshake stats since last time: 160437/160437 TAP, 5003782/5003782 NTor.
Dec 20 16:00:08.000 [notice] Since startup, we have initiated 0 v1 connections, 0 v2 connections, 1 v3 connections, and 102511 v4 connections; and received 2151 v1 connections, 29819 v2 connections, 46331 v3 connections, and 683484 v4 connections.

Next heartbeat, during the weirdness:

Dec 20 22:00:08.000 [notice] Heartbeat: Tor's uptime is 5 days 5:59 hours, with 233634 circuits open. I've sent 3908.13 GB and received 3832.44 GB.
Dec 20 22:00:08.000 [notice] Circuit handshake stats since last time: 564576/564576 TAP, 18285622/18285622 NTor.
Dec 20 22:00:08.000 [notice] Since startup, we have initiated 0 v1 connections, 0 v2 connections, 1 v3 connections, and 107666 v4 connections; and received 2309 v1 connections, 31585 v2 connections, 49188 v3 connections, and 711324 v4 connections.
Note that the number of open circuits has gone up from a relatively normal 36191 to a massive 233634. Definitely not normal. And this is with my connection limits in place in iptables.
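(To sanity-check that, something along these lines confirms the connlimit rules are loaded and counts distinct peers on the ORPort; 9001 is an assumed ORPort and the awk step is IPv4-oriented:)

    # Show the connlimit rules and their packet counters.
    iptables -L INPUT -v -n | grep connlimit

    # Rough count of distinct peer IPs connected to the ORPort.
    ss -tn state established '( sport = :9001 )' | awk 'NR>1 {sub(/:[0-9]+$/, "", $4); print $4}' | sort -u | wc -l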
The tor process now uses about twice as much CPU as it normally does.
I think the attacker has found a new way "in".
On 21 Dec 2017, at 08:57, Logforme m7527@abc.se wrote:
>> Check the logs, but they won't tell you much, and that's deliberate.
> So I checked the tor log.
> First part is before the "weirdness":
>
> Dec 20 16:00:08.000 [notice] Heartbeat: Tor's uptime is 4 days 23:59 hours, with 36191 circuits open. I've sent 3686.92 GB and received 3646.75 GB.
> Dec 20 16:00:08.000 [notice] Circuit handshake stats since last time: 160437/160437 TAP, 5003782/5003782 NTor.
> Dec 20 16:00:08.000 [notice] Since startup, we have initiated 0 v1 connections, 0 v2 connections, 1 v3 connections, and 102511 v4 connections; and received 2151 v1 connections, 29819 v2 connections, 46331 v3 connections, and 683484 v4 connections.
>
> Next heartbeat, during the weirdness:
>
> Dec 20 22:00:08.000 [notice] Heartbeat: Tor's uptime is 5 days 5:59 hours, with 233634 circuits open. I've sent 3908.13 GB and received 3832.44 GB.
> Dec 20 22:00:08.000 [notice] Circuit handshake stats since last time: 564576/564576 TAP, 18285622/18285622 NTor.
> Dec 20 22:00:08.000 [notice] Since startup, we have initiated 0 v1 connections, 0 v2 connections, 1 v3 connections, and 107666 v4 connections; and received 2309 v1 connections, 31585 v2 connections, 49188 v3 connections, and 711324 v4 connections.
>
> Note that the number of open circuits has gone up from a relatively normal 36191 to a massive 233634. Definitely not normal. And this is with my connection limits in place in iptables.
> The tor process now uses about twice as much CPU as it normally does.
> I think the attacker has found a new way "in".
Incoming connection limits aren't entirely effective against this attack, and never were. We're working on other mitigations.
T
-- Tim / teor
PGP C855 6CED 5D90 A0C5 29F6 4D43 450C BA7F 968F 094B
ricochet:ekmygaiu4rzgsk6n