[tor-bugs] #24902 [Core Tor/Tor]: Denial of Service mitigation subsystem

Tor Bug Tracker & Wiki blackhole at torproject.org
Wed Mar 14 21:31:46 UTC 2018


#24902: Denial of Service mitigation subsystem
-------------------------------------------------+-------------------------
 Reporter:  dgoulet                              |          Owner:  dgoulet
     Type:  enhancement                          |         Status:
                                                 |  reopened
 Priority:  Very High                            |      Milestone:  Tor:
                                                 |  0.3.3.x-final
Component:  Core Tor/Tor                         |        Version:
 Severity:  Normal                               |     Resolution:
 Keywords:  tor-dos, tor-relay, review-          |  Actual Points:
  group-30, 029-backport, 031-backport,          |
  032-backport, review-group-31, SponsorV        |
Parent ID:                                       |         Points:
 Reviewer:  arma                                 |        Sponsor:
-------------------------------------------------+-------------------------
Changes (by hurus):

 * status:  closed => reopened
 * resolution:  fixed =>


Comment:

 It is a crazy decision to merge those changes.

 Some thoughts:

 1. Since the release of 0.2.9.15, 0.3.1.10, and 0.3.2.10, I, as an
 extensive user of Tor, have watched my internal scripts that scan the
 Tor network and monitor onion sites report frequent, unexpected broken
 requests.

 It looks like this: you browse the onion site you want, and then
 suddenly, when you open another page or refresh, the connection drops
 immediately. Tor Browser shows an 'Unable to connect' message in this
 case.

 In my tests this currently happens on around 1% of requests, and I
 suspect that number will be much higher once more relays update or the
 consensus changes. A minimal sketch of the kind of probe behind these
 numbers appears at the end of this item.

 I suspect this behaviour will make it impossible for the Tor network to
 keep its usage patterns. Users will not use a system that randomly
 drops connections. Before this, Tor was very reliable: in years of
 usage I never noticed a dropped connection that was not handled by
 proper automatic reconnection. Regardless of how loaded the relays
 were, everything was smooth.
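
 The sketch below shows the kind of probe I mean. It assumes a local
 tor client with its SOCKS port on 9050 and the Python requests package
 with SOCKS support (pip install requests[socks]); the onion address is
 a placeholder, not a real service.

    import requests

    PROXIES = {"http": "socks5h://127.0.0.1:9050",
               "https": "socks5h://127.0.0.1:9050"}
    URL = "http://exampleonionaddress.onion/"  # placeholder address

    attempts, failures = 200, 0
    for _ in range(attempts):
        try:
            requests.get(URL, proxies=PROXIES, timeout=60)
        except requests.RequestException:
            failures += 1  # a dropped circuit surfaces as a failure

    print(f"{failures}/{attempts} requests failed "
          f"({100.0 * failures / attempts:.1f}%)")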

 2. A feature that drops connections is very close to a 'shadow ban'.
 We can end up with the Tor network censoring the Tor network. The Tor
 network was made to fight censorship, but 'shadow bans' and 'connection
 drops' point in the opposite direction: towards censorship and
 selectivity over who is allowed to query the Tor network and who is
 not. This is not a path the Tor Project should follow.

 3. We may end up in a situation where relays drop all or most
 connections, and a Tor client is never able to establish connections at
 a rate above some absurdly low minimum (e.g. 5 requests per second).
 Such a rate is not suitable for regular browsing. Even if we assume
 every web property is ideally optimized, there are still conditions
 that will lead to problems. Most web properties use roughly 5 to 200
 elements per page, loaded simultaneously; that usually means about 20
 constant concurrent connections and 20 to 500 requests per second per
 client. With the usual limits, nobody will load all elements of a web
 property correctly. Some back-of-the-envelope numbers follow this item.
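
 A quick worked example of the request rates involved; the element
 count and load window below are illustrative assumptions, not
 measurements.

    elements_per_page = 200    # a busy but not unusual page
    concurrent_connections = 20
    load_window_s = 1.0        # assume the browser fetches a page in ~1s

    peak_rps = elements_per_page / load_window_s
    print(f"{peak_rps:.0f} requests/s over "
          f"{concurrent_connections} connections")
    # -> 200 requests/s, i.e. 40x a hypothetical 5 requests/s cap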

 4. During testing there were signs that the word 'malicious' is used
 incorrectly. You count requests as 'malicious', but you have no clue
 whether they are malicious at all. You are most likely blocking regular
 browsing of high-volume or less optimized web properties while
 pretending to block something 'malicious'. I highly recommend removing
 that word, as it is incorrect and does not reflect reality.

 5. Are per-IPv4-address limits really the way to go in the modern
 world? We live in a world of IPv4 shortage, a world of NAT and
 potentially tens of clients behind a single IPv4 address. By limiting
 per IPv4 address we only limit real users who, for whatever reason,
 send too many requests from their shared IPv4 address. At the start of
 the century this was the way to go; nowadays it is a no-op against
 attackers and suicidal for any online service. A quick illustration of
 the NAT effect follows this item.
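
 Illustrative arithmetic for the shared-address point; the user count
 and the per-address allowance are assumptions, not tor's actual
 parameters.

    users_behind_nat = 50          # e.g. carrier-grade NAT or a campus
    per_address_allowance = 100.0  # hypothetical requests/s per IPv4

    per_user_share = per_address_allowance / users_behind_nat
    print(f"each user effectively gets {per_user_share:.1f} requests/s")
    # -> 2.0 requests/s: the limit punishes the crowd behind the shared
    #    address, while an attacker simply spreads over many addresses.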

 6. Are per-IPv6-address limits really the way to go in the modern
 world? An attacker can plausibly use billions of IPv6 addresses.
 Limiting activity per IPv6 address will not affect DoS behaviour in any
 way; on the contrary, it will refuse regular clients who are not even
 trying to bypass the mitigations. A sketch of how cheaply an attacker
 can rotate addresses inside a single /64 follows this item.
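
 A sketch of why per-address IPv6 limits are trivial to bypass: a
 single customer /64 already contains 2**64 addresses. The prefix below
 is the RFC 3849 documentation range, used purely as an example.

    import ipaddress
    import random

    prefix = ipaddress.IPv6Network("2001:db8::/64")
    print(f"{prefix.num_addresses} addresses in one /64")  # ~1.8e19

    def fresh_source(net):
        # Each random interface ID looks like a brand-new client to a
        # limiter keyed on the full 128-bit address.
        return net.network_address + random.getrandbits(128 - net.prefixlen)

    for _ in range(3):
        print(fresh_source(prefix))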

 7. Why do we limit usage patterns like crawling, or high-load projects
 that try to lower costs by using a single IP address? Are those usage
 patterns not allowed on Tor? I doubt that. The current set of patches
 forces any service that uses the Tor network extensively to build
 complicated, expensive systems just to use the Tor network for anything
 high-volume.


 All in all, I think these changes are horrible and should be reverted
 and reconsidered. In my opinion it is one of the most harmful commits
 in years of Tor development. This set of patches looks like it was made
 for a different world, not the one we live in; it is disconnected from
 reality. The only correct way to limit is by a composite set of unique
 identifiers, for example: IP address + target destination + Tor node
 (requester) identifier (if any). It should never be the IP address
 alone, and it should never be forced rather than voluntary. It is not
 for you to decide whether a relay blocks requests; that is for relay
 operators to decide, and for the other participants in the Tor network
 to decide whether they will communicate through relays that impose such
 limits. A sketch of the composite-key limiting I have in mind follows.
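
 A minimal sketch of that composite-key limiting: a token bucket keyed
 on (source IP, target destination, requester identifier). The names
 and thresholds are hypothetical, not anything from the tor codebase.

    import time
    from collections import defaultdict

    RATE = 50.0    # refill rate, tokens per second (hypothetical)
    BURST = 100.0  # bucket capacity (hypothetical)

    class Bucket:
        def __init__(self):
            self.tokens = BURST
            self.stamp = time.monotonic()

        def allow(self):
            now = time.monotonic()
            self.tokens = min(BURST,
                              self.tokens + (now - self.stamp) * RATE)
            self.stamp = now
            if self.tokens >= 1.0:
                self.tokens -= 1.0
                return True
            return False

    buckets = defaultdict(Bucket)

    def allow_request(src_ip, destination, requester_id=None):
        # Rate-limit the full tuple, never the source address alone.
        return buckets[(src_ip, destination, requester_id)].allow()

 A relay operator could then choose their own RATE and BURST, or switch
 the limiter off entirely, which keeps the mechanism voluntary.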

--
Ticket URL: <https://trac.torproject.org/projects/tor/ticket/24902#comment:79>
Tor Bug Tracker & Wiki <https://trac.torproject.org/>
The Tor Project: anonymity online


More information about the tor-bugs mailing list