<div class="gmail_quote">I'm sure you've thought of this, but adversaries can replicate any<br>
property we use to decide who gets rate-limited. In this case, simply making<br>
yourself a slow relay and routing your client traffic through yourself<br>
(being your own first hop) seems to get around the limitation. -Damian<br>
<div><div></div><div class="h5"><br>
On Sun, Dec 13, 2009 at 5:23 PM, Roger Dingledine <<a href="mailto:arma@mit.edu">arma@mit.edu</a>> wrote:<br>
> Hi folks (Nick in particular),<br>
><br>
> I've been pondering other performance improvements. One of them is to<br>
> rate-limit client connections as they enter the network. Rate limiting<br>
> in the Tor client itself would work better, but it's not a very stable<br>
> equilibrium -- it encourages people to switch to security disasters<br>
> like tortunnel.<br>
><br>
> So I'm looking at rate-limiting non-relay OR connections. Here's the<br>
> patch:<br>
><br>
> diff --git a/src/or/connection_or.c b/src/or/connection_or.c<br>
> index aa26bf8..3f984bf 100644<br>
> --- a/src/or/connection_or.c<br>
> +++ b/src/or/connection_or.c<br>
> @@ -333,10 +333,24 @@ connection_or_init_conn_from_address(or_connection_t *conn,<br>
> {<br>
> or_options_t *options = get_options();<br>
> routerinfo_t *r = router_get_by_digest(id_digest);<br>
> - conn->bandwidthrate = (int)options->BandwidthRate;<br>
> - conn->read_bucket = conn->bandwidthburst = (int)options->BandwidthBurst;<br>
> connection_or_set_identity_digest(conn, id_digest);<br>
><br>
> + if (r || router_get_consensus_status_by_id(id_digest)) {<br>
> + /* It's in the consensus, or we have a descriptor for it meaning it<br>
> + * was probably in a recent consensus. It's a recognized relay:<br>
> + * give it full bandwidth. */<br>
> + conn->bandwidthrate = (int)options->BandwidthRate;<br>
> + conn->read_bucket = conn->bandwidthburst = (int)options->BandwidthBurst;<br>
> + } else { /* Not a recognized relay. Squeeze it down based on the<br>
> + * suggested bandwidth parameters in the consensus. */<br>
> + conn->bandwidthrate =<br>
> + (int)networkstatus_get_param(NULL, "bwconnrate",<br>
> + (int)options->BandwidthRate);<br>
> + conn->read_bucket = conn->bandwidthburst =<br>
> + (int)networkstatus_get_param(NULL, "bwconnburst",<br>
> + (int)options->BandwidthBurst);<br>
> + }<br>
> +<br>
> conn->_base.port = port;<br>
> tor_addr_copy(&conn->_base.addr, addr);<br>
> tor_addr_copy(&conn->real_addr, addr);<br>
><br>
> As you can see, I'm making it configurable inside the consensus, so we<br>
> can experiment with it rather than rolling it out and then changing our<br>
> minds later. I don't have a good sense of whether it will be a good move,<br>
> but the only way I can imagine to find out is to try it.<br>
><br>
> I'm imagining trying it out with a rate of 20KB/s and a burst of 500KB.<br>
><br>
> As a nice side effect, we'll also be rolling out the infrastructure for<br>
> one defense against Sambuddho's "approximating a global passive adversary"<br>
> congestion attack, if the attack ever gets precise enough that we can<br>
> try out our defense and compare.<br>
><br>
> In the distant future, where we've deployed a design where not all relays<br>
> get to see the unified networkstatus consensus, we'll have to stop voting<br>
> for a modified bwconnrate and bwconnburst, since the relays won't be<br>
> able to know (at least this way) if it's a genuine relay. Maybe part<br>
> of that design will be to present a signed credential proving you're a<br>
> public relay. In any case, we have a way to disable this feature when<br>
> that distant future arrives.<br>
><br>
> We're also squeezing down bridge relays with this feature, since the public<br>
> relays can't tell the difference between a bridge and a client. At some<br>
> point we should make sure that bridges send their client traffic over<br>
> different TCP connections than their own traffic. That's a separate<br>
> discussion though.<br>
><br>
> It shouldn't impact bandwidth bootstrapping tests, since those aim to<br>
> spread 500KB of traffic across 4 circuits.<br>
><br>
> It would impact Mike's bwauthority tests. We'd want to make an exception<br>
> for those Tors. I think we'd leave the torperf deployments alone, since<br>
> after all their goal is to measure "realistic" client performance.<br>
><br>
> It could also impact initial directory info bootstrapping -- if you try<br>
> to fetch 1MB from a particular dir mirror, it would slow down the second<br>
> half of that download. We'd want to keep an eye on how much that changes.<br>
><br>
> A more thorough solution would be to rate-limit all the OR conns coming<br>
> from a particular non-relay into the same bucket, to prevent people<br>
> getting around the limits by opening multiple TCP connections. But it's<br>
> actually not so easy to open multiple conns to the same destination in<br>
> Tor; plus I'm aiming to solve this for the general case where people<br>
> are overloading relays and don't even know it's a bad thing.<br>
><br>
> My main concern here is that I wonder if we are being thorough enough at<br>
> detecting "is a relay". It checks the consensus and the descriptor cache<br>
> currently. So if the authorities think you're not Running, they won't put<br>
> you in the consensus, and no relays will hear about you. If you go up and<br>
> down, relays that serve dirport info will have your descriptor cached,<br>
> so they'll recognize you so long as you were around in the past day or so.<br>
><br>
> Relays that don't serve dirport info will stop fetching descriptors,<br>
> but they'll continue to fetch the consensus. So they'll still mostly work.<br>
><br>
> Are there any other cases that are going to be a problem? Are there better<br>
> (simple, easy to deploy soon) ways to decide if the peer is a relay?<br>
><br>
> --Roger<br>
><br>
><br>
</div></div></div><br>