Effect of Tor window size on performance

Roger Dingledine arma at mit.edu
Fri Feb 13 22:23:41 UTC 2009


On Wed, Feb 04, 2009 at 10:25:31AM +0100, Csaba Kiraly wrote:
> http://disi.unitn.it/locigno/preprints/TR-DISI-08-041.pdf
[snip]
> Another thing that comes to my mind is that a better separation of 
> signaling (directory lookup, circuit setup, etc.) and data path would be 
> beneficial. This would ease things such as experimenting with DTLS or 
> other transport methods on the overlay tunnel, and experimenting with 
> different solutions end-to-end.

True. But on the other hand, if we separate them on the wire then we
are giving up some potential anonymity protections that we might get by
blending them together.

I've heard from a few people studying the "website fingerprinting"
attack (see #1 on https://www.torproject.org/volunteer#Research) that
Tor's directory fetches confuse their statistics. Whether it's something
that could be easily distinguished and removed from their statistics is
an open question. My guess is that it's not confusing the attack very
much, so we shouldn't be too stubborn about keeping everything blended
together. But "more research remains", as they say.

> >So the next question is an implementation one. Right now the window sizes
> >are hard-coded at both ends. I've been meaning to extend the protocol
> >so sendme cells have a number in them, and so the initial window sizes
> >are specified in the 'create' and 'created' cells for circuits and the
> >'begin' and 'connected' cells for streams. But we haven't really fleshed
> >out the details of those designs, or how they could be phased in and still
> >handle clients and relays that don't use (or know about) the numbers.
> >
> >So the big deployment question is: is it worth it to work on a design
> >for the above, and then either shrink the default window sizes or do
> >something smarter like variable window sizes, or should we just be
> >patient until a UDP-based solution is more within reach?
> >
> >One answer is that if you were interested in working on a design proposal
> >and patch, it would be much more likely to get implemented. :)
> >  
> We are working on verifying this. Our lab experiments (the ones in the
> tech report) show a huge gain in user-side delays, while throughput is
> untouched. Throughput is capped by a static window size, but I think
> the cap can be chosen better than it is now. There should also be a
> big gain in the memory consumption of ORs, although we haven't
> measured that yet. Since the Tor network is kind of overloaded all the
> time, memory usage should decrease almost linearly with the window size.
> 
> Currently we are verifying a one-sided modification of the circuit,
> i.e. whether one side of the connection can reduce the window size on
> its own, without explicitly notifying the other side. From the code it
> seems to me that this will work, and if so, phasing in a smaller
> window size in a new release should not be a problem.

Hey, that's a really good point. We don't have to change much code at
all if we want to use a *smaller* package window than we are allowed. We
simply pretend that the package window started out smaller, and either
side can do that independently!

Do you have a patch in mind for this? It would seem that if we
change init_circuit_base() so it sets circ->package_window to some
lower number x, and change connection_exit_begin_conn() so it sets
nstream->package_window to some even lower number y, that should be it.
The client side will send sendme's like normal, and the only difference
is that no more than y cells will ever be 'in flight' for a given stream.
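Concretely, the whole change might be about this big. Here's a sketch
written from memory, so treat the file locations, constant names, and
values as guesses to double-check rather than a reviewed patch:

  /* Hypothetical reduced starting windows -- the "x" and "y" above.
   * 100 and 50 are just the sample numbers from question 1 below. */
  #define CIRCWINDOW_START_REDUCED   100
  #define STREAMWINDOW_START_REDUCED  50

  /* In init_circuit_base() (circuitlist.c): pretend the circuit-level
   * package window started out smaller. Only the package window
   * shrinks; the deliver window stays at CIRCWINDOW_START, so the
   * other side never needs to know. */
  circ->package_window = CIRCWINDOW_START_REDUCED; /* was CIRCWINDOW_START, 1000 */
  circ->deliver_window = CIRCWINDOW_START;

  /* In connection_exit_begin_conn() (connection_edge.c): the same
   * trick for the exit's per-stream package window. */
  nstream->package_window = STREAMWINDOW_START_REDUCED; /* was STREAMWINDOW_START, 500 */
  nstream->deliver_window = STREAMWINDOW_START;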

So here are the questions we need to consider:

1) What values of x and y are best? Presumably as we reduce them from
1000 and 500, the performance gets better, but at some point they become
so low that performance gets worse (because each round-trip through the
network takes extra time). As sample numbers, if we start x at 100 and
y at 50, then we need another round-trip for any stream that delivers
more than 24900 bytes, and/or for every 49800 bytes on the circuit.
Should our choices be influenced by the 'typical' Tor stream that the
network is seeing right now? (Not that we have those numbers, but maybe
we should get them.) What other factors are there?
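(For anyone checking my arithmetic: those byte counts assume the usual
498 bytes of data per relay cell, i.e. the 509-byte cell payload minus
the 11-byte relay header, so

  y =  50 cells * 498 bytes/cell = 24900 bytes per stream window
  x = 100 cells * 498 bytes/cell = 49800 bytes per circuit window

before the sender has to stop and wait for the next sendme.)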

2) What are the effects of this patch in a hybrid Tor network? If some
exit relays use package windows of 1000, and newer ones use package
windows of 100, will the newer ones get 'clobbered' by the old ones?
That is, should we expect to see even more slow-down in the network
while the relays transition? Is there anything we can do about that,
since it will take a year or more for everybody to transition?

3) Should we reduce the package_windows on the client side too? It would
be easy to do. How do we decide whether it's worth it?
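
On question 3: if I'm remembering the right spot, the client-side
stream half would be the same sort of one-line change, e.g. (function
and variable names from memory, so double-check before trusting them):

  /* In connection_ap_handshake_send_begin() (connection_edge.c):
   * shrink the client's per-stream package window, mirroring the
   * exit-side change above; the deliver window again stays at
   * STREAMWINDOW_START. */
  ap_conn->package_window = STREAMWINDOW_START_REDUCED; /* was STREAMWINDOW_START, 500 */
  ap_conn->deliver_window = STREAMWINDOW_START;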

Anything I missed?

Thanks!
--Roger


