Padding, multiplexing and more

Marc Rennhard rennhard at tik.ee.ethz.ch
Thu Dec 19 13:37:05 UTC 2002


Hi folks,

I only found this list yesterday and spent this morning browsing
through its archive. Following the discussion from 'what padding
scheme to use' to 'we are doomed anyway against certain types of 
adversaries' was very interesting. At one point Andrei mentioned a
padding scheme that sends one packet out on every link whenever
traffic has to be sent on any of them, but he couldn't recall where
he had read it. Well,
we use exactly this scheme in the 'Anonymity Network', and it's well
described in 

http://www.tik.ee.ethz.ch/~rennhard/publications/dia_an.pdf

That's virtually the version I submitted to PET 2002, but it didn't
get in. It also contains a performance analysis with up to 400 users
(each issuing a web request every 5 seconds on average) and 6
anonymity proxies (~ORs). Using the padding scheme above and
using fixed bit-rates on the user-first hop links, it performed well
enough and it is probably very resistant against a global
eavesdropper (assuming the users never leave and communicate with a proxy
all the time to defeat long-term intersection attacks). Against
active attackers and compromised proxies, this is no longer the
case, I guess... 
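
For anyone who wants the rule spelled out concretely, here is a
minimal sketch of it in Python (Link, CELL_SIZE and send_padded are
illustrative names, not our actual code):

    CELL_SIZE = 512  # fixed-size cells, so dummies look like real traffic

    class Link:
        # One encrypted link to a neighboring proxy/OR.
        def __init__(self, peer):
            self.peer = peer
        def send(self, cell):
            pass  # stand-in: transmit the encrypted cell to self.peer

    def dummy_cell():
        # Encrypted in practice, so indistinguishable from real cells.
        return b"\x00" * CELL_SIZE

    def send_padded(links, out_link, cell):
        # The rule: whenever a real cell goes out on one link, a dummy
        # goes out on every other link, so a global eavesdropper sees
        # identical activity on all links.
        for link in links:
            link.send(cell if link is out_link else dummy_cell())

Combined with the fixed bit-rates on the user-first hop links, every
link shows the same pattern no matter who is actually sending.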


We also employ the two-layer multiplexing scheme just described by
Roger. We actually went for it right away and never really thought
about disadvantages compared to a fresh onion for each TCP-connection,
but I prefer the two-layer multiplexing approach. If we assume that 
web browsing (and accessing e-mail via the browser) will be the 
killer apps for OR-like systems (and I'm sure that's the case), then
let's look at some data about the sizes of web objects. Without going
into the details, they follow a Pareto distribution with an average
size of about 12KB. But the median is at about 1.5KB, so there are
very many small objects.
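
As a quick sanity check (my own back-of-the-envelope, assuming a
standard one-parameter Pareto fit), a 12KB mean together with a 1.5KB
median already pins down the shape parameter:

    # mean = a*xm/(a-1), median = xm * 2**(1/a), so the mean/median
    # ratio depends only on the shape a; solve ratio(a) == 8 by bisection.
    def ratio(a):
        return a / (a - 1.0) * 2.0 ** (-1.0 / a)

    lo, hi = 1.001, 5.0
    while hi - lo > 1e-9:
        mid = (lo + hi) / 2.0
        if ratio(mid) > 12.0 / 1.5:  # ratio() decreases in a
            lo = mid
        else:
            hi = mid
    print("shape parameter ~= %.2f" % lo)  # ~1.07: very heavy-tailed

A shape parameter that close to 1 means a few huge objects dominate
the mean, which fits the picture of very many small objects per page.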

Even with HTTP 1.1, getting www.cnn.com results in about 20
TCP-connections (I just tested), and taking 7ms (mentioned by Roger
before) for an OR to handle the onion, that's not insignificant. And
that's 7ms for each OR along each anonymous
connection. Assuming a network with 10 ORs (you won't get many more
in the beginning...) and 1000 users, and assuming each connection
uses 4 ORs, then each OR handles 1000 * 4 / 10 = 400 users on
average. With 7ms to handle an onion, each user can send a new onion
only every 400 * 7ms = 2800ms on average. That's just an example, but
handling the onions
could soon become a bottleneck assuming the ORs have very good network
connectivity (I'm speaking about at least some Mbits/s, otherwise the 
limit in number of users they can handle may be reached earlier due to
bandwidth limitations).
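
The same back-of-the-envelope in a few lines of Python, in case
anyone wants to play with the parameters (all numbers are the
assumptions above, not measurements):

    n_ors    = 10     # onion routers in the network
    n_users  = 1000   # concurrent users
    path_len = 4      # ORs per anonymous connection
    onion_ms = 7.0    # processing time per onion per OR (Roger's figure)

    users_per_or = n_users * path_len // n_ors    # -> 400
    onion_interval_ms = users_per_or * onion_ms   # -> 2800 ms
    print("each OR carries %d user connections" % users_per_or)
    print("one new onion per user every %.0f ms" % onion_interval_ms)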

The main concern here is about linkability, and I fully agree. Why
not use a compromise: as long as I'm on www.cnn.com, I can happily
use the same connection with a second level of multiplexing. Yes,
the adversary learns (a bit) more than with a new onion per
connection, but only if we assume he can observe the last OR and
not the web server. In the latter case, using new connections (and
new exit ORs) for each web object won't help much, as users usually
navigate using links and the adversary should be able to link all
objects to the same user by analyzing which page at cnn.com links to
what other pages. So the compromise is to use a connection as long as
we are in the same domain, but build a new onion and use a new
connection (with a new exit OR) when we move to another domain.
Comments?
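
A sketch of what I mean, again in Python (Circuit, build_circuit and
fetch are made-up names for illustration, not an existing API):

    class Circuit:
        # Stand-in for one anonymous connection (one onion, one exit OR).
        def request(self, url):
            pass  # would send the request over a second-level stream

    def build_circuit():
        # In reality: pick a route, build a fresh onion, open the circuit.
        return Circuit()

    circuits = {}  # domain -> open anonymous connection

    def fetch(url):
        domain = url.split("/")[2]  # e.g. "www.cnn.com"
        if domain not in circuits:
            # moving to a new domain: new onion, new exit OR
            circuits[domain] = build_circuit()
        return circuits[domain].request(url)

All objects within one domain ride the same connection; only the
first request to a new domain pays the onion-building cost.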

Btw, the two-layer stuff is fully described in

http://www.tik.ee.ethz.ch/~rennhard/publications/PN.pdf

Cheers,
--Marc

