[tor-talk] Who said it takes hours of latency to fix anonymity?
arma at mit.edu
Sun Feb 15 11:27:31 UTC 2015
On Sun, Feb 15, 2015 at 11:55:09AM +0100, carlo von lynX wrote:
> I'm sorry to disturb with this, but I am being confronted with
> hearsay that Roger D. said it would take latencies on the order
> of hours to make communications fully impossible to shape and
> correlate, and that hearsay is being presented as applying to
> any kind of anonymization network. To me, if it is true, this
> only makes sense applied to Tor's low-latency approach. A system
> that uses shaping-resistant fixed-size packets would not need
> latencies on the order of hours to be provably anonymizing even
> in the face of a pervasive global attacker, and I presume several
> papers in anonbib propose viable strategies for that. They are
> just too many to pick one to start from. Am I missing a clue?
> I am so embarrassed to ask this that I don't even feel like
> mailing Roger about it. I prefer having more advanced questions
> to ask.
It's actually worse than that -- we have no idea.
I'd love to have a graph where the x axis is how much additional overhead
(latency, bandwidth, whatever) we're willing to add, and the y axis is
how much additional security (anonymity, privacy, whatever) we can get.
Currently we have zero data points for this graph.
The NRL folks have a fun paper on how to turn a defense against passive
timing attacks into a defense against active timing attacks:
But you have to have a defense against passive timing attacks or their
paper isn't useful to you yet.
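To make "passive timing attack" concrete, here is a toy sketch (my own illustration, not from the paper Roger mentions): an observer who sees packet timestamps at an entry flow and at several candidate exit flows bins the timestamps into windows and matches the entry flow to the exit flow whose bin counts correlate best. All the parameters (window size, jitter, flow sizes) are invented for the example.

```python
# Toy passive timing-correlation attack: bin packet timestamps and
# match flows by correlation of their per-window packet counts.
# Illustrative only; all numbers here are made up.
import random

def bin_counts(timestamps, window=1.0, horizon=60.0):
    counts = [0] * int(horizon / window)
    for t in timestamps:
        if 0 <= t < horizon:
            counts[int(t / window)] += 1
    return counts

def correlation(a, b):
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = sum((x - ma) ** 2 for x in a) ** 0.5
    vb = sum((y - mb) ** 2 for y in b) ** 0.5
    return cov / (va * vb) if va and vb else 0.0

random.seed(7)
entry = [random.uniform(0, 60) for _ in range(300)]
# The true exit flow is the entry flow plus small network jitter.
matching_exit = [t + random.uniform(0.0, 0.3) for t in entry]
decoy_exits = [[random.uniform(0, 60) for _ in range(300)] for _ in range(9)]

flows = decoy_exits + [matching_exit]
scores = [correlation(bin_counts(entry), bin_counts(f)) for f in flows]
best = scores.index(max(scores))
print("best match is flow", best, "score %.2f" % scores[best])
```

Without cover traffic or padding, the matching flow stands out immediately; a defense against this kind of observer is what the NRL construction takes as its starting point.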
On the 'bad news' side, check out
which shows reasonable scenarios against high-latency anonymity systems
where the anonymity breaks down over time against a passive observer.
Such attacks work especially well against a world where you have "users"
and you have "mixes", and the users don't participate consistently for
the entire existence of the system.
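The "users who don't participate consistently" failure mode can be sketched in a few lines (a hypothetical setup of my own, not taken from the paper): if Bob's messages only arrive in rounds when the sender is online, a passive observer can intersect the sets of online senders across those rounds, and inconsistent participants are eliminated quickly.

```python
# Toy long-term intersection attack on a high-latency mix.
# Illustrative only: user counts and probabilities are invented.
import random

random.seed(1)
users = list(range(100))
alice = 42               # the target sender
suspects = set(users)
for round_no in range(30):
    # Each round a random subset of users is online.
    online = {u for u in users if random.random() < 0.3}
    alice_sends = random.random() < 0.5
    if alice_sends:
        online.add(alice)
        # Bob received a message this round, so the sender must be
        # among the users online right now: intersect the suspect set.
        suspects &= online
print("remaining suspects:", suspects)
```

After a dozen or so sending rounds the suspect set collapses to Alice alone, which is the sense in which anonymity "breaks down over time" against a purely passive observer.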
I've always been fond of
as an example of what you can do if all your users are mixes and no
users need to send or receive much traffic. But that paper also comes
with many hidden assumptions, so be careful thinking the next step is
to just build it.
On the 'good news' side, consider that with millions of traffic flows,
maybe you just have to drive the false positives up a little bit, and
suddenly an attacker with only a partial view of the system can't trust
his conclusions: see the "More precisely, it's possible that correlation
attacks don't scale well because" paragraph in
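A back-of-the-envelope calculation shows why even small false positive rates might matter at scale (the numbers below are invented for illustration, not measurements): with millions of flows, the number of non-matching pairs dwarfs the number of matching ones, so a correlator's flagged pairs are dominated by false positives.

```python
# Base-rate arithmetic for correlation attacks at scale.
# All figures are hypothetical, chosen only to illustrate the effect.
true_pairs = 1_000_000        # entry/exit pairs that really match
candidate_pairs = 10 ** 12    # all pairs the attacker must test
tpr = 0.99                    # correlator's true positive rate
fpr = 1e-5                    # false positive rate per tested pair

true_hits = tpr * true_pairs
false_hits = fpr * (candidate_pairs - true_pairs)
precision = true_hits / (true_hits + false_hits)
print("flagged pairs: %.0f, precision: %.4f"
      % (true_hits + false_hits, precision))
```

Here a 99%-accurate correlator with a one-in-100,000 false positive rate ends up with roughly ten false matches for every true one, so a partial-view attacker cannot trust any individual flagged pair.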
The PETS conference is where it's at in terms of progress so far. But
it's been a while since things have moved forward. One next step might
be to try to rephrase the question into something that somebody can
answer.