Proposal: Bring Back PathlenCoinWeight
mikeperry at fscked.org
Thu Apr 26 05:08:26 UTC 2007
Thus spake Roger Dingledine (arma at mit.edu):
> So this proposal isn't about bringing back the old behavior. It appears
> rather to be some combination of:
> a) We should let people choose two hops if they want, for better
> performance and to reduce the overall load on the network.
> b) We should make path length have a random component, so it's hard
> to tell if a given user prefers two-hop paths or three-hop paths.
> c) We should change our policy for dropping guards based on when they
> fail circuits.
Yes. Bingo. My proposal really had little to do with
PathlenCoinWeight, other than that I think it would be nice
to have that flexibility in case it proved worthwhile elsewhere (for
example, Johannes's work, or an attack I realized the other day).
> Letting each user choose his own random weight is probably a recipe
> for disaster, in that it adds complexity to the system and to analysis
> and doesn't really provide any clear win. So giving people a couple of
> weights to choose from (0, 1, and maybe a few more) seems a smarter move.
I don't think we should worry about our typical user mucking with
torrc and making themselves insecure. Torrc is an expert's tool.
Instead we should allow the flexibility to be there for some
controller later to decide if it is useful.
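For concreteness, here is a minimal sketch (purely illustrative, not Tor's actual implementation; the capped-geometric scheme and parameter names are my assumptions) of how a coin-weight knob could pick a path length: each flip of a biased coin adds one hop, so a weight of 0 always gives the base length and higher weights bias toward longer paths.

```python
import random

def choose_path_length(coin_weight, base_len=2, max_len=8):
    """Pick a path length by repeated weighted coin flips.

    coin_weight is the probability of adding one more hop on each
    flip; 0.0 always yields base_len, and higher values bias toward
    longer paths, up to max_len. This is a hypothetical model of a
    PathlenCoinWeight-style knob, not code from Tor.
    """
    length = base_len
    while length < max_len and random.random() < coin_weight:
        length += 1
    return length
```

A controller could then expose only a couple of sane weights (0 for 2-hop, some nonzero value for mixed 2/3-hop) rather than letting users pick arbitrary values by hand.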
> > Furthermore, when blocking resistance measures insert an extra relay
> > hop into the equation, 4 hops will certainly be completely unusable
> > for these users, especially since it will be considerably more
> > difficult to balance the load across a dark relay net than balancing
> > the load on Tor itself (which today is still not without its flaws).
> I believe the blocking-resistance plan is to have 3 hops also -- the
> bridge acts quite like an entry guard, and then two more. But all of
> that is up for more thorough analysis too.
I think 3 hops is reasonable for people who have to traverse aggressive
content filters that actively block Tor, for the sole reason that it
is hard to authenticate whether a darknet IP is a Tor node or not, and
doing so would thus require 1-hop paths. I agree we really probably
don't want to give the adversary anything tangible by confiscating or
ordering (or in some less free countries-*ahem*-arbitrarily inserting)
a pen register tap onto a single Tor node.
However, for rapid prototyping of blocking resistance mechanisms (and
as you have pointed out elsewhere, we DO want to rapid prototype as
many as possible), it would be nice to be able to leverage as much of
the existing Tor code as possible (i.e., not have to build 2-hop paths
via a controller, etc.). So that would actually mean disabling guards
altogether from Tor's point of view, and letting the controller take
over that aspect only. If we can eliminate Tor's network fingerprints
other than IP address, we can build all sorts of crazy darknets using
just HTTP proxies and Tor's config option for this, toggled via the
controller.
> > I believe currently guards are rotated if circuits fail, which does
> > provide some protection,
> Actually, no. Guards are marked as inactive if the initial connection
> to them fails, or the create cell to them fails. If a circuit through
> them fails, we don't mark anything.
> Guards are only actually dropped from the guard list if they are inactive
> for many weeks. Otherwise we try them again as soon as the directory
> authorities tell us they're running again. Section 5 of path-spec.txt
> has a few more details.
What I meant by "rotated" is a different guard (from the guard list)
is chosen randomly for the next circuit upon a complete circuit
failure, so a guard can't just keep failing circuits with the
guarantee that it will be able to own a connection. At least this was
my impression from a glance at the source (choose_good_entry_server())
and seems to be neither confirmed nor denied by path-spec :).
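The behavior I mean could be modeled like this (a hypothetical Python sketch of the logic I believe I saw, not the actual choose_good_entry_server() C code): on a complete circuit failure, pick a different guard at random from the guard list for the next attempt.

```python
import random

def next_guard(guard_list, failed_guard):
    """After a complete circuit failure through failed_guard, pick a
    different guard at random from the configured guard list, so a
    single malicious guard can't keep failing circuits with the
    guarantee that it will own the next connection. Illustrative
    model only, not Tor source.
    """
    candidates = [g for g in guard_list if g != failed_guard]
    return random.choice(candidates) if candidates else failed_guard
```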
> > Why not fix Pathlen=2?:
> > The main reason I am not advocating that we always use 2 hops is that
> > in some situations, timing correlation evidence by itself may not be
> > considered as solid and convincing as an actual, uninterrupted, fully
> > traced path. Are these timing attacks as effective on a real network
> > as they are in simulation? Would an extralegal adversary or authoritarian
> > government even care? In the face of these situation-dependent unknowns,
> > it should be up to the user to decide if this is a concern for them or not.
> Fair enough. How do the above anonymity-breaking attacks look to you, in
> this light?
> In fact, assuming Alice uses Tor a lot, it doesn't even need to
> fail every single "safe" circuit. It can just take advantage of the
> circuits where it wins, and/or maybe fail a few to help that along.
> Note that it's trivial for it to recognize which circuits are the
> winners, so there's no synchronization or communication needed
> except when it's a sure bet.
I think this is the best argument against the proposal, because it is
the only attack that is a sure kill if successful, and the adversary
can just be patient and be assured of some success (perhaps to make an
example of). Yet at the same time even this underscores that it should
be up to the user to decide. Much of the Tor userbase is not at risk
for simply viewing censored content, and so doesn't really require
maximum anonymity at the cost of speed and load. Even China rarely if
ever jails or harms those who simply view content. They are concerned
with suppressing publishers, and just blocking all of Tor.
Furthermore, if we agree about the effectiveness of timing attacks,
then there is no real barrier other than implementation details. An
adversary with the resources to mount this attack against the network
via C nodes can also afford a rentacoder post to get the extra timing
work done ;)
And of course there are the scores of users in more permissive
societies that simply don't want the long-term association of things
like Google queries being tied to their IP (but who also click on
well-placed relevant ads, oh ye beloved, benevolent and omniscient
sponsor :). These users don't need the full anonymity Tor provides by
default, yet want more reliability and safety than random proxies.
I think the major risks can be conveyed in a sentence or two in a
radio button choice, with a bit more info in the help. Users who risk
the adversary mounting even low-cost (let alone long-term) attacks to
circumvent Tor rather than just block Tor entirely for them will know
who they are. And there will be plenty of paranoids out there who will
keep them company :)
Incidentally, I strongly believe that the most damaging attacks
against anonymity for users who need it come not via Tor, or even from
browser leaks or misconfigurations. They come from the fact that they
are the only people in their workplace, profession, skillset, city,
state, or even circumstance that use Tor. The only thing that will
fix this is more users. Thankfully, the censors, marketers, and data
warehousing companies really REALLY want to help us out on that one.
We should help them help us :)
> > Partitioning attacks form another concern. Since Tor uses
> > telescoping to build circuits, it is possible to tell a user
> > is constructing only two hop paths at the entry node. It is
> > questionable if this data is actually worth anything
> > though, especially if the majority of users have easy
> > access to this option, and do actually choose their path
> > lengths semi-randomly.
> Agreed. The possibility that an entry guard could learn that its user
> is a 2-hop user doesn't bother me very much. At least not until it
> turns into a deeper attack.
Hrmm, so a possible deeper attack is for an adversary to watch
connections into Tor and say "Hrmm, these Tor users are using 3-hop
circuits, they are the evil Tor users. They have something to hide!
Get 'em!/Watch them closer!".
There are two defenses for this:
1. Use PathlenCoinWeight to BUILD circuits, but have
a secondary StreamlenCoinWeight that chooses them for use. Since
many circuits will share the same guard, it will be non-trivial for
the (local external) adversary to tell which hop length is being used.
2. Make a RELAY_SHISHKABOB cell that has onionskins for N nodes to
set up a circuit in one pass. This is obviously more work than #1,
but will cover guard node adversaries also.
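Defense #1 could look roughly like this (a sketch under my own assumptions; StreamlenCoinWeight is the hypothetical second knob proposed above, and the circuit representation is invented for illustration): circuits of mixed length sit in a pool, and the stream-attachment step, not the build step, decides which length actually gets used.

```python
import random

def pick_circuit_for_stream(built_circuits, streamlen_coin_weight):
    """Choose a circuit for an outgoing stream from a pool of
    already-built circuits of mixed length. With probability
    streamlen_coin_weight prefer a 3-hop circuit, otherwise a 2-hop
    one, so the guard sees both lengths being built regardless of
    which the user actually routes streams over. Hypothetical
    sketch; only PathlenCoinWeight exists in the proposal itself.
    """
    long_c = [c for c in built_circuits if c["hops"] >= 3]
    short_c = [c for c in built_circuits if c["hops"] == 2]
    if long_c and random.random() < streamlen_coin_weight:
        pool = long_c
    else:
        pool = short_c or long_c
    return random.choice(pool)
```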
> Another worry is the local network triggering repeated failures only
> for Alice. But actually, Alice should be able to distinguish failures
> on her local network from authenticated messages from the guards saying
> that an extend failed.
> Hm. What other issues are there with abandoning guards after failure? One
> issue is that we could more quickly rotate away from honest guards onto
> bad guards; so we would need to make sure that the bar for failure is
> sufficiently high that it doesn't trigger except during one of these
> attacks. This part needs more thought.
I will experimentally evaluate this via TorFlow sometime in the next 2
weeks or so. So long as it is represented as a failure rate rather
than total failure count, guard turnover should be minimized.
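The failure-rate idea amounts to something like this (thresholds are illustrative assumptions I'd want TorFlow to calibrate, not measured values): demote a guard only when its rate of circuit failures is high over enough attempts, rather than after any fixed failure count, so normal network flakiness doesn't churn the guard list.

```python
def should_demote_guard(failures, attempts,
                        rate_threshold=0.5, min_attempts=20):
    """Demote a guard only when its circuit-failure *rate* exceeds a
    threshold over a minimum number of attempts. A rate-based bar
    tolerates a baseline failure rate (e.g. ~20%) without rotating
    away from honest guards. Parameter values are placeholder
    assumptions pending measurement.
    """
    if attempts < min_attempts:
        return False
    return failures / attempts > rate_threshold
```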
Hopefully the Tor circuit failure rate is low. Last time I measured
it, it was around 20%, but that was a bunch of rickety Perl doing the
measurement.
Mad Computer Scientist
fscked.org evil labs