Router twins are obsolete?

Roger Dingledine arma at
Sat Aug 23 09:50:26 UTC 2003

"Router twins" are two or more onion routers that share the same public
key (and thus the same private key). So if an extend cell asks for one
and it's down, the router can just choose the other instead. This
provides some level of redundancy/availability.
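As a rough sketch (in Python, with made-up names — this is not Tor's
actual code), the fallback might look like:

```python
# Hypothetical sketch of the fallback described above: when an extend
# cell names a key held by several twins, the relay can route to any
# twin that is currently up. All names here (handle_extend, is_up,
# the fingerprint strings) are illustrative assumptions.

def handle_extend(target_fingerprint, twins_by_key, is_up):
    """Return any live router holding the requested key, else None."""
    for twin in twins_by_key.get(target_fingerprint, []):
        if is_up(twin):
            return twin  # any twin can decrypt, so any live one will do
    return None          # no twin reachable: the extend fails

# Usage: A1 and A2 share one key; A1 is down, so we fall back to A2.
twins = {"keyA": ["A1", "A2"]}
assert handle_extend("keyA", twins, lambda r: r != "A1") == "A2"
```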

Indeed, Paul pointed out recently that the new ephemeral-DH session key
means that router twins aren't as much of a security risk as before:
sharing the RSA key just means that any of them can 'answer the phone',
as it were, and then proceed to negotiate a session key that the other
twins don't know. That is, if routers A1 and A2 have the same private
key, and the adversary compromises A1, he still can't eavesdrop on the
conversation Bob has with A2.
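A toy Diffie-Hellman exchange (tiny parameters, purely illustrative)
shows why: the session key comes from fresh ephemeral exponents, which
the other twin never sees, even though it holds the same RSA key.

```python
# Toy DH sketch (illustrative only; real implementations use
# standardized groups of >= 1024 bits). A1 shares A2's long-term
# identity key, but each handshake uses fresh ephemeral exponents
# that A1 never learns.
import secrets

P = 0xFFFFFFFB  # 2**32 - 5, a small prime; purely for demonstration
G = 5

def dh_keypair():
    """Generate an ephemeral (private, public) DH pair."""
    x = secrets.randbelow(P - 2) + 1
    return x, pow(G, x, P)

# Bob negotiates with twin A2.
bob_priv, bob_pub = dh_keypair()
a2_priv, a2_pub = dh_keypair()
session_bob = pow(a2_pub, bob_priv, P)
session_a2 = pow(bob_pub, a2_priv, P)
assert session_bob == session_a2  # Bob and A2 agree on a session key

# Twin A1 observes only the public values bob_pub and a2_pub; without
# either ephemeral private exponent it cannot compute the session key.
```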

Here are some of our reasons for wanting router twins, with reasoning
why they're no longer relevant:

A) Having long-term reliable nodes is critical for reply onions, since
the nodes in a reply path have to be there weeks or months later. Having
redundancy at the router level allows a reply path to survive despite
node failure.

However, we're not planning to do reply onions anymore, because rendezvous
points are more flexible and more robust.

B) If Alice chooses a path and a node in it is down, she needs to choose
a whole new path (that is, build a whole new onion). This endangers her
anonymity, as per the recent papers by Wright et al.

However, now if an extend fails, the router sends back a truncated
cell, and Alice can try another extend from that point in the path.
This seems safer (well, less unsafe). And besides, it's not like we
were actually allowing Alice's circuit to survive node failure; we were
just allowing circuit-building to tolerate node failure, sometimes.
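A sketch of that retry behavior (Python, all names and the retry
policy are my own illustration, not the implementation):

```python
# Hypothetical sketch of the retry described above: when an extend
# fails, keep the partial circuit (we received a "truncated") and try
# a different router from the same point, rather than rebuilding the
# whole circuit from scratch.
import random

def build_circuit(routers, is_up, length=3, rng=None):
    rng = rng or random.Random(0)
    circuit, remaining = [], list(routers)
    while len(circuit) < length and remaining:
        candidate = rng.choice(remaining)
        remaining.remove(candidate)
        if is_up(candidate):
            circuit.append(candidate)  # extend succeeded
        # else: got a truncated cell back; the partial circuit
        # survives and we simply retry the extend from here.
    return circuit if len(circuit) == length else None

# Usage: R3 is down; the circuit still gets built without a restart.
path = build_circuit(["R1", "R2", "R3", "R4", "R5"], lambda r: r != "R3")
assert path is not None and "R3" not in path and len(path) == 3
```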

C) We can do load balancing between twins, based on how full each
one is at this moment.

However, load balancing seems like a whole lot of complexity if we want
to keep much anonymity. We've been putting off spec'ing this, because
it's really hard. It's not going to get any easier.

D) Users don't have to rely as much on up-to-date network snapshots
(directories) to choose a working path, since most supernodes
(conglomerates of twins) will have at least one representative present
all the time.

This is still a nice idea. However, it doesn't seem tremendously hard
to crank up the directory refresh rate for users so they have a pretty
accurate picture of the network. The directory servers already keep
track of who's up right now and publish this in the directory. And if
most of the nodes aren't part of a supernode, then this reason doesn't
apply as much. So if this is the only reason left to do router twins,
maybe it's not good enough.

Some other reasons why router twins are bad:

E) Surprising jurisdiction changes. Imagine Alice choosing an exit
node in Bulgaria and then finding that it has a twin run by Alice's
employer. Choosing between router twins is done entirely at the previous
hop, after all.

F) Path selection is harder. We go through a complex dance right now
to make sure twins aren't adjacent to each other in the path. To keep
it sane we choose the entire path before we start building it. We're
going to have to change to a more dynamic system as we start taking into
account exit policies, having nodes down (or supernodes down, even if we
do router twins), etc.  I bet a simpler path selection algorithm would
be easier to analyze and easier to describe.
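For concreteness, the adjacency constraint in that dance might be
sketched like this (Python; the supernode() mapping and the retry
bound are assumptions for the example, not the real algorithm):

```python
# Illustrative sketch of the "complex dance": pick the entire path up
# front, rejecting any path where adjacent hops belong to the same
# supernode (i.e. are twins sharing a key).
import random

def choose_path(routers, supernode, length=3, rng=None):
    """Sample paths until no two adjacent hops share a supernode."""
    rng = rng or random.Random(1)
    for _ in range(100):  # bounded retries keep this a sketch
        path = rng.sample(routers, length)
        if all(supernode(a) != supernode(b)
               for a, b in zip(path, path[1:])):
            return path
    return None

# Usage: A1 and A2 are twins (same supernode "A"); they must never end
# up adjacent in the chosen path.
path = choose_path(["A1", "A2", "B1", "C1"], lambda r: r[0])
assert path is not None
assert all(a[0] != b[0] for a, b in zip(path, path[1:]))
```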

G) Router twins threaten anonymity. Having multiple nodes around the
world, any of which can leak the private key, is bad. Remember "There's no
security without physical security" and "Two can keep a secret if one's
dead". Either all the twins are run by just me, in which case it's hard
for me to physically secure them all (or they're in the same room, and
not improving availability much), or they're run by different people,
in which case the private key isn't as private. If the adversary gets
the supernode's private key, he may be able to redirect you to him by
DoSing the remaining twins, etc. (or if we do load balancing, simply
by advertising a low load). Or if he happens to own the previous hop,
he can just simulate the supernode right there.

Agree/disagree? Any important reasons I've overlooked?
