More thoughts on bridge distribution strategies

Roger Dingledine arma at
Tue Dec 8 01:12:20 UTC 2009

Hi folks,

A few weeks back I spent 3 days at Kaist, talking to teams of freshmen
who are brainstorming distribution strategies for Tor bridges. For a
refresher on what I'm talking about, see the original blog post:

While the problem started out as theoretical at the beginning of the term,
it became much more concrete in September when China blocked the public
Tor relays:

Here are some newer ideas on how we should handle the problem, sparked
by talking to the students and also by spending several days focused on
it myself.

The first point is to recognize better where we are in the arms race.
Specifically, we've had exactly one blocking event. Most countries have
never blocked any bridges, and China did one push on Sept 25.

(For this discussion I'm ignoring cases where they block by protocol
fingerprint rather than IP address. That's an important issue that we
have to prepare for also, and in some countries it may actually be a
more urgent issue, but it's a separate topic.)

Practically speaking, what we expect to see over the next several months
is people mostly ignoring us, with perhaps one or two days where they put
in a lot of effort. They're not (yet) rolling out automated enumeration
programs that run 24/7 and try to block bridges in real-time.

---Part one----------------------------------------------------------

Conclusion #1 is that we need to hold more of our bridges in reserve.
Right now we have perhaps 300-400 bridges running at any given time. The
configuration parameters I picked for the bridgedb server involve holding
1/11 of the bridges in reserve (5/11 are given out via https, and 5/11
are given out via gmail). On Sept 25, they enumerated and blocked the
https portion. That's about half the bridges gone. If they'd decided
to make a bunch of gmail accounts, they would have found over 90% of
the bridges -- leaving us with only 30-40 bridges for whatever disaster
recovery backup plan we produce.

So the next time somebody tries to learn all the bridges, we need to
make sure they get a much smaller fraction. The only reason to make
that fraction not too tiny is that we need to handle all the load that
users want to put on the bridges. Karsten's latest estimates show 30k+
daily users of bridges. They won't all fit on just 5 bridges.

Soon Karsten will be publishing anonymized archives of bridge descriptors,
and it will be clearer that the distribution of users to bridges is far
from even. There actually are a handful of bridges that are handling
much of the traffic from China. Presumably these are the ones that
spread through out-of-band social channels and not our automated (and
more evenly distributed) distribution mechanisms. So I'm tempted to be
conservative and say that a share of 20% for https, 10% for gmail, and
70% in reserve is a fine new plan. (I suggest a higher share for https
since it's easier to use so it will end up with a higher load.)
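To make the share arithmetic concrete, here's a Python sketch of how bridges
might be deterministically split into pools under the proposed 20/10/70
weights. The hash-bucketing scheme and pool names here are illustrative
assumptions, not bridgedb's actual code:

```python
# Illustrative only: deterministically assign bridges to distribution
# pools using the proposed 20% https / 10% gmail / 70% reserve split.
# The hashing scheme is an assumption, not what bridgedb actually does.
import hashlib
from collections import Counter

POOLS = [("https", 20), ("gmail", 10), ("reserve", 70)]  # weights sum to 100

def assign_pool(bridge_id):
    """Map a bridge id to a pool, weighted by the shares above."""
    digest = hashlib.sha256(bridge_id.encode()).digest()
    point = int.from_bytes(digest[:4], "big") % 100  # 0..99
    cumulative = 0
    for name, weight in POOLS:
        cumulative += weight
        if point < cumulative:
            return name
    return POOLS[-1][0]  # unreachable: weights cover 0..99

bridges = ["bridge%03d" % i for i in range(400)]
counts = Counter(assign_pool(b) for b in bridges)
```

The nice property of hashing (rather than assigning pools round-robin) is
that a bridge keeps its pool across restarts of the distributor.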

In fact, if we expect the next blocking event to come all at once, then
one additional goal we should want is that some people who already have
a bridge don't get blocked. One way to go about this is to vary which
bridges are available at all. Up until recently it turns out we were
almost doing this: we reshuffled which bridges were available to which IP
addresses every few days. But our problem was that all the bridges were
still available to *some* IP address. Instead, we need to keep some of
the bridges within the https share in reserve, and make only a fraction
of them available at any given point. For example, we could pick 1/5
of the https share and hand those out; every week we would change to
a different 1/5. Thus anybody who tries to block all the https bridges
today will learn only 1/5 of them, and people who learned their bridges
last week will be unaffected.
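A sketch of that rotation, assuming we partition the https pool into five
fixed subshares by index and pick one per week (the week numbering and
partitioning scheme are assumptions for illustration):

```python
# Sketch of rotating which fifth of the https pool is active each week.
# Partition-by-sorted-index and epoch week numbering are assumptions.
SUBSHARES = 5
SECONDS_PER_WEEK = 7 * 24 * 3600

def active_subshare(now):
    """Index of the subshare being handed out during this week."""
    return int(now // SECONDS_PER_WEEK) % SUBSHARES

def visible_bridges(https_pool, now):
    """Only this week's subshare is given out; the rest stay hidden."""
    k = active_subshare(now)
    ordered = sorted(https_pool)
    return [b for i, b in enumerate(ordered) if i % SUBSHARES == k]

pool = ["bridge%02d" % i for i in range(20)]
this_week = visible_bridges(pool, 0)
next_week = visible_bridges(pool, SECONDS_PER_WEEK)
```

An enumerator working today learns only `this_week` (1/5 of the share),
and users who fetched bridges last week are on a disjoint set.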

There are some drawbacks here. Two big ones are 1) a patient attacker can
still learn the whole set in that share over time, and 2) if a bunch of
users all decide in the same week that they need bridges, then the load
will be uneven. I'm not too worried about #1 yet (we'll deal with that
when it happens), and I think #2 is an acceptable tradeoff too.

Another problem with the more general approach of keeping 70% of the
bridges unpublished is that people may not get the immediate gratification
of seeing users, and so they'll turn off their bridge. This is a social
problem that can be tackled at a different level than the other technical
problems here. But we should be sure to address it.

---Part two----------------------------------------------------------

Conclusion #2 is that we need to prepare better for the next blocking
event. Actually, we still haven't reacted well to the first one. Our
bridgedb code doesn't know how to leave out bridges that we know are
blocked. We're still giving out blocked bridges to China via the https
mechanism. :(

The immediate need is for bridgedb to take in a blacklist of IP addresses
that shouldn't be offered. It can then just leave those out of any
answers it provides.

(I had originally been thinking "leave them blocked -- if we cull the
blocked ones, we're just making it easier for them to find the remaining
unblocked ones and block those." But since nobody's tried to do any more
blocking, we're just harming the current users for no gain.)
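A minimal sketch of that immediate fix, assuming bridges are represented
as plain address strings and the blacklist is a flat file of addresses
(both data shapes are assumptions for illustration):

```python
# Illustrative sketch: parse a blacklist and leave blocked addresses
# out of any answer bridgedb gives. Data shapes are assumptions.

def parse_blocklist(lines):
    """One address per line; blank lines and '#' comments are ignored."""
    blocked = set()
    for line in lines:
        line = line.strip()
        if line and not line.startswith("#"):
            blocked.add(line)
    return blocked

def answer(candidates, blocked, n=3):
    """Return up to n bridges, silently skipping blocked ones."""
    return [b for b in candidates if b not in blocked][:n]
```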

The more advanced approach is to give out different answers
depending on what country you're asking about. For example, you
could visit, or you could mail
bridges+zh at As a bonus, we could provide instructions
localized in that language. Kaner is currently adding this type of
feature to our gettor email auto-responder, so it's the same idea here.

Now, the downside here is that giving different answers to different
countries means more addresses are being given out overall.
After all, we can't tell what country the asker *really* is in. So if you
wanted to enumerate bridges, you would ask about each country separately,
and pool your answers.

We could imagine a solution where we have a totally separate set of
bridges for each country, and figure that a given country will do the
minimum possible work and block only the ones assigned to its country. My
preference would be to move directly to the later step in the arms race,
and just make the answers as uniform between countries as possible:
tag each bridge internally with which countries it's blocked in, and
skip over blocked bridges when choosing which answers to give out.
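That per-country tagging could look something like the following sketch
(the field names and record layout are hypothetical, not bridgedb's
actual schema):

```python
# Sketch of tagging bridges with the countries they're blocked in and
# filtering per-country answers. Field names are hypothetical.
from dataclasses import dataclass, field

@dataclass
class Bridge:
    address: str
    blocked_in: set = field(default_factory=set)  # country codes

def answer_for(bridges, country, n=3):
    """Skip bridges known to be blocked in the requester's country."""
    usable = [b for b in bridges if country not in b.blocked_in]
    return [b.address for b in usable[:n]]

pool = [
    Bridge("1.2.3.4:443", {"cn"}),
    Bridge("5.6.7.8:443"),
    Bridge("9.10.11.12:443", {"cn", "ir"}),
]
```

The point of this shape is uniformity: there's one pool, and country-specific
answers fall out of the tags instead of out of separate per-country sets.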

Finally, there's the related problem of figuring out which bridges are
actually blocked. Eventually we'd like to automate this, while at the same
time not introducing new ways for attackers to learn all the addresses
we're testing. But assuming the blocking events are obvious and rare,
the simple answer for now is to do it manually soon after the fact. I
still have the list from doing the manual test shortly after Sept 25.

---Part three--------------------------------------------------------

Conclusion #3 is that we're doing our gmail answers wrong. Our current
approach is that we do a one-off answer in response to an email request.
Instead we need to focus on building a relationship with that email
address. Specifically, gmail requests should be more like subscriptions
rather than responses.

A while ago, our gmail autoresponder's behavior was to pick a set of three
answers the first time a given address mailed us. Then on subsequent
mails, we provided exactly the same answer, except we left out bridges
that we knew were down. So if you mailed us a week later from the same
gmail account, and all three of your original bridges had disappeared,
then we'd send you a list of zero bridges. Not so good.

In early October we fixed that to send the first three running bridges in
the hash ring, starting from the point in the hash ring that the gmail
account maps to. So you get three bridges, and if you ask again later,
and those three are down, you get new ones. And if the originals come
back, you get whichever three are first (closest) in line.
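The hash-ring lookup described above can be sketched like this (the hash
function and ring-walking details are illustrative, not bridgedb's actual
implementation):

```python
# Illustrative hash ring: map a gmail address to a point on the ring,
# then walk forward and collect the first three *running* bridges.
import hashlib

def ring_position(key):
    return int.from_bytes(hashlib.sha1(key.encode()).digest()[:8], "big")

def first_running(email, bridges, running, n=3):
    """bridges: bridge ids; running: set of ids currently up."""
    ring = sorted(bridges, key=ring_position)
    start = ring_position(email)
    # Rotate the ring so we begin at the email's position.
    idx = 0
    while idx < len(ring) and ring_position(ring[idx]) < start:
        idx += 1
    ordered = ring[idx:] + ring[:idx]
    return [b for b in ordered if b in running][:n]
```

The useful properties fall out directly: the same account always maps to
the same starting point, downed bridges are skipped over for fresh ones,
and when the originals come back they're first in line again.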

We also changed it so you don't have to say 'get bridges' in the body
of the mail: too many users were getting that wrong. But that trick
introduced a new concern: we could get into email loops where we bombard
a gmail user who has set up a vacation program to auto-answer. The fix for
now is to answer a given address no more than once every couple of hours.

But what if you get three bridges, and they go down in an hour? Bridgedb
will ignore your followup request.

My incremental fix is to check if the answer has changed since last
time, and if it has, be willing to answer even if the 'couple of hours'
hasn't elapsed. That way people can get answers if there are new answers,
and we still protect against email loops.
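That incremental fix is small enough to sketch directly (state handling
and the two-hour window below are assumptions for illustration):

```python
# Sketch of the incremental fix: rate-limit replies per address, but
# answer early if the bridges we'd send have changed since last time.
RATE_LIMIT = 2 * 3600  # seconds; the "couple of hours" window

class Responder:
    def __init__(self):
        self.last_reply = {}  # email -> (timestamp, answer)

    def maybe_reply(self, email, answer, now):
        """Return the answer to send, or None to stay silent."""
        prev = self.last_reply.get(email)
        if prev is not None:
            last_time, last_answer = prev
            # Within the window AND nothing new to say: stay silent.
            # This is what breaks vacation-autoresponder mail loops.
            if now - last_time < RATE_LIMIT and answer == last_answer:
                return None
        self.last_reply[email] = (now, answer)
        return answer
```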

But I have a better fix. Why not automatically mail out an update if
the answer changes? There could be a new command 'subscribe bridges',
and then it will remember which ones you've heard already, and if the
best three answers include any you haven't heard before, it'll send you
an email. Or we can batch them, or only mail if none of the ones you've
heard are still up, or some other permutation. 'get bridges' could be a
synonym for a one-week subscription.
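One way the subscription bookkeeping could work, as a sketch (everything
here — the class, the "mail only when something is new" policy — is a
hypothetical design, not an existing bridgedb feature):

```python
# Sketch of 'subscribe bridges': remember which bridges each subscriber
# has been told about, and emit mail only when the current best answer
# contains something they haven't heard. Purely hypothetical design.
class Subscriptions:
    def __init__(self):
        self.heard = {}  # email -> set of bridge ids already sent

    def subscribe(self, email):
        self.heard.setdefault(email, set())

    def update(self, email, best_three):
        """Return the list to mail now, or None to send nothing."""
        seen = self.heard.get(email)
        if seen is None:
            return None  # not subscribed
        new = [b for b in best_three if b not in seen]
        if not new:
            return None  # nothing they haven't heard; stay quiet
        seen.update(best_three)
        return best_three
```

The batching and "only mail if all known bridges are down" variants are
just different conditions in `update`.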

This approach means storing data about which gmail addresses are our users
(boo), though hopefully they're using throwaway gmail addresses if that
matters to them. But better, it brings in a host of new possibilities down
the road when the arms race has moved to its next phase. Specifically,
we can allocate persistent bridges to the gmail accounts that have been
around a while, or the ones whose bridges didn't get blocked, etc. See
"the sixth strategy" in the original blocking-resistance design doc for
my original plans there:

It's kind of a shame that we can't demand proof-of-ongoing-work though.
I fear the scenario where we just accrue more and more gmail accounts
that we're mailing updates to but they've long since forgotten about
Tor. Should we make them re-subscribe periodically? Alas, our proof of
work on the gmail side is in creating a new account, not in receiving
further emails. So maybe we need to start relying more on the social
networking side as the arms race progresses here.

---Part four--------------------------------------------------------

Conclusion #4 is that we need to automate some other distribution
approaches. When the https approach got blocked, and we worried the gmail
approach would get blocked soon after, we got a friend in China to set
up a password-protected twitter account and sign up his 1000 closest
friends. Then I manually fed him the list of "reserve" bridges for a
few weeks, and he twittered a few bridges per day.

We need somebody to automate the posting of them on the twitter side
(anybody want to help write the scripts there?). And we need to automate
the bridgedb side also, for example so it writes out an up-to-date new
list of bridge addresses to various files that we can then export. As
a simple example, I'd like to mail a daily list to this fellow in China of
a) which bridges from the previous mails are still around today, and
b) all the newly available bridges.
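The daily export is just a set comparison between yesterday's mailed list
and today's reserve pool; a sketch (purely illustrative):

```python
# Sketch of the daily update: given the bridges mailed previously and
# today's reserve pool, report (a) previously mailed bridges still
# running and (b) bridges that are newly available.
def daily_update(previously_mailed, reserve_today):
    prev = set(previously_mailed)
    today = set(reserve_today)
    still_around = sorted(prev & today)
    newly_available = sorted(today - prev)
    return still_around, newly_available
```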

Next would be having bridgedb, or some associated scripts, track lists
and write them out to different files by share (not just every reserved
bridge at once). Then we can identify a few social hubs in each affected
country and make sure they always have a good handle on some new (and
otherwise unknown) bridges.

Having alternatives to the gmail distribution strategy is particularly
important these days in places like Iran, where they've shown little
hesitation in blocking gmail when things get rough. Right now your
best bet if you can't reach or gmail is to show up to IRC
and hope that I'm around and can give you one of the reserve bridge
addresses. That doesn't scale.

---Part five--------------------------------------------------------

Where are we going to go in the future?

It's hard to say how the arms race will progress on the attacker side, so
it's hard to predict how we'll need to adapt. But here are two directions
to keep in mind.

First, as the arms race picks up we're going to want to think harder about
ways to isolate good users on long-term bridges. One of the promising
directions from the Kaist groups was the idea of splitting users into
two groups each epoch: a large group and a small group. Each group
gets its own bridge. If the bridge for the small group ends up blocked,
break the small group in half and repeat. If it doesn't end up blocked,
all of those users are known good -- leave them with their bridge, and
they'll all be fine. Now, this approach has some remaining challenges
(for example, what if it takes a while before the bridge gets blocked),
but the basic idea of "separate users onto bridges such that you can
cluster good users by themselves on the long-term bridges" is a good goal.
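As a toy illustration of that bisection idea, here's a sketch that assumes
(for simplicity) a single bad user doing the blocking; a real deployment
would have many blockers, delayed blocking, and churn to cope with:

```python
# Toy version of the Kaist splitting idea: each round, the suspect group
# is halved; whichever half's bridge would end up blocked keeps being
# split, and the other half is declared good. Assumes exactly one bad
# user, so the blocker is always confined to one half.
def corner_blocker(users, is_blocker):
    """is_blocker(group) -> True if a bridge given only to `group`
    would get blocked, i.e. the group contains the bad user."""
    good, suspects = [], list(users)
    while len(suspects) > 1:
        half = len(suspects) // 2
        small, large = suspects[:half], suspects[half:]
        if is_blocker(small):
            good.extend(large)
            suspects = small
        else:
            good.extend(small)
            suspects = large
    return good, suspects
```

With n users this corners the bad one in about log2(n) epochs, and
everyone declared good keeps a bridge that was never exposed to the
blocker.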

Second, bridge distribution strategies where you can vary the hardness
of the challenge could be useful. The harder the challenge, the less
likely an attacker is to go for it. Thus the strategy adapts well to
unknown and/or changing levels of effort on the part of the adversary. For
example, say you have an online game with 50 levels, each harder than the
last, and you give out a few bridge addresses for successfully passing a
level. At the beginning when the attacker is ignoring it, passing just
the first level is enough to get a working bridge. But as the attacker
starts putting more effort in, then only the users who care a great deal
(the ones who play many levels) will be the ones with usable bridges.

All of that said, another constraint to remember is an economic one:
we should avoid challenges where experience with the challenge favors
the attacker. For example, if you have to solve a minesweeper game
to get bridges, and you get different bridges depending on how large
the board is, then the attacker will get really good at minesweeper,
whereas the users won't. So the "hard" levels will end up harder for the
users than for the attacker, and the effort gap we were counting on will
shrink rather than grow.


More information about the tor-dev mailing list