[or-cvs] r8842: two easy discovery approaches, plus a discussion of publicit (tor/trunk/doc/design-paper)

arma at seul.org
Sat Oct 28 06:14:19 UTC 2006

Author: arma
Date: 2006-10-28 02:14:18 -0400 (Sat, 28 Oct 2006)
New Revision: 8842

two easy discovery approaches, plus a discussion of publicity,
and general cleanups.

Modified: tor/trunk/doc/design-paper/blocking.tex
--- tor/trunk/doc/design-paper/blocking.tex	2006-10-27 19:35:12 UTC (rev 8841)
+++ tor/trunk/doc/design-paper/blocking.tex	2006-10-28 06:14:18 UTC (rev 8842)
@@ -305,7 +305,7 @@
 on a set of single-hop proxies. In these systems, each user connects to
 a single proxy, which then relays the user's traffic. These public proxy
 systems are typically characterized by two features: they control and
-operator the proxies centrally, and many different users get assigned
+operate the proxies centrally, and many different users get assigned
 to each proxy.
 In terms of the relay component, single proxies provide weak security
@@ -343,7 +343,8 @@
 users with certain characteristics, such as paying customers or people
 from certain IP address ranges.
-Discovery despite a government-level firewall is a complex and unsolved
+Discovery in the face of a government-level firewall is a complex and
+unsolved
 topic, and we're stuck in this same arms race ourselves; we explore it
 in more detail in Section~\ref{sec:discovery}. But first we examine the
 other end of the spectrum --- getting volunteers to run the proxies,
@@ -413,7 +414,8 @@
-Stefan's WPES paper is probably the closest related work, and is
+Stefan's WPES paper~\cite{koepsell:wpes2004} is probably the closest
+related work, and is
 the starting point for the design in this paper.
@@ -446,7 +448,7 @@
 more subtle variant on this theory is that we've positioned Tor in the
 public eye as a tool for retaining civil liberties in more free countries,
 so perhaps blocking authorities don't view it as a threat. (We revisit
-this idea when we consider whether and how to publicize a a Tor variant
+this idea when we consider whether and how to publicize a Tor variant
 that improves blocking-resistance --- see Section~\ref{subsec:publicity}
 for more discussion.)
@@ -501,7 +503,7 @@
 %to an alternate directory authority, and for controller commands
 %that will do this cleanly.
-\subsection{The bridge directory authority (BDA)}
+\subsection{The bridge directory authority}
 How do the bridge relays advertise their existence to the world? We
 introduce a second new component of the design: a specialized directory
@@ -559,6 +561,7 @@
 \subsection{Putting them together}
 If a blocked user knows the identity keys of a set of bridge relays, and
 he has correct address information for at least one of them, he can use
@@ -613,7 +616,7 @@
 Therefore a better way to summarize a bridge's address is by its IP
 address and ORPort, so all communications between the client and the
-bridge will the ordinary TLS. But there are other details that need
+bridge will use ordinary TLS. But there are other details that need
 more investigation.
 What port should bridges pick for their ORPort? We currently recommend
@@ -621,13 +624,14 @@
 be most useful, because clients behind standard firewalls will have
 the best chance to reach them. Is this the best choice in all cases,
 or should we encourage some fraction of them to pick random ports, or other
-ports commonly permitted on firewalls like 53 (DNS) or 110 (POP)? We need
+ports commonly permitted through firewalls like 53 (DNS) or 110
+(POP)? We need
 more research on our potential users, and their current and anticipated
 firewall restrictions.
 Furthermore, we need to look at the specifics of Tor's TLS handshake.
 Right now Tor uses some predictable strings in its TLS handshakes. For
-example, it sets the X.509 organizationName field to "Tor", and it puts
+example, it sets the X.509 organizationName field to ``Tor'', and it puts
 the Tor server's nickname in the certificate's commonName field. We
 should tweak the handshake protocol so it doesn't rely on any details
 in the certificate headers, yet it remains secure. Should we replace
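The fingerprinting risk described in this hunk can be made concrete with a minimal hypothetical sketch (ours, not Tor's code): a censor flagging TLS connections purely from the predictable certificate subject fields the text mentions. The subject tuples mimic the format Python's `ssl.getpeercert()` returns; the helper name and sample values are invented for illustration.

```python
# Hypothetical illustration (not Tor code): flagging a handshake from the
# predictable certificate fields described above. The nested-tuple subject
# format mirrors Python's ssl.getpeercert()['subject']; names are invented.

def looks_like_tor_cert(subject):
    """Return True if the certificate subject matches the predictable
    pattern from the text: X.509 organizationName set to 'Tor'."""
    fields = dict(pair for rdn in subject for pair in rdn)
    return fields.get('organizationName') == 'Tor'

# A certificate with organizationName "Tor" is trivially identifiable:
tor_like = ((('organizationName', 'Tor'),), (('commonName', 'moria1'),))
ordinary = ((('organizationName', 'Example Inc'),),)
print(looks_like_tor_cert(tor_like))   # True
print(looks_like_tor_cert(ordinary))   # False
```

This is why the paragraph argues the handshake should not rely on any details in the certificate headers: any constant, string-matchable field is a one-line filter rule.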
@@ -678,8 +682,9 @@
 What about anonymity-breaking attacks from observing traffic, if the
 blocked user doesn't start out knowing the identity key of his intended
 bridge? The vulnerabilities aren't so bad in this case either ---
-the adversary could do the same attacks just by monitoring the network
+the adversary could do similar attacks just by monitoring the network
+% cue paper by steven and george
 Once the Tor client has fetched the bridge's server descriptor, it should
 remember the identity key fingerprint for that bridge relay. Thus if
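The key-remembering behavior this hunk describes amounts to trust-on-first-use pinning. The following fragment is an illustrative sketch under that reading, not Tor's actual implementation; the function and variable names are our own.

```python
# Illustrative trust-on-first-use (TOFU) sketch of the pinning described
# above; names are invented, not taken from the Tor codebase.

def check_bridge_identity(pins, bridge_addr, fingerprint):
    """On first contact, remember the bridge's identity key fingerprint;
    afterwards, reject a bridge at that address presenting a new key."""
    if bridge_addr not in pins:
        pins[bridge_addr] = fingerprint   # first descriptor fetch: pin the key
        return True
    return pins[bridge_addr] == fingerprint

pins = {}
print(check_bridge_identity(pins, '203.0.113.7:443', 'AB12'))  # True (pinned)
print(check_bridge_identity(pins, '203.0.113.7:443', 'AB12'))  # True (matches pin)
print(check_bridge_identity(pins, '203.0.113.7:443', 'FF00'))  # False (key changed)
```

A mismatch on a later connection is exactly the signal that an adversary may have seized or spoofed the bridge's address.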
@@ -703,16 +708,62 @@
 in the same arms race as all the other designs we described in
-3 options:
+In this section we describe four approaches to adding discovery
+components for our design, in order of increasing complexity. Note that
+we can deploy all four schemes at once --- bridges and blocked users can
+use the discovery approach that is most appropriate for their situation.
-- independent proxies. just tell your friends.
+\subsection{Independent bridges, no central discovery}
-- public proxies. given out like circumventors. or all sorts of other rate limiting ways.
+The first design is simply to have no centralized discovery component at
+all. Volunteers run bridges, and we assume they have some blocked users
+in mind and communicate their address information to them out-of-band
+(for example, through gmail). This design allows for small personal
+bridges that have only one or a handful of users in mind, but it can
+also support an entire community of users. For example, Citizen Lab's
+upcoming Psiphon single-hop proxy tool~\cite{psiphon} plans to use this
+\emph{social network} approach as its discovery component.
+There are some variations on this design. In the above example, the
+operator of the bridge seeks out and informs each new user about his
+bridge's address information and/or keys. Another approach involves
+blocked users introducing new blocked users to the bridges they know.
+That is, somebody in the blocked area can pass along a bridge's address to
+somebody else they trust. This scheme brings in appealing but complex game
+theory properties: the blocked user making the decision has an incentive
+only to delegate to trustworthy people, since an adversary who learns
+the bridge's address and filters it makes it unavailable for both of them.
+\subsection{Families of bridges}
+Because the blocked users are running our software too, we have many
+opportunities to improve usability or robustness. Our second design builds
+on the first by encouraging volunteers to run several bridges at once
+(or coordinate with other bridge volunteers), such that some fraction
+of the bridges are likely to be available at any given time.
+The blocked user's Tor client could periodically fetch an updated set of
+recommended bridges from any of the working bridges. Now the client can
+learn new additions to the bridge pool, and can expire abandoned bridges
+or bridges that the adversary has blocked, without the user ever needing
+to care. To simplify maintenance of the community's bridge pool, rather
+than mirroring all of the information at each bridge, each community
+could instead run its own bridge directory authority (accessed via the
+available bridges),
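The client-side pool maintenance sketched in this hunk (learn new additions, expire blocked or abandoned bridges) reduces to simple set bookkeeping. This is our own minimal sketch, not code from the paper; bridges are represented as bare "IP:ORPort" strings for illustration.

```python
# Minimal sketch (ours, not from the paper) of the bridge-pool update the
# text describes: merge a freshly fetched recommendation list into the
# known pool, then drop bridges the client can no longer reach.

def update_bridge_pool(known, fetched, unreachable):
    """Add newly recommended bridges; expire ones that are abandoned or
    that the adversary has blocked, without the user needing to care."""
    pool = set(known) | set(fetched)
    return pool - set(unreachable)

known = {'10.0.0.1:443', '10.0.0.2:443'}
fetched = ['10.0.0.3:8443']        # new addition learned from a working bridge
unreachable = ['10.0.0.2:443']     # blocked by the firewall
print(sorted(update_bridge_pool(known, fetched, unreachable)))
# ['10.0.0.1:443', '10.0.0.3:8443']
```

The interesting design question is not the set arithmetic but where the fetched list comes from: any still-working bridge, or a per-community bridge directory authority as the paragraph suggests.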
+\subsection{Social networks with directory-side support}
+In the above designs, 
 - social network scheme, with accounts and stuff.
+- public proxies. given out like circumventors. or all sorts of other rate limiting ways.
 In the first subsection we describe how to find a first bridge.
 Thus they can reach the BDA. From here we either assume a social
@@ -797,12 +848,12 @@
 connectivity, perhaps based on not getting their bridge relays blocked,
 Probably the most critical lesson learned in past work on reputation
-systems in privacy-oriented environments~\cite{p2p-econ} is the need for
+systems in privacy-oriented environments~\cite{rep-anon} is the need for
 verifiable transactions. That is, the entity computing and advertising
 reputations for participants needs to actually learn in a convincing
 way that a given transaction was successful or unsuccessful.
-(Lesson from designing reputation systems~\cite{p2p-econ}: easy to
+(Lesson from designing reputation systems~\cite{rep-anon}: easy to
 reward good behavior, hard to punish bad behavior.
 \subsection{How to allocate bridge addresses to users}
@@ -915,9 +966,9 @@
 Should bridge users sometimes send bursts of long-range drop cells?
-\subsection{Anonymity effects from becoming a bridge relay}
+\subsection{Anonymity effects from acting as a bridge relay}
-Against some attacks, becoming a bridge relay can improve anonymity. The
+Against some attacks, relaying traffic for others can improve anonymity. The
 simplest example is an attacker who owns a small number of Tor servers. He
 will see a connection from the bridge, but he won't be able to know
 whether the connection originated there or was relayed from somebody else.
@@ -943,7 +994,7 @@
 being used as a bridge but not whether it is adding traffic of its own.
 It is an open research question whether the benefits outweigh the risks. A
-lot of the decision rests on which the attacks users are most worried
+lot of the decision rests on which attacks the users are most worried
 about. For most users, we don't think running a bridge relay will be
 that damaging.
@@ -955,7 +1006,8 @@
 For Internet cafe Windows computers that let you attach your own USB key,
 a USB-based Tor image would be smart. There's Torpark, and hopefully
-there will be more options down the road. Worries about hardware or
+there will be more thoroughly analyzed options down the road. Worries
+about hardware or
 software keyloggers and other spyware --- and physical surveillance.
 If the system lets you boot from a CD or from a USB key, you can gain
@@ -1088,7 +1140,7 @@
 Bridge relays could always open their socks proxy. This is bad though,
-because they learn the bridge users' destinations, and secondly because
+because bridges learn the bridge users' destinations, and secondly because
 we've learned that open socks proxies tend to attract abusive users who
 have no idea they're using Tor.
@@ -1098,13 +1150,26 @@
 approach is probably a good way to help bootstrap the Psiphon network,
 if one of its barriers to deployment is a lack of volunteers willing
 to exit directly to websites. But it clearly drops some of the nice
-anonymity features Tor provides.
+anonymity and security features Tor provides.
 \subsection{Publicity attracts attention}
-both good and bad.
+Many people working in this field want to publicize the existence
+and extent of censorship concurrently with the deployment of their
+circumvention software. The easy reason for this two-pronged push is
+to attract volunteers for running proxies in their systems; but in many
+cases their main goal is not to build the software, but rather to educate
+the world about the censorship. The media also tries to do its part by
+broadcasting the existence of each new circumvention system.
+But at the same time, this publicity attracts the attention of the
+censors. We can slow down the arms race by not attracting as much
+attention, and just spreading by word of mouth. If our goal is to
+establish a solid social network of bridges and bridge users before
+the adversary gets involved, does this attention tradeoff work to our
+advantage?
 \subsection{The Tor website: how to get the software}
@@ -1126,6 +1191,8 @@
 Hidden services as bridges. Hidden services as bridge directory authorities.
 \bibliographystyle{plain} \bibliography{tor-design}
@@ -1164,7 +1231,7 @@
 rate limiting mechanisms:
 energy spent. captchas. relaying traffic for others?
-send us $10, we'll give you an account
+send us \$10, we'll give you an account
 so how do we reward people for being good?
