[tor-bugs] #18361 [Tor Browser]: Issues with corporate censorship and mass surveillance

Tor Bug Tracker & Wiki blackhole at torproject.org
Tue Feb 23 21:22:27 UTC 2016


#18361: Issues with corporate censorship and mass surveillance
------------------------------------------+--------------------------
 Reporter:  ioerror                       |          Owner:  tbb-team
     Type:  enhancement                   |         Status:  new
 Priority:  High                          |      Milestone:
Component:  Tor Browser                   |        Version:
 Severity:  Critical                      |     Resolution:
 Keywords:  security, privacy, anonymity  |  Actual Points:
Parent ID:                                |         Points:
  Sponsor:                                |
------------------------------------------+--------------------------

Comment (by ford):

 I think it's great that CloudFlare is participating in this discussion
 and working to address the most immediate pain points, especially given
 the amount of vitriol being thrown their way.

 But the larger issue is not remotely specific to CloudFlare.  Remember way
 back when Wikipedia allowed anonymous edits without logins, even by Tor
 users?  Or even farther back, when USENET was the thing but then died a
 heat death from uncontrollable spam and abuse, forcing everyone to scurry
 away to private mailing lists and walled-garden discussion websites?  Many
 websites and other online services would like to support privacy and
 anonymity, but most can't afford to spend all their time and financial
 resources dealing with anonymous abuse.

 In the longer term I think a deployable anonymous credential system of
 some kind is essential.  Blacklistable long-term credentials are
 definitely worth exploring further, but they incur a lot of complexity,
 and I don't think anyone knows yet how to make them highly usable.  The
 approach phw mentions sounds promising and I'd like to hear more about
 it.

 Giving users a bucket of anonymous tokens for solving a CAPTCHA may be a
 reasonable and arguably tractable starting-point (or stopgap) measure.
 But anonymous credentials could be produced in many other ways and rest
 on many other useful "foundations of trust", and I definitely sympathize
 with Tor users who feel like they're being treated as CAPTCHA-solving
 machines.  There needs to be a clear roadmap from CAPTCHA-based
 credentials to something-else-based credentials.
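
 To make the token idea concrete, here is a minimal sketch of the kind
 of blind-signature scheme such a bucket could be built on (in the
 spirit of Chaumian blind signatures).  The parameters are toy values,
 and the protocol framing (who sends what, and how double-spending is
 tracked) is my assumption, not a worked-out design:

 {{{
#!python
import hashlib
import secrets

# Toy RSA parameters (Mersenne primes; far too small to be secure).
p = 2**127 - 1
q = 2**89 - 1
n = p * q
e = 65537
d = pow(e, -1, (p - 1) * (q - 1))      # needs Python 3.8+

def h(token):
    # Hash a token into Z_n (proper full-domain hashing elided).
    return int.from_bytes(hashlib.sha256(token).digest(), "big") % n

# Client: mint a fresh random token and blind it before sending.
token = secrets.token_bytes(32)
r = secrets.randbelow(n - 2) + 2       # blinding factor, gcd(r, n) == 1
blinded = (h(token) * pow(r, e, n)) % n

# Server: the user solved a CAPTCHA, so sign blindly; the server sees
# only the blinded value, never the token itself.
blind_sig = pow(blinded, d, n)         # (m * r^e)^d = m^d * r  (mod n)

# Client: unblind, leaving an ordinary RSA signature on the token.
sig = (blind_sig * pow(r, -1, n)) % n  # m^d  (mod n)

# Redemption, later and possibly elsewhere: the (token, sig) pair is
# unlinkable to the CAPTCHA-solving session that produced it.
spent = set()                          # server-side double-spend ledger
assert pow(sig, e, n) == h(token) and token not in spent
spent.add(token)
 }}}

 The property that matters is that the signer learns nothing connecting
 the blinded value it signed to the token later redeemed, so one CAPTCHA
 could mint a whole bucket of tokens to be spent one per site visit.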

 Some other particular possibilities:

 - Anonymous credentials attesting that, e.g., "I am a user with a
 Twitter account who has been around at least one year and has at least
 100 followers".  In other words, build on the investment that the big
 social media companies make all the time to detect and shut down
 abusive or automated accounts.  This basis is obviously not remotely
 perfect, but pragmatically, social media identities that have "survived
 a while" and have friends/followers are much more expensive on the
 black market than fresh Sybil accounts created by paid CAPTCHA-solvers.
 Thus, users who can produce better anonymous evidence that they're
 "real" might get a bigger pile or faster rate of anonymous tokens than
 they do just by solving a CAPTCHA (see the token-allowance sketch after
 this list).  My group started exploring this approach in our
 Crypto-Book project
 (http://dedis.cs.yale.edu/dissent/papers/cryptobook-abs), but there are
 certainly gaps to be filled to make it practical.

 - Anonymous credentials that can attest with even higher certainty that
 they represent "one and only one real person", e.g., credentials derived
 from pseudonyms distributed at physical pseudonym parties (see
 http://bford.info/pub/net/sybil.pdf and
 http://bford.github.io/2015/10/07/names.html).  No one would be
 "required" to participate in such a system, but those who do might be
 able to get an even bigger pile or faster flow of tokens on the basis of
 demonstrating with higher certainty that they're one and only one real
 person.  Further, this seems like ultimately the only kind of basis that
 might provide a legitimate "democratic foundation": e.g., a basis that
 would allow Tor to hold online polls or votes and be reasonably certain
 that each real human got one and only one vote.

 - Anonymous credentials based on reputation scores that users exhibiting
 "good/civil behavior" can build up over time.  Basically, use a "carrot"
 approach rather than the "stick" approach that blacklistable credentials
 tend to represent.  We're also starting to explore ideas in this space;
 see our upcoming NSDI paper on AnonRep
 (http://dedis.cs.yale.edu/dissent/papers/anonrep-abs), and the toy
 reputation sketch after this list.
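
 To illustrate how these different trust foundations might feed one
 issuance policy, here is a toy allowance function.  Every credential
 name and threshold below is an invented placeholder, and the
 attestations are assumed to arrive as already-verified anonymous
 credentials:

 {{{
#!python
def token_allowance(attestations):
    """Toy policy: stronger (costlier-to-fake) evidence that a user is
    one real, distinct person earns a bigger bucket of anonymous
    tokens.  `attestations` is the set of credential types the user
    has proven anonymously; all values are illustrative only."""
    allowance = 0
    if "captcha" in attestations:             # baseline: solved a CAPTCHA
        allowance += 10
    if "aged_social_account" in attestations: # e.g. 1+ year, 100+ followers
        allowance += 50
    if "pseudonym_party" in attestations:     # "one and only one person"
        allowance += 200
    return allowance

print(token_allowance({"captcha"}))                     # -> 10
print(token_allowance({"captcha", "pseudonym_party"}))  # -> 210
 }}}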
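
 And for the "carrot" side, a toy reputation tally.  The hard part of a
 system like AnonRep is maintaining such a score unlinkably, which this
 sketch entirely elides; it shows only the score mechanics, with
 made-up weights:

 {{{
#!python
class Reputation:
    """Toy carrot-style score: civil behavior raises it, abuse reports
    lower it, and it decays so stale history fades.  A real system
    would keep the tally cryptographically unlinkable to the user."""

    def __init__(self):
        self.score = 0.0

    def feedback(self, good):
        # Assumption: abuse costs more than civility earns.
        self.score += 1.0 if good else -5.0

    def decay(self):
        self.score *= 0.99                 # per-epoch fade-out

    def token_multiplier(self):
        # Well-behaved long-timers earn tokens faster, never below 1x.
        return max(1.0, 1.0 + self.score / 100.0)
 }}}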

 At any rate, the problem is definitely not at all simple; we need to start
 with baby steps (e.g., the CF+Google looping bug, then maybe a simple
 CAPTCHA-based credential scheme).  But in the longer term we need an
 architecture flexible enough to deal with abuse while letting well-
 behaved users demonstrate their good standing in multiple different
 ways, based on multiple different trust foundations.

 P.S. To underscore the problem, I had to rewrite parts of this post
 twice already because trac.torproject.org decided it looked like spam
 and rejected it.  Pot, meet kettle.

--
Ticket URL: <https://trac.torproject.org/projects/tor/ticket/18361#comment:104>
Tor Bug Tracker & Wiki <https://trac.torproject.org/>
The Tor Project: anonymity online

