[tbb-bugs] #18361 [Tor Browser]: Issues with corporate censorship and mass surveillance

Tor Bug Tracker & Wiki blackhole at torproject.org
Mon Feb 22 13:59:44 UTC 2016

#18361: Issues with corporate censorship and mass surveillance
 Reporter:  ioerror                       |          Owner:  tbb-team
     Type:  enhancement                   |         Status:  new
 Priority:  High                          |      Milestone:
Component:  Tor Browser                   |        Version:
 Severity:  Critical                      |     Resolution:
 Keywords:  security, privacy, anonymity  |  Actual Points:
Parent ID:                                |         Points:
  Sponsor:                                |

Comment (by ioerror):

 Replying to [comment:37 jgrahamc]:
 > Replying to [comment:35 ioerror]:
 > > This is useful though it is unclear - is this what CF uses on the
 backend? Is this data the reason that Google's captchas are so hard to
 solve?
 > It's a data source that we use for IP reputation. I was using it as
 illustrative as well because it's a third party. I don't know if there's
 any connection between Project Honeypot and Google's CAPTCHAs.

 How do we vet this information or these so-called "threat scores" other
 than trusting what someone says?
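 For context on what is being vetted: jgrahamc names Project Honeypot as
 the IP reputation source, and Project Honeypot publishes its http:BL data
 over DNS. A minimal sketch of how such a lookup and its "threat score"
 decode (the query/response format follows Project Honeypot's published
 http:BL interface; the helper names and access key are hypothetical):

```python
def httpbl_query_name(access_key, visitor_ip):
    # http:BL is queried over DNS as:
    #   <access key>.<reversed IP octets>.dnsbl.httpbl.org
    reversed_ip = ".".join(reversed(visitor_ip.split(".")))
    return f"{access_key}.{reversed_ip}.dnsbl.httpbl.org"

def parse_httpbl_response(response_ip):
    # A listed IP resolves to 127.<days>.<threat score>.<type bitmask>.
    # The threat score is a single 0-255 value; type 0 means
    # "search engine" (i.e. explicitly whitelisted crawlers).
    first, days, score, kind = (int(o) for o in response_ip.split("."))
    if first != 127:
        raise ValueError("not an http:BL listing")
    types = [name for bit, name in
             ((1, "suspicious"), (2, "harvester"), (4, "comment spammer"))
             if kind & bit] or ["search engine"]
    return {"days_since_activity": days, "threat_score": score,
            "types": types}
```

 Note how much weight one opaque octet carries: the entire "reputation" of
 an exit IP shared by thousands of Tor users is a single 0-255 score.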

 > > Offering a read only version of these websites that prompts for a
 captcha on POST would be a very basic and simple way to reduce the flood
 of upset users. Ensuring that a captcha is solved and not stuck in a 14 or
 15 solution loop is another issue - that may be a bug unsolvable by CF but
 rather needs to be addressed by Google. Another option, as I mentioned
 above, might be to stop a user before ever reaching a website that is
 going to ask them to run javascript and connect them between two very
 large end points (CF and Google).
 > I'm not convinced about the R/O solution. Seems to me that Tor users
 would likely be more upset the moment they got stale information or
 couldn't POST to a forum or similar. I'd much rather solve the abuse
 problem and make this go away completely.

 Are you convinced that it is strictly worse than the current situation?
 I'm convinced that it is strictly better to only toss up a captcha that
 loads a Google resource when a user is about to interact with the website
 in a major way.
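 The gating rule being argued for here - serve read-only traffic freely
 and only challenge state-changing requests - can be sketched as a simple
 predicate (a hypothetical helper, not CF's actual logic):

```python
# HTTP methods that cannot modify server state (per RFC 7231 semantics).
SAFE_METHODS = {"GET", "HEAD", "OPTIONS"}

def needs_captcha(method, has_valid_captcha_token):
    """Challenge only unverified visitors making state-changing requests.

    Read-only requests (browsing, searching) always pass through, so
    most Tor users never see a challenge; a CAPTCHA is demanded only
    on POST/PUT/DELETE from a visitor without a prior solved challenge.
    """
    if method.upper() in SAFE_METHODS:
        return False
    return not has_valid_captcha_token
```

 The trade-off jgrahamc raises (stale reads, surprise at POST time) is
 real, but the rule above still strictly shrinks the set of page loads
 that pull in a third-party challenge.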

 I do not believe that you can solve abuse on the internet any more than a
 country can "solve" healthcare or the hacker community can "solve"
 surveillance. Abuse is relative, and it is part of having free speech on
 the internet. There is no doubt that a problem exists - but the solution
 is not to collectively punish millions of people (and their bots, who are
 people too, man :-) ) based on ~1600 IP address "threat" scores.

 > Also, the CAPTCHA-loop thing is an issue that needs to be addressed by
 us and Google.

 Does that mean that Google, in addition to CF, has data on everyone
 hitting those captchas?

 > I still think the blinded tokens thing is going to be interesting to
 investigate because it would help anonymously prove that the User-Agent
 was controlled by a human and could be sent, eliminating the need for any
 CAPTCHA.

 I'm not at all convinced that this can be done in the short term, and it
 seems to assume that users only use graphical browsers. Attackers will be
 able to extract tokens, and will run farms of people solving challenges
 whenever they need new ones - so regular users will end up paying the
 highest price.
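 For readers unfamiliar with the proposal: the blinded-token idea is
 essentially a Chaum-style blind signature - the signer vouches for a
 token without ever seeing it, so a spent token cannot be linked back to
 the session in which it was issued. A toy RSA blind-signature round trip
 (deliberately tiny, insecure demo parameters; illustration only, not any
 party's actual protocol):

```python
import secrets
from math import gcd

# Toy RSA key: real deployments use >= 2048-bit moduli.
p, q = 999983, 1000003
n = p * q
e = 65537
d = pow(e, -1, (p - 1) * (q - 1))

def blind(token, n, e):
    # Client picks a random blinding factor r and hides the token
    # before sending it to the signer.
    while True:
        r = secrets.randbelow(n - 2) + 2
        if gcd(r, n) == 1:
            break
    return (token * pow(r, e, n)) % n, r

def sign(blinded, n, d):
    # Signer (e.g. after one solved challenge) signs without ever
    # learning the underlying token: (t * r^e)^d = t^d * r (mod n).
    return pow(blinded, d, n)

def unblind(blind_sig, r, n):
    # Client divides out r, leaving a valid signature t^d on the token.
    return (blind_sig * pow(r, -1, n)) % n

def verify(token, sig, n, e):
    # Anyone holding the public key can check the signature later,
    # with no way to tie it to the original signing request.
    return pow(sig, e, n) == token % n
```

 The unlinkability is the point; the farming attack described above is
 untouched by it, since a human only needs to be present at issuance.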

 > > Does Google any end user connections for those captcha requests?
 > Can you rewrite that? Couldn't parse it.

 When a user is given a CF captcha - does Google see any request from them
 directly? Do they see the Tor Exit IP hitting them? Is it just CF or is it
 also Google? Do both companies get to run javascript in this user's
 browser?

Ticket URL: <https://trac.torproject.org/projects/tor/ticket/18361#comment:41>