(Aside: I think this thread is unrelated enough to tor-dev at this point that I'm going to make this my last reply.)
On Tue, 2014-07-22 at 14:42 +0200, Mike Hearn wrote:
Regardless of the moral arguments you put forward, which I will not comment on, it seems like this idea would never be implemented because none of the Tor developers have a desire to implement such a dangerous feature.
I can argue that the lack of it is also dangerous, actually. It amounts to a form of "pick your poison".
Consider exit policies. Would Tor be better off if all relays were also required to exit all traffic? I think it's obvious the answer is no: there are currently ~5800 relays but only ~1000 exits according to the lists from torstatus.blutmagie.de, so most Tor relay operators choose not to exit. If they didn't have that choice, there'd almost certainly be far fewer relays. Allowing relays to contribute as much as they feel comfortable with (or that their ISP feels comfortable with) helps the project a lot.
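To make the exit-policy point concrete: this choice is a couple of lines in a relay operator's torrc. ExitRelay, ExitPolicy, and ORPort are real Tor options; the specific ports below are only an illustration.

```
# torrc for a non-exit relay (the common case): relay traffic, exit nothing.
ORPort 9001
ExitRelay 0
ExitPolicy reject *:*

# Or, for an operator comfortable exiting only web traffic:
# ExitPolicy accept *:80
# ExitPolicy accept *:443
# ExitPolicy reject *:*
```

ExitPolicy lines are evaluated in order, so the final reject line is what keeps everything else closed.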
Well, Tor would be more anonymous if there were no exit policies, so yes, Tor would be better without exit policies. People closer than I to the Tor Project have said as much elsewhere in this thread.
Tor is not a large network. It's a niche product that routinely sacrifices usability for better anonymity, and as a result is politically vulnerable. I don't want Tor to be vulnerable; I think it's a useful piece of infrastructure that will be critical for improving the surveillance situation. Regardless, "anonymity loves company" and Tor has little. By demanding everyone who takes part support all uses of Tor simultaneously, including the obviously bad ones, you ensure some people will decide not to do so, reducing the company you have and thus making it easier for politicians/regulators/others to target the network.
This is not a security argument, it is a political argument. I notice that you don't ever, in fact, address the fact that your suggestion can be used to partition the network for clients.
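To spell out the partitioning attack with a toy example (the relay names and policy lists here are entirely made up for illustration): if different clients are fed different policy lists, each list carves users into a smaller, distinguishable anonymity set.

```python
# Toy model: a "policy list" tells a client which relays/services to avoid.
all_relays = {"A", "B", "C", "D", "E"}

def usable(relays, policy_blocklist):
    """Relays this client is willing to build circuits through."""
    return relays - policy_blocklist

alice = usable(all_relays, {"C"})       # Alice received one policy list
bob = usable(all_relays, {"C", "D"})    # Bob received a tampered one

# An observer who sees a circuit through relay D can rule Bob out,
# so feeding targets distinctive policies de-anonymizes them over time.
assert "D" in alice and "D" not in bob
```

The more lists circulate, the finer the partition, which is why per-client policy differences are a real attack surface rather than a cosmetic concern.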
There are other political responses to this argument. The most common is to point out that all taxpayers support all uses of roads simultaneously, including the obviously bad ones. There are existing legal mechanisms for dealing with abuse that don't require letting people withhold tax dollars from roads they consider "bad." If withholding were allowed in the United States, for example, one could imagine a host of negative social consequences, like roads to mosques or to primarily Black/Hispanic/Jewish communities being ill-funded.
The above argument is general: it would also apply to giving end users different tradeoffs in the TBB. Consider, for example, a mode designed for pseudonymity rather than anonymity that doesn't clear cookies at the end of the session. That would be more convenient for users who don't mind if the services they use can correlate data across their chosen username; they just want a hidden IP address. The same logic applies: the more people use Tor, the safer it is.
Tor's refusal to sacrifice security is a fairly mundane example of consequentialist thinking. The consequences of a user having to log in to Gmail twice after closing TBB are pretty minimal. The consequences of a user accidentally downloading TBB Lite and getting shot are pretty severe.
Your proposal has a similar trade-off. You have to argue that the social benefit to Tor outweighs the potential for the attack that it enables. You've yet to clearly do this; so far you've just restated your point that there are bad things on Tor and that it would be good to fight them by any means necessary.
It may appear that because Tor has been around for some years and has not encountered any real political resistance, it will always be like this. Unfortunately I don't think that's a safe assumption, at least not any more. Strong end-to-end crypto apps that actually achieve viral growth and large social impact are vanishingly rare. Skype was one example until they were forced to undo it by introducing back doors. The Silk Road was another. The combination of Bitcoin and Tor is very powerful. We see this not only with black markets but also with Cryptolocker, which appears to be the "perfect crime" (there are no obvious fixes). So times have changed, and the risk of Tor coming to the attention of TPTB is much higher now.
I don't see the logic here. Tor already faces extreme political repression, and its situation is strikingly different from both Microsoft's handling of Skype (which was rearchitected because supernodes made for bad UX and didn't scale) and The Silk Road (which was illegal from the start, and was rapidly replaced with several other marketplaces).
The best fixes for this are:
1. Allow people to explicitly take action against abuse of their own nodes, so they have a plausible answer when being visited individually.
2. Grow usage and size as much/as fast as possible, to maximise democratic immunity. Uber is a case study of this strategy right now.
The absence of (1) means it'll be much more tempting for governments to decide that all Tor users should be treated as a group.
This is already happening. We live in that world now. We can't go back.
Right now, by the way, the plausible answer is "it's impossible for me to filter out certain kinds of communication." In spite of that, Tor is legal in all of the world that cares about such legal handwaving. In the parts of the world where Tor is truly dangerous, no amount of "oh okay, I'll block that hidden service" will save you.
It seems better to evangelize Tor and bring about #2 than to torpedo Tor's primary use case by introducing a censorship mechanism.
Further, why do you think such infrastructure would be remotely successful in stopping botnets from using the Tor network? A botnet could just generate a thousand hidden service keys and cycle through them.
That's a technique that's been used with regular DNS, and it's been beaten before (DGA). The bot gets reverse engineered to find the iteration function, and the domain names/keys eventually get sinkholed. There are countermeasures and counter-countermeasures, as always.
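For readers unfamiliar with DGAs, here is a minimal Python sketch of the idea; the hashing scheme and constants are invented for illustration, not taken from any real botnet, and real onion addresses are base32-encoded key hashes rather than raw hex:

```python
import hashlib

def dga_domains(date_str, count=8, tld=".onion"):
    """Illustrative domain generation: derive a deterministic list of
    names from the current date. Because the function is deterministic,
    a reverse engineer can recover it and precompute the same list."""
    domains = []
    for i in range(count):
        digest = hashlib.sha256((date_str + str(i)).encode()).hexdigest()
        domains.append(digest[:16] + tld)
    return domains

# Bot and defender compute the same names, so they can be sinkholed
# or blocklisted in advance.
todays_names = dga_domains("2014-07-22")
```

This determinism is exactly why DGAs get beaten: once the iteration function is extracted from a sample, the whole future name sequence is known.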
Is it really productive to damage Tor's primary value proposition (strong anonymity) in order to take one more step in an arms race?
But yes, some types of abusers are harder to deal with than others, that's for sure. If it helps, s/botnet/ransomware/. The same arguments apply. I don't want to dwell on just botnet controllers.
Ransomware existed before Tor, and it would continue to exist with such a policy mechanism in place. Any botnet or ransomware operator could just have bots host "safe" introduction points. They'd be less anonymous, but I bet they wouldn't care.
With respect to your specific counter-arguments:
So, this would be:
* Socially damaging, because it would fly in the face of Tor's anti-censorship messaging
That seems like a risky argument to me - it's too easy for someone to flip it around by pointing out all the extremely nasty and socially damaging services that Tor currently protects. If you're going to talk about social damage you need answers for why HS policies would be more damaging than those things.
See above re: consequentialism, roads, etc. This is not a new concept.
Also, the Tor home page doesn't prominently mention anti-censorship anywhere; it talks about preserving privacy. If you wanted to build a system that's primarily about resisting censorship of data, it would look more like Freenet than hidden services (which can in a sense be censored via DoS attacks and the like).
Tor is used and promoted as an anti-censorship tool. That is what the bridge feature is primarily for: evading censorship. If you google "censorship circumvention", Tor is named in the Wikipedia page that is the first result, and the third result is Whonix.
* Technically damaging, because it would enable the worst class of attacks by allowing attackers to pick arbitrary introduction points
Who are the attackers in this case, and how do they force a selection of introduction points? Let's say Snowden sets up a blog as a hidden service. It appears in nobody's policies, because everyone agrees that this is a website worth hiding.
If the attacker is the NSA, what do they do next?
They inject it into people's policies after compromising a connection to any directory server. Possibly one distributed with a backdoored TBB (which is as secure as your initial connection to torproject.org).
Do you really think that if you set up a censorship system, it's not going to increase attack surface? Any crypto scheme you can devise will not stand up against a motivated attacker on a long enough timeline.
* Not even technically helpful against other content, because they can change addresses faster than volunteers maintaining lists of all the CP onionsites can do the detective work (which you assume people will want to do, and do rapidly enough that this will be useful)
I didn't assume that, actually, I assumed that being able to set policies over the use of their own bandwidth would encourage people to contribute more - seems a safe assumption. You don't need perfection to achieve that outcome.
You do need perfection for all of the social arguments you're making. You've put this forward as a way for the Tor Project to deflect the bad PR of "bad content" on hidden services, but in order for that to happen the technique needs to *actually work* at reducing the amount of bad traffic. People don't make decisions rationally and aren't going to go from opposing Tor because bad people use it to supporting Tor because *they don't personally help* the bad people use it. It's bad enough that Tor is associated with bad people. This is called the halo/horns effect in the bias literature; it is very well studied.
But regardless, changing an onion address is no different to changing a website address. It's not sufficient just to change it; your visitors have to know what the new address is. You're an intelligent guy, so I'm sure you see why this matters.
These sites *already* change their addresses all the time. User experience and retention isn't their biggest concern.