On Tuesday, June 21, 2016 09:52:43 AM Tristan wrote:
But the point of Tor is to promote open access to the Internet. Once Tor starts filtering traffic, it's no better than the government censorship so many people use Tor to get around. They'd go from one filter to another.
This is not as absolute a truth as you make it sound.
A filter running on the exit node that kicks in, say, when it detects five hundred HTTP connections from the same middle relay in under ten seconds (or whatever else would count as clearly obvious "undesirable behaviour") would not trigger for 99% of legitimate uses of Tor.
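Just to make that concrete, here is a rough sketch (Python, purely illustrative) of the kind of threshold check I mean: count new connections per previous hop in a sliding window and flag a hop once it exceeds a rate no ordinary client would hit. The exact numbers and the idea of keying on the middle relay come from the example above; nothing like this exists in Tor today.

```python
# Sliding-window threshold filter, as sketched in this mail. The threshold
# values and the choice to key on the middle relay are assumptions taken
# from the example above, not anything Tor actually implements.
import time
from collections import defaultdict, deque

WINDOW_SECONDS = 10     # "in under ten seconds"
MAX_CONNECTIONS = 500   # "five hundred HTTP connections"

# middle-relay fingerprint -> timestamps of recent connections
_recent = defaultdict(deque)

def record_connection(middle_relay_fp, now=None):
    """Record one new connection arriving via the given middle relay.

    Returns True once that hop has crossed the threshold, i.e. when the
    operator's policy would drop or throttle further traffic from it.
    """
    now = time.monotonic() if now is None else now
    window = _recent[middle_relay_fp]
    window.append(now)
    # Discard timestamps that have fallen out of the sliding window.
    while window and now - window[0] > WINDOW_SECONDS:
        window.popleft()
    return len(window) > MAX_CONNECTIONS
```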
It's probably true that we could only detect a small fraction of all malicious traffic with high confidence, but if we could deter *some* obviously malicious traffic with a fairly low false-positive rate, that's a long way from "government censorship", or from swapping one form of censorship for another. For example, it would never trigger on the poor Chinese or Iranian blogger writing about government brutality, which is one of the prime examples always cited when defending Tor.
Such filters wouldn't have to be mandatory; if you want to run an unrestricted exit node, more power to you. But it would give people donating resources to Tor an additional argument: "Yes, the Tor people do care about abuse of their service, and they're trying to give people ways to address it." Relay operators would have an additional option: run an unrestricted exit node, run just a relay, or run an exit node that filters *some* obviously malicious traffic. That might lead to more people being willing to donate exit bandwidth.
On the client side, the Tor process could flag circuits (locally) that use a filtering exit node, so that Tor Browser could notify the user when the exit node might block legitimate traffic.
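As a very rough illustration of the client-side part: if a client somehow knew which exits advertise a filtering policy (there is no such flag in the consensus today, so the list below is hypothetical), a small controller script could inspect its own circuits over the control port and warn about them.

```python
# Hypothetical sketch: warn when one of our circuits ends at an exit that
# is known (by some yet-to-exist mechanism) to filter traffic. Uses the
# stem library's real control-port API; FILTERED_EXITS is made up.
from stem.control import Controller

FILTERED_EXITS = {"0123456789ABCDEF0123456789ABCDEF01234567"}  # placeholder fingerprint

with Controller.from_port(port=9051) as controller:
    controller.authenticate()
    for circ in controller.get_circuits():
        if circ.status != "BUILT" or not circ.path:
            continue
        exit_fp, exit_nick = circ.path[-1]
        if exit_fp in FILTERED_EXITS:
            print("circuit %s: exit %s may block some traffic" % (circ.id, exit_nick))
```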
I understand your point, but I don't think we can accurately detect malicious traffic without compromising the security of other Tor users.
I'm not sure how this would compromise the security of other Tor users.