*Browser fingerprinting* fails because if I try to get bridges from my
desktop vs my laptop vs my cell vs at a library vs at an internet cafe, I
will not be mapped to the same user. Additionally, if I make a change to my
setup (switch from Firefox to Chrome, new monitor resolution, etc.) even on
the same machine I likely would not be mapped to the same profile (or if I
am, it seems that a censor would be able to generate enough profiles that
they end up in several "trusted" groups). I'd also argue that collecting
the browser fingerprints of some of the most vulnerable Tor users (those
that need bridges) is a risk. While I trust the Tor project to maintain
good data and security practices at present, am I confident that a new
vulnerability won't arise, or a new 0 day that could be used to acquire the
information Tor maintains? No - especially not with the value that a DB of
fingerprints of those subverting censorship would be to a censor.<<
Sure, you could get a few bridges, but it is not trivially easy for an adversary to get hundreds; that is why browser fingerprinting is effective.
Even if you switch from Firefox to Chrome, some aspects of the fingerprint remain the same. There are even fingerprinting test websites that can fingerprint your device cross-browser, e.g. UniqueMachine. I'm not exactly sure what you mean, Sam, by "trusted" groups; if the censor gets several bridges, it's not a big deal. As for what I said about grouping similar fingerprints together, I thought a simple way to do it would be to assume the earliest fingerprints are more legitimate, since the censor shows up later.
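To make that grouping idea concrete, here is a minimal sketch of what I have in mind (the names, the attribute-overlap similarity measure, and the 0.8 threshold are all my own hypothetical choices, not anything BridgeDB actually implements): fingerprints are bucketed by how many attributes they share, and each bucket remembers when it was first seen, so earlier buckets can be treated as more trustworthy.

```python
from dataclasses import dataclass, field

@dataclass
class FingerprintGroup:
    """Bucket of similar fingerprints; trust is anchored to first-seen time."""
    fingerprints: list = field(default_factory=list)
    first_seen: float = 0.0

def similarity(fp_a: dict, fp_b: dict) -> float:
    """Fraction of attribute keys on which two fingerprints agree."""
    keys = set(fp_a) | set(fp_b)
    if not keys:
        return 0.0
    return sum(fp_a.get(k) == fp_b.get(k) for k in keys) / len(keys)

def assign_group(fp: dict, seen_at: float, groups: list,
                 threshold: float = 0.8) -> FingerprintGroup:
    """Attach fp to the closest existing group, or open a new one."""
    best = max(groups, key=lambda g: similarity(fp, g.fingerprints[0]),
               default=None)
    if best is not None and similarity(fp, best.fingerprints[0]) >= threshold:
        best.fingerprints.append(fp)
        return best
    group = FingerprintGroup(fingerprints=[fp], first_seen=seen_at)
    groups.append(group)
    return group
```

A distributor could then rate-limit bridges per group and hand the better bridges to groups whose first_seen predates any suspected enumeration campaign.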
In terms of the anonymity risk: the way I see it, my suggestions should be a last resort for users. Obviously, using Tor through an insecure distribution method is much better than not being able to use Tor at all. If users can't get a working bridge, they might switch to something dangerous like Ultrasurf, or who knows what the alternative is.
*Social networks* fail because they are not usable by all users (whether by
choice or by censorship). This approach will also require maintenance.<<
Social networks may not scale well, but they more than compensate by being extremely blocking-resistant. I also think they require less maintenance.
I'll propose one more potential - PGP signing, which I think works better
than the above three, but is far from perfect.<<
I think you're missing the aim. We are not merely trying to identify our users, but trying to prevent an adversary from creating too many identities. Unfortunately, it seems that the more blocking-resistant you make a distributor, the more privacy-invasive it becomes. Social networks are an exception: the censor may create as many identities as it wants, but that won't increase its knowledge of addresses. In a sense, knowledge of a bridge is treated like an identity, and users must have knowledge of a bridge regardless of how IPs are distributed.
- Reach-ability probing from adversarial regions (aka identifying burnt
bridges)<<
It seems obvious, to me at least, that instead of probing from within "enemies of the internet," we merely watch which countries are connecting to the bridges. If we see a consistent decrease from a country, then we know something's up. This is much safer.
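A sketch of that passive check, assuming we have per-country daily connection counts (bridges already publish coarse per-country user counts in their extra-info descriptors; the function name and thresholds here are made up):

```python
def looks_blocked(daily_counts: list[int], window: int = 7,
                  drop_ratio: float = 0.5) -> bool:
    """Flag a possible block: the latest day's count from a country has
    fallen well below that country's trailing average."""
    if len(daily_counts) <= window:
        return False  # not enough history to judge
    baseline = sum(daily_counts[-window - 1:-1]) / window
    return baseline > 0 and daily_counts[-1] < baseline * drop_ratio
```

For example, a week of roughly 100 connections per day followed by a day of 11 would trip the flag, while ordinary day-to-day noise would not.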
- Ability for bridges to easily and quickly change IP address (refreshing
a bridge)<<
Don't forget we also need to disable UpdateBridgesFromAuthority, because otherwise it would be trivial for a censor to keep its blacklist up to date.
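For reference, a client-side torrc along these lines (the bridge address is a placeholder from the documentation IP range, not a real bridge):

```
# Use bridges, but never ask the bridge authority for updated bridge
# descriptors, so a censor running Tor can't refresh its blacklist that way.
UseBridges 1
UpdateBridgesFromAuthority 0
Bridge 203.0.113.5:443
```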
Cheers
anti-censorship-team@lists.torproject.org