Thus spake Georg Koppen (g.koppen@jondos.de):
That is definitely a good approach. But maybe there is research to be done here as well. Here is a rough (and in part research) idea I had in mind while asking you the question above: what if we first started looking at the different services offered on the web to see whether they can be used anonymously *at all* (or, a bit more precisely: whether they can be used in a way that there is either no linkability at all, or the linkability is not strong enough to endanger the user)? That would be worth some research, I guess.
What technical properties of the web make such services impossible to use?
The web is not the right object to reason about here. The more interesting question would be: "What technical properties of a service make it impossible to use anonymously?" That remains to be researched. In the end, maybe there aren't any (though I doubt that).
Sure, anonymity is by definition impossible for things that require a name. As long as that name can be an ephemeral pseudonym, I think we're good on the web.
But once you start getting into real personal and/or biometric (even just a photo) details, you obviously lose your anonymity set. Again, I think what we're talking about here is "Layer 8".
Most of the ones I can think of are problematic because of "Layer 8" issues like users divulging too much information to specific services.
That may hold for most services, yes.
The idea of getting more users by not being too strict here might be appealing, but it is not the right decision in the end. I think one has to realize that there are services on the web that are *designed* in a way that one EITHER may use them OR use anonymity services. Sure, the devil is in the details. For example, there are probably a lot of services that may be usable anonymously but then come with a certain lack of usability. What about them? Should we decide against usability again, or should we loosen our means of providing unlinkability here? But that does not mean there is no way to find a good solution.
At the end of the day, I don't believe we have to sacrifice much in terms of usability if we properly reason through the adversary capabilities.
I would be glad if that were the case, but I doubt it (having e.g. Facebook in mind).
Can you provide specific concerns about Facebook wrt the properties from the blog post?
In short (and still roughly): I would like to start from having all means available to surf the web anonymously, and then downgrade them piece by piece to reach a trade-off between anonymity and usability. Services that cannot be used anonymously at all would not trigger such a painful downgrade ("painful" because one usually tries first to hack around the existing problems, runs into unbelievable design issues and bugs, and finally has to concede that it is in the user's interest to exclude that feature (again)).
Downgrading privacy would be a UI nightmare that no one would understand how to use, but assuming we can solve that problem: if we can find a way to apply these downgrade options to specific urlbar domains, this might make sense. Otherwise you introduce too many global fingerprinting issues by providing different privacy options.
No, you got me wrong here. The downgrading happens while designing the anon mode, not while using it. There should be just one mode, in order to avoid fingerprinting issues. It is merely meant as a design principle for the developer: start with all we can get, and then downgrade our defenses until we reach a good balance between usability and anonymity features.
Ok. Let's try to do this. Where do we start from? Can we start from the design I proposed and make it more strict?
What do you have in mind in terms of stricter controls?
Hmmm... Dunno what you mean here.
What changes to the design might you propose?
The need for science comes in especially in the fingerprinting arena. Some fingerprinting opportunities may not actually be appealing to adversaries. Some may even appear appealing in theory, but in practice would be noticeable to the user, too noisy, and/or too error-prone. Hence I called for more Panopticlick-style studies, especially of JavaScript features, in the blog post.
Yes, that is definitely a good idea, though I tend to avoid all of them even if currently no adversary is using them (especially if no usability issue is at stake). First, no one knows whether one has missed an attacker already using this kind of attack vector; and second, getting rid of attack vectors is a good thing per se.
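For concreteness, here is a minimal sketch of the kind of data such a Panopticlick-style study might collect, in plain browser JavaScript. The feature list is illustrative only, not what Panopticlick actually gathers:

    // Illustrative only: a few JavaScript-visible properties whose
    // combination can make a browser distinguishable.
    function collectFingerprintFeatures() {
      return {
        userAgent: navigator.userAgent,
        language: navigator.language,
        platform: navigator.platform,
        screen: screen.width + "x" + screen.height + "x" + screen.colorDepth,
        timezoneOffset: new Date().getTimezoneOffset(),
        pluginCount: navigator.plugins ? navigator.plugins.length : 0,
        cookiesEnabled: navigator.cookieEnabled
      };
    }

    // A study would submit these to a server and estimate how many bits
    // of identifying information each feature contributes in practice.
    console.log(collectFingerprintFeatures());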
But going back to your original point, I contend you're not getting rid of any attack vectors by eliminating window.name and referer transmission. The adversary still has plenty of other ways to transmit the same information to 3rd-party elements.
Yes, that's true.
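As a hedged illustration of this (the tracker domain and identifier below are made up): even with the Referer header and window.name suppressed, the embedding page can simply hand the same data to a third party in a GET parameter:

    // Hypothetical example: "tracker.example" and the id are placeholders.
    var visitorId = "abc123"; // identifier the first-party site already knows
    var pixel = new Image();
    pixel.src = "https://tracker.example/pixel" +
                "?id=" + encodeURIComponent(visitorId) +
                "&page=" + encodeURIComponent(location.href);
    // Setting .src is enough to fire the GET request; the third party
    // receives both the identifier and the visited URL without any
    // referer or window.name involvement.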
Amusing story: I checked to see if Google+ might be using Google Chrome's privacy-preserving web-send feature (http://web-send.org/features.html) for their "+1" like button, and I discovered that sites that source the +1 button were encoding their URLs as a GET parameter to plus.google.com. So any referer protection you might expect to gain in the short term is already gone against Google. I think identifier transmission is really what is important in this case.
I am also pretty disappointed that Google is not even opting to use their own privacy-preserving submission system. Google+ seems like a great opportunity to push the adoption of web-send, so that 3rd-party identifier isolation can be done without breakage.
Folks who are inclined to make media shitstorms should try to jump on this one.
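For reference, the requests I saw were roughly of this shape (the path and parameter name below are placeholders, not Google's actual endpoint details):

    // Placeholder path and parameter name, illustrative of the behavior
    // described above: the embedding page's URL travels in the query
    // string, so blocking the Referer header does not hide it.
    var buttonSrc = "https://plus.google.com/PLACEHOLDER_PATH" +
                    "?url=" + encodeURIComponent(location.href);
    // e.g. https://plus.google.com/PLACEHOLDER_PATH?url=https%3A%2F%2Fexample.com%2F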
You're just preventing "accidental" information leakage at the cost of breaking functionality. The ad networks will adapt to this sort of change, and then you're right back where you started in terms of actual tracking, after years of punishing your users with broken sites.
This depends on how one designs the individual features. But I got your point.
If you want to suggest how to fine-tune the referer/window.name policy, let's discuss that.
More broadly, perhaps there is some balance of per-tab isolation and origin isolation that is easily achievable in Firefox? In my experience, per-tab isolation is extremely hard. How much of that have you already implemented?
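To make the comparison concrete, here is a rough sketch of what I mean by origin isolation (not Firefox's actual internals; all names are made up): third-party state is double-keyed by the first-party urlbar domain, so the same tracker sees unlinkable state under each site it is embedded in:

    // Sketch only, not a real browser API. Third-party state is keyed by
    // (first-party domain, third-party domain), giving tracker.example an
    // independent cookie jar under every site the user visits.
    var isolatedStore = {};

    function bucketFor(firstParty, thirdParty) {
      var key = firstParty + "|" + thirdParty;
      if (!isolatedStore[key]) isolatedStore[key] = {};
      return isolatedStore[key];
    }

    bucketFor("news.example", "tracker.example").uid = "111";
    bucketFor("shop.example", "tracker.example").uid = "222";
    // The tracker cannot link the two visits through its stored state.

Per-tab isolation would extend that key with a tab identifier, which is where it gets hard: tabs are created and destroyed constantly, and links move between them.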