[tor-dev] Improving Private Browsing Mode/Tor Browser

Georg Koppen g.koppen at jondos.de
Thu Jun 23 09:55:49 UTC 2011

> Additionally, we expect that fingerprinting resistance will be an
> ongoing battle: as new browser features are added, new fingerprinting
> defenses will be needed. Furthermore, we'll likely be inclined to
> deploy unproven but better-than-nothing fingerprinting defenses (so
> long as they don't break much), whereas the browser vendors may be
> more conservative on this front, too.

Yes, that seems likely.

>> And why have add-ons again that can be toggled on/off and are thus
>> more error-prone, rather than just having, say, a Tor anon mode?
>> Or is this already included in the Tor anon mode and only separated
>> in the blog post for explanatory purposes?
> If we operate by upgrading private browsing mode, we'll effectively
> have the "toggle" in a place where users have already been trained by
> the UI to go for privacy. Torbutton would become an addon that is only
> active in private browsing mode. 

Okay. So there is no additional toggling of Torbutton in this enhanced
private mode: the user simply enters it, Torbutton runs and does its
job, and when she no longer wants it she does not toggle anything but
just leaves the enhanced private browsing mode. Is that right?

> We also expect that if browser vendors become serious enough about
> privacy, they will be the ones who deal with all the linkability
> issues between the private and non-private states, not us.

Yes, that would be really helpful.

>> If one user requests
>> google.com, mail.google.com and other Google services within the 10
>> minutes interval (I am simplifying here a bit) without deploying TLS the
>> exit is still able to connect the whole activity and "sees" which
>> services that particular user is requesting/using. Even worse, if the
>> browser session is quite long there is a chance of recognizing that user
>> again if she happens to get the same exit mix more than once. Thus, I
>> do not see how that helps avoid linkability for users who need/want
>> strong anonymity while surfing the web. It would be good to get that
>> explained in some detail. Or maybe I am missing a point here.
> We also hope to provide a "New Identity" functionality to address the
> persistent state issue, but perhaps this also should be an explicit
> responsibility of the mode rather than the addon.

Hmmm... If that is the answer to my questions, then the concept offered
in the blog post contains nothing to keep users from being tracked by
exit mixes. Okay. How should the "New Identity" functionality work? Is
a new identity generated automatically after a certain amount of time
has passed, or does the user have to click a button manually every time?

>> Assuming I understood TorButton's
>> Smart-Spoofing option properly: Why is it not applied to the
>> referer/window.name anymore? In other words: Why is the referer (and
>> window.name) not kept if the user surfs within one domain (let's say
>> from example.com to foo.example.com and then to foo.bar.example.com)?
> I don't really understand this question. The referer should be kept in
> these cases.

That sounds good. Then we probably just had different concepts of the
SOP in mind. I was thinking about
http://tools.ietf.org/html/draft-abarth-origin-09 (see: section 3 and
4). That would treat http://example.com, http://foo.example.com and
http://foo.bar.example.com as different origins (let alone mixing
"http://" and "https://" and having different ports).
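To make that concrete, here is a rough sketch (mine, in Python, not from
the draft itself) of the origin triple defined in sections 3 and 4 of
draft-abarth-origin; under this definition the three hosts above are
three distinct origins even though they share the same registered
domain:

```python
from urllib.parse import urlsplit

def origin(url):
    """Compute the (scheme, host, port) origin triple, roughly as in
    draft-abarth-origin, sections 3 and 4."""
    parts = urlsplit(url)
    port = parts.port
    if port is None:
        # Fall back to the scheme's default port.
        port = {"http": 80, "https": 443}.get(parts.scheme)
    return (parts.scheme, parts.hostname, port)

# All three URLs below map to different origin triples:
for u in ("http://example.com",
          "http://foo.example.com",
          "http://foo.bar.example.com"):
    print(u, "->", origin(u))
```

Mixing "http://" and "https://", or using a non-default port, likewise
yields a different triple and hence a different origin.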

> Neither of these properties are really identifiers (yes yes,
> window.name can store identifiers, but it is more than that). Both are
> more like cross-page information channels.

Agreed, although the distinction is somewhat blurred here.

> Hence it doesn't make sense to "clear" them like cookies. Instead, it
> makes more sense to prohibit information transmission through them in
> certain cases.

I am not sure about that, as "clearing" them for *certain contexts*
seems a good means of prohibiting information transmission *in those
contexts*: if there is no information, it cannot be transmitted (at
least not via referer or window.name).
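In case it helps, here is what I mean by "clearing in certain contexts",
as a small Python sketch (the policy and the helper names are my own
illustration, not anything from Torbutton; the eTLD+1 grouping is a
naive approximation):

```python
from urllib.parse import urlsplit

def referer_for_request(current_url, target_url):
    """Keep the Referer only when both URLs share the same registered
    domain; otherwise 'clear' it, so nothing can leak through it."""
    def site(url):
        host = urlsplit(url).hostname or ""
        # Naive eTLD+1 approximation; a real policy would use the
        # public suffix list.
        return ".".join(host.split(".")[-2:])
    if site(current_url) == site(target_url):
        return current_url
    return None  # cleared: no information left to transmit

# Within example.com the header survives; across domains it is cleared.
print(referer_for_request("http://example.com/a", "http://foo.example.com/b"))
print(referer_for_request("http://example.com/a", "http://tracker.net/img"))
```

The same gating could apply to window.name on cross-domain navigation.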

> I believe the cases where you want to prohibit the
> information transmission end up being the same for both of these
> information channels.

Yes, that's true.

> To respond to your previous paragraph, it is debatable exactly how
> strict a policy we want here, but my guess is that for Tor, we have
> enough IP unlinkability such that the answer can be "not very", in
> favor of not breaking sites that use these information channels
> legitimately.
> The fact is that other information channels exist for sites to
> communicate information about visitors to their 3rd party content. If
> you consider what you actually *can* restrict in terms of information
> transmission between sites and their 3rd party elements, the answer is
> "not much".
> So in my mind, it becomes a question of "What would you be actually
> preventing by *completely disabling* referers (and window.name)
> entirely?"
> It seems to me that the answer to this question is "You only prevent
> accidental leakage", because bad actors can use URL params as an
> information channel to their 3rd party elements just fine, and
> tracking and ad-targeting will continue. In a world without referers,
> sites would actually be incentivized to do this information passing,
> because ad networks will be able to serve better ads and pay them more
> money.
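To illustrate the point about URL params being an equivalent channel: a
first party that wants to pass referer-style information to its
third-party ad network can simply put it in the query string of the
embedded resource, with or without a Referer header (a trivial sketch;
ads.invalid is a placeholder, not a real service):

```python
from urllib.parse import urlencode

def build_ad_url(visitor_page, ad_network="https://ads.invalid/serve"):
    """Hand the embedding page's URL to a 3rd-party ad server via the
    query string; disabling Referer does nothing against this."""
    return ad_network + "?" + urlencode({"from": visitor_page})

print(build_ad_url("http://example.com/article?id=42"))
```

So disabling referers only closes the *accidental* leak; deliberate
information passing of this kind is untouched.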
> If someone did a crawl of the top 10k sites and found that none of
> them would break by disabling or restricting referers, I might change
> my mind for Torbutton, because it is unlikely that sites will adapt
> just for Torbutton users. However, you still have the property that if
> the browser vendors decided to disable referers, sites would build
> mechanisms to transmit referer-style information anyway. Hence, when
> talking to browser makers, it doesn't make sense to recommend that
> they disable referer information. They should instead simply allow
> sites to have better privacy controls over them if they wish.
> Does this reasoning make sense? I suppose it is somewhat abstract, and
> very conditional.

It makes perfect sense to me. Thanks.

Another question came to my mind: you seem to be at pains not to break
parts of the web, even in the anon mode, even if that means not
implementing features that would better fit the needs of people looking
for unlinkability (one thing that comes to mind here would be
additionally making contexts tab-dependent). Why? Why not say: "This is
Tor's anon mode. It is meant for people who strive for unlinkability
and it may break some functionality. You may still use Tor in normal or
private browsing mode, though (providing no, or less, unlinkability at
the browser level)."? Do you think that is not worth the effort because
Tor's IP unlinkability is enough here (especially combined with the
things you suggested in the blog post)? I do not know Tor's user base
very well, but I could imagine it contains many users who would like
more unlinkability than the "we do not want to break any (or almost
any) part of the web for better anonymity" faction.

