[tor-talk] Making TBB undetectable!

behnaz Shirazi skorpino789263 at gmail.com
Fri Oct 2 16:58:12 UTC 2015


On 10/1/15, Ben Tasker <ben at bentasker.co.uk> wrote:
>> False! A unique Tor exit IP that visits site1.com then site2.com won't
>> reveal whether the same person visited those sites or two different
>> people who used the same Tor exit IP at the same time did, thus
>> anonymity remains intact.
>
> But if one has one fingerprint (the default TBB) and the other an
> 'undetectable' one, then you can easily differentiate that they are two
different users. They both came from Tor exits, so you "know" they're Tor
> users, but one user changing TBB's signature means they no longer appear as
> close to identical as possible.

As I said, that won't happen. It doesn't make sense to use the
undetectablizer when you are already using a public Tor exit node,
because the exit IP itself reveals that you are using Tor; therefore a
minority of undetectable users won't hurt the anonymity of the
majority of detectable users, nor their own. The undetectablizer
add-on is useful with private exit nodes. Think of how Tor bridges are
shared with users privately to keep China from harvesting and blocking
all of them; a similar list of private exit nodes could be provided by
the Tor community or other groups in exchange for some Bitcoin, and
anyone can mine Bitcoin anonymously with a few days of computing work.

Undetectability is necessary, but only occasionally. Most of the time
you don't need to be undetectable when searching websites or visiting
social networks, but if an attacker detects that someone is trying to
hide their identity while entering a powerful villain's email account,
or while trying to contact a high-risk journalist, that might cost
lives.

>> TBB, because when a natural fingerprint is used only once, there will
>> not be enough information available for data miners to link pseudonyms
>> for deanonymization,
>
> Used once, sure. But over time, it's likely going to get used more than
> once, unless you're planning on inserting some sort of randomisation to try
> and prevent that (by making some aspect different each session), but that
> randomisation then becomes a potential means to identify users who are
> using "UnidentifiableMode"

Yes. Using a randomized profile each time the user clicks
UnidentifiableMode can solve the problem of reusing the same unique
fingerprint. However, I prefer to use the common profiles we have on
mobile devices, because a lot of people use them and they all share
the same fingerprint. That makes a large haystack to hide in; even if
I check the same account from there several times, I still look
natural.
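Just as a sketch of what I mean (the user agent string, property names,
and `mobileProfile` object here are illustrative assumptions, not
values from any real device database), an add-on's content script
could pin the reported values to one common mobile profile:

    // Hypothetical common mobile profile; exact values are placeholders.
    const mobileProfile = {
      userAgent:
        "Mozilla/5.0 (iPhone; CPU iPhone OS 9_0 like Mac OS X) " +
        "AppleWebKit/601.1 (KHTML, like Gecko) Version/9.0 " +
        "Mobile/13A344 Safari/601.1",
      platform: "iPhone",
    };

    // Shadow the getters a fingerprinting script would read. Defining
    // an own accessor on the navigator object overrides the prototype
    // getter in most engines.
    Object.defineProperty(navigator, "userAgent", {
      get: () => mobileProfile.userAgent,
    });
    Object.defineProperty(navigator, "platform", {
      get: () => mobileProfile.platform,
    });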


>> Undetectability is a crucial requirement for privacy protection tools,
>> and unfortunately it seems the Tor developers don't want to put their
>> time into this issue. I hope other folks take this problem seriously
>> and do something quickly.
>
> I don't _know_ but I suspect it's actually the opposite - thought has
> previously been put into the feasibility and risk and it's been decided
> that the current approach should be safer. Making something "Undetectable"
> is very, very hard as your margin for error is 0 (because 0.01 gives
> something that someone could use to make it identifiable). Making something
> common so you can blend into the crowd makes it easier to avoid
> (potentially) costly mistakes.
>
> Remember that those who are _really_ interested in de-anonymising via
> fingerprinting are _very_ good at finding means to differentiate between
> requests, one tiny slip-up is all it would take to make your
> "Unidentifiable" browser extremely identifiable. You'd then (potentially)
> be the only client with fingerprint a, coming from a Tor exit.

There are only a limited number of data requests possible (check out
browserleaks.com or browserspy.dk). We need to list all of them and
compare with other browsers to spoof whatever is different.
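As a rough sketch, that comparison could be automated by collecting
the readable surface and diffing it against a reference profile; the
attribute list and function names below are only a small sample of
what those test sites actually probe, chosen for illustration:

    // Collect a sample of fingerprintable attributes. Real test suites
    // (browserleaks.com, browserspy.dk) probe far more than this.
    function collectFingerprint(): Record<string, string> {
      return {
        userAgent: navigator.userAgent,
        platform: navigator.platform,
        language: navigator.language,
        screen: `${screen.width}x${screen.height}x${screen.colorDepth}`,
        timezoneOffset: String(new Date().getTimezoneOffset()),
      };
    }

    // List the attributes that still differ from a target profile,
    // i.e. the ones an undetectablizer would still need to spoof.
    function diffAgainst(target: Record<string, string>): string[] {
      const local = collectFingerprint();
      return Object.keys(local).filter((key) => local[key] !== target[key]);
    }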

> Even if you didn't slip up, let's say you make your requests look almost
> exactly like vanilla firefox. If you're the only user using that mode at a
> given time, every request coming from an exit with your fingerprint is an
> opportunity to correlate that traffic back to you. There's no immediate
> proof that all that traffic is you, but volumes would be low enough that
> you could then start examining requests with an aim to trying to prove it's
> all one user.
>
> Blending into the crowd is not without its value.

You say that in UnidentifiableMode, before closing the browser and
getting a new identity, opening different sites in different tabs can
be correlated with each other. That is true if the user chooses a
randomized fingerprint; however, if a common fingerprint like a mobile
device's is chosen, that correlation between different tabs becomes
impractical.

UnidentifiableMode would be used in rare scenarios, and in a dialog we
can inform users what happens when they activate it. At the moment, if
you maximize TBB, a message pops up and informs you that this action
makes you vulnerable to tracking.


On 10/1/15, Spencer <spencerone at openmailbox.org> wrote:
> Is a 'Natural Fingerprint' like a clearnet fingerprint, in that it
> identifies you as a regular, non-tor, internet user, making you part
> of the larger herd?

I don't understand what you mean by “clearnet fingerprint”. The
fingerprint is generated locally inside the browser; it is about TBB,
not the onion routers. Connecting to a website directly or via a
public Tor exit node as a proxy gives one bit of information (a true
or false flag) to the destination website, but we don't include this
bit in the fingerprinting attack.

> I see this as a blocker, as this add-on is most likely detectable, yeah?

Nah

>   If not, how, in the same, less, or maybe a bit more, amount of
> resources do you feel this could be accomplished?  Manually, this
> becomes quite the task as time progresses.  Is this something that would
> be added to a mail [something], like OpenPGP or TorBirdy are, because I
> feel like this would be detectable somehow, too.

As far as I know you can't enumerate installed add-ons from
JavaScript; that only works for plugins, so the add-on itself is not
detectable either. Detecting add-ons is done through side-channel
attacks: for instance, Adblock blocks certain scripts and NoScript
blocks certain objects, so an attacker can simply request such
elements and find out whether those add-ons are installed.
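The ad-blocker case is the classic example. A sketch of how a site
could run that side-channel test (the class names and the
`detectAdBlocker` helper are illustrative; the class names are the
kind commonly found in filter lists):

    // Insert a bait element with class names that ad-blocker filter
    // lists commonly hide, then check whether it actually rendered.
    function detectAdBlocker(): Promise<boolean> {
      return new Promise((resolve) => {
        const bait = document.createElement("div");
        bait.className = "adsbox ad-banner";
        bait.style.height = "10px";
        document.body.appendChild(bait);
        // Give a blocker a moment to hide or remove the element.
        setTimeout(() => {
          const blocked = bait.offsetHeight === 0;
          bait.remove();
          resolve(blocked);
        }, 100);
      });
    }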

We just change the details the browser returns to such calls in a way
that the caller can't tell whether it is being told the truth; there
is no need to block anything. However, the "resource://" scheme should
go away: it gives everything away, even the exact Tor Browser version
you are running. Of course removing it won't cause a detection if the
user chooses a mobile device profile, since Safari doesn't have such a
scheme anyway.
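For illustration, a sketch of how a page might probe such an internal
URL; the `probeInternalUrl` helper and the torbutton path below are
assumptions of mine, and which resource:// URLs actually resolve
varies by Firefox version:

    // Try to load an internal resource:// URL from page content.
    // If it loads, the visitor is running a Firefox-family browser,
    // and an extension-specific path would narrow that down further.
    // The path below is illustrative, not a confirmed TBB file.
    function probeInternalUrl(url: string): Promise<boolean> {
      return new Promise((resolve) => {
        const img = new Image();
        img.onload = () => resolve(true);
        img.onerror = () => resolve(false);
        img.src = url;
      });
    }

    probeInternalUrl("resource://torbutton/skin/tor.png")
      .then((reachable) => console.log("resource:// reachable:", reachable));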


On 10/1/15, Ben Tasker <ben at bentasker.co.uk> wrote:
>> Randomization, or some one click equivalent, is the only real option
>> here when usability is considered; the manual effort each session is
>> undesirable at the very least :)
>
> The problem you have there, is what to randomize, and how to do it in such
> a way that it does not itself become identifiable.
>
> To use an example, think about when you run cover traffic (whether over Tor
> or a VPN), the initial temptation is to have random levels of data
> travelling over the link. The problem there being it's not a 'natural'
> looking flow of data when you analyse it. So when you use the link, your
> natural usage is identifiable in the analysis.
>
> So you go for something more 'natural', but natural's hard to fake, so your
> cover traffic has an identifiable set of patterns, meaning on analysis you
> can discount it and still tell when the tunnel is being used for real
> traffic.
>
>
> When we're talking about making the browser unidentifiable as TBB, the very
> act of having something in the fingerprint that changes to prevent
> correlation between sessions provides an avenue by which it can be
> identified as TBB:
>
> Let's say you override reported screen width so it lies, and then use TBB
> to sign in to (sake of example) Facebook. Every time you start a new
> session and sign in to Facebook, your screen size is going to be different.
> That's *very* unusual. Users' screen sizes will change from time to time
> (because they're in a window rather than full-screen, or on a laptop
> instead of a PC) but to be different every time?
>
> What about if you're signed in to FB in one tab, and browsing news in
> another. The news page has a Like button on it, and Facebook get a
> completely different screen size reported. You might just have the news on
> fullscreen, and FB windowed, but again, for it to happen every time is an
> unusual pattern.
>
> A bit of research would soon tell them you're using TBB even if they hadn't
> thought to see if the traffic was coming from an exit node.

Using a common fingerprint (e.g., a mobile profile) all the time can
solve this issue.
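Concretely: instead of randomizing the geometry per session, the
add-on could pin the reported screen values to one common device so
they stay identical across tabs and sessions. A minimal sketch,
assuming the overrides take effect before any page script runs
(375x667 is a popular phone resolution, used only as an example):

    // Pin the reported geometry to one fixed, common device size so
    // every tab and every session reports the same values.
    const FIXED = { width: 375, height: 667 };
    for (const prop of ["width", "availWidth"] as const) {
      Object.defineProperty(screen, prop, { get: () => FIXED.width });
    }
    for (const prop of ["height", "availHeight"] as const) {
      Object.defineProperty(screen, prop, { get: () => FIXED.height });
    }
    Object.defineProperty(window, "innerWidth", { get: () => FIXED.width });
    Object.defineProperty(window, "innerHeight", { get: () => FIXED.height });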

> only if we resolve the traffic source; i.e., Tor exits.
>
> That's quite an issue to solve though. Even if we assume that the IPs of
> Tor nodes weren't being published anymore, analysis of traffic patterns on
> a busy site would likely soon let you work out the IPs of some exits.
>
> Granted, you wouldn't immediately know whether those sources were Tor exits
> or simply proxies being used by multiple users, but finding out wouldn't be
> impossible. A determined adversary wanting to map out Tor exits could
> simply initiate a lot of connections via Tor and keep a record of where the
> other end (under their control) sees connections come from.
>
> Not as accurate as downloading the relay list, but depending on your aims
> you wouldn't need 100% coverage, so in the absence of the list it'd
> probably do. It raises the cost of identifying Tor exits, but only so long
> as the resulting list isn't then published (and kept up to date).

In a public wifi hotspot there is only one IP address, and several
clients simultaneously visit different websites through it. It would
be very difficult for an attacker to find out that a private Tor exit
node is actually a Tor exit node, and what we want is to prevent
websites from instantly detecting Tor the way they can today. Don't
forget that it is not impossible for a global adversary observing a
big portion of the globe to locate a user and deanonymize Tor itself,
yet we still trust Tor for anonymity; likewise we can trust the
undetectablizer add-on to keep us unidentifiable in most cases.

> As others have said though, the aim isn't to hide that you're using Tor
> from your destination, and successfully doing so would (IMO) be a pretty
> non-trivial task

What? The undetectablizer add-on's aim is exactly to hide from the
destination site that we're using Tor. Pluggable Transports aim to
hide that we're using Tor from network observers located between the
user and the entry guards.

Making the undetectablizer add-on is a trivial task. It took several
years for the Tor devs to show a warning message when the user tries
to maximize the browser; that was a trivial task too, but they are
busy with other things and don't have enough resources to do more good
stuff. I hope they run a Kickstarter campaign very soon, or some other
folks help make this add-on happen.

