[tor-talk] Risk of selectively enabling JavaScript

Mark McCarron mark.mccarron at live.co.uk
Tue Jan 7 20:49:52 UTC 2014


> Point by point.
> 
> > Javascript, by itself, is not an issue and poses no more of a security threat than any other type of data transferred online.  Coding errors in image handling, html parsing, ftp, etc., can all be used to inject code.
> 
> Note that (potential) privilege escalation bugs are found way more in 
> the Javascript component of Firefox. The Javascript engine is a 
> complicated and heavily optimized beast and (Javascript-accessed) 
> browser APIs have seen much more active development.
> 

Do we have any empirical figures on this across browsers?

> It is very reasonable to assume that more security problems are found 
> there, and it might be reasonable to use a whitelist to mitigate those 
> problems.
> 
> >   The idea that you are gaining some security or increased anonymity by disabling javascript is outright nonsense.  As TBB is a standard product, its fingerprint should be the same for everyone.
> 
> It's not "outright nonsense". It's supported by fact. Disabling 
> Javascript will protect you against Javascript 0days in TBB (or 
> non-0days deployed by the FBI against non-updated users). You may argue 
> that it's not a good or realistic defense, but not that it doesn't do 
> anything.
> 

Zero-days can appear anywhere data is transferred and code is incorrect.  If we are advising that JS should be disabled, we are making a statement about the quality of the code in Firefox.  We may also be saying that the security services are sabotaging the product, but let's get some figures and the names of the responsible devs first.

> > The fact that TBB disables javascript is a [blah blah non-sequitur]
> 
> TBB doesn't disable Javascript by default. The premise of your argument 
> falls apart.

Really???  I thought it was disabled in the latest betas.  Anyway, it doesn't change the argument since the recommended practice is to disable it.

> 
> > I think there is a solid argument for adding filters to the exit nodes that strip anything that could be used against a person and enforce default headers, etc.  This will kill any fingerprinting, injection and tracking attempts.  If anyone still requires full non-modified access, they should be forced to explicitly allow that by clicking a button.
> 
> Filtering at an exit-node level is ridiculous for multiple reasons. You 
> don't want to fix these issues on a stream level, and there are no 
> advantages compared with client-side filtering. NoScript is rightfully 
> in the TBB.
> 

There is an advantage, because not everyone uses TBB.  It solves problems before they can become problems.
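
As a rough sketch of what "strip and enforce default headers" could look like, here is a hypothetical exit-side filter in Python.  The header values and the normalize_headers name are my own illustration, not anything Tor implements today, and header-name matching is simplified (case-sensitive) for brevity:

# Hypothetical illustration only: Tor exit nodes do not do this today.
# The idea is to replace anything client-specific in an HTTP request with
# one fixed profile, so every user presents the same values to the server.

DEFAULT_HEADERS = {
    "User-Agent": "Mozilla/5.0 (Windows NT 6.1; rv:24.0) Gecko/20100101 Firefox/24.0",
    "Accept": "text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8",
    "Accept-Language": "en-US,en;q=0.5",
    "Accept-Encoding": "gzip, deflate",
}

# Headers that leak client state would simply be dropped.
STRIP_HEADERS = {"Cookie", "Referer", "If-None-Match", "If-Modified-Since"}

def normalize_headers(request_headers):
    """Return a sanitised copy of the request headers: identifying headers
    are overwritten with the defaults above, tracking-related headers are
    removed, and everything else passes through unchanged."""
    cleaned = {}
    for name, value in request_headers.items():
        if name in STRIP_HEADERS:
            continue
        cleaned[name] = DEFAULT_HEADERS.get(name, value)
    # Ensure the defaults are present even if the client omitted them.
    cleaned.update(DEFAULT_HEADERS)
    return cleaned

The "explicitly allow" button mentioned above would then just be a per-site switch that bypasses normalize_headers for that request.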

> Also, claiming that any amount of filtering will "kill any 
> fingerprinting, injection and tracking attempts" is naive at best. I can 
> think of dozens of attacks, starting with a malicious exit-node.

It would kill most of them by making all clients look the same, since any exposed data is faked at the exit node.  That reduces the footprint, and all we would need to be concerned with is malicious exit nodes.
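
To put a rough number on "making all clients look the same": the identifying power of a header such as User-Agent can be measured as Shannon entropy, and it drops to zero once every client presents an identical value.  A toy calculation in Python, with made-up populations:

import math
from collections import Counter

def fingerprint_entropy(values):
    """Shannon entropy (bits) of the observed values.  Higher entropy means
    the attribute distinguishes users better; zero means every client looks
    identical on this attribute."""
    counts = Counter(values)
    total = sum(counts.values())
    return sum((c / total) * math.log2(total / c) for c in counts.values())

mixed   = ["Firefox/24", "Firefox/26", "Chrome/31", "Safari/7", "Firefox/24"]
uniform = ["Firefox/24"] * 5

print(fingerprint_entropy(mixed))    # ~1.92 bits of distinguishing information
print(fingerprint_entropy(uniform))  # 0.0 bits: nothing left to fingerprint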

The idea behind defense-in-depth is that we continuously reduce the footprint and the attack vectors, systematically increasing the dollar value of any functional attack.  It is not a silver-bullet approach.

> > That said, all of this is a complete waste of time if Tor does not start integrating techniques to prevent traffic analysis.
> >
> Location-privacy and privacy between different (pseudonymous) identities 
> have different attacks. We're talking about the latter here. 
> Furthermore, end-to-end traffic confirmation attacks (if that is what 
> you mean by traffic analysis) are not in Tor's adversary model. Tor is 
> very vulnerable to them.

Tor needs strategies for every form of attack.  If not, the entire platform becomes a waste of time.  The broad public perception will develop that the weaknesses are due to US government influence in the project.

That will be the end for Tor.
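
For anyone unfamiliar with the traffic-confirmation point quoted above: the attack only needs traffic timing observed at both ends of a circuit, with no decryption at all.  A toy illustration in Python (the per-second packet counts are invented):

from statistics import correlation  # Python 3.10+

# Invented per-second packet counts observed at the entry side for two clients.
client_a_entry = [12, 0, 7, 30, 2, 18, 0, 25]
client_b_entry = [3, 14, 6, 1, 20, 4, 9, 2]

# Per-second packet counts observed arriving at the destination site.
site_arrivals = [11, 1, 8, 29, 3, 17, 1, 24]

print(correlation(client_a_entry, site_arrivals))  # close to 1.0: client A matched
print(correlation(client_b_entry, site_arrivals))  # far lower: client B ruled out

Exit-side content filtering does not touch this class of attack, which is why it would need its own countermeasures.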

