[tor-talk] Chaum Fathers Bastard Child To RubberHose ... PrivaTegrity cMix

Bryan Ford brynosaurus at gmail.com
Fri Jan 8 09:22:19 UTC 2016


Andy Isaacson wrote:
> The privaTegrity (PT) backdoor is significantly more malignant than the
> Tor dirauth issue.
> 
> If you pwn the Tor dirauths, you can sign and publish a false
> "consensns" to clients that will cause them to use only your relays for
> new connections, thus breaking anonymity for new connections.  Doing so
> leaves a trail of bits showing that this was done (mostly just on the
> target system).

This line of reasoning assumes that: (1) dirauth compromise necessarily happens “in-place”, i.e., the same servers at the same physical and network location as before simply start doing bad things; and (2) the compromised dirauths would publish the false consensus to the whole world.  Both of these assumptions, unfortunately, are likely false in many realistic threat models.

Both assumptions boil down to the attacker continuing to operate on the “good guys’” turf, i.e., using the legitimate dirauth servers’ hardware and network location and just making them start doing bad things in-place.  But why would a smart attacker operate on turf foreign to him when he has the option simply to steal the keys and quietly create any number of “evil twins” of the dirauth servers elsewhere in the world on his own turf, i.e., in a network environment he controls?

The obvious example is a state surveillance agency that steals/bribes/coerces a threshold of dirauth keys, imposes gag orders of some kind on all the holders, and then embeds those keys in Quantum Insert type devices, or in compromised “wifi access points” that MITM-attack anyone who connects to the Internet through them.  The fake Internet cafes at the 2009 G20 summit come to mind as an obvious example of this type of operation (http://www.theguardian.com/uk/2013/jun/16/gchq-intercepted-communications-g20-summits).

Any state-level attacker that controls the only way for users to communicate into or out of the country’s Internet basically has the power to perform persistent MITM attacks either en masse or selectively (i.e., targeting just the users they don’t like).  And if they (secretly) hold copies of a threshold’s worth of dirauth private keys, they can silently present a completely internally consistent “alternate view of the Tor universe” to a user whose Internet access they control, complete with a fake list of Tor relays, perhaps all implemented virtually on that same MITM device - which might even offer a rather nice user experience, unless the attacker deliberately adds forwarding delays to avoid arousing suspicion.  The “real” dirauth servers remain otherwise uncompromised and keep doing the right thing, so no one on the “main” part of the Internet notices anything amiss; and the victim(s) don’t see anything amiss either, because they have no way to compare notes (consensus documents) with the rest of the world.
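
To make the threshold concrete, here is a minimal sketch (in Python; this is not Tor’s actual code, and the verify() callback and key names are hypothetical stand-ins) of the kind of check a client performs: accept a consensus document if enough of the dirauth keys it ships with have signed it.

    from typing import Callable, Dict

    def consensus_accepted(doc: bytes,
                           sigs: Dict[str, bytes],
                           dirauth_keys: Dict[str, bytes],
                           verify: Callable[[bytes, bytes, bytes], bool],
                           threshold: int) -> bool:
        # Count signatures that verify under keys the client already trusts.
        good = sum(1 for name, key in dirauth_keys.items()
                   if name in sigs and verify(key, doc, sigs[name]))
        return good >= threshold

The check is purely local: an attacker who quietly copies a threshold’s worth of private keys can sign an arbitrary fake consensus that passes it, while the real dirauths keep publishing honest consensuses and nothing looks wrong from the “main” Internet.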

The fundamental problem with arguments that attacks on dirauth/cMix-like clusters will leave a “trail of crumbs” is that it’s unfortunately all too realistic for today’s adversaries to split the network, keeping a victim’s view of the world separate from the part of the Internet where the “good guys” live.  Yes, one of those users “might” notice the difference in various ways, e.g., by keeping a record of the consensus views he’s seen and comparing notes the next time he travels to another country or uses an “uncompromised” Internet access point.  But many users around the world don’t often travel far from their home town, let alone to other countries.  And yes, these types of operations do often get found out sooner or later, often because the adversary makes some stupid mistake.  But it’s not ideal for our security models to depend on the adversary making mistakes.
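
The “compare notes” detection idea fits in a few lines.  A hypothetical sketch, again in Python: each client keeps a digest of every consensus it has accepted, and two clients who later meet can diff their logs; any period where the digests differ is evidence of a split view.

    import hashlib
    from typing import Dict, List

    def record(log: Dict[str, str], valid_after: str, consensus: bytes) -> None:
        # Remember a fingerprint of each consensus we accepted,
        # keyed by its validity period.
        log[valid_after] = hashlib.sha256(consensus).hexdigest()

    def split_views(mine: Dict[str, str], theirs: Dict[str, str]) -> List[str]:
        # Periods where the two clients accepted *different* documents.
        return sorted(t for t in mine.keys() & theirs.keys()
                      if mine[t] != theirs[t])

The catch, as above, is that this only fires once the victim actually reaches an uncompromised vantage point to diff against.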

I’m not criticizing Tor in particular here; this problem is just very difficult, and nobody’s really solved it yet.  Our experimental Dissent anonymity system (http://dedis.cs.yale.edu/dissent/) currently relies on the same “small set of collectively-trusted servers” assumption, which I’m not happy with.  This motivated our ongoing work on scalable collective authorities or “cothorities” (https://github.com/dedis/cothority), in which hundreds or thousands of decentralized participants can witness documents (such as consensus reports) and contribute to a collective signature that clients can check without needing online communication or gossip.
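
A rough sketch of the client-side property a cothority aims for: one compact collective signature, checkable offline, attesting that a large quorum of independent witnesses saw the document.  The aggregate() and aggregate_verify() helpers below are hypothetical stand-ins for the real Schnorr multisignature machinery.

    from typing import Callable, List, Sequence

    def collectively_witnessed(doc: bytes,
                               collective_sig: bytes,
                               participated: Sequence[bool],
                               witness_keys: List[bytes],
                               aggregate: Callable[[List[bytes], Sequence[bool]], bytes],
                               aggregate_verify: Callable[[bytes, bytes, bytes], bool],
                               quorum: int) -> bool:
        # Refuse unless enough independent witnesses participated...
        if sum(participated) < quorum:
            return False
        # ...and the single collective signature verifies against the
        # aggregate of exactly those witnesses' public keys.
        agg_key = aggregate(witness_keys, participated)
        return aggregate_verify(agg_key, doc, collective_sig)

The point is that stealing a handful of keys no longer suffices; an attacker would have to corrupt hundreds or thousands of independent participants to mint a convincing fake.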

There’s been discussion of using Certificate Transparency (CT) to address this kind of problem, but even CT unfortunately still doesn’t solve it, at least in its current design.  To perform a silent “evil twin” MITM attack like this, even against a CT-enabled Web client that already knows the target web site supports CT, an attacker would need to steal (any) one CA key and (any) one or two log server keys, depending on the verification policy.  With those keys, a MITM attacker (e.g., a not-so-great firewall or a fake Internet cafe) could create a completely consistent fake CT universe, complete with fake logs, and at the same time deny its victims any connectivity to the “real” CT universe against which they could cross-check or learn about the attack.  So CT basically raises the silent-compromise threshold to 3 stolen keys - a lot better than 1 (just a CA key), but far from ideal.  Given that attackers have hundreds of CA keys to choose from, plus a growing list of log servers, many of them located in the same country, powerful attackers have a veritable candyland of options for stealing keys to enable silent evil-twin MITM attacks.
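
The “3 stolen keys” arithmetic falls out of the verification policy.  A hedged sketch of such a policy check (the field names are illustrative, not any browser’s actual code):

    from typing import Set

    def ct_policy_ok(chain_valid: bool,
                     sct_log_ids: Set[str],
                     trusted_log_ids: Set[str],
                     required_scts: int) -> bool:
        # Accept only if the chain validates to some trusted CA *and* the
        # certificate carries SCTs from enough distinct trusted logs.
        return (chain_valid and
                len(sct_log_ids & trusted_log_ids) >= required_scts)

The silent evil-twin threshold is then 1 stolen CA key (any of hundreds) plus required_scts stolen log keys; with required_scts = 2, that is 3 keys in total.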

Cheers
Bryan
