[tor-relays] Towards a Tor Node Best Practices Document

Fabian Keil freebsd-listen at fabiankeil.de
Sun Apr 29 16:06:39 UTC 2012

Mike Perry <mikeperry at torproject.org> wrote:

> Thus spake Fabian Keil (freebsd-listen at fabiankeil.de):
> > > Attack Vector #2: Advanced Persistent Threat Key Theft
> > I assume "your" APT is somehow less capable than "my" APT,
> > but without knowing your definition I can't really tell
> > if the proposed defenses are effective against it.
> >
> > Adding your definition to the document would help, but personally
> > I would prefer it if the term APT wouldn't be used at all.
> I thought I did in the very next paragraphs?

Never mind. I assumed it was just an example of one of
several attacks the APT might carry out.

> > > If one-time methods fail or are beyond reach, the adversary has to
> > > resort to persistent machine compromise to retain access to node key
> > > material.
> > > 
> > > The APT attacker can use the same vector as #1 or perhaps an external
> > > vector such as daemon compromise, but they then must also plant a
> > > backdoor that would do something like trawl through the RAM of a
> > > machine, sniff out the keys (perhaps even grabbing the ephemeral TLS
> > > keys directly), and transmit them offsite for collection.
> > > 
> > > This is a significantly more expensive position for the adversary to
> > > maintain, because it is possible to notice upon a thorough forensic
> > > investigation during a perhaps unrelated incident, and it may trigger
> > > firewall warnings or other common least privilege defense alarms
> > > inadvertently.
> > 
> > I think this attack would be a lot more expensive than motivating
> > the right Debian developers to compromise a significant part of
> > the interesting Tor relays the next time they get updated.
> >
> > This attack would not only be harder to defend against, it also
> > sounds cool if we call it the apt(8)-based APT attack.
> > 
> > "My" APT could do that, but I assume "yours" can't?
> Hrmm. More accurately, your "apt APT" is not an attack against Tor, it's
> an attack against Debian. I classify that as out of scope. Similarly,
> attacks against Intel are also out of scope (even though they are quite
> possible and are even more terrifying). It's simply not our job to
> defend against them. 

I agree that defending against this is out of scope for the Tor project.
If it's done with the intention of compromising Tor relays, I'd
still count it as an attack against Tor, though.

> > > Once you start your tor process(es), you will want to copy your
> > > identity key offsite, and then remove it. Tor does not need it to
> > > remain on disk after startup, and removing it ensures that an
> > > attacker must deploy a kernel exploit to obtain it from memory.
> > > While you should not re-use the identity key after unexplained
> > > reboots, you may want to retain a copy for planned reboots and tor
> > > maintenance.
> > 
> > How often can a relay regenerate the identity key without
> > becoming a burden to the network?
> > 
> > I reused the identity keys after unexplained reboots in the past
> > as I assumed the cost of a new key (unknown to me) would be higher
> > than the cost of a compromise (unknown) multiplied by the likelihood
> > of the occurrence (also unknown to me, but estimated to be rather low
> > compared to other possible reboot causes).
> > 
> > In cases where a reboot is assumed to have been caused by a
> > system compromise, I wouldn't consider merely regenerating the
> > key without re-installing the whole system from known-good media
> > "best practice" anyway.
> You're failing to see the distinction made between adversaries, which
> was the entire point of the motivating section of the document. Rekeying
> *will* thwart some adversaries.

I'm not arguing that rekeying is useless. I just think that for
most Tor relays reboots are usually not the result of a compromise,
and the lack of reboots doesn't prove anything either (I'm aware
that you weren't implying this).

For a relay operator concerned about key theft, rekeying after
a certain amount of time, even if there's no sign of a compromise,
seems to make more sense to me.
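As a rough illustration, scheduled rekeying could be scripted along
these lines. This is only a sketch under my own assumptions: the
helper name is mine, the path follows tor's usual DataDirectory
layout (keys/secret_id_key), and tor would have to be stopped before
and restarted after it runs.

```python
import os
import shutil

def rekey(datadir, keep_copy_to=None):
    """Remove the relay's long-term identity key so that tor
    generates a fresh one on its next start.  Assumes the usual
    DataDirectory layout (keys/secret_id_key); run this only while
    the tor process is stopped."""
    key = os.path.join(datadir, "keys", "secret_id_key")
    if keep_copy_to is not None:
        # Optional offline/offsite copy, e.g. for planned reboots.
        shutil.copy2(key, keep_copy_to)
    os.remove(key)  # tor creates a new identity key on restart
```

Run from cron between a service stop and start, something like this
would rotate the identity at fixed intervals rather than only after
suspicious reboots.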

> > > Ok, that's it. What do people think? Personally, I think that if we
> > > can require a kernel exploit and/or weird memory gymnastics for key
> > > compromise, that would be a *huge* improvement. Do the above
> > > recommendations actually accomplish that?
> > 
> > Are "weird memory gymnastics" really that much more effort
> > than getting the relevant keys through ptrace directly?
> If they require a kernel exploit to perform, absolutely. If there are
> memory tricks root can perform without a kernel exploit, we should see
> if we can enumerate them so as to develop countermeasures.

My assumption was that a root user could get the key (or re-enable
ptrace) through /dev/mem without relying on kernel exploits.
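For what it's worth, on Linux an analogous capability is exposed
through /proc/<pid>/mem, which root can read without any exploit.
A minimal sketch (the function name is mine; the demonstration reads
the current process's own memory, which needs no special privilege):

```python
import ctypes
import os

def read_process_memory(pid, addr, length):
    """Read `length` bytes at virtual address `addr` from process
    `pid` via /proc/<pid>/mem.  No kernel exploit is needed: root
    (or the process itself, as below) can do this directly, which is
    why removing the key from disk only raises the bar to reading it
    out of RAM."""
    fd = os.open("/proc/%d/mem" % pid, os.O_RDONLY)
    try:
        return os.pread(fd, length, addr)
    finally:
        os.close(fd)

# Demonstration against our own address space: place a marker in RAM
# and read it back through the /proc interface.
buf = ctypes.create_string_buffer(b"identity-key-material")
data = read_process_memory(os.getpid(), ctypes.addressof(buf),
                           len(buf.value))
```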

> > I suspect getting the keys through either mechanism might be
> > trivial compared to getting the infrastructure in place to use
> > the keys for a non-theoretical attack that is cost-effective.
> The infrastructure is already there for other reasons. See for example,
> the CALEA broadband intercept enhancements of 2007 in the USA. Those can
> absolutely be used to target specific Tor users and completely
> transparently deanonymize their Tor traffic today, with one-time key
> theft (via NSL subpoena) of Guard node keys. 

CALEA might provide access to the traffic, but the attacker still
has to analyze it. I'm not saying that is impossible or inconceivably
hard, but I'd expect it to be a lot more complicated than getting the
keys from a system the attacker already has root access to.

> > I think your proposed measures might be useful for a relay
> > operator with a compatible system who is interested in spending
> > more time on his relay's security than he already does.
> > 
> > It's not clear to me, though, that they improve the security
> > of the Tor network significantly enough to be worth requiring
> > them or even calling them best practices (which could demotivate
> > operators who can't or don't want to implement them).
> Did I fail to motivate the defenses? In what way can we establish "more
> realistic" best practice defenses that are grounded in real attack
> scenarios and ordered by attack cost vs defense cost? I thought I had
> accomplished that...

I have no idea.

