[tor-dev] Proposal 246: Defending Against Guard Discovery Attacks using Vanguards

s7r s7r at sky-ip.org
Sun Jul 19 11:00:52 UTC 2015



On 7/19/2015 9:26 AM, Roger Dingledine wrote:
> On Sat, Jul 18, 2015 at 03:11:26AM +0300, s7r wrote:
>> I still see the third hop (speaking from hidden service server
>> start point) is the weak part here. An attacker can connect to a
>> hidden service at his malicious relay selected as rendezvous.
>> Before you know it, all relays in third_guard_set are enumerated
>> by the attacker. This is why I think it's better to have a bigger
>> value for NUM_THIRD_GUARDS and a shorter period for
>> THIRD_GUARD_ROTATION.
> 
> I haven't been keeping up with this thread well, so maybe that has 
> already been mentioned, but in case it hasn't: also consider the
> case where the Tor client runs two hidden services, and so they
> share the same guard infrastructure. In today's design, they each
> have one guard, but many Tor clients have that one guard, so you
> can't conclude much from noticing it. In last year's design, they
> each have the same three guards, which is a nearly unique
> fingerprint. In the above design, where we have more third-level
> guards, we're heading back into the unique fingerprint territory.
> 
> So many constraints to consider at once! :)
> 
> --Roger
> 

Yet another very important aspect - thanks for stepping in.

Considering the fingerprinting threat, which is no less dangerous: I
assume fingerprinting only works for an adversary if third_guard_set
contains a small enough number of relays to be distinctive. If
third_guard_set covers more than 75% of all relays in the consensus,
fingerprinting becomes impractical, since all clients would then have
very similar fingerprints.
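As a rough illustration of why a small third_guard_set acts as a
fingerprint (the relay and client counts below are illustrative
assumptions, not consensus figures):

```python
import math

R = 7000             # assumed relay count in the consensus (illustrative)
clients = 2_000_000  # assumed number of concurrent clients (illustrative)

for k in (3, 6, 12):
    sets = math.comb(R, k)  # number of distinct possible guard sets
    # When the number of possible k-relay sets vastly exceeds the
    # number of clients, any observed set is almost certainly unique
    # to one client -- i.e. it is a fingerprint. Only when the set
    # covers most of the network do clients' sets overlap heavily.
    print(f"k={k:2d}: C({R},{k}) = {sets:.3e} "
          f"({sets / clients:.1e}x the client population)")
```

Even for k = 3 the number of possible sets dwarfs any plausible
client population, which is why small per-service guard sets are
nearly unique identifiers.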

This suggests a better split: Sybil defense at first_guard_set and
second_guard_set, and defense against the targeted observer attack,
with anti-fingerprinting in mind, at third_guard_set.

I have no problem keeping second_guard_set as small as possible (2
relays) with a reasonable rotation period. My concern is only with
third_guard_set, which always sits directly before an assumed-evil
rendezvous point. There I would rather rely on diversity and on paths
that are as random as possible, and use first_guard_set +
second_guard_set as the Sybil protection.

How would the numbers look for a 5% adversary if we consider:

first_guard_set
NUM_FIRST_GUARDS = 1, with the current rotation period

second_guard_set
NUM_SECOND_GUARDS = two guards per guard in first_guard_set (so two,
if we don't change NumEntryGuards)
SECOND_GUARD_ROTATION = randomized between M and N (where M ~= 2 weeks
; N ~= 1.5 months ; average 1 month)

third_guard_set: not defined; choose a random relay from the full
consensus for every circuit, keeping the normal circuit lifetime per
Tor's default circuit design.

Obviously a 5% adversary has a very good chance of being selected as
third hop and learning at least one relay in second_guard_set, but how
dangerous is that really? After such an event, the adversary either
has to compromise at least one relay in second_guard_set, or wait
longer and spend more resources until one of his relays is selected as
a second guard. But then he would also need to be selected as both
second and third hop in the same circuit to learn the first guard
(very small chance). Then he would again have to either compromise the
relay in first_guard_set, or wait to be picked as first + second +
third hop on a circuit requested at his evil rendezvous point. This
becomes noisy, with a very small chance of total deanonymization.
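The chances described above can be put in rough numbers. A
back-of-the-envelope sketch, assuming relays are picked independently
in proportion to the adversary's 5% selection weight (the variable
names are mine, not from the proposal):

```python
C = 0.05  # adversary's fraction of network selection probability

# Third hop: picked fresh from the whole consensus for each circuit,
# so the adversary is the third hop on roughly C of all circuits.
p_third = C

# second_guard_set: NUM_SECOND_GUARDS = 2, drawn from the whole
# network, so the chance at least one of the two is adversarial:
p_second_set = 1 - (1 - C) ** 2

# To walk one hop closer to the first guard, the adversary needs his
# relays as BOTH second and third hop of the same circuit. Assuming
# exactly one of the two second guards is adversarial and each is
# used half the time:
p_link = p_second_set * 0.5 * p_third

print(f"P(adversary is third hop)             = {p_third:.4f}")
print(f"P(adversarial relay in 2-guard set)   = {p_second_set:.4f}")
print(f"P(second + third hop both adversarial) = {p_link:.6f}")
```

Each additional hop the adversary has to control in the same circuit
multiplies in another small factor, which is why the full
first + second + third alignment is so unlikely per circuit.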

This is the maximum a Sybil adversary can do: wait for the victim to
pick his evil relays in a circuit. In exchange, he contributes
bandwidth to the network, handling the traffic of non-targeted users
as well as increasing the costs for other Sybil adversaries targeting
other users. It is true that a 5% adversary doesn't necessarily target
just one single Tor client, but the success probability is effectively
calculated per individual Tor client.

The targeted observer attack offers nothing in return and is just as
real; it only requires proper motivation (it won't be done for just
any boring reason). As stated in my previous email, perfection is
currently impossible with so many threats, but why not make things
better and take all aspects into consideration.

