Proposal: More robust consensus voting with diverse authority sets

Nick Mathewson nickm at
Tue Apr 1 16:02:35 UTC 2008

On Tue, Apr 01, 2008 at 05:02:13PM +0200, Peter Palfrader wrote:
> Filename: xxx-robust-voting.txt

Added as proposal 134.

> Objective:
>   The modified voting procedure outlined in this proposal obsoletes the
>   requirement for most authorities to exactly agree on the list of
>   authorities.

I like this objective.  I've tried to achieve it with earlier designs,
but ran into a couple of brick walls.

>   It is necessary to continue with the process in (5) even if we
>   are not in the largest subgraph.  Otherwise one rogue authority
>   could create a number of extra votes (by new authorities) so that
>   everybody stops at 5 and no consensus is built, even though it would
>   be trusted by all clients.

This is the first attack I thought of; glad you thought of it too. :)

> Possible Attacks/Open Issues/Some thinking required:
>  Q: Can a number (less or exactly half) of the authorities cause an honest
>     authority to vote for "their" consensus rather than the one that would
>     result were all authorities taken into account?

This algorithm is graph-theory-ish enough that we should be able to
say something mathematically strong here.
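To make the graph intuition concrete, here is a toy sketch of the "find the largest subgraph of agreeing authorities" step. The names, the mutual-trust model (two authorities "agree" iff each lists the other), and the helper functions are my illustration of one plausible reading, not the proposal's exact algorithm.

```python
def mutual_trust_components(trust):
    """trust: dict mapping authority -> set of authorities it lists.

    Two authorities "agree" here iff each lists the other.  Returns
    the connected components of that mutual-agreement graph."""
    # Build the undirected mutual-trust adjacency: keep only edges
    # that are listed in both directions.
    adj = {a: {b for b in trust[a] if a in trust.get(b, set())}
           for a in trust}
    seen, components = set(), []
    for start in adj:
        if start in seen:
            continue
        # Depth-first traversal to collect one component.
        stack, comp = [start], set()
        while stack:
            node = stack.pop()
            if node in comp:
                continue
            comp.add(node)
            stack.extend(adj[node] - comp)
        seen |= comp
        components.append(comp)
    return components

def largest_component(trust):
    """The subgraph a well-behaved authority would try to join."""
    return max(mutual_trust_components(trust), key=len)
```

For example, if A, B, and C all list each other but D only lists A, then D's edge is not mutual and the largest component is {A, B, C} -- D cannot drag honest authorities out of it just by voting.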

I have a few more open issues:

  Q: What does this do for caching?  Currently, there is at most one
     live consensus at a time, and caches cache that.  What do caches
     do if there are multiple consensuses?  Do they act as clients do
     now, and only accept a consensus if it is signed by a majority of
     the authorities they recognize?  If so, will this ever lead to
     caches holding documents clients don't want, or repeatedly
     bugging the authorities for a consensus they don't have?  If so,
     what can be done?
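The "act as clients do now" option above boils down to a majority check over the authorities a given cache or client recognizes. A minimal sketch of that rule, assuming signatures have already been cryptographically verified (the real Tor check does much more; this function name and interface are hypothetical):

```python
def acceptable(consensus_signers, recognized):
    """Hypothetical helper: accept a consensus iff strictly more than
    half of the authorities this client/cache recognizes have signed
    it.  Signers the client does not recognize are simply ignored."""
    valid = set(consensus_signers) & set(recognized)
    return 2 * len(valid) > len(recognized)
```

Note that under this rule two caches with different recognized sets can accept different consensuses, which is exactly the situation the question is worried about.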

  Q: How do we do this in a backward compatible way?  There should be
     a spec for that too.

  A less technical Q: What opportunities does this create for social
     attacks?  One of the reasons for choosing the current voting
     approach was to limit the incentives for a social attacker to
     attempt to peel off authorities by convincing only some of them
     to accept a bogus authority that looks only slightly off.  There have been
     similar attacks in the remailer world, I believe.  How can we
     make this less likely?


More information about the tor-dev mailing list