[tor-dev] Proposal 280: Privacy-Preserving Statistics with Privcount in Tor

Karsten Loesing karsten at torproject.org
Tue Sep 12 18:27:12 UTC 2017

Hi Tim,

unfortunately, nobody from the metrics team can attend today's proposal
280 discussion in a few hours.

That's why we decided to provide some written feedback here.

We didn't find anything problematic in the proposal from the Tor metrics
perspective.

This is due to the narrow scope covering only the communication protocol
between tally servers and relays, as we understand it.

All topics related to deriving counts, calculating final results, and
anything else that could affect currently running metrics code are
explicitly excluded or not mentioned.

If we misunderstood the scope and there is actually a part that covers
current or future metrics code, please let us know, and we'll check that
part.

Thanks for working on privacy-preserving statistics in Tor!

All the best,

Karsten
On 2017-09-12 01:44, teor wrote:
>> On 8 Aug 2017, at 03:50, Nick Mathewson <nickm at torproject.org> wrote:
>> [reposting this message with permission.  It is a reply that I sent to
>> Aaron, where I quoted an email from him about this proposal. Tim and
>> Aaron had additional responses, which I'll let them quote here or not
>> as they think best.]
> [Re-posting this edited thread with permission. It's a conversation that
> continues on from the last re-post.]
> Aaron:
>> Tim:
>>> Aaron:
>>>> Nick:
>>>>> Aaron:
>>>>> ...
>>>>>> - I believe that instead of dealing with Tally Reporter (TR) failures using multiple subsets, you could instead simply use (t,n) secret sharing, which would survive any t-1 failures (but also allow any subset of size t to determine the individual DC counts). The DC would create one blinding value B and then use Shamir secret sharing to send a share of B to each TR. To aggregate, each TR would first add together its shares, which would yield a share of the sum of the blinding values from all DCs. Then the TRs could simply reconstruct that sum publicly, which, when subtracted from the public, blinded, noisy counts would reveal the final noisy sum. This would be more efficient than having each TR publish multiple potential inputs to different subsets of TRs.
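[Editor's note: a toy sketch of the (t,n) scheme described above, not PrivCount code; the field size, threshold, and party counts are my own choices. It shows why Shamir sharing handles TR failures: shares are additive, so each TR can sum the shares it holds from all DCs, and any t of those sums reconstruct the total blinding value.]

```python
# Toy (t,n) Shamir sharing of blinding values, illustrating the
# aggregation described in the quoted paragraph. Not PrivCount code.
import secrets

P = 2**61 - 1  # a Mersenne prime; field size is an arbitrary choice here

def share(secret, t, n):
    """Split `secret` into n Shamir shares with threshold t."""
    coeffs = [secret] + [secrets.randbelow(P) for _ in range(t - 1)]
    return [(x, sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P)
            for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 over any t shares."""
    total = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        total = (total + yi * num * pow(den, P - 2, P)) % P
    return total

# Two DCs with blinding values B1, B2; 5 TRs, threshold 3.
B1, B2 = secrets.randbelow(P), secrets.randbelow(P)
s1, s2 = share(B1, 3, 5), share(B2, 3, 5)
# Each TR adds the shares it received from both DCs...
summed = [(x, (y1 + y2) % P) for (x, y1), (_, y2) in zip(s1, s2)]
# ...and any 3 TRs reconstruct the blinding-value sum, surviving 2 failures.
assert reconstruct(summed[:3]) == (B1 + B2) % P
```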
>>>>> So, I might have misunderstood the purpose here: I thought that the
>>>>> instances were to handle misbehaving DCs as well as malfunctioning
>>>>> TRs.
>>>> The mechanism you described (having each DC report different encrypted counters for different subsets of TRs) doesn’t handle failed (i.e. crashed) DCs. To handle failed DCs in the scheme you describe (with the blinding values stored encrypted in a document), you can just have the TRs agree on which DCs succeeded at the end of the measurement and only use blinding values from those DCs. So you don’t need multiple TR subsets to handle failed DCs.
>>> Each *subset* of DCs reports to a subset of the TRs.
>>> This deals with malicious and outlying DC values, as well as failed DCs.
>>> And it deals with failed TRs as well.
>> This seems unnecessary and inefficient. DC failures can be handled by the TRs at the end. TR failures can be handled using Shamir secret sharing.
>>>> Also, I should mention the reasons that Rob and I didn’t mention fault tolerance in the PrivCount design:
>>>>  1. The TRs (aka the SKs) only need to be online long enough to receive their blinding values, add them, and send the sum out. Therefore a measurement can recover from a failed TR if its blinding values are persistently stored somewhere and if *at any point* the TR can be restarted.
>>> In the event of key compromise, or operator trust failure, or operator
>>> opt-out, the TR can never be restarted (securely).
>> If these are real concerns, then you should use Shamir secret sharing across the TRs. Honestly, they seem unlikely to me, and the cost of missing one round of statistics seems low. However, the cost of dealing with them is also low, and so you might as well do it!
>>>>  2. Handling DC failures is trivial. As mentioned above, the TRs simply wait until the end to determine which DCs succeeded and should have their blinding values included in the sum.
>>> How would you do this securely?
>>> Any scheme I think of allows a malicious TR to eliminate particular
>>> relays.
>> A malicious TR can in any case eliminate a particular relay by destroying the outputs of any subsets containing that relay. Destroying an output is done by using a random value as the blinding value, making the output random (and likely obviously so). The privacy comes from the differentially private noise, and because TRs won’t agree on subsets that would reduce the added noise below the desired amount, the adversary couldn’t break privacy by eliminating particular relays. Moreover, if you wanted, you could use a secure broadcast (e.g. the Dolev-Strong protocol) to enable the TRs to agree on the union of DCs that any one of the TRs received the counters documents from. Such a secure broadcast is used in PrivCount to get consensus on the deployment and configuration documents.
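[Editor's note: a minimal illustration of the additive blinding being discussed, with made-up parameters; not PrivCount code. It shows that the aggregate is recovered only when every TR contributes its blinding sum, and that a TR substituting a random value destroys the output without learning anything extra.]

```python
# Toy additive blinding: each DC publishes its noisy count plus
# per-TR blinding values mod P; TRs publish blinding sums.
import secrets

P = 2**61 - 1  # field size chosen arbitrarily for the sketch
n_trs = 3

# Two DCs, each with a noisy count and one blinding value per TR.
dcs = []
for noisy_count in (40, 17):
    blinds = [secrets.randbelow(P) for _ in range(n_trs)]
    blinded = (noisy_count + sum(blinds)) % P
    dcs.append((blinded, blinds))

# Each TR publishes the sum of the blinding values it received.
tr_sums = [sum(dc[1][k] for dc in dcs) % P for k in range(n_trs)]

# Aggregation: blinded counts minus all TR blinding sums.
result = (sum(b for b, _ in dcs) - sum(tr_sums)) % P
assert result == 57  # 40 + 17

# A malicious TR reporting a random value instead makes the output
# uniformly random (destroyed), but reveals no individual DC count.
bad = tr_sums[:]
bad[0] = secrets.randbelow(P)
garbage = (sum(b for b, _ in dcs) - sum(bad)) % P
```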
>>>> Also, one thing I forgot to mention in my last email is that you have removed the Tally Server, which is an untrusted entity that essentially acts as a public bulletin board. Without such a collection point, who obtains the outputs of the TRs and computes the final result?
>>> We'll work with Tor metrics to decide on a mechanism for taking the
>>> counts from each TR subset, and turning them into a final count.
>>> This would probably be some kind of median, possibly discarding
>>> nonsensical values first.
>> If you plan to release multiple values from different DC subsets to handle nonsensical values, then you will have to increase the noise to handle the additional statistics. This can be done just as with handling DC failures: TRs agree on several DC subsets from among the DCs that didn’t fail and then release a blinding value sum for each subset. Note that DCs actually only need to send one set of blinding values and one set of counters to the TRs.
>>>> - Storing at the DC the blinded values encrypted to the TRs seems to violate forward privacy in that if during the measurement the adversary compromises a DC and then later (even after the final release) compromises the key of a TR, the adversary could determine the state of the DC’s counter at the time of compromise. This also applies to the optimization in Sec. 6, where a shared secret is hashed to produce the blinding values.
>>>>> Well, the adversary would need to compromise the key of _every_ TR in
>>>>> at least one instance, or they couldn't recover the actual counters.
>>>> That’s true.
>>>>> I guess we could, as in the original design (IIUC), send the encrypted
>>>>> blinding values (or public DH key in sec 6) immediately from the DC
>>>>> when it generates them, and then throw them away client-side.  Now the
>>>>> adversary would need to break into all the TRs while they were holding
>>>>> these encrypted blinding values.
>>>> Right, that is the original design and would provide a bit more forward security than in the current spec.
>>>>> Or, almost equivalently, I think we could make the TR public
>>>>> encryption keys only get used for one round. That's good practice in
>>>>> general, and it's a direction I generally like.
>>>> That would work, too.
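[Editor's note: a sketch of the idea exchanged above, combining the Sec. 6 hash-derived blinding values with one-round keys; the group, labels, and function names are my own toy choices, not the proposal's (a real deployment would use curve25519 rather than this classic DH group). Both sides derive the same blinding value without transmitting it, and discarding the per-round secret keys afterwards gives the forward security discussed.]

```python
# Toy sketch: derive a per-round blinding value by hashing a DH
# shared secret with round and counter labels. Not PrivCount code.
import hashlib
import secrets

# Toy Diffie-Hellman group (illustrative only).
P = 2**127 - 1
G = 5

def keypair():
    sk = secrets.randbelow(P - 2) + 1
    return sk, pow(G, sk, P)

def blinding_value(my_sk, their_pk, round_id, counter_id):
    """Hash the shared secret together with round/counter labels."""
    shared = pow(their_pk, my_sk, P)
    data = b"|".join([shared.to_bytes(16, "big"),
                      round_id.encode(), counter_id.encode()])
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % P

dc_sk, dc_pk = keypair()   # fresh keys for this round only
tr_sk, tr_pk = keypair()   # fresh keys for this round only
# DC and TR derive the same blinding value independently; deleting
# the secret keys at round end protects earlier rounds from a later
# key compromise.
b_dc = blinding_value(dc_sk, tr_pk, "round-1", "exit-bytes")
b_tr = blinding_value(tr_sk, dc_pk, "round-1", "exit-bytes")
assert b_dc == b_tr
```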
>>>>> [*] One anomaly detection mechanism I've been thinking of is to look
>>>>> at different "protocol-warn" log messages.  These log messages
>>>>> indicate that some third party is not complying with the protocol.
>>>>> They're usually logged at info, since there's nothing an operator can
>>>>> do about them, but it would be good for us to get notification if some
>>>>> of them spike all of a sudden.
>>>> Really interesting idea! Rob and I are interested in looking for attacks on the Tor network using metrics as well. This kind of anomaly reminds me of the RELAY_EARLY attack that you wrote a detector for.
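[Editor's note: a hypothetical sketch of the spike-detection idea in the footnote above, flagging a protocol-warn message type whose hourly count far exceeds its recent moving average. The window, factor, and floor are arbitrary choices of mine, not from the thread.]

```python
# Hypothetical spike detector over hourly protocol-warn counts.
from collections import deque

def make_detector(window=24, factor=5.0, min_count=10):
    """Flag an hour whose count exceeds `factor` times the recent
    average, ignoring counts below `min_count` to suppress noise."""
    history = deque(maxlen=window)
    def observe(hourly_count):
        baseline = sum(history) / len(history) if history else 0.0
        spike = (hourly_count >= min_count and
                 hourly_count > factor * max(baseline, 1.0))
        history.append(hourly_count)
        return spike
    return observe

detect = make_detector()
for c in [3, 4, 2, 5, 3, 4]:   # normal background rate: no alerts
    assert not detect(c)
assert detect(120)             # sudden spike triggers an alert
```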
> T
> --
> Tim Wilson-Brown (teor)
> teor2345 at gmail dot com
> PGP C855 6CED 5D90 A0C5 29F6 4D43 450C BA7F 968F 094B
> ricochet:ekmygaiu4rzgsk6n

