On 2/11/21 7:42 PM, Nicholas Hopper wrote:
Hi George!
Suuuup, fellow Chaum fan! I liked bnymble! It was the least scary of all of the revocation schemes I've read. At least I could understand the linkability properties.
Were you at the party when Chaum's patents expired? I forget. I suppose there probably was more than one of those, too.
A couple of thoughts about this proposal:
On Thu, Feb 11, 2021 at 5:36 PM George Kadianakis <desnacked@riseup.net> wrote:

## 4.1. Token issuer setup
The Issuer creates a set of ephemeral RSA-1024 "issuance keys" that will be used during the issuance protocol. Issuers rotate these ephemeral keys every 6 hours.
The Issuer exposes the set of active issuance public keys through a REST HTTP API that can be accessed by visiting /issuers.keys.
Tor directory authorities periodically fetch the issuer's public keys and vote for those keys in the consensus so that they are readily available to clients. The keys in the current consensus are considered active, whereas keys that have fallen out of the consensus are considered expired.
XXX How many issuance public keys are active at any given time? How do overlapping keys work? Clients and onion services need to know the precise expiration date for each key. This needs to be specified and tested for robustness.
XXX How often does the fetch happen? How does the voting work? Which issuers are considered official? Specify the consensus method.
XXX An alternative approach: Issuer has a long-term ed25519 certification key that creates expiring certificates for the ephemeral issuance keys. Alice shows the certificate to the service to prove that the token comes from an issuer. The consensus includes the long-term certification key of the issuers to establish ground truth. This way we avoid the synchronization between dirauths and issuers, and the multiple overlapping active issuance keys. However, certificates might not fit in the INTRODUCE1 cell (prop220 certs take 104 bytes on their own). Also certificate metadata might create a vector for linkability attacks between the issuer and the verifier.
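To make the size concern in that note concrete, here is a rough Python sketch of such an expiring certificate over an ephemeral issuance key. The field layout is made up for illustration (it is not the prop220 format), and the 'cryptography' library is assumed for the ed25519 part:

    import struct, time
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    cert_key = Ed25519PrivateKey.generate()      # issuer's long-term certification key
    issuance_key_digest = b"\x00" * 32           # stand-in: digest of the ephemeral RSA issuance key
    expiry = int(time.time()) + 6 * 3600         # 6-hour lifetime, matching the rotation period

    body = struct.pack("!B32sQ", 0x01, issuance_key_digest, expiry)   # version || key digest || expiry
    cert = body + cert_key.sign(body)                                 # append 64-byte ed25519 signature
    assert len(cert) == 41 + 64                  # ~105 bytes even in this stripped-down form

Even this stripped-down certificate comes out around 105 bytes, which is roughly the prop220 figure quoted above and shows why fitting it into INTRODUCE1 alongside the token is tight.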
## 4.2. Onion service signals ongoing DoS attack
When an onion service is under DoS attack it adds the following line in the "encrypted" (inner) part of the v3 descriptor as a way to signal to its clients that tokens are required for gaining access:
"token-required" SP token-type SP issuer-list NL [At most once] token-type: Is the type of token supported ("res" for this proposal) issuer: A comma separated list of issuers which are supported by this onion service
How are issuers identified? I ask because of a potential problem noted below...
We debated this and ultimately decided on a REST service that lists the issuer keys, which the dirauths fetch (see Section 4.1 above). We can pin the TLS key(s) used to authenticate the request.
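For concreteness, a descriptor line under that scheme might look something like the following, where the issuer identifiers are placeholder key fingerprints rather than a settled format:

    token-required res A13F5C6D...,B92E01AA...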
### 4.3.1. Client preparation [DEST_DIGEST]
Alice first chooses an issuer supported by the onion service, based on her preferences, by consulting the consensus and her Tor configuration file for the current list of active issuers.
After picking a supported issuer, she performs the following preparation before contacting the issuer:
Alice extracts the issuer's public key (N,e) from the consensus
Alice computes a destination digest as follows:
   dest_digest = FDH_N(destination || salt)

   where:
   - 'destination' is the 32-byte ed25519 public identity key of the destination onion
   - 'salt' is a random 32-byte value
Alice samples a blinding factor 'r' uniformly at random from [1, N)
Alice computes: blinded_message = dest_digest * r^e (mod N)
After this phase is completed, Alice has a blinded message that is tailored specifically for the destination onion service. Alice will send the blinded message to the Token Issuer, but because of the blinding the Issuer does not get to learn the dest_digest value.
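For reference, the whole client-side preparation is only a few lines. Here is a minimal Python sketch, with the caveat that the exact FDH construction (here a SHAKE-256 expansion reduced mod N) is my assumption and not something the proposal has pinned down:

    import hashlib
    import secrets

    def fdh(data, N):
        """Hash 'data' onto [0, N) by expanding SHAKE-256 output past the size of N."""
        nbytes = (N.bit_length() + 7) // 8
        digest = hashlib.shake_256(data).digest(nbytes + 16)   # oversample so the mod-N bias is negligible
        return int.from_bytes(digest, "big") % N

    def prepare(destination, N, e):
        """Return (blinded_message, r, salt) for the issuance request."""
        salt = secrets.token_bytes(32)               # random 32-byte salt
        dest_digest = fdh(destination + salt, N)     # FDH_N(destination || salt)
        r = secrets.randbelow(N - 1) + 1             # blinding factor in [1, N)
        blinded_message = (dest_digest * pow(r, e, N)) % N
        return blinded_message, r, salt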
XXX Is the salt needed? Reevaluate.
Yes, the salt is needed (or, *some* input besides the destination must go into the FDH); otherwise, all (unblinded) tokens signed by a given issuance key will be identical. This would be great for unlinkability but not so good for double-spend prevention. :)
Aha! George and I knew we needed to salt our Chaum burgers, but neither of us could remember why. Such ancient lore. Much teriyaki.
See, this is why we need real cryptographers looking at this stuff!
We propose a new EXT_FIELD_TYPE value:
[02] -- ANON_TOKEN
The EXT_FIELD content format is:
   TOKEN_VERSION   [1 byte]
   ISSUER_KEY      [4 bytes]
   DEST_DIGEST     [32 bytes]
   TOKEN           [128 bytes]
   SALT            [32 bytes]
where:
- TOKEN_VERSION is the version of the token ([0x01] for Res tokens)
- ISSUER_KEY is the public key of the chosen issuer (truncated to 4 bytes)
- DEST_DIGEST is the 'dest_digest' from above
- TOKEN is the 'token' from above
- SALT is the 32-byte 'salt' added during blinding
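A quick sketch of packing that extension body (197 bytes total), assuming ISSUER_KEY is simply the first 4 bytes of whatever full-length key identifier we settle on (see below):

    import struct

    def pack_anon_token(issuer_key_id, dest_digest, token, salt):
        assert len(issuer_key_id) == 4 and len(dest_digest) == 32
        assert len(token) == 128 and len(salt) == 32
        # TOKEN_VERSION || ISSUER_KEY || DEST_DIGEST || TOKEN || SALT
        return struct.pack("!B4s32s128s32s", 0x01, issuer_key_id, dest_digest, token, salt)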
Is it a problem that it is trivial to produce an RSA key with a given 4-byte truncation, so that an adversarial issuer could choose a key whose truncation matches another issuer's key? You can generate an RSA key with a targeted value in its most- or least-significant bytes in roughly the same amount of work it takes to generate an RSA key at all. (For example, to target a value t in the 4 least-significant bytes: find a prime p, then set the 4 least-significant bytes of a candidate q to t*p^{-1} mod 2^{32} before choosing the rest of q at random.)
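To spell that out, here is a runnable sketch of the trick (sympy is assumed for primality testing; it searches an arithmetic progression for q rather than choosing its high part at random, which amounts to the same thing):

    from sympy import randprime, isprime

    t = 0xDEADBEEF                       # target low 32 bits, e.g. a victim issuer's key-id
    p = randprime(2**511, 2**512)        # an ordinary random prime
    q_low = (t * pow(p, -1, 2**32)) % 2**32

    # Any prime q with q % 2**32 == q_low gives N = p*q with N % 2**32 == t.
    q = 2**511 + q_low
    while not isprime(q):
        q += 2**32

    N = p * q                            # a roughly 1024-bit modulus
    assert N % 2**32 == t                # truncated key-id collides with the target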
Well, again, the dirauths will list the full-length issuer fingerprints in the consensus to determine the actual key, so all we need here is a key-id hint for which key to verify the signature with. Since we don't expect a lot of issuers, the likelihood of a 32-bit collision is low. I suppose we should also specify that the REST service must reject any keys whose key-ids collide.