[tor-dev] prop224: Ditching key blinding for shorter onion addresses

George Kadianakis desnacked at riseup.net
Fri Jul 29 15:26:38 UTC 2016


Hello people,

this is an experimental mail meant to address legitimate usability concerns
with the size of onion addresses after proposal 224 gets implemented. It's
meant for discussion and it's far from a full-blown proposal.

Anyway, after prop224 gets implemented, we will go from 16-character onion
addresses to 52-character onion addresses. See here for more details:
          https://gitweb.torproject.org/torspec.git/tree/proposals/224-rend-spec-ng.txt#n395
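
To see where those numbers come from: base32 packs 5 bits per character,
current addresses are an 80-bit truncated hash, and prop224 addresses carry
a full 256-bit ed25519 public key. A quick sanity check in Python:

    import math

    # base32 encodes 5 bits per character
    print(80 // 5)             # 16 characters (current truncated-hash addresses)
    print(math.ceil(256 / 5))  # 52 characters (full ed25519 public key)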

This happens because we want the onion address to be a real public key, and not
the truncated hash of a public key as it is now. We want that so that we can do
fun cryptography with that public key. Specifically, we want to do key blinding
as specified here:
          https://gitweb.torproject.org/torspec.git/tree/proposals/224-rend-spec-ng.txt#n1692

As I understand it, the key blinding scheme is trying to achieve the following properties:
a) Every HS has a permanent identity onion address
b) Clients use an ephemeral address to fetch descriptors from the HSDirs
c) Knowing the ephemeral address never reveals the permanent onion address
d) Descriptors are encrypted and can only be read by clients that know the identity onion key
e) Descriptors are signed and verifiable by clients who know the identity onion key
f) Descriptors are also verifiable, in a weaker manner, by HSDirs who know only the ephemeral address

In this email I'm going to sketch a scheme that has all of the above properties except (f).

The suggested scheme is basically the current HSDir protocol, but with clients
using ephemeral addresses for fetching HS descriptors. Also, we truncate onion
address hashes to something larger than the current 80 bits.

Here is a sketch of the scheme:

------

Hidden service Alice has a long-term public identity key: A
Hidden service Alice has a long-term private identity key: a

The onion address of Alice, as in the current scheme, is a truncated H(A).
So let's say: onion_address = H(A) truncated to 128 bits.
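
As a purely illustrative sketch in Python, with SHA3-256 standing in for H
(this mail doesn't pin down the hash function, so that choice is mine):

    import hashlib

    def onion_address(A: bytes) -> bytes:
        """Truncated hash of Alice's long-term public identity key A."""
        # H assumed to be SHA3-256 for illustration only
        return hashlib.sha3_256(A).digest()[:16]   # 128 bits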

The full public key A is contained in Alice's descriptor, as is currently the case.

When Alice wants to publish a descriptor, she computes an ephemeral address
based on the current time period 't': ephemeral_address = H(t || onion_address)

Legitimate clients who want to fetch the descriptor also do the same, since
they know both 't' and 'onion_address'.
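
In code, continuing the same illustrative assumptions (SHA3-256 for H, and
't' serialized as an 8-byte integer, which is an arbitrary choice):

    import hashlib

    def ephemeral_address(t: int, onion_address: bytes) -> bytes:
        """Address used on the HSDir hash ring for time period t.

        Both Alice and her clients can compute this, since they know t and
        onion_address; an HSDir that only sees the result cannot recover
        onion_address from it.
        """
        return hashlib.sha3_256(t.to_bytes(8, "big") + onion_address).digest()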

Descriptors are encrypted using a key derived from the onion_address. Hence,
only clients that know the onion_address can decrypt it.
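
A minimal sketch of that, assuming a SHA3-256-based key derivation and
ChaCha20-Poly1305 for the encryption itself (both choices are mine for
illustration, not anything specified here):

    import hashlib, os
    from cryptography.hazmat.primitives.ciphers.aead import ChaCha20Poly1305

    def encrypt_descriptor(onion_address: bytes, descriptor: bytes) -> bytes:
        # Only someone who knows onion_address can rederive this key.
        key = hashlib.sha3_256(b"desc-enc-key" + onion_address).digest()
        nonce = os.urandom(12)
        return nonce + ChaCha20Poly1305(key).encrypt(nonce, descriptor, None)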

Descriptors are signed using the long-term private key of the hidden service,
and can be verified by clients who manage to decrypt the descriptor.
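
Sketched with the 'cryptography' package's ed25519 API, again just for
illustration:

    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    a = Ed25519PrivateKey.generate()       # Alice's long-term private identity key
    A = a.public_key()                     # ...and the matching public key

    descriptor = b"...descriptor plaintext..."
    signature = a.sign(descriptor)

    # Client side: after decrypting the descriptor (and thus learning A),
    # the client checks the signature; verify() raises on failure.
    A.verify(signature, descriptor)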

---

Assuming the above is correct and makes sense (need more brain), it should
maintain all of the security properties above except (f).

So basically in this scheme, HSDirs won't be able to verify the signatures of
received descriptors.

The obvious question here is, is this a problem?

IIUC, having the HSDirs verify those signatures does not offer any additional
security, beyond making sure that the descriptor signature was actually
created with a legitimate ed25519 key. Other than that, I don't see it
offering much.

So, what does this additional HSDir verification offer? It seems like a weak
way to ensure that no garbage is uploaded to the HSDir hash ring. However, any
reasonable attacker will put their garbage in a descriptor and sign it with a
random ed25519 key, and it will trivially pass the HSDir validation.
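
To make that concrete: the following passes any such signature check, even
though the "descriptor" is pure garbage (sketched with the 'cryptography'
package, purely for illustration):

    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    throwaway = Ed25519PrivateKey.generate()        # fresh random key, no identity behind it
    garbage = b"arbitrary junk padded out to descriptor size"
    sig = throwaway.sign(garbage)
    throwaway.public_key().verify(sig, garbage)     # verifies fine; the HSDir learns nothing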

So do we actually care about this property enough to introduce huge onion
addresses to the system?

Please discuss and poke holes in the above scheme.

Cheers!
