Greetings,
I'm supposed to write a Tor proposal for the migration of the long-term identity keys of Hidden Services. When I began writing the proposal, I realized that some of my choices might not be appreciated by Hidden Service operators, and that starting a discussion thread might be a good idea before writing the proposal.
The problem with the current long-term HS identity keys is that they are RSA-1024 keys which are considered weak, and they need to be upgraded to a cryptosystem with higher security properties.
One of the main issues with this operation is whether Hidden Services will be accessible using their *old* identity keys even after the migration.
That is, when we change the identity keys of a Hidden Service, its onion also changes (since the onion is the truncated hash of its public key). This will be quite problematic for Hidden Services that have a well-established onion address.
There are basically two ways to do this:
a) After the transition, Hidden Services can be visited _only_ on their _new_ onion addresses.
This is quite brutal, but it's the most secure and unambiguous option (might also be easier to implement and deploy).
This change can be enforced both on the client-side, by rejecting any old RSA-1024 HS keys, and on the server-side, by only publishing the new keys in HS descriptors.
To make the transition easier, we could prepare a tool that generates a new identity keypair before the flag day, so that Hidden Service operators can learn their future onion address beforehand and announce it to their users.
b) After the transition, Hidden Services can use both old and new onion addresses.
This might result in a more harmonious transition, where Hidden Services advertise their new onion address to users that visit them in their old address.
.oO(It would also be interesting to do a redirection on the Tor protocol layer ("I got this descriptor by querying for the old onion address, but it also contains a new onion address. I should probably use the new one."), but I don't think it's possible to redirect the user without knowledge of the application-layer protocol (e.g. 302 for HTTP). Still, a Tor log message might be helpful.)
The downside of this approach is that supporting both addresses might make the HS protocol more complicated and painful to implement, and it might also result in some Hidden Services never moving to the new onion addresses, since clients can still visit them using the old insecure ones.
This approach has a stricter variant, where the old addresses can only be used during a transitional period (a few months?). After that, clients _have_ to use the new addresses. Of course, this means that we will have to do two flag days, coordinate Tor releases, and other no-fun stuff.
I'm probably moving towards the latter option because the former will make many people unhappy.
Thoughts?
(This is not a thread to select the cryptosystem we are going to use. That would derail the discussion, and we might also need to select a specific type of cryptosystem in the end (e.g. a discrete-log-based system) so that schemes like https://trac.torproject.org/projects/tor/ticket/8106 remain possible.)
On 2013-05-17 15:23 , George Kadianakis wrote: [..]
That is, when we change the identity keys of a Hidden Service, its onion also changes (since the onion is the truncated hash of its public key). This will be quite problematic for Hidden Services that have a well-established onion address.
(Just a brain ramble; might be something useful, might be useless ;)
Each hidden service could run, on a given port/protocol, a service that answers with the hashes it is responsible for (a 'service packet'), e.g. by signing the packet with each of the corresponding keys. The client can thereby learn that the service is available under different hashes.
The 'service packet' could indicate a 'hash preference', thus enabling the client to pick the 'preferred' hash. This effectively allows multi-homing of the service if the preferences are the same, and/or allows moving to a new crypto key quite transparently, since the old hash is still available and can be checked.
Note that this requires being able to sign those packets with the key behind the new hash; since one then holds the corresponding private key, no cheating would be possible.
The 'service packet' could contain a "well known name" or "description" so that these packets can be stored/indexed and the user can use this identifier for accessing the service.
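A minimal sketch of what Jeroen's 'service packet' might look like, assuming a dict-based shape; every field name here is an assumption for illustration, not part of any Tor spec, and the signatures are left as placeholders:

```python
# Illustrative sketch of the 'service packet' idea; field names are
# assumptions, and real packets would carry actual signatures.

def make_service_packet(hashes, preference, description):
    """Build an unsigned packet listing the onion hashes a service answers for.

    `preference` orders the hashes from most to least preferred, letting a
    client pick the newest key while still being able to check the old one.
    """
    assert set(preference) == set(hashes)
    return {
        "hashes": list(hashes),
        "preference": list(preference),
        "description": description,
        # In a real design each entry would be signed with the private key
        # behind the corresponding hash, so no cheating is possible.
        "signatures": {h: None for h in hashes},
    }

def pick_hash(packet, known_hashes):
    """Client side: choose the most preferred hash the client recognizes."""
    for h in packet["preference"]:
        if h in known_hashes:
            return h
    return None

packet = make_service_packet(
    hashes=["newhashnewhashne", "oldhasholdhashol"],
    preference=["newhashnewhashne", "oldhasholdhashol"],
    description="example hidden service",
)
print(pick_hash(packet, {"oldhasholdhashol", "newhashnewhashne"}))
```

A client that only knows the old hash would still resolve it, while an up-to-date client transparently migrates to the preferred one.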
Of course, the question then becomes 'is the old one still valid, or has it been compromised?', and that is a hard one to answer... I guess having an expiry on a key would be a good thing.
An alternate approach: given a DNS tree that is trusted thanks to DNSSEC (yes, that comes with a lot of its own caveats ;) ), one could state that hidden.example.com has a CERT [1] record which contains hash X and hash Y. That would be the forward mapping, at least. A DANE[2]-like system also comes to mind.
[1] https://tools.ietf.org/html/rfc4398 [2] https://tools.ietf.org/wg/dane/
Greets, Jeroen
On Fri, May 17, 2013 at 03:44:27PM +0200, Jeroen Massar wrote:
On 2013-05-17 15:23 , George Kadianakis wrote: [..]
That is, when we change the identity keys of a Hidden Service, its onion also changes (since the onion is the truncated hash of its public key). This will be quite problematic for Hidden Services that have a well-established onion address.
(Just a brain ramble; might be something useful, might be useless ;)
Each hidden service could run, on a given port/protocol, a service that answers with the hashes it is responsible for (a 'service packet'), e.g. by signing the packet with each of the corresponding keys. The client can thereby learn that the service is available under different hashes.
The 'service packet' could indicate a 'hash preference', thus enabling the client to pick the 'preferred' hash. This effectively allows multi-homing of the service if the preferences are the same, and/or allows moving to a new crypto key quite transparently, since the old hash is still available and can be checked.
So you're imagining the ability to query the HS directly and request additional information? I think this is a good idea in general, but HSes are tricky. As they are right now, they can be forced to talk, which is a significant problem, and allowing additional querying will only add to this problem. Adding another trusted third party to hold these mappings may be an option, but that also adds to the complexity of the system for an as-of-yet-unknown benefit (as far as I can tell).
Note that this requires being able to sign those packets with the key behind the new hash; since one then holds the corresponding private key, no cheating would be possible.
The 'service packet' could contain a "well known name" or "description" so that these packets can be stored/indexed and the user can use this identifier for accessing the service.
This would be very useful, but still as above.
Of course, the question then becomes 'is the old one still valid, or has it been compromised?', and that is a hard one to answer... I guess having an expiry on a key would be a good thing.
Tom's system may be able to provide some sort of guarantee.
An alternate approach: given a DNS tree that is trusted thanks to DNSSEC (yes, that comes with a lot of its own caveats ;) ), one could state that hidden.example.com has a CERT [1] record which contains hash X and hash Y. That would be the forward mapping, at least. A DANE[2]-like system also comes to mind.
[1] https://tools.ietf.org/html/rfc4398 [2] https://tools.ietf.org/wg/dane/
This is definitely a good starting point, but I hope we can develop a solution that is less complex and better suited to our goals.
Greets, Jeroen
Thanks! - Matt
George Kadianakis:
Thoughts?
Can you make .onion domains really long and therefore really safe against brute force?
Or have an option for maximum key length and a weaker default if common CPUs are still too slow? I mean, if you want to make 2048-bit keys the default because you feel most hidden services have CPUs which are too slow for 4096-bit keys, then use 2048 bits as the default with an option to use the maximum of 4096 bits.
Bonus point: Can you make the new implementation support less painful updates (anyone or everyone) when the next update will be required? (forward compatibility)
Why are so many bits necessary? Isn't 128 bits technically safe against brute force? At 256 bits you are pretty much safe from any volume of computational power that one could fathom within this century. The only real danger is a new computational model that is nondeterministic or something crazy like that. I feel like what exists currently (from a quantity-of-bits standpoint) is more than sufficient.
On Fri, May 17, 2013 at 11:09 AM, adrelanos adrelanos@riseup.net wrote:
George Kadianakis:
Thoughts?
Can you make .onion domains really long and therefore really safe against brute force?
Or have an option for maximum key length and a weaker default if common CPUs are still too slow? I mean, if you want to make 2048-bit keys the default because you feel most hidden services have CPUs which are too slow for 4096-bit keys, then use 2048 bits as the default with an option to use the maximum of 4096 bits.
Bonus point: Can you make the new implementation support less painful updates (anyone or everyone) when the next update will be required? (forward compatibility)
_______________________________________________
tor-dev mailing list
tor-dev@lists.torproject.org
https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-dev
David Vorick:
Why are so many bits necessary? Isn't 128bits technically safe against brute force? At 256 bits you are pretty much safe from any volume of computational power that one could fathom within this century. The only real danger is a new computational model that is nondeterministic or something crazy like that. I feel like what exists currently (from a quantity of bits standpoint) is more than sufficient.
Sometimes clever people find ways to reduce the strength of an algorithm from X bits to X minus Y bits. Maybe it's not necessary right now, maybe it is. It is irrelevant.
Just look at how long people have used Tor with weak cryptography, and how long it takes to update that stuff. So I would be happy if the strongest crypto were implemented, or at least added as an option, while the developers are at it. Who knows when the update after next will come.
Bonus points: it makes many more paranoid/less educated on the topic/etc. people happy; fewer discussions about it; fewer conspiracy theories.
Kidding: we don't know how much computing power extraterrestrials have; we don't know if anyone already secretly uses a quantum computer.
Symmetric cryptography (AES et al) key length - the 128, 256 etc bits you are talking about - is not directly comparable to public/private key cryptography, specifically RSA in this case. 1024 bits was considered a good strong RSA key... in 1995.
On Fri, May 17, 2013, at 08:29 AM, David Vorick wrote:
Why are so many bits necessary? Isn't 128bits technically safe against brute force? At 256 bits you are pretty much safe from any volume of computational power that one could fathom within this century. The only real danger is a new computational model that is nondeterministic or something crazy like that. I feel like what exists currently (from a quantity of bits standpoint) is more than sufficient.
On Fri, May 17, 2013 at 11:09 AM, adrelanos <adrelanos@riseup.net> wrote:
George Kadianakis:
Thoughts?
Can you make .onion domains really long and therefore really safe against brute force?
Or have an option for maximum key length and a weaker default if common CPUs are still too slow? I mean, if you want to make 2048-bit keys the default because you feel most hidden services have CPUs which are too slow for 4096-bit keys, then use 2048 bits as the default with an option to use the maximum of 4096 bits.
Bonus point: Can you make the new implementation support less painful updates (anyone or everyone) when the next update will be required? (forward compatibility)
On Fri, May 17, 2013 at 11:29 AM, David Vorick david.vorick@gmail.com wrote:
Why are so many bits necessary? Isn't 128 bits technically safe against brute force?
Not for RSA keys. A 1024-bit RSA key is considered approximately as strong as an 80-bit symmetric key; a 2048-bit key is approximately as strong as a 112-bit symmetric key, and 2048 bits is the present recommended keysize. See https://www.rsa.com/rsalabs/node.asp?id=2004
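The equivalences Zack cites can be tabulated; the figures below follow the commonly quoted NIST SP 800-57 estimates (the 1024/80 and 2048/112 pairs match the text above, the rest are the standard extrapolations):

```python
# Approximate NIST SP 800-57 equivalences between RSA modulus sizes
# and symmetric key strengths, in bits.
RSA_TO_SYMMETRIC_BITS = {
    1024: 80,     # current HS identity keys: considered weak
    2048: 112,    # present recommended minimum
    3072: 128,
    7680: 192,
    15360: 256,
}

for rsa_bits, sym_bits in sorted(RSA_TO_SYMMETRIC_BITS.items()):
    print(f"RSA-{rsa_bits} ~ {sym_bits}-bit symmetric")
```

This is why "just use 128 bits" does not translate directly to RSA modulus sizes.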
(I imagine we will also be considering other asymmetric algorithms for this change, some of which provide more like the usual 1:1 keysize-to-security-parameter ratio.)
zw
On May 17, 2013 11:29 AM, "David Vorick" david.vorick@gmail.com wrote:
Why are so many bits necessary? Isn't 128 bits technically safe against brute force? At 256 bits you are pretty much safe from any volume of computational power that one could fathom within this century.
It sounds like you might be mixing up public key and symmetric ciphers. 128 bits is indeed fine for a symmetric cipher, though if you think quantum computing is around the corner you want 256.
But for public-key ciphers, you're not worried about brute-force searches: you're worried about factoring (for RSA-based stuff) or about discrete logarithms (for DH-based stuff, including ElGamal, DSA, etc.). Opinions differ on adequate key length, but many folks think that 2048-3072 bits is about right for RSA or for DH in Z_p*, whereas 192-256 bits is about right for DH in elliptic-curve groups. Some conservative folks want more bits; some brave folks want fewer.
George, I would definitely create an extended transition time frame: 6 months or a year where both keys will work. Just make it clear there is a cut-off date.
And I think adrelanos's concept is a valid one. Since we may need to do this again, why not put a structure in place that facilitates upgrades to the system itself?
On Fri, May 17, 2013 at 3:09 PM, adrelanos adrelanos@riseup.net wrote:
George Kadianakis:
Thoughts?
Can you make .onion domains really long and therefore really safe against brute force?
Or have an option for maximum key length and a weaker default if common CPUs are still too slow? I mean, if you want to make 2048-bit keys the default because you feel most hidden services have CPUs which are too slow for 4096-bit keys, then use 2048 bits as the default with an option to use the maximum of 4096 bits.
Bonus point: Can you make the new implementation support less painful updates (anyone or everyone) when the next update will be required? (forward compatibility)
George Kadianakis:
Thoughts?
Can you make .onion domains really long and therefore really safe against brute force?
Oh. That reminded me of a topic I forgot to insert in my original post.
An onion address is the truncated (80-bit) hash of the public identity key of a Hidden Service. This means that onion addresses currently provide 80 bits of self-authentication security.
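The derivation George describes can be sketched as follows. This is a simplified illustration: in real Tor the input would be the ASN.1 DER encoding of the RSA public identity key, while here a stand-in byte string is used:

```python
import base64
import hashlib

def onion_from_pubkey(der_pubkey_bytes: bytes) -> str:
    """Derive an onion address: base32 of the first 80 bits (10 bytes)
    of SHA-1 over the DER-encoded public identity key.

    The argument is a stand-in; a real implementation would pass the
    actual DER encoding of the service's RSA public key.
    """
    digest = hashlib.sha1(der_pubkey_bytes).digest()
    return base64.b32encode(digest[:10]).decode("ascii").lower()

addr = onion_from_pubkey(b"example DER-encoded public key")
print(addr, len(addr))  # 16 base32 characters, i.e. 80 bits
```

Because 10 bytes is exactly 80 bits, base32 needs exactly 16 characters and no padding, which is where the familiar 16-character onion address comes from.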
An attacker who wants to impersonate a Hidden Service can keep generating RSA-1024 keypairs until she finds a public key whose hash has the same 80-bit prefix as the hash of the public key of the Hidden Service she wants to impersonate. If she finds such a colliding public key, and manages to serve her own descriptor to the client instead of the descriptor of the original hidden service, she can effectively impersonate that hidden service.
While this is not an apparent threat at the moment, it would be a good idea to make onions more resistant to impersonation attacks in the future. The hash is currently truncated to 80 bits (16 characters of base32) because that's arguably a conveniently sized string to pass around verbally, write on a piece of paper, or graffiti on a wall.
If we move to the higher security of (e.g.) 128 bits, the base32 string suddenly becomes 26 characters. Is that still conveniently sized to pass around, or should we admit that we have failed this goal and are free to crank the security up to 256 bits (the output size of SHA-256), which is a 52-character string?
I'm not sure myself. I guess it depends on the security properties of the other primitives we select.
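The address lengths quoted above follow directly from base32 packing 5 bits per character, which is easy to check:

```python
import math

def base32_len(bits: int) -> int:
    """Characters needed to express `bits` of hash in base32 (5 bits/char)."""
    return math.ceil(bits / 5)

for bits in (80, 128, 256):
    print(f"{bits} bits -> {base32_len(bits)} base32 characters")
# 80 -> 16, 128 -> 26, 256 -> 52, matching the lengths discussed above
```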
Or have an option for maximum key length and a weaker default if common CPU's are still too slow? I mean, if you want to make 2048 bit keys the default because you feel most hidden services have CPU's which are too slow for 4096 bit keys, then use 2048 bit as default with an option to use the max. of 4096 bit.
Bonus point: Can you make the new implementation support less painful updates (anyone or everyone) when the next update will be required? (forward compatibility)
I was also trying to think of a solution to this problem, but I failed. You need some kind of scheme that allows Hidden Services to eventually migrate to other identity keypairs without changing their onion -- and the onion should still be able to act as an identifier for all those as-yet-unknown identity keypairs. I would be surprised if there is an elegant solution to this problem, but cryptography tends to surprise me.
A non-elegant solution might involve the Hidden Service having a long-term signing keypair that is stored offline, and with that keypair it signs its future identity keypairs (similarly to how PGP subkeys and master keys work). The onion in this case would be the hash of the long-term public key, and in the HS descriptors we would include the long-term public key, the current public identity key and a signature of the public identity key using the long-term key. This might work, but it's stupid, tiring (both for the implementors and the hidden service operators) and doesn't go well with schemes like https://trac.torproject.org/projects/tor/ticket/8106 .
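The "non-elegant" scheme above can be sketched as follows. This is a toy illustration with assumed names throughout: HMAC with a secret stands in for a real public-key signature (an imperfect analogy, since with HMAC the verifier needs the secret, which is exactly what real signatures avoid), and SHA-1 truncation stands in for the onion derivation:

```python
import hashlib
import hmac

def onion_id(master_pub: bytes) -> str:
    # The onion is derived from the long-term key, so it survives
    # rotation of the current identity keypair.
    return hashlib.sha1(master_pub).hexdigest()[:16]

def make_descriptor(master_pub: bytes, master_secret: bytes,
                    identity_pub: bytes) -> dict:
    # Done offline: the long-term key certifies the current identity key,
    # much like a PGP master key certifies its subkeys.
    sig = hmac.new(master_secret, identity_pub, hashlib.sha256).hexdigest()
    return {"master_pub": master_pub,
            "identity_pub": identity_pub,
            "identity_sig": sig}

def verify_descriptor(onion: str, desc: dict, master_secret: bytes) -> bool:
    # Check (a) the onion matches the long-term key, and (b) the current
    # identity key carries a valid certification from it.
    if onion_id(desc["master_pub"]) != onion:
        return False
    expected = hmac.new(master_secret, desc["identity_pub"],
                        hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, desc["identity_sig"])
```

The point of the sketch is the indirection: clients pin the long-term key via the onion, and the descriptor carries whatever identity key is current.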
George Kadianakis:
If we move to the higher security of (e.g.) 128 bits, the base32 string suddenly becomes 26 characters. Is that still conveniently sized to pass around, or should we admit that we have failed this goal and are free to crank the security up to 256 bits (the output size of SHA-256), which is a 52-character string?
In doubt: if possible, maintainable, not too much work, you name it... When having the less secure version as default, please let the hidden service hosts decide if they want to use the more secure version by using an option.
I don't know if the petname system is a completely orthogonal issue or if it could be considered when you decide this one.
Or have an option for maximum key length and a weaker default if common CPUs are still too slow? I mean, if you want to make 2048-bit keys the default because you feel most hidden services have CPUs which are too slow for 4096-bit keys, then use 2048 bits as the default with an option to use the maximum of 4096 bits.
Bonus point: Can you make the new implementation support less painful updates (anyone or everyone) when the next update will be required? (forward compatibility)
I was also trying to think of a solution to this problem, but I failed.
Thanks for considering!
adrelanos:
George Kadianakis:
If we move to the higher security of (e.g.) 128 bits, the base32 string suddenly becomes 26 characters. Is that still conveniently sized to pass around, or should we admit that we have failed this goal and are free to crank the security up to 256 bits (the output size of SHA-256), which is a 52-character string?
In doubt: if possible, maintainable, not too much work, you name it... When having the less secure version as default, please let the hidden service hosts decide if they want to use the more secure version by using an option.
I don't know if the petname system is a completely orthogonal issue or if it could be considered when you decide this one.
Or have an option for maximum key length and a weaker default if common CPUs are still too slow? I mean, if you want to make 2048-bit keys the default because you feel most hidden services have CPUs which are too slow for 4096-bit keys, then use 2048 bits as the default with an option to use the maximum of 4096 bits.
Bonus point: Can you make the new implementation support less painful updates (anyone or everyone) when the next update will be required? (forward compatibility)
I was also trying to think of a solution to this problem, but I failed.
I think you were heading in the right direction with the petname idea. What if we deployed a potentially shitty naming layer that "probably" won't break within the next 6-12 months, but *might* last quite a bit longer than that, for backward compatibility purposes.
This naming layer could allow interested parties to sign registration statements using their current onion key, with an expiration time, satisfying our deprecation desires for the 80-bit name. If the naming layer actually survives without visible compromise until that point, we could allow it to store signed statements about translations between the new keys and their desired names (first-come, first-served; names are reserved for N months until re-signed).
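One possible shape for such a registration statement, sketched with entirely hypothetical field names (nothing here is part of any Tor proposal), with the signature supplied by a caller-provided function standing in for signing with the onion key:

```python
import time

def make_registration(name, onion, sign, lifetime_days=180):
    """Bind a memorable name to an onion address, first-come first-served,
    with an expiry that forces periodic re-signing."""
    stmt = {
        "name": name,
        "onion": onion,
        "expires": int(time.time()) + lifetime_days * 86400,
    }
    # Signed with the current onion key, so only the key holder can
    # register or renew the name before it lapses.
    stmt["sig"] = sign(repr(sorted(stmt.items())).encode())
    return stmt

reg = make_registration("torproject", "idnxcnkne4qt76tg.onion",
                        sign=lambda blob: "<signature placeholder>")
```

The expiry field is what implements the "reserved for N months until re-signed" behavior: a lapsed name simply becomes claimable again.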
A more specific version of this question is: How readily could we hack Namecoin or some other similar consensus-based naming system[1] into Tor?
Such a mechanism would obviously provide enumeratability for hidden services that chose to use it, but hopefully it would be optional: you can still use IP addresses in browsers, after all.
In terms of verification, it would be trivial to alter the browser UI to display the actual key behind the hidden service (ie: through a control port lookup command and some kind of URL icon that varied depending on consensus naming status).
We could also provide a hacked version of CertPatrol that monitors the underlying public keys for you, and it would also be relatively easy to add a "second-look" authentication layer through the HTTPS-Everywhere SSL Observatory, similar to what exists now for SSL public keys.
In fact, if we can agree on a solid consensus-based naming scheme as a valid transition step, I think it is worth my time to let the rest of the browser burn while I implement some kind of backup authentication + UI for this. After all, memorable hidden service naming would be a usability improvement.
Should we try it?
The major downside I am seeing is PR fallout from the hidden services that choose to use it... They might be an unrepresentative subset of what people actually need hidden services for. I think the real win for hidden services is that we can turn them into arbitrary private communication endpoints, to allow people to communicate in ways that do not reveal their message contents *or* their social network. There are probably other uses whose promise would be lost in the noise generated from this scheme as well...
1. https://en.wikipedia.org/wiki/Namecoin.
We don't have to choose Namecoin, though. Another alternative is for the dirauths to add a URI for an "official" naming directory file as a parameter in the consensus, and also provide its SHA256/SHA-3 digest. A flatfile might be less efficient than Namecoin in terms of storage and bandwidth requirements, though. It's probably also easier to censor (unless it is something like a magnet link).
For all you Zooko's Triangle[2] fans: the Namecoin mechanism attempts to "square" the triangle with a first-come, first-served distributed consensus on the petnames document, but still falls back to "Secure+Global" at the expense of "Memorable". The interesting bit is that in this case, the browser UI can help you on the "Memorable" end, should the consensus mechanism fail behind your back.
2. https://en.wikipedia.org/wiki/Zooko%27s_triangle
adrelanos:
George Kadianakis:
If we move to the higher security of (e.g.) 128 bits, the base32 string suddenly becomes 26 characters. Is that still conveniently sized to pass around, or should we admit that we have failed this goal and are free to crank the security up to 256 bits (the output size of SHA-256), which is a 52-character string?
In doubt: if possible, maintainable, not too much work, you name it... When having the less secure version as default, please let the hidden service hosts decide if they want to use the more secure version by using an option.
I don't know if the petname system is a completely orthogonal issue or if it could be considered when you decide this one.
Or have an option for maximum key length and a weaker default if common CPUs are still too slow? I mean, if you want to make 2048-bit keys the default because you feel most hidden services have CPUs which are too slow for 4096-bit keys, then use 2048 bits as the default with an option to use the maximum of 4096 bits.
Bonus point: Can you make the new implementation support less painful updates (anyone or everyone) when the next update will be required? (forward compatibility)
I was also trying to think of a solution to this problem, but I failed.
I think you were heading in the right direction with the petname idea. What if we deployed a potentially shitty naming layer that "probably" won't break within the next 6-12 months, but *might* last quite a bit longer than that, for backward compatibility purposes.
This naming layer could allow interested parties to sign registration statements using their current onion key, with an expiration time, satisfying our deprecation desires for the 80-bit name. If the naming layer actually survives without visible compromise until that point, we could allow it to store signed statements about translations between the new keys and their desired names (first-come, first-served; names are reserved for N months until re-signed).
A more specific version of this question is: How readily could we hack Namecoin or some other similar consensus-based naming system[1] into Tor?
Such a mechanism would obviously provide enumeratability for hidden services that chose to use it, but hopefully it would be optional: you can still use IP addresses in browsers, after all.
In terms of verification, it would be trivial to alter the browser UI to display the actual key behind the hidden service (ie: through a control port lookup command and some kind of URL icon that varied depending on consensus naming status).
We could also provide a hacked version of CertPatrol that monitors the underlying public keys for you, and it would also be relatively easy to add a "second-look" authentication layer through the HTTPS-Everywhere SSL Observatory, similar to what exists now for SSL public keys.
In fact, if we can agree on a solid consensus-based naming scheme as a valid transition step, I think it is worth my time to let the rest of the browser burn while I implement some kind of backup authentication + UI for this. After all, memorable hidden service naming would be a usability improvement.
Should we try it?
The major downside I am seeing is PR fallout from the hidden services that choose to use it... They might be an unrepresentative subset of what people actually need hidden services for. I think the real win for hidden services is that we can turn them into arbitrary private communication endpoints, to allow people to communicate in ways that do not reveal their message contents *or* their social network. There are probably other uses whose promise would be lost in the noise generated from this scheme as well...
We don't have to choose Namecoin, though. Another alternative is for the dirauths to add a URI for an "official" naming directory file as a parameter in the consensus, and also provide its SHA256/SHA-3 digest. A flatfile might be less efficient than Namecoin in terms of storage and bandwidth requirements, though. It's probably also easier to censor (unless it is something like a magnet link).
For all you Zooko's Triangle[2] fans: the Namecoin mechanism attempts to "square" the triangle with a first-come, first-served distributed consensus on the petnames document, but still falls back to "Secure+Global" at the expense of "Memorable". The interesting bit is that in this case, the browser UI can help you on the "Memorable" end, should the consensus mechanism fail behind your back.
(I forked the thread, since this is hopefully orthogonal to HS identity key migration.)
Chuff chuff! Train of thought coming up, since this is a problem I've also been thinking about lately...
Mike, I like the simplicity and implementability of your idea. Giving signed (<name> to <onion address>) mappings to the directory authorities (in a FIFO fashion) and then publishing them as directory documents is effective and easy-ish both to implement and to understand.
That said, I wonder what's actually going to happen if we implement and deploy this. I imagine that scammers will try to win the race against the legitimate hidden services, and that they will flood the directory with false mappings. For example, scammers might claim all memorable names for the Tor website hidden service, like "torproject", "torpoject1", etc. (and I doubt that anyone can win a race against a well-equipped scammer...). In the end, many legit hidden services might need to register names like "t0rproj3ct1" and "123duckduckg0", which will be lost in the noise of that directory document. Then people might think that searching for "torproject" in the TBB petname tool ought to return the official torproject website, but instead the first results will be scammer websites.
Of course, the current situation, where people get their onions from pastebins and the "Hidden Wiki" (lulz), is not any better. Although I hope that when all URLs look random, people won't consider one URL more official than another (whereas in the above idea "torproject" might look a bit more official than "t0rpoj3ct"). Still, even in the current situation, a shallot-generated "torpr0jectkakqn.onion" might look more official than "idnxcnkne4qt76tg.onion" (which is the actual onion address of the torproject website)... I really don't know what the best way to proceed here is; it's tradeoffs all the way down.
If I could automagically generate secure technologies on a whim, I would say that some kind of decentralized, reputation-based, fair search engine for hidden services might squarify our Zooko's triangle a bit: "decentralized" so that no single entity has control over the results, "reputation-based" so that legit hidden services float to the top, and "fair" so that scammers with lots of boxes cannot game the system. Unfortunately, "fair" and "reputation-based" usually contradict each other.
In any case, Mike, your idea is definitely worth considering, but before designing and implementing it we should think about how to mitigate the easiest attacks against it.
Also, thankfully the idea in its basic form is orthogonal to the identity key migration project.
I liked the new subject, so I'm sticking with it. :)
On Sun, May 19, 2013 at 04:37:22AM -0700, George Kadianakis wrote:
adrelanos:
George Kadianakis: I don't know if the petname system is an completely orthogonal issue or if it could be considered when you decide this one.
Or have an option for maximum key length and a weaker default if
common
CPU's are still too slow? I mean, if you want to make 2048 bit keys
the
default because you feel most hidden services have CPU's which are
too
slow for 4096 bit keys, then use 2048 bit as default with an option
to
use the max. of 4096 bit.
Bonus point: Can you make the new implementation support less painful updates (anyone or everyone) when the next update will be required? (forward compatibility)
I was also trying to think of a solution to this problem, but I failed.
I think you were heading in the right direction with the petname idea. What if we deployed a potentially shitty naming layer that "probably" won't break within the next 6-12 months, but *might* last quite a bit longer than that, for backward compatibility purposes.
So I think we should make some terms clear (just for the sake of clarity). We have, I guess, three different naming-system ideas floating here: petnames, (distributed) namecoin-ish, and centralized consensus-based - rough summary.
Some months ago, the petname system interested me enough that I started to write a proposal for it. At this point, it's wound up in bitrot. Though I'd spent a bit of time working on it, there was no comprehensive way to accomplish it. One thing to remember about petnames is that they are *user defined*. It's a system where Alice gives Bob some-name and Bob assigns it a name (the same name or a different one) which he will forever know is mapped to that name Alice gave him. The advantage to this is that forgery is nearly impossible because if Eve gives Bob a name and tells him it's the same name that Alice gave him, the petname system will be a verifier - without requiring Bob to remember the name Alice gave him. In addition, Alice can give Bob idnxcnkne4qt76tg.onion and Bob can map that to 'torproject' and he is now forever able to access it using this memorable name. The only race condition in this system is Bob's Trusted Party from which he obtains the original address.
The problem I ran into with this scheme is where the mappings should be stored - who is in control of this? In short, is this a mapping that Tor persistently stores, or is it a client application that handles this? AND if it is a client application, that becomes a usability nightmare: if Tor Browser has an interface for it, then that's great, but what if I'm using irssi and lynx on a headless system? If Tor maintains this database, then for the petname to perform as expected, every application would need to support a minimal Controller and have the ability to resolve the name mappings (and possibly append to them, also).
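To make the mechanics concrete, here is a minimal sketch of the user-defined mapping described above. The class and method names are purely illustrative, and the onion address for evil Eve is made up:

```python
# Hypothetical sketch of a user-defined petname store: Bob maps the
# address Alice gave him to a name of his own choosing, and later
# lookups verify that a name still points at the same onion address.

class PetnameBook:
    def __init__(self):
        self._names = {}  # petname -> onion address

    def assign(self, petname, onion):
        # Bob records his own name for an address obtained from a
        # party he trusts (the only trust assumption in the system).
        self._names[petname] = onion

    def resolve(self, petname):
        return self._names.get(petname)

    def verify(self, petname, claimed_onion):
        # If Eve later claims a different address for the same name,
        # the mismatch is detected without Bob having to remember
        # the original address himself.
        return self._names.get(petname) == claimed_onion

book = PetnameBook()
book.assign("torproject", "idnxcnkne4qt76tg.onion")
assert book.verify("torproject", "idnxcnkne4qt76tg.onion")
assert not book.verify("torproject", "evil3x4mpl3addr.onion")
```

The point of the sketch is that verification needs no third party at all; the open question above (where `_names` actually lives, and who writes to it) is the hard part.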
For the other naming-system options, also see proposals/ideas/xxx-onion-nyms.txt and proposals/194-mnemonic-urls.txt in torspec, if you haven't already.
In terms of verification, it would be trivial to alter the browser UI to display the actual key behind the hidden service (ie: through a control port lookup command and some kind of URL icon that varied depending on consensus naming status).
Yeah, this would definitely be important to have, in any mapping-scheme.
We could also provide a hacked version of CertPatrol that monitors the underlying public keys for you, and it would also be relatively easy to add a "second-look" authentication layer through the HTTPS-Everywhere SSL Observatory, similar to what exists now for SSL public keys.
In fact, if we can agree on a solid consensus-based naming scheme as a valid transition step, I think it is worth my time to let the rest of the browser burn while I implement some kind of backup authentication + UI for this. After all, memorable hidden service naming would be a usability improvement.
No doubt :)
Should we try it?
The major downside I am seeing is PR fallout from the hidden services that chose to use it.. They might be an unrepresentative subset of what people actually need hidden services for. I think the real win for hidden services is that we can turn them into arbitrary private communication endpoints, to allow people to communicate in ways that do not reveal their message contents *or* their social network. There probably are other uses whose promise would be lost in the noise generated from this scheme as well...
Regarding PR fallout, I think that whichever scheme is chosen, it *can not* be a "managed" system. Anything involving a select few nodes generating a consensus where a "blacklist" can influence the results will be detrimental. However, on the flip-side, it must be a trustable system.
(I forked the thread, since this is hopefully orthogonal to HS identity key migration.)
Chuff chuff! Train of thought coming up, since this is a problem I've also been thinking about lately...
Chuff chuff? Is that like Choo Choo for those of us in the States? :)
Mike, I like the simplicity and implementability of your idea. Giving signed (<name> to <onion address>) mappings to the directory authorities (in a FIFO fashion) and then publishing them as directory documents is effective and easy-ish both to implement and understand.
I also really like the simplicity of this idea, but I worry about any FUD that may be spread if it doesn't work *exactly* the way "some user" expects it to work.
That said, I wonder what's actually going to happen if we implement and deploy this. I imagine that scammers will try to win the registration race against the legitimate hidden services, and they will flood the directory with false mappings. For example, scammers might claim all memorable names for the Tor website hidden service, like "torproject" "torpoject1" etc. (and I doubt that anyone can win a race against a well equipped scammer...) In the end, many legit hidden services might need to register names like "t0rproj3ct1" and "123duckduckg0" which will be lost in the noise of that directory document. Then people might think that searching for "torproject" in the TBB petname tool ought to return the official torproject website, but instead the first results will be scammer websites.
Being able to trust the results will be of the utmost importance, or else the entire system will be a waste.
Of course, the current situation, where people get their onions using pastebins and the "Hidden Wiki" (lulz), is not any better. Although I hope that when all URLs look random, people don't consider a URL being more official than other URLs (whereas in the above idea "torproject" might look a bit more official than "t0rpoj3ct"). Still, even in the current situation, a shallot-generated "torpr0jectkakqn.onion" might look more official than "idnxcnkne4qt76tg.onion" (which is the actual onion address for the torproject website)... I really don't know what's the best way to proceed here, it's tradeoffs all the way down.
If I could automagically generate secure technologies on a whim, I would say that some kind of decentralized reputation-based fair search engine for hidden services might squarify our Zooko's triangle a bit. "decentralized" so that no entities have pure control over the results. "reputation-based" so that legit hidden services float to the top. "fair" so that no scammers with lots of boxes can game the system. Unfortunately, "fair" and "reputation-based" usually contradict each other.
I think yes-and-no, depending on the context. It may actually be possible, if not necessary, to use a reputation-based system if we also want the assignments to be fair. At face value, I would take 'fair' to mean that if Alice and Bob both request 'HiddenWiki' at approximately the same time, then they both have a 50% chance of obtaining that assignment. However, if Bob has been given that name for the last 3 months, I would expect that Bob would have a much higher probability of receiving that mapping. Along this same line of thought, I think initialization of this system may be a difficult problem: we would need to ensure it's trustable at first launch.
- Matt
Matthew Finkel matthew.finkel@gmail.com wrote:
So I think we should make some terms clear (just for the sake of clarity). We have, I guess, three different naming-system ideas floating here: petnames, (distributed) namecoin-ish, and centralized consensus-based - rough summary.
Some months ago, the petname system interested me enough that I started to write a proposal for it. At this point, it's wound up in bitrot. Though I'd spent a bit of time working on it, there was no comprehensive way to accomplish it.
I too started writing a petname proposal only to have it wind up on the backburner.
In a nutshell, there would be a sort of pseudo-DNS that allows a given .onion to define a petname through a file on their site. For example, somename.onion/petname.txt could shorten the address to bettername.pet. The pseudo-DNS would check if a hidden service is alive once every few days, and if the onion is down for thirty days, the petname is freed up for someone else to use. This has the side effect of promoting good onion upkeep.
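The thirty-day expiry rule above could be sketched like this (the registry layout, function name, and liveness bookkeeping are all hypothetical):

```python
import time

THIRTY_DAYS = 30 * 86400  # expiry window, in seconds

def prune_expired(registry, last_seen, now=None):
    # Hypothetical upkeep pass for the pseudo-DNS: a petname whose
    # onion has been unreachable for thirty days is freed for reuse.
    # registry maps petname -> onion; last_seen maps onion -> the
    # timestamp of the last successful liveness check.
    now = now or time.time()
    for petname, onion in list(registry.items()):
        if now - last_seen.get(onion, 0) > THIRTY_DAYS:
            del registry[petname]
    return registry

registry = {"bettername.pet": "somename.onion"}
last_seen = {"somename.onion": time.time() - 40 * 86400}  # down 40 days
prune_expired(registry, last_seen)
assert registry == {}  # the name is freed
```

Nothing here says *who* runs the liveness checks, which is exactly the trust question raised elsewhere in the thread.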
I like the idea of federating hidden services and eepsites into one petname system, but not sure how possible/practical that would be. Of course, there's really nothing keeping an independent actor from making this and offering it as a firefox plugin for those who might want to use it.
Thoughts?
~Griffin
On Mon, May 20, 2013 at 12:11:37AM -0400, Griffin Boyce wrote:
Matthew Finkel matthew.finkel@gmail.com wrote:
So I think we should make some terms clear (just for the sake of clarity). We have, I guess, three different naming-system ideas floating here: petnames, (distributed) namecoin-ish, and centralized consensus-based - rough summary.
Some months ago, the petname system interested me enough that I started to write a proposal for it. At this point, it's wound up in bitrot. Though I'd spent a bit of time working on it, there was no comprehensive way to accomplish it.
I too started writing a petname proposal only to have it wind up on the backburner.
In a nutshell, there would be a sort of pseudo-DNS that allows a given .onion to define a petname through a file on their site. For example, somename.onion/petname.txt could shorten the address to bettername.pet. The pseudo-DNS would check if a hidden service is alive once every few days, and if the onion is down for thirty days, the petname is freed up for someone else to use. This has the side effect of promoting good onion upkeep.
This could work well. Have you seen proposals/ideas/xxx-onion-nyms.txt in torspec? It's a similar idea but targeted for use with tor2web.
This isn't a petname system, but it would be a step in the right direction for making HS more user friendly. I worry about the initial race condition for this type of system. How do we guarantee that the site resolving to "torproject" is torproject.org? It's this expectation that the mapping is obvious that will be the difficult part of the system. After 6 months (or so) the naming will stabilize and be (mostly) consistent month-to-month, but how do we guarantee that a malicious actor is not able to register popular internet domains (torproject, ddg, etc) before the legitimate/honest actor?
I like the idea of federating hidden services and eepsites into one petname system, but not sure how possible/practical that would be. Of course, there's really nothing keeping an independent actor from making this and offering it as a firefox plugin for those who might want to use it.
I know very little about eepsites, but as long as the guarantees provided by eepsites and HS are equivalent regarding security and anonymity, this is an interesting idea. The easiest/obvious way to accomplish this is to have gateways/peering-points between the two networks, but I need to refresh my memory and read more about I2P/eepSites before I can argue for a valid mechanism.
Unless, are you talking about running I2P and Tor on the same computer/network and being able to use the same naming scheme to connect to both eepSites and Hidden Services? If so, a petname system is perfect for this because it is completely user defined. See Waterken's Petname Tool[0] for an example of such an addon. If a modified version of this add-on (or something similar) is included in TBB/"secure-browser" and not only remembers the websites you trust but also allows you to use your petname in place of the real name, then this would be a possibly-useful system.
Thoughts?
~Griffin
Technical Program Associate, Open Technology Institute #Foucault / PGP: 0xAE792C97 / OTR: saint@jabber.ccc.de
Thanks for sharing your thoughts!
- Matt
On Thu, Jun 06, 2013 at 12:48:42PM +0000, Matthew Finkel wrote:
On Mon, May 20, 2013 at 12:11:37AM -0400, Griffin Boyce wrote:
Matthew Finkel matthew.finkel@gmail.com wrote:
Unless, are you talking about running I2P and Tor on the same computer/network and being able to use the same naming scheme to connect to both eepSites and Hidden Services? If so, a petname system is perfect for this because it is completely user defined. See Waterken's Petname Tool[0] for an example of such an addon. If a modified version of this add-on (or something similar) is included in TBB/"secure-browser" and not only remembers the websites you trust but also allows you to use your petname in place of the real name, then this would be a possibly-useful system.
Thoughts?
~Griffin
Technical Program Associate, Open Technology Institute #Foucault / PGP: 0xAE792C97 / OTR: saint@jabber.ccc.de
Thanks for sharing your thoughts!
- Matt
Sorry, forgot the link :(
[0] http://www.waterken.com/user/PetnameTool/
[1] http://www.skyhunter.com/marcs/petnames/IntroPetNames.html (also a good site for learning about petnames)
This has the side effect of promoting good onion upkeep.
Which people might be loath to do given the recent paper about deanon hidden services seeming to be relatively doable. At least until those issues are solved...
of the system. After 6 months (or so) the naming will stabilize and be (mostly) consistent month-to-month, but how do we guarantee that a
...not if people are replacing their network address every month.
I know very little about eepsites, but as long as the guarantees provided by eepsites and HS are equivalent regarding security and anonymity, this is an interesting idea. The easiest/obvious way to accomplish this is to have gateways/peering-points between the two networks ... Unless, are you talking about running I2P and Tor on the same computer/network and being able to use the same naming scheme to connect to both eepSites and Hidden Services?
One such obvious scheme that exists today is your host simply routing packets out its tunnel interfaces resident on respective Tor / I2P / Phantom IPv6 address space to some such services.
Then anything, or set of things with unique addressing amongst them, can have some petname layer on top.
malicious actor is not able to register popular internet domains (torproject, ddg, etc) before the legitimate/honest actor?
Really? Lol. You're not going to solve that even if you recreate the non-anonymous internet. Petname strings in an anonymous censor free system have no gatekeepers. As with the internet, users will set up, choose, and duke it out in their own DNS for that if they want it... on top of the provided secure network addressing.
Even being able to put/maintain *any* name out there will be hard.
On Fri, Jun 07, 2013 at 02:23:55AM -0400, grarpamp wrote:
This has the side effect of promoting good onion upkeep.
Which people might be loath to do given the recent paper about deanon hidden services seeming to be relatively doable. At least until those issues are solved...
of the system. After 6 months (or so) the naming will stabilize and be (mostly) consistent month-to-month, but how do we guarantee that a
...not if people are replacing their network address every month.
This shouldn't be a problem if the service id (onion address) remains the same across IP address changes. If the HS is stable then, as far as I understand this system, it should maintain its name.
I know very little about eepsites, but as long as the guarantees provided by eepsites and HS are equivalent regarding security and anonymity, this is an interesting idea. The easiest/obvious way to accomplish this is to have gateways/peering-points between the two networks ... Unless, are you talking about running I2P and Tor on the same computer/network and being able to use the same naming scheme to connect to both eepSites and Hidden Services?
One such obvious scheme that exists today is your host simply routing packets out its tunnel interfaces resident on respective Tor / I2P / Phantom IPv6 address space to some such services.
Then anything, or set of things with unique addressing amongst them, can have some petname layer on top.
Sure
malicious actor is not able to register popular internet domains (torproject, ddg, etc) before the legitimate/honest actor?
Really? Lol. You're not going to solve that even if you recreate the non-anonymous internet. Petname strings in an anonymous censor free system have no gatekeepers. As with the internet, users will set up, choose, and duke it out in their own DNS for that if they want it... on top of the provided secure network addressing.
Even being able to put/maintain *any* name out there will be hard.
Right, which is why I'm not sure a centralized naming system will work in this environment. 1) The user loses the self-authentication of the service (whether or not they understood they had it in the first place). 2) It's not possible to guarantee a name maps to the same hidden service over long periods (see 1.) and if trust is placed in the name then this is important. If I visit https://google.com I expect not to be MITMd and I expect to receive a reply from Google Inc's webserver.
This has the side effect of promoting good onion upkeep.
Which people might be loath to do given the recent paper about deanon hidden services seeming to be relatively doable. At least until those issues are solved...
of the system. After 6 months (or so) the naming will stabilize and be (mostly) consistent month-to-month, but how do we guarantee that a
...not if people are replacing their network address every month.
[perhaps to avoid some small deanon risk till then?]
This shouldn't be a problem if the service id (onion address) remains the same across IP address changes. If the HS is stable then, as far as I understand this system, it should maintain its name.
Meant 'network address' to refer to the onion (the anon system net addr), not the real-world IP.
One such obvious scheme that exists today is your host simply routing packets out its tunnel interfaces resident on respective Tor / I2P / Phantom IPv6 address space to some such services.
Then anything, or set of things with unique addressing amongst them, can have some petname layer on top.
if they want it... on top of the provided secure network addressing.
Matthew Finkel:
Some months ago, the petname system interested me enough that I started to write a proposal for it. At this point, it's wound up in bitrot. Though I'd spent a bit of time working on it, there was no comprehensive way to accomplish it. One thing to remember about petnames is that they are *user defined*. […]
The problem I ran into with this scheme is where the mappings should be stored - who is in control of this? In short, is this a mapping that Tor persistently stores, or is it a client application that handles this? AND if it is a client application, that becomes a usability nightmare: if Tor Browser has an interface for it, then that's great, but what if I'm using irssi and lynx on a headless system? If Tor maintains this database, then for the petname to perform as expected, every application would need to support a minimal Controller and have the ability to resolve the name mappings (and possibly append to them, also).
What looks like a possible way to solve the problem you describe:
The address book would be stored by the Tor daemon, in a persistent manner.
A new host extension would be introduced so that when an application tries to connect to `torproject.myonions` through Tor, it will connect to the hidden service that holds the name `torproject` in the local address book.
Editing the local address book would be done through commands sent through Tor control port. The Tor Browser could gain a new `about:myonions` page for GUI editing. Editing capacities could also be added to Arm for headless system. And we could even make the address book file human editable to have `vi` as a fallback.
(I don't really like `myonions` but I'm sure someone will come up with something better.)
Usability wise, I wonder if we could implement some kind of web links that could quickly add a new name in the local address book (after user confirmation).
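The host-extension part of this design could be sketched as follows. The `.myonions` suffix comes from the proposal above, but the resolver function and the in-memory address book standing in for the daemon's persistent store are hypothetical:

```python
# Hypothetical resolver for the proposed ".myonions" host extension:
# the Tor daemon would strip the suffix and look the name up in its
# persistent local address book before building a circuit.

ADDRESS_BOOK = {  # stand-in for the daemon's persistent store
    "torproject": "idnxcnkne4qt76tg.onion",
}

def resolve_target(host):
    # Map 'name.myonions' to the onion address the user recorded;
    # pass every other hostname through unchanged.
    if host.endswith(".myonions"):
        name = host[: -len(".myonions")]
        onion = ADDRESS_BOOK.get(name)
        if onion is None:
            raise LookupError("no address-book entry for %r" % name)
        return onion
    return host

print(resolve_target("torproject.myonions"))  # idnxcnkne4qt76tg.onion
print(resolve_target("example.com"))          # example.com
```

In the real design the lookup would happen inside the Tor daemon, with edits arriving over the control port rather than touching the dictionary directly.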
On Tue, Jun 18, 2013 at 09:56:21PM +0200, Lunar wrote:
Matthew Finkel:
Some months ago, the petname system interested me enough that I started to write a proposal for it. At this point, it's wound up in bitrot. Though I'd spent a bit of time working on it, there was no comprehensive way to accomplish it. One thing to remember about petnames is that they are *user defined*. […]
The problem I ran into with this scheme is where the mappings should be stored - who is in control of this? In short, is this a mapping that Tor persistently stores, or is it a client application that handles this? AND if it is a client application, that becomes a usability nightmare: if Tor Browser has an interface for it, then that's great, but what if I'm using irssi and lynx on a headless system? If Tor maintains this database, then for the petname to perform as expected, every application would need to support a minimal Controller and have the ability to resolve the name mappings (and possibly append to them, also).
What looks like a possible way to solve the problem you describe:
The address book would be stored by the Tor daemon, in a persistent manner.
A new host extension would be introduced so that when an application tries to connect to `torproject.myonions` through Tor, it will connect to the hidden service that holds the name `torproject` in the local address book.
Editing the local address book would be done through commands sent through Tor control port. The Tor Browser could gain a new `about:myonions` page for GUI editing. Editing capacities could also be added to Arm for headless system. And we could even make the address book file human editable to have `vi` as a fallback.
(I don't really like `myonions` but I'm sure someone will come up with something better.)
Usability wise, I wonder if we could implement some kind of web links that could quickly add a new name in the local address book (after user confirmation).
I'd be a bit worried that we'd have a similar problem to the erstwhile ".exit" suffix: any website could include a link to "foo.myonions"; this could be used to probe whether the user has a "foo" entry in her address book.
- Ian
Mike Perry mikeperry@torproject.org writes:
adrelanos:
George Kadianakis:
If we move to the higher security of (e.g.) 128-bits, the base32 string suddenly becomes 26 characters. Is that still conveniently sized to pass around, or should we admit that we failed this goal and we are free to crank up the security to 256-bits (output size of sha-256) which is a 52 character string?
In doubt: if possible, maintainable, not too much work, you name it... When having the less secure version as default, please let the hidden service hosts decide if they want to use the more secure version by using an option.
I don't know if the petname system is a completely orthogonal issue or if it could be considered when you decide this one.
Or have an option for maximum key length and a weaker default if common CPU's are still too slow? I mean, if you want to make 2048 bit keys the default because you feel most hidden services have CPU's which are too slow for 4096 bit keys, then use 2048 bit as default with an option to use the max. of 4096 bit.
Bonus point: Can you make the new implementation support less painful updates (anyone or everyone) when the next update will be required? (forward compatibility)
I was also trying to think of a solution to this problem, but I failed.
I think you were heading in the right direction with the petname idea. What if we deployed a potentially shitty naming layer that "probably" won't break within the next 6-12 months, but *might* last quite a bit longer than that, for backward compatibility purposes.
This naming layer could allow interested parties to sign registration statements using their current onion key with an expiration time, satisfying our deprecation desires for the 80 bit name. If the naming layer actually survives without visible compromise until that point, we could allow it to store signed statements about translations between the new keys and their desired name (first-come, first-serve; names are reserved for N months until resigned).
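Such a registration statement might look roughly like the sketch below. The field names are invented, and an HMAC stands in for the real signature by the service's identity key:

```python
import hashlib
import hmac
import json
import time

def make_registration(name, onion, secret_key, lifetime_days=180):
    # Hypothetical first-come-first-serve registration statement:
    # the service binds a desired name to its onion address, with an
    # expiration after which the name must be re-signed or is freed.
    stmt = {
        "name": name,
        "onion": onion,
        "expires": int(time.time()) + lifetime_days * 86400,
    }
    payload = json.dumps(stmt, sort_keys=True).encode()
    # Stand-in for a signature by the service's current onion key.
    stmt["sig"] = hmac.new(secret_key, payload, hashlib.sha256).hexdigest()
    return stmt

def verify_registration(stmt, secret_key, now=None):
    # Accept only unexpired statements with a valid signature.
    now = now or time.time()
    body = {k: v for k, v in stmt.items() if k != "sig"}
    payload = json.dumps(body, sort_keys=True).encode()
    good_sig = hmac.new(secret_key, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(stmt["sig"], good_sig) and now < stmt["expires"]

key = b"demo-identity-key"
reg = make_registration("torproject", "idnxcnkne4qt76tg.onion", key)
assert verify_registration(reg, key)
```

The expiration field is what makes the "names are reserved for N months until resigned" rule enforceable by whoever stores the statements.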
A more specific version of this question is: How readily could we hack Namecoin or some other similar consensus-based naming system[1] into Tor?
Such a mechanism would obviously provide enumerability for hidden services that chose to use it, but hopefully it would be optional: you can still use IP addresses in browsers, after all.
In terms of verification, it would be trivial to alter the browser UI to display the actual key behind the hidden service (ie: through a control port lookup command and some kind of URL icon that varied depending on consensus naming status).
We could also provide a hacked version of CertPatrol that monitors the underlying public keys for you, and it would also be relatively easy to add a "second-look" authentication layer through the HTTPS-Everywhere SSL Observatory, similar to what exists now for SSL public keys.
In fact, if we can agree on a solid consensus-based naming scheme as a valid transition step, I think it is worth my time to let the rest of the browser burn while I implement some kind of backup authentication + UI for this. After all, memorable hidden service naming would be a usability improvement.
Should we try it?
The major downside I am seeing is PR fallout from the hidden services that chose to use it.. They might be an unrepresentative subset of what people actually need hidden services for. I think the real win for hidden services is that we can turn them into arbitrary private communication endpoints, to allow people to communicate in ways that do not reveal their message contents *or* their social network. There probably are other uses whose promise would be lost in the noise generated from this scheme as well...
We don't have to choose Namecoin, though. Another alternative is for the dirauths to add a URI for an "official" naming directory file as a parameter in the consensus, and also provide its SHA256/SHA-3. A flatfile might be less efficient than Namecoin in terms of storage and bandwidth requirements, though. It's probably also easier to censor (unless it is something like a magnet link).
For all you Zooko's Triangle[2] fans: The Namecoin mechanism attempts to "square" the triangle with a first-come first-serve distributed consensus on the pet names document, but still falls back to "Secure+Global" at the expense of "Memorable". The interesting bit is that in this case, the browser UI can help you on the "Memorable" end, should the consensus mechanism fail behind your back.
FWIW, it seems that the I2P folks took a similar approach: http://www.i2p2.de/naming.html http://www.i2p2.de/hosts.txt
Unfortunately, I don't know how well that system has worked for them so far. It seems that their threat model doesn't include the adversary who hacks and alters the i2p2.i2p website or an evil operator of that site (although I guess that such an entity could also backdoor i2p anyway).
Any I2P users around here, who can tell us stories about the addressbook idea?
On Mon, Jun 10, 2013 at 4:10 PM, George Kadianakis desnacked@riseup.net wrote:
FWIW, it seems that the I2P folks took a similar approach: http://www.i2p2.de/naming.html http://www.i2p2.de/hosts.txt
Unfortunately, I don't know how well that system has worked for them so far. It seems that their threat model doesn't include the adversary who hacks and alters the i2p2.i2p website or an evil operator of that site (although I guess that such an entity could also backdoor i2p anyway).
hosts.txt is not automatically fetched — it is bundled with the I2P package, and can be extended manually by the user via several “redirect” services that are automatically used for a name that's not in hosts.txt. E.g., when hiddenchan.i2p is put into the browser URL, the local I2P proxy, seeing that the domain is unknown, redirects to one of the services (located in the .i2p namespace), resulting in an offer to confirm the eepSite public key (which is shown) to be added to hosts.txt (or just the current session).
-- Maxim Kammerer Liberté Linux: http://dee.su/liberte
On 17 May 2013 09:23, George Kadianakis desnacked@riseup.net wrote:
There are basically two ways to do this:
A third comes to mind, somewhat similar to Mike's.
If we believe that 1024 RSA is not broken *now* (or at the very least, if it is broken it's too valuable to waste on breaking Tor's Hidden Services...) but that it certainly will be broken in the future - then I can't think of any mechanism that would allow a future system that keeps 1024 bit key-based addresses to be secure...
Without introducing a trusted third party. Imagine if a Hidden Service today were to generate a new identity key, 2048 (or 4096 or whatever) and submit this new key, and its current key to a Directory Server, signed by the 1024 bit key, and received back a signature of the data and a timestamp.
Now, n years down the road when 1024 bit is broken... but 2048 is not - a user enters the 1024 bit address, it goes through all the hoops and connects to the Hidden Service where the HS provides the 2048 bit key and the signed timestamp. The client trusts that the mapping between the broken 1024 and secure 2048 keys is valid because it trusts the directory authorities to only timestamp such mappings accurately, and the timestamp is in 2012 - before the "we're saying 1024 is broken now, don't trust timestamps after this date" flag day.
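A client's post-flag-day check under this scheme might look like the sketch below. The cutoff date is an assumption, and the actual signature verification (which would involve the directory authorities' keys and the service's new key) is abstracted into booleans:

```python
import datetime

# Hypothetical flag day after which 1024-bit keys are considered
# breakable: mappings timestamped after this date cannot be trusted,
# because a broken 1024-bit key could forge them.
FLAG_DAY = datetime.date(2014, 1, 1)

def mapping_trusted(timestamp, dirauth_sig_valid, new_key_sig_valid):
    # A client accepts the old-onion -> new-key binding only if the
    # directory authorities attested to it *before* the flag day and
    # both signatures check out (verification abstracted here).
    return (
        dirauth_sig_valid       # dirauth signed (old key, new key, timestamp)
        and new_key_sig_valid   # service proves it holds the new key now
        and timestamp < FLAG_DAY
    )

assert mapping_trusted(datetime.date(2013, 6, 6), True, True)
assert not mapping_trusted(datetime.date(2015, 3, 1), True, True)
```

The whole scheme hinges on the timestamp predating the flag day: after that date an attacker with a factored 1024-bit key could sign anything, but cannot back-date a directory-authority attestation.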
This isn't about petnames, and from an engineering perspective it's probably much more work than any other system.
-tom
On Mon, May 20, 2013 at 12:25:03AM -0400, Tom Ritter wrote:
On 17 May 2013 09:23, George Kadianakis desnacked@riseup.net wrote:
There are basically two ways to do this:
A third comes to mind, somewhat similar to Mike's.
If we believe that 1024 RSA is not broken *now* (or at the very least, if it is broken it's too valuable to waste on breaking Tor's Hidden Services...) but that it certainly will be broken in the future - then I can't think of any mechanism that would allow a future system that keeps 1024 bit key-based addresses to be secure...
Without introducing a trusted third party. Imagine if a Hidden Service today were to generate a new identity key, 2048 (or 4096 or whatever) and submit this new key, and its current key to a Directory Server, signed by the 1024 bit key, and received back a signature of the data and a timestamp.
Now, n years down the road when 1024 bit is broken... but 2048 is not - a user enters the 1024 bit address, it goes through all the hoops and connects to the Hidden Service where the HS provides the 2048 bit key and the signed timestamp. The client trusts that the mapping between the broken 1024 and secure 2048 keys is valid because it trusts the directory authorities to only timestamp such mappings accurately, and the timestamp is in 2012 - before the "we're saying 1024 is broken now, don't trust timestamps after this date" flag day.
This isn't about petnames, and from an engineering perspective it's probably much more work than any other system.
-tom
I suppose the followup question to this is "is there really a need for backwards compatibility n years in the future?" I completely understand the usefulness of this feature but I'm unsure if maintaining this ability is really necessary. The other issue arises due to the fact that the HSDirs are not fixed, so caching this mapping will be non-trivial.
Also, I may not be grokking this idea, but which entity is signing the timestamp: "and received back a signature of the data and a timestamp."? Is it the HS or the HSDir? And is this signature also created using a 1024 bit key?
- Matt
On Jun 6, 2013 9:56 AM, "Matthew Finkel" matthew.finkel@gmail.com wrote:
> I suppose the followup question to this is "is there really a need for backwards compatibility n years in the future?" I completely understand the usefulness of this feature, but I'm unsure if maintaining this ability is really necessary. The other issue arises from the fact that the HSDirs are not fixed, so caching this mapping will be non-trivial.
> Also, I may not be grokking this idea, but which entity is signing the timestamp ("and received back a signature of the data and a timestamp")? Is it the HS or the HSDir? And is this signature also created using a 1024-bit key?
The HS proves key ownership, and receives the time-stamped assertion "Key1024 and Key2048 were proven to be owned by the same entity on June 6, 2013". They will provide that assertion to clients contacting them post-Flag Day. The assertion can be signed with whatever key you like, ECC, 2048, 4096,etc.
But who is the timestamper? I originally imagined the Directory Authorities, but they don't want to have records of all Hidden Services. I wasn't as familiar with HS workings when I wrote that. I don't think HSDirs are long-lived enough, or trustworthy enough, to be timestampers.
So now I'm not sure.
-tom
On Fri, May 17, 2013 at 06:23:28AM -0700, George Kadianakis wrote:
> Greetings,
> I'm supposed to write a Tor proposal for the migration of the long-term identity keys of Hidden Services. When I began writing the proposal, I realized that some of my choices might not be appreciated by Hidden Service operators, and that starting a discussion thread might be a good idea before writing the proposal.
> The problem with the current long-term HS identity keys is that they are RSA-1024 keys which are considered weak, and they need to be upgraded to a cryptosystem with higher security properties.
> One of the main issues with this operation, is whether Hidden Services will be accessible using their *old* identity keys even after the migration.
> That is, when we change the identity keys of a Hidden Service, its onion also changes (since the onion is the truncated hash of its public key). This will be quite problematic for Hidden Services that have a well-established onion address.
Do we have any idea how many of these well-established HS exist? I expect the majority are simply "bookmarks" (or the respective service's equivalent) so updating the specific entry would be the hardest part about this transition, from the end user's point of view.
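For reference, the derivation George is describing: a v2 onion address is the base32 encoding of the first 80 bits of the SHA-1 digest of the service's DER-encoded RSA public key. A sketch (the key bytes below are dummies, not a real DER blob):

```python
import base64
import hashlib

def onion_address(der_pubkey):
    """onion = base32(first 10 bytes of SHA-1(DER public key)),
    lowercased, with the .onion suffix appended."""
    digest = hashlib.sha1(der_pubkey).digest()
    return base64.b32encode(digest[:10]).decode("ascii").lower() + ".onion"

addr = onion_address(b"not-a-real-der-key")
# a new identity key hashes to a different 16-char label, which is
# exactly why changing the key changes the onion address
```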
> There are basically two ways to do this:
> a) After the transition, Hidden Services can be visited _only_ on their _new_ onion addresses.
Hrm...
> This is quite brutal, but it's the most secure and unambiguous option (might also be easier to implement and deploy).
Agree.
> This change can be enforced both on the client-side, by rejecting any old RSA-1024 HS keys, and on the server-side, by only publishing the new keys in HS descriptors.
Agree.
> To make the transition easier, we could prepare a tool that generates a new identity keypair before the flag day, so that Hidden Service operators can learn their future onion address beforehand and announce it to their users.
Ideally this is the best option, but realistically I think we all know it isn't. So,...
> b) After the transition, Hidden Services can use both old and new onion addresses.
Yup.
> This might result in a more harmonious transition, where Hidden Services advertise their new onion address to users that visit them in their old address.
> .oO(It would also be interesting to do a redirection on the Tor protocol layer ("I got this descriptor by querying for the old onion address, but it also contains a new onion address. I should probably use the new one."), but I don't think it's possible to redirect the user without knowledge of the application-layer protocol (e.g. 302 for HTTP). Still, a Tor log message might be helpful.)
This would be a nice feature, but the current service descriptor doesn't have any room for this additional information. We'll need to develop a V3 service descriptor format to support longer "onion addresses"/service id, in any case, so adding another field for "Also-Known-As" may be a useful thing to have. It might be a good idea to also add a few reserved-for-future-use fields, along with it.
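A sketch of what such a V3 descriptor might carry. Every field name here ("also-known-as", the reserved lines, etc.) is hypothetical, invented for illustration, and not taken from any Tor spec:

```python
def build_v3_descriptor(service_id, pubkey_b64, old_service_id=None):
    """Assemble a hypothetical V3 descriptor as plain text.  An
    optional also-known-as line points the old onion address at the
    new service id; two reserved lines leave room for future fields."""
    lines = [
        "hs-descriptor-version 3",
        "service-id %s" % service_id,
        "permanent-key %s" % pubkey_b64,
    ]
    if old_service_id is not None:
        lines.append("also-known-as %s.onion" % old_service_id)
    lines.append("reserved-a")
    lines.append("reserved-b")
    return "\n".join(lines)
```

With an "Also-Known-As" field in place, a client that fetched the descriptor via the old address could at least log the new one, even without application-layer redirection.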
> The downside of this approach is that supporting both addresses might make the HS protocol more complicated and painful to implement, and it might also result in some Hidden Services never moving to the new onion addresses since clients can still visit them using the old insecure ones.
> This approach has a stricter variant, where the old addresses can only be used during a transitional period (a few months?). After that, clients _have_ to use the new addresses. Of course, this means that we will have to do two flag days, coordinate Tor releases, and other no fun stuff.
I think this will be necessary. We'll probably need to have the new scheme in operation for an extended period of time before the so-called flag day (that's an actual day in the US, you know... Flag Day :)). I think anything less than 6 months will be too short. This will also depend on how quickly this is implemented and how quickly the version of Tor in which it is implemented becomes stable.
I think the easiest solution (but not necessarily the best) will come down to a transition/migration to a new descriptor format. There will be a period of time where both descriptor formats are valid and either of them will be returned by the HSDir depending on the query it receives from the client. If the client requests the descriptor of a "new style/scheme" service id, then return the V3 descriptor. If the client requests an "old style/scheme" descriptor, then return both, unless the client specifically states a descriptor version or the client's Tor version does not support one of the descriptor versions.
After the flag day, it'll be hit-or-miss as to whether a V2 descriptor is available, depending on what the HSDir supports. Or, maybe at that point in time, the DirAuths only give a relay the HSDir flag if it is running >= x.x.x.x, a version which only supports the new descriptor version, thus finalizing the obsolescence of the old version and service id.
I'm probably moving towards the latter option because the former will make many people unhappy.
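The dual-format lookup described above could look roughly like this. This is a sketch: the 16-character cutoff for "old style", the version numbers, and the store layout are all assumptions, not anything from the spec:

```python
def is_new_style(service_id):
    # v2 service ids are 16 base32 characters; assume anything
    # longer comes from the new, larger hash
    return len(service_id) > 16

def answer_query(service_id, client_versions, store):
    """Return the descriptor(s) an HSDir might serve during the
    transition.  `store` maps (service_id, version) -> descriptor;
    `client_versions` is the set of versions the client supports."""
    if is_new_style(service_id):
        # new-style ids only ever have a V3 descriptor
        return [store[(service_id, 3)]] if (service_id, 3) in store else []
    # old-style id: return every version the client can handle
    return [store[(service_id, v)]
            for v in (2, 3)
            if v in client_versions and (service_id, v) in store]
```

Note that the branch on the service-id style is what lets one HSDir serve both populations at once, which is the whole point of the overlap period.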
Yeah, unhappy people are both no fun and more likely to be confused by the new system.
> Thoughts?
> (This is not a thread to select the cryptosystem we are going to use. It will derail the discussion, and we might also need to select a specific type of cryptosystem in the end (e.g. a discrete-log-based system) so that schemes like https://trac.torproject.org/projects/tor/ticket/8106 can be possible.)
Just some thoughts off the top of my head (and wanting to revitalize the conversation, mostly).
- Matt
> Yeah, unhappy people are both no fun and more likely to be confused by the new system.
> Thoughts?
Not really following this talk, but for the parts that revolve around a greater-than-16-char onion address, I don't see much problem here. There are some DNS RFC name-length limitations, POSIX maxpathlen, etc. But 16 chars vs. a full SHA-2/3 hash over some underlying keys is not a big deal. Look at how it's already longer than practical memorization, if not recognition. And how people just bookmark things. Look at I2P's sizes. People will whine, but even their current usage does not merit such whining. A change in a name layer might, but there is no name layer today. Flags are ok, people will figure it out.
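The length arithmetic behind this: base32 encodes 5 bits per character, so the current 80-bit truncated hash gives 16 characters, while even a full SHA-256 digest stays comfortably under DNS's 63-octet label limit:

```python
import base64
import hashlib
import math

def b32_len(n_bytes):
    """Characters in an unpadded base32 encoding of n_bytes."""
    return math.ceil(n_bytes * 8 / 5)

v2_len = b32_len(10)      # 80-bit truncated SHA-1 -> 16 chars
sha256_len = b32_len(32)  # full SHA-256 digest   -> 52 chars

# cross-check against a real encoding
label = base64.b32encode(hashlib.sha256(b"example").digest())
label = label.decode("ascii").rstrip("=")
```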
Now, one single area where I see a problem is if you want to interoperate with I2P / Phantom / OnionCat. That would still be 'cool', yes. But it does require some form of address magic if you go wider than the current 80 bits. The zzz forum has posts about how that could still work... since there won't be more than 2^80 nodes ever anyway, you just need an address-map layer too.