Hi Alec, Seth, Peter, Mike, all,
I'm enthused about the progress Alec has reported, in recent tor-dev posts and elsewhere, on the Onion RFC for certs for onion addresses.
I wanted to further discuss a design for binding .onion addresses to registered (route-insecure) addresses. This ties in to in-person discussions I had with Seth, Peter, and Mike back in June about how this all dovetails with Let's Encrypt and Tor Browser, which is why I am also addressing this message to them directly. I hope they can comment on whether this design seems realistic in that regard, and on any major caveats, stumbling blocks, etc.
I'll start with a description of goals and some comments on how this complements readability of the onion addresses themselves and other recent tor-dev discussion topics.
I did a partially related post to tor-assistants on one-sided onion services back in June that covered perhaps too many alternatives concerning onion services, too many goals, and too much of the motivation, none of it adequately separated. I think it left most readers scratching their heads. This is an attempt to be a bit narrower and hopefully clearer. Those not interested in even the briefly described motivations and background set out here can skip below to the high-level design itself.
Goals, Caveats, Complementarity to other recently discussed related topics
A main goal is to give people a way to provide route-secure access to their websites in somewhat the same way that the current certificate and HTTPS protocol infrastructure lets them provide data-secure access to their websites.
I would really like to have a version of this be an offering as part of obtaining a certificate from Let's Encrypt because I would like it to encourage people to offer route-secure versions of their sites in the same way that Let's Encrypt as currently put forth is meant to encourage them to offer data-secure versions of their sites. Having this built into something like Let's Encrypt should make it easy for users to set up onionsites to provide security for their websites.
The design should be neutral between double onion services (services where connections involve a Tor circuit from the client and a Tor circuit from the server, as in the currently deployed design) and single onion services (basically just a Tor circuit from the client). There's a draft Tor Proposal by John Brooks, Roger Dingledine, and me on single onion services that I believe John should be making available soon, but I want to leave such details aside. I'm taking single onion services as the paradigmatic typical case, but unless it creates a big problem I would like to assume both single and double onion services will be compatible.
Obviously an onion service tied to a registered domain doesn't cover many important uses of onion services, but it should cover many existing use cases. Note also that wanting network-location protection for a service can be compatible with having a registered domain name for that service (whether or not there was any attempt to obscure information about the domain's registrant). In some cases it is not compatible, but not necessarily.
I think this is basically complementary to ways of making onion addresses more readable and recognizable. I'm OK with whatever address format is acceptable to the RFC and maintains the current self-authenticating property (not getting into quibbles about the computational strength of that self-authentication), as long as it remains something that will fit into a cert as described below.
I'm also leaving as an extension, mentioned at the end below, offering an onion service for someone's site that is not associated with a domain name she has registered. (E.g., an onionsite tied to Mary's Wordpress blog, with the goal being more about guarantees of binding to Mary than about guarantees of binding to Wordpress.) I think that is another important and useful case, which we discussed in our W2SP paper, but I'd like to mostly leave it aside for now.
High-level Design
Creating the DV Cert
At least the same DV level of checking should occur as for existing registered domain names. So the email check should include the onion name that is being bound as well as the route-insecure name(s). For simplicity, I am assuming a single onion address and possibly a small number of registered domain names, although I'm guessing doing this for a similarly small number of onion addresses might be made to work as well. (I'm assuming no wildcards, but maybe I'm not being ambitious enough.)
Besides a check at the registered domain name(s), a check should also be made that the onionsite verifies association with the registered-domain site(s). It is not as reasonable to assume that email infrastructure corresponding to the onion address is in place. Instead, a validation query protocol will be needed that simply connects to the onionsite and asks whether it is acceptable to certify association of the onionsite with the registered domain(s). Only if all DV checks complete successfully should the CA be willing to issue the cert.
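To make the validation query concrete, here is a minimal challenge-response sketch. Everything here is hypothetical (the message format, the function names, the example onion address); a real protocol would presumably also have the onionsite sign its response with the onion key rather than rely on a bare hash, and the CA would connect over Tor rather than call a local function.

```python
import hashlib
import hmac
import os

def make_challenge():
    # CA generates a fresh random token for this validation attempt
    return os.urandom(16).hex()

def binding_response(challenge, onion_addr, domains):
    # Deterministic response over the exact binding being certified:
    # the challenge, the onion address, and the registered domain(s)
    msg = "|".join([challenge, onion_addr, ",".join(sorted(domains))])
    return hashlib.sha256(msg.encode()).hexdigest()

def ca_validate(onion_addr, domains, query_onionsite):
    # `query_onionsite` stands in for an actual connection, over Tor,
    # to the onionsite's validation endpoint
    challenge = make_challenge()
    answer = query_onionsite(challenge)
    expected = binding_response(challenge, onion_addr, domains)
    return hmac.compare_digest(answer, expected)

# Simulated onionsite operator who has agreed to be bound to example.com only
def onionsite(challenge):
    return binding_response(challenge, "abcdefghijklmnop.onion", ["example.com"])
```

The point is simply that the onionsite only answers correctly for bindings its operator has approved, so a CA asked to certify some other domain gets a mismatch and refuses to issue.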
The cert obtained will have the onion address listed as a SAN (subjectAltName) in the certificate.
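As an illustration of what listing the onion address as a SAN buys the client, here is a sketch of the name-matching check a client would do against the cert's SAN entries. The SAN list and onion address are hypothetical, and real validation is more involved (wildcards, IDNA, etc.).

```python
def san_matches(requested_host, san_dns_names):
    # The client checks the host it asked for -- whether the onion address
    # or a registered domain -- against the cert's subjectAltName entries.
    # (Illustrative only; real clients also handle wildcards, IDNA, etc.)
    host = requested_host.lower().rstrip(".")
    return any(host == name.lower().rstrip(".") for name in san_dns_names)

# Hypothetical SAN list for a cert binding a registered domain to its onionsite
san = ["example.com", "www.example.com", "abcdefghijklmnop.onion"]
```

Because both names appear in the same cert, the same certificate serves the route-insecure and the onion connection.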
Connecting to an onionsite by the client
I assume that the default HTTPS Everywhere ruleset will be updated to redirect route-insecure addresses to onion addresses for sites that hold appropriate certificates. It would be reasonable to have this update occur initially along with the certification process. HTTPS Everywhere rules should differentiate whether they are being requested by Tor Browser or another browser.
If by Tor Browser, I assume(?) the redirection is straightforward: if a Tor Browser request is to connect to a route-insecure address, HTTPS Everywhere should redirect this to the onion address. At that point everything should proceed as normal for connecting to an onion address.
If by another browser, it could be a setup config option whether clients can choose to be redirected via tor2web or are simply always sent to a route-insecure address. I will assume for simplicity that all requests for route-insecure addresses by other browsers are simply sent to a route-insecure HTTPS address in the ruleset (if available). I'm also going to assume that requests for onion addresses from other browsers simply fail, although if the tor2web option was available and chosen at setup time, there is a further question of how to offer this to the client, perhaps as an HTTPS Everywhere setting. Comments on the feasibility, usefulness, design, etc. of the tor2web option would be a bonus, but of course I most want to know about the viability of the most basic version of things.
If the Tor Browser request is to connect to an onion address, proceed as normal, except that it can now be redirected by HTTPS Everywhere to an HTTPS connection to that address. A certified connection to such an onionsite should work and be indicated as normal for any site with a DV cert.
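The redirection cases above can be sketched as one decision function. This is only an illustration of the logic described here, not the actual HTTPS Everywhere ruleset mechanism; the ruleset mapping, the gateway suffix, and the example hosts are all hypothetical.

```python
def route_request(host, is_tor_browser, ruleset, tor2web=False):
    # `ruleset` maps route-insecure hosts to onion addresses, standing in
    # for an HTTPS Everywhere rule. Returns the host actually connected to
    # (over HTTPS), or None if the request should simply fail.
    if host.endswith(".onion"):
        if is_tor_browser:
            return host  # proceed as normal, now over HTTPS
        if tor2web:
            # hypothetical tor2web gateway, only if chosen at setup time
            return host + ".tor2web-gateway.example"
        return None      # other browsers: onion requests fail
    if is_tor_browser and host in ruleset:
        return ruleset[host]  # redirect to the onion address
    return host               # other browsers: route-insecure HTTPS address

ruleset = {"example.com": "abcdefghijklmnop.onion"}  # hypothetical rule
```

Tor Browser requests for a listed route-insecure host get sent to the onionsite; everything else falls back to plain HTTPS or fails, matching the basic version assumed above.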
End basic description
Web of trust extension
We had suggested in the W2SP paper that people could bind their route-insecure address to their onion address with a GPG key. One rationale is that there is an existing web of trust that can be leveraged, and existing mechanisms to do the verification. This would allow not just securing of human-meaningful addresses and connections but also certification via more human-meaningful trust relations than the usual CA.
It also allows things like binding a Wordpress blog to an onion address, as mentioned above, so people need not have a registered domain to have a human-meaningful address for a route-insecure site for which they would like to offer a route-secure alternative.
Drawbacks include:
1. There is no significant automated infrastructure to support this simply. The Monkeysphere plugin is out there and could probably be leveraged to support something like this, but...
2. PGP/GPG remains something of a geek tool. That is possibly changing slowly, but perhaps it would be better not to wait for that. Also, it's yet another thing to learn about, so...
If onion keys could themselves be linked in a PGP-like web of trust, then this could be more directly relevant and meaningful to people operating on the web, especially those unfamiliar with PGP-like notions. This would of course require things like the check and indication being either built into the browser or available as a plugin. Similarly, for people deciding when to sign another's key, it would require an easy interface and easy-to-understand criteria, as well as mechanisms for all of that to happen securely. I think this holds more hope for a successful widespread web of trust to complement the X.509-type trust hierarchy than the PGP plan would. How to decide whether a site is trusted depending on what each of these trust mechanisms indicates would also need to be worked out, and might again be configurable with standard defaults.