Hi, all!
I've been trying to fill in all the cracks and corners for a revamp of
the hidden services protocol, based on earlier writings by George
Kadianakis and other discussions on the mailing list. (See draft
acknowledgments section below.)
After a bunch of comments, I'm ready to give this a number and call it
(draft) proposal 224. I'd like to know what doesn't make sense, what
I need to explain better, and what I need to design better. I'd like
to fill in the gaps and turn this into a more full document. I'd like
to answer the open questions. Comments are most welcome, especially if
they grow into improvements.
FWIW, I am likely to be offline for most of the current weekend,
because of Thanksgiving, so please be patient with my reply speed; I
hope to catch up with emails next week.
Filename: 224-rend-spec-ng.txt
Title: Next-Generation Hidden Services in Tor
Author: Nick Mathewson
Created: 2013-11-29
Status: Draft
-1. Draft notes
This document describes a proposed design and specification for
hidden services in Tor version 0.2.5.x or later. It's a replacement
for the current rend-spec.txt, rewritten for clarity and for improved
design.
Look for the string "TODO" below: it describes gaps or uncertainties
in the design.
Change history:
2013-11-29: Proposal first numbered. Some TODO and XXX items remain.
0. Hidden services: overview and preliminaries.
Hidden services aim to provide responder anonymity for bidirectional
stream-based communication on the Tor network. Unlike regular Tor
connections, where the connection initiator receives anonymity but
the responder does not, hidden services attempt to provide
bidirectional anonymity.
Other features include:
* [TODO: WRITE ME once there have been some more drafts and we know
what the summary should say.]
Participants:
Operator -- A person running a hidden service
Host, "Server" -- The Tor software run by the operator to provide
a hidden service.
User -- A person contacting a hidden service.
Client -- The Tor software running on the User's computer
Hidden Service Directory (HSDir) -- A Tor node that hosts signed
statements from hidden service hosts so that users can make
contact with them.
Introduction Point -- A Tor node that accepts connection requests
for hidden services and anonymously relays those requests to the
hidden service.
Rendezvous Point -- A Tor node to which clients and servers
connect and which relays traffic between them.
0.1. Improvements over previous versions.
[TODO write me once there have been more drafts and we know what the
summary should say.]
0.2. Notation and vocabulary
Unless specified otherwise, all multi-octet integers are big-endian.
We write sequences of bytes in two ways:
1. A sequence of two-digit hexadecimal values in square brackets,
as in [AB AD 1D EA].
2. A string of characters enclosed in quotes, as in "Hello". The
characters in these strings are encoded in their ASCII
representations; strings are NOT NUL-terminated unless
explicitly described as NUL-terminated.
We use the words "byte" and "octet" interchangeably.
We use the vertical bar | to denote concatenation.
We use INT_N(val) to denote the network (big-endian) encoding of the
unsigned integer "val" in N bytes. For example, INT_4(1337) is [00 00
05 39].
0.3. Cryptographic building blocks
This specification uses the following cryptographic building blocks:
* A stream cipher STREAM(iv, k) where iv is a nonce of length
S_IV_LEN bytes and k is a key of length S_KEY_LEN bytes.
* A public key signature system SIGN_KEYGEN()->seckey, pubkey;
SIGN_SIGN(seckey,msg)->sig; and SIGN_CHECK(pubkey, sig, msg) ->
{ "OK", "BAD" }; where secret keys are of length SIGN_SECKEY_LEN
bytes, public keys are of length SIGN_PUBKEY_LEN bytes, and
signatures are of length SIGN_SIG_LEN bytes.
This signature system must also support key blinding operations
as discussed in appendix [KEYBLIND] and in section [SUBCRED]:
SIGN_BLIND_SECKEY(seckey, blind)->seckey2 and
SIGN_BLIND_PUBKEY(pubkey, blind)->pubkey2 .
* A public key agreement system "PK", providing
PK_KEYGEN()->seckey, pubkey; PK_VALID(pubkey) -> {"OK", "BAD"};
and PK_HANDSHAKE(seckey, pubkey)->output; where secret keys are
of length PK_SECKEY_LEN bytes, public keys are of length
PK_PUBKEY_LEN bytes, and the handshake produces outputs of
length PK_OUTPUT_LEN bytes.
* A cryptographic hash function H(d), which should be preimage and
collision resistant. It produces hashes of length HASH_LEN
bytes.
* A cryptographic message authentication code MAC(key,msg) that
produces outputs of length MAC_LEN bytes.
* A key derivation function KDF(key data, salt, personalization,
n) that outputs n bytes.
As a first pass, I suggest:
* Instantiate STREAM with AES128-CTR. [TODO: or ChaCha20?]
* Instantiate SIGN with Ed25519 and the blinding protocol in
[KEYBLIND].
* Instantiate PK with Curve25519.
* Instantiate H with SHA256. [TODO: really?]
* Instantiate MAC with HMAC using H.
* Instantiate KDF with HKDF using H.
For legacy purposes, we specify compatibility with older versions of
the Tor introduction point and rendezvous point protocols. These used
RSA1024, DH1024, AES128, and SHA1, as discussed in
rend-spec.txt. Except as noted, all RSA keys MUST have exponent
values of 65537.
As in [proposal 220], all signatures are generated not over strings
themselves, but over those strings prefixed with a distinguishing
value.
0.4. Protocol building blocks [BUILDING-BLOCKS]
In sections below, we need to transmit the locations and identities
of Tor nodes. We do so in the link identification format used by
EXTEND2 cells in the Tor protocol.
NSPEC (Number of link specifiers) [1 byte]
NSPEC times:
LSTYPE (Link specifier type) [1 byte]
LSLEN (Link specifier length) [1 byte]
LSPEC (Link specifier) [LSLEN bytes]
Link specifier types are as described in tor-spec.txt. Every set of
link specifiers MUST include at minimum specifiers of type [00]
(TLS-over-TCP, IPv4) and [02] (legacy node identity).
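As an illustration, here is a minimal sketch (in Python, with made-up
example values) of how a link specifier block in this format might be
assembled; it is not a reference encoder, and the helper name and
values are hypothetical:

    import struct

    def encode_link_specifiers(specifiers):
        # specifiers is a list of (lstype, lspec_bytes) pairs.
        # Layout: NSPEC, then LSTYPE / LSLEN / LSPEC for each specifier.
        out = bytes([len(specifiers)])
        for lstype, lspec in specifiers:
            out += bytes([lstype, len(lspec)]) + lspec
        return out

    # Hypothetical example: a TLS-over-TCP IPv4 specifier [00] (4-byte
    # address plus 2-byte port) and a legacy identity specifier [02]
    # (20-byte digest).
    ipv4_port = bytes([10, 0, 0, 1]) + struct.pack(">H", 9001)
    legacy_id = bytes(20)
    block = encode_link_specifiers([(0x00, ipv4_port), (0x02, legacy_id)])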
We also incorporate Tor's circuit extension handshakes, as used in
the CREATE2 and CREATED2 cells described in tor-spec.txt. In these
handshakes, a client who knows a public key for a server sends a
message and receives a message from that server. Once the exchange is
done, the two parties have a shared set of forward-secure key
material, and the client knows that nobody else shares that key
material unless they control the secret key corresponding to the
server's public key.
0.5. Assigned relay cell types
These relay cell types are reserved for use in the hidden service
protocol.
32 -- RELAY_COMMAND_ESTABLISH_INTRO
Sent from hidden service host to introduction point;
establishes introduction point. Discussed in
[REG_INTRO_POINT].
33 -- RELAY_COMMAND_ESTABLISH_RENDEZVOUS
Sent from client to rendezvous point; creates rendezvous
point. Discussed in [EST_REND_POINT].
34 -- RELAY_COMMAND_INTRODUCE1
Sent from client to introduction point; requests
introduction. Discussed in [SEND_INTRO1]
35 -- RELAY_COMMAND_INTRODUCE2
Sent from introduction point to hidden service host; requests
introduction. Same format as INTRODUCE1. Discussed in
[FMT_INTRO1] and [PROCESS_INTRO2]
36 -- RELAY_COMMAND_RENDEZVOUS1
Sent from hidden service host to rendezvous point;
attempts to join the host's circuit to the
client's circuit. Discussed in [JOIN_REND]
37 -- RELAY_COMMAND_RENDEZVOUS2
Sent from rendezvous point to client;
reports join of the host's circuit to the
client's circuit. Discussed in [JOIN_REND]
38 -- RELAY_COMMAND_INTRO_ESTABLISHED
Sent from introduction point to hidden service host;
reports status of attempt to establish introduction
point. Discussed in [INTRO_ESTABLISHED]
39 -- RELAY_COMMAND_RENDEZVOUS_ESTABLISHED
Sent from rendezvous point to client; acknowledges
receipt of ESTABLISH_RENDEZVOUS cell. Discussed in
[EST_REND_POINT]
40 -- RELAY_COMMAND_INTRODUCE_ACK
Sent from introduction point to client; acknowledges
receipt of INTRODUCE1 cell and reports success/failure.
Discussed in [INTRO_ACK]
0.6. Acknowledgments
[TODO reformat these once the lists are more complete.]
This design includes ideas from many people, including
Christopher Baines,
Daniel J. Bernstein,
Matthew Finkel,
Ian Goldberg,
George Kadianakis,
Aniket Kate,
Tanja Lange,
Robert Ransom,
It's based on Tor's original hidden service design by Roger
Dingledine, Nick Mathewson, and Paul Syverson, and on improvements to
that design over the years by people including
Tobias Kamm,
Thomas Lauterbach,
Karsten Loesing,
Alessandro Preite Martinez,
Robert Ransom,
Ferdinand Rieger,
Christoph Weingarten,
Christian Wilms,
We wouldn't be able to do any of this work without good attack
designs from researchers including
Alex Biryukov,
Lasse Øverlier,
Ivan Pustogarov,
Paul Syverson
Ralf-Philipp Weinmann,
See [ATTACK-REFS] for their papers.
Several of these ideas have come from conversations with
Christian Grothoff,
Brian Warner,
Zooko Wilcox-O'Hearn,
And if this document makes any sense at all, it's thanks to
editing help from
Matthew Finkel
George Kadianakis,
Peter Palfrader,
[XXX Acknowledge the huge bunch of people working on 8106.]
[XXX Acknowledge the huge bunch of people working on 8244.]
Please forgive me if I've missed you; please forgive me if I've
misunderstood your best ideas here too.
1. Protocol overview
In this section, we outline the hidden service protocol. This section
omits some details in the name of simplicity; those are given more
fully below, when we specify the protocol in more detail.
1.1. View from 10,000 feet
A hidden service host prepares to offer a hidden service by choosing
several Tor nodes to serve as its introduction points. It builds
circuits to those nodes, and tells them to forward introduction
requests to it using those circuits.
Once introduction points have been picked, the host builds a set of
documents called "hidden service descriptors" (or just "descriptors"
for short) and uploads them to a set of HSDir nodes. These documents
list the hidden service's current introduction points and describe
how to make contact with the hidden service.
When a client wants to connect to a hidden service, it first chooses
a Tor node at random to be its "rendezvous point" and builds a
circuit to that rendezvous point. If the client does not have an
up-to-date descriptor for the service, it contacts an appropriate
HSDir and requests such a descriptor.
The client then builds an anonymous circuit to one of the hidden
service's introduction points listed in its descriptor, and gives the
introduction point an introduction request to pass to the hidden
service. This introduction request includes the target rendezvous
point and the first part of a cryptographic handshake.
Upon receiving the introduction request, the hidden service host
makes an anonymous circuit to the rendezvous point and completes the
cryptographic handshake. The rendezvous point connects the two
circuits, and the cryptographic handshake gives the two parties a
shared key and proves to the client that it is indeed talking to the
hidden service.
Once the two circuits are joined, the client can send Tor RELAY cells
to the server. RELAY_BEGIN cells open streams to an external process
or processes configured by the server; RELAY_DATA cells are used to
communicate data on those streams, and so forth.
1.2. In more detail: naming hidden services [NAMING]
A hidden service's name is its long term master identity key. This
is encoded as a hostname by encoding the entire key in Base 32, and
adding the string ".onion" at the end.
(This is a change from older versions of the hidden service protocol,
where we used an 80-bit truncated SHA1 hash of a 1024 bit RSA key.)
The names in this format are distinct from earlier names because of
their length. An older name might look like:
unlikelynamefora.onion
yyhws9optuwiwsns.onion
And a new name following this specification might look like:
a1uik0w1gmfq3i5ievxdm9ceu27e88g6o7pe0rffdw9jmntwkdsd.onion
Note that since master keys are 32 bytes long, and 52 bytes of base
32 encoding can hold 260 bits of information, we have four unused
bits in each of these names.
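To make the encoding concrete, here is a sketch of turning a 32-byte
master public key into a name of this form, assuming the usual RFC
4648 base32 alphabet, lowercased and with padding stripped (that
instantiation is an assumption, not something fixed by the text
above):

    import base64

    def onion_address(master_pubkey: bytes) -> str:
        # 32 bytes of key -> 52 base32 characters (260 bits, 4 bits unused).
        assert len(master_pubkey) == 32
        b32 = base64.b32encode(master_pubkey).decode("ascii").lower().rstrip("=")
        return b32 + ".onion"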
[TODO: Alternatively, we could require that the first bit of the
master key always be zero, and use a 51-byte encoding. Or we could
require that the first two bits be zero, and use a 51-byte encoding
and reserve the first bit. Or we could require that the first nine
bits, or ten bits be zero, etc.]
1.3. In more detail: Access control [IMD:AC]
Access control for a hidden service is imposed at multiple points
through the process above.
In order to download a descriptor, clients must know which blinded
signing key was used to sign it. (See the next section for more info
on key blinding.) This blinded signing key is derived from the
service's public key and, optionally, an additional secret that is
not part of the hidden service's onion address. The public key and
this secret together constitute the service's "credential".
When the secret is in use, the hidden service gains protections
equivalent to the "stealth mode" in previous designs.
To learn the introduction points, the clients must decrypt the body
of the hidden service descriptor. The encryption key for these is
derived from the service's credential.
In order to make an introduction point send a request to the server,
the client must know the introduction point and know the service's
per-introduction-point authentication key from the hidden service
descriptor.
The final level of access control happens at the server itself, which
may decide to respond or not respond to the client's request
depending on the contents of the request. The protocol is extensible
at this point: at a minimum, the server requires that the client
demonstrate knowledge of the contents of the encrypted portion of the
hidden service descriptor. The service may additionally require a
user- or group-specific access token before it responds to requests.
1.4. In more detail: Distributing hidden service descriptors. [IMD:DIST]
Hidden service descriptors are stored at locations that change
periodically, to prevent a single directory or small set of
directories from becoming a good DoS target for removing a hidden
service.
For each period, the Tor directory authorities agree upon a
collaboratively generated random value. (See section 2.3 for a
description of how to incorporate this value into the voting
practice; generating the value is described in other proposals,
including [TODO: add a reference]) That value, combined with hidden service
directories' public identity keys, determines each HSDir's position
in the hash ring for descriptors made in that period.
Each hidden service's descriptors are placed into the ring in
positions based on the key that was used to sign them. Note that
hidden service descriptors are not signed with the services' public
keys directly. Instead, we use a key-blinding system [KEYBLIND] to
create a new key-of-the-day for each hidden service. Any client that
knows the hidden service's credential can derive these blinded
signing keys for a given period. It should be impossible to derive
the blinded signing key lacking that credential.
The body of each descriptor is also encrypted with a key derived from
the credential.
To avoid a "thundering herd" problem where every service generates
and uploads a new descriptor at the start of each period, each
descriptor comes online at a time during the period that depends on
its blinded signing key. The keys for the last period remain valid
until the new keys come online.
1.5. In more detail: Scaling to multiple hosts
[THIS SECTION IS UNFINISHED]
In order to allow multiple hosts to provide a single hidden service,
I'm considering two options.
* We can have each server build an introduction circuit to each
introduction point, and have the introduction points responsible
for round-robining between these circuits. One service host is
responsible for picking the introduction points and publishing
the descriptors.
* We can have servers choose their introduction points
independently, and build circuits to them. One service host is
responsible for combining these introduction points into a
single descriptor.
If we want to avoid having a single "master" host without which the
whole service goes down (the "one service host" in the description
above), we need a way to fail over from one host to another. We also
need a way to coordinate between the hosts. This is as yet
undesigned. Maybe it should use a hidden service?
See [SCALING-REFS] for discussion on this topic.
[TODO: Finalize this design.]
1.6. In more detail: Backward compatibility with older hidden service
protocols
This design is incompatible with the client, server, and HSDir node
protocols from older versions of the hidden service protocol as
described in rend-spec.txt. On the other hand, it is designed to
enable the use of older Tor nodes as rendezvous points and
introduction points.
1.7. In more detail: Offline operation
In this design, a hidden service's secret identity key may be stored
offline. It's used only to generate blinded identity keys, which are
used to sign descriptor signing keys. In order to operate a hidden
service, the operator can generate a number of descriptor signing
keys and their certifications (see [DESC-OUTER] and [ENCRYPTED-DATA]
below), and their corresponding descriptor encryption keys, and
export those to the hidden service hosts.
1.8. In more detail: Encryption Keys And Replay Resistance
To avoid replays of an introduction request by an introduction point,
a hidden service host must never accept the same request
twice. Earlier versions of the hidden service design used an
authenticated timestamp here, but including a view of the current
time can create a problematic fingerprint. (See proposal 222 for more
discussion.)
1.9. In more detail: A menagerie of keys
[In the text below, an "encryption keypair" is roughly "a keypair you
can do Diffie-Hellman with" and a "signing keypair" is roughly "a
keypair you can do ECDSA with."]
Public/private keypairs defined in this document:
Master (hidden service) identity key -- A master signing keypair
used as the identity for a hidden service. This key is not used
on its own to sign anything; it is only used to generate blinded
signing keys as described in [KEYBLIND] and [SUBCRED].
Blinded signing key -- A keypair derived from the identity key,
used to sign descriptor signing keys. Changes periodically for
each service. Clients who know a 'credential' consisting of the
service's public identity key and an optional secret can derive
the public blinded identity key for a service. This key is used
as an index in the DHT-like structure of the directory system.
Descriptor signing key -- A key used to sign hidden service
descriptors. This is signed by blinded signing keys. Unlike
blinded signing keys and master identity keys, the secret part
of this key must be stored online by hidden service hosts.
Introduction point authentication key -- A short-term signing
keypair used to identify a hidden service to a given
introduction point. A fresh keypair is made for each
introduction point; these are used to sign the request that a
hidden service host makes when establishing an introduction
point, so that clients who know the public component of this key
can get their introduction requests sent to the right
service. No keypair is ever used with more than one introduction
point. (previously called a "service key" in rend-spec.txt)
Introduction point encryption key -- A short-term encryption
keypair used when establishing connections via an introduction
point. Plays a role analogous to Tor nodes' onion keys. A fresh
keypair is made for each introduction point.
Symmetric keys defined in this document:
Descriptor encryption keys -- A symmetric encryption key used to
encrypt the body of hidden service descriptors. Derived from the
current period and the hidden service credential.
Public/private keypairs defined elsewhere:
Onion key -- Short-term encryption keypair
(Node) identity key
Symmetric key-like things defined elsewhere:
KH from circuit handshake -- An unpredictable value derived as
part of the Tor circuit extension handshake, used to tie a request
to a particular circuit.
2. Generating and publishing hidden service descriptors [HSDIR]
Hidden service descriptors follow the same metaformat as other Tor
directory objects. They are published anonymously to Tor servers with
the HSDir3 flag.
(Authorities should assign this flag as they currently assign the
HSDir flag, except that they should restrict it to Tor versions
implementing the HSDir parts of this specification.)
2.1. Deriving blinded keys and subcredentials [SUBCRED]
In each time period (see [TIME-PERIOD] for a definition of time
periods), a hidden service host uses a different blinded private key
to sign its directory information, and clients use a different
blinded public key as the index for fetching that information.
For a candidate key derivation method, see Appendix [KEYBLIND].
Additionally, clients and hosts derive a subcredential for each
period. Knowledge of the subcredential is needed to decrypt hidden
service descriptors for each period and to authenticate with the
hidden service host in the introduction process. Unlike the
credential, it changes each period. Knowing the subcredential, even
in combination with the blinded private key, does not enable the
hidden service host to derive the main credential--therefore, it is
safe to put the subcredential on the hidden service host while
leaving the hidden service's private key offline.
The subcredential for a period is derived as:
H("subcredential" |
credential |
blinded-public-key).
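A minimal sketch of this derivation, assuming H is instantiated with
SHA-256 as suggested in section 0.3:

    import hashlib

    def subcredential(credential: bytes, blinded_public_key: bytes) -> bytes:
        # subcredential = H("subcredential" | credential | blinded-public-key)
        return hashlib.sha256(b"subcredential" + credential +
                              blinded_public_key).digest()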
2.2. Locating, uploading, and downloading hidden service descriptors
[HASHRING]
To avoid attacks where a hidden service's descriptor is easily
targeted for censorship, we store them at different directories over
time, and use shared random values to prevent those directories from
being predictable far in advance.
Which Tor servers host a hidden service's descriptors depends on:
* the current time period,
* the daily subcredential,
* the hidden service directories' public keys,
* a shared random value that changes in each time period,
* a set of network-wide networkstatus consensus parameters.
Below we explain in more detail.
2.2.1. Dividing time into periods [TIME-PERIODS]
To prevent a single set of hidden service directories from becoming a
target by adversaries looking to permanently censor a hidden service,
hidden service descriptors are uploaded to different locations that
change over time.
The length of a "time period" is controlled by the consensus
parameter 'hsdir-interval', and is a number of minutes between 30 and
14400 (10 days). The default time period length is 1500 (one day plus
one hour).
Time periods start with the Unix epoch (Jan 1, 1970), and are
computed by taking the number of whole minutes since the epoch and
dividing by the time period. So if the current time is 2013-11-12
13:44:32 UTC, making the seconds since the epoch 1384281872, the
number of minutes since the epoch is 23071364. If the current time
period length is 1500 (the default), then the current time period
number is 15380. It began 15380*1500*60 seconds after the epoch at
2013-11-11 20:00:00 UTC, and will end at (15380+1)*1500*60 seconds
after the epoch at 2013-11-12 21:00:00 UTC.
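The same calculation, as a short sketch that reproduces the worked
example above:

    def time_period_num(unix_time: int, period_minutes: int = 1500) -> int:
        # Whole minutes since the Unix epoch, divided by the period length.
        return (unix_time // 60) // period_minutes

    # 2013-11-12 13:44:32 UTC -> 1384281872 seconds -> period number 15380.
    assert time_period_num(1384281872) == 15380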
2.2.2. Overlapping time periods to avoid thundering herds [TIME-OVERLAP]
If every hidden service host were to generate a new set of keys and
upload a new descriptor at exactly the start of each time period, the
directories would be overwhelmed by every host uploading at the same
time. Instead, each public key becomes valid at its new location at a
deterministic time somewhat _before_ the period begins, depending on
the public key and the period.
The time at which a key might first become valid is determined by the
consensus parameter "hsdir-overlap-begins", which is an integer in
range [1,100] with default value 80. This parameter denotes a
percentage of the interval for which no overlap occurs. So for the
default interval (1500 minutes) and default overlap-begins value
(80%), new keys do not become valid for the first 1200 minutes of the
interval.
The new shared random value must be published *before* the start of
the next overlap interval by at least enough time to ensure that
clients all get it. [TODO: how much earlier?]
The time at which a key from the next interval becomes valid is
determined by taking the first two bytes of
OFFSET = H(Key | INT_8(Next_Period_Num))
as a big-endian integer, dividing by 65536, and treating that as a
fraction of the overlap interval.
For example, if the period is 1500 minutes long, and overlap interval
is 300 minutes long, and OFFSET begins with [90 50], then the next
key becomes valid at 1200 + 300 * (0x9050 / 65536) minutes, or
approximately 22 hours and 49 minutes after the beginning of the
period.
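A sketch of the offset calculation, assuming H = SHA-256 and that
INT_8() is an 8-byte big-endian encoding (both per sections 0.2 and
0.3); with the example OFFSET above it yields roughly 1369 minutes
into the period:

    import hashlib

    def next_key_valid_at(key: bytes, next_period_num: int,
                          period_minutes: int = 1500,
                          overlap_begins_pct: int = 80) -> float:
        # Minutes into the current period at which the next period's key
        # becomes valid.
        overlap_start = period_minutes * overlap_begins_pct // 100   # e.g. 1200
        overlap_len = period_minutes - overlap_start                 # e.g. 300
        offset = hashlib.sha256(key + next_period_num.to_bytes(8, "big")).digest()
        frac = int.from_bytes(offset[:2], "big") / 65536.0
        return overlap_start + overlap_len * frac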
Hidden service directories should accept descriptors at least [TODO:
how much?] minutes before they would become valid, and retain them
for at least [TODO: how much?] minutes after the end of the period.
When a client is looking for a service, it must calculate its key
both for the current and for the subsequent period, to decide whether
the next period's key is valid yet.
2.2.3. Where to publish a service descriptor
The following consensus parameters control where a hidden service
descriptor is stored:
hsdir_n_replicas = an integer in range [1,16]
with default value 2.
hsdir_spread_fetch = an integer in range [1,128]
with default value 3.
hsdir_spread_store = an integer in range [1,128]
with default value 3.
hsdir_spread_accept = an integer in range [1,128]
with default value 8.
To determine where a given hidden service descriptor will be stored
in a given period, after the blinded public key for that period is
derived, the uploading or downloading party calculates
for replicanum in 1...hsdir_n_replicas:
hs_index(replicanum) = H("store-at-idx" |
blinded_public_key | replicanum |
periodnum)
where blinded_public_key is specified in section KEYBLIND,
periodnum is defined in section TIME-PERIODS, and hsdir_n_replicas
is the consensus parameter defined above.
Then, for each node listed in the current consensus with the HSDir3
flag, we compute a directory index for that node as:
hsdir_index(node) = H(node_identity_digest |
shared_random |
INT_8(period_num) )
where shared_random is the shared value generated by the authorities
in section PUB-SHAREDRANDOM.
Finally, for replicanum in 1...hsdir_n_replicas, the hidden service
host uploads descriptors to the first hsdir_spread_store nodes whose
indices immediately follow hs_index(replicanum).
When choosing an HSDir to download from, clients choose randomly from
among the first hsdir_spread_fetch nodes after the indices. (Note
that, in order to make the system better tolerate disappearing
HSDirs, hsdir_spread_fetch may be less than hsdir_spread_store.)
An HSDir should reject a descriptor if that HSDir is not one of the
first hsdir_spread_accept HSDirs for that descriptor's index.
[TODO: Incorporate the findings from proposal 143 here. But watch
out: proposal 143 did not analyze how much the set of nodes changes
over time, or how much client and host knowledge might diverge.]
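To illustrate the index computations and the ring walk above, here is
a sketch assuming H = SHA-256 and 8-byte big-endian encodings for
replicanum and periodnum (the exact integer encodings are not pinned
down above):

    import hashlib

    def hs_index(blinded_public_key: bytes, replicanum: int, periodnum: int) -> bytes:
        return hashlib.sha256(b"store-at-idx" + blinded_public_key +
                              replicanum.to_bytes(8, "big") +
                              periodnum.to_bytes(8, "big")).digest()

    def hsdir_index(node_identity_digest: bytes, shared_random: bytes,
                    period_num: int) -> bytes:
        return hashlib.sha256(node_identity_digest + shared_random +
                              period_num.to_bytes(8, "big")).digest()

    def hsdirs_after(descriptor_index: bytes, hsdir_indices: dict, spread: int):
        # hsdir_indices maps node identity digest -> hsdir_index(node).
        # Return the first `spread` nodes whose indices follow the
        # descriptor index, wrapping around the ring.
        ring = sorted(hsdir_indices, key=hsdir_indices.get)
        after = [n for n in ring if hsdir_indices[n] > descriptor_index]
        before = [n for n in ring if hsdir_indices[n] <= descriptor_index]
        return (after + before)[:spread]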
2.2.4. URLs for anonymous uploading and downloading
Hidden service descriptors conforming to this specification are
uploaded with an HTTP POST request to the URL
/tor/rendezvous3/publish relative to the hidden service directory's
root, and downloaded with an HTTP GET request for the URL
/tor/rendezvous3/<z> where z is a base-64 encoding of the hidden
service's blinded public key.
[TODO: raw base64 is not super-nice for URLs, since it can have
slashes. We already use it for microdescriptor URLs, though. Do we
care here?]
These requests must be made anonymously, on circuits not used for
anything else.
2.3. Publishing shared random values [PUB-SHAREDRANDOM]
Our design for limiting the predictability of HSDir upload locations
relies on a shared random value that isn't predictable in advance or
too influenceable by an attacker. The authorities must run a protocol
to generate such a value at least once per hsdir period. Here we
describe how they publish these values; the procedure they use to
generate them can change independently of the rest of this
specification. For one possible (somewhat broken) protocol, see
Appendix [SHAREDRANDOM].
We add a new line in votes and consensus documents:
"hsdir-shared-random" PERIOD-START VALUE
PERIOD-START = YYYY-MM-DD HH:MM:SS
VALUE = A base-64 encoded 256-bit value.
To decide which hsdir-shared-random line to include in a consensus
for a given PERIOD-START, we choose whichever line appears verbatim
in the most votes, so long as it is listed by at least three
authorities. Ties are broken in favor of the lower value. More than
one PERIOD-START is allowed per vote, and per consensus. The same
PERIOD-START must not appear twice in a vote or in a consensus.
[TODO: Need to define a more robust algorithm. Need to cover cases
where multiple cluster of authorities publish a different value,
etc.]
The hsdir-shared-random lines appear, sorted by PERIOD-START, in the
consensus immediately after the "params" line.
The authorities should publish the shared random value for the
current period, and, at a time at least three voting periods before
the overlap interval begins, the shared random value for the next
period.
[TODO: find out what weasel doesn't like here.]
2.4. Hidden service descriptors: outer wrapper [DESC-OUTER]
The format for a hidden service descriptor is as follows, using the
meta-format from dir-spec.txt.
"hs-descriptor" SP "3" SP public-key SP certification NL
[At start, exactly once.]
public-key is the blinded public key for the service, encoded in
base 64. Certification is a certification of a short-term ed25519
descriptor signing key using the public key, in the format of
proposal 220.
"time-period" SP YYYY-MM-DD HH:MM:SS NUM NL
[Exactly once.]
The time period for which this descriptor is relevant, including
its starting time and its period number.
"revision-counter" SP Integer NL
[Exactly once.]
The revision number of the descriptor. If an HSDir receives a
second descriptor for a key that it already has a descriptor for,
it should retain and serve the descriptor with the higher
revision-counter.
(Checking for monotonically increasing revision-counter values
prevents an attacker from replacing a newer descriptor signed by
a given key with a copy of an older version.)
"encrypted" NL encrypted-string
[Exactly once.]
An encrypted blob, whose format is discussed in [ENCRYPTED-DATA]
below. The blob is base-64 encoded and enclosed in -----BEGIN
MESSAGE----- and -----END MESSAGE----- wrappers.
"signature" SP signature NL
[exactly once, at end.]
A signature of all previous fields, using the signing key in the
hs-descriptor line. We use a separate key for signing, so that
the hidden service host does not need to have its private blinded
key online.
2.5. Hidden service descriptors: encryption format [ENCRYPTED-DATA]
The encrypted part of the hidden service descriptor is encrypted and
authenticated with symmetric keys generated as follows:
salt = 16 random bytes
secret_input = nonce | blinded_public_key | subcredential |
INT_4(revision_counter)
keys = KDF(secret_input, salt, "hsdir-encrypted-data",
S_KEY_LEN + S_IV_LEN + MAC_KEY_LEN)
SECRET_KEY = first S_KEY_LEN bytes of keys
SECRET_IV = next S_IV_LEN bytes of keys
MAC_KEY = last MAC_KEY_LEN bytes of keys
The encrypted data has the format:
SALT (random bytes from above) [16 bytes]
ENCRYPTED The plaintext encrypted with STREAM [variable]
MAC MAC of both above fields [32 bytes]
The encryption format is ENCRYPTED =
STREAM(SECRET_IV,SECRET_KEY) xor Plaintext
Before encryption, the plaintext must be padded to a multiple of ???
bytes with NUL bytes. The plaintext must not be longer than ???
bytes. [TODO: how much? Should this be a parameter? What values in
practice is needed to hide how many intro points we have, and how
many might be legacy ones?]
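Putting the derivation and layout together, here is a sketch assuming
KDF = HKDF-SHA256, MAC = HMAC-SHA256, and STREAM = AES-128-CTR (the
section 0.3 suggestions), and assuming that the "nonce" in
secret_input refers to the 16-byte salt; padding is left out since
its length is still a TODO:

    import hashlib, hmac, os
    from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

    S_KEY_LEN, S_IV_LEN, MAC_KEY_LEN = 16, 16, 32

    def hkdf_sha256(ikm: bytes, salt: bytes, info: bytes, length: int) -> bytes:
        # RFC 5869 extract-then-expand.
        prk = hmac.new(salt, ikm, hashlib.sha256).digest()
        out, block, counter = b"", b"", 1
        while len(out) < length:
            block = hmac.new(prk, block + info + bytes([counter]),
                             hashlib.sha256).digest()
            out += block
            counter += 1
        return out[:length]

    def encrypt_descriptor_body(plaintext, blinded_public_key, subcredential,
                                revision_counter):
        salt = os.urandom(16)
        # Assumes the spec's "nonce" is the salt carried in the SALT field.
        secret_input = (salt + blinded_public_key + subcredential +
                        revision_counter.to_bytes(4, "big"))
        keys = hkdf_sha256(secret_input, salt, b"hsdir-encrypted-data",
                           S_KEY_LEN + S_IV_LEN + MAC_KEY_LEN)
        secret_key = keys[:S_KEY_LEN]
        secret_iv = keys[S_KEY_LEN:S_KEY_LEN + S_IV_LEN]
        mac_key = keys[S_KEY_LEN + S_IV_LEN:]
        enc = Cipher(algorithms.AES(secret_key), modes.CTR(secret_iv)).encryptor()
        encrypted = enc.update(plaintext) + enc.finalize()
        mac = hmac.new(mac_key, salt + encrypted, hashlib.sha256).digest()
        return salt + encrypted + mac          # SALT | ENCRYPTED | MAC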
The plaintext format is:
"create2-formats" SP formats NL
[Exactly once]
A space-separated list of integers denoting CREATE2 cell format
numbers that the server recognizes. Must include at least TAP and
ntor as described in tor-spec.txt. See tor-spec section 5.1 for a
list of recognized handshake types.
"authentication-required" SP types NL
[At most once]
A space-separated list of authentication types. A client that does
not support at least one of these authentication types will not be
able to contact the host. Recognized types are: 'password' and
'ed25519'. See [INTRO-AUTH] below.
At least once:
"introduction-point" SP link-specifiers NL
[Exactly once per introduction point at start of introduction
point section]
The link-specifiers is a base64 encoding of a link specifier
block in the format described in BUILDING-BLOCKS.
"auth-key" SP "ed25519" SP key SP certification NL
[Exactly once per introduction point]
Base-64 encoded introduction point authentication key that was
used to establish the introduction point circuit, cross-certifying
the blinded public key using the certification format of
proposal 220.
"enc-key" SP "ntor" SP key NL
[At most once per introduction point]
Base64-encoded curve25519 key used to encrypt request to
hidden service.
[TODO: I'd like to have a cross-certification here too.]
"enc-key" SP "legacy" NL key NL
[At most once per introduction point]
Base64-encoded RSA key, wrapped in "----BEGIN RSA PUBLIC
KEY-----" armor, for use with a legacy introduction point as
described in [LEGACY_EST_INTRO] and [LEGACY-INTRODUCE1] below.
Exactly one of the "enc-key ntor" and "enc-key legacy"
elements must be present for each introduction point.
[TODO: I'd like to have a cross-certification here too.]
Other encryption and authentication key formats are allowed; clients
should ignore ones they do not recognize.
3. The introduction protocol
The introduction protocol proceeds in three steps.
First, a hidden service host builds an anonymous circuit to a Tor
node and registers that circuit as an introduction point.
[Between these steps, the hidden service publishes its
introduction points and associated keys, and the client fetches
them as described in section [HSDIR] above.]
Second, a client builds an anonymous circuit to the introduction
point, and sends an introduction request.
Third, the introduction point relays the introduction request along
the introduction circuit to the hidden service host, and acknowledges
the introduction request to the client.
3.1. Registering an introduction point [REG_INTRO_POINT]
3.1.1. Extensible ESTABLISH_INTRO protocol. [EST_INTRO]
When a hidden service is establishing a new introduction point, it
sends an ESTABLISH_INTRO cell with the following contents:
AUTH_KEY_TYPE [1 byte]
AUTH_KEY_LEN [1 byte]
AUTH_KEY [AUTH_KEY_LEN bytes]
Any number of times:
EXT_FIELD_TYPE [1 byte]
EXT_FIELD_LEN [1 byte]
EXT_FIELD [EXT_FIELD_LEN bytes]
ZERO [1 byte]
HANDSHAKE_AUTH [MAC_LEN bytes]
SIGLEN [1 byte]
SIG [SIGLEN bytes]
The AUTH_KEY_TYPE field indicates the type of the introduction point
authentication key and the type of the MAC to use for
HANDSHAKE_AUTH. Recognized types are:
[00, 01] -- Reserved for legacy introduction cells; see
[LEGACY_EST_INTRO] below.
[02] -- Ed25519; HMAC-SHA256.
[FF] -- Reserved for maintenance messages on existing
circuits; see MAINT_INTRO below.
[TODO: Should this just be a new relay cell type?
Matthew and George think so.]
The AUTH_KEY_LEN field determines the length of the AUTH_KEY
field. The AUTH_KEY field contains the public introduction point
authentication key.
The EXT_FIELD_TYPE, EXT_FIELD_LEN, EXT_FIELD entries are reserved for
future extensions to the introduction protocol. Extensions with
unrecognized EXT_FIELD_TYPE values must be ignored.
The ZERO field contains the byte zero; it marks the end of the
extension fields.
The HANDSHAKE_AUTH field contains the MAC of all earlier fields in
the cell using as its key the shared per-circuit material ("KH")
generated during the circuit extension protocol; see tor-spec.txt
section 5.2, "Setting circuit keys". It prevents replays of
ESTABLISH_INTRO cells.
SIGLEN is the length of the signature.
SIG is a signature, using AUTH_KEY, of all contents of the cell, up
to but not including SIG. These contents are prefixed with the string
"Tor establish-intro cell v1".
Upon receiving an ESTABLISH_INTRO cell, a Tor node first decodes the
key and the signature, and checks the signature. The node must reject
the ESTABLISH_INTRO cell and destroy the circuit in these cases:
* If the key type is unrecognized
* If the key is ill-formatted
* If the signature is incorrect
* If the HANDSHAKE_AUTH value is incorrect
* If the circuit is already a rendezvous circuit.
* If the circuit is already an introduction circuit.
[TODO: some scalability designs fail there.]
* If the key is already in use by another circuit.
Otherwise, the node must associate the key with the circuit, for use
later in INTRODUCE1 cells.
[TODO: The above will work fine with what we do today, but it will do
quite badly if we ever freak out and want to go back to RSA2048 or
bigger. Do we care?]
3.1.2. Registering an introduction point on a legacy Tor node [LEGACY_EST_INTRO]
Tor nodes should also support an older version of the ESTABLISH_INTRO
cell, first documented in rend-spec.txt. New hidden service hosts
must use this format when establishing introduction points at older
Tor nodes that do not support the format above in [EST_INTRO].
In this older protocol, an ESTABLISH_INTRO cell contains:
KEY_LENGTH [2 bytes]
KEY [KEY_LENGTH bytes]
HANDSHAKE_AUTH [20 bytes]
SIG [variable, up to end of relay payload]
The KEY_LENGTH variable determines the length of the KEY field.
The KEY field is an ASN.1-encoded RSA public key.
The HANDSHAKE_AUTH field contains the SHA1 digest of (KH |
"INTRODUCE").
The SIG field contains an RSA signature, using PKCS1 padding, of all
earlier fields.
Note that since the relay payload itself may be no more than 498
bytes long, the KEY_LENGTH field can never have a first byte other
than [00] or [01]. These values are used to distinguish legacy
ESTABLISH_INTRO cells from newer ones.
Older versions of Tor always use a 1024-bit RSA key for these
introduction authentication keys.
Newer hidden services MAY use RSA keys of up to 1904 bits. Any more than
that will not fit in a RELAY cell payload.
3.1.3. Managing introduction circuits [MAINT_INTRO]
If the first byte of an ESTABLISH_INTRO cell is [FF], the cell's body
contains an administrative command for the circuit. The format of
such a command is:
Any number of times:
SUBCOMMAND_TYPE [2 bytes]
SUBCOMMAND_LEN [2 bytes]
SUBCOMMAND [SUBCOMMAND_LEN bytes]
Recognized SUBCOMMAND_TYPE values are:
[00 01] -- update encryption keys
[TODO: Matthew says, "This can be used to fork an intro point to
balance traffic over multiple hidden service servers while
maintaining the criteria for a valid ESTABLISH_INTRO
cell. -MF". Investigate.]
Unrecognized SUBCOMMAND_TYPE values should be ignored.
3.1.3.1. Updating encryption keys (subcommand 0001) [UPDATE-KEYS-SUBCMD]
Hidden service hosts send this subcommand to set their initial
encryption keys or update the configured public encryption keys
associated with this circuit. This message must be sent after
establishing an introduction point, before the circuit can be
advertised. These keys are given in the form:
NUMKEYS [1 byte]
NUMKEYS times:
KEYTYPE [1 byte]
KEYLEN [1 byte]
KEY [KEYLEN bytes]
COUNTER [4 bytes]
SIGLEN [1 byte]
SIGNATURE [SIGLEN bytes.]
The KEYTYPE value [01] is for Curve25519 keys.
The COUNTER field is a monotonically increasing value across a given
introduction point authentication key.
The SIGNATURE must be generated with the introduction point
authentication key, and must cover the entire subcommand body,
prefixed with the string "Tor hidden service introduction encryption
keys v1".
[TODO: Nothing is done here to prove ownership of the encryption
keys. Does that matter?]
[TODO: The point here is to allow encryption keys to change while
maintaining an introduction point and not forcing a client to
download a new descriptor. I'm not sure if that's worth it. It makes
clients who have seen a key before distinguishable from ones who have
not.]
[Matthew says: "Repeat-client over long periods of time will always
be distinguishable. It may be better to simply expire intro points
than try to preserve forward-secrecy, though". Must find out what he
meant.]
Setting the encryption keys for a given circuit replaces the previous
keys for that circuit. Clients who attempt to connect using the old
key receive an INTRO_ACK cell with error code [00 02] as described in
section [INTRO_ACK] below.
3.1.4. Acknowledging establishment of introduction point [INTRO_ESTABLISHED]
After setting up an introduction circuit, the introduction point
reports its status back to the hidden service host with an empty
INTRO_ESTABLISHED cell.
[TODO: make this cell type extensible. It should be able to include
data if that turns out to be needed.]
3.2. Sending an INTRODUCE1 cell to the introduction point. [SEND_INTRO1]
In order to participate in the introduction protocol, a client must
know the following:
* An introduction point for a service.
* The introduction authentication key for that introduction point.
* The introduction encryption key for that introduction point.
The client sends an INTRODUCE1 cell to the introduction point,
containing an identifier for the service, an identifier for the
encryption key that the client intends to use, and an opaque blob to
be relayed to the hidden service host.
In reply, the introduction point sends an INTRODUCE_ACK cell back to
the client, either informing it that its request has been delivered,
or that its request will not succeed.
3.2.1. INTRODUCE1 cell format [FMT_INTRO1]
An INTRODUCE1 cell has the following contents:
AUTH_KEYID [32 bytes]
ENC_KEYID [8 bytes]
Any number of times:
EXT_FIELD_TYPE [1 byte]
EXT_FIELD_LEN [1 byte]
EXT_FIELD [EXT_FIELD_LEN bytes]
ZERO [1 byte]
ENCRYPTED [Up to end of relay payload]
[TODO: Should we have a field to determine the type of ENCRYPTED, or
should we instead assume that there is exactly one encryption key per
encryption method? The latter is probably safer.]
Upon receiving an INTRODUCE1 cell, the introduction point checks
whether AUTH_KEYID and ENC_KEYID match a configured introduction
point authentication key and introduction point encryption key. If
they do, the cell is relayed; if not, it is not.
The AUTH_KEYID for an Ed25519 public key is the public key itself.
The ENC_KEYID for a Curve25519 public key is the first 8 bytes of the
public key. (This key ID is safe to truncate, since all the keys are
generated by the hidden service host, and the ID is only valid
relative to a single AUTH_KEYID.) The ENCRYPTED field is as
described in 3.3 below.
To relay an INTRODUCE1 cell, the introduction point sends an
INTRODUCE2 cell with exactly the same contents.
3.2.2. INTRODUCE_ACK cell format. [INTRO_ACK]
An INTRODUCE_ACK cell has the following fields:
STATUS [2 bytes]
Any number of times:
EXT_FIELD_TYPE [1 byte]
EXT_FIELD_LEN [1 byte]
EXT_FIELD [EXT_FIELD_LEN bytes]
Recognized status values are:
[00 00] -- Success: cell relayed to hidden service host.
[00 01] -- Failure: service ID not recognized
[00 02] -- Failure: key ID not recognized
[00 03] -- Bad message format
Recognized extension field types:
[00 01] -- signed set of encryption keys
The extension field type 0001 is a signed set of encryption keys; its
body matches the body of the key update command in
[UPDATE-KEYS-SUBCMD]. Whenever sending status [00 02], the introduction
point MUST send this extension field.
3.2.3. Legacy formats [LEGACY-INTRODUCE1]
When the ESTABLISH_INTRO cell format of [LEGACY_EST_INTRO] is used,
INTRODUCE1 cells are of the form:
AUTH_KEYID_HASH [20 bytes]
ENC_KEYID [8 bytes]
Any number of times:
EXT_FIELD_TYPE [1 byte]
EXT_FIELD_LEN [1 byte]
EXT_FIELD [EXT_FIELD_LEN bytes]
ZERO [1 byte]
ENCRYPTED [Up to end of relay payload]
Here, AUTH_KEYID_HASH is the hash of the introduction point
authentication key used to establish the introduction.
Because of limitations in older versions of Tor, the relay payload
size for these INTRODUCE1 cells must always be at least 246 bytes, or
they will be rejected as invalid.
3.3. Processing an INTRODUCE2 cell at the hidden service. [PROCESS_INTRO2]
Upon receiving an INTRODUCE2 cell, the hidden service host checks
whether the AUTH_KEYID/AUTH_KEYID_HASH field and the ENC_KEYID fields
are as expected, and match the configured authentication and
encryption key(s) on that circuit.
The service host then checks whether it has received a cell with
these contents before. If it has, it silently drops it as a
replay. (It must maintain a replay cache for as long as it accepts
cells with the same encryption key.)
If the cell is not a replay, it decrypts the ENCRYPTED field,
establishes a shared key with the client, and authenticates the whole
contents of the cell as having been unmodified since they left the
client. There may be multiple ways of decrypting the ENCRYPTED field,
depending on the chosen type of the encryption key. Requirements for
an introduction handshake protocol are described in
[INTRO-HANDSHAKE-REQS]. We specify one below in section
[NTOR-WITH-EXTRA-DATA].
The decrypted plaintext must have the form:
REND_TOKEN [20 bytes]
Any number of times:
EXT_FIELD_TYPE [1 byte]
EXT_FIELD_LEN [1 byte]
EXT_FIELD [EXT_FIELD_LEN bytes]
ZERO [1 byte]
ONION_KEY_TYPE [2 bytes]
ONION_KEY [depends on ONION_KEY_TYPE]
NSPEC (Number of link specifiers) [1 byte]
NSPEC times:
LSTYPE (Link specifier type) [1 byte]
LSLEN (Link specifier length) [1 byte]
LSPEC (Link specifier) [LSLEN bytes]
PAD (optional padding) [up to end of plaintext]
Upon processing this plaintext, the hidden service makes sure that
any required authentication is present in the extension fields, and
then extends a rendezvous circuit to the node described in the LSPEC
fields, using the ONION_KEY to complete the extension. As mentioned
in [BUILDING-BLOCKS], the "TLS-over-TCP, IPv4" and "Legacy node
identity" specifiers must be present.
The hidden service SHOULD NOT reject any LSTYPE fields which it
doesn't recognize; instead, it should use them verbatim in its EXTEND
request to the rendezvous point.
The ONION_KEY_TYPE field is one of:
[01] TAP-RSA-1024: ONION_KEY is 128 bytes long.
[02] NTOR: ONION_KEY is 32 bytes long.
The ONION_KEY field describes the onion key that must be used when
extending to the rendezvous point. It must be of a type listed as
supported in the hidden service descriptor.
Upon receiving a well-formed INTRODUCE2 cell, the hidden service host
will have:
* The information needed to connect to the client's chosen
rendezvous point.
* The second half of a handshake to authenticate and establish a
shared key with the hidden service client.
* A set of shared keys to use for end-to-end encryption.
3.3.1. Introduction handshake encryption requirements [INTRO-HANDSHAKE-REQS]
When decoding the encrypted information in an INTRODUCE2 cell, a
hidden service host must be able to:
* Decrypt additional information included in the INTRODUCE2 cell,
to include the rendezvous token and the information needed to
extend to the rendezvous point.
* Establish a set of shared keys for use with the client.
* Authenticate that the cell has not been modified since the client
generated it.
Note that the old TAP-derived protocol of the previous hidden service
design achieved the first two requirements, but not the third.
3.3.2. Example encryption handshake: ntor with extra data [NTOR-WITH-EXTRA-DATA]
This is a variant of the ntor handshake (see tor-spec.txt, section
5.1.4; see proposal 216; and see "Anonymity and one-way
authentication in key-exchange protocols" by Goldberg, Stebila, and
Ustaoglu).
It behaves the same as the ntor handshake, except that, in addition
to negotiating forward secure keys, it also provides a means for
encrypting non-forward-secure data to the server (in this case, to
the hidden service host) as part of the handshake.
Notation here is as in section 5.1.4 of tor-spec.txt, which defines
the ntor handshake.
The PROTOID for this variant is
"hidden-service-ntor-curve25519-sha256-1". Define the tweak value
t_hsenc, and the tag value m_hsexpand as:
t_hsenc = PROTOID | ":hs_key_extract"
m_hsexpand = PROTOID | ":hs_key_expand"
To make an INTRODUCE cell, the client must know a public encryption
key B for the hidden service on this introduction circuit. The client
generates a single-use keypair:
x,X = KEYGEN()
and computes:
secret_hs_input = EXP(B,x) | AUTH_KEYID | X | B | PROTOID
info = m_hsexpand | subcredential
hs_keys = HKDF(secret_hs_input, t_hsenc, info,
S_KEY_LEN+MAC_KEY_LEN)
ENC_KEY = hs_keys[0:S_KEY_LEN]
MAC_KEY = hs_keys[S_KEY_LEN:S_KEY_LEN+MAC_KEY_LEN]
and sends, as the ENCRYPTED part of the INTRODUCE1 cell:
CLIENT_PK [G_LENGTH bytes]
ENCRYPTED_DATA [Padded to length of plaintext]
MAC [MAC_LEN bytes]
Substituting those fields into the INTRODUCE1 cell body format
described in [FMT_INTRO1] above, we have
AUTH_KEYID [32 bytes]
ENC_KEYID [8 bytes]
Any number of times:
EXT_FIELD_TYPE [1 byte]
EXT_FIELD_LEN [1 byte]
EXT_FIELD [EXT_FIELD_LEN bytes]
ZERO [1 byte]
ENCRYPTED:
CLIENT_PK [G_LENGTH bytes]
ENCRYPTED_DATA [Padded to length of plaintext]
MAC [MAC_LEN bytes]
(This format is as documented in [FMT_INTRO1] above, except that here
we describe how to build the ENCRYPTED portion. If the introduction
point is running an older Tor that does not support this protocol,
the first field is replaced by a 20-byte AUTH_KEYID_HASH field as
described in [LEGACY-INTRODUCE1].)
Here, the encryption key plays the role of B in the regular ntor
handshake, and the AUTH_KEYID field plays the role of the node ID.
The CLIENT_PK field is the public key X. The ENCRYPTED_DATA field is
the message plaintext, encrypted with the symmetric key ENC_KEY. The
MAC field is a MAC of all of the cell from the AUTH_KEYID through the
end of ENCRYPTED_DATA, using the MAC_KEY value as its key.
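A client-side sketch of this derivation, assuming curve25519 via the
'cryptography' package and treating the HKDF above as RFC 5869
HKDF-SHA256 with t_hsenc as the salt; the key lengths follow the
section 0.3 suggestions:

    import hashlib, hmac
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric.x25519 import (
        X25519PrivateKey, X25519PublicKey)
    from cryptography.hazmat.primitives.kdf.hkdf import HKDF
    from cryptography.hazmat.primitives.serialization import Encoding, PublicFormat

    PROTOID = b"hidden-service-ntor-curve25519-sha256-1"
    t_hsenc = PROTOID + b":hs_key_extract"
    m_hsexpand = PROTOID + b":hs_key_expand"
    S_KEY_LEN, MAC_KEY_LEN = 16, 32

    def client_intro_keys(B_bytes: bytes, auth_keyid: bytes, subcredential: bytes):
        # x, X = KEYGEN(); returns (X, ENC_KEY, MAC_KEY).
        x = X25519PrivateKey.generate()
        X = x.public_key().public_bytes(Encoding.Raw, PublicFormat.Raw)
        exp_B_x = x.exchange(X25519PublicKey.from_public_bytes(B_bytes))  # EXP(B,x)
        secret_hs_input = exp_B_x + auth_keyid + X + B_bytes + PROTOID
        info = m_hsexpand + subcredential
        hs_keys = HKDF(algorithm=hashes.SHA256(), length=S_KEY_LEN + MAC_KEY_LEN,
                       salt=t_hsenc, info=info).derive(secret_hs_input)
        return X, hs_keys[:S_KEY_LEN], hs_keys[S_KEY_LEN:]

    def intro_mac(mac_key: bytes, covered: bytes) -> bytes:
        # MAC over the cell from AUTH_KEYID through the end of ENCRYPTED_DATA.
        return hmac.new(mac_key, covered, hashlib.sha256).digest()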
To process this format, the hidden service checks PK_VALID(CLIENT_PK)
as necessary, and then computes ENC_KEY and MAC_KEY as the client did
above, except using EXP(CLIENT_PK,b) in the calculation of
secret_hs_input. The service host then checks whether the MAC is
correct. If it is invalid, it drops the cell. Otherwise, it computes
the plaintext by decrypting ENCRYPTED_DATA.
The hidden service host now completes the service side of the
extended ntor handshake, as described in tor-spec.txt section 5.1.4,
with the modified PROTOID as given above. To be explicit, the hidden
service host generates a keypair of y,Y = KEYGEN(), and uses its
introduction point encryption key 'b' to compute:
xb = EXP(X,b)
secret_hs_input = xb | AUTH_KEYID | X | B | PROTOID
info = m_hsexpand | subcredential
hs_keys = HKDF(secret_hs_input, t_hsenc, info,
S_KEY_LEN+MAC_KEY_LEN)
HS_DEC_KEY = hs_keys[0:S_KEY_LEN]
HS_MAC_KEY = hs_keys[S_KEY_LEN:S_KEY_LEN+MAC_KEY_LEN]
(The above are used to check the MAC and then decrypt the
encrypted data.)
ntor_secret_input = EXP(X,y) | xb | ID | B | X | Y | PROTOID
NTOR_KEY_SEED = H(ntor_secret_input, t_key)
verify = H(ntor_secret_input, t_verify)
auth_input = verify | ID | B | Y | X | PROTOID | "Server"
(The above are used to finish the ntor handshake.)
The server's handshake reply is:
SERVER_PK Y [G_LENGTH bytes]
AUTH H(auth_input, t_mac) [H_LENGTH bytes]
These fields can be sent to the client in a RENDEZVOUS1 cell.
(See [JOIN_REND] below.)
The hidden service host now also knows the keys generated by the
handshake, which it will use to encrypt and authenticate data
end-to-end between the client and the server. These keys are as
computed in tor-spec.txt section 5.1.4.
3.4. Authentication during the introduction phase. [INTRO-AUTH]
Hidden services may restrict access only to authorized users. One
mechanism to do so is the credential mechanism, where only users who
know the credential for a hidden service may connect at all. For more
fine-grained control, a hidden service can be configured with
password-based or public-key-based authentication.
3.4.1. Password-based authentication.
To authenticate with a password, the user must include an extension
field in the encrypted part of the INTRODUCE cell with an
EXT_FIELD_TYPE type of [01] and the contents:
Username [00] Password.
The username may not include any [00] bytes. The password may.
On the server side, the password MUST be stored hashed and salted,
ideally with scrypt or something better.
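Here is a sketch of parsing the extension body and of salted scrypt
storage on the server side; the scrypt parameters are illustrative,
not values fixed by this proposal:

    import hashlib, hmac, os

    def parse_userauth(ext_body: bytes):
        # Username [00] Password; split at the first NUL byte.
        username, _, password = ext_body.partition(b"\x00")
        return username, password

    def store_password(password: bytes):
        salt = os.urandom(16)
        digest = hashlib.scrypt(password, salt=salt, n=2**14, r=8, p=1, dklen=32)
        return salt, digest

    def check_password(password: bytes, salt: bytes, digest: bytes) -> bool:
        candidate = hashlib.scrypt(password, salt=salt, n=2**14, r=8, p=1, dklen=32)
        return hmac.compare_digest(candidate, digest)   # constant-time compare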
3.4.2. Ed25519-based authentication.
To authenticate with an Ed25519 private key, the user must include an
extension field in the encrypted part of the INTRODUCE cell with an
EXT_FIELD_TYPE type of [02] and the contents:
Nonce [16 bytes]
Pubkey [32 bytes]
Signature [64 bytes]
Nonce is a random value. Pubkey is the public key that will be used
to authenticate. [TODO: should this be an identifier for the public
key instead?] Signature is the signature, using Ed25519, of:
"Hidserv-userauth-ed25519"
Nonce (same as above)
Pubkey (same as above)
AUTH_KEYID (As in the INTRODUCE1 cell)
ENC_KEYID (As in the INTRODUCE1 cell)
The hidden service host checks this by seeing whether it recognizes
and would accept a signature from the provided public key. If it
would, then it checks whether the signature is correct. If it is,
then the correct user has authenticated.
Replay prevention on the whole cell is sufficient to prevent replays
on the authentication.
Users SHOULD NOT use the same public key with multiple hidden
services.
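As an illustration, here is a sketch of building and checking this
extension with the 'cryptography' package's Ed25519 primitives; the
field offsets follow the layout above:

    import os
    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import (
        Ed25519PrivateKey, Ed25519PublicKey)
    from cryptography.hazmat.primitives.serialization import Encoding, PublicFormat

    PREFIX = b"Hidserv-userauth-ed25519"

    def make_userauth_ext(user_seckey: Ed25519PrivateKey,
                          auth_keyid: bytes, enc_keyid: bytes) -> bytes:
        nonce = os.urandom(16)
        pubkey = user_seckey.public_key().public_bytes(Encoding.Raw,
                                                       PublicFormat.Raw)
        sig = user_seckey.sign(PREFIX + nonce + pubkey + auth_keyid + enc_keyid)
        return nonce + pubkey + sig          # Nonce | Pubkey | Signature

    def check_userauth_ext(ext: bytes, auth_keyid: bytes, enc_keyid: bytes,
                           allowed_pubkeys) -> bool:
        nonce, pubkey, sig = ext[:16], ext[16:48], ext[48:112]
        if pubkey not in allowed_pubkeys:    # would we accept this key at all?
            return False
        try:
            Ed25519PublicKey.from_public_bytes(pubkey).verify(
                sig, PREFIX + nonce + pubkey + auth_keyid + enc_keyid)
            return True
        except InvalidSignature:
            return False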
4. The rendezvous protocol
Before connecting to a hidden service, the client first builds a
circuit to an arbitrarily chosen Tor node (known as the rendezvous
point), and sends an ESTABLISH_RENDEZVOUS cell. The hidden service
later connects to the same node and sends a RENDEZVOUS cell. Once
this has occurred, the relay forwards the contents of the RENDEZVOUS
cell to the client, and joins the two circuits together.
4.1. Establishing a rendezvous point [EST_REND_POINT]
The client sends the rendezvous point a
RELAY_COMMAND_ESTABLISH_RENDEZVOUS cell containing a 20-byte value.
RENDEZVOUS_COOKIE [20 bytes]
Rendezvous points MUST ignore any extra bytes in an
ESTABLISH_RENDEZVOUS message. (Older versions of Tor did not.)
The rendezvous cookie is an arbitrary 20-byte value, chosen randomly
by the client. The client SHOULD choose a new rendezvous cookie for
each new connection attempt. If the rendezvous cookie is already in
use on an existing circuit, the rendezvous point should reject it and
destroy the circuit.
Upon receiving an ESTABLISH_RENDEZVOUS cell, the rendezvous point
associates the cookie with the circuit on which it was sent. It
replies to the client with an empty RENDEZVOUS_ESTABLISHED cell to
indicate success. [TODO: make this extensible]
The client MUST NOT use the circuit which sent the cell for any
purpose other than rendezvous with the given location-hidden service.
The client should establish a rendezvous point BEFORE trying to
connect to a hidden service.
4.2. Joining to a rendezvous point [JOIN_REND]
To complete a rendezvous, the hidden service host builds a circuit to
the rendezvous point and sends a RENDEZVOUS1 cell containing:
RENDEZVOUS_COOKIE [20 bytes]
HANDSHAKE_INFO [variable; depends on handshake type
used.]
If the cookie matches the rendezvous cookie set on any
not-yet-connected circuit on the rendezvous point, the rendezvous
point connects the two circuits, and sends a RENDEZVOUS2 cell to the
client containing the contents of the RENDEZVOUS1 cell.
Upon receiving the RENDEZVOUS2 cell, the client verifies that the
HANDSHAKE_INFO correctly completes a handshake, and uses the
handshake output to derive shared keys for use on the circuit.
[TODO: Should we encrypt HANDSHAKE_INFO as we did INTRODUCE2
contents? It's not necessary, but it could be wise. Similarly, we
should make it extensible.]
4.3. Using legacy hosts as rendezvous points
The behavior of ESTABLISH_RENDEZVOUS is unchanged from older versions
of this protocol, except that relays should now ignore unexpected
bytes at the end.
Old versions of Tor required that RENDEZVOUS cell payloads be exactly
168 bytes long. All shorter rendezvous payloads should be padded to
this length with [00] bytes.
5. Encrypting data between client and host
A successfully completed handshake, as embedded in the
INTRODUCE/RENDEZVOUS cells, gives the client and hidden service host
a shared set of keys Kf, Kb, Df, Db, which they use for end-to-end
encryption and authentication of traffic as in the regular Tor relay
encryption protocol: when sending, this encryption layer is applied
before the other circuit encryption layers; when receiving, it is
removed after the other layers are decrypted. The client encrypts
with Kf and decrypts with Kb; the
service host does the opposite.
6. Open Questions:
Scaling hidden services is hard. There are ongoing discussions that
you might be able to help with. See [SCALING-REFS].
How can we improve the HSDir unpredictability design proposed in
[SHAREDRANDOM]? See [SHAREDRANDOM-REFS] for discussion.
How can hidden service addresses become memorable while retaining
their self-authenticating and decentralized nature? See
[HUMANE-HSADDRESSES-REFS] for some proposals; many more are possible.
Hidden services are pretty slow, both because of the lengthy setup
procedure and because the final circuit has 6 hops. How can we make
the hidden service protocol faster? See [PERFORMANCE-REFS] for some
suggestions.
References:
[KEYBLIND-REFS]:
https://trac.torproject.org/projects/tor/ticket/8106
https://lists.torproject.org/pipermail/tor-dev/2012-September/004026.html
[SHAREDRANDOM-REFS]:
https://trac.torproject.org/projects/tor/ticket/8244
https://lists.torproject.org/pipermail/tor-dev/2013-November/005847.html
https://lists.torproject.org/pipermail/tor-talk/2013-November/031230.html
[SCALING-REFS]:
https://lists.torproject.org/pipermail/tor-dev/2013-October/005556.html
[HUMANE-HSADDRESSES-REFS]:
https://gitweb.torproject.org/torspec.git/blob/HEAD:/proposals/ideas/xxx-on…
http://archives.seul.org/or/dev/Dec-2011/msg00034.html
[PERFORMANCE-REFS]:
"Improving Efficiency and Simplicity of Tor circuit
establishment and hidden services" by Overlier, L., and
P. Syverson
[TODO: Need more here! Do we have any? :( ]
[ATTACK-REFS]:
"Trawling for Tor Hidden Services: Detection, Measurement,
Deanonymization" by Alex Biryukov, Ivan Pustogarov,
Ralf-Philipp Weinmann
"Locating Hidden Servers" by Lasse Øverlier and Paul
Syverson
[ED25519-REFS]:
"High-speed high-security signatures" by Daniel
J. Bernstein, Niels Duif, Tanja Lange, Peter Schwabe, and
Bo-Yin Yang. http://cr.yp.to/papers.html#ed25519
Appendix A. Signature scheme with key blinding [KEYBLIND]
As described in [IMD:DIST] and [SUBCRED] above, we require a "key
blinding" system that works (roughly) as follows:
There is a master keypair (sk, pk).
Given the keypair and a nonce n, there is a derivation function
that gives a new blinded keypair (sk_n, pk_n). This keypair can
be used for signing.
Given only the public key and the nonce, there is a function
that gives pk_n.
Without knowing pk, it is not possible to derive pk_n; without
knowing sk, it is not possible to derive sk_n.
It's possible to check that a signature was made with sk_n while
knowing only pk_n.
Someone who sees a large number of blinded public keys and
signatures made using those public keys can't tell which
signatures and which blinded keys were derived from the same
master keypair.
You can't forge signatures.
[TODO: Insert a more rigorous definition and better references.]
We propose the following scheme for key blinding, based on Ed25519.
(This is an ECC group, so remember that scalar multiplication is the
trapdoor function, and it's defined in terms of iterated point
addition. See the Ed25519 paper [Reference ED25519-REFS] for a fairly
clear writeup.)
Let the basepoint be written as B. Assume B has prime order l, so
lB=0. Let a master keypair be written as (a,A), where a is the private
key and A is the public key (A=aB).
To derive the key for a nonce N and an optional secret s, compute the
blinding factor h as H(A | s, B, N), and let:
private key for the period: a' = h a
public key for the period: A' = h A = (ha)B
Generating a signature of M: given a deterministic random-looking r
(see EdDSA paper), take R=rB, S=r+hash(R,A',M)ah mod l. Send signature
(R,S) and public key A'.
Verifying the signature: Check whether SB = R+hash(R,A',M)A'.
(If the signature is valid,
SB = (r + hash(R,A',M)ah)B
= rB + (hash(R,A',M)ah)B
= R + hash(R,A',M)A' )
See [KEYBLIND-REFS] for an extensive discussion on this scheme and
possible alternatives. I've transcribed this from a description by
Tanja Lange at the end of the thread. [TODO: We'll want a proof for
this.]
(To use this with Tor, set N = INT_8(period-number) | INT_8(Start of
period in seconds since epoch).)
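To make the algebra above easier to follow, here is a toy demonstration
in Python. It is emphatically not Ed25519: the "group" is just the
multiplicative group modulo a prime, so "scalar multiplication" becomes
modular exponentiation and "point addition" becomes modular
multiplication, and the hash is an arbitrary stand-in. The identities
being checked are the same ones as in the derivation above.
  import hashlib, os

  p = 2**255 - 19    # a convenient prime modulus (any large prime would do)
  q = p - 1          # exponents ("scalars") can be reduced mod p-1 (Fermat)
  g = 5              # stand-in for the basepoint B

  def rand_scalar():
      return int.from_bytes(os.urandom(32), 'big') % q

  def H(*parts):
      h = hashlib.sha256()
      for part in parts:
          h.update(repr(part).encode())
      return int(h.hexdigest(), 16) % q

  # Master keypair (a, A): A = "aB", i.e. g^a in this toy group.
  a = rand_scalar()
  A = pow(g, a, p)

  # Blinding factor h for a nonce N (the optional secret s is omitted here).
  N = b"period-nonce"
  h = H(A, g, N)

  # Blinded keypair: a' = h*a, and A' = "h*A" = g^(h*a).
  a_blind = (h * a) % q
  A_blind = pow(A, h, p)

  # Sign M with the blinded key: R = "rB", S = r + hash(R, A', M)*a*h mod l.
  M = b"hello hidden service"
  r = rand_scalar()
  R = pow(g, r, p)
  S = (r + H(R, A_blind, M) * a_blind) % q

  # Verify: "SB == R + hash(R, A', M) A'", i.e. g^S == R * A'^hash (mod p).
  assert pow(g, S, p) == (R * pow(A_blind, H(R, A_blind, M), p)) % p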
Appendix B. Selecting nodes [PICKNODES]
Picking introduction points
Picking rendezvous points
Building paths
Reusing circuits
(TODO: This needs a writeup)
Appendix C. Recommendations for searching for vanity .onions [VANITY]
EDITORIAL NOTE: The author thinks that it's silly to brute-force the
keyspace for a key that, when base-32 encoded, spells out the name of
your website. It also feels a bit dangerous to me. If you train your
users to connect to
llamanymityx4fi3l6x2gyzmtmgxjyqyorj9qsb5r543izcwymle.onion
I worry that you're making it easier for somebody to trick them into
connecting to
llamanymityb4sqi0ta0tsw6uovyhwlezkcrmczeuzdvfauuemle.onion
Nevertheless, people are probably going to try to do this, so here's a
decent algorithm to use.
To search for a public key with some criterion X:
Generate a random (sk,pk) pair.
While pk does not satisfy X:
Add the number 1 to sk
Add the point B to pk
Return sk, pk.
This algorithm is safe [source: djb, personal communication] [TODO:
Make sure I understood correctly!] so long as only the final (sk,pk)
pair is used, and all previous values are discarded.
To parallelize this algorithm, start with an independent (sk,pk) pair
generated for each independent thread, and let each search proceed
independently.
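A sketch of this incremental search, in the same toy group used in the
key-blinding example above (so "add the point B to pk" becomes one
modular multiplication by the generator); the predicate satisfies() is
a made-up stand-in for whatever criterion X you want on the encoded
key:
  import os

  p = 2**255 - 19
  g = 5

  def satisfies(pk):
      # Hypothetical criterion "X"; here we just ask for a particular low byte.
      return pk % 256 == 0

  sk = int.from_bytes(os.urandom(32), 'big') % (p - 1)
  pk = pow(g, sk, p)

  while not satisfies(pk):
      sk = (sk + 1) % (p - 1)   # "Add the number 1 to sk"
      pk = (pk * g) % p         # "Add the point B to pk" (one group operation)

  # Only the final (sk, pk) pair is used; all earlier values are discarded.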
Appendix D. Numeric values reserved in this document
[TODO: collect all the lists of commands and values mentioned above]
The meek pluggable transport is currently running on the bridge I run,
which also happens to be the backend bridge for flash proxy. I'd like to
move it to a fast relay run by an experienced operator. I want to do
this both to diffuse trust, so that I don't run all the infrastructure,
and because my bridge is not especially fast and I'm not especially
adept at performance tuning.
All you will need to do is run the meek-server program, add some lines
to your torrc, and update the software when I ask you to. The more CPU,
memory, and bandwidth you have, the better, though at this point usage
is low enough that you won't even notice it if you are already running a
fast relay. I think it will help if your bridge is located in the U.S.,
because that reduces latency from Google App Engine.
The meek-server plugin is basically just a little web server:
https://gitweb.torproject.org/pluggable-transports/meek.git/tree/HEAD:/meek…
Since meek works differently from a transport like obfs3, it doesn't help us
to have hundreds of medium-fast bridges. We need one (or maybe two or
three) big fat fast relays, because all the traffic that is bounced
through App Engine or Amazon will be pointed at it.
My PGP key is at https://www.bamsoftware.com/david/david.asc if you want
to talk about it.
David Fifield
Hello,
As we know, hidden services can be useful for all kinds of legitimate
things (Pond's usage is particularly interesting), however they do also
sometimes get used by botnets and other problematic things.
Tor provides exit policies to let exit relay operators restrict traffic
they consider to be unwanted or abusive. In this way a kind of
international group consensus emerges about what is and is not acceptable
usage of Tor. For instance, SMTP out is widely restricted.
Has there been any discussion of implementing similar controls for hidden
services, where relays would refuse to act as introduction points for
hidden services that match certain criteria, e.g. ones with a particular
key, or whose key appears in a list downloaded occasionally via Tor
itself? In this way relay operators could avoid their resources being
used for establishing communication with botnet CnC servers.
Obviously such a scheme would require a protocol and client upgrade to
avoid nodes building circuits to relays that then refuse to introduce.
The downside is additional complexity. The upside is potentially recruiting
new relay operators.
If a bridge has
PublishServerDescriptor 0
does that prevent it from counting in metrics? If so, what's the right
setting to enable metrics (0|1|v3|bridge,...)? Is there a way to send
metrics data without also ending up in BridgeDB?
David Fifield
Hello,
this is an attempt to collect tasks that should be done for
SponsorR. You can find the SponsorR page here:
https://trac.torproject.org/projects/tor/wiki/org/sponsors/SponsorR
I'm going to focus only on the subset of those categories that
Roger/David told me are the most important for the sponsor. These are:
- Safe statistics collection
- Tor controller API improvements
- Performance improvements
- Opt-in HS indexing service
I haven't yet split projects into deliverables; this is a middle step
to getting there. Next step is to filter and then ticketify what we
have. After that we need to prioritize and pick the projects that will
become deliverables.
In each category, I have slightly ordered the items (so, more
important items will usually be on the top, but that's not always
true). I have also tried to include all the tickets that are marked as
SponsorR in trac.
So, let's go:
== Safe statistics collection ==
We've discussed this quite a bit over the past year and I think we all
pretty much agree on which stats are safe to collect and which not.
I think we all agree that collecting the number of HS circuits and
traffic volume from RPs (#13192) is harmless [0] and useful
information to have. We need to clean up Roger's patch to add that
information in extra-info descriptors, and then do some
visualisations. That would give us a good idea of how heavily HSes are used.
OTOH, other statistics like "# of HS descriptors" are not that harmless
and the upcoming HS redesign will block us from getting this information
anyway.
For now, I think we should focus on #13192 for this project.
== Tor controller API improvements ==
To better refine this project, we should think about what we want to
get out of it. Here are some outcomes:
a) A better control API allows us to perform better performance
measurements for HSes.
Karsten in #1944 worked on performance measurements of HS circuit
establishment. You can find his very useful results here:
http://ec2-54-92-231-52.compute-1.amazonaws.com/
We should understand exactly how Karsten is gathering those events,
and see whether we can improve the timing accuracy or if we are
missing any events. We also need to figure out how to do useful
measurements of causally-ordered events like the race between the
INTRODUCE_ACK cell and the RENDEZVOUS2 cell. We also need to find a way
to match rendezvous circuits with introduction circuits:
https://trac.torproject.org/projects/tor/ticket/1944#comment:35
All in all, this seems like a project worth doing right because it
will be useful in the future. It can even act as an automated
regression test.
b) This might also be a good time to start working on automated
integration tests for HSes.
It should be possible to spin up private Chutney networks and test
that particular HSes are reachable. Or perform regression tests;
for example, Roger recently suggested writing a regression test to
make sure that clocks don't need to be synchronized to build HS
circuits (#13494).
We should also make testing networks better for HS testing:
- #13401 TestingTorNetwork should crank down RendPostPeriod too?
c) Tor should better expose error messages of failed operations. For
example, this could allow TBB to inform users whether they mistyped
the onion address or the HS is actually down, and it would also let
us do #13208. Proposal 229 and ticket #13212 are related to
this. We should see whether the PT team is planning to implement
proposal 229 and how we can synchronise.
d) There are various projects that are using HSes these days (TorChat,
Pond, GlobaLeaks, Ricochet, etc.). We should think whether we want
to support these use cases and how we can make their life easier.
For example, Fabio has been asking for a way to spin up HSes using
the control port (#5976). What other features do people want from
the control port?
And here are some more tickets marked as SponsorR from this category:
- #8993 Better hidden service support on Tor control interface
- #13206 Write up walkthrough of control port events when accessing a hidden service
- #2554 extend torperf to record hidden service time components
== Performance Improvements ==
This is the most juicy section. How can we make HS performance better?
IIUC, we are mainly interested in client-side performance, but if a
change makes both sides faster that's even better.
Some projects:
a) Looking at Karsten's #1944 results http://ec2-54-92-231-52.compute-1.amazonaws.com/
we see that fetching HS descriptors takes much more time than it should.
I wonder why this is the case. Is there another ntohl bug there?
We should perform measurements and get a good understanding of
what's going on in this step. Here are some tickets that Roger
opened to do exactly that:
- #13208 What's the average number of hsdir fetches before we get the hsdesc?
- #13209 Write a hidden service hsdir health measurer
And here is a ticket with a potential issue:
- #13207 Is rend_cache_clean_v2_descs_as_dir cutoff crazy high?
b) Improving the other parts of the circuit establishment process is
also important:
- #8239 Hidden services should try harder to reuse their old intro points
- #3733 Tor should abandon rendezvous circuits that cause a client request to time out
- #13222 Clients accessing a hidden service can establish their rend point in parallel to fetching the hsdesc
Furthermore, an area of Tor that might give us better performance
but we haven't really explored yet is preemptive circuits. #13239
is about building more internal circuits for HSes.
And here is a ticket suggesting more measurements:
- #13194 Track time between ESTABLISH_RENDEZVOUS and RENDEZVOUS1 cell
c) Another important project in this area is parallelizing HS crypto.
I haven't looked at what this would actually entail, but it will
probably involve implementing the undone parts of proposal 220/224.
d) This might be the time to implement Encrypted Services? Many people
have been asking for this feature and this might be the right time
to do it:
https://gitweb.torproject.org/torspec.git/blob/HEAD:/proposals/ideas/xxx-en…
e) Following the trail of #13207, we should look at all the magic
numbers currently used by HSes, document them and see if they make
sense. This includes the number of IPs (#8950), the number of
HSDirs/replicas, the intro point expiration date, etc.
Also, we should revisit the flags used when doing path selection
for RPs, IPs, etc.
f) On a more researchy tone, this might also be a good point to start
poking at the HS scalability project since it will really affect HS
performance.
We should look at Christopher Baines' ideas and write a Tor
proposal out of them:
https://lists.torproject.org/pipermail/tor-dev/2014-April/006788.html
https://lists.torproject.org/pipermail/tor-dev/2014-May/006812.html
Last time I looked, Christopher's ideas required implementing
proposal 225 and #8239.
g) All the projects above are aiming at improving circuit
establishment performance, but none of them are dealing with
performance improvements after the HS circuit has been established.
On an even more researchy tone, Qingping Hou et al wrote a proposal
to reduce the length of HS circuits to 5 hops (down from 6). You
can find their proposal here:
https://lists.torproject.org/pipermail/tor-dev/2014-February/006198.html
The project is crazy and dangerous and needs lots of analysis, but
it's something worth considering. Maybe this is a good time to do
this analysis?
h) Back to the community again. A few messaging protocols have recently
appeared that rely on HSes to provide link-layer confidentiality and
anonymity [1]. Examples include Pond, Ricochet and TorChat.
Some of these applications are creating one or more HSes per user,
with the assumption that HSes are something easy to make and there
is no problem in having lots of them. People are wondering how well
these applications scale and whether they are using the Tor network
the right way. See John Brooks' mail for a small analysis:
https://moderncrypto.org/mail-archive/messaging/2014/000434.html
It might be worth researching these use cases to see how well Tor
supports them and how they can be supported better (or whether they
are a bad idea entirely).
== Opt-in HS indexing service ==
This seems like a fun project that can be used in various ways in the
future. Of course, the feature must remain opt-in so that only
services that want to be public will surface.
For this project, we could make some sort of 'HS authority' which
collects HS information (the HS descriptor?) from volunteering
HSes. It's unclear who will run an HS authority; maybe we can work
with ahmia so that they integrate it in their infrastructure?
If we are more experimental, we can even build a basic petname system
using the HS authority [2]. Maybe just a "simple" NAME <-> PUBKEY
database where HSes can register themselves in a FIFO fashion. This
might cause tons of domain camping and attempts at dirty Sybil
attacks, but it might develop into something useful. Worst case we can
shut it down and call the experiment done? AFAIK, I2P has been doing
something similar at https://geti2p.net/en/docs/naming
== Security / Miscellaneous ==
I also noticed that some tickets on trac were assigned to SponsorR but
I couldn't fit them in the above categories. They are mainly security
enhancements or code improvements. Here is a dump of the tickets:
Security:
- #13214 HS clients don't validate descriptor-id returned by HSDir
- #7803 Clients shouldn't send timestamps in INTRODUCE1 cells
- #8243 Getting the HSDir flag should require more effort
- #2715 Is rephist-calculated uptime the right metric for HSDir assignment?
Miscellaneous:
- #13223 Refactor rend_client_refetch_v2_renddesc()
- #13287 Investigate mysterious 24-hour lump in hsdir desc fetches
- #8902 Rumors that hidden services have trouble scaling to 100 concurrent connections
== Epilogue ==
What useful projects/tickets did I forget here?
Which tasks from the above we should not do? I just went ahead and
wrote down all the projects I could think of, with the idea that we
will filter stuff later.
Thanks!
Footnotes:
[0]: since RPs are picked at random by the client and not by the HS.
[1]: see https://moderncrypto.org/mail-archive/messaging/2014/000434.html
[2]: or if someone is more crazy, try to integrate GNUnet's GNS:
https://gnunet.org/gns
Hey Nick,
this mail is about the schemes we were discussing during the dev
meeting on how to protect HSes against guard discovery attacks (#9001).
I think we have some ideas on how to offer better protection against
such attacks, mainly by keeping our middle nodes more static than we
do currently.
For example, we could keep our middle nodes for 3-4 days instead of
choosing new ones for every circuit. As Roger has suggested, maybe we
don't even need to write the static middle nodes to the state file,
just use new ones if Tor has restarted.
Keeping middle nodes around for longer will make those attacks much
slower (it restricts them to one attack attempt every 3-4 days), but
are there any serious negative implications?
For example, if you were unlucky and you picked an evil middle node,
and you keep it for 3-4 days, that middle node will always see your
traffic coming through your guard (assuming a single guard per
client). If we assume you use a non-popular guard node (with only a
few clients using it), the middle guard might be able to think "Ah,
the circuit that comes from that guard node is always user X" making
your circuits a bit linkable from the PoV of your middle node.
What other attacks should we be wary about? Maybe partitioning attacks
based on client behavior?
And how should we move this forward if we decide it's worth it? Should
we start writing a Tor proposal?
Thanks!
Hi all,
Not content to let you have all the fun, I decided to run my own Tor network!
Kidding ;) But the Directory Authorities, the crappy experiment
leading up to Black Hat, and the promise that one can recreate the Tor
Network in the event of some catastrophe interests me enough that I
decided to investigate it. I'm aware of Chutney and Shadow, but I
wanted it to feel as authentic as possible, so I forwent those, and
just ran full-featured independent tor daemons. I explicitly wanted
to avoid setting TestingTorNetwork. I did have to edit a few other
parameters, but very few. [0]
I plan on doing a blog post, giving a HOWTO, but I thought I'd write
about my experience so far. I've found a number of interesting issues
that arise in the bootstrapping of a non-TestingTorNetwork, mostly
around reachability testing.
-----
One of the first things I ran into was a problem where I could not get
any routers to upload descriptors. An OR checks itself to determine
reachability before uploading a descriptor by building a circuit -
bypassed with AssumeReachable or TestingTorNetwork. This works fine
for Chutney and Shadow, as they reach into the OR and set
AssumeReachable. But if the Tor Network were to be rebooted... most
nodes out there would _not_ have AssumeReachable, and they would not
be able to perform self-testing with a consensus consisting of just
Directory Authorities. I think nodes left running would be okay, but
nodes restarted would be stuck in a startup loop. I imagine what
would actually happen is Noisebridge and TorServers and a few other
close friends would set the flag, they would get into the consensus,
and then the rest of the network would start coming back... (Or
possibly a few nodes could anticipate this problem ahead of time, and
set it now.)
What I had to do was make one of my Directory Authorities an exit -
this let the other nodes start building circuits through the
authorities and upload descriptors. Maybe an OR should have logic
that if it has a valid consensus with no Exit nodes, it should assume
it's reachable and send a descriptor - and then let the Directory
Authorities perform reachability tests for whether or not to include
it? From the POV of an intentional DoS - an OR doesn't have to obey
the reachability test of course, so no change there. It could
potentially lead to an unintentional DoS where all several thousand
routers start slamming the DirAuths as soon as a usable-but-blank
consensus is found... but AFAIK routers probe for a consensus based on
semi-random timing anyway, so that may mitigate that?
-----
Another problem I ran into was that nodes couldn't conduct
reachability tests when I had exits that were only using the Reduced
Exit Policy - because it doesn't list the ORPort/DirPort! (I was
using nonstandard ports actually, but indeed the reduced exit policy
does not include 9001 or 9030.) Looking at the current consensus,
there are 40 exits that exit to all ports, and 400-something exits
that use the ReducedExitPolicy. It seems like 9001 and 9030 should
probably be added to that for reachability tests?
-----
Continuing in this thread, another problem I hit was that (I believe)
nodes expect the 'Stable' flag when conducting certain reachability
tests. I'm not 100% certain - it may not prevent the relay from
uploading a descriptor, but it seems like if no acceptable exit node
is Stable - some reachability tests will be stuck. I see these sorts
of errors when there is no stable Exit node (the node generating the
errors is in fact a Stable Exit though, so it clearly uploaded its
descriptor and keeps running):
Oct 13 14:49:46.000 [warn] Making tunnel to dirserver failed.
Oct 13 14:49:46.000 [warn] We just marked ourself as down. Are your
external addresses reachable?
Oct 13 14:50:47.000 [notice] No Tor server allows exit to
[scrubbed]:25030. Rejecting.
Since ORPort/DirPort are not in the ReducedExitPolicy, this (may?)
restrict the number of nodes available for conducting a reachability
test. I think the Stable flag is calculated off the average age of
the network though, so the only time when this would cause a big
problem is when the network (DirAuths) have been running for a little
bit and a full exit node hasn't been added - it would have to wait
longer for the full exit node to get the Stable flag.
-----
Getting a BWAuth running was... nontrivial. Some of the things I found:
- SQLAlchemy 0.7.x is no longer supported. 0.9.x does not work, nor
0.8.x. 0.7.10 does.
- Several quasi-bugs with the code/documentation (the earliest three
commits here: https://github.com/tomrittervg/torflow/commits/tomedits)
- The bandwidth scanner actively breaks in certain situations of
divide-by-zero (https://github.com/tomrittervg/torflow/commit/053dfc17c0411dac0f6c4e43954f9…)
- The scanner will be perpetually stuck if you're sitting on the same
/16 and you don't perform the equivalent of EnforceDistinctSubnets 0
[1]
Ultimately, while I successfully produced a bandwidth file [2], I
wasn't convinced it was meaningful. There is a tremendous amount of
code complexity buried beneath the statement 'Scan the nodes and see
how fast they are', and a tremendous amount of informational
complexity behind 'Weight the nodes so users can pick a good stream'.
-----
I tested what it would look like if an imposter DirAuth started trying
to participate in the consensus. It generated the warning you would
expect:
Oct 14 00:04:31.000 [warn] Got a vote from an authority (nickname
authimposter, address W.X.Y.Z) with authority key ID Z. This key ID is
not recognized. Known v3 key IDs are: A, B, C, D
But it also generated a warning you would not expect, and sent me down
a rabbit hole for a while:
Oct 10 21:44:56.000 [debug] directory_handle_command_post(): Received
POST command.
Oct 10 21:44:56.000 [debug] directory_handle_command_post(): rewritten
url as '"/tor/post/consensus-signature"'.
Oct 10 21:44:56.000 [notice] Got a signature from W.X.Y.Z. Adding it
to the pending consensus.
Oct 10 21:44:56.000 [info]
dirvote_add_signatures_to_pending_consensus(): Have 1 signatures for
adding to ns consensus.
Oct 10 21:44:56.000 [info]
dirvote_add_signatures_to_pending_consensus(): Added -1 signatures to
consensus.
Oct 10 21:44:56.000 [info]
dirvote_add_signatures_to_pending_consensus(): Have 1 signatures for
adding to microdesc consensus.
Oct 10 21:44:56.000 [info]
dirvote_add_signatures_to_pending_consensus(): Added -1 signatures to
consensus.
Oct 10 21:44:56.000 [warn] Unable to store signatures posted by
W.X.Y.Z: Mismatched digest.
Over on the imposter:
Oct 14 00:19:32.000 [warn] http status 400 ("Mismatched digest.")
response after uploading signatures to dirserver 'W.X.Y.Z:15030'.
Please correct.
The imposter DirAuth is sending up a signature for a consensus that is
not the same consensus that the rest of the DirAuths computed.
Specifically, the imposter DirAuth lists itself as a dir-source and
the signature covers this line. (Everything else matches because the
imposter has been outvoted and respects that.)
I guess the lesson is, if you see the "Mismatched digest" warning in
conjunction with the unrecognized key ID - it's just one issue, not
two.
-----
The notion and problems of an imposter DirAuth also come up during how
the network behaves when adding a DirAuth. I started with 4
authorities, then started a fifth (auth5). Not interesting - it
behaved as the imposter scenario.
I then added auth5 to a single DirAuth (auth1) as a trusted DirAuth.
This resulted in a consensus with 3 signatures, as auth1 did not sign
the consensus. On auth1 I got warn messages:
A consensus needs 3 good signatures from recognized authorities for us
to accept it. This one has 2 (auth1 auth5). 3 (auth2 auth3 auth4) of
the authorities we know didn't sign it.
I then added auth5 to a second DirAuth (auth2) as a trusted DirAuth.
This results in a consensus for auth1, auth2, and auth5 - but auth3
and auth4 did not sign it or produce a consensus. Because the
consensus was only signed by 2 of the 4 Auths (i.e., not a majority) -
it was rejected by the relays (which did not list auth5). At this
point something interesting and unexpected happened:
The other 2 DirAuths (not knowing about auth5) did not have a
consensus. This tricked dirvote_recalculate_timing into thinking we
should use the TestingV3AuthInitialVotingInterval parameters, so they
got out of sync with the other 3 DirAuths (that did know about auth5).
That if/else statement seems very odd, and the parameters seem odd as
well. First off, I'm not clear what the parameters are intended to
represent. The man page says:
TestingV3AuthInitialVotingInterval N minutes|hours
Like V3AuthVotingInterval, but for initial voting interval before
the first consensus has been created. Changing this requires that
TestingTorNetwork is set. (Default: 30 minutes)
TestingV3AuthInitialVoteDelay N minutes|hours
Like TestingV3AuthInitialVoteDelay, but for initial voting interval
before the first consensus has been created. Changing this requires
that TestingTorNetwork is set. (Default: 5 minutes)
TestingV3AuthInitialDistDelay N minutes|hours
Like TestingV3AuthInitialDistDelay, but for initial voting interval
before the first consensus has been created. Changing this requires
that TestingTorNetwork is set. (Default: 5 minutes)
Notice that the first says "Like V3AuthVotingInterval", but the other
two just repeat their name? And how there _is no_
V3AuthInitialVotingInterval? And that you can't modify these
parameters without turning on TestingTorParameters (despite the fact
that they will be used without TestingTorNetwork?) And also,
unrelated to the naming, these parameters are a fallback case for when
we don't have a consensus, but if they're not kept in sync with
V3AuthVotingInterval and their kin - the DirAuth can wind up
completely out of sync and be unable to recover (except by luck).
It seems like these parameters should be renamed to V3AuthInitialXXX,
keep their existing defaults, remove the requirement on
TestingTorNetwork, and be documented as needing to be divisors of the
V3AuthVotingXXX parameter, to allow a DirAuth who has tripped into
them to be able to recover.
I have a number of other situations I want to test around adding,
subtracting, and manipulating traffic to a DirAuth to see if there are
other strange situations that can arise.
-----
Other notes:
- I was annoyed by TestingAuthDirTimeToLearnReachability several
times (as I refused to turn on TestingTorNetwork) - I wanted to
override it. I thought maybe that should be an option, but ultimately
convinced myself that in the event of a network reboot, the 30 minutes
would likely still be needed.
- The Directory Authority information is a bit out of date.
Specifically, I was most confused by V1 vs V2 vs V3 Directories. I am
not sure if the actual network's DirAuths set V1AuthoritativeDirectory
or V2AuthoritativeDirectory - but I eventually convinced myself that
only V3AuthoritativeDirectory was needed.
- It seems like an Authority will not vote for itself as an HSDir or
Stable... but I couldn't find precisely where that was in the code.
(It makes sense to not vote itself Stable, but I'm not sure why
HSDir...)
- The networkstatus-bridges file is not included in the tor man page
- I feel like the log message "Consensus includes unrecognized
authority" (currently info) is worthy of being upgraded to notice.
- While debugging, I feel this patch would be helpful. [3]
- I've had my eye on Proposal 164 for a bit, so I'm keeping that in mind
- I wanted the https://consensus-health.torproject.org/ page for my
network, but didn't want to run the java code, so I ported it to
python. This project is growing, and right now I've been editing
consensus_health_checker.py as well.
https://github.com/tomrittervg/doctor/commits/python-website I have a
few more TODOs for it (like download statistics), but it's coming
along.
-----
Finally, something I wanted to ask after was the idea of a node (an
OR, not a client) belonging to two or more Tor networks. From the POV
of the node operator, I would see it as a node would add some config
lines (maybe 'AdditionalDirServer' to add to, rather than redefining,
the default DirServers), and it would upload its descriptors to those
as well, fetch a consensus from all AdditionalDirServers, and allow
connections from and to nodes in either. I'm still reading through
the code to see which areas would be particularly confusing in the
context of multiple consensuses, but I thought I'd throw it out there.
-tom
-----
[0]
AuthoritativeDirectory 1
V3AuthoritativeDirectory 1
VersioningAuthoritativeDirectory 1
RecommendedClientVersions [stuff]
RecommendedServerVersions [stuff]
ConsensusParams [stuff]
AuthDirMaxServersPerAddr 0
AuthDirMaxServersPerAuthAddr 0
V3AuthVotingInterval 5 minutes
V3AuthVoteDelay 30 seconds
V3AuthDistDelay 30 seconds
V3AuthNIntervalsValid 3
MinUptimeHidServDirectoryV2 1 hour
-----
[1]
diff --git a/NetworkScanners/BwAuthority/bwauthority_child.py
b/NetworkScanners/BwAuthority/bwauthority_child.py
index 28b89c2..e07718f 100755
--- a/NetworkScanners/BwAuthority/bwauthority_child.py
+++ b/NetworkScanners/BwAuthority/bwauthority_child.py
@@ -60,7 +60,7 @@ __selmgr = PathSupport.SelectionManager(
percent_fast=100,
percent_skip=0,
min_bw=1024,
- use_all_exits=False,
+ use_all_exits=True,
uniform=True,
use_exit=None,
use_guards=False,
-----
[2]
node_id=$C447A9E99C66A96E775A5EF7A8B0DF96C414D0FE bw=37 nick=relay4
measured_at=1413257649 updated_at=1413257649 pid_error=0.0681488657221
pid_error_sum=0 pid_bw=59064 pid_delta=0 circ_fail=0.0
node_id=$70145044B8C20F46F991B7A38D9F27D157B1CB9D bw=37 nick=relay5
measured_at=1413257649 updated_at=1413257649 pid_error=0.0583026021009
pid_error_sum=0 pid_bw=59603 pid_delta=0 circ_fail=0.0
node_id=$7FAC1066DCCC0C62984B8E579C5AABBBAE8146B2 bw=37 nick=exit2
measured_at=1413257649 updated_at=1413257649 pid_error=0.0307144741235
pid_error_sum=0 pid_bw=55938 pid_delta=0 circ_fail=0.0
node_id=$9838F41EB01BA62B7AA67BDA942AC4DC3B2B0F98 bw=37 nick=exit3
measured_at=1413257649 updated_at=1413257649 pid_error=0.0124944051714
pid_error_sum=0 pid_bw=55986 pid_delta=0 circ_fail=0.0
node_id=$49090AC6DB52AD8FFF95AF1EC1E898126A9E5CA6 bw=37 nick=relay3
measured_at=1413257649 updated_at=1413257649 pid_error=0.0030073731241
pid_error_sum=0 pid_bw=56489 pid_delta=0 circ_fail=0.0
node_id=$F5C43BB6AD2256730197533596930A8DD7BEC367 bw=37 nick=exit1
measured_at=1413257649 updated_at=1413257649
pid_error=-0.0032777385693 pid_error_sum=0 pid_bw=55114 pid_delta=0
circ_fail=0.0
node_id=$3D53FF771CC3CB9DE0A55C33E5E8DA4238C96AB5 bw=37 nick=relay2
measured_at=1413257649 updated_at=1413257649
pid_error=-0.0418210520821 pid_error_sum=0 pid_bw=51021 pid_delta=0
circ_fail=0.0
-----
[3]
diff --git a/src/or/networkstatus.c b/src/or/networkstatus.c
index 890da0a..4d72add 100644
--- a/src/or/networkstatus.c
+++ b/src/or/networkstatus.c
@@ -1442,6 +1442,8 @@ networkstatus_note_certs_arrived(void)
waiting_body,
networkstatus_get_flavor_name(i),
NSSET_WAS_WAITING_FOR_CERTS)) {
+ log_info(LD_DIR, "After fetching certificates, we were able to "
+ "accept the consensus.");
tor_free(waiting_body);
}
}
In the past few months of bridge user graphs, there is an apparent
negative correlation between obfs3 users and vanilla users: when one
goes up, the other goes down. If you draw a horizontal line at about
5500, they are almost mirror images of each other. I don't see it with
any other transport pairs. Any idea why it might be?
I can see what could cause a simultaneous decrease in vanilla and
increase in obfs3: Tor gets blocked somewhere and users switch to obfs3.
But I wouldn't expect blocking events to look so smooth or happen so
frequently, and it doesn't explain why the reverse change happens later
(obfs3 being blocked while Tor is unblocked is less plausible). I can
also understand the overall long-term trend of obfs3 increasing and
vanilla decreasing. But I don't see why they should mirror each other so
closely over short time periods.
Some hypotheses:
1. There are lots of users who have a mix of vanilla and obfs3 bridges
configured. Their tor (randomly?) chooses one of them, which usually
works. The number of such users is constant over the short term;
i.e. the sum of obfs3+vanilla is constant, but the proportion of
obfs3 and vanilla fluctuates randomly.
2. Maybe vanilla-down/obfs3-up is caused by blocking events, and
vanilla-up/obfs3-down is caused by natural new-user churn and/or
coincidence.
3. There is something about the way BridgeDB hands out bridges, or the
way in which users use it, that causes it to give out obfs3 bridges
at the expense of vanilla and vice versa.
4. Some kind of feedback loop: obfs3 bridges get used and get
congested, so users switch to vanilla, which then get used and
congested, etc.
David Fifield
== What is bridge reachability data? ==
By bridge reachability data I'm referring to information about which
Tor bridges are censored in different parts of the world.
The OONI project has been developing a test that allows probes in
censored countries to test which bridges are blocked and which are
not. The test simply takes as input a list of bridges and tests
whether they work. It's also able to test obfuscated bridges with
various pluggable transports (PTs).
== Why do we care about this bridge reachability data? ==
A few different parties care about the results of the bridge
reachability test [0]. Some examples:
Tor developers and censorship researchers can study the bridge
reachability data to learn which PTs are currently useful around the
world, by seeing which pluggable transports get blocked and where. We
can also learn which bridge distribution mechanisms are busted and
which are not.
Bridge operators, the press, funders, and curious people can learn
which countries conduct censorship and how advanced the technology
they use is. They can also learn how long it takes jurisdictions to block
public bridges. And in general, they can get a better understanding of
how well Tor is doing in censorship circumvention around the world.
Finally, censored users and world travelers can use the data to learn
which PTs are safe to use in a given jurisdiction.
== Visualizing bridge reachability data ==
So let's look at the data.
Currently, OONI bridge reachability reports look like this:
https://ooni.torproject.org/reports/0.1/CN/bridge_reachability-2014-07-02T0…
and you can retrieve them from this directory listing:
https://ooni.torproject.org/reports/0.1/
That's nice, but I doubt that many people will be able to access (let
alone understand) those reports. Hence, we need some kind of
visualization (and better dir listing) to conveniently display the
data to human beings.
However, a simple x-to-y graph will not suffice: our problem is
multidimensional. There are many use cases for the data, and bridges
have various characteristics (obfuscation method, distribution method,
etc.), so there is more than one useful way to visualize this
dataset.
To give you an idea, I will show you two mockups of visualizations
that I would find useful. Please don't pay attention to the data
itself, I just made some things up while on a train.
Here is one that shows which PTs are blocked in which countries:
https://people.torproject.org/~asn/bridget_vis/countries_pts.jpg The
list would only include countries that are blocking at least a
bridge. Green is "works", red is "blocked". Also, you can imagine the
same visualization, but instead of PT names for columns it has
distribution methods ("BridgeDB HTTP distributor", "BridgeDB mail
distributor", "Private bridge", etc.).
And here is another one that shows how fast jurisdictions block the
default TBB bridges:
https://people.torproject.org/~asn/bridget_vis/tbb_blocked_timeline.jpg
These visualizations could be helpful, but they are not the only ones.
What other use cases do you imagine using this dataset for?
What graphs or visualizations would you like to see?
[0]: Here are some use cases:
Tor developers / Researchers:
*** Which pluggable transports are blocked and where?
*** Do they do DPI? Or did they just block the TBB hardcoded bridges?
*** Which jurisdictions are most aggressive and what blocking technology do they use?
*** Do they block based on IP or on (IP && PORT)?
Users:
*** Which pluggable transport should I use in my jurisdiction?
Bridge operators / Press / Funders / Curious people:
*** Which jurisdictions conduct Tor censorship? (block pluggable transports/distribution methods)
*** How quickly do jurisdictions block bridges?
*** How many users/traffic (and which locations) did the blocked bridges serve?
**** Can be found out through extrainfo descriptors.
*** How well are Tor bridges doing in censorship circumvention?
Sometimes, on many computers and with many hidden services, the Tor
connection is interrupted or very slow, and I can't connect to hidden
services or Internet sites.
When I change the Tor identity, everything works better.
Sometimes I can see a message like this in the log file:
* Hidden service descriptor corrupted.
* Invalid hidden service descriptor.
Are some hidden services under attack?
I tried to create some hidden services to test... After 2 or 3 days the
hidden service becomes unavailable. But if I change the Tor identity of
the hidden service (server), the hidden service becomes available again.
Why???
Once, on 1-2 July, I saw corruption of SSL connections over the Tor
network (all connections).
What's going on? What's wrong?
Test systems:
* Debian 7
* TAILS 1.1
* Parrot OS
* Windows 7
* Debian 7 (Another as server)