Hi, all!
I've been trying to fill in all the cracks and corners for a revamp of
the hidden services protocol, based on earlier writings by George
Kadianakis and other discussions on the mailing list. (See draft
acknowledgments section below.)
After a bunch of comments, I'm ready to give this a number and call it
(draft) proposal 224. I'd like to know what doesn't make sense, what
I need to explain better, and what I need to design better. I'd like
to fill in the gaps and turn this into a more full document. I'd like
to answer the open questions. Comments are most welcome, especially if
they grow into improvements.
FWIW, I am likely to be offline for most of the current weekend,
because of Thanksgiving, so please be patient with my reply speed; I
hope to catch up with emails next week.
Filename: 224-rend-spec-ng.txt
Title: Next-Generation Hidden Services in Tor
Author: Nick Mathewson
Created: 2013-11-29
Status: Draft
-1. Draft notes
This document describes a proposed design and specification for
hidden services in Tor version 0.2.5.x or later. It's a replacement
for the current rend-spec.txt, rewritten for clarity and for improved
design.
Look for the string "TODO" below: it describes gaps or uncertainties
in the design.
Change history:
2013-11-29: Proposal first numbered. Some TODO and XXX items remain.
0. Hidden services: overview and preliminaries.
Hidden services aim to provide responder anonymity for bidirectional
stream-based communication on the Tor network. Unlike regular Tor
connections, where the connection initiator receives anonymity but
the responder does not, hidden services attempt to provide
bidirectional anonymity.
Other features include:
* [TODO: WRITE ME once there have been some more drafts and we know
what the summary should say.]
Participants:
Operator -- A person running a hidden service
Host, "Server" -- The Tor software run by the operator to provide
a hidden service.
User -- A person contacting a hidden service.
Client -- The Tor software running on the User's computer
Hidden Service Directory (HSDir) -- A Tor node that hosts signed
statements from hidden service hosts so that users can make
contact with them.
Introduction Point -- A Tor node that accepts connection requests
for hidden services and anonymously relays those requests to the
hidden service.
Rendezvous Point -- A Tor node to which clients and servers
connect and which relays traffic between them.
0.1. Improvements over previous versions.
[TODO write me once there have been more drafts and we know what the
summary should say.]
0.2. Notation and vocabulary
Unless specified otherwise, all multi-octet integers are big-endian.
We write sequences of bytes in two ways:
1. A sequence of two-digit hexadecimal values in square brackets,
as in [AB AD 1D EA].
2. A string of characters enclosed in quotes, as in "Hello". The
characters in these strings are encoded in their ASCII
representations; strings are NOT NUL-terminated unless
explicitly described as NUL-terminated.
We use the words "byte" and "octet" interchangeably.
We use the vertical bar | to denote concatenation.
We use INT_N(val) to denote the network (big-endian) encoding of the
unsigned integer "val" in N bytes. For example, INT_4(1337) is [00 00
05 39].
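(As a non-normative illustration, INT_N() is just the big-endian byte
representation of the integer; in Python:

  def INT_N(val, n):
      # Network-order (big-endian) encoding of an unsigned integer in n bytes.
      return val.to_bytes(n, byteorder='big')

  assert INT_N(1337, 4) == bytes([0x00, 0x00, 0x05, 0x39])
)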
0.3. Cryptographic building blocks
This specification uses the following cryptographic building blocks:
* A stream cipher STREAM(iv, k) where iv is a nonce of length
S_IV_LEN bytes and k is a key of length S_KEY_LEN bytes.
* A public key signature system SIGN_KEYGEN()->seckey, pubkey;
SIGN_SIGN(seckey,msg)->sig; and SIGN_CHECK(pubkey, sig, msg) ->
{ "OK", "BAD" }; where secret keys are of length SIGN_SECKEY_LEN
bytes, public keys are of length SIGN_PUBKEY_LEN bytes, and
signatures are of length SIGN_SIG_LEN bytes.
This signature system must also support key blinding operations
as discussed in appendix [KEYBLIND] and in section [SUBCRED]:
SIGN_BLIND_SECKEY(seckey, blind)->seckey2 and
SIGN_BLIND_PUBKEY(pubkey, blind)->pubkey2 .
* A public key agreement system "PK", providing
PK_KEYGEN()->seckey, pubkey; PK_VALID(pubkey) -> {"OK", "BAD"};
and PK_HANDSHAKE(seckey, pubkey)->output; where secret keys are
of length PK_SECKEY_LEN bytes, public keys are of length
PK_PUBKEY_LEN bytes, and the handshake produces outputs of
length PK_OUTPUT_LEN bytes.
* A cryptographic hash function H(d), which should be preimage and
collision resistant. It produces hashes of length HASH_LEN
bytes.
* A cryptographic message authentication code MAC(key,msg) that
produces outputs of length MAC_LEN bytes.
* A key derivation function KDF(key data, salt, personalization,
n) that outputs n bytes.
As a first pass, I suggest:
* Instantiate STREAM with AES128-CTR. [TODO: or ChaCha20?]
* Instantiate SIGN with Ed25519 and the blinding protocol in
[KEYBLIND].
* Instantiate PK with Curve25519.
* Instantiate H with SHA256. [TODO: really?]
* Instantiate MAC with HMAC using H.
* Instantiate KDF with HKDF using H.
For legacy purposes, we specify compatibility with older versions of
the Tor introduction point and rendezvous point protocols. These used
RSA1024, DH1024, AES128, and SHA1, as discussed in
rend-spec.txt. Except as noted, all RSA keys MUST have exponent
values of 65537.
As in [proposal 220], all signatures are generated not over strings
themselves, but over those strings prefixed with a distinguishing
value.
0.4. Protocol building blocks [BUILDING-BLOCKS]
In sections below, we need to transmit the locations and identities
of Tor nodes. We do so in the link identification format used by
EXTEND2 cells in the Tor protocol.
NSPEC (Number of link specifiers) [1 byte]
NSPEC times:
LSTYPE (Link specifier type) [1 byte]
LSLEN (Link specifier length) [1 byte]
LSPEC (Link specifier) [LSLEN bytes]
Link specifier types are as described in tor-spec.txt. Every set of
link specifiers MUST include at minimum specifiers of type [00]
(TLS-over-TCP, IPv4) and [02] (legacy node identity).
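As a non-normative illustration of this layout, a parser for a link
specifier block might look like the following Python sketch (field
names follow the diagram above):

  def parse_link_specifiers(data):
      # Returns a list of (lstype, lspec) pairs and the bytes consumed.
      nspec = data[0]
      offset = 1
      specs = []
      for _ in range(nspec):
          lstype = data[offset]
          lslen = data[offset + 1]
          lspec = data[offset + 2 : offset + 2 + lslen]
          if len(lspec) != lslen:
              raise ValueError("truncated link specifier")
          specs.append((lstype, lspec))
          offset += 2 + lslen
      return specs, offset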
We also incorporate Tor's circuit extension handshakes, as used in
the CREATE2 and CREATED2 cells described in tor-spec.txt. In these
handshakes, a client who knows a public key for a server sends a
message and receives a message from that server. Once the exchange is
done, the two parties have a shared set of forward-secure key
material, and the client knows that nobody else shares that key
material unless they control the secret key corresponding to the
server's public key.
0.5. Assigned relay cell types
These relay cell types are reserved for use in the hidden service
protocol.
32 -- RELAY_COMMAND_ESTABLISH_INTRO
Sent from hidden service host to introduction point;
establishes introduction point. Discussed in
[REG_INTRO_POINT].
33 -- RELAY_COMMAND_ESTABLISH_RENDEZVOUS
Sent from client to rendezvous point; creates rendezvous
point. Discussed in [EST_REND_POINT].
34 -- RELAY_COMMAND_INTRODUCE1
Sent from client to introduction point; requests
introduction. Discussed in [SEND_INTRO1]
35 -- RELAY_COMMAND_INTRODUCE2
Sent from introduction point to hidden service host; relays
an introduction request. Same format as INTRODUCE1. Discussed in
[FMT_INTRO1] and [PROCESS_INTRO2]
36 -- RELAY_COMMAND_RENDEZVOUS1
Sent from hidden service host to rendezvous point;
attempts to join the host's circuit to the
client's circuit. Discussed in [JOIN_REND]
37 -- RELAY_COMMAND_RENDEZVOUS2
Sent from rendezvous point to client;
reports the join of the host's circuit to the
client's circuit. Discussed in [JOIN_REND]
38 -- RELAY_COMMAND_INTRO_ESTABLISHED
Sent from introduction point to hidden service host;
reports status of attempt to establish introduction
point. Discussed in [INTRO_ESTABLISHED]
39 -- RELAY_COMMAND_RENDEZVOUS_ESTABLISHED
Sent from rendezvous point to client; acknowledges
receipt of ESTABLISH_RENDEZVOUS cell. Discussed in
[EST_REND_POINT]
40 -- RELAY_COMMAND_INTRODUCE_ACK
Sent from introduction point to client; acknowledges
receipt of INTRODUCE1 cell and reports success/failure.
Discussed in [INTRO_ACK]
0.6. Acknowledgments
[TODO reformat these once the lists are more complete.]
This design includes ideas from many people, including
Christopher Baines,
Daniel J. Bernstein,
Matthew Finkel,
Ian Goldberg,
George Kadianakis,
Aniket Kate,
Tanja Lange,
Robert Ransom,
It's based on Tor's original hidden service design by Roger
Dingledine, Nick Mathewson, and Paul Syverson, and on improvements to
that design over the years by people including
Tobias Kamm,
Thomas Lauterbach,
Karsten Loesing,
Alessandro Preite Martinez,
Robert Ransom,
Ferdinand Rieger,
Christoph Weingarten,
Christian Wilms,
We wouldn't be able to do any of this work without good attack
designs from researchers including
Alex Biryukov,
Lasse Øverlier,
Ivan Pustogarov,
Paul Syverson
Ralf-Philipp Weinmann,
See [ATTACK-REFS] for their papers.
Several of these ideas have come from conversations with
Christian Grothoff,
Brian Warner,
Zooko Wilcox-O'Hearn,
And if this document makes any sense at all, it's thanks to
editing help from
Matthew Finkel
George Kadianakis,
Peter Palfrader,
[XXX Acknowledge the huge bunch of people working on 8106.]
[XXX Acknowledge the huge bunch of people working on 8244.]
Please forgive me if I've missed you; please forgive me if I've
misunderstood your best ideas here too.
1. Protocol overview
In this section, we outline the hidden service protocol. This section
omits some details in the name of simplicity; those are given more
fully below, when we specify the protocol in more detail.
1.1. View from 10,000 feet
A hidden service host prepares to offer a hidden service by choosing
several Tor nodes to serve as its introduction points. It builds
circuits to those nodes, and tells them to forward introduction
requests to it using those circuits.
Once introduction points have been picked, the host builds a set of
documents called "hidden service descriptors" (or just "descriptors"
for short) and uploads them to a set of HSDir nodes. These documents
list the hidden service's current introduction points and describe
how to make contact with the hidden service.
When a client wants to connect to a hidden service, it first chooses
a Tor node at random to be its "rendezvous point" and builds a
circuit to that rendezvous point. If the client does not have an
up-to-date descriptor for the service, it contacts an appropriate
HSDir and requests such a descriptor.
The client then builds an anonymous circuit to one of the hidden
service's introduction points listed in its descriptor, and gives the
introduction point an introduction request to pass to the hidden
service. This introduction request includes the target rendezvous
point and the first part of a cryptographic handshake.
Upon receiving the introduction request, the hidden service host
makes an anonymous circuit to the rendezvous point and completes the
cryptographic handshake. The rendezvous point connects the two
circuits, and the cryptographic handshake gives the two parties a
shared key and proves to the client that it is indeed talking to the
hidden service.
Once the two circuits are joined, the client can send Tor RELAY cells
to the server. RELAY_BEGIN cells open streams to an external process
or processes configured by the server; RELAY_DATA cells are used to
communicate data on those streams, and so forth.
1.2. In more detail: naming hidden services [NAMING]
A hidden service's name is its long term master identity key. This
is encoded as a hostname by encoding the entire key in Base 32, and
adding the string ".onion" at the end.
(This is a change from older versions of the hidden service protocol,
where we used an 80-bit truncated SHA1 hash of a 1024 bit RSA key.)
The names in this format are distinct from earlier names because of
their length. An older name might look like:
unlikelynamefora.onion
yyhws9optuwiwsns.onion
And a new name following this specification might look like:
a1uik0w1gmfq3i5ievxdm9ceu27e88g6o7pe0rffdw9jmntwkdsd.onion
Note that since master keys are 32 bytes long, and 52 bytes of base
32 encoding can hold 260 bits of information, we have four unused
bits in each of these names.
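For illustration only, a name in this format could be computed as
below. This reflects this draft's raw 52-character encoding of the
32-byte key, with no checksum or version field, and assumes the
RFC 4648 base32 alphabet (padding stripped, lowercased):

  import base64

  def onion_name(master_pubkey):
      # master_pubkey: the 32-byte master identity public key.
      assert len(master_pubkey) == 32
      b32 = base64.b32encode(master_pubkey).decode('ascii')
      return b32.lower().rstrip('=') + ".onion"   # 52 characters + ".onion"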
[TODO: Alternatively, we could require that the first bit of the
master key always be zero, and use a 51-byte encoding. Or we could
require that the first two bits be zero, and use a 51-byte encoding
and reserve the first bit. Or we could require that the first nine
bits, or ten bits be zero, etc.]
1.3. In more detail: Access control [IMD:AC]
Access control for a hidden service is imposed at multiple points
through the process above.
In order to download a descriptor, clients must know which blinded
signing key was used to sign it. (See the next section for more info
on key blinding.) This blinded signing key is derived from the
service's public key and, optionally, an additional secret that is
not part of the hidden service's onion address. The public key and
this secret together constitute the service's "credential".
When the secret is in use, the hidden service gains protections
equivalent to the "stealth mode" in previous designs.
To learn the introduction points, the clients must decrypt the body
of the hidden service descriptor. The encryption key for these is
derived from the service's credential.
In order to make an introduction point send a request to the server,
the client must know the introduction point and know the service's
per-introduction-point authentication key from the hidden service
descriptor.
The final level of access control happens at the server itself, which
may decide to respond or not respond to the client's request
depending on the contents of the request. The protocol is extensible
at this point: at a minimum, the server requires that the client
demonstrate knowledge of the contents of the encrypted portion of the
hidden service descriptor. The service may additionally require a
user- or group-specific access token before it responds to requests.
1.4. In more detail: Distributing hidden service descriptors. [IMD:DIST]
Hidden service descriptors are stored at different locations over
time, to prevent a single directory or small set of directories from
becoming a good DoS target for removing a hidden service.
For each period, the Tor directory authorities agree upon a
collaboratively generated random value. (See section 2.3 for a
description of how to incorporate this value into the voting
practice; generating the value is described in other proposals,
including [TODO: add a reference].) That value, combined with hidden
service directories' public identity keys, determines each HSDir's
position in the hash ring for descriptors made in that period.
Each hidden service's descriptors are placed into the ring in
positions based on the key that was used to sign them. Note that
hidden service descriptors are not signed with the services' public
keys directly. Instead, we use a key-blinding system [KEYBLIND] to
create a new key-of-the-day for each hidden service. Any client that
knows the hidden service's credential can derive these blinded
signing keys for a given period. It should be impossible to derive
the blinded signing key lacking that credential.
The body of each descriptor is also encrypted with a key derived from
the credential.
To avoid a "thundering herd" problem where every service generates
and uploads a new descriptor at the start of each period, each
descriptor comes online at a time during the period that depends on
its blinded signing key. The keys for the last period remain valid
until the new keys come online.
1.5. In more detail: Scaling to multiple hosts
[THIS SECTION IS UNFINISHED]
In order to allow multiple hosts to provide a single hidden service,
I'm considering two options.
* We can have each server build an introduction circuit to each
introduction point, and have the introduction points responsible
for round-robining between these circuits. One service host is
responsible for picking the introduction points and publishing
the descriptors.
* We can have servers choose their introduction points
independently, and build circuits to them. One service host is
responsible for combining these introduction points into a
single descriptor.
If we want to avoid having a single "master" host without which the
whole service goes down (the "one service host" in the description
above), we need a way to fail over from one host to another. We also
need a way to coordinate between the hosts. This is as yet
undesigned. Maybe it should use a hidden service?
See [SCALING-REFS] for discussion on this topic.
[TODO: Finalize this design.]
1.6. In more detail: Backward compatibility with older hidden service
protocols
This design is incompatible with the client, server, and HSDir node
protocols from older versions of the hidden service protocol as
described in rend-spec.txt. On the other hand, it is designed to
enable the use of older Tor nodes as rendezvous points and
introduction points.
1.7. In more detail: Offline operation
In this design, a hidden service's secret identity key may be stored
offline. It's used only to generate blinded identity keys, which are
used to sign descriptor signing keys. In order to operate a hidden
service, the operator can generate a number of descriptor signing
keys and their certifications (see [DESC-OUTER] and [ENCRYPTED-DATA]
below), and their corresponding descriptor encryption keys, and
export those to the hidden service hosts.
1.8. In more detail: Encryption Keys And Replay Resistance
To avoid replays of an introduction request by an introduction point,
a hidden service host must never accept the same request
twice. Earlier versions of the hidden service design used an
authenticated timestamp here, but including a view of the current
time can create a problematic fingerprint. (See proposal 222 for more
discussion.)
1.9. In more detail: A menagerie of keys
[In the text below, an "encryption keypair" is roughly "a keypair you
can do Diffie-Hellman with" and a "signing keypair" is roughly "a
keypair you can do ECDSA with."]
Public/private keypairs defined in this document:
Master (hidden service) identity key -- A master signing keypair
used as the identity for a hidden service. This key is not used
on its own to sign anything; it is only used to generate blinded
signing keys as described in [KEYBLIND] and [SUBCRED].
Blinded signing key -- A keypair derived from the identity key,
used to sign descriptor signing keys. Changes periodically for
each service. Clients who know a 'credential' consisting of the
service's public identity key and an optional secret can derive
the public blinded identity key for a service. This key is used
as an index in the DHT-like structure of the directory system.
Descriptor signing key -- A key used to sign hidden service
descriptors. This is signed by blinded signing keys. Unlike
blinded signing keys and master identity keys, the secret part
of this key must be stored online by hidden service hosts.
Introduction point authentication key -- A short-term signing
keypair used to identify a hidden service to a given
introduction point. A fresh keypair is made for each
introduction point; these are used to sign the request that a
hidden service host makes when establishing an introduction
point, so that clients who know the public component of this key
can get their introduction requests sent to the right
service. No keypair is ever used with more than one introduction
point. (previously called a "service key" in rend-spec.txt)
Introduction point encryption key -- A short-term encryption
keypair used when establishing connections via an introduction
point. Plays a role analogous to Tor nodes' onion keys. A fresh
keypair is made for each introduction point.
Symmetric keys defined in this document:
Descriptor encryption keys -- A symmetric encryption key used to
encrypt the body of hidden service descriptors. Derived from the
current period and the hidden service credential.
Public/private keypairs defined elsewhere:
Onion key -- Short-term encryption keypair
(Node) identity key
Symmetric key-like things defined elsewhere:
KH from circuit handshake -- An unpredictable value derived as
part of the Tor circuit extension handshake, used to tie a request
to a particular circuit.
2. Generating and publishing hidden service descriptors [HSDIR]
Hidden service descriptors follow the same metaformat as other Tor
directory objects. They are published anonymously to Tor servers with
the HSDir3 flag.
(Authorities should assign this flag as they currently assign the
HSDir flag, except that they should restrict it to Tor versions
implementing the HSDir parts of this specification.)
2.1. Deriving blinded keys and subcredentials [SUBCRED]
In each time period (see [TIME-PERIOD] for a definition of time
periods), a hidden service host uses a different blinded private key
to sign its directory information, and clients use a different
blinded public key as the index for fetching that information.
For a candidate for a key derivation method, see Appendix [KEYBLIND].
Additionally, clients and hosts derive a subcredential for each
period. Knowledge of the subcredential is needed to decrypt hidden
service descriptors for each period and to authenticate with the
hidden service host in the introduction process. Unlike the
credential, it changes each period. Knowing the subcredential, even
in combination with the blinded private key, does not enable the
hidden service host to derive the main credential--therefore, it is
safe to put the subcredential on the hidden service host while
leaving the hidden service's private key offline.
The subcredential for a period is derived as:
H("subcredential" |
credential |
blinded-public-key).
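(As a non-normative sketch, instantiating H with SHA-256 as suggested
in section 0.3 -- that choice is still marked TODO there -- the
derivation is simply:

  import hashlib

  def subcredential(credential, blinded_public_key):
      # H("subcredential" | credential | blinded-public-key), H = SHA-256.
      h = hashlib.sha256()
      h.update(b"subcredential")
      h.update(credential)
      h.update(blinded_public_key)
      return h.digest()
)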
2.2. Locating, uploading, and downloading hidden service descriptors
[HASHRING]
To avoid attacks where a hidden service's descriptor is easily
targeted for censorship, we store them at different directories over
time, and use shared random values to prevent those directories from
being predictable far in advance.
Which Tor servers host a hidden service's descriptors depends on:
* the current time period,
* the daily subcredential,
* the hidden service directories' public keys,
* a shared random value that changes in each time period,
* a set of network-wide networkstatus consensus parameters.
Below we explain in more detail.
2.2.1. Dividing time into periods [TIME-PERIODS]
To prevent a single set of hidden service directories from becoming a
target by adversaries looking to permanently censor a hidden service,
hidden service descriptors are uploaded to different locations that
change over time.
The length of a "time period" is controlled by the consensus
parameter 'hsdir-interval', and is a number of minutes between 30 and
14400 (10 days). The default time period length is 1500 (one day plus
one hour).
Time periods start with the Unix epoch (Jan 1, 1970), and are
computed by taking the number of whole minutes since the epoch and
dividing by the time period. So if the current time is 2013-11-12
13:44:32 UTC, making the seconds since the epoch 1384263872, the
number of minutes since the epoch is 23071064. If the current time
period length is 1500 (the default), then the current time period
number is 15380. It began 15380*1500*60 seconds after the epoch at
2013-11-11 20:00:00 UTC, and will end at (15380+1)*1500*60 seconds
after the epoch at 2013-11-12 21:00:00 UTC.
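A non-normative Python sketch of this computation, which reproduces
the example above:

  def time_period_info(unix_time, hsdir_interval=1500):
      # hsdir_interval is in minutes; default 1500 (one day plus one hour).
      minutes = unix_time // 60
      period_num = minutes // hsdir_interval
      period_start = period_num * hsdir_interval * 60        # seconds since epoch
      period_end = (period_num + 1) * hsdir_interval * 60
      return period_num, period_start, period_end

  # 2013-11-12 13:44:32 UTC, as in the example above.
  assert time_period_info(1384263872) == (15380, 1384200000, 1384290000)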
2.2.2. Overlapping time periods to avoid thundering herds [TIME-OVERLAP]
If every hidden service host were to generate a new set of keys and
upload a new descriptor at exactly the start of each time period, the
directories would be overwhelmed by every host uploading at the same
time. Instead, each public key becomes valid at its new location at a
deterministic time somewhat _before_ the period begins, depending on
the public key and the period.
The time at which a key might first become valid is determined by the
consensus parameter "hsdir-overlap-begins", which is an integer in
range [1,100] with default value 80. This parameter denotes a
percentage of the interval for which no overlap occurs. So for the
default interval (1500 minutes) and default overlap-begins value
(80%), new keys do not become valid for the first 1200 minutes of the
interval.
The new shared random value must be published *before* the start of
the next overlap interval by at least enough time to ensure that
clients all get it. [TODO: how much earlier?]
The time at which a key from the next interval becomes valid is
determined by taking the first two bytes of
OFFSET = H(Key | INT_8(Next_Period_Num))
as a big-endian integer, dividing by 65536, and treating that as a
fraction of the overlap interval.
For example, if the period is 1500 minutes long, and overlap interval
is 300 minutes long, and OFFSET begins with [90 50], then the next
key becomes valid at 1200 + 300 * (0x9050 / 65536) minutes, or
approximately 22 hours and 49 minutes after the beginning of the
period.
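A non-normative sketch of this computation, assuming H is instantiated
with SHA-256 as suggested in section 0.3 and that the key is given as
its byte encoding:

  import hashlib

  def key_valid_offset_minutes(key, next_period_num,
                               period_len=1500, overlap_begins=80):
      # Minutes into the current period at which 'key' becomes valid
      # for the next period.
      overlap_start = period_len * overlap_begins // 100   # e.g. 1200 minutes
      overlap_len = period_len - overlap_start             # e.g. 300 minutes
      offset = hashlib.sha256(key + next_period_num.to_bytes(8, 'big')).digest()
      frac = int.from_bytes(offset[:2], 'big') / 65536.0
      return overlap_start + overlap_len * frac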
Hidden service directories should accept descriptors at least [TODO:
how much?] minutes before they would become valid, and retain them
for at least [TODO: how much?] minutes after the end of the period.
When a client is looking for a service, it must calculate its key
both for the current and for the subsequent period, to decide whether
the next period's key is valid yet.
2.2.3. Where to publish a service descriptor
The following consensus parameters control where a hidden service
descriptor is stored:
hsdir_n_replicas = an integer in range [1,16]
with default value 2.
hsdir_spread_fetch = an integer in range [1,128]
with default value 3.
hsdir_spread_store = an integer in range [1,128]
with default value 3.
hsdir_spread_accept = an integer in range [1,128]
with default value 8.
To determine where a given hidden service descriptor will be stored
in a given period, after the blinded public key for that period is
derived, the uploading or downloading party calculates
for replicanum in 1...hsdir_n_replicas:
hs_index(replicanum) = H("store-at-idx" |
blinded_public_key | replicanum |
periodnum)
where blinded_public_key is specified in appendix [KEYBLIND],
periodnum is defined in section [TIME-PERIODS], and the number of
replicas is determined by the consensus parameter
"hsdir_n_replicas".
Then, for each node listed in the current consensus with the HSDir3
flag, we compute a directory index for that node as:
hsdir_index(node) = H(node_identity_digest |
shared_random |
INT_8(period_num) )
where shared_random is the shared value generated by the authorities
in section PUB-SHAREDRANDOM.
Finally, for replicanum in 1...hsdir_n_replicas, the hidden service
host uploads descriptors to the first hsdir_spread_store nodes whose
indices immediately follow hs_index(replicanum).
When choosing an HSDir to download from, clients choose randomly from
among the first hsdir_spread_fetch nodes after the indices. (Note
that, in order to make the system better tolerate disappearing
HSDirs, hsdir_spread_fetch may be less than hsdir_spread_store.)
An HSDir should reject a descriptor if that HSDir is not one of the
first hsdir_spread_accept HSDirs following one of that descriptor's
hs_index values.
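The following non-normative sketch shows the intended placement
computation. It assumes H is SHA-256, that replicanum and periodnum
are encoded as 8-byte big-endian integers inside hs_index (this draft
does not yet pin down those encodings), and that indices are compared
as byte strings on a ring:

  import hashlib

  def hs_index(blinded_public_key, replicanum, periodnum):
      return hashlib.sha256(b"store-at-idx" + blinded_public_key +
                            replicanum.to_bytes(8, 'big') +
                            periodnum.to_bytes(8, 'big')).digest()

  def hsdir_index(node_identity_digest, shared_random, period_num):
      return hashlib.sha256(node_identity_digest + shared_random +
                            period_num.to_bytes(8, 'big')).digest()

  def responsible_hsdirs(hsdirs, target_index, spread):
      # hsdirs: list of (hsdir_index, node) pairs for every HSDir3 node.
      # Returns the first 'spread' nodes whose index follows target_index,
      # wrapping around the ring.
      ring = sorted(hsdirs, key=lambda pair: pair[0])
      after = [node for idx, node in ring if idx > target_index]
      before = [node for idx, node in ring if idx <= target_index]
      return (after + before)[:spread]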
[TODO: Incorporate the findings from proposal 143 here. But watch
out: proposal 143 did not analyze how much the set of nodes changes
over time, or how much client and host knowledge might diverge.]
2.2.4. URLs for anonymous uploading and downloading
Hidden service descriptors conforming to this specification are
uploaded with an HTTP POST request to the URL
/tor/rendezvous3/publish relative to the hidden service directory's
root, and downloaded with an HTTP GET request for the URL
/tor/rendezvous3/<z> where z is a base-64 encoding of the hidden
service's blinded public key.
[TODO: raw base64 is not super-nice for URLs, since it can have
slashes. We already use it for microdescriptor URLs, though. Do we
care here?]
These requests must be made anonymously, on circuits not used for
anything else.
2.3. Publishing shared random values [PUB-SHAREDRANDOM]
Our design for limiting the predictability of HSDir upload locations
relies on a shared random value that isn't predictable in advance or
too influenceable by an attacker. The authorities must run a protocol
to generate such a value at least once per hsdir period. Here we
describe how they publish these values; the procedure they use to
generate them can change independently of the rest of this
specification. For one possible (somewhat broken) protocol, see
Appendix [SHAREDRANDOM].
We add a new line in votes and consensus documents:
"hsdir-shared-random" PERIOD-START VALUE
PERIOD-START = YYYY-MM-DD HH:MM:SS
VALUE = A base-64 encoded 256-bit value.
To decide which hsdir-shared-random line to include in a consensus
for a given PERIOD-START, we choose whichever line appears verbatim
in the most votes, so long as it is listed by at least three
authorities. Ties are broken in favor of the lower value. More than
one PERIOD-START is allowed per vote, and per consensus. The same
PERIOD-START must not appear twice in a vote or in a consensus.
[TODO: Need to define a more robust algorithm. Need to cover cases
where multiple cluster of authorities publish a different value,
etc.]
The hsdir-shared-random lines appear, sorted by PERIOD-START, in the
consensus immediately after the "params" line.
The authorities should publish the shared random value for the
current period, and, at a time at least three voting periods before
the overlap interval begins, the shared random value for the next
period.
[TODO: find out what weasel doesn't like here.]
2.4. Hidden service descriptors: outer wrapper [DESC-OUTER]
The format for a hidden service descriptor is as follows, using the
meta-format from dir-spec.txt.
"hs-descriptor" SP "3" SP public-key SP certification NL
[At start, exactly once.]
public-key is the blinded public key for the service, encoded in
base 64. Certification is a certification of a short-term ed25519
descriptor signing key using the public key, in the format of
proposal 220.
"time-period" SP YYYY-MM-DD HH:MM:SS NUM NL
[Exactly once.]
The time period for which this descriptor is relevant, including
its starting time and its period number.
"revision-counter" SP Integer NL
[Exactly once.]
The revision number of the descriptor. If an HSDir receives a
second descriptor for a key that it already has a descriptor for,
it should retain and serve the descriptor with the higher
revision-counter.
(Checking for monotonically increasing revision-counter values
prevents an attacker from replacing a newer descriptor signed by
a given key with a copy of an older version.)
"encrypted" NL encrypted-string
[Exactly once.]
An encrypted blob, whose format is discussed in [ENCRYPTED-DATA]
below. The blob is base-64 encoded and enclosed in -----BEGIN
MESSAGE----- and -----END MESSAGE----- wrappers.
"signature" SP signature NL
[exactly once, at end.]
A signature of all previous fields, using the signing key in the
hs-descriptor line. We use a separate key for signing, so that
the hidden service host does not need to have its private blinded
key online.
2.5. Hidden service descriptors: encryption format [ENCRYPTED-DATA]
The encrypted part of the hidden service descriptor is encrypted and
authenticated with symmetric keys generated as follows:
salt = 16 random bytes
secret_input = nonce | blinded_public_key | subcredential |
INT_4(revision_counter)
keys = KDF(secret_input, salt, "hsdir-encrypted-data",
S_KEY_LEN + S_IV_LEN + MAC_KEY_LEN)
SECRET_KEY = first S_KEY_LEN bytes of keys
SECRET_IV = next S_IV_LEN bytes of keys
MAC_KEY = last MAC_KEY_LEN bytes of keys
The encrypted data has the format:
SALT (random bytes from above) [16 bytes]
ENCRYPTED The plaintext encrypted as below [variable]
MAC MAC of both above fields [32 bytes]
The encryption format is ENCRYPTED =
STREAM(SECRET_IV,SECRET_KEY) xor Plaintext
Before encryption, the plaintext must be padded to a multiple of ???
bytes with NUL bytes. The plaintext must not be longer than ???
bytes. [TODO: how much? Should this be a parameter? What values in
practice is needed to hide how many intro points we have, and how
many might be legacy ones?]
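As a non-normative sketch of the construction above, using the
first-pass instantiations from section 0.3 (AES128-CTR, HKDF-SHA256,
HMAC-SHA256) via the Python 'cryptography' package. The 'nonce' input
and the pad length are not yet fixed by this draft, so they appear
below as an explicit parameter and a placeholder constant:

  import hashlib, hmac
  from cryptography.hazmat.primitives import hashes
  from cryptography.hazmat.primitives.kdf.hkdf import HKDF
  from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

  S_KEY_LEN, S_IV_LEN, MAC_KEY_LEN = 16, 16, 32   # AES128-CTR, HMAC-SHA256
  PAD_LEN = 128                                   # placeholder; see TODO above

  def encrypt_descriptor_body(plaintext, salt, nonce, blinded_public_key,
                              subcredential, revision_counter):
      # salt: 16 random bytes; nonce: not yet pinned down by this draft.
      secret_input = (nonce + blinded_public_key + subcredential +
                      revision_counter.to_bytes(4, 'big'))
      keys = HKDF(algorithm=hashes.SHA256(),
                  length=S_KEY_LEN + S_IV_LEN + MAC_KEY_LEN,
                  salt=salt, info=b"hsdir-encrypted-data").derive(secret_input)
      secret_key = keys[:S_KEY_LEN]
      secret_iv = keys[S_KEY_LEN:S_KEY_LEN + S_IV_LEN]
      mac_key = keys[S_KEY_LEN + S_IV_LEN:]
      # Pad the plaintext with NUL bytes to a multiple of PAD_LEN.
      plaintext += b"\x00" * (-len(plaintext) % PAD_LEN)
      enc = Cipher(algorithms.AES(secret_key), modes.CTR(secret_iv)).encryptor()
      encrypted = enc.update(plaintext) + enc.finalize()
      mac = hmac.new(mac_key, salt + encrypted, hashlib.sha256).digest()
      return salt + encrypted + mac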
The plaintext format is:
"create2-formats" SP formats NL
[Exactly once]
A space-separated list of integers denoting CREATE2 cell format
numbers that the server recognizes. Must include at least TAP and
ntor as described in tor-spec.txt. See tor-spec section 5.1 for a
list of recognized handshake types.
"authentication-required" SP types NL
[At most once]
A space-separated list of authentication types. A client that does
not support at least one of these authentication types will not be
able to contact the host. Recognized types are: 'password' and
'ed25519'. See [INTRO-AUTH] below.
At least once:
"introduction-point" SP link-specifiers NL
[Exactly once per introduction point at start of introduction
point section]
The link-specifiers is a base64 encoding of a link specifier
block in the format described in BUILDING-BLOCKS.
"auth-key" SP "ed25519" SP key SP certification NL
[Exactly once per introduction point]
Base-64 encoded introduction point authentication key that was
used to establish the introduction point circuit, cross-certifying
the blinded public key using the certification format of
proposal 220.
"enc-key" SP "ntor" SP key NL
[At most once per introduction point]
Base64-encoded curve25519 key used to encrypt request to
hidden service.
[TODO: I'd like to have a cross-certification here too.]
"enc-key" SP "legacy" NL key NL
[At most once per introduction point]
Base64-encoded RSA key, wrapped in "-----BEGIN RSA PUBLIC
KEY-----" armor, for use with a legacy introduction point as
described in [LEGACY_EST_INTRO] and [LEGACY-INTRODUCE1] below.
Exactly one of the "enc-key ntor" and "enc-key legacy"
elements must be present for each introduction point.
[TODO: I'd like to have a cross-certification here too.]
Other encryption and authentication key formats are allowed; clients
should ignore ones they do not recognize.
3. The introduction protocol
The introduction protocol proceeds in three steps.
First, a hidden service host builds an anonymous circuit to a Tor
node and registers that circuit as an introduction point.
[Between these steps, the hidden service publishes its
introduction points and associated keys, and the client fetches
them as described in section [HSDIR] above.]
Second, a client builds an anonymous circuit to the introduction
point, and sends an introduction request.
Third, the introduction point relays the introduction request along
the introduction circuit to the hidden service host, and acknowledges
the introduction request to the client.
3.1. Registering an introduction point [REG_INTRO_POINT]
3.1.1. Extensible ESTABLISH_INTRO protocol. [EST_INTRO]
When a hidden service is establishing a new introduction point, it
sends an ESTABLISH_INTRO cell with the following contents:
AUTH_KEY_TYPE [1 byte]
AUTH_KEY_LEN [1 byte]
AUTH_KEY [AUTH_KEY_LEN bytes]
Any number of times:
EXT_FIELD_TYPE [1 byte]
EXT_FIELD_LEN [1 byte]
EXT_FIELD [EXT_FIELD_LEN bytes]
ZERO [1 byte]
HANDSHAKE_AUTH [MAC_LEN bytes]
SIGLEN [1 byte]
SIG [SIGLEN bytes]
The AUTH_KEY_TYPE field indicates the type of the introduction point
authentication key and the type of the MAC to use for
HANDSHAKE_AUTH. Recognized types are:
[00, 01] -- Reserved for legacy introduction cells; see
[LEGACY_EST_INTRO] below.
[02] -- Ed25519; HMAC-SHA256.
[FF] -- Reserved for maintenance messages on existing
circuits; see [MAINT_INTRO] below.
[TODO: Should this just be a new relay cell type?
Matthew and George think so.]
The AUTH_KEY_LEN field determines the length of the AUTH_KEY
field. The AUTH_KEY field contains the public introduction point
authentication key.
The EXT_FIELD_TYPE, EXT_FIELD_LEN, EXT_FIELD entries are reserved for
future extensions to the introduction protocol. Extensions with
unrecognized EXT_FIELD_TYPE values must be ignored.
The ZERO field contains the byte zero; it marks the end of the
extension fields.
The HANDSHAKE_AUTH field contains the MAC of all earlier fields in
the cell using as its key the shared per-circuit material ("KH")
generated during the circuit extension protocol; see tor-spec.txt
section 5.2, "Setting circuit keys". It prevents replays of
ESTABLISH_INTRO cells.
SIGLEN is the length of the signature.
SIG is a signature, using AUTH_KEY, of all contents of the cell, up
to but not including SIG. These contents are prefixed with the string
"Tor establish-intro cell v1".
Upon receiving an ESTABLISH_INTRO cell, a Tor node first decodes the
key and the signature, and checks the signature. The node must reject
the ESTABLISH_INTRO cell and destroy the circuit in these cases:
* If the key type is unrecognized
* If the key is ill-formatted
* If the signature is incorrect
* If the HANDSHAKE_AUTH value is incorrect
* If the circuit is already a rendezvous circuit.
* If the circuit is already an introduction circuit.
[TODO: some scalability designs fail there.]
* If the key is already in use by another circuit.
Otherwise, the node must associate the key with the circuit, for use
later in INTRODUCE1 cells.
[TODO: The above will work fine with what we do today, but it will do
quite badly if we ever freak out and want to go back to RSA2048 or
bigger. Do we care?]
3.1.2. Registering an introduction point on a legacy Tor node [LEGACY_EST_INTRO]
Tor nodes should also support an older version of the ESTABLISH_INTRO
cell, first documented in rend-spec.txt. New hidden service hosts
must use this format when establishing introduction points at older
Tor nodes that do not support the format above in [EST_INTRO].
In this older protocol, an ESTABLISH_INTRO cell contains:
KEY_LENGTH [2 bytes]
KEY [KEY_LENGTH bytes]
HANDSHAKE_AUTH [20 bytes]
SIG [variable, up to end of relay payload]
The KEY_LENGTH variable determines the length of the KEY field.
The KEY field is an ASN.1-encoded RSA public key.
The HANDSHAKE_AUTH field contains the SHA1 digest of (KH |
"INTRODUCE").
The SIG field contains an RSA signature, using PKCS1 padding, of all
earlier fields.
Note that since the relay payload itself may be no more than 498
bytes long, the KEY_LENGTH field can never have a first byte other
than [00] or [01]. These values are used to distinguish legacy
ESTABLISH_INTRO cells from newer ones.
Older versions of Tor always use a 1024-bit RSA key for these
introduction authentication keys.
Newer hidden services MAY use RSA keys up to 1904 bits. Any more than
that will not fit in a RELAY cell payload.
3.1.3. Managing introduction circuits [MAINT_INTRO]
If the first byte of an ESTABLISH_INTRO cell is [FF], the cell's body
contains an administrative command for the circuit. The format of
such a command is:
Any number of times:
SUBCOMMAND_TYPE [2 bytes]
SUBCOMMAND_LEN [2 bytes]
SUBCOMMAND [SUBCOMMAND_LEN bytes]
Recognized SUBCOMMAND_TYPE values are:
[00 01] -- update encryption keys
[TODO: Matthew says, "This can be used to fork an intro point to
balance traffic over multiple hidden service servers while
maintaining the criteria for a valid ESTABLISH_INTRO
cell. -MF". Investigate.]
Unrecognized SUBCOMMAND_TYPE values should be ignored.
3.1.3.1. Updating encryption keys (subcommand 0001) [UPDATE-KEYS-SUBCMD]
Hidden service hosts send this subcommand to set their initial
encryption keys or update the configured public encryption keys
associated with this circuit. This message must be sent after
establishing an introduction point, before the circuit can be
advertised. These keys are given in the form:
NUMKEYS [1 byte]
NUMKEYS times:
KEYTYPE [1 byte]
KEYLEN [1 byte]
KEY [KEYLEN bytes]
COUNTER [4 bytes]
SIGLEN [1 byte]
SIGNATURE [SIGLEN bytes.]
The KEYTYPE value [01] is for Curve25519 keys.
The COUNTER field is a value that increases monotonically for a given
introduction point authentication key.
The SIGNATURE must be generated with the introduction point
authentication key, and must cover the entire subcommand body,
prefixed with the string "Tor hidden service introduction encryption
keys v1".
[TODO: Nothing is done here to prove ownership of the encryption
keys. Does that matter?]
[TODO: The point here is to allow encryption keys to change while
maintaining an introduction point and not forcing a client to
download a new descriptor. I'm not sure if that's worth it. It makes
clients who have seen a key before distinguishable from ones who have
not.]
[Matthew says: "Repeat-client over long periods of time will always
be distinguishable. It may be better to simply expire intro points
than try to preserve forward-secrecy, though". Must find out what he
meant.]
Setting the encryption keys for a given circuit replaces the previous
keys for that circuit. Clients who attempt to connect using the old
key receive an INTRO_ACK cell with error code [00 02] as described in
section [INTRO_ACK] below.
3.1.4. Acknowledging establishment of introduction point [INTRO_ESTABLISHED]
After setting up an introduction circuit, the introduction point
reports its status back to the hidden service host with an empty
INTRO_ESTABLISHED cell.
[TODO: make this cell type extensible. It should be able to include
data if that turns out to be needed.]
3.2. Sending an INTRODUCE1 cell to the introduction point. [SEND_INTRO1]
In order to participate in the introduction protocol, a client must
know the following:
* An introduction point for a service.
* The introduction authentication key for that introduction point.
* The introduction encryption key for that introduction point.
The client sends an INTRODUCE1 cell to the introduction point,
containing an identifier for the service, an identifier for the
encryption key that the client intends to use, and an opaque blob to
be relayed to the hidden service host.
In reply, the introduction point sends an INTRODUCE_ACK cell back to
the client, either informing it that its request has been delivered,
or that its request will not succeed.
3.2.1. INTRODUCE1 cell format [FMT_INTRO1]
An INTRODUCE1 cell has the following contents:
AUTH_KEYID [32 bytes]
ENC_KEYID [8 bytes]
Any number of times:
EXT_FIELD_TYPE [1 byte]
EXT_FIELD_LEN [1 byte]
EXT_FIELD [EXT_FIELD_LEN bytes]
ZERO [1 byte]
ENCRYPTED [Up to end of relay payload]
[TODO: Should we have a field to determine the type of ENCRYPTED, or
should we instead assume that there is exactly one encryption key per
encryption method? The latter is probably safer.]
Upon receiving an INTRODUCE1 cell, the introduction point checks
whether AUTH_KEYID and ENC_KEYID match a configured introduction
point authentication key and introduction point encryption key. If
they do, the cell is relayed; if not, it is not.
The AUTH_KEYID for an Ed25519 public key is the public key itself.
The ENC_KEYID for a Curve25519 public key is the first 8 bytes of the
public key. (This key ID is safe to truncate, since all the keys are
generated by the hidden service host, and the ID is only valid
relative to a single AUTH_KEYID.) The ENCRYPTED field is as
described in 3.3 below.
To relay an INTRODUCE1 cell, the introduction point sends an
INTRODUCE2 cell with exactly the same contents.
3.2.2. INTRODUCE_ACK cell format. [INTRO_ACK]
An INTRODUCE_ACK cell has the following fields:
STATUS [2 bytes]
Any number of times:
EXT_FIELD_TYPE [1 byte]
EXT_FIELD_LEN [1 byte]
EXT_FIELD [EXT_FIELD_LEN bytes]
Recognized status values are:
[00 00] -- Success: cell relayed to hidden service host.
[00 01] -- Failure: service ID not recognized
[00 02] -- Failure: key ID not recognized
[00 03] -- Bad message format
Recognized extension field types:
[00 01] -- signed set of encryption keys
The extension field type 0001 is a signed set of encryption keys; its
body matches the body of the key update command in
[UPDATE-KEYS-SUBCMD]. Whenever sending status [00 02], the introduction
point MUST send this extension field.
3.2.3. Legacy formats [LEGACY-INTRODUCE1]
When the ESTABLISH_INTRO cell format of [LEGACY_EST_INTRO] is used,
INTRODUCE1 cells are of the form:
AUTH_KEYID_HASH [20 bytes]
ENC_KEYID [8 bytes]
Any number of times:
EXT_FIELD_TYPE [1 byte]
EXT_FIELD_LEN [1 byte]
EXT_FIELD [EXT_FIELD_LEN bytes]
ZERO [1 byte]
ENCRYPTED [Up to end of relay payload]
Here, AUTH_KEYID_HASH is the hash of the introduction point
authentication key used to establish the introduction.
Because of limitations in older versions of Tor, the relay payload
size for these INTRODUCE1 cells must always be at least 246 bytes, or
they will be rejected as invalid.
3.3. Processing an INTRODUCE2 cell at the hidden service. [PROCESS_INTRO2]
Upon receiving an INTRODUCE2 cell, the hidden service host checks
whether the AUTH_KEYID/AUTH_KEYID_HASH field and the ENC_KEYID fields
are as expected, and match the configured authentication and
encryption key(s) on that circuit.
The service host then checks whether it has received a cell with
these contents before. If it has, it silently drops it as a
replay. (It must maintain a replay cache for as long as it accepts
cells with the same encryption key.)
If the cell is not a replay, it decrypts the ENCRYPTED field,
establishes a shared key with the client, and authenticates the whole
contents of the cell as having been unmodified since they left the
client. There may be multiple ways of decrypting the ENCRYPTED field,
depending on the chosen type of the encryption key. Requirements for
an introduction handshake protocol are described in
[INTRO-HANDSHAKE-REQS]. We specify one below in section
[NTOR-WITH-EXTRA-DATA].
The decrypted plaintext must have the form:
REND_TOKEN [20 bytes]
Any number of times:
EXT_FIELD_TYPE [1 byte]
EXT_FIELD_LEN [1 byte]
EXT_FIELD [EXT_FIELD_LEN bytes]
ZERO [1 byte]
ONION_KEY_TYPE [2 bytes]
ONION_KEY [depends on ONION_KEY_TYPE]
NSPEC (Number of link specifiers) [1 byte]
NSPEC times:
LSTYPE (Link specifier type) [1 byte]
LSLEN (Link specifier length) [1 byte]
LSPEC (Link specifier) [LSLEN bytes]
PAD (optional padding) [up to end of plaintext]
Upon processing this plaintext, the hidden service makes sure that
any required authentication is present in the extension fields, and
then extends a rendezvous circuit to the node described in the LSPEC
fields, using the ONION_KEY to complete the extension. As mentioned
in [BUILDING-BLOCKS], the "TLS-over-TCP, IPv4" and "Legacy node
identity" specifiers must be present.
The hidden service SHOULD NOT reject any LSTYPE fields which it
doesn't recognize; instead, it should use them verbatim in its EXTEND
request to the rendezvous point.
The ONION_KEY_TYPE field is one of:
[01] TAP-RSA-1024: ONION_KEY is 128 bytes long.
[02] NTOR: ONION_KEY is 32 bytes long.
The ONION_KEY field describes the onion key that must be used when
extending to the rendezvous point. It must be of a type listed as
supported in the hidden service descriptor.
Upon receiving a well-formed INTRODUCE2 cell, the hidden service host
will have:
* The information needed to connect to the client's chosen
rendezvous point.
* The second half of a handshake to authenticate and establish a
shared key with the hidden service client.
* A set of shared keys to use for end-to-end encryption.
3.3.1. Introduction handshake encryption requirements [INTRO-HANDSHAKE-REQS]
When decoding the encrypted information in an INTRODUCE2 cell, a
hidden service host must be able to:
* Decrypt additional information included in the INTRODUCE2 cell,
to include the rendezvous token and the information needed to
extend to the rendezvous point.
* Establish a set of shared keys for use with the client.
* Authenticate that the cell has not been modified since the client
generated it.
Note that the old TAP-derived protocol of the previous hidden service
design achieved the first two requirements, but not the third.
3.3.2. Example encryption handshake: ntor with extra data [NTOR-WITH-EXTRA-DATA]
This is a variant of the ntor handshake (see tor-spec.txt, section
5.1.4; see proposal 216; and see "Anonymity and one-way
authentication in key-exchange protocols" by Goldberg, Stebila, and
Ustaoglu).
It behaves the same as the ntor handshake, except that, in addition
to negotiating forward secure keys, it also provides a means for
encrypting non-forward-secure data to the server (in this case, to
the hidden service host) as part of the handshake.
Notation here is as in section 5.1.4 of tor-spec.txt, which defines
the ntor handshake.
The PROTOID for this variant is
"hidden-service-ntor-curve25519-sha256-1". Define the tweak value
t_hsenc, and the tag value m_hsexpand as:
t_hsenc = PROTOID | ":hs_key_extract"
m_hsexpand = PROTOID | ":hs_key_expand"
To make an INTRODUCE cell, the client must know a public encryption
key B for the hidden service on this introduction circuit. The client
generates a single-use keypair:
x,X = KEYGEN()
and computes:
secret_hs_input = EXP(B,x) | AUTH_KEYID | X | B | PROTOID
info = m_hsexpand | subcredential
hs_keys = HKDF(secret_hs_input, t_hsenc, info,
S_KEY_LEN+MAC_KEY_LEN)
ENC_KEY = hs_keys[0:S_KEY_LEN]
MAC_KEY = hs_keys[S_KEY_LEN:S_KEY_LEN+MAC_KEY_LEN]
and sends, as the ENCRYPTED part of the INTRODUCE1 cell:
CLIENT_PK [G_LENGTH bytes]
ENCRYPTED_DATA [Padded to length of plaintext]
MAC [MAC_LEN bytes]
Substituting those fields into the INTRODUCE1 cell body format
described in [FMT_INTRO1] above, we have
AUTH_KEYID [32 bytes]
ENC_KEYID [8 bytes]
Any number of times:
EXT_FIELD_TYPE [1 byte]
EXT_FIELD_LEN [1 byte]
EXT_FIELD [EXT_FIELD_LEN bytes]
ZERO [1 byte]
ENCRYPTED:
CLIENT_PK [G_LENGTH bytes]
ENCRYPTED_DATA [Padded to length of plaintext]
MAC [MAC_LEN bytes]
(This format is as documented in [FMT_INTRO1] above, except that here
we describe how to build the ENCRYPTED portion. If the introduction
point is running an older Tor that does not support this protocol,
the first field is replaced by a 20-byte AUTH_KEYID_HASH field as
described in [LEGACY-INTRODUCE1].)
Here, the encryption key plays the role of B in the regular ntor
handshake, and the AUTH_KEYID field plays the role of the node ID.
The CLIENT_PK field is the public key X. The ENCRYPTED_DATA field is
the message plaintext, encrypted with the symmetric key ENC_KEY. The
MAC field is a MAC of all of the cell from the AUTH_KEYID through the
end of ENCRYPTED_DATA, using the MAC_KEY value as its key.
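A non-normative sketch of the client-side key derivation above, using
x25519 from the Python 'cryptography' package. Treating t_hsenc as the
HKDF salt and secret_hs_input as the input key material is an
interpretation; S_KEY_LEN and MAC_KEY_LEN follow the first-pass
instantiations of section 0.3:

  from cryptography.hazmat.primitives import hashes, serialization
  from cryptography.hazmat.primitives.kdf.hkdf import HKDF
  from cryptography.hazmat.primitives.asymmetric.x25519 import (
      X25519PrivateKey, X25519PublicKey)

  PROTOID = b"hidden-service-ntor-curve25519-sha256-1"
  t_hsenc = PROTOID + b":hs_key_extract"
  m_hsexpand = PROTOID + b":hs_key_expand"
  S_KEY_LEN, MAC_KEY_LEN = 16, 32

  def client_intro_keys(B_bytes, auth_keyid, subcredential):
      # B_bytes: the service's curve25519 introduction point encryption key.
      x = X25519PrivateKey.generate()              # x, X = KEYGEN()
      X_bytes = x.public_key().public_bytes(
          serialization.Encoding.Raw, serialization.PublicFormat.Raw)
      exp_B_x = x.exchange(X25519PublicKey.from_public_bytes(B_bytes))
      secret_hs_input = exp_B_x + auth_keyid + X_bytes + B_bytes + PROTOID
      info = m_hsexpand + subcredential
      hs_keys = HKDF(algorithm=hashes.SHA256(),
                     length=S_KEY_LEN + MAC_KEY_LEN,
                     salt=t_hsenc, info=info).derive(secret_hs_input)
      enc_key = hs_keys[:S_KEY_LEN]
      mac_key = hs_keys[S_KEY_LEN:]
      return X_bytes, enc_key, mac_key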
To process this format, the hidden service checks PK_VALID(CLIENT_PK)
as necessary, and then computes ENC_KEY and MAC_KEY as the client did
above, except using EXP(CLIENT_PK,b) in the calculation of
secret_hs_input. The service host then checks whether the MAC is
correct. If it is invalid, it drops the cell. Otherwise, it computes
the plaintext by decrypting ENCRYPTED_DATA.
The hidden service host now completes the service side of the
extended ntor handshake, as described in tor-spec.txt section 5.1.4,
with the modified PROTOID as given above. To be explicit, the hidden
service host generates a keypair of y,Y = KEYGEN(), and uses its
introduction point encryption key 'b' to compute:
xb = EXP(X,b)
secret_hs_input = xb | AUTH_KEYID | X | B | PROTOID
info = m_hsexpand | subcredential
hs_keys = HKDF(secret_hs_input, t_hsenc, info,
S_KEY_LEN+MAC_KEY_LEN)
HS_DEC_KEY = hs_keys[0:S_KEY_LEN]
HS_MAC_KEY = hs_keys[S_KEY_LEN:S_KEY_LEN+MAC_KEY_LEN]
(The above are used to check the MAC and then decrypt the
encrypted data.)
ntor_secret_input = EXP(X,y) | xb | ID | B | X | Y | PROTOID
NTOR_KEY_SEED = H(ntor_secret_input, t_key)
verify = H(ntor_secret_input, t_verify)
auth_input = verify | ID | B | Y | X | PROTOID | "Server"
(The above are used to finish the ntor handshake.)
The server's handshake reply is:
SERVER_PK Y [G_LENGTH bytes]
AUTH H(auth_input, t_mac) [H_LENGTH bytes]
These fields can be sent to the client in a RENDEZVOUS1 cell.
(See [JOIN_REND] below.)
The hidden service host now also knows the keys generated by the
handshake, which it will use to encrypt and authenticate data
end-to-end between the client and the server. These keys are as
computed in tor-spec.txt section 5.1.4.
3.4. Authentication during the introduction phase. [INTRO-AUTH]
Hidden services may restrict access only to authorized users. One
mechanism to do so is the credential mechanism, where only users who
know the credential for a hidden service may connect at all. For more
fine-grained control, a hidden service can be configured with
password-based or public-key-based authentication.
3.4.1. Password-based authentication.
To authenticate with a password, the user must include an extension
field in the encrypted part of the INTRODUCE cell with an
EXT_FIELD_TYPE type of [01] and the contents:
Username [00] Password.
The username may not include any [00] bytes. The password may.
On the server side, the password MUST be stored hashed and salted,
ideally with scrypt or something better.
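A non-normative sketch of parsing this field and of storing and
checking the password with scrypt on the server side; the scrypt
parameters shown are purely illustrative:

  import hashlib, hmac, os

  def parse_password_field(body):
      # Body is: Username [00] Password (the username contains no NUL bytes).
      username, _, password = body.partition(b"\x00")
      return username, password

  def hash_password(password, salt=None):
      salt = salt or os.urandom(16)
      digest = hashlib.scrypt(password, salt=salt, n=2**14, r=8, p=1)
      return salt, digest

  def check_password(password, salt, stored_digest):
      candidate = hashlib.scrypt(password, salt=salt, n=2**14, r=8, p=1)
      return hmac.compare_digest(candidate, stored_digest)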
3.4.2. Ed25519-based authentication.
To authenticate with an Ed25519 private key, the user must include an
extension field in the encrypted part of the INTRODUCE cell with an
EXT_FIELD_TYPE type of [02] and the contents:
Nonce [16 bytes]
Pubkey [32 bytes]
Signature [64 bytes]
Nonce is a random value. Pubkey is the public key that will be used
to authenticate. [TODO: should this be an identifier for the public
key instead?] Signature is the signature, using Ed25519, of:
"Hidserv-userauth-ed25519"
Nonce (same as above)
Pubkey (same as above)
AUTH_KEYID (As in the INTRODUCE1 cell)
ENC_KEYID (As in the INTRODUCE1 cell)
The hidden service host checks this by seeing whether it recognizes
and would accept a signature from the provided public key. If it
would, then it checks whether the signature is correct. If it is,
then the correct user has authenticated.
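A non-normative sketch of building and checking this extension field
with the Python 'cryptography' package; the is_authorized callback
stands in for however the host decides whether it would accept a
signature from a given public key:

  import os
  from cryptography.hazmat.primitives import serialization
  from cryptography.hazmat.primitives.asymmetric import ed25519

  def make_userauth_field(user_seckey, auth_keyid, enc_keyid):
      nonce = os.urandom(16)
      pubkey = user_seckey.public_key().public_bytes(
          serialization.Encoding.Raw, serialization.PublicFormat.Raw)
      to_sign = (b"Hidserv-userauth-ed25519" + nonce + pubkey +
                 auth_keyid + enc_keyid)
      return nonce + pubkey + user_seckey.sign(to_sign)   # 16 + 32 + 64 bytes

  def check_userauth_field(field, auth_keyid, enc_keyid, is_authorized):
      nonce, pubkey, sig = field[:16], field[16:48], field[48:112]
      if not is_authorized(pubkey):        # is this key recognized at all?
          return False
      to_sign = (b"Hidserv-userauth-ed25519" + nonce + pubkey +
                 auth_keyid + enc_keyid)
      try:
          ed25519.Ed25519PublicKey.from_public_bytes(pubkey).verify(sig, to_sign)
          return True
      except Exception:
          return False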
Replay prevention on the whole cell is sufficient to prevent replays
on the authentication.
Users SHOULD NOT use the same public key with multiple hidden
services.
4. The rendezvous protocol
Before connecting to a hidden service, the client first builds a
circuit to an arbitrarily chosen Tor node (known as the rendezvous
point), and sends an ESTABLISH_RENDEZVOUS cell. The hidden service
later connects to the same node and sends a RENDEZVOUS cell. Once
this has occurred, the relay forwards the contents of the RENDEZVOUS
cell to the client, and joins the two circuits together.
4.1. Establishing a rendezvous point [EST_REND_POINT]
The client sends the rendezvous point a
RELAY_COMMAND_ESTABLISH_RENDEZVOUS cell containing a 20-byte value.
RENDEZVOUS_COOKIE [20 bytes]
Rendezvous points MUST ignore any extra bytes in an
ESTABLISH_RENDEZVOUS message. (Older versions of Tor did not.)
The rendezvous cookie is an arbitrary 20-byte value, chosen randomly
by the client. The client SHOULD choose a new rendezvous cookie for
each new connection attempt. If the rendezvous cookie is already in
use on an existing circuit, the rendezvous point should reject it and
destroy the circuit.
Upon receiving an ESTABLISH_RENDEZVOUS cell, the rendezvous point
associates the cookie with the circuit on which it was sent. It
replies to the client with an empty RENDEZVOUS_ESTABLISHED cell to
indicate success. [TODO: make this extensible]
The client MUST NOT use the circuit which sent the cell for any
purpose other than rendezvous with the given location-hidden service.
The client should establish a rendezvous point BEFORE trying to
connect to a hidden service.
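(Non-normative illustration: a toy Python sketch of the bookkeeping
described above. The cookie table and the return values are invented
for the example and do not correspond to actual tor internals.)

import os

def new_rendezvous_cookie():
    # The client picks a fresh 20-byte cookie for each connection attempt.
    return os.urandom(20)

# Toy rendezvous-point state: map cookie -> waiting client circuit id.
pending = {}

def handle_establish_rendezvous(circ_id, body):
    cookie = body[:20]                       # extra bytes MUST be ignored
    if cookie in pending:
        return ("DESTROY", circ_id)          # cookie already in use: reject
    pending[cookie] = circ_id
    return ("RENDEZVOUS_ESTABLISHED", circ_id)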
4.2. Joining to a rendezvous point [JOIN_REND]
To complete a rendezvous, the hidden service host builds a circuit to
the rendezvous point and sends a RENDEZVOUS1 cell containing:
RENDEZVOUS_COOKIE [20 bytes]
HANDSHAKE_INFO [variable; depends on handshake type
used.]
If the cookie matches the rendezvous cookie set on any
not-yet-connected circuit on the rendezvous point, the rendezvous
point connects the two circuits, and sends a RENDEZVOUS2 cell to the
client containing the contents of the RENDEZVOUS1 cell.
Upon receiving the RENDEZVOUS2 cell, the client verifies that the
HANDSHAKE_INFO correctly completes a handshake, and uses the
handshake output to derive shared keys for use on the circuit.
[TODO: Should we encrypt HANDSHAKE_INFO as we did INTRODUCE2
contents? It's not necessary, but it could be wise. Similarly, we
should make it extensible.]
4.3. Using legacy hosts as rendezvous points
The behavior of ESTABLISH_RENDEZVOUS is unchanged from older versions
of this protocol, except that relays should now ignore unexpected
bytes at the end.
Old versions of Tor required that RENDEZVOUS cell payloads be exactly
168 bytes long. All shorter rendezvous payloads should be padded to
this length with [00] bytes.
5. Encrypting data between client and host
A successfully completed handshake, as embedded in the
INTRODUCE/RENDEZVOUS cells, gives the client and hidden service host
a shared set of keys Kf, Kb, Df, Db, which they use for end-to-end
traffic encryption and authentication as in the regular Tor relay
encryption protocol, applying encryption with these keys before any
other encryption, and decrypting with these keys after all other
decryption. The client encrypts with Kf and decrypts with Kb; the
service host does the opposite.
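(To make the layering order concrete, here is a toy Python sketch in
which XOR with a SHAKE-derived keystream stands in for Tor's real
relay crypto; the keys and hop names are made up and nothing here is
normative. The point is only that the hidden-service layer keyed with
Kf is applied first, so it sits innermost beneath the ordinary relay
layers and is removed last by the other side.)

import hashlib

def keystream(key, n):
    # Toy keystream; Tor's real relay crypto is AES-CTR plus digests.
    return hashlib.shake_256(key).digest(n)

def xor_layer(key, data):
    return bytes(a ^ b for a, b in zip(data, keystream(key, len(data))))

Kf = b"hs-forward-key"                       # from the completed handshake
relay_keys = [b"hop1", b"hop2", b"hop3"]     # ordinary circuit hops (toy)

payload = b"GET / HTTP/1.0\r\n\r\n"

# Client: hidden-service layer first, then the usual relay layers.
cell = xor_layer(Kf, payload)
for k in relay_keys:
    cell = xor_layer(k, cell)

# Relay layers are peeled in transit; the HS layer is removed last.
for k in reversed(relay_keys):
    cell = xor_layer(k, cell)
assert xor_layer(Kf, cell) == payload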
6. Open Questions:
Scaling hidden services is hard. There are on-going discussions that
you might be able to help with. See [SCALING-REFS].
How can we improve the HSDir unpredictability design proposed in
[SHAREDRANDOM]? See [SHAREDRANDOM-REFS] for discussion.
How can hidden service addresses become memorable while retaining
their self-authenticating and decentralized nature? See
[HUMANE-HSADDRESSES-REFS] for some proposals; many more are possible.
Hidden services are pretty slow, both because of the lengthy setup
procedure and because the final circuit has 6 hops. How can we make
the Hidden Service protocol faster? See [PERFORMANCE-REFS] for some
suggestions.
References:
[KEYBLIND-REFS]:
https://trac.torproject.org/projects/tor/ticket/8106
https://lists.torproject.org/pipermail/tor-dev/2012-September/004026.html
[SHAREDRANDOM-REFS]:
https://trac.torproject.org/projects/tor/ticket/8244
https://lists.torproject.org/pipermail/tor-dev/2013-November/005847.html
https://lists.torproject.org/pipermail/tor-talk/2013-November/031230.html
[SCALING-REFS]:
https://lists.torproject.org/pipermail/tor-dev/2013-October/005556.html
[HUMANE-HSADDRESSES-REFS]:
https://gitweb.torproject.org/torspec.git/blob/HEAD:/proposals/ideas/xxx-on…
http://archives.seul.org/or/dev/Dec-2011/msg00034.html
[PERFORMANCE-REFS]:
"Improving Efficiency and Simplicity of Tor circuit
establishment and hidden services" by Overlier, L., and
P. Syverson
[TODO: Need more here! Do we have any? :( ]
[ATTACK-REFS]:
"Trawling for Tor Hidden Services: Detection, Measurement,
Deanonymization" by Alex Biryukov, Ivan Pustogarov,
Ralf-Philipp Weinmann
"Locating Hidden Servers" by Lasse Øverlier and Paul
Syverson
[ED25519-REFS]:
"High-speed high-security signatures" by Daniel
J. Bernstein, Niels Duif, Tanja Lange, Peter Schwabe, and
Bo-Yin Yang. http://cr.yp.to/papers.html#ed25519
Appendix A. Signature scheme with key blinding [KEYBLIND]
As described in [IMD:DIST] and [SUBCRED] above, we require a "key
blinding" system that works (roughly) as follows:
There is a master keypair (sk, pk).
Given the keypair and a nonce n, there is a derivation function
that gives a new blinded keypair (sk_n, pk_n). This keypair can
be used for signing.
Given only the public key and the nonce, there is a function
that gives pk_n.
Without knowing pk, it is not possible to derive pk_n; without
knowing sk, it is not possible to derive sk_n.
It's possible to check that a signature was made with sk_n while
knowing only pk_n.
Someone who sees a large number of blinded public keys and
signatures made using those public keys can't tell which
signatures and which blinded keys were derived from the same
master keypair.
You can't forge signatures.
[TODO: Insert a more rigorous definition and better references.]
We propose the following scheme for key blinding, based on Ed25519.
(This is an ECC group, so remember that scalar multiplication is the
one-way operation, and it's defined in terms of iterated point
addition. See the Ed25519 paper [ED25519-REFS] for a fairly
clear writeup.)
Let the basepoint be written as B. Assume B has prime order l, so
lB=0. Let a master keypair be written as (a,A), where a is the private
key and A is the public key (A=aB).
To derive the key for a nonce N and an optional secret s, compute the
blinding factor h as H(A | s, B, N), and let:
private key for the period: a' = h a
public key for the period: A' = h A = (ha)B
Generating a signature of M: given a deterministic random-looking r
(see EdDSA paper), take R=rB, S=r+hash(R,A',M)ah mod l. Send signature
(R,S) and public key A'.
Verifying the signature: Check whether SB = R+hash(R,A',M)A'.
(If the signature is valid,
SB = (r + hash(R,A',M)ah)B
= rB + (hash(R,A',M)ah)B
= R + hash(R,A',M)A' )
See [KEYBLIND-REFS] for an extensive discussion on this scheme and
possible alternatives. I've transcribed this from a description by
Tanja Lange at the end of the thread. [TODO: We'll want a proof for
this.]
(To use this with Tor, set N = INT_8(period-number) | INT_8(Start of
period in seconds since epoch).)
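(Since the scheme above is pure scalar/point algebra, the
verification equation can be sanity-checked with a toy model in which
the prime-order group generated by B is replaced by integers mod l,
so "points" are just integers and nB is n*B mod l. The Python sketch
below checks only the algebra; the hash H, the basepoint stand-in,
and the nonce strings are all invented, and this is of course not a
real Ed25519 implementation.)

import hashlib, random

l = 2**252 + 27742317777372353535851937790883648493  # Ed25519 group order
B = 2                                                 # toy basepoint stand-in

def H(*parts):
    h = hashlib.sha512(b"".join(str(p).encode() for p in parts)).digest()
    return int.from_bytes(h, "little") % l

a = random.randrange(1, l); A = a * B % l             # master keypair (a, A)
h = H("blind", A, B, "nonce")                         # blinding factor
a_blind = h * a % l                                   # a' = h a
A_blind = h * A % l                                   # A' = h A = (ha)B

M = "message"
r = H("nonce-r", a_blind, M)                          # deterministic r (toy)
R = r * B % l
S = (r + H(R, A_blind, M) * a_blind) % l              # S = r + hash(R,A',M) a h

# Verification: SB == R + hash(R,A',M) A'
assert S * B % l == (R + H(R, A_blind, M) * A_blind) % l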
Appendix B. Selecting nodes [PICKNODES]
Picking introduction points
Picking rendezvous points
Building paths
Reusing circuits
(TODO: This needs a writeup)
Appendix C. Recommendations for searching for vanity .onions [VANITY]
EDITORIAL NOTE: The author thinks that it's silly to brute-force the
keyspace for a key that, when base-32 encoded, spells out the name of
your website. It also feels a bit dangerous to me. If you train your
users to connect to
llamanymityx4fi3l6x2gyzmtmgxjyqyorj9qsb5r543izcwymle.onion
I worry that you're making it easier for somebody to trick them into
connecting to
llamanymityb4sqi0ta0tsw6uovyhwlezkcrmczeuzdvfauuemle.onion
Nevertheless, people are probably going to try to do this, so here's a
decent algorithm to use.
To search for a public key with some criterion X:
Generate a random (sk,pk) pair.
While pk does not satisfy X:
Add the number 1 to sk
Add the point B to pk
Return sk, pk.
This algorithm is safe [source: djb, personal communication] [TODO:
Make sure I understood correctly!] so long as only the final (sk,pk)
pair is used, and all previous values are discarded.
To parallelize this algorithm, start with an independent (sk,pk) pair
generated for each independent thread, and let each search proceed
independently.
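(The incremental step works because adding 1 to sk adds exactly one
basepoint B to pk, so the invariant pk = sk*B is preserved without a
fresh scalar multiplication each round. The toy Python sketch below,
again modeling the group as integers mod the group order with a
placeholder predicate standing in for criterion X, only demonstrates
that invariant; a real search would use genuine Ed25519 point
addition and test the base32-encoded public key.)

import random

l = 2**252 + 27742317777372353535851937790883648493  # Ed25519 group order
B = 5                                                 # toy basepoint stand-in

def satisfies_x(pk):
    # Placeholder criterion; a real search would base32-encode the
    # public key and compare it against the desired prefix.
    return pk % 1000 == 0

sk = random.randrange(1, l)
pk = sk * B % l
while not satisfies_x(pk):
    sk = (sk + 1) % l          # add 1 to the secret scalar ...
    pk = (pk + B) % l          # ... which adds B to the public point
    assert pk == sk * B % l    # invariant: pk is still sk*B
print("found after incremental search:", sk, pk)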
Appendix D. Numeric values reserved in this document
[TODO: collect all the lists of commands and values mentioned above]
I have been looking at doing some work on Tor as part of my degree, and
more specifically, looking at Hidden Services. One of the issues where I
believe I might be able to make some progress, is the Hidden Service
Scaling issue as described here [1].
So, before I start trying to implement a prototype, I thought I would
set out my ideas here to check they are reasonable (I have also been
discussing this a bit on #tor-dev). The goal of this is two fold, to
reduce the probability of failure of a hidden service and to increase
hidden service scalability.
I think what I am planning distils down to two main changes. Firstly,
currently, if you start a hidden service on a new OP using an existing
keypair and address, the new OP's introduction points replace the
existing introduction points [2]. This does provide some redundancy
(if slow), but no load balancing.
My current plan is to change this such that if the OP has an existing
public/private keypair and address, it would attempt to lookup the
existing introduction points (probably over a Tor circuit). If found, it
then establishes introduction circuits to those Tor servers.
Then comes the second problem: following the above, the introduction
point would currently disconnect from any other connected OP using the
same public key (it is unclear why, as no reason is given in the
rend-spec). This would need to change so that an introduction point
can talk to more than one instance of the hidden service.
These two changes combined should help with the two goals. Reliability
is improved by having multiple OPs providing the service, and having
all of these accessible from the introduction points. Scalability is
also improved, as you are not limited to one OP (as described above,
you can currently have more than one, but only one will receive most
of the traffic, and failover is slow).
I am aware that there are several undefined parts of the above
description (e.g. how does an introduction point choose which circuit
to use?), but at the moment I am more interested in the wider picture.
It would be good to get some feedback on this.
1: https://blog.torproject.org/blog/hidden-services-need-some-love
2:
http://tor.stackexchange.com/questions/13/can-a-hidden-service-be-hosted-by…
This list of open Tor proposals is based on one I sent out last
month[0], based on the one I did in June of last year[1], based on the
one I did in May[2] of the year before.
If you're looking for something to review, think about, or comment
on:
Review 212 (using older consensuses) or 215 (obsoleting
consensus methods) if you understand the directory system even a
little bit; they are quite simple.
Review 219 if you're a DNS geek, or you'd like Tor to work
better with more DNS types.
Review 220 (ed25519 identity keys) if you like designing
signature things, if you have good ideas about future-proofing
key type migration, or if you care about making Tor servers'
identity keys stronger.
Review 223 (ACE handshake) if you're a cryptographer, or a cryptography
implementer, and you'd like an even faster replacement for the
ntor handshake.
Review 224 if you want to look through a big, complex protocol
with a lot of pieces. Also review it if you care about hidden
services and making them better.
Review something else if you want to take a possibly good idea
that needs more momentum and promote it, fix it up, or finally
kill it off.
Finally: if you've sent something to tor-dev or to me that should
have a proposal number, but doesn't have one yet, please ping me
again to remind me!
[0] https://lists.torproject.org/pipermail/tor-dev/2013-November/005798.html
[1] https://lists.torproject.org/pipermail/tor-dev/2012-June/003641.html
[2] https://lists.torproject.org/pipermail/tor-dev/2011-May/002637.html
**NOTE**: The dates after each paragraph indicate when I last
revised the paragraph.
127 Relaying dirport requests to Tor download site / website
The idea here was to make it easier to fetch and learn about
Tor by making it easy for relays to automatically act as
proxies to the Tor website. It needs more discussion, and
there are some significant details to work out. It's not at
all clear whether this is actually a good idea or not.
Probably, there are better choices when it comes to
distributing software and updates. (11/2013)
131 Help users to verify they are using Tor [NEEDS-REVISION]
This one is not a crazy idea, but I marked it as
needs-revision since it doesn't seem to work so well with
our current designs. It seems mostly superseded by proposal
211. (11/2013)
132 A Tor Web Service For Verifying Correct Browser Configuration
This proposal was meant to give users a way to see if their
browser and privoxy (yes, it was a while ago) are correctly
configured by running a local webserver on 127.0.0.1. I'm not
sure of the status here. Generally, I'm skeptical of designs
that run webservers on localhost, since they become a target
for cross-site attacks. (11/2013)
133 Incorporate Unreachable ORs into the Tor Network
This proposal had an idea for letting ORs that can only make
outgoing connections still relay data usefully in the network.
It's something we should keep in mind, and it's a pretty neat
idea, but it radically changes the network topology. Anybody
who wants to analyze new network topologies should definitely
have a look. (5/2011)
140 Provide diffs between consensuses
This proposal describes a way to transmit less directory
traffic by sending only differences between consensuses, rather
than the consensuses themselves. It is mainly languishing for
lack of an appropriately licensed, well-written, very small,
pure-C implementation of the "diff" and "patch" algorithms.
(The good diffs seem to be GPL (which we can't use without
changing Tor's license), or spaghetti code, or not easily
usable as a library, or not written in C, or very large, or
some combination of those.) (5/2011)
141 Download server descriptors on demand
The idea of this proposal was for clients to only download the
consensus, and later ask nodes for their own server descriptors
while building the circuit through them. It would make each
circuit more time-consuming to build, but make bootstrapping
much cheaper.
Microdescriptors obsolete a lot of this proposal, and present
some difficulties in using them in a way compatible with
it. (6/2012)
143 Improvements of Distributed Storage for Tor Hidden Service
Descriptors
Here's a proposal from Karsten about making the hidden
service DHT more reliable and secure to use. It could use
more discussion and analysis. We should look into it as part
of efforts to improve designs for the next generation of
hidden services.
One problem with the analysis here, though, is that it
assumes a fixed set of servers that doesn't change. One
reason that we upload to N servers at each chosen point in
the ring is that the hidden service host and the hidden
service client may have different views of which servers
exist. We need to re-do the analysis with some fraction of
recent consensuses. (11/2013)
144 Increase the diversity of circuits by detecting nodes
belonging to the same provider
This is a version of the good idea, "Let's do routing in a way
that tries to keep from routing traffic through the same
provider too much!" There are complex issues here that the
proposal doesn't completely address, but I think it might be a
fine idea for somebody to see how much more we know now than we
did in 2008, particularly in light of the relevant paper(s) by
Matt Edman and Paul Syverson. (5/2011)
145 Separate "suitable as a guard" from "suitable as a new guard"
[NEEDS-RESEARCH]
Currently, the Guard flag means both "You can use this node as a
guard if you need to pick a new guard" and "If this node is
currently your guard, you can keep using it as a guard." This
proposal tries to separate these two concepts, so that clients can
stop picking a router once it is full of existing clients using it
as a guard, but the clients currently on it won't all drop it.
It's not clear whether this has anonymity issues, and it's not
clear whether the imagined performance gains are actually
worthwhile. (5/2011)
147 Eliminate the need for v2 directories in generating v3 directories
This proposal explains a way that we can phase out the
vestigial use of v2 directory documents in keeping authorities
well-informed enough to generate the v3 consensus. It's
still correct; somebody should implement it before the v2
directory code rots any further. (5/2011)
156 Tracking blocked ports on the client side
This proposal provides a way for clients to learn which ports
they are (and aren't) able to connect to, and connect to the
ones that work. It comes with a patch, too. It also lets
routers track ports that _they_ can't connect to.
I'm a little unconvinced that this will help a great deal: most
clients that have some ports blocked will need bridges, not
just restriction to a smaller set of ports. This could be good
behind restrictive firewalls, though.
The router-side part is a little iffy: routers that can't
connect to each other violate one of our network topology
assumptions, and even if we do want to track failed
router->router connections, the routers need to be sure that
they aren't fooled into trying to connect repeatedly to a
series of nonexistent addresses in an attempt to make them
believe that (say) they can't reach port 443.
This one is a paradigmatic "open" proposal: it needs more
discussion. The patch probably also needs to be ported to
0.2.3.x; it touches some code that has changed. (5/2011)
159 Exit Scanning
This is an overview of SoaT, with some ideas for how to integrate
it into Tor. (5/2011)
164 Reporting the status of server votes
This proposal explains a way for authorities to provide a
slightly more verbose document that relay operators can use to
diagnose reasons that their router was or was not listed in the
consensus. These documents would be like slightly more verbose
versions of the authorities' votes, and would explain *why* the
authority voted as it did. It wouldn't be too hard to
implement, and would be a fine project for somebody who wants
to get to know the directory code. (5/2011)
165 Easy migration for voting authority sets
This is a design for how to change the set of authorities without
having a flag day where the authority operators all reconfigure
their authorities at once. It needs more discussion. One
difficulty here is that we aren't talking much about changing the
set of authorities, but that may be a chicken-and-egg issue, since
changing the set is so onerous.
If anybody is interested, it would be great to move the discussion
ahead here. (5/2011)
168 Reduce default circuit window
This proposal reduces the default window for circuit sendme
cells. I think it's implemented (or mostly implemented) in
0.2.1.20? If so, we should make sure that tor-spec.txt is
updated and close it. (11/2013)
172 GETINFO controller option for circuit information
173 GETINFO Option Expansion
These would help controllers (particularly arm) provide more
useful information about a running Tor process. They're
accepted and some parts of 173 are even implemented: somebody
just needs to implement the rest. (5/2011)
175 Automatically promoting Tor clients to nodes
Here's Steven's proposal for adding a mode between "client
mode" and "relay mode" for "self-test to see if you would be a
good relay, and if so become one." It didn't get enough
attention when it was posted to the list; more people should
review it. (5/2011)
177 Abstaining from votes on individual flags
Here's my proposal for letting authorities have opinions about some
(flag,router) combinations without voting on whether _every_ router
should have that flag. It's simple, and I think it's basically
right. With more discussion and review, somebody could/should
build it, I think. (11/2013)
182 Credit Bucket
This proposal suggests an alternative approach to our current
token-bucket based rate-limiting, that promises better
performance, less buffering insanity, and a possible end to
double-gating issues. (6/2012)
185 Directory caches without DirPort
The old HTTP directory port feature is no longer used by
clients and relays under most circumstances. The proposal
explains how we can get rid of the requirement that non-bridge
directories have an open directory port. (6/2012)
188 Bridge Guards and other anti-enumeration defenses
This proposal suggests some ways to make it harder for a relay
on the Tor network to enumerate a list of Tor bridges. Worth
investigating and possibly implementing. (6/2012)
189 AUTHORIZE and AUTHORIZED cells
190 Bridge Client Authorization Based on a Shared Secret [NEEDS-REVISION]
191 Bridge Detection Resistance against MITM-capable Adversaries
Proposal 187 reserved the AUTHORIZE cell type; these
proposals suggests how it could work to try to make it
harder to probe for Tor bridges. They need more alternatives
and attention, and possibly some revision and analysis.
Number 190 needs revision, since its protocol isn't actually
so great. (11/2013)
192 Automatically retrieve and store information about bridges
This proposal gives an enhancement to the bridge information
protocol, where clients remember more things about bridges, and
are able to update what they know about them over time. Could
help a lot with bridge churn. (6/2012)
194 Mnemonic .onion URLs
Here's one of several competing "let's make .onion URLs
human-usable" proposals. This one makes sentences using a
fixed map. This kind of approach is likely to be obsoleted if
we go ahead with current plans for hidden services that
would make .onion addresses much longer, though. (11/2013)
195 TLS certificate normalization for Tor 0.2.4.x
Here's the followup to proposal 179, containing all the parts
of proposal 179 that didn't get built, and a couple of other
tricks besides to try to make Tor's default protocol less
detectable. I'm pretty psyched about the part where we let
relays drop in any self-signed or CA-issued certificate
that they like. Some of this is done in ticket #7145;
we should decide, however, how much we want to push towards
normalizing the main Tor protocol. (11/2013)
196 Extended ORPort and TransportControlPort
Here are some remaining pieces of the pluggable transport
protocol that give Tor finer control over the behavior of
its transports. Much of this is implemented in 0.2.5
now; we should figure out what's left, and whether we want
to build that. (11/2013)
197 Message-based Inter-Controller IPC Channel
This proposal is for an architectural enhancement in Tor
deployments, where Tor coordinates communications between the
various programs (Vidalia, TorBrowser, etc) that are using
it. (6/2012)
199 Integration of BridgeFinder and BridgeFinderHelper
Here's a proposal for how Tor can integrate with a client
program that finds bridges for it. I've seen some work being
done on things called "BridgeFinder"; I don't know what the
status of the current proposal is, though. (11/2013)
201 Make bridges report statistics on daily v3 network status requests
Here's a proposal for bridges to better estimate the number of
bridge users. (6/2012)
202 Two improved relay encryption protocols for Tor cells
Here's a sketch of the two broad classes of alternatives for
improving how relay encryption works. Right now, progress on
this proposal is stalled waiting for the ideal wide-block
construction to come along the line. (11/2013)
203 Avoiding censorship by impersonating an HTTPS server
This one is a design for making a bridge that acts like an
HTTPS server (by *being* an HTTPS server) until the user
proves they know it's a bridge. (11/2013)
209 Tuning the Parameters for the Path Bias Defense
In this proposal, Mike discusses alternative parameters for
getting better result out of the path-bias-attack detection
code. (11/2013)
210 Faster Headless Consensus Bootstrapping
This proposal suggests that we get our initial consensus by
launching multiple connections in parallel, and fetching the
consensus from whichever one completes. In my opinion, that
would be a fine idea when we're fetching our initial
consensus from non-Authority DirSources, but we shouldn't do
anything to increase the load on authorities. (11/2013)
211 Internal Mapaddress for Tor Configuration Testing
Here, the idea is to serve an XML document over HTTP that
would let the application know when it's using Tor. The XML document
would be returned when you make a request over Tor for a
magic address in 127.0.0.0/8. I think we need to do
_something_ to solve this problem, but I'm not thrilled with
the idea of having any more magical addresses like this; we
got rid of .noconnect for a reason, after all. (11/2013)
212 Increase Acceptable Consensus Age
This proposal suggests that we increase the maximum age of a
consensus that clients are willing to use when they can't
find a new one, in order to make the network robust for
longer against a failure to reach consensus. In my
opinion, we should do that. If I recall correctly, there
was some tor-dev discussion on this one that should get
incorporated into a final, implementable version. (11/2013)
215 Let the minimum consensus method change with time
This proposal describes how we can raise the minimum
allowable consensus method that all authorities must
support, since the ancient "consensus method 1" would not
actually be viable to keep the Tor network running. We
should do this; see ticket #10163. (11/2013)
219 Support for full DNS and DNSSEC resolution in Tor
Here's a design to allow Tor to support a full range of DNS
request types. It probably isn't adequate on its own to make
DNSSEC work realistically, since naive DNSSEC requires many
round trips that wouldn't be practical over Tor. It has a
ton of inline discussion that needs to get resolved before
this is buildable.
One thing to consider here is whether we can get the server-side
done with reasonable confidence, and figure out the client side
once more servers have upgraded. (12/2013)
220 Migrate server identity keys to Ed25519
This one is an initial design to migrate server identity keys to
Ed25519 for improved security margin. It needs more analysis of
long-term migration to other signing key types -- what do we do
if we want to add support for EdDSA over Curve3617 or something?
Can we make it easier than this? And it also needs analysis to
make sure it enables us to eventually drop RSA1024 keys
entirely.
I've started building this, though, so we'd better figure it out
fairly soon. Other proposals, like 224, depend on this one.
(12/2013)
223 Ace: Improved circuit-creation key exchange
Here's an interesting one. It proposes a replacement for the
ntor handshake, using the multi-exponentiation optimization, to
run a bit faster at an equivalent security level.
Assuming that more cryptographers like the security proof, and
that the ntor handshake winds up being critical-path in profiles
as more clients upgrade to 0.2.4 or 0.2.5, this is something we
should consider. (12/2013)
224 Next-Generation Hidden Services in Tor
This proposal outlines a more or less completely revised version
of the Tor hidden services protocol, improved to accommodate
better cryptography, better scalability, and defenses for
several attacks we'd never considered when we did the original
design.
Some parts of this one are clearly right; some (like
scalability) are entirely unwritten. This proposal needs a lot
of attention and improvements to get it done right. I hope to
implement this over the course of 2014. (12/2013)
225 Strawman proposal: commit-and-reveal shared rng
Proposal 224's solutions for bug #8244 require that authorities
regularly agree upon a shared random value which none of them
could have influenced or predicted in advance. This proposal
outlines a simple one that isn't perfect (it's vulnerable to DOS
and to limited influence by one or more collaborating hostile
authorities), but it's quite simple, and it's meant to start
discussion.
I hope that we never build this, but instead replace it with
something better. Some alternatives have already been discussed
on tor-dev; more work is needed, though. (12/2013)
Since last time:
223, 224, and 225 are new.
157 was finally finished and closed; see ticket #10162.
Hi,
I would like to help with the unit tests for Torsocks, I was thinking of
starting with utils and config-file tests, does this sound like a
reasonable place to start?
regards,
Luke
Hi everyone,
The torsocks 2.0-rc3 code is getting quite stable in my opinion. There
are still some issues but nothing feature critical.
https://github.com/dgoulet/torsocks
I would really love to have help with code review so it can get accepted
as a replacement in the near future. Some of you already gave feedback,
so thanks, but now it needs the "seal of approval" from the community :).
Here are some things you can start with. This new version is built to be
thread safe and provides a good compatibility layer for OSes (right now
FreeBSD, NetBSD, Linux and OS X are supported). Considering that, you
can direct your attention to this:
* Synchronization issues (multi threading, reentrancy, ...)
* Security obviously :)
* Correctness.
This is an "inprocess" library thus it has to be bullet proof so it does
not crash the application.
In src/lib/, every libc call is in its corresponding file matching the
man page name; for instance, gethostbyname.c contains all reentrant
functions and other versions of that family as well (gethostbyname_r,
gethostbyname2, etc.). Review them just to make sure they mimic the
libc behaviour as closely as possible.
In src/common, connection.c/.h and onion.c/.h are probably the two
interfaces that need review. Connection is the one handling new sockets
and their state, while Onion handles the cookies that are sent back to
the client (by default: 127.42.42.x) to map an onion address to that
cookie once a connection is established (connect()).
Other than that, man pages and documentation are also important.
You can send patches via email and/or pull request (github or not); I'm
easy about how contributions arrive! :)
Thanks to all!
David
I've been thinking about a couple of tricky use cases for pluggable
transport libraries, and whether we should do anything to try to support
them.
The first use case is the flashproxy/websocket use case.
flashproxy-client recognizes the two transport names "flashproxy" and
"websocket" as synonyms. That is, tor can ask for either one and they
will work equivalently. But what should happen when tor asks for "*";
i.e., the activation of all supported transports? We want to start only
one SOCKS listener, for the preferred name "flashproxy", not a separate
listener for every synonym. The way you would indicate that you support
two transport names looks like this in pyptlib and goptlib
respectively:
ptclient.init(["flashproxy", "websocket"])
ptInfo, err := pt.ClientSetup([]string{"flashproxy", "websocket"})
but neither of those works for this use case, because if tor asks for
"*", you get two listeners. Maybe we don't care about this use case,
because as I understand it, tor will never ask for "*" anyway.
The second use case is the fog* use case. As a server, we may not want
to declare all the transports we support in advance. Rather, we may
prefer to look at the names of the transports tor has asked for, and
decide for each one whether we support it. The idea here is that since
we can arbitrarily chain a set of transports, we can't just enumerate
all possible chains and declare those as the transports we support. Both
pyptlib and goptlib require you to list all the transports you want to
support on initialization. We would like for tor to be able to ask for a
transport name like "obfs3|cbr|obfs3|websocket", and we check to see
whether we are able to construct such a chain. The current idea is to
only support a small number of predefined chains in a configuration
file, so that we can in fact declare them all in advance.
https://trac.torproject.org/projects/tor/ticket/9744
David Fifield
* fog is the transport combinator formerly known as Metallica; see
https://trac.torproject.org/projects/tor/ticket/9743.
(This message has been sitting in my drafts for a week or so, because
I fear that it might make no sense. Today I cleaned it up and decided
to post it.)
Hello Nick and Elly,
we were recently discussing various commit-and-reveal schemes to
accomplish the unpredictability of HSDir positions in the hash ring.
This is a thread to better coordinate on this subject. The
corresponding trac ticket number is 8244.
I left our IRC discussion with two conclusions in mind:
a) The simple approach of a commit-and-reveal protocol can not be
entirely secure since an adversary could choose not to reveal his
value (abort) which would allow him to influence the final result.
b) Proper protocols that achieve this goal are ugly, both in elegance
and in the number of rounds. This is basically the Byzantine
agreement problem which has ugly solutions and funny impossibility
results.
We started thinking of how disastrous a commit-and-reveal scheme could
be for our specific use case, and we decided that it's worth thinking
more about before moving to other heavyweight protocols.
Today, I thought a bit about it.
The goal of such a scheme would be to have all authorities
collectively generate and agree on a nonce. Adversaries who control a
subset of the authorities should not be able to influence the result
(except if they control a majority of the authorities; in which case
Tor is screwed anyway).
So we consider adversaries that can control multiple
authorities. Adversarial authorities can either be Byzantine (can lie,
malfunction, etc.) or abort unpredictably (because of random
failures).
To make this more concrete, let's also consider a simple
commit-and-reveal scheme for our use case.
Every ONCE_IN_A_WHILE:
1. Each authority publishes a signed document with a commitment value.
2. Authorities collect commitment documents from the other authorities.
3. After COMMIT_TIMEOUT minutes each authority publishes a signed
document that reveals the cleartext value of their previous
commitment.
4. Authorities collect cleartext values from other authorities and
check that they match the received commitments.
5. After REVEAL_TIMEOUT minutes each authority publishes a signed
document containing:
* a list of the received commitments and cleartext values that the
authority used in its nonce calculation
* the resulting nonce
6. Authorities collect the nonce documents from the other authorities,
and check that all authorities had the same commitment/cleartext
list and calculated the same nonce.
The final nonce derivation function should be unpredictable given at
least one honest contribution to the derivation function. For example,
if the inputs to the derivation function are big enough (e.g. each
authority publishes 32 random bytes), stuffing them into a hash
function should do the trick.
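For what it's worth, here's a minimal Python sketch of steps 1-5 from
a single honest authority's point of view (function names and the
choice of SHA-256 are just for the example; real documents would of
course be signed and exchanged over the directory protocol):

import hashlib, os

def make_commitment():
    # Step 1: pick 32 random bytes and publish only their hash.
    value = os.urandom(32)
    commitment = hashlib.sha256(value).hexdigest()
    return value, commitment

def check_reveal(commitment, value):
    # Step 4: a revealed value must match the earlier commitment.
    return hashlib.sha256(value).hexdigest() == commitment

def derive_nonce(revealed_values):
    # Step 5: stuff all accepted cleartext values into a hash function,
    # in a fixed order so that all authorities compute the same nonce.
    h = hashlib.sha256()
    for v in sorted(revealed_values):
        h.update(v)
    return h.hexdigest()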
Also, assuming big enough COMMIT_TIMEOUT and REVEAL_TIMEOUT values,
honest authorities should not have trouble casting their vote in
time. Maybe we can even allow multi-hour timeout intervals, since we
are not in a particular hurry if ONCE_IN_A_WHILE is every 24 hours or
more.
For the rest of this analysis, the response of authorities towards
node failure is to ignore it: keep calm and carry on. That is, if
a node doesn't send a message during COMMIT_TIMEOUT or REVEAL_TIMEOUT,
other nodes simply ignore the node for the rest of the protocol. There
might be other responses here (like restarting the protocol), but this
one is the simplest to reason about and probably the most secure. I
argue that the response of restarting the protocol in case of node
failure might be much worse than simply proceeding, but it depends on
the number of restarts we allow (and especially what happens in the
last round of the protocol if the maximum number of restarts is
reached).
Now, let's consider some adversarial tactics:
* (trivial case) If adversaries perform the protocol normally, the
resulting nonce should be unpredictable assuming that there is at
least one other honest authority.
* (stupid case) An adversary can abort during the COMMIT_TIMEOUT phase
but she doesn't gain much. The calculation continues without her
contributions.
* (interesting case) An adversary can abort during the REVEAL_TIMEOUT
phase. At this stage, she has seen the cleartext values of the
honest authorities and can pre-calculate the final nonce value using
her own cleartext values. If the adversary doesn't like the
resulting final nonce, she can abort a subset of her authorities, so
that she gets different final nonces (since the nonce calculation
would continue without considering the aborting
authorities). Specifically, if an adversary controls 'a'
authorities, she can choose between 2^a possible final nonce values,
by aborting different subsets of her authorities.
The next question is how this attack influences the probability of
an attacker that wants to inject his own HSDirs as the responsible
HSDirs of a Hidden Service. I should note that the final nonce is
still unpredictable (the adversary can just choose between multiple
unpredictable final nonces), so the best strategy of an attacker is
to add HSDirs in the hash ring (probably in strategically chosen
positions) and choose whichever of the possible 2^a nonce values (if
any) makes his HSDirs responsible for the target HS.
We can analyze this situation by modeling it as a game and
calculating the probability of the adversary winning. I tried to do
this in the appendix of this post.
If my logic is not flawed (could as well be) and my assumptions are
not ridiculous, an attacker has probability '1 - (1 - k/r)^d' of
getting his HSDir as the first responsible HSDir of a Hidden
Service. In the above formula, k is the number of adversarial HSDirs
in the hash ring, r is the total number of HSDirs, and d is d=2^a
that is the number of possible final nonce values he can choose from
by aborting some of his authorities. On the other hand, the
probability of an attacker succeeding without the commit-and-reveal
scheme is simply k/r.
All in all, it seems that the commit-and-reveal protocol is not too
bad (if the adversary controls one or two authorities), but it's not
very good either. It's worth bearing in mind that footnote [0] might
change the probabilities for the worse.
It's also important that we look deeper into #8243.
If I get persuaded that my assumptions, models and math are correct, I
might make some graphs to demonstrate how the attacker probability
changes for different values of k, r and d.
---
APPENDIX:
Badness of commit-and-reveal scheme:
(Here be dragons, bad math and plenty of mental masturbation. Proceed
with caution.)
The problem with the commit-and-reveal scheme is that an adversary who
is not happy with the position of the Hidden Service in the hash ring,
has the option of choosing any subset of her a adversarial authorities
to abort the commit-and-reveal protocol during the REVEAL_TIMEOUT
phase, which lets her choose between 2^a positions for the Hidden
Service.
To evaluate how bad this is, we can model our problem as a game and
calculate the probability of the adversary winning.
In the game, we model the HSDir positions in the hash ring as random
integers from 0 to N. [0]
Similarly, we model the position of a Hidden Service in the hash ring
as another random integer from 0 to N.
We say that an adversary wins, if any of her HSDirs are the closest
(in a clockwise fashion) to the position of the Hidden Service.
Defining the game formally:
Game_1:
"""
Step 1: Each player p_i picks a number x_i uniformly in [0, N]
{This corresponds to HSDir positions in the hash ring}
Step 2: A random value phi is chosen uniformly in [0, N]
{This corresponds to the position of the Hidden Service in the hash ring}
Step 3: Winner is the player whose x_i is the closest to phi. We
define the distance of a player i as d_i = x_i - phi (mod N).
{This emulates the algorithm that Hidden Services use to pick their
responsible HSDirs}
"""
(For computing ease, we will assume that x_i and phi values are
distinct from each other. This should be true in real life too,
otherwise some key collision happened.)
Looking at the above game, and based on our assumptions, it is easy to
see that d_i values are actually random values in [0, N]. This can be
seen informally, since the players during step 1 don't know the value
of phi. It can also be seen formally by calculating P[d_i == 0],
P[d_i == 1], ..., P[d_i == N]: all of those probabilities should be 1/N.
This means that we can reduce the above game to a new game that is
easier to analyze:
Game_2:
"""
Step 1: Each player p_i picks a number d_i uniformly in [0, N].
{This corresponds to each player getting assigned a distance value d_i}
Step 2: Winner is the player with the smallest d_i value.
{The winner is the player with the smallest distance from the Hidden Service.}
"""
Now let's analyze Game_2:
Informally again, since the game is fair and symmetric, all players
have the same probability of winning. That is,
P[p_1 wins] == P[p_2 wins] == ... == P[p_r wins] (1)
where r is the total number of players.
{in our equivalent real life problem, r is the number of HSDirs }
Furthermore, since one player has to win, we have:
P[p_1 wins] + P[p_2 wins] + ... + P[p_r wins] == 1 (2)
Combining (1) and (2) we get that P[p_i wins] == 1/r (for 0 < i < r)
which means that all players have probability 1/r of winning. This is
intuitive and what we expected.
Now let's calculate the probability of an adversary winning the game,
assuming that he controls multiple players.
{or that he controls multiple HSDirs in our equivalent real life problem}
It's easy to see that the probability of k players winning is k/r.
This means that:
P[adversary that controls k out of r players wins] = k/r (3)
with k <= r.
Now that we have these probabilities in place, we can calculate how
bad the commit-and-reveal scheme is for us. It's important to notice,
that an adversary who controls a authorities can choose between d = 2^a
values of phi in Game_1, but she doesn't get a chance to redistribute
her x_i values accordingly (since relays require a certain uptime till
they get the HSDir flag).
Informally again, I argue that choosing between d values of phi in
Game_1, is the same as restarting Game_2 d times (so that all players
get completely different d_i values). Hence, I believe that the
probability of an adversary winning if he has d choices of phi, is the
same as the adversary winning in at least one instance of Game_2 if
Game_2 is restarted d times. That is,
P[winning in at least one instance of Game_2 if you play d times]
== P[*not* losing all d instances of Game_2]
== 1 - P[losing all d instances of Game_2]
== 1 - (P[losing one instance of Game_2])^d
== 1 - (1 - P[winning on Game_2])^d, and using (3) we get:
== 1 - (1 - k/r)^d (4)
where we assume that the adversary controls k out of r players of Game_2.
Now that we have (4), let's plug some real life Tor network data into
it to see what kind of probabilities we get:
If we assume that we have r=2000 HSDirs in total [1], and an adversary
controls k=100 HSDirs, the probability of him winning Game_2 is:
P[adversary wins] = k/r == 100/2000 == 0.05
Now, let's also assume that we are using the commit-and-reveal scheme
and the adversary controls 1 authority, hence d = 2^a == 2. Now we have:
P[adversary wins] = 1 - (1 - k/r)^d == 1 - (1 - 100/2000)^2 == 0.0975
If he controls 2 authorities, we have d == 4, and:
P[adversary wins] = 1 - (1 - k/r)^d == 1 - (1 - 100/2000)^4 == 0.1854
If he controls 3 authorities, we have d == 8, and:
P[adversary wins] = 1 - (1 - k/r)^d == 1 - (1 - 100/2000)^8 == 0.3366
And the numbers go on... [2]
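Formula (4) can also be sanity-checked numerically by simulating
Game_2 directly. Here's a quick, non-rigorous Monte Carlo sketch in
Python (k, r and d just mirror the examples above; the trial count is
small, so the estimates are rough):

import random

def adversary_wins_once(k, r):
    # Game_2: everyone draws a distance; the adversary wins if one of
    # her k players has the minimum.
    draws = [random.random() for _ in range(r)]
    return min(draws[:k]) == min(draws)

def adversary_wins_with_restarts(k, r, d):
    # The adversary effectively restarts Game_2 up to d times by
    # aborting different subsets of her authorities.
    return any(adversary_wins_once(k, r) for _ in range(d))

def estimate(k, r, d, trials=1000):
    wins = sum(adversary_wins_with_restarts(k, r, d) for _ in range(trials))
    return wins / trials

for a in (1, 2, 3):
    d = 2 ** a
    print(d, estimate(100, 2000, d), 1 - (1 - 100/2000) ** d)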
It's worth noting here that this attacker only cares to get the first
position after the HS. An attacker who wants to do so probably plans
to squat *all* responsible HSDirs of a HS, and he can achieve it by
placing multiple clusters of HSDirs in different parts of the hash
ring. This is a different strategy from the attacker who only cares
about having a single responsible HSDir for an HS; the probabilities
for such an attacker would be better than the ones above.
[0]: This is a very *important* assumption, since adversarial HSDirs
can pretty much pick their position in the hash ring by brute
forcing their keys till they find one that puts them in a good
position. A good position would be a place where there is a big
gap between the adversarial HSDir and the previous honest
HSDirs. We will not consider this adversarial strategy since it
makes the math harder.
[1]: $ grep -i HSDir cached-microdesc-consensus | wc -l
2015
[2]: http://www.wolframalpha.com/input/?i=1+-+%281+-+k%2Fr%29%5Ed%2C+for+k%3D100…
CellStatistics circuit distribution scale could perhaps use adjustment
by starlight@binnacle.cx 04 Jan '14
Have been running a guard for a couple
of months with 'CellStatistics' and
noticed that the distribution looks
out of whack:
cell-stats-end 2013-12-20 18:13:10 (86400 s)
cell-processed-cells 1409,9,6,6,6,5,4,3,2,1
cell-queued-cells 0.44,0.00,0.00,0.00,0.00,0.00,0.00,0.00,0.00,0.00
cell-time-in-queue 98,1,1,1,0,13,2,1,1,0
cell-circuits-per-decile 15199
Seems like most of the circuits with significant
traffic end up in the first bucket and the
remaining nine buckets are of little
significance. I'm fairly certain that a
relative handful of circuits account for
99.9% of the cell traffic with cell-counts
in the tens-to-hundreds of thousands.
Most of that is bot traffic, I suppose.
Perhaps a log-scaled "loudness" breakdown
would make sense?
Nothing pressing here, just an observation
and a thought.
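For what it's worth, a log-scaled breakdown could be as simple as
bucketing each circuit by the order of magnitude of its cell count
rather than by decile. A rough Python sketch (the input is just a
made-up list of per-circuit cell counts, not actual tor internals):

import math
from collections import Counter

def loudness_histogram(cells_per_circuit):
    # Bucket circuits by order of magnitude of their cell count:
    # bucket 0 = 1..9 cells, 1 = 10..99, 2 = 100..999, and so on.
    buckets = Counter()
    for n in cells_per_circuit:
        if n > 0:
            buckets[int(math.log10(n))] += 1
    return dict(sorted(buckets.items()))

# Example: a handful of very busy circuits among many quiet ones.
print(loudness_histogram([3, 7, 12, 40, 250000, 90000, 5, 1, 180, 400000]))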
Attempting to build tbb-3.5.1-build1, and failing. See below for failure.
I am building on a fully-updated Ubuntu v12.04LTS/x86_64 system. I am using
the USE_LXC method because KVM won't work in this VMware VM.
On my first attempt I did a "make all". That didn't work, so I tried
"./mkbundle-linux.sh" (all I really care about is a Linux/x86_64 build)
and got the same error at what seems the same point in the build process.
This problem isn't listed in the "Known Issues and Quirks" section of the
doc.
Can anyone advise me on how to fix this problem?
Thanks.
-----------------------
2013-12-28 20:08:35,138 INFO : Searching for GRUB installation directory ... found: /boot/grub
2013-12-28 20:08:35,164 INFO : Calling hook: install_kernel
2013-12-28 20:08:59,860 INFO : Done.
2013-12-28 20:09:01,503 INFO : Running depmod.
2013-12-28 20:09:01,595 INFO : update-initramfs: Generating /boot/initrd.img-2.6.32-54-server
2013-12-28 20:09:06,598 INFO : Running postinst hook script /usr/sbin/update-grub.
2013-12-28 20:09:06,735 INFO : Searching for GRUB installation directory ... found: /boot/grub
2013-12-28 20:09:06,835 INFO : Searching for default file ... found: /boot/grub/default
2013-12-28 20:09:06,842 INFO : Testing for an existing GRUB menu.lst file ... found: /boot/grub/menu.lst
2013-12-28 20:09:07,159 INFO : Searching for splash image ... none found, skipping ...
2013-12-28 20:09:07,261 INFO : Found kernel: /boot/vmlinuz-2.6.32-54-server
2013-12-28 20:09:07,536 INFO : Replacing config file /var/run/grub/menu.lst with new version
2013-12-28 20:09:07,605 INFO : Updating /boot/grub/menu.lst ... done
2013-12-28 20:09:07,611 INFO :
2013-12-28 20:09:07,723 INFO : Calling hook: post_install
2013-12-28 20:09:07,753 INFO : Calling hook: unmount_partitions
2013-12-28 20:09:07,754 INFO : Unmounting target filesystem
2013-12-28 20:09:11,171 INFO : Calling hook: convert
2013-12-28 20:09:11,172 INFO : Converting /tmp/tmptolHOn to qcow2, format base-lucid-amd64/tmptolHOn.qcow2
2013-12-28 20:09:27,192 INFO : Calling hook: fix_ownership
2013-12-28 20:09:27,207 INFO : Calling hook: deploy
2013-12-28 20:09:27,208 INFO : Calling hook: fix_ownership
Extracting partition for lxc
[sudo] password for steve:
lxc-start: No such file or directory - failed to get real path for '/home/steve/buildtbb/gitian-builder/target--'
lxc-start: failed to pin the container's rootfs
lxc-start: failed to spawn 'gitian'
lxc-start: No such file or directory - failed to remove cgroup '/sys/fs/cgroup/cpuset//lxc/gitian'
amd64 lucid VM creation failed
Nicolas Vigier transcribed 1.4K bytes:
> Hi,
Hey Nicolas,
Thanks for all the work you're doing, and my apologies that I hadn't responded
to your tor-dev@ call yet.
Just in case you haven't seen it, Lunar made a wiki page which has quite a bit
of info on it, and I filled in some more on BridgeDB. [0]
aabgsn maintained BridgeDB for a year or so, but no longer works on it (though
they are more than welcome to do so, if they wish to). sysrqb has been helping
me maintain BridgeDB quite a bit (feel free to CC them on BridgeDB topics).
> I am currently looking at the status and list of things to be done
> regarding automation on tor project. I have been looking at bridgedb :
> https://people.torproject.org/~boklm/automation/tor-automation-review.html#…
From that page:
> Continuous Build
> BridgeDB is not currently built and tested by Jenkins.
>
> However, Isis Lovecruft has a personnal development fork on github that is
> built and tested by travis-ci.org:
> https://travis-ci.org/isislovecruft/bridgedb/
>
> Packaging
> BridgeDB does not have packages. It is currently deployed using a Python virtualenv.
>
To my knowledge, BridgeDB is not currently deployed in a virtualenv (sysrqb
was the last to redeploy it). I recently refactored the main loop and scripts
so that it *can* run in a virtualenv, and it *should* be run in one, because:
1. We won't need to nag weasel/Sebastian to update/install BridgeDB dependencies.
2. Dependencies will not be installed via sudo.
I've been considering creating packages for BridgeDB on PyPI.
Pros:
* Even if we manually download the bundle, verify the hash, and then
install it, this seems potentially easier and less error-prone than
checking out a git tag, verifying it, and then building.
* Packaging it now reserves the 'bridgedb' Python namespace for our use.
Cons:
* I don't want to make people think that this thing is a polished
distribution system for people who wish to run their own BridgeAuths.
If proper packaging is helpful for Jenkins, however, I can easily do so.
> Testing
> Some unit tests are implemented in lib/bridgedb/Tests.py and can be run with
> the command python setup.py test.
Actually, the tests in lib/bridgedb/Tests.py are old tests. Running them with
`python setup.py test` or `make test` will run them via the Python stdlib
unittest module (which doesn't play nicely with Twisted's asynchronicity). See
#9865, #9872. [1] [2]
There are new tests in lib/bridgedb/test/test_*.py [3] and they can be run with
`[sudo] make [re]install && bridgedb test`.
I began setting up a system which will run the old lib/bridgedb/Tests.py
unittests with Twisted's trial runner (#9873). [4] The old unittests will get
run twice, once with removed/deprecated classes and functions which have been
taken out of BridgeDB's codebase, and again with new/refactored code; this
way, the old unittests function as a (partial) regression test suite.
The way I designed it, the removed/deprecated code (various classes/functions
before refactoring) will go into lib/bridgedb/test/deprecated.py, and they are
`twisted.python.monkey.MonkeyPatch`ed into place for a run of the old unittests
in lib/bridgedb/Tests.py. Then, the old unittests are run a second time with
the newly refactored code, so that the difference between the two can be
clearly seen, and bugs introduced by new code can (hopefully) be caught
immediately.
>
> Proposals
> Add BridgeDB build and test to Jenkins
Created ticket #10417: BridgeDB should be built and tested on Jenkins
https://trac.torproject.org/projects/tor/ticket/10417
> The main thing to be done that I have seen is running the unit tests
> with Jenkins when there are new commits. You can let me know if I missed
> something important, or if you have other ideas / needs.
Needs:
1. We need a lot more unittests, but this is perhaps not a task for
volunteers (or, rather, people who aren't very familiar with BridgeDB's
code).
2. BridgeDB needs *a lot* more documentation. It had almost none when I
started working on it 6 months ago; it has a few bits now. [8]
Questions:
1. Does it help if I use tox? [5] [6]
2. If not, I believe you'll need a shell script which Jenkins can use to
install BridgeDB in a virtualenv. [7] Or do you need some sort of Maven
thing, or both?
3. Is there somewhere I should put that documentation on torproject.org
(other than people.tpo/~isis)?
[0]: https://trac.torproject.org/projects/tor/wiki/AutomationInventory
[1]: https://trac.torproject.org/projects/tor/ticket/9865
[2]: https://trac.torproject.org/projects/tor/ticket/9872
[3]: https://gitweb.torproject.org/user/isis/bridgedb.git/tree/refs/heads/develo…
[4]: https://trac.torproject.org/projects/tor/ticket/9873
[5]: http://tox.readthedocs.org/en/latest/
[6]: http://alexgaynor.net/2010/dec/17/getting-most-out-tox/
[7]: http://www.alexconrad.org/2011/10/jenkins-and-python.html
[8]: https://para.noid.cat/bridgedb/
Thanks,
--
♥Ⓐ isis agora lovecruft
_________________________________________________________
GPG: 4096R/A3ADB67A2CDB8B35
Current Keys: https://blog.patternsinthevoid.net/isis.txt