Filename: 205-local-dnscache.txt
Title: Remove global client-side DNS caching
Author: Nick Mathewson
Created: 20 July 2012
Status: Open
0. Overview
This proposal suggests that, for reasons of security, we move
client-side DNS caching from a global cache to a set of per-circuit
caches.
This will break some things that used to work. I'll explain how to
fix them.
1. Background and Motivation
Since the earliest Tor releases, we've kept a client-side DNS
cache. This lets us implement exit policies and exit enclaves --
if we remember that www.mit.edu is 18.9.22.169 the first time we
see it, then we can avoid making future requests for www.mit.edu
via any node that blocks net 18. Also, if there happened to be a
Tor node at 18.9.22.169, we could use that node as an exit enclave.
But there are security issues with DNS caches. A malicious exit
node or DNS server can lie. And unlike other traffic, where the
effect of a lie is confined to the request in question, a malicious
exit node can affect the behavior of future circuits when it gives
a false DNS reply. This false reply could be used to keep a client
connecting to an MITM'd target, or to make a client use a chosen
node as an exit enclave for that node, and so on.
With IPv6, tracking attacks will become even easier: a
hostile exit node can give every client a different IPv6 address
for every hostname it wants to resolve, such that every one of
those addresses is under the attacker's control.
And even if the exit node is honest, having a cached DNS result can
cause Tor clients to build their future circuits distinguishably:
the exit on any subsequent circuit can tell whether the client knew
the IP for the address yet or not. Further, if the site's DNS
provides different answers to clients from different parts of the
world, then the client's cached choice of IP will reveal where it
first learned about the website.
So client-side DNS caching needs to go away.
2. Design
2.1. The basic idea
I propose that clients should cache DNS results in per-circuit DNS
caches, not in the global address map.
2.2. What about exit policies?
Microdescriptor-based clients have already dropped the ability to
track which nodes declare which exit policies, without much ill
effect. As we go forward, I think that remembering the IP address
of each request so that we can match it to exit policies will be
even less effective, especially if proposals to allow AS-based exit
policies can succeed.
2.3. What about exit enclaves?
Exit enclaves are already broken. They need to move towards a
cross-certification solution where a node advertises that it can
exit to a hostname or domain X.Y.Z, and a signed record at X.Y.Z
advertises that the node is an enclave exit for X.Y.Z. That's
out-of-scope for this proposal, except to note that nothing
proposed here keeps that design from working.
2.4. What about address mapping?
Our current address map algorithm is, more or less:
N = 0
while N < MAX_MAPPING && exists map[address]:
    address = map[address]
    N = N + 1
if N == MAX_MAPPING:
    Give up, it's a loop.
Where 'map' is the union of all mapping entries derived from the
controller, the configuration file, trackhostexits maps,
virtual-address maps, DNS replies, and so on.
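As a minimal sketch of the loop above in runnable form (the map contents and the MAX_MAPPING value here are illustrative, not Tor's actual internals):

```python
MAX_MAPPING = 32  # illustrative loop-detection bound


def apply_address_map(address, addr_map):
    """Follow chained mapping entries until none apply; bail out on loops."""
    n = 0
    while n < MAX_MAPPING and address in addr_map:
        address = addr_map[address]
        n += 1
    if n == MAX_MAPPING:
        raise ValueError("mapping loop detected at %r" % address)
    return address


# Example: a MapAddress-style entry chained with a DNS-derived entry.
addr_map = {"example.com": "example.net", "example.net": "10.0.0.1"}
print(apply_address_map("example.com", addr_map))  # -> 10.0.0.1
```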
With this design, the DNS cache will not be part of the address
map. That means that entries in the address map that relied on
being applied after the DNS cache entries can no longer work so well.
These would include:
A) Mappings from an IP address to a particular exit, either
manually declared or inserted by TrackHostExits.
B) Mappings from IP addresses to other IP addresses.
C) Mappings from IP addresses to hostnames.
We can try to solve these by introducing an extra step of address
mapping after the DNS cache is applied. In other words, we should
apply the address map, then see if we can attach to a circuit. If
we can, we try to apply that circuit's DNS cache, then apply the
address map again.
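A hedged sketch of that two-pass order, with stand-in names for circuit attachment and the per-circuit cache (everything here is illustrative, not Tor's real data structures):

```python
def resolve_target(address, addr_map, circuits):
    """Apply the address map, pick a circuit, then apply that circuit's
    DNS cache and the address map once more, per section 2.4."""

    def apply_map(addr):
        seen = set()
        while addr in addr_map and addr not in seen:
            seen.add(addr)
            addr = addr_map[addr]
        return addr

    address = apply_map(address)            # pass 1: global address map
    circuit = next(iter(circuits), None)    # stand-in for circuit attachment
    if circuit is not None:
        # pass 2: the attached circuit's DNS cache, then the map again
        address = circuit["dns_cache"].get(address, address)
        address = apply_map(address)
    return address


circ = {"dns_cache": {"www.mit.edu": "18.9.22.169"}}
addr_map = {"mit.test": "www.mit.edu", "18.9.22.169": "18.9.22.169.exit"}
print(resolve_target("mit.test", addr_map, [circ]))  # -> 18.9.22.169.exit
```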
2.5. What about the performance impact?
That all depends on application behavior.
If the application continues to make all of its requests with the
hostname, there shouldn't be much trouble. Exit-side DNS caches and
exit-side DNS will avoid any additional round trips across the Tor
network; compared to that, the time to do a DNS resolution at the
exit node *should* be small.
That said, this will hurt performance a little in the case where
the exit node and its resolver don't have the answer cached, and it
takes a long time to resolve the hostname.
If the application is doing "resolve, then connect to an IP", see
2.6 below.
2.6. What about DNSPort?
If the application is doing its own DNS caching, it won't get
much security benefit from this change.
If the application is doing a resolve before each connect, there
will be a performance hit when the resolver is using a circuit that
hasn't previously resolved the address.
Also, DNSPort users: AutomapHostsOnResolve is your friend.
3. Alternate designs and future directions
3.1. Why keep client-side DNS caching at all?
A fine question! I am not sure it actually buys us anything any
longer, since exits also have DNS caching. Shall we discuss that?
It would sure simplify matters.
3.2. The impact of DNSSec
Once we get DNSSec support, clients will be able to verify whether
an exit's answers are correctly signed or not. When that happens,
we could get most of the benefits of global DNS caching back,
without most of the security issues, if we restrict it to
DNSSec-signed answers.
Also exists at
https://gitweb.torproject.org/user/mikeperry/torspec.git/blob/path-bias-tun…
--------------------------------------------------------------------
Title: Tuning the Parameters for the Path Bias Defense
Author: Mike Perry
Created: 01-10-2012
Status: Open
Target: 0.2.4.x+
Overview
This proposal describes how we can use the results of simulations in
combination with network scans to set reasonable limits for the Path
Bias defense, which causes clients to be informed about and ideally
rotate away from Guards that provide extremely low circuit success
rates.
Motivation
The Path Bias defense is designed to defend against a type of route capture
where malicious Guard nodes deliberately fail circuits that extend to
non-colluding Exit nodes to maximize their network utilization in favor of
carrying only compromised traffic.
This attack was explored in the academic literature in [1], and a
variant involving cryptographic tagging was posted to tor-dev[2] in
March.
In the extreme, the attack allows an adversary that carries c/n
of the network capacity to deanonymize c/n of the network
connections, breaking the O((c/n)^2) property of Tor's original
threat model.
Design Description
The Path Bias defense is a client-side accounting mechanism in Tor that
tracks the circuit failure rate for each of the client's guards.
Clients maintain two integers for each of their guards: a count of the
number of times a circuit was extended at least one hop through that
guard, and a count of the number of circuits that successfully complete
through that guard. The ratio of these two numbers is used to determine
a circuit success rate for that Guard.
The system should issue a notice log message when Guard success rate
falls below 70%, a warn when Guard success rate falls below 50%, and
should drop the Guard when the success rate falls below 30%.
To ensure correctness, checks are performed to ensure that
we do not count successes without also counting the first hop.
Similarly, to provide a moving average of recent Guard activity while
still preserving the ability to ensure correctness, we "scale" the
success counts by an integer divisor (currently 2) when the counts
exceed the moving average window (300) and when the division
does not produce integer truncation.
No log messages should be displayed, nor should any Guard be
dropped until it has completed at least 150 first hops (inclusive).
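A minimal sketch of this per-guard accounting, using the constants from this proposal (real Tor tracks more state; the class below is illustrative only):

```python
class PathBiasCounter:
    """Per-guard circuit accounting sketch: notice/warn/drop thresholds
    plus count scaling, using the constants from this proposal."""
    MIN_CIRCS = 150     # pb_mincircs
    SCALE_AT = 300      # pb_scalecircs (moving average window)
    SCALE_FACTOR = 2    # pb_scalefactor

    def __init__(self):
        self.first_hops = 0   # circuits extended at least one hop
        self.successes = 0    # circuits completed through this guard

    def record(self, completed):
        self.first_hops += 1
        if completed:
            self.successes += 1
        # Scale only once the window is exceeded and neither division
        # would truncate.
        if (self.first_hops >= self.SCALE_AT
                and self.first_hops % self.SCALE_FACTOR == 0
                and self.successes % self.SCALE_FACTOR == 0):
            self.first_hops //= self.SCALE_FACTOR
            self.successes //= self.SCALE_FACTOR

    def status(self):
        if self.first_hops < self.MIN_CIRCS:
            return "ok"  # too little data to judge yet
        rate = 100 * self.successes / self.first_hops
        if rate < 30:
            return "drop"
        if rate < 50:
            return "warn"
        if rate < 70:
            return "notice"
        return "ok"
```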
Analysis: Simulation
To test the defense in the face of various types of malicious and
non-malicious Guard behavior, I wrote a simulation program in
Python[3].
The simulation confirmed that without any defense, an adversary
that provides c/n of the network capacity is able to observe c/n
of the network flows using circuit failure attacks.
It also showed that with the defense, an adversary that wishes to
evade detection has compromise rates bounded by:
P(compromise) <= (c/n)^2 * (100/CUTOFF_PERCENT)
circs_per_client <= circuit_attempts*(c/n)
In this way, the defense restores the O((c/n)^2) compromise property,
but unfortunately only over long periods of time (see Security
Considerations below).
The spread between the cutoff values and the normal rate of circuit
success has a substantial effect on false positives. From the
simulation's results, the sweet spot for the size of this spread appears
to be 10%. In other words, we want to set the cutoffs such that they are
10% below the success rate we expect to see in normal usage.
The simulation also demonstrates that larger "scaling window" sizes
reduce false positives for instances where non-malicious guards
experience some ambient rate of circuit failure.
Analysis: Live Scan
Preliminary Guard node scanning using the txtorcon circuit scanner[4]
shows normal circuit completion rates between 80-90% for most Guard
nodes.
However, it also showed that CPU overload conditions can easily push
success rates as low as 45%. Even more concerning is that for a brief
period during the live scan, success rates dropped to 50-60%
network-wide (regardless of Guard node choice).
Based on these results, the notice condition should be 70%, the warn
condition should be 50%, and the drop condition should be 30%.
Future Analysis: Deployed Clients
It's my belief that further analysis should be done by deploying
log lines for all three thresholds in clients on the live network,
so that we can use user reports of how often high rates of circuit
failure are seen before we deploy changes to rotate away from
failing Guards.
I believe these log lines should be deployed in 0.2.3.x clients,
to maximize the exposure of the code to varying network conditions,
so that we have enough data to consider deploying the Guard-dropping
cutoff in 0.2.4.x.
Security Considerations
While the scaling window does provide freshness and can help mitigate
"bait-and-switch" attacks, it also creates the possibility of conditions
where clients can be forced off their Guards due to temporary
network-wide CPU DoS. This provides another reason beyond false positive
concerns to set the scaling window as large as is reasonable.
A DoS directed at specific Guard nodes is unlikely to allow an
adversary to cause clients to rotate away from that Guard, because it
is unlikely that the DoS can be precise enough to allow first hops to
that Guard to succeed, but also cause extends to fail. This leaves
network-wide DoS as the primary vector for influencing clients.
Simulation results show that in order to cause clients to rotate away
from a Guard node that previously succeeded 80% of its circuits, an
adversary would need to induce a 25% success rate for around 350 circuit
attempts before the client would reject it, or a 5% success rate
for around 215 attempts, both using a scaling window of 300 circuits.
Assuming one circuit per Guard per 10 minutes of active client
activity, this is a sustained network-wide DoS attack of 60 hours
for the 25% case, or 38 hours for the 5% case.
Presumably this is enough time for the directory authorities to respond by
altering the pb_disablepct consensus parameter before clients rotate,
especially given that most clients are not active for even 38 hours on end,
and will tend to stop building circuits while idle.
If we raised the scaling window to 500 circuits, it would require 1050
circuits if the DoS brought circuit success down to 25% (175 hours), and
415 circuits if the DoS brought the circuit success down to 5% (69 hours).
The tradeoff, though, is that larger scaling window values allow Guard nodes
to compromise clients for duty cycles of around the size of this window (up to
the (c/n)^2 * 100/CUTOFF_PERCENT limit in aggregate), so we do have to find
balance between these concerns.
Implementation Notes: Log Messages
Log messages need to be chosen with care to avoid alarming users.
I suggest:
Notice: "Your Guard %s is failing more circuits than usual. Most likely
this means the Tor network is overloaded. Success counts are %d/%d."
Warn: "Your Guard %s is failing a very large amount of circuits. Most likely
this means the Tor network is overloaded, but it could also mean an attack
against you or potentially the Guard itself. Success counts are %d/%d."
Drop: "Your Guard %s is failing an extremely large amount of circuits. [Tor
has disabled use of this Guard.] Success counts are %d/%d."
The second piece of the Drop message would not be present in 0.2.3.x,
since the Guard won't actually be dropped.
Implementation Notes: Consensus Parameters
The following consensus parameters reflect the constants listed
in the proposal. These parameters should also be available
for override in torrc.
pb_mincircs=150
The minimum number of first hops before we log or drop Guards.
pb_noticepct=70
The threshold of circuit success below which we display a notice.
pb_warnpct=50
The threshold of circuit success below which we display a warn.
pb_disablepct=30
The threshold of circuit success below which we disable the guard.
pb_scalecircs=300
The number of first hops at which we scale the counts down.
pb_scalefactor=2
The integer divisor by which we scale.
1. http://freehaven.net/anonbib/cache/ccs07-doa.pdf
2. https://lists.torproject.org/pipermail/tor-dev/2012-March/003347.html
3. https://gitweb.torproject.org/torflow.git/tree/HEAD:/CircuitAnalysis/PathBi…
4. https://github.com/meejah/txtorcon/blob/exit_scanner/apps/exit_scanner/fail…
--
Mike Perry
Matthew Finkel:
> On 11/02/2012 07:36 PM, Jacob Appelbaum wrote:
>> Nick Mathewson:
>>> On Fri, Nov 2, 2012 at 1:34 PM, adrelanos <adrelanos(a)riseup.net> wrote:
>>>>
>>>>
>>>> Could you blog it please?
>>>
>>>
>>> I'd like to see more discussion from more people here first, and see
>>> whether somebody steps up to say, "Yeah, I can maintain that" here, or
>>> whether somebody else who knows more than me about the issues has something
>>> to say. Otherwise I don't know whether to write a "looking for maintainer"
>>> post, a "who wants to fork" post, a "don't use Torsocks, use XYZZY" post,
>>> or what.
>>>
>>
>> If Robert wants someone to maintain it, I'd be happy to do so. I had
>> wanted to extend it to do some various things anyway. I think it would
>> be a suitable base for a bunch of things I'd like to do in the next year.
>>
>> All the best,
>> Jake
>>
>
> I saw this thread earlier but didn't have a chance to reply. I was
> thinking about volunteering to patch it up and maintain it if no one
> else wanted to take it on, also, but if you want to take the lead on it
> then I'm more than happy to help you where ever possible...assuming this
> is the direction that's decided upon.
>
Hi,
I've pushed my first branch to fix the dlopen bugs:
https://gitweb.torproject.org/torsocks.git/shortlog/refs/heads/dlerror
It seems to fix the issues on my Ubuntu system. I could use some testing
on OS X, other GNU/Linux, and *BSD systems.
I've also updated the bug:
http://code.google.com/p/torsocks/issues/detail?id=3
One person has found that it fixed the issue for them as well.
Also - it seems that we probably should address this issue very soon:
http://code.google.com/p/torsocks/issues/detail?id=37
https://trac.torproject.org/projects/tor/wiki/doc/torsocks#WorkaroundforIPv…
I've taken a stab at at least blocking IPv6 connect() calls:
https://gitweb.torproject.org/torsocks.git/shortlog/refs/heads/ipv6
https://gitweb.torproject.org/torsocks.git/commit/95528585c1d13b0e17e9d3538…
Thoughts?
All the best,
Jake
Hi, all.
This is just the ntor proposal draft, as circulated last year, but
with a proposal number assigned to it, and a closing section about how
to make Tor actually work with it.
Filename: 216-ntor-handshake.txt
Title: Improved circuit-creation key exchange
Author: Nick Mathewson
Created: 11-May-2011
Status: Open
Summary:
This is an attempt to translate the proposed circuit handshake from
"Anonymity and one-way authentication in key-exchange protocols" by
Goldberg, Stebila, and Ustaoglu, into a Tor proposal format.
It assumes that proposal 200 is implemented, to provide an extended CREATE
cell format that can indicate what type of handshake is in use.
Notation:
Let a|b be the concatenation of a with b.
Let H(x,t) be a tweakable hash function of output width H_LENGTH bytes.
Let t_mac, t_key, and t_verify be a set of arbitrarily-chosen tweaks
for the hash function.
Let EXP(a,b) be a^b in some appropriate group G where the appropriate DH
parameters hold. Let's say elements of this group, when represented as
byte strings, are all G_LENGTH bytes long. Let's say we are using a
generator g for this group.
Let a,A=KEYGEN() yield a new private-public keypair in G, where a is the
secret key and A = EXP(g,a). If additional checks are needed to ensure
a valid keypair, they should be performed.
Let PROTOID be a string designating this variant of the protocol.
Let KEYID be a collision-resistant (but not necessarily preimage-resistant)
hash function on members of G, of output length H_LENGTH bytes.
Instantiation:
Let's call this PROTOID "ntor-curve25519-sha256-1" (We might want to make
this shorter if it turns out to save us a block of hashing somewhere.)
Set H(x,t) == HMAC_SHA256 with message x and key t. So H_LENGTH == 32.
Set t_mac == PROTOID | ":mac"
t_key == PROTOID | ":key"
t_verify == PROTOID | ":verify"
Set EXP(a,b) == curve25519(.,b,a), and g == 9 . Let KEYGEN() do the
appropriate manipulations when generating the secret key (clearing the
low bits, twiddling the high bits).
Set KEYID(B) == B. (We don't need to use a hash function here, since our
keys are already very short. It is trivially collision-resistant, since
KEYID(A)==KEYID(B) iff A==B.)
Protocol:
Take a router with identity key digest ID.
As setup, the router generates a secret key b, and a public onion key
B with b, B = KEYGEN(). The router publishes B in its server descriptor.
To send a create cell, the client generates a keypair x,X = KEYGEN(), and
sends a CREATE cell with contents:
NODEID: ID -- H_LENGTH bytes
KEYID: KEYID(B) -- H_LENGTH bytes
CLIENT_PK: X -- G_LENGTH bytes
The server generates a keypair of y,Y = KEYGEN(), and computes
secret_input = EXP(X,y) | EXP(X,b) | ID | B | X | Y | PROTOID
KEY_SEED = H(secret_input, t_key)
verify = H(secret_input, t_verify)
auth_input = verify | ID | B | Y | X | PROTOID | "Server"
The server sends a CREATED cell containing:
SERVER_PK: Y -- G_LENGTH bytes
AUTH: H(auth_input, t_mac) -- H_LENGTH bytes
The client then checks Y is in G^* [see NOTE below], and computes
secret_input = EXP(Y,x) | EXP(B,x) | ID | B | X | Y | PROTOID
KEY_SEED = H(secret_input, t_key)
verify = H(secret_input, t_verify)
auth_input = verify | ID | B | Y | X | PROTOID | "Server"
The client verifies that AUTH == H(auth_input, t_mac).
[NOTE: It may be adequate to check that EXP(Y,x) is not the point at
infinity. See tor-dev thread.]
Both parties now have a shared value for KEY_SEED. They expand this into
the keys needed for the Tor relay protocol.
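To make the message computations concrete, here is a runnable sketch of the handshake. It uses HMAC-SHA256 for H as specified, but substitutes a toy mod-p group for curve25519 purely to stay dependency-free; the ID value is a placeholder. This shows the structure of the exchange only, and is in no way a secure implementation:

```python
import hashlib
import hmac
import os

PROTOID = b"ntor-curve25519-sha256-1"
t_mac, t_key, t_verify = (PROTOID + s for s in (b":mac", b":key", b":verify"))


def H(x, t):
    # Tweakable hash: HMAC-SHA256 with message x and key t, per the proposal.
    return hmac.new(t, x, hashlib.sha256).digest()


# TOY GROUP: exponentiation mod the Mersenne prime 2^127 - 1 stands in for
# curve25519 here. Real code must use curve25519 as the proposal specifies.
P = 2**127 - 1
g = 3
G_LENGTH = 16


def EXP(a, b):
    return pow(int.from_bytes(a, "big"), b, P).to_bytes(G_LENGTH, "big")


def KEYGEN():
    secret = int.from_bytes(os.urandom(16), "big") % (P - 2) + 2
    return secret, EXP(g.to_bytes(G_LENGTH, "big"), secret)


ID = os.urandom(32)      # router identity key digest (placeholder)
b_key, B = KEYGEN()      # router's long-term onion key pair
x, X = KEYGEN()          # client ephemeral key (CLIENT_PK in the CREATE cell)
y, Y = KEYGEN()          # server ephemeral key (SERVER_PK in the CREATED cell)

# Server side, on receiving the CREATE cell:
secret_input = EXP(X, y) + EXP(X, b_key) + ID + B + X + Y + PROTOID
KEY_SEED_server = H(secret_input, t_key)
verify = H(secret_input, t_verify)
AUTH = H(verify + ID + B + Y + X + PROTOID + b"Server", t_mac)

# Client side, on receiving the CREATED cell:
secret_input = EXP(Y, x) + EXP(B, x) + ID + B + X + Y + PROTOID
KEY_SEED_client = H(secret_input, t_key)
verify = H(secret_input, t_verify)
assert hmac.compare_digest(
    AUTH, H(verify + ID + B + Y + X + PROTOID + b"Server", t_mac))
assert KEY_SEED_server == KEY_SEED_client
```

Both sides end with the same KEY_SEED because EXP(X,y) == EXP(Y,x) and EXP(X,b) == EXP(B,x) in any commutative group.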
Key expansion:
Currently, the key expansion formula used by Tor here is
K = SHA(K0 | [00]) | SHA(K0 | [01]) | SHA(K0 | [02]) | ...
where K0==g^xy, and K is divvied up into Df, Db, Kf, and Kb portions.
Instead, let's have it be
K = K_1 | K_2 | K_3 | ...
Where K_1 = H(m_expand | INT8(1) , KEY_SEED )
and K_(i+1) = H(K_i | m_expand | INT8(i+1) , KEY_SEED )
and m_expand is an arbitrarily chosen value,
and INT8(i) is an octet with the value "i".
Ian says this is due to a construction from Krawczyk at
http://eprint.iacr.org/2010/264 .
Let m_expand be PROTOID | ":key_expand"
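A short runnable sketch of this HKDF-style expansion (the 20+20+16+16 output split for Df | Db | Kf | Kb is an assumption based on Tor's current key sizes):

```python
import hashlib
import hmac

PROTOID = b"ntor-curve25519-sha256-1"
m_expand = PROTOID + b":key_expand"


def H(x, t):
    # Tweakable hash: HMAC-SHA256 with message x and key t.
    return hmac.new(t, x, hashlib.sha256).digest()


def kdf(key_seed, n_bytes):
    """Expand KEY_SEED into n_bytes of key material:
    K_1 = H(m_expand | INT8(1), KEY_SEED)
    K_(i+1) = H(K_i | m_expand | INT8(i+1), KEY_SEED)"""
    out, k_i, i = b"", b"", 1
    while len(out) < n_bytes:
        k_i = H(k_i + m_expand + bytes([i]), key_seed)
        out += k_i
        i += 1
    return out[:n_bytes]


# 20 + 20 + 16 + 16 bytes for Df | Db | Kf | Kb (assumed current Tor sizes).
material = kdf(b"\x00" * 32, 72)
```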
Performance notes:
In Tor's current circuit creation handshake, the client does:
One RSA public-key encryption
A full DH handshake in Z_p
A short AES encryption
Five SHA1s for key expansion
And the server does:
One RSA private-key decryption
A full DH handshake in Z_p
A short AES decryption
Five SHA1s for key expansion
While in the revised handshake, the client does:
A full DH handshake
A public-half of a DH handshake
3 H operations for the handshake
3 H operations for the key expansion
and the server does:
A full DH handshake
A private-half of a DH handshake
3 H operations for the handshake
3 H operations for the key expansion
Integrating with the rest of Tor:
Add a new optional entry to router descriptors and microdescriptors:
"onion-key-ntor" SP Base64Key NL
where Base64Key is a base-64 encoded 32-byte value, with padding
omitted.
Add a new consensus method to tell servers to copy "onion-key-ntor"
entries from router descriptors to microdescriptors.
Add a "UseNTorHandshake" configuration option and a corresponding
consensus parameter to control whether clients use the ntor
handshake. If the configuration option is "auto", clients should
obey the consensus parameter. Have the configuration default be
"auto" and the consensus value initially be "0".
Reserve the handshake type [00 01] for this handshake in CREATE2 and
EXTEND2 cells.
Hi all,
I need to create specific circuit, specifying the exit route.
I've already found http://www.thesprawl.org/research/tor-control-protocol/
but this explains how to create circuits with nicknames. Unfortunately,
nicknames are not unique, and I think Tor has problems creating circuits
when the nickname of the exit route is "Unnamed". Sometimes the creation
of a new circuit fails:
extendcircuit 0 hackerspaceseoul
552 No such router "hackerspaceseoul"
extendcircuit 0 HackerSpaceSeoul
552 No such router "HackerSpaceSeoul"
extendcircuit 0 HackerspaceSeoul
552 No such router "HackerspaceSeoul"
Even though https://atlas.torproject.org/#search/hack told me they are online.
And sometimes this happens even with TorCtl.py:
DEBUG[Thu Nov 29 23:40:45 2012]:Extending circuit
[!!!] Error creating circuit: 552 No such router "Communist"
How can I fix this?
Thank you :)
Hi,
while trying to compile the latest git-checkout against openssl-1.0.2,
I've come across the following issues:
----
make[1]: Entering directory `/usr/local/src/tor-git'
CC src/common/tortls.o
cc1: warnings being treated as errors
In file included from /opt/openssl/include/openssl/ssl.h:1382,
from src/common/tortls.c:36:
/opt/openssl/include/openssl/srtp.h:138: error: redundant redeclaration of
‘SSL_get_selected_srtp_profile’
/opt/openssl/include/openssl/srtp.h:135: note: previous declaration of
‘SSL_get_selected_srtp_profile’ was here
make[1]: *** [src/common/tortls.o] Error 1
make[1]: Leaving directory `/usr/local/src/tor-git'
make: *** [all] Error 2
----
There is an open ticket[0] in the openssl bugtracker for this. While the
proper solution is to fix openssl/include/openssl/srtp.h, I wanted to
compile without -Werror. However, when adding CFLAGS="-Wno-error" during
./configure, -Werror is still added to the ./Makefile and overriding
-Wno-error. When adding CFLAGS="-Wno-error" during "make" all the other
CFLAGS are gone too. Thus I ended up removing -Werror from the Makefile
and tortls.o compiled.
While this is really an issue with openssl, I wanted to have this
documented, just in case anybody else tries the same. If someone knows of
a better workaround (i.e. compiling just tortls.c with -Wno-error and
everything else with -Werror), please share! :-)
A bit later, compilation stops again:
----
CCLD src/or/tor
src/common/libor-crypto.a(aes.o): In function `aes_crypt':
aes.c:(.text+0x860): undefined reference to `CRYPTO_ctr128_encrypt'
collect2: ld returned 1 exit status
make[1]: *** [src/or/tor] Error 1
make[1]: Leaving directory `/usr/local/src/tor-git'
make: *** [all] Error 2
----
Hm, this leaves me puzzled for now. CRYPTO_ctr128_encrypt is still
included in openssl-1.0.2 and src/common/aes.o seems to be built with
this function included as well, not sure why src/common/libor-crypto.a
complains now:
----
$ grep -r CRYPTO_ctr128_encrypt /opt/openssl/
/opt/openssl/include/openssl/modes.h:void CRYPTO_ctr128_encrypt(const unsigned char *in, unsigned char *out,
/opt/openssl/include/openssl/modes.h:void CRYPTO_ctr128_encrypt_ctr32(const unsigned char *in, unsigned char *out,
Binary file /opt/openssl/bin/openssl matches
Binary file /opt/openssl/lib/libcrypto.a matches
$ grep -r CRYPTO_ctr128_encrypt .
./src/common/aes.c: CRYPTO_ctr128_encrypt((const unsigned char *)input,
Binary file ./src/common/aes.o matches
Binary file ./src/common/libor-crypto.a matches
----
Why do I (try to) build against openssl-1.0.2? I'm on Debian/stable which
still ships openssl-0.9.8o and I wanted to get rid of this "use a more recent
OpenSSL" message during startup :-)
Otherwise, today's git-checkout of tor runs just fine when built against
openssl-0.9.8 (on powerpc) - yay!
Christian.
[0] http://rt.openssl.org/Ticket/Display.html?id=2724
--
BOFH excuse #330:
quantum decoherence
I was asked to meet with some people doing work in dangerous areas of
Latin America. In general, these people can get around and work with
Microsoft Windows competently. They are all fluent in English, Spanish,
and various dialects found in Latin American countries. Most of them
had Spanish-language versions of Windows 7 on their laptops.
# Getting TBB
During the discussion, an informal usability study happened. I thought
having a discussion about this may be better than simply opening a trac
ticket.
Here's roughly the scenario. I asked people to download Tor Browser from
our website. All found Tor's website via Bing. Interestingly, some
searched for "tor", others for "tor browser", and one for "tor project".
They were all using Internet Explorer and Bing because that's the
default for Windows 7. Thankfully, our website is the top result for all
three queries at Bing.
They all found the big purple "Download Tor" button on the index page.
Issue #1: Running TBB from the website. When clicking the orange
"Download Tor Browser Bundle" button, IE prompts them to "Run", "Save",
or "Cancel". All of them chose "Run".
Issue #1A: When choosing "Run", a prompt appears, "The publisher of
tor-browser-2.2.39-5_en-US.exe couldn't be verified. Are you sure you
want to run the program?". All of them hit "Yes" and ignored the
warning.
Issue #1B: When the download completed, they were prompted with the
7zip self-extractor giving them a path similar to this:
"C:\Users\tor\AppData\Local\Microsoft\Windows\Temporary Internet
Files\Content.IE5\T868H68M\". All of them pushed the "Extract" button
and let 7zip extract TBB into that temporary directory.
A few of them went to the Downloads folder to try to find TBB.
However, it's not there because it's extracted into a temporary folder.
This folder is not reachable by the user through File Explorer.
Issue #2: Downloading TBB. After some Q&A about what happened, I asked
them to "Save" rather than "Run". TBB then downloads. The user is
prompted with a warning box stating, "The publisher of
tor-browser-2.2.39-5_en-US.exe couldn't be verified." And the user is
left to choose between "Run" and "View downloads". When clicking "Run"
the user is prompted with the 7-zip self-extractor prompt and
"C:\Users\tor\Downloads\" is the default path. All of them hit the
"Extract" button. None of them were sure what just happened nor why TBB
needs to be extracted.
Issue #2A: The self-extract completes and the user is left looking at
their IE window with the TBB download page. Three of them went to their
Downloads folder to find Tor. The other few waited for something to
happen. When I asked the waiting few why they were waiting, they said
because they "ran TBB" and expected the extraction to automatically
start TBB for them.
Issue #3: In the Downloads folder there are two things called "tor
browser", one is an application the other is a file folder. See
https://people.torproject.org/~andrew/2012-11-28-tbb-usability-test/2012-11…
for an example. Most of the people had hundreds of files in the
Download folder, so it wasn't as clear as this example screenshot.
Some people wanted to run the application, because in their mind, you
run an application.
When asked to go into the Tor Browser folder, they all found "Start Tor
Browser" and ran it (some double-left click, some right click and
choose "Open"). See
https://people.torproject.org/~andrew/2012-11-28-tbb-usability-test/2012-11…
for what it looks like by default.
Issue #4: Once TBB was started, the users would alt-tab between
applications or choose various apps in their task bar at the bottom of
the screen. They kept clicking the onion icon because they thought it
was TBB, when it brings up the Vidalia control panel. This is what it
looks like,
https://people.torproject.org/~andrew/2012-11-28-tbb-usability-test/2012-11…
Issue #5: No one knew what "Startpage" was nor why it was in the top
bar. Just like IE, they all wanted to search from the awesome bar by
default. This does work, and they can search via startpage.com via the
awesome bar.
From here on out, the normal TBB issues apply, as demonstrated by
Greg's HotPETS paper,
https://people.torproject.org/~andrew/hotpets12-1-usability.pdf
# Feedback
I asked how we can improve this entire process. The consensus is that
TBB needs to be a single application people can just run and get the
browser going. The extraction process was confusing and was sometimes
called an installation process. They felt that "running" it from the
tor download site was fine, so long as everything just worked and a
browser window popped up.
They also felt that the Vidalia control panel was unnecessary. It just
confused them. The only thing they felt was valuable was the Network
Map. A caveat is that we were on a US network which didn't censor. The
Message Log and bridge/obfsproxy functionality wasn't needed in this
case. I suspect it will be when they try this in Latin America.
--
Andrew
http://tpo.is/contact
pgp 0x6B4D6475
Greetings,
I'm attaching a proposal for adding authentication to the Extended
ORPort. The Extended ORPort is a yet-unimplemented feature that
allows pluggable transport proxies to communicate with Tor; it's a
prerequisite for pluggable transport statistics, rate limiting, and
other cool things.
Filename: XXX-ext-orport-auth.txt
Title: Tor Extended ORPort Authentication
Author: George Kadianakis
Created: 28-11-2012
Status: Open
Target: 0.2.5.x
1. Overview
This proposal defines a scheme for Tor components to authenticate to
each other using a shared-secret.
2. Motivation
Proposal 196 introduced new ways for pluggable transport proxies to
communicate with Tor. The communication happens using TCP in the same
fashion that controllers speak to the ControlPort.
To defend against cross-protocol attacks [0] on the transport ports,
we need to define an authentication scheme that will restrict passage
to unknown clients.
Tor's ControlPort uses an authentication scheme called safe-cookie
authentication [1]. Unfortunately, the design of the safe-cookie
authentication was influenced by the protocol structure of the
ControlPort and the need for backwards compatibility of the
cookie-file and can't be easily reused in other use cases.
3. Goals
The general goal of Extended ORPort authentication is to authenticate
the client based on a shared-secret that only authorized clients
should know.
Furthermore, its implementation should be flexible and easy to reuse,
so that it can be used as the authentication mechanism in front of
future Tor helper ports (for example, in proposal 199).
Finally, the protocol should be able to support multiple
authentication schemes, each with different goals.
4. Protocol Specification
4.1. Initial handshake
When a client connects to the Extended ORPort, the server sends:
AuthTypes [variable]
EndAuthTypes [1 octet]
Where,
+ AuthTypes are the authentication schemes that the server supports
for this session. They are multiple concatenated 1-octet values that
take values from 1 to 255.
+ EndAuthTypes is the special value 0.
The client reads the list of supported authentication schemes and
replies with the one it prefers to use:
AuthType [1 octet]
Where,
+ AuthType is the authentication scheme that the client wants to use
for this session. A valid authentication type takes values from 1 to
255. A value of 0 means that the client did not like the
authentication types offered by the server.
If the client sent an AuthType of value 0, or an AuthType that the
server does not support, the server MUST close the connection.
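The negotiation above can be sketched as follows (helper names are illustrative, not from any Tor codebase):

```python
def parse_auth_types(greeting: bytes):
    """Parse the server's greeting: 1-octet AuthType values (1..255)
    terminated by the special EndAuthTypes value 0."""
    types = []
    for octet in greeting:
        if octet == 0:
            return types
        types.append(octet)
    raise ValueError("greeting missing EndAuthTypes terminator")


def choose_auth_type(offered, supported):
    """Pick the first offered scheme we support, or 0 ("none acceptable")."""
    for t in offered:
        if t in supported:
            return t
    return 0


# Example: server offers SAFE_COOKIE (1) plus a hypothetical type 2.
offered = parse_auth_types(bytes([1, 2, 0]))
print(choose_auth_type(offered, {1}))  # -> 1
```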
4.2. Authentication types
4.2.1 SAFE_COOKIE handshake
Authentication type 1 is called SAFE_COOKIE.
4.2.1.1. Motivation and goals
The SAFE_COOKIE scheme is pretty-much identical to the authentication
scheme that was introduced for the ControlPort in proposal 193.
An additional goal of the SAFE_COOKIE authentication scheme (apart
from the goals of section 2), is that it should not leak the contents
of the cookie-file to untrusted parties.
Specifically, the SAFE_COOKIE protocol will never leak the actual
contents of the file. Instead, it uses a challenge-response protocol
(similar to the HTTP digest authentication of RFC2617) to ensure that
both parties know the cookie without leaking it.
4.2.1.2. Cookie-file format
The format of the cookie-file is:
StaticHeader [32 octets]
Cookie [32 octets]
Where,
+ StaticHeader is the following string:
"! Extended ORPort Auth Cookie !\x0a"
+ Cookie is the shared-secret. During the SAFE_COOKIE protocol, the
cookie is called CookieString.
Extended ORPort clients MUST make sure that the StaticHeader is
present in the cookie file, before proceeding with the
authentication protocol.
Details on how Tor locates the cookie file can be found in section 5
of proposal 196. Details on how transport proxies locate the cookie
file can be found in pt-spec.txt.
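A minimal validation sketch for this file format (file I/O is left out; the function takes the raw 64-octet contents):

```python
STATIC_HEADER = b"! Extended ORPort Auth Cookie !\x0a"  # 32 octets


def read_cookie(data: bytes) -> bytes:
    """Validate a 64-octet cookie file and return the 32-octet cookie,
    checking the StaticHeader as the spec requires."""
    if len(data) != 64:
        raise ValueError("cookie file must be exactly 64 octets")
    if not data.startswith(STATIC_HEADER):
        raise ValueError("missing Extended ORPort cookie header")
    return data[32:]
```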
4.2.1.3. Protocol specification
A client that performs the SAFE_COOKIE handshake begins by sending:
ClientNonce [32 octets]
Where,
+ ClientNonce is 32 octets of random data.
Then, the server replies with:
ServerHash [32 octets]
ServerNonce [32 octets]
Where,
+ ServerHash is computed as:
HMAC-SHA256(CookieString,
"ExtORPort authentication server-to-client hash" | ClientNonce | ServerNonce)
+ ServerNonce is 32 random octets.
Upon receiving that data, the client computes ServerHash itself and
validates it against the ServerHash provided by the server.
If the server-provided ServerHash is invalid, the client MUST
terminate the connection.
Otherwise the client replies with:
ClientHash [32 octets]
Where,
+ ClientHash is computed as:
HMAC-SHA256(CookieString,
"ExtORPort authentication client-to-server hash" | ClientNonce | ServerNonce)
Upon receiving that data, the server computes ClientHash itself and
validates it against the ClientHash provided by the client.
Finally, the server replies with:
Status [1 octet]
Where,
+ Status is 1 if the authentication was successful. If the
authentication failed, Status is 0.
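The full SAFE_COOKIE exchange above can be sketched with Python's hmac module (the HMAC prefixes are taken verbatim from this section; everything else is an illustrative simulation of both sides in one process):

```python
import hashlib
import hmac
import os

S2C = b"ExtORPort authentication server-to-client hash"
C2S = b"ExtORPort authentication client-to-server hash"


def auth_hash(cookie, label, client_nonce, server_nonce):
    # HMAC-SHA256 keyed by the cookie, over label | ClientNonce | ServerNonce.
    return hmac.new(cookie, label + client_nonce + server_nonce,
                    hashlib.sha256).digest()


# One full exchange, with both sides sharing the same 32-octet cookie:
cookie = os.urandom(32)
client_nonce = os.urandom(32)   # client -> server
server_nonce = os.urandom(32)   # server -> client, alongside ServerHash
server_hash = auth_hash(cookie, S2C, client_nonce, server_nonce)

# Client verifies ServerHash, then answers with ClientHash:
assert hmac.compare_digest(
    server_hash, auth_hash(cookie, S2C, client_nonce, server_nonce))
client_hash = auth_hash(cookie, C2S, client_nonce, server_nonce)

# Server verifies ClientHash and replies with Status = 1 on success:
status = 1 if hmac.compare_digest(
    client_hash, auth_hash(cookie, C2S, client_nonce, server_nonce)) else 0
assert status == 1
```

The challenge-response structure means the cookie itself never crosses the wire, only HMACs bound to both parties' fresh nonces.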
4.3. Post-authentication
After completing the Extended ORPort authentication successfully, the
two parties should proceed with the Extended ORPort protocol on the
same TCP connection.
[0]:
http://archives.seul.org/or/announce/Sep-2007/msg00000.html
[1]:
https://gitweb.torproject.org/torspec.git/blob/79f488c32c43562522e5592f2c19…