commit 59e1ee05dd8f20b5e4cdc379928dc160a2faf85a
Author: Karsten Loesing <karsten.loesing(a)gmx.net>
Date: Wed Nov 20 09:04:30 2013 +0100
Update Nick Hopper's botnet report.
Updating from git://bobafett.dtc.umn.edu:15151/botnet-techreport.git,
commit 7d53747366700921e9a1bc124b3cc9b4078ececd, with minor format edits.
---
2013/botnet-tr/.gitignore | 1 +
2013/botnet-tr/botnet-tr.tex | 16 ++--
2013/botnet-tr/conclusion.tex | 27 +------
2013/botnet-tr/intro.tex | 93 +++++++++++++++++-------
2013/botnet-tr/isolate.tex | 44 ++++++-----
2013/botnet-tr/offensive.tex | 11 +--
2013/botnet-tr/results-circs-buildtime-cdf.pdf | Bin 0 -> 43937 bytes
2013/botnet-tr/results-tput-read.pdf | Bin 0 -> 16456 bytes
2013/botnet-tr/results-ttlb-bulk.pdf | Bin 0 -> 19360 bytes
2013/botnet-tr/reuse.tex | 69 ++++++++++++++++--
2013/botnet-tr/thispaper.bib | 11 +++
2013/botnet-tr/throttle.tex | 49 ++++++++++++-
12 files changed, 219 insertions(+), 102 deletions(-)
diff --git a/2013/botnet-tr/.gitignore b/2013/botnet-tr/.gitignore
index 717b741..572f94f 100644
--- a/2013/botnet-tr/.gitignore
+++ b/2013/botnet-tr/.gitignore
@@ -1,2 +1,3 @@
botnet-tr.pdf
+botnet-tr-2013-11-20.pdf
diff --git a/2013/botnet-tr/botnet-tr.tex b/2013/botnet-tr/botnet-tr.tex
index d78dd1c..83aa3bd 100644
--- a/2013/botnet-tr/botnet-tr.tex
+++ b/2013/botnet-tr/botnet-tr.tex
@@ -5,10 +5,10 @@
\usepackage{subcaption}
\usepackage{graphicx}
-\title{Draft: Protecting Tor from botnet abuse in the long term}
-\author{Nicholas Hopper and others}
-\reportid{2013-XX-YYY}
-\date{\today}
+\title{Protecting Tor from botnet abuse in the long term}
+\author{Nicholas Hopper}
+\reportid{2013-11-001}
+\date{November 20, 2013}
\begin{document}
\maketitle
@@ -20,12 +20,8 @@
\input{eval}
\input{conclusion}
\section{Acknowledgements}
-The following ideas or analyses were contributed to by others:
-\begin{itemize}
-\item Section~\ref{sec:cost}: Mike Perry, Roger Dingledine
-\item Section~\ref{sec:guard}: Mike Perry, Ian Goldberg
-\item Section~\ref{sec:fail}: Ian Goldberg
-\end{itemize}
+Roger Dingledine, Ian Goldberg, Rob Jansen, Mike Perry, and Max Schuchard all contributed
+significantly to the ideas, analysis, and discussion in this tech report.
\bibliography{anonbib,thispaper}
diff --git a/2013/botnet-tr/conclusion.tex b/2013/botnet-tr/conclusion.tex
index 978efb1..05614d4 100644
--- a/2013/botnet-tr/conclusion.tex
+++ b/2013/botnet-tr/conclusion.tex
@@ -1,28 +1,7 @@
-\section{Combining Approaches} \label{sec:combo}
-
-Several of the approaches considered so far could be used in
-combination, mitigating individual shortcomings:
-\begin{itemize}
-\item Guards could throttle {\sc extend} processing aggressively
- unless the client solves a CAPTCHA, then relaxing to a client-level
- throttle rate. This could reduce the impact caused by botnet
- clients trying to rendezvous with a throttled server, and if the
- CAPTCHAs are served by guard nodes, would reduce the vulnerability
- to automated solvers.
-
-\item Circuits created using {\sc create3} cells could process
- hidden-service related {\sc relay\_*} cells that come with a CAPTCHA
- proof, so that only completely (client and server) headless hidden services are
- de-prioritized by botnet traffic.
-
-\item Reusing failed partial circuits can be combined with any other approach
- to reduce the load caused by a temporary overload of onion-skins.
-\end{itemize}
-
\section{Conclusions}
-As of this draft, further evaluation is needed before recommending any
-approach. In the medium-term, de-listing the hidden service
-descriptor for the botnet may provide the most immediate relief. In
+As of this writing, further evaluation is needed before recommending any
+approach. In the medium-term, throttling Tor versions prior to
+0.2.4.17-rc may provide the most immediate relief. In
the long term, partial circuit reuse seems safe, pending complete
analysis of the impact on anonymity; further evaluation is needed to
determine the effectiveness of other measures.
diff --git a/2013/botnet-tr/intro.tex b/2013/botnet-tr/intro.tex
index 1f9cf86..d824b9b 100644
--- a/2013/botnet-tr/intro.tex
+++ b/2013/botnet-tr/intro.tex
@@ -6,8 +6,7 @@ control (C\&C) as a Tor Hidden Service. Figure~\ref{fig:users} shows that esti
daily clients increased from under 1 million to nearly 6 million in
three weeks. Figure~\ref{fig:torperf} shows the effects on
performance: measured downloading times for a 50 KiB file doubled,
-from 1.5 seconds to 3.0 seconds, and measured download failure and
-timeout rates increased from roughly 0\% to 1\%.
+from 1.5 seconds to 3.0 seconds.
\begin{figure}[!h]
\vspace{-12pt}
@@ -37,15 +36,17 @@ timeout rates increased from roughly 0\% to 1\%.
However, the amount of traffic being carried by the network did not
change dramatically, as seen in Figure~\ref{fig:bandwidth}. The
primary cause of the problems seems to be the increased processing
-load on Tor relays caused by the large increase in circuit building.
-When a tor client - an {\em Onion Proxy} or OP - connects to the network, it
-sends a {\sc create} cell to a guard node, which contains the first
-message in a diffie-hellman key exchange, called an ``onion skin'';
-the node receiving the create cell computes the shared key and replies
-with the second message. The OP next sends an onion skin in a {\sc
- extend} cell to the end of the circuit, which extracts the onion
-skin and sends it in a {\sc create} cell to the next hop, until all
-three hops have exchanged circuits.
+load on Tor relays caused by the large increase in key exchanges required to build anonymous encrypted tunnels, or
+{\em circuits}.
+When a Tor client -- an {\em Onion Proxy} or OP -- connects to the
+network, it sends a
+{\sc create} cell to a Tor node called a {\em guard}; this cell contains
+the first message $g^a$ of a Diffie-Hellman key exchange, called an ``onion
+skin''. The node receiving the create cell computes the shared key
+$g^{ab}$ and replies with the second message $g^b$, creating a $1$-hop
+circuit. After this, the client iteratively sends onion skins in {\sc extend} cells to the relay at
+the end of the partial circuit, which extracts each onion skin and sends it in a {\sc create} cell to
+the next relay, until all three hops have exchanged keys.
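+
+As a rough illustration (not Tor's actual implementation), the
+client-side build loop can be sketched as follows; the cell transport
+and Diffie-Hellman helpers are placeholders for the operations
+described above, and each iteration costs the receiving relay one
+onion-skin decryption:
+\begin{verbatim}
+# Sketch of the telescoping build loop; send_cell, recv_cell and the
+# Diffie-Hellman helpers are placeholders, not Tor's real interfaces.
+def build_circuit(path, send_cell, recv_cell, dh_start, dh_finish):
+    keys = []
+    guard = path[0]
+    skin, state = dh_start()              # onion skin g^a for the guard
+    send_cell(guard, "CREATE", skin)
+    keys.append(dh_finish(state, recv_cell()))  # guard replies with g^b
+    for hop in path[1:]:
+        skin, state = dh_start()          # fresh onion skin per hop
+        # The EXTEND cell travels down the partial circuit; the current
+        # endpoint unwraps it and sends a CREATE cell to the next hop.
+        send_cell(guard, "EXTEND", (hop, skin))
+        keys.append(dh_finish(state, recv_cell()))
+    return keys                           # one shared key per hop
+\end{verbatim}
+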
\begin{itemize}
\item Extending a circuit -- decrypting an ``onion skin'' and
participating in a Diffie-Hellman key exchange -- is sufficiently
@@ -53,11 +54,9 @@ three hops have exchanged circuits.
total bandwidth that high-weight relays can handle in onion-skins is
significantly lower than the network bandwidth.
-\item The use of Hidden Services causes at least three circuit
- building events every time a botnet client connects: one circuit
- establishes a ``rendezvous point'', one passes this node to the hidden
- service through an ``introduction point'', and finally the hidden
- service must build a circuit to the rendezvous point as well.
+\item The hidden service protocol -- explained in
+ section~\ref{sec:background} -- causes at least three circuits to be
+ built every time a bot connects.
\item When onion skins exceed the processing capacity of an OR, they
wait in decryption queues, causing circuit building latencies to
@@ -81,13 +80,16 @@ three hops have exchanged circuits.
\end{figure}
In response to this, the Tor Project modified release candidate
0.2.4.17-rc to prioritize processing of onionskins using the more
-efficient {\sf ntor} key exchange protocol. Adoption of this release
-has helped the situation: as Figure~\ref{fig:torperf} shows,
+efficient {\sf ntor}~\cite{ntor} key exchange protocol. Adoption of this release
+helped the situation: as Figure~\ref{fig:torperf} shows,
measured 50 KiB download times as of late September decreased to
roughly 2.0 seconds. Figure~\ref{fig:fail} shows that failure rates for circuit
extensions using Tor version 0.2.4.17-rc range between 5\% and 15\%, while those for circuit
-extensions using the stable release, version 0.2.3.25, range between
-5\% and 30\%.
+extensions using the stable release, version 0.2.3.25, ranged between
+5\% and 30\%. By November 2013, further efforts by anti-malware teams from
+companies including Microsoft to find and remove the infection have
+mitigated the immediate threat, though the unmanaged hosts that remain
+infected could still revive the botnet.
In this document, we consider longer-term strategies to ease the load
on the network and reduce the impact on clients. Full evaluation of
@@ -110,7 +112,7 @@ deployed in response to such methods, where the behavior of both the
botnet and the tor software can change adaptively to circumvent
mitigation mechanisms. We note that some attacks are out of the
scope of this document, in particular we do not consider a botnet that
-simply communicates with non-hidden servers through tor, since such an
+simply communicates with non-hidden servers through Tor, since such an
attack must contain traditional network addresses, and we do not
consider a botnet that simply seeks to conduct a denial of service
attack on Tor by flooding the network with traffic in excess of its capacity.
@@ -119,14 +121,49 @@ The remainder of this manuscript describes the major categories of
medium- and long-term responses that Tor could consider, along with
the technical challenges that each would pose to the research
community. Section~\ref{sec:bot} considers medium-term strategies
-aimed at eliminating the current botnet threat to Tor.
-Section~\ref{sec:throttle} considers longer-term mechanisms to
+aimed at eliminating a current botnet threat to Tor.
+Section~\ref{sec:throttle} considers longer-term mechanisms to
limit the rate of circuit-building requests by any botnet.
-Se which ction~\ref{sec:fail} describes mechanisms to reduce the impact of
-circuit-building timeouts. Section~\ref{sec:iso} considers the idea
-of isolating hidden-service from regular Tor client traffic, and
-section~\ref{sec:combo} considers the possible effects of combining
-the various approaches.
+Section~\ref{sec:fail} describes mechanisms to reduce the load from
+ordinary clients. Section~\ref{sec:iso} considers the idea
+of isolating hidden-service from regular Tor client traffic.
+
+\section{Background: Tor Hidden Services} \label{sec:background}
+
+The Tor network provides a mechanism for clients to anonymously
+provide services (e.g., websites) that can be accessed by other users
+through Tor. We briefly review the protocol for this mechanism:
+\begin{enumerate}
+\item The hidden service (HS) picks a public ``{\em identity key}'' $PK_S$ and
+ associated secret key $SK_S$. The HS then computes an ``{\em onion
+ identifier}'' $o_S = H(PK_S)$ using a cryptographic hash function
+  $H$. Currently, $H$ is the output of SHA-1, truncated to 80 bits.
+  This 10-byte identifier is base32-encoded to
+  produce a 16-character \url{.onion} address that Tor users can use to
+  connect to the HS, such as \url{3g2upl4pq6kufc4m.onion}.
+\item The HS constructs circuits terminating in at least
+ three different relays, and requests these relays to act as its {\em
+ introduction points} (IPs).
+\item The HS then produces a ``{\em descriptor},'' signed using
+  $SK_S$, that lists $PK_S$ and its IPs. This
+ descriptor is published through a distributed hash ring of Tor
+ relays, using $o_S$ and a time period $\tau$ as an index.
+\item A client OP connects to the HS by retrieving
+ the descriptor using $o_S$ and $\tau$, and
+ building two circuits: one circuit terminates at an IP and
+ the other terminates at a randomly-selected relay
+ referred to as the {\em rendezvous point} (RP). The client asks the
+ IP to send the identity of the RP to the HS.
+\item The HS then builds a circuit to the RP, which
+ connects the client and HS.
+\end{enumerate}
+Since lookups to the distributed hash ring are performed through
+circuits as well, and each descriptor has three redundant copies, a
+client connecting to a hidden service could require building up to 6
+circuits; to reduce this load, clients cache descriptors and reuse rendezvous
+circuits any time a request is made less than ten minutes after the
+previous connection.
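+
+As a concrete illustration of step~1 above, the following sketch (in
+Python, treating the serialized identity key as an opaque byte string,
+since the exact encoding is not important here) computes the onion
+identifier and address:
+\begin{verbatim}
+import base64, hashlib
+
+def onion_address(identity_key_bytes):
+    # o_S = H(PK_S): SHA-1 output truncated to 80 bits (10 bytes)
+    o_s = hashlib.sha1(identity_key_bytes).digest()[:10]
+    # base32-encoding 10 bytes yields exactly 16 characters
+    return base64.b32encode(o_s).decode("ascii").lower() + ".onion"
+\end{verbatim}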
+
%%% Local Variables:
%%% mode: latex
diff --git a/2013/botnet-tr/isolate.tex b/2013/botnet-tr/isolate.tex
index e09f684..abb713f 100644
--- a/2013/botnet-tr/isolate.tex
+++ b/2013/botnet-tr/isolate.tex
@@ -1,4 +1,4 @@
-\section{Isolate or De-prioritize Hidden Service circuits}\label{sec:iso}
+\section{Can we isolate Hidden Service circuits?}\label{sec:iso}
Another approach to protect the regular users of the Tor network from
resource depletion by a hidden-service botnet would be to isolate hidden
@@ -6,28 +6,26 @@ service onion-skin processing from ordinary processing. By
introducing a mechanism that allows relays to recognize that an {\sc
extend} or {\sc create} cell is likely to carry hidden service
traffic, we could provide a means to protect the rest of the system
-from the effects of this traffic, by scheduling priority,
-differentiated services, or simple resource isolation.
+from the effects of this traffic, by scheduling priority or simple isolation.
An example of how this might work in practice is to introduce new {\sc
- extend3}/{\sc create3} cell types with the rule that a circuit that
-is created with an {\sc create3} cell will silently drop any of the
-following cell types: {\sc extend}, {\sc extend2}, or any of the {\sc
- relay\_*} cell types (32--40) associated with hidden services. If
-relays also silently drop {\sc extend3} cells on circuits created with
-{\sc create} or {\sc create2} cells, then {\sc create3} circuits are
-guaranteed not to carry hidden service traffic. Updated OPs would
-then create all circuits with {\sc create3}/{\sc extend3} unless
-connecting to a hidden service. When a sufficient number of clients
-and relays update their Tor version, a consensus flag could be used to
-signal relays to begin isolating processing of {\sc create} and {\sc
- create2} cells. For example, these cells might only be processed in
-the last 20ms of each 100ms period, leaving 80\% of processing
-capacity available for regular traffic. The flag could be triggered
-when hidden service circuits exceed a significant fraction of all
-circuits in the network.\footnote{Detecting this condition in a
- privacy-preserving manner represents another technical challenge
- requiring further research.}
+ nohs-extend}/{\sc nohs-create} cell types with the rule that a
+circuit that is created with a {\sc nohs-create} cell will silently
+drop a normal {\sc extend} cell, or any of the cell types associated
+with hidden services. If relays also silently drop {\sc nohs-extend}
+cells on circuits created with ordinary {\sc create} cells, then {\sc
+ nohs-create} circuits are guaranteed not to carry hidden service
+traffic. Updated clients would then create all circuits with {\sc
+ nohs-create} unless connecting to a hidden service. When a
+sufficient number of clients and relays update their Tor version, a
+consensus flag could be used to signal relays to begin isolating
+processing of ordinary {\sc create} cells. For example, these cells
+might only be processed in the last 20ms of each 100ms period, leaving
+80\% of processing capacity available for regular traffic. The flag
+could be triggered when hidden service circuits exceed a significant
+fraction of all circuits in the network.\footnote{Detecting this
+ condition in a privacy-preserving manner represents another
+ technical challenge requiring further research.}
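+
+A minimal sketch of the time-sliced processing described above,
+assuming onion skins have already been sorted into two queues by
+circuit type (the queue names and the 80/20 split are illustrative):
+\begin{verbatim}
+import time
+from collections import deque
+
+nohs_queue = deque()      # onion skins from nohs-create circuits
+ordinary_queue = deque()  # onion skins from ordinary create cells
+
+def next_onionskin():
+    # Serve ordinary create cells only in the last 20ms of each 100ms
+    # period, reserving at least 80% of capacity for nohs circuits.
+    phase_ms = (time.monotonic() * 1000.0) % 100.0
+    if phase_ms >= 80.0 and ordinary_queue:
+        return ordinary_queue.popleft()
+    if nohs_queue:
+        return nohs_queue.popleft()
+    return None
+\end{verbatim}
+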
This solution protects the network and typical users from a massive
botnet hidden service, but would, unfortunately, intensify the effect
@@ -38,7 +36,7 @@ stress the Tor hidden service ecosystem, while providing stronger
protection against botnet clients flooding the network.
One privacy concern related to this approach is that as the network
-upgrades to versions of Tor supporting {\sc create3}/{\sc extend3},
+upgrades to versions of Tor supporting {\sc nohs-create},
identification of hidden-service traffic approaches deterministic
certainty. By contrast, current hidden service circuits follow
traffic patterns that allow them to be identified with high
@@ -46,7 +44,7 @@ statistical confidence~\cite{oakland2013-trawling} only. Because
(excluding botnet traffic) the base rates of hidden service traffic
compared to all other traffic are low, this will also decrease the privacy
of hidden service users. One potential mitigation mechanism would be
-to have clients only use {\sc create3}/{\sc extend3} when the
+to have clients only use {\sc nohs-create} when the
consensus flag for hidden service isolation is activated, which would
indicate that hidden service clients would already have a large anonymity
set.
diff --git a/2013/botnet-tr/offensive.tex b/2013/botnet-tr/offensive.tex
index a6007a1..97e2f1a 100644
--- a/2013/botnet-tr/offensive.tex
+++ b/2013/botnet-tr/offensive.tex
@@ -14,18 +14,19 @@ requests is an order of magnitude larger than all other hidden
services combined. Once the descriptor has been discovered,
blacklisting the HS public key from the Hidden Service directory will
prevent attempts to reach the blacklisted \url{.onion} address by
-clients. Depending on the sophistication of the bot software,
-blacklisting the descriptor from the Hidden Service Directory could
-prevent the abuse altogether.
+clients.
Unfortunately, this approach is both technically and philosophically
-problematic. An adaptive or forward-looking botmaster can defeat
+problematic. Existing Tor client software does not deal gracefully
+with failed descriptor lookups, leading to a situation where lookup
+failures increase the circuit load on the network. Furthermore,
+an adaptive or forward-looking botmaster can defeat
identification through volume by multiplexing across multiple
\url{.onion} addresses. Blacklisting can be defeated using multiple
keys or domain generation algorithms -- in the hidden service case,
generating a sequence of public keys using a fixed-seed pseudorandom
sequence. Philosophically, a mechanism to blacklist certain hidden
-service keys could potentially be abused for censorship.
+service keys could potentially be abused for censorship.
\subsection{Deanonymize the server}
Assuming the botnet is prepared for blacklisting, the Tor Project
diff --git a/2013/botnet-tr/results-circs-buildtime-cdf.pdf b/2013/botnet-tr/results-circs-buildtime-cdf.pdf
new file mode 100644
index 0000000..19a6607
Binary files /dev/null and b/2013/botnet-tr/results-circs-buildtime-cdf.pdf differ
diff --git a/2013/botnet-tr/results-tput-read.pdf b/2013/botnet-tr/results-tput-read.pdf
new file mode 100644
index 0000000..6c30e87
Binary files /dev/null and b/2013/botnet-tr/results-tput-read.pdf differ
diff --git a/2013/botnet-tr/results-ttlb-bulk.pdf b/2013/botnet-tr/results-ttlb-bulk.pdf
new file mode 100644
index 0000000..09d731c
Binary files /dev/null and b/2013/botnet-tr/results-ttlb-bulk.pdf differ
diff --git a/2013/botnet-tr/reuse.tex b/2013/botnet-tr/reuse.tex
index d4824bc..05dd11e 100644
--- a/2013/botnet-tr/reuse.tex
+++ b/2013/botnet-tr/reuse.tex
@@ -1,4 +1,12 @@
-\section{Reusing failed partial circuits} \label{sec:fail}
+\section{Client-side circuit-building adjustments} \label{sec:fail}
+
+Although an adaptive botnet could always modify the Tor client code,
+the regular user base still represents a large fraction of the load on
+the network, so another set of potential solutions would be to focus
+on modifying the behavior of ordinary clients to reduce the
+circuit-building load.
+
+\subsection{Can we reuse failed partial circuits?}
Part of the problem caused by the heavy circuit-building load is that
when a circuit times out, the entire circuit is destroyed. This means
@@ -16,16 +24,17 @@ X_2 &=& pX_0 & & & + 1\ ,
\end{array}
\end{displaymath}
where $X_i$ is the expected number of cells to complete a partial
-circuit with $i$ hops. This gives us
-$$X_0 = - \frac{p^2 -3p + 3}{(p-1)^3}\ .$$
+circuit with $i$ hops. This gives us $X_0 = \frac{p^2 -3p +
+ 3}{(1-p)^3}\ .$
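+
+As a quick sanity check of this closed form, the following sketch
+simulates the ``destroy the whole circuit on failure'' behavior under
+the same model of a uniform, independent failure probability $p$ per
+extension:
+\begin{verbatim}
+import random
+
+def simulated_load(p, trials=100000):
+    # average onion skins needed to finish a 3-hop circuit when any
+    # failed extension destroys the whole circuit
+    total = 0
+    for _ in range(trials):
+        hops = 0
+        while hops < 3:
+            total += 1                   # one onion skin per attempt
+            hops = 0 if random.random() < p else hops + 1
+    return total / trials
+
+def closed_form(p):
+    return (p * p - 3 * p + 3) / (1 - p) ** 3
+
+for p in (0.10, 0.15, 0.25):
+    print(p, round(simulated_load(p), 2), round(closed_form(p), 2))
+    # e.g. p = 0.25 requires roughly 5.5 onion skins instead of 3
+\end{verbatim}
+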
\begin{figure}
\begin{center}
-\includegraphics[width=0.6\textwidth]{graphs/loadvfail}
+\includegraphics[width=0.5\textwidth]{graphs/loadvfail}
+\vspace{-10pt}
\caption{Expected onion-skin load per circuit created, for failure rate $p$}\label{fig:plot}
\end{center}
+\vspace{-10pt}
\end{figure}
-
Conceptually, we can reduce this load by re-using a partially-built circuit,
e.g. when a timeout occurs, we truncate the circuit and attempt to
extend from the current endpoint. In this case, the expected number
@@ -36,11 +45,13 @@ causes a substantial reduction in load for the network.
Figure~\ref{fig:fail} shows typical failure rates for a stable (TAP)
and release candidate (ntor) roughly one month after the beginning of
the botnet event; we can see that at the observed failure rates
-ranging from 10\%-25\%, reusing partial circuits would reduce the load on the network by
-10-30\%.
+ranging from 10\%-25\%, reusing partial circuits would reduce the load
+on the network by 10-30\%.
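+
+To make this estimate concrete, assume (as a simplification) that with
+reuse each hop is simply retried until it succeeds, so that the
+expected load is $3/(1-p)$ onion skins; the relative savings at the
+observed failure rates are then roughly:
+\begin{verbatim}
+def load_without_reuse(p):
+    return (p * p - 3 * p + 3) / (1 - p) ** 3
+
+def load_with_reuse(p):      # simplified retry-per-hop model
+    return 3.0 / (1.0 - p)
+
+for p in (0.10, 0.15, 0.20, 0.25):
+    saving = 1.0 - load_with_reuse(p) / load_without_reuse(p)
+    print(p, round(saving, 2))
+    # about 0.10 at p = 0.10, rising to about 0.27 at p = 0.25
+\end{verbatim}
+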
Of course, this model ignores the fact that failure probabilities are
-neither static nor uniform across the entire Tor network. Reducing
+neither static nor uniform across the entire Tor network, and the fact
+that many clients use ``create fast'' cells to exchange a first-hop key
+without using Diffie-Hellman key exchange. Reducing
the load introduced by failures will also reduce the rate of circuit
failures overall, but since CPU capacities vary widely across the Tor
network (and load balancing is by the essentially uncorrelated
@@ -51,6 +62,48 @@ selective denial of service attacks~\cite{ccs07-doa}, although such
attacks typically only become noticeably effective in situations where
we would already consider Tor to be compromised.
+\subsection{Can we build circuits less often?}
+
+One reason for the relatively mild impact of the Mevade incident is
+that, while hidden services initially build more circuits than
+ordinary downloads, Tor is configured to aggressively avoid repeating
+this process through the use of ``circuit dirtiness'' defaults. When a
+Tor client builds an ordinary circuit, it is marked as ``clean'' until
+it is first used for traffic, at which point it becomes ``dirty.''
+After a configurable amount of time (by default, 10 minutes) a
+``dirty'' circuit cannot be used for new traffic and will be closed
+after existing streams close. In contrast, the ``dirty timer'' for a
+hidden service circuit restarts every time it is used, so that as
+long as a hidden service is visited once every 10 minutes, there is no
+need to build a new circuit.
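+
+A sketch of the dirtiness rule just described (the class and attribute
+names are illustrative; the 10-minute default corresponds to Tor's
+\texttt{MaxCircuitDirtiness} option):
+\begin{verbatim}
+import time
+from dataclasses import dataclass
+from typing import Optional
+
+MAX_DIRTINESS = 600.0   # seconds
+
+@dataclass
+class Circuit:
+    hidden_service: bool = False
+    dirty_since: Optional[float] = None  # set at first use
+
+def mark_used(circ):
+    # Ordinary circuits keep their first-use timestamp; hidden-service
+    # circuits refresh the timer on every use.
+    if circ.dirty_since is None or circ.hidden_service:
+        circ.dirty_since = time.time()
+
+def usable_for_new_traffic(circ):
+    if circ.dirty_since is None:
+        return True
+    return time.time() - circ.dirty_since <= MAX_DIRTINESS
+\end{verbatim}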
+
+Thus, another set of technical approaches to dealing with circuit
+stress would be to investigate ways to reduce the number of circuits
+that Tor clients build:
+\begin{itemize}
+\item Network consensus documents could include a ``recommended max
+ dirtiness'' parameter that would adjust the lifetime of ordinary
+ circuits, and a separate, potentially longer ``max dirtiness''
+ parameter for hidden services. The technical challenges here are: first,
+ balancing the tradeoffs between decreased anonymity for individual
+ users (the longer a circuit is used, the more chance that some user
+ activities will use the same circuit and be linked by an adversary)
+ versus allowing more users on the network; and second,
+ reliably setting this parameter in an automated fashion.
+
+\item Similarly, Tor clients currently build extra circuits for internal
+  purposes such as descriptor fetching and timeout testing; these could be
+  disabled by an appropriately set consensus parameter.
+
+\item Finally, while ``ordinary'' circuits are built preemptively for use when
+ needed, some tasks such as hidden service connections, connecting to
+ ``rare'' exit ports, and others currently build ``on-demand''
+ circuits. Finding ways to avoid constructing new circuits for
+ these tasks, and analyzing the impact on anonymity and security,
+ could reduce the amount of circuit building and also the perceived
+ performance impact of a botnet.
+\end{itemize}
+
%%% Local Variables:
%%% mode: latex
diff --git a/2013/botnet-tr/thispaper.bib b/2013/botnet-tr/thispaper.bib
index 31f6c2d..511c92d 100644
--- a/2013/botnet-tr/thispaper.bib
+++ b/2013/botnet-tr/thispaper.bib
@@ -32,3 +32,14 @@
primaryClass = "cs.CR"
}
+@Article{ntor,
+ author = {Ian Goldberg and Douglas Stebila and Berkant Ustaoglu},
+ title = {Anonymity and one-way authentication in key exchange protocols},
+ journal = {Designs, Codes and Cryptography},
+ year = {2013},
+ volume = {67},
+ number = {2},
+ pages = {245--269},
+ month = {May}
+}
+
diff --git a/2013/botnet-tr/throttle.tex b/2013/botnet-tr/throttle.tex
index 2e8b6bf..e88af26 100644
--- a/2013/botnet-tr/throttle.tex
+++ b/2013/botnet-tr/throttle.tex
@@ -10,7 +10,7 @@ available than the set of all regular Tor clients and (ii) neither
bots nor the C\&C server are constrained to follow the standard Tor
algorithms, although the current implementations may do so.
-\subsection{Throttling by cost} \label{sec:cost}
+\subsection{Can we throttle by cost?} \label{sec:cost}
One way to control the rate at which circuit building requests enter
the network is by making it costly to send them. Tor could do this by
@@ -54,7 +54,7 @@ require further investigation:
compromise anonymity, and relay-specificity would
allow each relay to verify that tokens aren't double-spent. However,
this adds an extra signature-verification to the task of onion-skin processing
- and also means another server and key that must be maintained.
+ and another server and key that must be maintained.
\end{itemize}
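+
+As one simplified illustration of the token-based variant above
+(omitting the blinding step that a deployable design would need for
+unlinkability; the message format and library are illustrative), a
+relay-specific token could be issued and checked as follows:
+\begin{verbatim}
+from cryptography.hazmat.primitives.asymmetric import ed25519
+from cryptography.exceptions import InvalidSignature
+
+issuer_key = ed25519.Ed25519PrivateKey.generate()  # token server
+issuer_pub = issuer_key.public_key()               # shipped to relays
+
+def issue_token(nonce, relay_id, period):
+    msg = nonce + relay_id + period.to_bytes(4, "big")
+    return issuer_key.sign(msg)
+
+spent = set()   # per-relay, per-period double-spend cache
+
+def accept_create(token, nonce, relay_id, period):
+    msg = nonce + relay_id + period.to_bytes(4, "big")
+    try:
+        issuer_pub.verify(token, msg)  # extra check per onion skin
+    except InvalidSignature:
+        return False
+    if token in spent:
+        return False                   # relay-specific double-spending
+    spent.add(token)
+    return True
+\end{verbatim}
+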
All of these solutions also require adding extra verification to the
onion-skin handshake, incurring the additional risk implied in changes
@@ -115,7 +115,7 @@ this would have significant impact on the security of regular Tor
users since it might simply incentivize a botmaster to run many
compromised relays.
-\subsection{Guard Throttling} \label{sec:guard}
+\subsection{Can we throttle at the entry guard?} \label{sec:guard}
A more direct approach would be to simply have guard nodes rate-limit
the number of {\sc extend} cells they will process on a given
@@ -144,7 +144,10 @@ approach, ordinary clients would pick guards as usual, and guards
would enforce a low rate-limit $r_{\text{default}}$ on circuit
extensions, for example 30 circuits per hour.\footnote{Naturally, finding the
right number to use for this default rate is also an interesting
- research challenge.} OPs that need to build circuits at a higher rate
+ research challenge: a very low rate-limit could prevent bots from
+ flooding the network but might also disrupt legitimate hidden
+  service clients.}
+ OPs that need to build circuits at a higher rate
$r_{\text{server}}$ -- say, 2000 per hour -- could follow a
cryptographic protocol that would result in a verifiable token that
assigns a deterministic, but unpredictable, guard node for the OP when
@@ -156,6 +159,26 @@ in the {\em BRAIDS} design by Jansen {\em et
al.}~\cite{ccs10-braids}. The rates $r_{\text{default}}$ and
$r_{\text{server}}$ could appear in the network consensus, to allow
adjustments for the volume of traffic in the network.
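+
+A per-client token-bucket sketch of the guard-side rate limit described
+above (the class name, bucket size, and bookkeeping are illustrative):
+\begin{verbatim}
+import time
+
+class ExtendThrottle:
+    # Rate-limits circuit extensions from one client connection.
+    def __init__(self, per_hour, burst=10):
+        self.rate = per_hour / 3600.0
+        self.capacity = float(burst)
+        self.tokens = float(burst)
+        self.last = time.monotonic()
+
+    def allow_extend(self):
+        now = time.monotonic()
+        refill = (now - self.last) * self.rate
+        self.tokens = min(self.capacity, self.tokens + refill)
+        self.last = now
+        if self.tokens >= 1.0:
+            self.tokens -= 1.0
+            return True
+        return False    # drop or delay the circuit extension
+
+# r_default for ordinary clients, r_server for token-bearing services
+default_limit = ExtendThrottle(per_hour=30)
+server_limit = ExtendThrottle(per_hour=2000)
+\end{verbatim}
+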
+Figure~\ref{fig:throttle} shows the result of simulating this strategy
+with $r_{\text{default}} = 10$ and $r_{\text{server}}=2000$ using the
+{\em shadow} simulator~\cite{shadow-ndss12}; despite nearly identical
+bandwidth usage, the throttled simulation has performance
+characteristics similar to the simulation with no botnet.
+
+\begin{figure}[t]
+\vspace{-20pt}
+\begin{tabular}{ccc}
+\includegraphics[width=0.33\textwidth]{results-ttlb-bulk.pdf}&
+\includegraphics[width=0.33\textwidth]{results-circs-buildtime-cdf.pdf}&
+\includegraphics[width=0.33\textwidth]{results-tput-read.pdf}\\
+(a) & (b) & (c)
+\end{tabular}
+\vspace{-10pt}
+\caption{Results of guard throttling: 20 relays, 200 clients, 500
+ bots. (a) 5MiB download times, (b) Circuit build times, (c) Total
+ bytes read}\label{fig:throttle}
+\vspace{-20pt}
+\end{figure}
An additional technical challenge associated with guard throttling is
the need to enforce the use of entry guards when building circuits.
@@ -167,6 +190,24 @@ detected by a distributed monitoring protocol, but designing secure
protocols of this type that avoid adversarial manipulation has proven
to be a difficult challenge.
+\subsection{Can we throttle by Tor version?}
+
+Another strategy that may be useful against a nonadaptive botmaster is
+to throttle (perhaps temporarily) by Tor version. In this strategy,
+nodes running old versions of Tor could be either severely throttled
+or completely ignored by nodes with the latest software. Since relays
+are typically kept up-to-date and most clients access Tor using the
+Tor Browser Bundle (which can inform clients on start-up when new
+versions are recommended or, in the future, might incorporate
+auto-update functionality), the impact on typical users would be
+minimal, but a bot that packages the Tor software may not be able to
+respond to updates. The technical challenges here involve designing
+reliable methods to generate new software versions,
+(sufficiently) secure protocols to check that a client is running the
+software, and analyzing the use cases that might affect clients'
+ability to update frequently and the impact of throttling in these cases.
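+
+A sketch of such a version-based policy, assuming the guard can learn
+(and ideally verify) a client's Tor version, which is itself one of the
+open problems noted above; the cutoff and rates are illustrative:
+\begin{verbatim}
+def version_tuple(v):
+    # "0.2.4.17-rc" -> (0, 2, 4, 17); tags like "-rc" are ignored here
+    return tuple(int(x) for x in v.split("-")[0].split("."))
+
+def circuits_per_hour(client_version, cutoff="0.2.4.17"):
+    # Severely throttle clients running versions older than the cutoff.
+    if version_tuple(client_version) < version_tuple(cutoff):
+        return 2     # near-complete throttling for outdated clients
+    return 30        # normal default rate
+
+assert circuits_per_hour("0.2.3.25") == 2
+assert circuits_per_hour("0.2.4.17") == 30
+\end{verbatim}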
+
+
%%% Local Variables:
%%% mode: latex
%%% TeX-master: "botnet-tr"