[or-cvs] r18795: {projects} flesh out section 1. start on section 2. (projects/performance)

arma at seul.org arma at seul.org
Sat Mar 7 08:53:18 UTC 2009


Author: arma
Date: 2009-03-07 03:53:18 -0500 (Sat, 07 Mar 2009)
New Revision: 18795

Modified:
   projects/performance/performance.tex
Log:
flesh out section 1. start on section 2.


Modified: projects/performance/performance.tex
===================================================================
--- projects/performance/performance.tex	2009-03-07 05:08:19 UTC (rev 18794)
+++ projects/performance/performance.tex	2009-03-07 08:53:18 UTC (rev 18795)
@@ -74,30 +74,46 @@
 \tableofcontents
 \pagebreak
 
-\section{Our congestion control does not work well}
+\section{Tor's congestion control does not work well}
+\label{sec:congestion}
 
-Tor's most critical performance problem is in how it combines high-volume
-streams with low-volume streams. ...
+One of Tor's critical performance problems is in how it combines
+high-volume streams with low-volume streams. We need to come up with ways
+to let the ``quiet'' streams (\eg web browsing) co-exist better with the
+``loud'' streams (\eg bulk transfer).
 
-\subsection{TCP backoff slows down all streams since we multiplex}
+\subsection{TCP backoff slows down every circuit at once}
 
-End-to-end congestion avoidance
+Tor combines every circuit going between two Tor relays into a single TCP
+connection. This approach is a smart idea in terms of anonymity, since
+putting all circuits on the same connection prevents an observer from
+learning which packets correspond to which circuit. But over the past
+year research has shown that it's a bad idea in terms of performance,
+since TCP's backoff mechanism only has one option when that connection
+is sending too many bytes: slow it down, and thus slow down all the
+circuits going across it.
 
+We could fix this by switching to one circuit per TCP connection. But
+that means a relay with 1000 connections and 1000 circuits per
+connection would need a million sockets open; that's a problem even for
+well-designed operating systems and routers.
+
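The cost of multiplexing can be sketched with toy arithmetic (the rates
and function names below are hypothetical, purely to illustrate the
paragraph above, not taken from Tor's code):

```python
# Illustrative arithmetic only: numbers and names are made up to show
# the effect of shared vs. per-circuit TCP back-off described above.

def shared_backoff(rates, loss_event):
    """All circuits share one TCP connection: one loss event makes
    TCP halve the whole connection, slowing every circuit down."""
    if loss_event:
        return [r / 2 for r in rates]
    return rates

def per_circuit_backoff(rates, lossy_circuit):
    """One TCP connection per circuit: only the circuit that saw the
    loss backs off; the quiet circuits keep their rates."""
    return [r / 2 if i == lossy_circuit else r
            for i, r in enumerate(rates)]

# A quiet web-browsing circuit (100 cells/s) beside a bulk circuit
# (4000 cells/s): with multiplexing, the bulk stream's loss punishes both.
print(shared_backoff([100, 4000], True))    # both circuits halved
print(per_circuit_backoff([100, 4000], 1))  # only the bulk circuit halved
```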
+More generally,
 Tor currently uses two levels of congestion avoidance -- TCP flow control
 per-link, and a simple windowing scheme per-circuit.
 It has been suggested that this approach is causing performance problems,
 because the two schemes interact badly.
-Also, it is known that multiplexing multiple streams over a single TCP
-link gives poorer performance than keeping them separate.
+
 Experiments show that moving congestion management to be fully end-to-end
 offers a significant improvement in performance.
 
 There have been two proposals to resolve this problem, but their
 underlying principle is the same: use an unreliable protocol for links
 between Tor nodes, and perform error recovery and congestion management
-between the client and exit node.
-Joel Reardon~\cite{reardon-thesis} proposed using DTLS~\cite{DTLS}
-(a UDP variant of TLS), as the link protocol, a cut-down version of
+between the client and exit node. Tor partially funded Joel Reardon's
+thesis~\cite{reardon-thesis} under Ian Goldberg, which proposed using
+DTLS~\cite{DTLS}
+(a UDP variant of TLS) as the link protocol and a cut-down version of
 TCP to give reliability and congestion avoidance, but largely using the
 existing Tor cell protocol.
 Csaba Kiraly \detal~\cite{tor-l3-approach} proposed using
@@ -133,29 +149,41 @@
 IPsec stack, but this would be a substantial effort, and would lose some
 of the advantages of making use of existing building blocks.
 
-A significant issue with moving from TLS as the link protocol is that
-it is incompatible with Tor's current censorship-resistance strategy.
-Tor impersonates the TLS behaviour of HTTPS web-browsing, with the
-intention that it is difficult to block Tor, without blocking a
-significant amount of HTTPS.
-If Tor were to move to an unusual protocol, such as DTLS, it would be
-easier to block just Tor.
-Even IPsec is comparatively unusual on the open Internet.
+%A significant issue with moving from TLS as the link protocol is that
+%it is incompatible with Tor's current censorship-resistance strategy.
+%Tor impersonates the TLS behaviour of HTTPS web-browsing, with the
+%intention that it is difficult to block Tor, without blocking a
+%significant amount of HTTPS.
+%If Tor were to move to an unusual protocol, such as DTLS, it would be
+%easier to block just Tor.
+%Even IPsec is comparatively unusual on the open Internet.
 
-One option would be to modify the link protocol so that it impersonates
-an existing popular encrypted protocol.
-To avoid requiring low-level operating system access, this should be a
-UDP protocol.
-There are few options available, as TCP is significantly more popular.
-Voice over IP is one fruitful area, as these require low latency and
-hence UDP is common, but further investigation is needed.
+%One option would be to modify the link protocol so that it impersonates
+%an existing popular encrypted protocol.
+%To avoid requiring low-level operating system access, this should be a
+%UDP protocol.
+%There are few options available, as TCP is significantly more popular.
+%Voice over IP is one fruitful area, as these require low latency and
+%hence UDP is common, but further investigation is needed.
 
+Prof.\ Goldberg has a second student picking up where Joel left off. He's
+currently working on fixing bugs in OpenSSL's DTLS implementation, along
+with bugs in the other core libraries we'd need if we go this direction.
 
+{\bf Impact}: high
 
-\subsection{We chose Tor's congestion control starting window sizes wrong}
+{\bf Difficulty}: high effort to get all the pieces in place; high risk
+that it would need further work to get right.
 
-Changing circuit window size
+{\bf Plan}: We should keep working with them to get this project closer
+to something we can deploy. The next step on our side is to deploy a
+separate testing Tor network that uses datagram protocols, and get more
+intuition from that. We could optimistically have this network deployed
+in late 2009.
 
+\subsection{We chose Tor's congestion control window sizes wrong}
+%Changing circuit window size
+
 Tor maintains a per-circuit maximum of unacknowledged cells
 (\texttt{CIRCWINDOW}).
 If this value is exceeded, it is assumed that the circuit has become
@@ -164,9 +192,19 @@
 this window size would substantially decrease latency (although not
 to the same extent as moving to a unreliable link protocol), while not
 affecting throughput.
-This reduction would improve user experience, and have the added benefit
-of reducing memory usage on Tor nodes.
 
+Specifically, right now the circuit window size is 512KB and the
+per-stream window size is 256KB. These numbers mean that a user
+downloading a large file receives it (in the ideal case) in chunks
+of 256KB, sending back acknowledgements for each chunk. In practice,
+though, the network has too many of these chunks moving around at once,
+so they spend most of their time waiting in memory at relays.
+
+Reducing the size of these chunks has several effects. First, we reduce
+memory usage at the relays, because there are fewer chunks waiting and
+because they're smaller. Second, because there are fewer bytes vying to
+get onto the network at each hop, users will see lower latency.
+
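A quick back-of-the-envelope check of these numbers (the constant names
below are illustrative, not Tor's actual identifiers; the 512-byte cell
size matches the arithmetic in the text):

```python
# Back-of-the-envelope check of the window sizes quoted above.
# Constant names are illustrative, not Tor's actual identifiers.
CELL_BYTES = 512            # Tor cells are 512 bytes, per the arithmetic above
CIRCWINDOW_CELLS = 1000     # current per-circuit window, in cells
STREAMWINDOW_CELLS = 500    # current per-stream window, in cells
PROPOSED_CELLS = 100        # lowest setting considered in the plan

print(CIRCWINDOW_CELLS * CELL_BYTES)    # 512000 bytes, i.e. ~512KB per circuit
print(STREAMWINDOW_CELLS * CELL_BYTES)  # 256000 bytes, i.e. ~256KB per stream
print(PROPOSED_CELLS * CELL_BYTES)      # 51200 bytes, i.e. ~50KB per chunk
```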
 More investigation is needed on precisely what should be the new value
 for the circuit window, and whether it should vary.
 Out of 200, 1\,000 (current value in Tor) and 5\,000, the optimum was
@@ -176,8 +214,121 @@
 Therefore, a different optimum may exist for networks with different
 characteristics.
 
-\subsection{Priority for circuit control cells, e.g. circuit creation}
+{\bf Impact}: Medium. It seems pretty clear that in the steady-state this
+patch is a good idea; but it's still up in the air whether the transition
+period will show immediate improvement or if there will be a period
+where people who upgrade get clobbered by people who haven't upgraded yet.
 
+{\bf Difficulty}: Low effort to deploy -- it's a several-line patch! Medium
+risk that we haven't thought things through well enough and we'd need to
+back it out or change parts of it.
+
+{\bf Plan}: Once we start on 0.2.2.x (in the next few months), we should
+put the patch in and see how it fares. We should go for maximum effect,
+and choose the lowest possible setting of 100 cells (50KB) per chunk.
+
+%\subsection{Priority for circuit control cells, e.g. circuit creation}
+
+\section{Some users add way too much load}
+
+Section~\ref{sec:congestion} described mechanisms to let low-volume
+streams have a chance at competing with high-volume streams. Without
+those mechanisms, normal web browsing users will always get squeezed out
+by people pulling down larger content. But the next problem is that some
+users simply add more load than the network can handle.
+
+When we originally designed Tor, we aimed for high throughput. We
+figured that providing high throughput would mean we inherit good latency
+properties for free. However, now that it's clear we have several user
+profiles trying to use the Tor network at once, we need to consider
+changing some of those design choices. Some of those changes would aim
+for better latency and worse throughput.
+
+\subsection{Squeeze loud circuits}
+
+The Tor 0.2.0.x release included this change:
+\begin{verbatim}
+  - Change the way that Tor buffers data that it is waiting to write.
+    Instead of queueing data cells in an enormous ring buffer for each
+    client->relay or relay->relay connection, we now queue cells on a
+    separate queue for each circuit. This lets us use less slack memory,
+    and will eventually let us be smarter about prioritizing different
+    kinds of traffic.
+\end{verbatim}
+
+Currently when we're picking cells to write onto the network, we choose
+round-robin from each circuit that wants to write. We could instead
+remember which circuits had written many cells recently, and give priority
+to the ones that haven't.
+
+Technically speaking, we're reinventing more of TCP here, and we'd be
+better served by a general switch to DTLS+UDP. But there are two reasons
+to consider this separate approach.
+
+First is rapid deployment. We could get this change into the Tor 0.2.2.x
+development release in mid 2009, and as relays upgrade the change would
+gradually phase in. This timeframe is way earlier than the practical
+timeframe for switching to DTLS+UDP.
+
+The second reason is the flexibility this approach provides. We could
+give priorities based on recent activity (``if you've sent much more
+than the average in the past 10 seconds, then you get slowed down''),
+or we could base it on the total number of bytes sent on the circuit so
+far, or some combination.
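The recent-activity idea can be sketched as follows (a hypothetical
illustration, not Tor's actual scheduler; the class and function names
are made up):

```python
# Hypothetical sketch of activity-based circuit scheduling -- not Tor's
# actual scheduler. Each circuit keeps a decaying count of recently
# written cells; instead of pure round-robin, we service the circuit
# with the least recent activity first.

class Circuit:
    def __init__(self, name):
        self.name = name
        self.recent_cells = 0.0  # decaying activity counter
        self.queue = []          # cells waiting to be written

DECAY = 0.9  # illustrative per-tick multiplier

def pick_next(circuits):
    """Pick a circuit with cells queued and the least recent activity."""
    active = [c for c in circuits if c.queue]
    if not active:
        return None
    return min(active, key=lambda c: c.recent_cells)

def write_one_cell(circuits):
    c = pick_next(circuits)
    if c is None:
        return None
    c.queue.pop(0)
    c.recent_cells += 1
    return c.name

def tick(circuits):
    # Run periodically: old activity fades away, so a circuit that was
    # loud ten seconds ago eventually competes on equal footing again.
    for c in circuits:
        c.recent_cells *= DECAY
```

Under this sketch, a bulk circuit that has written many cells recently
loses out to a freshly active web-browsing circuit, which is exactly the
squeeze described above.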
+
+This meddling is tricky though: we could encounter feedback effects if
+we don't perfectly anticipate the results of our changes.
+
+Also, Bittorrent is designed to resist attacks like this -- it
+periodically drops its lowest-performing connection and replaces it with
+a new one. So we would want to make sure we're not going to accidentally
+increase the number of circuit creation requests and thus just shift
+the load problem.
+
+{\bf Impact}: High, if we get it right.
+
+{\bf Difficulty}: Medium effort to deploy -- we need to go look at the
+code to figure out where to change, how to efficiently keep stats on
+which circuits are active, etc. High risk that we'd get it wrong the
+first few times. Also, it will be hard to measure whether we've gotten
+it right or wrong.
+
+{\bf Plan}: Step one is to evaluate the complexity of changing the
+current code. We should do that for 0.2.2.x in mid 2009. Then we should
+write some proposals for various meddling we could do, and try to find
+the right balance between simplicity and projected effect.
+
+\subsection{Throttle bittorrent at exits}
+
+If we're right that Bittorrent traffic is a main reason for Tor's load,
+we could bundle a protocol analyzer with the exit relays. When they
+detect that a given outgoing stream is a protocol associated with bulk
+transfer, they could set a low rate limit on that stream. (Tor already
+supports per-stream rate limiting, though we've never bothered using it.)
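Per-stream rate limiting of this kind is essentially a token bucket. A
minimal sketch (hypothetical names; not Tor's actual limiter, which
differs in detail):

```python
class TokenBucket:
    """Minimal per-stream token-bucket limiter (illustrative sketch,
    not Tor's implementation). rate is bytes refilled per second;
    burst is the bucket capacity in bytes."""

    def __init__(self, rate, burst):
        self.rate = rate
        self.burst = burst
        self.tokens = burst  # start with a full bucket
        self.last = 0.0

    def allow(self, nbytes, now):
        """Return True if nbytes may be sent at time `now` (seconds)."""
        # Refill for the elapsed time, capped at the burst size.
        self.tokens = min(self.burst,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if nbytes <= self.tokens:
            self.tokens -= nbytes
            return True
        return False
```

An exit relay that classified an outgoing stream as bulk transfer could
attach a bucket with a low rate to it, delaying writes whenever
`allow()` returns False.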
+
+This is a slippery slope in many respects though. First is the
+wiretapping question: is an application that automatically looks
+at content wiretapping? It depends which lawyer you ask. Second is
+the network neutrality question: ``we're just delaying the traffic''.
+Third is the liability concern: once we add this feature in, what other
+requests are we going to get for throttling or blocking certain content,
+and does the capability to throttle certain content change the liability
+situation for the relay operator?
+
+{\bf Impact}: High.
+
+{\bf Difficulty}: Medium effort to deploy -- we'd need to bundle and
+maintain a protocol analyzer at the exit relays. High risk that we'd
+(rightly) run into the wiretapping, network neutrality, and liability
+questions above. Also, it will be hard to measure whether we've chosen
+the right streams to throttle.
+
+{\bf Plan}: Not a good move.
+
+\subsection{Throttle/snipe at the client side}
+\subsection{Default exit policy of 80,443}
+\subsection{Need more options here, since these all suck}
+
 \section{Simply not enough capacity}
 
 \subsection{Tor server advocacy}
@@ -197,6 +348,7 @@
 members has their node fail, other team members may notice and provide
 assistance on fixing the problem.
 
+\subsection{Funding more relays directly}
 
 
 \subsection{incentives to relay}
@@ -486,8 +638,9 @@
 length payload, however Tor does not currently send padding cells,
 other than as a periodic keep-alive.
 
+\section{Last thoughts}
 
-\section{Some users add way too much load}
+\subsection{Lessons from economics}
 
 If, for example, the measures above doubled the effective capacity of the Tor network, the na\"{\i}ve hypothesis is that users would experience twice the throughput.
 Unfortunately this is not true, because it assumes that the number of users does not vary with bandwidth available.
@@ -548,19 +701,15 @@
 Alternatively, the network could be configured to share resources in a manner such that the utility to each user is more equal.
 In this case, it will be acceptable to all users that a single equilibrium point is formed, because its level will no longer be in terms of simple bandwidth.
 
-\subsection{Squeeze loud circuits}
-\subsection{Snipe bittorrent}
-\subsection{Throttle at the client side}
-\subsection{Default exit policy of 80,443}
-\subsection{Need more options here, since these all suck}
-
-\section{Last thoughts}
-
 \subsection{Metrics}
 
   Two approaches: "research conclusively first" vs "roll it out and see"
   Need ways to measure improvements
 
+\subsection{The plan moving forward}
+
+
+
 \subsection*{Acknowledgements}
 
 % Mike Perry provided many of the ideas discussed here


