# [or-cvs] r18881: {projects} final polishing. time to send it and sleep. (projects/performance)

arma at seul.org
Wed Mar 11 11:10:10 UTC 2009

Author: arma
Date: 2009-03-11 07:10:09 -0400 (Wed, 11 Mar 2009)
New Revision: 18881

Modified:
projects/performance/performance.tex
Log:
final polishing. time to send it and sleep.

Modified: projects/performance/performance.tex
===================================================================
--- projects/performance/performance.tex	2009-03-11 11:09:45 UTC (rev 18880)
+++ projects/performance/performance.tex	2009-03-11 11:10:09 UTC (rev 18881)
@@ -317,7 +317,7 @@
\subsection{Squeeze over-active circuits}
\label{sec:squeeze}

-The Tor 0.2.0.x release included this change:
+The Tor 0.2.0.30 release included this change:
\begin{verbatim}
- Change the way that Tor buffers data that it is waiting to write.
Instead of queueing data cells in an enormous ring buffer for each
@@ -1462,7 +1462,7 @@
{\bf Plan}: Overall, it seems like a delicate move, but with potentially
quite a good payoff. I'm not convinced yet either way.

-\section{The network overhead is still too high for modem users}
+\section{The network overhead may still be high for modem users}

Even if we resolve all the other pieces of the performance question,
there still remain some challenges posed uniquely by users with extremely
@@ -1478,8 +1478,8 @@
blog post on the topic provides background and

-Proposal 158, which further reduces the directory overhead, is scheduled
-to be deployed in the Tor 0.2.2.x series.\footnote{\url{https://svn.torproject.org/svn/tor/trunk/doc/spec/proposals/158-microdescriptors.txt}}
+Proposal 158 further reduces the directory overhead, and is scheduled
+to be deployed in Tor 0.2.2.x.\footnote{\url{https://svn.torproject.org/svn/tor/trunk/doc/spec/proposals/158-microdescriptors.txt}}

{\bf Impact}: Low for normal users, high for low-bandwidth users.

@@ -1487,10 +1487,9 @@

{\bf Risk}: Low.

-{\bf Plan}: Once we roll out proposal 158, I think we'll be in good
+{\bf Plan}: We should roll out proposal 158. Then we'll be in good
shape for a while. The next directory overhead challenge will be in
-splintering the network, but first we must get enough relays that the
-step is needed.
+advertising many more relays; but first we need to get the relays.

\subsection{Our TLS overhead can also be improved}

@@ -1595,11 +1594,12 @@
\prettyref{fig:equilibrium} is the typical supply and demand graph from
economics textbooks, except with long-term throughput per user substituted
for price, and number of users substituted for quantity of goods sold.
-Also, it is inverted, because users prefer higher throughput, whereas
-consumers prefer lower prices.
-Similarly, as the number of users increases, the bandwidth supplied
-by the network falls, whereas suppliers will produce more goods if the
-price is higher.
+%Also, it is inverted, because users prefer higher throughput, whereas
+%consumers prefer lower prices.
+%Similarly,
+As the number of users increases, the bandwidth supplied
+by the network falls.
+%, whereas suppliers will produce more goods if the price is higher.

In drawing the supply curve, we have assumed the network's bandwidth is
constant and shared equally over as many users as needed.
@@ -1712,7 +1712,7 @@
to tasks, and get everything started.

At the same time, we need to continue to work on ways to measure changes
-in the network: without snapshots for `before' and `after', we'll have
+in the network: without `before' and `after' snapshots, we'll have
a much tougher time telling whether a given idea is actually working.
Many of the plans here have a delay between when we roll out the change
and when the clients and relays have upgraded enough for the change to