# [or-cvs] r18860: {projects} finish my section 4 work (projects/performance)

arma at seul.org
Tue Mar 10 12:44:48 UTC 2009

Author: arma
Date: 2009-03-10 08:44:48 -0400 (Tue, 10 Mar 2009)
New Revision: 18860

Modified:
projects/performance/performance.bib
projects/performance/performance.tex
Log:
finish my section 4 work

Modified: projects/performance/performance.bib
===================================================================
--- projects/performance/performance.bib	2009-03-10 12:43:31 UTC (rev 18859)
+++ projects/performance/performance.bib	2009-03-10 12:44:48 UTC (rev 18860)
@@ -154,7 +154,6 @@
www_pdf_url = {http://petworkshop.org/2007/papers/PET2007_preproc_Performance_comparison.pdf},
}

-
@Misc{economics-tor,
author = 	 {Steven J. Murdoch},
title = 	 {Economics of {Tor} performance},
@@ -164,3 +163,12 @@
note = 	 {\url{http://www.lightbluetouchpaper.org/2007/07/18/economics-of-tor-performance/}},
}

+@inproceedings{hs-attack,
+  title = {Locating Hidden Servers},
+  author = {Lasse {\O}verlier and Paul Syverson},
+  booktitle = {Proceedings of the 2006 IEEE Symposium on Security and Privacy},
+  year = {2006},
+  month = {May},
+  publisher = {IEEE CS},
+}
+

Modified: projects/performance/performance.tex
===================================================================
--- projects/performance/performance.tex	2009-03-10 12:43:31 UTC (rev 18859)
+++ projects/performance/performance.tex	2009-03-10 12:44:48 UTC (rev 18860)
@@ -1118,20 +1118,53 @@
we should solve this by reweighting at the clients, reweighting in the
directory status, or ignoring the issue entirely.

-\subsection{Entry guards might be overloaded}
+\subsection{Older entry guards are overloaded}

-make guard flag easier to get, so there are more of them. also would
-improve anonymity since more entry points into the network.
+While the load on exit relays is skewed based on having an unusual exit
+policy, load on entry guards is skewed based on how long they've been
+in the network.

-also, are old guards more overloaded than new guards, since there are
-more clients that have the old guards in their state file?
+Since Tor clients choose a small number of entry guards and keep them
+for several months, a relay that's been listed with the Guard flag for a
+long time will accumulate an increasing number of clients. A relay that
+just earned its Guard flag for the first time will see very few clients.

-\subsection{Two hops vs three hops.}
+To combat this skew, clients should rotate entry guards every so
+often. We need to look at network performance metrics and discern how
+long it takes for the skew to become noticeable -- it might be that
+rotating to a new guard after a week or two is enough to substantially
+resolve the problem. We also need to consider the added risk that
+higher guard churn poses versus the original attack they were designed
+to thwart~\cite{hs-attack}, but I think a rotation period of a few
+weeks should still be long enough to keep that risk low.

+At the same time, there are fewer relays with the Guard flag than there
+should be. While the Exit flag really is a function of the relay's exit
+policy, the required properties for entry guards are much more vague:
+we want them to be ``fast enough'', and we want them to be ``likely to
+be around for a while more''. I think the requirements currently are too
+strict. This scarcity of entry guards in turn influences the anonymity
+the Tor network can provide, since there are fewer potential entry points
+into the network.

+{\bf Impact}: High.

+{\bf Effort}: Low.

+{\bf Risk}: Low.

+{\bf Plan}: We should do it, early in Tor 0.2.2.x. We'll need proposals
+first, both for the ``dropping old guards'' plan (to assess the tradeoff
+from the anonymity risk) and for the ``opening up the guard criteria''
+plan.
+
+%\subsection{Two hops vs three hops.}
+
+% People periodically suggest two hops rather than three. The problem
+% is that the design we have right now is more like ``two hops plus an
+% entry guard'', so removing any of the hops seems like a bad move.
+% But YMMV.
+
\section{Better handling of high/variable latency and failures}

\subsection{Our round-robin and rate limiting is too granular}
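
The guard-accumulation effect described in the revised subsection can be
sketched with a toy simulation (a hypothetical model for illustration,
not Tor's actual guard-selection code): assume one relay earns the Guard
flag per day, each new client picks one guard uniformly from the current
set and keeps it, and an optional rotation period reassigns clients to a
fresh guard. Without rotation, the oldest guards end up carrying many
times the load of the newest ones; rotating every couple of weeks
flattens the distribution considerably.

```python
import random

def simulate(days=200, clients_per_day=50, rotation_days=None, seed=0):
    """Toy model of guard-load skew. One relay earns the Guard flag per
    day; each new client picks one guard uniformly from the current set
    and keeps it, optionally rotating to a fresh guard once its current
    assignment is `rotation_days` old. Returns the final client count
    per guard, oldest guard first."""
    rng = random.Random(seed)
    n_guards = 0
    assignments = []  # per client: [guard_index, day_last_assigned]
    for day in range(days):
        n_guards += 1  # a new relay earns the Guard flag today
        for _ in range(clients_per_day):
            assignments.append([rng.randrange(n_guards), day])
        if rotation_days:
            for a in assignments:
                if day - a[1] >= rotation_days:
                    a[:] = [rng.randrange(n_guards), day]
    load = [0] * n_guards
    for guard, _ in assignments:
        load[guard] += 1
    return load

def skew(load):
    """Ratio of total load on the oldest tenth of guards to the
    newest tenth (higher means more skew toward old guards)."""
    tenth = len(load) // 10
    return sum(load[:tenth]) / max(1, sum(load[-tenth:]))

print(skew(simulate()))                  # strongly skewed toward old guards
print(skew(simulate(rotation_days=14)))  # much flatter with rotation
```

The model deliberately ignores relay bandwidth, churn, and clients
leaving the network, so the numbers are only qualitative; the point is
that the skew grows with guard age and that a rotation period on the
order of weeks is enough to blunt it in this setting.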