[tor-dev] Optimising Tor node selection probabilities

Steven Murdoch Steven.Murdoch at cl.cam.ac.uk
Mon Oct 13 12:28:31 UTC 2014


On 11 Oct 2014, at 01:14, Virgil Griffith <i at virgil.gr> wrote:

> Will a longer version of this paper be coming out, particularly one for developers?

I don’t have any immediate plans to do so, as my current thinking is that it would end up being a queuing theory tutorial with the current paper appended, and plenty of people have already written good queuing theory tutorials. I could perhaps put together a somewhat compressed tutorial, but really anyone who is interested in network performance needs a reasonably good understanding of queuing theory.

The work started off as some experiments using a simple optimisation method – gradient descent [1]. This method finds local minima [2], but it is only guaranteed to find the global minimum if the function is convex (because any local minimum of a convex function is also its global minimum).
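
For anyone unfamiliar with the method, here is a minimal sketch of gradient descent in Python. This is illustrative only, not code from the paper; the toy objective and step size are mine:

    def gradient_descent(grad, x, step=0.1, iters=100):
        # Repeatedly step against the gradient; for a convex
        # objective and a small enough step size, this converges
        # to the global minimum.
        for _ in range(iters):
            x -= step * grad(x)
        return x

    # Toy example: f(x) = (x - 3)^2 has gradient 2*(x - 3),
    # with its minimum at x = 3.
    x_min = gradient_descent(lambda x: 2 * (x - 3), x=0.0)
    print(x_min)  # ~3.0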

The function being minimised is the average latency for a cell entering the Tor network, and what we are varying in order to minimise latency is the distribution of traffic over nodes.
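
In symbols (my notation, not necessarily the paper’s): if q_i is the probability of selecting node i, \Lambda is the total rate of cells entering the network, and W_i is the mean time a cell spends at node i as a function of that node’s arrival rate, then the objective is roughly

    T(q) = \sum_i q_i \, W_i(\Lambda q_i), \qquad \text{subject to } \sum_i q_i = 1, \quad q_i \ge 0.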

So the paper is mainly about finding out whether the function giving the average latency, for a particular traffic distribution policy, is convex. It turns out that it is, and so gradient descent is a valid method. If you don’t want to deal with the maths proving this result, you can just skip the middle of the paper, which discusses Hessian matrices.
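
For reference, the standard convexity test being applied: a twice-differentiable function is convex on a convex domain exactly when its Hessian is positive semidefinite everywhere, i.e.

    \nabla^2 T(q) \succeq 0 \quad \text{for all feasible } q.

Under the sketched objective above, the sum is separable in the q_i, so the Hessian is diagonal and the check reduces to showing each term q_i W_i(\Lambda q_i) has a nonnegative second derivative.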

When you run gradient descent you end up with Figure 1, which shows that for the slow nodes the optimal approach is not to use them at all. The intuitive reason is that a cell spends some time in a queue and some time being processed once it reaches the head of the queue (processing consists of it being decrypted and then put on the wire). For slow nodes the processing time is so long that, even if the node’s queue is empty, it would be better to find a fast node and give it the cell instead. The cell will have to wait in a queue, but the processing time of a fast node is so much shorter that it’s worth it.
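
A quick back-of-the-envelope illustration of that intuition, using the M/D/1 delay formula discussed below (the rates here are made-up numbers, not measurements from the paper):

    def md1_sojourn(lam, mu):
        # Mean time a cell spends in an M/D/1 queue: 1/mu of
        # (deterministic) service time plus lam / (2*mu*(mu - lam))
        # of queueing delay. Requires lam < mu for stability.
        assert lam < mu
        return 1.0 / mu + lam / (2.0 * mu * (mu - lam))

    # A slow node (100 cells/s) with an empty queue still takes
    # 10 ms per cell, while a fast node (10,000 cells/s) running
    # at 90% load delivers in ~0.55 ms, queueing delay included.
    print(md1_sojourn(lam=0.0, mu=100.0))       # 0.010 s
    print(md1_sojourn(lam=9000.0, mu=10000.0))  # ~0.00055 s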

The big open question in the paper is about the assumptions. Queuing theory is very powerful, but it quickly gets very ugly when you move away from simple models of reality. This paper uses the M/D/1 model [3], which is a reasonable approximation but not a perfect one. Moving to even a slightly more realistic approximation makes the maths much more difficult, so we didn’t do it.
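
For completeness, the standard M/D/1 result in play: with Poisson arrivals at rate \lambda and a fixed service time 1/\mu (utilisation \rho = \lambda/\mu < 1), the mean sojourn time is

    W(\lambda) = \frac{1}{\mu} + \frac{\rho}{2\mu(1 - \rho)} = \frac{1}{\mu} + \frac{\lambda}{2\mu(\mu - \lambda)}.

The “D” is the deterministic service time; relaxing that assumption, or the Poisson arrivals, is where the maths starts getting ugly.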

Best wishes,
Steven
 
[1] http://mathworld.wolfram.com/MethodofSteepestDescent.html
[2] http://en.wikipedia.org/wiki/Maxima_and_minima
[3] http://en.wikipedia.org/wiki/M/D/1_queue

