Hi Rob,
On 27 Jul 2019, at 00:18, Rob Jansen <rob.g.jansen@nrl.navy.mil> wrote:
I am planning on performing an experiment on the Tor network to try to gauge the accuracy of the advertised bandwidths that relays report in their server descriptors. Briefly, the experiment involves running a speed test on every relay for a short time (about 20 seconds). Details follow.
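For readers who want to check the reported numbers themselves, here is a minimal sketch of pulling the advertised bandwidths out of the server descriptors with stem (https://stem.torproject.org). The min(rate, observed) convention is my assumption about which value you will be comparing against:

    # a minimal sketch using stem; assumes stem is installed and a
    # directory mirror is reachable
    import stem.descriptor.remote

    for desc in stem.descriptor.remote.get_server_descriptors():
        # "advertised bandwidth" is conventionally the minimum of the
        # configured rate and the observed bandwidth, in bytes/second
        advertised = min(desc.average_bandwidth, desc.observed_bandwidth)
        print(desc.nickname, desc.fingerprint, advertised)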
...
Motivation
The capacity of Tor relays (maximum available goodput) is an important metric. Combined with mean goodput, it allows us to compute the bandwidth utilization of individual relays as well as the entire network in aggregate. Generally, capacity is used to help balance client load across relays, and relay utilization rates help Tor make informed decisions about how to allocate resources and prioritize performance and scalability improvements.
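(For concreteness, I assume the utilization you mean is utilization = mean goodput / capacity, so a relay averaging 10 Mbit/s against a 50 Mbit/s capacity is 20% utilized.)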
Can you define "goodput"? How is it different to the bandwidth reported by a standard speed test? How is it different to the bandwidth measured by sbws?
...
We will conduct the speed tests while minimizing network overhead. We will use a custom client that builds 2-relay circuits. The first relay will be the target relay we are speed testing, and the second relay will be a fast exit relay that we control. We will initiate data streams between a speed test client and a speed test server, both running on the same machine as our exit relay.
The setup will look like:
speedtest-client <--> tor-client <--> target-relay <--> exit-relay <--> speedtest-server
All components will run on the same machine that we control, except for the target relay, which will rotate as we test different relays in the network. For each target relay, we plan to run the speed test for 20 seconds in order to increase the probability that the 10-second mean goodput will reach the relay's true capacity. We will measure each relay over a few days to ensure that the effects of our speed test appear in every relay's reported bandwidth.
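To make the setup concrete for other readers, here is roughly how I picture the custom client working, sketched with stem. The control port, the __LeaveStreamsUnattached workaround, and the placeholder fingerprints are my assumptions, not details from your plan:

    # a rough sketch with stem; assumes a local tor with ControlPort 9051
    from stem.control import Controller, EventType

    TARGET_FP = 'A' * 40  # placeholder: the relay being speed tested
    EXIT_FP = 'B' * 40    # placeholder: the fast exit you control

    with Controller.from_port(port=9051) as controller:
        controller.authenticate()

        # stop tor from attaching new streams to its own circuits
        controller.set_conf('__LeaveStreamsUnattached', '1')

        # build the 2-hop path: target relay first, controlled exit second
        circ_id = controller.new_circuit([TARGET_FP, EXIT_FP], await_build=True)

        # attach each new speed test stream to the measurement circuit
        def attach_stream(stream):
            if stream.status == 'NEW':
                controller.attach_stream(stream.id, circ_id)

        controller.add_event_listener(attach_stream, EventType.STREAM)

        # ... run the speed test client through tor's SocksPort here ...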
Where is your server? How do you expect the location of your server to affect your results?
T