Hmm, this type of test doesn’t seem to have much connection to the average Tor user. Middle-relay <-> middle-relay connections may be mostly server-to-server, but residential and mobile connections in Russia, Iran, or China likely don’t behave the same way. Worse still, BBR can have measurable effects on packet retransmissions, and while tracking such things across many flows may require an unrealistic amount of state, it’s not a given that it won’t make Tor traffic stand out (luckily, of course, large services like Dropbox, Spotify, YouTube, etc. have been migrating to it, so maybe this won’t be the case in the future).
Sadly, the large-scale deployments of BBR are mostly not on high-latency links (CDNs generally have a nearby datacenter for you to communicate with), and the higher retransmission rates may result in more “lag” for browsing when absolute bandwidth isn’t the primary concern. On the flip side, Spotify’s measurements seem to indicate that, at least in some cases, jitter can decrease enough to be noticeable to users.
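For anyone who wants to experiment on their own relay, switching a Linux box (kernel 4.9+) to BBR is a two-sysctl change; BBR expects the fq pacing qdisc to be in place. A minimal sketch (requires root):

```shell
# Enable BBR system-wide. BBR relies on packet pacing, which the
# fq qdisc provides, so set both together.
sysctl -w net.core.default_qdisc=fq
sysctl -w net.ipv4.tcp_congestion_control=bbr

# Check what the kernel currently offers and uses:
sysctl net.ipv4.tcp_available_congestion_control
sysctl net.ipv4.tcp_congestion_control
```

The change applies to new connections only; existing flows keep the algorithm they were created with.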
Is there a way we could measure the packet loss/latency profiles of bridge users? That would enable simulating changes like this, but it sounds like there’s no good existing work in this domain?
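As a starting point, the raw material for such a profile is just per-probe RTT samples plus a loss count. A hypothetical sketch (the sample values are made up; in practice they would come from ping/mtr runs toward volunteer bridge clients):

```shell
# Summarise RTT samples (ms) into a crude loss/latency profile.
# "LOST" marks probes that got no reply.
printf '%s\n' 252.1 255.3 LOST 249.8 260.2 LOST 251.0 |
awk '$1=="LOST"{lost++; next}
     {sum+=$1; n++; if(min==""||$1<min)min=$1; if($1>max)max=$1}
     END{printf "sent=%d loss=%.1f%% min=%.1f avg=%.1f max=%.1f\n",
         n+lost, 100*lost/(n+lost), min, sum/n, max}'
# -> sent=7 loss=28.6% min=249.8 avg=253.7 max=260.2
```

A distribution of such summaries across users, rather than a single average, is what a simulator would need.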
Matt
On Jan 10, 2020, at 17:36, Roman Mamedov <rm@romanrm.net> wrote:
On Fri, 10 Jan 2020 16:24:56 +0000 Matt Corallo <tor-lists@mattcorallo.com> wrote:
Cool! What did your testing rig look like?
A few years ago I got a dedicated server from one of those cheap French hosts, which appeared to have a congested uplink (low-ish upload speeds). Support was not able to solve this, but the server was too cheap to be worth cancelling over just that, so I looked for ways to utilize it better despite the congestion.
If I remember correctly, I also had a Japanese VPS at the time, so my tests were intentionally for a "difficult" case, uploading from France to Japan (with 250+ms ping).
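This is not Roman's actual rig, but a similar "difficult" path can be emulated locally with netem for repeatable tests; a sketch, assuming the egress interface is eth0 (requires root):

```shell
# Emulate a ~250 ms RTT path with some jitter and loss on outgoing traffic.
# Applying the full delay on one side adds it once per round trip.
tc qdisc add dev eth0 root netem delay 250ms 10ms loss 0.5%

# Inspect the active qdisc:
tc qdisc show dev eth0

# Remove the emulation when done:
tc qdisc del dev eth0 root
```

This makes it possible to rerun the same comparison without a real France-to-Japan link.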
Here are my completely unscientific scribbles of how all the various algorithms behaved. The scenario is uploading for a minute or so, observing the speed in MB/sec visually, then recording how it appeared to change during that minute (and then repeating this a couple of times to be certain).
tcp_bic.ko       -- 6...5...4
tcp_highspeed.ko -- 2
tcp_htcp.ko      -- 1.5...3...2
tcp_hybla.ko     -- 3...2...1
tcp_illinois.ko  -- 6...7...10
tcp_lp.ko        -- 2...1
tcp_scalable.ko  -- 5...4...3
tcp_vegas.ko     -- 2.5
tcp_veno.ko      -- 2.5
tcp_westwood.ko  -- <1
tcp_yeah.ko      -- 2...5...6
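A sweep like the one above can be scripted on a modern kernel; a sketch (requires root, and the minute-long upload step is left as a placeholder):

```shell
# Cycle through the classic congestion control modules and make each
# the system-wide default in turn. The sysctl name drops the tcp_ prefix.
for cc in bic highspeed htcp hybla illinois lp scalable vegas veno westwood yeah; do
    modprobe "tcp_${cc}"
    sysctl -w net.ipv4.tcp_congestion_control="${cc}"
    # ...run the upload test here and record the observed MB/sec...
done
```

Only new connections pick up each switch, so the test connection must be opened after the sysctl change.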
This was on the 3.14 kernel which did not have BBR yet to compare. In later comparisons, as mentioned before, it is on par or better than Illinois.
I suppose the real question is what does the latency/loss profile of the average Tor (bridge) user look like?
I think the real question is: is there any reason *not* to use BBR or, failing that, Illinois? So far I do not see a single one.
-- With respect, Roman