Hi,
On 11/01/2020 05:07, Matt Corallo wrote:
> Sadly, the large scale deployments of BBR are mostly not high-latency links (as CDNs generally have a nearby datacenter for you to communicate with), and the high retransmission rates may result in more “lag” for browsing when absolute bandwidth isn’t the primary concern. On the flip side, Spotify’s measurements seem to indicate that, at least in some cases, the jitter can decrease enough to be noticeable for users.
BBR is good for Netflix, but not so good for non-streaming traffic. You also get fairness issues between competing flows, which don't matter for Netflix (typically you only watch one video at a time) but would matter for Tor.
We don't have good models of what Tor traffic looks like, but I strongly suspect it is different to the typical Netflix/YouTube workloads.
> Is there a way we could do measurements of packet loss/latency profiles of bridge users? This should enable simulation for things like this, but it sounds like there’s no good existing work in this domain?
We have two tools that build simulated/emulated Tor networks: chutney and shadow. Unfortunately, neither implements everything that would be required here. What we really want to see is what happens when x% of the network switches congestion control algorithm, and how flows then interact at large relays (either on relay-to-relay connections or on guard connections).
If you have a large OpenStack cluster available, you could use your favorite orchestration tool to set up a number of VMs with emulated WAN links between them, connect a bunch of Tor clients to that network in other VMs, and perform measurements.
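As a rough sketch, the WAN impairment on each inter-VM link could be emulated with netem on the link's interface (the interface name and the delay/jitter/loss/rate figures below are illustrative placeholders, not measured values):

```shell
# Run as root on the VM acting as one end of the emulated WAN link.
# eth0 and all numbers are placeholders; substitute measured profiles.
tc qdisc add dev eth0 root netem delay 100ms 20ms loss 1% rate 10mbit

# Inspect the configured impairment, and remove it when done:
tc qdisc show dev eth0
tc qdisc del dev eth0 root
```

netem only shapes egress, so for a symmetric link you would apply a matching qdisc on the interface at the other end as well.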
Last time I looked, you could not switch the TCP congestion control algorithm per network namespace in Linux (maybe you can now, in which case you would not need multiple VMs).
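For what it's worth, there is a long-standing per-socket override via the TCP_CONGESTION socket option, which sidesteps the namespace question for applications you control. A minimal sketch (Linux only; unprivileged processes may only select algorithms listed in /proc/sys/net/ipv4/tcp_allowed_congestion_control, which is "reno cubic" by default):

```python
import socket

# Request a specific congestion control algorithm for this socket only,
# independent of the system-wide net.ipv4.tcp_congestion_control sysctl.
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.setsockopt(socket.IPPROTO_TCP, socket.TCP_CONGESTION, b"reno")

# Read back what the kernel actually selected for this socket.
raw = s.getsockopt(socket.IPPROTO_TCP, socket.TCP_CONGESTION, 16)
algo = raw.split(b"\x00")[0].decode()
print(algo)  # "reno"
s.close()
```

A Tor-controlled socket option like this would let an experiment flip x% of connections without touching the hosts' global configuration.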
Generally I would recommend *not* changing from TCP CUBIC unless you really understand the interactions going on between flows.
Thanks,
Iain.