Mike Perry:
teor:
I have an alternative proposal:
Let's deploy sbws to half the bandwidth authorities, wait 2 weeks, and see if exit bandwidths improve.
We should measure the impact of this change using the tor-scaling measurement criteria. (And we should make sure it doesn't conflict with any other tor-scaling changes.)
I like this plan. To tightly control for emergent effects of all-sbws vs all-torflow, ideally we'd switch back and forth between all-sbws and all-torflow on a synchronized schedule. But this requires running enough sbws and torflow measurement instances that each authority can choose either the sbws file or the torflow file on some schedule. That may be tricky to coordinate, but it would be the most rigorous way to do this.
We could do a version of this based on votes/bwfiles alone, without making dirauths toggle back and forth. However, that would not capture emergent effects (such as quicker bandwidth adjustments in sbws due to its decisions to pair relays with faster ones during measurement). Still, even comparing just votes would be better than nothing.
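To make the vote-only comparison concrete, here is a rough Python sketch. It assumes we have two vote documents saved to disk (the filenames below are made up: one from an sbws-voting authority, one from a torflow-voting authority), and it relies only on the documented "r"/"w" line format from dir-spec to key each Measured= value by relay identity:

    import re

    def measured_by_relay(vote_path):
        """Map relay identity -> Measured bandwidth from a v3 vote document."""
        measured = {}
        identity = None
        with open(vote_path) as f:
            for line in f:
                if line.startswith('r '):
                    identity = line.split()[2]  # base64 identity hash
                elif line.startswith('w ') and identity is not None:
                    m = re.search(r'Measured=(\d+)', line)
                    if m:
                        measured[identity] = int(m.group(1))
        return measured

    # Hypothetical filenames, one per measurement implementation.
    sbws = measured_by_relay('sbws_authority.vote')
    torflow = measured_by_relay('torflow_authority.vote')

    # Per-relay sbws/torflow ratio, over relays measured by both.
    common = set(sbws) & set(torflow)
    ratios = sorted(sbws[r] / torflow[r] for r in common if torflow[r] > 0)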
For this experiment, my metric of choice would be "Per-Relay Spare Network Capacity CDF" (see https://trac.torproject.org/projects/tor/wiki/org/roadmaps/CoreTor/Performan...), for both the overall consensus and every authority's vote. It would also be useful to generate separate flag breakdowns of this CDF (i.e. produce separate CDFs for Guard-only, Middle-only, Exit-only, and Guard+Exit relays).
In this way, we would have graphs of how the votes and the consensus distribute the difference between self-reported and measured values across the network.
Arg, I misspoke here. The metric from that performance experiment page is the difference between peak observed bandwidth and bandwidth history. It will still be interesting for measuring load balancing effects, but it does not directly involve the measured values. We may also want a metric that directly compares properties of the measured vs advertised values. See below.
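For reference, a rough sketch of how that spare-capacity CDF (peak observed bandwidth minus the bandwidth-history average) with per-flag breakdowns could be computed. The `relays` records are a hypothetical intermediate format; extracting the peak observed bandwidth and history averages from descriptors and extra-info documents is left out:

    import numpy as np

    def flag_class(flags):
        """Bucket a relay into the position classes mentioned above."""
        guard, exit_ = 'Guard' in flags, 'Exit' in flags
        if guard and exit_:
            return 'Guard+Exit'
        if guard:
            return 'Guard-only'
        if exit_:
            return 'Exit-only'
        return 'Middle-only'

    def spare_capacity_cdfs(relays):
        """Return {flag class: (sorted spare capacities, CDF y-values)}."""
        by_class = {}
        for r in relays:
            # Spare capacity: peak observed bw minus recent history average.
            spare = r['peak_observed'] - r['history_avg']  # bytes/sec
            by_class.setdefault(flag_class(r['flags']), []).append(spare)
        return {cls: (np.sort(np.array(vals)),
                      np.arange(1, len(vals) + 1) / len(vals))
                for cls, vals in by_class.items()}

    # Example with made-up relays:
    relays = [
        {'peak_observed': 5e6, 'history_avg': 2e6, 'flags': ['Guard']},
        {'peak_observed': 8e6, 'history_avg': 7e6, 'flags': ['Exit', 'Fast']},
    ]
    for cls, (xs, ys) in spare_capacity_cdfs(relays).items():
        print(cls, xs, ys)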
With these metrics, we should be able to pinpoint any major disagreements between how relays are measured and their self-reported values. (In the past, Karsten produced very similar sets of CDFs of just the measured values per vote when we were updating bwauths, and we compared the shapes of the measured CDFs, but I think graphing the difference is more comprehensive.)
We should also keep an eye on CDF-DL and the failure rainbow metrics, as they may be indirectly affected by improvements or regressions in load balancing, but I think the distribution of "spare capacity" is the first-order metric we want.
Do you like these metrics? Do you think we should be using different ones? Should we try a few different metrics and see what makes sense based on the results?
As additional metrics, we could do the CDFs of the ratio of measured bw to advertised bw, and/or the metrics Karsten produced using just measured bw. (I still can't find the ticket where those were graphed during previous torflow updates, though.)
These metrics would be fairly specific to torflow/sbws experiments, but if we have enough such experiments in the pipeline (such as changes to the scaling factor), they may be worth tracking over time.
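A sketch of the measured-to-advertised ratio CDF, assuming `measured` comes from a vote as in the earlier sketch, and `advertised` maps each relay to its advertised bandwidth (min of the configured rate and observed bandwidth from its server descriptor); both inputs are hypothetical here:

    import numpy as np

    def ratio_cdf(measured, advertised):
        """CDF of measured/advertised bandwidth over relays present in both."""
        common = set(measured) & set(advertised)
        ratios = np.sort(np.array([measured[r] / advertised[r]
                                   for r in common if advertised[r] > 0]))
        return ratios, np.arange(1, len(ratios) + 1) / len(ratios)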