[tor-dev] Raising AuthDirMaxServersPerAddr to 4?

teor teor at riseup.net
Mon Jun 3 22:38:29 UTC 2019


Hi Mike,

> On 4 Jun 2019, at 06:20, Mike Perry <mikeperry at torproject.org> wrote:
> 
> Mike Perry:
>> teor:
>>> I have an alternative proposal:
>>> 
>>> Let's deploy sbws to half the bandwidth authorities, wait 2 weeks, and
>>> see if exit bandwidths improve.
>>> 
>>> We should measure the impact of this change using the tor-scaling
>>> measurement criteria. (And we should make sure it doesn't conflict
>>> with any other tor-scaling changes.)
>> 
>> I like this plan. To tightly control for emergent effects of all-sbws vs
>> all-torflow, ideally we'd switch back and forth between all-sbws and
>> all-torflow on a synchronized schedule, but this requires getting enough
>> measurement instances of sbws and torflow for authorities to choose
>> either the sbws file, or the torflow file, on some schedule. May be
>> tricky to coordinate, but it would be the most rigorous way to do this.
>> 
>> We could do a version of this based on votes/bwfiles alone, without
>> making dirauths toggle back and forth. However, this would not capture
>> emergent effects (such as quicker bw adjustments in sbws due to decisions
>> to pair relays with faster ones during measurement). Still, even
>> comparing just votes would be better than nothing.

I don't know how feasible this is: we would need two independent network
connections at each bandwidth scanner host, one for sbws, and one for
torflow.

(Running two scanners on the same connection means that they compete
for bandwidth. Perhaps we could use Tor's BandwidthRate to share the
bandwidth.)

I also don't know how many authority operators are able to run sbws:
Roger might be stuck on Python 2.

And I don't know how often they will be able to switch configs.

Let's make some detailed plans with the dirauth list.

>> For this experiment, my metric of choice would be "Per-Relay Spare
>> Network Capacity CDF" (see
>> https://trac.torproject.org/projects/tor/wiki/org/roadmaps/CoreTor/PerformanceExperiments#MetricsDefinitions),
>> for both the overall consensus, and every authority's vote. It would
>> also be useful to generate separate flag breakdowns of this CDF (ie
>> produce separate CDFs for Guard-only, Middle-only, Exit-only, and
>> Guard+Exit-only relays).
>> 
>> In this way, we get, for each vote and for the consensus, a graph of
>> the distribution of the difference between self-reported and measured
>> values across the network.
> 
> Arg, I misspoke here. The metric from that performance experiment page
> is the difference between peak observed bandwidth and bw history. This
> will still be useful for measuring load balancing effects, but it does
> not directly involve the measured values. We may also want a metric
> that directly compares properties of the measured vs advertised values.
> See below.
> 
>> We should be able to pinpoint any major
>> disagreements in how relays are measured compared to their self-reported
>> values with these metrics. (In the past, Karsten produced very similar
>> sets of CDFs of just the measured values per vote when we were updating
>> bwauths, and we compared the shape of the measured CDF, but I think
>> graphing the difference is more comprehensive).
>> 
>> We should also keep an eye on CDF-DL and the failure rainbow metrics, as
>> they may be indirectly affected by improvements/regressions in load
>> balancing, but I think the distribution of "spare capacity" is the first
>> order metric we want.

Yes, I agree: some idea of client bandwidth and latency is important.
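
To make the per-flag breakdown concrete, here's a rough sketch (Python,
untested) of how we could compute the spare-capacity values, i.e. peak
observed bandwidth minus recent bandwidth history, per flag group. The
input format and field names are placeholders for whatever we extract
from the descriptors, e.g. with stem:

    from collections import defaultdict

    def spare_capacity_cdfs(relays):
        # Each record is assumed to look like:
        #   {'flags': set of flag strings,
        #    'observed_bw': peak observed bandwidth (bytes/s),
        #    'consumed_bw': recent bandwidth history average (bytes/s)}
        groups = defaultdict(list)
        for r in relays:
            flags = r['flags']
            if 'Guard' in flags and 'Exit' in flags:
                key = 'Guard+Exit'
            elif 'Exit' in flags:
                key = 'Exit-only'
            elif 'Guard' in flags:
                key = 'Guard-only'
            else:
                key = 'Middle-only'
            # Spare capacity: peak observed bandwidth minus recent use.
            groups[key].append(max(r['observed_bw'] - r['consumed_bw'], 0))
        # Sorted values per group; plotting value vs rank/len gives the CDF.
        return {key: sorted(vals) for key, vals in groups.items()}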

>> Do you like these metrics? Do you think we should be using different
>> ones? Should we try a few different metrics and see what makes sense
>> based on the results?
> As additional metrics, we could do the CDFs of the ratio of measured bw
> to advertised bw, and/or the metrics Karsten produced using just
> measured bw. (I still can't find the ticket where those were graphed
> during previous torflow updates, though).
> 
> These metrics would be pretty unique to torflow/sbws experiments, but if
> we have enough of those in the pipeline (such as changes to the scaling
> factor), they may be worth tracking over time.
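
That ratio CDF should be straightforward once we can parse the votes and
bandwidth files; roughly (untested, with placeholder field names:
'measured_bw' from a vote's bandwidth file, 'advertised_bw' from the
descriptor):

    def measured_to_advertised_ratios(relays):
        # Skip relays without a measurement or with a zero advertised value.
        ratios = []
        for r in relays:
            if r.get('measured_bw') and r.get('advertised_bw'):
                ratios.append(r['measured_bw'] / r['advertised_bw'])
        return sorted(ratios)  # value vs rank/len gives the CDF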

If we get funding for sbws experiments, we can definitely tweak the sbws
scaling parameters, and do some experiments.

At the moment, I'd like to focus on fixing critical sbws issues, deploying
sbws, and making sure it works at least as well as torflow.

T

