[tor-dev] Bandwidth scanner: request for feedback
mikeperry at torproject.org
Wed Aug 29 21:11:57 UTC 2018
> Juga and pastly have been working hard on sbws.
> Sbws' results are now similar to torflow's results:
Congratulations, Juga and pastly!
> Now that sbws is close to torflow, we want some feedback on its
> design. We’ll work on the design at the tor meeting in September.
> Please feel free to give feedback by email, or on the tickets:
> What happens when sbws doesn't match torflow?
> We suggest this rule:
> If an sbws deployment is within X% of an existing bandwidth
> authority, sbws is ok. (The total consensus weights of the
> existing bandwidth authorities are within 25% - 50% of each
> other, see #25459.)
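As a rough sketch of that comparison rule (the helper name and the 25%
threshold are illustrative assumptions, not sbws code):

```python
# Illustrative sketch only, not sbws code: check whether an sbws vote's
# total consensus weight is within a fraction x of an existing bwauth's.
def within_fraction(sbws_total, bwauth_total, x=0.25):
    """True if the two totals differ by at most a fraction x."""
    if bwauth_total == 0:
        return False
    return abs(sbws_total - bwauth_total) / bwauth_total <= x

print(within_fraction(120_000, 100_000, x=0.25))  # True: 20% apart
print(within_fraction(160_000, 100_000, x=0.25))  # False: 60% apart
```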
I would like an additional criterion for when we finally replace torflow with sbws.
Ideally, I would like us to perform A/B experiments to ensure that our
performance metrics do not degrade in terms of average *or* interquartile
range/performance variance (i.e. alternate torflow results for a week vs
sbws for a week, and repeat for a few weeks). I realize this might be
complicated for dirauth operators, though. Can we make it easier
somehow, so that it is easy to switch which result files they are voting
with?
If we can't do this, at minimum, we should definitely watch the change
in our average and quartile variance performance metrics when we first
switch to sbws.
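To make those metrics concrete: by "average and quartile variance" I mean
something like the mean and interquartile range of per-download times for
each arm of the experiment. A rough sketch (the sample data here is
invented purely for illustration):

```python
import statistics

def perf_summary(samples):
    """Mean and interquartile range of per-download times (seconds)."""
    q = statistics.quantiles(samples, n=4)  # [Q1, Q2, Q3]
    return statistics.mean(samples), q[2] - q[0]

# Invented example data: one week of download times per scanner.
torflow_week = [1.2, 1.4, 1.3, 2.0, 1.5, 1.6, 1.4]
sbws_week = [1.1, 1.3, 1.2, 3.5, 1.4, 1.5, 1.3]

for name, week in [("torflow", torflow_week), ("sbws", sbws_week)]:
    mean, iqr = perf_summary(week)
    print(f"{name}: mean={mean:.2f}s iqr={iqr:.2f}s")
```

A week with a lower mean but a larger IQR would count as a regression
under the variance criterion above.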
Additionally, if we ever change how sbws behaves to differ from
torflow, I would like sbws to have a well-defined load balancing
equilibrium goal, and I would like us to not change this load balancing
equilibrium goal unless we perform A/B testing and compare the average
and variance of our performance metrics.
I'll explain what I mean by "load balancing equilibrium goal" below,
when I try to explain the PID mechanism again.
> How long should sbws keep relay bandwidths?
> Torflow uses the latest self-reported relay observed bandwidth
> and bandwidth rate.
> Torflow uses a complex feedback loop for measured bandwidths.
> We think sbws can use a simple average or exponentially
> decaying weighted average.
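For reference, an exponentially decaying weighted average is just a
one-line recurrence; a sketch (the alpha value is arbitrary here):

```python
def ewma(measurements, alpha=0.5):
    """Exponentially decaying weighted average: newer samples count more.
    alpha is the weight given to the newest measurement."""
    avg = None
    for m in measurements:
        avg = m if avg is None else alpha * m + (1 - alpha) * avg
    return avg

print(ewma([1000, 1200, 800, 900]))  # 925.0
```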
As I said in my earlier message,
this feedback loop is disabled. I know you don't believe that the
bandwidth auth spec is accurate, but I'm telling you it is. There's
just a lot going on there because the bwauths have required a long
history of experimentation to get to where they are now, just as sbws is
now encountering with trying to make various measurement and scaling
decisions. (As you A/B test ways to improve performance on the live
network, you tend to accumulate a lot of options for different ways of
doing things.)
The point of the PID control stuff was to formalize the type of load
balancing equilibrium goal that the bandwidth auths are using, and to
experiment with convergence on a specific target load balancing
equilibrium point (where that target equilibrium point is "all relays
have the same spare capacity for one additional client stream"). The
problem was that when you use only this criterion, faster relays run out
of CPU, memory, or sockets before the criterion is satisfied for them.
Hence all of the circuit failure reason statistics in the code base (to
try to back off on PID control if we hit a different limiting factor
other than bandwidth).
Unfortunately, Tor does not provide enough error code feedback to
reliably determine if a relay is low on memory, sockets, or CPU. Funding
ended for the bandwidth auths before we could implement proper overload
error feedback in Tor, and we got funding for me to work on Tor Browser
instead.
With the parameters in the current consensus (currently bwauthpid=1, and
no others), the PID control is operating as only "Proportional control":
(The default values for K_i and K_d are 0, as per Section 3.6 of the
bandwidth auth spec.)
In section 3.1 of the spec, I have a proof that using "Proportional
control" (i.e. PID control with no I or D terms) is equivalent to what
we were doing in Section 2.2. This means that Section 2.2 does describe
what we are doing now.
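To make that equivalence concrete, here is a rough sketch of a single
proportional-only update (my own notation, not the spec's or torflow's
code):

```python
# Sketch of proportional-only feedback, under these assumptions:
#   error = (measured - network_avg) / network_avg   (normalized per relay)
#   new_weight = old_weight * (1 + K_p * error)
# With K_p = 1 this reduces to new_weight = old_weight * measured / network_avg,
# i.e. the ratio scaling described in Section 2.2.
def proportional_step(old_weight, measured, network_avg, k_p=1.0):
    error = (measured - network_avg) / network_avg
    return old_weight * (1 + k_p * error)

# A relay measured at twice the network average doubles its weight.
print(proportional_step(5000, measured=200, network_avg=100))  # 10000.0
```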
I left the PID code itself enabled (but in "Proportional-only" mode)
because it is cleaner, and it makes it formally clear that the bandwidth
authorities are actually measuring the difference in the ability of
relays to carry additional client traffic, and correcting for that
difference by adjusting weights in proportion to that difference. I
naively assumed that eventually Tor would get funding to implement
better feedback for CPU, memory, and socket overload. That was almost 10
years ago now.
(Incidentally, the trickiest non-bandwidth overload condition to report
was memory shortage -- a problem that effectively goes away in a
datagram Tor world with bounded queue lengths. In fact, CPU limits
could also be implemented as a congestion drop condition in a datagram
scenario, and measured indirectly via relay throughput, leaving only
socket exhaustion to report.)
I'm glad that we are exploring load balancing again, and with a modern,
simpler, and well-tested code base. That's all good. But as you make
choices about how to load balance, please have a specific goal as to
what target load balancing equilibrium point you're actually going for.
(The reason why I did not consider raw measured stream bandwidth to be a
valid equilibrium is because it is *not* measuring the total capacity of
a relay, and it does not have an equilibrium point in terms of expected
client performance or overall load balancing. In order to use raw
measurements directly as a load balancing equilibrium point, you
actually need to measure total relay throughput in some way, such as
the relay's total carried traffic rather than just the scanner's own
streams.)
> How should we scale sbws consensus weights?
> If sbws' total consensus weight is different to torflow's total
> consensus weight, how should we scale sbws?
> (The weights might differ because the measurement method is
> different, or because scanners and servers are in different
> locations.)
> In the bandwidth file spec, we suggest linear scaling.
This seems reasonable.
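A sketch of what I understand by linear scaling here (illustrative
names, not the bandwidth file spec's code):

```python
def linear_scale(sbws_weights, torflow_total):
    """Scale sbws weights by one factor so their total matches torflow's."""
    sbws_total = sum(sbws_weights.values())
    factor = torflow_total / sbws_total
    return {fp: w * factor for fp, w in sbws_weights.items()}

# sbws totals 1000 but torflow totals 2000, so every weight doubles.
scaled = linear_scale({"relayA": 300, "relayB": 700}, torflow_total=2000)
print(scaled)  # {'relayA': 600.0, 'relayB': 1400.0}
```

Since every relay is multiplied by the same factor, the relative
balance between relays is unchanged.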
I am wary of the idea of trying to use some kind of ideal distribution
for relays, as you mention in #27135. That is not something that you
can enforce in measurement without causing performance variance to
suffer tremendously. It can only be enforced by a consensus cutoff
threshold and/or relay operator incentive mechanisms.
I believe quite strongly that even if the Tor network gets faster on
average, if this comes at the cost of increased performance variance,
user experience and perceived speed of Tor will be much worse. There's
nothing more annoying than a system that is *usually* fast enough to do
what you need it to do, but fails to be fast enough for that activity at
random.
> How should we round sbws consensus weights?
> Torflow currently rounds to 3 significant figures (which is a maximum
> of 0.5%). But I suggest 2 significant figures for sbws (or max 5%),
> because:
> - tor has a daily usage cycle that varies by 10% - 20%
> - existing bandwidth authorities vary by 25% - 50%
> Proposal 276 contains a slightly more complicated rounding algorithm,
> which we may want to implement in sbws or in tor:
If we can measure relays frequently enough such that we can accurately
report the effects of Tor's daily usage cycle and adjust our weights
accordingly, then I think that retaining the ability to represent this
variance is worth the overhead.
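For reference, significant-figure rounding of a weight is
straightforward; a generic sketch (not torflow's actual rounding code):

```python
import math

def round_sig_figs(value, figs=2):
    """Round a positive consensus weight to `figs` significant figures."""
    if value <= 0:
        return 0
    exponent = math.floor(math.log10(value))
    scale = 10 ** (exponent - figs + 1)
    return round(value / scale) * scale

print(round_sig_figs(14373, figs=3))  # 14400
print(round_sig_figs(14373, figs=2))  # 14000
```

With 3 significant figures the worst-case rounding error is half a step
in the third figure, i.e. at most 0.5%, matching the quote above.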
Again, this comes back to my belief that performance variance is
actually the major performance problem facing Tor right now.
On the other hand, if we cannot measure accurately or often enough for
this to matter, then it doesn't matter.
But a successor to sbws might be able to, if we can manage to build one
sooner than
a decade from now, so it would be wise not to bake this sig fig limit
into our actual consensus format.
> Does sbws need a maximum consensus weight fraction?
> Torflow uses 5%, but I suggest 1%, because the largest relay right
> now is only 0.5%.
If we ever get working multi-core crypto+networking, this number will
need to be revisited.
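A sketch of such a cap for illustration (note that naively clamping
changes the total, so a real implementation might renormalize or
iterate afterward):

```python
def cap_weight_fraction(weights, max_fraction=0.01):
    """Clamp any relay above max_fraction of the total consensus weight.
    Illustrative only: clamping shrinks the total, so callers may want
    to renormalize or repeat until no relay exceeds the fraction."""
    total = sum(weights.values())
    cap = max_fraction * total
    return {fp: min(w, cap) for fp, w in weights.items()}

# One relay at 2% of a 100000-unit total gets clamped to 1%.
weights = {"big": 2000}
weights.update({f"r{i}": 100 for i in range(980)})  # total = 100000
print(cap_weight_fraction(weights)["big"])  # 1000.0
```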