[tor-talk] Multicore, bandwidth, relays, capacity, location
thomaswhite at riseup.net
Thu Aug 13 10:23:33 UTC 2015
Totally agree. Most of the load also falls on exits, and I can
certainly testify that no matter how much bandwidth or CPU power I
gave the tor process on an exit with fairly liberal exit policies, it
ate it all up; CPU usage rarely dropped below 90%, let alone 50%.
Adding multicore support would improve the speed of the network
because each relay could handle more load and would be less likely to
bottleneck on the CPU. That bottleneck probably sits at the exits,
which I'd also guess is where most of the latency resides. Finding the
network's choke points and ways to reduce them would be a good item to
add to the page of research options.
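To make the multicore point concrete, here is a minimal sketch (not Tor's actual architecture; the cell-processing function is a hashing stand-in for relay crypto) of spreading per-cell work across CPU cores with a process pool, so a relay's crypto is not pinned to one core:

```python
# Illustrative sketch only: distribute per-cell "crypto" work across
# cores. hashlib.sha256 stands in for the real AES/relay crypto.
from multiprocessing import Pool
import hashlib

CELL_SIZE = 512  # Tor relay cells are 512 bytes

def process_cell(cell: bytes) -> bytes:
    # Stand-in for the CPU-bound per-cell work a relay performs.
    return hashlib.sha256(cell).digest()

def process_cells_parallel(cells, workers=4):
    # Each worker handles a share of the cells, using several cores
    # instead of bottlenecking on one.
    with Pool(workers) as pool:
        return pool.map(process_cell, cells)

if __name__ == "__main__":
    cells = [bytes([i % 256]) * CELL_SIZE for i in range(1000)]
    digests = process_cells_parallel(cells)
    print(f"processed {len(digests)} cells")
```

On a CPU-bound exit, this kind of parallelism is what would let throughput scale with core count rather than single-core clock speed.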
Anyway, this is another reason I am all in favour of using hidden
services for tasks like updating Tor Browser: it reduces exit node
traffic and perhaps increases security with the end-to-end encryption
(yes, yes, I know the package signing key is checked on clearnet
downloads; I am just making a point about the use case).
Furthermore, other factors weigh on usable capacity, such as
underweighted or undermeasured relays; I imagine there are lots of
non-guard, non-exit relays out there seeing only fractional volumes of
traffic.
I think increasing the cryptographic strength of the network will also
increase CPU load, placing a higher per-packet burden on the CPU than
before. If we upgrade any of the crypto behind Tor, multicore support
should definitely be considered alongside it, since at least on my
exits the CPU was always the bottleneck, not the network.
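A rough back-of-the-envelope for that claim, using hashing as a stand-in for relay crypto (not Tor's actual ciphers; absolute numbers vary by CPU): measure the CPU cost per 512-byte cell and the single-core throughput ceiling it implies. More expensive per-cell crypto lowers that ceiling, which is exactly what multicore would offset.

```python
# Hedged illustration: per-cell CPU cost of a primitive, and the
# single-core bandwidth ceiling that cost implies. Hashes stand in
# for real relay ciphers; results depend heavily on the machine.
import hashlib
import time

CELL = b"\x00" * 512  # Tor relay cells are 512 bytes

def cost_per_cell(hash_name, n=100_000):
    """Average seconds of CPU-bound work per cell for one primitive."""
    h = getattr(hashlib, hash_name)
    start = time.perf_counter()
    for _ in range(n):
        h(CELL).digest()
    return (time.perf_counter() - start) / n

for name in ("sha256", "sha512"):
    cost = cost_per_cell(name)
    mbit_s = (512 * 8) / cost / 1e6  # (bits per cell) * (cells per second)
    print(f"{name}: {cost * 1e6:.2f} us/cell -> ~{mbit_s:.0f} Mbit/s on one core")
```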
On 13/08/2015 08:30, Roger Dingledine wrote:
> (Ugh -- please don't cross-post across lists. I'm going to pick
> the list that had the previous thread on it.)
> On Thu, Aug 13, 2015 at 03:03:10AM -0400, grarpamp wrote:
>> Tor appears maybe operating at 50% of bandwidth capacity...
>> If that's true, more bandwidth won't have any end-user effect.
> Careful! I think this is a really bad conclusion. There is no
> practical way for the Tor network to use all of its capacity, and
> we shouldn't expect it to. Normal networks start to fall apart when
> they reach around 20-30% of total capacity. The Tor network is
> still way overloaded -- just not as overloaded as before.
> First, operating at 100% capacity would imply that we somehow line
> up every relay in every circuit to never have any spare capacity.
> In practice, each circuit will have some relay that is smallest /
> most congested at the time. The goal for performance is to have
> that smallest relay be not too bad.
> Second, a network operating "at" capacity pretty much guarantees
> congestion at most relays. In an ideal world, every relay would
> have spare capacity for every circuit. Or to put it another way,
> whenever a relay runs out of bandwidth (e.g. it hits its rate
> limit), then that adds a delay to all the circuits that are still
> hoping to get traffic through it. Or to turn it around once more:
> we want to minimize the number of cases where relays are operating
> at capacity. Excess capacity is necessary for good performance.
> For a historical perspective, you might enjoy the "Why Tor is
> slow" document and video:
> https://blog.torproject.org/blog/why-tor-is-slow (Some of the
> issues have been fixed since then, and some of them have not.)
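The congestion argument quoted above can be sketched with the textbook M/M/1 queue (an assumed toy model, not a measurement of the Tor network): mean delay grows as 1 / (1 - utilization), so it stays modest at the 20-30% utilization mentioned above and explodes as a relay approaches capacity.

```python
# Toy M/M/1 queueing model: delay relative to service time diverges
# as utilization approaches 1, which is why "operating at capacity"
# guarantees congestion and why excess capacity is necessary.

def mm1_delay_factor(utilization):
    """Mean time in system relative to service time: T/S = 1 / (1 - rho)."""
    if not 0.0 <= utilization < 1.0:
        raise ValueError("utilization must be in [0, 1)")
    return 1.0 / (1.0 - utilization)

for rho in (0.2, 0.3, 0.5, 0.9, 0.99):
    print(f"utilization {rho:4.0%}: delay {mm1_delay_factor(rho):6.1f}x service time")
```

At 20-30% load a cell waits only 1.2-1.4x its service time per hop; at 99% it waits 100x, and a circuit is as slow as its most congested relay.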