[tor-relays] How to reduce tor CPU load on a single bridge?

Gary C. New garycnew at yahoo.com
Mon Dec 27 21:38:23 UTC 2021


David/Roger:
Search the tor-relays mailing list archive for my previous posts on load balancing Tor relays, which I've been doing successfully for the past six months with Nginx (it's possible with HAProxy as well). I haven't had time to try it with a Tor bridge yet, but I expect it will be very similar. Keep in mind that it's critical to configure each Tor instance to use the same DirAuthority settings and to disable the upstream timeouts in Nginx/HAProxy.
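
Roughly, the Nginx side looks something like the sketch below. The ports, instance count, and timeout value are illustrative placeholders rather than my production config; the point is to fan one public ORPort out to several local Tor instances and to keep Nginx from timing out long-lived OR connections.

    stream {
        upstream tor_or {
            # Backend Tor instances (ports are placeholders).
            server 127.0.0.1:9101;
            server 127.0.0.1:9102;
            server 127.0.0.1:9103;
        }

        server {
            # Public ORPort that the outside world connects to.
            listen 443;
            proxy_pass tor_or;
            # Raise the idle timeout well past the 10-minute default so
            # quiet OR connections aren't cut off.
            proxy_timeout 24h;
        }
    }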
Happy Tor Loadbalancing!
Respectfully,

Gary
P.S. I believe there's a torrc config option to specify which cpu core a given Tor instance should use, too. 
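If it turns out there isn't such an option, OS-level pinning does the same job -- for example with taskset (core numbers and torrc paths below are placeholders):

    # Pin each Tor instance to its own CPU core (illustrative values).
    taskset -c 0 tor -f /etc/tor/instance1/torrc
    taskset -c 1 tor -f /etc/tor/instance2/torrc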

On Monday, December 27, 2021, 2:00:50 PM MST, Roger Dingledine <arma at torproject.org> wrote:

On Mon, Dec 27, 2021 at 12:05:26PM -0700, David Fifield wrote:
> I have the impression that tor cannot use more than one CPU core -- is that
> correct? If so, what can be done to permit a bridge to scale beyond
> 1×100% CPU? We can fairly easily scale the Snowflake-specific components
> around the tor process, but ultimately, a tor client process expects to
> connect to a bridge having a certain fingerprint, and that is the part I
> don't know how to easily scale.
> 
> * Surely it's not possible to run multiple instances of tor with the
>  same fingerprint? Or is it? Does the answer change if all instances
>  are on the same IP address? If the OR ports are never used?

Good timing -- Cecylia pointed out the higher load on Flakey a few days
ago, and I've been meaning to post a suggestion somewhere. You actually
*can* run more than one bridge with the same fingerprint. Just set it
up in two places, with the same identity key, and then whichever one the
client connects to, the client will be satisfied that it's reaching the
right bridge.
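
Concretely, the setup I have in mind is roughly the following -- an untested sketch, with placeholder DataDirectories and ports:

    # torrc for instance 1
    DataDirectory /var/lib/tor-instance1
    ORPort 9101
    BridgeRelay 1

    # torrc for instance 2
    DataDirectory /var/lib/tor-instance2
    ORPort 9102
    BridgeRelay 1

    # Before starting instance 2, copy instance 1's identity keys into
    # /var/lib/tor-instance2/keys/ so both instances publish the same
    # fingerprint:
    #   secret_id_key
    #   ed25519_master_id_secret_key
    #   ed25519_signing_secret_key
    #   ed25519_signing_cert

The per-instance onion keys (the secret_onion_key* files) will still differ, which is what leads to catch (A) below.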

There are two catches to the idea:

(A) Even though the bridges will have the same identity key, they won't
have the same circuit-level onion key, so it will be smart to "pin"
each client to a single bridge instance -- so when they fetch the bridge
descriptor, which specifies the onion key, they will continue to use
that bridge instance with that onion key. Snowflake in particular might
also want to pin clients to specific bridges because of the KCP state.

(Another option, instead of pinning clients to specific instances,
would be to try to share state among all the bridges on the backend,
e.g. so they use the same onion key, can resume the same KCP sessions,
etc. This option seems hard.)
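
(If the backend instances sit behind a plain TCP load balancer, one rough way to get that pinning is consistent hashing on the connecting address, sketched below with Nginx's stream module. Ports are placeholders, and it only helps where the balancer actually sees a stable per-client address, which may not hold once Snowflake proxies are in the path.)

    stream {
        upstream tor_bridges {
            # Hash the source address so a given client keeps landing on
            # the same backend instance (and therefore the same onion key).
            hash $remote_addr consistent;
            server 127.0.0.1:9101;
            server 127.0.0.1:9102;
        }

        server {
            listen 443;
            proxy_pass tor_bridges;
        }
    }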

(B) It's been a long time since anybody tried this, so there might be
surprises. :) But it *should* work, so if there are surprises, we should
try to fix them.

This overall idea is similar to the "router twins" idea from the distant
distant past:
https://lists.torproject.org/pipermail/tor-dev/2002-July/001122.html
https://lists.torproject.org/pipermail/tor-commits/2003-October/024388.html
https://lists.torproject.org/pipermail/tor-dev/2003-August/000236.html

> * Removing the fingerprint from the snowflake Bridge line in Tor Browser
>  would permit the Snowflake proxies to round-robin clients over several
>  bridges, but then the first hop would be unauthenticated (at the Tor
>  layer). It would be nice if it were possible to specify a small set of
>  permitted bridge fingerprints.

This approach would also require clients to pin to a particular bridge,
right? Because of the different state that each bridge will have?
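
(For context, the client-side Bridge line being discussed looks roughly like the placeholder below; the address, fingerprint, and broker URL are illustrative, not the real Snowflake values. Dropping the 40-hex-character fingerprint is what would let proxies spread clients across several bridges, at the cost of the unauthenticated first hop described above.)

    Bridge snowflake 192.0.2.3:1 AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA url=https://broker.example/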

--Roger

_______________________________________________
tor-relays mailing list
tor-relays at lists.torproject.org
https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-relays
  