Kristian,

> I am not really concerned about my IPs being blacklisted as these are normal relays, not bridges.

I suppose if you have the address space and are running your relays in a server environment, it's your prerogative. In my case, I'm running my super relay from home with limited address space, so that approach is more suited to my needs.

> In that area I am a little bit old school, and I am indeed running them manually for now. I don’t think there is a technical reason for it. It’s just me being me.

I'm a proponent of individuality. Keep being you.

Respectfully,


Gary
On Wednesday, December 29, 2021, 03:32:55 AM MST, abuse--- via tor-relays <tor-relays@lists.torproject.org> wrote:


Hi Gary,

thanks!

> As an aside... Presently, are you using a single, public address with many ports or many, public addresses with a single port for your Tor deployments? Have you ever considered putting all those Tor instances behind a single, public address:port (fingerprint) to create one super bridge/relay? I'm just wondering if it makes sense to conserve and rotate through public address space to stay ahead of the blacklisting curve?

Almost all of my dedicated servers have multiple IPv4 addresses, and you can run up to two Tor relays per IPv4 address. So the answer is: multiple IPs, on multiple different ports. A "super relay" still has no real merit for me. I am not really concerned about my IPs being blacklisted as these are normal relays, not bridges.
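For anyone following along, a minimal sketch of what two instances on one machine, each bound to its own IPv4 address, might look like (nicknames, addresses, and paths are all made up for illustration):

```
# /etc/tor/instances/relay1/torrc  (illustrative)
Nickname            ExampleRelay1
ORPort              203.0.113.10:443
OutboundBindAddress 203.0.113.10
DataDirectory       /var/lib/tor-instances/relay1

# /etc/tor/instances/relay2/torrc  (illustrative)
Nickname            ExampleRelay2
ORPort              203.0.113.11:9001
OutboundBindAddress 203.0.113.11
DataDirectory       /var/lib/tor-instances/relay2
```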

What I am doing now for new servers is running them for a week or two as bridges and only then moving them over to being relays. In the past I had not seen a lot of traffic on bridges, but this has changed very recently: yesterday I saw 200+ unique users in a 6-hour window on one of my new bridges, with close to 100 Mbit/s of sustained traffic. There appears to be an increased need right now, which I am happy to tend to.
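A sketch of that bridge-first configuration (all values illustrative; the obfs4proxy path varies by distro):

```
# torrc while the server is in its bridge phase (illustrative)
BridgeRelay 1
ORPort 443
ServerTransportPlugin obfs4 exec /usr/bin/obfs4proxy
ExtORPort auto

# To convert to a public relay later, remove BridgeRelay,
# ServerTransportPlugin, and ExtORPort, keeping a plain ORPort.
```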

> Also... Do you mind disclosing what all your screen instances are for? Are you running your Tor instances manually and not in daemon mode? "Inquiring minds want to know." 😁

In that area I am a little bit old school, and I am indeed running them manually for now. I don’t think there is a technical reason for it. It’s just me being me.

Best Regards,
Kristian


Dec 29, 2021, 01:46 by tor-relays@lists.torproject.org:
Hi Kristian,

Thanks for the screenshot. Nice machine! Not everyone is as fortunate as you when it comes to resources for their Tor deployments. A CPU-affinity option isn't high on the priority list, and, as you point out, many operating systems do a decent job of load management, with third-party tools available for affinity tuning. Still, it might be helpful for some operators to have an application-layer option to tune their implementations natively.

As an aside... Presently, are you using a single, public address with many ports or many, public addresses with a single port for your Tor deployments? Have you ever considered putting all those Tor instances behind a single, public address:port (fingerprint) to create one super bridge/relay? I'm just wondering if it makes sense to conserve and rotate through public address space to stay ahead of the blacklisting curve?

Also... Do you mind disclosing what all your screen instances are for? Are you running your Tor instances manually and not in daemon mode? "Inquiring minds want to know." 😁

As always... It is great to engage in dialogue with you.

Respectfully,


Gary


On Tuesday, December 28, 2021, 1:39:31 PM MST, abuse@lokodlare.com <abuse@lokodlare.com> wrote:


Hi Gary,

why would that be needed? Linux has a pretty good thread scheduler imo and will shuffle loads around as needed.

Even Windows' thread scheduler is quite decent these days and tools like "Process Lasso" exist if additional fine tuning is needed.

Attached is one of my servers running multiple tor instances on a 12-core/24-thread platform. The load is spread quite evenly across all cores.

Best Regards,
Kristian


Dec 27, 2021, 22:08 by tor-relays@lists.torproject.org:
BTW... I just fact-checked my postscript, and the CPU-affinity configuration I was thinking of is for Nginx (not Tor). Tor should consider adding a CPU-affinity configuration option. What happens if you configure additional Tor instances on the same machine (my Tor instances are on different machines) and start them up? Do they bind to a different CPU core or the same one?
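For what it's worth, on Linux affinity can already be set from outside tor with taskset(8). Below is a hypothetical invocation for a second instance (path made up), plus a small runnable demonstration that pinning shrinks the CPU set a process can see:

```shell
# Illustrative only -- pin a second tor instance to cores 2 and 3:
#   taskset -c 2,3 tor -f /etc/tor/instances/relay2/torrc

# Runnable demonstration: nproc honors the affinity mask, so pinning
# to a single core makes only one CPU visible to the child process.
visible=$(taskset -c 0 nproc)
echo "cores visible when pinned to core 0: $visible"
```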

Respectfully,


Gary


On Monday, December 27, 2021, 2:44:59 PM MST, Gary C. New via tor-relays <tor-relays@lists.torproject.org> wrote:


David/Roger:

Search the tor-relays mail archive for my previous responses on load balancing Tor relays, which I've been doing successfully for the past 6 months with Nginx (it's possible with HAProxy as well). I haven't had time to implement it with a Tor bridge, but I assume it will be very similar. Keep in mind it's critical to configure each Tor instance to use the same DirectoryAuthority and to disable the upstream timeouts in Nginx/HAProxy.
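For readers who haven't dug through the archive, the general shape of such an Nginx setup is roughly this; it is a sketch under assumed addresses and ports, not Gary's actual config, with `proxy_timeout` raised high to approximate "disable the upstream timeouts":

```
# nginx.conf, stream context (illustrative)
stream {
    upstream tor_or {
        server 10.0.0.11:9001;   # tor instance 1
        server 10.0.0.12:9001;   # tor instance 2
    }
    server {
        listen 443;              # public ORPort the relay advertises
        proxy_pass tor_or;
        proxy_timeout 24h;       # keep long-lived OR connections alive
    }
}
```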

Happy Tor Load Balancing!

Respectfully,


Gary

P.S. I believe there's a torrc config option to specify which CPU core a given Tor instance should use, too.


On Monday, December 27, 2021, 2:00:50 PM MST, Roger Dingledine <arma@torproject.org> wrote:


On Mon, Dec 27, 2021 at 12:05:26PM -0700, David Fifield wrote:
> I have the impression that tor cannot use more than one CPU core -- is that
> correct? If so, what can be done to permit a bridge to scale beyond
> 1×100% CPU? We can fairly easily scale the Snowflake-specific components
> around the tor process, but ultimately, a tor client process expects to
> connect to a bridge having a certain fingerprint, and that is the part I
> don't know how to easily scale.
>
> * Surely it's not possible to run multiple instances of tor with the
>  same fingerprint? Or is it? Does the answer change if all instances
>  are on the same IP address? If the OR ports are never used?

Good timing -- Cecylia pointed out the higher load on Flakey a few days
ago, and I've been meaning to post a suggestion somewhere. You actually
*can* run more than one bridge with the same fingerprint. Just set it
up in two places, with the same identity key, and then whichever one the
client connects to, the client will be satisfied that it's reaching the
right bridge.
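Concretely, "same identity key" means copying the identity key material from the first instance's DataDirectory into the second's before starting it. Here is a self-contained sketch with placeholder paths and placeholder key files; the real files are generated by tor on first start, typically under DataDirectory/keys (e.g. /var/lib/tor/keys):

```shell
# Placeholder directories standing in for each instance's DataDirectory/keys:
src=bridge-a/keys
dst=bridge-b/keys
mkdir -p "$src" "$dst"

# Stand-ins for the real key files so this sketch is self-contained;
# in practice tor creates these on first start and you only copy them.
printf 'rsa-identity\n'     > "$src/secret_id_key"
printf 'ed25519-identity\n' > "$src/ed25519_master_id_secret_key"

# The actual operation: copy both identity keys and keep them private.
cp -p "$src/secret_id_key" "$src/ed25519_master_id_secret_key" "$dst/"
chmod 700 "$dst" && chmod 600 "$dst"/*
ls "$dst"   # lists the two copied key files
```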

There are two catches to the idea:

(A) Even though the bridges will have the same identity key, they won't
have the same circuit-level onion key, so it will be smart to "pin"
each client to a single bridge instance -- so when they fetch the bridge
descriptor, which specifies the onion key, they will continue to use
that bridge instance with that onion key. Snowflake in particular might
also want to pin clients to specific bridges because of the KCP state.

(Another option, instead of pinning clients to specific instances,
would be to try to share state among all the bridges on the backend,
e.g. so they use the same onion key, can resume the same KCP sessions,
etc. This option seems hard.)

(B) It's been a long time since anybody tried this, so there might be
surprises. :) But it *should* work, so if there are surprises, we should
try to fix them.

This overall idea is similar to the "router twins" idea from the distant
distant past:

> * Removing the fingerprint from the snowflake Bridge line in Tor Browser
>  would permit the Snowflake proxies to round-robin clients over several
>  bridges, but then the first hop would be unauthenticated (at the Tor
>  layer). It would be nice if it were possible to specify a small set of
>  permitted bridge fingerprints.

This approach would also require clients to pin to a particular bridge,
right? Because of the different state that each bridge will have?

--Roger


_______________________________________________
tor-relays mailing list
tor-relays@lists.torproject.org
https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-relays