On Sat, Jan 29, 2022 at 02:54:40AM +0000, Gary C. New via tor-relays wrote:
From your documentation, it sounds like you're running everything on the same machine? When expanding to additional machines, you'll have to expand the usable ports as well, similar to the file-limit issue.
I don't think I understand your point. At 64K simultaneous connections, you run out of source ports to keep the connection 4-tuple unique, but I don't see how using the same or different hosts makes a difference in that respect.
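(To make the arithmetic concrete: the 4-tuple is source IP, source port, destination IP, destination port. With a single local address, only the source port varies, so a range like the 15000-64000 one suggested below allows at most 64000 - 15000 + 1 = 49001 simultaneous connections to any one destination address and port.)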
On many Linux distros, the default ip_local_port_range is 32768 to 61000.
The Tor Project recommends increasing it.
# echo 15000 64000 > /proc/sys/net/ipv4/ip_local_port_range
Thanks, that's a good tip. I added it to the installation guide.
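To make it survive a reboot, the equivalent /etc/sysctl.conf line (assuming a stock sysctl setup) is:

net.ipv4.ip_local_port_range = 15000 64000

which can be applied immediately with sysctl -p.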
I'm not using nyx. I'm just looking at the bandwidth on the network interface.
If you have time, would you mind installing nyx to validate observed similarities/differences between our load-balanced configurations?
I don't have plans to do that.
I'm glad to hear you think the IPv6 reporting is a false negative. Does this mean there's something wrong with IPv6 Heartbeat reporting?
I don't know if it's wrong, exactly. It's reporting something different than what ExtORPort is providing. The proximate connections to tor are indeed all IPv4.
I see. Perhaps IPv6 connections are less common and take more time to ramp up?
No, it's not that. The bridge has plenty of connections from clients that use an IPv6 address, as the bridge-stats file shows:
```plain
bridge-ip-versions v4=15352,v6=1160
```
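(Those numbers are counts of unique client addresses, rounded up to multiples of 8; taken at face value, about 7% of clients are on IPv6.)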
It's just that, unlike the direct TCP connection you would have with a guard relay, the client connections pass through a chain of proxies and processes on the way to tor: client → Snowflake proxy → snowflake-server WebSocket server → extor-static-cookie adapter → tor. The last link in the chain is IPv4, and evidently that is what the heartbeat log reports. The client's actual IP address is tunnelled, for metrics purposes, through this chain of proxies and processes to tor, using a special protocol called ExtORPort (see USERADDR at https://gitweb.torproject.org/torspec.git/tree/proposals/196-transport-contr...). It looks like the bridge-stats descriptor pays attention to the USERADDR information and the heartbeat log does not, that's all.
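If it helps to visualize what travels over that last link, here is a minimal sketch in Go (my own illustration, not the actual snowflake-server code) of the USERADDR framing as I read proposal 196: a 2-byte command and a 2-byte body length, both big-endian, followed by an ADDR:PORT string:

```go
package main

import (
	"encoding/binary"
	"fmt"
)

// cmdUserAddr is the USERADDR command ID from proposal 196.
const cmdUserAddr = 0x0001

// frameUserAddr builds one Extended ORPort message: a 2-byte command,
// a 2-byte body length, then the body, all big-endian. The body is the
// client's real "ADDR:PORT", which is what feeds bridge-ip-versions.
func frameUserAddr(addrPort string) []byte {
	body := []byte(addrPort)
	msg := make([]byte, 4+len(body))
	binary.BigEndian.PutUint16(msg[0:2], cmdUserAddr)
	binary.BigEndian.PutUint16(msg[2:4], uint16(len(body)))
	copy(msg[4:], body)
	return msg
}

func main() {
	// A hypothetical IPv6 client, as snowflake-server might report it.
	fmt.Printf("% x\n", frameUserAddr("[2001:db8::1]:52004"))
}
```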
After expanding my reading of your related "issues," I see that your VPS provider only offers up to 8 cores. Is it possible to spin up another VPS environment, with the same provider, on a separate VLAN, allowing route/firewall access between the two VPS environments? That way you could test load balancing a Tor Bridge over a local network using multiple virtual environments.
Yes, there are many other potential ways to further expand the deployment, but I do not have much interest in that topic right now. I started the thread for help with a non-obvious point, namely getting past the bottleneck of a single-core tor process. I think that we have collectively found a satisfactory solution for that. The steps after that for further scaling are relatively straightforward, I think. Running one instance of snowflake-server on one host and all the instances of tor on a nearby host is a logical next step.
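To sketch what I mean (all names, paths, addresses, and ports here are invented for illustration), each tor instance on the second host would get its own minimal torrc along these lines:

BridgeRelay 1
DataDirectory /var/lib/tor-instances/snowflake1
ORPort auto
ExtORPort 192.0.2.20:10001

with the snowflake-server host's upstream connections fanned out across the instances' ExtORPort addresses.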