https://metrics.torproject.org/rs.html#search/185.220.101. We are 5 relay orgs sharing a /24. It would be nice if you shared the subnet with 1-2 other relay operators.
Logistically, how do you, or how would you recommend to, share a /24 across more than one organization? Lots of ideas/questions: are the operators required to be in the same data center, and/or the same rack, or in nearby data centers with a dedicated/private connection? Different servers next to each other? A local router doing BGP for something like a /26 to specific servers, or statically mapping a /26 to specific server ports?
It doesn't seem possible to share a /24 across separate internet connections via BGP, since announcements smaller than a /24 are filtered in the default-free zone...
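A minimal sketch of that constraint, assuming the common operational convention that IPv4 announcements longer than /24 are filtered by most default-free-zone networks (it is a filtering convention among operators, not a BGP protocol limit):

    # Sketch: check whether an IPv4 prefix is likely to be accepted in the
    # default-free zone (DFZ). The /24 cutoff is a widespread filtering
    # convention among transit providers, not a protocol rule.
    import ipaddress

    DFZ_MAX_PREFIXLEN = 24  # common IPv4 filter threshold (assumption)

    def likely_globally_routable(prefix: str) -> bool:
        net = ipaddress.ip_network(prefix, strict=True)
        return net.version == 4 and net.prefixlen <= DFZ_MAX_PREFIXLEN

    print(likely_globally_routable("185.220.101.0/24"))  # True
    print(likely_globally_routable("185.220.101.0/26"))  # False: would be filtered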
On Wednesday, July 10th, 2024 at 6:27 AM, lists@for-privacy.net <lists@for-privacy.net> wrote:
On Wednesday, 10 July 2024 00:32:04 CEST, Osservatorio Nessuno via tor-relays wrote:
We are planning to get some hardware to run a physical Tor exit node, starting with a 1 Gbps dedicated, unmetered uplink (10 Gbps downlink). We will also route a /24 to it, so we will have a large pool of addresses to run multiple instances. We have been running a few exit nodes so far, but never on our own hardware.
Your bottleneck is the 1G uplink. For comparison, I have 2x Xeon E5-2680v2 (10C/20T each), 256 GB RAM and 2x 10G NICs (LACP bond), and I cannot achieve 10G throughput with that. As a rule of thumb, I would always count one instance per thread or core: I have 40 threads and 40 Tor exit instances.
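As an illustration of that one-instance-per-thread rule, here is a hedged sketch that generates per-instance torrc files. The prefix, nicknames, port and paths are made up for the example; Nickname, Address, OutboundBindAddress, ORPort, DataDirectory and ExitRelay are standard tor options:

    # Generate torrc fragments for N exit instances, one per CPU thread,
    # each bound to its own IPv4 address from the routed block.
    # All names, paths and the /26 here are illustrative, not from the post.
    import ipaddress, os

    THREADS = 40  # one instance per thread (rule of thumb above)
    ips = list(ipaddress.ip_network("185.220.101.0/26").hosts())[:THREADS]

    for n, ip in enumerate(ips):
        torrc = f"""\
    Nickname myexit{n:02d}
    Address {ip}
    OutboundBindAddress {ip}
    ORPort {ip}:443
    DataDirectory /var/lib/tor-instances/{n:02d}
    ExitRelay 1
    """
        os.makedirs(f"/etc/tor/instances/{n:02d}", exist_ok=True)
        with open(f"/etc/tor/instances/{n:02d}/torrc", "w") as f:
            f.write(torrc)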
F3Netze has the hardware specified in its contact info: https://metrics.torproject.org/rs.html#search/185.220.100.
What is the bandwidth limit per core/Tor instance? Or what can we expect to be the bottleneck?
That depends on the CPU clock speed. Fast Ryzens or EPYCs can do 50-70 MiB/s per core/instance.
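Some back-of-the-envelope math based on those figures (only the 50-70 MiB/s per-instance estimate above is assumed; the rest is unit conversion):

    # Rough sizing: how many Tor instances does a 1 Gbps uplink support,
    # using the 50-70 MiB/s per-instance estimate quoted above?
    UPLINK_GBPS = 1.0
    uplink_mib_s = UPLINK_GBPS * 1e9 / 8 / 2**20  # ~119 MiB/s at line rate

    for per_instance in (50, 70):  # MiB/s per instance (estimate above)
        needed = uplink_mib_s / per_instance
        print(f"{per_instance} MiB/s per instance -> "
              f"~{needed:.1f} instances saturate the 1G link")
    # Roughly 2-3 busy instances are enough to fill 1 Gbps; additional
    # instances mainly add CPU headroom rather than throughput.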
Due to some other requirements we have for our experiments (SFP ports, coreboot support, etc.), we can mainly choose between these two CPUs: the Intel i5-1235U and the Intel i7-1255U.
The cost difference between the two models is significant, so in our case we would pick the i7 only if it's really useful.
In both cases with 32 GB of DDR5 RAM (we can max out at 64 GB if needed, but is that necessary?).
Should this allow us to saturate the uplink?
Guards need more resources than exits since the introduction of congestion control, and because of DDoS attacks I would use 64 GB RAM for a guard. With your IP space and 1G uplink, I would take the i5 with 32 GB, save the money, and maybe add a second server later. Or, if you build the hardware yourself, look for a used EPYC or Ryzen server: 16 or 32 cores with a high base clock. Used server hardware from a data center is like new.
To summarize: with this bandwidth, this hardware and a /24, how many Tor exit nodes would it be ideal to run, considering that each of them could have its own address?
https://metrics.torproject.org/rs.html#search/185.220.101. We are 5 relay orgs sharing a /24, currently with 5x 2x10G (or 25G) uplinks. Now that 8 relays per IP are allowed, over 2,000 instances can run in a /24 subnet. It would be nice if you shared the subnet with 1-2 other relay operators.
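The arithmetic behind the 2,000+ figure, assuming every one of the 256 addresses in the /24 is usable for relays and the 8-relays-per-IP limit mentioned above:

    # Capacity of a /24 for Tor instances, assuming every address is usable
    # and the 8-relays-per-IP limit quoted above.
    import ipaddress

    net = ipaddress.ip_network("185.220.101.0/24")
    RELAYS_PER_IP = 8                  # current per-IP limit (per the post)

    addresses = net.num_addresses      # 256
    print(addresses * RELAYS_PER_IP)   # 2048 possible instances in the /24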
--
╰_╯ Ciao Marco!
Debian GNU/Linux
It's free software and it gives you freedom!
On Saturday, 22 February 2025 at 06:49, Tor at 1AEO via tor-relays wrote:
https://metrics.torproject.org/rs.html#search/185.220.101. We are 5 relay orgs sharing a /24. It would be nice if you shared the subnet with 1-2 other relay operators.
Logistically, how do you, or how would you recommend to, share a /24 across more than one organization? Lots of ideas/questions: are the operators required to be in the same data center, and/or the same rack, or in nearby data centers with a dedicated/private connection? Different servers next to each other?
Whether the servers need to be in one rack or one data center depends on the provider's backbone. There are providers who have several locations/data centers networked internally. But the most practical and safest way is to share, e.g., a quarter rack: https://www.myloc.de/en/colocation/rack/quarter-rack.html
A local router doing BGP for something like a /26 to specific servers, or statically mapping a /26 to specific server ports?
It doesn't seem possible to share a /24 across separate internet connections via BGP, since announcements smaller than a /24 are filtered in the default-free zone...
That was the reason for my suggestion: you can't advertise anything smaller than a /24 via BGP, and hardly anyone can afford a 10G, 40G or 100G connection, or several petabytes of traffic per month, on their own.
We have divided a /24 into /27s, which are routed. There is one AS, and each /27 is individually registered with RIPE as ASSIGNED PA.
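A minimal sketch of that split using Python's ipaddress module; the prefix is the one from the relay-search link above, and the per-operator assignment is illustrative:

    # Split a /24 into /27s, one per operator, as described above.
    # The AS announces the whole /24; the /27s are routed internally.
    import ipaddress

    block = ipaddress.ip_network("185.220.101.0/24")
    subnets = list(block.subnets(new_prefix=27))  # 8 subnets of 32 addresses

    for i, net in enumerate(subnets):
        print(f"operator {i}: {net}  ({net.num_addresses} addresses)")
    # operator 0: 185.220.101.0/27   (32 addresses)
    # ...
    # operator 7: 185.220.101.224/27 (32 addresses)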