Excellent. Thank you.
Yes, a blanket iptables rule is not going to work well in this setup, because it pools connections to all of your IP addresses into a single count. If we accept 4 connections to port 443, a blanket rule accepts 4 connections to all IP addresses combined and drops everything else, which of course brings your server to a halt.
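To make the difference concrete, here is a rough sketch; the addresses, port and limit are only illustrative and are not the exact rules from the script:

# Blanket rule: one counter per source for the whole box, so a peer gets
# 4 connections total across every relay address before being dropped
iptables -A INPUT -p tcp --syn --dport 443 -m connlimit --connlimit-mask 32 --connlimit-above 4 -j DROP

# Per-address rules: the same limit, counted separately for each relay's
# own address, so traffic to the other relays is left alone
iptables -A INPUT -d 23.129.64.130 -p tcp --syn --dport 443 -m connlimit --connlimit-mask 32 --connlimit-above 4 -j DROP
iptables -A INPUT -d 23.129.64.131 -p tcp --syn --dport 443 -m connlimit --connlimit-mask 32 --connlimit-above 4 -j DROP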
In another thread on this mailing list someone had the same situation, and I put a script together yesterday that you're welcome to try if you wish. I'm not sure whether they've tried it yet or what the result has been, but the script is set up to apply the rules to two IP addresses at a time and leave the rest alone. So you can apply it to two addresses on your server, assess the result, and then either expand to the rest or stop altogether.
The script makes a backup of your existing iptables rules; all you have to do is restore it and everything goes back to how it was, without having to reboot. It also specifically uses the mangle table and the PREROUTING chain, so it won't interfere with your existing rules, and that should reduce the number of used ports as well. Flushing the mangle table will also get rid of these rules and put you back where you started.
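For reference, undoing things by hand looks roughly like this (the backup file name here is only an example; the script keeps its own copy):

# take your own copy of the current rules before experimenting
iptables-save > /root/iptables-before-ddos-rules.txt

# put everything back exactly as it was, no reboot needed
iptables-restore < /root/iptables-before-ddos-rules.txt

# or, since the new rules live only in the mangle table, just flush that table
iptables -t mangle -F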
You can get it here:
https://raw.githubusercontent.com/Enkidu-6/tor-ddos/dev/multiple/multi-addr....
Simply choose two of your IP addresses and the ORPort for each and run the script.
If it does what you expect it to do, all you have to do is change the IP addresses and run the script again until all of your addresses are covered. Please save the iptables backup somewhere else, because the second time you run the script the original backup will be overwritten.
If one of your IP addresses has two ORPorts, the above script won't work and you should use the script below:
https://raw.githubusercontent.com/Enkidu-6/tor-ddos/dev/multiple/two-or.sh
Best of luck and I hope this helps.
On 12/5/2022 3:48 PM, Christopher Sheats wrote:
May I ask what your setup is? Are you running your relays in separate VMs on the main system, or are you using a different setup, such as having all IP addresses on the same OS and using OutboundBindAddress, routing, etc. to separate them? If I know more, I might be able to make a script specific to your setup.
Thank you. Yes, of course.
Ubuntu Server 22.04 runs on bare metal. Ansible-relayor manages 20 exit relays on each system. Netplan has each IP individually listed (sub-divided as a /25 per server from within a dedicated /24, and similarly for the v6 addresses). I believe an available IP is randomly picked by ansible-relayor and then used statically in each torrc file.
Here is an example torrc:
# ansible-relayor generated torrc configuration file
# Note: manual changes will be OVERWRITTEN on the next ansible-playbook run
OfflineMasterKey 1
RunAsDaemon 0
Log notice syslog
OutboundBindAddress 23.129.64.130
SocksPort 0
User _tor-23.129.64.130_443
DataDirectory /var/lib/tor-instances/23.129.64.130_443
ORPort 23.129.64.130:443
ORPort [2620:18c:0:192::130]:443
OutboundBindAddress [2620:18c:0:192::130]
DirPort 23.129.64.130:80
Address 23.129.64.130
SyslogIdentityTag 23.129.64.130_443
ControlSocket /var/run/tor-instances/23.129.64.130_443/control GroupWritable RelaxDirModeCheck
Nickname ageis
ContactInfo url:emeraldonion.org proof:uri-rsa ciissversion:2 tech@emeraldonion.org
Sandbox 1
NoExec 1
# we are an exit relay!
ExitRelay 1
IPv6Exit 1
DirPort [2620:18c:0:192::130]:80 NoAdvertise
DirPortFrontPage /etc/tor/instances/tor-exit-notice.html
ExitPolicy reject 23.129.64.128/25:*,reject6 [2620:18c:0:192::]/64:*,accept *:*,accept6 *:*
MyFamily <snip>
# end of torrc
--
Christopher Sheats (yawnbox)
Executive Director
Emerald Onion
Signal: +1 206.739.3390
Website: https://emeraldonion.org/
Mastodon: https://digitalcourage.social/@EmeraldOnion/
On Dec 4, 2022, at 10:08 PM, Chris tor@wcbsecurity.com wrote:
Sorry to hear it wasn't much help. Even though the additions I suggested didn't help, they certainly couldn't cause any harm and can't be responsible for the drops in traffic.
As for the torutils scripts, I'm sure toralf would be able to investigate that better than I can, but I have a feeling you have a particular setup that didn't work with the script. May I ask what your setup is? Are you running your relays in separate VMs on the main system, or are you using a different setup, such as having all IP addresses on the same OS and using OutboundBindAddress, routing, etc. to separate them? If I know more, I might be able to make a script specific to your setup.
On 12/3/2022 2:07 PM, Christopher Sheats wrote:
Hello,
Thank you for this information. After 24 hours of testing, these configurations brought Tor to a halt.
At first I started with just the sysctl modifications. After a few hours with those alone, there was no improvement in the ~75% inet_csk_bind_conflict utilization. I then installed Torutils for both IPv4 and IPv6. After only a couple of hours, Tor dropped to below 15 Mbps across both servers (40 relays). 16 hours later, Tor dropped below 2 Mbps.
I've removed all of these new settings and restarted.
--
Christopher Sheats (yawnbox)
Executive Director
Emerald Onion
Signal: +1 206.739.3390
Website: https://emeraldonion.org/
Mastodon: https://digitalcourage.social/@EmeraldOnion/
On Dec 2, 2022, at 7:30 AM, Chris tor@wcbsecurity.com wrote:
Hi,
As I'm sure you've already gathered, your system is maxing out trying to deal with all the connection requests. When inet_csk_get_port is called and the port is found to be occupied, inet_csk_bind_conflict is called to resolve the conflict, so under normal circumstances you shouldn't see it in perf top at all, much less at 79%. There are two ways to deal with it, and each method should be complemented by the other. One is to increase the number of available ports and reduce the wait time, which you have already tried to some extent. I would add the following:
net.ipv4.tcp_fin_timeout = 20
net.ipv4.tcp_max_tw_buckets = 1200
net.ipv4.tcp_keepalive_time = 1200
net.ipv4.tcp_syncookies = 1
net.ipv4.tcp_max_syn_backlog = 8192
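If you want to test these before committing to them, they can be applied on the fly and made permanent afterwards; roughly like this (the file name is only an example):

# apply immediately, without a reboot
sysctl -w net.ipv4.tcp_fin_timeout=20
sysctl -w net.ipv4.tcp_max_tw_buckets=1200
sysctl -w net.ipv4.tcp_keepalive_time=1200
sysctl -w net.ipv4.tcp_syncookies=1
sysctl -w net.ipv4.tcp_max_syn_backlog=8192

# to persist, put the same lines (without -w) in e.g. /etc/sysctl.d/99-tor.conf and load them
sysctl -p /etc/sysctl.d/99-tor.conf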
The complementary method is to lower the number of connection requests by taking the frivolous ones out of the equation with a few iptables rules.
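As a simple illustration of the idea (the scripts linked below are far more complete, and on a box with many relay addresses you would also match on the destination address so the relays don't share one counter), a pair of rules with the recent match can shed sources that open new connections too fast; the port and thresholds are only examples:

# record every new connection attempt, per source address
iptables -A INPUT -p tcp --syn --dport 443 -m recent --name torconn --set
# drop the attempt if that source has made more than 10 in the last 60 seconds
iptables -A INPUT -p tcp --syn --dport 443 -m recent --name torconn --update --seconds 60 --hitcount 10 -j DROP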
I'm assuming the increased load you're experiencing is due to the current DDoS attacks. I'm not sure whether you're already using anything to mitigate them, but you should consider it.
You may find something useful at the following links:
[1](https://github.com/Enkidu-6/tor-ddos)
[2](https://github.com/toralf/torutils)
[background](https://gitlab.torproject.org/tpo/community/support/-/issues/40093)
Cheers.
On 12/1/2022 3:35 PM, Christopher Sheats wrote:
Hello tor-relays,
We are using Ubuntu server currently for our exit relays. Occasionally, exit throughput will drop from ~4 Gbps down to ~200 Mbps, and the only observable data point we have is a significant increase in inet_csk_bind_conflict, as seen via 'perf top', where it will hit 85% [kernel] utilization.
A while back we thought we had solved this with two /etc/sysctl.conf settings:
net.ipv4.ip_local_port_range = 1024 65535
net.ipv4.tcp_tw_reuse = 1
However we are still experiencing this problem.
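In case it helps anyone reproduce this, the ephemeral-port range and TIME-WAIT pressure can be checked with something like:

# the configured ephemeral port range and TIME-WAIT reuse setting
sysctl net.ipv4.ip_local_port_range net.ipv4.tcp_tw_reuse
# overall socket totals, including sockets sitting in TIME-WAIT
ss -s
# count TIME-WAIT sockets directly
ss -tan state time-wait | wc -l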
Both of our (currently, two) relay servers suffer from the same problem at the same time. They are AMD Epyc 7402P bare-metal servers, each with 96 GB RAM and 20 exit relays. This issue persists after upgrading to 0.4.7.11.
Screenshots of perf top are shared here: https://digitalcourage.social/@EmeraldOnion/109440197076214023
Does anyone have experience troubleshooting and/or fixing this problem?
Cheers,
--
Christopher Sheats (yawnbox)
Executive Director
Emerald Onion
Signal: +1 206.739.3390
Website: https://emeraldonion.org/
Mastodon: https://digitalcourage.social/@EmeraldOnion/