I've noticed a new kind of possible attack on some of my relays, as
early as Dec. 23, which causes huge spikes of outbound traffic that
eventually max out RAM and crash Tor. The newest one today lasted
for 5 hours, switching between two of the three relays on the same IP.
During the attack, Tor becomes so busy processing the traffic that it
becomes unresponsive to new connections for minutes at a time and
effectively becomes a zombie exclusively processing the attacker's
traffic until it eventually crashes and restarts. The interesting part
is that when Tor restarts, it doesn't start from scratch building new
circuits; it picks up right where it left off and keeps processing
the previous connections.
I have tried shutting down Tor for over 5 minutes, and within one minute
of restart the RAM maxes out and the outbound traffic reaches the
previous heights.
This has been happening not to all relays but to a select group of
relays at a time, and unless you're monitoring your Tor port from
outside, you may not notice it's unresponsive. Another way to see if
it's happening to you too is to check your monthly history on the
metrics page and look for spikes in written bytes, or a sudden decrease
in read bytes, with a big gap between the two.
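For anyone who would rather script that check than eyeball the graphs, here is a rough sketch that pulls a relay's bandwidth history from the Onionoo API (the service behind the relay-search/metrics pages) and flags intervals where written bytes dwarf read bytes. The fingerprint, the "1_month" period key and the 3x ratio are placeholder assumptions on my part, not values from this thread.
```python
#!/usr/bin/env python3
"""Rough sketch: flag intervals where a relay wrote far more than it read.

Assumptions (mine, not from the post): the fingerprint below is a
placeholder, the "1_month" period key is present in the Onionoo response,
and a 3x write/read ratio is an arbitrary alert threshold.
"""
import json
import urllib.request

FINGERPRINT = "0000000000000000000000000000000000000000"  # put your relay here
PERIOD = "1_month"
RATIO = 3.0  # written bytes more than 3x read bytes counts as suspicious

def fetch_bandwidth(fingerprint):
    url = "https://onionoo.torproject.org/bandwidth?lookup=" + fingerprint
    with urllib.request.urlopen(url, timeout=30) as resp:
        return json.load(resp)

def points(history):
    # Onionoo stores normalized values; multiply by "factor" to get bytes/s.
    factor = history.get("factor", 1)
    return [None if v is None else v * factor for v in history.get("values", [])]

def main():
    doc = fetch_bandwidth(FINGERPRINT)
    relays = doc.get("relays", [])
    if not relays:
        print("relay not found")
        return
    relay = relays[0]
    writes = points(relay.get("write_history", {}).get(PERIOD, {}))
    reads = points(relay.get("read_history", {}).get(PERIOD, {}))
    for i, (w, r) in enumerate(zip(writes, reads)):
        if w is None or r is None:
            continue
        if w > RATIO * max(r, 1.0):
            print(f"interval {i}: wrote {w:,.0f} B/s but only read {r:,.0f} B/s")

if __name__ == "__main__":
    main()
```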
I have included charts and log excerpts in my post on the Tor forum at
the link below:
https://forum.torproject.org/t/new-kind-of-attack/11122
I'd appreciate your insights and comments.
Thank you.
https://www.infobyip.com/ip-198.13.48.219.html
Shows Japan!
Malcolm
On Thu, Jan 18, 2024 at 4:00 PM NodNet
<tor_at_nodnetwork.org_0tpcqovw(a)duck.com> wrote:
> I think tor and Tor Project use IPFire's DB for GeoIP lookups, and
> 198.13.48.219 shows the following:
>
> NETWORK: 198.13.48.0/20
> AUTONOMOUS SYSTEM: AS20473 - AS-CHOOPA
> COUNTRY: United States of America
>
> https://www.ipfire.org/projects/location/lookup/198.13.48.219
>
> On 1/18/2024 11:22 AM, Jag Talon wrote:
> > Hello,
> >
> > I have a relay in Japan with the IP of 198.13.48.219, but it's being
> > marked as being in the US. I've tried using different websites like
> > www.iplocation.net, iplocation.io, and www.wolframalpha.com and
> > they're all telling me that the IP is in Japan.
> >
> > I'm wondering if perhaps there's an issue with the GeoIP lookup? Or
> > perhaps an outdated database?
> >
> > Thanks!
> >
> >
Hello,
I have a relay in Japan with the IP of 198.13.48.219, but it's being
marked as being in the US. I've tried using different websites like
www.iplocation.net, iplocation.io, and www.wolframalpha.com and they're
all telling me that the IP is in Japan.
I'm wondering if perhaps there's an issue with the GeoIP lookup? Or
perhaps an outdated database?
Thanks!
--
Jag Talon
Designer for the Tor Project
Hello!
In case it affects you because you are still running your relay or
bridge on Tor 0.4.7.x: the 0.4.7 series is going EOL on *2024-01-31*
(roughly 2 weeks from now).
That's currently still 1346 relays, which means roughly 15% of the
advertised bandwidth of the network (and 964 bridges, which means
roughly 45% of the advertised bridge bandwidth).
Please make sure you have upgraded to the 0.4.8.x series by then.
Supported releases in general can be found on the network team wiki.[1]
Thanks,
Georg
[1]
https://gitlab.torproject.org/tpo/core/team/-/wikis/NetworkTeam/CoreTorRele…
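If you want to script a quick check of which series your relay is actually running, one hedged option is to ask it over the control port with the stem library; the sketch below assumes stem is installed and that your torrc enables ControlPort 9051 with cookie authentication, so adjust it to your own setup.
```python
#!/usr/bin/env python3
"""Hedged sketch: ask the local tor over its control port which version it runs.

Assumes the stem library is installed and torrc contains something like
"ControlPort 9051" plus "CookieAuthentication 1"; neither is implied by the
announcement above.
"""
from stem.control import Controller
from stem.version import Version

EOL_BOUNDARY = Version("0.4.8.0")  # the 0.4.7 series goes EOL on 2024-01-31

with Controller.from_port(port=9051) as controller:
    controller.authenticate()  # cookie auth; pass password=... if you use one
    running = controller.get_version()
    print(f"tor is running {running}")
    if running < EOL_BOUNDARY:
        print("still on a pre-0.4.8 series -- please upgrade before 2024-01-31")
```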
> On 01/16/2024 02:48 AM Chris Enkidu-6 wrote ..
> If
> you're running other services on the same server, of course the heavy
> load might disrupt those services as well, which is why it's always a
> good idea to only run Tor on the server and not to mix it with other
> services.
That is the funny thing: everything else was working fine. Only the Tor
relays experienced connection problems. Even the Tor bridge seemed to
be running fine.
> And in the long run, if you find that you don't
> have the resources or patience to deal with such heavy load, consider
> running a middle/Guard relay instead of an Exit. They're much easier to
> deal with during an attack. But don't quit. It'll pass.
I'm still in the process of gaining experience, but don't worry about me
quitting; I consider this my new hobby. My servers are exclusively
dedicated to this, so there is nothing important on them for me to
lose.
Also, I pride myself on running exclusively exit relays, as this is what
is apparently needed and thus where the challenge lies for me.
Hi, I didn't see your original email and I don't currently have it
in my inbox to reply to directly, but looking at your metrics history
for "Alberta", I can tell you that you indeed had the same problem on
Dec. 18 that happened to my relays. The attackers seem to be picking a
group of relays at a time and then moving on to others, and it may be a
while before they come back again.

However, the problem gets amplified if you're running an exit relay
without large enough resources to deal with all this traffic. I'm not
running an exit relay, so I only have to deal with the problem for a few
hours, but all this outgoing traffic from my relay and from other relays
under attack will inevitably end up at exit relays. This means that even
if they move on from my relay to another one, the traffic will still
continue to pour into exit relays, so the duration of the attack for an
exit relay may be much longer.

This shouldn't stop you from bringing your relay back up. You're not
doing anything wrong and there's nothing wrong with your setup. If
you're running other services on the same server, of course the heavy
load might disrupt those services as well, which is why it's always a
good idea to only run Tor on the server and not to mix it with other
services.

My suggestion for the time being: keep running my firewall scripts, as
they help shorten the attack by putting some of those IP addresses in
the block list and make it less painful, and see how it goes. And in the
long run, if you find that you don't have the resources or patience to
deal with such heavy load, consider running a middle/Guard relay instead
of an Exit. They're much easier to deal with during an attack. But don't
quit. It'll pass.

Cheers.
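Chris's actual tor-ddos firewall scripts are not reproduced here; purely to illustrate the idea he describes (spotting addresses that open an unusual number of concurrent connections so they can be fed into a block list), here is a rough Python sketch. The ORPort number, the threshold and the reliance on the `ss` utility are assumptions for the example, not details taken from his scripts.
```python
#!/usr/bin/env python3
"""Illustrative sketch only -- NOT the tor-ddos scripts mentioned above.

Counts established TCP connections per remote address on the relay's ORPort
and prints the addresses above an arbitrary threshold, e.g. to feed into a
firewall block list by hand. The port (9001), the threshold, and the use of
the `ss` utility are assumptions made for this example.
"""
import subprocess
from collections import Counter

OR_PORT = 9001          # assumed ORPort; adjust to your torrc
MAX_CONNS_PER_IP = 50   # arbitrary example threshold

def established_peers(port):
    # `ss -Htn` lists established TCP sockets, one per line, without a header;
    # the last two columns are Local-Address:Port and Peer-Address:Port.
    out = subprocess.run(["ss", "-Htn"], capture_output=True, text=True, check=True)
    peers = []
    for line in out.stdout.splitlines():
        fields = line.split()
        if len(fields) < 4:
            continue
        local, peer = fields[-2], fields[-1]
        if local.rsplit(":", 1)[-1] == str(port):
            # Peer IPv6 addresses keep their brackets; fine for a rough tally.
            peers.append(peer.rsplit(":", 1)[0])
    return peers

def main():
    counts = Counter(established_peers(OR_PORT))
    for ip, n in counts.most_common():
        if n >= MAX_CONNS_PER_IP:
            print(f"{ip} has {n} concurrent connections to port {OR_PORT}")

if __name__ == "__main__":
    main()
```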
Sorry, but I'm going to vent a little bit. I'm not a network
specialist, so 11 days ago I sent the following email to this mailing
list asking for help because two of my Tor exit relays were completely
frozen and I was unable to put them online again.
Nobody answered, not even a comment. No wait, there was one person -
very active on this mailing list recently - who sent me an email
personally to tell me that I was an idiot (referring to what, I don't
know) who should kill himself, adding furthermore that if I didn't
commit suicide within 72 hours, he would personally come to my house
and kill me with a Glock 9 mm. Fun stuff, very disturbing.
Anyway, no serious answers, someone calling me an idiot: I tried to
look as best as I could at what I did wrong. Couldn't find anything. My
only available solution was to completely rebuild my server, something
I had wanted to do for a while because of other undesired quirks that
were bothering me with my setup. I knew it would take a long time -
which I didn't really have - but I finally finished my new setup
yesterday. (Don't look for 25FC41154DCB2CAE3ABD74A8DFCD5B90D2CFFD57 or
the bridge; they have been shut down for the moment.)
Today, I read a line from Chris Enkidu-6 saying: "There's something
going on for a while and I haven't seen any mentions of it." The exact
problem I mentioned! He says it goes back "as early as Dec. 23"; my
problem goes back to Dec 18, as shown in my previous email. Also, not
mentioned in my previous email: before I renewed my setup, my tor-ddos
firewall rules (I use the ones from Enkidu-6) had blocked about 5 times
more IPs than usual - if that can be useful information to anyone.
Sorry for the rant, I just needed to vent a little: There was something
wrong, and I wasn't crazy or incompetent!
I still would like to know how to restart such a relay, if this happens
again in the future - other than reinstalling the entire server, that
is.
Thanks in advance.
On 01/06/2024 01:52 PM denny.obreham(a)a-n-o-n-y-m-e.net wrote ..
I manage the two following exit relays:
* https://metrics.torproject.org/rs.html#details/25FC41154DCB2CAE3ABD74A8DFCD5B90D2CFFD57
* https://metrics.torproject.org/rs.html#details/3B85067588C3F017D5CCF7D8F65B5881B7D4C97C
They are both on the same IP and servers. A few
hours ago they lost contact even though I have both Apache and I2P
servers on the same machine that are reachable. The weird log
messages seem to go back to Dec 18, after a restart: ``` Dec 18
07:49:47 a-n-o-n-y-m-e Tor-Alberta[904715]: Bootstrapped 100%
(done): Done Dec 18 07:54:18 a-n-o-n-y-m-e Tor-Alberta[904715]: Your
computer is too slow to handle this many circuit creation requests!
Please consider using the MaxAdvertisedBandwidth config option or
choosing a more restricted exit policy. Dec 18 07:57:11
a-n-o-n-y-m-e Tor-Alberta[904715]: Your computer is too slow to
handle this many circuit creation requests! Please consider using
the MaxAdvertisedBandwidth config option or choosing a more
restricted exit policy. [14254 similar message(s) suppressed in last
180 seconds] ``` There are a few more of these messages appearing
afterward (My bandwidth is unlimited). This is the typical Heartbeat
that comes afterward: ``` Dec 18 13:49:42 a-n-o-n-y-m-e
Tor-Alberta[904715]: Heartbeat: Tor's uptime is 6:00 hours, with 476
circuits open. I've sent 30.87 GB and received 9.04 GB. I've
received 13401 connections on IPv4 and 0 on IPv6. I've made 46754
connections with IPv4 and 0 with IPv6. Dec 18 13:49:42 a-n-o-n-y-m-e
Tor-Alberta[904715]: While not bootstrapping, fetched this many
bytes: 6412201 (server descriptor fetch); 1424 (server descriptor
upload); 376002 (consensus network-status fetch); 57564
(microdescriptor fetch) Dec 18 13:49:42 a-n-o-n-y-m-e
Tor-Alberta[904715]: Circuit handshake stats since last time: 8/8
TAP, 1356688/6457307 NTor. Dec 18 13:49:42 a-n-o-n-y-m-e
Tor-Alberta[904715]: Since startup we initiated 0 and received 0 v1
connections; initiated 0 and received 0 v2 connections; initiated 0
and received 0 v3 connections; initiated 0 and received 429 v4
connections; initiated 285 and received 11137 v5 connections. Dec 18
13:49:42 a-n-o-n-y-m-e Tor-Alberta[904715]: Heartbeat: DoS
mitigation since startup: 0 circuits killed with too many cells, 0
circuits rejected, 0 marked addresses, 0 marked addresses for max
queue, 0 same address concurrent connections rejected, 0 connections
rejected, 0 single hop clients refused, 0 INTRODUCE2 rejected. ```
Then I have this kind of message appearing: ``` Dec 18 14:13:23
a-n-o-n-y-m-e Tor-Alberta[904715]: No circuits are opened. Relaxed
timeout for circuit 4487 (a Measuring circuit timeout 3-hop circuit
in state doing handshakes with channel state open) to 60000ms.
However, it appears the circuit has timed out anyway. [4 similar
message(s) suppressed in last 3300 seconds] ``` Then a few days
later, this bug report: ``` Dec 21 15:18:48 a-n-o-n-y-m-e
Tor-Alberta[904715]: tor_bug_occurred_(): Bug:
../src/core/or/conflux.c:567: conflux_pick_first_leg: Non-fatal
assertion !(smartlist_len(cfx->legs) <= 0) failed. (on Tor 0.4.8.10
) Dec 21 15:18:48 a-n-o-n-y-m-e Tor-Alberta[904715]: Bug: Tor
0.4.8.10: Non-fatal assertion !(smartlist_len(cfx->legs) <= 0)
failed in conflux_pick_first_leg at ../src/core/or/conflux.c:567.
Stack trace: (on Tor 0.4.8.10 ) Dec 21 15:18:48 a-n-o-n-y-m-e
Tor-Alberta[904715]: Bug: /usr/bin/tor(log_backtrace_impl+0x5b)
[0x55651f95b37b] (on Tor 0.4.8.10 ) Dec 21 15:18:48 a-n-o-n-y-m-e
Tor-Alberta[904715]: Bug: /usr/bin/tor(tor_bug_occurred_+0x18a)
[0x55651f97294a] (on Tor 0.4.8.10 ) Dec 21 15:18:48 a-n-o-n-y-m-e
Tor-Alberta[904715]: Bug:
/usr/bin/tor(conflux_decide_next_circ+0x40e) [0x55651fa12afe] (on
Tor 0.4.8.10 ) Dec 21 15:18:48 a-n-o-n-y-m-e Tor-Alberta[904715]:
Bug: /usr/bin/tor(circuit_get_package_window+0x75) [0x55651fa12ec5]
(on Tor 0.4.8.10 ) Dec 21 15:18:48 a-n-o-n-y-m-e
Tor-Alberta[904715]: Bug: /usr/bin/tor(+0x9ed63) [0x55651f908d63]
(on Tor 0.4.8.10 ) Dec 21 15:18:48 a-n-o-n-y-m-e
Tor-Alberta[904715]: Bug:
/usr/bin/tor(connection_edge_package_raw_inbuf+0xae)
[0x55651f90b80e] (on Tor 0.4.8.10 ) Dec 21 15:18:48 a-n-o-n-y-m-e
Tor-Alberta[904715]: Bug:
/usr/bin/tor(connection_edge_process_inbuf+0x6f) [0x55651fa2b9df]
(on Tor 0.4.8.10 ) Dec 21 15:18:48 a-n-o-n-y-m-e
Tor-Alberta[904715]: Bug: /usr/bin/tor(+0x1c2fb4) [0x55651fa2cfb4]
(on Tor 0.4.8.10 ) Dec 21 15:18:48 a-n-o-n-y-m-e
Tor-Alberta[904715]: Bug: /usr/bin/tor(+0x73ffc) [0x55651f8ddffc]
(on Tor 0.4.8.10 ) Dec 21 15:18:48 a-n-o-n-y-m-e
Tor-Alberta[904715]: Bug:
/lib/x86_64-linux-gnu/libevent-2.1.so.7(+0x1ff58) [0x7fcee899bf58]
(on Tor 0.4.8.10 ) Dec 21 15:18:48 a-n-o-n-y-m-e
Tor-Alberta[904715]: Bug:
/lib/x86_64-linux-gnu/libevent-2.1.so.7(event_base_loop+0x577)
[0x7fcee899d8a7] (on Tor 0.4.8.10 ) Dec 21 15:18:48 a-n-o-n-y-m-e
Tor-Alberta[904715]: Bug: /usr/bin/tor(do_main_loop+0x127)
[0x55651f8de7c7] (on Tor 0.4.8.10 ) Dec 21 15:18:48 a-n-o-n-y-m-e
Tor-Alberta[904715]: Bug: /usr/bin/tor(tor_run_main+0x215)
[0x55651f8e2805] (on Tor 0.4.8.10 ) Dec 21 15:18:48 a-n-o-n-y-m-e
Tor-Alberta[904715]: Bug: /usr/bin/tor(tor_main+0x4d)
[0x55651f8e2c6d] (on Tor 0.4.8.10 ) Dec 21 15:18:48 a-n-o-n-y-m-e
Tor-Alberta[904715]: Bug: /usr/bin/tor(main+0x1d) [0x55651f8d4dcd]
(on Tor 0.4.8.10 ) Dec 21 15:18:48 a-n-o-n-y-m-e
Tor-Alberta[904715]: Bug: /lib/x86_64-linux-gnu/libc.so.6(+0x29d90)
[0x7fcee80a8d90] (on Tor 0.4.8.10 ) Dec 21 15:18:48 a-n-o-n-y-m-e
Tor-Alberta[904715]: Bug:
/lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0x80)
[0x7fcee80a8e40] (on Tor 0.4.8.10 ) Dec 21 15:18:48 a-n-o-n-y-m-e
Tor-Alberta[904715]: Bug: /usr/bin/tor(_start+0x25) [0x55651f8d4e25]
(on Tor 0.4.8.10 ) Dec 21 15:18:48 a-n-o-n-y-m-e
Tor-Alberta[904715]: conflux_pick_first_leg(): Bug: Matching client
sets: (on Tor 0.4.8.10 ) Dec 21 15:18:48 a-n-o-n-y-m-e
Tor-Alberta[904715]: conflux_log_set(): Bug: Conflux
566550141B144136: 0 linked, 0 launched. Delivered: 75; teardown: 0;
Current: (nil), Previous: (nil) (on Tor 0.4.8.10 ) Dec 21 15:18:48
a-n-o-n-y-m-e Tor-Alberta[904715]: conflux_pick_first_leg(): Bug:
Matching server sets: (on Tor 0.4.8.10 ) Dec 21 15:18:48
a-n-o-n-y-m-e Tor-Alberta[904715]: conflux_log_set(): Bug: Conflux
566550141B144136: 0 linked, 0 launched. Delivered: 75; teardown: 0;
Current: (nil), Previous: (nil) (on Tor 0.4.8.10 ) Dec 21 15:18:48
a-n-o-n-y-m-e Tor-Alberta[904715]: conflux_pick_first_leg(): Bug:
End conflux set dump (on Tor 0.4.8.10 ) Dec 21 15:18:48
a-n-o-n-y-m-e Tor-Alberta[904715]: circuit_get_package_window():
Bug: Conflux has no circuit to send on. Circuit 0x556536e76c20 idx
138 marked at line ../src/core/or/command.c:663 (on Tor 0.4.8.10 )
Dec 21 15:18:48 a-n-o-n-y-m-e Tor-Alberta[904715]:
tor_bug_occurred_(): Bug: ../src/core/or/conflux.c:567:
conflux_pick_first_leg: Non-fatal assertion
!(smartlist_len(cfx->legs) <= 0) failed. (on Tor 0.4.8.10 ) Dec 21
15:18:48 a-n-o-n-y-m-e Tor-Alberta[904715]: Bug: Tor 0.4.8.10:
Non-fatal assertion !(smartlist_len(cfx->legs) <= 0) failed in
conflux_pick_first_leg at ../src/core/or/conflux.c:567. Stack trace:
(on Tor 0.4.8.10 ) Dec 21 15:18:48 a-n-o-n-y-m-e
Tor-Alberta[904715]: Bug: /usr/bin/tor(log_backtrace_impl+0x5b)
[0x55651f95b37b] (on Tor 0.4.8.10 ) Dec 21 15:18:48 a-n-o-n-y-m-e
Tor-Alberta[904715]: Bug: /usr/bin/tor(tor_bug_occurred_+0x18a)
[0x55651f97294a] (on Tor 0.4.8.10 ) Dec 21 15:18:48 a-n-o-n-y-m-e
Tor-Alberta[904715]: Bug:
/usr/bin/tor(conflux_decide_next_circ+0x40e) [0x55651fa12afe] (on
Tor 0.4.8.10 ) Dec 21 15:18:48 a-n-o-n-y-m-e Tor-Alberta[904715]:
Bug: /usr/bin/tor(circuit_get_package_window+0x75) [0x55651fa12ec5]
(on Tor 0.4.8.10 ) Dec 21 15:18:48 a-n-o-n-y-m-e
Tor-Alberta[904715]: Bug: /usr/bin/tor(+0x9ed63) [0x55651f908d63]
(on Tor 0.4.8.10 ) Dec 21 15:18:48 a-n-o-n-y-m-e
Tor-Alberta[904715]: Bug:
/usr/bin/tor(connection_edge_package_raw_inbuf+0xae)
[0x55651f90b80e] (on Tor 0.4.8.10 ) Dec 21 15:18:48 a-n-o-n-y-m-e
Tor-Alberta[904715]: Bug:
/usr/bin/tor(connection_edge_process_inbuf+0x6f) [0x55651fa2b9df]
(on Tor 0.4.8.10 ) Dec 21 15:18:48 a-n-o-n-y-m-e
Tor-Alberta[904715]: Bug: /usr/bin/tor(+0x1c34dd) [0x55651fa2d4dd]
(on Tor 0.4.8.10 ) Dec 21 15:18:48 a-n-o-n-y-m-e
Tor-Alberta[904715]: Bug: /usr/bin/tor(+0x73ffc) [0x55651f8ddffc]
(on Tor 0.4.8.10 ) Dec 21 15:18:48 a-n-o-n-y-m-e
Tor-Alberta[904715]: Bug:
/lib/x86_64-linux-gnu/libevent-2.1.so.7(+0x1ff58) [0x7fcee899bf58]
(on Tor 0.4.8.10 ) Dec 21 15:18:48 a-n-o-n-y-m-e
Tor-Alberta[904715]: Bug:
/lib/x86_64-linux-gnu/libevent-2.1.so.7(event_base_loop+0x577)
[0x7fcee899d8a7] (on Tor 0.4.8.10 ) Dec 21 15:18:48 a-n-o-n-y-m-e
Tor-Alberta[904715]: Bug: /usr/bin/tor(do_main_loop+0x127)
[0x55651f8de7c7] (on Tor 0.4.8.10 ) Dec 21 15:18:48 a-n-o-n-y-m-e
Tor-Alberta[904715]: Bug: /usr/bin/tor(tor_run_main+0x215)
[0x55651f8e2805] (on Tor 0.4.8.10 ) Dec 21 15:18:48 a-n-o-n-y-m-e
Tor-Alberta[904715]: Bug: /usr/bin/tor(tor_main+0x4d)
[0x55651f8e2c6d] (on Tor 0.4.8.10 ) Dec 21 15:18:48 a-n-o-n-y-m-e
Tor-Alberta[904715]: Bug: /usr/bin/tor(main+0x1d) [0x55651f8d4dcd]
(on Tor 0.4.8.10 ) Dec 21 15:18:48 a-n-o-n-y-m-e
Tor-Alberta[904715]: Bug: /lib/x86_64-linux-gnu/libc.so.6(+0x29d90)
[0x7fcee80a8d90] (on Tor 0.4.8.10 ) Dec 21 15:18:48 a-n-o-n-y-m-e
Tor-Alberta[904715]: Bug:
/lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0x80)
[0x7fcee80a8e40] (on Tor 0.4.8.10 ) Dec 21 15:18:48 a-n-o-n-y-m-e
Tor-Alberta[904715]: Bug: /usr/bin/tor(_start+0x25) [0x55651f8d4e25]
(on Tor 0.4.8.10 ) Dec 21 15:18:48 a-n-o-n-y-m-e
Tor-Alberta[904715]: conflux_pick_first_leg(): Bug: Matching client
sets: (on Tor 0.4.8.10 ) Dec 21 15:18:48 a-n-o-n-y-m-e
Tor-Alberta[904715]: conflux_log_set(): Bug: Conflux
566550141B144136: 0 linked, 0 launched. Delivered: 75; teardown: 0;
Current: (nil), Previous: (nil) (on Tor 0.4.8.10 ) Dec 21 15:18:48
a-n-o-n-y-m-e Tor-Alberta[904715]: conflux_pick_first_leg(): Bug:
Matching server sets: (on Tor 0.4.8.10 ) Dec 21 15:18:48
a-n-o-n-y-m-e Tor-Alberta[904715]: conflux_log_set(): Bug: Conflux
566550141B144136: 0 linked, 0 launched. Delivered: 75; teardown: 0;
Current: (nil), Previous: (nil) (on Tor 0.4.8.10 ) Dec 21 15:18:48
a-n-o-n-y-m-e Tor-Alberta[904715]: conflux_pick_first_leg(): Bug:
End conflux set dump (on Tor 0.4.8.10 ) Dec 21 15:18:48
a-n-o-n-y-m-e Tor-Alberta[904715]: circuit_get_package_window():
Bug: Conflux has no circuit to send on. Circuit 0x556536e76c20 idx
138 marked at line ../src/core/or/command.c:663 (on Tor 0.4.8.10 )
``` It is then an alternation of these three last messages
(Heartbeat + "No circuits are opened" + bug) until yesterday where I
get a series of these messages: ``` Jan 5 21:58:59 a-n-o-n-y-m-e
Tor-Alberta[904715]: Failed to find node for hop #1 of our path.
Discarding this circuit. Jan 5 21:58:59 a-n-o-n-y-m-e
Tor-Alberta[904715]: Our circuit 0 (id: 355722) died due to an
invalid selected path, purpose Unlinked conflux circuit. This may be
a torrc configuration issue, or a bug. ``` At this point, I learned
that my relays were down via Tor Weather. I restarted the relays -
even rebooted the whole server - and both relays seem to be unable
to connect to the network. Here's a sample: ``` Jan 6 09:44:46
a-n-o-n-y-m-e Tor-Alberta[5621]: 254 connections have failed: Jan 6
09:44:46 a-n-o-n-y-m-e Tor-Alberta[5621]: 254 connections died in
state connect()ing with SSL state (No SSL object) Jan 6 09:44:46
a-n-o-n-y-m-e Tor-Alberta[5621]: Problem bootstrapping. Stuck at 5%
(conn): Connecting to a relay. (Connection timed out; TIMEOUT; count
256; recommendation warn; host
65369D044C659CD299E35763914FFD0FC9AD4509 at 92.205.161.164:80) ``` I
also have a bridge on this server (different IP) and it seems OK (
https://metrics.torproject.org/rs.html#details/7CEB9D16C5218FE1A9BAB8E8A6EA9471D2E1F9B8 ). Last messages after the server reboot are:
``` Jan 6 08:34:17 a-n-o-n-y-m-e Tor-Nestor[998]: Bootstrapped 100%
(done): Done Jan 6 08:35:21 a-n-o-n-y-m-e Tor-Nestor[998]:
Performing bandwidth self-test...done. Jan 6 09:54:22 a-n-o-n-y-m-e
Tor-Nestor[998]: No circuits are opened. Relaxed timeout for circuit
527 (a Measuring circuit timeout 3-hop circuit in state doing
handshakes with channel state open) to 60000ms. However, it appears
the circuit has timed out anyway. ``` I have stopped both relays
until I figure out what is going on. Help would be appreciated.
I run the following relay: https://metrics.torproject.org/rs.html#details/6C336E553CC7E0416EBC8577A728…. I just noticed that my relay’s ‘first seen’ date got reset. Tor now thinks that my relay is less than 2 weeks old. But when you open the 6 months graph, you can see the actual ‘first seen’ date which is November 29th 2023. Is it possible to fix this ‘first seen’ date back to the actual value?
Starting about a week or so ago, the number of connections rose rapidly to 18,000+ and since then my middle relay has rebooted every 15 minutes. Lowering the relay bandwidth to a few MBytes partly stopped these reboots. Before these unplanned reboots, the relay had run for months at 20 - 40 MBytes of traffic without issues.
The number of connections is now around 11,000 per relay.
How can I prevent these reboots?
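A hedged starting point for narrowing this down is to record how the connection count evolves between restarts, so the spikes can be lined up against the reboot times. The sketch below assumes a Linux host with the `ss` utility and an ORPort of 9001; both are placeholders rather than details from the post.
```python
#!/usr/bin/env python3
"""Hedged sketch: sample the relay's ORPort connection count once a minute and
append it to a CSV, so connection spikes can be correlated with restarts.

Assumes a Linux host with `ss` available and an ORPort of 9001 (placeholders).
"""
import csv
import subprocess
import time
from datetime import datetime

OR_PORT = 9001
INTERVAL_SECONDS = 60
OUTPUT = "orport-connections.csv"

def count_orport_connections(port):
    # `ss -Htn` prints one established TCP socket per line; the second-to-last
    # column is the local address:port.
    out = subprocess.run(["ss", "-Htn"], capture_output=True, text=True, check=True)
    count = 0
    for line in out.stdout.splitlines():
        fields = line.split()
        if len(fields) >= 4 and fields[-2].rsplit(":", 1)[-1] == str(port):
            count += 1
    return count

with open(OUTPUT, "a", newline="") as f:
    writer = csv.writer(f)
    while True:
        writer.writerow([datetime.now().isoformat(timespec="seconds"),
                         count_orport_connections(OR_PORT)])
        f.flush()
        time.sleep(INTERVAL_SECONDS)
```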