No stable flag from 6 out of 9?

My relay, nicknamed dobbo, appears to be partly in "bad standing" with most of the consensus authorities. I stumbled upon another relay - matlink - with a similar fate. I suspect something in my setup is the culprit. The only thing in the log that looks a bit suspicious is a warning about a mismatch in SSL versions - "OpenSSL version from headers does not match the version we're running with". Plenty of traffic - just curious. /Ole Rydahl

(Moving email communication to tor-relays)
Hi Ole,
Ole Rydahl:
> My relay, nicknamed dobbo, appears to be partly in "bad standing" with most of the consensus authorities. I stumbled upon another relay - matlink - with a similar fate.
> I suspect something in my setup is the culprit. The only thing in the log that looks a bit suspicious is a warning about a mismatch in SSL versions - "OpenSSL version from headers does not match the version we're running with".
Could you please provide more details of your system, how you installed tor, and also the complete log line(s)? This could help to spot potential issues. Your relay seems to be running version 0.3.1.10; it would be good to update it to the newest version.
Cheers,
~Vasilis
--
Fingerprint: 8FD5 CF5F 39FC 03EB B382 7470 5FBF 70B1 D126 0162
Pubkey: https://pgp.mit.edu/pks/lookup?op=get&search=0x5FBF70B1D1260162
Hi Vasilis,
I have been running a relay since 2013, at first on an OpenWrt router. Since it crashed when the load got high, I moved it to my (mail) server. At present I run Fedora 27 and only upgrade Tor when a new version is offered there. For now I start it manually after reboot, since doing it via systemctl no longer works (for me).
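For anyone hitting the same systemctl problem, a minimal sketch of the usual recovery, assuming Fedora's stock tor.service unit (not verified on this particular box):

$ systemctl status tor                      # see why the unit is failing
$ journalctl -u tor --since "1 hour ago"    # read the unit's recent log
$ sudo systemctl enable --now tor           # if it was merely disabled: start now and at every boot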
For some long periods my relay has been non-operational. The latest down period was caused by my unintentionally enabling an IPv6 firewall while still claiming the relay could be reached over IPv6.
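A quick way to rule out that same mistake, sketched under the assumption that the box runs Fedora's default firewalld and uses the ORPorts from the torrc below:

$ sudo firewall-cmd --list-all                        # what the firewall currently allows
$ sudo firewall-cmd --permanent --add-port=9001/tcp   # IPv4 ORPort
$ sudo firewall-cmd --permanent --add-port=9002/tcp   # IPv6 ORPort
$ sudo firewall-cmd --reload
$ nc -6 -vz 2a05:f6c7:62:1::5 9002                    # run from an external host (OpenBSD netcat flags)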
This is what I typically see in the log:
Mar 22 13:41:09 linux4 Tor[1731]: OpenSSL version from headers does not match the version we're running with. If you get weird crashes, that might be why. (Compiled with 1010007f: OpenSSL 1.1.0g 2 Nov 2017; running with 1010007f: OpenSSL 1.1.0g-fips 2 Nov 2017).
Mar 22 13:41:09 linux4 Tor[1731]: Tor 0.3.1.10 (git-e3966d47c7252409) running on Linux with Libevent 2.0.22-stable, OpenSSL 1.1.0g-fips, Zlib 1.2.11, Liblzma N/A, and Libzstd N/A.
Mar 22 13:41:09 linux4 Tor[1731]: Tor can't help you if you use it wrong! Learn how to be safe at https://www.torproject.org/download/download#warning
Mar 22 13:41:09 linux4 Tor[1731]: Read configuration file "/etc/tor/torrc".
Mar 22 13:41:09 linux4 Tor[1731]: Based on detected system memory, MaxMemInQueues is set to 2048 MB. You can override this by setting MaxMemInQueues by hand.
Mar 22 13:41:09 linux4 Tor[1731]: Opening Control listener on 127.0.0.1:9051
Mar 22 13:41:09 linux4 Tor[1731]: Opening OR listener on 0.0.0.0:9001
Mar 22 13:41:09 linux4 Tor[1731]: Opening OR listener on [2a05:f6c7:62:1::5]:9002
Mar 22 13:41:09 linux4 Tor[1731]: Opening Directory listener on 0.0.0.0:9030
Mar 22 13:41:30 linux4 Tor[1731]: Parsing GEOIP IPv4 file /usr/share/tor/geoip.
Mar 22 13:41:30 linux4 Tor[1731]: Parsing GEOIP IPv6 file /usr/share/tor/geoip6.
Mar 22 13:41:33 linux4 Tor[1731]: Your Tor server's identity key fingerprint is 'dobbo CE1FD7659F2DFE92B883083C0C6C974616D17F3D'
Mar 22 13:41:33 linux4 Tor[1731]: Bootstrapped 0%: Starting
Mar 22 13:43:02 linux4 Tor[1731]: Starting with guard context "default"
Mar 22 13:43:02 linux4 Tor[1731]: Bootstrapped 80%: Connecting to the Tor network
Mar 22 13:43:03 linux4 Tor[1731]: Guessed our IP address as 185.15.72.62 (source: 171.25.193.9).
Mar 22 13:43:03 linux4 Tor[1731]: Self-testing indicates your ORPort is reachable from the outside. Excellent.
Mar 22 13:43:03 linux4 Tor[1731]: Bootstrapped 85%: Finishing handshake with first hop
Mar 22 13:43:04 linux4 Tor[1731]: Bootstrapped 90%: Establishing a Tor circuit
Mar 22 13:43:06 linux4 Tor[1731]: Tor has successfully opened a circuit. Looks like client functionality is working.
Mar 22 13:43:06 linux4 Tor[1731]: Bootstrapped 100%: Done
Mar 22 13:44:03 linux4 Tor[1731]: Self-testing indicates your DirPort is reachable from the outside. Excellent. Publishing server descriptor.
Mar 22 13:44:05 linux4 Tor[1731]: Performing bandwidth self-test...done.
Mar 22 19:43:02 linux4 Tor[1731]: Heartbeat: Tor's uptime is 5:59 hours, with 4471 circuits open. I've sent 65.11 GB and received 64.45 GB.
Mar 22 19:43:02 linux4 Tor[1731]: Circuit handshake stats since last time: 18847/18847 TAP, 193853/193853 NTor.
Mar 22 19:43:02 linux4 Tor[1731]: Since startup, we have initiated 0 v1 connections, 0 v2 connections, 0 v3 connections, and 4701 v4 connections; and received 0 v1 connections, 1303 v2 connections, 2699 v3 connections, and 3308 v4 connections.
Mar 22 19:43:02 linux4 Tor[1731]: DoS mitigation since startup: 0 circuits rejected, 0 marked addresses. 0 connections closed. 103 single hop clients refused.
Mar 23 01:43:02 linux4 Tor[1731]: Heartbeat: Tor's uptime is 11:59 hours, with 584 circuits open. I've sent 111.49 GB and received 110.41 GB.
Mar 23 01:43:02 linux4 Tor[1731]: Circuit handshake stats since last time: 12687/12687 TAP, 112931/112931 NTor.
Mar 23 01:43:02 linux4 Tor[1731]: Since startup, we have initiated 0 v1 connections, 0 v2 connections, 0 v3 connections, and 7850 v4 connections; and received 0 v1 connections, 2033 v2 connections, 4125 v3 connections, and 5383 v4 connections.
Mar 23 01:43:02 linux4 Tor[1731]: DoS mitigation since startup: 0 circuits rejected, 0 marked addresses. 0 connections closed. 187 single hop clients refused.
Mar 23 07:43:02 linux4 Tor[1731]: Heartbeat: Tor's uptime is 17:59 hours, with 588 circuits open. I've sent 113.60 GB and received 112.47 GB.
Mar 23 07:43:02 linux4 Tor[1731]: Circuit handshake stats since last time: 206/206 TAP, 1417/1417 NTor.
Mar 23 07:43:02 linux4 Tor[1731]: Since startup, we have initiated 0 v1 connections, 0 v2 connections, 0 v3 connections, and 8171 v4 connections; and received 0 v1 connections, 2082 v2 connections, 4310 v3 connections, and 5730 v4 connections.
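As an aside, the OpenSSL warning at the top is likely harmless here, since both sides report the same 1.1.0g: tor was compiled against the plain OpenSSL headers but runs against Fedora's -fips build. A sketch of how to compare the two, assuming the packaged tor:

$ tor --version        # version of the installed tor binary
$ openssl version      # the runtime OpenSSL library
$ rpm -q tor openssl   # confirm both packages come from the distribution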
This is my torrc:
Address qp12.dk
#ControlSocket /run/tor/control
#ControlSocketsGroupWritable 1
#CookieAuthentication 1
#CookieAuthFile /run/tor/control.authcookie
#CookieAuthFileGroupReadable 1
User toranon
#BridgeRelay 1
ContactInfo 0x31384448 Ole Rydahl (Had a cat named Dobbo) <ole_rydahl at qp12 dot dk>
ControlPort 9051
CookieAuthentication 1
#RunAsDaemon 1
DataDirectory /var/lib/tor
SocksPort 0
Log notice syslog
DirPort 9030
DirReqStatistics 0
ExitPolicy reject *:*
Nickname dobbo
ORPort 9001
ORPort [2a05:f6c7:62:1::5]:9002
RelayBandwidthBurst 24144000
RelayBandwidthRate 24144000
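After editing a torrc like the above, the syntax can be sanity-checked without touching the running relay - a sketch, assuming the stock path and the toranon user from the config:

$ sudo -u toranon tor --verify-config -f /etc/tor/torrc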
I have a single public IPv4 address and a /56 range of IPv6 addresses. My ISP doesn't bandwidth-limit my 1 Gbit fiber; when I do a speed test it's around 400 Mbit up or down. The router handles 6000+ connections without reservations.
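For reference, RelayBandwidthRate in the torrc above is in bytes per second, so the relay is capped well below the measured line speed:

$ echo $((24144000 * 8 / 1000000))   # bytes/s converted to Mbit/s
193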
-----Original message----- From: Vasilis [mailto:andz@torproject.org] Sent: 25 March 2018 23:19
Hmm, have you checked for any weird logs in your router?
My apologies, this email was supposed to go (MUA rules!) to the tor-relays mailing list instead. If you are OK with that, we can move this discussion to tor-relays so others may be able to help, provide comments, or resolve/understand similar issues.
Thanks.
Regards, ~Vasilis
Hi,
Feel free to move the thread to the mailing list.
The router typically has a low load - at present < 0.2, while serving 4000+ connections - 27% of the maximum of 16384 connections. It reports 70 MByte of free memory. Once a month or less it reports a DDoS attack. I have stressed the router by sending a burst - 56 MByte/s - of fragmented 64 kByte pings. That generates a load of 4, but everything else appears to work as expected. Below is the repeated sequence in the log from the router:
Mon Mar 26 09:40:24 2018 daemon.warn hnetd[13086]: Router DF849732
Mon Mar 26 09:40:57 2018 daemon.notice netifd: E0_4 (3731): Sending renew...
Mon Mar 26 09:40:57 2018 daemon.notice netifd: E0_4 (3731): udhcpc: connect: Network is unreachable
Mon Mar 26 09:41:34 2018 daemon.notice netifd: E0_4 (3731): Sending renew...
Mon Mar 26 09:41:34 2018 daemon.notice netifd: E0_4 (3731): Lease of 185.15.72.62 obtained, lease time 300
Mon Mar 26 09:41:35 2018 daemon.info hnetd[3521]: platform: interface update for br-E0 detected
Mon Mar 26 09:41:35 2018 daemon.info hnetd[3521]: platform: interface update for br-E0 detected
Mon Mar 26 09:41:35 2018 daemon.info hnetd[3521]: platform: interface update for br-E0 detected
Mon Mar 26 09:41:35 2018 daemon.info hnetd[3521]: iface: updated delegated prefix 2a05:f6c7:62::/56 to br-E0
Mon Mar 26 09:41:35 2018 daemon.info hnetd[3521]: platform: interface update for br-lan detected
Mon Mar 26 09:41:35 2018 daemon.info hnetd[3521]: platform: interface update for lo detected
Mon Mar 26 09:41:35 2018 daemon.notice hnetd[3521]: [pa]_tlv_cb remove local <TLV id=33,len=19: 0022000F000927C0000493E030FD123456789A>
Mon Mar 26 09:41:35 2018 daemon.notice hnetd[3521]: [pa]_tlv_cb remove local <TLV id=33,len=120: 002200180004CBB000040860382A05F6C7006200002B00010000000000220020000927C0000493E06800000000000000000000FFFF0A0000002B000100000000002500240017002020014860486000000000000000008888200148604860000000000000000088440026000A060808080808080804040000>
Mon Mar 26 09:41:35 2018 daemon.notice hnetd[3521]: [sd]_tlv_cb remove local <TLV id=33,len=19: 0022000F000927C0000493E030FD123456789A>
Mon Mar 26 09:41:35 2018 daemon.notice hnetd[3521]: [sd]_tlv_cb remove local <TLV id=33,len=120: 002200180004CBB000040860382A05F6C7006200002B00010000000000220020000927C0000493E06800000000000000000000FFFF0A0000002B000100000000002500240017002020014860486000000000000000008888200148604860000000000000000088440026000A060808080808080804040000>
Mon Mar 26 09:41:35 2018 daemon.notice hnetd[3521]: [pa]_tlv_cb add local <TLV id=33,len=19: 0022000F0008134200037F6230FD123456789A>
Mon Mar 26 09:41:35 2018 daemon.notice hnetd[3521]: [pa]_tlv_cb add local <TLV id=33,len=120: 002200180003B5380002F1E8382A05F6C7006200002B000100000000002200200008134200037F626800000000000000000000FFFF0A0000002B000100000000002500240017002020014860486000000000000000008888200148604860000000000000000088440026000A060808080808080804040000>
Mon Mar 26 09:41:35 2018 daemon.notice hnetd[3521]: [sd]_tlv_cb add local <TLV id=33,len=19: 0022000F0008134200037F6230FD123456789A>
Mon Mar 26 09:41:35 2018 daemon.notice hnetd[3521]: [sd]_tlv_cb add local <TLV id=33,len=120: 002200180003B5380002F1E8382A05F6C7006200002B000100000000002200200008134200037F626800000000000000000000FFFF0A0000002B000100000000002500240017002020014860486000000000000000008888200148604860000000000000000088440026000A060808080808080804040000>
Mon Mar 26 09:41:35 2018 daemon.warn hnetd[13139]: Router DF849732
Mon Mar 26 09:42:18 2018 daemon.info hnetd[3521]: platform: interface update for br-E0 detected
Mon Mar 26 09:42:18 2018 daemon.info hnetd[3521]: platform: interface update for br-E0 detected
Mon Mar 26 09:42:18 2018 daemon.info hnetd[3521]: platform: interface update for br-E0 detected
Mon Mar 26 09:42:18 2018 daemon.info hnetd[3521]: iface: updated delegated prefix 2a05:f6c7:62::/56 to br-E0
Mon Mar 26 09:42:18 2018 daemon.info hnetd[3521]: platform: interface update for br-lan detected
Mon Mar 26 09:42:18 2018 daemon.info hnetd[3521]: platform: interface update for lo detected
Mon Mar 26 09:42:18 2018 daemon.notice hnetd[3521]: [pa]_tlv_cb remove local <TLV id=33,len=19: 0022000F0008134200037F6230FD123456789A>
Mon Mar 26 09:42:18 2018 daemon.notice hnetd[3521]: [pa]_tlv_cb remove local <TLV id=33,len=120: 002200180003B5380002F1E8382A05F6C7006200002B000100000000002200200008134200037F626800000000000000000000FFFF0A0000002B000100000000002500240017002020014860486000000000000000008888200148604860000000000000000088440026000A060808080808080804040000>
Mon Mar 26 09:42:18 2018 daemon.notice hnetd[3521]: [sd]_tlv_cb remove local <TLV id=33,len=19: 0022000F0008134200037F6230FD123456789A>
Mon Mar 26 09:42:18 2018 daemon.notice hnetd[3521]: [sd]_tlv_cb remove local <TLV id=33,len=120: 002200180003B5380002F1E8382A05F6C7006200002B000100000000002200200008134200037F626800000000000000000000FFFF0A0000002B000100000000002500240017002020014860486000000000000000008888200148604860000000000000000088440026000A060808080808080804040000>
Mon Mar 26 09:42:18 2018 daemon.notice hnetd[3521]: [pa]_tlv_cb add local <TLV id=33,len=19: 0022000F000769290002D54930FD123456789A>
Mon Mar 26 09:42:18 2018 daemon.notice hnetd[3521]: [pa]_tlv_cb add local <TLV id=33,len=120: 0022001800055730000493E0382A05F6C7006200002B00010000000000220020000769290002D5496800000000000000000000FFFF0A0000002B000100000000002500240017002020014860486000000000000000008888200148604860000000000000000088440026000A060808080808080804040000>
Mon Mar 26 09:42:18 2018 daemon.notice hnetd[3521]: [sd]_tlv_cb add local <TLV id=33,len=19: 0022000F000769290002D54930FD123456789A>
Mon Mar 26 09:42:18 2018 daemon.notice hnetd[3521]: [sd]_tlv_cb add local <TLV id=33,len=120: 0022001800055730000493E0382A05F6C7006200002B00010000000000220020000769290002D5496800000000000000000000FFFF0A0000002B000100000000002500240017002020014860486000000000000000008888200148604860000000000000000088440026000A060808080808080804040000>
Mon Mar 26 09:42:18 2018 daemon.warn hnetd[13195]: Router DF849732
/Ole

Hi,
> The router typically has a low load - at present < 0.2, while serving 4000+ connections [...]
> /Ole
I can't seem to find anything obvious in the logs. It seems that sometimes IPv6 connectivity/routing may not be that ideal, but this is just an attempt (of mine) to find out what's wrong and I could be completely wrong. Would you like to "experiment" by temporarily disabling your IPv6 address and report back if you see any changes?
Cheers,
~Vasilis
--
Fingerprint: 8FD5 CF5F 39FC 03EB B382 7470 5FBF 70B1 D126 0162
Pubkey: https://pgp.mit.edu/pks/lookup?op=get&search=0x5FBF70B1D1260162

-----Original message----- From: tor-relays [mailto:tor-relays-bounces@lists.torproject.org] On behalf of Vasilis Sent: 30 March 2018 16:33 To: tor-relays@lists.torproject.org Subject: Re: [tor-relays] No stable flag from 6 out of 9?
> I can't seem to find anything obvious in the logs. [...]
> Would you like to "experiment" by temporarily disabling your IPv6 address and report back if you see any changes?
Hi,
Disabled IPv6 some hours ago (nyx/menu/reset tor) with no change on the moods of 6 of the muses. They still don't consider me "stable". Still plenty of traffic! /Ole
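For anyone repeating the experiment without nyx, the same effect can be had by commenting out the IPv6 listener in the torrc and reloading - a sketch, assuming the torrc shown earlier:

# in /etc/tor/torrc, comment out:  ORPort [2a05:f6c7:62:1::5]:9002
$ sudo pkill -HUP -x tor   # SIGHUP makes tor re-read its config without a restart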

On 31. Mar 2018, at 14:45, Ole Rydahl <ole_rydahl@qp12.dk> wrote:
> Disabled IPv6 some hours ago (nyx/menu/reset tor) with no change on the moods of 6 of the muses. They still don't consider me "stable".
The respective dirauths aren't muses, they simply function as they are designed - they treat relays with unreliable connection histories as unstable. Having a working connection for a few days doesn't change that history, so you'll have to be patient to see an effect.
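For the impatient, the flags a relay currently holds in the consensus can be polled from Onionoo, and the per-authority votes are browsable on consensus-health.torproject.org - a sketch, assuming jq is installed:

$ curl -s 'https://onionoo.torproject.org/details?search=dobbo' | jq '.relays[0].flags'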

-----Original message----- From: tor-relays [mailto:tor-relays-bounces@lists.torproject.org] On behalf of Sebastian Hahn Sent: 31 March 2018 18:15 To: tor-relays@lists.torproject.org Subject: Re: [tor-relays] No stable flag from 6 out of 9?
> The respective dirauths aren't muses, they simply function as they are designed - they treat relays with unreliable connection histories as unstable. [...]
I'll be patient! (It took a while to obtain the stable flag from my wife too.) /Ole

Hi Ole,
Ole Rydahl:
> I'll be patient! (It took a while to obtain the stable flag from my wife too.) /Ole
It seems that your relay has gained the stable flag; can you please re-enable IPv6 connectivity so that we can find out if that was the issue?
Thanks,
~Vasilis
--
Fingerprint: 8FD5 CF5F 39FC 03EB B382 7470 5FBF 70B1 D126 0162
Pubkey: https://pgp.mit.edu/pks/lookup?op=get&search=0x5FBF70B1D1260162

-----Original message----- From: tor-relays [mailto:tor-relays-bounces@lists.torproject.org] On behalf of Vasilis Sent: 10 April 2018 13:08 To: tor-relays@lists.torproject.org Subject: Re: [tor-relays] No stable flag from 6 out of 9?
> It seems that your relay has gained the stable flag; can you please re-enable IPv6 connectivity so that we can find out if that was the issue? [...]
Hi Vasilis,
I'm in France right now, but I'll re-enable IPv6 sometime Sunday and let you know. /Ole

-----Original message----- From: tor-relays [mailto:tor-relays-bounces@lists.torproject.org] On behalf of Vasilis Sent: 10 April 2018 13:08 To: tor-relays@lists.torproject.org Subject: Re: [tor-relays] No stable flag from 6 out of 9?
> It seems that your relay has gained the stable flag; can you please re-enable IPv6 connectivity so that we can find out if that was the issue? [...]
Hi Vasilis,
I enabled IPv6 yesterday evening. The relay maintained its flags, including the stable flag, and got the additional IPv6 flag. As far as I can see, there is quite a large difference in the required "running" period between the 9 directory authorities; I interpreted that as an issue with my setup. 3 authorities voted stable after a few days, while the remaining 6 need almost 30 days of observed running before granting the stable flag. Regards Ole

On 15. Apr 2018, at 10:03, Ole Rydahl <ole_rydahl@qp12.dk> wrote:
> As far as I can see, there is quite a large difference in the required "running" period between the 9 directory authorities; I interpreted that as an issue with my setup. 3 authorities voted stable after a few days, while the remaining 6 need almost 30 days of observed running before granting the stable flag.
This is completely expected, as 3 dirauths aren't currently equipped to notice IPv6 reachability failure.

-----Original message----- From: tor-relays [mailto:tor-relays-bounces@lists.torproject.org] On behalf of Sebastian Hahn Sent: 15 April 2018 10:05 To: tor-relays@lists.torproject.org Subject: Re: [tor-relays] No stable flag from 6 out of 9?
> This is completely expected, as 3 dirauths aren't currently equipped to notice IPv6 reachability failure.
It's my experience that announcing IPv6 capability without actually providing it results in not being part of the cached consensus, since only 3 authorities acknowledge your relay as running. My situation was different: all 9 directory authorities acknowledged my relay as running, but initially only 3 granted me the stable flag. Vasilis suggested an experiment where I stopped announcing IPv6 support a while ago. As a follow-up - after I had obtained the stable flag from all 9 directory authorities - Vasilis suggested that I re-enable IPv6 capability. /Ole

Hi,
On 15. Apr 2018, at 20:20, Ole Rydahl <ole_rydahl@qp12.dk> wrote:
> It's my experience that announcing IPv6 capability without actually providing it results in not being part of the cached consensus, since only 3 authorities acknowledge your relay as running.
> My situation was different: all 9 directory authorities acknowledged my relay as running, but initially only 3 granted me the stable flag.
> Vasilis suggested an experiment where I stopped announcing IPv6 support a while ago. As a follow-up - after I had obtained the stable flag from all 9 directory authorities - Vasilis suggested that I re-enable IPv6 capability.
I read the thread; I think what's happened is that your IPv6 connection is intermittently flaky, not down all the time. That means dirauths with v6 note many more failures than those without it, and that causes them to record a much worse availability history for you.
Cheers
Sebastian

-----Original message----- From: tor-relays [mailto:tor-relays-bounces@lists.torproject.org] On behalf of Sebastian Hahn Sent: 15 April 2018 20:54 To: tor-relays@lists.torproject.org Subject: Re: [tor-relays] No stable flag from 6 out of 9?
> I read the thread; I think what's happened is that your IPv6 connection is intermittently flaky, not down all the time. That means dirauths with v6 note many more failures than those without it, and that causes them to record a much worse availability history for you. [...]
Hi Sebastian,
You are probably right, but I assume the relay would lose the cached consensus - for a period - in case of lost IPv6 connectivity? Nothing in the log, however, indicates that, neither before I disabled IPv6 nor after I re-enabled it. For other purposes, I ping6 Google every 5 minutes and send a mail to myself in case of no reply. There were 2 occasions during the last 2 months: March 6th (for 50 minutes) and April 10th (for 1 hour). The latter happened while my IPv6 capability was unannounced (Vasilis' suggestion). Anyway, the traffic is flowing nicely now - as it did even before the stable flag was granted. I suggest we consider the problem solved. Regards Ole
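The watchdog described above can be as small as one cron line - a sketch, assuming a reasonably recent iputils (older systems use ping6 instead of ping -6) and a working local mail setup; the address is a placeholder:

*/5 * * * * ping -6 -c 3 -w 10 google.com >/dev/null 2>&1 || echo "IPv6 down at $(date)" | mail -s "ipv6 alert" you@example.com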
participants (3)
- Ole Rydahl
- Sebastian Hahn
- Vasilis