Hello everyone!
Since July 2017, there has been a steady decline in relays, from ~7k to now ~6.5k. This is a bit unusual: we don't often see relays going offline this steadily (at least not that I can remember...).
It could certainly be something normal. However, we shouldn't rule out a bug in tor either. The steadiness of the decline makes me a bit more worried than usual.
You can see that the decline started around July 2017:
https://metrics.torproject.org/networksize.html?start=2017-06-01&end=201...
What happened around July in terms of tor releases:
2017-06-08 09:35:17 -0400 802d30d9b7 (tag: tor-0.3.0.8)
2017-06-08 09:47:44 -0400 e14006a545 (tag: tor-0.2.5.14)
2017-06-08 09:47:58 -0400 aa89500225 (tag: tor-0.2.9.11)
2017-06-08 09:55:28 -0400 f833164576 (tag: tor-0.2.4.29)
2017-06-08 09:55:58 -0400 21a9e5371d (tag: tor-0.2.6.12)
2017-06-08 09:56:15 -0400 3db01d3b56 (tag: tor-0.2.7.8)
2017-06-08 09:58:36 -0400 64ac28ef5d (tag: tor-0.2.8.14)
2017-06-08 10:15:41 -0400 dc47d936d4 (tag: tor-0.3.1.3-alpha)
...
2017-06-29 16:56:13 -0400 fab91a290d (tag: tor-0.3.1.4-alpha)
2017-06-29 17:03:23 -0400 22b3bf094e (tag: tor-0.3.0.9)
...
2017-08-01 11:33:36 -0400 83389502ee (tag: tor-0.3.1.5-alpha)
2017-08-02 11:50:57 -0400 c33db290a9 (tag: tor-0.3.0.10)
Note that on August 1st 2017, 0.2.4, 0.2.6 and 0.2.7 went end of life.
That being said, I don't have an easy way to list which relays went offline during the decline (since July basically) to see if a common pattern emerges.
So, a few things. First, if anyone on this list noticed that their relay dropped out of the consensus while tor was still running, now is a good time to let this thread know :).
Second, if anyone has an idea of what might be going on, that is, one or more theories, please share. Even better, if you have some tooling to list which relays went offline, that would be _awesome_ (a rough sketch of one approach follows after these points).
Third, knowing the state of packaging in Debian/Red Hat/Ubuntu/... around July could be useful. What if a package in distro X is broken and the update has been killing relays? Or something like that...
Last, looking at the dirauths would be a good idea. Basically, when did the majority switch to 0.3.0 and then 0.3.1? Starting in July, what were the dirauths running?
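If you want to poke at this yourself, here is a very rough Python sketch (not an official tool; the consensus filenames below are placeholders) that diffs two consensus files from https://collector.torproject.org/archive/relay-descriptors/consensuses/ and lists relays present in the first but gone from the second:

    import base64

    def relays_in_consensus(path):
        # "r" lines look like: r <nickname> <identity-b64> <digest> <date> <time> <ip> <orport> <dirport>
        relays = {}
        with open(path) as f:
            for line in f:
                if line.startswith("r "):
                    parts = line.split()
                    nickname, identity = parts[1], parts[2]
                    # the identity is base64 without padding; add "=" to decode it to the hex fingerprint
                    relays[base64.b64decode(identity + "=").hex().upper()] = nickname
        return relays

    # placeholder filenames: pick one consensus from before the decline and one recent
    before = relays_in_consensus("2017-07-01-00-00-00-consensus")
    after = relays_in_consensus("2017-10-23-00-00-00-consensus")
    for fp in sorted(set(before) - set(after)):
        print(fp, before[fp])

Comparing single hours will be dominated by ordinary churn, so building the two sets from a week of consensuses on each side would give a cleaner picture.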
Any help is very welcome! Again, this decline could have natural causes, but for now I just don't want to rule out an issue in tor or packaging.
Cheers! David
I can state that the reason I stopped hosting my exit relay was the tor RPM package not being kept up to date for CentOS 7. The last available version was considered out of date and no longer supported, so rather than run a relay that was potentially detrimental to the health of the tor network, I shut down the node.
On 23 Oct (09:37:31), Eli wrote:
I can state the reason I stopped hosting my exit relay was due to tor rpm package not being up to date for CentOS 7. The last available version was considered out of date and no longer supported. So instead of running a relay that was potentially detrimental to the health of the tor network I shutdown the node.
I've just pinged our Fedora/CentOS packager, and he pointed out this:
https://bodhi.fedoraproject.org/updates/FEDORA-EPEL-2017-abe6f98ebf
Six days ago, the latest up-to-date Tor LTS version was uploaded. :)
Big thanks for running a relay!
Cheers! David
David Goulet:
[...]
(Replying to OP since it went OT)
As some of you know, TDP did a little suite of shell scripts based on OONI data to look at diversity statistics:
https://torbsd.github.io/oostats.html
With the source here for further tinkering:
https://github.com/torbsd/tdp-onion-stats/
Maybe something we could look at is "exception reports", which in some industries means regular reports that look at anomalies or "exceptions" which display out-of-the-ordinary statistics, generally prompting some sort of action.
In other words, daily reports would be run on, say, bw consensus by country, and if there was some statistically significant change over N periods of time, it would be noted. Or if a particular OS drops or jumps. Or if a particular AS jumps or declines for relays, bridges, whatever.
If done right, a bunch of these reports could point to particular changes to the network that need further investigation, and in some cases might quickly point to the related issue. E.g., country X shut down an ISP with a particular AS number, etc.
More such reports, coupled with careful tuning over time, could become an alarm system for Tor network changes, instead of just "er, such-and-such distro didn't update their packages back then, I just found out in git."
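As a toy example of what one such check could look like (the per-country counts could come from Onionoo, consensus parsing, whatever; the thresholds are arbitrary placeholders), in Python:

    # Flag countries whose relay count moved by a lot between two daily snapshots.
    def exceptions(yesterday, today, min_delta=20, min_ratio=0.15):
        flagged = []
        for country in set(yesterday) | set(today):
            old, new = yesterday.get(country, 0), today.get(country, 0)
            delta = new - old
            ratio = abs(delta) / old if old else 1.0
            if abs(delta) >= min_delta and ratio >= min_ratio:
                flagged.append((country, old, new, delta))
        return sorted(flagged, key=lambda r: abs(r[3]), reverse=True)

    # Example with made-up numbers: only "us" crosses both thresholds.
    print(exceptions({"de": 1200, "us": 900, "fr": 400},
                     {"de": 1190, "us": 760, "fr": 405}))

Run daily against the previous day's snapshot, anything it prints is an "exception" worth a human look.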
Thoughts?
g
As some of you know, TDP did a little suite of shell scripts based on OONI data to look at diversity statistics:
I think you mean onionoo (not OONI).
In other words, daily reports would be run on, say, bw consensus by country, and if there was some statistically significant change over N periods of time, it would be noted. Or if a particular OS drops or jumps. Or if a particular AS jumps or declines for relays, bridges, whatever.
something related:
https://nusenu.github.io/OrNetRadar/ as ML: https://lists.riseup.net/www/info/ornetradar
https://nusenu.github.io/OrNetStats/
Then there is (will be) metrics-bot, I made some feature requests similar to your examples above here:
https://trac.torproject.org/projects/tor/ticket/23937#comment:1
also related: https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-consensus-health
nusenu:
As some of you know, TDP did a little suite of shell scripts based on OONI data to look at diversity statistics:
I think you mean onionoo (not OONI).
Yes. Sloppy on details when thinking conceptually sometimes :)
In other words, daily reports would be run on, say, bw consensus by country, and if there was some statistically significant change over N periods of time, it would be noted. Or if a particular OS drops or jumps. Or if a particular AS jumps or declines for relays, bridges, whatever.
something related:
https://nusenu.github.io/OrNetRadar/ as ML: https://lists.riseup.net/www/info/ornetradar
https://nusenu.github.io/OrNetStats/
Then there is (will be) metrics-bot, I made some feature requests similar to your examples above here:
https://trac.torproject.org/projects/tor/ticket/23937#comment:1
also related: https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-consensus-health
This is all good stuff, some of which I've seen before. Really impressive.
Maybe I'm missing something from my quick look at the above, but I'm thinking about more general reports on the network as a whole.
For instance, the OP was about a decline in relays starting in July, but it wasn't noticed until late October.
https://metrics.torproject.org/networksize.html?start=2017-06-01&end=201...
Was it picked up by any alerts earlier, especially those two big and short-term drops?
Exception reports generally flag drastic changes, the kind that should be noticeable from day-to-day comparisons of full snapshots of the network and its nodes.
Let me give this a bit more thought, but thanks nusenu.
g
Was it picked up by any alerts earlier, especially those two big and short-term drops?
I assume you mean the bridges line; those drops didn't go unnoticed [1], and you can find their explanation here: https://metrics.torproject.org/news.html
[1] https://lists.torproject.org/pipermail/metrics-team/2017-August/000440.html
My relay has gone off the consensus.
Fingerprint: E7FFF8C3D5736AB87215C5DB05620103033E69C3
Alias: rasptor4273
I am running Tor 0.2.5.14 on Debian, on a Raspberry Pi 2B. I upgraded to that version on September 3rd.
I grepped through these: https://collector.torproject.org/archive/relay-descriptors/consensuses/ and the latest entry I found for my alias is in the file ./17/2017-09-17-13-00-00-consensus.
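(Grepping by alias works because nicknames appear verbatim in the consensus "r" lines; to search by fingerprint instead, which is unambiguous, note that the consensus stores the identity base64-encoded without padding, so a quick Python sketch like this converts it:)

    import base64, binascii

    def consensus_identity(hex_fingerprint):
        # consensus "r" lines carry the identity as base64 with the trailing "=" removed
        raw = binascii.unhexlify(hex_fingerprint.replace(" ", ""))
        return base64.b64encode(raw).decode("ascii").rstrip("=")

    print(consensus_identity("E7FFF8C3D5736AB87215C5DB05620103033E69C3"))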
Not sure what other information I can provide. Do let me know if I can do anything else to help troubleshoot.
Best, Joep
On 23 Oct (22:49:55), rasptor 4273 wrote:
My relay has gone off the consensus. Fingerprint: E7FFF8C3D5736AB87215C5DB05620103033E69C3
Interesting. And it is still running right now without any problems? Can you give me the IP/ORPort tuple?
Do you think you can add this to your torrc and then HUP your relay (very important: do NOT restart it)?
Log info file <FULL_PATH_TO_LOG_FILE>
Then, after some hours (maybe a day), we'll be looking for "Decided to publish new relay descriptor" in the log.
If it appears, we know that your relay keeps uploading its descriptor to the directory authorities, so chances are the problem is on the dirauth side not finding you reachable.
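Once there is a day or so of info log, pulling out the relevant lines is as simple as this little sketch (the path is whatever you used in the Log line above):

    # grep-in-Python; the message text is the one quoted above
    with open("/path/to/your/info.log") as f:   # placeholder path, use the one from your torrc
        for line in f:
            if "Decided to publish new relay descriptor" in line:
                print(line.rstrip())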
Thanks! David
(Don't take this information as definitive; this was a quick 'n' dirty thing.)
David Goulet:
Since July 2017
It appears to have started earlier than July if you graph the metrics CSV file for better granularity; maybe somewhere in mid-May 2017 (maybe when tor 0.2.9.x -> 0.3.0 started to spread? -> correlate it with the relays-by-version graph).
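(Rough sketch of that kind of diffing; the CSV filename and the "date"/"relays" column names are assumptions, so check the header of whatever you download from the metrics site:)

    import csv

    rows = []
    with open("networksize.csv") as f:                       # assumed filename
        reader = csv.DictReader(l for l in f if not l.startswith("#"))
        for row in reader:
            if row.get("relays"):                            # assumed columns: date, relays, bridges
                rows.append((row["date"], int(row["relays"])))

    # day-over-day change, to spot when the decline really starts
    for (d1, n1), (d2, n2) in zip(rows, rows[1:]):
        print(d2, n2, n2 - n1)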
which relays went offline
I compared two sets of relays (identified by FP):
1) all relays included in onionoo data from 2017-05-07 00:00 (that also includes non-running relays that had been running at some point during the week before that timestamp). This set has 7334 _running_ relays (8681 in total)
2) all relays - including non-running relays - included in onionoo data from 2017-10-23 21:00 that were added to the tor network **before** 2017-05-07. This set contains 5151 relays in total.
I'm afraid the results are not very useful due to the apparently high churn rate. 3885 relays that were in (1) are NOT in (2).
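(For anyone who wants to reproduce this kind of grouping: a sketch of the general shape, not the actual script used here, assuming the two Onionoo details documents were saved as before.json and after.json with the usual "relays" list of objects carrying "fingerprint", "platform" and "flags":)

    import json
    from collections import Counter

    def load(path):
        with open(path) as f:
            return {r["fingerprint"]: r for r in json.load(f)["relays"]}

    before, after = load("before.json"), load("after.json")
    gone = [before[fp] for fp in set(before) - set(after)]

    # e.g. platform strings ("Tor 0.2.9.10 on Linux") and flag sets of the disappeared relays
    print(Counter(r.get("platform", "unknown") for r in gone).most_common(15))
    print(Counter(tuple(sorted(r.get("flags", []))) for r in gone).most_common(15))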
Disappeared relays by version (version as seen on 2017-05-07; this is NOT necessarily the version they were running when they were last seen):

+-------------------+--------+
| tor_version       | relays |
+-------------------+--------+
| 0.2.9.10          |   1161 |
| 0.2.5.12          |    576 |
| 0.2.7.6           |    511 |
| 0.2.9.9           |    291 |
| 0.2.4.27          |    251 |
| 0.3.0.6           |    241 |
| 0.2.4.23          |    158 |
| 0.2.8.9           |    129 |
| 0.2.6.10          |     95 |
| 0.3.0.5-rc        |     82 |
| 0.2.9.8           |     64 |
| 0.2.8.12          |     58 |
| 0.2.8.11          |     41 |
| 0.2.8.8           |     39 |
| 0.2.8.7           |     30 |
| 0.2.4.22          |     22 |
| 0.3.0.4-rc        |     14 |
| 0.3.1.0-alpha-dev |     10 |
| 0.2.4.20          |      9 |
| 0.2.8.10          |      9 |
| 0.2.5.10          |      9 |
| 0.3.0.3-alpha     |      8 |
| 0.2.8.6           |      7 |
| 0.2.6.9           |      6 |
| 0.2.7.6-dev       |      5 |
| 0.2.7.5           |      5 |
| 0.3.0.2-alpha     |      5 |
[...]

(Retrieving the actual last-seen version is feasible but requires processing [much] more data.)

Disappeared relays by flags -> we lose guards, not exits (matches somewhat the relays-by-flag graphs on metrics). Flags appear in this order: stable,fast,guard,exit,hsdir (0=not set, 1=set)

+-------+--------+
| flags | relays |
+-------+--------+
| 11001 |    775 |
| 11101 |    609 |
| 00000 |    582 |
| 01000 |    561 |
| 11000 |    453 |
| 10000 |    259 |
| 11100 |    196 |
| 11111 |    131 |
| 01010 |     91 |
[...]

Disappeared relays by OS (this matches graphs on metrics) -> we lose Linux boxes, the others are static:

+--------------------+--------+
| OS                 | relays |
+--------------------+--------+
| Linux              |   3434 |
| Windows 8          |    119 |
| Windows 7          |    111 |
| FreeBSD            |    108 |
| OpenBSD            |     58 |
| Windows XP         |     22 |
| Windows 8 [server] |     10 |
[...]
On 2017-10-24 03:30, nusenu wrote:
It appears to have started earlier than July if you graph metrics' csv file for better granularity. Maybe somewhere in mid May 2017 (maybe when tor 0.2.9.x -> 0.3.0 started to spread? -> correlate it with the relays by version graph)
I had a dead relay after doing that update: https://lists.torproject.org/pipermail/tor-relays/2017-May/012360.html
It was an AppArmor/Ubuntu-specific problem, though.
Regards, Alexander
On 23-Oct-17 at 15:32, David Goulet wrote:
Since July 2017, there has been a steady decline in relays from ~7k to now ~6.5k. This is a bit unusual that is we don't see often such a steady behavior of relays going offline (at least that I can remember...).
It could certainly be something normal here. However, we shouldn't rule out a bug in tor as well. The steadyness of the decline makes me a bit more worried than usual.
That being said, I don't have an easy way to list which relays went offline during the decline (since July basically) to see if a common pattern emerges.
So few things. First, if anyone on this list noticed that their relay went off the consensus while still having tor running, it is a good time to inform this thread :).
Second, anyone could have an idea of what possibly is going on that is have one or more theories. Even better, if you have some tooling to try to list which relays went offline, that would be _awesome_.
a) Please find two pictures which show TAP [1] and ntor [2] handshakes in 2016 and 2017 for a certain relay. The number of TAP/ntor handshakes has clearly been increasing since July 2017.
b) Since October 2017, massive hourly bursts of TAP handshakes have been hitting all my guards.
c) Another relay saw the largest amount of TAPs: it received 6 million TAP handshakes. The flood lasted 65 minutes, and tor's CPU usage went up from 60% before to 120-210% during the flood.
I cannot prove it, but given the outbound-packet abuse letters from an ISP, I am starting to wonder whether this is another measure to damage Guard/HSDir flags, besides the enormous consumption of CPU resources.
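(For anyone who wants to pull the same numbers out of their own logs: the notice-level heartbeat prints a "Circuit handshake stats since last time: X/Y TAP, Z/W NTor." line; the exact wording may vary between versions, so treat this Python sketch as a starting point:)

    import re

    # assumed heartbeat wording; adjust the pattern to whatever your tor version logs
    pattern = re.compile(r"Circuit handshake stats since last time: "
                         r"(\d+)/(\d+) TAP, (\d+)/(\d+) NTor")

    with open("/path/to/notices.log") as f:      # placeholder path
        for line in f:
            m = pattern.search(line)
            if m:
                tap_done, tap_req, ntor_done, ntor_req = map(int, m.groups())
                # timestamp prefix, assuming the default tor log format
                print(line.split("[")[0].strip(), "TAP requested:", tap_req, "NTor requested:", ntor_req)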
I hope this helps.
[TAP 1] https://i.imgur.com/jDj3M5W.jpg [NTOR 2] https://i.imgur.com/jDncdMx.jpg
-- Cheers, Felix
I run a small relay and it went down intermittently from Nov 1 to Nov 25, with a lot of hiccups [1] since I started it earlier in the year, which may or may not be due to this attack. It is my first and only relay, so I have nothing to compare it against.
What I can say is that during most of that time in November the relay was up: the instance was running, tor was running (albeit an older version), and there were no traffic restrictions that I know of.
Obviously, after that period the relay lost its Guard flag; since 0.3.1.9 it seems to be catching up quickly, actually with much more traffic than at any time before.
In the past few days I did a lot of cleanup, so I cannot provide logs (I barely log notices, and not even that when there are no issues).
[1] https://atlas.torproject.org/#details/1FA8F638298645BE58AC905276680889CB795A...
I can only spare a few GB or so per day from my home internet, so I set the AccountingMax daily limit to 3GB (in + out = 6GB) and RelayBandwidthRate to 200KB (burst 400KB). After much experimenting, this was the balance between not hibernating before the 24 hours are over and leaving enough bytes for me and my household's 4K Netflix use.
Recently the 3GB daily limit hasn't been enough to last the 24 hours; some days the relay went offline in the afternoon. I raised AccountingMax to 4GB and it seems fine.
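For reference, the relevant torrc lines look roughly like this (my current values; the AccountingStart time is just an example, and man tor has the exact semantics):

    AccountingStart day 00:00
    AccountingMax 4 GBytes
    RelayBandwidthRate 200 KBytes
    RelayBandwidthBurst 400 KBytes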
Perhaps this is related to the recent trends (I am non-exit, middle only). Unfortunately, I do have to reboot my relay once a week or so, due to updates or to the people I live with hitting the home router's reset switch when they deem their Netflix-over-wifi experience to be poor, so I will probably never get the Guard flag.