I do wonder why 2 exit relays (at the same IP address) dropped from about 8,000 connections to about 1,000 connections exactly 1 month and 2 hours after they were installed.
Furthermore, metrics.t.o shows:
IPv4 Exit Policy Summary: reject 1-65535
It is a hardened Gentoo with LibreSSL; no changes to torrc AFAICT. FWIW, I do use offline keys here (for the first time) with a lifetime of 3 months.
Today I rebooted the system and upgraded from 0.3.5.3-alpha to 0.3.5.4-alpha - that didn't help.
relays: https://metrics.torproject.org/rs.html#details/509EAB4C5D10C9A9A24B4EA0CE402... https://metrics.torproject.org/rs.html#details/63BF46A63F9C21FD315CD061B3EAA...
On 11/8/18 7:57 PM, Toralf Förster wrote:
I do wonder why 2 exit relays (at the same IP address) dropped from about 8,000 connections to about 1,000 connections exactly 1 month and 2 hours after they were installed.
Hmm, is this the reason?
/tmp/info.log:Nov 08 20:08:57.000 [info] handle_response_fetch_desc(): Received http status code 404 ("Servers unavailable") from server '154.35.175.225:80' while fetching "/tor/server/d/0FA173DC50351481127F294FD459FBBFFF750205+7D8F21C27E46FDA6D7AFFA241C928F06
/tmp/info.log:Nov 08 20:09:57.000 [info] handle_response_fetch_desc(): Received http status code 404 ("Servers unavailable") from server '199.58.81.140:80' while fetching "/tor/server/d/0C449A3D5F5A260C36E474761FE6483383A73297+997087FA3A6E0598877277263CE159551
/tmp/info.log:Nov 08 20:09:57.000 [info] handle_response_fetch_desc(): Received http status code 404 ("Servers unavailable") from server '154.35.175.225:80' while fetching "/tor/server/d/6A11B095F5B196D6AAB342838736849D8566CCC8+BA0F331841DB0412E64AA66BE4CAF3F0
/tmp/info.log:Nov 08 20:10:57.000 [info] handle_response_fetch_desc(): Received http status code 404 ("Servers unavailable") from server '194.109.206.212:80' while fetching "/tor/server/d/1EF608B2B100E1690829BE387E32ADA86BE8F884+BF9BC6E523E7F91C31D08A198B5F589
/tmp/info.log:Nov 08 20:10:57.000 [info] handle_response_fetch_desc(): Received http status code 404 ("Servers unavailable") from server '86.59.21.38:80' while fetching "/tor/server/d/0C449A3D5F5A260C36E474761FE6483383A73297+997087FA3A6E0598877277263CE159551D2
/tmp/info.log:Nov 08 20:11:57.000 [info] handle_response_fetch_desc(): Received http status code 404 ("Servers unavailable") from server '131.188.40.189:80' while fetching "/tor/server/d/0C449A3D5F5A260C36E474761FE6483383A73297+997087FA3A6E0598877277263CE15955
/tmp/info.log:Nov 08 20:11:57.000 [info] handle_response_fetch_desc(): Received http status code 404 ("Servers unavailable") from server '154.35.175.225:80' while fetching "/tor/server/d/928DF3B8324E91E3D05961115E14C5A5CBFC8C73+201DF2B3E2850EF72599935F01281D42
/tmp/info.log:Nov 08 20:12:57.000 [info] handle_response_fetch_desc(): Received http status code 404 ("Servers unavailable") from server '131.188.40.189:80' while fetching "/tor/server/d/0C449A3D5F5A260C36E474761FE6483383A73297+997087FA3A6E0598877277263CE15955
/tmp/info.log:Nov 08 20:12:57.000 [info] handle_response_fetch_desc(): Received http status code 404 ("Servers unavailable") from server '193.23.244.244:80' while fetching "/tor/server/d/E7356CC48361C4E9BE9F3907C9B9697D6F312501+8C0C49B869833FCA1B4EF7074E37DA40
/tmp/info.log:Nov 08 20:13:57.000 [info] handle_response_fetch_desc(): Received http status code 404 ("Servers unavailable") from server '193.23.244.244:80' while fetching "/tor/server/d/91FEBE13B9417B87A057ADE5338AD37380D58DF0+9FA48FB6EA5A31FD1D4BF1A276970EC2
/tmp/info.log:Nov 08 20:13:57.000 [info] handle_response_fetch_desc(): Received http status code 404 ("Servers unavailable") from server '199.58.81.140:80' while fetching "/tor/server/d/0C449A3D5F5A260C36E474761FE6483383A73297+997087FA3A6E0598877277263CE159551
/tmp/info.log:Nov 08 20:14:57.000 [info] handle_response_fetch_desc(): Received http status code 404 ("Servers unavailable") from server '131.188.40.189:80' while fetching "/tor/server/d/9CBE4CF855A6920BFF7EDF3D31211FEF67D2AB00+7D684529359AAACC8CF8F656DF2365A5
/tmp/info.log:Nov 08 20:14:57.000 [info] handle_response_fetch_desc(): Received http status code 404 ("Servers unavailable") from server '171.25.193.9:443' while fetching "/tor/server/d/0C449A3D5F5A260C36E474761FE6483383A73297+997087FA3A6E0598877277263CE159551
Toralf Förster:
I do wonder why 2 exit relays (at the same IP address) dropped from about 8,000 connections to about 1,000 connections exactly 1 month and 2 hours after they were installed.
can you give an absolute datetime (yyyy-mm-dd hh:mm UTC) for when the number of connections started to drop? are these numbers for each tor instance or for both together (since they run on the same box)?
Furthermore, metrics.t.o shows:
IPv4 Exit Policy Summary: reject 1-65535
did that change? did it change at the same time?
Today I rebooted the system and upgraded from 0.3.5.3-alpha to 0.3.5.4-alpha - that didn't help.
just to clarify: you had the problem with 0.3.5.3-alpha already (i.e. the problem did not first appear after upgrading to 0.3.5.4-alpha)?
On 11/8/18 8:22 PM, nusenu wrote:
can you give an absolute datetime (yyyy-mm-dd hh:mm UTC) for when the number of connections started to drop? are these numbers for each tor instance or for both together (since they run on the same box)?
2018-11-06 21:00 UTC
(I do have the sysstat values if needed)
FWIW the system was rebooted and had been running fine for about 2 days:
reboot   system boot  4.18.17         Sun Nov  4 18:13 - 18:31  (4+00:18)
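One way to split the totals per tor instance is to count established TCP connections per local ORPort. A minimal sketch, assuming Linux with a reasonably recent iproute2 "ss"; the ORPort values are placeholders, not the actual ports of the relays in this thread:

  import subprocess
  from collections import Counter

  OR_PORTS = {"443", "9001"}  # hypothetical ORPorts - adjust to your torrc

  # with a state filter, ss omits the State column; the local address is
  # then the second-to-last field on each line
  out = subprocess.run(["ss", "-Htn", "state", "established"],
                       capture_output=True, text=True, check=True).stdout

  counts = Counter()
  for line in out.splitlines():
      local = line.split()[-2]
      port = local.rsplit(":", 1)[-1]
      if port in OR_PORTS:
          counts[port] += 1

  print(counts)  # e.g. Counter({'443': 812, '9001': 784})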
Furthermore, metrics.t.o shows:
IPv4 Exit Policy Summary: reject 1-65535
did that change? did it change at the same time?
This I realized only today - I didn't check it before.
Today I rebooted the system and upgraded from 0.3.5.3-alpha to 0.3.5.4-alpha - that didn't help.
just to clarify: you had the problem with 0.3.5.3-alpha already (i.e. the problem did not first appear after upgrading to 0.3.5.4-alpha)?
0.3.5.3-alpha is affected.
can you give an absolute datetime for when the number of connections started to drop?
2018-11-06 21:00 UTC
are you sure this is UTC?
IPv4 Exit Policy Summary: reject 1-65535
did that change? did it change at the same time?
This I realized only today - I didn't check it before.
historical onionoo data indicates that the current exit_policy_summary of 'zwiebeltoralf' first appeared with that value at 2018-11-06 19:00 UTC, which somewhat correlates with the time you observed the drop in connection count,
and that your exit policy changed as well from [1] (2018-11-06 18:00 UTC) to [2] (2018-11-06 19:00 UTC) according to past onionoo data.
Generally speaking, your exit policy appears to change quite often (multiple times a day). Is that expected?
no changes to torrc AFAICT.
Are you saying you did not change your tor configuration and exit policy at all during that time?
I did not look at the underlying descriptor data but onionoo data suggests that an exit policy change occurred which could have caused the change in connection counts.
I'm still surprised that you do not have more connections, since even non-exits have more than 1k concurrent connections - unless you are talking about specific connections only?
[1] reject 0.0.0.0/8:*, reject 169.254.0.0/16:*, reject 127.0.0.0/8:*, reject 192.168.0.0/16:*, reject 10.0.0.0/8:*, reject 172.16.0.0/12:*, reject 194.9.149.49/24:*, reject 162.218.232.15/21:*, reject 210.105.69.49/22:*, reject 65.202.15.202/27:*, reject 210.182.0.0/19:*, reject 185.25.91.128/27:*, reject 103.59.156.0/22:*, reject 195.78.231.151/22:*, reject 185.60.216.15/8:443, reject 81.3.7.31:443, reject 149.154.167.51:5222, reject 198.50.195.216:7777, reject 194.109.206.212:80, accept *:53, accept *:79, accept *:80, accept *:110, accept *:119, accept *:194, accept *:220, accept *:389, accept *:443, accept *:464, accept *:465, accept *:531, accept *:543, accept *:544, accept *:563, accept *:587, accept *:636, accept *:706, accept *:749, accept *:853, accept *:873, accept *:902, accept *:903, accept *:904, accept *:981, accept *:989, accept *:990, accept *:991, accept *:992, accept *:993, accept *:994, accept *:995, accept *:1194, accept *:1220, accept *:1293, accept *:1533, accept *:1677, accept *:1723, accept *:1755, accept *:1863, accept *:1883, accept *:2095, accept *:2096, accept *:2102, accept *:2103, accept *:2104, accept *:3128, accept *:3690, accept *:4321, accept *:4643, accept *:5050, accept *:5190, accept *:5222, accept *:5223, accept *:5269, accept *:5280, accept *:6660, accept *:6661, accept *:6662, accept *:6663, accept *:6664, accept *:6665, accept *:6666, accept *:6667, accept *:6668, accept *:6669, accept *:6679, accept *:6697, accept *:7777, accept *:8008, accept *:8074, accept *:8080, accept *:8082, accept *:8232, accept *:8233, accept *:8332, accept *:8333, accept *:8443, accept *:8883, accept *:8888, accept *:9418, accept *:11371, accept *:19294, accept *:19638, accept *:50002, accept *:64738, reject *:*
[2] reject 0.0.0.0/8:*, reject 127.0.0.0/8:*, reject 192.168.0.0/16:*, reject 10.0.0.0/8:*, reject 172.16.0.0/12:*, reject 159.0.0.0/8:*, reject 169.0.0.0/8:*, reject 194.9.149.0/24:*, reject 162.218.232.15/21:*, reject 210.105.69.49/22:*, reject 65.202.15.202/27:*, reject 210.182.0.0/19:*, reject 185.25.91.128/27:*, reject 103.59.156.0/22:*, reject 195.78.231.151/22:*, reject 104.244.42.80/8:443, reject 178.237.20.121/8:443, reject 185.103.30.39/8:443, reject 199.59.148.175/8:443, reject 104.108.60.52/8:80, reject 185.10.61.120/8:80, reject 199.58.81.140/8:80, reject 51.68.153.90/8:80, reject 91.228.7.11/8:80, reject 91.146.131.78/8:8080, reject 184.168.230.1/16:80, reject 87.240.129.133/24:443, reject 193.124.118.184/24:80, reject 109.201.135.79:443, reject 149.154.167.51:443, reject 151.101.113.2:443, reject 152.195.34.118:443, reject 152.195.39.72:443, reject 157.240.20.174:443, reject 164.100.78.192:443, reject 165.227.51.238:443, reject 2.19.46.136:443, reject 200.252.60.65:443, reject 203.119.215.109:443, reject 35.165.113.217:443, reject 35.165.136.85:443, reject 5.160.138.235:443, reject 51.68.71.35:443, reject 52.216.133.77:443, reject 52.222.158.94:443, reject 54.192.202.107:443, reject 62.240.232.140:443, reject 8.247.13.109:443, reject 8.252.23.115:443, reject 87.250.250.92:443, reject 88.208.10.18:443, reject 89.45.235.19:443, reject 94.100.180.76:443, reject 94.76.213.163:443, reject 149.154.167.51:5222, reject 109.206.161.83:80, reject 14.215.165.22:80, reject 149.126.77.208:80, reject 149.56.14.76:80, reject 162.220.11.2:80, reject 163.172.15.249:80, reject 163.172.205.124:80, reject 172.217.16.174:80, reject 173.214.243.68:80, reject 178.162.198.170:80, reject 182.176.96.104:80, reject 190.115.18.99:80, reject 194.109.206.212:80, reject 200.252.60.65:80, reject 203.119.215.109:80, reject 208.83.20.20:80, reject 212.27.63.35:80, reject 213.174.131.126:80, reject 213.184.225.53:80, reject 23.236.62.147:80, reject 5.79.69.137:80, reject 52.86.122.241:80, reject 62.149.140.105:80, reject 62.210.16.61:80, reject 72.52.4.119:80, reject 78.46.94.151:80, reject 82.209.230.66:80, reject 88.208.18.245:80, reject 88.208.52.134:80, reject 88.214.203.52:80, reject 93.184.220.29:80, reject 95.107.48.115:80, reject 95.211.212.148:80, reject 104.20.7.177:8080, reject 198.251.81.243:8080, reject 67.228.177.23:8080, reject 212.47.240.70:8888, reject 146.20.147.247:993, reject 217.146.190.234:993, reject 77.238.185.51:993, reject 96.114.157.78:993, accept *:53, accept *:79, accept *:80, accept *:110, accept *:119, accept *:194, accept *:220, accept *:389, accept *:443, accept *:464, accept *:465, accept *:531, accept *:543, accept *:544, accept *:563, accept *:587, accept *:636, accept *:706, accept *:749, accept *:853, accept *:873, accept *:902, accept *:903, accept *:904, accept *:981, accept *:989, accept *:990, accept *:991, accept *:992, accept *:993, accept *:994, accept *:995, accept *:1194, accept *:1220, accept *:1293, accept *:1533, accept *:1677, accept *:1723, accept *:1755, accept *:1863, accept *:1883, accept *:2095, accept *:2096, accept *:2102, accept *:2103, accept *:2104, accept *:3128, accept *:3690, accept *:4321, accept *:4643, accept *:5050, accept *:5190, accept *:5222, accept *:5223, accept *:5269, accept *:5280, accept *:6660, accept *:6661, accept *:6662, accept *:6663, accept *:6664, accept *:6665, accept *:6666, accept *:6667, accept *:6668, accept *:6669, accept *:6679, accept *:6697, accept *:7777, accept *:8008, accept *:8074, accept *:8080, accept *:8082, accept *:8232, accept *:8233, accept *:8332, accept *:8333, accept *:8443, accept *:8883, accept *:8888, accept *:9418, accept *:11371, accept *:19294, accept *:19638, accept *:50002, accept *:64738, reject *:*
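The exit policy summary shown on metrics.t.o comes from Onionoo, and the current (though not historical) value can be pulled straight from its details endpoint. A minimal sketch, standard library only - the search term is the relay nickname from this thread:

  import json
  import urllib.request

  # query Onionoo's live "details" document; this only shows the current
  # exit_policy_summary - historical values need archived data
  url = "https://onionoo.torproject.org/details?search=zwiebeltoralf"
  with urllib.request.urlopen(url) as resp:
      data = json.load(resp)

  for relay in data.get("relays", []):
      print(relay["nickname"], relay["fingerprint"])
      # a reject-all exit shows up here as {'reject': ['1-65535']}
      print("  summary:", relay.get("exit_policy_summary"))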
On 11/8/18 9:12 PM, nusenu wrote:
2018-11-06 21:00 UTC
are you sure this is UTC?
ick, it was 21:00 CET (the drop may even have started at 20:00 CET), but obviously that is an hour ahead of UTC
I did not look at the underlying descriptor data but onionoo data suggests that an exit policy change occurred which could have caused the change in connection counts.
indeed, I added networks to the reject list at that time, but only 2 /8 (class A) nets - but I will check, of course.
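Relatedly, several entries in policy [2] above pair a host address with a /8 mask, e.g. "reject 104.244.42.80/8:443". Assuming Tor masks these the usual CIDR way, each such rule covers the entire class A network on that port, not just the single host. Python's ipaddress module shows the effective network:

  import ipaddress

  # strict=False masks off the host bits, mirroring what a /8 suffix on
  # a host address does: the rule widens to the whole class A network
  rule = "104.244.42.80/8"  # from policy [2]: reject 104.244.42.80/8:443
  print(ipaddress.ip_network(rule, strict=False))  # -> 104.0.0.0/8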
I'm still surprised that you do not have more connections, since even non-exits have more than 1k concurrent connections - unless you are talking about specific connections only?
I can try to check with "ExitRelay 0" - for now I have downgraded to 0.3.4.9 to test that version.
Hi,
There are two likely possibilities here:
On 9 Nov 2018, at 06:17, Toralf Förster toralf.foerster@gmx.de wrote:
On 11/8/18 9:12 PM, nusenu wrote:
2018-11-06 21:00 UTC
are you sure this is UTC?
ick, it was 21:00 CET (the drop may even have started at 20:00 CET), but obviously that is an hour ahead of UTC
1. If your exit's DNS fails, it will reject all exit requests in its descriptor.
I did not look at the underlying descriptor data but onionoo data suggests that an exit policy change occurred which could have caused the change in connection counts.
indeed, I added networks to the reject list at that time, but only 2 /8 (class A) nets - but I will check, of course.
2. If you reject enough IP addresses in your exit policy:
If your exit blocks enough /8 networks, then its exit policy summary becomes reject all.
If the exit policy summary is too long, then it is truncated to a list of accept ports. (That doesn't seem to have happened here.)
Separately, if your exit doesn't exit to at least one /8 on ports 80 and 443, it loses the Exit flag: https://gitweb.torproject.org/torspec.git/tree/dir-spec.txt#n2531
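To make point 2 concrete - a back-of-the-envelope sketch, assuming the cutoff from my reading of dir-spec / tor's policy summary code: a port is only listed as accepted in the summary if at most 2^25 addresses (two /8 networks, private space excluded) reject it. Policy [2] rejects 159.0.0.0/8 and 169.0.0.0/8 on all ports, which alone is exactly 2^25 addresses, so the remaining broad rejects push every port over the cutoff:

  import ipaddress

  REJECT_CUTOFF = 2**25  # two /8 networks - assumed summary threshold

  # the all-ports rejects from policy [2] outside RFC1918 space (the small
  # 169.254.0.0/16 overlap inside 169/8 is ignored in this rough count);
  # strict=False normalizes entries that have host bits beyond the mask
  rejects = ["159.0.0.0/8", "169.0.0.0/8", "194.9.149.0/24",
             "162.218.232.15/21", "210.182.0.0/19", "103.59.156.0/22"]

  total = sum(ipaddress.ip_network(r, strict=False).num_addresses
              for r in rejects)
  print(total, "addresses rejected on every port; cutoff is", REJECT_CUTOFF)
  if total > REJECT_CUTOFF:
      print("no port qualifies -> summary collapses to reject 1-65535")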
I'm still surprised that you do not have more connections, since even non-exits have more than 1k concurrent connections - unless you are talking about specific connections only?
I can try to check with "ExitRelay 0" - for now I have downgraded to 0.3.4.9 to test that version.
T
teor:
- If your exit's DNS fails, it will reject all exit requests in its descriptor.
are you saying that https://trac.torproject.org/projects/tor/ticket/21989 is already implemented and released in an alpha version?
the affected relays show 0% failure rate at Arthur's DNS check page https://arthuredelstein.net/exits/
On 9 Nov 2018, at 20:18, nusenu nusenu-lists@riseup.net wrote:
teor:
- If your exit's DNS fails, it will reject all exit requests in its descriptor.
are you saying that https://trac.torproject.org/projects/tor/ticket/21989 is already implemented and released in an alpha version?
This ticket is not yet implemented.
But some kinds of DNS failures will cause Tor exits to reject all exit traffic, see ServerDNSTestAddresses and ServerDNSDetectHijacking in: https://www.torproject.org/docs/tor-manual.html.en
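As a rough illustration of the hijacking idea (a hypothetical sketch, not tor's actual implementation - tor tests the names configured in ServerDNSTestAddresses): resolve a few well-known names and flag the resolver if everything comes back as one and the same answer, the way captive portals and hijacking resolvers typically respond:

  import socket

  # hypothetical test names - tor uses its ServerDNSTestAddresses setting
  TEST_NAMES = ["www.torproject.org", "www.google.com", "www.example.com"]

  answers = {}
  for name in TEST_NAMES:
      try:
          answers[name] = sorted({ai[4][0]
                                  for ai in socket.getaddrinfo(name, 80)})
      except socket.gaierror:
          answers[name] = None  # outright failure is also suspicious

  print(answers)
  distinct = {tuple(a) for a in answers.values() if a}
  if len(distinct) <= 1:
      print("all test names resolve alike -> resolver looks hijacked")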
the affected relays show 0% failure rate at Arthur's DNS check page https://arthuredelstein.net/exits/
I am not sure if this DNS check checks for hijacking.
But it looks like the issue was an overlong or overbroad exit policy.
T
On 11/9/18 12:43 AM, teor wrote:
- If you reject enough IP addresses in your exit policy:
If your exit blocks enough /8 networks, then its exit policy summary becomes reject all.
If the exit policy summary is too long, then it is truncated to a list of accept ports. (That doesn't seem to have happened here.)
Separately, if your exit doesn't exit to at least one /8 on ports 80 and 443, it loses the Exit flag: https://gitweb.torproject.org/torspec.git/tree/dir-spec.txt#n2531
I ran the relays as non-exits overnight, this morning removed a bunch of rather rarely used ports together with a few /8 networks from the exit policy, and restarted both - the issue is now gone here AFAICT.
Thx for the hints (I'm still watching the DNSSEC traffic here).