Another odd thing is that atlas will say it's down for, say, three hours, but there are up to 1900 connections thumping away from other relays. The node has been running longer, but I restarted it thinking something was wrong. LOL
But like I said, it's not a problem - I'm waiting to see if the VM provider has an issue, and at 5 euros per month I'll run others in other countries for that money. Shame it's too expensive to run one here in OZ. *SIGH*
----- Original Message -----
From: "Keepyourprivacy" keepyourprivacy@protonmail.ch To: tor-relays@lists.torproject.org Sent: Saturday, August 12, 2017 12:18:03 AM Subject: Re: [tor-relays] New Here
That's strange. Usually you shouldn't see more traffic than advertised.
Atlas is saying:
Bandwidth rate: 300 KiB/s
Bandwidth burst: 500 KiB/s
Observed bandwidth: 638.69 KiB/s
The strange thing is that the observed bandwidth is higher than the one you are advertising. Maybe someone else can say something about this.
The "spikes" can come from the measurements, but usually they aren't so big that you should see spikes this high.
The only thing I saw is that you are using RelayBandwidthRate instead of just BandwidthRate in your config file. I don't know if this makes any difference?
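For reference, the two options limit different things: BandwidthRate applies to all traffic the tor process handles, while RelayBandwidthRate applies only to relayed traffic. A sketch of a torrc combining both (the values are illustrative, not a recommendation):

```
## Overall cap for this tor instance (all traffic)
BandwidthRate 300 KBytes
BandwidthBurst 500 KBytes

## Separate, optional cap for relayed traffic only
RelayBandwidthRate 300 KBytes
RelayBandwidthBurst 500 KBytes
```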
-------- Original Message -------- Subject: Re: [tor-relays] New Here Local Time: 11 August 2017 4:10 PM UTC Time: 11 August 2017 2:10 PM From: paul@coffswifi.net To: tor-relays@lists.torproject.org
nickname: coffswifi
Provider: 1&1
torrc:
RunAsDaemon 1
Address 82.223.27.82
Nickname coffswifi
RelayBandwidthRate 300 KBytes
RelayBandwidthBurst 500 KBytes
ContactInfo Paul <paul AT coffswifi dot net>
DirPort 9030
ExitRelay 1

I'm running the default exit policy...
From: "Keepyourprivacy" keepyourprivacy@protonmail.ch To: tor-relays@lists.torproject.org Sent: Saturday, August 12, 2017 12:03:20 AM Subject: Re: [tor-relays] New Here
Thanks for running an exit! Mind sharing your torrc configuration? Maybe something is wrong in there...
Which provider are you using? Your Tor relay nickname would be helpful too.
-------- Original Message -------- Subject: [tor-relays] New Here Local Time: 11 August 2017 3:50 PM UTC Time: 11 August 2017 1:50 PM From: paul@coffswifi.net To: tor-relays@lists.torproject.org
Hi all,
I have been running a mirror for a couple of years and haven't had much time to dedicate to an exit node, but I have finally committed to it. I found a provider that doesn't seem to have a cap on data, and it will cost me about five euros a month.
The question I have is twofold. First, I find that atlas.torproject.org sometimes says the node is down, but it's running and connected. Secondly, how does the bandwidth throttling work? I have set it to 300kbits with 500kbit burst, but I can see the in/out traffic staying over 1mb for large periods of time and bursting to 7/8mb. It's not a problem, just curious.
Regards,
Paul
torproject.coffswifi.net
_______________________________________________
tor-relays mailing list
tor-relays@lists.torproject.org
https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-relays
Maybe your provider's network goes down sometimes. So even if the server is not restarted, it is "offline" because other nodes can't connect to you. Maybe you should use uptimerobot.com and let it ping your server every three minutes to see if it's unreachable sometimes. Atlas usually has correct stats.
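A plain ping also won't tell you whether the ORPort itself accepts connections; a minimal TCP reachability check could look like the sketch below (the address in the comment is a placeholder):

```python
import socket

def orport_reachable(host, port, timeout=5.0):
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example (placeholder address; use your relay's IP and ORPort):
# orport_reachable("192.0.2.1", 9001)
```

Run from cron every few minutes, it can alert you whenever the port stops answering, independently of what atlas shows.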
Thanks for the link... nifty!
On Aug 11, 2017 09:36, "Keepyourprivacy" keepyourprivacy@protonmail.ch wrote:
Maybe your provider's network goes down sometimes. So even if the server is not restarted, it is "offline" because other nodes can't connect to you. Maybe you should use uptimerobot.com and let it ping your server every three minutes to see if it's unreachable sometimes. Atlas usually has correct stats.
Hello,
I observed the same thing on our new exit. Atlas says it's down - but actually it is working. Did you test if you could exit through your relay at this time?
best regards
Dirk
On 11.08.2017 16:28, Paul Templeton wrote:
Another odd thing is that atlas will say it's down for, say, three hours, but there are up to 1900 connections thumping away from other relays. The node has been running longer, but I restarted it thinking something was wrong. LOL
But like I said, it's not a problem - I'm waiting to see if the VM provider has an issue, and at 5 euros per month I'll run others in other countries for that money. Shame it's too expensive to run one here in OZ. *SIGH*
On Sun, Aug 13, 2017 at 08:51:05PM +0200, Dirk wrote:
I observed the same thing on our new exit. Atlas says it's down - but actually it is working. Did you test if you could exit through your relay at this time?
Another thing to check is whether your relay is listed in the consensus documents for those hours.
https://collector.torproject.org/recent/relay-descriptors/consensuses/
That will help you narrow down whether it's an atlas (or onionoo) issue, or if you're actually missing from the consensus documents.
Note that the consensus file names use UTC for their time zone.
And if your relay *is* missing from the consensus, you can look at the votes for it at https://collector.torproject.org/recent/relay-descriptors/votes/ and try to figure out which authorities found it Running and which didn't.
--Roger
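Scanning a downloaded consensus for your relay can be scripted; here is a sketch that only looks at the "r" router-status lines (the download step is left as a comment, and simplified):

```python
def find_relay(consensus_text, nickname):
    """Return the "r" router-status line for nickname, or None if absent."""
    for line in consensus_text.splitlines():
        parts = line.split()
        # Router lines start with "r", followed by nickname and identity digest
        if len(parts) > 2 and parts[0] == "r" and parts[1] == nickname:
            return line
    return None

# A consensus file fetched from collector.torproject.org could be passed in
# as text, e.g.:
# import urllib.request
# text = urllib.request.urlopen(consensus_url).read().decode()
# print(find_relay(text, "coffswifi"))
```

If this returns None for a consensus covering the hour in question, the relay really was missing, and the votes are the next place to look.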
Hello Roger,
You're lucky - I have an example now: DigiGesTor2e1, Mon Aug 14 21:20:26 CEST 2017 => according to atlas, offline.
The process is running:
PID USER PR NI VIRT RES SHR S %CPU %MEM ZEIT+ BEFEHL
38629 debian-+ 20 0 502848 271376 52056 S 15.0 7.0 389:15.45 /usr/bin/tor --defaults-torrc /usr/share/tor/tor-service-defaults-torrc -f /etc/tor/torrc1 --hush
Notices log shows:
.......
Aug 14 21:01:58.000 [warn] assign_to_cpuworker failed. Ignoring.
[the same warning repeated many times]
.......
Missing here [1], Available here [2]
r DigiGesTor2e1 iEh73ZgL9ucgku5pDoxRwKpKU4w KN1gQJlf9jVmKcQKR/lwutD5Qew 2017-08-14 13:26:19 176.10.104.243 443 80
s Exit Fast V2Dir Valid
v Tor 0.3.0.10
pr Cons=1-2 Desc=1-2 DirCache=1 HSDir=1-2 HSIntro=3-4 HSRend=1-2 Link=1-4 LinkAuth=1,3 Microdesc=1-2 Relay=1-2
w Bandwidth=10000 Measured=29000
p accept 20-21,23,43,53,79-81,88,110,143,194,220,389,443,464,531,543-544,554,563,636,706,749,873,902-904,981,989-995,1194,1220,1293,1500,1533,1677,1723,1755,1863,2082-2083,2086-2087,2095-2096,2102-2104,3128,3389,3690,4321,4643,5050,5190,5222-5223,5228,5900,6660-6669,6679,6697,8000,8008,8074,8080,8087-8088,8332-8333,8443,8888,9418,9999-10000,11371,12350,19294,19638,23456,33033,64738
id ed25519 tkWDeVfF3g8r1BEDs2l35f7A1LI/6c5vcOph9sygynQ
m 13,14,15 sha256=PzOlOHNhVHDGschpvYdly5zq0HM9mA2oYwwxT/D26zE
m 16,17 sha256=kgEOyPBRQ0BXpRTsE+ek++pCMVPQrf5XGYeyPWoLe2Q
m 18,19,20 sha256=8OBpmY69O/GZNW25S1IqwE3OrahHdtHK4mQOcYl/c74
m 22,23,24,25,26 sha256=g1513UlD4dm3/2cjP5uNJdXELbFCQ1+uRFBbqSAfTG0
[1] https://collector.torproject.org/recent/relay-descriptors/consensuses/2017-0... [2] https://collector.torproject.org/recent/relay-descriptors/votes/2017-08-14-1...
The problem here is: since the process is running, neither my scripts nor systemd will try to restart it.
best regards
Dirk
On 14.08.2017 05:08, Roger Dingledine wrote:
On Sun, Aug 13, 2017 at 08:51:05PM +0200, Dirk wrote:
I observed the same thing on our new exit. Atlas says it's down - but actually it is working. Did you test if you could exit through your relay at this time?
Another thing to check is whether your relay is listed in the consensus documents for those hours.
https://collector.torproject.org/recent/relay-descriptors/consensuses/
That will help you narrow down whether it's an atlas (or onionoo) issue, or if you're actually missing from the consensus documents.
Note that the consensus file names use UTC for their time zone.
And if your relay *is* missing from the consensus, you can look at the votes for it at https://collector.torproject.org/recent/relay-descriptors/votes/ and try to figure out which authorities found it Running and which didn't.
--Roger
On 15 Aug 2017, at 05:26, Dirk tor-relay.dirk@o.banes.ch wrote:
Hello Roger,
You're lucky - I have an example now: DigiGesTor2e1, Mon Aug 14 21:20:26 CEST 2017 => according to atlas, offline.
The process is running:
PID USER PR NI VIRT RES SHR S %CPU %MEM ZEIT+ BEFEHL
38629 debian-+ 20 0 502848 271376 52056 S 15.0 7.0 389:15.45 /usr/bin/tor --defaults-torrc /usr/share/tor/tor-service-defaults-torrc -f /etc/tor/torrc1 --hush
Notices log shows: ....... Aug 14 21:01:58.000 [warn] assign_to_cpuworker failed. Ignoring. ......
These are not very useful, please provide distinct log messages next time.
Missing here [1], Available here [2]
Next time it happens, check the relay flags here: https://consensus-health.torproject.org/consensus-health.html
It will tell you how many authorities could connect to your relay in the past hour or so.
...
[1] https://collector.torproject.org/recent/relay-descriptors/consensuses/2017-0... [2] https://collector.torproject.org/recent/relay-descriptors/votes/2017-08-14-1...
The problem here is: since the process is running, neither my scripts nor systemd will try to restart it.
This is probably a network failure on your relay or provider, or your relay is running out of some resource (like file descriptors).
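If file descriptors are the resource running out, the limit can be raised with a systemd drop-in; a sketch follows (the unit name and path depend on how tor was packaged on your system):

```
# /etc/systemd/system/tor@default.service.d/limits.conf  (example path)
[Service]
LimitNOFILE=65536
```

Then run systemctl daemon-reload and restart the unit for the new limit to take effect.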
T
--
Tim Wilson-Brown (teor)

teor2345 at gmail dot com
PGP C855 6CED 5D90 A0C5 29F6 4D43 450C BA7F 968F 094B
ricochet:ekmygaiu4rzgsk6n