When running two non-exit nodes, configured as a single family with no other members, and using identical bandwidth settings, is it to be expected that only one of the nodes ever obtains the guard flag? The node uptimes are pretty much the same as well, but consensus weight differs significantly. I don't really understand why that is, given what I read about node life cycles.
-Ralph
100% normal. Welcome to tor. No, no clue why ;)
Markus
Sent from my iPad
On 15.09.2016 18:40, Markus Koch wrote:
> 100% normal. Welcome to tor. No, no clue why ;)
I was contemplating possible security considerations behind this. One particular person or organization responsible for the administration of multiple guards, when guards are sensitive because users connect to them directly... That sort of thing.
The alternative might be messed up node configurations, so I thought I'd better ask. ;-)
-Ralph
On Thu, 15 Sep 2016 19:39:07 +0200 Ralph Seichter tor-relays-ml@horus-it.de wrote:
> On 15.09.2016 18:40, Markus Koch wrote:
>> 100% normal. Welcome to tor. No, no clue why ;)
> I was contemplating possible security considerations behind this. One particular person or organization responsible for the administration of multiple guards, when guards are sensitive because users connect to them directly... That sort of thing.
> The alternative might be messed up node configurations, so I thought I'd better ask. ;-)
It is normal to run multiple nodes in one family and have most or all of them get the Guard flag. I don't see why two specifically should be any different (unless you mean both on the same IP?).
I think he meant that they don't get the flag at the same time. I set up some relays, and the "lifetime of a relay" is a nice indicator, but relays behave differently. One relay needed 14 days to become stable, and I am 100% sure that the server and network were online 24/7. Just wait and see :)
Markus
On 15.09.16 19:43, Roman Mamedov wrote:
> It is normal to run multiple nodes in one family and have most or all of them get the Guard flag.
I don't see this happening. I would think that weeks of uninterrupted uptime should mean both nodes qualify, but only one has the Guard flag. The nodes are on separate machines, with IP addresses dissimilar enough that I would not expect issues on that account.
-Ralph
On Thu, 15 Sep 2016 20:34:54 +0200 Ralph Seichter tor-relays-ml@horus-it.de wrote:
> On 15.09.16 19:43, Roman Mamedov wrote:
>> It is normal to run multiple nodes in one family and have most or all of them get the Guard flag.
> I don't see this happening. I would think that weeks of uninterrupted uptime should mean both nodes qualify, but only one has the Guard flag. The nodes are on separate machines, with IP addresses dissimilar enough that I would not expect issues on that account.
You should post both fingerprints, or even just the Atlas links directly; maybe someone will have more ideas on why this could happen. One possibility is that you're on the lower brink of the Guard bandwidth requirements, and one relay qualifies by chance while the other doesn't.
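For reference, each relay's fingerprint can be read straight from its data directory; a minimal sketch, assuming the common default path (adjust for your system):
# Print the relay fingerprint (the path may differ depending on distribution/config).
cat /var/lib/tor/fingerprint
# The corresponding Atlas page is https://atlas.torproject.org/#details/<FINGERPRINT>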
On 15.09.16 20:57, Roman Mamedov wrote:
> You should post both fingerprints, or even just the Atlas links directly; maybe someone will have more ideas on why this could happen.
1) https://atlas.torproject.org/#details/0C3D5E19E3C75B505C8ACD26F89DCA2DF97055...
2) https://atlas.torproject.org/#details/790910748A9B5F0EB455273FF42A0DFA3E7ACD...
I'm not an expert, but #2 sure looks weird from the graphs alone. I also can't understand why its consensus weight is approximately 1/30 of #1's value.
-Ralph
The Advertised Bandwidth is significantly lower on TorRelay02HORUS too. Let me quote teor from another recent thread; I think the same info is helpful here:
-- begin quote --
Your relay reports a bandwidth based on the amount of traffic it has sustained in any 10 second period over the past day. You can also set a maximum advertised bandwidth on your relay. (Don't do this if you're trying to pick up more traffic.) Five bandwidth authorities measure each relay each week, and report how fast it is. Each of these factors can restrict the amount of bandwidth that the network assigns to your relay.
Here's one way of testing what your relay is capable of:
Run a Tor client as close to your relay as possible:
tor DataDirectory /tmp/tor.$$ SOCKSPort [IPv4:]10000 EntryNodes your-relay-name
Then download a large file using port 10000 as a socks proxy.
That will give you some idea of how much traffic your relay can sustain, but it's worth noting that each client is limited to about 1 Mbps (I think - I can't find the manual page entry).
-- end quote --
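Spelled out as commands, the test described above might look roughly like this; the relay nickname, scratch directory, and download URL are placeholders:
# Throwaway Tor client that prefers the relay under test as its entry node.
mkdir -p /tmp/tor-test
tor DataDirectory /tmp/tor-test SOCKSPort 127.0.0.1:10000 EntryNodes TorRelay02HORUS
# In a second shell: fetch a large file through that SOCKS port and watch the
# transfer rate curl reports.
curl --socks5-hostname 127.0.0.1:10000 -o /dev/null https://example.com/large-file.bin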
From a quick glance, it seems that TorRelay02HORUS just isn't providing the same bandwidth as TorRelay01HORUS. There could be many reasons for this, including hardware, other nodes on the same network rack at your host, upstream bandwidth for the datacenter, peering between the node and the bandwidth authorities, etc.
None of this is unusual. As I have said many times, when spinning up new relays, I often find it helpful to bring up many at the same time (ideally using automation like Ansible), find which ones perform best, keep those and tear down the others.
On 15.09.16 21:21, Green Dream wrote:
> The Advertised Bandwidth is significantly lower on TorRelay02HORUS too.
Indeed, even though bandwidth settings are identical on both nodes:
BandwidthRate 96 MBytes
BandwidthBurst 112 MBytes
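For context, a minimal sketch of how these options might sit in each node's torrc; the nickname, port, and fingerprints below are placeholders, not the actual values:
Nickname TorRelay01HORUS
ORPort 9001
# Non-exit relay, no local SOCKS listener
ExitRelay 0
SocksPort 0
BandwidthRate 96 MBytes
BandwidthBurst 112 MBytes
# Each relay lists both fingerprints so the two nodes form one family
MyFamily $FINGERPRINT_RELAY_01,$FINGERPRINT_RELAY_02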
Arm shows me that node #1 currently has upload/download of around 600 Mb/s, while node #2 is only at around 35 Mb/s. I'll try the measurement you suggested later; it will require some preparation first.
> From a quick glance, it seems that TorRelay02HORUS just isn't providing the same bandwidth as TorRelay01HORUS. There could be many reasons for this, including hardware, other nodes on the same network rack at your host, upstream bandwidth for the datacenter, peering between the node and the bandwidth authorities, etc.
I've already checked memory and CPU; both nodes have ample capacity to spare in this regard. Maybe external factors are involved, like the ones you mentioned. I'll dig into this deeper. Looking at the graphs for #2, the spikes early in the life cycle surprise me most: data throughput was much higher, and for a brief period the Guard flag seems to have been awarded as well.
-Ralph
I have made some measurements. Downloading large files through Tor did not appear to show significant differences between both nodes, which could mean that Tor clients are either capped in general or the circuits were overall not fast enough to make my nodes reach their limits.
I also tried several iperf3 bandwidth measurements between the two Tor nodes and a third server which I know to be reliably fast. My Tor node #1 averaged 697 Mbits/sec, and #2 averaged 505 Mbits/sec -- while Tor was running on both nodes. I tried this with both IPv4 and IPv6, the latter being slightly faster.
It would appear that even though #2 has less bandwidth than #1, the available bandwidth of #2 is more than 10 times the bandwidth utilized by Tor on this machine. I still don't understand why TorRelay02HORUS is just limping along.
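For anyone who wants to repeat this kind of check, the iperf3 runs would look roughly like this; the server host name and test duration are placeholders:
# On the known-fast third server:
iperf3 -s
# On each Tor node, once over IPv4 and once over IPv6:
iperf3 -c fast-server.example.net -4 -t 30
iperf3 -c fast-server.example.net -6 -t 30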
-Ralph
On 16 Sep 2016, at 07:58, Ralph Seichter tor-relays-ml@horus-it.de wrote:
> I have made some measurements. Downloading large files through Tor did not appear to show significant differences between both nodes, which could mean that Tor clients are either capped in general or the circuits were overall not fast enough to make my nodes reach their limits.
> I also tried several iperf3 bandwidth measurements between the two Tor nodes and a third server which I know to be reliably fast. My Tor node #1 averaged 697 Mbits/sec, and #2 averaged 505 Mbits/sec -- while Tor was running on both nodes. I tried this with both IPv4 and IPv6, the latter being slightly faster.
> It would appear that even though #2 has less bandwidth than #1, the available bandwidth of #2 is more than 10 times the bandwidth utilized by Tor on this machine. I still don't understand why TorRelay02HORUS is just limping along.
A few things that affect consensus weight happen at random:
* client usage, which affects observed bandwidth, which limits consensus weight,
* the timing and pairing of bandwidth authority measurement, which limits consensus weight.
It's possible that by chance, 02 got a bad measurement a week ago, and 01 got a good one. Give it a few more weeks, and see if the measurements even out.
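One simple way to watch whether the measurements even out over the following weeks is to poll Onionoo (the service behind Atlas) for both relays; a rough sketch, assuming jq is installed and that the shared part of the nicknames works as a search string:
# Print consensus weight, advertised bandwidth and flags for both relays.
curl -s "https://onionoo.torproject.org/details?search=HORUS" | jq '.relays[] | {nickname, consensus_weight, advertised_bandwidth, flags}'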
Tim
Tim Wilson-Brown (teor)
teor2345 at gmail dot com
PGP C855 6CED 5D90 A0C5 29F6 4D43 450C BA7F 968F 094B
ricochet:ekmygaiu4rzgsk6n
xmpp: teor at torproject dot org