Hello, relatively new relay operator here.
I originally started running a relay on a Scaleway/Online.net VPS that had extra capacity. Then I saw the line in the Tor FAQ about how they, OVH, and Hetzner host the vast majority of relay capacity, and decided to hunt around for other cheap VPS providers to give the network more capacity and diversity.
One of these relays, 'kima' (fingerprint 54A35E582F9E178542ECCFA48DBE14F401729969), is on ASN 11878, in a datacenter in Chicago, USA.
It has plenty of bandwidth available to it -- iperf3 output between it and a public iperf3 server, and between it and my two other relays, is at the end of this mail. The summary is that I seem to have in excess of 100 Mbps of bandwidth available.
However, the bwauths are consistently giving it poor scores, and as such I'm not seeing any traffic even after several days:

    faravahar: bw=88
    moria1:    bw=47
    gabelmoo:  bw=74
    bastet:    bw=2
    longclaw:  bw=1
The relay in question seems well-connected to the Internet, and is even relatively close to many of these bwauths: 7 hops and ~2ms away from faravahar, 12 hops and 25ms away from moria1, 10 hops and 52ms away from bastet...
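In case it helps anyone debugging something similar: the per-bwauth values above come from the authorities' votes, but the aggregate view is easy to poll from the public Onionoo API. A minimal sketch, assuming the requests library (the API endpoint and field names are real; the script itself is just illustrative):

    import requests

    FINGERPRINT = "54A35E582F9E178542ECCFA48DBE14F401729969"  # kima

    resp = requests.get(
        "https://onionoo.torproject.org/details",
        params={"lookup": FINGERPRINT,
                "fields": "nickname,consensus_weight,measured"},
        timeout=30,
    )
    for relay in resp.json()["relays"]:
        print(relay["nickname"],
              relay.get("consensus_weight"),
              "measured" if relay.get("measured") else "unmeasured")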
Is this a problem with my relay, or something fishy with the bwauth network? What can I do about this, if anything?
Thanks! -Jimmy
== iperf3 output: ==
Connecting to host iperf.he.net, port 5201
[  5] local 107.152.35.167 port 44742 connected to 216.218.207.42 port 5201
[ ID] Interval           Transfer     Bitrate         Retr  Cwnd
[  5]   0.00-1.00   sec  40.2 MBytes   337 Mbits/sec    0   5.80 MBytes
[  5]   1.00-2.00   sec  62.5 MBytes   524 Mbits/sec    0   5.80 MBytes
[  5]   2.00-3.00   sec  62.5 MBytes   524 Mbits/sec    0   5.80 MBytes
[  5]   3.00-4.00   sec  62.5 MBytes   524 Mbits/sec    0   5.80 MBytes
[  5]   4.00-5.00   sec  62.5 MBytes   524 Mbits/sec    0   5.80 MBytes
[  5]   5.00-6.00   sec  62.5 MBytes   524 Mbits/sec    0   5.80 MBytes
[  5]   6.00-7.00   sec  62.5 MBytes   524 Mbits/sec    0   5.80 MBytes
[  5]   7.00-8.00   sec  62.5 MBytes   524 Mbits/sec    0   5.80 MBytes
[  5]   8.00-9.00   sec  62.5 MBytes   524 Mbits/sec    0   5.80 MBytes
[  5]   9.00-10.00  sec  62.5 MBytes   524 Mbits/sec    0   5.80 MBytes
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-10.00  sec   603 MBytes   506 Mbits/sec    0   sender
[  5]   0.00-10.00  sec   603 MBytes   506 Mbits/sec        receiver

iperf Done.

Connecting to host iperf.he.net, port 5201
Reverse mode, remote host iperf.he.net is sending
[  5] local 107.152.35.167 port 44746 connected to 216.218.207.42 port 5201
[ ID] Interval           Transfer     Bitrate
[  5]   0.00-1.00   sec  16.8 MBytes   141 Mbits/sec
[  5]   1.00-2.00   sec  24.2 MBytes   203 Mbits/sec
[  5]   2.00-3.00   sec  26.9 MBytes   226 Mbits/sec
[  5]   3.00-4.00   sec  25.1 MBytes   210 Mbits/sec
[  5]   4.00-5.00   sec  21.3 MBytes   179 Mbits/sec
[  5]   5.00-6.00   sec  17.1 MBytes   144 Mbits/sec
[  5]   6.00-7.00   sec  17.0 MBytes   142 Mbits/sec
[  5]   7.00-8.00   sec  17.8 MBytes   149 Mbits/sec
[  5]   8.00-9.00   sec  18.4 MBytes   154 Mbits/sec
[  5]   9.00-10.00  sec  18.7 MBytes   157 Mbits/sec
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-10.00  sec   210 MBytes   176 Mbits/sec  336   sender
[  5]   0.00-10.00  sec   203 MBytes   171 Mbits/sec        receiver

iperf Done.

Connecting to host dangelo.nucleosynth.space, port 5201
[  5] local 107.152.35.167 port 51112 connected to 185.234.72.43 port 5201
[ ID] Interval           Transfer     Bitrate         Retr  Cwnd
[  5]   0.00-1.00   sec  6.80 MBytes  57.0 Mbits/sec    0   3.49 MBytes
[  5]   1.00-2.00   sec  28.8 MBytes   241 Mbits/sec    0   6.01 MBytes
[  5]   2.00-3.00   sec  27.5 MBytes   231 Mbits/sec    0   6.01 MBytes
[  5]   3.00-4.00   sec  28.8 MBytes   241 Mbits/sec    0   6.01 MBytes
[  5]   4.00-5.00   sec  28.8 MBytes   241 Mbits/sec    0   6.01 MBytes
[  5]   5.00-6.00   sec  27.5 MBytes   231 Mbits/sec    0   6.01 MBytes
[  5]   6.00-7.00   sec  27.5 MBytes   231 Mbits/sec    0   6.01 MBytes
[  5]   7.00-8.00   sec  30.0 MBytes   252 Mbits/sec    0   6.01 MBytes
[  5]   8.00-9.00   sec  27.5 MBytes   231 Mbits/sec    0   6.01 MBytes
[  5]   9.00-10.00  sec  27.5 MBytes   231 Mbits/sec    0   6.01 MBytes
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-10.00  sec   261 MBytes   219 Mbits/sec    0   sender
[  5]   0.00-10.11  sec   261 MBytes   216 Mbits/sec        receiver

iperf Done.

Connecting to host dangelo.nucleosynth.space, port 5201
Reverse mode, remote host dangelo.nucleosynth.space is sending
[  5] local 107.152.35.167 port 51116 connected to 185.234.72.43 port 5201
[ ID] Interval           Transfer     Bitrate
[  5]   0.00-1.00   sec  5.58 MBytes  46.8 Mbits/sec
[  5]   1.00-2.00   sec  17.6 MBytes   148 Mbits/sec
[  5]   2.00-3.00   sec  24.0 MBytes   202 Mbits/sec
[  5]   3.00-4.00   sec  29.2 MBytes   245 Mbits/sec
[  5]   4.00-5.00   sec  26.4 MBytes   222 Mbits/sec
[  5]   5.00-6.00   sec  29.1 MBytes   244 Mbits/sec
[  5]   6.00-7.00   sec  27.1 MBytes   227 Mbits/sec
[  5]   7.00-8.00   sec  28.2 MBytes   237 Mbits/sec
[  5]   8.00-9.00   sec  27.7 MBytes   233 Mbits/sec
[  5]   9.00-10.00  sec  27.8 MBytes   233 Mbits/sec
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-10.11  sec   246 MBytes   204 Mbits/sec    0   sender
[  5]   0.00-10.00  sec   243 MBytes   204 Mbits/sec        receiver

iperf Done.

Connecting to host mcnulty.nucleosynth.space, port 5201
[  5] local 107.152.35.167 port 48354 connected to 51.158.165.51 port 5201
[ ID] Interval           Transfer     Bitrate         Retr  Cwnd
[  5]   0.00-1.00   sec  6.66 MBytes  55.8 Mbits/sec    0   3.54 MBytes
[  5]   1.00-2.00   sec  28.8 MBytes   241 Mbits/sec    0   6.03 MBytes
[  5]   2.00-3.00   sec  28.8 MBytes   241 Mbits/sec    0   6.03 MBytes
[  5]   3.00-4.00   sec  27.5 MBytes   231 Mbits/sec    0   6.03 MBytes
[  5]   4.00-5.00   sec  28.8 MBytes   241 Mbits/sec    0   6.03 MBytes
[  5]   5.00-6.00   sec  27.5 MBytes   231 Mbits/sec    0   6.03 MBytes
[  5]   6.00-7.00   sec  28.8 MBytes   241 Mbits/sec    0   6.03 MBytes
[  5]   7.00-8.00   sec  28.8 MBytes   241 Mbits/sec    0   6.03 MBytes
[  5]   8.00-9.00   sec  28.8 MBytes   241 Mbits/sec    0   6.03 MBytes
[  5]   9.00-10.00  sec  27.5 MBytes   231 Mbits/sec    0   6.03 MBytes
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-10.00  sec   262 MBytes   219 Mbits/sec    0   sender
[  5]   0.00-10.10  sec   261 MBytes   217 Mbits/sec        receiver

iperf Done.

Connecting to host mcnulty.nucleosynth.space, port 5201
Reverse mode, remote host mcnulty.nucleosynth.space is sending
[  5] local 107.152.35.167 port 48358 connected to 51.158.165.51 port 5201
[ ID] Interval           Transfer     Bitrate
[  5]   0.00-1.00   sec  5.25 MBytes  44.0 Mbits/sec
[  5]   1.00-2.00   sec  11.8 MBytes  98.6 Mbits/sec
[  5]   2.00-3.00   sec  13.2 MBytes   111 Mbits/sec
[  5]   3.00-4.00   sec  9.09 MBytes  76.3 Mbits/sec
[  5]   4.00-5.00   sec  7.99 MBytes  67.0 Mbits/sec
[  5]   5.00-6.00   sec  7.68 MBytes  64.5 Mbits/sec
[  5]   6.00-7.00   sec  8.18 MBytes  68.6 Mbits/sec
[  5]   7.00-8.00   sec  8.74 MBytes  73.3 Mbits/sec
[  5]   8.00-9.00   sec  8.22 MBytes  69.0 Mbits/sec
[  5]   9.00-10.00  sec  8.26 MBytes  69.3 Mbits/sec
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-10.10  sec  91.1 MBytes  75.6 Mbits/sec  633   sender
[  5]   0.00-10.00  sec  88.4 MBytes  74.1 Mbits/sec        receiver

iperf Done.
An update on my relay 'kima' ($54A35E582F9E178542ECCFA48DBE14F401729969) --
Eventually I did get assigned more weight; the relay is currently at 4600.
Along the way I think I discovered one potential problem with the bwauth bootstrapping process, at least for sbws. (I'm not sure about torflow.)
When sbws is constructing a two-hop measurement circuit to run a test, it tries to pick an exit that has at least twice the consensus weight of the current relay under test: https://github.com/torproject/sbws/blob/master/sbws/core/scanner.py#L216
So in this case, sbws could have picked any exit that is not flagged BadExit, has an acceptable ExitPolicy, and has a consensus weight of at least, well, 2. That's not a lot.
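For reference, a minimal sketch of that selection rule as I understand it (paraphrased from the linked scanner.py; the function and attribute names here are illustrative, not sbws's actual identifiers):

    def eligible_helper_exits(exits, relay_under_test_weight):
        """Exits usable as the second hop of a measurement circuit.

        The helper must have at least twice the consensus weight of the
        relay under test -- so for a relay bootstrapping at weight 1,
        any exit with weight >= 2 qualifies.
        """
        min_weight = 2 * relay_under_test_weight
        return [e for e in exits
                if not e.is_bad_exit          # skip BadExit-flagged relays
                and e.exit_policy_ok          # must allow the test fetch
                and e.consensus_weight >= min_weight]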
As it turns out, something like 10% of exits advertise under 600 KByte/s of bandwidth, so from this weight=1 bootstrap scenario it seems pretty easy to get paired with an exit that will give poor test results.
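That figure is easy to re-derive from Onionoo, if anyone wants to check it. A rough sketch, again assuming the requests library (Onionoo's parameters and fields are real; advertised_bandwidth is reported in bytes/second):

    import requests

    THRESHOLD = 600 * 1000  # 600 KByte/s, in bytes/second

    resp = requests.get(
        "https://onionoo.torproject.org/details",
        params={"flag": "Exit", "running": "true",
                "fields": "advertised_bandwidth"},
        timeout=30,
    )
    relays = resp.json()["relays"]
    slow = [r for r in relays
            if r.get("advertised_bandwidth", 0) < THRESHOLD]
    print("%d/%d exits (%.1f%%) advertise < 600 KByte/s"
          % (len(slow), len(relays), 100.0 * len(slow) / len(relays)))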
Perhaps bwauth path selection should also choose its testing pair from exits/relays that meet a certain absolute minimum of consensus weight or advertised bandwidth?
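Concretely, the change could be as small as taking a max with an absolute floor -- something like this (EXIT_MIN_CONSENSUS_WEIGHT is an invented name and value, not an existing sbws setting):

    EXIT_MIN_CONSENSUS_WEIGHT = 1000  # illustrative floor, not from sbws

    def min_helper_weight(relay_under_test_weight):
        # Keep the existing relative 2x rule, but never accept a helper
        # below an absolute minimum, so weight=1 relays aren't paired
        # with the slowest exits during bootstrap.
        return max(2 * relay_under_test_weight, EXIT_MIN_CONSENSUS_WEIGHT)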
Best, -Jimmy
Hi,
Thanks for reporting this issue, and sorry it's taken us a while to get back to you. Many of us have been on leave over the holidays, and we're still catching up.
On 4 Jan 2020, at 10:18, trusting.mcnulty@protonmail.com wrote:
> An update on my relay 'kima' ($54A35E582F9E178542ECCFA48DBE14F401729969) --
>
> Eventually I did get assigned more weight; the relay is currently at 4600.
>
> Along the way I think I discovered one potential problem with the bwauth bootstrapping process, at least for sbws. (I'm not sure about torflow.)
Torflow's partitions have a similar issue, but it's actually worse: a relay can get stuck in a low-bandwidth partition forever.
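(For anyone unfamiliar with torflow, here is a conceptual sketch of that feedback loop -- the partition boundaries and numbers are invented, and this is not torflow's actual code:)

    PARTITION_BOUNDS = [50, 500, 5000]  # made-up consensus-weight cutoffs

    def partition(weight):
        # Index of the bandwidth partition a relay is measured in.
        return sum(weight > b for b in PARTITION_BOUNDS)

    def measured_bandwidth(true_capacity, helper_capacity):
        # A two-hop measurement can't go faster than its slower hop.
        return min(true_capacity, helper_capacity)

    # A fast relay (true capacity 10000) that bootstraps at weight 1
    # lands in partition 0, gets measured through partition-0-sized
    # helpers, and so keeps scoring low -- which keeps it in partition 0.
    weight = 1
    for _ in range(3):
        helper_capacity = 40  # a typical partition-0 peer (illustrative)
        weight = measured_bandwidth(10000, helper_capacity)
        print("partition", partition(weight), "weight", weight)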
> When sbws is constructing a two-hop measurement circuit to run a test, it tries to pick an exit that has at least twice the consensus weight of the current relay under test: https://github.com/torproject/sbws/blob/master/sbws/core/scanner.py#L216
>
> So in this case, sbws could have picked any exit that is not flagged BadExit, has an acceptable ExitPolicy, and has a consensus weight of at least, well, 2. That's not a lot.
>
> As it turns out, something like 10% of exits advertise under 600 KByte/s of bandwidth, so from this weight=1 bootstrap scenario it seems pretty easy to get paired with an exit that will give poor test results.
>
> Perhaps bwauth path selection should also choose its testing pair from exits/relays that meet a certain absolute minimum of consensus weight or advertised bandwidth?
I've opened a ticket for this issue: https://trac.torproject.org/projects/tor/ticket/33009
We'll try to resolve it before we deploy any more sbws instances.
T