I am continually getting asked this question and then asked to back it up. I'm hoping the metrics team might be able to guide me here....
Firstly, while we don't have (public) datasets showing this, I am fairly certain that the public meek bridges have their bandwidth capped (to limit costs) and raising that cap would directly improve the speed of meek users. (Implying that the bottleneck the majority of the time is the meek transport.) Right?
Secondly, I know it's too simplistic to say "Our exits are operating at 85% of capacity, our guards are 65% and our middles at 40%" (for a host of reasons, including guard/middle overlap) but how close can we get to something simple that demonstrates more exits will help the (non-bridge) network the most? (If that is in fact the case!)
I might have more questions to relay, but for now these seem to be the main ones.
-tom
I wanna add one more thing to this list, and it's the difficulty of adding bridges. That's why we increased the number of default obfs4 bridges. I just quickly checked the live stats of mine, and they're all under heavy load, both traffic- and CPU-wise: tor is usually at 90% CPU on each instance, with ~200 Mbit/s of traffic on average.
If this is what all the other default bridge operators are seeing, then we're supplying far less than the demand, probably making some censored users unhappy.
They might think their adversary has gotten more advanced at blocking and is throttling obfs4 traffic. That might even be the case, and we have no way to measure it.
I don't know what the plans are for the long term, but in the short term I suppose we should encourage people who can run secure, high-speed, and reliable obfs4 bridges to do so, and have them added to Tor Browser and Orbot, especially if they're trusted by the community.
✌️
Nima Fatemi:
> I wanna add one more thing to this list, and it's the difficulty of adding bridges. That's why we increased the number of default obfs4 bridges. I just quickly checked the live stats of mine, and they're all under heavy load, both traffic- and CPU-wise: tor is usually at 90% CPU on each instance, with ~200 Mbit/s of traffic on average.
> If this is what all the other default bridge operators are seeing, then we're supplying far less than the demand, probably making some censored users unhappy.
> They might think their adversary has gotten more advanced at blocking and is throttling obfs4 traffic. That might even be the case, and we have no way to measure it.
> I don't know what the plans are for the long term, but in the short term I suppose we should encourage people who can run secure, high-speed, and reliable obfs4 bridges to do so, and have them added to Tor Browser and Orbot, especially if they're trusted by the community.
+1 - Katie
Hi Tom,
On 10/01/17 20:13, Tom Ritter wrote:
> I am continually getting asked this question and then asked to back it up. I'm hoping the metrics team might be able to guide me here....
I'll give this a try, but I wouldn't say that the metrics team is your best bet here. We're busy providing all the data, but we're spending way less time on looking at it than you would expect.
> Firstly, while we don't have (public) datasets showing this, I am fairly certain that the public meek bridges have their bandwidth capped (to limit costs) and raising that cap would directly improve the speed of meek users. (Implying that the bottleneck the majority of the time is the meek transport.) Right?
That's also my intuition, but I don't have the data you're looking for. David would know better.
> Secondly, I know it's too simplistic to say "Our exits are operating at 85% of capacity, our guards are 65% and our middles at 40%" (for a host of reasons, including guard/middle overlap) but how close can we get to something simple that demonstrates more exits will help the (non-bridge) network the most? (If that is in fact the case!)
I don't know an easy answer to this, but you could look at consensus bandwidth weights for a start:
https://gitweb.torproject.org/torspec.git/tree/dir-spec.txt#n2699
The latest consensus bandwidth weights are:
bandwidth-weights Wbd=0 Wbe=0 Wbg=4127 Wbm=10000 Wdb=10000 Web=10000 Wed=10000 Wee=10000 Weg=10000 Wem=10000 Wgb=10000 Wgd=0 Wgg=5873 Wgm=5873 Wmb=10000 Wmd=0 Wme=0 Wmg=4127 Wmm=10000
If you go add many more guards or exits, you should observe how those weights change.
You could also ask Mike for more details.
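To make those weights easier to read, here is a minimal Python sketch that parses the bandwidth-weights line above into a dict and computes how a Guard-flagged relay's bandwidth is split between positions. The W&lt;position&gt;&lt;flag&gt; naming and the 10000 scale (bwweightscale) are my reading of dir-spec, so treat the interpretation as an assumption to verify there.

```python
def parse_bandwidth_weights(line):
    """Turn 'bandwidth-weights Wbd=0 Wbe=0 ...' into {'Wbd': 0, ...}."""
    _, _, rest = line.partition(" ")
    return {k: int(v) for k, v in (kv.split("=") for kv in rest.split())}

weights = parse_bandwidth_weights(
    "bandwidth-weights Wbd=0 Wbe=0 Wbg=4127 Wbm=10000 Wdb=10000 "
    "Web=10000 Wed=10000 Wee=10000 Weg=10000 Wem=10000 Wgb=10000 "
    "Wgd=0 Wgg=5873 Wgm=5873 Wmb=10000 Wmd=0 Wme=0 Wmg=4127 Wmm=10000"
)

# Per dir-spec naming, Wgg is the weight for Guard-flagged relays in the
# guard position and Wmg for Guard-flagged relays in the middle position,
# both scaled by bwweightscale (10000 by default):
guard_share = weights["Wgg"] / 10000   # fraction spent in the guard position
middle_share = weights["Wmg"] / 10000  # fraction spent in the middle position
print(guard_share, middle_share)       # prints 0.5873 0.4127
```

Note how the two shares sum to 1.0: a Guard-flagged relay's bandwidth is fully allocated between the guard and middle positions, which is the spec's balancing constraint.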
> I might have more questions to relay, but for now these seem to be the main ones.
Sounds good. You should also ask relay operators on the tor-relays@ mailing list. Though asking on this list is a good start, too.
All the best, Karsten
On 17 January 2017 at 10:11, Karsten Loesing <karsten@torproject.org> wrote:
>> Secondly, I know it's too simplistic to say "Our exits are operating at 85% of capacity, our guards are 65% and our middles at 40%" (for a host of reasons, including guard/middle overlap) but how close can we get to something simple that demonstrates more exits will help the (non-bridge) network the most? (If that is in fact the case!)
> I don't know an easy answer to this, but you could look at consensus bandwidth weights for a start:
> https://gitweb.torproject.org/torspec.git/tree/dir-spec.txt#n2699
> The latest consensus bandwidth weights are:
> bandwidth-weights Wbd=0 Wbe=0 Wbg=4127 Wbm=10000 Wdb=10000 Web=10000 Wed=10000 Wee=10000 Weg=10000 Wem=10000 Wgb=10000 Wgd=0 Wgg=5873 Wgm=5873 Wmb=10000 Wmd=0 Wme=0 Wmg=4127 Wmm=10000
> If you go add many more guards or exits, you should observe how those weights change.
I added the weights with human-readable descriptions to https://consensus-health.torproject.org/
I see the following:
- Exits are never recommended for middle or guard positions
- Guards are equally recommended as non-Guards for exit positions
- Guards are recommended for middle, but not as highly as non-Guards
So I'm drawing the following conclusions:
- Exits are a bottleneck; if we had more exit bandwidth, we would see Exits start to be recommended (even at very low weights) for middle or guard positions
- We have enough Guards that we can use them as middle relays (albeit at a lower weight than non-Guards)
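Those readings can be written down as explicit checks against the posted weights. This is a small sketch; the weight names follow dir-spec's W&lt;position&gt;&lt;flag&gt; convention (d = Guard+Exit, e = Exit, g = Guard, m = unflagged) with the default 10000 scale, which is my interpretation of the spec.

```python
# The relevant weights from the latest consensus quoted above:
w = {"Wgd": 0, "Wmd": 0, "Wme": 0, "Wmg": 4127, "Wmm": 10000,
     "Weg": 10000, "Wem": 10000, "Wed": 10000, "Wgg": 5873}

# Exit-capable relays are never weighted into guard or middle positions:
assert w["Wgd"] == 0 and w["Wmd"] == 0 and w["Wme"] == 0

# Guard+Exit relays are allocated entirely to the exit position,
# consistent with exit bandwidth being the scarce resource:
assert w["Wed"] == 10000

# Guard-flagged relays do serve as middles, but at a lower weight
# than unflagged relays:
assert 0 < w["Wmg"] < w["Wmm"]
print("all checks pass")
```

If more exit bandwidth were added, the first group of weights is where you'd expect to see nonzero values appear.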
-tom
On Tue, Jan 17, 2017 at 05:11:12PM +0100, Karsten Loesing wrote:
>> Firstly, while we don't have (public) datasets showing this, I am fairly certain that the public meek bridges have their bandwidth capped (to limit costs) and raising that cap would directly improve the speed of meek users. (Implying that the bottleneck the majority of the time is the meek transport.) Right?
> That's also my intuition, but I don't have the data you're looking for. David would know better.
You can probably look for some patterns in past bandwidth history and compare them to the history of rate limit changes at https://trac.torproject.org/projects/tor/wiki/doc/meek#Users
Active bridges:
* https://atlas.torproject.org/#details/AA033EEB61601B2B7312D89B62AAA23DC3ED8A... (meek-azure)
* https://atlas.torproject.org/#details/F4AD82B2032EDEF6C02C5A529C42CFAFE51656... (meek-amazon, after November 2015)
Defunct bridge fingerprints:
* https://atlas.torproject.org/#details/3FD131B74D9A96190B1EE5D31E91757FADA1A4... (meek-amazon, before November 2015; #17473)
* https://atlas.torproject.org/#details/88F745840F47CE0C6A4FE61D827950B06F9E45... (meek-google, before May 2016)
For example, take:
* 2015-10-02 Rate-limited the meek-azure bridge to 1.1 MB/s. (Azure grant expired.)
and you can see a matching drop in bandwidth from 2.5 MB/s to 1.1 MB/s on the same date. Also,
* 2016-01-14 Increased meek-azure rate limit to 3 MB/s.
* 2016-01-16 Increased meek-amazon rate limit to 3 MB/s.
is really visible in the bandwidth graphs.
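To do that comparison programmatically, one could pull a bridge's bandwidth document from Onionoo and average it around each rate-limit change. The sketch below assumes the Onionoo bandwidth-document layout (a write_history entry carrying a "factor" plus normalized "values", with None for missing data points); check the Onionoo protocol spec before relying on that shape, and note the sample data here is hypothetical.

```python
def avg_bytes_per_sec(history):
    """Average write bandwidth (bytes/s) over one Onionoo history graph,
    skipping missing data points and rescaling by the document's factor."""
    vals = [v * history["factor"] for v in history["values"] if v is not None]
    return sum(vals) / len(vals)

# Hypothetical sample shaped like one entry of a bridge's write_history
# (e.g. the "1_month" graph): four-hour intervals, one gap.
sample = {"factor": 1000.0, "interval": 14400,
          "values": [2500, 2400, None, 1100, 1100]}

print(avg_bytes_per_sec(sample))  # prints 1775000.0 (bytes/s, ~1.8 MB/s)
```

Averaging a window before and after each dated rate-limit change in the wiki log would make drops like the 2015-10-02 one stand out without eyeballing the graphs.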
tor-project@lists.torproject.org