(Ugggggghhh, top posting. /rant)
On 2.6.15 12:26, teor wrote:
Date: Tue, 02 Jun 2015 11:58:25 -0400
From: 12xBTM 12xbtm@gmail.com
Since, as far as I know, Tor connections use in-order delivery, dropped packets will cause retransmits and back up (parts of) the transmission. This will reduce bandwidth as a side-effect. That reduction in bandwidth is what the bwauths currently measure - but, as you say, it isn't a direct measurement of loss.
All connections are over TCP, so loss plays a huge factor in available bandwidth. Measured performance will be utterly abysmal if loss is high, because congestion control uses loss as feedback and backs off quite aggressively (NB: Certain exotic variants that shouldn't be used with Tor are less aggressive here).
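To put a rough number on how aggressively loss limits TCP throughput, the classic Mathis et al. model bounds steady-state rate at (MSS / RTT) * C / sqrt(p), where p is the loss rate and C ≈ sqrt(3/2). This is a back-of-the-envelope sketch, not anything the bwauths actually compute; the segment size and RTT below are illustrative assumptions:

```python
import math

def mathis_throughput(mss_bytes, rtt_s, loss_rate, c=math.sqrt(3 / 2)):
    """Approximate steady-state TCP throughput in bytes/sec under random
    loss, per the Mathis et al. model: rate <= (MSS / RTT) * C / sqrt(p).
    Only meaningful for a non-zero loss rate."""
    if loss_rate <= 0:
        raise ValueError("model only applies when loss_rate > 0")
    return (mss_bytes / rtt_s) * c / math.sqrt(loss_rate)

# Assumed values: 1448-byte segments, 80 ms RTT.
low_loss = mathis_throughput(1448, 0.080, 0.0001)   # 0.01% loss
high_loss = mathis_throughput(1448, 0.080, 0.05)    # 5% loss
```

With those assumptions, going from 0.01% to 5% loss cuts the achievable rate by a factor of sqrt(500), i.e. more than 20x - which is why a lossy relay measures so much slower than its link speed suggests.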
On Tue, 02 Jun 2015 12:45:59 -0400 12xBTM 12xbtm@gmail.com wrote:
I once ran a node on a college network for just a few weeks. Then I found out that at peak hours, packet loss was >90%. My measured bandwidth never reflected that. Now, scale that up. An adversary runs a ton of nodes like that, and now has a large portion of circuits going through them. That will cripple the usability of those circuits for their users. The same goes for latency.
This is likely because your node's bandwidth never happened to be measured while loss was that high. The bwauths could attempt more frequent measurements here, but IIRC, even with the recent much-needed improvements, it takes quite a while to fully measure the entire network (and the new code is a dramatic improvement over the code in use when you were running your relay).
I don't see why measuring latency would fare any better in this regard, and of all the things that could be done wrt the bwauths, bolting on more features is not something I would personally consider important right now.
Regards,