Date: Tue, 02 Jun 2015 11:58:25 -0400
From: 12xBTM <12xbtm@gmail.com>
This is a little write-up on the quality of data connections involving nodes, and some thoughts on that. As of today, the quality of a node is determined by its bandwidth, its uptime, and its not being malicious. Two major factors are missing from this, and I wanted to bounce some ideas around: latency and packet loss. These are fairly simple tests to perform on nodes, although they would increase the resource load on the bwauths. Nodes with high latency or heavy packet loss should be identified, and these two factors should be included in how nodes are measured.
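For example, a bare-bones latency/loss probe might look something like the sketch below (the relay address and port are placeholders, and a real test would presumably have to measure through a Tor circuit so it sees the path clients actually use, rather than a direct TCP path):

    import socket
    import statistics
    import time

    def connect_rtt_samples(host, port, samples=10, timeout=5.0):
        """Estimate latency by timing TCP connect() handshakes.

        Each connect() costs roughly one round trip, so repeated
        handshakes give a crude RTT distribution. Failed attempts
        are counted separately as a very rough proxy for loss.
        """
        rtts, failures = [], 0
        for _ in range(samples):
            start = time.monotonic()
            try:
                with socket.create_connection((host, port), timeout=timeout):
                    rtts.append(time.monotonic() - start)
            except OSError:
                failures += 1
        return rtts, failures

    # Placeholder relay address: substitute a real ORPort to try it.
    rtts, failures = connect_rtt_samples("relay.example.org", 9001)
    if rtts:
        print("median RTT: %.1f ms" % (1000 * statistics.median(rtts)))
    print("failed attempts: %d" % failures)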
High latency will increase the delay of a circuit, but will not appear in the consensus at all. A node can have enormous latency and hamper its circuits without anyone knowing. The same goes for a node that drops a lot of packets, or drops a lot of packets at a certain time of day (peak usage).
Do the bwauths only measure the time to transmit the data, or do they include the time to connect as well? Including the time to connect would be a measure of latency, particularly for smaller file transfers. Are the bwauths only using one file size to measure bandwidth? Using both a small and a large size would allow measuring mostly-latency and mostly-throughput with the current infrastructure.
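To make the small/large idea concrete, here's the back-of-the-envelope model with made-up numbers: if each fetch costs a fixed setup time plus size divided by throughput, then two fetches of different sizes give two equations in two unknowns:

    # Model: fetch_time = setup_latency + size / throughput
    # Hypothetical numbers; the bwauths' real pipeline differs.

    def solve_latency_throughput(size_small, time_small,
                                 size_large, time_large):
        # Throughput in bytes/second, setup latency in seconds.
        throughput = (size_large - size_small) / (time_large - time_small)
        setup_latency = time_small - size_small / throughput
        return setup_latency, throughput

    # Say 16 KiB downloads in 0.9 s and 1 MiB downloads in 3.1 s:
    lat, bw = solve_latency_throughput(16 * 1024, 0.9, 1024 * 1024, 3.1)
    print("setup latency ~%.2f s, throughput ~%.0f KiB/s" % (lat, bw / 1024))
    # -> setup latency ~0.87 s, throughput ~458 KiB/s

The small fetch is dominated by the setup term and the large fetch by the throughput term, which is why the pair separates them.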
Since, as far as I know, Tor connections use in-order delivery, dropping packets causes retransmits and backs up (parts of) the transmission. This reduces bandwidth as a side-effect. That reduction in bandwidth is currently measured by the bwauths - but, as you say, it isn't a direct measurement.
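For a rough sense of how strongly loss caps throughput on the TCP links between relays, the classic Mathis et al. (1997) approximation for steady-state TCP throughput is (MSS/RTT) * (C/sqrt(p)) with C ~ 1.22 - a blunt model, and Tor multiplexes many circuits over one TCP connection, so take it as illustration only:

    import math

    def mathis_throughput(mss_bytes, rtt_s, loss_rate):
        # Mathis et al. approximation: (MSS / RTT) * (C / sqrt(p)),
        # with C = sqrt(3/2) ~ 1.22. Steady-state TCP only.
        C = math.sqrt(3.0 / 2.0)
        return (mss_bytes / rtt_s) * (C / math.sqrt(loss_rate))

    # 1460-byte MSS, 100 ms RTT, 1% loss -> roughly 180 KB/s ceiling.
    print("%.0f bytes/s" % mathis_throughput(1460, 0.1, 0.01))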
In future, it would be great to have separate statistics for latency and packet loss - but this is perhaps a job for something more like Ooniprobe?
teor
teor2345 at gmail dot com
pgp 0xABFED1AC
https://gist.github.com/teor2345/d033b8ce0a99adbc89c5

teor at blah dot im
OTR D5BE4EC2 255D7585 F3874930 DB130265 7C9EBBC7