On 2 Oct 2015, at 01:19, Dhalgren Tor <dhalgren.tor@gmail.com> wrote:
On Thu, Oct 1, 2015 at 10:17 PM, Yawning Angel <yawning@schwanenlied.me> wrote:
Using iptables to drop packets is also going to add queuing delays, since cwnd will get decreased in response to the loss (CUBIC uses a beta of 0.2, IIRC).
Unfortunately true. The idea is to arrive empirically at a better result.
When saturated and rate-limiting, the relay sometimes performs so badly that connections time out. Consistent though less-than-amazing performance is better than erratic, sometimes-failing performance IMO. I tried a few exit relays that appear to be limited by 100MB physical links and have consensus weights around 60k (i.e. TCP congestion control is at work, though the load is not excessive), and they function much better.
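The erratic behavior described above follows from the multiplicative decrease that congestion control applies on every loss event. A minimal sketch, using the beta value of 0.2 mentioned earlier in the thread (the actual factor depends on the kernel's congestion-control implementation):

```python
# Illustrative sketch only: how repeated packet drops (e.g. from an
# iptables rate limit) collapse the congestion window multiplicatively.
# beta = 0.2 is the decrease factor cited in this thread for CUBIC;
# real kernels may use a different value.

def on_loss(cwnd: float, beta: float = 0.2) -> float:
    """Return the congestion window after one loss event."""
    return cwnd * (1.0 - beta)

cwnd = 100.0  # segments
for _ in range(3):  # three back-to-back loss events
    cwnd = on_loss(cwnd)

print(cwnd)  # 100 * 0.8**3 = 51.2 segments
```

A few consecutive drops roughly halve the window, and the sender then spends a long time growing it back, which is why throughput oscillates instead of settling at the configured limit.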
There may be less queuing delay (note: write() completes the moment the data is in the kernel-side outgoing buffer, so the delay may not be as apparent, and is somewhat harder to measure). It's your relay, so I don't care what you do; do whatever you think works.
That said, placing an emphasis on unit-less quantities generated by a measurement system that is currently held together by duct tape, string, and chewing gum seems rather pointless and counterproductive.
Really?
The consensus weight has a precise and predictable effect on the amount of traffic directed to the relay. So gaming the measurement system for a weight that yields the best-possible user experience is not "pointless." I am paying for this.
It does seem the system generating the measurements has problems, and if someone can look at this issue, that would seem "productive."
Still interested in hearing "a better idea."
We could modify the *Bandwidth* options to take TCP overhead into account. Alternately, we could modify the documentation to explicitly state that TCP overhead and name resolution on exits (and perhaps other overheads?) *aren't* taken into account by those options. This would tell relay operators to account for the TCP and DNS overheads of their particular setup when configuring the *Bandwidth* options, if the overhead is significant for them.
You suggested TCP overhead was 5%-13%; I can include that in the manual. Do we know what fraction of exit traffic is DNS requests? Are there any other overheads or additional traffic we should note while updating the manual? (Or would you suggest that we update the code? I'm not sure how much this actually helps: once deployed to all relays, the consensus weights for all relays that set a *Bandwidth* option would come out slightly lower, and other relays without *Bandwidth* options set would take up the load.)
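The documentation-only approach above amounts to operators deflating their configured rate by an overhead fraction. A hypothetical sketch of that arithmetic, using an 8% figure picked from the 5%-13% range quoted in this thread (the helper name and the chosen fraction are illustrative, not anything in torrc):

```python
# Hypothetical helper: deflate a raw link rate by an assumed
# protocol-overhead fraction before choosing a torrc *Bandwidth* value.
# The 5%-13% TCP overhead range comes from this thread; the real
# overhead depends on MTU, packet-size mix, and (on exits) DNS traffic.

def bandwidth_budget(link_rate_bytes: float, overhead: float = 0.08) -> float:
    """Return the payload rate left after subtracting overhead."""
    return link_rate_bytes * (1.0 - overhead)

# A 100 Mbit/s link carries 12,500,000 bytes/s raw:
print(bandwidth_budget(12_500_000, 0.08))  # 11500000.0 bytes/s
```

Under these assumptions an operator on a 100 Mbit/s link would configure roughly 11.5 MB/s rather than 12.5 MB/s, leaving headroom for TCP and DNS overhead.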
I've updated https://trac.torproject.org/projects/tor/ticket/17170 with the Exit DNS overhead.
Tim
Tim Wilson-Brown (teor)
teor2345 at gmail dot com PGP 968F094B
teor at blah dot im OTR CAD08081 9755866D 89E2A06F E3558B7F B5A9D14F