On Thu, Oct 1, 2015 at 10:17 PM, Yawning Angel <yawning@schwanenlied.me> wrote:
> Using IP tables to drop packets also is going to add queuing delays
> since cwnd will get decreased in response to the loss (CUBIC uses beta
> of 0.2 IIRC).
Unfortunately true. The idea is to arrive at a better result
empirically. When saturated and rate-limiting, the relay is sometimes
so bad that connections time out. Consistent but less-than-amazing
performance is better than erratic, sometimes-failing performance IMO.
I tried a few exit relays that appear to be limited by 100MB physical
links and have consensus weights around 60k (i.e. TCP congestion
control is at work though the load is not excessive), and they
function much better.
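For what it's worth, here is a toy model of the effect described
above: every drop forces a multiplicative decrease of cwnd, so
rate-limiting by dropping packets in the firewall trades a steady rate
for sawtooth throughput. The decrease factor, MSS, RTT and the linear
growth step below are illustrative assumptions, not the kernel's
actual CUBIC behaviour.

    # Toy model only: not the kernel's CUBIC implementation.  Each drop
    # triggers a multiplicative decrease of cwnd; between drops the
    # window grows roughly linearly per RTT (a simplification).
    MSS = 1500      # bytes per segment (assumed)
    RTT = 0.1       # seconds (assumed)
    BETA = 0.7      # fraction of cwnd kept after a loss (assumed)

    def rate_mbit(cwnd_segments):
        """Approximate sending rate for a given cwnd, in Mbit/s."""
        return cwnd_segments * MSS * 8 / RTT / 1e6

    cwnd = 100.0    # segments
    for rtt_round in range(10):
        if rtt_round == 3:      # a drop injected by the firewall
            cwnd *= BETA        # multiplicative decrease
        else:
            cwnd += 1.0         # rough per-RTT growth
        print(rtt_round, round(cwnd, 1), round(rate_mbit(cwnd), 2), "Mbit/s")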
> It may be less queuing delay (note: write() completes the moment data
> is in the outgoing buffer kernel side, so it may not be as apparent,
> and is somewhat harder to measure), and it's your relay so I don't care
> what you do, so do whatever you think works.
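Side note on measuring it: on Linux the bytes still sitting in a TCP
socket's send queue can be read with the SIOCOUTQ/TIOCOUTQ ioctl, so
the queue that write() hides is at least observable. A rough sketch,
assuming a Linux box and an already-connected socket:

    # Rough sketch, Linux only: report how much data is still queued in
    # the kernel send buffer for a connected TCP socket.  send()/write()
    # return as soon as the data is buffered, so this is one way to see
    # the queue they hide.
    import fcntl
    import socket
    import struct
    import termios

    def queued_bytes(sock: socket.socket) -> int:
        # TIOCOUTQ (SIOCOUTQ for sockets) fills an int with the number of
        # bytes still held in the socket send queue.
        raw = fcntl.ioctl(sock.fileno(), termios.TIOCOUTQ, struct.pack("i", 0))
        return struct.unpack("i", raw)[0]

Sampling that value while the relay is saturated should show the delay
that timing write() alone misses.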
> That said, placing an emphasis on unit-less quantities generated by a
> measurement system that is currently held together by duct tape,
> string, and chewing gum seems rather pointless and counter
> productive.
Really?
The consensus weight has a precise and predictable effect on the
amount of traffic directed to the relay, so gaming the measurement
system for a weight that yields the best possible user experience is
not "pointless." I am paying for this.
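To put a number on "precise and predictable": to a first approximation
a relay's share of client selections is its consensus weight divided
by the total weight in the consensus. The real computation also
applies per-position bandwidth-weights and flags, so treat this as a
back-of-the-envelope sketch with made-up numbers:

    # Back-of-the-envelope only: ignores bandwidth-weights, flags and
    # position (guard/middle/exit) restrictions.
    def expected_share(my_weight, total_weight):
        """Approximate fraction of client path selections."""
        return my_weight / total_weight

    # Hypothetical numbers, for illustration only.
    print(expected_share(60000.0, 80000000.0))  # -> 0.00075, i.e. ~0.075%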
It does seem that the system generating the measurements has a
problem, and if someone could look into that issue, it would seem
"productive."
Still interested in hearing "a better idea."