On Thu, Oct 1, 2015 at 5:12 PM, Moritz Bartl <moritz@torservers.net> wrote:
On 10/01/2015 06:28 PM, Dhalgren Tor wrote:
This relay appears to have the same problem: sofia https://atlas.torproject.org/#details/7BB160A8F54BD74F3DA5F2CE701E8772B84185...
This is one of ours, and works just fine and the way it's supposed to?
Certainly 'sofia' is working well enough, but it's clearly spending much of its time at or somewhat above the configured rate limit. This is sub-optimal for end-user latency because the relay delays traffic to enforce the rate limit. On this relay BandwidthBurst is unconfigured, and perhaps setting it to the same value as BandwidthRate will cause the authorities to slightly lower the rating and eliminate the saturated state.
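For illustration, the torrc change I have in mind is just the following (the 18 MBytes figure is only an example value carried over from the discussion of my own relay, not sofia's actual setting):

  # illustrative torrc fragment -- substitute the relay's actual rate
  BandwidthRate  18 MBytes
  BandwidthBurst 18 MBytes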
Your 18000000 is quite near the 16.5 MByte/s it is currently pushing since you must have changed something on Sept 26/27, so I don't really see the issue.
You are overlooking TCP/IP protocol bytes, which add between 5% and 13% to the data and are counted as billable traffic by providers. At 18M it's solidly over 100TB; at 16.5M it will consume about 97TB in 31 days.
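Rough arithmetic behind those figures, assuming traffic is roughly symmetric, decimal TB, a 31-day month, and the 5-13% overhead range above (about 10% as a middle value):

  16.5 MB/s * 86400 s/day * 31 days ~= 44.2 TB per direction
  44.2 TB * 2 directions            ~= 88.4 TB
  88.4 TB * 1.10 overhead           ~= 97 TB

  18.0 MB/s * 86400 s/day * 31 days ~= 48.2 TB per direction
  48.2 TB * 2 directions            ~= 96.4 TB
  96.4 TB * 1.05 to 1.13 overhead   ~= 101 to 109 TB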
As said before in this thread, the consensus weight is a unitless value that is relative to the rest of the network and of no 'external significance'.
YES, I understand this. Nowhere do I say I expect the consensus weight to correspond directly to BandwidthRate. What I SAID is that, based on comparative observation, the Dhalgren relay should be rated around 65000 to effect approximately 90% utilization of the 18M limit. THIS is supposedly the intended design objective of the bandwidth allocation system.
If my quick calculation isn't off, 18000000 gives you 42.4TB per direction, which means your relay will stay below the projected 100TB limit.
Add TCP/IP overhead. When determining the setting I am looking at the service provider's bandwidth consumption graph, as well as including TCP/IP overhead in the calculations.
How exactly do you determine that you see "too many connections"? Do you have any errors in the Tor log?
I determine this by:
1) watching the service provider bandwidth graph
2) watching the output of "SETEVENTS BW" on a control channel and observing that every one-second sample shows the relay flat-line saturated at BandwidthRate (a minimal control-port session for this is sketched after this list)
3) observing that statistics show elevated cell-queuing delays when the relay has been in the saturated state; these are the per-decile cell statistics from the relay's extra-info descriptor, with cell-time-in-queue in milliseconds, e.g.

cell-queued-cells 2.59,0.11,0.01,0.00,0.00,0.00,0.00,0.00,0.00,0.00
cell-time-in-queue 107,25,3,3,4,3,7,4,1,7
4) explicitly browsing through the relay using "SETCONF ExitNodes=" and observing that latency is at minimum degraded, and sometimes terrible, when the relay is overrated/saturated, while latency is extraordinarily good when the relay is not in a saturated, rate-limited state.
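For anyone who wants to reproduce (2) and (4), a minimal control-port session looks roughly like this. It assumes ControlPort 9051 with password authentication is already configured ("mypassword" is a placeholder), and the BW numbers shown are illustrative, not captured from sofia:

  $ telnet 127.0.0.1 9051
  AUTHENTICATE "mypassword"
  250 OK
  SETEVENTS BW
  250 OK
  650 BW 18000000 18000000   <- one-second samples pinned at BandwidthRate
  650 BW 17999804 18000000
  ...
  SETCONF ExitNodes=sofia    <- then browse via the relay and compare page-load latency
  250 OK

ExitNodes also accepts a relay fingerprint, which is less ambiguous than a nickname.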