Hello there! Very interesting information you shared here, thank you. While I don't have any further information to rely on, I would still like to share a few simple thoughts.
You're welcome.
Let's assume for a second that there are no errors in the code and that Tor is not using a significantly suboptimal network topology.
I would argue against adjusting the traffic limit as you suggested, for the following reasons: - Pending more data, this ratio might change arbitrarily at any point in time, causing either under-utilization of the network (which is what you have set out to prevent) or over-utilization, which will cause trouble for relay admins, such as throttling by providers.
I clearly see where you're coming from. While I *certainly* do not propose a large-scale adjustment of relay configurations (that isn't my intention, actually), I am still seriously considering making such adjustments, certainly with some offset on the observed figures to guard against over-utilization, for our very own relays. We're looking forward to receiving some sponsored VPSs that we would like to dedicate/contribute to the Tor network, and I'm somewhat concerned, i.e. I think it's unfortunate, that an expected 1/3 of our monthly traffic limit(s) wouldn't be used after all.
That said, please have a brief look at https://metrics.torproject.org/bandwidth.png?start=2010-09-19&dpi=72&... , i.e. a two-year sample of advertised bandwidth (bwadv) vs. bandwidth history (bwhist) for the whole Tor network. It shows that the "1/3 bwadv vs. bwhist discrepancy" I observed previously is actually a rather consistent trend.
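For our own relays, the adjustment I have in mind is really just simple arithmetic. Here is a minimal sketch of what I mean, purely illustrative: the function name, the 1 TiB monthly cap, and the 10% safety offset are my own assumptions, and the 2/3 utilization figure is just the rough ratio read off the graph above.

# Back-of-the-envelope sketch: estimate how much higher to configure a
# relay's rate so that a monthly traffic cap actually gets consumed, given
# an observed ~2/3 utilization of advertised bandwidth.
# All names and numbers below are illustrative assumptions, not Tor defaults.

SECONDS_PER_MONTH = 30 * 24 * 3600

def adjusted_rate_kib_s(monthly_cap_gib: float,
                        observed_utilization: float = 2.0 / 3.0,
                        safety_offset: float = 0.10) -> float:
    """Rate (KiB/s) to configure so the monthly cap is roughly used up.

    monthly_cap_gib      -- provider traffic allowance per month (GiB)
    observed_utilization -- fraction of advertised bandwidth actually used
                            (~2/3 per the bwadv vs. bwhist graph)
    safety_offset        -- fraction held back to avoid overshooting the cap
    """
    # Rate that would exactly exhaust the cap if it were fully utilized.
    flat_rate = monthly_cap_gib * 1024 * 1024 / SECONDS_PER_MONTH  # KiB/s
    # Scale up by the inverse of the observed utilization, then hold back
    # a margin so over-utilization doesn't trigger provider throttling.
    return flat_rate / observed_utilization * (1.0 - safety_offset)

if __name__ == "__main__":
    # Hypothetical 1 TiB/month VPS allowance.
    print(f"{adjusted_rate_kib_s(1024):.0f} KiB/s")

The point is simply to divide the flat rate by the observed utilization instead of configuring the flat rate itself, while keeping a margin so the cap is never overshot.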
- It's important to have more capacity than needed; this allows better stability and helps with dealing with sudden increases in Tor use, a somewhat common event.
Same here. I wouldn't want to argue against your assumptions, as they make perfect sense from a general, common-sense perspective. Still, looking at the two-year sample graph, I cannot really see such network behavior, i.e. sub-samples where the gap between bwadv and bwhist narrows significantly at times.
Of course, this is not to take anything away from your observation and initiative. Who knows, you might have found a serious problem with the network.
I don't think there's a serious problem with the network. I just wanted to ask the list for the actual reasoning behind what is, from my perspective, a rather *large* discrepancy. You named three (actually two) things:
1. Under-/over-utilization: the trend seems to be pretty constant, and with some offset calculated in, I would expect to avoid over-utilization (just for our own relays, certainly) and thus potential throttling by providers of our very own contribution to the network.
2. Spare capacity for traffic peaks: while I totally understand the argument, I just can't see it happening, at least within the last two years, looking at the sample graph.
Thanks a lot for your feedback! Please don't get me wrong for counter-arguing. I'm just considering options for our own relays and clearly see the general validity of your argumentation. Also, I wouldn't want to exclude the possibility that I'm missing something important here, and would like to ask everyone concerned to tell me where my (potential) misconception actually lies ;-)
Cheers, Thomas