On 27 Nov 2018, at 07:46, Nick Mathewson nickm@alum.mit.edu wrote:
On Wed, Nov 21, 2018 at 5:10 PM Michael Rogers michael@briarproject.org wrote:
On 20/11/2018 19:28, Nick Mathewson wrote:
Hi! I don't know if this will be useful or not, but I'm wondering if you've seen this ticket: https://trac.torproject.org/projects/tor/ticket/28335
The goal of this branch is to create a "dormant mode" where Tor does not run any but the most delay- and rescheduling-tolerant of its periodic events. Tor enters this mode if a controller tells it to, or if (as a client) it passes long enough without user activity. When in dormant mode, it doesn't disconnect from the network, and it will wake up again if the controller tells it to, or it receives a new client connection.
The comments on the pull request (https://github.com/torproject/tor/pull/502) ...
One of the comments mentions a break-even point for consensus diffs, where it costs less bandwidth to fetch a fresh consensus than all the diffs from the last consensus you know about. Are diffs likely to remain available up to the break-even point, or are there times when it would be cheaper to use diffs, but you have to fetch a fresh consensus because some of the diffs have expired?
This shouldn't be a problem: directory caches will (by default) keep diffs slightly beyond the break-even point.
(I think. I haven't measured this in a while.)
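To make the break-even idea concrete, here is a rough sketch of the client-side decision. The sizes and the function name are made-up placeholders for illustration, not measured values or identifiers from the tor source:

```python
def should_fetch_diffs(consensuses_behind, avg_diff_size, full_size):
    """Return True if applying diffs is expected to cost less
    bandwidth than downloading a fresh full consensus."""
    # One diff per missed consensus; past the break-even point,
    # the diffs together outweigh a single full download.
    return consensuses_behind * avg_diff_size < full_size

# Hypothetical sizes: a 500 KB compressed consensus, 25 KB average diff.
print(should_fetch_diffs(4, 25_000, 500_000))   # a few diffs: cheaper than full
print(should_fetch_diffs(30, 25_000, 500_000))  # many diffs: fetch fresh instead
```

The question above is whether caches keep diffs at least up to the point where this comparison flips.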
We measured the number of cached consensuses after a collector outage in August:
https://lists.torproject.org/pipermail/tor-relays/2018-August/015850.html
Some relays had ~16 consensuses, others had ~24 or ~48.
If a relay is using lzma, zstd, and gzip compression, then it will only store about 16 past consensuses: https://github.com/torproject/tor/blob/672e26cad837e368dfe39d53546b85afd69ad...
If my sums are correct, that is: 128 files / 2 consensus flavours / (1 consensus + 3 compressed diffs) = 16 consensuses
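Spelling that division out (the function name is hypothetical, not from the tor source; it just restates the arithmetic above):

```python
def consensuses_cached(cache_max_num, flavours=2, diff_compressions=3):
    # Each cached consensus occupies one file for the consensus itself
    # plus one file per compressed diff format (lzma, zstd, gzip).
    files_per_consensus = 1 + diff_compressions
    return cache_max_num // flavours // files_per_consensus

print(consensuses_cached(128))  # current default: 16 consensuses
print(consensuses_cached(196))  # proposed value: 24 consensuses
```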
Should we increase cache_max_num to 196? Or should we increase cache_max_num only when the sandbox isn't being used?
T