Thanks, Nick!
On 9/13/19 4:24 PM, Nick Mathewson wrote:
On Fri, Sep 13, 2019 at 2:05 PM Steve Snyder swsnyder@snydernet.net wrote:
Given the multiple compression types supported (none, lzma, zlib, zstd), what is the order of preference for runtime use?
Put another way, which compression method(s) should be supported to get optimal runtime performance from a Tor node?
For big objects like consensuses or consensus diffs that are sent over and over, relays prefer to use whichever compression method has the highest compression ratio -- that's lzma2, then zstd, then zlib, then none. Lzma2 (aka xz) is more expensive to compute, but the relays only need to compute it once per compressed object, and then they can send it over and over.
For smaller objects that are compressed in a stream (descriptors and microdescriptors), relays will not use xz, since it would be too expensive to recompute for every stream. They'll prefer zstd, then zlib, then none.
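To make that ordering concrete, here is a minimal sketch of the selection logic as described above. This is not Tor's actual code (which is C); the function name, the preference lists, and the is_streamed flag are illustrative assumptions.

```python
# Hypothetical sketch of the preference order described above.

# Best-to-worst for objects compressed once and served many times
# (consensuses, consensus diffs): xz wins on ratio, cost is paid once.
CACHED_OBJECT_PREFERENCE = ["lzma", "zstd", "zlib", "none"]

# Best-to-worst for objects compressed on the fly per stream
# (descriptors, microdescriptors): xz is skipped as too expensive.
STREAMED_OBJECT_PREFERENCE = ["zstd", "zlib", "none"]

def choose_method(supported, is_streamed):
    """Pick the most-preferred compression method that is supported."""
    preference = STREAMED_OBJECT_PREFERENCE if is_streamed else CACHED_OBJECT_PREFERENCE
    for method in preference:
        if method in supported:
            return method
    return "none"
```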
So if you want to save bandwidth above all, you should enable all compression algorithms.
If you want to save CPU above all, you should enable all compression algorithms except xz.
If you want to save bandwidth and CPU, I _think_ that enabling all the compression algorithms will result in Tor making good choices (as described above). But I'd appreciate benchmarks if anybody has tried it both ways to find out.
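To make the tradeoff concrete, here is how the hypothetical helper sketched above would behave in the two configurations (again, illustrative only, not Tor's real behavior or API):

```python
# With everything enabled, big cached objects get xz; streams get zstd.
choose_method({"lzma", "zstd", "zlib", "none"}, is_streamed=False)  # -> "lzma"
choose_method({"lzma", "zstd", "zlib", "none"}, is_streamed=True)   # -> "zstd"

# With xz left out (the save-CPU case), cached objects fall back to zstd.
choose_method({"zstd", "zlib", "none"}, is_streamed=False)          # -> "zstd"
```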
cheers,