commit af08b679715d8046aafcea89494d5bc4e0e96407
Author: Damian Johnson <atagar@torproject.org>
Date:   Fri Jan 22 08:46:56 2016 -0800
Adjusting benchmark for parsing with compression
Earlier I adjusted Stem's benchmarks to call parse_file() rather than the DescriptorReader. That was a ~5% improvement, but I didn't have a number for compression so I left that benchmark as-is. This, however, made it look like the compression delta for Python is particularly bad. Subtracting the flat overhead of the DescriptorReader from that benchmark...
uncompressed consensus: 876.09 ms -> 865.72 ms (-10.37 ms)
compressed consensus:   913.12 ms -> 902.75 ms (-10.37 ms)

---
 docs/tutorials/mirror_mirror_on_the_wall.rst | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
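For context, the two code paths being compared look roughly like the following. This is a minimal timing sketch, not the actual benchmark script: the consensus path is a placeholder (named cached-consensus so Stem can recognize its type by filename), and the descriptor type annotation is given explicitly for parse_file().

  import time

  from stem.descriptor import parse_file
  from stem.descriptor.reader import DescriptorReader

  # Placeholder path to an uncompressed consensus document.
  CONSENSUS_PATH = '/tmp/cached-consensus'

  # parse_file() reads the document directly...
  start = time.time()

  for entry in parse_file(CONSENSUS_PATH, descriptor_type = 'network-status-consensus-3 1.0'):
    pass

  print('parse_file() took %0.2f ms' % ((time.time() - start) * 1000))

  # ... while the DescriptorReader adds a flat amount of overhead on top of it.
  start = time.time()

  with DescriptorReader([CONSENSUS_PATH]) as reader:
    for entry in reader:
      pass

  print('DescriptorReader took %0.2f ms' % ((time.time() - start) * 1000))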
diff --git a/docs/tutorials/mirror_mirror_on_the_wall.rst b/docs/tutorials/mirror_mirror_on_the_wall.rst
index e5b423e..3f80db0 100644
--- a/docs/tutorials/mirror_mirror_on_the_wall.rst
+++ b/docs/tutorials/mirror_mirror_on_the_wall.rst
@@ -310,7 +310,7 @@ Few things to note about these benchmarks...
 * Metrics-lib and Stem can both read from compressed tarballs at a small
   performance cost. For instance, Metrics-lib can read an `lzma compressed
   <../faq.html#how-do-i-read-tar-xz-descriptor-archives>`_ consensus in
-  **255.76 ms** and Stem can do it in **913.12 ms**.
+  **255.76 ms** and Stem can do it in **902.75 ms**.

 So what does code with each of these look like?
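The tutorial line being updated concerns an lzma compressed consensus. As a rough sketch of reading one (the file path is a placeholder, and xz support comes from the standard lzma module on Python 3.3+ rather than anything Stem-specific), that can look like:

  import lzma

  from stem.descriptor import parse_file

  # Placeholder path to a single xz compressed consensus document. lzma.open()
  # handles the decompression and hands parse_file() a plain file object.
  with lzma.open('/tmp/cached-consensus.xz') as consensus_file:
    for entry in parse_file(consensus_file, descriptor_type = 'network-status-consensus-3 1.0'):
      print(entry.nickname)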