On January 27, 2019 1:07:12 PM UTC, notatorserver <notatorserver@protonmail.com> wrote:
> I am running tor 0.4.0.1-alpha inside a docker container with a memory limit (6GB). Tor runs out of memory and aborts or crashes periodically.
Tor assumes that 64-bit machines have 8 GB of RAM, unless the relevant APIs return a lower value.
How does Docker implement memory limits? Does it modify the values returned by the Linux RAM APIs?
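As far as I know, Docker enforces its limits through cgroups, and cgroups don't change what the standard RAM APIs report. You could check inside the container (these are the cgroup v1 paths, which is what Docker uses by default at the moment; they differ under cgroup v2):

    # /proc/meminfo typically still reports the *host's* total RAM:
    grep MemTotal /proc/meminfo
    # ...while the Docker limit is enforced through the cgroup:
    cat /sys/fs/cgroup/memory/memory.limit_in_bytes

If the first command shows the host's full RAM, Tor is probably sizing itself for that, not for your 6GB limit.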
> Usually I see an error message like this:
>
>     Could not mmap file "/var/lib/tor/data/diff-cache/1149": Out of memory
>     ...(repeated some number of times)
Please paste your log on https://paste.debian.net
How many times are these messages repeated? Is the diff number the same, or different? How many files are in that directory?
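Something like this should answer all three questions (I'm assuming your log is at /var/log/tor/notices.log; adjust the paths for your container):

    # How many mmap failures, and are they for the same diff or different ones?
    grep -c 'Could not mmap' /var/log/tor/notices.log
    grep -o 'diff-cache/[0-9]*' /var/log/tor/notices.log | sort | uniq -c
    # How many files are in the diff cache?
    ls /var/lib/tor/data/diff-cache | wc -l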
> followed by a segfault.
> Sometimes I see a message:
>
>     Out of memory on malloc(). Dying.
>
> followed by an abort.
> Am I correct to assume the diff-cache is the issue here? Looking at the files, they all seem pretty small (~500 KB). Is some badly behaved client requesting 12,000 of these diffs, causing my relay to mmap them all at once?
How many times are these messages repeated?
> Or is it just expensive to generate them, so that generating 30-60 at once is enough to use all the memory?
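For what it's worth, the arithmetic is at least consistent with your first theory: 12,000 diffs × ~500 KB each ≈ 6 GB, which matches your container limit.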
The diffs are generated once an hour, then cached to serve later requests. Do the errors happen at generation time, or request time?
> Are there any config options to reject expensive queries or otherwise limit concurrency?
Try MaxMemInQueues 4 GB in your torrc.
If that doesn't work, try NumCPUs 2 as well, and please let us know either way, because we'd like to fix these kinds of bugs.
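Concretely, that would look something like this in your torrc (4 GB is just a conservative starting point below your 6GB container limit, not a magic number):

    # Try to keep Tor's queue and buffer memory below 4 GB
    MaxMemInQueues 4 GB
    # If that isn't enough, also reduce the number of worker threads
    NumCPUs 2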
T
--
teor