[tor-bugs] #24737 [Core Tor/Tor]: Recommend a MaxMemInQueues value in the Tor man page

Tor Bug Tracker & Wiki blackhole at torproject.org
Tue Jan 2 03:26:48 UTC 2018


#24737: Recommend a MaxMemInQueues value in the Tor man page
----------------------------+------------------------------------
 Reporter:  starlight       |          Owner:  (none)
     Type:  defect          |         Status:  new
 Priority:  Medium          |      Milestone:  Tor: 0.3.2.x-final
Component:  Core Tor/Tor    |        Version:
 Severity:  Normal          |     Resolution:
 Keywords:  doc, tor-relay  |  Actual Points:
Parent ID:                  |         Points:  0.5
 Reviewer:                  |        Sponsor:
----------------------------+------------------------------------

Comment (by teor):

 Moved conversation from #22255.

 Replying to [ticket:22255#comment:80 starlight]:
 > Replying to [ticket:22255#comment:79 teor]:
 > > I'd still like to see someone repeat this analysis with 0.3.2.8-rc,
 and post the results to #24737.
 > > It's going to be hard for us to close that ticket without any idea of
 the effect of our changes.
 >
 > I'm not willing to run a newer version until one is declared LTS, but
 I can say that even when my relay is not under attack, memory consumption
 goes to 1.5G with the 1G max queue setting.  It seems to me that the 2x
 max queues memory consumption is a function of the overheads associated
 with tor daemon queues and related processing, including malloc slack
 space.

 Saying 2x is a useful guide, but I think we can do better, because I see
 very different behaviour on systems with a lot more RAM.

 This is how the overheads work on my 0.3.0 relay with 8 GB per tor
 instance, and a high MaxMemInQueues:
 * 512 MB per instance with no circuits
 * 256 - 512 MB extra per instance with relay circuits
 * 256 - 512 MB extra per instance with exit streams
 The RAM usage will occasionally spike to a few gigabytes, but I've never
 seen it all used.
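
 For concreteness, those figures add up like this (a rough Python sketch
 using only the numbers above; none of them are guarantees):

   # Observed per-instance usage on the 0.3.0 relay described above
   base_mb = 512                    # no circuits
   relay_circuits_mb = (256, 512)   # extra with relay circuits
   exit_streams_mb = (256, 512)     # extra with exit streams

   low = base_mb + relay_circuits_mb[0] + exit_streams_mb[0]
   high = base_mb + relay_circuits_mb[1] + exit_streams_mb[1]
   print("typical usage: %d-%d MB per instance" % (low, high))
   # -> typical usage: 1024-1536 MB per instance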

 So I think we should document the following RAM usage and MaxMemInQueues
 settings:
 * Relays: minimum 768 MB of RAM per instance; set MaxMemInQueues to
 (RAM per instance - 512 MB) * N
 * Exits: minimum 1 GB of RAM per instance; set MaxMemInQueues to
 (RAM per instance - 768 MB) * N

 For all versions without the destroy cell patch (0.3.2.7-rc and all
 current versions as of 1 January 2018), N should be 0.5 or lower. It's
 reasonable to expect destroy cell queues and other objects to take up
 approximately the same amount of RAM as the queues.

 For all versions with the destroy cell patch (0.3.2.8-rc and all versions
 released after 1 January 2018), N should be 0.75 or lower. It's reasonable
 to expect destroy cell queues and other objects to take up a third of the
 queue RAM.
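
 To make that rule of thumb concrete, here is a minimal Python sketch of
 the two formulas and N values above (the helper name and signature are
 mine, not anything in tor):

   def max_mem_in_queues_mb(ram_per_instance_mb, is_exit,
                            has_destroy_cell_patch):
       # Subtract the per-instance overhead (768 MB for exits, 512 MB
       # for non-exit relays), then scale by N: 0.75 with the destroy
       # cell patch (0.3.2.8-rc and later), 0.5 without.
       overhead_mb = 768 if is_exit else 512
       n = 0.75 if has_destroy_cell_patch else 0.5
       return (ram_per_instance_mb - overhead_mb) * n

   # Example: an exit with 4 GB of RAM per instance, with the patch
   print(max_mem_in_queues_mb(4096, is_exit=True,
                              has_destroy_cell_patch=True))
   # -> 2496.0, i.e. "MaxMemInQueues 2496 MB" in torrc

 Whatever wording the man page ends up with, it should probably round the
 result down and remind operators that these N values are upper bounds.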

 Now we just have to turn this into a man page patch and wiki entry.

 > Anyone running a busy relay on an older/slower system with
 MaxMemInQueues=1024MB can check /proc/<pid>/status to see how much memory
 is consumed.  Be sure DisableAllSwap=1 is set and the queue limit is not
 higher, since the point is to observe actual memory consumed relative to
 a limit likely to be approached under normal operation.
 >
 > Another idea is to add an option to the daemon to cause queue memory
 preallocation.  This would be a nice hardening feature, as it would
 reduce malloc() calls issued under stress, and of course would allow more
 accurate estimates of worst-case memory consumption.  If OOM strikes with
 preallocated queues, that would indicate memory leakage.

 Please open a ticket for this feature in 0.3.4.
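
 For anyone running the /proc/<pid>/status check quoted above, here is a
 minimal Python sketch of reading the relevant fields (Linux procfs only;
 the helper name is mine):

   def tor_memory_mb(pid):
       # VmRSS is the current resident memory, VmHWM is its peak
       # ("high water mark"); the kernel reports both in kB.
       fields = {}
       with open("/proc/%d/status" % pid) as f:
           for line in f:
               key, _, value = line.partition(":")
               fields[key] = value.strip()
       rss_kb = int(fields["VmRSS"].split()[0])
       peak_kb = int(fields["VmHWM"].split()[0])
       return rss_kb / 1024.0, peak_kb / 1024.0

   # Example: rss_mb, peak_mb = tor_memory_mb(<pid of the tor process>)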

--
Ticket URL: <https://trac.torproject.org/projects/tor/ticket/24737#comment:5>
Tor Bug Tracker & Wiki <https://trac.torproject.org/>
The Tor Project: anonymity online

