[tor-bugs] #33367 [Circumvention/Snowflake]: Snowflake server using 1.5 GB memory, preventing other allocations

Tor Bug Tracker & Wiki blackhole at torproject.org
Tue Feb 18 18:52:00 UTC 2020


#33367: Snowflake server using 1.5 GB memory, preventing other allocations
-------------------------------------+------------------------
 Reporter:  dcf                      |          Owner:  (none)
     Type:  defect                   |         Status:  new
 Priority:  Medium                   |      Milestone:
Component:  Circumvention/Snowflake  |        Version:
 Severity:  Normal                   |     Resolution:
 Keywords:                           |  Actual Points:
Parent ID:                           |         Points:
 Reviewer:                           |        Sponsor:
-------------------------------------+------------------------

Old description:

> Thinking about #33364, I found that snowflake-server is chewing through a
> lot of memory. It may be a memory leak or something similar.
>
> {{{
> $ top -o%MEM
>   PID USER      PR  NI    VIRT    RES    SHR S  %CPU %MEM     TIME+ COMMAND
> 26910 debian-+  20   0 1916628 1.522g      0 S   0.0 77.8  58:51.37 snowflake-serve
> }}}
>
> The memory use seems to be inhibiting other processes. `runsvdir` puts
> status messages in its own `argv` so you can inspect them with `ps`.
> Currently it's reflecting `xz` not being able to allocate memory to
> compress logs:
> {{{
> $ ps ax | grep runsvdir
>  1358 ?        Ss    94:01 runsvdir -P /etc/service log: locate memory \
> svlogd: warning: processor failed, restart: /home/snowflake-proxy/snowflake-proxy-standalone-17h.log.d xz: (stdin): Cannot allocate memory \
> svlogd: warning: processor failed, restart: /home/snowflake-proxy/snowflake-proxy-standalone-17h.log.d xz: (stdin): Cannot allocate memory \
> svlogd: warning: processor failed, restart: /home/snowflake-proxy/snowflake-proxy-standalone-17h.log.d
> }}}
>
> I even hit the same error just now while trying to run a diagnostic
> command (it doesn't happen every time):
> {{{
> $ ps ax | grep standal
> -bash: fork: Cannot allocate memory
> }}}
>
> In the short term, it looks like we need to restart the server. Then we
> need to figure out what's causing it to use so much memory.
>
> The server was last restarted 2020-02-10 18:57 (one week ago) for #32964.

New description:

 Thinking about #33364, I found that snowflake-server is chewing through a
 lot of memory. It may be a memory leak or something similar.

 {{{
 $ top -o%MEM
   PID USER      PR  NI    VIRT    RES    SHR S  %CPU %MEM     TIME+ COMMAND
 26910 debian-+  20   0 1916628 1.522g      0 S   0.0 77.8  58:51.37 snowflake-serve
 }}}
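
 One way to start narrowing this down might be to compare top's RES against
 what the Go runtime itself reports. Below is a minimal sketch (not code
 that is in snowflake-server today; the interval and the choice of fields
 are arbitrary) that periodically logs runtime.MemStats so the live heap
 can be compared with the resident set size:
 {{{
 package main

 import (
     "log"
     "runtime"
     "time"
 )

 // logMemStats periodically logs the Go runtime's view of memory use so it
 // can be compared with the RES column that top reports for the process.
 func logMemStats(interval time.Duration) {
     var m runtime.MemStats
     for range time.Tick(interval) {
         runtime.ReadMemStats(&m)
         log.Printf("HeapAlloc=%d HeapInuse=%d HeapSys=%d Sys=%d NumGC=%d",
             m.HeapAlloc, m.HeapInuse, m.HeapSys, m.Sys, m.NumGC)
     }
 }

 func main() {
     go logMemStats(5 * time.Minute)
     select {} // placeholder for the real server loop
 }
 }}}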

 The memory use seems to be inhibiting other processes. `runsvdir` puts
 status messages in its own `argv` so you can inspect them with `ps`.
 Currently it's reflecting `xz` not being able to allocate memory to
 compress logs:
 {{{
 $ ps ax | grep runsvdir
  1358 ?        Ss    94:01 runsvdir -P /etc/service log: locate memory \
 svlogd: warning: processor failed, restart: /home/snowflake-proxy/snowflake-proxy-standalone-17h.log.d xz: (stdin): Cannot allocate memory \
 svlogd: warning: processor failed, restart: /home/snowflake-proxy/snowflake-proxy-standalone-17h.log.d xz: (stdin): Cannot allocate memory \
 svlogd: warning: processor failed, restart: /home/snowflake-proxy/snowflake-proxy-standalone-17h.log.d
 }}}
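
 (The same status text can also be read straight out of /proc, which is
 essentially what ps does. A tiny sketch, purely illustrative, with the PID
 1358 taken from the output above:)
 {{{
 package main

 import (
     "bytes"
     "fmt"
     "io/ioutil"
     "os"
 )

 func main() {
     // runsvdir rewrites its own argv to carry recent log output; besides
     // ps, it can be read from /proc/<pid>/cmdline, where the arguments are
     // separated by NUL bytes.
     data, err := ioutil.ReadFile("/proc/1358/cmdline") // PID from the ps output above
     if err != nil {
         fmt.Fprintln(os.Stderr, err)
         os.Exit(1)
     }
     fmt.Println(string(bytes.ReplaceAll(data, []byte{0}, []byte{' '})))
 }
 }}}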

 I even hit the same error just now while trying to run a diagnostic
 command (it doesn't happen every time):
 {{{
 $ ps ax | grep standal
 -bash: fork: Cannot allocate memory
 }}}

 In the short term, it looks like we need to restart the server. Then we
 need to figure out what's causing it to use so much memory.
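
 For figuring out the cause, one option (just a sketch of the standard
 approach, not something that is already wired into snowflake-server as far
 as I know) would be to expose the net/http/pprof endpoints on loopback, so
 a heap profile can be pulled from the running process the next time memory
 balloons:
 {{{
 package main

 import (
     "log"
     "net/http"
     _ "net/http/pprof" // registers /debug/pprof/* handlers on DefaultServeMux
 )

 func main() {
     // Serve the profiling endpoints on loopback only. Afterwards,
     //   go tool pprof http://localhost:6060/debug/pprof/heap
     // shows which allocation sites are holding the memory.
     go func() {
         log.Println(http.ListenAndServe("localhost:6060", nil))
     }()
     select {} // placeholder for the real server loop
 }
 }}}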

 The server was last restarted 2020-02-10 18:57 (one week ago) at
 [https://gitweb.torproject.org/pluggable-transports/snowflake.git/log/?id=ca9ae12c383405bc9a755e1bc902e9755495c1f1 ca9ae12c383405bc9a755e1bc902e9755495c1f1]
 for #32964.

--

Comment (by dcf):

 Replying to [ticket:33367 dcf]:
 > The server was last restarted 2020-02-10 18:57 (one week ago) at
 > [https://gitweb.torproject.org/pluggable-transports/snowflake.git/log/?id=ca9ae12c383405bc9a755e1bc902e9755495c1f1 ca9ae12c383405bc9a755e1bc902e9755495c1f1]
 > for #32964.

 Initially I suspected the recent websocketconn changes from #33144, but
 those can only have had an effect since 2020-02-10 18:57, when the server
 was restarted, and the earliest reports of "Could not connect to the
 bridge" predate that (assuming that the memory usage and the issue in
 #33364 are the same problem).

 * 2020-01-30 #33122
 * 2020-02-02 #33126
 * 2020-02-02 #33127
 * 2020-02-18 #33364

--
Ticket URL: <https://trac.torproject.org/projects/tor/ticket/33367#comment:2>
Tor Bug Tracker & Wiki <https://trac.torproject.org/>
The Tor Project: anonymity online

