[tor-bugs] #21312 [Obfuscation/Snowflake]: snowflake-client is pegged at 100% cpu

Tor Bug Tracker & Wiki blackhole at torproject.org
Thu Mar 15 04:39:00 UTC 2018


#21312: snowflake-client is pegged at 100% cpu
-----------------------------------+------------------------------
 Reporter:  arlolra                |          Owner:  arlolra
     Type:  defect                 |         Status:  needs_review
 Priority:  High                   |      Milestone:
Component:  Obfuscation/Snowflake  |        Version:
 Severity:  Major                  |     Resolution:
 Keywords:                         |  Actual Points:
Parent ID:                         |         Points:
 Reviewer:                         |        Sponsor:
-----------------------------------+------------------------------

Comment (by dcf):

 Replying to [comment:33 dcf]:
 > Replying to [comment:32 arlolra]:
 > > > Here's a server-webrtc segfault from gdb.
 > >
 > > From the timeline in comment:29, maybe it's a race between the
 datachannel close on the server and reading from the OR (I'm assuming the
 logs never make it because they're async and the segfaults prevent them
 from being written).
 > >
 > > You can try something like,
 >
 > Good call. I tried your idea and added some debugging code to show the
 addresses of the data channel objects; it was definitely crashing while
 trying to dereference a NULL pointer before (notice `0x0` in the output).
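
 To make the suspected failure mode concrete, here is a standalone sketch
 (illustrative only, not the snowflake source): one goroutine keeps
 writing through the data channel pointer while the close path nils it
 out, so the writer eventually dereferences nil, the Go analogue of the
 observed segfault. All names here are made up for the example.
 {{{
 package main

 // Illustrative race sketch, not snowflake code: a writer goroutine
 // dereferences a pointer that the close path clears concurrently.

 import (
     "fmt"
     "time"
 )

 type dataChannel struct{ label string }

 // Send reads a field through the receiver, so calling it with a nil
 // *dataChannel panics with a nil pointer dereference.
 func (dc *dataChannel) Send(b []byte) {
     fmt.Printf("sent %d bytes on %s\n", len(b), dc.label)
 }

 type conn struct{ dc *dataChannel }

 func main() {
     c := &conn{dc: &dataChannel{label: "snowflake"}}

     // Stands in for the loop pumping data from the OR toward WebRTC.
     go func() {
         for {
             c.dc.Send([]byte("cell")) // panics once c.dc is nil
             time.Sleep(10 * time.Millisecond)
         }
     }()

     // Stands in for an OnClose callback tearing the channel down
     // while the writer is still running.
     time.Sleep(50 * time.Millisecond)
     c.dc = nil

     time.Sleep(100 * time.Millisecond)
 }
 }}}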

 Touching base on server-webrtc, I didn't experience any crashes with it
 after
 [https://gitweb.torproject.org/pluggable-transports/snowflake.git/commit/?id=c834c76fc50677cfb98e516e5d9d630ecfe691c2
 c834c76fc5] (explicit frees; sketched below, after the table). I
 downloaded a few hundred MB through it, then let it idle overnight. I was
 eyeballing the memory usage in top from time to time, checking for a slow
 leak. Here are the samples I happened to write down:
 ||= time =||= mem (top %MEM) =||
 || 2018-03-14 03:15|| 1.2%||
 ||            03:38|| 2.0%||
 ||            03:48|| 2.3%||
 ||            04:01|| 2.6%||
 ||            04:17|| 2.5%||
 ||            04:34|| 2.5%||
 ||            16:05|| 3.1%||
 ||            16:56|| 3.1%||
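
 As an aside, the "explicit frees" pattern from that commit is worth
 sketching: the WebRTC objects are backed by C allocations that Go's
 garbage collector never sees, so they have to be released explicitly and
 the pointers nilled out. A minimal sketch with assumed names (not the
 actual snowflake API):
 {{{
 package main

 // Hedged sketch of explicit frees (assumed names, not the snowflake
 // source): release the C-backed objects deterministically and nil the
 // pointers, so later writers hit a checkable nil rather than freed
 // memory.

 type dataChannel struct{}

 func (dc *dataChannel) Close() {} // would release the C data channel

 type peerConnection struct{}

 func (pc *peerConnection) Destroy() {} // would release the C peer connection

 type webRTCConn struct {
     dc *dataChannel
     pc *peerConnection
 }

 func (c *webRTCConn) cleanUp() {
     if c.dc != nil {
         c.dc.Close()
         c.dc = nil
     }
     if c.pc != nil {
         c.pc.Destroy()
         c.pc = nil
     }
 }

 func main() {
     c := &webRTCConn{dc: &dataChannel{}, pc: &peerConnection{}}
     c.cleanUp()
     c.cleanUp() // nil checks make a second call a harmless no-op
 }
 }}}
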
 After the server had been allowed to idle for a while, I saw repeated
 write attempts (roughly one every 2 seconds) to a `nil` data connection.
 These would have crashed the server before
 [https://gitweb.torproject.org/pluggable-transports/snowflake.git/commit/?id=c834c76fc50677cfb98e516e5d9d630ecfe691c2
 c834c76fc5]. But their repeated nature shows it's not just a transient
 race condition, but something persistent.
 {{{
 2018/03/14 17:20:43 Write 543 bytes 0x0 --> WebRTC
 2018/03/14 17:20:45 Write 543 bytes 0x0 --> WebRTC
 2018/03/14 17:20:47 Write 543 bytes 0x0 --> WebRTC
 2018/03/14 17:20:50 Write 543 bytes 0x0 --> WebRTC
 2018/03/14 17:20:52 Write 543 bytes 0x0 --> WebRTC
 2018/03/14 17:20:54 Write 543 bytes 0x0 --> WebRTC
 2018/03/14 17:20:58 Write 543 bytes 0x0 --> WebRTC
 2018/03/14 17:21:00 Write 543 bytes 0x0 --> WebRTC
 2018/03/14 17:21:00 Write 543 bytes 0x0 --> WebRTC
 2018/03/14 17:21:03 Write 543 bytes 0x0 --> WebRTC
 2018/03/14 17:21:05 Write 543 bytes 0x0 --> WebRTC
 2018/03/14 17:21:06 Write 543 bytes 0x0 --> WebRTC
 2018/03/14 17:21:11 Write 543 bytes 0x0 --> WebRTC
 2018/03/14 17:21:12 Write 543 bytes 0x0 --> WebRTC
 2018/03/14 17:21:12 first signal interrupt
 2018/03/14 17:21:13 second signal interrupt
 }}}
 To get this output I just changed the log line:
 {{{
 log.Printf("Write %d bytes %p --> WebRTC", len(b), c.dc)
 }}}
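
 For context, the surrounding Write method plausibly has the following
 shape (assumed names, not the actual source). The `%p` verb prints a nil
 pointer as `0x0`, which matches the log lines above, and a nil check is
 what would turn the former segfault into a skipped write:
 {{{
 package main

 import "log"

 // Sketch of the assumed Write shape, not the snowflake source.

 type dataChannel struct{}

 func (dc *dataChannel) Send(b []byte) {} // stands in for the real Send

 type webRTCConn struct{ dc *dataChannel }

 func (c *webRTCConn) Write(b []byte) (int, error) {
     log.Printf("Write %d bytes %p --> WebRTC", len(b), c.dc)
     if c.dc != nil {
         c.dc.Send(b)
     }
     return len(b), nil
 }

 func main() {
     c := &webRTCConn{}         // dc is nil, as after an explicit free
     c.Write(make([]byte, 543)) // logs: Write 543 bytes 0x0 --> WebRTC
 }
 }}}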

--
Ticket URL: <https://trac.torproject.org/projects/tor/ticket/21312#comment:47>
Tor Bug Tracker & Wiki <https://trac.torproject.org/>
The Tor Project: anonymity online

