[tor-bugs] #29206 [Circumvention/Snowflake]: New design for client -- proxy protocol for Snowflake

Tor Bug Tracker & Wiki blackhole at torproject.org
Thu Jun 20 05:58:28 UTC 2019


#29206: New design for client -- proxy protocol for Snowflake
-------------------------------------------------+-------------------------
 Reporter:  cohosh                               |          Owner:  cohosh
     Type:  task                                 |         Status:
                                                 |  needs_review
 Priority:  Medium                               |      Milestone:
Component:  Circumvention/Snowflake              |        Version:
 Severity:  Normal                               |     Resolution:
 Keywords:  ex-sponsor-19, anti-censorship-      |  Actual Points:
  roadmap                                        |
Parent ID:                                       |         Points:  6
 Reviewer:  dcf                                  |        Sponsor:
                                                 |  Sponsor28-must
-------------------------------------------------+-------------------------

Comment (by dcf):

 Replying to [comment:13 cohosh]:
 > I'm going to ask for an incremental review on this one, mostly just to
 > get another pair of eyes on what I've done and my next steps before I
 > sink more time into possibly going in the wrong direction:
 > https://github.com/cohosh/snowflake/compare/sequencing

 High-level summary: splits the data stream into frames. Each frame is
 preceded by a 12-byte header consisting of seq, ack, and length fields.
 The receiver buffers an entire frame in memory, then accepts it if its
 sequence number exactly matches the expected sequence number, or drops it
 otherwise.

 Overall the design looks good.

 {{{
 type snowflakeHeader struct {
         seq    int
         ack    int
         length int // length of the accompanying data (including header length)
 }
 }}}
 The fields would be better as `uint32`, because the size of `int`
 [https://golang.org/ref/spec#Numeric_types may vary] (though I think you
 handled it correctly by always specifying 32 bits in serialization
 functions). `length` may be better as `uint16`; that's plenty for one call
 to `Write` and it avoids the risk that an implementation will try to
 buffer 4 GB in memory.

 IMO `length` should be ''exclusive'' of the header size. Otherwise you
 have to define what it means if `length` < 12. (I believe the current code
 doesn't work in this case because `header.length - snowflakeHeaderLen`
 underflows.) The packets sent by `sendAck` should have `length` = 0.

 {{{
         h.seq = int(binary.LittleEndian.Uint32(b[0:4]))
         h.ack = int(binary.LittleEndian.Uint32(b[4:8]))
         h.length = int(binary.LittleEndian.Uint32(b[8:12]))
 }}}
 Big-endian is more usual for network protocols.

 I'm not sure about the implementation of `Read`:
  * Should return `0` here, not `length`. `b` hasn't really been filled in
 with any de-serialized data yet.
    {{{
         length, err := s.Conn.Read(b)
         if err != nil {
                 return length, err
         }
    }}}
  * This should have `b[:length]` in place of `b`:
    {{{
         s.buffer = append(s.buffer, b...)
    }}}
  * This order of operations (`Read` then `copy`) will lose any data
 remaining in `s.out` at EOF. It would be better to do the `copy` first,
 and only when `s.out` is empty do a `Read` and possibly return an error.
    {{{
         length, err := s.Conn.Read(b)
         if err != nil {
                 return length, err
         }
         s.buffer = append(s.buffer, b...)

         n := copy(b, s.out)
         s.out = s.out[n:]
         if n == len(b) {
                 return n, err
         }
    }}}
 I would prefer to start with an implementation of `Read` that is simpler,
 if less efficient. I am picturing something like this (untested):
 {{{
 func NewSnowflakeReadWriter(sConn io.ReadWriteCloser) *SnowflakeReadWriter {
         pr, pw := io.Pipe()
         s := &SnowflakeReadWriter{
                 Conn: sConn,
                 pr:   pr,
         }
         go s.readLoop(pw)
         return s
 }
 func (s *SnowflakeReadWriter) readLoop(pw *io.PipeWriter) {
         var err error
         for err == nil {
                 // strip headers and write data into the pipe
                 var header snowflakeHeader
                 err = readHeader(s.Conn, &header)
                 if err != nil {
                         break
                 }
                 if header.seq == s.ack {
                         _, err = io.CopyN(pw, s.Conn, int64(header.length))
                 } else {
                         _, err = io.CopyN(ioutil.Discard, s.Conn, int64(header.length))
                 }
         }
         pw.CloseWithError(err)
 }
 func (s *SnowflakeReadWriter) Read(b []byte) (int, error) {
         // read de-headered data from the pipe
         return s.pr.Read(b)
 }
 }}}

 > * Implement a timeout (maybe making the ReadWriteCloser a net.Conn
 > and implementing SetReadDeadline and a fixed size window to stop
 > sending packets before others have been acknowledged)

 I agree that ultimately `SnowflakeReadWriter` should be a `net.Conn`,
 though I don't see why the read deadline should affect the sending of
 packets.

 > * Make the tests better. Right now I'm just reading and writing from
 > a buffer; we should test with network connections

 [https://golang.org/pkg/net/#Pipe net.Pipe] can be good for this.

--
Ticket URL: <https://trac.torproject.org/projects/tor/ticket/29206#comment:17>
Tor Bug Tracker & Wiki <https://trac.torproject.org/>
The Tor Project: anonymity online

