tor-commits

commit 65ecb798ca8842a431214c2aa5133620e576c5f3
Author: David Fifield <david(a)bamsoftware.com>
Date: Thu Apr 23 20:36:55 2020 -0600
Update a comment (no signal pipe anymore).
---
client/lib/webrtc.go | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/client/lib/webrtc.go b/client/lib/webrtc.go
index 5aa7aec..3e20549 100644
--- a/client/lib/webrtc.go
+++ b/client/lib/webrtc.go
@@ -304,8 +304,8 @@ func (c *WebRTCPeer) sendOfferToBroker() {
c.answerChannel <- answer
}
-// Block until an SDP offer is available, send it to either
-// the Broker or signal pipe, then await for the SDP answer.
+// exchangeSDP blocks until an SDP offer is available, sends it to the Broker,
+// then awaits the SDP answer.
func (c *WebRTCPeer) exchangeSDP() error {
select {
case <-c.offerChannel:

[snowflake/master] Restore `go 1.13` to go.mod, lost in the turbotunnel merge.
by dcf@torproject.org 23 Apr '20
commit 2f52217d2f62e61a05ea265257d2229410f79732
Author: David Fifield <david(a)bamsoftware.com>
Date: Thu Apr 23 17:08:49 2020 -0600
Restore `go 1.13` to go.mod, lost in the turbotunnel merge.
---
go.mod | 2 ++
1 file changed, 2 insertions(+)
diff --git a/go.mod b/go.mod
index 6502651..07c49a2 100644
--- a/go.mod
+++ b/go.mod
@@ -1,5 +1,7 @@
module git.torproject.org/pluggable-transports/snowflake.git
+go 1.13
+
require (
git.torproject.org/pluggable-transports/goptlib.git v1.1.0
github.com/golang/protobuf v1.3.1 // indirect

commit 0790954020b550f5d5351d9e54b108c9d357fa50
Author: David Fifield <david(a)bamsoftware.com>
Date: Tue Feb 4 22:27:58 2020 -0700
USERADDR support for turbotunnel sessions.
The difficulty here is that the whole point of turbotunnel sessions is
that they are not necessarily tied to a single WebSocket connection, nor
even a single client IP address. We use a heuristic: whenever a
WebSocket connection starts that has a new ClientID, we store a mapping
from that ClientID to the IP address attached to the WebSocket
connection in a lookup table. Later, when enough packets have arrived to
establish a turbotunnel session, we recover the ClientID associated with
the session (which kcp-go has stored in the RemoteAddr field), and look
it up in the table to get an IP address. We introduce a new data type,
clientIDMap, to store the clientID-to-IP mapping during the short time
between when a WebSocket connection starts and handleSession receives a
fully fledged KCP session.
---
server/server.go | 47 +++++++++++++++---
server/turbotunnel.go | 85 ++++++++++++++++++++++++++++++++
server/turbotunnel_test.go | 119 +++++++++++++++++++++++++++++++++++++++++++++
3 files changed, 244 insertions(+), 7 deletions(-)
diff --git a/server/server.go b/server/server.go
index 028a24d..1a53de7 100644
--- a/server/server.go
+++ b/server/server.go
@@ -43,6 +43,11 @@ const requestTimeout = 10 * time.Second
// indefinitely.
const clientMapTimeout = 1 * time.Minute
+// How big to make the map of ClientIDs to IP addresses. The map is used in
+// turbotunnelMode to store a reasonable IP address for a client session that
+// may outlive any single WebSocket connection.
+const clientIDAddrMapCapacity = 1024
+
// How long to wait for ListenAndServe or ListenAndServeTLS to return an error
// before deciding that it's not going to return.
const listenAndServeErrorTimeout = 100 * time.Millisecond
@@ -114,6 +119,15 @@ var upgrader = websocket.Upgrader{
CheckOrigin: func(r *http.Request) bool { return true },
}
+// clientIDAddrMap stores short-term mappings from ClientIDs to IP addresses.
+// When we call pt.DialOr, tor wants us to provide a USERADDR string that
+// represents the remote IP address of the client (for metrics purposes, etc.).
+// This data structure bridges the gap between ServeHTTP, which knows about IP
+// addresses, and handleStream, which is what calls pt.DialOr. The common piece
+// of information linking both ends of the chain is the ClientID, which is
+// attached to the WebSocket connection and every session.
+var clientIDAddrMap = newClientIDMap(clientIDAddrMapCapacity)
+
// overrideReadConn is a net.Conn with an overridden Read method. Compare to
// recordingConn at
// https://dave.cheney.net/2015/05/22/struct-composition-with-go.
@@ -203,8 +217,16 @@ func turbotunnelMode(conn net.Conn, addr string, pconn *turbotunnel.QueuePacketC
return fmt.Errorf("reading ClientID: %v", err)
}
- // TODO: ClientID-to-client_ip address mapping
- // Peek at the first read packet to get the KCP conv ID.
+ // Store a short-term mapping from the ClientID to the client IP
+ // address attached to this WebSocket connection. tor will want us to
+ // provide a client IP address when we call pt.DialOr. But a KCP session
+ // does not necessarily correspond to any single IP address--it's
+ // composed of packets that are carried in possibly multiple WebSocket
+ // streams. We apply the heuristic that the IP address of the most
+ // recent WebSocket connection that has had to do with a session, at the
+ // time the session is established, is the IP address that should be
+ // credited for the entire KCP session.
+ clientIDAddrMap.Set(clientID, addr)
errCh := make(chan error)
@@ -249,10 +271,9 @@ func turbotunnelMode(conn net.Conn, addr string, pconn *turbotunnel.QueuePacketC
}
// handleStream bidirectionally connects a client stream with the ORPort.
-func handleStream(stream net.Conn) error {
- // TODO: This is where we need to provide the client IP address.
- statsChannel <- false
- or, err := pt.DialOr(&ptInfo, "", ptMethodName)
+func handleStream(stream net.Conn, addr string) error {
+ statsChannel <- addr != ""
+ or, err := pt.DialOr(&ptInfo, addr, ptMethodName)
if err != nil {
return fmt.Errorf("connecting to ORPort: %v", err)
}
@@ -266,6 +287,17 @@ func handleStream(stream net.Conn) error {
// acceptStreams layers an smux.Session on the KCP connection and awaits streams
// on it. Passes each stream to handleStream.
func acceptStreams(conn *kcp.UDPSession) error {
+ // Look up the IP address associated with this KCP session, via the
+ // ClientID that is returned by the session's RemoteAddr method.
+ addr, ok := clientIDAddrMap.Get(conn.RemoteAddr().(turbotunnel.ClientID))
+ if !ok {
+ // This means that the map is tending to run over capacity, not
+ // just that there was no client_ip on the incoming connection.
+ // We store "" in the map in the absence of client_ip. This log
+ // message means you should increase clientIDAddrMapCapacity.
+ log.Printf("no address in clientID-to-IP map (capacity %d)", clientIDAddrMapCapacity)
+ }
+
smuxConfig := smux.DefaultConfig()
smuxConfig.Version = 2
smuxConfig.KeepAliveTimeout = 10 * time.Minute
@@ -273,6 +305,7 @@ func acceptStreams(conn *kcp.UDPSession) error {
if err != nil {
return err
}
+
for {
stream, err := sess.AcceptStream()
if err != nil {
@@ -283,7 +316,7 @@ func acceptStreams(conn *kcp.UDPSession) error {
}
go func() {
defer stream.Close()
- err := handleStream(stream)
+ err := handleStream(stream, addr)
if err != nil {
log.Printf("handleStream: %v", err)
}
diff --git a/server/turbotunnel.go b/server/turbotunnel.go
new file mode 100644
index 0000000..1d00897
--- /dev/null
+++ b/server/turbotunnel.go
@@ -0,0 +1,85 @@
+package main
+
+import (
+ "sync"
+
+ "git.torproject.org/pluggable-transports/snowflake.git/common/turbotunnel"
+)
+
+// clientIDMap is a fixed-capacity mapping from ClientIDs to address strings.
+// Adding a new entry using the Set method causes the oldest existing entry to
+// be forgotten.
+//
+// This data type is meant to be used to remember the IP address associated with
+// a ClientID, during the short period of time between when a WebSocket
+// connection with that ClientID began, and when a KCP session is established.
+//
+// The design requirements of this type are that it needs to remember a mapping
+// for only a short time, and old entries should expire so as not to consume
+// unbounded memory. It is not a critical error if an entry is forgotten before
+// it is needed; better to forget entries than to use too much memory.
+type clientIDMap struct {
+ lock sync.Mutex
+ // entries is a circular buffer of (ClientID, addr) pairs.
+ entries []struct {
+ clientID turbotunnel.ClientID
+ addr string
+ }
+ // oldest is the index of the oldest member of the entries buffer, the
+ // one that will be overwritten at the next call to Set.
+ oldest int
+ // current points to the index of the most recent entry corresponding to
+ // each ClientID.
+ current map[turbotunnel.ClientID]int
+}
+
+// newClientIDMap makes a new clientIDMap with the given capacity.
+func newClientIDMap(capacity int) *clientIDMap {
+ return &clientIDMap{
+ entries: make([]struct {
+ clientID turbotunnel.ClientID
+ addr string
+ }, capacity),
+ oldest: 0,
+ current: make(map[turbotunnel.ClientID]int),
+ }
+}
+
+// Set adds a mapping from clientID to addr, replacing any previous mapping for
+// clientID. It may also cause the clientIDMap to forget at most one other
+// mapping, the oldest one.
+func (m *clientIDMap) Set(clientID turbotunnel.ClientID, addr string) {
+ m.lock.Lock()
+ defer m.lock.Unlock()
+ if len(m.entries) == 0 {
+ // The invariant m.oldest < len(m.entries) does not hold in this
+ // special case.
+ return
+ }
+ // m.oldest is the index of the entry we're about to overwrite. If it's
+ // the current entry for any ClientID, we need to delete that clientID
+ // from the current map (that ClientID is now forgotten).
+ if i, ok := m.current[m.entries[m.oldest].clientID]; ok && i == m.oldest {
+ delete(m.current, m.entries[m.oldest].clientID)
+ }
+ // Overwrite the oldest entry.
+ m.entries[m.oldest].clientID = clientID
+ m.entries[m.oldest].addr = addr
+ // Add the overwritten entry to the quick-lookup map.
+ m.current[clientID] = m.oldest
+ // What was the oldest entry is now the newest.
+ m.oldest = (m.oldest + 1) % len(m.entries)
+}
+
+// Get returns a previously stored mapping. The second return value indicates
+// whether clientID was actually present in the map. If it is false, then the
+// returned address string will be "".
+func (m *clientIDMap) Get(clientID turbotunnel.ClientID) (string, bool) {
+ m.lock.Lock()
+ defer m.lock.Unlock()
+ if i, ok := m.current[clientID]; ok {
+ return m.entries[i].addr, true
+ } else {
+ return "", false
+ }
+}
diff --git a/server/turbotunnel_test.go b/server/turbotunnel_test.go
new file mode 100644
index 0000000..c4bf02b
--- /dev/null
+++ b/server/turbotunnel_test.go
@@ -0,0 +1,119 @@
+package main
+
+import (
+ "encoding/binary"
+ "testing"
+
+ "git.torproject.org/pluggable-transports/snowflake.git/common/turbotunnel"
+)
+
+func TestClientIDMap(t *testing.T) {
+ // Convert a uint64 into a ClientID.
+ id := func(n uint64) turbotunnel.ClientID {
+ var clientID turbotunnel.ClientID
+ binary.PutUvarint(clientID[:], n)
+ return clientID
+ }
+
+ // Does m.Get(key) and checks that the output matches what is expected.
+ expectGet := func(m *clientIDMap, clientID turbotunnel.ClientID, expectedAddr string, expectedOK bool) {
+ t.Helper()
+ addr, ok := m.Get(clientID)
+ if addr != expectedAddr || ok != expectedOK {
+ t.Errorf("expected (%+q, %v), got (%+q, %v)", expectedAddr, expectedOK, addr, ok)
+ }
+ }
+
+ // Checks that the len of m.current is as expected.
+ expectSize := func(m *clientIDMap, expectedLen int) {
+ t.Helper()
+ if len(m.current) != expectedLen {
+ t.Errorf("expected map len %d, got %d %+v", expectedLen, len(m.current), m.current)
+ }
+ }
+
+ // Zero-capacity map can't remember anything.
+ {
+ m := newClientIDMap(0)
+ expectSize(m, 0)
+ expectGet(m, id(0), "", false)
+ expectGet(m, id(1234), "", false)
+
+ m.Set(id(0), "A")
+ expectSize(m, 0)
+ expectGet(m, id(0), "", false)
+ expectGet(m, id(1234), "", false)
+
+ m.Set(id(1234), "A")
+ expectSize(m, 0)
+ expectGet(m, id(0), "", false)
+ expectGet(m, id(1234), "", false)
+ }
+
+ {
+ m := newClientIDMap(1)
+ expectSize(m, 0)
+ expectGet(m, id(0), "", false)
+ expectGet(m, id(1), "", false)
+
+ m.Set(id(0), "A")
+ expectSize(m, 1)
+ expectGet(m, id(0), "A", true)
+ expectGet(m, id(1), "", false)
+
+ m.Set(id(1), "B") // forgets the (0, "A") entry
+ expectSize(m, 1)
+ expectGet(m, id(0), "", false)
+ expectGet(m, id(1), "B", true)
+
+ m.Set(id(1), "C") // forgets the (1, "B") entry
+ expectSize(m, 1)
+ expectGet(m, id(0), "", false)
+ expectGet(m, id(1), "C", true)
+ }
+
+ {
+ m := newClientIDMap(5)
+ m.Set(id(0), "A")
+ m.Set(id(1), "B")
+ m.Set(id(2), "C")
+ m.Set(id(0), "D") // shadows the (0, "A") entry
+ m.Set(id(3), "E")
+ expectSize(m, 4)
+ expectGet(m, id(0), "D", true)
+ expectGet(m, id(1), "B", true)
+ expectGet(m, id(2), "C", true)
+ expectGet(m, id(3), "E", true)
+ expectGet(m, id(4), "", false)
+
+ m.Set(id(4), "F") // forgets the (0, "A") entry but should preserve (0, "D")
+ expectSize(m, 5)
+ expectGet(m, id(0), "D", true)
+ expectGet(m, id(1), "B", true)
+ expectGet(m, id(2), "C", true)
+ expectGet(m, id(3), "E", true)
+ expectGet(m, id(4), "F", true)
+
+ m.Set(id(5), "G") // forgets the (1, "B") entry
+ m.Set(id(0), "H") // forgets the (2, "C") entry and shadows (0, "D")
+ expectSize(m, 4)
+ expectGet(m, id(0), "H", true)
+ expectGet(m, id(1), "", false)
+ expectGet(m, id(2), "", false)
+ expectGet(m, id(3), "E", true)
+ expectGet(m, id(4), "F", true)
+ expectGet(m, id(5), "G", true)
+
+ m.Set(id(0), "I") // forgets the (0, "D") entry and shadows (0, "H")
+ m.Set(id(0), "J") // forgets the (3, "E") entry and shadows (0, "I")
+ m.Set(id(0), "K") // forgets the (4, "F") entry and shadows (0, "J")
+ m.Set(id(0), "L") // forgets the (5, "G") entry and shadows (0, "K")
+ expectSize(m, 1)
+ expectGet(m, id(0), "L", true)
+ expectGet(m, id(1), "", false)
+ expectGet(m, id(2), "", false)
+ expectGet(m, id(3), "", false)
+ expectGet(m, id(4), "", false)
+ expectGet(m, id(5), "", false)
+ }
+}
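The eviction behavior that the tests above exercise can also be seen in a compact, standalone form. The following is a simplified sketch of the clientIDMap idea (a fixed-capacity map whose Set forgets the oldest entry), using plain string keys in place of turbotunnel.ClientID; it is an illustration, not the snowflake code itself:

```go
package main

import "fmt"

// fixedMap is a simplified sketch of clientIDMap: a fixed-capacity mapping
// whose Set evicts the oldest stored entry once the ring buffer is full.
type fixedMap struct {
	entries []struct{ key, value string } // circular buffer of (key, value)
	oldest  int                           // index to overwrite on next Set
	current map[string]int                // key -> index of its current entry
}

func newFixedMap(capacity int) *fixedMap {
	return &fixedMap{
		entries: make([]struct{ key, value string }, capacity),
		current: make(map[string]int),
	}
}

func (m *fixedMap) Set(key, value string) {
	if len(m.entries) == 0 {
		return
	}
	// If the slot we are about to overwrite holds the current entry for
	// some key, that key is forgotten.
	if i, ok := m.current[m.entries[m.oldest].key]; ok && i == m.oldest {
		delete(m.current, m.entries[m.oldest].key)
	}
	m.entries[m.oldest] = struct{ key, value string }{key, value}
	m.current[key] = m.oldest
	m.oldest = (m.oldest + 1) % len(m.entries)
}

func (m *fixedMap) Get(key string) (string, bool) {
	if i, ok := m.current[key]; ok {
		return m.entries[i].value, true
	}
	return "", false
}

func main() {
	m := newFixedMap(2)
	m.Set("client-a", "192.0.2.1")
	m.Set("client-b", "192.0.2.2")
	m.Set("client-c", "192.0.2.3") // evicts the client-a entry
	_, ok := m.Get("client-a")
	fmt.Println(ok) // false: oldest entry was forgotten
	addr, _ := m.Get("client-c")
	fmt.Println(addr)
}
```

As in the real type, forgetting an entry early is harmless here; the design only needs the mapping to survive the short window between WebSocket accept and KCP session establishment.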

[snowflake/master] Immediately and unconditionally grant new SOCKS connections.
by dcf@torproject.org 23 Apr '20
commit ee2fb42d33ea105995adfc84d2be47a9d6dfc97f
Author: David Fifield <david(a)bamsoftware.com>
Date: Thu Jan 30 23:49:41 2020 -0700
Immediately and unconditionally grant new SOCKS connections.
---
client/lib/snowflake.go | 10 +---------
client/snowflake.go | 8 ++++++++
2 files changed, 9 insertions(+), 9 deletions(-)
diff --git a/client/lib/snowflake.go b/client/lib/snowflake.go
index 2065f73..199f8a4 100644
--- a/client/lib/snowflake.go
+++ b/client/lib/snowflake.go
@@ -16,22 +16,14 @@ const (
// Given an accepted SOCKS connection, establish a WebRTC connection to the
// remote peer and exchange traffic.
-func Handler(socks SocksConnector, snowflakes SnowflakeCollector) error {
+func Handler(socks net.Conn, snowflakes SnowflakeCollector) error {
// Obtain an available WebRTC remote. May block.
snowflake := snowflakes.Pop()
if nil == snowflake {
- if err := socks.Reject(); err != nil {
- log.Printf("socks.Reject returned error: %v", err)
- }
-
return errors.New("handler: Received invalid Snowflake")
}
defer snowflake.Close()
log.Println("---- Handler: snowflake assigned ----")
- err := socks.Grant(&net.TCPAddr{IP: net.IPv4zero, Port: 0})
- if err != nil {
- return err
- }
go func() {
// When WebRTC resets, close the SOCKS connection too.
diff --git a/client/snowflake.go b/client/snowflake.go
index af8447c..dda59ae 100644
--- a/client/snowflake.go
+++ b/client/snowflake.go
@@ -59,9 +59,17 @@ func socksAcceptLoop(ln *pt.SocksListener, snowflakes sf.SnowflakeCollector) {
log.Printf("SOCKS accepted: %v", conn.Req)
go func() {
defer conn.Close()
+
+ err := conn.Grant(&net.TCPAddr{IP: net.IPv4zero, Port: 0})
+ if err != nil {
+ log.Printf("conn.Grant error: %s", err)
+ return
+ }
+
err = sf.Handler(conn, snowflakes)
if err != nil {
log.Printf("handler error: %s", err)
+ return
}
}()
}

[snowflake/master] Let copyLoop exit when either direction finishes.
by dcf@torproject.org 23 Apr '20
commit 904af9cb8aa6aa25e094da1c025c7afed55d46ea
Author: David Fifield <david(a)bamsoftware.com>
Date: Fri Feb 21 14:47:34 2020 -0700
Let copyLoop exit when either direction finishes.
Formerly we waited until *both* directions finished. What this meant in
practice is that when the remote connection ended, copyLoop would become
useless but would continue blocking its caller until something else
finally closed the socks connection.
---
client/lib/snowflake.go | 10 ++++------
1 file changed, 4 insertions(+), 6 deletions(-)
diff --git a/client/lib/snowflake.go b/client/lib/snowflake.go
index 199f8a4..409ce14 100644
--- a/client/lib/snowflake.go
+++ b/client/lib/snowflake.go
@@ -5,7 +5,6 @@ import (
"io"
"log"
"net"
- "sync"
"time"
)
@@ -41,20 +40,19 @@ func Handler(socks net.Conn, snowflakes SnowflakeCollector) error {
// Exchanges bytes between two ReadWriters.
// (In this case, between a SOCKS and WebRTC connection.)
func copyLoop(socks, webRTC io.ReadWriter) {
- var wg sync.WaitGroup
- wg.Add(2)
+ done := make(chan struct{}, 2)
go func() {
if _, err := io.Copy(socks, webRTC); err != nil {
log.Printf("copying WebRTC to SOCKS resulted in error: %v", err)
}
- wg.Done()
+ done <- struct{}{}
}()
go func() {
if _, err := io.Copy(webRTC, socks); err != nil {
log.Printf("copying SOCKS to WebRTC resulted in error: %v", err)
}
- wg.Done()
+ done <- struct{}{}
}()
- wg.Wait()
+ <-done
log.Println("copy loop ended")
}
commit 222ab3d85a4113088db3e3b742411806922c028c
Author: David Fifield <david(a)bamsoftware.com>
Date: Tue Jan 28 02:29:34 2020 -0700
Import Turbo Tunnel support code.
Copied and slightly modified from
https://gitweb.torproject.org/pluggable-transports/meek.git/log/?h=turbotun…
https://github.com/net4people/bbs/issues/21
RedialPacketConn is adapted from clientPacketConn in
https://dip.torproject.org/dcf/obfs4/blob/c64a61c6da3bf1c2f98221bb1e1af8a35…
https://github.com/net4people/bbs/issues/14#issuecomment-544747519
---
common/encapsulation/encapsulation.go | 194 +++++++++++++++++
common/encapsulation/encapsulation_test.go | 330 +++++++++++++++++++++++++++++
common/turbotunnel/clientid.go | 28 +++
common/turbotunnel/clientmap.go | 144 +++++++++++++
common/turbotunnel/consts.go | 13 ++
common/turbotunnel/queuepacketconn.go | 137 ++++++++++++
common/turbotunnel/redialpacketconn.go | 204 ++++++++++++++++++
7 files changed, 1050 insertions(+)
diff --git a/common/encapsulation/encapsulation.go b/common/encapsulation/encapsulation.go
new file mode 100644
index 0000000..bfe9b5b
--- /dev/null
+++ b/common/encapsulation/encapsulation.go
@@ -0,0 +1,194 @@
+// Package encapsulation implements a way of encoding variable-size chunks of
+// data and padding into a byte stream.
+//
+// Each chunk of data or padding starts with a variable-size length prefix. One
+// bit ("d") in the first byte of the prefix indicates whether the chunk
+// represents data or padding (1=data, 0=padding). Another bit ("c" for
+// "continuation") indicates whether there are more bytes in the length
+// prefix. The remaining 6 bits ("x") encode part of the length value.
+// dcxxxxxx
+// If the continuation bit is set, then the next byte is also part of the length
+// prefix. It lacks the "d" bit, has its own "c" bit, and 7 value-carrying bits
+// ("y").
+// cyyyyyyy
+// The length is decoded by concatenating the value-carrying bits, from left to
+// right, of all prefix bytes, up to and including the first byte whose
+// "c" bit is 0. Although in principle this encoding would allow for length
+// prefixes of any size, length prefixes are arbitrarily limited to 3 bytes and
+// any attempt to read or write a longer one is an error. These are therefore
+// the only valid formats:
+// 00xxxxxx xxxxxx₂ bytes of padding
+// 10xxxxxx xxxxxx₂ bytes of data
+// 01xxxxxx 0yyyyyyy xxxxxxyyyyyyy₂ bytes of padding
+// 11xxxxxx 0yyyyyyy xxxxxxyyyyyyy₂ bytes of data
+// 01xxxxxx 1yyyyyyy 0zzzzzzz xxxxxxyyyyyyyzzzzzzz₂ bytes of padding
+// 11xxxxxx 1yyyyyyy 0zzzzzzz xxxxxxyyyyyyyzzzzzzz₂ bytes of data
+// The maximum encodable length is 11111111111111111111₂ = 0xfffff = 1048575.
+// There is no requirement to use a length prefix of minimum size; i.e. 00000100
+// and 01000000 00000100 are both valid encodings of the value 4.
+//
+// After the length prefix follow that many bytes of padding or data. There are
+// no restrictions on the value of bytes comprising padding.
+//
+// The idea for this encapsulation is sketched here:
+// https://github.com/net4people/bbs/issues/9#issuecomment-524095186
+package encapsulation
+
+import (
+ "errors"
+ "io"
+ "io/ioutil"
+)
+
+// ErrTooLong is the error returned when an encoded length prefix is longer than
+// 3 bytes, or when ReadData receives an input whose length is too large to
+// encode in a 3-byte length prefix.
+var ErrTooLong = errors.New("length prefix is too long")
+
+// ReadData returns a new slice with the contents of the next available data
+// chunk, skipping over any padding chunks that may come first. The returned
+// error value is nil if and only if a data chunk was present and was read in
+// its entirety. The returned error is io.EOF only if r ended before the first
+// byte of a length prefix. If r ended in the middle of a length prefix or
+// data/padding, the returned error is io.ErrUnexpectedEOF.
+func ReadData(r io.Reader) ([]byte, error) {
+ for {
+ var b [1]byte
+ _, err := r.Read(b[:])
+ if err != nil {
+ // This is the only place we may return a real io.EOF.
+ return nil, err
+ }
+ isData := (b[0] & 0x80) != 0
+ moreLength := (b[0] & 0x40) != 0
+ n := int(b[0] & 0x3f)
+ for i := 0; moreLength; i++ {
+ if i >= 2 {
+ return nil, ErrTooLong
+ }
+ _, err := r.Read(b[:])
+ if err == io.EOF {
+ err = io.ErrUnexpectedEOF
+ }
+ if err != nil {
+ return nil, err
+ }
+ moreLength = (b[0] & 0x80) != 0
+ n = (n << 7) | int(b[0]&0x7f)
+ }
+ if isData {
+ p := make([]byte, n)
+ _, err := io.ReadFull(r, p)
+ if err == io.EOF {
+ err = io.ErrUnexpectedEOF
+ }
+ if err != nil {
+ return nil, err
+ }
+ return p, err
+ } else {
+ _, err := io.CopyN(ioutil.Discard, r, int64(n))
+ if err == io.EOF {
+ err = io.ErrUnexpectedEOF
+ }
+ if err != nil {
+ return nil, err
+ }
+ }
+ }
+}
+
+// dataPrefixForLength returns a length prefix for the given length, with the
+// "d" bit set to 1.
+func dataPrefixForLength(n int) ([]byte, error) {
+ switch {
+ case (n>>0)&0x3f == (n >> 0):
+ return []byte{0x80 | byte((n>>0)&0x3f)}, nil
+ case (n>>7)&0x3f == (n >> 7):
+ return []byte{0xc0 | byte((n>>7)&0x3f), byte((n >> 0) & 0x7f)}, nil
+ case (n>>14)&0x3f == (n >> 14):
+ return []byte{0xc0 | byte((n>>14)&0x3f), 0x80 | byte((n>>7)&0x7f), byte((n >> 0) & 0x7f)}, nil
+ default:
+ return nil, ErrTooLong
+ }
+}
+
+// WriteData encodes a data chunk into w. It returns the total number of bytes
+// written; i.e., including the length prefix. The error is ErrTooLong if the
+// length of data cannot fit into a length prefix.
+func WriteData(w io.Writer, data []byte) (int, error) {
+ prefix, err := dataPrefixForLength(len(data))
+ if err != nil {
+ return 0, err
+ }
+ total := 0
+ n, err := w.Write(prefix)
+ total += n
+ if err != nil {
+ return total, err
+ }
+ n, err = w.Write(data)
+ total += n
+ return total, err
+}
+
+var paddingBuffer = make([]byte, 1024)
+
+// WritePadding encodes padding chunks, whose total size (including their own
+// length prefixes) is n. Returns the total number of bytes written to w, which
+// will be exactly n unless there was an error. The error cannot be ErrTooLong
+// because this function will write multiple padding chunks if necessary to
+// reach the requested size. Panics if n is negative.
+func WritePadding(w io.Writer, n int) (int, error) {
+ if n < 0 {
+ panic("negative length")
+ }
+ total := 0
+ for n > 0 {
+ p := len(paddingBuffer)
+ if p > n {
+ p = n
+ }
+ n -= p
+ var prefix []byte
+ switch {
+ case ((p-1)>>0)&0x3f == ((p - 1) >> 0):
+ p = p - 1
+ prefix = []byte{byte((p >> 0) & 0x3f)}
+ case ((p-2)>>7)&0x3f == ((p - 2) >> 7):
+ p = p - 2
+ prefix = []byte{0x40 | byte((p>>7)&0x3f), byte((p >> 0) & 0x7f)}
+ case ((p-3)>>14)&0x3f == ((p - 3) >> 14):
+ p = p - 3
+ prefix = []byte{0x40 | byte((p>>14)&0x3f), 0x80 | byte((p>>7)&0x7f), byte((p >> 0) & 0x7f)}
+ }
+ nn, err := w.Write(prefix)
+ total += nn
+ if err != nil {
+ return total, err
+ }
+ nn, err = w.Write(paddingBuffer[:p])
+ total += nn
+ if err != nil {
+ return total, err
+ }
+ }
+ return total, nil
+}
+
+// MaxDataForSize returns the length of the longest slice that can be passed to
+// WriteData, whose total encoded size (including length prefix) is no larger
+// than n. Call this to find out if a chunk of data will fit into a length
+// budget. Panics if n == 0.
+func MaxDataForSize(n int) int {
+ if n == 0 {
+ panic("zero length")
+ }
+ prefix, err := dataPrefixForLength(n)
+ if err == ErrTooLong {
+ return (1 << (6 + 7 + 7)) - 1 - 3
+ } else if err != nil {
+ panic(err)
+ }
+ return n - len(prefix)
+}
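The length-prefix scheme in the package comment above can be worked through with a small, independent decoder. This sketch re-implements only the prefix parsing described there (it is not the package's own API) and shows two decodings: a one-byte data prefix and a two-byte padding prefix:

```go
package main

import (
	"errors"
	"fmt"
)

// parsePrefix decodes a length prefix per the documented scheme: bit "d" of
// the first byte selects data vs padding, bit "c" marks continuation, and the
// value bits of up to 3 prefix bytes are concatenated left to right.
func parsePrefix(b []byte) (isData bool, n int, consumed int, err error) {
	if len(b) == 0 {
		return false, 0, 0, errors.New("empty input")
	}
	isData = b[0]&0x80 != 0
	more := b[0]&0x40 != 0
	n = int(b[0] & 0x3f)
	consumed = 1
	for more {
		if consumed >= 3 {
			// Prefixes are arbitrarily limited to 3 bytes.
			return false, 0, 0, errors.New("length prefix is too long")
		}
		if consumed >= len(b) {
			return false, 0, 0, errors.New("truncated prefix")
		}
		c := b[consumed]
		consumed++
		more = c&0x80 != 0
		n = n<<7 | int(c&0x7f)
	}
	return isData, n, consumed, nil
}

func main() {
	// 0x82 = 10000010₂: data chunk, 2 bytes, 1 prefix byte.
	// 0x41 0x05 = 01000001 00000101₂: padding, (1<<7)|5 = 133 bytes, 2 prefix bytes.
	for _, p := range [][]byte{{0x82}, {0x41, 0x05}} {
		isData, n, k, _ := parsePrefix(p)
		fmt.Println(isData, n, k)
	}
}
```

A fourth prefix byte, as in the TestReadLimits case below, makes parsePrefix return the too-long error regardless of the encoded value.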
diff --git a/common/encapsulation/encapsulation_test.go b/common/encapsulation/encapsulation_test.go
new file mode 100644
index 0000000..333abb4
--- /dev/null
+++ b/common/encapsulation/encapsulation_test.go
@@ -0,0 +1,330 @@
+package encapsulation
+
+import (
+ "bytes"
+ "io"
+ "math/rand"
+ "testing"
+)
+
+// Return a byte slice with non-trivial contents.
+func pseudorandomBuffer(n int) []byte {
+ source := rand.NewSource(0)
+ p := make([]byte, n)
+ for i := 0; i < len(p); i++ {
+ p[i] = byte(source.Int63() & 0xff)
+ }
+ return p
+}
+
+func mustWriteData(w io.Writer, p []byte) int {
+ n, err := WriteData(w, p)
+ if err != nil {
+ panic(err)
+ }
+ return n
+}
+
+func mustWritePadding(w io.Writer, n int) int {
+ n, err := WritePadding(w, n)
+ if err != nil {
+ panic(err)
+ }
+ return n
+}
+
+// Test that ReadData(WriteData()) recovers the original data.
+func TestRoundtrip(t *testing.T) {
+ // Test above and below interesting thresholds.
+ for _, i := range []int{
+ 0x00, 0x01,
+ 0x3e, 0x3f, 0x40, 0x41,
+ 0xfe, 0xff, 0x100, 0x101,
+ 0x1ffe, 0x1fff, 0x2000, 0x2001,
+ 0xfffe, 0xffff, 0x10000, 0x10001,
+ 0xffffe, 0xfffff,
+ } {
+ original := pseudorandomBuffer(i)
+ var enc bytes.Buffer
+ n, err := WriteData(&enc, original)
+ if err != nil {
+ t.Fatalf("size %d, WriteData returned error %v", i, err)
+ }
+ if enc.Len() != n {
+ t.Fatalf("size %d, returned length was %d, written length was %d",
+ i, n, enc.Len())
+ }
+ inverse, err := ReadData(&enc)
+ if err != nil {
+ t.Fatalf("size %d, ReadData returned error %v", i, err)
+ }
+ if !bytes.Equal(inverse, original) {
+ t.Fatalf("size %d, got <%x>, expected <%x>", i, inverse, original)
+ }
+ }
+}
+
+// Test that WritePadding writes exactly as much as requested.
+func TestPaddingLength(t *testing.T) {
+ // Test above and below interesting thresholds. WritePadding also gets
+ // values above 0xfffff, the maximum value of a single length prefix.
+ for _, i := range []int{
+ 0x00, 0x01,
+ 0x3f, 0x40, 0x41, 0x42,
+ 0xff, 0x100, 0x101, 0x102,
+ 0x2000, 0x2001, 0x2002, 0x2003,
+ 0x10000, 0x10001, 0x10002, 0x10003,
+ 0x100001, 0x100002, 0x100003, 0x100004,
+ } {
+ var enc bytes.Buffer
+ n, err := WritePadding(&enc, i)
+ if err != nil {
+ t.Fatalf("size %d, WritePadding returned error %v", i, err)
+ }
+ if n != i {
+ t.Fatalf("requested %d bytes, returned %d", i, n)
+ }
+ if enc.Len() != n {
+ t.Fatalf("requested %d bytes, wrote %d bytes", i, enc.Len())
+ }
+ }
+}
+
+// Test that ReadData skips over padding.
+func TestSkipPadding(t *testing.T) {
+ var data = [][]byte{{}, {}, []byte("hello"), {}, []byte("world")}
+ var enc bytes.Buffer
+ mustWritePadding(&enc, 10)
+ mustWritePadding(&enc, 100)
+ mustWriteData(&enc, data[0])
+ mustWriteData(&enc, data[1])
+ mustWritePadding(&enc, 10)
+ mustWriteData(&enc, data[2])
+ mustWriteData(&enc, data[3])
+ mustWritePadding(&enc, 10)
+ mustWriteData(&enc, data[4])
+ mustWritePadding(&enc, 10)
+ mustWritePadding(&enc, 10)
+ for i, expected := range data {
+ actual, err := ReadData(&enc)
+ if err != nil {
+ t.Fatalf("slice %d, got error %v, expected %v", i, err, nil)
+ }
+ if !bytes.Equal(actual, expected) {
+ t.Fatalf("slice %d, got <%x>, expected <%x>", i, actual, expected)
+ }
+ }
+ p, err := ReadData(&enc)
+ if p != nil || err != io.EOF {
+ t.Fatalf("got (<%x>, %v), expected (%v, %v)", p, err, nil, io.EOF)
+ }
+}
+
+// Test that EOF before a length prefix returns io.EOF.
+func TestEOF(t *testing.T) {
+ p, err := ReadData(bytes.NewReader(nil))
+ if p != nil || err != io.EOF {
+ t.Fatalf("got (<%x>, %v), expected (%v, %v)", p, err, nil, io.EOF)
+ }
+}
+
+// Test that an EOF while reading a length prefix, or while reading the
+// subsequent data/padding, returns io.ErrUnexpectedEOF.
+func TestUnexpectedEOF(t *testing.T) {
+ for _, test := range [][]byte{
+ {0x40}, // expecting a second length byte
+ {0xc0}, // expecting a second length byte
+ {0x41, 0x80}, // expecting a third length byte
+ {0xc1, 0x80}, // expecting a third length byte
+ {0x02}, // expecting 2 bytes of padding
+ {0x82}, // expecting 2 bytes of data
+ {0x02, 'X'}, // expecting 1 byte of padding
+ {0x82, 'X'}, // expecting 1 byte of data
+ {0x41, 0x00}, // expecting 128 bytes of padding
+ {0xc1, 0x00}, // expecting 128 bytes of data
+ {0x41, 0x00, 'X'}, // expecting 127 bytes of padding
+ {0xc1, 0x00, 'X'}, // expecting 127 bytes of data
+ {0x41, 0x80, 0x00}, // expecting 32768 bytes of padding
+ {0xc1, 0x80, 0x00}, // expecting 32768 bytes of data
+ {0x41, 0x80, 0x00, 'X'}, // expecting 32767 bytes of padding
+ {0xc1, 0x80, 0x00, 'X'}, // expecting 32767 bytes of data
+ } {
+ p, err := ReadData(bytes.NewReader(test))
+ if p != nil || err != io.ErrUnexpectedEOF {
+ t.Fatalf("<%x> got (<%x>, %v), expected (%v, %v)", test, p, err, nil, io.ErrUnexpectedEOF)
+ }
+ }
+}
+
+// Test that length encodings that are longer than they could be are still
+// interpreted.
+func TestNonMinimalLengthEncoding(t *testing.T) {
+ for _, test := range []struct {
+ enc []byte
+ expected []byte
+ }{
+ {[]byte{0x81, 'X'}, []byte("X")},
+ {[]byte{0xc0, 0x01, 'X'}, []byte("X")},
+ {[]byte{0xc0, 0x80, 0x01, 'X'}, []byte("X")},
+ } {
+ p, err := ReadData(bytes.NewReader(test.enc))
+ if err != nil {
+ t.Fatalf("<%x> got error %v, expected %v", test.enc, err, nil)
+ }
+ if !bytes.Equal(p, test.expected) {
+ t.Fatalf("<%x> got <%x>, expected <%x>", test.enc, p, test.expected)
+ }
+ }
+}
+
+// Test that ReadData only reads up to 3 bytes of length prefix.
+func TestReadLimits(t *testing.T) {
+ // Test the maximum length that's possible with 3 bytes of length
+ // prefix.
+ maxLength := (0x3f << 14) | (0x7f << 7) | 0x7f
+ data := bytes.Repeat([]byte{'X'}, maxLength)
+ prefix := []byte{0xff, 0xff, 0x7f} // encodes 0xfffff
+ p, err := ReadData(bytes.NewReader(append(prefix, data...)))
+ if err != nil {
+ t.Fatalf("got error %v, expected %v", err, nil)
+ }
+ if !bytes.Equal(p, data) {
+ t.Fatalf("got %d bytes unequal to %d bytes", len(p), len(data))
+ }
+ // Test a 4-byte prefix.
+ prefix = []byte{0xc0, 0xc0, 0x80, 0x80} // encodes 0x100000
+ data = bytes.Repeat([]byte{'X'}, maxLength+1)
+ p, err = ReadData(bytes.NewReader(append(prefix, data...)))
+ if p != nil || err != ErrTooLong {
+ t.Fatalf("got (<%x>, %v), expected (%v, %v)", p, err, nil, ErrTooLong)
+ }
+ // Test that 4 bytes don't work, even when they encode an integer that
+	// would fit in 3 bytes.
+ prefix = []byte{0xc0, 0x80, 0x80, 0x80} // encodes 0x0
+ data = []byte{}
+ p, err = ReadData(bytes.NewReader(append(prefix, data...)))
+ if p != nil || err != ErrTooLong {
+ t.Fatalf("got (<%x>, %v), expected (%v, %v)", p, err, nil, ErrTooLong)
+ }
+
+ // Do the same tests with padding lengths.
+ data = []byte("hello")
+ prefix = []byte{0x7f, 0xff, 0x7f} // encodes 0xfffff
+ padding := bytes.Repeat([]byte{'X'}, maxLength)
+ enc := bytes.NewBuffer(append(prefix, padding...))
+ mustWriteData(enc, data)
+ p, err = ReadData(enc)
+ if err != nil {
+ t.Fatalf("got error %v, expected %v", err, nil)
+ }
+ if !bytes.Equal(p, data) {
+ t.Fatalf("got <%x>, expected <%x>", p, data)
+ }
+ prefix = []byte{0x40, 0xc0, 0x80, 0x80} // encodes 0x100000
+ padding = bytes.Repeat([]byte{'X'}, maxLength+1)
+ enc = bytes.NewBuffer(append(prefix, padding...))
+ mustWriteData(enc, data)
+ p, err = ReadData(enc)
+ if p != nil || err != ErrTooLong {
+ t.Fatalf("got (<%x>, %v), expected (%v, %v)", p, err, nil, ErrTooLong)
+ }
+ prefix = []byte{0x40, 0x80, 0x80, 0x80} // encodes 0x0
+ padding = []byte{}
+ enc = bytes.NewBuffer(append(prefix, padding...))
+ mustWriteData(enc, data)
+ p, err = ReadData(enc)
+ if p != nil || err != ErrTooLong {
+ t.Fatalf("got (<%x>, %v), expected (%v, %v)", p, err, nil, ErrTooLong)
+ }
+}
+
+// Test that WriteData and WritePadding only accept lengths that can be encoded
+// in up to 3 bytes of length prefix.
+func TestWriteLimits(t *testing.T) {
+ maxLength := (0x3f << 14) | (0x7f << 7) | 0x7f
+ var enc bytes.Buffer
+ n, err := WriteData(&enc, bytes.Repeat([]byte{'X'}, maxLength))
+ if n != maxLength+3 || err != nil {
+ t.Fatalf("got (%d, %v), expected (%d, %v)", n, err, maxLength, nil)
+ }
+ enc.Reset()
+ n, err = WriteData(&enc, bytes.Repeat([]byte{'X'}, maxLength+1))
+ if n != 0 || err != ErrTooLong {
+ t.Fatalf("got (%d, %v), expected (%d, %v)", n, err, 0, ErrTooLong)
+ }
+
+ // Padding gets an extra 3 bytes because the prefix is counted as part
+ // of the length.
+ enc.Reset()
+ n, err = WritePadding(&enc, maxLength+3)
+ if n != maxLength+3 || err != nil {
+ t.Fatalf("got (%d, %v), expected (%d, %v)", n, err, maxLength+3, nil)
+ }
+ // Writing a too-long padding is okay because WritePadding will break it
+ // into smaller chunks.
+ enc.Reset()
+ n, err = WritePadding(&enc, maxLength+4)
+ if n != maxLength+4 || err != nil {
+ t.Fatalf("got (%d, %v), expected (%d, %v)", n, err, maxLength+4, nil)
+ }
+}
+
+// Test that WritePadding panics when given a negative length.
+func TestNegativeLength(t *testing.T) {
+ for _, n := range []int{-1, ^0} {
+ var enc bytes.Buffer
+ panicked, nn, err := testNegativeLengthSub(t, &enc, n)
+ if !panicked {
+ t.Fatalf("WritePadding(%d) returned (%d, %v) instead of panicking", n, nn, err)
+ }
+ }
+}
+
+// Calls WritePadding(w, n) and augments the return value with a flag indicating
+// whether the call panicked.
+func testNegativeLengthSub(t *testing.T, w io.Writer, n int) (panicked bool, nn int, err error) {
+ defer func() {
+ if r := recover(); r != nil {
+ panicked = true
+ }
+ }()
+ t.Helper()
+ nn, err = WritePadding(w, n)
+	return false, nn, err
+}
+
+// Test that MaxDataForSize panics when given a 0 length.
+func TestMaxDataForSizeZero(t *testing.T) {
+ defer func() {
+ if r := recover(); r == nil {
+ t.Fatal("didn't panic")
+ }
+ }()
+ MaxDataForSize(0)
+}
+
+// Test thresholds of available sizes for MaxDataForSize.
+func TestMaxDataForSize(t *testing.T) {
+ for _, test := range []struct {
+ size int
+ expected int
+ }{
+ {0x01, 0x00},
+ {0x02, 0x01},
+ {0x3f, 0x3e},
+ {0x40, 0x3e},
+ {0x41, 0x3f},
+ {0x1fff, 0x1ffd},
+ {0x2000, 0x1ffd},
+ {0x2001, 0x1ffe},
+ {0xfffff, 0xffffc},
+ {0x100000, 0xffffc},
+ {0x100001, 0xffffc},
+ {0x7fffffff, 0xffffc},
+ } {
+ max := MaxDataForSize(test.size)
+ if max != test.expected {
+ t.Fatalf("size %d, got %d, expected %d", test.size, max, test.expected)
+ }
+ }
+}
diff --git a/common/turbotunnel/clientid.go b/common/turbotunnel/clientid.go
new file mode 100644
index 0000000..17257e1
--- /dev/null
+++ b/common/turbotunnel/clientid.go
@@ -0,0 +1,28 @@
+package turbotunnel
+
+import (
+ "crypto/rand"
+ "encoding/hex"
+)
+
+// ClientID is an abstract identifier that binds together all the communications
+// belonging to a single client session, even though those communications may
+// arrive from multiple IP addresses or over multiple lower-level connections.
+// It plays the same role that an (IP address, port number) tuple plays in a
+// net.UDPConn: it's the return address pertaining to a long-lived abstract
+// client session. The client attaches its ClientID to each of its
+// communications, enabling the server to disambiguate requests among its many
+// clients. ClientID implements the net.Addr interface.
+type ClientID [8]byte
+
+func NewClientID() ClientID {
+ var id ClientID
+ _, err := rand.Read(id[:])
+ if err != nil {
+ panic(err)
+ }
+ return id
+}
+
+func (id ClientID) Network() string { return "clientid" }
+func (id ClientID) String() string { return hex.EncodeToString(id[:]) }
diff --git a/common/turbotunnel/clientmap.go b/common/turbotunnel/clientmap.go
new file mode 100644
index 0000000..fa12915
--- /dev/null
+++ b/common/turbotunnel/clientmap.go
@@ -0,0 +1,144 @@
+package turbotunnel
+
+import (
+ "container/heap"
+ "net"
+ "sync"
+ "time"
+)
+
+// clientRecord is a record of a recently seen client, with the time it was last
+// seen and a send queue.
+type clientRecord struct {
+ Addr net.Addr
+ LastSeen time.Time
+ SendQueue chan []byte
+}
+
+// ClientMap manages a mapping of live clients (keyed by address, which will be
+// a ClientID) to their respective send queues. ClientMap's functions are safe
+// to call from multiple goroutines.
+type ClientMap struct {
+ // We use an inner structure to avoid exposing public heap.Interface
+	// functions to users of ClientMap.
+ inner clientMapInner
+ // Synchronizes access to inner.
+ lock sync.Mutex
+}
+
+// NewClientMap creates a ClientMap that expires clients after a timeout.
+//
+// The timeout does not have to be kept in sync with QUIC's internal idle
+// timeout. If a client is removed from the client map while the QUIC session is
+// still live, the worst that can happen is a loss of whatever packets were in
+// the send queue at the time. If QUIC later decides to send more packets to the
+// same client, we'll instantiate a new send queue, and if the client ever
+// connects again with the proper client ID, we'll deliver them.
+func NewClientMap(timeout time.Duration) *ClientMap {
+ m := &ClientMap{
+ inner: clientMapInner{
+ byAge: make([]*clientRecord, 0),
+ byAddr: make(map[net.Addr]int),
+ },
+ }
+ go func() {
+ for {
+ time.Sleep(timeout / 2)
+ now := time.Now()
+ m.lock.Lock()
+ m.inner.removeExpired(now, timeout)
+ m.lock.Unlock()
+ }
+ }()
+ return m
+}
+
+// SendQueue returns the send queue corresponding to addr, creating it if
+// necessary.
+func (m *ClientMap) SendQueue(addr net.Addr) chan []byte {
+ m.lock.Lock()
+ defer m.lock.Unlock()
+ return m.inner.SendQueue(addr, time.Now())
+}
+
+// clientMapInner is the inner type of ClientMap, implementing heap.Interface.
+// byAge is the backing store, a heap ordered by LastSeen time, to facilitate
+// expiring old client records. byAddr is a map from addresses (i.e., ClientIDs)
+// to heap indices, to allow looking up by address. Unlike ClientMap,
+// clientMapInner requires external synchronization.
+type clientMapInner struct {
+ byAge []*clientRecord
+ byAddr map[net.Addr]int
+}
+
+// removeExpired removes all client records whose LastSeen timestamp is more
+// than timeout in the past.
+func (inner *clientMapInner) removeExpired(now time.Time, timeout time.Duration) {
+ for len(inner.byAge) > 0 && now.Sub(inner.byAge[0].LastSeen) >= timeout {
+ heap.Pop(inner)
+ }
+}
+
+// SendQueue finds the existing client record corresponding to addr, or creates
+// a new one if none exists yet. It updates the client record's LastSeen time
+// and returns its SendQueue.
+func (inner *clientMapInner) SendQueue(addr net.Addr, now time.Time) chan []byte {
+ var record *clientRecord
+ i, ok := inner.byAddr[addr]
+ if ok {
+ // Found one, update its LastSeen.
+ record = inner.byAge[i]
+ record.LastSeen = now
+ heap.Fix(inner, i)
+ } else {
+ // Not found, create a new one.
+ record = &clientRecord{
+ Addr: addr,
+ LastSeen: now,
+ SendQueue: make(chan []byte, queueSize),
+ }
+ heap.Push(inner, record)
+ }
+ return record.SendQueue
+}
+
+// heap.Interface for clientMapInner.
+
+func (inner *clientMapInner) Len() int {
+ if len(inner.byAge) != len(inner.byAddr) {
+ panic("inconsistent clientMap")
+ }
+ return len(inner.byAge)
+}
+
+func (inner *clientMapInner) Less(i, j int) bool {
+ return inner.byAge[i].LastSeen.Before(inner.byAge[j].LastSeen)
+}
+
+func (inner *clientMapInner) Swap(i, j int) {
+ inner.byAge[i], inner.byAge[j] = inner.byAge[j], inner.byAge[i]
+ inner.byAddr[inner.byAge[i].Addr] = i
+ inner.byAddr[inner.byAge[j].Addr] = j
+}
+
+func (inner *clientMapInner) Push(x interface{}) {
+ record := x.(*clientRecord)
+ if _, ok := inner.byAddr[record.Addr]; ok {
+ panic("duplicate address in clientMap")
+ }
+ // Insert into byAddr map.
+ inner.byAddr[record.Addr] = len(inner.byAge)
+ // Insert into byAge slice.
+ inner.byAge = append(inner.byAge, record)
+}
+
+func (inner *clientMapInner) Pop() interface{} {
+ n := len(inner.byAddr)
+ // Remove from byAge slice.
+ record := inner.byAge[n-1]
+ inner.byAge[n-1] = nil
+ inner.byAge = inner.byAge[:n-1]
+ // Remove from byAddr map.
+ delete(inner.byAddr, record.Addr)
+ return record
+}
diff --git a/common/turbotunnel/consts.go b/common/turbotunnel/consts.go
new file mode 100644
index 0000000..4699d1d
--- /dev/null
+++ b/common/turbotunnel/consts.go
@@ -0,0 +1,13 @@
+// Package turbotunnel provides support for overlaying a virtual net.PacketConn
+// on some other network carrier.
+//
+// https://github.com/net4people/bbs/issues/9
+package turbotunnel
+
+import "errors"
+
+// The size of receive and send queues.
+const queueSize = 32
+
+var errClosedPacketConn = errors.New("operation on closed connection")
+var errNotImplemented = errors.New("not implemented")
diff --git a/common/turbotunnel/queuepacketconn.go b/common/turbotunnel/queuepacketconn.go
new file mode 100644
index 0000000..14a9833
--- /dev/null
+++ b/common/turbotunnel/queuepacketconn.go
@@ -0,0 +1,137 @@
+package turbotunnel
+
+import (
+ "net"
+ "sync"
+ "sync/atomic"
+ "time"
+)
+
+// taggedPacket is a combination of a []byte and a net.Addr, encapsulating the
+// return type of PacketConn.ReadFrom.
+type taggedPacket struct {
+ P []byte
+ Addr net.Addr
+}
+
+// QueuePacketConn implements net.PacketConn by storing queues of packets. There
+// is one incoming queue (where packets are additionally tagged by the source
+// address of the client that sent them). There are many outgoing queues, one
+// for each client address that has been recently seen. The QueueIncoming method
+// inserts a packet into the incoming queue, to eventually be returned by
+// ReadFrom. WriteTo inserts a packet into an address-specific outgoing queue,
+// which can later be accessed through the OutgoingQueue method.
+type QueuePacketConn struct {
+ clients *ClientMap
+ localAddr net.Addr
+ recvQueue chan taggedPacket
+ closeOnce sync.Once
+ closed chan struct{}
+ // What error to return when the QueuePacketConn is closed.
+ err atomic.Value
+}
+
+// NewQueuePacketConn makes a new QueuePacketConn, set to track recent clients
+// for at least a duration of timeout.
+func NewQueuePacketConn(localAddr net.Addr, timeout time.Duration) *QueuePacketConn {
+ return &QueuePacketConn{
+ clients: NewClientMap(timeout),
+ localAddr: localAddr,
+ recvQueue: make(chan taggedPacket, queueSize),
+ closed: make(chan struct{}),
+ }
+}
+
+// QueueIncoming queues an incoming packet and its source address, to be
+// returned in a future call to ReadFrom.
+func (c *QueuePacketConn) QueueIncoming(p []byte, addr net.Addr) {
+ select {
+ case <-c.closed:
+ // If we're closed, silently drop it.
+ return
+ default:
+ }
+ // Copy the slice so that the caller may reuse it.
+ buf := make([]byte, len(p))
+ copy(buf, p)
+ select {
+ case c.recvQueue <- taggedPacket{buf, addr}:
+ default:
+ // Drop the incoming packet if the receive queue is full.
+ }
+}
+
+// OutgoingQueue returns the queue of outgoing packets corresponding to addr,
+// creating it if necessary. The contents of the queue will be packets that are
+// written to the address in question using WriteTo.
+func (c *QueuePacketConn) OutgoingQueue(addr net.Addr) <-chan []byte {
+ return c.clients.SendQueue(addr)
+}
+
+// ReadFrom returns a packet and address previously stored by QueueIncoming.
+func (c *QueuePacketConn) ReadFrom(p []byte) (int, net.Addr, error) {
+ select {
+ case <-c.closed:
+ return 0, nil, &net.OpError{Op: "read", Net: c.LocalAddr().Network(), Addr: c.LocalAddr(), Err: c.err.Load().(error)}
+ default:
+ }
+ select {
+ case <-c.closed:
+ return 0, nil, &net.OpError{Op: "read", Net: c.LocalAddr().Network(), Addr: c.LocalAddr(), Err: c.err.Load().(error)}
+ case packet := <-c.recvQueue:
+ return copy(p, packet.P), packet.Addr, nil
+ }
+}
+
+// WriteTo queues an outgoing packet for the given address. The queue can later
+// be retrieved using the OutgoingQueue method.
+func (c *QueuePacketConn) WriteTo(p []byte, addr net.Addr) (int, error) {
+ select {
+ case <-c.closed:
+ return 0, &net.OpError{Op: "write", Net: c.LocalAddr().Network(), Addr: c.LocalAddr(), Err: c.err.Load().(error)}
+ default:
+ }
+ // Copy the slice so that the caller may reuse it.
+ buf := make([]byte, len(p))
+ copy(buf, p)
+ select {
+ case c.clients.SendQueue(addr) <- buf:
+ return len(buf), nil
+ default:
+ // Drop the outgoing packet if the send queue is full.
+ return len(buf), nil
+ }
+}
+
+// closeWithError unblocks pending operations and makes future operations fail
+// with the given error. If err is nil, it becomes errClosedPacketConn.
+func (c *QueuePacketConn) closeWithError(err error) error {
+ var newlyClosed bool
+ c.closeOnce.Do(func() {
+ newlyClosed = true
+ // Store the error to be returned by future PacketConn
+ // operations.
+ if err == nil {
+ err = errClosedPacketConn
+ }
+ c.err.Store(err)
+ close(c.closed)
+ })
+ if !newlyClosed {
+ return &net.OpError{Op: "close", Net: c.LocalAddr().Network(), Addr: c.LocalAddr(), Err: c.err.Load().(error)}
+ }
+ return nil
+}
+
+// Close unblocks pending operations and makes future operations fail with a
+// "closed connection" error.
+func (c *QueuePacketConn) Close() error {
+ return c.closeWithError(nil)
+}
+
+// LocalAddr returns the localAddr value that was passed to NewQueuePacketConn.
+func (c *QueuePacketConn) LocalAddr() net.Addr { return c.localAddr }
+
+func (c *QueuePacketConn) SetDeadline(t time.Time) error { return errNotImplemented }
+func (c *QueuePacketConn) SetReadDeadline(t time.Time) error { return errNotImplemented }
+func (c *QueuePacketConn) SetWriteDeadline(t time.Time) error { return errNotImplemented }
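QueueIncoming and WriteTo above share the same non-blocking select-with-default idiom: enqueue if there is room, otherwise silently drop, which is safe because the reliability layer (KCP) above will retransmit. The idiom in isolation (illustrative helper name, not part of the package):

```go
package main

import "fmt"

// trySend enqueues p without blocking and reports whether the packet
// was queued or dropped.
func trySend(queue chan []byte, p []byte) bool {
	// Copy the slice so that the caller may reuse it, as
	// QueuePacketConn does.
	buf := make([]byte, len(p))
	copy(buf, p)
	select {
	case queue <- buf:
		return true
	default:
		// Queue full: drop rather than block the caller.
		return false
	}
}

func main() {
	queue := make(chan []byte, 2)
	fmt.Println(trySend(queue, []byte("a"))) // queued
	fmt.Println(trySend(queue, []byte("b"))) // queued
	fmt.Println(trySend(queue, []byte("c"))) // queue full: dropped
}
```

Note that WriteTo reports len(buf) even on a drop, matching UDP-like semantics in which a successful send does not imply delivery.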
diff --git a/common/turbotunnel/redialpacketconn.go b/common/turbotunnel/redialpacketconn.go
new file mode 100644
index 0000000..cf6a8c9
--- /dev/null
+++ b/common/turbotunnel/redialpacketconn.go
@@ -0,0 +1,204 @@
+package turbotunnel
+
+import (
+ "context"
+ "errors"
+ "net"
+ "sync"
+ "sync/atomic"
+ "time"
+)
+
+// RedialPacketConn implements a long-lived net.PacketConn atop a sequence of
+// other, transient net.PacketConns. RedialPacketConn creates a new
+// net.PacketConn by calling a provided dialContext function. Whenever the
+// net.PacketConn experiences a ReadFrom or WriteTo error, RedialPacketConn
+// calls the dialContext function again and starts sending and receiving packets
+// on the new net.PacketConn. RedialPacketConn's own ReadFrom and WriteTo
+// methods return an error only when the dialContext function returns an error.
+//
+// RedialPacketConn uses static local and remote addresses that are independent
+// of those of any dialed net.PacketConn.
+type RedialPacketConn struct {
+ localAddr net.Addr
+ remoteAddr net.Addr
+ dialContext func(context.Context) (net.PacketConn, error)
+ recvQueue chan []byte
+ sendQueue chan []byte
+ closed chan struct{}
+ closeOnce sync.Once
+	// The first dial error, which causes the RedialPacketConn to be
+ // closed and is returned from future read/write operations. Compare to
+ // the rerr and werr in io.Pipe.
+ err atomic.Value
+}
+
+// NewRedialPacketConn makes a new RedialPacketConn, with the given static local
+// and remote addresses, and dialContext function.
+func NewRedialPacketConn(
+ localAddr, remoteAddr net.Addr,
+ dialContext func(context.Context) (net.PacketConn, error),
+) *RedialPacketConn {
+ c := &RedialPacketConn{
+ localAddr: localAddr,
+ remoteAddr: remoteAddr,
+ dialContext: dialContext,
+ recvQueue: make(chan []byte, queueSize),
+ sendQueue: make(chan []byte, queueSize),
+ closed: make(chan struct{}),
+ err: atomic.Value{},
+ }
+ go c.dialLoop()
+ return c
+}
+
+// dialLoop repeatedly calls c.dialContext and passes the resulting
+// net.PacketConn to c.exchange. It returns only when c is closed or dialContext
+// returns an error.
+func (c *RedialPacketConn) dialLoop() {
+ ctx, cancel := context.WithCancel(context.Background())
+ for {
+ select {
+ case <-c.closed:
+ cancel()
+ return
+ default:
+ }
+ conn, err := c.dialContext(ctx)
+ if err != nil {
+ c.closeWithError(err)
+ cancel()
+ return
+ }
+ c.exchange(conn)
+ conn.Close()
+ }
+}
+
+// exchange calls ReadFrom on the given net.PacketConn and places the resulting
+// packets in the receive queue, and takes packets from the send queue and calls
+// WriteTo on them, making the current net.PacketConn active.
+func (c *RedialPacketConn) exchange(conn net.PacketConn) {
+ readErrCh := make(chan error)
+ writeErrCh := make(chan error)
+
+ go func() {
+ defer close(readErrCh)
+ for {
+ select {
+ case <-c.closed:
+ return
+ case <-writeErrCh:
+ return
+ default:
+ }
+
+ var buf [1500]byte
+ n, _, err := conn.ReadFrom(buf[:])
+ if err != nil {
+ readErrCh <- err
+ return
+ }
+ p := make([]byte, n)
+ copy(p, buf[:])
+ select {
+ case c.recvQueue <- p:
+ default: // OK to drop packets.
+ }
+ }
+ }()
+
+ go func() {
+ defer close(writeErrCh)
+ for {
+ select {
+ case <-c.closed:
+ return
+ case <-readErrCh:
+ return
+ case p := <-c.sendQueue:
+ _, err := conn.WriteTo(p, c.remoteAddr)
+ if err != nil {
+ writeErrCh <- err
+ return
+ }
+ }
+ }
+ }()
+
+ select {
+ case <-readErrCh:
+ case <-writeErrCh:
+ }
+}
+
+// ReadFrom reads a packet from the currently active net.PacketConn. The
+// packet's original remote address is replaced with the RedialPacketConn's own
+// remote address.
+func (c *RedialPacketConn) ReadFrom(p []byte) (int, net.Addr, error) {
+ select {
+ case <-c.closed:
+ return 0, nil, &net.OpError{Op: "read", Net: c.LocalAddr().Network(), Source: c.LocalAddr(), Addr: c.remoteAddr, Err: c.err.Load().(error)}
+ default:
+ }
+ select {
+ case <-c.closed:
+ return 0, nil, &net.OpError{Op: "read", Net: c.LocalAddr().Network(), Source: c.LocalAddr(), Addr: c.remoteAddr, Err: c.err.Load().(error)}
+ case buf := <-c.recvQueue:
+ return copy(p, buf), c.remoteAddr, nil
+ }
+}
+
+// WriteTo writes a packet to the currently active net.PacketConn. The addr
+// argument is ignored and instead replaced with the RedialPacketConn's own
+// remote address.
+func (c *RedialPacketConn) WriteTo(p []byte, addr net.Addr) (int, error) {
+ // addr is ignored.
+ select {
+ case <-c.closed:
+ return 0, &net.OpError{Op: "write", Net: c.LocalAddr().Network(), Source: c.LocalAddr(), Addr: c.remoteAddr, Err: c.err.Load().(error)}
+ default:
+ }
+ buf := make([]byte, len(p))
+ copy(buf, p)
+ select {
+ case c.sendQueue <- buf:
+ return len(buf), nil
+ default:
+ // Drop the outgoing packet if the send queue is full.
+ return len(buf), nil
+ }
+}
+
+// closeWithError unblocks pending operations and makes future operations fail
+// with the given error. If err is nil, it becomes errClosedPacketConn.
+func (c *RedialPacketConn) closeWithError(err error) error {
+ var once bool
+ c.closeOnce.Do(func() {
+ // Store the error to be returned by future read/write
+ // operations.
+ if err == nil {
+ err = errors.New("operation on closed connection")
+ }
+ c.err.Store(err)
+ close(c.closed)
+ once = true
+ })
+ if !once {
+ return &net.OpError{Op: "close", Net: c.LocalAddr().Network(), Addr: c.LocalAddr(), Err: c.err.Load().(error)}
+ }
+ return nil
+}
+
+// Close unblocks pending operations and makes future operations fail with a
+// "closed connection" error.
+func (c *RedialPacketConn) Close() error {
+ return c.closeWithError(nil)
+}
+
+// LocalAddr returns the localAddr value that was passed to NewRedialPacketConn.
+func (c *RedialPacketConn) LocalAddr() net.Addr { return c.localAddr }
+
+func (c *RedialPacketConn) SetDeadline(t time.Time) error { return errNotImplemented }
+func (c *RedialPacketConn) SetReadDeadline(t time.Time) error { return errNotImplemented }
+func (c *RedialPacketConn) SetWriteDeadline(t time.Time) error { return errNotImplemented }
commit 70126177fbdf5b1fa4977f2fc26f624641708098
Author: David Fifield <david(a)bamsoftware.com>
Date: Tue Jan 28 02:32:02 2020 -0700
Turbo Tunnel client and server.
The client opts into turbotunnel mode by sending a magic token at the
beginning of each WebSocket connection (before sending even the
ClientID). The token is just a random byte string I generated. The
server peeks at the token and, if it matches, uses turbotunnel mode.
Otherwise, it unreads the token and continues in the old
one-session-per-WebSocket mode.
---
client/lib/snowflake.go | 100 +++++++++++++++----
client/lib/turbotunnel.go | 68 +++++++++++++
common/turbotunnel/consts.go | 4 +
go.mod | 4 +-
go.sum | 21 +++-
server/server.go | 231 +++++++++++++++++++++++++++++++++++++++++--
6 files changed, 399 insertions(+), 29 deletions(-)
diff --git a/client/lib/snowflake.go b/client/lib/snowflake.go
index 409ce14..4b7dd4d 100644
--- a/client/lib/snowflake.go
+++ b/client/lib/snowflake.go
@@ -1,11 +1,16 @@
package lib
import (
+ "context"
"errors"
"io"
"log"
"net"
"time"
+
+ "git.torproject.org/pluggable-transports/snowflake.git/common/turbotunnel"
+ "github.com/xtaci/kcp-go/v5"
+ "github.com/xtaci/smux"
)
const (
@@ -13,43 +18,98 @@ const (
SnowflakeTimeout = 30 * time.Second
)
+type dummyAddr struct{}
+
+func (addr dummyAddr) Network() string { return "dummy" }
+func (addr dummyAddr) String() string { return "dummy" }
+
// Given an accepted SOCKS connection, establish a WebRTC connection to the
// remote peer and exchange traffic.
func Handler(socks net.Conn, snowflakes SnowflakeCollector) error {
- // Obtain an available WebRTC remote. May block.
- snowflake := snowflakes.Pop()
- if nil == snowflake {
- return errors.New("handler: Received invalid Snowflake")
+ clientID := turbotunnel.NewClientID()
+
+ // We build a persistent KCP session on a sequence of ephemeral WebRTC
+ // connections. This dialContext tells RedialPacketConn how to get a new
+ // WebRTC connection when the previous one dies. Inside each WebRTC
+ // connection, we use EncapsulationPacketConn to encode packets into a
+ // stream.
+ dialContext := func(ctx context.Context) (net.PacketConn, error) {
+ log.Printf("redialing on same connection")
+ // Obtain an available WebRTC remote. May block.
+ conn := snowflakes.Pop()
+ if conn == nil {
+ return nil, errors.New("handler: Received invalid Snowflake")
+ }
+ log.Println("---- Handler: snowflake assigned ----")
+ // Send the magic Turbo Tunnel token.
+ _, err := conn.Write(turbotunnel.Token[:])
+ if err != nil {
+ return nil, err
+ }
+ // Send ClientID prefix.
+ _, err = conn.Write(clientID[:])
+ if err != nil {
+ return nil, err
+ }
+ return NewEncapsulationPacketConn(dummyAddr{}, dummyAddr{}, conn), nil
}
- defer snowflake.Close()
- log.Println("---- Handler: snowflake assigned ----")
+ pconn := turbotunnel.NewRedialPacketConn(dummyAddr{}, dummyAddr{}, dialContext)
+ defer pconn.Close()
- go func() {
- // When WebRTC resets, close the SOCKS connection too.
- snowflake.WaitForReset()
- socks.Close()
- }()
+ // conn is built on the underlying RedialPacketConn—when one WebRTC
+ // connection dies, another one will be found to take its place. The
+ // sequence of packets across multiple WebRTC connections drives the KCP
+ // engine.
+ conn, err := kcp.NewConn2(dummyAddr{}, nil, 0, 0, pconn)
+ if err != nil {
+ return err
+ }
+ defer conn.Close()
+ // Permit coalescing the payloads of consecutive sends.
+ conn.SetStreamMode(true)
+ // Disable the dynamic congestion window (limit only by the
+ // maximum of local and remote static windows).
+ conn.SetNoDelay(
+ 0, // default nodelay
+ 0, // default interval
+ 0, // default resend
+ 1, // nc=1 => congestion window off
+ )
+ // On the KCP connection we overlay an smux session and stream.
+ smuxConfig := smux.DefaultConfig()
+ smuxConfig.Version = 2
+ smuxConfig.KeepAliveTimeout = 10 * time.Minute
+ sess, err := smux.Client(conn, smuxConfig)
+ if err != nil {
+ return err
+ }
+ defer sess.Close()
+ stream, err := sess.OpenStream()
+ if err != nil {
+ return err
+ }
+ defer stream.Close()
- // Begin exchanging data. Either WebRTC or localhost SOCKS will close first.
- // In eithercase, this closes the handler and induces a new handler.
- copyLoop(socks, snowflake)
- log.Println("---- Handler: closed ---")
+ // Begin exchanging data.
+ log.Printf("---- Handler: begin stream %v ---", stream.ID())
+ copyLoop(socks, stream)
+ log.Printf("---- Handler: closed stream %v ---", stream.ID())
return nil
}
// Exchanges bytes between two ReadWriters.
-// (In this case, between a SOCKS and WebRTC connection.)
-func copyLoop(socks, webRTC io.ReadWriter) {
+// (In this case, between a SOCKS connection and smux stream.)
+func copyLoop(socks, stream io.ReadWriter) {
done := make(chan struct{}, 2)
go func() {
- if _, err := io.Copy(socks, webRTC); err != nil {
+ if _, err := io.Copy(socks, stream); err != nil {
log.Printf("copying WebRTC to SOCKS resulted in error: %v", err)
}
done <- struct{}{}
}()
go func() {
- if _, err := io.Copy(webRTC, socks); err != nil {
- log.Printf("copying SOCKS to WebRTC resulted in error: %v", err)
+ if _, err := io.Copy(stream, socks); err != nil {
+ log.Printf("copying SOCKS to stream resulted in error: %v", err)
}
done <- struct{}{}
}()
diff --git a/client/lib/turbotunnel.go b/client/lib/turbotunnel.go
new file mode 100644
index 0000000..aad2e6a
--- /dev/null
+++ b/client/lib/turbotunnel.go
@@ -0,0 +1,68 @@
+package lib
+
+import (
+ "bufio"
+ "errors"
+ "io"
+ "net"
+ "time"
+
+ "git.torproject.org/pluggable-transports/snowflake.git/common/encapsulation"
+)
+
+var errNotImplemented = errors.New("not implemented")
+
+// EncapsulationPacketConn implements the net.PacketConn interface over an
+// io.ReadWriteCloser stream, using the encapsulation package to represent
+// packets in a stream.
+type EncapsulationPacketConn struct {
+ io.ReadWriteCloser
+ localAddr net.Addr
+ remoteAddr net.Addr
+ bw *bufio.Writer
+}
+
+// NewEncapsulationPacketConn makes a new EncapsulationPacketConn over conn,
+// with the given local and remote addresses.
+func NewEncapsulationPacketConn(
+ localAddr, remoteAddr net.Addr,
+ conn io.ReadWriteCloser,
+) *EncapsulationPacketConn {
+ return &EncapsulationPacketConn{
+ ReadWriteCloser: conn,
+ localAddr: localAddr,
+ remoteAddr: remoteAddr,
+ bw: bufio.NewWriter(conn),
+ }
+}
+
+// ReadFrom reads an encapsulated packet from the stream.
+func (c *EncapsulationPacketConn) ReadFrom(p []byte) (int, net.Addr, error) {
+ data, err := encapsulation.ReadData(c.ReadWriteCloser)
+ if err != nil {
+ return 0, c.remoteAddr, err
+ }
+ return copy(p, data), c.remoteAddr, nil
+}
+
+// WriteTo writes an encapsulated packet to the stream.
+func (c *EncapsulationPacketConn) WriteTo(p []byte, addr net.Addr) (int, error) {
+ // addr is ignored.
+ _, err := encapsulation.WriteData(c.bw, p)
+ if err == nil {
+ err = c.bw.Flush()
+ }
+ if err != nil {
+ return 0, err
+ }
+ return len(p), nil
+}
+
+// LocalAddr returns the localAddr value that was passed to
+// NewEncapsulationPacketConn.
+func (c *EncapsulationPacketConn) LocalAddr() net.Addr {
+ return c.localAddr
+}
+
+func (c *EncapsulationPacketConn) SetDeadline(t time.Time) error { return errNotImplemented }
+func (c *EncapsulationPacketConn) SetReadDeadline(t time.Time) error { return errNotImplemented }
+func (c *EncapsulationPacketConn) SetWriteDeadline(t time.Time) error { return errNotImplemented }
diff --git a/common/turbotunnel/consts.go b/common/turbotunnel/consts.go
index 4699d1d..80f70af 100644
--- a/common/turbotunnel/consts.go
+++ b/common/turbotunnel/consts.go
@@ -6,6 +6,10 @@ package turbotunnel
import "errors"
+// This magic prefix is how a client opts into turbo tunnel mode. It is just a
+// randomly generated byte string.
+var Token = [8]byte{0x12, 0x93, 0x60, 0x5d, 0x27, 0x81, 0x75, 0xf5}
+
// The size of receive and send queues.
const queueSize = 32
diff --git a/go.mod b/go.mod
index 4366d6a..6502651 100644
--- a/go.mod
+++ b/go.mod
@@ -1,7 +1,5 @@
module git.torproject.org/pluggable-transports/snowflake.git
-go 1.13
-
require (
git.torproject.org/pluggable-transports/goptlib.git v1.1.0
github.com/golang/protobuf v1.3.1 // indirect
@@ -9,6 +7,8 @@ require (
github.com/pion/sdp/v2 v2.3.4
github.com/pion/webrtc/v2 v2.2.2
github.com/smartystreets/goconvey v1.6.4
+ github.com/xtaci/kcp-go/v5 v5.5.12
+ github.com/xtaci/smux v1.5.12
golang.org/x/crypto v0.0.0-20200128174031-69ecbb4d6d5d
golang.org/x/net v0.0.0-20200114155413-6afb5195e5aa
golang.org/x/text v0.3.2 // indirect
diff --git a/go.sum b/go.sum
index 3708fc0..6768e02 100644
--- a/go.sum
+++ b/go.sum
@@ -21,6 +21,10 @@ github.com/hpcloud/tail v1.0.0 h1:nfCOvKYfkgYP8hkirhJocXT2+zOD8yUNjXaWfTlyFKI=
github.com/hpcloud/tail v1.0.0/go.mod h1:ab1qPbhIpdTxEkNHXyeSf5vhxWSCs/tWer42PpOxQnU=
github.com/jtolds/gls v4.20.0+incompatible h1:xdiiI2gbIgH/gLH7ADydsJ1uDOEzR8yvV7C0MuV77Wo=
github.com/jtolds/gls v4.20.0+incompatible/go.mod h1:QJZ7F/aHp+rZTRtaJ1ow/lLfFfVYBRgL+9YlvaHOwJU=
+github.com/klauspost/cpuid v1.2.2 h1:1xAgYebNnsb9LKCdLOvFWtAxGU/33mjJtyOVbmUa0Us=
+github.com/klauspost/cpuid v1.2.2/go.mod h1:Pj4uuM528wm8OyEC2QMXAi2YiTZ96dNQPGgoMS4s3ek=
+github.com/klauspost/reedsolomon v1.9.3 h1:N/VzgeMfHmLc+KHMD1UL/tNkfXAt8FnUqlgXGIduwAY=
+github.com/klauspost/reedsolomon v1.9.3/go.mod h1:CwCi+NUr9pqSVktrkN+Ondf06rkhYZ/pcNv7fu+8Un4=
github.com/kr/pretty v0.1.0 h1:L/CwN0zerZDmRFUapSPitk6f+Q3+0za1rQkzVuMiMFI=
github.com/kr/pretty v0.1.0/go.mod h1:dAy3ld7l9f0ibDNOQOHHMYYIIbhfbHSm3C4ZsoJORNo=
github.com/kr/pty v1.1.1/go.mod h1:pFQYn66WHrOpPYNljwOMqo10TkYh1fy3cYio2l3bCsQ=
@@ -82,14 +86,28 @@ github.com/stretchr/testify v1.3.0/go.mod h1:M5WIy9Dh21IEIfnGCwXGc5bZfKNJtfHm1UV
github.com/stretchr/testify v1.4.0/go.mod h1:j7eGeouHqKxXV5pUuKE4zz7dFj8WfuZ+81PSLYec5m4=
github.com/stretchr/testify v1.5.1 h1:nOGnQDM7FYENwehXlg/kFVnos3rEvtKTjRvOWSzb6H4=
github.com/stretchr/testify v1.5.1/go.mod h1:5W2xD1RspED5o8YsWQXVCued0rvSQ+mT+I5cxcmMvtA=
+github.com/templexxx/cpu v0.0.1 h1:hY4WdLOgKdc8y13EYklu9OUTXik80BkxHoWvTO6MQQY=
+github.com/templexxx/cpu v0.0.1/go.mod h1:w7Tb+7qgcAlIyX4NhLuDKt78AHA5SzPmq0Wj6HiEnnk=
+github.com/templexxx/xorsimd v0.4.1 h1:iUZcywbOYDRAZUasAs2eSCUW8eobuZDy0I9FJiORkVg=
+github.com/templexxx/xorsimd v0.4.1/go.mod h1:W+ffZz8jJMH2SXwuKu9WhygqBMbFnp14G2fqEr8qaNo=
+github.com/tjfoc/gmsm v1.0.1 h1:R11HlqhXkDospckjZEihx9SW/2VW0RgdwrykyWMFOQU=
+github.com/tjfoc/gmsm v1.0.1/go.mod h1:XxO4hdhhrzAd+G4CjDqaOkd0hUzmtPR/d3EiBBMn/wc=
+github.com/xtaci/kcp-go/v5 v5.5.12 h1:iALGyvti/oBbl1TbVoUpHEUHCorDEb3tEKl1CPY3KXM=
+github.com/xtaci/kcp-go/v5 v5.5.12/go.mod h1:H0T/EJ+lPNytnFYsKLH0JHUtiwZjG3KXlTM6c+Q4YUo=
+github.com/xtaci/lossyconn v0.0.0-20190602105132-8df528c0c9ae h1:J0GxkO96kL4WF+AIT3M4mfUVinOCPgf2uUWYFUzN0sM=
+github.com/xtaci/lossyconn v0.0.0-20190602105132-8df528c0c9ae/go.mod h1:gXtu8J62kEgmN++bm9BVICuT/e8yiLI2KFobd/TRFsE=
+github.com/xtaci/smux v1.5.12 h1:n9OGjdqQuVZXLh46+L4IR5tR2wvuUFwRABnN/V55bIY=
+github.com/xtaci/smux v1.5.12/go.mod h1:OMlQbT5vcgl2gb49mFkYo6SMf+zP3rcjcwQz7ZU7IGY=
golang.org/x/crypto v0.0.0-20190228161510-8dd112bcdc25/go.mod h1:djNgcEr1/C05ACkg1iLfiJU5Ep61QUkGW8qpdssI0+w=
golang.org/x/crypto v0.0.0-20190308221718-c2843e01d9a2/go.mod h1:djNgcEr1/C05ACkg1iLfiJU5Ep61QUkGW8qpdssI0+w=
+golang.org/x/crypto v0.0.0-20191206172530-e9b2fee46413/go.mod h1:LzIPMQfyMNhhGPhUkYOs5KpL4U8rLKemX1yGLhDgUto=
golang.org/x/crypto v0.0.0-20200128174031-69ecbb4d6d5d h1:9FCpayM9Egr1baVnV1SX0H87m+XB0B8S0hAMi99X/3U=
golang.org/x/crypto v0.0.0-20200128174031-69ecbb4d6d5d/go.mod h1:LzIPMQfyMNhhGPhUkYOs5KpL4U8rLKemX1yGLhDgUto=
golang.org/x/net v0.0.0-20180906233101-161cd47e91fd/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20190311183353-d8887717615a/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg=
golang.org/x/net v0.0.0-20190404232315-eb5bcb51f2a3/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg=
golang.org/x/net v0.0.0-20191126235420-ef20fe5d7933/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
+golang.org/x/net v0.0.0-20191209160850-c0dbc17a3553/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
golang.org/x/net v0.0.0-20200114155413-6afb5195e5aa h1:F+8P+gmewFQYRk6JoLQLwjBCTu3mcIURZfNkVweuRKA=
golang.org/x/net v0.0.0-20200114155413-6afb5195e5aa/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
golang.org/x/sync v0.0.0-20180314180146-1d60e4601c6f h1:wMNYb4v58l5UBM7MYRLPG6ZhfOqbKu7X5eyFl8ZhKvA=
@@ -99,7 +117,8 @@ golang.org/x/sys v0.0.0-20190215142949-d0b11bdaac8a/go.mod h1:STP8DvDyc/dI5b8T5h
golang.org/x/sys v0.0.0-20190228124157-a34e9553db1e/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20190412213103-97732733099d h1:+R4KGOnez64A81RvjARKc4UT5/tI9ujCIVX+P5KiHuI=
golang.org/x/sys v0.0.0-20190412213103-97732733099d/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
-golang.org/x/text v0.3.0 h1:g61tztE5qeGQ89tm6NTjjM9VPIm088od1l6aSorWRWg=
+golang.org/x/sys v0.0.0-20191228213918-04cbcbbfeed8 h1:JA8d3MPx/IToSyXZG/RhwYEtfrKO1Fxrqe8KrkiLXKM=
+golang.org/x/sys v0.0.0-20191228213918-04cbcbbfeed8/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/text v0.3.0/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
golang.org/x/text v0.3.2 h1:tW2bmiBqwgJj/UpqtC8EpXEZVYOwU0yG4iWbprSVAcs=
golang.org/x/text v0.3.2/go.mod h1:bEr9sfX3Q8Zfm5fL9x+3itogRgK3+ptLWKqgva+5dAk=
diff --git a/server/server.go b/server/server.go
index c03e41c..028a24d 100644
--- a/server/server.go
+++ b/server/server.go
@@ -3,6 +3,8 @@
package main
import (
+ "bufio"
+ "bytes"
"crypto/tls"
"flag"
"fmt"
@@ -20,9 +22,13 @@ import (
"time"
pt "git.torproject.org/pluggable-transports/goptlib.git"
+ "git.torproject.org/pluggable-transports/snowflake.git/common/encapsulation"
"git.torproject.org/pluggable-transports/snowflake.git/common/safelog"
+ "git.torproject.org/pluggable-transports/snowflake.git/common/turbotunnel"
"git.torproject.org/pluggable-transports/snowflake.git/common/websocketconn"
"github.com/gorilla/websocket"
+ "github.com/xtaci/kcp-go/v5"
+ "github.com/xtaci/smux"
"golang.org/x/crypto/acme/autocert"
"golang.org/x/net/http2"
)
@@ -30,6 +36,13 @@ import (
const ptMethodName = "snowflake"
const requestTimeout = 10 * time.Second
+// How long to remember outgoing packets for a client, when we don't currently
+// have an active WebSocket connection corresponding to that client. Because a
+// client session may span multiple WebSocket connections, we keep packets we
+// aren't able to send immediately in memory, for a little while but not
+// indefinitely.
+const clientMapTimeout = 1 * time.Minute
+
// How long to wait for ListenAndServe or ListenAndServeTLS to return an error
// before deciding that it's not going to return.
const listenAndServeErrorTimeout = 100 * time.Millisecond
@@ -49,8 +62,8 @@ additional HTTP listener on port 80 to work with ACME.
flag.PrintDefaults()
}
-// Copy from WebSocket to socket and vice versa.
-func proxy(local *net.TCPConn, conn *websocketconn.Conn) {
+// Copy from one stream to another.
+func proxy(local *net.TCPConn, conn net.Conn) {
var wg sync.WaitGroup
wg.Add(2)
@@ -101,7 +114,23 @@ var upgrader = websocket.Upgrader{
CheckOrigin: func(r *http.Request) bool { return true },
}
-type HTTPHandler struct{}
+// overrideReadConn is a net.Conn with an overridden Read method. Compare to
+// recordingConn at
+// https://dave.cheney.net/2015/05/22/struct-composition-with-go.
+type overrideReadConn struct {
+ net.Conn
+ io.Reader
+}
+
+func (conn *overrideReadConn) Read(p []byte) (int, error) {
+ return conn.Reader.Read(p)
+}
+
+type HTTPHandler struct {
+ // pconn is the adapter layer between stream-oriented WebSocket
+ // connections and the packet-oriented KCP layer.
+ pconn *turbotunnel.QueuePacketConn
+}
func (handler *HTTPHandler) ServeHTTP(w http.ResponseWriter, r *http.Request) {
ws, err := upgrader.Upgrade(w, r, nil)
@@ -116,15 +145,182 @@ func (handler *HTTPHandler) ServeHTTP(w http.ResponseWriter, r *http.Request) {
// Pass the address of client as the remote address of incoming connection
clientIPParam := r.URL.Query().Get("client_ip")
addr := clientAddr(clientIPParam)
+
+ var token [len(turbotunnel.Token)]byte
+ _, err = io.ReadFull(conn, token[:])
+ if err != nil {
+ // Don't bother logging EOF: that happens with an unused
+ // connection, which clients make frequently as they maintain a
+ // pool of proxies.
+ if err != io.EOF {
+ log.Printf("reading token: %v", err)
+ }
+ return
+ }
+
+ switch {
+ case bytes.Equal(token[:], turbotunnel.Token[:]):
+ err = turbotunnelMode(conn, addr, handler.pconn)
+ default:
+ // We didn't find a matching token, which means that we are
+ // dealing with a client that doesn't know about such things.
+ // "Unread" the token by constructing a new Reader and pass it
+ // to the old one-session-per-WebSocket mode.
+ conn2 := &overrideReadConn{Conn: conn, Reader: io.MultiReader(bytes.NewReader(token[:]), conn)}
+ err = oneshotMode(conn2, addr)
+ }
+ if err != nil {
+ log.Println(err)
+ return
+ }
+}
+
+// oneshotMode handles clients that did not send turbotunnel.Token at the start
+// of their stream. These clients use the WebSocket as a raw pipe, and expect
+// their session to begin and end when this single WebSocket does.
+func oneshotMode(conn net.Conn, addr string) error {
statsChannel <- addr != ""
or, err := pt.DialOr(&ptInfo, addr, ptMethodName)
if err != nil {
- log.Printf("failed to connect to ORPort: %s", err)
- return
+ return fmt.Errorf("failed to connect to ORPort: %s", err)
}
defer or.Close()
proxy(or, conn)
+
+ return nil
+}
+
+// turbotunnelMode handles clients that sent turbotunnel.Token at the start of
+// their stream. These clients expect to send and receive encapsulated packets,
+// with a long-lived session identified by ClientID.
+func turbotunnelMode(conn net.Conn, addr string, pconn *turbotunnel.QueuePacketConn) error {
+ // Read the ClientID prefix. Every packet encapsulated in this WebSocket
+ // connection pertains to the same ClientID.
+ var clientID turbotunnel.ClientID
+ _, err := io.ReadFull(conn, clientID[:])
+ if err != nil {
+ return fmt.Errorf("reading ClientID: %v", err)
+ }
+
+ // TODO: ClientID-to-client_ip address mapping
+ // Peek at the first read packet to get the KCP conv ID.
+
+ errCh := make(chan error)
+
+ // The remainder of the WebSocket stream consists of encapsulated
+ // packets. We read them one by one and feed them into the
+ // QueuePacketConn on which kcp.ServeConn was set up, which eventually
+ // leads to KCP-level sessions in the acceptSessions function.
+ go func() {
+ for {
+ p, err := encapsulation.ReadData(conn)
+ if err != nil {
+ errCh <- err
+ break
+ }
+ pconn.QueueIncoming(p, clientID)
+ }
+ }()
+
+ // At the same time, grab packets addressed to this ClientID and
+ // encapsulate them into the downstream.
+ go func() {
+ // Buffer encapsulation.WriteData operations to keep length
+ // prefixes in the same send as the data that follows.
+ bw := bufio.NewWriter(conn)
+ for p := range pconn.OutgoingQueue(clientID) {
+ _, err := encapsulation.WriteData(bw, p)
+ if err == nil {
+ err = bw.Flush()
+ }
+ if err != nil {
+ errCh <- err
+ break
+ }
+ }
+ }()
+
+ // Wait until one of the above loops terminates. The closing of the
+ // WebSocket connection will terminate the other one.
+ <-errCh
+
+ return nil
+}
+
+// handleStream bidirectionally connects a client stream with the ORPort.
+func handleStream(stream net.Conn) error {
+ // TODO: This is where we need to provide the client IP address.
+ statsChannel <- false
+ or, err := pt.DialOr(&ptInfo, "", ptMethodName)
+ if err != nil {
+ return fmt.Errorf("connecting to ORPort: %v", err)
+ }
+ defer or.Close()
+
+ proxy(or, stream)
+
+ return nil
+}
+
+// acceptStreams layers an smux.Session on the KCP connection and awaits streams
+// on it. Passes each stream to handleStream.
+func acceptStreams(conn *kcp.UDPSession) error {
+ smuxConfig := smux.DefaultConfig()
+ smuxConfig.Version = 2
+ smuxConfig.KeepAliveTimeout = 10 * time.Minute
+ sess, err := smux.Server(conn, smuxConfig)
+ if err != nil {
+ return err
+ }
+ for {
+ stream, err := sess.AcceptStream()
+ if err != nil {
+ if err, ok := err.(net.Error); ok && err.Temporary() {
+ continue
+ }
+ return err
+ }
+ go func() {
+ defer stream.Close()
+ err := handleStream(stream)
+ if err != nil {
+ log.Printf("handleStream: %v", err)
+ }
+ }()
+ }
+}
+
+// acceptSessions listens for incoming KCP connections and passes them to
+// acceptStreams. It is handler.ServeHTTP that provides the network interface
+// that drives this function.
+func acceptSessions(ln *kcp.Listener) error {
+ for {
+ conn, err := ln.AcceptKCP()
+ if err != nil {
+ if err, ok := err.(net.Error); ok && err.Temporary() {
+ continue
+ }
+ return err
+ }
+ // Permit coalescing the payloads of consecutive sends.
+ conn.SetStreamMode(true)
+ // Disable the dynamic congestion window (limit only by the
+ // maximum of local and remote static windows).
+ conn.SetNoDelay(
+ 0, // default nodelay
+ 0, // default interval
+ 0, // default resend
+ 1, // nc=1 => congestion window off
+ )
+ go func() {
+ defer conn.Close()
+ err := acceptStreams(conn)
+ if err != nil {
+ log.Printf("acceptStreams: %v", err)
+ }
+ }()
+ }
}
func initServer(addr *net.TCPAddr,
@@ -140,7 +336,12 @@ func initServer(addr *net.TCPAddr,
return nil, fmt.Errorf("cannot listen on port %d; configure a port using ServerTransportListenAddr", addr.Port)
}
- var handler HTTPHandler
+ handler := HTTPHandler{
+ // pconn is shared among all connections to this server. It
+ // overlays packet-based client sessions on top of ephemeral
+ // WebSocket connections.
+ pconn: turbotunnel.NewQueuePacketConn(addr, clientMapTimeout),
+ }
server := &http.Server{
Addr: addr.String(),
Handler: &handler,
@@ -176,6 +377,24 @@ func initServer(addr *net.TCPAddr,
break
}
+ // Start a KCP engine, set up to read and write its packets over the
+ // WebSocket connections that arrive at the web server.
+ // handler.ServeHTTP is responsible for encapsulation/decapsulation of
+ // packets on behalf of KCP. KCP takes those packets and turns them into
+ // sessions which appear in the acceptSessions function.
+ ln, err := kcp.ServeConn(nil, 0, 0, handler.pconn)
+ if err != nil {
+ server.Close()
+ return server, err
+ }
+ go func() {
+ defer ln.Close()
+ err := acceptSessions(ln)
+ if err != nil {
+ log.Printf("acceptSessions: %v", err)
+ }
+ }()
+
return server, err
}

23 Apr '20
commit 2022496d3b6fc76b7725135758c37d7d49546d3d
Author: David Fifield <david(a)bamsoftware.com>
Date: Wed Mar 18 18:00:44 2020 -0600
Use a global RedialPacketConn and smux.Session.
This allows multiple SOCKS connections to share the available proxies,
and in particular prevents a SOCKS connection from being starved of a
proxy when the maximum proxy capacity is less than the number of
SOCKS connections.
This is option 4 from https://bugs.torproject.org/33519.
---
client/lib/snowflake.go | 78 ++++++++++++++++++++++++++++++++++++++++++++-----
1 file changed, 71 insertions(+), 7 deletions(-)
diff --git a/client/lib/snowflake.go b/client/lib/snowflake.go
index 4b7dd4d..27991b2 100644
--- a/client/lib/snowflake.go
+++ b/client/lib/snowflake.go
@@ -6,6 +6,7 @@ import (
"io"
"log"
"net"
+ "sync"
"time"
"git.torproject.org/pluggable-transports/snowflake.git/common/turbotunnel"
@@ -23,9 +24,10 @@ type dummyAddr struct{}
func (addr dummyAddr) Network() string { return "dummy" }
func (addr dummyAddr) String() string { return "dummy" }
-// Given an accepted SOCKS connection, establish a WebRTC connection to the
-// remote peer and exchange traffic.
-func Handler(socks net.Conn, snowflakes SnowflakeCollector) error {
+// newSession returns a new smux.Session and the net.PacketConn it is running
+// over. The net.PacketConn successively connects through Snowflake proxies
+// pulled from snowflakes.
+func newSession(snowflakes SnowflakeCollector) (net.PacketConn, *smux.Session, error) {
clientID := turbotunnel.NewClientID()
// We build a persistent KCP session on a sequence of ephemeral WebRTC
@@ -54,7 +56,6 @@ func Handler(socks net.Conn, snowflakes SnowflakeCollector) error {
return NewEncapsulationPacketConn(dummyAddr{}, dummyAddr{}, conn), nil
}
pconn := turbotunnel.NewRedialPacketConn(dummyAddr{}, dummyAddr{}, dialContext)
- defer pconn.Close()
// conn is built on the underlying RedialPacketConn—when one WebRTC
// connection dies, another one will be found to take its place. The
@@ -62,9 +63,9 @@ func Handler(socks net.Conn, snowflakes SnowflakeCollector) error {
// engine.
conn, err := kcp.NewConn2(dummyAddr{}, nil, 0, 0, pconn)
if err != nil {
- return err
+ pconn.Close()
+ return nil, nil, err
}
- defer conn.Close()
// Permit coalescing the payloads of consecutive sends.
conn.SetStreamMode(true)
// Disable the dynamic congestion window (limit only by the
@@ -81,9 +82,72 @@ func Handler(socks net.Conn, snowflakes SnowflakeCollector) error {
smuxConfig.KeepAliveTimeout = 10 * time.Minute
sess, err := smux.Client(conn, smuxConfig)
if err != nil {
+ conn.Close()
+ pconn.Close()
+ return nil, nil, err
+ }
+
+ return pconn, sess, err
+}
+
+// sessionManager_ maintains a single global smux.Session that is shared among
+// incoming SOCKS connections.
+type sessionManager_ struct {
+ mutex sync.Mutex
+ sess *smux.Session
+}
+
+// Get creates and returns a new global smux.Session if none exists yet. If one
+// already exists, it returns the existing one. It monitors the returned session
+// and if it ever fails, sets things up so the next call to Get will create a
+// new session.
+func (manager *sessionManager_) Get(snowflakes SnowflakeCollector) (*smux.Session, error) {
+ manager.mutex.Lock()
+ defer manager.mutex.Unlock()
+
+ if manager.sess == nil {
+ log.Printf("starting a new session")
+ pconn, sess, err := newSession(snowflakes)
+ if err != nil {
+ return nil, err
+ }
+ manager.sess = sess
+ go func() {
+ // If the session dies, set it to be recreated.
+ for {
+ <-time.After(5 * time.Second)
+ if sess.IsClosed() {
+ break
+ }
+ }
+ log.Printf("discarding finished session")
+ // Close the underlying to force any ongoing WebRTC
+ // connection to close as well, and relinquish the
+ // SnowflakeCollector.
+ pconn.Close()
+ manager.mutex.Lock()
+ manager.sess = nil
+ manager.mutex.Unlock()
+ }()
+ } else {
+ log.Printf("reusing the existing session")
+ }
+
+ return manager.sess, nil
+}
+
+var sessionManager = sessionManager_{}
+
+// Given an accepted SOCKS connection, establish a WebRTC connection to the
+// remote peer and exchange traffic.
+func Handler(socks net.Conn, snowflakes SnowflakeCollector) error {
+ // Return the global smux.Session.
+ sess, err := sessionManager.Get(snowflakes)
+ if err != nil {
return err
}
- defer sess.Close()
+
+ // On the smux session we overlay a stream.
stream, err := sess.OpenStream()
if err != nil {
return err

23 Apr '20
commit 2202910e2b699e0bd9e6eca0d59094c15684707b
Author: Nick Mathewson <nickm(a)torproject.org>
Date: Thu Apr 23 15:20:45 2020 -0400
Latest version of proposal 295 (from Jan :/ )
---
proposals/295-relay-crypto-with-adl.txt | 268 ++++++++++++++++++++++++++------
1 file changed, 223 insertions(+), 45 deletions(-)
diff --git a/proposals/295-relay-crypto-with-adl.txt b/proposals/295-relay-crypto-with-adl.txt
index cfb58a2..d3414c4 100644
--- a/proposals/295-relay-crypto-with-adl.txt
+++ b/proposals/295-relay-crypto-with-adl.txt
@@ -2,7 +2,8 @@ Filename: 295-relay-crypto-with-adl.txt
Title: Using ADL for relay cryptography (solving the crypto-tagging attack)
Author: Tomer Ashur, Orr Dunkelman, Atul Luykx
Created: 22 Feb 2018
-Last-Modified: 10 July 2019
+Last-Modified: 13 Jan. 2020
+
Status: Open
@@ -37,7 +38,12 @@ Status: Open
available online at https://eprint.iacr.org/2017/239 .
For authentication between the OP and the edge node we use
- the PIV scheme: https://eprint.iacr.org/2013/835
+ the PIV scheme: https://eprint.iacr.org/2013/835 .
+
+ A recent paper presented a birthday bound distinguisher
+ against the ADL scheme, thus showing that the RUP security
+ proof is tight: https://eprint.iacr.org/2019/1359 .
+
2. Preliminaries
@@ -74,7 +80,7 @@ Status: Open
ENC_KEY_LEN -- The key length used for encryption (e.g., AES). We
recommend ENC_KEY_LEN = 256.
-2.4. Key derivation (replaces Section 5.2.2)
+2.4. Key derivation (replaces Section 5.2.2 in Tor-spec.txt)
For newer KDF needs, Tor uses the key derivation function HKDF
from RFC5869, instantiated with SHA256. The generated key
@@ -93,26 +99,27 @@ Status: Open
When used in the ntor handshake a string of key material is
generated and is used in the following way:
- Length Purpose Notation
- ------ ------- --------
- HASH_LEN forward digest IV DF
- HASH_LEN backward digest IV DB
- ENC_KEY_LEN encryption key Kf
- ENC_KEY_LEN decryption key Kb
- DIG_KEY_LEN forward digest key Khf
- DIG_KEY_LEN backward digest key Khb
- ENC_KEY_LEN forward tweak key Ktf
- ENC_KEY_LEN backward tweak key Ktb
- DIGEST_LEN nonce to use in the *
- hidden service protocol
-
- * I am not sure that we need this any longer.
+ Length Purpose Notation
+ ------ ------- --------
+ HASH_LEN forward authentication digest IV AF
+ HASH_LEN forward digest IV DF
+ HASH_LEN backward digest IV DB
+ ENC_KEY_LEN encryption key Kf
+ ENC_KEY_LEN decryption key Kb
+ DIG_KEY_LEN forward digest key Khf
+ DIG_KEY_LEN backward digest key Khb
+ ENC_KEY_LEN forward tweak key Ktf
+ ENC_KEY_LEN backward tweak key Ktb
+ DIGEST_LEN nonce to use in the
+ hidden service protocol(*)
+
+ (*) I am not sure that if this is still needed.
Excess bytes from K are discarded.
2.6. Ciphers
- For hashing(*) we use GHASH with a DIG_KEY_LEN-bit key. We write
+ For hashing(*) we use GHASH(**) with a DIG_KEY_LEN-bit key. We write
this as Digest(K,M) where K is the key and M the message to be
hashed.
@@ -128,19 +135,23 @@ Status: Open
the message to be encrypted (resp., decrypted).
(*) The terms hash and digest are used interchangeably.
+ (**) Proposal 308 suggested that using POLYVAL [GLL18]
+ would be more efficient here. This proposal will work just the
+ same if POLYVAL is used instead of GHASH.
3. Routing relay cells
Let n denote the integer representing the destination node. For
- I = 1...n, we set Tf'_{I} = DF_I and Tb'_{I} = DB_I
- where DF_I and DB_I are generated according to Section 2.4.
+ I = 1...n, we set Tf'_{I} = DF_I, Tb'_{I} = DB_I, and
+ Ta'_I = AF_I where DF_I, DB_I, and AF_I are generated
+ according to Section 2.4.
3.1. Forward Direction
The forward direction is the direction that CREATE/CREATE2 cells
are sent.
-3.1.1. Routing from the Origin
+3.1.1. Routing from the origin
When an OP sends a relay cell, they prepare the
cell as follows:
@@ -148,8 +159,9 @@ Status: Open
The OP prepares the authentication part of the message:
C_{n+1} = M
- T_{n+1} = Digest(Khf_n,C_{n+1})
- N_{n+1} = T_{n+1} ^ E(Ktf_n,T_{n+1} ^ 0)
+ Ta_I = Digest(Khf_n,Ta'_I||C_{n+1})
+ N_{n+1} = Ta_I ^ E(Ktf_n,Ta_I ^ 0)
+ Ta'_{I} = Ta_I
Then, the OP prepares the multi-layered encryption:
@@ -161,9 +173,9 @@ Status: Open
The OP sends C_1 and N_1 to node 1.
-3.1.2. Relaying Forward at Onion Routers
+3.1.2. Relaying forward at onion routers
- When a forward relay cell is received by OR I, it decrypts the
+ When a forward relay cell is received by OR_I, it decrypts the
payload with the stream cipher, as follows:
'Forward' relay cell:
@@ -180,14 +192,14 @@ Status: Open
For more information, see section 4 below.
-3.2. Backward Direction
+3.2. Backward direction
The backward direction is the opposite direction from
CREATE/CREATE2 cells.
-3.2.1. Relaying Backward at Onion Routers
+3.2.1. Relaying backward at onion routers
- When a backward relay cell is received by OR I, it encrypts the
+ When a backward relay cell is received by OR_I, it encrypts the
payload with the stream cipher, as follows:
'Backward' relay cell:
@@ -200,7 +212,7 @@ Status: Open
with C_{n+1} = M and N_{n+1}=0. Once encrypted, the node passes
C_I and N_I along the circuit towards the OP.
-3.2.2. Routing to the Origin
+3.2.2. Routing to the origin
When a relay cell arrives at an OP, the OP decrypts the payload
with the stream cipher as follows:
@@ -240,17 +252,22 @@ Status: Open
authentication is now included in the nonce part of the payload.
The old 'Recognized' field is removed and the node always tries to
- authenticate the message as follows:
+ authenticate the message as follows.
- forward direction (executed by the end node):
+4.1.1 forward direction (executed by the end node):
+
+ Ta_I = Digest(Khf_n,Ta'_I||C_{n+1})
+ Tag = Ta_I ^ D(Ktf_n,Ta_I ^ N_{n+1})
- T_{n+1} = Digest(Khf_n,C_{n+1})
- Tag = T_{n+1} ^ D(Ktf_n,T_{n+1} ^ N_{n+1})
+ If Tag = 0:
+ Ta'_I = Ta_I
+ The message is authenticated.
+ Otherwise:
+ Ta'_I remains unchanged.
+ The message is not authenticated.
- The message is recognized and authenticated
- (i.e., M = C_{n+1}) if and only if Tag = 0.
- backward direction (executed by the OP):
+4.1.2 backward direction (executed by the OP):
The message is recognized and authenticated
(i.e., C_{n+1} = M) if and only if N_{n+1} = 0.
@@ -264,14 +281,14 @@ Status: Open
and version-heterogenic circuits
When a cell is prepared to be routed from the origin (see Section
- 3.1.1) the encrypted nonce N is appended to the encrypted cell
- (occupying the last 16 bytes of the cell). If the cell is
- prepared to be sent to a node supporting the new protocol, S is
- combined with other sources to generate the layer's
- nonce. Otherwise, if the node only supports the old protocol, n
- is still appended to the encrypted cell (so that following nodes
- can still recover their nonce), but a synchronized nonce (as per
- the old protocol) is used in CTR-mode.
+ 3.1.1 above) the encrypted nonce N is appended to the encrypted
+ cell (occupying the last 16 bytes of the cell). If the cell is
+ prepared to be sent to a node supporting the new protocol, N is
+ used to generate the layer's nonce. Otherwise, if the node only
+ supports the old protocol, N is still appended to the encrypted
+ cell (so that following nodes can still recover their nonce),
+ but a synchronized nonce (as per the old protocol) is used in
+ CTR-mode.
When a cell is sent along the circuit in the 'backward'
direction, nodes supporting the new protocol always assume that
@@ -371,7 +388,7 @@ Status: Open
long as an honest node supporting the new protocol processes the
message between two dishonest ones.
-5.3 The Running Digest
+5.3. The running digest
Unlike the old protocol, the running digest is now computed as
the output of a GHASH call instead of a hash function call
@@ -385,3 +402,164 @@ Status: Open
repeat with low probability. GHASH is a universal hash function,
hence it gives such a guarantee assuming its key is chosen
uniformly at random.
+
+6. Forward secrecy
+
+ Inspired by the approach of Proposal 308, a small modification
+ to this proposal makes it forward secure. The core idea is to
+ replace the encryption key KF_n after de/encrypting the cell.
+ As an added benefit, this would allow to keep the authentication
+ layer stateless (i.e., without keeping a running digest for
+ this layer).
+
+ Below we present the required changes to the sections above.
+
+6.1. Routing from the Origin (replacing 3.1.1 above)
+
+ When an OP sends a relay cell, they prepare the
+ cell as follows:
+
+ The OP prepares the authentication part of the message:
+
+ C_{n+1} = M
+ T_{n+1} = Digest(Khf_n,C_{n+1})
+ N_{n+1} = T_{n+1} ^ E(Ktf_n,T_{n+1} ^ 0)
+
+
+ Then, the OP prepares the multi-layered encryption:
+ For the final layer n:
+ (C_n,Kf'_n) = Encrypt(Kf_n,N_{n+1},C_{I+1}||0||0) (*)
+ T_n = Digest(Khf_I,Tf'_n||C_n)
+ N_n = T_I ^ E(Ktf_n,T_n ^ N_{n+1})
+ Tf'_n = T_n
+ Kf_n = Kf'_n
+
+ (*) CTR mode is used to generate two additional blocks. This
+ 256-bit value is denoted K'f_n and is used in subsequent
+ steps to replace the encryption key of this layer.
+ To achieve forward secrecy it is important that the
+ obsolete Kf_n is erased in a non-recoverable way.
+
+ For layer I=(n-1)...1:
+ C_I = Encrypt(Kf_I,N_{I+1},C_{I+1})
+ T_I = Digest(Khf_I,Tf'_I||C_I)
+ N_I = T_I ^ E(Ktf_I,T_I ^ N_{I+1})
+ Tf'_I = T_I
+
+ The OP sends C_1 and N_1 to node 1.
+
+ Alternatively, if we want that all nodes use the same functionality
+ OP prepares the cell as follows:
+
+ For layer I=n...1:
+ (C_I,K'f_I) = Encrypt(Kf_I,N_{I+1},C_{I+1}||0||0) (*)
+ T_I = Digest(Khf_I,Tf'_I||C_I)
+ N_I = T_I ^ E(Ktf_I,T_I ^ N_{I+1})
+ Tf'_I = T_I
+ Kf_I = Kf'_I
+
+ (*) CTR mode is used to generate two additional blocks. This
+ 256-bit value is denoted K'f_n and is used in subsequent
+ steps to replace the encryption key of this layer.
+ To achieve forward secrecy it is important that the
+ obsolete Kf_n is erased in a non-recoverable way.
+
+ This scheme offers forward secrecy in all levels of the circuit.
+
+6.2. Relaying Forward at Onion Routers (replacing 3.1.2 above)
+
+ When a forward relay cell is received by OR I, it decrypts the
+ payload with the stream cipher, as follows:
+
+ 'Forward' relay cell:
+
+ T_I = Digest(Khf_I,Tf'_I||C_I)
+ N_{I+1} = T_I ^ D(Ktf_I,T_I ^ N_I)
+ C_{I+1} = Decrypt(Kf_I,N_{I+1},C_I||0||0)
+ Tf'_I = T_I
+
+ The OR then decides whether it recognizes the relay cell as described below.
+ Depending on the choice of scheme from 6.1 the OR uses the last two blocks
+ of C_{I+1} to update the encryption key or discards them.
+
+ If the cell is recognized the OR also processes the contents of the relay
+ cell. Otherwise, it passes C_{I+1}||N_{I+1} along the circuit if the circuit
+ continues.
+
+ For more information about recognizing and authenticating relay cells,
+ see 5.4.5 below.
+
+6.3. Relaying Backward at Onion Routers (replacing 3.2.1 above)
+
+ When an edge node receives a message M to be routed back to the
+ origin, it encrypts it as follows:
+
+ T_n = Digest(Khb_n,Tb'_n||M)
+ N_n = T_n ^ E(Ktb_n,T_n ^ 0)
+ (C_n,K'b_n) = Encrypt(Kb_n,N_n,M||0||0) (*)
+ Tb'_n = T_n
+ Kb_n = K'b_n
+
+ (*) CTR mode is used to generate two additional blocks. This
+ 256-bit value is denoted K'b_n and will be used in
+ subsequent steps to replace the encryption key of this layer.
+ To achieve forward secrecy it is important that the obsolete
+ K'b_n is erased in a non-recoverable way.
+
+ Once encrypted, the edge node sends C_n and N_n along the circuit towards
+ the OP. When a backward relay cell is received by OR_I (I<n), it encrypts
+ the payload with the stream cipher, as follows:
+
+ 'Backward' relay cell:
+
+ T_I = Digest(Khb_I,Tb'_I||C_{I+1})
+ N_I = T_I ^ E(Ktb_I,T_I ^ N_{I+1})
+ C_I = Encrypt(Kb_I,N_I,C_{I+1})
+ Tb'_I = T_I
+
+ Each node passes C_I and N_I along the circuit towards the OP.
+
+ If forward security is desired for all layers in the circuit, all OR's
+ encrypt as follows:
+ T_I = Digest(Khb_I,Tb'_I||C_{I+1})
+ N_I = T_I ^ E(Ktb_I,T_I ^ 0)
+ (C_I,K'b_I) = Encrypt(Kb_n,N_n,M||0||0)
+ Tb'_I = T_I
+ Kb_I = K'b_I
+
+
+6.4. Routing to the Origin (replacing 3.2.2 above)
+
+ When a relay cell arrives at an OP, the OP decrypts the payload
+ with the stream cipher as follows:
+
+ OP receives relay cell from node 1:
+
+ For I=1...n, where n is the end node on the circuit:
+ C_{I+1} = Decrypt(Kb_I,N_I,C_I)
+ T_I = Digest(Khb_I,Tb'_I||C_{I+1})
+ N_{I+1} = T_I ^ D(Ktb_I,T_I ^ N_I)
+ Tb'_I = T_I
+
+ And updates the encryption keys according to the strategy
+ chosen for 6.3.
+
+ If the payload is recognized (see Section 4.1),
+ then:
+
+ The sending node is I. Process the payload!
+
+
+6.5. Recognizing and authenticating a relay cell (replacing 4.1.1 above):
+
+ Authentication in the forward direction is done as follows:
+
+ T_{n+1} = Digest(Khf_n,C_{n+1})
+ Tag = T_{n+1} ^ D(Ktf_n,T_{n+1} ^ N_{n+1})
+
+ The message is recognized and authenticated
+ (i.e., M = C_{n+1}) if and only if Tag = 0.
+
+ No changes are required to the authentication process when the relay
+ cell is sent backwards.
+
commit 2ec807e10464c9881baef6318ff41ce58c07171e
Author: Nick Mathewson <nickm(a)torproject.org>
Date: Thu Apr 23 15:21:46 2020 -0400
whitespace fixes on proposal 295.
---
proposals/295-relay-crypto-with-adl.txt | 144 ++++++++++++++++----------------
1 file changed, 71 insertions(+), 73 deletions(-)
diff --git a/proposals/295-relay-crypto-with-adl.txt b/proposals/295-relay-crypto-with-adl.txt
index d3414c4..a1752df 100644
--- a/proposals/295-relay-crypto-with-adl.txt
+++ b/proposals/295-relay-crypto-with-adl.txt
@@ -3,7 +3,6 @@ Title: Using ADL for relay cryptography (solving the crypto-tagging attack)
Author: Tomer Ashur, Orr Dunkelman, Atul Luykx
Created: 22 Feb 2018
Last-Modified: 13 Jan. 2020
-
Status: Open
@@ -39,11 +38,11 @@ Status: Open
For authentication between the OP and the edge node we use
the PIV scheme: https://eprint.iacr.org/2013/835 .
-
+
A recent paper presented a birthday bound distinguisher
- against the ADL scheme, thus showing that the RUP security
+ against the ADL scheme, thus showing that the RUP security
proof is tight: https://eprint.iacr.org/2019/1359 .
-
+
2. Preliminaries
@@ -110,7 +109,7 @@ Status: Open
DIG_KEY_LEN backward digest key Khb
ENC_KEY_LEN forward tweak key Ktf
ENC_KEY_LEN backward tweak key Ktb
- DIGEST_LEN nonce to use in the
+ DIGEST_LEN nonce to use in the
hidden service protocol(*)
(*) I am not sure that if this is still needed.
@@ -136,15 +135,15 @@ Status: Open
(*) The terms hash and digest are used interchangeably.
(**) Proposal 308 suggested that using POLYVAL [GLL18]
- would be more efficient here. This proposal will work just the
- same if POLYVAL is used instead of GHASH.
+ would be more efficient here. This proposal will work just the
+ same if POLYVAL is used instead of GHASH.
3. Routing relay cells
Let n denote the integer representing the destination node. For
- I = 1...n, we set Tf'_{I} = DF_I, Tb'_{I} = DB_I, and
- Ta'_I = AF_I where DF_I, DB_I, and AF_I are generated
- according to Section 2.4.
+ I = 1...n, we set Tf'_{I} = DF_I, Tb'_{I} = DB_I, and
+ Ta'_I = AF_I where DF_I, DB_I, and AF_I are generated
+ according to Section 2.4.
3.1. Forward Direction
@@ -255,7 +254,7 @@ Status: Open
authenticate the message as follows.
4.1.1 forward direction (executed by the end node):
-
+
Ta_I = Digest(Khf_n,Ta'_I||C_{n+1})
Tag = Ta_I ^ D(Ktf_n,Ta_I ^ N_{n+1})
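The recognize-and-authenticate step above reduces to one algebraic check: the origin folds the digest into the nonce through the tweaked cipher, and the end node recovers an all-zero tag if and only if its recomputed digest matches. A stand-in sketch, with HMAC as the digest and a toy Feistel permutation in place of the block cipher E/D (all concrete primitives here are assumptions, not the proposal's choices):

```python
import hmac, hashlib

BLOCK = 16

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def digest(kh: bytes, data: bytes) -> bytes:
    # Stand-in for the GHASH/POLYVAL digest, truncated to one block.
    return hmac.new(kh, data, hashlib.sha256).digest()[:BLOCK]

def _f(kt: bytes, i: int, half: bytes) -> bytes:
    return hmac.new(kt, bytes([i]) + half, hashlib.sha256).digest()[:8]

def E(kt: bytes, x: bytes) -> bytes:
    # Toy 4-round Feistel permutation standing in for the cipher.
    l, r = x[:8], x[8:]
    for i in range(4):
        l, r = r, xor(l, _f(kt, i, r))
    return l + r

def D(kt: bytes, x: bytes) -> bytes:
    # Inverse permutation: undo the Feistel rounds in reverse order.
    l, r = x[:8], x[8:]
    for i in reversed(range(4)):
        l, r = xor(r, _f(kt, i, l)), l
    return l + r

Khf, Ktf = b"h" * 16, b"t" * 16
M = b"relay payload"

# Origin: N = T ^ E(Ktf, T ^ 0)
T = digest(Khf, M)
N = xor(T, E(Ktf, T))            # T ^ 0 == T

# End node: Tag = T' ^ D(Ktf, T' ^ N); recognized iff Tag == 0.
T2 = digest(Khf, M)
Tag = xor(T2, D(Ktf, xor(T2, N)))
assert Tag == bytes(BLOCK)        # all zeros: cell authenticated
```

If the payload were modified in transit, T' would differ from T, so D would be applied to a value unrelated to E's output and the tag would be nonzero except with negligible probability.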
@@ -281,13 +280,13 @@ Status: Open
and version-heterogenic circuits
When a cell is prepared to be routed from the origin (see Section
- 3.1.1 above) the encrypted nonce N is appended to the encrypted
+ 3.1.1 above) the encrypted nonce N is appended to the encrypted
cell (occupying the last 16 bytes of the cell). If the cell is
prepared to be sent to a node supporting the new protocol, N is
- used to generate the layer's nonce. Otherwise, if the node only
- supports the old protocol, N is still appended to the encrypted
- cell (so that following nodes can still recover their nonce),
- but a synchronized nonce (as per the old protocol) is used in
+ used to generate the layer's nonce. Otherwise, if the node only
+ supports the old protocol, N is still appended to the encrypted
+ cell (so that following nodes can still recover their nonce),
+ but a synchronized nonce (as per the old protocol) is used in
CTR-mode.
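The per-hop handling just described can be sketched as follows; the hop flag, the 16-byte trailer position, and the counter-based legacy nonce are illustrative assumptions for the sketch:

```python
def split_cell(wire: bytes):
    """The encrypted nonce N occupies the last 16 bytes of the cell."""
    return wire[:-16], wire[-16:]

def layer_nonce(supports_new_protocol: bool, N: bytes,
                legacy_ctr: list) -> bytes:
    if supports_new_protocol:
        return N  # N is used to generate this layer's nonce
    # Old protocol: fall back to the synchronized CTR-mode nonce.
    # N nonetheless stays appended to the encrypted cell so that
    # later, new-protocol hops can still recover their nonce.
    legacy_ctr[0] += 1
    return legacy_ctr[0].to_bytes(16, "big")
```

The point of the sketch is only that N travels with the cell regardless of which protocol a given hop speaks.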
When a cell is sent along the circuit in the 'backward'
@@ -402,20 +401,20 @@ Status: Open
repeat with low probability. GHASH is a universal hash function,
hence it gives such a guarantee assuming its key is chosen
uniformly at random.
-
+
6. Forward secrecy
- Inspired by the approach of Proposal 308, a small modification
- to this proposal makes it forward secure. The core idea is to
+ Inspired by the approach of Proposal 308, a small modification
+ to this proposal makes it forward secure. The core idea is to
replace the encryption key KF_n after de/encrypting the cell.
- As an added benefit, this would allow to keep the authentication
- layer stateless (i.e., without keeping a running digest for
- this layer).
-
+ As an added benefit, this would make it possible to keep the
+ authentication layer stateless (i.e., without keeping a running
+ digest for this layer).
+
Below we present the required changes to the sections above.
-
+
6.1. Routing from the Origin (replacing 3.1.1 above)
-
+
When an OP sends a relay cell, they prepare the
cell as follows:
@@ -424,7 +423,7 @@ Status: Open
C_{n+1} = M
T_{n+1} = Digest(Khf_n,C_{n+1})
N_{n+1} = T_{n+1} ^ E(Ktf_n,T_{n+1} ^ 0)
-
+
Then, the OP prepares the multi-layered encryption:
For the final layer n:
@@ -433,13 +432,13 @@ Status: Open
N_n = T_n ^ E(Ktf_n,T_n ^ N_{n+1})
Tf'_n = T_n
Kf_n = Kf'_n
-
- (*) CTR mode is used to generate two additional blocks. This
- 256-bit value is denoted K'f_n and is used in subsequent
+
+ (*) CTR mode is used to generate two additional blocks. This
+ 256-bit value is denoted K'f_n and is used in subsequent
steps to replace the encryption key of this layer.
- To achieve forward secrecy it is important that the
- obsolete Kf_n is erased in a non-recoverable way.
-
+ To achieve forward secrecy it is important that the
+ obsolete Kf_n is erased in a non-recoverable way.
+
For layer I=(n-1)...1:
C_I = Encrypt(Kf_I,N_{I+1},C_{I+1})
T_I = Digest(Khf_I,Tf'_I||C_I)
@@ -447,27 +446,27 @@ Status: Open
Tf'_I = T_I
The OP sends C_1 and N_1 to node 1.
-
- Alternatively, if we want that all nodes use the same functionality
+
+ Alternatively, if we want all nodes to use the same functionality,
OP prepares the cell as follows:
-
+
For layer I=n...1:
(C_I,K'f_I) = Encrypt(Kf_I,N_{I+1},C_{I+1}||0||0) (*)
T_I = Digest(Khf_I,Tf'_I||C_I)
N_I = T_I ^ E(Ktf_I,T_I ^ N_{I+1})
Tf'_I = T_I
Kf_I = Kf'_I
-
- (*) CTR mode is used to generate two additional blocks. This
- 256-bit value is denoted K'f_n and is used in subsequent
+
+ (*) CTR mode is used to generate two additional blocks. This
+ 256-bit value is denoted K'f_n and is used in subsequent
steps to replace the encryption key of this layer.
- To achieve forward secrecy it is important that the
- obsolete Kf_n is erased in a non-recoverable way.
-
+ To achieve forward secrecy it is important that the
+ obsolete Kf_n is erased in a non-recoverable way.
+
This scheme offers forward secrecy in all levels of the circuit.
-
+
6.2. Relaying Forward at Onion Routers (replacing 3.1.2 above)
-
+
When a forward relay cell is received by OR I, it decrypts the
payload with the stream cipher, as follows:
@@ -478,36 +477,36 @@ Status: Open
C_{I+1} = Decrypt(Kf_I,N_{I+1},C_I||0||0)
Tf'_I = T_I
- The OR then decides whether it recognizes the relay cell as described below.
- Depending on the choice of scheme from 6.1 the OR uses the last two blocks
- of C_{I+1} to update the encryption key or discards them.
-
- If the cell is recognized the OR also processes the contents of the relay
- cell. Otherwise, it passes C_{I+1}||N_{I+1} along the circuit if the circuit
+ The OR then decides whether it recognizes the relay cell as described below.
+ Depending on the choice of scheme from 6.1 the OR uses the last two blocks
+ of C_{I+1} to update the encryption key or discards them.
+
+ If the cell is recognized the OR also processes the contents of the relay
+ cell. Otherwise, it passes C_{I+1}||N_{I+1} along the circuit if the circuit
continues.
For more information about recognizing and authenticating relay cells,
see 5.4.5 below.
-
+
6.3. Relaying Backward at Onion Routers (replacing 3.2.1 above)
When an edge node receives a message M to be routed back to the
origin, it encrypts it as follows:
-
+
T_n = Digest(Khb_n,Tb'_n||M)
N_n = T_n ^ E(Ktb_n,T_n ^ 0)
(C_n,K'b_n) = Encrypt(Kb_n,N_n,M||0||0) (*)
Tb'_n = T_n
Kb_n = K'b_n
-
- (*) CTR mode is used to generate two additional blocks. This
- 256-bit value is denoted K'b_n and will be used in
- subsequent steps to replace the encryption key of this layer.
- To achieve forward secrecy it is important that the obsolete
- K'b_n is erased in a non-recoverable way.
-
- Once encrypted, the edge node sends C_n and N_n along the circuit towards
- the OP. When a backward relay cell is received by OR_I (I<n), it encrypts
+
+ (*) CTR mode is used to generate two additional blocks. This
+ 256-bit value is denoted K'b_n and will be used in
+ subsequent steps to replace the encryption key of this layer.
+ To achieve forward secrecy it is important that the obsolete
+ K'b_n is erased in a non-recoverable way.
+
+ Once encrypted, the edge node sends C_n and N_n along the circuit towards
+ the OP. When a backward relay cell is received by OR_I (I<n), it encrypts
the payload with the stream cipher, as follows:
'Backward' relay cell:
@@ -518,7 +517,7 @@ Status: Open
Tb'_I = T_I
Each node passes C_I and N_I along the circuit towards the OP.
-
+
If forward security is desired for all layers in the circuit, all OR's
encrypt as follows:
T_I = Digest(Khb_I,Tb'_I||C_{I+1})
@@ -526,7 +525,7 @@ Status: Open
(C_I,K'b_I) = Encrypt(Kb_I,N_I,C_{I+1}||0||0)
Tb'_I = T_I
Kb_I = K'b_I
-
+
6.4. Routing to the Origin (replacing 3.2.2 above)
@@ -540,26 +539,25 @@ Status: Open
T_I = Digest(Khb_I,Tb'_I||C_{I+1})
N_{I+1} = T_I ^ D(Ktb_I,T_I ^ N_I)
Tb'_I = T_I
-
- And updates the encryption keys according to the strategy
+
+ And updates the encryption keys according to the strategy
chosen for 6.3.
-
+
If the payload is recognized (see Section 4.1),
then:
The sending node is I. Process the payload!
-
-
+
+
6.5. Recognizing and authenticating a relay cell (replacing 4.1.1 above):
-
- Authentication in the forward direction is done as follows:
+
+ Authentication in the forward direction is done as follows:
T_{n+1} = Digest(Khf_n,C_{n+1})
Tag = T_{n+1} ^ D(Ktf_n,T_{n+1} ^ N_{n+1})
-
+
The message is recognized and authenticated
(i.e., M = C_{n+1}) if and only if Tag = 0.
-
- No changes are required to the authentication process when the relay
+
+ No changes are required to the authentication process when the relay
cell is sent backwards.
-
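The forward-secrecy change in Section 6 boils down to running the stream cipher two blocks past the end of the cell and using those 256 extra bits as the next layer key; because both endpoints compute the same keystream, they ratchet in lockstep and can then erase the old key. A sketch with an HMAC-based counter-mode stream standing in for the real CTR-mode cipher (the primitive choice is an assumption):

```python
import hmac, hashlib

BLOCK = 16

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def keystream(key: bytes, nonce: bytes, nbytes: int) -> bytes:
    # HMAC-SHA256 in counter mode stands in for the CTR-mode cipher.
    out, ctr = b"", 0
    while len(out) < nbytes:
        out += hmac.new(key, nonce + ctr.to_bytes(4, "big"),
                        hashlib.sha256).digest()
        ctr += 1
    return out[:nbytes]

def process_ratchet(key: bytes, nonce: bytes, data: bytes):
    """En/decrypt `data` (XOR with keystream is its own inverse) and
    mint the next 256-bit key K' from two extra keystream blocks."""
    pad = keystream(key, nonce, len(data) + 2 * BLOCK)
    return xor(data, pad[:len(data)]), pad[len(data):]

Kf, N = b"k" * 32, b"n" * 16
C, k_next_op = process_ratchet(Kf, N, b"payload" * 10)  # at the OP
M, k_next_or = process_ratchet(Kf, N, C)                # at the OR
assert M == b"payload" * 10 and k_next_op == k_next_or
# The obsolete Kf must now be erased in a non-recoverable way.
```

Since the new key is a function of the old key and the cell's nonce, compromising a node later reveals nothing about keys already replaced, which is exactly the forward-secrecy property the section claims.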