[anti-censorship-team] Backward-compatible Turbo Tunnel in Snowflake

David Fifield david at bamsoftware.com
Tue Feb 4 04:22:16 UTC 2020


On Fri, Jan 31, 2020 at 07:24:48PM -0700, David Fifield wrote:
> == Backward compatibility ==
> 
> The branch as of commit 07495371d67f914d2c828bbd3d7facc455996bd2 is not
> backward compatible with the mainline Snowflake code. That's because the
> server expects to find a ClientID and length-prefixed packets, and
> currently deployed clients don't work that way. However, I think it will
> be possible to make the server backward compatible. My plan is to
> reserve a distinguished static token (64-bit value) and have the client
> send that at the beginning of the stream, before its ClientID, to
> indicate that it uses Turbo Tunnel features. The token will be selected
> to be distinguishable from any protocol that non–Turbo Tunnel clients
> might use (i.e., Tor TLS). Then, the server's ServeHTTP function can
> choose one of two implementations, depending on whether it sees the
> magic token or not.

https://gitweb.torproject.org/user/dcf/snowflake.git/commit/?h=turbotunnel&id=5cb64396b95fd16587c3595ff4c8265ccdb94018

Here's a commit that restores compatibility with current Snowflake
clients. It works the way I described: the client sends a magic token at
the beginning, one that non–Turbo Tunnel clients will never send. The
server looks for the token, and depending on whether the token is
present or not, chooses whether to handle each WebSocket connection in
Turbo Tunnel mode or the old one-session-per-WebSocket mode. The magic
token is 1293605d278175f5, which I got from Python os.urandom(8).
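
For reference, here is a minimal Go sketch of the dispatch idea, not the
code from the commit linked above. Only the 8-byte token value is taken
from this mail; the package and handler names are hypothetical
placeholders.
	package tokendispatch

	import (
		"bytes"
		"io"
	)

	// The magic token 1293605d278175f5, as raw bytes.
	var magicToken = [8]byte{0x12, 0x93, 0x60, 0x5d, 0x27, 0x81, 0x75, 0xf5}

	// dispatch reads the first 8 bytes of the stream and chooses a mode
	// based on whether they match the token.
	func dispatch(conn io.ReadWriteCloser) error {
		var prefix [8]byte
		if _, err := io.ReadFull(conn, prefix[:]); err != nil {
			return err
		}
		if prefix == magicToken {
			// Turbo Tunnel mode: a ClientID and length-prefixed
			// packets follow the token.
			return handleTurboTunnel(conn)
		}
		// Old one-session-per-WebSocket mode: the 8 bytes already read
		// are part of the client's Tor TLS stream, so put them back in
		// front of the remaining data.
		r := io.MultiReader(bytes.NewReader(prefix[:]), conn)
		return handleLegacy(r, conn)
	}

	// Hypothetical handlers, stubbed out here; the real server would run
	// the Turbo Tunnel session layer or the plain relay loop.
	func handleTurboTunnel(conn io.ReadWriteCloser) error { return nil }

	func handleLegacy(r io.Reader, conn io.ReadWriteCloser) error { return nil }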

The addition of a magic token prefix very slightly constrains traffic
shaping. The client is still free to shape its traffic however it likes:
it doesn't have to send the whole 8-byte token at once; and it can put
any amount of padding after the token, because once the server sees the
token, it will know how to interpret the padding scheme that follows.
The only constraint is on the server, which cannot start using padding
until it receives all 8 bytes of the token and is sure that it is
talking to a Turbo Tunnel–aware client. Be aware that I'm not taking
advantage of any of the traffic shaping features made possible by the
packet encapsulation and padding layer, so the fact that the client
sends 8 bytes (the token) then another 8 bytes (the ClientID) will be
visible in packet sizes.
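
Continuing the sketch above (same caveats: writePreamble is a made-up
name, not a function in the branch), the client's only fixed framing is
the token followed by the ClientID; the two writes below could just as
well be split or combined, within the constraints described above.
	// writePreamble sends the 8-byte magic token and then the 8-byte
	// ClientID. Nothing forces these 16 bytes into a single write; the
	// client may split them or append padding afterward.
	func writePreamble(w io.Writer, clientID [8]byte) error {
		if _, err := w.Write(magicToken[:]); err != nil {
			return err
		}
		_, err := w.Write(clientID[:])
		return err
	}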

Here is how to test it out with both kinds of clients.
	git remote add dcf https://git.torproject.org/user/dcf/snowflake.git
	git checkout -b turbotunnel --track dcf/turbotunnel
	for d in client server broker proxy-go; do (cd $d && go build); done
Run the broker:
	broker/broker --disable-tls --addr 127.0.0.1:8000
Run a proxy:
	proxy-go/proxy-go --broker http://127.0.0.1:8000/ --relay ws://127.0.0.1:8080/
Run the server:
	tor -f torrc.server
	# contents of torrc.server:
	DataDirectory datadir-server
	SocksPort 0
	ORPort 9001
	ExtORPort auto
	BridgeRelay 1
	AssumeReachable 1
	PublishServerDescriptor 0
	ServerTransportListenAddr snowflake 0.0.0.0:8080
	ServerTransportPlugin snowflake exec server/server --disable-tls --log snowflake-server.log
Run a turbotunnel client:
	tor -f torrc.client
	# contents of torrc.client:
	DataDirectory datadir-client
	UseBridges 1
	SocksPort 9250
	ClientTransportPlugin snowflake exec client/client --url http://127.0.0.1:8000/ --ice stun:stun.l.google.com:19302 --log snowflake-client.log
	Bridge snowflake 0.0.3.0:1
Kill the turbotunnel client, and switch back to the master branch,
keeping the broker, proxy, and server running. Rebuild the client and
run it again.
	git checkout master
	(cd client && go build)
	tor -f torrc.client


