[tor-qa] 3.5.4-meek-1 (meek bundles with browser TLS camouflage)

David Fifield david at bamsoftware.com
Sat Apr 12 04:34:54 UTC 2014


On Fri, Apr 11, 2014 at 10:08:41PM -0400, Roger Dingledine wrote:
> On Fri, Apr 11, 2014 at 02:45:56PM -0700, David Fifield wrote:
> > Please try these bundles, which are configured to run the meek transport
> > automatically. What's different about these than past meek bundles, is
> > that they use a web browser extension to make the HTTPS requests, so
> > that the TLS layer looks like Firefox. They are built on top of the
> > recent 3.5.4 release so they have the OpenSSL Heartbleed fix.
> > 
> > https://people.torproject.org/~dcf/pt-bundle/3.5.4-meek-1/
> > 
> > With luck, the bundles will work for you without any special
> > configuration. If you look at your network traffic while you are using
> > them, you will see some HTTPS connections to www.google.com, and nothing
> > to any Tor bridge or any protocol other than HTTPS.
> 
> Yep. Looks good, works out of the box for me. Nice.
> 
> A bit slower than normal Tor use -- what's the overhead that meek
> introduces in terms of number of bytes that I send/receive compared to
> the 'real' underlying bytes? Or might this slowness be that my cpu and
> local computer is quite slow, so it's taking a while to load a whole
> second firefox in the background? Or maybe this is the wrong list to
> ask this question. :)

For every chunk of data that the transport reads from tor, it adds an
HTTP header plus another layer of TLS. A chunk is usually 586 bytes, but
it could be bigger if more than one cell is buffered between tor and the
transport. It can be many kilobytes during bootstrapping or bulk
downloads.
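
To make the byte overhead concrete, the per-chunk wrapping looks
roughly like the sketch below. This is not the actual meek source:
the names (sendChunk, frontURL, sessionID) are made up for
illustration, and the session header is only an approximation of how
chunks get tied back to a single stream.

    package main

    import (
        "bytes"
        "io"
        "net/http"
    )

    // sendChunk wraps one chunk read from tor in an HTTP POST. The
    // request headers and the outer TLS record are the per-chunk
    // overhead described above.
    func sendChunk(client *http.Client, frontURL, sessionID string,
        chunk []byte) ([]byte, error) {
        req, err := http.NewRequest("POST", frontURL, bytes.NewReader(chunk))
        if err != nil {
            return nil, err
        }
        // A session header lets the server reassemble chunks into one stream.
        req.Header.Set("X-Session-Id", sessionID)
        resp, err := client.Do(req) // HTTPS, so this adds another TLS layer
        if err != nil {
            return nil, err
        }
        defer resp.Body.Close()
        // The response body carries data flowing back toward tor.
        return io.ReadAll(resp.Body)
    }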

One source of extra latency is the additional hop your traffic takes
through App Engine. My guess is that this matters more than byte
overhead.

The other source of extra latency is serialization of requests. You have
to wait for the response to your first request before you can issue a
second. My guess is that this matters more than the extra hop. Here are
some latency measurements we made:
https://trac.torproject.org/projects/tor/ticket/10935#comment:7
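
Continuing the sketch above (and with the same caveat that it is
illustrative, not the real code), the serialization looks like a loop
where each chunk blocks on a full round trip through App Engine
before the next one can go out:

    // pollLoop sends chunks read from tor one at a time. Because each
    // call to sendChunk blocks until the HTTP response comes back,
    // every chunk pays a full round trip before the next is sent.
    func pollLoop(client *http.Client, frontURL, sessionID string,
        fromTor <-chan []byte, toTor chan<- []byte) {
        for chunk := range fromTor {
            reply, err := sendChunk(client, frontURL, sessionID, chunk)
            if err != nil {
                return
            }
            toTor <- reply
        }
    }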

The second browser instance probably doesn't matter much. The
serialization and deserialization between the transport plugin and the
browser extension are more expensive CPU-wise than I anticipated, but I
don't think it's a bottleneck. Starting up the second browser is a
one-time cost.
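
To give a sense of where that CPU goes, the per-chunk marshalling has
roughly this shape. I'm assuming a length-prefixed JSON encoding with
a base64 body here (same package as the first sketch, plus
encoding/json and encoding/binary); the real helper protocol may
differ in detail, but the point is that every chunk gets JSON- and
base64-encoded on one side and decoded on the other:

    // helperRequest is what the transport plugin would hand to the
    // browser extension for each chunk. encoding/json base64-encodes
    // the []byte body, so each chunk costs a JSON encode plus a base64
    // encode on one side and the reverse on the other.
    type helperRequest struct {
        Method string            `json:"method"`
        URL    string            `json:"url"`
        Header map[string]string `json:"header"`
        Body   []byte            `json:"body"`
    }

    // writeHelperRequest frames one request as a 4-byte big-endian
    // length prefix followed by the JSON blob.
    func writeHelperRequest(w io.Writer, req helperRequest) error {
        buf, err := json.Marshal(req)
        if err != nil {
            return err
        }
        var length [4]byte
        binary.BigEndian.PutUint32(length[:], uint32(len(buf)))
        if _, err := w.Write(length[:]); err != nil {
            return err
        }
        _, err = w.Write(buf)
        return err
    }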

> Also, is there a reason your bridge needs to still be on 0.2.4.6-alpha-dev?
> https://globe.torproject.org/#/bridge/49395CD2424DF8ACFB4D580548A315A199EBD30F

There's no good reason. I'll take care of it. But we are actually
looking for someone with experience running fast bridges to run a fast
bridge located close to App Engine. There's no reason it has to be me,
and it's better if it's not me.

David Fifield

