[tor-dev] Control-port filtering: can it have a reasonable threat model?

Yawning Angel yawning at schwanenlied.me
Mon Apr 3 20:55:22 UTC 2017


For what it's worth, since there's a filter that's shipped and
nominally supported "officially"...

On Mon, 3 Apr 2017 14:41:19 -0400
Nick Mathewson <nickm at torproject.org> wrote:
> But I could be wrong!  Maybe there are subsets that are safer than
> others.

https://gitweb.torproject.org/tor-browser/sandboxed-tor-browser.git/tree/src/cmd/sandboxed-tor-browser/internal/tor

The threat model I used when writing it was, "firefox is probably owned
by the CIA/NSA/FBI/FSB/DGSE/AIVD/GCHQ/BND/Illuminati/Reptilians, the
filter itself is trusted".  There's a feature vs. anonymity tradeoff,
so it's up to the user to enable the circuit display if they want
firefox to have visibility into certain things.

Allowed (Passed through to the tor daemon):

 * `SIGNAL NEWNYM`.  If neither `addressmap_clear_transient();`
   nor `rend_client_purge_state();` is important, the filter could
   disallow the call entirely, since it already rewrites the SOCKS
   isolation for all connections to the SOCKSPort.

   At one point this was entirely synthetic and not propagated.  It's
   only a huge problem if people are not using the containerized tor
   instance.

   It's worth noting that even if I change the behavior to just change
   the SOCKS auth, a misbehaving firefox can still force new circuits
   for itself.

   The sandbox code could pop up a modal dialog box asking if the user
   really wants to "New Identity" or "New Tor Circuit for this Site",
   so that "scary" behavior requires manual user intervention (since
   torbutton's confirmation is probably subverted and not to be
   trusted).

 * (Optional) `GETCONF BRIDGE`.  The Tor Browser circuit display uses
   this to filter out Bridges from the display.  Since the circuit
   display is optional, this only happens if the user explicitly
   decides that they want the circuit display.

 * (Optional) `GETINFO ns/id/`.  Required for the circuit display.
   Mostly harmless.

 * (Optional) `GETINFO ip-to-country/`.  Required for the circuit
   display.  Harmless.  Could be handled by the filter.
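The SOCKS isolation rewrite mentioned under `SIGNAL NEWNYM` can be sketched roughly as follows: the filter mints fresh random SOCKS5 credentials per identity, so new streams land in a new isolation group and tor builds new circuits for them.  This is a hypothetical sketch (the `newIsolationCreds` name and credential format are mine), not the actual sandboxed-tor-browser code:

```go
package main

import (
	"crypto/rand"
	"encoding/hex"
	"fmt"
)

// newIsolationCreds mints fresh random SOCKS5 auth credentials.
// Handing these to firefox after a NEWNYM places all subsequent
// streams into a new isolation group (IsolateSOCKSAuth), so tor
// builds new circuits for them without the filter having to touch
// any other daemon state.
func newIsolationCreds() (user, pass string, err error) {
	b := make([]byte, 16)
	if _, err = rand.Read(b); err != nil {
		return "", "", err
	}
	return "sandbox-" + hex.EncodeToString(b[:8]), hex.EncodeToString(b[8:]), nil
}

func main() {
	user, pass, err := newIsolationCreds()
	if err != nil {
		panic(err)
	}
	fmt.Printf("SOCKS5 auth: %s / %s\n", user, pass)
}
```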

Synthetic (Responses generated by the filter):

 * `PROTOCOLINFO`.  Not used by Tor Browser, even though it should be.
   Everything except the tor version is synthetic.

 * `AUTHENTICATE`.  Just returns success since the filtered control
   port does not require authentication.

 * `AUTHCHALLENGE`.  Just returns an error.  See `AUTHENTICATE`.

 * `QUIT`.  Only prior to the `AUTHENTICATE` call.  Not actually used
   by Tor Browser ever.

 * `GETINFO net/listeners/socks`.  torbutton freaks out without this.
   The response is synthetically generated to match what torbutton
   expects.

 * (Optional) `SETEVENTS STREAM`.  Required for the circuit display.
   Events are synthetically generated to only include streams that
   firefox created.

 * (Optional) `GETINFO circuit-status`.  Required for the circuit
   display.  Responses are synthetically generated to only include
   circuits that firefox created.
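A rough sketch of how a few of these synthetic replies might look.  The reply strings and the `syntheticResponse` helper are illustrative assumptions of mine, not the exact output of the real filter:

```go
package main

import (
	"fmt"
	"strings"
)

// syntheticResponse returns a canned control-port reply for the
// commands the filter answers itself instead of forwarding to tor.
// The reply text is illustrative only.
func syntheticResponse(cmd string) (reply string, handled bool) {
	switch {
	case strings.HasPrefix(cmd, "AUTHENTICATE"):
		// The filtered port requires no real authentication.
		return "250 OK", true
	case strings.HasPrefix(cmd, "AUTHCHALLENGE"):
		// Challenge/response auth is pointless here; just error out.
		return "512 AUTHCHALLENGE not supported", true
	case cmd == "GETINFO net/listeners/socks":
		// torbutton insists on seeing a SOCKS listener address.
		return "250-net/listeners/socks=\"127.0.0.1:9150\"\r\n250 OK", true
	default:
		// Not one of the synthetic commands.
		return "", false
	}
}

func main() {
	reply, _ := syntheticResponse("AUTHENTICATE")
	fmt.Println(reply)
}
```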

Denied:

 * Everything else.
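Putting the three lists together, the default-deny dispatch could be sketched like so.  The `filterCommand` helper and its `circuitDisplay` flag are hypothetical names of mine; the command strings follow the lists above, and this is not the real implementation:

```go
package main

import (
	"fmt"
	"strings"
)

type action int

const (
	actionDeny      action = iota // everything else
	actionAllow                   // passed through to the tor daemon
	actionSynthetic               // answered by the filter itself
)

// filterCommand implements the default-deny policy sketched above.
// circuitDisplay corresponds to the user opting in to the circuit
// display, which unlocks the "(Optional)" commands.
func filterCommand(line string, circuitDisplay bool) action {
	cmd := strings.TrimSpace(line)
	switch {
	case cmd == "SIGNAL NEWNYM":
		return actionAllow
	case strings.HasPrefix(cmd, "AUTHENTICATE"),
		strings.HasPrefix(cmd, "AUTHCHALLENGE"),
		strings.HasPrefix(cmd, "PROTOCOLINFO"),
		cmd == "GETINFO net/listeners/socks":
		return actionSynthetic
	case circuitDisplay && (strings.HasPrefix(cmd, "GETCONF BRIDGE") ||
		strings.HasPrefix(cmd, "GETINFO ns/id/") ||
		strings.HasPrefix(cmd, "GETINFO ip-to-country/")):
		return actionAllow
	case circuitDisplay && (cmd == "SETEVENTS STREAM" ||
		cmd == "GETINFO circuit-status"):
		return actionSynthetic
	default:
		return actionDeny
	}
}

func main() {
	fmt.Println(filterCommand("SIGNAL NEWNYM", false) == actionAllow)
}
```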

> So above, I see a few common patterns:
>   * Many restrictive filters still let the application learn enough
> about the user's behavior to deanonymize them.  If the threat model is
> intended to resist a hostile application, then that application can't
> be allowed to communicate with the outside world, even over Tor.

  "The only truly secure system is one that is powered off, cast in a
   block of concrete and sealed in a lead-lined room with armed guards -
   and even then I have my doubts." -- spaf

>   * The NEWNYM-based side-channel above is a little scary.

I don't think this is solvable while giving the application the ability
to re-generate circuits.  Maybe my modal doom dialog box should run
away from the user's mouse cursor, and play klaxon sounds too.

The use model I officially support is "sandboxed-tor-browser launches a
tor daemon in a separate container dedicated to firefox".  People who
do other things get what they deserve.

> And where do we go forward from here?

If it were up to me, I'd re-write the circuit display to only show the
exit(s) when applicable, since IMO firefox is not to be trusted with
the IP address of the user's Guard.

But the circuit display defaults to off when running sandboxed, so
people who enable it presumably fully understand the implications of
doing so.

> The filters above seem to have been created by granting the
> applications only the commands that they actually need, and by
> filtering all the other commands.  But if we'd like filters that
> actually provide some security against hostile applications using the
> control port, we'll need to take a different tactic: we'll need to
> define the threat models that we're trying to work within, and see
> what we can safely expose under those models.

"Via the control port, a subverted firefox can get certain information
about what firefox is doing, if the user configures it that way;
otherwise, all it can do is repeatedly NEWNYM" is what I think I ended
up with.

Though I have the benefit of being able to force all application network
traffic through code I control, which makes life easier.

Regards,

-- 
Yawning Angel
