[ux] collaborating on the redesign of the connection to Tor in Tor Browser and Tails

Cecylia Bocovich cohosh at torproject.org
Fri Apr 16 23:51:21 UTC 2021


Hi sajolida! Thanks for the extensive write-up and your thoughts on this
work. I have a few comments inline, alongside snippets from discussions
in this thread.

On 2021-03-03 6:15 p.m., sajolida wrote:
> Consent
> -------
>
> Both designs include a step when the user gives their consent before
> Tor Browser or Tails tries to connect to Tor automatically, and detect
> if bridges are needed for example.
>
> Tor Browser:
>
> https://gitlab.torproject.org/tpo/anti-censorship/trac/-/issues/40004/designs/desktop-IMG_0.0_-_Quickstart_Consent.png
>
>
> Tails:
>
> https://gitlab.tails.boum.org/tails/blueprints/-/wikis/network_connection#the-million-dollar-question
>
>
> As you can see, the way that Tails will ask for this consent is much
> more explicit. This difference already exists somewhat in the current
> implementations, so maybe it's a deeper product design choice.
>
> For us, it is very important to allow our users to hide the fact that
> they are using Tails from their local network (at least as much as
> possible). This can be critical for people in places where using Tor
> is criminalized or only rarely used, but also for people who might be
> using Tails from their workplace or trying to avoid domestic
> surveillance, parental controls, etc.
>
> Bridges are not only good to avoid censorship but also to avoid being
> identified as a Tor user.
>
> The fact that this question is not as explicit in the design of the
> upcoming Quickstart procedure of Tor Browser makes me think that this
> is not so much of a design goal for Tor Browser.
>
> Did I understand correctly?
>
> Otherwise, if we share the same goal, then I think that our design
> should look more alike in order to best serve this class of users.
> I'm happy to discuss this more in depth.

Hiding the usage of Tor is a use case for anti-censorship tools that
I've seen mentioned often, but it makes me a little nervous. While in
some cases this is a side effect of anti-censorship tools (if a censor
knew your connection was a Tor connection, they would just block it,
wouldn't they?), it is not the threat model for which these tools are
developed.

For many obfuscation methods, it is considered an acceptable (and
perhaps even expected) outcome that a censor will eventually discover
and block bridges, even non-default ones. This happens all the time.
Part of our work is making sure that fresh bridges are delivered to
users if and when it does.
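For context, delivering a fresh bridge usually ends with the user applying a new bridge line to their Tor configuration. A minimal torrc sketch of what that looks like, with illustrative placeholders (the address, fingerprint, and certificate below are made up, not a working bridge):

```
# torrc fragment -- illustrative placeholders, not a real bridge
UseBridges 1
ClientTransportPlugin obfs4 exec /usr/bin/obfs4proxy
Bridge obfs4 192.0.2.3:443 <FINGERPRINT> cert=<CERT> iat-mode=0
```

Once a censor blocks that address, only the Bridge line needs to change, which is why distribution of fresh bridges can keep pace with blocking.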

Also, anti-censorship tools are frequently evaluated against state-level
censors whose traffic filtering has to scale to an ISP's worth of
traffic. Attacks that are impractical for a major ISP may be much more
practical on a home or office network, and these tools are generally not
evaluated against an adversary performing a targeted attack.

I would like to echo Antonela's comment here:

"If Tor is criminalized in some country, do we trust _just_ in a bridge
to keep the user safe? Are you oversimplifying the threat model or
offering technical solutions to social problems?"

It's very possible that our existing obfuscation methods are useful for
some of these scenarios and preferable to using vanilla Tor. It's also
possible that some obfuscation methods are safer than others in these
scenarios, but without more detailed knowledge of the specific threat
models and contexts, I'm nervous about making that claim.

On 2021-04-06 8:51 a.m., Antonela Debiasi wrote:
> On 3/31/21 9:54 p.m., sajolida wrote:
>
>> The "Hide Tor" option that we designed for Tails might prevent you
>> from being busted by your husband or your boss. 
> This part suggests that using Tor is suspicious, which is a narrative we are trying to demystify. We are probably not there yet, and we will not get any closer if we ship a feature that reinforces this idea.
>
> Again, I'm curious about what the community thinks about it. I'll probably bring this to discussion with the anti-censorship team to collect their perspective as well.

Yeah, I can see how this is something Tor Project would want to take a
different stance on than Tails.

To add another anecdote, Amazon, Google, and Microsoft have decided they
do not want their infrastructure to be used for domain fronting (the
obfuscation method used by meek) because it is "abused by bad actors and
threat actors engaging in illegal activities" [0]. Shipping
anti-censorship options in our UI for the purpose of "hiding" rather
than "Internet freedom" sounds spicy :)


The UX research on consent is very interesting, and the threat models
you listed are important ones. I would like to see more user stories and
research on these use cases for anti-censorship tools, including
questioning whether anti-censorship tools are the best path forward for
them.

- Cecylia

[0]
https://www.microsoft.com/security/blog/2021/03/26/securing-our-approach-to-domain-fronting-within-azure/