On Thu, Jul 05, 2018 at 06:46:21PM +0000, Tom Ritter wrote:
On 3 July 2018 at 18:03, Matthew Finkel <matthew.finkel@gmail.com> wrote:
Hi All,
This is the beginning of a conversation about creating a plan for moving towards sandboxing Tor Browser on every platform.
Thanks for this Matt! I started typing and kept typing and just kept typing and well... sorry.
Thanks for this valuable response.
Over the last few years, a Sandboxed Tor Browser existed only on Linux[0] (created and maintained by Yawning Angel). However, Tor Browser aims at providing a private browser on all supported platforms (Microsoft Windows, Apple Mac OS X, GNU/Linux, and Android (AOSP))[1][2], which means we must provide a sandboxed run-time environment on all of them.
[snip] (Read Tom's original mail for the distinction between sandboxing different parts of Firefox)
Firefox has not done any work on sandboxing the parent process. All the discussions I have with folks at Tor are about protecting the parent process for additional defense in depth. Thus we're assuming that a) An attacker has exploited the content process but cannot achieve their goal from that position and b) decides to attack the parent process.
Neither of these assumptions is necessarily true, which means we need to address the opposite of those assumptions _too_; and we should prioritize that work in relation to the larger architectural goal of "How do we sandbox the parent process".
Right, my perspective on this is achieving both of these goals. In general, 1) can we sandbox each process better and 2) can we create an environment for those processes where, in the event of a successful exploit, the attacker gains the smallest amount of additional information possible. While we may be able to make small improvements on (1) in the current Firefox codebase, it seems (2) is more difficult and is a much larger goal than the browsers have historically targeted.
To the point of (b): This document focuses on sandboxing as supporting the goals of the design doc, which I think is a fine approach. But we should be aware of other approaches to evaluating sandboxing and one is from the capability reduction standpoint. A lot of sandbox escapes today occur by getting code execution in the content process and then exploiting the kernel. So identifying the features on each platform that can reduce kernel attack surface, and then seeing what prevents us from using these features (starting in the content process but eventually including the parent process) would be a worthwhile exercise. Similarly to focusing on the kernel, we can look at what other permissions are needed (by the content and eventually the parent) and determine which ones are the scariest and work to either remove them entirely, or move them to a separate process.
Yes, this is a good approach. My original email was not as precise about my intentions as it should've been. I'll mention more later.
To the point of (a), we should consider what goals an attacker has when exploiting Tor Browser. Enumerating attacks is likely to miss some, but not considering attacks means we're trying to defend ourselves from 'everything' with no priority. I think I would classify attacks into High Impact and Opportunistic Impact, and focus specifically on "code execution achieved through an exploit" (and not things like 'exploiting' a fingerprinting vector we have not solved in the web platform.) We may miss some attacks; but the ones we can list today are, I think, accurate in their impact.
High Impact goals are ones that do one or more of the following: achieve a proxy bypass, temporarily corrupt tor's configuration leading to a direct-to-relay connection (by changing the user's bridge/guard/DirAuths), permanently corrupt the user's machine (by installing malware or affecting tor's configuration), or (less bad without a proxy bypass but still bad) retrieve an identifier that uniquely and persistently identifies a user (MAC address, serial) and exfiltrate it.
Opportunistic goals are ones that compromise the browser process as it currently runs (without persistence) and rely on the user performing an action using the browser that discloses information. For example, reading other websites' cookies is only useful if the user is active on other sites in a way that identifies them.
When one is able to achieve High Impact goals from the Content Process
- it seems to me that engineering effort should be focused on closing
_those_ holes first, before trying to build a solution for the parent process. (I'm not saying we shouldn't plan for the parent process though!)
Having Mozilla's help identifying what is needed, and where we should start, in this area will be extremely helpful.
I could talk more about what actions an attacker can perform if they exploit the Content Process (both currently and post-Fission https://wiki.mozilla.org/Project_Fission ) and how they can attack the Parent Process; but I'm going to skip that for now. I would be happy to go into it if it seems helpful.
Unfortunately, each operating system provides a unique set of sandboxing techniques and capabilities, so we must work with the facilities we are given. In some cases, we may need to be creative about how we achieve our goals.
I do not have all the answers, and there are some open questions below.
On Windows, we have the Windows integrity mechanism[3], Windows Containers[4], SetProcessMitigationPolicy[5], App Container[6], and maybe some others. Some of this functionality is already used by Firefox.
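For concreteness, here is a minimal sketch (my own illustration, not code from Firefox's actual sandbox) of one of these mechanisms: opting the current process out of dynamic code generation via SetProcessMitigationPolicy. This is the mitigation behind Arbitrary Code Guard and requires Windows 8.1 or later:

    #include <windows.h>

    /* Sketch: forbid this process from creating or modifying executable
     * memory (the mitigation behind Arbitrary Code Guard). Call this
     * before processing any untrusted input; it cannot be undone. */
    static BOOL disable_dynamic_code(void)
    {
        PROCESS_MITIGATION_DYNAMIC_CODE_POLICY policy = {0};
        policy.ProhibitDynamicCode = 1;
        return SetProcessMitigationPolicy(ProcessDynamicCodePolicy,
                                          &policy, sizeof(policy));
    }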
For Windows Container I found the following helpful reading: https://docs.microsoft.com/en-us/virtualization/windowscontainers/about/
In particular, they require Windows Server or Windows 10 Professional or Enterprise. Containers are awesome, and in particular Hyper-V based containers are essentially Virtual Machines with their own kernel and therefore we could do really powerful things with them. But I think we need to wait a few years until it becomes more widely available.
Ugh. Okay, I read the About page, but I didn't read the "Running your first container" close enough and I missed the Professional/Enterprise requirement.
There's https://www.bromium.com/ which operates in this space, and despite having a bunch of slick marketing and buzzwords and stuff actually has a really powerful technology core.
That website is...hard to read. But yes, ideally, that is where I think private browsing mode should go in the future, where the browser is contained by a cross-platform VM and we have a minimal trusted computing base with a limited attack surface. In this way, the browser itself only needs one set of sandboxing techniques. All platform-specific mitigations and restrictions are in an abstraction layer at the VM-Kernel interface.
App Container is primarily for Windows Store apps, but can also be used for 'Legacy Apps' per https://docs.microsoft.com/en-us/windows/desktop/secauthz/appcontainer-for-l... . I can't quite tell how useful it is.
I'm hoping we can take advantage of this, but I haven't tested it. Most of this email is based on theoretical sandboxing options based on my research - I'm certainly not an expert on many areas here.
The Chromium/Firefox sandbox on Windows is composed of the interaction of SetProcessMitigationOptions (of which there are many options), Integrity Level (both an 'Initial' and a 'Delayed') as well as Job Level, Access Token Level, Alternate Desktop, and Alternate Windows Station.
I'm hopeful we can use more of these than we do currently.
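As an illustration of the Job Level piece, a hedged sketch of a job object that cuts a process off from other processes' UI handles and the clipboard (the exact restriction set Firefox/Chromium apply differs per process type, so treat this as a shape, not a policy):

    #include <windows.h>

    /* Sketch: a job object with UI restrictions. Attach the sandboxed
     * process to it with AssignProcessToJobObject(). */
    static HANDLE make_restrictive_job(void)
    {
        HANDLE job = CreateJobObjectW(NULL, NULL);
        if (job == NULL)
            return NULL;

        JOBOBJECT_BASIC_UI_RESTRICTIONS ui = {0};
        ui.UIRestrictionsClass = JOB_OBJECT_UILIMIT_HANDLES
                               | JOB_OBJECT_UILIMIT_READCLIPBOARD
                               | JOB_OBJECT_UILIMIT_WRITECLIPBOARD
                               | JOB_OBJECT_UILIMIT_DESKTOP;
        if (!SetInformationJobObject(job, JobObjectBasicUIRestrictions,
                                     &ui, sizeof(ui))) {
            CloseHandle(job);
            return NULL;
        }
        return job;
    }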
On Mac OS X, there are two sandboxing techniques available. The first is Seatbelt[7], which Apple deprecated in favor of the second, code signing entitlements[8]. Unfortunately, the code signing entitlements are not as fine-grained as those Seatbelt provides, but we can enable different entitlements per "target"[9]. I don't know if this will be difficult for us. Therefore, we should utilize the code-signing restrictions where they are appropriate, but we should follow Safari[10], Chromium[11], and Firefox[12][13] by applying restrictive Seatbelt policies where applicable.
Can you apply both Seatbelt and Code Signing Entitlements?
Unknown, good question.
Also: have you experimented with/confirmed the restriction mentioned at All Hands, that a process with a Seatbelt policy cannot launch a process with a more restrictive Seatbelt policy? (I think mcs ran into this with the Mac sandbox policy that used to ship with Tor Browser Alpha.) This is a significant problem for any Apple-based sandboxing design, and frankly one where I think browser makers should mount a semi-organized petition to Apple for a fix.
Yes - I wonder if the sandbox_init() call failed because it was denied by our initial sb policy. But, at the same time, I wonder: after adding a first layer of sandboxing that (somehow) allows calling sandbox_init() again, does this further restrict the current sandbox policy, or does it overwrite the original policy?
The man page (as available online) does not describe this behavior other than saying calling sandbox_init() places the current process into a sandbox(7). And that man page only says "New processes inherit the sandbox of their parent."
https://www.manpagez.com/man/3/sandbox_init/ https://www.manpagez.com/man/7/sandbox/
Complaining about this design is most likely a dead-end; however, maybe that shouldn't stop us from making some noise.
"The sandbox_init() and sandbox_free_error() functions are DEPRECATED. Developers who wish to sandbox an app should instead adopt the App Sand- box feature described in the App Sandbox Design Guide."
Additionally; OSX 10.14 Mojave provides new sandboxing features; including ones that feature-match Microsoft's Arbitrary Code Guard and Code Integrity Guard. I don't know much about these; but they are powerful and in the future, Mozilla will probably be interested in them. (Future being 1-2+ years from now.)
That sounds worth investigating.
On GNU/Linux, we can use the namespacing and secure computing (seccomp) facilities the kernel exposes to userspace. Sandboxed Tor Browser on Linux already shows how these can be combined to form a sandbox. In particular, we can use bubblewrap[14], if it is available, as a setuid sandboxing helper (for when unprivileged user namespaces are not enabled). In addition, we can reduce the syscall surface area with seccomp-BPF (a sketch follows). cgroups provide a way of limiting the resources available within the sandbox. We may also want to manually proxy/filter other system functionality (X11).
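A minimal sketch of the seccomp-BPF piece, using libseccomp for readability (link with -lseccomp; a real filter would be a long allow-list rather than this single deny rule):

    #include <seccomp.h>

    /* Sketch: kill the process on any socket(2) call. libseccomp sets
     * no_new_privs for us before loading the filter. */
    static int deny_socket_syscall(void)
    {
        scmp_filter_ctx ctx = seccomp_init(SCMP_ACT_ALLOW);
        if (ctx == NULL)
            return -1;
        if (seccomp_rule_add(ctx, SCMP_ACT_KILL, SCMP_SYS(socket), 0) < 0 ||
            seccomp_load(ctx) < 0) {
            seccomp_release(ctx);
            return -1;
        }
        seccomp_release(ctx);
        return 0;
    }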
Last, but not least, on Android we begin with a fairly strict sandbox provided by its permission model[15], distinct per-app users, and SELinux policies. Because every app runs as a distinct user, and each has its own storage directories, we get Linux's DAC on the file system for "free". In addition, we can use some of the same techniques available on GNU/Linux, namely seccomp-bpf and namespaces, if they are available. Android provides privilege and permission isolation within an app by using Services. With some refactoring, we can move some parts of Firefox into isolated, permissionless services[15].
Something else to consider is how each of these sandboxing technologies (on each platform) is enabled. Some are applied at runtime by the process itself: the process starts more privileged and then drops privileges. Others are applied by the OS before execution, and the process never has the elevated privileges.
If a process needs to do something that the sandbox should prevent - like creating a socket connection to another process - there are two ways (I think only two) to achieve that goal. The first way is that a process which _has_ permission gives a resource (like a socket connection) to the restricted process. The second way is that the restricted process starts with less restrictive permissions, does the privileged action (like creating a socket), and then applies the sandbox (either in whole or an additional component of it) to itself - ideally before processing any user input that could be malicious.
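To illustrate the first way on a Unix-like platform, a hedged sketch: the privileged parent creates a socket pair and hands one end to the restricted child, which then never needs permission to create sockets itself:

    #include <sys/socket.h>
    #include <unistd.h>

    /* Sketch: parent creates the IPC channel; the child only receives
     * its end and can be denied socket(2) entirely. */
    static int spawn_restricted(void)
    {
        int sv[2];
        if (socketpair(AF_UNIX, SOCK_STREAM, 0, sv) != 0)
            return -1;
        pid_t pid = fork();
        if (pid == 0) {
            close(sv[0]);
            /* child: apply the sandbox here, then talk over sv[1] */
            _exit(0);
        }
        close(sv[1]);
        return sv[0];  /* parent keeps its end of the channel */
    }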
I think there's a lot of focus in this document on applying sandbox policies by the OS before/as the process starts and not following the 'drop privileges' model. But dropping privileges is much more flexible. I think some confusion with this model is the notion that the process is 'sandboxing itself' - and of course one can't trust a process that is simultaneously compromised and attempting to perform security operations, so that model must be broken - but this notion is incorrect. The process - before it is able to be compromised by attacker input, before it processes anything from the web - instructs the OS to apply the sandbox to itself, and cannot later opt out of that restriction. It's true that something could go wrong during the sandbox application and result in the process staying elevated - but I think that is easy to code defensively for and even test at runtime.
I imagine both sandboxing techniques working in parallel. I agree letting the process drop its privileges is much more flexible, and the process should do this. However, if we exclude platform limitations, there isn't a reason why a process should be created with any privileges it never needs.
I'll begin by describing the goals, as I see them, for sandboxing Tor Browser. Hopefully, this will help us evaluate the different available techniques. These goals are derived from the Design document[16] and are the means for achieving Tor Browser's end-goal.
In particular, the sandboxing techniques preserve the Security Requirements of a private browser when the browser itself fails at maintaining those criteria. By this I mean the sandbox should be designed such that if the browser process loses any of the Security properties (through a logical bug, an exploited vulnerability, etc.), the sandbox provides an additional layer of those properties and the user is not in immediate danger. The sandbox may, in some situations, improve the Privacy properties of Tor Browser, for example if a component/device is emulated instead of giving the browser raw access. We should use these mechanisms when they are available.
- Proxy Obedience
- State Separation
- Disk Avoidance
- Application Data Isolation
- Secure Updating
- Usable
- Cross-Origin Fingerprinting Unlinkability
We'll go through these one-by-one and describe the role of a sandbox and how we can achieve this on each platform.
Yes; but I would phrase things differently. I also want to call out our strengths and weaknesses.
Our goals, as I understand them, are twofold:
1) Improve the sandboxing of the browser as a whole to a higher level, to i) reduce attack surface from vulnerabilities and common attacker positions and ii) act as a safety net for design doc goals
2) Eventually, eliminate Tor Browser as a fork of Firefox. What this means in detail is (to me at least, but probably everyone) not fully determined, but I'll assume a world where there's at least the window of branding changes, pref flips, and system-like addons, but no functionality patches...
Our strengths, as I see them:
1) We are privacy research experts
2) We're willing to compromise on user freedom, web features, and performance for security and privacy
3) We're willing to experiment with more complicated user interfaces
I think we want to move away from this, right? Ideally, we'll have a simpler and more powerful user interface (with limited user choice).
4) Our use case for the browser is (probably) different. We're (probably) not seeing long-lived browser sessions and therefore have more frequent restarts. We're (probably) not seeing a lot of logged-in sessions, so restarting the browser (especially with an open-tabs-restore feature) is not as painful.
There are some (many?) users who rely on Tor Browser for their normal browsing activities, and they only use non-private browsing mode when it is actually needed. I'm not sure we can make an assumption that Tor Browser is restarted frequently (except for installing updates, hopefully).
Of course we want our browser to be usable, so we're not going to go crazy on (2) or (3), but that we allow the user to disable JavaScript, and have and expect to keep a 'security slider' are examples of where we're willing to go.
Our weaknesses, as I see them:
1) We're tracking ESR for now
2) Un-upstreamed patches are painful for us
3) We are not well suited to undertake large browser architecture projects, especially if they diverge from Firefox
4) We are neither browser architecture experts nor sandboxing experts. In particular, this makes us ill-suited to predict feature breakage or performance degradation from a hypothesized sandbox/architecture change.
I agree on these points, in general. It seems, unfortunately, that the browsers have made surprising sandboxing choices thus far. Admittedly, they have an existing user base and they worry any wrong choice will cost them market share, but that leaves Tor Browser in a precarious place. As a result, we are becoming sandboxing experts as best we can, in the limited time we have available.
While looking at this, I think it gives particular weight to aligning ourselves with future Firefox plans. And we can often begin adopting those plans before Firefox does.
I absolutely agree with this.
Note, this design assumes a launcher-based sandboxed architecture (similar to the Sandboxed Tor Browser on Linux):
              ------------------
              |    Launcher    |
              ------------------
                |      |      |
                v      v      v
    ---------------  --------------  ---------------
    |   Sandbox   |  |  Sandbox   |  |   Sandbox   |
    | ----------- |  | ---------- |  | ----------- |
    | | Firefox | |--| | Broker | |--| |   Tor   | |
    | ----------- |  | ---------- |  | ----------- |
    ---------------  --------------  ---------------
           |           Controller           |
           |________________________________|
                       Proxy IPC
Note, the sandboxed broker process (or processes) moves some functionality from the launcher process into a child process. This process is not as heavily sandboxed as the Firefox and Tor processes, but, for example, it would not need networking. This process would handle sending NEWNYM, providing a circuit display, and changing bridge configuration (general controller functions). In addition, it would handle copying files into and out of the Firefox sandbox, as an example.
Unfortunately, there is no existing cross-platform solution for this design. We can take bits and pieces from other solutions, but creating this will require engineering effort.
As a nit, this diagram shows all the processes as being 'enveloped' by a sandbox; but that illustration is misleading if one is using, e.g. Seatbelt on Mac. It's more accurate that the process is named Sandboxed-Tor or something like that, rather than depicting some additional 'thing' (which appears to be a process) that 'is' the sandbox. This illustration makes it look more like you're running stuff in a container. (Which may actually be the case for some platforms though.)
Ideally, the sandbox would be a sandbox-container, and at this point I think that should be our goal. Deficiencies on each platform which prevent this are problems we must overcome. In the diagram, all firefox processes are grouped together because they will be within the same sandbox. The fact that there are child processes with more restrictions is a detail I didn't include - mostly because we want a sandbox that succeeds where/when firefox's or tor's mitigations fail.
Maybe on OS X it would be better if we followed Docker's lead and investigated integrating with the MacOS Hypervisor.
Also: I think the IPC diagrams of Firefox <-> Tor are accurate; but we leave out the Content Processes in this illustration. If in the future Mozilla or Tor separates browser functionality into a helper process to reduce privileges in the Content or Parent process, it becomes possible that that helper process (a child of 'Firefox') will communicate with the Broker (and Tor). For example if/when Necko leaves the parent process and all network operations happen in a helper process off Firefox.
Finally and most substantially, I want to think about the assumption you're making about a launcher process. I agree entirely that we want our parent-process-is-sandboxed end-goal to look logically like your diagram during operation. Firefox can't talk to the network or Tor's control port: it talks to a broker that talks to Tor's control port and it talks to Tor for SOCKS proxying using a Named Pipe or Domain Socket. Certain operations are also relegated to the Broker and Firefox can't perform them.
But getting to that state... Using a Launcher process is one solution; but it's not the only solution. So I think we should keep that in mind and not assume a launcher process; but instead plan for a logical run-time design of:
    +-------------------+
    | Sandboxed-Firefox |----------------------------------+
    +-------------------+                                  |
       |       |       |                                   |
       |       |       | Primary (Existing)                |
       |       |       | Browser IPC                       |
       |       |       v                                   |
       |       |   +--------------------+                  |
       |       |   | Sandboxed          |                  |
       |       |   | Content Processes  |                  |
       |       |   +--------------------+                  |
       |       |                                           |
       |       | TBD IPC                                   | Tor Control
       |       v                                           | Requests
       |   +--------------------+                          |
       |   | Other Sandboxed    |                          |
       |   | Helper Processes   |                          |
       |   +--------------------+                          |
       |                                                   |
       | TBD IPC                                           |
       v                                                   v
    +---------------------+                   +--------------------+
    | Sandboxed           |                   |  Sandboxed Broker  |
    | Networking Process  |                   +--------------------+
    +---------------------+                             |
       |                                                | Filtered
       | SOCKS over Domain                              | Control
       | Socket/Named Pipe                              | Requests
       v                                                v
    +---------------------------------------------------------+
    |                      Sandboxed Tor                      |
    +---------------------------------------------------------+
Spelled out:
- The FF Parent Process talks to and controls the Content Processes
(using existing IPC mechanisms) and maybe/probably interfaces with the Networking process and other Helper Processes in unknown future ways. The content processes probably talk to the Network process directly, they might also talk to the other helper processes.
- The Networking process talks to Tor using SOCKS with (probably) a
domain socket or named pipe
- Tor Control requests are sent from the Parent Process to the broker,
which filters them and then passes them to Tor over the control port (see the sketch after this list).
- The broker is most likely the least sandboxed process and may
provide additional functionality to the parent process; for example perhaps it passes a writable file handle in a particular directory so the user can save a download.
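For illustration, the broker's control-port filter could be as simple as an allow-list; a hypothetical sketch (the commands listed are placeholders, not a vetted policy):

    #include <string.h>

    /* Sketch: forward a request to tor's control port only if it is
     * on the allow-list; reject everything else. */
    static const char *const allowed_requests[] = {
        "SIGNAL NEWNYM",
        "GETINFO circuit-status",
        NULL
    };

    static int broker_allows(const char *request)
    {
        for (int i = 0; allowed_requests[i] != NULL; i++)
            if (strcmp(request, allowed_requests[i]) == 0)
                return 1;
        return 0;
    }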
Yes. This is the general design from the operating system's perspective.
I think we should plan on creating a proof-of-concept toy program on all platforms that mimics the process execution and mitigations. (I once started one of those here: https://github.com/tomrittervg/sandboxsandbox/blob/master/SandboxSandbox/San... ) This will allow us to experiment with/assert with certainty what operations are allowed and disallowed, and allow easier code review and experimentation by individuals, especially those looking at a platform outside their area of experience. It's also lower cost than trying to implement it in FF from the get-go.
Yes, that was part of my plan, as well. In reality, when we have a working implementation, I'd like us to integrate it into the XRE subsystem, and not as a completely separate wrapper application. However, this means there will be a lot more upfront logic than currently exists (but maybe this is already true with the on-going launcher work, I haven't looked at that yet).
- Proxy Obedience
The last line of defense against a proxy-bypass is a mechanism for dropping all outgoing IP packets from the browser that are:
- not TCP
- or TCP but not destined for the Tor proxy port (SOCKS or HTTP)
- or all packets (including TCP) if the proxy listener is not a TCP port (Unix domain socket, named pipe, etc)
Dropping implies the packets are sent from the process and then dropped by the OS. This is not always the case: on Windows (with USER_RESTRICTED) and Mac at least, the process is just outright prevented from making successful network calls.
True. This is also correct on Linux with seccomp. My phrasing was more general.
iii) Windows 7+
If the platform is Windows 8+ and the user cannot elevate to Administrator, or if the platform is Windows 7:
- TODO: can/should we use DLL injection of WinSock and manually filter all proxy-bypass network connections?
I would expect we could use the same type of shims the content process currently uses to achieve this goal.
Okay, I'll look into that some more.
Is there something we can use for mitigating ROP gadgets within the sandbox?
I'm not sure where this goal came from, but it seems very independent of sandboxing. But I think the general answer is: No, not until one has or can emulate Execute-Only memory, which might be possible if one runs their own hypervisor (I don't think Hyper-V can do it but maybe?)
This is related, but I see it didn't logically flow from the surrounding thoughts. While I was thinking about using DLL injection for WinSock, I thought about how that wouldn't prevent an attacker using ROP to bypass it.
Back to networking: I think you missed the most straightforward one: the USER_RESTRICTED token level which blocks networking.
See also https://bugzilla.mozilla.org/show_bug.cgi?id=1403931 and https://bugzilla.mozilla.org/show_bug.cgi?id=1177594
Yes, that looks like something we definitely want Mozilla to land in Firefox.
b) Mac OS X
i) We can restrict network access using the com.apple.security.network.client and com.apple.security.network.server entitlements[21].
ii) We can further restrict which types of connections the browser is allowed. Previously, there was a Seatbelt profile for Tor Browser[22], but it is not usable with newer versions of the browser. We can continue using it for only allowing access requests for Unix domain sockets, and denying all other network connections.
Note that the content Sandbox for Mac already enforces networking restrictions.
Great.
c) GNU/Linux
i) On Linux, significant progress was already made in terms of sandboxing Tor Browser. There are two methods we can use:
- Isolated network namespace via bubblewrap[14]
- seccomp-BPF on socket syscalls
ii) If bubblewrap is not available, we can fall back on using unshare, if it is available and user namespaces are enabled (a sketch follows)
iii) If user namespaces are not available, we ask the user for elevated privileges and manually fork into the new network namespace (worst case)
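A minimal sketch of (ii), assuming unprivileged user namespaces are enabled: entering a new network namespace leaves the process with only a (down) loopback interface, so no outbound connections are possible except over file descriptors handed in from outside:

    #define _GNU_SOURCE
    #include <sched.h>

    /* Sketch: isolate the process from the network. The tor SOCKS
     * socket must be passed in as an inherited file descriptor. */
    static int isolate_network(void)
    {
        return unshare(CLONE_NEWUSER | CLONE_NEWNET);
    }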
d) Android
i) We can use the same (or a similar) seccomp-BPF filter that we use on Linux
ii) We can move the Gecko service into an isolated process without any permissions (including networking)
iii) If all networking goes through Necko, and Necko is in the isolated process, then the remaining code outside the isolated process should be proxy-safe, or it is the tor process
As with all things sandboxing, we need to be sure there are no IPC mechanisms to a privileged process that bypass the sandbox restrictions. On the networking side, there is https://searchfox.org/mozilla-central/source/dom/network/PUDPSocket.ipdl - which I think is used by WebRTC.
Thanks, good point. This is something we'll need to address before we enable WebRTC, anyway. On Android this code should be run by the background GeckoService, so as long as it is sandboxed that shouldn't bypass the proxy - but this needs verification.
- State Separation
Per-platform sandboxing techniques:
a) Microsoft Windows
i) I'm not sure if there is more we can do here.
b) Mac OS X
i) The Seatbelt profile[22] can restrict access to all other Firefox profiles
ii) The Seatbelt profile can allow access only for bundled libraries
iii) We may exclude entitlements for most system services
c) GNU/Linux
i) We can create new IPC, mount, and uts namespaces (see the sketch after this list)
ii) We can provide a clean environment
iii) The new mount namespace contains only the bundled libraries required for running
iv) D-Bus/I-Bus access would be nice, if we can do it safely.
d) Android
i) All apps are isolated by default, so there should never be shared state.
ii) Ensure we have the smallest set of intent filters we need
iii) Ensure we have the smallest set of exported components we need
iv) Ensure we have the smallest set of broadcast receivers we need
v) Ensure we only use Broadcast intents where necessary
vi) Is there something else we should investigate here?
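A minimal sketch of (c)(i) and (c)(ii) together, again assuming unprivileged user namespaces are available:

    #define _GNU_SOURCE
    #include <sched.h>
    #include <stdlib.h>

    /* Sketch: fresh IPC, mount, and uts namespaces plus a scrubbed
     * environment, so no host state leaks into the browser. */
    static int separate_state(void)
    {
        if (unshare(CLONE_NEWUSER | CLONE_NEWNS |
                    CLONE_NEWIPC | CLONE_NEWUTS) != 0)
            return -1;
        if (clearenv() != 0)
            return -1;
        return 0;
    }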
Why is there so much difference between the other OSes and Windows?
Only because Windows is not my strength and I had trouble finding information reading through their documentation. It was a general statement of cluelessness.
The goal for State Separation, as I understand it, is to ensure that no existing browser settings/profiles/plugins are used in Tor Browser. I think there are two ways this could happen:
a) The browser itself has a bug that we should fix, where it loads some existing data
b) Some third party utility has installed something into Tor Browser
(b) seems generally unmitigatable. For (a) the primary mechanism to prevent this is "don't have browser bugs". The sandbox's role is "If there is a browser bug, block it from trying to read and use that data." So anything we can do in any sandbox to restrict access to existing user data or system directories would support this goal.
Right. If there are more steps we can take for achieving this, then we should enumerate them and consider them. My suggestions are not authoritative, they're simply what I found when I was researching the topic.
- Disk Avoidance
In general, I'd like #7449 mitigated.
[....]
What about restricting write access to temp directories? That seems like the quickest and most compatible option (although it wouldn't catch every possible occurrence of this issue.)
In general, similar to State Separation, I think there are two categories of issues here:
a) Things the browser does that are browser bugs
b) Things the OS does independent of the application
A sandbox can provide a safety net for things in (a).
The sandbox should also mitigate (b) when possible. If the OS doesn't allow this then it is an OS limitation and there isn't much we can do, but some OSes do provide mechanisms for this, and we should take advantage of all of them.
- Application Data Isolation
I'll claim the per-platform sandboxing techniques are the same as Disk Avoidance.
I think this one is even easier, since one can simply prevent write access to any directory except the containing one. This would have to come at the cost of additional engineering to change how the File Save dialogs work though - or you'll have a lot of confused and upset users.
And yet, we still want to maintain the smallest difference between Tor Browser and mozilla-esr. sigh.
- Secure Updating
Nearly the same sandboxing techniques can be used on all platforms. A launcher-component (maybe not the launcher, but a reduced-privileged process, not the browser) downloads the update. Next, on download completion, a launcher-component verifies the download and installs it (I can't justify verifying the download in an unprivileged, sandboxed process - I'd like to do it, but I don't see a benefit).
a) Microsoft Windows
i) Layer a read-only filesystem on top of the install dir, preventing persistent modifications
b) Mac OS X
i) The Seatbelt profile prevents write access to app binaries
c) GNU/Linux
i) The mount namespace provides read-only access to the install dir (see the sketch after this list)
ii) The mount namespace does not include launcher binaries
d) Android
i) If the app was installed using an App Store, then that store is responsible for updating
ii) If the app was installed by the user (side-loaded), then we can isolate downloading into a separate process, and verification can be handled in an isolated process. Finally, the Android Package Manager controls installing the update.
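A minimal sketch of (c)(i), run inside the browser's mount namespace (the directory path is whatever the install dir is): a read-only bind remount, so only the updater, which runs outside this namespace, can modify the installation:

    #include <sys/mount.h>

    /* Sketch: bind the install dir over itself, then remount the bind
     * read-only. Requires CAP_SYS_ADMIN within the mount namespace. */
    static int protect_install_dir(const char *dir)
    {
        if (mount(dir, dir, NULL, MS_BIND, NULL) != 0)
            return -1;
        return mount(NULL, dir, NULL,
                     MS_REMOUNT | MS_BIND | MS_RDONLY, NULL);
    }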
This one is at odds with (4) which says "You're not allowed to write anything outside your install directory." We need to be more explicit about what parts of the install directory one is allowed to write to.
Right. I think (4) is no longer true and the design document should be changed so it reflects this. Only the Update mechanism should have the privileges needed for modifying the contents of the install directory. This means the modifiable profile directory must be located somewhere outside the install dir.
Web Extensions come into play here. And themes. If we want to allow them; we need to carve out exceptions for them. And ideally we would use the sandbox to enforce an additional safety net against system add-ons if that's possible (e.g. they're not installed into the same place as web extensions.)
With the way Tor Browser currently installs extensions I think these two goals can co-exist, right?
- Usable
Unfortunately, despite the above suggestions, this sandboxing model fails if users find it unusable. In particular, with temporary file systems, we should provide a method for safely extracting a file from the temporary location and copying it to a permanent location using the launcher process.
I would argue that this extraction should be automatic. Anything else is unusable. Users won't be used to being unable to save downloaded files wherever they want on their system; adding another thing on top of that would be too painful.
This seems dangerous. We'll need to think carefully about this.
In addition, exposing operating system functionality which may be fingerprintable and/or another attack vector should be optional if it unbreaks the web (such as providing pulseaudio on Linux[26]). As usual, we must be careful about giving users too many customizable switches.
Yea; this one is tough.
- Cross-Origin Fingerprinting Unlinkability
I have less experience here on non-Linux OSes (and Sandboxed Tor Browser is already a good base on Linux). Which other components should we consider spoofing (if possible), or should we go out of our way to disallow access to them (if spoofing is not possible using the above means)?
So the Browser already brokers access to features on the Web Platform and can be thought of (and really is) a 'sandbox' for webpages. You're only fingerprintable based on what the Web Platform provides. If it provides something fingerprintable, we need to either fix that fingerprint or block access to it at that layer.
I would argue that the role sandboxing should provide here is fingerprinting protection in the face of code execution in the content process. Assume an attacker who has a goal: identify what person is using Tor Browser. Their actions are going to be goal-oriented. A website cannot get your MAC address. But if a website exploits a content process and is evaluating the best cost/benefit tradeoff for the next step in their exploit chain, the lowest cost is going to be 'zero' if the content process does not restrict access to an OS/computer feature that would identify the user.
Right, exactly. I think this is one property we want in a sandbox.
So the goal of sandboxing in this area would be to restrict access to any hardware/machine/OS identifiers like OS serial number, MAC address, device ids, serial numbers, etc. After that (the 'cookies' of device identifiers if you will), the goal would be to restrict access to machine-specific features that create a unique fingerprint: like your GPU (which I illustrate because it can render slightly unique canvas data) or your audio system (which I illustrate because it can apparently generate slightly unique web audio data.)
I don't know how to do this on all the platforms, so this is something that we'll need documented. If Mozilla already has this information and/or if this is already available in Firefox, then that's a good first step.
Thanks for this detailed response.
- Matt