Hi All,
This is the beginning of a conversation about creating a plan for moving towards sandboxing Tor Browser on every platform. Over the last few years, a sandboxed Tor Browser existed only on Linux[0] (created and maintained by Yawning Angel). However, Tor Browser aims to provide a private browser on all supported platforms (Microsoft Windows, Apple Mac OS X, GNU/Linux, and Android (AOSP))[1][2], which means we must provide a sandboxed run-time environment on all platforms. Unfortunately, each operating system provides a unique set of sandboxing techniques and capabilities, so we must work with the facilities we are given. In some cases, we may need to be creative about how we achieve our goals.
I do not have all the answers, and there are some open questions below.
On Windows, we have the Windows integrity mechanism[3], Windows Containers[4], SetProcessMitigationPolicy[5], App Container[6], and maybe some others. Some of this functionality is already used by Firefox.
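For illustration, here is a minimal C sketch of a process opting into two such mitigations via SetProcessMitigationPolicy at startup. The specific policy choices are my assumptions for the example, not what Firefox actually ships:

#include <windows.h>
#include <stdio.h>

int main(void)
{
    /* Forbid creating or remapping executable memory. Blocks many
     * shellcode techniques; a JIT needs a per-thread opt-out. */
    PROCESS_MITIGATION_DYNAMIC_CODE_POLICY dcp = {0};
    dcp.ProhibitDynamicCode = 1;
    if (!SetProcessMitigationPolicy(ProcessDynamicCodePolicy,
                                    &dcp, sizeof(dcp)))
        fprintf(stderr, "dynamic code: %lu\n", GetLastError());

    /* Refuse to load DLLs that are not Microsoft-signed, which
     * shuts out most third-party DLL injection. */
    PROCESS_MITIGATION_BINARY_SIGNATURE_POLICY bsp = {0};
    bsp.MicrosoftSignedOnly = 1;
    if (!SetProcessMitigationPolicy(ProcessSignaturePolicy,
                                    &bsp, sizeof(bsp)))
        fprintf(stderr, "signature: %lu\n", GetLastError());

    /* Once enabled, these policies cannot be cleared by the process. */
    return 0;
}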
On Mac OS X, there are two sandboxing techniques available. The first is Seatbelt[7]. Apple deprecated it in favor of code signing entitlements[8]. Unfortunately, the code signing entitlements are not as fine-grained as those Seatbelt provides, but we can enable different entitlements per "target"[9]. I don't know if this will be difficult for us. Therefore, we should utilize the code-signing restrictions where they are appropriate, but we should follow Safari[10], Chromium[11], and Firefox[12][13] by applying restrictive Seatbelt policies where applicable.
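As a concrete (if simplified) example, a process can apply a Seatbelt policy to itself with the deprecated-but-still-functional sandbox_init() API. Real browser profiles are hand-written SBPL; the built-in named profile here is just for brevity:

#include <sandbox.h>
#include <stdio.h>

int main(void)
{
    char *err = NULL;

    /* Apply Apple's built-in "no network" profile to ourselves.
     * After this returns, socket creation fails, and the restriction
     * cannot be lifted for the lifetime of the process. */
    if (sandbox_init(kSBXProfileNoNetwork, SANDBOX_NAMED, &err) != 0) {
        fprintf(stderr, "sandbox_init: %s\n", err);
        sandbox_free_error(err);
        return 1;
    }

    /* ... only now start processing untrusted input ... */
    return 0;
}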
On GNU/Linux, we can use the namespacing and secure computing (seccomp) facilities the kernel exposes to userspace. Sandboxed Tor Browser on Linux already shows how these can be combined to form a sandbox. In particular, we can use bubblewrap[14] as a setuid sandboxing helper (if user namespaces are not enabled), if it is available. In addition, we can reduce the syscall surface area with seccomp-BPF. cgroups provide a way of limiting the resources available within the sandbox. We may also want to manually proxy/filter other system functionality (e.g. X11).
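To make the seccomp piece concrete, here is a minimal sketch (x86-64 only, and not the sandboxed-tor-browser policy, which is a much longer whitelist) of a filter that permits socket(2) only for AF_UNIX:

#define _GNU_SOURCE
#include <errno.h>
#include <stddef.h>
#include <stdio.h>
#include <sys/prctl.h>
#include <sys/socket.h>
#include <sys/syscall.h>
#include <linux/audit.h>
#include <linux/filter.h>
#include <linux/seccomp.h>

int main(void)
{
    struct sock_filter filter[] = {
        /* Kill the process if it is not running as x86-64. */
        BPF_STMT(BPF_LD | BPF_W | BPF_ABS,
                 offsetof(struct seccomp_data, arch)),
        BPF_JUMP(BPF_JMP | BPF_JEQ | BPF_K, AUDIT_ARCH_X86_64, 1, 0),
        BPF_STMT(BPF_RET | BPF_K, SECCOMP_RET_KILL),
        /* Allow every syscall except socket(2)... */
        BPF_STMT(BPF_LD | BPF_W | BPF_ABS,
                 offsetof(struct seccomp_data, nr)),
        BPF_JUMP(BPF_JMP | BPF_JEQ | BPF_K, SYS_socket, 0, 3),
        /* ...which is only allowed for the AF_UNIX domain. */
        BPF_STMT(BPF_LD | BPF_W | BPF_ABS,
                 offsetof(struct seccomp_data, args[0])),
        BPF_JUMP(BPF_JMP | BPF_JEQ | BPF_K, AF_UNIX, 1, 0),
        BPF_STMT(BPF_RET | BPF_K, SECCOMP_RET_ERRNO | EACCES),
        BPF_STMT(BPF_RET | BPF_K, SECCOMP_RET_ALLOW),
    };
    struct sock_fprog prog = {
        .len = sizeof(filter) / sizeof(filter[0]),
        .filter = filter,
    };

    /* Required before an unprivileged process may install a filter;
     * also prevents regaining privileges via setuid binaries. */
    if (prctl(PR_SET_NO_NEW_PRIVS, 1, 0, 0, 0) ||
        prctl(PR_SET_SECCOMP, SECCOMP_MODE_FILTER, &prog)) {
        perror("seccomp");
        return 1;
    }
    return 0;
}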
Last, but not least, on Android, we begin with a fairly strict sandbox provided by its permission model[15], distinct per-app users, and SELinux policies. Because every app runs as a distinct user, and each has its own storage directories, we get Linux's DAC on the file system for "free". In addition, we can use some of the same techniques available on GNU/Linux, namely seccomp-bpf and namespaces, if they are available. Android provides privilege and permission isolation within an app by using Services. With some refactoring, we can isolate some parts of Firefox into isolated permissionless services[15].
I'll begin by describing the goals, as I see them, for sandboxing Tor Browser. Hopefully, this will help us evaluate the different available techniques. These goals are derived from the Design document[16] and are the means for achieving Tor Browser's end-goal.
In particular, the sandboxing techniques preserve the Security Requirements of a private browser when the browser, itself, fails at maintaining those criteria. By this I mean the sandbox should be designed such that if the browser process loses any of the Security properties (through a logical bug, exploited vulnerability, etc), the sandbox provides an additional layer of those properties and the user is not in immediate danger. The sandbox may, in some situations, improve the Privacy properties of Tor Browser, for example if a component/device is emulated instead of providing the browser with raw access. We should use these mechanisms when they are available.
1) Proxy Obedience
2) State Separation
3) Disk Avoidance
4) Application Data Isolation
5) Secure Updating
6) Usable
7) Cross-Origin Fingerprinting Unlinkability
We'll go through these one-by-one and describe the role of a sandbox and how we can achieve this on each platform. Note, this design assumes a launcher-based sandboxed architecture (similar to the Sandboxed Tor Browser on Linux):
                ------------------
                |    Launcher    |
                ------------------
           ______________|______________
           |             |             |
           v             v             v
      -----------   -----------   -----------
      | Sandbox |   | Sandbox |   | Sandbox |
      | ------- |   | ------- |   | ------- |
      | Firefox |   | Broker  |   |   Tor   |
      -----------   -----------   -----------
           |             |_____________|
           |              Controller
           |___________________________|
                     Proxy IPC
Note, the sandboxed broker process (or processes) moves some functionality out of the launcher process and into a child process. This process is not as heavily sandboxed as the Firefox and Tor processes, but, for example, it would not need networking. It would handle sending NEWNYM, providing the circuit display, and changing bridge configuration (general controller functions). In addition, it would handle, for example, copying files into and out of the Firefox sandbox.
Unfortunately, there is no existing cross-platform solution for this design. We can take bits and pieces from other solutions, but creating this will require engineering effort.
1) Proxy Obedience
The last line of defense against a proxy-bypass is a mechanism for dropping all outgoing IP packets from the browser that are:
- not TCP,
- or TCP but not destined for the Tor proxy port (SOCKS or HTTP),
- or all packets (including TCP) if the proxy listener is not a TCP port (Unix domain socket, named pipe, etc.)
We should also consider guarding the creation of all non-Unix domain sockets and non-named pipes with a pref. I believe sandboxing should be optional, but enabled by default (at least in the first few versions), so excluding that code at compile-time is not an option. However, a pref-guard only stops unintended proxy-bypass through normal control flow. This only slightly increases the difficulty of some exploit methods.
Per-platform sandboxing techniques:
a) Microsoft Windows
   i) Windows 8+: If the user can elevate to administrator, then the launcher can install a network filter using the Windows Filtering Platform[17]. This can deny any connection from any process (based on the process's fully-qualified file name)[18]. This adds some risk: if the launcher process exits unexpectedly for whatever reason, the firewall rule may remain in place. (A hypothetical sketch of this approach follows after this list.)
   ii) Windows 10: We can create a new network namespace[19] using the (undocumented) HNSCall procedure[20]. This is very appealing.
   iii) Windows 7+: If the platform is Windows 8+ and the user cannot elevate to administrator, or if the platform is Windows 7:
      - TODO: can/should we use DLL injection to hook Winsock and manually filter all proxy-bypass network connections? Is there something we can use for mitigating ROP gadgets within the sandbox?
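To make the WFP option in (i) concrete, here is a hypothetical C sketch; the path and filter name are illustrative assumptions, and a real deployment would also cover IPv6 and add a higher-weight permit rule for the loopback proxy port (if tor listens on TCP rather than a named pipe):

#include <windows.h>
#include <fwpmu.h>
#include <stdio.h>

#pragma comment(lib, "fwpuclnt.lib")

int wmain(void)
{
    HANDLE engine = NULL;
    FWP_BYTE_BLOB *appId = NULL;
    FWPM_SESSION0 session = {0};
    FWPM_FILTER_CONDITION0 cond = {0};
    FWPM_FILTER0 filter = {0};
    UINT64 filterId = 0;

    /* A dynamic session removes our filters automatically when the
     * session ends, which limits the "rule left behind" risk. */
    session.flags = FWPM_SESSION_FLAG_DYNAMIC;
    if (FwpmEngineOpen0(NULL, RPC_C_AUTHN_WINNT, NULL, &session, &engine))
        return 1;

    /* Translate the browser's path (illustrative) into a WFP app id. */
    if (FwpmGetAppIdFromFileName0(L"C:\\Tor Browser\\firefox.exe", &appId))
        return 1;

    cond.fieldKey = FWPM_CONDITION_ALE_APP_ID;
    cond.matchType = FWP_MATCH_EQUAL;
    cond.conditionValue.type = FWP_BYTE_BLOB_TYPE;
    cond.conditionValue.byteBlob = appId;

    filter.displayData.name = L"Tor Browser proxy-bypass guard";
    filter.layerKey = FWPM_LAYER_ALE_AUTH_CONNECT_V4; /* IPv4 connects */
    filter.action.type = FWP_ACTION_BLOCK;
    filter.numFilterConditions = 1;
    filter.filterCondition = &cond;

    if (FwpmFilterAdd0(engine, &filter, NULL, &filterId))
        return 1;
    wprintf(L"installed block filter %llu\n", filterId);

    /* ... keep the session open while the browser runs ... */
    FwpmFreeMemory0((void **)&appId);
    FwpmEngineClose0(engine);
    return 0;
}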
b) Mac OS X
   i) We can restrict network access using the com.apple.security.network.client and com.apple.security.network.server entitlements[21].
   ii) We can further restrict which types of connections the browser is allowed to make. Previously, there was a Seatbelt profile for Tor Browser[22], but it is not usable with newer versions of the browser. We can continue using it for allowing only Unix domain socket access requests and denying all other network connections.
c) GNU/Linux
   i) On Linux, significant progress has already been made on sandboxing Tor Browser. There are two methods we can use:
      - an isolated network namespace via bubblewrap[14]
      - Seccomp-BPF on socket syscalls
   ii) If bubblewrap is not available, we can fall back on using unshare, if it is available and user namespaces are enabled. (A sketch of the namespace fallback follows after this list.)
   iii) If user namespaces are not available, we ask the user for elevated privileges and manually fork into the new network namespace (worst case).
d) Android
   i) We can use the same (or a similar) Seccomp-BPF filter that we use on Linux.
   ii) We can move the Gecko service into an isolated process without any permissions (including networking).
   iii) If all networking goes through Necko, and Necko is in the isolated process, then the remaining code outside the isolated process should be proxy-safe, or it is the tor process.
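To illustrate the network-namespace fallback in (c) above, a minimal sketch (assuming user namespaces are enabled; this is not the sandboxed-tor-browser code):

#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>
#include <unistd.h>

int main(int argc, char *argv[])
{
    if (argc < 2) {
        fprintf(stderr, "usage: %s program [args...]\n", argv[0]);
        return 1;
    }
    /* The new user namespace grants the capability needed to create
     * the network namespace without being root. */
    if (unshare(CLONE_NEWUSER | CLONE_NEWNET) != 0) {
        perror("unshare");
        return 1;
    }
    /* The exec'd program sees only a downed loopback device: every
     * AF_INET connect fails, while Unix domain sockets (e.g. tor's
     * SOCKS socket, if visible in the sandbox) keep working. */
    execvp(argv[1], &argv[1]);
    perror("execvp");
    return 1;
}

bubblewrap's --unshare-net option does the equivalent (plus the mount setup) when it is available.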
2) State Separation
Per-platform sandboxing techniques:
a) Microsoft Windows
   i) I'm not sure if there is more we can do here.
b) Mac OS X
   i) The Seatbelt profile[22] can restrict access to all other Firefox profiles.
   ii) The Seatbelt profile can allow access only for bundled libraries.
   iii) We may exclude entitlements for most system services.
c) GNU/Linux
   i) We can create new IPC, mount, and UTS namespaces.
   ii) We can provide a clean environment.
   iii) The new mount namespace contains only the bundled libraries required for running.
   iv) D-Bus/IBus access would be nice, if we can do it safely.
d) Android
   i) All apps are isolated by default, so there should never be shared state.
   ii) Ensure we have the smallest set of intent filters we need.
   iii) Ensure we have the smallest set of exported components we need.
   iv) Ensure we have the smallest set of broadcast receivers we need.
   v) Ensure we only use broadcast intents where necessary.
   vi) Is there something else we should investigate here?
3) Disk Avoidance

In general, I'd like #7449 mitigated.
Per-platform sandboxing techniques:
a) Microsoft Windows
   i) Windows 10+
      - On very recent versions of Windows, we can create temporary filesystem layers[23][24]. I'm not sure if these are memory-backed or filesystem-backed.
      - We can create a read-only layer over the installation directory.
      - #18367 may be useful (side-by-side user/app data on Windows).
      - Can we isolate the container from system services?
b) Mac OS X
   i) We can use entitlements[25].
   ii) The Seatbelt profile[22] provided additional access restrictions.
   iii) Is there a filesystem layering mechanism available in OS X (like tmpfs on Linux)?
      - Specifically, can we use this for ~/Library/Caches/TemporaryItems and ~/Downloads?
c) GNU/Linux
   i) We can create new IPC, mount, and UTS namespaces.
   ii) #18369 may be useful (side-by-side user/app data on Linux).
d) Android
   i) The app is self-contained by default.
   ii) Need #26574.
   iii) Additional auditing required.
4) Application Data Isolation

I'll claim the per-platform sandboxing techniques are the same as for Disk Avoidance.
5) Secure Updating

Nearly the same sandboxing techniques can be used on all platforms. A launcher component (maybe not the launcher itself, but a reduced-privilege process, not the browser) downloads the update. Next, on download completion, a launcher component verifies the download and installs it (I can't justify verifying the download in an unprivileged, sandboxed process - I'd like to do it, but I don't see a benefit).
a) Microsoft Windows
   i) Layer a read-only filesystem on top of the install dir; prevents persistent modifications.
b) Mac OS X
   i) The Seatbelt profile prevents write access to app binaries.
c) GNU/Linux
   i) The mount namespace provides read-only access to the install dir (see the sketch after this list).
   ii) The mount namespace does not include launcher binaries.
d) Android
   i) If the app was installed using an app store, then that store is responsible for updating.
   ii) If the app was installed by the user (side-loaded), then we can isolate downloading into a separate process, and verification can be handled in an isolated process. Finally, the Android package manager controls installing the update.
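To illustrate c(i), a sketch of making the install dir read-only inside a fresh mount namespace; the path is illustrative, and this needs CAP_SYS_ADMIN or a prior user namespace:

#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>
#include <sys/mount.h>

int main(void)
{
    const char *dir = "/home/user/tor-browser"; /* illustrative path */

    if (unshare(CLONE_NEWNS) != 0) {     /* private mount namespace */
        perror("unshare");
        return 1;
    }
    /* Keep our mount changes from propagating back to the parent. */
    if (mount("none", "/", NULL, MS_REC | MS_PRIVATE, NULL) != 0) {
        perror("make-private");
        return 1;
    }
    /* Bind the install dir over itself, then remount it read-only;
     * the read-only flag only takes effect on the remount pass. */
    if (mount(dir, dir, NULL, MS_BIND, NULL) != 0 ||
        mount(NULL, dir, NULL, MS_BIND | MS_REMOUNT | MS_RDONLY, NULL) != 0) {
        perror("bind/remount");
        return 1;
    }
    /* exec the browser here; it can read but never modify itself. */
    return 0;
}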
6) Usable
Unfortunately, despite the above suggestions, this sandboxing model fails if users find it unusable. In particular, with temporary file systems, we should provide a method for safely extracting a file from the temporary location and copying it to a permanent location using the launcher process. In addition, exposing operating system functionality which may be fingerprintable and/or another attack vector should be optional if it unbreaks the web (such as providing pulseaudio on Linux[26]). As usual, we must be careful about giving users too many customizable switches.
7) Cross-Origin Fingerprinting Unlinkability
I have less experience here on non-Linux OSes (and Sandboxed Tor Browser is already a good base on Linux). Which other components should we consider spoofing (if possible), or should we go out of our way to disallow access (if spoofing is not possible using the above means)?
[0] https://trac.torproject.org/projects/tor/wiki/doc/TorBrowser/Sandbox/Linux
[1] https://www.mozilla.org/en-US/firefox/60.0esr/system-requirements/
[2] https://support.mozilla.org/en-US/kb/will-firefox-work-my-mobile-device
[3] https://msdn.microsoft.com/en-us/library/bb625963.aspx
[4] https://github.com/Microsoft/dotnet-computevirtualization/blob/master/src/Mi...
[5] https://docs.microsoft.com/en-us/windows/desktop/api/processthreadsapi/nf-pr...
[6] https://docs.microsoft.com/en-us/previous-versions/windows/apps/hh464936(v=w...)
[7] https://developer.apple.com/library/archive/documentation/Security/Conceptua...
[8] https://help.apple.com/xcode/mac/current/#/dev88ff319e7
[9] https://developer.apple.com/library/archive/documentation/Security/Conceptua...
[10] https://trac.webkit.org/browser/webkit/releases/Apple/Safari%2011.1.1/WebKit...
[11] https://www.chromium.org/developers/design-documents/sandbox/osx-sandboxing-...
[12] https://wiki.mozilla.org/Security/Sandbox#OSX
[13] https://dxr.mozilla.org/mozilla-central/source/security/sandbox/mac/SandboxP...
[14] https://github.com/projectatomic/bubblewrap
[15] https://developer.android.com/reference/android/Manifest.permission
[15] https://developer.android.com/guide/topics/manifest/service-element#isolated
[16] https://www.torproject.org/projects/torbrowser/design/
[17] https://docs.microsoft.com/en-us/windows/desktop/FWP/windows-filtering-platf...
[18] https://docs.microsoft.com/en-us/windows/desktop/FWP/permitting-and-blocking...
[19] https://docs.microsoft.com/en-us/virtualization/windowscontainers/container-...
[20] https://github.com/Microsoft/hcsshim/blob/master/internal/hns/namespace.go#L...
[21] https://developer.apple.com/library/archive/documentation/Miscellaneous/Refe...
[22] https://gitweb.torproject.org/builders/tor-browser-build.git/tree/projects/t...
[23] https://github.com/Microsoft/hcsshim/blob/master/internal/wclayer/createlaye...
[24] https://github.com/Microsoft/hcsshim/blob/master/internal/wclayer/createscra...
[25] https://developer.apple.com/library/archive/documentation/Security/Conceptua...
[26] https://trac.torproject.org/projects/tor/wiki/doc/TorBrowser/Sandbox/Linux#H...
On 3 July 2018 at 18:03, Matthew Finkel matthew.finkel@gmail.com wrote:
Hi All,
This is the beginning of a conversation about creating a plan for moving towards sandboxing Tor Browser on every platform.
Thanks for this Matt! I started typing and kept typing and just kept typing and well... sorry.
Over the last few years, a sandboxed Tor Browser existed only on Linux[0] (created and maintained by Yawning Angel). However, Tor Browser aims to provide a private browser on all supported platforms (Microsoft Windows, Apple Mac OS X, GNU/Linux, and Android (AOSP))[1][2], which means we must provide a sandboxed run-time environment on all platforms.
For the benefit of others reading tbb-dev, I want to explain the difference between the 'Content Process Sandbox' and the 'Parent Process Sandbox'. This is intended to answer confusion like "Chrome is sandboxed.... isn't Firefox sandboxed? I thought I saw some stuff in the release notes about it?"
Firefox (and Chrome, and Edge) have a parent process that executes multiple child or 'content' processes. (It also executes other helper processes like a Web Extensions process and a GPU process on some platforms.) The content process is responsible for parsing HTML, rendering it, executing javascript, running the javascript JIT, etc. The parent process does the actual TLS handshakes, HTTP conversations, renders the browser UI (and composites the page contents I think but I'm not sure on that), and some other stuff.
Firefox has been working on a restrictive sandbox for the Content Process for the past few years. We've got a Mac sandbox that can only be tightened a small amount more. Our Windows sandbox is pretty good but is missing one large attack surface reduction called 'win32k.sys lockdown' that is blocked on ~1 year of clock time worth of graphics refactoring. I'm unsure of the relative strength of our Linux sandbox, but I think it's fairly good if you exclude stuff like "The X11 protocol lets you do pretty crazy things and we need to lock that down".
Firefox has not done any work on sandboxing the parent process. All the discussions I have with folks at Tor are about protecting the parent process for additional defense in depth. Thus we're assuming that a) An attacker has exploited the content process but cannot achieve their goal from that position and b) decides to attack the parent process.
Both of these assumptions are not necessarily true, which means we need to address the opposite of those assumptions _too_; and we should prioritize that work in relation to the larger architectural goal of "How do we sandbox the parent process".
To the point of (b): This document focuses on sandboxing as supporting the goals of the design doc, which I think is a fine approach. But we should be aware of other approaches to evaluating sandboxing and one is from the capability reduction standpoint. A lot of sandbox escapes today occur by getting code execution in the content process and then exploiting the kernel. So identifying the features on each platform that can reduce kernel attack surface, and then seeing what prevents us from using these features (starting in the content process but eventually including the parent process) would be a worthwhile exercise. Similarly to focusing on the kernel, we can look at what other permissions are needed (by the content and eventually the parent) and determine which ones are the scariest and work to either remove them entirely, or move them to a separate process.
To the point of (a), we should consider what goals an attacker has when exploiting Tor Browser. Enumerating attacks is likely to miss some, but not considering attacks means we're trying to defend ourselves from 'everything' with no priority. I think I would classify attacks into High Impact and Opportunistic Impact, and focus specifically on "code execution achieved through an exploit" (and not things like 'exploiting' a fingerprinting vector we have not solved in the web platform.) We may miss some attacks; but the ones we can list today are, I think, accurate in their impact.
High Impact goals are ones that do one or more of the following: achieve a proxy bypass, temporarily corrupt tor's configuration leading to a direct-to-relay connection (by changing the user's bridge/guard/DirAuths), permanently corrupt the user's machine (by installing malware or affecting tor's configuration), or (less bad without a proxy bypass but still bad) retrieve an identifier that uniquely and persistently identifies a user (MAC address, serial) and exfiltrate it.
Opportunistic goals are ones that compromise the browser process as it currently runs (without persistence) and rely on the user performing an action using the browser that discloses information. For example, reading other websites cookies is only useful if the user is active on other sites in a way that identifies them.
When one is able to achieve High Impact goals from the Content Process - it seems to me that engineering effort should be focused on closing _those_ holes first, before trying to build a solution for the parent process. (I'm not saying we shouldn't plan for the parent process though!)
I could talk more about what actions an attacker can perform if they exploit the Content Process (both currently and post-Fission https://wiki.mozilla.org/Project_Fission ) and how they can attack the Parent Process; but I'm going to skip that for now; but would be happy to do so if it seems helpful.
Unfortunately, each operating system provides a unique set of sandboxing techniques and capabilities, so we must work with the facilities we are given. In some cases, we may need to be creative about how we achieve our goals.
I do not have all the answers, and there are some open questions below.
On Windows, we have the Windows integrity mechanism[3], Windows Containers[4], SetProcessMitigationPolicy[5], App Container[6], and maybe some others. Some of this functionality is already used by Firefox.
For Windows Container I found the following helpful reading: https://docs.microsoft.com/en-us/virtualization/windowscontainers/about/
In particular, they require Windows Server or Windows 10 Professional or Enterprise. Containers are awesome, and in particular Hyper-V based containers are essentially Virtual Machines with their own kernel and therefore we could do really powerful things with them. But I think we need to wait a few years until it becomes more widely available.
There's https://www.bromium.com/ which operates in this space, and despite having a bunch of slick marketing and buzzwords and stuff actually has a really powerful technology core.
App Container is primarily for Windows Store apps, but can also be used for 'Legacy Apps' per https://docs.microsoft.com/en-us/windows/desktop/secauthz/appcontainer-for-l... . I can't quite tell how useful it is.
The Chromium/Firefox sandbox on Windows is composed of the interaction of SetProcessMitigationOptions (of which there are many options), Integrity Level (both an 'Initial' and a 'Delayed') as well as Job Level, Access Token Level, Alternate Desktop, and Alternate Windows Station.
On Mac OS X, there are two sandboxing techniques available. The first is Seatbelt[7]. Apple deprecated it in favor of code signing entitlements[8]. Unfortunately, the code signing entitlements are not as fine-grained as those Seatbelt provides, but we can enable different entitlements per "target"[9]. I don't know if this will be difficult for us. Therefore, we should utilize the code-signing restrictions where they are appropriate, but we should follow Safari[10], Chromium[11], and Firefox[12][13] by applying restrictive Seatbelt policies where applicable.
Can you apply both Seatbelt and Code Signing Entitlements?
Also; have you experimented with/confirmed the restriction mentioned at All Hands; that a process with a Seatbelt Policy cannot launch a process with a more restrictive seatbelt policy? (I think mcs ran into this with the Mac Sandbox policy that used to ship with Tor Browser Alpha.) This is a significant problem to any Apple-based sandboxing policy, and frankly one I think someone should make a semi-organized petition from browser makers to Apple to fix.
Additionally; OSX 10.14 Mojave provides new sandboxing features; including ones that feature-match Microsoft's Arbitrary Code Guard and Code Integrity Guard. I don't know much about these; but they are powerful and in the future, Mozilla will probably be interested in them. (Future being 1-2+ years from now.)
On GNU/Linux, we can use the namespacing and secure computing (seccomp) facilities the kernel exposes to userspace. Sandboxed Tor Browser on Linux already shows how these can be combined to form a sandbox. In particular, we can use bubblewrap[14] as a setuid sandboxing helper (if user namespaces are not enabled), if it is available. In addition, we can reduce the syscall surface area with seccomp-BPF. cgroups provide a way of limiting the resources available within the sandbox. We may also want to manually proxy/filter other system functionality (e.g. X11).
Last, but not least, on Android, we begin with a fairly strict sandbox provided by its permission model[15], distinct per-app users, and SELinux policies. Because every app runs as a distinct user, and each has its own storage directories, we get Linux's DAC on the file system for "free". In addition, we can use some of the same techniques available on GNU/Linux, namely seccomp-bpf and namespaces, if they are available. Android provides privilege and permission isolation within an app by using Services. With some refactoring, we can isolate some parts of Firefox into isolated permissionless services[15].
Something else to consider is how each of these sandboxing technologies (on each platform) are enabled. Some are applied at runtime by the process, the process starts more privileged and then drops privileges. Others are applied by the OS before execution, and the process never has the elevated privileges.
If a process needs to do something that the sandbox should prevent - like creating a socket connection to another process - there are two ways (I think only two) to achieve that goal. The first way is that a process that _has_ permission gives a resource (like a socket connection) to the restricted process. The second way is that the restricted process starts with less restrictive permissions, does the privileged action (like creating a socket), and then applies the sandbox (either in whole or an additional component of it) to itself - ideally before processing any user input that could be malicious.
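On Unix, the first approach is usually implemented with SCM_RIGHTS file-descriptor passing; a minimal sketch of the sender side (helper names are mine, for illustration):

#include <string.h>
#include <sys/socket.h>
#include <sys/uio.h>

/* Hand fd_to_pass to the peer of the connected Unix socket 'chan'.
 * The receiver picks it up with recvmsg() and gets its own copy of
 * the descriptor, even though it could never have opened it. */
int send_fd(int chan, int fd_to_pass)
{
    char byte = 'F';                     /* must send >= 1 data byte */
    struct iovec iov = { &byte, 1 };
    union {                              /* properly aligned cmsg buffer */
        char buf[CMSG_SPACE(sizeof(int))];
        struct cmsghdr align;
    } ctrl;
    struct msghdr msg = {0};
    struct cmsghdr *cmsg;

    msg.msg_iov = &iov;
    msg.msg_iovlen = 1;
    msg.msg_control = ctrl.buf;
    msg.msg_controllen = sizeof(ctrl.buf);

    cmsg = CMSG_FIRSTHDR(&msg);
    cmsg->cmsg_level = SOL_SOCKET;
    cmsg->cmsg_type = SCM_RIGHTS;        /* "this message carries fds" */
    cmsg->cmsg_len = CMSG_LEN(sizeof(int));
    memcpy(CMSG_DATA(cmsg), &fd_to_pass, sizeof(int));

    return sendmsg(chan, &msg, 0) == 1 ? 0 : -1;
}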
I think there's a lot of focus in this document on applying sandbox policies by the OS before/as the process starts, and not on following the 'drop privileges' model. But dropping privileges is much more flexible. I think some confusion with this model is the notion that the process is 'sandboxing itself' - and of course one can't trust a process that is simultaneously compromised and attempting to perform security operations, so that model must be broken - but this notion is incorrect. The process - before it is able to be compromised by attacker input, before it processes anything from the web - instructs the OS to apply the sandbox to itself, and cannot later opt out of that restriction. It's true that something could go wrong during the sandbox application and result in the process staying elevated - but I think that is easy to code defensively for and even test at runtime.
I'll begin by describing the goals, as I see them, for sandboxing Tor Browser. Hopefully, this will help us evaluate the different available techniques. These goals are derived from the Design document[16] and are the means for achieving Tor Browser's end-goal.
In particular, the sandboxing techniques preserve the Security Requirements of a private browser when the browser, itself, fails at maintaining those criteria. By this I mean the sandbox should be designed such that if the browser process loses any of the Security properties (through a logical bug, exploited vulnerability, etc), the sandbox provides an additional layer of those properties and the user is not in immediate danger. The sandbox may, in some situations, improve the Privacy properties of Tor Browser, for example if a component/device is emulated instead of providing the browser with raw access. We should use these mechanisms when they are available.
- Proxy Obedience
- State Separation
- Disk Avoidance
- Application Data Isolation
- Secure Updating
- Usable
- Cross-Origin Fingerprinting Unlinkability
We'll go through these one-by-one and describe the role of a sandbox and how we can achieve this on each platform.
Yes; but I would phrase things differently. I also want to call out our strengths and weaknesses.
Our goals, as I understand them, are twofold:
1) Improve the sandboxing of the browser as a whole to a higher level to i) reduce attack surface from vulnerabilities and common attacker positions and ii) act as a safety net for design doc goals
2) Eventually, eliminate Tor Browser as a fork of Firefox. What this means in detail is (to me at least but probably everyone) not fully determined, but I'll assume a world where there's at least the window of branding changes, pref flips and system-like addons but no functionality patches...
Our strengths, as I see them:
1) We are privacy research experts
2) We're willing to compromise on user freedom, web features, and performance for security and privacy
3) We're willing to experiment with more complicated user interfaces
4) Our use case for the browser is (probably) different. We're (probably) not seeing long-lived browser sessions and therefore have more frequent restarts. We're (probably) not seeing a lot of logged-in sessions, so restarting the browser (especially with an open-tabs-restore feature) is not as painful.
Of course we want our browser to be usable, so we're not going to go crazy on (2) or (3), but that we allow the user to disable JavaScript, and have and expect to keep a 'security slider' are examples of where we're willing to go.
Our weaknesses, as I see them:
1) We're tracking ESR for now
2) Un-upstreamed patches are painful for us
3) We are not well suited to undertake large browser architecture projects, especially if they diverge from Firefox
4) We are neither browser architecture experts nor sandboxing experts. In particular this makes us ill-suited to predict feature breakage or performance degradation from a hypothesized sandbox/architecture change.
In particular, while looking at this, I think this gives particular weight to aligning ourselves with future Firefox plans. And that we can often begin adopting those plans before Firefox does.
Note, this design assumes a launcher-based sandboxed architecture (similar to the Sandboxed Tor Browser on Linux):
                ------------------
                |    Launcher    |
                ------------------
           ______________|______________
           |             |             |
           v             v             v
      -----------   -----------   -----------
      | Sandbox |   | Sandbox |   | Sandbox |
      | ------- |   | ------- |   | ------- |
      | Firefox |   | Broker  |   |   Tor   |
      -----------   -----------   -----------
           |             |_____________|
           |              Controller
           |___________________________|
                     Proxy IPC
Note, the sandboxed broker process (or processes) moves some functionality out of the launcher process and into a child process. This process is not as heavily sandboxed as the Firefox and Tor processes, but, for example, it would not need networking. It would handle sending NEWNYM, providing the circuit display, and changing bridge configuration (general controller functions). In addition, it would handle, for example, copying files into and out of the Firefox sandbox.
Unfortunately, there is no existing cross-platform solution for this design. We can take bits and pieces from other solutions, but creating this will require engineering effort.
As a nit, this diagram shows all the processes as being 'enveloped' by a sandbox; but that illustration is misleading if one is using, e.g. Seatbelt on Mac. It's more accurate that the process is named Sandboxed-Tor or something like that, rather than depicting some additional 'thing' (which appears to be a process) that 'is' the sandbox. This illustration makes it look more like you're running stuff in a container. (Which may actually be the case for some platforms though.)
Also: I think the IPC diagrams of Firefox <-> Tor are accurate; but we leave out the Content Processes in this illustration. If in the future Mozilla or Tor separates browser functionality into a helper process to reduce privileges in the Content or Parent process, it becomes possible that that helper process (a child of 'Firefox') will communicate with the Broker (and Tor). For example if/when Necko leaves the parent process and all network operations happen in a helper process off Firefox.
Finally and most substantially, I want to think about the assumption you're making about a launcher process. I agree entirely that we want our parent-process-is-sandboxed end-goal to look logically like your diagram during operation. Firefox can't talk to the network or Tor's control port: it talks to a broker that talks to Tor's control port and it talks to Tor for SOCKS proxying using a Named Pipe or Domain Socket. Certain operations are also relegated to the Broker and Firefox can't perform them.
But getting to that state... Using a Launcher process is one solution; but it's not the only solution. So I think we should keep that in mind and not assume a launcher process; but instead plan for a logical run-time design of:
+-------------------+
| Sandboxed-Firefox |
+-------------------+
  |    |    |    |
  |    |    |    |  Primary (Existing) Browser IPC
  |    |    |    |  +--------------------+
  |    |    |    +--| Sandboxed          |
  |    |    |       | Content Processes  |
  |    |    |       +--------------------+
  |    |    |            |         |
  |    |    |         TBD IPC   TBD IPC
  |    |    |            |         |
  |    |    |       +--------------------+
  |    |    +-------| Other Sandboxed    |
  |    |            | Helper Processes   |
  |    |            +--------------------+
  |    |                           |
  |    |            +--------------------+
  |    +------------| Sandboxed          |
  |                 | Networking Process |
  |                 +--------------------+
  | Tor Control          | SOCKS over Domain
  | Requests             | Socket/Named Pipe
  |                      |
+--------------------+   |
| Sandboxed Broker   |   |
+--------------------+   |
  | Filtered Control     |
  | Requests             |
  |                      v
  |   +--------------------+
  +-->| Sandboxed Tor      |
      +--------------------+
Spelled out:
- The FF Parent Process talks to and controls the Content Processes (using existing IPC mechanisms) and maybe/probably interfaces with the Networking process and other Helper Processes in unknown future ways. The content processes probably talk to the Network process directly; they might also talk to the other helper processes.
- The Networking process talks to Tor using SOCKS with (probably) a domain socket or named pipe.
- Tor Control requests are sent from the Parent Process to the broker, which filters them and then passes them to Tor over the control port.
- The broker is most likely the least sandboxed process and may provide additional functionality to the parent process; for example perhaps it passes a writable file handle in a particular directory so the user can save a download.
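As a toy illustration of the broker's filtering role (the whitelist contents are my assumptions, though SIGNAL NEWNYM and GETINFO circuit-status are real control-port commands):

#include <stdio.h>
#include <string.h>

/* Commands Firefox may send; everything else never reaches tor. */
static const char *allowed[] = {
    "SIGNAL NEWNYM",           /* request new circuits         */
    "GETINFO circuit-status",  /* data for the circuit display */
};

/* Return 1 if 'cmd' may be forwarded to tor, 0 otherwise. */
int broker_allows(const char *cmd)
{
    size_t i;
    for (i = 0; i < sizeof(allowed) / sizeof(allowed[0]); i++)
        if (strcmp(cmd, allowed[i]) == 0)
            return 1;
    fprintf(stderr, "broker: refusing \"%s\"\n", cmd);
    return 0;
}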
I think we should plan on creating a proof-of-concept toy program on all platforms that mimics the process execution and mitigations. (I once started one of those here: https://github.com/tomrittervg/sandboxsandbox/blob/master/SandboxSandbox/San... ) This will allow us to experiment/assert with certainty about what operations are allowed and disallowed, and allow easier code review and experimentation by individuals, especially those looking at a platform outside their area of experience. It's also lower cost than trying to implement it in FF from the get-go.
- Proxy Obedience
The last line of defense against a proxy-bypass is a mechanism for dropping all outgoing IP packets from the browser that are:
- not TCP
- or TCP but not destined for the Tor proxy port (SOCKS or HTTP)
- or all packets (including TCP) if the proxy listener is not a TCP port (Unix domain socket, named pipe, etc)
Dropping implies the packets are sent from the process and then dropped by the OS. This is not always the case, on Windows (with user_restricted) and Mac at least, the process is just outright prevented from making successful network calls.
We should also consider guarding the creation of all non-Unix domain sockets and non-named pipes with a pref. I believe sandboxing should be optional, but enabled by default (at least with first few versions), so excluding that code at compile-time is not an option. However, a pref-guard only stops unintended proxy-bypass through normal control flow. This only slightly increases the difficulty of some exploit methods.
I agree that a pref-based solution is only actually useful for testing purposes and doesn't provide security.
Per-platform sandboxing techniques:
a) Microsoft Windows
   i) Windows 8+: If the user can elevate to administrator, then the launcher can install a network filter using the Windows Filtering Platform[17]. This can deny any connection from any process (based on the process's fully-qualified file name)[18]. This adds some risk: if the launcher process exits unexpectedly for whatever reason, the firewall rule may remain in place.
   ii) Windows 10: We can create a new network namespace[19] using the (undocumented) HNSCall procedure[20]. This is very appealing.
I think this is only possible if you're using a version of Windows which supports containers, which most of our users won't have.
iii) Windows 7+: If the platform is Windows 8+ and the user cannot elevate to administrator, or if the platform is Windows 7:
   - TODO: can/should we use DLL injection to hook Winsock and manually filter all proxy-bypass network connections?
I would expect we could use the same type of shims the content process currently uses to achieve this goal.
Is there something we can use for mitigating ROP gadgets within the sandbox?
I'm not sure where this goal came from, but it seems very independent of sandboxing. But I think the general answer is: No, not until one has or can emulate Execute-Only memory, which might be possible if one runs their own hypervisor (I don't think Hyper-V can do it but maybe?)
Back to networking: I think you missed the most straightforward one: the USER_RESTRICTED token level which blocks networking.
See also https://bugzilla.mozilla.org/show_bug.cgi?id=1403931 and https://bugzilla.mozilla.org/show_bug.cgi?id=1177594
b) Mac OS X
   i) We can restrict network access using the com.apple.security.network.client and com.apple.security.network.server entitlements[21].
   ii) We can further restrict which types of connections the browser is allowed to make. Previously, there was a Seatbelt profile for Tor Browser[22], but it is not usable with newer versions of the browser. We can continue using it for allowing only Unix domain socket access requests and denying all other network connections.
Note that the content Sandbox for Mac already enforces networking restrictions.
c) GNU/Linux
   i) On Linux, significant progress has already been made on sandboxing Tor Browser. There are two methods we can use:
      - an isolated network namespace via bubblewrap[14]
      - Seccomp-BPF on socket syscalls
   ii) If bubblewrap is not available, we can fall back on using unshare, if it is available and user namespaces are enabled.
   iii) If user namespaces are not available, we ask the user for elevated privileges and manually fork into the new network namespace (worst case).
d) Android
   i) We can use the same (or a similar) Seccomp-BPF filter that we use on Linux.
   ii) We can move the Gecko service into an isolated process without any permissions (including networking).
   iii) If all networking goes through Necko, and Necko is in the isolated process, then the remaining code outside the isolated process should be proxy-safe, or it is the tor process.
As with all things sandboxing, we need to be sure there are no IPC mechanisms to a privileged process that bypass the sandbox restrictions. On the networking side, there is https://searchfox.org/mozilla-central/source/dom/network/PUDPSocket.ipdl - which I think is used by WebRTC.
- State Separation
Per-platform sandboxing techniques:
a) Microsoft Windows
   i) I'm not sure if there is more we can do here.
b) Mac OS X
   i) The Seatbelt profile[22] can restrict access to all other Firefox profiles.
   ii) The Seatbelt profile can allow access only for bundled libraries.
   iii) We may exclude entitlements for most system services.
c) GNU/Linux
   i) We can create new IPC, mount, and UTS namespaces.
   ii) We can provide a clean environment.
   iii) The new mount namespace contains only the bundled libraries required for running.
   iv) D-Bus/IBus access would be nice, if we can do it safely.
d) Android
   i) All apps are isolated by default, so there should never be shared state.
   ii) Ensure we have the smallest set of intent filters we need.
   iii) Ensure we have the smallest set of exported components we need.
   iv) Ensure we have the smallest set of broadcast receivers we need.
   v) Ensure we only use broadcast intents where necessary.
   vi) Is there something else we should investigate here?
Why is there so much difference between the other OSes and Windows?
The goal for State Separation, as I understand it, is to ensure that no existing browser settings/profiles/plugins are used in Tor Browser. I think there would be two reasons this could happen: a) The browser itself has a bug that we should fix, where it loads some existing data b) Some third party utility has installed something into Tor Browser
(b) seems generally unmitigatable. For (a) the primary mechanism to prevent this is "don't have browser bugs". The sandbox's role is "If there is a browser bug, block it from trying to read and use that data." So anything we can do in any sandbox to restrict access to existing user data or system directories would support this goal.
- Disk Avoidance
In general, I'd like #7449 mitigated.
Per-platform sandboxing techniques:
a) Microsoft Windows
   i) Windows 10+
      - On very recent versions of Windows, we can create temporary filesystem layers[23][24]. I'm not sure if these are memory-backed or filesystem-backed.
      - We can create a read-only layer over the installation directory.
      - #18367 may be useful (side-by-side user/app data on Windows).
      - Can we isolate the container from system services?
Again, non-consumer versions of Windows only.
b) Mac OS X
   i) We can use entitlements[25].
   ii) The Seatbelt profile[22] provided additional access restrictions.
   iii) Is there a filesystem layering mechanism available in OS X (like tmpfs on Linux)?
      - Specifically, can we use this for ~/Library/Caches/TemporaryItems and ~/Downloads?
c) GNU/Linux
   i) We can create new IPC, mount, and UTS namespaces.
   ii) #18369 may be useful (side-by-side user/app data on Linux).
d) Android
   i) The app is self-contained by default.
   ii) Need #26574.
   iii) Additional auditing required.
What about restricting write access to temp directories? That seems like the quickest and most compatible option (although it wouldn't catch every possible occurrence of this issue.)
In general, similar to State Separation; I think there's two categories of issues here: a) Things the browser does that are browser bugs b) Things the OS does independent of the application
A sandbox can provide a safety net for things in (a).
- Application Data Isolation
I'll claim the per-platform sandboxing techniques are the same as Disk Avoidance.
I think this one is even easier, since one can simply prevent write access to any directory except the containing one. This would have to come at the cost of additional engineering to change how the File Save dialogs work though - or you'll have a lot of confused and upset users.
- Secure Updating
Nearly the same sandboxing techniques can be used on all platforms. A launcher component (maybe not the launcher itself, but a reduced-privilege process, not the browser) downloads the update. Next, on download completion, a launcher component verifies the download and installs it (I can't justify verifying the download in an unprivileged, sandboxed process - I'd like to do it, but I don't see a benefit).
a) Microsoft Windows
   i) Layer a read-only filesystem on top of the install dir; prevents persistent modifications.
b) Mac OS X
   i) The Seatbelt profile prevents write access to app binaries.
c) GNU/Linux
   i) The mount namespace provides read-only access to the install dir.
   ii) The mount namespace does not include launcher binaries.
d) Android
   i) If the app was installed using an app store, then that store is responsible for updating.
   ii) If the app was installed by the user (side-loaded), then we can isolate downloading into a separate process, and verification can be handled in an isolated process. Finally, the Android package manager controls installing the update.
This one is at odds with (4) which says "You're not allowed to write anything outside your install directory." We need to be more explicit about what parts of the install directory one is allowed to write to.
Web Extensions come into play here. And themes. If we want to allow them; we need to carve out exceptions for them. And ideally we would use the sandbox to enforce an additional safety net against system add-ons if that's possible (e.g. they're not installed into the same place as web extensions.)
- Usable
Unfortunately, despite the above suggestions, this sandboxing model fails if users find it unusable. In particular, with temporary file systems, we should provide a method for safely extracting a file from the temporary location and copying it to a permanent location using the launcher process.
I would argue that this extraction should be automatic. Anything else is unusable. Users won't be used to being unable to save downloaded files wherever on their system they want; adding another thing on top of that would be too painful.
In addition, exposing operating system functionality which may be fingerprintable and/or another attack vector should be optional if it unbreaks the web (such as providing pulseaudio on Linux[26]). As usual, we must be careful about giving users too many customizable switches.
Yea; this one is tough.
- Cross-Origin Fingerprinting Unlinkability
I have less experience here on non-Linux OSes (and Sandboxed Tor Browser is already a good base on Linux). Which other components should we consider spoofing (if possible), or should we go out of our way to disallow access (if spoofing is not possible using the above means)?
So the Browser already brokers access to features on the Web Platform and can be thought of (and really is) a 'sandbox' for webpages. You're only fingerprintable based on what the Web Platform provides. If it provides something fingerprintable; we need to either fix that fingerprint or block access to it at that layer.
I would argue that the role sandboxing should provide here is fingerprinting protection in the face of code execution in the content process. Assume an attacker who has a goal: identify what person is using Tor Browser. Their actions are going to be goal oriented. A website cannot get your MAC address. But if a website exploits a content process and is evaluating what is the best cost / benefit tradeoff for the next step in their exploit chain: the lowest cost is going to be 'zero' if the content process does not restrict access to a OS/computer feature that would identify the user.
So the goal of sandboxing in this area would be to restrict access to any hardware/machine/OS identifiers like OS serial number, MAC address, device ids, serial numbers, etc. After that (the 'cookies' of device identifiers if you will), the goal would be to restrict access to machine-specific features that create a unique fingerprint: like your GPU (which I illustrate because it can render slightly unique canvas data) or your audio system (which I illustrate because it can apparently generate slightly unique web audio data.)
-tom
[ Trying to keep the scope of my reply limited. nb: Linux centric since that's what I know/have/wrote. ]
On 07/05/2018 06:46 PM, Tom Ritter wrote: [snip]
I think there's a lot of focus in this document on applying sandbox policies by the OS before/as the process starts, and not on following the 'drop privileges' model. But dropping privileges is much more flexible. I think some confusion with this model is the notion that the process is 'sandboxing itself' - and of course one can't trust a process that is simultaneously compromised and attempting to perform security operations, so that model must be broken - but this notion is incorrect. The process - before it is able to be compromised by attacker input, before it processes anything from the web - instructs the OS to apply the sandbox to itself, and cannot later opt out of that restriction. It's true that something could go wrong during the sandbox application and result in the process staying elevated - but I think that is easy to code defensively for and even test at runtime.
Well, yeah. Matt's architecture is derived sandboxed-tor-browser, whose design assumptions were along the lines of "Firefox's codebase is a gigantic mess of kludges upon kludges that shouldn't be trusted, or altered if at all possible".
Having the sandbox be an entirely separate component:
a) Happened to mesh nicely with the way sandboxing is typically implemented on my target platform.
b) Enabled using a "safe" language.
c) Enabled far more rapid development than otherwise possible.
d) Made it easier to validate correctness.
e) Slowed my inevitable descent into insanity by keeping my interaction with the Firefox code to a minimum.
Our weaknesses, as I see them:
1) We're tracking ESR for now
2) Un-upstreamed patches are painful for us
3) We are not well suited to undertake large browser architecture projects, especially if they diverge from Firefox
4) We are neither browser architecture experts nor sandboxing experts. In particular this makes us ill-suited to predict feature breakage or performance degradation from a hypothesized sandbox/architecture change.
The number of upstream (Firefox) changes that were needed to get the Linux sandbox to work was exactly 0. There was one fix I backported from a more current firefox release, and two upstream firefox bugs that I worked around (all without altering the firefox binary at all).
The design and code survived more or less intact from 7.0.x to at least the entirety of the 7.5 stable series. (I don't run alpha; I assume there are some changes required for 8.0, but the code's deprecated and I can't be bothered to check. It would have survived if the time and motivation were available.)
Spelled out:
- The FF Parent Process talks to and controls the Content Processes (using existing IPC mechanisms) and maybe/probably interfaces with the Networking process and other Helper Processes in unknown future ways. The content processes probably talk to the Network process directly; they might also talk to the other helper processes.
- The Networking process talks to Tor using SOCKS with (probably) a domain socket or named pipe.
- Tor Control requests are sent from the Parent Process to the broker, which filters them and then passes them to Tor over the control port.
- The broker is most likely the least sandboxed process and may provide additional functionality to the parent process; for example perhaps it passes a writable file handle in a particular directory so the user can save a download.
How do you envision updates to work in this model? Having the sandbox be externalized and a separate component makes it marginally more resilient to the updater/updates being malicious (though I would also agree that it merely shifts the risks onto the sandbox update mechanism).
It is also not clear to me how to do things like "peek at the executable's ELF header to only bind mount the minimum number of shared libraries required for the executable to run" from within the executable itself.
As with all things sandboxing, we need to be sure there are no IPC mechanisms to a privileged process that bypass the sandbox restrictions. On the networking side, there is https://searchfox.org/mozilla-central/source/dom/network/PUDPSocket.ipdl - which I think is used by WebRTC.
The old sandboxed-tor-browser code doesn't care about such things.
- Disk Avoidance
In general, I'd like #7449 mitigated.
[snip]
What about restricting write access to temp directories? That seems like the quickest and most compatible option (although it wouldn't catch every possible occurrence of this issue.)
The old Linux code used mount namespaces to limit writes to:
* A subset of the profile directory.
* tmpfs that will get torn down and discarded upon exit.
* The Downloads/Desktop directories.
This catches most cases, though the browser can still mess up its own profile dir. There is an option that takes this a step further and shadows the profile directory into another tmpfs filesystem, rendering the system amnesiac except for the Downloads/Desktop directories (which was what I used day to day), but for obvious reasons that was disabled by default.
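For reference, the shadowing trick boils down to a tmpfs mount over the profile directory inside the sandbox's mount namespace; a sketch with illustrative path and size (this is not the sandboxed-tor-browser code itself):

#define _GNU_SOURCE
#include <stdio.h>
#include <sys/mount.h>

int main(void)
{
    /* Run inside the sandbox's mount namespace. Everything written
     * under the profile directory now lands in RAM and disappears
     * when the namespace is torn down. */
    if (mount("tmpfs", "/home/user/tor-browser/profile", "tmpfs",
              MS_NOSUID | MS_NODEV, "size=256m,mode=0700") != 0) {
        perror("mount tmpfs");
        return 1;
    }
    return 0;
}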
- Cross-Origin Fingerprinting Unlinkability
[snip]
So the Browser already brokers access to features on the Web Platform and can be thought of (and really is) a 'sandbox' for webpages. You're only fingerprintable based on what the Web Platform provides. If it provides something fingerprintable; we need to either fix that fingerprint or block access to it at that layer.
Extra things that the Linux sandbox did:
* Included an extension whitelist, so that users can't add new extensions. * Disabled addon auto updates, because addons.mozilla.org is not to be trusted. * Forced software rendering (though WebGL in that configuration is busted on certain systems for unrelated reasons). * Disabled access to the audio subsystem unless configured to allow it. * (Probably other things, I don't remember.)
I would argue that the role sandboxing should provide here is fingerprinting protection in the face of code execution in the content process. Assume an attacker who has a goal: identify what person is using Tor Browser. Their actions are going to be goal oriented. A website cannot get your MAC address. But if a website exploits a content process and is evaluating what is the best cost / benefit tradeoff for the next step in their exploit chain: the lowest cost is going to be 'zero' if the content process does not restrict access to a OS/computer feature that would identify the user.
The goal should be to provide fingerprint protection in the face of "code execution anywhere in the Tor Browser".
So the goal of sandboxing in this area would be to restrict access to any hardware/machine/OS identifiers like OS serial number, MAC address, device ids, serial numbers, etc. After that (the 'cookies' of device identifiers if you will), the goal would be to restrict access to machine-specific features that create a unique fingerprint: like your GPU (which I illustrate because it can render slightly unique canvas data) or your audio system (which I illustrate because it can apparently generate slightly unique web audio data.)
The filesystem also provides a considerable amount of identifying information about a user. It's not much of a sandbox if the adversary can just exfiltrate the contents of the user's home directory over tor.
Anyway, as far as I can tell, the differences in what you're suggesting vs the existing/proposed architecture boil down to "how much of firefox should be trusted?". To this day, I remain in the "as little as possible" camp, but "nie mój cyrk, nie moje małpy" ("not my circus, not my monkeys").
Regards,
On Thu, Jul 05, 2018 at 06:46:21PM +0000, Tom Ritter wrote:
On 3 July 2018 at 18:03, Matthew Finkel matthew.finkel@gmail.com wrote:
Hi All,
This is the beginning of a conversation about creating a plan for moving towards sandboxing Tor Browser on every platform.
Thanks for this Matt! I started typing and kept typing and just kept typing and well... sorry.
Thanks for this valuable response.
Over the last few years, a sandboxed Tor Browser existed only on Linux[0] (created and maintained by Yawning Angel). However, Tor Browser aims to provide a private browser on all supported platforms (Microsoft Windows, Apple Mac OS X, GNU/Linux, and Android (AOSP))[1][2], which means we must provide a sandboxed run-time environment on all platforms.
[snip] (Read Tom's original mail for the distinction sandboxing different parts of Firefox)
Firefox has not done any work on sandboxing the parent process. All the discussions I have with folks at Tor are about protecting the parent process for additional defense in depth. Thus we're assuming that a) An attacker has exploited the content process but cannot achieve their goal from that position and b) decides to attack the parent process.
Both of these assumptions are not necessarily true, which means we need to address the opposite of those assumptions _too_; and we should prioritize that work in relation to the larger architectural goal of "How do we sandbox the parent process".
Right, my perspective on this is achieving both of these goals. In general: 1) can we sandbox each process better, and 2) can we create an environment for those processes where, in the event of a successful exploit, the attacker gains the smallest amount of additional information possible. While we may be able to make small improvements on (1) in the current Firefox codebase, it seems (2) is more difficult, and it is a much larger goal than browsers have historically targeted.
To the point of (b): This document focuses on sandboxing as supporting the goals of the design doc, which I think is a fine approach. But we should be aware of other approaches to evaluating sandboxing and one is from the capability reduction standpoint. A lot of sandbox escapes today occur by getting code execution in the content process and then exploiting the kernel. So identifying the features on each platform that can reduce kernel attack surface, and then seeing what prevents us from using these features (starting in the content process but eventually including the parent process) would be a worthwhile exercise. Similarly to focusing on the kernel, we can look at what other permissions are needed (by the content and eventually the parent) and determine which ones are the scariest and work to either remove them entirely, or move them to a separate process.
Yes, this is a good approach. My original email was not as precise about my intentions as it should've been. I'll mention more later.
To the point of (a), we should consider what goals an attacker has when exploiting Tor Browser. Enumerating attacks is likely to miss some, but not considering attacks means we're trying to defend ourselves from 'everything' with no priority. I think I would classify attacks into High Impact and Opportunistic Impact, and focus specifically on "code execution achieved through an exploit" (and not things like 'exploiting' a fingerprinting vector we have not solved in the web platform.) We may miss some attacks; but the ones we can list today are, I think, accurate in their impact.
High Impact goals are ones that do one or more of the following: achieve a proxy bypass, temporarily corrupt tor's configuration leading to a direct-to-relay connection (by changing the user's bridge/guard/DirAuths), permanently corrupt the user's machine (by installing malware or affecting tor's configuration), or (less bad without a proxy bypass but still bad) retrieve an identifier that uniquely and persistently identifies a user (MAC address, serial) and exfiltrate it.
Opportunistic goals are ones that compromise the browser process as it currently runs (without persistence) and rely on the user performing an action using the browser that discloses information. For example, reading other websites' cookies is only useful if the user is active on other sites in a way that identifies them.
When one is able to achieve High Impact goals from the Content Process
- it seems to me that engineering effort should be focused on closing
_those_ holes first, before trying to build a solution for the parent process. (I'm not saying we shouldn't plan for the parent process though!)
Having Mozilla's help identifying what is needed, and where we should start, in this area will be extremely helpful.
I could talk more about what actions an attacker can perform if they exploit the Content Process (both currently and post-Fission https://wiki.mozilla.org/Project_Fission ) and how they can attack the Parent Process; but I'm going to skip that for now. I'd be happy to go into it if it seems helpful.
Unfortunately, each operating system provides a unique set of sandboxing techniques and capabilities, so we must work with the facilities we are given. In some cases, we may need to be creative about how we achieve our goals.
I do not have all the answers, and there are some open questions below.
On Windows, we have the Windows integrity mechanism[3], Windows Containers[4], SetProcessMitigationPolicy[5], App Container[6], and maybe some others. Some of this functionality is already used by Firefox.
For Windows Container I found the following helpful reading: https://docs.microsoft.com/en-us/virtualization/windowscontainers/about/
In particular, they require Windows Server or Windows 10 Professional or Enterprise. Containers are awesome, and in particular Hyper-V based containers are essentially Virtual Machines with their own kernel and therefore we could do really powerful things with them. But I think we need to wait a few years until it becomes more widely available.
Ugh. Okay, I read the About page, but I didn't read the "Running your first container" close enough and I missed the Professional/Enterprise requirement.
There's https://www.bromium.com/ which operates in this space, and despite having a bunch of slick marketing and buzzwords and stuff actually has a really powerful technology core.
That website is...hard to read. But yes, ideally, that is where I think private browsing mode should go in the future, where the browser is contained by a cross-platform VM and we have a minimal trusted computing base with a limited attack surface. In this way, the browser only needs one set of sandboxing techniques. All platform-specific mitigations and restrictions are in an abstraction layer at the VM-Kernel interface.
App Container is primarily for Windows Store apps, but can also be used for 'Legacy Apps' per https://docs.microsoft.com/en-us/windows/desktop/secauthz/appcontainer-for-l... . I can't quite tell how useful it is.
I'm hoping we can take advantage of this, but I haven't tested it. Most of this email is based on theoretical sandboxing options based on my research - I'm certainly not an expert on many areas here.
The Chromium/Firefox sandbox on Windows is composed of the interaction of SetProcessMitigationOptions (of which there are many options), Integrity Level (both an 'Initial' and a 'Delayed') as well as Job Level, Access Token Level, Alternate Desktop, and Alternate Windows Station.
I'm hopeful we can use more of these than we do currently.
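As a concrete illustration of one of these knobs, here is a minimal sketch (mine, not taken from Firefox's sandbox code) of a process opting into a single mitigation via SetProcessMitigationPolicy on a recent-enough Windows; the specific policy chosen is just an example:

    #include <windows.h>

    int main(void) {
        /* Arbitrary Code Guard: forbid creating or modifying
           executable memory in this process. */
        PROCESS_MITIGATION_DYNAMIC_CODE_POLICY policy;
        ZeroMemory(&policy, sizeof(policy));
        policy.ProhibitDynamicCode = 1;
        if (!SetProcessMitigationPolicy(ProcessDynamicCodePolicy,
                                        &policy, sizeof(policy))) {
            return 1; /* e.g. unsupported Windows version */
        }
        /* The process can no longer JIT or self-modify; the policy
           cannot be reverted for the lifetime of the process. */
        return 0;
    }

Note this follows the 'drop privileges' model discussed later: the process applies the mitigation to itself before handling untrusted input.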
On Mac OS X, there are two sandboxing techniques available. The first is Seatbelt[7]. Apple deprecated it in favor of code signing entitlements[8]. Unfortunately, the code signing entitlements are not as fine-grained as those Seatbelt provides, but we can enable different entitlements per "target"[9]. I don't know if this will be difficult for us. Therefore, we should utilize the code-signing restrictions where they are appropriate, but we should follow Safari[10], Chromium[11], and Firefox[12][13] by applying restrictive Seatbelt policies where applicable.
Can you apply both Seatbelt and Code Signing Entitlements?
Unknown, good question.
Also; have you experimented with/confirmed the restriction mentioned at All Hands; that a process with a Seatbelt Policy cannot launch a process with a more restrictive seatbelt policy? (I think mcs ran into this with the Mac Sandbox policy that used to ship with Tor Browser Alpha.) This is a significant problem to any Apple-based sandboxing policy, and frankly one I think someone should make a semi-organized petition from browser makers to Apple to fix.
Yes - I wonder if the sandbox_init() call failed because it was denied by our initial sb policy. But, at the same time, I wonder if after adding a first-layer of sandboxing that (somehow) allows calling sandbox_init() again, does this result in further restricting the current sandbox policy, or does this overwrite the original policy.
The man page (as available online) does not describe this behavior beyond saying that calling sandbox_init() places the current process into a sandbox(7). And that man page only says "New processes inherit the sandbox of their parent."
https://www.manpagez.com/man/3/sandbox_init/ https://www.manpagez.com/man/7/sandbox/
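For reference, the deprecated API in question looks like this; a minimal sketch (not what Tor Browser ships) applying one of the built-in named profiles:

    #include <sandbox.h>
    #include <stdio.h>

    int main(void) {
        char *err = NULL;
        /* Apply the built-in "no network" profile; per sandbox(7),
           child processes inherit this sandbox. */
        if (sandbox_init(kSBXProfileNoNetwork, SANDBOX_NAMED, &err) != 0) {
            fprintf(stderr, "sandbox_init: %s\n", err);
            sandbox_free_error(err);
            return 1;
        }
        /* Whether a second, more restrictive sandbox_init() call
           succeeds at this point is exactly the open question above. */
        return 0;
    }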
Complaining about this design is most likely a dead-end, however - maybe that shouldn't stop us from making some noise, though.
"The sandbox_init() and sandbox_free_error() functions are DEPRECATED. Developers who wish to sandbox an app should instead adopt the App Sand- box feature described in the App Sandbox Design Guide."
Additionally; OSX 10.14 Mojave provides new sandboxing features; including ones that feature-match Microsoft's Arbitrary Code Guard and Code Integrity Guard. I don't know much about these; but they are powerful and in the future, Mozilla will probably be interested in them. (Future being 1-2+ years from now.)
That sounds worth investigating.
On GNU/Linux, we can use the namespacing and secure computing (Secure Computing) facilities in the kernel exposed to userspace. Sandboxed Tor Browser on Linux already shows how these can be combined and form a sandbox. In particular, we can use bubblewrap[14] as a setuid sandboxing helper (if user namespace is not enabled), if it is available. In addition, we can reduce the syscall surface area with seccomp-BPF. CGroups provide a way for limiting the resources available within the sandbox. We may also want to manually proxy/filter other system functionality (X11).
Last, but not least, on Android, we begin with a fairly strict sandbox provided by its permissioning model[15], distinct per-app users, and SELinux policies. Because every app is run using a distinct user, and each has its own storage directories, we get Linux's DAC on the file system for "free". In addition, we can use some of the same techniques available on GNU/Linux on Android, namely the seccomp-bpf and namespaces, if they are available. Android provides privilege and permission isolation within an App by using Services. With some refactoring, we can isolate some parts of Firefox into isolated permissionless services[15].
Something else to consider is how each of these sandboxing technologies (on each platform) are enabled. Some are applied at runtime by the process, the process starts more privileged and then drops privileges. Others are applied by the OS before execution, and the process never has the elevated privileges.
If a process needs to do something that the sandbox should prevent - like creating a socket connection to another process - there are two ways (I think only two) to achieve that goal. The first way is a process that _has_ permission gives a resource (like a socket connection) to the restricted process. The second way is the restricted process starts with less restrictive permissions, does the privileged action (like creating a socket) and then applies the sandbox (either in whole or an additional component of it) to itself - ideally before processing any user input that could be malicious.
I think there's a lot of focus in this document on applying sandbox policies by the OS before/as the process starts and not following the 'drop privileges' model. But dropping privileges is much more flexible. I think some confusion with this model is the notion that the process is 'sandboxing itself' and of course one can't trust a process that is simultaneously compromised and attempting to perform security operations, so that model must be broken - but this notion is incorrect. The process - before it is able to be compromised by attacker input, before it processes anything from the web - instructs the OS to apply the sandbox to itself, and cannot later opt-out of that restriction. It's true that something could go wrong during the sandbox application and result in the process staying elevated - but I think that is easy to code defensively for and even test at runtime.
I imagine both sandboxing techniques working in parallel. I agree letting the process drop its privileges is much more flexible, and the process should do this. However, if we exclude platform limitations, there isn't a reason why a process should be created with any privileges it never needs.
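For the first model (a privileged process handing a resource to a restricted one), the usual POSIX mechanism is SCM_RIGHTS file-descriptor passing. A minimal sketch, assuming the broker and the sandboxed process already share a Unix socketpair (error handling elided; the function name is mine):

    #include <string.h>
    #include <sys/socket.h>
    #include <sys/uio.h>

    /* Broker side: send an already-open fd (e.g. a writable file in
       a user-chosen download directory) to the sandboxed process. */
    static int send_fd(int chan, int fd) {
        char dummy = 'x';
        struct iovec iov = { .iov_base = &dummy, .iov_len = 1 };
        union { char buf[CMSG_SPACE(sizeof(int))];
                struct cmsghdr align; } u;
        struct msghdr msg = { .msg_iov = &iov, .msg_iovlen = 1,
                              .msg_control = u.buf,
                              .msg_controllen = sizeof(u.buf) };
        struct cmsghdr *cmsg = CMSG_FIRSTHDR(&msg);
        cmsg->cmsg_level = SOL_SOCKET;
        cmsg->cmsg_type = SCM_RIGHTS; /* kernel duplicates the fd */
        cmsg->cmsg_len = CMSG_LEN(sizeof(int));
        memcpy(CMSG_DATA(cmsg), &fd, sizeof(int));
        return sendmsg(chan, &msg, 0) == 1 ? 0 : -1;
    }

The restricted process never needs the permission to open the file itself; it only ever holds the capability the broker chose to give it.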
I'll begin by describing the goals, as I see it, for sandboxing Tor Browser. Hopefully, this will help us evaluate the different available techniques. These goals are derived from the Design document[16] and are the means for achieving Tor Browser's end-goal.
In particular, the sandboxing techniques preserve the Security Requirements of a private browser when the browser, itself, fails at maintaining those criteria. By this I mean the sandbox should be designed such that if the browser process loses any of the Security properties (through a logical bug, exploited vulnerability, etc), the sandbox provides an additional layer of those properties and the user is not in immediate danger. The sandbox may, in some situations, improve the Privacy properties of Tor Browser, for example if a component/device is emulated instead of providing the browser with raw access. We should use these mechanisms when they are available.
- Proxy Obedience
- State Separation
- Disk Avoidance
- Application Data Isolation
- Secure Updating
- Usable
- Cross-Origin Fingerprinting Unlinkability
We'll go through these one-by-one and describe the role of a sandbox and how we can achieve this on each platform.
Yes; but I would phrase things differently. I also want to call out our strengths and weaknesses.
Our goals, as I understand them, are twofold:
1) Improve the sandboxing of the browser as a whole to a higher level, to i) reduce attack surface from vulnerabilities and common attacker positions and ii) act as a safety net for design doc goals
2) Eventually, eliminate Tor Browser as a fork of Firefox. What this means in detail is (to me at least but probably everyone) not fully determined, but I'll assume a world where there's at least the window of branding changes, pref flips and system-like addons but no functionality patches...
Our strengths, as I see them:
1) We are privacy research experts
2) We're willing to compromise on user freedom, web features, and performance for security and privacy
3) We're willing to experiment with more complicated user interfaces
I think we want to move away from this, right? Ideally, we'll have a simpler and more powerful user interface (with limited user choice).
4) Our use case for the browser is (probably) different. We're (probably) not seeing long-lived browser sessions and therefore have more frequent restarts. We're (probably) not seeing a lot of logged-in sessions, so restarting the browser (especially with an open-tabs-restore feature) is not as painful.
There are some (many?) users who rely on Tor Browser for their normal browsing activities, and they only use non-private browsing mode when it is actually needed. I'm not sure we can make an assumption that Tor Browser is restarted frequently (except for installing updates, hopefully).
Of course we want our browser to be usable, so we're not going to go crazy on (2) or (3), but that we allow the user to disable JavaScript, and have and expect to keep a 'security slider' are examples of where we're willing to go.
Our weaknesses, as I see them:
1) We're tracking ESR for now
2) Un-upstreamed patches are painful for us
3) We are not well suited to undertake large browser architecture projects, especially if they diverge from Firefox
4) We are neither browser architecture experts, nor sandboxing experts. In particular this makes us ill-suited to predict feature breakage or performance degradation from a hypothesized sandbox/architecture change.
I agree on these points, in general. It seems, unfortunately, that the browsers have made surprising sandboxing choices thus far. Admittedly, they have an existing user base and they worry any wrong choice will cost them market share, but that leaves Tor Browser in a precarious place. As a result, we are becoming sandboxing experts as best we can, and in the limited time we have available.
While looking at this, I think it gives particular weight to aligning ourselves with future Firefox plans. And that we can often begin adopting those plans before Firefox does.
I absolutely agree with this.
Note, this design assumes a launcher-based sandboxed architecture (similar to the Sandboxed Tor Browser on Linux):
                 ------------------
                 |    Launcher    |
                 ------------------
                   |      |      |
                   v      v      v
    -----------------  -----------  -----------------
    |    Sandbox    |  | Broker  |  |    Sandbox    |
    | ------------- |  -----------  | ------------- |
    | |  Firefox  | |   /       \   | |    Tor    | |
    | ------------- |--/         \--| ------------- |
    -----------------   Controller  -----------------
            |____________________________________|
                          Proxy IPC
Note, the sandboxed broker process (or processes) moves some functionality from the launcher process into a child process. This process is not as heavily sandboxed as the Firefox and Tor processes, but, for example, it would not need networking. This process would handle sending NEWNYM, providing a circuit display, and changing bridge configuration (general controller functions). In addition, it would handle copying files into and out of the Firefox sandbox, as an example.
Unfortunately, there is not an existing cross-platform solution for this design. We can take bits and pieces from other solutions, but creating this will require engineering effort.
As a nit, this diagram shows all the processes as being 'enveloped' by a sandbox; but that illustration is misleading if one is using, e.g. Seatbelt on Mac. It's more accurate that the process is named Sandboxed-Tor or something like that, rather than depicting some additional 'thing' (which appears to be a process) that 'is' the sandbox. This illustration makes it look more like you're running stuff in a container. (Which may actually be the case for some platforms though.)
Ideally, the sandbox would be a sandbox-container, and at this point I think that should be our goal. Deficiencies on each platform which prevent this are problems we must overcome. In the diagram, all firefox processes are grouped together because they will be within the same sandbox. The fact that there are child processes with more restrictions is a detail I didn't include - mostly because we want a sandbox that succeeds where/when firefox's or tor's mitigations fail.
Maybe on OS X it would be better if we followed Docker's lead and investigated integrating with the MacOS Hypervisor.
Also: I think the IPC diagrams of Firefox <-> Tor are accurate; but we leave out the Content Processes in this illustration. If in the future Mozilla or Tor separates browser functionality into a helper process to reduce privileges in the Content or Parent process, it becomes possible that that helper process (a child of 'Firefox') will communicate with the Broker (and Tor). For example if/when Necko leaves the parent process and all network operations happen in a helper process off Firefox.
Finally and most substantially, I want to think about the assumption you're making about a launcher process. I agree entirely that we want our parent-process-is-sandboxed end-goal to look logically like your diagram during operation. Firefox can't talk to the network or Tor's control port: it talks to a broker that talks to Tor's control port and it talks to Tor for SOCKS proxying using a Named Pipe or Domain Socket. Certain operations are also relegated to the Broker and Firefox can't perform them.
But getting to that state... Using a Launcher process is one solution; but it's not the only solution. So I think we should keep that in mind and not assume a launcher process; but instead plan for a logical run-time design of:
    +-------------------+
    | Sandboxed-Firefox |
    +-------------------+
       |    |    |
       |    |    | Primary (Existing) Browser IPC
       |    |  +---------------------+
       |    |  | Sandboxed           |
       |    |  | Content Processes   |
       |    |  +---------------------+
       |    |        | TBD IPC
       |    |  +---------------------+
       |    +--| Other Sandboxed     |
       |       | Helper Processes    |
       |       +---------------------+
       |             | TBD IPC
       |       +---------------------+
       |       | Sandboxed           |
       |       | Networking Process  |
       |       +---------------------+
       |             |
       | Tor Control | SOCKS over Domain
       | Requests    | Socket/Named Pipe
    +------------------+          |
    | Sandboxed Broker |          |
    +------------------+          |
       | Filtered Control         |
       | Requests                 |
    +------------------+          |
    | Sandboxed Tor    |----------+
    +------------------+
Spelled out:
- The FF Parent Process talks to and controls the Content Processes
(using existing IPC mechanisms) and maybe/probably interfaces with the Networking process and other Helper Processes in unknown future ways. The content processes probably talk to the Network process directly, they might also talk to the other helper processes.
- The Networking process talks to Tor using SOCKS with (probably) a
domain socket or named pipe
- Tor Control requests are sent from the Parent Process to the broker
which filters them and then passes them to Tor over the control port.
- The broker is most likely the least sandboxed process and may
provide additional functionality to the parent process; for example perhaps it passes a writable file handle in a particular directory so the user can save a download.
Yes. This is the general design from the operating system's perspective.
I think we should plan on creating a proof of concept toy program on all platforms that mimics the process execution and mitigations. (I once started one of those here: https://github.com/tomrittervg/sandboxsandbox/blob/master/SandboxSandbox/San... ) This will allow us to experiment/assert with certainty about what operations are allowed and disallowed and allow easier code review and experimentation by individuals, especially those looking at their platform outside their area of experience. It's also lower cost than trying to implement it in FF from the get-go.
Yes, that was part of my plan, as well. In reality, when we have a working implementation, I'd like us to integrate it into the XRE subsystem, and not as a completely separate wrapper application. However, this means there will be a lot more upfront logic than currently exists (but maybe this is already true with the on-going launcher work, I haven't looked at that yet).
- Proxy Obedience
The last line of defense against a proxy-bypass is a mechanism for dropping all outgoing IP packets from the browser that are:
- not TCP
- or TCP but not destined for the Tor proxy port (SOCKS or HTTP)
- or all packets (including TCP) if the proxy listener is not a TCP port (Unix domain socket, named pipe, etc)
Dropping implies the packets are sent from the process and then dropped by the OS. This is not always the case; on Windows (with USER_RESTRICTED) and Mac at least, the process is outright prevented from making successful network calls.
True. This is also correct on Linux with seccomp. My phrasing was more general.
iii) Windows 7+: If the platform is Windows 8+ and the user cannot elevate to Administrator, or if the platform is Windows 7:
   - TODO: can/should we use DLL injection of WinSocks and manually filter all proxy-bypass network connections?
I would expect we could use the same type of shims the content process currently uses to achieve this goal.
Okay, I'll look into that some more.
Is there something we can use for mitigating ROP gadgets within the sandbox?
I'm not sure where this goal came from, but it seems very independent of sandboxing. But I think the general answer is: No, not until one has or can emulate Execute-Only memory, which might be possible if one runs their own hypervisor (I don't think Hyper-V can do it but maybe?)
This is related, but I see it didn't logically flow from the surrounding thoughts. While I was thinking about using DLL injection for WinSocks, I thought about how that wouldn't prevent an attacker using ROP for bypassing it.
Back to networking: I think you missed the most straightforward one: the USER_RESTRICTED token level which blocks networking.
See also https://bugzilla.mozilla.org/show_bug.cgi?id=1403931 and https://bugzilla.mozilla.org/show_bug.cgi?id=1177594
Yes, that looks like something we definitely want Mozilla to land in Firefox.
b) Mac OS X
   i) We can restrict network access using the com.apple.security.network.client and com.apple.security.network.server entitlements[21].
   ii) We can further restrict which types of connections the browser is allowed. Previously, there was a seatbelt profile for Tor Browser[22], but it is not usable with newer versions of the browser. We can continue using it for only allowing access requests for Unix domain sockets, and denying all other network connections.
Note that the content Sandbox for Mac already enforces networking restrictions.
Great.
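To make (i) concrete, the entitlement side is just a plist fragment in the signed bundle. An illustrative fragment (the keys are real; the policy shown is an assumption about what we would want, not what Tor Browser ships):

    <!-- Opt into the App Sandbox and do not request the network
         entitlements, so inbound and outbound network access are
         denied by default. -->
    <key>com.apple.security.app-sandbox</key>
    <true/>
    <!-- com.apple.security.network.client and
         com.apple.security.network.server deliberately omitted -->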
c) GNU/Linux
   i) On Linux, significant progress was already made in terms of sandboxing Tor Browser. There are two methods we can use:
      - Isolated network namespace via bubblewrap[14]
      - Seccomp-BPF on socket syscalls (a sketch follows this networking discussion)
   ii) If bubblewrap is not available, we can fall back on using unshare if it is available and user namespaces are enabled
   iii) If user namespaces are not available, we ask the user for elevated privileges and manually fork into the new network namespace (worst-case).
d) Android
   i) We can use the same (or similar) Seccomp-BPF filter that we use on Linux
   ii) We can move the Gecko service into an isolated process without any permissions (including networking); a manifest sketch follows below.
   iii) If all networking goes through Necko, and Necko is in the isolated process, then the remaining code, outside the isolated process, should be proxy-safe or it is the tor process.
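For (ii), Android's isolatedProcess service attribute is the relevant hook. A hypothetical AndroidManifest.xml fragment (the service name is made up for illustration; the attributes are real):

    <!-- Run the Gecko service in its own isolated process, which
         gets a transient uid and holds no permissions of its own. -->
    <service
        android:name=".GeckoNetworkService"
        android:isolatedProcess="true"
        android:exported="false" />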
As with all things sandboxing, we need to be sure there are not IPC mechanisms to a privileged process that bypass the sandbox restrictions. On the networking side, there is https://searchfox.org/mozilla-central/source/dom/network/PUDPSocket.ipdl
- which i think is used by WebRTC.
Thanks, good point. This is something we'll need to address before we enable WebRTC, anyway. On Android this code should be run by the background GeckoService, so as long as it is sandboxed that shouldn't bypass the proxy - but this needs verification.
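Returning to the seccomp-BPF item in (c) above, here is a minimal sketch of a filter that denies socket(2). It assumes x86_64, where socket is a direct syscall; a real filter must also check the architecture field of seccomp_data, which is omitted here for brevity:

    #include <errno.h>
    #include <stddef.h>
    #include <linux/filter.h>
    #include <linux/seccomp.h>
    #include <sys/prctl.h>
    #include <sys/syscall.h>

    int main(void) {
        struct sock_filter filter[] = {
            /* Load the syscall number. */
            BPF_STMT(BPF_LD | BPF_W | BPF_ABS,
                     offsetof(struct seccomp_data, nr)),
            /* socket(2)? Fail it with EACCES. */
            BPF_JUMP(BPF_JMP | BPF_JEQ | BPF_K, __NR_socket, 0, 1),
            BPF_STMT(BPF_RET | BPF_K, SECCOMP_RET_ERRNO | EACCES),
            /* Everything else is allowed. */
            BPF_STMT(BPF_RET | BPF_K, SECCOMP_RET_ALLOW),
        };
        struct sock_fprog prog = {
            .len = sizeof(filter) / sizeof(filter[0]),
            .filter = filter,
        };
        /* Required so an unprivileged process may install a filter. */
        if (prctl(PR_SET_NO_NEW_PRIVS, 1, 0, 0, 0)) return 1;
        if (prctl(PR_SET_SECCOMP, SECCOMP_MODE_FILTER, &prog)) return 1;
        /* From here on, socket(2) fails with EACCES in this process
           and its children; the filter cannot be removed. */
        return 0;
    }

In the drop-privileges model, the SOCKS connection to tor would be created (or received from the broker) before installing a filter like this.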
- State Separation
Per-platform sandboxing techniques:
a) Microsoft Windows
   i) I'm not sure if there is more we can do here.
b) Mac OS X
   i) The Seatbelt profile[22] can restrict access to all other Firefox profiles
   ii) The Seatbelt profile can only allow access for bundled libraries
   iii) We may exclude entitlements for most system services
c) GNU/Linux
   i) We can create new IPC, mount, and uts namespaces (a sketch follows below)
   ii) We can provide a clean environment
   iii) The new mount namespace contains only the bundled libraries required for running
   iv) D-Bus/I-Bus access would be nice, if we can do it safely.
d) Android
   i) All apps are isolated by default, so there should never be shared state.
   ii) Ensure we have the smallest set of intent filters we need
   iii) Ensure we have the smallest set of exported components we need
   iv) Ensure we have the smallest set of broadcast receivers we need
   v) Ensure we only use Broadcast intents where it is necessary
   vi) Is there something else we should investigate here?
Why is there so much difference between the other OSes and Windows?
Only because Windows is not my strength and I had trouble finding information reading through their documentation. It was a general statement of cluelessness.
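To make the namespace item in (c) above concrete, a minimal sketch of entering fresh mount, IPC, and uts namespaces with unshare(2). This assumes CAP_SYS_ADMIN, or that an unprivileged user namespace was created first via CLONE_NEWUSER:

    #define _GNU_SOURCE
    #include <sched.h>
    #include <stdio.h>

    int main(void) {
        /* Detach from the host's mount, IPC, and uts namespaces.
           Mounts created after this are invisible outside. */
        if (unshare(CLONE_NEWNS | CLONE_NEWIPC | CLONE_NEWUTS) != 0) {
            perror("unshare");
            return 1;
        }
        /* A sandbox helper would now bind-mount only the bundled
           libraries, set a generic hostname, and exec the browser. */
        return 0;
    }

This is essentially what bubblewrap does on our behalf when it is available.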
The goal for State Separation, as I understand it, is to ensure that no existing browser settings/profiles/plugins are used in Tor Browser. I think there would be two reasons this could happen:
a) The browser itself has a bug that we should fix, where it loads some existing data
b) Some third party utility has installed something into Tor Browser
(b) seems generally unmitigatable. For (a) the primary mechanism to prevent this is "don't have browser bugs". The sandbox's role is "If there is a browser bug, block it from trying to read and use that data." So anything we can do in any sandbox to restrict access to existing user data or system directories would support this goal.
Right. If there are more steps we can take for achieving this, then we should enumerate them and consider them. My suggestions are not authoritative, they're simply what I found when I was researching the topic.
- Disk Avoidance
In general, I'd like #7449 mitigated.
[....]
What about restricting write access to temp directories? That seems like the quickest and most compatible option (although it wouldn't catch every possible occurrence of this issue.)
In general, similar to State Separation, I think there are two categories of issues here:
a) Things the browser does that are browser bugs
b) Things the OS does independent of the application
A sandbox can provide a safety net for things in (a).
The sandbox should also mitigate against (b) when it is possible. If the OS doesn't allow this then it is an OS limitation and there isn't much we can do, but some OSes do provide some mechanisms for this, and we should take advantage of all of them.
- Application Data Isolation
I'll claim the per-platform sandboxing techniques are the same as Disk Avoidance.
I think this one is even easier, since one can simply prevent write access to any directory except the containing one. This would have to come at the cost of additional engineering to change how the File Save dialogs work though - or you'll have a lot of confused and upset users.
And yet, we still want to maintain the smallest difference between Tor Browser and mozilla-esr. sigh.
- Secure Updating
Nearly the same sandboxing techniques can be used on all platforms. A launcher-component (maybe not the launcher, but a reduced-privileged process, not the browser) downloads the update. Next, on download completion, a launcher-component verifies the download and installs it (I can't justify verifying the download in an unprivileged, sandboxed process - I'd like to do it, but I don't see a benefit).
a) Microsoft Windows
   i) Layer a read-only filesystem on top of the install dir, preventing persistent modifications
b) Mac OS X
   i) Seatbelt profile prevents write access to app binaries
c) GNU/Linux
   i) Mount namespace provides read-only access to the install dir
   ii) Mount namespace does not include launcher binaries
d) Android
   i) If the app was installed using an app store, then that store is responsible for updating
   ii) If the app was installed by the user (side-loaded), then we can isolate downloading into a separate process, and verification can be handled in an isolated process. Finally, the Android Package Manager controls installing the update.
This one is at odds with (4) which says "You're not allowed to write anything outside your install directory." We need to be more explicit about what parts of the install directory one is allowed to write to.
Right. I think (4) is no longer true and the design document should be changed so it reflects this. Only the Update mechanism should have the privileges needed for modifying the contents of the install directory. This means the modifiable profile directory must be located somewhere outside the install dir.
Web Extensions come into play here. And themes. If we want to allow them; we need to carve out exceptions for them. And ideally we would use the sandbox to enforce an additional safety net against system add-ons if that's possible (e.g. they're not installed into the same place as web extensions.)
With the way Tor Browser currently installs extensions I think these two goals can co-exist, right?
- Usable
Unfortunately, despite the above suggestions, this sandboxing model fails if users find it unusable. In particular, with temporary file systems, we should provide a method for safely extracting a file from the temporary location and copying it to a permanent location using the launcher process.
I would argue that this extraction should be automatic. Anything else is unusable. Users are used to saving downloaded files wherever on their system they want; adding another thing on top of that would be too painful.
This seems dangerous. We'll need to think carefully about this.
In addition, exposing operating system functionality which may be fingerprintable and/or another attack vector should be optional where it unbreaks the web (such as providing pulseaudio on Linux[26]). As usual, we must be careful about giving users too many customizable switches.
Yea; this one is tough.
- Cross-Origin Fingerprinting Unlinkability
I have less experience here on non-Linux OSes (and Sandboxed Tor Browser is already a good base on Linux). Which other components should we consider spoofing (if possible) or going out of our way to disallow access to (if not possible using the above means)?
So the Browser already brokers access to features on the Web Platform and can be thought of (and really is) a 'sandbox' for webpages. You're only fingerprintable based on what the Web Platform provides. If it provides something fingerprintable; we need to either fix that fingerprint or block access to it at that layer.
I would argue that the role sandboxing should provide here is fingerprinting protection in the face of code execution in the content process. Assume an attacker who has a goal: identify what person is using Tor Browser. Their actions are going to be goal-oriented. A website cannot get your MAC address. But if a website exploits a content process and is evaluating the best cost/benefit tradeoff for the next step in their exploit chain: the lowest cost is going to be 'zero' if the content process does not restrict access to an OS/computer feature that would identify the user.
Right, exactly. I think this is one property we want in a sandbox.
So the goal of sandboxing in this area would be to restrict access to any hardware/machine/OS identifiers like OS serial number, MAC address, device ids, serial numbers, etc. After that (the 'cookies' of device identifiers if you will), the goal would be to restrict access to machine-specific features that create a unique fingerprint: like your GPU (which I illustrate because it can render slightly unique canvas data) or your audio system (which I illustrate because it can apparently generate slightly unique web audio data.)
I don't know how to do this on all the platforms, so this is something that we'll need documented. If Mozilla already has this information and/or if this is already available in Firefox, then that's a good first step.
Thanks for this detailed response.
- Matt
On 6 July 2018 at 16:05, Yawning Angel yawning@schwanenlied.me wrote:
The number of upstream (Firefox) changes that were needed to get the Linux sandbox to work was exactly 0. There was one fix I backported from a more current firefox release, and two upstream firefox bugs that I worked around (all without altering the firefox binary at all).
The design and code survived more or less intact from 7.0.x to at least the entirety of the 7.5 stable series (I don't run alpha, I assume there's some changes required for 8.0, but the code's deprecated and I can't be bothered to check. It would have survived if the time and motivation was available).
It really seems to me on the outside that Linux is an exception to Mac/Windows (and maybe Android) - it's got rich support for containerization and restrictions and by and large sandboxed-tor-browser was able to live very nicely within them.
Spelled out:
- The FF Parent Process talks to and controls the Content Processes
(using existing IPC mechanisms) and maybe/probably interfaces with the Networking process and other Helper Processes in unknown future ways. The content processes probably talk to the Network process directly, they might also talk to the other helper processes.
- The Networking process talks to Tor using SOCKS with (probably) a
domain socket or named pipe
- Tor Control requests are sent from the Parent Process to the broker
which filters them and then passes them to Tor over the control port.
- The broker is most likely the least sandboxed process and may
provide additional functionality to the parent process; for example perhaps it passes a writable file handle in a particular directory so the user can save a download.
How do you envision updates to work in this model? Having the sandbox be externalized and a separate component makes it marginally more resilient to the updater/updates being malicious (though I would also agree that it merely shifts the risks onto the sandbox update mechanism).
Probably: the parent process checks to see if an update file is present upon restart; if there is, it validates and applies the update. After applying the update (or if no update is present) it drops privileges such that it is unable to write/replace any of the existing files.
An attacker who compromises the parent process can download and place a malicious update file in the directory, but doesn't have permission to apply it. They can only compromise the update install mechanism of the running process (not the binaries on disk). Upon process restart you're running an uncompromised process that will reject the malicious update.
It is also not clear to me how to do things like "peek at the executable's ELF header to only bind mount the minimum number of shared libraries required for the executable to run" from within the executable itself.
Yea; probably not. Unless we do wind up with firefox.exe launching another firefox.exe that acts as the parent process. Which is underway for Windows; but not Linux.
Anyway, as far as I can tell, the differences in what you're suggesting vs the existing/proposed architecture boils down to "how much of firefox should be trusted?". To this day, I remain in the "as little as possible" camp, but "nie mój cyrk, nie moje małpy".
Pretty much. I definitely think a sandboxed-tor-browser type design is better - much better. It's just way, way more work on Mac/Windows and way more divergence from Firefox's current and future plans.
----------------------
On 6 July 2018 at 18:30, Matthew Finkel matthew.finkel@gmail.com wrote:
When one is able to achieve High Impact goals from the Content Process
- it seems to me that engineering effort should be focused on closing
_those_ holes first, before trying to build a solution for the parent process. (I'm not saying we shouldn't plan for the parent process though!)
Having Mozilla's help identifying what is needed, and where we should start, in this area will be extremely helpful.
We could definitely do a brainstorming session with the Mozilla sandboxing folks to open a bunch of bugs on this.
There's https://www.bromium.com/ which operates in this space, and despite having a bunch of slick marketing and buzzwords and stuff actually has a really powerful technology core.
That website is...hard to read. But yes, ideally, that is where I think private browsing mode should go in the future, where the browser is contained by a cross-platform VM and we have a minimal trusted computing base with a limited attack surface.
That would be ideal...
In this way, the browser only needs one set of sandboxing techniques. All platform-specific mitigations and restrictions are in an abstraction layer at the VM-Kernel interface.
That seems to assume a Docker-like approach where you've got a cross-platform virtualization engine...
I think there will always be a lot of platform-specific sandboxing code present, in both the browser and in any container-solution that gets devised. Unless you have the container solution run the browser on a fixed OS regardless of the host OS, but that seems problematic for a lot of reasons.
I also think it's likely that if Firefox goes to a container-based approach, it's going to integrate with each system differently; not use a cross-platform solution like Docker. You're just going to get better performance and integration out of something from the native OS. This is where Edge is going: https://docs.microsoft.com/en-us/windows/security/threat-protection/windows-...
The Chromium/Firefox sandbox on Windows is composed of the interaction of SetProcessMitigationOptions (of which there are many options), Integrity Level (both an 'Initial' and a 'Delayed') as well as Job Level, Access Token Level, Alternate Desktop, and Alternate Windows Station.
I'm hopeful we can use more of these than we do currently.
We can; and today* if we accept certain breakage.
* Today really meaning after weeks of testing...
Also; have you experimented with/confirmed the restriction mentioned at All Hands; that a process with a Seatbelt Policy cannot launch a process with a more restrictive seatbelt policy? (I think mcs ran into this with the Mac Sandbox policy that used to ship with Tor Browser Alpha.) This is a significant problem to any Apple-based sandboxing policy, and frankly one I think someone should make a semi-organized petition from browser makers to Apple to fix.
Yes - I wonder if the sandbox_init() call failed because it was denied by our initial sb policy. But, at the same time, I wonder if after adding a first-layer of sandboxing that (somehow) allows calling sandbox_init() again, does this result in further restricting the current sandbox policy, or does this overwrite the original policy.
The man page (as available online) does not describe this behavior beyond saying that calling sandbox_init() places the current process into a sandbox(7). And that man page only says "New processes inherit the sandbox of their parent."
https://www.manpagez.com/man/3/sandbox_init/ https://www.manpagez.com/man/7/sandbox/
Complaining about this design is most likely a dead-end, however - maybe that shouldn't stop us from making some noise, though.
"The sandbox_init() and sandbox_free_error() functions are DEPRECATED. Developers who wish to sandbox an app should instead adopt the App Sand- box feature described in the App Sandbox Design Guide."
I'm told by our Mac person (Alex) that seatbelt is not going to get removed. "It's been deprecated for a while, and (a) all the platform binaries are sandboxed using it, (b) every web browser (including safari) uses it." So that's something.
Is there something we can use for mitigating ROP gadgets within the sandbox?
I'm not sure where this goal came from, but it seems very independent of sandboxing. But I think the general answer is: No, not until one has or can emulate Execute-Only memory, which might be possible if one runs their own hypervisor (I don't think Hyper-V can do it but maybe?)
This is related, but I see it didn't logically flow from the surrounding thoughts. While I was thinking about using DLL injection for WinSocks, I thought about how that wouldn't prevent an attacker using ROP for bypassing it.
I hadn't considered that; but I assume it is not possible or Chromium's sandbox from the get-go would be insecure.
As with all things sandboxing, we need to be sure there are not IPC mechanisms to a privileged process that bypass the sandbox restrictions. On the networking side, there is https://searchfox.org/mozilla-central/source/dom/network/PUDPSocket.ipdl
- which i think is used by WebRTC.
Thanks, good point. This is something we'll need to address before we enable WebRTC, anyway. On Android this code should be run by the background GeckoService, so as long as it is sandboxed that shouldn't bypass the proxy - but this needs verification.
This is probably an issue today absent WebRTC; a compromised content process can probably invoke these IPC methods. But this would only matter on Mac, since that's the only platform that prevents network connections from the content process.
So the goal of sandboxing in this area would be to restrict access to any hardware/machine/OS identifiers like OS serial number, MAC address, device ids, serial numbers, etc. After that (the 'cookies' of device identifiers if you will), the goal would be to restrict access to machine-specific features that create a unique fingerprint: like your GPU (which I illustrate because it can render slightly unique canvas data) or your audio system (which I illustrate because it can apparently generate slightly unique web audio data.)
I don't know how to do this on all the platforms, so this is something that we'll need documented. If Mozilla already has this information and/or if this is already available in Firefox, then that's a good first step.
Same on the brainstorming; but it'd be good to come up with a list of identifiers we're concerned about (and ideally some examples of how to get them) so Mozilla folks can think about what the sandbox doesn't/can't prevent.
-tom
On 07/03/2018 06:03 PM, Matthew Finkel wrote:
On GNU/Linux, we can use the namespacing and secure computing (Secure Computing) facilities in the kernel exposed to userspace. Sandboxed Tor Browser on Linux already shows how these can be combined and form a sandbox. In particular, we can use bubblewrap[14] as a setuid sandboxing helper (if user namespace is not enabled), if it is available. In addition, we can reduce the syscall surface area with seccomp-BPF. CGroups provide a way for limiting the resources available within the sandbox. We may also want to manually proxy/filter other system functionality (X11).
On a side note, on Linux you could also use Flatpak if you:
 * Make 2 packages, one for the browser, one for tor + PTs.
 * Fix the browser to treat tor-button, NoScript, and HTTPSE along with all the prefs as system components. (As in, Tor Browser should be able to be shipped without a default profile directory, and Do The Right Thing.)
Regards,