On 3 July 2018 at 18:03, Matthew Finkel matthew.finkel@gmail.com wrote:
Hi All,
This is the beginning of a conversation about creating a plan for moving towards sandboxing Tor Browser on every platform.
Thanks for this Matt! I started typing and kept typing and just kept typing and well... sorry.
Over the last few years, a Sandboxed Tor Browser existed only on Linux[0] (created and maintained by Yawning Angel). However, Tor Browser aims to provide a private browser on all supported platforms (Microsoft Windows, Apple Mac OS X, GNU/Linux, and Android (AOSP))[1][2], which means we must provide a sandboxed run-time environment on all of them.
For the benefit of others reading tbb-dev, I want to explain the difference between the 'Content Process Sandbox' and the 'Parent Process Sandbox'. This is intended to answer confusion like "Chrome is sandboxed.... isn't Firefox sandboxed? I thought I saw some stuff in the release notes about it?"
Firefox (like Chrome and Edge) has a parent process that executes multiple child or 'content' processes. (It also executes other helper processes, like a Web Extensions process and, on some platforms, a GPU process.) The content process is responsible for parsing HTML, rendering it, executing javascript, running the javascript JIT, etc. The parent process performs the actual TLS handshakes and HTTP conversations, renders the browser UI (and composites the page contents, I think, but I'm not sure on that), and some other things.
Firefox has been working on a restrictive sandbox for the Content Process for the past few years. We've got a Mac sandbox that can only be tightened a small amount more. Our Windows sandbox is pretty good but is missing one large attack surface reduction called 'win32k.sys lockdown' that is blocked on ~1 year of clock time worth of graphics refactoring. I'm unsure of the relative strength of our Linux sandbox, but I think it's fairly good if you exclude stuff like "The X11 protocol lets you do pretty crazy things and we need to lock that down".
Firefox has not done any work on sandboxing the parent process. All the discussions I have with folks at Tor are about protecting the parent process for additional defense in depth. Thus we're assuming that a) An attacker has exploited the content process but cannot achieve their goal from that position and b) decides to attack the parent process.
Neither of these assumptions is necessarily true, which means we need to address the opposite of those assumptions _too_; and we should prioritize that work in relation to the larger architectural goal of "How do we sandbox the parent process".
To the point of (b): This document focuses on sandboxing as supporting the goals of the design doc, which I think is a fine approach. But we should be aware of other approaches to evaluating sandboxing, and one is from the capability-reduction standpoint. A lot of sandbox escapes today occur by getting code execution in the content process and then exploiting the kernel. So identifying the features on each platform that can reduce kernel attack surface, and then seeing what prevents us from using those features (starting in the content process but eventually including the parent process), would be a worthwhile exercise. Similarly to focusing on the kernel, we can look at what other permissions are needed (by the content process and eventually the parent) and determine which ones are the scariest, and work to either remove them entirely or move them to a separate process.
To the point of (a), we should consider what goals an attacker has when exploiting Tor Browser. Enumerating attacks is likely to miss some, but not considering attacks means we're trying to defend ourselves from 'everything' with no priority. I think I would classify attacks into High Impact and Opportunistic Impact, and focus specifically on "code execution achieved through an exploit" (and not things like 'exploiting' a fingerprinting vector we have not solved in the web platform.) We may miss some attacks; but the ones we can list today are, I think, accurate in their impact.
High Impact goals are ones that do one or more of the following: achieve a proxy bypass; temporarily corrupt tor's configuration, leading to a direct-to-relay connection (by changing the user's bridge/guard/DirAuths); permanently corrupt the user's machine (by installing malware or affecting tor's configuration); or (less bad without a proxy bypass, but still bad) retrieve an identifier that uniquely and persistently identifies a user (MAC address, serial) and exfiltrate it.
Opportunistic goals are ones that compromise the browser process as it currently runs (without persistence) and rely on the user performing an action in the browser that discloses information. For example, reading other websites' cookies is only useful if the user is active on other sites in a way that identifies them.
When one is able to achieve High Impact goals from the Content Process - it seems to me that engineering effort should be focused on closing _those_ holes first, before trying to build a solution for the parent process. (I'm not saying we shouldn't plan for the parent process though!)
I could talk more about what actions an attacker can perform if they exploit the Content Process (both currently and post-Fission https://wiki.mozilla.org/Project_Fission ) and how they can attack the Parent Process; I'm going to skip that for now, but would be happy to go into it if it seems helpful.
Unfortunately, each operating system provides a unique set of sandboxing techniques and capabilities, so we must work with the facilities we are given. In some cases, we may need to be creative about how we achieve our goals.
I do not have all the answers, and there are some open questions below.
On Windows, we have the Windows integrity mechanism[3], Windows Containers[4], SetProcessMitigationPolicy[5], App Container[6], and maybe some others. Some of this functionality is already used by Firefox.
For Windows Container I found the following helpful reading: https://docs.microsoft.com/en-us/virtualization/windowscontainers/about/
In particular, they require Windows Server or Windows 10 Professional or Enterprise. Containers are awesome, and in particular Hyper-V based containers are essentially Virtual Machines with their own kernel and therefore we could do really powerful things with them. But I think we need to wait a few years until it becomes more widely available.
There's https://www.bromium.com/ which operates in this space, and despite having a bunch of slick marketing and buzzwords and stuff actually has a really powerful technology core.
App Container is primarily for Windows Store apps, but can also be used for 'Legacy Apps' per https://docs.microsoft.com/en-us/windows/desktop/secauthz/appcontainer-for-l... . I can't quite tell how useful it is.
The Chromium/Firefox sandbox on Windows is composed of the interaction of SetProcessMitigationOptions (of which there are many options), Integrity Level (both an 'Initial' and a 'Delayed') as well as Job Level, Access Token Level, Alternate Desktop, and Alternate Windows Station.
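To make that concrete, here is a minimal sketch of how a launcher might apply creation-time mitigation policies so the child never runs without them. This is not Firefox's actual launch code; the specific flags are just examples (win32k lockdown, for instance, is only viable for content processes today), and error handling is mostly omitted:

    #include <windows.h>

    /* Sketch: launch a child with process-creation-time mitigations
       already applied, so there is no window where it runs
       unmitigated. The flags chosen here are illustrative. */
    BOOL LaunchMitigated(wchar_t *cmdline)
    {
        SIZE_T size = 0;
        STARTUPINFOEXW si = {0};
        PROCESS_INFORMATION pi = {0};
        DWORD64 policy =
            PROCESS_CREATION_MITIGATION_POLICY_WIN32K_SYSTEM_CALL_DISABLE_ALWAYS_ON |
            PROCESS_CREATION_MITIGATION_POLICY_BLOCK_NON_MICROSOFT_BINARIES_ALWAYS_ON;

        si.StartupInfo.cb = sizeof(si);
        /* First call computes the buffer size; second initializes it. */
        InitializeProcThreadAttributeList(NULL, 1, 0, &size);
        si.lpAttributeList = HeapAlloc(GetProcessHeap(), 0, size);
        InitializeProcThreadAttributeList(si.lpAttributeList, 1, 0, &size);
        UpdateProcThreadAttribute(si.lpAttributeList, 0,
                                  PROC_THREAD_ATTRIBUTE_MITIGATION_POLICY,
                                  &policy, sizeof(policy), NULL, NULL);

        BOOL ok = CreateProcessW(NULL, cmdline, NULL, NULL, FALSE,
                                 EXTENDED_STARTUPINFO_PRESENT,
                                 NULL, NULL, &si.StartupInfo, &pi);
        DeleteProcThreadAttributeList(si.lpAttributeList);
        HeapFree(GetProcessHeap(), 0, si.lpAttributeList);
        return ok;
    }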
On Mac OS X, there are two sandboxing techniques available. The first is Seatbelt[7]. Apple deprecated it in favor of code signing entitlements[8]. Unfortunately, the code signing entitlements are not as fine-grained as those Seatbelt provides, but we can enable different entitlements per "target"[9]. I don't know if this will be difficult for us. Therefore, we should utilize the code-signing restrictions where they are appropriate, but we should follow Safari[10], Chromium[11], and Firefox[12][13] by applying restrictive Seatbelt policies where applicable.
Can you apply both Seatbelt and Code Signing Entitlements?
Also; have you experimented with/confirmed the restriction mentioned at All Hands; that a process with a Seatbelt Policy cannot launch a process with a more restrictive seatbelt policy? (I think mcs ran into this with the Mac Sandbox policy that used to ship with Tor Browser Alpha.) This is a significant problem to any Apple-based sandboxing policy, and frankly one I think someone should make a semi-organized petition from browser makers to Apple to fix.
Additionally; OSX 10.14 Mojave provides new sandboxing features; including ones that feature-match Microsoft's Arbitrary Code Guard and Code Integrity Guard. I don't know much about these; but they are powerful and in the future, Mozilla will probably be interested in them. (Future being 1-2+ years from now.)
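For reference, the documented (though deprecated) Seatbelt C API is tiny. A sketch applying one of Apple's built-in named profiles follows; note that Safari/Chromium/Firefox actually use custom SBPL profile strings through fancier, mostly-private entry points, so this is only the canned-profile flavor:

    #include <sandbox.h>
    #include <stdio.h>

    /* Sketch: apply Apple's built-in 'no network' Seatbelt profile
       to the current process. Real browser policies are custom SBPL
       strings, not the canned profiles. */
    int ApplySeatbelt(void)
    {
        char *err = NULL;
        if (sandbox_init(kSBXProfileNoNetwork, SANDBOX_NAMED, &err) != 0) {
            fprintf(stderr, "sandbox_init failed: %s\n", err);
            sandbox_free_error(err);
            return -1;
        }
        return 0; /* from here on, the kernel denies us TCP/IP */
    }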
On GNU/Linux, we can use the namespacing and secure computing (seccomp) facilities the kernel exposes to userspace. Sandboxed Tor Browser on Linux already shows how these can be combined to form a sandbox. In particular, we can use bubblewrap[14] as a setuid sandboxing helper (if user namespaces are not enabled), if it is available. In addition, we can reduce the syscall surface area with Seccomp-BPF. CGroups provide a way of limiting the resources available within the sandbox. We may also want to manually proxy/filter other system functionality (X11).
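To give a flavor of the Seccomp-BPF piece, here is a sketch using libseccomp rather than hand-rolled BPF. The policy shown (fail socket(2) for anything but AF_UNIX) is just an illustration of the proxy-obedience idea discussed later, not a real policy; a production filter would be a long allowlist, not a one-rule denylist:

    #include <errno.h>
    #include <seccomp.h>      /* libseccomp; link with -lseccomp */
    #include <sys/socket.h>

    /* Sketch: allow everything except creating non-AF_UNIX sockets,
       which fail with EACCES. */
    int InstallSocketFilter(void)
    {
        scmp_filter_ctx ctx = seccomp_init(SCMP_ACT_ALLOW);
        if (ctx == NULL)
            return -1;
        if (seccomp_rule_add(ctx, SCMP_ACT_ERRNO(EACCES), SCMP_SYS(socket),
                             1, SCMP_A0(SCMP_CMP_NE, AF_UNIX)) != 0)
            goto fail;
        if (seccomp_load(ctx) != 0)
            goto fail;
        seccomp_release(ctx);
        return 0;
    fail:
        seccomp_release(ctx);
        return -1;
    }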
Last, but not least, on Android, we begin with a fairly strict sandbox provided by its permissioning model[15], distinct per-app users, and SELinux policies. Because every app is run using a distinct user, and each has its own storage directories, we get Linux's DAC on the file system for "free". In addition, we can use some of the same techniques available on GNU/Linux on Android, namely the seccomp-bpf and namespaces, if they are available. Android provides privilege and permission isolation within an App by using Services. With some refactoring, we can isolate some parts of Firefox into isolated permissionless services[15].
Something else to consider is how each of these sandboxing technologies (on each platform) are enabled. Some are applied at runtime by the process, the process starts more privileged and then drops privileges. Others are applied by the OS before execution, and the process never has the elevated privileges.
If a process needs to do something that the sandbox should prevent - like creating a socket connection to another process - there are two ways (I think only two) to achieve that goal. The first way is that a process that _has_ permission gives a resource (like a socket connection) to the restricted process. The second way is that the restricted process starts with less restrictive permissions, does the privileged action (like creating a socket), and then applies the sandbox (in whole or an additional component of it) to itself - ideally before processing any user input that could be malicious.
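The first way is the classic broker pattern; on Unix it is a few lines of SCM_RIGHTS plumbing. A sketch (error handling omitted) of a broker handing an already-connected socket to a restricted child over a pre-established socketpair:

    #include <string.h>
    #include <sys/socket.h>
    #include <sys/uio.h>

    /* Sketch: broker side of fd-passing. 'channel' is one end of a
       socketpair() shared with the sandboxed child; 'fd' is a socket
       the broker opened on the child's behalf. */
    int SendFd(int channel, int fd)
    {
        char byte = 'F';
        struct iovec iov = { .iov_base = &byte, .iov_len = 1 };
        char cbuf[CMSG_SPACE(sizeof(int))];
        struct msghdr msg = {0};
        struct cmsghdr *cmsg;

        msg.msg_iov = &iov;
        msg.msg_iovlen = 1;
        msg.msg_control = cbuf;
        msg.msg_controllen = sizeof(cbuf);
        cmsg = CMSG_FIRSTHDR(&msg);
        cmsg->cmsg_level = SOL_SOCKET;
        cmsg->cmsg_type = SCM_RIGHTS;
        cmsg->cmsg_len = CMSG_LEN(sizeof(int));
        memcpy(CMSG_DATA(cmsg), &fd, sizeof(int));

        return sendmsg(channel, &msg, 0) == 1 ? 0 : -1;
    }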
I think there's a lot of focus in this document on applying sandbox policies by the OS before/as the process starts, and not on following the 'drop privileges' model. But dropping privileges is much more flexible. I think some confusion with this model is the notion that the process is 'sandboxing itself' - and of course one can't trust a process that is simultaneously compromised and attempting to perform security operations, so that model must be broken - but this notion is incorrect. The process - before it is able to be compromised by attacker input, before it processes anything from the web - instructs the OS to apply the sandbox to itself, and cannot later opt out of that restriction. It's true that something could go wrong during the sandbox application and result in the process staying elevated - but I think that is easy to code defensively for, and even test at runtime.
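On Linux, the 'cannot later opt out' property is exactly what PR_SET_NO_NEW_PRIVS gives you, and the defensive runtime test is cheap. A sketch of the ordering described above (do_privileged_setup is a hypothetical placeholder):

    #include <stdlib.h>
    #include <sys/prctl.h>

    /* Sketch of the drop-privileges ordering: privileged setup,
       then an irrevocable lockdown, then verify the lock took hold -
       all before touching any web content. */
    void SetupThenDropPrivileges(void)
    {
        /* 1. Privileged setup, e.g. connecting the SOCKS socket. */
        /* do_privileged_setup();  (hypothetical helper) */

        /* 2. One-way door: the kernel refuses to ever clear this bit. */
        if (prctl(PR_SET_NO_NEW_PRIVS, 1, 0, 0, 0) != 0)
            abort();

        /* 3. Load the seccomp filter (e.g. the libseccomp sketch above). */

        /* 4. Defensive runtime check: confirm the flag actually stuck. */
        if (prctl(PR_GET_NO_NEW_PRIVS, 0, 0, 0, 0) != 1)
            abort();
    }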
I'll begin by describing the goals, as I see them, for sandboxing Tor Browser. Hopefully, this will help us evaluate the different available techniques. These goals are derived from the Design document[16] and are the means for achieving Tor Browser's end-goal.
In particular, the sandboxing techniques preserve the Security Requirements of a private browser when the browser, itself, fails at maintaining those criteria. By this I mean the sandbox should be designed such that if the browser process loses any of the Security properties (through a logical bug, exploited vulnerability, etc), the sandbox provides an additional layer of those properties and the user is not in immediate danger. The sandbox may, in some situations, improve the Privacy properties of Tor Browser, for example if a component/device is emulated instead of providing the browser with raw access. We should use these mechanisms when they are available.
- Proxy Obedience
- State Separation
- Disk Avoidance
- Application Data Isolation
- Secure Updating
- Usable
- Cross-Origin Fingerprinting Unlinkability
We'll go through these one-by-one and describe the role of a sandbox and how we can achieve this on each platform.
Yes; but I would phrase things differently. I also want to call out our strengths and weaknesses.
Our goals, as I understand them, are twofold: 1) Improve the sandboxing of the browser as a whole to i) reduce attack surface from vulnerabilities and common attacker positions and ii) act as a safety net for design doc goals 2) Eventually, eliminate Tor Browser as a fork of Firefox. What this means in detail is not fully determined (to me at least, but probably to everyone), but I'll assume a world where we're down to branding changes, pref flips, and system-like addons, but no functionality patches...
Our strengths, as I see them: 1) We are privacy research experts 2) We're willing to compromise on user freedom, web features, and performance for security and privacy 3) We're willing to experiment with more complicated user interfaces 4) Our use case for the browser is (probably) different. We're (probably) not seeing long lived browser sessions and therefore have more frequent restarts. We're (probably) not seeing a lot of logged in sessions, so restarting the browser (especially with an open-tabs-restore feature) is not as painful.
Of course we want our browser to be usable, so we're not going to go crazy on (2) or (3); but the fact that we allow the user to disable JavaScript, and that we have (and expect to keep) a 'security slider', are examples of how far we're willing to go.
Our weaknesses, as I see them: 1) We're tracking ESR for now 2) Un-upstreamed patches are painful for us 3) We are not well suited to undertake large browser architecture projects, especially if it diverges from Firefox 4) We are neither browser architecture experts, nor sandboxing experts. In particular this makes us ill-suited to predict feature breakage or performance degradation from a hypothesized sandbox/architecture change.
In particular, I think this gives considerable weight to aligning ourselves with future Firefox plans. And we can often begin adopting those plans before Firefox does.
Note, this design assumes a launcher-based sandboxed architecture (similar to the Sandboxed Tor Browser on Linux):
                    ------------------
                    |    Launcher    |
                    ------------------
                      |     |     |
           -----------      |      -----------
           |                |                |
           v                v                v
  -----------------    ------------    -----------------
  |    Sandbox    |    | Sandbox  |    |    Sandbox    |
  | ------------- |    | -------- |    | ------------- |
  | |  Firefox  | |    | |Broker| |    | |    Tor    | |
  | ------------- |    | -------- |    | ------------- |
  -----------------    ------------    -----------------
          |                 |                  |
          \----Controller---/                  |
          |                                    |
          \------------------------------------/
                        Proxy IPC
Note, the sandboxed broker process (or processes) moves some functionality from the launcher process into a child process. This process is not as heavily sandboxed as the Firefox and Tor processes, but, for example, it would not need networking. This process would handle sending NEWNYM, providing a circuit display, and changing bridge configuration (general controller functions). In addition, it would handle copying files into and out of the Firefox sandbox, as an example.
Unfortunately, there is no existing cross-platform solution for this design. We can take bits and pieces from other solutions, but creating this will require engineering effort.
As a nit, this diagram shows all the processes as being 'enveloped' by a sandbox; but that illustration is misleading if one is using, e.g., Seatbelt on Mac. It would be more accurate to name the process Sandboxed-Tor or something like that, rather than depicting some additional 'thing' (which appears to be a process) that 'is' the sandbox. This illustration makes it look more like you're running stuff in a container. (Which may actually be the case for some platforms, though.)
Also: I think the IPC diagrams of Firefox <-> Tor are accurate; but we leave out the Content Processes in this illustration. If in the future Mozilla or Tor separates browser functionality into a helper process to reduce privileges in the Content or Parent process, it becomes possible that that helper process (a child of 'Firefox') will communicate with the Broker (and Tor). For example if/when Necko leaves the parent process and all network operations happen in a helper process off Firefox.
Finally and most substantially, I want to think about the assumption you're making about a launcher process. I agree entirely that we want our parent-process-is-sandboxed end-goal to look logically like your diagram during operation. Firefox can't talk to the network or Tor's control port: it talks to a broker that talks to Tor's control port and it talks to Tor for SOCKS proxying using a Named Pipe or Domain Socket. Certain operations are also relegated to the Broker and Firefox can't perform them.
But getting to that state... Using a Launcher process is one solution; but it's not the only solution. So I think we should keep that in mind and not assume a launcher process; but instead plan for a logical run-time design of:
          +-------------------+
  +-+-+---| Sandboxed-Firefox |------------------------+
  | | |   +-------------------+                        |
  | | |          |                                     |
  | | |   Primary (Existing)                           |
  | | |   Browser IPC                                  |
  | | |          |                                     |
  | | |   +-------------------+                        |
  | | +---| Sandboxed         |--+                     |
  | |     | Content Processes |  |                     |
  | |     +-------------------+  |                     |
  | |         |                  |                     |
  | |      TBD IPC            TBD IPC            Tor Control
  | |         |                  |               Requests
  | |     +-------------------+  |                     |
  | +-----| Other Sandboxed   |--+                     |
  |       | Helper Processes  |                        |
  |       +-------------------+                        |
  |                                                    |
  |       +----------------------+                     |
  +-------| Sandboxed            |                     |
          | Networking Process   |                     |
          +----------------------+                     |
                   |                                   |
            SOCKS over Domain            +--------------------+
            Socket/Named Pipe            | Sandboxed Broker   |
                   |                     +--------------------+
                   |                                   |
                   |                      Filtered Control Requests
                   |                                   |
                   |                     +--------------------+
                   +---------------------| Sandboxed Tor      |
                                         +--------------------+
Spelled out:
- The FF Parent Process talks to and controls the Content Processes (using existing IPC mechanisms) and maybe/probably interfaces with the Networking process and other Helper Processes in unknown future ways. The content processes probably talk to the Network process directly; they might also talk to the other helper processes.
- The Networking process talks to Tor using SOCKS with (probably) a domain socket or named pipe.
- Tor Control requests are sent from the Parent Process to the broker, which filters them and then passes them to Tor over the control port.
- The broker is most likely the least sandboxed process and may provide additional functionality to the parent process; for example, perhaps it passes a writable file handle in a particular directory so the user can save a download.
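For the 'filters them' step, I imagine something as dumb as an allowlist. A toy sketch (the command strings are illustrative, not a proposed policy):

    #include <stdbool.h>
    #include <string.h>

    /* Toy sketch of broker-side control-port filtering: forward a
       request only if it starts with an allowlisted command. */
    static const char *kAllowed[] = {
        "SIGNAL NEWNYM",
        "GETINFO circuit-status",
        "GETCONF Bridge",
    };

    bool BrokerPermits(const char *request)
    {
        for (size_t i = 0; i < sizeof(kAllowed) / sizeof(kAllowed[0]); i++) {
            if (strncmp(request, kAllowed[i], strlen(kAllowed[i])) == 0)
                return true;
        }
        return false; /* everything else is dropped, not forwarded */
    }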
I think we should plan on creating a proof-of-concept toy program on all platforms that mimics the process execution and mitigations. (I once started one of those here: https://github.com/tomrittervg/sandboxsandbox/blob/master/SandboxSandbox/San... ) This will allow us to experiment with/assert with certainty what operations are allowed and disallowed, and allow easier code review and experimentation by individuals, especially those looking at a platform outside their area of experience. It's also lower cost than trying to implement it in FF from the get-go.
- Proxy Obedience
The last line of defense against a proxy-bypass is a mechanism for dropping all outgoing IP packets from the browser that are:
- not TCP
- or TCP but not destined for the Tor proxy port (SOCKS or HTTP)
- or all packets (including TCP) if the proxy listener is not a TCP port (Unix domain socket, named pipe, etc)
Dropping implies the packets are sent from the process and then dropped by the OS. This is not always the case: on Windows (with USER_RESTRICTED) and Mac at least, the process is just outright prevented from making successful network calls.
We should also consider guarding the creation of all sockets other than Unix domain sockets and named pipes with a pref. I believe sandboxing should be optional, but enabled by default (at least for the first few versions), so excluding that code at compile-time is not an option. However, a pref-guard only stops unintended proxy bypass through normal control flow. This only slightly increases the difficulty of some exploit methods.
I agree that a pref-based solution is only actually useful for testing purposes and doesn't provide security.
Per-platform sandboxing techniques:
a) Microsoft Windows
   i) Windows 8+
      If the user can elevate for administrator permissions, then the launcher can:
      - install a network filter using the Windows Filtering Platform[17]. This can deny any connection from any process (based on the process's fully-qualified file name)[18]. This adds some risk because if the launcher process exits unexpectedly for whatever reason, the firewall rule may remain in place.
   ii) Windows 10
      We can create a new network namespace[19] using the (undocumented) HNSCall procedure[20]. This is very appealing.
I think this is only possible if you're using a version of Windows which supports containers, which most of our users won't have.
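For what it's worth, the WFP approach in (a)(i) can avoid the stale-firewall-rule risk by using a dynamic session, where the filter dies with the session. A sketch (assumes Administrator; the IPv6 layer and error handling are omitted):

    #include <windows.h>
    #include <fwpmu.h>
    /* link with fwpuclnt.lib */

    /* Sketch: block all IPv4 connect()s from one executable via WFP.
       The dynamic session means the filter disappears automatically
       if this process dies. */
    DWORD BlockAppIpv4(const wchar_t *exePath)
    {
        HANDLE engine = NULL;
        FWP_BYTE_BLOB *appId = NULL;
        FWPM_SESSION0 session = {0};
        session.flags = FWPM_SESSION_FLAG_DYNAMIC;

        FwpmEngineOpen0(NULL, RPC_C_AUTHN_WINNT, NULL, &session, &engine);
        FwpmGetAppIdFromFileName0(exePath, &appId);

        FWPM_FILTER_CONDITION0 cond = {0};
        cond.fieldKey = FWPM_CONDITION_ALE_APP_ID;
        cond.matchType = FWP_MATCH_EQUAL;
        cond.conditionValue.type = FWP_BYTE_BLOB_TYPE;
        cond.conditionValue.byteBlob = appId;

        FWPM_FILTER0 filter = {0};
        filter.layerKey = FWPM_LAYER_ALE_AUTH_CONNECT_V4;
        filter.displayData.name = L"proxy-bypass block (sketch)";
        filter.action.type = FWP_ACTION_BLOCK;
        filter.numFilterConditions = 1;
        filter.filterCondition = &cond;

        UINT64 filterId = 0;
        DWORD rc = FwpmFilterAdd0(engine, &filter, NULL, &filterId);
        FwpmFreeMemory0((void **)&appId);
        /* Keep 'engine' open for the browser's lifetime; closing it
           tears the filter down. */
        return rc;
    }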
   iii) Windows 7+
      If the platform is Windows 8+ and the user cannot elevate to Administrator, or if the platform is Windows 7:
      - TODO: can/should we use DLL injection of WinSock and manually filter all proxy-bypass network connections?
I would expect we could use the same type of shims the content process currently uses to achieve this goal.
Is there something we can use for mitigating ROP gadgets within the sandbox?
I'm not sure where this goal came from, but it seems very independent of sandboxing. But I think the general answer is: No, not until one has or can emulate Execute-Only memory, which might be possible if one runs their own hypervisor (I don't think Hyper-V can do it but maybe?)
Back to networking: I think you missed the most straightforward one: the USER_RESTRICTED token level which blocks networking.
See also https://bugzilla.mozilla.org/show_bug.cgi?id=1403931 and https://bugzilla.mozilla.org/show_bug.cgi?id=1177594
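For reference, here is roughly what USER_RESTRICTED boils down to. This is a simplified sketch (Chromium's real implementation also disables SIDs and does more): a restricted token whose only restricting SID is RESTRICTED, which makes nearly every access check - including the ones the network stack performs - fail.

    #include <windows.h>

    /* Simplified sketch of a USER_RESTRICTED-style token. A child
       started with this token fails most securable-object access
       checks, which in practice blocks outbound networking. */
    BOOL CreateUserRestrictedToken(HANDLE *out)
    {
        HANDLE base = NULL;
        BYTE sidBuf[SECURITY_MAX_SID_SIZE];
        DWORD sidLen = sizeof(sidBuf);
        SID_AND_ATTRIBUTES restrictSid;

        if (!OpenProcessToken(GetCurrentProcess(), TOKEN_ALL_ACCESS, &base))
            return FALSE;
        if (!CreateWellKnownSid(WinRestrictedCodeSid, NULL, sidBuf, &sidLen))
            return FALSE;

        restrictSid.Sid = (PSID)sidBuf;
        restrictSid.Attributes = 0;
        return CreateRestrictedToken(base, DISABLE_MAX_PRIVILEGE,
                                     0, NULL,   /* SIDs to disable */
                                     0, NULL,   /* privileges to delete */
                                     1, &restrictSid,
                                     out);
    }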
b) Mac OS X
   i) We can restrict network access using the com.apple.security.network.client and com.apple.security.network.server entitlements[21].
   ii) We can further restrict which types of connections the browser is allowed to make. Previously, there was a Seatbelt profile for Tor Browser[22], but it is not usable with newer versions of the browser. We can continue using it to allow only access requests for Unix domain sockets, and deny all other network connections.
Note that the content Sandbox for Mac already enforces networking restrictions.
c) GNU/Linux
   i) On Linux, significant progress was already made in terms of sandboxing Tor Browser. There are two methods we can use:
      - Isolated network namespace via bubblewrap[14]
      - Seccomp-BPF on socket syscalls
   ii) If bubblewrap is not available, we can fall back to using unshare, if it is available and user namespaces are enabled.
   iii) If user namespaces are not available, we ask the user for elevated privileges and manually fork into the new network namespace (worst case).
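A sketch of the unshare fallback in (ii), assuming unprivileged user namespaces are enabled. Note that filesystem-path Unix domain sockets still work across network namespaces, which is how the SOCKS connection to tor survives:

    #define _GNU_SOURCE
    #include <sched.h>
    #include <stdio.h>

    /* Sketch: strand the browser in a fresh, empty network namespace.
       CLONE_NEWUSER grants the capability needed for CLONE_NEWNET
       without root; the new namespace contains only a downed loopback
       device, so IP traffic simply has nowhere to go. */
    int EnterEmptyNetNamespace(void)
    {
        if (unshare(CLONE_NEWUSER | CLONE_NEWNET) != 0) {
            perror("unshare");
            return -1;
        }
        return 0; /* exec the browser from here */
    }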
d) Android
   i) We can use the same (or similar) Seccomp-BPF filter that we use on Linux.
   ii) We can move the Gecko service into an isolated process without any permissions (including networking).
   iii) If all networking goes through Necko, and Necko is in the isolated process, then the remaining code, outside the isolated process, should be proxy-safe - or it is the tor process.
As with all things sandboxing, we need to be sure there are no IPC mechanisms to a privileged process that bypass the sandbox restrictions. On the networking side, there is https://searchfox.org/mozilla-central/source/dom/network/PUDPSocket.ipdl - which I think is used by WebRTC.
- State Separation
Per-platform sandboxing techniques:
a) Microsoft Windows
   i) I'm not sure if there is more we can do here.
b) Mac OS X
   i) The Seatbelt profile[22] can restrict access to all other Firefox profiles.
   ii) The Seatbelt profile can only allow access for bundled libraries.
   iii) We may exclude entitlements for most system services.
c) GNU/Linux
   i) We can create new IPC, mount, and uts namespaces.
   ii) We can provide a clean environment.
   iii) The new mount namespace contains only the bundled libraries required for running.
   iv) D-Bus/I-Bus access would be nice, if we can do it safely.
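A sketch of what (i)-(iii) might look like with raw syscalls; bubblewrap does all of this, and much more carefully, and the paths here are purely illustrative:

    #define _GNU_SOURCE
    #include <sched.h>
    #include <sys/mount.h>
    #include <sys/stat.h>

    /* Sketch: a private mount namespace exposing only the bundle,
       read-only. Assumes we already hold CAP_SYS_ADMIN in our user
       namespace (e.g. after unshare(CLONE_NEWUSER)). */
    int BuildPrivateMounts(void)
    {
        if (unshare(CLONE_NEWNS) != 0)
            return -1;
        /* Keep our mount events from leaking back to the host. */
        if (mount(NULL, "/", NULL, MS_REC | MS_PRIVATE, NULL) != 0)
            return -1;
        mkdir("/tmp/tb-root", 0700);
        if (mount("tmpfs", "/tmp/tb-root", "tmpfs", 0, "size=16m") != 0)
            return -1;
        mkdir("/tmp/tb-root/app", 0700);
        /* Bind the bundle in, then remount it read-only. */
        if (mount("/opt/tor-browser", "/tmp/tb-root/app", NULL, MS_BIND, NULL) != 0)
            return -1;
        if (mount(NULL, "/tmp/tb-root/app", NULL,
                  MS_REMOUNT | MS_BIND | MS_RDONLY, NULL) != 0)
            return -1;
        return 0; /* pivot_root/chroot into /tmp/tb-root would follow */
    }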
d) Android
   i) All apps are isolated by default, so there should never be shared state.
   ii) Ensure we have the smallest set of intent filters we need.
   iii) Ensure we have the smallest set of exported components we need.
   iv) Ensure we have the smallest set of broadcast receivers we need.
   v) Ensure we only use Broadcast intents where necessary.
   vi) Is there something else we should investigate here?
Why is there so much difference between the other OSes and Windows?
The goal for State Separation, as I understand it, is to ensure that no existing browser settings/profiles/plugins are used in Tor Browser. I think there would be two reasons this could happen:
a) The browser itself has a bug that we should fix, where it loads some existing data
b) Some third party utility has installed something into Tor Browser
(b) seems generally unmitigatable. For (a) the primary mechanism to prevent this is "don't have browser bugs". The sandbox's role is "If there is a browser bug, block it from trying to read and use that data." So anything we can do in any sandbox to restrict access to existing user data or system directories would support this goal.
- Disk Avoidance
In general, I'd like #7449 mitigated.
Per-platform sandboxing techniques:
a) Microsoft Windows
   i) Windows 10+
      - On very recent versions of Windows, we can create temporary filesystem layers[23][24]. I'm not sure if these are memory-backed or filesystem-backed.
      - We can create a read-only layer over the installation directory.
      - #18367 may be useful (side-by-side user/app data on Windows).
      - Can we isolate the container from system services?
Again, non-consumer versions of Windows only.
b) Mac OS X
   i) We can use entitlements[25].
   ii) The Seatbelt profile[22] provided additional access restrictions.
   iii) Is there a filesystem layering mechanism available in OS X (like tmpfs in Linux)?
      - Specifically, if we can use this for ~/Library/Caches/TemporaryItems and ~/Downloads
c) GNU/Linux
   i) We can create new IPC, mount, and uts namespaces.
   ii) #18369 may be useful (side-by-side user/app data on Linux).
d) Android
   i) The app is self-contained by default.
   ii) Need #26574.
   iii) Additional auditing required.
What about restricting write access to temp directories? That seems like the quickest and most compatible option (although it wouldn't catch every possible occurrence of this issue.)
In general, similar to State Separation, I think there are two categories of issues here:
a) Things the browser does that are browser bugs
b) Things the OS does independent of the application
A sandbox can provide a safety net for things in (a).
- Application Data Isolation
I'll claim the per-platform sandboxing techniques are the same as Disk Avoidance.
I think this one is even easier, since one can simply prevent write access to any directory except the containing one. This would have to come at the cost of additional engineering to change how the File Save dialogs work though - or you'll have a lot of confused and upset users.
- Secure Updating
Nearly the same sandboxing techniques can be used on all platforms. A launcher component (maybe not the launcher itself, but a reduced-privilege process - not the browser) downloads the update. On download completion, a launcher component verifies the download and installs it. (I can't justify verifying the download in an unprivileged, sandboxed process - I'd like to do it, but I don't see a benefit.)
a) Microsoft Windows
   i) Layer a read-only filesystem on top of the install dir; this prevents persistent modifications.
b) Mac OS X
   i) The Seatbelt profile prevents write access to app binaries.
c) GNU/Linux
   i) The mount namespace provides read-only access to the install dir.
   ii) The mount namespace does not include launcher binaries.
d) Android
   i) If the app was installed using an app store, then that store is responsible for updating.
   ii) If the app was installed by the user (side-loaded), then we can isolate downloading into a separate process, and verification can be handled in an isolated process. Finally, the Android Package Manager controls installing the update.
This one is at odds with (4) which says "You're not allowed to write anything outside your install directory." We need to be more explicit about what parts of the install directory one is allowed to write to.
Web Extensions come into play here. And themes. If we want to allow them; we need to carve out exceptions for them. And ideally we would use the sandbox to enforce an additional safety net against system add-ons if that's possible (e.g. they're not installed into the same place as web extensions.)
- Usable
Unfortunately, despite the above suggestions, this sandboxing model fails if users find it unusable. In particular, with temporary file systems, we should provide a method for safely extracting a file from the temporary location and copying it to a permanent location using the launcher process.
I would argue that this extraction should be automatic. Anything else is unusable. Users won't be used to being unable to save downloaded files wherever on their system they want; adding another thing on top of that would be too painful.
In addition, exposing operating system functionality which may be fingerprintable and/or another attack vector should be optional if it unbreaks the web (such as providing pulseaudio on Linux[26]). As usual, we must be careful about giving users too many customizable switches.
Yea; this one is tough.
- Cross-Origin Fingerprinting Unlinkability
I have less experience here on non-Linux OSes (and Sandboxed Tor Browser is already a good base on Linux). Which other components should we consider spoofing (if possible), or go out of our way to disallow access to (if not possible using the above means)?
So the Browser already brokers access to features on the Web Platform and can be thought of (and really is) a 'sandbox' for webpages. You're only fingerprintable based on what the Web Platform provides. If it provides something fingerprintable; we need to either fix that fingerprint or block access to it at that layer.
I would argue that the role sandboxing should provide here is fingerprinting protection in the face of code execution in the content process. Assume an attacker who has a goal: identify what person is using Tor Browser. Their actions are going to be goal-oriented. A website cannot get your MAC address. But if a website exploits a content process and is evaluating the best cost/benefit tradeoff for the next step in their exploit chain: the lowest cost is going to be 'zero' if the content process does not restrict access to an OS/computer feature that would identify the user.
So the goal of sandboxing in this area would be to restrict access to any hardware/machine/OS identifiers like OS serial number, MAC address, device ids, serial numbers, etc. After that (the 'cookies' of device identifiers if you will), the goal would be to restrict access to machine-specific features that create a unique fingerprint: like your GPU (which I illustrate because it can render slightly unique canvas data) or your audio system (which I illustrate because it can apparently generate slightly unique web audio data.)
-tom