On 6 July 2018 at 16:05, Yawning Angel yawning@schwanenlied.me wrote:
The number of upstream (Firefox) changes that needed to be made to get the Linux sandbox to work was exactly 0. There was one fix I backported from a more current firefox release, and two upstream firefox bugs that I worked around (all without altering the firefox binary at all).
The design and code survived more or less intact from 7.0.x through at least the entirety of the 7.5 stable series. (I don't run alpha; I assume there are some changes required for 8.0, but the code's deprecated and I can't be bothered to check. It would have survived if the time and motivation were available.)
It really seems to me, from the outside, that Linux is an exception to Mac/Windows (and maybe Android): it's got rich support for containerization and restrictions, and by and large sandboxed-tor-browser was able to live very nicely within them.
Spelled out:
- The FF Parent Process talks to and controls the Content Processes
(using existing IPC mechanisms) and maybe/probably interfaces with the Networking process and other Helper Processes in unknown future ways. The content processes probably talk to the Networking process directly; they might also talk to the other helper processes.
- The Networking process talks to Tor using SOCKS with (probably) a
domain socket or named pipe
- Tor Control requests are sent from the Parent Process to the broker
which filters them and then passes them to Tor over the control port.
- The broker is most likely the least sandboxed process and may
provide additional functionality to the parent process; for example, perhaps it passes a writable file handle in a particular directory so the user can save a download. (A sketch of the broker's filtering side follows this list.)
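To make that concrete, here is a minimal sketch of the broker's control-port filter, assuming tor's ControlPort listens on a unix socket at /run/tor/control and a simple per-command whitelist. The paths and the whitelist contents are illustrative, not what any real Tor Browser ships:

package main

import (
    "bufio"
    "io"
    "log"
    "net"
    "strings"
)

// Hypothetical whitelist of control-port commands the parent process may
// issue; the broker rejects everything else.
var allowed = map[string]bool{
    "AUTHENTICATE": true,
    "GETINFO":      true,
    "SIGNAL":       true,
}

func handle(browser net.Conn) {
    defer browser.Close()
    tor, err := net.Dial("unix", "/run/tor/control") // illustrative path
    if err != nil {
        log.Print(err)
        return
    }
    defer tor.Close()
    go io.Copy(browser, tor) // relay tor's replies back unfiltered

    sc := bufio.NewScanner(browser)
    for sc.Scan() {
        line := sc.Text()
        fields := strings.Fields(line)
        if len(fields) == 0 || !allowed[strings.ToUpper(fields[0])] {
            browser.Write([]byte("510 Command filtered\r\n"))
            continue
        }
        tor.Write([]byte(line + "\r\n"))
    }
}

func main() {
    // The browser-facing socket; only the parent process can reach it.
    l, err := net.Listen("unix", "/run/tor-browser/control")
    if err != nil {
        log.Fatal(err)
    }
    for {
        c, err := l.Accept()
        if err != nil {
            log.Fatal(err)
        }
        go handle(c)
    }
}

The property that matters is that the whitelist lives outside the browser's trust boundary: a fully compromised parent process can still only speak filtered commands.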
How do you envision updates working in this model? Having the sandbox externalized as a separate component makes it marginally more resilient to the updater/updates being malicious (though I would also agree that it merely shifts the risk onto the sandbox's update mechanism).
Probably: the parent process checks to see if an update file is present upon restart; if there is, it validates and applies the update. After applying the update (or if no update is present) it drops privileges such that it is unable to write/replace any of the existing files.
An attacker who compromises the parent process can download and place a malicious update file in the directory, but doesn't have permission to apply it. They can only tamper with the update-install mechanism of the running process, not the files on disk. Upon restart you're running an uncompromised process, which will reject the malicious update.
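As a rough sketch of that flow - the paths, key handling, and archive format here are placeholders; only the verify-then-apply-then-drop ordering is the point:

package main

import (
    "crypto/ed25519"
    "log"
    "os"
)

const (
    updatePath = "/opt/tor-browser/update.mar"     // placeholder path
    sigPath    = "/opt/tor-browser/update.mar.sig" // placeholder path
)

// Placeholder key; a real build would pin the vendor's public key.
var updateKey = ed25519.PublicKey(make([]byte, ed25519.PublicKeySize))

func applyPendingUpdate() {
    mar, err := os.ReadFile(updatePath)
    if os.IsNotExist(err) {
        return // nothing staged; continue startup
    } else if err != nil {
        log.Fatal(err)
    }
    sig, err := os.ReadFile(sigPath)
    if err != nil {
        log.Fatal(err)
    }
    if !ed25519.Verify(updateKey, mar, sig) {
        log.Fatal("staged update has a bad signature; refusing to apply")
    }
    // ... unpack the archive over the install tree (elided) ...
    os.Remove(updatePath)
    os.Remove(sigPath)
}

func main() {
    applyPendingUpdate()
    // From here on, drop write access to the install tree (e.g. re-exec
    // inside a read-only bind mount) so a compromised parent can stage a
    // malicious update but never apply it.
}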
It is also not clear to me how to do things like "peek at the executable's ELF header to only bind mount the minimum number of shared libraries required for the executable to run" from within the executable itself.
Yea; probably not. Unless we do wind up with firefox.exe launching another firefox.exe that acts as the parent process. Which is underway for Windows; but not Linux.
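For what it's worth, the ELF-peeking half is easy from an external launcher. A sketch using Go's standard library; it lists only the direct DT_NEEDED entries, and a real launcher would also have to resolve them transitively:

package main

import (
    "debug/elf"
    "fmt"
    "log"
    "os"
)

func main() {
    if len(os.Args) != 2 {
        log.Fatal("usage: peek <elf-binary>")
    }
    f, err := elf.Open(os.Args[1])
    if err != nil {
        log.Fatal(err)
    }
    defer f.Close()
    libs, err := f.ImportedLibraries() // the DT_NEEDED entries
    if err != nil {
        log.Fatal(err)
    }
    for _, lib := range libs {
        fmt.Println(lib) // e.g. libpthread.so.0, libdl.so.2, ...
    }
}

Run it as, say, "go run peek.go ./firefox"; the output is the set of libraries a launcher would need to bind mount. The hard part is doing this from inside the executable itself, which was the point above.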
Anyway, as far as I can tell, the differences in what you're suggesting vs the existing/proposed architecture boil down to "how much of firefox should be trusted?". To this day, I remain in the "as little as possible" camp, but "nie mój cyrk, nie moje małpy" ("not my circus, not my monkeys").
Pretty much. I definitely think a sandboxed-tor-browser-style design is better - much better. It's just way, way more work on Mac/Windows and way more divergence from Firefox's current and future plans.
----------------------
On 6 July 2018 at 18:30, Matthew Finkel matthew.finkel@gmail.com wrote:
When one is able to achieve High Impact goals from the Content Process, it seems to me that engineering effort should be focused on closing _those_ holes first, before trying to build a solution for the parent process. (I'm not saying we shouldn't plan for the parent process though!)
Having Mozilla's help identifying what is needed, and where we should start, in this area will be extremely helpful.
We could definitely do a brainstorming session with the Mozilla sandboxing folks to open a bunch of bugs on this.
There's https://www.bromium.com/ which operates in this space, and despite having a bunch of slick marketing and buzzwords and stuff actually has a really powerful technology core.
That website is...hard to read. But yes, ideally, that is where I think private browsing mode should go in the future: the browser is contained by a cross-platform VM and we have a minimal trusted computing base with a limited attack surface.
That would be ideal...
In this way, the browser only needs one set of sandboxing techniques in the browser. All platform-specific mitigations and restrictions are in an abstraction layer at the VM-Kernel interface.
That seems to assume a Docker-like approach where you've got a cross-platform virtualization engine...
I think there will always be a lot of platform-specific sandboxing code present, both in the browser and in any container solution that gets devised. Unless you have the container solution run the browser on a fixed guest OS regardless of the host OS - but that seems problematic for a lot of reasons.
I also think it's likely that if Firefox goes to a container-based approach, it's going to integrate with each system differently; not use a cross-platform solution like Docker. You're just going to get better performance and integration out of something from the native OS. This is where Edge is going: https://docs.microsoft.com/en-us/windows/security/threat-protection/windows-...
The Chromium/Firefox sandbox on Windows is composed of the interaction of SetProcessMitigationOptions (of which there are many options), Integrity Level (both an 'Initial' and a 'Delayed') as well as Job Level, Access Token Level, Alternate Desktop, and Alternate Windows Station.
I'm hopeful we can use more of these than we do currently.
We can; and today* if we accept certain breakage.
* Today really meaning after weeks of testing...
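For experimenting, here's a minimal sketch of enabling just one of these - the dynamic code policy - on the current process. Windows-only (recent Windows), calling kernel32 directly; the enum value is from the PROCESS_MITIGATION_POLICY documentation:

package main

import (
    "log"
    "syscall"
    "unsafe"
)

const processDynamicCodePolicy = 2 // PROCESS_MITIGATION_POLICY enum value

func main() {
    // PROCESS_MITIGATION_DYNAMIC_CODE_POLICY is a 32-bit bitfield; bit 0
    // is ProhibitDynamicCode.
    policy := uint32(1)
    setPolicy := syscall.NewLazyDLL("kernel32.dll").NewProc("SetProcessMitigationPolicy")
    r, _, err := setPolicy.Call(
        uintptr(processDynamicCodePolicy),
        uintptr(unsafe.Pointer(&policy)),
        unsafe.Sizeof(policy),
    )
    if r == 0 {
        log.Fatal(err)
    }
    log.Print("dynamic code generation is now prohibited in this process")
}

Flipping these on one at a time in the parent/content processes is roughly how you'd find out which ones cause the "certain breakage".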
Also; have you experimented with/confirmed the restriction mentioned at All Hands: that a process with a Seatbelt policy cannot launch a process with a more restrictive Seatbelt policy? (I think mcs ran into this with the Mac sandbox policy that used to ship with Tor Browser Alpha.) This is a significant problem for any Apple-based sandboxing design, and frankly one where I think someone should organize a semi-formal petition from browser makers to Apple to get it fixed.
Yes - I wonder if the sandbox_init() call failed because it was denied by our initial seatbelt policy. But, at the same time, I wonder: after adding a first layer of sandboxing that (somehow) allows calling sandbox_init() again, does this further restrict the current sandbox policy, or does it overwrite the original policy?
The man page (as available online) does not describe this behavior, other than saying that calling sandbox_init() places the current process into a sandbox(7). And that man page only says "New processes inherit the sandbox of their parent."
https://www.manpagez.com/man/3/sandbox_init/ https://www.manpagez.com/man/7/sandbox/
Complaining about this design is most likely a dead-end, however - maybe that shouldn't stop us from making some noise, though.
"The sandbox_init() and sandbox_free_error() functions are DEPRECATED. Developers who wish to sandbox an app should instead adopt the App Sand- box feature described in the App Sandbox Design Guide."
I'm told by our Mac person (Alex) that seatbelt is not going to get removed. "It's been deprecated for a while, and (a) all the platform binaries are sandboxed using it, (b) every web browser (including safari) uses it." So that's something.
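One empirical way to answer the further-restrict-or-overwrite question is just to call sandbox_init() twice and see what the second call does. A macOS-only cgo sketch; expect deprecation warnings, and I'm assuming the named-profile strings ("no-internet", "no-write") match the kSBXProfile* constants sandbox.h exposes:

package main

/*
#include <sandbox.h>
#include <stdlib.h>
*/
import "C"

import (
    "fmt"
    "unsafe"
)

// initSandbox applies a named seatbelt profile to the current process.
func initSandbox(profile string) error {
    p := C.CString(profile)
    defer C.free(unsafe.Pointer(p))
    var errbuf *C.char
    if C.sandbox_init(p, C.SANDBOX_NAMED, &errbuf) != 0 {
        defer C.sandbox_free_error(errbuf)
        return fmt.Errorf("sandbox_init(%q): %s", profile, C.GoString(errbuf))
    }
    return nil
}

func main() {
    if err := initSandbox("no-internet"); err != nil {
        fmt.Println(err)
        return
    }
    // Does this further restrict, replace the first policy, or just fail?
    if err := initSandbox("no-write"); err != nil {
        fmt.Println("second call:", err)
    }
}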
Is there something we can use for mitigating ROP gadgets within the sandbox?
I'm not sure where this goal came from, but it seems very independent of sandboxing. But I think the general answer is: No, not until one has or can emulate Execute-Only memory, which might be possible if one runs their own hypervisor (I don't think Hyper-V can do it but maybe?)
This is related, but I see it didn't logically flow from the surrounding thoughts. While I was thinking about using DLL injection for WinSock, I thought about how that wouldn't prevent an attacker from using ROP to bypass it.
I hadn't considered that; but I assume it is not possible or Chromium's sandbox from the get-go would be insecure.
As with all things sandboxing, we need to be sure there are no IPC mechanisms to a privileged process that bypass the sandbox restrictions. On the networking side, there is https://searchfox.org/mozilla-central/source/dom/network/PUDPSocket.ipdl
- which I think is used by WebRTC.
Thanks, good point. This is something we'll need to address before we enable WebRTC, anyway. On Android this code should run in the background GeckoService, so as long as that is sandboxed it shouldn't bypass the proxy - but this needs verification.
This is probably an issue today absent WebRTC; a compromised content process can probably invoke these IPC methods. But this would only matter on Mac, since that's the only platform that prevents network connections from the content process.
So the goal of sandboxing in this area would be to restrict access to any hardware/machine/OS identifiers like OS serial number, MAC address, device ids, serial numbers, etc. After that (the 'cookies' of device identifiers if you will), the goal would be to restrict access to machine-specific features that create a unique fingerprint: like your GPU (which I illustrate because it can render slightly unique canvas data) or your audio system (which I illustrate because it can apparently generate slightly unique web audio data.)
I don't know how to do this on all the platforms, so this is something that we'll need documented. If Mozilla already have this information and/or if this is already available in Firefox, then that's a good first step.
Same on the brainstorming; but it'd be good to come up with a list of identifiers we're concerned about (and ideally some examples of how to get them) so Mozilla folks can think about what the sandbox doesn't/can't prevent.
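To seed that list, one concrete example: MAC addresses are readable from any unprivileged (and, today, unsandboxed) process using nothing but the standard library:

package main

import (
    "fmt"
    "log"
    "net"
)

func main() {
    ifaces, err := net.Interfaces()
    if err != nil {
        log.Fatal(err)
    }
    for _, iface := range ifaces {
        if len(iface.HardwareAddr) > 0 {
            fmt.Printf("%s: %s\n", iface.Name, iface.HardwareAddr)
        }
    }
}

OS serial numbers, device IDs, and the GPU/audio cases will each need their own platform-specific write-up.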
-tom