"The system maintains backwards compatibility. We cannot ask billions of websites to rewrite their code."
I don't understand this requirement. Very few sites use SharedArrayBuffer, and those few that do probably had to rewrite code to deal with it being disabled.
I also don't understand how cross-origin has anything to do with it either. Either your sandbox works, in which case cross-origin isolation shouldn't matter, or it doesn't work, in which case cross-origin isolation is not a real protection.
Am I missing something here?
Firefox is only maybe 5% of users and it has other performance problems; if SharedArrayBuffer doesn't "just work" there, then I'm inclined to have them take that performance hit or use a different browser.
Under Spectre, if the attacker can run SharedArrayBuffer code in your process, even "sandboxed," it can read memory from anywhere else in that process.
So I guess you're right that if the sandbox "works" you don't care about cross-origin isolation, but it turns out that sandboxes don't work if you run multiple sandboxes in the same process.
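Roughly, the reason SharedArrayBuffer gets singled out is that it lets an attacker build a very precise clock, which is exactly what Spectre-style attacks need to tell cache hits from misses. A sketch of the idea (assumes SharedArrayBuffer is available; the worker file name is made up):

    // main.js: a worker spinning on a shared counter acts as a
    // high-resolution timer, fine-grained enough for cache timing.
    const sab = new SharedArrayBuffer(4);
    const ticks = new Int32Array(sab);

    // timer-worker.js (hypothetical) just does:
    //   onmessage = e => {
    //     const t = new Int32Array(e.data);
    //     while (true) Atomics.add(t, 0, 1);
    //   };
    const worker = new Worker('timer-worker.js');
    worker.postMessage(sab);

    function time(fn) {
      const start = Atomics.load(ticks, 0);
      fn();
      return Atomics.load(ticks, 0) - start;  // elapsed "ticks"
    }

Take away shared memory and the attacker loses the cheap clock, which is why browsers disabled it after Spectre.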
The mitigation browsers have chosen is to isolate each origin in its own process, preventing other origins from communicating with it. To regain access to SharedArrayBuffer, you have to opt in to this extreme form of cross-origin isolation.
It would be nice to just make the whole web default to cross-origin isolation, but tons of websites rely on cross-origin communication features, and browsers can't just force them all to be compatible with isolation, so isolation has to be opt-in.
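Concretely, the opt-in is a pair of response headers on the top-level document (this is the scheme the article describes):

    Cross-Origin-Opener-Policy: same-origin
    Cross-Origin-Embedder-Policy: require-corp

Once both are set, the page is treated as cross-origin isolated and SharedArrayBuffer comes back; without them you keep the old, compatible behavior.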
How exactly does site-isolation prevent cross-origin communication that doesn't rely on SharedArrayBuffer, i.e. the vast majority of use cases? It's just message passing.
I can see that site-isolation is arguably too expensive on mobile and why you might want an opt-in mechanism there, somewhere down the line.
However, I don't think there are good arguments for not just enabling it on Desktop right now, without making developers jump through hoops. Until Chrome enables SharedArrayBuffers on mobile, I have no reason to care anyway.
It doesn’t need to, since that communication is consensual: the sender must explicitly send the information, and the receiver must explicitly be interested in it (and can check what origin it is from). The problem with SharedArrayBuffer (with Spectre) is that it allows the “receiver” to read whatever it wants from the other origin, just by virtue of ending up in the same browser context.
Site isolation disables all of it. With "Cross-Origin-Embedder-Policy: require-corp", you can't even embed a cross-site image unless the other site allows it with "Cross-Origin-Resource-Policy: cross-origin".
Enabling that on desktop today would break every website that embeds cross-origin images, e.g. everybody using a separate CDN for images would be broken.
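To un-break that, every resource on the CDN would have to start answering with something like:

    Cross-Origin-Resource-Policy: cross-origin

Until that happens, any page that turned on "require-corp" simply fails to load those images.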
You're describing how this proposed cross-origin isolation scheme works. I understand that, I don't understand why it is necessary to make it work that way.
Chrome has been doing site isolation with multiple processes for a while; it "just works" and it doesn't break sites.
Site isolation and origin isolation are separate concerns. In the "origin isolation" model, you need to ensure different origins are in different processes, and that their data don't leak from one to the other. In site isolation, you only care about tabs not being able to communicate with each other.
Also, you seem to be missing something: Chrome is going to implement the same set of headers, with the same set of restrictions when they are applied. This isn't an arbitrary Firefox decision; every web browser is expected to follow suit. See the various mentions of "Chrome" in https://web.dev/coop-coep/
Chrome’s site isolation doesn’t solve the “image from another origin” problem. Those images still exist in the embedding origin's process memory. It solves the “frame from another origin” problem, which is the more acute issue but not the only one.
Why? It's a straightforward matter. You can have the conventional behavior with the necessary limitations to which everyone has adapted, or you can opt in to a modified environment with new rules that would break some sites but provide additional capabilities.
> Am I missing something here?
Yes; the clearly explained rationale is somehow being missed. The sandbox is an OS process, as necessitated by Spectre. Without the new opt-in capability, content from multiple origins -- some of them hostile -- is mixed into a process, and so the shared memory capabilities must be disabled. This new opt-in capability creates the necessary mapping; when enabled, content from arbitrary origins will not be mixed into a process, and so the shared memory and high-resolution timer (HRT) features can be permitted.
> Without the new opt-in capability, content from multiple origins -- some of them hostile -- is mixed into a process, and so the shared memory capabilities must be disabled.
That's an arbitrary requirement on the part of Firefox developers, and it's a security issue in its own right. Any of the numerous exploits that regularly show up in Firefox could take advantage of this, not just Spectre.
Chrome has site-isolation enabled by default, at least on Desktop, I don't see why Firefox shouldn't follow suit.
This is a concern somewhat orthogonal to site isolation as implemented in Chrome.
Say you have a web page at https://a.com that does <img src="https://b.com/foo.png">. That's allowed in browsers (including Chrome with site isolation enabled), because it's _very_ common on the web and has been for a long time, and disallowing it would break very many sites. But in that situation the browser attempts to prevent a.com from reading the actual pixel data of the image (which comes from b.com). That protection would be violated if the site could just use a Spectre attack to read the pixel data.
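To make that guarantee concrete, here's roughly what the browser blocks today (illustrative sketch; the domains are the same made-up ones as above):

    // On https://a.com: embedding the b.com image is fine, but reading
    // its pixels is not. Drawing it to a canvas "taints" the canvas.
    const img = new Image();
    img.src = 'https://b.com/foo.png';   // cross-origin, no CORS opt-in
    img.onload = () => {
      const canvas = document.createElement('canvas');
      canvas.width = img.width;
      canvas.height = img.height;
      const ctx = canvas.getContext('2d');
      ctx.drawImage(img, 0, 0);          // allowed: image is displayed
      ctx.getImageData(0, 0, 1, 1);      // throws SecurityError: no pixel reads
    };

A Spectre gadget running in a.com's process would let the page skip straight past that check and read the decoded image bytes out of memory.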
So there are three options if you want to keep the security guarantee that you can't read image pixel data cross-site.
1) You could have the pixel data for the image living in a separate process but getting properly composited into the a.com webpage. This is not something any browser does right now, would involve a fair amount of engineering work, and comes with some memory tradeoffs that are not great. It would certainly be a bit of a research project to see how and whether this could be done reasonably.
2) You can attempt to prevent Spectre attacks, e.g. by disallowing things like SharedArrayBuffer. This is the current state in Firefox.
3) You can attempt to ensure that a site's process has access to _either_ SharedArrayBuffer _or_ cross-site image data but never both. This is the solution described in the article. Since current websites widely rely on cross-site images but not much on SharedArrayBuffer, the default is "cross-site images but no SharedArrayBuffer", but sites can opt into the "SharedArrayBuffer but no cross-site images" behavior. There is also an opt-in for the image itself to say "actually, I'm OK with being loaded cross-site even when SharedArrayBuffer is allowed"; in that case a site that opts into the "no cross-site images" behavior will still be able to load that specific image cross-site.
I guess you have a fourth option: Just give up on the security guarantee of "no cross-site pixel data reading". That's what Chrome has been doing on desktop for a while now, by shipping SharedArrayBuffer enabled unconditionally. They are now trying to move away from that to option 3 at the same time as Firefox is moving from option 2 to option 3.
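For completeness, a page that takes option 3 can check at runtime whether its opt-in actually took effect; the crossOriginIsolated flag is how that state is exposed to scripts:

    // Only true when the COOP/COEP opt-in is in effect for this page.
    if (self.crossOriginIsolated) {
      const shared = new SharedArrayBuffer(1024);  // available again
      // hand `shared` to workers, WebAssembly threads, etc.
    } else {
      // no shared memory; fall back to postMessage copies
    }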
Similar concerns apply to other resources that can currently be loaded cross-site but don't allow cross-site access to the raw bytes of the resource in that situation: video, audio, scripts, stylesheets.
I hope that explains what you are missing in your original comment in terms of the threat model being addressed here, but please do let me know if something is still not making sense!
Keeping image/video/audio data out of process actually sounds kinda reasonable to me :-).
I think the really compelling example is cross-origin script loading. I can't imagine a realistic way to keep the script data out of process but let it be used with low overhead.
Oh, I think it's doable; the question is how much the memory overhead for the extra processes is.
I agree that doing this for script (and style) data is much harder from a conceptual point of view! On the other hand, the protections there are already much weaker: once you are running the script, you can find out all sorts of things about it based on its access patterns to various globals and built-in objects (which you control).
To safely use SharedArrayBuffer you have to give something else up, like the ability to fetch arbitrary resources with <img>. Most sites that want SharedArrayBuffer would be fine with a tradeoff like this, and so this post describes a way they can opt in to the necessary restrictions.
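For the handful of cross-origin images such a site still needs after opting in, there is an escape hatch, but the other server has to cooperate. Something like this (domain names made up):

    <!-- page at https://a.com, served with
         Cross-Origin-Embedder-Policy: require-corp -->

    <!-- option A: request the image with CORS; cdn.example must send
         Access-Control-Allow-Origin for it to load -->
    <img src="https://cdn.example/photo.png" crossorigin="anonymous">

    <!-- option B: plain embed; cdn.example must send
         Cross-Origin-Resource-Policy: cross-origin -->
    <img src="https://cdn.example/logo.png">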
> I also don't understand how cross-origin has anything to do with it either. Either your sandbox works, in which case cross-origin isolation shouldn't matter, or it doesn't work, in which case cross-origin isolation is not a real protection.
It doesn't work in general. It kind of works if you're putting each sandbox into its own process, assuming there aren't any as-yet-undiscovered microarchitectural attacks.