Why? It's a straightforward matter. You can have the conventional behavior with the necessary limitations to which everyone has adapted, or you can opt in to a modified environment with new rules that would break some sites but provide additional capabilities.
> Am I missing something here?
Yes; the clearly explained rationale is somehow being missed. The sandbox is an OS process, as necessitated by Spectre. Without the new opt-in capability, content from multiple origins -- some of them hostile -- is mixed into a process, and so the shared memory capabilities must be disabled. This new opt-in capability creates the necessary mapping; when it is enabled, content from arbitrary origins will not be mixed into the process, and so shared memory and high-resolution timer (HRT) features can be permitted.
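For what it's worth, a page can check at runtime whether it has opted in; a minimal sketch (the buffer size and message are just illustrative):

  // In a document that is cross-origin isolated, the gated features come back.
  if (self.crossOriginIsolated) {
    const sab = new SharedArrayBuffer(1024); // shared memory is available again
    // timers are also less coarsened in this mode
  } else {
    console.log("not isolated: SharedArrayBuffer and precise timers stay off");
  }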
> Without the new opt-in capability content from multiple origins -- some of them hostile -- are mixed into a process, and so the shared memory capabilities must be disabled.
That's an arbitrary requirement on the part of Firefox developers, and it's a security issue in its own right. Any of the numerous exploits that regularly show up in Firefox could take advantage of this, not just Spectre.
Chrome has site isolation enabled by default, at least on desktop; I don't see why Firefox shouldn't follow suit.
This is a concern somewhat orthogonal to site isolation as implemented in Chrome.
Say you have a web page at https://a.com that does <img src="https://b.com/foo.png">. That's allowed in browsers (including Chrome with site isolation enabled), because it's _very_ common on the web and has been for a long time, and disallowing it would break very many sites. But in that situation the browser attempts to prevent a.com from reading the actual pixel data of the image (which comes from b.com). That protection would be violated if the site could just use a Spectre attack to read the pixel data.
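To make that concrete, here is roughly what the existing protection looks like from a.com's point of view (a sketch; the URLs are just the ones from the example above):

  const img = new Image();
  img.src = "https://b.com/foo.png"; // plain cross-origin load, no CORS
  img.onload = () => {
    const canvas = document.createElement("canvas");
    canvas.width = img.width;
    canvas.height = img.height;
    const ctx = canvas.getContext("2d");
    ctx.drawImage(img, 0, 0);     // displaying/drawing it is fine...
    ctx.getImageData(0, 0, 1, 1); // ...but this throws: the canvas is "tainted"
  };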
So there are three options if you want to keep the security guarantee that you can't read image pixel data cross-site.
1) You could have the pixel data for the image living in a separate process but getting properly composited into the a.com webpage. This is not something any browser does right now, would involve a fair amount of engineering work, and comes with some memory tradeoffs that are not great. It would certainly be a bit of a research project to see how and whether this could be done reasonably.
2) You can attempt to prevent Spectre attacks, e.g. by disallowing things like SharedArrayBuffer. This is the current state in Firefox.
3) You can attempt to ensure that a site's process has access to _either_ SharedArrayBuffer _or_ cross-site image data but never both. This is the solution described in the article. Since current websites widely rely on cross-site images but not much on SharedArrayBuffer, the default is "cross-site images but no SharedArrayBuffer", but sites can opt into the "SharedArrayBuffer but no cross-site images" behavior. There is also an opt-in for the image itself to say "actually, I'm OK with being loaded cross-site even when SharedArrayBuffer is allowed"; in that case a site that opts into the "no cross-site images" behavior will still be able to load that specific image cross-site.
I guess you have a fourth option: Just give up on the security guarantee of "no cross-site pixel data reading". That's what Chrome has been doing on desktop for a while now, by shipping SharedArrayBuffer enabled unconditionally. They are now trying to move away from that to option 3 at the same time as Firefox is moving from option 2 to option 3.
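If it helps to see option 3 concretely: as I understand it, the opt-ins are expressed as HTTP response headers. A minimal sketch using Node's built-in http module purely for illustration; the port and inline markup are made up, the header names and values are the actual web platform bits:

  const http = require("http");

  // a.com opts into "SharedArrayBuffer, but no unmarked cross-site subresources":
  http.createServer((req, res) => {
    res.setHeader("Cross-Origin-Opener-Policy", "same-origin");
    res.setHeader("Cross-Origin-Embedder-Policy", "require-corp");
    res.setHeader("Content-Type", "text/html");
    res.end("<script>console.log(self.crossOriginIsolated)</script>");
  }).listen(8000);

  // b.com can mark a specific image as OK to embed even then by serving it with
  //   Cross-Origin-Resource-Policy: cross-origin
  // (or by serving it with CORS and having a.com put crossorigin on the <img>).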
Similar concerns apply to other resources that can currently be loaded cross-site but don't allow cross-site access to the raw bytes of the resource in that situation: video, audio, scripts, stylesheets.
I hope that explains what you are missing in your original comment in terms of the threat model being addressed here, but please do let me know if something is still not making sense!
Keeping image/video/audio data out of process actually sounds kinda reasonable to me :-).
I think the really compelling example is cross-origin script loading. I can't imagine a realistic way to keep the script data out of process but let it be used with low overhead.
Oh, I think it's doable; the question is how much the memory overhead for the extra processes is.
I agree that doing this for script (and style) data is much harder from a conceptual point of view! On the other hand, the protections there are already much weaker: once you are running the script, you can find out all sorts of things about it based on its access patterns to various globals and built-in objects (which you control).