Why do comments like this get rated down so quickly? The process boundary was supposed to be the level of protection. Preventing your own process from accessing itself was never part of the memory model.
Chip manufacturers have known about JavaScript and its implementations for decades.
ARM, for instance, went so far as to add a JS-specific instruction (FJCVTZS, Floating-point Javascript Convert to Signed fixed-point, rounding toward Zero), and before that it had instructions to help JITs (ThumbEE).
Not sure why we're buying the chip companies' shtick of "oh, poor us, we never knew people would use our chips like that, we just specifically optimized for it and provided support instructions"
> Not sure why we're buying the chip companies' shtick of "oh, poor us, we never knew people would use our chips like that, we just specifically optimized for it and provided support instructions"
Hardware moves slowly and can't be fixed easily. Browsers should just be doing this (which they are with site isolation) instead of hardware trying to hack in patches to cover up bad software architecture.
And you can't fix a bug you didn't know about anyway, so why would you have expected ARM to fix something nobody knew was real in the first place?
Running untrusted code in the same process isn't "bad software architecture". And we've been doing that for as long as microprocessors have had out-of-order execution (OoOE). Java got big right when the first consumer OoOE chips came out in the '90s.
The JVM took responsibility for sandbox isolation. It took until Spectre/Meltdown to widely demonstrate that this was a poor decision, because it turned out in-process sandbox isolation is a promise that cannot be kept. And the point of the JVM was to run the same code on all sorts of hardware, so it doesn't get to blame the Intel or SPARC or Motorola CPU it happens to be running on.
If those assumptions were erroneous, why weren't they corrected for two decades by those very same chip makers? It was various security researchers who first warned about something like this, not an Intel engineer saying "y'all are holding it wrong".
They should have provided mechanisms for multiple memory domains in the same process, so that people can use their chips securely to do the work they expect to do with them, yes.
Got it. Chip makers have an obligation to accommodate (proactively, no less) stupid things developers do, like running untrusted code in the same process. Makes total sense.
You're trying to casually brush away the entire field of software fault isolation as "stupid things developers do". In fact, SFI has been a respectable area of systems research since at least 1987.
I'm not trying to casually brush away the whole field. MMU-based protection has been overly limiting ever since MMUs were developed. That doesn't mean that every design for providing isolation beyond what the MMU can guarantee is well-thought-out. Certainly, I see no basis for blaming chip makers for the fact that VM developers came up with an attempt at same-process isolation that doesn't work.
It's like all the whining people do about GCC doing unexpected things when faced with code that relies on undefined behavior. That's not GCC's fault.
If hardware architects had intended not to support software fault isolation, then they would have said so back when the field was developed. It's not like people with experience in hardware design weren't in the peer review circles for those papers. Steve Lucco, one of the authors of the 1993 paper, went on to work at Transmeta.
This isn't like GCC, where the C standards bodies officially got together and said "don't do this".
> Certainly, I see no basis for blaming chip makers for the fact that VM developers came up with an attempt at same-process isolation that doesn't work.
The issue isn't that there's a bug in their VM implementation; it's that with current hardware, general VMs and same-process isolation are mutually exclusive.
They've known since the '90s that people were doing this and expecting it to work, or did you miss that whole Java thing? Java became big right as Intel was adding OoOE to its cores.
Hardware and software are codesigned, and yes, the onus is on the chip manufacturers to release chips that let you continue to use them securely.
Meltdown let processes access kernel memory which was supposed to be hardware-protected. That is a violation of the chipmakers' obligation, not the software's obligation.
And that is a different discussion. The article, and the discussion here, is about side-channel attacks within a single process. I'm pretty sure everyone agrees that the hardware (or some conspiracy of the hardware and kernel) must provide process isolation.