The depressing thing is looking at the performance cost of so many of these mitigations, which exist fundamentally because of unsafe languages.
It has a somewhat pernicious effect: a person can compare their unsafe language to a safe one and say "see, it's faster", when the safe language is being subjected to the same hardware mitigation costs even though, for it, those costs are unnecessary.
The lack of memory safety in many core languages means we adopt mitigations that increase memory usage, increase processor complexity, and reduce overall performance.
The reality is build-time checking for safety isn't as safe as it sounds, as others have mentioned. Not only are malicious devs a problem, but hardware fails too.
Build-time safety checking is valid and useful primarily for developer productivity. Runtime checks which reinforce those (and other) constructs are for ensuring things proceed as intended. There is no contradiction. Your problem is that many very productive developers prefer not to have all the build-time checks, and you're not going to change their minds!
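To make that split concrete, here's a minimal Rust sketch (illustrative only): the type error is rejected at build time, while the slice access is backstopped by a runtime bounds check, because the index can't be known until the program runs.

```rust
// A minimal sketch of the build-time/runtime split. The commented-out
// line is rejected by the compiler before the program ever runs; the
// slice access below is backstopped by a runtime bounds check.
fn main() {
    let data = [10u32, 20, 30];

    // Build-time check: this does not compile; the type system rejects it.
    // let s: &str = data[0];

    // Runtime check: the index depends on the environment, so the
    // compiler can't prove it's in bounds. `get` turns the failure
    // into a value instead of a panic.
    let i: usize = std::env::args().count(); // 1 if run with no arguments
    match data.get(i) {
        Some(v) => println!("data[{i}] = {v}"),
        None => println!("index {i} caught by the runtime bounds check"),
    }
}
```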
Technology adoption almost always proceeds faster in cases where the existing work can be carried over as much as possible. I often wish this wasn't the case, but pragmatically it is. To get to a safer world we cannot require rebuilding it from scratch.
There are no "safe" languages. Fast managed languages JIT-compile your code to the same unsafe machine code as C, and JITs frequently have exploitable bugs. Most (all?) of the techniques which help mitigate safety bugs in "unsafe" languages also help, or can be used to help, mitigate safety bugs in JITs. And then there are all the standard library functions implemented in "unsafe" languages.
Then there's Rust, which doesn't even claim to be a safe language. It claims to contain the unsafety to certain parts of the code (and does a good job of it!), which is nice, but there's still a whole lot of code marked unsafe that benefits from these mitigations.
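That containment model is easy to show in a few lines (a toy example, not real library code): the one `unsafe` block is the only place the invariant can be violated, and the safe wrapper around it is responsible for upholding it.

```rust
// Sketch of Rust's containment model: the single `unsafe` block is the
// only place the bounds invariant can be broken, and the safe wrapper
// is responsible for upholding it.
fn first_or_zero(v: &[u32]) -> u32 {
    if v.is_empty() {
        return 0;
    }
    // SAFETY: we just checked that `v` is non-empty, so index 0 exists.
    unsafe { *v.get_unchecked(0) }
}

fn main() {
    assert_eq!(first_or_zero(&[7, 8, 9]), 7);
    assert_eq!(first_or_zero(&[]), 0);
    // Callers never write `unsafe`: if there's a memory bug, the audit
    // surface is the one block above, not the whole program.
}
```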
Huh, interesting. I'm very curious how those typing rules ensure memory safety, presumably without either something like Rust's borrow checker or garbage collection. I didn't think that was possible.
A JIT implementation of safe language 1 would still be dangerous in safe language 2 since it's emitting and running raw assembly. So you need a safe assembly language no matter what, I think.
Unless you can provably write safe compilers in safe language 2, but I'm not sure I've seen any attempts at that for JITs. There is CompCert for C.
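For a sense of why a JIT necessarily steps outside the host language's guarantees, here's a minimal sketch in Rust. It assumes x86-64 Linux and the `libc` crate; real JITs also honor W^X by remapping pages, which is skipped here for brevity.

```rust
// A JIT in miniature: write raw machine-code bytes into executable
// memory and jump to them. No type system checks this boundary.
use std::mem;

fn main() {
    // x86-64 machine code for: mov eax, 42; ret
    let code: [u8; 6] = [0xb8, 0x2a, 0x00, 0x00, 0x00, 0xc3];

    unsafe {
        // Map one read/write/execute page for the emitted code.
        let page = libc::mmap(
            std::ptr::null_mut(),
            4096,
            libc::PROT_READ | libc::PROT_WRITE | libc::PROT_EXEC,
            libc::MAP_PRIVATE | libc::MAP_ANONYMOUS,
            -1,
            0,
        );
        assert_ne!(page, libc::MAP_FAILED);

        // Copy the bytes in and call them as a function. One wrong
        // byte and we're executing garbage.
        std::ptr::copy_nonoverlapping(code.as_ptr(), page as *mut u8, code.len());
        let f: extern "C" fn() -> i32 = mem::transmute(page);
        println!("jitted function returned {}", f());

        libc::munmap(page, 4096);
    }
}
```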
CHERI gives you greater granularity over memory access; it doesn't magic away security vulnerabilities.
But that doesn't negate what I stated: there's a massive amount of complexity, and a significant performance cost, to all these features. They're needed because the prevalence of unsafe code demands a global enforcement mechanism, which means hardware.
Hence we pay a real and permanent cost in hardware: complexity, runtime performance, and memory, borne by all software on the system.
CHERI does more than help eliminate security vulnerabilities. Consider that today we rely on the MMU to provide memory isolation between Unix processes; CHERI enables isolation without switching page tables, at a smaller hardware cost (though it's not like you can drop unmodified software into such an architecture). So I don't think it's correct to consider this yet another layer of complexity. If anything it has the potential to lead to simpler system designs.
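For a sense of what that per-pointer granularity means, here is a toy software model of the capability idea, in Rust. Real CHERI does this in hardware with tagged capabilities that carry bounds and permissions; this sketch is illustrative only.

```rust
// Toy model of a capability: a pointer that carries its own bounds,
// with every access checked against them. You can derive narrower
// capabilities but never widen them.
struct Capability<'a> {
    base: &'a [u8], // the memory this capability is allowed to see
}

impl<'a> Capability<'a> {
    /// Derive a narrower capability; bounds can shrink, never grow.
    fn restrict(&self, start: usize, len: usize) -> Option<Capability<'a>> {
        self.base.get(start..start + len).map(|b| Capability { base: b })
    }

    /// Every load is checked against the capability's bounds.
    fn load(&self, offset: usize) -> Option<u8> {
        self.base.get(offset).copied()
    }
}

fn main() {
    let memory = [1u8, 2, 3, 4, 5, 6, 7, 8];
    let root = Capability { base: &memory };

    // A subsystem gets a capability covering bytes 2..6 only.
    let narrow = root.restrict(2, 4).unwrap();
    assert_eq!(narrow.load(0), Some(3));
    assert_eq!(narrow.load(4), None); // out of bounds: access denied
    println!("bounds enforced per pointer, not per page");
}
```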