>The safe execution of any untrusted Turing complete code is a pipe dream.
The safe execution of any code requires an operating environment that never trusts the code with more than the least privilege required to complete a task. It has worked in mainframes that way for decades.
The IT zeitgeist these days makes me sad. Things can be better, but almost everyone is pushing in counterproductive directions, or has given up hope.
> The safe execution of any code requires an operating environment that never trusts the code with more than the least privilege required to complete a task. It has worked in mainframes that way for decades.
It has nothing to do with OS-level security features. We are talking about things happening below the level of what software can see.
You just cannot spot any sign of such an attack by inspecting any register the OS has access to.
Timing attacks are only a subset of side-channel attacks, though. One can also imagine thermal attacks -- the amount of power you consume leaks information about what you're doing. And if I share a processor with you, there are various ways I can imagine estimating your power usage. On a processor with dynamic clocking, the clock speed I'm running at is an indicator of the operations you're doing. Even without dynamic clocking, the probability of an ECC error, for example, is likely to change with temperature.
Eliminating timing vulnerabilities is necessary to allow potentially-hostile workloads to share hardware, but it is not sufficient.
Determining what clock speed you're running at seems like it would also require access to timing information, though, right? RAM errors are an interesting idea for sure, but I think that can and should be shored up at the RAM level. I think a strong sandbox, WebAssembly and the like, should be pretty reasonable for running untrusted code.
A 2013 paper[1] demonstrated exactly that: a thermal side channel detected without any on-CPU timing measurement.
Instead, they inferred CPU temperature from frequency drift, observed through changes in network packet markers. A bit contrived, but they made it work quite reliably.
I can still extract timing information by measuring how long it takes a program to respond. The only way to prevent this is to enforce constant-time behavior by delaying the response until a fixed amount of time has elapsed (see constant-time comparison functions in cryptography). That's not feasible for many applications, especially operations on a latency-sensitive critical path.
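As a concrete sketch of the constant-time comparison idea, here's the classic example in Python (the stdlib's `hmac.compare_digest` is a constant-time compare; the naive version is what leaks):

```python
import hmac

# Naive comparison: returns as soon as a byte differs, so the time
# taken leaks how long the matching prefix is -- an attacker can
# recover a secret one byte at a time by watching response latency.
def naive_equal(a: bytes, b: bytes) -> bool:
    if len(a) != len(b):
        return False
    for x, y in zip(a, b):
        if x != y:
            return False
    return True

# Constant-time comparison: examines every byte regardless of where
# the first mismatch occurs, so the duration is independent of the data.
def ct_equal(a: bytes, b: bytes) -> bool:
    return hmac.compare_digest(a, b)
```

This only fixes the comparison itself, of course; the surrounding request handling can still leak timing elsewhere.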
You can run algorithms deterministically in a multi-tenant system. Allow tenants to run only deterministic algorithms, and side channels are eliminated. Algorithms with provable time bounds can be run with their output delayed until the known bound has elapsed, which eliminates timing attacks.
The people operating mainframes have a vastly different mindset and skill set from the average computer/smartphone user. The former can and do dedicate 40h/week and more to studying documentation and tweaking the sandboxes of the stuff they run.
Yes, but by design it wouldn't include code that exploits those vulnerabilities, and it wouldn't trust any other code, so in effect it would shield the system from them.