This assumes the absence of a sandbox. Trusted computing can happen with or without a sandbox, much like "regular" computing.
If your system is running unsandboxed, untrusted third party code, that's pretty bad, regardless of the presence or absence of a trusted platform. As an example: FLOSS systems are definitely capable of running malware.
On the other hand, a reproducible build of open source software might well be what runs in (and relies on the attestation provided by) a trusted computing platform.
I do see one practical concern with integrating trusted computing on a general purpose computer:
If an implementation depends mostly on security through obscurity to achieve the desired attestation capabilities, this makes it much harder to audit it for vulnerabilities or backdoors. But I don't see how that is a fundamental property of a trusted computing system.
A reproducible build is very different from cryptographically attesting to the binary state of the running system, especially the kernel.
If I can't produce a binary with the same "reproducible state" as the one you had because your _kernel_ is one that I don't run (especially because maybe I don't _want_ to run it), that destroys all the value of a reproducible build.
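To make that concrete, here's a minimal sketch of the SHA-256 extend operation that TPM PCRs use (the kernel image names are placeholders); any difference in the measured kernel produces a different measurement chain, no matter how reproducible the rest of the stack is:

```python
import hashlib

def pcr_extend(pcr: bytes, measurement: bytes) -> bytes:
    # TPM extend: new PCR value = SHA-256(old PCR value || measurement)
    return hashlib.sha256(pcr + measurement).digest()

# Hypothetical kernel images; one byte of difference changes everything.
kernel_yours = b"vmlinuz-6.1-vendor-signed"
kernel_mine = b"vmlinuz-6.1-built-from-source"

pcr = bytes(32)  # PCRs typically start zeroed at boot
print(pcr_extend(pcr, hashlib.sha256(kernel_yours).digest()).hex())
print(pcr_extend(pcr, hashlib.sha256(kernel_mine).digest()).hex())
```

Two different measured kernels yield two different attested states, even if everything above the kernel is bit-for-bit identical.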
A reproducible build should not _undermine_ software freedoms, specifically those protected by a Free Software license. But trusted computing always undermines software freedoms: that's by design. It's intentional. It's all about locking 100% of the users into a single monoculture where there is minimal freedom.
And that's fine in a managed IT environment such as a corporation. But it's not ok when I buy hardware and the manufacturer refuses to hand over the certificate chain to me.
A kernel would not be something that you would run in a trusted enclave. It's way too big of a surface area (containing your entire operating system and application layer, after all), so what would be the point of attesting that to anyone?
This is the "old" way of using a TPM, and I agree, it does not make sense at all. After all, it never came to be, and that's not only because of the vocal protests against it. It simply does not make sense!
But I would encourage you to read up on how, for example, the Signal foundation is thinking about using something like SGX.
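As a conceptual sketch (not Signal's actual protocol; the measurement value is a placeholder), the idea is that the client only hands over secrets after checking the enclave's code measurement from a verified attestation quote against the measurement of an audited, reproducibly built enclave:

```python
import hmac

# Placeholder: MRENCLAVE digest of the audited, reproducibly built enclave.
EXPECTED_MRENCLAVE = "9f2c" * 16

def send_secret_if_attested(quote_mrenclave: str, secret: bytes) -> bool:
    # Assume the quote's signature chain was already verified; here we only
    # check that the enclave is running exactly the code we expect.
    if not hmac.compare_digest(quote_mrenclave, EXPECTED_MRENCLAVE):
        return False  # enclave runs something else; keep the secret
    # ...derive a channel key bound to the quote and transmit `secret`...
    return True

print(send_secret_if_attested("9f2c" * 16, b"contact hashes"))  # True
print(send_secret_if_attested("dead" * 16, b"contact hashes"))  # False
```

Note that this is a case where attestation and reproducible builds reinforce each other: the expected measurement is only meaningful because anyone can rebuild the enclave and check it.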
It was the ME firmware that was compromised by this CSME exploit.
Of course, few people notice if the ME firmware is updated, and Intel doesn't often update deployed ME firmware. But it verifies the BIOS, and the BIOS verifies the kernel.
And that's the point where people start caring. Hence I used the kernel as an example.
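The shape of that chain, as a minimal sketch (bare hash checks standing in for the signature verification real firmware does against fused keys; all images are placeholders):

```python
import hashlib

def digest(blob: bytes) -> str:
    return hashlib.sha256(blob).hexdigest()

# Hypothetical boot stages; the ME is the implicitly trusted root.
bios = b"vendor-bios-image"
kernel = b"distro-kernel-image"

expected = {  # digests each stage is provisioned to accept
    "bios": digest(bios),
    "kernel": digest(kernel),
}

def boot():
    # The ME verifies the BIOS before letting it run...
    assert digest(bios) == expected["bios"], "BIOS verification failed"
    # ...and the BIOS verifies the kernel before jumping to it.
    assert digest(kernel) == expected["kernel"], "kernel verification failed"
    print("chain intact: kernel boots")

boot()
```

Compromise the root (the ME) and every check below it can be made to lie.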
Signal is thinking about using SGX, but they also have other ways to grant the user reasonable security. After this CSME exploit, Signal may reconsider using SGX.
Either way, my point still stands: verifiable builds do not rely on trusted computing at this point. I hope they never do. It would be twisted logic to tell the user the only way they can have their software freedom is by asking the trusted computing infrastructure to verify it for them, when that infrastructure is deliberately as opaque and locked-down as possible. Trusted computing is not reproducible! Its designed-in purpose is to be opaque, to hide things from the user.
> verifiable builds do not rely on trusted computing at this point.
Of course they don't. They are orthogonal, i.e. one does not imply the other, but one also does not prevent the other.
The verifiable build serves you, the hardware owner, in knowing that the software does what its vendor claims.
The trusted platform's assertion serves the software vendor, allowing them to trust the environment that their software is running in.
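The former needs no special hardware at all. A minimal sketch, with a placeholder build command and paths: rebuild from the published source in a pinned environment and compare digests against the binary the vendor shipped:

```python
import hashlib
import subprocess

def sha256_file(path: str) -> str:
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

# Placeholder build command and paths: rebuild in a pinned environment,
# then compare against the binary the vendor actually distributes.
subprocess.run(["make", "reproducible-build"], check=True)

mine = sha256_file("build/output.bin")
theirs = sha256_file("vendor/output.bin")
print("verified" if mine == theirs else "MISMATCH: shipped binary != source")
```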
> It would be twisted logic to tell the user the only way they can have their software freedom is by asking the trusted computing infrastructure to verify it for them.
Nobody is saying that. If you want to trust your computer to do what you think it does, you don't want trusted computing; you probably want reproducible builds, trusted boot etc. But trusted computing also does not inherently prevent you from doing that. The two are orthogonal!
> If you want to trust your computer to do what you think it does, you don't want trusted computing ...
I suppose this depends on precisely what's meant by the term. Trusted computing based on a root of trust you control is incredibly useful: it provides guarantees to you about remote systems you own and operate. This can be realized at present using Secure Boot and a TPM, but more hardware support (e.g., VM screening or SGX based on your own keys) would be nice.
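For instance, here's a minimal sketch of the verification side of owner-controlled attestation, assuming a hypothetical key you provisioned yourself and an HMAC standing in for the TPM's signed quote:

```python
import hashlib
import hmac

# Hypothetical attestation key YOU provisioned into the remote machine,
# plus the PCR digest of its known-good boot state.
OWNER_KEY = b"key-installed-at-provisioning-time"
EXPECTED_PCRS = hashlib.sha256(b"known-good boot measurements").hexdigest()

def verify_quote(pcr_digest: str, mac: bytes, nonce: bytes) -> bool:
    # The machine returns MAC(key, nonce || PCR digest): the nonce proves
    # freshness, the digest proves it booted the software you expect.
    expected = hmac.new(OWNER_KEY, nonce + pcr_digest.encode(), "sha256").digest()
    return hmac.compare_digest(mac, expected) and pcr_digest == EXPECTED_PCRS

nonce = b"fresh-random-challenge"
mac = hmac.new(OWNER_KEY, nonce + EXPECTED_PCRS.encode(), "sha256").digest()
print(verify_quote(EXPECTED_PCRS, mac, nonce))  # True: known-good state
```

The same machinery that lets a vendor distrust you can, with your own keys, let you distrust a datacenter.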
You seem to be using trusted computing to refer only to hardware that works against the owner. Instead, I've always thought of it as referring to a device that is capable of providing various guarantees about the code it executes - it just happens that current implementations are primarily designed to provide those guarantees to third parties instead of the owner.
> You seem to be using trusted computing to refer only to hardware that works against the owner. Instead, I've always thought of it as referring to a device that is capable of providing various guarantees about the code it executes - it just happens that current implementations are primarily designed to provide those guarantees to third parties instead of the owner.
This is also my take on it. As with many issues in tech, it’s not about the tech itself, but rather who the tech is working for.