Pluton is not currently a threat to software freedom (mjg59.dreamwidth.org)
143 points by foodstances on Jan 9, 2022 | 204 comments



> Remote attestation has been possible since TPMs started shipping over two decades ago.

The difference now is that Microsoft are saying they will only support machines which have these TPMs, and therefore they can credibly argue in a few years that the only secure PCs (and thus the only PCs that ISPs should allow online) are ones which can produce a remote attestation to prove they are running the latest OS updates (from an OS vendor that is approved by the government).

> If Microsoft wanted to prevent users from being able to run arbitrary applications, they could just ship an update to Windows that enforced signing requirements.

The trap hasn't been sprung yet, but those are the teeth, yes. Then say goodbye to Tor, E2E encrypted messengers, unapproved VPN apps, and bittorrent clients that don't check a Content ID database.


> The difference now is that Microsoft are saying they will only support machines which have these TPMs

That's a reason to worry about Windows 11 requiring a TPM, rather than a reason to worry about Pluton specifically. But even so, I don't think it's an especially realistic one - outside extremely constrained setups, it's very hard to make remote attestation work in a way that gives you any meaningful guarantees (eg, simply forward the challenge on to a machine that is running the "approved" OS).

> The trap hasn't been sprung yet, but those are the teeth, yes.

Again, something they could just do today while zero people have Pluton.

If Microsoft want to lock down the entire x86 market, they can do that now. They don't need to wait years for everyone to shift to new hardware that has Pluton in it.


> it's very hard to make remote attestation work in a way that gives you any meaningful guarantees (eg, simply forward the challenge on to a machine that is running the "approved" OS).

I was imagining something like that would be possible (for people with enough tech knowledge), but it's good to have it confirmed, thank you. There would presumably be a cat-and-mouse game of the "approved" OS trying to detect if it was being co-opted into such a scheme.
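
(To make the forwarding idea concrete, here's a minimal sketch, assuming a made-up challenge/response exchange over plain TCP: the "unapproved" machine never answers the challenge itself, it just relays it to an approved box and returns that box's quote. Hostnames and ports are hypothetical.)

    # Hypothetical sketch of the "forward the challenge" workaround described above.
    # The machine being checked relays the verifier's nonce to a second, "approved"
    # machine and hands back that machine's signed quote. Hosts/ports are placeholders,
    # not a real attestation protocol.
    import socket

    APPROVED_BOX = ("approved-machine.local", 9000)  # runs the blessed OS and real TPM

    def relay_attestation(challenge: bytes) -> bytes:
        """Send the verifier's challenge to the approved machine and return its quote."""
        with socket.create_connection(APPROVED_BOX) as s:
            s.sendall(challenge)
            s.shutdown(socket.SHUT_WR)  # signal end of challenge
            quote = b""
            while chunk := s.recv(4096):
                quote += chunk
            return quote

    def serve(port: int = 9001) -> None:
        """On the 'unapproved' machine: accept challenges, answer with relayed quotes."""
        with socket.create_server(("0.0.0.0", port)) as srv:
            while True:
                conn, _ = srv.accept()
                with conn:
                    challenge = conn.recv(4096)
                    conn.sendall(relay_attestation(challenge))

    if __name__ == "__main__":
        serve()

The countermeasure usually discussed is binding the attestation to the network channel that carries it, which is where the cat-and-mouse game you describe would play out.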

> They don't need to wait years for everyone to shift to new hardware that has Pluton in it.

As you say, I'm more worried about Windows 11 than Pluton, but presumably the "importance" of Pluton is part of Microsoft's excuse for not supporting non-TPM hardware any more. Once Windows 10 is out of security support (for home users at least), it will be easier for Microsoft to claim that non-TPM Windows devices are de facto insecure.


> it will be easier for Microsoft to claim that non-TPM Windows devices are de facto insecure.

Which only means that programs can choose to not service devices without TPM - things like Netflix/streaming services and online competitive games - although it might take 10 years given the number of people who will be unable to upgrade to 11 or to upgrade their computer to one with a TPM at all. With computers becoming more and more about browsing the web, and especially with the chip shortage, people aren't upgrading their hardware as often.


> Which only means that programs can choose to not service devices without TPM

But those "programs" could include "an online check made by your ISP, mandated by your government". If your computer doesn't pass the check, it won't be allowed online. What good is a phone call if you're unable to speak?

> it might take 10 years

I think more like 5, although the government might start slowly, like only preventing non-TPM devices from accessing "sensitive" online services, e.g. banks or anything that requires a payment.

The next step would be connecting the "online check" with a biometric ID, enforced by the device. Every time you unlock your device, it would request from the government a random ID that is included in every packet sent, and those IDs would be tied to your legal identity in a government database.

Letting someone else use your device would be similar to letting someone else use your car, in that you are responsible for whatever is done while you are logged in, unless you report it stolen.


> I think more like 5, although the government might start slowly, like only preventing non-TPM devices from accessing "sensitive" online services, e.g. banks or anything that requires a payment.

This has already happened for mobile banking apps on Android: many of them already use SafetyNet with hardware attestation. The only reason they don't all require hardware attestation yet is that not all older Android phones support it, which is exactly the situation Microsoft wants to change for the TPM. And increasingly, other apps seem to be starting to use root detection and SafetyNet for frivolous use cases such as McDonald's.
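
(For anyone curious what the server side of that looks like: a rough sketch of the kind of check a banking backend might run on the JWS blob the SafetyNet Attestation API returns. Field names are from that API; validating the JWS signature and its certificate chain is deliberately omitted here, and a real verifier must do it.)

    # Rough sketch of a server-side SafetyNet check, assuming the Android app has
    # already obtained an attestation JWS and uploaded it. A real verifier must also
    # check the JWS signature and its certificate chain; that part is omitted.
    import base64
    import json

    def decode_jws_payload(jws: str) -> dict:
        """Decode the middle (payload) segment of a JWS without verifying it."""
        payload_b64 = jws.split(".")[1]
        payload_b64 += "=" * (-len(payload_b64) % 4)  # restore stripped base64 padding
        return json.loads(base64.urlsafe_b64decode(payload_b64))

    def device_looks_untampered(jws: str, expected_nonce: str, package: str) -> bool:
        claims = decode_jws_payload(jws)
        return (
            claims.get("nonce") == expected_nonce        # ties the result to this session
            and claims.get("apkPackageName") == package  # result is for our app
            and claims.get("basicIntegrity") is True     # not obviously rooted/emulated
            and claims.get("ctsProfileMatch") is True    # certified, unmodified OS build
        )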


Sadly, it is true. I had it happen with my local bank's app. It is annoying, but the future is there for everyone to see. I am only able to vote with my feet and go to the branch in person.

Best we can do is start educating people now.


I'm curious what you think we're losing here? I mean, I can't remote order with McDonalds on my vintage Windows 95 PC.

To me, the platforms are simply improving security and slowly jettisoning older systems which cause security issues. We don't allow TLS 1.1 for a reason.


None of these have any requirement on a TPM specification. A government can already do as much invasive monitoring as it wants, either by forcing citizens to install MITM root CAs[0], generally requiring invasive identity checks when people sign in, or just limiting which privacy-invasive devices are even allowed to be sold at all. Banks can already go "lol no web frontend for you, go use our mobile app". And neither Visa/Mastercard nor their bank partners are going to allow restrictions so strict that they'd reduce the number of impulse purchases people can make, and you forget that every online payment is already hard-tied to your identity via your bank / credit accounts.

> in that you are responsible for whatever is done while you are logged in, unless you report it stolen.

This is only really true for insurance purposes - for stuff like red light cams, the tickets are invalid if you weren't the one driving (which is why some newer ones snap temporary pictures of people in the driver seat in case they end up running the light).

0: https://news.ycombinator.com/item?id=20472179


Right now it is politically unthinkable for Western governments to demand people install MITM root CAs, and technically infeasible that they would re-encrypt every TLS connection (and check for encryption being layered inside the decrypted streams). (When Kazakhstan tried, they also faced resistance from software makers, but I wonder what would happen if those software makers happened to be based in the same country that was implementing this policy.)

It is much more thinkable, however, especially in 5 years, perhaps after a (false flag?) cyber-attack takes down an electricity grid in some country, that a government could prevent "insecure"/"unpatched" devices from going online. This wouldn't require any personal information to be shared with the government (at least, no more than current ISP data retention laws already require), and Microsoft would be all too happy to build support for this right into Windows for free, as it would make it harder for "unapproved" operating systems to be used in that country.

> the tickets are invalid if you weren't the one driving

I guess what I meant was "the government will punish you unless you can prove someone else was using your device" so you won't be able to escape prosecution by sharing a device and saying "I can't remember who was using it at that time". Similarly, I believe in some jurisdictions a car owner is expected to know who was using their car at any given point in the past so that speeding tickets can be assigned to the correct person.*

Anyway, I can imagine the law going further and matching the dystopian vision of "The Right to Read", which includes this passage: "Of course, if the school ever found out that he had given Lissa his own password, it would be curtains for both of them as students, regardless of what she had used it for. School policy was that any interference with their means of monitoring students' computer use was grounds for disciplinary action. It didn't matter whether you did anything harmful — the offense was making it hard for the administrators to check on you."

* "It is also illegal [in the UK] to decline to provide the driver's details, whether it was you or another person." https://news.jardinemotors.co.uk/how-to/speeding-fine-faqs-w...


> Right now it is politically unthinkable for Western governments to demand people

Well now you hit the nail on the head. The issue hasn't been technical for a long time but rather one of "image". People have to still believe they have freedoms and whatever curtails them is for their own good. As long as you're given a good reason to submit to extreme measures (9/11 made the Patriot Act acceptable), or they happen slowly enough that you can't really see a boundary being crossed, these measures will eventually be put in place. And nobody will see a huge difference because they won't remember a time when it was hugely different.


> I think more like 5 [years]

Please drop the hyperbole, there is already enough of an impedance mismatch here. We're talking about slow moving ecosystems, and social normalizing of new technological restrictions. The current locked boot mess has taken over twenty years to develop since the Trusted Computing Platform Alliance was founded. The pace of change accelerates, but five years won't even make remote attestation available in browsers. I'd say it's at least 15 years until a significant number of websites would require it. Using it for network access control would take further technological development (probably on the corporate side), and then some kind of crisis to drive ISPs/governments to demand consumer implementation. It's worrying because it's a step on the slow monotonic authoritarian march, not because the sky is falling right now.


> Please drop the hyperbole

What if I had told you 5 years ago that in 2020, people in Western countries would be forbidden from leaving their homes without permission, and would have to show a digital pass on their phone to be allowed to go into shops?

The technology for remote attestation already exists, and it would take less than a year to roll out checks for it across all ISPs in a country. As you say, it would need some sort of crisis for a government to demand it, but an ill-intentioned government with an offensive cyber-war capability could manufacture that crisis tomorrow if it wanted.

We already have authoritarian Western nations like Poland allegedly using cyber-weapons against opposition politicians[0]. I don't think that claiming existing technology could be used in 5 years is a claim that "the sky is falling right now". The main thing holding back such a scheme is that it would force a lot of legitimate users offline, which is why I think 5 years should be enough time to make those affected users a small enough minority that a government could ignore them.

[0] https://www.euronews.com/2022/01/05/polish-watergate-tension...


> which is why I think 5 years should be enough time to make those affected users a small enough minority that a government could ignore them.

When predictions of our future read like dystopian science fiction such as Stallman's "Right to Read", 1984, etc., the only course of action is to educate the masses and strongly oppose any further progression down that path.


> What if I had told you 5 years ago that in 2020, people in Western countries would be forbidden from leaving their homes without permission, and would have to show a digital pass on their phone to be allowed to go into shops?

Well I'm coming from a USian perspective, so that prediction wouldn't have come true. But really, trying to contain contagious disease is a societal response with longstanding precedent, and implementing a digital ID like that is technologically easy (at minimum, it's just showing fields from a database). If you had predicted these actions because of a pandemic, it would have been plausible.

Meanwhile if you had predicted similar digital passes in 2000 it would not have been immediately plausible because very few people were carrying around a computer in their pocket. That had to be developed first by private industry, wanted by the consumer market, and the idea of having "apps" for various facets of your life socially normalized, before it could come to pass.

> The technology for remote attestation already exists

What do you mean by technology? Yes, the concept exists, and yes, some implementations exist, and yes, some are in the hands of consumers. But I wouldn't say the "technology exists" for general web browsing, in the sense of it being available for a single actor, even one controlling both ends, to just decide to start using remote attestation.

> The main thing holding back such a scheme is that it would force a lot of legitimate users offline

Yes, that is one aspect. Another aspect is the lack of implementations for companies to use to start demanding its use. Another is that there has been no application of it to network access control. Yet another is that governments do not realize they have this lever to pull until the trail is blazed by industry. In Y2K authoritarian governments went "find a way to stop bad communications on the Internet" and their underlings went "uhh, pull the plug?". In Y2020 there are many companies selling carrier-scale TLS MITM and other DPI gear.

All of these things take time. As I said, it has been over 20 years since the TCPA was founded, and you can see where we are. You can directly translate your arguments here to arguments about secure boot in 2000, and yet governments in 2005 were not trying to prohibit computers without secure boot. We had to take a long roundabout trip through a new device type of phones/tablets (for RA this could be security keys) for it to become palatable.

Only now that the market has gotten there on its own would it be plausible for a government to prohibit any device that isn't locked down with secure boot. Even so, it wouldn't be currently advantageous for the more totalitarian countries to mandate this, since they do not fully control the device's manufacturers. That is another progression that will take time before it's ready to click into place.


Personally I agree about the timeline, and hence I find it more worrisome, as a more abrupt change would cause uproar and resistance, while a 20-year-long rollout won't be noticed by most (boiled frog effect).


Exactly! Long before remote attestation is used for half of the things we're talking about here (eg prohibiting Adblock), its functionality will have been normalized for other seemingly-necessary uses. People will be wanting to buy devices with RA, similar to how they currently want to buy computers with secure boot and HDCP so they can watch better quality Netflix. And that's the scary part.

It also makes it harder to spread awareness of the threat, since the really concerning implications sound farfetched. Not thinking about it too hard, why would anybody buy a computer that restricts what they can do? Well, the Market finds a way.


>I think more like 5, although the government might start slowly, like only preventing non-TPM devices from accessing "sensitive" online services, e.g. banks or anything that requires a payment.

if that occurs, is that really of microsoft's doing, or of the government and all the other companies that are complicit? I can plausibly imagine a future where microsoft stays its course (ie. it doesn't lock down the x86 platform), but companies still force you to use locked down devices by forcing you to use mobile apps to do online banking. You already sort of see this with messaging apps, where a few (eg. signal) are mobile-only.


> You already sort of see this with messaging apps, where a few (eg. signal) are mobile-only.

Signal has clients for Windows, Mac, and GNU/Linux.


> To use the Signal desktop app, Signal must first be installed on your phone.


I have CalyxOS installed on an old Pixel 2 for "mobile app required" stuff. Signal can verify your account via a voice call to a landline phone number, and I have a cheap VoIP number that worked fine for that.

But the general point is taken. For example my HOA requires payment via Zelle, and my bank requires that I use their mobile app to make Zelle payments. I can still run their app on CalyxOS just fine via microG, but I feel like microG is something Google would find a way to shut down if it were to hit some critical mass of adoption.


Look at Google SafetyNet and you will get a clear idea of what is happening. If you want to use some streaming apps, etc., they will make sure you run an unmodified and up-to-date OS.

On a side note: Microsoft has already started patronising users, e.g. by blocking access to security tokens from non-elevated processes. I hate it when my OS starts messing with my freedom to develop something on top of it. It all comes in the name of security but will in the end affect freedom.


That's a bullshit scenario.

There are way more Android and Apple devices online than PCs. No ISP would do anything for PCs alone, and if they did, I could easily turn my PC into an "Android tablet". So Microsoft would have to get Google and Apple behind the same plan, then phase out all existing devices and force all ISPs to implement this. This would cause huge public outrage, because the first states to follow would be China et al., where remote attestation would force you to install the latest government, ahem, upgrade to your device. Of course the US government and various European nations would very much like to follow suit, but they would be slower than China and would then look like they were following the authoritarian path a bit too closely.

Remote attestation will be sold to streaming providers so they can extend their DRM to cover unpatched systems. Maybe multiplayer games will follow. This ain't gonna happen at the ISP level.


That scenario is already reality on Android, where many apps and services will not run unless you use a blessed OS and OS version, verified through remote attestation.


can you name some of those apps?


Search for "SafetyNet"


Basically Google Pay and other NFC payment providers. Haven't encountered SafetyNet requirements anywhere else.


Even regular banking apps use that nowadays. And there's some amount of this also being used on streaming apps obviously.


The McDonald's app. Pokemon Go.


Let's be realistic here. The real competition to Microsoft, Chrome OS, already has a feature to prevent you from delaying updates. It's not a bug or a risk, it's a feature. And it does not require any sort of TPM to be enforced. Microsoft could force all its users to run the latest version, and to run only signed executables today. What Pluton does is it allows those two things to happen more securely.


I don't think this is plausible (a government mandate of remote attestation for any kind of Internet access), but if it happens, then I'll just add the smallest and cheapest PC possible (think Atomic Pi) with this remote attestation hardware capability (Pluton/TPM/whatever) to a separate VLAN on my home network (so it can't access any other host on the LAN side of the router) and forget about the little thing until it fails, e.g. for the next 15 years or so. I wouldn't trust this device with my data, I wouldn't run any meaningful applications on it, heck, I won't even attach a monitor or human input devices to the damn thing.


As I explain[0] in response to a sibling comment, sadly it won't be enough (eventually) to have just one locked-down device on your home network; they will all have to be individually locked down to access the internet.

[0] https://news.ycombinator.com/item?id=29866732


No problem for me, as I'm on the verge of saying goodbye to Windows anyway. But I'm pretty sure it would be a problem for most of the people using Windows.


Once the vast majority of devices are remote attestation capable (Windows 11 requiring TPM will accelerate this trend), content providers may refuse to serve you unless you attest that you are running a walled-garden OS that won't allow you to ad-block, capture content, run any sort of proxy server, etc.

At some point, even ISPs might require remote attestation to allow you to connect your device to the internet. The IETF is already working on standards for the attestation of network devices[0][1].

I speculate that there will temporarily be exploits allowing you to fool the attestation, e.g. by redirecting it to another device as the author suggests (perhaps similarly to iOS jailbreaking, which is not available at this time for the newest devices/iOS version[2]), but the end effect will be that the vast majority of people will be effectively confined to a walled garden, and even determined hobbyists will only be able to use their general-computation-capable devices to access all content (or even connect them to the internet) some of the time.

[0] https://archive.fo/uQULm

[1] https://datatracker.ietf.org/doc/draft-ietf-rats-tpm-based-n...

[2] https://en.wikipedia.org/w/index.php?title=IOS_jailbreaking&...


Where did the ISP idea come from?!

How can ISPs do anything close to this when they're not even concerned with how many devices you have? ISPs just do not connect individual end-user "devices"; they connect subnets.

> content providers may refuse to serve you

Providers of Hollywood-copyright-mafia content like Netflix have already been demanding hardware DRM (at least for high resolutions) for years.

Providers of public ad-supported content like YouTube care about maximizing views above everything. They'll happily serve a 4K stream to a Windows 98 machine if it can connect with modern TLS somehow. YouTube isn't even trying to fight youtube-dl all that much; there was an attempt at throttling recently, but it was very quickly defeated. Heck, YouTube Music on the web does not use DRM at all, and that's all music-copyright-mafia content there.


> How can ISPs do anything close to this when they're not even concerned with how many devices you have?

Unfortunately that's not guaranteed to always be the case. The Trusted Computing Group already has ways for network operators to answer "Who and what’s on my network?"[0], and it's possible to set up an IPsec VPN between your device and the ISP where the key is only known to the TPM on your device.[1]

Of course the user could try to proxy requests from an "untrusted" machine to a "trusted" one, and piggyback the connection, but I imagine that applications which allow this won't be allowed in "secure" app stores, and "secure" operating systems would in any case firewall off packets coming from "untrusted" machines in the first place.

[0] https://trustedcomputinggroup.org/work-groups/trusted-networ...

[1] https://wiki.strongswan.org/projects/strongswan/wiki/Trusted...
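
(To illustrate the building block being referenced: a toy sketch of the admission decision in TPM-based network access control. It only shows a gateway comparing a device's reported PCR value against a "golden" measurement chain; real quotes are signed by the TPM's attestation key over the PCR digest plus a verifier-supplied nonce, which is omitted here, and the measurement strings are invented.)

    # Toy illustration of TPM-style measured boot plus a gateway admission check.
    # Measurement strings are invented; real quotes are signed and nonce-bound.
    import hashlib

    def extend(pcr: bytes, measurement: bytes) -> bytes:
        """PCR extend: new value = SHA-256(old value || measurement)."""
        return hashlib.sha256(pcr + measurement).digest()

    # Hypothetical golden chain the operator trusts (bootloader, kernel, policy).
    GOLDEN_PCR = extend(
        extend(extend(b"\x00" * 32, b"bootloader v1.2"), b"kernel 5.15 signed"),
        b"lockdown policy on",
    )

    def admit_device(reported_pcr: bytes) -> bool:
        """Grant network access only if the reported measurement chain matches exactly."""
        return reported_pcr == GOLDEN_PCR

Anything unapproved in the boot chain yields a different PCR value and the device is refused, which is exactly the lock-in concern above.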


The "not currently" in the title is very important foreshadowing.

This is merely another battle in the war on general-purpose computing.

They will build their kingdom piece by piece, under innocuous-sounding labels such as "safety" and "security".

Each of these pieces may look innocuous and perhaps even helpful, but don't lose sight of their ultimate goal.

Once all the pieces are in place to achieve total lockdown, there will be no going back.

Articles like this that say "it hasn't happened yet" and try to spin a positive narrative are not showing the big picture. Arguably, Big Tech does not want you to see the big picture.


100% this. They call it fear mongering and paranoia for now because it's only something that could happen.

There used to be debates about whether face recognition should be allowed at all. In 2017 an executive order rolled it out at airports, where it's now used by the CBP and some airlines. The TSA is now considering using it. The debates are over, it's happening and there are now articles about how convenient it is to board without a boarding pass. The definition of normal continues to shift slowly towards universal surveillance. Every little increment is enabled by a few years of the previous increment being normalized and a morsel of security or convenience.


Exactly - I remember the warnings about how TPMs would eventually get normalized and how Windows would require one to run. Well, we've arrived at that point!

Soon even buying a PC without a TPM will become very hard - if we're not already at that point? (What are our options these days?)


If you are interested in this topic (what OSes should do today), I can recommend this talk: https://www.usenix.org/conference/osdi21/presentation/fri-ke...

A bit long, but I didn't get bored.


Honestly I can't imagine any group of companies in the tech space being more resourceful than 10,000 neglected teenagers with nothing but a computer and a bad attitude. Especially after the former tells the latter that they can only do "approved" things with their computer.


Do you really think they'll be able to break the strong crypto which will ultimately be used to "secure" this?

Maybe they could exploit a buffer overflow or other such bug, but if our opponents are so keen on adopting "secure languages", that path to freedom is going to close too.

When governments were scared that encryption was going to be used against them and wanted to ban it, we should've realised that the same situation could apply to us. I'm not at all arguing in favour of such bans, but the underlying message was just as applicable.


I think tech companies as a general rule have proven unreliable at understanding, much less implementing, "security" in software or hardware. It's fair to say the landscape is changing, but human frailty is not. If there's one idiom I've been reminded of most often while scrolling through security bulletins, it's "where there's a will, there's a way." We may not have the same class of problem we had before, but I'll still likely be scrolling through security bulletins a decade from now.

Also consider: the fundamental parts of a computer are still analog. Hardware bypasses, 3d-printed micro-circuitry, modified components or distributables, who knows? In my estimation, the cat and mouse game will continue for quite some time.


The fact that there is no "user override"[0] feature in any of these security processors is blatantly obvious evidence that they are designed to control and restrict first and foremost. I have read mjg59's other posts on the topic and have no reason to believe he is arguing in bad faith, but I'm still not convinced one bit.

[0]: https://www.eff.org/wp/trusted-computing-promise-and-risk


I wouldn't call it bad faith, but more like furthering the industry narrative.

Unfortunately a lot of intelligent individuals are perfectly content to help the corporations and governments tighten the nooses on everyone, including themselves, in return for $$$. They've convinced themselves that they are doing good.


The fearmongering about Pluton feels very similar to the criticism that was levied against UEFI Secure Boot when it was being debuted. In the end, x86 systems didn't become any more locked down.

I predict that this will blow over, and won't be a big deal in a few years time once FOSS drivers for what is effectively just a new breed of TPM are released.

If in five years, it turns out I was wrong, I'll eat my hat. Although defining "my hat" by then might be difficult, as it'll probably be subscription based.


Some x86 systems weren't completely locked down, but similar systems successfully lock down millions of phones, tablets and console devices (which are x86 systems these days).

The trend for security in desktop computing that's pushed by these large companies is to, over time, approach similar levels of lock down that mobile devices currently have. Both Windows and macOS are approaching the iOS security model that depends on manufacturers blessing what software can run on their products, and banning software they don't want users to run.

For example, with Defender on Windows and Gatekeeper on macOS, developers need to buy certificates from Microsoft and Apple's partners in order to distribute and run their software on users' desktop computers. If developers want their software to run on Windows or macOS, they need to remain in good standing with Microsoft or Apple. If Microsoft or Apple decides they don't like you or your app, all they need to do is to revoke your signing certificate, and Defender and Gatekeeper won't let your software run on Windows or macOS. That, or they can choose to no longer renew your certificates after they expire.
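
(The control point being described is conceptually just a lookup before launch. A hypothetical sketch of that shape, not how Gatekeeper or Defender are actually implemented; the identities and lists are invented.)

    # Hypothetical launch gate: the OS vendor pushes down a revocation list, and
    # anything signed by a revoked developer identity (or matching a blocked hash)
    # is refused before it runs. Placeholder values throughout.
    import hashlib
    from pathlib import Path

    REVOKED_DEVELOPER_IDS = {"DEV-TEAM-EXAMPLE"}   # pushed down by the OS vendor
    REVOKED_BINARY_HASHES: set = set()             # per-binary blocklist, also vendor-pushed

    def developer_id_of(binary: Path) -> str:
        """Placeholder for extracting the signing identity embedded in the binary."""
        return "DEV-TEAM-EXAMPLE"

    def allowed_to_run(binary: Path) -> bool:
        digest = hashlib.sha256(binary.read_bytes()).hexdigest()
        if digest in REVOKED_BINARY_HASHES:
            return False
        return developer_id_of(binary) not in REVOKED_DEVELOPER_IDS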


> Some x86 systems weren't completely locked down, but similar systems successfully lock down millions of phones, tablets and console devices.

so shouldn't we be protesting against the systems that are locked down, instead of protesting against largely non-problematic implementations? For instance, with secureboot you can load your own keys, and the TPM isn't some sort of coprocessor that has access to your entire system.

>If Microsoft or Apple decides they don't like you or your app, all they need to do is to revoke your signing certificate, and Defender and Gatekeeper won't let your software run on Windows or macOS.

I'm not sure about Gatekeeper, but at least on Windows, SmartScreen can be disabled. I understand how having a gatekeeper sucks, but I also understand the problem of malicious software, which gatekeeping partially mitigates. In the end, the fact that you can disable it makes it a non-issue for me.


It is not a non-issue, because 95% of people will not disable it. This means that if Microsoft asks some company to make changes to their program, they will have a lot of leverage behind that ask. Even if you personally disable the gatekeeping, you will be affected indirectly, as the market for non-compliant programs will be unsustainable. Everything you run will be Microsoft-compliant, outside maybe one or two hyper-niche things.

This is what Android has taught us.


Except there are a ton of people (as in millions of them) who have SmartScreen disabled because they're using a non-Microsoft antivirus program. So no, this is a non-issue.

Also, smartscreen is not a naive block of unsigned code. Code blocking is reputation based, and people disabling smartscreen and running a binary contributes to that reputation. Which means that people like gp are actively helping by continuing to use Windows and running safe-but-unsigned apps. So, to reiterate, not an issue.


> Both Windows and macOS are approaching the iOS security model that depends on manufacturers blessing what software can run on their products, and banning software they don't want users to run.

That's been said for years, and hasn't held true. I can boot a Linux kernel on my M1 MacBook. Apple could easily have locked it down in exactly the same manner as their iOS/iPadOS devices, yet chose not to. I can still install whatever I want. The default state of the system has a locked down root volume. And the default behaviour is not to install untrusted software, unless you jump through a couple of hoops. Those are good defaults. Those are damn good defaults for most people. If you're running untrusted code in your web browser all day long, you want your base system to be as unmalleable as possible, and as untrusting as possible of third-party code. But I can still work around that with almost no hassle. Homebrew still installs software as easily as it used to nearly a decade ago; it just might need the occasional --no-quarantine flag for unsigned software.

Even recently they appear to have actively assisted in the running of non-macOS operating systems on their hardware: removing the requirement for kernel images to be in Mach-O format[1].

[1]: https://twitter.com/marcan42/status/1471799568807636994


> > Both Windows and macOS are approaching the iOS security model that depends on manufacturers blessing what software can run on their products, and banning software they don't want users to run.

> That's been said for years, and hasn't held true.

It certainly has. Unsigned binaries were recently deprecated entirely on M1 Macs. Microsoft even released versions of the Surface that can only run Windows and only run apps blessed by Microsoft. With each iteration on these products, the screws are tightened a bit more.

Software freedom is not just about being able to run Linux. Most Mac users buy Macs because of macOS and its integrations, running Linux doesn't help them out. Software freedom on macOS definitely does, though. As it stands, that freedom has been chipped away at with new releases of Apple's software and hardware.

For example, I'm the author of several open source utilities for macOS. Users had no problem using the utilities a few years ago, but because they're unsigned or not Notarized, macOS tricks users into thinking that they're either broken or malicious. Even self-signing the apps has macOS treating them as if they're radioactive. Users don't understand the scary signing and certificate alerts, so they end up thinking they've downloaded malware. The solution to this is to pay Apple $100 every year, and then regularly have them scan and approve of the apps via Notarization. That's antithetical to software freedom. Regular users who want to use un-Notarized software are left frightened and without having their needs met. Software freedom is important for everyone, not just developers and power users.


Heck, the amount of work it takes just to install gdb and debug another process on macOS is insane. There are no clear instructions on Apple's website: the best thing to do is follow a Stack Overflow post with something like 14 steps for generating the right kind of self-signed cert, acknowledge all the warning messages, and then follow the various comments for OS-version-specific alterations. It took me ages.


> Unsigned binaries were recently deprecated entirely on M1 Macs.

Except bins signed by self-signed certs are still treated basically the same as unsigned binaries were before.


You don't even need a true signature. An ad-hoc one (which can be linker-generated and has no cryptographic key attached) is considered valid.


And in the next N releases of macOS those features will be quietly removed, since 99% of users are running properly notarized binaries anyway...


That’s certainly an option. But absolutely nothing points to it being the actual thing that will happen other than wild baseless speculation.


Why would that happen in the next N releases, when it hasn't happened in the previous M releases? What's changed?


I think there's some perception by people like this that there's some massive goal of restricting users, and each change in the security policy is an incremental step.

But it doesn't really make sense:

- All the technical work to restrict users could certainly be done in one release: it's not that hard.

- As to market acceptance, I don't think any of the changes re: binary signing are "getting users used to" being restricted.

So, requiring signed binaries doesn't appreciably make the technical or market challenges of restricting unapproved apps easier.


From my post:

> Even self-signing the apps has macOS treating them as if they're radioactive.


It's reasonable to know when an app isn't properly signed, and to have to do the right-click "Open" for the first launch.

I appreciate that I can both benefit from PKI attestation of apps (for a small degree of protection against malware), and I can override it and run unsigned stuff.


>That's antithetical to software freedom. Regular users who want to use un-Notarized software are left frightened and without having their needs met.

It's easy to argue "give me software freedom or give me death!" if you're a technically competent user who probably won't fall for a trojan, but what about everyone else? Don't you think there's a reasonable argument for locking down systems to improve security? To be clear, I'm not arguing for sacrificing software freedom wholesale for security, only in default configurations.


The argument doesn't hold. It uses the 99.9% of users to crack down on the 0.1% (the devs) who have the ability to redefine what software is. In doing so, the big companies make sure they have the ability to rule their ecosystem. Using the argument of security, I'm sure they'll get the go-ahead from governments.

So why would a company want total control over its ecosystem? Because governments don't want social unrest. So if you can ensure your platform is free of "terrorists", then you can negotiate with governments from a stronger position. For example, if you're secure, you can position yourself as a reliable player in banking, e-health, etc. That is, you gain a very strong position to shape society in ways you're interested in. Don't forget that big companies have the power to do that, and that those who command them are not required to be benevolent. They are private companies, so there's no oversight of which interests they serve first.

It's not all doom and gloom though. As computers get deeper into our lives, more and more governments and parliaments will become aware of the issue, and there will be a place to fight for our rights. It's already the case.

The only thing that matters is: a computer is a general-purpose machine and must stay a "general purpose" machine.


> The only thing that matters is: a computer is a general-purpose machine and must stay a "general purpose" machine.

Fully agreed. This is the most important point. No company or vendor should prevent me from running the software I want, in the way I want, be it modified for my own purposes or not.

Sure, if you only look at the security side, it may be more secure if you can only run approved software, but it is under no circumstances okay to reduce the freedom of a user on his/her private machine. (In a business setting it makes sense to only allow software approved by the IT department.)


>The argument doesn't hold. It uses the 99.9% of the users to crack down on the 0.1% (the devs) who have the ability to redefine what software is.

I'm not sure what the "crack down" is when you can disable it fairly easily.

>So why would a company want total control on its ecosystem ? Because government don't want social unrest.

You'd think that if they want to suppress uprisings, the mechanism they use to do so will be slightly more robust than a setting in the developer options.

>The only thing that matter is : a computer is a general purpose machine and must stay a "general purpose" machine.

How is this related to what we're talking about? What gatekeeper/smartscreen is doing is effectively operating a whitelist system. The platform itself is still open, and you could still do whatever you want before. What's more is that you can disable the system, so I'm not seeing what the issue is.


This rhetoric about evil """the government""" spying on you because of terrorists is at this point quite dated, or even stale.

I'm far more worried about companies locking things down under cover of a legitimate concern (security) but with malicious intent

than about being arrested after being mistaken for Osama bin Laden because I decided to grow a beard.


> Apple could easily have locked it down in exactly the same manner as their iOS/iPadOS devices

Yes. That is in fact the problem. They shouldn't have the ability at all. Given the ability, it will be done; it is only a question of when and why.

Corps have this ability already and are building in tech to make circumvention even more difficult. We are one update away as it is now.


How do you propose preventing that, outside of legal remedies? If they design the hardware it obviously can be done. Hell, with the right external chip you could do it with a MOS 6502.

And as for barring it legally, remember that there are valid uses for locked down systems. It can be a useful security barrier.


> > Both Windows and macOS are approaching the iOS security model that depends on manufacturers blessing what software can run on their products, and banning software they don't want users to run.

> That's been said for years, and hasn't held true.

https://news.ycombinator.com/item?id=25074959


However, if you jump through those hoops, you lose certain functionality, namely Apple Pay and the ability to run iOS apps on Mac.


> In the end, x86 systems didn't become any more locked down.

And non-x86 systems? Wasn't there a line of MS Surface devices where secure boot could not be disabled, and users were stuck with Windows? It feels careless to only care about x86, especially as other platforms proliferate.

In any case, lockdown is not the only threat that Trusted Computing presents. Remote attestation itself is dangerous. If we remove our x86 blinkers and look at the mobile world, we see it's already happening, with countless apps, including ones important to modern day life such as banking, refusing to run on rooted phones.

You may say, "Oh, I will use my x86 desktop system at home for Free Computing, and allow phones, consoles, tablets, surface devices, etc etc, to become locked down." Like the old free speech zones, this is a toothless freedom, tamed and neutered. The user-empowering Free Software you will write will have no users - they will be on locked devices.


> Wasn't there a line of MS Surface devices where secure boot could not be disabled, and users were stuck with Windows?

All Windows RT devices (32-bit Arm desktop Windows). Not only was Secure Boot locked down there, but apps also had to be signed by Microsoft.

64-bit Windows on Arm adopts the security policy of x86_64 Windows, which means that you can turn off Secure Boot on production hardware. (and run your regular apps too)


Your ARM smartphone and/or IoT device doesn't support UEFI or Secure Boot, yet it's still locked down and you can't flash third-party OSes. The problem is locked bootloaders, not UEFI or Secure Boot. Fearmongering over a largely non-problematic implementation (Secure Boot explicitly allows you to load your own keys) is exactly OP's point.


This sounds very much like "there are many ways to lock out users, why are you complaining about this specific method, when other platforms used a different one?"


No, it’s more like “why complain about a method that isn’t being used to lock down devices instead of ones that are actually being used for that purpose”.


We should be complaining when it happens, not that any of these methods exist - they're super useful to have in many applications, eg. access control door locks, keeping PKI HSMs locked down, etc.


While that's true with regard to some Surface devices, as I understand it ARM systems have only become more open and interoperable over the past few years, although this holds true a lot more for the server side than the desktop side.

The main issue these days is driver support. The PC platform was an anomaly in backwards compatibility, at least historically. I'm not arguing that it's going to be easy for FOSS. It's going to be an uphill battle, regardless of how locked down they are (and I'm just arguing that they won't be that locked down—see the recent M1 Macs for an example; Apple could easily have locked down those systems in exactly the same manner as iOS/iPadOS devices, but chose not to).


For arm: anything that runs Windows on Arm64 uses UEFI + ACPI, making stuff easier on that front.

Linux drivers for Qualcomm SoCs don't have extensive ACPI bindings at this point in time though, making the use of a separate devicetree necessary for full functionality. This will be mostly ironed out with time I suppose.


Didn't Linux developers say that Qualcomm's ACPI tables are a horrific Windows-specific mess that has close to zero standard PNP* things?


> Windows-specific mess that has close to zero standard PNP* things

Those are hardware dependent platform devices. Qualcomm didn’t have another option. (Nor do other manufacturers really)

On x86, a virtual PCIe bus abstraction is heavily used, which is not the case for those SoCs.

(And well, if Linux wants to boycott full support of their SoCs, their choice. They just can’t blame Qualcomm anymore at that point.)

Another thing of note is the use of a PEP (power management plug-in) in the OS instead of having power management done in AML. The ACPI spec allows a manufacturer to do this. It isn’t used only by Qualcomm, but is totally unsupported on Linux today.


Manufacturers have the option of producing standards-compliant goddamn hardware! Take PCIe: even if it's a buggy and quirky implementation, as long as it supports ECAM you can still expose a PNP0A08 and deal with quirks in firmware (hello Socionext/Marvell/NXP).

> PEP (power management plug-in) in the OS […] ACPI spec allows a manufacturer to do this

Doing management in AML is almost the whole point of ACPI. Microsoft pushing this PEP thing into the ACPI spec is bad. This is the "letter" of ACPI now, unfortunately, but it's very much against the original "spirit" of ACPI :/


> Manufacturers have the option of producing standards-compliant goddamn hardware

For PCIe indeed, but that’s not where the issues are most present. There’s no standard register interface for integrated GPUs, modems…

> but it's very much against the original "spirit" of ACPI

Yup, it’s what Device Tree does too however, shifting this to the OS.

Another downside is trying to have a good driver-less boot scenario when PEPs are used, so the system can get far enough for drivers to be installed. (N/A to Linux, which is hostile to out-of-tree drivers, but very much a concern on Windows.)


It's just really sad that Apple doesn't help us with drivers for their hardware. I highly doubt the majority would switch anyway, and assisting with info would take little effort on their part given that people are already doing the reverse engineering work.


I suspect it's a mix of legal difficulties in releasing the documentation, and a lack of incentive to write it in the first place.

The ideal scenario would be Apple pushing their hardware in the server space; that might create an internal incentive for apple to get Linux running decently (or at the very least make Darwin a new competitor in the datacenter).


The supported method is virtualization.


MS literally has to sign and approve the bootloaders from any distribution, or you basically risk your distribution not booting on a majority of x86 systems. And there is always the push by MS to make these bootloaders as restrictive as possible, to prevent the situation where you use one of them to boot some software that will break Windows' FDE. So as a result we end up with e.g. automatic lockdown mode in Linux when booted from a secure boot system.

How did x86 not become more locked down as a consequence of this?

You can disable all of it (on some devices only!) but the war is already lost: most people are not going to do it, so distros have to jump through these hoops.


Were I to grant what you say, it is /still/ a load of anti-social, dystopian shit flung against the wall for FOSS devs to scrape off (if ever they may), keeping them from doing something more advantageous.

You only condone the poisoning of the well because you take for granted the pro-socially minded developers willing to sacrifice their time and effort to draw clean water for you.

Think of where we'd be if we didn't need to run to stand still.


> Were I to grant what you say, it is /still/ a load of anti-social, dystopian shit flung against the wall for FOSS devs to scrape off (if ever they may), keeping them from doing something more advantageous.

> You only condone the poisoning of the well because you take for granted the pro-socially minded developers willing to sacrifice their time and effort to draw clean water for you.

If you're referring to my comment about drivers, then I'd like to remind you that a large amount of work done on the Linux kernel is paid, and isn't performed by volunteers.

And as for those that are volunteers, I don't take them for granted. I regularly donate to various FOSS projects. Related to this context, I'm currently a patreon supporter of marcan42's port of Linux to the M1 Mac, and have donated several hundred euro to OpenBSD over the past two years (not including donations from my hosting provider openbsd.amsterdam, which I'll plug here).


You seem to have not understood what I've said. Whatever good will you may extend, the attitude betrays the complacency of an alms-giver. I merely wish to point out how FOSS can be manipulated by such an attitude to provide a fig leaf -- in "a few years time" -- for the rent-seeking behavior I'd hope you condemn.

Let me also apologize for my manner. I'm inept at expressing what I think is important without alienating those that might be most receptive to it. Truly, I wish I could have said what I needed more gracefully. I didn't mean to give offense.


> In the end, x86 systems didn't become any more locked down.

Oh hell yes they did. Look at Intel Boot Guard and all the stuff around that.


>Look at Intel Boot Guard and all the stuff around that.

what am I looking for? It looks like you couldn't load third party/modified firmware with that enabled? I suppose it's strictly more locked down than being able to flash whatever firmware you want, but was there a sprawling scene of modified firmware around at that time? Or did everybody essentially run the stock firmware?


BIOS mods are not exactly common, but there are plenty of people doing it. Projects like coreboot are another example, and of course all the tools around removing as much of the ME as possible. Obviously the "fringe" gets slowly trimmed, and we should be looking out for those like we do canaries in a coal mine.


I'm not exactly sure how me_cleaner works, but AFAIK it still works even with Intel Boot Guard? I believe the way it works is that the Intel ME components are present in the BIOS as optional modules, and they can be removed without messing up the signature.


> In the end, x86 systems didn't become any more locked down.

I realize it was only introduced as of ~2012 and it's been 10 years, but I'm not sure we can draw a conclusion on this one just yet. Windows 11 took a huge leap in that direction so for all I know it might take another decade; it certainly doesn't look like they've given up on the idea of locking down the desktop just yet.


They attempted to lock down the boot process with 32-bit ARM, but backtracked with 64-bit ARM. If the intention was to eventually lock it down, why backtrack and open it back up? It's not as if Linux on ARM was a major selling point for their ARM devices.


I actually don't have knowledge of that. Do you know what the reason for the backtracking was?


Microsoft's 32-bit ARM devices were being sold as appliances, their 64-bit ones are being sold as general purpose computers.


To be quite honest, I'd be interested myself to find out. I can't see the motivation for changing it, other than they found it not to be beneficial. It seems like opening it up again would just create more problems in the long run if they intended to close it down eventually.

I might look into it, as it sounds like an interesting rabbit hole.


>it's been 10 years, but I'm not sure we can draw a conclusion on this one just yet.

seriously? 10 years is an eternity in tech, and if they really did lock down the desktop a few years from now with some new system (eg. Pluton), I'm not really sure that you could say "I told you so" or "TPM caused the platform to be more locked down". It'd be like predicting some sort of smallpox attack by China in 2010, then claiming you got it right in 2020 because of corona. The only scenario where you could plausibly blame TPM/UEFI is if OEMs suddenly decided to remove the ability to add user keys and/or disable Secure Boot.


> 10 years is an eternity in tech

Maybe if you're talking Java versions, but not in the desktop OS space. (Look at how many old machines are running Windows 7 or earlier right now, how long old OSes are officially supported, and how long they're still used afterward.) And besides, even if it was, this wouldn't mean anything. Look at the whole Default Browser fiasco that happened in the last few months. Microsoft went back to engaging in practices they had already settled with the Justice Department two decades ago.

Also look at how they finally made Windows 11 64-bit-only. And even now it still runs 32-bit programs, just the OS is 64-bit. It took two decades after 64-bit CPUs came out to get to this point.

They take their time and meander, and it takes a while. Possibly due to corporate sluggishness, possibly due to wanting to boil the frog slowly, possibly due to wanting to test the waters for a while... who knows why. But speed isn't the main criterion.


>Maybe if you're talking Java versions, but not in the desktop OS space. (Look at so many old machines running Windows 7 or earlier right now, and look at long old OSes are officially supported, and how long they're still used afterward.)

see: https://news.ycombinator.com/item?id=29860320

>They take their time and meander, and it takes a while. Possibly due to corporate sluggishness, possibly due to wanting to boil the frog slowly, possibly due to wanting to test the waters for a while... who knows why. But speed isn't the main criterion.

But in this case it's not really boiling the frog, because it's not really getting worse? All we know so far is that it's a TPM that's easier to update. I suppose this could be used to oppress users by patching jailbreaks faster, but the security benefits at least make it plausible that they're not doing it as some sort of plan to oppress users.


In fairness, I think they have a point. 10 years is usually an eternity in tech, but we're talking about a system that's still compatible with systems from the 80s; so I think that expecting things to occur on extended timelines isn't unreasonable.


How long computers can last isn't really relevant here. What's relevant is how fast new systems get developed, and whether a given system can be blamed for some action a few decades down the road. For instance, let's look at HTTPS. It was implemented to secure web traffic, but also plausibly can be used to lock down who can publish websites (eg. by refusing to issue certificates). However, in its current form it's not really problematic because there are many easy workarounds (eg. using http, adding an exception, etc.). Suppose 5 years from now all the browser/OS vendors go rogue and lock everything down. All web traffic must be conducted via HTTPS with a valid certificate, and all of the workarounds are removed/patched. Can you point to the introduction of HTTPS and say "see I told you! 1994 was the beginning of the end for the open web. We really should have opposed it when we had the chance!"?


>If in five years, it turns out I was wrong, I'll eat my hat. Although defining "my hat" by then might be difficult, as it'll probably be subscription based.

Wanna bet that by 2030 there will be at least one major commercial bank that enforces attestation on its e-banking features even on desktops?


> once FOSS drivers for what is effectively just a new breed of TPM are released.

I genuinely wonder if Microsoft will put any people on this for Linux. They purport to 'love it', but aside from a few Embrace, Extend, and Extinguish[0] strategies like Edge, WSL, VS Code, etc., I haven't seen anything that made me jump out of my chair in amazement.

Maybe they'll surprise me.

[0]: https://en.wikipedia.org/wiki/Embrace,_extend,_and_extinguis...


Pluton has been supported by Microsoft Linux for several years, and their Azure Sphere support contract promises Linux security updates for 10+ years: https://www.platformsecuritysummit.com/2019/speaker/seay/


> In the end, x86 systems didn't become any more locked down.

This is not the end. They'll keep pushing, as slowly as they need to, with Windows 11 being the next step. They didn't suddenly lose the incentive; they just met resistance.


True, but some of the responsibility lies with the users. If they can't be bothered to care, then maybe some pain is warranted.

In my particular case, I stopped upgrading Windows around 7. It was only last year that I decided to upgrade, and that was also the year I moved to Linux as my main driver. I am not an average user, but I am not a kernel contributor either. I am just a guy who wants to get some stuff done on a PC I own.

And that might be part of the issue. People need to feel the pain from the devices they have been sold so that they can learn why freedom and ownership is important.


> The fearmongering about Pluton

Part of the reason for this "fearmongering" (if it's fair to call it that) is that Microsoft has released little information about Pluton besides a press release. Plus, it's not like the fears are completely unfounded based on Microsoft's messaging; Microsoft's press release says Pluton is based on the Xbox[1] (and this paywalled article mentions the same thing[3]), and they've previously said the major goal of the Xbox security system is piracy prevention [2], i.e. DRM. However, I agree with the overall conclusion of the main article that it's probably not much worse than what already exists.

[1] https://blogs.windows.com/windowsexperience/2022/01/04/ces-2...

[2] https://www.platformsecuritysummit.com/2019/speaker/chen/

[3] https://ieeexplore.ieee.org/abstract/document/9354509


If they are not going to do it then they can just have their lawyers draft and issue a public, legally binding statement that they will not.

Since they are not going to do it anyway, they are no worse off, the customers get a legally binding guarantee resolving their concerns, and it provides just cause for the good actors in Microsoft management to head off or remove any elements besmirching Microsoft's reputation.

Sounds like all wins to me, and it is what any B2B contract with Microsoft would include (well, in the contract rather than publicly) if the customer wanted that guarantee, so it is not even a particularly novel legal request.


>to the criticism that was levied against UEFI Secure Boot when it was being debuted

...or the fearmongering from last year regarding TPM and Windows 11. People were going hysterical over the thought that the TPM might be used for DRM, not realizing that they're already running hardware that does exactly that (Intel SGX, AMD PSP).


I was more concerned about the TPM requirement purely because I don't have one in my desktop, which is otherwise perfectly capable of running Win11 Pro (I know because I've been running Win11 on it for months via the Insiders Program). Yes, the desktop is 8 or 9 years old now, but it's still a 6-core i7 with 64GB of RAM and a suitably fast SSD (not NVMe, though). To me, it reeks of planned obsolescence in the name of pushing Windows Hello, which I don't need.

I did look into what an upgrade to add a TPM would cost. I was looking at over $400 for a comparable motherboard that supports a TPM (without an actual TPM chip), but I'd also lose SATA channels I currently use. At the point of having to replace a motherboard, it starts looking attractive to do a full rebuild, but that's difficult with the current supply shortages and inflated costs.


Is your argument that things are bad, so who cares if they get worse?


Well in this case it's not really getting worse. The hardware you bought is already backdoored/locked down. It's like closing the stable door after the horse ran away.


So two (or twenty) vulnerabilities in your software isn't worse than one?


That's not a good comparison, because there are multiple bad guys wanting to hack into your computer, and more vulnerabilities mean a higher chance that at least one succeeds. For this, we can assume that the OEMs/Microsoft are on the same side, so the better analogy would be: having 20 NSA root CAs installed on my system isn't worse than only one, at least if my threat model is "the NSA hacking my communications".


Well my PC does NOT have any of those (AMD Bulldozer), so you can understand how I would be annoyed that this decision makes it ever less likely for me to be able to upgrade to another x86 in the future. (Thankfully, we're not in the nineties/oughties any more with computers becoming effectively obsolete in a year...)


> x86 systems didn't become any more locked down.

But ARM systems sure did. Remember the whole "OEMs are required to make their ARM Windows devices only trust Microsoft's signing key, and not let the end-user turn off Secure Boot or trust any other keys" scandal?


These 32-bit Surface RT etc. devices were a complete failure; this has nothing to do with the current Surface Pro X etc., which do allow everyone to easily turn Secure Boot off.


Knowing the history of MS, I wouldn't call it fearmongering but rather very reasonable concerns. They ended up not materializing as a problem, which is good. But they were very reasonable nevertheless.


AFAIK I can easily disable Secure Boot in the UEFI.

Is there an easy way to disable TPM / Intel IME / Intel SGX / AMD PSP ?

(I'm only aware that Dell can disable the Intel ME on request... but only if you're a company buying a large number of PCs?)


At least with the hardware I'm familiar with, you can turn off the TPM via the BIOS. IME/SGX/PSP, not so much.


> you can turn off the TPM via the BIOS

In theory you can. In practice, programs will refuse to run if you do this: https://www.techspot.com/news/91138-valorant-anti-cheat-syst...

That goes for Secure Boot too, btw.


Yeah, hence the normalization (or lack thereof) of those features being critically important to the discussion.


Characterizing informed discussion about the implications of technology as "fearmongering" or "hysteria" is essentially ignorant.

The specific functionality of remote attestation is so that a remote party can demand you prove what software you are running, and make it so that you cannot lie. Right now you're free to answer whatever you'd like, while running whatever actual software you choose, as long as you stick to the protocol. Protocols (especially well-defined open ones) are our traditional way of mediating between parties with mutually diverging interests. Remote attestation throws away such neutral mediation, making it so that the more powerful party can dictate what software the less powerful party is running.

One implication of a usable implementation of remote attestation is that a website could insist that you are running a certain OS, web browser, etc, and become unavailable to you otherwise. For example, banking websites have a clear path to doing this in the quest for their elusive "security". They already do similarly invasive things that alienate a small portion of users (eg complain about a device being "rooted", blocking VPN/datacenter IP ranges), and so it's a reasonable assumption that they'll adopt such technology for the same regressive goals.

And once it starts being a de facto requirement for users to have such functionality and it becomes easy for developers to use, it will trickle down to lower stakes websites - think anything that currently sees fit to harass you with a CAPTCHA. It's not simply Big Bad Microsoft that will push this onto us, but rather the entire market will gradually shift for "security" (ie corporate whims).

Will Free Software and the Open Internet still exist? Of course! Remote attestation does not prevent you from running whatever software you like on your local computer. But it will further bifurcate the Free user-representing world and proprietary WebTV land - imagine not being able to do online banking or shopping from your ergonomic desktop system, and having to do it from your phone that you also have to upgrade every two years. And the idea that someday ISPs will mandate this type of technology to connect to their network is far-fetched, but still within the realm of possibility.

One caveat here is that if the remote attestation is only over the contents of the Pluton chip itself, then it cannot be used to dictate what software is running on the main system. I have no idea if this is the case here or not, but either way the integration of the chip onto the same die as the processor does not bode well for future development.

Furthermore, I do not believe the claim elsewhere in this thread that you could proxy such requests, as a secure remote attestation design involves the attestation result being used to generate a decryption key (eg a TLS session key) that does not leave the trusted software environment. So the system performing the attestation is unable to simply relay back what it has learned. There might be design shortcomings or implementation bugs that allow for doing so, but the straightforward goal is to close those over time, as with any vulnerability.
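
To make that key-binding point concrete, here is a minimal Python sketch of the general shape (all names and mechanics are hypothetical stand-ins, not any real TPM or Pluton interface): the key the service will use is derived inside the attested machine's trusted environment, so a machine that merely relays the challenge never obtains anything it can use on its own.

    # Minimal conceptual sketch (hypothetical names, not a real protocol or API):
    # the verifier binds the session key to the attested state, so relaying the
    # challenge to an "approved" machine doesn't hand the relaying machine a
    # usable key -- the key stays inside the approved machine's enclave.
    import hashlib
    import hmac
    import os

    class Enclave:
        """Stand-in for a TPM/TEE; the device secret never leaves this object."""
        def __init__(self, measurement: bytes):
            self._secret = os.urandom(32)     # provisioned per device
            self.measurement = measurement    # hash of the booted software
            self._session_keys = {}           # held inside the enclave

        def quote(self, nonce: bytes) -> bytes:
            # Attestation: MAC over (nonce, measurement). A session key bound to
            # this exchange is derived and kept inside the enclave; a real design
            # releases it only to the software stack that was measured.
            self._session_keys[nonce] = hmac.new(
                self._secret, b"session" + nonce, hashlib.sha256).digest()
            return hmac.new(self._secret, nonce + self.measurement,
                            hashlib.sha256).digest()

        def use_key_locally(self, nonce: bytes) -> bytes:
            # Only measured code running on *this* machine gets to use the key;
            # a remote proxy never reaches this point.
            return self._session_keys[nonce]

    approved_box = Enclave(hashlib.sha256(b"approved OS image").digest())
    nonce = os.urandom(16)                   # challenge from the remote service
    relayed_quote = approved_box.quote(nonce)
    # The relayed quote verifies, but the traffic key it is bound to lives in
    # approved_box's enclave; the relaying machine can't decrypt anything with it
    # unless the approved OS cooperates in exporting the key.

Whether real deployments actually achieve that binding, or leave gaps you can proxy through, is exactly the cat-and-mouse question raised upthread.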


Background material on Pluton:

1. Xbox Security, https://www.platformsecuritysummit.com/2019/speaker/chen/

2. Azure Sphere (derived from Xbox) with Microsoft Linux kernel, OE/Yocto runtime and QEMU emulation of Pluton for CI/CD, https://www.platformsecuritysummit.com/2019/speaker/seay/

3. DMTF SPDM (PCI device firmware attestation to SoC/RoT), https://www.platformsecuritysummit.com/2019/speaker/plank/

Nov 2020 Intel announcement about Pluton, https://itpeernetwork.intel.com/intel-and-microsoft-plan-to-...

> Secure platforms anchor on a hardware Root of Trust as the foundation. Given Intel’s diverse ecosystem, our vision is to offer multiple Root of Trust options that ensure isolation of resources, keys and security assets. The partnership with Microsoft to offer Pluton will further broaden the choices available to our mutual customers.

Hopefully a future Intel SoC will include an optional FPGA-based RoT where customer hardware owners can load the open-source firmware of their choice.

Edit: Pluton will be included in upcoming Arm laptops with SoCs from the Qualcomm-Nuvia (former Apple M1) team.


> Hopefully Intel will offer an FPGA-based RoT where customer hardware owners can load the open-source firmware of their choice.

This is sarcasm, right? It must be sarcasm.

Maybe I'm out of the loop but I would guess that hell would freeze over before Intel releases something like this, let alone an FPGA Root of Trust.


Take a look at the past 3 years of presentations at DARPA ERI, where every major US silicon vendor is participating. Much work is underway on heterogeneous systems, including Open FPGAs and OSS toolchains for EDA, to speed up (days, not months) the design-test-deploy cycle for specific applications.

AMD provided a custom (expensive) SoC and RoT to MS Xbox, now being generalized with MS Pluton in 2022 Ryzen CPUs (and some future Intel CPUs). Intel already offers custom CPUs to some large customers. If a security-sensitive automotive or robotics customer needed an FPGA RoT, and the market opportunity was sufficiently interesting, Intel has multiple options for meeting that requirement.

> This is sarcasm, right? It must be sarcasm.

Intel at least left open the possibility in their press announcement. AMD did not, but they have purchased Xilinx and TSMC is building a US-based fab in Arizona, with "secure supply chain" FPGAs high on the list of early product candidates. It's up to customers to bang on Intel/AMD doors and show demand for FPGA RoT chiplets that support OSS gateware.


> Take a look at the past 3 years of presentations at DARPA ERI

Thanks but I'm good, I'll just take your word for it.

I'm sure Intel can materialize FPGAs when the contract warrants it. It doesn't follow that because military or corporate contracts exist consumers will somehow directly benefit.


There's a direct line between the Microsoft/AMD Xbox SoC corporate contract and Pluton in 2022 Ryzen consumer CPUs, as described in the videos linked above. It's not in Intel's interest to make Microsoft-AMD designed Pluton into the exclusive silicon RoT provider for Intel CPUs.

Another candidate for "Open RoT" is Google's OpenTitan, https://opentitan.org.

Open-Source FPGA Foundation: https://osfpga.org/


Will this allow my computer, in the future, to be as locked down as current smartphones? Will this allow software to refuse to run, or services to refuse to work, depending on what third-party software I have installed?


"Secure boot" and "remote attestation" are complementary features.

Specifically, secure boot is what makes it so that "your" computer is unwilling to run software that has not been approved by the company that made it. This has existed for quite some time, and is responsible for the locked-down mobile ecosystem as well as the inability to remove the Intel ME and AMD PSP embedded malware from recent PCs.

Remote attestation has not been widely implemented yet, but will make it so that remote services refuse to work unless you are running only software that the service approves of. I'm not sure how much Pluton moves the needle forward, but any amount is not good. If remote attestation comes into full effect, many websites will only be usable on newer computers and websites will be able to forcibly disable software the website finds objectionable, like say Adblock.


>I'm not sure how much Pluton moves the needle forward

A lot. They only need to wait for Pluton-enabled PCs to reach critical mass. Compared to TPMs, Pluton is inside the chip and thus not vulnerable to bus tampering, and it is not a standard but a "product", meaning Microsoft will have the ability to make changes without intervention from other companies.


Everything needed to lock down your computer as much as a phone already exists, there's no need for a TPM or Pluton to do so.


We want there to be fewer ways of doing that. The fact that one way already exists doesn't mean that we should be okay with more. The desired end goal is that eventually there are zero ways to do this, and we'll never get there if we keep moving in the wrong direction and justify it by pointing out that we're not already there.


There's a huge difference between "exists" and "is now commonly available and made easier to use". The frog-boiling is slow, but an increasingly large number of us are becoming aware of this new rise of corporate authoritarianism, and we know how it will end if we do not fight it as hard as we can.


All Microsoft need to do to block other operating systems from PCs is change their policy around secure boot. All they need to do to prevent unsigned apps from running is change the default behaviour of Windows. The code exists. It's deployed. It's commonly available.


Yup, it's that close. I'm honestly happy there's outrage ahead of the release of chips like this. Some systems did get Secure Boot locked down. Maybe we got the policy we got exactly because people are still outraged.

I'll take that any day over MS+Intel releasing a T2-equivalent + Secure Boot combo as a requirement in all new certified laptops and people realising it too late.


They need to boil the frog slowly enough that most people won't realise until it's too late.


I don't think that analogy works here, since the things we're worried about are binary states. Either you can run arbitrary software, or you can not, etc.


Perhaps a better analogy then is securing the noose around the neck of the prisoner, but not yet releasing the trapdoor.


...and the people who work on "progressing" this technology are helping to make the nooses better and also putting them on their own necks. (I've used that analogy before. I think it's a great one.)


Pluton will likely close OEM/firmware security holes that could be used to escape such policy.


Via what mechanisms? Nothing we currently know about Pluton would enable it to do anything like that, as far as I can tell.


not much detail, but slide 12 claims: https://www.platformsecuritysummit.com/2019/speaker/seay/PSE...

> Pluton validates and boots Security Monitor

> Security Monitor validates and boots the Linux Kernel

> Application Signatures are verified by SM and Pluton before Linux Kernel loads an application
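
Read as a chain of custody, that is a fairly conventional verified-boot handoff: each stage checks a signature on the next stage against a key it trusts before passing control. A toy Python sketch of the shape (purely illustrative; the names and the symmetric "signature" are stand-ins, not Pluton's actual interfaces):

    # Toy chain-of-trust sketch (illustrative only; real chains use asymmetric
    # signatures and hardware-held keys, not this symmetric stand-in).
    import hashlib
    from dataclasses import dataclass

    @dataclass
    class Stage:
        name: str
        image: bytes
        signature: bytes   # produced by whoever holds the stage's signing key

    def sign(key: bytes, image: bytes) -> bytes:
        return hashlib.sha256(key + image).digest()

    def verify(key: bytes, stage: Stage) -> bool:
        return sign(key, stage.image) == stage.signature

    ROOT_KEY = b"root-of-trust-key"      # held by the on-die root of trust
    SM_KEY = b"security-monitor-key"     # key the security monitor trusts

    security_monitor = Stage("security monitor", b"sm image",
                             sign(ROOT_KEY, b"sm image"))
    linux_kernel = Stage("linux kernel", b"kernel image",
                         sign(SM_KEY, b"kernel image"))

    # Trust flows strictly downward: each stage boots the next only if the
    # signature checks out against the key it was provisioned to trust.
    assert verify(ROOT_KEY, security_monitor)   # "Pluton validates and boots SM"
    assert verify(SM_KEY, linux_kernel)         # "SM validates and boots the kernel"
    # Applications would be checked the same way before the kernel loads them.

The chain is only as trustworthy as its root and each link's verification, which is where the question of who controls the root and the keys comes in.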


This design still relies on prior stages of the boot process handing stuff over to Pluton - if there are vulnerabilities in the OEM firmware, they're still going to be exploitable in this model.


...no? How would MS force me to install an AGESA update that supposedly restricts me from booting unsigned code? That's where the newly announced remote attestation comes in.

On the other hand, on PCs with Pluton chips they can change their minds any second.


The described functionality of Pluton doesn't allow it to prevent you from booting unsigned code. Your system firmware would need to ask Pluton for permission, and if it doesn't do so then no number of Pluton firmware updates is going to make it able to prevent that.


On second thought, you are right. I mentally confused "not being able to boot unsigned code" with them being able to make booting unsigned code as useless as possible through attestation (possibly no internet, no DRM'd software, legally acquired or not).


- Microsoft isn't going to fuck us over that hard <--- We are here

- Microsoft is fucking us over that hard

- Libre software FTW

- Libre software UX sucks

- Repeat

When will the cycle end?


> Libre software UX sucks

This has always been true, and while it's better in this iteration of the cycle, it's still not great.

But have you actually used Windows in recent times? The UX has gotten infinitely worse since XP. Mac OS has changed about as much as the popular open source DEs, but Windows is infinitely worse. I can't stand using it. The taskbar is garbage, the menus are garbage, the discoverability of anything is the worst it has ever been.

I can't imagine they're ignorant to the fact that Desktop UX is no longer a differentiator for them. The only lock-in they have is in business environments, for which there is zero competition at their price point.

It's inevitable that the host OS gets locked down. The runtime has moved to the browser and cloud. It's just a matter of time.


Windows UX is ass, but it runs a lot of nice software. Runs it well too. Many things don't work as well in Wine. I'm talking both slowdowns and glitches.


> but it runs a lot of nice software. Runs it well too.

Probably because the developers write software for the most popular OS and not because Windows did something well.


I've always thought this was just Microsoft's copy of Google Titan and Apple's T2. And as others have pointed out, there's a lot of overlap with what a TPM can already do.

The main thing that comes to mind for me is that since this is integrated into the CPU itself, now 'things' can be strongly and directly tied to the CPU instead of a separate TPM or some collection of hardware identifiers. Was this already possible on x86? My mind immediately went to "this will be used for tighter DRM"; I feel like content owners would like this a whole lot.


If you have an AMD system then there's a decent chance that it's already running a TPM stack on the on-die Platform Security Processor. Pluton isn't really any more tightly integrated, it just means the TPM stack isn't running on the same core as a bunch of other random platform things.


OK, thanks for pointing this out.


So basically "Why it's ok and you should be happy about Microsoft's hardware controlling the software on your PC".

I'm so unbelievably sick of this 'security by corporation, it's what's best for you so accept it' bullshit. I really am.

No I don't want proprietary internet enabled hardware on my PC monitoring my software, no it does not make me feel safe and secure, actually, go fuck yourself and whatever marketing bullshit you spew to make this desirable for consumers. I'm honestly so fucking done with this kind of shit.


A quick look at the author's credentials should clear up any doubt about the motivations behind this article --- over the past few years, I have come to the conclusion that anyone who works in the "security" industry is almost certainly working against you and your freedom.


> I'm so unbelievably sick of this 'security by corporation, it's what's best for you so accept it bullshit.' I really am.

Then, consider supporting the alternative approaches to security: https://puri.sm/posts/the-future-of-computers-the-neighborho...


> No I don't want proprietary internet enabled hardware on my PC monitoring my software

Good news! Pluton is not internet enabled and can't monitor your software.


Even if that's true right this minute, an unauditable (:/) bit of hardware controlled by Microsoft (!) that can be force-updated by them (!!!) means this can change at any moment.

I liked TPM with my own keys. This just seems a bit 'extra' in all the wrong ways.


Firmware updates can't add network hardware where none currently exists. The block diagrams for Pluton don't give it any mechanism to communicate with the network directly.


Anything evil will likely be brokered through the OS. While this is good in that it's not a persistent backdoor like Intel ME, there's still Microsoft skulduggery to worry about.


This part was addressed in the article. Just don't run Windows. Windows already has the access and the network.


One day you will own nothing and be happy about it, according to the media.


So we already have to trust the Intel ME crap, and now the MS crap too. How is this similar to the status quo? _At best_ it increases the number of actors you have to trust, especially if you are not using Windows.


Don't forget that even recently Microsoft has pretended to be committed to open source, but it consistently continues to make decisions that run counter to that. What may look friendly today, like their switch to Edge, may end up being entirely hostile, like Edge has become today.


In German we call stuff like Pluton "Politik der kleinen Schritte" (the politics of small steps) or "Salamitaktik" (salami tactics), which basically means that, little step by little step, things change.

It is not CURRENTLY a threat, but it builds up into a threat in the future if we do not stop and/or constrain it.


Salami tactic is on the nose


"Wire fence is not a threat! "

( fine print: we haven't switched the electricity on yet)

edit: asterisks are somehow omitted


We should put the emphasis on "currently", but I also think we should discuss how Microsoft is positioning itself as a gatekeeper and forcing all market players to adopt its tech.

If Microsoft says Windows will only support hardware with this tech enabled, then, since almost every computer on the planet runs Windows, vendors must adopt this tech or go out of the market.

In other words, Microsoft is positioning itself to say to all market players to play by its rules or go out of business.

This is a perfect way to establish control over the market without establishing itself as a monopoly, thus not attracting attention from regulators.


> if you're not running Windows Microsoft can't update the firmware on your TPM.

This seems to be the biggest issue - hardware locked into requiring Windows to be up to date.

MS can of course ship firmware that's independent of the OS, but knowing MS - they probably won't.


It sounds like they're using UEFI capsule updates for the firmware, so it'll actually be easy to perform the updates under other operating systems - Microsoft just won't have any mechanism to compel you to do so.
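
For what it's worth, UEFI capsule updates on Linux are normally driven through fwupd/LVFS. Assuming a vendor actually publishes the Pluton firmware there (an assumption on my part, not something anyone has committed to), applying it would just be the usual flow, sketched here from Python:

    # Sketch only: assumes the firmware is published to LVFS so fwupd can see it.
    # These are just the standard fwupdmgr steps, driven from Python purely for
    # illustration; nothing here is Pluton-specific.
    import subprocess

    def fwupd(*args: str) -> None:
        subprocess.run(["fwupdmgr", *args], check=True)

    fwupd("refresh")        # pull updated firmware metadata from LVFS
    fwupd("get-updates")    # list devices with pending firmware updates
    fwupd("update")         # stage the UEFI capsule; it is applied on reboot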


Then it won't be a big problem I guess.

Though having a firmware blob from MS embedded in the CPU itself feels kind of weird. A better way to do it would be to have some third party handle it, or to require that the firmware be open source, for example.


The "mechanism to compel you to do so" will be in the form of remote attestation, as some of the other comments here have mentioned.


I don't think I understand your threat model here. In the dystopian remote attestation future, presumably nobody's going to grant you access unless you're running Windows, at which point Microsoft can impose arbitrary policies without needing to involve Pluton at all (all it would do in this case is verify that you're running Windows, and you can already make that determination using a traditional TPM). So under what circumstances would you find yourself unable to gain access to a remote resource unless you're willing to accept a firmware update that changes Pluton's behaviour in a user-hostile way?


I think this is about securing data/keys (AES, TLS, TPM..) vs securing code (Secure Boot, TEEs..). Neither is really a threat to software freedom as I see it, as long as it's user controlled or can be rendered effectively inactive.

The thorniest question I think is around TEEs. You either trust ME/PSP/mobile TEEs for their explicitly mentioned uses (fTPM, SVM, remote attestation..) or you think they should be even more sandboxed or perhaps shouldn't exist at all. I'm all for the middle ground/option here where the user is in control, though others may disagree. Remote attestation could be a case where the user is losing control, so preserving user control there is important.


Man, I was really enticed by the specs on these Z-series laptops by Lenovo and was looking at an upgrade. But I was reading about Pluton this afternoon, and now looking at this thread, I don't know how I feel about it. Why ruin a great new CPU with future spyware?

I don't like the edit at the bottom where the author's like: oh yea, of course this could be a massive issue against FOSS but we should just assume that vendors will think it's impractical. I've seen how banks react to rooted phones, even when rooted to heighten device security--and I've switched banks before because of it. They don't care.


I'm totally with you. Not using banking apps - just sites. Also stopped buying CPUs with DRM-on-chip. It's pretty crazy what they're doing now instead of just making "computing machines".


Unfortunately, with the DRM situation, the benefits far outweigh the cost for me. The speed gains and compatibility I'd get let me do a lot more things. I can use these better CPUs to compile a lot faster :|

The banks aren't much better. All banks in my country have dated, late-90s-looking websites without even UTF-8 encoding (so you can't send an email with a comma). They are barely usable on desktop. I'd have to make my own client, or at least whip up a lot of magic CSS to make them mobile-friendly, and would still need to include QR code scanning, as it is so ubiquitous that no one would let me fiddle with adding their account numbers manually.


Thank god for Berkeley and RISC-V.


"Trusted" computing is not something that's unique to x86.


With RISC-V, at least, individual providers can spin off their own versions of the architecture without having to ask for permission from an entity like Intel, AMD, or Arm.

Unlike right now, where all x86 chipsets have backdoors, and all new ones have "Trusted" computing features which you cannot say no to.


There are a lot of security benefits to using a TPM. I wouldn't mind if I could use an open source one in Linux.

They ameliorate a lot of low-entropy problems for passwords and can improve security. I can't imagine a proprietary one being mandatory. My banking app uses the mentioned remote attestation, so I can't use it on my less-Googled Calyx ROM. I just think that's stupid, when very strong warnings would have been enough.
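
On the low-entropy point: the usual pattern is that the TPM seals a strong random key and releases it only after a short PIN, with a hardware-enforced lockout after a few wrong guesses, so the PIN's weakness stops mattering. A toy Python model of the idea (purely illustrative, not a real TPM API):

    # Toy model of TPM-style dictionary-attack protection (illustrative only, not
    # a real TPM interface): a weak PIN gates a strong sealed key, and the
    # "hardware" enforces a retry counter that software can't reset.
    import hashlib
    import hmac
    import os

    class ToyTPM:
        MAX_TRIES = 5

        def __init__(self, pin: str):
            self._sealed_key = os.urandom(32)   # strong key, e.g. a disk key
            self._pin_digest = hashlib.sha256(pin.encode()).digest()
            self._tries_left = self.MAX_TRIES

        def unseal(self, pin: str) -> bytes:
            if self._tries_left == 0:
                raise PermissionError("locked out; wait or use a recovery key")
            attempt = hashlib.sha256(pin.encode()).digest()
            if not hmac.compare_digest(attempt, self._pin_digest):
                self._tries_left -= 1
                raise ValueError("wrong PIN")
            self._tries_left = self.MAX_TRIES
            return self._sealed_key

    tpm = ToyTPM(pin="4821")
    disk_key = tpm.unseal("4821")   # a 4-digit PIN is fine: 5 guesses, then lockout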


Meanwhile, hardware-level OS-agnostic rootkits like Computrace exist, and Intel ME has its own network stack, but Pluton being adopted as some kind of industry standard to lock down a platform in the name of "security" and what have you is a conspiracy.


Does Computrace even work if you're not running Windows? Does it have a Linux/MacOS/etc. payload now?

(I reversed much of it a long time ago --- and remember it was specifically coded with Windows in mind, with certain assumptions about various things.)


It's funny how Microsoft seems unable to do "security" without veering into megalomaniacal authoritarian schemes. See also Palladium.

What about trying to secure your software without building the infrastructure for an oppressive dystopian future? Too much to ask?


They're used to having the desktop monopoly, and losing it makes them literally lose sleep over the matter.

It's not a matter of security, it's a matter of monopoly. Since forever.


It's not that they can't do security without going full authoritarian, I think they do it because they want to go authoritarian. The new security benefits are just a vehicle.

They're a lot like the common politician who smuggles horrible laws into relief bills or trade treaties. UEFI (especially on the ARM platform) and Intel ME are two examples of this.


Yes. Look at the zero-day clusterfuck that is Teams.

To Microsoft, security is an excuse for a land grab.


What worried me about Pluton is essentially both the fact that it might set a trend where DRM locks out Linux devices and that a remote exploit against Pluton would be a real nightmare scenario.

And that's despite the fact that, in a way, I do think TPM-like components are a good thing.


> the fact that it might set a trend where DRM locks out Linux devices

Fact? Based on what evidence?


If you re-read my statement you might notice "the fact" is used as a figure of speech.


One great advantage of this separate silicon is that side-channel attacks are greatly mitigated; it's everything else that worries me (closed platform, no transparency, unusable once the vendor stops supporting it, etc.)


Windows? It's a dead OS anyway.

People are either on macOS/Android/iOS or Chromium OS.


But if you don't like Pluton, I have bad news for you about Macs, Chromebooks, and most Android phones.


Chromebooks give you full flashing & serial console access for both AP (main CPU) and EC over an SBU cable, run open source firmware on both AP and EC (modulo FSP/AGESA), even run open source firmware on the root of trust (you can't replace that one with an unsigned build on a retail device but you can study it for sure).

Apple silicon Macs have the main CPU cores fully in control, with zero external peripherals having full DMA access to system RAM (everything goes through IOMMU), and have an interesting secureboot architecture that allows different security levels on different OS installations (you can run unsecured Linux side-by-side with a fully Netflix-ready macOS).

I have much worse news about the typical Intel BootGuard'ed PC laptop.


That's the thing, it doesn't really matter.

It's not YOUR OS, it's their product, and you're not forced to use any of their products.

Install Linux and voila, you've got your freedom back.


Kinda, but not completely. The problem is really with the hardware; switching OS can only alleviate it, not resolve it. But really, we don't have a choice unless we want to stay on fairly old platforms.


Unless you support Linux-first hardware vendors.


I would love to, but unfortunately a lot of the laptop hardware is pretty garbage in comparison. I can deal with a slower CPU maybe, but it's the whole stack. The bodies are all cheap plastic. The battery lasts 3 hours. And you only get 1080p TN panels; nothing OLED or designed with color accuracy in mind is on these. Similar feelings extend to phones as well.


Consider this: https://puri.sm/products/librem-14. Not plastic. It's 1080p, but for 14" that should be fine for most people. Supports two external 4K screens AFAIK.


I hadn't seen that SKU yet, and the aluminum chassis looks good. But Comet Lake compared to Tiger/Alder Lake is disappointing for performance/battery/graphics, and there's no USB4/Thunderbolt. The display also doesn't list sRGB and DCI-P3 color space coverage, so I can only assume it's a generic panel with <100% of both, which is a complete show-stopper for me. If I wanted a cheap display and planned to plug into nice monitors instead, I'd just buy a desktop; the screen is probably the most important piece.


Did you forget the /s? Honestly can't tell...



