Citation needed. Log4j comes to mind as a logic bug disaster, but all the data I've seen is that >65% of the high severity bugs come from memory unsafety in most software projects.
Attacking the hardware is always "possible" but can be made unreasonably difficult to pull off in practice, and we've reached that point with game consoles now. The Xbox One and PlayStation 4 survived their entire generation without falling to hardware hacks, and the Xbox One didn't get hacked via software either.
Yeah, these days a lot of hacking has been voltage glitches and other hardware attacks. Rust provides no safety guarantees when the CPU isn't operating within its normal rules.
Rust-ified Windows will still have the fundamental exploitable flaw of Windows: the ability to download binaries from anywhere and give them admin privileges.
Which is why, if you watch the talk where this was announced, Windows is in the process of requiring signed binaries and two admin levels, like on macOS.
Windows honestly needs this. The problem is that "admin access" shouldn't be a simple button click, because then every app requests it and every user just hits OK, since that's the only way to use Windows. macOS gets this right: you have to reboot into recovery mode to turn off SIP, which is difficult enough that normal apps won't ask users to do it, but power users will have no problem.
It's irrelevant. They've been trying to do those things for years but their ability to execute is completely gone. Their efforts to improve things just make stuff worse.
To name just a few examples: they want all code to be signed, but Windows code signing certs are more expensive than the Apple developer programme membership and much harder to obtain, and the rules actually make it "impossible" in some cases, like if your country issues ID cards without an address on them. They're now mandating HSMs, and if your CA decides to change your subject name you're just SOL, because only Windows 11 anticipates that problem; Microsoft can't be bothered maintaining Windows 10 anymore, so the fix was never backported. Yet the Windows 11 security hardware requirements mean many people won't upgrade.
So whilst building a theoretical strategy around sandboxing apps, they aren't even able to get the basics right. If the people making these decisions were actually writing Windows apps themselves, they might realize this, and Microsoft would be able to get its teams marching in the same direction, but there's not much sign of that from the outside.
Compare to how Apple does it: they run their own code signing CA, assign stable arbitrary identifiers to companies and people, and still manage to sell these certs for less than a Windows certificate whilst also throwing a couple of support incidents into the mix too, something you can't even get from Microsoft at all as far as I can see (and I've tried!).
It is going to be relevant after 2025, regardless.
By the way, some of this stuff is already in Windows 11 Previews and can be enabled.
Even if they botch this, as happened with UWP, the alternative will be moving everything to Azure OS with thin clients, so one way or the other, it will happen.
They do most of this -- albeit without support -- through the store if that's a viable distribution channel for you. You can actually get support, but be prepared to pay big $$$.
Yes, going via the store fixes some of those problems but introduces others. In particular, a lot of corporate users have it disabled, and of course they have a lot of arbitrary policies. I didn't know they had dev support if you're in the store, interesting, thanks.
YMMV, but I found the macOS app store way more picky than the MS Store. Denying my submission for using the term 'Exit' instead of 'Quit' rubs me the wrong way.
No clue, however, they've made it relatively seamless to publish and download from there. You can also use winget [1] to download signed apps from the store. End users don't need an MSA.
It's not that seamless. We've been trying it lately and the onboarding process is still pretty bureaucratic. Like, having to give an age rating to a zip utility doesn't make anything better.
This is a fundamental flaw of nearly every OS, because users need to be able to run software on their computers.
That doesn't mean we should just ignore security elsewhere. Many users know not to trust mysterious executables from the internet, but don't expect a PDF or font file to be able to infect their machine.
I think being able to trust that non-executable files like these won't compromise your system could be a big deal.
Not just that, but also the ability to run custom code. Think of an Android phone in the future that is running some Rust-hardened kernel and has a locked-down bootloader.
All of this will become e-waste without the ability to run unsigned code. And not only that: by allowing custom code, which some devices do not allow, the usefulness or even purpose of the device can be extended or altered.
I don't want to throw away a perfectly usable device just because the company made it obsolete.
No. In this context reverse engineering means looking at the assembly to understand what it does. Once you understand that, you can figure out how to exploit the code: for example, by sending a subtly malformed packet that causes a server to write out of bounds of some array and corrupt the server's memory. By very carefully corrupting the server's memory, the attacker can hijack the server to do whatever they want.
Such as, say, in 20 years when you want to run custom code on a then-old console.