Microsoft exec says Windows 11 kernel will soon be booting with Rust inside (neowin.net)
407 points by itvision on April 28, 2023 | 222 comments


Microsoft has been interested in using memory safe languages in the kernel for some time now. An example is the Midori project led by Joe Duffy back in 2009 which explored the idea with a C# derivative.

Rust is a game changer in that it is built from the ground up to offer memory safety without garbage collection and still aims for zero overhead abstractions.

It would be ironic, after their notorious security issues of the early 2000’s, if due to efforts like this, Microsoft Windows ends up being the most secure general operating system.


>Microsoft Windows ends up being the most secure general operating system

It already is. What, exactly, is better than Windows at security features on desktop computers? Linux? There is nothing in there that comes even close to the defensive features of Windows, like HVCI, a subsystem that checks for driver signatures and the like, isolated by virtualization mechanisms, which completely prevents tampering with the kernel. Linux's support for secure boot only exists to make it convenient to dual boot with Windows; it doesn't do enough to prevent kernel-level rootkits, it's a total placebo, and it's even worse if you use a distro that doesn't ship signed kernels, like Arch Linux. If you're self-signing on the same computer, how exactly are you stopping malware?

Since Vista, the OS has also gained some serious resilience against crashes that I have never seen on other operating systems. For example, it is possible for your desktop session to survive a GPU driver crash. On Linux this is a guaranteed freeze or kernel panic. This is, fortunately, a rare event, but the last few times I've seen my computer freeze on Linux, it was always because of the graphics stack.

OpenBSD's slogan about having few remote holes in the default install doesn't mention that it's because it has literally no features enabled out of the box.

macOS and iOS are the systems with the greatest number of privilege escalation failures by far. In fact, what do people think jailbreaks are? Some of them are truly frightening when you think about what could have been. Multiple jailbreaks were made that could be run just by browsing a webpage in Safari. This means they punched through the browser, punched through privilege escalation, and had the potential to install a rootkit on your phone. Just by visiting. A. Webpage.

How many times has such a thing happened on Windows in recent years, where visiting a webpage installed a rootkit on your computer?


Disagree, macOS is way ahead. Windows code signing is a half-implemented joke that doesn't do much, and apps can easily tamper with each other at will (unless they're installed using MSIX, which not much software uses), whereas macOS code signing actually works and will stop apps tampering with each other completely.

The macOS app sandbox actually works. On Windows nothing uses the app sandbox due to serious bugs and performance regressions. Chrome rolls its own sandbox for example.

SIP successfully stops macOS getting screwed up. The number of Windows installs out there in some bizarre half-broken state is incredible. It's routinely the case that API calls which work on one Windows system don't work on others even at the same patch level for no clear reason at all, which trace back to weird configuration differences to the OS.

Windows still relies heavily on client side virus scanning. Apple do malware scanning server side and then lean on their code signing and integrity systems instead, which is one reason Macs have great battery life.

And then there's all the other more well known security things Apple do with secure processors and the like.

Windows is just so far behind and they're so drowning in tech debt it's unlikely they'll ever catch up.


It's difficult to quantify something like this, so obviously treat this data with proper skepticism. But: the CVE database, just looking at 2022.

- Windows 11: 498 reported CVEs in 2022 [1]
- macOS: 379 CVEs [2]
- iOS: 242 [3]
- Android: 897 [4]

Linux isn't as comparable or as well categorized (especially given it's just the kernel, and there are dozens of other "products" which make up an equivalent to what Microsoft would call "Windows 11"). Nonetheless: 306 [5]

You should check your preconceptions and susceptibility to Apple's marketing. No one is substantially far ahead or far behind (except maybe Android, but again, these are hard to compare apples-to-apples). Everyone still experiences roughly the same class and magnitude of vulnerabilities. But, everyone is also getting better at it.

[1] https://www.cvedetails.com/product/102217/Microsoft-Windows-...

[2] https://www.cvedetails.com/product/70318/Apple-Macos.html?ve...

[3] https://www.cvedetails.com/product/15556/Apple-Iphone-Os.htm...

[4] https://www.cvedetails.com/product/19997/Google-Android.html...

[5] https://www.cvedetails.com/product/47/Linux-Linux-Kernel.htm...


I'm not sure how that rebuts my point? macOS has a much lower number of CVEs than Windows. But there's a lot more to security than CVEs, and my post was about issues that CVEs don't track. BTW Apple marketing isn't what led to my views, they're based on direct experience with the security mechanisms of both operating systems up close and personal.


Well, you know what they say about being too close to something to speak on it objectively. Which in this case means: there's the way these systems were designed to work, and how they actually work toward the end-goal of keeping the systems they secure, secure.

I'll believe that Apple's operating systems are significantly and measurably more secure when they can make it a few years without a maliciously formatted iMessage crashing the kernel. Until then, it's arguing minutiae. Everyone has security issues. Everyone is taking steps toward improving their security. No one is so far ahead that they're worth white knighting on HackerNews.


> macOS has a much lower number of CVEs than Windows

More than 75% of Windows' CVE count isn't exactly "a much lower number of CVEs", even before considering macOS's much lower market share.


You probably need to rebase that for usage stats (install base)


The CVEs / Install Base ratio is a pretty silly metric for determining the security of a product. A large number of CVEs could tell you that the users and developers of a particular product care a lot (or are paranoid or are simply security minded) about security, and want to give notice of issues to as many people as possible.

This is a live issue in the Rust community, which does appear to care a great deal about security, as to how to deal with minor/theoretical vulnerabilities perhaps unworthy of a CVE.


> Disagree, macOS is way ahead.

Apple is a consumer electronics company. For serious tools, use Windows.


For serious privacy loss use Windows


Hold your horses there, my good friend. Yes, Windows is better now than it was in the past, but it is still a shitty OS. None of them are, actually. All of them continuously fail every single year at hacker gatherings/hackathons/whatever public event, with multiple zero-days showing up. Every single major OS out there is a joke from a security point of view.


This. Parent is deluded if they think Windows can even be compared to a hardened Guix setup with rollbacks and sandboxed Chromium/IceCat.

I would think otherwise if Windows used virtualisation and sandboxing to run old Win32 apps from XP and below. Because lots of enterprise software depends on proper compatibility modes, and there the security gets thrown out of the window.


iOS has some seriously nifty security mechanisms that take advantage of features baked into the Apple processors. Stuff like pointer authentication and the page protection layer (something akin to HVCI, without the hypervisor). Jailbreaks are getting harder and harder.

Both Windows and iOS (I can't speak to macOS) are becoming incredibly security-mature operating systems via these security mechanisms that get stacked on top of one another. Saying one is better than the other in terms of security is hard to quantify.

Windows still does have some issues with user mode logical exploitation through DLL hijacking, or issues with credential relaying, although relaying targets are generally known and mitigated by enterprises.

iOS still has issues with its remote attack surface; however, it has gotten better with iOS 16, BlastDoor, and Lockdown.


Anecdote:

I've tested a two- or three-year-old Chrome version with a JIT compiler vulnerability and guess what - on an empty Linux VM it managed to escape Chrome and execute code.

Meanwhile on Windows with CrowdStrike software installed, Chrome just showed some error message about memory access.

I'm not sure who handled that attack - was it Windows or CrowdStrike - but either way I've been impressed.


I can pretty much guarantee that the Windows kernel stopped disallowed memory access from Chrome to outside apps.


Under OpenBSD, pledge and unveil would have killed that Chromium instance with SIGABRT. Your parent comment is utterly wrong.


With or without SELinux enabled?


Idk, that was a fresh install


I know Windows has many security features disabled by default. Where do I start to learn about them and maybe get some nice baseline recommendations for my home/office laptop?



> Rust is a game changer in that it is built from the ground up to offer memory safety without garbage collection and still aims for zero overhead abstractions.

Safe Rust cannot represent circular data structures, which makes entire classes of algorithms and architectures unimplementable. You have to work around these limitations by creating auxiliary structures for tracking references or by using reference counting; neither is a "zero overhead abstraction." Rust is only zero overhead if all you do is pass values up and down the call stack. Its false appeal says more about the simplistic types of applications folks are writing than anything else.


> Safe Rust cannot represent circular data structures which makes entire classes of algorithms and architectures unimplementable

In safe Rust. So just don't use safe Rust for those parts. Unsafe isn't evil, it's there for a reason. You should avoid it when possible and when there are no downsides, but sometimes there are, and that's ok.


Often the workaround for graph representations that safe Rust can't express with the usual pointer approach is actually faster: you keep the data in an array, which improves your cache hit rate. Even C++ programmers will often use the same workaround, either for safety or for performance reasons.
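
A minimal sketch of that index-based approach (illustrative names, not from the comment): nodes live in a Vec and edges are plain indices, so cycles are fine and there are no shared pointers to fight the borrow checker over.

    struct Graph {
        nodes: Vec<Node>,
    }

    struct Node {
        value: u32,
        // Edges are indices into `Graph::nodes`, not pointers,
        // so the data stays contiguous and cache-friendly.
        neighbors: Vec<usize>,
    }

    impl Graph {
        fn add_node(&mut self, value: u32) -> usize {
            self.nodes.push(Node { value, neighbors: Vec::new() });
            self.nodes.len() - 1
        }

        fn add_edge(&mut self, from: usize, to: usize) {
            self.nodes[from].neighbors.push(to);
        }
    }

    fn main() {
        let mut g = Graph { nodes: Vec::new() };
        let a = g.add_node(1);
        let b = g.add_node(2);
        g.add_edge(a, b);
        g.add_edge(b, a); // a cycle, with no Rc, RefCell, or unsafe needed
    }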

As for overall performance in real programs, Rust seems to consistently do as well or better than C++ programs as the cases from Microsoft here show.


> Safe Rust cannot represent circular data structures which makes entire classes of algorithms and architectures unimplementable.

That's not accurate. Safe Rust can represent circular data structures. It cannot represent circular data structures with 0 overhead. But you can use Rc/Arc/Weak + RefCell for that and it is not very hard. Or use one of the crates that expose ways of building self-referential structures in a safe way (of course they do use unsafe underneath, but the API is safe), like ouroboros.
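
As a rough sketch of what the Rc/Weak + RefCell version looks like (illustrative only): the Weak back-pointer is what breaks the ownership cycle so nothing leaks.

    use std::cell::RefCell;
    use std::rc::{Rc, Weak};

    struct Node {
        value: u32,
        parent: RefCell<Weak<Node>>,      // non-owning back-edge
        children: RefCell<Vec<Rc<Node>>>, // owning forward edges
    }

    fn main() {
        let parent = Rc::new(Node {
            value: 1,
            parent: RefCell::new(Weak::new()),
            children: RefCell::new(Vec::new()),
        });
        let child = Rc::new(Node {
            value: 2,
            parent: RefCell::new(Rc::downgrade(&parent)),
            children: RefCell::new(Vec::new()),
        });
        parent.children.borrow_mut().push(Rc::clone(&child));

        // The cycle is traversable in both directions, and the Weak link
        // means dropping `parent` and `child` still frees everything.
        assert_eq!(child.parent.borrow().upgrade().unwrap().value, 1);
    }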


The font parsing code was ported to 100% safe Rust (other than the C FFI), and was 10-15% faster to run than its C++ predecessor.

Make of that what you will.


That's what they said. It aims for zero overhead abstractions.


Project Midori was a really good read; a lot of good stuff came from it.

https://joeduffyblog.com/2015/11/03/blogging-about-midori/


> Microsoft has been interested in using memory safe languages in the kernel for some time now. An example is the Midori project led by Joe Duffy back in 2009 which explored the idea with a C# derivative.

There is also Project Verona.


Well Linux is using Rust in the kernel too.


Not yet.


The forked Android version has already been using it for the last two years.

Any Android 12 and later has a little bit of Rust taking care of the Bluetooth communications driver.


Is that only the case in Android?


That was only the beginning, followed by Keystore2, the new Ultra-wideband (UWB) stack, DNS-over-HTTP/3, and Android's Virtualization Framework (AVF) on Android 13.

https://security.googleblog.com/2022/12/memory-safe-language...

https://source.android.com/docs/setup/build/rust/building-ru...


One of the slides lists:

> Has driven changes in upstream Rust: more try_ methods for Vec that don't panic in OOM: https://github.com/rust-lang/rust/pull/95051

I was curious to have a look at that PR, but it seems it was closed after a long discussion (mainly because it would add ~30% more methods to Vec?). So which changes landing in upstream Rust is the bullet point referring to? Was the Keyword Generics Initiative born out of this?


> (mainly because it would add ~30% more methods to Vec?)

Sort of. Rather than bolting fallible methods ad hoc onto an existing type, it was felt it would be better to take a step back and actually design this properly. This includes third-party crates experimenting with different options.

Maybe we should have a FallibleVec type? Maybe common vec-like methods could be abstracted out into a `RawVec` type? Maybe both? Maybe the (unstable) `Allocator` API could be adapted to better suit all these cases? Whatever the case, it's not great to be adding a ton of methods in the heat of the moment.


They actually split these changes into their own crate I think:

https://github.com/microsoft/rust_fallible_vec


Panicking on OOM was always a questionable design decision.

It doesn't always mean that your app has no memory, it just means that your chosen allocator has no free memory. That's not always an unrecoverable situation.


A few things to say about this:

1. It's not always possible to detect memory allocation failure (e.g., Linux overcommit). So many applications will have to design their operation around the possibility that out-of-memory means someone is going to axe their process unwillingly anyways to support those platforms.

2. Memory allocations tend to be pretty close to omnipresent. If you consider a stack overflow to be a memory allocation failure, then literally every call is a chance for memory allocation failure. But even before then, there's often the use of small amounts of heap memory (things like String in particular).

3. Recovery from OOM is challenging, as you can't do anything that might allocate memory in the recovery path. Want to use Box<dyn Error> for your application's error type? Oops, can't do that, since the allocation of the error for OOM might itself cause an allocation failure!


You can get a view into what Rust looks like with fallible allocation in Rust for Linux, since Linus required this. So e.g. Rust for Linux's Vec only has try_push() and you'd better have successfully try_reserve'd enough space to push into or it may fail.

https://rust-for-linux.github.io/docs/alloc/vec/struct.Vec.h...

NB The prose for Vec here, including examples, is copied from the "real" Vec in Rust's standard library, so it talks about features like push but those are not actually provided in Rust for Linux.
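
For a rough feel of that fallible style against today's stable std (a sketch only; std has try_reserve but no try_push, unlike the Rust-for-Linux Vec):

    use std::collections::TryReserveError;

    // Append `items` to `dst` without ever aborting on allocation failure:
    // reserve fallibly first, then push into the already-reserved capacity.
    fn append_all<T>(dst: &mut Vec<T>, items: Vec<T>) -> Result<(), TryReserveError> {
        dst.try_reserve(items.len())?; // returns Err(TryReserveError) instead of aborting
        dst.extend(items);             // capacity is already there, so this cannot allocate
        Ok(())
    }

    fn main() {
        let mut v = vec![1, 2];
        append_all(&mut v, vec![3, 4]).expect("allocation failed");
        assert_eq!(v, [1, 2, 3, 4]);
    }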


I think it speaks to the fact that Rust's original/ideal usecase (writing a web browser) is slightly higher-level than actual kernel-level OS work (just like C++'s is). It's expanded into kernel territory, and done a good job of it, but there are places like this where a choice was made that creates some dissonance

If you're writing a high-performance userspace application, there's a good chance you don't want to deal with handling an error in every single place where your code allocates. I think Rust made the right choice, even though it means some growing pains as it starts being used in kernels


Not quite, because Mozilla themselves use forked data structures with fallible allocation.


Interesting! Do they use those everywhere throughout Firefox or only in special situations?


It really depends on what you are doing. If you're writing an application running on an operating system, you don't need out-of-memory handling; it will even make programming harder.


Like I said, if your allocator is actually the system allocator then yes maybe you're right. If instead you're doing something like using an arena allocator then OOM isn't a huge deal, because all you've done is exhaust a fixed buffer rather than system RAM; totally recoverable. There are huge performance gains to be had with using custom allocators where appropriate.


Sure, and you can do that with Rust today. There's nothing stopping you from writing a custom data structure with its own custom allocator. The "abort on OOM" policy is not a property of the language, it's a property of certain collections in libstd that use the global allocator.


I think the point here is that users would like to use Vec, HashMap etc using that arena allocator and handle OOM manually instead of having to write their own collection types.


Then their problem is not a lack of try_, it's a lack of custom allocators.


I think you missed "Add support for custom allocators in Vec": https://github.com/rust-lang/rust/pull/78461


If that's the first time you touch the last bits of your private arena, you can trigger the OOM killer.


True, but also worse than that: if an unrelated application starts consuming slightly more memory it can trigger an oom kill of your application


That's not necessarily the case though. It may be worth it for, say, a document viewer to catch the OOM condition and show a user-friendly error instead of dying. Of course, linux with overcommit memory can't do this. But on Windows, that's totally a thing that can happen.


I was curious so I did a Brave search to find out if that behavior can be changed. You can supposedly (I haven't tried it) echo 2 into /proc/sys/vm/overcommit_memory and the kernel will refuse to commit more memory for a process than the available swap space and a fraction of available memory (which is also configurable). See https://www.win.tue.nl/~aeb/linux/lk/lk-9.html#ss9.6 for more details.

I usually write my programs to only grab a little more memory than is actually needed, so I might play around with this at home. I wonder if this has led to a culture of grabbing more memory than is actually needed, since mallocs only fail at large values if everything is set the traditional way.

Defaulting to overcommit seems risky. I'd much rather the system tell me no more memory is available than just having something segfault. I could always wait a bit and try again or something or at the very least shut down the program in a controlled manner.


Disabling overcommit on a general-purpose Linux system is a terrible idea because:

* exec

* some tools or even libs map gigantic areas of anonymous memory but only touch a few bits of it.


You can add enough swap space that fork+execve always works in practice (although vfork or vfork-style clone is obviously better if the goal is to execve pretty much immediately anyway). Linux allows reserving address space with PROT_NONE, populating it later with mprotect or MAP_FIXED, and many programs do it like that.

However, I stopped using vm.overcommit_memory=2 because the i915 driver has something called the GEM shrinker, and that never runs in that mode. That means all memory ends up going to the driver over time, and other allocations fail eventually. Other parts of the graphics stack do not handle malloc failures gracefully, either. In my experience, that meant I got many more desktop crashes in mode 2 than in the default mode with the usual kernel OOM handler and its forced process termination.


It is the best behavior for the language you are writing your browser engine in.

The thing is that, ironically, a browser engine is only marginally inside Rust's niche. (Or maybe it's even marginally outside; at this point I don't think anybody knows.) And for most things that fit squarely in the focus of the language, it is a bad choice.


The original design had no allocator on collections and no alloc crate. If you cared about allocation, you'd use your own data structures with your own allocator in a no_std binary.

The alloc crate came later, and custom allocator support came later still and is not even stable yet.


For people that might be confused, setting a custom global allocator is possible in stable, but the Allocator trait isn't yet, so specifying the allocator for a specific instance of a Vec isn't possible in stable.

https://doc.rust-lang.org/std/alloc/index.html#the-global_al...

https://github.com/rust-lang/wg-allocators
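
For the stable half, a minimal sketch (System is just a stand-in here; a real project would plug in jemalloc, mimalloc, or its own allocator instead):

    use std::alloc::System;

    // Stable since Rust 1.28: every Box/Vec/String in the program
    // now allocates through GLOBAL rather than the default allocator.
    #[global_allocator]
    static GLOBAL: System = System;

    fn main() {
        let v = vec![1, 2, 3];
        println!("{:?}", v);
    }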


It really is one of my major gripes with Rust at the moment


Indeed - I can understand that some languages like JavaScript don't care, that's fine.

But the entire value proposition of Rust is reliability and predictability. Use this in critical applications. And this is the first time this language is being used in a major OS.

The fact that these changes weren't accepted is not a good sign.


As mentioned elsewhere, a different design is being pursued. In addition, lots of similar changes have already landed as part of the Rust-in-Linux work, which has many of the same needs.

In addition, Rust doesn't require you to use allocation, ever. It was originally expected that users who can't handle allocation failures would eschew libstd in favor of libcore (a subset of libstd with all the allocating parts removed).


> And this is the first time this language is being used in a major Os.

Sorry to be pedantic, but that's not really the case: https://en.wikipedia.org/wiki/Rust_for_Linux


Link to the talk - https://www.youtube.com/watch?t=2611&v=8T6ClX-y2AE (timestamped to the part about Rust)

The speaker covers a bunch of areas and the final part of the talk (around 10 minutes) is about Microsoft introducing Rust in some self-contained areas in Windows.

Some highlights:

- Their focus is on "killing bug classes". More context in this post by Microsoft Research from 2019 - A proactive approach to more secure code.

- They want to do this with memory safe languages, CPU architectural changes and safer language subsets. This talk focussed on memory safe languages, specifically Rust.

- First area they've introduced Rust in - a cross platform rewrite of a font parser called DWriteCore. The team reported that parsing was "incredibly easy". Font shaping performance increased by 5-15% compared to the C++ version.

- It took about 2 devs working for half a year to complete this. The speaker says this is pretty good value for an area that is notorious for security bugs.

- Second area is the REGION data type in Win32k GDI. Currently in consumer Windows, disabled by feature flag. Will be enabled in insider builds soon. Performance has been good, some small wins for the Rust version.

- There is now a Windows syscall implemented in completely safe Rust.

TLDR - Rust is inside the Windows Kernel, will be enabled widely soon.


> Font shaping performance increased by 5-15% compared to the C++ version.

Personally, I wouldn't attribute it directly to Rust, but to rewriting. When you develop something, you usually can't account for all future changes that affect performance, design, LOC, robustness, and so on. But with a rewrite, you take them all into account. So there is a big chance that the rewrite will be superior in many areas. It would probably have had the same effect if they had rewritten it in C++ again.


I don't think the claim is that Rust is faster than C++ in general. Rather, they mention this to address the worry that there is always a performance penalty for safety. This example (and many others) show that Rust doesn't compromise on performance. You won't always get 5-15% improvements, but it'll always be competitive.


I think you have to link it to more than one thing. Being able to write complex code in a performant way without worrying that you've introduced another security bug in one of the most infamous subsystems for security is definitely a plus, as it allows you to go after performance you might not otherwise have been able to in the allotted time/resources. At the same time, no two rewrites are the same, and being able to look back and see how the current design performs and where it may be improved is also great insight into how to do it differently.

What I think this note bucks is the notion that choosing a secure language to re-implement something in means performance overhead compared to C/C++ code that has had years or decades of work put into it. It doesn't necessarily argue that A or safe B is inherently faster, just that safe B doesn't imply you should expect it to be safe and slower, just safe.


But rewriting after the exploratory phase is over (or to be more precise: long after the exploratory phase is over) is still an achievement of Rust: not only because it's fashionable (it sure is), but because it gives reasonable confidence that you won't find yourself regretting having rolled back multiple decades of weeding out lurking memory bugs.

A C rewrite promising a moderate speedup isn't so much skipped because it's not worth the effort; it's not done because the speedup isn't worth the risk of having to go through all that again.


There might be a small advantage brought in by Rust. The Rust memory model (R^W, no aliasing) means that some compiler optimizations are broadly applicable in Rust, but only apply in C/C++ where the developer has taken care to signal those constraints to the compiler.
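
A rough illustration (hypothetical function, not from the article): two &mut parameters are guaranteed not to alias, which in C you'd have to promise explicitly with restrict.

    // Because `a` and `b` are both `&mut`, the compiler may assume they never
    // alias, so it can keep the loaded value of `*b` in a register across the
    // write to `*a`. The equivalent C function only gets that freedom if the
    // programmer writes `int *restrict a, int *restrict b`.
    fn add_twice(a: &mut i32, b: &mut i32) {
        *a += *b;
        *a += *b;
    }

    fn main() {
        let (mut x, mut y) = (1, 10);
        add_twice(&mut x, &mut y);
        assert_eq!(x, 21);
    }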


Isn’t this exactly contrary to what second system syndrome would have us believe?


In this case it isn't a "in every way better second system designed from the ground up to include a kitchen sink", it's just an API-compatible rewrite in a different language.


Yes, but it is normal for sayings like that to have contradictory versions, so that one is available in any situation.

For example:

"Too many cooks spoil the broth" vs. "Many hands make light work"

"Look before you leap" vs. "He who hesitates is lost"


Or for a single saying to have antithetical interpretations:

"It's just a few bad apples" vs "one bad apple spoils the barrel"

Or "blood is thicker than water" can mean you prioritise family.

But the full saying is actually "blood of the covenant is thicker than water of the womb" and means the opposite.


I've never heard that "full saying" and Wikipedia says that people who claim this offer no citations for it. One of them is a 9/11 Conspiracy Theorist, the other listed is from one of the crazier than average US religious sects.


It's not a second system but instead a rewrite, since they are not making new architectural decisions.


In this case, aren't they constrained by a gigantic set of test cases? That can even be run automatically?


I am not sure if I can associate it with a rewrite. Can you elaborate a bit?


Maybe. Certainly it's not the case that Rust is just blanket 5-15% faster than C++ for tasks, not even in the vague hand-wavy sense in which Python is typically said to be 10x slower than C++ or Rust.

However, there are some ways that I'd expect similarly capable programmers are likely to often produce faster Rust code than C++ and much fewer ways I'd expect the opposite, so it's not a surprise when this work results in a small unlooked-for performance improvement.


Not quite sure about the down votes here, but there's quite a bit of activity in social media at the moment about performance, saying exactly the same as the parent, tl;dr if you get the data structures right, especially with hindsight, then vNext is almost certainly faster.

e.g. https://lemire.me/blog/2023/04/27/hotspot-performance-engine... which was on HN today.

Andrew Kelley (zig) did a whole talk on data oriented design a while back talking about the same thing. And Casey Muratori is also talking about it a lot right now, and with good reason.


Certainly. Having requirements set in stone helps rewriting any code base. They probably get more time for optimization with the language handling a lot of the security and correctness issues as well as future changes.


Regarding fonts, I feel like when I modded my original Xbox as a teen, it was done using malicious font files on the hdd for the Xbox dashboard haha.


Note that this is high-level GUI stuff and GDI is only in the kernel due to old decisions made in the early 1990s (Windows NT 3.5).


    Microsoft is busy rewriting core Windows library code in memory-safe Rust (theregister.com)
    147 points by mikece 9 hours ago | 106 comments
https://news.ycombinator.com/item?id=35735444


> BlueHat IL 2023: Microsoft rewriting core Windows libraries in Rust (youtube.com) 89 points by mustache_kimono 23 hours ago | 45 comments

The primary source material is this talk: https://www.youtube.com/watch?v=8T6ClX-y2AE


Exact place where they talk about Memory Safety

https://youtu.be/8T6ClX-y2AE?t=2616


I like writing C++ code, and I like using SAL annotations to try to improve safety. I try to remember to be const correct as best as I can. Is Rust something I would enjoy? It's hard to discern the signal from the noise on this lang


Maybe!

If you hate writing cmake/make/vcpkg/conan bs, and want to be able to git clone and build (almost) any project, without installing anything beyond rust+cargo... rust will be nice to use.

If you hate the idea of class hierarchies to try and describe behavior and would prefer to attach behavior to any type through traits... rust will be nice to use.

If you like the idea of having generics checking on said traits at compile time with sensible messages rather than the duck typed macros also termed templates with their horrendous error messages... rust will be nice to use

If you like the idea that the compiler verifies for you at compile time the concept of ownership while giving out references, ensuring 1 mutable reference and 0 immutable references, or N immutable references are allowed, while also ensuring the variable being referenced lives longer or as long as the references... rust will be nice to use

If you love spending time debugging invalid references/pointers, races, and more, then rust isn't going to be nice to use.
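
A tiny (assumed) example of that borrow rule in action:

    fn main() {
        let mut v = vec![1, 2, 3];

        // Any number of shared (immutable) borrows may coexist...
        let a = &v;
        let b = &v;
        println!("{} {}", a.len(), b.len());

        // ...but a mutable borrow must be exclusive. If `a` or `b` were still
        // used after this point, the next line would fail to compile with
        // "cannot borrow `v` as mutable because it is also borrowed as immutable".
        let m = &mut v;
        m.push(4);
    }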


If you don't like the idea that the compiler will fight against you if you try to create multiple simultaneous interchangeable mutable references on a single thread, even when it's the right thing to do in a case, you might struggle with Rust for problems that require that (intrusive linked lists, emulators with multiple interacting objects, etc.).


You can always tell it to shut up by using pointers and unsafe.


What would be the particular difficulty with intrusive linked lists?


concepts are a thing now, compile-time execution keeps getting better, and stuff like clang-tidy helps where C++ falls short.

While C++ will never be as safe as Rust, mastering it is still a must in many domains, including contributing to Rust's compiler backends.


I think you will appreciate that moving objects actually invalidates the original bindings and the compiler checks for this quite effectively.

Though there is some learning curve.


So the equivalent of std::move will also make sure to "null" the original object at the same time?


It's not just a matter of "null"ing the reference. The program just won't compile at all if you try to use that reference again


Surely there's more to it than this? Otherwise you can just set up bugprone-use-after-move checks in C++ and call it a day.


Well yes, for starters this is not an optional check, and it's not just on "bug-prone" moves, it's on all of them.

Then it also has a borrow checker, so if you refer to the contents of another object the language ensures the (non-owning) pointer does not outlive the pointee, at compile time. Though this concept is lexical, so it's quite restrictive.

And then it encodes some forms of thread safety in the language directly.

It also removes things like nullable pointers.


The "more to it" is that the use-after-move checks aren't bugprone in Rust, you cannot use a value after it has been moved in safe code (additionally, when you call a function, values are moved and not copied, and when you capture a value in a closure, it is either by unique mutable reference, immutable reference, or moved and therefore safe*).

Essentially all the stuff that makes std::move questionable "just works" in Rust. It doesn't even exist, values are moved by default and clones must be explicit (equivalent of a copy constructor for non-trivially copyable types).

The other giant advantage is that use-after-free doesn't exist in safe Rust, since you can't have a reference to a value outlive the value.

* It is also a compile error for a closure that captures a value by reference to outlive the value.
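
A minimal example of what "you cannot use a value after it has been moved" looks like in practice (illustrative only):

    fn consume(s: String) {
        println!("{}", s);
    }

    fn main() {
        let s = String::from("hello");
        consume(s); // `s` is moved into `consume`; no std::move equivalent needed
        // println!("{}", s); // uncommenting this fails to compile:
        //                    // error[E0382]: borrow of moved value: `s`
    }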


> the use-after-move checks aren't bugprone in Rust

I can't tell if one of us is confused here or if this is just confusing wording, but the check I referred to is not bug-prone. The pattern of using something after it's been moved from is bug-prone (even though it'd work fine for things like integers, in either language), and it's checking for that bug-prone pattern, hence the name.


In Rust it’s not bug–prone. The compiler knows which types are moved and which are copied, and if you write code that uses something after it was moved you get a compile–time error.


The difference is that your C++ checks are executed at runtime. Rust's borrow-checker rules ensure this occurs at compile-time, before you ever run any code.


The check I mentioned is a compile time check. I'm not referring to ASAN.


In practical terms the original variable or field will be out of scope after the move (well not really, but that's how effective the checking is).


It’s a heuristic. And it doesn’t catch when your move isn’t really a move. And writing move constructors is difficult busywork.


Your comment is really confusing, none of that is true for Rust.


Because it was about C++


Try it and see :)

I didn't find the syntax very ergonomic but then I'm the kinda guy that likes Python because it's so loose


I know this may seem an odd question, but what happens in a post-Rust world, when you NEED to exploit a system?

Such as, say, in 20 years when you want to be able to run custom code on a then-old console.


Rust alone will not save you. Most of the gnarly exploits of late have been logic bugs.


> Most

Citation needed. Log4j comes to mind as a logic bug disaster, but all the data I've seen is that >65% of the high severity bugs come from memory unsafety in most software projects.

https://security.googleblog.com/2022/12/memory-safe-language... https://alexgaynor.net/2020/may/27/science-on-memory-unsafet...


Usually those who want to exploit the system and those building the system have very, very different goals in mind


As long as you have hardware access, you can fuck shit up in ways no software can ever prevent.


Attacking the hardware is always "possible" but can be made unreasonably difficult to pull off in practice, and we've reached that point with game consoles now. The Xbox One and Playstation 4 survived their entire generation without falling to hardware hacks, and the Xbox One didn't get hacked via software either.

Microsoft is so confident in their security model that they openly explained how it works: https://www.youtube.com/watch?v=U7VwtOrwceo


The Xbox 360 still lacks any meaningful software hacks and it will be old enough to vote in November.


Not all exploits are possible because of software bugs: see e.g. https://free60.org/Hacks/Reset_Glitch_Hack/


Yeah, these days a lot of hacking has been voltage glitches and other hardware hacking. Rust ensures no kind of safety when the CPU is not operating within its normal rules.


Rust-ified Windows will still have the fundamental exploitable flaw of Windows - the ability to download binaries from anywhere and give them Admin privs


Which is why if you watch the talk where this was announced, Windows is in the process of requiring signed binaries, and two admin levels like on macOS.

https://www.youtube.com/watch?v=8T6ClX-y2AE


Windows honestly needs this. Problem is, "admin access" isn't something that should be a simple button click, because then every app requests it and every user just hits OK, because that's just the only way to use Windows. macOS has it right where you have to reboot into recovery mode to turn off SIP, which is difficult enough that normal apps don't ask users to do this, but power users will have no problem.


It's an hour long video, got a timestamp?


It has chapters.

Check "Windows 11 Security", "App Isolation", "Sandbox", "Standard User", "Pluton".

Or skim the presentation slides,

https://github.com/dwizzzle/Presentations/blob/master/David%...


It's irrelevant. They've been trying to do those things for years but their ability to execute is completely gone. Their efforts to improve things just make stuff worse.

To name just a few examples: they want all code to be signed but Windows code signing certs are more expensive than the Apple developer programme membership, much harder to obtain, the rules actually make it "impossible" in some cases like if your country issues ID cards without an address on them, they're now forcing mandatory HSMs and if your CA decides to change your subject name you're just SOL because only Windows 11 anticipates that problem but Microsoft can't be bothered maintaining Windows 10 anymore so their solution was never backported. Yet, the Windows 11 security hardware requirements mean many people won't upgrade.

So whilst building a theoretical strategy around sandboxing apps, they aren't even able to get the basics right. If the people making these decisions were actually writing Windows apps themselves, they might realize this and Microsoft would be able to get its teams marching in the same direction but there's not much sign of that from the outside.

Compare to how Apple does it: they run their own code signing CA, assign stable arbitrary identifiers to companies and people, and still manage to sell these certs for less than a Windows certificate whilst also throwing a couple of support incidents into the mix too, something you can't even get from Microsoft at all as far as I can see (and I've tried!).


It is going to be relevant after 2025, regardless.

By the way, some of this stuff is already on Window 11 Previews and can be enabled.

Even if they botch this, like what happened with UWP, the alternative will be moving everything to Azure OS with thin clients, so one way or the other, it will happen.


I imagine lots of users will just run Win10 unsupported at that point. I'd be happy if we could assume it vanishes at that point but seems unlikely.

The issue is that a lot of their security strategy is inherited from UWP. So it's already botched.


They do most of this -- albeit without support -- through the store if that's a viable distribution channel for you. You can actually get support, but be prepared to pay big $$$.


Yes, going via the store fixes some of those problems but introduces others. In particular a lot of corporate users have it disabled and of course they have a lot of arbitrary policies. I didn't know they had dev support if you're in the store, interesting thanks.


YMMV, but I found the macOS app store way more picky than the MS Store. Denying my submission for using the term 'Exit' instead of 'Quit' rubs me the wrong way.


The store is kind of a non-starter for many apps. How many apps do they even have in the store? 0.01% of all Windows apps?


No clue, however, they've made it relatively seamless to publish and download from there. You can also use winget [1] to download signed apps from the store. End users don't need an MSA.

[1] - https://learn.microsoft.com/en-us/windows/package-manager/wi...


It's not that seamless. We've been trying it lately and the onboarding process is still pretty bureaucratic. Like, having to give an age rating to a zip utility doesn't make anything better.


This is a fundamental flaw of nearly every OS, because users need to be able to run software on their computers.

That doesn't mean we should just ignore security elsewhere. Many users know not to trust mysterious executables from the internet, but don't expect a PDF or font file to be able to infect their machine.

I think being able to trust that non-executable files like these won't compromise your system could be a big deal.


How is that any different from Linux or MacOS where you can just wget something and then run it with sudo?


I think this question was posed more at locked-down platforms like consoles.


Not just. But the ability to run custom code. Such as an Android phone in the future, which is running some rust-hardened kernel and has a locked down bootloader.

All of this will become e-waste without the ability to run unsigned code. And not only, by allowing custom code, which some do not allow, the usefulness or even purpose of the device can be extended or altered.

I don't want to throw away a perfectly usable device just because the company made it obsolete.


Attack the hardware: row hammer, jtag, rom swap, ...


You have to start thinking about it now: don't buy things to which you don't have open access. Lobby the politicians to make this happen if you care.

https://www.defectivebydesign.org/


Aren't exploits based on reverse engineering the assembly?


No. In this context reverse engineering is looking at the assembly to understand what it does. Once you understand that, you can figure out how to exploit the code, by, sending a subtly malformed packet which will cause a server to write out of bounds of some array, and corrupt the servers memory. By very carefully corrupting the server's memory the attacker can hijack the server to do whatever they want.


Why waste time on this? Why not just spend the dev time on showing more ads?


The teams working on the kernel and the team working on userspace have always had different priorities and quality of work, to the point that it's not obvious that they are made by the same company. I find lots of things I don't like about Windows userspace (with a few nuggets of greatness in between), but the Windows Kernel is pretty great.


Your question reads as slightly tongue in cheek. But you can learn something by asking and answering it in this case. When should you think about rewriting something?

1. When it's a common recurring source of bugs/problems/pain.

2. When it's no longer fit for purpose. Often either because the purpose has changed in some way or because the purpose was misunderstood when it was created.

In this case at least some of the rewrites were prompted by subsystems being a common source of memory safety bugs continually popping up. Rewriting with technologies and techniques that eliminate some classes of memory safety bugs immediately and in the future is a pretty large payoff for a kernel.


Excuse me, those are not ads, they are special messages from our featured partners.


They're busy writing the code to put the ads in the start menu and all the system settings screens.


Maybe they are writing an ad kernel module in Rust.

To, you know, catch those pesky ad cheaters.


One thing caught my eye:

96 KLOC of C++ is now 152 KLOC of Rust.

What causes the increase, and is that 1.5x ratio typical?


Since Microsoft developed the curious WinRT/Rust language projection, I've wondered if this would happen.

I mean, sure, it's useful anyway but still quite a niche product and was an oddly sudden dive into Rust from Microsoft at the time.

https://github.com/microsoft/windows-rs


Who knows how it will evolve, those are the same folks responsible for deprecating C++/CX and bringing back the WinRT development experience to COM/C++ like in 2000, before .NET came to be.

Eventually they got tired of C++/WinRT, left it half-done (see CppCon 2016 talk about future roadmap plans), and started Rust/winRT.


Now can we please get some basic customizations back, like the taskbar settings and the horrible rounded corners. Enough with aping Apple.


Rounded corners? This is the hill you are choosing to die on? How are these trivialities impacting your day to day use of Windows?

I'd say I'm quite the opposite. I'd much rather hear Microsoft working on core technical improvements rather than adjusting the goddamn roundedness of their UI.


The missing UI features and anti-ergonomic styling is affecting my workday more than tiny improvements in performance or memory safety.

And unfortunately my new computer does not allow rollback


Yeah, like more telemetry!


Honestly the Windows UI is kinda annoying, and it did affect my performance until I recently ditched it for Linux w/ i3 as the WM... I want something straightforward and to the point. The Windows UI peaked at XP; everything else has been inferior.


I agree. XP had the best experience out of any MS OS.


Microsoft keeps messing with the UI in obviously deleterious ways. If they didn't rewrite the entire interface in 11, no one would be complaining about 11's rounded corners. The very fact that they decided to add rounded corners has likely already distracted from kernel work.

Do you remember when Windows 10 was the last version of Windows? I wish we could go back.

https://www.theverge.com/2015/5/7/8568473/windows-10-last-ve...


> The very fact that they decided to add rounded corners has likely already distracted from kernel work.

The Windows team is so huge that I wouldn't be surprised if the people writing the shell and the people writing the kernel barely know each other's names.


I'd give my favorite toes for a Windows computer that can turn on reliably without having to apply innumerable updates, open a document without involving a web browser, or leave my taskbar icons alone from one month to the next. And I would literally pay a fee if my MS-powered work machines were able to go a day without some inane popup notification getting between me and whatever I want to click on. The best UI for an OS is one that stays out of the user's way. I cannot stand all these over-engineered, ad-forwarding, popup-obsessed cartoon interfaces. When I want you to open a PDF then open the GD PDF using Adobe and go away!

(I talk of my work machines over which I have absolutely no control. Every personal machine I own, that isn't a phone, runs some variation of linux.)


IMO a blind spot that all OS vendors have is the "use just once in a while" use case. The problem is that every small little thing that is added accrues while the system is offline, and when it is brought back online you have dozens of updates pending.

I use Windows daily and think it's fine. However today I turned on a laptop in a coffee shop that has been offline for a few weeks, needing to get a few things done. It saturated the small amount of wifi bandwidth allocated to me by the AP for at least a half hour doing stuff, making the work I wanted to do take longer. Under normal circumstances if I used this laptop daily I probably wouldn't have noticed.

It is a paradox that in our world of always-online software that the less you use it the worse it gets.


>> "use just once in awhile" use case.

Like the non-removable link in my start menu for the MS "Mixed Reality Portal". Total waste of good pixels.


On my daily boot into Linux I'm prompted by no end of updates. Hundreds of megabytes of core libraries, fonts, office suites I never asked for but are installed by default, and of course the weekly or so kernel update and getting to play the lottery as to whether your computer will boot again.


On Linux you can turn that off if you don’t like it. And if it installed something you don’t need, just uninstall it. You don’t have to moan about it here, just fix it once and then it won’t annoy you again.


Windows 10 LTSC does most of what you said and can be acquired for less than your toes ;)

I totally get it though, they make it unreasonably difficult to get a usable OS. Default home or even """Professional""" windows makes my eyes water it's so stinky.


> See hobby horse mentioned in title

> Click into comments

> Don't engage with TFA

> Rehash random pet peeves about hobby horse

> Become top comment thread above 100 other comments actually discussing TFA


Takes me back to maybe my favorite HN comment ever: https://news.ycombinator.com/item?id=23003595


> Phone only have 128 GB? Not enough. Need 129 GB.

best thing I've read all week.


The context menu hiding most of the useful stuff one extra click away is my least favorite change to windows 11.


agree on customization, but I like the look of Windows 11.

I just wish they would 'finish the job', rather than having a hodgepodge of Windows 2000 and 7 leftovers that don't blend well.

See how silly this looks.

https://youtu.be/UnlqKXijyY4?t=469


My concern is that Windows is already a massive dumpster fire of random bits and bobs. They haven't completed the migrations from the very earliest versions of Windows; they just increase the complexity at all points. The same is true of most of their major desktop apps including Visual Studio, which runs something like 30 processes just to have the main window running, including a hotchpotch of EdgeView and Node.js bits.

All this becomes is the xkcd trope of "this next migration will really fix our problems".

Unless they can genuinely replace more than one subsystem in one go, they just increase complexity.


It's ok, Moore's Law will save us.


Best practice has always been to avoid kernel moisture, now they're actually shipping rusty kernels. shame

edit: wrong kind of rust


Gives a new meaning to the old idea of "Windows rot"

not that I've experienced that on Win10, which I found to be great.


This is "surprise" because Microsoft announced it?

It has been possible to use Rust to write device drivers that run on Windows kernel space for years, already.

The Windows-rs crate (Microsoft's crate wrapping the Windows API) already has the WDK for a while (i.e.: the special sdk to interact with the kernel).

I welcome the news and agree it is important and meaningful but it is the kind of thing that was easy to see coming.


> already has the WDK for a while

I would interpret "for a while" as more than ~7 weeks.

Version 0.46 added initial WDK support: https://github.com/microsoft/windows-rs/releases/tag/0.46.0


Who is saying this is a surprise?

There are plenty of things in this industry that are hardly surprising but still newsworthy.

The real bummer is that this might eventually be a reason to switch to Win 11, which is something I’ve been trying to avoid for as long as possible.


Great. They’ll add bing ads into std.


Good decision, I love this game


Are they porting their acclaimed tabloid news service or does that stay in cpp?


The tabloid news service runs in the kernel?


Since people hate it so much and Microsoft is treating their OS like a website to be monetized then it wouldn't surprise me if the tabloid service gets embedded in the kernel, so as to prevent people from disabling or removing it.


In the not-too-distant future Intel bootguard et al. will enforce 'secure booting' of the tabloid microkernel.

/s ...unless..?


That's not sarcasm, that's the upcoming cyberpunk future.

One day people will smoke on the streets and will comment "Wonder what happened to the tech wizards, we really could use some of them right now".


Na, they will be too busy trapped in AI created unskippable commercials taking over their audio and visual cortex on their neural linking device to remember there was anything better in the past.


Blink twice if the new hyperscalar Salesforce metaverse VR experience is causing you pain.

blink\nblinbknlibnklbink\nblnink


Absolutely. But things always swing around.

Even in today's age (during COVID) some small % of the populace actually moved out from the big cities and into rural -- or just smaller city -- communities.

I have hope. Sadly the positive societal changes move with the average speed of a glacier.


Glacial pace is accurate. But you're right.


It takes one generation to forget the past, AI LLMs know this.


Would that be some form of API call generated from bare-metal and storing JSON in memory before the system boots? Is that even possible?


>> The tabloid news service runs in the kernel?

You know the browser used to be an integral and inseparable part of the OS right? They're just going back to their old ways :-)


To be honest, it was. It still is, and it's getting more and more integral and inseparable as time passes.

It just didn't need to be their browser. But the more important thing, they were fighting the trend, and making sure their browser didn't work. That is what broke the Netscape plan.


Priorities


Actually, that entire thing is a WebView with some html+js


HN (heart) Rust. Therefore HN (heart) Windows? That will be novel....


I just hope the Windows logo is bigger than the Rust logo...


Any good reason why or are they using Rust for brownie points



Is the performance on par?


Generally, and sometimes better than C++.


Windows kernel already had rust and now has Rust.


haha i wish Microsoft would just rebase Windows on Linux: extract the Windows "experience"(sic) into a UI/Desktop Environment for Linux and then bless some LTS of Linux as the underlying system and call it a day.

Of course they would never, ever do that, but I can hope however hopeless that is.


Why do people on Hacker News dislike the Windows kernel so much, without knowing a gram about the innards of Windows? All you lot see is the Windows shell, which is fairly trivial to replace (ergo why it has changed so often in the past 30 years since NT 3.1).

Windows NT is a remarkably modular kernel architecture, with design decisions far ahead of its time. Things like (acronym overload here) COM, (A)LPC, UMDF, Windows subsystems (for Linux and Android), and an absolutely immense media and utilities ecosystem: Direct3D, DirectWrite, WASAPI, etc.

Why not hope Microsoft open-sources the entire Windows stack including the NT kernel, NTFS drivers, and the Windows API, rather than hope it 'uses Linux'?

> Windows "experience"(sic) into a UI/Desktop Environment for Linux

And for the record, this more-or-less already exists: it's called KDE Plasma Desktop 5.


The Windows kernel really isn't that great by modern standards, neither codewise nor architecturally. And yes I know a lot about it.

There's a LOT of weird stuff lurking in there, and a lot of features they've added in recent years just doesn't work properly at all (anything uwp related...). Even the basics of how you start another process have turned into a labyrinthine mess. Then you have things they never fixed like the totally unhelpful NT file locking semantics that regularly break apps that work fine on UNIX.

NTFS is supposedly a nightmare of tech debt that uses SEH for control flow, yet their attempt to move Windows to a new ReFS stalled and failed. Note: Apple managed this with APFS and Linux distros have routinely introduced new file systems.

COM isn't a part of the kernel but is in the same situation - incredibly complicated and has not been evolved well. Lots of failed rebrands and attempts to rejuvenate it that actually made things worse.


Is the Linux kernel really that great?


Within its own rules, yeah, it's pretty great. Very high performance, huge feature set, everything is configurable and transparent, relatively non-buggy, clean code, runs on everything.

If you want stable driver APIs that allow you to distribute hardware support with the hardware, keeping that investment proprietary, then obviously no it's crap, and that's a pretty important use case.

If your hardware has good driver support in-tree though, then Linux is hard to beat these days. It just gets so much more investment than the NT or Darwin kernels.

There are a few places where Darwin is ahead especially w.r.t. code signing and sandboxing. I can't think of anything that NT in particular excels at though.


Driver guard, safe kernel, userspace graphics drivers, Xerox PARC/Oberon like workflows between kernel and userspace frameworks, setting the rules where graphics APIs are going (DX drives, Vulkan follows),...


Not sure what you mean by oberon-like workflows.

IIRC the new gen graphics frameworks have their origins in work by AMD.


Mantle was just a PC API based on how game console APIs work, like DirectX on Xbox.

As for Oberon, there is enough material out there.

Suffice to say: how PowerShell integrates with .NET, DLLs and COM, enabling OS and application scripting, and the pursuit of safer approaches to OS systems programming instead of yet another UNIX clone written in C.


I think it is. Others might not. But it being open is the best part.


You know, I’d be interested in having a non-Microsoft desktop environment that ran on top of the Win11 kernel… Is that even possible/available?


I said nothing about the windows kernel. I honestly don't care. It's not what the end-user uses. It's the UI and the graphics stack etc that matters for gamers and other users of Windows.

Package that UI and those libraries into the windows experience desktop for Linux and call it a day.

---

If they open sourced Windows that'd be good, too! Hackers could then remove all the garbage like telemetry, ads, etc., and it'd be dope as well. They could hack the UI to be even more tiling-friendly. It'd be amazing.


A few non-desktop things I'm not fond of (allowing that I don't know that Windows counts these as ‘kernel’): Processes are expensive. Files are expensive. File names have nasty restrictions.

> KDE Plasma Desktop 5

Personally, I use KDE because it allows me to replicate one particular Mac feature that's important to me: leaving Control alone for terminal (editor) control characters.


The Windows kernel is radically different from Unix-likes. Doing anything like this would cause a riot from anyone depending on any of the Windows-only features, which is pretty much all Enterprise customers.


A rosetta/wine layer to run legacy things could bridge this gap.


Why bother when we can enjoy an OpenVMS descendent instead.


Is there a specific benefit you think that would bring?


Not the person you asked, but: this would certainly bring more benefits to Linux and its users/community than to Microsoft...


well for one it'd be free. You'd get the windows experience without having to pay a license.

It'd be based on a userland and kernel that is open source.

And the UI would be hackable (if it were opened up).

Those 3 reasons are huge imo


There you go, have fun.

https://reactos.org/


Nice joke


Dumb question, but isn’t Rust an awful name marketing wise? How did the language’s founders settle on it? Hopefully there’s a more interesting history than rust is a thin layer atop bare metal.


Peering into the mists of time in ages past we hear rumours[1] of an ancient sage naming their new language after the hardy fungi Pucciniales[2].

[1] https://www.reddit.com/r/rust/comments/27jvdt/internet_archa...

[2] https://en.m.wikipedia.org/wiki/Rust_(fungus)


From the first Google result:

"Graydon Hoare named the language after the rust fungus because it is robust, distributed, and parallel. He also liked the play on the words from words such as robust, trust, frustrating, rustic, and thrust."

https://www.reddit.com/r/rust/comments/27jvdt/internet_archa...


Good question! Per Wikipedia: "Hoare would later state that Rust was named after the rust fungus, with reference to the fungus's hardiness." See https://en.wikipedia.org/wiki/Rust_%28programming_language%2....


I’ll copy a TLDR verbatim from stack overflow:

„TL;DR: Rust is named after a fungus that is robust, distributed, and parallel. And, Graydon is a biology nerd.“

https://www.reddit.com/r/rust/comments/27jvdt/internet_archa...


I like the name, personally.


Nice try, Microsoft. You'll need to do more than just use more Rust to get me to downgrade to Windows 11.



