Microsoft has been interested in using memory safe languages in the kernel for some time now. An example is the Midori project led by Joe Duffy back in 2009 which explored the idea with a C# derivative.
Rust is a game changer in that it is built from the ground up to offer memory safety without garbage collection and still aims for zero overhead abstractions.
It would be ironic, after their notorious security issues of the early 2000’s, if due to efforts like this, Microsoft Windows ends up being the most secure general operating system.
>Microsoft Windows ends up being the most secure general operating system
It already is. What, exactly, is better than Windows at security features on desktop computers? Linux? There is nothing in there that comes even close to the defensive features of Windows, like HVCI, a subsystem isolated by virtualization mechanisms that checks for driver signatures and the like, which completely prevents tampering with the kernel. Linux's support for Secure Boot only exists to make it convenient to dual boot with Windows; it doesn't do enough to prevent kernel-level rootkits, it's a total placebo, and it's even worse if you use a distro that doesn't ship signed kernels, like Arch Linux. If you're self-signing on the same computer, how exactly are you stopping malware?
Since Vista, the OS has also gained some serious resilience against crashes that I have never seen on other operating systems. For example, it is possible for your desktop session to survive a GPU driver crash. On Linux this is a guaranteed freeze or kernel panic. This is, fortunately, a rare event, but the last few times I've seen my computer freeze on Linux, it was always because of the graphics stack.
OpenBSD's slogan about having few remotely exploitable vulnerabilities out of the box doesn't mention that this is because it has literally no features enabled out of the box.
macOS and iOS are the systems with by far the greatest number of privilege escalation failures. In fact, what do people think jailbreaks are? Some of them are truly frightening when you think about what could have been. Multiple jailbreaks were made that could be run just by browsing a webpage in Safari. This means they punched through the browser, punched through privilege escalation, and had the potential to install a rootkit on your phone. Just by visiting. A. Webpage.
How many times has such a thing happened on Windows in recent years, where visiting a webpage installed a rootkit on your computer?
Disagree, macOS is way ahead. Windows code signing is a half-implemented joke that doesn't do much, and apps can easily tamper with each other at will (unless they're installed using MSIX, which not much software uses), whereas macOS code signing actually works and will stop apps tampering with each other completely.
The macOS app sandbox actually works. On Windows nothing uses the app sandbox due to serious bugs and performance regressions. Chrome rolls its own sandbox for example.
SIP successfully stops macOS getting screwed up. The number of Windows installs out there in some bizarre half-broken state is incredible. It's routinely the case that API calls which work on one Windows system don't work on others even at the same patch level for no clear reason at all, which trace back to weird configuration differences to the OS.
Windows still relies heavily on client side virus scanning. Apple do malware scanning server side and then lean on their code signing and integrity systems instead, which is one reason Macs have great battery life.
And then there's all the other more well known security things Apple do with secure processors and the like.
Windows is just so far behind and they're so drowning in tech debt it's unlikely they'll ever catch up.
Linux isn't as easily comparable or categorized (especially given it's just the kernel, and there are dozens of other "products" which make up an equivalent to what Microsoft would call "Windows 11"). Nonetheless: 306 [5]
You should check your preconceptions and susceptibility to Apple's marketing. No one is substantially far ahead or far behind (except maybe Android, but again, these are hard to compare apples-to-apples). Everyone still experiences roughly the same class and magnitude of vulnerabilities. But, everyone is also getting better at it.
I'm not sure how that rebuts my point? macOS has a much lower number of CVEs than Windows. But there's a lot more to security than CVEs, and my post was about issues that CVEs don't track. BTW Apple marketing isn't what led to my views, they're based on direct experience with the security mechanisms of both operating systems up close and personal.
Well, you know what they say about being too close to something to speak on it objectively. Which in this case means: there's the way these systems were designed to work, and how they actually work toward the end-goal of keeping the systems they secure, secure.
I'll believe that Apple's operating systems are significantly and measurably more secure when they can make it a few years without a maliciously formatted iMessage crashing the kernel. Until then, it's arguing minutiae. Everyone has security issues. Everyone is taking steps toward improving their security. No one is so far ahead that they're worth white knighting on HackerNews.
The CVEs / Install Base ratio is a pretty silly metric for determining the security of a product. A large number of CVEs could tell you that the users and developers of a particular product care a lot (or are paranoid or are simply security minded) about security, and want to give notice of issues to as many people as possible.
This is a live issue in the Rust community, which does appear to care a great deal about security, as to how to deal with minor/theoretical vulnerabilities perhaps unworthy of a CVE.
Hold your horses there, my good friend. Yes, Windows is better now than it was in the past, but it is still a shitty OS. None of them are actually good. All of them continuously fail every single year at hacker gatherings/hackathons/whatever public event, with multiple zero-days showing up. Every single major OS out there is a joke from a security point of view.
This. Parent is deluded if they think Windows can even be compared to a hardened Guix setup with rollbacks and sandboxed Chromium/IceCat.
I would think otherwise if Windows used virtualisation and sandboxing to run old Win32 apps from XP and below. Because lots of enterprise software depends on proper compatibility modes, and there the security gets thrown out of the window.
iOS has some seriously nifty security mechanisms that take advantage of features baked into Apple's processors. Stuff like pointer authentication and the page protection layer (something akin to HVCI, without the hypervisor). Jailbreaks are getting harder and harder.
Both Windows and iOS (I can't speak to macOS) are becoming incredibly security-mature operating systems via these security mechanisms that get stacked on top of one another. Saying one is better than the other in terms of security is hard to quantify.
Windows still does have some issues with user mode logical exploitation through DLL hijacking, or issues with credential relaying, although relaying targets are generally known and mitigated by enterprises.
iOS still has issues with its remote attack surface; however, it has gotten better with iOS 16, BlastDoor, and Lockdown.
I've tested a two- or three-year-old Chrome version with a JIT compiler vulnerability, and guess what: on an empty Linux VM it managed to escape Chrome and execute code.
Meanwhile, on Windows with CrowdStrike software installed, Chrome just showed some error message about memory access.
I'm not sure which one handled that attack, Windows or CrowdStrike, but either way I've been impressed.
I know Windows has many security features disabled by default. Where do I start to learn about them and maybe get some nice baseline recommendations for my home/office laptop?
> Rust is a game changer in that it is built from the ground up to offer memory safety without garbage collection and still aims for zero overhead abstractions.
Safe Rust cannot represent circular data structures, which makes entire classes of algorithms and architectures unimplementable. You have to work around these limitations by creating auxiliary structures for tracking references or by using reference counting; neither is a "zero overhead abstraction." Rust is only zero overhead if all you do is pass values up and down the call stack. Its false appeal says more about the simplistic types of applications folks are writing than anything else.
> Safe Rust cannot represent circular data structures which makes entire classes of algorithms and architectures unimplementable
In safe Rust. So just don't use safe Rust for those parts. Unsafe isn't evil, it's there for a reason. You should avoid it when possible and when there are no downsides, but sometimes there are, and that's ok.
Often the workaround for graph representations that safe Rust can't express with the usual pointer approach is actually faster: you keep the data in an array, which improves your cache hit rate. Even C++ programmers will often use the same workaround, either for safety or for performance reasons.
As for overall performance in real programs, Rust seems to consistently do as well as or better than C++, as the cases from Microsoft here show.
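To make the index-based workaround concrete, here's a minimal sketch in safe Rust (the `Graph`/`Node` types are purely illustrative, not anything from the standard library):

```rust
// Nodes live in a Vec and edges are stored as indices, so cycles are
// representable without raw pointers, Rc, or unsafe code.
struct Node {
    value: i32,
    edges: Vec<usize>, // indices into Graph::nodes; may form cycles
}

struct Graph {
    nodes: Vec<Node>,
}

impl Graph {
    fn add_node(&mut self, value: i32) -> usize {
        self.nodes.push(Node { value, edges: Vec::new() });
        self.nodes.len() - 1
    }

    fn add_edge(&mut self, from: usize, to: usize) {
        self.nodes[from].edges.push(to);
    }
}

fn main() {
    let mut g = Graph { nodes: Vec::new() };
    let a = g.add_node(1);
    let b = g.add_node(2);
    g.add_edge(a, b);
    g.add_edge(b, a); // a cycle, no problem
    println!("node a holds {}", g.nodes[a].value);
}
```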
> Safe Rust cannot represent circular data structures which makes entire classes of algorithms and architectures unimplementable.
That's not accurate.
Safe Rust can represent circular data structures. It cannot represent circular data structures with 0 overhead.
But you can use Rc/Arc/Weak + RefCell for that, and it is not very hard. Or use one of the crates that expose ways of building self-referential structures in a safe way (of course they use unsafe underneath, but the API is safe), like ouroboros.
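For what it's worth, a small sketch of the Rc/Weak + RefCell approach (a hypothetical parent/child tree, just to show the shape of it):

```rust
use std::cell::RefCell;
use std::rc::{Rc, Weak};

// Children are owned strongly; the back-pointer to the parent is Weak so the
// reference-count cycle doesn't leak memory.
struct TreeNode {
    parent: RefCell<Weak<TreeNode>>,
    children: RefCell<Vec<Rc<TreeNode>>>,
}

fn main() {
    let root = Rc::new(TreeNode {
        parent: RefCell::new(Weak::new()),
        children: RefCell::new(Vec::new()),
    });
    let child = Rc::new(TreeNode {
        parent: RefCell::new(Rc::downgrade(&root)),
        children: RefCell::new(Vec::new()),
    });
    root.children.borrow_mut().push(child);

    // The child can still reach its parent through the Weak reference.
    let children = root.children.borrow();
    assert!(children[0].parent.borrow().upgrade().is_some());
}
```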
> Microsoft has been interested in using memory safe languages in the kernel for some time now. An example is the Midori project led by Joe Duffy back in 2009 which explored the idea with a C# derivative.
That was only the beginning, followed by Keystore2, the new Ultra-wideband (UWB) stack, DNS-over-HTTP/3, and Android's Virtualization Framework (AVF) on Android 13.
I was curious to have a look at that PR, but it seems it was closed after a long discussion (mainly because it would add ~30% more methods to Vec?). So which changes landing in upstream Rust is the bullet point referring to? Was the Keyword Generics Initiative born out of this?
> (mainly because it would add ~30% more methods to Vec?)
Sort of. Rather than bolting on fallible methods adhoc to an existing type, it was felt it would be better to take a step back and actually design this properly. This includes third party crates experimenting with different options.
Maybe we should have a FallibleVec type? Maybe common vec-like methods could be abstracted out into a `RawVec` type? Maybe both? Maybe the (unstable) `Allocator` API could be adapted to better suit all these cases? Whatever the case, it's not great to be adding a ton of methods in the heat of the moment.
Panicking on OOM was always a questionable design decision.
It doesn't always mean that your app has no memory, it just means that your chosen allocator has no free memory. That's not always an unrecoverable situation.
1. It's not always possible to detect memory allocation failure (e.g., Linux overcommit). So many applications will have to design their operation around the possibility that out-of-memory means someone is going to axe their process unwillingly anyways to support those platforms.
2. Memory allocations tend to be pretty close to omnipresent. If you consider a stack overflow to be a memory allocation failure, then literally every call is a chance for memory allocation failure. But even before then, there's often the use of small amounts of heap memory (things like String in particular).
3. Recovery from OOM is challenging, as you can't do anything that might allocate memory in the recovery path. Want to use Box<dyn Error> for your application's error type? Oops, can't do that, since the allocation of the error for OOM might itself cause an allocation failure!
You can get a view into what Rust looks like with fallible allocation in Rust for Linux, since Linus required this. So e.g. Rust for Linux's Vec only has try_push() and you'd better have successfully try_reserve'd enough space to push into or it may fail.
NB The prose for Vec here, including examples, is copied from the "real" Vec in Rust's standard library, so it talks about features like push but those are not actually provided in Rust for Linux.
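In stable std Rust you can already get part of the way there with `Vec::try_reserve`; a minimal sketch of the "reserve first, then write" pattern (the helper name here is made up for illustration):

```rust
use std::collections::TryReserveError;

// Report allocation failure to the caller instead of aborting the process.
fn append_all(dst: &mut Vec<u8>, src: &[u8]) -> Result<(), TryReserveError> {
    dst.try_reserve(src.len())?;  // the only point where allocation can fail
    dst.extend_from_slice(src);   // guaranteed not to reallocate now
    Ok(())
}

fn main() {
    let mut buf = Vec::new();
    append_all(&mut buf, b"hello").expect("allocation failed");
    assert_eq!(&buf[..], b"hello");
}
```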
I think it speaks to the fact that Rust's original/ideal use case (writing a web browser) is slightly higher-level than actual kernel-level OS work (just like C++'s is). It's expanded into kernel territory, and done a good job of it, but there are places like this where a choice was made that creates some dissonance.
If you're writing a high-performance userspace application, there's a good chance you don't want to deal with handling an error in every single place where your code allocates. I think Rust made the right choice, even though it means some growing pains as it starts being used in kernels
It really depends on what you are doing. If you're writing an application running on an operating system, you don't need out-of-memory handling; it will even make programming harder.
Like I said, if your allocator is actually the system allocator then yes maybe you're right. If instead you're doing something like using an arena allocator then OOM isn't a huge deal, because all you've done is exhaust a fixed buffer rather than system RAM; totally recoverable. There are huge performance gains to be had with using custom allocators where appropriate.
Sure, and you can do that with Rust today. There's nothing stopping you from writing a custom data structure with its own custom allocator. The "abort on OOM" policy is not a property of the language, it's a property of certain collections in libstd that use the global allocator.
I think the point here is that users would like to use Vec, HashMap etc using that arena allocator and handle OOM manually instead of having to write their own collection types.
That's not necessarily the case though. It may be worth it for, say, a document viewer to catch the OOM condition and show a user-friendly error instead of dying. Of course, Linux with overcommit memory can't do this. But on Windows, that's totally a thing that can happen.
I was curious so I did a Brave search to find out if that behavior can be changed. You can supposedly (I haven't tried it) echo 2 into /proc/sys/vm/overcommit_memory and the kernel will refuse to commit more memory for a process than the available swap space and a fraction of available memory (which is also configurable). See https://www.win.tue.nl/~aeb/linux/lk/lk-9.html#ss9.6 for more details.
I usually write my programs to only grab a little more memory than is actually needed, so I might play around with this at home. I wonder if this has led to a culture of grabbing more memory than is actually needed, since mallocs only fail at large values if everything is set the traditional way.
Defaulting to overcommit seems risky. I'd much rather the system tell me no more memory is available than just having something segfault. I could always wait a bit and try again or something or at the very least shut down the program in a controlled manner.
You can add enough swap space that fork+execve always works in practice (although vfork or vfork-style clone is obviously better if the goal is to execve pretty much immediately anyway). Linux allows reserving address space with PROT_NONE, populating it later with mprotect or MAP_FIXED, and many programs do it like that.
However, I stopped using vm.overcommit_memory=2 because the i915 driver has something called the GEM shrinker, and that never runs in that mode. That means all memory ends up going to the driver over time, and other allocations fail eventually. Other parts of the graphics stack do not handle malloc failures gracefully, either. In my experience, that meant I got many more desktop crashes in mode 2 than in the default mode with the usual kernel OOM handler and its forced process termination.
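A sketch of the reserve-then-commit pattern mentioned above, written against the `libc` crate (that crate is an assumption on my part; the comment is about the raw Linux calls):

```rust
use libc::{MAP_ANONYMOUS, MAP_FAILED, MAP_PRIVATE, PROT_NONE, PROT_READ, PROT_WRITE};
use std::ptr;

fn main() {
    let reserve_len = 1 << 20; // reserve 1 MiB of address space, commit nothing
    unsafe {
        let addr = libc::mmap(
            ptr::null_mut(),
            reserve_len,
            PROT_NONE, // reserved but inaccessible: no memory is committed yet
            MAP_PRIVATE | MAP_ANONYMOUS,
            -1,
            0,
        );
        assert_ne!(addr, MAP_FAILED);

        // Later, populate the first page by making it readable and writable.
        let rc = libc::mprotect(addr, 4096, PROT_READ | PROT_WRITE);
        assert_eq!(rc, 0);

        *(addr as *mut u8) = 42; // now safe to touch the committed page
    }
}
```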
It is the best behavior for the language you are writing your browser engine in.
The thing is that, ironically, a browser engine is only marginally inside Rust's niche. (Or maybe it's even marginally outside; at this point I don't think anybody knows.) And for most things that fit squarely at the focus of the language, it is a bad choice.
The original design had no allocator on collections and no alloc crate.
If you care about allocation, you'd use your own data structures with your own allocator in a no_std binary.
The alloc crate came later, and custom allocator support came later still; it is not even stable yet.
For people that might be confused, setting a custom global allocator is possible in stable, but the Allocator trait isn't yet, so specifying the allocator for a specific instance of a Vec isn't possible in stable.
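To illustrate the stable part of that, a minimal sketch of swapping the global allocator (here it just forwards to the system allocator, but real code could put arena or accounting logic in these hooks):

```rust
use std::alloc::{GlobalAlloc, Layout, System};

// A drop-in global allocator that simply delegates to the system allocator.
struct MyAlloc;

unsafe impl GlobalAlloc for MyAlloc {
    unsafe fn alloc(&self, layout: Layout) -> *mut u8 {
        System.alloc(layout)
    }

    unsafe fn dealloc(&self, ptr: *mut u8, layout: Layout) {
        System.dealloc(ptr, layout)
    }
}

#[global_allocator]
static GLOBAL: MyAlloc = MyAlloc;

fn main() {
    // Every Vec/String/Box in the program now goes through MyAlloc.
    let v = vec![1, 2, 3];
    println!("{:?}", v);
}
```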
Indeed - I can understand that some languages like JavaScript don't care, that's fine.
But the entire value proposition of Rust is reliability and predictability. Use this in critical applications. And this is the first time this language is being used in a major OS.
The fact that these changes weren't accepted is not a good sign.
As mentioned elsewhere, a different design is being pursued. In addition, lots of similar changes have already landed as part of the Rust-in-Linux work, which has many of the same needs.
In addition, Rust doesn't require you to use allocation, ever. It was originally expected that users who can't handle allocation failures would eschew libstd in favor of libcore (a subset of libstd with all the allocating parts removed).
The speaker covers a bunch of areas and the final part of the talk (around 10 minutes) is about Microsoft introducing Rust in some self-contained areas in Windows.
Some highlights:
- Their focus is on "killing bug classes". More context in this post by Microsoft Research from 2019 - A proactive approach to more secure code.
- They want to do this with memory safe languages, CPU architectural changes and safer language subsets. This talk focussed on memory safe languages, specifically Rust.
- First area they've introduced Rust in - a cross platform rewrite of a font parser called DWriteCore. The team reported that parsing was "incredibly easy". Font shaping performance increased by 5-15% compared to the C++ version.
- It took about 2 devs working for half a year to complete this. The speaker says this is pretty good value for an area that is notorious for security bugs.
- Second area is the REGION data type in Win32k GDI. Currently in consumer Windows, disabled by feature flag. Will be enabled in insider builds soon. Performance has been good, some small wins for the Rust version.
- There is now a Windows syscall implemented in completely safe Rust.
TLDR - Rust is inside the Windows Kernel, will be enabled widely soon.
> Font shaping performance increased by 5-15% compared to the C++ version.
Personally, I wouldn't attribute it directly to Rust, but to the rewriting itself. When you develop something, you usually can't account for all future changes that affect performance, design, LOC, robustness, and so on. But with a rewrite, you take them all into account. So there is a big chance that the rewrite will be superior in many areas. It would probably have had the same effect if they had rewritten it in C++ again.
I don't think the claim is that Rust is faster than C++ in general. Rather, they mention this to address the worry that there is always a performance penalty for safety. This example (and many others) show that Rust doesn't compromise on performance. You won't always get 5-15% improvements, but it'll always be competitive.
I think you have to link it to more than one thing. Being able to write complex code in a performant way without being worried you've introduced another security bug in one of the most infamous subsystems for security is definitely a plus, as it allows you to go after performance you might not otherwise have been able to go after in the allotted time/resources. At the same time, no two rewrites are the same, and being able to look back and see how the current design performs and where it may be improved is also great insight into how to do it differently.
What I think this note bucks is the idea that choosing a safe language to re-implement something in means a performance overhead compared to C/C++ code that has had years or decades of work put into it. It doesn't necessarily argue that A or safe B is inherently faster, just that safe B doesn't imply you should expect it to be safe and slower, just safe.
But rewriting after the exploratory phase is over (or to be more precise: long after the exploratory phase is over) is still an achievement of Rust: not only because it's fashionable (it sure is), but because it gives reasonable confidence that you won't find yourself regretting having rolled back multiple decades of weeding out lurking memory bugs.
A C rewrite promising a moderate speedup wouldn't so much be skipped because it was not worth the effort; such rewrites are not done because the speedup isn't worth the risk of having to go through all that again.
There might be a small advantage brought in by Rust. The Rust memory model (R^W, no aliasing) does mean that some compiler optimizations are broadly applicable in Rust, but only apply in C/C++ where the developer has taken care to signal those constraints to the compiler.
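A tiny sketch of what that means in practice: in the Rust signature below the compiler may assume `dst` and `src` don't alias, whereas the equivalent C would need `restrict` annotations to promise the same thing (the function itself is just an illustrative example):

```rust
// `&mut [f32]` is a unique reference and `&[f32]` is shared, so the optimizer
// can assume the two slices don't overlap.
fn add_into(dst: &mut [f32], src: &[f32]) {
    for (d, s) in dst.iter_mut().zip(src.iter()) {
        *d += *s;
    }
}

fn main() {
    let mut dst = vec![1.0_f32, 2.0, 3.0];
    let src = vec![0.5_f32, 0.5, 0.5];
    add_into(&mut dst, &src);
    assert_eq!(dst, [1.5, 2.5, 3.5]);
}
```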
In this case it isn't a "in every way better second system designed from the ground up to include a kitchen sink", it's just an API-compatible rewrite in a different language.
I've never heard that "full saying" and Wikipedia says that people who claim this offer no citations for it. One of them is a 9/11 Conspiracy Theorist, the other listed is from one of the crazier than average US religious sects.
Maybe. Certainly it's not the case that Rust is just blanket 5-15% faster than C++ for tasks, not even in the vague hand-wavy way that Python is typically said to be 10x slower than C++ or Rust.
However, there are some ways that I'd expect similarly capable programmers are likely to often produce faster Rust code than C++ and much fewer ways I'd expect the opposite, so it's not a surprise when this work results in a small unlooked-for performance improvement.
Not quite sure about the down votes here, but there's quite a bit of activity in social media at the moment about performance, saying exactly the same as the parent, tl;dr if you get the data structures right, especially with hindsight, then vNext is almost certainly faster.
Andrew Kelley (zig) did a whole talk on data oriented design a while back talking about the same thing. And Casey Muratori is also talking about it a lot right now, and with good reason.
Certainly. Having requirements set in stone helps rewriting any code base. They probably get more time for optimization with the language handling a lot of the security and correctness issues as well as future changes.
I like writing C++ code, and I like using SAL annotations to try to improve safety. I try to remember to be const-correct as best as I can. Is Rust something I would enjoy? It's hard to discern the signal from the noise on this language.
If you hate writing cmake/make/vcpkg/conan bs, and want to be able to git clone and build (almost) any project, without installing anything beyond rust+cargo... rust will be nice to use.
If you hate the idea of class hierarchies to try and describe behavior and would prefer to attach behavior to any type through traits... rust will be nice to use.
If you like the idea of having generics checking on said traits at compile time with sensible messages rather than the duck typed macros also termed templates with their horrendous error messages... rust will be nice to use
If you like the idea that the compiler verifies for you at compile time the concept of ownership while giving out references, ensuring 1 mutable reference and 0 immutable references, or N immutable references are allowed, while also ensuring the variable being referenced lives longer or as long as the references... rust will be nice to use
If you love spending time debugging invalid references/pointers, races, and more, then Rust isn't going to be nice to use.
If you don't like the idea that the compiler will fight against you if you try to create multiple simultaneous interchangeable mutable references on a single thread, even when it's the right thing to do in a case, you might struggle with Rust for problems that require that (intrusive linked lists, emulators with multiple interacting objects, etc.).
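For anyone who hasn't hit it yet, this is the kind of restriction being described (a minimal, contrived sketch):

```rust
fn main() {
    let mut v = vec![1, 2, 3];

    let first = &mut v;
    // let second = &mut v; // error[E0499]: cannot borrow `v` as mutable
    //                      // more than once at a time

    first.push(4);
    println!("{:?}", first);
}
```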
Well yes, for starters this is not an optional check, and it's not just on "bug-prone" moves, it's on all of them.
Then it also has a borrow checker, so if you refer to the contents of another object the language ensures the (non-owning) pointer does not outlive the pointee, at compile time. Though this concept is lexical, so it's quite restrictive.
And then it encodes some forms of thread safety in the language directly.
The "more to it" is that the use-after-move checks aren't bugprone in Rust, you cannot use a value after it has been moved in safe code (additionally, when you call a function, values are moved and not copied, and when you capture a value in a closure, it is either by unique mutable reference, immutable reference, or moved and therefore safe*).
Essentially all the stuff that makes std::move questionable "just works" in Rust. It doesn't even exist, values are moved by default and clones must be explicit (equivalent of a copy constructor for non-trivially copyable types).
The other giant advantage is that use-after-free doesn't exist in safe Rust, since you can't have a reference to a value outlive the value.
* It is also a compile error for a closure that captures a value by reference to outlive the value.
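Concretely, the compile-time move check looks like this (a trivial sketch):

```rust
fn consume(s: String) {
    println!("{s}");
}

fn main() {
    let s = String::from("hello");
    consume(s); // `s` is moved into `consume` here

    // println!("{s}"); // error[E0382]: borrow of moved value: `s`

    let n = 42; // i32 is Copy, so "moving" it just copies and `n` stays usable
    consume(n.to_string());
    println!("{n}");
}
```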
> the use-after-move checks aren't bugprone in Rust
I can't tell if one of us is confused here or if this is just confusing wording, but the check I referred to is not bug-prone. The pattern of using something after it's been moved from is bug-prone (even though it'd work fine for things like integers, in either language), and it's checking for that bug-prone pattern, hence the name.
In Rust it's not bug-prone. The compiler knows which types are moved and which are copied, and if you write code that uses something after it was moved you get a compile-time error.
The difference is that your C++ checks are executed at runtime. Rust's borrow-checker rules ensure this occurs at compile-time, before you ever run any code.
Citation needed. Log4j comes to mind as a logic bug disaster, but all the data I've seen is that >65% of the high severity bugs come from memory unsafety in most software projects.
Attacking the hardware is always "possible" but can be made unreasonably difficult to pull off in practice, and we've reached that point with game consoles now. The Xbox One and Playstation 4 survived their entire generation without falling to hardware hacks, and the Xbox One didn't get hacked via software either.
Yeah, these days a lot of hacking has been voltage glitches and other hardware hacking. Rust ensures no kind of safety when the CPU is not operating within its normal rules.
Rust-ified Windows will still have the fundamental exploitable flaw of Windows - the ability to download binaries from anywhere and give them Admin privs
Which is why if you watch the talk where this was announced, Windows is in the process of requiring signed binaries, and two admin levels like on macOS.
Windows honestly needs this. The problem is that "admin access" isn't something that should be a simple button click, because then every app requests it and every user just hits OK, because that's just the only way to use Windows. macOS has it right, where you have to reboot into recovery mode to turn off SIP, which is difficult enough that normal apps don't ask users to do this, but power users will have no problem.
It's irrelevant. They've been trying to do those things for years but their ability to execute is completely gone. Their efforts to improve things just make stuff worse.
To name just a few examples: they want all code to be signed, but Windows code signing certs are more expensive than the Apple developer programme membership and much harder to obtain. The rules actually make it "impossible" in some cases, like if your country issues ID cards without an address on them. They're now forcing mandatory HSMs, and if your CA decides to change your subject name you're just SOL, because only Windows 11 anticipates that problem and Microsoft can't be bothered maintaining Windows 10 anymore, so their solution was never backported. Yet the Windows 11 security hardware requirements mean many people won't upgrade.
So whilst building a theoretical strategy around sandboxing apps, they aren't even able to get the basics right. If the people making these decisions were actually writing Windows apps themselves, they might realize this and Microsoft would be able to get its teams marching in the same direction but there's not much sign of that from the outside.
Compare to how Apple does it: they run their own code signing CA, assign stable arbitrary identifiers to companies and people, and still manage to sell these certs for less than a Windows certificate whilst also throwing a couple of support incidents into the mix too, something you can't even get from Microsoft at all as far as I can see (and I've tried!).
It is going to be relevant after 2025, regardless.
By the way, some of this stuff is already in Windows 11 Previews and can be enabled.
Even if they botch this, like it happened to UWP, the alternative will be moving everything to Azure OS with thin clients, so one way or the other, it will happen.
They do most of this -- albeit without support -- through the store if that's a viable distribution channel for you. You can actually get support, but be prepared to pay big $$$.
Yes, going via the store fixes some of those problems but introduces others. In particular a lot of corporate users have it disabled and of course they have a lot of arbitrary policies. I didn't know they had dev support if you're in the store, interesting thanks.
YMMV, but I found the macOS app store way more picky than the MS Store. Denying my submission for using the term 'Exit' instead of 'Quit' rubs me the wrong way.
No clue, however, they've made it relatively seamless to publish and download from there. You can also use winget [1] to download signed apps from the store. End users don't need an MSA.
It's not that seamless. We've been trying it lately and the onboarding process is still pretty bureaucratic. Like, having to give an age rating to a zip utility doesn't make anything better.
This is a fundamental flaw of nearly every OS, because users need to be able to run software on their computers.
That doesn't mean we should just ignore security elsewhere. Many users know not to trust mysterious executables from the internet, but don't expect a PDF or font file to be able to infect their machine.
I think being able to trust that non-executable files like these won't compromise your system could be a big deal.
Not just. But the ability to run custom code. Such as an Android phone in the future, which is running some Rust-hardened kernel and has a locked-down bootloader.
All of this will become e-waste without the ability to run unsigned code. And not only that: by allowing custom code, which some devices do not allow, the usefulness or even the purpose of the device can be extended or altered.
I don't want to throw away a perfectly usable device just because the company made it obsolete.
No. In this context reverse engineering is looking at the assembly to understand what it does. Once you understand that, you can figure out how to exploit the code by, say, sending a subtly malformed packet which will cause a server to write out of bounds of some array and corrupt the server's memory. By very carefully corrupting the server's memory, the attacker can hijack the server to do whatever they want.
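To tie that back to the topic: in Rust the same attacker-controlled index can't silently scribble over adjacent memory, because slice accesses are bounds-checked (a simplified, hypothetical packet handler, purely for illustration):

```rust
// A hypothetical handler: the index comes from untrusted input.
fn handle_packet(buf: &mut [u8], attacker_index: usize, value: u8) {
    match buf.get_mut(attacker_index) {
        Some(slot) => *slot = value, // in-bounds write only
        None => eprintln!("rejected out-of-bounds write at {attacker_index}"),
    }
    // Even `buf[attacker_index] = value` would panic rather than corrupt
    // memory, turning a potential hijack into a crash/denial of service.
}

fn main() {
    let mut buf = [0u8; 4];
    handle_packet(&mut buf, 2, 0xFF);   // ok
    handle_packet(&mut buf, 999, 0xFF); // rejected, no memory corruption
}
```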
The teams working on the kernel and the teams working on userspace have always had different priorities and quality of work, to the point that it's not obvious that they are made by the same company. I find lots of things I don't like about Windows userspace (with a few nuggets of greatness in between), but the Windows kernel is pretty great.
Your question reads as slightly tongue in cheek. But you can learn something by asking and answering it in this case. When should you think about rewriting something?
1. When it's a common recurring source of bugs/problems/pain.
2. When it's no longer fit for purpose. Often either because the purpose has changed in some way or because the purpose was misunderstood when it was created.
In this case at least some of the rewrites were prompted by subsystems being a common source of memory safety bugs continually popping up. Rewriting with technologies and techniques that eliminate some classes of memory safety bugs immediately and in the future is a pretty large payoff for a kernel.
Who knows how it will evolve, those are the same folks responsible for deprecating C++/CX and bringing back the WinRT development experience to COM/C++ like in 2000, before .NET came to be.
Eventually they got tired of C++/WinRT, left it half-done (see the CppCon 2016 talk about future roadmap plans), and started Rust/WinRT.
Rounded corners? This is the hill you are choosing to die on? How are these trivialities impacting your day to day use of Windows?
I'd say I'm quite the opposite. I'd much rather hear Microsoft working on core technical improvements rather than adjusting the goddamn roundedness of their UI.
Honestly the Windows UI is kinda annoying, and it did affect my performance until I recently ditched it for Linux with i3 as the WM... I want something straightforward and to the point. The Windows UI peaked at XP; everything else has been inferior.
Microsoft keeps messing with the UI in obviously deleterious ways. If they didn't rewrite the entire interface in 11, no one would be complaining about 11's rounded corners. The very fact that they decided to add rounded corners has likely already distracted from kernel work.
Do you remember when Windows 10 was the last version of Windows? I wish we could go back.
> The very fact that they decided to add rounded corners has likely already distracted from kernel work.
The Windows team is so huge that I wouldn't be surprised if the people writing the shell and the people writing the kernel barely know each other's names.
I'd give my favorite toes for a Windows computer that can turn on reliably without having to apply innumerable updates, open a document without involving a web browser, or leave my taskbar icons alone from one month to the next. And I would literally pay a fee if my MS-powered work machines were able to go a day without some inane popup notification getting between me and whatever I want to click on. The best UI for an OS is one that stays out of the user's way. I cannot stand all these over-engineered, ad-forwarding, popup-obsessed cartoon interfaces. When I want you to open a PDF, then open the GD PDF using Adobe and go away!
(I talk of my work machines over which I have absolutely no control. Every personal machine I own, that isn't a phone, runs some variation of linux.)
IMO a blind spot that all OS vendors have is the "use just once in a while" use case. The problem is that every small little thing that is added accrues while the system is offline, and when it is brought back online you have dozens of updates pending.
I use Windows daily and think it's fine. However today I turned on a laptop in a coffee shop that has been offline for a few weeks, needing to get a few things done. It saturated the small amount of wifi bandwidth allocated to me by the AP for at least a half hour doing stuff, making the work I wanted to do take longer. Under normal circumstances if I used this laptop daily I probably wouldn't have noticed.
It is a paradox that in our world of always-online software that the less you use it the worse it gets.
On my daily boot into Linux I'm prompted by no end of updates. Hundreds of megabytes of core libraries, fonts, and office suites I never asked for but are installed by default, and of course the weekly or so kernel update and getting to play the lottery as to whether your computer will boot again.
On Linux you can turn that off if you don’t like it. And if it installed something you don’t need, just uninstall it. You don’t have to moan about it here, just fix it once and then it won’t annoy you again.
Windows 10 LTSC does most of what you said and can be acquired for less than your toes ;)
I totally get it though, they make it unreasonably difficult to get a usable OS. Default Home or even """Professional""" Windows makes my eyes water, it's so stinky.
My concern is that Windows is already a massive dumpster fire of random bits and bobs. They haven't completed the migrations from the very earliest versions of Windows; they just increase the complexity at all points. The same is true of most of their major desktop apps including Visual Studio, which runs something like 30 processes just to have the main window running, including a hotchpotch of EdgeView and Node.js bits.
All this becomes is the xkcd trope of "this next migration will really fix our problems".
Unless they can genuinely replace more than one subsystem in one go, they just increase complexity.
Since people hate it so much and Microsoft is treating their OS like a website to be monetized then it wouldn't surprise me if the tabloid service gets embedded in the kernel, so as to prevent people from disabling or removing it.
Na, they will be too busy trapped in AI created unskippable commercials taking over their audio and visual cortex on their neural linking device to remember there was anything better in the past.
Even in today's age (during COVID) some small % of the populace actually moved out from the big cities and into rural -- or just smaller city -- communities.
I have hope. Sadly the positive societal changes move with the average speed of a glacier.
To be honest, it was. It still is, and it's getting more and more integral and inseparable as time passes.
It just didn't need to be their browser. But the more important thing is that they were fighting the trend and making sure their browser didn't work. That is what broke the Netscape plan.
Haha, I wish Microsoft would just rebase Windows on Linux: extract the Windows "experience" (sic) into a UI/desktop environment for Linux, then bless some LTS of Linux as the underlying system and call it a day.
Of course they would never, ever do that, but I can hope however hopeless that is.
Why do people on Hacker News dislike the Windows kernel so much, without knowing a gram about the innards of Windows? All you lot see is the Windows shell, which is fairly trivial to replace (ergo why it has changed so often in the past 30 years since NT 3.1).
Windows NT is a remarkably modular kernel architecture, with design decisions far ahead of its time. Things like (acronym overload here) COM, (A)LPC, UMDF, Windows subsystems (for Linux and Android), and an absolutely immense media and utilities ecosystem: Direct3D, DirectWrite, WASAPI, etc.
Why not hope Microsoft open-sources the entire Windows stack including the NT kernel, NTFS drivers, and the Windows API, rather than hope it 'uses Linux'?
> Windows "experience"(sic) into a UI/Desktop Environment for Linux
And for the record, this more-or-less already exists: it's called KDE Plasma Desktop 5.
The Windows kernel really isn't that great by modern standards, neither codewise nor architecturally. And yes I know a lot about it.
There's a LOT of weird stuff lurking in there, and a lot of features they've added in recent years just doesn't work properly at all (anything uwp related...). Even the basics of how you start another process have turned into a labyrinthine mess. Then you have things they never fixed like the totally unhelpful NT file locking semantics that regularly break apps that work fine on UNIX.
NTFS is supposedly a nightmare of tech debt that uses SEH for control flow, yet their attempt to move Windows to a new ReFS stalled and failed. Note: Apple managed this with APFS and Linux distros have routinely introduced new file systems.
COM isn't a part of the kernel but is in the same situation - incredibly complicated and has not been evolved well. Lots of failed rebrands and attempts to rejuvenate it that actually made things worse.
Within its own rules, yeah, it's pretty great. Very high performance, huge feature set, everything is configurable and transparent, relatively non-buggy, clean code, runs on everything.
If you want stable driver APIs that allow you to distribute hardware support with the hardware, keeping that investment proprietary, then obviously no it's crap, and that's a pretty important use case.
If your hardware has good driver support in-tree though, then Linux is hard to beat these days. It just gets so much more investment than the NT or Darwin kernels.
There are a few places where Darwin is ahead especially w.r.t. code signing and sandboxing. I can't think of anything that NT in particular excels at though.
Driver guard, safe kernel, userspace graphics drivers, Xerox PARC/Oberon like workflows between kernel and userspace frameworks, setting the rules where graphics APIs are going (DX drives, Vulkan follows),...
Mantle was just a PC API based of how game console APIs work, like DirectX on XBox.
As for Oberon, there is enough material out there.
Suffice to say how PowerShell integrates with .NET, DLLs and COM, enabling OS and application scripting, and the pursuit of safer approaches to OS systems programming instead of yet another UNIX clone written in C.
I said nothing about the windows kernel. I honestly don't care. It's not what the end-user uses. It's the UI and the graphics stack etc that matters for gamers and other users of Windows.
Package that UI and those libraries into the windows experience desktop for Linux and call it a day.
---
If they open sourced Windows that'd be good, too! Hackers could then remove all the garbage like telemetry, ads, etc., and it'd be dope as well. They could hack the UI to be even more tiling-friendly. It'd be amazing.
A few non-desktop things I'm not fond of (allowing that I don't know that Windows counts these as ‘kernel’): Processes are expensive. Files are expensive. File names have nasty restrictions.
> KDE Plasma Desktop 5
Personally, I use KDE because it allows me to replicate one particular Mac feature that's important to me: leaving Control alone for terminal (editor) control characters.
The Windows kernel is radically different from Unix-likes. Doing anything like this would cause a riot from anyone depending on any of the Windows-only features, which is pretty much all Enterprise customers.
Dumb question, but isn’t Rust an awful name marketing wise? How did the language’s founders settle on it? Hopefully there’s a more interesting history than rust is a thin layer atop bare metal.
"Graydon Hoare named the language after the rust fungus because it is robust, distributed, and parallel. He also liked the play on the words from words such as robust, trust, frustrating, rustic, and thrust."