
The monolithic kernel vs microkernel debate has always come down to performance vs simplicity & reliability.
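
To make the performance half concrete: in a monolithic kernel a driver request is a plain function call, while in a microkernel it's a message round trip between separate address spaces, and that round trip is where both the overhead and the fault isolation come from. A toy C sketch of the difference (not real kernel code; the "driver" here is just a child process on a pipe):

    #include <stdio.h>
    #include <unistd.h>
    #include <sys/wait.h>

    struct msg { int op; char payload[32]; };

    /* Monolithic style: the "driver" is a function in the caller's
       address space, so a request is one call and a bug can corrupt
       everything. */
    static void driver_call(struct msg *m) {
        snprintf(m->payload, sizeof m->payload, "handled op %d", m->op);
    }

    int main(void) {
        int req[2], rep[2];
        pipe(req);
        pipe(rep);

        if (fork() == 0) {             /* microkernel style: driver server */
            close(req[1]);
            close(rep[0]);
            struct msg m;
            while (read(req[0], &m, sizeof m) == (ssize_t)sizeof m) {
                driver_call(&m);       /* a crash here kills only this process */
                write(rep[1], &m, sizeof m);
            }
            _exit(0);
        }
        close(req[0]);
        close(rep[1]);

        struct msg m = { .op = 42 };
        write(req[1], &m, sizeof m);   /* the request crosses a process boundary */
        read(rep[0], &m, sizeof m);    /* ...and so does the reply */
        printf("reply: %s\n", m.payload);

        close(req[1]);                 /* EOF lets the server exit */
        wait(NULL);
        return 0;
    }

The extra copies and context switches are the performance cost; the separate address space is the reliability win, since a fault in driver_call() would kill only the child.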

Simplicity I'll grant, but the reliability argument doesn't really strike me as relevant. Having overseen many thousands of server-years' worth of uptime, it's almost never the kernel's fault when something breaks. Linux is pretty solid. Most of us are far more limited by the reliability of our applications.

There are niches where higher-assurance kernels are worth it, and maybe that's where microkernels can shine.



The Linux kernel has a ton of attention dedicated to it. In particular, enterprises that prefer reliability to newness are often a few versions behind, making Linux, for their purposes, the most heavily acceptance-tested software there is.

This doesn't mean its design is inherently more reliable. Anything can be made reliable with enough eyeballs. I think a design goal of Minix is to increase the reliability-per-eyeball ratio, particularly when it comes to extending the kernel. Reliability, modularity, performance, and testing are all trade-offs. It's also pretty easy to find a configuration that one would think "should work", but that actually causes Linux to suffer, complain, and crash.


Sure, but we already have Linux (and FreeBSD and NetBSD and...). So if your argument for something new is reliability, you're arguing inherently-potentially-more-reliable vs. in-practice-already-quite-reliable, and you haven't shown us what we gain by going with you instead of them.


The usual arguments in a language or OS flame war are relevant here. Do you choose the allegedly superior design, or the more popular and practiced one? The answer depends on your use case, your love of tinkering, and your tolerance for productivity risk. But were it not for people trying new designs, we'd all be writing code in assembly language on single-user systems.


> Anything can be made reliable with enough eyeballs.

The relevant measure isn't the number of eyeballs, but whether they're the correct eyeballs. The wrong eyeballs can decrease reliability.


As of Ubuntu 11.10, my netbook randomly kernel-panics with the default Wi-Fi drivers. It sure would be great if it didn't take out everything I was looking at because of one bad driver. Just saying.
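
That's basically the microkernel pitch in one anecdote. In a design like Minix 3, drivers run as ordinary user processes and a supervisor (the "reincarnation server") restarts them when they die. A minimal C sketch of that supervision pattern (mine, not Minix code; driver_main() is a stand-in for the buggy driver):

    #include <stdio.h>
    #include <unistd.h>
    #include <sys/types.h>
    #include <sys/wait.h>

    /* Stand-in for a buggy Wi-Fi driver: crashes on a null dereference. */
    static void driver_main(void) {
        volatile int *p = NULL;
        *p = 1;
    }

    int main(void) {
        for (int attempt = 1; attempt <= 3; attempt++) {
            pid_t pid = fork();
            if (pid == 0) {            /* the driver is just a process */
                driver_main();
                _exit(0);
            }
            int status;
            waitpid(pid, &status, 0);
            if (!WIFSIGNALED(status))
                break;                 /* clean exit: nothing to restart */
            printf("driver died (signal %d), restarting: attempt %d\n",
                   WTERMSIG(status), attempt);
        }
        puts("rest of the system still running");
        return 0;
    }

A monolithic kernel can't do this for an in-kernel driver: the driver shares the kernel's address space, and once it scribbles on the wrong memory, a panic is the safe response.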


Bug reported to LKML? Did you bisect? Did someone else? Did you at least test with the latest upstream kernel to see if it was already fixed?
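
For what it's worth, the bisect part is mechanical once the panic is reproducible. Roughly (standard git bisect usage; v2.6.38 is just a hypothetical last-known-good version, and the kernel build/boot steps are elided):

    git bisect start
    git bisect bad                # the currently running kernel panics
    git bisect good v2.6.38      # hypothetical last-known-good version
    # build and boot the commit git checks out, test the Wi-Fi driver, then
    git bisect good               # or: git bisect bad
    # repeat until git names the first bad commit, then
    git bisect reset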


You're asking this of a casual user.


On a site called 'Hacker News', it's not entirely unreasonable to expect that a user may have the skills and inclination to contribute back to a project like the Linux kernel. Especially when it's a problem that affects them directly.


Are you implying any of these activities would get back the stuff I was working on at the time? Because if not I think you sort of missed the point.


Ironically, as I was reading this, my Chrome crashed and, shortly after, Windows blue-screened and I had to reboot... Granted, this happens very rarely, but it was kinda funny that it should happen exactly while I was reading about reliability.


QNX and VxWorks are two commercial operating systems that are microkernel-based.


Don't forget the reliability (or otherwise) of PC hardware. No kernel is going to save you if your PCI bus locks up.


Yeah, that happens to me all the time. I'm in the habit of running hardware with a faulty PCI bus for very long periods of time, so I don't really need anything more complex than Windows 95, since the kernel doesn't matter when you have a faulty PCI bus.


The scary thing is that this was precisely the state of affairs with cheap desktops in the heyday of Win95. My understanding is that the hardware got cheaper to match the software, and then there was no motivation to improve the software, because the hardware would've crashed the machine anyhow.




