Hurd has certainly made strides; we're well past the days when you needed to compile your own translator for /dev/random before an sshd implementation was even possible. I have not tried it this year, but last year I was quite impressed by the progress in ease of installation, driver support, and use.
I just skimmed the Phoronix review from 2011, and I was surprised to see that Hurd outperformed Linux in some categories by a narrow margin, and in others didn't lose by a complete landslide. It's still i386-only, so you know at least a little of what to expect.
I think a fairer comparison would be Debian GNU/kFreeBSD versus Debian GNU/Hurd, since the developer base for each is probably comparably small. Glad to hear the Hurd is still alive. Going by their website, GNU/kFreeBSD has NOT had any release since wheezy, or since 9.1, so if the number of releases or first-mover status is any indicator of progress, take that into account.
Sorry to say all this and then have to answer: no, I am not using Hurd for anything. I'm as interested as you are to hear if anyone is, and for exactly what. I would be surprised to hear of anywhere it "really shines" compared to any other platform.
Yes, I meant since the Debian GNU/Linux Wheezy release, kFreeBSD has not had a new release. The latest (documented) kFreeBSD release is squeeze.
Similarly, the latest notes about a kFreeBSD kernel are that 9.0 will be in Wheezy, but no mention of 9.1. That was a fairly ridiculous use of 'since.'
Last time I had the Hurd installed on any hardware (looks like 2004), the only web browsing option was lynx. It sounds like things have improved since then?
Indeed. With 75% of Debian packages available, this should be enough to try it out. It's pretty amazing to have it somewhat usable nowadays, considering how long it has taken.
Looking at the Wikipedia page, the project seems to have some focus issues, with people periodically starting to port Hurd to other microkernels and then stopping for lack of time. It would be interesting to have a side-by-side comparison with other similar projects like Tanenbaum's own Minix 3.
I've never encountered it in my professional career.
As near as I can tell, it offers nothing over the main Linux kernel except sourcing from the GNU Project. IMHO, not enough reason to abandon the Linux kernel. YMMV.
"Nothing over the main Linux kernel" is complicated. It offers a microkernel design that was modern and radical 20 or so years ago and is still pretty unique among Unix systems; with Linux as monolithic as ever, you can still do some pretty cool stuff with the Hurd that you can't come close to with Linux. However, afaik, these days the designs of Mach and Hurd aren't looked upon very highly, the stability advantages of microkernels aren't as important anymore with rock solid traditional kernels, and instead of figuring out how to empower non-root users on Unix systems we just give everyone their own VM.
> However, afaik, these days the designs of Mach and Hurd aren't looked upon very highly, the stability advantages of microkernels aren't as important anymore with rock solid traditional kernels, and instead of figuring out how to empower non-root users on Unix systems we just give everyone their own VM.
In the commercial world you have Symbian and QNX as examples of microkernel OSes.
Microsoft is researching how to run the full OS as a library on top of a pico-hypervisor (Drawbridge).
And Minix just got an EU grant for its research on microkernels and security.
Sometimes I think Linux is happy just copying the UNIX and Mainframe designs, without much OS innovation.
> Sometimes I think Linux is happy just copying the UNIX and Mainframe designs, without much OS innovation.
This is difficult to answer, but I'll try. Note that this is from merely an interested person's view, I'm not a kernel dev or anything.
Linux has to strike a balance between compatibility, stability and innovation. The goal of Linux isn't "OS innovation", the kernel people would rather that innovation occur in userspace (at least from my reading of the tea leaves), and that is happening a lot.
However, Linux doesn't have the power to "break out" and try something radically different, as it has actual users with billions of dollars invested in Linux that rely on its stability and its compatibility with the hardware and software they use.
Symbian and QNX are not mainstream kernels, as they exist only in embedded devices and support only ARM and a couple of other architectures.
Minix, GNU/Hurd and the Microsoft research are just that - research. No user base to speak of, no commercial uses, etc.
I think the most successful microkernel wasn't actually micro, but a hybrid - Windows NT.
I used to work with Thomas Bushnell, who was deeply involved with the Hurd at the beginning. His claim was that microkernels offered theoretical advantages on paper, but practical complications that Stallman seriously underestimated.
He says that he had originally suggested taking a BSD kernel and rewriting all of the (then) legally risky bits, and he still believes that this would have been straightforward, and if they had followed that route then Linux would not have stolen their thunder. But he let Stallman argue him out of it, and that was a mistake.
> However, afaik, these days the designs of Mach and Hurd aren't looked upon very highly, the stability advantages of microkernels aren't as important anymore with rock solid traditional kernels, and instead of figuring out how to empower non-root users on Unix systems we just give everyone their own VM.
On the other hand QNX, a closed-source Unix-like microkernel OS, has been used commercially for a long time - for instance, it powers Blackberry phones nowadays, and I heard the performance is quite decent.
It's surreal to me that this conversation still happens in these terms.
People doing real work for which they need an OS don't care that QNX/HURD are microkernels or that Linux/NetBSD aren't. They care that one solves their problem better than the other.
Nobody cares about kernel architecture unless they're writing (or learning about) kernels. Everybody else is looking for an OS that meets practical requirements. QNX's architecture might help it meet some requirements better than Linux's architecture, but the architecture itself is not a determining factor in commercial success.
I don't understand your argument. We are talking about kernel architecture (and specifically, monolithic kernels vs microkernels).
Specifically, I brought up QNX as an example of a microkernel-based OS used in the wild, as opposed to Hurd and (presumably) Minix - which goes to show that they can be made to work. How QNX is marketed is no concern of mine.
You're splitting hairs. Microkernels do offer practical advantages by virtue of their architecture, while suffering from other drawbacks again caused by their architecture.
I'm not trying to "split hairs", I'm trying to address a problem that has repeatedly made my life difficult -- people seeking particular technologies instead of solutions.
But since that has obviously made some people upset, I'll trouble you no more about it. I'm sorry to have intruded.
Why do you automobile engineers keep talking to each other about pistons and transmissions and all that nonsense? It's absurd. I just want a car that gets me from A to B and doesn't use too much gas. I don't see what that has to do with engine seals.
Right here and now, we're doing it because it's fun. At the end of the day, most of us go back to practical solutions, but we have to get this stuff out of our system. :)
One of the benefits of microkernels is supposed to be their resilience (since code that would otherwise be in the kernel is now in servers and can be restarted if it goes wrong or upgraded in place without taking down the whole kernel).
That sounds like a pretty good practical benefit to me.
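The restart-the-failed-server idea is easy to sketch in ordinary user-space terms. This is a hypothetical illustration (not Hurd or Minix code): a "driver" runs as a separate OS process, and a supervisor simply relaunches it when it crashes, leaving the rest of the system untouched.

```python
# Hypothetical sketch of microkernel-style resilience: the "driver" is an
# isolated process, so a crash is just a nonzero exit code the supervisor
# can observe and recover from, instead of a whole-system failure.
import subprocess
import sys

def run_server(should_crash):
    # Stand-in for a user-space driver/server process. The crash is
    # scripted here only to make the restart path visible.
    code = "import sys; sys.exit(1)" if should_crash else "print(42)"
    return subprocess.run([sys.executable, "-c", code],
                          capture_output=True, text=True)

def supervise():
    # First launch "crashes"; the supervisor restarts it and the second
    # launch succeeds. Nothing else in the system is disturbed.
    for attempt, should_crash in enumerate((True, False), start=1):
        proc = run_server(should_crash)
        if proc.returncode == 0:
            return attempt, int(proc.stdout.strip())
    raise RuntimeError("server kept crashing")

attempts, value = supervise()  # attempts == 2, value == 42
```

In a real microkernel the supervisor role is played by something like Minix's reincarnation server, and the "exit code" is the kernel noticing a dead IPC endpoint; the shape of the recovery loop is the same.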
There are two practical problems with microkernels:
(1) Hardware. Some driver locks up your PCI bus, and your whole system grinds to a halt, microkernel or not. This could probably be solved by having hardware that doesn't suck, but unfortunately here in the real world we've settled on the cheapest option: PC hardware, which does suck.
(2) Division of responsibilities. Some OS structures, like the process table, are inherently connected to multiple components (memory management, filesystem, etc.). Doing process creation or fork in a microkernel results in a flurry of internal messages between the different parts, which is hard to reason about and fragile. Minix especially suffered from this problem.
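To make the "flurry of messages" concrete, here's a toy sketch (invented for illustration, not Minix's actual protocol) of one fork request fanning out from a process server to separate memory and file servers, each owning its own slice of state:

```python
# Toy model of microkernel fork: three "servers" with no shared state.
# Each method call below stands in for an IPC message round-trip, which
# is where the complexity and fragility creep in.

class MemoryServer:
    def __init__(self):
        self.spaces = {1: ["text", "data", "stack"]}  # pid -> regions
    def duplicate_address_space(self, parent, child):
        self.spaces[child] = list(self.spaces[parent])

class FileServer:
    def __init__(self):
        self.fds = {1: [0, 1, 2]}  # pid -> open descriptors
    def duplicate_descriptors(self, parent, child):
        self.fds[child] = list(self.fds[parent])

class ProcessServer:
    def __init__(self):
        self.table = {1: "init"}  # pid -> name
    def fork(self, parent, vm, fs):
        child = max(self.table) + 1
        self.table[child] = self.table[parent]
        # Two more message exchanges before the fork is complete; if
        # either server fails midway, system state is left inconsistent.
        vm.duplicate_address_space(parent, child)
        fs.duplicate_descriptors(parent, child)
        return child

pm, vm, fs = ProcessServer(), MemoryServer(), FileServer()
child = pm.fork(1, vm, fs)  # one syscall, three servers involved
```

In a monolithic kernel all three tables live in one address space and fork touches them under one lock regime; here every arrow is a message that can be delayed, reordered, or lost.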
I don't know what you specifically mean by a driver locking up PCI (and we've moved on to PCIe now), but I suspect the general category of (1) is largely addressed by the I/O virtualization facilities and RAS features that have recently come to amd64 hardware. That stuff has existed on big-iron hardware since time immemorial.
Well, if you have X units of effort (dollars, man-years, whatever) you can choose to allocate some of it to creating a mechanism for tolerating buggy drivers, and some to debugging the drivers. Micro- vs monolithic- in the real world is an argument over this exact allocation.
There's no such thing as real work, in much the same way that there's no such thing as the real world. Everyone creates bubbles and cushions around themselves.
... And containerization is unfortunately going the opposite way: having a single huge morass of code (the Linux kernel) run multiple operating systems at the same time.
Hurd has been in development since 1990; Tanenbaum–Torvalds was '92. Shipping and loading drivers separately from the kernel was a microkernel advantage, and Linux being able to do the same is a move towards microkernel features in Linux (FUSE and CUSE being others).
In MINIX, afaik, you ship some drivers with the kernel in the same file. But those drivers run in user space, separate from each other. The issue is not about shipping.
IMO, loading drivers at runtime is the easiest part of microkernels. It is a really very tiny step. There is a lot more to do, like handling IPC between operating system components, managing memory, scheduling, and protecting the system and the driver processes running in user space.
Finally, loadable kernel modules do not give you any security or reliability. Those modules still share the same address space with the kernel and with each other.
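A rough user-space analogy of that address-space point (purely illustrative, nothing kernel-specific): a buggy component loaded into the host's own process can corrupt the host's state, while the same bug in a separate process cannot.

```python
# Analogy only: an in-process "module" (shared address space) versus an
# isolated "server" (own address space). A loadable kernel module shares
# the kernel's memory exactly like the first case.
import subprocess
import sys

host_state = {"jiffies": 100}

def buggy_module(state):
    state["jiffies"] = -1  # writes straight into the host's memory

buggy_module(host_state)
in_process = host_state["jiffies"]   # host state corrupted: -1

host_state["jiffies"] = 100
# The same bug, run in a separate process with its own copy of the state.
subprocess.run([sys.executable, "-c",
                "state = {'jiffies': 100}; state['jiffies'] = -1"],
               check=True)
isolated = host_state["jiffies"]     # host state untouched: 100
```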
Last time I checked there was no working sound subsystem in GNU/Hurd, but that was about 4 years ago. Does anyone know whether things have changed since then?