I would argue that a microkernel would be a good fit because: "Traditional operating system functions, such as device drivers, protocol stacks and file systems, are typically removed from the microkernel itself and are instead run in user space." - https://en.wikipedia.org/wiki/Microkernel
You want to throw out as much functionality as possible to get your kernel down in size to fit into e.g. 1 MiB of memory.
Those operating system functions still need to exist and still consume RAM. It doesn't matter whether they run in userspace or kernel space.
If you don't need those functions, you can remove them from kernel space in a monolithic kernel. It's possible to build Linux without TCP/IP support and with no on-disk filesystems; the kernel's "make tinyconfig" target is roughly that starting point, with almost everything switched off.
They achieved a compressed Linux kernel size of just 749 kB, which additionally requires at least 12 MB of RAM to boot. This is very impressive, but there are constrained systems with 1 MB or less of memory.
That’s a different argument to the one you opened with. The OP had already made the point that Linux isn’t really practical. But that doesn’t mean that a micro kernel OS would be any better than GNU/Linux. Ultimately you still have the same data stored in RAM (as others have said).
What nobody has (yet) mentioned is that micro kernels typically run slower than monolithic kernels because crossing between user space and kernel space is expensive compared to running everything in kernel space. This overhead would kill any performance you might get from a system with the hardware specs of an N64. So a monolithic design is absolutely the way to go (in fact that’s how N64 games are actually written: one monolithic code base with shared memory).
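To make that boundary cost tangible, here’s a crude sketch you can compile and run on any POSIX system: it times a plain in-process function call against the same “service” invoked via a round trip through a pipe between two processes. Every pipe hop forces a user/kernel crossing, loosely analogous to microkernel IPC; it’s an illustration, not a rigorous benchmark.

    /* Direct in-process call vs. the same "service" behind a pipe round
     * trip between two processes. Each pipe hop is a user/kernel
     * crossing, loosely analogous to microkernel IPC. Illustrative only. */
    #include <stdio.h>
    #include <sys/wait.h>
    #include <time.h>
    #include <unistd.h>

    #define ITERS 100000

    static long now_ns(void) {
        struct timespec ts;
        clock_gettime(CLOCK_MONOTONIC, &ts);
        return ts.tv_sec * 1000000000L + ts.tv_nsec;
    }

    /* Stand-in for a kernel service invoked as a plain function call. */
    static int service(int x) { return x + 1; }

    int main(void) {
        int req[2], rsp[2];
        if (pipe(req) < 0 || pipe(rsp) < 0) { perror("pipe"); return 1; }

        if (fork() == 0) {                /* the "server" process */
            int x;
            close(req[1]); close(rsp[0]);
            while (read(req[0], &x, sizeof x) == (ssize_t)sizeof x) {
                x = service(x);
                write(rsp[1], &x, sizeof x);
            }
            _exit(0);
        }
        close(req[0]); close(rsp[1]);

        long t0 = now_ns();
        volatile int acc = 0;
        for (int i = 0; i < ITERS; i++) acc = service(acc);
        long t1 = now_ns();

        int x = acc;
        for (int i = 0; i < ITERS; i++) { /* two crossings per iteration */
            write(req[1], &x, sizeof x);
            read(rsp[0], &x, sizeof x);
        }
        long t2 = now_ns();

        close(req[1]);                    /* server sees EOF and exits */
        wait(NULL);
        printf("direct call:     %ld ns/op\n", (t1 - t0) / ITERS);
        printf("pipe round trip: %ld ns/op\n", (t2 - t1) / ITERS);
        return 0;
    }

Real microkernel IPC (L4 in particular) is far cheaper than a pipe round trip, but it’s never free, and on 90s-era silicon every crossing hurts.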
Pragmatically the only way to write software for the N64 is to go bare metal. As the OP said, this Linux port is a fun technical challenge but no OS would be practical (that is unless you’re just using it as a boot loader for other software).
> That’s a different argument to the one you opened with.
"A microkernel would be a good fit for such constrainted environments."
No, that's exactly my argument. My argument is that the Linux kernel (even if you strip everything out and build the tiniest possible Linux kernel) is still too big for many constrained environments, e.g. an old router with 512 KiB of memory (there are many devices that cannot run Linux). However, it is possible to run a small microkernel on such a router. That's my point, nothing more, nothing less, and that's why I consider a microkernel a good fit.
Then people argued that microkernels can be as big as a Linux kernel and that you can strip the Linux kernel down in functionality, and I agreed with them, but this does not contradict the point I made.
> "A microkernel would be a good fit for such constrainted environments."
As the GP said, micro kernels have a performance overhead swapping data between rings. That overhead would bite hard on something running a NEC VR4300 clocked at 93.75 MHz.
A monolithic kernel is the way to go. Just not Linux specifically.
> My argument is that the Linux kernel (even if you strip everything out and build the tiniest possible Linux kernel) is still too big for many constrained environments
That was the OP's point. Yours was that a micro kernel would be a better fit. It would not.
> an old router with 512 KiB of memory (there are many devices that cannot run Linux)
Those devices wouldn't be running code written to programmable chipsets. They wouldn't be running an operating system in the conventional sense. Much like laumars' point about how games are written for the N64.
Also nobody is suggesting Linux runs everywhere. We are just pointing out that you massively misunderstand how micro kernels work (and embedded programming too by the sounds of your last post).
By the way, you wouldn't find any routers running a meager 512KB of RAM. That wouldn't be enough for multiple devices connected via IPv4, never mind IPv6 and a wireless AP too. Then you have the firewall UI (typically served via HTTP), a DHCP & DNS resolver (both of which are usually served by dnsmasq on consumer devices) and likely other stuff I've forgotten about off hand -- I have some experience building and hacking routers :)
> However, it is possible to run a small microkernel on such a router. That's my point, nothing more, nothing less, and that's why I consider a microkernel a good fit.
Most consumer routers actually run either Linux or some flavour of BSD, both of which are monolithic kernel designs. Some enterprise gear will have its own firmware and, from what I've seen from some vendors like old Cisco hardware, those have been monoliths too.
I know micro kernel has the word "micro" in it and the design requires loading the bare minimum into the kernel address space of the OS, but you're missing the bigger picture of what a micro kernel actually is and why it is used:
The point of a micro kernel isn't that it consumes less memory. It's that it separates out as much functionality from the kernel as it can and pushes that into user space. The advantages that brings are greater security with things like drivers (not an issue with the N64) and greater crash protection (again, not really an issue with the N64). However, that comes with a performance cost and extra code complexity, and any corners you cut to try to bring those costs down ultimately end up eroding any real world benefits you get from a micro kernel design.
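To illustrate the shape of that design, here's a toy sketch of a "driver" under a microkernel: it's just an ordinary process sitting in a receive/reply loop. The ipc_recv/ipc_reply calls are hypothetical stand-ins (not any real microkernel's API), stubbed with canned messages so the sketch compiles and runs on its own:

    /* Toy user-space driver under a microkernel. The IPC primitives are
     * HYPOTHETICAL stand-ins, not any real kernel's API; they are stubbed
     * with canned messages so this is self-contained. */
    #include <stdio.h>
    #include <string.h>

    enum { MSG_READ, MSG_WRITE, MSG_SHUTDOWN };

    struct msg {
        int  type;
        int  client;       /* who to reply to */
        char data[64];
    };

    /* --- hypothetical kernel IPC, stubbed for illustration --- */
    static void ipc_recv(struct msg *m) {
        static int calls;
        m->type = (calls++ == 0) ? MSG_READ : MSG_SHUTDOWN;
        m->client = 42;
    }
    static void ipc_reply(int client, const struct msg *m) {
        printf("reply to client %d: %s\n", client, m->data);
    }

    /* The "driver" is an ordinary process in a receive/reply loop. */
    int main(void) {
        struct msg m;
        for (;;) {
            ipc_recv(&m);            /* block until a client calls */
            switch (m.type) {
            case MSG_READ:
                /* real code would talk to the device here */
                strcpy(m.data, "sector contents");
                ipc_reply(m.client, &m);
                break;
            case MSG_WRITE:
                /* ...program the device, then acknowledge... */
                ipc_reply(m.client, &m);
                break;
            case MSG_SHUTDOWN:
                return 0;            /* a crash here kills only this process */
            }
        }
    }

Every request into and out of that loop is a kernel-mediated context switch. In a monolithic kernel the equivalent driver code is a plain function call away, which is exactly the performance trade-off being discussed.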
> By the way, you wouldn't find any routers running a meager 512KB of RAM.
This is not true in general, although it is true for modern devices. There are older router models that cannot run Linux. A few years back I unsuccessfully tried to flash a very minimal <1 MiB Linux on an old TP-Link router. I was able to flash the ROM but it wouldn't boot, because there was not enough memory available; it wasn't 512KB but only a few MiB IIRC, still not enough.
> We are just pointing out that you massively misunderstand how micro kernels work
If someone points out that the Linux kernel can be reduced in size and that there are some big microkernels, then I do agree and there is no misunderstanding, as far as I can see. The same holds true for the performance argument.
> As the GP said, micro kernels have a performance overhead swapping data between rings.

I agree that performance will be problematic, but this does not render microkernels useless in general for constrained devices.
> This is not true in general. There are older router models that cannot run Linux. A few years back I unsuccessfully tried to flash a very minimal <1 MiB Linux on an old TP-Link router. I was able to flash the ROM but it wouldn't boot, because there was not enough memory available.
How long ago was "a few years ago"? What model number was that? DD-WRT has been ported to the Archer series but if you're talking about a ZyNOS based router then you're probably out of luck. Those ZyNOS devices are the real bottom end of the market though. Even the ISP routers here in the UK are generally a step up from those, particularly these days now that households expect to have kids playing online games, streaming Netflix and such like (even before COVID-19 hit, ISPs had been banging on for ages about how their routers let you do more concurrently). And with TP-Link, the Archer series are all Linux based or Linux compatible and they start from ~£50. So you'd be really scraping the barrel to find something that wasn't these days.
> I agree that performance will be problematic, but this does not render microkernels useless in general for constrained devices.
Any OS designed around kernels, memory safety etc would be useless in general for constrained devices. This isn't an exclusively Linux problem. On such systems the whole design of how software is written and executed is fundamentally different. You don't have an OS that manages processes or hardware; you write your code for the hardware and the whole thing runs bare metal as one monolithic blob (or calls out to other discrete devices running their own discrete firmware, like a circuit). That's how the N64 works and it's how embedded devices work. It's not how modern routers work.
In 2020 it's hard to think of a time before operating systems, but that really is how the N64 works. Anything you run on there will eat up a massive chunk of resources if it's expected to stay in memory. So you might as well go with a tiny monolithic kernel and thus shave a few instructions from memory protection and symbol loading, not to mention the marginally smaller binary sizes from skipping file system metadata, binary file format overhead and other pre-logic initialisation overhead (such as you get when compiling software rather than writing it in assembly). If you're going to those lengths though, laumars' point kicks in: you're better off just writing a "bootloader" menu screen rather than a resident OS.
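To give a feel for what "bare metal as one monolithic blob" means, here's a minimal sketch of the pattern: a single loop that owns the whole machine, polling the hardware and pacing itself off the video chip. The registers, their addresses and bit meanings are made up, and they're simulated with plain variables so the sketch runs on a host; on real hardware they'd be volatile pointers to fixed, device-specific addresses.

    /* The shape of a bare-metal "game": one loop that owns the machine.
     * No scheduler, no syscalls, no resident OS. The registers are made
     * up and simulated with plain variables so this runs on a host. */
    #include <stdint.h>
    #include <stdio.h>

    static uint32_t controller_reg;  /* pretend: button bits from the pad */
    static uint32_t frames_done;     /* pretend: frame counter from the video chip */

    static uint32_t read_controller(void) { return controller_reg; }
    static void     wait_vsync(void)      { frames_done++; /* real code spins on a status bit */ }

    int main(void) {
        uint32_t player_x = 0;

        for (uint32_t frame = 0; frame < 3; frame++) {  /* frame cap is simulation-only */
            controller_reg = frame & 1;  /* simulate the player tapping "right" */

            uint32_t pad = read_controller();
            if (pad & 0x1)
                player_x++;              /* game logic reads the hardware directly */

            /* ...draw straight into the framebuffer here... */

            wait_vsync();                /* frame pacing comes from the video hardware */
        }
        printf("ran %u frames, player_x=%u\n",
               (unsigned)frames_done, (unsigned)player_x);
        return 0;
    }

There's no kernel to cross into and nothing resident between the game and the silicon, which is why every cycle and every byte stays available to the game.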
> How long ago was "a few years ago"? What model number was that? DD-WRT has been ported to the Archer series but if you're talking about a ZyNOS based router then you're probably out of luck. Those ZyNOS devices are the real bottom end of the market though.
This brings back memories :) http://www.ixo.de/info/zyxel_uclinux/ Sure, we are talking about low-end (real bottom) devices and dated models here. I cannot recall the model number, but I think we both agree that routers that cannot run Linux exist, although they are not very common (anymore).
> Any OS designed around kernels, memory safety etc would be useless in general for constrained devices.
How about QNX then?
"QNX is a commercial Unix-like real-time operating system, aimed primarily at the embedded systems market. QNX was one of the first commercially successful microkernel operating systems. As of 2020, it is used in a variety of devices including cars and mobile phones." - https://en.wikipedia.org/wiki/QNX
They are "aimed primarily at the embedded systems market", their latest release is from "7.1 / July 2020; 5 months ago" and they are operating their business model since 1982.
So not just low-end, but a decade old device that was already low-end upon its release. That's hardly a fair argument to bring to the discussion.
> How about QNX then?
QNX wouldn't run on something with <1MB RAM. Nothing POSIX compliant would*. The published minimum requirements for Neutrino 6.5 (which is already 10 years old) were 512MB. Double that if you want the recommended hardware specification.
Sure, if you want to strip out graphics libraries and all the other stuff and just run it as a hypervisor for your own code, you could get those hardware requirements right down. But then you're not left with something POSIX compliant, nor with anything useful for the N64. And frankly you could still get a smaller footprint by rolling your own.
The selling point of QNX is an RT kernel, security by design and a common base for a variety of industry hardware. But if you're writing something for the N64 then none of those concerns are relevant (and my earlier point about a resident OS for the N64 being redundant is still equally valid for QNX).
Also smart phones are neither embedded nor "constrained" devices. I have no idea what the computing hardware is like in your average car but I'd wager it varies massively by manufacturer and model. I'd also wager QNX isn't installed on every make and model of car either.
* I should caveat that by saying, yes it's possible to write something partially POSIX compliant which could target really small devices. There might even be a "UNIX" for the C64. But it's a technical exercise, like this N64 port of Linux. It's not a practical usable OS. Which is the real crux of what we're getting at.
> So not just low-end, but a decade old device that was already low-end upon its release. That's hardly a fair argument to bring to the discussion.
Fair enough, I agree that I should have come up with a better example. But before going down another rabbit hole, just replace "router" with any modern embedded chip you like that cannot run Linux, as an example.
Regarding QNX, I don't know their current requirements, but here is what impresses me:
"To demonstrate the OS's capability and relatively small size, in the late 1990s QNX released a demo image that included the POSIX-compliant QNX 4 OS, a full graphical user interface, graphical text editor, TCP/IP networking, web browser and web server that all fit on a bootable 1.44 MB floppy disk."
> I should caveat that by saying, yes it's possible to write something partially POSIX compliant which could target really small devices.
> QNX wouldn't run on something with <1MB RAM. Nothing POSIX compliant would*. The published minimum requirements for Neutrino 6.5 (which is already 10 years old) were 512MB. Double that if you want the recommended hardware specification.
An older QNX ran from a floppy in just a few MB, with a GUI and a browser with limited JS support.
> QNX wouldn't run on something with <1MB RAM. Nothing POSIX compliant would*.
You could run Linux on a TTY on i386 with 2MB of RAM and some swap about 24 years ago.
> An older QNX ran from a floppy in just a few MB, with a GUI and a browser with limited JS support.
It did and it was a very impressive tech demo... but it's not representative of a usable general purpose OS. Chrome or Firefox alone comes in at >200MB, so there is no way you'd get a browser that would work with the modern web to fit on the 1.44MB floppy. And that's without factoring in fonts, drivers, a kernel and other miscellaneous user land.
The QNX demo was a bit like this N64 demo. Great for showing off what can be done but not a recommendation for what is practical.
> You could run Linux on a TTY on i386 with 2MB of RAM and some swap about 24 years ago.
That's still double the memory specification and yet Linux back then lacked so much. For example Linux 24 years ago didn't have a package manager (aside from Debian 1, which had just launched, and even then dpkg was very new and not entirely reliable). Most people back then still compiled stuff from source. Drivers were another pain point; installing new drivers meant recompiling the kernel. Linux 1.x had so many rough edges and lacked a great deal of code around some of the basic stuff one expects from a modern OS. There's a reason Linux has bloated over time and it's not down to lazy developers ;)
Let's also not forget that the Linux Standard Base (LSB), which is the standard distros follow if they want Linux and, to a larger extent, POSIX compatibility, wasn't formed until 2001.
Linux now is a completely different animal to 90's Linux. I ran Linux back in the 90s and honestly, BeOS was a much better POSIX-compatible general purpose OS. Even Windows 2000 was a better general purpose OS. I don't think it was until 2002 that I finally made Linux my primary OS (but that's a whole other tangent).
I mean we could have this argument about how dozens of ancient / partially POSIX-compliant / unstable kernels have had low footprints. But that's not really a credible argument if you can't actually use them in any practical capacity.
> I mean we could have this argument about how dozens of ancient / partially POSIX-compliant / unstable kernels have had low footprints. But that's not really a credible argument if you can't actually use them in any practical capacity.
There are modern microkernels that are POSIX compliant and have a much lower footprint than Linux. That's not the problem. I think the most prominent issue people point out here is performance. However, it's very obvious to me that the extra abstraction of having a kernel vs having no kernel on a constrained device costs performance; it's always a trade-off, and both solutions can be found and both are valid.
> There are modern microkernels that are POSIX compliant and have a much lower footprint than Linux.
There are... but they're not < 1MB. Which was the point being made.
> I think the most prominent issue people point out here is performance.
That's literally what I said at the start of the conversation!
> However, it's very obvious to me that the extra abstraction of having a kernel vs having no kernel on a constrained device costs performance; it's always a trade-off, and both solutions can be found and both are valid.
Show me a device with the same specs as the N64 which runs an OS and I'll agree with you that both solutions are valid. The issue isn't just memory, it's your CPU clock speed. It's the instructions supported by the CPU. It's also the domain of the device.
Running an OS on the N64 would never have made sense. I guess, in some small way, you could argue the firmware is an OS in the same way that a PC BIOS could be. But anything more than that is superfluous both in terms of resources used and any benefits it might bring. But again, if it's a case of "both solutions are valid" then do please list some advantages a resident OS would have brought. I've explained my argument against it.
Let's take a look at what was happening on PCs around the time of the N64's release. Most new games were still targeting MS-DOS and largely interfaced with hardware directly. In a way, DOS was little more than a bootstrap: it didn't offer up any process management, the only memory management it did was providing an address space for the running DOS application, and it didn't offer any user space APIs for hardware interfaces -- that was all done directly. And most of the code was either assembly or C (and the C was really just higher level assembly).
Fast forward 4 years and developers are using OpenGL, DirectX and Glide (3DFX's graphics library which, if I recall correctly, was somewhat based on OpenGL) in languages like C and C++, but instead of writing prettier ASM they're writing code based around game logic (ie abstracting the problem around human-relatable objects rather than hardware schematics). It was a real paradigm shift in game development. Not to mention consoles shifting from ROM cartridges to CD posed a few new challenges: 1) you no longer have your software exist as part of the machine's hardware, 2) you have now made piracy a software problem (since CD-ROMs are a standard bit of kit in most computers) rather than a hardware one (copying game carts required dedicated hardware that wasn't always cheap). Thankfully by that time computer hardware had doubled a few times (Moore's Law) so it was becoming practical to introduce new abstractions into the stack.
The N64 exists in the former era and the operating system methodologies you're discussing exist in the latter era. Even the constrained devices you're alluding to are largely latter era tech because their CPUs are clocked at orders of magnitude more than the N64 and thus you don't need to justify every instruction (it's not just about memory usage) but in many cases an OS for an embedded device might just be written as one binary blob and then flashed to ROM, effectively then running like firmware.
It's sometimes hard to get a grasp on the old-world way of software development if it's not something you grew up with. But I'd suggest maybe look at programming some games for the Atari 2600 or Nintendo Gameboy. That will give you a feel for what I'm describing here.
> It's sometimes hard to get a grasp on the old-world way of software development if it's not something you grew up with.
I lived through that; the first PC I used ran DOS with 5.25" floppies.
On 3DFX: it was a mini-GL in firmware, low level. Glide somehow looked better than the later games with DirectX, up until DirectX 7 when games started to look a bit less "blocky".
> For example Linux 24 years ago didn't have a package manager
Late 90's Linux is very different from mid 90's Linux. Slackware in 1999 was good enough, and later, with the 2.4 kernel, it was on par with w2k; even Nvidia drivers worked.
And I could even run some games with early Wine versions.
In fairness, you did say “24 year old Linux” which would put it in the mid 90s camp rather than late 90s.
I wouldn’t agree that Slackware in 2000 was on a par with Windows 2000 though. “Good enough”, sure. But Linux had some annoying quirks and Windows 2000 was a surprisingly good desktop OS (“surprising“ because Microsoft usually fuck up every attempt at systems software). That said, I’d still run FreeBSD in the back end given the choice between Windows 2000 and something UNIX like.
It’s believed Nintendo’s consoles use a micro kernel but that’s a result of hacks and reverse engineering; Nintendo themselves have given limited information. While I think your point is more likely true than not, the caveat I’m making is still worth noting; ie things aren’t as certain as you’re boldly claiming.
Now on to your point about the complaint the GP and myself made being FUD; it’s really not. The closest any micro kernel has gotten to a monolithic kernel’s performance was L4, and those benchmarks were running Linux on top of L4 vs bare metal Linux. While the work on L4 is massively impressive there is still a big caveat: the actual workload was still effectively run on a monolithic kernel, with L4 acting like a hypervisor. So most of the advantages that a micro kernel offers were rendered moot and there was still a small performance hit for it.
Why doesn’t that matter for the Nintendo Switch? Probably because any DRM countermeasures in user space would have a bigger performance penalty and a micro kernel offers some protections there as part of the design. That’s just a guess but as I opened with, Nintendo are quite secretive about their system software so it’s hard to make the kind of conclusive arguments you like to claim.
Other than that I can only point out the CCC-related talks.
Also, given the amount of hypervisor and container baggage that gets placed on top of Linux to make up for the lack of microkernel-like safety, it doesn't really matter if it happens to win a couple of micro-benchmarks.
Nintendo don’t publish detailed schematics of their systems to the level that you’re claiming. Not even on their developers portal. (I’ve had a developer account with Nintendo since the Wii days.)
And with regards to your point about Linux vs micro kernels, it does make a massive difference when you’re talking about hardware like the N64, which wouldn’t want any of those features which micro kernels excel at and where every wasted instruction is going to cost the user experience heavily. This point was made abundantly clear at the start of the conversation as well.
Look, I have nothing against micro kernels. There’s an architectural beauty to them which I really like. It’s the functional programming equivalent of kernel design. But pragmatically they wouldn’t be your silver bullet in the very specific context we were discussing (re N64). And to be honest I’m sick of you pulling these pathetic straw man arguments in every thread you post on.
laumars, I think what I still haven't heard from you is why microkernels are such a bad and horrible idea and, more importantly, why Nintendo itself is mistaken if they did use them on their devices. Otherwise, I still think they are a good fit.
They’re not a bad and horrible idea. I never once said that. Micro kernels are, in my opinion, the future of OS development because they offer a bunch of guarantees which are much harder to achieve with a monolithic kernel, like memory safety and stability (eg a segfault in a driver doesn’t bring down the entire kernel), and so on and so forth.
The problem with micro kernels is that abstraction isn’t free. That’s less of an issue with modern hardware running modern workloads because you’d need to put that memory safety in regardless of the kernel architecture, and chips these days are fast enough that the benefits of security and safety far outweigh the diminishing cost in performance. However on the N64 you don’t need any of the benefits that a micro kernel offers while you do need to preserve as many clock cycles as you can. So a micro kernel isn’t well suited for that specific domain. The case would be different again for any modern low footprint hardware because they’d still be running on CPUs clocked at an order of magnitude more, and a modern embedded system might need to take security or stability concerns more seriously than an air gapped 90s game console.
In short, micro kernels are the future but the N64, being a retro system, needs an approach from the past.
This is why it doesn’t help how modern and 90s hardware have been conflated as equivalent throughout this discussion.
> However on the N64 you don’t need any of the benefits that a micro kernel offers while you do need to preserve as many clock cycles as you can. So a micro kernel isn’t well suited for that specific domain
How come Nintendo decided to use them (according to reverse engineering finds)? If they are not suited, then Nintendo should know that, right?
I’ve answered this question probably half a dozen times already now....
The N64 doesn’t run any OS. It’s just firmware that invokes a ROM which runs bare metal.
The Switch, however, does have an operating system.
There is around 20 years difference between the two games consoles. That’s 20 years of Moore’s law. 20 years of consumer expectations of fast processors and fancier graphics. And 20 years of evolution with developer tooling and thus their expectations.
You cannot compare the two consoles in the way you’re trying to. It’s like comparing a 1920s racing car to a 2020s F1 car and asking why they are so different. Simply put: because technology has advanced so much in that time it’s now possible to do stuff that wasn’t dreamt of before.
Ah okay, I somehow thought that Nintendo had used microkernels on other devices too, not just the Switch. The Nintendo Switch is certainly not a constrained device.
It’s theoretically possible they may have done so on other devices too. It’s believed the Switch system software is derived from the DS system software. I’ve not seen any breakdowns of what kernels are running in the DS nor on the Wii family of devices either. But they’re still orders of magnitude more powerful than the N64 too.
I don’t think there’s much to be gained in speculation about proprietary operating systems running on newer hardware though.
Even if unconfirmed, I think that Nintendo might be using microkernels (according to reverse engineering finds), which shows their massive potential for constrained devices. Although certainly not for performance reasons.
> Games consoles are about as far removed from a constrained device as you could possibly get.
It seems to me that you always ignore low-end devices and very old devices.
If we are talking about a PS5 then yes, this and similar devices are not very constrained; even a full blown Kubuntu might run on some.
But, again, there are low-end gaming devices with a tiny black and white screen for 10 dollars, and old gaming hardware with very tight constraints. The N64 is certainly one of those constrained old gaming devices.
The N64 wouldn’t have been considered “constrained” when it was new though. To be honest it’s not really constrained even now, not compared to the sort of hardware you were discussing earlier. And it’s rather disingenuous how you keep rocking back and forth between current generation consoles and 20+ year old tech as if it’s all current hardware. It makes it rather hard to reply to your points when the goal posts constantly get shifted.
The PS5 would easily run Linux, considering the PS3 had a few Linux distros ported to it (back when Sony endorsed running Linux on their hardware via the “Other OS” option, which they later removed). Linux is pretty lightweight by modern hardware standards anyway. It’s just not suitable for every domain (but what OS is?)
On the topic of consoles running Linux, I’m pretty sure I have a CD-R somewhere with Linux for the Dreamcast. That was the era when consoles really started to converge on a modern-looking software development approach.
> And it’s rather disingenuous how you keep rocking back and forth between current generation consoles
I never did. I never mentioned current generation consoles, not even implicitly. I always talked either about the N64 or about (gaming) devices that are constrained and cannot run Linux.
“I think that Nintendo might be using microkernels [in the Switch] (according to reverse engineering finds), which shows their massive potential for constrained devices.”
Maybe you hadn’t grokked that pjmlp was talking about the Switch (Nintendo’s current generation console) rather than the N64?
Except that you entered "[in the Switch]" from your own imagination and it's simply not there in my original comment.
> Nintendo's actual devices use microkernel based designs. We are way past the usual FUD against microkernels.
Regarding pjmlp's post, yes that's true, but it wasn't clear (and still isn't) to me from his post that he specifically speaks about the Switch when referring to "devices".
> Except that you entered "[in the Switch]" from your own imagination and it's simply not there in my original comment.
I know it’s not there in your original comment; that’s why it was inside square brackets. That’s a standard way of including context in a quote that would otherwise lack said context. You would see that in newspapers and other publications. This isn’t some weird markup I’ve just invented, and it’s definitely not a figment of my imagination, because the post you were replying to was about the Switch.
> Regarding pjmlp's post, yes that's true, but it wasn't clear (and still isn't) to me from his post that he specifically speaks about the Switch when referring to "devices".
You’re right, it wasn’t explicit. My apologies there.
> I know it’s not there in your original comment [and I'm sorry that I have abused the square brackets in such a way, that it changes the meaning] ... My apologies there.
I wasn’t changing the meaning though. You were replying to a post about the Switch. It’s not my fault you can’t grasp enough of this stuff to hold an intellectual discussion.