
> How long ago was "a few years ago"? What model number was that? DD-WRT has been ported to the Archer series but if you're talking a ZyNOS based router then you're probably out of luck. Those ZyNOS devices are the real bottom end of the market though.

This brings back memories :) http://www.ixo.de/info/zyxel_uclinux/ Sure, we are talking about low-end (real bottom) devices and dated models here. I cannot recall the model number, but I think we can both agree that routers that cannot run Linux exist, although they are not very common (anymore).

> Any OS designed around kernels, memory safety etc would be useless in general for constrained devices.

How about QNX then?

"QNX is a commercial Unix-like real-time operating system, aimed primarily at the embedded systems market. QNX was one of the first commercially successful microkernel operating systems. As of 2020, it is used in a variety of devices including cars and mobile phones." - https://en.wikipedia.org/wiki/QNX

They are "aimed primarily at the embedded systems market", their latest release is from "7.1 / July 2020; 5 months ago" and they are operating their business model since 1982.




> This brings back memories :) http://www.ixo.de/info/zyxel_uclinux/ Sure, we are talking about low-end (real bottom) devices here.

So not just low-end, but a decade-old device that was already low-end upon its release. That's hardly a fair argument to bring to the discussion.

> How about QNX then?

QNX wouldn't run on something with <1MB RAM. Nothing POSIX compliant would*. The published minimum requirements for Neutrino 6.5 (which is already 10 years old) were 512MB. Double that if you want the recommended hardware specification.

Sure, if you want to strip out graphics libraries and all the other stuff and just run it as a hypervisor for your own code, you could get those hardware requirements right down. But then you're not left with something POSIX compliant, nor even something useful for the N64. And frankly you could still get a smaller footprint by rolling your own.

The selling point of QNX is an RT kernel, security by design and a common base for a variety of industry hardware. But if you're writing something for the N64 then none of those concerns are relevant (and my earlier point about a resident OS for the N64 being redundant is still equally valid for QNX).

Also, smartphones are neither embedded nor "constrained" devices. I have no idea what the computing hardware is like in your average car but I'd wager it varies massively by manufacturer and model. I'd also wager QNX isn't installed on every make and model of car either.

* I should caveat that by saying, yes it's possible to write something partially POSIX compliant which could target really small devices. There might even be a "UNIX" for the C64. But it's a technical exercise, like this N64 port of Linux. It's not a practical usable OS. Which is the real crux of what we're getting at.


> So not just low-end, but a decade-old device that was already low-end upon its release. That's hardly a fair argument to bring to the discussion.

Fair enough, I agree that I should have come up with a better example. But before going down another rabbit hole, just replace the router with any modern embedded chip you like that cannot run Linux, as an example.

Regarding QNX, I don't know their current requirements, but here is what impresses me:

"To demonstrate the OS's capability and relatively small size, in the late 1990s QNX released a demo image that included the POSIX-compliant QNX 4 OS, a full graphical user interface, graphical text editor, TCP/IP networking, web browser and web server that all fit on a bootable 1.44 MB floppy disk."

> I should caveat that by saying, yes it's possible to write something partially POSIX compliant which could target really small devices.

Yeah, I think here's an interesting overview of some: http://www.microkernel.info

I wonder how many of them are POSIX compliant (or partially) and what their requirements are. GNU/Hurd certainly is.


>QNX wouldn't run on something with <1MB RAM. Nothing POSIX compliant would*. The published minimum requirements for Neutrino 6.5 (which is already 10 years old) were 512MB. Double that if you want the recommended hardware specification.

An older QNX ran from a floppy with very few MB, with a GUI and a browser with limited JS support.

>QNX wouldn't run on something with <1MB RAM. Nothing POSIX compliant would*.

You could run Linux on a TTY on an i386 with 2MB and some swap about 24 years ago.


> An older QNX ran from a floppy with very few MB, with a GUI and a browser with limited JS support.

It did and it was a very impressive tech demo... but it's not representative of a usable general purpose OS. Chrome or Firefox alone comes in at >200MB. So there is no way you'd get a browser that would work with the modern web to fit on a 1.44MB floppy. And that's without factoring in fonts, drivers, a kernel and other miscellaneous user land.

The QNX demo was a bit like this N64 demo. Great for showing off what can be done but not a recommendation for what is practical.

> You could run Linux on a TTY on an i386 with 2MB and some swap about 24 years ago.

That's still double the memory specification and yet Linux back then lacked so much. For example Linux 24 years ago didn't have a package manager (aside from Debian 1, which had just launched, and even then dpkg was very new and not entirely reliable). Most people back then still compiled stuff from source. Drivers were another pain point: installing new drivers meant recompiling the kernel. Linux 1.x had so many rough edges and lacked a great deal of code around some of the basic stuff one expects from a modern OS. There's a reason Linux has bloated over time and it's not down to lazy developers ;)

Let's also not forget that the Linux Standard Base (LSB), which is the standard distros follow if they want Linux and, to a larger extent, POSIX compatibility, wasn't formed until 2001.

Linux now is a completely different animal to 90's Linux. I ran Linux back in the 90s and honestly, BeOS was a much better POSIX-compatible general purpose OS. Even Windows 2000 was a better general purpose OS. I don't think it was until 2002 that I finally made Linux my primary OS (but that's a whole other tangent).

I mean we could have this argument about how dozens of ancient / partially POSIX-compliant / unstable kernels have had low footprints. But that's not really a credible argument if you can't actually use them in any practical capacity.


> I mean we could have this argument about how dozens of ancient / partially POSIX-compliant / unstable kernels have had low footprints. But that's not really a credible argument if you can't actually use them in any practical capacity.

There are modern microkernels that are POSIX compliant and have a much lower footprint than linux. That's not the problem. I think the most prominent issue people point out here is performance. However, it's very obvious to me that the extra abstraction of having a kernel vs having no kernel on a constrained device costs performance, and it's always a trade-off; both solutions can be found and both solutions are valid.


> There are modern microkernels that are POSIX compliant and have a much lower footprint than linux.

There are... but they're not < 1MB. Which was the point being made.

> I think the most prominent issue people point out here is performance.

That's literally what I said at the start of the conversation!

> However, it's very obvious to me that the extra abstraction of having a kernel vs having no kernel on a constrained device costs performance, and it's always a trade-off; both solutions can be found and both solutions are valid.

Show me a device with the same specs as the N64 which runs an OS and I'll agree with you that both solutions are valid. The issue isn't just memory, it's your CPU clock speed. It's the instructions supported by the CPU. It's also the domain of the device.

Running an OS on the N64 would never have made sense. I guess, in some small way, you could argue the firmware is an OS in the same way that a PC BIOS could be. But anything more than that is superfluous both in terms of resources used and any benefits it might bring. But again, if it's a case of "both solutions are valid" then do please list some advantages a resident OS would have brought. I've explained my argument against it.

Let's take a look at what was happening on PCs around the time of the N64's release. Most new games were still targeting MS-DOS and largely interfaced with hardware directly. In a way, DOS was little more than a bootstrap: it didn't offer up any process management; the only memory management it did was provide an address space for the running DOS application; and it didn't offer any user space APIs for hardware interfaces -- that was all done directly. And most of the code was either assembly or C (and the C was really just higher-level assembly).
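
To make that concrete (this is my own illustration, nothing official): under something like Borland/Turbo C in real-mode DOS, "talking to the hardware" just meant hitting BIOS interrupts and poking video memory yourself, roughly along these lines:

    /* Sketch of the DOS-era "no OS API, talk to hardware directly" style.
       Assumes Borland/Turbo C in real mode; MK_FP, int86 and `far` come
       from its <dos.h>. Illustrative only. */
    #include <dos.h>

    static unsigned char far *vga;   /* VGA framebuffer at segment A000h */

    void set_mode_13h(void)
    {
        union REGS r;
        r.x.ax = 0x0013;             /* BIOS video service: 320x200x256 */
        int86(0x10, &r, &r);         /* straight to the BIOS, no DOS API */
        vga = (unsigned char far *)MK_FP(0xA000, 0x0000);
    }

    void put_pixel(int x, int y, unsigned char colour)
    {
        vga[y * 320 + x] = colour;   /* poke video memory yourself */
    }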

Fast forward 4 years and developers are using OpenGL, DirectX and Glide (3dfx's graphics library which, if I recall correctly, was somewhat based on OpenGL) in languages like C and C++, but instead of writing prettier ASM they're writing code based around game logic (ie abstracting the problem around human-relatable objects rather than hardware schematics). It was a real paradigm shift in game development. Not to mention consoles shifting from ROM cartridges to CD posed a few new challenges: 1) you no longer have your software exist as part of the machine's hardware, 2) you have now made piracy a software problem (since CD-ROM drives are a standard bit of kit in most computers) rather than a hardware one (copying game carts required dedicated hardware that wasn't always cheap). Thankfully by that time computer hardware had doubled a few times (Moore's Law) so it was becoming practical to introduce new abstractions into the stack.
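
For contrast with the DOS sketch above, the latter-era style is more like describing what you want drawn and letting the driver and OS worry about the hardware. A minimal immediate-mode OpenGL 1.x sketch (again my own illustration; assumes a GL context has already been set up by GLUT/SDL/whatever):

    #include <GL/gl.h>

    /* Latter-era style: no registers, no video memory, just a scene
       description handed off to the driver. */
    void draw_frame(void)
    {
        glClear(GL_COLOR_BUFFER_BIT);
        glBegin(GL_TRIANGLES);
            glColor3f(1.0f, 0.0f, 0.0f); glVertex2f(-0.5f, -0.5f);
            glColor3f(0.0f, 1.0f, 0.0f); glVertex2f( 0.5f, -0.5f);
            glColor3f(0.0f, 0.0f, 1.0f); glVertex2f( 0.0f,  0.5f);
        glEnd();
    }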

The N64 exists in the former era and the operating system methodologies you're discussing exist in the latter era. Even the constrained devices you're alluding to are largely latter-era tech because their CPUs are clocked orders of magnitude higher than the N64's and thus you don't need to justify every instruction (it's not just about memory usage), but in many cases an OS for an embedded device might just be written as one binary blob and then flashed to ROM, effectively running like firmware.
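
For what it's worth, that "one binary blob flashed to ROM" shape usually ends up as nothing more than a bare-metal superloop, something like this (the register name and address are invented purely for illustration):

    #include <stdint.h>

    /* Hypothetical timer-tick flag register; the address is made up. */
    #define TICK_FLAG (*(volatile uint32_t *)0x40001000u)

    static void poll_inputs(void)   { /* read buttons, UART, sensors... */ }
    static void update_state(void)  { /* application/game logic */ }
    static void drive_outputs(void) { /* write LEDs, DAC, display... */ }

    int main(void)
    {
        for (;;) {                   /* the whole "OS" is this loop */
            while (!TICK_FLAG) { }   /* busy-wait for the next tick */
            TICK_FLAG = 0;           /* acknowledge the tick */
            poll_inputs();
            update_state();
            drive_outputs();
        }
    }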

It's sometimes hard to get a grasp on the old-world way of software development if it's not something you grew up with. But I'd suggest maybe look at programming some games for the Atari 2600 or Nintendo Gameboy. That will give you a feel for what I'm describing here.


>It's sometimes hard to get a grasp on the old-world way of software development if it's not something you grew up with.

I lived through that; the first PC I used ran DOS with 5.25" floppies.

On 3DFX: it was a low-level mini-GL in firmware. Glide somehow looked better than the later games with DirectX, up to DirectX 7, when games looked a bit less "blocky".

>For example Linux 24 years ago didn't have a package manager

Late 90's Linux is very different from mid 90's Linux. Slackware in 1999 was good enough, and later with the 2.4 kernel it was on a par with w2k; even Nvidia drivers worked.

And I could even run some games with early Wine versions.


In fairness, you did say “24 year old Linux”, which would put it in the mid-90s camp rather than the late 90s.

I wouldn’t agree that Slackware in 2000 was on a par with Windows 2000 though. “Good enough”, sure. But Linux had some annoying quirks and Windows 2000 was a surprisingly good desktop OS (“surprising“ because Microsoft usually fuck up every attempt at systems software). That said, I’d still run FreeBSD in the back end given the choice between Windows 2000 and something UNIX-like.





