> every embedded Linux device I've been paid to work on in the past five years had over 1GB of RAM. If I'm on a tiny machine where I care about 8MB RSS, I'm not running Linux, I'm running Zephyr or FreeRTOS
The gap between “over 1GB of RAM” and 8MB RSS contains the vast majority of embedded Linux devices.
I, too, enjoy when the RAM budget is over 1GB. The majority of cost constrained products don’t allow that, though.
That said, it’s more than just RAM. It increases boot times (mentioned in the article), which is a pretty big deal on certain consumer products that aren’t always powered on. The article makes some good points that you’ve waved away because you’ve been working on a different category of devices.
Except that, thanks to the availability crises hitting the industry for the past decade, sometimes you have to go with the 4MB part anyway.
Just look at WiFi routers. In the USA and China they're all sold with 64 or 128MB of RAM. In South America and Europe they're all 16 or 32MB, for no clear reason.
Do you have some examples? I have a very hard time imagining a modern WiFi router supporting the latest standards, IPv6, an admin web interface and so on running on 16MB of RAM. I also take issue with "WiFi routers in Europe are all 16 or 32MB of RAM". In what decade?
My ISP-provided router also does VPN, VoIP, mesh networking, and firewalling, and it's towards the lower end of the feature spectrum (as it's offered for free and not a fancy router I bought).
Are you talking about devices from the early 2000s?
I’ve still got two MR3040s. TP-Link hasn’t released any updates for them in years. You can run an older version of OpenWrt on them, but there’s no real point. These things don’t even support 5GHz WiFi.
Pick any modem from Linksys or D-Link or Netgear, then buy one in South America and compare what's really inside.
Look at all the rev. B entries on the OpenWrt wiki: sometimes the RAM shrinks, sometimes the ARM CPU changes to MediaTek, and often the WiFi chip changes from Qualcomm to Realtek. And it's always the revisions sold outside the USA and China that show this in the observation fields.
> In South America and Europe they're all 16 or 32MB, for no clear reason
I don't know where you're getting your data from, but it's clearly wrong or outdated. These are the best-selling routers under $100 in Czechia on Alza (the largest online retailer):
> The gap between “over 1GB of RAM” and 8MB RSS contains the vast majority of embedded Linux devices.
Of all currently existing Linux devices running around the world right this moment? Maybe.
But of new devices? Absolutely not, and that's what I'm talking about.
> The majority of cost constrained products don’t allow that, though.
They increasingly do allow for it, is the point I'm trying to make.
And when they don't: there are far better non-Linux open source options now than there used to be, which are by design better suited to running in constrained environments than a full-blown Linux userland ever can be.
> It increases boot times (mentioned in the article) which is a pretty big deal on certain consumer products that aren’t always powered on. The article makes some good points that you’ve waved away because you’ve been working on a different category of devices.
I've absolutely worked on that category of devices, I almost never run Linux on them because there's usually an easier and better way. Especially where half a second of boot time is important.
> But of new devices? Absolutely not, and that's what I'm talking about.
The trouble with "new" is that it keeps getting old.
There would have been a time when people would have said that 32MB is a crazy high amount of memory -- enough to run Windows NT with an entire GUI! But as the saying goes, "what Andy giveth, Bill taketh away". Only these days the role of Windows is being played by systemd.
By the time the >1GB systems make it into the low end of the embedded market, the systemd requirements will presumably have increased even more.
> there are far better non-Linux open source options now than there used to be, which are by design better suited to running in constrained environments than a full blown Linux userland ever can be.
This seems like assuming the conclusion. The thing people are complaining about is that they want Linux to be good in those environments too.
That's entirely the point. In the days of user devices with 32MB of RAM, embedded devices were expected to make do with 32KB. Now we have desktops with 32GB and the embedded devices have to make do with 32MB. But you don't get to use a GB of RAM now just because embedded devices might have that in a few years' time, and unless something is done to address it, the increase in hardware over time doesn't get you within budget either, because the software bloat increases just as fast.
We've been stuck at ~$10/GB for a decade. There are plenty of devices for which $10 is a significant fraction of the BOM and they're not going to use a GB of RAM if they can get away with less. And if the hardware price isn't giving you a free ride anymore, not only do you have to stop the software from getting even bigger, if you want it to fit in those devices you actually need it to get smaller.
I recently looked up 2x48GB RAM kits and they're around 300€, more for the overclockable ones. That's about 3€ per GB, and in the more expensive segment of the market at that, since anyone who isn't overclocking their RAM is fine using four slots.
The end of that chart is in 2020 and in the interim the DRAM makers have been thumped for price fixing again, causing a non-trivial short-term reduction in price. But if this is the "real" price then it has declined from ~$10/GB in 2012 to, let's say, $1/GB now, a factor of 10 in twelve years. By way of comparison, between 1995 and 2005 (ten years, not twelve) it fell by a factor of something like 700.
You can say the free lunch is still there, but it's gone from a buffet to a stick of celery.
> We live in the 2020s now and RAM is plentiful. The small computers we all carry in our pockets (phones) usually have between 4 and 16GB of RAM.
I do not think the monster CPUs running Android or iOS nowadays are representative of embedded CPUs.
RAM still requires power to retain its contents. In devices that sleep most of the time, decreasing the amount of RAM can be the easiest way to increase battery life.
I would also think many of the small computers inside my phone have less memory. For example, there is probably at least one CPU inside the phone's modem, a CPU doing wear leveling inside the flash memory modules, a CPU managing the battery, a CPU in the fingerprint reader, etc.
Is that really the case? On desktops it's significantly faster than all the alternatives. Of course, if you know your hardware there's no need for discovering stuff at boot and the like, but I don't know. I'd be interested in real-life experiences, because for me systemd's boot time was always way faster than the supposedly simpler alternatives.
When Arch Linux switched to systemd, my laptop's (HDD) boot time jumped from 11 seconds to over a minute. That 11 seconds was easy to achieve in Arch's config by removing services from the boot list and marking some of the others to start in parallel without blocking the rest. After the switch to systemd there was no longer such a simple list in a text file, and when asked for the list, systemd would produce such a giant graph that I had no energy to wade through it and improve things.
Later, when I got myself a laptop with an SSD, I discovered that what my older Arch configuration could do on an HDD is what systemd could do only with an SSD.
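For reference, the pre-systemd Arch setup really was one array in /etc/rc.conf. The daemon names below are just illustrative, but the syntax is the real thing as I remember it: prefixing an entry with ! skipped it, and @ started it backgrounded so it didn't block the rest of the boot:

    # /etc/rc.conf (Arch initscripts, before the 2012 switch)
    # '!' = don't start, '@' = start in the background, in parallel
    DAEMONS=(syslog-ng @network @sshd @crond !netfs)

That's the "simple list in a text file" I'm talking about.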
I switched to systemd when Arch switched, and from the get-go it was massively easier to parallelise with systemd than with the old system, and that was with an HDD.
Systemd already parallelises by default, so I don't know what insanely strange things you were doing, but I fail to see how it could bring boot time from 11s to 1 minute. Also, it's very easy to get a list of every enabled service with systemctl (systemctl list-unit-files --state=enabled), so I don't really know what your point about a giant graph is.
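And if you actually want to dig into a slow boot, systemd ships tooling for exactly that, which is a lot more digestible than staring at the raw dependency graph:

    # total time split across firmware, loader, kernel and userspace
    systemd-analyze
    # per-unit startup cost, slowest first
    systemd-analyze blame
    # the chain of units actually gating the default target
    systemd-analyze critical-chain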
We don’t have to talk in hypotheticals here. Booting time benchmarks from the time systemd was released are everywhere and showed shorter boot times. It was discussed ad nauseam at the time.
Arch changed to systemd in 2012, at which point systemd was 2 years old. It surely had quite a few growing pains, but I don't think that's representative of the project. In general it was the first init system that could properly parallelize, and as I mentioned, it is significantly faster on most desktop systems than anything else.
It was only faster if you started with a bloated Red Hat system to begin with. But yes, it was the beginning of parallelism in init...
But the "faster boots" you're remembering were actually a running joke at the time. Since the team working on it were probably booting VMs all day, the system was incredibly aggressive on shutdown, and that was the real source of it: it reboots so fast because it just throws everything out and reboots. I don't care much for the jokes, but that's why everyone today remembers "systemd is fast".
It mandates strict session termination, unlike the unsustainable wild-west approach of older Unix systems. Proper resource deallocation is crucial for modern service management. When a user logs out without having been granted "lingering" for their processes, all of those processes should be signaled to quit and subsequently killed.
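For what it's worth, the whole policy is a couple of logind knobs (defaults vary by distro and systemd version):

    # /etc/systemd/logind.conf
    [Login]
    # reap leftover user processes when the session ends
    KillUserProcesses=yes
    # users whose stray processes are never reaped
    KillExcludeUsers=root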
I think the "unsustainable wild west" of sending SIGTERM, waiting, and then sending SIGKILL was actually very good, because it was adaptable (you were on your own if you had non-standard stuff, but at least you could count on a contract).
Nowadays, if you start anything more serious from your user session (e.g. a qemu VM from your user shell), it gets SIGHUP'd ASAP on shutdown, because systemd doesn't care about non-service PIDs (workaround sketched below). But oh well.
...which is where the jokes about "systemd is good for really fast reboots" came from mostly.
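The workaround, such as it is: wrap the VM in a transient unit, so it at least gets an orderly SIGTERM and a stop timeout instead of an instant SIGHUP. The unit name and timeout here are just illustrative:

    # run qemu as a transient user service with a generous stop timeout
    systemd-run --user --unit=my-vm -p TimeoutStopSec=120 \
        qemu-system-x86_64 -m 2G -drive file=disk.img,format=qcow2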
The old way has literally no way to differentiate between a frozen process and one that simply wants to keep running after the session ends, e.g. tmux or screen.
It's trivial to run these as a user service, which can linger after logout. Also, systemd has a configurable wait time before it kills a process (the "dreaded" 2-minute timer is usually something like that).
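Concretely, something like this; the unit and session names are arbitrary:

    # let this user's units outlive their login sessions
    loginctl enable-linger "$USER"

    # ~/.config/systemd/user/tmux.service
    [Unit]
    Description=detached tmux server

    [Service]
    Type=forking
    ExecStart=/usr/bin/tmux new-session -d -s main
    ExecStop=/usr/bin/tmux kill-server

    [Install]
    WantedBy=default.target

Then systemctl --user enable --now tmux.service and the session survives logout.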
Which was fine for everything that didn't need a watchdog. systemd, on the other hand, still lacks plenty of common use cases, and people bend over backwards to implement them with what's available. Ask distro maintainers who know the difference between the main types of service files...
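The "main types" being the Type= values (simple, forking, oneshot, notify, dbus). The watchdog case, for instance, does exist but puts the burden on the daemon, which has to ping systemd itself; a sketch, with a hypothetical binary:

    [Service]
    Type=notify
    # the daemon must call sd_notify(0, "WATCHDOG=1") within every 30s window
    WatchdogSec=30
    Restart=on-failure
    ExecStart=/usr/local/bin/mydaemon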