We often criticise systemd for being too bloated and for making it hard to write a drop-in replacement. I totally agree with this line of thought.
However, in my mind it has made several awesome things possible. My boot time got dramatically shorter when I adopted it, thanks to parallelization. Besides, daemons now have simple and robust service definitions. SysV had become a mess!
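To give a rough idea, a whole service definition can be as small as this (just a sketch; the unit name, description and binary path are made up):

    [Unit]
    Description=Example daemon (hypothetical)
    After=network.target

    [Service]
    # Type=simple is the default: the binary just runs in the foreground
    ExecStart=/usr/local/bin/exampled --no-daemonize

    [Install]
    WantedBy=multi-user.target

Compare that with pages of copy-pasted shell boilerplate in a typical SysV init script.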
Lastly, lightweight containers are the real deal for small development tasks (not for production!). Just one command: systemd-nspawn, and you're ready to go. Docker is currently a bit more complicated to set up.
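For anyone curious, the workflow is roughly this (a sketch; the rootfs path is just an example):

    # build a minimal Debian tree to play in
    debootstrap stable /srv/devbox
    # get a chroot-like shell inside it, with namespace isolation
    systemd-nspawn -D /srv/devbox
    # or boot it as a full containerised system
    systemd-nspawn -bD /srv/devbox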
Arguably, many features, including containers, should be moved out of systemd. Right now, I think the issue is less a monolithic architecture than too many things being shipped under the same project umbrella.
> However, in my mind it has made several awesome things possible. My boot time got dramatically shorter when I adopted it, thanks to parallelization. Besides, daemons now have simple and robust service definitions. SysV had become a mess!
Writing daemon startup files was something I always dreaded, and never really did well.
Before systemd, if I needed to run services I'd try to use daemontools (for auto-restart and logging), but then I had two service-starting services running on my system. Upstart had some of these features, but was still finicky (and the versions I had available didn't consistently have good service supervision support).
systemd just fixes that.
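Concretely, the daemontools-style supervision and logging end up as a couple of declarative lines in the unit file; a sketch with a made-up service name and path:

    [Service]
    ExecStart=/usr/local/bin/mydaemon --foreground
    # restart automatically if the process dies unexpectedly
    Restart=on-failure
    RestartSec=2

Stdout/stderr go to the journal, readable with journalctl -u mydaemon.service.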
Also, with systemd, for the first time I feel like I'm really using Linux, not just a random *Nix that has adequate drivers.
I'm saying that systemd makes the Linux kernel's feature set and capabilities visibly usable from user-space. For (nearly) the first time, it feels like it matters that I'm using Linux.
Linux is still Linux without systemd, it just doesn't provide as much benefit (aside from device support and compatibility) over, say, FreeBSD without software that takes advantage of its feature set.
The lack of documented software that used those kernel features to enable functionality that was useful to me.
I was using some of them, such as KVM for virtualization and LVM for disk management. But systemd still had a substantial 'oh, wow, Linux lets process management be this easy and powerful?' factor, showing me something new that I hadn't seen in my use of any other system (FreeBSD, OpenBSD, Windows, a touch of Mac).
The documentation directory of the kernel source is actually pretty nice.
There's a bunch of utilities for things like cgroups, namespaces, etc. They're not well known, but they work perfectly fine.
I suspect they're not well known because there was no commercial or marketing drive behind them. Nowadays at least one of those seems to be needed to even gain visibility.
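For example, off the top of my head (stock util-linux / iproute2 / libcgroup; the workload binary is made up):

    # new PID and mount namespaces with a private /proc (util-linux)
    unshare --pid --fork --mount-proc /bin/bash
    # a separate network namespace (iproute2)
    ip netns add demo
    ip netns exec demo ip link
    # classic cgroup tools (libcgroup)
    cgcreate -g cpu,memory:/demo
    cgexec -g cpu,memory:/demo ./my-workload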
People don't go looking for what's cool or good where it actually is. They wait for HN or some other news website to tell them.
Just like the regular news, really. Turns out it doesn't work all that well.
In a server environment, the time it takes for the various DRAC/BIOS stages to initialize and reach the bootloader is often far longer (sometimes several minutes) than the boot time of sysvinit. So optimizing boot time in the Linux part of a server is probably moot for me.
On a laptop you can suspend/hibernate, as others have said, if you care about startup time. I have full-disk encryption and need to type in a password to boot, so a few seconds more or less don't matter anyway.
So that leaves the desktop, where I might care about boot times (the UEFI/BIOS there is actually saner than on servers and reaches the bootloader very fast). It turns out that my desktop already boots faster with sysvinit than my router does, so faster boot times on my desktop would get me nowhere; I still wouldn't be able to use the internet until the router has booted.
So, faster boot times... I didn't need all the pain systemd is causing just for that. Debian already had makefile/startpar-based concurrent boot; I don't think systemd would improve on that much.
Meanwhile, not using systemd breaks things that used to work on a KDE desktop (USB mounting, VPN config, etc.), so an app depending on systemd is a net negative for me.
Boot time most definitely impacts my cloud computing setup. If your servers are pets then, no, boot time is not important to you. However, my cattle are constantly being brought to life and killed again. When I need additional capacity to handle a spike in load, I want it right now, not in 5 minutes.
Holy heck, I get being defensive, but you've let logic totally fall by the wayside here. Where do I start...
The fact that BIOS/DRAC/RAID initialisation is slow on some servers is irrelevant. Linux's init and the firmware initialisation don't run concurrently, so if init takes longer, the whole boot takes longer. Additionally, many server manufacturers have improved boot times in the last few years (down from 10+ minutes, to 5+ minutes, to less).
Most routers don't take as long to boot as you claim. The entire OS is about 8 MB (uncompressed), RAM is only 32 MB, and the medium the OS is stored on is faster than a computer's hard drive. So just looking at I/O should tell you your supposition is flawed. In my experience most Linux-based routers bring up the RJ-45 (LAN side) interface in under 20 seconds, unless they are slow to get an address on the WAN interface (e.g. unable to get an IP, etc.). If you set a static WAN IP/gateway/etc., boot times come down substantially.
Additionally, the whole concept that every time your PC turns off you'll also turn off your router at the mains is, uhh, strange. Sure, there are power cuts, but that isn't the only time you shut down your PC throughout the year.
The concept that your PC needs to wait for the server is equally flawed. Again, yes, power cuts. But PCs get shut down significantly more often than servers and if we're playing that game then wouldn't a "server" have a UPS anyway?
So overall your argument for why boot times don't matter lacks any kind of substance. It is also purely based on a PC->Server->Router infrastructure where nothing is on a UPS and everything suffers from a power cut (then "races" to all come back up).
In the real world my phone has Linux, our "Tivo" has Linux, our printer has Linux, our car's entertainment system has Linux, etc. So bad Linux boot times will be noticed day to day. It matters to a lot of people and while I don't know if systemd is the solution, I do know that progress is needed relative to the classic UNIX init system (per the article).
I think my point was that boot times are already fast enough on a desktop/laptop, and systemd's improvements over that do not justify the costs for me.
It would be nice to improve boot times on routers, but I don't know if systemd would improve there much.
Boot time on desktop with sysvinit: 8-9s, boot time on desktop with systemd: ~6s.
Boot time of router (until network is up and usable, maybe made longer by having to setup WiFi): 1m+.
PC waiting for router is my usual use-case when I power off everything and then power them back on at another time. I haven't said anything about PC waiting for server, and I agree it wouldn't make sense.
I don't have a server with systemd to check, but assuming a similar improvement, 4s out of the 5m+ you mention is barely 1%.
> Lastly, lightweight containers are the real deal for small development tasks (not for production!). Just one command: systemd-nspawn, and you're ready to go.
You might be interested in firejail [1]. It makes finer-grained use of Linux namespaces, and doesn't depend on systemd (or much of anything, for that matter).
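Basic usage looks something like this, if I remember the flags right:

    # run a browser with a throwaway private home directory
    firejail --private firefox
    # a shell with no network access on top of that
    firejail --private --net=none bash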
> Lastly, lightweight containers are the real deal for small development tasks (not for production!).
I've come across this sentiment a few times in the last month, but I haven't yet heard an explanation other than "VMs are battle-tested and containers might leak data to each other". Is there something more that I'm missing? Why aren't containers a good idea to use in production?
If we (fairly or unfairly) group Linux's LXC (e.g. Docker) with the BSDs' jails, the main contrast with "proper" hypervisors (Xen/KVM/VMware/bhyve, that new thing in FreeBSD 10?) is (the possibility of) full resource accounting and limitation. Go ahead, run your pi-digit finder at "100%" CPU, pipe /dev/zero over ssh to /dev/null on some box and pipe it to a local file as well: neither the other VMs nor the host will notice. You only get 1 Mbps, x cycles of CPU and x MB of disk.
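To make that concrete, this is the kind of cap a KVM/libvirt guest can be given at the hypervisor level (only a fragment of a domain definition, with made-up values; a real domain also needs a name, disk and so on):

    <cputune>
      <!-- roughly half of one host CPU: 50ms of runtime per 100ms period -->
      <period>100000</period>
      <quota>50000</quota>
    </cputune>
    <interface type='network'>
      <source network='default'/>
      <!-- caps in kilobytes/second; 128 KB/s is roughly the 1 Mbps above -->
      <bandwidth>
        <inbound average='128'/>
        <outbound average='128'/>
      </bandwidth>
    </interface>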
Secondly, assuming a bug in the kernel, one might assume root in a container can lead to root on the host. BSD jails have been pretty solid for the last few years AFAIK, but hardware support for virtualization might still get you more of both separation/safety and speed. There have been some bad bugs in (as I recall) the I/O system in Xen, leading to similar issues... but again, the last time I saw anything on that was years ago.
YMMV. Generally Docker doesn't have "run untrusted code, safely, as root" as a design goal (yet, AFAIK) (not entirely sure about LXC, née VServer, the underlying technology), so don't expect it to do that. Isolation and security (especially without sacrificing performance) are very hard to get right. Or so a long series of privilege escalation exploits across many different OSes would seem to indicate.
Just to be a little pedantic, LXC was definitely inspired by the pre-existing VServer and OpenVZ, but it's a different implementation.
A lot of things that are viewed as innovations from Docker really already existed in 2006~2007. Maybe a bit cruder, but not that much. OpenVZ was very close to that. AUFS is the only real innovation as far as I know.
Anyway, the Docker guys were smart enough to ride the cloud wave and hype the thing. I'm pretty sure Parallels missed the boat because they went the open-core way (OpenVZ/Virtuozzo).
I'm just familiarising myself with Docker at the moment (specifically Docker, not 'containers' in general). I'm finding that there's a lot of glitz and glamour around it that's good for devs, but we ops guys like mundane things like logs and status messages. For example, I get the same message whether I start or stop a container: the argument I used to refer to the container. No information. I've run into a few shortcuts like this. It's pretty magical, don't get me wrong, but it's still in adolescence. I've heard some banks are using it in production (no idea what for, though), which is a feather in Docker's cap, but there are still some things that need to be polished.
If your boot time was bad before and decent now, that's not thanks to the goodness of systemd but rather to the badness of whatever hideous system your distro was using before, and/or because your distro was starting a bunch of useless junk that shouldn't have been running in the first place. I've been using a flat /etc/rc (all commands in one file, with & at the end of anything non-essential) for years (decades?) now, and have always had a login prompt faster than the display can synchronize to the video mode change.
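For reference, the kind of flat rc file I mean looks roughly like this (the daemons listed are just examples):

    #!/bin/sh
    # essential things first, in order
    mount -a
    /sbin/dhcpcd eth0
    /usr/sbin/sshd
    # anything non-essential goes in the background so it never blocks the login prompt
    /usr/sbin/ntpd &
    /usr/sbin/cupsd &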