Experiment in booting Linux fast (gentooexperimental.org)
210 points by padde on Oct 9, 2013 | 120 comments


As the author says, it's just an experiment to see how fast he can get the filesystem and kernel up.

There is no network and most functionality is missing. There are no services or anything else.

So no point comparing this to Windows or OSX or anything else that requires any functionality other than kernel and filesystem.


Actually, if you use efficient software, you can get lots of stuff running in basically no time. In particular, that means avoiding shell scripts and invoking as few external commands as possible.

In fact, I'm pretty sure that if the author stopped using OpenRC and used systemd instead, to initialize the same things, it would have made a noticeable difference.

And a little self-promotion now :P As far as network config is concerned, using my NCD[1] software, it can be set up in no time (that is, not much more than the time it takes to negotiate a DHCP lease). In fact, if you were super crazy, you would drop systemd and just run NCD as init, doing both basic initialization and network config, all in one NCD process. I even tried it some time ago, and got some very basic stuff working[2].

[1] https://code.google.com/p/badvpn/wiki/NCD

[2] https://code.google.com/p/ncdinit/


Recommend you have a look at how Apple does DHCP negotiations. http://cafbit.com/entry/rapid_dhcp_or_how_do


Thanks for the link, that is some interesting info. I've only skimmed it, but yes, I'm aware that DHCP requests can be made without a discover phase (I had to read the RFC to implement DHCP in NCD ;).

I suspect that it's not actually doing some low-level ARP magic directly; rather, the DHCP client may just be trying to send DHCP request packets to multiple past DHCP servers, and only the one whose automatic ARP request succeeds actually gets sent out. At least that's how I would implement it.
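For illustration, here's roughly that idea at shell level (a sketch only; the interface, gateway and lease addresses are hypothetical, and it assumes arping from iputils plus dhcpcd are available):

    # Probe the previously seen gateway; if its MAC still answers,
    # reuse the cached lease immediately and confirm it in the background.
    if arping -c 1 -w 1 -I eth0 192.168.1.1 >/dev/null 2>&1; then
        ip addr add 192.168.1.42/24 dev eth0
        ip route add default via 192.168.1.1
        dhcpcd --background eth0    # renew/confirm the old lease async
    else
        dhcpcd eth0                 # unknown network: full DISCOVER cycle
    fi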


I actually experimented with a kvm image of gentoo where I replaced openrc with daemontools (plus some sh scripts I wrote to manage dependencies). I boot to a login with DHCP and NIS in under 5 seconds.
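For reference, a daemontools service is just a directory with a run script; a minimal sketch of a supervised DHCP client (the service path and flags are illustrative):

    #!/bin/sh
    # /service/dhcp/run -- started and restarted by daemontools' supervise
    exec 2>&1                          # merge stderr into the log stream
    exec dhcpcd --nobackground eth0    # must stay in the foreground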


Windows and OS X are actually "cheating" regarding their boot time.

The UI becomes responsive before all services are loaded.


I really hate this.

My MBA goes back to sleep before I can type my login password, so I have to press something (usually an arrow key) to keep it awake until the UI responds. If I type the password before the UI responds, there is a good chance that the first characters are lost.

I should probably reinstall the system or do some cleanup, but it is still a UI/UX issue.


Had a similar issue (the screen would just go black again at a random interval after waking from sleep, and I'd have to log in again). Resetting the system management controller did the trick for me: http://support.apple.com/kb/HT3964


I remember before I got my SSD I'd have to wait for the clock in the menu bar to update before I could use anything.


I have a pretty standard Arch Linux installation with an SSD and my boot time is 1.5 seconds. My MacBook with an SSD takes nearly a minute.


Startup finished in 1.267s (kernel) + 1.167s (userspace) = 2.434s

Booting to a full-featured Gnome. NetworkManager is the showstopper...
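For anyone hunting their own showstopper, systemd ships the tooling (the unit name below is the usual network-wait culprit, but check your own output first):

    systemd-analyze blame | head       # slowest units first
    systemd-analyze critical-chain     # what actually serializes the boot
    # if nothing needs the network before login, stop waiting for it:
    systemctl disable NetworkManager-wait-online.service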


I suggest something is wrong with your MacBook.

My 2012 MBA with an SSD and 8GB of RAM cold boots to the desktop in ~8 seconds. (It was ~6 when I bought it..)

I'm hoping a clean install of 10.9 will speed it up again a bit.


I suspect it's because I use FileVault, but I use full-disk encryption via LUKS on Arch as well.


I have not noticed a speed difference after enabling filevault.

I have not measured either. But the OSX boot time is still measured in seconds with the SSD.


Does that exclude bios init time?


BIOS init time on my chromebook pixel is absolutely a killer. The "OS verification is turned off" screen appears instantly, but then I have to manually press ctrl-l to make seabios start loading, and seabios isn't particularly snappy about loading then picking the harddrive so grub can kick off. That whole process takes maybe 4 seconds which I think is a bit absurd.

My solution of course is to just suspend/hibernate instead of powering down, but it is a little disappointing that I don't get to play the "fast boot" game.


Yes, if you do unsafe things you can be fast. There is a hilarious open issue that Google refuses to fix: critical information required to boot is written to a part that is battery-backed. So if you have your Pixel in developer mode, and have installed Linux on it, just close the screen and let your battery drain to zero - your Pixel will become non-bootable. All of this because storing this information in the TPM, where it would be persistent, would "slow" down the boot. Speaking from experience - my Pixel has lost data three times because of this.


Huuh, that sounds pretty shit, I'll have to increase the frequency of my backups... Do you have a link to that issue handy?



Thanks!

I've run down my battery a few times while not suspended (just me being neglectful with plugging in) and haven't run into this issue. Is this just because the hardware kills power to the processor before the battery actually hits zero?


Yes


Stuck at work right now so google+ is out of reach. Will update this message later at night with the relevant links.


No, I doubt it. I also ran Arch Linux on a MacBook Air 2012 w/ SSD and the BIOS init time was (way) longer than the Arch Linux boot.


And that's a great point he implicitly makes in his "study". With Linux (or any OSS OS for that matter) you strip out all these different services, dig deep into the code, remove lots of stuff, and you still get a working OS. Sure, you can't use LibreOffice, but that's not the point. You cannot get this sort of flexibility with the operating systems you mention. Good luck removing Internet Explorer from freaking Windows.



XPLite, nLite, RT Se7en Lite, etc disagree


If I had the money I'd set up a small yearly prize for "quickest boot on an RPi". Have some rules for what is or isn't allowed, and maybe different categories (e.g. "anything goes" for people who want to overclock the hardware), but concentrating on having a usable system when you finish timing.

The boot to a desktop in 5 seconds on an EEE PC 701 is still impressive.

Noodling through the author's site it's pretty interesting to see what they did to cut times, and they're honest about this just being about boot time.

Does anyone else do this kind of optimization, or does it carry too much risk or cost?


> and they're honest about this just being about boot time.

Reboot, not boot; boot was already a non-issue after booting into sh and removing most of the services.

The later posts deal with shutdown, mostly by removing sleeps... which are there to wait for lying hardware to finish flushing its caches. So some of TFA's fixes are really recipes for data corruption if you're not using a readonly FS (which TFA is, so it's no problem for his precise use-case).


Yeah, I was reading the bits where they disabled the sleep()s and I couldn't help but wonder how many evil little bugs and device problems those kludges covered up.


Hmmm. For hard disks, a call to fsync could be sufficient instead of the sleeps, no?


No. sync() is the system-wide fsync() so it makes no difference. The issue is that sync() tells the filesystem to flush its caches to storage, the FS tells storage to get everything in permanent storage and the storage device lies its pants off and says everything is stored as soon as it's hit the controller or the write cache.

It's not like the layers above can just go "nuh-uh, tell me when it's actually really written for real", if the storage device is going to lie it's going to lie every time.
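One blunt workaround, if you suspect your drive's volatile cache, is to turn the on-drive write cache off entirely and eat the performance hit (a sketch; assumes an ATA disk at /dev/sda and hdparm installed):

    hdparm -W 0 /dev/sda    # disable the on-drive write cache
    hdparm -W /dev/sda      # query the current write-cache setting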


The way the SCSI layer is implemented in Linux, fsync() (for that matter O_SYNC I/O) doesn't really flush data to the non-volatile platter on disk. Too much of a performance hit if it did that all the time.

http://thecodeartist.blogspot.com/2012/08/hdd-filesystems-os...



That blog post is from 2005. There was a major rework of how the block layer handles flushes back around 2010 and I'm pretty sure the issue he was having with fsync not being reliable has been resolved.


To go into the details: essentially, it's the AHCI (SATA) driver that handles two use cases differently.

The more common is the case where there is an additional VFS driver between the app attempting sync I/O and the AHCI driver, which simply issues an asynchronous I/O command to the disk and returns immediately. The new data is guaranteed to be on the HDD but NOT guaranteed to be written to the non-volatile platter of the HDD. Data is often still in the HDD's internal cache, waiting to be written to the disk platter.

The 2nd case (very rare) is when the application attempting sync I/O opens the HDD in raw mode, i.e. opens the block device directly (without any VFS layer in between) with O_SYNC. Now, following each disk write, the AHCI driver issues a CMD_FLUSH to ensure that even the HDD cache is immediately flushed to the platter. As this eliminates any chance for NCQ to kick in, the performance drops by an order of magnitude, but data integrity is ensured.
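You can see the gap between those two paths from the shell with dd (illustrative only; oflag=direct,sync is the synchronized path, and /dev/sdX must be a scratch device whose contents you can destroy):

    # Buffered write through the VFS: returns long before data hits the platter.
    dd if=/dev/zero of=/tmp/testfile bs=4k count=1000
    # Synchronized write to the raw block device: flushed through the drive
    # cache after each write, typically an order of magnitude slower.
    dd if=/dev/zero of=/dev/sdX bs=4k count=1000 oflag=direct,sync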


If boot time is an issue, TinyCoreLinux has some of the fastest boot times and runs completely out of RAM, on x86, x86_64, armv6 (Pi), and armv7 (GK802, Allwinner A10).

[1] http://tinycorelinux.net/


This guy has been doing some work on getting a Raspberry Pi to boot fast: https://github.com/gamaral/rpi-buildroot

Demo: http://www.youtube.com/watch?v=4Fjfqz6FxC8 It seems to be 3 to 4 seconds.

It is however for a specific use and I don't know how versatile it is.


There's a great case study and documentation at eLinux.org.

This guy stripped a Renesas SH7724 boot-to-UI from 19.44 seconds down to 0.77.

http://elinux.org/images/f/f7/RightApproachMinimalBootTimes....

The tl;dr is that there's a lot in the kernel you can disable or strip away if your target system doesn't need it. The section on optimizing your application for block-oriented storage (like NOR flash) is a great one too.
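The kernel part of that is mostly Kconfig pruning; something like the following is the usual first pass (a sketch; option names are from mainline Kconfig, and what you can safely drop depends entirely on your target):

    cd linux/
    # build everything in, optimized for size, no module loading at boot
    scripts/config --disable DEBUG_KERNEL \
                   --disable MODULES \
                   --enable CC_OPTIMIZE_FOR_SIZE
    make olddefconfig && make -j"$(nproc)"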


Crowdfunded contest?


Kickstarter?


I recently started using Linux again and installed Arch Linux. This was the first time I had used it since the init system was switched to systemd and its speed is impressive. Both booting and shutting down are very fast and I really like the journal feature.

That said, when I installed a virgin system on Monday, I was sad to see that systemd is creating a .local directory in the home directories of users (even root). This is on a system with no Xorg whatsoever. Call me old fashioned, but I don't think an init system should be creating directories in a user's home directory.


Got it running in one try on my BeagleBone Black following this: http://archlinuxarm.org/platforms/armv7/ti/beaglebone-black

Though I did have to screw around with uEnv.txt for a while to get it booting seamlessly from eMMC rather than an external SD card. This has since been fixed (as of 9/21). [1]

BTW, it's much, much smaller and faster than the distro the BBB comes with, Angstrom. Arch boots in a quarter of the time and also has many more up-to-date packages. Embedded nginx stack ftw!

[1] https://github.com/archlinuxarm/PKGBUILDs/issues/554

[EDIT] sorry, responded to wrong post. should have been this sibling post: https://news.ycombinator.com/item?id=6521531


Enforcing the XDG data path via systemd seems weird.

http://standards.freedesktop.org/basedir-spec/latest/ar01s02...


I don't have access to an Arch system to test, so: are you sure it's systemd that's doing that? It seems more likely to me that it would be from /etc/skel/, but I could be wrong...


It certainly is, although I will check skel at my next chance. I also found a bug report about it (it was causing some kind of conflict with MySQL) here:

https://bugzilla.redhat.com/show_bug.cgi?id=1012842

In any case, I know it is from systemd because if I dig down into the directory, it eventually ends in a dead systemd symlink to a directory in ~/.config, which doesn't yet exist. I think it is supposed to provide the ability to add and run end-user units, but I am not sure.


Is Arch Linux still near impossible to install flawlessly on first try? Reporting in @ 7 tries here before I finally got it working.


Wait really!?

I hadn't installed a linux distro that wasn't a *buntu since Gentoo ~8 years ago when I started a new job a couple of months ago. They gave me a Lenovo laptop, and when it came to picking a distro I thought I'd try Arch.

I just followed the beginner's guide[1]. Apart from the first boot manager I tried not working right (UEFI is a new experience for me), everything just worked as easily as you'd expect. Wifi worked ootb, graphics acceleration was as easy as installing the right driver package. For anything else I wanted to install (e.g. xfce) I just read the wiki page for that particular package on the Arch site, and did what they said.

I was actually blown away at how easy it was to install a command line "do it yourself" distro. I'm guessing from your comment that's not the standard experience then :-/

[1] https://wiki.archlinux.org/index.php/Beginners'_Guide


I've installed it about 7 times, and never had a problem. Haven't had to deal with drivers, though, as I always use it on server only.


In my opinion: Far from it, I find it hilariously easy. I've done it some 20 times at least but I got into a Gnome environment flawlessly the first time around (following the Beginner's Guide.)

The majority of those installs have been on my main computer (a laptop) because I couldn't find the source of my issues with X not initializing. You could argue full re-installs are overkill, but I say no.

Fun fact: it turned out to be a fault with GDM (I use Gnome Shell), and I finally fixed my pains with a total of three commands.


I've recently installed Arch on two different laptops.

The first one was an old netbook. I hadn't done an Arch install in a long time (I've been spoiled by the simplicity and convenience of the Wubi installer) and wanted the learning experience. I also wanted the HDD to be encrypted using dm-crypt and LUKS.

The main problem I had was wrapping my head around the various partition schemes. This was made more difficult by my insistence that the HDD be encrypted, which meant I needed a separate /boot partition along with the special 1MB BIOS boot partition required to use GPT with BIOS (I missed that in the instructions and it tripped me up for quite a while). Altogether, it took three fresh installs before everything was exactly the way I wanted it, but the last one was quite quick and was really only a fresh install because I wanted to make absolutely sure I had everything nailed down.
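For anyone else tripped up by this, the easy-to-miss step is that tiny BIOS boot partition. A sketch with sgdisk (destructive; device names and sizes are illustrative):

    sgdisk -n 1:0:+1M -t 1:ef02 /dev/sda     # 1MiB BIOS boot partition for GRUB
    sgdisk -n 2:0:+200M -t 2:8300 /dev/sda   # unencrypted /boot
    sgdisk -n 3:0:0 -t 3:8300 /dev/sda       # rest of the disk: LUKS container
    cryptsetup luksFormat /dev/sda3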

The second laptop was a midrange, two-year-old HP "media"-style laptop. With my new knowledge, partitioning and installing was easy even while maintaining an existing Windows install. But this laptop has both Intel and Nvidia graphics shared via PowerXpress, whereas the netbook has simple Intel graphics. I still haven't taken a stab at the proprietary drivers, but using the open source intel and nvidia drivers in tandem (after some fumbling around; it seemed counterintuitive to simply install both drivers side by side) it works well enough for what I'm using it for. I don't really relish the prospect of installing the proprietary Nvidia drivers, though.

I've probably spent 20 hours installing Arch on both of these machines but I've learned a lot in the process. It's been good.


They used to have a decent menu-driven installer. Why they threw it out and said, "here's zsh and a wiki page, go install," is beyond me (not to mention the default environment of your installed system is pretty different from the environment on the live disc). Even OpenBSD is easier to get up and running.

That said, I still contend that pacman is hands-down the best package manager in existence. I just prefer to use it on Frugalware nowadays.


Honestly, having used both the old installer and the new procedure dozens of times each, I really do prefer the wiki page. It's just a lot more flexible and comfortable for a shell user. It also exposes a number of config files that, as an Arch user, you should be aware of. In that way, I find it conforms better to the "Arch Way"[1] than the system it replaced.

Not to say that it doesn't have its downsides, like scaring off new users.

[1]: https://wiki.archlinux.org/index.php/The_Arch_Way


I think it would be fine if they gave a little guidance on the disc itself. Maybe little notes above the prompt that tell you what you should do next in the installation process, or even just a notice that says "see /usr/doc/INSTALL for instructions". The first time I attempted to install Arch after the installer was dropped, I had no idea what to do and had to find another computer to bring the wiki up. Some people might not have a spare computer to bring the instructions up, and they might not have a printer either.


If you 'ls' in the first directory you are dropped into, you will see INSTALL.txt. You can use cat, less, more, vim or nano to read the document. You can use Alt+F2 to switch to a second tty and install away.


Good to know! I still hold that it would be helpful if there was some indication that the file was there. Neither the Installation Guide nor the download page mention it at all.


Feels like the developers said, "hi, we're starting to get too many of those pesky users, let's make each of them run a bunch of obscure commands just to get started".

Well, I'm not being fair here, the commands are not really obscure (except the ones they made up), and I'm sure they had a reason. But I just recently had to reinstall Arch after many years, and found the new "installer" strange and kind of daunting compared to the old one.


> Why they threw it out and said, "here's zsh and a wiki page, go install," is beyond me

Simple, because no one wanted to maintain it. I don't miss it, either.


Yeah, I'm not sure why they got rid of the menu-driven installer, and zsh sucks IMO. I can still live with the installation process though; it boils down to: partition your disk, set up filesystems, mount /mnt and /mnt/boot (for my desktop), use pacstrap for the base system, install a bootloader. After one installation with the manual process you can still install Arch in 30 minutes, plus you can thank it for somewhat forcing you to understand how you are setting up your system.
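Compressed into commands, that flow is roughly the following (a sketch; device names, mountpoints and the exact package list have changed over the years):

    mount /dev/sda2 /mnt                       # root filesystem
    mkdir -p /mnt/boot && mount /dev/sda1 /mnt/boot
    pacstrap /mnt base grub                    # install the base system
    genfstab -U /mnt >> /mnt/etc/fstab
    arch-chroot /mnt grub-install /dev/sda     # plus grub config, users, etc.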

Pacman is really the best. I had to be reminded of that recently when I had the luxury of dealing with a Ubuntu system. Granted I don't have to deal with that system everyday but having to use apt, aptitude, dpkg to accomplish different things? What a fucking mess.


I've been living my life mostly using pacman with Arch Linux, and in the last year I experienced the Ubuntu/Debian way for the first time... Trying to get around a dozen apt commands and other tools is truly painful.

But it's probably just me, having gotten used to a simple and easy life with pacman... There was a time I even had to create an Ubuntu package; well, don't get me started, I cried a little and lost around 3 days.

The last time I installed Arch was before they removed the menu-driven installer, and it sure had some problems and bugs, so even though I have not tried it yet, I think I prefer this way of installing. It's cleaner and simpler, so better.


apt vs. pacman: Could you elaborate a little? What's your main reasoning for this?

I have tried Arch on three machines (old netbook, laptop, desktop) and all went fine during installation (all LUKS-cryptorooted). What made me mad was when some stuff broke on two of them due to infrequent updates this spring (I didn't use the netbook or desktop for a longer time). Alas, I can't remember the details any more.

Made me switch back (to Debian). I almost never use aptitude, and dpkg only for low-level stuff (e.g. when there are errors). Apt alone is sufficient in almost all cases.


Depends on your hardware, I suppose. I got it working the first try, but had the unfair "advantage" that I already had experience installing Ubuntu on a laptop with SLI before the Nvidia drivers included support for it, and had also run the Arch Linux menu-driven installer before doing it the manual/script-based way. The nice thing is that, as long as you keep updating, you don't really have to reinstall.


If you want to try Arch with an easy install, have a look at Antergos (www.antergos.com). Sorry to self-promote; I do some work on the team when I can, especially on the installer. I'm very proud of it, and of the team. Once installed, although it does install some extras (drivers, codecs, etc.), you are left with a vanilla Arch system. It does, however, take away from the fun of your first successful Arch install :).


Since the last time I had used Arch, not only had they added systemd, but they had also killed AIF (Arch Installation Framework). Most of it is easy, like partitioning, but I had to figure some stuff out that I really didn't want to figure out, like the command for getting Grub working.

But between the official and unofficial install guides, I got it working.


Describe your desired setup. For a simple xorg / xmonad / emacs / chromium[1] it's a few traditional commands and a few arch specific commands[2].

[1] single GPT partition, no lvm, no encryption, no fancy devices.

[2] simple unix tools: *disk, mkfs.*, *chroot. Arch: netctl (network setup), pacman and a bit of systemd (not traditional yet ;)


You mean human errors, or do you mean that Arch breaks itself? If so, it might be worth reporting the problem. I used Arch a year ago or so (but switched to CRUX due to systemd), and I never had any problems that weren't me messing things up.


If it wasn't for the Arch guide for my specific laptop I found online holding my hand through every single step of the way there would be no chance in hell I would get it working. I feel like you need to be a linux guru just to get it working.

Yeah human errors that threw the entire process off.

However the journey was really fun, I learned a lot, and I highly recommend others try the same, as once you get it running, it really is amazing. My crappy netbook's boot time went from 2 minutes with Win7 to 14 seconds with Arch.


>If it wasn't for the Arch guide for my specific laptop I found online holding my hand through every single step of the way there would be no chance in hell I would get it working. I feel like you need to be a linux guru just to get it working.

Really? I find it pretty easy to find instructions to configure a particular device or install certain software. If you want to install a DE, just Google how to install a DE on Arch Linux, or if you have a Broadcom wireless card, you can Google how to install the wireless driver.

There are few laptop-specific instructions you'll need, and you don't need to be a guru since there are very good instructions available, especially in the Wiki.


It's more like the happy path from start install to finish install is one of a million paths. Throughout each installation I'd hit a new detour, which in turn led me even further astray, and so on, and so on, until I never got back on the happy path and had to restart. That was my situation.


Which specific laptops? I haven't required specific hacks since the Dell Mini 9, but I don't doubt you had platform specific issues.


I got it on my second try, once I found out I was supposed to use arch-chroot instead of chroot, or something like that.


3 tries here, although I broke some things by the time I rebooted the next day.


The shutdown loop in one of the blog posts there - sync(), then sleep(2) - has me worried he may get filesystem corruption at times under those circumstances. I could be wrong, but I recall that sync() will return immediately even though it's not done synchronizing the filesystem writes (for a filesystem that needs that). As such, the sleep(2) gives it some time to get that done.

Is that a correct understanding? If so, is that a reasonable risk I see (filesystem corruption at times)?


One would hope that the completion of sync() would mean the data is written out, except I recently read this horror[1] on HN:

Unfortunately, most consumer-grade mass storage devices lie about syncing. Disk drives will report that content is safely on persistent media as soon as it reaches the track buffer and before actually being written to oxide. This makes the disk drives seem to operate faster (which is vitally important to the manufacturer so that they can show good benchmark numbers in trade magazines). And in fairness, the lie normally causes no harm, as long as there is no power loss or hard reset prior to the track buffer actually being written to oxide. But if a power loss or hard reset does occur, and if that results in content that was written after a sync reaching oxide while content written before the sync is still in a track buffer, then database corruption can occur.

… part of me hopes there's a very special hell for the people making disks where the OS can never be sure if the data is safe or not.

[1]: http://www.sqlite.org/howtocorrupt.html


This problem is nowhere near as widespread as most people claim. While bugs do happen, and I can't speak for the SSD side of things, HDD manufacturers test their cache behavior quite thoroughly. This includes pulling the power immediately after flushing the cache to make sure the data made it to disk. 99% of people who report cases of HDDs "lying" about write integrity either have the write cache enabled or are not actually issuing a flush cache command due to OS-level issues.


Does anybody know if SSD is subject to the same delayed sync issue?


As with regular drives, it depends on the device.

Early Intel SSDs were known to be particularly prone to this issue: http://www.mysqlperformanceblog.com/2009/03/02/ssd-xfs-lvm-f... http://www.evanjones.ca/intel-ssd-durability.html


I've definitely seen `sync` itself waiting/blocking (especially if you use fuse for something network based and disconnect the cable first), but whether it's guaranteed or not... that's an interesting question.

Edit: after some googling:

       On Linux, sync is guaranteed only to schedule the dirty blocks for
       writing; it can actually take a short time before all the blocks are
       finally written.  The reboot(8) and halt(8) commands take this into
       account by sleeping for a few seconds after calling sync(2).

       This page describes sync as found in the fileutils-4.0 package; other
       versions may differ slightly.
So it doesn't look like the writes are guaranteed to take place. Just a best effort + wait + pray :)


Interesting. http://linux.die.net/man/2/sync says

    According to the standard specification (e.g., POSIX.1-2001), sync()
    schedules the writes, but may return before the actual writing is done.
    
    However, since version 1.3.20 Linux does actually wait. (This still
    does not guarantee data integrity: modern disks have large caches.)
So it seems like the sleep(2) is there to give the disk enough time to write the cache data.
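If you wanted to be more explicit than sleep-and-hope, each layer can be flushed by hand (a sketch; assumes util-linux and hdparm, and a single ATA disk at /dev/sda):

    sync                            # flush kernel buffers down to the device
    blockdev --flushbufs /dev/sda   # flush the block layer's buffers
    hdparm -F /dev/sda              # ask the drive to flush its own write cache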


Pity no-one thought to comment this little piece of reboot/halt magic - 'twould have saved a bit of digging.


I changed an OS (before Linux) to sync when idle. So by the time you could type a shutdown command, it was already sync'd. I don't know why more OSs don't do that.
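Linux's periodic writeback already approximates this; you can tighten its timers so that almost nothing is left dirty by the time you type the shutdown command (the sysctl names are real, the values illustrative):

    sysctl vm.dirty_writeback_centisecs=100   # wake the flusher every second
    sysctl vm.dirty_expire_centisecs=500      # write back dirty pages after 5s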


Ancient Unix lore has it that you need to do 'sync; sync; init 6' in order to sync the buffers and reboot. Sync was supposed to only schedule a sync, but would block if another sync was already running. I have no idea how applicable that lore is to modern 2013 Linux... I'd definitely like to see more careful research than just removing the sleep(2) and declaring victory, addressing whether that sleep was simply vestigial or not...


This example used a read-only squashfs filesystem, making sync() irrelevant.

Also, counting on sync() to write everything within two seconds seems problematic as well.


It's not sync() which is the problem. A correctly implemented `sync` should flush all writes to permanent storage before returning.

The issue here is storage devices may lie about it[0]. The 2s sleep is problematic, but IIRC devices don't (and have no way to, and would not anyway) report when the data is actually written to permanent storage, so you can't do much besides waiting a bit and hoping for the best.

[0] http://www.sqlite.org/howtocorrupt.html


Storage devices actually do have several ways of reporting when data is permanently stored, and Linux makes use of them. However, some storage device manufacturers found that if they lied and claimed data was permanently stored when it wasn't quite yet, they got better benchmark results.


> Storage devices actually do have several ways of reporting when data is permanently stored

Which `sync` uses. My comment was probably unclear, but the point I was trying to make is if they're lying to sync they're probably not going to provide other accurate ways to get the information.


> if so, is that a reasonable risk i see (filesystem corruption at times)?

It is not a reasonable risk on a normal system, but TFA uses a readonly filesystem (squashfs) so it's not an issue: there's no data to be written.


# me> systemd-analyze

Startup finished in 2.480s (kernel) + 623ms (initrd) + 570ms (userspace) = 3.674s

Fedora 18 on Dell Precision M4700

Not quite 0.25 seconds though ;-)

Here's one way to get there: http://www.harald-hoyer.de/personal/blog/fedora-17-boot-opti...


I've noticed that dhcpcd always gets the lease, then broadcasts on the network to see if anybody else has that address, waits for a timeout, and then proceeds. This seems like an unnecessary waste of boot time when you're on uncomplicated home networks; the DHCP server can be trusted to give a fresh address almost always. Doubly so if you've set your AP to static MAC address mappings.

Is there a way to shut that off, or a Linux DHCP client that doesn't do that?


Switching to dhclient instead of dhcpcd will usually help that. Once it has an address it will go into the background during that check if the ip address is actually available. You can stop that by running it with -d if you're debugging a bad network.


That check is actually recommended by the RFC, but using the --noarp switch should disable it.
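i.e. something like this (interface name assumed):

    dhcpcd --noarp eth0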


Amazing! I would like to see such an experiment with "boot to browser" on real hardware.


I came here to post something like this as well. I'm less concerned with bootup time and more concerned with how much time it takes to get to actually using the system. It's still in the 5-10 minute range for me.


Can you elaborate on that? I've never seen bootup on Linux take that long unless something goes wrong (e.g. fsck fails).


That seems weird. My PC is rather slow for a dev machine (+ a few lowkey games): i3 @ 3.3GHz, 16GB DDR3 RAM, GTX 550 TI, Samsung SSD 830 for the OS. Windows 8.1 Professional. I just took some non-exact measurement (stopwatch) from pressing the power button:

26s to login screen, another 13s from pressing enter after login to having Chrome show me a useable Outlook Web Access.

And I haven't performance optimized anything. Steam, 2 VPN clients, Trillian and Directory Opus all load on login.


Are you including the time it takes to open whatever applications you need to be productive?


Absolutely. GNOME boots up very quickly, but the churn that happens right after boot slows opening applications to a crawl.


What on earth takes so long?


Does Links count? :)


Fastest "reasonably complete" boot I've ever seen was on a BeOS setup I had on a 600 MHz laptop (~2000?). ~4 seconds to the login prompt, and another 1-2 seconds to finish "thinking" after login.

Man, that thing was fast.


I also seem to recall that the somewhat legendary QNX demo floppy (a 1.44 floppy image that booted to a full graphical desktop) was also pretty fast to come up. I can't recall exactly how fast though.

On a side note, I somewhat recently booted up our old Amiga 2000, running an upgraded CPU (68020 I think, maybe 10 MHz) -- booting off its ancient 40 MB SCSI HD -- and I was surprised how slow bootup was. Can't remember that I gave bootup time much thought when the thing was new (then again, most reboots were done to boot into a game off of floppies...).


:)


I didn't know about the BeOS booting process, but I stumbled upon the good old demos[1] recently; I remember how amazed I was back in the day. I'm still amazed, more than before. Sad.

[1] http://www.youtube.com/watch?v=BsVydyC8ZGQ


BeOS was super responsive. I had the opportunity to test the last release on some early MMX I had and was blown away by how responsive it was compared to Windows, or even to my Slackware installation.


In contrast, my win98SE desktop took 4'30" to boot back in the day. I know this because I decided to time it after yet-another-BSOD-while-gaming...


Here's the real-world thing we need: start enough services to enable networking, cron, one WSGI server and a database. I need to do this next month for a deployment.
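On a systemd distro, a first cut is to pin the default target down and whitelist just those units (a sketch; unit names like cronie/gunicorn/postgresql vary by distro and are assumptions here):

    systemctl set-default multi-user.target    # no display manager
    systemctl disable bluetooth.service cups.service avahi-daemon.service
    systemctl enable systemd-networkd.service cronie.service \
                     gunicorn.service postgresql.service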


While looking for this: http://www.youtube.com/watch?v=-l_DSZe8_F8 (boot to an embedded real-time application in around 1 sec), I found this more recent desktop talk: http://www.youtube.com/watch?v=aVcfjs02Srs (Integrating systemd: Booting Userspace in Less Than 1 Second - ELCE 2011)


Experiments like that might benefit server VMs most. How soon your nodes go up after a reboot or after spinning up a new one can matter. But, of course, you'll not be cutting corners as the poster does; you'd need a usable system.

On a desktop or laptop, you probably use hibernation, and your biggest time sinks are typing your password and having your network connection go back up. Rebooting these things is rarely needed.


Apple "cheats" on that regard, getting your wifi back up after sleep.

http://cafbit.com/entry/rapid_dhcp_or_how_do

http://news.ycombinator.com/item?id=2755461 <-- good discussion


Booting Linux to the graphical server in 0.25 seconds?


Short answer: No.

> qemu-kvm -nographic -kernel kernel -boot c -drive file=./kvm-squashfs,if=virtio -append "quiet root=/dev/vda console=ttyS0 init=/sbin/halt"


No, booting to a barely usable system :)


libguestfs can boot & shutdown a small appliance in around 3½ seconds. However we have to use the standard distro kernel, distro udev and distro KVM (for security policy reasons we cannot ship our own). We could do a lot better if we could custom compile everything.

http://libguestfs.org/guestfs-performance.1.html


Chrome OS boots Linux really fast and gives you a usable system within seconds :)


For extremely small values of "usable"...

I attended Google I/O this year and received a Pixel. It's a really nice piece of hardware, but as a developer it's worthless. I saw Google presenters using a lot of Macs and a ThinkPad running Linux. Not a single Chromebook.


With secure shell, you can make it significantly more useful. Of course, you still need a box to SSH into.

https://chrome.google.com/webstore/detail/secure-shell/pnhec...


/rant/ Old news: someone on HN assumes everyone is a developer, and in the meantime 99.9% of the market is not.


For comparison, does anybody know what kind of boot times to expect with CoreOS?


I would be curious how to achieve similar or better results on other distros, particularly Ubuntu.


At this point, I don't think the distro matters much, as there is pretty much nothing there except the kernel, udev and the very first steps of startup.


There's the famous "5 second boot" for Fedora on an EEE PC 701. That's a weak machine with a small, old SSD. It's an "honest" boot time - from power on to desktop up with idle CPU and disk.

(https://lwn.net/Articles/299483/)

Here's an article trimming boot time of Fedora 17 to 3 seconds.

(http://www.harald-hoyer.de/personal/blog/fedora-17-boot-opti...)

There's some overlap between the two different approaches.

If you wanted some esoteric hardware, you could make an always-powered RAM disk.



