Inventing Chromebook (jeff-nelson.com)
222 points by jtemplin on Feb 13, 2013 | hide | past | favorite | 77 comments


When designing the OS, he "took off the table the largest performance bottlenecks in the operating system: File I/O" by moving the entire OS to RAM, so the new OS couldn't really run large (i.e., powerful) desktop applications, which in turn meant that the new OS would have to use "webapps to replace any and all functionality normally found on a desktop." So, in the end, for powerful desktop applications that need a lot of I/O, he ended up replacing file I/O with... over-the-Web I/O that is slower by orders of magnitude.

I like Chrome OS, and want it to be successful, but for powerful desktop functionality with heavy I/O, a traditional desktop OS is faster.

There's always a trade-off.


Per the blog post itself, all operations ran locally in RAM, except for some asynchronous backing up to permanent media. He didn't mention network I/O at all because, most likely, the web apps were local (see: Chrome packaged apps.)

In any case, Chrome OS's current architecture is completely different from the version he laid out. It doesn't run completely in RAM; instead, it relies on SSDs in most cases to reduce file I/O times.


> He didn't mention network I/O at all, because most likely, the web apps were local (see: Chrome packaged apps.)

This pre-dated Chrome; see the part about it originally being based on Firefox.


I didn't say they were Chrome packaged apps, but that they were like them. He even says as much:

"That's how the seeds of the webapps on the Chromium desktop, albeit originally written in HTML and running on Firefox, were planted."


I agree with you, but there is a flip side. It's OK for powerful apps to use the disk; however, by your definition a browser is not a powerful app, nor is a word processor or an email client. His design makes sense in a very specific market, but I would be very interested in seeing an OS with a hybrid approach.


I'm working on an OS with a hybrid approach: http://www.projectmeadow.com

(The site isn't as fine tuned as I'd like yet...but interested in your thoughts. Thanks!)


Hmm, looks promising... I just subscribed to see your updates! :)


Yes, there are a couple of performance trade-offs, both good and bad.

First, as you observe, file I/O is faster than network I/O on a call-for-call basis. However, current disk-based operating systems may perform thousands of file I/O operations per second when performing seemingly trivial tasks, dramatically slowing down the perceived speed of the operating system. Webapps do not perform thousands of network I/O operations per second. Instead, webapp authors go to great lengths to maximize the perceived performance of the app, even on very slow network connections. For example, Gmail performs asynchronous background network operations, so the user rarely perceives any sort of network delay, even on a very slow connection.
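
That trick of overlapping background requests can be sketched in a few lines of Python (the 100 ms delay is a made-up stand-in for a network round trip, not a Gmail measurement):

```python
import time
from concurrent.futures import ThreadPoolExecutor

LATENCY = 0.1  # hypothetical 100 ms network round trip

def fake_network_call(_):
    time.sleep(LATENCY)  # stand-in for waiting on the network
    return "ok"

# Sequential: the user waits through every round trip.
start = time.monotonic()
for i in range(5):
    fake_network_call(i)
sequential = time.monotonic() - start  # ~0.5 s

# Overlapped/background: the round trips run concurrently, so the
# perceived delay is roughly one latency, not five.
start = time.monotonic()
with ThreadPoolExecutor(max_workers=5) as pool:
    list(pool.map(fake_network_call, range(5)))
overlapped = time.monotonic() - start  # ~0.1 s

print(sequential > overlapped)  # → True
```

The app feels fast not because any single request got faster, but because the waiting happens off the user's critical path.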

Another significant trade-off of running webapps instead of locally installed applications compiled to machine code: webapps are usually written in some combination of Flash, ECMAScript, or HTML5, all of which are much slower than machine code. However, webapp authors also go to great lengths to ensure their apps perform very well, despite the slow language interpreters.

In the end, if the webapps are written well, they absolutely can feel faster than local apps on a traditional operating system.


Isn't he replacing disk I/O with network I/O? And unless you are talking about SSDs, isn't the network faster?

http://serverfault.com/questions/238417/are-networks-now-fas...

Doesn't google's search engine demonstrate this?


It seems unlikely that a Chromebook would have a gigabit link within Google's datacenters.


That wasn't the point. Is the network really "slower by orders of magnitude" than disk? Why is using the network such a silly idea, considering network speeds are more than keeping up? Was the reference just referring to protocols? I was hoping for some accuracy: something to back up the statement.


No, it was referring to bandwidth and latency.

30ms latency is basically the best-case scenario you can expect for consumer internet (with a nearby datacenter and all that.) 10ms latency is about the worst-case latency for a rotational disk. And almost no consumer internet outside of Japan or South Korea is going to get more than 2 MB/s of real bandwidth, compared to 100 MB/s of real bandwidth from rotational hard drives.

Throw wireless or uploading into the mix (who's going to use a Chromebook over ethernet?) and it's even worse.
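
Plugging the rough numbers above into a quick back-of-envelope calculation (assumed order-of-magnitude figures, not measurements):

```python
# Rough figures from the comment above: consumer internet vs rotational disk
NET_LATENCY_S, NET_BW_MB_S = 0.030, 2      # 30 ms round trip, 2 MB/s
DISK_LATENCY_S, DISK_BW_MB_S = 0.010, 100  # 10 ms seek, 100 MB/s

def transfer_time(size_mb, latency_s, bw_mb_s):
    # Pay the latency once, then stream at full bandwidth
    return latency_s + size_mb / bw_mb_s

net = transfer_time(10, NET_LATENCY_S, NET_BW_MB_S)     # ~5.03 s
disk = transfer_time(10, DISK_LATENCY_S, DISK_BW_MB_S)  # ~0.11 s
print(f"network is ~{net / disk:.0f}x slower for a 10 MB file")
```

For tiny payloads the latency term dominates instead (30 ms vs 10 ms, only 3x), which is part of why webapps that send small asynchronous requests can still feel responsive.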


> Is the network really "slower by orders of magnitude" than disk?

Congrats on asking this question, because it's a window into a foundational fact of life in computer programming and technology:

http://en.wikipedia.org/wiki/Memory_hierarchy

You can feel good about gaining important knowledge. (Advanced exercise: extrapolate from the implications of not having known one piece of fundamental knowledge, then take action.)


> ...unless you are talking about SSD isn't the network faster?

It was my understanding that most or all of the Chromebooks have (small) SSDs.


He replaced synchronous file I/O with async network I/O.


I worked this way for years in the 80s, when my main platform was the Atari ST. I had a whopping 4MB of RAM and would load the OS from a floppy disk into RAM, after which it would create a decent-sized RAM drive that I would periodically save my work to. Compiling and running programs was nearly instantaneous, and everything ran smoothly and silently (the ST didn't even have a fan). At the end of my session, I would save whatever I wanted to keep to disk before shutting down. I never even bothered to buy a hard drive; it was such a pleasant experience. Moving on to other operating systems was an abrupt shock when the time came, leaving me somewhat nostalgic for those days.

I still prefer local storage, though, without a forced dependency on the cloud.


4MB of RAM in the 80's! Nice to have you aboard Mr. Jackalope Rockefeller.


The Atari 520 STe and 1040 STe had four RAM slots. 4MB was doable, using four modules. The developer version (4160 STe) came with 4MB RAM preinstalled. [1] [2]

I can't find a price list for those models, but Atari sold 4MB memory upgrades for its ATW800 workstations at GBP 1450 [3].

[1] http://www.old-computers.com/museum/computer.asp?c=24&st...

[2] http://info-coach.fr/atari/hardware/memory.php

[3] http://atw800.complicated.net/pricelist.html


Yes, 4 MB was really a lot in 1990, so I guess his "eighties" were near the end of the decade:

http://www.jasondunn.com/remember-computers-from-the-1990s-3...

I'd still like to know how much he paid for 4 MB back then; he must remember that, since it wasn't cheap.


The Amstrad CPC 6128 had an 8-bit Z80 CPU, and its OS would only touch the first 64KB of its 128KB of RAM, even across restarts.

Somebody had the idea of using the second 64KB as a RAM disk to store an exact copy of the main RAM; that copy would survive a CPU crash, which made it great for development.

To this day I do not know why they put 128KB in a machine with an OS designed for the older 64KB models. I suspect it had to do with legal reasons (lower taxes for higher-memory models?). The fact that it also ran a rather dated CP/M might also have been a factor.


The 80s were a crazy time. Hardware was dropping in price in a way no one expected.

I remember reading an interview with Nolan Bushnell about the Atari 2600. He said by the time it launched, RAM prices fell by 80%. They could have put in tons more RAM, but during the design phase no one saw it coming, so they designed with expensive RAM in mind.

He also complained about his parent company's hesitation to ship newer models of the Atari 2600 with newer equipment. I guess they thought of it like an appliance: you make one, it lasts 10 years, and you buy a new one when it dies, like a washing machine. The ultra-fast pace of technology threw a lot of old-world business ideas out the window.

I suspect that Amstrad was in the opposite situation. The hardware got so cheap so fast, the software guys couldn't keep up.


Interesting, very interesting.

Until now, I believed the point was just to make a cheapass machine that ran from the web, not that the intention was to make it fast.

Unfortunately, it ended up as cheapass machines that run from the web, not as good machines that are fast.

The day that I can use it 100% offline, I will jump in ;)


They do run Chrome OS very fast. My XE303 recovers from sleep in an unnoticeable amount of time, and boots from cold in mere seconds. The browser starts pretty much instantaneously.

If you want to use an in-memory Linux offline, you might look at things like Puppy and the various other old-school live distros. Several of them pioneered in this space, though they didn't stick to lightweight stuff in quite the same way as Chrome OS.


I think Chrome OS is significantly faster than Chrome on Windows 7 on the same hardware. But of course you can't compare a $1000 Windows machine with a $250 Chromebook, because obviously the $1000 machine has much faster hardware to make up for the difference. At the same time, I don't think a $1000 Chromebook would be worth it, as the bottleneck at that point is not the CPU but the network connection.

And do you really expect to run "web apps" 100% offline all the time? Then you're missing the point. Chromebooks are for an all-connected world, not for one where you still do most stuff offline.


Actually the (X)(Chr)Ubuntu performance on a new ARM chromebook is pretty damn awesome for such a cheap, thin, light machine. IMHO. The hardware isn't at all bad.


Agreed. Really wish it had more RAM though (ironic given the content of this article, heh.)


Tiny Core (distro.ibiblio.org/tinycorelinux/) is another Linux distro that runs from RAM, and you can install general software more easily than in Chrome OS. Worth a look if you're interested in speed but not web apps.


Puppy Linux is another distro that does the same. It's one I've used before and is actually rather pleasant, and seems to "just work" for most anything. (including an awkward Windows-only USB wireless network interface) The interface can be a bit awkward though, admittedly, and it doesn't have everything available in its repositories, but it still works blazing fast.


I've used Puppy quite a bit, and didn't realise it was running in RAM. I have to say though, that I still think Tiny Core has the edge in terms of speed.


There are quite a lot of linux distros that run (or can run) from ram (https://en.wikipedia.org/wiki/List_of_Linux_distributions_th...). I've used porteus(http://porteus.org/) and slax(http://www.slax.org/) and although the experience was interesting, some things needed quite a lot of effort to work.


The Wikipedia page is pretty interesting, particularly the link at the bottom to Windows PE, a live version of Windows that's available from MS. I didn't know that existed.


I looked at the patent referenced in the article (http://www.google.com/patents/US8239662) and was disappointed to see nothing that would appear non-obvious. It looks like it would be pretty easy to infringe on this patent if one were building a pre-boot configuration management system with existing free and open source tools.


I cried a little inside just reading that patent name. I know that's not necessarily how patents work, but patenting a network bootloader in 2009? PXE and TFTP have been around much, much longer.


If you look at the claims, which define the invention, you see that it's quite specific: the OS has to be a [binary] image, preferences have to be an image, a loader combines them to make a full image, and the full image is removed from the device when you log off/power off.

The removal of the image is in method claims 8 and 17 too.

The details will depend on general interpretation and on the detail in the specification, but claim 1 appears to contradict the actual operation of the device: col. 7, li. 20 says a cached image can be used when the network is down (if you cache it, you haven't removed it). That element appears to rely solely on col. 6, li. 50 in the specification.

So the patent only applies to a device that gathers two images from a network, combines them into a full OS, and removes the full OS from the device at logoff. That strikes me as a pretty weak patent [except maybe in some security applications]: no local caching? It seems you can avoid the patent simply by caching, or by failing to remove, the complete OS image (including preferences) when you log out.

Obviously I've only made a cursory analysis based on a very brief view of the US B doc.


Fascinating post, but I have to wonder... 45 seconds to restart Firefox? And just to clear the cache? Assuming he did that "hundreds" of times a day (say, 200), that's 2.5 hours a day just waiting for Firefox to spin up and down.

What could possibly cause it to take that long? Was there no way to clear the cache without restarting the whole browser?

Unless I'm missing something, this seems very strange to me.


Back in 2006? That's a bit unusual but entirely realistic for a heavy user. The binaries were not optimized for loading from disk, because no one was anal enough to do that until 2010: https://blog.mozilla.org/tglek/2010/04/12/squeezing-every-la... The history and bookmarks were completely overhauled in 2007: https://wiki.mozilla.org/Places#Timeline_and_History But if you had a lot of tabs open and a bunch of extensions, it could still take nearly 30 seconds to do a warm start in 2010, according to this forum thread: http://forums.mozillazine.org/viewtopic.php?f=23&t=20509...


This was around 2006, which, IIRC, was pretty much the peak of Firefox's startup bloat issues. I'm also inclined to believe he was on an older machine, because these were also the IE6 days, and IE was definitely much faster to load than anything else (due to MS basically building IE into XP).


It's absolutely realistic. Firefox was very, very bloated in 2006, but we still used it because the next best thing was IE.


> First, Chromebook was initially rejected by Google management. In fact I wrote the first version as early as July 2006 and showed it around to management. Instead of launching a project, the response was extremely tepid. My boss complained, "You can't use it on an airplane." Actually, you could as, under the covers, it was still a bare-bones Linux distribution and could execute any Linux program installed on it.

It's interesting how many companies' first reaction to grassroots innovation is to kick it to the curb.

To quote Jonathan Ive: "while ideas ultimately can be so powerful, they begin as fragile, barely formed thoughts, so easily missed, so easily compromised, so easily just squished."


I think it would be better if Chrome OS were more like Lotus Notes, where a group of individuals can remotely collaborate on a document in the cloud or work offline and replicate their changes overnight for later editorial adjustment.


And I don't think anything should be more like Lotus Notes, including Lotus Notes.

shudder


You mean Google Docs?


I used to set up a ramdisk as a flat file for / and /usr, and have it load unionfs-style over the main filesystem after /sbin/init finished. This was around 2005. The system flew, and the only super-large applications I was using installed to /opt anyway, which was on a spinning disk (I think Eclipse was the only one).


I wonder if he would ever have done this if SSDs had arrived a few years earlier? In 2007 my hard disk was indeed the greatest bottleneck on my computer by an order of magnitude. Replacing my spinning platter with an SSD (around 2010) has all but eliminated this problem.

It's funny how innovation works: the advantages of a Chromebook are no longer linked to its original purpose, but we have found the incidental advantages that came along with solving the original problem so beneficial that it's now carving out a solid niche based on those alone. It makes me think of a detonator and an explosive: you need something to initiate an innovation, but that doesn't always have to be what sustains it.


Interesting also to note that Chrome OS inventor Jeff Nelson is currently CTO of a "stealth startup". http://www.linkedin.com/in/nelsonjeffrey


I have 8 GB RAM. Why doesn't the kernel load everything to memory the first time I use it, and have it available afterwards? Shouldn't the fs cache do that anyway?


If by kernel you mean Linux, then yes, it does do precisely that. Try this out on a large file:

   time cat bigfile > /dev/null
   time cat bigfile > /dev/null
Then wait a few minutes and try again. Linux won't evict the pages from the cache unless something else wants to use the space.


OS X does that. One problem: it's currently set up so that 8GB is not quite enough. It would be better for it to be just a bit less aggressive in caching and have much lower "swappiness." Right now, it's as though it's tuned to be responsive for a wide variety of casual use-cases, but far from optimal for users doing really memory-intensive work in just a few apps.


So the Linux kernel should cache more aggressively? It does not seem to keep everything in RAM for the second startup of the browser on my box.


It's funny. The first Linux distribution I ever used, way back in '05 on a terrible P2 laptop with 128MB of RAM, was DamnSmallLinux. It ran in RAM from what I can recall, at least off my USB drive (the disk ISO dd'ed to it), with a floppy as the bootloader to hit the USB :)

Those were the days. I also remember getting Slax to boot from the HDD, which it was never designed to do. Nostalgia...


There were also 1.4MB floppies with a barebones Linux that booted and ran completely in 8MB of RAM on a 386. You could remove the floppy after it booted and continue using Linux.


I have the same frustration with speed and the same proposed solution (keep all your live data in RAM, journal updates to disk). My netbook now has 1024MiB of RAM; my first Linux machine had 800MB of disk and 64MiB of RAM. Current laptop computers are powerful enough to keep a full working environment in RAM and never have to go to disk or even flash RAM for software. (Maybe for data if you have a large dataset, but not for software.)
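
A toy sketch of that "live data in RAM, journal updates to disk" idea, in Python (the class and file names are hypothetical, just to show the shape of it):

```python
import json
import os
import tempfile

class RamFirstStore:
    """Keep all live data in a dict (RAM); append every update to a
    journal file so the state can be rebuilt after a crash or reboot."""

    def __init__(self, journal_path):
        self.data = {}
        self.journal = open(journal_path, "a+")
        self.journal.seek(0)
        for line in self.journal:  # replay the journal on startup
            key, value = json.loads(line)
            self.data[key] = value

    def set(self, key, value):
        self.data[key] = value  # the fast path: pure RAM
        self.journal.write(json.dumps([key, value]) + "\n")
        self.journal.flush()  # the slow part; a real design would defer this

    def get(self, key):
        return self.data[key]  # reads never touch the disk

path = os.path.join(tempfile.mkdtemp(), "journal.log")
store = RamFirstStore(path)
store.set("doc", "draft 1")

# Simulate a restart: a fresh instance rebuilds its state from the journal
store2 = RamFirstStore(path)
print(store2.get("doc"))  # → draft 1
```

Reads and writes are RAM-speed; the disk only has to keep up with the append-only journal, which it can do asynchronously.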


It'll be years before webapps can supplant "native" PC/Mac apps. If Google really wants ChromeOS to be a third PC option, there needs to be a way to run "native" apps, and I'm not sure NaCl is the way to go.


The only thing I would like on my Chromebook is something like Cmus, so I can play music. Other than that, I am quite happy with it.


Really? They seem to be selling pretty well as it is. Why does there have to be a way to run native apps?


It could actually replace the PC or Mac for many more people.

By the way, can you quantify "selling pretty well?" Market share numbers, for example.


But then it would lose all the features that make it an excellent device, such as security, always being up-to-date and fast boot time.

It was the top-selling laptop on Amazon over Christmas. http://www.chromebookblog.com/2013/01/samsung-and-acer-chrom...


10% of Acer's sales.


From Wikipedia: By January 2013, Acer's Chromebook sales were driven by "heavy Internet users with educational institutions", and the platform represented 5-10 percent of the company's U.S. shipments, according to Acer president Jim Wong.

http://en.wikipedia.org/wiki/Chromebook#Sales_and_marketing

Can anyone provide market share numbers? I'm not looking for "kool-aid" marketing, but data driven decisions. The kind that Google does. For example:

http://www.nytimes.com/2011/04/24/business/24unboxed.html?_r...


Market share numbers are irrelevant; what matters is whether it has the userbase to support a viable app ecosystem. Which it is slowly getting.


Great read, but I'm not sure why he keeps using the terms Chromebook/Chrome OS. It was more the story of a Google OS, which someone else eventually turned into Chrome OS a few years later.


In iOS 5 that article was unreadable black on dark blue. Thankfully the "reader" mode worked and makes the text readable.


I wish he would contribute these kinds of speed-boosting changes to other Linux distributions.


It seems like most of the speedup is from running everything from RAM. If you're serious about never touching disk, adding something like "BOOT=casper toram" to a stock Ubuntu kernel line might do it easily.

Edit: OK for something practical this should work http://www.pendrivelinux.com/universal-usb-installer-easy-as... Just decline to give it any "persistent" storage, and point it at your hard drive (or a smallish <10GB boot partition) instead of a thumb drive. Does anyone know of a tool like this that will run on Linux?

Edit2: unetbootin seems to be a similar tool that is cross-platform.


I was hoping to get a boost running games in Wine; looking at it now, I don't think it would have a useful effect, since games would still need to use the HDD for assets and swap space.


Well that depends... if you have 16+ GB of RAM and don't mind waiting an extra couple minutes to boot, you wouldn't have to wait for textures to load from disk during the game! :)


SSDs are your friend :)


Maybe you want to try linuxfromscratch.org or Gentoo.


Did they want a fast Linux, or their "own" Linux? They are trying to get us to program systems now in the Go language, and web apps in their Dart language. They control search, and increasingly smartphones too; what's left? Own your OS and hardware.

Linux is just fine Google, stick with it.


> They are trying to get us to program systems now in the Go language

Basically everything on http://golang.org/ is licensed under either a CC Attribution 3.0 license or a BSD license. Go was intentionally designed to be a specification and not just a single canonical implementation. True to that, there are two mainstream Go compilers, one of which (gcc-go) is licensed under the GPL v3.

Aside from the fact that Google happens to employ some of the main developers of the language, I don't see how you can say that they 'own' it, since they've gone out of their way to make none of it proprietary.


What's wrong with creating your own programming language?


Nothing. My first thought was "what if Google writes their own operating system based on Go?".

It may sound silly but some are already hacking around this idea, as we can see in this thread [0].

* https://codereview.appspot.com/3996047/

* http://gofy.cat-v.org/

So, why not?

[0] https://groups.google.com/forum/?fromgroups=#!topic/golang-n...


I don't see the point. With flash disks now offering hundreds of MB/sec of read bandwidth and great random access latency, what's the benefit of running out of RAM? I can't say I ever felt a "need for speed" on my Air or from my iPad.

And web apps suck. I don't know if anyone told the Chrome OS team, but GMail and Google Docs are awful, and it seems like they get worse and weirder with every incarnation (at some point several years ago I found GMail quite tolerable, but they've fucked it up since then). I used to think Outlook and Word were bloated pigs, but give me Outlook 2010 any day. Why would I replace my reasonably functioning native apps with clunky spyware that doesn't work when I don't have an internet signal?


Aside from a variety of other things, you're missing the fact that this RAM-focused operating system was conceived in 2006.


And it's pointless now. The only value of Chromebooks seems to be that you can sell them extremely cheap because you just need 8GB or so of flash. Once you hit the >$350 price point and can afford to put in a decent amount of flash, I don't think using Chrome OS buys you anything.


If they can reduce user intervention for maintenance to near zero, that's a lot of value right there.


SSDs are only as fast as the bus. The SATA bus tops out at around 600 MB/s (SATA III), while the RAM bus can reach roughly 15,000 MB/s.

25x faster data access is still very significant.



