When designing the OS, he "took off the table the largest performance bottlenecks in the operating system: File I/O" by moving the entire OS to RAM. That meant the new OS couldn't really run large (i.e., powerful) desktop applications, which in turn meant it would have to use "webapps to replace any and all functionality normally found on a desktop." So, in the end, for powerful desktop applications that need a lot of I/O, he ended up replacing file I/O with... over-the-Web I/O that is slower by orders of magnitude.
I like Chrome OS, and want it to be successful, but for powerful desktop functionality with heavy I/O, a traditional desktop OS is faster.
Per the blog post itself, all operations happened locally in RAM, except for some asynchronous backing up to permanent media. He didn't mention network I/O at all, most likely because the web apps were local (see Chrome packaged apps).
In any case, Chrome OS's current architecture is completely different from the version he laid out. It doesn't run completely in RAM; instead, it relies on SSDs in most cases to reduce file I/O times.
I agree with you, but there is a flip side. It's OK for powerful apps to use the disk; however, by your definition a browser is not a powerful app, nor is a word processor or an email client. His design makes sense in a very specific market, but I would be very interested in seeing an OS with a hybrid approach.
Yes, there are a couple of performance trade-offs, both good and bad.
First, as you observe, file I/O is faster than network I/O on a call-for-call basis. However, current disk-based operating systems may perform thousands of file I/O operations per second when performing seemingly trivial tasks, dramatically slowing down the perceived speed of the operating system. Webapps do not perform thousands of network I/O operations per second. Instead, webapp authors go to great lengths to maximize the perceived performance of the app, even on very slow network connections. For example, Gmail performs asynchronous background network operations, so the user rarely perceives any sort of network delay, even on a very slow connection (sketched below).
Another significant trade-off of running webapps instead of locally installed applications compiled to machine code: webapps are usually written in some combination of Flash, ECMAScript, or HTML5, all of which are much slower than machine code. However, webapp authors also go to great lengths to ensure their apps perform very well, despite the slow language interpreters.
In the end, if the webapps are written well, they absolutely can run faster than local apps on a traditional operating system.
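To make the Gmail point concrete, the pattern is basically "never make the user wait on the network": kick the slow call off in the background and keep responding to input. A toy sketch of that pattern in Python (asyncio standing in for the browser's event loop; save_to_server is a made-up name, not anything Gmail actually uses):

    import asyncio

    async def save_to_server(draft: str) -> None:
        # Stand-in for a slow network call, e.g. autosaving a draft.
        await asyncio.sleep(0.5)          # pretend this is 500 ms of network latency
        print(f"synced {draft!r} to the server")

    async def edit_session() -> None:
        pending = []
        for i in range(5):
            draft = f"draft revision {i}"
            # Fire-and-forget: the user never waits on the round-trip.
            pending.append(asyncio.create_task(save_to_server(draft)))
            await asyncio.sleep(0.1)      # user keeps typing in the meantime
            print(f"revision {i} shown to the user immediately")
        await asyncio.gather(*pending)    # flush outstanding syncs at the end

    asyncio.run(edit_session())

Whether each sync takes 50 ms or 5 seconds, the user-visible behaviour is the same; the latency is hidden rather than eliminated.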
That wasn't the point. Is the network really "slower by orders of magnitude" than disk? Why is using the network such a silly idea, considering network speeds are more than keeping up? Or was the reference just referring to protocols? I was hoping for some accuracy: something to back up the statement.
30ms latency is basically the best-case scenario you can expect for consumer internet (with a nearby datacenter and all that), while 10ms latency is about the worst case for a rotational disk. And almost no consumer internet outside of Japan or South Korea is going to get more than 2 MB/s of real bandwidth, compared to 100 MB/s of real bandwidth from a rotational hard drive (rough arithmetic below).
Throw wireless or uploading into the mix (who's going to use a Chromebook over Ethernet?) and it's even worse.
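Back-of-envelope with those same figures (they're the numbers above, not measurements of mine; the 10 MB payload is just an example):

    def transfer_time(latency_s: float, bandwidth_mb_s: float, size_mb: float) -> float:
        # One round-trip of latency plus the streaming time, in seconds.
        return latency_s + size_mb / bandwidth_mb_s

    size_mb = 10  # example payload

    disk = transfer_time(latency_s=0.010, bandwidth_mb_s=100, size_mb=size_mb)
    net  = transfer_time(latency_s=0.030, bandwidth_mb_s=2,   size_mb=size_mb)

    print(f"rotational disk:   {disk:.2f} s")   # ~0.11 s
    print(f"consumer internet: {net:.2f} s")    # ~5.03 s

With those figures, latency alone is only about 3x worse in the best case; it's the ~50x bandwidth gap that dominates once real amounts of data move.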
You can feel good about gaining important knowledge. (Advanced exercise: extrapolate from the implications of not having known one piece of fundamental knowledge, then take action.)
I worked this way for years in the 80s, when my main platform was the Atari ST. I had a whopping 4MB of RAM and would load the OS from a floppy disk into RAM, after which it would create a decent sized RAM drive that I would periodically save my work to. Compiling and running programs was nearly instantaneous, and everything ran smoothly and silently (the ST didn't even have a fan). At the end of my session, I would save whatever I wanted to keep to disk before shutting down. I never even bothered to buy a hard drive, it was such a pleasant experience. Moving on to other operating systems was an abrupt shock when the time came, leaving me somewhat nostalgic for those days.
I still prefer local storage, though, without a forced dependency on the cloud.
The Atari 520 STe and 1040 STe had four RAM slots. 4MB was doable, using four modules. The developer version (4160 STe) came with 4MB RAM preinstalled. [1] [2]
I can't find a price list for those models, but Atari sold 4MB memory upgrades for its ATW800 workstations at GBP 1450 [3].
The Amstrad CPC 6128 had an 8-bit Z80 CPU and would only access the first 64KB of its 128KB of RAM, even when restarting.
Somebody had the idea of using the second 64KB as a RAM disk to store an exact copy of the main RAM, which would survive a CPU crash; that made it great for development.
To this day I do not know why they put 128KB in a machine with an OS designed for the older 64KB models. I suspect it had to do with legal reasons (lower taxes for higher-memory models?). The fact that it also ran a rather dated CP/M might also have been a factor.
The 80s were a crazy time. Hardware was dropping in price in a way no one expected.
I remember reading an interview with Nolan Bushnell about the Atari 2600. He said that by the time it launched, RAM prices had fallen by 80%. They could have put in tons more RAM, but during the design phase no one saw it coming, so they designed with expensive RAM in mind.
He also complained about his parent company's reluctance to put out newer models of the Atari 2600 with newer hardware. I guess they thought of it like an appliance: you make one, it lasts 10 years, and you buy a new one when it dies, like a washing machine. The ultra-fast pace of technology threw a lot of old-world business ideas out the window.
I suspect that Amstrad was in the opposite situation. The hardware got so cheap so fast, the software guys couldn't keep up.
They do run Chrome OS very fast. My XE303 recovers from sleep in an unnoticeable amount of time, and boots from cold in mere seconds. The browser starts pretty much instantaneously.
If you want to use an in-memory Linux offline, you might look at things like Puppy and the various other old-school live distros. Several of them pioneered in this space, though they didn't stick to lightweight stuff in quite the same way as Chrome OS.
I think Chrome OS is significantly faster than Chrome on Windows 7 on the same hardware. But of course you can't compare a $1000 Windows machine with a $250 Chromebook, because obviously the $1000 machine has much faster hardware to make up for the difference. At the same time, I don't think a $1000 Chromebook would be worth it, as the bottleneck at that point is not the CPU but the network connection.
And do you really expect to run "web apps" 100% offline all the time? Then you're missing the point. Chromebooks are for an all-connected world, not for one where you still do most stuff offline.
Actually the (X)(Chr)Ubuntu performance on a new ARM chromebook is pretty damn awesome for such a cheap, thin, light machine. IMHO. The hardware isn't at all bad.
Tiny Core (distro.ibiblio.org/tinycorelinux/) is another Linux distro that runs from RAM, and you can install general software more easily than on Chrome OS. Worth a look if you're interested in speed but not web apps.
Puppy Linux is another distro that does the same. It's one I've used before and is actually rather pleasant; it seems to "just work" for most anything (including an awkward Windows-only USB wireless network interface). The interface can be a bit awkward, admittedly, and it doesn't have everything available in its repositories, but it's still blazing fast.
I've used Puppy quite a bit, and didn't realise it was running in RAM. I have to say though, that I still think Tiny Core has the edge in terms of speed.
The Wikipedia page is pretty interesting, particularly the link at the bottom to Win PE, a live version of Windows that's available from MS. I didn't know that existed.
I looked at the patent referenced in the article (http://www.google.com/patents/US8239662) and was disappointed to see nothing that would appear non-obvious. It looks like it would be pretty easy to infringe on this patent if one were building a pre-boot configuration management system with existing free and open source tools.
I cried a little inside just reading that patent name. I know that's not necessarily how patents work, but patenting a network bootloader in 2009? PXE and TFTP have been around much, much longer.
If you look at the claims, which define the invention, you see that it's quite specific: the OS has to be a [binary] image, preferences have to be an image, a loader combines them to make a full image, and the full image is removed from the device when you log off or power off.
The removal of the image is in method claims 8 and 17 too.
The details will depend on general interpretation and on the detail in the specification, but claim 1 appears to contradict the actual operation of the device: col. 7, line 20 says a cached image can be used when the network is down (if you cache it, you haven't removed it). That element appears to rely solely on col. 6, line 50 of the specification.
So the patent only applies to a device that gathers two images from a network, combines them into a full OS, and removes the full OS from the device at logoff. That strikes me as a pretty weak patent [except maybe in some security applications]: no local caching? It seems you can avoid the patent simply by caching, or by failing to remove the complete OS image (including preferences) when you log out.
Obviously I've only made a cursory analysis based on a very brief view of the US B doc.
Fascinating post, but I have to wonder... 45 seconds to restart Firefox? And just to clear the cache? Assuming he does that "hundreds" of times a day (say, 200), that's 2.5 hours a day just waiting for Firefox to spin up and down.
What could possibly cause it to take that long? Was there no way to clear the cache without restarting the whole browser?
Unless I'm missing something, this seems very strange to me.
This was around 2006, which, IIRC, was pretty much the peak of Firefox's startup bloat issues. I'm also inclined to believe he was on an older machine, because those would also have been the IE6 days, and IE6 was definitely much faster to load than anything else (due to MS basically building IE into XP).
> First, Chromebook was initially rejected by Google management. In fact I wrote the first version as early as July 2006 and showed it around to management. Instead of launching a project, the response was extremely tepid. My boss complained, "You can't use it on an airplane." Actually, you could as, under the covers, it was still a bare-bones Linux distribution and could execute any Linux program installed on it.
It's interesting how many companies' first reaction to grassroots innovation is to kick it to the curb.
To quote Jonathan Ive: "while ideas ultimately can be so powerful, they begin as fragile, barely formed thoughts, so easily missed, so easily compromised, so easily just squished."
I think it would be better if Chrome OS were more like Lotus Notes, where a group of individuals can remotely collaborate on a document in the cloud or work offline and replicate their changes overnight for later editorial adjustment.
I used to set up a ramdisk as a flat file for / and /usr, and have it load unionfs-style over the main filesystem after /sbin/init finished. This was around 2005. The system flew, and the only super large applications I was using installed to /opt anyway, which was on a spinning disk (I think Eclipse was the only one).
I wonder if he would ever have done this if SSDs had arrived a few years earlier? In 2007 my hard disk was indeed the greatest bottleneck on my computer by an order of magnitude. Replacing my spinning platter with an SSD (around 2010) has all but eliminated this problem.
It's funny how innovation works: the advantages of a Chromebook are no longer linked to its original purpose, but the incidental advantages that came along with solving the original problem turned out to be so beneficial that it's now carving out a solid niche based on those alone. It makes me think of a detonator and an explosive: you need something to initiate an innovation, but that doesn't always have to be what sustains it.
I have 8 GB RAM. Why doesn't the kernel load everything to memory the first time I use it, and have it available afterwards? Shouldn't the fs cache do that anyway?
OS X does that. One problem: it's currently set up so that 8GB is not quite enough. It would be better for it to be a bit less aggressive in caching and have much lower "swappiness." Right now, it's as though it's tuned to be responsive for a wide variety of casual use-cases, but far from optimal for users doing really memory-intensive work in just a few apps.
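For what it's worth, the page cache already does this for anything you've read recently: the second pass over a file is served from RAM. A rough way to see it on a Linux or OS X box (the path and size are just placeholders, and the first read is only truly cold if the file isn't already cached, e.g. after dropping caches or a reboot):

    import os, time

    path = "/tmp/cache_demo.bin"            # hypothetical scratch file

    if not os.path.exists(path):
        with open(path, "wb") as f:         # ~256 MB so the timing difference is visible
            f.write(os.urandom(256 * 1024 * 1024))

    def read_all(p):
        start = time.perf_counter()
        with open(p, "rb") as f:
            while f.read(8 * 1024 * 1024):  # stream through the whole file
                pass
        return time.perf_counter() - start

    print(f"first read:  {read_all(path):.3f} s")
    print(f"second read: {read_all(path):.3f} s")  # typically far faster: served from the page cache

The catch is that the kernel only caches what you've already touched and evicts it under memory pressure; it won't speculatively preload everything up front.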
It's funny. The first Linux distribution I ever used, way back in '05 on a terrible P2 laptop with 128MB of RAM, was DamnSmallLinux. From what I can recall it ran in RAM, at least off my USB drive (the disk ISO dd'ed onto it), with a floppy as the bootloader to hit the USB :)
Those were the days. I also remember getting Slax to boot from the HDD, which it was never designed to do. Nostalgia...
There were also 1.4MB floppies with a barebones Linux that booted and ran completely in 8MB of RAM on a 386. You could remove the floppy after it booted and continue using Linux.
I have the same frustration with speed and the same proposed solution (keep all your live data in RAM, journal updates to disk). My netbook now has 1024MiB of RAM; my first Linux machine had 800MB of disk and 64MiB of RAM. Current laptop computers are powerful enough to keep a full working environment in RAM and never have to go to disk or even flash RAM for software. (Maybe for data if you have a large dataset, but not for software.)
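A toy sketch of that "live data in RAM, journal updates to disk" idea (this is just my illustration, not how Chrome OS or any real system does it; the class name and journal path are made up):

    import json, os

    JOURNAL = "journal.log"                # append-only record of every change

    class RamStore:
        def __init__(self):
            self.data = {}
            if os.path.exists(JOURNAL):    # replay the journal to rebuild in-RAM state
                with open(JOURNAL) as f:
                    for line in f:
                        key, value = json.loads(line)
                        self.data[key] = value

        def get(self, key):
            return self.data.get(key)      # reads never touch the disk

        def set(self, key, value):
            self.data[key] = value         # update RAM first
            with open(JOURNAL, "a") as f:  # then append the change to disk
                f.write(json.dumps([key, value]) + "\n")

    store = RamStore()
    store.set("notes.txt", "draft of my post")
    print(store.get("notes.txt"))

Reads stay entirely in RAM; the disk only ever sees small sequential appends, which can happen asynchronously.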
It'll be years before webapps can supplant "native" PC/Mac apps. If Google really wants ChromeOS to be a third PC option, there needs to be a way to run "native" apps, and I'm not sure NaCl is the way to go.
From Wikipedia:
By January 2013, Acer's Chromebook sales were driven by "heavy Internet users with educational institutions", and the platform represented 5-10 percent of the company's U.S. shipments, according to Acer president Jim Wong.
Great read, but I'm not sure why he keeps using the terms Chromebook/Chrome OS. It's really the story of a Google OS that someone else eventually turned into Chrome OS a few years later.
It seems like most of the speedup is from running everything from RAM. If you're serious about never touching disk, adding something like "BOOT=casper toram" to a stock Ubuntu kernel line might do it easily.
Edit: OK, for something practical this should work: http://www.pendrivelinux.com/universal-usb-installer-easy-as... Just decline to give it any "persistent" storage, and point it at your hard drive (or a smallish <10GB boot partition) instead of a thumb drive. Does anyone know of a tool like this that will run on Linux?
Edit2: unetbootin seems to be a similar tool that is cross-platform.
I was hoping to get a boost for games in Wine. Looking at it now, I don't think it would have a useful effect; games would still need to use the HD for assets and swap space.
Well, that depends... if you have 16+ GB of RAM and don't mind waiting an extra couple of minutes to boot, you wouldn't have to wait for textures to load from disk during the game! :)
Did they want a fast Linux, or their "own" Linux? They are trying to get us to program systems now in the Go language, and web apps in their Dart language. They control search, and increasingly smartphones too; what's left? Owning the OS and the hardware.
> They are trying to get us to program systems now in the Go language
Basically everything on http://golang.org/ is licensed under either a CC Attribution 3.0 license or a BSD license. Go was intentionally designed to be a specification rather than just a single canonical implementation. True to that, there are two mainstream Go compilers, one of which (gccgo) is licensed under the GPL v3.
Aside from the fact that Google happens to employ some of the main developers of the language, I don't see how you can say that they 'own' it, since they've gone out of their way to make none of it proprietary.
I don't see the point. With flash disks now offering hundreds of MB/sec of read bandwidth and great random access latency, what's the benefit of running everything from RAM? I can't say I've ever felt a "need for speed" on my Air or my iPad.
And web apps suck. I don't know if anyone told the Chrome OS team, but GMail and Google Docs are awful, and it seems like they get worse and weirder with every incarnation (at some point several years ago I found GMail quite tolerable, but they've fucked it up since then). I used to think Outlook and Word were bloated pigs, but give me Outlook 2010 any day. Why would I replace my reasonably functioning native apps with clunky spyware that doesn't work when I don't have an internet signal?
And it's pointless now. The only value to Chromebooks seems to be that you can sell them for extremely cheap because you just need 8GB or so of flash. Once you hit the > $350 price point and can afford to put in a decent amount of flash, I don't think using Chrome OS buys you anything.
> I like Chrome OS, and want it to be successful, but for powerful desktop functionality with heavy I/O, a traditional desktop OS is faster.
There's always a trade-off.