It's just too easy to make a case by taking a retrospective look at the winners. I want to see a current battleground along with a prediction of which one is the toy and which one is the non-toy.
Here's an example of why it's hard: iPhone vs Android. Well, obviously the iPhone is the toy. You have to do incredibly awkward things, or pay Apple $100, to run your own code on it; tethering is sometimes blocked; you can't replace the Web browser. It's hard to script it. Want to use your favourite email client? Good luck.
But hang on. Clearly Android is the toy. You can pick up Android phones for a lot less than an iPhone. Most of these phones are nowhere near as pretty or as specced out as an iPhone -- but pick one up and you've got the complete Android experience. If you want to hack on it, go ahead. If what you're looking for isn't built into the phone, chances are someone's written it for you. Sure, it's messy, but that's democracy.
But hold up again. Smartphones are getting cheaper, but who really needs such a powerful, battery-sucking device in their pocket? Maybe the real "toy" of this generation is yet to emerge.
In five years someone will write an article about how obvious it was that iPhones would succeed because they were the simple accessible choice, or that Android would succeed because it was democratising, or that something else entirely would succeed because the smartphone genre was a fad.
Disclaimer: I agree with the general sentiment of the article.
Your comment makes me realize a problem with this article: it's not that it isn't true; it's that it's too true. Its title is a tautology. Of course the technological winner will always be a toy. To a proper hacker, every object is a potential toy.
The very expensive multiphoton microscopes that I once worked with were, and still are, very serious medical research tools. But they were also wonderful toys. Building them was fun. For an R&D engineer, engineering is fun.
Which were the "serious" computers again? The ones that were never used for fun? But that excludes all of them. Sometimes the fun was right out in the open: The DEC PDP-1 of 1959, for example, became famous as a game machine, because Spacewar was written on it. But I'll bet that even old-school bank-account data processing was a lot of fun when it was first being invented. I bet even COBOL programmers had fun, especially in 1960 when COBOL was brand new.
I think it's more subtle than that. For the micro to supplant the mini (mainframes are alive and well, thankyouverymuch) it had to become one. A contemporary PC running Linux or NT is not only faster than a VAX, it is actually more complicated under the hood. This should be easy to prove: do whatever you need to do to get a complete process listing of your preferred system. How much do you, probably an "expert" if you're even reading HN, know about each and every process? Try the same thing on a CP/M box if you have one...
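To make that concrete, here's a rough sketch of the modern half of the experiment (Linux-specific, and just one way to do it: it counts the numeric entries in /proc, i.e. the live processes -- a CP/M box has nothing comparable to enumerate, which is rather the point):

    /* count_procs.c -- count running processes by walking /proc.
       Linux-specific; build with: cc -o count_procs count_procs.c */
    #include <ctype.h>
    #include <dirent.h>
    #include <stdio.h>

    int main(void) {
        DIR *proc = opendir("/proc");
        if (!proc) { perror("/proc"); return 1; }

        int count = 0;
        struct dirent *e;
        while ((e = readdir(proc)) != NULL)
            if (isdigit((unsigned char)e->d_name[0]))  /* PID dirs are all-numeric */
                count++;

        closedir(proc);
        printf("%d processes\n", count);
        return 0;
    }

On a typical desktop this prints a couple of hundred; CP/M, being single-tasking, makes the answer trivially one.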
The real lesson is, it's easier to make a simple thing more complex than it is to make a complex thing more simple.
"it's easier to make a simple thing more complex than it is to make a complex thing more simple"
And unfortunately, that's why today's toys will be tomorrow's dinosaurs. That said, this is exactly why mobile is winning now. It's so much simpler. There's no reason it's so complicated to do something on a PC, other than that the simple thing from then has since become more complex.
> this is exactly why mobile is winning now. It's so much simpler.
Are you sure? Here's a relevant quote (often attributed to Bjarne Stroustrup):
I have always wished for my computer to be as easy to use as my telephone; my wish has come true because I can no longer figure out how to use my telephone.
I'm not sure about everything, but on iOS 4.1 with my iPod Touch (so no, I don't have a modern mobile phone, but it's basically the same) it's much easier to do the things most people do (Twitter, Facebook, email) than it is on a PC. Plus, want to install a game? A couple of taps and a couple of dollars, and you're ready to play within seconds.
Compare that with a PC: find a game online, create an account to download it, spend $20-$50 on it or perhaps just $10-$15 if it's a basic game, wait while it downloads, then step through the installer and if all goes well, you can play your game.
We're all programmed to do it, and it's a no-brainer for those of us who are used to it, but the iOS model is so much simpler for people who don't enjoy tweaking a machine and just want it to work.
> Compare that with a PC: find a game online, create an account to download it, spend $20-$50 on it or perhaps just $10-$15 if it's a basic game, wait while it downloads, then step through the installer and if all goes well, you can play your game.
As the owner of one, I get your point - sometimes it could be a little less computer-like. OTOH I was using a MacBook the other day and kept wondering a) why the arrow keys don't move the cursor or page view in text-editing fields, and b) why on earth it wouldn't let me just copy the files from the SD card in the camera connected to it.
It wanted to import them into iPhoto but couldn't, because iPhoto didn't recognize some of the RAW files. I didn't mind that, but all I wanted was to see the contents of the card in a Finder window and move the files myself. Eventually I had to borrow a USB card reader and stick the SD card into that.
I kept imagining Clippy for Mac: "It looks like you know what you're doing. Do you think I'm going to let you put me out of a job?"
> There's no reason it's so complicated to do something on a PC
Of course there are reasons why it's complicated to do things on a PC. A PC is an open-ended tool, that's why. One person might be playing World of Warcraft, while someone else might be using Emacs to write a Clojure web app, while someone else might be using Google SketchUp to design a floorplan, while someone else is chatting on Skype. Meanwhile I am using my PC to emulate the original Sony PlayStation and post to Hacker News, and my friend is watching a movie on Netflix.
Computers get simpler to use when people solve problems in permanent ways. They get harder to use when new challenges expose fundamental limitations of the existing design (i.e. the design was _too_ simple).
> That said, this is exactly why mobile is winning now. It's so much simpler.
For the user, maybe. Even though I knew it ahead of time, it was still a bit weird the first time I opened a debug terminal to my Android phone and found myself at a shell prompt. Putting Unix on a telephone is not my idea of making things simpler :-)
I can't say I find it complicated to do things on my PC though. I just don't want to cart it around all the time, or need to do all that stuff on the bus or in the park.
> it's easier to make a simple thing more complex than it is to make a complex thing more simple
Ha, yes, sort of. That seems immediately right, but when you consider it, it doesn't quite make sense in those words.
To make something more complex you must create new things. But to make something simpler you can just destroy a chunk of it. It is certainly easier to delete code than write it.
I think the point is this: it is easy to make something better by just accumulating features; it is hard to make something better by removing features.
The difficulty of deleting code scales with how much other code depends on it. It can be much, much harder than writing new code. For example, can you imagine removing a core shell command like grep? Practically impossible. Creating a new one--however hard--is still at least possible.
OK, what I am saying is that the original statement only makes sense because it implies maintaining coherence -- but it doesn't actually bring that important point out. It is not quite the simplifying, per se, that is difficult; it is keeping (or improving) the overall structure while you do it. Maybe that is an overly fine (or pedantic) point, but maybe not . . .
I'm going to nitpick a little about the Arduino. Just to be clear, I own an Arduino and love it, but the article misrepresents it slightly.
It states that the Arduino is a microcontroller. This is false. From the Arduino website:
> Arduino is an open-source electronics prototyping platform based on flexible, easy-to-use hardware and software.
The Arduino is an entire prototyping platform. It includes software that makes it easy to use, plus a lot of extra peripherals that make it more expensive. In the professional realm, Arduinos are not that popular. They're just too expensive. The microcontroller that the Arduino uses costs a few dollars, which is a tiny fraction of $30 for the whole board. Any professional will be comfortable enough to just program a basic microcontroller and build the small amount of circuitry to power it.
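To give a sense of what that looks like, here's a minimal bare-metal sketch of the classic blink program for an ATmega328P using avr-libc -- no Arduino board, bootloader, or IDE involved (the pin and clock speed here are just illustrative):

    /* blink.c -- toggle PB5 (the pin the Arduino calls 13) on a bare ATmega328P.
       Build: avr-gcc -mmcu=atmega328p -DF_CPU=16000000UL -Os -o blink.elf blink.c */
    #include <avr/io.h>
    #include <util/delay.h>

    int main(void) {
        DDRB |= _BV(DDB5);            /* configure PB5 as an output */
        for (;;) {
            PORTB ^= _BV(PORTB5);     /* toggle the LED */
            _delay_ms(500);           /* F_CPU must match the actual clock */
        }
    }

Wire up power, a decoupling cap, and an ISP programmer, and that's more or less the whole exercise, for a few dollars in parts.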
> In the professional realm, Arduinos are not that popular. They're just too expensive.
It seemed to me the comparison in the article was more toward something like an ARM-based platform. In my opinion, a large part of the success is the combination of price (compared to $100+ microcontroller development boards) and accessibility (a mostly out-of-the-box toolchain and set of libraries).
> The microcontroller that the Arduino uses costs a few dollars, which is a tiny fraction of $30 for the whole board.
Keep in mind that part of the board's cost is "sustainable development": it's produced in Italy, with (apparently) known labour practices.
The Arduino is basically a prototyping platform built around an Atmel microcontroller - namely the ATmega range (ATmega8, ATmega168, ATmega328, etc.).
The Arduino _isn't_ a microcontroller itself - a microcontroller is placed into the Arduino board, and the Arduino provides a way to interface with it (via hardware ports and a software interface).
Thanks for the corrections, I edited the article a bit. (I'm still comfortable calling the Arduino a "microcontroller with some software included", since that's what Wikipedia calls it.)
Whilst fixing that, I might also point out that the processor on the BeagleBoard isn't a microcontroller either: its memory is external, it typically runs high-level OSes (HLOSs), and so on.
I'd put it a little more strongly: while the example of scripting languages is a new one to me, none of the reasoning here is new or surprising to someone who's read Christensen.
The crucial point isn't that it's a technology disruption, but a business disruption, in the sense that (1) customers of the new technology have different needs (or ranking of needs); (2) vendors of the new technology need to supply it in different ways.
One example from the book, on disk drives: in the move to smaller drives, which were more popular in laptops, (1) the customers valued small physical size and mechanical robustness over storage size and cost per MB; and (2) the manufacturers of the smaller drives came out with new models twice as quickly as the makers of the previous size (incumbent organizations found it difficult to change all their internal procedures to adapt to this new rhythm, new sales channels, new customer demographic - everything changes, both "build something" and "that people want", not just the tech).
Because it's not a technology problem, but a business problem, the proposed solution is not a separate tech lab, but a separate business unit - one that can be happy with tiny sales, and that can be organized by the needs, ranking of needs, and rhythms of the new market.
[It seems to me that sadly, this amounts to a startup, which the incumbent owns; this will save them financially, but won't save their organization.]
A current example is ARM CPUs in smartphones, which appear to be disrupting Intel's x86 architecture. But I hesitate to count Intel out - they've been around the disruption cycle a few times (they invented the microprocessor, and co-founder Robert Noyce co-invented the integrated circuit at Fairchild).
The book is worth reading, but half of it is historical data about disk drives.
Edit: I think it's a bit overrated, actually. Had it been a quarter of its size and stripped of its pretentiousness, it might actually be the classic everyone says it is. That being said, its core insight is an important and good one, so you kind of have to read it anyway.
Is it me, or is the author confusing incumbent and enterprise? Office is not 'enterprise' in my book, nor is Yahoo. MS Exchange, maybe. SAP and Salesforce, definitely. I'm not sure I see any toys entering that space; Mint is the closest I can think of.
I'm also not convinced by the whole scripting language thing. There is certainly a healthy debate to be had between static and dynamic languages, but I'm not sure that many people pooh-poohed Python or Ruby (unlike PHP). Also, to suggest that those have "won" vis-a-vis Java or .NET is jumping the gun somewhat, IMO.
Other than that, the central argument is essentially disruption theory, but as gaius points out, the idea there is that you start off as a "toy" with a particular advantage which, despite the limitations, allows you to carve out a niche. Then you take over the entire market by growing capacity without losing the initial advantage and pushing the incumbent into an increasingly small niche at the high end. However, by that time the product is fully featured, becomes the incumbent, and ceases to be a "toy".
"But as the years passed and hardware became faster the relative slowness of dynamic languages became less relevant, while their advantages in programmer productivity became much more relevant."
Let's not discount the fundamental advances in the compilation and performance of "dynamic" languages in relatively recent years. Just look at JavaScript.
At first glance, dynamic languages look much less amenable to optimization than static languages, but many new techniques have been discovered, and some optimizations are only possible at runtime, if the environment supports them.
In the meantime, the 'advantages' of dynamic languages - such as their concision and the functional features they had before the OO languages - are shrinking, as newer languages offer static typing with type inference, functional features, etc.
A definition of 'toy' needs to include motivation. Most of the time, people use a toy for its entertainment value rather than for its ability to help them carry out productive work. This is why toys aren't always considered 'serious'.
The essay seems to be suggesting that a small, less complicated product has a lot of benefits over larger, more complicated ones.