There was even a version of Windows NT 4.0 for that. It was great. Then Motorola decided they had no clue what they were doing with computers, and IBM gave up and shut the whole thing down.
All of the RISC chips were dying anyway. Intel was picking up steam with the Pentium Pro and Pentium II, Wintel servers were getting cheaper and more common, Windows was becoming 'good enough', and Linux started to push the expensive traditional Unixes off the table.
That's really just the standard Innovator's Dilemma effect, where low-cost, high-volume products tend to eat their way up the market. Now that the semi-RISC ARM processor is attacking Intel from the low end, it's having the same sort of success as x86 did over MIPS, etc.
Cheap MIPS and other RISC chips were supposed to replace all the clunky old x86-style processors, so it's one case where the Innovator's Dilemma effect turned out to be wrong.
Neither RISC nor CISC won in the end. Intel's current-generation processors, like most others with similar performance specifications, break down the incoming instruction stream into micro-ops (http://en.wikipedia.org/wiki/Micro-operation), the units of work that actually get executed. This renders the difference between CISC and RISC largely a matter of semantics.
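To make that concrete, here's a minimal sketch in plain C of the idea (hypothetical types and names -- this is not how any real front end is organized): a single CISC-style read-modify-write instruction like "add [addr], reg" gets cracked into a load, an add, and a store micro-op.

  #include <stdio.h>

  /* Purely illustrative micro-op record -- made-up fields, not a real decoder. */
  typedef enum { UOP_LOAD, UOP_ADD, UOP_STORE } uop_kind;

  typedef struct {
      uop_kind kind;
      int      dst;   /* destination register (or a temp), -1 if unused */
      int      src;   /* source register, -1 if unused */
      long     addr;  /* memory address, 0 if unused */
  } uop;

  /* "add [addr], reg" -- one complex instruction -- becomes three
     RISC-like micro-ops: load, add, store. */
  static int decode_add_mem_reg(long addr, int reg, uop out[3])
  {
      out[0] = (uop){ UOP_LOAD,  100, -1,  addr };  /* tmp    = [addr] */
      out[1] = (uop){ UOP_ADD,   100, reg, 0    };  /* tmp   += reg    */
      out[2] = (uop){ UOP_STORE, -1,  100, addr };  /* [addr] = tmp    */
      return 3;
  }

  int main(void)
  {
      uop uops[3];
      int n = decode_add_mem_reg(0x1000, 3, uops);
      printf("one instruction -> %d micro-ops\n", n);
      return 0;
  }

Once everything past the decoder only ever sees those three simple operations, the back end of an x86 and the back end of a RISC chip can end up looking pretty much the same.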
Before this, the idea was that RISC was simpler to implement, could be optimized more easily, and would ultimately be more cost-effective. What wasn't factored in was how good Intel is at optimizing, and how hard they'd push their process, beating the RISC side despite all the disadvantages CISC had.
Now it's the GPU that's eating Intel's lunch: high-performance floating-point code on the CPU can be orders of magnitude slower than the same code on a high-end GPU, so Intel's trying to fight back with their "pile of CPUs" strategy (http://en.wikipedia.org/wiki/Larrabee_(microarchitecture)). It's not working out very well so far.
In defence of Intel here, if you look at performance per watt, GPUs aren't all that far ahead of Intel. It's mostly that every instruction a modern CPU executes is predicted by a branch predictor, makes its way through several levels of cache, and is run through a reorder buffer, register renaming, and a reservation station before finally being executed. All of that takes energy, though it speeds up the rate at which sequential instructions can be issued by the processor quite a bit.
As to RISC vs CISC, well, it's true that x86 instructions are decoded to micro-ops (uOps) inside a modern processor, but the fact that the instruction was complicated does have a cost even for a modern processor. The act of just decoding four instructions in a clock cycle and transforming them into uOps is quite a bit of work, on the same order as actually executing them if they're simple additions or the like. And the uOps that make up an instruction have to complete all together, or else when the processor is interrupted by a page fault or the like it will resume in an inconsistent state. And the first time you run through a segment of code you can only decode one instruction at a time, since figuring out where the instruction boundaries are is hard; you can, however, store the location of those boundaries with just another bit per byte once the bytes are in the L1 instruction cache.
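That boundary-bit trick is easy to sketch. Here's a toy predecode pass in C, assuming a made-up variable-length encoding where the instruction length lives in the low two bits of the first byte (real x86 length decoding is vastly messier, which is the whole point): it walks the bytes once, serially, and records a "starts an instruction" bit per byte.

  #include <stdint.h>
  #include <stdio.h>
  #include <string.h>

  /* Toy stand-in for x86 length decoding: length = (low 2 bits) + 1. */
  static size_t toy_insn_len(uint8_t first_byte)
  {
      return (first_byte & 0x3) + 1;
  }

  /* Predecode pass: one sequential walk over the bytes, setting a
     "this byte starts an instruction" bit per byte.  The bitmap could
     then live alongside the L1 I-cache line. */
  static void predecode(const uint8_t *code, size_t n, uint8_t *boundary_bits)
  {
      memset(boundary_bits, 0, (n + 7) / 8);
      for (size_t i = 0; i < n; ) {
          boundary_bits[i / 8] |= (uint8_t)(1u << (i % 8));
          i += toy_insn_len(code[i]);
      }
  }

  int main(void)
  {
      uint8_t code[16] = { 0x02, 0, 0, 0x01, 0, 0x00, 0x03, 0, 0, 0, 0x00, 0 };
      uint8_t bits[2];
      predecode(code, sizeof code, bits);
      for (size_t i = 0; i < sizeof code; i++)
          if (bits[i / 8] & (1u << (i % 8)))
              printf("instruction starts at byte %zu\n", i);
      return 0;
  }

After that single sequential pass, later fetches of the same cache line can find several instruction starts in parallel just by scanning the bitmap.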
On the other hand, complex variable-length instructions mean you don't need as many bytes to express a given piece of code, both because you use fewer bytes per instruction on average and because a complex instruction sometimes does the work of several simple ones (an x86 read-modify-write like inc [ebx] is two bytes, where a classic fixed-width RISC needs a separate load, add, and store at four bytes apiece).
Of course, Intel is the biggest CPU vendor out there and has correspondingly large and brilliant design teams working hand in hand with the most advanced fabs in the industry.
Now, there are many RISC instruction sets that have taken on x86 before, but they all attacked it from the high end, from upmarket, which is just the opposite of what ARM is doing now. Will it succeed in dethroning x86 from the low end the way x86 dethroned its rivals? Who knows. But I think that previous fights don't tell us much about this one.
Quite. But there was plenty of "Intel is doomed" hype in the 1980s when RISC chips first appeared. Indeed, Microsoft didn't write either of its home grown operating systems -- NT and CE -- on x86 processors.
Of course, "Intel is doomed" (and "Microsoft is doomed") have been staples of clueless fanboy hype for 40 years. I'm still waiting for one of them to be right....
The next couple of years are going to be interesting. Intel's been winning for so long that it seems they're immune to the innovator's dilemma. It will be interesting to see whether the ARM platform breaks into the mainstream of desktop/server computing or whether Intel will prevail and hold 80% of the desktop and mobile market.
How good is ARM at running just generic ARM binaries? There are all the custom hardware parts, which we can ignore, but can I build a 32-bit ARM binary that will run on a wide range of ARM cores with good/great performance?
Historically, it has been my experience that on pretty much all the non-x86 platforms, compiler and hardware-specific optimizations tend to have a pretty dramatic impact. Intel just has so much code and so many existing code streams to factor into their designs for new hardware. Maybe this has changed. It's a hard road if mismatched or non-hardware-optimized binaries are slow and pokey while hardware-optimized binaries are competitive. Come out with a great 64-bit ARM core that can run nearly all ARM binaries with decent performance (clearly excluding stuff that needs custom hardware) and ARM could be pretty disruptive.
ARM realized that this was a problem when they got into smartphones, and while the lineup was a total and complete mess in 2008, their modern high-end chips actually provide a pretty uniform experience.
The half-watt microcontroller replacements still need custom builds, but the chips used in top-line smartphones can now all run the same compiled OS and apps. They are going to do a 64-bit transition soon, and it will be very interesting to see how that turns out.
Historically the big thing has been the variety of floating-point units. Nowadays VFP3 is pretty much a de facto standard on the high-end ARM chips (it's required from the Cortex-A8 onwards in the application profile), and on Android (where you have a huge diversity of hardware, some with FPUs, some without), what's done where performance matters is to ship one hardfloat binary and one softfloat binary.
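For what it's worth, the FPU question can also be answered at run time rather than only at packaging time. A minimal sketch in C, assuming a Linux/Android system new enough to have getauxval (glibc 2.16+ or a recent Bionic); the HWCAP constants are copied from the kernel's ARM hwcap.h so the snippet compiles anywhere.

  #include <stdio.h>
  #include <sys/auxv.h>

  /* ARM HWCAP bits, copied from the kernel's asm/hwcap.h. */
  #define HWCAP_VFP   (1 << 6)
  #define HWCAP_NEON  (1 << 12)
  #define HWCAP_VFPv3 (1 << 13)

  int main(void)
  {
      unsigned long hwcap = getauxval(AT_HWCAP);

      /* Report which FP hardware the kernel says is present. */
      printf("VFP:   %s\n", (hwcap & HWCAP_VFP)   ? "yes" : "no");
      printf("VFPv3: %s\n", (hwcap & HWCAP_VFPv3) ? "yes" : "no");
      printf("NEON:  %s\n", (hwcap & HWCAP_NEON)  ? "yes" : "no");
      return 0;
  }

An installer or a plugin loader could use checks like these to decide whether to drop in the hardfloat or the softfloat build.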
Man, talk about rubbing salt in an (old) open wound. I was SO excited when they announced that, and was really hoping that we'd get cheap, widely available Power based motherboards in a standard form-factor, capable of running Linux or BSD or whatever, etc.
Yeah, no.
Some more competition for Intel x86 and some widespread availability of Power machines (that don't cost a bazillion dollars) has felt like a pipe dream for years, and I'm not optimistic now...
http://en.wikipedia.org/wiki/Common_Hardware_Reference_Platf...