> the details end up changing every 2-3 years, so you need to re-compile all your code
I could be wrong, but I think that was part of the original plan: Machine code would be thrown away along with the hardware, only the (implicitly C) sources would be saved, and you'd rebuild the world with your shiny new compiler revision to make use of a shiny new computer. Implicit in this model is the idea that compilers are smart and fast and can take advantage of minor hardware differences.
Compare this to the System/360 philosophy, where microcode is meant to 'paper over' all differences between different models of the same generation and even different generations of the same family so machine code is saved forever and constantly reused. (This way of doing things was introduced with the System/360, as a matter of fact.) Implicit in this model is the idea that compilers are slow, stupid, and need a high-level machine language where microcode takes advantage of low-level machine details.
A half-step between these worlds is bytecode, which can either be run in an interpreter or compiled to machine code over and over again. The AS/400, also from IBM, takes the latter approach: compilers generate bytecode, which is compiled down to machine code and saved to disk the first time the program runs, and again whenever the bytecode is newer than the machine code on disk; when upgrading, only the bytecode is saved, and the compilation to machine code happens all over again. IBM was able to transition its customers from CISC to RISC AS/400 hardware in this fashion.
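
To make that staleness check concrete, here's a minimal sketch in C of the "recompile on first run, or whenever the bytecode is newer than the cached machine code" idea. It's just an illustration of the general mechanism, not IBM's actual implementation; the file names and the recompile() helper are made up.

    /* Sketch: lazily translate saved bytecode into a throwaway native image.
       File names and recompile() are hypothetical stand-ins. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/stat.h>

    /* Hypothetical stand-in for the bytecode-to-machine-code translator. */
    static void recompile(const char *bytecode, const char *native)
    {
        printf("translating %s -> %s\n", bytecode, native);
        /* ... invoke the real translator here ... */
    }

    int main(void)
    {
        const char *bytecode = "program.bc";      /* saved across upgrades  */
        const char *native   = "program.native";  /* per-machine, disposable */
        struct stat bc, nat;

        if (stat(bytecode, &bc) != 0) {
            perror("bytecode missing");
            return EXIT_FAILURE;
        }

        /* Recompile if no native image exists yet, or if the bytecode is
           newer than the cached machine code (after a hardware upgrade the
           old native image would simply be gone). */
        if (stat(native, &nat) != 0 || bc.st_mtime > nat.st_mtime)
            recompile(bytecode, native);

        printf("running %s\n", native);  /* exec the native image */
        return EXIT_SUCCESS;
    }

The point is that the machine code is treated as a cache: the bytecode is the durable artifact, and the native image can be regenerated whenever it's missing or stale.
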
As you said, the world didn't work like the RISC model, and we now have hardware designed on the System/360 model along with compilers even better than the ones that were designed for RISC systems. Getting acceptable performance out of C code has never been easier, but going the last mile to get the absolute most means making increasingly fine distinctions between types of hardware that all try hard to look exactly the same to software.