But there wasn't ever a divergence in philosophy. It was a straight switch.
In the 70s, everyone designing an ISA was doing CISC. Then in the 80s, everyone switched to designing RISC ISAs, more or less overnight. There weren't any holdouts; nobody designed a new CISC ISA again.
The only reason it might seem like there was a divergence is that some CPU microarchitecture designers were allowed to design new ISAs to meet their needs, while others were stuck designing new microarchitectures for legacy CISC ISAs that were too entrenched to replace.
> For example some RISC experiments turn out to have painted their designs into dead ends
Which is kind of obvious in hindsight. The RISC philosophy somewhat encouraged exposing pipeline implementation details in the ISA (MIPS branch delay slots being the classic example), which is a great idea if you can design a fresh new ISA for each new CPU microarchitecture.
But those RISC ISAs became entrenched, and CPU microarchitects found themselves having to design for what are now legacy RISC ISAs, working around implementation details that no longer make sense.
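To make the "exposed pipeline detail" point concrete, here's a toy sketch (my own illustration, not any real ISA or simulator) of a MIPS-style branch delay slot: the instruction immediately after a branch always executes, because the classic 5-stage pipeline had already fetched it. A deeper or out-of-order microarchitecture gains nothing from this, yet still has to emulate the behavior for compatibility.

```python
def run(program):
    """Interpret a tiny toy instruction list with one-slot delayed branches.

    Instructions (all hypothetical): ("addi", reg, imm), ("beqz", reg, target),
    ("halt",). A taken branch takes effect only AFTER the next instruction
    (the delay slot) has executed, mimicking classic MIPS behavior.
    """
    regs = {}
    pc = 0
    pending = None  # branch target to jump to after the delay-slot instruction
    fuel = 100      # safety limit so a buggy program can't loop forever
    while pc < len(program) and fuel:
        fuel -= 1
        jump_to, pending = pending, None  # consume any pending delayed branch
        op, *args = program[pc]
        if op == "addi":                  # reg += imm
            regs[args[0]] = regs.get(args[0], 0) + args[1]
        elif op == "beqz":                # delayed branch if reg == 0
            if regs.get(args[0], 0) == 0:
                pending = args[1]         # takes effect after the NEXT instruction
        elif op == "halt":
            break
        pc = jump_to if jump_to is not None else pc + 1
    return regs

regs = run([
    ("addi", "a", 0),     # a = 0
    ("beqz", "a", 4),     # taken branch to index 4...
    ("addi", "b", 1),     # ...but this delay-slot instruction runs anyway
    ("addi", "b", 100),   # skipped
    ("addi", "c", 7),     # branch target
    ("halt",),
])
print(regs)  # {'a': 0, 'b': 1, 'c': 7} -- the delay slot executed, b == 1
```

The delay slot saves one pipeline bubble on a 5-stage in-order design, which is exactly the kind of microarchitectural assumption that stops paying off once the pipeline changes.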
Really the divergence was fresh ISAs vs legacy ISAs.
> while the looseness of the CISC approach allowed more optimisation to be done in the micromachine.
I don't think this is actually an inherent advantage of CISC. It's simply the result of the sheer amount of R&D that AMD, Intel, and others poured into the problem of making fast microarchitectures for x86 CPUs.
If you threw the same amount of resources at any other legacy RISC ISA, you would probably get the same result.