Microcode goes way back to Whirlwind (1947) and EDSAC 2 (1957), depending on how you define it.
Processors that are simple enough (e.g. RISC designs) still sometimes use hardwired control. I think hardwired control is also chosen for performance in some cases.
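To make the contrast concrete, here's a toy sketch; the opcodes, control signals, and micro-op sequences are all invented for illustration, and real control words are of course vastly wider:

```python
# Toy contrast between microcoded and hardwired control. The opcodes,
# control signals, and micro-op sequences here are invented for
# illustration; real control words are dozens of bits wide.

# Microcoded: each opcode indexes a small "program" of control words
# held in an on-chip ROM, stepped through one per cycle.
MICROCODE_ROM = {
    "LOAD": ["drive_addr_bus", "assert_mem_read", "latch_dest_reg"],
    "ADD":  ["read_src_regs", "alu_add", "latch_dest_reg"],
}

def microcoded_control(opcode):
    for step, signal in enumerate(MICROCODE_ROM[opcode]):
        print(f"cycle {step}: {signal}")

# Hardwired: the same decisions fall straight out of fixed decode
# logic, caricatured here as a single lookup -- fast, but changing
# the instruction set means changing gates, not a ROM image.
HARDWIRED_DECODE = {
    "LOAD": "mem_read | latch_dest_reg",
    "ADD":  "alu_add | latch_dest_reg",
}

def hardwired_control(opcode):
    print(f"decoded directly: {HARDWIRED_DECODE[opcode]}")

microcoded_control("LOAD")
hardwired_control("ADD")
```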
Interesting, I wasn't familiar with the Whirlwind. Regarding the hardwired control in RISC designs, that makes sense.
I have a somewhat tangential question - in doing some Googling about RISC and control units I came across the following from a question on Quora[1]:
>"P5 (Pentium) and P6 (Pentium Pro). Note that the control unit was phased out in favour of the reservation station and re-order buffer."
>"The control unit with its rule-based, static approach is far too simplistic and would perform poorly in an out-of-order superscalar core design where the code stream is sequential."
At first this struck me as odd, as I wouldn't have thought that a reservation station and ROB would be mutually exclusive with a control unit, but it got me wondering: is the microcoded control unit more of a logical entity today that's split between the reservation station and ROB? Or was this something specific to these Pentium chips? Or is this person just wrong?
In that Quora answer, the box labeled "control unit" really should have been labeled "scheduler", as it is the unit that would be "scheduling" instructions for execution into the execution units (taking dependencies into account to do so).
That "scheduling" is a "control" function, so "control unit" is not wrong, per se, but there is still going to be a "control unit" even in the reservation station/reorder buffer variant on the right. What differs between the two is that the "scheduling" that was done by the left side "control unit" was instead dispersed into the reservation stations and reorder buffer of the right side diagram.
But there will still be a "control unit" handling overall control of the CPU; it is just not given a box and a label on the right side.
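One toy way to see that dispersal: in the sketch below, each reservation station decides on its own when its instruction is ready, while only the reorder buffer enforces program order at retirement. The (dest, srcs) instruction format, the stand-in "mem" operand, and the tiny three-instruction program are all invented for illustration.

```python
# Minimal sketch of scheduling "dispersed" into reservation stations
# and a reorder buffer; a real core tracks far more state than this
# (tags, common data bus broadcasts, issue ports, ...).
from collections import deque

# Program order: i0 is a load stalled on memory; i1 and i2 are
# younger instructions that don't depend on it.
program = [("r1", ["mem"]), ("r2", []), ("r3", ["r2"])]

rob = deque(enumerate(program))      # retires strictly in program order
stations = list(enumerate(program))  # issue whenever operands are ready
done = set()                         # values produced so far

cycle = 0
while stations or rob:
    if cycle == 2:
        done.add("mem")              # the load's data finally arrives
    # Each station checks its own operands -- this is the "scheduling"
    # that the monolithic left-side control unit used to perform.
    ready = [(i, ins) for i, ins in stations if all(s in done for s in ins[1])]
    for i, (dest, srcs) in ready:
        print(f"cycle {cycle}: issue  i{i} -> {dest}  (any order)")
        done.add(dest)
        stations.remove((i, (dest, srcs)))
    # The reorder buffer retires only the oldest finished instruction,
    # restoring the in-order behavior the program expects to observe.
    while rob and rob[0][1][0] in done:
        i, (dest, _) = rob.popleft()
        print(f"cycle {cycle}: retire i{i} -> {dest}  (in order)")
    cycle += 1
```

Running it shows i1 and i2 issuing ahead of the stalled i0, yet nothing retiring until i0 completes.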
Thanks. I think this is the key idea I was looking to understand:
>"What differs between the two is that the "scheduling" that occurred by the left side "control unit" was instead dispersed into the reservation stations and reorder buffer of the right side diagram."
Is it a correct mental model, then, to think of the control unit as a logical or "distributed" unit (right side of the diagram) instead of a monolithic unit (left side of the diagram)?
I'm guessing this "reservation station/reorder buffer variant" of the control unit is the predominant one in use in CISC chips these days?
> Is it a correct mental model, then, to think of the control unit as a logical or "distributed" unit (right side of the diagram) instead of a monolithic unit (left side of the diagram)?
That is a somewhat reasonable interpretation. Just keep in mind that both diagrams are massive simplifications of reality (i.e., the machine behind the left side diagram likely had a more dispersed control system than its single block would lead one to believe).
> I'm guessing this "reservation station/reorder buffer variant" of the control unit is the predominant one in use in CISC chips these days?
It is/has been a popular model for many designs over the years (first introduced by IBM in the 360/91 mainframe circa 1967 -- R.M. Tomasulo, "An Efficient Algorithm for Exploiting Multiple Arithmetic Units", https://www.cs.virginia.edu/~evans/greatworks/tomasulo.pdf). As for predominant in CISC chips, that really depends upon what current Intel/AMD CPUs are doing internally, and I've not kept up with their most recent offerings' internal designs.
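For what it's worth, the core renaming trick from that paper is small enough to sketch. Everything named below (tags, registers, values) is invented for illustration:

```python
# Tiny sketch of the renaming trick at the heart of Tomasulo's scheme:
# architectural registers track the *tag* of the reservation station
# that will produce them, not a (possibly stale) value, which is what
# eliminates WAR/WAW hazards.

reg_status = {}      # register -> tag of the station producing it
values = {"r1": 10}  # the committed register file

def issue(tag, dest, src):
    # An issuing instruction captures either a ready value or a tag
    # to wait on; either way it never re-reads the register later.
    operand = reg_status.get(src, values.get(src))
    reg_status[dest] = tag  # future readers of dest now wait on this tag
    print(f"{tag}: dest={dest}, operand={operand}")

def broadcast(tag, value):
    # Common data bus: everyone still pointing at `tag` captures the value.
    for reg, t in list(reg_status.items()):
        if t == tag:
            values[reg] = value
            del reg_status[reg]

issue("RS1", "r2", "r1")  # RS1 reads r1's value (10) immediately
issue("RS2", "r3", "r2")  # RS2 captures the tag RS1, not a stale r2
broadcast("RS1", 42)      # r2 becomes 42; RS2's operand wakes up
```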
Thanks, and thanks for the link. I was aware of the Tomasulo algorithm, but I don't think I've ever seen the original paper, nor did I know it was inherently connected to the IBM System/360. Cheers.
Do you know of any systems that dynamically updated their microcode, or is microcode relatively fixed?
My understanding is that the Burroughs series of mainframes was designed to support user-definable instruction sets, so that a computing site could be optimized for specific workloads and languages.
>"My understand is the Burroughs series of mainframes were designed to support user definable instruction sets so that a computing site would be optimized for specific workloads and languages."
This is fascinating, although I'm having trouble wrapping my head around it. Would "user" here be an engineer or field tech from Burroughs, or a mainframe admin at the company that was leasing the Burroughs system? You would have to have very intimate knowledge of CPU architecture to roll your own instruction set, no? Or am I misunderstanding this, and it would be more about removing some inefficient instructions for a specific type of workload?
Intel CPUs (as well as AMD's) have the ability to load microcode patches to correct bugs that are found post-manufacture. In fact, many of the Spectre/Meltdown mitigations that rolled out after those exploits became known were in the form of microcode patches to the various CPU models.
Note, this is not "user-definable", but it is updatable microcode.
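If you're curious what revision a given machine is running, Linux on x86 exposes it in /proc/cpuinfo; a minimal sketch (the revision shown in the comment is just an example):

```python
# Minimal sketch of checking the currently loaded microcode revision.
# Linux on x86 exposes a "microcode" field per logical CPU in
# /proc/cpuinfo (Intel and AMD); this won't work on other platforms.

def microcode_revisions(path="/proc/cpuinfo"):
    revisions = set()
    with open(path) as cpuinfo:
        for line in cpuinfo:
            if line.startswith("microcode"):
                revisions.add(line.split(":", 1)[1].strip())
    return revisions

# e.g. {'0xf0'} -- usually one entry, since all cores run the same patch
print(microcode_revisions())
```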
The Xerox Alto was famous for its user-writeable microcode (among other things). Different languages could use microcode that was optimized for the language's operations. Even some games had custom microcode for high performance.
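Reduced to a toy, the idea looks something like this: the "machine" is an interpreter over a dispatch table, and retargeting it for a workload means loading a different table. The opcodes and handlers below are invented, and of course real Alto microcode worked at the level of control signals, not Python functions:

```python
# Toy flavor of user-writable microcode: the "machine" is an interpreter
# over a dispatch table, so retargeting it for a workload means loading
# a different table.

def run(program, isa):
    stack = []
    for op, arg in program:
        isa[op](stack, arg)  # each "instruction" means whatever the
    return stack             # currently loaded table says it means

base_isa = {
    "PUSH": lambda s, a: s.append(a),
    "ADD":  lambda s, a: s.append(s.pop() + s.pop()),
}

# A site with a string-heavy workload loads an extended "ISA":
def concat(s, a):
    right, left = s.pop(), s.pop()
    s.append(left + right)

string_isa = dict(base_isa, CONCAT=concat)

print(run([("PUSH", 2), ("PUSH", 3), ("ADD", None)], base_isa))             # [5]
print(run([("PUSH", "ab"), ("PUSH", "cd"), ("CONCAT", None)], string_isa))  # ['abcd']
```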