
The "CISC CPUs just decode instruction into RISC internally" thing is getting at something I think is important: RISCs and CISCs aren't necessarily that different internally. "CISC CPUs and RISC CPUs both just decode instructions into microcode and then execute that" is probably a more accurate but less "memeable" expression of that idea.

What exactly we mean by "RISC" and "CISC" becomes important here. If, by RISC, we mean an architecture without complex addressing modes, then "CISCs are just RISCs with a fancy decoder" is wrong. But if we expand our definition of "RISC" to allow for complex addressing modes and stuff, while keeping a vague upper limit on the "amount of stuff" an instruction can do, it becomes more appropriate; the "CISCy" REPNZ SCASB instruction (basically a strlen instruction) is certainly decoded into a loop of less complex instructions.
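
For the curious, here's roughly the loop that the classic "xor al,al; mov rcx,-1; repne scasb" strlen idiom expands to, written out in C. This is a sketch of the semantics, not any particular CPU's actual µop sequence:

    #include <stddef.h>

    /* The loop the decoder effectively unrolls REPNE SCASB into.
       Variable names map onto their x86 register counterparts. */
    size_t strlen_via_repne_scasb(const char *s) {
        size_t rcx = (size_t)-1;      /* max count */
        const char *rdi = s;          /* scan pointer */
        const char al = '\0';         /* byte to scan for */
        while (rcx != 0) {            /* REP: stop when RCX hits 0 */
            rcx--;
            if (*rdi++ == al)         /* SCASB: compare AL, [RDI]; RDI++ */
                break;                /* REPNE: stop when equal (ZF set) */
        }
        return ~rcx - 1;              /* the usual "not rcx; dec rcx" */
    }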

I think the main issue with most of these discussions is the very idea that there is such a thing as "RISC" and "CISC" at all, when there's no sharp distinction. They are, at best, abstract philosophies of ISA design, describing a general preference for complex instructions vs. simpler instructions, where even the terms "complex" and "simple" have very muddy definitions.

TL;DR: I agree




True RISC processors do NOT decode instructions into µops. The instructions ARE the microcode.

This is true for all mainstream implementations of RISC-V, Alpha [1], and MIPS.

To whatever extent a "RISC" core decodes (some) instructions into µops, it is deviating from RISC.

ARM has always had mixed designs, marketed as "RISC".

x86 is the least CISCy of the CISC ISAs. Except for the string instructions, it has only one memory operand in each instruction, and the addressing modes are simple with at most two registers (one scaled) plus a constant. In particular (unlike most CISC and indeed most pre-CISC/RISC ISAs) there is no indirect addressing.
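
To illustrate: even the most complex x86 addressing mode, base + index*scale + displacement, is just ordinary array math. Something like "mov eax, [rbx + rcx*4 + 16]" corresponds to this C sketch (register names used purely for illustration):

    #include <stdint.h>

    /* "mov eax, [rbx + rcx*4 + 16]": one shift-add-add, one load. */
    int32_t load(const char *rbx, int64_t rcx) {
        return *(const int32_t *)(rbx + rcx * 4 + 16);
    }
    /* Note there is no memory access inside the address computation
       itself -- no indirect addressing, unlike e.g. the PDP-11 or VAX. */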

If you are looking at x86 as representative of CISC and ARM as representative of RISC and saying "there is hardly any difference" then 1) you are correct, and 2) you are looking at neither true CISC nor true RISC -- both of which do actually exist.

[1] with one exception on some cores: cmov


In all but the simplest designs, you’re going to add LOTS of additional information specific to your uarch. For example, you need more bits internally for register renaming. Likewise, you need bits for hazards, synchronization, ordering, etc. Parts of the instruction will likely be dropped (e.g., the instruction length bits).
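
Concretely, an internal uop might look something like this. Field names and widths are made up for illustration; every real uarch differs:

    #include <stdint.h>

    /* Hypothetical internal uop layout for a small OoO core. */
    typedef struct {
        uint16_t op;         /* internal opcode, not the ISA encoding  */
        uint8_t  prd;        /* renamed dest: 5 arch bits -> 7-8 bits  */
        uint8_t  prs1, prs2; /* renamed sources (physical registers)   */
        uint16_t rob_idx;    /* reorder-buffer slot: ordering          */
        uint8_t  lsq_idx;    /* load/store queue slot: memory hazards  */
        uint8_t  flags;      /* fences, serialization, exception bits  */
        uint32_t imm;        /* decoded, sign-extended immediate       */
    } uop;                   /* the RVC length bits are already gone   */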

It’s more accurate to say that there will be a 1-to-1 relationship between RISC-V instructions and uops, but there’s a push to perform fusion for some instructions, so even that may not actually be true either.


1-to-1 isn’t entirely true right now. SonicBOOM includes a feature called short-forward branch optimization, which can turn certain branches into flag-setting ops (predicating the instructions in the former branch shadow). So one instruction always produces one uop, but not necessarily always the same one.


SiFive's U74 (found in the VisionFive 2, Star64, PineTab-V, Milk-V Mars) does this. The conditional branch travels down pipe A, the following instruction down pipe B. If the conditional branch turns out to be taken then the instruction in pipe B is NOP'd rather than taking a branch mispredict.
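
The pattern being recognized is a conditional branch over a single instruction. In C terms (my example, not from SiFive's docs):

    /* RISC-V code a compiler emits for "if (a0) a1++;":
     *
     *     beq  a0, zero, 1f    # goes down pipe A
     *     addi a1, a1, 1       # goes down pipe B, in the branch shadow
     * 1:
     *
     * If the beq turns out taken, the addi already in pipe B is
     * squashed to a NOP instead of triggering a mispredict flush. */
    long skip_one(long a0, long a1) {
        if (a0 != 0)
            a1 += 1;
        return a1;
    }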


The vast, vast majority of CPUs shipped in the world do not have register renaming. But even with it, as you say, that's just expanding a 5-bit architectural register field to a 6 to 8 bit physical register field.

There are zero currently shipping commercially-produced RISC-V CPUs that do instruction fusion -- and we know more than 10 billion cores had shipped as of this time last year.

The three or four companies currently designing RISC-V CPUs intended to be competitive with current or near-current x86 and Apple are of course making 8-wide (or so) OoO cores, and they say they are implementing some instruction fusion.
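
For a flavour of what that fusion looks like: the classic candidate pair is lui+addi forming a 32-bit immediate load. A decoder's fusion check might look roughly like this (a sketch; which pairs any given core actually fuses is microarchitecture-specific and mostly not public):

    #include <stdbool.h>
    #include <stdint.h>

    /* RV32/RV64 instruction field extractors. */
    static uint32_t opc(uint32_t i)    { return i & 0x7f; }
    static uint32_t rd(uint32_t i)     { return (i >> 7) & 0x1f; }
    static uint32_t funct3(uint32_t i) { return (i >> 12) & 0x7; }
    static uint32_t rs1(uint32_t i)    { return (i >> 15) & 0x1f; }

    /* True if "lui rd, hi; addi rd, rd, lo" may be emitted as a single
       "load 32-bit immediate" uop instead of two dependent ones. */
    bool fuse_lui_addi(uint32_t first, uint32_t second) {
        return opc(first) == 0x37 &&              /* LUI            */
               opc(second) == 0x13 &&             /* OP-IMM         */
               funct3(second) == 0 &&             /* ADDI           */
               rd(first) == rd(second) &&
               rs1(second) == rd(first);          /* same reg chain */
    }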

Those will be available in two or three years perhaps, and will get a lot of publicity, but they will be a tiny minority of RISC-V cores shipped, just as Cortex-A and Cortex-X are a tiny minority of Arm's.


My point is that this is a uarch decision rather than something fundamental to RISC-V itself. If you want high performance, the 1-to-1 instruction-to-uop idea doesn’t hold.

This is like arguing that RISC-V doesn’t need advanced branch predictors because MCUs don’t need them.


> "CISC CPUs and RISC CPUs both just decode instructions into microcode and then execute that" is probably a more accurate but less "memeable" expression of that idea.

That’s not very accurate either. Most instructions are not implemented in microcode. Only rare or complex instructions are.


If I recall correctly, when the Spectre/Meltdown mitigations were released, they were applied (to Intel CPUs at the very least) as microcode updates.

What exactly comprises the Intel CPU microcode is somewhat of a mystery, but I also remember somebody tearing apart the blob with the Spectre or Meltdown mitigation and posting their findings (guesswork, in fact) of what was in there. The microcode was very low level and not very comprehensible to me.


Huh? My understanding is that the CPU's back-end only executes micro-ops, and the front-end translates ISA instructions to micro-ops. Most instructions get translated into one micro-op, but there's still that translation. Is that wrong?


Microcode ≠ micro-ops



