Hi, I'm a Ph.D. student at UC Berkeley on the RISC-V team. Happy to answer any questions about the RISC-V ISA or Rocket, our open-source reference implementation.
Since there's always confusion about it, I'll start off by clarifying the difference between the two. RISC-V is an open-source ISA standard. Rocket is an implementation of this ISA which also happens to be open source. We do not intend for the ISA to be tied to a single reference implementation. The intention for RISC-V is to enable many different implementations (whether open-source or proprietary). These different implementations can all run an open-source software ecosystem, which currently consists of a GNU toolchain, LLVM, and Linux kernel port.
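If you want to try that ecosystem out, here's roughly what it looks like today. A minimal sketch in C, assuming a riscv-gnu-toolchain build with the usual riscv64-unknown-elf prefix (the exact prefix and flags depend on how your toolchain was configured):

    /* hello.c -- a minimal sanity check for the RISC-V cross toolchain.
     *
     * Build (bare-metal newlib target; the prefix is an assumption and
     * depends on how riscv-gnu-toolchain was configured):
     *   riscv64-unknown-elf-gcc -O2 -o hello hello.c
     * Run under the Spike ISA simulator with the proxy kernel:
     *   spike pk hello
     */
    #include <stdio.h>

    int main(void) {
        printf("hello, RISC-V\n");
        return 0;
    }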
When can we expect to see the license terms? The article claims that "The terms will specify a zero-royalty RAND license as well as verification suites licensees must run to use the RISC-V logo", but it appears that free (libre) implementations might not be permissible.
RISC-V is a standard. Anybody can implement a CPU to the standard and then do whatever they want with that CPU. There are plenty of open-source RISC-V cores already. Copyleft vs. permissive licensing is only an issue regarding how much of the extra "libraries"/layouts/foundry tooling has to be made available too.
Of course, somebody is also FREE to implement something that's "totally not RISC-V", and call it whatever they want. They can change one tiny thing or even nothing.
The major win here is:
a) if somebody WANTS to call their thing "RISC-V compatible", it must implement the same core ISA as everybody else, and
b) all of these companies are pooling their resources regarding lawsuits or patent fights against the core ISA.
Berkeley has done 11+ tape-outs of RISC-V chips. lowRISC is looking to do a silicon run in a year or so, and one company has already publicly stated that it's been shipping RISC-V cores in its cameras.
We have taped out several chips for our own research. But as a university research lab, we do not have any plans for a commercial manufacturing run. The lowRISC team can say more about the roadmap to a commercial dev board.
Great! I heard somewhere here on HN that a modern x86 decoder is smaller than a modern ARM decoder. Do you know if this is true? Also, how big is RISC-V's decoder, and does its size even matter?
> a modern x86 decoder is smaller than a modern arm decoder
That's because the ARM ISA is not small either, by any stretch of the imagination. The instruction listing of the base RISC-V ISA and the standard extensions, on the other hand, can fit on a single PowerPoint slide.
I wasn't involved in any of the recent tape-outs, so I can't say exactly how big the decoder is. But it's quite small relative to the other chip components. Currently, the integer pipeline of the chip is roughly the same size as the FPU, and these two together are roughly the same size as the L1 cache. All of those components together are smaller than the L2 cache (depends on the size of the L2 cache, though). So decoder size doesn't really matter in the grand scheme of things.
Decoder speed probably does matter, though. Currently, we can decode an instruction in a single cycle (1 ns). The x86 decoder, on the other hand, can take multiple cycles depending on the instruction. But maybe this isn't a fair comparison, since x86 instructions are decomposed into uops.
I have no idea about the performance of ARM decoders.
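To give a feel for why the decoder stays small and fast: every base RISC-V instruction is 32 bits, and the opcode and register fields sit at the same bit positions in every instruction format, so the hardware is little more than wires plus a small opcode table. A sketch in C of the field extraction (bit positions are from the user-level spec; the function names are mine):

    #include <stdint.h>
    #include <stdio.h>

    /* Field positions shared by every 32-bit RISC-V instruction format.
     * Because the positions are fixed, there's no length-finding and
     * no prefix parsing -- just bit slicing. */
    static inline uint32_t opcode(uint32_t i) { return i & 0x7f; }         /* bits [6:0]   */
    static inline uint32_t rd(uint32_t i)     { return (i >> 7) & 0x1f; }  /* bits [11:7]  */
    static inline uint32_t funct3(uint32_t i) { return (i >> 12) & 0x7; }  /* bits [14:12] */
    static inline uint32_t rs1(uint32_t i)    { return (i >> 15) & 0x1f; } /* bits [19:15] */
    static inline uint32_t rs2(uint32_t i)    { return (i >> 20) & 0x1f; } /* bits [24:20] */
    static inline uint32_t funct7(uint32_t i) { return (i >> 25) & 0x7f; } /* bits [31:25] */

    int main(void) {
        uint32_t inst = 0x003100b3; /* encodes: add x1, x2, x3 */
        printf("opcode=%02x rd=%u rs1=%u rs2=%u funct3=%u funct7=%u\n",
               opcode(inst), rd(inst), rs1(inst), rs2(inst),
               funct3(inst), funct7(inst));
        /* prints: opcode=33 rd=1 rs1=2 rs2=3 funct3=0 funct7=0 */
        return 0;
    }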
How can you be superscalar with a decoder that only does 1 op/cycle? Intel does 6:
> From the original Core 2 through Haswell/Broadwell, Intel has used a four-wide front-end for fetching instructions. Skylake is the first change to this aspect in roughly a decade, with the ability to now fetch up to six micro-ops per cycle. Intel doesn’t indicate how many execution units are available in Skylake’s back-end, but we know everything from Core 2 through Sandy Bridge had six execution units while Haswell has eight execution ports. We can assume Skylake is now more than eight, and likely the ability to dispatch more micro-ops as well, but Intel didn’t provide any specifics.
Is that for a decoder that can decode multiple instructions per clock cycle? It would be somewhat interesting for a single-instruction decoder, but it would be quite remarkable for a decoder of greater width, since x86 instructions aren't even self-synchronizing (you can read the same sequence of bytes in different valid ways depending on where you start), while ARM is fixed-width.
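For anyone who hasn't seen the self-synchronization problem up close, here's a small illustration (worth double-checking in a disassembler, but I believe the decodings are right): the same x86 byte string is valid starting at two different offsets, with completely different meanings.

    /* Five bytes of x86 machine code, decodable two ways:
     *
     * starting at offset 0:
     *   b8 01 c3 90 90    mov eax, 0x9090c301   ; one 5-byte instruction
     *
     * starting at offset 1:
     *   01 c3             add ebx, eax
     *   90                nop
     *   90                nop                   ; three instructions
     *
     * A wide x86 decoder has to resolve boundaries like this every
     * cycle; a fixed-width ISA knows every boundary in advance. */
    static const unsigned char code[] = { 0xb8, 0x01, 0xc3, 0x90, 0x90 };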
If anyone has a RISC-V processor that will run Solaris, let alone a port of Solaris, they haven't announced it.
It really would be interesting to know why Oracle got involved, though, since their involvement is not exactly a universal sign of impending success for an open source project.
to be a non-cynic, oracle is going to want to make sure their stack runs as efficiently as possible on any platform that google and hp are backing. getting involved at the ground level allows them to provide input and to shape the architecture as much as possible.
i call people in that position "frenemies". oftentimes in the large enterprise space, even your largest rivals have something you want/need, whether you like it or not.
sometimes it is writing software that interoperates (BI and the like), other times it is direct need (java in the case of google and oracle, and i'm sure at least some backend systems).
We were speculating as to why Oracle would be interested in getting in bed with these guys on this architecture, specifically. Google is if anything an opponent of Oracle, so I assume Oracle's reasons for getting involved are somewhat different from what you've outlined.
It makes sense for Oracle to keep a close eye on any new architecture that seems like it may get some traction. If it makes it into the server space, it'll affect them.
> Currently RISC-V runs Linux and NetBSD, but not Android, Windows or any major embedded RTOSes. Support for other operating systems is expected in 2016.
No mention of Solaris, so I assume that would be forthcoming?
I'd add that there are already more implementations than I can count. Rocket isn't the only ASIC game in town, and there are countless soft cores (FPGA implementations).
Um, it depends on what exactly you mean by "reference implementation" or "running". If you mean one of the silicon test chips, that might be hard to swing. I don't have them at my desk and they take a lot of setup to actually use (a setup process which I am unfamiliar with). I'd have to ask the grad student who worked on the bring-up to help me. But if you'd be satisfied with seeing the reference RTL run on an FPGA, that would certainly be possible.
Or just run it yourself. I brought up Rocket on a Zedboard following the instructions. Rocket isn't ideal if your ultimate target is an FPGA, but IIRC it's currently the only one that includes the virtual memory support needed to boot Linux.
BOOM also has virtual memory support. But yes, if you have a Zedboard, you can run the reference RTL yourself. Of course, those things are pretty costly, so I understand if you'd like to see someone else do it.