
It would be more accurate to say that there haven't been any RISC-V designs for Qualcomm's market segment yet.

As far as I am aware, there is nothing about the RISC-V architecture which inherently prevents it from ever being competitive with ARM. The people designing their own cores just haven't bothered to do so yet.

RISC-V isn't competitive in 2024, but that doesn't mean it won't be competitive in 2030 or 2035. If you were starting a project today at a company like Amazon or Google to develop a fully custom core, would you really stick with ARM - knowing what they tried to do with Qualcomm?



> RISC-V isn't competitive in 2024, but that doesn't mean it won't be competitive in 2030 or 2035.

We can't know, and won't until 2030 or 2035. Humans are just not very good at projecting the future (if the predictions of the 1950s-60s had been correct, I would be typing this up from my cozy cosmic dwelling on a Jovian or Saturnian moon, after all).

History has numerous examples of better ISA and CPU designs losing out to a combination of mysteries and other compounding factors usually attributed to «market forces» (whatever that means to whomever). The 1980s-90s were the heyday of some of the most brilliant ISA designs, and nearly everyone was confident that design X or Y would become dominant, or the next best thing, or anywhere in between. Yet we were left with an x86 monopoly for several decades, which has only recently turned into a duopoly because of the arrival of ARM into the mainstream, and through a completely unexpected vector: the advent of smartphones. It was not the turn that anyone expected.

And since innovations tend to be product oriented, it is not possible to even design, let alone build, a product with something that does not exist yet. Breaking new ground in CPU design requires the involvement of a large number of driving and very un-mysterious (so to speak) forces, and exorbitant investment (from the product design and manufacturing perspectives) that is available only to the largest incumbents. And even that is not guaranteed, as we have seen with the Itanium architecture.

So unless the incumbents commit and follow through, it is not likely (or at least not obvious) that RISC-V will enter the mainstream; it will rather remain a niche (albeit a viable one). Within the realm of possibility, it can be assessed as a «maybe» at this very moment.


A lot of the arguments I'm seeing ignore the fact that China sees ARM as a potential threat to its economic security and is leaning hard into RISC-V. It's silly to ignore the largest manufacturing base for computing devices when talking about the future of computing devices.

I would bet on China making RISC-V the default solution for entry-level and cost-sensitive commodity devices within the next couple of years. It's already happening in the embedded space.

The row with Qualcomm only validates the rationale for fast-iterating companies to lean into RISC-V if they want to meaningfully own any of their processor IP.

The fact that the best ARM cores aren't actually designed by ARM, yet ARM claims them as its IP, is really enough to understand that migrating to RISC-V is eventually going to be on the table as a way to maximize shareholder value.


But then there is the software ecosystem issue.

Having a competitive CPU is 1% of the job. Then you need to have a competitive SoC (oh, and not infringe IP), so that you can build the software ecosystem, which is the hard bit.


RISC-V is rapidly growing the strongest ecosystem.

The new (but tier-1, like x86-64) Debian port is doing alright[0]. It'll soon pass ppc64 and close in on arm64.

0. https://buildd.debian.org/stats/graph-week-big.png


> But then there is the software ecosystem issue.

We still have problems with software not being optimised for Arm these days, which is just astounding given its prevalence on mobile devices, let alone the market share represented by Apple. Even Golang is still lacking a whole bunch of optimisations that are present for x86, and Google has their own Arm-based chips.

Compilers pull off miracles, but a lot of optimisations are going to take direct experience and dedicated work.


Considering how often ARM processors are used to run an application on top of a framework over an interpreted language inside a VM, all to display what amounts to kilobytes of text and megabytes of images, using hundreds of megabytes of RAM and billions of operations per second, I'm surprised anyone even bothers optimizing anything, anymore.


For all its success, it's still kind of a niche language (and even with the number of Google compiler developers, they're spread thin between V8, Go, Dart, etc.).

I think the keys to RISC-V in terms of software will be:

LLVM (gives us C, C++, Rust, Zig, etc.); this is probably already happening?

JavaScript (V8 support for Android should be the biggest driver, also enabling Node, Deno, etc., but its speed will depend on Google's interest)

JVM (does Oracle have any interest at all? Could be a major roadblock unless Google funds it; again, depends on Android interest)

So Android on RISC-V could really be a game-changer, but Google just backed down a bit recently.

Dotnet (games) and Ruby (and even Python?) would probably be like Go, with custom runtimes/JITs needing custom work but no obvious market share/funding.

It'll remain a niche, but I do really think Android devices (or something else equally popular; a Chinese home PC?) would be the game-changer to push demand over the top.


> Even Golang

Golang's compiler is weak compared to the competition. It's probably not a good demonstration of most ISAs really.


Not an issue, because except for a few Windows or Apple machines, everything ARM is compiled and odds are they have the source. Give our EEs a good RISC-V and a couple of years later we will have our stuff rebuilt for that CPU.


The whole reason the ARM transition worked is that you had millions of developers with MacBooks who, because of Rosetta, were able to seamlessly run both x86 and ARM code at the same time.

This meant that you had (a) strong demand for ARM apps/libraries, (b) a large pool of testers, (c) developers able to port their code without needing additional hardware, and (d) developers able to seamlessly test their x86/ARM code side by side.

RISC-V will have none of this.


Apple is the only company that has managed a single CPU transition successfully. That they actually did it three times is incredible.

I think people are blind to the amount of pre-emptive work a transition like that requires. Sure, Linux and FreeBSD support a bunch of architectures, but are they really all free of architecture-specific bugs? You can't convince me that choosing an esoteric, lightly used arch like big-endian PowerPC won't come with bugs related to that which you'll have to deal with. And then you need to figure out who's responsible for the code, and whether or not they have the hardware to test it on.
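
To make that concrete, a minimal sketch of the classic shape of such a bug (a hypothetical parser of my own, not from any real project):

    #include <stdint.h>
    #include <string.h>

    /* Parse a little-endian u32 from a file/network buffer. */

    /* Buggy: reinterprets the bytes in host byte order, so it
       "works" on little-endian x86/aarch64 and silently returns
       byte-swapped values on big-endian PowerPC. */
    uint32_t parse_u32_buggy(const uint8_t *buf) {
        uint32_t v;
        memcpy(&v, buf, sizeof v);
        return v;
    }

    /* Portable: assembles the bytes explicitly, independent of
       the host's endianness. */
    uint32_t parse_u32_le(const uint8_t *buf) {
        return (uint32_t)buf[0]
             | (uint32_t)buf[1] << 8
             | (uint32_t)buf[2] << 16
             | (uint32_t)buf[3] << 24;
    }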

It happened to me: a small project I put on my ARM-based AWS server, and it was not working even though it was compiled for the architecture.


Apple’s case is really good indeed.

Having a clear software stack that you control plays a key role in this success, right?

Wanting to have the general solution, with millions of random off-label hardware combinations to support, is the challenge.


Embedded is far larger than PCs and doesn't need that; phones too are larger, and there you already recompile as needed.


> […] and odds are […]

When it comes to the adoption of a new ISA, there are no odds even if the sources exist; it is the scale and the QA that either are there or are not.

The arrival of the first wave of Apple Silicon in 2020 led to a very hectic 2021 and beyond, with people rushing in to fix numerous issues, mostly (but not only) in Linux for aarch64, ranging from bugs to unoptimised code. OpenJDK, which had existed for aarch64 for some time, was so unstable that it could not be seriously used natively on aarch64, and it took nearly a year to stabilise it. Hand-optimising OpenSSL, Docker, Linux/aarch64 and many, many other packages also took time.

It only became possible because of the mass availability of the hardware (performant consumer-level arm64 CPUs), which led to mass adoption of the hardware architecture at the software level. aarch64 has now become a first-class citizen, and Linux as well as other big players (e.g. cloud providers) have vastly benefited from it as a whole. It is far from certain that, without the Apple Silicon catalyst, we would have seen Graviton 4 in 2024 (the 4th generation in just 5 years), large multi-core Ampere CPUs in 2023/24, or even a performant Qualcomm laptop CPU this year.

Mass hardware availability to lay people, leading to mass adoption by lay people, is critical to the success of a new hardware platform: all of a sudden a very large pool of free QA becomes available, which spurs further interest in improving the software support. Compare it, for instance, with the POWER platform, which is open and whose hardware has been available for quite a while; however, there has been no scale. The end result is that JIT still yields poor performance in Firefox/ppc64. Embedded people and hardware enthusiasts are not the critical mass required to trigger the chain reaction that leads to platform success; it is the lay people incessantly whining about something not working and reporting bugs.

Then there is also a reason why OpenBSD still holds on to a zoo of ancient, no-longer-available platforms (including the Motorola 88k): they routinely compile the newly written code (however many moons it takes them) and run it on the exotic hardware today, with the single narrow purpose of trapping bugs, subtle and less subtle, caused by architectural differences across platforms. Such an approach stands in stark contrast to the mass-availability one; it does not scale as much, but it is a viable approach too. And this is why the OpenBSD source code has a much better chance of running flawlessly on a new ISA.

Hence, hardware platform adoption is not as simple an affair as some enthusiastically try to portray it to be.


Embedded has been doing just that for their platforms for ages. They don't care about most of the things you list, though.

Not that your point is wrong, but for most uses it doesn't matter. It would be better if they had it, but they don't need it.


> they don't care […]

Precisely. Embedded cares only about one thing: «get the product off the ground and ship it fast, bugs included». And since embedded software is not user-facing, they can get away with «power cycle the device if it stops responding» recommendations in the user guide.

Embedded also sees the CPU as a disposable commodity and not a long-term asset, and it has a well-entrenched habit of throwing the entire code base away if an alternative CPU/ISA (cheaper, more power efficient, etc.; you name it) comes along. Where is all the code once written for the 68HC11, PIC, AVR, etc.? Nowhere. It has all but been thrown away for varying reasons (architecture switches, architecture obsolescence and such). The same has not happened for Intel, and that code is still around and running.

For more substantial embedded development, the responsibility of adopting a new ISA falls on the vendor of the embedded OS/runtime (e.g. VxWorks) or on the embedded CPU vendor, who makes reasonable efforts to support the hardware features important to customers but does not carry out extensive testing of all features. Again, the focus is on allowing the vendor's customers to ship the product fast. The quality of embedded development toolchains is also not infrequently questionable, and complaints about poor support of the underlying hardware are common. They are typically ignored.

> but for most uses it doesn't matter […]

Which is why embedded is not a useful success metric when it comes to predicting the success of a CPU architecture in user-facing scenarios (namely, personal and server computing).


We've seen compatibility layers between x86 and ARM. Am I correct in thinking that a compatibility layer between RISC-V and ARM would be easier/more performant, since they're both RISC architectures?


There are already compatibility layers for x86 on RISC-V. They’re not quite as good, but progress is being made.

Edit, link: https://box86.org/2024/08/box64-and-risc-v-in-2024/


> As far as I am aware, there is nothing about the RISC-V architecture which inherently prevents it from ever being competitive with ARM

Lack of a reg+shifted-reg addressing mode, and/or of things like BFI/UBFX/TBZ.

The perpetual promise of magic fusion inside the cores has not played out. No core exists, to my knowledge, that fuses more than two instructions at a time. Most of those ARM operations take more than two RISC-V instructions to reproduce; thus no core exists that could fuse them.


The Zba extension (mandatory in the RVA23 profile) provides `sh{1,2,3}add rd, rs1, rs2`, i.e. `rd = (rs1 << {1,2,3}) + rs2`, so a shifted-reg load would only require fusing two instructions: the shNadd and the subsequent load from `rd`.
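
To illustrate with the simplest case, an indexed load, here is roughly what compilers emit today (instruction sequences from memory, so take the exact spelling with a grain of salt):

    /* C: load the i-th element of an array of 64-bit values. */
    long get(const long *a, long i) { return a[i]; }

    /* AArch64: one instruction, thanks to the reg+shifted-reg
       addressing mode:
           ldr  x0, [x0, x1, lsl #3]

       RV64GC (no Zba): three instructions:
           slli a1, a1, 3      # i * 8
           add  a0, a0, a1     # a + i*8
           ld   a0, 0(a0)

       RV64 with Zba: two instructions, i.e. a plausible
       two-instruction fusion pair:
           sh3add a0, a1, a0   # a + (i << 3)
           ld     a0, 0(a0)
    */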


And which cores currently support it? Unless the answer is "all", it will not be used. Feature detection works for well-isolated high-performance kernels using things like AVX; no one's going to do feature detection for load/store instructions. Which means that all your binaries will be compiled for the lowest common denominator.
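
(For what it's worth, runtime detection on Linux/RISC-V would look roughly like the sketch below, via the riscv_hwprobe syscall; I'm assuming a 6.4+ kernel for the syscall itself and 6.5+ for the Zba bit. That nobody will wrap ordinary address arithmetic in this is rather the point.)

    /* Sketch: probe for Zba at runtime on Linux/RISC-V. */
    #include <asm/hwprobe.h>   /* struct riscv_hwprobe, RISCV_HWPROBE_* */
    #include <sys/syscall.h>
    #include <unistd.h>
    #include <stdio.h>

    int main(void) {
        struct riscv_hwprobe pair = { .key = RISCV_HWPROBE_KEY_IM_EXT_0 };
        /* pair_count=1; cpusetsize=0, cpus=NULL => query all CPUs */
        if (syscall(__NR_riscv_hwprobe, &pair, 1, 0, NULL, 0) != 0
                || pair.key == -1)
            return 1;  /* old kernel: assume baseline RV64GC */
        printf("Zba: %s\n",
               (pair.value & RISCV_HWPROBE_EXT_ZBA) ? "yes" : "no");
        return 0;
    }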


As I said, it's mandatory in the RVA23 profile. In fact, it has been mandatory since RVA22. A bunch of cores appear to support RVA22.

Whether prebuilt distribution binaries support it or not, I can't tell. A quick glance at the Debian and Fedora wiki pages doesn't reveal what profile they target, and I CBA to boot an image in qemu to check. In the worst case they target only RV64GC, so they won't have Zba. Source distributions like Gentoo would not have a problem.

In any case, talking about the current level of extension support is moving the goalposts. You countered "there is nothing about the RISC-V architecture which inherently prevents it from ever being competitive with ARM" with "Lack of reg+shifted reg addressing mode", which is an argument about ISA, not implementation.


Ubuntu announced that they want to support RVA23 in their next LTS, 25.04 IIRC. That doesn't really make sense unless we get new hardware with RVA23 support by then.


Isn't their next LTS 26.04?


Yeah you are right, I looked it up again:

25.10 -> RVA23

26.04 (LTS) -> RVA23

from: https://www.youtube.com/watch?v=oBmNRr1fdak


The reality is everyone is targeting RV64GC, until we get cheap and widely available RVA23 boards. At some point after that, everyone will switch.


The Zba extension is in both the RVA22 and RVA23 profiles, meaning application cores (targeting consumer devices like smartphones) designed in the past few years almost certainly have the shXadd instructions, in order to be compatible with the mainstream software ecosystem.


> meaning application cores (targeting consumer devices like smartphones

Where are they?


You are aware that hardware takes time to build, tape out and productise?

On the open-source front, I can right now download an RVA23-supporting RISC-V implementation, simulate the RTL, and have it outperform my current Zen 1 desktop per cycle in scalar code: https://news.ycombinator.com/item?id=41331786 (see the XiangShanV3 numbers)


RISC-V has existed for over a decade, and in that time no one has come close to building a competitive non-microcontroller-level CPU with it.

How long is this supposed to take?

How long until it is accepted that RISC-V looks like a semiconductor nerd snipe of epic proportions, designed to divert energy away from anything that might actually work? If it was not designed for this, it is definitely what it has achieved.


The name RVA23 might give you a hint as to around which time the extensions required for high-performance implementations to be viable were roughly standardized.

The absolutely essential bitmanip and vector extensions were only ratified at the end of 2021, and the also quite important vector crypto only in 2023.


So it took 10 years to ratify absolutely essential extensions?

Somehow I suspect in 10 years there will be a new set of extensions promising to solve all your woes.

Seriously, the way to do this is for someone to just go off and do what it takes to build a good CPU and for the ISA it uses to become the standard. Trying to do this the other way around is asking for trouble, unless sitting in committees for twenty years is actually the whole idea.


> So it took 10 years to ratify absolutely essential extensions?

Essential for what? RISC-V was not created just for high-performance application cores.

For the first couple of years, RISC-V was mostly for university research work.

Turning it into a fully capable alternative only started later, and yes, that takes a number of years.

> Somehow I suspect in 10 years there will be a new set of extensions promising to solve all your woes.

It's about delivering the same as Intel/ARM, and they have that now. Yes, in 10 years more extensions will exist; this is true for RISC-V and ARM and x86 alike.

> Seriously, the way to do this is for someone to just go off and do what it takes to build a good CPU and for the ISA

No, it doesn't happen that way, because a company that did that wouldn't open-source its ISA design. Or at least not historically.

So a different path was taken, to create an open standard, and it worked out pretty well, even if it doesn't do what your imagination wants.


10 years from where?

The base specifications of RISC-V were only ratified in 2019.


The first RISC-V chip was taped out in 2011.

Source: https://riscv.org/about/#history


And the RISC paper[0] was published in 1980.

Both are milestones that paved the way to RISC-V's base spec ratification in 2019.

Incidentally, the Earth is estimated to be 4.54 billion years old.

0. https://dl.acm.org/doi/10.1145/641914.641917


So the ratification of a spec that will enable high-performance implementations would come another 4.54 billion years after 2019?

It's actually worse than that, because until it's done there's no guarantee it will ever get there.


Everything needed for high performance was ratified in 2021, just two years after the base spec.

That's RVA22 and Vector 1.0.

Low-end hardware implementing the spec already exists (Milk-V Jupiter). High end implementations (e.g. Ventana Veyron V2, designed for servers) will be deployed in 2025.


> Everything needed for high performance was ratified in 2021

Assuming you are right, getting on for four years after even that, we would by now have a high-performance implementation of it to look at, one which proves that it really is everything needed…

Except we don't.

Until we do, all you have is a dream.


High-performance IP was announced, by multiple vendors, as available for licensing not long (days, weeks) after these ratifications.

Typically (and quite consistently) there are 3 years from that point to products you can buy cheaply sitting on shelves.

Thus, expect some fun in 2025.


> High performance IP was announced,

To reiterate: you have no basis for saying it is high-performance, because it has not been shown to be so.

When a track record has been established, _then_ claims of this kind can be made, but not before.

I have worked with chip makers at both ends (I was doing games which they used for testing, but they also tried to get our input on what their next designs should do) and honestly learned that these people are absolute dreamers. This was not helped by the nearly constant regularity with which we found performance-destroying design flaws. Some prototypes were aborted as a result (dramatic overheating is quite a problem), but most products launched only to go nowhere, since they were now unspectacular.

The dreaming is necessary to enable them to get up and try again, but it means you should not believe a word they say until it works.


Realistically, there's no way that every company with high-performance microarchitectures available for licensing is lying, and that the clients they have already licensed the IP to are covering for them.

It is also unreasonable to think that each and every industry veteran who has designed successful, competitive high performance microarchitectures in the past and is now working on RISC-V IP is for some reason suddenly unable to deliver.

Waiting will give you confirmation, in the form of seeing your competitors' successful products in the market, but at the cost of missing your own chance.

¯\_(ツ)_/¯


RVA23 will become the only thing anyone targets


In the year of the Linux desktop, with Wayland and IPv6 for all. /s


I think you're missing a point here. The fact that this was not part of the initial design speaks volumes, since it is entirely obvious to anybody who has ever designed an ISA or looked at the assembly of modern binaries. Look at aarch64 for an example of an ISA designed for actual use.


> would you really stick with ARM - knowing what they tried to do with Qualcomm?

Businesses are actually watching how the ALA (architecture licence agreement) holds up in court.



