Do these languages get used in industry, outside of academia? There are so many HDLs, and I am wondering if there is any benefit to learning any of them, aside from possible fun.
At my day job we do a fair bit of FPGA design in SpinalHDL, and we have taped out several ASICs with parts of the design done in SpinalHDL.
In general: No, alternative HDLs don't see a lot of use, and I'd argue that we qualify as 'academia' since the ASICs are NIH funded and we tend to work with a lot of academic partners and on low-quantity R&D projects.
Having said that, every time we've deployed SpinalHDL for a commercial client they've been blown away by the results. The standard library, developer ergonomics, test capabilities, and little things like having clock domains as a part of the type system make development so much faster and less error-prone that the NRE for doing it in Verilog just doesn't make sense.
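To give a rough idea of what "clock domains in the type system" means in practice, here's a toy SpinalHDL sketch (names and structure are made up, not from a real design): registers elaborated inside a ClockingArea belong to that area's ClockDomain, and an unsynchronized crossing gets reported at elaboration time instead of being discovered in the lab.

    import spinal.core._
    import spinal.lib._

    // Toy example with two clock domains. The toggling flag is produced by a
    // register in the component's default domain and consumed in slowDomain.
    // Reading it there through BufferCC is fine; registering it directly
    // (e.g. RegNext(flag) inside slowArea) would be reported as a
    // clock-crossing violation when the design is elaborated.
    class TwoDomainExample extends Component {
      val io = new Bundle {
        val clkSlow  = in Bool()
        val rstSlow  = in Bool()
        val flagSync = out Bool()
      }

      // A flag register in the default (fast) clock domain
      val flag = Reg(Bool()) init(False)
      flag := !flag

      // An explicit second clock domain built from the slow clock/reset pair
      val slowDomain = ClockDomain(clock = io.clkSlow, reset = io.rstSlow)

      // Everything elaborated inside this area is clocked by slowDomain
      val slowArea = new ClockingArea(slowDomain) {
        val synced = BufferCC(flag)
      }

      io.flagSync := slowArea.synced
    }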
You get access to the entire Java and Scala ecosystem at elaboration and test time. We use ScalaCheck in our test harnesses to automatically generate test cases and shrink failing inputs down to minimal examples that expose edge cases. It's incredibly powerful.
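For a sense of the ScalaCheck side, here's a self-contained toy (a pure Scala function stands in for the simulated DUT that a real harness would drive): when a property fails, ScalaCheck shrinks the inputs toward a minimal counterexample, which is what pulls out the edge cases.

    import org.scalacheck.{Gen, Prop, Properties}

    // Toy property-based test. In a real harness the model below would be
    // checked against transactions driven into the simulated design; a pure
    // function stands in here so the example runs on its own.
    object SatAdd8Spec extends Properties("SatAdd8") {
      // Reference model: 8-bit saturating adder
      def satAdd(a: Int, b: Int): Int = math.min(a + b, 255)

      val byte: Gen[Int] = Gen.chooseNum(0, 255)

      property("result fits in 8 bits") = Prop.forAll(byte, byte) { (a, b) =>
        val r = satAdd(a, b)
        r >= 0 && r <= 255
      }

      property("saturates at the maximum") = Prop.forAll(byte) { a =>
        satAdd(a, 255) == 255
      }
    }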
In hardware everything boils down to volume and NRE.
If the design is low volume, then minimizing NRE, which is mostly set by engineering hours, makes sense. At low volume the semiconductor unit cost is mostly irrelevant, so you can use things like SpinalHDL to keep engineering hours (and therefore NRE) down, and eat the higher unit cost that comes from toolchain inefficiencies.
At high volume, NRE is mostly irrelevant and unit cost is everything. So even if a tool or language is hard and annoying to use, if it gives a lower unit cost, you use it. Here you see things like engineers hand-tuning the layout of a single MUX to eke out a bit more of something good in the PPA space.
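To put toy numbers on that trade-off (every figure below is invented, just to show the shape of it): a cheap-NRE flow wins easily at low volume, while a small per-unit saving swamps even a large NRE difference at high volume.

    // Toy break-even sketch, all numbers invented: totalCost = NRE + unitCost * volume
    object BreakEven extends App {
      def totalCost(nre: Double, unitCost: Double, volume: Long): Double =
        nre + unitCost * volume

      for (volume <- Seq(10000L, 10000000L)) {
        // "fast" flow: low engineering hours, slightly worse unit cost
        val fast  = totalCost(nre = 0.5e6, unitCost = 5.50, volume = volume)
        // "tuned" flow: heavy hand optimization, better unit cost
        val tuned = totalCost(nre = 3.0e6, unitCost = 5.00, volume = volume)
        println(s"volume $volume: fast flow $$${fast.round}, tuned flow $$${tuned.round}")
      }
    }
    // volume 10000:    fast ~ $555k,  tuned ~ $3.05M -> minimize NRE
    // volume 10000000: fast ~ $55.5M, tuned ~ $53M   -> minimize unit cost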
I only have experience with high-volume HW, and there something like Chisel or SpinalHDL wouldn't be considered: it just adds complexity to the flow and makes it hard to do the optimizations that high volume enables us to consider, for a potential benefit we're not interested in.
Basically no. Almost everybody uses SystemVerilog. The main issue is that all the simulators only support SystemVerilog, so every other HDL has to compile to SV, and they often output truly awful code that is a nightmare to debug.
Also, SV has an absolutely enormous feature set, and alternative HDLs often miss out on important parts like verification support, coverage, formal verification, etc.
Getting away from SV is like getting away from JavaScript. The network effects are insane.
There was an attempt to make a kind of IR for RTL that would break the tie with SV (kind of like WASM did for JS)... I can't remember the name (LL..something?), but it seems to have died.
They're overall more prevalent in the FPGA world, I think. I've used them and done several jobs with them (Clash/Haskell, Bluespec, etc.), and know others who have, too. But you basically need to know someone or do it yourself. Pretty marginal overall, but IME the results have basically been good (and more fun to write, too).
At LumiGuide we use Clash for FPGA stuff. It's not perfect, but we are very, very happy we didn't go the Verilog route. What a horrible experience that is.
I'm pretty sure they're going to say they don't regret it at all. Either because it's true, or because they are too invested in it.
When I started doing FPGA consulting a few years ago, I began using Chisel, but eventually had to go back to SystemVerilog due to client reluctance.
I was dramatically more productive with Chisel than with SystemVerilog.
> I'm pretty sure they're going to say they don't regret it at all.
I didn't say that as a supposition - I know that they regret it. The Chisel compiler has been an enormous (enormous) technical debt/burden for them because of how slow and resource-intensive it is.
It's not as if the other EDA tools are particularly fast or light on resources either. For smaller design firms, I would think things like FireSim [1] would be a significant advantage.
I can imagine it is a disadvantage in other ways, e.g. it only supports single-phase, positive-edge synchronous design, which could be an impediment to high-performance digital design.
But I wouldn't imagine that Scala performance is particularly significant.
This is off topic, but I recognize your username from a thread a couple of weeks ago, and your account is relatively new. Out of curiosity, did you just find Hacker News and decide to make an account, or is this a new alias and you have an older account? I guess I'd be surprised if there are still new people joining lol.
I have no idea who they are, but I think you'd find there are lots of "old-timers" (even notable ones) who've never had HN accounts. Any of them could decide to join at any moment.
In the ASIC space, sure, I don't think any of these tools scale in the way that most ASIC companies have forced their "traditional" HDL toolchains to scale.
In the FPGA space (accelerators, RF/SDR, trading), hard disagree. There's plenty of boutique FPGA work going on in those areas.