I’ve started using SpinalHDL: https://github.com/SpinalHDL/SpinalHDL. It’s a Scala DSL that spits out Verilog or VHDL for traditional synthesis tools, but unlike Chisel or MyHDL, in my opinion it’s a great experience. It now has seamless integration with Verilator for simulation, and the open-source Verilator project is very capable; they claim it beats commercial simulators: https://www.veripool.org/wiki/verilator. And since Scala is quite a bit faster than Python, the simulations run much faster than something like Cocotb, too!
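For anyone who hasn’t seen it, here’s a rough sketch of what the workflow looks like: a made-up counter component, plus a SpinalSim testbench that runs on Verilator under the hood. Component and signal names here are hypothetical, not from any real project.

    import spinal.core._
    import spinal.core.sim._

    // A trivial counter component (hypothetical example)
    class Counter(width: Int) extends Component {
      val io = new Bundle {
        val enable = in Bool()
        val value  = out UInt(width bits)
      }
      val count = Reg(UInt(width bits)) init(0)
      when(io.enable) { count := count + 1 }
      io.value := count
    }

    object CounterSim extends App {
      // SpinalVerilog(new Counter(8)) would emit plain Verilog for synthesis;
      // SimConfig compiles the same design with Verilator for simulation.
      SimConfig.withWave.compile(new Counter(8)).doSim { dut =>
        dut.clockDomain.forkStimulus(period = 10)
        dut.io.enable #= true
        dut.clockDomain.waitSampling(100)
        println(s"Counter value after 100 cycles: ${dut.io.value.toInt}")
      }
    }

The .withWave part dumps a waveform you can open in a viewer like GTKWave.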
I’ve been using Python to generate OpenSCAD scripts, and then a half-finished Python binding for OpenSCAD. This framework uses natural parametric CAD modeling primitives, like the workplane; I never got used to the OpenSCAD way.
I'd love to know how this works out for you. I was trying to do something similar, but got stuck.
The core issue is that the fluent object design approach makes it very difficult to decompose objects into reusable features. The shared state behind the fluent approach makes creating multiple sub-objects (to then combine them) difficult, maybe impossible. The power of Python gets limited to thinking about only one thing at a time.
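To make the shared-state problem concrete, here’s a hypothetical sketch of the two styles (in Scala to keep it compact, but the shape is the same in Python). In the fluent style every call mutates one shared context, so there is only ever one “current” object; with plain values, features can be built independently and merged afterward. Everything below is invented for illustration, not tied to any real CAD library:

    // Fluent/shared-state style: every call mutates the single current model.
    class FluentModeler {
      private var ops = List.empty[String] // stand-in for real geometry state
      def box(w: Double, h: Double): this.type = { ops :+= s"box($w,$h)"; this }
      def hole(d: Double): this.type = { ops :+= s"hole($d)"; this }
      def result: List[String] = ops
    }

    // Value style: each feature is an independent value, so sub-objects can
    // be built separately (even by different functions) and combined later.
    case class Solid(ops: List[String]) {
      def union(other: Solid): Solid = Solid(ops ++ other.ops)
    }

    object FluentVsValues extends App {
      // Fluent: one chain, one object, one train of thought.
      val fluentPart = new FluentModeler().box(10, 20).hole(2).result

      // Values: two features built independently, then merged.
      def bracket = Solid(List("box(10,20)"))
      def boltHole = Solid(List("hole(2)"))
      val combined = bracket.union(boltHole)

      println(fluentPart)
      println(combined.ops)
    }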
That is called an anechoic chamber [1]. Basically, the walls absorb all the outward radiation from the device so the test instruments do not have to sort out reflected waves. It mimics the device being outside with infinite space in all directions.
Additionally, the instruction set is not virtualized because of compatibility; compatibility is just a nice side effect, so software from even five years ago works just as expected. (Not all software can be recompiled, and could you imagine commercially supporting that many variants? This way Intel shoulders that burden.) And backward compatibility is a tiny fraction of the control unit silicon area. It has zero performance cost. Zero. Intel is scraping for more (not fewer) things to add to silicon to improve performance (video encode/decode, GPU, memory controller, PCI).
The virtualized instruction set enables the performance gains by executing portions of instructions in parallel, re-ordering to avoid pipeline stalls, improving branch prediction, taking execution shortcuts depending on operands, etc., all without compiler support. RISC architectures like ARM do this as well. On a modern Cortex part with multiple execution units, the processor is not the textbook pipelined RISC processor this community seems to yearn for. So if you look at machine code for some reason, yes, the instruction set is simple, but it's still a facade over complexity.
I don't understand the constant cry that processors are complex. To gain performance against the frequency limit, complexity increased. This happens with software all the time.
Also, for those that want direct control over all the elements of the processor, go buy an Itanium... oh wait... no one did.
What kind of artist's work can you practically distribute among multiple people and merge? Most things like this are broken into pieces, and each piece is tracked separately and assembled later.
I think this is a big mistake programmers make when trying to create VCS for other industries. A lot of other fields with binary files don't care about merge as much. Take electronic design automation: you can track every part's history, and even every module on a board or die layout. But practically, it's difficult to have two people productively work on the same small piece simultaneously.
[edit] I also think a lot of the reasons programmers require merge come down to code being organized into files. Organization into files is arbitrary and often doesn't correlate particularly well with logical structure.
It's interesting how the article you linked mentions cheap capital has enabled consolidation. I was always under the impression that cheap capital would spawn a plethora of startups and new differentiation. I guess this system has two stable states...
Also, now more than ever, startups can enter the fabless market since TSMC and other foundries are open to even small customers. I think we may just be seeing a slowdown in new entries because the engineering cost, not just the tool or IP cost, is monstrous for new chips (i.e., innovative features, not just combining IP cores). Now I want to go research which portions of IC design require the most time. My hunches: formal requirements definition and analog design.
If LIDAR could sweep materials with a varying frequency and measure reflection vs. absorption across a band, you could get pretty decent material identification.
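A minimal sketch of what the identification step could look like, assuming you already had reflectance sampled at a handful of frequencies. All materials and numbers below are made up for illustration:

    // Match a measured reflectance spectrum against a reference library
    // using least-squares distance. Library values are invented.
    object SpectralMatch extends App {
      // Reflectance sampled at the same set of laser frequencies per entry
      val library: Map[String, Vector[Double]] = Map(
        "asphalt"  -> Vector(0.10, 0.12, 0.15, 0.14),
        "concrete" -> Vector(0.30, 0.32, 0.35, 0.33),
        "foliage"  -> Vector(0.05, 0.08, 0.45, 0.50)
      )

      def distance(a: Vector[Double], b: Vector[Double]): Double =
        a.zip(b).map { case (x, y) => (x - y) * (x - y) }.sum

      def identify(measured: Vector[Double]): String =
        library.minBy { case (_, ref) => distance(measured, ref) }._1

      val measured = Vector(0.07, 0.09, 0.42, 0.48)
      println(s"Best match: ${identify(measured)}") // foliage
    }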
I have my hobby project in SpinalHDL up at https://craigjb.com
Edit: also, GTKWave is pretty good! It’s a simple and straightforward waveform viewer that works on all platforms.