Unleashing Open-Source Silicon (googleblog.com)
189 points by sfhoover on Nov 25, 2019 | 78 comments



This article would be better represented by the title “Open Source Logic”, since it basically refers to the adoption of FPGAs in the cloud. IMHO, Open Source Silicon refers to the physical design and process information for actual manufacturing of a chip.

That said, I do believe significant improvements are being made to lower the barrier for HW development. One thing that is not often mentioned is that even for experienced hardware developers (like myself), it is very hard and time-consuming to implement the full stack of technologies required to accelerate computation on a PCI Express-connected FPGA board.

I haven’t followed the recent trend of HLS, but I can certainly say that with the traditional Verilog/SystemVerilog approach, only a handful of people could actually make use of those FPGAs.


Good points. But it should be mentioned that, once you have your logic working in an FPGA, the additional step to make it ready for silicon is comparatively small. So I don't think it's too bad to conflate the two.


The ASIC flow is way more involved than the FPGA flow: you need to think about all sorts of other things, like what pads you want on your chip, how you are going to generate clocks, floor planning, power distribution, test, and a whole raft of other issues. Going from RTL to GDSII for even a simple chip would take three months of work as an absolute minimum (and that's just to get it ready to send to the fab). You then have a whole lot of work when it comes back (you need to have boards designed for testing the chip).


While this is true, this work is typically done by a different person than the one designing the RTL, and it takes 1-2 months at most if there are minimal changes to the overall physical design. We are able to iterate on designs at a rate of ~6 months with a team of ~10 people.


No. It's many, many months of work.

The physical design in an fpga solution is pretty much done for you. You've also got a nice package (power, warpage, SI all done). In an ASIC you start this enormous task from scratch. You've also got to implement (and interface with) a huge amount of test, which again, is mostly done for you when you purchase your working FPGA.

Not to mention the fact that there's little margin for error with an ASIC; there's no second chance to try again without spending a load of money. So the bar for test and verification of your circuits is much higher. (I know you said "working" design, but there is no such design that is guaranteed to work when changed).


I do both FPGA and high-performance ASIC design for a living - there is typically a pretty big gap between something designed well for an FPGA and something designed well for an ASIC. You’re probably making use of FPGA-specific hard IP (memories, PLLs, PHYs, etc.) that you’ll need to replace, and the logic will have been implemented in a way that’s tuned to the timing trade-offs on an FPGA. Clicking “recompile” is only the case if your design has terrible performance in an FPGA and you want awful performance in an ASIC too.


That is not true. The synthesis and layout have to be done separately for the ASIC. That is a more time-consuming process than writing the Verilog code.


Also, unless you have a pocket full of cash, going from FPGA to ASIC is typically prohibitively expensive.


> the additional step to make it ready for silicon is comparatively small

Yep, you just change the target platform and re-compile!


"Comparatively small" is still several weeks of work, as far as I know.


There are tools targeting these languages. For instance, Clash lets you write functional code and get Verilog (IIRC).


The problem is that everyone claims to want open silicon, but nobody wants to put together the tools for custom VLSI, which is where an individual can do interesting things.

Getting custom silicon in older processes (>150nm, but even 90nm has come down dramatically and that's a pretty modern process) just isn't that expensive anymore. It's under $50K, and can be under $5K if you catch a "shuttle" that one of the fabs runs.

The VLSI CAD tools, however, are ferociously expensive. Even the old Tanner tools are now being flogged by Mentor for north of $100K. And they're probably the cheapest around.

We're not even talking heavy digital--old 2um or 5um (especially metal gate since they have 18V voltage tolerance) processes are interesting for analog audio. Guitar pedals still use analog bucket brigade delay chips that haven't been manufactured for decades. At $5K, you could make your money back on a boutique pedal if you sell 100 of them. At $50K that gets dicey and at $250K that will probably never happen.

The ability to slap a couple hundred transistors down on a chip, simulate it and get it manufactured for less than $5K would enable a lot of interesting, low-volume stuff--look at what happened when PCBs got so cheap.

Right now, the entire electronics industry is constrained by the fact that volumes are "millions or bust".


Yeah DARPA or somebody should just dump money into making that cheap low volume happen.


DARPA (via VHSIC) is the reason why most of the commercial VLSI tools even exist today.

That's why it was particularly frustrating to see DARPA miss the boat by trying to make more digital stuff happen. We have an excess of digital. We have vast quantities of digital. We have so much digital people don't know what to do with it all.

What we don't have vast quantities of are interesting sensors, oddball RF designs, or extremely low power things. Note that these are all analog.

Throwing money into an open-source VLSI toolchain would do more to relieve DARPA's VLSI bottlenecks than anything else they could do.


Not sure what you are referring to, but the majority of DARPA’s recent $110M investment in silicon design tools and open-source IP is dedicated to analog design.


It sometimes seems that Google uses open-source as more of a marketing ploy than a commitment to respecting user freedom. Why doesn't Google (first) fully open-source the entire Android platform before worrying about "silicon"? This would of course include all of their apps (e.g. GMail, Google Maps, Docs, etc.) as well as the Play Store framework.

It is hard for me to take their words all that seriously when they are holding back some of the most important pieces from their users.

Ditto for Microsoft.


This tool isn't open sourced by Google, it's a guest post by a Google Summer of Code mentor, so Android/Google apps aren't relevant here.

From the article: "By Steve Hoover, Redwood EDA, Google Summer of Code mentor".


None of those are part of the Android platform. They are clients to Google services. Many products including those from Amazon have been produced using the Android platform without access to any of them.

If Microsoft open sourced Windows would you complain that it wasn't really open sourced because they didn't open source Microsoft Office?


> Why doesn't Google (first) fully open-source...

Because sustainable open source rarely works that way. Sustainable open source projects focus on three things: (1) identify the bits that are common across many applications, (2) establish an open source standard for those bits, (3) sustain the bits by making it easy for other people/orgs to use them & contribute to maintaining them.

Edit: Sometimes a tool like Blender is open source and feature complete. So it seems like “the whole thing” is open source. However, in those scenarios the tool usually replaces a very expensive proprietary tool (Maya, in the Blender example) that is a single step in a larger business process (like making a 3D animation).


You mean, kind of like Android, Google Docs, etc.? Google makes its money by charging for add-ons to those products (space on Google Drive, the Play store for Android, etc.), not from those products directly. I see no reason why Google would be hurt by open sourcing those products, yet they don't, and that choice leads to all sorts of concerns among privacy-conscious individuals.

I understand not open sourcing their secret sauce (e.g. search algorithms), but they have a ton of other products that could absolutely be open source without any negative financial impact. It's things like this that make me seriously doubt their commitment to open source.


The common-across-many-applications parts of Android are open source, and Google Docs isn't common-across-many-applications. Arguably, the Google Drive storage backend could be open source, but that wouldn't need to be open source; it would make more sense as an API.


Open source very often works wonderfully that way. Or is your definition of “working” open source something like “receives constant commits and churn”?

Lots of open source repos rarely see any activity, and the world is still so much better because people fork them, read the code, or just use them as is.

GNU grep sees almost no activity and it’s done more for people than kubernetes has.


Your Blender example doesn't really hold up in this case because it's not like Blender has been funded historically by some large company replacing the expensive proprietary tool Maya. In fact, there are many fully open source products that do not fit into some larger corporate-backed business model: VLC, Gimp, ffmpeg, to name just a few.


But Blender specifically was proprietary for many years before being crowdfunded into an open-source project.

https://en.wikipedia.org/wiki/Blender_(software)#History


It’s an imperfect example, but the Maya to Blender transition is starting to happen now. See “Epic Games supports Blender Foundation with $1.2 million Epic MegaGrant“, https://www.blender.org/press/epic-games-supports-blender-fo...


I would love to see Blender become competitive with Maya etc., but it's got a long, long way to go. Though it might be closer than most, in part because its competition is so expensive that people are looking for alternatives. I sponsor Blender to help see that day happen, and I wish big companies that are paying for Maya, Houdini, 3DSMax, ZBrush licenses would instead spend all that money paying engineers to contribute to Blender. But, as of today, while 2.8 made big strides, it's still missing huge swaths of features available in Maya and others.


How is it that HN seems happy to let people derail threads for Google hate?


It isn’t hate if someone criticises a company and states clear reasons, which we may or may not agree with.


In order to truly show Google is committed to open source, they have to open source Gmail, Google Maps, Google Docs, etc.???

Such extreme exaggeration makes it difficult to take your comment seriously.


In theory, if you are philosophically committed to the idea of open source, it logically follows that every binary you release should come with full source, without exception.

This is the case for me, for example. I never release software without source.


> (e.g. GMail, Google Maps, Docs, etc.) as well as the Playstore Framework.

These "apps" are all based around interaction with Google services. Maybe they could release open source versions of those, but then the users would be left with the hassle of having to pay Google for API accesses by the open source code they're running. Most users would simply not bother, and stick to whatever Google is providing for free.


The Google Cloud client libraries are open-source; I don't really see the difference.

I should also point out: often, with this type of project (in-the-open development of a client for a paid service) the client comes with some sort of stub emulator for the service. FOSS developers can test their changes against the emulator, and then submit PRs, which then get run by CI against the real service.


They could offer a digitally signed binary that is free (as in beer ... which would presumably be supported by ads that could not easily be stripped out) and a parallel open-source (ad-free) version that charges users directly for API access (preferably with an "always free" tier of use).


I would pay for that second tier, provided they don't mine API usage for data for their search product. I would love to opt out of Google data mining, and I'm happy to pay for the services I use. The closest they have is G Suite, but I think they still mine data from paying customers.


I think you are being a bit hard on Google here, they are strategic but also quite generous with open-source.


You only get upvotes for Google hate here.


Google are great, upvote/downvote what does it matter :)


While I realise that FPGAs are much closer to hardware design than programming is: delivering FPGA code to an FPGA to reconfigure it is still open-source software in my opinion. You've simply shifted what it is you're programming.

Actual open-source hardware (if you can indeed even use the phrase "open source" for this) would be distributing schematics of silicon chips online, so that anybody with the means to fabricate them could do so. I see this as more akin to the "openness" of the 3D printing community. If technology to "3D print" silicon chips becomes available, that will be the turning point.


> If technology to "3D print" silicon chips becomes available, that will be the turning point.

I do see the point you're trying to make about cost of manufacturing, but there are already a number of "printable" circuits that a 3d printer can put together to build your own chips.

And whilst the kind of circuits I've printed onto clothes don't quite match full-scale chip production, it's an active area of research [0], so I wouldn't say it would be too long before it reaches consumer hands.

[0] https://www.machinedesign.com/3d-printing/3d-printed-flexibl...


Is this the kind of 3D printing of circuit boards you've been doing?

https://www.instructables.com/id/3D-Printing-3D-Print-A-Sold...


Similar idea, but rather than filling the traces after printing, I've been using conductive filament to print the traces themselves directly.


Does anyone have some reading online about successful use cases of Amazon's F1 FPGA instances? I'm genuinely curious because I remember reading about this in 2017 and it sounded interesting, but that was the last I heard of it.


What kind of threat/challenge do these kinds of open-source initiatives represent to the dominant EDA players (Synopsys/Cadence) and their long-term survivability? Synopsys in particular has been killing it with their stock performance, but I’ve been concerned for a while that this industry might be ripe for disruption. Alas, I think I lack the technical expertise to fully appreciate the complexities of EDA platforms and how to value that.


Not much, actually. There is a huge problem with EDA where only established tools can be used. Where free tools are available, it's hard to get PDKs from the foundries unless you are part of a university or a firm. The free tools also lack some important features needed to make them robust (like a good parasitic checker), etc.

There are several groups trying to raise awareness of this; for example, FSiC is pretty interesting (https://wiki.f-si.org/index.php/FSiC2019)


> Open source tools can now compile C++ into silicon (with caveats).

Woah, really? This seems ... impossible, right?


Been around for a while (at least the proprietary stuff). Certain things that are mere "style" and don't affect runtime for traditional C++ do have large effects on the generated code, though. Intermediate variables, for example, will affect how many pipeline stages it generates (at least with Vivado).

It's a cool tool, but right now it's more of a productivity aid for people who already understand Verilog/VHDL rather than an enabler for software engineers.
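
To make the "style affects the hardware" point concrete, here is a minimal sketch of the kind of HLS-style C++ being discussed (the pragma follows Vivado HLS conventions; the exact latency and stage count the tool infers are tool- and version-dependent, so treat the comments as illustrative):

    #include <cstdint>

    // Two functionally identical ways to write a multiply-accumulate; how
    // many pipeline stages the tool infers can differ between them.
    uint32_t mac2(uint32_t a, uint32_t b, uint32_t c, uint32_t d) {
    #pragma HLS PIPELINE II=1        // request a fully pipelined datapath
        // Version 1: a single expression, one chained datapath for the scheduler:
        //   return a * b + c * d;
        // Version 2: explicit intermediates; the tool may register p0 and p1
        // separately and schedule the adder into a later stage:
        uint32_t p0 = a * b;
        uint32_t p1 = c * d;
        return p0 + p1;
    }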


Interesting. Seems like it would be a big performance gain for e.g. Facebook to convert all their networking/server code to silicon. Obviously has drawbacks like decreased speed of deployment and whatnot but it sounds like something that could be huge one day.


If you do not understand FPGAs, then C++ to VHDL/Verilog is lipstick on a pig at best.

Here's the thing:

Knowing VHDL/Verilog will not:

- Get you a job with compensation on the same level as ML/Web Dev.

- Magically make your tech/startup work better

- Fulfill buzzword requirements for investors

This sort of low-level tooling is tremendously difficult to make profitable unless you already have some sort of vendor lock-in (i.e. silicon) or an application that is extremely demanding in terms of efficiency or speed, e.g. high-frequency trading, network switches, dot-product machines (commonly known as ML), and crypto hardware. Unfortunately, any other applications for FPGAs tend to be closely related to the embedded space (robotics, aerospace), and again it's significantly more annoying to monetize compared to e.g. a SaaS oriented around ML. Especially so if you are not a massive corporation with deep pockets.

The closest we have come to a high-level tool for FPGA synthesis is reconfigure.io, but they got acquired and it is now effectively dead.

CPUs and ASICs have become too powerful and too cheap. Even with Moore's law tapering off, the gains provided by FPGAs are still too narrow. Electrical engineers are cheap. Physics has not changed much in recent decades. A couple of computer architecture courses are more than enough to bring an EE graduate up to speed (referring to FPGAs here; ASICs are another story).


C++ is a poor fit for this. Proper high-level HDLs are better to understand and work with: Clash[1] (mentioned in the neighboring comment), Chisel[2]/FIRRTL[3], Hardcaml[4], Spatial[5], various Python-based languages; there are many[6].

[1] https://clash-lang.org/

[2] https://www.chisel-lang.org/

[3] https://www.chisel-lang.org/firrtl/

[4] https://github.com/janestreet/hardcaml

[5] https://spatial-lang.org/

[6] https://github.com/drom/awesome-hdl


While I agree, Codeplay, Khronos, Intel, and LLVM think otherwise with their ongoing SYCL, oneAPI, and LLVM-bitcode-to-FPGA projects.


Sadly you can't synthesize any old piece of C++; there are a ridiculous number of constraints and you need to write the code in a very specific way.

Also, there's not necessarily much of a performance gain; some tasks are not good candidates for being implemented in hardware (as a general rule you need a lot of parallelism to make it worth it). As an overly simple example, implementing hardware RSA and hoping for a significant speedup doesn't make sense because there isn't really much parallelism and it's usually only used to encrypt keys, but something like AES or SHA might benefit from a good hardware implementation because there is much more parallelism to be had and they are used to encrypt much larger amounts of data.

To add even more complexity, the compilers can be obscenely finicky with optimizations.
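
As a rough, hypothetical illustration of both points (the constraints and the need for parallelism), consider an HLS-style kernel; the pragmas again follow Vivado HLS conventions and the function itself is made up:

    #include <cstdint>

    // Fixed-size arrays, a compile-time trip count, and no dynamic allocation,
    // so the tool can unroll the loop into 64 parallel XOR units.
    void xor_block(const uint8_t in[64], const uint8_t key[64], uint8_t out[64]) {
    #pragma HLS PIPELINE II=1
        for (int i = 0; i < 64; ++i) {
    #pragma HLS UNROLL
            out[i] = in[i] ^ key[i];   // 64 independent operations per call
        }
    }
    // By contrast, the modular exponentiation at the heart of RSA is one long
    // serial dependency chain, which is why it gains little from hardware.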


So, it's less that you're writing C++ and synthesizing it, and more like you're maintaining a VHDL codebase that happens to be presented/edited "through" C++.


That's a fair description. Even so, with VHDL being rather verbose you might prefer the "C++ skin on VHDL" version for some things, especially algorithmic things.


Definitely, some code is much more readable/clean in C++, while still compiling to reasonable VHDL. Also templates let you create fairly complex blocks programmatically at compile-time, Verilog doesn't have the same metaprogramming facilities that C++ offers.
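
A small, hedged example of what that compile-time generation can look like (the names here are made up, not from any particular codebase):

    #include <cstddef>
    #include <cstdint>

    // A shift register whose depth is a template parameter, so differently
    // sized blocks are elaborated at compile time from one description.
    template <typename T, std::size_t DEPTH>
    struct ShiftRegister {
        T stages[DEPTH] = {};

        T shift(T in) {
            T out = stages[DEPTH - 1];
            for (std::size_t i = DEPTH - 1; i > 0; --i)  // static bounds: unrollable
                stages[i] = stages[i - 1];
            stages[0] = in;
            return out;
        }
    };

    // ShiftRegister<uint16_t, 4> and ShiftRegister<uint16_t, 128> elaborate
    // into two different hardware blocks from the same source.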


That sounds awful, what about bugs?


To add on top of the other answers:

Khronos is pursuing OpenCL-to-FPGA targeting, and you can do it with C++ via SYCL.

https://youtu.be/BDH8p--kx6c

Intel also has their offerings,

https://software.intel.com/en-us/oneapi

And LLVM can also target some directly,

https://youtu.be/9a5gQ4wJoxs


It's indeed silly to focus on C++ synthesis when we have tools like Chisel (or, for that matter, SystemVerilog) that are just as "high-level", and far better matched to the inherent workings of the hardware you're going to synthesize.


Tell that to LLVM, Khronos, Intel, and Codeplay.

Silly or not, they definitely see a market there.


If you're assuming you can take any program and just take it straight across to silicon, then yes it's impossible. There are some serious limitations on what you can do and what will translate (which seem not to be well articulated anywhere I can find?).

Then there are the same old questions of efficient access patterns to DRAM, bus bandwidth, and so on.

See e.g. http://users.ece.utexas.edu/~gerstl/ee382v_f14/soc/vivado_hl... : basically the data structure has to be defined at compile time, or at least "auto" allocation; you can't dynamically allocate silicon.
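
A hedged sketch of that "defined at compile time" rule (names are illustrative only):

    #include <cstddef>
    #include <cstdint>
    #include <vector>

    // Fixed-size scratch storage has a direct hardware mapping (registers/BRAM).
    void synthesizable(const uint32_t in[256], uint32_t out[256]) {
        uint32_t scratch[256];                 // size known at compile time
        for (int i = 0; i < 256; ++i) scratch[i] = in[i] * 3u;
        for (int i = 0; i < 256; ++i) out[i] = scratch[i] + 1u;
    }

    // Storage whose size is only known at runtime has no hardware analogue:
    // you cannot allocate silicon on demand.
    void not_synthesizable(const uint32_t* in, uint32_t* out, std::size_t n) {
        std::vector<uint32_t> scratch(n);      // runtime-sized allocation
        for (std::size_t i = 0; i < n; ++i) scratch[i] = in[i] * 3u;
        for (std::size_t i = 0; i < n; ++i) out[i] = scratch[i] + 1u;
    }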


The technology has been around for a while, but as the other commenter noted "caveats" covers a whole lot of annoying/frustrating things. It's getting better, though.

https://www.xilinx.com/products/design-tools/vivado/integrat...


Much simpler cost model if you write in...Haskell

https://clash-lang.org/


This is called high-level synthesis and has existed for a while. The "with caveats" part is quite the understatement.


Efabless.com --- An EDA tool that is open source and shipping chips right now.


Only around $30K for a 50-chip sample.


Note that as Moore's law is dying, FPGAs and custom silicon will become more important. In many scenarios, there is no point in a 1-year project to design custom silicon when within a year a new CPU generation will be released that's twice as fast. However, if that interval is 3 or 4 years or even longer, it suddenly becomes feasible.


> the mere fact that it is free of proprietary licenses has inspired countless open source implementations and an industry shake-up that has ARM quaking in its boots.

ARM and x86 are not quaking in their boots by any measure. I certainly haven't seen Google itself rushing to ship RISC-V devices or provide RISC-V cloud services at scale. I doubt there's much demand for either compared to ARM and x86.

Consider MIPS: it is another open ISA and it has good toolchain, silicon, and OS support, but ARM still rules on mobile and x86 still rules the desktop and server world.

In the case of Apple at least, they already design their own ARM CPUs and there is no clear advantage to switching to RISC-V or MIPS.

However, an Apple CPU that combined ARM and x86 compatibility might be useful in Apple laptop and desktop PCs.


I guess this will have to be limited to accelerators because a CPU/GPU/NIC/etc. in an FPGA will never be remotely as efficient as a real CPU/GPU/NIC/etc.

Open source software has been successful at creating generic infrastructure, possibly because generic software can support large communities. I still don't see how to create that kind of effect in hardware.


I don't think it's necessarily about using an FPGA in the cloud for your crypto miner as much as it is about reducing the costs (not specifically monetary) associated with hardware design in general. As the blog states: "Access to hardware: Hmmm. This is still a problem." The traditional reasoning would argue that spinning a chip is so prohibitively expensive, why bother reducing other costs for the open source community?

Unlike the software community, where we have nice open source libraries to do the useful and boring bits of your software stack, more or less everything that's more complicated than a floating-point adder is IP (ARM CPU, PCIe controller, Ethernet controller, internal fabric, DRAM controller). Now you can go and buy this IP from other companies like ARM and Synopsys, but it costs money.

Then once you have made an architecture you like, there will be another group of companies wanting hundreds of thousands of dollars for their simulation tools (which are painfully outdated). And they can charge so much because creating silicon is really expensive!

We have a long way to go on both of these fronts, because making open source hardware blocks and open source toolchains is much harder than it sounds, and for what benefit? It's still several million for masks.

For a startup trying to make innovative hardware, however, those things can mean faster development times, fewer engineers needed, and reduced IP royalty fees. For the industry, it gives students a lower barrier to entry to the field, so that when they come out of college they can ramp up quicker. For me as a hardware engineer, it makes my life better, as it gives me hope that some day my tools won't be outdated and overpriced.


An FPGA is a lot slower than a real ASIC in the same process node (one order of magnitude of slowdown at the very least - even two is far from uncommon!). OTOH, this means you can make an ASIC version of your logic in older process nodes that can be fabbed a lot cheaper than anything on the cutting edge, and still come out quite a bit ahead of the FPGA itself. This turns FPGAs into tools for quickly iterating on a design and little else, because for real use ASICs are just too good of a deal to pass up.

But this can only be fully realized if the "high-level" part of ASIC design becomes a lot more straightforward than it is today. Which in turn requires significant progress in open source EDA tooling.


> This turns FPGAs into tools for quickly iterating on a design and little else, because for real use ASICs are just too good of a deal to pass up.

The dynamic reconfigurability lets FPGAs fill a niche that can't easily be filled by ASICs. Being able to treat FPGAs like any other software target (albeit with an esoteric language ecosystem) was a game changer in a lot of ways. I've heard of systems that reconfigure at runtime as well.


Well, yes and no. Yes, FPGAs will always be slower than real HW (for the foreseeable future at least). But you can use a design tailored to your application (an accelerator indeed), whether it is a CPU, a GPU, a NIC or anything else, and get more performance out of it than silicon that's available today.

For instance: a custom RISC-V CPU with extensions for machine learning, whether it is for prototyping or not. Or a NIC with some complex routing capabilities, etc. And of course, I guess that various accelerators are indeed the main driver behind this.

I guess you could also run one of these devices you named without efficiency in mind: hardware/software validation, trusting trust, profiling, etc.

But the big part might also be: access to big FPGAs at a fraction of the cost you usually need to shell out to get them, which in turn enables more in terms of design verification, etc. for open source HW, and maybe bypasses the annoying vendor-specific tools in favor of a cleaner API and a hassle-free experience. Can we directly modify the bytecode with this?


I've generally wondered about extensions for RTOS support: special kinds of priority encoders off registers for flags, shared-memory locks -- that sort of thing.


Why do you think an open source CPU design can't be manufactured by a fab?

There are economies of scale at play, but the entry cost keeps going down while the market for open source grows.

I don't expect open-source ASICs to suddenly come to life and compete with the latest Intel, but I could see an open and verifiable design taking the place of microcontrollers in many places where cost is more of a factor than performance. Or rather, where you need a known level of performance but anything above that threshold is waste: HDD controllers, Ethernet controllers, power supply controllers, etc...

I am predicting that at some point a Chinese fab will manufacture a batch of an open source competitor to some ARM Cortex chips, and that it will catch on because of a mix of lower prices, open documentation and community, and ideological reasons.


> Why do you think an open source CPU design can't be manufactured by a fab?

That's possible but it doesn't enable the feedback loop where you can modify it, use your own modifications immediately, then contribute them back to the community. FPGAs support that loop, but only for accelerators.

> There are economies of scale at play, but the entry cost keeps going down

Everything I've read says that the cost to tape out is going up.

> I could see an open and verifiable design taking the place of microcontrollers in many places where cost is more of a factor than performance.

Yeah, but why would a community form around that? What's cool about saving a few pennies for some chip companies?


"Google generously adds fuel to this culture of openness and community through Google Summer of Code."

You don't get to call yourself generous. If it's so obviously self-serving then it's not generosity at all.


I don't think the author is a googler?


Correct. The author is a mentor of a GSoC recipient, so them calling Google generous (for sponsoring his mentee) makes perfect sense.


They are advertising the use of web technologies as something good, but it's the opposite. That Python web server will be a bottleneck because its throughput cannot even be compared to that of a hardware chip.

It doesn't make sense to rent a "cloud" FPGA because it is ineffective. You get a vendor-locked solution with poor performance, written in slow Python and JavaScript. The code is on GitHub, which means it will be buggy and always in an "under construction" state (try using desktop Linux to understand what that means). As the chip is in the cloud, the cloud operator can see and steal your secrets. The latency to send data to the cloud is very high. You cannot debug the chip and cannot connect a logic analyzer to it. You cannot make a hardware product with a "cloud" chip. You cannot connect sensors to it.

This is only usable as a toy to play with an FPGA from the browser. You cannot use it to control a CNC machine or a video surveillance system, for example. You cannot make a guitar pedal (which someone has mentioned in the comments below) with this.

What developers need is cheap, powerful FPGAs that they can buy and connect directly to a USB port, not a toy in the cloud.



