"Customers will damage their FPGAs with invalid
bitstreams, and blame us for selling unreliable
products."
This used to be true when FPGAs had internal tristate resources (so you could drive the same net from competing sources); this is no longer the case for modern devices.
I agree that this is a bogus argument -- nobody would blame the chip vendor for damage caused by a third-party bitstream -- but I don't see what it has to do with tristate logic. The only thing keeping me from shorting out two (or two thousand) ordinary CMOS drivers with an assign statement is the toolchain, isn't it?
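For concreteness, here's a minimal Verilog sketch (module and net names made up) of exactly the kind of contention the toolchain is supposed to catch:

    // Two competing continuous assignments to the same net.
    // A simulator resolves this to 'x' (contention); synthesis
    // tools targeting modern FPGAs reject it with a
    // multiple-driver error rather than emit a bitstream
    // with two drivers fighting over one wire.
    module contention (output wire y);
      assign y = 1'b0;  // driver 1 pulls the net low
      assign y = 1'b1;  // driver 2 pulls the same net high
    endmodule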
It's not a bogus argument; the author here is flat-out incorrect. There are many bit-level resources on a modern FPGA that could be configured to both source and sink current on the same wire. Any transistors set into this mode will most certainly burn out: at minimum the wire track, at worst an entire region of the die.
EDIT: Modern processes actually make this problem worse, since the wires are smaller and their failure threshold correspondingly lower. Even a short lasting a nanosecond can cause irreversible damage.
> Even a short for a nanosecond can cause irreversible damage.
Jumped out at me. One nanosecond is 10^-9 seconds. Assuming two switching elements with a 10 Ohm RDS(on) connected to the rails (probably on the low side; such small elements usually have a rather high on-resistance), that's a 20 Ohm series resistance across 3.3V. Even assuming an instantaneous rise (which it won't be), that dissipates about 0.54 W, which over one nanosecond leaves you with roughly 0.5 nanojoules of energy spread out over two locations. That's an extremely small amount of energy to be able to cause damage. Surprising!
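For what it's worth, spelling that estimate out under the same assumptions (10 Ohm RDS(on) per element, 3.3 V rail, 1 ns event):

    E = \frac{V^2}{R}\, t
      = \frac{(3.3\ \mathrm{V})^2}{20\ \Omega} \times 10^{-9}\ \mathrm{s}
      \approx 0.54\ \mathrm{nJ}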
Hard to say what the true damage potential is, because the numbers become unintuitive to work with at such small scales. The feature sizes we're talking about are smaller than most known viruses, so it wouldn't take much heat to damage them. It would be interesting to go through the math.
Some older FPGAs, such as the Virtex-II Pro series, had internal buses driven by tristate buffers. The responsibility was on the designer to avoid enabling multiple drivers at a time.
This is no longer true of modern FPGAs. There is no way to create a signal that is driven by multiple sources; the structure of the FPGA guarantees that any net will have exactly one driver.
I don't think I've asked for anything particularly unusual?
It's common for electronics manufacturers to release diagrams like this: http://i.imgur.com/k6o8mwO.png detailed enough to give you an idea of how the circuit behaves and why, but well short of a complete internal schematic for the entire device.
Google "Xilinx FPGA cell" https://www.google.co.uk/search?tbm=isch&q=xilinx+fpga+cell and you'll find similar approximate diagrams exist for FPGA cells. That's the detail level I'm interested in, and I think it's reasonable enough to believe it would exist?
Those cell diagrams are very abstract illustrations of the underlying functionality, and in some parts they are even wrong [1]. The level you would have to look at to learn anything about short circuits (transistors and their interconnections) is very much a secret.
[1] "Wrong" in the sense that they are only proper abstractions for the officially supported functionality. For example, the DSP48E blocks of the Xilinx Virtex-5 can be chained for higher precision, but if you interpret the diagrams literally and try to build unsupported functions, it won't work as you expect.
What I suspect is happening is that the good old tristate bus can't be turned around fast enough for high-speed designs, so some sort of mux network has been built to replace it. The place to look is probably in the patents.
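As a rough sketch of that transformation (signal names made up): where a designer writes tristate-style intent, tools targeting fabrics without internal tristates infer a single-driver mux instead:

    // Tristate-style intent you might write for an internal bus:
    //   assign bus = en_a ? a : 1'bz;
    //   assign bus = en_b ? b : 1'bz;
    // What synthesis actually builds: a mux with exactly one
    // driver, with priority given to en_a here.
    module bus_mux (
      input  wire a, b, en_a, en_b,
      output wire bus
    );
      assign bus = en_a ? a : (en_b ? b : 1'b0);
    endmodule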
Shouldn't the FPGA do a validation step before loading the bitstream? Or maybe the vendor could supply a validation tool that you have to run before loading the code; if the tool says "this will damage your board" and you load it anyway, you're liable for all damages?