I agree this looks promising, though I'm not an expert in this field.
But the title is a bit, well, overpromising or broad. I don't think we'll replace traditional motherboards anytime soon (except maybe in smartphones?). Rather, it will be incremental progress.
- first, SoCs will be replaced with chiplets
- then we'll start seeing more and more stuff being integrated on this wafer.
- say, instead of a server motherboard with multiple sockets, have all the CPU chiplets on the same wafer and enjoy much better bandwidth than you get with a PCB
- integrate DRAM on the wafer. This will be painful as we're used to being able to simply add DIMMs, but the upside is massively higher bandwidth (rough numbers below).
The motherboard PCB per se will live for a long time still, if nothing else then as the place to mount all the external connectors (network, display, PCIe, USB, power, whatnot).
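To put a ballpark number on "massively higher bandwidth": a wide, short in-package interface like HBM beats a DIMM channel over a PCB by roughly an order of magnitude. The figures below are rounded public specs, not anything from the article; just a back-of-envelope sketch:

```python
# Back-of-envelope: off-package DIMM channel vs. an HBM-style in-package
# interface. Ballpark public figures, purely illustrative.

def peak_bw_gbs(transfers_mt_per_s, bus_width_bits):
    """Peak bandwidth in GB/s for one memory interface."""
    return transfers_mt_per_s * 1e6 * (bus_width_bits / 8) / 1e9

ddr4_channel = peak_bw_gbs(3200, 64)     # DDR4-3200, 64-bit DIMM channel
hbm2_stack   = peak_bw_gbs(2000, 1024)   # HBM2, 1024-bit stack interface

print(f"DDR4-3200 channel: {ddr4_channel:.1f} GB/s")           # ~25.6 GB/s
print(f"HBM2 stack:        {hbm2_stack:.1f} GB/s")             # ~256 GB/s
print(f"ratio:             {hbm2_stack / ddr4_channel:.0f}x")  # ~10x
```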
> integrate DRAM on the wafer. This will be painful as we're used to being able to simply add DIMMs, but the upside is massively higher bandwidth.
One way I imagine this working out is that, instead of just replacing the plastic motherboard with a silicon motherboard, you eventually do away with a single monolithic motherboard entirely. Instead, you have "compute blocks" (composed of chiplets bonded to a silicon chip, or conventional chips on a conventional circuit board) that connect with each other via copper or fiber-optic point-to-point communication cables, and you can just wire them together arbitrarily to build a complete computer. Like, you might have a couple blocks that house CPUs, one or two that have memory controllers and DRAM, and maybe one with a PCI bus so you can connect peripherals, and you can connect them all in a ring bus. You could house these blocks in a case and call it a server, or connect a lot more blocks and call it a cluster.
The main advantage of such a setup is that you don't have a single component (the motherboard) that determines how much memory, how many processors, or what sort of peripherals you can have.
This becomes especially interesting if you imagine these components becoming smart enough to support high(er)-level atomic operations and some form of access control, so you could have resources shared between two subsystems.
Also, if all these components are reasonably smart and interconnected, it could become more common for the CPU to merely coordinate communication: larger chunks of data could be handed around between components, with the processor only telling them what range of bytes to send where.
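As a toy sketch of that coordination model (all names and interfaces here are hypothetical, just to illustrate the idea): the CPU only issues a transfer descriptor, and the blocks move the bytes themselves, much like DMA engines already do today.

```python
# Toy model of CPU-as-coordinator: the CPU hands out transfer descriptors
# ("send bytes [offset, offset+length) from block A to block B") and the
# blocks do the actual data movement peer-to-peer. All names here are
# hypothetical -- an illustration of the idea, not a real API.

from dataclasses import dataclass

@dataclass
class TransferDescriptor:
    src_block: str   # e.g. "dram0"
    dst_block: str   # e.g. "nic0"
    offset: int      # byte offset within the source block
    length: int      # number of bytes to move

class ComputeBlock:
    def __init__(self, name, size):
        self.name = name
        self.mem = bytearray(size)

    def send(self, desc, blocks):
        # The block streams the bytes to its peer itself; the CPU is not
        # in the data path, it only issued the descriptor.
        dst = blocks[desc.dst_block]
        dst.mem[:desc.length] = self.mem[desc.offset:desc.offset + desc.length]

# The "CPU" side: build a descriptor and hand it to the source block.
blocks = {"dram0": ComputeBlock("dram0", 4096), "nic0": ComputeBlock("nic0", 4096)}
blocks["dram0"].mem[100:110] = b"0123456789"
desc = TransferDescriptor(src_block="dram0", dst_block="nic0", offset=100, length=10)
blocks[desc.src_block].send(desc, blocks)
assert bytes(blocks["nic0"].mem[:10]) == b"0123456789"
```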
I think the embedded wafer level or "panel" level packaging technologies are the mid-ground. These technologies don't use expensive silicon, and instead surround the die with cheaper epoxy. Then the interconnects are built on top of that, and can connect multiple die together. Yield and interconnect pitch are the big issues here though, and that's why I think you're right, that we will see SoCs or mobile systems first, not whole motherboards.
With that said, some of these technologies can have a layer of surface mount pads on top. So you have a substrate of epoxy with all your chips and interconnects embedded in it, and then surface mount parts on top. For example, passives, connectors, etc. It would look almost like a motherboard, but with all the chips inside. Of course, for cost and yield reasons, this will be for mobile devices only at first.
I didn’t phrase that well. I meant that the wafer-level and panel-level embedded technologies embed the silicon die inside cheaper epoxy, instead of building expensive silicon interconnect to integrate them on. They basically make a plastic wafer with a bunch of die in it. Then interconnect is built up on that.
Edit: the links below show solder balls. Today this technology is used for packaging, and has been used on chips in phones for years now. In the near future, we should be able to embed or surface mount passives and mechanical components, so maybe we don’t need the PCB.
- The interconnect pitch is huge, 0.3-0.4 mm, while HBM memories have thousands of I/Os (see the rough count after this list)
- The inductance of the solder balls and the impedance discontinuities in the path mean the logic below still has to have big energy-hungry I/O drivers
- If you want to stack more than one die, you need something expensive like through-silicon vias (TSVs)
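To make the pitch point concrete, here's a rough count of how many connections fit under a 10 mm × 10 mm footprint at the ball pitch mentioned above versus the ~55 µm microbump pitch commonly quoted for HBM-class interfaces. Illustrative arithmetic only:

```python
# How many I/Os fit under a 10 mm x 10 mm footprint at a given pitch.
# Illustrative only: 0.4 mm is the fan-out ball pitch mentioned above,
# ~55 um is a commonly quoted microbump pitch for HBM-class interfaces.

def ios_per_footprint(side_mm, pitch_mm):
    per_side = int(side_mm / pitch_mm)
    return per_side * per_side

print(ios_per_footprint(10, 0.4))     # 625 -- not enough for HBM's thousands of I/Os
print(ios_per_footprint(10, 0.055))   # ~33,000 -- plenty of headroom
```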
Air is gonna do fine. The bottleneck in CPU cooling right now is pretty much always the transfer between the die, the heat spreader and the cooling plate, not the transfer from the fins to the air. Water cooling can do slightly better because you can keep the water cool, and with it the cooling plate, and thereby increase the heat flow from the CPU to the plate, but it's really only marginally better than a big air cooler.
And if you put more dies below a heat spreader, you get more surface area, i.e. better heat flow overall (compared to a single die with the same power consumption) from the dies to the heat spreader and from the heat spreader to the cooling plate.
That's also the reason why bigger air coolers don't really do as much as you'd think they should in terms of cooling performance or overclocking: the difference between an NH-U14S and an NH-D15 is really quite small. If the problem were heat dissipation through the fins, all you'd have to do is make the cooler bigger.
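A crude way to see both points (die-to-spreader dominates, extra die area helps, bigger fins help much less) is to treat the stack as series thermal resistances. The numbers below are made up for illustration, not measurements:

```python
# Series thermal-resistance model of a CPU cooling stack. The values are
# assumed, order-of-magnitude numbers for illustration only.

die_to_spreader   = 0.25   # K/W, TIM + spreader (scales roughly with 1/die_area)
spreader_to_plate = 0.10   # K/W, paste + cooling plate
fins_to_air       = 0.10   # K/W, a large tower cooler

def die_temp_rise(power_w, r_die, r_plate, r_air):
    return power_w * (r_die + r_plate + r_air)

# One 200 W die:
print(die_temp_rise(200, die_to_spreader, spreader_to_plate, fins_to_air))      # ~90 K

# Same 200 W spread over two dies under one spreader: roughly twice the die
# area, so roughly half the die-to-spreader resistance; the rest is unchanged.
print(die_temp_rise(200, die_to_spreader / 2, spreader_to_plate, fins_to_air))  # ~65 K

# Making only the fins bigger (halving fins_to_air) helps much less:
print(die_temp_rise(200, die_to_spreader, spreader_to_plate, fins_to_air / 2))  # ~80 K
```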
You can bring the water closer to the die, and make it pass faster across or through the cold plate, thus achieving a larger heat flow. Effectively you turn the cold plate into a moving liquid with a high specific heat capacity (5-7x that of the metal plate).
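For reference, textbook specific heats (by mass) put water around 4.2 J/(g·K) versus roughly 0.9 for aluminium and 0.39 for copper, so the ratio is in the same ballpark as the 5-7x figure above; a quick check:

```python
# Specific heat by mass, textbook values in J/(g*K).
c_water     = 4.18
c_aluminium = 0.90
c_copper    = 0.385

print(f"water vs aluminium: {c_water / c_aluminium:.1f}x")  # ~4.6x
print(f"water vs copper:    {c_water / c_copper:.1f}x")     # ~10.9x
```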
Water has the big advantage that it's plentiful, cheap, and environmentally benign.
Sure, it'll take some more upfront engineering to design a system/rack/datacenter for water cooling than just immersing a server in a tank of inert liquid (Fluorinert or whatever they use these days), but I'm quite sure that at some point water cooling will be the standard solution in data centers.
Hmm, I would say the opposite. If all the memory and CPU cores are integrated on a single wafer, the penalty for off-chip access would be much less than if you had to go through a PCB.
It'll be less than in a networked cluster, but the penalty still mattered with Threadripper, and I'd expect a racked board of this nature to expose more disparity when accessing memory attached to other chiplets.