
> Wondering how you'd classify Gaudi, tenstorrent-stuff, groq, or lightmatter's photonic thing.

Completely irrelevant to consumer hardware, in basically the same way as NVIDIA's Hopper (a data center GPU that doesn't do graphics). They're ML accelerators that for the foreseeable future will mostly remain discrete components and not be integrated onto Xeon/EPYC server CPUs. We've seen a handful of products where a small amount of CPU gets grafted onto a large GPU/accelerator to remove the need for a separate host CPU, but that's definitely not on track to kill off discrete accelerators in the datacenter space.

> Calling something a GPU tends to make people ask for (good, performant) support for OpenGL, Vulkan, Direct3D... which seems like a huge waste of effort if you want to be an "AI coprocessor".

This is not a problem outside the consumer hardware market.



Consumer hardware and AI inference are joined at the hip right now for perverse historical reasons: ML piggybacked on gaming GPUs because they happened to be the cheapest source of massively parallel compute and high-bandwidth memory.

AI inference's big bottleneck right now is memory capacity and memory bandwidth, not compute per se.
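A quick back-of-envelope roofline sketch makes the point. During autoregressive decoding every weight is streamed from memory roughly once per token, so you can bound per-token latency from bandwidth alone and compare it to the compute bound. The numbers below (7B parameters, FP16, 1 TB/s, 100 TFLOP/s) are illustrative assumptions, not measurements of any particular chip:

```python
# Back-of-envelope roofline estimate for single-stream LLM decoding.
# Assumption: a dense model streams all weights once per generated token.

def decode_time_bounds(n_params, bytes_per_param, mem_bw, flops):
    """Return (memory-bound, compute-bound) lower bounds on seconds/token."""
    weight_bytes = n_params * bytes_per_param
    t_mem = weight_bytes / mem_bw       # time just to stream the weights
    t_compute = 2 * n_params / flops    # ~2 FLOPs per parameter per token
    return t_mem, t_compute

# Hypothetical 7B-parameter model in FP16 (2 bytes/param) on hardware with
# 1 TB/s of memory bandwidth and 100 TFLOP/s of FP16 compute.
t_mem, t_compute = decode_time_bounds(7e9, 2, 1e12, 100e12)
print(f"memory-bound floor:  {t_mem * 1e3:.1f} ms/token")   # ~14 ms
print(f"compute-bound floor: {t_compute * 1e3:.2f} ms/token")  # ~0.14 ms
```

With these assumptions the bandwidth floor is about 100x the compute floor, which is why serving stacks obsess over quantization, batching, and HBM rather than raw FLOPs.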

If we redid AI inference from scratch without consumer gaming considerations then it probably wouldn't be a coprocessor at all.



