
"We have an FPGA, oh but need to go faster, ok, build an ASIC then" is a natural thing to come up with. That's kind of what bitcoin farms did.

Obviously, the details and plans for how it was done are where all the good stuff is, so it's understandable that they're kept hidden.




I'm not even a hardware or AI person, and even I could have told you ASICs would make way more sense than GPUs or FPGAs for machine learning. It's all about data locality. Fetching memory is the most costly thing a GPU does, and for ML (DNNs) there's no big need for global memory access 99% of the time.
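To make the data-locality point concrete, here's a toy CUDA sketch (my own illustration, not from anyone's actual design; the kernel names, the TILE size, and the assumption that n is a multiple of TILE are all made up for the example). The naive matrix multiply re-reads every operand from global memory, while the tiled version stages a block in on-chip shared memory and reuses it:

    // Data-locality illustration: the tiled kernel loads each element from
    // global memory once per tile instead of once per multiply-add.
    #include <cuda_runtime.h>

    #define TILE 16  // hypothetical tile width, chosen only for the example

    // Naive version: every a[...] and b[...] read goes out to global memory.
    __global__ void matmul_naive(const float* a, const float* b, float* c, int n) {
        int row = blockIdx.y * blockDim.y + threadIdx.y;
        int col = blockIdx.x * blockDim.x + threadIdx.x;
        if (row >= n || col >= n) return;
        float acc = 0.0f;
        for (int k = 0; k < n; ++k)
            acc += a[row * n + k] * b[k * n + col];
        c[row * n + col] = acc;
    }

    // Tiled version: each thread block cooperatively stages a TILE x TILE
    // sub-block of A and B in shared memory, then reuses it from on-chip
    // storage. Assumes n is a multiple of TILE and a matching launch like
    //   matmul_tiled<<<dim3(n/TILE, n/TILE), dim3(TILE, TILE)>>>(a, b, c, n);
    __global__ void matmul_tiled(const float* a, const float* b, float* c, int n) {
        __shared__ float as[TILE][TILE];
        __shared__ float bs[TILE][TILE];
        int row = blockIdx.y * TILE + threadIdx.y;
        int col = blockIdx.x * TILE + threadIdx.x;
        float acc = 0.0f;
        for (int t = 0; t < n / TILE; ++t) {
            as[threadIdx.y][threadIdx.x] = a[row * n + t * TILE + threadIdx.x];
            bs[threadIdx.y][threadIdx.x] = b[(t * TILE + threadIdx.y) * n + col];
            __syncthreads();  // wait until the whole tile is staged on chip
            for (int k = 0; k < TILE; ++k)
                acc += as[threadIdx.y][k] * bs[k][threadIdx.x];
            __syncthreads();  // don't overwrite tiles other threads still read
        }
        c[row * n + col] = acc;
    }

Both kernels do the same arithmetic, but the tiled one touches global memory roughly TILE times less often. That kind of reuse is exactly what a DNN ASIC bakes directly into hardware instead of leaving to the programmer.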

Anyone who casually follows AI knows that people have been talking about making DNN ASICs for some time. It was only a matter of time and $$$$$$.

There is no doubt FB is working on them too, which is why Google is finally publicly saying, "we did it first ;)"


> It's all about data locality.

That's kind of what Movidius says about its Myriad 2 VPU, too: essentially a GPU-like SIMD-VLIW processor with larger amounts of local memory, combined with hardware accelerators.


> Google is finally publicly saying, "we did it first ;)"

Makes sense. In that respect, yeah, they probably wanted to keep it under wraps to prevent Facebook and others from getting a timeline estimate out of it and jumping ahead.



