
A podcast I listen to posted an interview with an expert last week who said that, in his view, much of the interest in custom hardware for machine learning died off once people realized how effective GPUs were at the (still-evolving) set of tasks.

http://www.thetalkingmachines.com/blog/2016/5/5/sparse-codin...

I wonder how general the gains from these ASICs are, and whether the performance/power-efficiency wins will keep up with the pace of algorithm-du-jour advancements in software.




I listen to the Talking Machines as well. Great podcast. Another question is whether the gains are worth the cost of an ML-specific ASIC. GPUs have the entire, massive gaming industry driving their cost down. I suppose that as adoption of gradient-descent-based neural networks grows, an ASIC may become worth it in the same way GPUs did. Then again, I have never implemented SGD on a GPU, so I'm not sure whether there are bottlenecks an ML-specific ASIC could actually solve. Can anyone else shed some light on this?
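
For what it's worth, here is roughly what the inner loop looks like in PyTorch (toy model, random data, made-up hyperparameters; just a sketch to show where the GPU-heavy work in SGD happens):

    import torch

    # Toy example: a small linear classifier on random data, purely to
    # illustrate where the GPU-heavy work in SGD actually happens.
    device = "cuda" if torch.cuda.is_available() else "cpu"
    model = torch.nn.Linear(1024, 10).to(device)
    opt = torch.optim.SGD(model.parameters(), lr=0.01)
    loss_fn = torch.nn.CrossEntropyLoss()

    for step in range(100):
        # Host-to-device copies: the data movement here, not the math,
        # is often the part people complain about.
        x = torch.randn(256, 1024).to(device)
        y = torch.randint(0, 10, (256,)).to(device)

        opt.zero_grad()
        loss = loss_fn(model(x), y)  # forward pass: dense matmuls on the GPU
        loss.backward()              # backward pass: more matmuls
        opt.step()                   # parameter update

My (possibly wrong) understanding is that the interesting question is which of those lines dominates: the matmuls, or the copies.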


> massive gaming industry driving the cost down.

Per-unit manufacturing cost falls off quickly as volume goes up. Even a single batch of custom silicon on yesterday's technology is only about $30K. This is one of the reasons there is so much interest in RISC-V; hardware costs are not the barrier to entry that they used to be.

So yeah, the gaming market pushes the per-unit price of GPUs down, but even an additional 2x reduction in rackspace and power will pay for itself at the right scale.
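
To make "pays for itself at the right scale" concrete, here's a back-of-the-envelope sketch in Python. Every number in it is a placeholder I made up (the NRE, unit costs, and power savings are illustrative, not real pricing):

    # Back-of-the-envelope break-even: every number here is a made-up placeholder.
    NRE = 30_000.0           # one-time cost of a custom silicon run (figure from above)
    asic_unit_cost = 400.0   # assumed per-chip cost for the ASIC
    gpu_unit_cost = 700.0    # assumed per-card cost for a commodity GPU
    extra_gpu_opex = 150.0   # assumed extra $/year per unit the GPU burns in power + rackspace
    horizon_years = 3        # assumed service life of the hardware

    def asic_wins(n_units: int) -> bool:
        """True if the custom chip is cheaper than GPUs over the horizon."""
        asic_total = NRE + n_units * asic_unit_cost
        gpu_total = n_units * (gpu_unit_cost + extra_gpu_opex * horizon_years)
        return asic_total < gpu_total

    # Smallest fleet size at which the custom chip pays for itself.
    n = 1
    while not asic_wins(n):
        n += 1
    print("break-even fleet size:", n, "units")

With these placeholder numbers the crossover lands at a few dozen units; the specific figure means nothing, the point is just that the one-time NRE amortizes quickly once the per-unit savings are real.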



