
How different are TPUs from GPUs? From the article, it sounds like TPUs use lower-precision arithmetic: are there any other differences?



Not just lower precision, but probably less error correction in the lower bits as well. Or perhaps they convert the floating-point values to analog, do all the math in the analog domain, and convert back, just like old analog computers. But a fully analog design should give gains well over 10x, so probably not.

See 'low-power (inexact|approximate) computing'.


They likely use half-precision (i.e., 16-bit) floats in some of the computing stages. They've also discussed using even fewer bits of precision in some of their research output. When training and following an error gradient you need quite a lot of precision, but when running a learned model you often don't need much at all: 8- or even 4-bit numbers are sufficient in some calculations where rounding-error propagation isn't an issue.
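For intuition, here's a minimal sketch of the kind of post-training quantization described above: train in high precision, then store and run the learned weights as 8-bit integers with a single scale factor. This is plain NumPy, not anything Google has published about TPUs; the weight matrix and the symmetric-scaling scheme are illustrative assumptions.

    import numpy as np

    # A hypothetical trained weight matrix (float32, as it would come out of training).
    rng = np.random.default_rng(0)
    w = rng.standard_normal((4, 4)).astype(np.float32)

    # Symmetric post-training quantization: map floats to int8 with one scale factor.
    scale = np.abs(w).max() / 127.0            # largest magnitude maps to +/-127
    w_q = np.round(w / scale).astype(np.int8)  # store weights as 8-bit integers

    # At inference time, dequantize (or fold the scale into the matmul).
    w_dq = w_q.astype(np.float32) * scale

    print("max abs rounding error:", np.abs(w - w_dq).max())

In real hardware the scale would be folded into the matrix multiply rather than dequantizing up front, but the trade-off is the same: cheaper 8-bit multiply-accumulate units in exchange for a bounded rounding error.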



