Hacker News

A lot of people are using GPUs for things other than ML. The big advantage is the number of cores, and people who run on supercomputers write algorithms that are highly parallelized (otherwise what's the point). GPUs are getting fast enough per core that their sheer core count gives them an edge. Also the memory on them is MUCH faster than CPU memory, but the trade-off is that you have less of it (e.g. 20 GB compared to 256 GB).
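A minimal sketch of the point above (my own illustration, not from the thread): an element-wise operation has no cross-element dependencies, so on a GPU each element could map to its own thread. NumPy's vectorized form stands in for that data-parallel style on the CPU; the function names here are hypothetical.

```python
import numpy as np

def saxpy_loop(a, x, y):
    # Serial version: one element at a time, the way a single
    # CPU core would process it in a plain loop.
    return np.array([a * xi + yi for xi, yi in zip(x, y)], dtype=np.float32)

def saxpy_vectorized(a, x, y):
    # Data-parallel version: the whole array at once. Because no
    # element depends on another, a GPU could assign each element
    # to a separate core, which is why core count dominates for
    # highly parallelized algorithms.
    return a * x + y

x = np.arange(4, dtype=np.float32)
y = np.ones(4, dtype=np.float32)
assert np.allclose(saxpy_loop(2.0, x, y), saxpy_vectorized(2.0, x, y))
```

The same split applies at scale: algorithms that decompose this way benefit from thousands of GPU cores, while serially dependent ones do not.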

As for TPUs, one big advantage for ML is that they operate on float16/float32 (versus the usual f32/f64; in ML you care very little about precision) and are optimized for tensor calculations. For anything where you don't need the extra precision and are doing tensor work (lots of math/physics is tensor work), they will give you an advantage. (I'm not aware of anyone using them for things other than ML, but I wouldn't be surprised if people do.) But other workloads need more precision, and those won't use TPUs (AFAIK).
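A quick sketch of how little precision float16 carries (my own illustration, not from the thread): a running sum of many small values in float16 loses digits quickly, once the accumulator grows large enough that each addend falls below half a unit in the last place. ML training tolerates this; many scientific codes cannot.

```python
import numpy as np

vals = np.full(10000, 0.1)

# Reference: accumulate in float64, the "normal" precision
# mentioned above. The true total is 1000.
exact = vals.astype(np.float64).sum()

# Lossy: force every addition through a 16-bit accumulator.
# The running sum stalls well short of the true total once
# 0.1 is too small relative to the accumulator's spacing.
lossy = np.float16(0.0)
for v in vals.astype(np.float16):
    lossy = np.float16(lossy + v)

print(float(exact), float(lossy))
```

This is the trade the comment describes: halving the bit width buys tensor throughput at the cost of exactly this kind of accumulated error.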




All modern GPUs support f16.


This, and lots of things besides ML don't need high precision.





