
It took 30 years for computers to go from entire rooms to desktops, and another 30 years to go from desktops to our pockets.

I don't know if we can extrapolate, but I can imagine AI inference running on our desktops for $500 in a few years...



well, we can run AI inference on our desktops for $500 today, just with smaller models and far slower.


There is no need to use smaller models. You can run the biggest models, such as Llama 3.1 405B, on a fairly low-end desktop today:

https://github.com/lyogavin/airllm

However, it will be far slower, as you said.
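
For a sense of what that looks like in practice, here's a minimal sketch along the lines of the airllm README. The model ID and generation parameters are illustrative, and I haven't benchmarked this exact setup; the point is just that layers get streamed from disk one at a time, so VRAM stays small while throughput gets very slow:

    from airllm import AutoModel

    # airllm loads one transformer layer at a time from disk,
    # so even a 405B model fits in a few GB of VRAM.
    # Model ID is illustrative (the official repo is gated on HF).
    model = AutoModel.from_pretrained("meta-llama/Meta-Llama-3.1-405B")

    input_text = ["What is the capital of the United States?"]
    input_tokens = model.tokenizer(input_text,
                                   return_tensors="pt",
                                   truncation=True,
                                   max_length=128)

    # Expect seconds-to-minutes per token on low-end hardware:
    # every generated token re-streams all layers from disk.
    output = model.generate(input_tokens["input_ids"].cuda(),
                            max_new_tokens=20,
                            use_cache=True,
                            return_dict_in_generate=True)

    print(model.tokenizer.decode(output.sequences[0]))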



