
Well, we can run AI inference on our desktops for $500 today, just with smaller models and far slower.


There is no need to use smaller models. You can run the biggest models, such as Llama 3.1 405B, on a fairly low-end desktop today:

https://github.com/lyogavin/airllm

However, it will be far slower as you said.
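For context, here is a minimal sketch of how airllm is used, based on its README. The AutoModel interface and the Meta-Llama-3.1-405B-Instruct repo id are assumptions on my part, so check them against the current docs. The trick is that it streams one transformer layer at a time from disk instead of holding all the weights in memory, which is what lets it fit on modest hardware and also what makes it so slow:

    from airllm import AutoModel

    # Assumption: AutoModel interface per the airllm README and this HF repo id.
    MAX_LENGTH = 128
    model = AutoModel.from_pretrained("meta-llama/Meta-Llama-3.1-405B-Instruct")

    input_text = ["What is the capital of the United States?"]

    input_tokens = model.tokenizer(
        input_text,
        return_tensors="pt",
        return_attention_mask=False,
        truncation=True,
        max_length=MAX_LENGTH,
        padding=False,
    )

    # Layers are loaded from disk one at a time during the forward pass,
    # so only a few GB of VRAM are needed, but every generated token means
    # re-reading the whole model, hence the very low tokens/sec.
    generation_output = model.generate(
        input_tokens["input_ids"].cuda(),
        max_new_tokens=20,
        use_cache=True,
        return_dict_in_generate=True,
    )

    print(model.tokenizer.decode(generation_output.sequences[0]))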



