
Hopefully we see enough efficiency gains over time that this is true. The models I can run on my (expensive) local hardware are pretty terrible compared to the free models provided by Big LLM. I would hate to be chained to hardware I can't afford forever.


The breakthrough of diffusion for token generation brought compute requirements down a lot. But there are no local open-source versions yet.

Distillation for specialisation can also raise the capability of local models when we need them for specific tasks.
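For context, distillation usually means training the small local model to match a larger teacher's output distribution. A minimal sketch of the standard temperature-scaled KL distillation loss (names and temperature value are illustrative, not from any particular library):

```python
import numpy as np

def softmax(logits, T=1.0):
    # Temperature-scaled softmax; higher T softens the distribution.
    z = logits / T
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, T=2.0):
    # KL(teacher || student) on temperature-softened distributions,
    # scaled by T^2 so gradients stay comparable across temperatures.
    p = softmax(teacher_logits, T)
    q = softmax(student_logits, T)
    return float((p * (np.log(p) - np.log(q))).sum(axis=-1).mean() * T * T)

# Identical logits give zero loss; diverging logits give a positive loss.
same = distillation_loss(np.array([[1.0, 2.0, 3.0]]), np.array([[1.0, 2.0, 3.0]]))
diff = distillation_loss(np.array([[3.0, 2.0, 1.0]]), np.array([[1.0, 2.0, 3.0]]))
```

For specialisation you would compute this only on in-domain prompts, so the small model matches the big one where it matters and nowhere else.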

So it's chugging along nicely.



