For a local LLM, you can't really ask for a certain performance level; it is what it is.

Instead, you can ask about the architecture, be it dense or MoE, as the sketch below illustrates.
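
Here is a rough illustration of why architecture matters for local use (a minimal sketch in Python; the 70B/12B parameter counts are made-up assumptions, not any real model's specs). A dense model runs all of its weights for every token, while an MoE model must keep all weights resident in memory but only activates a fraction of them per token, so at the same memory footprint the MoE does far less compute per token:

    # Rough dense vs. MoE comparison at equal total size.
    # Parameter counts are illustrative assumptions only.

    def mem_gb(total_params_b, bytes_per_param=0.5):
        # 0.5 bytes/param approximates 4-bit quantization
        return total_params_b * 1e9 * bytes_per_param / 1e9

    def gflops_per_token(active_params_b):
        # ~2 FLOPs per active parameter per generated token
        return 2 * active_params_b

    dense = {"total_b": 70, "active_b": 70}   # dense: all params active
    moe   = {"total_b": 70, "active_b": 12}   # MoE: small active subset

    for name, m in [("dense-70B", dense), ("moe-70B/12B", moe)]:
        print(f"{name}: ~{mem_gb(m['total_b']):.0f} GB weights, "
              f"~{gflops_per_token(m['active_b']):.0f} GFLOPs/token")

Both hypothetical models need ~35 GB of memory, but the MoE spends roughly a sixth of the compute per token.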

Besides, suppose the best open-weight LLM right now is DeepSeek R1: is it practical for you to run R1 locally? If not, R1 means nothing to you.

Maybe R1 will be surpassed by Llama 4 Behemoth. Is it practical for you to run Behemoth locally? If not, Behemoth also means nothing to you.
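
A quick back-of-envelope check makes the practicality point concrete (a sketch; R1's 671B total parameter count is published, Behemoth's ~2T total is Meta's announced figure, and the 4-bit quantization and hardware budgets are assumptions I'm adding here):

    # Can the weights even fit? Ignores KV cache and activation overhead.
    BYTES_PER_PARAM = 0.5  # assume aggressive 4-bit quantization

    models = {
        "deepseek-r1": 671e9,       # published total parameter count
        "llama4-behemoth": 2000e9,  # ~2T total, per Meta's announcement
    }
    budgets_gb = {"24GB GPU": 24, "128GB Mac": 128, "512GB server": 512}

    for name, params in models.items():
        need_gb = params * BYTES_PER_PARAM / 1e9
        fits = [b for b, gb in budgets_gb.items() if gb >= need_gb]
        print(f"{name}: needs ~{need_gb:.0f} GB; fits: {fits or 'none of these'}")

Even at 4-bit, R1's weights alone come to roughly 335 GB, and Behemoth's to around a terabyte, so for most people "best open weights" and "runnable locally" are two different questions.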


