Hacker News

> Ollama does not use llama.cpp anymore

That is interesting. Did Ollama develop its own inference engine, or did you move to something else?

Any specific reason why you moved away from llama.cpp?



It's all open, and specifically, the new models are implemented here: https://github.com/ollama/ollama/tree/main/model/models



