
Thank you for sharing this! I have one question: Is there any plan to add support for local LLM / embeddings models?



"Right now the system only supports OpenAI as an embedding provider, but we plan to extend with local and OSS model support soon."

It's in the post you responded to.


Haha I feel so dumb now. Thank you!


This question keeps popping up, but I don't get it. Everyone and their dog has an OpenAI-compatible API. Why not just serve a local LLM and put 127.0.0.1 api.openai.com in your hosts file?

I mean, why is that even a question? Is there some fundamental difference between the black box that is GPT-* and, say, LLaMA, that I don't grok?
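
FWIW, you often don't even need the hosts-file trick: the official OpenAI SDK lets you override the base URL directly. Here's a minimal sketch in Python, assuming you have an OpenAI-compatible server (llama.cpp's llama-server, vLLM, Ollama, etc.) listening locally on port 8080; the model names are placeholders for whatever your server exposes:

  # Point the official OpenAI client at a local OpenAI-compatible
  # server instead of api.openai.com.
  from openai import OpenAI

  client = OpenAI(
      base_url="http://127.0.0.1:8080/v1",  # local server, not api.openai.com
      api_key="not-needed-locally",         # most local servers ignore the key
  )

  # Chat completion against the local model.
  chat = client.chat.completions.create(
      model="llama-3",  # placeholder: whatever model the server serves
      messages=[{"role": "user", "content": "Hello from a local model!"}],
  )
  print(chat.choices[0].message.content)

  # Embeddings work through the same client.
  emb = client.embeddings.create(
      model="nomic-embed-text",  # placeholder embedding model
      input="embed me locally",
  )
  print(len(emb.data[0].embedding))

One caveat with the hosts-file approach: the client will still speak HTTPS and expect a valid certificate for api.openai.com, which your local server won't have, so overriding base_url is usually the path of least resistance.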



