
Fine-tuning is generally not the best way to teach an LLM new knowledge; RAG is still more appropriate. Fine-tuning is more effective for controlling the format of responses, and the model can learn to handle new vocabulary through it, but it won't teach the model many new concepts or facts. Giving it access to a knowledge base is a better way to do that.
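To make the RAG point concrete, here's a minimal sketch of the retrieval-then-prompt pattern. It uses a toy bag-of-words similarity in place of a real embedding model, and the document strings and function names are illustrative assumptions, not any particular library's API:

```python
import math
import re
from collections import Counter

def embed(text):
    # Toy bag-of-words vector; real RAG systems use a learned embedding model.
    return Counter(re.findall(r"\w+", text.lower()))

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, docs, k=1):
    # Rank documents by similarity to the query and keep the top k.
    q = embed(query)
    ranked = sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]

def build_prompt(query, docs):
    # Prepend the retrieved facts as context, so the model answers from
    # the knowledge base instead of from fine-tuned weights.
    context = "\n".join(retrieve(query, docs))
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"

docs = [
    "Q3 revenue was $4.2M, up 12% year over year.",
    "The office is located in Berlin.",
]
prompt = build_prompt("What was Q3 revenue?", docs)
```

The prompt string would then be sent to the LLM; the model sees the retrieved fact at inference time, so no weights need to change when the knowledge base is updated.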


