
This reminds me of finetuning LLMs vs using RAG:

In RAG you "know" what the model knows, and it's easy for it to cite sources - you literally hand the relevant documents to the model in the prompt.
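To make that concrete, here's a rough sketch of the prompt-assembly side (illustrative Python, with made-up chunk contents and file names): the retrieved passages and their sources go straight into the prompt, so citing them back is trivial.

    # Minimal RAG-style prompt assembly. Assumes retrieved_chunks came from
    # some retriever (e.g. a vector store); the contents here are illustrative.
    retrieved_chunks = [
        {"source": "docs/install.md", "text": "Run `pip install foo` to install."},
        {"source": "docs/faq.md", "text": "foo requires Python 3.9 or newer."},
    ]

    context = "\n\n".join(
        f"[{i + 1}] ({c['source']}) {c['text']}" for i, c in enumerate(retrieved_chunks)
    )

    prompt = (
        "Answer using only the numbered context below and cite sources like [1].\n\n"
        f"Context:\n{context}\n\n"
        "Question: How do I install foo?"
    )
    # Everything the model can cite is right there in `prompt`,
    # so attributing an answer to a source is trivial.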

In finetuning, the model learns something, but it might not be able to reproduce it perfectly later on. Still, the model's weights have been changed.
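For contrast, a toy gradient step (plain PyTorch, with a tiny Linear layer standing in for an LLM - not any particular finetuning stack) shows what finetuning does: the weights move, and nothing in the process records where the new knowledge came from.

    # Toy "finetuning" step: knowledge gets folded into the weights,
    # with no source attached to it.
    import torch
    import torch.nn as nn

    model = nn.Linear(8, 8)  # stand-in for an LLM
    optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3)

    before = model.weight.detach().clone()

    x, target = torch.randn(4, 8), torch.randn(4, 8)  # stand-in training batch
    loss = nn.functional.mse_loss(model(x), target)
    loss.backward()
    optimizer.step()

    # The learned information now lives in the weight delta;
    # there is no record of which training example produced it.
    print((model.weight.detach() - before).abs().max())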



