
Funnily enough, I am interested in this semantic argument. Do LLM trainers actually feed their « beast » with prompts from the past, especially ones where a human corrects a false assumption the LLM hallucinated? As a non-specialist I would see a lot of value in doing so, but I'll leave it to the experts to clarify that point.
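To make concrete what I mean, here is a rough sketch (purely hypothetical — the log format and the correction heuristic are my own invention, not any lab's actual pipeline) of how one might mine such human corrections from past chat logs and turn them into supervised fine-tuning pairs:

  import json

  # Hypothetical cues suggesting the user is correcting the assistant.
  CORRECTION_CUES = ("that's wrong", "actually,", "no, the correct", "you hallucinated")

  def is_correction(user_turn: str) -> bool:
      text = user_turn.lower()
      return any(cue in text for cue in CORRECTION_CUES)

  def mine_corrections(log_path: str):
      """Yield (prompt, completion) pairs from a JSONL chat log.

      Each line is assumed to look like:
      {"turns": [{"role": "user", "text": ...}, {"role": "assistant", "text": ...}, ...]}
      """
      with open(log_path) as f:
          for line in f:
              turns = json.loads(line)["turns"]
              for i in range(2, len(turns)):
                  prev_user, assistant, follow_up = turns[i - 2], turns[i - 1], turns[i]
                  if (prev_user["role"] == "user"
                          and assistant["role"] == "assistant"
                          and follow_up["role"] == "user"
                          and is_correction(follow_up["text"])):
                      # Pair the original question with the human's correction,
                      # instead of the hallucinated answer.
                      yield {"prompt": prev_user["text"],
                             "completion": follow_up["text"]}

  if __name__ == "__main__":
      for example in mine_corrections("chat_logs.jsonl"):
          print(json.dumps(example))

Whether anything like this is actually done (and how the corrections are verified before training on them) is exactly the question I'm asking.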





