
I always feel that if you share a problem here where LLMs fail, it will end up in their training set and they won't fail on that problem anymore, which means future models will retain the same underlying errors, but you will have lost your ability to detect them.

