
> What you're referring to isn't a bug. It's inherent to the way LLMs work. It can't "go away" in an LLM model because...

The 'bug' presented above is simply a case of the model failing to understand correctly. Larger models, mixture-of-experts (MoE) models, models with truth gauges, better selection functions, etc. will make this better in the future.

> ...they don't. They are prediction machines. They don't "understand" anything.

Implementation detail.
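
To make the "prediction machine" / "selection function" point concrete: at each step the model produces logits over the vocabulary, and a separate selection (decoding) function decides which token to emit. A minimal toy sketch (hypothetical logits and vocabulary, not any specific model or library):

    # Toy illustration: an LLM's forward pass yields logits over the vocabulary;
    # a *selection function* (greedy, temperature, top-k, ...) picks the next token.
    import math
    import random

    vocab = ["Paris", "London", "Berlin", "banana"]
    logits = [4.2, 2.1, 1.9, 0.3]  # hypothetical next-token scores

    def softmax(xs, temperature=1.0):
        exps = [math.exp(x / temperature) for x in xs]
        total = sum(exps)
        return [e / total for e in exps]

    def greedy(logits):
        # Always emit the highest-scoring token.
        return max(range(len(logits)), key=lambda i: logits[i])

    def sample(logits, temperature=1.0, top_k=2):
        # Restrict to the top_k candidates, then sample from the renormalized distribution.
        top = sorted(range(len(logits)), key=lambda i: logits[i], reverse=True)[:top_k]
        probs = softmax([logits[i] for i in top], temperature)
        return random.choices(top, weights=probs, k=1)[0]

    print("greedy:", vocab[greedy(logits)])
    print("sampled:", vocab[sample(logits, temperature=0.7)])

Swapping the selection function (or sharpening the underlying distribution with a bigger or better-trained model) changes what gets emitted, which is the sense in which the failure mode can improve even if the mechanism stays "just prediction".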



