> What you're referring to isn't a bug. It's inherent to the way LLMs work. It can't "go away" in an LLM model because...
The 'bug' presented above is simply a case of the model not understanding correctly. Larger models, mixture-of-experts (MoE) models, models with truth gauges, better selection functions, etc. will make this better in the future.
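For anyone unfamiliar with the terms: below is a minimal toy sketch of the kind of "selection function" a MoE layer uses, i.e. a gate that routes each input to a few experts and mixes their outputs. Everything here (the gating matrix, the toy experts, the shapes) is made up for illustration and isn't any particular model's implementation.

```python
# Toy illustration of top-k gating in a mixture-of-experts layer.
# All names and shapes are hypothetical, chosen only to show the idea.
import numpy as np

def moe_forward(x, gate_w, experts, k=2):
    """Route input x to the top-k experts chosen by a softmax gate."""
    logits = x @ gate_w                      # one score per expert
    top = np.argsort(logits)[-k:]            # indices of the k highest-scoring experts
    weights = np.exp(logits[top])
    weights /= weights.sum()                 # renormalise over the chosen experts
    # Output is the gate-weighted mix of the selected experts' outputs
    return sum(w * experts[i](x) for w, i in zip(weights, top))

rng = np.random.default_rng(0)
d, n_experts = 8, 4
# Each "expert" is just a random linear map in this sketch
experts = [lambda x, W=rng.normal(size=(d, d)): x @ W for _ in range(n_experts)]
gate_w = rng.normal(size=(d, n_experts))
print(moe_forward(rng.normal(size=d), gate_w, experts))
```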
> ...they don't. They are prediction machines. They don't "understand" anything.
Implementation detail.