> What you're referring to isn't a bug. It's inherent to the way LLMs work. It can't "go away" in an LLM model because...
The 'bug' presented above is simply a case of the model not understanding correctly. Larger models, mixture-of-experts (MoE) models, models with truth gauges, better selection functions (a toy sketch of what I mean is below), etc. will make this better in the future.
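To be concrete about what I mean by a "selection function": in an MoE layer, a learned gate scores the experts and routes each token to the top few. This is only a toy NumPy sketch under my own assumptions (the names `moe_forward`, `gate_w`, and the random linear "experts" are all made up for illustration), not any particular model's implementation:

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over the last axis.
    z = x - np.max(x, axis=-1, keepdims=True)
    e = np.exp(z)
    return e / np.sum(e, axis=-1, keepdims=True)

def moe_forward(x, gate_w, experts, top_k=2):
    """Route a token embedding to its top-k experts and mix their outputs.

    x:       (d,) token embedding
    gate_w:  (d, n_experts) gating weights -- the learned 'selection function'
    experts: list of callables, each mapping (d,) -> (d,)
    """
    scores = softmax(x @ gate_w)               # probability per expert
    top = np.argsort(scores)[-top_k:]          # indices of the k highest-scoring experts
    weights = scores[top] / scores[top].sum()  # renormalize over the chosen experts
    return sum(w * experts[i](x) for w, i in zip(weights, top))

# Toy usage: 4 random linear "experts" over an 8-dim embedding.
rng = np.random.default_rng(0)
d, n_experts = 8, 4
gate_w = rng.normal(size=(d, n_experts))
experts = [(lambda W: (lambda v: v @ W))(rng.normal(size=(d, d))) for _ in range(n_experts)]
print(moe_forward(rng.normal(size=d), gate_w, experts))
```

The point is just that "which expert answers" is itself a learned, improvable function, which is why I don't think the failure mode above is baked in forever.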
> ...they don't. They are prediction machines. They don't "understand" anything.
Prediction without understanding is just extrapolation. I think you're just extrapolating when you predict the abilities of future LLM-based prediction machines.