The point is that these problems will follow the same trajectory as every other tech bug. In other words, they will go away eventually.
But the Rubicon has still been crossed. There is a general-purpose computer system that understands human language and can write real-sounding human language. That's a sea change.
> What you're referring to isn't a bug. It's inherent to the way LLMs work. It can't "go away" in an LLM model because...
The 'bug' presented above is a simple case of the model not understanding correctly. Larger models, MoE models, models with truth gauges, better selection functions, etc. will make this better in the future.
> ...they don't. They are prediction machines. They don't "understand" anything.
Prediction without understanding is just extrapolation. I think you're just extrapolating your own prediction about the abilities of future LLM-based prediction machines.