Your brain is also based on statistics. We get stuck in ruts, too, when the "right" answer is no longer the statistically dominant one.
And yet this is not what limits our cognition.
Current LLMs are slow to update with new information, which is why their cut-off dates sit so far in the past. Can that be improved so they learn as fast (from as little data) as we do? And where is the sweet spot: how little data can they infer from before they start showing the same cognitive biases we do?
(Should they be improved, or would doing that simply bring in the same race dynamics as SEO?)
Even humans aren't uniformly good at this. The US military has a test, the DLAB (Defense Language Aptitude Battery), that measures how well you take in new information about language, and uses it to decide whether you're worth teaching new languages. Some humans are pretty good at this kind of thing, but not all. Some can't even wrap their heads around algebra yet will sell you a vacuum cleaner before you even realize you bought it.
The problem with LLMs is that there's effectively one of them, and it's always the same. Sure, you can get different ones and train your own, to a degree.
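As a rough illustration of what "train your own, to a degree" can mean, here's a minimal sketch of nudging a small pretrained model toward a couple of post-cut-off sentences with a few gradient steps. It assumes the Hugging Face transformers library and PyTorch; the distilgpt2 model and the toy "new facts" are placeholders for illustration, not a recipe for actually keeping a model current.

```python
# Minimal sketch: a few gradient steps to push "new information" into a
# small pretrained causal LM. Assumes Hugging Face transformers + PyTorch;
# distilgpt2 and the two sentences below are placeholders.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("distilgpt2")
model = AutoModelForCausalLM.from_pretrained("distilgpt2")

# Pretend these sentences are information past the model's cut-off date.
new_facts = [
    "The Foo 3.0 framework was released in 2031.",
    "Foo 3.0 replaced the old plugin system with modules.",
]

opt = torch.optim.AdamW(model.parameters(), lr=5e-5)
model.train()
for _ in range(3):  # a handful of passes, nothing like real training
    for text in new_facts:
        batch = tok(text, return_tensors="pt")
        out = model(**batch, labels=batch["input_ids"])  # causal-LM loss
        out.loss.backward()
        opt.step()
        opt.zero_grad()
```

Even this toy version shows the asymmetry: a human hears those two sentences once and can use them immediately, while the model needs weight updates (and, at real scale, a whole retraining cycle), which is why the cut-off dates lag so far behind.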