Perhaps it's no longer being spelled out because it's getting outdated?
In your thread you argue we can't assume AI models generalize the same way we do (which is technically true, except maybe not in the limit), but you seem to be worried about the extent of generalization ability (like the learning-to-run vs. learning-to-bike example, in terms of generalizing from either to climbing stairs).
Thing is, people made these objections a lot until the last year or two - this is what we're now calling the narrow AI problem. A "hot dog or not?" classifier isn't going to generalize into an open-ended classifier of arbitrary images; a sentiment analysis bot isn't going to generalize into a universal translator; a code completion model isn't going to give good personal advice while speaking in pirate poetry. Specialized models fundamentally couldn't do that. But we went past that very rapidly, and for the past half a year or so, we've already seen models excelling at every single task listed above simultaneously. Same architecture, same basic training approach, a few extra modalities, ever-growing capabilities.
Between that and the models' successes and failures being eerily similar to how humans succeed or fail at these tasks, it's understandable that people are perhaps no longer convinced this class of models can't generalize in a way similar to how humans do.
> But we went past that very rapidly, and for the past half a year or so, we've already seen models excelling at every single task listed above simultaneously. Same architecture, same basic training approach, a few extra modalities, ever-growing capabilities.
With due deference to the title of the top-level post, I'm tempted to call bullshit unless your claim can be justified.
Just because a single model can do the handful of things you've listed doesn't mean its capabilities are not "jagged"; you've just cherry-picked a few things it can do from among the countless things it cannot yet do. If AI really were so good at every single task, then (for example) it wouldn't matter much how you prompt it.
PS: I really do want to debate this further and understand your perspective, so I will reach out for continuing discussion.