If you give GPT-3 code with a bug in it and ask it to find the bug, it can't really do that. I'm pretty sure that if you gave it all the data and asked it why things aren't working the way they should, it wouldn't have any actual understanding.
There's a depth to explaining things that GPT still can't reach. It's still astonishing, and it has completely changed my idea of what AI can do, like writing plays with incredible context (better than most humans!), but there are still major limits.
I think it's more likely that the GP poorly worded their initial statement than that they're actually moving the goalposts. They were probably having trouble with a few thorny bugs, tried ChatGPT, got nowhere, and forgot to qualify their initial statement with "for the few non-trivial bugs I tried".
From the external point of view, the goalposts moved, but within the GP's poorly expressed mental model, they haven't. But that's just a guess.