This argument generalises to all possible AI systems and thus proves way too much.
>[AI system]s are not general, but they show that a specific specialization ("[process sequential computational operations]") can solve a lot more problems than we thought it could.
Or if you really want:
>Humans are not general, but they show that a specific specialization ("neuron fires when enough connected neurons fire into it") can solve a lot more problems than we thought it could.
This is just sophistry: the method by which an entity achieves things doesn't matter; what matters is whether it achieves them. If it can achieve multiple tasks across multiple domains, it's more general than a single-domain model.
For poetry: counting syllables is a significant part of most poetry forms, so if you can't count syllables, you can't do poetry.
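To make the 5-7-5 check concrete, here is a minimal sketch of a syllable-count validator. English syllable counting is genuinely hard; the vowel-group heuristic below is an assumption for illustration, not a real prosody tool (a pronunciation dictionary would be needed for accurate counts):

```python
import re

def count_syllables(word: str) -> int:
    # Heuristic: one syllable per run of consecutive vowels,
    # minus a trailing silent 'e' (e.g. "silence" -> 2, not 3).
    word = word.lower()
    count = len(re.findall(r"[aeiouy]+", word))
    if word.endswith("e") and count > 1 and not word.endswith(("le", "ee")):
        count -= 1
    return max(count, 1)

def is_575(poem: str) -> bool:
    # A haiku candidate passes only if it has exactly three lines
    # whose syllable counts are 5, 7, and 5.
    lines = [l for l in poem.strip().splitlines() if l.strip()]
    if len(lines) != 3:
        return False
    counts = [sum(count_syllables(w) for w in re.findall(r"[a-zA-Z']+", l))
              for l in lines]
    return counts == [5, 7, 5]
```

The point is that the check itself is mechanical; a model that claims to write haiku but fails it is failing at the defining constraint of the form.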
Let's say you want a 5-7-5 haiku; this is ChatGPT:

This is not a 5-7-5 haiku. The original claim, for reference:
>LLMs are not general, but they show that a specific specialization ("guess next token") can solve a lot more problems than we thought it could.