Humans can't know a lot about everything, but LLMs can. So an LLM that sacrifices all that broad knowledge for one specific application no longer feels like an AI, since its shortcomings would show much more obviously.
They're still very bounded systems (not some galaxy brain), and training them is expensive, so tradeoffs in what they learn have to be made. The tradeoffs are just different from the ones humans make. Note that they're still able to interact via natural language!