But it makes a radical difference to the quality of the answer. How is the LLM (or collaboration of LLMs) going to get all the useful context when it isn't told what that context is?
(Maybe it’s obvious in how MCP works? I’m only at the stage of occasionally using LLMs to write half a function for me)
In short, that's the job of the software/tooling/LLM to figure out, not the job of the user to specify. The user doesn't know what the context needs to be; if they did, and could specify it, they probably wouldn't need an LLM in the first place.
MCP servers are a step in the direction of allowing the LLM to essentially build up its own context, based on the user prompt, by querying third-party services/APIs/etc. for information that isn't part of, e.g., its training data.
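For a concrete (if simplified) picture: an MCP server just exposes named tools that the model can choose to call mid-conversation to pull in information it wasn't given up front. Here's a minimal sketch using the FastMCP helper from the official Python SDK (`pip install mcp`); the issue-tracker tool and its stubbed data are made up for illustration:

```python
# Minimal MCP server sketch: exposes one tool the LLM can call to
# fetch context on its own, rather than the user pasting it in.
# Assumes the FastMCP helper from the official Python SDK; the
# "issue-tracker" server and get_issue tool are hypothetical.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("issue-tracker")

@mcp.tool()
def get_issue(issue_id: str) -> str:
    """Fetch an issue's description so the model can add it to its context."""
    # A real server would query your actual tracker's API; stubbed here.
    issues = {"BUG-42": "Login fails when the username contains a '+'."}
    return issues.get(issue_id, "Issue not found.")

if __name__ == "__main__":
    mcp.run()  # serves over stdio by default
```

The point is that the user's prompt can stay as vague as "fix BUG-42": the client advertises the tool to the model, the model decides it needs the issue text, calls `get_issue("BUG-42")`, and builds its own context from the result.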