> I ask the question, what is 2 * 2, which is an obviously loaded question that's pattern matched to death.
Yeah, that was my point. Small codomain -> easy to validate. Large codomain -> open to interpretation. You implied that to prove reasoning, you pick a prompt with a large codomain, and if the LLM answers with precise accuracy, then voilà, reasoning.
So my question was: can you give an example of a prompt with a large codomain that isn't subject to wide interpretation? It seems the wider the codomain, the easier it is to say, "look! reasoning!"
Your original claim was that an LLM can reason, and you say this can be proven by picking one of these prompts with a large codomain that has a precise answer which requires reasoning. If an LLM can arrive at that specific answer out of a huge codomain, and that answer requires reasoning, you claim that proves reasoning. Do I have that right?
So my question is, and has been for these three replies: can you give any example of one of these prompts?