> In theory a LLM could learn any model at all, including models and combinations of models that used logical reasoning.
Yes.
But that is not the same as GPT having its own logical reasoning.
An LLM that creates its own behavior would be a fundamentally different thing than what "LLM" is defined to be here in this conversation.
This is not a theoretical limitation: it is a literal description. An LLM "exhibits" whatever behavior is present in the content it modeled. That is, fundamentally, the only behavior an LLM has.