>Human innovation can arise by using these experiences as source data for composition of text or images. LLMs, by contrast, are limited to training on text and images/video exclusively.
Right, but my concept still applies. What's unique about the LLM is that it is fed a massive amount of textual data in such a way that it can essentially output text AS if it were a human who does experience those emotions. From the perspective of you and me, this is no different than what we experience in real life.
How can you be sure the people around you feel emotions just as you do? Do you just assume it? Why shouldn't you assume it for an LLM? The human, like the LLM, uses English to describe and communicate to you. In terms of raw logical evidence, we can't confirm that LLMs feel emotions any more than we can confirm that other humans do. The technical evidence for both is essentially identical.