
> In your opinion, how does the "hallucination" issue differ from the same behaviour we see in humans?

I feel like if you have any belief in philosophy, then LLMs can only be interpreted as a parlour trick (on steroids). Perhaps we are fanciful in believing we are something greater than LLMs, but there is the idea that we respond using rhetoric based on trying to find reason within what we have learned and observed. From my primitive understanding, an LLM's rhetoric and reasoning are entirely implied from the effectively infinite (compared to the limits of human capacity to store information) amount of knowledge it has consumed.

I think if LLMs were equivalent to human thinking, then we'd all be a hell of a lot stupider, given our lack of "infinite" knowledge compared to LLMs.



> if you have any belief in philosophy [...]

You're going to have to explain which part of philosophy you mean, because what came after this doesn't follow from that premise at all. It's like saying a Chinese Room is fundamentally different from a "real" solution even though nobody can tell the difference. That's not a "belief in philosophy", that's human exceptionalism and perhaps a belief in the soul.


The belief that your thoughts are constructed from an understanding of principles such as logic, rationality, and ethics; that your interactions are built on a solid grasp of these ideas, as opposed to every train of thought just being glued together from pertinent fragments you can recall from your knowledge in response to a prompt provided by the circumstances of reality.

> that's human exceptionalism and perhaps a belief in the soul.

I would also argue that LLMs are not proven to be equivalent to what's going on in our minds. Is it really "human exceptionalism" to state that LLMs are not yet, and perhaps never will be, what we are? From their construction, it seems somewhat evident that there are differences, since we don't raise humans the same way we train LLMs. In terms of CPU years, babies require significantly less time to train.



