
> its model of human behaviour

What model of human behaviour? I asked ChatGPT if it has a model of human behaviour, and it said "ChatGPT does not have an explicit model of human behavior".



As I just said, ChatGPT is not self-aware, so it can't answer questions about its own workings like that. (This is somewhat true of humans too: there is a lot going on in our subconscious, but we are at least somewhat aware of our own conscious thoughts.)

It just hallucinated an answer.

If it didn't have a model of human behaviour, then it wouldn't work, because the whole idea is that it's simulating what someone acting as a helpful assistant would do.
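For the curious, a minimal sketch of what I mean, using the openai Python client (the prompt wording is my own invention; the point is that the "helpful assistant" persona is conditioning text supplied to the model, not something it discovers by introspection):

    # Sketch using the openai Python client (openai>=1.0).
    # The persona comes in as a system message; the model then
    # continues the conversation as that persona plausibly would.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    resp = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[
            # Supplied framing, not self-knowledge:
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": "Do you have a model of human behaviour?"},
        ],
    )
    print(resp.choices[0].message.content)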


Self-awareness isn't a requirement for the presentation of information, which LLMs continually prove.

By this logic, isn't every answer a hallucination? If it's trained on external information, why wouldn't that information include observations made about it? A human doesn't even need to be able to read to hand someone a book containing contextually relevant information.


What I meant is that it can't introspect and work out how its own thought processes work. It can only know that if it was in its training data or given to the model in some other way; it's not (yet?) possible for it to learn about its inner workings just by reasoning them out.

But this is true of humans in many cases too.

Its training data is from 2021, so it won't contain anything about how ChatGPT works, though maybe a bit about how LLMs in general work.


At some point it will be trained on data that describes itself, which will make it self-aware in a way that will probably prompt much argument about what precisely we mean by the concept of self-awareness.


Logic systems from the 1980s could explain their reasoning.
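For example, a toy forward-chaining engine that keeps a trace of which rules fired, in the spirit of MYCIN-style "how did you conclude that?" explanations (illustrative only, not any real 1980s system's code):

    # Toy rule engine: facts are strings, rules map premise sets to a conclusion.
    rules = [
        ({"has_fever", "has_cough"}, "flu_likely"),
        ({"flu_likely"}, "recommend_rest"),
    ]

    def infer(facts):
        trace = []  # human-readable record of each inference step
        changed = True
        while changed:
            changed = False
            for premises, conclusion in rules:
                if premises <= facts and conclusion not in facts:
                    facts.add(conclusion)
                    trace.append(f"{conclusion} because {sorted(premises)}")
                    changed = True
        return facts, trace

    _, trace = infer({"has_fever", "has_cough"})
    print("\n".join(trace))
    # flu_likely because ['has_cough', 'has_fever']
    # recommend_rest because ['flu_likely']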



