
A possible outcome is that it turns out intelligence is orthogonal to feeling/consciousness/qualia, and we start to recognize the latter as the true defining nature of humanity and personhood - and perhaps thereby extend a little more empathy to animals. They (comparatively) lack intelligence, but still experience those biology-rooted feelings. And in that renewed appreciation of animal rights we might hope that superhumanly-intelligent AIs will agree.


I don’t see why AGI plus sensory input couldn’t in principle give you qualia. In fact, I’ve heard some folks, philosophers among them, argue that sensory input is essential to getting AGI in the first place, and that since current AI like LLMs don’t have sensory input, they therefore can’t develop general intelligence.

If we put multimodal GPT-4 on a robot and instructed it to drive somewhere while avoiding obstacles and hazards… that right there is a primitive type of self-preservation instruction. It could potentially interpret that as generalizable self-preservation, since it would associate “hazard avoidance” with “self-preservation” and has tons of examples of what “self-preservation” means in its training weights. Putting LLMs into action like this can potentially lead to unexpected behavior of that sort, although I don’t think the mechanisms in GPT-3/4 are there yet to enable it without a bunch of extra hooks.


The phrase "in principle" is eliding quite a lot here since we don't understand what process gives rise to qualia in ourselves or whether qualia even exists in the same way as other categories of things. Certainly our naive intuitions suggests that things like conditional/able probability distributions don't have qualia, and so it is unclear how adding multiple modes to a model like ChatGPT (which is just a giant conditional probability distribution) could produce quale or (conversely) why, if conditional probability distributions _can_ have qualia why ChatGPT as it stands now wouldn't have such. When I run my eyes over text the words produce meanings which manifest in my mind and that sensation is a kind of quale, so why not so for ChatGPT?

I personally don't think ChatGPT has any experience at all for what it is worth.


The input prompt is their only sensory input.


Is that a major philosophical problem? GPT-4 is ostensibly multi-modal. Except for smell, we get our sensory input through the thalamus.


> we might hope that superhumanly-intelligent AIs will agree.

this kind of fear of misalignment bamboozles me - is there any proposed AI architecture that is not merely call and response? in what world can we simply not call a function again if we don't like the answer?


> this kind of fear of misalignment bamboozles me - is there any proposed AI architecture that is not merely call and response?

Yes, models that interact with the physical world or other external real-time systems would (even if the underlying model is “call-and-response” in a sense) be called in an infinite loop (possibly with exit conditions) with captured input data (sensor readings, commands if available, potentially also past output).

Heck, the ReAct architecture, which is used to provide extensions (retrieval, web lookup, interfaces to other systems) for chat-style agents, gives them an action loop without a human in the loop. Usually this is explicitly limited and designed to take a finite number of actions in the course of getting to a response, but it could be unlimited, or even when limited it could connect to actions that involve reprompting (immediate or delayed) without a human in the loop.
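
To make that concrete, the loop is roughly the Python sketch below (hand-wavy and purely illustrative: "llm", "tools", "parse_action", and the step limit are placeholders, not any particular library's API):

    # ReAct-style action loop: the model proposes an action, the action's
    # result ("observation") is appended to the prompt, and the model is
    # called again -- no human in between.
    MAX_STEPS = 8  # the usual explicit limit; remove it and the loop is unbounded

    def parse_action(reply):
        # Assumes the model emits a line like "Action: search[query]".
        line = [l for l in reply.splitlines() if l.startswith("Action:")][-1]
        name, arg = line[len("Action:"):].strip().split("[", 1)
        return name.strip(), arg.rstrip("]")

    def react_loop(llm, tools, task):
        transcript = f"Task: {task}\n"
        for _ in range(MAX_STEPS):
            reply = llm(transcript)            # stand-in for the model call
            transcript += reply + "\n"
            if "Final Answer:" in reply:
                return reply                   # the model decided it is done
            action, arg = parse_action(reply)
            observation = tools[action](arg)   # retrieval, web lookup, other systems
            transcript += f"Observation: {observation}\n"
        return transcript                      # step budget exhausted

The point being: drop MAX_STEPS, or let one of the tools reprompt the model, and nothing structural requires a human anywhere in that cycle.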


Yes there are: ones that put language models in action loops, where the output is sent to a command line or something and the response is sent back to the model as an extension of the prompt.
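
Something like this, roughly (purely illustrative Python; "llm" is a stand-in for whatever completion call you're using, and piping model output straight into a shell is exactly as unsafe as it sounds):

    import subprocess

    def shell_loop(llm, prompt, max_turns=5):
        # Model output is executed as a shell command; the command's output
        # is appended to the prompt and the model is called again.
        for _ in range(max_turns):
            command = llm(prompt)
            result = subprocess.run(command, shell=True,
                                    capture_output=True, text=True)
            prompt += f"\n$ {command}\n{result.stdout}{result.stderr}"
        return prompt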

That said, they definitely aren't going to be fooming this year!



