
This makes me think about what the principled requirements for artificial intelligence are. It must be designed, it must have sensors and actuators, and it must have “goals” in some manner.

But does AI require digital computers? Can you have AI with analog computers? Or are computers required at all? Perhaps AI only requires the (artificial) design of intelligent processes in the world.

For instance, the autopilot (known as “the Pilot’s Assistant”) was invented in 1912, long before digital computers. If that’s an example of AI, then there is a need for conceptual reformulation…



> Can you have AI with analog computers?

Pattern-recognizing neural networks can certainly exist as analog computers. The very first perceptron-based image recognition systems were analog and optoelectronic. A grid of light sensors wired up in a network, with the weights set by potentiometers, self-adjusted by little stepper motors during training.

https://towardsdatascience.com/rosenblatts-perceptron-the-ve...

https://americanhistory.si.edu/collections/search/object/nma...
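For fun, the learning rule those machines embodied fits in a few lines. Here's a minimal sketch in Python of the update the motor-driven potentiometers were performing (the toy photocell "images" and numbers are made up for illustration):

    # Perceptron update rule: nudge each weight to reduce the error;
    # the Mark I did this physically, driving potentiometers with motors.
    def train(samples, n_inputs, lr=0.1, epochs=20):
        w = [0.0] * n_inputs      # the "potentiometer settings"
        b = 0.0
        for _ in range(epochs):
            for x, target in samples:                # target is 0 or 1
                y = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
                err = target - y                     # -1, 0, or +1
                w = [wi + lr * err * xi for wi, xi in zip(w, x)]
                b += lr * err
        return w, b

    # Toy 4-photocell "images": class 1 when the left half is brighter.
    samples = [((1, 1, 0, 0), 1), ((0, 0, 1, 1), 0),
               ((1, 0, 0, 0), 1), ((0, 0, 0, 1), 0)]
    w, b = train(samples, n_inputs=4)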


John McCarthy, who coined the term "artificial intelligence", argued that even his thermostat could be said to have beliefs. Sometimes it thinks the room is too hot, and sometimes it thinks the room is too cold.


In cybernetics there's a result, the good regulator theorem: every good regulator of a system must be a model of that system. For a self-regulating system, that means the system must contain a model of itself. It holds even for the system of thermostat-and-room (although the sense in which it does is subtle).

https://en.wikipedia.org/wiki/Good_Regulator
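To make the thermostat case concrete, here's a toy sketch, assuming first-order room dynamics and invented constants: the regulator carries a copy of the room's update rule and acts on where the temperature is about to be rather than where it is.

    # Toy regulator that contains a model of its system. Assumes first-order
    # room dynamics; all constants are invented for illustration.
    LEAK = 0.1     # fraction of (room - outside) temperature lost per step
    HEAT = 1.5     # degrees the heater adds per step
    OUTSIDE, TARGET = 5.0, 20.0

    def room_step(temp, heater_on):
        """The actual system being regulated."""
        return temp - LEAK * (temp - OUTSIDE) + (HEAT if heater_on else 0.0)

    def thermostat(temp, heater_on):
        """The regulator: it carries a copy of the room dynamics (its model)
        and switches on whenever the room is about to fall below target."""
        predicted = temp - LEAK * (temp - OUTSIDE) + (HEAT if heater_on else 0.0)
        return predicted < TARGET

    temp, heater_on = 12.0, False
    for _ in range(60):
        heater_on = thermostat(temp, heater_on)
        temp = room_step(temp, heater_on)
    print(round(temp, 1))   # hovers just under TARGET

The theorem's claim is that any regulator doing the job well has to contain something equivalent to room_step, implicitly or explicitly; here it's explicit.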


Which explains consciousness: both why we are self-aware (because we're modelling ourselves) and why we aren't more self-aware (because the model is only accurate enough to be effective)...


I think so, yep. (Although I would say that it explains our subjective experience of mind, "what it's like to be a human", rather than the existence and nature of subjective experience itself, but that's getting into metaphysics.)

It seems to me that there's no good reason not to reuse the term "ego" for the cybernetic self-models in human minds.

This idea also has interesting ramifications when you think about the boundaries of "self" and "other". The system being self-modelled is not just the human being: it includes the aspects and entities around the human too. And in humans we have models of other people and their self-models, they have models of us and our self-models, and in turn we have models of their models of our models, etc.


FWIW I believe the other-models are what have been strongly selected for, and the self-model is just an epiphenomenon arising from the relative ease of modelling one's self after having gained the ability to represent models of others' selves:

cf https://news.ycombinator.com/item?id=23475069


Sounds reasonable to me.

I appreciate the link to Shannon's paper. I've played a browser-based version of such a machine/game and it's pretty eerie.
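For anyone who hasn't seen it: the eerie effect takes surprisingly little machinery. A rough sketch of the idea (Shannon's machine conditioned on win/loss patterns over the last couple of rounds; this simplified version is my own take and just counts what followed the player's last two choices):

    import random
    from collections import defaultdict

    # Simplified penny-matching predictor: guess the player's next choice
    # from what followed their last two choices. (Shannon's machine used a
    # richer state based on win/loss patterns, but the principle is the same.)
    counts = defaultdict(lambda: [0, 0])   # history -> [times 0 followed, times 1 followed]
    history = ()
    score = 0

    while True:
        move = input("0 or 1 (anything else quits): ")
        if move not in ("0", "1"):
            break
        move = int(move)
        c = counts[history]
        guess = random.randint(0, 1) if c[0] == c[1] else (0 if c[0] > c[1] else 1)
        score += 1 if guess == move else -1
        print(f"I guessed {guess}. Machine score: {score}")
        counts[history][move] += 1
        history = (history + (move,))[-2:]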

Long ago a friend of mine had this cat. She liked to sit on your lap but hated it when you got up; she would always jump down the instant you started to rise.

However...

She could tell from the pattern of muscle tensions in your legs whether you were really getting up or just moving a bit to rearrange your butt or whatever, and she would only jump if you were getting up. She was flawless at this. In fact, a few times she jumped down a split-second before I knew I was getting up. Like Bruce Lee, she could detect and respond to the intention to move, even before the motion was consciously known to the mover himself (me, in this case).


Saying AI requires a digital computer is like saying biological intelligence requires a brain. If intelligence is substrate-independent, then it can run on anything.


It could use humans as actuators :-)

As for the autopilot - a plane autopilot is fundamentally a device to make the plane go straight; you don't need AI, or much processing power at all, to do that.
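To make that concrete, here's a toy heading-hold loop, just a proportional controller. The gains and dynamics are invented for illustration; Sperry's device achieved the equivalent with gyroscopes and servos:

    # Toy heading-hold loop: deflect the rudder in proportion to heading
    # error. Gains and dynamics are invented; the point is how little
    # processing a "go straight" device needs.
    KP = 0.5           # proportional gain
    RESPONSE = 0.2     # degrees of turn per degree of rudder, per step

    def autopilot_step(heading, target):
        error = target - heading                       # degrees off course
        rudder = max(-30.0, min(30.0, KP * error))     # clamped deflection
        return heading + RESPONSE * rudder

    heading, target = 40.0, 90.0   # a gust knocked us off course
    for _ in range(60):
        heading = autopilot_step(heading, target)
    print(round(heading, 1))       # back near 90.0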



