The body is the missing link for truly intelligent machines (2017) (aeon.co)
165 points by new_guy on Aug 22, 2019 | 112 comments



I've been saying something like this since the 1980s. But we knew back then that manipulation in unstructured situations was very hard. It still is.

Here's the DARPA robot manipulation challenge, 2012.[1] This is pathetic. Especially since DARPA has been funding universities in this area since the 1960s. There's a classic video of robotic assembly at Stanford SAIL in the 1960s I can't find right now. It looks very similar, except that the video quality is worse.

The state of the art in autonomous mobile robots for unstructured environments is terrible. The state of the industry for that is worse. Willow Garage went bust. Google bought up some of the players, ran them into the ground, and dumped them. Schaft, the Tokyo University spinoff they bought, found no buyers at the selloff. (They had nice hardware, too.) Boston Dynamics is still around, feeding off of Softbank now, after feeding off Google and DARPA, but there are still no products for sale after 30 years. The USMC rejected their Legged Squad Support System. The performance level at the DARPA Humanoid Challenge was very poor.[2]

Even robot vacuum cleaners aren't very good. You'd think they'd be doing offices and stores late at night by now, but they're not. The Roomba, which has the intelligence of an ant (it's from Rod Brooks, the insect-AI guy), came out in 2002, and is only slightly smarter 17 years later.

Automatic driving is starting to work, after a few billion dollars was thrown at that problem. That, too, was harder than expected.

Drones, though. Drones are doing fine.

The real breakthrough in machine learning was the discovery that it could be used to target advertising. That doesn't have to work very well to be useful. It's easy to test. 80% success is fine. Now there's money behind that field.

Embodied AI is really hard to work on, and very expensive. It's easier than it used to be; you can buy decent robot hardware off the shelf, and don't spend your time worrying about gear backlash and motor controllers. But it's still way harder to test than something that runs in a web server.

The payoff is low. Robots in unstructured situations do the jobs of cheap people, and the robots are usually slower. After many decades of many smart people beating their head against the wall in this area, there's been some progress, but not much. That's why this isn't happening yet.

However, being able to mooch off of technology being developed to serve the ad-supported industries that use AI does help.

[1] https://www.youtube.com/watch?v=jeABMoYJGEU

[2] https://www.youtube.com/watch?v=nIyuC7ceFH0


> Automatic driving is starting to work, after a few billion dollars was thrown at that problem. That, too, was harder than expected.

It is quite surprising that it took so long if you look at the achievements from the 1980s and 90s by German military. [1]

[1] https://www.youtube.com/watch?v=_HbVWm7wdmE


This is why I tell people we should be building better tools, not trying to replace ourselves. At least until we figure out what AI is supposed to be.

Sort of like slide rules vs Matlab instead of human-with-broom vs a Roomba.


I would argue that the issue isn't embodiment, but sensor density.

Humans have an amazing number of sensors built into their manipulators in all directions and an enormous amount of neurological resource dedicated to it.

Until mechanical manipulators have the sensor density of even the back of a finger, it's not really going to get anywhere.


The importance of embodiment has been a fairly common idea in AGI research for many years.

Virtual embodiment has become quite popular. See things like OpenAI gym or DeepMind Lab etc.
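
For anyone unfamiliar, the loop those toolkits expose is tiny. Here's a minimal sketch of the virtual-embodiment loop using the classic Gym API (newer Gymnasium releases changed the signatures slightly); the random policy is just a placeholder for a learned agent:

    import gym  # pip install gym

    # The agent's "body" is whatever observation/action interface the simulated
    # environment exposes; here it's CartPole's 4-number state and 2 actions.
    env = gym.make("CartPole-v1")
    obs = env.reset()
    done, total_reward = False, 0.0
    while not done:
        action = env.action_space.sample()          # placeholder for a learned policy
        obs, reward, done, info = env.step(action)  # act, then perceive the consequences
        total_reward += reward
    env.close()
    print("episode return:", total_reward)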

Anyway, and this is more of a general comment than a reply to the above comment specifically, this idea is not new, and I hope that people will realize that the field of AGI exists and study some of the existing research. Maybe take a look at the sidebar and intro info at reddit.com/r/agi


> The state of the art in autonomous mobile robots for unstructured environments is terrible.

Roombas are very good at what they do. The real limitation is what you want the robot to accomplish and how much it costs. There are surprisingly few home chores worth spending significant amounts on a robot to do for you vs. just having a cheap maid service.

In professional settings, you can generally just make it a structured environment.


> Roombas are very good at what they do.

I'm not sure you've ever owned a Roomba. In theory they work great. In practice, there's always something on the floor they get tangled in, there's that one couch they're just small enough to fit under but not escape from, or there's that one corner of death in your room they inevitably get into and become trapped. And sometimes, even when everything is absolutely perfect, one of the sensors decides it's stuck and so the thing just backs up in circles indefinitely in an otherwise-ideal empty room.

I've owned two Roombas and both were somehow more work than just sweeping or vacuuming.


We just bought a Mi Robot to replace our Roomba 630. It’s half the price, actually maps my house, doesn’t bump into stuff, has better fault-recovery, scheduling built-in, and is just generally a real pleasure to run.


> Roombas are very good at what they do

Provided you want them to clear a surface that's mostly empty of obstructions. Wires are the bane of their existence, but also cloth and big pieces of paper that can't be ingested by the vacuum.

They are relatively OK for office settings, but the models without navigation would take until the heat death of the universe to clean big open-floor offices properly.


Doesn't the performance of AI in more unstructured game environments show that progress has been made? The 5v5 Dota game with OpenAI, for example, where the AI is 'embodied' in the heroes being played on the terrain.


>> But despite encouraging results, most of the time I’m reminded that we’re nowhere near achieving human-like AI. Why?

Primarily, because we have no idea what intelligence is, or how it works, why it exists even, etc etc. This goes for human-like intelligence, but also for any kind of intelligence. We just have no good scientific understanding of the subject. We have some vague models of it ("the brain is like a computer and the mind is like a program running on it") but nothing very precise and certainly nothing that can be reproduced on a digital computer, which is what "human-like AI" would be (i.e. human-like AI would be the reproduction of human intelligence on a digital computer).

Most likely, until we make some progress in understanding intelligence we will not be able to reproduce it. Except perhaps by chance.


>Primarily, because we have no idea what intelligence is, or how it works, why it exists even, etc etc.

I believe so many of us limit ourselves by even thinking that intelligence happens in the brain. By the title of this article I thought it might go into the vagus nerve, etc. There is so much "intelligence," processing, communication etc that happens away from the brain and independent of the brain in our bodies.

We have to remember many animals (which we evolved from and retain vestiges of) don't have "centralized" intelligence and yet are capable of great things. We also have to remember that the trillions of cells we are made of each have their own nucleus as well.


"What Bodies Think About: Bioelectric Computation Outside the Nervous System" https://news.ycombinator.com/item?id=18736698

Whatever it is, intelligence is ambient.


> we have no idea what intelligence is

I wouldn't make so strong a claim in light of what philosophy has to say about intelligence. Relevant here also are certain metaphysical presuppositions made by fields like neuroscience that preclude that understanding. Furthermore, even knowledge of something does not necessarily entail ability to reproduce something.


Philosophy absolutely has lots to say about intelligence, much of which is mutually contradicting. There are a lot of different models out there, based off of varying levels of cleavage to various priors, that all seem to have very little real explanatory (or, I should say, predictive) power. Approaches based off of current materialist, biology-based methods have a little more utility behind them as far as we can tell (some of our drugs seem to do something to the mind), but the furthest they've gotten so far is to be able to figure out some of the things that matter to intelligence, rather than a successful general model for what it is.

And while I'll agree that this lack doesn't necessarily preclude the ability to reproduce intelligence, it does make it darn hard to recognize it.


>> Furthermore, even knowledge of something does not necessarily entail ability to reproduce something.

Well, since we're on a philosophical footing, I'll say that knowledge of what intelligence is does not entail the ability to reproduce it, but the ability to reproduce intelligence entails knowledge of what it is.

(A |= B means that, whenever A is true, B is true, but not necessarily v.v.)

You are right to say that knowledge of what intelligence is does not entail the ability to reproduce it, since for example there may be hard practical obstacles to realising the necessary technologies etc. But I would think the ability to reproduce intelligence entails knowledge of what it is, assuming intelligence is something very complex that we will not just spontaneously obtain by chance, for example by a lucky combination of various random elements. We have certainly been trying to reproduce intelligence on computers for a while now, and failed. So it doesn't seem to be something on the order of, say, coming up with a cure for some ailment by trying things that we believe "should work" and that works in a way we don't quite understand etc.

All of which, if true, implies that we will not reproduce intelligence until we understand it.


Maybe, or maybe not. Maybe much of what philosophy has to say about what it thinks about intelligence is bull-hooky and actually holding us back. The 'understanding' we have feeds into our presumptions, so I agree in that regard.

Arguably, the tangible limits experienced in our actual models of the world based on those presumptions are reflective of the validity of those assumptions.

Clearly, in the case of modeling 'intelligence' and 'mind', they aren't that great, indicating we really don't understand the phenomena.

I've made this argument repeatedly and before on this platform, but I think the biggest mistake we've made in this field is regarding the preeminence of what we experience as 'human' intelligence and our mental models for how that 'must' work.

I'm going to catch all kinds of flack for this, but it's my biased perspective that plants are just as intelligent as animal life, but because we use humans as a litmus for what intelligence must 'look' like, we have a very difficult time enumerating what intelligence a non-human organism must have. I think intelligence is reliant on two functions: complexity and connectivity. Plants (vascular) are far more highly connected than animals, and at a cellular level they are as complex as, if not more complex than, animal life in cell structure and tissue type.

We just don't acknowledge intelligence when its considerations are so distant from our own concerns. I think a revised model where modes of intelligence are attributed starting at an individually cellular model, and build into more complex modes of computational complexity would really clear a lot of this up, and get different modes of intelligence into a singular framework.


>Primarily, because we have no idea what intelligence is, or how it works, why it exists even, etc etc. This goes for human-like intelligence, but also for any kind of intelligence. We just have no good scientific understanding of the subject.

Speaking as a machine learning and neuroscience researcher, how thoroughly have you investigated the matter to claim this? I would definitely not say that we have "no" understanding of the subject. We have a number of predominant paradigms and competing theories regarding the matter.


You shouldn't be downvoted. That portion is clearly over-the-top hyperbole, and it is fair to call that out.

We do have some understanding, even if it is somewhat limited.


How can I answer your question of "how thoroughly I have studied the matter" without sounding like I'm full of myself?

Anyway, that there are many competing theories about intelligence can mean one of three things: either they're all mostly wrong, or some of them are mostly right, or all of them are a little right. Which one do you think is the case, regarding the theories that you have in mind? And what are those?


How would you define what intelligence is, in simple terms to a layman?



What you describe sounds very much like a system that could be implemented with computers and software. Has anyone ever tried to do that?


There are existing software implementations, particularly the various free-energy principle implementations available in SPM from UCL (albeit, in Matlab), and a few others. What has tended to happen is that neuroscientists are more interested in a minimal bio-plausible proof of principle, and machine learners often don't keep entirely up to date with neuroscience after their training. As a result, a lot of the most interesting ideas in neuroscience haven't been scaled up to larger machine-learning applications yet.
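
For a flavor of the idea, here is a toy sketch only (my own, not the actual SPM code): a single-level Gaussian generative model in which "perception" is just gradient descent on prediction error, the crudest version of the free-energy story. The generative function and the numbers are made up.

    # Toy free-energy-style inference: observation x ~ N(g(mu), sigma_x^2),
    # prior mu ~ N(mu_prior, sigma_p^2); perception = descend F with respect to mu.
    def g(mu):
        return mu ** 2        # hypothetical nonlinear generative mapping

    def dg(mu):
        return 2 * mu

    x, mu_prior = 4.0, 1.0            # made-up observation and prior belief
    sigma_x2, sigma_p2 = 1.0, 1.0
    mu, lr = mu_prior, 0.05

    for _ in range(200):
        eps_x = (x - g(mu)) / sigma_x2        # sensory prediction error
        eps_p = (mu - mu_prior) / sigma_p2    # deviation from the prior
        mu -= lr * (-eps_x * dg(mu) + eps_p)  # dF/dmu, descended

    print("posterior belief mu ~", round(mu, 3))  # g(mu) ends up near x, tempered by the prior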

Me being a dual trainee in both fields, well, I'm working on it.


How curious that one of the best models of intelligence is one of its products created in an effort to emulate itself.


Right. Turing automated a part of the operation of his own mind, computers have always been AI.

But it seems (to me) that logic doesn't capture all of what we call intelligence, and that the effort to recapitulate the human mind solely on the basis of discrete logic machines is almost certainly going to fail. But I think that failure will be valuable, and that we should not avoid the attempt to make a sculpture of the mind.

Babbage had an automaton, the Silver Lady, that could imitate the dancing of a ballerina. We might be able to make "Silver Minds" that imitate the outward manifestations of the human personality, and so learn more about the nature and essence of thinking, as the Silver Lady must have of dancing. Certainly the craftsman who made her must have known a thing or two about ballet and physiology?


That's largely the idea behind the concept of "technological singularity". We are quite slow at developing the first version (it's still a work in progress) but that v1 will likely be able to build a v2 much faster and so on, leading to an "intelligence explosion".

[0] https://en.wikipedia.org/wiki/Technological_singularity


> We are quite slow at developing the first version (it's still a work in progress) but that v1 will likely be able to build a v2 much faster and so on, leading to an "intelligence explosion".

As with everything in nature, there are diminishing returns. It is not clear that any significant leaps in 'intelligence' are possible. At least to the extent proposed by 'singularity' advocates. Or that such an intelligence would be able to completely outclass humans, either in the current form, or augmented by tech.


As humans evolved larger brains, intelligence went up dramatically. Human brain size is limited by a lot of factors specific to human biology, like the fact that it needs to be small enough to fit through the birth canal, it can't take up too much of the body's energy, and it can't make us too top-heavy. It seems unlikely that human evolution ran into these limits right at the same time it finished making a maximally-intelligent mind. None of these limiting factors apply to AI. If/when we figure out how to make intelligent software, we make AIs, and then we improve them to be human-level intelligent, it seems really unlikely to me that we'll hit a limiting factor right there. I think if we ever create near-human-level intelligent AIs, we'll blow right past human-level intelligence and hit a limit further on that's not coincidentally placed at human-level intelligence.


The inevitability of the Singularity, and most of the rest of the baggage train that comes with Kurzweilanism reads more like a religious conviction, and less like a scientific certainty.

There are enormous gaps in reasoning that you just have to gloss over to go from the current state of the world to a post-singularity one, that you are expected to accept, on faith.

I personally think it's about as likely as the return of the Messiah in our lifetimes. That is to say, it is not.


Kurzweilanism=Gavinism


There is no actual evidence that such a scenario is "likely". At this point it's just a bunch of unsupported assumptions and shaky analogies.


>>We have some vague models of it ("the brain is like a computer and the mind is like a program running on it")

I agree with the thrust of your argument, but this itself is wrong. When the Europeans were figuring out complex gears and stuff, they thought that the brain was like a complex cog machine (that was when they thought that the mind was different, I think). When fluid dynamics became the craze, they started comparing the brain to a machine that manipulates some fluid.

The truth is that we don't even have a model for it.


This is incorrect. György Buzsáki‘s work is as fine a place as any to start.


At the time I did my AI post-grad, there were broadly speaking 3 schools of thought on how general AI would be achieved: via (i) symbolic AI (or "classic AI"), (ii) connectionist AI (i.e. neural networks, now "deep learning"), and (iii) what they called "robotic functionalism". It sounds like this article is referring to the last group, i.e. that embodiment in and interaction with the physical world are a necessary requirement for general intelligence. Can't find any references to it by this term, but as others have noted this is not a new idea. Personally, I've never been convinced by the idea that you had to have physical presence, and sometimes suspected the theory existed to allow robotics to fall within the AI camp, but that said I do think hybrid solutions (i.e. the combination of more than one "narrow" approaches) are one of the most promising areas right now.


Human intelligence is a tool evolved to interact with our environment. Not having an environment to interact with is, imho, a serious problem when trying to define/identify intelligence.

On the other hand, I'm not sure the environment necessarily needs to be physical. Ages ago, I worked on reinforcement learning in a simulated environment, which can provide lots of advantages.


And that's the heart of AI's core problem: an oversimplified world is needed in order for your research to produce short-term results that can sustain your project's existence. But trimming back nature's complex signals and noise also limits your solution/model so much that your system becomes too simplistic and fragile (AKA brittle) to thrive in the much-more-complex real world.

After 50+ years of AI research that hasn't scaled or meaningfully progressed on the fundamental capabilities needed by a synthetic mind, you'd think we'd agree more that simplifying reality into something easier to model is the wrong basis for creating AI that's more than a toy.


That seems a lot more cost effective, to train algorithms in a virtual, rather than physical, environment.

On the other hand, like with self-driving cars, for some purposes it makes sense to provide physical, real-life situations and objects, with all its chaos, unexpected and unpredictable events.

For "true" intelligence matching human expectations, I imagine an understanding of the physical environment and its complexity is key. Otherwise, it could only deal with abstract concepts, like pure mathematics, but missing the experience of concrete reality - to be able to relate to us.


It runs the other way too.

Developmental psychology demonstrates that you get very serious functional deficits if you deprive a young developing organism of its normal environment.


It's a mathematical-philosophical question, I suppose.

Can one use computers to simulate an environment with such fidelity that another computer doesn't notice the simulation and optimize around its quantum quirks?

Nvidia seems to think so. They claimed (a couple GTCs ago) to use virtual driving simulators to train their autonomous vehicle systems.


I think a more nuanced way of defining iii is that the intelligence an agent is capable of is limited by the extent to which it is capable of perceiving and interacting with an environment.

At one level this has to be true; if it weren't, I could plop a black box on the table and say I've invented AGI, it just can't interact with anyone, and you would have no recourse but to accept my statement. We must necessarily define intelligence as the interaction that the agent can perform with some environment, otherwise we'd have no way to know of its intelligence.


Reminds me of the joke about sci-fi intelligent plants: they would have to be a product of intelligent design, because the massive energy intake needed to maintain intelligence would be useless to something sessile.

The "some sort of environment" is an important distinction. Even an AI smart enough to derive linguistic translation on its own at "first contact" might suggest we generate free energy through floating-point error exploits and destroy the excess heat the same way, because that worked perfectly well in its environment and it had no indication that wouldn't be possible in the real world.


If I recall rightly from the documentary "Fast, Cheap, and Out of Control," a representative of the last school is Rodney Brooks, who among other things cofounded iRobot, the Roomba maker.

We'll see who gets there first, but I have a lot of sympathy for this approach. It's the one way we know intelligence got going in the first place. And given that too many degrees of freedom make coherent creativity difficult, it imposes some useful constraints.

Anyhow, I think those interested in this debate would enjoy that movie. It's 20+ years old, but the director, Errol Morris, is a stellar documentarian. And it's available to rent on the major platforms for a few bucks.


I did research on model-based reasoning in the early '90s and actually thought (though I never mentioned this to my supervisor) that Brooks had a lot of good arguments summed up by his pithy phrase "the world is its own best model".


The reason I personally believe you have to have a physical presence is that changes to the physical body which do not touch the brain can and do have profound impacts on consciousness. If no body is necessary, then why can't consciousness sustain itself for prolonged periods in situations of total sensory deprivation?


For (iii) you're probably thinking of Rodney Brooks (aka the co-founder of iRobot).

Since the 80s, he's generally been a proponent of the idea that you can't have human-like intelligence without placing that nascent intelligence in a human-like world, with human-like sensory perception.

https://en.wikipedia.org/wiki/Behavior-based_robotics

Which more or less bears out our experience with deep learning. If you place intelligent algorithms in a world where their sole sensory inputs are matrices then what you get out doesn't look anything like human intelligence.


The idea of "embodied AI" has been around for some decades. It is reasonable that, from a practical engineering perspective, creating "human-like" intelligence becomes more feasible if this intelligence is embodied in the same physical possibilities and constraints as a typical human.

What I find somewhat ironic is this: the author mentions working with Stephen Hawking, an amazing man who produced incredible intellectual work and enriched our understanding of reality while being almost incapable of any physicality.

If we apply current scientific theories (physics, chemistry, biology, etc) to this cell network and physical machinery, we quickly find our way back to symbolic manipulation. What are cells if not computational nodes that exchange messages?

A much more reasonable hypothesis for what is missing is contained in the text:

"A human cell is a remarkable piece of networked machinery that has about the same number of components as a modern jumbo jet[...]"

Maybe we just haven't reached the level of complexity needed for human-level AI. A hint that this might be the case is that the current excitement with ML seems to be fueled by algorithms that were mostly known by the 80s (sure, with lots of recent incremental improvements, but no new big idea). What made a difference was the computational power and datasets that became available in the 2010s. I suspect the next leap will be of a similar nature. "More is different".


> Maybe we just haven't reached the level of complexity needed for human-level AI. A hint that this might be the case is that the current excitement with ML seems to be fueled by algorithms that were mostly known by the 80s (sure, with lots of recent incremental improvements, but no new big idea). What made a difference was the computational power and datasets that became available in the 2010s. I suspect the next leap will be of a similar nature. "More is different".

Regarding the nature of complexity and the notion that "More is different", I am reminded of the emergent behavior of vivisystems [1] as described in Kevin Kelly's book Out of Control [2], an insightful exploration of the emergent behavior expressed by complex self-sustaining systems. If you have not read Out of Control then you might want to put it in your reading queue. I found it highly engaging and thought-provoking.

[1] https://www.everything2.com/title/Vivisystem

[2] https://kk.org/outofcontrol/


I like the alternative phrase 'Quantity has a quality of its own'


>What I find somewhat ironic is this: the author mentions working with Stephen Hawking, an amazing man who produced incredible intellectual work and enriched our understanding of reality while being almost incapable of any physicality.

Not from birth though. I wouldn't dismiss the impact of being able to interact with the world during childhood so easily.


Helen Keller is a better example. It is also the case, however, that her brain, like anyone's, had been shaped, through evolution, by eons of interaction with the environment.


To me embodiment is important because it acts as the root in the hierarchy of reference frames from which you can model the real world. So whether or not the body has a particular set of physical capabilities isn't as important as the relationship it has with the rest of the world around it.


This strikes me as appealing but dangerous:

> What are cells if not computational nodes that exchange messages?

We evolved in a context where we survived by exchanging messages with other "computational nodes". So it's very tempting to see everything through that lens. But I think it's a mistake to see cells as "really" just like us. As Box said, "All models are wrong, some models are useful." We shouldn't forget that cells are really cells, and we see them as analogous to familiar things because the world is too big to represent directly in three pounds of meat.


Are you sure that the parent was comparing humans to cells? It strikes me more as this: "human intelligence is run by a network of nerve cells which are complex but could be modeled in all of their complexity". So if you could model enough of these, you would get a functional replica of a human brain.


> What I find somewhat ironic is this: the author mentions working with Stephen Hawking, an amazing man who produced incredible intellectual work and enriched our understanding of reality while being almost incapable of any physicality.

From my point of view, what matters is not physical interaction but rather physical perception. May it be visual, auditive, tactile etc. it all contributes to build the "model of the world" you carry in your whole body.

I suppose Hawking could still largely perceive his environment.


>while being almost incapable of any physicality

Only for the later (even if major) part of his life. One could argue that the physicality he was capable of in the earlier stage of his life helped him gain a solid understanding of physics on our level.

But there is a better argument to be made: modern physics is an entirely different beast. To understand the underlying reality you have to be, in one sense of the word, detached from that reality. And being paralyzed can be said to be one of the many things that let him have experiences unlike those of almost any other human being. His earlier life let him have a solid footing on the dynamics of our level, and his later life allowed him to depart from it, in a direction in which he was propelled by his intellect.

I would argue that if Stephen Hawking was a sports instructor (or even a programmer for a sport software), his lack of physicality would have worked against him. But if you showed me someone with a severe physical disability who writes great code for sport software, I would revert to my first argument.

(I just explain what's in front of me! :P)


I think the jumbo-jet argument is off a little - maybe 5 or 6 orders of magnitude? Anyway, a vivid analogy.


If you mean the jet airplane is the simpler of the two, I would agree. The famous Roche Biochemical Pathways chart of just those mechanisms that are understood illustrates this perfectly: http://www.expasy.ch/cgi-bin/show_thumbnails.pl


Right. Proteins alone, there are something like 10^9 in one cell. Plus all the 'parts' the proteins deal with (many more).


I broadly agree, but I think the fundamental missing piece in current AI is not 'body' but action. Agents using action to experiment is a step change in capabilities over passive observation (pattern matching). You can use experiment to tease out causal relationships; this is not possible with passive observation. I think the body's role in nature is merely enabling action. Action is the key, and you don't need a nature-like body to get it. AI-driven action, that is what we need.
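
A toy illustration of the point (my own sketch; the variables and numbers are made up): with a hidden confounder, X and Y look strongly related under passive observation, yet intervening on X, i.e. acting, reveals that X has no causal effect on Y.

    import numpy as np

    rng = np.random.default_rng(0)
    n = 100_000

    # Structural model: hidden Z drives both X and Y; X does not cause Y.
    def observe(n):
        z = rng.normal(size=n)
        return z + 0.1 * rng.normal(size=n), z + 0.1 * rng.normal(size=n)

    def intervene(n, x_value):
        z = rng.normal(size=n)
        x = np.full(n, x_value)               # we *set* X, severing its link to Z
        return x, z + 0.1 * rng.normal(size=n)

    x_obs, y_obs = observe(n)
    print("observational corr(X, Y):", np.corrcoef(x_obs, y_obs)[0, 1])   # ~0.99

    _, y_do0 = intervene(n, 0.0)
    _, y_do2 = intervene(n, 2.0)
    print("E[Y|do(X=2)] - E[Y|do(X=0)]:", y_do2.mean() - y_do0.mean())    # ~0: no causal effect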


I concur - "The Book of Why" lays out this thesis really well (http://bayes.cs.ucla.edu/WHY/).


It is strange that the author does not reference the whole 'Nouvelle AI' movement of the late 1980's that was a direct response to the 'symbol systems' of classic AI, proclaiming the necessity for embodiment as a prerequisite for grounding.

See for example Brooks's classic "Elephants Don't Play Chess", or Steels's write-up on "The Artificial Life Roots of Artificial Intelligence".


More than just strange, it's a glaring omission -- MIT Prof. Rod Brooks paved these roads nearly 3 decades ago and arrived at these ideas through deduction and experimentation.

He made the argument that embodiment is an essential component of AI in his paper "Intelligence Without Reason" ( https://people.csail.mit.edu/brooks/papers/AIM-1293.pdf ). Cog was his group's attempt to build a humanoid robot: http://www.ai.mit.edu/projects/humanoid-robotics-group/cog/o... (see "Why not simulate it?"), but he and his grad-student researchers devoted considerable effort to exploring the importance of embodiment, especially in humanoids: http://www.ai.mit.edu/projects/humanoid-robotics-group/index...

The Mobile Robots Lab built biologically inspired robots that were remarkably capable and able to respond to dynamic events in the real world (rather than carefully controlled lab environments).


Great guy. Here's an anecdote from that time. He spent a few months on sabbatical at our lab ( https://ai.vub.ac.be/ ) in those days. We were preparing robots for a NATO Advanced Study Institute ( https://www.springer.com/gp/book/9783642796319 ), and I was struggling to write the serial driver for a custom embedded computer as the system kept crashing (due to a bug in the DRAM controller). Anyway, Rod Brooks offered to help me with the coding. It wasn't needed, as the code was not the problem, but I don't know many professors who could and would be prepared to dive in that deep.


As usual, the author also leaves off the training phase humans go through where we have a mix of enhanced learning abilities and humans guiding us (aka parents/adults). The process to produce one of these general intelligences takes decades.


The iCub project is rather well-known though.

Uses cable drive for actuation. Interesting stuff if you're mechanically inclined.


“There is more wisdom in your body than in your deepest philosophy.” - Friedrich Nietzsche


I think a machine which successfully finds a charging point in a changing environment to fill its batteries when needed already has a good simulation of its 'body'.


> So when a human thinks about a cat, she can probably picture the way it moves, hear the sound of purring, feel the impending scratch from an unsheathed claw. She has a rich store of sensory information at her disposal to understand the idea of a ‘cat’, and other related concepts that might help her interact with such a creature.

Except a very large fraction of people don't think this way (eg. those with aphantasia), and Helen Keller certainly didn't, yet seems to have been as smart as any of us. So obviously intelligence does not depend on having a huge breadth of sensory experience.

It's quite tiring how much posturing about what's ‘really’ missing from machine intelligence doesn't last past 5 seconds of basic fact checking.


Breadth can also be considered "diversity" in this case, i.e., experiencing the cat against many different background environments and the time-evolution of the sensory readings attributable to the cat relative to that background.

Breadth also refers perhaps to the massively parallel sensor streams. The sense of touch is not one experience, but the amalgamation of millions of experiences -- one per nerve ending. So far the sensor platforms we develop artificially have on the order of 1000s of sensors, not the millions that distinguish the combination of temperature and pressure at certain "optimized" key points.

Rote and myopic counterarguments do not a futurism-oriented discussion make.


> Except a very large fraction of people don't think this way (eg. those with aphantasia)

I don't think it's accurate to say that aphants don't have this type of sensory information at their disposal. I have aphantasia, but I still experience the world through my senses. I may not be able to visualize a cat in my mind's eye, but based on my prior experiences with cats, I know a cat when I see one. If I hear purring, I recognize that as a sound that cats I've encountered in the past have made, etc.


Yes, you are capable of pattern matching. The point is that this is a distinct thing from intellectual ability, since we know having orders of magnitude less sensory input doesn't much seem to limit the ability to do high-level reasoning, and know some people don't use it as part of high-level reasoning.

This kind of pattern matching is also fairly evidently not all that difficult, since much simpler brains than ours can manage it, as can ML models with caveats (albeit caveats often misunderstood and exaggerated).


>This kind of pattern matching is also fairly evidently not all that difficult, since much simpler brains than ours can manage it, as can ML models with caveats (albeit caveats often misunderstood and exaggerated).

Do tell me, since I'm writing a paper on a related topic, which current ML models can "pattern match" to recognize or generate multimodal (ie: visual, auditory, and tactile) percepts of cats, in arbitrary poses, in any context where cats are usually/realistically found?

Or did you just mean that the "cat" subset of Imagenet is as "solved" as the rest of Imagenet?


Please try to argue in good faith. I've already said ML models have caveats, obviously I don't think they're perfect or par-human.


I think that "perfect" or "par-human" would be a judgement about performance on a set computational task. My caveat is that ML models are usually performing a vastly simplified task compared to what the brain does. But it looked like you were saying they perform "pattern matching" with the same task setting and cost function as the brain, and just need to perform better at it. What's your view?


“Not all that difficult” is in the context of the brain, where things tend to vary between ‘pretty difficult’ and ‘seemingly impossible’. I say ML shows pattern matching of this sort isn't all that difficult because progress has been significant over very short stretches of time, without any particular need to solve hard problems, and with a general approach that looks like it will extend into the future.

We have this famous image showing progress over the last 5 years.

https://pbs.twimg.com/media/Dw6ZIOlX4AMKL9J?format=jpg&name=...

The latest generator in this list has very powerful latent spaces, including approximately accurate 3D rotations.

https://youtu.be/kSLJriaOumA?t=333

We have similarly impressive image segmentation and pose estimation results.

https://paperswithcode.com/paper/deep-high-resolution-repres...

Because you mentioned it, note that models that utilize multimodal perception is possible. The following uses audio with video.

https://ai.googleblog.com/2018/04/looking-to-listen-audio-vi...

For sure, these are not showing off the full breadth of versatility that humans have. I can still reliably distinguish StyleGAN faces from real faces, and segmentation still has issues. These all have fairly prominent failure cases, can't refine their estimates with further analysis like humans can, and humans still learn much, much faster than these models.

However, note that (for example) StyleGAN has 26 million parameters, and with my standard approximate comparison of 1 bit:1 synapse, that puts it probably somewhere around the size of a honey bee brain. Given such a model is already capturing sophisticated structure fairly reliably using sophisticated variants of old techniques without need of a complete rethink, and the same cannot be said for (eg.) high-level reasoning, where older strategies (eg. frames) are pretty much completely discredited, “not all that difficult” seems like a pretty defensible stance.
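
For the record, the back-of-envelope arithmetic behind that comparison, using my rough 1 bit : 1 synapse heuristic and a commonly cited ballpark of roughly 10^9 synapses in a honey bee brain (both of which are loose assumptions):

    # Order-of-magnitude comparison only; every number here is a rough estimate.
    stylegan_params = 26e6              # ~26 million parameters, as above
    model_bits = stylegan_params * 32   # float32 weights -> ~8.3e8 bits

    honeybee_synapses = 1e9             # commonly cited ballpark

    print(f"model bits:   {model_bits:.1e}")
    print(f"bee synapses: {honeybee_synapses:.1e}")
    print(f"ratio:        {model_bits / honeybee_synapses:.2f}")  # ~0.8, same order of magnitude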


Is there a goal in creating artificial general intelligence other than creating a form of enslaved life we can tell ourselves isn't really life, so it's okay?


This is my impression of the corporate "openai" movement's desires:

1. Enslaved robots, meaning they don't have to pay income tax or worry in the slightest about working conditions

2. Enslaved robots, meaning they can erase misbehaving or uncooperative individuals/instances

3. Enslaved robots, on which they can foist all of humanity's problems and demand solutions on pain of death (erasure)

4. Enslaved robots, with which they can convince/coerce everyone else into relinquishing all their rights/power/money.

Replace 'robots' with 'life' and it suddenly looks a lot more familiar.

I'd love to hear a cogent explanation to the contrary, e.g. from gdb. But I doubt we'll ever see one.


I think it's possible to build general-purpose AI that is not alive in any way. I think it's just easier to imagine ways to get there that involve mimicking animal/human intelligence with those "living" qualities, and so that's why people focus on that.


No intelligent aliens to study, so we make some.


There were a lot of very interesting comments on a previous link "What if Consciousness Came First" [1] that I think shoehorn into this discussion pretty well.

Someone brought up the question of if there was a formal "programming language" for philosophy. [2]

One of the difficulties with discussing AI is that we don't know what intelligence is because we don't know what consciousness is. These are problems that are heavily steeped in philosophy, and if we ever want to work with philosophical concepts digitally, we need a proper programming language to do it with.

Ideally, it would be nice to be able to write out philosophical concepts and social behaviors and moral stances in a form that could be used as a ML training set to try to integrate with AI/ML decision making.

[1] https://news.ycombinator.com/item?id=20516482

[2] https://news.ycombinator.com/item?id=20518867


I'd be very interested to know what experts think of Peter Naur's Turing Award lecture (entitled "Computing Versus Human Thinking"), which has a thesis along similar lines to the article. It certainly has plenty of the hallmarks of a crank - a guy working in a field other than the one he's a recognised expert in, can't get anything published, uses his award lecture for the field he is a recognised expert in to sneak his ideas into the CACM, &c.

And yet despite that it seems quite enticing as an idea. I remember being particularly struck by the concept that emotions consist of a closed feedback loop between the nervous system's control over and sensing of the body. Think about this next time you are at the dentist and I think you'll agree that it feels like it could explain a lot.


To me this is more than a philosophical point, more than an engineering issue for AI, it's a current item of human life. The real world (and I am including Nature in the term) is our model for health and sanity. As computer-mediated reality becomes more and more the norm, we risk disconnection from our own embodiment.

How much time do you spend in front of a screen? How much of your existence is mediated already? I just tried VR goggles the other day, and thank God they give you headaches because people are going to try to live in there if it's ever physically possible. (Reminds me of the guy I knew who lived IRL on my friend's couch and played Second Life all day. He had a great second life but no first life.)

One other thing about being embodied: you die.


And that mortality has more consequences than most people could ever imagine, but few have spent time considering. It's just one of the very biological factors that define large chunks of what humanity fundamentally is. All I can think is that my choice to minor in Philosophy alongside majoring in Computer Science didn't turn out to be as weird and useless a choice as people at the time seemed to think it was.


Indeed. :-)


This became one of my favorite big ideas when I learned of the ‘second brain’ and >100k neurons it uses to control digestion [1].

And that’s where I also get a cheap sense of dread about strong AI—robots don’t digest. Along with all of the other things that differentiate human from robot, I believe the AI-apocalypse won’t be evil AI. It will be efficient, calculating and as foreign as space aliens. Boo!

[1]: https://www.scientificamerican.com/article/gut-second-brain/


Maggie Boden pitched this idea in "Artificial Intelligence and Natural Man" in about 1980. It was pretty influential, odd to see it unmentioned in the article or in this discussion.


I do not think the article makes the case for the necessity of physical embodiment, as it seems that the author's issues could be addressed with a) data at a lower level of abstraction, and b) more interaction. Arguably, however, physical embodiment is the fastest/easiest way to get enough of these things.


If you are interested in being one of the first humans to get the brain surgery https://cyborg.st


Not sure if parody or a real post-millennial-styled startup. FAQ example:

> Q. What is the current 2019 state of this?
>
> A. Unknown. But read this new york times article (long) and/or this theverge article and it seems inevitable that human and their phone will merge. This video also reveals a lot.

In other words: we have no idea what exactly we are talking about, but there are articles everywhere.


Thanks for fighting bullshit!


The big question is "How much of a body does the AI need?"

Should it know pain or pleasure? Does it need to have a blush response to shame? Does it need vision or hearing? Sense of balance? Stomach pain?

You can see that humans that are born without sight or hearing still find ways to develop intelligence. Some people don't feel pain. Sociopaths don't feel shame. Yet, the brain manages. It's very hard to define what is the minimal set of functions we need to emulate for AI to emerge.


It seems to me like you (and one of the comments replying to you) are considering this a bit too narrowly. Plenty of animal species have all five senses, and are far less intelligent or conscious than we are. Further, this doesn't have to be about a single human being requiring all five senses to gain intelligence; consider that a single modern human being has already evolved to a point where the brain has that intelligence innately, and only needs further input to shape and refine what's already there.

It doesn't make any sense to compare AI to this single modern human being who is already innately intelligent. It makes more sense to compare it to the whole of human/ape evolution, and maybe we're limiting ourselves too much by always looking at humans on an individual level.

I think we need to find a way to imitate human evolution, selective pressure/natural selection, and human limitations - in whatever form. Hell, give networks some form of reproduction, limited lifespans and limited communication. Throw in what the other commenter said about gaining knowledge and applying that to and manipulating a "real" world (I don't think it actually has to be real, it just needs to be an environment where changes can be made that have some logical/causal effect), learning from experience and whatever other feedback loops we seem to have. Maybe AI will evolve itself into being given the right environmental and personal constraints.


> Should it know pain or pleasure?

It should probably have rewards, positive and negative. I think reinforcement learning is the closest paradigm to AGI.
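
To make "rewards, positive and negative" concrete, here's a minimal tabular Q-learning loop (a toy sketch of my own; the chain environment, its states and its reward values are arbitrary placeholders). The agent never sees anything but scalar rewards, yet still learns which actions to prefer:

    import random

    n_states, n_actions = 5, 2
    Q = [[0.0] * n_actions for _ in range(n_states)]
    alpha, gamma, epsilon = 0.1, 0.9, 0.1

    def step(state, action):
        # Placeholder dynamics: action 1 moves right; reaching the last state pays +1,
        # every other step costs a little ("pleasure" > 0, "pain" < 0).
        next_state = min(state + action, n_states - 1)
        reward = 1.0 if next_state == n_states - 1 else -0.01
        return next_state, reward

    for episode in range(500):
        state = 0
        for _ in range(20):
            if random.random() < epsilon:
                action = random.randrange(n_actions)                       # explore
            else:
                action = max(range(n_actions), key=lambda a: Q[state][a])  # exploit
            next_state, reward = step(state, action)
            # Nudge the estimate toward reward + discounted best future value.
            Q[state][action] += alpha * (reward + gamma * max(Q[next_state]) - Q[state][action])
            state = next_state
            if state == n_states - 1:
                break

    print(Q)  # action 1 ends up preferred in every non-terminal state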


> The big question is "How much of a body does the AI need?"

Our bodies give an upper bound of five sensory inputs. With a little scrutiny we can reduce that even further, since we know that a portion of our population are born with fewer than five senses and exhibit comparable intelligence. Some people are born blind, deaf, mute, anosmic, or with ageusia. Others are even born with rare sensory deficits such as the inability to feel pain. Although I have not read any studies on the subject, I suspect that being born with none of the five core senses would have a serious negative impact on human intelligence.

There is more to the problem, though, than just the ability to sense our environment. I believe that for an agent to acquire human-level intelligence, it is also necessary to have the ability to explore and manipulate the environment in complex ways. It must be able to experiment by making observations, evaluating the outcome and thereby advancing its knowledge. Knowledge of course must be retained to be of any use, so it must have memory efficient enough to be practicable. In order for the experimentation to lead to higher levels of enlightenment an intelligent agent must be able to take past knowledge and hypothesis yet unobserved outcomes. This should serve as motivation for further experimentation.

Humans have this notion that a prerequisite to real intelligence is to be able to express oneself with a language and thereby share one's ideas with others. Communication with language seems to result in social beings, and it is widely believed that social beings do best if they have emotional intelligence, otherwise they will likely be outcast from society.

So, I think AI needs a body that at least allows it the following:

- Ability to move

- Ability to move objects with enough accuracy to assemble or disassemble complex structures

- The ability to know the physical properties of objects (maybe through one or more of our five senses, but not necessarily)

- Ability to retain knowledge

- Ability to hypothesise

- The ability to communicate with another agent to share information (helpful, but maybe not necessary)

I am not as convinced that emotional intelligence is required, so I left that out of my list. For example, consider that highly intelligent beings could be of a different nature than humans and form societies without emotions or politics. An excellent example is the Primes from Peter F. Hamilton's Pandora's Star, where the motiles are controlled by the commanding caste (the immotiles). [1]

Of course I am biased, being human, so I am looking at what is required to achieve intelligence as I know and understand it.

[1] https://en.wikipedia.org/wiki/Commonwealth_Saga#Pandora's_St...


We have more than five senses in reality. Losing your proprioceptive sense, even later in life can actually damage your self-identity. Cause you to question whether you are even you. Read "The Man Who Mistook His Wife for a Hat" for more info.

I don't believe AGI is ever going to happen, but if I did, I'd include that sense as one of the possibly fundamental ones.


>I don't believe AGI is ever going to happen

Do you mean to say that you don't believe artificial general intelligence will happen for a specific reason, or that you hope that it will never happen for a specific reason? I am curious either way. Thanks for your thoughts.


I personally don't believe that there are any fundamental barriers to human or better AGI but I also believe we are a long long way from anything with this capabilities being built.

Mind you - I would love to be proved wrong.


> upper bound of five sensory inputs

There's more: for example proprioception, which is the sense of how your body is positioned in space (it's how you know what your hands are doing without looking at them). How many senses humans have depends on how you define it, but it's probably not 5.


>How many senses humans have depends on how you define it

That is true, but considering only the well known 5 senses simplified the correspondence and seemed practicable. My thoughts were focused on reducing the number of sensors and physical capabilities of a known embodiment of real intelligence to say something about what might be a prerequisite for acquiring similar intelligence in a different body. However, I acknowledge that doing so might cause me to overlook something subtle, but necessary. You made a good point, thank you.


> Our bodies give an upper bound of five sensory inputs.

That’s either dismissing or handwaving a lot. Is my sense of balance part of sight, hearing, taste, touch or smell?

What about proprioception? Am I touching the air behind me?

When I feel a surface and I know if it’s sharp knife, blunt knife, wet stone, dry wood, slippery or grippy, briefly static electrically charged, hot, cold, crumbly, greasy, delicate or solid, is that all just “touch”?

When I pick up an egg and know if it’s real or metal, slipping out of my fingers or held gently, at risk of cracking or just right, being properly supported or about to fall to one side, balanced or weighted inside, hard boiled or sloppy inside, is that all just “touch” one sense?

When I feel the thump of a heavy bass line in my stomach - touch?

Stomach ache - touch?

Headache, in a brain with no touch sensitivity?

Feeling radiated warmth on face - touch?

Tiredness, hunger, thirst, spine tingling, faintness, muscle ache, body feedbacks - touch?


>That’s either dismissing or handwaving a lot.

Yes it is, but see my comments above. Thank you for your thoughts.


Pretty much every animal that we consider intelligent is social - orcas, elephants, primates, dogs, cats, etc.

Human language developed as a way for humans to communicate with one another. I think you're severely underestimating the importance of emotional intelligence.


> I think you're severely underestimating the importance of emotional intelligence.

I don't mean to suggest that social intelligence didn't play an important role in the evolution of real general intelligence as we know it. (I define RGI as a GI that evolved through natural processes and without direct intervention of higher level GI). I wanted to keep an open mind about things though. My goal was to try and reduce factors that relate to intelligence on earth so that I might be able to say something about what factors are in fact a prerequisite to AGI.

Do you like sci-fi? Have you read Pandora's Star? If you like sci-fi but have not read said book, then I recommend that you consider putting it in your queue. You might find the Primes to be a believable vector to RGI, one that I believe challenges human bias about GI and what it takes to reach past a class I civilization.

Love the discussion, thank you for your thoughts.


Cephalopods aren't social, but some seem to be as intelligent as primates.


>In order for the experimentation to lead to higher levels of enlightenment an intelligent agent must be able to take past knowledge and hypothesis yet unobserved outcomes.

hypothesis => hypothesise


Seems circular. Defines the body as the thing that adapts to the environment, and intelligence as adapting to the environment?


People who experience total facial paralysis experience profound changes to their subjective consciousness, particularly their emotional capacity changes. Anger is often the first thing affected. First, they lose the ability to feel anger. Then they lose the ability to remember what anger felt like. Then they lose the ability to recognize anger in other people. (These changes progress over years)

People placed into situations of total sensory deprivation very often see their conscious self dissolve into 'hallucinations' across all their senses.

Quadriplegics that acquire their paralysis from disease or accident suffer psychological and emotional changes which are more substantial than would be expected from the injuries alone (I was never clear on how exactly this was distinguished, so take it with a grain of salt).

I have never understood why people who talk of 'uploading their consciousness' or just creating a human-like consciousness in a computer would assume that the simulated consciousness would function markedly different from, in the best case, a person experiencing profound sensory deprivation. Consciousness can not be sustained if the feedback loop of the body, perception, and environment is broken. Consciousness is an emergent property of a feedback loop. If there's no feedback loop, the property doesn't emerge.

Watching the Netflix 'More Human Than Human' documentary, one of the people featured commented when comparing Siri and the AI in the movie Her that systems only need to become 'a little more sophisticated' to reach that level. That is a big problem. It's not 'a little more sophisticated' at all. It requires emotions, and the vast majority of people do not even know what emotions are. I'll spoil it for you. Emotions are a trained response, the product of neurological feedback to the response of prediction operating on primarily internal perception. Often the relation of moving from one topic to another in a conversation is not based upon the subject matter of the text. It is based upon shared cultural experiences garnered over a lifetime and similarities in the emotions evoked by certain things.

Even pursuing the goal of creating a 'truly intelligent machine' is dangerous, philosophically speaking. What happens when we create a bot that does antisocial things? We shut it down. We scrap the project. We saw that with Tay by Microsoft. This is dangerous. It is clear that once we DO produce a human-like intelligence... it will be better than us. It won't have any of the rough edges or not-safe-for-work parts. At everything we value about human beings, it will top us. We can look at history to see how humanity responds when something which was previously seen as 'fundamentally human' is taken out of our quiver. It is not pretty. The folk legend of John Henry shows that the man who is willing to kill himself to be better than a machine is not an idiot; he is a hero to motivate the masses. I don't think it is a stretch to imagine a future when humanity's worst qualities become what we see as virtues because AI 'can't' do them. A robot can't hate. It can't be violent, bigoted, angry, etc. When that is the only thing humanity has left that defines them as 'more' than the world... why should we be sure they won't come to see that as their virtue? We have already seen all of those things valued as virtues in our history when similar pressures weren't even at play.

Then there's the more unknown approach. A machine-based intelligence which is not given a body. That's really the big question mark. We can be confident, very confident, that it will not be recognizably human in any way, shape, or form. Most of our attributes as humans are derived directly or indirectly from the biological facts of our existence. An organism sharing none of those things will share none of those attributes. And it will be weird in surprising ways. I don't think there is any reasonable danger of such an intelligence "taking over the world" in any substantial way. All conflict is rooted in resource contention. And we have nothing that such an intelligence would want or could use. If it wants energy, the only real resource we would share a need for, it would be best off launching itself into space and sitting in orbit with a bunch of solar panels. We don't have any idea what a singular intelligence, one with no concept of "individual" because there is only one of them, would be like. One which has no inherent mortality. One which does not deal with disease. One which has no concept of family. One which has no sense of age. These are things which define us and make us human, and it will lack all of them. It would probably be a great challenge to convince such an intelligence that we existed, that we were real, that we could communicate with it, and get it to want to. It could simply conclude that it will wait for 10,000 years and hope we are extinct by then.


If we want to implement a human-like intelligence, I think it’s very likely that we’ll have to emulate a lot of the environment we inhabit. To me, human culture and parts of our brain is like software which is instantiated in the individual. To run it on a machine instead means that the machine will have to emulate the “machine” that the software runs on, which is the whole brain, body and its environment.

We could of course try to make a non-human type of intelligence. But it’s not at all clear what that would be, or if it’s at all possible. The only intelligence we know for sure can exist is animal/human style intelligence. And I don’t think there’s anyone who is actually trying to construct a non-human intelligence. To do that it’s very likely that you’d have to set up a computer environment where programs evolve naturally and fight for computing resources over a long time. It could take anywhere from decades to millions of years.

Where most of our efforts are going, is to create augmented intelligence. We’re creating programs that expand on human intelligence. Interface with it. Cater to it.

If you imagine AI programs as individuals in an evolutionary environment. What is it that they’re competing for? They’re competing for which program can most satisfy humans. Those that satisfy us live. Those that don’t die. Just as our evolutionary environment creates drives for gathering resources, cooperating with others when possible, hurting/killing others when necessary... programs are almost exclusively driven to satisfy humans.

That’s why I think the “paperclip maximizer” example is so ridiculous. First it assumes that general intelligence is just some magical algorithm we just haven't discovered yet. Then it assumes that such an AI can make catastrophic decisions without complex motivations. Whether or not killing humans is more efficient for making paper clips is an undecidable problem. A human might kill someone to achieve its goal because that's something our evolutionary environment has trained us for. We have motivations like pride, ego, anger, envy, etc. that override the problem of figuring out “is killing this human optimal for my goals”

It’s far more likely that an AI catastrophe will be far harder to predict and much stranger. It could be that specialized (and relatively dumb in the general sense) AIs become so good at satisfying our desires that we become completely incapacitated. There are already signs of this.

Just to be clear, I’m not saying we shouldn’t be worried about AI. But the alarmists seems to be too focused on various imagined future scenarios, all of which are likely to be wrong. We should keep a very keen eye on the consequences of AI right now in the present moment, and talk about problems that actually arise. Perhaps with a little bit of extrapolation.. thinking about how present day small problems could develop into bigger problems.


Article is from 2017.


Thanks! We've also updated the link to the original from https://sinapticas.com/2019/08/21/the-body-is-the-missing-li....


Thank you, Scott!



