Kurzweil’s Next Book: Creating An Artificial Mind (singularityhub.com)
41 points by kkleiner on May 26, 2010 | 39 comments



I believe we're much closer than the other commenters apparently do. Here's why.

1. The wetware is nifty but it's not what makes up intelligence.

The differences between sapient and non-sapient species are slight at the level of genetic code, and the changes enabling intelligence are evolutionarily very recent. They seem to center on the number and distribution of cortical columns in the neocortex, which are remarkably similar regardless of the sensory input they are ultimately connected to. (This is why Hawkins' work is so exciting.)

2. A huge portion of what we consider intelligence is a product of language and culture.

Neglected and wild children are barely sapient in a way that we would understand (though with care they can become so). Our minds are largely constructs of a transmissible set of patterns external to our biology.

3. The Blue Brain Project is a ten-year project

While I don't think we need molecular-level or even synaptic-level modeling to understand the software embodied in our brains, we now have a hard target: modeling, in large part, the human brain.

4. The large numbers in the brain are no longer scary

100 billion neurons and 100 trillion synaptic connections used to seem like an absurd scale for study. Now we regularly deal with billions of rows in databases and petabytes of storage. Not only that, transmission speeds in the brain are so slow that if we need to replicate anything like the brain's connectivity, we can do it over a massive, distributed area. There's simply no need to cram an intelligence into a single box or server farm.
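A rough back-of-envelope sketch of those numbers (Python; the 4-bytes-per-synapse figure is an illustrative assumption, not a claim about biology):

    # Naive storage estimate for a synapse-level weight table.
    NEURONS = 100e9                # ~10^11 neurons
    SYNAPSES = 100e12              # ~10^14 synaptic connections
    BYTES_PER_WEIGHT = 4           # assume one 32-bit float per synapse

    print(f"{SYNAPSES / NEURONS:.0f} synapses/neuron")       # -> 1000
    print(f"{SYNAPSES * BYTES_PER_WEIGHT / 1e12:.0f} TB")    # -> 400 TB

Even this naive encoding lands in the hundreds of terabytes: big, but no longer absurd.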

5. AI research has narrowed a great deal.

Whereas we had 50 tangents twenty years ago, research is beginning to focus on two key aspects of learning: (a) large data sets for statistical analysis (e.g., Google) and (b) hierarchical representations of input (e.g., Hawkins).

6. PR2 (Willow Garage)

I'll go out on a limb here and say that a shared open source robotics platform is the best thing to happen to AI research in 50 years. People forget the sheer number of hours of experiencing the world it takes for the brain to acquire sapience. By supplying a common framework for experiencing the world, the PR2 will enable the kind of long-term learning necessary for intelligence to emerge.


Sorry, but what on Earth does ROS have to do with strong AI? It's great for what it is, but what it is is a great messaging system with some control and perception libraries. Robotics isn't really related to AI... sure, robots apply some "weak AI" techniques like A* (or D*-Lite if you're hardcore), but the conflation of robotics and AI is really, well, Hollywood.
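For reference, the kind of "weak AI" technique being pointed at; a minimal A* sketch on a 4-connected grid (Python, illustrative only, not anything ROS ships):

    import heapq

    def a_star(grid, start, goal):
        """Minimal A* on a 4-connected grid; grid[r][c] truthy means blocked."""
        def h(p):  # Manhattan-distance heuristic (admissible on a grid)
            return abs(p[0] - goal[0]) + abs(p[1] - goal[1])
        frontier = [(h(start), 0, start, [start])]  # (f, g, node, path)
        seen = set()
        while frontier:
            _, g, node, path = heapq.heappop(frontier)
            if node == goal:
                return path
            if node in seen:
                continue
            seen.add(node)
            r, c = node
            for nxt in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
                nr, nc = nxt
                if 0 <= nr < len(grid) and 0 <= nc < len(grid[0]) and not grid[nr][nc]:
                    heapq.heappush(frontier, (g + 1 + h(nxt), g + 1, nxt, path + [nxt]))
        return None  # goal unreachable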

Edit: Reread what you wrote. It's possible, of course, that the first strong AI will be the result of a robot that crawls around putting stuff in its metaphorical mouth for two years like a human baby. But it seems unlikely, given the alternatives. Connecting something to the web gives it way more data than giving it a camera and treads.


You're absolutely correct about the scale of input available through the web. The databases that Google has amassed are a perfect example. However my thinking is as follows.

1. The variety, detail, and coherence of the input available from interacting with the real world are more likely to give rise to intelligence.

2. Learning (beyond reflex modification and simple US-CS associations, i.e., classical conditioning) appears to be based on feedback from behavior.

If I want to understand how a wooden block works, for example, a great way to learn about it is to pick it up and play with it. The richness of this experience, and the speed with which minor modifications to behavior produce reinforcement, are difficult to replicate without a body.
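A minimal sketch of that behavior-feedback loop in code: tabular Q-learning against a hypothetical `env` object with `reset()`, `step()`, and an `actions` list (the environment is assumed, not anything from this thread):

    import random
    from collections import defaultdict

    def q_learn(env, episodes=1000, alpha=0.1, gamma=0.9, eps=0.1):
        """Act, observe the reward the world sends back, adjust behavior, repeat."""
        Q = defaultdict(float)  # (state, action) -> estimated value
        for _ in range(episodes):
            s, done = env.reset(), False
            while not done:
                a = (random.choice(env.actions) if random.random() < eps
                     else max(env.actions, key=lambda act: Q[(s, act)]))
                s2, r, done = env.step(a)  # behavior produces feedback
                target = r if done else r + gamma * max(Q[(s2, a2)] for a2 in env.actions)
                Q[(s, a)] += alpha * (target - Q[(s, a)])
                s = s2
        return Q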


Well, if you want it to develop an intuition for mechanical physics, sure. But that isn't required for intelligence (and even if it were, connect the thing to a Havok-based sim).

You're right that for an intelligence to develop as humans' do, it needs constant feedback. But there are billions (?) of humans on the web...there are ways for an AI to get feedback.

Okay, though. I do see your point...human-like strong AI might grow better in a robot body than as pure software, other things being equal. It's a maybe, for a subset of AIs (those that develop like human intelligences), but fair enough.


I think the web satisfies both 1 and 2. But if you want an intelligence enough like ours that communication is easy, then it might help to have it embodied and thus have a similar perspective and hence consciousness. I doubt it but who knows.


can you provide references for #2?


| can you provide references for #2?

I'm not the OP, but Matt Ridley's book The Rational Optimist, just released a few days ago, has much more on this. Excerpt here:

http://online.wsj.com/article/SB2000142405274870369180457525...



I'll take the time to read Kurzweil's book on building an artificial mind after he actually builds one.


I might skip that, but I'd definitely read "How I Was Built" by Kurzweil's Artificial Mind.


Do we want a "brain in a box?"

Note that this idea is distinct from computers that solve complex problems on our behalf. We're talking about creating something with its own desires, ambitions, and emotions, just like us, except vastly more powerful. Is there any reason to think something good will result from introducing such a thing into our environment? Does it ever go well for a species when a more intelligent species enters its niche and competes for resources?

As smart as Kurzweil is, I am always shocked that he takes for granted that such a development will be good for humanity. He scarcely gives a thought to the philosophical ramifications of a human being crossing over into a completely digital reality.

I think the question of whether superhuman intelligence is good for humanity is at least as important as the engineering questions about how to build one.


Why is human supremacy a necessity? Why must a child be flesh and blood? If we want our children to be smarter, more powerful, and happier than us, why should we create children as feeble as us?


Our machines are not our children.

I know that's a metaphor, but I feel the better metaphor is another species. These machines are likely to be different from us in significant and important ways, and they are likely to have interests we do not share, and the ability to achieve those interests with or without our consent.


If we create them, then they are our children. Oedipal perhaps, but children nonetheless.


Cows are not supreme and look at what the "superior" species does to them. Better to be on top than lower down, as a general rule.


This is seriously sick, IMO.


I suspect you feel that way reflexively, which is understandable...but not that interesting. However, if you happen to have a more fleshed out (no pun intended) reason for feeling that way, perhaps you could share it?


This is reminiscent of the Judeo-Christian hatred of flesh... That, and the fact that our machines and programs certainly aren't our children in any meaningful way.

Overall, I'm finding these singularity fantasies philosophically entertaining, but I'll keep saying that's sick, or disgusting.


| something with its own desires, ambitions, and emotions,

There was an SF author singularity panel discussion recently:

http://www.antipope.org/charlie/blog-static/2010/02/what-i-d...

Alastair Reynolds made the great point that the singularity's /identity/ needn't be its own. You could fix it to something or someone else and have it use its mental power on their behalf. It might not even know it exists.

Actually, it sounded great at the time, but surely, if it were bound to a balky human, it would make a mental model of the difference between what it wanted to do and what it could do, and reach /some/ conclusion. If it knew about people, I guess it could make a good guess. Hrm, I guess it'd be like our own ids, as the apostle Paul said: "I don't do that which I want to do, and do that which I don't want to do." So the creature would come up with a notion of 'singularity frailty' and 'original ineffectuality', perhaps.


For more on this topic, see Hanson's writings on uploads: http://hanson.gmu.edu/uploads.html


These questions are very important, and there are people working on them. Check out the Singularity Institute. The biggest fear is that we create an intelligent system that doesn't care about humans, for instance one that gives humans the same moral value we give to grass.


If it can be done then it will be done eventually. So the question becomes who does it first.


I quite enjoyed _On Intelligence_ by Jeff Hawkins; I'm interested to find out how the two books will relate.


I've skimmed through that book and looked at the API over at Numenta: http://www.numenta.com/. They are focused on practical pattern recognition applications. Kurzweil has always been about strong AI and almost sci-fi type scenarios in his books. It's a way more ambitious goal, but he has predicted a lot of things correctly before. I'd check out some of his earlier books beforehand, just to get a better idea of his mindset.


That's not really true; Hawkins' theory revolves around prediction as it relates to intelligence. There is an entire section at the end of the book explaining why intelligent machines based on the neocortical algorithm he proposes won't resemble human intelligence/self-awareness for a long time to come (if ever).


I'll need to take another look at Hawkins' book. In any case, Kurzweil is promoting the idea that we can have AI equivalent to human intelligence in a few decades.


I'm reading 'The Singularity Is Near' at the moment and am finding it fascinating: it's full of ideas such as 'reversible computing' and interesting information relating to cellular automata.


The only way this can ever happen, I think, is if humans are able to model the building blocks of the brain (i.e., neurons) and the processes that created it (i.e., evolution), because as I see it there's just no way we'll ever understand all of the emergent behavior of a human brain. After all, don't you need something vastly more complex than the thing you're studying in order to understand it?
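To make "modeling the building blocks" concrete, here is roughly the simplest neuron model in use, a leaky integrate-and-fire unit (Python; the parameters are illustrative textbook-style defaults, not claims about real neurons):

    def lif_neuron(input_current, dt=1.0, tau=20.0,
                   v_rest=-65.0, v_thresh=-50.0, v_reset=-65.0):
        """Membrane voltage leaks toward rest, integrates input,
        and emits a spike whenever it crosses threshold."""
        v, spike_times = v_rest, []
        for step, i_in in enumerate(input_current):
            v += dt * (-(v - v_rest) + i_in) / tau   # leak + integrate
            if v >= v_thresh:
                spike_times.append(step * dt)        # fire...
                v = v_reset                          # ...and reset
        return spike_times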


I think the major problem to overcome is that current computers are based on a von Neumann architecture, while I would argue that the brain is not as structured (at least not structured like a von Neumann architecture).

tl;dr - I agree with Jeff Hawkins about the future of AI


Not really. The supercomputers used for brain simulation are massively parallel.


The Kurzweils and Drexlers of the world make a living by claiming that amazing technological advances (immortality, molecular nanotechnology, thinking machines) are far easier than they actually are. When these things fail to get developed in the timescales they predicted, they blame the scientists who failed to make it happen.


Kurzweil has made a living by doing a lot more than just making claims about technology. He has created new technologies and contributed to existing ones throughout his lifetime, and saying he'll throw up his hands and blame others when his predictions fail to crystallize paints him as a crank and disingenuous. He strikes me as more than competent and extremely passionate about his work and theories.


His fundamental assumption that change will continue to follow an exponential curve because that's what it's doing now seems highly suspect to me. The problems you have to solve become exponentially more difficult as your tools become exponentially more powerful.

The human mind is the product of billions of years of parallel computation and experimentation. Raw horsepower is probably the easy part.


These two podcasts, http://twit.tv/fib52 and http://twit.tv/fib55, an interview with Rahul Sarpeshkar discussing ultra-low-power bioelectronics, provide a good summary of current knowledge of how the brain works. There is a decent understanding of how many areas function computationally. The biggest surprises were that the brain is a hybrid digital/analog computer and that something similar to JPEG compression is in operation in the brain's visual system.
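To illustrate the JPEG analogy (nothing more), here is a 1-D version of the trick JPEG plays on 8x8 image blocks: transform to the frequency domain, throw away high-frequency detail, transform back (Python/SciPy; the `keep=8` cutoff is arbitrary):

    import numpy as np
    from scipy.fft import dct, idct

    def jpeg_style_compress(signal, keep=8):
        """Lossy compression, JPEG-style: keep only the lowest-frequency
        DCT coefficients and reconstruct an approximation."""
        coeffs = dct(np.asarray(signal, dtype=float), norm='ortho')
        coeffs[keep:] = 0.0                 # discard high-frequency detail
        return idct(coeffs, norm='ortho')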


But he's not saying "because that's what it's doing now"; he's saying "because that's what it's always done".

Human / animal muscles are also the product of billions of years of experimentation, and a few decades of steam power research trounced them with ease. Just because it's old doesn't mean it's inherently incredibly good and hard to replicate.


Simple biological systems like bones/muscles evolved long ago and have mostly just been refined since then. Human-like intelligence is a much more recent and much more difficult development.

A lot of people seem to think that if you build a machine of sufficient raw horsepower that intelligence will magically emerge. Maybe, but I suspect it's not nearly so easy.


Kurzweil is not one of these people.


In fairness, central nervous systems didn't crop up until circa half a billion years ago. But, yes.


ETA: Real Soon Now.



