
> So let me ask you this. What would you consider to be "true AI"? At what point are you willing to say, "Okay, that's it, computers are just plain smarter than we are?" Because, frankly, it seems to me that that day is getting closer and closer.

Alan Turing would give the system the "Turing test": if a computer can fool a human into thinking it's human, then, according to Turing, it is true AI.

I think that's a pretty good test. Some would argue that this is already possible with some advanced natural language processing systems. But from what I've seen, those are not extensive tests: people have to decide whether the system is a machine or a human after just a few minutes of interaction. Turing probably meant for the test to be rigorous and to be performed by the sharpest humans. Deciding that a conversational partner is human after 5 minutes of interaction is not enough. 10 years might not be enough. I honestly couldn't say when enough is enough, which is part of what makes Turing's definition so complicated, even though it seems simple on the surface.

I would add that currently, systems cannot set their own goals. There is always a human telling them what to be good at. Every machine-learning-based system is application-specific and not general. There are some algorithms that are good at generalization. You might be able to write one algorithm that's good at multiple tasks without modifying it at all. But from what I've seen, we are nowhere near being able to write one program that can be applied universally to any problem, and we are even further from one that can identify its own problems and set its own goals.

As humans, do we even know our own goals? Stay alive, right? Make the best use of our time. How does the quality of "intelligence" translate to computers, which are, as far as they know, unconstrained by time and life or death? What force would compel a self-driven computer to act? Should we threaten it with death if it does not keep learning and improving itself? If I hold a bat over my laptop and swing at it, does it run my program faster? If I speak to it sweetly, does it respond by doing more work for me? And further, are animals intelligent or not?

It gets pretty philosophical. What are your thoughts?

> Saying that AIs can't be smarter than humans because they don't think and act like humans is like saying that airplanes don't "truly" fly because they don't flap their wings.

That's just semantics. I think any conversation about this must define intelligence very carefully. We all perceive things differently, so it's impossible to be sure we're talking about the same thing. Maybe that's another quality of intelligence that separates us from computers: every computer perceives a given input in exactly the same way. Can we say that about humans? If there were another dimension with the same atomic makeup as our own, would I think the same things as I do in this one? Are my thoughts independent of my environment, or dependent on it? Is anything truly random?

Anyway, for me, independent goal setting is a key element of true AI. And philosophically speaking, I believe we can't guarantee that we set our own goals independently. Most of us have a strong feeling that we act of our own volition and that fate does not exist, and I think that's right. But what if there is no randomness and we are entirely products of our environment? Then, under this definition, we don't have independent goal setting, and we ourselves wouldn't qualify as "true AI".

Thanks for asking my thoughts.




Brilliant answer - independent goal setting is a really interesting alternative phrasing of "soul" or "spirit" or "individuality", because unlike those, it can be easily observed or tested. Great writeup, thanks for making me think.



