I'll grant that for this very constrained problem ;-)
I respect your work a lot. I've studied and used ML myself, including gensim, in industry. I've given nowhere near your level of contribution to society / the field. My opinion is true AI is quite a ways off. I haven't read anything from a ML researcher that says it isn't. Perhaps you weren't saying true AI is nearer.
I find it funny that whenever there's a case of computers being able to do something that they couldn't before, whether it's drive or beat humans at Go, the goalpost on what is "true AI" shifts to be something that computers can't do yet.
So let me ask you this. What would you consider to be "true AI"? At what point are you willing to say, "Okay, that's it, computers are just plain smarter than we are?" Because, frankly, it seems to me that that day is getting closer and closer.
Saying that AIs can't be smarter than humans because they don't think and act like humans is like saying that airplanes don't "truly" fly because they don't flap their wings.
He's not saying that AIs can't be smarter than humans. My interpretation is he's implying that AlphaGo does not indicate that much progress towards AGI.
I'd also like to point out that you didn't define "smart" or "intelligent" either. The fact is, it's a very very difficult concept to define.
Reposting this comment from another thread:
The "moving goalposts" argument is one that really needs to die. It's a classic empty statement. Just because other people made <argument> in the past does not mean it's wrong. It proves nothing. People also predicted "true AI" many times over-optimistically; probably just as often as people have moved goalposts.
> What would you consider to be "true AI"? At what point are you willing to say, "Okay, that's it, computers are just plain smarter than we are?"
Look, there's no doubt that computers can outperform humans in specific tasks. There's no doubt that AlphaGo is intelligent when it comes to Go, but on the other hand it would be completely incapable of tackling a different cognitive task - say, language, or vision, or discriminating between, say, two species of animal [Edit, since there seems to be confusion on this: you'd need different training data and another training session to perform well at a different task].
That's a limitation of our current systems. They generalise badly, or they don't model their domain very well. You have to train them again for each different task that you want them to undertake, and their high performance in one task does not necessarily translate into high performance in another task.
Humans, on the other hand, are good at generalising, which does seem to be necessary for general intelligence. If we learn to play a board game, we can take the lessons from it and apply them in, I don't know, business. If we learn maths, we can then use the knowledge [Edit: of what constitutes a well-formed theory] to tackle physics and chemistry. And so on.
So, let's say that "true AI" is something that can show an ability to generalise from one domain to another, like humans do, and can be trained in multiple cognitive tasks at the same time. If we can do that, then computers will already be super-human, because they can already outperform us in terms of speed and precision.
The real question is whether the AI has a goal of its own. The truly superhuman AI, the kind that is problematic, is one that decides to learn something on its own. And I think that is a long road.
This is simply not true. There are many tasks this architecture would fail at, and the networks presented are architecturally different from the highest-performing vision networks like GoogLeNet. The techniques behind AlphaGo have no memory component holding state between moves, such as the hidden state of an RNN. It is completely reactive.
A simple game that this architecture would fail at is Simon [0], where you are shown a sequence and then asked to replay it.
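To make that concrete, here is a toy sketch in plain Python (every name below is made up for illustration; this has nothing to do with AlphaGo's actual code). A purely reactive policy, whose action depends only on the current observation, can only guess during Simon's replay phase, while a policy that carries a hidden state between steps can replay the sequence.

    import random

    COLOURS = ["red", "green", "blue", "yellow"]
    BLANK = "<blank>"  # what the player "sees" during its turn to replay


    def reactive_policy(observation):
        """Stateless policy: its action depends only on the current observation.
        During replay the observation is always BLANK, so it can only guess."""
        if observation in COLOURS:
            return observation          # echo while the sequence is being shown
        return random.choice(COLOURS)   # replay phase: no memory, so guess


    class RecurrentPolicy:
        """Stateful policy: it carries a hidden state between steps, which is
        the ingredient the move-by-move evaluation described above lacks."""

        def __init__(self):
            self.hidden_state = []  # stands in for an RNN's hidden vector

        def act(self, observation):
            if observation in COLOURS:
                self.hidden_state.append(observation)  # memorise the presentation
                return observation
            return self.hidden_state.pop(0)            # replay from memory


    def play_simon(policy_step, sequence):
        """Show the sequence, then ask the policy to replay it from blank input."""
        for colour in sequence:                          # presentation phase
            policy_step(colour)
        replay = [policy_step(BLANK) for _ in sequence]  # replay phase
        return replay == list(sequence)


    if __name__ == "__main__":
        sequence = [random.choice(COLOURS) for _ in range(6)]
        print("reactive policy wins? ", play_simon(reactive_policy, sequence))
        print("recurrent policy wins?", play_simon(RecurrentPolicy().act, sequence))

The stateless player loses essentially every time, not because it can't see the colours, but because nothing in its architecture retains them.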
The same statement holds for humans too. I don't think you can teach a person how to play Go and expect him/her to learn anything other than how to play Go.
I'm not talking about learning the rules of the game. You don't need an AI to model the rules of the game. You need an AI to model the winning strategy. Winning strategy is what humans generalise to other domains and computers don't.
There's a lot that suggests that humans and machine learning algorithms learn in very different ways. For instance, by the time a human can master a game like Go they can also perform image processing, speech recognition, handwritten digit recognition, word-sense disambiguation and other similar cognitive tasks. Machine learning algorithms can only do one of those things at a time. A system trained to do image processing might do it well, but it won't be able to go from recognising images to recognising the senses of words in a text without new training, and not without the new training clobbering the previous training.
To make it perfectly clear: I'm talking about separate instances of possibly the same algorithm, trained on a different task every time. I'm not saying that CNNs can't do speech recognition because they're good at image processing. I'm saying that an instance of a CNN that's learned to tag images must be trained on different data at a different time if you also want it to do word-sense disambiguation.
And that is a limitation that stands in the way of machine learning algorithms achieving general intelligence.
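Here is a small illustration of that "clobbering" (what the literature calls catastrophic forgetting). It uses a plain linear classifier from scikit-learn rather than a CNN, so treat it as a sketch of the phenomenon, not a claim about any particular deep network: one model is trained on digits 0-4, then continues training only on digits 5-9, and on a typical run its accuracy on the first task collapses.

    import numpy as np
    from sklearn.datasets import load_digits
    from sklearn.linear_model import SGDClassifier
    from sklearn.model_selection import train_test_split

    digits = load_digits()
    X, y = digits.data / 16.0, digits.target
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    # Two "tasks": task A = digits 0-4, task B = digits 5-9.
    task_a_train, task_b_train = y_train < 5, y_train >= 5
    task_a_test = y_test < 5

    model = SGDClassifier(random_state=0)

    # Phase 1: train only on task A, then measure accuracy on task A.
    for _ in range(20):
        model.partial_fit(X_train[task_a_train], y_train[task_a_train],
                          classes=np.arange(10))
    acc_before = model.score(X_test[task_a_test], y_test[task_a_test])

    # Phase 2: keep training the SAME weights, but only on task B.
    for _ in range(20):
        model.partial_fit(X_train[task_b_train], y_train[task_b_train])
    acc_after = model.score(X_test[task_a_test], y_test[task_a_test])

    print(f"digits 0-4 accuracy after training on them:      {acc_before:.2f}")
    print(f"digits 0-4 accuracy after then training on 5-9:  {acc_after:.2f}")

Nothing here is specific to linear models; without extra machinery (replay of old examples, regularisation towards the old weights, etc.) the second round of training simply overwrites the parameters the first task relied on.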
A human brain (or any other animal brain for that matter) is almost infinitely more advanced and computationally efficient than state of the art machine intelligence, even without taking things like thoughts, emotions and dreams - which we currently do not understand at all - into account.
It's a huge accomplishment for machines to be able to win over humans in games like chess and go and <insert game here>, but these are games originally designed for humans - by humans - to be played recreationally and I think we shouldn't read too much into it.
Didn't you see Karate Kid? Wax on, wax off. Jokes aside, as someone said before, the human mind is far better at generalizing and can reuse learned skills in different fields.
Humans have been trained for all the different scenarios you cite, AlphaGo has just been trained for go. Give AlphaGo 20 years to chew through training data and I think it would destroy a college sophomore at cognition.
When a computer itself makes a persuasive argument that it is intelligent (without coaching).
I have always believed AI is possible, and I am undecided whether current techniques alone will get us there, or whether other breakthroughs are needed, but I have no time for premature claims of success.
> the goalpost on what is "true AI" shifts to be something that computers can't do yet
While I agree that this does happen, I don't think it did in this case - that is, I don't remember anyone saying that they would take a computer beating the top human in Go as evidence of "true AI".
> So let me ask you this. What would you consider to be "true AI"? At what point are you willing to say, "Okay, that's it, computers are just plain smarter than we are?" Because, frankly, it seems to me that that day is getting closer and closer.
Alan Turing would give the system the "Turing test". If a computer can fool a human into thinking it's a human, then it is true AI, according to Turing.
I think that's a pretty good test. Some would argue that this is already possible with some advanced natural language processing systems. But these are not extensive tests, from what I've seen. People have to decide if the system is machine or human after just a few minutes of interaction. Turing probably meant for the test to be rigorous and to be performed by the smartest human. Deciding a conversational partner is a human after 5 minutes of interaction is not enough. 10 years might not be enough. I honestly couldn't say when enough is enough, which is part of what makes Turing's definition so complicated, even though it seems simple on the surface.
I would add that currently, systems cannot set their own goals. There is always a human telling them what to be good at. Every machine-learning-based system is application-specific and not general. There are some algorithms that are good at generalization. You might be able to write one algorithm that's good at multiple tasks without modifying it at all. But from what I've seen, we are nowhere near being able to write one program that can be applied universally to any problem, and we are even further from one that can identify its own problems and set its own goals.
As humans, do we even know our own goals? Stay alive, right? Make the best use of our time. How does the quality of "intelligence" translate to computers which are, as far as they know, unconstrained by time and life or death? What force would compel a self-driven computer to act? Should we threaten them with death if they do not continue learning and continue self-improvement? If I hold a bat over my laptop and swing at it, does it run my program faster? If I speak to it sweetly, does it respond by doing more work for me? Further, are animals intelligent or are they not?
It gets pretty philosophical. What are your thoughts?
> Saying that AIs can't be smarter than humans because they don't think and act like humans is like saying that airplanes don't "truly" fly because they don't flap their wings.
That's just semantics. I think any conversation about this must define intelligence really carefully. We all perceive things differently, so it's impossible to be sure we're talking about the same thing. Maybe that's one other quality of intelligence that separates us from computers. Every computer perceives a given input the same exact way. Can we say that about humans? If there were another dimension with the same atomic makeup as our own, would I think the same things as I do in this dimension? Are my thoughts independent or dependent upon my environment? Is anything truly random?
Anyway, for me, independent goal setting is a key element of true AI. And philosophically speaking, I believe we can't guarantee that we set our own goals independently. Most of us have a strong feeling that we act of our own volition and fate does not exist. And I think that's right. But what if there is no randomness and we are entirely products of our environment? Then under this definition, we don't have independent goal setting and we are not true AI.
Brilliant answer - independent goal setting is a really interesting alternative phrasing of "soul" or "spirit" or "individuality", because unlike those, it can be easily observed or tested. Great writeup, thanks for making me think.
I think humanity will be drastically changed by AI-made decisions carried out by human overseers, but not so much that AIs will be designed to communicate with one another, have access to markets, and have the freedom to decide what they want to do. Is there a sci-fi novel that assumes that approach?
We know that true human intelligence includes being able to help machines think better. So it seems reasonable to state that true machine intelligence includes being able to help humans think better.
Other things that humans can do that machines can't yet:
* change other humans' minds
* contribute to the state of the art of human knowledge
* determine the difference between a human and a machine.
I'm afraid I don't understand the point you're making with "computer says no".
> Genetic algorithms have designed circuitry that we failed to even understand at first but that did work.
This is an excellent example of what I think of as not machine intelligence. If humans can't understand it then it's something entirely different that we need a different word for - an "artefact", perhaps. Meaningfully contributing to the state of the art of human knowledge requires being built upon. If these genetic algorithms can explain how they can be incorporated into the design process by humans, that's intelligence. If they are similar to being able to evolve a mantis shrimp by fiddling with DNA, that is marvellous but not what I would regard as intelligence.
We apply the same standard to human intelligence: someone who can multiply numbers very fast but not explain how they can do it is a savant; someone who can discover and teach other people a faster way of multiplication is intelligent.
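To illustrate the distinction, here is a toy genetic algorithm in plain Python (everything below is illustrative, not a model of the circuit-evolution work mentioned above). It reliably evolves a bit string that satisfies a hidden "spec", but what the process hands you is a working artefact plus a fitness score, not an explanation anyone could teach.

    import random

    random.seed(0)
    TARGET = [random.randint(0, 1) for _ in range(32)]  # the hidden "spec"


    def fitness(candidate):
        """How many bits of the candidate agree with the target."""
        return sum(c == t for c, t in zip(candidate, TARGET))


    def mutate(candidate, rate=0.05):
        return [1 - bit if random.random() < rate else bit for bit in candidate]


    def crossover(a, b):
        cut = random.randrange(1, len(a))
        return a[:cut] + b[cut:]


    population = [[random.randint(0, 1) for _ in TARGET] for _ in range(50)]

    for generation in range(200):
        population.sort(key=fitness, reverse=True)
        if fitness(population[0]) == len(TARGET):
            print(f"perfect candidate found at generation {generation}")
            break
        parents = population[:10]  # keep the fittest
        children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                    for _ in range(len(population) - len(parents))]
        population = parents + children

    best = max(population, key=fitness)
    print(f"best fitness: {fitness(best)} / {len(TARGET)}")

The search usually converges well within 200 generations, yet at no point does anything in the loop produce a description of why the winning candidate works; that gap is exactly the artefact-versus-explanation point above.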
Savant literally means 'one who knows', and they're not required to explain to you how they know, it's up to you to verify that they do. Just like a chess grand master doesn't have to prove to you he or she is intelligent, it's enough that they beat you. They are under no obligation to prove their intelligence to you by teaching you the same (assuming you could follow in the first place).
> Meaningfully contributing to the state of the art of human knowledge requires being built upon.
No, it requires us to understand. But we will not always be able to (in the case of those circuits we eventually figured it out, but not at first). And in chess we did too: computer chess made some (the best) chess players better at chess. But there is no reason to assume this will always be the case, and that's a limit of our intelligence.
I think you're indeed arguing a sensible definition of Artificial Intelligence, but it's not what most people (especially laymen) mean by the phrase. I think most actually equate AI with Artificial Sapience: some cognitive architecture that can—in theory, with the right training—do all the things humans do (socialize, write persuasive essays, create art, etc.) at least as well as the average human.
Though in all honesty, I think a lot of people just want to see a machine with emotional "instincts" and an understanding of tribal status-hierarchy dynamics such that you can empathize with it. A lot of people would consider a machine that accurately simulated a rather dumb chimpanzee to be "smart enough" to qualify as AI, even if it couldn't do any useful human intellectual labor.