>> At this point it seems likely that Sedol is actually far outclassed by a
superhuman player.
I still don't agree that this is the case, and I don't care what a thousand
Google-hyped press releases say: beating the best human player at anything is
not "superhuman", and "superhuman" performance has not been achieved by
anything yet. [1]
Why do I think so? Two reasons.
One, because you can be entirely human and still beat all other humans, without
fail, for a very long time. Long "winning streaks" in professional sports are
very well documented. For instance, Rocky Marciano went entirely undefeated in
his whole heavyweight boxing career. In chess, Mikhail Tal went undefeated for
95 games. And so on, and so forth.
Of course most humans' winning streaks end eventually. That's because our
performance degrades over time. When a computer wins against the best player, it
keeps on winning, and in fact it gets even better over time.
Still, and this is number two: in most instances where a computer does better
than a human, we don't claim superhuman performance. Automatic calculators,
going back to mechanical calculators, have been better than humans at arithmetic
for a very, very long time. I would wager that nobody discusses pocket
calculators as exhibiting "superhuman" performance. You only hear this sort of
claim when it comes to Deep Blue, AlphaGo or Watson.
So maybe we need a better definition of what it means to be "superhuman" that
covers both pocket calculators and AlphaGo. Without one, I don't accept that
the performance of AlphaGo can be said to be superhuman, unless pocket
calculators' performance is also celebrated as superhuman.
_______________________
[1] I'm perfectly willing to go even further than that and say that we can't make machines that have
superhuman intelligence and that even if we did, we wouldn't be able to
recognise them (this last bit is similar to what the GP says).
Superhuman has a very specific definition in the field of game-playing algorithms: it is the case when the algorithm can always beat all humans. AlphaGo winning 5-0 against the number 5 ranked human (Lee) would give only a small indication that it is superhuman. Regarding streaks, evenly matched humans can be expected to go 5-0 (0.5)^5, or 3.125 percent, of the time, so that is not particularly rare. If it loses even one game then it is not yet superhuman.
If top humans get beaten 5-0 with significant handicaps then it is likely AlphaGo is superhuman. However, it is expensive to run AlphaGo, so it is unlikely that we will know its true strength for a while, until there are more challenges or hardware catches up.
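To make the streak arithmetic concrete, here is a minimal Python sketch (my own illustration, assuming independent games, which is a simplification) of how little a 5-0 sweep proves on its own: the sweep probability is just the per-game win probability raised to the fifth power.

    # Probability of a 5-0 sweep as a function of the per-game win probability p.
    # Illustrative only: assumes the games are independent.
    def sweep_probability(p, games=5):
        return p ** games

    for p in (0.5, 0.6, 0.75, 0.9, 0.99):
        print("p = %.2f -> P(5-0 sweep) = %.4f" % (p, sweep_probability(p)))

At p = 0.5 this prints 0.0312, the 3.125 percent figure above; even a clearly stronger player with p = 0.75 only sweeps about 24 percent of the time, which is why a single 5-0 match is weak evidence for "always beats all humans".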
The term ("superhuman") is used very differently in other communities though, that don't have such a clear metric of "beating humans" as in traditional game-playing. Which means it's about time we clarify what is meant by it, especially as various parties that have commercial interests start throwing it around carelessly (there was an example on HN a while ago).
The way in which it is superhuman matters. The calculator uses simple mechanical algorithms. AlphaGo uses a completely novel approach to deep learning that can likely be applied to many other systems/problems.
As long as those systems/problems include a grid-based problem space where the goal is to successfully place stones restricted by a limited set of rules.
OK, flippancy aside, there are two problems that make techniques like this single-domain: network design and network training.
The design uses multiple networks for different goals: board evaluation (which boards look good) and policy (which moves to focus on). Those two goals, evaluation and policy, are very specific to Go, just like category layers are specific to vision and LSTMs are specific to sequence learning.
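For what it's worth, here is a minimal PyTorch sketch of that two-network split. It is my own toy illustration under simplified assumptions (a 3-plane board encoding, tiny convolutional stacks), not DeepMind's actual architecture: one network scores candidate moves (policy), the other scores whole positions (board evaluation).

    import torch
    import torch.nn as nn

    BOARD = 19      # Go board size
    PLANES = 3      # own stones, opponent stones, empty -- a simplified encoding

    class PolicyNet(nn.Module):
        """Maps a board encoding to a probability over the 361 points (which moves to focus on)."""
        def __init__(self):
            super().__init__()
            self.conv = nn.Sequential(
                nn.Conv2d(PLANES, 32, kernel_size=3, padding=1), nn.ReLU(),
                nn.Conv2d(32, 1, kernel_size=1),
            )

        def forward(self, board):                 # board: (N, PLANES, 19, 19)
            logits = self.conv(board).flatten(1)  # (N, 361)
            return torch.softmax(logits, dim=1)

    class ValueNet(nn.Module):
        """Maps a board encoding to a single score for the whole position (which boards look good)."""
        def __init__(self):
            super().__init__()
            self.body = nn.Sequential(
                nn.Conv2d(PLANES, 32, kernel_size=3, padding=1), nn.ReLU(),
                nn.Flatten(), nn.Linear(32 * BOARD * BOARD, 1), nn.Tanh(),
            )

        def forward(self, board):
            return self.body(board)               # value in [-1, 1]

    board = torch.zeros(1, PLANES, BOARD, BOARD)  # batch of one empty board
    print(PolicyNet()(board).shape)               # torch.Size([1, 361])
    print(ValueNet()(board).shape)                # torch.Size([1, 1])

The point of the sketch is that the output shapes, and therefore what you can train against (move probabilities vs. a position score), are baked into the design, which is exactly why it is so Go-specific.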
Network training is obviously hugely resource-intensive -- and each significantly complex problem would need such intensity.
It is amazing the variety of problems DNNs have been able to do well in. However, the problems of network design and efficient training are significant barriers to generalization.
When network design can be addressed algorithmically, I think we may have an AGI. However, that is a significant problem where you automatically add another layer of computational complexity, so it is not on the immediate horizon and may be 50+ years down the road.
My pocket calculator is faster than a human and has better memory. I don't know that this means the same thing as "superhuman in arithmetic". I can concede that it means superhuman in speed and memory, but, arithmetic? I don't think so. What it really does is move bits around registers. We are the ones interpreting those as numbers and the results of arithmetic operations.
AlphaGo is rather different in that it actually has a representation of the game of Go and it knows how to play. I don't doubt at all that it's intelligent, in the restricted domain it operates in. But I do doubt that it's possible for an intelligence built by humans to be "superhuman" and I don't see how your one-liner addresses that.
Your calculator does have a representation of arithmetic too. It's those bits it moves around in registers, which are very much isomorphic to the relevant arithmetic.
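As a toy illustration of that isomorphism (my own example in Python, not anything your calculator literally runs): 8-bit addition built from nothing but AND, XOR and shifts -- the kind of bit shuffling an adder circuit does -- agrees exactly with ordinary arithmetic mod 256.

    # 8-bit addition from bitwise operations only. The mapping from bit
    # patterns to numbers is the isomorphism in question.
    def add8(a, b):
        a &= 0xFF
        b &= 0xFF
        while b:
            carry = (a & b) << 1   # bits that carry into the next column
            a = (a ^ b) & 0xFF     # column-wise sum, ignoring carries
            b = carry & 0xFF
        return a

    assert add8(37, 85) == (37 + 85) % 256
    print(add8(37, 85))            # 122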
Why would an intelligence built by humans not be able to be superhuman? The generally accepted definition seems to be "having better than human performance" in which case it seems we've done it many times (like with calculators).
>> The generally accepted definition seems to be "having better than human performance"
I don't think there's a generally accepted definition and I don't agree that
performance on its own is a good measure. Humans are certainly not as good at
mechanical tasks as machines are, duh. But how can you call "superhuman"
something that doesn't even know what it's doing, even as it's doing it faster
and more accurately than us?
Take arithmetic again. We know that cats can't do arithmetic, because they
don't understand numbers, so it's safe to say humans have super-feline
arithmetic ability. But then, how is a pocket calculator super-human, if it
doesn't know what numbers are for, any more than a cat does? There's something missing from the
definition, and therefore from the measurement, of the task.
I don't claim to have this missing something, mind you.
>> Why would an intelligence built by humans not be able to be superhuman?
Ah. Apologies, I got carried away a bit there. I meant to discuss how I doubt we
can create superhuman intelligence using machine learning specifically. My thinking goes like
this: we train machine learning algorithms using examples; to train an algorithm
to exhibit superhuman intelligence we'd need examples of superhuman
intelligence; we can't produce such examples because our intelligence is merely
human; therefore we can't train a superhuman intelligence.
I also doubt that we can create a superhuman intelligence in any other way, at
least intentionally, or that we would be able to recognise one if we created it
by chance, but I'm not prepared to argue this. Again, sorry about that.
>> Your calculator does have a representation of arithmetic too. It's those bits it
moves around in registers, which are very much isomorphic to the relevant
arithmetic.
Hm. Strictly speaking I believe my pocket calculator has an FPGA, a
general-purpose architecture that in my calculator happens to be programmed for
arithmetic, specifically. So I think it's accurate for me to say that, although
the calculator has a program and that program certainly is a representation of
arithmetic, I have to provide the interpretation of the program and reify the
representation as arithmetic.
In other words, the program is a representation of arithmetic to me, not to the calculator. The calculator might as well be programmed to randomly beep, and it wouldn't have any way to know the difference.
(But that'd be a cruel thing to do to the poor calculator).
There used to be jobs for thousands of people to do what pocket calculators do. Those jobs have been gone for decades. Humans entirely replaced by machines. So, yes, calculators are superhuman.
I agree somewhat, but then what is your gauge for superhuman if not some type of competition? How do you evaluate it?
On another note, Rocky Marciano had the mafia behind him. Harry Haft fought him and was knocked out, and later claimed the mafia told him he had to throw the fight [1].
There's also a good graphic novel about this. Maybe the AI's goons have threatened Sedol or kin ;)
>> what is your gauge for superhuman if not some type of competition?
To be honest, I don't have one. My intuition is that we can't have a good definition of "superhuman intelligence", because having one would require us to demonstrate superhuman intelligence ourselves, which is obviously a contradiction.
Intuitively, a calculator is not superhuman.
A person who could do mental arithmetic as well as a calculator? That would seem to be superhuman.
If you find out that they were using a calculator under the table the whole time, they're not superhuman any more.
So I think the word "superhuman" must imply a fair competition, in the sense that the participants are competing using comparable approaches. For some definition of comparable.