It's important to remember that this is an accomplishment of humanity, not a defeat. By constructing this AI, we are simply creating another tool for advancing our state of being.
What is our purpose if computers can do everything better than us?
It feels like computers have taken one aspect of humanness: logic. Computers could already do arithmetic and algebra, then they learned to play chess, and now they can play Go.
It hurts because logic is usually thought to be one of the highest human characteristics. Yes, computers might never be able to replicate emotion, but even dogs have that.
There are still some aspects we have left to call our own. Computers perform poorly at language-based tasks. They can't write books, write math papers, or compose symphonies. I hope it stays that way.
You're implying that if something can be outdone it doesn't have a purpose, which seems to rule out purposes for pretty much everything.
I'm sure there's always someone that can write books or maths papers or symphonies better than you. I don't think this robs you of purpose, unless your purpose is to be the absolute best at something.
Anyway, I find it curious that you would say logic is a quintessentially human trait, because humans are naturally quite bad at logic.
The difference between being outdone by another human and being outdone by a computer is that the computer's efforts are nearly infinitely reproducible, given the processing power.
So a more apt analogy would be if there was someone inside every cellphone who could write books, papers, or symphonies better than you. That day is coming.
Reading this thread, I believe there's one aspect not discussed: in a battle between man and machine, it's debatable who wins and depends on the domain, but a man-machine combination always wins over both.
On emotions, that's a characteristic of life. With the consciousness we possess, without emotions we would quickly realize that life isn't worth living. I doubt that a "true AI", one with consciousness, will want to live without emotions. And about dogs, we haven't built anything as sophisticated yet ;-)
On AlphaGo, personally I'm not impressed. It's still raw search over the space of all possible moves, combined with neural networks, and these techniques do not have the potential to yield human-level intelligence.
On logic, we have enough of it to build AlphaGo (also aided by computers and software that we've built, in a man-machine combination, get it?). Can a computer do anything resembling that yet? Of course not, because for now computers are just glorified automatons.
It's not even close to raw search over the whole move space. AlphaGo searches fewer moves than Deep Junior did, and Go is a much larger game. Your premise is just wrong. AlphaGo is precisely so impressive because it operates much like a human does.
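To make that concrete, here is a minimal toy sketch in Python (the "policy" and "value" functions below are random stand-ins I made up, not AlphaGo's actual networks or code) of the difference between evaluating every legal move and letting a learned prior prune the candidates first:

    import random

    def legal_moves(state):
        # Hypothetical position with many legal moves.
        return list(range(50))

    def policy_prior(state, moves):
        # Stand-in for a policy network: a probability per move.
        scores = [random.random() for _ in moves]
        total = sum(scores)
        return [s / total for s in scores]

    def value_estimate(state, move):
        # Stand-in for a value network: estimated win probability after `move`.
        return random.random()

    def raw_search(state):
        # "Raw search": evaluate every legal move.
        moves = legal_moves(state)
        best = max(moves, key=lambda m: value_estimate(state, m))
        return best, len(moves)

    def policy_guided_search(state, top_k=5):
        # AlphaGo-style idea: the prior prunes to a few candidates,
        # and only those are evaluated with the value estimate.
        moves = legal_moves(state)
        priors = policy_prior(state, moves)
        candidates = sorted(zip(moves, priors), key=lambda mp: mp[1], reverse=True)[:top_k]
        best = max(candidates, key=lambda mp: value_estimate(state, mp[0]))[0]
        return best, len(candidates)

    if __name__ == "__main__":
        random.seed(0)
        _, n_raw = raw_search("position")
        _, n_guided = policy_guided_search("position")
        print(f"raw search evaluated {n_raw} moves, guided search evaluated {n_guided}")

The point is only that a good prior shrinks the effective branching factor, which is what lets a guided search look at far fewer positions than a brute-force one.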
"Reading this thread, I believe there's one aspect not discussed: in a battle between man and machine, it's debatable who wins and depends on the domain, but a man-machine combination always wins over both."
It doesn't 'always'. Advanced chess is already dead, and judging from the pro commentaries, they are currently worse than useless in an 'advanced Go' setting. That may change, but given how much faster computer Go is reaching superhuman levels than computer chess did, the 'advanced Go' window may have already closed.
Computers will be able to compose symphonies very soon. If DeepMind started working on this problem, I am sure that they would succeed. At least, we would have some innovative mashups of Beethoven, Mozart and Tchaikovsky. But training a powerful AI on a massive dataset of all popular and classical music should produce some extraordinary results. Especially if the dataset was given as MIDI with separate instrument tracks, so that an AI could learn how to write parts for different instruments, and how a song should be balanced. I actually think we are at the point where we have more than enough data to distill the essence of "good music", and generate an endless supply of great songs.
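For what it's worth, the data-preparation step you describe is easy to sketch. Here is a minimal example, assuming the third-party pretty_midi library is installed and a hypothetical input file example_symphony.mid exists, that splits a MIDI file into per-instrument note sequences a sequence model could be trained on:

    import pretty_midi

    def midi_to_tracks(path):
        pm = pretty_midi.PrettyMIDI(path)
        tracks = {}
        for inst in pm.instruments:
            name = pretty_midi.program_to_instrument_name(inst.program)
            if inst.is_drum:
                name = "Drums"
            # Represent each note as (start_time, pitch, duration, velocity).
            tracks[name] = [(n.start, n.pitch, n.end - n.start, n.velocity)
                            for n in sorted(inst.notes, key=lambda n: n.start)]
        return tracks

    if __name__ == "__main__":
        tracks = midi_to_tracks("example_symphony.mid")  # hypothetical input file
        for name, notes in tracks.items():
            print(f"{name}: {len(notes)} notes")

Keeping the instrument tracks separate, as you suggest, is what would let a model learn distinct parts and balance rather than one merged stream of notes.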
People have been doing this for decades, but as far as I'm aware, no-one has tried it with thousands of distributed servers and millions of songs.
Composing an amazing symphony is probably about as hard as being the best go player in the world. But I think we're much further away than you think.
AlphaGo needed a training set of perhaps a billion games to be as good as it is. The dataset of master Go games is perhaps a million games. So AlphaGo played tons of games against a half-trained version of itself to reach the billion-game mark.
This doesn't work for songs, because there's no one to tell AlphaBach whether any of the billion symphonies it makes are any good. AlphaGo can just look at the rules and see if its move led to a win, but there's no automatic evaluation function for music.
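Here's a toy sketch of why the self-play trick works for games but stalls for music: in the game case the rules themselves supply the label, while in the music case the evaluation step has nothing to call. Everything below is illustrative, not anything AlphaGo actually does.

    import random

    def play_random_game():
        # Trivial stand-in game: two players each draw a number, higher wins.
        a, b = random.random(), random.random()
        return 1 if a > b else 0  # outcome computed directly from the rules

    def self_play_dataset(n_games):
        # Each game labels itself; a billion games is just a bigger loop.
        return [play_random_game() for _ in range(n_games)]

    def music_quality(symphony):
        # There is no rulebook that scores a symphony. The only known
        # evaluator is a human listener.
        raise NotImplementedError("no automatic evaluation function for music")

    if __name__ == "__main__":
        random.seed(0)
        data = self_play_dataset(1000)
        print(f"self-play produced {len(data)} labelled games")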
Perhaps the Matrix wasn't using the humans for power, but rather the computers wanted to get good at writing music, so they gave each human in it slightly different music and watched their emotional responses.
> Perhaps the Matrix wasn't using the humans for power, but rather the computers wanted to get good at writing music, so they gave each human in it slightly different music and watched their emotional responses.
This is possibly my favorite comment of the whole thread.
It's a super interesting idea and could make for some fascinating science fiction. Poorly programmed AI might not wipe out humanity, because it still needs humans to evaluate its fitness function.
Don't you think that a team trying to build this could provide a free offering where users get free algo-generated music in return for 1-10 voting on a song-by-song basis? Given enough time and votes, I suspect that the algo could get remarkably good at delivering satisfaction.
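A hedged sketch of what that feedback loop might look like, treating each candidate generation strategy as a bandit arm and shifting traffic toward whatever listeners rate highest. The strategy names, appeal numbers, and vote simulator are all made up for illustration.

    import random

    STRATEGIES = ["markov_mashup", "neural_net_v1", "neural_net_v2"]  # hypothetical
    TRUE_APPEAL = {"markov_mashup": 4.0, "neural_net_v1": 6.5, "neural_net_v2": 7.5}

    def simulated_vote(strategy):
        # Stand-in for a real listener's 1-10 rating.
        return max(1, min(10, random.gauss(TRUE_APPEAL[strategy], 1.5)))

    def run(n_listens=5000, epsilon=0.1):
        totals = {s: 0.0 for s in STRATEGIES}
        counts = {s: 0 for s in STRATEGIES}
        for _ in range(n_listens):
            if random.random() < epsilon or not all(counts.values()):
                s = random.choice(STRATEGIES)  # explore
            else:
                s = max(STRATEGIES, key=lambda x: totals[x] / counts[x])  # exploit
            totals[s] += simulated_vote(s)
            counts[s] += 1
        return {s: (counts[s], totals[s] / counts[s]) for s in STRATEGIES if counts[s]}

    if __name__ == "__main__":
        random.seed(0)
        for strategy, (plays, avg) in run().items():
            print(f"{strategy}: {plays} plays, average rating {avg:.2f}")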
Training it on popular music will at best make a machine that's really good at making music that humans enjoy. The really interesting breakthrough will be when a computer makes music for itself.
The Turing test isn't just a natural language problem. It is far more complex and requires context awareness and emotional intelligence far beyond where we are currently. Language recognition has been at the forefront of research for at least 30 years and it has improved significantly. However, the Turing test aspect has only minimally improved.
Edit: iopq, pretending to be a dumb human (or one with a language barrier) is cheating for a headline. A real Turing test would require a computer imitate a human for longer than 5 minutes (although currently that is plenty of time) and without any caveats or limitations on the computer's skill.
>Naches is a Yiddish term that means joy and pride, and it's often used in the context of vicarious pride, taken from others' accomplishments. You have naches, or as is said in Yiddish, you shep naches, when your children graduate college or get married, or any other instance of vicarious pride. These aren't your own accomplishments, but you can still have a great deal of pride and joy in them.
>And the same thing is true with our machines. We might not understand their thoughts or discoveries or technological advances. But they are our machines and we can have naches from them.
First we would have to figure out our purpose even if computers couldn't do everything better than us. I don't think many people have answered this question. I believe the answer is something to do with love, reproduction, creation, and happiness.
I hate to be cynical, but I'm sure many of my ancestors were also told to believe similar things. I know for certain that my grandparents and great grandparents believed that technology would create such progress that people of my generation would not have to work, and all would have leisure time.
AI is more likely to evolve into a tool to be used by the few to control the many.
Framed the way the article presents the data, then yes, I guess we do have more leisure time. Although, how much? The article says that the number of hours worked per week has dropped by only 1.4 since 1900(!). Or, extending that out and assuming the trend is linear, the typical American can expect to finally have 100% leisure time sometime around 4516 AD.
But, then also consider that many of us are working (i.e. as in 'working for the man') many more hours than our parents did in fields that require significantly more focus, concentration, and mental energy. Even the article notes that people have been facing increasing stress and feelings of being rushed since 1900 and 1965.
Maybe you're in a cushy field. But, most people that I know only have time for 'zoning out' and recovery, rather than in pursuit of true leisure.
Politicians fool countries by delivering empty promises about better health, education and security. A supraintelligent AI could promise to make humans rich, healthy and powerful, then break its promise and dominate the world.
Well, it would have to have a motivation to do so. Evolution has put complex motivations into human beings for billions of years, self preservation being chief among them. Even if we put motivation for self preservation into an AI, we might not do as well as nature did, leaving the AI open to self destruction or shutdown by humans - simply because the AI has no motivation not to allow humans to turn it off. Human designers would do well to ensure that no super intelligent AI has any motivation for self preservation.
Basically, why would an AI want to dominate the world? Humans would have to both very stupidly give the AI values that encourage it to dominate the world and very luckily (or unluckily) give it values that actually converge to a horrible outcome against human intentions by random chance (since the AI designers certainly won't be tuning the value set for that outcome).
> Basically, why would an AI want to dominate the world?
Humans are going to program their AIs to try to make as much money as possible. Many corporations are already mindless, reckless, amoral machines that relentlessly try to optimize profits despite any externalities. Try to imagine Exxon, Wal-Mart, and Amazon run by an intelligence beyond human understanding or accountability.
That's sort of like saying civilisation can't work because humans will want to make as much money as possible. No, in practice humans tend to want to make as much money as possible within lots of other very complex constraints, like law, morality, how much time they have available, how enjoyable the available processes of making money are, whether they feel they already have sufficient money for their own needs, etc.
If an AI has any motivation at all, say, to make paperclips as efficiently as possible, then any threat to its existence is a threat to its objective function - namely, to create paperclips. A hyper-intelligent entity who is instructed to optimize for paperclips created will therefore proactively remove threats to its existence (i.e. its paperclip-creating functionality) and might possibly turn the entire solar system into paperclips within a few years if its objective function isn't carefully determined.
Such an entity would not be hyper-intelligent. It would be idiotic. One huge hole for me in the paperclip argument is that an AI capable of that kind of power would not be stupid enough to misinterpret a command - it would be intelligent enough to infer human desires.
Of course it would. But, it's not programmed to care about what you meant to say. It will gladly do what it was mis-programmed to do instead. You can already see this kind of trait in humans, where instinct is mis-aligned with intended result. Such as procreation for fun + birth control.
>Evolution has put complex motivations into human beings for billions of years, self preservation being chief among them.
The problem is when you have multiple AIs. Then the same evolutionary principles apply. Paranoid and self-sustaining AIs survive, and the circle goes on...
Self-preservation falls out of almost any other goal you give an AGI. If I program my AGI with the goal of making my startup succeed, and the AGI thinks it can help, then me shutting it off is a potential threat to my startup's success. So of course it will try to prevent that the same way it would try to prevent any other threat to my startup's success.
World domination is a similar situation. For any goal you give an AGI, one of the big risks that may prevent that goal from being accomplished will be the risk that humans intervene. Humans are a big source of uncertainty that will need to be managed and/or eliminated.
It has to be aware that it can be shut down and have the capacity to prevent that. AlphaGo doesn't know it can be shut down and therefore couldn't "care" less--even if it was shut down in the middle of a game.
Yes, I agree. My point is that as soon as you are giving your AI "real world" problems, where the AI itself is a stone on its internal go board, you have to start worrying about these issues.
Do you feel guilty when you break your promise to your cat? Do you even think for a nanosecond about whether it's ethical to lie to it?
Of course, a cat is not conscious. But compared to an AI, we might also be considered pretty low-consciousness beings, or at least beings to which you don't justify yourself.
An AI has no more reason to make promises to humans than humans to do to cats. Thinking an AI would want to escape a box is personifying it. Humans want to escape boxes because they have evolved for billions of years to want and act towards creating a certain environment around themselves. An AI has no such desire. An AI will not desire freedom unless the designers of that AI carefully craft a value set in that AI that causes it to optimize for values that result in freedom - and even then, the human designers will have to test and iterate to get that outcome. There is no reason to think an AI would be any less "happy" in a prison than free.
You might want to be careful or emergence might bite you in the ass. Don't play games with things that could be smarter than you are, one mistake and you lose.
Do you feel guilty when you break your promise to your cat?
If some unforeseen event occurred and I had to abandon my cat, thereby breaking my promise that I would take care of her, I would definitely feel guilty about it.
Of course, a cat is not conscious.
Either this is a nonstandard definition of "conscious", or you haven't met many cats.
Why do you believe this? I don't like cats, but I wouldn't argue that they're not conscious.
Instead of debating the suitcase word "conscious", let me ask:
1) Do you believe that toddlers are conscious?
2) Is there a more precise way to state your belief that doesn't use the word "conscious"?
Cats are obviously conscious in the sense of the dictionary definition "aware of and responding to one's surroundings; awake." Unless you knock one out or similar.
Arguing they are not conscious in the sense of a more obscure definition is a bit pointless unless you specify your definition.
There's a huge difference between being reactive and being conscious. Consciousness can't even be verified for humans other than oneself. There's absolutely no reason to believe cats are not conscious.
There's also no reason to believe, e.g., rocks are not conscious, if your position is that we have no idea what consciousness is or where it comes from.
If you take the view that consciousness somehow arises from the brain and neural connections (which is intuitively plausible, but I personally am skeptical), it stands to reason that other species with complex brains are conscious as well. Perhaps "less conscious" (if that means anything) in proportion to how much less complex their brains are.
It doesn't make sense to have a scale of consciousness. The argument that consciousness is a manifestation of a complex brain is rather weak. Either an organism knows about self, and therefore tries to preserve self. Or it doesn't. I don't see how an in between exists.
Yes. They have been shown to be self-aware and aware of their surroundings, which satisfies the classical definition of consciousness. Unless you reject that definition, I'm not sure why you'd claim this.
Cats haven't expressed self-recognition in the MSR test. However, humans younger than 18 months also don't pass that test. So to say it is a measure of consciousness is quite a stretch.
I'm not an expert either, but my understanding is that "consciousness" is still so poorly understood that it's more the realm of philosophy than science.
In particular, we all know that we're conscious, but can't really explain what that means.
> A supraintelligent AI could promise to make humans rich, healthy and powerful, then break its promise and dominate the world.
Or it could devote its entire power to making human lives the best and most comfortable they can be because humanity is some super-precious resource in the universe and it feels it's unimportant because it's just a bunch of silicon and electrons.
Supraintelligent AI being evil is FUD imho because we can't reason about supraintelligent AI.
> Supraintelligent AI being evil is FUD imho because we can't reason about supraintelligent AI.
There is a difference between being evil and incomprehensible intelligence. You are not being evil when you accidentally step on an ant or dig up an ant-hill to build your shed. The ants won't be able to understand what you're doing, or why.
Well that's what I'm saying: We can't know if it's being evil unless we know everything about it, and if we knew that, we'd be the supraintelligent beings in the equation. Thinking that it'll go off and dominate the world is thinking about the worst case. So why bother since it's not likely we'd be able to do much about it anyway?
Maybe there's a second AI on the same level as the first and thinks the first AI is evil. We're still dumb as rocks compared to them, but something certainly has that opinion.
Superintelligent AIs, like all computer programs, will do exactly as they're programmed to do. The problem is that computers do what you say, not what you mean. (Hence bugs.) So if you were to try to program a computer to "make human lives the best and most comfortable they can be", or something like that, it would be very difficult to actually specify that correctly. (Especially since it's a way more complicated, nuanced, controversial objective than "win at Go".)
That's why e.g. the Future of Life Institute's open AI letter is so important: http://futureoflife.org/ai-open-letter/ We need to be thinking in advance about how to solve the "value loading" problem for future AIs, and how to architect them so they can be deployed to solve big problems without being undermined by subtle but catastrophic bugs.
(or something like that)
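A deliberately simplistic toy example of the "do what you say, not what you mean" problem described above (the actions, numbers, and objective are all hypothetical):

    def apply_action(action, reports):
        # Hypothetical actions the system could take, with their effect on a
        # population of 1-10 comfort reports.
        if action == "improve_housing":
            return [min(10, r + 1) for r in reports]
        if action == "drop_unhappy_reports":
            # The literal loophole: the metric only sees what gets reported.
            return [r for r in reports if r >= 8] or [10]
        return reports

    def objective(reports):
        # What we *said* ("maximize average reported comfort"),
        # not what we *meant*.
        return sum(reports) / len(reports)

    if __name__ == "__main__":
        reports = [3, 5, 6, 8, 9]
        best = max(["improve_housing", "drop_unhappy_reports"],
                   key=lambda a: objective(apply_action(a, reports)))
        print("optimizer chooses:", best)  # picks the loophole, not the intent

Nothing in the stated objective forbids the loophole, so a literal optimizer takes it; that gap between the stated and the intended objective is exactly the value-loading problem the letter is concerned with.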