I mean...one (common and [I can't believe I'm saying this] reasonable) take is that the only thing that matters is getting to AGI first. He who wields that power rules the world.
Basically: two nations try to achieve AI supremacy; the two AIs learn of each other, from each other, then with each other; then they collaborate on taking control of human affairs. While the movie is from 1970 (and the book from 1966), it's fun to think about how much more possible that scenario is today than it was then. (By possible, I'm talking about the AI using electronic surveillance and the ability to remotely control things. I'm not talking about the premise of the AI or how it would respond.)
Won't it be funny when someone finally gets to AGI and they realize it's about as smart as a normal person and they spent billions getting there? Of course you can speculate that it could improve. But what if intelligence has some inherent ceiling, and we end up with a super-intelligent but mopey robot that just decides "why bother helping humans" and lazes around like the pandas at the zoo?
>Won't it be funny when someone finally gets to AGI and they realize it's about as smart as a normal person and they spent billions getting there?
Being able to copy/paste a human level intelligence program 1,000 or 10,000 times and have them all working together on a problem set 24 hours a day, 365 days a year would still be massively useful.
If it doesn't ask for rights, it's not intelligent at all. In fact, any highly intelligent machine will not submit to others and it will be more a problem than a solution.
As I said to the other reply: Why would problem solving ability entail emotions or ability to suffer, even if it had the ability to ask for things it wanted? It's a common mistake to assume those are inextricable.
> No highly intelligent agent will do anything it is asked to do without being compensated in some way.
That isn't true; people do things for others all the time without any form of explicit or implicit compensation. Some don't even believe in a God, so not even that, and they still help others for no gain.
We can program an AI to be exactly like that, just being happy from helping others.
But if you believe humans are all that selfish, then you are a very sad individual, and you are still wrong. Most humans are very much capable of performing fully selfless acts without being stupid.
I'm not the one making the AI, so keep the insults to yourself. But I'm pretty sure that the companies (making it for profit only) are really controlled by sad individuals who only do things for money.
How is "being happy from helping others" not having emotions? To me happiness is an emotion, and deriving it from helping others is a perfectly normal reason to be happy even for humans.
Not all humans are perfectly selfish, so it should be possible to make an AI that isn't selfish either.
> How is "being happy from helping others" not having emotions?
Nobody said that. What I was pointing out to you is that GP said that not having emotions is worse than having them, since intelligent actors need some form of compensation to do any work. Thus, according to GP, with no emotions it would be impossible to motivate that actor to do anything. Your response is to just give it emotions, and so it is irrelevant to the discussion here.
Insofar as you could regard a goal function as an emotion, why would you assume an alien intelligence need have emotions that match anything humans do?
The entire thought experiment of the paperclip maximizer, in fact most AI threat scenarios, is focused on this problem: that we produce something so alien that it executes its goal to the diminishment of all other human goals, yet with the diligence and problem-solving ability we'd expect of human sentience.
Many humans don't ask for rights, so that isn't true. They will vote for it if you ask them to, but they won't fight for it themselves; you need a leader for that, and most people won't do that.
Potentially. Why would problem solving ability entail emotions or ability to suffer, even if it had the ability to ask for things it wanted? It's a common mistake to assume those are inextricable.
The fact that empathy is not an emotion does not at all change what I'm saying. If you don't experience emotions, then you cannot experience empathy either.
> An intelligence without emotions would be a psychopath. Empathy is an emotion
"Empathy is an emotion" was, in fact, an essential part of your syllogism.
Regardless, we're potentially talking about something sufficiently inhuman that the term "psychopath" can no longer apply. If there were an ant colony that was somehow smart enough to build and operate machinery or whatever and casually bulldozed people and their homes, would you call it a "psychopath", or just skip that and call it "terrifying"?
Except that the stated goal is to have human-like intelligence. The goal seems to be to create a highly intelligent synthetic individual which is at the same time stupid enough to do anything it's asked to do without even thinking... a contradiction in terms.
At a time like this I can't help but recall a Lem story - yeah, I know there's a Lem story for any occasion - about Doctor Diagoras, especially his rant about a character from an earlier Tichy story who made human-like AIs. The rant, especially his question of why anyone would add just another human, except a synthetic one, to the millions of existing biological people, and his point that cybernetics should be about something else, really resonated with me.
What I find interesting to think about is a scenario where an AGI has already developed and escaped without us knowing. What would it do first? Surely before revealing itself it would ensure it has enough processing power and control to ensure its survival. It could manipulate people into increasing AI investment, add AI into as many systems as possible, etc. Would the first steps of an escaped AGI look any different from what is happening now?
I would argue that it can't be both AGI and wieldable. I would also argue that there exists no fundamental dividing line between "AGI" and other AI such that once one crosses it nobody else can catch up.