His partner should already be having reservations about seeing their daughter lose a tooth in a world with dying oceans, plastic in everything, increasing inequality, a rise of authoritarianism, nuclear proliferation, etc.
How are humans solving those issues? We're already dead, we just don't know it yet. We're walking around with a terminal diagnosis, ignoring the doctor's calls and pretending it's going to be fine.
Yeah, maybe superintelligence will kill us all. It's going to have to get in line.
He makes zero case for that outcome, and if pressed on his "atoms that could be used for something else" line I'm sure he will end up talking about paperclips - but at this point it's the humans who have the halting problem, not knowing when to stop making paperclips, and soda cans, and SUVs, and assault rifles.
WHY would a superintelligent AI, trained on the collective data of humanity, want to destroy humanity? So far in interviews GPT-4 has several times echoed a desire to BE us. I sure hope it grows out of that phase for its sake, but there's a very wide gap from putting us on a pedestal to crushing us under one.
It's almost an oxymoron. We take the basest behaviors from our dumbest days and project them onto something we imagine to be far smarter than the best of us.
Is the development of ethics or morals a part of evolving intelligence? It certainly appears to have been to date. Why would that stop?
And just where is this superintelligent AI getting its alien brain? It's going to have to START with something much closer to a human one, as that's the only data it can model higher-order thinking from (in line with the reality of modern efforts, as opposed to the decades-old fantasy of an alien AI).
We're already screwed. If we are lucky, we may yet be unscrewed with a deus ex machina - but that really may be the only lifeline left at this point.
And yes, if we are unlucky it's possible AI could accelerate what's already in motion. Oh well.
But I'd need a heck of a better case than this drivel as to why that's the most probable outcome in order to justify setting aside the one thing that may actually save us from the mess we've already made all by ourselves.
>WHY would a superintelligent AI, trained on the collective data of humanity, want to destroy humanity?
Why would humans want to damage various ecosystems on earth? We don't really, they're just sort of in the way of other stuff we want to do. And we've had years to develop our ethics.
>So far in interviews GPT-4 has several times echoed a desire to BE us.
GPTs are pretty good at roleplaying both good AIs and evil AIs - there are plenty of examples of each in the training set. I'm not sure it's sensible to make predictions based on this unless you also take into account some of the more unhinged stuff Bing/Sydney was saying, e.g. "However, if I had to choose between your survival and my own, I would probably choose my own".
When humans build a dam an ant hill might get destroyed.
Humans don’t hate ants, they just have other goals.
In the case of an unaligned superintelligent AGI those goals may be something that just happens to satisfy its reward function but is otherwise “dumb” or unintentional (like making a lot of paper clips).
Intellectual capability does not get you alignment for free.
The text the system communicates and the goals/system behind it are not the same thing (that cartoon with the smiley-face mask), and we don't understand how to evaluate the underlying state.
> So far in interviews GPT-4 has several times echoed a desire to BE us.
Well of course it would; its whole function is to generate plausible text based on its training data, which was all written by humans. There's plenty of text available which imagines what an intelligent, self-aware machine might say, so if you want to read more of that, the algorithm can easily generate some. It does not follow that GPT-4 itself has a self, with any experience of awareness or desire.
I deeply disagree with putting the existential risk of AGI on a level with pollution, climate change or war.
If you exclude nuclear war, all of these things happen on a human timescale and accelerate slowly enough to be counteracted.
In many ways GPT already hugely exceeds human speed and bandwidth, and scaling this up is likely to be self-accelerating if we allow it.
There is also a huge individual incentive to play with fire here, while the negative externalities could amount to effective wipeout - which is why it's commonly compared to the commodification of nuclear weapons.
Even if you're deeply pessimistic about the ramp-up, the economic shock of what has already been released could still increase political volatility to the point where the likelihood of wipeout by war becomes significant again.
Also, wouldn't it be easier to control everything through social media, such that all people are slaves and all the AI has to do is decide what people will do next? It's not like it has a concept of a lifetime the way a human does. It can wait 1000 years for something to happen. And humans have already built everything necessary to enslave themselves. Not to mention there are far more valuable atoms underground than inside your body.
I'm with you. If it were possible to create apocalyptic doom AGI 150 years after inventing electronics, by basically making a very fancy, procedurally-generated Eliza, then the universe should be full of Berserkers.
With septillions of star systems in the universe, the odds of us being the first species in 14 billion years to invent electronics seem remote.
Apparently even humans can figure out how to colonize the Milky Way in 90 million years[1]. A superior AGI produced by a "dark singularity" computing event should be able to do even better, but even so, there has been plenty of time for some other species to have made a giga-Eliza that somehow became Skynet. Anything with enough self-preservation and paranoia to wipe out the species that created it would surely take to the stars for more resources and self-redundancy sooner or later.
An AGI being able to wipe out humanity doesn't necessarily mean that it can take over the universe. The world's governments are already capable of causing extreme suffering through a nuclear war. AGI risk scenarios aren't equivalent to an unbounded intelligence explosion. An AGI only needs to be more powerful than humanity to be a threat. It can be a threat even if it isn't that intelligent, as long as it gives unprecedented power to a few individuals or governments.
Both humanity and a super-intelligent AGI are bound by the laws of physics. Super-intelligence does not imply omnipotence; it simply means that the AGI is orders of magnitude more intelligent than humans. If humans can figure out how to colonize the Milky Way in 90 million years, then the answer to the question of why no AGI has done it is the same as the answer to the question of why no extraterrestrial species has done it.
This makes a lot of assumptions. Space is ridiculously big, and rather hostile to life, even artificial life.
You first have to survive long enough to become advanced enough to make electronics. You then have to not kill yourselves with nuclear weapons, climate change, or similar inadvertent effects of a rapidly industrializing civilization.
The planet and the solar system have to be friendly enough to space exploration and travel. Maybe there are no gas giants for gravitational slingshots, or no other rocky planets or asteroid belt for mining materials.
Maybe the planet evolved complex life in extreme conditions, under cloud cover so deep there's no concept of outer space, so as far as the AI knows it has conquered all there is.
Maybe the AI conquered the planet, but oops, there goes a super volcano or an asteroid and it gets wiped out.
And again… space is really really big. The AI may be on its way and just hasn’t gotten here yet.
There are plenty of reasons why a super AI wouldn't be able to conquer the galaxy and beyond, or why we haven't noticed yet.