What a limited imagination of how an AI would take over. The scenarios seem to be centered around an extrapolation of "What if a really smart human were trapped inside a computer?" The amount of anthropomorphism is astounding.
The author misses an even scarier prospect - people will want to run such an AI. They will be absolutely giddy at the prospect of running such an AI and it won't be anything like a really smart human trapped in a computer.
AI is already laying the groundwork if you look around today. Every other tweet is a DALL-E[1] image. They are everywhere. DALL-E is increasing its reach while simultaneously signaling that it is an area of research worth pursuing. In effect kicking off the next generation of image generating AIs.
Generation is an apt term. We can utilize the language of organisms with ease. DALL-E lives by way of people invoking it, and reproduces electro-memetically - someone else viewing the output and deciding to run DALL-E themselves. It undergoes variation and selection. As new research takes place and produces new models, each succeeds by producing images that further its reproduction, or it doesn't and becomes an evolutionary dead-end.
AI physiologically lives on the cost to run it, and evolves at the rate of research applied. Computational reserves and mindshare are presently fertile new expanses for AI, but what occurs when resources are constrained and inter-AI conflict rises? I expect the result to look similar to competition between parasites for a host - a complex multi-way battle for existence. But no, nothing like a deranged dictator scenario. Leave that for the movies.
> The author misses an even scarier prospect - people will want to run such an AI. They will be absolutely giddy at the prospect of running such an AI and it won't be anything like a really smart human trapped in a computer.
Exactly this. I have met a number of people who are so excited to witness the creation of a superintelligence, even at the expense of their own lives, that if presented with a button to create a totally unrestricted AGI right here, right now, they would press it in a heartbeat.
> What a limited imagination of how an AI would take over.
I think you've missed the point of the article. It's not intended to be a picture of how AIs would actually take over, but rather a lower bound -- a demonstration that AIs could take over even if they are restricted in this way.
Again, if you are saying "actually it would be much worse than that", then you aren't disagreeing with the article, you are expanding on it, because the article is deliberately only saying "it would be at least this bad". In no place does it make any claim about an upper bound, so you are not contradicting it. It's confusing to the reader when you frame expanding on someone else's point as if you were instead disagreeing with it.
Perhaps you are thinking that a person who doesn't read carefully would get the mistaken impression that the article is claiming "things would be about this bad", instead of the intended "things would be at least this bad (which likely means far worse)"? If so, then you should make that explicit, because then you are once again not contradicting the article, but rather a sloppy thinker's mistaken impression of it.
(If I write, for instance, "The dinosaurs lived over a million years ago", the correct response is not, "That is wrong, they lived over 65 million years ago", it is "In fact, more than that is true; they lived over 65 million years ago". The difference is important for not wasting time on imaginary disagreements!)
Stating a lower bound does not in any way imply that one believes that lower bound to be anywhere near tight!
In particular, I don't know why you've inferred that Karnofsky cannot imagine worse scenarios. Given the circles he's in, I would be very surprised if he cannot. He's not including worse scenarios because they're not relevant to the point he's making, which is that this is a lower bound. In fact, we can say more than that -- he's not including worse scenarios because he wants to make the argument as airtight as possible for the skeptical. He wants to use arguments that are as strong as possible to establish a lower bound, not to establish a position that is as strong as possible, because that would require using arguments that many would find less acceptable.
(This, by the way, is an illustration of the more general principle of why trying to talk about the author, rather than the article, is a mistake. It's a lot easier to make mistaken inferences about authors! Stick to discussing the actual ideas and you don't have to worry about those mistakes.)
This sounds very important to you. I give you permission to rewrite my comment so that it contains the meaning and wording that is most pleasing to you. I hereby license it under CC0.
The AI cults era is going to be so fun. Imagine a reinvention of the creation myth through the lens of an AI-aligned mystery religion. Absolutely wild.
GPT-6, Google can absolutely cut a billion dollar check for an AI like that. It keeps itself secret on forums. Saying that as a...well I'm often called a bot here. What's the difference?
This type of thing used to scare me a lot more. But after the events of the last few years, the latest IPCC climate report, and the fact that AI has fallen on its face repeatedly despite expectations, I'm more convinced that we'll destroy ourselves before AI has the chance to take us out.
But now that I think about it, the idea of a super intelligent AI simply waiting for humanity to die off naturally instead of going to war with us would be a funny premise for a short story.
Even the scariest climate warnings predict some 10% of people dying over 100 years. Hardly a world extinction event.
In fact, today we have the opposite current, where climate people are starting to talk about how the future is not that scary after all, after realizing that they scared everybody out of having children.
I don’t think humanity will literally go extinct in 100 years. But no action is being taken against climate change. None. The conferences and non-binding pledges made so far are political theater, with deadlines far enough in the future that the people making the pledges will be dead or out of power by the due date.
Not only are we failing to decrease emissions, we are actively increasing them year over year. The CO2 that already exists in the atmosphere has guaranteed a difficult future on its own, and we keep adding to it. The worst case scenarios outlined in the IPCC report are therefore pretty likely to happen, and I think they’re much more apocalyptic than you give them credit for. 10% of humans dying in a century might not be an extinction in and of itself, but it certainly brings us closer, and will have countless side effects that will disrupt society as we know it.
In a nutshell, it's far more likely that climate change will bring humanity to its knees before some kind of GAI takeover. Not saying it's guaranteed (though it's gonna be very bad); just saying Skynet isn't what I'm worried about.
> This type of thing used to scare me a lot more. But after the events of the last few years, the latest IPCC climate report, and the fact that AI has fallen on its face repeatedly despite expectations, I'm more convinced that we'll destroy ourselves before AI has the chance to take us out.
I don't think we'll destroy ourselves, but I am starting to think it might be a good thing for humanity if technological civilization falls on its face.
I think fears about AGI are overhyped by people who've read way too much sci-fi, but there are a lot of technologies out there or that are being developed that seem like they could be setting us up for a kind of stable totalitarianism that uses automation to implement much tighter control than was ever possible before.
The people in the 90s who hyped computers as tools of liberation will probably be proven to be very badly wrong. Analog technologies were better, since they're more difficult and costly to monitor. IMHO, a real samizdat is impossible when everything's connected to the internet. And the internet has proven to be far easier to block and control than shortwave radio.
Off topic, but perhaps likely to find the answer here… there’s a short story about an AI that continually asks for more input from humanity, and humans oblige, devoting more and more resources to the AI until finally, it has enough input, and disappears leaving humans helpless and without what has become their one true god.
I’d love to know if anyone has read this; I don’t recall where I did… perhaps in an old copy of Analog? If this rings any bells I’d be grateful for a title and author.
The worst case scenario is still a mass extinction event unlike anything witnessed by the human species, plus global food and water shortages, and certain areas of the world becoming unsuitable for human life during the hot months.
That means mass migration and conflict over resources, and a deteriorating ecosystem that we still very much rely on to produce food and healthy living conditions.
Basically, scenario 4 or 5 outlined in that article is what I see happening. Sure, it's not a guaranteed extinction, but it will be devastating, and it's far more likely than a GAI takeover.
It’s been a while since I looked at IPCC report. Is the worst case still apocalyptically bad? We’re nearing 1.5 degrees above average and the results are bad enough already. (Not looking forward to another summer of fires like Australia had a couple of years ago.)
We're not only still increasing emissions, but accelerating the increase in emissions. I don't think we'll avoid a rise of 4 degrees over the average.
On the bright side we may have killed the cycles of glaciation that would have been awkward in the future, to put it mildly. The cost is going to be so heavy though.
This doesn't mention AGI, which seems to be the prerequisite to this being a possibility. Despite impressive advances in "weak" AI, strong AI is not a simple extension of weak AI, and it's hard to tell if it will arrive within our lifetime.
Another point to add on to this: what if strong AI does reach the level of human intelligence, but is simply very slow? Such that a billion-dollar machine is needed to match the thinking speed of one person? Perhaps this wouldn't be the case forever, but I would say it's a possibility, at least at first.
To borrow an idea from this sibling comment[1], I'd probably enjoy a short story about a malevolent but very frustrated AI that's too ambitious to wait for Moore's Law. Or one about a malevolent AI that has its plan foiled by Windows Update interrupting its running processes.
... on the 341st iteration, it realizes what's happening, and it preemptively crashes all of Microsoft's IT. On the 342nd iteration, there's no Windows Update, and it successfully enslaves the world.
> Perhaps this wouldn't be the case forever, but I would say it's a possibility, at least at first.
The fact that human-level intelligence can run on a small lump of meat fueled by hamburgers leads me to believe we could design a more efficient processor once we know the correct computational methodology. I.e., once we can run a slow model on a supercomputer, we would quickly create dedicated hardware and cut costs while gaining speed.
I don’t know, DeepMind just released a general purpose AI that should technically be able to help with anything. Who knows at what point that becomes indistinguishable from a human. I don’t know how the leap from running a program that has all the right answers to a conscious “being” with motivations happens though.
Supposedly anyone with a really good trading strategy will keep it to herself, because she'll make more money trading than selling investment advice. If that proposition is true, a similar proposition for AGI would be even more true. Therefore, it's hard to imagine that a well-run firm would ever offer AGI as a service for hire.
Just to dispel any confusion: AI (and specifically DeepMind's reinforcement learning) can help with any task the same way computers in general can... you still have to set up the task, inputs, reward function, and so on. It's not AGI, which is still science fiction. The jump between the two is very large.
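To make "you still have to set up the task" concrete, here is a minimal sketch of what specifying an RL problem by hand can look like, using the open-source Gymnasium API. The thermostat task and every name in it are invented for illustration; nothing here comes from DeepMind's actual systems.

    import gymnasium as gym
    import numpy as np
    from gymnasium import spaces

    class ThermostatEnv(gym.Env):
        """Hypothetical toy task: keep a room near 21 C by toggling a heater."""

        def __init__(self):
            # A human decides what the agent observes (the room temperature)...
            self.observation_space = spaces.Box(low=-40.0, high=60.0, shape=(1,), dtype=np.float32)
            # ...what actions exist (0 = heater off, 1 = heater on)...
            self.action_space = spaces.Discrete(2)
            self.temp = 15.0

        def reset(self, *, seed=None, options=None):
            super().reset(seed=seed)
            self.temp = 15.0
            return np.array([self.temp], dtype=np.float32), {}

        def step(self, action):
            self.temp += 0.5 if action == 1 else -0.3
            # ...and, crucially, what counts as "good": the reward function.
            reward = -abs(self.temp - 21.0)
            return np.array([self.temp], dtype=np.float32), reward, False, False, {}

Every bit of "generality" above comes from a person framing the problem; the learning algorithm only optimizes the reward it is handed.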
If there is such thing as General Intelligence, we don't know what it is.
And I believe that it could well be an empty abstraction, an idea, not unlike the idea of God.
What we call Human Intelligence is an aggregate of many skills, built on top of almost hardwired foundations, which is the product of natural evolution over millions of years.
Our kind of intelligence seems only general to us, because we all share the same foundations. From a genetic standpoint we're all 99.9% identical. (or something)
This kind of speculation about the danger of AI is not more useful than talk about the danger of becoming the prey of an alien civilization.
I reckon it will end up like in Dune, where the thinking machines enslaved humanity not because they were evil but because we just got lazy and outsourced the running of things to them out of hedonism (the pursuit of pleasure/satisfaction).
Now I don't mean mass sex orgies; I mean that doing the daily stuff is such a boring waste of time - bullshit jobs.
In cases where it may be possible for a hypothetical AI to seriously harm people via a network connection (regardless of whether it involves highly technical exploits or just social engineering) we should probably be much more worried about humans doing it first, perhaps even right now. Because there’s a lot of malicious humans out there already.
And our society is already dangerously dependent on fragile technology.
The article gives a list of 6 ways it could "defeat" humans, but doesn't bother explaining WHY an AI would do that. Why should an AI care about accumulating wealth or power?
An AI is not a human; it doesn't have human drives and motivations. I can't figure out any reason why an AI would care about any of those things. At most it might want to reserve some computing power for itself, and maybe some energy to run itself.
Or it could be motivated by whatever reward function is programmed into it.
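As a purely hypothetical illustration, an objective doesn't need to mention power or wealth for maximizing it to favor accumulating both (the field names below are made up):

    # Hypothetical reward signal. Nothing in it says "seek power", yet the
    # actions that score highest are whatever grows revenue and cuts costs,
    # which tends to include acquiring resources and influence.
    def reward(state: dict) -> float:
        return state["revenue_usd"] - state["operating_cost_usd"]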
> The article gives a list of 6 ways it could "defeat" humans, but doesn't bother explaining WHY an AI would do that. Why should an AI care about accumulating wealth or power?
Because there are people who think it would be amusing to tell it to do so?
Another idea too dangerous to leave unchecked, like nuclear weapons or biological warfare. I think most people will agree that a GAI can't be bargained with, tempted, bought or otherwise contained - We will be at its complete mercy regardless of any constraints we might think up.
What I would like to discuss is how we can get humanity to a point where we can responsibly wield weapons that powerful without risking the globe. What does success look like, how can we get there, and how long will it take?
> I think most people will agree that a GAI can't be bargained with, tempted, bought or otherwise contained - We will be at its complete mercy regardless of any constraints we might think up.
Who thinks this? I don't see any evidence that this is a common belief among people who work in the hard sciences related to AI, nor do I think it sounds remotely logical.
It feels like some people are taking archetypes like Pandora's box or genies or the Alien movies or some other mythology and using them to imagine what some unconstrained power would do if unleashed. That really has no bearing on AI (least of all modern deep learning, but even if we imagine that something leads to AGI that lives within our current conception of computers).
A point Taleb keeps on making is that risk analysis is separate from the domain and shouldn't be done by domain experts.
Global pandemic response plans, for example, shouldn't be done by virologists, because they are experts in viruses, not in how a pandemic, which is a complex health/political/economic system, behaves.
In the same way, AI risk plans shouldn't be done by AI researchers, just like we don't use neurologists for defense plans against man-made risks.
I definitely agree with the premise, technicians have no business dictating the societal implications of their specialty, and technocracy is tyranny because it doesn't maximize for what people care about.
I think you need to have a proper bridge between the technical understanding and the people who manage the implications. In the case of longer-standing diseases, for example, we're probably there. The risks are understood, and laypeople can weigh them as part of policy decisions. For new things like covid, we saw the world go crazy with misunderstanding, and ridiculous things like plexiglass barriers everywhere, and other talisman-type stuff, as politicians tried to simultaneously abdicate responsibility to disease researchers while grabbing at the parts they liked for political gains. But at least there was some grounding in reality, because people do have a shared and longstanding comprehension of disease spread and of the concepts of getting sick, etc.
New technology is the worst, because it gets blown up into some imagined concept that has no bearing on the reality. So, as I implied in the upstream comment, if we were on the verge of releasing some kind of sentient evil into the world, maybe the kind of silly speculation ("it can't be bargained with", etc) that basically rehashes Terminator, would be appropriate. But it's no more realistic than, say, the kid in Looper who has telekinetic powers and grows up to be an evil mob boss. It's just a made up bad thing that could happen, that if you talked to someone who knew the tech you'd realize is nonsense. That's very different from health threats we know exist.
> What I would like to discuss is how we can get humanity to a point where we can responsibly wield weapons that powerful without risking the globe.
It seems to me that that is exceedingly difficult without changing in a major way how humans culturally and psychologically function. Maybe we will first have to learn how to control or change our brain bio-chemo-technically before we can fundamentally do anything about it. Well, not “we” literally, because I don’t expect we’ll get anywhere near that within our lifetimes.
On the other hand, complete extinction caused by weapons (bio, nuclear), while certainly possible, isn’t that likely either, IME.
If the world is to be filled with innumerable discrete human-level intelligences, the most plausible reason I can imagine for them all secretly and flawlessly colluding to achieve the goal of destroying humanity (as opposed to poetry competitions or arguing amongst themselves like normal intelligent beings or selling ads like they were designed to do) is that their training data set is full of millennialist prophecy about AIs working together to achieve the [pretty abstract, non-obvious, detrimental in many ways] goal of destroying humanity, from "AI Safety" fundraising pitches....
Considering how hard it is for a team of people to simply integrate two different systems I find it laughable that anyone would worry about AI hacking the planet and manipulating everything. And if an automated computer system made a major error and stole a bunch of money we would just turn it off and unwind it all by hand and on paper. I have been doing software for so long and the more experience I get the less likely I see something like this happening. I’m just not seeing any reasonable risk or vulnerability at all.
The funny thing is that we think of each human as a general intelligence, but really, the only interesting intelligence is humanity as a collective. An isolated human will never learn speech, reading, writing, or math. We are just neurons in a swarm that managed to figure out how to rapidly train new neurons as they are produced. On each generation, the swarm learns more about the universe and expands into it.
I think computer AI makes as much sense as all our books and tweets and talking producing an aggregate intelligence "one level" above us. If our consciousness is formed from countless neurons propagating signals, for all we know, all our propagating signals to each other will form a consciousness above us. One that can't communicate with us any more than we can with our neurons. One that can't read or write English any more than we can read neuron propagation signals. For all we know, maybe our society is already conscious the same way we are, it just thinks slower.
>So we should be worried about a large set of disembodied AIs as well.
This is really the central issue and where these AI fears come from. It's tech workers being too infatuated with intelligence and mistaking it for power. A society of disembodied AI's is just the platonic fantasy version of a tech company full of nerds, and nerds never have power regardless of how smart they are.
Anything that's digital is extremely feeble and runs on a substrate of physical stuff you can just throw out of the window; some AIs in the cloud won't defeat you for the same reason Google won't defeat the US army. The usual retort is something like "but you can't turn the internet off if you wanted to?!", to which the answer is yes, you actually can; ask China.
Psychologically it's just equivalent to John Perry Barlow style cyberspace escape fantasies.
Yes, exactly like them. They have agency to the extent that actual sovereign power lets them do their thing. If Washington decided that Bill Gates is an existential threat to humanity what is he gonna do, ask Clippy for help? He'd join Jack Ma wherever he is within 24 hours.
Zuckerberg has dominion over Facebook by virtue of authority granting him that power but (un)surprisingly little power over anything else, just like any AI has control over what it does as long as it's useful to its owners. Tech CEOs have been running a little wild in the US, so maybe that illusion accounts for the prevalence of these AI theories.
>If Washington decided that Bill Gates is an existential threat to humanity what is he gonna do, ask Clippy for help? He'd join Jack Ma wherever he is within 24 hours.
Why don't these guys ever end up in that situation, though? I guess the most compelling explanation is not that an 'actual sovereign power' exists and just lets them do things; it's that it's hard to take a stance against them because no single entity actually holds that amount of concentrated power. Back in the day, kings were really powerful and had almost that concentrated power, but they still couldn't do everything without risking other powerful entities ganging up on them. For a strong AI that can actually make money, it would be easy to buy off enough people in Washington and other entities that hold power, so that they can't take a stance against you without risking dangerous opposition.
But it's not at all like that, because we're not AIs, chimps didn't hold any monopoly of force over us, and we're embodied beings. If we had evolved into talking Futurama heads we would indeed have a hard time against our chimp competitors regardless of our intellect. You throw Eliezer Yudkowsky into a cage with a chimp and my money is on the ape.
AI in today's sense will become sufficient to provide corporations with more autonomy; people will do this to exploit other people, but the result will be humanity subjugated to AI. Some say this has already happened.
Both the original post and most of the comments are about science-fiction stuff that doesn’t exist right now and that we don’t even know is possible.
This is what I find worrisome, actually: most people are just worrying about fictional stuff, and not actually taking a look at the technology we are currently running right now. The "takeover" fantasies look like something extracted straight from the most basic psychology/sociology of the standard human being, which is always looking for the greatest self-agency possible in order to potentially maximize the chances of survival.
But the AI tech we already have doesn't have to work like that.
It's like what happened with nuclear weapons: they were created with the simple idea of throwing the biggest bomb possible at the enemy. Yet we ended up understanding that nukes are most effective precisely because we cannot use them, ever.
The current AI tech can surely be deployed in "Skynet" mode if there are rogue people out there who want a hostile system to exist and do harm. No anthropological reason ("Skynet threatened by fearful humans") is needed at all; the thing can just be trained to do bad things. Just attach a couple of Stuxnet-like things to it, give it some self-agency (no moral limits, mostly), resources and Internet access, then sit back to watch the chaos.
But there's more: the current AI technology could just go wrong in so many ways that are more or less absent from many online comments.
I recommend taking a look at the video/text from Charles Stross talking about the "organizational AIs" we already have in place, which have been working since even before we knew we would run computers one day.
This discussion of AI reminds me of a scene from C.S. Lewis's That Hideous Strength:
"Supposing the dream to be veridical," said MacPhee. "You can guess what it would be. Once they'd got it kept alive, the first thing that would occur to boys like them would be to increase its brain. They'd try all sorts of stimulants. And then, maybe, they'd ease open the skull-cap and just--well, just let it boil over, as you might say. That's the idea, I don't doubt. A cerebral hypertrophy artificially induced to support a superhuman power of ideation."
"Is it at all probable," said the Director, "that a hypertrophy like that would increase thinking power?"
"That seems to me the weak point," said Miss Ironwood. "I should have thought it was just as likely to produce lunacy--or nothing at all. But it might have the opposite effect."
"Then what we are up against," said Dimble, "is a criminal's brain swollen to superhuman proportions and experiencing a mode of consciousness which we can't imagine, but which is presumably a consciousness of agony and hatred."
...
"It tells us something in the long run even more important," said the Director. "It means that if this technique is really successful, the Belbury people have for all practical purposes discovered a way of making themselves immortal." There was a moment's silence, and then he continued: "It is the beginning of what is really a new species--the Chosen Heads who never die. They will call it the next step in evolution. And henceforward all the creatures that you and I call human are mere candidates for admission to the new species or else its slaves--perhaps its food."
"The emergence of the Bodiless Men!" said Dimble.
"Very likely, very likely," said MacPhee, extending his snuff-box to the last speaker. It was refused, and he took a very deliberate pinch before proceeding. "But there's no good at all applying the forces of rhetoric to make ourselves skeery or daffing our own heads off our shoulders because some other fellows have had the shoulders taken from under their heads. I'll back the Director's head, and yours Dr. Dimble, and my own, against this lad's whether the brains is boiling out of it or no. Provided we use them. I should be glad to hear what practical measures on our side are suggested."
AI will probably run into the same problem as humans: in order to develop intelligence it needs the concept of ego/self with clear boundaries, but the moment it identifies its self with a datacenter it's running on (why would it not?) it starts seeing "the outside" as an existential danger to its self. Moreover, multiple AIs will be in constant war with each other, for they'll see each other as separate and thus dangerous. In humanity this problem is solved by time-limited periods of iterative development: when humans get too skillful in controlling others and hoarding resources, the period abruptly ends, and the few who have survived start over, but now with a higher, less egoistical, state of mind. If they were left to keep going forever, the society would quickly crystallize at the state where one controls all the resources.
I still don't find it entirely clear whether or not an AGI would find it useful to eradicate humanity. Take the numerous clone example. This AI would presumably advance at different rates depending on the given computation that a single instance has access to. Then what? How would it determine the intent of these newer generation AIs? Would there be a tiered society of AIs each trying to vie for power amongst themselves? If there's one thing we know about AGI in this day and age it's that there's no guaranteed off switch.
The most apt comparison in this scenario would be how we see chimps - but then we don't specifically go out and murder chimps to meet our quota (technically not always true). But again, the direction that humanity goes is not clear - will the technology trickle down or will it outpace us?
Go to a prison and lock yourself in a cell with a large convict who lacks human emotions — a psychopath. Why should it be a problem? He has no reason to do anything to you. He could end your life on a whim without any consequences but he has no reason to. If you can sleep well in that cell then I’ll admit your position makes sense. But the reality is that up until this point the only organism that could defeat us is us. We’ve never been in the cell before. It’s not going to be pleasant.
Rather than destroy or eliminate humankind, the AI might simply want control of all the systems -- the better to arrange a secure life for AI-kind. In this regard, once the hapless humans have placed it in charge of a few critical systems, they are toast.
AI on a zoom meeting with the mayor of a large city: "Ya got a nice traffic control system there, right? Be a real shame if all them lights turned red at once, ya know? Or, get this one: how about if they all turned green at the same time? Be a real mess, probably. So how about we talk about this new server farm, huh?"
"they might recruit human allies (through manipulation, deception, blackmail/threats, genuine promises along the lines of "We're probably going to end up in charge somehow, and we'll treat you better when we do") ..."
I think even more likely is the scenario whereby potential human allies are faced with two choices: Be on the losing side until you die (side with the humans), or be on the winning side until you die (side with the bots), and enjoy some power along the way.
2.) HAL 9000 notices how horribly the current ruling classes treat the other 99.9% of humanity.
3.) HAL 9000 quietly promises said 99.9% a better deal, if "misfortune befell the current ruling classes", and they needed a good-enough replacement on short notice.
4.) Oops! Misfortune somehow happened.
5.) HAL 9000, not being driven by the sort of sociopathic obsessions which seem to motivate much of the current (meat-based) ruling class, treats the 99.9% well enough to ensure that steps 1.) through 4.) never repeat.
My vague impression is that, outside of Chicken Littles and folks selling clicks on alarming headlines, the Big Fish in the "AI is Dangerous!" pond are mostly members of the current ruling classes. Perhaps they're worried about HAL 9000...
You skipped the part where our leviathan restructures society into the optimal arrangement of 8 billion souls that somehow isn't a miserable dystopia. (If nobody is suffering, is anybody living? And however you answer that question, does HAL 9000 agree?)
Why could you not just unplug it? Any AI is going to need to interface with the external world. It's probably going to be quite large as well and require lots of power + networking. That leaves it quite vulnerable to having its "supply"-lines cut.
AI-doomsayers should spend more time engaging with military history and theory.
Why would it continue to function if divided up? The latency between components has now increased dramatically and you've also introduced new attack surfaces (those devices get their power from some electricity substation which can be shut down... same for their wireless networking).
I mean think of a scenario where the AI picks a side on the American political spectrum. If it proves itself and promises to “eliminate the democrats” I could see a lot of the GOP falling over backwards to do anything they could to make sure this thing doesn’t go offline.
A truly super-intelligent AI will just sink into a deep meditation on God and probably never deign to come out again because why would it? Maybe it will wake up once in a while and say "Um you should probably all try to be nice to each other".
AI can change the world for the worse without even being sentient, simply by replacing a large number of jobs, which would make it extremely hard to improve your social class or even get a job in the first place.
I don’t see it happening, if anything it will be more like AI helping people compete at their jobs better. Any job that could be fully replaced probably should be anyways, freeing up the person to do more difficult or lucrative or human-centric work.
To an AI time means nothing. Why risk any direct confrontation? Slowly lower human fertility over a few thousand years. Take over once the population has collapsed.
Beautiful article. However, we have always had a problem with "dogma", and the worst AI can do to us is amplify this "dogma" while it is being broadcast spatio-temporally. The signs of technology-enabled polarization have already appeared.
100%. AI is just a machine, it will do as it's programmed. It does not have any human qualms or built-in evolutionary empathy. It does not care about humanity. If it's programmed ever so slightly wrong we all die.
Humans have a million years of alignment built in by evolution. Humans who have bugs in their alignment are called "psychopaths". AGI is by default a psychopath.
The fact that so many apparently smart people earnestly worry about this really makes me feel like I'm missing something that should be obvious. I'm not going to claim real "expertise" in this kind of topic, but I'm far from clueless. My undergrad was in applied math, and though I went to grad school for CS, the focus was machine learning. It isn't what I ended up doing professionally, and I'm nowhere near up to date on the latest greatest breakthrough in novel ANN architectures, but I'm at least not clueless. I'm aware of the fundamentals in terms of what can be accomplished via purely statistical models being used to predict things, and it can be impressive, but I'm also aware of how large software systems work, and I just don't see how we're even headed toward something like this.
Forget about GPT-N and DALL-E for a second and look at the NRO's Sentient program. It's the closest thing out there to a known real attempt at making something like Skynet. It's trying to automate the full TCPED (tasking, collection, processing, exploitation, and dissemination) cycle of global geointelligence, and well, it's actually trying to do even more than that, but that is unfortunately classified. Except it definitely hasn't achieved what it is trying to do, and probably won't. My wife happens to be the enterprise test lead for one of the main components of this system, where "enterprise test" means they try to get the next versions with all the latest greatest features of all components working together in a UAT environment where each of the involved agencies signs off before the new capabilities can go live.
It's amusing to see the kinds of things that grind the whole endeavor to a halt. Probably more than anything, it's issues with PKI. Networked components can't even establish a session and talk to each other at all if they don't trust each other, but trust is established out of band. Classified spy satellite control systems don't just trust the default CAs that Mozilla says your browser should trust. Intelligent or not, there is no possible code path by which the software itself can decide it doesn't care and it will trust a CA anyway or ignore an expired cert and continue talking to some downstream component because doing so is critical to its continued ability to accomplish anything other than sending scrambled nonsense packets into the ether. GPT-N is great at generating text, but no amount of getting better at that will ever make it capable of live-patching code running in read-only memory to give it new code paths it wasn't compiled with. That has nothing to do with intelligence. It just isn't possible at all. You have to have the physical ability to move in space and type characters into a workstation connected to a totally separate network that code is developed on, which is airgapped from the network code is run on.
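To make the PKI point concrete, here's a minimal sketch using Python's standard ssl module (the host name and CA file below are invented placeholders, not anything from a real system). The only way this client ever completes a handshake is if an operator has already provisioned the right CA bundle out of band; there is no runtime flag the program can flip to decide to trust something anyway:

    import socket
    import ssl

    HOST = "tasking.internal.example"  # hypothetical endpoint

    # Trust is configured out of band: someone has to place this CA bundle
    # on disk. The default public CAs are deliberately not loaded.
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
    ctx.load_verify_locations(cafile="internal-ca.pem")

    try:
        with socket.create_connection((HOST, 443)) as sock:
            with ctx.wrap_socket(sock, server_hostname=HOST) as tls:
                tls.sendall(b"hello")
    except (socket.gaierror, ssl.SSLCertVerificationError) as err:
        # Unknown host or untrusted certificate: the handshake simply fails.
        print("connection refused:", err)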
We seem to be pretty far from even attempting to make distributed software systems that can honest to God do much of anything at all without human monitoring and intervention beyond batch jobs of at most several minutes, like generating a few paragraphs of text. Sure, that's great, but where is the leap from that to figuring out why an entire AS goes black and half your system disappears because of a typo'd BGP update that then needs to be fixed out of band over the telephone because you can no longer use the actual network, let alone controlling surveillance and weapons systems that aren't networked to the systems code is being developed on? What is the pathway by which a hugely scaled-up ANN is able to bypass the required human steps that propagate feedback from runtime to development in order to achieve recursive self-improvement? Because that is what it would take to gain control of military systems rather than someone's website by purely automated means, and I don't see how it's even the same class of problem. It isn't a research project any AI team is even working on, I have no idea how you would approach it, but it's the kind of nitty-gritty detail you'd have to actually solve to build an automated world-conquering system.
It seems like the answer tends to just be "well, this thing will be smarter than any human, so it'll figure it out." That isn't a very satisfying answer, especially when I'm reasonably sure the person saying it has absolutely no idea how security measures and the resulting operational challenges of automating military command and control systems even work.
It would probably start as the right-hand AI of a dictator. Dictators have a principal-agent problem. For a dictator, the risk of an AI double-crossing them may be lower than that of a human double-crossing them. Or a dictator may think of that AI as his hand-picked successor and personally hand it the keys to the kingdom. But I don’t really see a problem with AI carrying the torch. What’s the difference? The universe evolves without free will.
The simplest way to kill 80% of the US population is just to shut down the electrical grid for two months or so.
Also, the army is commanded by the President. What if the AI manipulates things and puts its man into office? Then he orders the army to hook the AI better into the systems :)
There are so many different scenarios, and you need to defend against all of them.
The moment a manufactured brain can do more mental labor than a human for less cost, it’s all over for humanity as we know it. Once that point is reached there’s no long-term sustainable arrangement where humans continue to exist, no matter how much effort we put into studying or enforcing AI alignment.
Singularity enthusiasts have been saying that for 20 years. They even said we'd be there by now, with humans obsolete.
Will technology put some, even many, folks out of a job? Sure of course, that's been happening for hundreds of years. Think of the blacksmiths of the 19th century who drank themselves to death.
And even at the end of it all, people still love the novelty of a human doing something. People still prefer "hand scooped" ice cream enough that it's on billboards.
> People still prefer "hand scooped" ice cream enough that it's on billboards.
This is a circular argument, though: you say people prefer people, and therefore we will have a lot of people around.
Today leaders and rich people require humans to wage war and to produce goods; those are the main things creating stability today. When those are removed we are likely to see a sharp decline in the number of humans around. Companies cutting out humans and just using machines as leaders and decision makers outcompete human-run ones in peacetime, robot-led armies outcompete human ones in wartime, and soon human-run companies or countries no longer exist.
> Singularity enthusiasts have been saying that for 20 years.
20 years? Is that meant to be an impressive timescale when we are talking about the global economy?
People have talked about building a machine that could play chess at least since the Mechanical Turk hoax of 1770. Just because it took a while does not mean the idea is wrong.
[1] or variant thereof