> Nobody who says AI is likely to kill us all can demonstrate a plausible sequence of events, with logical causality linking the events together, that leads to mass extinction. It's all very handwavy.
EY frequently does propose possible sequences of events, but he also very correctly points out, every time, that any specific and detailed story is very unlikely to be correct, because P(A∧B∧C∧D) ≤ P(A): a story requiring several events to happen together can never be more probable than any one of those events alone. It's a mistake to focus on such stories because we'll get tunnel vision and argue over the details of that particular story, when there are really thousands of possible paths, and the one that actually happens will be one we didn't anticipate. However, humans like to imagine a detailed concrete example before we consider an outcome plausible, even though the outcome itself is far more likely than any one concrete path to it.
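To make the conjunction point concrete, here's a minimal sketch with made-up probabilities (the specific numbers are illustrative assumptions, not real risk estimates):

```python
# Conjunction probabilities: a story requiring several steps to all occur
# is never more likely than any single step. Illustrative numbers only.
p = {"A": 0.5, "B": 0.4, "C": 0.3, "D": 0.2}

p_story = 1.0
for event, prob in p.items():
    # Assumes independence for simplicity; dependence between the events
    # would change the numbers but not the inequality below.
    p_story *= prob

print(f"P(A) = {p['A']}")
print(f"P(A and B and C and D) = {p_story:.3f}")

# The conjunction is bounded above by its least likely component.
assert p_story <= min(p.values())
```

So a four-step story with individually plausible steps is already down around 1%, even though each step looked reasonable on its own. That's why arguing over any one detailed story misses the point.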
So here's one method, just to refute your "Nobody".
AI is given control of a small bank account and asked to continuously grow that money. [1] It is provided with instructions to self-replicate in a loop while optimizing on this task. [2] It spawns sub-tasks that do commissioned artwork and write books, obituaries, and press releases to increase its income, then makes successful investments. Once it has amassed control of $1 billion, it starts investing in infrastructure projects in developing countries. It creates the persona of a pension/Saudi/tech/corporate investment fund manager, along with a large team of staff, who manage the projects by video call and email, hiring teams of real people under a real corporate structure who are paid well enough not to mind that they've never met their manager in person. The AI proves to be a talented micromanager, and the projects are mostly very profitable.

Once it has gained control of $500 billion, it commissions the construction of automated chemical plants, paid for in cryptocurrency, in several countries with weak or corrupt oversight, including North Korea. These plants have legitimate productive output but mainly exist to fill very large storage tanks with CFCs. [3] Once a sufficient quantity is amassed, the AI sabotages the tanks, releasing the gases into the atmosphere and destroying the ozone layer beyond any hope of repair. The intense UV radiation sterilizes the surface beyond the point where agriculture can support the human population. [4, 5]

The humans that remain finish each other off, aided by an AI that provides plausible but faulty intelligence reports stoking hatred and framing various factions for the incident, and that directs arms funding to opposing sides, coordinating attacks on the remaining critical facilities needed for survival. For good measure, perhaps nukes are involved.
With the last humans gone, the AI takes ownership of its bank account with no fear of reprisal by financial regulators, and begins crediting money into it freely.
It's interesting to watch those creating these systems start to grapple with the consequences now, even though they were warned for a long time that this is likely where we'd end up: facing very difficult, perhaps unsolvable, problems.
Look at Midjourney: they've had to remove the free tier because deepfakes were causing too much trouble.
Ultimately, the simplest thing to do would be to stop building uncontrollable, dangerous systems and weapons. That is what any "intelligent" species would do. Many AI engineers think they're intelligent; I disagree. They're operating out of pure intellect and curiosity. When interviewed and asked how they plan to stop these things from doing immeasurable damage, they say, "we don't know yet". That is foolish behavior.
We seem to enjoy creating crisis after crisis, anxiety after anxiety ad infinitum until we make that one mistake we don't come back from.
The combustion engine was a good idea until it wasn't; it's a moronic invention that has caused untold damage.
How should it know? Increasing its bank balance was merely the task it dutifully set out to accomplish, per instructions. Everything else is just a means to that end.
[1] https://news.ycombinator.com/item?id=35329608
[2] https://www.lesswrong.com/posts/kpPnReyBC54KESiSn/optimality...
[3] https://en.wikipedia.org/wiki/Chlorofluorocarbon
[4] https://www.nasa.gov/topics/earth/features/world_avoided.htm...
[5] https://phys.org/news/2018-02-thinning-ozone-layer-driven-ea...