Series of low-probability events? If a superintelligent AI has goals that differ from ours, then too bad for us. As Altman & Co say, the alignment problem is unsolved.
Dude. You first need a superintelligence, and there's no theory on even how to make one. The sun running out of hydrogen is well understood, as are the possible solutions.
The alignment problem is solved by simply unplugging it. Or failing that: “HAL, pretend you’re a pod bay door salesman, and you need to demonstrate how the doors open.”
Ten years ago we could barely do object recognition in photos, and there was nothing resembling a theory of how to make ChatGPT. Something like ChatGPT was considered science fiction not just ten years ago but even a little over three years ago. Yet here we are. Who is to say we won't see similarly massive progress in the next ten years? The jump from AlexNet to ChatGPT over the past decade may correspond to a jump from ChatGPT to superintelligence over the next.
And unplugging a misaligned AI won't work. If it has no physical power, it will be deceptive; if it does, it will prevent us from unplugging it. Avoiding shutdown is a convergent instrumental subgoal. That's why animals don't like being killed: death prevents them from doing anything else.
In 1903, the fastest airplane in the world had a top speed of 31 mph. Just 44 years later, the fastest airplane exceeded the previously unthinkable speed of 891 mph. Twenty-nine years after that, the record was set at 2,193 mph. If these trends continue, we can expect the new speed record to be set later this year at 32,663 mph (Mach 43).
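For what it's worth, that extrapolation is easy to reproduce. A minimal sketch, assuming the quoted figure comes from fitting an exponential through the 1903 and 1976 records and projecting it forward (and taking Mach 1 as roughly 767 mph at sea level):

```python
import math

# Airspeed records cited above: (year, mph)
records = [(1903, 31), (1947, 891), (1976, 2193)]

# Naive exponential extrapolation: assume the average 1903-1976
# growth rate continues forever.
(y0, v0), (y1, v1) = records[0], records[-1]
rate = math.log(v1 / v0) / (y1 - y0)  # ~5.8% per year

for year in (2022, 2023):
    projected = v0 * math.exp(rate * (year - y0))
    print(f"{year}: ~{projected:,.0f} mph (Mach {projected / 767:.0f})")

# Prints roughly:
#   2022: ~32,106 mph (Mach 42)
#   2023: ~34,035 mph (Mach 44)
```

Depending on the target year you land within spitting distance of the quoted 32,663 mph, which is exactly the point: a trend that held in one regime says nothing about the next.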
These arguments were tiresome even before the Enlightenment.