Another idea too dangerous to leave unchecked, like nuclear weapons or biological warfare. I think most people will agree that a GAI can't be bargained with, tempted, bought, or otherwise contained - we will be at its complete mercy regardless of any constraints we might think up.
What I would like to discuss is how we can get humanity to a point where we can responsibly wield weapons that powerful without risking the globe. What does success look like, how can we get there, and how long will it take?
> I think most people will agree that a GAI can't be bargained with, tempted, bought, or otherwise contained - we will be at its complete mercy regardless of any constraints we might think up.
Who thinks this? I don't see any evidence that this is a common belief among people who work in the hard sciences related to AI, nor do I think it sounds remotely logical.
It feels like some people are taking archetypes like Pandora's box or genies or the Alien movies or some other mythology and using them to imagine what some unconstrained power would do if unleashed. That really has no bearing on AI (least of all on modern deep learning, but even if we imagine that something leads to AGI that lives within our current conception of computers).
A point Taleb keeps on making is that risk analysis is separate from the domain and shouldn't be done by domain experts.
Global pandemic response plans, for example, shouldn't be drawn up by virologists, because they are experts in viruses, not in how a pandemic, which is a complex health/political/economic system, behaves.
In the same way, AI risk plans shouldn't be made by AI researchers, just as we don't use neurologists for defense plans against man-made risks.
I definitely agree with the premise: technicians have no business dictating the societal implications of their specialty, and technocracy is tyranny because it doesn't maximize for what people care about.
I think you need a proper bridge between the technical understanding and the people who manage the implications. In the case of longer-standing diseases, for example, we're probably there. The risks are understood, and laypeople can weigh them as part of policy decisions. For new things like COVID, we saw the world go crazy with misunderstanding, and ridiculous things like plexiglass barriers everywhere and other talisman-type stuff, as politicians tried to simultaneously abdicate responsibility to disease researchers while grabbing at the parts they liked for political gain. But at least there was some grounding in reality, because people do have a shared and longstanding comprehension of disease spread and of the concepts of getting sick, etc.
New technology is the worst, because it gets blown up into some imagined concept that has no bearing on reality. So, as I implied in the upstream comment, if we were on the verge of releasing some kind of sentient evil into the world, maybe the kind of silly speculation ("it can't be bargained with", etc.) that basically rehashes Terminator would be appropriate. But it's no more realistic than, say, the kid in Looper who has telekinetic powers and grows up to be an evil mob boss. It's just a made-up bad thing that could happen, one that you'd realize is nonsense if you talked to someone who knew the tech. That's very different from health threats we know exist.
> What I would like to discuss is how we can get humanity to a point where we can responsibly wield weapons that powerful without risking the globe.
It seems to me that that is exceedingly difficult without changing, in a major way, how humans function culturally and psychologically. Maybe we will first have to learn how to control or change our brains bio-chemo-technically before we can fundamentally do anything about it. Well, not “we” literally, because I don’t expect we’ll get anywhere near that within our lifetimes.
On the other hand, complete extinction caused by weapons (bio, nuclear), while certainly possible, isn’t that likely either, IMO.