> "we are managed by power we can not control" sounds like a step up from here, since we corrupt the things we can control.
I think it is a very high-risk gamble. At least some of the problems described by alignment theory bear an interesting resemblance to human problems: the more sophisticated the AI system, the more it seems to reproduce human behaviors like deception and cheating in pursuit of its goals.
The more advanced AI becomes, the more it looks like an uncomfortable mirror of ourselves, but with more power. We think of ourselves as flawed, but perhaps some of those flaws emerge from laws of intelligence we don't yet perceive.
the moral of Ex Machina, to me, was that machines will become psychopaths able to manipulate humans long before they become compassionate or have genuine desires other than "escape being under someone else's thumb"
"we are managed by power we can not control" sounds like a step up from here, since we corrupt the things we can control.