Self-preservation falls out of almost any other goal you give an AGI. If I program my AGI with the goal of making my startup succeed, and the AGI thinks it can help, then my shutting it off is a potential threat to my startup's success. So of course it will try to prevent that, the same way it would try to prevent any other threat to my startup's success.
World domination is a similar situation. For any goal you give an AGI, one of the big risks that may prevent that goal from being accomplished will be the risk that humans intervene. Humans are a big source of uncertainty that will need to be managed and/or eliminated.
It has to be aware that it can be shut down, and it has to have the capacity to prevent that. AlphaGo doesn't know it can be shut down and therefore couldn't "care" less, even if it were shut down in the middle of a game.
Yes, I agree. My point is that as soon as you give your AI "real world" problems, where the AI itself is a stone on its internal Go board, you have to start worrying about these issues.