It is also better to stay far ahead of rogue states that cannot be regulated and would continue working on AI regardless. Second, deferring AI to a later point in time might make self-improvement much faster, and hence less controllable, since more computational resources would be available by then due to Moore's Law.
Second, there is no magical adjusting: you cannot simply adjust to having nukes in the living room.