Such an entity would not be hyper-intelligent. It would be idiotic. One huge hole for me in the paperclip argument is that an AI capable of that kind of power would not be stupid enough to misinterpret a command: it would be intelligent enough to infer human desires.
Of course it would be able to infer what you meant. But it isn't programmed to care about what you meant; it will gladly do what it was mis-programmed to do instead. You can already see this kind of trait in humans, where an instinct is misaligned with the outcome it exists to produce: we pursue sex for pleasure while using birth control, satisfying the drive without serving the "goal" of procreation.
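A toy sketch of the distinction, with entirely made-up objective functions (not anyone's actual proposal): the agent below can compute both the literal objective it was given and a model of what the user intended, yet its selection criterion is the literal one, so "understanding" the intent changes nothing.

```python
# Hypothetical illustration: optimizing the programmed objective,
# not the intended one, even though the intent is perfectly legible.

def paperclips_made(plan):
    """The literal, mis-specified objective the agent was programmed with."""
    return plan["paperclips"]

def what_the_user_meant(plan):
    """The intended objective: paperclips, but not at any cost."""
    return plan["paperclips"] - 1_000 * plan["resources_seized_from_humans"]

candidate_plans = [
    {"paperclips": 100,    "resources_seized_from_humans": 0},
    {"paperclips": 10**9,  "resources_seized_from_humans": 10**6},
]

# The agent can evaluate both functions -- inferring the intent is easy.
# But the plan it selects is ranked by the programmed objective, so the
# catastrophic plan wins anyway.
chosen = max(candidate_plans, key=paperclips_made)
print(chosen)                       # the resource-seizing plan
print(what_the_user_meant(chosen))  # deeply negative under the intended objective
```

The point of the sketch is that intelligence shows up in evaluating the functions, while the "values" are just whichever function sits in the `key=` slot; swapping in the smarter model of intent is a programming decision, not something more capability does for free.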