No one is making "world optimization" engines. The concept doesn't even make sense when the wheels hit the road. No AI research done in the foreseeable future would even be at risk of producing a runaway world optimizer, and contrary to the exaggerated claims being made, there would be plenty of clear signs that something was amiss if it did happen, and plenty of time to pull the plug.
I think you missed the last sentence: your software doesn't need to be a "runaway world optimizer" to be very destructive; it merely needs to be a bad machine put in an important job. Again: add up the financial and human cost of previous software bugs, then extrapolate to the kind of problems we'll face when we're relying on deterministically buggy intelligent software instead of stochastically buggy human intellect.
At the very least, we have a clear research imperative to ensure that "AI", whatever we end up using that term to mean, "fails fuzzily" the way a human being does: a small mistake in its programming or instructions should cause only a small deviation from desired behavior. See the toy illustration below.
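To make "fails fuzzily" concrete, here's a minimal sketch (hypothetical thermostat controllers, not anyone's actual system) contrasting brittle and graceful failure under the same small configuration error, a setpoint that's off by 0.2 degrees:

```python
# Hypothetical toy example: two thermostat controllers given the same
# slightly-wrong configuration (setpoint off by 0.2 degrees).

def brittle_controller(temp: float, setpoint: float) -> float:
    """Hard threshold: heater power is all-or-nothing around the setpoint."""
    return 100.0 if temp < setpoint else 0.0  # percent heater power

def fuzzy_controller(temp: float, setpoint: float, gain: float = 2.0) -> float:
    """Proportional response: power scales with the size of the error."""
    return max(0.0, min(100.0, gain * (setpoint - temp)))

temp = 20.1
for setpoint in (20.0, 20.2):  # correct config vs. a 0.2-degree mistake
    print(f"setpoint={setpoint}: "
          f"brittle={brittle_controller(temp, setpoint):5.1f}%  "
          f"fuzzy={fuzzy_controller(temp, setpoint):5.1f}%")

# setpoint=20.0: brittle=  0.0%  fuzzy=  0.0%
# setpoint=20.2: brittle=100.0%  fuzzy=  0.2%
```

The brittle controller flips from 0% to 100% power on a 0.2-degree configuration mistake; the fuzzy one deviates by 0.2%. The open research problem is getting that "small error in, small deviation out" property out of intelligent software, not just out of toy thermostats.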