We've already had robots "run away": into a water feature in one case, and into a pedestrian pushing a bike in another. The phrase doesn't only mean getting paperclipped.
And for non-robotic AI: flash crashes on the stock market, and the Amazon book-pricing bots that got caught in a reactive cycle, driving up the price of a book neither seller actually had.
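The pricing-bot incident is easy to reproduce in miniature: two sellers each reprice relative to the other, and if the combined multiplier exceeds 1, prices compound without bound. The specific multipliers below are illustrative assumptions, not the actual values the Amazon bots used.

```python
def runaway_prices(price_a, price_b, rounds):
    """Sketch of a two-bot repricing loop: A undercuts B slightly,
    B marks up over A. Multipliers are hypothetical."""
    history = []
    for _ in range(rounds):
        price_a = 0.9983 * price_b   # A prices just below B
        price_b = 1.2706 * price_a   # B prices well above A
        history.append((round(price_a, 2), round(price_b, 2)))
    return history

# Each round multiplies B's price by 0.9983 * 1.2706 ≈ 1.268,
# so starting anywhere near $20, both prices explode exponentially.
```

No bot here is malicious or "misaligned" in any deep sense; each follows a locally sensible rule, and the runaway behavior only exists in their interaction.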
> the phrase doesn't only mean getting paperclipped.
This is what most people mean when they say "run away": the machine surreptitiously does things it was never designed to do, not a catastrophic failure caused by the AI simply not performing reliably.
To those who are dead, that's a distinction without a difference. So far as I'm aware, no actual "killer robots gone wrong" sci-fi starts with someone deliberately aiming to wipe themselves out; it's always a misspecification or an unintended consequence.
The fact that we don't know how to determine if there's a misspecification or an unintended consequence is the alignment problem.
"Unintended consequences" has nothing to do with AI specifically, it's a problem endemic to every human system. The "alignment" problem has no meaning in today's AI landscape beyond whether or not your LLM will emit slurs.