http://www.wsbtv.com/news/news/local/1-killed-after-bus-deli...

That's just a story I found after 10 seconds of Googling. The broader point is that having your packages delivered by fast-moving, multi-ton machines controlled by fallible people and sharing space with pedestrians and other people-controlled machines is a very dangerous state of affairs. That there are risks in alternative means is true enough, but it's important to measure those risks relative to the status quo and not just assume they're purely additive. But because we're human, we're bad at that: http://en.wikipedia.org/wiki/Status_quo_bias
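To make the relative-versus-additive framing concrete, here's a minimal sketch in Python. Every number in it (the injury rates, the share of deliveries drones take over) is a made-up placeholder, precisely because the real figures are the unknowns in question:

    # Back-of-envelope sketch of "measure risk relative to the status quo."
    # Every number here is an invented placeholder, not a real statistic;
    # the point is the shape of the comparison, not the figures.

    TRUCK_INJURIES_PER_MILLION_DELIVERIES = 5.0   # assumed baseline (trucks)
    DRONE_INJURIES_PER_MILLION_DELIVERIES = 2.0   # assumed drone figure
    DRONE_SHARE = 0.30  # fraction of deliveries shifted from trucks to drones

    def net_risk(drone_share: float) -> float:
        """Expected injuries per million deliveries for a mixed fleet."""
        return (drone_share * DRONE_INJURIES_PER_MILLION_DELIVERIES
                + (1.0 - drone_share) * TRUCK_INJURIES_PER_MILLION_DELIVERIES)

    baseline = net_risk(0.0)         # status quo: every delivery by truck
    mixed = net_risk(DRONE_SHARE)    # partial drone rollout

    print(f"status quo:  {baseline:.2f} injuries per million deliveries")
    print(f"with drones: {mixed:.2f} injuries per million deliveries")

    # The additive framing counts only the new drone injuries and so always
    # sees an increase; the relative framing compares `mixed` to `baseline`
    # and can come out lower, because drone deliveries displace truck trips.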
Of course, Googling will also turn up multiple fatal accidents involving current drive-and-drop delivery methods. As of 2007 there were 254.4 million passenger cars registered in the US alone, and that number has most likely continued to increase since then.
Until someone does some real quantitative risk studies, the comparative risk is uncertain. It will only take one child seriously injured by one of these copters, though, for Amazon to be slapped with a major lawsuit; and worse than the lawsuit will be the public backlash from angry, indignant customers should something like that happen.
"It will only take one child seriously injured by one of these motorized carriages though for Ford to be slapped with a major lawsuit"
To be more nuanced about it, there's a big difference between this being a good or bad idea for humanity and this being an optimal or suboptimal move for Amazon. That there might be lawsuits and backlash even if drone delivery actually reduces the number of accidents might make it a poor play by Amazon, but it doesn't make it a bad idea in any ethical sense. For example, perhaps you're saying that even though it might save lives and improve efficiency, the current regulatory framework and public disposition make this kind of advance impossible (compare cars hitting the streets in a different era). So there's a normative-versus-positive question here. Perhaps you're talking about the latter, which is fine (and of course debatable), but I'm interested in the former, especially because the normative stuff is an input to tech policy debates and the positive stuff is their output: you inform the debate by first figuring out what end result you want.
But maybe you're not saying that; maybe you're saying, "well, we don't know how risky drones are, and maybe, safety-wise, they're a big step backwards," and the part about lawsuits was a separate point. That's true enough. The reason I gave the FedEx crash example was just to point out that it does not follow from an RC helicopter accident in Queens that drone deliveries are a bad idea. But it's true that the relative safety of cars and drones is unknown (and, as you can tell, I have a guess about it). The issue now is that you need some way to find out, or else resign yourself to never advancing at all. Human trials of new medicines run into this same problem, yet few people say, "I don't want any medicinal advancements."
The reality is that if you want to improve society, you're going to have to take some risks with some unknowns. Perhaps you try it in some select cities in some limited fashion and build from there. (Think it's unethical to experiment on humans? What about the hundred-year, totally uncontrolled experiment we've been conducting using automobiles? I certainly didn't sign up for it. The status quo is not a special case.) Look at it like this: what does the world look like in three hundred years? I'd like to think it looks like something from Star Trek. So what, schematically, are the steps to get from here to there? How do you draw a line from 2013 to 2313? Because I'm pretty sure it doesn't involve "we don't have flying robots, so we don't know if they're dangerous, so no flying robots."
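The "select cities, limited fashion" idea can be made concrete as a simple gate: fly a bounded trial, measure, and only expand while the observed incident rate stays under a pre-committed bar. A toy sketch, with every threshold, city, and count invented for illustration:

    # Hypothetical sketch of a limited, staged rollout: expand the trial only
    # while the observed incident rate stays under a preset safety bar. The
    # threshold, stages, and counts are all fabricated for illustration.

    MAX_INCIDENTS_PER_10K_FLIGHTS = 1.0  # assumed safety bar, not a real spec

    def may_expand(incidents: int, flights: int) -> bool:
        """True if the stage's observed incident rate clears the safety bar."""
        if flights == 0:
            return False  # no data yet: don't expand on zero evidence
        return (incidents / flights) * 10_000 <= MAX_INCIDENTS_PER_10K_FLIGHTS

    # (stage name, incidents observed, flights flown) -- made-up examples
    stages = [("one pilot city", 0, 8_000), ("three cities", 2, 50_000)]
    for name, incidents, flights in stages:
        verdict = "expand" if may_expand(incidents, flights) else "halt and study"
        print(f"{name}: {verdict}")

A real version would want confidence intervals rather than raw rates, but the structure, pre-committed thresholds plus staged expansion, is the point: it's how you learn the unknown risk without running the uncontrolled, everyone-enrolled experiment we already run with cars.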