Whoever invents it is responsible for it. You could argue that extremely deadly nerve gas would have been invented inevitably, for instance, but it is still unethical for you to help in its development. Claiming that "someone else would have invented it anyway" is the oldest excuse in the book.
Do we work as hard as we can to invent unethical technologies in order to mitigate their effects, or do we try to suppress or discourage the invention of new technology knowing that some less-"ethical" society will get there first?
Or is that a false dichotomy?
This looks like a false dichotomy to me. If your argument were sound, then e.g. attempting to limit nuclear proliferation would be pointless, since every nation on earth would eventually develop nuclear weapons anyway. I don't think that's true, though: national and international laws with suitable enforcement can prevent unethical technologies.
Same guy as before, but from a different account. Disclaimer: I am an ethicist, although my original area of specialization was philosophy of language.
First of all, there is a whole bunch of contemporary ethicists who would deny that unrealistic scenarios can give us any ethical insight, but let's not enter this debate.
There are good and convincing arguments against this view, but let's assume for the sake of argument that using the nerve gas in your scenario would be the right thing to do. That means you have shown that there is one hypothetical scenario in which using the technology could be considered better than not using it, although its use would still be very bad and horrific.
That's not enough to show that the technology is ethical or that its development should be encouraged. I'd argue for the opposite. Your scenario also does not provide any argument against my claim that the person who develops the technology is at least indirectly responsible for its later use. Some technologies should and maybe even need to be suppressed world-wide.
This is an important topic if you take into account the pace of technological development. It's entirely thinkable that in the near future - let's say, in 100 years or so - just about anyone could in theory genetically modify bacteria and viruses to their liking in a basement and, for example, develop an extremely powerful biological weapon capable of wiping out 90% of mankind. It is obvious that such a technology has to be suppressed and should probably not be developed in this easy-to-use form.
I believe what you really want to say is that nation states should develop all those nefarious technologies in order to control their spread, because someone ("the opponent") will invent and spread them anyway. That's indeed the traditional rationale for MAD and the development of nerve gas, biological weapons, and hydrogen bombs. The problem with this argument is that anybody can use it: it appears just as sound to North Korea as to the US, and it leads to a world-wide stockpiling of dangerous technologies. So there must be something wrong with that argument, don't you think?
The counterargument to your basement geneticist terrorist is that you shouldn't suppress that technology; you should in fact distribute it as widely, freely, and early as possible. This lets well-intentioned actors understand and learn about the capabilities and develop defenses, so that it gets harder and harder to create the wipe-out-90%-of-mankind weapon, because there are higher and higher barriers to overcome for it to be effective.
> That's indeed the traditional rationale for MAD and the development of nerve gas, biological weapons, and hydrogen bombs. The problem with this argument is that anybody can use it: it appears just as sound to North Korea as to the US, and it leads to a world-wide stockpiling of dangerous technologies.
But that’s not what happened, right? I mean, it is if you stop reading history just before the first non-proliferation treaties began to be implemented. That was almost half a century ago, though, so IMO it doesn’t make sense to stop reading at that point.
I agree. The solution to massive technological threats is mutual entanglement by treaties and international laws that limit or prohibit the development of dangerous technologies. That's my point.
Doesn't everyone imagine themselves to be one of the 'good guys'? Surely the 'bad guys' in your example will also be telling themselves they just have to do some bad things to defeat a truly bad foe.
Does it even make sense to speak of 'good guys' who do bad things? Intentions count, but at some point I don't find it unreasonable to call someone who is doing very bad things a bad person, no matter how they rationalize it to themselves.
Which ethical theory are we using, anyway? Sounds like consequentialism is assumed?