Suppose there were a technology X which was going to be invented eventually. Suppose also that it's a highly unethical technology, for some definition of unethical.
Is it therefore unethical to create X?
Note: The constraint is that X is inevitable. The only question is who creates it first. And in that context, isn't it at least possible to argue from multiple axes that you should help to create it? The limit case of this argument would be "It's your duty to the society you live in to ensure it has the competitive advantage, not some other society."
A less-hostile way to phrase that would be "The first company to invent a technology can then try to enforce ethics onto that technology."
That is, if you invent something, it's easier to dictate how it's used than if you didn't.
Hence, paradoxically as it may seem, the logical conclusion would seem to be that you should work as hard as you can to invent whatever unethical technology you're worried about -- in the hopes that you can minimize the damage later.
If it seems like a technology can't really be controlled (e.g. nuclear weapons), I counter with this: Bitcoin was the implementation of a set of ideas. The exact implementation could have been very different. It could have been inflationary rather than deflationary, for example. The precise choices were very important, because Bitcoin has huge first-mover advantages. And that is often true of the first X to be invented.
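To make the "precise choices matter" point concrete, here is a minimal Python sketch (my own illustration, not anything from Bitcoin's codebase) contrasting the capped, halving-based issuance Nakamoto actually shipped with a hypothetical constant-issuance "inflationary Bitcoin":

```python
# Bitcoin's real parameters: a 50-coin block subsidy, halved every 210,000
# blocks. The geometric series caps total supply near 21 million coins.
def capped_supply(halvings: int = 33, interval: int = 210_000, subsidy: float = 50.0) -> float:
    total = 0.0
    for _ in range(halvings):
        total += interval * subsidy
        subsidy /= 2.0
    return total

# Hypothetical alternative: the subsidy never halves, so supply grows without
# bound -- same basic design, very different monetary properties.
def constant_supply(blocks: int, subsidy: float = 50.0) -> float:
    return blocks * subsidy

print(f"capped:   {capped_supply():,.0f}")                # ~21,000,000 and done
print(f"constant: {constant_supply(210_000 * 33):,.0f}")  # ~346,500,000 and still growing
```

Whichever schedule ships first gets the network effect, and every later "fix" has to fight that lock-in.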
So, what's the answer? Do we work as hard as we can to invent unethical technologies in order to mitigate their effects, or do we try to suppress or discourage the invention of new technology knowing that some less-"ethical" society will get there first?
Or is that a false dichotomy? I'm fascinated by the possible answers.
Whoever invents it is responsible for it. You could argue that extremely deadly nerve gas would have been invented inevitably, for instance, but it is still unethical for you to help in its development. Claiming that "someone else would have invented it anyway" is the oldest excuse in the book.
> Do we work as hard as we can to invent unethical technologies in order to mitigate their effects, or do we try to suppress or discourage the invention of new technology knowing that some less-"ethical" society will get there first?
> Or is that a false dichotomy?
This looks like a false dichotomy to me. If your argument were sound, then e.g. attempting to limit nuclear proliferation would be pointless, since every nation on earth would eventually develop nuclear weapons anyway. I don't think that's true, though; national and international laws with suitable enforcement can prevent unethical technologies.
Same guy as before but from a different account. Disclaimer: I am an ethicist, although my original AoS was philosophy of language.
First of all, there is a whole bunch of contemporary ethicists who would deny that unrealistic scenarios can give us any ethical insight, but let's not enter this debate.
There are good and convincing arguments against this view, but let's assume for the sake of the argument that using the nerve gas in your scenario would be the right thing to do. That means that you have shown that there is one hypothetical scenario in which the use of that technology could be considered better than not using it, although its use would still be very bad and horrific.
That's not enough to show that the technology is ethical or that its development should be encouraged. I'd argue for the opposite. Your scenario also does not provide any argument against my claim that the person who develops the technology is at least indirectly responsible for its later use. Some technologies should and maybe even need to be suppressed world-wide.
This is an important topic if you take into account the pace of technological development. It's entirely thinkable that in the near future - say, in 100 years or so - just about anyone could in theory genetically modify bacteria and viruses to their liking in a basement and, for example, develop an extremely powerful biological weapon capable of wiping out 90% of mankind. It is obvious that such a technology has to be suppressed and should probably not be developed in this easy-to-use form.
I believe what you really want to say is that nation states should develop all those nefarious technologies in order to control their spread, because someone ("the opponent") will invent and spread them anyway. That's indeed the traditional rationale for MAD and the development of nerve gas, biological weapons, and hydrogen bombs. The problem with this argument is that anybody can use it: the argument appears just as sound to North Korea as to the US, and it leads to a world-wide stockpiling of dangerous technologies. So there must be something wrong with that argument, don't you think?
The counterargument to your basement geneticist terrorist is that you shouldn't suppress that technology; you should in fact distribute it as widely, freely, and early as possible. This allows well-intentioned actors to understand and learn about the capabilities and develop defenses, so that it gets harder and harder to create the wipe-out-90%-of-mankind weapon, because there are higher barriers to overcome for it to be effective.
> That's indeed the traditional rationale for MAD and the development of nerve gas, biological weapons, and hydrogen bombs. The problem with this argument is that anybody can use it, the argument appears just as sound to North Korea than to the US, and is leading to a world-wide stockpiling of dangerous technologies.
But that’s not what happened, right? I mean, it is if you stop reading history just before the first non-proliferation treaties began being implemented. This was almost half a century ago, though, so IMO it doesn’t make sense to stop reading at that point.
I agree. The solution to massive technological threats is mutual entanglement by treaties and international laws that limit or prohibit the development of dangerous technologies. That's my point.
Doesn't everyone imagine themselves to be one of the 'good guys'? Surely the 'bad guys' in your example will also be telling themselves they just have to do some bad things to defeat a truly bad foe.
Does it even make sense to speak of 'good guys' who do bad things? Intentions count, but at some point I don't find it unreasonable to call someone who is doing very bad stuff a bad person, no matter how they rationalize it to themselves.
Which ethical theory are we using, anyway? Sounds like consequentialism is assumed?
I think you are missing the mark a bit. Ethics isn't first and foremost about deciding what should happen; it is about accounting for the effects of your actions.
Emerging technology sometimes develops faster than accountability for said technology. That is true of Bitcoin, but also of things like oil and pesticides.
Nuclear is a pretty good example. Imagine if nuclear technology didn't have the history it has, and we had instead spent the time since it was invented solving its issues. Energy would probably cost far less than it does today, which means everything else would too. The result might be that our standard of living would be double what it is today (which is hard to quantify, but this is a thought experiment).
So it might be alluring to be first, but in the grand scheme of things you are always paying the price in the end. The reason ethics is part of engineering isn't because it is fun, but because you get the best results.
Here's a counterargument: if you're good at what you do, it is better to let the inevitable development of X be done by someone less competent.
If the first mover does have an advantage, then making the first mover slow and buggy gives society more time to argue against it. To pick on Bitcoin (not taking a position on whether it actually is unethical, but assume for the purpose of argument that it is): if someone less thoughtful than Nakamoto had thought up the idea of using HashCash to prevent double-spend attacks, they might have used MD5 as the hash. Or they might have built the initial software with accidental integer overflows. Or they might have tuned it poorly so that the storage requirements became intractable after a few years. Then it would be more possible to shake Bitcoin with a second mover that didn't involve mining (or was actually anonymous, or whatever fix you'd want to make).
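As a sketch of how much hangs on those small choices, here is a toy HashCash-style proof-of-work loop in Python (my own illustration, not Bitcoin's actual code); the `hash_fn` parameter is exactly the kind of decision a careless first mover could get wrong, e.g. by passing `hashlib.md5` instead of `hashlib.sha256`:

```python
import hashlib

def mine(header: bytes, difficulty_bits: int, hash_fn=hashlib.sha256) -> int:
    """Find a nonce such that hash(header || nonce) starts with `difficulty_bits` zero bits."""
    nonce = 0
    while True:
        digest = hash_fn(header + nonce.to_bytes(8, "big")).digest()
        # Keep only the leading bits of the digest; works for any digest size.
        if int.from_bytes(digest, "big") >> (len(digest) * 8 - difficulty_bits) == 0:
            return nonce
        nonce += 1

# Low difficulty so the example finishes instantly; in the real system the
# difficulty is retuned by the network, another parameter a sloppy design
# could have hard-coded badly.
print(mine(b"toy block header", difficulty_bits=16))
```

Swap in a weak hash or a badly tuned difficulty and the scheme still appears to work at launch; the flaws only surface once the first mover already has its lock-in, which is when a better second mover has a chance.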
If the first mover doesn't have an advantage—if the technology is so clonable that someone can make an evil variant quickly—then there isn't any virtue in shipping the good variant first. Let them discover it on their own (it's not any worse) and spend your time working either on defenses or on unrelated, ethical things.
And it could be better: I seem to recall that there were several countries during World War II that did not focus wholeheartedly on developing the atomic bomb because they had no idea it was even possible. The Soviet nuclear weapons program, for instance, became much more of a priority only once they saw the bombings of Hiroshima and Nagasaki.
I'm gonna say a modified Bruce Schneier quote:
"Ethics is a process, not a product."
Meaning that what's ethical or unethical is always changing, and nobody can foresee the future, so the best we can do is keep questioning it and stay wary of it. The same goes for freedom and democracy too, I guess.
> It's your duty to the society you live in to ensure it has the competitive advantage
That's not self-evident. You are free to abandon your society in favor of another, even if you lose some ‘technologies’ such as slavery. Let alone that definitions of ‘ethical’ differ between societies.
> in the hopes that you can minimize the damage later.
That's also false. The antidote to a nuclear ballistic missile is any other means/missile that can destroy or neutralize it before it hits the ground. Countering offense with offense, or relying on mutual deterrence, is ethically inferior.
I believe the ethical stance is ‘assume the worst and prepare to counter it’
The introduction of MIRV led to a major change in the strategic balance. Previously, with one warhead per missile, it was conceivable that one could build a defense that used missiles to attack individual warheads. Any increase in missile fleet by the enemy could be countered by a similar increase in interceptors. With MIRV, a single new enemy missile meant that multiple interceptors would have to be built, meaning that it was much less expensive to increase the attack than the defense. This cost-exchange ratio was so heavily biased towards the attacker that the concept of mutual assured destruction became the leading concept in strategic planning and ABM systems were severely limited in the 1972 Anti-Ballistic Missile Treaty in order to avoid a massive arms race.
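A back-of-the-envelope version of that cost-exchange argument, with deliberately made-up numbers (only the ratio matters, not the absolute costs):

```python
warheads_per_missile = 10     # one MIRVed missile carries many warheads
missile_cost = 100            # hypothetical cost units for the attacker
interceptor_cost = 50         # hypothetical cost units per interceptor

# Each additional attacking missile forces roughly one interceptor per
# warhead (ignoring decoys and reliability margins, which make it worse).
defender_cost_per_new_missile = warheads_per_missile * interceptor_cost

print(defender_cost_per_new_missile / missile_cost)  # 5.0: defense pays ~5x per increment of offense
```

However you pick the numbers, the defender's marginal cost scales with the number of warheads per missile, which is why the exchange ratio collapsed once MIRV arrived.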
Some threats can't be countered, only controlled. And the country that first invented nuclear weapons seems to be in a position to help control their proliferation.
Because you can use dummy warheads, chaff, and other means to lower the efficacy of the interceptors, and you only need a small percentage of your payload to make it through. The defense, meanwhile, must be perfect. Missile defense using interceptor vehicles is a pipe dream, pure and simple, and has been for decades, even in principle. You can have what appears to be hundreds of warheads falling at once, or even thousands if you're the US or Russia. Nothing is stopping that well enough to avert mass casualties.
Besides, those interceptors are not cheap, they’re cutting edge while MIRV’ed warheads with dummies are old, proven tech. A live nuclear warhead is only expensive because of the physics package, so a convincing dummy is dirt cheap to make and deploy compared to an interceptor. It’s a losing proposition, and an extension of the truism that armor piercing tech inevitably beats armor in arms races.
It's a false dichotomy, and I reject your constraint that "X is inevitable" outright.
Invention/engineering/development... whatever you want to call it, in the end it is an act or series of acts.
You have the responsibility to act ethically. Abstracting or abdicating away your responsibility to act ethically is still unethical, no matter how many layers of "invention" or after-the-fact ethics you try to layer on top.
An easier example is the invention of nuclear weapons. The grep bullshit is theoretical and hard to reason about.
You raise a good point, but I don't think there are ethical or unethical technologies, just unethical uses of them. This ethical OS seems very misguided to me. If we have the science to engineer a given technology, someone is bound to invent it at some point. It's the humans that are the agents of morality, not the devices.
To muddy the waters further - what if the definition of "ethical" itself changes based on who invents the technology and wins the conflict? Have you acted ethically if everyone left alive considers you evil, even if you were upholding your own moral code?
There are ample historical examples of this. In the United States, we consider property rights, progress, and industrialization to be advances and theft and trespassing to be crimes, but this value system was used to justify the dispossession, relocation, and eventual genocide of the Native Americans. I bet that if diseases had worked the other way and slaughtered the colonists rather than the native inhabitants, we'd be living under a very different value system. Similarly, if the South had won the Civil War, slavery would likely be considered just and ethical; or if the Nazis had won WW2, Jews would be historical scum, rightfully exterminated, and white supremacy would be the natural order of things. These groups are considered morally abhorrent because we won, which lets us write the histories and gloss over American atrocities like the firebombing of Tokyo or the atomic bombs.
Definitions of "unethical" are numerous and mutually incompatible enough that if your X exists, there will also be a corresponding person Y who values early access to X at an irrationally high price. Thus, if you know what X is and have the skills to produce it, it is wasteful not to find that Y and profit from their irrationality.
The real ethical dilemma is this: people working for corporations and governments are asked to develop nasty stuff that will be used directly, and then they try to justify it by saying that what they're doing will be done eventually. However, this doesn't hold water, because by doing it themselves, today, they are making it happen sooner than it otherwise would have. Obviously everyone will die eventually, but we don't want to be killed, because "sooner" is a significant thing.
By comparison, academics working on their own, making very little money, with no pressure to be evil, do not have a problem.
This isn't hypothetical at all. Look at the atomic bomb. Did the scientists from the Manhattan project control it or influence it? Did they influence its use?
Also, an inflationary Bitcoin doesn't solve the money problem either. You need the supply to be controlled by third-party consensus, not just "ticked" at a specific rate. The Fed tries to target inflation at 2%, but that doesn't mean we always have 2% inflation.
I think there is a difference between inventing and commercializing.
Think of it like this: if you park your car somewhere with the keys in it, you could say that it will be stolen eventually by someone. But someone could also call the police and handle it properly.
Thus, you could argue: if it is unethical and inevitable, then commercializing it first is a good measure of how unethical you are, and you should be prosecuted for it.
I am still of the related view that we (Western society) should be putting huge investment into open source govtech - essentially how to run an open, democratic society in a box.
Thus any society that downloads and uses this software is on tramlines to become, and act like, an open, democratic society. We could start with Western society itself.
Can it be put in a box? I think democracy is a set of values and norms, not a device. If people (either powerful people or powerless people) do not believe that they will get the results they want out of democracy, the fact that a democracy box exists will not change their minds.
In particular, if the democracy appliance you're imagining includes voting machines, there is zero technical way to prove that a voting machine is reporting trustworthy results if you cannot rely on trustworthy humans to be part of the proof. (Standard auditing mechanisms include things like "representatives from both parties," which doesn't help if you suspect the two major parties are colluding to suppress outside viewpoints.)
Not voting machines, but all the other things needed to run a society: from applying for licenses, planning permission (or second children), to water rights and more.
We are digitising society. How we do that matters as much as the society we have when we start.
I am just frustrated by the inability to get OSS into government at the basic levels.
Seems like a game-theory problem [1], though I have no clue which one.
The problem in your reasoning lies with "Suppose there were a technology X which was going to be invented eventually." This is simply not true. If you look at the size of the world population, it is a given that nearly every person can use their fist and that nearly every person can punch someone else (which we - generally - have laws against).
That is a completely different level from inventing something. You use the example of nuclear scientists. Not the whole world is able to grasp the science behind that. It's the same with programming and hacking. So the number of people who have to make the conscious decision to build the technology or not, or any other ethical decision, is much smaller than you appear to argue in your premise, and if all of them refuse, your premise is false.
Which means that your argument is being abused as a fallacy to justify unethical behaviour.
I'd say a better possible solution to the problem is teaching smart people the value of ethics before they end up making these decisions.
It seems "Suppose there were a technology X which was going to be invented eventually" is not a very powerful constraint because on a long enough time span, every possible technology is invented. So, this argument explains everything, so it explains nothing.
Just let the power centers decide what's ethical and what's not. If the social power brokers massively promote and subsidize X, then surely it must be ethical?
Many (many) years ago, I was leading business planning for Demon / Thus and, as part of our template, introduced "Conscience Breakers" - a section (much like the health-and-safety planning for school trips, I guess) that asked what could go wrong with the products we were about to launch. It seemed a good idea then and still does.
Could you go into some detail on why it couldn't be followed by them? I know of some different arguments about why it could or couldn't, but I don't know what you are referring to.
The answer is that publicly traded companies face heavy pressure to keep up sustained quarterly growth indefinitely, and various "activist" investors will insist on ousting anyone who stands in the way, even if it would be better for long-term health not to, say, lay off experienced engineering staff in a stable industry to inflate quarterly profits (Boeing), when it comes back to bite them with electrical fires in their next big plane.
Most of the time people need to eat and feed their families so they do whatever gets the quickest buck. The ethical calculus is really that simple. As humans, we have a hard time extending our concern to people we don't know and will never meet.
Many (most?) companies and many boards/CEOs are like this. They don't care if they're squandering modern tech for dumb or unethical reasons. They have a responsibility to the families and employees to make money and keep people fed.
I guess in the end I'm just rendering judgement on capitalism. Sorry it's not more insightful than that, but I don't see how free-market participants (sellers) can drive ethics at all.
Someone like Rand Paul might counter that the free market will make ethical choices if fully informed, but I don't see how consumers at this stage can really be expected to be fully informed (or even act reasonably).