
If there's no duty to consider potential harm when releasing tools, then why would there be a duty to avoid criticising people who release them?


But if such things exist in secret and there's neither awareness nor a reference implementation to develop countermeasures against, wouldn't that also cause harm?


Absolutely, and there's definitely a strong argument to be made for increasing awareness of existing tools and attack vectors.

Importantly though, that aim is served by proofs-of-concept and information sharing more than it is by releasing point-and-shoot attacks. This comment puts it rather well [1].

As above, I'm not saying this shouldn't have been created, or even that it shouldn't have been released. I am saying that there are important ethical considerations and questions raised that should be addressed directly by the creators, rather than being handwaved away as in the quoted disclaimer.

[1] https://news.ycombinator.com/item?id=31652008


OpenAI attempted this with GPT-2’s incremental release strategy and was lambasted on moral and ethical grounds as well.

https://openai.com/blog/better-language-models/

So individuals working on a technology like this have no clear societal consensus to use as a guide when making the decision. Strong arguments can be made in both directions.


I thought that was actually a positive sign from OpenAI; potentially a disingenuous one, but it at least showed an awareness of the potential issues.

There will always be people who find the very idea of thinking about things from an ethical perspective offensive; that doesn't mean they're right. A lot of valuable discussion in this space is drowned out by people shouting down attempts to explore nuance and shine a light on the grey areas [1].

I'm not asking for censorship or a government crackdown; I do think that it's irresponsible to dodge the issues raised, act as though they are settled, or dismiss any concerns as anti-progress. As you say, there is no clear societal consensus, and there are strong arguments in both directions; what I want to see is those discussions taking place, in good faith, and those issues being acknowledged, rather than ignored.

[1] https://twitter.com/emilymbender/status/1532699418281009154


Considering potential harm is one thing; being liable for misuse is another. If you're releasing something that can only harm others and has an arguable net downside, that may be a good reason not to release it.

What we're talking about (I think) is releasing something with a net upside, but with the potential for misuse and the liability that comes with it?

Am I misunderstanding your point?



