This sounds alarmist. If anything, this tool is mutable in more than one way. For every bad use there is a plethora of things, even just technically, where it's miraculous.
> Other people say, and I think this is a widely used rationalization, that fundamentally the tools we work on are "mere" tools; this means that whether they get used for good or evil depends on the person who ultimately buys them and so on.
> There's nothing bad about working in computer vision, for example. Computer vision may very well some day be used to heal people who would otherwise die. Of course, it could also be used to guide missiles, cruise missiles for example, to their destination, and all that. You see, the technology itself is neutral and value-free and it just depends how one uses it. And besides -- consistent with that -- we can't know, we scientists cannot know how it is going to be used. So therefore we have no responsibility.
> Well, that is false. It is true that a computer, for example, can be used for good or evil. It is true that a helicopter can be used as a gunship and it can also be used to rescue people from a mountain pass. And if the question arises of how a specific device is going to be used, in what I call an abstract ideal society, then one might very well say one cannot know.
> But we live in a concrete society, [and] with concrete social and historical circumstances and political realities in this society, it is perfectly obvious that when something like a computer is invented, then it is going to be adopted [and its adoption] will be for military purposes. It follows from the concrete realities in which we live; it does not follow from pure logic. But we're not living in an abstract society, we're living in the society in which we in fact live.
> For every bad use there is a plethora of things, even just technically, where it's miraculous
What's one example?
A bullshit generator is only valuable to people who want to scale bullshit. Maybe it could replace TV writers one day, but why would I want that? I don't want a bunch of creative people to be unemployed so that the owners of ChatGPT can make a profit.
For non-users of the tool, there is no conceivable upside to any of this. All we get is:
- less believable digital/social media
- far, far more SEO spam
- more companies trying to push their customer service to bots instead of humans