Mostly these laws are scoped to the country one is living in. The problem is, say I decide to post a meme inciting violence in a different country, or influencing a country's decision to join NATO (the Quran burning in Sweden, for example, prompted Turkish president Erdogan to block Sweden's NATO bid)... I don't have to face any charges or responsibility, despite potentially affecting the lives of millions of people.
You are right, and this is a tech problem that predates LLMs. Facebook has allowed very consequential calls for ethnic cleansing in Ethiopia and hasn't done anything about it. That is the issue with global tech products.
But there's nothing about LLMs that changes that from a country-level prosecution/accountability perspective. You still have to be aware of where you're putting that text into the world. Right now, it's just between you and ChatGPT. People shouldn't fool themselves into thinking they can just plug this shit into their news network and be free of liability.
> You are right, and this is a tech problem even before LLMs.
Indeed, which is why I find statements such as the one from the article's author that I pointed out in the first comment in the chain so disgusting. "Enjoy responsibly" is not an attitude to have with a weapon as powerful as AI or social media.
I hold that opinion too. I think we're talking about two different things:
1) AI helps people make harmful content extremely easily, and that's why there are calls for regulation (which will probably favor OpenAI and stifle competition). We should question whether AI is a net negative.
2) AI is being taken too seriously as an "oracle" of some kind that knows all, and people want to sue OpenAI for things it spits out, when really people don't need help making up rumors. This is the part that seems unfair to me.