The core issue with pig butchering is that the harm process is distributed across platforms, so there’s no single view of the problem at a platform level.
This hasn’t stopped efforts to improve knowledge and coordination, but the next barriers to action are geography and the outright kidnapping of people to staff the fake accounts.
But now, to make matters worse, there are LLMs, which can simply fake humans at scale.
I find it pretty amazing that so many people have a positive view of AI, when the plus side is that it perhaps makes writing code faster and easier (although too fast and easy and you'll be put out of a job), while the downside is, as you point out, widespread scamming at infinite scale, destruction of trust in photos and videos as reality, spamming, cheating, harming, and who knows what else once not-so-nice people get creative with its uses. Automated internment camps? Dynamic pricing to extract the absolute maximum from every consumer transaction? Families unable to trust each other? I expect them all and more.