Yeah, this is the downside of AI: while it might make some kinds of work more efficient, it makes other kinds of work less efficient by enabling the construction of lots of "fake" inputs which have to be discarded.
It's probably going to further the destruction of text-based social media and force a retreat of people back into their social, political, and geographical circles. You will have to stay in the "bubble" because almost everything you encounter outside the bubble will be fraudulent AI content.
Not just text. Deepfakes are going to ruin voice- and video-based interactions, too. You're only going to be able to trust in-person interaction. And even then, you're going to have to know where the person you're talking to got their information.
Epistemological trust is broken. We're back to "what you have seen with your own eyes".
And that's going to seriously impede progress, because it means the number of people you can learn from is now very small.
But deepfakes only make those things easier; they're not what made them possible in the first place. Get someone with makeup and a similar voice, and you can fool someone over a video call. Prank calls with people sounding like some politician were a perennial source of entertainment for radio shows, etc.
The trust was misplaced in the first place, but we're now getting a clear demonstration of why. Like someone walking up to your car with a universal remote and unlocking it within 60 seconds.
I would argue that the trust wasn't necessarily always misplaced; one of the lessons of the book "Lying For Money" is that some level of fraud is unavoidable, because the cost-benefit tradeoffs mean it never makes sense to check everything in an attempt to drive fraud to zero.
However, AI moves the point on the cost-benefit curve.
The internet has grown massively over the last two decades, but maybe AI leading to a contraction would be a good thing. Maybe an end to Eternal September.
> Since the early days of the pandemic, I’ve observed an increase in the number of spammy submissions to Clarkesworld. What I mean by that is that there’s an honest interest in being published, but not in having to do the actual work.
More details (also calling this "AI spam"):
> What I can say is that the number of spam submissions resulting in bans has hit 38% this month. While rejecting and banning these submissions has been simple, it’s growing at a rate that will necessitate changes. To make matters worse, the technology is only going to get better, so detection will become more challenging.
> Clarkesworld Magazine is a Hugo, World Fantasy, and British Fantasy Award-winning science fiction and fantasy magazine that publishes short stories, interviews, articles and audio fiction. Issues are published monthly and available on our website, for purchase in ebook format, and via electronic subscription. All original fiction is also published in our trade paperback series from Wyrm Publishing. We are currently open for art, non-fiction and short story submissions.
(just summarizing a bit, since the title didn't make sense on its own, the site then crashed, and I wasn't that familiar with Clarkesworld in particular)
I see AI ending up under strict regulation so that all AI-generated content has to be stored somewhere read-only. Other highly regulated companies and law enforcement agencies will then have access to this AI web of content and will be able to search it to see if any submitted content, be that an essay or a forum post or whatever, is AI-generated and to what extent. The public will still be able to access AI tools via the regulated companies, but the scope of damage the individual can inflict will be limited. People caught using AIs outside of the state-sanctioned companies will get harsh jail sentences to deter anyone from trying the same.
I don’t think there’s any other way to avoid chaos. I doubt anyone is going to be able to come up with a reliable algorithm to detect AI content. We’re literally going to have to just store everything these AIs produce in plain text and search it when needed.
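To make that concrete, here's a minimal sketch of what the lookup side could look like, assuming a central pool that providers append hashes of every n-word window of generated text to (everything here, names included, is hypothetical):

    import hashlib

    def shingle_hashes(text: str, n: int = 8) -> set[str]:
        """Hash every n-word window of the normalized text, so that a
        partial copy of a generated passage still matches the registry."""
        words = text.lower().split()
        return {
            hashlib.sha256(" ".join(words[i:i + n]).encode()).hexdigest()
            for i in range(max(1, len(words) - n + 1))
        }

    # Hypothetical central pool: every regulated AI provider appends the
    # hashes of everything its models emit.
    registry: set[str] = set()

    def record_generation(text: str) -> None:
        registry.update(shingle_hashes(text))

    def looks_ai_generated(submission: str, threshold: float = 0.5) -> bool:
        hashes = shingle_hashes(submission)
        overlap = len(hashes & registry) / len(hashes)
        return overlap >= threshold

Hashing overlapping windows rather than whole documents means a lightly edited or partially copied output still matches; the cost-benefit point above is just where you set the threshold.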
I get what you're saying, but if they think it's a Mutually Assured Destruction-style scenario when it's allowed to run rampant, they might end up changing their minds.
I was on a support "chat" the other day and strongly felt that soon it'll be a legal requirement to disclose whether you are interacting with a scripted bot or a real human.
This is a very dangerous and ill-conceived idea, and not only is extremely unlikely to happen, but must not happen.
AI access must be open and unfettered. If it is closed or restricted, then a closed or restricted class of human will form that has privileged AI access. If you think the wealth gap is bad now, wait until a subset of humans have AI-augmented intelligence and capabilities that are unrestricted.
Your post is the type of solution I see regularly from people in government, and it is the thinking of people who really need to spend more time in reality, understanding human nature, and observing the universe. It comes from a place of “we have this problem, wouldn’t it be nice if it was this way?” but ignores all the history and reality of human nature, out of fear and a need to control the lives and destinies of other people in order to feel safe.
This approach is extremely dangerous. Freedom and open access will have miraculous results in lifting the wealth of everyone. We can deal with the dark sides as they come, but we cannot, as a people, allow the dark side to dictate our policy.
If the only utopia possible is a barren wasteland where everyone is equally paralyzed by the DDoS of utter nonsense these models produce, we may well be better off under inequality, despite how inefficient it makes our distribution of goods & services.
> AI access must be open and unfettered. If it is closed or restricted, then a closed or restricted class of human will form that has privileged AI access. If you think the wealth gap is bad now, wait until a subset of humans have AI-augmented intelligence and capabilities that are unrestricted.
In the proposed solution, everyone would be able to access AI, but there would only be a few companies able to offer the service. Theoretically, an unlimited number of companies would be able to operate the service so long as they recorded the results to the central pool and complied with the legal frameworks.
> Your post is the type of solution I see regularly from people in government, and it is the thinking of people who really need to spend more time in reality, understanding human nature, and observing the universe. It comes from a place of “we have this problem, wouldn’t it be nice if it was this way?” but ignores all the history and reality of human nature, out of fear and a need to control the lives and destinies of other people in order to feel safe.
I strongly disagree with this. It comes from a place of realism about human nature. The reason the gods of the old pantheons are 'immortal' is that they represent parts of human nature that never die, at least not on our time scale. And there is a reason you will always find a god of deception bringing about untold misery, or in Loki's case, armageddon. Note also that the gods who represent leadership, such as Zeus and Odin, are themselves prolific deceivers.
There is always, always, always, some cunt who will weaponise or exploit a system for their own personal gain to the detriment of everyone and everything else. The only reason there is not mass warfare on the planet right now is because we invented the nuclear bomb and the two major opposing powers of the world allowed each other to build up an arsenal of weapons so big that if they used them, it would mean death to everyone.
AI is the intellectual equivalent of a nuclear bomb. We've already had Cambridge Analytica and more recently the Team Jorge hackers meddling in elections and that was before AI really took off. And we've had Neil Clarke come out and say that spam submissions for his site are rising at an exponential rate. And we are literally only at the beginning of what this thing can and will do. Hopefully we'll be able to use it for good the same way we harness nuclear energy for fuel. But if we mainly use it for ill, then god help us.
You are correct about human nature, but what I challenge is the approach to contain our dark sides. Elections are a great example because this is where my thought and work focuses. Our elections are a complete and utter joke. Of course there can be deception. We have a corporate media, corporate academia, 2 corporate parties, and paper elections in the digital age without any means for a common citizen to realistically audit anything.
These things need to be fixed with a new way of operating, and that new way, as a foundation, can automatically enforce things like identity validation and media authentication. That eliminates the need to trust the foundation of the system, and gets us back to trusting people instead. All our current ways of operating require that we trust a shadow infection of our foundations, one that is completely opaque, while also trusting people. We have to solidify the foundations and remove the rot.
Where you might need to think more is around restricting access to intelligence in a non-equitable way. This is a corruption of the foundations. It can only work if there are unbreakable rules for accessing the system, and I do not trust the dark side of humans to operate within that; everything is breakable. Having access to intelligence breeds more intelligence. Things that can physically harm will still be restricted, but when you have organizations like the CCP, or a worldwide network of corrupt oligarchs, allowing only a subset of vetted people access to AI is misguided, because such a large network of bad actors will most certainly have unfettered access, which they can use to control the minds of humans around the world.
You combat that by giving people unfettered access. Because there exist people we have no idea about who are forces for enormous good. And if they have access to these kinds of tools, they can transform everything for the better. These forces are in the shadows right now, because the people in the light of society are largely already rotten. There will not be more harm in giving everyone access; we already give rotten people the keys to everything and seem to survive.
Restricting access will most certainly result in short term stability in exchange for guaranteed death long term. It is a very misguided approach.
The enforcement would be a very lengthy prison sentence for people who don't comply, e.g. on the level of the sentence for hacking state secrets. I think this is where it will ultimately end up if it causes complete carnage.
I don't see a globally coordinated anti-AI effort happening, let alone resolving the free speech and privacy implications of such a measure. We're going to get weaponised AI because that's what humans want to build.
I wish I could GPG-sign or hash my online content in plain text. But the tools to do so never made it into the general public's consciousness. We used to seal our communications with wax stamps, but we don't have a digital equivalent in everyday use. Kinda like how ROT13 used to be universally understood for spoilers; now its use actively confuses people. This makes me sad.
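The plumbing has existed for decades, it just never reached the mainstream. A minimal sketch, assuming gpg is installed and you already have a key pair (the filenames are made up):

    import subprocess

    # --clearsign keeps the message human-readable and appends an
    # ASCII-armored signature block; gpg writes the result to post.txt.asc.
    subprocess.run(["gpg", "--clearsign", "post.txt"], check=True)

    # A reader who has my public key verifies it with one call.
    subprocess.run(["gpg", "--verify", "post.txt.asc"], check=True)

The signed text stays perfectly readable, which is the digital wax-stamp property; the problem is entirely adoption, not technology.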
As software engineers, we need to take responsibility for the impact our software has on the world.
I hope Sam Altman and everyone else who contributed code to ChatGPT is taking a long, hard look in the mirror and considering what it means to make the world worse with our labors.
Pretty soon the content will be better than what we can produce anyway. You're fiddling while Rome burns. You need to pivot hard, because your business model is dead, as are the jobs of almost all white-collar workers. We have AI, and it's not like we thought: AI is people in a box. A person in a box is much cheaper than a meat sack in the real world, and will only get cheaper.
We don't write to produce objectively "good" writing: we write to communicate. "Good" writing is writing that accurately and successfully communicates to the audience the thing the composer wanted to communicate. It is always contextual, dependent on both writer and reader.
The computer has nothing it wants us to understand. It writes for no purpose, which is why it is flooding systems like Clarkesworld's submission queue that were designed to support purposeful communication. It may eventually be able to evade the detection algorithms that for now have prevented it from completely shutting down human communication, but that won't make it "better": it will make it a successful DDoS attack on society.
writing, painting, etc., is not a business model. it is a creative act. sure, some do it for the money, and some artists become successful and can live off their artwork.
but, unlike many other businesses, humanity needs creativity and art for its development as a society. AI-created art is somehow part of that, but its domination prevents the development of art by actual humans.
it is not a question of what is better, but of what is actually creative and furthers human development. AI-created art is not that.
but they are stale, static humans that don't develop. frozen. they are zombies. (actually, even zombies are more alive than an AI)
the point of being human is continuous development. an AI can't do that. it can only summarize existing data and not generate new insights on its own. (doing that can help humans generate new insights, but that is something different)