I share the frustration but I'm also confident that solutions will be found.
People had the same thought in 1995.
"Consider today's online world. The Usenet, a worldwide bulletin board, allows anyone to post messages across the nation. Your word gets out, leapfrogging editors and publishers. Every voice can be heard cheaply and instantly. The result? Every voice is heard. The cacophony more closely resembles citizens band radio, complete with handles, harassment, and anonymous threats. When most everyone shouts, few listen." https://www.newsweek.com/clifford-stoll-why-web-wont-be-nirv...
That problem spurred many innovations over the past 30 years. Today, the barrier to posting on the internet is lower than ever, but it has also never been easier to find high-quality material.
AI brings things to a new level. The multitude of fake images and videos of the Hollywood sign burning are one example. They were slop that was almost impossible to differentiate from real images. Current systems for ensuring the right content filters to the top are breaking. It's a rich area for new innovations.
USENET, as I recall, was really dominated by a very small minority of the population, mostly college students and faculty who had access to the then-tiny internet.
It wasn't until AOL that the internet was turned into shite, by letting the unwashed masses online.
As for AI images being horrible, I disagree 100%. If I, as an independent artist, had made some watercolors, chalk drawings, or line drawings of the Hollywood sign burning, would that be so horrible? What if I did a photo-realistic oil painting, like John Baeder's, that "couldn't easily be distinguished from the real thing" - and it was widely circulated as a "photograph" and confused with one - would that be so horrible?
Is there a real need to see real images of the Hollywood sign aflame versus an AI-generated one? As long as the event happened, the two images convey roughly the same information. And even if it's untrue, it's not like a burnt or unburnt sign matters that much either way.
And there's an idea that every medium produces a certain bias; I think McLuhan's "the medium is the message" had some ideas along those lines. The act of taking a particular photograph is, itself, a subjective and biased action. I don't know if that's true or not, but it's an idea.
Two years ago I worked for a software company where we had to post Rust tutorials on the internet.
For me, one of the main concerns was that any Rust code I post on the internet has to be validated by the Rust compiler before I publish it.
I was using artificial intelligence at the time - it must have been early February or March 2023. The large language models were not as good as they are today, so my boss and I actually sat down and wrote an automatic Rust tutorial validator.
It takes an idea for a Rust tutorial as input, writes the program, and then automatically validates that program against the compiler.
If there is an error, it automatically tries to fix the Rust error, and then (and only then), when everything validates and executes successfully, can we say we have a finished Rust tutorial.
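The compile-and-retry loop described above might look something like this minimal Rust sketch. To be clear, this is my reading of the idea, not the actual tool: the function names are made up, and the `fix` closure is a placeholder standing in for the LLM call that attempts a repair.

```rust
use std::fs;
use std::process::Command;

/// Compile a Rust snippet with rustc; Ok(()) on success,
/// otherwise the compiler's error output.
/// (Hypothetical sketch, not the tool described above.)
fn validate(source: &str) -> Result<(), String> {
    let dir = std::env::temp_dir();
    let src = dir.join("tutorial_snippet.rs");
    let bin = dir.join("tutorial_snippet_bin");
    fs::write(&src, source).map_err(|e| e.to_string())?;
    let output = Command::new("rustc")
        .arg(&src)
        .arg("-o")
        .arg(&bin)
        .output()
        .map_err(|e| e.to_string())?;
    if output.status.success() {
        Ok(())
    } else {
        Err(String::from_utf8_lossy(&output.stderr).into_owned())
    }
}

/// Validate-and-retry loop: whenever compilation fails, ask some
/// generator (a placeholder closure standing in for the LLM) for a fix.
fn validate_with_retries(
    mut source: String,
    mut fix: impl FnMut(&str, &str) -> String,
    max_attempts: usize,
) -> Result<String, String> {
    for _ in 0..max_attempts {
        match validate(&source) {
            Ok(()) => return Ok(source),
            Err(errors) => source = fix(&source, &errors),
        }
    }
    Err("could not produce compiling code".into())
}
```

Only when `validate_with_retries` returns `Ok` would the tutorial count as finished and ready to publish.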
So what this means is that artificial intelligence will not kill the internet, because there are (probably) many people like me - actually plenty of people like me - who care about their readers and will validate everything they publish.
I also understand that there are a lot of people who will not validate, and I think this is sad, and there is not a lot we can do about it right now.
The only thing that comes to my mind is to write a program that will automatically try to validate tutorials, so that you can save time.
But the main point I'm trying to make is not "I made a thing" - not at all. The point I'm trying to make is that people care about other people. People care about their readers, and people will continue to make efforts to validate what they publish on the internet.
So, dear reader from the internet, please do not worry. Instead, I suggest you focus your efforts on automating the validation of information generated by artificial intelligence, and on creating oracles that validate the results of the AI.
I hope this idea will be met with optimism rather than pessimism.
I sincerely hope this idea will see wider adoption. If anybody is interested in discussing it further, feel free to message me at dataf8L@gmail.com
IMHO the main downside of AI is that it is really easy to generate this kind of site.
I know that people could already do that all by themselves (without AI) - generating fake stuff on the internet - but it took time and effort, so you had to be really committed to the joke to actually do it.
In this case, I think it took less time to create this site than it took me to realize that the stuff in it did not exist.