If someone consumes what you call useless content, why would they use AI to produce anything else? It's not as if there isn't already useful content around.
Just referring to entertainment (i.e. not educational videos): clickbait social media entertainment is easier to consume and is readily recommended, so you know you have at least a little bit of interest in it. It just tends to be lower-quality entertainment than what you know is possible. Long-form entertainment is higher investment and higher risk, in that if you don't like it, you've expended more energy and time.
So there's a choice between low-risk entertainment that caps out at maybe 20% of the value of your favorite thing, and higher-risk entertainment that could very well still be terrible but might have a chance of being good.
AI bridges that gap by de-risking the high-investment entertainment, so you no longer have to make that choice.
Even this would require somehow verifying the raw data. It's plausible a bad actor could "reverse engineer" their data from a pre-determined conclusion.
But yes, overall more openness is good. Still, the cost of losing trust in society is very high (since you then need to verify everything).
> It's plausible a bad actor could "reverse engineer" their data from a pre-determined conclusion.
I've already heard of someone planning a product (initially targeted at lazy^H^H^H^Hbusy high schoolers and undergrads) that will use AI to reverse-discover citations that fit a predetermined narrative in a research paper. Write whatever the hell you want, and the AI will do its best to backsolve a pile of citations that support your unsourced claims and arguments. The founder, and I use that term very generously, expects the academic community to support this on net, because it will boost citation counts for the vast majority of low-citation, low-visibility works.
Sorry, but you have a logical fallacy there (affirming the consequent). Even if humans and LLMs share one property, it does not follow that they share all properties.
("A sentient being is a text predictor" does not follow from "a text predictor is a sentient being".)
That is what I'd call empathizing though. You can 'put yourself in the other person's shoes', because of the expectation that your experiences are somewhat similar (thanks to similarly capable brains).
But we have no idea what qualia actually _are_ when seen from the outside; we only know what it feels like to experience them. That, I think, makes it difficult to argue that a "simulation of having qualia" is fundamentally any different from having them.
Same with a computer. It can't "actually" see what it "is," but you can attach a webcam and microphone, show it itself, and let it look around the world.
Thus we "are" what we experience, not what we perceive ourselves to "be": what we think of as "the universe" is actually the inside of our actual mind, while what we think of as our physical body is more like a "My Computer" icon with some limited device management.
Note that this existential confusion seems tied to a concept of "being," and mostly goes away when thinking instead in E-Prime: https://en.wikipedia.org/wiki/E-Prime
Absolutely this, well put. Enough people misunderstand AI models that they end up treating them as something other than software. I guess this validates Clarke's third law (any sufficiently advanced technology is indistinguishable from magic).