
<Playing devil's advocate>

Consider this classic XKCD: https://xkcd.com/810/

There are two possibilities: the AI can comment as well as a human, or it can't.

If it can comment as well as a human, our experience of using social networks will not be degraded. We'll still get the same experience as before.

If it can't, then its contributions won't get as many likes/upvotes, and they won't be particularly prominent. Most content users see will be human-created.

So I think the key question is whether AI will be able to manipulate the liking and voting systems. But there are methods for preventing this: ignore votes from accounts created after ChatGPT's release, or, for a site like Twitter with paid accounts, ignore votes from unpaid accounts. It's not even clear what ChatGPT adds beyond a human voting ring, and social networks presumably have lots of experience dealing with voting rings already.
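A minimal sketch of that kind of filter in Python (the vote fields and helper name here are hypothetical, not any site's actual schema):

    from datetime import date

    CHATGPT_RELEASE = date(2022, 11, 30)  # ChatGPT's public release date

    def count_trusted_votes(votes, require_paid=False):
        """Count only votes from accounts that predate ChatGPT,
        optionally restricted to paid accounts."""
        total = 0
        for vote in votes:
            if vote["account_created"] >= CHATGPT_RELEASE:
                continue  # account is young enough to be a post-ChatGPT bot
            if require_paid and not vote["is_paid"]:
                continue  # unpaid accounts are cheap to mass-create
            total += 1
        return total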




>If it can comment as well as a human, our experience of using social networks will not be degraded. We'll still get the same experience as before.

For fun, I posted a few ChatGPT-generated "posts" to my local town's Facebook group (complaining about the quality of the snow this year, nonsense like that).

The posts got a lot of engagement (100+ comments each), and the only person who figured out they were AI-generated was a computer-science student from the local university.


I agree. AI is like those long Flash intros you couldn’t skip in the early days of the web: just annoying enough to disrupt any actual commentary (i.e., a true exchange of ideas).


Flash intros eventually went away, presumably because websites without them were more successful. So your argument appears to imply that humans will still be more successful on social media than AIs?


I'm not arguing for or against; I honestly don’t know. My point is just that it will be annoying enough that people will get rid of it (in some places, at least).


The AI could comment as well as a human until it gets to its payload, which is a scam. People will vote it down after they get taken in by the scam, which is too late. Or maybe not even then, if the payload isn't a scam per se, but something that's deliberately designed by the AI's developer to be slanted. A million AIs that are indistinguishable from humans but occasionally throw in a comment about how they got sick from Pepsi could make things very profitable for the Coca-Cola company.

Also, upvotes are an imperfect measure of how good the post is and AIs would be able to game the measure.


>People will vote it down after they get taken in by the scam, which is too late.

Not too late to create training data from which the social media platform can detect and remove subsequent scams. Especially if users 'report' rather than merely downvote.
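As a toy sketch of that idea (using scikit-learn; the posts, labels, and threshold are invented purely for illustration), reported posts become labeled training data for a text classifier:

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression

    # Posts labeled by user reports: 1 = reported as a scam, 0 = not reported.
    posts = [
        "great discussion, thanks for sharing",
        "click here to claim your free crypto prize",
        "interesting point about the snow this year",
        "send a small deposit to unlock your winnings",
    ]
    labels = [0, 1, 0, 1]

    # Learn which wording patterns tend to get reported.
    vectorizer = TfidfVectorizer()
    model = LogisticRegression().fit(vectorizer.fit_transform(posts), labels)

    # Hold new posts that look too much like previously reported scams.
    new_post = ["claim your prize now, just send a small deposit"]
    scam_prob = model.predict_proba(vectorizer.transform(new_post))[0][1]
    if scam_prob > 0.5:
        print("hold for moderator review")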

>A million AIs that are indistinguishable from humans but occasionally throw in a comment about how they got sick from Pepsi could make things very profitable for the Coca-Cola company.

How is this different from Coca-Cola paying influencers to post negative stuff about Pepsi?

>Also, upvotes are an imperfect measure of how good the post is and AIs would be able to game the measure.

How is this different from humans gaming the measure?


The ability to generate nonsense/noise at immense scale for essentially zero cost.


>Not too late to create training data by which the social media platform can detect and remove subsequent scams.

How is that going to happen? We're assuming the AI posts exactly like a human until the comment about Pepsi comes out. It'll be impossible to distinguish from a human, by assumption. Or are you going to just have the site reject everything which mentions Pepsi negatively?

>How is this different from Coca-Cola paying influencers to post negative stuff about Pepsi?

AIs don't need to be paid.


>How is that going to happen? We're assuming the AI posts exactly like a human until the comment about Pepsi comes out. It'll be impossible to distinguish from a human, by assumption. Or are you going to just have the site reject everything which mentions Pepsi negatively?

You could have a reporting system.

>AIs don't need to be paid.

At scale, this sort of service is not going to be free. But I suppose AI could make it a lot cheaper.


> How is this different from Coca-Cola paying influencers to post negative stuff about Pepsi?

Coca-Cola can't have millions of influencers on its payroll (or at least, it wouldn't be economically profitable to do so). An AI could easily create accounts and post at a scale that dwarfs real human-generated content.


The 80/20 rule: 20% of influencers generate 80% of the views. You don't need to pay everyone.

https://www.ftc.gov/business-guidance/resources/disclosures-...

^ Wouldn't this apply to ChatGPT bots as well?


>If it can comment as well as a human, our experience of using social networks will not be degraded. We'll still get the same experience as before.

I think you're massively underestimating how important motivations are. Bot accounts that can pass perfectly as human but whose prime directive is to recommend their sponsors' products would ruin trust in a heartbeat.


Half of the Reddit front page is this already. Dudes are swimming in millions of upvotes while blatantly advertising.


But you can tell that it's happening. And also it's still only half, and not the entire internet, because human marketers don't scale as much as AI.


I can tell that it's happening because I'm a paranoid fucking psycho. That's not true of the millions and millions of regular human beings using Reddit and Facebook every day.


What you’re missing here concerns diversity and volume.

Diversity: because language models are in some sense capturing an average (even if that average can be drawn from a subset of the training data), their comments will not be very diverse. I suspect that if you try sampling away from the center of the distribution, you’ll find the quality degrades to nonsense.
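For what it's worth, "sampling away from the center" roughly corresponds to raising the sampling temperature. A toy illustration in Python (the logits are made up):

    import numpy as np

    def softmax(logits, temperature=1.0):
        z = np.array(logits) / temperature
        e = np.exp(z - z.max())  # subtract max for numerical stability
        return e / e.sum()

    logits = [4.0, 2.0, 0.5, 0.1]  # hypothetical next-token scores

    print(softmax(logits, temperature=0.7))  # peaked: safe, "average" tokens
    print(softmax(logits, temperature=2.0))  # flatter: tail tokens, more nonsense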

Volume: the biggest problem is that you’ll end up completely overwhelmed by plausible-sounding comments. Language models cannot reason, which means they are incapable of producing the best insights (though they can probably produce some insight through a “monkeys at typewriters” effect).

So the smart human who has analysed the subject really well, who currently has to rise above a certain amount of background noise, will now get completely swamped by it.



