Hacker News

There are so many tangible vectors in content! It makes me feel like moderation is a doable, albeit hard to automate, task:

- substantiated / unsubstantiated
- extreme / moderate
- controversial / anodyne
- fact / fun / fiction
- legal / unlawful
- mainstream / niche
- commercial / free
- individual / collective
- safe / unsafe
- science / belief
- vicious / humane
- blunt / tactful
- etc. etc.

Maybe I'm too techno-utopic, but can't we train an AI model to detect how these vectors combine, and configure moderation accordingly?

Ex: Ten years ago masks were niche, so unsubstantiated news about the drawbacks of wearing masks was still considered safe: very few people were paying attention and/or could harm themselves, so the content was not controversial and did not require moderation. Post-covid, the vector values changed, and questionable content about masks could be flagged for moderation with intensity indexes, user-discretion-advised messages, and/or links to rebuttals where applicable.
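The shifting-vector idea could be sketched roughly like this. Everything here is a made-up illustration, not a real moderation policy: the axis names, weights, and thresholds are invented, and in practice each axis score would come from its own classifier.

```python
from dataclasses import dataclass

@dataclass
class AxisScores:
    """Hypothetical per-content scores in [0, 1], one per moderation vector."""
    unsubstantiated: float
    controversial: float
    unsafe: float
    niche: float  # high = few people exposed to the topic

def moderation_action(s: AxisScores) -> str:
    """Combine axis scores into one intensity index, then map it to an action.

    The weights and cutoffs below are arbitrary illustration values.
    """
    # Niche content reaches fewer people, so it discounts the overall risk.
    reach = 1.0 - s.niche
    intensity = reach * (0.4 * s.unsubstantiated
                         + 0.3 * s.controversial
                         + 0.3 * s.unsafe)
    if intensity >= 0.7:
        return "flag-for-review"
    if intensity >= 0.5:
        return "user-discretion-advised"
    return "allow"

# Ten years ago: masks are niche, so even unsubstantiated claims score low.
pre_covid = AxisScores(unsubstantiated=0.9, controversial=0.2,
                       unsafe=0.3, niche=0.9)
# Post-covid: the same claims, but the vector values have shifted.
post_covid = AxisScores(unsubstantiated=0.9, controversial=0.9,
                        unsafe=0.7, niche=0.1)

print(moderation_action(pre_covid))   # allow
print(moderation_action(post_covid))  # flag-for-review
```

The point of the sketch is that the per-axis classifiers stay fixed while the combining weights and the niche/controversial inputs drift with context over time.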

Let the model and results be transparent and reviewable, and, most importantly, editorial. I think the greatest mistake of moderated social networks is that many people (and the networks themselves) think these internet businesses are not "editorial", when in fact they are not very different from regular news sources when it comes to editorial lines.



>Maybe I'm too techno-utopic,

https://xkcd.com/1425/

I personally believe this won't be a solvable problem, or at least that the problem will grow a long tail. One example would be hate groups intentionally co-opting the language of the victim group. Then, as the hate group is moderated for its behavior, the victim group gets caught up in the enforcement through intentional user reports of similar language.

It's a difficult problem to deal with, as at least some portion of your userbase will be adversarial and will use external signaling and crowd-sourced information to cause issues with your moderation system.


Not a good idea. Your example already has flaws. An AI could do this at a larger scale, but the result would be worse. Probably far worse.

I specifically don't want any editor for online content. Just don't make it boring, or worse, turn everything into astroturfing. Masks are a good example already.




