
The problem is most likely that the moderators do pattern matching on certain key phrases rather than trying to understand the intent behind them. Automatic tools can really only do pattern matching.

This is very common in moderation and censorship. It's a problem because it produces false positives like this one, and some topics become impossible to discuss, even for people who discuss them in a neutral manner. It also makes it very easy for people with actual Nazi sympathies and other extremist views to evade censorship simply by not mentioning the key phrases that match the patterns.
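
To make the false-positive and evasion failure modes concrete, here is a minimal Python sketch of the kind of naive phrase matching described above. Everything in it is invented for illustration: the BLOCKED_PHRASES list, the is_flagged helper, and both sample comments are hypothetical, not any platform's actual filter.

    # Hypothetical blocklist; real systems use far larger pattern sets.
    BLOCKED_PHRASES = ["nazi", "white power"]

    def is_flagged(comment: str) -> bool:
        # Naive substring matching: no notion of intent or context.
        text = comment.lower()
        return any(phrase in text for phrase in BLOCKED_PHRASES)

    # A neutral, historical statement trips the filter...
    print(is_flagged("The exhibit documents Nazi propaganda techniques."))  # True
    # ...while coded language avoiding the listed phrases sails through.
    print(is_flagged("You all know which ideology I mean."))  # False

Matching on surface strings rather than meaning is exactly why neutral discussion gets caught while deliberate evasion does not.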




In all likelihood, the business model in which users get a free service in exchange for their data being traded to advertisers cannot sustain a reasonably qualified human moderation staff. So bots it is, but bots cannot really understand human culture, and they behave like mindless sledgehammers.

If FB et al. are one day required to provide human moderation, they will go bankrupt. Or they will be forced to charge users money to cover the costs.


Why must we force mainstream, dumbed-down platforms to open up? Moving the discussion somewhere open will bring people over and create new safe spaces where this content can be discussed.



