> When I get home from work, what would motivate me to sift through countless posts about misinformation and flag them?
Turn it around. Make it like Reddit's /new: have moderators able to sift through countless posts about misinformation and approve the good ones. It's not a large difference in what moderators end up doing — they still have to at least skim over all the misinformation. But it's psychologically very different — you can just "walk away" from annoying things that stink of quackery up-front, while "engaging with" only the things that seem good, and eventually "upvoting" the things that still seem good even after you've read them carefully.
Yes, I'm actually suggesting that every post on such a site would go through a moderation queue. (Just one that any user can dip into and look at if they like, but that only moderators can actually vote on.) Or, if not every post, then a good sampling of them; or maybe every post from users with fewer than N approved posts.
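To make those queue rules concrete, here's a minimal sketch (Python, with made-up names and an arbitrary exemption threshold standing in for N; none of this is from a real platform): new posts sit in a pending queue that anyone can browse, only moderators can approve them, and authors who already have N approved posts skip the queue entirely.

```python
# Minimal sketch of the queue rules above; all names and the exemption
# threshold are illustrative, not taken from any real platform.
from dataclasses import dataclass

N_APPROVED_FOR_EXEMPTION = 20  # the hypothetical "N" after which the queue is skipped

@dataclass
class User:
    name: str
    approved_count: int = 0
    is_moderator: bool = False

@dataclass
class Post:
    author: User
    body: str
    approved: bool = False

class ModerationQueue:
    def __init__(self) -> None:
        self.pending: list[Post] = []

    def submit(self, post: Post) -> None:
        # Trusted authors bypass the queue; everyone else waits for a moderator.
        if post.author.approved_count >= N_APPROVED_FOR_EXEMPTION:
            post.approved = True
        else:
            self.pending.append(post)

    def browse(self) -> list[Post]:
        # Any user can look at the pending queue, like Reddit's /new.
        return list(self.pending)

    def approve(self, moderator: User, post: Post) -> None:
        # Only moderators can vote a pending post out of the queue.
        if not moderator.is_moderator:
            raise PermissionError("only moderators can approve posts")
        self.pending.remove(post)
        post.approved = True
        post.author.approved_count += 1
```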
The big effect of that would be that there wouldn't be "countless posts about misinformation." There'd be a couple, mostly by new users showing clear signals of being attackers of the community rather than people who actually want to become part of it (and who can therefore just be banned wholesale). Noise would drop over time, because crackpots wouldn't even get a short blip of engagement. They'd get none. Their account would die in the crib, never witnessed by anyone but moderators and curious /new viewers.
Combine it with a KYC mechanism (so users can't keep making new accounts) and the moderation load actually becomes reasonable.
Assuming you managed to hire an army of experts who are good at moderating the posts across various fields: oftentimes people also link to external articles/blogs/videos, so now the moderators have to read through documents that are several pages long, or sit through hours of video. I just find a moderation system like that hard to implement in practice for a platform like Twitter. And to be honest, I see this as going down a dark path, something that will lead to 'Ministry of Truth'-type entities with their own in-groups/fighting/politics.
That's one practical aspect. The second is that people are oftentimes misinformed themselves and are simply posting something they heard from a buddy or on TV/YouTube/etc. in good faith; they're not bad actors looking to attack the community.
Those are just my thoughts, but what do I know, I'm not an expert on these topics :)
> Assuming you managed to hire an army of experts who are good at moderating the posts across various fields: oftentimes people also link to external articles/blogs/videos, so now the moderators have to read through documents that are several pages long, or sit through hours of video.
The moderators would never be expected to audit "posts" (top-level links to big things that need a long analysis process), just comments.
Or rather — "posts" can be, in some sense, raw evidence/data, not assertions about anything in particular. (Think e.g. a link to a scientific study. Nobody assumes that the poster of such a link is asserting, through the link, that they believe the study's own conclusions to be true — just that they believe the study to be interesting in some way — worth discussing.)
Moderators would be expected to poke their head into a post link for just long enough to confirm that it's that "artifact to be interpreted" kind of post. If it is, it's allowed to stand.
Whereas "comments" — those that are part of a post alongside the link, or those in reply/reference to a post — are almost always the conclusions drawn from the data, editorialization by the participant user(s). Those are what need moderating.
If you prune only the bad comments, then bad posts no longer matter, because their engagement (which is universally in the form of bad comments) disappears, and so the post itself is no longer "interesting" according to any kind of social recommendation system.
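That claim is easy to see with a toy scoring function (the names and the idea of simply counting comments are purely illustrative, not any real recommendation algorithm): if a post's visibility comes only from comments that survived moderation, a post whose replies were all pruned scores zero and never surfaces.

```python
# Toy illustration: a post's "interest" to a recommender comes only from
# comments that survived moderation (names invented for this example).
def interest_score(comments: list[dict]) -> int:
    return sum(1 for c in comments if c.get("approved"))

# A misinformation post whose replies were all rejected in the queue:
pruned = [{"body": "obvious quackery", "approved": False}] * 40
print(interest_score(pruned))  # 0, so the post never trends
```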
("Posts" can also be external-to-the-platform editorializations/opinion pieces. I would suggest just banning this type of content altogether. Moderator notices an external link is to an opinion piece? Out it goes. If you want to talk about some externally-written Op/Ed in the forum, you'd have to "import" it into the forum in full text — at which point it would be subject to moderation, and would also be the karmic responsibility of whoever chose to "import" it. You'd be claiming the words of the Op/Ed as your words. Like reading something into evidence in a court room — if it turns out to be faked evidence, that's libel on the part of whichever party introduced it.)
> are good at moderating the posts across various fields
I see what I think you're imagining here, but I never meant to imply that moderators are required to actually verify that statements are true (which requires domain knowledge), only to verify on a syntactic level that the poster is engaging in valid logic to derive conclusions from evidence via syllogisms/induction/etc. (which only requires an understanding of epistemics and rhetoric.) Basically, as long as the poster seems to be behaving in good faith, they're fine. It's up to the userbase themselves to notice whether the logic is sound — built on true assumptions.
In other words, the point of the moderators is to catch the same types of things a judge will notice and subtract points for in a debating society. But instead of points, your post just never shows up because it wasn't approved; and you edge closer to being banned.
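One way to give that "lose points" analogy teeth (a sketch only; the strike threshold and names are invented) is to count each rejected post as a strike against its author, with enough strikes triggering the ban:

```python
# Sketch of rejections accumulating toward a ban; the threshold and names
# are invented for illustration.
STRIKES_BEFORE_BAN = 5

def record_rejection(strikes: dict[str, int], username: str) -> bool:
    """Add a strike for a rejected post; return True once the user should be banned."""
    strikes[username] = strikes.get(username, 0) + 1
    return strikes[username] >= STRIKES_BEFORE_BAN

strike_counts: dict[str, int] = {}
banned = False
for _ in range(STRIKES_BEFORE_BAN):
    banned = record_rejection(strike_counts, "serial_crackpot")
print(banned)  # True after the fifth rejected post
```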
> That's one practical aspect. The second is that people are oftentimes misinformed themselves and are simply posting something they heard from a buddy or on TV/YouTube/etc. in good faith
I mean, that's the main thing I'd want to stop in its tracks: repeating things without first fact-checking them. Yes, preventing people from parroting things they've "heard somewhere" without citing an independent source would kill 99% of potential discourse on such a platform. Well, good! What'd be left is the gold I want out of the platform in the first place: primary-source posters who can cite their own externally-verifiable data; secondary-source investigative journalists who will find and cite someone else's externally-verifiable data to go along with their assertions; and people asking questions of those first two groups, making plans, and other types of rhetoric that don't translate to "is" claims about the world. Who cares about anything else?