I have designed some solutions around this, but haven't found the right product/ecosystem in which to implement it yet.
The basic idea is that you need multiple independent publishers of append-only "credit rating" feeds, each publishing its own view of the reputations of different servers, users, or hashtags across the whole network. Services can aggregate all of these moderation/rating feeds in realtime and present their users with a list of the different "social credit rating agencies", i.e. the moderation feed publishers. You as a user could then choose your moderators from across the internet, and their moderation decisions would be applied to your feeds. It's sort of like outsourcing the management of your block/mute list. You could, of course, disable all of the moderation feeds and see the firehose of slurs and spam, or switch to different ones.
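To make the idea concrete, here's a minimal sketch of the client side (all names are hypothetical, not any real product's API): the user subscribes to some set of published rating feeds, the client unions their "block" verdicts, and filters the timeline accordingly.

```python
from dataclasses import dataclass

@dataclass
class Rating:
    subject: str   # a server, user, or hashtag being rated
    verdict: str   # e.g. "block", "mute", "allow"

@dataclass
class ModerationFeed:
    publisher: str          # the "rating agency" identity
    ratings: list           # append-only list of Rating entries

def effective_blocklist(subscribed_feeds):
    """Union of 'block' verdicts from the feeds this user chose to trust."""
    blocked = set()
    for feed in subscribed_feeds:
        for r in feed.ratings:
            if r.verdict == "block":
                blocked.add(r.subject)
    return blocked

def filter_posts(posts, blocked):
    """Drop posts whose author appears on the aggregated blocklist."""
    return [p for p in posts if p["author"] not in blocked]
```

Unsubscribing from every feed just yields an empty blocklist, i.e. the unfiltered firehose.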
We solved this with email (poorly, and over a long period of time), and RBLs were part of that process. We'll eventually see the same for federated/p2p systems as well.
I don't think this is the point. You are trying to solve a social problem via technical means, and that generally does not work.
Spam/scam email isn't a great parallel: that sort of thing is a more-or-less anonymous party intruding into someone else's life in order to try to sell them something or steal something from them. Blocking that kind of communication is the correct solution, and that's what success looks like.
Getting people to have nuanced, respectful conversations online is a completely different thing. If you get to the point where your best option is to block the other person, or moderate/delete their posts, that's a failure, not a success.
In a system where anyone can talk to anyone, for free, natural human tendencies are going to result in the vast majority of traffic being ads for sex, drugs, or salty carbohydrates.
Social networking needs moderation and filtering, because there are always going to be people who don't respect the time of others. Email just happened to be the first online social network, followed by usenet (which had killfiles).
There's going to be filtering. The only question is do you want it to be a small number of large, unaccountable corporations (and the governments that can put guns in the faces of their sysadmins), or "everyone who cares to, and you can pick"?
Dealing with millions/billions of people online, it's impossible to know beforehand whom I can expect to have nuanced, respectful conversations with.
So, yes, I don't think it's a bad idea to take a deny-by-default approach with new connections, treating them as hostile unless they come with some backing social proof from one of your peers.
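The deny-by-default check is simple to state in code. A rough sketch, assuming each user publishes a set of people they vouch for (all names here are illustrative):

```python
def accept_connection(requester, my_peers, vouches):
    """Deny by default: accept a new connection only if at least one
    of my existing peers vouches for the requester.

    vouches: dict mapping a user to the set of users they vouch for.
    """
    return any(requester in vouches.get(peer, set()) for peer in my_peers)
```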
I would also be interested in an approach where the initiator had to pay actual money to be able to interact. Nothing big, just enough to act as a deterrent to spammers, scammers, and moderation crusaders:
- Want to send a DM? Pay $1, get it back if the recipient clears you up.
- Want to comment for the first time on someone else's thread? The poster decides the minimum amount to leave in escrow. Really good comments could even collect some of the money forfeited by spammers/hostile commenters.
- Want to report someone just because you don't like them or their views? Put $10 in escrow for the moderators. If the report is accepted, you get the money back. If there are no grounds for the report, the reported person gets to choose which charity the money is donated to, and your next report costs double.
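The core of all three bullets is the same refundable-deposit mechanic. A toy sketch of that ledger (dollar amounts and names are just for illustration; real money movement is exactly the part the KYC/AML problem below makes hard):

```python
class InteractionEscrow:
    """Toy ledger holding refundable deposits for first-contact interactions."""

    def __init__(self):
        self.held = {}  # (sender, recipient) -> deposit amount in dollars

    def send_dm(self, sender, recipient, deposit=1.00):
        # Sender posts a deposit before the DM is delivered.
        self.held[(sender, recipient)] = deposit

    def approve(self, sender, recipient):
        # Recipient clears the sender: the deposit is refunded.
        return ("refund", sender, self.held.pop((sender, recipient)))

    def reject(self, sender, recipient):
        # Recipient flags the DM as unwanted: the deposit is forfeited
        # (here credited to the recipient; could go to a charity instead).
        return ("forfeit", recipient, self.held.pop((sender, recipient)))
```

The comment-escrow and report-escrow variants differ only in who sets the deposit size and where forfeited money goes.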
Sadly, these sorts of bond-posting antispam systems are rendered mostly illegal due to financial surveillance requirements in the US: you can't really do micropayments like this without going through full KYC/AML on everyone you're receiving from or paying, which is a huge barrier to entry and adoption.
The intense regulatory requirements for total financial surveillance in the USA are holding back so many insanely cool apps now that programmable money exists. Doing anything novel or cool with it is basically illegal.
https://kleros.io/ can do it on ethereum. Alas, it's still too expensive to do due to gas fees. I wouldn't be surprised if they come up with a Layer-2 approach for it, though.