I'd also argue there's a way to get a reasonable form of this in a more centralized world: You need functional KYC with a high degree of certainty that every participant in the discussion is human and culpable, and you need a system that allows users to block other users. That's really about it.
The problem with many of these social media platforms (X, Threads, Insta, etc.) is that they are actively disincentivized from enforcing the first of those two requirements. Strict KYC is (rightly) viewed by many as overtly invasive of privacy, it would hurt the volume of content on these platforms (because 90% of it is AI-generated engagement bait), and it would hurt their user signup and retention metrics (being ignorant of how many bots and alts are on your platform is vastly better than knowing and doing something about it).
> You need functional KYC with a high degree of certainty that every participant in the discussion is human and culpable
This would actively reduce freedom of expression. There are opinions people hold that they're only comfortable sharing when they know it won't come back to them in real life, because holding such opinions can be dangerous. These people should still be allowed to share their views and debate with others they disagree with. This could be one of the only ways for them to even challenge their own worldview.
If you're optimizing for freedom of expression, then anonymity is a requirement.
I very clearly did not assert that your identity had to be shared publicly or shared with the people you are interacting with. I only said that the entity operating the centralized social network had to have a strong and confident sense of each user's identity. The purpose of this is to ensure that everyone interacting within the walled garden is human, and to ensure that blocked people cannot just create an alt account to get around being blocked. Whether it's "@JimJohnson" or "@ButtDestroyer420" you're interacting with isn't relevant.
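To make the mechanics concrete, here's a minimal sketch of how that could work (all names and the KYC-record IDs are hypothetical; this illustrates the idea, not any platform's actual implementation): handles are public pseudonyms, the operator privately maps every handle to one verified identity, and blocks are enforced at the identity level, so an alt account inherits its owner's blocks.

```python
from dataclasses import dataclass, field

@dataclass
class IdentityRegistry:
    """Private, server-side state. Handles are public; identities never are."""
    # handle -> verified identity (e.g. a KYC record ID; hypothetical format)
    handle_owner: dict[str, str] = field(default_factory=dict)
    # blocker identity -> set of blocked identities (identities, not handles)
    blocks: dict[str, set[str]] = field(default_factory=dict)

    def register_handle(self, identity: str, handle: str) -> None:
        """A verified human may hold many pseudonyms; all map to one identity."""
        self.handle_owner[handle] = identity

    def block(self, blocker_handle: str, blocked_handle: str) -> None:
        """Blocks are recorded against the underlying identity, not the handle."""
        blocker = self.handle_owner[blocker_handle]
        blocked = self.handle_owner[blocked_handle]
        self.blocks.setdefault(blocker, set()).add(blocked)

    def can_interact(self, sender_handle: str, receiver_handle: str) -> bool:
        """Resolve both handles to identities, so alts can't evade a block."""
        sender = self.handle_owner[sender_handle]
        receiver = self.handle_owner[receiver_handle]
        return sender not in self.blocks.get(receiver, set())


reg = IdentityRegistry()
reg.register_handle("kyc-rec-001", "@JimJohnson")
reg.register_handle("kyc-rec-002", "@ButtDestroyer420")
reg.block("@JimJohnson", "@ButtDestroyer420")

# The blocked user creates an alt; the block still applies.
reg.register_handle("kyc-rec-002", "@TotallyNewGuy")
assert not reg.can_interact("@TotallyNewGuy", "@JimJohnson")
```

The point of the design is that nothing about the verified identity ever leaves the operator's side; other users only ever see the pseudonym.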
I understand the anonymity argument, even the version where you don't want even the service operator to know who you are. I'll be blunt about my take: I think this is millennial/gen-X idealism, tossing coins into a wishing well for an internet that no longer exists and never will again. Generative AI is out of the bag, and social network operators can either grow up and recognize that the number-one service they can offer is a reasonable guarantee of protection against generative AI, or they can stay addicted to their juiced engagement numbers and play dumb when it comes out that 99% of tweets are from bots trying to build rapport, sell products, and influence elections (bye bye advertisers!). It's their call. Twitter is making the wrong one. Meta is up next.

Traditionally they've all done bot detection; that no longer works. Bots are indistinguishable from humans now. You need human detection: KYC, meat-space verification. If you don't like the anti-privacy angle, then don't participate; no one is forcing you, and you're welcome to start your own Mastodon server out in the generative-AI wildlands (and, by the way, you should always have that right; I'm not prescribing how the world should be run, just how e.g. Threads should be).