
I’m working on a new way to talk online, with the goal of killing cancel culture, increasing understanding, and basically calming down current radicalization. Picture Reddit, but with the ability to anonymously share ideas with other people in your social circles.

My theory is that most reasonable people stay off social media, so places like Twitter end up filled with unreasonable narcissists. At the same time, discussing politics on a semi-anonymous forum like Reddit is pointless; who cares if someone on the Internet is wrong? But maybe there’s a better way of communicating, something new, that lets you talk with people you actually know.



This is a great idea. I would love to use a service like this.

One of the downsides of anonymity that many people raise (fallaciously, in my view) is that too many anonymous users are just trolls, and that anonymity is therefore a negative quality. Having the audience be people known to the user brings an assumption of character that can solve this issue, so long as the user associates with people they tolerate (a mostly fair assumption, considering how private groups and similar features operate on the established platforms).

One problem that I could see is that someone could be found out simply by virtue of being the only person that would say something like them. I guess this could be alleviated by filtering in public posts from others, but this could cause other problems while not solving what it's supposed to.

The other issue for this is the Gab problem, which cannot be easily solved: the assumption that if you're using this service, you're a bad person who needs to be blocked/fired/etc. Unfortunately the mobs you're working against destroy with little rhyme and no reason, and they work for free (often literally unemployed). This problem is hard to get around, but I think it could be mitigated by marketing it as a platform first and a solution second.

Nonetheless, I wish you the best. This is a very interesting concept and could really do a lot of good in the world.


Thank you. I think it’s a really important problem and honestly wish it was already built so I could just use it.

The Gab problem is core. Reddit bans a bunch of assholes so they go off and find alternative platforms. Free speech is great as a principle, but in practice nobody else wants to hang out with these people.

My idea is that a new model of interaction could get around the problem. I’d want this to be a link people would feel comfortable sharing to their LinkedIn network. Using the site doesn’t mean you were banned from other platforms, just that you think this way of communication is better.


I've been thinking about the Gab problem a lot and have a solution: moderation and votes should be user-centric rather than wisdom-of-the-crowd based. Explained further here: https://adecentralizedworld.com/2020/06/a-trust-and-moderati...


Great to see more people in this space! Your idea seems very related to: https://cblgh.org/articles/trustnet.html


Won't that just turn echo chambers into even bigger echo chambers?


It's a chamber that you decide vs one the group decides.

If you want to only surround yourself with people that are exactly like you that's up to you. If you want to be an intelligent thinker and surround yourself with believable truthful people who expand your worldview that's also possible.

It puts the power back in your hands instead of the site operator or group consensus.


One of the reasons people use social media is to get a feel for the general environment of opinion, even if we disagree with it. We want to know what "people are saying" and understand the rapidly-changing boundaries of what is socially acceptable. That's why I follow people on Twitter and Facebook that I don't trust at all. I would not want to use a decentralized trust system to consign myself to an echo chamber, even if it is full of believable truthful people.

But the system TimJRobinson describes is flexible: you don't have to filter out the less trustworthy posts entirely. You could instead flag them somehow as low-trust.

Right now the major social sites are designed to amplify the voices of users that produce content that drives engagement, even though the most engaging content tends to be offensive or inaccurate. That's why the people I least trust on Facebook always show up on the top of my feed: I sometimes engage with them by telling them I think their posts are inaccurate.

So I can see your system being used not just as a filter for what users see in their feeds, but as a feedback mechanism for people's posts.

I think a more responsible social site would optimize for positive outcomes, and not just engagement: employing algorithms and techniques to optimize for accuracy, quality, and civility. I think decentralized moderation could be one of those techniques.
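The user-centric trust idea discussed in this subthread can be sketched concretely. This is a hypothetical illustration, not code from the linked articles; the propagation rule, decay factor, and threshold are all invented for the example:

```python
from collections import deque

def trust_scores(direct_trust, decay=0.5):
    """Personalized trust: start from "me"'s direct ratings and propagate
    outward breadth-first, so a friend-of-a-friend inherits a decayed
    fraction of the trust along the path."""
    scores = dict(direct_trust.get("me", {}))
    queue = deque(scores.items())
    while queue:
        user, score = queue.popleft()
        for friend, rating in direct_trust.get(user, {}).items():
            propagated = score * rating * decay
            if propagated > scores.get(friend, 0.0):
                scores[friend] = propagated
                queue.append((friend, propagated))
    return scores

def annotate_feed(posts, scores, low=0.2):
    """Flag low-trust posts instead of hiding them, so the reader still
    sees the whole room but knows whom they're hearing from."""
    return [(author, text, "low-trust" if scores.get(author, 0.0) < low else "ok")
            for author, text in posts]
```

Because the scores are computed per viewer, two users see the same post weighted differently; the moderation decision lives with the reader rather than the site operator or the crowd.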


That sounds like a step backwards to me.

The problem we seem to be seeing just now is that lots of echo chambers have formed without input from the other side of the argument, on Facebook, Twitter, whatever social media you choose.

What you are suggesting is that rather than group-level echo chambers, which will validate maybe 80% of your views and go against maybe 20%, you now get a fully customized echo chamber that echoes your views 100%. I think that will make things worse.

Personally I try to get views from both sides, but I think I am in a minority for doing this.


> too many of the anonymous people are just trolls,

I have a different take on this: trolling becomes an optimal farming method when the reward mechanisms aren’t working. When being a squeaky wheel is the best way to get the oil, people try to be the worst-shaped wheels. The fittest survive.

P.S. to parent:

> who cares if someone on the Internet is wrong.

( ಠ_ಠ)...


I really like this. At the start of this podcast [1] they talk about the dark forest theory of the Internet. It's that when you go into a dark forest at night you don't see many animals because they know if they move or make noise they become prey. Similarly many intelligent people stay silent on Twitter and only speak up in private circles for similar reasons.

[1]: https://www.vox.com/platform/amp/podcasts/2019/11/21/2097616...


Thanks for the thoughtful and interesting responses; looking forward to listening to the podcast. I loved the Cixin trilogy, but hadn’t heard the dark forest theory applied to online conversations before. I’m not sure I agree with such a strong claim; maybe it’s more that the people who are most vocal in a group are rarely the people with the most interesting thoughts. It is sadly becoming increasingly true, though, that anything you say is now a permanent liability.


Did you read the English version? I'm trying to read the trilogy but the prose is just dull and the characters are flat.


I did. The beginning was a slog, but then it got really interesting. For me it was more about the plot than the characters.


Upon relistening to that podcast I realized Yancey wrote it as an article originally: “The Dark Forest Theory of the Internet” by Yancey Strickler https://link.medium.com/0MeIwL0ET7

The podcast is amazing and worth listening to regardless.


Have you read the Three Body Problem trilogy, by any chance? It talks about the dark forest theory, except in terms of alien civilizations.


Just relistened, and that's exactly where Yancey got the idea for the podcast.

The conversation starts at 13:00 and also realized he wrote an essay about it here: https://onezero.medium.com/amp/p/7dc3e68a7cb1


I have a theory that, individually, people fundamentally disagree less than they think they do. And in real face to face meetings, they are willing to disagree about a topic without killing each other. They talk around points of disagreement. Eg: me, when I'm talking to my grandmother.

The problem is that online people tend to polarize into factions during complex discussions. The more heated the discussion, the more polarized they become. Eventually it becomes impossible for either side to be self critical, or to cede a point to the other side no matter how true it may be.

"B" says "the sea is made of saltwater". A moderately reasonable "A" agrees. Mistake! The "A" group accuse them of being pro-B or anti-A. Suddenly, the idea that the sea is saltwater becomes a "B dogwhistle". A cunning trick! At this point, the "A" group adds 'saltwater sea' to their list of unacceptable opinions, and the policing of everyday language and opinion is in place. That's when the notion of defending free speech gets questioned. How can you police language if people defend free speech?! It won't do! So now free speech is a dogwhistle too. Add it to the list! There is no possibility for constructive dialogue, no matter how sensible, kind and cool-headed it tries to be. Disagreement with dogma or talking to the opposition are offences. New rules: "Don't follow an A on Twitter". Guilt by association. Next up: let's use the abuse/reporting system to ban individuals. Eventually, online systems that were supposedly designed for communication and conversation have become tools for suppression and virtue monitoring.

While anonymity itself will help people to own up to an opinion that they might be otherwise afraid to voice, unless there are no usernames at all it will be possible to track an individual across the system and figure out "which side they're on". Then you'll be able to figure out if their comments are acceptable or hatespeech without having to properly consider them.

Don't get me wrong, I'd love to see this happen, but I think it requires a lot of work. Deduplication of points made, automatic detection of logical fallacies via natural language processing, personal dictionaries with auto-translation of terms (eg: an acceptable word to you might be a slur to someone else - if the word has a genuine meaning, auto-translate to a non-slur). That would avoid having people point and scream and claim moral victory because of a word infraction, instead of hashing things out properly.


Another theory is that people disagree more than they think they do, but their actual actions are not as disagreeable as their opinions. So, two people will often hold opinions that the other considers abhorrent but will be able to get along in practice (as long as they don't learn of each others' opinions) because their actual actions will usually be acceptable to each other.


Mostly agree.

> I don't get involved because these platforms simply encourage the worst in us.

In-person, I've found I can meet and talk with folks who believe and say most anything. In fact, I enjoy it. As long as we're trying to get along, we can have powerful and dynamic conversations. I can learn things and we can challenge one another.

Online, there's none of this "trying to get along". Some person types something wrong. Other people pile on. Pointless argument ensues.

People agree on most things! That's the crazy thing about it. But if we only interact online through text, we're raising an entire generation that doesn't grok that fact.


In your opinion, what is it that distinguishes your grandma from people in online discussions? I've been thinking about it a lot lately - that the facelessness of it makes it more difficult to employ empathy? That there are too many people online and connections with them are too ephemeral to form emotions necessary for kind discussions? That the anonymity makes some people lose inhibitions? What else?


I suppose you have to get along with grandma. You have a shared family group. Imagine talking to your mom and saying "I've cancelled grandma because she thinks that Napoleon was no better than Genghis Khan, so no more family meet-ups. If you don't cut ties with her, you're blocked and cancelled too."

There's also the fact that you've known grandma your whole life, and you think that she's 90% adorable, 10% a product of her generation. So you're willing to change the subject, or maybe just roll your eyes, when it comes to Napoleon.


Pretty sure we don’t disagree on truth-in-a-vacuum type information, but on which axiom is most relevant to start from at a given moment in time.

I blame old social habits for work forcing us to continue to huddle into “flocks”, aka companies, to collectively funnel our output upward for validation.

Our biology, as a literal thing, then has to constantly contend with validation seeking. We minimize allowable results to narrow what counts as valid output. It’s economical, they say; back in the day they would say this narrow lane of validity is most pious and most likely to get you the reward of Heaven.

“They” of course are elected officials, whom we can watch flout the social rules they tell us to teach each other.

Given the literal mess we’re making it certainly does not appear to have “economy” in mind.


I will say that I’m not necessarily interested in talking with people I know. I want to talk to people who have well-thought-out arguments that are non-obvious.

Take what you will from that and good luck.


Thanks. Thinking about that problem too, and trying to find a way to give people both. “The personal is political” and one hypothesis is that at least a starting point for content you’d find more interesting is content that people you know find more interesting.


I'd say HN is the right place to be.


I've come to feel that a crucial feature missing from online discourse is the influence of third parties to the conversation.

If you have a conversation in meatspace, sufficiently nearby third parties send signals, because they cannot help but hear you (setting aside headphones or the like). Without entering the conversation, they can communicate by means like a change in stance, a muffled laugh, a roll of the eyes. If those signals are noticed, they provide feedback on the words exchanged and moderate the conversation when it grows heated or extreme. Social media doesn't well allow for these signals: the input interface doesn't allow for them without conscious effort to translate.

In the absence of those signals, any outrageous speaker on the internet can mistake the silence of our lossy input interface for silent approval. If they were wrong, someone would say something, surely? But no, there's just too much effort required to tell every damned fool they're an idiot. Even when one person speaks up, there's no apparent audience to serve as jury, and the fool can go on believing themself in the right.

In short: I think "audience feedback" is a necessity.

In effect: I think the concept of a "timeline", as a presentation of user-generated content, is socially broken. Every posting must be weighed, none should be allowed silence.


Up/downvotes on Reddit/HN/etc. are, I think, supposed to be this. Even if Reddit users stuck to the guidance that the downvote button is to hide disruption rather than to disagree, saying nutty stuff "loudly" would count as disruption.


The flaw in HN/reddit votes, Twitter faves, FB likes, etc is that not all persons who see a submission will use them. Every silent observer contributes to that illusion of assent that helps entrench deviant behavior.

Timelines are great tools for skimming content, but while we can aggregate content easily enough ("scan the room"), we have no tech solution for returning the social feedback that meatspace society relies on ("the room fell silent"). You could force that feedback by turning the timeline into merely a historical lookup, and delivering new posts to the user individually and not letting them return to the rest of the app without interacting with it in some way. Also, block unregistered users entirely, unless they can be locked into the same use pattern by some means that escapes me at the moment.

(If we split off into sci-fi dystopia land, measure the user's emotional state as they scan each post and accumulate those scores back to the author. This is terrible, please no one ever implement it.)
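The "no silent scrolling" mechanism sketched in this subthread (deliver posts one at a time and require some interaction before moving on) might look like the following toy version. Everything here, the class name, the reaction set, the one-at-a-time rule, is an invented illustration of the idea, not a real API:

```python
class MandatoryFeedbackFeed:
    """Posts are shown one at a time; the reader cannot advance to the
    next post without leaving some reaction, so authors never mistake
    silence for assent."""

    REACTIONS = {"agree", "disagree", "eye_roll", "shrug"}  # cheap social signals

    def __init__(self, posts):
        self.queue = list(posts)   # unseen posts, oldest first
        self.feedback = {}         # post -> list of reactions received

    def next_post(self):
        """The single post currently on screen (None when caught up)."""
        return self.queue[0] if self.queue else None

    def react(self, reaction):
        """Record a reaction to the current post and unlock the next one."""
        if reaction not in self.REACTIONS:
            raise ValueError("unknown reaction: " + reaction)
        post = self.queue.pop(0)
        self.feedback.setdefault(post, []).append(reaction)
        return post
```

The low-effort reactions are meant to stand in for the meatspace signals mentioned above (the eye-roll, the muffled laugh): cheap enough that every observer leaves one, so the author sees the room's actual response instead of ambiguous silence.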


This never works because the people attracted to anonymous discussion with no ‘cancel culture’ (no banning/moderation?) ARE those who have been radicalized and would get banned for it anywhere else.


I think that even if it did attract those who would be banned anywhere else, it would not necessarily mean it is not working - depends how you define "working". I see the problem with cancel culture in suppressing legitimate but unpopular opinions and polarizing society by promoting echo-chamberization. I would love to have a place where opinions would be discussed and voted down not because people do not like them but because they can find arguments against them in rigorous debate. But how to set up the system in such a way that it would distinguish and push to the most visible place the best reasoned opinions and not the ones most liked, that is the hard problem.


Or maybe it's just people who want to hear alternative opinions that have been banned from the other heavily moderated platforms.


Opinions like what? The only opinions people get banned for are ones which seem openly racist, homophobic, etc. Nobody gets banned for having or encouraging alternative opinions about, like, which Java framework is the best. Not even for more controversial arguments like whether God exists (at least as far as I’m aware.) Therefore the only ‘rational debate’ that will take place is between what most people would call racists, homophobes, etc. Nobody else is going to want to post in or even look at such an environment, so it rapidly turns into a completely worthless bigoted echo chamber where everyone’s patting themselves on the back for having such enlightened alternative opinions.


"There are two genders" would be one - a position that I guess that the majority of the public also believe. That's ground for banning in plenty of places. We aren't allowed to discuss on many platforms. Of course I will be labelled transphobic for that opinion.

> Nobody else is going to want to post in or even look at such an environment, so it rapidly turns into a completely worthless bigoted echo chamber where everyone’s patting themselves on the back for having such enlightened alternative opinions.

Pure speculation on your part. How does it end up any more of an echo chamber than somewhere where certain opinions are removed?


So, let’s assume that everyone who got banned for saying there are two genders (and presumably strongly believes it, because they cared enough to be banned for it) is attracted to this platform. People who disagree won’t be attracted to it; they can already post their opinion on, say, Twitter without being banned, and they hold the opposite opinion anyway. Now Twitter is an echo chamber of one side of the debate, and this new platform is an echo chamber of the other side. No debate can happen on Twitter; debate can IN THEORY happen on new platform, but won’t, because of the huge imbalance in participants (imagine an in person debate where one side is allowed to bring a hundred people who will all argue with the single guy on the other side). What have we gained by that? There’s still no debate happening. The banned people might as well have started blogs instead.


The difference being that you can actually post / see both sides of the debate on one platform. And that isn't the one with censorship policies. You seem to be in agreement that the lack of moderation actually gets us closer to the form of debate that we are after.


I don’t really agree. Like I said, I don’t think you will ever get both sides of an argument there; one side will force the other out eventually. I’d rather have a proliferation of communities which are ‘censored’. Sure, have a forum for racists where non-racists get banned, whatever (although I won’t shed any tears if it gets closed down). If I want to see what racists think, I can go check it out. But I don’t want to be part of a space where I’ll regularly encounter racist stuff, and they don’t want to be part of a space where they’ll regularly encounter anti-racist stuff. Mashing me and the racists together in an anonymous arena where anything goes obviously won’t result in us all nicely debating and getting to know each other. It is not a step in the right direction.


Basically you are in favour of censorship over free speech. History has shown that to be a poor choice.


It’s not exactly censorship when you can start your own blog or platform with the exact opposite form of ‘censorship’, is it? Give me a break - someone saying “you can’t post things I think are racist on my website” is not a violation of your human rights, any more than the host of a party saying “you need to leave after what you just said” is dire political censorship akin to burning books.

Bigots of all stripes like to use the goodwill of others and their belief in freedom to say horrible, bigoted things and demand everyone listen to them. But we don’t actually have to. That’s not what freedom of speech means.


Looks like censorship to me whatever way you justify it.


You are a victim of your own bubble (as we all are). Rational discourse doesn't appeal to most people. Hence this idea won't work at scale - eg. outside said bubble.


You're not going to "calm down radicalization" with phrases like "killing cancel culture", or by describing people you don't agree with on Twitter/the platform as a whole as "unreasonable narcissists".


“Killing” was a poorly chosen term. I think cancel culture is very dangerous in the way that it discourages reasonable discussion by picking on people whose opinions are deemed to be out of line and punishing them with mob justice. Maybe this interaction illustrated my point though. If I was talking to you in a pub and you said the same thing I’d respond in the same way. But if I’d posted about my great idea on twitter and you responded like this I can picture my ego urging me to say something more defensive and inflammatory.

I don’t mean that perspectives I disagree with are narcissistic; I was honestly making an observation based on my limited understanding of psychology. Some personality types like to hear themselves talk more than others, and I think it’s clear that they are more attracted to Twitter.


For me, it's not about the word choice of "killing"; it's the seeming implication that "current radicalization" is caused by lack of "understanding" and then leads to "cancel culture" as the true problem. Your mission statement sounds kind of like "I want to win the argument with my current set of beliefs" rather than "I hope through honest and open minded discussion we can discover our own blind spots and reach common ground to move forward" or whatever.

Look at the supportive and critical comments you received; it looks split down the line based on political leaning. It may be worth considering that if you want to avoid creating just another echo chamber.


It’s definitely been interesting hearing some feedback. There’s a lot of passion about the topic.

The problem I’m interested in has nothing to do with winning arguments, more that so many people are quiet, not saying anything, because they aren’t interested in current discussion options, and the huge negative downsides.

I will reflect on all the feedback, especially the criticism. I don’t agree that I can divide it by political leaning though, and I certainly hope it’s not true. Definitely agree that the last thing the world needs is another echo chamber.


To add to this train of thought: the phrase 'cancel culture' has become politicised, hence the perceived split. I agree that we're seeing a wave of puritanism and witch-hunt behaviour online, but much of it is springing from admirable causes like climate change and minority rights. I think the risk of using the phrase is that it discounts the cause. That brings up a sticking point in creating a utopian online space: language itself means different things to different people. It's worth thinking about in your project - how do you use a light touch but prevent people from filling the space with newspeak or in-group language?


No one mentioned opposing viewpoints except you, and only to downplay real-world harassment as a mere difference of opinion. The radicalization occurring on Twitter is encouraging 'adults' to gang up on others (often children) and try to get their lives permanently ruined over ACTUAL differences of opinion, or for vastly disproportionate acts. So yes, we should use the word 'kill' when describing a force that destroys reasonable, well-meaning, and good people's lives every single day.

The "unreasonable" part is the disproportionate and permanent effect of internet hatred relative to comparatively non-permanent acts (acts that are still often harmful, but more often than not nowhere near to the same extent). The "narcissist" part is the need to do so for beneficial social points among those that do it - people who have the online support to keep the basic needs in life that they want to deprive others of. So yes, these people meet the definition on both counts.

It doesn't mean that those who are bombarded don't deserve to be reprimanded, just that they probably don't deserve to be bombarded with threats and harassment. You can express your opinion on Twitter without personal attacks and threats of harassment and violence. That part should not be a fringe opinion.


I'd like to maybe add onto that with a personal anecdote regarding twitter:

I've barely ever used it, though I had an account created about a decade ago. The only people I had followed when I first created the account were a few random celebrities and public figures. I pretty much never logged in despite many emails from Twitter reminding me of my account. A few months ago I opened it up to look, and I was really shocked and appalled at what I saw. My reaction was quite visceral. The best way I can describe the content I saw is just "hate" - not the general term hate-speech that people throw around, but rather people tweeting hateful things and exhibiting what appeared to me to be genuine hatred of something or someone.

Sure in between all of that there were a few wholesome tweets of course. But for the most part, it was just hate. I didn't really stick around too much other than to maybe add some additional public personas I do follow (I guess in support of them). The whole experience left me with the impression that Twitter and perhaps social-media entirely, are really bad/toxic/divisive to our society, at least in their current form.


I think Twitter is especially bad in comparison to other sites because it incentivizes people to post their real names, photos of themselves, etc. Even worse for this issue is the ability to retweet, and the trending page of Twitter. Instagram for comparison doesn't have this problem to the same extent because it's about sharing photos and not about following every new trend. This design gives those that live their lives as online reactionaries a 24/7 outrage factory. The trending page is a constant line of pitchforks and threads to bring them to. Regardless of the benefits of the service I think it's hard to deny how the site's design contributes to the problem, at least to some degree.


> You can express your opinion on Twitter without personal attacks and threats of harassment and violence. This should not be a fringe opinion.

I can. I expect that most people can as well. I don't, however, agree that any opinion should be expressible without consequence. If someone says "we should kill the Jews and reinstate the 3rd Reich", or "the place of blacks is subjugated below whites", that reveals how they see others, and making sure that their employer is acting on complete information seems perfectly reasonable.

> The "narcissist" part is the need to do so for beneficial social points among those that do

Why are you ascribing nefarious motives to actions? Why is it clear that these people are acting in bad faith?


I totally agree with the extreme example of nazism or something like that, or most less extreme examples of prejudicial behavior, especially when that person is managing or just in any way talking to others. The problem is that it isn't so cut and dry. Too many of these people either aren't actually doing anything wrong (Some truck driver with his hand out the window) or have done something that is comparatively minimal to what they received (Justine Sacco, for instance). It's not that these people shouldn't be reprimanded, just that it shouldn't be the default for it to happen in this public, permanent, and very often dangerous way.

The narcissism is real, if you're not closing your eyes to it. It's especially obvious when these people doing it have done what they're complaining about themselves. It gets back to the main idea, that you don't have to harass people to get your point across. Even the most egregious reaction, trying to take someone's employment (and most often the ability to support themselves and their dependents) can be done in a way that isn't public and permanent.


> permanent

Can you give an example of incorrect permanent punishment? Sacco found a job almost immediately. The truck driver may have a more difficult time, but not because this event will follow him, but because the labor market at the moment is shitty. (And to be clear here I'm not saying what happened to him was right or just).

> The problem is that it isn't so cut and dry.

Sure but now we're in a very fuzzy area. We're no longer saying that public shaming is always wrong, but that there are situations where, in your judgement, the scope is misapplied. Those are two very different situations, especially if you're willing to acknowledge that your perception of the severity of some action may indeed be different than the actual effect of that action.

> The narcissism is real, if you're not closing your eyes to it.

I still don't buy this. People not being self-aware might be narcissism, but not in the way that you seem to mean, which is more that the activism is performative and not genuine.

Not to say that there aren't people who are performative. Lots of social media activism is, but in many ways so is stuff like signing petitions, and that doesn't get a bad rap.


The story about Wadi from https://www.theatlantic.com/ideas/archive/2020/06/stop-firin... is pretty horrible. I mean they all are, but maybe it’s the most clear permanent damage.


So that story shows the potential for long term damage (his store hasn't closed yet), but I agree that if it does, it would be the best example I've yet seen.

However even if we assume the worst outcome in that situation, the negative impacts of cancel culture are tame compared to a lot of other systems. If we're calling for an end to cancel culture due to the one case of permanent damage, why aren't we calling for an end of the US justice system which, on a daily basis, causes far more permanent and far more cases of damage?

And this is sort of whataboutism, but lots of the recent concern about cancel culture, at least that I've seen, is from mostly upper class, mostly non-black and latino, mostly well educated people. Their concern has been that they'll be cancelled if they don't support recent protests enough or in the right way.

So we have two systems of justice, one that unjustly kills innocent people on the daily, and one that might end up closing down a single restaurant whose owner was innocent. Why are we focusing our energy on dismantling the second system over the first?


There are a few reasons I’m particularly concerned about cancel culture. One is that there is a mechanism for me to change laws I disagree with. There’s a lot I hate about the justice system that I believe is being worked on. But fundamentally, I accept that I was born into a particular society, and I’ve implicitly agreed to abide by certain rules.

Cancel culture is mob justice. There’s no mechanism to change it, and it’s totally irrational. In the example, blaming one person for the actions of a family member goes totally against the philosophies I believe in.

Finally, I don’t think we can trivialize the impact of tossing the idea of free speech out the window. Human history is full of particularly nasty examples of what can happen if everybody feels forced to obey a mob.


> One is that there is a mechanism for me to change laws I disagree with.

There are also mechanisms to address culture you disagree with (and you're exercising them!). The question was not why do you find cancel culture distasteful, nor was it even why do you personally find cancel culture potentially worse than unjust policing (which for now let's just agree to disagree on), but why it is that you are prioritizing the push back against cancellation over the pushback against unjust policing.

At this current moment, it is, I think, clear which unjust system causes more harm. It is the criminal justice institution. That cancel culture could grow worse is feasible, but it has not yet. People aren't routinely killed at the hands of twitter complaints.

> Finally, I don’t think we can trivialize the impact of tossing the idea of free speech out the window.

Here we disagree on premise: cancel culture is the result of people who previously did not feel empowered to speak freely taking advantage of a system that raises their voices more prominently. That it is extrajudicial is a failure of the institutional justice system, which continues to systemically fail underserved communities (women, minorities, poor people). Sharing controversial opinions has never been without risk, that's why they're "controversial". There have been privileged sects of society for whom the risk of holding controversial, and even outright despicable, opinions was low. I don't think more equality in that regard is a negative thing.

> In the example, blaming one person for the actions of a family member goes totally against the philosophies I believe in.

Do you believe the situation would be different if the employer had been unrelated to the girl who posted bad things? While the exact numbers might be different, I don't see the overall picture being that impacted by him being her father. And "punishing" an employer for the actions of an employee, while still perhaps fraught in some cases, is much less worrisome to me than targeting family.


I’m rooting for you. Here’s a thought about a possible trade off between radicalization and engagement. When social media platforms optimize for growth, it makes sense for them to make it as easy as possible for users to share/retweet. It lowers “amplification friction” and allows messages to go viral. The most successful platforms have very low amplification friction, which suggests that low friction is an important ingredient.

What we are learning is that making it trivially easy to amplify anyone’s message enables cancel culture and (I believe) leads to radicalization.

If this is correct, then increasing amplification friction on your platform will lead to less radicalization, at the cost of lowering engagement. My guess is that making this work requires a careful balance of where you land on the higher/lower friction spectrum. Too much friction leads to low engagement, which leads to failure. Too little friction leads to uncontrolled amplification, which leads to radicalization. So a balance is needed.

Either that, or a totally new idea is needed that turns existing platforms on their head.
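The friction/virality trade-off above can be made concrete with a toy branching-process model (my framing and made-up numbers, not the parent's): if each viewer reshares with probability p to an audience of size a, the cascade's reproduction number is R = p * a, and a small increase in friction that pushes R below 1 turns runaway cascades into ones that fizzle.

```python
# Toy branching-process sketch of the amplification-friction idea.
# Each viewer reshares with probability p to `audience` new viewers,
# so expected reshares per post is R = p * audience.

def expected_cascade(p: float, audience: int, generations: int = 30) -> float:
    """Expected total views of a post after `generations` waves of sharing."""
    r = p * audience  # reproduction number of the cascade
    return sum(r ** k for k in range(generations + 1))

# A small bump in friction (p: 0.06 -> 0.04) flips R from 1.2 to 0.8:
supercritical = expected_cascade(0.06, 20)  # R = 1.2: keeps growing
subcritical = expected_cascade(0.04, 20)    # R = 0.8: settles near 1/(1-R) = 5
```

The point of the sketch is the phase change: subcritical cascades reach a bounded audience no matter how long they run, while supercritical ones grow with every generation.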


Unethical Thought Experiments: what if Twitter shrank the retweet button or hid it below the fold to put the brakes on viral spread?

Put amplification behind a trivial inconvenience that would make knee-jerk retweets less frequent and allow the whole chain reaction to cool down.


Instagram doesn’t have a share/repost feature, but people still do it a lot, either manually or through 3rd-party integrations.

This leads to other problems like blocking someone, but still seeing their content because it gets reposted a lot.


Twitter didn’t either in the beginning.


Thanks. I think another key part is that social networks get huge. They get thousands of employees, billions in investment, etc. This makes them fragile, and forces them down certain paths. If I stay small then I’m not handicapped by forced expectation of user engagement or growth.

I agree with you about the danger of low friction amplification of messages. My hope is that if users can amplify messages but they get no social credit for it, it will dampen down this behavior.


Sounds kind of similar to Yik Yak, except using social circles instead of area? Hopefully it ends better than Yik Yak did.


Thanks for the pointer, I hadn’t heard about YikYak but will check them out.


Are you trying to reinvent 4chan?


4chan is one idea about how people can talk online with total strangers. The conversations my friends and I have at the pub are very different from 4chan, but also very different from Facebook. Recording everything you say so it can be played back years later in a job interview changes things, and I would say not for the better. But our conversations also don’t sound at all like 4chan threads, where it seems like “ironic” idiots talking to actual idiots.


I think a problem with this is that if you're at all familiar with how someone speaks or writes you can figure out who they are rather easily.

So there's another weird problem (probably not so hard though): translating conversation within the same language in a way that doesn't lose meaning but obscures identifying patterns.


This was a problem that I briefly worked on in undergrad. It turns out authorship attribution is an active subfield in computational linguistics. So you’re not just trying to obscure identity from human readers, but from the neural nets looking through all your public writing as well.

For the interested, some paper keywords would be "adversarial stylometry" and "authorship attribution."
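To give a flavor of how crude attribution can already be (this is a toy illustration, not a real stylometry system; real models use far richer features): character n-gram frequency profiles compared with cosine similarity can often match an anonymous post to its author's known writing.

```python
from collections import Counter
import math

def profile(text: str, n: int = 3) -> Counter:
    """Frequency profile of character n-grams."""
    t = text.lower()
    return Counter(t[i:i + n] for i in range(len(t) - n + 1))

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two n-gram frequency profiles."""
    dot = sum(a[g] * b[g] for g in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Made-up writing samples from two hypothetical users:
known_alice = "I reckon we ought to consider the broader implications here."
known_bob = "lol no way thats wrong, u gotta look at the data first"
anonymous = "honestly i reckon we ought to think about the implications"

sim_alice = cosine(profile(anonymous), profile(known_alice))
sim_bob = cosine(profile(anonymous), profile(known_bob))
assert sim_alice > sim_bob  # the anonymous post "sounds like" Alice
```

Defeating even this toy version requires rewriting that disturbs n-gram statistics, which is exactly what the adversarial stylometry literature studies.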


Or multiple translations. This is a great idea that I’ve considered but want to leave for V2. V1 is maybe disappointingly simpler, but at least it would mean that your boss 10 years from now isn’t reading through your post history.


I don't know if there's a solution, but in my mind the problem is 'likes' or 'retweets'. Most networks create an incentive to angle only for those at all costs (how else does one become an "influencer"?), whether that means saying something ridiculous or not. This is one thing that Snapchat never did that I appreciate. It's really unhealthy for people because it's essentially a social credit score (spread across many posts) that anyone new will look at. If you can build a social network that avoids that, I think people would be much happier. Also, there are a lot of people who still want to share content but not participate in that disingenuous ecosystem (or who don't have time), so building a platform for them might be useful.


We already have those social networks. They’re called forums. (I know some introduced likes but they don’t mean much, since your post is still just as likely to be seen as everyone else’s.)


I'm thinking more closed off than forums. And more feed oriented.


I love this idea. I have been working on something similar and would love to discuss this with you some more.

Anonymity is important but tricky. Many researchers believe that anonymity can be key to countering groupthink, where people say what they think is socially acceptable, and not what they actually believe, for fear of denunciation and retaliation (some great books on this subject: Elizabeth Noelle-Neumann "the Spiral of Silence" and Timur Kuran "Preference Falsification.") Anonymous polling, for example, is probably critical to the functioning of a democracy.

In social media I think anonymity is more tricky. One idea I have is that posts always have an author, but likes and other indicators of support and popularity are anonymous.


Hi, if you’re interested I just registered chiefmccloud at mail.com (not as concerned about anonymity as about spam). I’d love to chat more with you or anybody. Thanks


I liked that Slashdot had different upvotes for informative/insightful/funny, and funny doesn't add to karma.

The forum needs to strongly encourage every comment to have their opinions substantiated, and show rigorous reasoning and inference explicitly where it occurs (and where an opinion is not directly based on evidence but is a reasonable conclusion from evidence). It's not an impossible task, since we have a similar example of a forum with high standards in Stack Overflow and its network.

This means that the platform is primarily one for discussion and developing opinions grounded in evidence and reality, not one for free expression (which reddit primarily is).


> But maybe there’s a better way of communication, something new, that lets you talk with people you actually know.

Isn't Facebook's newsfeed populated with people you know? If so, why will this be any better?


I think you're confusing radicalism with extremism or aggressiveness.

* Radicalism - from the Latin "radix" - means focusing on (what you consider to be) the root of an issue, striving for fundamental change of the way things are.

Also - I couldn't agree more about "who cares if someone on the Internet is wrong" - but I think the conclusion is that you care when it's in physical real life. On the other hand, talking to people you don't know is how you get to know new people, so that part might need some more nuance.


My current theory is that you could make Reddit much better by making splitting and merging subreddits much more fluid, e.g. users could vote to branch a subreddit and would automatically be moved to the new branch.

Also I would change moderation to Slashdot style https://en.wikipedia.org/wiki/Slashdot#Peer_moderation so moderation would be basically community based.


That's a nice idea. I agree that we need social networks that are designed around fostering good discussions.

However, it would be a good idea to not only approach this from an optimistic angle but keep a pessimistic one in mind by asking:

"How would a malevolent actor abuse my design to manipulate public and/or indiviual oppinion?"


Maybe the easiest way to lift the standard of conversation is to start with a community of vetted members, much like a scholarly journal doesn't allow just anyone to publish.

I'm not saying it's an exciting way to go about it, but it would likely work.


This sounds like Yikyak.

I have a fully working web app, mobile app, DB and REST API from a project I worked on when lockdown started that I lost interest in scaling, but I’ll make the repos public and edit this comment later with the links if they can help you out with this.


That would be awesome, thank you! I’m particularly interested in any front end work that I might be able to get ideas from (still a React noob).


I'd love to chat more about this if you'd like to reach out to me via the email address in my profile.

I have some ideas for this domain which you may find useful or at least worth discussing.


> Picture Reddit, but with the ability to anonymously share ideas with other people in your social circles.

> something new, that lets you talk with people you actually know.

Anonymous Facebook? The problem with that is this:

> so places like Twitter end up filled with unreasonable narcissists

That's also a problem among the people you know and have added on Facebook, many of them are unreasonable narcissists and they don't mind showing that with their name and picture on the web. I feel like giving them anonymity can only make things worse. Just think of your racist uncle, your Nazi grandparent, your homophobic friend, but now you won't know which of them is posting the bullshit.

You have chosen a very difficult problem to solve, I can only tell you why I think your current idea won't work and wish you luck.


You’re not wrong when you say it’s hard. I have been struggling with the problems you’ve brought up for a while.

Re: anonymous FB, I’m not going for exactly that, but let’s imagine I was and consider for a second the problem of the homophobic/racist/etc. uncle. The simplest solution for not getting that content is to remove them from your list of contacts. But I also have a theory that this is where there could be a global improvement in communication. I believe some people are trolls not because they’re anonymous but because they’re not. Their identity is basically a troll - think the guy who wore a MAGA hat to a recent BLM protest. If that person was allowed to, maybe they would want to change their opinion.


> The simplest solution for not getting that content is to remove them from your list of contacts.

People already have the chance of doing that on Facebook (not anonymous) or Twitter (relatively anonymous); there's a reason they don't do it. It's probably something to do with human nature or psychology, and it's likely an area that requires more study (or for which I don't have enough knowledge).

> I believe some people are trolls not because they’re anonymous but because they’re not. Their identity is basically a troll

I agree with this, but considering your example:

> think the guy who wore a MAGA hat to a recent BLM protest.

I wouldn't call those people "trolls," they are closer to the definition of an antagonist. They are against something beyond reason and logic because it's part of their identity. That's why they love their symbols (the swastika, the Confederate flag): symbols don't require logic or thinking, they only require faith. This brings us to the next issue:

> If that person was allowed to, maybe they would want to change their opinion.

They are unable to listen to opinions or try to change theirs. There's a quote that applies to them:

> You cannot reason people out of something they were not reasoned into.

Online discussions generally don't work because they are not about two people sharing opinions, they are usually about people trying to impose their opposing faiths.


How will you remove the racist uncle if his content is anonymous?


That’s the interesting tech challenge: keeping things anonymous, even if a DB dump leaks, while still allowing moderation and letting users block specific people. I’m still digging into it, but I believe I have a simple solution. The key is that not everything is anonymous: you know who you’re associated with, you just don’t know the association between a specific user and the content they authored.
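One common way to get that property (a sketch of my own under assumed names, not necessarily the parent's design) is to store content under a keyed-hash pseudonym: the pseudonym is stable, so blocking and moderation work, but a DB dump without the server-side key can't link a pseudonym back to a real account.

```python
import hmac
import hashlib

# Hypothetical key: kept outside the main database (vault/HSM), so a
# leaked DB dump alone can't recompute the user -> pseudonym mapping.
SECRET_KEY = b"server-side-secret-not-stored-with-the-db"

def pseudonym(user_id: str, circle_id: str) -> str:
    """Stable anonymous author id, unique per (user, circle)."""
    msg = f"{user_id}:{circle_id}".encode()
    return hmac.new(SECRET_KEY, msg, hashlib.sha256).hexdigest()[:16]

# The same user always gets the same pseudonym within a circle, so you
# can block that pseudonym and hide all of its posts...
assert pseudonym("alice", "book-club") == pseudonym("alice", "book-club")
# ...but their pseudonym in another circle is unlinkable without the key.
assert pseudonym("alice", "book-club") != pseudonym("alice", "coworkers")
```

The trade-off is that whoever holds the key can still de-anonymize on demand (e.g. for abuse escalation), which may or may not be the threat model you want.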


A "show less of this" button should work even if you don't know who it is being applied to.


I feel like this was the premise of “Branch”. It had a really interesting start. Then Facebook acquired it and killed it.


Good luck!

I love writing. I used to freelance and string. I wrote a lot of speculative sci-fi that didn't go anywhere. I was an early blogger. I was an early HN'er as well. So I'm creating content constantly no matter what the current tech.

I've been watching what works. Random thoughts: every conversation should have a moderator who is responsible for the subject and tone of the conversation. The start of every conversation should include what the goal is: analysis, venting, compassion, meme, etc. Conversation participants should be rated over time based on their ability to adjust their tone based on what the moderator requests. Each person votes up and down various moderators depending on the type of material they present and how well they manage their conversations. Over time the ranking system should match up moderators and participants, but not subject areas (important). There should be some escalating trust level based on this, the ability for people to make mistakes, be forgiven, delete/apologize for bad-day remarks, etc. I think with the right system this would happen organically, though. Not sure.

Note that a popular/well-used system will by necessity not be a hugely-successful commercial one. You might make it work for a bit, but these are separate and conflicting goals.

Personally, I'm dialing back FB and moving all of my content over to locals. I want to own whatever I write and I want the option of putting things I create behind a paywall easily. I also don't want to ever worry about having a bad day and getting my life destroyed by the mob.


I think Google already tried it with circles


There are some similarities. The interesting thing about circles was the recognition that you are many people, and you don’t want to necessarily have political arguments with your soccer friends. So many people seem to have given up on the promise of the internet to connect people. We’ve tried a few different modes of communicating, I still have hope the problem can be cracked.

