Hacker News

This to me is a vital point.

One of the things rarely touched on about Twitter / FB et al. is that they are transmission platforms with a discovery / recommendation layer on top.

The "algorithm" is this layer on top and it is assumed that this actively sorts people into their bubbles and people passively follow - there is much discussion about splitting the companies AT&T style to improve matters.

But in countries where much of the discourse happens on WhatsApp, there is no WhatsApp recommendation layer to do this - the sorting is done IRL (organically) - and people actively sort themselves.

The problem is not (just) the social media companies. It lies in us.

The solution, if we are all mired in the gutter of social media, is to look up and reach for the stars.



I think this gets almost all the way there but not quite — there is one more vital point:

How we act depends on our environment and incentives.

It is possible to build environments and incentives that make us better versions of ourselves. Just like GPT-3, we can all be primed (and we all are primed all the time, by every system we use).

The way we got from small tribes to huge civilizations is by figuring out how to create those systems and environments.

Yes, the algorithm alone is not the problem, but a good algorithm can help fix the problems - since it creates the "loss function" (the incentives) for the humans using the platform (I go into that in more detail here https://twitter.com/metaviv/status/1529879799862378497 and here https://www.belfercenter.org/publication/bridging-based-rank... for those who are curious).
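To make the "loss function" idea concrete, here is a minimal toy sketch (my own construction, with made-up camp names and numbers - not the actual ranking from the linked paper): score a post by its *worst* reception across camps rather than its total engagement, so only content that every side tolerates rises.

```python
def bridging_score(approval_by_camp):
    """Rank by the minimum approval rate across camps (values in [0, 1]):
    a post only scores highly if *every* camp approves of it."""
    return min(approval_by_camp.values())

def popularity_score(approval_by_camp):
    """Conventional engagement-style ranking: total approval,
    regardless of which camp it comes from."""
    return sum(approval_by_camp.values())

# Hypothetical posts: one divisive (loved by blue, hated by green),
# one bridging (moderately liked by both camps).
divisive = {"blue": 0.98, "green": 0.30}
bridging = {"blue": 0.60, "green": 0.55}

print(popularity_score(divisive) > popularity_score(bridging))  # True
print(bridging_score(bridging) > bridging_score(divisive))      # True
```

The two rankings invert: engagement-style scoring rewards the divisive post, while the min-across-camps score rewards the bridging one - that inversion is the changed incentive.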

So it's not about "reaching for the stars" or complaining about how humanity is too flawed. It's about carefully building the systems that take us to those stars!


But there are communities that make it work, and I believe these are negatively affected by the general rules we try to impose on social media through such systems.

I don't believe any single system can be a solution, and a lot of communities don't need one either. I don't know what differentiates these groups from others - probably more detachment from content and statements. There is also simply a difference between people who embraced social media to put themselves out there and ghosts who have multiple pseudonyms. Content creators are a different beast: they have to be more public on the net, but that comes with different problems again.

I believe it is behavior and education that would make social media work, but not via the usual approaches. I don't think crass expressions with forbidden words or topics are a problem; on the contrary, they can be therapeutic. I say this only because banning some language and some content is the first thing some people will try to change - the usual stuff.


I had been thinking about how I’d put it, and I think:

- by “failure of the algorithm”, the vocal minority actually mean “lack of algorithmic oppression and treatment according to how well a given piece of speech aligns with academic methodologies and values”.

- average people are not “good”; many are collectivist, with varying capacity for understanding individualism and logic. They cannot function normally where constant virtue signaling, prominent display of self-established identities, and the alignments above are required, as on Twitter. In such environments, people feel and express pain, and make efforts to recreate their default operating environments, overcoming the systems if need be.

- introducing such normal but “incapable” people - in fact honest and naive, just not post-grad types - into social media is what caused the current mess, described by the vocal minority as algorithm failures and echo-chamber effects, and by the mainstream public as elitism and sometimes conspiracy.

Algorithmically oppressing and brainwashing users into aligning with such values would be possible, I think (and sometimes I consider trying it for my own interests; imagine a world where every pixel seems to have had 0x008000 subtracted - it’s a weird personal preference of mine that I don’t like highly saturated greens). But an important question of ethics has to be discussed before we push for it, I also think, especially with respect to political speech.
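For what it's worth, the per-pixel tweak described above (subtracting 0x008000, i.e. 0x80 from the green channel of a 0xRRGGBB pixel) is easy to sketch; clamping at zero is my own assumption, so the channel doesn't wrap around:

```python
def subtract_green(rgb):
    """Subtract 0x80 from the green channel of an (r, g, b) pixel,
    clamping at 0 so the value doesn't go negative."""
    r, g, b = rgb
    return (r, max(0, g - 0x80), b)

print(subtract_green((0x20, 0xFF, 0x40)))  # (32, 127, 64)
print(subtract_green((0x20, 0x10, 0x40)))  # (32, 0, 64)
```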


How do you go about determining what is collaborative or "bridging" discourse, though? That seems like a tricky task. You have to first identify the topic being discussed and then make assumptions based on past user metrics about what their biases are. Seems like you would have to have a lot of pre-existing data specific to each user before you could proceed. Nascent social networks couldn't pull this off.


This also seems gameable. Suppose you have blue and green camps as described in the linked paper, and content gets ranked highly when it gets approval from both blue and green users. Then one of the camps may decide to promote its own opinion by purposefully engaging negatively with the opposing content in order to bury it.

This seems no different from "popularity based" ranking mechanisms (e.g. Reddit) where the downvote functionality can be used to suppress other content.

Maybe the assumption is that both camps will be abusing the negative interactions? But you can always abuse more.

How is such a system protected from someone manipulating the consensus by employing a troll farm (https://en.wikipedia.org/wiki/Troll_farm)?
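To make the concern concrete, here is a toy sketch (my own construction, not from the linked paper) of how coordinated downvotes from one camp could bury cross-camp content under a naive min-across-camps score:

```python
def camp_approval(upvotes, downvotes):
    """Approval rate within one camp; 0.5 when there is no signal."""
    total = upvotes + downvotes
    return upvotes / total if total else 0.5

def bridging_score(approval_by_camp):
    """Naive bridging ranking: the minimum approval across camps."""
    return min(approval_by_camp.values())

# A green-authored post that blue users, voting honestly, mostly tolerate.
honest = {"green": camp_approval(80, 20), "blue": camp_approval(55, 45)}

# The same post after a blue-aligned troll farm adds 200 fake downvotes.
gamed = {"green": camp_approval(80, 20), "blue": camp_approval(55, 245)}

print(bridging_score(honest))  # 0.55
print(bridging_score(gamed))   # ~0.18 -- buried
```

A real system would presumably need defenses on top of the score itself (vote weighting, bot detection, rate limits); the sketch only shows why the raw signal is attackable.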


What a great reply, thank you. I agree.


> It lies in us.

As despicable as Facebook is, I wish someone there would just come out and say “have you looked at yourself in the mirror?”

It’s mind-boggling to see them passively accept the questioning without mentioning that people are wolves and sheeple, and you don’t really need Facebook to join the two.

Platforms like Facebook only bring us closer but the rotten core is within us.


People act based on circumstance.

If there is a lot of food, people will lean towards sharing their food, and helping others - even anonymously in a truly "altruistic" sense.

If everyone is starving, people will lean towards violence and stealing.

Does this "expose the rotten core within us?", or is it just saying we have the capacity for both? If we were truly completely rotten we wouldn't share in either case.

The fact is that environment and circumstance are inextricably tied to our behaviour - a fact that we seem to want to view negatively, our ideal being a perfect person who operates with true altruism in all circumstances regardless of personal cost.

Throwing our hands up and saying "go look at yourself in the mirror" misses the big picture in my view, which is that if you want a good behavioural outcome, the environment and context of the behaviour are among the biggest factors, and a big target for improvement. If more people are operating in a positive environment, you get more positivity.

Personal accountability is, yes, a thing, but it is quite a lot more difficult to instil and improve from within a small slice of the world, like an app; it is more of a broader societal concern, or would otherwise require propagating an ideal with a reach that is quite hard to achieve through your app.


I agree people are not rotten to the core, but we keep blaming the system and thus absolve ourselves of any personal accountability. Both are important. So far, though, it seems people have forgotten to "look in the mirror". Everyone keeps blaming everyone else. So many conflicts would automatically be defused if people looked within. Nobody wants to do that because it's hard; blaming others and the system is easy. Platforms providing a healthy environment is just as important as personal accountability.

Also, comparing our social media squabbles to food security is not right, imho.


What if people don't know how to look within?

If you think about it, not only do we not teach such things, these sorts of ideas very often get mocked and dismissed as "woo woo".


People haven't forgotten anything. People are generally the same as they have always been, except that on average they are perhaps more educated at a global level.


I don’t think your proposed truth is always true - do you have any data to back that up?


They made a couple of statements. Which are you asking for clarification on?


If there is a lot of food, people will lean towards sharing their food, and helping others - even anonymously in a truly "altruistic" sense. If everyone is starving, people will lean towards violence and stealing.

This.


This seems like game theory, and there are historical incidents - like WWII, where people were put into forced famines - in which people didn’t act this way. People aren’t purely self-interested or “logical”; there is something else there besides self-interest.


>As despicable as Facebook is, I wish someone there would just come out and say “have you looked at yourself in the mirror?”

That's orthogonal. Human nature is what it is, we have to work with what we have, and at best change that slowly within a culture/society.

Facebook on the other hand, and how it operates, is almost infinitely malleable, and can be changed with just programmers working on the change.

And it's not some neutral playground that "only brings us closer" but an active agent, with policies that create echo bubbles, stir up engagement, and use distraction and partisanship for maximum profit - and whole teams dedicated to it.

So, no it's not like a knife that can be used to cut a cake or kill a person, and it's "just up to us how we use it".

It's more like an automatic rifle with a laser sight, carefully designed for maximum hurt and promoted as such to customers (in this case, advertisers), with product teams devoted to stirring up conflict to improve the rifle-selling business...


Well, it's more like giving everyone a Green Lantern ring. You can do whatever you imagine with it.


It would be, if the Green Lantern ring came with teams doing psychological studies on how to fuck with your mind for profit and increase the time you spend with it, and if its business model depended on framing how you discuss and which news and stories you see...


> The problem is not (just) the social media companies. It lies in us.

My take is that social networks opened a new 'space', and just like in every new space, old rules / regulations / institutions got thrown out so people could enjoy the newfound freedom - until people experience the same issues, i.e. the need for organization, and thus reinvent rules similar to the ones they walked away from. It's a kind of cycle.

Such freedom itself is not a benefit; it's more like removing the brakes from your car to save weight and maintenance cost.


It's not as if Friendster and MySpace caused these issues. It's the social media platforms that are absolutely designed to cause polarization to drive up platform metrics.

It's the difference between a carefully tended forest and a forest whose "tending" made sure flammable underbrush was spread everywhere. Sure, in both cases it is a match that starts the forest fire, but there is a world of difference between the two.


I wouldn't be surprised if future generations looked at these few decades and thought "crazy times". You are right, this is new, and it will take decades before we master this new "space".


Methinks they will read it the way we read history: just a funny story they won't grasp :)


Not really. People don't change. Slavery, rape, murder, theft, corruption - history books are not funny stories; they are painfully relatable.


This explanation would also indicate why we had less of this issue in the pre-Internet era of broadcast media.

Broadcast centralizes the conversation, and to maximize viewership / listenership, broadcasters are encouraged to talk about a broad variety of things... but they're the ones doing the talking.

Apart from choosing to tune into specialized shows, there's less self-sorting possible when the channels and content are finite. You can't easily just tune into the stories you want to hear on the six o'clock news; someone else is deciding relevance and the topics chosen to be relevant are seen by everyone.

But introduce either bidirectional communication or the modern cable dynamic of a thousand channels, and that homogenization goes away.


Did we have fewer problems in the pre-Internet era?

We had WW1 and WW2, Vietnam, civil wars, internment camps, genocide.

Many of these were approved of and even celebrated by the majority of the population, back when all they had was broadcast media telling them what to think about.

I think it's wild how much people romanticize "traditional media." Broadcast and print media is and always has been awful.


I'm not talking about fewer issues in general, I'm talking about less of this issue: people self-radicalizing by sorting themselves into echo chambers where their priors are reinforced, instead of being compelled to hear someone else's worldview or a broadcaster-controlled consensus worldview.

Broadcaster-controlled consensus worldviews can be faulty. But replacing them with the self-sorting of listeners has not removed those faults, because listener worldviews are also faulty.


The algo's loss function is built around the inherent failings (the shadow) of humans. Engage the shadow and clickity click click comment.

It essentially scrolls through the various options that will start a fire.

Human nature is what it is. And AI systems are designed to understand it (sometimes it isn't designed in, but unhappy accidents find it).


Sorry, but this isn't "organic". There is a lot of money being spent to produce viral content, and a lot of mobile phone farms to spread it. "Organic" is just the last mile.



