I rewatched it last year and, like a really good book, I found myself liking a different set of characters than on my first watch. There’s truly a lot of depth there. And a lot of humanity, which is something we sometimes forget about in the tech industry.
We'll often do design reviews in Figma. That usually means a few people looking at the same doc, and potentially a few people making tweaks at the same time, usually in separate parts of the doc.
Sure, we want individuals to act in ways that mitigate collective action problems. But the collective action problem exists (by definition) because individuals are trapped in some variation of a prisoner's dilemma.
So, collective action problems are nearly a statistical certainty across a wide variety of situations. And yet we still "blame" individuals? We should know better.
> So you're saying the Head of AI at Google, Jeff, can't choose a better venue?
Phrasing it this way isn't useful. Talking about choice in the abstract doesn't help with a game-theoretic analysis. You need costs and benefits too.
There are many people who face something like a prisoner's dilemma (on Twitter, for example). We could assess the cost-benefit of a particular person leaving Twitter. We could even judge them according to some standards (ethical, rational, and so on). But why bother?...
...Think about major collective action failures. How often are they the result of just one person's decisions? How does "blaming" or "judging" an individual help make a situation better? That effort spent on blaming could be better spent elsewhere, such as understanding the system and finding leverage points.
There are cases where blaming/guilt can help, but only in the prospective sense: if a person knows they will be blamed and face consequences for an action, it will make that action more costly. This might be enough to deter that decision. But do you think this applies in the context of the "do I leave Twitter?" decision? I'd say very little, if at all.
Yes, but the game matrix is not that simple. There's a whole gamut of possible actions between defect and sleep with Elon.
Cross-posting to a Mastodon account is not that hard.
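To back up the "not that hard" claim: Mastodon exposes a simple REST endpoint for posting, `POST /api/v1/statuses`. Here's a minimal sketch in Python that builds (but doesn't send) such a request; the instance name and token are placeholders, and a real cross-poster would also need error handling and rate limiting.

```python
import json
import urllib.request

def build_mastodon_post(instance: str, token: str, text: str) -> urllib.request.Request:
    """Build (but don't send) a request to Mastodon's statuses endpoint."""
    payload = json.dumps({"status": text}).encode("utf-8")
    return urllib.request.Request(
        url=f"https://{instance}/api/v1/statuses",
        data=payload,
        headers={
            "Authorization": f"Bearer {token}",  # token is a placeholder
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = build_mastodon_post("mastodon.social", "YOUR_ACCESS_TOKEN", "Hello from the cross-poster")
# Actually sending it is one call away: urllib.request.urlopen(req)
```

Hooking this up after each tweet is the whole "cross-posting" story, which is why it's hard to read the venue choice as forced.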
I look at this from two viewpoints. One is that it's good that he spends most of his time and energy doing research/management and not getting bogged down in culture war stuff. The other is that those who have all this power ought to wield it a tiny tiny bit more responsibly. (IMHO the social influence of the elites/leaders/cool-kids is also among those leverage points you speak of.)
Also, I'm not blaming him. I don't think it's morally wrong to use X. (I think it's mentally harmful, but X is not unique in this. Though the character limit does select for "no u" type messages.) I'm at best cynically musing about the claimed helplessness of Jeff Dean with regards to finding a forum.
I think this medium already proved that it doesn't work.
A new social media model needs to emerge - we shouldn't try to rebuild the same old with a fresh coat of paint.
Bluesky is the first time users have control over their social networking, and that's exciting & so so different. There are 3rd-party feeds, algorithms, moderation, & labels that are all open-ended tools we've only just begun to put into people's hands to try out.
The data architecture starts with a personal cryptographic data store, which can hold endless different types of data. So for example you can have SmokeSignal events in your feed, or a Reddit-like FrontPage in your feed. https://bsky.app/profile/smokesignal.events
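To make "endless different types of data" concrete, here's roughly what records in one of these personal data stores look like. The post record follows the shape of the public `app.bsky.feed.post` lexicon; the event record's collection name is an assumed, illustrative NSID, not a documented one.

```python
# A repo is keyed by (collection NSID, record key); each record declares its $type.
# The post mirrors the app.bsky.feed.post lexicon shape.
post_record = {
    "$type": "app.bsky.feed.post",
    "text": "hello from my own data store",
    "createdAt": "2024-11-15T12:00:00Z",
}

# A hypothetical event record from another app (e.g. SmokeSignal) can live
# in the same repo under its own collection - the store is type-agnostic.
event_record = {
    "$type": "events.smokesignal.calendar.event",  # assumed NSID, for illustration
    "name": "Protocol meetup",
    "startsAt": "2024-12-01T18:00:00Z",
}

repo = {
    ("app.bsky.feed.post", "3kabc123"): post_record,
    ("events.smokesignal.calendar.event", "3kdef456"): event_record,
}
```

Any app that understands a collection's lexicon can read and write its records in your store, which is how events and Reddit-style frontpages end up alongside ordinary posts.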
And it's all open data. So we can analyze the network & try to spot & root out large influence campaigns, and find patterns of behavior that bad actors, and sometimes state-sponsored actors, are pushing on the network.
There's so many open possibilities here. I can see how on the surface this looks like "the same old with a fresh coat of paint," but the techies are moving here en masse because we believe this is much more fertile ground, with so much more potential & possibility. We believe it will be a place where we can put in energy & keep iterating & improving social networking.
And the team has very explicitly designed the system to assume that someday Bluesky may become the problem. It's not fully tested at scale, but we have the technical ability to walk away & keep the social network going if the Bluesky corporation ever loses our trust.
It's sad that there's such a fatalistic, nihilistic air about, and that judgements are called so quickly. It's still early days for AT Protocol/Bluesky, but we already see the protocols-not-platforms decisions enabling experiences far beyond what the Bluesky corporation alone could ever hope to provide. That ever-richening interconnection & potential is incredibly alluring, & a chance unlike any we've seen in a generation.
How does any of this address fake news, bots, or inflammatory discourse?
It feels like a lot of technical jargon that neither improves nor worsens the platform compared to what already exists.
Clickbait will remain immensely popular, regardless of whether you implement a fantastic cryptographic datastore or not.
Twitter has a very active community of folks doing disinformation research & working to uncover bots, state actors, and propaganda. Since the data is all public on Bluesky, and since you can't really delete old data (you can add a new record asking to please ignore previous data), I think it will be vastly easier for good civic works like this to resume & bring some sunlight in. We've had to trust the other social networks to do whatever job they cared to do for a while, and this alone brings me great hope.
There's absolutely for sure social challenges with clickbait. I'd love to see labelers spring up which users can opt into for some forewarning: labeling posts as clickbait, misleading, not-supported-by-their-links, et cetera.
Bluesky already has great defenses against some of the worst forms of inflammatory discourse. If someone reskeets you only to dog-pile on you, you can remove your inner skeet from theirs. Your blocks can remove previous interactions you've had from your feeds. When people come in to flame you, there's a much more mature set of tools on Bluesky to handle it.
So hosting your data (hosting a PDS) is cheap and easy at the moment, and you can get other people's data. Other folks have their own Relays up already, proving we can combine data in a decentralized way. There are active efforts to DIY the third part, the AppView, atop these Relays, but it's computationally expensive (aggregating likes, synthesizing timelines across many sources) & still a WIP. Nothing stops that; it's just not easy.
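To make the "computationally expensive" part concrete, here's a toy version of one AppView job: folding like records streamed from many independent repos into per-post counts. The record shapes are simplified stand-ins for `app.bsky.feed.like` records, not the real firehose wire format.

```python
from collections import Counter

# Simplified stand-ins for like records arriving from many different PDSes.
firehose = [
    {"repo": "did:plc:alice", "$type": "app.bsky.feed.like", "subject": "at://bob/post/2"},
    {"repo": "did:plc:carol", "$type": "app.bsky.feed.like", "subject": "at://bob/post/2"},
    {"repo": "did:plc:dave",  "$type": "app.bsky.feed.like", "subject": "at://alice/post/1"},
]

def aggregate_likes(records):
    """Count likes per post URI - the kind of global index only an AppView holds."""
    return Counter(
        r["subject"] for r in records if r["$type"] == "app.bsky.feed.like"
    )

counts = aggregate_likes(firehose)
```

No single PDS knows the total like count for a post; only something consuming the whole network's output can compute it, which is why the AppView is the hard, expensive layer to DIY.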
Enjoying your link. Pfrazee is such a straight shooter. His post doesn't seem to support the idea that this is centralized. And the whole team keeps saying things like "the network should outlast the company"; everyone has been very vocal about that. https://bsky.app/profile/pfrazee.com/post/3lau2bgyolc2g
> Bluesky is the first time users have control over their social networking, and that's exciting & so so different.
Are you aware that Bluesky is censoring posts algorithmically? X is full of people posting videos of themselves starting a brand-new account with zero followers, posting something the left doesn't like (e.g. "there are only two genders"), and having the account instantly suspended.
It's unclear how this is compatible with the idea of user-driven decentralized networks or moderation. Maybe they'll implement that one day but by then it'll be too late. Everyone who might benefit from user-driven decentralized labelling will have simply given up and gone back to X years earlier.
It's funny to me that you say this. Because this is part of why the Xodus is happening, why Bluesky is so loved.
People interested primarily in intentionally shitty inflammatory posting had taken over X. We are all so excited not to be surrounded by antagonistic bluecheck trolls being top-ranked & algorithmically boosted.
I have a hard time believing there's a problem here. This seems ideal. You could host your own PDS very easily if you still wanted to be part of this, and I expect it would then be up to moderation lists & labelers to handle this retrograde junk. The protocols exist & are available to all. Thankfully, gratis hosting on Bluesky isn't unconditional, either.
There's a big gap between "the first time users have control" and centralized algorithmic bans being "why Bluesky is so loved". Users really have no control if things are being banned by a company before anyone sees them.
The setup you describe isn't first-time users. It's malfeasants recording themselves being antagonistic pricks for no reason, or doing so so they can make frivolous, lame grumbles about a social network that isn't playing that shitty game with them. Good riddance & thank gods.
tbf, I don't have a vision for it. I'm just saying that a new model is needed because we've seen how the current model fares around bots, inflammatory content, and fake news.
These are all really hard problems..!