Some of the most interesting excerpts (although it's worth reading in its entirety):
> My path in technology started at Facebook where I was the first Director of Monetization. [...] we sought to mine as much attention as humanly possible and turn into historically unprecedented profits. We took a page from Big Tobacco’s playbook, working to make our offering addictive at the outset.
> Tobacco companies [...] added sugar and menthol to cigarettes so you could hold the smoke in your lungs for longer periods. At Facebook, we added status updates, photo tagging, and likes, which made status and reputation primary and laid the groundwork for a teenage mental health crisis.
> Allowing for misinformation, conspiracy theories, and fake news to flourish were like Big Tobacco’s bronchodilators, which allowed the cigarette smoke to cover more surface area of the lungs.
> Tobacco companies added ammonia to cigarettes to increase the speed with which nicotine traveled to the brain. Extreme, incendiary content—think shocking images, graphic videos, and headlines that incite outrage—sowed tribalism and division. And this result has been unprecedented engagement -- and profits. Facebook’s ability to deliver this incendiary content to the right person, at the right time, in the exact right way... that is their ammonia.
> The algorithm maximizes your attention by hitting you repeatedly with content that triggers your strongest emotions — it aims to provoke, shock, and enrage. All the while, the technology is getting smarter and better at provoking a response from you. [...] This is not by accident. It’s an algorithmically optimized playbook to maximize user attention -- and profits.
> When it comes to misinformation, these companies hide behind the First Amendment and say they stand for free speech. At the same time, their algorithms continually choose whose voice is actually heard. In truth, it is not free speech they revere. Instead, Facebook and their cohorts worship at the altar of engagement and cast all other concerns aside, raising the voices of division, anger, hate and misinformation to drown out the voices of truth, justice, morality, and peace.
This bit of dialog should be the smoking gun, in my opinion. Big Tobacco got taken to the woodshed over this very thing: making the product as addictive as possible. This should be the club used to beat social media platforms over the head. As with Big Tobacco, I'm sure it rings true for social platforms as well: not just one of them is doing it, they all are.
One problem with this is that it's easy to conflate "addictive" with "people like to use it". Should television shows have been punished for cliffhangers because they hook people into seeing the next episode? Breaking Bad had an interesting plot and character progression that made me want to keep watching - are they addicting me?
One person might say "We created all these statuses and features to be addictive" but it seems just as true to say "We created this stuff because people liked it and we are trying to make something people like."
> Should television shows have been punished for cliffhangers because they hook people into seeing the next episode?
Does this significantly negatively impact the lives of viewers or of those around them? Addiction doesn't just mean "want to have it". Addiction means "want to have it so bad it messes up other aspects of my life".
(For what it's worth, I do personally avoid cliff-hanger shows because I find the anxiety and frustration of being left hanging is rarely sufficiently well compensated by the quality of the show.)
Doesn't addiction mean "compulsive consumption after habituation," as in the original enjoyment has worn off, but if you stop doing it you will experience a hangover?
I figured out that trick recently too! It worked really well for "Devs". Every episode of a cliff-hanger-y show tends to have a point 1/3 of the way through where they have resolved the previous episode's tension but not yet started the next one. I stop right there.
Facebook is definitely addictive. It took me more than a year of trials to be able to break the cycle and get off the platform. The thing is, it gets harder over time to take a pause from it.
But now that it has been one month since I last used it, and I noticed that all I did was replace my Facebook time with Hacker News, I can't help but wonder: does the addiction problem lie with the user, or in the platform? Or is it, more generally, in the way the internet serves us content?
Well, many times the thing one is addicted to is used to try to help manage negative feelings in other areas of life. So maybe it’s time to take stock of that?
Honest question though, if we were to dig into “brain disorder”, how well defined is that really? Is there a measurable effect on a human brain when using an “addictive” product vs. one we are coming back to because we like to use it?
The dividing line over whether an addiction or behaviour is "bad" is whether or not it negatively affects your life and/or the lives of those around you.
At least with tobacco there’s good evidence of chemical dependency on nicotine. That IMO is what distinguishes that from whatever moral panic people are into at the moment.
That's a really good analogy for algorithmic tweaks to your timeline to promote engagement. Replace incendiary content with drugs, and community/enriching content with... well, community and enriching activities... and it really does look like fb is doing its best to replicate the "empty cage with a bottle of heroin" environment on your phone.
> Should television shows have been punished for cliffhangers because they hook people into seeing the next episode? Breaking Bad had an interesting plot and character progression that made me want to keep watching - are they addicting me?
Honestly, this is a super interesting question. I would say anything designed to succeed by hijacking human brain chemistry instead of providing superior or novel quality is probably worth regulating at some level.
From that standpoint, Breaking Bad would not have an issue - it's superior and novel. Shows that succeed in making a viewer binge with a combination of (effectively) mid-episode endings and autoplay are somewhat hacky. You can't regulate cliffhanger endings, so autoplay should probably not be legal - Netflix already asks you if you want to continue watching; they should simply do so after every episode. Shows with good content like Breaking Bad would still be easy to binge (just press yes once an hour), and poor-quality shows would have a harder time taking advantage of brain chemistry by requiring an affirmative act.
>"I would say anything designed to succeed by hijacking human brain chemistry instead of providing superior or novel quality is probably worth regulating at some level."
My point is that there is no real dichotomy, 'Breaking Bad' and menthol cigarettes are not so different; they each possess both qualities.
I'm not a smoker, but adding a nice flavor to a (previously unflavored) cigarette seems to be 'superior and novel'.
You originally posted that:
>"Advertising is an act of malice, particularly with addictive products."
But changed it to:
>"Manipulative advertising is an act of malice, particularly with addictive products."
What do you see as the difference between "manipulative advertising" and regular "advertising", and how is either (or both) malicious? Advertising is basically telling people that you are offering them something, and trying to persuade them to buy/use it, and I am not sure how that is "characterized by unscrupulous control of a situation or person."
You are talking to two different people, and the streams are kind of crossing.
I agree adding a flavor can be superior and novel, but if you read what I originally wrote it was specifically worded to make the addictive quality the overriding concern. Menthol wasn't more addictive because of the flavor, it was addictive because it allowed the user to get more nicotine per hit.
TeMPOraL was making the case that menthol was not superior and novel, and I was contesting that specific assertion.
That aside, if you consider the addictive quality to be the overriding concern, and believe that "Breaking Bad" possesses (some of) it, then why doesn't BB's addictiveness override its superiority and novelty?
My read of your original comment was that Breaking Bad was at least pseudo-addictive. If you don't think BB is addictive at all, there was no "super interesting question."
> You originally posted that: (...) But changed it to: (...)
Yes, because I wanted to narrow down my originally too-broad statement before picking on the generalization could derail the subthread (as sometimes happens on HN).
> What do you see as the difference between "manipulative advertising" and regular "advertising", and how is either (or both) malicious?
So you've abandoned the claim that menthol cigarettes were not superior and novel?
With respect to your discussion of advertising, as someone who has used various forms of marketing to promote products, I think advertising is much less effective than you seem to believe. Second, you say that informing is okay but convincing is bad; the problem is that almost all 'informing' is an attempt to convince. Those points aside, I understand that you find certain advertising patterns unethical or distasteful, but I am not sure exactly how to draw the lines; your post seems to be a polemic rather than an ethical framework, so it expresses your feelings but doesn't explain your thinking to me.
I agree. It's difficult to determine where the line is drawn. I think the primary considerations should be the methods used and the potential harm caused.
Yes, TV shows can be made to be "addicting" but what is the potential harm? Someone sits around watching too much TV? That's not a very big drain on society at the end of the day. Sure it's not great, but the negative outcomes for the society as a whole don't seem to be too impactful.
Now look at gambling. It's certainly addictive because of various techniques used by casinos to get people hooked. It seems that much of society agrees that it also has some negative impact on society as a whole. It drags people into impossible debts which can have a variety of negative externalities... loan sharks, violence, evaporation of wealth, financial crimes, etc.
It seems clear to me that not only is social media addictive but it is also having a net-negative impact on society and that is why people are concerned. If the impact was just people are spending their evenings glued to the screen but not going out and causing societal issues then I don't think anyone would be too concerned.
They do have embedded product placements. That's more sinister than ads with "sponsored" labels, because there's no indication to users that it's an ad. Also, someone who watches Netflix 4+ hours a day is probably much more likely to be hooked and unable to cancel their subscription than someone who watches 30 mins a day.
And when it's done for the day, it's done. You can't turn on your TV every 7 minutes to check for new General Hospital, and even if you binged 57 years of episodes, you would in time exhaust the content.
That's true if you think about being addicted to a specific TV show. If you think about being addicted to TV in general it's not true. More than 1 hour of television content is produced every hour (I assume).
Tobacco use as a percentage of the population has consistently declined by about 0.5% per year since data started to be gathered in the 1960s [0].
The Master Settlement Agreement in 1998 [1] had no statistical impact on the rate reduction of smoking - the rate of decline of smokers is the same now as it was in 1965.
The tobacco industry is more profitable than ever, and they are diversifying into nicotine delivery vehicles like vapes and gum [2]. So the underlying goal - increasing nicotine dependence across the global population and capturing the nicotine consumption market - is still going strong.
Much like the desire to be intoxicated, the desire to influence people will never go away. It's baked into our biology. Everyone in this thread interacting with each other is trying to influence everyone else. Facebook etc... is just doing successfully what Bernays dreamed of.
You can beat these platforms all you want - just like the tobacco industry was beat. The problems will just surface elsewhere in a different form.
Attack the root issue - ban advertising. oh and do it in a way that allows for "free speech." The challenge of the century.
I tend to agree with this line of thinking, but I wonder if banning advertising won't have similar difficulties. There will be more sneaky product placement, and anonymous donations to podcasters who tend to promote certain products/beliefs/etc.
I say this not because I think we should just give up and not ban advertising but because I'm curious how it might be done effectively.
In order for the decline in smoking to remain linear, you must convince people who are increasingly less likely to quit. A consistent 0.5% decline is (weak) evidence for the effectiveness of efforts to combat smoking, not the opposite.
I don't know if that impacts your larger point with regards to nicotine addiction in general, but I think it's worth noting.
It's an interesting point. I would tend to agree with that in the sense that it's a log graph of "difficulty" required.
However, I'm not sure how that would be supported without assuming there is some base rate that would smoke no matter what - as though smoking specifically were a natural inclination - with everyone above the base rate on some log distribution of "ability to be convinced to stop smoking."
I disagree somewhat. The addiction argument is merely an extreme.
Suppose someone offered to mow your lawn for free. Great offer, so you take them up on it. Turns out they're also using the access you give them to mine gold you didn't know was in your backyard. Whether or not you were addicted to their mowing services is irrelevant, they're stealing from you.
The problem with Facebook is that they're taking your attention and monetizing it. There's no serious argument against requiring them to disclose their actions - particularly who is buying your attention. It doesn't make any difference if you're addicted or a mere user of their product, they're still using your attention without telling you. They simply know more about science.
I think it’s more than just making it addictive. It’s making choices that make the product more harmful in order to make it more addictive. Even that bar, though, hasn’t triggered action against food producers for sugaring things up. I think there also must be a critical mass of cultural anger.
They really didn't; cigarettes were allowed to flourish for decades and legal action was only taken once their popularity started to wane. Don't expect any meaningful action from your govt
Sure, Big Tobacco is still a thing. However, their actions of targeting kids/teens were curtailed. The money they were forced to spend on ad campaigns informing teens of "smoking is bad" appears to have worked. Current reports show smoking in teens has dropped significantly, including current downward trends in vaping as well.
The biggest thing you could do to hurt the likes of FB, IG, and Twitter would be to brand them as lame and uncool. If people don't want to use it, that affects their bottom line. Gov't action isn't required for this, but the right campaign attacking the cool factor will motivate people away from it. (I'm currently wearing my positive-thinking cap.)
Re: the comments about incendiary content and maximizing attention.
This is what every news outlet tries to do. The only difference is that FB is better at it. It reminds me of the controversy about targeting ads toward protected categories (age, gender). This is something all media buyers do as well, based on location, event type, etc. - FB just has a better way.
I'm not saying it's right, or necessarily wrong, just that this seems to be more about them being good at something than about them operating in moral territory that is different from any other business.
Many news outlets try to do this, but not all of them. There are some that strive to be fair and prioritize informing rather than inflaming their audience. The problem is that there is more money in the latter and many investors are greedy.
The guy is complaining about incendiary content whilst repeatedly comparing Facebook to "Big Tobacco"...I think there is a lot of bombastic nonsense being thrown about.
And I agree Facebook is not the first company in the world to maximise attention with this kind of content. Go back to when political pamphlets started appearing in the 16th century, it was mostly salacious bullshit about well-known public figures being possessed by the devil or drinking the blood of orphans.
I am not even sure what the problem is anymore, let alone what the solution is...but this is not going to stop with Facebook, this is just a reflection of human nature (and yes, everyone has complained about this kind of "content", it ignores the fact that most humans enjoy consuming it).
(I think the most problematic part of Facebook is just that so many people get their news from there and, like every human that has ever existed, they have been unable to deal with that responsibility in an even-handed way...I don't know though. They are basically a dead platform anyway, it is mainly used by old people to keep up to-date with their grandchildren afaik...I don't really know anyone who uses it, and I have never used it myself).
> They are basically a dead platform anyway, it is mainly used by old people to keep up to-date with their grandchildren afaik...I don't really know anyone who uses it, and I have never used it myself
This is terribly myopic; you don't have to like FB or want to use it to recognize its influence. Consider the possibility that you just haven't really wrapped your head around it yet. Also, I'm gonna guess you don't know a whole lot of older people, and may be falling into the cognitive trap of thinking your experience of social demographics is reflective of the population at large.
Yep, all I mentioned is that I didn't use FB and you leapt on that instantly (mention that you don't use FB, and people will think they know everything about your life).
Your guess is incorrect (I love that you have considered all the things I don't know whilst jumping to random conclusions). When I said "I don't really know anyone who uses it," I meant I don't know anyone under the age of 35 who uses the platform with regularity. (Remember, I said it was dead, not that no one used it - they have 3bn MAU. People use it, but my point is that people don't use FB in the way often assumed by politicians, who btw mainly see FB as a way to target voters; the political use of FB peaked with Obama.)
"Consider the possibility that..." and "I'm gonna guess" is not thinking that I "know everything about your life" or jumping to conclusions. Chill out and reread your comment in a while.
I used to smoke, and I also have (very mild) asthma that was diagnosed prior to my starting to smoke. I always said that I could breathe better after a cigarette, and people would laugh at me. It never occurred to me that of the thousands of chemicals in a cigarette, some of them might be geared specifically to "help" you take in more smoke, and by extension, more air after.
> misinformation, conspiracy theories, and fake news
It's amazing to see people casually use these words as if they still have universally meaningful definitions. Not anymore. What one half of the country considers misinformation another half of the country considers the truth. Not to mention that social media operates internationally.
You can't have a meaningful discussion without admitting this and doing something to escape the semantic trap of perfect ambiguity. In other words, you first need to establish some sort of information processing principle that is unambiguously defined and that everyone (or at least the vast majority of people) agrees with.
> In 2016, internal analysis at Facebook found 64% of all extremist group joins were due to their own recommendation tools. Yet repeated attempts to counteract this problem were ignored or shut down.
That's pretty damning. Facebook execs knew that extremist groups were using their platform and Facebook's own tooling catalyzed their growth, and yet they did nothing about it.
On the surface it sounds pretty outrageous. My question would be though, what should Facebook do instead?
A recommendation engine is just an algorithm to maximize an objective function. That objective function is to match users with content that they enjoy and engage with. The algorithm has no built-in notion of political extremism. It almost assuredly seems to be the case that people with radical opinions prefer to consume media that matches their views. If Bob is a Three Percenter, it's highly unlikely he'd prefer to read the latest center-left think piece from The Atlantic.
Unless you're willing to ban recommendation engines entirely, the only possible alternative I can see is for Facebook to intentionally tip the scales. Extremist political opinions would have to be explicitly penalized in the objective function.
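To make the "penalized in the objective function" idea concrete, here is a toy sketch (not Facebook's actual system - every name, signal, and number below is hypothetical): a ranker that scores items by predicted engagement alone, versus one whose objective subtracts a weighted "extremism" signal.

```python
# Toy item pool: (item_id, predicted_engagement, predicted_extremism).
# Both signals would come from models in a real system; here they are
# hard-coded, illustrative numbers.
ITEMS = [
    ("local_news",    0.40, 0.05),
    ("outrage_group", 0.90, 0.80),  # high engagement, high extremism
    ("hobby_group",   0.60, 0.02),
]

def rank(items, extremism_penalty=0.0):
    """Order items by engagement minus a weighted extremism penalty."""
    def score(item):
        _, engagement, extremism = item
        return engagement - extremism_penalty * extremism
    return [item_id for item_id, _, _ in sorted(items, key=score, reverse=True)]

# Pure engagement maximization surfaces the incendiary group first:
print(rank(ITEMS))                         # ['outrage_group', 'hobby_group', 'local_news']
# A nonzero penalty demotes it - the "tipping the scales" step:
print(rank(ITEMS, extremism_penalty=1.0))  # ['hobby_group', 'local_news', 'outrage_group']
```

The point of the sketch is that the demotion only happens because a human chose the penalty weight and chose what counts as "extremism" - which is exactly the editorial role discussed below.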
But now you've turned Facebook from a neutral platform into an explicit arbiter of political opinion. It means some humans at Facebook are intentionally deciding what people should and should not read, watch and listen to. Remember Facebook as an organization is not terribly representative of the country as a whole. Fewer than 5% of Facebook employees vote Republican, compared to 50% of the country. Virtually no one is over 50. Males are over-represented relative to females. Blacks and hispanics are heavily under-represented. And that doesn't even get into international markets, where the Facebook org is even less representative.
The cure sounds worse than the disease. I really think it's a bad idea to pressure Facebook into the game of explicitly picking political winners and losers. A social media platform powerful enough to give you everything you want is strong enough to destroy everything you value.
> My question would be though, what should Facebook do instead?
What should Big Tobacco do? If your business is a net negative for the world... get out of business. This is not hard. Corporations are not precious endangered species that we have some moral obligation to keep alive.
> A recommendation engine is just an algorithm to maximize an objective function.
A cigarette is just dried leaves wrapped in paper. If the use and production of that devices harms the world, stop using and producing it.
> But now you've turned Facebook from a neutral platform into an explicit arbiter of political opinion.
Facebook is already a non-neutral platform. Humans at Facebook chose to use an algorithm to decide recommendations and chose which datasets to use to train that algorithm.
Playing Russian roulette and pointing the gun at someone else before pulling the trigger does not absolve you of responsibility. Sure, the revolver randomly decided which chamber to stop at, but you chose to play Russian roulette with it.
The difference is that it's unquestionable that cigarettes are enormously harmful. To claim that the case against social media is anywhere near as clear-cut as tobacco is to do a disservice to the heroic public health efforts it took to cut down on smoking.
With social media, anecdotal accusations abound of negative impacts on mental health or political polarization. Yet the most carefully conducted research shows no evidence that either[1][2] of these charges is true to any meaningful degree. Simply put, the academic evidence is not congruent with the journalistic outrage.
What's more likely is the panic over social media is mirroring previous generations' moral panic over new forms of media. When the literary novel first gained popularity, social guardians in the older generation worried that it would corrupt the youth.[3]
The same story played out with movies, rock music, video games, and porn among other things. The dynamic is propelled by old media having a vested interest in whipping up a frenzy against its new media competitors. In almost every case the concerns proved unfounded or overblown. I'd be pretty surprised if social media proved the exception, when we've always seen the same story again and again.
> The difference is that it's unquestionable that cigarettes are enormously harmful.
It was certainly questioned for many decades before we got to that point. Meanwhile, millions died. And during that entire time Big Tobacco had no difficulty drumming up doctors and scientists willing to argue against the negative health consequences of smoking.
Precisely this. Many people denied the idea that smoking was unhealthy. It sounds hard to believe, but I personally know many people who said these things to me in the 1990s.
Rejection of science in favor of something you personally want to be true isn’t a new internet age development.
The difference between social media and all the other media you mentioned isn't the format (still mostly just images, text, and video like the old media) or in its content (Sturgeon's Law is universal); it's in the ability to disseminate messages to a global audience instantaneously, and the careful curation of that content to drive engagement.
The clear result of this algorithm has been to happily send lies, misinformation, emotionally manipulative opinions, and other content at a scale and speed that was never achieved by a New York Times bestseller, MTV, or Rockstar Games.
All media has always exploited our cognitive biases and irrationality to its end; but to do it worldwide and simultaneously, 24 hours a day, 7 days a week, without rest or remorse, is pure stochastic terror.
It's the linking or sharing manually that matters. Chronological order doesn't. If a hundred things are shared with you, who cares what order they're shown? If you want to have any real effect, you have to change the distribution patterns. Change what's in the list, not in what order. (Note: recommendation systems are a whole different matter. I'm talking about what's left after they're taken out of the picture.)
Limit shares/retweets. Limit groups sizes. Surface more information as topics/tags/whatever so that users can do more sophisticated filtering themselves or collaboratively. I want to mute my uncle when he talks about politics, not all the time. Facebook already does more sophisticated analyses than just extracting topic information (I know because I work there and I can see the pipelines). Show those results to me so I can use them as I see fit. That's how you make things better. Chronological vs. algorithmic order? Pfft. In fact, I do want the most interesting things out of that set shown first. I just want to have more control over what's in the set.
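The "mute my uncle when he talks about politics" idea can be sketched in a few lines, assuming posts carry extracted topic tags like the pipelines described above would produce. All names and data here are made up for illustration:

```python
# User-supplied mute rules as (author, topic) pairs - the user, not the
# platform's ranker, decides what leaves the set.
MUTE_RULES = {("uncle", "politics")}

posts = [
    {"author": "uncle",  "topics": {"politics"}, "text": "Vote for..."},
    {"author": "uncle",  "topics": {"fishing"},  "text": "Caught a bass!"},
    {"author": "friend", "topics": {"politics"}, "text": "New poll out"},
]

def visible(post, rules=MUTE_RULES):
    """Keep a post unless one of its (author, topic) pairs is muted."""
    return not any((post["author"], topic) in rules for topic in post["topics"])

feed = [p["text"] for p in posts if visible(p)]
print(feed)  # ['Caught a bass!', 'New poll out']
```

Note that the uncle's fishing post and the friend's politics post both survive; only the specific combination is muted, which is exactly the finer-grained control being asked for.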
Sorting something to the 1000th page is censorship. Legally, probably not - it's still available, you just need to hit page-down 1000 times - but IANAL and don't care.
I don't want algorithms to do any filtering. If someone shares crap every 10 minutes I can always unfollow. Still, I like your idea about manual filtering with tags.
Social platform recommendation engines are designed to optimize "revealed preference." I've commented in the past that "revealed preference" is just a new name for exploitation of vice.
People's higher goals are often counter to their day-to-day instinctive behaviors. We should find ways to optimize those goals, rather than momentary metrics.
Your original statement can be casually read as disagreeing with the OP's message (that Facebook did not quell extremist groups), due to its structure (a lead statement followed by a delayed contradiction). You could make it clearer by emphasizing that you are saying "agree, and" instead of "no, because".
But doing so would hurt engagement, and hence the bottom line!
Facebook have, perhaps accidentally, created a monster of perverse incentives. Not sure what the solution is, besides regulation (which would be extremely difficult).
I would be very careful about considering any actions that Facebook, in particular, takes to be accidental. From the very beginning, intentional recklessness ("Move fast and break things") has been their credo.
When you're being reckless on purpose, none of the damage you create is accidental.
> Not sure what the solution is, besides regulation (which would be extremely difficult).
The solution is only difficult if you start from the basis that Facebook must continue to exist. If they cannot run a profitable business that isn’t harmful, that’s noone’s problem but theirs.
I would love to see Facebook labeled as a domestic terrorist group for how much they have aided and abetted, and sanctioned accordingly. Make it illegal for companies to do business with them. You don't have to shut them down, but if you make it so they can't earn enough to survive, then oh well. Sorry, not sorry, your business model wasn't a good one.
How did they define "extremist" in that analysis? And how many total people are we talking about?
Seems like the relevance of that line really depends on answers to both. I.e., if extremist is super narrow we may be talking about 64 people out of 100. If extremist is overly broad, then maybe all the recommendations were for groups that a majority of the population would not find offensive.
Just saying the line by itself without context doesn't convey as much information as it first appears.
Sadly unsurprising. I have Facebook sockpuppet accounts that I use just for researching extremist types and I am constantly amazed at how much of the work FB does for me.
According to Tim Kendall's LinkedIn, he stopped working at Facebook in 2010. So it's interesting that he claims to have internal information from 2016.
Maybe, but it's pretty common for people to keep up with former coworkers and happenings in the company, especially since he was an early employee with (I'm assuming) a fair amount of equity.
I could see an employee giving him that data out of concern, but that's a fair point.
It is really not that surprising. People talk. When you work for a company like this you end up with a large portion of your circle of friends being current or former co-workers. I knew things there were not NDA-cleared for years after I left various startups because people chat and if you know the right questions to ask or are reasonably good at appearing to know just a bit more than you happen to know then people will often fill in the blanks for you. The well really only runs dry when most of that cohort have also left the company.
Why would anyone think capitalists can actually practice morality? That has never been done in the hundreds of years of capitalism's history.
And capitalists can be quite moral personally. Across history, the rich and powerful have always cultivated a positive image. But their enterprises have always required regulation.
I don't like Facebook, but why would they configure a recommendation engine to stop suggesting extremist groups to people with extremist affinities? There are no humans behind those wheels.
What else besides outright banning should they have done? (I think banning extremists wouldn't have impacted their revenues much, so they should have, but that's another debate.)
> I don't like Facebook but why would they configure a recommendation engine to stop suggesting extremist groups to people with extremist affinities
That's almost certainly not what they did. When you see someone ranting about the 5G turning the coronavirus communist or whatever, that person didn't generally come up with that idea themselves; they were exposed to it online, either via friends, or via this.
Their algorithm is likely pushing extremist nonsense on people it determines are vulnerable to believing it, which isn't the same as having an affinity for it. Obviously this isn't what they set out to do; they presumably set out to increase engagement, and if that happens to increase engagement, well...
There are very similar issues with YouTube's "Rabbit Hole of Extremism" [1]. YT's algorithm has noticed that "mild" extremist content gets views, and by feeding you progressively more extreme content in sequence, people get sucked in. I expect this wasn't even planned; it emerged on its own.
It got my father. Living in a rural area, cable/satellite TV became too expensive and low-quality, so us kids paid for an internet connection for him. With only YouTube to inform him, he went from a generally relaxed redneck to talking about how the "black community is a lost cause" and how "we need to glass (nuke) the Middle East and take their oil" in a very short time.
We got Netflix for him and he's calmed back down some. But, definitely not back to where he was before.
I've seen this too and it's really worrying. I don't understand what can be done about that. A legal solution seems difficult and will probably have some negative side effects. I think people just need to slowly learn how dangerous it is to fall into these traps.
There is no doubt that there's a lot wrong with social media, such as the spread of fake information, privacy violations, etc...
Maybe they should have some kind of regulation specific to them.
But I fail to see how making your product as addictive as you can, without breaking laws, is terrible. I mean, no one is forced to create a FB/TW/IG profile, as far as I know.
I'm not defending social networks, or saying that a case against them should not be made; I'm just saying that I can't get behind the "your product is too addictive" argument.
Just my two cents. Maybe I'm missing something right now that will force me to change my mind later.
>But I fail to see how making your product as addictive as you can, without breaking laws, is terrible
This is an interesting take. Usually I suspect people would say something more like "Making your product as addictive as possible is terrible, but definitely not illegal. And, it's difficult to design laws against something that is addictive and destructive."
I think it's pretty clear that "making your product as addictive as you can" is absolutely terrible. Again, I'm not sure that regulation can solve this problem in a constructive way, (and would love to be proven wrong here) but I fail to see how this isn't bad.
No one is forced to become obese, however it's definitely bad to have a nation full of obese people.
>I think it's pretty clear that "making your product as addictive as you can" is absolutely terrible
Why? Honest question. For instance, you mentioned obesity. Should a restaurant that makes the most delicious and sugar loaded food be forbidden to do so because its customers can't stop eating it and are getting obese?
IMO obesity is an individual problem. I'm all for helping obese people who want to change, don't get me wrong. I'm just saying that they got themselves into that situation. The restaurant should not be punished for their clients' lack of control. They should, however, be forced to let clients know exactly what they're eating, but after that, it's not their fault.
I think it’s probably important to define what “addictive” means with respect to social media. If it’s literally addictive in the same way opiates are (obviously to a lesser extent), e.g. the user cannot feasibly control the urge to continue consuming, then I think it’s very easy to agree it’s Bad and Wrong for the business owners to invest in making their product more addictive.
To your example if McDonalds added cocaine to their fries, we would likely agree that that’s wrong and we should stop that behavior, right?
If it’s more along the lines of addiction like “people love fast food” but aren’t actually physically addicted to it, then I think it’s fine that the business owners make it more delicious or “more addictive”. In that case I’d agree it’s likely on the consumer to make the call. (I’m going to gloss over the realities of the fast food industry preying on lower economic communities and pretend we’re operating in a vacuum where someone has equal agency/ability to go eat McD’s or eat a healthier alternative.)
I can see where you're coming from, but just like with opiates, you start using them by choice, and you are aware of the risks, as most people should be when it comes to social networks. I don't know if they are, but if not, companies should be forced to clearly state the risks of getting addicted.
As for your McDonalds argument, cocaine is illegal. I stated that as long as it was within the law, I saw no problem.
I’d be interested to see the number of people who know the risks of opiates and the people who know the risks of social media. I would guess the latter is a pretty small minority. And for both, I’d venture for the people who do know they’re dangerous there’s a great disparity between how dangerous they think they are and how dangerous they actually are (i.e. they’re worse in reality).
As for the cocaine part, that's immaterial to the thought experiment I proposed. I was just trying to delineate between true physical addiction and whatever makes people want to eat unhealthy food. Say it's something else that causes physical addiction but isn't illegal.
> I’d be interested to see the number of people who know the risks of opiates and the people who know the risks of social media. I would guess the latter is a pretty small minority. And for both, I’d venture for the people who do know they’re dangerous there’s a great disparity between how dangerous they think they are and how dangerous they actually are (i.e. they’re worse in reality).
Agreed, that's why I think companies should be forced to clearly state them, but not forced to stop users from consuming.
Another avenue could be providing proper education to individuals regarding addiction to food, drugs, etc... But this is beyond my scope of understanding.
> As for the cocaine part, that’s immaterial to the thought experiment I proposed. I was just trying to delineate between true physical addiction and whatever makes me people want to eat unhealthy food. Say it’s something else that causes physical addiction but isn’t illegal.
My bad. I didn't get that.
But I still think, assuming they clearly state the risk of physical addiction, they should be allowed to sell their fries.
Now, just to convey this one more time, it's a totally different situation if they use something illegal to make the fries addictive. They should be punished.
>Why? Honest question. For instance, you mentioned obesity. Should a restaurant that makes the most delicious and sugar loaded food be forbidden to do so because its customers can't stop eating it and are getting obese?
I tried to cover this in my post, but this is why I believe it's a bit of an impossible situation. I don't believe that in your example the restaurant should be forbidden from selling the addictive and unhealthy food. But just because something shouldn't be illegal doesn't make it good. The law and morality are not one and the same.
The usual way people talk about this sort of thing is to invoke free speech. I should not be legally prevented from insulting you, or saying rude things to you. But it's still an awful thing for me to do.
Regarding the problem being individual: I agree that's where the blame should rest, but the reality is that moral blame is often not as useful as people want to believe. For example, with obesity, most people are making the 'wrong' decisions. Again, I'm not suggesting that government regulation should be invoked to try to fix this. But surely it's not a good thing that so many people are unhealthy. And therein lies the problem. Who cares about blame? I don't care whose fault it is, but I would like to fix it. It's a near guarantee that the general public will not fix it themselves. It's not even an American problem anymore: you're even seeing obesity in some parts of Africa. When most people have access to high-calorie food most of the time, they will become overweight and obese. You can (maybe even should) assign blame to people for making the wrong decisions here. But that will do nothing to solve the problem.
And, as I said, I'm not necessarily arguing for regulation. But I would be curious if you think there is any solution here, or if you think there should be any solution here.
Ok, I get your point now. Even though the individual is to blame, there's no point in assigning blame, as they won't change. The only solution is to "force" change through regulation. Did I get the gist of what you were trying to convey?
> And, as I said, I'm not necessarily arguing for regulation. But I would be curious if you think there is any solution here, or if you think there should be any solution here.
That's a great point. Off the top of my head I am inclined to say there should not be any solution, besides making sure companies act within the law. But that's above my paygrade. I'm only stress testing my opinion.
>Ok, I get your point now. Even though the individual is to blame, there's no point in doing it as they won't change. The only solution is to "force" change by regulation. Did I get the gist of what you were trying to convey?
Yeah, I think we understand each other. And I appreciate your comments, too. How I feel about this general issue varies depending on the topic.
Perhaps the way to split the difference, in your example/metaphor, is to ban the restaurant from giving the delicious sugary food to the obese rather than banning the food altogether.
I can see that happening, but I don't think a private business should be forced to do it.
Now, I think they should do it, but because they want to. If anyone is to take action, I think the way to go is to reach the obese people and help them. Explain why they should not visit the restaurant anymore.
I understand your point, and there was a time in my life when I would’ve agreed with you I think, but perhaps I’ve become cynical in my age and I question whether there’s sufficient agency in the broader population to achieve that goal.
Near my office in SF there is a guy who sits on the street corner with his pants rolled up so you can see that his calves were pretty much just two big, open, leaking sores as a side effect of so many injections. I bought him some bandages but he wouldn't use them until the end of the day because showing them off got him more sympathy money that he needed in order to purchase more injections. The motivation center of his brain has been completely hijacked by a product. Suffering to death is no longer a concern for him. Only the product matters.
I don't know what physical processes are behind a Facebook addiction, but I doubt it's as serious a condition as that caused by a chemically addictive product. I would equate it more with gambling addiction. Not to say that it's not a problem, but I have a hard time equating the two. That might just be my naiveté, though. I've been lucky enough not to encounter either type of addiction.
The desperation I've seen in addicted gamblers in Las Vegas doesn't seem so different from the despair I've seen from junkies. Both of these are addictions to which some people lose everything.
I could definitely be wrong. I'd heard about people with gambling problems losing everything they own. In the end, perhaps the same parts of the brain are getting triggered.
To me, it's not just that it's addictive that is the problem; it's that the addiction is accelerating the spread of misinformation and allowing national/global hate groups to not only exist but flourish.
Many have suspected it for a long while, but this testimony proves that Facebook profits from hate groups and the spread of misinformation. That's not hyperbole; that's now fact.
It has also accelerated the pace at which good information can spread. What happened to the idea of free-speech and countering bad-ideas with better ones?
Perhaps the real acceleration is in the ballooning expansion of who we consider a "hate-group" -- which seems to have no fixed definition and is thrown around rather cavalierly.
>What happened to the idea of free-speech and countering bad-ideas with better ones?
Go on Twitter or Facebook, or 4chan, 8chan, Voat or wherever you can find these crazies, and try to engage them in rational debate, and convince them their ideas are bad and yours are better. Let us know how that turns out.
There are always going to be crazy people. But what does it matter what they think? What matters is what the average rational adult who is a contributing member of society thinks.
What is the end goal? To make it impossible for crazy people to be heard online? Wouldn't a better goal be to educate ourselves on how to ignore the crazies and focus on reliable sources?
Are you under the impression that the Venn diagram of "rational adults who are contributing members of society" and "crazy people" does not overlap, and that the latter cannot influence the former?
Do you believe QAnon has gone from a 4chan meme to a political movement which has gained the support of the President and seats in Congress because no rational adult or contributing member of society has ever fallen prey to them?
Human beings are not rational animals; human beings are emotional animals. We're great apes hardwired for pareidolia and bigotry because it helped us survive the tall grasses of the savanna a hundred thousand years ago. The assumption you and others like you make, that given a free (as in unregulated) market of ideas, rationality and truth will always win out, is as naive as the belief that ethics and quality always win in free-market capitalism. Bad actors always dominate unless some external regulating force prevents them from doing so.
>What is the end goal? To make it impossible for crazy people to be heard online? Wouldn't a better goal be to educate ourselves on how to ignore the crazies and focus on reliable sources?
False dichotomy - we can do both. It is impossible to effectively educate ourselves or anyone else in an environment in which it is also impossible to distinguish good from bad information, or even attempt to do so, without fear of "censorship". We don't need to pretend Joe Rogan and Alex Jones are sources of truth on par with the BBC and Al Jazeera, or that evolution and the Book of Genesis are equally valid attempts to describe the natural world, or that QAnon represents a legitimate framework of political and social criticism, merely for the sake of allowing controversy, in the false belief that controversy is equivalent to freedom.
You have more faith in your own virtue and right to rule over others than I do. I prefer to live as much as possible letting others live as they choose and being responsible for myself. QAnon isn't really much of a problem in the real world -- nobody is burning down cities and rioting because of it.
>You have more faith in your own virtue and right to rule over others than I do. I prefer to live as much as possible letting others live as they choose and being responsible for myself.
I never claimed any such personal faith or right, and that is a disingenuous reading of my comment. The chip on your shoulder is noted, albeit not compelling.
>QAnon isn't really much of a problem in the real world -- nobody is burning down cities and rioting because of it.
Hm... understate the threat of right-wing extremists, overstate the threat of left-wing activists.
It is a fair reading. You are advocating the control of speech and online freedom of others based on your own personal beliefs. Any way you cut it, you can not advocate such things without believing your views on the world are superior enough to justify those illiberal desires.
I just stated the facts. There are riots and violence in the streets of America right now. I'm not saying it's rampant, but it DOES exist. You can't point to the same to justify your fears about qanon and their incoherent ramblings.
As for the chip on my shoulder, I've tried to keep this civil, even though I do strongly disagree with your desire to suppress the speech of your political foes.
> I fail to see how making your product as addictive as you can, without breaking laws, is terrible.
I think it's important to be clear about "addictive" because people use it in different ways. If by "addictive" you mean "really compelling" then, sure, it may not be intrinsically terrible. A product that, for example, makes it really compelling for users to improve their physical health or fight climate change is probably not terrible.
But the clinical definition of "addiction", which is why the word has such a strong negative connotation, is something so compelling that your need to use it causes significant disruption to your quality of life or that of those around you.
Read the testimony again. The argument here is not just that Facebook is super engaging. It's that Facebook use harms its users and the world at large and its level of engagement magnifies that.
For sure. But I mentioned the "too addictive" argument specifically. I understand and agree that facebook does more harm than good, and that is wrong and must be addressed. I just don't understand this addiction angle. Making your product as addictive as you can, without breaking laws, is not wrong IMO.
But I think I see where you're coming from. They're getting people addicted to something harmful; did I understand you correctly?
> They're getting people addicted to something wrong, did I understand you?
That is part of it, yes.
Also, the mechanism of addiction itself often causes the harm. With chemical addiction, the same components that make the substance addictive also cause miserable withdrawal symptoms.
With social media, this is more nebulous, but I do think part of what makes systems like Facebook "engaging" is the anxiety they create when you aren't on them, and the low self-image that users try to assuage by posting flattering photos of their life.
Part of addiction (and advertising too, for that matter) is creating a need for your product in the mind of the user. They were probably happier before they had that need in the first place.
> With social media, this is more nebulous, but I do think part of what makes systems like Facebook "engaging" is the anxiety they create when you aren't on them, and the low self-image that users try to assuage by posting flattering photos of their life.
> Part of addiction (and advertising too, for that matter) is creating a need for your product in the mind of the user. They were probably happier before they had that need in the first place.
I cannot agree with this. Facebook cannot be responsible for people wanting to be on the site/app or for which photos of their life they choose to post. I thought we were discussing the methods by which they make people want to be on FB.
As for your last paragraph, I may be missing your point. Advertising is creating a need for your product, or tapping an existing need. People being happier before they had that need cannot be a reason to stop companies from trying to sell a product. If you bought something that made you feel worse, you would probably just stop using it.
Now, if you can't stop using it because you're addicted, but the company didn't do anything illegal to make their product addictive and the risks are clear (not saying this is FB case), why should they be blamed?
If I totally missed your point, please feel free to enlighten me.
It's bad if you accept that people deserve agency: the ability to freely choose how they act.
The primary purpose of making an addictive product is to remove peoples' agency by hijacking known deficiencies in our minds/bodies. It's a form of coercion, because your goal is to prevent people from being able to choose whether they use your product or not.
But they can't do it without those people's help, correct?
If they aim to remove agency, it's because you have it in the first place, meaning you can stop it from happening if you're properly informed.
I understand that some people might not realize they are being targeted and should be clearly told what could happen to them. But the majority of people must know FB is addictive.
After that, I can't see how people still getting addicted is the company's fault.
Wow, if this were not hosted on house.gov I wouldn't believe it was real.
Edit: A side note, Kendall's current venture is about "Break[ing] your screen and social media addiction". You're free to make any assumptions regarding that in connection to this hearing.
In that case, if you want to really freak out, read Snow Crash by Neal Stephenson. We have been steadily cruising into this future for some time now. "Libra" is just another word for "Kong Bucks."
Just because it's on house.gov doesn't mean it's not fake. The house has a democrat majority, and they're all corrupt lying communists who are out to destroy Trump at all & any costs, so you clearly can't believe this just because it's on house.gov /s
To be clear, that was sarcasm. But sarcasm aside, this is exactly the stance that several members of my family would take if I shared this and asserted that it's not "fake news" because it's on house.gov. The problem is that we're so far through the looking glass that legitimate attempts to pull back the curtain face a huge uphill battle because of the very system that they're trying to expose.
>My path in technology started at Facebook where I was the first Director of Monetization.
While the title is a bit creepy, I can certainly see how it could take a while to start second-guessing the work when it's your first tech job. Making a platform more interesting, useful, and engaging would certainly be an interesting challenge, particularly at first.
I'm sure getting people into the ovens at Auschwitz presented a fascinating logistical challenge as well. They probably got some really good enterprise architects on that effort. Likely a great career start for many.
We know a few from a different company. They’re outraged by what they saw on The Social Dilemma, but they also have paid off mansions in extremely high cost of living areas thanks to the exact same companies they’re now troubled by.
Wonder what it's like to be a Facebook employee on HackerNews and see this
Anyone who joined or stayed at Facebook in the last, say, five years 100% knew about it and was OK with it. They're probably laughing at everyone else taking so long to figure it out!
I could almost see it being like Grisham's The Firm. As a new employee, you're all bright-eyed and bushy-tailed, thinking about what you're going to buy and where you're going to live with all of the bags of cash you're earning. As you continue to work there, the bloom starts to come off the rose. Next thing you know, your conscience starts to itch. You're either too dependent on that salary to voluntarily leave, or you've just decided "it's not that bad," so you stick your head back in the sand and keep cashing that paycheck.
Some part of the company is surely not okay with it. I assume many employees have been wrestling with their conscience for months or years.
I had my own experience, once, leaving a company due to ethical concerns: it took me a year and a half to finally follow through and quit. I had coworkers who felt the same who stayed on for years.
No doubt the thought that occurs to many FB employees when they read these articles is "I need to gather my resolve and get the hell out of here!"
That's not really true. Working at a big company like that usually leaves you with very little visibility into anything except your small domain and whatever news the company publicly releases. Sure, you can watch the news, if you aren't working yourself to death. But if someone is paying you bags of money every week, it skews your perception just a bit and changes what you want to hear.
Maybe if you're working for a DoD secret lab where groups aren't allowed to talk to members of other groups because of clearance levels and what not. Facebook is not, and people talk amongst themselves around the proverbial water cooler. "Hey, that other group is trying out something and it seems to be having good results. Maybe we should try something similar?"
Have you worked for Facebook? I'm curious because my experience at another large valley company was that people rarely interacted between groups. When they did, the conversations tended to be more general and not specifically regarding work.
Incidentally I've worked for a DoD contractor and the visibility was actually better in that environment. It was a smaller org(<1k employees) though.
This thread* is a good glimpse, they apparently have teams at FB dedicated to creating propaganda touting the great cause FB is working towards. Those that remain working there have seemingly bought in.
Building something like this intentionally not only contributed to societal breakdown, but acutely impacted the mental health of millions of users. I wouldn't be surprised if there is a link to the commensurate increase in suicide we've also seen.
What do you think about a dystopia where GPT-3 / GPT-4 bots post comments to Hacker News, including references and links, without being distinguishable from real humans?
If indistinguishable, would that be a dystopia or a utopia[1]? At least this is Hacker News, not /r/totallynotrobots. Maybe if we gaze long enough into a procedural abyss, the abyss will gaze back?
The key to making a bot indistinguishable is to mix patterns in it along with machine generation.
For example, simply providing an alternative to a paywalled article is a recurring task people do here. It's easy to automate and doesn't raise eyebrows; it even improves someone's perception of the profile if they were to do a quick check. Another one is suggesting alternatives to products. It's easy: search Product Hunt or other sites for results. Or congratulating someone on their product launch/Show HN, which again doesn't require much contextual understanding.
Big tech, philosophical, news media, etc. threads are predictable. The T5 and ELECTRA models from Google are good at filling in blanks (in contrast to GPT, which generates text in a forward fashion), so they can be used to make unique sentences following a pattern. They are more meaningful at the cost of less randomness.
Many posts on HN appear first on Lobsters, small subreddits, GitHub trending, and popular Twitter accounts. You could simply fetch those links at a random interval within a timezone and post them here.
You can target a demographic that is least likely to suspect it's a bot. HN is siloed into many small parts despite having the same front page. You can predict which users are likely to post in certain threads and what their age demographic is (e.g., anything about Emacs). The HN database is available on BigQuery.
You can train a response to suspicious comments calling it a bot: "That hurts. I am not a native English speaker. Sorry if I offended you." Or: "Please check the guidelines..."
There are many techniques to make a sophisticated bot. ;)
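The link-reposting step described above is the easiest piece to picture in code. Here is a minimal sketch, illustration only: the lobste.rs `hottest.json` endpoint is a real public JSON feed, but the `url` field handling, the 1-4 hour posting window, and the helper names are assumptions for demonstration, and actual submission is deliberately omitted.

```python
# Sketch of the "fetch links from smaller aggregators, repost later" idea.
# Hypothetical helpers; the feed URL is lobste.rs's public JSON endpoint.
import json
import random
import urllib.request

LOBSTERS_HOTTEST = "https://lobste.rs/hottest.json"

def fetch_candidate_links(feed_url=LOBSTERS_HOTTEST):
    """Fetch story URLs from an aggregator feed (network call)."""
    with urllib.request.urlopen(feed_url) as resp:
        stories = json.load(resp)
    # Self-posts have no external URL; skip them.
    return [s["url"] for s in stories if s.get("url")]

def pick_unseen(links, seen):
    """Return the first link not already reposted, or None if exhausted."""
    for link in links:
        if link not in seen:
            return link
    return None

def next_post_delay(rng, lo_s=3600, hi_s=14400):
    """Random delay (here 1-4 hours) so posting times don't look scripted."""
    return rng.uniform(lo_s, hi_s)
```

The actual submission step is left out on purpose: HN's public (Firebase) API is read-only, which is part of why such bots would have to lean on browser automation instead.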
>"In 2016, internal analysis at Facebook found 64% of all extremist group joins were due to their own recommendation tools. Yet repeated attempts to counteract this problem were ignored or shut down."
But then, plenty people still work for Big Tobacco. Many do so voluntarily, not just because it's their only viable option. The trouble is it comes down, in large part, to ethics and morals. And we don't all share the same moral compass.
The pay sounds really good. Not gonna lie, if I was offered a job there with 400k comp it would be hard to turn down, depending on what I would be working on
Anybody who continues to use Facebook is complicit in the downfall of democracy. That goes for Twitter and 24/7 news channels as well. Get off that crap; it's rotting our minds.
On one hand, I agree that using Facebook and other social media can be destructive for most people. On the other hand, I see that democracy and many other institutions of our world have fallen behind the times. We can't stop that the world is becoming more connected. We can't stop that there are means to broadcast all kinds of information to millions and billions of people. Perhaps it's time we try to figure out how to upgrade existing systems or draft new ones.
I stopped using Facebook because I agree that it's a net negative for the world.
For what it's worth, I don't have that experience with Twitter. There, I seem to have enough control over who I follow and whose tweets Twitter shows me that my Twitter use is generally beneficial and healthy to me. Despite trying very hard to do so, I was never able to tune my Facebook feed to be healthy in that way.
I understand and fully agree with the damage that social media contributes to radicalization and extremism.
However, I think the safeguards they place on this are going to contribute to regulatory capture. Facebook has already benefited from policy as is; changes that put a substantial cost on new media companies will just further aid Facebook's "clone, advertise, and usurp" behavior.
We need to empower people with these algorithms. Imagine the power of a tiktok/YouTube/etc. algo where you get to make the choices. I want: less cartoons, more long-form Econ, increase diversity of recommendations, no repetition, etc. over decades of use and tweaks.
The problem I have with this is that while it is true that Facebook is optimizing for something harmful with their business model, these testimonies and congressional hearings are not aimed at solving the problem. They are aimed at scoring political points and huffing and puffing.
So it is a situation where an organization with shitty incentives that doesn't have good-faith alignment with society at large is regulated by another organization with shitty incentives that doesn't have good-faith alignment with society at large.
The whole process is completely illegitimate and basically a TV show. I don't have a solution for this; I just know that this is not it.
My view is that the core issue is that democracy and many modern institutions, as they exist today, are hopelessly ill-equipped to deal with a reality where billions of people are connected. We live in a world of interconnected differences. Existing social and cultural norms basically compel us to synchronize everyone in the network to the same instance of truth, and this simply doesn't work anymore.
We are already seeing huge increases in support for things like systems thinking, ecological worldview, decentralization, holism, etc. The future is pluralistic and that's okay.
The biggest difference is that nicotine is administered via cigarettes, while social media doesn't administer anything; the addiction is self-generated.
First, I think the most-missed story about the 2016 election is the role that Groups played in Bernie Sanders's ascendance. The volume of meme content and direct voter contact that I received from Bernie volunteers and passive supporters from just a few major pro-Bernie groups alone (ones that I was not even part of) exceeds the volume that I have received from all other campaigns online to date.
Second, in the early days of Groups, FB decided I was a very far right-wing activist and recommended that I join a series of groups agitating for a US military coup. I still have screenshots of it. It eventually got better at guessing my tastes.
Yeah for sure. I found the post I made about it (01/24/2014) but I am going to have to dig through my messages/HD to find the screenshot. I'll post when I'm done w/ work
I hate fb as much as the next guy, however it seems like these "mea culpa" admissions might be motivated by raising the profile of the person making them. It feels like these public acts of self-flagellation are meant to shield them from scrutiny for doing these things in the first place. What brought about this change of heart? Why did you work there in the first place?
Unless you think he's lying, who fucking cares what his motivation for whistleblowing is? Many whistleblowers want to get back at companies that screwed them, or want to get interviewed on cable news, or want a book deal, or whatever. Who fucking cares if they're not ideologically pure altruistic saints? What matters is the truth, not the motivations of those telling the truth.
(Do people with exclusively pure selfless motivations even exist? Even people who donate to charities anonymously are plausibly motivated at least in part by the warm tingly feeling they enjoy when giving charitably.)
These things aren't mutually exclusive; a testimony in Congress is inherently a public memorialization.
Tim Kendall has been an outspoken critic for a long time and was also recently a central figure in the movie "The Social Dilemma," which is about the same thing and will lead to more speaking engagements on the topic.
That doesn't dilute the message. If you think it does, what does a better arbiter of this aspect of reality look like? Who would that person be and what would their credentials be?
What do you mean they have been an outspoken critic for a long time?
Honest question: Were they outspoken when they were a director at FB, or when they were president at Pinterest? Or did it start two years ago when they became CEO of Moment, selling an app to cut down on screen time?
In my mind, an ideal arbiter isn't also selling a product to fix the problem they are raising awareness about.
This doesn't mean what they are saying isn't true, or that they didn't have a real change of heart, but it is certainly a conflict of interest.
You asked a question (who better than Kendall to speak on the topic?), and I provided an answer. You said you don't care, and that anyone will do.
I don't really follow the rest of your comment.
You say anyone speaking to the representatives is a good start, but representatives are ineffective. Also, why would only those trying to profit from or exploit an addict be of help?
I’m saying the mere presence of a potential conflict of interest doesn’t matter to me.
I’m saying I don’t care if there is some way their current predilection can be seen as disingenuous because they made a bunch of money, or maybe have a new company that can make a bunch of money.
I don't care that they made money, but I also don't trust Kendall not to spin the topic if it suits their interests. Why would anyone trust the manufacturer of anti-Facebook software about the dangers of Facebook?
In this case, it doesn't much matter because they didn't say anything new or of substance. Facebook is designed to be "addictive". Any psychology undergrad could tell you this.
What evidence do you have to back up these claims?
“It seems like these mea culpa admissions might be motivated...”
It seems like you’re not willing to state what the other agenda is, but you want to attack people speaking up anyway: because they were part of the problem or contributed to it, anything they say now doesn’t matter.