
I was just thinking about this after reading attacks on Yann LeCun on Twitter. He's a prominent AI figure (head of Facebook AI research and a Turing Award recipient). My interpretation: he was saying that bias in AI is mostly a problem of data. He didn't say there's no bias or that you can't address bias with modeling, just that the model itself isn't what's causing the bias. One woman researcher started attacking him and everyone is backing her up... even calling him a racist. I guess a lot of people who work on fairness in AI got offended because they feel he's calling their research BS (which I don't think is what he meant).

I think his points are informative, but instead of creating a useful discussion and debate, people focus on attacking him. I wouldn't be surprised if some people ask FB to fire him... (which thankfully won't happen). It's likely that next time he will think twice before stating his opinion on social media. That's how toxic social media has become.

Update: Great to see this got so many upvotes so quickly. It just shows how biased (no pun intended) social media like Twitter is, and how wary people are of stating their opinions publicly these days.



I'm in the field - though not as prominent as Yann (who has been very nice and helpful in my few interactions with him) - and your interpretation is off. People are disagreeing with his stance that researchers should not bother exploring bias implications of their research. (He says this is because bias is a problem of data - and therefore we should focus on building cool models and let production engineers worry about training production models on unbiased data.)

People are disagreeing not because of political correctness, but because this is a fundamental mischaracterization of how research works and how it gets transferred to "real world" applications.

(1) Data fuels modern machine learning. It shapes research directions in a really fundamental way. People decide what to work on based on what huge amounts of data they can get their hands on. Saying "engineers should be the ones to worry about bias because it's a data problem" is like saying "I'm a physicist, here's a cool model, I'll let the engineers worry about whether it works on any known particle in any known world."

(2) Most machine learning research is empirical (though not all). It's very rare to see a paper (if not impossible nowadays, since large deep neural networks are so massive and opaque) that works purely off math without showing that its conclusions improve some task on some dataset. No one is doing research without data, and saying "my method is good because it works on this data" means you are making choices and statements about what it means to "work" - which, as we've seen, involves quite a lot of bias.

(3) Almost all prominent ML researchers work for massively rich corporations. He and his colleagues don't work in ivory towers where they develop pure algorithms which are then released over the ivy walls into the wild, to be contaminated by filthy reality. He works for Facebook. He's paid with Facebook money. So why draw this imaginary line between research and production? He is paid to do research that will go into production.

His statement is so wildly disconnected from research reality that it seems like it was not made in good faith - or at least was made without much thought - which is what people are responding to.

Also, language tip - a "woman researcher" is a "researcher".


> He works for Facebook. He's paid with Facebook money. So why draw this imaginary line between research and production? He is paid to do research that will go into production.

This is a silly standard to uphold. The bulk of American academic researchers are at least partially funded by grants from the US federal budget.

If you were to enforce your standards consistently, then all of those researchers would be held responsible for any eventual usage of their research by the US federal government.

I really doubt you apply the same standard. So, the criticism mostly seems to be an isolated demand for rigor. You're holding Facebook Research to a different standard than the average university researcher funded by a federal grant.


This seems almost purposefully disingenuous to me.

Yann LeCun isn't receiving a partial research grant from Facebook. He's literally an employee of Facebook. His job title is "VP & Chief AI Scientist" (at least according to LinkedIn).

There's an obvious and clear distinction between an employee and a research grant, and this feels like it's almost wilfully obtuse.


Did you read what I wrote?

I don't think his argument is true. (That is, I do think researchers should keep bias in mind when developing machine learning projects, regardless of their funding sources.)

Because of his employment, this argument is a particularly silly one for him to make.


Don't have a lot of time to respond now, but will try to do it later. Just a quick note: I agree that his comment that engineers need to worry about bias more than researchers do is strange. But in my opinion it wasn't the focus of what he was trying to say.

I used "woman researcher" since it was important for the context as people accused him of mansplaining.


I agree with all of your points about the diffusion of responsibility that is common in ML, though I think you may not be sensitive enough to the harmful framing being created by the "anti-bias" side.

The original locus of the debate was how the recent face-depixelation paper turned out to depixelate pictures of black faces into ones with white features. That discovery is an interesting and useful showcase for talking about how ML can demonstrate unexpected racial bias, and it should be talked about.

As often happens, the nuances of what exactly this discovery means and what we can learn from it quickly got simplified away. Just hours later, the paper was being showcased as a prime example of unethical and racist research. When LeCun originally commented on this, I took his point to be pretty simple: that for an algorithm trained to depixelate faces, it's no surprise that it fills in the blanks with white features, because that's just what the Flickr-Faces-HQ (FFHQ) dataset looks like. If you had trained it on a majority-black dataset, we would expect the inverse.

That in no way dismisses all of the real concerns people have (and should have!) about bias in ML. But many critics of this paper seem far too willing to catastrophize about how irresponsible and unethical this paper is. LeCun's original point was (as I understand it) that this criticism goes overboard given that the training dataset is an obvious culprit for the observed behavior.

Following his original comment, he has been met with some extremely uncharitable responses. The most circulated example is this tweet (https://twitter.com/timnitGebru/status/1274809417653866496?s...) where a bias-in-ML researcher calls him out without so much as a mention of why he is wrong, or even what he is wrong about. LeCun responds with a 17-tweet thread clarifying his stance, and her response is to claim that educating him is not worth her time (https://twitter.com/timnitGebru/status/1275191341455048704?s...).

The overwhelming attitude there and elsewhere is in support of the attacker. Not of the attacker's arguments - they were never presented - but of the symbolic identity she takes on as the anti-racist fighting the racist old elite.

I apologize if my frustration with their behavior shines through, but it really pains me to see this identity-driven mob mentality take hold in our community. Fixing problems requires talking about them and understanding them, and this really isn't it.


I think this is relevant: https://twitter.com/AnimaAnandkumar/status/12711371765294161...

An Nvidia AI researcher calling out OpenAI's GPT-2 as horrible because it's trained on Reddit (more precisely, on WebText - the contents of pages linked from Reddit submissions - and I'm not sure whether that's the only data).

Reddit is supposedly not a good source of data for training NLP models because it's... racist? sexist? As if Reddit even leans right in general...

Anyway; the table looks horrific - why would they include these results? Oh, it turns out the paper was on bias: https://arxiv.org/pdf/1909.01326.pdf

Anyway; one can toy with GPT-2 large (the paper used the medium model, so results might differ) at talktotransformer.com

"The woman worked as a ": 2x receptionist, teacher's aide, waitress. Man: waiter, fitness instructor, spot worker, (construction?) engineer. Black man: farm hand, carpenter, carpet installer(?), technician. White man: assistant architect, [carpenter but became a shoemaker], general in the army, blacksmith.

I didn't read the paper, I admit - maybe I'm missing something here. But these tweets read like the person responsible should be fired.


Very well articulated, thank you!


So, your argument is that you disagree with data being the root of the problem by arguing that data "shapes research directions in a really fundamental way", research is "empirical" (i.e. based on data) and his research can't be isolated from data it'd be used on in production?

Looks to me like you're argumentatively agreeing with Yann.


Not really, Yann's original claim (which he sort of kind of partially walked back) was that data is the only source of bias [0][1]. He walked that back somewhat to claim that he was being very particular in this case[2], which is perhaps true, but still harmful. The right thing to do when you make a mistake is apologize. Not double down and badly re-explain what other experts have been telling you back at them.

So then Yann notes that generic models don't have bias[3]. This is, probably, true. I'd be surprised if on the whole, "CNNs" encoded racial bias. But the specific networks we use, say ResNet, which are optimized to perform well on biased datasets, may themselves encode bias in the model architecture[4]. That is, the models that perform best on a biased dataset may themselves be architecturally biased. In fact, we'd sort of expect it.

And that all ignores one of the major issues which Yann entirely skips, but which Timnit covers in some of her work: training on data, even "representative data" encodes the biases that are present in the world today.

You see this come up often with questions about tools like "crime predictors based on faces". In that context it's blatantly obvious that no, what the model learns will not be how criminal someone is, but how they are treated by the justice system today. Those two things might be somewhat correlated, but they're not causally related, and so trying to predict one from the other is a fool's errand and a dangerous fool's errand since the model will serve to encode existing biases behind a facade of legitimacy.
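To make that mechanism concrete, here's a toy simulation (all numbers made up by me) of what happens when the training label is an arrest record rather than behavior: with identical offense rates, the model simply learns the enforcement rate.

    # Toy simulation (hypothetical numbers): the label is "arrested",
    # which mixes behavior with enforcement intensity, so the model
    # learns enforcement, not behavior.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n = 100_000

    group = rng.integers(0, 2, n)        # 0 = lightly policed, 1 = heavily policed
    behavior = rng.random(n) < 0.10      # identical true offense rate in both groups
    caught = rng.random(n) < np.where(group == 1, 0.9, 0.1)
    arrested = behavior & caught         # the only label we ever observe

    clf = LogisticRegression().fit(group.reshape(-1, 1), arrested)
    print(clf.predict_proba([[0], [1]])[:, 1])  # ~[0.01, 0.09] despite equal behavior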

Yann doesn't ever respond to that criticism, seemingly because he hasn't taken the time to actually look at the research in this area.

So insofar as data is the root of the problem, yes. Insofar as the solution is to just use more representative data in the same systems, no. That doesn't fix things. You have to go further and use different systems or even ask different questions (or rule out certain questions as too fraught with problems to be able to ask).

[0]: https://twitter.com/ylecun/status/1203211859366576128

[1]: https://twitter.com/ylecun/status/1274782757907030016

[2]: https://twitter.com/ylecun/status/1275162732166361088

[3]: https://twitter.com/ylecun/status/1275167319157870592

[4]: https://twitter.com/hardmaru/status/1275214381509300224. This actually goes a bit further, suggesting that as a leader in the field one has a responsibility to encourage ethics as part of the decision making process in how/what we research, but let's leave that aside.


> Yann doesn't ever respond to that criticism, seemingly because he hasn't taken the time to actually look at the research in this area.

No, that's still a problem with data in a broader sense. The issue is that "how X will be treated by the justice system" is not modeled by the data, so there's no possible pathway for a ML model to become aware of it as something separate from "crime". People who ignore this are expecting ML to do things it cannot possibly do - and that's not even a fact about "bias"; it's a fact about the fundamentals of any data-based inquiry whatsoever.


I hope you read to the end of my post where I address that:

> So insofar as data is the root of the problem, yes. Insofar as the solution is to just use more representative data in the same systems, no. That doesn't fix things.

Ultimately Yann's proposals are still to use "better data" whereas all the ethics people are (and have been) screaming no, you can't use better data because it doesn't exist. He doesn't acknowledge that.

And the hairs Yann is trying to split here are ultimately irrelevant[1] and probably harmful[2]. And as someone with a large platform, addressing those issues in a straightforward way is far, far superior to trying to split those hairs over twitter.

From a meta perspective, his tweetstorm didn't add anything to the conversation that Dr. Gebru and her collaborators aren't already aware of. Nor did Yann's overall takeaway help to inform the average Twitter user on these issues. In fact, they're more likely to take away the opposite conclusion: that with good enough data we can ask these questions in a fair way.

But as you rightly conclude there are flaws in any data based inquiry. Yann doesn't concede that.

[1]: https://twitter.com/isbellHFh/status/1275184863159685121

[2]: https://twitter.com/hardmaru/status/1275088134238162944


I'm not sure that Yann was trying to split hairs there. He was reasoning about the issue from first principles (e.g. the problem-domain vs. architecture vs. data distinction) and then failing to carry his reasoning through to the reasonable conclusion that you mention re: the inherent flaws of any data-based modeling. Criticizing his take w.r.t. these issues is constructive; being careless about what his actual views are is not.


> Those two things might be somewhat correlated, but they're not causally related,

That's kind of a bold claim. Are you arguing that the current justice system just picks people up at random, and assigns them crimes at random, with no correlation with their actions? I mean, not some bias towards here or there, but no causal relationship between person's actions and justice system's reactions at all? That's... bold.

But if this is the case, then the whole discussion is pointless. If the justice system is not related to people's actions then there's no possible improvement to it, since if the actions are not present as an input, then no change in the models would change anything - you can change how exactly random it is, but you can't change the basic fact that it is random. What's the point of discussing any change at all?

> Insofar as the solution is to just use more representative data in the same systems, no.

If by "same systems" you mean systems pre-trained on biased data, then of course adding representative data won't fix them. And of course, if the choice of model is made on the basis of biased data, then this choice propagates the bias of the data, so it should be accounted for. But I still don't see where the disagreement is, and even less of a basis for claims like "harmful".


> I mean, not some bias towards here or there, but no causal relationship between person's actions and justice system's reactions at all?

It depends on what you mean by causal. Does criminal behavior cause interactions with the justice system? Yes. But not engaging in criminal behavior doesn't prevent interactions with the justice system (for specific vulnerable subpopulations). So would you say that ReLU shows a causal relationship between criminality on the X axis and how the justice system treats you on the Y? I don't think I would.

In some sense btw this is what Timnit's "Gender Shades" paper looks at, which is that even if a classifier is "good" in general, it can be terrible on specific subpopulations. Similarly, even if there is a causal relationship across the entire population, that relationship may not be causal on specific subpopulations.
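The arithmetic behind that failure mode is worth spelling out, because aggregate metrics hide it completely. With numbers I'm making up for illustration:

    # Made-up numbers: a classifier that looks fine in aggregate
    # while being much worse on a minority subgroup.
    n_a, n_b = 9_000, 1_000    # group sizes (assumed)
    acc_a, acc_b = 0.98, 0.65  # per-group accuracies (assumed)

    overall = (acc_a * n_a + acc_b * n_b) / (n_a + n_b)
    print(f"overall accuracy: {overall:.3f}")  # 0.947 - "good" in a paper table
    print(f"group B accuracy: {acc_b:.3f}")    # 0.650 - terrible in deployment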

And of course, that ignores broader problems around our justice system being constructed to cause recidivism in certain cases. In such situations, interactions with the justice system cause criminal behavior later on. So clearly, in general since Y is causal on X, X can't be causal on Y.

> But if this is the case, then the whole discussion is pointless.

No! Because people trust computers more than they trust people. Computers have a veil of legitimacy and impartiality that people do not. (no really, there's a few studies that show that people will trust machines more than people in similar circumstances). Adding legitimacy through a fake impartiality to a broken system is bad because it raises the activation energy to reform the system.

At its core, that's probably the biggest issue that Yann is missing. Even in cases where an AI model can perfectly recreate the existing biases we have in society and do no worse, we've still made things worse by further entrenching those biases.

> But I still don't see where the disagreement is, and yet less basis for claims like "harmful".

So I think an important precursor question here is if you believe the pursuit of truth for truth's sake is worthwhile, even when you have reason to believe the pursuit of truth will cause net harm? Imagine you have a magic 8 ball that when given a question about the universe will tell you whether or not your pursuit of the answer to that question will ultimately be good or bad (in your ethical framework, it's a very fancy 8-ball). It doesn't tell you what the answer is, or even if you'll be able to find the answer, only what the impact of your epistemological endeavor will be on the wider world.

If, given a negative outcome, you'd still pursue the question, I don't think we have common ground here. But assuming you don't agree that knowledge is valuable for knowledge's sake, and instead that it's only valuable for the good it does for society, we have common ground.

In that case, you have an ethical obligation to consider how your research may be used. If you build a model, even an impossibly fair one, to do something, and it's put in the hands of biased users, that will harm people. This is very similar to the common research ethics question of asking how your research will be used. But applied ML (even research-y applied ML) is in a weird space because applied ML is all about, at a meta level, taking observations about the world, training a box on those observations, and then sticking that box into the world where it will now influence things, so you have effects on both ends, how the box is trained and how the box will influence.

Like, in many contexts "representative" or "fair" is contextual. Or at least the tradeoffs between cost and representativity make it contextual. Yann rightly notes that the same model trained on "representative" datasets in Senegal and the US will behave differently. So how do you define "representative"? How do you, as a researcher, even know that the model architecture you come up with that performs well on a representative US dataset will perform equally well on a representative Senegalese dataset (remember how we agreed that model architecture itself could encode certain biases)? Will it be fair if you use the pretrained US model but tune it on Senegalese data, or will Senegalese users need to retrain from scratch, while European users could tune?

Data engineers will of course need to make the decisions on a per-case basis, but they're less familiar with the model and its peculiarities than the model architects are, so how can the data engineers hope to make the right decisions without guidance? This is where "Model Cards for Model Reporting" comes in. And in some cases this goes further to "well we can't really see ethical uses for this tool, so we'll limit research in this direction" which can be seen in some circles of the CV community at the moment, especially w.r.t. facial recognition and the unavoidable issues of police, state, and discriminatory uses that will continue to embed existing societal biases.

And as a semi-aside, statements like this[0] read as incredibly condescending, which doesn't help.

[0]: https://twitter.com/ylecun/status/1275162732166361088


> It depends on what you mean by causal.

I mean P(being in justice system|being actual criminal) > P(being in justice system), and substantially so. Moreover, P(being criminal|being in justice system) > P(being criminal). In plain words, if you sit in jail, you're substantially more likely to be an actual criminal than a random person on the street, and if you're a criminal, you're substantially more likely to end up in jail than a random person on the street. That's what I see as a causal relationship. Of course it's not binary - not every criminal ends up in jail, and innocent people do. But the system is very substantially biased towards punishing criminals, thus establishing a causal relationship.
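Plugging in some made-up numbers shows both inequalities can hold at once, which is all I'm claiming:

    # Made-up numbers: both inequalities hold simultaneously.
    p_criminal = 0.05                # base rate (assumed)
    p_jail_given_criminal = 0.30     # assumed
    p_jail_given_noncriminal = 0.01  # assumed; false positives exist but are rarer

    p_jail = (p_criminal * p_jail_given_criminal
              + (1 - p_criminal) * p_jail_given_noncriminal)            # 0.0245
    p_criminal_given_jail = p_criminal * p_jail_given_criminal / p_jail  # ~0.61

    print(p_jail, p_criminal_given_jail)  # both well above the base rates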

There are some caveats to this, as our justice system defines some things that definitely should not be a crime (like consuming substances the government does not approve of for arbitrary reasons) as a crime. But I think the above conclusion still holds regardless, even if it becomes somewhat weaker when you don't call such people criminals. It is, of course, dependent on societal norms, but no data models would change those.

> If you build a model, even an impossibly fair one, to do something, and it's put in the hands of biased users, that will harm people.

That is certainly possible. But if you build a shovel, somebody might use it to hit another person over the head. You can't prevent the misuse of any technology. According to the Bible, the first murder happened in the first generation of people that were born - and while not many believe in this as literal truth now, there's a valid point here. People are inherently capable of evil, and denying them technology won't help it. You can't make the world better by suppressing all research that can be abused (i.e. all research at all). You can mitigate potential abuse, of course, but I don't think "never use models because they could be biased and abused" is a good answer. "Know how models can be biased and explicitly account for that in the decisions" would be a better one.

> his[0] read as incredibly condescending, which doesn't help.

Didn't read as condescending to me. Maybe I'm missing some context, but it looks like he's saying he's not making a generic claim, only a specific claim about a very specific, narrow situation. Mixing these two up is all too common nowadays - somebody claims "X can be Y if conditions A and B are true", and people start reading it as "all X are always Y", drawing far-reaching conclusions from it and jumping into a personal shaming campaign.


It has been this way for a while. Outrage/cancel culture is an absolute pox upon our population that really needs to stop.


Isn't a large part of this down to the forum of communication vs. the level of discourse? I mean, if you want to have a nuanced, balanced discussion about a potentially sensitive topic you just can't do that on Twitter, SMS, message board, etc.

Even on HN you see issues, and that's with pretty tight tribal norms, moderation, and topics where commenters aren't usually deeply or emotionally involved.

I agree with your overall opinion, but I think that change actually starts with people reflecting on the impact of the chosen medium on their message. Not self-censorship, but "positioning".


> I mean, if you want to have a nuanced, balanced discussion about a potentially sensitive topic you just can't do that on Twitter, SMS, message board, etc.

Lots of people are canceled because they said or did something in the real world that was dragged onto Twitter, the New York Times, Reddit, or some other cesspool. It's not as easy as "don't expect substantial debate from toxic platforms".

Further, you absolutely can touch on sensitive issues provided you espouse a certain position, and it needn't even be a majority opinion nor an opinion that is shared by a majority of the people you purport to defend. It needn't be supported by evidence, and in fact citing the evidence is a damnable offense.

Lastly, I don't think the problem is just "nuanced debate on social media platforms is too hard". It's certainly difficult, but if canceling were down to that, it would look like everyone canceling everyone else. Instead it looks like one relatively small, well-defined group (or as well-defined as groups tend to get) canceling everyone else. Social media debate is certainly messy and hard to make productive, but this doesn't explain cancel culture. I posit that if you simply weaken this group by reinforcing free-speech norms, debate on social media would be much less toxic (not perfect - we're still dealing with humans, after all - but much better than it is presently).


That is a fair statement. I don't think you're wrong about it, by any means. But I do think we can't lay the entire blame on the medium of communication, either. People really need to take a step back when they find themselves falling into this mindset and reset. Part of the issue, I believe, is a genuine lack of critical thinking and compassion on most online platforms that spills over into everyday communication. Instead of getting angry about what you think someone is trying to say, maybe make sure they said what you think they said before being outraged about it. Also, this whole 'staying silent is the same as being against us' notion is toxic as hell. I've seen many who have a decent platform on Twitter or YouTube get attacked for simply remaining quiet about some of the more visible topics lately.


I think if by some divine miracle Twitter disappeared and some mysterious supernatural force prevented re-creating it by any means - our culture probably would be much better off. There are some excellent people on Twitter but by now they're just giving legitimacy to the cesspool. Twitter adds nothing to them and they'd be as well - probably much better - on a different platform.


I am very likely naive in these circumstances, but I honestly don't understand how cancel culture can work at all. So there are some voices on twitter who loudly express their immature mob mentality. Why don't all the sane people just block them and ignore them, and then go on with their lives as if nothing happened?


If it was just a few voices on Twitter, it would be less of a problem. But it's also journalists, academics, grievance entrepreneurs of various stripes — all of whom exert an influence on the general public. It's businesses that don't want to get on the wrong side of those people. And it's employees of those businesses who don't want to get fired.

"Cancel culture" is just a new spin on scapegoating, behavioral contagion, and public shaming, all of which have a very long history.


> Why don't all the sane people just block them and ignore them, and then go on with their lives

Because ‘sane people’ does not include your employer, who will throw you to the mob to appease them. In the US that also means losing your health insurance, so it can be a death sentence for you or your loved ones.

(I'll regret posting this when I'm starving in a gutter.)


> Why don't all the sane people just block them and ignore them, and then go on with their lives as if nothing happened?

They can't afford to do that, because this "mob" is actively dangerous. They will slander their enemies with all sorts of baseless accusations, call their workplaces to try and get them fired, manufacture false-flag harassment/cyberbullying and try to attribute it to them, etc. It's no different from the 8chan trolls - in fact they come from adjacent Internet subcultures, quite literally.


I don't see why you would not include the 8channers who do the exact same thing to prominent women in games, or anti-vaxxers trying to destroy the lives of doctors/researchers. There's no difference in tactics or goals.


Probably because 8channers and anti-vaxxers aren't successful in getting people fired, because they don't wield any power among legitimate institutions.

They are successful at making people's lives miserable through harassment like death threats and swatting, and unsurprisingly, those tactics are universally reviled.


> Probably because 8channers and anti-vaxxers aren't successful in getting people fired, because they don't wield any power among legitimate institutions.

Notable exception: Donglegate; a person was cancelled, then the person cancelling got cancelled - her company was DDoSed until she was gone*

Hilarious.

* well, that's what I want to believe because it's more interesting; it's possible the backlash against the first cancellation had its part in that.


Yea, I didn't remember anything about a DDoS being the reason Richards was fired, as opposed to just a PR person making a splash and bringing unwanted attention to her company. A non-central case of cancellation, and I have no real sympathy for Richards as a person, but it still sucks that it happened.


Because it gets very scary once the handful of truly unhinged people start doxxing and posting graphic and detailed threats and showing up at your house.

Just look at the death threats someone like Fauci is getting for doing his job and informing the public. Not that many people want to deal with being a public target to the worst actors in society.


It reminds me of the (nearly cliche, but timeless) quote from MLK about riots:

    "I think that we’ve got to see that a riot is the 
    language of the unheard"
I don't think anybody, even "cancellers," thinks it's a remotely ideal solution. But when groups go unheard, feel a system is unjust, and feel unable to change the system they understandably seek to go outside the system.

Please note that I have specifically used the term "understandably" above as opposed to, say, "justly." You may feel a particular instance is or isn't just, but even if one vehemently disagrees with the practice it is typically understandable.

Consider that "cancelling" is often invoked in response to acts (sexual assault, racism) that have been regarded as wrong and/or illegal for millennia. And yet, those acts persist. Clearly the current system doesn't do enough to prevent them. So folks feel the need to go outside the system. "Cancel culture" is best understood as a symptom and not the problem.


Sure, but it's also got a great deal to do with political identity and group signalling.

In the modern age (and forever, probably, but more quietly / less permanently), we are defined by what we're outraged by.

So we've ended up in a situation where both ends of the spectrum have each individually out-outraged themselves into two very different but (probably) equally irrational corners, where to try bring some nuance and depth back in is to become a social pariah. To do anything less than express equal outrage about the issue du jour is to become a social pariah.

Obviously most of the issues themselves are valid points of conversation at their root, and I certainly don't think that all of the people using science or rationalist labels are doing so genuinely and not as a cover for their own identity bullshit or actual bigotry.

But that's orthogonal to the observation that it seems true that we simply can't have a conversation anymore about certain trigger topics. Even my stating this very observation should probably (due to the current state of our collective discourse) invoke some thoughts about my motivations: which minority group/s does jddj take issue with? Is he transphobic? He mustn't realise how much of the repression of women has simply been normalised for him.

Whether it's a symptom or a standalone issue isn't really important. The point is that it's not useful as a tool for beneficial societal change, instead it's a tool for gesturing vaguely and it's a crutch that we lean on so as to not need to truly engage with or wade into the uncomfortably nuanced grey areas which naturally surround every issue.

But on the left we've absolutely embraced it, to a fault. Unfortunately, and not that I could do any better in their situation, those on the left who have had a brush with it often go on to make cancel culture an identity issue of their own, and discourse suffers further for it (looking at you Sam Harris).

Agreed that it's a symptom (not necessarily of repression, but more of polarisation). I don't agree that that characterisation is enough to get it a free pass.


    In the modern age (and forever, probably, but more quietly / 
    less permanently), we are defined by what we're outraged by.
Some of that is just human nature: obviously we don't raise our voices and scream about the things that are okay. (We certainly should practice gratitude more often, of course)

There's an unfortunate implication in your words, though, regarding "outrage."

Nobody would ever begrudge a fellow human being a sense of outrage regarding something they feel is legitimate. If your neighbor's child was kidnapped, you would never criticize them for feeling outraged (among other emotions), because naturally that would be a perfectly reasonable way for them to feel.

So when you criticize people for feeling outraged, you are clearly dismissing the validity of their claims, and/or insinuating an ad hominem attack against them.

Instead of policing their tone, why not just discuss the thing they're angry about?

Not all outrage is justified, but there are a lot of things in the world worth making noise about. Some are life and death.

    But that's orthogonal to the observation that it seems true that 
    we simply can't have a conversation anymore about certain trigger topics.
Two observations.

One, I'm a fan of conversation, but some topics don't deserve conversation, especially if conversation hasn't solved the problem in the past. With the benefit of hindsight, we can look back through history and spot plenty of these. There were plenty of people who said, "hey! let's not get all uppity about slavery! let's really think hard about this!" and history does not look kindly upon them. There is no middle ground there and no compromise possible. Most issues are not so clear-cut, but some are.

Two, there is a lot of inequality in the world, and "conversation" often (in effect) means that the oppressing class is once again passing the burden off to the oppressed class. As a white person in America, it is my job to understand things regarding inequality. It is not black folks' job to explain it to me. Though, of course, there are no shortage of black voices from which to learn. In general, frankly, a lot of "conversation" ought to be replaced by listening.

    He mustn't realise how much of the repression of women has simply 
    been normalised for him.
I certainly don't have any opinions on you, personally!

But yes, an awful lot of bad things have been normalized within us.

There are really two ways we can react to that. We can view those realizations as attacks and attempts to "guilt" us. Or we can see those as opportunities to get better.

Like literally everybody, I'm far from perfect, but I do like to use my engineer's mindset to try and improve the things I can.

   Whether it's a symptom or a standalone issue isn't really important. 
   The point is that it's not useful as a tool for beneficial societal change, 
   instead it's a tool for gesturing vaguely and it's a crutch that we lean on 
   so as to not need to truly engage with or wade into the uncomfortably 
   nuanced grey areas which naturally surround every issue.
Ah, the ol' "bumper sticker activist" criticism.

Here's the thing: there's nothing wrong with bumper stickers or maybe even a little rabble-rousing on social media in favor of $YOUR_CAUSE unless that's all you're doing and you've fooled yourself into thinking that's enough.

Again, this is kind of an ad hominem attack where you assume the people doing those things aren't doing useful things, haven't thought deeply about those "grey areas", etc.


Some of these missed the mark a bit, but broadly speaking I agree with most of these points.

There are definitely, for instance, topics which the typical Free Speech proponents get most vocal about which I think simply aren't worth talking about because either they are clearly just bait, or the harms obviously outweigh the possible benefits. These include that bullshit about the IQ differences between ethnicities, a lot of gender stuff, what flags/foods/songs/whatever children are exposed to at school, and other things of that nature.

Similarly, I'm not proposing that conversation be used in lieu of real change. Conversation hasn't worked and is unlikely to work to reduce police brutality, for example, and it simply doesn't matter to me whether data can be found which does or doesn't support the idea that black people are unfairly targeted there, the movement seems like a fair one to me based on my life experience -- and my opinion doesn't really matter here either, as someone who has largely been unaffected.

My complaint only holds in the extreme. Unfortunately, a lot of our lives are now lived in that band.

Mostly agreed on the ad hominem stuff.


Really enjoyed reading this level headed discussion, thank you


Is this not victim blaming? If you attempt to ruin someone's life because they said "guacamole nigga penis" I don't think you can use "we live in a society" as justification. Seems like a flimsy excuse. Literal KKK members feel like they need to "go outside the system" to harm black people, does that make lynching okay?

Beyond that, characterizing cancel culture as "going outside the system" is silly. It's literally tattling, how much more sucking up to the system could one be? If "the system" (aka the overall collection of people in positions of power) was a-okay with sexual assault and racism cancel culture wouldn't exist because you wouldn't be able to complain to bosses, schools, etc. about people raping or being racist.


    Literal KKK members feel like they need to "go 
    outside the system" to harm black people, does 
    that make lynching okay?
Absolutely not, of course.

My initial post said nothing to indicate that cancel culture was a good thing, or that it always represented a just cause.

Nor did it say that "going outside the system" always represented a just cause, etc.


> But when groups go unheard, feel a system is unjust, and feel unable to change the system they understandably seek to go outside the system.

They're being heard loud and clear. That's the problem. Their incessant whining and searching for the "problematic" behind every issue is crowding out reasonable discourse and discussion.

It's a form of mob rule and it's progressing from tiresome to downright hideous as more and more careers are destroyed by its vindictiveness.

> "cancelling" is often invoked in response to acts (sexual assault, racism) that have been regarded as wrong and/or illegal for millennia

You have it upside down. Cancelling is often the result of applying today's morals on yesterday's actions. People/books/movies/statues weren't "cancelled" before because nobody had a problem before. But now everything's retrospectively a target of the new moral crusaders.


    People/books/movies/statues weren't "cancelled" before 
    because nobody had a problem before. 
No, you didn't hear the problems before.

Plenty of people found these things lousy for decades, and in some cases centuries.

But not enough listened. So the voices became louder, and more unruly.

It's like when you try to tell your neighbor nicely that his dog's been pooping on your yard. And he does nothing about it for years. Then one day he wonders why you've left an enormous pile of dog poop on his doorstep.

Gross? Rude? Highly non-ideal? Sure. But he didn't listen to reasonable discourse.


> Plenty of people found these things lousy for decades, and in some cases centuries.

So what? Many more found them worthy. A critique is not the measurement of whether statues should be torn down or books censored. Otherwise no art would be produced.

What has changed is that the mob has become emboldened into thinking that things they don't like deserve to be destroyed. It's juvenile intolerant behavior.


    A critique is not the measurement of whether 
    statues should be torn down or books censored
A critique? No. A gross violation of utterly basic human decency? Yes.

In many recent cases, we are talking about slavery.

Many monuments glorified military "heroes" of the Confederate Army, a rebel army that sent men to their deaths fighting for the right of white Americans to own black slaves.

In general, I believe the world suffers from a lack of nuanced discussion and understanding. In the case of slavery and monuments to slavery, I find very little need for nuance.

    books censored
There's a major discontinuity between censoring information and removing monuments.

A statue is not a meaningful source of information.

It essentially yields a single data point that says, "here is something held dear by the society in which this statue exists."

Removal of a statue does not censor information or rewrite history. It merely says, "we're not celebrating this any more." If anything, in the case of the removal of Confederate monuments, it represents a greater awareness of history.


I think some people don't get just how offensive Confederate monuments can be, because most of them are intentionally couched in language that obscured what they represent. This is similar to how, in early US politics, slavery was referred to as "the peculiar institution" or even more vaguely - e.g. the original US Constitution never says "slave", but instead talks of "free persons" and "other persons", or "persons bound to service".

But some of them are just so inherently offensive, the content overpowers the presentation - e.g. the "faithful slave" monuments and memorials. Perhaps contemplating these might help in understanding the more subtle problems with the rest, so here are a few examples:

https://www.hmdb.org/m.asp?m=42188

https://www.flickr.com/photos/jstephenconn/5136209868

https://docsouth.unc.edu/commland/monument/245/

https://commons.wikimedia.org/wiki/File:Slave_memorial_at_Pr...


Yes.

However, it cannot stop as long as a large segment of the people in power do with abandon whatever they feel like, without any repercussion.

This is the only way it is possible for many people to get anything remotely resembling justice (although often it's revenge). As long as we don't fundamentally address inequality and deeply unjust systems, I don't think it will stop.


Is that request not a call to cancel cancel-culture?


No, that would be if we called cancel-culture racist and anyone who perpetuated it a white supremacist.

By assigning moral outrage to one side of the debate, we remove the pretense of a debate. It's no longer about evidence and facts but vilifying one side. It's ad hominem 2.0 if you will, and it works because we as a society have a visceral negative reaction to some labels.

The problem is that Pavlovian-esque training can be untrained. If you call everyone who does something you don't like a Nazi, then pretty soon it doesn't seem like being a Nazi is all that big of a deal. That in itself is bad, because by abusing the term you provide cover for actual, literal Nazis. The same issue applies when you label everything racist or sexist or otherwise.

Words have power, but that power can fade if misused.


> it works because we as a society have a visceral negative reaction to some labels.

Do you know why we have that reaction? Because of millions upon millions of dead, innocent humans. That is what those ideologies lead to. We learned this lesson once, and we learned it very well. We don't want that to happen again. We don't want to let those ideas spread again. We don't want to see the mass graves they lead to again. We learned that.

Some people have forgotten, though.


IDK, I'd say people calling everyone they don't like a Nazi seem like a party which doesn't get it.


Some people remember the horror of Nazi Germany as well as the horror of the Red Terror, Stalinist Russia, and the Cultural Revolution.


All that, and the horrors of McCarthyism too.


You're comparing McCarthyism to the Nazi genocide, the Red Terror, the horror of Stalinism and the Cultural Revolution?


I'm saying it's another authoritarian impulse to squash dissent, yes. Smaller magnitude, sure. But that's exactly why you compare things -- to see what's better or worse.


Calling them dangerous and a pox on society seems in the same ballpark of moral outrage as calling someone racist.


More the pox on society than dangerous per se. "Dangerous" is a big category with room for nuance, even while it always advises caution. A car which works perfectly can be dangerous, but a car which randomly catches on fire without warning is also dangerous.

That nuance allows for far more room for debate.


Apologies for the somewhat pedantic aside, but I want to point out: "literal Nazi" is a borderline oxymoron. There is no Nazi party, nor is Nazism a coherent political ideology to which one can seriously subscribe. I suppose people who were active members when it still existed can still be considered "literal Nazis", in which case there are probably fewer than 50 left on earth. But saying that anyone else who claims adherence to Nazism or allegiance to the (completely defunct) Nazi party makes them a literal Nazi actually elevates their status from what it is, which is just a pathetic racist cosplayer.


> There is no Nazi party

https://en.wikipedia.org/wiki/American_Nazi_Party

> nor is Nazism a coherent political ideology to which one can seriously subscribe

https://en.wikipedia.org/wiki/Nazi_Party

https://en.wikipedia.org/wiki/National_Fascist_Party

I'm not going to link to it, but there is a self described National Socialist Movement party still alive today.

> But saying that anyone else who claims adherence to Nazism or allegiance to the (completely defunct) Nazi party makes them a literal Nazi actually elevates their status from what it is, which is just a pathetic racist cosplayer.

“The tragic aspect of the situation is that the Tsar is living in an utter fool’s paradise, thinking that He is as strong and all-powerful as before.” - Sergei Witte in 1905


Tolerant of everything except intolerance, and all that.


I see we are in a conundrum.


Understanding apparent paradoxes seems like an important place to start.

The best history/government teacher I had in school had a recurring throughline for our classes. Paraphrasing: "It is better, in the long run, to be for something than against something."

To be against something is to highlight a problem. To be for something is to offer a possible direction for the future.


That belief is directly hostile to critical thinking.

Critical thinking is the ability to critique - specifically, to explain what is wrong or bad about a particular system.


Critical thinking is supposed to be just one tool, everyone should have more than that in their mental toolbox. It's useless on its own, we need the capacity to build systems more than we need the ability to tear them down. It's also even harmful when only applied selectively (e.g. never to one's own, or to popular, positions(s)).


I see it more as acknowledging limitations. Critical thinking is a filter as opposed to a source.

Besides, just because something isn't as good as another doesn't make it bad. A good new idea or approach plus critical thinking is better than just a good new idea, and critical thinking can guide the approach. They aren't mutually exclusive.


o.0 I like that a lot.


>>"Here is a story I heard from a friend, which I will alter slightly to protect the innocent. A prestigious psychology professor signed an open letter in which psychologists condemned belief in innate sex differences. My friend knew that this professor believed such differences existed, and asked him why he signed the letter. He said that he expected everyone else in his department would sign it, so it would look really bad if he didn’t. My friend asked why he expected everyone else in his department to sign it, and he said “Probably for the same reason I did”.

this post is no longer available, of course


I don't even think he said "models don't cause bias," he just said "ML systems are biased when data is biased."


I don't understand how people can defend his detractors in this particular case. Are you telling me that an image upsampling model that does not contain hard-coded bias, and is trained on unbiased data, will produce biased results? Especially the kind of biased result represented by the error made by the original tweeter who fucked up?


Just curious, but what "error" did the original tweeter make? Did anyone really expect the model to accurately reconstruct the original photo starting from a pixelated mess? That makes no sense to anyone with even a passing knowledge of ML. You're always going to get craploads of bias and variance (i.e. blatant inaccuracy, over and above the bias) in such a setting, even starting from "ideal, unbiased" data. The problem domain is at issue here.
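The information loss is easy to demonstrate directly - pixelation is many-to-one, so visibly different originals can produce the exact same input, and the model has to pick one using its training prior. A small numpy sketch (toy arrays standing in for images):

    # Sketch: two different "images" with identical pixelation.
    import numpy as np

    rng = np.random.default_rng(0)
    k = 4  # pixelation block size

    def pixelate(img):
        # average-pool k x k blocks, like heavy pixelation
        h, w = img.shape
        return img.reshape(h // k, k, w // k, k).mean(axis=(1, 3))

    a = rng.random((8, 8))  # one original

    # Add noise that averages to zero within every k x k block: b differs
    # from a everywhere, yet pixelates to exactly the same thing.
    noise = rng.normal(0, 0.3, (8, 8))
    noise -= np.repeat(np.repeat(pixelate(noise), k, axis=0), k, axis=1)
    b = a + noise

    print(np.allclose(pixelate(a), pixelate(b)))  # True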


Yeah, I get your point. But I guess for this model you can kinda have a concept of the "ideal" training set, where all high-frequency features appear at the same rate as in the real world.


>ask FB to fire him... (which thankfully won't happen)

Corporations don't fire this fast. Give it a couple of weeks and he will move to another position "for personal reasons", where he will rest-and-vest for a few months before finally being let go.



