
>The issues are more disconcerting, like senior executives trying to make sure that 'Trump never happens again'

That's not what she said. She said "Trump situation", and by "situation" one would assume she meant the massive abuse of social platforms to spread false and misleading information about Clinton. We know from scientific studies that false stories were reshared far more by Trump supporters than vice versa, and that a non-trivial chunk of this fake content came from overseas.

Isn't that a situation we don't want to repeat?

Also, I'd argue the definition of a Doctor is genderless, so if I ask Google what a doctor is, I don't want it telling my daughter "A doctor is a male human being who..."

This isn't "social justice", this is fact.

You can call for breaking Google up all you want, but AI ethicists at every company are concerned about cognitive bias creeping into machine learning, and correcting that bias will be perceived by some as putting a finger on the scales to 'correct the wrongs of history'.

I'm just telling you: this view goes beyond Google, and pretty much any honest data scientist is going to be looking to combat bias in their training data.
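To make that concrete, here's a minimal, hypothetical sketch of one thing 'combating bias in training data' can mean in practice: audit how often attribute pairs co-occur in a corpus, then reweight samples so rare pairs aren't drowned out. The toy records and field names below are my own illustration, not anyone's actual pipeline:

    # Hypothetical bias audit: count (occupation, gender) co-occurrences in a
    # toy corpus, then derive inverse-frequency weights so under-represented
    # pairs carry more weight during training.
    from collections import Counter

    samples = [
        {"occupation": "doctor", "gender": "male"},
        {"occupation": "doctor", "gender": "male"},
        {"occupation": "doctor", "gender": "female"},
        {"occupation": "nurse",  "gender": "female"},
    ]

    counts = Counter((s["occupation"], s["gender"]) for s in samples)

    # Rare pairs get larger weights, flattening the skew the raw corpus encodes.
    weights = [1.0 / counts[(s["occupation"], s["gender"])] for s in samples]

    for s, w in zip(samples, weights):
        print(s["occupation"], s["gender"], round(w, 2))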

Anti-social justice warrior people aren't going to like it, but the genie is out of the bottle; government isn't going to stop the march towards unbiasing our databases.



A doctress is female. If not "doctor", how are we to distinctly refer to the male version?

See also: waitress, actress


> That's not what she said. She said "Trump situation", and by "situation" one would assume she meant the massive abuse of social platforms to spread false and misleading information about Clinton. We know from scientific studies that false stories were reshared far more by Trump supporters than vice versa, and that a non-trivial chunk of this fake content came from overseas.

1) Can you please cite those "scientific studies"?

2) Even if what you said is true, I am not sure what it has to do with Google. Most of the influence happened on Facebook or other social media platforms. How can Google prevent things from happening again on Facebook?


That was certainly the most vague and most easily reinterpreted statement she made. I'm hoping for fuller context of the conversation so we can get a better understanding of her intended meaning.


>>> ""Trump situation", and by situation, one would assume, the massive abuse of social platforms to spread false and misleading data about Clinton, "

I don't know how one would assume the term "Trump situation" would imply "all the lies about Clinton on social media".

A more direct interpretation of her statement would be "we don't like Trump".

Now - of course you could be right, but I don't see how her words communicate that fact at all.

If the exec was concerned about 'all the lies about Clinton' - why didn't she just say that?

I don't think that 'getting rid of lies' is counterfactual, nor is it controversial.

If this exec was communicating something along the lines of:

"There was some interference and misinformation in the last election, and it possibly affected the outcome, we want to make sure people are well informed"

That'd be great! There would be no scandal, nothing to talk about. This would be Google just 'doing their job' in a fairly conventional manner. But she didn't use that language, she went much further.

>>> "I'd argue the definition of a Doctor is genderless,"

Nobody is going to argue with that.

But what does that have to do with offering up 'have babies' as the first suggestion when people type 'men can'?

Making sure the world is informed about that very specific 'fact', one that requires some contextualization, is an ideological problem, and it has nothing to do with 'un-biasing our databases': it's projecting a whole set of ideals.

So the 'first problem' with the 'Social Justice search algorithm approach' is that it will extend far beyond 'un-biasing'. It will go straight into ideology and narrative.

But the more subtle problem of 'un-biasing our databases' is the fact that it's going to be difficult to determine what bias is.

So instead of 'Doctor' how about - 'Truck Driver'. About 6% of Truck Drivers are female. Is this a 'bias' problem in society? Do we need to 'unbias' our data to make sure that every representation of 'Truck Driver' is 50%/50%?

What about race? In Sweden, there are almost zero Black doctors, mostly by virtue of the fact there are very few Black people in Sweden, of course. So, in Swedish search results, how do we represent the image of 'Doctor'? 15% Black? 50% Black?

What's the 'unbiased' racial representation of 'Doctor' in a country that is fairly ethnically homogeneous?

So the very mention of 'un-biasing databases' is a frightening, Orwellian concept, again, because it'll take a considerable amount of ideology for someone to determine what 'unbiased' even is.

>>> "Anti-social justice warrior people aren't going to like it,"

"Anti-Social Justice Warrior" people are not generally not 'Anti Social Justice' - and they are not likely opposed to having search results showing a Female, or Black Doctor, or Pilot (or even Truck Driver), but there's a very legitimate concern about the extent to which information is manipulated, and how it's manipulated.

The examples given in the referenced article, I feel, go beyond issues of 'getting rid of fake news' and far beyond merely showing a 'female Doctor' in a search for 'Doctor'.

The terminology she used was along the lines of 'correcting for historical injustice'. Terminology like that, and more directly 'un-biasing our databases', I interpret as 're-writing information to suit my view of the world and how it ought to be', which is surely 'Social Justice' in the minds of those tampering with the data, but it's not to others.

Given the magnitude of the influence of Google, this issue has to be addressed in a more open and transparent fashion, and it probably needs to be regulated (and publicly communicated) - and in any case there needs to be more competition in search.

Edit: I should add, doing things like carefully altering AI training data so that the tech doesn't erroneously, and in a 'biased' fashion, identify 'black people' as more likely to be criminal is totally fair game (as we learned with MS's 'racist' AI). But we should be thoughtful and deliberative about such things.
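For what that kind of 'fair game' auditing might look like, here's a hedged sketch on purely synthetic data: compare a classifier's false-positive rate across demographic groups. The skewed 'model' is deliberately contrived to show why the per-group check matters; nothing here reflects any real system:

    # Synthetic per-group audit: a contrived "model" that flags group 1 more
    # often regardless of the true label, exposed by comparing false-positive
    # rates per group.
    import numpy as np

    rng = np.random.default_rng(0)
    group = rng.integers(0, 2, size=1000)   # two demographic groups, 0 and 1
    label = rng.integers(0, 2, size=1000)   # ground-truth outcome
    # Biased predictor: flags positive at 60% for group 1, 30% for group 0.
    pred = (rng.random(1000) < np.where(group == 1, 0.6, 0.3)).astype(int)

    for g in (0, 1):
        negatives = (group == g) & (label == 0)
        fpr = pred[negatives].mean()        # false-positive rate within group g
        print(f"group {g}: false-positive rate = {fpr:.2f}")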


>A more direct interpretation of her statement would be "we don't like Trump".

How about we just not make any interpretation until Veritas, known for deliberate -- to be charitable -- "creative" editing in the past, releases the full unedited exchange without cherry-picked statements taken out of context?

"So, in Swedish search results, how do we represent the image of 'Doctor'? "

How about a GAN-generated image of a doctor with randomized features? The issue isn't really imagery anyway: past instances of bias in machine-learned text results have, for example, translated gender pronouns according to historical bias, so if you translate foreign-language text containing 'doctor', sometimes even text referring to female doctors, it ends up with male pronouns.
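That pronoun problem has a well-known mitigation in the literature (Bolukbasi et al.'s work on debiasing word embeddings); here's a toy version that projects a gender direction out of a word vector so 'doctor' stops leaning toward 'he'. The vectors are invented 2-D examples, not real embeddings:

    # Toy Bolukbasi-style debiasing: remove the he-she direction from a
    # word vector.
    import numpy as np

    he     = np.array([ 1.0, 0.2])
    she    = np.array([-1.0, 0.2])
    doctor = np.array([ 0.6, 0.8])          # leans toward "he" by construction

    g = he - she
    g = g / np.linalg.norm(g)               # unit "gender direction"

    debiased = doctor - doctor.dot(g) * g   # project out the gender component

    def cos(a, b):
        return a.dot(b) / (np.linalg.norm(a) * np.linalg.norm(b))

    print("before:", cos(doctor, he), cos(doctor, she))     # ~0.75 vs ~-0.43
    print("after: ", cos(debiased, he), cos(debiased, she)) # equal (~0.20)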

>I interpret as 're-writing information to suit my view of the world and how it ought to be'

How about rewriting it so it fits categorical definitions? The definition of a doctor does not specify gender or race, so any automatically extracted knowledge should not leak cultural bias.
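As a sketch of what enforcing 'categorical definitions' could mean mechanically: lint automatically extracted definitions for gendered terms whenever a curated schema says the concept is gender-neutral. The word list and schema below are illustrative assumptions only:

    # Hypothetical lint pass: flag auto-extracted definitions that attach
    # gender to concepts a curated schema marks as gender-neutral.
    GENDERED_TERMS = {"male", "female", "man", "woman", "he", "she"}
    GENDER_NEUTRAL_CONCEPTS = {"doctor", "pilot", "truck driver"}

    def leaks_bias(concept: str, definition: str) -> bool:
        if concept not in GENDER_NEUTRAL_CONCEPTS:
            return False
        words = (w.strip(".,") for w in definition.lower().split())
        return any(w in GENDERED_TERMS for w in words)

    print(leaks_bias("doctor", "A doctor is a male human being who heals."))           # True
    print(leaks_bias("doctor", "A doctor is a person licensed to practice medicine.")) # False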

> mostly by virtue of the fact there are very few Black people in Sweden of course

Geo-centric thinking. Google is a global service. People in Africa search for doctors. People in Asia search for doctors. So why should every query return a white male doctor? Shouldn't a global service either a) be localized to return culturally relevant results for the context of the region, or b) be internationalized so that it returns unbiased results that can be applied anywhere?

The problem is when a Swedish engineering office at Google ships a feature globally, and all of a sudden their own cultural biases turn up worldwide. This isn't just social justice, it's bad for business!

This is a classic example of why there is strength in diversity hiring, and monoculture is bad if you're a global brand.



