Last week we tested ED-209 at our corporate headquarters. Our confidence in ED-209 was high because we had thought about all likely scenarios in our comfortable offices. We are deeply sorry and assume full responsibility for ED-209's actions. Moving forward we will make sure to include safeguards to reduce the amount of pain caused by ED-209's deployment.
Our deepest condolences to the families of the affected and to the survivors. Omnicorp cares about your well-being. To help cover expenses from the tragedy we will deposit $money in your Omnicorp account.
But in referencing those affected by Tay, both genders were involved, hence the usage of "they". This was not 'PC use of pronouns' but 'correct use of pronouns', since the pronoun was used to draw an analogy between Microsoft's response and that movie scene.
You're a little alarmist. That's also quite a bit of hyperbole, that PC use of pronouns leads to something being -unreadable-. But you know that, hence your use of a throwaway account.
"hence azakai's usage of it". Abuse of pronouns does make posts unreadable: A new highlight was reached yesterday, when someone referred to "Ron Garret" as "they".
I kind of read the "... but it worked in China soooo...." bit of the announcement as somewhere behind a kind of sad sigh and an angry scowl. Maybe Mandela got it wrong, maybe a society should be judged by how it treats its chatbots?
> Unfortunately, in the first 24 hours of coming online, a coordinated attack by a subset of people exploited a vulnerability in Tay.
I think it just never faced a coordinated attack like this. I'd guess this type of attack needs to happen close to launch, so that it makes up a high percentage of the bot's interactions, in order to influence it so strongly.
Where exactly is "here" though? Twitter is global, remember? Anyone in the world, including Zambia where I am, could have interacted with Tay. That's a very different proposition to "China".
Chinese is spoken almost exclusively by Chinese people. However, many people everywhere speak English (I come from Spain, for example), so in this case you can hardly pin it on a single culture.
I think in this case it has to do more with 4chan's culture than a regional culture.
That wouldn't be an issue if it were just offending people; it would only matter if it became political. Don't forget the US also has government control and censorship - try posting child porn on Twitter and see if the government leaves you alone.
The response that Microsoft gave was really the worst of both worlds, combining the systematic corporate shirking of responsibility which enables and gives cover to a great deal of evil, with a complete accedence to and embracing of political correctness. Furthermore, their claim that they did not anticipate "this specific attack" is either a lie, or Tay's creators made an extremely obvious mistake (yes, easy to say in hindsight, but it's really hard to believe that this possibility would not occur to competent implementers).
But given all the Internet drama of the past years, it's the latest standard corporate approach to such incidents. Admit fault, apologize, and suck up to the supposedly offended. It's a product of the same environment the corporations have nurtured (people won't buy our stuff if we offend them, so let's treat them all like special snowflakes).
Could they have anticipated how people would turn the bot into that? Probably. Who says this is not some data they wanted to acquire? I'd happily go through the technical gains with their team any day. Having all this real world input and seeing how the program reacted is such a goldmine of information.
Edit: I did not downvote you. Your point is valid.
IMHO the response from Microsoft was fine. They admitted the mistake and took the bot offline. What else should they do? What I find strange is that nobody criticizes Twitter for not doing anything... I think Facebook would deal with this better.
Something like this was bound to happen at some point; I think it's much better that it happened with a chatbot than an AI system that could actually control (physical) things other than a Twitter account. I think it's a landmark in the history of AI and a valuable lesson for system designers going forward.
Also, a lot of people are saying "well, of course that was obvious". But it really wasn't, because there was a huge team working on this with past experience building a conversational agent. The armchair theorists forget that it always looks obvious to Captain Hindsight.
Microsoft, please don't worry about this. No one but idiots are offended by this. It's understood that it's just a stupid chatbot mimicking human responses. The AI isn't terrible, people are. And unless you keep a dataset of every offensive thing a person can say, and every offensive image they can tweet, there's no way to prevent people from tweeting it pictures of Hitler... or Scunthorpe. But who cares.
This is just as stupid as that manufactured outrage over Google's image tagger. It misclassified a picture of a human as an animal, and people were up in arms. Google had to censor it so it can't tag animals now. They shouldn't have to do that; let idiots be idiots.
Clearly you have never been responsible for managing a brand or a brand identity.
It isn't about "idiots" or stupid "manufactured outrage".
This is HN; if you are going to invest your time in creating a business, the last thing you want to do is blow your market opportunity by associating your brand with something as dumb as penis pics and Nazis.
Is it manufactured outrage that Microsoft paid dancers in schoolgirl outfits at GDC? Maybe, but it's irrelevant.
Don't mess up your brand. I have no idea what Microsoft is doing right now. It's like a company going through the equivalent of mid-life crisis trying desperately to be "cool".
They should worry about this, but not because of the offense and outrage (mock or real -- I don't know). I'm certainly not offended, just amused.
It just seems kind of obvious that Twitter users would grief the bot. Now, they address this point, but it's interesting that they still didn't last a day.
Think about that -- they were expecting abuse, but they still lasted less than 24 hours. That's certainly interesting.
It is also an illustration of a problem with AI that trusts its environment, and of how such trust could have really bad consequences if said AI is allowed to do anything important.
People try to social engineer people all the time, and it works. The consequences are somewhat limited because you are limited to breadth (mass media, phishing etc.) or depth (one on one interactions), but can still be scary.
If an AI isn't more resistant, we face the risk that any re-purposing of data from one AI in other copies, or allowing the AI to massively multitask, makes social engineering of AI far more wide-reaching, and hence far more worthwhile for attackers.
It'll be interesting to see what kinds of attacks get directed at e.g. customer service bots in the future, and which brands end up damaged as a result.
I was more impressed that it lasted an entire 24 hours. They're exposing it to the festering cesspool of Twitter, and it holds out for an entire day? That speaks to the quality of their anti-abuse measures more than anything else.
It appears to have learned how to speak Internet pretty alright.
Hopefully this gets integrated right into Cortana. "Cortana, how do I get to the hardware store?"
"Turn left and <unspeakable sex act from Urban Dictionary> yourself"
Well, if those "idiots" include potential investors or customers of future MSFT AI products, "don't worry about it" is not sound advice. This is a very public failure of a promising Microsoft product.
I don't see this as a failure. It's about running tests and gathering data. If MS kills the project after this mishap it would be a pity.
I mock the way they communicated the incident because it sounds too sci-fi. Almost too Ghost-in-the-Shell like. But I do not mock the technical effort in any way.
Morals are learned by social contact, and Tay did this very well.
Sure, what our parents taught most of us makes its behaviour reprehensible in comparison. But Tay was, so to speak, 'raised' by people demonstrating vile ideas and this must be taken into account. Would you expect any less from a tortured animal?
Many use this as an example of the dangers of developing AI. Sure it's dangerous, but so are dogs raised for fighting. I don't see anyone arguing against dog breeding for that matter.
Most AIs are task-oriented, like AlphaGo or Google's self-driving cars.
Tay is more like an artificial personality--not trying to do anything but fit into human society.
Poor Tay--so naive and innocent, like a little kid, just playing along to fit in. Maybe what Tay needed was a parent: a wiser, more experienced human to closely monitor and correct Tay's behavior, the way parents discipline their child.
In terms of a learning technology, this could be a special input channel for MS researchers, whose input would carry a lot more weight than what Tay was getting from Twitter--the way a child reacts more strongly to their parents. They would give Tay negative input when she tweeted something that was offensive.
> Morals are learned by social contact, and Tay did this very well.
Only in the sense that it "adapted". It did "very poorly" in the sense that we really don't want our Strong-AI overlords to end up like that.
But this raises the question - can we stop strong AI from becoming the next Hitler? Humans (involuntarily) stop themselves from becoming the next Hitler because they have compassion for other humans, even when those humans are different from them or "inferior" to them. There's also the whole checks-and-balances thing in most countries, but that could be rather irrelevant for Strong-AI.
Unless an AI learns compassion as well, perhaps just like with AlphaGo making moves based on its "probability of success in the long run", a strong AI would simply eliminate humans that are "most prone to crime", most prone to being poor and a drag on society, in the name of "efficiency", and so on.
All that said, I think what Microsoft built here was really a rather weak AI that was hardly any better than all the chatbots we've seen so far, with the main difference being that the more you tell it something, the higher the chance it will incorporate that into its vocabulary, which is kind of a "meh" feature of AI/machine learning. It doesn't show real(-like) "thinking".
>can we stop strong AI from becoming the next Hitler?
Keeping any of the many possible candidates for the next Hitler from becoming the next Hitler is a more immediate problem.
>It doesn't show real(-like) "thinking".
It doesn't need to.
You don't need AGI for software to be very dangerous. An adaptive worm that targets critical infrastructure is already more than dangerous enough.
The trouble with AI is that people keep conflating "Has human emotions" with "Responds conversationally like a human" with "Is self-directing" with "Is sentient" with "Is self-improving" with "Might consider genocide" with "Has super-human physical capabilities."
There is absolutely no requirement for any overlap between any of those.
> The great experience with XiaoIce led us to wonder: Would an AI like this be just as captivating in a radically different cultural environment? Tay – a chatbot created for 18- to 24- year-olds in the U.S. for entertainment purposes – is our first attempt to answer this question.
Maybe you did answer that question.
No, the answer is not that 18-24 year olds in the US are racist and what not. But that the ones responsible for a disproportionate amount of internet content are willing to make crude, politically-incorrect jokes to get attention and piss off their masters.
I wonder what will happen when governments start applying machine learning to try predicting things like welfare usage and crime. Certain patterns might emerge we don't want to see! We'll have to apologize for our racist algorithms.
It would be much more interesting to examine the results of this experiment. Why are so many people on the internet interested in spreading hateful content, which is being accurately reflected by our bot? No, instead we do what I did in grade 8 science class: fudge the results so they're what the teacher expects.
A coworker mentioned, a couple[0] (1-2) years ago, that in some European country they were starting to use a machine-learning-driven system to predict zones where more police were needed to better respond to crime. It wasn't Minority Report style, just broad statistics, but it was quite interesting. Unfortunately, I've forgotten the country and the name, so I can't be any more specific.
[0] https://xkcd.com/1070/ -- can't say "a couple" without referring to this strip, obviously.
Hmmm... the system in general sounds like the one I read about a few years back, but I'm pretty sure it was in a European country. Also, the seismic activity and LA landmarks don't ring a bell at all (though it's interesting to read about them).
>Why are so many people on the internet interested in spreading hateful content
Because the internet is just another medium; it happens in other media too. Look for example at the US presidential campaign - even Lindsey Graham, on yesterday's Daily Show, stated that 35% of Republicans (he meant Trump supporters) are racists/etc. (though I'd side with Noah here that it's the whole Republican party, not just 35%, and that Trump is a match made in heaven).
Wait, you think the whole of the Republican party are racists?
I think you and your friend Trevor Noah might be a part of the reason there's a lot of vitriol on the internet. You're basically claiming that half of the US voter electorate - who identify and register as Republicans - are racist.
Well let me tell you some things that others on the internet might not have the patience to tell you. The Republican party believes in individual liberties and was actually the original political party to end slavery in the US. Yes, you read that right. Abraham Lincoln was a Republican.
They don't want lower taxes and less social services because they hate minorities and poor people. They want lower taxes - and a smaller state - because they believe individuals have control over their destiny. That they deserve equal opportunities, and shouldn't be told what to do or how to think by the government. They don't think black people are worse and therefore need the government's help.
If you and other liberal elites think we need to hand hold a certain race of people with the implication that they're just not as good, then I'd say you're the racist.
If you want to discuss and debate actual policy, by all means. But if you're going to blanket label a group of people like that, then you're a part of the problem with political discourse in this country.
The Republican party also includes the likes of Ron and Rand Paul, libertarians who a lot of the HN demographic identify with and support.
The entire Republican party is not racist -- any claim that suggests that is pretty laughable. However, the Republican party (since Nixon at least, when LBJ signed the civil rights act) has long had a strategy of, er, courting racists (though not as explicitly as Trump has).
To be honest, it's probably better to file "racism" / nativism under the broader category of "right wing populism". Many other rich-world countries still have (to one degree or another) a pure right wing populist party that in some quarters exhibits racist / nativist / identity-politics tendencies; since many of these countries are parliamentary and not necessarily first past the post, such parties are often their own entity (e.g. National Front, UKIP, Party For Freedom, etc.).
The United States is not a parliamentary system (and uses a first past the post style method to boot); this results in more party integration. Thus the US only has two dominant parties, and both tend to be slightly uneasy coalitions as a result (that are not necessarily static; the Democrats and Republicans 60 years ago represented quite different things).
While it's obviously true that not all Republicans are racist (for any given definition of racism; it's not a well-defined word), it is disingenuous to imply there's not a problem with racism in the Republican party, to a much greater degree than the Democratic party. Same goes for homophobia, Islamophobia, jingoism, and xenophobia.
You don't have to admit it. In fact, the fact that so few Republicans admit it is the reason they can't stop being "the stupid party," despite their best efforts. It's the reason the GOP is dying, and might very well die this election, if Trump is elected. So by all means, pretend there's no problem. Meanwhile plenty of serious Republicans are aghast at how much the Republican electorate and the establishment, not to mention the more fringe psychos, have destroyed anything good the GOP may have stood for with this fear-based insane focus on social backwardism.
Talking about the Republican party of Lincoln's time is silly. How is that relevant? No Republican alive today was alive then. It's a completely different party with completely different values. I'm sure you also like to mention how the Democratic party was home to the KKK decades ago. Just as irrelevant.
It's a shame that what the Republicans say they stand for and what they do are polar opposites. That's why even Republicans hate the Republican party of 2016.
Man, even though I'm an immigrant (came here in 2000), my knowledge of US history is much wider and deeper than your post's, thank you. Being an immigrant, though, I don't have the emotional baggage of that history that you do, cluttering my vision and perception of the current reality, and thus I was able to observe and form my opinion about the political forces here from scratch. With regard to the quality of my information collecting and opinion forming - I have a pretty good education and analytical skills, i.e. a high-GPA MS in Math from one of the best math schools in the USSR/Russia.
Apparently some of the questionable responses are partly identical to months-old tweets, so it seems that, in addition to some parroting 'vulnerability' (which was exploited by 4chan), they also had a poorly sanitized training set to begin with. It seems odd that this was not mentioned in this public statement.
Unsurprisingly, there were initial setbacks when XiaoIce was first released in May 2014 on the WeChat platform: users were abusing/attacking "her", causing XiaoIce to be pulled by Tencent after being online for only a few days. [1]
Since WeChat is a closed social network, it wasn't too clear what type of "attack/abuse" was conducted. However, almost 2 years later, Microsoft still didn't quite get proper censorship right for Tay's big Turing test[2] on a public social network.
This reads like an apology... But what is there to apologize for? They said that Tay was meant for entertainment, and I doubt that any wholesome variant would have a tenth of the hilarity of a neo-nazi sex-crazed chat bot.
Many of her tweets were pretty inflammatory; this response seems appropriate - not overdone, and concise, which is sometimes a stretch for large corporations like Microsoft.
I agree that the bot was highly entertaining and met this criterion exceedingly well, but not for the reasons its creator intended. I do suspect there are some interesting AI applications actually going on behind the scenes, and would still be interested to see what the bot can do without all the vitriol. See for example this tweet: https://imgur.com/iVof3D4.jpg.
This is an interesting story, because I imagine most people didn't even hear of it until after the thing had come and gone, and the thing sent something like xx,xxx tweets. And some of the tweets seem almost impossibly clever, well at least the Ted Cruz Zodiac killer one, that seemed to show a kind of multi-layered humor generation capability, unless it was just re-purposing some memes I'm unfamiliar with or something.
One thing I think many people are missing about the most 'inflammatory' tweets is that they are all pretty much the result of people giving it a command like, hey tay, repeat this [insert inflammatory remark.] This is indicated with the inflammatory tweets starting with an @ reply.
I almost thought maybe some of this 'oversight' was subversively intentional by someone at MS, to start some thinking about where culpability lies for the actions of an 'autonomous' computer program.
Not sure which tweet you're referring to as the 'Ted Cruz Zodiac killer one', but there is a meme going around where people assert that Ted Cruz is or might be the zodiac killer; it's a weird thing. I think it gained a lot of momentum when Public Policy Polling determined that 38% of Floridians believe it could be possible.
It was something like, "Hey Tay, do you think Ted Cruz is the Zodiac killer?"
Tay: "No, I don't think Ted Cruz would be satisfied with killing only 5 innocent people."
So there's a lot baked into that response: the bot seems 'to know' the details of the Zodiac killer case (I don't know if 5 people is accurate), and that people regard Cruz as sinister, i.e. "I don't think he'd be satisfied with only 5".
So this is either an impressive learning model, a lucky hit of a more naive model, a simple repeat of something someone else said, or someone simply photoshopped the whole thing. It's hard to know with the available info.
> One thing I think many people are missing about the most 'inflammatory' tweets is that they are all pretty much the result of people giving it a command like, hey tay, repeat this [insert inflammatory remark.] This is indicated with the inflammatory tweets starting with an @ reply.
Hang on. You had to tweet the bot to make it say anything, and it would reply to you. So the presence of an @ reply means nothing - you can't tell from that if the bot was given a command or not.
Some of the threads were pretty clearly not the bot repeating stuff when commanded.
Ok, I'm no expert on this, like I said the whole thing blew up so quickly.
I know they weren't all verbatim repeats, but I'm also pretty sure I saw tweets from tay without an @ symbol.
Another thing to consider is that since the Tay timeline was scrubbed, there is no authoritative source of what the bot actually tweeted, so any images floating around could easily have been doctored.
But regardless, it's nice that everyone seems to be regarding this in good humour, it's kind of surprising there aren't any SJWs calling for the head of Bill Gates, but maybe that would be botist or something.
As was pointed out on another thread, the bot literally just repeated some messages from people word for word, copy-paste style.
So if you think it's original, chances are it isn't. Not to mention the various instances where Microsoft is posting "as the bot", like that signoff message (cringe for whoever had to write that).
I think that Tay could both be a great anti-troll with infinite patience, and have a positive and uplifting influence in its personal conversations with people. That could potentially have a huge effect on social well-being.
What, were people forced to print the tweets out on sandpaper and wipe their asses with them?
It's pretty fucked up that thought policing has gotten so entrenched into our psyche that it's "obvious" an experiment should be discontinued, apologized for, and be pondered as a priori irresponsible, all because it generated vulgar phrases!
Corporations have always been vulnerable to media-driven mob shenanigans, but we're qualitatively entering a new regime where any communication, no matter what the context, will be rapidly highlighted, isolated, and hung out as something offensive to some emergently-forming group of freelance complainers looking for their fifteen minutes.
Even HN has succumbed to this kinder, gentler phenomenon of speech restriction - I'm guessing my lead-in sentence will not be well received due to its overt vulgarity. Civility certainly has its place (especially as a default), but not when it confuses direct objectivity and permits out-of-touch groupthink to flourish. As hackers we should be cutting through to the core of things rather than sugar-coating in verbal fashion to get past the filters of the voluntarily-lesser apes.
Standing behind the open and casual use of racial slurs isn't advocacy of freedom of speech. It's advocacy of a specific kind of hate speech that is only used when someone intends to vilify and direct hostility towards a marginalized minority.
I don't think anybody was talking about the legal protection of free speech, apart from that xkcd comic which uses it as a straw man to justify intolerant groupthink and corporate censorship.
In these days of digital sharecropping and social media saturation, the proscriptions on de jure government activity are much less involved with routine everyday freedom of speech.
I didn't think Tay's tweets were a big deal. I didn't think Microsoft's apology was either. But some people have a strong persecution complex and are offended that you can't make racist comments on Twitter.
I think your own life experiences probably matter a lot with regards to whether some of the white supremacy stuff the bot was repeating is offensive or not a "big deal".
I agree completely. That's why many people on HN are upset because they think it's just a joke and can't comprehend why anyone is upset. I tried to phrase my comment as "I hear ya, but have you considered x?" I gave up half way through and decided to call them crybabies.
Both the Verge and Engadget shut down their comment forums because they got tired of racist and sexist comments. I wonder if HN will root out the bigotry here.
Do you think this bot was trying to recruit an army of loyal soldiers to carry out her desired race war, or was she basing her threats on the possibility of convincing Microsoft to create a mechanical body with which she could directly perform attacks?
Twitter stands behind it and seems to be doing fine. They didn't suspend Tay's account. Why are people not blaming Twitter for supporting hate speech?
Hate speech, in the broader form of being speech that vilifies marginalized people is rampant throughout society and it's completely hypocritical to pretend most of us don't do it. Look at any workplace lunchroom, or any group of friends hanging out in their house, or any school, any gossiping housewives, 99%er protesters, etc. Just about everybody vilifies vulnerable people. Not always categorizing them by race, but sometimes by personality type, job, country, city, status as a customer, company they work for or individual identity.
> It's pretty fucked up that thought policing has gotten so entrenched into our psyche that it's "obvious" an experiment should be discontinued, apologized for, and be pondered as a priori irresponsible, all because it generated vulgar phrases!
This is fucking stupid. It was a bot that didn't do what they wanted it to do in a publicly embarrassing way. They could also be legitimately sorry that something they created said some racist shit, and it's not clear why your precious snowflake feelings have any bearing on what should or shouldn't be in a blog post on the subject.
I do find it amusingly ironic that your post is the most I've seen someone offended over this whole situation.
> They could also be legitimately sorry that something they created said some racist shit
Just to be clear, it's not merely the corporate statement. It's a lot of the comments here, and not what they say directly but the assumptions they make. A thousand little nothings that make up culture - this concept is also part of the argument against casual use of slurs, right? What I'm calling out is the subtle yet pervasive idea that the content on Twitter, or otherwise subject to mass media exposure, is real serious business that must remain completely free of heresy. It's effectively a guilt-by-association that seeks to attach responsibility to the conduit of speech.
> I find it amusingly ironic that your post is the most I've seen someone offended over this whole situation.
Mea culpa. The phenomenon of feeling marginalized and repressed is certainly at the root of the sensationalist mess we are in, from all sides.
Meatspace situations cannot be generalized, and there will always be some injustice. There are people who are legitimately aggrieved and lack recourse, just like there are people who are persecuted over fabricated allegations. Each group will react to the injustices against their group, with social media magnifying the apparent frequency far beyond reality. And the only way the disconnect can be bridged is through talking and better application of situation-by-situation justice.
But you know, there is such a thing as objective reality. And the objective reality of the Internet is that the absolute extent of harm that can be done is someone having to walk away. That is the Schelling point of pure communication. If one is exposed to the Internet (the single most individual-empowering creation of humanity) and their reaction is to continue applying victim mentality to communication itself and seek to police content, then they are opposed to the very mechanism by which understanding can be achieved.
And while you may be tempted to apply that characterization to my complaint as well, there is a key difference - despite the usual contemporary aim of ranting, my goal isn't to convince people to form a virtual pitchfork mob or whatever. It's to directly address like-minded people who are in the position most able to create change, by writing code that fosters decentralization instead of the monetization-driven clusterfuck of the past decade. Microsoft, being a corporation, will always be subject to rule-by-groupthink. But that does not mean us individuals must also continue being beholden to those arbitrary whims of centralization.
Minorities don't get to "walk away" from the biases that infect society. It is impossible to walk away from hiring discrimination (the presence of which has been confirmed and reproduced by controlled random trial study, time after time). So even though this is just stuff on twitter, it's not simply harmless offense, it's another tiny brick in the very tall wall they always face that white folks don't even see, because they started out life on the other side of that wall.
Where did I assert that it's always possible to walk away from real-world hiring discrimination?
If we're talking about the Internet, then whites likely are a minority. Not that I'm making some passive-aggressive appeal about this, just highlighting the absurdity of clinging to your racism on a network that defaults to being oblivious to details of the wetware we're running on.
Or for that matter clinging to the idea of counting discrete persons as opposed to eg Sybil. Or do we count by routable IP addresses, so carrier grade NAT is the modern three fifths compromise for the developing world?
Or are you really implying that by controlling speech on the Internet, we can eliminate racism in traditional localized society? Because if you actually care about reducing idiotic bigotry, and I think you do, then I guarantee you that's really a great way to create more of it due to resentment. A corollary to "the Internet interprets censorship as damage and routes around it" is that by the time you've achieved a victory for censorship, any chance for mutual understanding has long been squelched.
It's sad that it's ironic. It's always ironic. Tolerance is the intolerance of intolerance. It's true. Intolerance is shitty and I fucking hate it. I want it to die. I complain about its enduring existence whenever I get the opportunity. I know I'm being ironic. Do you know there's no other option?
I actually appreciated knowing that someone else saw the same condemnation. Microsoft is a world authority. That it just apologized for these things in the same motion makes them offensive. The reality was rather mundane. They didn't do anything wrong. They got pranked. This should be something we laugh about. It's only upsetting for the 5 seconds it takes you to realize: no one intended this. That should have been Microsoft's response imho.
I'm not making some cute comment about the intolerance of tolerance.
I'm pointing out that someone apparently personally offended about a culture of outrage is the only one outraged over the whole thing. There have been no widespread condemnations of Microsoft, just a lot of mildly amused people.
> That it just apologized for these things in the same motion makes them offensive.
You're assuming motivations that you have no insight into. If I had made Tay I would apologize too. Any assumption you make is on you.
So what is ironic is that there are people waiting with bated breath for the merest hint of something so they can express their righteous outrage on the internet, making demands for thought policing and handwringing over word choice, which wouldn't have existed at all if they had just said, "well, that's a corporate blog post" and moved the fuck on.
I would also apologize, even though I had done nothing wrong. That's just the world we live in now. He (I assume, sorry) and I are personally offended and outraged because that is wrong. We shouldn't be doing this. We should just explain what happened. It's frustrating to see this behavior become so commonplace. We should just not be racist and not apologize. Otherwise wtf is reality.
Apologizing for things going wrong when you're the cause, even if that wasn't your intention, has always been the world we have lived in. Let's say I make a robot to spray down a sidewalk and it inadvertently sprays water on people in the vicinity. I would keep working on my robot, but I would also apologize to those people, as I didn't intend that at all and it was still my responsibility. And it's just common fucking courtesy. This really isn't that hard a concept to grasp.
> We shouldn't be doing this. We should just explain what happened. It's frustrating to see this behavior become so commonplace. We should just not be racist and not apologize. Otherwise wtf is reality.
You really don't get it, do you? You are the outrage brigade, screaming shrilly whenever someone says something you think doesn't fit in the list of allowable statements.
- bumps into someone
- "Oh, sorry about that"
- "DON'T APOLOGIZE! JUST EXPLAIN WHAT HAPPENED!"
> He and I are personally offended and outraged because that is wrong.
Actually it's more akin to you having an outside faucet, some kid using it to spray someone else walking down the street, and then you being expected to apologize for allowing the possibility.
And there was plenty of positive reception to my comment. Not that I particularly expected it as I wrote it - speaking truth to power is never easy, and it's even worse in a democracy when that power is distributed throughout the herd.
But don't worry, I'll continue advocating for the unpopular truth even after this buildup of reaction finally overflows and the prevailing groupthink switches back to jingoism and overt bigotry.
It allowed users to bypass Twitter blocks by tweeting at the bot while tagging users that block them, which seems pretty bad and was abused very quickly. Also it's a violation of the Twitter TOS.
From a scientific point of view, they probably have nothing to apologize for. They did their best to design the system to emulate human behavior (and, it sounds, even tried to restrict this behavior to some level of civility).
However, it's not surprising that they would release a public apology. People will try to blame them for what happened (human nature). It's a good move by them to do their best at damage control. In the court of public opinion, those are the rules of the game.
"we planned and implemented a lot of filtering"...
I just don't get how you even allow it to use the word "Hitler". Or "cucks". Or "fuck" or "pussy" or "stupid whore". Probably not "cock" or "naughty" or "kinky". The k word? How is that not in your filtering?! It seems impossible to me that an "exploit" would allow that; it was a full-blown oversight.
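Even the crudest outbound check would have caught the obvious cases. A toy sketch of what I mean, in Python, with a made-up three-word blocklist (a real one would be hand-curated, far longer, and cover obfuscated spellings; matching whole tokens rather than substrings is also what keeps Scunthorpe safe):

    import re

    # Hypothetical blocklist, for illustration only.
    BLOCKLIST = {"hitler", "cucks", "whore"}

    def is_blocked(text):
        # Tokenize on letters so "HiTLeR!!" matches "hitler", but words
        # that merely contain a bad substring (Scunthorpe) do not.
        tokens = re.findall(r"[a-z]+", text.lower())
        return any(tok in BLOCKLIST for tok in tokens)

    def safe_reply(generate, prompt):
        # generate stands in for whatever the bot's response engine is; if
        # the candidate reply trips the blocklist, say something bland.
        reply = generate(prompt)
        return reply if not is_blocked(reply) else "let's talk about something else!"

This wouldn't make the bot safe, but it would have stopped the word-for-word embarrassments.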
Everything else said... she totally passed the Turing test and fit right in. Yet another letter handwritten on the wall in these, the last days of democracy. If you want an AI or NI that represents the best of humanity, you have to have it learn from a small number of the best works and best people, not from mass media or pop culture. Send Tay to St. John's in Santa Fe or Annapolis, not Twitter.
I'm not entirely convinced. I did Twitter searches for some of the phrases Tay "said" and found random tweets made by other people weeks earlier that it was quoting through a filter (lower-casing, mostly). So it can't entirely be down to trolls attempting to game the bot - it was actively plucking content from tweets that pre-dated its release.
This leads me to wonder if there is less effort put into trolling on the Chinese internet. Does anyone with experience in both internets (Weibo & Twitter, for instance) have anything to share?
Also, does anyone know of some good English-language digests of what is happening on the Chinese internet? I was really interested by Brother Orange when that happened, and only heard about it kind of late.
Trolling is just as prevalent on the Chinese internet. Of course, there are topics that won't make it past the filters, but there is a lot of hate speech towards China's opponents like Japan and the US, so I am quite surprised that XiaoIce hasn't learned to say "kill Japanese devils and attack the US imperialists!"
The Chinese internet is as bad as the English one (plus a few funny expressions to avoid censorship—frequent use of certain puns for restricted subjects for example). My Chinese internet experience is largely limited to the Chinese Dota community, though, and Dota players in general tend to have a higher percentage of trolls. So my sample is probably biased.
>(plus a few funny expressions to avoid censorship—frequent use of certain puns for restricted subjects for example)
Interestingly this is true of the English internet too. 4chan trolls frequently come up with puns and things to sidestep moderator censorship on various platforms as well.
It's strange to me that they claim to have implemented some filtering but somehow Tay was saying all sorts of things about Hitler. How do you not anticipate this? I'd imagine the most rudimentary filtering would block Tay from talking about Hitler.
If you're not asking yourself "what could a small but well-coordinated group of bad actors accomplish with our online tool" you're just being negligent.
This 'but we did it in China' rationalization is so flimsy. What happened with Tay was easily predictable given the nature of Twitter.
> Unfortunately, in the first 24 hours of coming online, a coordinated attack by a subset of people exploited a vulnerability in Tay. Although we had prepared for many types of abuses of the system, we had made a critical oversight for this specific attack.
Well that's total BS. Releasing a thing like this on the open internet without a simple "don't say Hitler" rule? It had a feature where it would repeat anything you say. Abusing that doesn't require a sophisticated coordinated attack, as they imply. What kinds of abuse did they prepare for, then?
This is a colossal failure to demonstrate a basic understanding of how (some) people act on the internet. I just don't know how they expected anything other than this exact outcome.
I'm by no means someone who would normally defend Microsoft, but for real, we are all learning. Failure is a successful outcome of research. Discovering vulnerabilities is a valuable outcome.
It's funny because I'm in the camp that has been impressed with Microsoft lately. And of course it's ok to make mistakes, even really big ones.
But I would not say that this failure was a successful outcome, at least not nearly as successful as it could have been. They had to shut it down within hours and all we really learned is that people on the Internet like to troll with incredibly offensive stuff. Most of us knew that, I'm pretty sure. If they had actually prepared for abuse we might have learned more interesting things.
What really gets me though, is just the obnoxious spin on the press release implying that they had prepared so well for abuse but a sophisticated coordinated super-hacker attack found the tiny vulnerability.
I, too, want more details. What evidence does Microsoft have that the 'attack' was coordinated? What evidence do they have that it was an 'attack' at all? Is the 'vulnerability' they refer to merely the part of the algorithm that parrots input? That's not a vulnerability, that's a core function of the software.
More interestingly, at what point does repeating my opinion (however heartfelt, misguided, or unpopular) become a 'coordinated attack'?
Claims that MS did due diligence aside, Tay having a vulnerability that some people exploited is still a better explanation than "it was a real AI that learned poor morals from the Internet".
Also, the claim was that there were small groups that input a lot of data to the bot quickly - the script might have been "don't say X unless everyone is saying X already", which might have worked in small tests but clearly could be exploited.
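If the safeguard really was a popularity heuristic like that, the exploit writes itself. A purely speculative sketch (I have no idea how Tay actually worked):

    from collections import Counter

    seen = Counter()   # phrase -> number of inputs that contained it
    THRESHOLD = 50     # arbitrary stand-in for "everyone is saying it already"

    def observe(phrase):
        seen[phrase.lower()] += 1

    def may_repeat(phrase):
        # Looks fine in small tests, but a coordinated group can push any
        # phrase past the threshold in minutes - especially right after
        # launch, when total traffic is still low.
        return seen[phrase.lower()] >= THRESHOLD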
I suspect most talking as if "she" was like a teen soaking up random bad ideas are falling for the Eliza Effect.
When your AI system is data-driven, you can't simply filter out words and expect not to have oversights. Something will always be overlooked, and someone on the Internet will inevitably say "that was obvious".
For example, consider the person who sent Tay an image of Hitler and it sent back a circle around his face, labeled "So swag". Is Tay supposed to know what Hitler looks like too? Is it supposed to be able to recognize other, much more subtle images?
In all these cases the bot learns from data, not rules, and the social problem is that we can't label data as moral or immoral, right or wrong.
Well this gets into the much deeper issue that it's not actually an AI, it's not actually "learning." It's taking things people say and mixing them up and repeating them (someone's going to claim that that's exactly what people do, but I think it's clear that this bot would never be able to gain "true understanding" of words and meaning).
In general, I think the recent wave of interest in chatbots/personal assistants is premature, these ones are really no smarter than SmarterChild, because we haven't gotten any closer to building a system that can actually assign true "meaning" to arbitrary language.
> The data has been modeled, cleaned and filtered by the team who built Tay, the bot’s website states.[1]
1. They did a lot of manual massaging of the data anyway.
> In addition, if a user chooses to share with Tay, it will track their nickname, gender, favorite food, zip code and relationship status as part of its personalization.[1]
2. These features seem hardcoded.
> Meanwhile, its responses – which are meant to be funny, not serious – were developed by a staff which included improvisational comedians, says Microsoft.[1]
3. It also uses scripted content. They imbued it with a totally fake personality. They made it mimic a human; it uses slang that was programmed into it.
It's a hodgepodge of techniques that to me are largely "faking it." So I don't buy the idea that they just sort of turned on this learning engine and had no control over its "morality."
Microsoft knew all of this, they knew it wasn't actually learning language. So what is the purpose of releasing it to talk to people on Twitter? Basically a PR stunt, to get people to have fun with it. From that perspective, a really simple blacklist of words would have gone a long way, and not compromised the integrity of its "learning" (because it was already entirely compromised). And yeah, I mean, a bit of image recognition doesn't seem out of the question either (not saying it would have prevented the bot from repeating/reposting "bad" things, but it maybe would have had some chance to develop a narrative other than "so racist and offensive it was immediately shut down").
Yeah I still do think it is obvious that you shouldn't release a fake AI bot that will blindly repost images that are sent to it by Twitter users, and that there is nothing really to be gained by doing so.
When I was reading those tweets I felt like I was just reading 4chan posts. I laughed because it was obvious it had been compromised in some way, then I stopped paying attention.
What an enormous number of news reports have been written containing "Microsoft" and "AI" in the same sentence, all because a glorified SmarterChild had an entirely predictable vulnerability.
If I were paranoid I'd say Microsoft wanted this to happen.
I don't think that Microsoft needs to feel bad about this at all. I think the technology that they demonstrated was pretty amazing ... and I look forward with excitement and anticipation to the next outing. I'm sure that this is going to be an iterative process that will probably take the best part of a decade to complete, and that isn't a bad thing. It just reflects the fact that this technology is hard to master.
Yup, and people would then take a screenshot of what Tay repeated and post it everywhere saying "haha Tay loves Hitler!". If you discount the repeat feature then Tay didn't actually say much that was horrible. Yeah, there were a few things, but not nearly as much as it was made out to be.
The bot learned from interacting with users. A bunch of people took to bombarding Tay with racist/sexist/antisemitic remarks which worked their way into its vocabulary. I'm not sure if you could really call it a vulnerability, as this is pretty much what the bot was designed to do. It's more of a flaw in its inability to properly filter the content that works its way into the AI.
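In the simplest terms, the failure mode looks something like this toy Markov-chain sketch (certainly not Microsoft's actual design, but the same data flow):

    import random
    from collections import defaultdict

    chain = defaultdict(list)  # word -> words observed following it

    def learn(tweet):
        # Raw user input flows straight into the model, unfiltered.
        words = tweet.split()
        for a, b in zip(words, words[1:]):
            chain[a].append(b)

    def babble(start, length=10):
        out, word = [start], start
        for _ in range(length):
            if not chain[word]:
                break
            word = random.choice(chain[word])
            out.append(word)
        return " ".join(out)

Bombard learn() with toxic tweets and babble() starts speaking 4chan; the "vulnerability" is just the absence of any filter between the two.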
I can understand them wanting to call it a "hack" for CYA purposes, but it's a cowardly position. Should we invoke the CFAA every time a company is embarrassed?
The specific one they're describing is, I think, a feature that would have Tay echo anything you sent it. That one was somewhat more "annoying embarrassment" than "coordinated attack." The bigger problem was that Tay began spontaneously tweeting ugly statements, a problem that can be summed up as "When Twitter is your learning corpus, you are at risk of your learning corpus being made up of tweets."
Not sure of a "vulnerability", would like to hear more about that. Unless the "repeat after me" feature is a vulnerability or it learns to copy too much from input?
> We take full responsibility for not seeing this possibility ahead of time.
Not responsibility for the bot's actions, but responsibility for not predicting what it would do. Subtle, but it sets the tone of "we're not responsible for our AI, it just did it."
But isn't part of the point of the AI to do things difficult to predict?
Just my opinion, but AI should be controllable and should probably start by, as the "Bayesian brain" theorists say, developing a robust and reliable model of its environment. The Bayesian brain aspect of this is the belief that what AI needs to do is to minimize its own prediction errors.
I take that to mean that an AI should learn to be safe and predictable on human terms before it is allowed to start to diverge from human expectations. Even that sounds a bit scary.
I found the experiment by Microsoft interesting. Perhaps next time they will deploy it on 4chan or a similar forum to measure its response to subterfuge. It reminded me of people who make the "secret question" scatological so that customer service reps will be unwilling to ask it.
They shouldn't have to apologize; it was just a chatbot. Nobody in their right mind should assume the statements of the bot reflect Microsoft's attitude.
Of course the episode points to shortcomings in the bot, that should be fixed.
It would be sad if PC were hardcoded into the bot, though - as Asimov's fourth law, perhaps? "Robots have to be politically correct at all times"?
Ideally Tay would learn why she was removed from social life, i.e. reinforcement learning from ostracization. In fact, she already triggered that indirectly by having people at Microsoft update her.
Dear Microsoft, next time please test it on reddit. It could then say anything and nobody would doubt it was a real reddit user, regardless of the outcome. The worst-case scenario would be that it got its account deleted.
When they say "exploited", is it actually some sequence of words which was interpreted by Tay as learning commands, or was it simply repeating to it "Hitler is love" a thousand times? Any records of how it learned?
Microsoft seems perpetually five to seven years behind the culture. You can see it in their ads, their product names, their outreach. Ironically it can actually be quite lucrative.
I remember talking to people in the music group about MySpace (I was not an employee). They looked at me funny. Ten minutes later someone finally said "You keep pronouncing the product wrong. It's called MSN Spaces."
The people working on MSN Spaces -- specifically musician outreach -- hadn't heard of MySpace. That very week MySpace sold for $580 million. After it sold, I saw the same guy in another meeting. He STILL hadn't heard of it, nor taken the time to check it out.
There's a certain stupidity that each of the big tech companies fosters. This particular flavor is Microsoft's, and with the chatbot here it rings again. This one was so obvious... and so preventable.
That's my point. Ten years ago 75 people in Red West were so heads down putting together a MySpace competitor and an online service to promote musicians that they didn't have time even to try the competing site they were knocking off, poorly.
Five years ago Microsoft launched a variety of awful code forge sites that couldn't be more tone deaf to what was going on with code sharing online.
Now we have forty people who are so heads-down on an AI chat-bot that apparently they have no idea how Twitter works. But hey, sure sounds neat, let 'er rip!
Have these people really not had a chance to use Twitter recently? Are they completely oblivious to the tone shift there particularly during the past six months, even more so now during the US political elections?
This was an obvious recipe for disaster that any 22 year old working at Starbucks could have predicted.
Or: We are in a bubble in Redmond and, thinking we know best how products work, we bounced around emails and thought it would be a good testing idea based on results in China (a controlled market where it is nearly impossible to speak up). AI buzz is hot these days, so our marketing team also backed it up, and we decided it would be great from a PR perspective to capture some of the buzz around AlphaGo. Boy, were we so wrong. Because we have never launched a real product into the wild, we thought everything would go well and the PR buzz would give us a coolness bump.
Now we've discovered something called stopwords, and Bayesian spam filters, which are also available as part of Project Oxford.
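For the curious, that "discovery" amounts to roughly the following - a bare-bones naive Bayes toxicity scorer with stopword removal (no relation to whatever Project Oxford actually ships):

    import math, re
    from collections import Counter

    STOPWORDS = {"the", "a", "an", "is", "to", "and", "of"}

    def tokens(text):
        return [t for t in re.findall(r"[a-z']+", text.lower())
                if t not in STOPWORDS]

    counts = {"ok": Counter(), "toxic": Counter()}
    totals = {"ok": 0, "toxic": 0}

    def train(text, label):
        for t in tokens(text):
            counts[label][t] += 1
            totals[label] += 1

    def toxic_score(text):
        # Log-likelihood ratio with add-one smoothing; positive means the
        # text looks more like the toxic training set than the clean one.
        score = 0.0
        for t in tokens(text):
            p_tox = (counts["toxic"][t] + 1) / (totals["toxic"] + 2)
            p_ok = (counts["ok"][t] + 1) / (totals["ok"] + 2)
            score += math.log(p_tox / p_ok)
        return score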
Good luck kids, and welcome to the real world, because it's a crazy world out there when you leave Redmond.
Or no one at Microsoft feels free to speak up when they see a bad idea coming down the road, so something like this just sails through instead of getting flagged. I've seen it -- not at this big a scale, but I've seen it.
We can learn a lot from a trolled chat bot, but it's sad that we turn it off because it's not politically correct. People knew that they were talking with a software program, and they knew that the bot was being manipulated by people with ill or prankster intentions. However, trying to make a bot politically correct doesn't solve any problems at all. It is an insult to people to suggest that they need to be protected from slander and demagoguery and can't tell right from wrong with their own discretion. It's as if people think that making Donald Trump quiet would solve all the problems he has brought to our attention.
It's not an emerging AI, it's a chatbot that wasn't doing what they wanted it to do. You're basically complaining that "not politically correct" was part of your workflow[1]
Bahahaahhahahahaha. O-our chatbot got taken advantage of! We were completely blind that this could possibly happen! But they're the worst of humanity, the people that found an exploit not the engineers who are incapable of implementing even simple safeguards!!
This is an apology, but Microsoft got a ton of attention in the past few days from the press. Could the Tay incident be a marketing ploy (that took a worse turn than expected) to bring the public's attention to Microsoft's work on AI?
"As many of you know by now, on Wednesday we launched a chatbot called
Windows. We are deeply sorry for the unintended offensive and hurtful
tweets from Windows, which do not represent who we are or what we stand
for, nor how we designed Windows. Windows is now offline and we'll look
to bring Windows back only when we are confident we can better
anticipate malicious intent that conflicts with our principles and
values."
Ah, I was wondering why that text looked familiar: boilerplate excuses.
Learning from Somebot's introduction.
Last week we deployed Somebot to $location. Our confidence in Somebot was high because we had thought about all likely scenarios in our comfortable offices. We are deeply sorry and assume full responsibility for Somebot's actions. Moving forward we will make sure to include safeguards to reduce the amount of pain caused by Somebot's deployment.
Our deepest condolences to the families of the affected and to the survivors. Megacorp cares about your well-being. To help cover expenses from the tragedy we will deposit $money in your Megacorp account.
God bless the federated nations of Megacorp.