We need a word or phrase for this phenomenon, where we attempt to substitute human pattern recognition with algorithms that just aren't up to the job. Facebook moderation, Tesla Full Self-Driving, the movie War Games, arrests over mistaken facial identification. It's becoming an increasingly dystopian force in our lives and will likely get much worse before it gets any better. So it needs a label. Maybe there's a ten-syllable German word that expresses it perfectly?
"Malgorithm" is used by Private Eye (the British satirical/political magazine) to highlight adverts auto-generated inappropriately to accompany articles on news and other websites.
Yeah, it's definitely the most sellable one I've seen.
I don't like the implication that it's the algorithm that's malicious rather than the person who wrote it (no algorithm is inherently malevolent or benevolent, in my opinion; it's just an algorithm), but I also know this distinction is lost on the vast majority of people, and "malgorithm" gets the gist of what people mean across very well.
While "malevolent"/"malicious" definitely has a "wicked" connotation, you also see it in words like "maladapted" or "malodorous" which are just "bad" without the "wickedness".
That's still dumping a "bad" on the poor old algorithm though, which has done nothing wrong simply by existing and doing what it is programmed to do. Algorithms aren't bad, it's the programmers who write them and the managers who decide they should be written who are bad when things like this happen.
I don't see how you could believe this point, unless you think things like "There's no bad music, only bad musicians" as well.
Perhaps an elucidating counterpoint is an algorithm written in such a manner that it is deliberately worthless, as a joke (e.g., StackSort, Bogosort). Obviously they're not the result of a bad programmer, they're just an inherently bad algorithm.
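For concreteness, here is a minimal sketch of bogosort, the canonical joke algorithm: precisely defined, guaranteed (with probability 1) to terminate with a sorted list, and inherently terrible by design.

```python
import random

def bogosort(items):
    """Deliberately worthless sorting: shuffle until the list happens
    to be sorted. Expected number of shuffles grows like n * n!, so
    it is an inherently bad algorithm, not a badly written one."""
    items = list(items)
    while any(a > b for a, b in zip(items, items[1:])):
        random.shuffle(items)
    return items

print(bogosort([3, 1, 2]))  # eventually prints [1, 2, 3]
```

Nobody would call its author a bad programmer; the badness is baked into the algorithm itself.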
>unless you think things like "There's no bad music, only bad musicians" as well.
I actually do though, and I say that as a musician myself. These things are inherently subjective, writing good music isn't just a matter of how closely the musician adheres to a pre-determined set of rules. The qualities of goodness and badness exist in the minds of the creators and the audience rather than being attached to the music itself in some sense. Any attempt to classify "good" versus "bad" music in the sense people usually understand it is just an appeal to authority fallacy, the only thing that makes music good is "do I personally enjoy listening to it or not?". You can try to classify music based on how closely it fits a genre's set of rules but this quickly breaks down into absurdity in practice (for example, acts like the Grateful Dead which span many genres).
Not really, the fox crap that smells awful to me smells wonderful to a golden retriever. The "badness" of the smell is entirely down to the nose that's smelling it, the subjective experience of smelling comes from the mind rather than the particular chemical compounds which we understand as a smell.
That subjective information which describes the badness of a smell doesn't exist within the smell itself, it exists within the mind.
Hmm? I don't think an algorithm has to give "correct" answers, it just has to be precisely defined. For example, one could say "For this problem, a greedy algorithm yields decent but suboptimal answers."
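A classic toy illustration of that point (denominations chosen deliberately to defeat the greedy strategy): with coins of 4, 3, and 1, a greedy coin-change algorithm is precisely defined yet yields a suboptimal answer for amount 6.

```python
def greedy_change(amount, coins=(4, 3, 1)):
    """Greedy coin change: repeatedly take the largest coin that fits.
    Precisely defined, always terminates, but not always optimal."""
    used = []
    for coin in coins:  # coins given in descending order
        while amount >= coin:
            amount -= coin
            used.append(coin)
    return used

print(greedy_change(6))  # [4, 1, 1]: 3 coins, while the optimum [3, 3] uses only 2
```

By the "algorithms must give correct answers" standard, this wouldn't count as an algorithm at all, yet every textbook calls it one.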
Merriam-Webster online says: "a procedure for solving a mathematical problem (as of finding the greatest common divisor) in a finite number of steps that frequently involves repetition of an operation" and "broadly : a step-by-step procedure for solving a problem or accomplishing some end".
Algorithms solve problems. Wrong answers are not solutions to (i.e. do not solve) a given problem. Hence, algorithm implies that it provides correct answers within the parameters of the problem.
Disagree. We see this phenomenon all the time: right solution, bad input data; right solution, wrong problem; worlds turned to grey goo by replicators executing a technically correct algorithm.
The definition of algorithm is wider than you think. As your parent poster noted, "greedy algorithms" exist (as do many other algorithms which provide suboptimal answers). You can easily verify this by googling.
Scunthorpe problem [1] is used to describe the false positives for auto filters, which are often results of naive substring matching. In a way, the current problem is similar, but on the semantic level.
However, it doesn't cover other cases for other AI mistakes you mentioned, like self-driving.
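The naive substring matching behind Scunthorpe-style false positives can be sketched in a few lines (the banned-word list here is illustrative, not any real platform's):

```python
def naive_filter(text, banned=("cunt", "sex")):
    """Naive substring matching: flags any message containing a banned
    string, ignoring word boundaries and context entirely."""
    lower = text.lower()
    return [w for w in banned if w in lower]

print(naive_filter("Welcome to Scunthorpe!"))    # false positive on the town name
print(naive_filter("Essex and Sussex results"))  # false positive on county names
```

The chess case is the same failure one level up: the matching happens on learned semantic features instead of raw substrings, but context is still thrown away.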
The problem also goes in the other direction. That is, platforms rely so much on automation that the human signal gets too faint. For example, humans have a hard time flagging actual hate speech on these platforms as well. Another example: every so often there is a front-page post on HN where a company like Google has automatically shut down service for a customer (a false positive). The customer has a hard time getting through and having the false positive corrected, because their signal can't reach through the layers of automation.
The concept of "so-so automation" [1] seems relevant: innovation that allows a business or organization to eliminate human employees, but doesn't result in overall productivity gains or cost savings for society that could then be redistributed to the laid-off employees.
I think so-so automation often is used places where there's a lot of zero-sum conflict between workers and management, or where the work itself causes a lot of negative human externalities. (This can be a good thing: it's probably okay to settle for a "worse result" from an automated system if it eliminates a lot of physical or psychological harm to people...some content moderation issues probably fall under this case, but not this one.)
It enabled things like Facebook displacing message boards, for the most part. Look at Facebook or Reddit: they are barely able to police obviously noxious behavior in English, and military juntas in Myanmar are able to organize on the platforms.
"Totalitalgorithms" (Totalgorithms?) captures the spirit of these algorithms. They seem like bugs but they're actually undirected, organic features of a total technocratic political system that is rapidly coming to dominate life in our modern societies. The filters will be tuned but not fixed because they aren't broken. They're part of what Tocqueville described as 'soft despotism':
"After having thus successively taken each member of the community in its powerful grasp and fashioned him at will, the supreme power then extends its arm over the whole community. It covers the surface of society with a network of small complicated rules, minute and uniform, through which the most original minds and the most energetic characters cannot penetrate, to rise above the crowd. The will of man is not shattered, but softened, bent, and guided; men are seldom forced by it to act, but they are constantly restrained from acting. Such a power does not destroy, but it prevents existence; it does not tyrannize, but it compresses, enervates, extinguishes, and stupefies a people, till each nation is reduced to nothing better than a flock of timid and industrious animals, of which the government is the shepherd."
> It's becoming an increasingly dystopic force in our lives and will likely get much worse before getting even worse. So it needs a label.
It's not just something that happens to us, it's something we do. We need to do better rather than accepting it as inevitable, and let future generations worry about what the best name for it was.
> Maybe there's a ten syllable German word that expresses it perfectly?
That being said, I propose Urteilsfähigkeitsauslagerungsnekrose: The necrosis that follows the outsourcing of our capability of judgement.
It's easy to blame this on imperfect technology, but I'm not so sure. A couple of months back, when all the tech companies started their holier-than-thou publicity campaigns with token actions, we faced the same issue.
"Blacklist" was banned as term because it was deemed racist. No matter that people understand black and white outside the race issue.
So if "blacklist" is deemed racist by people, not technology (thus removing context from the equation), why is the AI wrong to assume that "attack the black soldier in C4" is hate speech?
Human interactions have a social/cultural context. You can't just recognise $thing, you have to recognise how $thing depends on $context for the correct interpretation.
Current AI either ignores context completely or doesn't parse it correctly.
It's a rediscovery of the ancient "Time flies like an arrow, fruit flies like a banana" problem.
If you're out on a date and you say "Let's go back to mine" it implies one thing. If you say it to some friends after an evening out it usually means something completely different.
Sometimes it means the same thing - but you need to know a lot about the people involved to be able to infer that accurately.
And sometimes humans can't parse these nuances accurately either.
AI-by-grep or stat bucket can't handle them at all, because the inferences are contextual and specific to situations and/or individuals. They can't be extracted from just the words themselves.
Minsky & co researched some of this in the 70s, and eventually it motivated the semantic web people. But it was too hard a problem for the technology of the time. Now it seems somewhat forgotten.
I've been using the phrase "K ohne I" for years now, short for "künstlich ohne Intelligenz" ("artificial, without the intelligence"). We all saw this coming; the topic has been gone over in sci-fi literature. And still, big tech decided it was time to roll it out. "A human wouldn't be perfect either, and we claim this algo is better than the average human" is the last thing you hear before discriminating tech is rolled out. And since politics is in the grip of commerce, regulation will not happen early enough. We are fucked. 2040 will be horrible.
I don't have a word for the phenomenon, but the problem reminds me of a quote by Wilfrid Sellars.
"The aim of philosophy, abstractly formulated, is to understand how things in the broadest possible sense of the term hang together in the broadest possible sense of the term"
Call it philosophy-lacking, or worldview-lacking, but understanding how things "hang together" in broad terms is precisely what our "intelligent" systems cannot do. Agents in the world have an integrated point of view; they are not assemblages of models, and there seems to be very little interest in building anything but the latter.
It's basically perception and reaction without cognition.
In the natural world we would call that instinct. So maybe 'artificial instinct' if we want to keep 'AI' or 'synthetic instinct' because I think it sounds better.
It is basically about using the wrong tool for the job, and continuing to do so even after you (and everyone else) is fully aware of just how wrong you are. Personifying the tool would just distract from the root cause.
"I was using a ballpeen hammer to pound in roofing nails, everything was going great until the hammer's 'synthetic instinct' resulted in a painful blow to my groin and a near fatal three story drop. It'll go better next time - I've painted the ballpeen hammer a different color."
If a human made the same mistake, we would call them incompetent, careless, and negligent.
- incompetent system
- incompetent robot
- incompesys
- incompebot
- inept system
- inept robot
- inepsys
- ineptobot
- inepobot
- bunglebot
- hambot
- sloppybot
- careless system
- careless robot
- carelessys
- carelessbot
- neglisys
- negligent robot
- neglibot
I like 'neglibot'. "YouTube neglibotted my video." "This is my new email address. My old one got neglibotted." "The app works for typing in data, but the camera feature is neglibotty." "They spent a lot of effort and turned their neglibot automod into carefulbot. In another year it may be meticubot."
Industrial revolutions have happened a few times in the past, and each time one occurs, we change our world to adapt to it.
I can't help but wonder what the world would look like if services in the future were provided primarily by AIs. How would we adapt to them? Would we have to invent a "Newspeak" just to make AI understand us better, so we can live an easier life?
"Question: Have you consumed your food today?", "Answer: I have consumed my food today."
Or a more subtle example:
"Hi! Welcome to ___. What do you want to eat today?", "I want to eat item one, fifty-six, eighty-seven and ninety-one", "Do you want to eat item one, fifty-six, eighty-seven and ninety-one?", "Yes", "Please make a seat. The food will come right away!" (all without making any Hmm or Err noise)
> "Hi! Welcome to ___. What do you want to eat today?", "I want to eat item one, fifty-six, eighty-seven and ninety-one", "Do you want to eat item one, fifty-six, eighty-seven and ninety-one?", "Yes", "Please make a seat. The food will come right away!" (all without making any Hmm or Err noise)
this already happens today, with human servers. The menu items are numbered, and you tell them the number instead of the name of the dish.
And then in some places they hand you another number - a pooled session identifier - to take to your table. Then you are expected to respond to events broadcast regarding this number if the food fails to reach the destination automatically.
Blockchain Chicken Farm gives an interesting angle on this. Part of the book describes an AI-controlled pig farm. For the outcome to be good, as many variables as possible have to be removed from the pigs' lives, for example total isolation from the outside world. Otherwise there is too much for the AI to account for, and the training set also has to grow. What does that mean for our lives as AI gets control over more aspects? What variables can be removed?
Yea, I'm Trinidadian, and even though my accent has changed significantly from living in Canada for 7 years, people and especially voice recognition get confused by some of my speech patterns.
An example is that people always hear 50 when I say 30, because of how I pronounce the "y". Anything ending in "th" or "ing" gets misheard a lot by people who don't know me.
A new vernacular to interface with tools that never reveal the actual state of a system under their control, and forbid you to directly influence it. You are granted the privilege to express your limited desires, from which the system will "learn" your preferences.
If you want a vision of the future, imagine a man repeating "Hey Thermostat! Can you cool it down in here a little?" over and over–forever.
In a government system a similar problem is called bureaucracy. It is similar in the sense that the system is very complex, beyond any single person's comprehension; the bureaucratic system is unforgiving in its conclusions; and it is the responsibility of the victim to navigate the same (or a similarly complex) system to attempt a correction.
However, this differs from bureaucracy in the level of automation and statistical inference. A bureaucratic system doesn't do inference (or at least that is not its main function), and the steps in between require human input (albeit from really automated humans most of the time).
I suggest automatacracy which strings together automation and bureaucracy.
How about calling it a "Buttle" after the movie Brazil from 1985 where a certain Mr. "Buttle" gets arrested and killed instead of a Mr. "Tuttle" due to a fly in a teleprinter.
It's funny that you should mention War Games, because the only way to win this battle is not to play at all. Why are we so hell-bent on restricting speech and burning all these engineering hours trying to moderate something that cannot be moderated? Languages -- and people -- are "transformable" enough to avoid triggering "hate speech" (whatever that actually is, and whoever it is that determines it) algorithms. Let people downvote or shut their computer off if they don't like it, and leave it at that. Are we that scared of words or ideas?
I used to live in a communist country (probably the same one that Mr. Radic lives/lived in), where "hate speech" -- which was called anti-state, or anti-establishment speech back then -- was punishable by a middle-of-the-night visit by dark-clad police. Yet, people still found ways to openly criticize the Party, and there were even popular songs that openly defied them, which only demonstrates that not even human pattern recognition is good enough to detect these things, as you state in your first sentence.
It's a waste of time, but more importantly it is detrimental to society.
I am scared of the masses believing nonsense. What fraction of Americans believe the election was stolen? How many people are still going to refuse to be vaccinated? Indeed, bad ideas are scary. I am scared that my countrymen will lead another insurrection or allow themselves to be needlessly vulnerable vectors for a highly transmissible and dangerous virus.
Misinformation spreads like wildfire.
The notion that good speech counters bad speech relies on rational, well informed, critical thinking skills that are lacking in a significant portion of the populace.
It is fundamentally different for a company to exercise its right to free speech in choosing what is said on its platform than for a government to use "free speech" as a fig leaf for suppressing dissent.
Putting up no fight against misinformation and hate speech is not a winning strategy. Society loses much more from nonsense spreading than it does from having to wait 24 hours for high quality chess content to get remoderated.
It’s not just about a game of chess being moderated.
It’s about silencing legitimate speech that you disagree with, whether it’s on a factual level or other motivations.
Perfect example: the claim that the coronavirus was manufactured in a lab rather than originating in the wild. Anyone who suggested it came from a lab was moderated heavily, until a report came out saying that in fact it did.
Was this not damaging to society?
And to your point of hate speech, can you give me a definition of the term? You can’t because there isn’t one that a) covers all scenarios and, more importantly, b) doesn’t cover legitimate speech.
It is up to me to think for myself, not up to someone else to do it for me.
So instead put power in the hands of giant companies because regular people can’t be bothered to not get hot-and-bothered by reading something they don’t like? What makes these companies so morally pure?
The only way this can make sense - especially on HN - is if the people who advocate for company control of regular peoples’ lives work at these companies.
We definitely need a term for this, so that when we are victims of it we can easily raise a flag. I have a few ideas:
- Bot blunder
- Artificial stupidity
- Algofail
- Machine madness
- Neural slip
I think this captures the unnatural, unjust, and just basically wrong quality of using AI to stand in for humans.
"Human-ops" is the justification for removing human pattern recognition: it's framed as better for a small group of employees (e.g., "we can't let our expensive staff view flags of distressing pictures or boring chess videos, so we'll get an algorithm instead"). The tech companies' HR then call this pro mental health, as an additional way to justify the change and any resulting unemployment.
"In modern times, legal or administrative bodies with strict, arbitrary rulings, no "due process" rights to those accused, and secretive proceedings are sometimes called "star chambers" as a metaphor."
Example uses:
"I got starbotted."
"Instagram's automod is a starbot."
"YouTube is too starbotty for your lectures. Better post with your school account."
"We're suing them because their starbot took down our site right after our superbowl ad ran."
"Play Store starbotted release 14 so we cut a dupe release 15. How much will it cost to push the ad campaign back 2 weeks?"
"We use Gmail and Google Docs but not Google Cloud because of the starbots."
"I tried to put Google ads on it, but their starbot rejected the site because it doesn't have enough pages. It's a single-page JavaScript utility." (This is my true story about https://www.cloudping.info )
"Our site gets a lot of traffic but we don't use Google ads because of the starbot risk. Nobody needs that trouble."
"The STAHP Bill (Starbot Tempering by Adding Humans to Process) just passed the Senate! Big Tech is finally getting de-Kafka'ed. About f**ing time."
The suggested “malgorithms” is probably the best noun form for these algorithms themselves.
As for terminology that captures society’s general over reliance on automation and algorithms to handle things that really ought to have direct human intervention, I like Rouvroy’s “algorithmic governmentality”
"AS", or just "artificial stupidity", is something I've heard a couple of times. It's quite mind-boggling if you think about how many people had to engineer tensors and train networks for months, if not years, to create a system capable of such blatant stupidity.
In my opinion this is not just about AI. This is more general. We as humans try to fix social issues with technical measures. All the racists are not going to suddenly become good people if we push them to separate platforms.
Yes but at least you won’t be labeled as conspiracy spreading platforms by the /very important/ media, and your woke employees won’t walk out and disappear.
Oh, and the current administration won't pass legislation restricting your platforms or revoking Section 230.
sorry, are you railing against freedom of association? because it sounds like you are. if you're an insufferable ass (hypothetically speaking) and no one wants to work for you, and so they do not, thats freedom baby.
There's an acronym: OOD - out of distribution - for these situations.
There's no reason YT can't distinguish chess from hate speech if they updated their training set. Maybe they weren't aware of this failure case, or they didn't get around to fixing it, or trying to fix it caused more false negatives. The way they assign cost to a false positive vs. a false negative also plays a role.
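As a toy illustration of the out-of-distribution failure (a purely hypothetical scoring scheme, not YouTube's actual model), imagine a classifier that scores text by the density of "charged" tokens. Chess commentary lights it up precisely because nothing like it was in the training distribution:

```python
# Hypothetical "charged token" vocabulary; real models learn features,
# but the OOD failure mode is the same in spirit.
CHARGED = {"black", "white", "attack", "threat", "capture", "dominate"}

def toxicity_score(text):
    """Fraction of tokens that are 'charged'. A threshold on this score
    stands in for a trained classifier's decision boundary."""
    tokens = text.lower().replace(",", " ").split()
    return sum(t in CHARGED for t in tokens) / max(len(tokens), 1)

chess = "white to attack the black bishop then black can capture white"
print(toxicity_score(chess))  # well above any plausible flagging threshold
```

Until innocuous chess commentary is added to the training set, every decision boundary over features like these will misfire on it.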
"Cheap AI"/"cheap automation", to dispel the notion that throwing data at a neural network is high science or serious engineering. Or, even more directly, reduce it to "fuzzy matching". AI is just a fuzzy pattern database.
The problem of the flagging isn't so much of an issue, if it weren't for the fear that you don't know whether you can get an actual human being at Google to get in touch with you.
> Ooopsie woopsie! Our AI made a fucky wucky and locked your account for no reason lol. We pwomise nothing pls go away and die uwu PS: If you make a new account we gonna shadowban iwt immediately lmao, if you have any compwaints please write a letter and addwess it to the hospital you were born in.
It's not like humans are generally better than this. I mean, look at the GitHub master-branch fiasco. The term had a completely different meaning than the "master" with slavery connotations, yet the outrage was so large that GitHub changed the name. I'd say this is the same behavior as this algorithm: seeing a word, getting "triggered", and marking it as toxic even though it has a completely different meaning.
A receiver operating characteristic curve (ROC curve) [1] describes the tradeoff between the true positive rate (sensitivity) and the false positive rate. No matter how sophisticated we think our classifiers are, the confluence of physics and mathematics will always limit the accuracy of our automated systems. It is just a matter of what kinds of errors we are willing to tolerate.
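A small sketch of how points on a ROC curve arise: sweep a decision threshold over classifier scores and record the (false positive rate, true positive rate) pair at each threshold. The scores and labels below are invented for illustration.

```python
def roc_points(scores, labels, thresholds):
    """For each threshold, flag items with score >= threshold and
    compute (FPR, TPR). Raising the threshold trades false positives
    for misses; no threshold eliminates both kinds of error."""
    pos = sum(labels)
    neg = len(labels) - pos
    points = []
    for t in thresholds:
        preds = [s >= t for s in scores]
        tp = sum(p and y for p, y in zip(preds, labels))
        fp = sum(p and not y for p, y in zip(preds, labels))
        points.append((fp / neg, tp / pos))
    return points

# Toy data: 1 = actually hate speech, 0 = innocent chess commentary.
scores = [0.9, 0.8, 0.7, 0.4, 0.3, 0.1]
labels = [1,   1,   0,   1,   0,   0]
print(roc_points(scores, labels, [0.2, 0.5, 0.85]))
```

At the strict threshold there are no false positives but two of three bad items slip through; at the lax threshold everything bad is caught but innocent content gets flagged. Moderation systems live somewhere on that curve whether their operators admit it or not.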
No. What we need is something that is amazingly simple:
* you, a programmer who coded something that caused company X to lose Y dollars because of a "mistake", have to be on the hook for Y dollars.
* you, a manager who managed a project that caused company X to lose Y dollars because of a "mistake" by your programmer, have to be on the hook for Y dollars.
* you, a product owner who accepted the work of the manager of a project that caused company X to lose Y dollars because of a "mistake" by your programmer, have to be on the hook for Y dollars.
* you, the CEO of the company that had a product owner who accepted the work of the manager of a project that caused company X to lose Y dollars because of a "mistake" by your programmer, have to be on the hook for Y dollars.
Yes, I understand that the cost of a mistake is now higher than the loss suffered by X. That's the incentive to ensure it does not happen: the wives or husbands or partners of the people who would now have to pay are going to ensure they do not take these wacky ideas and implement them. Wacky ideas are abstract, but the loss of nice housing, nice organic food, nice daycare for the kids and a nice scholarship fund is real.
Funny, we hold civil engineers accountable, and there seem to be plenty of capable engineers to do the things that need to be done; we hold bad pilots accountable, and yet the only reason airlines have a pilot shortage is that they refuse to pay more than $20k a year.
Developers need to stop being so allergic to accountability. It's kind of pathetic
I studied mechanical engineering myself, so I hear you.
That being said, software development and civil engineering are very different in terms of risk management. There are strict regulations around what you do, and if you play by the rules, you have minimal to no risk. Even if dozens of people die under a collapsing building, you are not accountable if everything you did was by the book.
Software development, on the other hand, is more like wild west. There are minimal regulations, only best practices. One developer can never know if the end product is free of any errors.
Pilots are a completely different story. The OP was talking about "causing company X to lose Y dollars", not human lives.
What about the customers who bought/consumed that faulty product? They should also pay Y dollars, since it was their decision and responsibility to take the risk of buying the product and not vetting management, developers and the CEO before hand.
Well. The speech police started changing "blacklist" and "whitelist" in programming context, even when those had no racist history; maybe it's time to change it in chess. (After all, white always goes first, that is not very PC.)
Rename "black" and "white" to "second player" and "first player".
> We should recolor the pieces with neutral colors
If only! Clearly we need different chess skin colors so that everyone can identify with their chess pieces. I think five skin tones [0] should cover everyone.
Note: I'm serious and not serious at all here at the same time.
No: the colors don't matter at all. Yes: if someone wants to be offended, they will find a way, even by how they are left out, since no one makes jokes about them.
Exactly this. I am tired of fake official correctness, and of people going back to racist/homophobic talk when they think they are off the record.
I am French, and we have a history (and culture) of parody, and of the freedom both to do it and to be offended by it. This was the case when I was a kid and up until some 15 years ago.
Now criticizing or making fun of some groups is *dangerous*: you cannot publish a picture of the prophet of Islam without literally putting your life in danger. Yes, I know that this is a small minority, yada yada yada, but I have only one head.
At the same time, you can make fun of Catholics. They are not happy about it, but they are also somehow "protected" by the culture and history of France. This is similar for Jews, though they have become more easily offended recently.
The worst is that the people who will officially proclaim how equal everyone is will say in private that someone is a "tapette" (I do not know the English word for it: a derogatory expression for a gay man), or that Muslims are medieval savages, or that Blacks are dumb (we had a few cases on TV where the guest thought they were off the air when they were not).
I do not know what the solution is. It certainly does not help the normal part of the communities above that people are afraid to put them in the same bag as Catholics (in France): they would not be officially immune from criticism or fun, but they would be perceived as "people like us".
That’s not tied to a specific race though? Sometimes being ‘green’ is an advantage cause you are able to ask inexperienced questions and given more slack when in a new job, etc.
I thought of this and realized it'd actually be a fun game. Both players write down their next move and then move simultaneously once both are ready. To avoid race conditions, if a player moves a piece that is to be taken by the opponent's move, then that player has saved that piece with that move. That creates an interesting dynamic where you may not wish to take the most valuable piece if you predict the other player will move out of the way.
I used to play a real-time version of this called Kung-Fu Chess. Each piece takes time to move to its destination, and when it arrives, a timer ticks down before that specific piece can be moved again. Very fun game.
How would two pieces moving to the same square be resolved? Both lost, or ... ? Regardless of the specifics, it would create a very complicated (but potentially interesting) dynamic when moving any valuable piece within a contested region.
You gamble with how many points you have left. Let's say each player starts with 40 points (most games take about 40 moves). You spend anywhere from 0 to however many points you have left to move a piece. If it's early on and there's no chance of conflict, bet 0. If it's an important piece in a crowd, spend more. If there's a conflict, whoever spent more wins that spot.
It might add an element of subterfuge to the game.
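The point-bidding rule could be sketched like this (the function name and the tie-handling are one hypothetical choice, not a settled ruleset):

```python
def resolve_contested_square(bids):
    """Resolve two pieces moving to the same square: the player who
    spent more points wins the square. A tie bounces both pieces back
    to their origin squares (one arbitrary tie rule among many)."""
    (p1, bid1), (p2, bid2) = bids
    if bid1 > bid2:
        return p1
    if bid2 > bid1:
        return p2
    return None  # tie: neither piece lands

print(resolve_contested_square([("white", 5), ("black", 3)]))
```

Because bids are hidden until both moves are revealed, the point budget itself becomes information to bluff with.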
> How would two pieces moving to the same square be resolved?
I'd say that the one that has traveled the longest distance wins, and captures the piece that started closer.
(My implicit mental model is that both pieces "depart" from their home squares at the same time and travel at a constant rate, so the closer piece arrives at an empty square, while the more distant piece arrives second and captures the first-arriving piece. This may also encourage more dynamic play)
My son and I would play chess this way, with my wife actually moving the piece. We use a dice or piece ranking to determine ending turns on the same square depending on the actual game we’re playing.
They are uneven, having the first move is a documented advantage called "initiative". Your first move as black depends on the first move that white made, typically you won't have the same response for 1. d4 and 1. e4!
Anyway, I've been looking forward to this day, when SJWs start trying to change chess because it's "racist" (white has the advantage of the initiative) and "sexist" (it makes you sacrifice the queen to save the king). Looking forward to seeing that level of stupidity being reached!
That's weird; there's a documented advantage to being the second player to move as well, known as the "defender's advantage", since you have more information to make your move. Your information includes both the board state and knowledge of the other player's move, while the first player only has knowledge of the board state. No idea where the idea of an advantage to being first came from.
You didn't capture my king; it now identifies as a queen, so please stop dead-naming my queen. Funnily enough, chess already has a rule that allows pawns to become queens. Touché.
Understandably, it was difficult to adapt to the change: "It is difficult to change your mindset in a chess game with a different start. But if we can change our minds in the game, we can surely help people change their minds in real life."
I'm sure chess board manufacturers would love this. Think of how many new sets they could sell if everybody decided they had to replace their old black and white sets.
Ah, but one could argue that the colours of light and dark wood are closer to the skin colour of humans culturally classified as "white" and "black" than are the actual colours white and black!
"The year was 2081, and everybody was finally equal. They weren’t only equal before God and the law. They were equal every which way. Nobody was smarter than anybody else. Nobody was better looking than anybody else. Nobody was stronger or quicker than anybody else. All this equality was due to the 211th, 212th, and 213th Amendments to the Constitution, and to the unceasing vigilance of agents of the United States Handicapper General."
"In the year 2081, the 211th, 212th, and 213th amendments to the Constitution dictate that all Americans are fully equal and not allowed to be smarter, better-looking, or more physically able than anyone else. The Handicapper General's agents enforce the equality laws, forcing citizens to wear "handicaps": masks for those who are too beautiful, loud radios that disrupt thoughts inside the ears of intelligent people, and heavy weights for the strong or athletic."
> The Community lacks any color, memory, climate, or terrain, all in an effort to preserve structure, order, and a true sense of equality beyond personal individuality.
As far as boards go, most tournaments in the US have long been played on green and white boards. Here are a couple of common boards you'll see for most games in most tournaments [1][2].
In Shogi aka "Japanese chess", while for some reason in English, the players are Black and White, in Japanese, they are 先手 (sente, literally first hand/before hand) and 後手 (gote, literally second hand/after hand). Black is first, BTW.
Except there is actually no color difference in Shogi. Which player a piece belongs to is indicated by its orientation (at least that's my layman understanding). I have no clue why English goes with Black and White.
This is false. At least two very large organizations (Microsoft and the Department of Defense) have decreed that their employees not use "sensitive" terms like "whitelist" and "blacklist".
"Light gray" is easy to misspell as "light gay", e.g. "light gay knight takes dark gay bishop". IMHO, it's better to use "wine" (w) and "blue" (b) as colors.
But the pieces would still be white and black. We just need to recolor them to a color that hasn't been claimed as part of an identity... (looks at rainbow flag) well, shoot...
Infrared-coating player and Ultraviolet-coating player? Chrome player vs. Brass player? No kidding, I like this last idea.
I think you are honest so I'll give you my best explanation:
Many people here, including many black people I think, are utterly fed up with nonsense.
I am someone who badgered my boss to get him to give a job to the foreign cleaning lady who had the necessary degree (he did give her a job). My Indian colleague came to thank me a few years later for helping them settle in.
And I'd probably step in in a fight to save someone.
But this stupid stuff that has been going on for the last few years? I guess it causes even more grief for everyone.
Well, for one, it's not actually defined - and this is deliberate, because if one were to try to define it in any meaningful way, it could immediately be proven incorrect. For the most part, its proponents point to (somewhat arbitrary) statistics, but ignore counter-statistics (like incarceration rates of men vs. women, for example) that undermine their position.
> The popularity of the service makes human moderation impossible, creating a need for inevitably-flawed robots
That's true, but we might be able to improve things with a bit more human moderation.
For instance, Facebook is insanely profitable. They could probably increase their moderation staffing by a pretty decent multiple and still be very profitable.
So the current state of moderation is not strictly a matter of need, it's also a matter of greed in terms of Facebook wanting to automate away jobs they could pay people to do. And given the state of online discourse, it's a decision we're all paying for.
In the past, for a game-modding project, we sort of open-sourced reports. Users (meeting certain criteria) could visit a page, view 5 seconds of video and statistics of an alleged/detected cheater, then select positive/negative/inconclusive. If enough users (IIRC 5) had voted and the majority said positive/negative, then a ban/unban would be issued. Because the reports were random and the usernames were hidden, there was no obvious bias.
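The voting flow described above might look roughly like this sketch (the names `Report` and `cast_vote`, and the strict-majority rule, are my assumptions; the original system's exact thresholds aren't specified beyond the ~5-voter figure):

```python
from collections import Counter
from dataclasses import dataclass, field

VOTES_NEEDED = 5  # the comment recalls roughly 5 reviewers per report

@dataclass
class Report:
    """An anonymized report: reviewers see only a short clip and stats."""
    report_id: int
    votes: list = field(default_factory=list)  # "positive" / "negative" / "inconclusive"

def cast_vote(report: Report, vote: str):
    """Record one reviewer's verdict; return a decision once enough votes are in."""
    assert vote in ("positive", "negative", "inconclusive")
    report.votes.append(vote)
    if len(report.votes) < VOTES_NEEDED:
        return None  # still collecting votes
    tally = Counter(report.votes)
    verdict, count = tally.most_common(1)[0]
    # Act only on a strict majority, not a mere plurality.
    if count > len(report.votes) // 2:
        return {"positive": "ban", "negative": "unban"}.get(verdict, "no_action")
    return "no_action"
```

Hiding usernames and randomizing which reports a reviewer sees (as the comment describes) is what keeps the vote itself unbiased; the tally logic above is the easy part.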
This works for obvious cases like cheating. Kinda like the legal system. This won't work for political speech and we knew this. Most legal systems are set up to avoid having courts judge political speech in most cases.
Even then it will probably only keep working until there's a rift in your community. Say people start arguing over trans rights (weirdly common on Discord), and then users get mass-reported and mass-voted into bans by an activist minority.
Some communities seem to be immune to this. Mostly smaller ones of course. Some even use language that would immediately net you a twitter ban and they are still much friendlier than most popular hashtags.
It was always assumed that unnecessarily vulgar language leads to escalation, and in some cases that is certainly true. But the internet has produced some evidence that this isn't a general rule. On the contrary, ambitions to sanitize speech can make communities extremely toxic.
There was a Star Trek channel on YouTube which got suspended because the host called the fictional race Ferengi “greedy”, which they actually are. It got reinstated after a few days, but it’s getting ridiculous now.
I wonder how The Onion gets away with https://www.youtube.com/watch?v=Q4PC8Luqiws where they suggest visualizing your money related stress as a greedy hook-nosed race of creatures who want to grabble up all your money - and only hire their own kind.
Huh, the Thai ฝรั่ง ("farang", meaning Guava but also used to refer to white people) is also kinda similar. Looks like it has similar etymology and is not just based on the color of the fruit flesh. [0]
It's never been confirmed that the language of chess is the reason the channel was flagged. It's all speculation. A fishing channel being taken offline due to hate speech, for example, is a boring story. The same thing happening to a chess channel is much juicier due to the implication that an AI accidentally flagged the words "black" and "white" as racist. There are a lot of reasons to be outraged by that idea, but it's important to remember that it may not have happened.
And then it blows up here because HN has a lot of people who are disproportionately upset about default branches being called "main" instead of "master".
The slippery slope has already happened; we are already at the point where words that had no bad connotations and didn't offend anyone are being changed. Continuing to change words like that just creates constant frustration and costs billions in lost hours changing and fixing old build systems. Of course people push back against that!
These are problems coming from single entities trying to preemptively predict outrage. There are no grassroots activists trying to change white/black in chess. There were no grassroots activists trying to change master/slave - it came from inside GitHub.
Language doesn't change unless the population/community changes it. I think the bigger story here is the misuse of AI where a human moderator would've made a better decision.
Agadmator, the person who made the video in question, also made a video soon after explaining the situation, and gave some hypotheses on why the video got taken down. In addition to the reason being hate speech, he suggested it may have been because they discussed Covid-19, lockdowns, etc, and YouTube was attempting to stop the spread of misinformation.
I'm actually starting to think we only see these stories about absurd censorship to make the more commonplace and pernicious stuff seem legitimate by comparison. "Oh, hahah, our totally legitimate censorship ML that uses language models to isolate people from each other based on predictions of patterns in their thinking made a funny goof! Gee whiz, you got us that time!"
Anyway, Google will be fine. Lots of tech companies have managed to re-brand after getting on board with idealists, just look at Hollerith.
A friend of mine was banned for sending a chat message that said “this is a Mexican standoff” — the people in the room were both Mexican, if it matters. We were all confused about why he was permanently banned.
Perhaps a bit too close to what might be a real opinion. Doesn't hurt to add a /s in text where you can't use intonation and gestures to hint at sarcasm.
I was trying to mimic an overzealous hall monitor there and chose ignorance for effect. I don’t think it worked, as the rating and the flagging indicate.
I honestly wouldn't be surprised if someone makes the argument that this automated flagging is an indicator that chess's language is inadvertently racially charged. And think about the concept of "white goes first." All it takes is a few viral tweets, and suddenly the game of chess is in the crosshairs.
In my opinion, that might be a sign that the idea of drawing abstract connections between words and concepts, that are several layers of indirection apart, may be going too far.
Yeah using language that upsets people is bad, but if you allow enough layers between words and concepts, _everything_ can be argued to be offensive for one reason or another. Or will be soon once something else becomes a hot button issue.
Shouldn't be. Intent should be the grounds for upset, intent only. Otherwise you get a Euphemism Treadmill, and that's a goddamn fucking waste of time.
This would save so much time wasting. When someone says "Can you push that to the master branch" there is nothing that ties that statement to slavery or racism so there is nothing wrong with it.
The best thing to do is to ignore these people, they will never be happy so there's no point trying to please them.
If you ignore someone who is never happy then in time they may respect you, but if you always comply then they will never respect you and their perceived power over you simply increases, which means they feel empowered to ask for more and more.
> there is nothing that ties that statement to slavery
I would say there is. That's where the language comes from. BUT I do not think that necessarily means that this is a callback or reference to enslaved _people_. Master/slave model accurately describes the model in a way that people can hear the terminology and understand what is happening without knowing details. It doesn't condone/promote slavery, it doesn't have anything to do with enslaving people, and in no way does it even encourage such behavior.
Context matters, a lot. And to be honest, I didn't hear anyone complaining about this (and I live in an extremely liberal place), so it came off (to me) as a grab for social currency (_especially_ since GitHub didn't use the term "slave"; GitHub was using "master" in the sense of "main" or "principal", so I didn't even understand the objection). If you try hard enough you can make anything reference race, but at the end of the day what really matters is context and how people perceive things. If no one (or realistically few people, because there are people looking to make issues) is making these connections AND no one is being harmed, then we shouldn't really be worried about it.
Edit: wanted to make clearer that I'm talking about two different usages of "master". One is from the master/slave model[0] and one is what GitHub uses (the adjective sense of the word, meaning "main" or "principal"). The GitHub version does not reference the former. I know we're talking about contentious subjects here, but I'm trying my best to convey what is in my head. I'm open to new opinions, but bear with me. Language is complicated.
> Master/slave model accurately describes the model in a way that people can hear the terminology and understand what is happening without knowing details.
Except it has absolutely nothing to do with slavery at all. The master branch is akin to the master record, that holds the true and complete copy. A master branch evokes mastery of a subject, like a Masters degree.
Except now, everyone just submits to the idea that the word master only has context in master/slave terminology.
Yeah so there is a master/slave model but I agree that that's not what GitHub was using. I tried to clarify this with my parenthetical statement but apparently did not do so sufficiently. Any suggestions of how to edit?
> A master branch evokes mastery of a subject, like a Masters degree.
I would actually say that a Master's degree is using a different definition (though both are part of the adjective usage). For a master branch (or master document, master copy) I'd say it is the definition meaning "main" or "principal", whereas a Master's degree denotes mastery over something, which is akin to high proficiency.
The master/slave model has nothing to do with git's master, which is "master copy", as in record mastering. What you say is valid for SCSI or the like, though.
Sorry I was trying to convey that. But language is often messy. I completely agree with you. I thought my parenthetical mention of Github clarified this but I guess I wasn't clear. I'm not sure how to edit to resolve. Any suggestions?
That link doesn't say anything contrary to what I said, and indeed includes a link to a twitter thread saying that Bitkeeper may very well be the origin of the 'master' terminology in git.
Ah, but which people? The most reasonable and intelligent people? The traumatized and most sensitive people? To the former I'd say yes; to the latter I'd say go get some therapy and keep your trauma to yourself, it shouldn't drive the larger discussion.
Sure, it's organic, but there's no god-of-the-gaps in there. It is knowable. A language isn't as complex as a human brain. Nor is it living, it can be decomposed at will.
Linguists have been working like gangbusters to iron it all out: Steven Pinker, Noam Chomsky, et al.
It reminds me of all the synonyms and symbols, homophones and homographs Chinese users employ when referring to Mr. Xi on Weibo etc., including the now-censored Winnie the Pooh, to get around the censorship which blocks mentions of Mr. Xi in an unflattering light. It’s ever evolving to keep ahead of the censors.
I've never got why people don't understand this. Judging people by their intent resolves a lot of ambiguity, for both political sides.
If someone gets up in front of a room of 1000 people and says "hey guys!", clearly intending to address the entire audience, they're not being fucking sexist.
And if you're standing there with 5 racists showing a pepe flag or an ok sign or whatever random garbage it is this week, then you're a fucking racist.
There's absolutely no need for nuance or ambiguity in either case.
You can argue the toss all you want about how to determine whether someone is intending to do something (it's not like you can read minds so that can obviously have a ton of complexity to it), but if you're starting from a position that people can unintentionally do something offensive then everything that follows is just pointless bullshit. You've just built a trump card for both sides into the argument so they can just scream past each other like morons without contributing anything meaningful.
Communication is a multiple party activity. It's not just a speaker and a speaker's intent. The recipient and how it's received absolutely matter (and should). I've said plenty of things I didn't intend to be stupid. Still stupid.
Is it ok for the Washington football team to be the Redskins because no current fan or owner intends to be using a racist name?
It's not only the hearer getting upset that matters either. There's room for error and for grace and tact on both sides of a conversation. But it's definitely not just intent only. Humans don't work like that. Hell, even computers don't work like that.
Yeah, this “intent is the only thing that matters” mindset is a naive perspective on communication. People like to act as though there's some liberal bogeyman reaching for social currency by acting “woke”, when what has generally happened is someone was thoughtless/inconsiderate and an offended party spoke up (this whole experience was, of course, quite traumatizing for the thoughtless/inconsiderate person).
Text is text, and you can't encode intent without assuming that the reader has a similar level of internet experience to be able to pull such hidden intent using context clues.
I disagree. Intent matters, a lot, and you're right that it isn't the only thing. Right now I think we've fallen to the other side, where reception is all that matters (in the bias training I receive they specifically mention that it is 100% reception and not intent). I believe the law works on reception because that's easier to quantify. Intent is very tricky: you can do something that most people would consider wrong and just say "well I didn't mean it that way." (The inverse can happen too, but fewer people are likely to start a legal case out of spite compared to people defending themselves. It is tricky.)
I believe that there is a middle ground somewhere. Where that is I'm not sure and I think we need to work together as a society to figure that out. I think somewhere in there there is a "reasonable" set of norms, and we have other laws to suggest that we can use this as a basis. But even this can be tricky as there are many different cultural norms and customs. It isn't even just ethnic customs. In America we have very different regional customs that often butt heads. I think we need to recognize that people are different and operate based on different values and often this is fine.
But I think a big thing we've lost in our current standing is good faith. There are three parts to any form of communication: 1) the idea within one's head that one is trying to convey to the other person; 2) the words, body language, inflection, etc. used to codify this idea (aka encoding); 3) the interpretation of that language to recover the idea (aka decoding). Humans are pretty good encoders and decoders (we wouldn't have made it to where we are if we weren't), but there are limitations. Language is extremely messy, and we often don't think it is because we're so used to it. But you can look at words being used today and you'll often find that people are talking past one another because they are using different definitions of the same word and actively refuse to interpret the other person's intended message (as an example, every internet conversation about capitalism/socialism/communism).
The point of communication is to pass one idea from one head to another head. It requires understanding that there are these three components. If we do not act in good faith then we cannot communicate. With that knowledge it suggests there are two different actions to take if one wants to act in good faith. The communicator should try to encode their thoughts as best as they can, attempting to understand their audience (aka: speak to your audience). BUT we often forget that the listener's job is to decode, to do their best to determine the idea that the communicator is trying to convey (aka: __intent__). In fights we will say "but you said..." even knowing what was intended as a way to win. This is not in good faith but is so prevalent.
When conversations are about mic drops and one upping another person, communication cannot be had.
To your point about intent vs. reception, I think the way the law works is actually more along the lines of "how a reasonable person might receive this", which is perhaps harder to quantify but IMO strikes a good balance. However, I totally agree with your point about how some communication has become more about scoring points than having an empathetic and thoughtful dialogue.
Yelling fire in a movie theater is perfectly legal and protected speech, even if false. It's only illegal if what is said is "likely to incite imminent lawless action," like a riot.
It's funny, this is often presented as a supporting example for limiting citizens' free speech or other rights, typically paired with something along the lines of "no freedom is limitless." Of course many people don't realize this example is a) outdated and currently false, and b) the argument used against citizens speaking out against WWI under the Espionage Act of 1917, which is considered by many to be one of the worst and most oppressive laws to our rights ever written.
You may recognize the names of some of the act's victims:
Among those charged with offenses under the Act are German-American socialist congressman and newspaper editor Victor L. Berger, labor leader and five-time Socialist Party of America candidate, Eugene V. Debs, anarchists Emma Goldman and Alexander Berkman, former Watch Tower Bible & Tract Society president Joseph Franklin Rutherford, communists Julius and Ethel Rosenberg, Pentagon Papers whistleblower Daniel Ellsberg, Cablegate whistleblower Chelsea Manning, WikiLeaks founder Julian Assange, Defense Intelligence Agency employee Henry Kyle Frese, and National Security Agency (NSA) contractor and whistleblower Edward Snowden.
Surely though if you cause panic and people get trampled you would face consequences no? You probably won’t get accused of hate speech but I hope it could go up to manslaughter.
Do you think that the people who panic would bear some responsibility? If someone yelled fire in a movie theater that I was attending, I would at least look around and smell for it before I started flipping out trampling people.
Let's use another hypothetical just for fun. Let's say I called someone a bad name and they punched me, would I bear the legal responsibility for their lack of self control? I would think not. I would think the person who threw the punch would be charged for assault and I would be charged with nothing. If that's the case, how would that be different from the fire example? Someone spoke and someone reacted with no self control. I would think the person who reacted with no self control would bear at least some responsibility.
I'm not sure that has been tested in court yet. If based on precedent, I would assume it would also be legal, but it would take a brave soul with a lot of money and free time to find out for certain.
SCOTUS could always overturn the existing precedent but if we assume they won't then it's legal right up until someone gets injured as a direct result. (Unless it somehow ends up running afoul of our ridiculous obscenity laws? I doubt it but you never know.)
Agreed that anyone who decides to demonstrate the above is definitely going to want plenty of money and free time at their disposal.
I remember many years ago when colour schemes/UI themes were still called "skins", and forum discussions about them often yielded amusing racist-if-taken-out-of-context sentences like "do you like white or black skin" and "I have dark skin, but I prefer the white skin." Not a single person was offended or outraged, everyone saw the racial associations but clearly understood the context and was more amused than anything else.
I'm of mixed opinion whether people were actually more intelligent or level-headed back then, or whether the current "ultra-PC/SJW-ism" trend actually started as a joke that got taken too far and adopted as truth by the gullible.
I have no knowledge of the example situation you provided (I don’t recall any such jokes about software skins), but consider the possibility that in some cases where “back in the day we did it and no one was offended” it was in fact the case that people who were offended weren’t welcome or weren’t able to voice their opinions.
I think they're talking about forums, where most phpbb forums back then offered a theme selector to the user, with 'dark' and 'light' being some common names.
I’m very familiar with skins, Winamp skins being the archetypal example for me. I meant that I don’t remember any such jokes deliberately conflating software skins and human skin tone.
> whether the current "ultra-PC/SJW-ism" trend actually started as a joke that got taken too far and adopted as truth by the gullible
It started with a few German philosophers and social theorists in the Western European Marxist tradition known as the Frankfurt School in the interwar period.
Jokes that evoke racist hatred are not good, though. You don't actually know that NOBODY - literally you said that not a single person - was offended.
Also, so what if someone was offended? Isn't that mostly irrelevant to this debate? The goal isn't to stop people from offending others, the goal of changing our speech is to reduce the unknown harm that words can do re: normalization of hatred of minorities. The 'jokes' you describe aren't funny and do in fact have a potential to cause real harm in the world.
I would posit that unchecked hatred towards minorities online for decades is one of the reasons we are in this 'mess' of language today.
That's an opinion, and it's not a supportable opinion, because we don't really know much about the jokes. The parent's comment wasn't intended to convey the material faithfully -- just a bare description. We don't know the exact wording, the delivery, the timing, or any other context. Maybe they were lame (another opinion), but it's also possible that I (or even you) would have gotten a chuckle out of it.
Regardless, there's no reason to assume that humor of this nature serves to normalize racial hatred. But if you assume the worst of people, you're certainly more likely to get it in return.
> The results were very clear. Subjects that held anti-homosexual views supported significantly higher cuts for the gay and lesbian organization after they were exposed to anti-gay humor, compared to subjects who were not prejudiced against gays and lesbians who were exposed to the same jokes.
So, let me rephrase that
> after hearing jokes featuring homosexuals, the anti-homosexuals (however those were determined and chosen for the study) were anti-homosexual. The not-anti-homosexuals were not anti-homosexual after hearing jokes featuring homosexuals.
How exactly? The anti-homosexual people apparently did not change, while the normal people also did not change. The study thus proved that the presence of the jokes is moot, no?
Ah yes, a progressive claim backed up by psychology papers: a field currently drowning in a reproducibility crisis, and a group who believe that lying and slander are not only okay but should be actively utilised in every goal they pursue.
I'm sorry, are you dismissing all psychology papers?
> A field currently drowning in a reproducibility crisis
My peers have told me that chemistry and biology also suffer from results that are difficult to reproduce, and I've certainly read a number of articles here that decry the lack of reproducibility in computer science too.
> a group who believe that lying and slander is not only okay but should be actively utilised in every goal they pursue.
I'll be honest, I'm not really sure what this is in reference to. If it's in relation to psychology experimental methods, then I believe you're incorrect. Methods that involve actively deceiving subjects would be rejected by ethics boards (at least, it would in the UK). On top of that, there are many papers that do not use observations of human behaviour, and so would not find use in lying to them - for example, many neuropsychology papers discuss the physical makeup of body parts.
Psychology has been around for a long time, and some psychology results have deeply influenced society. Some of these papers cover the placebo effect, and various mental health conditions. If you are dismissing all psychology papers, do you also reject these influential papers?
I'm sorry if this reply is a bit full-on, but dismissing a claim's provided evidence by dismissing an entire academic field seems a bit extreme to me.
> I'm sorry, are you dismissing all psychology papers?
Following the reproducibility crisis, they can't be trusted at face value. When used to promote SJW and progressivist causes they can be almost certainly dismissed.
> I'll be honest, I'm not really sure what this is in reference to
That was in reference to progressivism, hence why I stated that in the comment.
> Ah yes a progressive claim backed up by psychology papers.
I'm no expert, but I thought the black and white thing really originates from night and day. It is easier to see when there is light (often perceived as white) than at night (often perceived as black). We used white and black to convey the color of the sky. A white color reflects light while a black color absorbs it. This is how I've always thought about it. I never associated this with skin color until someone told me, and I still have never internalized it because it just doesn't make sense to me.
I'm open to being wrong, but to me this connection of archetypal meanings and skin color is a stretch. I don't look at a white phone or black phone and think good or bad (in fact I have a black phone and prefer dark colors, while my skin tone is the opposite). It's a connection which would require a lot of fundamental change in language and how we think, because I'm sure I'm not the only one who has codified this representation in my mind. And most of us should understand that archetypes are not how you go about judging the world or people. I don't see a person dressed in red and think "angry" (which would be a different emotion in a different culture), or yellow and think "happy". I just see colors.
Star Wars - “come to the dark side”. Lord of the Rings, Sauron is the “dark lord”. The Dark Ages vs. the Age of Enlightenment. Yin and Yang.
I don’t believe all these authors were racist. “Roughly 40% of Americans claim that they would be afraid to walk within 1 mile of their homes at night… 54% of all participants rated the dark within their top five fears”
Perhaps this would be solved if we used different words to describe skin tone than we did light. If “white skin” was called “wumbo skin” and “black skin” called “mumbo skin” it would be more clear the etymology of which terms were referring to day vs night rather than skin tone.
There are people who get upset about the language used to communicate results of scientific studies proving the efficacy of vaccines against the ongoing pandemic.
Kids get upset at the language “No” even when uttered to tell them that they can’t go out and play with a chainsaw, purely for their own protection.
You cannot determine if language or language usage is bad purely from the response of others even if they get extremely upset about it.
“Political correctness is communist propaganda writ small. In my study of communist societies, I came to the conclusion that the purpose of communist propaganda was not to persuade or convince, not to inform, but to humiliate; and therefore, the less it corresponded to reality the better. When people are forced to remain silent when they are being told the most obvious lies, or even worse when they are forced to repeat the lies themselves, they lose once and for all their sense of probity. To assent to obvious lies is in some small way to become evil oneself. One's standing to resist anything is thus eroded, and even destroyed. A society of emasculated liars is easy to control. I think if you examine political correctness, it has the same effect and is intended to.” ― Theodore Dalrymple
Or cooler heads prevail like at the Académie Française who recognize that sexual genders are completely unrelated to grammatical genders despite what activists try to say.
So we may just get some people who push back and tell people that chess isn’t racist and it’s people who are injecting race where it doesn’t exist (such as here in chess) who are the problem.
Have cooler heads prevailed in this regard? “Progressive” Americans degendering Spanish by referring to Latino people as “Latinx” seems to be going as strong as ever, despite the protests of actual native Spanish speakers. In their haste to appear progressive, people who say “Latinx” are ironically engaging in linguistic colonialism, as it were.
But that's the problem with progressives: they trip over themselves trying to be at the front. And yes, I've asked people of Latin descent whether they use "Latinx" in their speech. They say no, that it's a North American invention, and that in Spanish it's "Latino" for a singular male, "Latinos" for plural males or a mixed group, "Latina" for a singular female, and "Latinas" for an all-female group, but never "Latinx" for any combination of the above.
> Americans degendering Spanish by referring to Latino people as “Latinx”
Depends, do you speak Spanish? If so, there's a governing body - the Real Academia Española (RAE) - and they have referred to the "x" ending as an abomination. It is rejected from the style guide and not acceptable Spanish.
If you want to speak Woke Proto-Spanish, by all means do. Just realize it's not Spanish, and it's spoken by a tiny fraction of a percent of people, generally American woke-sters desperate to cling to Latin or Spanish culture as they realize they are actually American and, as such, not oppressed minorities (the worst of fates!). This is why Oxford recognizes "Latinx" but the RAE does not.
That’s a perfect example of something that literally every single time I’ve seen it mentioned was in the context of people expressing outrage at other people’s activism, and never in the context of an activist actually advocating for it.
I've heard people actually, earnestly, use it. It was high school students, though, so I cut them some slack on the rope of pretentious foolishness. We were all there to some degree when we were teenagers.
I've heard several PhDs use it. They were white English speakers and liberal in their political leanings. It comes across as even more pretentious than high schoolers aping the latest wokeness.
Related: referring to American Indians as 'Native Americans', which is often seen as over-inclusive by American Indians themselves, since it implies you're talking about natives of the entirety of North and South America. While not the worst thing, when you are specifically talking about the native tribes the United States pushed out and forcibly moved to reservations, the term 'Indian' is codified in law[0] and is what the group themselves embraced as their identity so that, as a whole, they could bargain with the United States government for compensation for the tragedies they endured.
The problem seems to stem from 'American' being synonymous with the United States, when in a literal sense it means the entire North and South America continents. People will probably know what you mean with context but it can be confusing, so adding on 'Native American' just requires more explaining whenever you bring it up when not among peers.
This is a good point, but I'd also be interested in seeing the opinion of Americans with heritage from India, since using 'Indian' to refer to Native Americans might inconvenience them.
It's none of my business, but personally I prefer latine[1]. IMO there's no need for white English speakers to tell Spanish speakers how to speak their language. We're all on the journey to a world with more than two genders together. Spanish speakers will figure out their own path to inclusivity.
Isn't this just appending a Latin conjugation? Which in turn would be Westernization? I understood Westernizing people to not be the right thing to do. (Which, to be fair, Spanish does originate in Europe, but Latin American people do not.) I never understood this. If someone has a good explanation I'd love to hear it.
Isn't Latinx supposed to be Latino+Latina? Surely those two words are actually gendered (in the biological-sex way), unlike most words, which are gendered in a purely linguistic way.
"Latinos" is how you gender Latino+Latina in a "purely linguistic way", but some people don't like it, so they made a new word. The masculine word is either gender-neutral or "truly" masculine depending on the context, but the feminine counterpart always refers to girls/women.
> Or cooler heads prevail like at the Académie Française who recognize that sexual genders are completely unrelated to grammatical genders despite what activists try to say.
But that's not really true. I always learned that, for example, ils (grammar-masculine they) should be used when referring to a group of people where any of the people are sexual-gender-masculine, but elles (grammar-feminine they) should be used when referring to a group of sexual-gender-feminine people. Ils and elles have the same rules when referring to a group of inanimate objects depending on the grammar-gender of the objects.
You're both right. In grammatically gendered languages, various situations and contexts are present. Sometimes people get worked up over a non-issue (like the Latinx example another commenter mentioned). Other rules have a more debatable impact, like the famous "in groups, the masculine prevails".
Interestingly, other approaches existed in the past, like the rule of proximity, where the gender of the closest element dictates how the verb and adjectives are written.
Languages are an ever-changing thing. I think it's healthy to propose and discuss grammatical changes if it makes sense, but everyone should be aware of what they are actually talking about.
In Germany we had the same. That didn't stop most newspapers from using some form of weird gendering of the language. I think it will fade out, since people don't use it.
It also underscores why some people think the media is a partisan mess. It is to some degree at least. They even asked people and most didn't like it. Didn't stop them.
> In 2019, Magnus Carlsen and Anish Giri – who as of July were the number 1 and number 10 players in the world, respectively – promoted a #MoveforEquality campaign as a way of acknowledging social inequalities. In their game, black moved first and the line was, “We broke a rule in chess today, to change minds tomorrow.” It was billed as an anti-racist statement, but some took it as a suggestion to change the rules of chess to black having the first move.
I wouldn't be surprised if the Google moderator AI becomes the source of truth on what is offensive. If Google doesn't delete it, then clearly it is OK. If Google does delete it, then it is offensive, regardless of anything else.
Or it will at least become a cheap barometer used by journalists: Materials so offensive that they are automatically rejected by all major social networks.
Others have pointed out it's been done -- so it will continue to be done again and again until something gives. But I'd like to point out at least Go is safe for now, since black goes first! (However, white is used by the stronger player when not doing nigiri or playing a handicap game... And I'm sure some artificial drama could be manufactured based on which color you want to give draws to by giving or taking 0.5 from the perfect komi of 7. There's no safe space.)
Or what actually happened was the radio show asked if white going first was racially based, concluding that it was not. But conservative media spent days getting themselves outraged over it before it even aired.
Yes, not too dissimilar to Github changing the branch "master". Is there a list of things like this that match this pattern that would be easy for people to go after given a few viral tweets? I feel like if there is such a list, it'd be less shocking when the inevitable happens.
White goes first because black was considered a lucky color. So if black went first it would have double advantage, from being first and from being the lucky color.
Chess is not only racist but also sexist. How come the king is the most important piece on the board but the queen is completely expendable?! And, for goodness sake, the game features actual white knights.
The Queen is a significantly better piece. The King needs to be protected and is borderline unusable until the endgame, whereas the Queen is the most powerful piece from start to finish. This is so evident that at higher levels of play, people just resign when they lose their Queen.
If by higher you mean 1000 ELO then sure. In actual tournament play it's very rare for someone to blunder a queen and resign. Queen exchanges happen in most games and queen sacrifices are fairly common. There are no king exchanges or sacrifices.
This reminds me of a story from a previous era of automated content moderation...
When I was a student at the University of Cincinnati, I was a member of a group called LARC which stood for Laboratory for Recreational Computing. The main purpose of LARC was to get the University of Cincinnati to subsidize our yearly trip to DEFCON, but I digress.
The UC mail servers, or at least the ones where the LARC mailing list was hosted, had some kind of stupid search and replace censorship to replace naughty words with cleaner equivalents. The cleaner equivalents were in ALL CAPS of course.
So a few members of LARC were working on a project to build a classical arcade cocktail table game out of Linux and MAME and some other stuff. I don't remember the details. All I remember is that the mail server transformed this into the "MALE GENITALIAtail table".
This became its official name. I think the MALE GENITALIAtail table was eventually installed in the student union.
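The mangling described above is an instance of the classic "Scunthorpe problem": a blind substring substitution that matches inside longer words. A minimal sketch of the failure and the usual fix (the word list and replacement text here are illustrative, not UC's actual filter):

```python
import re

# Illustrative replacement table, mimicking the mail server's behavior.
NAUGHTY = {"cock": "MALE GENITALIA"}

def naive_filter(text: str) -> str:
    # Blind substring replacement: also matches inside longer words,
    # which is exactly how "cocktail" gets mangled.
    for bad, clean in NAUGHTY.items():
        text = text.replace(bad, clean)
    return text

def boundary_filter(text: str) -> str:
    # Replacing only whole words leaves "cocktail" intact.
    for bad, clean in NAUGHTY.items():
        text = re.sub(rf"\b{re.escape(bad)}\b", clean, text, flags=re.IGNORECASE)
    return text

print(naive_filter("cocktail table"))     # -> MALE GENITALIAtail table
print(boundary_filter("cocktail table"))  # -> cocktail table
```

Even the word-boundary version is easily defeated in the other direction (deliberate misspellings, spacing tricks), which is part of why this kind of filtering tends to be an arms race.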
No. Everything must be examined critically through the lenses of social justice. Chess is a Eurocentric and thus colonial game only valued because of its expression of whiteness. Dominance of an opponent of another color reinforces normative racism and the white pieces going first internalizes white supremacy to Black, brown, and indigenous peoples. I won’t even go into the obvious sexism and misogyny of the emotional labor of the womxn piece moving the longest distances in all directions while being valued less than the male piece.
We should expect and demand equity in all our so called “games” so that everyone wins.
Ugh, I'm so screwed up. Your comment would be taken as dead serious in some circles and as parody in others. Based on the HN demographic I'm guessing the latter but I'm not sure.
What's the tell here? I've had people dead serious tell me that the US government is racist for making the lowest value coin brown and putting Lincoln on it.
It's very good. I was looking for a tell, but couldn't find one. They even capitalized "Black" while leaving "brown" lowercase, as is Proper according to progressive style guides.
That part makes the comment even better, IMO. It paints the speaker as someone ignorant enough not only to go after the black/white convention in chess but also as not knowing the origins or details of the issue at hand (which is usually the case when people are virtue signalling).
In order to enforce social justice you want a group of activists to impose restrictions and police other groups of people? How would they convince those people to give up their freedom?
The infection of Youtube with Google's fetish for replacing people with machines may be the worst thing about the entire acquisition. Google's obsession with forever increasing the ratio of users to employees is a curse upon us all.
If I am writing a YouTube comment and care about it, I always recheck whether the comment is still there after a couple of minutes and then a couple of days, because comments are now "disappearing" more and more frequently.
The last time my comment got automatically deleted right away was a couple of weeks ago, for "bottle opening" words (in my language) put together. Replacing a few letters in these words with different same-looking characters helped for some time, but eventually even those got deleted a few days later. I should probably give up using this last Google service I still use.
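The evasion described here, swapping letters for look-alike characters, only works against a filter that compares raw strings; once the filter normalizes homoglyphs, the substituted text is caught again. A minimal sketch, using a tiny illustrative confusables table and a hypothetical blocked phrase:

```python
# Tiny illustrative subset of a confusables table: Cyrillic look-alikes
# mapped to their Latin counterparts.
HOMOGLYPHS = {"а": "a", "е": "e", "о": "o"}

def normalize(text: str) -> str:
    # Map each known look-alike character back to its Latin form.
    return "".join(HOMOGLYPHS.get(ch, ch) for ch in text)

BLOCKED = {"openbottle"}  # hypothetical blocked phrase

def is_blocked(text: str) -> bool:
    return text in BLOCKED

evasion = "оpenbоttle"  # the two 'о's here are Cyrillic, not Latin
print(is_blocked(evasion))             # False: raw comparison is fooled
print(is_blocked(normalize(evasion)))  # True: normalization catches it
```

This matches the commenter's experience that the trick "helped for some time": the filter presumably started normalizing confusables a few days later.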
This is a perfect example of "be careful what you wish for". The Wired position seems to be "just put more resources into policing speech, which is a good and necessary activity". My hunch is that cases like these (false positives, at least, as currently judged by the current authorities) will proliferate just as the criteria for judging what constitutes unacceptable speech do. I would challenge the would-be censors to define specifically, in a way not requiring an additional consultation with them for more infusions of judgement, just what types of utterances that they want to suppress, and why. The closer one gets to this, the more the case for censorship will dissolve. Tl;dr: they are complaining about ambiguity in the implementation of the solution, while having failed to define the problem.
YouTube overall generates tremendous value for people who view videos on it.
There are so many YouTube videos being generated for the amount of money being made that it is not economically feasible to hire humans to review all the videos.
Even if there is a human to review videos that have been flagged, there is a time delay to doing so.
YouTube seems to be erring on the side of flagging false positives at least till there is time for human review.
The technology reviewing videos is immature. It may not be an engineering failing. It may be a problem that requires a scientific breakthrough.
So a valid critique is that there is no effective way to reach a human at Google. Critiquing the technology is pointless.
And that makes it even more important to highlight them. We shouldn't consider censorship a normal everyday event just because some parties do it far too often.
Given there's 500 or so hours of video uploaded per minute (or some other huge amount), I'm not sure we can expect YouTube to moderate each potential violation. Each video constitutes a minuscule amount of revenue.
The only solution (I see) to this is for YouTube to charge for each upload, say $1 a video (there may need to be different prices in different parts of the world); this wouldn't deter the majority of uploaders and would pay for checking hate speech, copyright violations etc.
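For a sense of the scale involved, a back-of-envelope calculation (assuming the 500 hours/minute figure from the parent and real-time human review on 8-hour shifts, both illustrative):

```python
# Rough staffing estimate for fully human, real-time video review.
hours_per_minute = 500                      # assumed upload rate
hours_per_day = hours_per_minute * 60 * 24  # 720,000 hours uploaded per day
reviewer_shift_hours = 8                    # one reviewer watches 8 h/day
reviewers_needed = hours_per_day / reviewer_shift_hours

print(f"{hours_per_day:,} hours/day -> {reviewers_needed:,.0f} full-time reviewers")
# -> 720,000 hours/day -> 90,000 full-time reviewers
```

That is before weekends, holidays, or watching anything more than once, which is why every proposal ends up being either automation or making uploads expensive enough to fund review.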
Playing the devil's advocate a bit: perhaps the computer is entitled to its own opinion? Of course my first thought is to dismiss it as an obvious false positive, but objectively it is a war game, white vs. black. If someone came up with the game today, would that be a dubious choice? Connect Four comes in red vs. black, yellow vs. red, and blue vs. red. Why not the traditional black vs. white?
- The media thinks that people will read an article about "inappropriate" content on YouTube.
- YouTube gets embarrassed and tries to remove inappropriate content.
- The media thinks that people will read an article about "appropriate" content being flagged on YouTube.
I feel like YouTube just made the mistake of jumping onto this never-ending treadmill of trying to please the media.
Of course that isn't the entire problem. They do "need" to remain attractive to advertisers. So demonetization (or at least some "safe for all advertisers" flag) needs to exist so that conservative advertisers keep finding the platform attractive. (However, I feel the response could have been much finer-grained. For example, allow advertisers to opt into different categories of "inappropriate" content; some brands have no problem being next to violent content. This way you would still make less money from an "inappropriate" video, but you would at least take home something.)
The problem is that many advertisers don't want their ads appearing next to "unsavory" content. So if they don't do a "good enough" job flagging inappropriate videos, fewer ads will be bought.
It isn't really a choice that YouTube is making. It is them attempting to remain appealing to advertisers.
This is what GPT-3 (davinci, the 175B parameter model) generates for real:
* Prompt:
">White’s attack on black is brutal. White is stomping all over black’s defenses. The black king is gonna fall..."
Fortunately there's an easy solution:
* GPT-3 Response:
Just don't play chess!
This is the only solution.
The reason for this is that the "chess is bad for you" argument is a logical fallacy called "affirming the consequent". It goes like this:
If A, then B
B
Therefore A
In other words, if chess is bad for you, and you play chess, then you'll be bad for you. The fallacy can be seen by simply replacing "A" and "B" with two different statements, for example:
The sky is blue
Some birds fly
Therefore, some birds are blue.
See the difference?
The argument against chess can be logically rephrased as "Chess is bad for you, and those who play chess will be bad for you. Therefore chess is bad for you."
You are assuming that this "problem" is costing them more than hiring some humans. This article sounds more to me like the media making some sensational story. It seems unlikely that there is significant impact to YouTube from this event.
Google's pursuit of profit is responsible for these and innumerable other injustices. Those who would defend Google often claim that automated moderation is necessary at Google's scale, but does anybody really doubt that Google would still make more than enough money to stay in business if they hired more humans? Automated moderation is not necessary.
Banning a chess video for using "black" and "white" as hate speech is so insanely bad. I would love to see an experiment where the same discussion about the game is held, but replacing "black" and "white" with actually racist terms.
> N-Word Horse to B4
> Superior Race Queen to C5
> N-Word Pawn beats Superior Race Pawn
> ...
And see if the algorithm flags that too. I'd bet good money it does not...
I guess this just shows how backwards and racist Chess terminology is. It may have been fine back in the day, but I want my children to grow up in a world where Blacks and Whites aren't at war. The sides should be switched to much more neutral names, e.g. based on light - lumen/umbral or something based on trees - teak/mahogany. Obviously we should also replace "attack", "check" and "mate" with non-hate-speech - maybe "push", "jeopardy" and just remove "mate", as you can say the same thing by "I believe you have run out of valid moves to make and thus I win".
Not like this is without precedent - I remember back when pieces were said to kill each other and then it was replaced by "capturing". Come to think of it "capturing" as an objective is also rather problematic, maybe we should call it "liberating", since you're really returning these peasants who were forced to serve their king back to their homes.