YouTube Demonetization Screenshot Leaks and Secret YouTube Meeting (twitlonger.com)
377 points by djsumdog on Dec 12, 2017 | 243 comments



I have a feeling that the internet will get outraged about this for one reason or another, but just from looking at this, it actually seems like a sensible policy designed to keep YouTube out of trouble. Nothing extreme or unfair, just not showing ads next to stuff advertisers wouldn't want their ads next to, while also making an effort to avoid offending or censoring youtubers.

Probably won't be flawless, because things rarely are, but seems pretty reasonable.


The real problem with these rules is subjectivity. Something you would consider offensive is probably not offensive from my perspective. In the end it will be moderator bias galore. I can easily see a Christian mod demonetizing atheist videos, democrat <--> republican, prude <--> free, etc etc etc.

I'd like to see a true free speech platform with violence/nudity sectioned off according to the local laws, and then let the advertisers decide where they want to spend the money, instead of treating them as children who don't know better.


That makes sense. The problem is that at the end of the day the decision will have to be made either by algorithms or by people. Algorithms are less nuanced, hard to write, and will often fail; people are expensive and biased.

With the amount of videos youtube has to process, and without an almost human-level AI, I don't think there's a perfect solution.

Algorithms flagging videos for review, and then humans making decisions based on some basic set of principles seems right.

Also, to let advertisers decide where to spend their money there are targeting tools...


The solution is obvious - tag/categorize the videos, according to very strict rules, let the advertisers decide which tags to serve. Allow the publishers to dispute the tags.

For instance, "nudity" can be defined as "visible genitals, female nipples" in the US.

"Controversial" tag should not even exist, it makes no sense, any non-trivial subject is controversial.

And honestly, YouTube can benefit from a more precise tagging/category system, it's currently extremely crude.


Where would this fit in?

https://www.youtube.com/watch?v=9R5w-PIzlUk

It's a clip-art style "coloring video" drawing a baby with a lot of syringes, which then each empty their colored liquid into the baby. Technically, perfectly innocent. In the context of a veritable flood of videos targeted at little kids with an undertone ranging from brain-damaging to abusive, it doesn't seem that innocent to me. The whole isn't more than the sum of its parts, but it is more than each individual piece seen in a vacuum. However, where to draw the line? I would draw the line at "if it targets little kids, demonetize it", not even because of the content of such videos, but simply because I think marketing to little kids is immoral no matter the context. I know that's not going to fly, but there could be a category for it, and then people could check out which advertisers advertise to little kids and let that inform their wallet voting.

For me, YT is just totally ruined anyway. Yeah, I still watch videos on it, 99% of the video material on the web is on YT, but even using an ad-blocker, what used to feel lame now feels positively fucked up, considering how horribly badly YT has handled this both before it blew up and after advertisers started retracting ads. I want to turn my back on it for good, and if we can't find a way to self-host and pay directly for what we watch, I'll find a way to live without video on the web.


Okay, so I've watched quite a few of those particular videos now to make sense of them... And I've come to the conclusion that the reaction to the syringes is a cultural misunderstanding. My hunch is that the purpose of the syringes is to get children comfortable with the vaccination process. There are a number of other videos where the syringes are being demonstrated on animals to make the animals feel better after getting sick. In this case, it seems like US Americans are projecting their particular sensibilities onto videos made by some people in an entirely different part of the world. Yes, they're weird to me too, and definitely not high quality, but I've actually started to appreciate some of those videos as being well-intentioned propaganda, as opposed to some of the other videos that are literally showing children rape scenes and violence. It's important that we don't just lump everything that makes us uncomfortable into the same category.


I found that channel by following chains of featured channels from hardcore ElsaGate channels. You'll also find such coloring videos on channels that have blatantly messed up content, too.

> it seems like US Americans are projecting their particular sensibilities onto videos made by some people in an entirely different part of the world

What part of the world would that be? Just saying "oh, there's probably a culture that has these different sensibilities" is heard a lot around ElsaGate, without ever actually referencing a specific culture. And what is an "entirely different" part of the world? Made of antimatter?

> It's important that we don't just lump everything that makes us uncomfortable into the same category.

None of this makes me feel "uncomfortable" though, and I'm not American either, so don't lump my reaction and your reaction together as "our" reaction. Also, just because the fringe is not the center, doesn't mean they're not connected: don't lump noticing the connection together with lumping things together.

"Disturbing" or "weird" or "uncomfortable" or "nice" or "awesome" and so on are not very descriptive words. A cold breeze might make me feel uncomfortable, so might a video of a puppy getting hurt, but that doesn't describe these things. It's making it about the people calling it out, and I for one am not buying.


Since the number of categories is limited, even if large, there will always be weird videos that don't strictly fit into any category. It's a slippery slope argument; it doesn't mean we shouldn't organize information better.

This particular video can be under

Art > Drawing > Coloring

I don't see anything damaging about it. Yes, it's weird, but so what?



> “We are shocked and appalled to see that our adverts have appeared alongside such exploitative and inappropriate content,” said a Mars spokesperson in a statement. “We have taken the decision to immediately suspend all our online advertising on YouTube and Google globally. Until we have confidence that appropriate safeguards are in place, we will not advertise on YouTube and Google.”

http://www.tubefilter.com/2017/11/24/advertisers-suspend-you...

> I don't see anything damaging about it. Yes, it's weird, but so what?

So you don't, probably not having looked at thousands of video descriptions and thumbnails on hundreds of channels featuring bondage, pregnant children, adults and children impregnated after having been drugged, dolls in bath tubs filled with things, objects and persons under car wheels and feet, drinking urine and eating poop, dominance and submission, binge eating of candy or just objects, objects being removed from a body or lumps removed by getting a syringe, babies faking their death, adults and kids with pacifiers, maggots, people being eaten, limbs being removed, unresolved tension, and dozens of other concepts [you see, that stands for something, there is just no space to expand all placeholders, just like when I said "flood" or "targeted at little kids"] repeated ad nauseam, across live action, claymation, 2D and 3D "art", all underlaid with the same handful of music and audio samples, produced on all continents except Africa maybe. So what?

Like many forms of abuse, e.g. mobbing or sexual harassment, each individual act can be explained away, and people who don't really look into things just see the one-off "weird thing". So what?

Then there's all the stuff that's neither here nor there, like making people jealous by drawing a heart on someone's belly, or a million "finger songs", or "Jony Jony Yes Papa". That's how babies learn colors. It's just copycats that experiment with the medium while not straying from the script that doesn't exist.

And of course, it's just the brains of toddlers, those aren't sponges or anything, and quite obviously, if it's not traumatizing to you, there can be no damage. We know that even "just" a lack of healthy interaction can stunt development, but what are millions of hours of low-effort, "weird" content gonna do?

Anyways, I'm not here to tell you how you live your life, I'm just stating how I live mine. If you're not at work and not easily grossed out, maybe enjoy this 0.01% slice: https://i.imgur.com/MziRRQw.jpg -- but that's still just images, that's without the deceptive music and channel descriptions that advertise themselves as great entertainment for kids. And you can always find something there or in the ElsaGate subreddit that can serve as lightning rod, to say oh, this is overreacting, let's dismiss it all out of hand.


Your entire argument is "won't somebody think of the children".


You do realize that sometimes it's actually acceptable to be concerned for the well-being of children?


Not even once. Not even in the context of something called "Youtube for Kids". Pearl-clutching consideration for others: not okay. Pearl-clutching shock over people having consideration for others: totally on.


Argument for what? For me personally being turned off from YT for good? Heh. If you think thinking of those who can't defend themselves yet is some sort of no-no, then that is your "entire argument". In the end, I don't care what simplification you use as your personal lightning rod, it's your life. If you were a friend, you would have been for the longest time - as it is, no biggie.


Strict rules for tagging would just run into the same problems as strict rules for demonetization. LGBT groups won't be any happier to see their content publicly branded "sexually explicit" than just demonetized.


By strict rules I also mean objective and unbiased.

Penis/vagina -> nudity (even if it's medical or educational).

"Fuck" -> cursing (even if it's about the etymology of the word).

Advocating murder of group X -> ban (even if you're quoting a religious text).

If these rules are strict and clear, we have a chance of creating a legal-like system. Otherwise it's subjective chaos with everyone pissed off.
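A minimal sketch of what such a rule-based layer might look like, purely for illustration - the detector functions here are stand-in stubs keyed on labels (detection itself being the hard part), not anything YouTube actually exposes:

  # Hypothetical sketch of the strict rules above; the detectors are stubs.
  def contains_nudity(labels):      return "nudity" in labels
  def contains_curse_words(labels): return "cursing" in labels
  def advocates_murder(labels):     return "advocates_murder" in labels

  def tag_video(labels):
      if advocates_murder(labels):
          return {"banned"}              # hard rule: ban, even when quoting a text
      tags = set()
      if contains_nudity(labels):        # e.g. visible genitals, female nipples
          tags.add("nudity")             # applied even if medical or educational
      if contains_curse_words(labels):
          tags.add("cursing")            # applied even if about the word's etymology
      return tags

  print(tag_video({"cursing"}))          # -> {'cursing'}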


Unfortunately, it is impossible to do this -- even strict objective and unbiased rules will piss people off.

The classic example I can think of, one that has already generated headlines, concerns nudity policies in social media (Facebook in particular) that have flagged breastfeeding mothers for "nudity" (and thus ticked people off). A "by the letter" policy risks much more of this sort of thing... flagging something as "bad" that prevailing social norms say isn't really a big deal.

One other thing I would worry about with "objective" rules is which culture determines the "watch list". The cultural norms for what is taboo do actually vary to some degree from culture to culture. YouTube is used heavily in so many countries, with diverse cultures. How can a single policy encompass the social norms of countries as diverse as, say, heavy YouTube-using countries such as India, Japan, the UK, Germany, and the US? (And this isn't even accounting for the diversity of culture within these countries.)

As an example, the swastika, which is extremely taboo in Germany and triggers "hate speech" flags in much of the rest of the West, is (usually styled differently but still) a religious symbol in India. You could create a generic flag category for "swastika", but chances are this will eventually flag some, say, Hindu content from India and tick those people off.

I suppose you could structure the rules to account for as many cultural norms as possible, but that can get very complicated fast.


So a lecture quoting from the Old Testament, or an atheist quoting its worst parts, would get banned under your regime? Because, at least for the first three examples, you are explicitly including meta-discussions of the subject.

It's impossible to create an "objective" standard. Because what you want to rate is meaning, and meaning is subjective, based on context and intent.

One of history's most famous photos is of a fully nude girl, maybe 8 or 9 years old. It makes no sense to stick any labels to such an image without considering the context it was created in, and the cultural significance it has.


You didn't read carefully enough. I said "advocating murder". So no, a lecture or a simple quotation wouldn't result in anything.


"Thou shalt not suffer a witch to live."

"And he that blasphemeth the name of the LORD, he shall surely be put to death, and all the congregation shall certainly stone him."

"If a damsel that is a virgin be betrothed unto an husband, and a man find her in the city, and lie with her; Then ye shall bring them both out unto the gate of that city, and ye shall stone them with stones that they die"


No, I read with plenty of care. Which is why I asked you to clarify why meta-discussions are included for the first three examples, but not the last.

I'd also posit that it's impossible to define a clear distinction between a "lecture quoting a text advocating murder" and actually advocating murder.

Just as it is impossible to just say penis->nudity, without losing the vast difference in meaning and effect from a photo of Michelangelo's David to hardcore pornography.

That's because, fundamentally, there just is no such clear delineation. Give me any two photos including a penis, and I'll give you a third that is more offensive than one of them, yet less offensive than the other.

The tech community loves to ignore the long history of these problems, and denigrates everything that can't be expressed as a smart contract as "biased" or "subjective".

But that's ok... In less than 200 years we'll have a great algorithm that will finally render the definitive test of what's porn and what's art: "I'll know it when I see it".


> I'd also posit that it's impossible to define a clear distinction between a "lecture quoting a text advocating murder" and actually advocating murder.

A steel-manning of context should be enough to draw a distinction. Sadly, most contexts today are being redefined by folks who are seeking or making porn for those who need to get off on their own moral outrage.


> Advocating murder of group X -> ban (even if you're quoting a religious text).

An alarming number of social justice advocates have been equating words with violence. Canadian Bill C-16 (now law) is particularly troubling because under the Ontario Human Rights Code it makes debate over gender neutral pronouns a punishable offense. It seems to me that the folks at YouTube are, necessarily, responding with the most "conservative" policies to avoid "controversy" (i.e. boycotts).


C-16 just adds gender identity to the list of identities you're already not permitted to discriminate or advocate violence against (race, color, ethnicity, religion, age, sex, disability). It does not prohibit debate over pronouns. Go read it: http://www.parl.ca/DocumentViewer/en/42-1/bill/C-16/royal-as...


That's true - but the Canadian Dept of Justice indicated (and then removed the link from their website) that they would enforce it in accordance with the Ontario Human Rights Code. Go read it: http://www.ohrc.on.ca/en/code_grounds/gender_identity


Are you under the impression that the legal system is entirely objective?


> Advocating murder of group X -> ban (even if you're quoting a religious text).

I would say that this should not be banned. Let all who profess these beliefs be out there in the open so that they are easier to find.


It's ironic that all your examples are so impractical and yet you confidently used them to illustrate how tagging is so easy.


So just mentioning genocide is advocating?

Could you give a specific example of what you meant?


> The solution is obvious - tag/categorize the videos, according to very strict rules, let the advertisers decide which tags to serve. Allow the publishers to dispute the tags

Tagging would use the same ML model that YouTube is currently running for demonetization, so there will be errors as well.

The problem here lies with the expectations: the system aims for recall, not precision. In other words, false negatives really hit YouTube's image, and have done so twice already this year. People only need to find a handful of problematic videos with misplaced ads to claim YouTube has some major problem, which might not be the case behind the scenes. Yet each time it becomes a major media parade of bashing the company and results in an exodus of advertisers.

If the goal is to eliminate false negatives, which means less tolerance and stronger censorship, it will hurt a lot of innocent youtubers regardless. The crisis, however, lies with YouTube's model: if they cannot do a good job, whether by algorithm or by human, of managing the balance between quality and quantity of videos on their platform while keeping advertisers happy and assured, it is just a matter of time before YouTube degrades into our era's little television.
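To make the recall-versus-precision tension concrete, here is a toy illustration (all names, scores, and thresholds are invented): a lenient flagging threshold leaves innocent creators alone but misses more bad videos, while an aggressive one avoids the scandal at the cost of demonetizing innocent content.

  # Toy illustration of the recall/precision trade-off described above.
  # Each video has an invented "problematic" score; the threshold decides
  # what gets demonetized.
  videos = [
      ("innocent_vlog",   0.10, False),   # (name, score, actually_problematic)
      ("linux_tutorial",  0.35, False),
      ("edgy_comedy",     0.55, False),
      ("borderline_clip", 0.60, True),
      ("clearly_bad",     0.90, True),
  ]

  def evaluate(threshold):
      flagged = [(name, bad) for name, score, bad in videos if score >= threshold]
      false_positives = sum(1 for _, bad in flagged if not bad)                   # innocent creators hurt
      true_positives  = sum(1 for _, bad in flagged if bad)
      false_negatives = sum(1 for _, _, bad in videos if bad) - true_positives    # bad PR for YouTube
      return false_positives, false_negatives

  print(evaluate(0.8))   # lenient flagging: (0, 1) - no creators hurt, one scandal waiting to happen
  print(evaluate(0.5))   # aggressive flagging: (1, 0) - no scandal, one innocent video demonetized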


Afaik they already thought of that solution because YouTube videos support tagging/categories but it's the video uploader who sets those tags.

The problem is that once you introduce such a system, and leave all the control over the tags to the uploader, people start gaming it for better search rankings/more views. So you are back to the same old problem of "who checks if the content actually matches the tags".


I think in bufferoverflow's solution the tags come from algorithms and user reports, not the publisher.


I think a better solution is to have some core principles that Youtube's censorship team abides by. If the video in question doesn't violate all the principles (or X number of them), then it remains up.

Of course that is difficult and actually involves discussions, which Youtube doesn't seem interested in having.


There is zero censorship involved here. This isn't a leak about deciding what gets taken down, it's about what can be monetized. The leak in fact specifically mentions this choice was made to avoid censorship by removing videos.


Aren't algorithms also biased in theory since they are created by people?


The fact this is often glossed over causes many problems in society. https://99percentinvisible.org/episode/the-age-of-the-algori... makes some interesting and good arguments imo.


While algorithms can be biased intentionally or unintentionally, it is not true that ALL algorithms HAVE to be biased because they are created by people.

In machine learning it might be easier to unintentionally create biased algorithms. On the other hand there are domains where I would trust algorithms to be less biased than people.


> Algorithms are less nuanced, hard to write, and will often fail, people are expensive and biased.

Algorithms have a fantastic property of repeatability which means that it is

(a) possible to adjust them over time

(b) possible to identify what exactly happened

(c) blind to the context and inputs outside of the algorithm

That's exactly what is needed.


> (b) possible to identify what exactly happened (c) blind to the context and inputs outside of the algorithm

Neither of these are really true for neural networks: it's hard to pinpoint the exact feature or hidden layer which results in a particular classification. Training also requires datasets which are often proprietary, and stochastic approaches mean you can get different networks from the same dataset.


You may get a different network from a same dataset.

You will get a different result if you let two humans classify the data.


Not "may" but will - it's very easy to render your network essentially unauditable (you don't have thousands of petabytes' worth of storage available? you can't fab your own ML coprocessor? too bad!)

You bring up Trump's Twitter suspension in the other subthread, but in that case the failure was transient, and the person responsible and their reasoning were quickly identified; YouTube has demonetized videos with little to no explanation or recourse.

Uniformity is absolutely a goal where people's livelihoods are affected, but ML algorithms can't guarantee it by themselves: they make it even more important to also maintain accountability and transparency.


> Not "may" but will - it's very easy to render your network essentially unauditable (you don't have thousands of petabytes' worth of storage available? you can't fab your own ML coprocessor? too bad!)

Google can. So that does not apply.

> You bring up Trump's Twitter suspension in the other subthread, but in that case the failure was transient, and the person responsible and their reasoning were quickly identified; YouTube has demonetized videos with little to no explanation or recourse.

There's no recourse or explanation because it is not youtube's business model regardless of what/who makes a decision to demonetize them. Adding arbitrary human behavior to those decisions only makes it worse.


> Google can. So that does not apply.

If nobody outside Google can test the algorithm, by definition it isn't auditable.

> Adding arbitrary human behavior to those decisions only makes it worse.

Neural networks are derived from human behavior - they don't magically divine the spirit of what you want to do, they're an approximation based on training data someone has to put together.


> Algorithms [allow to] identify what exactly happened

"This video was demonetized because at the 1:15 mark, there is a patch of light dirt that our neural network classified as naked skin with a 70% confidence."

Oh, you meant you were going to do image recognition without any sort of machine learning?


Your alternative is the idiot on his last day at Twitter who, in his interview after blocking Trump's account, basically admitted that it was the combination of everything that had happened to him that day, it being his last day, and Trump's account popping up in front of him that caused him to make the arbitrary decision to block it.

Please remove humans from making decisions that need to be uniform. So yes, algorithm it is.


You're implicitly making the assumption that an algorithm is unbiased and less likely to be fallible - and unlikely to be susceptible to bias or manipulation (at least, compared to a human). There is a considerable amount of academic and popular literature showing that these assumptions are false, at least at a technical level.

The problem of vetting YouTube videos isn't necessarily something that can be solved by an algorithm, at the end of the day. Lots of the simple cases, sure - but such an algorithm won't work on even the most moderately challenging content. After all, moderating complex content such as video doesn't just require a well-trained neural network to recognise objects portrayed in images and sounds in audio... deciding whether a video is appropriate or not for monetisation is something that requires an explicit understanding of the social and cultural context which the video is going to be viewed in - which is both highly subjective and difficult to express in an algorithm.

NB:

- Neural networks (and other AI techniques) can be tricked and manipulated (see [0,1]; examples of adversarial images)

- Machine learning algorithms can induce or confirm bias through inadequate training data and poor assumptions. (see [2]; a book about bias in machine learning algorithms)

[0]: https://arxiv.org/abs/1510.05328

[1]: http://www.evolvingai.org/files/DNNsEasilyFooled_cvpr15.pdf

[2]: https://weaponsofmathdestructionbook.com


> You're implicitly making the assumption that an algorithm is unbiased and less likely to be fallible - and unlikely to be susceptible to bias or manipulation (at least, compared to a human). There is a considerable amount of academic and popular literature showing that these assumptions are false, at least at a technical level.

No, I'm arguing that algorithms are transparent, which means that they can be adjusted.

> The problem of vetting YouTube videos isn't necessarily something that can be solved by an algorithm, at the end of the day. Lots of the simple cases, sure - but such an algorithm won't work on even the most moderately challenging content.

of course they would, here:

while (!decision_pass_the_spot_check()) { tweak_algorithm() }

> deciding whether a video is appropriate or not for monetisation is something that requires an explicit understanding of the social and cultural context which the video is going to be viewed in - which is both highly subjective and difficult to express in an algorithm.

Certain cultures think it is acceptable to stone women. In their societal context it is OK. We do not, however, care that it is OK in their societal context. So we make sure that in our algorithm stoning women ==> demonetized. Now we move onto the next problem. Did someone flag a video that we processed that had women stoned that was not demonetized? We open a ticket with the people responsible for that model, with the video that was not demonetized and have them figure out why it did not happen.

That's how you solve a complex problem. You break them into simpler ones and when you see an issue you address it. Claiming that the problem is just too complex for algorithms is avoidance.


Your argument that algorithms are transparent is equally flawed. There is an entire field of research dedicated to introspection of layers in convolutional neural networks! A system that is complex and nuanced enough to be able to deal with a problem such as auto-moderating YouTube videos will be phenomenally difficult to inspect (and nearly impossible to "adjust" in the way that you seem to be suggesting).

To clarify; I am not saying that it will never be possible to solve this sort of problem with an algorithm - just that doing so would require solving the small inconvenience of general artificial intelligence first... :-).

It certainly isn't beyond the realm of possibility in the (very distant) future, but to suggest that it's possible right now (and that it would be better than a human!) is to over-egg the pudding somewhat.


It is not flawed at all. If you cannot provide a validation of your algorithm's decisions when challenged, then your algorithm belongs in a research lab or on your laptop, not in production.


> It is not flawed at all. If you cannot provide a validation of your algorithm's decisions when challenged, then your algorithm belongs in a research lab or on your laptop, not in production.

Alright, then you cannot use an algorithm for image or video recognition. So how do you solve this problem?


Okay, then your argument is valid... in a different layer of reality. Meanwhile, everyone and their dog are rolling out black-box neural net classifiers to production.


And I cannot wait until one of these companies ends up in court trying to explain why its engineers cannot explain how a decision was made.


Great interview article posted yesterday on this exact topic with an actual practitioner:

https://news.ycombinator.com/item?id=15901628


> The real problem with these rules is subjectivity. Something you would consider offensive is probably not from my perspective.

I'd be surprised if YouTube didn't account for this, or in other words, if they indeed made the demonetization decision dependent on a single person.

I'd actually expect them to look for this bias, and filter out the reviewer, for example: reviewer X has consistently flagged videos that a number of other reviewers have cleared, etc.

Edit: perhaps a better example:

For any sample of 100 videos, ensure that 5 videos (5%) are from a pool of videos pre-approved by some committee.

If a reviewer flags more than X (X being 1 to 5, however strict you want to be) of these pre-approved videos, there is a high probability that this reviewer exhibits a strong bias, so you discard that reviewer's selection.
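A rough sketch of that control-video check, purely for illustration: the function names, batch size, and threshold are invented here, not anything YouTube documents.

  import random

  # Sketch of the scheme above: seed each reviewer's batch with pre-approved
  # "control" videos and distrust reviewers who flag too many of them.
  def build_batch(review_queue, approved_pool, batch_size=100, control_share=0.05):
      n_controls = int(batch_size * control_share)                 # 5 out of 100
      controls = random.sample(approved_pool, n_controls)
      regular = random.sample(review_queue, batch_size - n_controls)
      batch = controls + regular
      random.shuffle(batch)                                        # reviewer can't tell which are controls
      return batch, set(controls)

  def reviewer_seems_unbiased(flagged_ids, control_ids, max_control_flags=1):
      # If the reviewer flagged more than X of the pre-approved controls,
      # discard their decisions for this batch as likely biased.
      return len(set(flagged_ids) & control_ids) <= max_control_flags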


What about slight biases, say, being 1.1x more likely to favor whites over blacks? Everyone is biased in many ways, and it adds up.


The selection of test videos (the 5%) could address that -- just pick videos that a fair reviewer would ignore, and an at least slightly biased reviewer would reject.

If a reviewer flags more than X of these videos, one might assume that the reviewer is biased. (X should probably be large enough to account for false positives from fair reviewers).


>I'd like to see a true free speech platform with violence/nudity sectioned off according to the local laws, and then let the advertisers decide where they want to spend the money, instead of treating them as children who don't know better.

A true free speech platform wouldn't have any concern at all for laws regarding content or any specific cultural mores - laws exist to limit the freedoms of the individual for the good of the community, not to define or enforce them, and the web doesn't belong to any culture but its own.

I think you could still have tagging or filtering for such a platform, but it might need to be a service or market of systems in and of itself, either community driven or local to each user, and not something enforced by a central authority.

Of course, that means first treating all content equally - including objectionable and illegal content, because that's the real and for many uncomfortable consequence of free speech.


> In the end it will be a moderator bias galore.

Same problem as with AI bias.

A great video about this problem: The Trouble with Bias - NIPS 2017 Keynote - Kate Crawford - https://www.youtube.com/watch?v=fMym_BKWQzk


But what advertiser wants to spend their time selecting videos to put their ads on? I don't see how your solution would be accepted. And automatic categorization doesn't work - otherwise Google wouldn't be spending money on humans.


Automatic detection of curse words and nudity does work, as far as I know.

The regular categorization should be left to the publishers, but it should be much wider and multilevel, not the current choice of, what, 10 categories?

Yes, other things like hate speech are harder to detect, but, as I said, it's extremely subjective, and should not be dealt with in a general way, but rather broken down into subcategories, like "religion criticism", "radical feminism", "men's rights", etc, whatever ruffles people's feathers these days.


> Automatic detection of curse words and nudity does work, as far as I know.

I don't know about nudity, but I can assure you that automatic detection of curse words doesn't work. Enable automatic subtitles and watch a few videos... do you think that's working?

Also, youtube is not english-only, and automated processing of anything other than english is even worse!


Recognizing a few curse words is a much simpler task than general speech recognition. Nobody says it has to be 100% correct, even humans aren't.


> Yes, other things like hate speech are harder to detect

Exactly.

> but, as I said, it's extremely subjective, and should not be dealt with in a general way, but rather broken down into subcategories, like "religion criticism", "radical feminism", "men's rights", etc, whatever ruffles people's feathers these days.

Unless you have some reason to think that a non-insignificant number of advertisers would be OK with ads on a subset of those categories but not all, that's just pointless busywork.


> that's just pointless busywork.

Not at all - Youtube could use a complex, multi-layered system to deflect blame from themselves.

"Oh, we didn't demonetise your video, we just give advertisers tools to choose what sort of content they want their ads to show alongside. And your video is rated 'gender insensitivity level 1' because you said 'motherboard' instead of 'mainboard'. It's the advertisers' fault you made much less money on this one."

If Youtube make a clear line, they have to take a stance on whether saying 'motherboard' should be allowed or not, and whatever choice they make some people will protest. If instead they have eight categories of offensive content and twenty thresholds in each, they can outsource the decision on whether saying 'motherboard' is allowed to anonymous advertisers' marketing departments - YT can avoid taking a stance at all.
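A tiny sketch of what that outsourcing could look like as data - the category names, levels, and limits below are all made up:

  # Made-up illustration: each video gets per-category "offensiveness" levels,
  # and each advertiser declares the maximum level they will tolerate.
  video_ratings = {"profanity": 1, "violence": 0, "gender_insensitivity": 1}

  advertiser_limits = {
      "family_brand": {"profanity": 0, "violence": 0, "gender_insensitivity": 0},
      "energy_drink": {"profanity": 3, "violence": 2, "gender_insensitivity": 2},
  }

  def eligible_advertisers(ratings, limits):
      # An advertiser is eligible if the video stays at or below their
      # tolerance level in every category.
      return [name for name, caps in limits.items()
              if all(ratings.get(cat, 0) <= cap for cat, cap in caps.items())]

  print(eligible_advertisers(video_ratings, advertiser_limits))
  # -> ['energy_drink']: the video simply earns less, and YouTube never has to
  #    take a public stance on whether "motherboard" is acceptable.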


Without even considering the ethical concerns around directly funding that sort of thing.


These are huge groups. Consider religious criticism - we have Dawkins, Sam Harris, Dillahunty, Thunderf00t, PZ Myers, Aron Ra, Hitchens, etc. All with hundreds of videos. Tens to hundreds of millions of views for each of these names.

If you demonetize them all because religious people consider criticism hate, you are throwing out tons of potential ad revenue.

It's not much more work, just create more granular multilevel categories.


Are all those videos being demonetized now?

My point is that implementing a complex system that forces hard decisions on both reviewers and advertisers would only be worth doing if it would actually change things much. Since we don't have the data to know, any position we take is simply uninformed.


There is no real solution for that. Even the legal system flip-flops periodically. We have a Supreme Court as final arbiter of these judgement calls, but there is no means to completely escape the issue of human subjectivity influencing human judgement calls.


You can't seriously compare the legal system, where you're innocent until proven guilty, with evidence, hearings, lawyers, and multiple levels of appeal, with what YouTube has created: a non-transparent judge, jury, and executioner in one, with extremely biased moderators. Yeah, it's not as bad as Twitter, but it looks like it's getting there.


I worked in insurance. My job was making judgement calls that could stand up in court. I worked for a very large insurance company. I am pretty confident that other large companies, like Google, operate rather similarly with regards to making sure they can defend their decisions in court, if necessary.


Haha no. Google, Fb, Twitter, Youtube - they all do not have the incentive to do so, as there is no way to sue one of them for wrongful account termination.

With my telco, I can sue - and unless I'm more than 3 months behind in payment they legally can't cut me off in Germany. Who says that the social media giants shouldn't be regulated in the same way, given their importance in today's world?


Insurance is very highly regulated. It falls under federal rules for both financial industries and health, so it is subject to both Gramm-Leach-Bliley and HIPAA.

And when the actuaries decide some benefit is costing the company too much, they send the policy to legal, who reinterpret the language, and then the entire claims department gets retrained on how to pay the benefit correctly and informed we have been doing it wrong for the past 20 years.

So, ha ha yourself. They have an in-house legal department to justify whatever the heck they plan to do anyway. They just want all their ducks in a row from the start. The lawyers basically get paid to get the company's story straight ahead of time. It's quite mind-boggling that insurance is even legal as an industry.


What YouTube has created is a direct consequence of the structural incentives of the civil justice system, and it reflects that system's practical outcomes quite well.


Civil justice = mob justice.

Combine that with the fact that the group in charge of moderation is likely to be extremely biased, simply by being Google employees, and this is hardly any justice at all.


How in the world do you know Google employees are biased? Maybe they are not and you are?


> The real problem with these rules is subjectivity.

The real problem with humans is subjectivity.

> I'd like to see a true free speech platform with violence/nudity sectioned off according to the local laws, and then let the advertisers decide where they want to spend the money, instead of treating them as children who don't know better.

Build one. Good luck preventing it from going bankrupt when advertisers decide they won't spend any money on your platform.

It has been tried; this isn't any different from all the "alt-right internet" services popping up. Or 4chan.

https://www.nytimes.com/2017/12/11/technology/alt-right-inte...


They have a pretty clear set of guidelines for what counts as controversial. I think they do a good job of circumscribing the worst of what they've been funding up to this point, without being too overreaching.


I don't use YouTube much, but I know that Joe Rogan's podcasts as well as a bunch of Jiu Jitsu stuff have been demonetized for no real reason. I can't imagine it's going well for other, more dramatic communities if even a combat sport with no striking is seeing a bunch of videos culled in the process.


This is entirely untrue. Just search for "youtube" in HN's search and you'll see articles describing how inconsistent, inaccurate, and overreaching they've been.


[flagged]


We've already asked you twice to stop trolling, so we've banned this account. We're happy to unban accounts if you email us at hn@ycombinator.com and we believe you'll start following the guidelines.

https://news.ycombinator.com/newsguidelines.html


A big problem is that youtubers do not know the reasons for the demonetization, so creators have to guess what the problem is. One example was a Linux video: nothing was wrong with the video, and people were speculating that maybe the problem was the word "bash", but we can't know whether that word is on a blacklist or not.


My wife runs a "mom vlog": mostly recipes, lifestyle things, how-tos, etc. Since late October her content has regularly been demonetized with no indication as to what's wrong.

Her content is literally some of the most desirable, ad-friendly content on YouTube.

No idea why they don't have a whitelist for channels with 10 years of a spotless record.


> No idea why they don't have a whitelist for channels with 10 years of a spotless record.

Give it 30 seconds of thought, and it'll be blindingly obvious.

"Google/Alphabet doesnt have to pay them for the content now."

My question, why do they keep using a platform that's screwing them over?

Yippee: -3. Evidently stating the emperors new clothes are his birthday suit is bad/wrong.


> "Google/Alphabet doesnt have to pay them for the content now."

When a video is demonetized, only a small subset of ads, or no ads at all, can be shown on that video. If no ads are being shown then neither Google nor the creator is making money (and Google is spending money to host and serve the content), so really this is a lose/lose for both of them.


Yes, and demonetization also de-indexes the video from search and hides it unless directly shared.

AvE has been dealing with this very issue: he'd upload a new vid and, within an hour, it would be demonetized. Within a day it would go unindexed. And for those who don't know, the first 48 hours is when most views come in, so no ads = no money. He figured out that if he uploaded the video unindexed, it would get demonetized right away; then he would argue, monetization would be turned back on, and only then would he make it public.

AvE got sick of that, moved to Patreon, and then releases videos telling people to install ad blockers to stop YouTube ads. (Patreon has had its own recent scandal https://blog.patreon.com/updating-patreons-fee-structure/ - I've been seeing quite a few artists and customers move away from that platform. Where will they go? Who knows... I digress.)

But to a larger point, storage space is near-zero cost. Bandwidth for a video that's unindexed is ~0. And if an unindexed stale video happens to go away, aww gee shucks. Google keeps the mindshare as being the only video site in town, and starves any upstarts with their monopoly of video-site/ad/search proficiency.

Tl;Dr: Google is a very bad actor here.


There is nothing in your post that suggests Google is a bad actor. They do not benefit in any way when fine content is demonetized. It actually costs them.


I think the way you responded caused your downvotes, not the point you made. Google does not pay creators just for uploading a video - otherwise you could upload 100 videos a day and get money. The creator gets paid only if ads run on the video, but if the video runs ads then Google also gets paid, so me creating a popular video and running ads on it makes Google money. I suspect Google is afraid of detailing why a video was flagged because bad people could adapt and use equivalent words instead, but this hits good creators hard and frustrates them. Why aren't creators moving to other providers? Because the consumers are on YouTube for now, but I have noticed a few youtubers moving to Twitch completely, so if a few more YouTube alternatives show up, then with the bad karma YouTube is getting now, Google may not be able to reverse the migration to new platforms. Google should admit that their AI is dumb, collaborate with creators in finding a solution, and maybe have a reputation system: if a channel has a good reputation and the AI is confused, then assume the AI is dumb until the creator is proven bad.


> because bad people could adapt and not use those words but other equivalent, but this hits hard on good creators and frustrates them

A bad actor will try to find the limits of your system 10,000 times. A good actor will give up and go elsewhere. When everybody loses, the bad actor gains more, relatively speaking.


Wrong. Google still has the expense of serving the video without the ad revenue. They do not benefit but instead the exact opposite.


Google might still incur a net benefit from keeping viewers on their platform where they'll also watch monetized videos.


That's precisely it.

"Social Media", be it facebook empire, twitter, twitch, google empire, amazon empire...

They want mindshare. They want people to keep using it, millions of people. Because the bigger the number, the more they can use various ways to entrap people into those platforms.

People keep saying that a demonetized video costs them - but the bandwidth and storage of a de-indexed video are minuscule. Yet making people think they're the only game in town is all-encompassing. You deal with them and their games and give them content, or you go away.

And not only that, but when people put up videos, they're also giving Google/Alphabet free machine-vision content, which is actually worth quite a bit because content is hard to just generate.


Perhaps it competes with a favoured provider or someone who has paid for promotion?


The real problem is the outcome. If you demonetize a video by accident, and then "fix" it, you've effectively killed up to 90% of the youtuber's income from that video. Many youtubers get most of their views within the first 48 hours of posting a video.

This impacts people who post a video <1/wk the most, because one demonetized video can mean a huge hit to their monthly income. Now you're driving out higher production values in trade for shorter, more frequent videos.


That is true but it also hurts Google. So Google is not a bad actor. It is unfortunate and hopefully Google can figure out a way to change.


I would not be shocked if Google was not at 100% ad fill, especially considering how much I see their own products being advertised.


> Now you're driving out higher production values in trade for shorter, more frequent videos.

The market does that anyway, though. Ask any YT creator, or just look at the stats for the two approaches in the same niche. Frequency (with a minimum bar of quality) will beat out infrequent but very, very high quality every time. It's better to post 2 minimally edited, acceptable-quality videos every week than one high-production-value video every 2-3 weeks.


Funny, I see plenty of ads next to "offensive" content in other media. Plenty of advertisers sponsor TV news, which is full of the violence specifically forbidden here. South Park gets advertisers. So YouTube's blanket demonetization is overkill. If certain sponsors want out of certain videos, they should be free to opt out. But leave room for those advertisers who aren't so squeamish.


Yea, I don't see how this is different from the public guidelines: https://www.youtube.com/yt/about/policies/#community-guideli...

The problem is the demonetization of the videos that don't break the guidelines. I'm guessing it comes from the poor automated detection and not from the human reviewers.


To me it's absolutely not okay that they have come up with a bunch of new and secret rules all on their own, against their own community, and apply them selectively and retroactively overnight to videos from months or years ago, blaming "the algorithm" for anything that goes wrong. This is also on top of other debacles and constant layout changes that many people hated. If YouTube wants so badly not to be outraged at by the public, then they are free to get their shit together, or to close up if they can't manage. Things like [0] are also not okay to be happening. Ad money shouldn't justify (accidentally) sweeping terrorism under the rug and throwing journalists who risk their lives off the platform.

[0] - https://www.nytimes.com/2017/08/22/world/middleeast/syria-yo...


[flagged]


If you won't stop breaking the guidelines by posting unsubstantive comments we'll ban the account.

https://news.ycombinator.com/newsguidelines.html


I concur: YouTube is filled with trash, at least for the youngest audience (2-5 years).


If you let your 2-5 year olds watch Youtube videos, then you're the problem, not Youtube. I really mean that seriously, this is just irresponsible.


We're watching some nursery rhymes (Farmees) on YT now; it's as good as TV (!). It's low effort, but there are far worse things IMO educationally/recreationally.

Is it all TV you object to for this age range or just when it's served via YouTube?

Mind you, as an aside, we're using FireTV so we won't be allowed by Google to have YouTube from January ...


I agree with you, that is the responsibility of the parents. Anyway, that doesn't change the fact that YouTube has trash content targeted at small kids.


The thing I really, really don't get about modern media companies (e.g. YouTube, Facebook, etc.) and their approach to advertising is their almost universally puritanical policies. I mean I get it to a point, they're trying to mitigate the risk of someone getting offended and suing them or something, but at the same time it's such a lazy, one-size-fits-all approach.

They have all this data about people, yet they're too lazy, or too genuinely puritanical, to actually use that data to show the right ads to the right people. Take for instance the category of drug "paraphernalia," (not to mention actual drugs). Neither Facebook, YouTube or Google will allow advertisers to advertise for these kinds of products, even in markets where they are completely legal. You'd think a more reasonable, and profitable approach, would be to use all that data to only allow advertisers to target these kinds of ads to people of legal age, in markets where these products are legal, but no, no-one can advertise them to anyone, anywhere, ever.

And what about sex toys? Why can't videos of, say, sex toy reviews be age-gated, and then so-called "adult ads", you know, only be shown to the verified adults watching those videos? I'm sure there are lots of sex toy companies who would love to advertise to that audience, and I'm sure someone watching a sex toy review video would much rather see an ad for a sex toy than for another fucking Nissan, but yet again, Google et al. would rather impose their bizarrely puritanical morality on the world than do their jobs and build a system that actually works.


>use that data to show the right ads to the right people

To expand on what you're saying here, YouTube allows offended people to bully them by contacting advertisers and saying things like, "YouTube is putting ads for your company in front of videos about hitler!!" And YouTube just immediately caves to that.

YouTube's response from day one should have been, "no, we don't put ads in front of videos, that's not how this works. This isn't tv where the ad is broadcast whether someone is watching or not. This is a website that plays videos you ask for. We (youtube) have data about you. We (try to) select an ad specifically for you. We did not play that ad for Pepsi because it has anything to do with hitler. We played it because (our data suggested) the ad is relevant to you. And then afterward, we played the Hitler vid because that's what you clicked on."

That seems like such an obvious, slam dunk response to me. I think the reason YouTube didn't push back in that manner is that they welcomed the excuse to start pushing political content they don't like off their platform.


In addition to what you're saying, it feels like YouTube is underestimating their leverage over advertisers. They hold exclusive access rights to either the biggest or second biggest audience for video content in the world. It seems to me that advertisers need YouTube far more than YouTube needs any individual advertiser.

The demonetization policy also seems overly broad. While not wanting to display ads before literal executions makes sense, the blanket bans on profanity or use of brand names are counterproductive. People mostly watch videos from creators they like, or that are similar to videos they've watched before. That means that the content in question isn't objectionable to the viewer themselves, and won't cause them to form a negative association with the advertiser's brand. Advertisers should WANT their content to be next to content that the viewer is positively disposed to, regardless of what that content is.


Except that the Hitler video is now making money from the Pepsi ad. And it's not far-fetched to say that Pepsi is giving money to the creators of that video, and when Pepsi execs find out that that is a thing, they will want to shut it down asap.

PR is not a logical thing because people aren't logical; they are emotional, and in that context your logic doesn't mean anything...


Based on what do you think Google is pushing off political content they do not like? We can see from the screenshots here that there is nothing like that. I would suggest your allegation is untrue and Google is doing nothing like you suggest - quite the opposite.


On one of the images, they advise the moderators to demonetize controversial content, and then make exceptions for certain controversies (which Google does not feel it would be right to suppress). This is obviously bias; it's the definition of bias. It just doesn't feel like bias to us because if we were made autocrats and instructed to rule as undemocratically as possible, we would enforce the same bias.


It's not "modern" companies or tech companies... it's the advertisers that run ads on media networks.

I worked as a copywriter for most of my 20s in Manhattan... have a ton of friends at all levels of the business from ad sales to OOH to Facebook.

My friends in TV ad sales always say the same thing when a company or product comes up at a party... "oh, ____ is actually a client of mine, love them, got shit faced the other week at lunch... but I don't buy their product because they're homophobes."

"What?"

"Oh, they're in my Excel sheet of companies that have requested to not be on Bravo... because of the content. too many gay dudes on shows."

I've had plenty of friends that worked in daytime and network ad sales where they literally would get an email with all the "issues" people might have with an upcoming episode of a popular show, when the issues would appear and what slots they can move the commercials into. They then spend the day calling all their advertisers and media buyers so they can go manage expectations and start moving spots around. People have no idea how much deal making has to happen with Fortune 500 company marketing departments and media agencies every time there is something 30% of this country would find offensive such as men kissing... or... men kissing... or someone saying "fuck".

Now, the people at these companies usually aren't actually homophobes... they usually just don't want to anger the far right and the groups that support their beliefs.

Then you get into the creative. I had a very popular underwear company as a client back in the day... part of every shoot was a reminder that we had to play by their rules. If there's a couple in their underwear... they have to both be wearing wedding rings because otherwise Christian groups will complain. This is common in many ads... look for the rings. We even had a Christian football star become a spokesperson for the company... but part of his contract had it so that he couldn't appear in underwear because of his values and those of his supporters. So we put him in undershirts.

Then you have race and everything else.

But this all comes down to money and those with the money need to start flexing or support other platforms.


> Google et al. would rather impose their bizarrely puritanical morality on the world than do their jobs and build a system that actually works.

I'm having a bit of trouble with the idea that something that is generating around $80 000 000 000 per year revenue and has been growing ~20% per year for the last several years is not something that "actually works".


It's the filtering that isn't working, not the platform


Likely the same reason as for the Apple Store's policies. The market of family- and children-friendly content is huge, so these platforms do not want to have any dark stuff associated with them.

Either way, that also leaves a hole, that is, an opportunity for other services.

The possibility to target ads to the the very specific taste of single individuals is very recent and, I would argue, not fully mastered yet.


I mean sex toys and bongs are one thing but excess profanity? Are you kidding me?


Google is an American company. Their "rules" are extremely embedded in US culture. Unfortunately YouTube is a product that is consumed worldwide. Many of the regions where it has a dominant market share for web VoD do have cultural palates that are very different from the US. I feel Google should embrace its international position and be less biased in its reasoning.


So, what is your suggestion here? Should they run with the most open interpretation (i.e. allow anything that is socially acceptable somewhere on earth) or most strict interpretation (i.e. limit stuff that might be offensive somewhere on earth)?

Both approaches look rather impossible to me: the former would mean allowing stuff that might even be illegal in the U.S., the latter would mean that e.g. all videos showing alcohol would have to disappear.

What is your specific example here?

Also note that this is not about allowing videos onto the platform, but only about next to which videos to show ads. And I would guess that most ad customers would want rather strict rules, so it makes sense for YouTube to listen to their customers here.


First, it is important to note that YouTube has a de facto monopoly on non-film internet VoD, with close to 90% market share. If you include film/TV-style content (basically Netflix), it is still 80% in a huge number of countries. By lumping in everything from the most outrageous, and as you mention even illegal, content with the most benign things like 'swearing', which in many countries people wouldn't even notice, and then presenting the advertiser with an all-or-nothing choice (if at all), where is the choice really? "If you want to advertise on this game-themed channel where the hosts drop the occasional f-bomb, sure, but then you also agree to run ads on the neo-nazi Holocaust-denier channel". Google is the sole broker here, brokering audiences to advertisers in a monopoly setting. How they set up that market has very profound implications. The model that is exposed through the OP is far from 'evidently supported' by universal culture or rights.

You mention only extremist polarized positions, full anarcho-libertarianism or ultra-global-conservatism. There are many more moderate positions in between, or even entirely different solutions, such as opening up the YouTube brokerage market.


This was all figured out at least five to six decades ago, as global media companies emerged.

It's just one more step in the inevitable bifurcation of the Internet down to national barriers / legislation / cultural beliefs. In the not very distant future (within 15-20 years), the commonality laws, things nations have in common with regards to legislating the Internet, will be the minority. Every nation will want and will set its own unique policies (on top of any other agreement lines as eg in the EU).

There's no possible outcome for the Internet other than becoming another well regulated medium, given a few more decades. The laws will pile up, it's what politicians do, and the Internet is the single biggest target.


Well, they already have a system for "this video is not available in your country", driven by the music licensing.


They should probably have different demonetization strategies in different cultures. "This video is demonetized in USA" for example.


Yeah, having country-specific moderation teams with people from those countries would be great. But I won't hold my breath...


It would be great until you reach Saudi Arabia or Iran, or heck, you don't need to go that far since Venezuela is just nearby...


So you are saying it would not be ok for those cultures to impose their morals, but it is ok for the US?


Moral relativism is nonsense, there are objectively better and worse moral systems.

The US and any other country in the world is far from perfect but it doesn’t mean there aren’t better and worse, much much worse moral systems out there today.


What culture-specific reasonings are you referring to, exactly?


The US is unusually sensitive about naked bodies and swearing for instance.

In most countries it's unusual to bleep out swear words on TV, and in some other countries nobody will bat an eye at casual nudity, even in TV shows aimed at children.


Meanwhile excessive violence is permitted on regular TV.


In some countries, exposing children to advertising would be considered inappropriate.

In some places, nudity and sex are considered fine for children.

In some places, LGBT topics, or even showing people who are openly gay discussing something entirely unrelated would be considered unacceptable.

In some places, standards for nudity are very different, for example failure to wear a headscarf would be unacceptable in parts of the world.


For example, the US is extremely sensitive to things like nudity, swearing, identity political issues, and at the same time callously tolerant about violence and commercial interests.


Days of discussion around a staged nipple reveal during the superbowl come to mind. /s


For example, in Germany it is not unusual for public TV channels (e.g. the ARD) to broadcast, at prime time (20:15), a 90-minute movie targeted at teens that contains a minute of nudity before an implied sex act, and it won't lead to complaints (see: the first 5 minutes of "Mein Sohn Helen").

In the US, this would spark an outrage.

Most German public TV channels try to upload all content to YouTube, but sometimes some videos aren't available there due to YouTube's restrictions.


"Any language intended to spread hate." Could that be any more vague? Does encouraging people to hate DRM or hate cancer count under that? Clearly not. So what does 'hate' even have to do with it?

Really it's because "hate speech" in and of itself is a dumb phrase. AFAIK it only refers to an incredibly small subset of hate. It does not refer to hate. And often it refers to mean speech instead of referring to the emotion hate at all, which is even more frustrating.

I'd rather we just throw the phrase into the dumpster where it belongs and come up with something better. Good terminology combats extremism.


"Hate speech" isn't a dumb phrase, it's a newspeak phrase perfectly designed for its purpose - to strike fear in the heart of anyone accused of it, and for it to be spoken with an undeserved sense of self-evidence.

You'll not find anyone outside of the "anyone who disagrees with my political opinions is inherently evil" group using it.


Put more bluntly “hate speech” is a term constructed exclusively to justify censorship against any group the political left disagrees with.

Anyone using it hates actual democracy.


Tangential, but you might like this: https://www.ecosophia.net/hate-new-sex/


Maybe this is real, but it doesn't seem to line up with what I have heard from other Youtubers.

Firstly it is almost certain that a video is initially demonetized by an algorithm. Only during a manual review is it looked at by a human.

The main complaint from Youtubers is that the algorithm is way too strict and inconsistent. But the manual review process is pretty fair and the vast majority of videos get remonetized. The issue here being that by the time a manual review is complete most of the views for a video have already happened.

I could have missed it, but I have seen very few complaints about the manual review process being overly strict or biased.


They are probably using the manual reviewers to train an ML model to look for these things. It's going to take some time, but I think it's possible to get the models to match the manual reviewers.
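
A minimal sketch of what that could look like, assuming the reviewer verdicts end up as labeled text the model can learn from; the example data, labels, and threshold idea below are all invented for illustration, not anything YouTube has described.

    # Hypothetical sketch: learn a demonetization classifier from manual reviewer verdicts.
    # The (text, label) pairs are invented stand-ins; label 1 means "reviewer demonetized it".
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    reviewed = [
        ("family christmas tree decorating timelapse", 0),
        ("welding repair in the field, salty language", 1),
        ("unboxing a new graphics card", 0),
        ("graphic combat footage compilation", 1),
    ]
    texts, labels = zip(*reviewed)

    model = make_pipeline(
        TfidfVectorizer(ngram_range=(1, 2)),
        LogisticRegression(max_iter=1000),
    )
    model.fit(texts, labels)

    # Score a new upload; anything above some threshold would go to human review.
    print(model.predict_proba(["late night gaming stream with swearing"])[0][1])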


I wonder why you're the only skeptic in a forum that's usually skeptical about every post. Yes, you're right, this is not Google's MO. Google tends to look for software-based solutions first and foremost before resorting to humans to solve problems. Recently Google has really been working on their voice transcription software, and I imagine they could easily design a bot that scans subtitles and video frames for offensive content.


In the past I would have agreed with your skepticism, but they just announced a huge human-powered review team:

http://money.cnn.com/2017/12/05/technology/google-youtube-hi...

I don't know based on that article if it's directly related, but they've obviously decided that they need serious human review teams.


Wow, my mind is blown. Makes me now feel like all those claims Google made in the past about how impractical hiring a customer care team would be for them were just pure BS.


Couldn't the algorithm give its judgement before the video goes live?


YouTube's advice to creators is to do exactly this - post the video privately to let the algorithm and potentially the human review go through before making it public.

Sadly, since this process can take up to 48 hours, this doesn't work if your videos have any kind of time sensitivity.

It also doesn't help with videos of streams, which are automatically posted immediately after the stream's conclusion.


It also doesn't work - uploading a video privately passes monetization, uploading the same video publicly gets it demonetized. It's completely arbitrary.


YouTube is trying to roll out something like this, it hasn't taken effect yet though.


> The issue here being that by the time a manual review is complete most of the views for a video have already happened.

Wouldn't the solution, then, be to credit the account for the revenue generated while it was demonetized? It's not like Youtube didn't get revenue from ad impressions while the video was flagged.


Sorry, when they say demonetized, they mean "ad restricted" as in no ads are shown. The issue being advertisers don't want their ads shown against objectionable content.


Google does not benefit from demonetized content; it actually hurts them, since they still have the infrastructure expense.


My take on the demonetization controversy is that it is too coarse: the output is a boolean, advertiser friendly or not, which says nothing about the preferences of individual advertisers.

To me it seems that even assuming the current single set of criteria for advertiser friendliness (with all of the other problems discussed in the other comments regarding subjectivity) that different advertisers might or might not be willing to be exposed to it given different prices. Right now there's no escape hatch, so instead we've seen content chilling effects and a ton of new customers for Patreon, something that is starting to exert some degree of competitive pressure on YouTube.

I'm not familiar with the AdSense interface on YouTube, but I'm guessing that it's some sort of keyword bidding model. If advertisers could bid on keywords with or without the friendliness bit turned on, then market prices could emerge. I can see an incentive on the part of advertisers to be willing to take a bit more "friendliness risk" if they can get access to a valuable demographic with valuable keywords at a better price.
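
Purely as an illustration of how such a market could clear, here is a toy auction where each bid declares whether the advertiser accepts "friendliness risk"; all names, bids, and the flag itself are invented, not anything from AdSense.

    # Toy auction: advertisers bid per keyword and declare whether they accept
    # appearing on videos that failed the advertiser-friendly check.
    # Everything here is invented for illustration.
    from dataclasses import dataclass

    @dataclass
    class Bid:
        advertiser: str
        keyword: str
        cpm: float            # willing to pay per 1,000 impressions
        accepts_risky: bool   # OK with "not advertiser friendly" videos

    def run_auction(bids: list[Bid], keyword: str, video_is_risky: bool) -> Bid | None:
        eligible = [
            b for b in bids
            if b.keyword == keyword and (b.accepts_risky or not video_is_risky)
        ]
        return max(eligible, key=lambda b: b.cpm, default=None)

    bids = [
        Bid("BigBrand", "welding", cpm=8.0, accepts_risky=False),
        Bid("NicheTools", "welding", cpm=5.5, accepts_risky=True),
    ]
    # A flagged welding video still clears the market, just at a lower price.
    print(run_auction(bids, "welding", video_is_risky=True))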

Hardly a perfect solution (the friendliness bit itself remains the real issue). Even better would be a set of several different friendliness scores that advertisers can make decisions about and thus allow prices to emerge.

edit: spelling.


>My take on the demonetization controversy is that it is too coarse: the output is a boolean, is it advertiser friendly or not so, to say nothing of the preferences of individual advertisers.

Different metrics matter more/less to different advertisers/industries/audiences and the more niche it is the less it matches up with the global average that is a "general audience"

This. If there's a video of some guy lying in the mud welding some piece of heavy equipment back together in the middle of nowhere, whether or not he's swearing like a sailor probably doesn't matter at all to Hobart or Lincoln or some other company that might be advertising their line of welding consumables with the video.

Likewise I'm sure some companies that sell products that make us less afraid of unlikely events (fire extinguishers, life insurance, etc) would have little problem with their ads appearing in conjunction with violent content.


I think they could create a reputation score for a channel: each time a demonetization decision is reversed by a human, bump the channel's reputation, and take that into account the next time a video is up for demonetization.
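
A rough sketch of that idea, with the weights and threshold entirely made up: human reversals earn the channel some benefit of the doubt against future algorithmic flags.

    # Hypothetical channel-reputation adjustment; all weights are invented.
    class ChannelReputation:
        def __init__(self) -> None:
            self.reversals = 0  # human overturned an algorithmic demonetization
            self.upheld = 0     # human confirmed it

        def record_review(self, overturned: bool) -> None:
            if overturned:
                self.reversals += 1
            else:
                self.upheld += 1

        def trust_bonus(self) -> float:
            # Reversals buy trust; confirmed violations claw it back.
            return 0.05 * self.reversals - 0.10 * self.upheld

    def should_demonetize(algo_risk: float, rep: ChannelReputation,
                          threshold: float = 0.8) -> bool:
        return (algo_risk - rep.trust_bonus()) >= threshold

    rep = ChannelReputation()
    rep.record_review(overturned=True)
    rep.record_review(overturned=True)
    print(should_demonetize(0.85, rep))  # False: two reversals nudge it under the threshold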


Other comments in this thread indicate that there is a friendliness bit, but most advertisers are risk averse.


To demonetize "sensitive topics (such as abortion, suicide)"

and then there is a mention: "Does not include: Content made by gay ..." or whatever else.

I understand what they want, but these guidelines are somewhat problematic, at least.


Does anyone have any idea about why these rules would be secret?

The only thing I can think of is that YouTube is not confident in their guidelines & categorization, and does not want to defend these choices publicly.


The reason to keep them secret is because they are very clearly not rules. There's nothing too specific about it, and so there's going to be a fair bit of actual judgement calls being made. If they publish the rules, then they become actual rules instead of content moderation guidelines and they lose their freedom to interpret those rules based on context.


> any idea about why these rules would be secret?

Don't work for youtube, but I'm guessing they would be secret for a couple of reasons.

1. If made public, people could post violating content and work around the rules by making sure they won't get flagged, using carefully crafted thumbnails, content, etc. Ex: take a look at the 'Nudity' section. There are examples of 'Partial nudity' in which a guy wearing underwear is deemed inappropriate because it looks 'vulgar'. So one could, in theory, avoid such thumbnails and still post vulgar content in the video and get past their censors.

2. It would also open YouTube to criticism, just like the Facebook breastfeeding content did a while back.


I think you're spot on. I manage an online game, and we don't have rules. We have somewhat vague guidelines, and moderators we trust to enforce them.

Rules are open to arguments, and with dozens of bans/appeals a day, it'd be ridiculous to do that. We tried strict rules, and they are both too restrictive and ineffective.

Humans need to make judgements, and guidelines give you the latitude to do that.


People who skirt those judgements will always exist, but having the 'we are the law' kind of rule sucks for a community.


Would potentially encourage disputes by people on the cusp of the rules. YouTube does have published content policies, obviously.


1. Many very popular YouTube videos (eg brosciencelife) are profane - the Dom Mazetti character is a douchebag - and people come to YouTube to watch them.

2. The guidelines seemingly allow what would be considered hate speech if the topic is women's equality.


They're not a secret, I'm surprised this is so high up on HN: https://www.youtube.com/intl/en-US/yt/about/policies/#commun...

https://support.google.com/youtube/answer/6162278?hl=en

The only thing I found confusing was the unwanted creators for the alleged secret meeting. GradeAUnderA is hilarious[0] and has a few videos that point out the drama in YouTube. Keemstar is toxic. I don't know about the others, and I can name quite a few channels that should be on YouTube's unwanted creators list (if such a thing existed).

[0]: https://www.youtube.com/watch?v=1XyH8MJARcM (NSFW language)


The problem is they don't tell you which "rule" you've broken when they demonetize your video.

My wife literally posted a 3 minute sped up video of us decorating our Christmas tree and it was demonetized without explanation.


These rules seem incredibly strict, in a way that could seriously limit original content.

If I was to make a show similar to Trailer Park Boys, about a bunch of comedic stoners bumbling around and swearing a bunch, it would be demonetized.

Trailer Park Boys is a good example, since it's a show that you could produce on a minimal budget and try to launch on Youtube.


I wonder if they could get outside sponsorship. Many creators show ads (sorry, "sponsorships") in their own videos, outside of YouTube's ad system. Apparently YouTube has been saying they want the advertiser to pay for YT ads on the channel, but if the videos are demonetized, I wonder if that still holds?


Outside sponsorship is explicitly prohibited by section 4 of YouTube's TOS.

"YouTube creators can not include promotions, sponsorships or other advertisements for third party sponsors or advertisers in their videos where YouTube offers a comparable ad format" https://support.google.com/youtube/answer/3364658?hl=en

(Patreon, as a way to make money as a YouTube creator occupies a gray area but this restriction hampers their potential for growth since YouTube could just shut them down.)


It's not that simple: https://support.google.com/youtube/answer/154235

You can't splice an actual ad into the video (although channels do, and they haven't been banned), but you can have outside sponsorship. I believe stuff like vloggers cooking using a meal kit sponsored by the company is OK.


A lot of videos I watch seem to be sponsored videos (lately, there have been lots of 23andMe, Squarespace, Skillshare and a few others).

How come these videos are not violating youtube's policies?


I think most sponsors probably have something in the small print about the appearance being monetized because demonetized videos usually don't fare very well, popularity-wise, on YouTube.


I believe advertisers can still choose to advertise on specific demonetized videos or channels, so if this is your content, you could still approach advertisers directly.


I don’t understand what the term “demonetization” means in this context.

When a video is flagged for “demonetization” what are the immediate consequences? I’ve never actually met anyone who got paid for something that happened on YouTube, so I just don’t understand how the business side of YouTube even works.

For the life of me, all I've ever directly witnessed is the occasional friend who received a DMCA notice, because they uploaded an audio rip of an album they wanted to listen to from a work computer, as a public video.


It means the video won't have ads and thus the creator won't earn any money from ads. It's very difficult to make a lot of money from YouTube ads, but some people are doing it and getting "demonetized" really hurts them.


> It's very difficult to make a lot of money from YouTube ads

Like anything else, I think you have to find your audience. A 6 year old apparently made $11 million this year off his YouTube channel:

https://www.washingtonpost.com/news/morning-mix/wp/2017/12/1...


That's news precisely because it's rare.


Not rare, just extreme.

If you see a YouTube channel with 250,000 subscribers or more, they are probably taking in around $10,000 a month from YouTube (assuming they post regularly.)
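
For what it's worth, a back-of-the-envelope version of that estimate; every number below is an assumption picked for illustration, not YouTube data.

    # Napkin check of the "250k subscribers ~ $10k/month" figure above.
    subscribers = 250_000
    views_per_sub_per_month = 10   # assumes frequent uploads plus recommendation traffic
    rpm_usd = 4.0                  # assumed creator revenue per 1,000 monetized views

    monthly_views = subscribers * views_per_sub_per_month
    monthly_revenue = monthly_views / 1000 * rpm_usd
    print(f"~${monthly_revenue:,.0f} per month")  # $10,000 with these assumptions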


Some others have speculated that demonetized videos tend not to show up in the recommended videos, which leads to fewer views.


Once your channel has a certain number of subscribers (1000, maybe? it's a fairly low number), YouTube starts paying you per view. Demonetized videos don't count towards your view count, so you don't get paid for them. This makes YouTubers very angry.


Ads taken off, I think.


The rules for flagging videos are here: https://imgur.com/a/uTLTS

and they seem quite vague. Stuff like "Regular TV" is not something many people can understand.

That said, is there a way to find the demonetized videos?


Creators can find the monetization status of their videos in their dashboard. They get notified whenever any of their videos is demonetized.


But it doesn't tell them why it was demonetized, which is the crux of the matter. Very Brazil.


Thanks but is there a way for non-creators to find it?


Maybe probabilistically: load it in an incognito window (or use a VPN) and see if ads show up (with adblock disabled, of course).


I don't really understand why advertisers would have a problem with "objectionable content" (obviously excluding fraudulent websites that might be faking ad views).

If someone is viewing the content, they presumably like it, so there doesn't seem to be any difference compared to people viewing advertisement on "non-objectionable" content.

If someone else sees the content along with the advertisement, they can just truthfully say that the advertisements are placed by an automatic algorithm and that they don't endorse the content.

What's the problem?


Partially, I think "they presumably like it" is an invalid assumption, as evidenced by the many youtube videos with more downvotes than upvotes.

More importantly, the argument made by campaigners looking to stop ads on objectionable content is that placing ads in association with objectionable content constitutes funding that content; whether you do that via an algorithm or manually makes no difference, the point is that the advertisers choice of how to place ads is paying out money to producers of objectionable content.

That, in turn, makes those advertisers liable to explain to their regular customers, who are now vicariously funding objectionable content by paying money to the advertiser, why they are placing ads as they are.

All in all, it'd be much easier for a generic advertiser to just avoid objectionable content; but that isn't properly possible on YouTube, which I suppose is what this whole scheme is trying to change.


If you advertise on a video, you are giving the creator of the video money.

There is no stronger endorsement.


I find it very strange that one of the criteria is whether you would want to watch it in public. There are many things on regular TV I'd never watch in public.

I'm also baffled by the level of fear naked skin seems to elicit in the US; how common is this? Often you can have extremely violent games/movies with a teen rating, but the moment you show a tit it's instantly adults only.


I don't see anything wrong with that. Google did this before to fight search engine spam: top positions for money keywords in search results were manually reviewed by humans. Now advertisers do not want their ads shown on specific videos, to avoid the risk of brand damage.


This isn't true.

Top positions were reviewed to determine the best parameters.

A review didn't affect that one individual site's ranking, but all sites as a whole.


check this out https://searchengineland.com/library/google/google-search-qu...

edit: I am pretty sure the websites that were marked as low quality lost their ranking on the next Google search update.


What I don't understand is why they don't allow advertisers to accept showing their ads on these more explicit videos. They were OK with it for a decade but all of a sudden they're not? I'm sure most non-American brands would be OK with it, and even many American brands would be. I'm sure Google has the ability to add such a simple feature to their video ad platform (it's basically a checkbox with a boolean that would say "restrict to family friendly content" or something). Am I missing something?


The advertisers themselves are saying they don't want their ads shown on this content.


All millions of them? I know Amazon still advertises on Breitbart, and they're one of the biggest advertisers in the world.


There is a compromise that would work for YouTube and its "less desirable" creators.

YouTube can stop monetising those channels with advertising it has sold. That keeps YouTube's normal advertisers happy.

The owners of those channels could instead source their own advertisers and use the YouTube ad management systems to inject the adverts into their videos - with YouTube taking a cut of the revenue.

This way everyone is happy, the channel owners get revenue, the advertisers know exactly where their adverts are appearing and YouTube gets a cut too.


I know some people are happy about this happening because currently the demonetisation matches their own beliefs/politics/dislikes.

What happens, though, if this moderation is outsourced from Americans to cheap labour in other nations that don't share your beliefs or politics?

If one thing is clear it's that we need competition in this space.


Any other provider would have the same issues with advertisers.


Who says the other provider needs to use advertising?


So there isn't too much juicy stuff here. They decided to manage out some unwanted channels, and the list seems to be unrelated to the channels' political affiliation, so I didn't really see a problem with that.


On a different YouTube note: if they want to continue being a powerhouse, they also need to get rid of all those robot-voiced, bot-created videos, and those clickbait videos that only play an image of a website to go to.

For me, YouTube is being plagued by the above videos.


Fantastic news. Though a conspicuous absence is the lack of any classification that deranks videos in the search results or recommendations.

Even if it's largely financially motivated, it's still nice to see some kind of assessment of what kind of content they are willing to enable.


I wish Youtube had a real competitor to stop them from doing things like this.


How would it be different? Google is subject to what is comfortable for the advertisers and putting these rules in place which seem fair.


A competitor doesn't have to use advertisers


I agree with almost all points, but some are excessively restrictive in a blind way. There should be a distinction between uploading violence because there are idiots around the world wanking when they see someone killing someone else, and reporting excessive and wrongful military force or everyday domestic police abuse. By their rules one could not expose the latter either, which to me is disgusting.


If I was a company buying lots of ads on YouTube, I definitely would not want them next to any channel like leafy or any other troll spam channel.


Could this be a way to train a demonetization AI model?

Given it's Google behind it, I don't think they are expecting to keep a workforce flagging videos for the foreseeable future.

I think they are probably doing this in parallel with AI and most of the false positives are on that end, which means they hire more people to make a bigger training data set, rinse and repeat.
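
If that is the plan, the loop might look roughly like the sketch below; the review function, toy "model", and batch sizes are all stand-ins, since nothing about Google's actual pipeline is public.

    # Speculative "rinse and repeat" loop: the model flags videos, humans review the
    # least confident ones, their labels grow the training set, and the model retrains.
    import random

    def human_review(video: str) -> bool:
        """Stand-in for a manual reviewer's verdict (random here)."""
        return random.random() < 0.3

    def train(labeled: dict[str, bool]):
        """Stand-in for retraining; returns a toy 'model' giving a constant confidence."""
        base_rate = sum(labeled.values()) / max(len(labeled), 1)
        return lambda video: base_rate

    labeled: dict[str, bool] = {}
    unlabeled = [f"video_{i}" for i in range(1000)]
    model = train(labeled)

    for round_number in range(5):
        # Send the most uncertain videos (confidence closest to 0.5) to human review.
        batch = sorted(unlabeled, key=lambda v: abs(model(v) - 0.5))[:50]
        for video in batch:
            labeled[video] = human_review(video)
            unlabeled.remove(video)
        model = train(labeled)  # retrain on the bigger data set
        print(f"round {round_number}: {len(labeled)} human-labeled videos")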


Yeah, considering Google has been cutting search raters' pay[1], and they tested the waters with automated moderation tools[2] earlier this year, I think it's pretty safe to say their goal is to cut out people from moderation entirely.

[1]: https://arstechnica.com/features/2017/04/the-secret-lives-of...

[2]: https://www.perspectiveapi.com/


So this "secret" meeting was just a meeting that YouTube didn't write a press release about, right? By that logic, I've endured hundreds of "secret" meetings.

If only I had known! I would have suffered much less boredom knowing I was part of yet another vast conspiracy.


I would like to start an experiment where I create my own YouTube/Vimeo-like website and run it as a bootstrapped business. For me, the hardest challenge would be getting viewers to watch the content. The technical side would be trivial.


Seems like a reasonable solution, and like YouTube made an effort to think this through. There's no ideal solution for this problem, though. This sort of thing may make you cringe, but I think the alternatives are much worse.


I don't see the issue. Google is curating their platform and trying to remove as much unoriginal and reasonably offensive content as possible. The doc also didn't seem to contain any political leanings or cultural shifters. I believe much of this is part of the service agreement on YouTube.

Only issue I see here is the manual review process and subjectivity baked into that.


1) they don't tell you what rule you're breaking.

2) transparency is zilch.

3) rule changes came overnight.

4) no system to "pre-approve" videos.

5) it's affecting tons of completely innocuous videos. My wife runs a family-friendly vlog; prior to October she'd never had a problem with any of her thousands of videos, and now every other one is demonetized without explanation.

From a creator experience it's just confirmation the platform is totally dead and beyond saving. The algos have won.


Completely disagree about there not being any political leanings.

See the Controversial section here: https://i.imgur.com/vkBrkns.jpg

e.g. religious videos of any sort are automatically deemed controversial (way more inclined to be right leaning), but videos about racial justice (left) are exempt (unless, of course, they are anti-racial justice videos, in which case they must be removed.)


>e.g. religious videos of any sort are automatically deemed controversial

That's not what it says. It reads "Content promoted by extreme religious groups" - certainly there is subjectivity here and Google should provide clearer guidelines on what is deemed "extreme" (clearly violent jihad and ISIS recruiting would be considered extreme, and general Islam/Muslim content should be considered okay).

But it clearly does not say "religious videos of any sort".


You realize this is about ads, not about whether the content can be on YT. I do not see anything that could be considered leaning right or left.

You seem to come to this with a bias that Google is left-leaning, which is why you see a political leaning here. I am a centrist and do not.


Yes I am aware what demonetization means. It's a very effective way to get people to stop producing content that does not align with Youtube/Alphabet/their advertiser's values.

The problem here is the looseness of the guidelines and the complexity of navigating deep issues with nuance.

Entertain me here for a second... Let's talk about a few hypothetical videos:

"The positive contributions of undocumented migrants in the US" - positive content about immigrants. Exempt from demonetization per the guidelines.

"The problem with illegal immigration in the US" - antagonism of a specific group of people/xenophobia per the guidelines. Should be demonetized.

Now, do you think it's an unreasonable position to say that undocumented immigration has both pros and cons? Of course not, everything does.

Youtube's distillation of rules that only allow for discussion of one side of these issues with zero nuance is the problem. Coincidentally or not, the allowed positions for discourse align with the brand's corporate values.


It’s not even about removing the content - but just not showing ads on it.


Not showing ads has the side effect of burying the video when it comes to discoverability as well.


What does "demonetization" mean anyway? It's the second time I read about "demonetization" and I still don't get it what does this mean at all.


A video is demonetized when YouTube refuses to run ads on the video.


I don't see any problems with these rules in particular (I would prefer something less strict, but we're talking America here); my problem is with all the seemingly innocent videos by people like EEVBlog, purely noncontroversial tech stuff, that get insta-flagged for no apparent reason.

The manual review isn't the problem; it's the initial algorithm that robs creators of all the views in the initial week(s) before the review gets carried out (i.e. 90% of the revenue).


Except EEVBlog videos aren't completely innocent by these rules. I suspect a good number of Dave's past videos that have been flagged are because of cursing which he avoids today but definitely used to do occasionally.

He can be culturally insensitive at times. I don't know if it's to the point that I would flag him as violating the rules, but then again these people were instructed that if in doubt, flag. As the father of a daughter I am very much aware that he can be sexist at times. In fact this is why, although I will occasionally watch one of his videos, I am no longer subscribed to the channel.


These regulations talk about what's common on TV; I can't recall him cursing worse than that. Has he dropped an F-bomb?

Anyway that was just one example. Another who's gotten demonetized is LazyGameReviews, and he's a total kitten. Ashens is also clean and has gotten demonetized. There are countless examples.


Feel like I'm spamming my wife's channel but hers is Kyleandcourt. 100% family friendly, thousands of videos posted over the years with no flagging, then in October every other video is demonetized without explanation.

So nope, their algo/system is jacked up and they won't be transparent about anything.


[flagged]


Your point is appreciated, but keep in mind that these are people's livelihoods. They're in a vulnerable position, and change is a big deal for them. Fairly small changes to the YouTube landscape could mean major changes to their income and business viability. Change a few bits of a suggestion algorithm and the kid making $11 million per year playing with toys could be out of business overnight.

It's the same reason people freak out when Apple does anything to the app store. It's life or death for a bunch of small companies and they have no power over the situation.


The fact that those people chose or happened to make a living off someone else's website doesn't give them employment rights.

Unfortunately the platform is property of YouTube and while I think it would be nice for them to explain their rules when they invite the public to engage and participate, they are allowed to hold these as trade secrets :(


That's exactly it: YouTube is 'sharing economy' - privatise the profits, socialise the expenses, in this case the costs of videos which are controversial.


You misunderstand. They all know there's no right to use YouTube. It's just that if they can't make money, people will stop making good shit and it will hamper or kill the whole platform.


You mean that if there is no direct economic incentive people will stop contributing to a generally available pool of content/IP? That does explain why open source software died off when no one could figure out how to make money from it...


Open Source and Free Software paid a lot of attention to the foundation on which their software was being built.

You can think whatever you want about RMS's rants about Java, but he and the FSF sure did pay attention to the fact that they must control everything, turtles all the way down, as much as possible. Otherwise, with just one proprietary, non-replaceable layer in the stack, the layer's owner can control you.


As a YouTube partner, I completely agree with this assessment. YouTube doesn't give a damn about its creators. Caveat emptor.


> They're in a vulnerable position,

Making shitty videos for pre-teens and raking in millions per year.

Yeah, please tell me more how vulnerable they are.


There are lots of YouTubers who make "just" thousands a year making helpful videos like recipes, tech how tos, workout videos, etc. not just shitty videos for preteens.

People who've built up a following on the platform are at risk. Quit trying to demonize them.


Seems very pragmatic to me.

Actually "If people feel uncomfortable watching it in public" is vague but reasonably good guideline.

It has to be nearly impossible to draw the line in so many cases, but the guidelines seem reasonable and the line will roughly get drawn about there.


So if I'm uncomfortable to watch a video in public about, let's say "How to avoid getting STDs", which is clearly something me and probably a lot of people would benefit from, it should be demonetized?


" it should be demonetized?"

I don't see why PSA should be being 'monetized'.

I don't see why Coca Cola should be forced to pay for ads during this program.

This is not 'banning or censoring videos' - this is basically 'deciding which bits of content advertisers want to pay for'.

Different game.


>I don't see why PSA should be being 'monetized'.

That's irrelevant to the argument.

>I don't see why Coca Cola should be forced to pay for ads during this program.

I didn't say that and I haven't picked up that argument anywhere else really. I merely refuted your argument that something "comfortable to watch" is a good metric for this program. I came up with this example off the top of my head. And you would find a lot of suitable advertisers for this kind of content as well.

Please stick to the original argument, that "comfortable to watch" is a good metric instead of strawmanning me with censorship and forced payments.


"I don't see why PSA should be being 'monetized'. That's irrelevant to the argument."

It's relevant because we are discussing which videos can and cannot be monetized - some are not intended to be monetized.

"comfortable to watch" - is actually a perfect way to communicate a complex and nuanced issue, in a simple way to various audiences - internal, operational, close ad partners, long tail ad partners.

It reasonably summarizes the otherwise reasonable policy.

And yes - someone talking about explicit sexual subjects for whatever reason definitely falls in that category, because advertisers would be wary of it. Google wants it that way for obvious reasons, and it seems like it's working.

Finally - there is nothing to report here. It's great that we get some insight, but this has to be a fairly naive view of 'insider reporting'. There is no story. Google has a reasonable policy for determining which content is good for ads, in general, and which is not. They seem to be acting professionally on it. This is people doing their jobs reasonably well especially with nuanced things.

If this story were about content removal or censoring, it would be a different discussion.


If advertisers do not want to be associated with the content, then yes. This is all about advertisers. YT costs money to run and that money comes from advertisers.


And eyeballs for advertisers come from content. For example Laci Green has up to several million views on her videos. So she pulls a lot of people onto the platform. But because sex ed is "uncomfortable" she shouldn't be able to monetize? That isn't quite fair to the creators either. (I'm talking about the general moral juxtaposition, not Google's interests)


"That isn't quite fair to the creators either."

It's fair if it's clear and applied reasonably consistently.

Google makes completely arbitrary moral judgements about 'advertising guns' - which I might support, but they are arbitrary.

With this - it's about advertisers interests, not Googles.

If Coca Cola et. al. don't want their ads jammed into CardiB rapping about her vagina, her butt, your python, and her vagina again - well that's Coke's prerogative. Surely.


As I just explained to my 16-year-old daughter about YouTube policies: YouTube is a business. Businesses will do whatever they need to do to get customers and make money. Anyone who was under the strange illusion that YouTube was a public and free platform to make money and/or create any content they wanted was naive at best.

If you want to create a channel for users to view your content, purchase a domain, buy a VM on some cloud service, and set up your own video streaming service for your own channel.

It would be interesting to see how much it would cost one person versus the income generated. I'm pretty sure that anyone with millions of subscribers would go bankrupt instantly.

I think the one reasonable avenue would be to unionize all of the major content creators and invite any creator to be a part of the union. Then threaten a strike. But the content creators are probably like a herd of cats. It's unlikely any coalition is remotely possible.


> I'm pretty sure that anyone with millions of subscribers would be bankrupt instantly.

It would be interesting to see a napkin calculation on this, because I highly doubt it is true. Bandwidth costs and video compression are both a lot better these days than people think, and if you could cut the middleman out of advertising (which you might be able to do if you have millions of viewers) I think the benefit would be significant.
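
Here is one version of that napkin calculation; every input is an assumption (bitrate, watch time, and a blended CDN egress rate), so treat the output as an order-of-magnitude guess at the bandwidth bill only.

    # Back-of-the-envelope egress cost for a self-hosted channel; all inputs assumed.
    monthly_views = 5_000_000
    avg_minutes_watched = 8
    bitrate_mbps = 3.0            # roughly 720p H.264
    egress_cost_per_gb = 0.05     # assumed blended CDN rate; big clouds charge more

    gb_per_view = bitrate_mbps * avg_minutes_watched * 60 / 8 / 1000
    monthly_egress_gb = monthly_views * gb_per_view
    print(f"{monthly_egress_gb:,.0f} GB/month -> ~${monthly_egress_gb * egress_cost_per_gb:,.0f} in egress")
    # With these assumptions: ~900,000 GB and ~$45,000/month, before storage,
    # transcoding, and engineering time.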

Of course, YouTube does not only provide bandwidth and servers. They provide a "platform", not in the technical sense but in the broader sense. They have well working apps on all mobile platforms, they have a social network, they have editing tools and redundancy, they have so many benefits that you'd really have to think twice before taking them on.

Your idea of a strike is interesting, while I doubt it would happen I would love for content creators to team up and make common demands.


Why would millions of subscribers bankrupt someone? I work in the industry building video software that would allow anyone to create their own personalized video-on-demand or live streaming service. I haven't run the numbers myself, but the cost to provide a YouTube-like experience is not that expensive. I would be happy to run the numbers if we could determine how many minutes a single subscriber would watch in a month.



