This reminds me of a hotel that elected to advertise its own "Room Service Pizza" as a local commercial business. It went something like this...
A given guest checks in, unpacks, and finds themselves ready for dinner. They skim the room service menu, consider Room Service Pizza, and then dial a nearby pizza chain for delivery. Reviews for Room Service Pizza were not good. At the time, the "Domino's Is Cardboard" meme and similar chain-pizza sentiment were in full swing, so things weren't looking up for Room Service Pizza, even though it was right there on site and, as the story goes, arguably not that bad.
The hotel tried "everything," but nothing worked; new recipes, surveys, fresh ingredients... "One chain pizza, please" was always the result.
One day someone desperate, and with authority, had an idea. They had changed "everything else" by this point, so the next lever was marketing. This person drums up a separate phone number, business name, menu, logo, uniforms, the whole bit, for Room Service Pizza, and drops the flier menu into the "local restaurants" courtesy spread in the rooms. It's not "Room Service Pizza" anymore; it's "Tony's Tower Pizza" or whatever.
Orders pick up. Reviews go up. Deliveries from the chain drop noticeably. Every time the special phone line for "Room Service Pizza", aka "Tony's Tower Pizza", rang, they knew something had finally broken loose.
... So, I guess my point is that this isn't the first time pizza and masquerade have come together in business. Here, a hotel used ads and "rebranding" to sell its own pizza, versus Google using fake pizza to sell more ads. If pizza were a coin, these might be opposite sides of it.
Universal Orlando has its own in-house pizza that's branded completely differently, and they even slip flyers under the hotel room doors to make it look like a local chain. Thing is, it's actually pretty decent pizza, and the price isn't too bad either. In their case, I think most people have caught on to theme-park resort pizza being awful, so the branding was essential.
1. It's hard to understand these conclusions without knowing what they used to measure the effectiveness. Clicks? Skips?
2. From the headline, I assumed that they used a fake brand to track the number of searches for it later. Perhaps to see how paid ads could influence organic search. It seems like that would have been a more interesting result.
I read the article and I also can't figure out how they measured what they claim to have measured. The results seem made up in much the same way other advertising metrics and results seem made up.
When I used to run YouTube ads ($100k campaigns), the account manager at Google Ads informed us about an automated option for measuring assisted and unassisted recall. People who had seen the ads were presented with one-question surveys on YouTube. It was pretty interesting, as you could see what a good reach and frequency for your ads would be.
Hey--This is Ben--I did the research. We used a survey-based tool called Brandlift that measures post-exposure lifts in recall and favorability against control audiences using a defined competitive set. The tool automatically separates out the control audiences and the various test cells so there is no cross contamination. There's detail on the tool here: https://www.thinkwithgoogle.com/products/brand-lift/ Happy to answer any other questions you have--glad you found it interesting...
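For anyone curious how the lift numbers these survey tools report are derived, the core arithmetic is simple: compare the recall rate in the exposed cells against the recall rate in the held-out control audience. A minimal sketch, with made-up survey counts (this is illustrative arithmetic, not Google's actual Brand Lift methodology):

```python
from math import sqrt

def brand_lift(exposed_yes, exposed_n, control_yes, control_n):
    """Compute absolute and relative lift in recall between an exposed
    group and a control group, plus a two-proportion z-score to gauge
    whether the difference is likely more than noise."""
    p_exp = exposed_yes / exposed_n   # recall rate among people shown the ad
    p_ctl = control_yes / control_n   # recall rate in the holdout group
    abs_lift = p_exp - p_ctl          # percentage-point difference
    rel_lift = abs_lift / p_ctl       # relative lift (the figure usually quoted)
    # pooled standard error for the difference of two proportions
    p_pool = (exposed_yes + control_yes) / (exposed_n + control_n)
    se = sqrt(p_pool * (1 - p_pool) * (1 / exposed_n + 1 / control_n))
    z = abs_lift / se
    return abs_lift, rel_lift, z

# Hypothetical results: 1,200 of 10,000 exposed users recall the brand
# vs. 800 of 10,000 in the control group.
abs_lift, rel_lift, z = brand_lift(1200, 10_000, 800, 10_000)
print(f"absolute lift: {abs_lift:.1%}, relative lift: {rel_lift:.1%}, z={z:.1f}")
```

With these invented numbers that comes out to a 4-point absolute lift, a 50% relative lift, and a z-score large enough that the difference is very unlikely to be chance. The point of the separated control cells the tool maintains is exactly to make `p_ctl` a clean baseline.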
An MVP has just enough features to satisfy early customers. If all you've built is a landing page to gauge interest, that's useful for validating the idea, but that's not a product.
I mean... just because someone can call a video an MVP doesn’t mean it was an MVP.
An MVP is a functioning product/service. It's something that provides enough utility that users can actually use it. Unless Dropbox was a media company demoing a new film, the video was not an MVP. It was a marketing campaign.
I do wonder if some of the outcomes were driven by people wondering what this new brand was. It would be different if they were serving small populations, but they served it 20 million times. I guess that makes it a good case study for small brands, but do the results stay the same once people recognize the brand?
It seems like this might be the precursor to doing a real test (probably small/regional) with a real brand. If the response was extremely negative I'd think that they'd shelve it altogether.
Is this even legal, given false advertising guidelines? They have provided an advertisement for products by a company that doesn't exist. I don't know the legal details, but I am questioning whether this is actually permitted in the U.S. if you go by the letter of the law.
I am not an expert in this area, but none of the ads I watched seemed to actually advertise anything. There is basically just a picture of a pizza and a line like "The tangy sweetness of fresh mozzarella".
For this to be false advertising, there would need to be an offer that's extended to customers. For instance, "$10 large pizza" would be an offer: give us $10 and we give you a large pizza. If they failed to follow through on that deal (by, say, not accepting money and not giving pizzas) then that would be false advertising. There's no such offer in these ads, or anything that could even be remotely construed as an offer.
I agree that companies should have the freedom to test markets to optimise their system designs (free trade is important), but surely a multi-billion-dollar company can use a real business and spend a little extra dough to test its system without breaking the law. This seems lazy, and legally it should be investigated.
And this doesn't look good for their brand either, now that we live in an age of 'fake ads' swinging entire US elections.
This concept is very creepy. This particular pizza example, as reported, is not, but who knows if they come clean on ALL their experiments. They have a lot of power in their hands, and now they are running experiments on people without them even knowing? This can get out of hand, just like ads were used to push certain narratives during elections all over the world on every platform, and just like Facebook played with people's emotions by varying which posts were shown [0]. No regulation, no ethics panel, no external audit, no transparency, and they still show it off innocently.
I am trying to understand the moral/ethical issue here. Care to elaborate? There might be something I am missing, because I just don't see it. It's a genuine question.
Google ran ads under a false brand name to test how viewers reacted to them. From what I can tell, the marketing experiment was fairly similar to the standard A/B tests advertisers run on ad design and copy, or the kind of tests used by designers and developers for new site functionality. The biggest difference being that--and this is the possible ethical concern from the parent comment--the brand wasn't real. Users had no way of knowing who produced the ad, or who was attempting to analyze their behavior.
They aren't totally analogous, but there are circumstances where obscured identity is tested: focus groups and market research tests that ask consumers' opinions about possible product lines and/or changes, political polling asking how potential voters would perceive an unnamed politician (such as might be done when developing a response plan prior to a major scandal breaking... if they have time), etc. In those examples, however, the subject at least knows who is doing the research, if not who commissioned it. And most importantly, they give consent.
The Facebook paper was a much more significant ethical concern, largely because it was (1) clearly a psychological study and was, in fact, published as such[0]; (2) had the explicit goal of eliciting changes in subject emotional responses and behavior in order to test the hypothesis; and (3) made no effort to comply with standard ethical practices for informed consent in human research. The paper got around the lack of informed consent because Facebook was a private company and was able to point to their Data Use Policy. That was enough of a fig leaf for Cornell's Institutional Review Board to accept because the Cornell professor was only given access to completed data after the experiment was independently run by Facebook. They accepted it in those circumstances, but I doubt anyone was very happy about it.
Google's experiment is quite tame in comparison, and there are obvious parallels to the sort of benign A/B testing marketers use. There's also--arguably--a pretty major difference in the type of experimentation going on, and it's not intended for academic publication. Also, YouTube terms of use presumably contain language that gives them enough cover to avoid any legal consequences.
Ethicists and university IRBs put a lot of thought into where the boundaries lie and how proposed experiments can cross them unexpectedly. Some of those boundaries were learned after truly horrendous experiments in our past that nobody ever wants to see repeated. Others were much less destructive, but nevertheless had their own flaws and problems.
Personally, I think Google ought to have at least labeled it as a research ad and put their name on it at the end, or on the page users clicked through to. While this example was fairly benign, with only relatively minor ethical concerns, I understand the parent comment's worry. There are a lot of interesting questions that can be researched, and I'd imagine some pretty abusive ways in which relevant data could be discovered.
In that context, the discussion's a good one to have for both the public and the companies themselves. Not because they're likely to jump into those abusive methods or because they'll repeat something along the lines of the Milgram experiment (or far, far worse). It's simply useful to understand the pitfalls beforehand. I'm not normally one for slippery slopes, but there are instances where considering them beforehand helps avoid the problematic outcomes altogether.
Just a note of response here: We did put our name on it--there was a landing page with details on the aims of the research, the methodology used, etc. It included contact info, and we responded personally to any and all questions.
Well if it's just the slippery slope argument then I get that but I don't agree it's a problem until we actually do hit fake HIV med ads.
Furthermore, it's been pretty normal for startups to gauge customer interest by setting up pages for products that don't exist yet. Maybe it's different because of Google's size, but I am personally waiting until the slippery slope hits things like fake HIV-cure ads.
It is not that; it is mostly about informed test subjects. The slippery slope argument is the most extreme case, let's say. The thing is, this is not black or white; there are grays in between, and who decides what shade of gray? Do we wait, like we did with social media, until we see another CEO in front of Congress saying 'I'm sorry we helped misguide people'? Or should we be more proactive? Moving fast and breaking stuff works when you are a small company with no liability, not when you are responsible for a bunch of data on the whole world.
Why is this "not a simple A/B test"? It seems to be functionally the same (maybe A/B/C/etc. and not A/B), just with no actual product.
Is it the nonexistence of a real product which bothers you? Is it the comparison testing?
More generally, what do you feel constitutes an experiment that requires informed test subjects? I agree with you that there are shades of gray, but I feel the distinction likely lies somewhere between "red vs green button on landing page" and "new cancer drug," while you seem to be arguing that the former should still require informed consent. (Which could be a reasonable argument! But I'm trying to get a better gauge on what it is that you're saying.)
Again, informed test subjects. Were the users informed? How do we know when it is a real ad and when we are test subjects? Are they going to publish every test and result, or cherry-pick the funny pizza ones only? Maybe read this other comment, which explains it far better and in a more organized way: https://news.ycombinator.com/item?id=17815404
This is almost literally a Godwin argument. Your argument basically unpacks into, "there's a dangerously slippery slope between fake door tests and war crimes", and it's most often used as a self-serving justification by bureaucrats who want you to do hours of busy work before giving people a questionnaire: http://slatestarcodex.com/2017/08/29/my-irb-nightmare/
Look, marketing researchers are not going to just accidentally vivisect concentration camp inmates to death just because you leave them unregulated. It's not like people just accidentally go around committing crimes against humanity because there aren't any regulations stopping them from doing so. In fact, at Nuremberg, it turned out that most of them committed crimes against humanity specifically because there were regulations requiring them to do so!
This isn't to completely pooh-pooh the necessity of research ethics and regulation. But those regulations need to be based on realistic risk models.
It is not my fault that the rules that govern research and the interaction between researchers and test subjects arose because of WW2, nor that Godwin's law exists. I mentioned that as a historical fact. So nobody can talk about how this started because it is literally Godwin's law? Great idea.
I'm not saying it's your fault; I'm saying that AB testing pizza advertisements for a fictitious pizza brand on YouTube is so far away from the risk of committing war crimes that it's a big stretch to invoke the Nuremberg Code as a criticism of it.
The code is more about a simple set of rules for research involving human test subjects. That is the baseline. It started because of that and got adopted by the scientific community as the most basic set of rules. I'm not implying anything about war crimes; I would have referenced it even if it had been created by any other random event.
If they would first create a Doctor Fork pizza in some small town ("we won't deliver to you, but wherever you are, you can come here and try it") would it be OK then? What is your argument based on in that scenario?
Because if you think that it would be OK, then I would argue it makes zero difference for the end user experience. And if it doesn't make any difference to users, then the only issue seems to be that you don't like something about it.
There are loopholes everywhere, yes that is one. No, it's not about finding the minimum set of changes to make it airtight. It is about informing test subjects that they are being tested on.
"Cool and creamy. With just a small dose of OMG. Absurdly delicious and just what the doctor ordered. A balanced diet of cream, cheese and sugar. [Unintelligible] Put a fork in it. Doctor fork."
I keep being baffled at how transgressions tend to get handwaved away with "everybody does it" by some commenters on Hacker News.
First, there are plenty of locations where advertising is heavily regulated. Second, even in locations where that is not true, "everybody does it" does not justify anything. Every surgeon used to perform their job without washing their hands, and yet we got rid of that behavior.
There have definitely been creepy experiments done on Facebook and OkCupid, where they played with people's emotions and social responses--especially on platforms they themselves control--but this specific example just seems like market research at its purest: seeing how people respond to a typical product. Agreed that such experiments should be published, or at least made to follow ethical guidelines.
25 years ago I'd regularly (every month or two) get focus group invitations that paid $100+/hr. Along with informing the subjects, compensating participants is a rock-bottom ethical standard that Google apparently cannot meet.
Do you consider all A/B tests to be studies one should be compensated for? Not to mention that they are compensating you by sponsoring the content you're viewing, but that should go without saying.
(Focus groups and A/B tests are different, and afaik, Google does compensate for focus groups)
Would this be unethical if it was a real pizza company doing an A/B test for their real brand?
Would this be unethical if it was a real pizza company doing the A/B test for a fake brand?
Is it unethical for Google to have done this?
You've just said no to the first question, and previously yes to the last, so I'm curious where you stand on the second, and how you justify those different answers. I don't see a difference between any of them. (In other words: I draw the line at "when it's no longer an A/B test and they're actively soliciting your feedback", but you draw it differently. Why?)
If a company engages in false advertising, which is against the law, they should be fined by the FTC at the very least, and certainly someone visible should lose their job. It's not complicated.
This is not without precedent in the ad industry, although the level of subtlety varies a lot.
Billboards are a notable example, because the medium is in-house and there's almost always latent inventory. Adams Outdoor once advertised the fictitious, toilet humor brand 'Outhouse Springs', then years later the seemingly personality-enhancing wonder drug 'Reachemol'. Lamar's Milwaukee division advertised a cat doctor with a preference for chocolate who healed 'boo-boos with nom-noms'.
This is essentially a big A/B test, and unlike billboards, they can collect fine-grained data on how people interact with the ad.
Damn, I'm jonesing for a Celeste four-cheese microwave pizza really bad rn. Where might a lowly member of the public borrow magical 1970's microwave technology in the Mountain View area?
PS: Anyone have one of those huge Radarange RR-9 microwaves with that (capacitive?) touch-panel in the late 70's or early 80's? Yup, I'm ooooold.