Hacker News
Roger McNamee is really sad about Facebook (time.com)
96 points by muzz on Jan 17, 2019 | 62 comments



> Zuck has always believed that connecting everyone on earth was a mission so important that it justified any action necessary to accomplish it.

Anytime someone starts telling you that their end is so noble it justifies any means, you should run away from them as fast as you possibly can.


It's no wonder that folks in the valley are so fascinated with AIs running amok and turning the world into a paperclip factory, because apparently that has been Facebook's business model from the start.

A very relevant article from Ted Chiang: https://www.buzzfeednews.com/article/tedchiang/the-real-dang...

"Consider: Who pursues their goals with monomaniacal focus, oblivious to the possibility of negative consequences? Who adopts a scorched-earth approach to increasing market share? This hypothetical strawberry-picking AI does what every tech startup wishes it could do — grows at an exponential rate and destroys its competitors until it’s achieved an absolute monopoly. The idea of superintelligence is such a poorly defined notion that one could envision it taking almost any form with equal justification: a benevolent genie that solves all the world’s problems, or a mathematician that spends all its time proving theorems so abstract that humans can’t even understand them. But when Silicon Valley tries to imagine superintelligence, what it comes up with is no-holds-barred capitalism."


I guess in a way, these big valley companies are indeed turning themselves into superintelligent entities that do exactly what folks in the valley worry about.


This is not new. First it was the militaries and governments, then it was the financial institutions, now it's big tech companies, and eventually it will actually be AI (because firms will use AI to guide their actions).

Aligning anything with its stated goals has always been a hard problem; the superintelligent paperclip factory just happens to be sensational enough to grab attention.


"The louder he talked of his honor, the faster we counted our spoons." --Ralph Waldo Emerson


Anyone who tries to tell me that forcing me to use them as a middleman to talk to someone is "connecting" me to that someone is an idiot or thinks I am an idiot.


If you ever find yourself in a situation where you have to explain to a relative or acquaintance why FB is bad and they just don't get it, point them to this article; it'll save a bunch of time. It's a great articulation that sums up Facebook's issues and potential solutions. Some things are so important they bear repeating.


Is it feasible to create a corporate structure that would encourage a company to act with the public interest in mind? Are there any known patterns, research, or prior attempts? My very ill-informed mind is thinking of a parent-child structure where the parent gets to veto (and only veto) decisions made by its children that may be harmful to the general public.


I'm currently reading an amazing book that touches on this very subject: http://www.marjoriekelly.com/books/owning-our-future/

I would highly recommend it if you are interested in this topic.



Do you know of any successful and "big" companies that follow this structure?


Per the linked article, B corps are a very new thing -- they started within this decade and not all states even have them yet.

Almost all big companies predate B corps. It might be a "wait and see" kind of thing.


Big according to who? Purism is a relevant example[1].

[1] https://puri.sm/posts/purism-now-a-social-purpose-corporatio...


I'm wondering if the next Google could be successful under such a structure, or if it would get outcompeted by a company that treats growth and profit as the only virtues. Companies like Purism are awesome, though.



Sounds like government regulations?

That's typically been the way we've managed these kinds of threats/bad actors, but technology is changing so fast, and policy makers are hamstrung by bureaucracy, incompetence, apathy, or all three.


Yeah, my first reaction was "we need regulations," but then I realized that's just not going to be possible, especially in the future. We need a new corporate structure that can compete and reproduce in the current market.


Or additions to / restructuring of the current regulatory bodies so that they can respond to such things faster.


The legal fiction known as a "corporation" is a government regulation. So there's that.


You're getting awfully close to suggesting "democracy in the workplace" on HN aren't you?


What's "public interest"? Why do you think everybody among the public has the same interest and this interest is easily identified and not subject to gaming and corruption by whoever is selected to identify it?


I don't think we need to define public interest in a perfect way, because different ideas undergo a process of competition and selection before being classified as public interest by society at large. This is what happens today: when people care about global warming, reducing homelessness, etc., it's because these ideas were selected. What I'm wondering is whether there's a structural way for these to be seriously considered when a company makes business decisions. Any system can be gamed, of course, but that doesn't mean we shouldn't have any system.


But that system already exists. Any company that thinks people care about global warming can start global-warming-related activities, and people will support them. Any company that wants to reduce homelessness can start doing something about it, and if people think it's doing it right, they have ways to support the company, both through direct donations and by buying more of its products or services. Why is any other system necessary?


There's this thing called a "non-profit" - it's a good start.


I keep reading about how fake news on Facebook supposedly influenced the elections, but I haven't seen any factual confirmation of that. Did it really happen? Has somebody researched it? How much influence did they find? I have seen research saying that changing people's minds through campaigning is pretty much hopeless - they nod at ads they agree with and ignore ones they disagree with, but won't change their minds: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3042867

Maybe this research is wrong - but I'd like to see any research on whether Facebook fake news really had any influence at all.


It’s not about changing minds; it’s about getting people to turn out to vote by appealing to their emotions through misleading statements or outright lies ("they are taking away your guns, they are pedophiles!").


Wait, I thought getting people to vote was a good thing? I see a wave of propaganda to vote every time elections are near: registration drives, celebrity appeals, etc. Everybody talks about how voting is good, not voting is bad, and "suppressing" somebody's vote (including convincing them not to vote) is plain awful. There's even a case where a judge threatened a person with jail unless they voted: https://www.wkyc.com/article/news/investigator-judge-ordered... (clearly unlawful, but the judge wouldn't have done it unless they thought voting was great).

So if fake news drives people to vote, shouldn't it be a good thing, at least in the minds of people who think voting is always good?


I guess that depends on whether you think appealing to outrage and fear through lies and disinformation is good for Democracy.

When a candidate/politician tells a lie, they can be called out on it and are (until recently, at least) held accountable for their words and actions.

However, fake news purveyors can practically anonymously blanket the internet to such a degree that mass manipulation can occur with no real way to counter it.


> I guess that depends on whether you think appealing to outrage and fear through lies and disinformation is good for Democracy

Given how much of this top politicians do, and that they still get elected and re-elected for decades, I think most of the electorate is OK with it.

> When a candidate/politician tells a lie, they can be called out on it and are (until recently) accountable to their words and actions.

Or not. But I don't see how that is related to the topic under discussion. You said the main effect of fake news is getting people to vote. But getting people to vote is considered a good thing, so what is the bad thing, then? That they are lies? Politicians lie all the time; nothing has changed. Democracy survived it.

> However, fake news purveyors can practically anonymously blanket the internet

Do you realize the size of the modern internet? Nobody can "blanket" it.

> to such a degree that mass manipulation can occur with no real way to counter it.

So we're back to the original question: is there any evidence of mass manipulation actually occurring, and if so, who is being manipulated, and to do what? So far the hypothesis has been that people are manipulated into voting more - but, as I noted, people are manipulated into voting in every election; this has nothing to do with "fake news". Literally everybody is trying to manipulate their supporters into voting more (and their opponents into voting less). Is there any data showing that fake news has a noticeable effect on this? I certainly haven't seen any, and none of the MSM articles mentioning it as an obvious fact that everybody knows is happening provides any. It looks more like routine moral panic than something factual.


You keep glossing over the fact that they are convincing people to vote entirely through deceit. Do the ends justify the means to you?

Re: blanketing the internet, an estimated 126 million Americans were exposed to this kind of content just on Facebook - https://www.nytimes.com/2017/10/30/technology/facebook-googl...


> You keep glossing over the fact that they are convincing people to vote entirely through deceit

So what? First of all, politicians routinely use deceit to reach their goals and aren't even ashamed of it. Second, if voting is always good, why is achieving it through deceit so bad? Let's say I know you have a bad diet, and through deceit I could convince you to eat healthier and add 5 years to your lifespan. Would you think that's a bad thing, and that it'd be better for you to be undeceived but die 5 years earlier?

> an estimated 126 million Americans were exposed to this kind of content just on Facebook

"Exposed" is a weasel word. They are just taking entire US audience of Facebook, and saying since any of those people could have seen this content, let's imply they all did and by implication were influenced by it. But it's baloney. If you take a moment to think, how likely it is that there are 136 million voters in the US, and almost every single one of them was influenced by Facebook? Moreover, you claim that main influence of the fake news is convincing people to vote. Do you imply that absent fake news, only about 10 million people would vote? That's clearly nonsense. So if we believed your hypothesis and NYT numbers, it would be obvious that there's no noticeable influence - we don't have sudden spike of voter turnout comparable to the number of 126 million, if fact we don't have any noticeable spike at all. So either 126 million number is nonsense, or voting turnout hypothesis is nonsense - I think likely both.

And all that from a campaign with a whopping $100K budget? If that were true, why would politicians spend billions on their campaigns at all? If you want some more realistic numbers, look here: https://sci-hub.tw/10.1257/jep.31.2.211 - the effect is in the hundredths of a percent. So, about 10-20 thousand people out of a hundred million. That's the likely size of the influence of this phenomenon, and maybe even less, since this assumes fake news is as effective as fine-tuned political ads (it usually isn't - if you've seen Russian fake news, it's utter crap). So probably around a thousand or so is the real size.


> So if fake news drive people to vote it should be a good thing

Absolutely not. I would rather people not vote at all than vote based on incorrect information.


But you must know tons of people are already voting based on incorrect information. Just read any public politics forum; you'll encounter thousands of them. Nobody says "please vote only if you are sure you have correct information." Everybody says "please vote" without any qualifications. Nobody says "we'd run a voter registration drive, but before registering each voter we'd double-check that they have correct information with this easy 20-minute exam." It's always an unconditional "voting is good, no matter what." So which is it? Is voting always good, or only good if you have "correct information"? And who's going to decide whether information is correct, given how wildly information providers disagree even about basic questions of policy? Even the same person's position may be diametrically opposite to what it was 5 years ago - so which information is correct? Should we discourage people we think have incorrect information from voting, and if that's a good thing, why is nobody doing it?


Note, for example, that some politicians are going as far as creepily threatening to publicly shame non-voters: https://freebeacon.com/politics/wallace-campaign-uses-orwell...


Your vote is not worth more than a common idiot's vote.


The waters are further muddied by the fact that nobody can agree on a definition of "fake news."

It's meaningless, sensational claims all the way down.


Sure we do: fake news is the dissemination of incorrect information purporting to be the truth (and not someone's opinion).


That's "lying" (if intentional) or "being mistaken" (if sincere). People do it all the time since speech was invented (and probably before that too - I've read about some animals being capable of lying).


I suspect that if we had a workable definition, finding any evidence that it has had any influence would become even harder.


Cry me a river. He still owns FB stock and AFAIK has not donated prior profits to charity.


A heavily-recycled story that's been making the rounds since at least 2017. No matter, have to keep the outrage machine going to drive traffic!


I'm personally glad the story made it to the front page again, as I find it very relevant today and I'd never read it before.


Is it more reasonable to think that this is the drumbeat of a baseless conspiracy, or that influential people are genuinely changing their minds about very large issues, despite the profitability of keeping silent?


Here's the same story from 2017: https://www.usatoday.com/story/opinion/2017/08/08/my-google-...

I don't think that makes it less relevant in general, but there's nothing "new" here. There doesn't need to be anything new for the point to still be interesting, but it does reveal something about our bias towards the "new" in our political conversation and analysis.


I first thought it was because FB thought his last name was made up.


    I got involved with the company more than a decade ago and have taken great pride and joy in the company’s success … until the past few months.

Few months? Then you haven't been paying attention, or you're just as bad as they are.


He says in the next paragraph that that was from an email he sent to Mark and Sheryl before the 2016 election.


Is there any official confirmation that he really mentored him?

All I've seen is a statement that he knows him.


That'd be a hell of a thing to lie about, and he'd easily be called out and discredited over it.

Also, I'm not sure what kind of official confirmation you'd expect in this situation. Facebook isn't going to come out in the wake of this and confirm it, as that would give it more credence. You should be watching to see whether they deny it.


McNamee was an early investor in Facebook. I don't doubt the relationship.

He's an interesting guy. http://www.elevation.com/EP_IT.asp?id=102


Roger's a nice guy and a cool Deadhead, however it's hard to get the guy to stop talking at times. He was interested in a company I created at one point and in the few times we chatted (2-3) he dropped Zuck's name a few too many times in normal conversation. I don't know if it's true or not, but I frequently wondered to what extent he mentored him, like they chatted frequently for a few months? It seemed to me at the time that he was trying to position it as, "would you like to be the next person I mentor post-Zuck?" Lots of weird red flags all-around in those talks, which later fizzled out.


Yeah, that's exactly what I imagined.


I think the right steps are already being taken. Breaking up Facebook would just decentralize the problem and make it impossible to solve. At least with Facebook, everything is in one place, and they have the money and resources to fix things.


Honestly curious: what steps do you see that are being taken?


The company is slowing its revenue growth to ramp up more security measures. Zuck seems committed to fixing Facebook even if it hurts near-term results.


The data security issues, while deeply troubling, are not the most troubling thing about FB.


The post title was changed from the article title, it should be: "I Mentored Mark Zuckerberg. I Loved Facebook. But I Can't Stay Silent About What's Happening."


Anyone have a GDPR-friendly link? Wayback[0] won't save the site[1], for some reason. =[

[0] - https://web.archive.org/web/*/http://time.com/5505441/mark-z...

[1] - https://web.archive.org/web/20190117191830/http://time.com/5...



The piece of sh#t bloatware that is time.com just crashed my phone while loading all the resources needed to read a text-based article. Go figure.


Looking at it via uBlock Origin, the number of things blocked just keeps ticking up. At approximately the one-minute mark it's at 108.


Try installing Brave on your phone.



