Hacker News
FTC Probing Facebook for Use of Personal Data, Source Says (bloomberg.com)
1107 points by coloneltcb on March 20, 2018 | 371 comments



This presents an interesting opportunity for the FTC.

The amount of data being amassed by Facebook, Google and others has become exorbitant, and apparently has already been abused (some might even say weaponized) in a major election.

If Facebook indeed violated the 2011 consent decree, then the FTC can fine them up to "thousands of dollars a day per violation [per user]". This presents the FTC with the opportunity to send a message to these data hoarders: protect the data you collect, or else.

Fine them to the point where they have to start asking themselves whether it's even worth it to collect and store certain data, and with whom to share it.

It shouldn't be the government's job to ensure that the data gets protected; this should be in Facebook's own self-interest.


To your second point, I argued this sort of data was weaponized and not just in an election (http://www.armyupress.army.mil/Journals/NCO-Journal/Archives...).

To the third point, focusing on Facebook seems like that scene from Casa Blanca though: "There's gambling going on, I'm shocked, shocked" "Your winnings, sir" Not confident FTC fines would actually change any trends.


I was the technology lead at Myspace for the Games Platform during the 2011 crackdown by the FTC. We took the FTC filings seriously and spent large amounts of cash and resources to prevent our data from making it to data brokers. Fines are one thing. The FTC can shut down or cripple your business.

I bet if Facebook is found not to have taken reasonable steps to mitigate issues raised during the 2011 FTC investigation, they'll be forced to do yearly audits of every app on the platform and require a KYC (know your customer) process for all app publishers. This will be very costly and we'll probably see the end of the FB Graph API except for trusted and highly capitalized partners.


I have not been involved in FTC decisions but I have worked at companies subject to FTC consent decrees. I agree with adrr's comment. The initial fines are not that big a deal; the work required to demonstrate compliance is non-trivial.


Last I saw there were over 1 million accounts distributed by CA.

Even assuming they were only distributed 3 months (unlikely) and there were only 1 million accounts (also unlikely) the maximum fine is:

1,000 x 1,000,000 x 90 = 90 billion dollars.

Imposing the maximum fine would be more than double their entire 4th quarter earnings last year.

That's a bite. That would hurt any company.


From this article:

> If the FTC finds Facebook violated terms of the consent decree, it has the power to fine the company more than $40,000 a day per violation.

> Facebook Inc. is under investigation by a U.S. privacy watchdog over the use of personal data of 50 million users

So I think the maximum (assuming this went on for 90 days) would be:

40,000 x 50,000,000 x 90 = 180,000,000,000,000

180 Trillion.
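For what it's worth, both back-of-the-envelope maximums in this subthread check out arithmetically. A quick sketch (the per-day rates are the figures quoted here, not an official FTC schedule):

```python
def max_fine(per_day_per_violation, violations, days):
    # Maximum exposure = daily rate x number of violations x days elapsed
    return per_day_per_violation * violations * days

# Parent comment's estimate: $1,000/day, 1M accounts, 90 days
assert max_fine(1_000, 1_000_000, 90) == 90 * 10**9          # $90 billion

# This comment's estimate: $40,000/day, 50M users, 90 days
assert max_fine(40_000, 50_000_000, 90) == 180 * 10**12      # $180 trillion
```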


I think the world would be a better place if they just pulled the plug on the fb. Donald Trump is one really really bad outcome.


Hillary lost because she was the worst Democrat candidate in history. She had MSM, the entirety of liberal America, all major tech companies, most/all colleges, illegals voting en masse -- all of these organizations were united in their support for Hillary, and she still lost.

The Dems had the election on a silver platter and they still lost because Hillary was awful.

Hillary lost the election; if it weren't her, it would've been a win.


Calling Hillary the "worst Democratic candidate in history" is just a meme - she was perfectly qualified for the job, more so than Donald Trump, anyway. What she wasn't was photogenic, charismatic or capable of not coming across as "a politician" at a time when both parties were in a disgruntled, antiestablishment mood. I think she and the DNC felt it was finally "her time," and she didn't take Trump seriously, perhaps because she felt the winds of destiny were at her back.

Unfortunately for her, Julian Assange decided to make it his religion to ruin her and Donald Trump happens to be very good at channeling populist antipathy. So it goes.

>She had MSM, the entirety of liberal America, all major tech companies, most/all colleges, illegals voting en masse

Ok. Let's go through this one by one...

- The Democrats/leftists/DNC do not control the mainstream media. That's a conspiracy theory started by the right-wing fringe and Fox News, and of course, canonized by Trump and his supporters, in order to dismiss all criticism in the media as being manufactured.

- The entirety of liberal America does not think and act in unison, nor were they entirely behind Hillary. Both parties were fractured this last election, and many Democrats who couldn't get Bernie wound up voting for Trump or stayed home.

- All major tech companies are not liberal or leftist. There is a deep wellspring of right-wing, alt-right and libertarian ideology in tech and SV.

- "most/all colleges" are also not automatically leftist. Plenty of right-wing, alt-right and libertarian ideology there as well.

- "illegals voting en masse" is just a baseless conspiracy theory.

You are correct that the race was Hillary's to lose. Unfortunately you couldn't resist running through the typical Trumpist hyperbole. Sad.


"Hillary lost because she was the worst Democrat candidate in history"

Because she was a woman? I mean, in 1984 Walter Mondale got 13 electoral votes and just 37 million votes. I think this qualifies as much worse.

But I get you.


How revolting, between nitwits who voted for Hillary purely because she was a woman and nitwits that dismiss votes against Hillary as purely a masculine act of defiance towards women in positions of power -- I don't know what's worse. Clearly some people are only capable of reducing others to arbitrary superficial qualities inferred from their own prejudices.

Is it really beyond your comprehension that someone would judge Hillary based on the quality of her character rather than her gender?


Sure you voted for Trump because you want a tax cut. I'll give you that.

But on the other hand, you brought up the "worst candidate in history" thing because of other reasons. It's just not mathematically true, man. So bringing up bias is fair game; you aren't using math as a judge. But I guess it could be a bias of recent events. Who knows - either way it's not true.

I'm sorry I triggered you with the word "Trump" and I'm sorry you triggered me with just saying something that is mathematically false.

I also looked at your Hacker News profile and it looks like you only talk about politics here - this is a technology forum so I think you have the wrong audience. I'm sorry you are so angry but Jesus Christ, let's talk about computers here.

PS - If I could save your blood pressure; I'd down vote this response for you. I don't care about internet points here.


Your concern is touching but unnecessary, and, while you are correct that Mondale fared terribly, the basis of my reasoning is that a significant portion of those 62M votes that went to Trump could've easily gone to the Democrats but didn't because of explicit and universal distaste for Hillary.

Mondale may have received only 40.6% of votes but Trump, as a general rule, shouldn't have had a chance. It was a Black Swan event of epic proportions, and the Democrats made a mistake at every step of the way; the statistical likelihood of that happening was astronomically low, but Hillary's involvement made it a guarantee.


Facebook didn’t elect Trump. The US populace did.


The US populace that lives in key electoral college swing states... what a convoluted system.


Thanks for clearing this matter up.


I'm not sure I follow your math. Facebook had the following figures at the close of 2017 per https://investor.fb.com/investor-news/press-release-details/...:

Earnings (Q4 2017): $4B

Earnings (Y2017): $16B

Revenue (Q4 2017): $13B

Revenue (Y2017): $40B

So, maybe you're confusing revenue with earnings (net income) and a quarter (3 months) with the entire year (12 months). Because $90B is over 20x FB's Q4 2017 earnings and over 5x their entire 2017 earnings.
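A quick check of those multiples, using the rounded figures cited above from the press release:

```python
q4_earnings = 4e9    # FB Q4 2017 net income, per the press release above
fy_earnings = 16e9   # FB FY 2017 net income
fine = 90e9          # the hypothetical $90B maximum fine from upthread

assert fine / q4_earnings == 22.5    # "over 20x" Q4 earnings
assert fine / fy_earnings == 5.625   # "over 5x" full-year earnings
```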


I messed up.

I saw their Q4 revenue statement and read the year end 40B as the Q4 revenue.

My bad.


I find it depressing that 90 billion dollars is only double the 4th quarter earnings.


> To the third point, focusing on Facebook seems like that scene from Casa Blanca though

It's mere coincidence, but your spelling "Casablanca" as two words (Casa Blanca) put into my mind that the literal translation of that place is "white house" (two words, natch). [0]

To your point, yes, Facebook knows user data trafficking (gambling) goes on as well as the stakes of such trafficking. Facebook is the gatherer and ostensible guardian of such data, but they directly profit from such trafficking. Very likely their "interest" in user data security is pretense.

[0] https://en.wikipedia.org/wiki/Casablanca#Etymology

EDIT: recast second paragraph to more clearly convey intended meaning.


I just went and read the linked article -- it's definitely worth a look. Personally I hadn't seen media coverage of the evolving relationship between Russia and DPRK, so I learned something new.


> and apparently has already been abused (some might even say weaponized) in a major election.

You mean major election_s_, right? I do seem to remember the Democrats crowing about how Obama's team had used social media to their advantage and Republicans were hopelessly outmatched in this arena.

http://swampland.time.com/2012/11/20/friended-how-the-obama-...

Fun tidbits:

> But the Obama team had a solution in place: a Facebook application that will transform the way campaigns are conducted in the future. For supporters, the app appeared to be just another way to digitally connect to the campaign. But to the Windy City number crunchers, it was a game changer. “I think this will wind up being the most groundbreaking piece of technology developed for this campaign,” says Teddy Goff, the Obama campaign’s digital director.

> That’s because the more than 1 million Obama backers who signed up for the app gave the campaign permission to look at their Facebook friend lists. In an instant, the campaign had a way to see the hidden young voters. Roughly 85% of those without a listed phone number could be found in the uploaded friend lists.

Whoa, that sounds exactly like the "breach" we're talking about here!

And a former Obama staffer confirms this: https://www.theblaze.com/news/2018/03/20/ex-obama-staffer-cl... (yeah yeah "I don't trust your source", but it's just screenshots straight from the horse's mouth).

Money quotes:

> Facebook was surprised we were able to suck out the whole social graph, but they didn’t stop us once they realized that was what we were doing.

> They came to office in the days following election recruiting & were very candid that they allowed us to do things they wouldn’t have allowed someone else to do because they were on our side.


The major differences:

1. The Democrats didn't harvest the data under false pretenses; the data came from people who signed up for a political app.

2. The Democratic campaign data wasn't illegally transferred from one company to another.

But I agree that the Obama campaign's actions should have been a flag and we should have worried harder about it, even if they weren't as bad as what Cambridge Analytica did.


> 1. The Democrats didn't harvest the data under false pretenses; the data came from people who signed up for a political app.

Were these people aware all their data and their friends' data was going to be recursively sucked down? Somehow I doubt the app included a disclaimer to that effect. Doesn't really matter what your app does if the main goal of it is to, well, harvest data.

> 2. The Democratic campaign data wasn't illegally transferred from one company to another.

That you know of. It's data, it can get around. The staffer did mention that the Democrats still have the data, and they weren't supposed to be sucking down the whole graph in the first place, hence Facebook's initial freakout (but of course, it was OK because "we're on your side.")


Nope, not "that you know of." Cambridge Analytica got their data from a third party, violating their contract with Facebook. The Obama campaign got their data directly. That is an actual difference between the two actions.

It's possible to say "I think the Obama campaign also took undesirable actions" without saying "and they were just as bad." I agree with that position, as I said.


Here's another difference.

Obama campaign was US CITIZENS who are legally allowed to work on election programs.

CA was staffed almost entirely by BRITISH and CANADIAN citizens, and ALL of their Trump 2016 (and Cruz et al.) actions are straight FEC violations: foreign actors working on US elections.


Thanks and I agree in theory. It remains to be seen whether that statement was true, or just CA pumping up their own importance.


CA also has Russians playing key roles in its lifecycle, with early work done in Russia, and a link to a Russian government oil firm, Lukoil, considered to be an overseas intelligence/influence agent of Putin's. I'm less concerned by the connection with Allied nations' citizens.


Looking at the last quotes, is it worse that Facebook did not protect the data from a violator vs. giving it away explicitly and intentionally?


"That you know of" is referring to the fact that you don't know where the data is _now_ (well, we know the Dems still have it) and what it's going to be used for in the future, much as in the CA case. Unless you believe that the Dems destroyed all the data harvested in 2012 and haven't used it again.


I believe in judging based on the facts in evidence rather than making assumptions about what happened.

CA acquired data from a third party which did not have permission to give CA the data. The Obama campaign did not do that.

Facebook required the third party (Dr. Kogan) to certify that the data had been destroyed. Dr. Kogan certified that the data had been destroyed, but did not do so. The Obama campaign did not do that.

These facts support the conclusion that nobody should have access to this kind of data, including the Obama campaign. They do not support the conclusion that the Obama campaign did the same thing as CA.

I also don't think you've provided evidence that the Obama campaign still has the data. If I've missed that please let me know.

I also noticed that you are conflating the Obama campaign with the Democratic Party. If you have evidence that the Obama campaign shared this data with the Democratic Party, you should also share that.


> I also don't think you've provided evidence that the Obama campaign still has the data. If I've missed that please let me know.

> “Where this gets complicated is, that freaked Facebook out, right? So they shut off the feature. Well, the Republicans never built an app to do that. So the data is out there, you can’t take it back, right? So Democrats have this information,” she said.

This is what Davidsen has said.

Also, as you said, they obtained the data legitimately. Why _wouldn't_ they keep the data around for future use?

> I also noticed that you are conflating the Obama campaign with the Democratic Party. If you have evidence that the Obama campaign shared this data with the Democratic Party, you should also share that.

Common freaking sense. It's a goldmine for future elections, they would be fools not to share it with the DNC.

Considering how much traction this story is getting, and considering that the Obama campaign used the same friend list "breach" to obtain data, they really should comment to the effect that they aren't keeping the data around. Otherwise, common sense says they are. That, coupled with Facebook's rather "it's OK" response to learning that they sucked down tons of data makes me think FB didn't make a big stink about deleting the data. If they did, they need to attest to that.


> Common freaking sense. It's a goldmine for future elections, they would be fools not to share it with the DNC.

Well, no. They'd be people who are violating their Facebook contract if they did.

When you live in the swamp, it's easy to assume everyone is dirty. The Obama campaign certainly used data in a way I personally find uncomfortable, which makes it even easier to leap to conclusions. However, there's no value in this conversation as long as you don't understand the difference between evidence and the things you want to be true.


We rarely get to deal in certainty; life is mainly degrees of probability.

It's very likely that the Obama campaign retained the data: I'd put it around 75%. Others have different assessments.

Lumping all uncertain things into one bundle of low probability is a massive category error.


> Well, no. They'd be people who are violating their Facebook contract if they did.

Again, who’s actually asking any questions whatsoever about their use of harvested social media data? You’re only in breach of your “Facebook contract” if someone cares to look into it in the first place. You still haven’t addressed the staffer’s claim that Facebook was freaked out about the campaign’s harvesting of data but then said they were “OK” with it. You trust FB to make a stink if the Obama campaign misused data? Seems to me like they were perfectly content to look the other way.


You are very naive if you don't know that many, if not most campaign consulting agencies are entirely apolitical about collecting and shopping around their data to various candidates. It's simply about expanding their market. Do yourself a favor and volunteer on a single campaign for a state or federal level committee-favored candidate to see for yourself.


Sure, the Obama campaign itself did not do the above, but liberal-leaning SuperPACs did.


No, the only truth here has been

1. It was not done by Democrats, therefore it was wrong if not illegal.

If Hillary had won none of this would have come about and even if it did no one in Congress would be up in arms. We have had nearly two years of people trying to delegitimize Trump's win. This is a standard political tactic by the losing side but this time Trump beat both sides at the game.

These politicians and activists refuse to acknowledge that their message is either not acceptable or delivered wrong or, even worse, that a large number of people were just tired of them.

There simply wasn't enough money spent by Russia to change the outcome, and this completely ignores the fact that they have been doing something similar in nearly every election they could, if not within political parties and the media as well.


> illegally transferred

I'd question illegality. In violation of agreements, perhaps. If there were any, and there wasn't a wink, wink type of understanding on what would be done.


In violation of agreements, definitely, if you believe Facebook's public statement. I think it would be risky for Facebook to lie about their developer policies but that doesn't mean it's impossible. I don't have time right now to dig through archive.org to find an old copy of those, unfortunately.

For a much better examination of legal aspects than I can provide, see https://www.lawfareblog.com/cambridge-analytica-facebook-deb.... Please keep in mind the sentence "I am leaving aside for now the potential claims under British and European law, but those add to this list considerably," which is rather important given the EU's more aggressive privacy regulations.


It's like SuperPAC coordination. Every election cycle there are countless obvious violations of SuperPAC coordination at all levels and parties but these are hardly ever investigated much less prosecuted.


Exactly.

I sort of don't care why the media firestorm is so bad, even if it's unfair, because it means we might see some action which will limit bad actors on all sides of the political spectrum.


IMO the point is the origin application: a campaign app used for that purpose vs. an app that shows you what your face will look like when you're older, used to swing distorted news.


> A Campaign App used for that purpose

But how long does the harvested data remain "valid" for that purpose? The Dems still have the harvested data from 2012, is it OK to use it for 2016, which they most likely did?


[flagged]


You can see that already with the Obama staffer. Direct quotes from someone who was there yet the mainstream media simply isn't reporting on it. Just another right wing conspiracy, otherwise CNN would be talking about it, right?

You do sometimes get bits and pieces like the Time article from 2012 that haven't been memory-holed yet, but again, the media won't bring up something like that because the intent is to paint this chilling use of social media as something unique to the Trump campaign.


There is a new Washington Post article that covers the Obama campaign story - it's not being entirely silenced: https://www.washingtonpost.com/business/economy/facebooks-ru...

I agree that there is a pattern of bias to all large media outlets on both sides. They may put a piece out like this one to appear impartial but only post-facto and if it supports the rancor of a news cycle that currently leans in their side's favor.

Anyways, there is bipartisan benefit to people becoming more aware of their online presence. Maybe people will use social media less and become less fervently partisan?


They squeezed it in right at the very end, but it was actually rather surprising how little they minced words:

“We ingested the entire U.S. social graph,” Davidsen said in an interview. “We would ask permission to basically scrape your profile, and also scrape your friends, basically anything that was available to scrape. We scraped it all.”

So obviously a fair amount of strategic writing going on but all things considered, pretty respectable.

EDIT:

Bloomberg has also admitted Obama took advantage of it as well:

https://www.bloomberg.com/view/articles/2018-03-21/facebook-...

"The scandal follows the revelation (to most Facebook users who read about it) that, until 2015, application developers on the social network's platform were able to get information about a user's Facebook friends after asking permission in the most perfunctory way. The 2012 Obama campaign used this functionality. So -- though in a more underhanded way -- did Cambridge Analytica, which may or may not have used the data to help elect President Donald Trump."

To me, the interesting part going forward is: will Democrats and the mainstream media continue to frame this as if it was Donald Trump who committed the wrongdoing? I'm not really sensing any widespread public outrage so I would suspect not, but time will tell.


(Yeah yeah "I don't trust your source", but my methamphetamine-enthusiast uncle assures me that Safeway supermarket lets the Jews decide how much salt your food is allowed to have, and Gwyneth Paltrow's magnet stickers can totally cure hemorrhoids...)


Quality whataboutism that doesn't change the overall debate about these practices. You do realize they talked about exactly this in the linked article right?


This is already how HIPAA forces data decisions in the health care industry. We ask ourselves: "Is it worth the time and effort to store patient PII?"

If the answer is no, we don't store it.


>If Facebook indeed violated the 2011 consent decree, then the FTC can fine them up to "thousands of dollars a day per violation [per user]". This presents the FTC with the opportunity to send a message to these data hoarders: protect the data you collect, or else.

No one ever seems to get the maximum fine in America, often because it would "destroy the company".

But we're willing to execute living people.

As the old adage says: I'll believe corporations are people when they execute one in Texas.


The problem for these companies is that hoarding and monetising data _is_ their business model. If they can't do that anymore, they are going to struggle in a serious way.


That seems like a feature, not a problem.


They aren't going to struggle, their currently spectacular profits are just going to get somewhat more modest.

This is what happened to the banking industry after the 2008 financial crisis.


I've been wondering what kind of collapse would happen when something like this happened to a business where the majority of their revenue comes from monetizing their consumers. Of course, a collapse would only be possible if:

* The FTC actually does something about this in a way that companies operating in a similar manner are also affected (directly or indirectly)

* These companies don't find a way to get around the issues.

I'm not convinced anything will significantly damage tech companies whose primary profit driver is their users' data anyway. The general public has been using them for years now and despite any outrage, it's become too integrated in society for people to suddenly stop (unless someone comes up with a better alternative).


I'll have two, please.


Acxiom and others of the old guard have been doing exactly the same for 40+ years. Why should Facebook be singled out for voluntary disclosures when the data mining industry has far more egregious transgressions?


"...and apparently has already been abused (some might even say weaponized) in a major election."

While it's clear that CA/Russians/whoever tried to influence the election through these techniques, is anyone aware of any studies or evidence that they actually affected anything at all? Has anyone even done a survey asking people if they either did not turn out to vote, or changed the candidate they were going to vote for, based on paid advertising they saw on Facebook?

I'm genuinely curious about this, I'm not trying to be argumentative. After this erupted yesterday, I went looking and found nothing. This whole thing may be much ado about nothing.


I think this ultimately comes down to the problem of attribution in marketing -- how do you determine if an ad or story is effective in actually influencing somebody to buy a product or vote for a candidate? We know millions of people engaged with content from Russian trolls masquerading as Americans, but (like any marketing campaign with an offline action) it's difficult to quantitatively measure the ultimate impact they had.

https://www.theatlantic.com/technology/archive/2018/02/the-r...


Yeah, but even a simple survey would at least start to unravel this. "Did you either fail to vote or change your vote based upon paid advertising you saw on Facebook?" would at least be a good start. Even anecdotal stories of people being swayed by a paid Facebook ad would be a start. I haven't seen a single one, and I've looked.


The whole point of using the data like this is to change people's opinion without them knowing why, so I doubt anyone can answer a survey like this accurately.


Perhaps if there were two similar candidates, this would be true. However, that wasn't the case here. These candidates and their supporters were polar opposites. If they were swayed at all, it wouldn't have been through subtlety. The stories would be "I was going to vote for Hillary, but then I saw [X] on Facebook and was so horrified that I decided to [not vote at all or vote for Trump]".


I'm pretty sure that polls like that are ineffective for discerning the impact of any type of marketing. The best evidence I can think of for whether somebody found something influential is whether they liked or re-shared a post, and there's plenty of evidence for that. Those are, after all, the sorts of metrics typically used for measuring the success of a social media campaign: https://www.socialmediaexaminer.com/10-metrics-to-track-for-...


When broadband got reclassified by the FCC under the "huge loss" for Net Neutrality, a little-noticed M.O.U. was published as part of that decision that explicitly stated the FTC would now be beefing up its presence to protect the consumer. It's only in Facebook's interest if they believe they'll get caught. If they think they can sell this data for profit and escape scrutiny, they will. Here's hoping this is a sign of more work to come from the FTC.


Isn't the FCC being run by a Trump shill at the moment? I mean, they just repealed net neutrality, I doubt they're going to go around imposing fines on Trump buddies now are they...


FTC, not FCC in this case. And what was repealed was a huge blob of legislation called Title II, not “net neutrality”.


Well, net neutrality was repealed. Without Title II there is really no net neutrality until some other law or regulation puts it back into place.


The reason I make this distinction is because the limited neutrality is the least of what the legislation does.


To any engineers reading this who are now working at Facebook, you have a choice to make:

Stay in the organization and work to turn it away from casual misuse of personal information. Prevent an Orwellian future of machine-learning assisted, personally targeted messaging preying upon our fears and insecurities. Stand up and speak out against the performance of unethical psychological experiments on unwitting participants.

-OR-

Leave now.

This is one of the important moral issues of our time. To stay on the sidelines is unacceptable.


For what it’s worth, when I worked there, all the engineers I met sincerely cared about just…making a useful product, and respecting people’s data in the process. Pretty much the only guaranteed fireable offense was looking at someone’s data without permission and a valid reason, e.g., to directly fix something broken about their account, which almost never required viewing anything private anyway.

Nobody appeared to be “casually misusing” data—I think the problem is that they’re largely just engineers, particularly young ones, naïvely considering only the engineering side of things. All the data queries go through the robust privacy-checking system, so everything is good, right?

In a case like this, they didn’t consider the optics of what happens when someone scrapes the public (at the time) profiles of Facebook users and uses that information for nefarious deeds. What happens when users are angry not because their private data was “breached”—a technical problem with an engineering solution—but because they didn’t realise how much they’d already shared publicly (even if you explicitly told them) and how it could be used to influence them en masse?


One of the problems with the Facebook API is that it is disconnected from policy on too many points. The policy is all hand-wavy honor system, and the API lets you trample all over the policy.

Case in point, one of the most common policy violations is prefilling the user message on posts made via the API. It is forbidden. But the field is right there for you to abuse and put whatever you want into it. Sure there are some automated enforcement algorithms and policy employees look at things when complaints go up, but if the policy says you can't do it, why on earth does the code allow it?

OK I know the pat answer is that apps are allowed to prompt the user earlier in the workflow for the message, and then use that value when calling the API. That is true but weak (what would it hurt to eliminate that loophole vs. the benefit of no longer having to detect and take enforcement action on an impossible action) -- the point remains, if they really cared about their vaunted policy and protecting the user, they would put more controls directly into the code behind the API to disallow prohibited actions.

These are things where smart engineers can make a difference. Spend some time on the FB Developer Community Group and you will see the flood of questions from developers who are completely ignorant of the policy, even on basic things like "don't use an account with a name other than your own" aka, there are no business or developer accounts. Many of them willfully ignore policy and just do what the code allows them to do. A lot of good could be done by FB devs taking more accountability for how the platform is abused.
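To make the "policy forbids it, code allows it" gap concrete, here's a hypothetical sketch. The endpoint version and `message` field reflect the v2-era Graph API as I remember it; the token and text are placeholders, and nothing here should be pointed at a real account:

```python
from urllib import parse, request

# Era-appropriate Graph API feed endpoint (version number is illustrative).
GRAPH_URL = "https://graph.facebook.com/v2.2/me/feed"

def build_feed_post(access_token, message):
    # Nothing in the request schema distinguishes text the user typed
    # from text the app prefilled -- the prohibition is policy-only.
    return {"message": message, "access_token": access_token}

def post_to_feed(access_token, message):
    # A compliant app passes along text the user entered earlier in the
    # workflow; a non-compliant one can pass any string it likes. The
    # API accepts both identically.
    data = parse.urlencode(build_feed_post(access_token, message)).encode()
    return request.urlopen(request.Request(GRAPH_URL, data=data, method="POST"))
```

The fix suggested above would live server-side: reject or ignore an app-supplied `message` outright, rather than relying on after-the-fact policy enforcement.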


It is not so much that something is wrong, but that everything is working as it should. The system is the problem.

Case in point, Cambridge Analytica used ill-gotten data from 50 million people to craft extremely effective political ads. And since user engagement with those ads was very high… Facebook's algorithm made it cheaper for them to buy even more ads.
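The mechanism described here, where high engagement lowers the price of ads, can be sketched with a toy quality-weighted auction score (an illustration of engagement-weighted ad ranking in general, not Facebook's actual formula):

```python
# Toy quality-weighted auction score: an ad's effective bid is its money
# bid scaled by predicted engagement, so highly engaging ads can win
# auctions while paying less. Illustrative only; not the real algorithm.
def effective_bid(bid_usd: float, predicted_engagement: float) -> float:
    return bid_usd * predicted_engagement

# An ad with 4x the predicted engagement matches a rival's score while
# bidding a quarter as much.
assert effective_bid(0.25, 0.08) == effective_bid(1.00, 0.02)
```

Under such a scheme, ads that provoke strong reactions are structurally cheaper to run, which is the feedback loop being described.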


Well, not overwhelmingly effective. Ted Cruz was a client and look how well he did.


Got second to their other client.


In this forum, I think this is the most important comment.

I think there is enough information available for Facebook employees to be faced with a decision, after which they are morally culpable for the growing net-negative effect that Facebook has on society.

I'm not a Facebook engineer--and I'm probably not smart enough to be one--so I can't really say how I would act if faced with an ethical decision to provide for my family or take a stand. However, I think anyone who has been employed by Facebook is capable enough to be able to immediately find comparable employment.

Similarly, I think there were lots of well-meaning people involved in Big Tobacco, who didn't realize they were contributing to the deaths of millions of people. I imagine there was a similar inflection point for them, as well.

(Please note, I do not think Facebook is as damaging to the world as Big Tobacco. I also don't think that individual contributors are as culpable as leadership. I am not comparing the degree of moral evil, but am comparing the complicity of individual contributors.)


"put your bodies upon the gears"

I absolutely agree with you that this is a moral decision for the employees. At a former company I pushed to improve our user privacy and decrease our storage of unused personally identifying information.

I left that company when they neutered my project to only affect the UI...

We aren't soldiers following orders, we are humans that can reflect on our actions.

That said, I had the savings to be unemployed for a while, not everyone does.


Just following orders wasn't an acceptable excuse at the Nuremberg trials. It shouldn't be an excuse now.


Or, you could make money off people who don't know what they're giving away and don't care even if they're told, be one of the experimenters, and make a lot of money doing so.


I'll take one of these, please


Another person's job is an easy thing to sacrifice.


As if someone at facebook will have a hard time being hired elsewhere. But either way, that's what solidarity is all about.


Luckily, that wasn't the only option.


Prevent an Orwellian future of machine-learning assisted, personally targeted messaging preying upon our fears and insecurities.

Is this the only option?

Why can't it (not necessarily facebook) instead be a "machine-learning assisted, personally targeted messaging to help support your long term goals?"


Because it's apparent that's not Facebook's goal.


Surely you've deleted your own FB account, and went on to convince your friends and family to follow suit, prior to suggesting that?


He made an impassioned plea on the social network of his choice.


Oh stop with this ridiculous hyperbole. Just stop.

>This is one of the important moral issues of our time.

No, it's not. Even if it was (and it's not), I'm not even sure it would crack the top 100. For example, did you know there are people without access to clean water? There are civil wars? State-run gulags? Did you know man-made global climate change is a thing? How about that we're going through an unprecedented ecological collapse? All non-issues. The big moral problem of our time is a social media company that wants to sell you shit.


Doesn't this entire issue corroborate the idea that this ISN'T just about a social media company trying to sell us shit?

Two of the issues you mentioned, state run gulags and anthropogenic climate change, are issues really only solvable at the federal level. Facebook's and Cambridge Analytica's ability to influence an election can have a profound effect on those kinds of issues. I mean, we now have a climate change denier in the White House who is dismantling the EPA. If propaganda spreading through Facebook created that, could that not also be partly responsible for our inability to do something about climate change?

That's just one example, but I think you're being just as hyperbolic by saying this wouldn't crack the top 100.


>Doesn't this entire issue corroborate the idea that this ISN'T just about a social media company trying to sell us shit

No. OP called out Facebook, not Cambridge Analytica. OP attempted to shame Facebook employees not Cambridge Analytica employees. Facebook is here to sell targeted ads.

>but I think you're being just as hyperbolic by saying this wouldn't crack the top 100.

I stand by it. This smells like a big nothing burger. I'm not even sure what the news here is. Candy Crush probably has info on hundreds of millions of Facebook users. No outrage there.

It isn't even novel that Facebook was used for political targeting; the Obama and Romney campaigns, and more broadly the DNC and RNC, did the exact same thing. I just assumed this was all part of that vaunted digital strategy the news outlets were blaring about every time one party won an election. It may be a coincidence that this is a problem because Trump used this method for voter outreach. Maybe.

Maybe it's the potential Russian meddling that's the new news here? But then it's not really what the news outlets are focusing on. It's all about how Cambridge Analytica created 'psychological profiles' on voters...which sounds more like a query that was run against the dataset.


It's the obvious malicious intent that we see time and time again with Facebook, the companies they own, and like-minded companies like Google and Amazon. People are fed up with the BS. It's atrocious that no tech people speak out or get the airtime to inform people what's going on without their knowledge (and most of the time, consent). Facebook is a scourge to humanity.


He said one of the most important issues. No need to be on such a strong defense for that.


And I disagreed with that characterization. Especially in the context of OP's hyperbolic call for Facebook's employees, and not Cambridge Analytica's employees, to quit their jobs.


And I disagree with your characterization. Selling people shit isn't the bulk of the problem like you say - it's everything that surrounds it. I'm not sure why you don't think of it as a major issue, but I hope that someday you will.


Not that I necessarily agree with OP's "call to action," but as software people we have more potential for impact on software-related issues...


Do you realize Facebook has the power to change this all, but instead keeps people misinformed with their obvious malicious intent?


Don't forget the starving children in Africa.


Exactly


Let us please remember that these incidents are not specific to Facebook, rather they are systemic to the big five.

A couple of years or more ago I was posting on Facebook about Cambridge Analytica's practices and was considered a tinfoil-hat-wearing crazy person.

Now, the reason I was able to shed some light at the time was that I knew exactly how we could utilize the Facebook API back then to elicit the kind of data we are talking about, and completely legally. Nobody needed to circumvent FB API policies; it was yours for the taking.

I didn't do it, although I did put together multiple PoCs from 2011 to 2014 to see what was possible, and it was bad.

Another thing we should remember is that Cambridge Analytica is just one small tip of a fractal iceberg whose body is Facebook and the big five, your internet connection and certainly your smartphone themselves.

Google, Apple and Amazon are no less culpable in this regard.

The question now becomes which side of history we want to be on.

Another question, assuming we want to take our privacy back, is how we do that with consent and assurance.

I don't have a Facebook account anymore, but I'm still tracked, as we all are. My mother doesn't like me not being there, but it's a small price to pay. I can contact her elsewhere and do.

Surely enough is enough?

I think it is time to look for broad scale technologies that are better both in the real world and in our private world.


Out of interest, is there any evidence that Apple are collating data and making it available to 3rd parties in the same way as Facebook? They like to position themselves as more caring of the user’s privacy than the rest, but I’d definitely like to know more about any problems.


Unlike Google and Facebook, Apple does not make money by selling user profiles for marketers to target.


Only because iAd (https://en.wikipedia.org/wiki/IAd) was a horrific failure.

In early 2011, the minimum buy price on the platform was $500K. By midyear, $300K. By early 2012, $100K. Early 2013? $50 (no K missing, just fifty bucks).


I believe you that it was a failure, but that doesn't follow from the minimum price dropping. Perhaps as they gained confidence in the system, they allowed smaller buys with smaller prices (but still just as profitable).


Are you sure?

Specifically, it isn't necessarily about advertisers; it's about surveillance.

Advertising revenue can be completely offset by governmental tracking.

As I said in the other post we can't prove the positive but it certainly is a feasible option.

I know I could do it given the charter.


Advertising revenue can be completely offset by the government? That seems unlikely given how much these companies make off of advertising. It would be amazing if Apple and the USG could hide that kind of massive money transfer off their books.


Isn't facebook embedded in iOS to some degree?


It used to be one of the only share targets (Twitter was the other). iOS 10 & 11 removed it; to log in with FB or share to FB through the OS, you must install the app to do so.


Lack of evidence is conspicuous in and of itself, although right now that is tinfoil-hat territory. I'll tell you in a few years.

On the other hand I'd refer you to Bletchley Park.

Turing et al knew the decrypted Enigma messages but the Government were unable to act.

For good reason.

Secrecy is a thing


Apple likes to loudly proclaim that they care about protecting their users' data, but they also refuse to put their money where their mouth is. That to me is telling enough.

I do think it's important to note that I have not seen direct evidence of them abusing that data, but we've seen plenty of companies/governments/organizations doing bad things for years without direct evidence.


What are you referring to with “refuse to put their money where their mouth is”?


They refuse to open-source their products, and they also refuse to put in zero-knowledge encryption systems.


I guess you can argue that WebKit, CUPS, Darwin, LLVM etc were open-source before Apple started using/sponsoring them (or based new software on them) and so had to continue, but Swift was a from-scratch project that was open sourced.

As for zero-knowledge encryption, iCloud Keychain is, although the rest isn't; you're right there. Hopefully they'll move in that direction.


I'm not saying that Apple is staunchly against FOSS or anything, and they absolutely do release a lot of FOSS stuff (which is awesome!), but their platform is absolutely not FOSS. I still can't compile my own iOS or MacOS.


If Apple open-sourced their OS you'd have a CentOS in half a day. Apple definitely doesn't want clones; it means fewer customers and less cohesive branding, so is there any reason this wouldn't be a very damaging move?


It's definitely possible that this would have detrimental effects on their bottom line. I know I would start buying their products, and I would encourage others to do so, though I'm not sure that would make up for the loss.

But that's irrelevant to the point. The point is that Apple prevents users from understanding or controlling how the user's data is being used. Just because we understand why they won't fix it doesn't make it any less true that they could fix it, but choose not to.

And that's what I mean by "putting their money where their mouth is". They talk a big talk about protecting their users, but their actions are different from their speech.


I think this is good advice, not only because it generalizes the problem, but also because it avoids the politicization of the topic re: Cambridge. This shouldn't be viewed as a left vs. right problem.


Absolutely and hits the nail on the head.

We can't be seen to pick on Facebook or CA here since there is a bigger picture.

It's not about picking on anyone, it's about a line being crossed and bringing it back home.

Thank you for your comment.


It might be beneficial to engage in such fiction, seeing how unable the right is to even pretend to put country above self-interest with regards to election hacking.

But let’s not pretend that this fiction is true. Only one campaign hired this company. And if they are bragging to journalists now that they are willing to entrap politicians with hired prostitutes, I’m fairly certain they would have had some things in their sales pitch two years ago that would raise red flags in an ethical campaign.

The people you hire are a reflection of your character. And if they end up arrested one after another, it becomes less and less likely to just be bad luck.


Another point: even if Cambridge Analytica didn't exist, Facebook itself would be, and is, doing the same things, not over a population of 50M but over a billion. With a budget to match.


It's laughable that any Facebook user assumes any degree of privacy.

What's far more concerning, and what this probe doesn't appear to address, is what Facebook does with the information of non-users.

https://fieldguide.gizmodo.com/all-the-ways-facebook-tracks-...

https://www.wordstream.com/blog/ws/2016/06/02/facebook-ads-f...


You seriously think grandma and Joe the plumber are aware of Facebook's constant data collection?

Let's have empathy for people outside the tech bubble and realize that it's our duty as technical people to educate people around us about these issues.


I was on the phone with my mother and told her about the recent facebook things and she said "I am very careful about what I post on facebook so I don't worry".

Then I told her about what they actually do with the pixel and like buttons and she was flabbergasted. "You mean they can see what I read even if I don't press the like button?"

Not sure I convinced her to delete the account though as all her friends are there.


I would guess that the elderly are much more skeptical about handing over their personal information online than younger generations. For example, Facebook started off in colleges. And other forms of social media like Snapchat, Twitter, and Instagram are predominantly used by younger cohorts who are ambivalent about what companies might do with their data. The information about data collection is out there, what with Google searches and all. People choose not to abstain.

I'll give a more recent example: I meet 20-somethings at a meetup I go to each week. Most of them go to a pretty well-known university (thus, they are well educated), they ask me if they can connect with me via Facebook. I say I don't use Facebook, and then spend an extra 20 minutes explaining all the reasons why often to their astonishment. In my mind I'm like, "Really? How do you not know all of this? You read tons of magazines/journals?"

The sad reality is that billions of people don't care. Even with this whole scandal, I'd be shocked if Facebook's stock price was hurt in the long term.


You don't talk to many casuals then. They are fully aware that all of their data is being recorded and used against them. They've been aware since the massive NSA scandal.


What about the like button on web sites and the FB pixel that is invisible to the user? Again, you seriously think Joe average is aware of that?


In my opinion, the problem isn't just with being an active Facebook user. Anytime you visit a website with a FB Pixel, "Like" button, or any other FB embedded content, you are tracked - whether or not you are a user.


I do have the Facebook trackers disabled in Ghostery but I wonder if that’s enough


You're also being tracked in reverse: a third-party affiliate records the fact that you're a person who blocks Facebook, if that makes any sense.

Adblock detectors that function in the same vein as "FuckAdblock" check whether the client blocked a Facebook pixel.


Seems like the fingerprinting that can be done in this case is much less -- the affiliate would just get "ip / website / using-adblock" instead of "ip / website / FB profile ID" right?

Or are the websites providing identifying information like email? (I've never heard of this but I'm not well-versed here).


It's more like this: ip / website / using-adblock / screen resolution / installed fonts / installed plugins and their versions / hash-this / hash-that are each non-identifying by themselves, but a combination of them can be used to uniquely identify an individual. [1]

But who, exactly, is the individual? Well, that comes later. Maybe your blocker fails to block something that is gathering that data plus your identity. Now, all of that activity (that was previously not tied to an individual) can now be safely linked to you, the individual.

1: https://panopticlick.eff.org/
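As a back-of-the-envelope illustration of how those individually weak signals combine (the per-attribute bit counts below are invented for the example, not Panopticlick's measured values):

```python
import math

# Hypothetical per-attribute identifying information, in bits. These
# numbers are made up for illustration; see panopticlick.eff.org for
# real measurements.
attribute_bits = {
    "user_agent": 10.0,
    "screen_resolution": 4.8,
    "timezone": 3.0,
    "installed_fonts": 13.9,
    "plugin_versions": 15.4,
    "blocks_facebook_pixel": 1.0,
}

# Assuming roughly independent attributes, the entropies simply add.
total_bits = sum(attribute_bits.values())

# About 33 bits are enough to single out one person among 8 billion.
bits_to_be_unique = math.log2(8_000_000_000)

print(f"combined: {total_bits:.1f} bits, "
      f"needed for uniqueness: {bits_to_be_unique:.1f} bits")
# With more combined bits than needed, the fingerprint is likely
# unique even though no single attribute identifies anyone.
```

The point of the arithmetic: each attribute narrows the candidate pool by a factor of 2^bits, so even a one-bit signal like "blocks the Facebook pixel" contributes to making you unique.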


Very good point, those fingerprinting techniques were what I was missing.

Also, thanks for sharing that EFF link, I really like the breakdown of how much entropy they can get from each fingerprint dimension.


I do not doubt this, but do you have more information/sources?


>laughable

Yea, so "laughable" that people are not constantly paranoid and super informed about how the information industry works. /s

It's not people's fault that facebook is abusing their data. That's some sociopathic logic.


Just think of all the children whose whole lives are being put onto Facebook by unthinking parents. They never stood a chance.


Remember for a lot of regular internet users, Facebook is almost indistinguishable from "the internet" itself.


Prediction: Facebook's goal for this investigation will be to make sure the public doesn't learn that Cambridge Analytica was only one of countless political actors that "somehow gained access" to the pool of user information.

I was in meetings with FB almost 10 years ago, as the OpenGraph API was being implemented, where they were openly selling, to anyone willing to pay, exactly what CA supposedly "hacked their way into".


In Australia the Liberal party did basically what CA was doing - in 2013. There's a pastebin dump floating around containing the JS code; notably, it flags any of the user's friends who lived in specific electorates (I think they were opposition-held swing seats at the time)


The Obama campaign was very proud of doing something at least as bad, but I don't think we'll see that on the news, which offers a glimpse into what the motives are here.


Why does everything have to be US based?

For instance, before this, some of the most ethically questionable censorship stories I have heard from Facebook have had to do with minority groups or various activists in more repressive regimes around the world being blocked or censored.

Likewise, with Cambridge Analytica claiming to have worked with more than 200 elections around the world [1], and Channel 4 not painting an exactly flattering picture of their ethics, it's very possible that some of the most disturbing details that will emerge from this scandal have zilch to do with Donald Trump.

[1] http://www.businessinsider.com/cambridge-analytica-secretly-...


The extent to which the HN consensus is simultaneously exuberant about EU regulatory enforcement because "you should follow the law" and angry about other forms of regulatory compliance is astounding.

Repressive laws under authoritarian regimes are laws too. At the very least, we should admit that we're evaluating the specific rules (or the people making them) under some other rubric before deciding whether they ought to be obeyed. The sentiment you express here is exactly why "companies should obey the laws where their users live" and "countries should make laws according to their values and enforce them against websites accessible by their citizens" are too simplistic.


"At least as bad" seems a bit strong. The impression I'm getting is that they gave out an app explicitly from the campaign that also collected some info. There's a major difference between that and putting out an app and getting info under false pretenses.


Yes, I too doubt that a very different case from many years ago will appear on the news, which is for current events as far as I'm aware.


The Obama campaign was involved in laughably simple PR and door to door efforts compared to the complexity and microtargeting of this one.


The Obama campaign employed Big Data to target which doors to knock on and which issues to bring up at each one.


CTR would be a better comparison, and perhaps even more nefarious, since they employ full-time... let's call them "engagers".


Both used Facebook data, but the comparison doesn't go further. The Obama campaign did something very different - as different as legal, ethical medical experiments using informed consent are from the Tuskegee syphilis experiments. I won't repeat what was said elsewhere:

https://news.ycombinator.com/item?id=16630214


To put the FB selloff into a market perspective: Equifax is only off 13% from its peak after its unprecedented leak of nearly every American's and many Canadians' credit info, leaving the population vulnerable to identity theft. The population mostly never agreed to give Equifax this information, but Equifax collects it anyway.

FB is off 8.5% now as a client business failed to adhere to FB users' privacy for data that the users were willingly giving out to FB and the client (but not the third party). Not likely much more downside to the stock on this news imo.


Perhaps the market's reaction reflects the widespread belief that this publicity may drive users away from Facebook, whereas consumers can't really opt out of Equifax's collection.


The biggest concern for Facebook has got to be the FTC investigating the other sharing of data beyond Cambridge Analytica. I seriously doubt they turned the other way for conservative but not liberal think tanks / firms.


And with Peter Thiel being a Trump supporter, even giving a speech on stage at the RNC in praise of Trump, it would be crazy to think that Palantir's involvement isn't of similar (if not much greater) scale as Cambridge Analytica.

Palantir, in my guess, is probably like CA on steroids.


Whenever I would talk about stuff like CA a few years back to my friend who works at Facebook, he would just say Palantir is even worse. Palantir has everything that can possibly be scraped from Facebook, and everything else they can get. It's not far off from that show Christopher Nolan's brother made... Person of Interest?


I'd be curious to read more about what your friend saw. There are ways to make that happen, e.g.

https://www.nytimes.com/newsgraphics/2016/news-tips/index.ht...


I would assume Facebook has more than Palantir can scrape. They are the real problem.


https://www.cnbc.com/2018/03/27/palantir-worked-with-cambrid...

Looks like CA is just one means Palantir has used to get Facebook data.



For those who may be concerned about this being a partisan hit job because it's a news article, the primary source from a verified account: https://twitter.com/cld276/status/975565844632821760


Also, a new article from the Washington Post: https://www.washingtonpost.com/business/economy/facebooks-ru...


They were directly involved with the Obama and Clinton campaigns... and so was Eric Schmidt.


The Podesta e-mails have conversations between John Podesta and Sheryl Sandberg about meetings to help elect the first woman president. So, while I think Facebook is morally reprehensible, this latest media outrage because of connections to the Trump campaign feels a bit like an economic hit job more than anything.


They also have datasets of 50M user profiles floating around out there filled with Facebook-like data. There still hasn't been a public leak of that kind of data that I can think of. I think a concern for Facebook is also what happens if/when 50M Americans' names, gender, hometown, birthday, and names of 500 closest friends become public for all to see. That's not really data that you can change or put back into a bottle.


The name and address data isn't anything unique. There's probably multiple companies with better address data than Facebook has. And the national party organizations have pretty much complete voter rolls with addresses.

The unique data is the friend graph and the likes, which they can use to (quite effectively) predict political attitudes.


In the long term we need HIPAA style regulation for all kinds of personal data: friend graphs, behavioral histories, private messages, and especially things like location data or voice assistant audio samples.

Leak such data without explicit customer consent? That will be $10,000 per incident. So if you leak 100 data points of someone's location history that will be a $1,000,000 fine.

Explicit consent must be per-incident as in "YES I give my consent to send this information to <recipient> for purpose of <...>."

That would incentivize strong security practices and even more importantly dis-incentivize data hoarding beyond what is needed to provide a service. Hoarded personal data would be a gigantic risk and liability.
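The liability math of the proposed scheme (these figures come from the proposal itself; they are not any actual statute) is simple, but it's worth spelling out because per-data-point fines scale very differently from per-breach fines:

```python
# Liability under the proposed (hypothetical, not actual law)
# per-incident fine: each leaked data point counts as its own incident.
FINE_PER_INCIDENT_USD = 10_000

def leak_liability(leaked_data_points: int) -> int:
    """Total fine for a leak, in USD."""
    return FINE_PER_INCIDENT_USD * leaked_data_points

# 100 leaked location-history points -> $1,000,000
print(leak_liability(100))
```

Because location histories and behavioral logs run to thousands of points per user, liability under such a rule grows with how much you hoard, which is exactly the intended disincentive.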


It would just pass the buck. What everyone on HN is clamoring for will result in basically more bureaucracy and longer EULAs with no actual change in business practice.

Businesses don't take it seriously because people don't actually care. Some do. The vast majority don't. They might say so in a survey, but at the end of the day, Facebook (and companies like it) will continue to survive doing what they always have been and people will continue using those services.

That's the root of the issue, though. People really don't care as much as posters on HN think they do. If we could acknowledge that, I think we could come up with better solutions.


The GDPR should take care of that to a large degree.

Consent must be asked for in a clear understandable fashion.

Burying some legalese crap on page 29 of your 12,000-word TOS doesn't cut it.

Could be that Facebook tries to push the envelope yet again. They may come to regret it.


We'll see.

If I had to bet, it will not be the case that we'll see some big exile of users as a result of having to click through an additional "Agree and Continue" dialogue to get to what they were going to do anyway. The GDPR will do a lot more to appear to be doing things right than actually benefiting users.


It depends. Courts could rule that click through licences don't constitute "informed consent", because let's be honest, people aren't informed about what they're signing.


Respectfully if users are honestly considered too dumb to read targeted dialog boxes, the regulation required to "fix" that "problem" is going to be downright draconian.


> implying that adding an additional "are you sure" to the process of joining a social media platform _doesn't_ benefit users.

/snark


It'll be about as beneficial as the "this site uses cookies" notices or the Vista UAC that everyone just clicked through anyway.


What about the data already collected? Do they need consent to keep it?


I'm not the expert, but from what I know

You have a right to learn what data they have about you, with whom they share it, and a lot more details. In addition, you can opt out of anything you agreed to earlier, and I would be surprised if you couldn't request deletion when the business has no business reason to store the data. (Arguably a difficult call with Facebook, whose entire raison d'être is to fuck with your privacy.)


> That's the root of the issue, though. People really don't care as much as posters on HN think they do. If we could acknowledge that, I think we could come up with better solutions.

FB didn’t drop billions in value because nobody cares, it’s just that most people take a lot of repetition to grasp the scope of the issue.


Investors care when a company is painted in a negative light and threatened with regulation.

That is not the same as individual users caring about data privacy as much as HNers believe.


What constitutes that reputation if not the views of “individual users” determining the fate of the platform?

I think you’re selling people short, and in the face of evidence contrary to your claims.


Looming regulatory threats will push any stock price down.

And I'm not selling people short. I think the risks are incredibly overstated and the people who appreciate free services (acknowledging some data sharing is happening) are not necessarily just dumb-dumbs being preyed upon.


It’s not a free service, it’s a surveillance platform that provides a paid service to advertisers.


So is Google. But the semantics are irrelevant.


The same Google covering its ass with $300 million to fight disinformation on its own platforms?


Yeah, that same one?


Is that what taking down videos that are conservative or critical of the government is called now?


> [Spammers] don't take it seriously because people don't actually care. Some do. The vast majority don't. They might say so in a survey, but at the end of the day, Facebook (and companies like it) will continue to survive doing what they always have been and people will continue using [email].


"People don't care" about things like this until there are consequences. Nobody cares about pollution until it impacts their health or destroys their property. Nobody cares about financial crime until it crashes the economy and costs them their job.

I think we're reaching the point where all these data mining honeypots we've built over the past 20 years are being used in ways that are nefarious enough that people are starting to care.


I feel like claims like these are easy to make, yet very difficult to substantiate.

I think it's in the media a lot right now because it potentially helped Donald Trump win an election (despite the Obama '12 campaign being praised for similar tactics).

People having the data I put on Facebook (which is not notably more than is available through public sources) is not going to destroy my property or lose me my job. The rhetoric here has been dialed up to 11 and it's not winning any converts.


Yes, always thought it weird that folks are so protective about health data but nothing else. Why the dichotomy?


Yeah, but then who is going to pay to use every single website? I'm not going to do that. I'd rather they spend all this time attempting to show me ads that I will never see, and be able to use great sites/apps for free, than keep paying every time I want to go to a website.


This is basically what GDPR is.


I expect that FB will be receiving a few million letters like this [1] in a couple of months, when the GDPR kicks in. And following that, a somewhat smaller number of requests for the "right to be forgotten" [2].

[1] was on the front page of HN a couple of days ago.

[1]: https://www.linkedin.com/pulse/nightmare-letter-subject-acce...

[2]: https://gdpr-info.com/the-right-to-be-forgotten/


They already had an obligation to respond to such inquiries under the older EU Data Protection Directive. But they generally ignored the requests or only gave incomplete data (e.g., you'd get your Facebook posts, etc., but no tracking data they had on you).


The focus of this scandal is curious. Does anyone seriously expect privacy for information that they willingly share with hundreds of people (their facebook friends)?

If you think this kind of information floating around is damaging to society, then you're making an argument against the very idea of FB. Fine; it's at least worth thinking about how social media is changing our society. But to really be upset that this data is out there and being used, after willingly sharing it with many people, is kind of ridiculous. What did you expect? You should assume that anything you do on FB, short of private messages, is part of the public record.

Everyone is targeting you based on information you put out in the world. Every major political campaign in the US works with a database of voters that includes party affiliation, past turnout, name, age, etc. They can also fold in the same data that advertisers use. Being a good citizen of a republic—and a savvy consumer—in the modern world means thinking critically about who is trying to persuade you and what their agenda is. The government cannot protect you from persuasion.


You made your point very clear. Thanks. I concur.


I love how this issue is now alright to talk about on Hacker News. For some reason, when it was happening and all of the alarms were going off, it was banned content on Hacker News. Technology is inherently political, with political ramifications. -_-


Facebook is just one target right now. Working in ad tech, you know how many data brokers are out there and how much real-time personal data your phone and apps leak, regardless of OS. The issue is with the ad tech industry as a whole.


Wait a minute.....

Wasn't the FTC already "monitoring" Facebook?

https://www.cbsnews.com/news/ftc-to-monitor-facebook-for-20-...

Oh, that's right. That whole monitoring for 20 years thing, that they're also applying to Google and a few other companies is a complete joke. If anything, it has become almost a badge of honor/certification thing - like "Look, the FTC is monitoring us for 20 years, so that obviously means we can't possibly abuse your data or do anything nefarious with it!"


"...50 million users..."

Anyone know how this number was added up? Any reason to believe it wasn't 500 million, or 5 million?


The Guardian's original article on this says that number was shown in documents from 2015 [1]. However, the whistleblower said that by now it should be over 230 million [2].

[1] https://www.theguardian.com/news/2018/mar/17/cambridge-analy...

[2] https://www.theguardian.com/news/2018/mar/17/data-war-whistl...


It's the number of Facebook users whose data was accessed by Cambridge Analytica, allegedly without being authorized for how they used the data.


It's 270,000 users directly impacted. The friends of those users have also been fetched, but with a limited amount of data, most of which is public already.


Facebook is a net negative force in our society. Particularly for the startup ecosystem.


Disclaimer: I'm anti-democracy in its current form, so before you downvote me, think through my point carefully.

What I don't get about this whole issue is that we are basically admitting that the average person who uses these services has poor critical thinking skills, and that the American democratic process is easily gamed as such. I feel we are trying to fix the wrong thing. Sure, regulate FB and any other company that happens to collect lots of user data; I don't have a problem with that. My fundamental issue is that FB is mostly an opt-in service. As far as I can see, the shadow profiles they create on a user who isn't signed up for their services don't really contain enough data to be of material impact for the type of things that a company like CA is doing (it's mostly to help their ad network sell you more stuff, and is rather well anonymized).

The only argument against the fact that FB is an opt-in service is that it has a near monopoly on social media and seems to either buy or kill any serious threat. More important than trying to regulate FB's data collection and privacy would be ensuring that our antitrust and monopoly laws are being enforced to remove FB's near monopoly.

Further, I'm not here to defend Facebook, but I feel it's being used as a scapegoat for an easily gamed democratic process: rather than biting the bullet and fixing that, we are saying FB and CA are the real evil here. In fact they are not. They are for-profit businesses operating within the regulatory framework they've been provided. It seems obvious to me that what really needs to be fixed is a broken electoral process.


I think you make a great point here: we absolutely need a population that thinks more critically before they act, whether it's voting or signing up for a service or purchasing a product.

Unfortunately, I'm not sure how realistic that is. I think people just don't want to have to critically think all the time, and it's unrealistic to expect every member of the population to exert constant vigilance against ill will. There are simply too many forces trying to manipulate us and get our attention to expect every person to never mess up ever.

This is where the government steps in. Expect companies to be reasonably transparent about their intentions (signing up for Facebook definitely did not give me adequate warning a decade ago about their intentions with my data, and subsequent changes in their plans were not adequately expressed to me as a user), and reasonably cautious in their expansion (I signed up for Facebook at the age of 12, which is technically against Facebook's TOS. They didn't put enough effort into policing those TOS to kick me off the site at the age of 12, and I certainly wasn't mature enough to understand the breadth of Facebook's TOS. Unfortunately I don't think many non-lawyers are equipped to understand the true meaning of signing up for Facebook, which... is its own problem).

Basically we need these corporations to be better citizens than they currently are, which shouldn't be terribly hard since they're currently downright psychopathic citizens, exploiting the law and their large workforces to manipulate other citizens at every turn. In a better world than our own, corporations would actually be model citizens, but that's probably not realistic in the capitalist system. And we also need the government to do its job better, by punishing corporations that act selfishly so that others actually have an incentive to behave well.


How do you distinguish a population that "thinks more critically before they act" and one that lives in constant fear of social backlash? They're two sides of the same coin!

An intellectually free society is one in which one doesn't have to think critically about the potential social and economic ramifications of every remark. It's a society in which people can pitch inchoate ideas, receive feedback, and iterate on worldviews.

That we're increasingly living in an intellectually unfree society isn't the fault of "corporations" and their citizenship. Instead, it's the result of a push toward controlling people in the name of 'fixing' society and preventing 'harm'. History tells us that down this road lies only death and blood.


I agree to a certain extent. I'm not a fan of legalese and do think things like TOS should be much simpler. To have rational consumers who think critically we need to remove hurdles for people to think critically. But here's my main issue. Our economy and governments are built on the premise of rational consumers. If that basic assumption isn't in place then let's stop the facade, call it out for what it is and change the system. Let's look into a technocracy or epistocracy. Let's stop pretending that America can function as an unfettered capitalist nation and really it needs elements of socialism to function properly and reduce concentration of power and wealth.


>There are simply too many forces trying to manipulate us and get our attention to expect every person to never mess up ever. This is where the government steps in

If government should step in to prevent manipulation by misinformation, then one or both of the New York Times and Fox News needs to be shut down, depending on who you ask.


The argument is that even if you give information to a company willingly, they should not be able to do whatever the hell they want with it. We have accepted this maxim for certain pieces of information already, I don't see why it can't apply to other information too.

If you think FB can do no wrong since we willingly give them this information, what do you feel about HIPAA? Why is it okay for FB to sell data I willingly give them but a doctor can't? If the answer is "one is illegal, the other isn't", then you aren't arguing against the idea of it being made illegal, just that we don't currently have a law saying they can't do it. Laws change though.


FB isn't a life-or-death service, and neither is any social media for that matter. I think we sometimes seem to forget that the world existed and did rather well before the FB ad network made its debut. At the end of the day that's what FB is. It's a large media company/platform that sells ads. Don't ever forget that. I can choose not to use FB and none of my fundamental rights as a human being are hurt. Comparing FB and health care is disingenuous. I need health care to survive whether I like it or not. HIPAA exists to prevent the for-profit medical and medical insurance model from taking advantage of its patients' privacy, because otherwise it would, since patients have no choice but to go to a doctor and provide lots of detailed personal information to their hospitals, insurance and other health care practitioners.


>It seems obvious to me what really needs to be fixed is a broken electoral process.

Could you propose an alternative that addresses your points?


Isn't regulating companies like facebook part of fixing the democratic process? Or what did you have in mind?


Why don’t they just offer the option to turn off ads? If there’s no ads to show to you because you’re paying to opt out, there should be no reason to harvest your personal info. They have 2 billion active users and make around $50 billion a year. So they’d need to charge $2 per user per month to opt out of ads (and could probably charge more). The fact that this isn’t even an option just reeks of their holier than thou corporate attitude.
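A quick back-of-envelope check of those numbers (both inputs are the round figures above, not official reporting):

```python
# ~2 billion active users and ~$50 billion/year in revenue (assumed round numbers)
users = 2_000_000_000
annual_revenue = 50_000_000_000  # USD

# Revenue Facebook currently earns per user, spread over 12 months
per_user_per_month = annual_revenue / users / 12
print(f"${per_user_per_month:.2f} per user per month")  # $2.08 per user per month
```

So the ~$2/month figure checks out, at least as a lower bound on what an opt-out price would need to be.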


The type of person to pay to turn off ads is generally the type of person that advertisers want to influence.


I don't know about that... those are probably the people who are way harder to influence, or even impossible. It's not worth their time. Focus on the... I don't know... 70%? who don't care if ads are flying across their screen; it's easy to influence them.


I don't know where else to say this, so I'll say it here as it's relevant.

Is there not a way to have a browser extension that would scramble the metadata that is read by websites like Facebook and Google? So we would still be feeding data to their brain, but it would be worthless, random gibberish. Does that make any sense?
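To sketch what I mean (purely hypothetical field names, not any real tracker's schema):

```python
import random
import string

# Hypothetical "data poisoning" sketch: instead of blocking trackers,
# feed them plausible-looking but random values on every page load,
# so whatever profile they aggregate converges to noise.
INTERESTS = ["golf", "knitting", "opera", "drag racing", "beekeeping"]

def gibberish_profile() -> dict:
    return {
        "interest": random.choice(INTERESTS),
        "age": random.randint(18, 90),
        "client_id": "".join(random.choices(string.ascii_lowercase, k=12)),
    }

print(gibberish_profile())
```

Obviously a real extension would have to intercept actual requests, but the principle is the same.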



I'm somewhat surprised by the amount of criticism levied at Facebook here. Consider the following:

1) Both Android and iOS allow apps to access your contacts, which in aggregate is more or less the same kind of social graph that Facebook has. If you happen to be in someone else's contacts, you don't get a say here either. I suppose Facebook's data is richer in some ways, but not in other ways.

2) When Twitter removed API access for 3rd parties, there was an uproar in the developer community about how evil this is and so on. There's a trade-off here - openness at the platform level necessarily means less privacy for users.

3) A lot of the criticism Facebook has received in the past (both here and elsewhere) had to do with not allowing 3rd party developers to do more and hoarding user data, which is not theirs, for monetization. Here Facebook was explicitly giving the app owner and the user the power to decide - the app owner could ask and the user could either accept or decline. You could argue that this isn't adequate protection, but consider how this works for other platforms such as Windows, Mac OS, iOS and Android. Apps can access more or less everything and permission dialogs, even where they do exist, aren't taken seriously by the user.

4) Most publishers that are currently publishing these articles criticizing Facebook are also selling everything they know about you to marketers, often more explicitly for the purposes of targeting. The "scandal" here is that a third-party app gathered personal information that wasn't supposed to be used for targeting and the data ended up being used for targeted political ads. Most publishers have no problem explicitly selling whatever data they can get on you to these centralized data brokers who will sell that data to anyone.

5) All this talk about privacy and data aside, the motivation seems to be that the wrong guy won the presidential election - I don't see anyone whose personal data was supposedly used in this manner being upset nor anyone owning up to the fact that they were falsely manipulated into voting for Trump or not voting. It seems to be mainly Clinton supporters being upset that other people were manipulated into voting for the wrong guy, amplified by the same concern about privacy and social graph data ownership issue we've always had.

6) If we accept that it's the presidential election result that most people are upset about here, the media is even more culpable, both from creating this false narrative that it was not a close election and prematurely taking the moral high ground against the potential Clinton administration by focusing on the irrelevant stuff (emails, etc). And that's just the "mainstream" media, before we get to Fox News, etc.


> The "scandal" here is that a third-party app gathered personal information that wasn't supposed to be used for targeting and the data ended up being used for targeted political ads.

The scandal is that an organization impersonated a health care research entity and knowingly collected PII for use in political actions. Not only is this awful in itself, but it undermines public health by making people distrust legitimate data collection projects for beneficial health purposes. It's similar to when the CIA used a vaccination effort to locate Bin Laden, and now those aid personnel are routinely attacked and not trusted by locals which makes it more difficult to eradicate disease. If you are representing yourself as a health care entity and collecting PII for stated purposes of public health, you are likely bound by HIPAA, and I would like to see people go after this company for HIPAA violations as well as fraud.


If we’re going to go this route, Google should be hauled in first. You can easily choose not to play with Facebook. Google? Not so much.


I don’t see how this argument makes sense. You can opt out of using Google’s services just the same, if you so choose.


I have to disagree, and hopefully I can do a better job of convincing you than the other comments. In terms of search, "google" is a verb for a reason, but I actually do think you could use other search engines, true, I agree.

What worries me is Android however. Perhaps not in the US, but in a lot of other places around the world Android is the only operating system accessible to people and a large majority of the market, since iPhones are an impossible purchase. Phone manufacturers depend on Android like computer manufacturers once depended on Windows.

Apple is raking in the profits because it controls the vertical and sells expensive phones in premium markets, but in terms of raw market share things seem to be shaping towards a Microsoft/Windows-like situation. Android is close to passing the 75% mark [1] and judging by sales it probably will [2].

Maybe we're still not quite there, but it's shaping up in that direction. Facebook def has the lion's share on social networking, and messaging, but I can't help but believe that when it comes to Facebook it's mostly network effects keeping people in, since there are plenty of other (and sometimes better) messaging apps, lots of photo-sharing apps, and alternatives for event planning, getting your news, posting updates, etc.

Idk if "Google should be hauled in first", but it should def be hauled in too (alongside Amazon, but that's a whole other story).

[1]: http://gs.statcounter.com/os-market-share/mobile/worldwide/#...

[2]: https://www.statista.com/statistics/266136/global-market-sha...


Tbh, Google has a monopoly over search, but Facebook does not have a monopoly over social networks. Bing and DDG are much smaller competitors to Google than what Twitter, Snapchat, et al. are to Facebook.

This means you don't have much choice when it comes to search, but you do when it comes to social networks. Obviously, you can live without both; but if you need both search and social networks in your life, it's obvious which company is more powerful. (IMO you can live without social networks, but not without search engines.)


Sincere question: It appears that it's crystal clear to everyone where the distinction between fake and real news lies, or who should be allowed to post (mis)information (serve ads, if you prefer) on the interwebs, or how to identify the culprits exhaustively. I fail to see a way in practice to draw the boundary and identify/prosecute the offenders. Imho virtually all ads spread misinformation (push someone's agenda) to my detriment. My only means of resistance is inherent in my duty (civic, but also self-serving -- to keep my sanity) to consciously make the effort to seek/filter through multiple viewpoints.


It's simply astounding that the "use of personal data" is seemingly coming as a revelation to anyone. What's especially ridiculous is that arms of the government (like the FTC) are feigning indignation. The US government is - by far - the biggest collector and aggregator of personal data and information in the world. The government, corporations, "social media companies", and everyone else with access to data have all been doing the same thing: using this data to create models to influence what people believe and support (or buy), whatever agenda or product the respective organization is pushing.

Unfortunately this latest episode of faux outrage is very much like the faux outrage that has infected our cultural landscape for the last 18 months (due to Trump). Things that have been going on for decades are suddenly being attacked as evils unique to Trump. If the net result of this selective outrage led to substantive changes in our society, then perhaps the outrage would be a "good" thing (despite its manufactured nature).

Unfortunately, it's very clear that all of these practices - the data mining, the troll farms, the bot networks, the propaganda (both public and private) - will roll forward at full steam once Trump is consigned to the wastebin of history (where he belongs). Once the reins of power are back in the hands of a trusted steward of the global US military empire (rather than a deluded carnival barker), you should have absolutely no doubt that Comcast, and GE, and Disney will direct their minions in the media to focus your outrage somewhere else.


Reminds me of this quote from the tobacco industry in the early '90s:

“We don't smoke that shit. We just sell it. We reserve the right to smoke for the young, the poor, the black and the stupid." - R.J. Reynolds executive’s reply when asked why he didn’t smoke


Ok. I know it's off-topic here, but...

Now what's a good alternative to Facebook Messenger video calls? Signal doesn't support it - I read that it's in beta, but I can't enable it. Telegram doesn't seem to support it either. Skype is not an alternative.

Is Matrix/Riot good enough right now? For me it should work on Linux, Android and iOS for my family.

Now that I think about it, maybe I should setup some private WebRTC service for video calls. But it seems cumbersome for part of my family. However it probably would be easier for my parents and my in-laws.

EDIT: As Thriptic mentioned, Signal does indeed support video calls. I failed to find the functionality because I expected to see a separate button for video calls. One has to first start a call and then enable video.


Signal supports video calling on mobile at least. I've used it many times.


What is a socially acceptable way of getting people interested in Signal? I installed it and use it for messaging (I've never been able to actually receive an MMS with it). But I don't have a single contact on Signal, so it's not doing much for me at this point. My daily work contacts don't seem like a prime target user base.


You're going to have a hard time convincing non-technical people / people who don't acutely care about privacy to use it, in my experience. I generally try to shift people who are not in these demographics over to WhatsApp, as that has a large user base and implements the Signal protocol for end-to-end encryption. It's not as "safe" as Signal, but it's far better than something like Messenger.


I can't even send pictures on Signal.


There is a camera icon for taking a picture. Otherwise you need to use the attachment function for sending a picture.


I've tried everything AFAIK. The pictures appear inside the chat area but with a red cross on their upper right corner indicating that sending failed.


After you take or attach a picture in the Signal chat you haven't sent it yet; you can write a caption for the image or send it without one. The red cross is for removing the image if you change your mind. At least, this is what popped into my mind immediately after reading your description.


Riot is perfectly usable now, works on all the platforms you mentioned, and it supports video and voice calls. I'm helping them make the UX/UI more pleasant to use.


Wire works ok for me. I use it on my desktop for video calls. I'm on Ubuntu.


What's wrong with Skype?


I don't have much of an issue with targeted advertising or content by harvesting of data through what you make publicly available, as long as it is de-identified. It has plenty of applications for good as well. I hope that platforms are working on ways of taking these algorithms to edge nodes, making large master databases containing un-aggregated data that can be used to de-anonymise people less prevalent.

I'm more immediately concerned about blatantly fake news, clickbait, bots, sock-puppets and fake accounts posing as trusted parties in order to harvest trusted information and spread misinformation.


Isn't Facebook's valuation implicitly based on how Facebook can [ab]use personal data?


Ah, the other shoe drops. This is the reason for the blitz of anti-Facebook stories all at once... Legal authority to examine all that juicy personal data Facebook holds.

And if you think you are safe because you don't have an account, I have bad news for you.


The government doesn't have to resort to conspiracies to get your personal data from Facebook; if it wants it, it can get it. We live in the age of PRISM (in which Facebook is a participant), secret orders, and NSA bulk surveillance. They could literally just build or hack an app that uses Facebook's API, apparently, or buy something from the black market, or in the narrower case get a warrant.

Not to mention... I don't think this would actually give anyone "Legal authority to examine all that juicy personal data Facebook holds." I don't think "legal authority" actually works that way, but IANAL.


Nice try. The government already has your Facebook data. https://prod01-cdn07.cdn.firstlook.org/wp-uploads/sites/1/20...


This is exactly how I see it. The government already has your “official” details like your SSN, bank account info, address and phone number, but it doesn’t have a very good look into the kind of things you like to do. Having access to that data would be a surveillance analyst’s wet dream.


We see companies doing this all the time and I hope there's some sort of fix here, but I'm curious about the individual to individual implications. If we're serious about fixing privacy then things like sharing private messages, doxxing, etc should be addressed as well imho. Not sure what that would look like without curtailing free speech however.


At a previous employer we had a high up executive from Facebook come out to give a presentation.

One of my main takeaways was that one of FB's big goals was to build a 'knowledge economy'. It struck me as a bit of an odd objective at the time, but I think I am now starting to understand what this means (and it's a little scary).


I’m no fan of Facebook, and welcome this scrutiny. But what about all the people Facebook buys data from? Can we regulate them? What about Equifax?

All these corps do it without even the veneer of informed consent; we should make sure we criminalize the activity and not just crucify a well known practitioner and call it a day.


Nothing will change with government regulations. Companies like Facebook, Google, etc. are gold mines for the FBI, CIA, NSA and other three-letter organizations, so in the interest of national security, they will continue to harvest user data... Only a major boycott can do some damage...


Facebook deserves a good deal of criticism for this, but I can't be the only one who thinks this is just a continuation of the anti-startup, anti-social media dialog the major press agencies have been pushing ever since the Election?


With these practices or lack of restriction on data pulling at Facebook, is it wrong to assume the board knew about this? Too many brilliant minds on their board, including pmarca, to not understand firms were able to do this.


Better than a new law. I'm sure Congress is mulling one over, but surely they won't take a large sweeping, harm-the-good-more-than-the-bad, regulation-instead-of-enforcement approach.


I'd wager money that Feinstein will campaign on Facebook regulation this year.


> If the FTC finds Facebook violated terms of the consent decree, it has the power to fine the company thousands of dollars a day per violation.

Maybe I'm misreading this, but only a few thousand?


Yeah, this struck me too. It seems like the FTC's powers are far too light to deal with modern tech.

I'd really like to see something like the NTSB here, but for privacy/security issues. After an incident, the NTSB comes in, investigates everything, and produces a very detailed report as to what happened and what the industry should be doing differently. You can see their recent reports here: https://www.ntsb.gov/investigations/AccidentReports/Pages/Ac...

It's very clear from Facebook's behavior since the elections that they can't be trusted to investigate and report on themselves. E.g., this article on how their execs thought it best not to say anything until forced by circumstances: https://www.nytimes.com/2018/03/19/technology/facebook-alex-...



If each person is considered a separate violation then $1000/day adds up quickly.


Works out to $2 trillion in liability according to one (highly speculative) article I saw.
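For what it's worth, here's roughly how such a trillion-dollar figure could be reconstructed. Both inputs are assumptions: the widely reported 50 million affected users, and ~$40k as an approximate maximum penalty per violation.

```python
# Speculative worst-case liability: every affected user counted as a
# separate violation at an assumed ~$40,000 maximum penalty each.
affected_users = 50_000_000
fine_per_violation = 40_000  # USD, assumed approximate statutory maximum

max_liability = affected_users * fine_per_violation
print(f"${max_liability:,}")  # $2,000,000,000,000
```

Of course, no regulator would actually levy the theoretical maximum; the point is that the ceiling dwarfs Facebook's market cap.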


That's a neat way to nationalize FB.


Is it? Because if it is immoral for Facebook to sell data for electoral campaigns, what does it mean for the president to get all that data legally, and at taxpayer expense?


I did not make claims about whether nationalizing FB is neat or not. Just that if you want to nationalize a company, setting fines that make the company immediately insolvent, with the government by far its largest creditor, is quite a neat way to do it.


Where can I get my check? Or will the government spend it in my best interest?


I've been saying for a while that if people were just paid fairly for the data all these companies are tricking them into giving away, it would amount to a basic income of a couple thousand dollars a year.

It basically requires collective action though. If everyone does it at once they will start paying your bills to track you.


Wow, how much money do you think Facebook makes? They have about $114/person in US revenue.

And that's taking all the global revenue into account.
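Here's one way that ~$114 figure could be derived (both inputs are my own approximations, not numbers from the article):

```python
# Rough per-person revenue: total global revenue divided by the
# US + Canada population (both values are approximations I'm assuming).
global_revenue = 40_650_000_000     # Facebook's ~2017 annual revenue, USD
us_canada_population = 357_000_000  # rough US + Canada population

per_person = global_revenue / us_canada_population
print(f"${per_person:.0f} per person")  # $114 per person
```

So even handing users every cent of revenue would be nowhere near a basic income.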


Facebook is only one company. Start with the FANG companies, then banks + "partners", ISPs, etc.


If people were paid fairly for this data then these companies wouldn't exist. I'm not saying this is good or bad, but you'd turn around and be charged to use the service.

Interesting side question: how do you see market forces working to set a proper payment for data to users? Right now Facebook is essentially saying your data is worth free photos and being advertised to and propagandized, and people seem to accept that. How does this not become the standard of exchange in your system for any popular network-effect service?


>"Right now Facebook is essentially saying your data is worth free photos and being advertised to and propagandized, and people seem to accept that. How does this not become the standard of exchange in your system for any popular network-effect service?"

The point would be that people started valuing their data "correctly". I don't know how high that is but it must be worth much more than free web hosting or else these companies wouldn't have gotten so huge.


Almost certainly the latter. Although it would be whimsical to consider for a moment people retiring from Facebook’s misfortunes.


Fear not citizen, it'll be put to use by the same government that's actively working to dismantle consumer protections.


So roughly what number (that FB needs to pay) are we talking about here?


According to this article [1], trillions in possible fines.

[1] https://www.washingtonpost.com/news/the-switch/wp/2018/03/18...


Not as bad as Equifax, yet... all this hoopla.


In the long run this should not affect Facebook much; it's an opportunity to buy FB.


The Executive Branch deflecting from the Executive Branch.


As someone who wants to see the societal view of Facebook change, what prevents a viral uninstall campaign from happening to Facebook? #UninstallFacebook


This is crazy! Peter Thiel spoke on March 15, 2018 and said this about tech companies: "...If they do not take these issues seriously there is a risk they will be regulated..."

I have the video with the timestamp of the quote I mentioned, posted below.[1]

Ridiculously ironic timing on all of this (even more so since it is his darling Facebook), because I think this recent blunder by Facebook is a shot across the bow for other tech companies, and even borders on the boarding of their "vessels" for search and seizure by the U.S. government.

For those who do not know who he is.

> Thiel became Facebook's first outside investor when he acquired a 10.2% stake for $500,000 in August 2004. He sold the majority of his shares in Facebook for over $1 billion in 2012, but remains on the board of directors. [https://en.wikipedia.org/wiki/Peter_Thiel]

[1][Tech investor Peter Thiel speaks at the New York Economic Club](https://youtu.be/sxWpvgTH9oI?t=33m52s)


This is a particularly thorny issue for Thiel who said Gawker "ruined people's lives for no reason" after being outed as gay. Despite his involvement in Facebook, I think he's one of the few powerful people in tech who cares about people's privacy.


Does he though, or does he care about HIS privacy? I have no clue where he stands on the world at large based on his actions, just that he really didn't want to be outed (which is a reasonable request)


Really? I think he cared about himself with Gawker. Stop looking up to these people who consistently use you and apparently successfully trick you on their intentions.


If he cared about my privacy, he wouldn't have invested in Facebook.

He cares about his own privacy. FYIGM


Nobody involved in Palantir or Facebook gives a shit about anyone's privacy but their own.


Keep in mind he started Palantir. He doesn't actually care about privacy.


It is crazy Thiel didn't see this years ago and say something then, when there was still a chance something could be done. Plenty of others did. Now, hopefully, it is too late for Facebook to escape regulation and, dare to dream, trust-busting.


Is that why they hate Peter Thiel so much and try to smear him in the media?

ala

https://www.pastemagazine.com/articles/2016/06/peter-thiel-i...

(Kind of strange how they can target a gay guy for getting mad at being outed by a media company. Victim blame much?)

Meanwhile, he apparently called Uber "ethically challenged." So far, he's 2 for 2 on good quotes.

https://www.forbes.com/sites/richkarlgaard/2014/12/10/do-jer...


That's why regulation always comes to be - no company can be trusted to do the right thing on its own.


FWIW, having worked at Google, I can tell you they take these issues pretty seriously there. This is largely because of a few self-inflicted wounds early in the company's life.


Which is why Google collects location data even when location services are disabled, right?

Let’s not pretend Google is holier-than-Facebook.

https://qz.com/1131515/google-collects-android-users-locatio...


And the "Sign Into Chrome" feature slurps up all your history, cookies, etc. in a form Google can read and which their terms allow them to data-mine, unless you know to dig into the settings and configure a separate encryption key.


Isn't that necessary for Sign Into Chrome? The whole point is that you can transfer your history/cookies across browsers.


Firefox manages to sync just fine while automatically encrypting data end-to-end.


I mean... it's a cell phone. The "cell" part of it means that you move between different "cells" which correspond to different towers, so that the carrier can best route your call. Even feature phones tracked your location this way; it's used in criminal cases sometimes.


Ah yes, which explains why my Android phone suddenly stopped working after Google apologized and removed the functionality described in the article.

That the phone has to connect to nearby cell towers is obvious. That this information needs to be sent not to the cell company but to the OS company without an option to turn it off is bad programming or something worse.


But... the market... Oh, the damage that neolibertarianism has done.


Neolibs are dead. Trump was elected on "'free' trade is unfair trade" and if Dems want to change some things on this front it can happen.


And yet most of what he does is just business as usual for Republicans and in line with neoliberal ideas. He is a bit of a wildcard, so sometimes he yells something about tariffs, which might come out of nowhere, but looking at the stock markets, they are perfectly happy with the current administration.


What about NAFTA, the steel tariffs, Boeing vs. Bombardier, the Trans-Pacific Partnership... all those moves go against neoliberal ideas.


I don't think it's about ideology as much as interests. In the 80s and 90s, the US was the world's most economically powerful country and was able to use free trade to further its own interests; the ideological arguments for free trade were constructed to justify policies which were deemed to be in the interest of the US and its owners. Now that the US is facing serious competition, protectionism is gradually becoming a more rational policy, and the public ideology of American politicians is adapting.


And that's what libertarians fail to grasp.


Methods for ensuring a market operates the way the public wants:

Regulation: Laws that mandate exactly how something should be done and if the company doesn't do it that way they are shut down - with no regard to whether or not the legally-mandated process achieves the end. Quite literally it's politicians controlling the means to hope that a certain end occurs, regardless of that end being achieved. Prone to regulatory capture, rent seeking, anticompetitive practices. Apt for: when a negative outcome will absolutely devastate the public, there isn't much variability in how to achieve the end, the domains in which the market operates are stable.

Tort Law: A company is free to change how some process works but is able to be held financially liable for all ramifications of their process. Prone to failure if the assets of a company are less than the damage they can produce. Apt for: a new market which is highly dynamic, the state of the art is constantly changing, ruin is limited to customers who have chosen to engage with the company (no to little externalities).

Neocons: in favor of no regulation and castrated tort remedies.

Neoliberals: in favor of bloated regulation (rent seeking) and symbolic tort remedies.

Libertarians: in favor of /extremely/ limited regulation and strong tort remedies.


The problem with tort law is that by the time the government or court reacts, the perpetrators are long gone with the money and only a shell of a limited liability company is left.

For that to work, the owners of a company have to be personally responsible for the damages they create.


For tort law to work, you need a very effective universal legal aid system. It can't all be done by no-win-no-fee and class actions, and it takes an extremely long time.


... or government.


Despite what Redditors like to say, the big banks have done a surprisingly good job at self-regulating since the financial crisis. That said, these regulations will probably relax as people get more confident and comfortable with taking risks.


> a surprisingly good job at self regulating since the financial crisis.

The financial crisis was less than ten years ago. That's like calling a drunk driver "responsible" for going a whole week sober since that time they crashed into a minivan and killed an entire family.


Do people who think banks and corporations are good at self-regulating not look at history? The same pattern has repeated for hundreds of years—it’d be embarrassing and horrible if a drunk driver didn’t learn their lesson after a decade, but we haven’t learned our lesson after at least a century.


The ideology of regulation and big government = bad is more important than any amount of evidence to some.


This is the internet where vast troves of historical data are stored and shared but everyone decides to rely on their recent memory instead.


"Hundreds" is a bit of hyperbole here. We've only had modern corporate America for maybe the last 150 years. Pre-industrial companies didn't have as much power or clout as large behemoth enterprises do today.


We’ve only had the Internet for a few decades, but that doesn’t mean we can’t draw parallels between the dot-com bubble and what happened before, or that we shouldn’t have learned from the past.

Consider the Amsterdam banking crisis of 1763, which some have compared to the 2008 financial crisis. https://en.m.wikipedia.org/wiki/Amsterdam_banking_crisis_of_...


In this case it's not the same driver though. The majority of people in finance had nothing to do with mortgage backed securities, and a lot of leadership has changed in the past decade.


Remind us which leaders got fired? Wall Street makes the same excuses every time—oh, it wasn’t a majority of us, and those guys are retired anyways.

The same executives that paid fines and damages neither left nor were prosecuted. Perhaps this is because they aren’t guilty, and they just ponied up the money to make people happy. I think most people will agree that it is more plausible that at least some of the people who are still executives were guilty of at least incompetence. https://www.theatlantic.com/magazine/archive/2015/09/how-wal...


People in high positions never get fired. Even Nixon resigned before he was forced to leave office. Do you really think that the board of directors would want to keep someone responsible for causing their stock to drop 99%? I'm sure you can dig up some 10 year old Bloomberg articles to see how leadership changed, but only a few people can make sense of the significance of these changes.

This is all beside my original point anyways. I'm not saying things are perfect right now. My point was that people don't give credit to how common self regulation is because they don't care enough to pay attention to it.


> a lot of leadership has changed in the past decade.

> People in high positions never get fired.

Can you explain how you aren’t contradicting yourself? The first quote is from your original comment.

> I'm sure you can dig up some 10 year old Bloomberg articles to see how leadership changed, but only a few people can make sense of the significance of these changes.

Come on. I gave you an Atlantic article that claims no one suffered any real consequences. In contrast, the sum total of your argument is that “only a few people can make sense of the significance of these changes.” That sounds a lot like mysticism to me— are you operating on faith?


It's not a contradiction. People rarely get fired when they screw up in most white-collar jobs. They usually "voluntarily" leave. The Atlantic article focuses on criminal consequences, not leadership changes. My argument is that the board of directors and colleagues would know a lot more than you or me about whom to blame and who deserved to be asked to leave, and it's in their best interests to make the right decision.


Wells Fargo did an excellent job.


That's a strange statement. What about all the regulation that doesn't come to be? To take your statement to its logical conclusion would allow no company to do anything on its own.

Also, a statement like "no company can be trusted to do the right thing on its own" is obviously untrue, doesn't account for many of us with companies out here, and harms your overall point when you make these kinds of broad generalizations.


‘can’t be trusted to do’ != ‘does not do’


Of course they aren't the same. I still don't agree with the statement. And if the pitchforks weren't out, most people would realize that they trust companies to do the right thing quite frequently even when there are no regulations to prevent the wrong thing from being done instead.


I don't think bakeries or flower retail companies doing the right thing are the crux of the argument here, on HN of all places. We're talking about startups that are braving new ground in terms of what they're able to do, mostly without a legal framework to restrain them. Once they reach a certain size, these companies will eventually need to be regulated because they come to wield too much power. See Airbnb, Uber, 23andMe, Amazon, Google, etc.


I remember reading a book on Facebook in 2010 or so. I distinctly remember a quote from Mark Zuckerberg that "privacy is the concept from the past". Then, a couple of years later, I read that he bought the houses surrounding his own to ensure his privacy.

Lost any respect I had for the guy in that moment. I really hope FTC will force FB to stop most of their unfair practices.


> I really hope FTC will force FB to stop most of their unfair practices.

Yeah, Facebook said they would....in 2011.

Ron Wyden (US Senator from Oregon) asks the following:

"In 2011, Facebook entered into a consent agreement with Federal Trade Commission (FTC). Under the terms of that agreement, Facebook is required to maintain "a comprehensive privacy program that is reasonably designed to (1) address privacy risks related to the development and management of new and existing products and services for consumers, and (2) protect the privacy and confidentiality of covered information."

"a. Please describe how, three years after Facebook entered into the consent order with FTC, Spectre and his company were able to download sufficiently detailed data on 50 million Facebook users without their affirmative knowledge or consent."

https://www.wyden.senate.gov/imo/media/doc/wyden-cambridge-a...

Hopefully once Zuck is done getting raked over the coals in the UK, he's dragged back to the USA to answer to Congress as well.


Well, if data on European citizens has been exposed, then ideally Facebook's management would be charged with breaching national European data protection laws, tried, and sentenced to prison.

The legal basis for imprisonment for severe violations of data protection laws exists in many (most?) countries, but it is rarely, if ever, used.


What will determine Zuckerberg and FB's fate in the US is the extent of their contributions and the number of politicians who received them. If FB donated to many politicians' election campaigns, they may be able to walk away with a slap on the wrist.

Maybe tech companies should take a page from Boeing's playbook and set up offices in all 50 states


And that agreement means absolutely nothing unless the FTC is also willing to enforce it - hard - against Facebook now.


Which they won't. Very few people of significance from the 2008 financial collapse are in jail. The justice system in America doesn't apply to the 1%.

There was a movement that tried to make people aware of that by camping out for weeks. They were marginalized, by the news and media agencies owned by that same 1%.

As the previous post stated, this type of stuff was known in 2011 and 2014. There is a good chance the only reason this is making such strong headway now is because it's in someone's interest to have the media run with these stories.


> They were marginalized, by the news and media agencies owned by that same 1%.

That's not really true. There are many other organizations that have been around much longer, with substantially more members, that have accomplished much more than Occupy did and yet receive almost no media coverage. When you hear about a ballot initiative, an insurgent candidate, etc., there is a whole network of activists behind the scenes working to get things done that are largely ignored.

Contrast this with Occupy; just about everyone in the U.S. knows about Occupy because of the media coverage they received. Occupy got a substantial amount of coverage, particularly when you consider the amount of people involved (smaller than a whole lot of activist networks) and the political impact they had (not much). It's true that the poor state of the media in the U.S. is a big problem, but solving that problem would make the media less focused on political theater and more focused on the people effecting actual change.


IMO, Occupy marginalized themselves by refusing to ever stand for anything. It's very easy to say what you're against. It's harder to say, in detail, what you would do differently.


Did you attend a rally? Did you sit in at a general assembly? Have you discussed the movement's goals with one of its members? The media, quite falsely, reported (cherry-picking interviewees, a tactic used against all movements nowadays) that the movement lacked any concrete goals - it is patently untrue. [1]

[1] https://en.wikipedia.org/wiki/Occupy_movement#Goals


That's what the people marginalizing them said. To everyone else, it was pretty clear they wanted to see bankers in jail and regulatory restrictions that would prevent the problem from happening again in the same way.


Many of the largest players in the 2008 financial collapse -- e.g. banks -- are not regulated by the FTC:

https://www.ftc.gov/about-ftc/what-we-do/enforcement-authori...


Which, unfortunately, probably won't happen. Worse still, large scale intervention and stringent enforcement on the part of governing bodies and sovereign states is likely the only thing that could potentially curb the insane amount of power tech giants have accumulated over the past decade. Our culture is so saturated by the services and technologies they provide that solutions on the cultural or individual level are nigh impossible. No one will stop using these services because of bad press, this has been proven time and time again. People simply don't care enough or are so controlled by habit that a divorce from the network is perceived as more disagreeable than basically forking over anything to them, no matter what sort of implications or consequences relinquishing sensitive data ultimately has.

It's a sort of paradox: take the road of the libertine and accept that those that provide services to you freely take much more than they provide, or take the road of the conservative and accept a future wherein governments can effectively determine the technological landscape, and decide what services, what extent of data collection, and what levels of sharing are permissible. Neither is a particularly appealing option. In any case, the dream of the internet being some kind of individualist haven is long gone.

This is more of a stretch, but I think there is some degree of correlation between the forms of user-facing technology provided by massive data-mining companies and users' apparent nonchalance or apathy toward data distribution issues. A great number of socio-technological tools promote experiences that are fragmentary and break down focused engagement (asynchronicity, multi-channel communication, attention deficits, etc. are hallmarks of our age). Batter your brain with instantaneous, reactionary content 24/7 and you soon lose the capacity for deep or prolonged contemplation. If you've robbed the consumer of the intellectual capability to engage critically with your product (or, in the case of giant networks like Facebook, ensured he likely needs to buy in to the product itself to reach an audience), you've gone a long way toward ensuring you maintain hegemony.

Many of our modern technologies, like drugs, are habit forming and addictive. Once you're hooked, good luck getting out of it without a struggle. Most people don't want to struggle, so stories like this come out and effectively result in nothing.


Absolutely. Fed time sounds good for him.


And there it is; now it feels like the fable of the boiling frog. I can't remember ever wanting a company to crash and burn like this.

>The rise of social networking online means that people no longer have an expectation of privacy, according to Facebook founder Mark Zuckerberg.

>Talking at the Crunchie awards in San Francisco this weekend, the 25-year-old chief executive of the world's most popular social network said that privacy was no longer a "social norm".

https://www.theguardian.com/technology/2010/jan/11/facebook-...


I'm not trying to make this political, but I've always found it ironic that people in Southwest cities who talk big about border walls not keeping people out very often live in communities or cities where every house being surrounded by a wall is the norm. And the richer they are, the taller and more elaborate the walls get.

Again, this isn't about politics. It's about irony and how rich people in California often seem to say one thing and do another.


There's a huge difference between having a wall around your home so your neighbors/peeping toms can't look at you and having a wall on the border to keep immigrants out.

These things aren't even comparable, I have no idea where you think the irony lies.


These things are very similar. In one case, your defend your personal property (a house), in another, you defend your collective property (a country). In both cases, you rightly feel that you have the privilege of using that property that you want to be in control of, personally or collectively.


I think you completely misunderstand why people build fences or walls around their homes.

House wall:

- keep pesky neighborhood kids from running through your yard and ruining your grass

- keep your neighbors from being able to see you while you're swimming in your pool

Border wall:

- attempt to keep Mexicans from immigrating into the country illegally

I fail to see how these use cases are at all similar. Please, if I missed a use case for a house, or if your home is constantly being attacked by barbarians, let me know - but I don't think anybody in California builds a wall to "defend" themselves.


> I fail to see how these use cases are at all similar.

That's because you don't view Mexicans as pesky neighbors trampling all over your country.


>House wall: - keep pesky neighborhood kids from running through it and ruining your grass - keep your neighbors from being able to see you while youre swimming in your pool

Not according to what I read on Nextdoor. The people talking about making their walls taller, better, covered in more cameras don't care about the neighbors. It's about keeping strangers out.


As someone who grew up in a very much not-rich neighborhood, in a very much not-rich household that still got broken into, I'll have to disagree haha.

The only thing similar about these two contexts is the word "wall".

I don't think a border wall is a good idea because it will cost a lot, it won't solve the immigration problem, it means a lot symbolically in terms of diplomacy, and there are better solutions. Also, a lot of these people just want to escape from local conflicts, poverty, etc.

My parents bought window railings because although they hated how they looked, they were cheap, solved the problem and, I mean, these people just wanted our TV and my mom's jewelry.

In my case it was railings, because we didn't live in a house with the square footage for a garden, lol. But I guess for all the well-off folk with gardens and steal-able stuff in them, fences ("walls") serve a similar purpose?


You think it's ironic that people with fences around their houses think a 1,954 mile wall 30 feet high is impractical? What makes that ironic?

Are you suggesting that in these heavily walled Southwestern cities, people are using fences primarily as a form of defense against immigrants, and it's therefore ironic that they think their home walls will successfully protect them from large scale immigration, but not a wall along the border of two countries?


Without wading into the politics of the gesture, I'd point out that in the Southwestern US there's a strong influence of Spanish Colonial architecture, which is based on a central courtyard with high walls (in turn influenced by Arabic architecture in Andalusia). Which doesn't mean that what those high walls mean now is necessarily the same, but there is a general pattern[1]

1: https://en.wikipedia.org/wiki/Courtyard#Historic_use


> people in Southwest cities... very often live in communities or cities where every house being surrounded by a wall is the norm

In addition to the other points in this thread, I can't think of a city here where this is actually the case.


Also consider that these people are much more likely to own their houses and would stand to benefit more from the rising housing costs that come with immigration.


Yeah I've been trying to buy a house for a few hundred thousand dollars but all these illegal immigrants keep outbidding me. /s


Any influx of residents puts pressure on local and regional housing markets. Extra pressure on even low income rentals has an appreciating effect on the rest of the housing chain, because on the short term housing supply is inelastic, and space in desirable areas is relatively finite.


There is, of course, some effect, but it's relatively minor for the middle class. Someone who's making a good income is not going to settle for the same conditions an illegal immigrant does; they're not competing in the same market. Your analysis leaves out a bunch of realities. For example: most illegal immigrants share housing, reducing pressure compared to the norm. They also tend to stick to immigrant neighborhoods due to the language barrier and the need for cash transactions, which reduces their impact outside those areas. There is such a thing as homelessness; more population on the lower end of the income scale doesn't always mean property values go up, since some will end up homeless. People tend to get the best they can afford; if illegal immigrants are taking the lower end of the market, the legal residents aren't just going to start buying "better" places.

Illegal immigrants have a negligible effect on the property values of the type of property people putting up walls and fences around their homes own. And lastly I highly doubt anyone who's against the border wall is realistically thinking "this will keep my property values up!"


>There is of course, some effect but it's relatively minor for the middle class

This was my only point, because low and middle class markets and income are continuous spectra, and displacement at the base puts pressure on the rest of the continuum. People don't typically end up homeless because of a couple percent increase in rent, they find a way to pay it. Now, the exact quantitative effect on pricing throughout the market is something that neither of us can likely provide.

>Your analysis leaves out a bunch of realities...outside these areas.

And what happens when these areas fill up and start to influence surrounding neighborhoods? What about the increased strain on infrastructure, including roads, schools, police, fire, etc, especially by those who do not pay taxes?

What about the effect on markets caused by middle-income flight post-spillover, when these growing low-income neighborhoods bring with them crime and other undesirable activity?

No amount of arguing over left out "realities" changes the simple fact that more people create increased local demand in housing starved locales, which puts upward pressure on all markets, although of course the derivative of income pressure vs population decreases with increasing housing prices.

>People tend to get the best they can afford

"Best" is highly subjective and dependent entirely on market rates. People will pay more for less if the whole market is inflated by pressure from below.

To be clear, I am not interested in blaming immigrants, legal or otherwise, for any of society's problems. I am simply arguing that more people > increased demand > higher prices.


Do as I say, not as I do.

I'm getting more and more happy that I don't have an account there.

However, I'm not that confident that my plugins block all of their affiliates and the data gathering that feeds my shadow profile.


GDPR ought to kill off shadow profiles..


I mean, only the ones that are geo-IPed in Europe, no?


So that's going to be a really interesting thing to watch. Technically, the answer to your question is no - GDPR has nothing to do with being in the EU, it has to do with being a citizen of the EU. So if you're an American who travels to the EU, your shadow profile might get tagged with being in the EU, but you're still not eligible for GDPR protections.

On the other hand, if you're an EU citizen living in the United States for the last 20 years (meaning, before the advent of Facebook), you technically have the right to request that all your data be deleted from Facebook's servers.

Now, how will you know if they have data on you? Can you just assume that they do and make the request anyway? Will tech companies begin verifying your citizenship to tell if GDPR really applies to you? We'll soon see.


If you're an American, then you can gain (indirect) access to GDPR protections by transferring ownership of your Facebook account to a citizen of an EU member state. They can then withdraw consent to track personal data from Facebook for that account and/or send a subject access request.

Taking this action violates the Facebook ToS and will result in your account being closed.

Checkmate?


I wonder if someone could make a legitimate business out of this?

Pay someone $5 to get your account transferred to an EU citizen, and consequently removed under the GDPR guidelines.

You're still taking a huge risk by giving your profile to someone unknown, though.


You might be able to structure the sale so that the European was the data subject for GDPR reasons, but did not have the passwords. That seems reasonable because under the GDPR, a company like Equifax would be obligated to purge your data if you withdrew consent even though you don't have access to the account they have on you.


GDPR applies to anyone processing personal data about a subject who is in the EU. His or her citizenship does not matter. So you can just go to Europe and make your claim from there. https://gdpr-info.eu/art-3-gdpr/


It may be - but that would just allow Facebook to say "if the login is from an EU IP, don't save this record" - but the rest of your profile, generated in the United States, can still remain. As long as no collection happens while you're within the EU, they may be fine. The whole thing will have to play itself out in court, it seems.


Nope. If you are an EU citizen, you have a right to be forgotten under the GDPR.

I fully intend to automate a request every 40 days (the response time for a Subject Access Request) to have myself pulled from their data.


Will it? Or is this wishful thinking? How would it kill shadow profiles?


You would have the right (if you were an EU citizen) to ask for your shadow profile to be deleted, and I believe that they would have to collect an opt-in before they started to store a shadow profile about you.


This is technically true, but there are a lot of really weird implementation details. Since GDPR only applies to EU citizens, and those citizens could physically be anywhere in the world, how Facebook implements this will be super interesting.

Think about how a shadow profile gets created, for example - they notice that a group of three people keep getting tagged in photos, but there's a fourth person in the pictures who doesn't have a Facebook profile. The three people keep logging in from the same physical place (say, in the U.S.), and that same place is where the pictures are geolocated. You can assume this fourth person was in the U.S. So, Facebook starts a shadow profile on him - pictures he could have been tagged in, locations he probably was in, interests he probably has based on the intersection of his friends' interests.

But this guy is actually an EU citizen who showed up in the U.S. for a vacation. Uh oh. When would Facebook have found that out? When would they have asked this guy to opt-in? Can they assume everyone in the U.S. is not an EU citizen until told otherwise?
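The co-occurrence heuristic described above can be sketched roughly in code. Everything here (the names, the "appears together twice" and majority-interest thresholds, and the data shapes) is invented purely for illustration; this is a toy sketch of the idea, not Facebook's actual pipeline:

```python
# Toy sketch of shadow-profile inference from photo co-occurrence.
# All data and thresholds are hypothetical, for illustration only.
from collections import Counter

# Photos: each has a set of people recognized in it, plus a geotag.
photos = [
    {"people": {"alice", "bob", "carol", "unknown_4th"}, "geo": "US"},
    {"people": {"alice", "bob", "unknown_4th"}, "geo": "US"},
    {"people": {"bob", "carol", "unknown_4th"}, "geo": "US"},
]

# Declared interests of the registered users.
interests = {
    "alice": {"hiking", "jazz", "cooking"},
    "bob": {"jazz", "cooking", "chess"},
    "carol": {"cooking", "jazz", "travel"},
}

def shadow_profile(target, photos, interests):
    """Guess location, friends, and interests for an unregistered person."""
    cooccur = Counter()
    locations = Counter()
    for photo in photos:
        if target in photo["people"]:
            locations[photo["geo"]] += 1
            for person in photo["people"] - {target}:
                cooccur[person] += 1
    # People who co-appear at least twice are treated as likely friends.
    friends = [p for p, n in cooccur.items() if n >= 2]
    # Interests shared by a majority of likely friends get attributed
    # to the target.
    guessed = set()
    for interest in set().union(*(interests[f] for f in friends)):
        if sum(interest in interests[f] for f in friends) / len(friends) > 0.5:
            guessed.add(interest)
    return {
        "likely_location": locations.most_common(1)[0][0],
        "likely_friends": sorted(friends),
        "guessed_interests": guessed,
    }

profile = shadow_profile("unknown_4th", photos, interests)
# profile → {'likely_location': 'US',
#            'likely_friends': ['alice', 'bob', 'carol'],
#            'guessed_interests': {'jazz', 'cooking'}}
```

The point of the sketch is the GDPR wrinkle: nothing in this inference ever asks the target for consent or knows their citizenship, which is exactly the question raised above.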


GDPR applies to people located in the EU. Citizenship does not matter.


I wrote this in another comment, but this is only partially true. The GDPR protections can potentially extend to non-EU citizens who travel to the EU, but the letter of the law seems to state that that's only true if data is actually collected while the person is in the EU. In other words, Facebook and others could potentially say "if this data is geotagged in the EU, don't record it. Wait until they're back in the US." Then, since no data collection happened in the EU, they wouldn't have the right to get it deleted.

Edit: rereading https://gdpr-info.eu/art-3-gdpr/, it specifically mentions the "processing of data", not just storing. In other words, Facebook could potentially stop an American from logging in when in Europe. Would they? Likely not, it would hurt their business. But what if I (an American) sign on via a British VPN?

It also doesn't answer what would happen to the data of EU citizens who are never geotagged in the EU (due to living outside of it), but also have shadow profiles created without their consent anyway. The first GDPR lawsuit will be fascinating.


Or maybe GDPR will kill off Facebook.


There are also photos out there of Zuckerberg with his device's camera covered, yet Instagram requires you to turn on the camera and microphone to post even pre-recorded content to Stories.


Any time the incredibly wealthy tell you "things have changed, get over it", grab a pitchfork.


> Lost any respect I had for the guy in that moment

Um... why only then? His statement "privacy is the concept from the past" was dishonest even back then - because if you have a lot of money, you can always buy more privacy than everybody else, and you are generally much less exposed to the issues that, let's say, "average people" face.


Because even if I didn't like it, I could accept it as an opinion on future human interactions. Once he made it obviously clear that he himself considers it bullshit, while still selling the vision to the masses, I realised the man has no integrity whatsoever.


> Because even if I didn't like it, I could accept it as an opinion on future human interactions

But why could you accept it, from a person who is not affected by this vision anyway?

Did he provide some intimate look into his sphere of privacy back then?

Or did he offer any transparency initiative regarding the Facebook company?


> I really hope FTC will force FB to stop most of their unfair practices.

Which ones do you want to keep?


None of course, but I don't believe they will get them all.


You lost respect for him over that? Should he put a cam in his bathroom for the world to see to be in sync with his business ambitions?


In May the GDPR comes into force in the EU.


I'm confused. Facebook is a service. As a service, its terms are "your data is ours, if you use our service". If you agree to that and you then use their service, that should be the end of it. You are the product for Facebook.


It so happens that we live in a world where terms of service can be restricted by law through established processes.

If a country passed a law most of its citizens didn’t like, would you tell them to grin and bear it because “if you don’t like it you can just move?”


And GDPR puts limits on those terms. Just like labour laws put limits on contracts between a company and its workers. You can agree to an illegal contract, but that contract is still void...

That's how society is supposed to work.


That is not the position of EU law. In EU law, the person owns the data. There are limitations to what you can sign away.


To be fair, privacy and security are two different things


I would like them to review my numerous fake-profile reports to Facebook, which they ignored... probably in the 100s during this last election. They failed to act, and I am pretty sure they promote, allow, and encourage the use of bots and fake accounts on their social network, as it helps inflate their user count.



