I'm no Facebook fan, but the reasoning behind the lawsuit is weak and could set a bad precedent.
Cambridge Analytica used the Facebook API to ask users to share data about them & their friends. Stupid users agreed to that.
The argument here isn't that Facebook is playing fast and loose with tracking & user data (which would be a legitimate argument), it's that Facebook is allowing people to grant access to their data to third-parties and Facebook should somehow be faulted for that. Facebook is a neutral carrier here, and they acted on behalf of the user - he decided that Cambridge Analytica should have had access to his data. Facebook should not be forced to somehow be the arbiter of this.
This lawsuit will give platforms even more reasons to restrict API access, which would impact legitimate usage much more than nefarious abuse (stupid people will always find a way to screw up, API or not - if the API is gone they'd happily enter their Facebook credentials directly instead).
> Cambridge Analytica used the Facebook API to ask users to share data about them & their friends. Stupid users agreed to that.
According to Facebook, this is not what happened, and CA's collection of this data represented a breach of their platform policies [0]:
> In 2015, we learned that a psychology professor at the University of Cambridge named Dr. Aleksandr Kogan lied to us and violated our Platform Policies by passing data from an app that was using Facebook Login to SCL/Cambridge Analytica ... He also passed that data to Christopher Wylie of Eunoia Technologies, Inc.
> [Kogan] did not subsequently abide by our rules. By passing information on to a third party, including SCL/Cambridge Analytica and Christopher Wylie of Eunoia Technologies, he violated our platform policies.
> The claim that this is a data breach is completely false. Aleksandr Kogan requested and gained access to information from users who chose to sign up to his app, and everyone involved gave their consent. People knowingly provided their information, no systems were infiltrated, and no passwords or sensitive pieces of information were stolen or hacked.
Users knowingly _agreed_ to share their data with Dr. Kogan. Dr. Kogan was contractually prohibited from passing that information on to third parties (Cambridge Analytica) but did so anyway and was banned as a result.
I haven't seen it phrased that way before, and now that I have, I simultaneously accept the logic of it and am horrified by the bureaucratic churn that must spin up just to ferry legal responsibility to (in theory) the correct party.
'luckily' it is expensive to sue, so everything works out.
Consider the alternative: the laptop you bought on Amazon catches fire and burns down your daughter's school. The school contacts your insurance, which now has to contact a component supplier in Shenzhen (who supplied the power supply) and sue them under Chinese law, instead of Amazon.
I think this can end up resulting in less bureaucratic churn than the other approach. In this particular case a bunch of users had their data forwarded by one entity - if CA had harvested data from multiple sources using the Facebook API, I think it'd be unreasonable for those users to need to legally pursue each terms violator. Facebook may also refuse to identify the violating party for a variety of reasons[1], leaving the lawsuit without an identified defendant, which doesn't really help matters.
1. Proprietary customer information, privacy, just generally not talking.
The real world is a ridiculously complex place, something like an almost endless fractal. I too would like to see companies like FB burn, because they have given us plenty of reasons in the past, but in this case...
I look at it from an outdoor equipment perspective - if a YKK zipper fails on my Gore-Tex jacket, for me the jacket manufacturer (say Rab) would be the one to raise a warranty ticket/questions with, not Japanese YKK, which produces billions of zippers for everybody all the time. Although Rab is just buying products from DuPont (sigh...), YKK, threads etc. and putting them together (at least that's a more common situation compared to manufacturing it yourself).
I guess in the real world Rab would swallow my specific issue while in warranty and issue a fix/replacement, and if they see enough issues with a supplier they raise it in batch mode, i.e. for a discount on future orders or a one-time compensation.
Well one potential outcome of a suit against YKK for a zipper failure might be that, in fact, the zipper itself didn't fail due to poor quality reasons - but instead the fabric shed and accumulated in the teeth wearing them down over time... Basically, with an assembled product, it's unreasonable to expect consumers to try and identify the actual fault of the design.
As an anecdote on this topic.... I recall having a long necked jacket as a kid where the zipper wore down heavily around the collar since the neck was so long that it ended up being too tall for normal day-wear - and thus a lot of unnecessary stress was put on the zipper mechanism when it was partially zipped up. After a few months the teeth had weakened in that area to the point where the zipper would frequently come off the tracks there. In this specific case fitting a jacket with a four inch collar caused a zipper failure - the zipper was probably cheaply made anyways but if the cut of the jacket had been different there likely wouldn't have been an issue.
Well sure, I don't have a contract with Dr. Kogan, so what can I sue him for? There is no breach of any contract between us. I do have a contract with Facebook, so they are pretty much the only people I can sue. They can turn around and sue Dr. Kogan because they did have a contract with him, but I can't sue him directly for breaching a contract I'm not a part of.
> I do have a contract with Facebook, so they are pretty much the only people I can sue.
Can you sue them for this, though? Your contract would need to say "if I personally turn my data over to a third party, that third party will not misuse it". And in the unlikely event that it did say that, misuse of your data by the third party still wouldn't violate the contract, because... the third party is not party to the contract.
Facebook offered an API for sharing your data with a 3rd party vetted by Facebook.
Even if you don't buy that argument, at the very least the people who were FB friends of those who shared their lists with Kogan should still be able to sue FB, as they had absolutely no relation with Kogan or his app and still some of their data ended up sold.
I believe by using Dr. Kogan's app and explicitly giving Dr. Kogan permission to use your data (by accepting the FB interstitial UI that confirms with you as a user whether you would like to give a third-party access to your FB data via the FB API), you are entering into some kind of contract with him, no?
FB doesn't just hand users' data over without user confirmation... Users see UI permission dialogs and have to explicitly agree to them first (similar to an iPhone app asking for permission to use your microphone or location -- how is Apple responsible if said app gets hacked and leaks your microphone recordings?).
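For anyone who never saw that flow, here is roughly what it looked like from the app developer's side. This is a hedged sketch from memory, so treat the endpoint and scope names as approximations of the pre-2015-era Facebook Login rather than the exact API: the app never sees the user's password, it only redirects the user to Facebook's own consent dialog listing the requested permissions, and gets a scoped token back only after the user clicks "Allow".

    # Illustrative sketch only - the app id, callback URL and scope names are
    # placeholders. "friends_*"-style permissions existed in the old v1.0-era
    # API and were what let an app request data about a consenting user's friends.
    from urllib.parse import urlencode

    APP_ID = "123456"                                # hypothetical app id
    REDIRECT_URI = "https://quiz.example/callback"   # hypothetical app callback

    def build_consent_url(requested_scopes):
        # Facebook's dialog enumerates exactly these scopes to the user;
        # no data flows to the app until the user explicitly clicks "Allow".
        params = {
            "client_id": APP_ID,
            "redirect_uri": REDIRECT_URI,
            "response_type": "code",
            "scope": ",".join(requested_scopes),
        }
        return "https://www.facebook.com/dialog/oauth?" + urlencode(params)

    print(build_consent_url(["public_profile", "user_likes", "friends_likes"]))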
The reason we have a system of civil laws is so that everybody doesn’t have to all individually set up their own system to guard against contractual breaches.
And does anything about this make Facebook liable for the way CA used the data willingly given to them by their users?
Facebook is either:
1. a neutral party, acting on the wishes of its users, who asked to grant CA access to their data. The victims are users and the aggressor is CA for misusing data.
Or:
2. a victim party, after CA went against their policies. CA is the aggressor in this scenario, and the users are not part of the equation.
Someone who is not responsible for causing damages can still be held liable for damages if they were negligent. One could argue that by having policies but not enforcing them, Facebook contributed to the harm befalling users, especially if an argument can be constructed that users relied on Facebook's platform policies as part of deciding whether to share data with CA.
> Cambridge Analytica's app on Facebook had harvested the data of people who interacted with it - and that of friends who had not given consent.
This is the problem (quote from the OP article). Facebook collected data on people and then shared this data with third parties without their consent. That their friends gave consent has no relevance - the friends don't have the right to do so, and they didn't do it themselves. Facebook did, at the request of CA (or rather that 'researcher'), with the approval of friends.
> If you shout your friend's email in a hotel's lobby, you wouldn't blame the hotel, right?
I wouldn't. But if my friend went to reception and told the staff "Hey, you know my friend in room 101? Please share their email with the stranger in room 301. Thank you." I very much would.
Even worse - in this case, the hotel itself was asking your friend "hey, do you want us to share information about the one in room 101 with the stranger in room 301? The stranger in room 301 won't talk to you unless you do".
> If you shout your friend's email in a hotel's lobby, you wouldn't blame the hotel, right?
No but your friend would be quite in the right to blame you if they got threatening emails as a result of it. Depending on whether actual harm came from the act and whether you shouted their email with an intent to cause harm (even a different harm like a cascade of d** pics) then you could be held liable for damages.
If you act in a manner intended to cause harm and harm results - even if not in the manner you intended - then you can be held liable. Greasing a sidewalk so that folks will fall on their ass for the giggles and then causing someone to break their spine still leaves you liable for damages - even if you were unable to foresee the potential outcome of someone breaking their spine.
The hotel lobby example is likely to not result in liability due to difficulties in showing you acted maliciously but there certainly is a possibility there.
That's not what happened here. People weren't actively posting their friends' details to some stranger. They were using an FB feature and checking a box. They barely even had any way of knowing exactly what data about their friends would end up being shared with the 3rd party.
You are saying Facebook users willingly gave their data to CA as if it is a matter of fact. Can I refer you to the post you replied to? Facebook disputes this!
Also, no need to set up a false dichotomy. There are other possibilities. Usually, figuring out liability is a function of the court, thus this legal action. You'll probably get the answer to your question when the suit has been concluded.
Asking for a victim to find the "root cause" of a problem is too much to ask. If X wronging Y causes a wrong to happen to Z... the legal system is largely designed for "Z sues Y", then "Y sues X".
Asking for Z to sue X directly is asking for too much. There's no way for Z to even know that X exists.
Other than the time that X explicitly and directly asked Z for access to their data, you mean.
This isn't similar to a case where I stored my users' info in a database on Cloud Hosting Service Inc machines, CHS's lax security allowed the data to be hacked, and I am now accountable to my users because I used an insecure service. Facebook's role in this situation was, literally speaking, a permissions broker between the end users and Dr. Kogan. The end users granted Dr. Kogan permission with every opportunity to learn about Dr. Kogan.
Edit: Correction that users had a chance to learn about Dr. Kogan, not CA.
1. Z doesn't know Y's policy towards X. Is the problem truly Y's fault or X's fault? "Z sues Y" doesn't necessarily implicate Y as the root cause, it just proves that Y was "along the way" towards the root cause.
2. Y sues X is a totally separate question. Consider the case where a car-parts company creates a suspension ("X"), who sells their suspensions to Ford (aka: Y). Sometime while customer "Z" was driving, the suspension fails. Z sues Y for a million bucks to cover the cost of back surgery or something. Y then has to argue with X to figure out who was responsible for the suspension failure. Depending on the agreements / contracts Y could be the root cause, or X. (Maybe it's Y's fault: if Y was using the suspensions incorrectly and X can prove it, then the Y-sues-X case will fail).
In this #2 case: if Z sues X directly, Z will fail (because X is not at fault). It's safer for Z to sue Y... and it's also the morally sound way to move things forward.
----------
For better or worse, Z (the typical user) has a relationship with Facebook (Y). Cambridge Analytica is X. Whether X or Y is at fault is still ambiguous from Z's perspective (no reason for Z to come up with legal arguments and determine the "right person to sue").
All Z has to prove is someone wronged him, and that Y is the next person up the chain. Let Y's lawyers figure out if Y or X is responsible. Z just needs compensation for Z's issues alone.
That's because you keep making points that ignore that the end users knowingly granted permission to Dr. Kogan.
If I buy a car, I don't expect the transmission manufacturer to be part of the specs or shopping process, so yes, I would sue Ford. But if Ford says "comes with Goodyear tires", and then one of those tires proves faulty, I'll be suing Goodyear, not Ford.
But if you really want to use comparisons, let's use one that's appropriate:
I purchase an iPhone from Apple. In this scenario, Apple has a policy that app developers can't share data with 3rd parties. I download an app and grant it permission to my data. The app developer then shares that data with someone else. Who is at fault? Who is the victim? Should I sue Apple? Should I sue the app developer? Can I really sue anyone considering I myself downloaded the app and granted it access to my data?
Analogies seem to be failing. So let's actually talk about the case then.
Cambridge Analytica asked user A for permission. Facebook allowed CA to gather information about B (A's friend), and B NEVER provided consent. B now is wondering who to sue: Facebook or CA.
My opinion: (IANAL). B sues Facebook. Then, Facebook sues CA.
This leads to a few results:
1. If B loses its case against Facebook, the game is over.
2. If B wins its case against Facebook, Facebook continues and sues CA.
3. If Facebook loses vs CA, then the game is over and Facebook is found to be at fault. If Facebook wins vs CA, then CA was at fault.
4. If CA was at fault, then maybe CA will then sue its consultant over the issue, and it may continue. So on and so forth until the root cause is discovered, wherein the game ends.
Simple process. At least... simple if everyone could afford the proper legal process. B probably can't afford it and many "Bs" need to gather together for a class action lawsuit and all that jazz.
The hardest part of the puzzle is what exactly B would be suing over. It feels like B was damaged as a whole, since Facebook leaked B's information to CA. But it's hard for me to formalize the complaint. Then again: that's a lawyer's job to find out.
If we're going with a chain lawsuit, then why skip the step where B sues A for sharing their information with Dr. Kogan?
B sues A for sharing their information, then A sues Facebook, then Facebook sues CA. That would be the full cycle, no? If B is allowed to skip suing A in favor of suing Facebook directly, why shouldn't they also skip suing Facebook and sue Dr. Kogan directly? Or maybe it doesn't even get to go that far: B just needs to sue A, Facebook gets to sue Dr. Kogan, and that's all we see.
> If we're going with a chain lawsuit, then why skip the step where B sues A for sharing their information with Dr. Kogan?
B interacts with A through Facebook, do they not?
If Facebook wants to sue A as the next leg in the chain, they're certainly welcome to try. That's the joy about just following the edge of the graph: it's Facebook's job to figure out if A is more at fault (and should be sued) or if Cambridge Analytica is more at fault.
This "chain" methodology further demonstrates which lawsuits are likely fruitless. The concept of Facebook suing A, or even Cambridge Analytica for suing A (if it goes that far) is clearly improper at face value. Breaking things up one-step at a time allows us to seek justice.
> it's Facebook's job to figure out if A is more at fault (and should be sued) or if Cambridge Analytica is more at fault.
No, it's not Facebook's job to figure that out. If this were an investigation, then it would be the investigating office's job to figure that out. But it's not an investigation, it's a lawsuit, an accusation by one party against another party. The only thing to figure out here is if the accusation is legitimate. This thread started with the GP remarking that the accusation shouldn't be found valid.
To say that it should be found valid because the accused can then separately try to sue another party is not a proper evaluation of the accusation, nor is it demonstrative of a productive legal process (at least in my opinion), nor is it "the entire point of the justice system".
B shared some data with Facebook. That data has somehow leaked to CA, which B never agreed to. Without doing any investigation of their own, B can pretty easily accuse FB of losing their data.
FB can defend itself by saying B shared their data with A, and it is A who misplaced it.
Or, it could be ruled that A could not be expected to understand that they are sharing non-public data about B with a 3rd party, so the ball could be back in FB's court. Perhaps FB should not have offered this option to A at all.
Or perhaps that was well within FB's and A's rights, and the problem instead is that A never agreed to share this data with CA, they only shared it with Dr Kogan.
In this case again, it could be Dr Kogan alone who is at fault for sharing the data improperly, or it may also be FB's fault for not vetting app developers enough before giving them access to user's data.
Of course, there could also be many more nuanced decisions as well. But for B, the chain can only really start with suing FB - the only entity that B shared their data directly with. B can't know whether it got to CA through A or through D or E, or whether FB was hacked and the information was stolen from them etc., and they have no right to demand this information from FB outside of a lawsuit.
> Cambridge Analytica asked user A for permission. Facebook allowed CA to gather information about B (A's friend), and B NEVER provided consent. B now is wondering who to sue: Facebook or CA.
B consented to sharing their information with A (by accepting a friend request or making specific data items public-readable), who granted Facebook the right to share that information with (edit:) not CA, but Dr. Kogan.
If we’re friends, I’ve shared my information with you. I haven’t given you (or Facebook, or anyone else) permission to share my data with Cambridge analytica.
If you authorized an app to access the data shared with you, then you authorized release of your friends' information because that's what data you agreed to share with the app.
Indeed, where is a screenshot of the 'Authorize App' consent dialogue that users were presented with?
- A agrees to sharing info with B by accepting a friend request. Explicitly per the terms of service, and implicitly because technically anyone can take a screenshot or a photo or a video and share whatever's shared with them (even in DRM'd systems with limited key distribution).
- B authorizes C to retrieve the data available to B.
- C then reshares, sells, distributes, or otherwise transmits information to D.
F enabled A to share data with B, given explicit user consent. F enabled B to share data with C, given explicit user consent.
If you don't want people to know things, don't put that information on the internet; and don't authorize friends to share information you haven't volunteered.
Your post conflates ethics (what should happen), law (what is legal to happen) and what actually happens.
Ethically, if I tell you my email address, and you sell my name + email address to advertisers without telling me, you've done me wrong. You violated my reasonable expectation of privacy. I have the same expectation if I make a private post to facebook, visible only to my (curated) list of friends. That content is for your eyes only.
Violating that expectation probably has no repercussions under US law. But it is almost certainly illegal under the GDPR. I installed Clubhouse the other day and Clubhouse asked me to share my contacts with the app. Saying yes without checking with everyone on my contacts list was probably illegal in Europe. (Rightfully so, in my opinion.)
In this case, A shared information (posts, etc) with B (A's friend) on Facebook. B authorized app X to access their information - which in turn passed that information to Z (Cambridge Analytica) with neither A nor B's consent. Things that went wrong here:
- B should not have been able to pass A's information to a third party (X) without A's explicit consent.
- X should not have passed information to Z (Cambridge Analytica)
- Facebook shouldn't have built a platform which permitted / encouraged such obvious and blatant abuses of privacy. If a user told facebook that some content was private, facebook violated user's trust by sharing that information with a random 3rd party app. B's consent isn't relevant wrt A's data. (Permission isn't transitive.)
Who's legally at fault here? I have no idea, and I'm glad the courts exist to figure all this out.
Meanwhile, our technology is utterly failing users' expectations of privacy - which, yes, actually exist in much of the rest of the world.
> If you don't want people to know things, don't put that information on the internet
What a ridiculous sentiment. No. I want to use the internet and have an expectation of privacy. I will not settle for mediocrity so that facebook can make more money.
Did the losses and liabilities result from the actions of the plaintiff? Do we need multiple non-joined cases?
BTW, the SOLID project believes that data portability is a privacy advantage of their competing, open source federated system.
If you don't want people to know something, don't put it on the internet; regardless of TOS.
We shouldn't expect or rely upon information asymmetry holding over time.
Facebook certainly required users (who are not paying customers) to sign data sharing agreements.
Facebook did not commit the crimes of Cambridge Analytica (Steve Bannon, Trump's campaign guy; Ted Cruz). Facebook was fined $5b. Cambridge Analytica went bankrupt and Nix is barred from serving on the board of any UK company for 7 years.
Facebook's expenses related to Russian misinformation campaigns, Facebook's expenses related to the administration's denial of ongoing foreign information operations paid for by advertisers who don't want a spot next to sleaze.
>The argument here isn't that Facebook is playing fast and loose with tracking & user data (which would be a legitimate argument), it's that Facebook is allowing people to grant access to their data to third-parties and Facebook should somehow be faulted for that.
Even so, you yourself wrote that users shared data about their friends. Why does Facebook allow people to share the data of others who didn't agree to this?
> Why does Facebook allow people to share the data of others who didn't agree to this?
I have names, email addresses, phone numbers, birthdates, email contents, and more for most of my friends. There's no centralized arbiter of this information; I have the ability to share this data in any way I choose.
And I do! I switch email providers, install apps on my phone, use calendaring systems, tell our friends where to meet for surprise birthday parties, etc. I don't need your consent for any of it, because even though the information may be about you, we understand that it's "my" data.
Inserting Facebook in this process doesn't really change the dynamic.
Not really; it's their data, and you're allowed to use it.
> Inserting Facebook in this process doesn't really change the dynamic.
Yes it does, because while you (the individual) are allowed under GDPR to use the personal data of your friends for personal purposes, that doesn't automatically entitle Facebook to use it for their purposes. Only on your behalf for your purposes.
Here in the US, facts are not copyrightable. Your phone number, birthdate, email address, likes on facebook, list of friends, etc are not things that you can "take away" from someone. In theory you could exercise copyright over an email you've written, but I'm not sure that's ever been worked out in court.
Foreword: I know that you won't like what you read, but this is what the GDPR dictates, so before you start down-voting, please read Rec. 74 and Art. 24, and see [1] - you are the "entity" that obtained the data.
Let me shed some light. Facebook/Google have nothing to do with it, except that they are breaking the law because of you, who fed them data about your friends without their consent.
Under the GDPR, the one who gave personally identifiable information (PII) to Google/Facebook/whatever is the one responsible for whatever they do with it.
(Or in other words - if you are gathering personal data on your website for a 3rd party, you had better be sure that the 3rd party has a strong legal bond with you regarding the information you have "traded" to it, or you might be in trouble.)
Even if "your friend" has given his/hers PII to you, you dont have any consent to share it with whatever 3rd party application you are using and is stealing your data based on "I Agree button". This is making you, as a controller of PII responsible for his PII. If the 3rd party application ("Facebook/Google/...) took it from you for whatever "reason", those information were not yours to share and you have zero comfort in not being given consent. You have decided, for your friend, that you will share his/hers information with 3rd party application. Due to negligence (you didn't read the "I Agree" text, you didn't care (negligence),... whatever. It really doesn't matter.)
You have two troubles here.
- The application was violating the GDPR. Clearly. Without any doubt. It slurped in PII from your friends, who gave no consent. It might argue that you misled it; in that case all the guilt is on you - unless the application is well known for such acts, which again paints the big red word "negligence" across your forehead.
- YOU were violating the GDPR by not taking care of your friend's PII and giving it to a 3rd party without consent, approval, anything ("Hey, I just took his phone number").
Not only can the 3rd-party application be held guilty of stockpiling PII without consent, in the same manner YOU can be held guilty of giving them PII (oh yeah, the "I Agree" button), and your "friend" has all the legal support in the EU to sue you for this - the EU won't, they have bigger fish to fry, but your friend can and might.
[1] - GDPR defines a controller as: >>> the natural <<< or legal person, public authority, agency or other body which, alone or jointly with others, >>> determines the purposes and means of the processing <<< of personal data
> I don't need your consent for any of it, because even though the information may be about you, we understand that it's "my" data.
You do need consent though; if I provide my email in a social setting I implicitly give consent for birthday parties etc. I didn't consent to you selling my email as part of a bundle. If people found out you were providing data to random people, at least a stern talking-to would happen.
Under GDPR it works this way for business too, just because I gave you data for a specific purpose doesn't mean you can do whatever you want with it. I'm not aware of other jurisdictions.
So if I give you my phone number and you store it in Google Contacts, and I later decide I don't want you to have my phone number anymore, under GDPR can I request that Google delete my number from your contacts? After all, I never consented to you sharing my phone number with Google.
There is an exception for data that is required for the functioning of the service. You need your friends email address to use email, but do you really need their birthday or a sentiment analysis of their opinion of cheesecakes?
> but do you really need their birthday or a sentiment analysis of their opinion of cheesecakes
If you go back to the Wild West of the Facebook apps, shortly after platform launch there was an app for everything - apps for fancy birthday cards with birthday reminders, as well as polling apps telling you which one of your friends is the biggest cheesecake lover.
Every piece of data can be spun into being essential.
A polling app building a “psychological compatibility profile” can arbitrarily add new data points, and “streamline” the onboarding process by collecting all of the necessary data with one click (with fully disclosed list of collected data points).
Which is what CA has built.
Not just them - any survey app claiming to help you find out “which Game of Thrones characters you and your friends are” can arbitrarily claim those data points as necessary.
Did people directly add permissions to CA on their accounts? I got the impression they were misled and wanted to add some different applications with different features.
An app to help you find out "which Game of Thrones characters you and your friends are" can arbitrarily claim those data points, and then use them to discover which Game of Thrones character you and your friends are. In that case, even storing the data looks like a violation, even more so sharing it with anyone.
While a surprise birthday party etc. is definitely not a problem for me, if any of my friends considered my address, phone number etc. as his/her data, there'd be a rather serious conversation about it.
Because open Internet principles as understood at the time required that. Facebook used to be heavily criticized (see e.g https://www.google.com/amp/s/www.wired.com/2007/08/open-soci...) for locking data into their platform when it ought to be available on the open web. There was a pervasive sense that you should be able to authorize third parties to do anything you can do through the official webapp; the term “walled garden” was common for platforms that wouldn’t offer this level of control.
Indeed - we should remember that a good chunk of the complaints about Facebook are because they opened up an API to anyone who granted permission, as demanded by power users like us who wanted different services to interoperate seamlessly.
If I put on my blinders against this being Facebook for a moment, supposing that you're on a social network in which your friend is someone you personally trust, then it's not that ridiculous to trust that person with the decision to share your data. In a very limited way, you kind of expect this (your friend giving your number to someone who they think you'll get along with, or whatever).
This goes a bit sideways on Facebook in two main ways, I think:
1. People are way too fast and loose with who they keep as "friends" on Facebook
2. Facebook has way too much data to warrant a blanket "Yeah, please share -all- of that at once" agreement. Something more granular, like "The phone numbers of your friends who have themselves granted permissions for their friends to share their numbers" would be more reasonable.
> In a very limited way, you kind of expect this (your friend giving your number to someone who they think you'll get along with, or whatever).
I absolutely do not expect this. Nor would I be okay with a friend sharing my number to someone they think I'll get along with. I don't think I'm alone in this either.
The thing here is that "you're on a social network in which your friend is someone you personally trust, then it's not that ridiculous to trust that person with the decision to share your data" does not match the legal expectation. If my friend allows Facebook to give my data to Cambridge Analytica, that does not give Facebook any legal grounds to do that - so as Facebook did it, it would be a violation at least of the current laws (the UK pre-GDPR legislation was more limited). You are required to inform the data subject and, if you use consent as the basis, you're required to get consent from the data subject (or their legal guardian), not some other person, even if that person is their friend or family member. My spouse or parent can't consent to sharing data on my behalf, and any terms and conditions to which they agree can't waive my rights.
Also, it's worth noting that there is a big difference between "your friend giving your number to someone who they think you'll get along with" and your friend sharing your name and number with some company - for GDPR, the first is covered by the "personal activity" clause (Article 2(2)(c)), and the latter is not, so GDPR applies and the consent of that friend isn't sufficient, i.e. the friend is permitted to click "share", however, that does not necessarily mean that the company is permitted to use the data shared in this manner. So every company that expects EU users to share their phone contact lists had better be very careful on what and how they do it - you can't rely on informing users or getting consent as you're informing someone else and getting someone else's consent.
There's no 1000 page EULA to read, and FB wasn't the predator. CA asked for data and the user said yes, same as if you accept a friend request from someone who gossips about you behind your back.
This is ludicrous. So somebody friends me on Facebook, someone who I know and trust. Then that person comes across some quiz, and then in teeny text at the bottom of what looks like a standard "blah blah blah" popup is the information, carefully worded so as not to be too alarmist, that that person's friends' data (i.e. me) will also be sucked up.
At that point, why just stop at friends? Why not go to any transitive relationship with the argument "well, you trusted that person, so it's just like them sharing the data you already gave to them". Of course, the absurdity of that is that one person can share the whole world. I do not think this is a slippery slope argument at all, given that FB already went halfway down the slope before there was outrage.
If you told Facebook to give someone access to your personal information and they took it and handed it off to a third party, what is Facebook supposed to do about that? What can Facebook do about that? What could any website do about that?
I despise Facebook, but I really don't understand this whole Cambridge Analytica thing. There doesn't appear to be an endgame for those criticizing Facebook over the ordeal. Of all the despicable things Facebook has done, why is this the one that everyone clings to?
> My friend allowed the app to ask Facebook information about me.
Unless there's a major security vulnerability, you can only delegate access to data you have access to yourself. So your friend did the equivalent of giving Cambridge Analytica your data - the technical implementation of it (as to whether CA got the data off your friend's phone or from Facebook directly) doesn't really change the outcome.
How is this any different than FB asking for access to your private contacts, to "help" you find them on FB? What if one of your friends isn't on FB, and doesn't want FB to have their info? Your friend didn't give you permission to give FB their private contact information, i.e. phone number. FB then goes and makes a shadow profile based on that info you supplied without permission, and updates it any time you mention or tag said friend, whether in a text or photo.
As I understand, it was a browser extension which scraped data off the Facebook pages visited by users. There is no way Facebook could reasonably detect or combat that.
The same argument could be said today about your web browser.
I wrote this comment for people in this conversation to see. Yet you allowed Google Chrome access to the comment. You let your adblocker see it. You let lots of software companies scrape it. You shared the data with a wider audience than I intended.
Sure, the argument doesn't hold much water on the public internet. But now consider HN was an invite-only forum, in fact an invite-only forum just like my facebook page...
You're making the same argument. Public comments are fair game and there's no expectation of privacy. A private or invite-only forum, or a network like FB with complex privacy controls, is another story entirely.
I think a lot of people are forgetting that you were also able to get tons of data on a user's friends, just from that user accepting. No consent on the friends part. If you and I were FB friends and I accepted one of those requests, CA now also knows your profile info and likes.
What you said. I was starting to read the like 100 comments about how people clicked yes and so be it, but the entire time I was thinking, wait, no, that's not what happened. It's what you just said. Amazing how a little time leads to revisionist history for these people defending facebook.
There’s no thousand page EULA. Just a screen that says “do you want to allow this app to access your profile data and your friend list” and they clicked yes.
Absolutely, but if I remember correctly Cambridge Analytica is no more and the law doesn't seem to be going after whoever was behind the company.
> Do you think if they knew the full scope of Cambridge Analytica's work they would've allowed them to access the data?
Honestly? I'm not sure - a lot of people already dismiss privacy concerns and ad tracking as "I've got nothing to hide" or "it's just ads, no big deal". Unfortunately I wouldn't be surprised if people opted in even if CA was fully transparent with their intentions.
However, CA even broke Facebook's API terms of use, so at least if CA was transparent Facebook wouldn't have allowed them API access to begin with (though I'm sure they would've worked around that, with malicious apps/browser extensions or just asking for raw Facebook credentials, bypassing the API completely).
If I tell you to go on Facebook and take screenshots of your friends’ profiles and send them to me and then I do dubious things using that information, whose fault is it? I’d split it between you and me. Facebook is not at fault at all.
Before the Cambridge Analytica thing, everyone was saying that Facebook was evil because they were a walled garden. So they created an API, and now everyone is saying they're evil because they created an API. Go figure.
As Mr. Zuckerberg stated personally, the main product of Facebook is ads. That means everything around ads and user tracking is their core business. Not whatever users do, but whatever users see and click when an ad is up. Making privacy settings more restrictive or more convenient goes directly against Facebook's business model, as does transparency about data brokerage and third parties. Saying that basically the users are to blame because they are sharing their data completely misses the motivation behind Facebook's pretense that it's all about users, social media and making the world better. In reality it is about collecting as much data and selling as many ads as possible, all while blatantly violating their users' privacy.
The Cambridge Analytica scandal has nothing to do with Facebook's business model and Facebook did not gain anything from this.
To the best of my knowledge CA abused a free feature (API access) designed for legitimate usage to collect user data for nefarious purposes. Facebook was acting as a neutral carrier here and respected the user's intention of sharing their data with CA.
> The Cambridge Analytica scandal has nothing to do with Facebook's business model and Facebook did not gain anything from this.
Hilarious.
The Cambridge Analytica scandal is in regards to a nefarious 3rd party using Facebook's pipes in a way that violated Facebook's policy. There are emails to show that Facebook knew about this, and did not act to remove CA's access to said pipes (no, [1] there are literally emails about this that were found during discovery).
You even mentioned it
> To the best of my knowledge CA abused a free feature (API access) designed for legitimate usage to collect user data for nefarious purposes
Yes. They abused it. They abused it and were allowed to continue abusing it because Facebook's business model is data, not user privacy. You're blaming CA for what FB quite literally (and I mean quite literally) allowed them to do, told them to stop doing, then turned a blind eye when CA _continued_ to do it.
Again. Not my opinion. It's a matter of factual record.
Thanks for the links. After having read the entire thread (it’s not long), a few things stand out:
1) This was not treated as a high pri issue at all until the story broke out in the media (compare the frequency of messages before and after)
2) There was a mad scramble to understand where exactly CA got their data from. It was far from obvious and there was no access to close as CA didn’t even have any app or a relationship with FB.
> Facebook is allowing people to grant access to their data to third-parties and Facebook should somehow be faulted for that.
It's not clear to me that they shouldn't be faulted. How many people read terms and conditions? How many people lack the technological literacy to understand what it means to share their facebook data with third-parties?
It seems to me like we should make our systems robust to the average user, especially systems as big as Facebook.
> Facebook should not be forced to somehow be the arbiter of this.
I kind of agree, but at the same time Facebook has some moral responsibility as the holder of the data. Perhaps it's not on Facebook to implement regulatory mechanisms, but if these mechanisms are implemented (e.g. by the "State") then it should probably be on Facebook's dime.
> It's not clear to me that they shouldn't be faulted. How many people read terms and conditions? How many people lack the technological literacy to understand what it means to share their facebook data with third-parties?
I'm not trying to argue here, but I see this argument pop up often when discussing big tech and user data. I'm curious why the tone is so different when it comes to mortgages or auto loans, for example. It seems society is content with the notion that I must do my own due diligence when buying a home, but for whatever reason that responsibility seems to slide away when I'm dealing with social media. Why is that?
To clarify, I'm being sincere, not argumentative. I'm not a normal internet user and never have been. I haven't been on social media for a decade, use all the ad blockers, and so on.
Well for one, I expect the bank not to have snuck in a clause that allows them to unilaterally change the contract without providing me with a physical copy thereof. I expect that if the bank tried that, the courts would rule it unconscionable. I expect that if I modify terms in the contract, such as typos, the representative will OK them and accept the modification. I expect that they will keep a physical copy of the license or a digital scan thereof so they can track which version I received and whether such modifications were made, and to keep track of the witness. I expect the bank to insist on having a translator present since I reside in a land where my proficiency in the language is suspect. I expect that there is a meaningful exchange embodied by the contract and that there is some sense in which I can seek redress if the bank fails to provide the agreed-upon funds.
If you are sincere, you should compare people trying to make payday loans illegal with the fight against Facebook's EULA. Banks have always shown much more diligence in obtaining meaningful consent for a mortgage than Facebook even comes close to. With payday loans, a predatory lender is trying to rope someone into a contract and then wants to use the courts to extract much more money out of individuals, knowing that the individual may be so far in debt that they have no more disposable income. There are many people who would argue that such things should be illegal (or already are), in the same vein as loan sharking. Once upon a time people signed contracts whereupon they became slaves or indentured. Clearly not all contractual terms should be honored by a just society. Facebook is seen by some as closer to the predatory lender than a typical mortgage provider. People are free to disagree with where to place the line, but clearly society places the line somewhere on the acceptability of contractual terms.
Well, for one afaik mortgages and auto loans are regulated and have been for a long time. They're also simpler to understand for _most_ people, since most people interact with money daily, they see their income, they understand interest, etc.
Most people don't need to understand how banks trade mortgages and such because in all likelihood, well except in 2008, it will only affect them marginally.
I suspect that most people similarly only have a very superficial functional understanding of data. They see other people's data daily, they understand that if they post something, other people will see it.
Where this differs is that, at scale data is not well regulated, and can affect users much more directly and non-marginally, in non-obvious ways. Targeted ads, political manipulation, identity theft, etc.
> It seems society is content with the notion that I must do my own due diligence when buying a home, but for whatever reason that responsibility seems to slide away when I'm dealing with social media. Why is that?
Counterpoint: I'm not content with opaque mortgage or auto loan terms (although my last auto loan was actually quite simple). I don't think you can generalize how "society" feels in this way.
This also happens to be a recent hot button issue. If you talked to someone in, oh say, late 2008, mortgages might've been on the front of their mind.
> I'm not trying to argue here, but I see this argument pop up often when discussing big tech and user data. I'm curious why the tone is so different when it comes to mortgages or auto loans, for example. It seems society is content with the notion that I must do my own due diligence when buying a home, but for whatever reason that responsibility seems to slide away when I'm dealing with social media. Why is that?
That's a legitimate and good question.
And the answer is that the mortgage industry is highly regulated, and so there are things that the federal government demands (if we're talking about the U.S.) and additionally that states demand. So if you sign mortgage paperwork in my state, for example, there are Riders that have to be provided. Same with signing up for a credit card. One-page information sheets MANDATED by the government that the consumer gets to see, before having to sign contracts.
You don't need to be a lawyer to not get screwed.
Another good example. Residential leases are long, technical contracts. However, the state law overrides what's in it. So even if someone signs something that violates their rights, it won't apply. In some cases, the landlord can be sued for damages. Additionally, Riders are often mandated by the state that the landlord should provide, that summarizes rights.
The problem is that individual user data privacy is NOT regulated.
The lack of regulations and laws is why we're all arguing right now.
Reading the EULA and T&C are irrelevant, because what CA did wrong was violate the EULA and T&C.
Anyway, you are making an argument for regulations requiring disclosures, like in mortgages and nutrition labels, not an argument that Facebook is at fault for letting users interact with 3rd parties, which could have been a browser extension that scraped the same data that the FB API provides.
True, but until these regulations exists, I'm not sure why we shouldn't hold Facebook accountable, at least morally -- and in the example you give, why we shouldn't hold browser extension providers accountable.
Why should Facebook be accountable here instead of the party that acted maliciously? Facebook acted as a neutral carrier. Users told them to share their data with CA, and they did.
CA lied to both their users and even to Facebook itself (I think they breached FB API's terms and conditions), why should it be Facebook that's at fault?
In your argument, if a car is used in a robbery, should the car dealership or manufacturer be also at fault, even though the manufacturer had no idea this particular customer was going to use the car for malicious purposes?
If a car has parts that are easily hacked/broken into, leading to injury to its user while the user "chose" to drive, shouldn't the manufacturer be at fault? This is what happened with CA and Facebook's API.
I'm not saying CA shouldn't be held accountable, I'm saying Facebook also has part of the blame.
There's plenty of manipulation out there (including by Facebook themselves), but I wouldn't consider an OAuth consent prompt as manipulation?
Cambridge Analytica may have lied about their intentions, but when requesting access, the Facebook consent prompt is very clear about what data would be shared. Why should Facebook be on the hook for CA's lies?
Being able to share your friends' data, or even your own, so coarsely was a bad design. If it were me designing an API for Facebook apps, you'd only be able to present user data to users from packaged queries, none of the user data would be directly accessible to the app maker, and monetization would only be through ad display or in-app purchases. It'd be a much less popular API since you can't extract user data, but IMO it's the only sensible form such an API can take.
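To make the "packaged queries" idea concrete, here is a minimal sketch (all names hypothetical, Python just for illustration): the app can only name a pre-registered, platform-reviewed query, the platform runs it server-side against its own data, and the app gets back nothing but display-ready text - raw profile fields and friend lists never cross the API boundary.

    # Hypothetical "packaged query" API: apps reference reviewed queries by
    # name; only rendered output (never raw user/friend data) is returned.
    from dataclasses import dataclass

    @dataclass
    class Interest:
        name: str
        popularity_among_friends: int

    # Platform-side stand-in for the real social graph.
    _USER_INTERESTS = {
        "alice": [Interest("cheesecake", 12), Interest("hiking", 3)],
    }

    def _top_mutual_interest(viewer: str) -> str:
        interests = _USER_INTERESTS.get(viewer, [])
        if not interests:
            return "No data available"
        top = max(interests, key=lambda i: i.popularity_among_friends)
        return f"Your friends' favourite topic: {top.name}"

    # The only queries an app may invoke; adding one would require platform review.
    PACKAGED_QUERIES = {"top_mutual_interest": _top_mutual_interest}

    def render_for_user(app_id: str, query_name: str, viewer: str) -> str:
        """Run a packaged query server-side; the app receives only rendered text."""
        return PACKAGED_QUERIES[query_name](viewer)

    print(render_for_user("quiz-app", "top_mutual_interest", "alice"))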
"Facebook is a neutral carrier here, and they acted on behalf of the user..."
I think that argument is fatally flawed.
The issue with all these user data cases is that the license between the user and Facebook does not restrict how Facebook can use the data. Not simply the data the user submits[1] but the data that Facebook involuntarily collects from the user.
1. This always seems to be how outside observers define "the user's data". They believe it is the data (e.g., photos, etc.) that the user has submitted to Facebook. Facebook may be an optimal place to store photos, etc. Regardless of whether that is true, that is not what is at issue. The issue is what use Facebook is permitted. Can it use the data a user submits any way that it chooses? Yes, it can. Can Facebook collect data on the user, based on their usage and other sources? Yes, they can. The user has no control over how Facebook uses that data. This is the "loss of control".
It is not always concerned with "transfers" of data, necessarily. It is concerned with how Facebook may use the data it receives from users (knowingly/voluntarily or unknowingly/involuntarily) to support its business. Users pay nothing to Facebook, so obviously the data, Facebook's primary asset, is going to be used in ways that generate revenue for Facebook. Users have no say in the decisions over how the data will be used, yet it is "their data" (or data about them). Can it be used in academic research? Yes, it can. Market research? Yes. Users have no control over these uses. It is not simply a matter of changing some setting, which Facebook is constantly fiddling with. It is a matter of the license the user has with Facebook. The user has no enforceable rights through that license to control how the data is used.
IIRC the issue with CA was that a friend could share my data without me knowing. It could have been data that I uploaded and not my friend (and maybe the friend didn't even know the existence of this data).
I remember Facebook permissions for apps in the past were quite lax. I even wonder whether there was data accessible to apps that was not visible from the browser interface.
Just because there is no law specifying exactly what happened it doesn't mean something is legal. Like there is no law that says it's illegal to hit people wearing yellow hats, but if you did that you'd certainly get in trouble. The same way you cannot agree for someone to murder you. Just because users agreed it means nothing.
>Cambridge Analytica used the Facebook API to ask users to share data about them & their friends. Stupid users agreed to that.
Stupid users aren't allowed to say yes to sharing personal information about their friends. Only the friends are. That is how it should be. An API should never give access to any user's data other than the user who agreed. Luckily, because of the GDPR they now can't, though the same rules already existed in most of the EU, so at least some of it was always illegal. IMO the rules should be even tighter and the punishments harsher.
> Facebook is a neutral carrier here, and they acted on behalf of the user - he decided that Cambridge Analytica should have had access to his data.
You say that like it's okay that anyone else can press a button and then share my data. It's not my friends' to share, I'd say fb is at fault here for providing this to third parties.
This is on the level of "poor people should just stop being poor" or "she was raped because she didn't dress decently enough"
On one side you have a mega corp doing all it can to harvest as much data as possible, backed by behavioural analysis, A/B testing, and all kinds of studies on how to trick people's brains into feeding the algorithm with more data points. On the other hand you have "stupid users".
The role of governments, at least in Europe, is, in part and in theory, to protect "stupid" people from mega corp sharks and other predators
Yeah, this whole "people are stupid" argument is incredibly dismissive. Not surprising of tech people's giant ego.
Do people really want their doctor to invest several hours a week understanding the latest shenanigans and dangers of technology? Or should we just let our doctors be good doctors? Replace "doctor" by any other non-tech profession you respect, and you get my point.
This isn't even about protecting "stupid" people, it's about letting people be useful members of society even if they're not tech experts.
I absolutely think regulation should be in place to protect against people/companies acting maliciously (thus preventing a Cambridge Analytica from existing in the first place), and I guess the reason people were so trusting is because they do expect such regulations to exist.
However, I disagree with modifying/removing a legitimate feature (API access) just because it can be abused. Otherwise, why not go further and also ban knives (or go after supermarkets that sell them) to solve knife crime?
> Otherwise, why not go further and also ban knives (or go after supermarkets that sell them) to solve knife crime?
That sounds like a very slippery argument, but no, the way to solve knife crime is not to ban knives, it's to understand and "fix" the reasons (systems) why people commit knife crimes in the first place (e.g. poverty, poor mental health resources).
In this case, we need to understand how the systems (e.g. the API) allowed for this to happen, and perhaps yes, get rid of it (or perhaps some other solution, I don't know, but beyond CA it seems that even a "legitimate" use of the API can easily cause harm).
> On one side you have a mega corp doing all it can to harvest as much data as possible, backed by behavioural analysis, A/B testing, and all kinds of studies on how to trick people's brains into feeding the algorithm with more data points. On the other hand you have "stupid users".
This lawsuit is explicitly not about the big tech giant's (Facebook) data processing (which I agree is a problem).
The lawsuit is about how Facebook should've somehow been able to predict the future malicious actions of a company and prevent users from sharing their data with them against their own will (again nobody's data was shared without consent - people explicitly opted to share their data - which includes basic info about their friends, which I'd argue is their data too - with Cambridge Analytica).
If there's a "megacorp shark" here, it's Cambridge Analytica and not Facebook.
> again nobody's data was shared without consent - people explicitly opted to share their data - which includes basic info about their friends, which I'd argue is their data too - with Cambridge Analytica
You frame this as if it were a logical train of thought, but it isn't to me (and apparently I'm not the only one). You're not the one to decide what the UK courts determine to be acceptable or not, and imho your reasoning is rotten from the get-go, so any conclusions you draw from it are equally invalid. "Consent" isn't a free pass; I can give you explicit consent to kill me and it would still be illegal for you to do it in every country I know of.
Facebook has a systemic issue with the way they harvest, handle, share and monetise their users' data, this case is just a drop in the ocean of sketchy things they've done, and I'm not going to cry for them when they finally get a slap on the wrist.
If we agree with your reasoning it would mean that simply getting a new phone would involve me calling/texting everyone in my contacts list to ask for their consent for me to enter their numbers on my new phone since I'm potentially sharing their details with a third-party.
> Facebook has a systemic issue with the way they harvest, handle, share and monetise their users' data, this case is just a drop in the ocean of sketchy things they've done
How did Facebook benefit from this? Facebook got duped just like everyone else. CA did not disclose their intentions when they got access to the Facebook API because they wouldn't have got that access otherwise as their actions were against the FB API terms of use.
There's a bit of a witch hunt going on about Facebook, and while I despise that company and want to see it gone too, this is just an outraged mob clutching at straws. They can't go after Cambridge Analytica nor the people behind it (and I guess can't be bothered to vote/lobby for a legislative change so that those people can be prosecuted) so they're venting their anger on the next best thing: Facebook, even though they're a neutral party in this case.
> If we agree with your reasoning it would mean that simply getting a new phone would involve me calling/texting everyone in my contacts list to ask for their consent for me to enter their numbers on my new phone since I'm potentially sharing their details with a third-party.
But again, you're missing the forest for the trees. You discuss the symptoms while I discuss the root cause. A phone shouldn't let a third party randomly siphon arbitrary data about you and your friends without making public the full scope of their project. Consent isn't consent if you're being tricked into giving it for nefarious use.
> Cambridge Analytica used the Facebook API to ask users to share data about them & their friends
This may be true technically, but if it was not made clear to the users what they were sharing and what their data would be used for then the users cannot legitimately agree.
Facebook, as the platform owner, can force organisations to be clear about what will be disclosed and what it will be used for.
This is one of the goals of European GDPR and personally I don't think it needs to hinder legitimate usage. It just forces you to consider why you're collecting data, have a legitimate reason and disclose it to the user. Without that, companies have been hoovering up any user data they can get their hands on. User data should be a liability, rather than an asset.
> if it was not made clear to the users what they were sharing
The Facebook OAuth consent prompt is pretty clear.
> what their data would be used for
This is an issue with Cambridge Analytica, not Facebook. Similarly, if a seller on eBay asks you to wire them money and then scams you, why should the bank be liable?
Furthermore, Cambridge Analytica's actions apparently broke the Facebook API terms of use. There's nothing Facebook could've done here short of predicting the future and pre-emptively denying access to companies that would go on to break the rules.
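For what it's worth, here's a minimal Python sketch of what sits behind that consent prompt. The app ID, redirect URI, and scope names below are placeholders (the exact permissions available have changed across Graph API versions), but the point stands: whatever the app lists in the scope parameter is what Facebook enumerates in the dialog before the user clicks through.

    from urllib.parse import urlencode

    # Placeholder app credentials - not real values.
    APP_ID = "1234567890"
    REDIRECT_URI = "https://example-quiz-app.test/fb-callback"

    # The permissions ("scopes") the app asks for. These are what the
    # consent dialog shows the user before an access token is issued.
    # Scope names here are illustrative only.
    SCOPES = ["public_profile", "email", "user_friends"]

    params = {
        "client_id": APP_ID,
        "redirect_uri": REDIRECT_URI,
        "response_type": "code",
        "scope": ",".join(SCOPES),
    }

    # The user is sent to this URL; approving the dialog is the
    # "explicit consent" everyone in this thread is arguing about.
    consent_url = "https://www.facebook.com/dialog/oauth?" + urlencode(params)
    print(consent_url)

Nothing in that exchange tells Facebook what the app will later do with the data once the user approves - which is exactly the "predicting the future" problem above.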
To me the overarching point in all discussions about FB (or any company that peddles ad-tech) is that they should not exist as a business in the first place. Targeted advertising, as well as UX dark patterns that lead to addiction and radicalization of vulnerable groups, should be illegal. Problem solved.
You're absolutely right. Holding companies responsible for bad outcomes due to their products and practices would be very bad for our corporatocracy. So don't worry. I'm quite sure we're not going to start trifling with such nonsense now.
Here's a counter-argument: API access is just one aspect of Facebook selling user data. As far as I know, Facebook will license its data via customized methods (e.g. a custom API) if you pay a fee for access. There are even vendors who provide Facebook data that isn't available via anything you'd see in the public documentation. Governments around the world use this to get access to user data from Facebook.
To me, that is the real definition of selling user data. I think Facebook should be held responsible for that and also in general for being a sneaky leech of user data.
Facebook does not "sell" data and has absolutely nothing to gain by selling it or sharing it with Cambridge Analytica.
CA abused a legitimate feature that allowed users to delegate access to their accounts. My comment is about how this lawsuit will set a bad precedent and restrict API access even further (hindering legitimate usage) without doing much to prevent abuse (because people can just share their credentials instead).
I like how we have to preface our opinions with this now. I think if you're on HN it's a given at this point.
I would agree with you that there are a lot of stupid people on the site and they will still find a way to do something stupid. But I despise harsh regulation and I don't think that's necessarily the solution. However, when the stakes are so high (elections) I think there is something to be said for forcing Facebook to be accountable. Akin to a parent being responsible for a child, you shouldn't let them get into a situation where they can be that badly behaved.
The stakes change nothing about who is actually responsible. If a terrorist steals your car and uses it for an attack does that mean it is right to hold you accountable because the matter is so serious?
Facebook is not your parent. That logic, that others need to be "held accountable" and that voters are children who cannot be trusted to make up their own minds is far more dangerous to democracy than the Cambridge Analytica ratfucking.
Sorry, should have made it clearer that Cambridge Analytica are the child in my analogy, but I think your point still stands.
I don't think voters are children and I agree with you that that is dangerous thinking, but I do think that showing advertisements based on data they didn't know they gave, in order to change their vote, is not the way to allow them to make their own decisions.
I can imagine a lawsuit from the opposite side of the argument: it is these APIs that allow newcomers to challenge incumbent social networks, and if the result of the lawsuit rules these APIs out, growing a competitor to Facebook or Twitter might be a lot harder.
Recently I've felt the benefit of this, when sharing my Twitter following with Clubhouse. Clubhouse can grow and challenge other networks because these APIs exist. (We can argue about the degree to which sharing your data should be allowed, but setting an overly restrictive precedent is a real possibility)
I agree, but part of the answer, I think, is more finely grained permission requests, i.e. distinguishing viewing X's data vs. viewing data as X. That still doesn't solve the issue that the typical person doesn't actually read permission requests, but it should still provide better security.
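To make that distinction concrete, here's a toy permission model in Python. The scope names are entirely made up (they're not Facebook's real permissions); the point is only to show how "read X's own data" and "read data as X" could be separate grants, with friends' data only reachable through the latter.

    from enum import Enum, auto

    class Scope(Enum):
        READ_OWN_PROFILE = auto()  # view X's own data (photos, likes, ...)
        READ_AS_USER = auto()      # view data *as* X, incl. what friends shared with X
        POST_AS_USER = auto()      # act on X's behalf

    def data_visible(granted):
        """Which classes of data a third-party app can see under a grant."""
        visible = []
        if Scope.READ_OWN_PROFILE in granted:
            visible.append("the user's own profile data")
        if Scope.READ_AS_USER in granted:
            visible.append("whatever the user's friends have shared with them")
        return visible

    # A coarse prompt bundles both; a finer-grained prompt could ask for
    # READ_OWN_PROFILE alone and keep friends' data out of reach.
    print(data_visible({Scope.READ_OWN_PROFILE}))
    print(data_visible({Scope.READ_OWN_PROFILE, Scope.READ_AS_USER}))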
This submitter bmcn2020 spams this site with just enough bad articles to cover up all the other links to his own website. I don't understand why they allow that here.
The HN Guidelines[1] say "It's ok to post your own stuff occasionally, but the primary use of the site should be for curiosity." They may not be following the intent of the rule, but it looks like they are following the rule as it's written.
Touché, valid point. But someone's gotta break the Matrix once in a while... at the end of the day, this is just a text-based website, and I'm hammering keys. It wasn't like I was walking down the street yelling obscenities. It's OK to be aggressive sometimes. Everything in moderation.
It's okay to be loud sometimes. But in a large room? It's the group's sense of 'sometimes' that matters, not the individual's. In a big group it's always somebody's "sometime", unless you divide by the population.
Certainly. But there's also a responsibility on the group members to understand what kind of topics and conversations they're about to enter. Facebook is divisive. It's a stressful topic. The crowd shouldn't click on a link about the 100th crime Facebook committed if the crowd is not ready for the gunfire. The crowd inherited an imperfect world, and it's okay to take a stand, because if the crowd doesn't stand for something, they'll fall for anything. In certain cultures, "swear words" don't even exist. If certain words deeply offend the crowd, they shouldn't blame other people; they should turn inward, use introspection, and ask themselves why they're so bothered by certain words written on a webpage.

It's not like I hate Mark Zuckerberg with every fiber of my being. I'm sure there are some cool things about him. I'd love to sit down and have him show me his love for the video game Civilization, or what kinds of meat he likes cooking, or how he designed his smart home. But the topic at hand is business and legality. And frankly, I should be able to take as many shots as I want at him in the public sphere. Just because he's not personally inflicting violence on other humans does not mean there aren't billions of lives at stake here. When people enter the ring, political correctness is entirely useless. It's not personal, it's "business".
“If I don’t scream, if I don’t say something, then no one’s going to say anything.”
“I care. I care about everything. Sometimes not giving a f#%k is caring the most.”
“The truth that hurts is the same truth that heals.”
Disagreed - this is the result of stupid users. If the APIs are gone they will just be entering their Facebook credentials directly (which would leak way more data than what relatively limited API access allows).
Someone who has view access to my profile may view my data, and they might also extract that information with the API - however, they do not have any right to give permission on my behalf to someone else (e.g. Cambridge Analytica); that would require a power of attorney or something like that.
My friend might technically send that information to Cambridge Analytica, but my friend can't give them permission to use it, CA would be required to acknowledge that they don't have the legal permission to use that data and discard it. My friend can tell Facebook "I permit you to give that information to Cambridge Analytica" but Facebook is not allowed to act based on that "permission" since it's not something my friend can permit.
> My friend might technically send that information to Cambridge Analytica, but my friend can't give them permission to use it, CA would be required to acknowledge that they don't have the legal permission to use that data and discard it.
It's pretty well accepted that Cambridge Analytica acted unethically, and potentially even unlawfully.
> My friend can tell Facebook "I permit you to give that information to Cambridge Analytica" but Facebook is not allowed to act based on that "permission" since it's not something my friend can permit.
This seems like an unnecessary technicality - if CA wasn't allowed to access your data directly they would just proxy it through the original user's device via an app or something. The end result would be the same.
I grant API access to my friend. That is a direct relationship.
I don't grant API access to people that my friend grants API access to.
If one grant allowed for another grant, by that logic you could chain all the way down to any connected node which is clearly not a desirable model.
Data brokers are trying to make it seem like me adding a friend is somehow not a grant so that they can "plus one" on their reach. But it is a grant. It is literally me granting my friend access to my data. Just because the company doesn't call it a grant and doesn't treat it like one on a technical level doesn't change the fact that I have granted my friend access to some data.
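As a toy illustration of why chained grants would be an undesirable model (the friend graph below is made up, and this is not how Facebook's permissions actually work): if a grant to a friend implied a grant to that friend's friends, a single grant would transitively expose your data to everyone in your connected component.

    from collections import deque

    # Toy undirected friend graph: each person has granted their direct
    # friends access to their data.
    friends = {
        "alice": {"bob"},
        "bob": {"alice", "carol"},
        "carol": {"bob", "dave"},
        "dave": {"carol"},
    }

    def reachable_if_grants_chain(start):
        """Who could see `start`'s data if grants chained transitively."""
        seen = {start}
        queue = deque([start])
        while queue:
            person = queue.popleft()
            for friend in friends.get(person, set()):
                if friend not in seen:
                    seen.add(friend)
                    queue.append(friend)
        return seen - {start}

    # Alice granted only Bob, but with chaining her data reaches Carol
    # and Dave too.
    print(reachable_if_grants_chain("alice"))  # {'bob', 'carol', 'dave'}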
API access or not is just a technicality. You grant your friend access to this data. Even if API access was restricted, malicious parties would just get your friend to install malware or give out their Facebook credentials directly (thus bypassing the API access restriction).
Either you trust your friend with that data or you don't. Anything else is just playing a game of whack-a-mole which may just give people a false sense of security.
I think it's a bit harsh to call users stupid. When we move away from HN, we realize how tech-naive a layman can be. We haven't seen something of this sort happen at this scale many times before. A lot of effort is going into privacy now, way more than before.
People ARE stupid, this is a known fact; this is one reason why consumer and privacy protection laws are a thing, why two-factor authentication is a thing, why e-mail verification of logins is a thing, why you can't just start trading in stocks or buying alcohol without any checks, and so on.
Companies protect their users from themselves; they HAVE to, in case they (the users) shoot themselves in the foot. And consumers have a reasonable expectation, regardless of Facebook's terms & conditions, that their data isn't shared with third parties - or that they at least get asked when it happens, instead of being pointed to a tl;dr of T's & C's.
Plenty of people do not have the literacy level to understand terms and conditions [1]. This is a worrying trend, but it's something that companies like Facebook should be (and are) aware of - people don't read terms and conditions, and people don't understand them.
Roads are not really open; there are a ton of regulations around roads, what kinds of cars can be sold for public road use, and who can use the public roadways as a driver, so your analogy fails on its face there.
Besides that, we also do not sue a car manufacturer if a user of their car goes 350 km/h on the highway, crashes, and dies.
> there are a ton of regulations around roads, what kinds of cars can be sold for public road use, and who can use the public roadways as a driver, so your analogy fails on its face there.
"open" api doesn't mean "do whatever you want", you're kind of making my point and the point of the article. There is nothing bad about Facebook going to court over that.
This isn't the result of open api, it's the result of badly designed and badly regulated open api.
If Facebook can't maintain a business model without selling their well-trained and dopamine-addicted userbase like a commodity, then their business model does not deserve to be maintained.
Facebook did not "sell" anything in this case. Facebook is a neutral carrier that got duped like everyone else. A malicious company used Facebook's API to ask for access to people's data and certain data about their friends, and people stupidly said yes. Should we now fault Facebook for complying with their user's wishes?
That very plainly was not the users' wishes. The users' wishes were "go away, window, I want to see my feed, yes whatever, click."
That was something that ill-informed users were effectively tricked into doing by a malicious third party who intentionally fogged up the information they gave to those users.
The fact that Facebook gathered the data to begin with is already a huge problem. If they need to do that to exist, they shouldn't exist, and this is another tiny straw on top of the huge pile of reasons why that business model shouldn't exist.
I don't care how responsible you are with all that data, you shouldn't be gathering it.
> The users' wishes were "go away, window, I want to see my feed, yes whatever, click."
Facebook will never ask you out of the blue whether you want to share your data with Cambridge Analytica. They have nothing to gain from it.
What happened is that idiots clicked on some kind of personality test (or similar) shared by one of their equally-stupid friends, the consent prompt appeared as it should (and it is very clear about what data will be shared), and they clicked yes. There are arguments here that these links should've been identified/marked as malicious and removed to begin with, but that's a separate issue.
Removing API access because some people are dumb will lead to lots of collateral damage (including towards those same idiots who expect to be able to "Login with Facebook" everywhere and are suddenly locked out of all these accounts), and will not solve the problem - malicious parties will just start asking for raw Facebook credentials or to install malicious apps/browser extensions to work around the lack of API access.
> The fact that Facebook gathered the data to begin with is already a huge problem
Which data are we talking about here? My understanding is that the data obtained by CA is data that the user explicitly put on their profile (such as photos, etc) and "friends" relationships. Ad targeting data (which is the real issue when it comes to Facebook's data collection) was not included.
---
My worry here (and the reason for the relatively harsh language) is that this lawsuit will set a precedent and give arguments for platforms to restrict API access even more and hurt potential competition as well as impose annoying & unnecessary barriers to users who know what they're doing. We already have this issue with banking where some banks insist on using a hardware 2FA device to protect against scams, and it's not really effective because people are stupid enough to use the 2FA device over the phone with a scammer despite the bold warnings about not using it over the phone printed right on the device itself.