Honestly, I think the main problem with the privacy debate is that most people have no imagination about how their data can be used against them. The response you often get, even from people with years of experience building software, is the now-classic: "I have nothing to hide".
1. If an insurance company finds out that you’re more predisposed to a disease, they will charge you or your employer a higher rate
2. The federal government has ~20,000 laws, more than it can even count. Can you tell me that you haven’t broken a single one? Could a creative prosecutor find something you’re guilty of if they had enough information on you?
3. In 2012 (edit: 2010), Target sent baby-supply ads to a teenage girl. Her father angrily asked the store manager why they were sending his daughter ads for baby supplies; when he found out his daughter was pregnant, he apologized to the manager. Target had a good idea the girl was pregnant because of the items she bought in their store. How would you like it if corporations knew details about your family before you do? (Pro tip: they already know things you don’t know)
4. Organizations are not monolithic: they’re full of people who are constantly moving around, and organizational priorities change. Any restrictions on use and protections will eventually be ignored or rewritten in privacy policies, which companies have complete discretion to change.
5. The world is filled with clever and unscrupulous people; they will never stop finding ways to use your data in ways we can’t imagine, and not all of them will be in your favor.
I think the problem with your argument is that most people don't think those things would apply to them, or they wouldn't care if they did. And in their defense, at least in the US, most of the examples you give are hypotheticals. Possible rejoinders:
1. In the US, the Genetic Information Nondiscrimination Act already prohibits this. For other things, like "are you a skydiver", the insurance company already has the right to ask you that.
2. I mean, sure, I'm sure I've broken some laws, but I can't think of any examples of overly aggressive prosecutions of people who weren't already guilty of bad shit.
3. I mean, people are already used to creepy hypertargeted Facebook ads; most of them don't really care anymore.
4. I mean sure, but how does this affect me?
5. This just sounds like a hand-wavy slippery slope argument.
I think the issue for most people in the US is that they still fundamentally believe that the justice system is generally "just", at least with respect to themselves, though that is obviously changing. Contrast that with Germany, where people are vividly aware of what happens when the government uses personal data nefariously.
Your response to point 2 made me gasp. Just watch a few episodes of any of the many true crime false conviction series on Netflix. Unscrupulous prosecutors are not rare, and it does not matter who you are—if you get caught in the crosshairs you could go to prison for a crime you didn’t commit, or be coerced to provide false witness against a friend or loved one for a crime they didn’t commit.
I watched that series and if we were in a ubiquitous surveillance state few if any of those people would have been convicted. They would have had video/phone tracking to verify their alibi and/or video of the crime showing they didn't do it.
That's making a big assumption that everyone has equal access to surveillance data and that most if not all players are fair-mindedly searching for the truth.
If you assume the other way, pervasive surveillance makes a frame-up easier.
> Police Chief Nick Metz said he wants to release body camera footage from the August death of Elijah McClain to lend more transparency to the investigation. But three months later, police still have it — advised by the district attorney’s office to keep it in house.
In many US jurisdictions, the decision over what police body cam footage to release is left to the police themselves.
In the UK as well, there have been several cases where the police chose to disclose only “relevant” evidence; when the existence of additional evidence becomes known, the case collapses.
I said "the accused", not everyone. If you are charged with a crime and if that footage could be evidence then it is required that the state turn it over to the accused.
For many people in the U.S., overzealous prosecutors are their allies so long as the people they are prosecuting who are "already guilty of bad shit" belong to undesirable demographics. Prosecutors and their selective enforcement efforts are central to suppressing minority votes through felony disenfranchisement, for instance in Florida.
I wonder how the arguments against surveillance capitalism poll across targeted minorities vs. the general population. I speculate that sentiments diverge in the same direction as sentiments towards the police.
I'm sure there are bad prosecutors, but Netflix is hardly a source to look into something like this. They tend to make very compelling, but obviously biased, documentaries.
This is a great example, though, of why most people won't care. Most people will look at that and think "Swartz hacked into a system, stole data, and distributed it for free. Sure, the prosecution may have been harsh, but I don't go around hacking into systems."
To be clear, I don't agree with that, or any of the 5 points I put above. But I still think it's important to emphasize that 90+ percent of people will look at that and similar cases and think "that situation doesn't apply to me."
> 1. In the US, the Genetic Information Nondiscrimination Act already prohibits this.
That only applies to genetic data. There are several other ways for your insurer to find out everything wrong with you, especially since many medical practices sell your data. It also covers only health insurance: genetic information can still be used to discriminate in disability insurance, long-term care insurance, life insurance, etc.
>I think the issue for most people in the US is that they still fundamentally believe that the justice system is generally "just", at least with respect to themselves, though that is obviously changing.
I think you highlighted a bigger core issue: people think that the past was somehow better and that what they're seeing in their time is unique. The justice system isn't changing; it's been like this for at least a century, probably longer. America hasn't just elected its first rabble-rousing demagogue for a president, with the country now going to shit; this has happened before. Multiple times.
This is worrisome to me because it causes emotional overreactions to situations that aren't really that unprecedented. We elected Donald Trump, and obviously we are special so we have to make sure this doesn't happen again. The punchline is that we are ignorant of the past and are unaware that the republic has carried on just fine after previously awful presidents, and without that knowledge, we may make a number of bad decisions that negatively affect the future of this country.
>1. If an insurance company finds out that you’re more predisposed to a disease, they will charge you or your employer a higher rate
I've never really understood this one. Do you want to force insurance companies to spread the costs associated with insuring high-risk individuals across their whole clientele via this information opacity? Why not just use a single-payer system instead of that Rube Goldberg machine at this point?
People would just say "I don't think that would happen" to all of those things. They are not tangible enough. If anything, they would laugh at 3 like they would to any "funny" story.
>If an insurance company finds out that you’re more predisposed to a disease, they will charge you or your employer a higher rate
That makes sense; do you really expect an insurance company not to charge a higher premium to insure someone who is predisposed to a disease?
>The Fed gov’t has ~20,000 laws, it has more than it can count, can you tell me that you haven’t broken a single one? Could a creative prosecutor find something you’re guilty of if they had enough information on you?
Then the issue is with too many laws.
>How would you like it if corporations know details about your family before you do?
Doesn't matter
>The world is filled with clever and unscrupulous people, they will never stop finding ways to use your data in ways we can’t imagine, not all of them will be in your favor.
This will always be the case, with or without surveillance.
Honestly, I wish health insurance companies were more willing to use data for billing. Someone who works to stay fit should be rewarded versus someone who is overweight or does extreme sports. I know some insurers pay for health-monitoring watches and give bonuses, but mine still doesn't.
I don't support insurance billing more for predisposition to disease, but that's already covered under the law so your point 1 is kind of irrelevant.
> Someone who works to stay fit should be rewarded
Rewards to incentivize healthy behavior seems like a good idea.
This study about the impact of an incentive program for preventive care concluded: "Voluntary participation in a patient incentive program was associated with a significantly higher likelihood of receiving preventive care, though receipt of preventive care among those in the program was still lower than ideal."
People who die earlier from poor lifestyles and habits incur a smaller lifetime cost for healthcare, because the vast majority of health expenses occur only when a person lives long enough to become elderly and die from natural causes.
The idea of insurers being willing to pass the savings, if they existed, down to their customers in any significant way is cute, though.
>2. The Fed gov’t has ~20,000 laws, it has more than it can count, can you tell me that you haven’t broken a single one? Could a creative prosecutor find something you’re guilty of if they had enough information on you?
People should read "The Trial" and internalize it (and ponder what historical context prompted Kafka to write it).
1. Isn't this solving the wrong problem? Rather than hiding this information from the insurance company, we can make - and have made - it impermissible for them to use the information as the basis of setting rates.
2. Again, solving the wrong problem. Also, coming to the attention of a prosecutor with a sufficient vendetta to go to various companies to try to dig up dirt seems rather far-fetched. And if they couldn't use personal data for that purpose, wouldn't they find another avenue?
3. If I had kids, I'd want them to be protected - it certainly seems reasonable that children's personal data be protected. Once they become adults, though, I don't see why a parent should have any control over their adult children's behavior.
4. Firstly, you're assuming that your interlocutor shares your concerns about the uses to which that data could be put. Second, data goes stale, meaning it ceases to be valuable or potentially harmful.
5. And the world is also filled with clever and scrupulous people who want to use personal data in ways we can't imagine to everyone's benefit. Shutting down collection and use of personal data prevents those beneficial uses as well as any harmful ones (indeed more so).
It's all about understanding behavior. Humans, all humans, anywhere in the world are creatures of habit and routine ... Once you understand that you can start to exploit it.
And it doesn't matter whether it affects you now: the data they're collecting is there forever, and as ML algorithms get better, their understanding of that data gets better too.
Let's take a concrete example: the Cambridge Analytica case. Say, for the sake of argument, I've determined that blue-collar workers between the ages of 30 and 40 in major cities have a 40 to 60 percent chance of voting Republican. Now I take all of those who have been to a church once, or have a close relative who has, and bombard them with ads showing how the Democratic candidate supports abortion, for example (whether that's true or not). Now the Republican candidate has much better odds. Of course I'm oversimplifying, but this is not science fiction; it was done with very good efficiency in different countries, and it's just one example.
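To make the mechanics concrete, here is a toy sketch of that kind of microtargeting filter. Every field name, threshold, and voter record below is invented for illustration; real systems work on far richer profiles, but the core operation is just this kind of query:

```python
# Toy illustration of demographic microtargeting.
# All field names, thresholds, and voter records are made up.

def select_targets(voters):
    """Pick hypothetical 'persuadable' voters: blue-collar, aged 30-40,
    in a major city, 40-60% likely to vote Republican, with some
    church affiliation."""
    return [
        v for v in voters
        if v["occupation"] == "blue_collar"
        and 30 <= v["age"] <= 40
        and v["major_city"]
        and 0.40 <= v["p_republican"] <= 0.60
        and v["church_affiliated"]
    ]

voters = [
    {"id": 1, "occupation": "blue_collar", "age": 35, "major_city": True,
     "p_republican": 0.55, "church_affiliated": True},   # matches profile
    {"id": 2, "occupation": "office", "age": 35, "major_city": True,
     "p_republican": 0.55, "church_affiliated": True},   # wrong occupation
    {"id": 3, "occupation": "blue_collar", "age": 35, "major_city": True,
     "p_republican": 0.90, "church_affiliated": True},   # already decided
]

# Only the matching voters would be "bombarded" with the tailored ads.
targets = select_targets(voters)
```

The point is how cheap the selection step is once the data exists: it's a list comprehension, not rocket science.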
Umm, that's only some of the pertinent information. It might also be pertinent that the other candidate cheated on his wife with a male prostitute and used campaign funds to pay for an aide's abortion, but I'm pretty sure they wouldn't put that in the ad as well.
Incomplete or biased information can be worse than no information...
Social standards are changing all the time. Maybe there's something you're blind to that'll be considered really offensive by 2040 standards. I know my views on what's offensive have changed over the last twenty years. Even something like digging up and releasing an off-color joke can have serious consequences. It's easy to say "don't ever be a dick by today's standards," but it's much harder to say "don't ever be a dick by tomorrow's standards." Why risk it?
Let me tell you a story. During communism, my grandmother openly disparaged our great and beloved leader Janos Kadar (https://en.wikipedia.org/wiki/J%C3%A1nos_K%C3%A1d%C3%A1r) in our town. She was called in to Budapest, to a police station where it was known that political enemies were tortured as punishment. She was asked if she had really said X/Y/Z about Kadar. She went on a loud rant about how Kadar is [insert disparaging comments here]. The policeman went pale and ordered the crazy woman to be taken away and sent back to her town before anyone could even hear such heresy; to his judgement, the woman was obviously crazy to show such defiance in that political climate.
If you think that kind of society won't ever happen again during the next X thousand years while the elite have unlimited power to surveil, or that if it does we will have any chance of breaking out of it, well, I would advise you never to go near casinos.
Edit: corrected genders, no genders in my language.
Off the top of my head, the risks of identity theft and blackmail are probably the biggest reasons why privacy is important. This is especially concerning when you realize how often "secure" data leaks and becomes public.
The main assumptions you need in order to be wary of data harvesting are:
1. (Some) people will act against your best interest for their own gain (most often to get money, but, sometimes, to undermine people or ideas they do not like)
2. There are always things that you believe/stand for (pro-abortion, pro-BLM, anti-vaccine, anti-pesticide, pro-house garden, etc.) that someone somewhere disagrees with. These are increasingly identifiable and actionable on.
3. There are always traits (skin color, medical disposition, height, IQ, etc.) that are identifiable and therefore actionable on.
4. Modern data collection is much less visible and obvious than it's ever been. This means the people doing this collection and the people acting on this data are subject to less societal and legal pushback than before, and receive this pushback much later along the line (meaning they can build more momentum behind whatever they're doing before getting called out, if they ever are)
5. Data is never secure indefinitely. Eventually, there's a good chance it's sold to the highest bidder, regardless of who that bidder is or what their intentions are.
With the above assumptions, we can see people are more empowered to discriminate against and take advantage of others. They are also more easily able to remove civil liberties without getting called out.
I think some people believe that the people who would do this kind of thing are few and far between, but it only takes one person with weird ideas and some clout getting this kind of capability to make life miserable for a group of people. And people probably underestimate the number of people with low empathy in the population.
The entire concept that paying money will make companies any less eager to sell your data is magical thinking. What better signal does an advertiser want than somebody who spends money, especially on something relatively frivolous?
Absolutely true. Although it’s true that users have become the product through advertising, making users pay won’t make them any less susceptible. The magical thinking this idea is based on is that market forces will solve the problem. The theory is that if I, a user, find out that a company is selling my data I will leave the service and go to another one, voting with my wallet and causing the whole system to value privacy. This depends on 3 assumptions:
1. There is symmetry of information in the economy (users will know when their data is being monetized)
2. There are many providers of a service with different offerings
3. The services are commoditized (there is nothing keeping me from interchanging one for the other)
None of those assumptions are true for all services, and only a few services may meet all three. #1 is the least true IMO, and if you want it, it’s something you’ll have to pass legislation and regulate for. There’s not much you can do for #2 except make it easier for people to build services; sometimes economies of scale and network effects mean it will only make economic sense for there to be one player in a market, in which case there should be heavy oversight of that player, as with public utilities. #3 would require standardization. This could be done by industry, but at this point they have every incentive to silo their users; public and gov’t pressure could make it happen, similar to the formation of the MPAA as a hedge the movie industry made against gov’t regulation.
I think there is a sneaky #4: just because some service you're using meets your privacy criteria today, there is really nothing stopping them, or their acquirer, from deciding to sell or exploit your information in the future. TOS seem to be kind of a one-way street.
That's not true though; the fact that some service I'm using meets my privacy criteria today means they don't have any information to sell or exploit in the future.
How many services you use have your email address, telephone number, real name, or street address?
All of these have value, but are needed for various aspects of business (sending purchases, receipts, contacting you in case of issues with your account, etc.).
For that matter the compiled list of actions you take on a site have value. The only way not to provide a service with valuable personal information is not to use it.
In the magic world where capitalism's "vote with your wallet" actually works (which as vegetablepotpie points out is fictional), email addresses given to corporations have the form "[random base64 UUID]@generic-mail-service.example.com", phone numbers are similar but base ten, businesses don't distinguish between 'real' names and "John Q Smith", and mailing addresses (given to corporations) are of the form "[random base64 UUID] care of [US postal service or a more cooperative competitor thereof]".
(I guess this technically assumes that voting with your actual vote also works, at least for the purposes of removing abusive know-your-customer laws, but lack of gratuitously harmful regulation seems in line with the libertarian-esque philosophy behind "vote with your wallet", so... eh.)
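The aliasing scheme described above is trivial to sketch. The domain and the base64-UUID format come from the comment; everything else here is illustrative:

```python
import base64
import uuid

def make_alias(domain="generic-mail-service.example.com"):
    """Generate a throwaway per-service address of the form
    [random base64 UUID]@domain, as described above."""
    # URL-safe base64 so the local part contains no '+' or '/';
    # a UUID4 is 16 random bytes -> 22 characters after stripping padding.
    token = base64.urlsafe_b64encode(uuid.uuid4().bytes).rstrip(b"=").decode()
    return f"{token}@{domain}"

# Each service gets its own unguessable alias, so when a list leaks or is
# sold you can tell exactly which service leaked it, and burn just that alias.
alias = make_alias()
```

In practice this is roughly what "plus addressing" and alias services already offer today, minus the cooperative postal service.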
Who said anything about paying more? This could all be changed with legislation at effectively zero cost to consumers. That seems like the only real option as I can’t directly stop my electric company etc from selling my information.
To be honest, my guess is that the consumers who would pay extra money for privacy, although likely in high income brackets, are also likely to be the consumers who are the biggest pain in the ass to support (if we're not talking enterprise).
This is where it would be fun to have all the data and start creeping the person out. “Have fun on your vacation next week; don’t worry, I will drive by your house a couple of times while no one is there and check on your place. I know you just bought an expensive computer and were also searching for the best home security systems, so I imagine you don’t have one yet, but don’t worry, I will watch your place even though you never asked me.” Or: “Hey, I hear you might have erectile dysfunction, you okay bro?” Oh, you don’t care if the big companies know everything about you, but from your own friends it’s a secret? What if your friend works for the big company? Does he get to access all your information? Obviously that seems wrong. Maybe that would get the point across.
I agree, very inappropriate and something I would never attempt. Facebook has been doing a very good job of predicting what ads are relevant to me, to the point where I am somewhat convinced Facebook can hear my conversations. I don't have the app but access Facebook through my web browser, so I'm not sure if that is even possible, but it creeps me out to think what these big companies know about us, all the while many of the people I know are seemingly clueless or careless about the situation.
That argument has sadly never worked. It's too easy to dismiss as an exaggerated analogy. I also think it's wrong to assume everyone outside tech is a clueless sheep. People are often well aware of the implications but think it's "worth" it. They simply don't care and draw the line somewhere between giving their data to Big Tech and living in a glass house.
So much this. The common individual doesn't understand the power that data gives to marketing and propaganda machines. Moreover, the illusion of services being "free" gives these platforms an insane competitive advantage.
I don't like this elitist view inside tech that people are just dumb and unaware. No matter how hard I try to explain the implications to people they easily brush it off. People just don't care.
There is nothing elitist about it. I am also not knowledgeable about medicine or law, and I ask friends or family for advice. However, trying to convince people that “free” is not free is a very hard argument to have. I have used this video in the past to help my case, even if it caricaturizes the effects: