> Eventually, some lawyer is going to convince a judge that, say, 1% of the victims of a deep-pocketed company’s breach will end up losing their houses to identity thieves as a result of the data that the company has leaked, and that the damages should be equal to 1% of all the property owned by the 53 million (or 500 million!) customers whom the company has wronged. It will take down a Fortune 100 company, and transfer billions from investors and insurers to lawyers and their clients.
This highlights a major problem with tort law: It's monetary damages or GTFO. In other words, it's nearly impossible to make a case for damages when there is no obvious monetary aspect of the harm done. I can't sue Home Depot for giving up my credit card info to hackers unless I can prove that it led to someone running up my credit card bill. Either this needs to change, or it should be a crime for companies to release customers' personal information to unauthorized third parties.
Part of the problem is how these data breaches are framed in the media. It's always "Company X was HACKED!" and "Company Y SUFFERED a major data breach!". They're portraying the negligent company as the victim! It should be "Company A carelessly released their customers' data." or "Company B failed to protect 10 million credit card numbers." Once we stop pretending these companies are victims, we can start making and enforcing tougher privacy laws.
"They're portraying the negligent company as the victim!"
Perhaps that's why Experian's little breach never made the headlines. They couldn't be framed as a victim, since the guy bought all the data legally. It's less than reassuring to ponder that, depending on how the data is used, anyone with a shell company, a little diligence and, most importantly, cash could buy this info too.
> I can't sue Home Depot for giving up my credit card info to hackers unless I can prove that it led to someone running up my credit card bill.
It's funny how personal data is so clearly valuable - if it weren't, companies wouldn't spend resources to hang on to it - and yet that value is so hard to define.
I wonder if it would be helpful if personal information were treated like some kind of intellectual property which could be licensed out, and there were some kind of market for doing so. If a company leaked my data I could sue them like a record company suing individuals who share music files.
Of course, any one individual might not be able to make much money by selling access to their data. It's just a thought.
"They're portraying the negligent company as the victim!"
While I agree that things like SQL injection are negligent, there were also credit card hacks/leaks (and an NSA leak) that were the result of malicious contractors. Saying "don't hire bad people" is easy, but how do you do that?
And the standard for best practices is constantly moving in our industry. How do we decide when it is negligence, and when there was nothing that could be done?
I'm not saying people aren't being harmed by this; I'm just saying it's not so black and white. In some cases the companies that were hacked couldn't have reasonably stopped it. I mean, how do you prevent contractors from setting up something that steals credit card numbers? You hired the contractors because you don't have those skills in house.
> How do we decide when it is negligence, and when there was nothing that could be done?
This isn't an insoluble issue: courts deal with similar decisions in car crashes, medical malpractice, and many other scenarios.
A plaintiff could argue that the respondent should have been aware of certain vulnerabilities because they were widely disseminated, or that certain practices are explicitly warned against in common training materials. Respondents might counter by arguing that they test for that type of vulnerability using a widely-accepted tool, but it failed to flag this one issue, or something like that.
I agree that it can't be a purely algorithmic process, but almost nothing in a courtroom is.
Courts aren't up to date on medicine either, but they manage to preside over malpractice cases somehow. I'm sure if technical malpractice graced their courtrooms regularly they would manage that too, probably by relying on expert witnesses and learning the bits and pieces of jargon they need to know.
I think I'm with Cory on this. It's negligence if you leak it. Period. Nobody is obligated to hoard and store a bunch of sensitive personal data from their customers.
In the specific hacks I was thinking of, they didn't hoard and store any info. Equipment was installed that siphoned credit card info from their payment systems. I can't think of a way to run a store without passing credit card info through your payment processing system to the banks.
Agreed, credit cards are a ridiculous necessity. I think in this case the credit card companies should be liable.
I am continually amazed by the simple solution bitcoin provides to this problem: instead of me giving you an account number that you (or anyone who gets the number) pull(s) money from, you give me a number that I push money to. It's going to be a long time before that kind of change in our payment systems (from pull to push, whether with bitcoin or some other system) can be widely implemented.
"I think in this case the credit card companies should be liable."
Maybe? I didn't see anything about Home Depot suing the contractors. Really, they should be held financially and criminally liable (if they weren't).
The public ledger part of bitcoin is great also. You can see where the stolen money went. Maybe if they added a way to flag money, so spending stolen bitcoin would trigger an alert at the merchant, the same way canceled credit cards do now.
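To make that idea concrete, here is a purely hypothetical sketch of what such a merchant-side check might look like. The transaction layout and the flagged-address list are invented for illustration; nothing like this exists in the actual Bitcoin protocol or in any real wallet or payment-processor API.

```python
# Hypothetical sketch: a merchant checks whether any input of an incoming
# transaction spends from an address on a shared "flagged as stolen" list,
# analogous to how a cancelled credit card is declined today.
# The dict layout and addresses below are made up for illustration.

FLAGGED_ADDRESSES = {
    "addr_flagged_001",   # placeholder entries a clearinghouse might publish
    "addr_flagged_002",
}

def payment_triggers_alert(tx):
    """Return True if any input of the transaction spends from a flagged address."""
    return any(inp["address"] in FLAGGED_ADDRESSES for inp in tx["inputs"])

# Example incoming payment at the point of sale.
incoming_tx = {
    "txid": "example-txid",
    "inputs": [{"address": "addr_flagged_002", "value": 0.5}],
    "outputs": [{"address": "addr_merchant", "value": 0.5}],
}

if payment_triggers_alert(incoming_tx):
    print("Alert: this payment spends coins that were flagged as stolen")
```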
re: the contractors being liable, currently the way things work is, when my credit card is used fraudulently the credit card company owns that and pays me back, which is the least they could do. I don't even know if they attempt to work with law enforcement to catch the actual fraudsters, but I sure hope they do. Maybe I should be more proactive about that? I don't know.
EDIT: I should point out I don't know any details about the Home Depot "hack" and I might not be addressing all the pertinent points of that particular incident. Bottom line is still the same, it shouldn't be the customers who suffer when this happens. The people that made it so easy for my personal information to leak, the ones who necessitated that my personal information even be required as part of the transaction, should be the ones feeling the pain.
I guess the details are still not known? I was confused by the Target hack in my earlier comment (sorry). That was where HVAC contractors had their credentials stolen, and malware was installed on Target's POS system. The Home Depot hack was malware suspected to have been installed on their self-checkout machines [1].
" it shouldn't be the customers who suffer when this happens. The people that made it so easy for my personal information to leak, the ones who necessitated that my personal information even be required as part of the transaction, should be the ones feeling the pain"
I get what you mean about some services collecting everything they can and not taking the best possible care of it. I agree with you there. I was trying to point out an exception where there isn't anything that can be done. For example, try and stop FinFisher :) [2] Companies will get hacked, and it is not always negligence. I was trying to point out that you wouldn't blame a shopkeeper if an armed robber stole credit card info, but if he left it in an unlocked room, we should. And I don't think either Home Depot or Target were collecting more than they needed to.
P.S. If anyone has a link as to how we can be more proactive about working with law enforcement to catch the fraudsters, I would love to see it, either from the point of view of a consumer or a service provider.
Perhaps the hack becomes tricking the consumer into pushing to the wrong address. Is that the consumer's fault? I agree that the current system is vulnerable, but perhaps we just get different hacks, not no hacks.
Do you regularly hire contractors and then never look over the work they do? Because I don't, and frankly, that's kind of idiotic. The data is your responsibility, not theirs. They also have no interest in the longevity of your company, so why would you trust them without checking what they are doing? As a customer, I don't care how Home Depot handed out my CC info; I care that they did.
Sure some hacks happened despite companies putting forth their best effort, but hand-waving the responsibility to contractors is not the answer.
"Do you regularly hire contractors and then never look over the work they do?"
I don't when it comes to software. For, say, legal services, I don't have many options, as I am not an expert. The point I was trying to make is that sometimes people hire contractors because they don't have the expertise. How are they supposed to review something they don't understand? Saying every organization should have top-notch IT on staff so this doesn't happen is hand-waving as well.
There's a famous saying along the lines of "If you are rich, hire two accountants. One to keep track of your books, and another to keep track of the first guy"
If you provide sensitive information to anyone that you don't have legal recourse against, then I don't really have any sympathy for you. If you make a bad business decision and it leaks my info, I'm not upset because someone took it without your approval, I'm upset because it was leaked.
There's a cost/benefit risk to everything. If you want to take the route of working with contractors, you have to weigh the risks for that as well.
Oh absolutely. The other side isn't necessarily better, it just comes with a different set of risks/costs/benefits. You have to weigh those against each other to see which makes more sense for your situation.
But if you take the less safe/secure option, you can't expect much sympathy.
"Do you regularly hire contractors and then never look over the work they do?"
Most people do this all the time. When you have your car serviced, do you look over the work? Some people do, sure -- those who know what to look for. Most people don't, because they have no basis upon which to judge the quality of the work.
> How do we decide when it is negligence, and when there was nothing that could be done?
As others have pointed out, that question is one that courts deal with every day in other industries.
IANAL but I have some familiarity with the architecture and construction industry, where there are lots of lawsuits around negligence. My understanding is that the question is generally framed as "what would a reasonable professional have done in this case?" Would any reasonable contractor have interpreted those drawings to mean the joists should be spaced 24" apart? Or would a reasonable professional have interpreted it to be 20"?
The same standard could be applied to software engineering and data breaches. Would a reasonable engineer allow a SQL injection vulnerability to persist in 2016?
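For readers outside software, here's a minimal sketch of what that defect (and its long-standard remedy) looks like, using Python's built-in sqlite3 module and a made-up users table:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, email TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'alice@example.com')")

def find_user_vulnerable(name):
    # Negligent pattern: attacker-controlled input is spliced into the SQL text,
    # so input like "' OR '1'='1" rewrites the query and dumps every row.
    query = "SELECT * FROM users WHERE name = '%s'" % name
    return conn.execute(query).fetchall()

def find_user_parameterized(name):
    # The standard remedy: a parameterized query keeps data out of the SQL text.
    return conn.execute("SELECT * FROM users WHERE name = ?", (name,)).fetchall()

print(find_user_vulnerable("' OR '1'='1"))      # leaks the whole table
print(find_user_parameterized("' OR '1'='1"))   # returns nothing
```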
I suspect that as soon as any amount of precedent is set for lawsuits around "hacking", law firms will push open the floodgates and it will suddenly become very common.
In the construction industry the frequency of lawsuits has created a culture where people go to great lengths to reduce their risk of liability. Architects draw deliberately vague details around waterproofing and contractors push to have every detail for how to do things spelled out on paper, so they won't be on the hook if one of the steps in the process is wrong.
> Would a reasonable engineer allow a SQL injection vulnerability to persist in 2016?
You are suggesting that a software engineer should be personally liable for the code he writes, just as a civil engineer is liable when he stamps a blueprint.
The reality is that today nobody is responsible for data security. Developers foist the risk onto consumers, who don't have any real choice in the matter and don't even know what risk profile they (we) are agreeing to. If you accept that somebody, somewhere needs to be held responsible for data breaches, the most sensible party is us, the developers. We have all the domain knowledge, and we're the only ones who can actually protect our users' data.
And I say this as someone who's spent time in consulting. The prices would necessarily go up in the face of litigation. But I think it would be well worth it - I'd love to be able to financially justify taking extra time to make the systems I build secure. (I make systems secure anyway of course, but I'm competing with companies packed full of General Assembly grads who don't know what a hashing function is yet.)
So -- national building codes for software? Government inspectors who audit code? Building permits required to even write code, or modify existing code?
Doesn't seem to line up very well with "Move fast and break things"
> Doesn't seem to line up very well with "Move fast and break things"
When it comes to security around my personal data, I don't want anyone to move fast or break things. Businesses should either not bulk collect identifiable data or treat it with the care and diligence it deserves.
And while we're at it if 20 year old self taught rockstar programmers don't have the skills to do that, maybe they should learn. If you're making web software which stores personal information and you can't name 5 of the OWASP top 10 without looking them up, you're a privacy time bomb. I want my personal data far far away from your product.
When it comes to security, move slowly and fix your shit.
How do architects get away with vague details when lives can be on the line? If they design the building correctly, they should be pushing to make things as explicit as possible so that when contractors make a mistake it is clear where the fault is. It seems unprofessional that they'd be pushing in the other direction.
For context, the majority of lawsuits in construction involve defects that are far from life threatening. Waterproofing, insulation, or, say, windows not working quite right are more common. There could be millions of dollars at stake because a defective building is worth less as an asset, but there are rarely lives on the line.
I don't think many architects would be comfortable taking risks with critical structural details. Also, even if a firm isn't found technically liable for a deadly building failure it can still be very bad for business.
That said, I'm sure some architects still put people's lives at risk to avoid liability. I suspect it's similar to any other industry where, sometimes, people in a company collectively put the public at risk for the sake of financial gain - pharmaceutical companies, manufacturers, mining companies, etc. No individual feels personally responsible.
Typically an architect's design would need to be signed off by a structural engineer before construction begins. The structural engineer would take on the liability for safety critical aspects of the design.
The architect and engineer would presumably collaborate to ensure that the design is at least clear enough to be safely built.
OK, so I was confusing the Home Depot hack mentioned in the article with the Target hack, but I don't want to edit my comment after so much discussion. I also want to point out that I was trying to show an exception, not to say that no one should ever be held accountable for a breach.
In the Target hack, HVAC contractors had their credentials stolen, and malware was installed on Target's POS system. The Home Depot hack was malware suspected to have been installed on their self-checkout machines [1].
So no, they never allowed anyone access to the unencrypted credit card numbers. Negligence is failure to exercise reasonable care. [2] Bad things happen, and even more so when sophisticated criminals are attacking you. It is impossible to create a fully hack-proof system. I would say allowing a SQL injection is negligence, but when people are using sophisticated attacks, there is nothing you can do.
I used to feel strongly about not trading my privacy away for some convenience. For a long time I took a luddite pride in not being on various services, having a dumb phone, keeping it turned off, etc. Over time, though, I've found myself having a change of heart. What started as little inconveniences have turned into big ones, as I'm more and more out of touch with my peers, contemporaries, and family. I don't like that. And for what? Companies can still track me, my data's still out there.
But I still care about privacy. It's a basic human right. What I've realized is that our laws are lousy at enshrining that right--companies can largely do what they want, etc.,--and we're lousy as a society at recognizing the value of that right.
So I'm curious, are there any cities, counties, states with any sort of legislative efforts going on to strengthen the citizens' rights to privacy and ownership of data? Is there anyone we can look to as a model for "privacy and data ownership done right," something to work toward implementing at a national level?
Is there any awareness group working to raise these issues in a way that's actually accessible to the general public? (Are they doing a good job at it?)
I don't want to give up, and I don't want to check out. I like my shiny devices and assistants now, they make my life easier in little ways all the time. How can we balance that with the basic human right to own our data and retain our privacy?
> Companies can still track me, my data's still out there.
I don't think this is right. When you were being a "luddite" you probably had less data collected about you, but you couldn't see that. All you can see is the cool tech stuff you are missing out on, so you feel that keenly.
I think the reason a lot of people give up their privacy is that they feel like they have lost it already, so they might as well "get what they can" from it. I don't think this is the right way to think about things. Not every company is sharing data with every other company (yet), so each new toy/app/gadget you use from a different company creates a new entry in some database about you.
Yes, I think this is right. I have very privacy-conscious tech habits, and companies have some data on me, but probably far less than on the typical user of modern services. Some ways I protect myself:
1. Google account - I have one because it's required for Android, but I have never used Gmail, do not search from my Google account or ever log into it from my browser.
2. Facebook or any subsidiaries - never.
3. General Web browsing - reject 3rd-party cookies automatically, browse with uBlock Origin, keep JavaScript disabled (via ScriptSafe) with selective enabling of scripts, and use Privacy Badger.
4. Only enter my real name online for things I explicitly need/want associated with me. Only enter my real address if actually needed for a delivery or other reason.
5. My mail client is set not to load remote images, which protects me from those 1x1 tracking images.
Many of these make the Internet a better experience, and certainly protect my privacy to a large degree. Yes, corporations have some data on me, but not much, and it's not all correct. I also don't feel I'm missing out on too much cool tech stuff, a lot of the privacy-unfriendly stuff is entirely unnecessary, for me at least, or replaced by alternatives that are not much worse.
> 1. Google account - I have one because it's required for Android, but I have never used Gmail, do not search from my Google account or ever log into it from my browser.
I haven't run stock Android in a while (and instead just run CyanogenMod), but I'm pretty sure that it's not actually required, i.e. you can skip account creation during setup and then install alternative app stores like F-Droid, Aptoide or Amazon.
If we include CyanogenMod or other ROMs, then it's definitely possible, though. I do own and use an Android phone and deleted my Google account probably half a year ago...
I also prefer Cyanogen, although I like having access to the Google Play store even on that. While that's tied to my Google account, it's an informed choice, I am fine with Google knowing what Play Store apps I have installed, and I use F-Droid at the same time for some other apps.
One of the great things about Cyanogen is maintaining tight control even over Google's apps and how much they track me.
If you have the option to log in to a Google account, you have Google Play Services installed. It keeps a connection open to Google (even if you aren't logged in) to support push messaging - which many apps use.
It's such an uphill battle, though. There are many clever ways you can be tied across websites.
Unless you are using some sort of VPN or proxy that's constantly changing your IP as you're browsing, and a browser that consistently changes or masks fingerprintable information and wipes all persistent tracking beacons, then what you're doing is not even remotely sufficient to evade tracking by any entity. What you're doing is definitely not enough to avoid Google or Facebook.
At this time I believe nothing can do this with 100% effectiveness. The closest would be to use Tor Browser and never disable NoScript and only use sites that don't rely on JS... but even then there are many theoretical workarounds, many more to come in the future, and probably various unknown techniques currently being tested or deployed.
I know enough about the technology to realize it saves me a lot of time, effort, and stress to just accept my fate. I no longer include ad networks or big tech companies in my personal threat model, even though I dislike what they're doing.
> I know enough about the technology to realize it saves me a lot of time, effort, and stress to just accept my fate. I no longer include ad networks or big tech companies in my personal threat model, even though I dislike what they're doing.
That's the reasoning I came to as well, I'm fine with certain sites using JS, session data and filling up my .cache if it's convenient.
The more I understand my fingerprint on the web, the less paranoid I get. Lots of comments here advising not storing anything, or using anything... they may as well use Tails as their main OS or throw out their computers entirely.
The same privacy-intense users lose credibility when they advise people to trust a bunch of add-ons instead of using system-wide blocklists in their hosts file or strict IDS rulesets.
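For anyone who hasn't seen the hosts-file approach, here's a minimal sketch of the idea in Python; the domains are placeholders rather than a real blocklist, and actually writing to /etc/hosts (which needs root) is deliberately left out:

```python
# Sketch: turn a plain-text list of tracker domains into hosts-file entries
# that resolve them to 0.0.0.0, blocking them for every app on the machine,
# not just one browser with the right add-ons installed.
# The domains below are placeholders, not a real blocklist.

def hosts_entries(domains):
    """Yield one hosts-file line per non-empty, non-comment entry."""
    for line in domains:
        domain = line.strip()
        if domain and not domain.startswith("#"):
            yield "0.0.0.0 " + domain

example_blocklist = [
    "# example entries only",
    "tracker.example.com",
    "ads.example.net",
]

print("\n".join(hosts_entries(example_blocklist)))
# The output would then be appended to /etc/hosts (or a managed include file).
```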
Never claimed otherwise, although my steps are enough to expose far less data about myself than the average Internet user. These steps are also completely usable in everyday life without sacrificing much time or convenience.
On the comparatively rare occasions when I need more privacy, I do more. For instance, I tend to use Tor if I'm staying at a hotel that ties the WiFi connection to you personally - I think those connections are as bad as it gets for privacy.
Also consider that data isn't timeless. Data about your habits now is outdated and close to worthless in a few months. So reducing your data footprint always has an effect.
This cannot be stressed enough. I can't tell you how many useless things websites have suggested I would be interested in because I purchased a gift for someone's kid 5 years ago, or whatever.
Also, consider that data isn't necessarily properly associated with who you think it is. I often see suggestions that are clearly due to my wife's browsing on her own computer (which is nearly identical to mine in terms of browser fingerprinting). Obviously we have the same IP to the outside world and we share some interests, but not everything!
No way. The largest threats to your privacy by several orders of magnitude over the stuff we complain about:
- Electric, gas, sewer, water, trash utilities that you basically can't live without.
- Thinking of going rustic? Better not own the land in your own name. Most places, parcels are easy to find online.
- Property management companies
- Sites that facilitate rental applications
- Your 100+ year old bank
- Credit card and auto lenders
- Employers and job application websites
- Cell phone networks
- AT&T, Comcast, and all but a handful of indie ISPs
- Insurance companies
- Retail stores and gas stations with your loyalty programs
- People who regularly enter your property like building maintenance, trash collectors (often spies for the municipal building inspector), etc.
- Cops and neighbors who can observe your daily routine
- Vital records offices that know who your family members are
Hell, just yesterday Verizon gave my mom a $20/mo discount to plug something into her OBD II port. WTF else can that be for?
Facebook and Google are the kindest to your privacy. Your data is a closely guarded asset, protected by competent engineers, used to tailor your experience and show ads. All these other entities, many of which you basically have to do business with to lead a contemporary life even without social media and smartphones, know who you are, where you live, what you drive, who you care about, how much money and credit you have and how you use it, and they do not even try to keep these data in house. When the government or a PI is looking for someone, that's where they look first - to data bought, volunteered, interviewed, and stolen from every kind of company except modern tech, often through well defined and documented interfaces like LexisNexis and the credit bureaus.
Going Luddite doesn't do much. You have to sublet informally (usually illegal), get paid in cash (employers don't like to do this unless they are illegal businesses) under a false name (fraud), never use banking or credit, not own, rent, register, or insure any cars, buy single-ride subway tickets (more expensive), have a forgettable face (genetics?), etc. People can't even manage this when their lives depend on it. Fugitives are almost always caught these days - this is how. It's nearly impossible to avoid leaking your biographical data and physical location.
Modern tech might give more insight into your mind, I suppose, but deeper privacy than that is very hard - actors in a capitalist system are starved for information about their counterparties and work hard to get it.
People's situations may vary, but most of the ones you list differ in two important ways. One, they need to make an effort to invade your privacy. Yeah, my neighbours could observe my daily routine, and my electric company could learn more than just my use of electricity, but these would take some effort, while the tech companies can data-mine millions of people at a time.
And the other important distinction is that these other things are highly compartmentalized, tech isn't, especially for the kind of person likely to be reading HN. I'm sure I am not alone here in that I basically live on the Internet. I read news online, I buy the books that I read online, discuss topics that interest me, watch movies, research my medical issues, read about stocks I invest into, talk to friends and family. My complete browsing history would certainly reveal more about me than ransacking my house for all the physical items, and corporations like Facebook try to have access to that, plus more - such as your private conversations, which is why they push their own messaging services.
This is the truly scary combination. The extent to which online activities matter in your life (especially for the tech-inclined), and the ability of large companies to automatically process and analyse those activities.
OK, so we just compartmentalize online activity. I use multiple VMs, with uplinks using various combinations of VPN services and Tor. Mirimir, for example, can be profiled as well as anyone. But Mirimir is neither linkable to my official name, nor to other pseudonyms.
> I don't want to give up, and I don't want to check out.
One thing to try, would be to lie more. We don't owe giant corporations the truth, but we especially don't owe them the whole truth. Never use your real name. Use fake addresses when you can. Use different email accounts for each site that needs one. Never use "social media logins". When you have to make payments, do so through Paypal or some other obfuscating service. Zealously delete cookies, and use whatever tech is available to help with that.
There are probably theoretical ways around this, "fingerprinting" etc. It might be like outrunning the lion, however: one might only have to be faster than some other prey, who happily stays logged into FB all the time even when ordering sex toys and vaporizers.
I've been giving city park addresses, lottery hotlines and/or obscure character names to everyone but the essentials & the guv. My first lesson in guerrilla counter-marketing came when my book club mis-typed my middle initial and I was flooded with pre-approved credit cards with said typo. My twelve-year-old brain was alive with all the possibilities I could buy with free cash! Apparently, without a job, an SS# and that particular middle initial, I wasn't really pre-approved. Meh.
Yeah, but the convenience is, for example, I get cheap flight tickets, Cortana picks those out of my email and adds the trip to my calendar automatically, etc. How am I going to lie about any of that?
If your answer involves paying cash for airline tickets, then we're back to the high-priced, inconvenient world I'm no longer interested in trying to maintain.
Stuffing databases with fake data is fun when you're signing up for a newspaper paywall, sure, but that's not a sustainable, long-term solution.
This is why I'm looking at law and society, that's the only way these rights will be preserved long-term.
Isn't being selective with what you lie about the answer to this? If you don't care that Google know about a purchase then use your normal email/credit card etc. If you want additional privacy then use methods that allow this. Privacy can be a continuum.
Given that it can probably be traced back to a mailing address and credit card these measures may be worthless anyway.
Flying commercially is never going to be a privacy-preserving activity, for a wide variety of reasons that have nothing to do with payments or online calendars. This is just a trade-off that one must accept.
This is a somewhat spooky but eerily relevant talk that the CEO of In-Q-Tel, the venture capital arm of the CIA, gave at a security conference not long ago:
He mentions that we are past the point where being a Luddite matters much. Much of your activity can be inferred from the activity of your friends, family, and those around you.
> Companies can still track me, my data's still out there.
It's not all or nothing. Like any security (and confidentiality is one of the three pillars of security), there's no perfect solution but you can make it more costly for attackers. A few things that help for little cost:
* Use a pre-paid phone plan; don't give your identity to the telco. (Maybe not possible in all countries.)
* Use a VPN and/or Tor. Protect your browsing habits from your ISP.
* Use an ad-blocker or something like uMatrix to stop most tracking.
* Pay for things with cash when possible. If it was invented today, we'd all be impressed with the technology: Complete trust between strangers, anonymous financial transactions - all implemented in paper; no encryption needed.
* Use one of the many anonymous, confidential communication services for chat, text and voice.
But I agree that the answer is in the law, not technical means.
I do this. The privacy aspect is just a bonus. The main advantage is that I consistently spend less and have the world's easiest-to-manage budget, with no forgotten spending, autopayments or contactless swipes. If there's £20 in my pocket after drawing out £50, I spent £30, no matter what I remember or what stuff I now have. I'm immune to 98% of supermarket special-offer junk, as I draw enough for shopping, not shopping + impulse junk.
I've done it ever since the dot con crash, when we were seriously financially challenged for a while. Fifteen years later I haven't had the least temptation to revert, despite numerous offers of higher credit limits and more cards from assorted banks. Plastic gets used 0-4 times a month - for online shopping.
Despite all the hubris about going cashless, I don't find cash the least bit inconvenient - everywhere takes it, and everywhere I spend it has a cashpoint nearby. So all that remains for card and bank is online shopping and bills.
In theory, the European Union has somewhat better law than the US in this area. I write "in theory" because in practice the USA does not need to abide by it - and it is mostly US companies that keep our data.
As to the awareness group - I guess this will be controversial - there are the Pirate Parties, with privacy at the core of their ideology.
Personally I think that we'll have to give up strict privacy - it is inevitable that more and more online devices will know more and more about us - but we need some time to adjust, to find out how to make a world without this strict privacy reasonable.
This is only an inevitability because we do not control our software and hardware. It is entirely possible to replace most of these privacy-destroying technologies and services with privacy-respecting free software solutions. Check out this list of alternatives, for example: https://degooglisons-internet.org/liste?l=en
I am all for control of our devices (even though I can see some problems with, for example, cars) - but it is only a part of the problem. First of all, we will never control other people's devices - and they will know more and more about us. Second, there will be more and more services that require the devices we do control to expose more and more data about us.
Well, the EU didn't respect those laws in the first place when it decided we needed to allow US companies to store EU data, despite US companies being incapable of keeping it securely (by law). The whole situation is a mess and nobody's interested in fixing it.
> I don't want to give up, and I don't want to check out. I like my shiny devices and assistants now, they make my life easier in little ways all the time. How can we balance that with the basic human right to own our data and retain our privacy?
You can't. If you want to have the security of a democracy, you have to accept the inefficiencies of a democracy (as far as the power structures are concerned, it's pretty analogous to the difference between a dictatorship and a democracy).
Have you tried Silent Circle's phone (the Blackphone) [1]?
I'm similar to you in many ways on privacy. If the price of the Blackphone and cellular data can come down, I'd be all for it. It may take time, though.
>> "So I'm curious, are there any cities, counties, states with any sort of legislative efforts going on to strengthen the citizens' rights to privacy and ownership of data? Is there anyone we can look to as a model for "privacy and data ownership done right," something to work toward implementing at a national level?
Is there any awareness group working to raise these issues in a way that's actually accessible to the general public? (Are they doing a good job at it?)"
I'm in a similar position to you. I value my privacy and think it's an important right that needs to be protected. Unfortunately you miss out on a hell of a lot of things if you are strict about maintaining it (thanks mostly to ad-supported web services). I also agree that it's through laws that this can be fixed. I think laws generally just can't keep up with the pace of technological change, or are made by people with almost no technical knowledge. I'm actually just about to start studying for a law degree (I'm in my late 20s) so that I can apply my technical knowledge (I've been a full-time dev for the last 7 years) to privacy law. Hence I've been researching it a lot recently.
The best group I've come across raising awareness and fighting for privacy is Privacy International. It's definitely worth checking out. They produce reports on privacy rights in countries throughout the world, and they fight government legislation trying to erode our privacy rights. As for legislative efforts to strengthen privacy, the EU's recent changes to data protection and the use of EU citizens' data by companies based outside the EU spring to mind (Privacy Shield). It's not great, but there is effort to improve.
While future IoT devices are a privacy (and security) nightmare, we can take simple steps to leave a smaller digital footprint when using the web: use private browser windows more often, use a separate web browser for Google and Facebook services, use a separate browser with privacy tools for the rest of your web browsing.
This helps somewhat and is no real effort.
I find Google too useful when travelling, so I use Gmail for making travel reservations so that Google Now knows what I am doing. So, a few times a year I let myself be tracked for convenience. Otherwise I like private email, private calendar, etc.
I don't think privacy has to be an all-or-nothing deal. Just make choices you are comfortable with.
"While future IoT devices are a privacy (and security) nightmare, we can take simple steps to leave a smaller digital footprint when using the web: use private browser windows more often, use a separate web browser for Google and Facebook services, use a separate browser with privacy tools for the rest of your web browsing."
What I really want is a chroot jail for a browser. This shouldn't be that hard, but the GUI adds some complexity ... and I am not sure how to implement it in OSX.
I can do this immediately with vmware fusion, but that's a lot of overhead - quite expensive - for just a second browser instance. It gets even more expensive if I want a third or a fourth.
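Short of a VM or a true chroot, one cheap middle ground is launching a second browser instance with its own throwaway profile, so its cookies, cache and local storage never touch your main browser. A rough sketch, assuming Firefox at its default macOS install location (the -no-remote and -profile flags are standard Firefox command-line options; everything else here is illustrative):

```python
import subprocess
import tempfile

# Launch an isolated Firefox instance whose entire state lives in a
# throwaway profile directory. This is not a chroot -- the process still
# sees the whole filesystem -- but it keeps tracking state separate.
FIREFOX = "/Applications/Firefox.app/Contents/MacOS/firefox"  # default macOS path

profile_dir = tempfile.mkdtemp(prefix="isolated-browser-")

subprocess.run([
    FIREFOX,
    "-no-remote",             # don't hand the URL to an already-running instance
    "-profile", profile_dir,  # keep cookies/cache/storage in the throwaway profile
    "https://example.com",
])
```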
What do you mean by that? That phrase always struck me as an argument-ending non-argument. What if I disagree that you have a "right" to privacy in any meaningful way? What if I think that I have a basic human right to control the data you choose to upload to my server?
There's a long literature and philosophical debate on the whole concept of human rights. I also tend not to find those discussions particularly productive, as it's something of a "get out of rational discussion free" card: a person can simply declare that some favourite property or aspect is a universally privileged right. I'm less familiar with the debate, its particulars, and various advocates than I'd like to be. Generally, though, I find the empirical arguments more persuasive.
This isn't because I don't like the specific rights conveyed. Generally, most liberal rights strike me as personally quite desirable. And preferable on a social basis.
It's the latter which I think makes a better basis for arguing for (or against) rights. There are rules of behavior, including rules reserved to individuals, which seem to make for more viable societies, cultures, and civilisations. There are also, though, overabundances of freedoms which can be counterproductive -- as Will Durant has noted, if you've got a choice between an excess of order or an excess of liberty, he'd choose order, because you still have order. (Mind, I'm not entirely convinced he's correct either.)
But generally: a right (conditional, empirically-based, predicated on an increase in overall social welfare) to privacy strikes me as something which, on balance, tends to promote rather than diminish preferred function of society as a whole. And a loss of that privacy generates highly negative consequences.
The irrationality you are noticing is to do with the modern concept of rights. Since the New Deal and the Universal Declaration of Human Rights, 'right' has come to cover things like right to a job, right to healthcare, right to electricity, right to broadband, and so on. But these rights impose obligations on other people, to either work to provide these rights for you, or to stump up the cash to pay for them. That means you need an agency which violates property rights.
Since you can never raise enough taxes to pay for everyone's needs, there's no consistent way to implement the modern notion of rights, so it simply becomes a list of irrational demands.
The only consistent system of rights (of rights which can be granted to everyone equally) takes the right to life as fundamental, and derives other rights from that. The derivative rights in such a system are the Enlightenment-era rights, i.e., life, liberty, property and the pursuit of happiness.
I'm not opposed to the idea of "rights", or to there being conflicting rights, or rights of obligation - say, an obligation to help those in need (Good Samaritan laws, mariners' aid laws).
I'm not even entirely concerned with rights being apparently, at least at first blush, inconsistent.
What I am concerned with is what you've just done: to elevate a single right or a single set of rights to some privileged status, simply on a say-so. These turn out to be both exceptionally arbitrary, on close examination, and frequently dysfunctional. Take a trolley problem, or a unit in battle, or a sinking ocean liner, or any number of other conditions, with your primacy of right-to-life, for example.
Note that life, liberty, and happiness are given in the US Declaration of Independence (though not the Constitution) as unalienable rights. That is, they are not rights which cannot be taken away (any two-bit punk can claim your life, a zip-tie can restrict your liberty, and a wailing child can remove your happiness), but they cannot be given over to another -- unlike your property, or business income, or even spouse and children.
So no, actually, I don't find that a convincing argument, though thanks.
Well, first of all, it's turtles all the way down. Like all morality, there isn't an absolute answer that tells us what is good or bad in a secular setting.
So the thing that comes closest is the culture and history we've been carrying with us. And in that tradition, the 'right' to privacy has been a 'right' for a long time (e.g. the right to private conversations and mail, etc.).
Only in western democracies. In Russia and China, for example, the government has the legal right to monitor any communication whatsoever, and always has.
Speaking about countries... yes, there are countries, like e.g. Germany, with strict privacy laws. The problem with these laws is that they should theoretically protect your privacy, but they stop at the border. So the privacy laws are strict for German companies and maybe also some other EU countries... but data held by a foreign company like e.g. Facebook (I'm only using FB as an example) does not stop at borders. There are many legal cases that say what Facebook is doing is illegal in Germany, but how would you stop an international company from violating your privacy laws on the Internet?
Creating a company in Germany which does not respect your privacy is impossible (maybe this is the reason we are not world-class at internet companies).
There is no need to "balance" privacy with anything. You and I should just have privacy. Technology products can be created that put privacy first, and that thwart surveillance.
Moreover you don't need new laws to have privacy. Or looked at from another angle, even if you had laws protecting your privacy you should not rely on laws to keep your private information secure.
If all real-time communication were secured with endpoint-generated ephemeral keys, and all storage and store-and-forward payloads were secured with keys distributed through a web-of-trust key distribution infrastructure, the surveillance state and other commerce-oriented snoops would have to work much, much harder to glean far less information.
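As a concrete (if drastically simplified) illustration of the "endpoint-generated ephemeral keys" part, here's a sketch using the X25519 primitives from a recent version of the Python cryptography package; it covers only the key agreement, not authentication, the web-of-trust layer, or the actual message encryption:

```python
# pip install cryptography -- a minimal ephemeral key agreement sketch.
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

# Each endpoint generates a fresh key pair for this session only.
alice_private = X25519PrivateKey.generate()
bob_private = X25519PrivateKey.generate()

# Only the public halves ever cross the wire.
alice_public = alice_private.public_key()
bob_public = bob_private.public_key()

# Both sides compute the same shared secret; it is never transmitted.
alice_shared = alice_private.exchange(bob_public)
bob_shared = bob_private.exchange(alice_public)
assert alice_shared == bob_shared

# Derive a symmetric session key, then discard the private keys afterwards
# so past traffic stays unreadable even if a device is later seized.
session_key = HKDF(
    algorithm=hashes.SHA256(), length=32, salt=None, info=b"session",
).derive(alice_shared)
print("derived a", len(session_key), "byte session key")
```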
I also try to find a compromise. If we're talking pragmatic, as opposed to ideal, my suggestions in the current environment are:
1) Be offline as much as possible. I have banished the web browser from my phone and tablets. The main computer is completely air-gapped; I keep an old laptop around to go online. But I installed an app called Self-Control and use it in whitelist mode. I keep a separate to-do list for things that require internet access, and when the time I set in Self-Control has passed, I do all my "internet errands" in one go. Usually it does not take more than 1-2 hours a week all in all.
2) Since you care about privacy, you already do all this, but for completeness' sake: uBlock, Ghostery, etc. are your friends. I also use an app called Little Snitch. Very useful! Obviously, I don't approve of Facebook, Google etc., but I actually went one step further and blocked all their properties in Little Snitch. No beacons, "like buttons" or supercookies for me, thank you.
3) Even with (2), I avoid the web as much as possible. In fact, with the exception of HN, I'm usually not on the web anymore. For news or some light magazine reading, I download the PDF editions (or have them sent to my email). For documentation and manuals, I use an offline reader called Dash. When I want to watch a video, I use youtube-dl (on my "internet groceries day"). For ebooks, I have a dedicated reader that is completely offline (needless to say, I do not sync with "the cloud"; Marvin is an excellent alternative to iBooks for reading EPUBs). In general, all my media library is local: music, movies, TV shows, magazines/newspapers, and so on. I don't do streaming, I don't do "cloud". Storage is cheap, so why give up your privacy for nothing? Unlike fking webpages, PDFs don't track your reading/thinking behind your back, you can annotate them if you want, and overall I think they offer a better reading experience compared to a web browser, esp. on a bigger tablet like the iPad Pro or (haven't tried, but should be the same) Surface. Since I really, really dislike ads (= being told what to think, how to feel, etc.) I even paste over all the ads before I read a magazine or newspaper! After a bit of practice, I can do it now in about 1-2 minutes per PDF. It's a small price to pay to keep these mercenary memes from infiltrating your mind.
4) For email, I switched to Protonmail. I understand that the NSA can hack them if they really want, but that's okay, they are not my threat model. My goal is to keep what I write in confidence from being read by Google/Gmail, Microsoft/Hotmail etc. So I usually send "pseudo-encrypted" versions to such addresses (where the reader has to enter a simple password, just to keep it from being machine-read as per default). Alternatively, I might send an encrypted PDF if it's more formal or important.
5) For more fluid, extemporaneous communications with friends and family, I mainly use iMessages. In general, Apple is your sole remaining friend among the mainstream vendors (and like I said, I'm talking pragmatic here, not ideal). If someone is on Android, there's still the cross-platform Signal app. Facebook-owned Whatsapp is an obvious no-go. I found that people usually can understand if I explain it to them why. It's like second-hand smoking, only worse. If they insist on exposing me to their "smoke" (or, negative externalities), then I cut the contact.
One last comment: If you feel like you are getting "de-synchronized" from your environment, consider that that might actually be a sign that you are detaching from the hivemind and starting to think on your own again. It's not a guarantee that you will be more (or more often) right than the crowd but at least you have a chance of thinking an interesting thought or developing your own good taste, of not being trite. So take courage! :-) I found that with a bit of effort, it's possible to keep some measure of privacy and make it a bit harder for these people to psychologically profile and manipulate you.
If you're happy with that kind of marginal existence, then that's great. It works for you.
I don't want to lose touch with people, because I don't view my contact with people as being part of a "hivemind". I view my time on Earth as one marked by civic and human engagement, where I want to share my ideas, bounce them off other people, strive to improve my community and the world, etc.
Stepping away from that to be a hermit, intellectually speaking? No thank you--there's too much living to do.
Besides, I realize that you don't really know me at all, and are just offering your comments out in a general sense, but it's pretty presumptuous to assert that people who want to live in a connected world are less likely to think interesting thoughts or develop their own sense of taste. Silly, too.
None of the little tricks or habits I listed above leads to a "marginal existence" or makes you a "hermit". They serve to cut out the middleman, or to minimize my data trail, given that I still want the benefits of technology. Why should Google/Gmail be allowed to analyze my communications with my friends or colleagues, esp. if I'm not even using Gmail? Why should Netflix be allowed to record what I watch, when, and in what order? Why should Facebook be notified that I read article X, esp. since I am not even on their network?!
What you said therefore makes no sense to me, it's a false dichotomy.
And before you congratulate yourself publicly on your own "civic and human engagement", consider that even just ten years ago people managed to be engaged without apprising Google, Facebook, et al. about their every impulse, incessantly. Yes, you could watch a DVD, read a book, or talk to your parents and no Evil Corp. was silently adding that to its Stasi dossier about you and your predilections. Somehow life went on anyway, and nobody called you a "hermit, intellectually speaking" for that. Talk about resetting expectations, jeez…
Hey, again, if living that kind of marginal existence ("I avoid the web as much as possible" ... and "Be offline as much as possible" ... and "If they insist on exposing me to their "smoke" (or, negative externalities), then I cut the contact.") suits you, that's great. I commend you, and I hope you continue to get everything out of life you want.
I'm not satisfied with that kind of inconvenience. These are, ultimately, social issues, not technical ones. For me, the solution must come from society (and consequently, become enshrined in law).
Dodging the big picture privacy issue with "avoiding the web as much as possible" and "cutting the contact" doesn't work for me.
EDIT: Whoa! You made some pretty big edits there! Mark those puppies next time, mister!
> I view my time on Earth as one marked by civic and human engagement, where I want to share my ideas, bounce them off other people, strive to improve my community and the world, etc.
I respect that sentiment, and agree with it. However, I think that it's much harder to do what you're talking about online. I would go so far as to say that the tendency of the internet is to encourage the "hivemind" and other bad practices. See, e.g., the eternal September, endless takes on how to moderate forums, algorithmic content filtering, etc...
I think the internet works great for making transactions between total strangers possible. I can collaborate on an open-source project or buy some widget. But I think that to really connect with people or improve my community or to delve deep into ideas, IRL is where it's at.
Note that the internet doesn't necessarily amount to public forums - there are many reasonable topic-specific semi-private groups all over the internet. Some of my best friends and even partners I've met through those, and I'd like to think we've made each other's lives better, or at least happier.
It's a stretch to call that a "marginal existence". If that were true, most of us were living a marginal existence 30 years ago. Almost nobody was "online" or used E-mail, yet somehow, by some great miracle, we all kept contact with friends and family, had social connections and lives, talked to our neighbors, etc. You don't need Facebook to have a social life.
I'm more concerned about the government tracking me than private companies tracking me.
As we've seen with the NSA over the last few years, mere laws will not stop the government from spying on you (if you can convince them to implement token restrictions on government surveillance in the first place, which is not often the case).
I'm waiting to see if this gets traction on HN. In case it doesn't, and so I can propose something only to the few who dig to the bottom of the comments:
The problem is social. A technological solution will not suffice.
I'd really like to see your extended thoughts, and this is getting better traction than I'd hoped when submitting it.
I half agree with you: technological solutions alone will not suffice, though they may be part of the solution.
But additionally, legal reforms, economic reforms, and social reforms need to be part of this.
Legal reforms to limit the ability of organisations to access, appropriate, store, exchange, and act based on exfiltrated data.
Economic reforms, as noted, to change the business model of publishing from ads-supported media.
Social reforms such that people push back on this. That's less a burden than necessary, IMO, as awareness of the scope of invasion seems highly developed, though responses to that are presently limited.
In concert with the above, provided and mandated technical means for people to protect their information, and to keep corporations and other organisations from grabbing and using it in the first place.
The problem is social. A technological solution will not suffice.
But technological solutions are probably necessary to some extent even if not sufficient, particularly when we're talking about communications channels where you're sharing something deliberately with one party but other parties are involved along the way.
Agreed. I'm working on a project using FOIA to build paths toward bulk communication records in Chicago. It's gone much better than I'd have hoped, in comparison to the state it started in. If others would like to help out, I'd like to start expanding it out to all of the US. :)
I agree, the problem is social. To be more specific, the problem is that the corporate clout is too thick and the corporate social structure is broken. Running after the money stick, with total disregard for the person at the other end of the leash. Not purposefully evil but stupid and blind.
I think the social structure of the 'proletariat' is fine, though the institutions like the news that support it are being eroded due to changing technology and the corporate clout.
The problem is trust. Technological solutions exist that embody trust. People must trust technological solutions through social means...a type of prisoner's dilemma if there ever was one.
I like Cory Doctorow. I really do, and I agree that this is a problem. But as I argue, the genie is already out of the bottle, the cat is out of the bag and the barn door is open. What we should focus on instead is mechanisms for mitigating and punishing malicious use of data, AS WELL AS putting the surveillance to good use, e.g. body cameras for on-duty police officers:
My thought on this, for some time, has been that people need ways to effectively express, legally and technically, their disagreement and noncooperation with such terms.
An alternative model for funding both innovative research and creative content (writing, music, film/video, visual, and other arts) also seems increasingly essential. Information and culture are public goods, and cash-on-the-barrelhead (or bitcoin-through-the-anonymised-exchange) transactions are largely not appropriate for them.
I am very much hoping this all dies in a fire. I'm increasingly concerned as to just how all-encompassing that fire may turn out to be.
Well, the most effective response is already here: don't use the service.
While that can be a big ask for many services, we're also doing a great job of replicating the functionality of many closed services in an open way. As long as the FOSS/hobby tech world still cares, we'll keep biting away at the new markets closed services create.
> As long as the FOSS/hobby tech world still cares, we'll keep biting away at the new markets closed services create.
I think nibbling away is more accurate. There are still no privacy-conscious alternatives to the big 3 social networks (Facebook/Twitter/LinkedIn), and none on the horizon. Yes, technological alternatives exist, but not practical ones, as I'm not likely to run into someone using Diaspora in real life.
This is similar to GNU/Linux vs. Windows on desktop where Linux gained a little bit of market share but was never a real alternative when it mattered. That defeat is all the worse in retrospect as Windows 10 is now a great vector of privacy invasion for Microsoft.
From a social justice POV, "just don't use the service" may be practical for those with a lot of education and opportunities for employment, but is a non-starter for someone who needs LinkedIn to find work or whose landlord expects to snoop through their Facebook account. We need a solution that helps everyone, not just those with the know-how and economic freedom to use niche alternatives.
> There are still no privacy-conscious alternatives to the big 3 social networks (Facebook/Twitter/LinkedIn)
I'm not a member of LinkedIn, so I'm constantly a bit confused by it. Does it offer anything compelling besides a list of the places you've worked and studied? Why couldn't I get the same benefit by putting up a static web page with a contact form? Am I missing out on something?
Discoverability and messaging. If you like hearing from a range of recruiters, putting a keyword-filter-friendly resume on LinkedIn is a good way to make that happen. The quality varies pretty widely, but at least some of them are worth talking to in my experience.
(Stack Overflow Careers is another good spot if you're interested, with in my experience a lower incoming message rate but a somewhat higher average lead quality, and none of the irritation that LinkedIn generally creates in its users.)
> Why couldn't I get the same benefit by putting up a static web page with a contact form?
The same question could have been asked of GeoCities and one could give the same answer: most people outside the tech bubble don't know and don't want to know how to put up a website.
Yes, it's "not that hard", but it's also not as easy as filling in a bunch of text fields on LinkedIn - especially when you factor in things like SEO if you have a common name.
Some people find the professional-life-focused networking compelling.
Me, I find it theoretical, thanks to LinkedIn's profligate handling of email address books combined with my history of helping maintain mailing lists and selling items on Craigslist before they implemented full bidirectional address anonymization. I've had a large number of invitations to connect, likely autosent, from folks I wouldn't recognize if they sat down in front of me.
I fundamentally don't see how "social network" (especially the use cases covered by Facebook, LinkedIn, and Twitter) and "privacy" can ever exist together.
* Services your employer uses and requires you to use.
* Services that government agencies require you to use.
As technological tools become part of the social, corporate, governmental, and institutional infrastructure, "opting out" becomes an increasingly less viable, and more punishing, alternative.
Also: you're dismissing the specific tools which directly address the situation: legal and technical means to loudly and clearly state "no", or to sue for damages where that isn't possible.
That's fine if you don't mind abandoning the externalities of the service.
For example, when all your friends now communicate mainly on Facebook (or WhatsApp, etc.), then your choice is to either lose touch with your friends, or use that service. Sure, nobody's technically forcing you to use Facebook, but the social externality means that you become cut off from your friends in a very real way if you don't.
It doesn't matter. There are many ways to get information from you in "public" ways. We can recover speech from video and keystrokes from sound. Target already predicts pregnancies, and Facebook predicts breakups. China already has a citizen reputation system the way we have credit bureaus. We can identify you by your gait, or by the wifi signals in your home without you even moving. We store most CCTV footage until such time as it would be useful.
And when will it be useful? The scariest thing is when all this data is used by smart algorithms designed to construct plausible stories about you in a court of law, making anyone a target for legal threats, jail time, and intimidation. It's already the case in more totalitarian countries, but this is about free countries, where juries of peers and other safeguards will be subverted with clever algorithms and a mountain of data.
All your blockchain and anonymous transactions are stored until such time as you slip up and get deanonymized.
All you are currently relying on -- all that our legal systems are relying on -- is the inefficiency of the attacker. You think they can't cross-reference enough material, analyze it deeply enough, at a large enough scale, fast enough, and can't put two and two together. But the growing repository of data is there to be (quantum-cracked and) mined for information.
> It doesn't matter. There are many ways to get information from you in "public" ways
Politically, this framework is at the root of the problem. Currently, privacy violation is only policed based on method of access, rather than a larger general concept. We put "hackers" in prison for years for "breaking into an email account" because they shared a few people's private information, yet commercial surveillance bureaus are free to do whatever they want because the data is already "out there". Focusing solely on access kills any independent concept of privacy.
Specifying what exactly should be considered wrong is obviously a challenge. I think a business entity having someone's identifying information in a database without an ongoing business relationship could be a decent first pass. This difficulty is a large part of what has stunted progress. (Just FWIW my usual assumption is that information dissemination can't be stopped, but I don't think that assumption is necessarily true when talking about policing commercial entities that inherently have government charters.)
I'm actually more hopeful for technical solutions to the problem, even though they suffer from problems of scope, implementation, financing, and adoption. But coming up with a better external concept of privacy can guide such things too, especially encouraging incremental change in business models as opposed to the moon shot of repaving it all with Free software.
According to Gervase Markham of Mozilla:
"If you believe you have a right to access all the free content on the Internet while blocking the ads which fund it, you continue to believe that [..]"
Mozilla seems to not want to interfere with online advertising practices.
"If you believe you have a right to access all the free content on the Internet while blocking the ads which fund it, you continue to believe that [..]"
Thanks, I will. I've been using the Internet since before online advertising became a big deal. I've run web sites of varying scales, often with original content, sometimes with original content that cost a lot of time and/or money to produce. I've also contributed widely to many more web sites and other online forums. I've paid for things online. I've charged for things online.
And I've never once, in all that time, relied on ads to fund any of my or my various businesses' contributions.
Online advertising has served a useful purpose in some cases, and I sympathise with those who have chosen to rely on it as a funding model and are now losing out because of ad blockers and the like. But online advertising has also led to huge amounts of ad-ridden junk clogging up any forum it can invade. It has led to serious degradation in both the performance and the security of the greatest information-sharing medium in the history of the world. Perhaps worst of all, it has led to a culture where other ways of compensating those whose work we truly value and of discovering new material of interest are a lot harder than they might have been. None of those are positive influences, and I'm not at all sure that online advertising hasn't done far more harm than good overall, even taking into account the great value that some ad-funded content and services have offered to some people.
This is not the same thing. I block all cookies and I still see plenty of ads. What's different is that the ad networks don't get to aggregate a profile on (effectively) all of my browsing. And yes, I believe I have a right to that.
I always disable third-party cookies and very rarely does it break something. The most recent one I remember is logging in at the Origin website (online game store). The login form at checkout doesn't load at all without third-party cookies.
While I would love to browse the web with all security and privacy features enabled and things like JavaScript disabled, it breaks too much. The middle ground between security/privacy and user experience/convenience is different for everyone.
I agree. I used to use NoScript, RequestPolicy, and BetterPrivacy, but I finally realized I was spending far more time tuning policies than I was actually using the web. And they were TOTALLY unusable by non technically inclined people.
My compromise now is to use uBlock Origin (virtually never breaks anything), HTTPS Everywhere (on rare occasion breaks a site), Self-Destructing Cookies (by design cannot break sites), and 3rd-party cookies disabled.
The only thing that banning 3rd-party cookies broke was CapitalOne360's gateway into regular CapitalOne, but they even managed to fix this in recent months.
There was an attack presented at Black Hat recently, called HEIST I believe, that could target users by issuing a special cookie (say, through an advertisement). The only known way to combat it is to turn off third-party cookies.
I mean, it's already happening. People walk around with smartphones, tablets and laptops which constantly record audio for their virtual assistants, and I have not once consented to having my voice recorded by any of those devices.
Has this actually been verified that they are recording without people's knowledge or consent and sending that data home vs just listening locally for a specific wake up word?
Don't get me wrong... I think it is horrid and the trend is worrisome, but I haven't seen conclusive proof this is happening yet.
>>When a backlash began, the app vendors and smartphone companies had a rebuttal ready: ‘‘You agreed to let us do this. We gave you notice of our privacy practices, and you consented.’’
This ‘‘notice and consent’’ model is absurd on its face, and yet it is surprisingly legally robust. As I write this in July of 2016, US federal appellate courts have just ruled on two cases that asked whether End User Licenses that no one read and no one understands and no one takes seriously are enforceable.
Just playing devil's advocate here - but I wonder if Cory has tried to claim that he doesn't have to pay his mortgage because he signed a lot of legal documents, but he didn't read or understand them.
There is coming a point in time where every person must have a mark in order to buy or sell. It will be your access and citizen membership to a new global order.
Enjoy your temporary freedom. History has a nasty way of modernizing itself.
Isn't this already happening? I got a tour from one of the techs at a major general store on how they track users "to enhance user experience". It's both impressive and terrifying. Using simple things like tracking the wireless signals on your smartphone they can see exactly when you pass or enter the store, which departments you visit and stay longest, etc. The system is even linked up to a camera at checkout so they have your picture linked to all other details they gathered.
In the country I live it's also very popular for every store to have their own plastic card you gather points on at every purchase (e.g. for discounts or freebies). This is an easy way of matching your purchases with all the other personal information they gather.
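To make concrete how little infrastructure this kind of in-store tracking needs, here's a minimal sketch of passive Wi-Fi presence logging with scapy. The interface name and the in-memory log are illustrative assumptions; it needs root and a card in monitor mode, and phones that randomize their MAC addresses will partially frustrate it, but the principle is the same one the tour described.

```python
# Minimal sketch of passive Wi-Fi presence tracking.
# Assumes a Linux box with a wireless card already in monitor mode ("mon0"
# is an illustrative name) and root privileges; the in-memory dict stands in
# for whatever real deployments store.
from collections import defaultdict
from datetime import datetime

from scapy.all import sniff, Dot11ProbeReq  # pip install scapy

sightings = defaultdict(list)  # MAC address -> list of timestamps

def handle_frame(pkt):
    # Phones periodically broadcast probe requests while scanning for known
    # networks; the transmitter MAC (addr2) is enough to recognise a
    # returning device and time-stamp its movements past the sensor.
    if pkt.haslayer(Dot11ProbeReq):
        mac = pkt.addr2
        sightings[mac].append(datetime.now())
        print(f"{datetime.now():%H:%M:%S}  saw device {mac} "
              f"({len(sightings[mac])} sightings so far)")

if __name__ == "__main__":
    sniff(iface="mon0", prn=handle_frame, store=False)
```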
The NY Times had an article in 2013 about this very thing. [1] But the amazing thing was, when customers found out they were being tracked, and the level of detail in the tracking methods, some of them complained. They were quoted calling it "creepy" and "way over the line."
If users knew how much of their entire lives could be pieced together by browser tracking, cookies, and the same techniques that advertisement companies use to target groups of people, I feel that they would say the same thing.
The practice of tracking your location in a store by means of wifi and bluetooth connection signals has actually been deemed illegal by a judge in the Netherlands. There is some hope in legislation catching up with technology in that respect.
Privacy and freedom are orthogonal concepts. A lack of privacy can only lead to a lack of freedom if institutions have excessive coercive powers and if a society has strongly held but commonly broken rules. IMO, getting rid of the latter as much as possible is much easier than guaranteeing privacy for everyone.
Great illustration of the pervasiveness in personal data collection that IoT will bring to our future. Check out ProjectVRM from Harvard https://cyber.harvard.edu/projectvrm/Main_Page. Vendor Relationship Management is a concept focused on empowering users in their relationship with service providers. The basic idea being that services (supply side) are in control of the relationship in Web and IoT services today, but historically, control shifts over time to users (demand side).
"You agreed to let us do this. We gave you notice of our privacy practices, and you consented."
The EU regards privacy as a right, and the recently passed General Data Protection Regulation (GDPR) enforces this right for EU citizens worldwide. US companies are slow to realize this, and there will be a flood of litigation from Europe against US companies that serve EU citizens when the law takes effect in 2018. GDPR implements many of the concepts of VRM and has the stated objective of putting users in charge of their widely defined personal information. Consent has to be explicit and opt-in prior to collection, so IoT companies would have to either differentiate EU citizens and require consent prior to capture or implement standardized policies that respect GDPR across the board.
My thinking is that the disruptive companies of tomorrow will have user trust and personal data collection transparency as a key differentiator. The problem is that the technology to enable this easily isn't there yet. Privacy and personal data management are complex and remain the domain of health companies subject to HIPAA and European privacy wonks. My team is working on a platform to bring easy and transparent personal data management (including consent) to all services that collect personal information. Check it out - www.carbn.io
from the article - "Notice and consent is an absurd legal fiction."
It shouldn't be. "Notice and consent" should instead be a class taught in high schools. The ability to read, understand, and NEGOTIATE these agreements should be within the mental grasp of everyone.
If you manage to renegotiate your contract/EULA with Apple or Facebook or Google because of your impressive ability to read and understand them, let me know.
In reality, the power these companies have means there is no negotiation. They are gatekeepers to nearly necessary tools to exist in society (especially for the non-tech-savvy) and they can put whatever rules they want on those gates. Your choices are to do whatever they want or walk away.
In fact, I'd posit it's a waste of time for someone who isn't a privacy attorney to read these take-it-or-leave-it-style "contracts" since you HAVE to accept them, and the only things you can argue about or discuss are provisions in the contract which are illegal or unenforceable, and that discussion happens much later in front of a judge.
The ONLY way we can protect ourselves from these abusive contracts (particularly the pervasive and rights-destroying mandatory binding arbitration clauses) is to use legislation to delete them from existence.
Nearly all tech companies try to use noncompetes to keep their employees in a brutal stranglehold and stop them from shopping around to try to earn the market value of their labor. California is where tech goes to flourish and startups abound, because noncompetes are banned there.
We need to ban mandatory binding arbitration altogether (there is no reason it should have ever been legal), and have very strict guidelines about what kinds of privacy/tech contracts are allowed -- with punishments for companies who violate, strict ones.
I mostly agree, though I think that California's massive amount of readily available tech venture capital, as compared to anywhere else on Earth, is a far more important factor in its abundance of startups, rather than its obscure law banning non-compete agreements.
The law isn't obscure at all, it's a fundamental part of employment law that separates California from many other states.
And saying it's due to tech VC is saying there's a lot of tech success due to the overabundance of tech success.
People who work at a company can quit and start a startup, even in the same industry, carrying with them their hard-won expertise. Without a legal cloud over their heads which will scare away investment.
"Obscure" does not mean "unimportant." It means few people know about it. I've lived in San Francisco and worked in the tech industry here for 10 years, during which time I've discussed this exact law with numerous people in many different roles. I can count on one hand the number of people who had heard of it before I told them.
Since almost nobody (here or elsewhere) knows that this law even exists, it's reasonable to conclude that it is not a significant factor in the development of the local tech industry.
Moreover, all of the large tech concerns whose profits fund those VC firms would be much better off without this law and are openly against it. See, for example, the anti-employee poaching ring that Steve Jobs set up.
There are far too many ToSes for any person to be able to read them, let alone negotiate each.
This problem was solved in an earlier age of commerce through a Uniform Commercial Code (throughout most of the US), or equivalent statutory or case law in other domains. Essentially, contracts were reduced to a common set of standard components. Exceptions might be allowed for specific cases, including unilateral "contracts of adhesion", but these too were generally limited.
In particular, ordinary transactional terms are limited by:
1. The scope and extent of the transaction. The term is for a single purchase or transaction, not an ongoing "relationship", with few exceptions (utilities, rental agreements, subscriptions).
2. There's a limit to the data exchanged. In general, only the minimum amount of data required for a transaction is provided. Even where personal information was taken down, it was recorded on paper forms and remained there, rarely being converted to electronic form. This is no longer the case: a license number (or a license held as surety) for, say, an hourly boat or bicycle hire might now be scanned electronically, OCR'd, entered into a database, and matched with other records.
3. As noted above, information one organisation gathered on you was only rarely shared with others. This is no longer the case. As my awareness of such practices has grown, I've become vastly less interested in transactions in which I'm aware my information is being exchanged: magazine subscriptions, credit or debit card purchases, anything with an email address or postal code, etc. For a good decade or more, I refused to sign electronic signature pads. I still generally balk at this.
But until terms of service are both standardised and codified with users' interests in mind, the present situation will only get worse.
This should be exploited by challenging the mutuality of the agreement. Unless there is an understanding by both parties about the basic features and requirements - a "meeting of the minds" - then there isn't a contract.
Currently the "yes, I read that" buttons are used as an indication of having read and understood the contract, but as you said, actually reading all that legalese would take a long time. It may have been hard to prove otherwise in the past, but today we have another option.
We now have algorithms that estimate a given text's reading level. Using that kind of technique, we can algorithmically estimate how long it should take someone to read a document, and from that derive a conservative lower bound on reading time: one low enough that practically anyone who actually read the document would exceed it. This needs to be purely mechanical.
With a minimum reading time established, every contract should be nullified unless each party was given, and actually used, at least that much time to read the document. That is, allowing anybody to "sign" an EULA before $MINIMUM minutes have elapsed should be prima facie evidence that no contract exists.
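As a rough illustration of how mechanical that floor could be, here's a sketch based on nothing more than word count and an assumed reading speed. The constants are deliberately generous guesses, not vetted values; a real rule would presumably plug in an established readability formula, but even this crude version makes a 90-second "I agree" on a 15,000-word EULA look implausible.

```python
# Rough sketch of a "mechanical minimum reading time". The reading-speed
# numbers are illustrative assumptions: fast readers of ordinary prose
# manage a few hundred words per minute, and dense legalese is slower,
# so a deliberately generous speed yields a conservative lower bound.
import re

FAST_WPM = 400           # assumed ceiling for ordinary prose
LEGALESE_PENALTY = 0.75  # assumed slowdown for long words / dense clauses

def minimum_reading_minutes(text: str) -> float:
    words = re.findall(r"[A-Za-z0-9'-]+", text)
    if not words:
        return 0.0
    avg_len = sum(len(w) for w in words) / len(words)
    # Crude proxy for reading level: longer average words -> slower reading.
    wpm = FAST_WPM * (LEGALESE_PENALTY if avg_len > 5.5 else 1.0)
    return len(words) / wpm

# Example: a 15,000-word EULA of dense legalese gets a floor of roughly
# 15000 / 300 = 50 minutes; clicking "I agree" after 90 seconds is then
# hard to square with a claim of having read it.
```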
I like the point about the UCC. I have thought about standardized contracts myself, though I didn't think to compare them to the UCC.
"But until terms of service are both standardised and codified with users' interests in mind"
But what about sites with a different revenue model? Say an image hosting site meant for personal photos, and an image sharing site meant as a platform for artists to sell their work. You would not want the former allowing any sort of copyright transfer, but the latter cannot function without it (at least without the ability for the site to take payment and provide a license). I'm not sure how we could standardize everything, especially if consumers are not willing to pay a subscription.
"In particular, ordinary transactional terms are limited by:"
1. But this would be an ongoing relationship, assuming they continue to use the service.
2. Again, subscription fees would be needed to sustain the business; otherwise advertising is the only way. And there's an old joke in advertising: half of all advertising budgets are wasted; the trick is figuring out which half. I do see your point about a driver's license number not being needed for a long period of time.
There was an attempt to come up with a UCC for services, within the US, though it failed to garner sufficient support: UCITA.
It's not clear to me why a reasonable breadth of interests couldn't be addressed. The usual T&C generally address limits on the liability of the site's owners, occasionally try to impose binding arbitration or limits on class-action suits (among my complaints against the so-called "Kinder, gentler Reddit", Imzy), jurisdiction, reverse-engineering clauses, etc.
Allowing users to specify licenses for submitted works would address many of your concerns. A standard set of merchandise clauses, including, say, escrow, liability, and chargeback terms, might be among the boilerplate additions made to a standard contract.
But the point is to make the contracts themselves standard and modular. There might be a base services contract, a base merchant contract, and a base rights-for-sale contract, but not infinite variations on each. Also limits on what sites or users might carve out as grantable or transferable rights.
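As a toy illustration of "standard and modular" (and nothing more than that), the shape might be a small fixed set of base contracts plus a bounded menu of pre-approved clause modules. Every name below is made up for the example, not a proposal for actual legal drafting.

```python
# Toy model of "standard and modular" terms: a small, enumerable set of base
# contracts and clause modules, rather than infinite bespoke ToS documents.
from dataclasses import dataclass, field
from typing import List

@dataclass(frozen=True)
class Clause:
    name: str          # e.g. "liability-cap", "user-retains-copyright"
    negotiable: bool   # whether either party may strike or swap it

@dataclass
class StandardContract:
    base: str                                   # "services", "merchant", "rights-for-sale"
    clauses: List[Clause] = field(default_factory=list)

    def with_clause(self, clause: Clause) -> "StandardContract":
        # Only a bounded, pre-approved set of additions would be allowed.
        return StandardContract(self.base, self.clauses + [clause])

# A personal photo host sticks to the services base with no copyright-transfer
# clause at all; an art marketplace starts from the merchant base and adds a
# standard escrow clause.
photo_host = StandardContract("services")
marketplace = StandardContract("merchant").with_clause(
    Clause("escrow-terms", negotiable=False)
)
```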
My point in noting that ordinary commerce is limited to a single sale transaction is just that: that these are simple transactions and hence the associated legal binding is also simple. Ongoing relationships are inherently more complex.
There are alternatives to advertising and subscriptions, including non-market constructs.
In the example I posed, the value of the driver's license as a hire surety is that the hirer is quite likely to return for it. The disadvantage, today, is that the license is not only "valuable to the owner" but also a "hive of data which can be used to draw additional relations".
Some years ago I discovered that the purchase of certain over-the-counter medications required, by store policy though not by local law, presentation of a driver's license. I held up my license for the clerk to examine visually. He tried to take it from my hand, which I refused. He wouldn't close the sale without scanning the card. I walked off without paying and without the product.
I've been insisting on respecting my privacy rights for some time, and am not above forgoing business, taking my business elsewhere, or making others pointedly uncomfortable for asking questions I won't answer. Sadly, I am an exception.
> However, there’s nothing intrinsic to self-driving cars that says that the data they gather needs to be retained or further processed. Remember that for many years, the server logs that recorded all your interactions with the web were flushed as a matter of course, because no one could figure out what they were good for, apart from debugging problems when they occurred.
Clearly someone who doesn't understand how machine learning works. The "further use" of the data is obvious: to train and improve the driving algorithm.
"Every page with a Google ad was able to both set and read a Google cookie with your browser (you could turn this off, but no one did), so that Google could get a pretty good picture of which websites you visited."
This is how cookies work in general. What is unique about this in the context of Google? Google didn't invent cookies. What am I missing?
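For what it's worth, the mechanism in the quoted sentence is ordinary third-party cookie behaviour; the significance is the reach that comes from one party's content being embedded on a huge fraction of pages. Here's a minimal sketch, using Flask and a made-up tracker endpoint, of how a single third-party cookie plus the Referer header adds up to a cross-site browsing profile.

```python
# Minimal sketch of cross-site tracking via a third-party cookie. The
# framework choice (Flask), the /pixel.gif endpoint, and the in-memory store
# are all illustrative assumptions; the point is only that one cookie,
# readable on every page that embeds the tracker, links visits across
# otherwise unrelated sites.
import uuid
from collections import defaultdict

from flask import Flask, request, make_response

app = Flask(__name__)
profiles = defaultdict(list)  # tracking id -> list of referring pages

@app.route("/pixel.gif")
def pixel():
    # Every publisher page embeds something like
    # <img src="https://tracker.example/pixel.gif">.
    uid = request.cookies.get("uid") or uuid.uuid4().hex
    page = request.headers.get("Referer", "unknown")
    profiles[uid].append(page)           # the cross-site browsing history
    resp = make_response(b"GIF89a")      # placeholder body, not a real image
    # The cookie is scoped to the tracker's own domain, so the browser sends
    # it back from every site that embeds the pixel -- the "third-party" part.
    resp.set_cookie("uid", uid, max_age=10 * 365 * 24 * 3600,
                    samesite="None", secure=True)
    return resp

if __name__ == "__main__":
    app.run(port=8000)
```

Blocking third-party cookies, as discussed upthread, breaks exactly this linkage while leaving first-party ads and cookies intact.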