Leadership starts at the top, with the values of the company. There is a huge gap between the CEOs of Microsoft and Apple on one side and Zuckerberg, who has never understood the value of respecting boundaries, on the other. That culture likely permeates the whole company, as seen in their handling of partners, API permissions, privacy settings, etc.
I doubt that Zuckerberg is able to fix the issues, as he is at the root of them. It looks to me like the situation at Uber, but with FB there may still be something to be saved if they move fast.
The most obvious sign that this culture permeates the entire company is the open-plan offices. Employees who can tolerate this sort of environment seem less likely to be super privacy-conscious types.
I don't think Zuckerberg is as actively scummy as Kalanick was. Fixing this requires being very actively ethical, and I think he's just falling short. I don't think Zuck is incapable; he just needed this kick in the pants to do something.
> In reality, Apple collects more information about its customers than Facebook because it offers more products and services
The author seems to be conflating what’s stored on the device with what’s being “collected”. There’s a difference; with Facebook everything ends up on their servers, while with Apple much less does. If these companies were to be hacked tomorrow, with Apple the only information that would be made available is stuff that’s been sent, such as iCloud data. Things that stay local, such as call history, would remain secure.
That's funny, because Apple does upload your call history to iCloud and doesn't even mention it much. They have their users' location and almost all of their phones' data - including iMessage - thanks to iCloud Backup, etc.
None of this is end to end encrypted. They only encrypt Keychain and payment data end-to-end. iMessage encryption is moot when it later gets uploaded to iCloud from your device.
>"None of this is end to end encrypted. They only encrypt Keychain and payment data end-to-end. iMessage encryption is moot when it later gets uploaded to iCloud from your device."
Can you elaborate on why "iMessage encryption is moot when it later gets uploaded to iCloud from your device."? What do you mean by Keychain data being encrypted end to end? When does keychain data leave the local host?
Also might you have any citations or resources you could share on these Apple practices? Thanks.
iCloud Keychain is one of the features of iCloud. It uses a master password to ensure not even Apple can read your keychain secrets. All of this basic info is readily available on Apple’s site, they have very clear privacy info.
Can you explain a bit more? Apple iCloud services collect and upload a huge amount of data to Apple servers. Emails, calendars, tasks, your private messages (iMessage), phone diagnostic data, GPS location data (cell tower / wifi location helper service), SMS, MMS messages, application data and more.
Sure, a lot of this is encrypted, but Apple explicitly warns you on its support page that it WILL give iCloud data to authorities if a warrant is sent.
Quote:
> iCloud content may include email, stored photos, documents, contacts, calendars, bookmarks, Safari browsing history and iOS device backups. iOS device backups may include photos and videos in the Camera Roll, device settings, app data, iMessage, SMS, and MMS messages and voicemail. All iCloud content data stored by Apple is encrypted at the location of the server. When third-party vendors are used to store data, Apple never gives them the keys. Apple retains the encryption keys in its U.S. data centres. iCloud content, as it exists in the subscriber’s account, may be provided in response to a search warrant issued upon a showing of probable cause.
iOS device backups sort of poke a hole in my comment, since they by definition store essentially everything on your device that I said Apple keeps local. I’ll fall back on a weaker argument, that Facebook collects similar information as well, especially if you’ve ever used an Android device.
What Apple will do with our data when provided with a legal warrant or subpoena is a very, very different question from what they will do when provided with an opportunity to monetize the same data.
The problem with Facebook is not, and has never been, that they will give the data they have to the government as part of a properly-conducted investigation into allegations of criminal misconduct. It's that they themselves will use it, and will sell it to a wide variety of other corporate entities who will use it, for a wide variety of other corporate purposes.
Including, as recently demonstrated, attempting to sway elections.
You have not linked to a support page, you have linked to a PDF titled 'Legal Process Guidelines'. (That is definitely NOT a privacy policy.)
https://support.apple.com/en-us/HT202303 would probably be the right support page. However, Apple mentions what is done – not what is NOT done, in particular end-to-end encryption of iCloud backups.
I still agree that Apple plays in a different league than Facebook with regard to privacy. At the same time, there is still a lot to do for Apple, especially regarding data security. Just a current example:
I think it's unarguably true that having our data online and backed up is a risk that needs to be weighed against the very real advantages we get. That applies to both Apple and Facebook. However, only one of those companies has explicitly built a business model around selling that data. Let’s not set up any false moral equivalencies.
By itself, having data we have chosen to entrust to them isn’t a problem. What matters is what they choose to do with it, along with their gathering of information we have not chosen to entrust to them at all.
Speaking of which, can we stop equivocating between "selling that data" and "selling ads which are targeted partially based on that data," as you did in your post?
Microsoft has processes in place to minimize business risk due to privacy and security problems (the two usually go hand in hand). All engineers must go through regular training, and features must go through security and privacy reviews. In these reviews, systems and their threat boundaries are described, as well as the data that flows over system boundaries or is stored.

PII (personally identifiable information) gets anonymized unless absolutely necessary (e.g. for a profile page). The mantra is that the engineers should trust the system enough to put their own data in. Then there is also GDPR, which requires adding the ability to delete personal data on request, even if it is distributed across different systems.

It's even tougher when it's about customer data (e.g. Office 365). That's an absolute black box, so to do ML on this data you might put training into a firewalled cloud and get a metric back, but there is no way of looking at any concrete data.

So, MS treats the prospect of privacy breaches as an existential risk and acts accordingly. It has been this way for at least 10 years.
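To make the anonymization step concrete, here's a minimal sketch of the kind of PII pseudonymization such a pipeline might use. The key handling and function names are illustrative assumptions, not Microsoft's actual implementation:

```python
import hashlib
import hmac

# Illustrative only: in a real pipeline this key would live in a
# secured key store, never in source code.
SECRET_KEY = b"example-only-secret"

def pseudonymize(user_id: str) -> str:
    """Replace a direct identifier with a keyed hash so analytics
    systems can correlate events without ever seeing the raw PII."""
    return hmac.new(SECRET_KEY, user_id.encode(), hashlib.sha256).hexdigest()

# The same input always maps to the same token, so joins across
# systems still work...
assert pseudonymize("alice@example.com") == pseudonymize("alice@example.com")
# ...but different users get different tokens, and without the key
# the original identifier can't be recovered by brute-forcing hashes.
assert pseudonymize("alice@example.com") != pseudonymize("bob@example.com")
```

Using a keyed HMAC rather than a plain hash matters here: with a plain SHA-256, anyone could rebuild the mapping by hashing a list of known email addresses.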
The "lost years" spent on fixing Windows security fundamentals between XP and Vista perhaps taught Microsoft something important about long-term risk management.
Their web servers need decrypted data in order to handle these office documents, so client-side encryption is not an option.
Server-side encryption, where the server knows the keys, serves very little purpose because the threats it protects against are extremely unlikely. What is it protecting against, armed robbery of the datacenter?
I want to be protected against the exact same attack vectors that Microsoft's business customers are protected against and the same attack vectors that all customers of Google, Apple or Dropbox are protected against.
Perhaps it's all just security theatre as you suggest, but I am not the one running these data centers, so if all these companies say there is a threat, then my instinct is to believe them.
I can’t remember them saying there is a threat, or mentioning what it is. I only heard them saying marketing BS about how safe my data will be on their servers.
I doubt that any of them will ever publish their most detailed threat models.
If I had to guess, I would say the most likely threat is nosy or corrupt staff.
Protecting a key server is easier than protecting tens of thousands of servers and physical disks.
Legal requirements are not necessarily baseless either. They are in place to protect someone against something.
The data I have on these cloud services is extremely sensitive. All my identity documents, proof of address, examples of my signature, financial data, health data, etc. If someone were to get hold of these documents, they could steal all my money, my identity and make life hell for me.
> Most people won't fiddle with the controls because they may not even realize what all the fuss is about.
Good to see this _starting_ to become part of the discussion: that most users are not educated and motivated enough to take privacy seriously. I'd like to see regulation that makes "here - agree to this 200-page TOS" unacceptable as a form of gaining user approval, especially when - as in the case of Facebook - you're agreeing not only to share your own data but also data on friends who aren't even using Facebook.
Idk about Microsoft. Of all the personal assistants, Cortana is batshit insane. Furthermore, their quantum team is the only one yet to produce a qubit, while IBM is using 50. The last decade of Microsoft acting like the college one-and-done has depleted its ability to innovate. Nevertheless, at least they still rely on tangible assets to turn a profit.
Regarding qubits, the number doesn't mean much; the error rate is far more important. Microsoft hasn't yet created a real-world topological qubit, but topological qubits have much better noise properties in theory.
The article conflates the encrypted binary blobs which are your iCloud backups, with the kind of structured data Facebook gathers. They are not remotely comparable.
I’m surely wrong, but I thought I read a thing that iCloud blobs were actually not encrypted so that Apple could deduplicate common files/blocks/whatever.
"In reality, Apple collects more information about its customers than Facebook because it offers more products and services."
Was/is all that data collection necessary?
Did they (or Microsoft) make decisions about data collection based on perceived competition from Facebook or Google?
Considering e.g. how ads were integrated into MS software products (e.g. the "free" versions of Office software that appeared a number of years ago) I think the answer is yes, at least for MS.
I believe all these companies (AAPL, MS, FB, GOOG, AMZN, a few others) react to each other and often adjust their short-term and long-term goals accordingly, even when they might be thought not to be competing.
They show ads, but not based on your detailed personal profile or the content of your mail and things like that. Microsoft is a software company, not an ad company. I remember Microsoft making a big deal out of that at one point, saying that Google "reads" your email and stuff. That was really quite long ago; now things are even more personalized for Google and FB, whereas Apple and Microsoft have never been so deeply involved in the personalized-ads business. Their core business models are different.
They may spin it as necessary for "diagnostic" purposes or some such, to improve "user experience". This may be true, but at the same time they reserve the right to use the data however they wish. It may have much higher value if used for commercial purposes. Will they ever do that? Hopefully not, but they are in a position to do it, and that's the problem -- they are creating a company asset (user data) that is also a potential liability for users who value privacy.
When Apple's CEO says "we could have monetized our users but we didn't", that should raise a red flag. Why are they even in a position to monetize users? It's hardware. His comments are pure marketing; he's trying to differentiate Apple from these other companies (yet he's admitting Apple could easily do as they do). If Apple was really "doing the right thing" on privacy, they would not be collecting data by default, they would not have integrated Facebook and other such ad-driven companies into iOS (perhaps they have finally removed them now), and they would be training users to think twice before uploading private data into "the cloud". That is, if Apple was really concerned about user privacy.
All collection would be optional and certainly would not be encouraged or enabled by default. The default would be no data collection.
There is a consistent, predictable effort to flag or downvote every comment that expresses any skepticism of Apple. This is really sad. No company is perfect. I am not anti-Apple; I have been using their hardware since the Apple ][. If users want Apple to improve then suppressing all commentary is not going to help. I think constructive criticism could actually improve the situation. Companies are "listening".
The truth is that Apple has been aggressively collecting user data just like these other companies, likely at least in part for "competitive purposes". They have invested heavily in building data centers to hold user data. Today's Apple computers start dialing in to Apple servers the moment the user powers them on. Whether they use the data to serve ads is not the issue. Apple is not training users to be wary of such collection. Quite the opposite.
It just doesn't fit with "Privacy is a fundamental human right." Users who want privacy (all thirteen of them, for now) may not want their data in the possession of third parties, stored in some data center. This is just common sense. Those users are the models to follow if privacy is really, truly important. Why would a company try to frustrate the user who wants to manage their own privacy?
If such users are marginalised (e.g. limited hardware functionality unless one participates in data collection, because Apple is upselling users after purchase on "Apple services"), if Apple subscribes to the belief of other web companies that all users want their data stored with third parties (truly, the third parties are the ones who want it), then this is making privacy much more difficult to achieve.
Apple's privacy terms state users are not "required" to submit data, but if they don't, some services won't work. It is the same type of "subtle language" as referred to in the recently leaked Facebook memo.
I've got plenty of karma points to sacrifice. It's worth it. Apple can most certainly improve, but not without honest feedback.
Do you agree that some services not working if data isn’t collected has to do with the way those services work, rather than with punishing users who don’t give up their privacy so that Apple can use their data to....
If the user is being shown an advertisement when opening a local software program, that means a request is being made automatically to some remote server. That is data collection. If the user clicks it, that's more data collection. And so on.
For the user, there is simply no need for a local program, e.g. a "word processor", to connect to the internet at startup. That connection attempt serves the needs of Microsoft, not the user. It sacrifices user privacy for the benefit of Microsoft.
IMO, they would not have made these sort of changes were it not for the success of Google and similar ad-supported web businesses.
As another commenter points out, Windows 10 is apparently full of this type of behavior. Telemetry is data collection.
Lets not forget about Bing. More Google mimicry.
LinkedIn, like Facebook a seemingly benign and useful "service", is one of most egregious data collectors, and is a major threat to user privacy. They are part of Microsoft. Microsoft wants user data and has paid billions to get it.
> Microsoft, with its cloud, software licensing and subscription businesses, is even less likely to go rogue in data collection because it no longer has a mobile platform to speak of.
But they have a desktop platform and they used it to go rogue with data collection, all the way. Just because they've failed on mobile doesn't automatically make them a privacy-oriented company.
Microsoft is a company that, in its very long history, has repeatedly pushed legal boundaries at the expense of its business partners, its individual customers, and the software ecosystem. But the company has (had?) a very different mode of operation than Facebook or Apple. Neither of the two lines above sounds like M$ft here.
> "The truth is, we could make a ton of money if we monetized our customer — if our customer was our product," Cook said on Wednesday. "We've elected not to do that."
You elected to pocket up to 30% of all app revenue generated from your users and their data ($11.5 billion in 2017). How is that not monetizing your users?
I take it to mean that when he says "monetize our customer" he means "the money comes from someone other than the customer themselves." Pocketing 30% of app revenue falls more under "providing a service" than "monetizing customers."
Apple could have a similar situation. Via the iOS SDK, developers can ask for access for a phone’s address book. These contact connections could be harvested by bad developers. They could also be sold by those developers. This is merely one example. All companies with data and APIs are vectors of attack.
The difference is that the data is on your phone, not Apple's servers (unless you back up with iCloud, but even then I believe it's encrypted). You can decide which apps to run on your device; you have no control over what Facebook does on its servers.
Definitely a difference but there is a similarity in vectors of attack: The contact address book can be taken off the phone using the Apple SDK. Then, that data can be anywhere the developer wants to store it.
This is a non-issue. Do you really want a mobile OS where third-party apps can do literally nothing? No contacts, photos, camera, accelerometer, GPS, microphone.
Each one of those explicitly requests the user's permission. Facebook would track you across the internet using nothing but Share buttons on webpages.
Facebook's issues stem from historically overly-permissive API access that could effectively harvest your entire graph via a single node's acceptance. Some small facsimile of that is represented by your example, but not at the scope it was with Facebook. It's one-to-many, but not a graph. I think this is an important natural constraint and a noteworthy distinction. It makes connecting data more of a crapshoot.
Apple is equipped with far more resources at present to deal with app review than Facebook has. They've traditionally been far more draconian about their own rules than has Facebook. This means something like what you describe is less likely to fall through the cracks or propagate the way abuse did with FB in the pre-2014 days.
You're 100% right that any access is effectively a vector, but that's needless reductivism. In 2018 society seems to accept some level of visibility but not unfettered visibility. This is the balance that social media and data gatekeepers will have to contend with.
That said, it doesn't take much to tip the scales these days. People tend to have a healthier skepticism toward the giants of tech these days, and if you burn a little bad PR it can have some terrible consequences. Which is why Facebook has pulled some plugs in a panic on the developer side of things in response.
As has been mentioned countless times in the wake of this kerfuffle, MySpace was the de facto social media company until one day they weren't. It's eas(ier) to introduce competition to Facebook than to, say, Apple or Microsoft.
Finally, Facebook's product is, well, data. One could argue that it's a platform and data, but the former is a loss leader for the latter.
So for a number of reasons I find it unlikely that even were Apple to have a PR misstep like this that they'd face the same existential crisis.
They could have, but that's why Apple has an app review process. It's been criticised (by me, for one) for being heavy handed and slow, but recent events have shown that there's a wisdom to it.
No one really knows. But I wouldn't be surprised if there's a mix - I'm sure there's some automated investigation of how much data is sent, when and what that data is.