Apple is in a good position in the market to do this, but it is just really hard to trust any US company.
They say they will never give anyone access to their servers but how can they make that promise when they don't make the laws?
As an aside I am finding it really difficult to delete my iCloud account, in fact it seems that is impossible.
Locating data outside the US is in no way a defense against US intelligence agencies. There are theoretically controls on domestic spying, and some possibility that Congress could make it illegal. They have a mandate to spy on foreign networks.
If they haven't cracked your European email provider, then they're not doing what we pay and order them to do.
> If they haven't cracked your European email provider...
The difference is that European email providers are not cooperating, because they aren't obliged by your laws, whereas US companies are not only obliged to comply with requests, but they are also coerced to keep it a secret.
> ... then they're not doing what we pay and order them to do
That's a good thing to know, plus this is reason enough to pressure our governments and companies not to buy US products or services. And in case you haven't noticed, this already has tangible effects, as fear of industrial espionage is spreading like wildfire in big companies; I've noticed this first hand in the German companies I'm in contact with. On the negative side, the US is positioned as the steward of the Internet, and because your government fucked things up so badly, this is the perfect opportunity for other countries to balkanize the Internet, build firewalls, etc.
So I hope you're happy about how your taxes are being spent.
The NSA doesn't need cooperation. It can pwn sysadmins, plant covert operatives, and backdoor equipment in transit (including foreign-made equipment, so long as US intelligence can influence the shipping carrier, for example by recruiting employees or hacking ancient legacy software). If it can't, then it can pwn the other side of the conversation, or watch the SMTP in cleartext through a submarine-tapped undersea cable.
Disband the NSA and some other agency, some other country, will do the same thing.
You're bikeshedding. End-to-end encryption with HSMs and trusted execution environments everywhere, always. Verifiable, deterministic builds. A genuinely trustworthy, decentralized PKI. Better software engineering security practices, a professional barrier to entry, and an ethical system (a la the Bar or medical boards) with teeth that will reliably eviscerate people and companies who write and run irresponsibly sloppy code.
The cat's not going back in the bag because you avoid the US. Fighting over which service providers you send cleartext through, whose hard drives your unencrypted data sits on, who has the power to MITM you, is a waste of time and a distraction from the real challenge of developing and adopting security systems and practices that make doing what the NSA is doing actually difficult.
But of course it does. Security is not a black and white issue, but rather a matter of cost. And the fact is US companies are much easier and more cost effective to crack because they can be (legally) coerced and nobody has unlimited resources, not even the NSA.
> Disband the NSA and some other agency, some other country, will do the same thing
This is one of those logical fallacies that keeps popping up: so we should bend over and take it, because if it's not the NSA, it will be somebody else. Even if you're right, bad actors in society should get punished, otherwise they'll never learn. And indeed, it doesn't seem fair to punish US companies, many of whom really want to be good and faithful to their customers. But I've seen many signals that the American public approves of and finances this behavior (the above comment included), and the US government has never apologized (to us foreigners), so avoiding US services and products can become a matter of necessity.
> The cat's not going back in the bag because you avoid the US.
Yeah, but you see, I'm not a US citizen, so I don't even get to vote on your laws, and your government has made it clear that when it comes to foreigners, everything is allowed. We do have intelligence agencies here, and they cooperate even with the NSA and so on and so forth, but there is no behemoth like the NSA. And as an EU citizen I would at least have ways to fight it.
> developing and adopting security systems and practices that make doing what the NSA is doing actually difficult
Only a software developer would end up thinking that all political and social issues can be solved with technology. The world doesn't work that way. You want cryptography? It will eventually get outlawed and there is already precedent in the US.
> I'd suggest you go 'deep inside you' and think again what should be the priorities to pay for. Thanks!
As you're perhaps aware, the US is a large country (in area and population), with a political system that depends on a sharply divided electorate. The things you seem to think are obvious issues to tackle are actually rather controversial, and prone to demagoguing by opportunistic politicians.
Congratulations on the ideal political process in your home country. Perhaps one day we'll attain the same level of enlightenment; or perhaps you'll gain a bit of maturity and understanding of the landscape here, and develop more realistic ideas about the US.
I'm of course aware of the political system, thanks. My comment was about the personal statement 'cracked your European email provider, then they're not doing what we pay', which I read as him being very much in approval of the 'we pay' part.
Maybe I should have refrained from the additional remarks. But I got frustrated about a fellow Hacker News user kind of defending intrusions into friendly nations' companies while not being willing to improve things at home. (Yes, I accept that it may not be possible or easy.)
[But - what a pity - the next shooting didn't wait long to happen. Maybe The Economist has a good and realistic understanding of the landscape, writing: "Those who live in America, or visit it, might do best to regard them [the Normalization of Gun Massacres] the way one regards air pollution in China: an endemic local health hazard which, for deep-rooted cultural, social, economic and political reasons, the country is incapable of addressing.". And then they continue: "This may, however, be a bit unfair. China seems to be making progress on pollution."]
Maybe I'm dense or naive, but I don't think there's any precedent for that. A gag order is one thing (and there are certainly places for it), but forcing someone to lie would hopefully violate the First Amendment.
It's strange that chainsaw10 is being downvoted for their comment. From the second link above: "Have courts upheld compelled false speech? No, and the cases on compelled speech have tended to rely on truth as a minimum requirement." That sounds more like there is no precedent for forcing people to tell explicit lies.
Also true. But we know from the Snowden revelations and other sources that Apple has been backing up its promises. So we have at least some level of assurance that Apple is a good actor.
https://en.wikipedia.org/wiki/40-bit_encryption was the most secure thing it was legal to export. The Netscape browser, in particular, made you jump through a lot of hoops to get the 56-bit version meant for US audiences. As a result, even most Americans with internet access at the time ran the crippled international version.
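To put rough numbers on how crippled the export version was, here's the keyspace arithmetic (the 40/56-bit figures are from the comment above; 128-bit is included just for modern contrast):

```python
# Brute-force keyspace sizes: 40-bit export crypto vs. the 56-bit US version.
for bits in (40, 56, 128):
    print(f"{bits}-bit key: {2**bits:,} possible keys")

# The domestic 56-bit version was 2^16 = 65,536 times more work to brute-force.
print(2**56 // 2**40)
```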
Whether or not they hold the keys at present, Apple is in a position of power with regard to the iOS environment. In a technical sense it would be fairly straightforward for them to acquire the keys.
Trusting the company has nothing to do with it: they could be legally compelled to do so in a secret court, and gagged with an NSL to keep them from revealing such an order. Sadly, that's the reality we now live in.
No, they can't... not without designing changes into their hardware to allow retrieving the keys from the secure enclave.
In theory Apple could modify iMessage to MITM the key distribution server and enable eavesdropping. The only way to protect against that is to provide in-person validation mechanisms so users can directly compare keys. I hope they add such a thing, not that 99.99999% of their users would ever use it.
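As a sketch of what such in-person validation could look like (this is a generic key-fingerprint scheme of the kind used by other messengers, not anything Apple actually ships): each client condenses the public key it fetched from the key server into a short string, and the two users compare the strings face to face.

```python
import hashlib

def key_fingerprint(pubkey: bytes) -> str:
    """Condense a public key into a short, human-comparable fingerprint."""
    digest = hashlib.sha256(pubkey).hexdigest()[:32]
    # Group into 4-char chunks so the string is easy to read aloud.
    return " ".join(digest[i:i + 4] for i in range(0, 32, 4))

# Each user displays the fingerprint of the other party's key as their
# device sees it; a MITM'd key server would have substituted keys,
# so the two fingerprints would not match.
print(key_fingerprint(b"-----BEGIN PUBLIC KEY----- ...example..."))
```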
As far as the US legal system goes you'd need positive law to enforce wiretapping requirements. Courts (as a general rule) can't issue orders to force Apple to write new code or modify their silicon design to support something the government wishes it could have. Given the way SCOTUS has been approaching cell phone privacy I'm not sure such a law would pass muster.
Laws could, however, force the company to secretly push out an update that sends your keys to be held in escrow on government servers should the need arise to decrypt your stuff.
Switzerland, the Netherlands and Norway are a good starting point. I believe that now that privacy has become a major concern, we will see countries with a legislative background and experience in other sectors that require secrecy above everything else (e.g. private banking) evolve into secure havens for servers.
The Netherlands is on its way to being removed from that short list. The new WIV20xx (charter for the information and security services) grants very broad powers with very little oversight. I have been unable to find a decent source in English, but among its provisions:
- allows for "reconnaissance" on external networks, including breaking encryption or forcing targets to divulge keys. This "reconnaissance" apparently includes installing sniffers or data probes.
- allows for untargeted data collection on wired networks (including cell phone towers)
- has provisions for forcing data transit stations (including ISPs, but also AMS-IX) to comply with requests.
It depends on what you are trusting them to do. The NSA's not going to spy on you less just because you're not in the US. If anything, they'd spy on you more.
They would no longer need to ask for cooperation. Personally, I'd call that less safe.
At least in theory, the NSA could allow a compliant US business to be secure. If the NSA could not get data from a foreign business the easy way, I'm sure they would get it the hard way.
That's not what it's about for me. If companies like google and Apple want my trust they need to operate as their own government entity and make all users citizens and give them rights. Until then I'll stick to using as much FOSS as possible, never using social media for secure communication, and storing all my own data.
Basically, unless they let me see what's happening with my data by allowing me access to the code, I can't and won't trust them.
One doesn't need to give access to their servers in order to give access to their data. You just set up a hot spare and sync it over a leaky protocol, like FTP.
Ta-da! Both the marketing and the NSA are happy as clams.
> but how can they make that promise when they don't make the laws?
I agree, like Lavabit... Lavabit had all the best intentions but in the end the law screwed them over anyway.
Of course Apple has a lot more power than Lavabit, so it's nice they are taking this standpoint and being resistant. Hopefully they can contribute to a positive change..
Anyway, at least they are trying, something I haven't seen from other big companies like Google: https://en.wikipedia.org/wiki/Criticism_of_Google#Privacy
Setting aside the fact that US intelligence agencies can and will subvert foreign servers with impunity, foreign intelligence agencies operate with less oversight within their own borders than the US agencies do within its borders AND they tend to cooperate with US agencies (or, worse, Russia's or China's).
Very well written text. I like the simple and comprehensible language.
Other companies should take this as an example. (Only problem: many other companies likely wouldn't want to state in plain words how broadly they are gathering and aggregating your (my) data.)
I don't think I'm being overly cynical, but I'm seeing qualifiers[1] all over the place. The phrases seem carefully constructed to suggest more privacy than what is actually offered. For example:
"We don’t build a profile based on your email content or web browsing habits to sell to advertisers" but we do build profiles (bonus points for insinuating competition sells your information)
"we don’t read your email or your messages to get information to market to you." - but we do read and index your emails and messages, only to make our services and devices 'better' - by our definition
The sentences are built to sound like the italicized parts are not there, yet they totally change (negate) the meaning.
If I were a murderer, I could say "I do not kill men to eat their livers" and it would be 100% true, but it sounds like I'm not into killing at all.
Interesting note:
"P. FaceTime
FaceTime communications are end-to-end encrypted and Apple has no way to decrypt FaceTime
data when it is in transit between devices. Apple cannot intercept FaceTime communications.
Apple has FaceTime call invitation logs when a FaceTime call invitation is initiated. These logs do
not indicate that any communication between users actually took place. Apple has no information
as to whether the FaceTime call was successfully established or duration of a FaceTime call.
FaceTime call invitation logs are retained up to 30 days. FaceTime call invitation logs are available
only following receipt of a legally valid request"
iMessage is not mentioned. Does this mean they are capable of intercepting iMessage?
edit: in the FAQ it says "Can Apple intercept users’ communications pursuant to a Wiretap Order?
Apple can intercept users’ email communications, upon receipt of a valid Wiretap Order. Apple cannot
intercept users’ iMessage or FaceTime communications as these communications are end-to-end
encrypted."
>Does this mean they are capable of intercepting iMessage?
You know what the best way is to see who cooperates with law enforcement and to what level? Court documents!
I've been trawling court documents for the past few months (I'm writing a blog article on this) and I have yet to find iMessage content being used in court (unless access to the conversation was given by one of the parties). iMessage really does seem secure from a legal-system point of view.
For a comparison, I've found dozens of court documents from Google, Facebook, Microsoft handing over chat logs from Hangouts, Whatsapp/FBMessenger, Skype.
Anyway, if you want to see the level of cooperation and want to double-check privacy policies, actual court documents are the best way.
“Parallel construction is a law enforcement process of building a parallel - or separate - evidentiary basis for a criminal investigation in order to conceal how the investigation actually began.”
Police use parallel construction when they don't want to reveal their methods. I seriously doubt law enforcement would engage in parallel construction just because it had access to iMessage and Apple asked it to keep that quiet.
That's exactly the case I was thinking about with my comment
1) Police don't care that APPLE doesn't want to reveal its methods, and
2) Unlike the Stingray, it wouldn't be illegal for them to use iMessage evidence in court if Apple provided it to them (especially with the use of a warrant!)
I think it's more about the police not wanting the public to know that iMessage is owned; having trusted-but-broken communication services is a serious advantage for them.
I'm also looking forward to your post. WhatsApp is supposed to be end-to-end encrypted, with messages only stored on the phone; at least it has reportedly been like that since the end of 2014. It would be extremely interesting, and a big scandal, if you found documents with WhatsApp messages in them dating from 2015.
Ok... So, why would Apple receive the benefit of only exposing iMessage data in secret courts when Google, Facebook, and Microsoft all have to do it in open court documents?
Users start a new iMessage conversation by entering an address or name. If they enter a phone number or email address, the device contacts the IDS to retrieve the public keys and APNs addresses for all of the devices associated with the addressee. If the user enters a name, the device first utilizes the user's Contacts app to gather the phone numbers and email addresses associated with that name, then gets the public keys and APNs addresses from the IDS.

The user's outgoing message is individually encrypted for each of the receiver's devices. The public RSA encryption keys of the receiving devices are retrieved from IDS. For each receiving device, the sending device generates a random 128-bit key and encrypts the message with it using AES in CTR mode. This per-message AES key is encrypted using RSA-OAEP to the public key of the receiving device. The combination of the encrypted message text and the encrypted message key is then hashed with SHA-1, and the hash is signed with ECDSA using the sending device's private signing key. The resulting messages, one for each receiving device, consist of the encrypted message text, the encrypted message key, and the sender's digital signature. They are then dispatched to the APNs for delivery. Metadata, such as the timestamp and APNs routing information, is not encrypted. Communication with APNs is encrypted using a forward-secret TLS channel.
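The per-device fan-out described in that excerpt can be sketched roughly as follows. Note this is a toy model of the structure only: the keystream function stands in for AES-CTR, the XOR key-wrap stands in for RSA-OAEP, and the signature step is reduced to the SHA-1 digest that ECDSA would sign. None of this is real cryptography; it just shows the per-device payload layout.

```python
import hashlib
import secrets

def keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    """Toy CTR-style keystream (stand-in for AES-CTR; NOT secure)."""
    out = b""
    block = 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + block.to_bytes(8, "big")).digest()
        block += 1
    return out[:length]

def encrypt_for_devices(plaintext: bytes, device_keys: dict) -> dict:
    """One payload per receiving device, as the excerpt describes:
    a fresh random 128-bit message key per device, the message encrypted
    under it, the key wrapped to that device, and SHA-1 taken over
    (ciphertext || wrapped key) for signing."""
    payloads = {}
    for device_id, device_key in device_keys.items():
        msg_key = secrets.token_bytes(16)   # random 128-bit per-message key
        nonce = secrets.token_bytes(16)
        ct = bytes(p ^ k for p, k in
                   zip(plaintext, keystream(msg_key, nonce, len(plaintext))))
        # XOR wrap is a stand-in for RSA-OAEP to the device's public key.
        wrapped = bytes(m ^ d for m, d in zip(msg_key, device_key))
        digest = hashlib.sha1(ct + wrapped).digest()  # what ECDSA would sign
        payloads[device_id] = {"nonce": nonce, "ct": ct,
                               "wrapped_key": wrapped, "digest": digest}
    return payloads
```

The key point the structure makes visible: the server only ever handles per-device ciphertexts and wrapped keys, so a message to a user with three devices is really three independent encryptions.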
Apple says "Less than 0.00673% of customers have been affected by government information requests." That's approximately 1 out of every 14,000 -- seems like a lot to me!
well, put a number on it. Of the "Device request" class, clearly at least some of them really are stolen. So that can't be zero. In fact, I'd bet the vast majority of those claims are indeed stolen phones.
I'll give you P(spy) = .01, which feels plausible: around 140 incidents, but we have zero evidence. For something like P(spy) = .1, or 1,400 requests, I'd want more evidence. It doesn't need to be particularly good evidence, because I don't hold the NSA side to be particularly good.
But, you know, still more than vague comments about the state oppressing a vast number of people with undocumented shady tactics. They've been proven to use undocumented shady tactics in the past, but they also seem pretty bad at keeping that stuff secret for long.
How many customers does Apple have? How many people total is that? It's 6730 per 100,000,000 (hundred million). I think Apple has sold like 700 million iphones. But actual customers is less. So there have been maybe 20,000 or so government information requests?
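The arithmetic above, spelled out (the 300M and 700M customer counts are the commenter's rough guesses, not Apple's figures):

```python
rate = 0.00673 / 100  # "less than 0.00673% of customers" as a fraction

# Apple's percentage as an odds ratio:
print(f"about 1 in {1 / rate:,.0f} customers")

# Total requests implied by a couple of guesses at the customer base:
for customers in (300_000_000, 700_000_000):
    print(f"{customers:,} customers -> ~{customers * rate:,.0f} affected")
```

At roughly 300 million actual customers this lands near the 20,000 figure guessed above; at the 700 million iPhones-sold figure it would be closer to 47,000.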
Definitely. There was a story a few years ago about the South African postal system delivering 99% of all letters over some period. Turns out, when you aggregate that across how many they sent, they only lost a couple hundred thousand letters.
Percentage is a convenient con used in some contexts.
I guess I should have expected a number this high, but I didn't. 750-999 national security requests in half a year? Not warrants, but national security requests. That just seems insane to me.
When national security requests come with built-in gag orders, little-to-no oversight, and are probably easier to issue/get than warrants (no pesky judges or explaining to do) why would one ever use a warrant again?
We don't know that. We do know that the FBI, NSA, and other federal agencies are overly anxious to support local law enforcement - by providing wartime hardware and weapons, as well as support in spying. We know local police patrols use Stingray devices, disrupting cell service and sweeping up private communications without a warrant - that's just a single instance.
We have no idea how easy it is for local law enforcement to pick up the phone and request "support" by having an NSL issued.
One could infer how difficult it would be to maintain secrecy as the number of privileged enforcers expands: the longer the secrecy has been maintained, the fewer people probably have access to the privilege.
I'm comfortable assuming that my local beat officer probably can't signal to his superior to pull an NSL on me and start reading every piece of data my devices are streaming over TLS.
I'm not sure that high level enforcers should have easy access either, though. What's the old saying? Something like you rise to the level of your own incompetence?
Does anyone else have the suspicion that Prism was named prism because instead of getting companies to co-operate with the program they were siphoning off data like they did to google by tapping their fiber lines?
That would fit more in line with what we have heard about the tapping stations/rooms at AT&T/Verizon over the years.
I mean, if they were really involved in backdooring the individual servers of Google/Apple/Facebook, that would involve hundreds of employees at each company. Someone would have spoken out by now with some evidence to prove it.
To my knowledge that hasn't happened. Just this shitty-looking PowerPoint outlining when the data started coming in...
Yes, but according to google they were tapping the "dark fiber" (private lines only carrying google traffic) that were not encrypted between their data centers.
Those lines were for things like data replication so it was a goldmine for the NSA to tap. Those transmissions have since been encrypted.
Not entirely, though Google, and subsequently Microsoft, Apple et al., have upgraded inter-datacentre connectivity to be encrypted, reducing this attack vector.
> Prism in general was mischaracterized by the early reports. It was first reported as a persistent backdoor into servers, but it was actually just a way for NSA to automate requests for information through the FBI. This was detailed in later reports.
> Edit: for those skeptical about my comment above, here is more detail from a discussion about a year ago:
> I want to be absolutely clear that we have never worked with any government agency from any country to create a backdoor in any of our products or services. We have also never allowed access to our servers. And we never will.
The slides imply Apple gives access to data on customers. Tim Cook says Apple does not give access to their servers. Is that really the point you're trying to make?
That's just semantics and weasel words, nothing more.
Apple, Microsoft, Adobe, Symantec, and a handful of other tech companies just began publicly lobbying Congress to pass Cyber Threat Information Sharing legislation, like CISA, a bill that would give corporations total legal immunity when they share private user data with the government and with each other. Many of these companies have previously claimed to fight for their users' privacy rights, but by supporting this type of legislation, they've made it clear that they've abandoned that position, and are willing to endanger their users' security and civil rights in exchange for government handouts and protection.
> Apple has never worked with any government agency from any country to create a “backdoor” in any of our products or services. We have also never allowed any government access to our servers. And we never will.
Did Apple ever provide any insight into what access the Prism program had? They denied knowing about it[1], so either they are lying or it was a mole. Did they ever follow up with an investigation or conclusion as to what exactly the government had access to?
Prism in general was mischaracterized by the early reports. It was first reported as a persistent backdoor into servers, but it was actually just a way for NSA to automate requests for information through the FBI. This was detailed in later reports.
Edit: for those skeptical about my comment above, here is more detail from a discussion about a year ago:
If you were paranoid, it wouldn't take much to conclude that "worked with" and "allowed" could mean that this doesn't imply that there weren't or aren't backdoors or access that occurred without Apple's knowledge or approval.
Under a strict interpretation, Apple hasn't given the NSA "access to [their] servers" (not root access, not physical access, etc.). They've given them a means to access some user data. (Edit: Or just request it, see snowwrestler's comment below.)
To answer what access the NSA has, the best case would be merely access to iCloud email, iTunes Store purchase records and other things Apple can't encrypt. The worst case is everything.
It's a shame the options to disable sending your local spotlight queries to bing and apple are hidden away in a huge list of other apps. I wish they had one giant button for disabling all remote queries instead of 4-5 options spread throughout different settings sub-pages.
You know you can Google it, but I believe you haven't actually tried to do what the question asks, otherwise you would not answer with a plain Google query. I know, because I've actually spent the time to experiment, and I'm still not at the point where I can answer it exactly. I don't want "disable everything"; I don't want "disable what's written here, but I'm not sure if what I type is still transferred"; I want exact information confirmed by network traffic analysis. I even know how I could do this, but I don't have the time and motivation, and I know the sites that come up in Google queries didn't do their actual homework, they just "generated the content." SEO rules and stuff.
I've hoped somebody would have given an exact link to some competent and exact analysis. Google query it ain't.
Googling got me: "turn off 'Spotlight suggestions' and 'Bing Web Results'." Did both. Still got the web "suggestions" in my search results. I don't know if they are "Spotlight", "Bing" or "Safari", but they are there; some server must have been involved, as the results can't come from my phone. Clicked around a little more. Now it looks better. Or not. I try to avoid the search page. I don't know what actually turns off what, and still don't know who has reliably documented it.
It also seems that other stuff can send web queries as a result of what I type. Which stuff that is, and what's going on, somebody will still have to find out and explain. Apple still hasn't. Or I'm missing something, and I'd be glad to learn.
>You know you can Google it but I believe you haven't tried it yourself to do exactly what the goal of the question is, otherwise you would not answer with a plain Google query.
And you'd be wrong. In fact when I did it, I did it with a similar Google query, checked the first 2-3 results, settled on one site with instructions (it's a very simple setting anyway, it's not like you're hacking anything) and went on with my life.
Doesn't really address iOS 9, which changed things around a bit, and to even find anything about iOS at all you have to start looking at the 4th or 5th result, on some random domain. And even then, there are multiple odd places in the settings you need to check and verify, some of which are hidden in a long list of other apps.
It's detailed in the help page, the link to which is on the same settings screen that contains the switches to turn off.
Likewise on MacOS, there's a 'About Spotlight Suggestions & Privacy' button on the Spotlight settings page. Again, it's on the same screen that contains the switches to turn off the feature.
You don't need internet access to find these options, and I can't think of a better place to put the help. Knowing that these options exist in the first place is another problem though...
In iOS9, there's at least "Siri Suggestions" at the top, then intermixed with all your apps there's "Bing Suggestions" and "Spotlight Suggestions", then there's a "Suggested Apps" in the separate "Handoff & Suggested Apps" settings screen, then there is "Suggested Apps" in the "App and iTunes Store" settings screen, then there is "Safari Suggestions" and "Search Engine Suggestions" in the "Safari" settings screen. I have no idea what half of these actually do though. Very confusing.
Confusing, definitely. But in the context of your initial question, bringing up the Siri settings seems a little strange. By definition, if you are using Siri, you are using a remote query... Likewise, if you have hand-off enabled, it's going to be talking over the internet, how else can it work?
And "Siri suggestions" appears on the spotlight search screen to the left / when dragging downwards, so it is not at all obvious what the difference between "siri" and "spotlight" is. Yes, even when I want to do a local spotlight search.
In iOS, you need to turn off 'Spotlight suggestions' and 'Bing Web Results'.
This is documented in the 'About Spotlight Suggestions & Privacy' link at the bottom of the Settings->General->Spotlight Search panel. It's a shame that there's no switch in the actual Privacy panel though.
I use Apple products, including the MBP on which I am typing this post. I paid big bucks, and my expectation is that Apple should respect my privacy. Services like Google I use for free, and as such, monetization is fair. Not so for Apple, and I hope it won't let me down.
This seems to me like a rational approach. You get what you pay for. Or to put it another way, if you don't pay for the product, then you are the product.
Guess what. I used to be a Linux zealot for over ten years, my hobby was to custom-build my kernel and rewrite device drivers for some of my devices. But Linux had some minor issues that I couldn't quite put up with, while Apple so far just works for me. Apple is far from perfect, but it covers most of my needs and I am happy to use it.
I know strong advocates of freedom and transparency will say Apple is a closed-source, for-profit company, yada yada. The truth is, most things in our lives are: the cars we drive, the fridge, the TV, the watch we wear, and the hardware we use (Intel and ARM chips aren't open source, right?). So years ago I decided this free-software thing isn't for me.
We’re going to make sure you get updates here about privacy at Apple at least once a year and whenever there are significant changes to our policies.
Translation: we set up this page to reference in the inevitable future articles and criticisms of our policies.
Not saying it's a bad idea, but it reads very lawyerly to me. Like this one:
We don’t build a profile based on your email content or web browsing habits to sell to advertisers. We don’t “monetize” the information you store on your iPhone or in iCloud.
That makes sense, and I figure it's a true statement as written. But, bear with me here, I feel like it could still also be true they build a profile based on X, Y, or Z for internal use by Apple in the name of "making services better" as it were.
What I'd like to see at the bottom of the letter - and don't see even after clicking through a couple of the links - is a link to review all stored content by Apple in a nice, clean two-factor authenticated dashboard, and settings for all devices to be managed in one central location. That would be rather helpful to individuals...a big gesture of that buzzword "transparency" and all that! Yet I highly doubt such a portal / review capability would be implemented by Apple without much metaphorical kicking and screaming.
I was going to suggest that this page was meant to serve as a warrant canary but it looks like Apple had a warrant canary but it's gone now (Explanation of a warrant canary for the uninitiated in link): http://apple.slashdot.org/story/14/09/18/2216222/apples-warr...
Remember this: http://www.theguardian.com/technology/2011/apr/20/iphone-tra... in which the location history of users was tracked and offloaded to the user's PC when synched? I get that the "problem" was allegedly fixed, but there's absolutely zero reason to trust anything Apple says here. The potential of getting hit with National Security Letters with built in gag orders invalidates any conception of trust between users and groups that handle their data.
There could also be hardware or software backdoors/exploits that Apple has no clue about-- I would say almost certainly, given how inventive the NSA has been with both of those angles.
I'm still confused about whether the Snowden leaks meant that the NSA was cooperating with the companies listed, or had developed backdoors into them.
There were Google security engineers that seemed to be in the dark about the level of NSA engagement.
As a long time Linux user and a long time Google product user, I'd be willing to go all in on Apple if:
* They were truly, actively committed to defending a user's privacy.
* They allowed using your own domains (I currently have around 30 domains pointed to the same GMail account).
If both of the above were true, I'd ditch my Cyanogenmod-running HTC One M8 and buy an iPhone today. My privacy is worth more to me than the ability to install whatever music player, keyboard, etc. that I want, or to drop to a command line (though I do love having that ability and would miss it).
You really think you get privacy with Android? Have you ever monitored outbound connections on your device? There's a LOT of activity there. Even with Cyanogenmod's Privacy Guard, which at least improves the Android permissions debacle.
Combine that with GMail. I pay for the business version, yet the privacy policy is still hard to understand and I still get suspicious ads popping up which seem to have come from nowhere else but my email. Could be wrong, but it's uncanny.
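For anyone who hasn't actually monitored outbound connections: on a rooted Android device (or any Linux box) the kernel publishes its live TCP table in /proc/net/tcp, which is what most on-device network monitors parse. A minimal sketch, assuming the standard Linux table layout (some Android builds restrict access to this file):

```python
# Sketch: decode remote endpoints from the kernel's /proc/net/tcp table.
# Assumes the standard Linux format; access may be restricted on Android.
def decode_addr(hex_addr: str) -> str:
    """Decode the kernel's hex form, e.g. '0100007F:1F90' -> '127.0.0.1:8080'."""
    ip_hex, port_hex = hex_addr.split(":")
    # IPv4 bytes appear little-endian in /proc/net/tcp, so walk them backwards.
    octets = [str(int(ip_hex[i:i + 2], 16)) for i in range(6, -2, -2)]
    return ".".join(octets) + ":" + str(int(port_hex, 16))

def remote_endpoints(proc_net_tcp_text: str) -> list[str]:
    """Return decoded remote addresses, skipping listening sockets (0.0.0.0:0)."""
    endpoints = []
    for line in proc_net_tcp_text.splitlines()[1:]:  # first line is the header
        fields = line.split()
        if len(fields) < 3:
            continue
        remote = decode_addr(fields[2])  # field 2 is rem_address
        if not remote.startswith("0.0.0.0"):
            endpoints.append(remote)
    return endpoints
```

On a device you'd run `adb shell cat /proc/net/tcp` and feed the output through `remote_endpoints`; the number of distinct remote hosts an "idle" phone talks to is usually the eye-opener.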
It doesn't matter whether it's open source or closed source, non-profit or for-profit. If it's made by a company headquartered in the US, there's no escaping a national security letter with a gag order. Anything within Five Eyes (AUS/CAN/NZ/UK/US) jurisdiction (such as Ubuntu) is probably not much safer. We really need a new Linux distribution headquartered in a safe jurisdiction.
Nice, but recently I was asked to enter the admin password of my MBP into a web form at an Apple reseller; it was a requirement for getting the GPU repaired (as part of a recall). Isn't that very strange?
No it's not strange. It's the only way to make sure everything is working correctly before giving your computer back. If you tell them you aren't comfortable with that they do have workarounds. If they DIDN'T need your password that would be a bigger red flag because it would mean they had backdoors.
You don't have to. It just lets them better test and repair your device. If something is gorked up in your system files, they can fix it using an admin account. I just dropped off two laptops for repairs and didn't know the admin password for one. It wasn't a problem, though I suppose it might be more likely to come back with a wiped disk.
I'd like to see Apple commit to making Apple Pay available for online commerce. That would allow Apple to get rid of the hundreds of millions of credit and debit card numbers they retain on their servers. If Apple's card number information ever got hacked, it would be a disaster for Apple and their customers.
By 'government', does Apple mean the government of the U.S. or any governments of nations Apple exports its goods and services to?
I am still skeptical about any company in a hypothetical case where the private information of its customers is sought by a government like China's. BTW, I live in China and plan to stay here.
So how will we ever know, until it happens, whether the encryption on Apple devices has been weakened by government demands? I understand they have a posted stance on privacy and such, but the US government has shown it will threaten even those who divulge requests.
It has gotten to the point where I don't upgrade iDevices until weeks after release, just to be sure.
Isn't that problem solved by the 'canary in the coal mine'? A company can have strong privacy language in its documentation. If any part of that language suddenly disappears one day, you can know that a government forced the company to act against it.
Sadly, Apple isn't on https://canarywatch.org/. Did I miss another way to learn about that information (one that doesn't involve parsing all of Apple's documentation myself)?
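Until someone adds Apple to canarywatch, a DIY watcher is pretty simple: snapshot the policy text, fingerprint it, and alert on any change. A minimal sketch (which page to poll and how to normalize it are my assumptions, and any flagged change still needs a human to read the diff):

```python
# Sketch of a DIY canary watcher: store a hash of the policy wording and
# flag any change for manual review. The page to poll is an assumption.
import hashlib

def fingerprint(text: str) -> str:
    """Hash the normalized policy text so cosmetic whitespace churn is ignored."""
    normalized = " ".join(text.split())  # collapse all runs of whitespace
    return hashlib.sha256(normalized.encode("utf-8")).hexdigest()

def language_changed(stored_fp: str, current_text: str) -> bool:
    """True when the policy wording no longer matches the stored fingerprint."""
    return fingerprint(current_text) != stored_fp

# Usage idea: fetch the privacy page from a cron job, compare against the
# last stored fingerprint, and alert (email, RSS, whatever) on mismatch.
```

The catch, of course, is the same one canaries have in general: you only learn that the wording changed, not why, so a silent removal and an innocent copy edit look identical until a human reads the diff.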
Can we get Apple on board with RWD, since they seem to be up to their ears in "protecting" their users' privacy to the point of neglecting this crucial aspect of modern web design?
No, it was because celebrities are normal people and tended to use the same password for multiple services. One service was compromised, which compromised every other one because the passwords were the same.
My understanding was that some iCloud account login endpoints (associated with Find My iPhone) didn't have any rate limiting on password failures, which allowed brute force to work against targeted accounts.
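For anyone wondering what "no rate limiting" means in practice: without per-account throttling, an attacker can replay a password list against the endpoint as fast as the server responds. The missing control is a few lines of bookkeeping. A minimal sketch (thresholds and structure are illustrative, not anything Apple actually runs):

```python
# Sketch of per-account login throttling: track recent failures per account
# and refuse further attempts once a threshold is hit within a time window.
# Thresholds are illustrative, not any real service's policy.
import time

class LoginThrottle:
    def __init__(self, max_failures: int = 5, window: float = 300.0):
        self.max_failures = max_failures
        self.window = window   # seconds a failure counts against the account
        self.failures = {}     # account -> list of failure timestamps

    def allow_attempt(self, account: str, now=None) -> bool:
        """Return False if the account has too many recent failures."""
        now = time.time() if now is None else now
        recent = [t for t in self.failures.get(account, []) if now - t < self.window]
        self.failures[account] = recent  # drop expired entries
        return len(recent) < self.max_failures

    def record_failure(self, account: str, now=None) -> None:
        now = time.time() if now is None else now
        self.failures.setdefault(account, []).append(now)
```

Even a crude window like this turns an offline-speed dictionary attack into a handful of guesses per hour per account, which is why its absence on a single forgotten endpoint is such a big deal.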
Really? It seems to me that most articles conclude that it wasn’t iCloud that was hacked but the celebrities.
Apple also released a press release[1] saying that they “have discovered that certain celebrity accounts were compromised by a very targeted attack on user names, passwords and security questions, a practice that has become all too common on the Internet. None of the cases we have investigated has resulted from any breach in any of Apple’s systems including iCloud® or Find my iPhone.”
It clearly shows that iCloud was very susceptible to basic social-engineering attacks. Their statement on the subject is vague and misleading. There was no breach of the iCloud password database, but if somebody just "guessed" the answers to the security questions, that counts as a breach for everybody else.
The FBI has made one arrest and the investigation is still ongoing. We probably won't know until it's over, but at least one celebrity, Kirsten Dunst, suggested her images were taken from iCloud: https://twitter.com/kirstendunst/status/506553772114317312
Again, what articles do you have besides Apple's hand-waving?
The images in The Fappening came from a variety of cloud services, including iCloud. I think the hackers were getting access primarily via social engineering.
There was an iCloud hole discovered around the same time as The Fappening, but no evidence that it was used by them before Apple patched it.
No, I don't trust you. That trust is long gone, you will have a long way to go to earn it back. These are just words, and they can write whatever they want since there is no way to validate the claims.
I'll take them more seriously when they stop voluntarily giving the NSA zero-day vulnerabilities months before they get fixed, essentially giving them "temporary backdoors" to their systems.
Stuff like gotofail took a while to get patched (weeks?), and things like https://github.com/kpwn/tpwn have been around for a while and are still not patched in any public, non-beta release of OS X.
While it's definitely plausible, and probably likely, I was looking for a source to the specific statement that Apple is "voluntarily giving the NSA zero-day vulnerabilities months before they get fixed". Are they giving them to the NSA? Is this why they're holding out on fixing them? Do we have evidence that this is their intention?
What's with the tinfoil hat accusation? It's 100% reasonable and supported by an abundance of information that US corporations are being pressured in multiple ways from the surveillance state in order to hand over their user data, whether or not their users are suspected of any crime.
Sure, maybe he is speculating without anything specifically to support his thought, but come on, it's plausible. We know that "they" have no decency or limits on what they'll try to exploit.