iOS 10: Security Weakness Discovered, Backup Passwords Much Easier to Break (elcomsoft.com)
220 points by cpach on Sept 23, 2016 | hide | past | favorite | 91 comments



Ok, I got it clearer: somebody at Apple fucked up and left a weak SHA256 hash and salt inside a db table where they shouldn't be. Probably used in testing for the betas; then nobody remembered to comment it out before the public release. Some engineer and somebody in QA will get their ass kicked pretty badly. The next iOS public release will solve it, everybody's gonna be happy. Nothing to see here folks, we can move on :)


I really wish the default response in dev and operations wasn't "somebody is going to get (yelled at|punished|fired|buried in layers of bureaucracy)". It would be so much better if the response was "how can we change the automation in our release process to catch this?", with the engineer who did it responsible for fixing it. Blameless post-mortems should be the industry standard.


Indeed, but this is a big "you should have caught that in QA" kind of mistake. With the aggravating factor that the company markets security as one of iOS's tentpoles... Accountability is a bitch, I guess, but in this case this was really in plain sight. I mean, it's not necessarily the dev's ass that's going to be kicked. There's a nice story about accountability at Apple once recalled in a profile by Adam Lashinsky. It's from the Steve Jobs era, but I guess this kind of policy is still enforced in Cupertino:

One such lesson could be called the “Difference Between the Janitor and the Vice President,” and it’s a sermon Jobs delivers every time an executive reaches the VP level. Jobs imagines his garbage regularly not being emptied in his office, and when he asks the janitor why, he gets an excuse: The locks have been changed, and the janitor doesn’t have a key. This is an acceptable excuse coming from someone who empties trash bins for a living. The janitor gets to explain why something went wrong. Senior people do not. “When you’re the janitor,” Jobs has repeatedly told incoming VPs, “reasons matter.” He continues: “Somewhere between the janitor and the CEO, reasons stop mattering.” That “Rubicon,” he has said, “is crossed when you become a VP.”


Indeed, but this is a big "you should have gotten that in QA" kind of mistake.

It depends what kind of infrastructure you have.

Ideally there should be NO MANUAL STEPS between dev and production. There should be no need for a person to remember that this data should be inserted for dev and removed before prod. Instead there should be something like a documented way to insert the data (or not) based on a configuration file that differs between dev and production.

Of course reaching and keeping that ideal requires discipline. And when you slip, it becomes easy to say, "That's QA's job." Over time QA's job will get harder and harder, and that is a guarantee of periodic slip-ups.

Authority for fixing this situation really is the job of someone VP or above. They need to decide whether to fix the process and change what a lot of people do, or accept the inevitable screw-ups as the right choice to maintain the current rate of development.


I want to preface this by saying that if this comes off as condescending that's not my intention.

> Blameless post-mortems should be the industry standard.

Sure, oftentimes there are multiple reasons why something isn't accomplished, is broken, or doesn't work as intended. But generally there is someone who is primarily responsible for the piece of code, be it a manager, engineer, etc. It does suck to be on the receiving end of it, and if it's major enough you can be out of a job, but blame should be given to the responsible party/parties and the issue handled appropriately, as that's more direct than the alternative, which is to passive-aggressively handle the situation. Plus, how you handle crises like production going down and being blamed for it says just as much about you as an engineer as does your code.

Essentially, I'm against being coddled as an adult. We're adults, someone fucked up and is responsible for it, they should be blamed and handle it, and handle it like an adult. I do think that handling that kind of situation as person on the giving end definitely requires tact, though.


But generally there is someone that's primarily responsible for the piece of code, be it a manager, engineer, etc. It does suck to be on the receiving end of it and if it's major enough you can be out of a job, but blame should be given to the responsible party/parties and the issue should be handled appropriately as it's more direct than what alternatively happens which is to passive-aggressively handle the situation.

The problem is deciding what the actual goal of a post-mortem is. Is it to fix the problem (and prevent it from happening again)? Or is it to figure out who is most to blame so that they can be punished? Because, honestly, you can't have it both ways.

Plus, who makes the final judgement as to the party to be blamed and based on what criteria? Is it QA for not catching the issue, is it the engineer for forgetting to remove some debug code, the tech arch for not properly ensuring there were systems in place to prevent it from occurring in production, the program manager for not pushing back on deadlines that were too aggressive, or the CEO for constantly pushing employees to prioritize revenue over product quality?

Plus, how you handle crises like production going down and being blamed for it says just as much about you as an engineer as does your code.

Or your willingness/ability to find a suitable scapegoat.


The problem is that post mortems then become politicized, which means the actual cause may not be exposed, rendering the whole exercise pointless. It's part of the reason the FAA has all kinds of safeguards around reporting aviation hazards, even if the person who reported them is to blame.


Reminds me of the story of a guy who makes a $50,000 mistake. He goes to his boss saying "So I expect you're going to fire me now" and the boss answers "Are you kidding? We just invested $50,000 in your training!"

Should I consider myself lucky that all my bosses have had this mentality? Is ass-kicking actually more prevalent in the industry than rational correction?


I've always thought that `git blame` was an overly negative / accusational term for the command.


That's why I actually have an alias "git praise", which was one detail of svn I liked.

I was actually surprised by the change of mindset this simple substitution made. Power of positive thinking, I guess...


you can run git annotate instead (although the format looks a bit different), but blame is shorter and easier (for me) to type and remember.


It's a little worse than that.

The "file" column of the database is a binary plist encrypted with AES128-CBC using the first 16 bytes of SHA1(password||salt) as a key and 0,1,2,...,15 as a salt. So those columns could be used as an oracle to break the password, even if the hash were removed.

The data in the file column doesn't really need to be encrypted. It was unencrypted previously, contains file metadata, and a properly wrapped key for the file.

The cryptography on this is pretty shabby. Much more amateur than the existing / previous stuff surrounding the keybag in the backup (which uses PBKDF2). Not sure what happened here. (Why it was added, and why it wasn't vetted by their more knowledgeable engineers.)
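For illustration, here's a minimal sketch of that oracle idea. Names and structure are hypothetical; it assumes the layout described above, reading the fixed 0,1,2,...,15 bytes as the CBC IV, and relies on binary plists starting with a known magic string:

```python
import hashlib

# Assumed layout (per the comment above): key = SHA1(password || salt)[:16],
# IV = the fixed bytes 0x00..0x0F, plaintext = a binary plist.
IV = bytes(range(16))

def derive_file_key(password: bytes, salt: bytes) -> bytes:
    # A single cheap SHA-1 pass; nothing slows a guesser down.
    return hashlib.sha1(password + salt).digest()[:16]

def guess_is_correct(decrypted: bytes) -> bool:
    # Binary plists always begin with the magic "bplist00", so a
    # candidate password can be confirmed without any stored hash.
    return decrypted.startswith(b"bplist00")
```

With any AES-CBC implementation, decrypting one "file" column under `derive_file_key(guess, salt)` and `IV`, then calling `guess_is_correct`, confirms or rejects a password guess, which is why stripping the stored hash alone wouldn't close the hole.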


That sounds maybe plausible, although what could they have possibly been debugging that required a weaker hash?

Of course, with the paranoid hat on, if your task was to subvert the mechanism, this is probably exactly how you'd go about it - make it seem like an accident. Just like gotofail :)


Yeah, and hide it in plain sight. Plausible, yes. Although goto fail; had another vibe to it. It was really strangely placed and hardly explainable. Plus it made it possible to perform some pretty nasty MiTM attacks over a network.

Here you have a method that security experts tinkering with the betas have probably known about since last July. Everybody expected Apple to fix it in the GM, but it didn't happen. Now the cat's out of the bag, with some deservedly good (and harmless) publicity for Elcomsoft, and it's gonna be fixed in no time.

If we even want to give credit to the malicious actor hypothesis, what's the benefit for them? Accessing iOS 10 beta backups more easily for three months? Nah, I don't see that, not this time.


> Although goto fail; had another vibe to it. It was really strangely placed and hardly explainable. Plus it made possible to perform some pretty nasty MiTM attacks over a network.

The goto fail looked very much like a bad code merge. (I actually had a similar issue with a duplicated line when merging changes two days ago.) It's impossible to know without reviewing their full source control history, but it seemed like a plausible mistake to me.


Ok, good explanation. Let's slice that one with Occam's razor too, then :)


Well... yes... just remember to securely delete that backup made with the current version of iOS10 from your computer, after you upgrade to next iOS public release... ;-)


Some technical detail would be nice. At the moment, this is just an advertisement for their iPhone backup cracking software.


Per Thorsheim says: "Apple have moved from pbkdf2(sha1) with 10K iterations to a plain sha256 hash with a single iteration only. Bruteforce with CPU!" https://twitter.com/thorsheim/status/779207177416351744


The PBKDF2 stuff is still there for the keybag, someone just added additional encryption (of the "file" column in that database - data that was previously unencrypted) with the same password using single iteration sha1.

(Data source: writing my own code to decrypt the backups about a month ago, or rather adapting my previous implementation to the new backup format.)


That will be a fun "git blame" (or equivalent) to make

Wonder how that slipped through


I didn't know we could brute-force SHA256 within a lifetime yet.

Is it already possible?...


Essentially, your backups are encrypted with a key that is generated using a key derivation function (KDF), which takes in your password and outputs a "hash" that is fed into the encryption algorithm.

It's obviously much faster to brute force SHA256(dictionary_word) than a 128-bit key.

Apple was previously using 10,000 rounds of PBKDF2 with SHA-1 as a KDF. That is orders of magnitude slower than the single round of SHA-2 they've switched to.
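A rough way to feel that difference, as a sketch with made-up parameters (not Apple's exact scheme):

```python
import hashlib
import timeit

password, salt = b"correct horse", b"somesalt"

def slow_kdf():
    # Old-style stretching: PBKDF2-HMAC-SHA1, 10,000 iterations.
    return hashlib.pbkdf2_hmac("sha1", password, salt, 10_000)

def fast_hash():
    # The weak scheme: one unstretched SHA-256 over password+salt.
    return hashlib.sha256(password + salt).digest()

slow = timeit.timeit(slow_kdf, number=50)
fast = timeit.timeit(fast_hash, number=50)
# Each guess against the single hash costs roughly 10,000x less
# work than a guess against the stretched KDF.
```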


ANY standard hash function (SHA256, SHA512, even SHA3) used to obfuscate a password can easily be brute forced if the password has low entropy (like BigBlue8 or s3cr3tp4ssw0rd). These passwords are weak and hash functions are designed to be fast to compute, so it's possible to brute force them.

To brute force SHA256 successfully, you don't necessarily need to try 2^256 possibilities; you need to try a much smaller list of likely passwords, and that might just be enough.


You can easily bruteforce against a dictionary. They're getting 6 million passwords/sec checked. Not enough to get an arbitrary hash, though.


If you only try the passwords that people are likely to use then... Yes


Would using a bunch of previous password leaks to create a rainbow table be useful for programs that do this? With the number of leaks it seems like it would be a pretty accurate depiction of how a lot of people choose passwords.


Yes. However, rainbow tables are trivially mitigated with a random salt, which requires you to regenerate your table for every hash.

I don't know if iTunes uses a salt to encrypt the backup.
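The salt mitigation in a nutshell (a generic sketch, not the actual iTunes scheme):

```python
import hashlib
import os

def store_password(password: bytes):
    # A fresh random salt per hash makes any precomputed
    # (rainbow) table useless: the attacker would need a
    # separate table for every possible salt value.
    salt = os.urandom(16)
    return salt, hashlib.sha256(salt + password).digest()

salt1, hash1 = store_password(b"hunter2")
salt2, hash2 = store_password(b"hunter2")
# Same password, but the stored hashes differ.
```

Note that a salt only defeats precomputation; it does nothing to slow down a targeted brute force against one specific hash.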


Forgive my ignorance on the matter, but is this being handled by iTunes or by the Phone for encrypting the backups? Which element is actually at fault for the poorly protected backups?

If it's just iTunes being crappy with iOS 10 backups, then reasonably this is an easy patch, correct? And the change is perhaps a bit more understandable (trying to improve performance of actions from the already bloated iTunes); it's not a good reason at all to weaken the encryption method, but it at least provides a plausible explanation that doesn't have government interests as an impetus.


I also suspect this is on the iTunes side.

Maybe it is time to take a step back and question to what extent it is wise to trust iTunes. Could the encryption not be handled on the device? Then when the PC/Mac is compromised the backup would be still safe.


Well, it could be done on the device, but doesn't iTunes need unencrypted access to the device to function normally?

(Though perhaps it ought not to.)


It can't be iTunes' fault only. You're able to extract the data from the devices by sending the right commands, therefore the device is vulnerable.


Not necessarily. The article mentions phones being paired with iTunes - I assume this involves entering the passcode on the phone (haven't used iTunes for my phone in years, so I'm not entirely sure, but by my understanding of the iOS security architecture, it wouldn't be possible otherwise). This probably results in some sort of authentication token that iTunes stores and uses to get the device to perform backups. So yes, there's a command to extract data, but it's not unauthenticated or anything like that.


If iTunes has a token to get complete access to the device, and this attack requires access to that token, then again it's not an iTunes issue.

The way the article is worded makes me think that you wouldn't normally be able to use the token to get the data without this bug, which means that it can't be an iTunes issue.

Also, the fact that it's only iOS 10 implies the same.


Good point, that makes more sense.


It's a good question

If it's only related to iTunes then why does it only apply to iOS10?


Whaooo, wtf? It is passwords 101 to use a proper password hashing function to hash a password. This seems like a very serious bug, by developers who ought to know what they are doing :(


The cynical pessimist would say they did this on purpose to help government agencies extract data.


The other cynical person would say that doesn't make sense, because of the very small number of people that back up via iTunes, an extremely small percentage would actually bother encrypting.


If you don't encrypt the backup, then iTunes does not backup your stored passwords (wifi WPA2, web, mail, etc), health data, and other items considered "sensitive", which makes for a poor backup/restore experience. So there's a big benefit in enabling encryption in iTunes.

It's a bit annoying considering macOS can also use FileVault for full-disk encryption, which helps if the machine is locked/off. I guess it doesn't help against your macOS user account being compromised via a browser or anything.

Makes me wonder if anyone has taken a fresh new look at the FDE passphrase algorithms in macOS 10.12...


Or, also pessimistically, that they know more about Intel's

https://en.wikipedia.org/wiki/Intel_Active_Management_Techno...

and its potential.


For me, Manifest.db inside the backup is a sqlite3 database containing a table Properties with two rows, salt and passwordHash, satisfying:

    passwordHash == SHA256(password || salt)
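Given that layout, a password guess can be checked with one query and one hash. A toy reconstruction with an in-memory database (column names and values are guesses, not the real schema):

```python
import hashlib
import sqlite3

# Toy stand-in for Manifest.db's Properties table.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE Properties (key TEXT, value BLOB)")
salt = b"\xaa" * 16
stored = hashlib.sha256(b"letmein" + salt).digest()
db.execute("INSERT INTO Properties VALUES ('salt', ?)", (salt,))
db.execute("INSERT INTO Properties VALUES ('passwordHash', ?)", (stored,))

def check_password(db, guess):
    # The entire verification is a single unstretched SHA-256 call,
    # which is what makes offline bruteforce so fast.
    row = lambda k: db.execute(
        "SELECT value FROM Properties WHERE key = ?", (k,)).fetchone()[0]
    return hashlib.sha256(guess + row("salt")).digest() == row("passwordHash")
```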


If you are encrypting data, why would you store a hash of the key at all?


Debug code that made it into production is the only thing I can think of, and even that isn't a good explanation.


yup, literally anyone could claim that.. bad article


Elcomsoft is hardly "anyone" when it comes to this particular subject matter. You may want to look them up.


That sounds close to an appeal to authority.


It's not the logical fallacy you think it is when there's an actual authority involved.


Appeal to authority requires an actual authority to be involved.


And it's not a fallacy

> An argument from authority (Latin: argumentum ad verecundiam), also called an appeal to authority, is a common type of argument which can be fallacious, such as when an authority is cited on a topic outside their area of expertise or when the authority cited is not a true expert.

https://en.m.wikipedia.org/wiki/Argument_from_authority

Calling it a fallacy when there's an actual authority is the part that's incorrect. It's no longer a fallacy when the appeal to authority (argument from authority) involves an actual authority.


> It's no longer a fallacy when the appeal to authority (argument from authority) involves an actual authority.

If I have to go google who has written that article to blindly believe in it - that certainly reflects poorly on the article itself.


Why would you do that? The citations are there for a reason.

Several reasons. One of which would be to avoid using the article author's background as part of an argument against the content of the article. Because that would be an ad hominem.


No, that's not true.

Appeal to authority means accepting what an authority says on the merit of the entity being an authority, and not on the validity of the statement itself.

Re-read your wikipedia article...


Yes, it is true. I said the instance wasn't a fallacy. Read my comment again. To be clear: an appeal to authority isn't always fallacious.


This part of what you said:

"It's no longer a fallacy when the appeal to authority (argument from authority) involves an actual authority."

is not true.

The right thing to say would have been "It is no longer a fallacy when the authority making the claim provides enough convincing evidence to make the claim valid, with or without an authority", if that's indeed what you meant to say.

But that's not what you said, and so I stand by my assertion that what you said is not true.

We have the original post in this thread of interest saying (paraphrasing): "The article has no substance, so the claim it is making could be made by anyone. The article is not useful and the claim is not substantiated".

Then we have a reply: "Elcomsoft is an authority on the topic so their claim should be stronger than if anyone else made it".

This is where I said it looks close to an appeal to authority.

But you then came in and said "when there's an actual authority there is no longer an appeal to authority".

Which is not true.

I mean, how can there be an appeal to authority with no authority? That's like saying there's a car accident with no car.


They did not just write an article, they also provide the brute forcing tool to prove their claims.


Clearly they will tell Apple before publishing on their website. Then again they sell cracking software for money.

iTunes is done by a different team than the OS. At one point, at least, much of the iTunes web side was handled by remote contractors; not sure about the app itself. Given that Apple is releasing four new OSes every year, it's not surprising something gets screwed up.

It will be fixed within a week I bet.


It will be interesting to see how they fix it. I'd guess it would have to be both an iTunes and an iOS fix, because if it's only iTunes, there is nothing stopping me from using an older version of iTunes to get a backup. You would need an iOS bump to force the user to upgrade to the newer iTunes before allowing a backup.


Ok, ELI5 for me please: how would using an unsalted SHA256 function with 1 iteration, as suggested by Per Thorsheim, influence how many passwords per second you can try? Isn't that a flaw of whatever software enforces a password-trial limit, rather than of the algorithm iOS uses to encrypt the backup?


You can't (usually) force attackers to use your code when performing an attack on encrypted data they hold. You might try to rate-limit password attempts with something like:

    if time_since_last_attempt < 1 {
        sleep(1)
    }
    key = sha256(password)
    check(key)
But an attacker can just replicate the algorithm without the sleep call, since it doesn't influence the actual process by which you obtain the encryption key from the password.

The standard way to solve this is to use an inherently expensive process to turn the password into a key. In this case, it's something like:

    tmp = password
    for i in 0 to 10000 {
        tmp = sha256(tmp)
    }
    check(tmp)
Barring some breakthrough attack on SHA-256, an attacker must perform 10000 hashes for every attempt. If they don't, they won't derive the right key and they won't be able to decrypt the data.

Another approach is to use attack-resistant hardware which won't run the attacker's code and which has access to some hidden data which can be mixed in, and can't (easily) be extracted from the hardware. Then it looks like:

    if time_since_last_attempt < 1 {
        sleep(1)
    }
    hash = sha256(password)
    key = encrypt(hidden_data, hash)
    check(key)
Since the attacker can't obtain hidden_data, they can't run this code on their own hardware. Since the hardware doesn't accept the attacker's code, the attacker can't remove the sleep. This is how the iPhone's Secure Enclave works, and a less sophisticated version of this is why the FBI had so much trouble getting into that iPhone a while back even though it only had a four-digit passcode set.


Because the SHA-generated value is available to any program, which can employ all the cores of the CPU and all the cores of the GPU to try generated passwords in parallel. One try is very, very cheap for SHA compared to the "serious" algorithms, and anybody can buy specialized SHA-hashing devices, which are being built all the time.

For example, this device, which was priced just 1300 USD tries

https://www.bitmaintech.com/productDetail.htm?pid=0002016091...

11.85T hashes per second; that's roughly a 1 with 13 zeroes after it, per second, which shows what is achievable so cheaply today. Elcomsoft's software for the previous, non-weak algorithm managed 150,000 passwords per second using the GPU. That is, this weakening is like giving an attacker tens of thousands of computers for free, or tens of millions of computers for $1000.
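Back-of-the-envelope arithmetic with those two throughput figures (the ASIC number is raw mining capability, so treat it as an upper bound on hardware speed, not a turnkey cracker):

```python
# Rates quoted above.
asic_rate = 11.85e12   # SHA-256 hashes/sec, ~$1300 mining ASIC
gpu_rate = 150_000     # guesses/sec against the old PBKDF2 scheme

# Exhausting all 8-character lowercase+digit passwords:
keyspace = 36 ** 8     # about 2.8e12 candidates

asic_seconds = keyspace / asic_rate      # well under a second
gpu_days = keyspace / gpu_rate / 86_400  # hundreds of days
```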



If the hash is SHA256 they can; read your links.


Sorry, the second link has better info on exactly why it won't work directly.


Thanks. Yes, I agree completely when it's "directly." The mining devices aren't designed specifically to crack passwords, but they have the raw processing power and use the same hash algorithm; somebody would have to adapt the chips to the task, and for that they could keep most of what's already optimized, namely the fast SHA256 calculations in hardware. It wouldn't be particularly hard to adapt a current solution, and the known speed benefit and power use would remain roughly the same.


I think you'd need to design your own hardware, since the algorithm is hardcoded into these. And once you're doing that you'd lose the economy of scale that bitmain has.


Geez... Thanks


You can bruteforce unsalted hashes by using rainbow tables.


Until it gets fixed (if ever), everybody who makes local backups and worries about possible bruteforcing should

1. wipe their previous iOS 10 backups

2. if the backup password is not significantly long, increase its length with some random enough material.

And, of course, never forget that "$5 wrench" comic.

I still hope Apple will publicly respond on this. It simply doesn't fit with the other steps they did at least starting with iPhone 5s.


The article seems to suggest that an attacker would be able to perform a (new) backup without unlocking the phone if they had access to a "pairing record from a trusted computer", which I assume means an iTunes installation that's paired with the phone. In other words, you might have to unpair the phone (or protect the iTunes installation with FDE) in addition to deleting any backups.


And this is something visible on the surface. Imagine what else they added. Or removed. This is exactly why you shouldn't trust proprietary software.

https://www.gnu.org/philosophy/free-software-even-more-impor...


I think that history has proven that being open source does not render you immune from bugs and security holes.



My bad, I should have said that "I think that history has proven that being open source or free software does not render you immune from bugs and security holes."

Free software is not some magical silver bullet that makes your software impervious to bugs or mishandling.


The link makes it entirely unclear how this relates to decrypting the keychain. Yes, this makes it much easier to get access to the keychain, but isn't it also encrypted?


Meh, my iTunes backups are stored encrypted at rest.


Do you mean FDE?

Because an unprivileged account compromise could still get access to your backups in that case.

GPG or TrueCrypt-like would be the way to go there.


Humm, a skeptical view of this would suggest deliberate weakening of security.

Perhaps not a full back door, but more of an open upstairs window?


Why does everything have to be a conspiracy?


Perhaps because the old API is still there? This appears to be a new, less secure duplicate, with seemingly no benefit.

Also Apple took a lot of heat from the FBI case, perhaps this was part of the deal for them to drop the suit, which isn't a blatant back door.

The proof will be in their response to the issue.


Because people have faith in other people not being complete dullards.


That doesn't work either, because Apple would have to be complete dullards not to realize that someone would find this obvious flaw within a week of the release of the OS. Especially right after launching a bug bounty.


iOS 10 has felt pretty buggy to me. Lots of little glitches here and there. We are still trying to get some hard data on a sneaky problem with random dropouts when using BLE beacons. I'd say this is more likely to be more of the same than a conspiracy.


You could say that it's not as common a problem and could be missed. But what do you say when push notifications don't work well with the new hotness and there is no fix? You need to uninstall, restart, and reinstall until the next update of your app is released, and then you have to repeat it all.

https://forums.developer.apple.com/thread/49512


Reality continues to ruin Apple's marketing ;)

Given the other crap[1], relatively speaking they still shine though - at least a patch will be out soon to everyone that turns on their device.

[1] FCC should really look into making security updates for mobile devices mandatory with a time limit, in absence of which the OEM or the carrier must replace the device free of charge with one that doesn't have the vulnerability. It's criminal what OEMs and carriers are getting away with while making a ton of profit.


Why would Apple implement a new, weaker scheme in parallel with the existing old one? Are the designers of the otherwise-so-Secure Enclave blundering? Or is this done on purpose?! (Hard to believe they would think they could get away with it, so... amateur-hour accident?)

Is this Apple's Bitlocker Elephant Diffuser?

Can I have some extrabacon with that?!


yes, the company that went out of its way to fab its own SoC JUST for encryption/improving device security purposefully degraded its backup password scheme to allow the government to break into its devices

i dont even like apple but come on, the hyperbole can only go so far


You do know that it was possible to get the FileVault password for other users with a simple grep command up until OS X 10.7, don't you?


You do know that if you can run commands on someone's machine, FileVault has already failed to protect them, right?


Probably just a secret deal with the DHS. Don't worry about it.


I think your second explanation goes a long way to explaining the whole San Bernardino iPhone 5c dog-and-pony show that Apple put on early this year.



