Insider Attack Resistance (googleblog.com)
126 points by el_duderino on May 31, 2018 | 65 comments



This is the sort of development that makes me want a Pixel 2.

I've been leaning toward an iPhone, for the first time, on security grounds, so this is a welcome piece of news.

Thanks, Google. Please keep it up.


I'm pretty sure everything mentioned here has been a feature on iPhones for quite some time.


No. If Apple develops a new, signed version of the security module firmware, it can bypass the password check by updating the firmware in the security module without the user's password.

At least that was the case during the San Bernardino dispute. Apple never claimed it couldn't do it, only that it refused to do it for the FBI.

In this case (Pixel 2), Google claims that it is impossible to do that.


The San Bernardino case involved an older device though, the 5C (which came out in 2013), which did not have the dedicated security module (Secure Enclave) at all.

As far as I know, there is no confirmation on whether Apple (or anyone) could flash new firmware to the Secure Enclave, without the user passcode, without wiping data on the phone. This info is strangely missing from the official iOS Security Guide document. If anyone has more info on this please share.

There are some (unsubstantiated IMO) claims by people online (e.g. https://blog.trailofbits.com/2016/02/17/apple-can-comply-wit...), and a series of Tweets from an ex-Apple security engineer (https://twitter.com/JohnHedge/status/699882614212075520), but nothing official. SEP firmware definitely can be upgraded without a key wipe (as confirmed by the Tweets as well as regular usage of iOS), but it's unclear whether it can be done without the user passcode. iOS does prompt the user for the passcode when performing OS updates (which is also the delivery mechanism for Secure Enclave firmware upgrades). I don't know whether this is a UX-level security check only or an actual hardware-level requirement.


This isn't proof, but food for speculation: given that you have to disable the iCloud "Find My Device" feature on an iPhone as part of the steps Apple requires before it will take a device for recycling, I would assume that that setting being on prevents them from doing any automatic wiping/updating of your phone without your passcode, even in DFU mode. (For, surely, if they had the capability, they'd simply use it at the recycling centre, and thereby streamline the recycling workflow.)


I highly doubt they would put those methods in the hands of people working at recycling centers. I feel like if, and that's a big if, they have that kind of capability, it would be reserved for special-case uses. I mean really special-case uses.


Keep in mind that "recycling centre" here refers to an intake channel at their own factories; and that the firmware side of the recycling process isn't done by a technician themselves, but by a specialized "sanitizer" unit that the tech plugs the phone into. (Picture a disk degausser, but with a slot for a phone rather than a hard disk. Something heavy enough that you can't simply walk away with one!)

Is it hard to believe that, if iOS devices had a mode "deeper than DFU" that enabled control over the SEP firmware, such machines would be implemented in terms of that mode?

And I mean, it's not like I'm making this idea up. This sort of "secret hardware-level handshake between recycling/repair machines and production devices, to put said devices back into a lowest-level firmware flashing mode that bypasses all user protections" was discovered to exist on the Nintendo 3DS, and was turned into a permanent jailbreak method for those. It might be an industry-wide practice. (It's hard to tell, because even on a rooted device, you can't just "dump" the ASICs and scan them for a backdoor handshake.)


A device that can launder stolen phones regardless of security settings is still something to keep in limited circulation, even if you can't pick it up and walk away with it.


I was under the (potentially incorrect) impression that this was fixed in later iPhone revisions (which may have even already existed by the time this happened). Essentially: pointing out that the iPhone 5c--a five year old device--had this issue is not particularly strong evidence that this hasn't been "a feature on iPhones for quite some time".


> is not particularly strong evidence that this hasn't been "a feature on iPhones for quite some time"

Sure, but there's no evidence whatsoever that this has been "a feature on iPhones for quite some time".


[flagged]


I'm interested in which details you can present.


If I understand correctly, this feature wasn't on phones as recently as 2016 (San Bernardino). It would have made it impossible for Apple to comply.

I haven't seen anything since that says that this has changed, but I admit I don't pay close attention.


I'm interested in which details OP can present - I'm somewhat surprised to be put in a position where I have to do a breakdown of Apple's security vs. Google's to address a claim that doesn't have anything backing it, and is specifically stated with uncertainty ("I'm pretty sure...")

I'm only _somewhat_ surprised because people have a high-level idea that Apple is the one manufacturer concerned with security, and I'm somewhat _surprised_ because tptacek is involved, and he's aware that it's highly unusual to request no proof from the original claim while asking people addressing it to provide proof.

This contradiction leads me to believe tptacek has info on this that could settle things, leading me to resent facing two generations of posts that don't attempt to flesh out a claim, while I'm downvoted for not fleshing out a claim and asked to make one myself.

I'll keep eating downvotes instead, this is objectively unreasonable.

EDIT: ...and there's already a sibling post that addresses this. Not a great look folks. :/


What about app security? I gave up on Android years ago. Is it still the case that apps expect to be given most/all permissions in order to function? If so then no thanks.


They overhauled permissions so that the app will ask you for permission right when the app needs it, rather than 50 permissions all up front. So you try to upload a photo and Android says "allow access to photos?"

It was a huge and much-needed upgrade.
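Roughly, the flow in app code looks like the sketch below, using the standard runtime-permission APIs introduced in Android 6.0; the activity and helper names here are made up purely for illustration.

    import android.Manifest
    import android.content.pm.PackageManager
    import androidx.appcompat.app.AppCompatActivity
    import androidx.core.app.ActivityCompat
    import androidx.core.content.ContextCompat

    // Hypothetical activity showing the runtime-permission flow: the app asks
    // for storage access only at the moment the user tries to upload a photo,
    // and the user can deny it without blocking the rest of the app.
    class PhotoPickerActivity : AppCompatActivity() {

        private val storageRequestCode = 42

        fun onUploadPhotoClicked() {
            val granted = ContextCompat.checkSelfPermission(
                this, Manifest.permission.READ_EXTERNAL_STORAGE
            ) == PackageManager.PERMISSION_GRANTED

            if (granted) {
                openPhotoPicker()
            } else {
                // Shows the system "Allow access to photos and media?" dialog.
                ActivityCompat.requestPermissions(
                    this,
                    arrayOf(Manifest.permission.READ_EXTERNAL_STORAGE),
                    storageRequestCode
                )
            }
        }

        override fun onRequestPermissionsResult(
            requestCode: Int,
            permissions: Array<out String>,
            grantResults: IntArray
        ) {
            super.onRequestPermissionsResult(requestCode, permissions, grantResults)
            if (requestCode == storageRequestCode &&
                grantResults.firstOrNull() == PackageManager.PERMISSION_GRANTED
            ) {
                openPhotoPicker()
            }
        }

        private fun openPhotoPicker() { /* launch the picker once access is granted */ }
    }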


Unfortunately some critical permissions are lumped into "Other" and can't be disabled, including full network access (portscan and privacy risk) and running at phone startup (I uninstalled Pandora when I started getting "Pandora has stopped" alerts when Spotify was the only app open).


Until this story gets as good as iOS', you can count me out of the Android universe.


It shows that Google is behind... I think most systems (iPhone, even Windows with TPM and BitLocker) have had this stuff for many years.


The beauty of this can be seen in how those security features are leveraged and enhanced on CopperheadOS.

I criticise Google a lot for how much information they store on us, but the work they do both on Android (open source) and on the Pixel phones' hardware should receive more praise.


"To prevent attackers from replacing our firmware with a malicious version, we apply digital signatures."

How about putting a read/write switch on the device that prevents writing to the firmware when the switch is in the off position?


What problem would it solve?


> We recommend that all mobile device makers do the same.

Kind of insincere when the biggest competitor has been doing this since 2013 (the feature is marketed as "Secure Enclave" by Apple).


The Apple Secure Enclave cannot defend against someone who has the signing keys to the password software (or at least couldn't as of 2016) - that's why the FBI wanted Apple's "help" over the San Bernardino shooter. Apple said no, but could have done it - it was a policy choice of theirs to fight the FBI. Google has created a situation with the Pixel 2 where they can't do that sort of thing even if they wanted to. And they justified it without ever referencing "search warrants" or "nation-state threat actors", even though that is the obvious driving force here.


> ...or at least couldn't as of 2016...

The iPhone 5c came out in 2013 (and was a budget device based on the hardware design from 2012; the iPhone 5s--which I believe, though would happily be proved wrong, had already fixed this issue--actually came out at the same time as the iPhone 5c). I have no love for Apple (clearly), and I think they (and, sadly, the EFF) were being disingenuous at the time about the definition of "back door", but you are just spreading misinformation here: "as of 2016" would imply the iPhone 7 was vulnerable, which is a much newer device than the one from the infamous San Bernardino scenario :/.


>I have no love for Apple (clearly)

For those who are unaware, Saurik created Cydia, the package manager for jailbroken iOS devices, as well as a whole bunch of jailbroken iOS-related software.


So is the contention that Apple is already insider attack-resistant? Or is the contention merely that they might be?


The phone in San Bernardino case did not yet have the secure enclave.


This is not simply about the presence of a "secure enclave" or security module, as Google calls it. It's about preventing the firmware on the security module from being compromised without knowing the user's password.

To mitigate these risks, Google Pixel 2 devices implement insider attack resistance in the tamper-resistant hardware security module that guards the encryption keys for user data. This helps prevent an attacker who manages to produce properly signed malicious firmware from installing it on the security module in a lost or stolen device without the user's cooperation. Specifically, it is not possible to upgrade the firmware that checks the user's password unless you present the correct user password. There is a way to "force" an upgrade, for example when a returned device is refurbished for resale, but forcing it wipes the secrets used to decrypt the user's data, effectively destroying it.
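As a rough illustration of that policy, here is a conceptual sketch only, not Google's actual firmware (which runs inside the security module itself); all names below are made up.

    // Conceptual sketch of the update policy described above. The class,
    // method, and field names (SecurityModule, SignedImage, flashFirmware,
    // userDataKeys, ...) are illustrative, not Google's implementation.
    class SecurityModule(
        private val storedPasswordHash: ByteArray,
        private var userDataKeys: ByteArray?
    ) {
        // Normal upgrade: a properly signed image is not enough on its own;
        // the correct user password must also be presented.
        fun upgradeFirmware(image: SignedImage, userPassword: ByteArray): Boolean {
            if (!image.signatureValid()) return false
            if (!hash(userPassword).contentEquals(storedPasswordHash)) return false
            flashFirmware(image)
            return true
        }

        // "Forced" upgrade, e.g. refurbishing a returned device: allowed,
        // but the secrets protecting user data are destroyed first, so the
        // encrypted data left on flash becomes unrecoverable.
        fun forceUpgradeFirmware(image: SignedImage): Boolean {
            if (!image.signatureValid()) return false
            userDataKeys = null
            flashFirmware(image)
            return true
        }

        private fun flashFirmware(image: SignedImage) { /* write the new image */ }
        private fun hash(input: ByteArray): ByteArray =
            java.security.MessageDigest.getInstance("SHA-256").digest(input)
    }

    class SignedImage(private val verified: Boolean) {
        fun signatureValid(): Boolean = verified
    }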


Yes, I know. I thought that the secure enclave requires the user passcode to authorize an update, but after re-reading the iOS security whitepaper, I am no longer sure that this is actually correct. (it’s not mentioned anywhere)

So while you do have to provide your passcode to update an iOS device, it could be that this requirement is only enforced at a higher level (ie. not by the secure enclave itself).


Does Apple market their solution as resistant to someone having all Apple's keys but not the user password?


Based on the San Bernardino case, no.

It wasn't a matter of feasibility, Apple could have done it but refused to.


> biggest competitor

You mean Samsung?


What if the attack is in the form of a court order?


This constraint forces an attacker to focus on the user who actually has the password. From a security perspective, this shifts the attacker's focus to forcing the user to reveal the secret, as opposed to the company responsible for creating the firmware, in this case Google.

My guess is in the U.S. the 4th and 5th Amendments would prevent the government from forcing you to reveal the secret, so long as you do not rely on biometric security, which has in some cases been ruled exempt from the same protections as, say, a password. IANAL though, so I really can't elaborate on an explanation of why. I think if anything you're likely to be held on obstruction charges or have your assets frozen in an attempt to apply pressure on someone unwilling to cooperate. In other, perhaps less forgiving locales like North Korea, China, or Russia, I imagine one may end up being the subject of persuasion of a more physical nature.


I've noticed that my Pixel asks me for my PIN for "additional security" every few days. Apparently it asks you for your PIN if you try to unlock your device without having entered your PIN for 3 days [0]. I never realized this was the reason, but it seems like a fairly decent deterrent to law enforcement - I wonder if there's a way to reduce this frequency to a day or so.

[0] https://www.reddit.com/r/nexus5x/comments/3us0f6/pin_require...


A competent law enforcement agency attaches a digital forensics device to copy all the phone's content as soon as they get their hands on it. They're probably not going to wait three days.


But that requires them to unlock the device. Using your fingerprint to unlock it likely requires a court order, which takes some time.


> In other, perhaps less forgiving locales like North Korea, China, or Russia, I imagine

Many European countries have 'key disclosure' laws; give us the keys, or go to jail. In these cases, silence is illegal.

https://en.wikipedia.org/wiki/Key_disclosure_law


You don’t have to disclose the password, but you can be held in contempt of court until you unlock the device.


> This constraint forces an attacker to focus on the user that actually has the password.

The irony is that while the Android development team is doing this, the Google business and cloud services teams are increasingly gathering more data from the Android users, and encouraging them to put as much of their data on Google's servers as possible. And Google can give access to that data because it doesn't use end-to-end or homomorphic encryption.


You'd need a court order to get that data from Google, but not to get the data from a seized phone. That court order might also be challenged by Google.

That applies especially to non-law enforcement actors. Those can't get a court order to force Google to hand over the data.


A court order can’t compel someone to do the impossible. The updates in the Pixel 2 make it impossible for Google to circumvent security measures on it, thus protecting them from being coerced to do so (by courts and criminals alike).


That's the same attack vector as far as this change is concerned.

The idea is that nothing, not even Google, can change the firmware without first wiping the device or entering the passcode.


Have there been notable cases of a malicious actor installing a compromised OS on a target's phone for spying purposes?


Impressive, but given the prevalence of apps that demand full access to all USB contents and then arming the user with only the ability to accept or decline all-or-nothing, this seems like an electronic Maginot Line.

But to be fair, they've got to start somewhere and there is always hope they'll extend the permissions options to be more powerful.


One attack this wouldn’t guard against is a malicious actor pushing a buggy version of the Secure Enclave code that couldn’t be updated without destroying all data on the phone.


Does the firmware signature preclude flashing the device with an alternate OS? (Considered independently of the data stored.)


The locked bootloader would prevent flashing anything that was not signed by Google.
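Conceptually, that check is ordinary public-key signature verification against a vendor key baked into the device. A simplified sketch under that assumption (not the actual Android Verified Boot code):

    import java.security.PublicKey
    import java.security.Signature

    // Simplified sketch of a verified-boot style check: an image is accepted
    // only if its signature verifies against the vendor's public key stored
    // in the device. Function and parameter names are illustrative.
    fun isAcceptableImage(image: ByteArray, imageSignature: ByteArray, vendorKey: PublicKey): Boolean {
        val verifier = Signature.getInstance("SHA256withECDSA")
        verifier.initVerify(vendorKey)
        verifier.update(image)
        // Returns false for any image not signed with the vendor's private key.
        return verifier.verify(imageSignature)
    }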


:-(


It wouldn’t be able to read the encrypted data.


Which is why I specifically excluded that consideration.


> There is a way to "force" an upgrade, for example when a returned device is refurbished for resale, but forcing it wipes the secrets used to decrypt the user's data, effectively destroying it.

It's interesting.

Why "[wipe] the secrets used to decrypt the user's data, effectively destroying it" instead of wiping the data itself too?

Is this to potentially allow a third-party with enough power (i.e. a government entity) to eventually decrypt the data?


This is the standard "remote wipe" technique and a standard cryptographic application.


I know nothing about the "standard remote wipe technique", but neophyte me believes that wiping security keys isn't exactly protecting you against your data being eventually retrieved and decrypted.


The data could be stored on an SD card, or eMMC flash, or the cloud. Erasing is not reliable there. The more data you have to protect with secure hardware, the easier a time the attacker has.

Destroying encryption keys stored in a small, secure processor is the most reliable and secure method.

The encrypted data can and should be wiped too, but it's a lot more reliable to have your security model not rely on that.

We have no reason to believe AES-256 will ever be broken within the lifetime of the universe. Maybe in 100 years I will be proven wrong, but I am willing to bet the security of my phone on it.

I suspect we will have human beings living in another solar system before AES-128 is broken.
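For the curious, here is the "crypto erase" idea in miniature, using standard JCE APIs; the key here merely stands in for the secrets held by the security module.

    import java.security.SecureRandom
    import javax.crypto.Cipher
    import javax.crypto.KeyGenerator
    import javax.crypto.SecretKey
    import javax.crypto.spec.GCMParameterSpec

    fun main() {
        // The key stands in for the secrets held inside the security module.
        var key: SecretKey? = KeyGenerator.getInstance("AES").apply { init(256) }.generateKey()

        val iv = ByteArray(12).also { SecureRandom().nextBytes(it) }
        val cipher = Cipher.getInstance("AES/GCM/NoPadding")
        cipher.init(Cipher.ENCRYPT_MODE, key!!, GCMParameterSpec(128, iv))
        val ciphertext = cipher.doFinal("user data".toByteArray())

        // "Wiping" the secrets: only the ciphertext remains, on storage
        // (flash, SD card, backups) that we do not need to trust or erase.
        key = null

        // Recovering "user data" from the ciphertext now requires brute-forcing
        // a 256-bit AES key, which is not considered feasible.
        println("ciphertext survives (${ciphertext.size} bytes), plaintext does not")
    }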


My point is about doing both, if possible, instead of just getting rid of the keys.


Because, as parent noted, making it possible makes your secure enclave more complex (which is the same as less secure).

Furthermore, as parent also noted, if your encryption is solid, wiping the keys is identical to wiping the data.


Indeed, my bad, I totally read his answer too fast and missed "The encrypted data can and should be wiped too, but it's a lot more reliable to have your security model not rely on that."


Agreed. In security, time works against mitigations. If you want data to be gone, just deleting keys is a bit short sighted.


This wouldn’t really improve security. Only the secure hardware is secure from physical attack. The user data is encrypted and put on memory that isn’t physically secure.

If you have a way to decrypt that data, you can just physically remove that memory, copy that data, and break the encryption on it. If people could break encryption, they wouldn’t go about updating the firmware and triggering a potential data wipe in the first place. In short, if you don’t trust encryption you’ve already lost.


The thing I wanted to highlight is that time works against encryption. What is encrypted with today's most powerful and thorough understanding of crypto, will possibly be easily decrypted a few years or a decade down the road. So in addition to deleting the keys, I would always wipe the data itself also.


I agree that time works against crypto, but one point I wanted to make is that the attacker can easily prevent the data wipe from happening. Just physically copy the encrypted user data, or remove that memory from the device.

It's hard for me to imagine a state-funded attacker that wouldn't just prevent the data wipe if they cared. I agree at least there's an extreme edge case where this would protect you from a simultaneously very advanced but also negligent attacker, but that seems so rare that it's unconcerning to me.

Feels a bit like adding an extra luggage lock to a bank vault, since it's still better to have more security, after all...


I assume the data storage is on a separate chip which - in the context of this attack - is untrusted. That is, the firmware could try to wipe the data storage too, but that would be relatively easy to bypass and doesn't really gain you anything anyway.

Or from the opposite direction: Only the keys are stored within the trusted part of the hardware; they're the only thing you can reliably wipe.


Wiping data is time consuming. Wiping the keys that can be used to decrypt it is much less so.


How will you decrypt the data once the key has been erased?



