Synopsis: Secret keys are embedded in the device's e-fuses and are not readable by normal means because of a protection e-fuse. By measuring current draw during power-up, you can identify the interval when the CPU is reading the e-fuses. At that moment the power supply is "glitched" from 3.3 V to 6 V using unspecified patterns from a signal generator. This causes errors in the e-fuse read-out, one of which makes a bank of read-protected fuses readable. The values read out contain errors, but multiple runs plus statistical error correction recover the actual values.
Physical access to the device is required. Security compromise is permanent.
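The "statistical error correction" step is essentially a per-bit majority vote. A minimal sketch, assuming independent bit errors that corrupt fewer than half of the runs (the dump format and sizes here are illustrative, not from the write-up):

```c
#include <stdint.h>
#include <stddef.h>

/* Per-bit majority vote over n_runs noisy dumps of the same fuse block.
 * dumps[r][i] is byte i of run r; out receives the recovered block.
 * Assumes bit errors are independent and flip fewer than half the runs. */
void majority_vote(const uint8_t *dumps[], size_t n_runs,
                   size_t n_bytes, uint8_t *out)
{
    for (size_t i = 0; i < n_bytes; i++) {
        uint8_t byte = 0;
        for (int bit = 0; bit < 8; bit++) {
            size_t ones = 0;
            for (size_t r = 0; r < n_runs; r++)
                ones += (dumps[r][i] >> bit) & 1;
            if (ones * 2 > n_runs)   /* more than half the runs saw a 1 */
                byte |= (uint8_t)(1 << bit);
        }
        out[i] = byte;
    }
}
```

With, say, 15 runs and a 10% per-bit error rate, the chance of any given bit coming out wrong is well under 1%.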
Just piggybacking on the top comment to point out that the primary concern here is not necessarily the security of devices you own and physically control (although that could be an issue in some cases, if others can access them too), but the IP of the OEM, which can now be extracted and flashed to cloned boards. So this may well be a serious issue for some of Espressif's customers, who are mostly OEMs, even if it is not an issue for the consumers who buy from that OEM.
Owners have physical access to their devices, but so do others. It's far from obvious to me that as owner, I benefit from elevated privileges, when anyone with temporary physical access also gets the same elevated privileges.
Depends. But the simple fact is, if it's REALLY important, you had better be doing it online and passing the result to the device.
I could tell you about hardware security modules (HSMs) or the new ARM TrustZone for small micros, but I'm designing new products so that even if I handed you the source, you still couldn't clone a board. That requires a connection to a better-trusted device.
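A minimal sketch of the "connection to a better-trusted device" idea, assuming a per-device key provisioned at the factory with the only other copy held server-side (the function and message layout are hypothetical; HMAC via mbedTLS, which ships with ESP-IDF):

```c
#include <stdint.h>
#include <string.h>
#include "mbedtls/md.h"   /* mbedTLS ships with ESP-IDF */

/* Answer a server challenge with HMAC-SHA256 over (nonce || device_id).
 * The per-device key is provisioned once per unit and the server keeps
 * the only other copy, so dumped firmware alone can't produce valid
 * answers for a second board. Returns 0 on success. */
int answer_challenge(const uint8_t dev_key[32],
                     const uint8_t *device_id, size_t id_len,
                     const uint8_t nonce[16], uint8_t mac_out[32])
{
    uint8_t msg[16 + 64];
    if (id_len > 64) return -1;
    memcpy(msg, nonce, 16);
    memcpy(msg + 16, device_id, id_len);
    return mbedtls_md_hmac(mbedtls_md_info_from_type(MBEDTLS_MD_SHA256),
                           dev_key, 32, msg, 16 + id_len, mac_out);
}
```

Even if one unit's key is extracted, the server then sees the same device ID arriving from multiple clients and can revoke it; the fleet isn't compromised.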
> So this may well be a serious issue for some of Espressif's customers, who are mostly OEMs
I highly doubt that. From what I know, that feature was more of a nod to their customers from the West.
To most Chinese entrepreneurs, it makes no sense that having your software copied would be an issue:
1. If you have a real, specific reason why disclosure of your code would be the end of your business, it will get hacked and copied anyway.
2. If you rely on that to stave off competition, you are already in such a competitive market that this will make no difference, and your business will be cloned anyway.
Think about it yourself: if you strike gold, you have zero chance of not being cloned.
1. Do not strike gold — look for an easily defensible, entrenched position in a niche market, like a lot of companies in the US do.
2. Economies of scale — works until your competitor bribes a banker for a giant loan.
3. Be one step ahead — look at the fab business. In microelectronics fabrication, everybody copies everyone else and you can't do anything about it, but somehow companies still maintain their positions.
You always have to be one step ahead. No competitive edge lasts forever.
But there's a big difference between being cloned in a month, and being cloned in a year.
Within a year, maybe you could build a brand, create a v2, gain some economies of scale with your suppliers (harder to bribe), and build some internal expertise.
The last situation is somewhat similar to the fabless companies.
Sounds like there's a "branch if equal" step in the ROM that checks the fuse. If you glitch the right part of the cycle, you can make the code actually perform your read without taking the branch. Maybe enough current leaks around at 6 V to produce a "not blown" value at the fuse, or the glitch just skips the instruction-pointer change.
My guess is that the e-fuse is checked on every bit-read, so sometimes you don’t get the true value because your glitch isn’t precise enough.
Possibly there’s some randomization in the timing of each read, but there’s a signature current draw before each read that you can use to trigger your glitch.
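If that guess is right, the vulnerable pattern is a single compare-and-branch on the protection fuse. A hardened variant (illustrative only; `read_protect_fuse` is a hypothetical accessor) avoids single-point decisions by sampling the fuse several times and using complementary constants, so one skipped instruction or one corrupted read fails closed:

```c
#include <stdint.h>

#define LOCKED   0xA5A5A5A5u  /* complementary constants: a single bit */
#define UNLOCKED 0x5A5A5A5Au  /* flip can't turn LOCKED into UNLOCKED  */

extern uint32_t read_protect_fuse(void);  /* hypothetical fuse accessor */

/* Hardened variant of "branch if fuse says unlocked": sample the fuse
 * three times and require every sample to equal UNLOCKED exactly, so a
 * single glitched branch or corrupted read leaves the part locked. */
int fuses_readable(void)
{
    if (read_protect_fuse() != UNLOCKED) return 0;
    if (read_protect_fuse() != UNLOCKED) return 0;
    if (read_protect_fuse() != UNLOCKED) return 0;
    return 1;
}
```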
This is an interesting attack, and certainly looks highly successful in terms of allowing a determined hardware hacker to gain root/bootloader access to a device that the manufacturer has attempted to lock them out of. Glitching with a 6V supply on a 3.3V bus is certainly something I'd want to be a little cautious of if the hardware was more expensive than a $10 dev board - I wouldn't buy a $800 IoT fridge and use this to install alternate firmware just for fun, but it's nice to know it's possible in case my fridge stops working because the manufacturer declares it end-of-life. It's just not clear to me if or how this is a bad thing. The author writes:
> This FATAL exploit allows an attacker to decrypt an encrypted firmware because he is now in possession of the AES Flash Encryption Key.
> Worst case scenario, he is now able to forge his own valid firmware (using the Secure Boot Key) then encrypt it (using the Flash Encryption Key) to replace the original firmware PERMANENTLY.
> This last post closes my security investigation on ESP32, which I consider now as a broken platform.
Isn't that a good thing for me as a consumer? I like the ability to decrypt and modify my own devices. I like that this is a permanent modification, unlike e.g. DD-WRT, where you have to prevent the bootloader from overwriting your software with the manufacturer's.
The only thing I can think of that would be really bad is if I had a device with an ESP32 inside physically stolen then reinstalled by an attacker (or a counterfeit sold to me with malicious code from the vendor) and this exploit allowed them to get private data from my network to an Internet location. But they could already just buy or build their own device, ESP32 or not, to do that.
This is only bad for draconian IoT manufacturers who want to enforce their terms of service and artificial limitations on hardware they think consumers are leasing but consumers think they are buying.
> Isn't that a good thing for me as a consumer? I like the ability to decrypt and modify my own devices.
If you're the sort of person who buys wifi-based-internet-enabled door bells, but you don't want someone who steals your doorbell to (a) be able to extract your wifi password or (b) be able to get the thing to work at all, you might appreciate resistance to the thief's attacks.
Of course, you can also address this security concern by just not buying an internet-enabled doorbell.
This could still be addressed by not putting the wifi part into the doorbell itself, or by using something like LoRaWAN, where at worst someone could compromise the device keys (which you can reprovision), so your wifi isn't compromised at all.
Another solution is to use a second gateway inside the house that manages the Wifi part and secure communication with the doorbell via short range radio.
One user's self is another user's attacker. This attack isn't one-time; if I can break into the hardware and change the keys such that I now control it, then someone else with temporary physical access can then break into my hardware and change the keys again, suborning "my" IoT device into e.g. a subtle wiretap.
A computer anyone—not just the owner—can root given physical access, is like a lock that anyone—not just the owner—can non-tamper-evidently pick open. It really is broken.
Almost all computing devices are broken when given physical access. And if they aren't, it's just because someone hasn't worked it out yet, or they've been broken secretly by governments.
This is kind of a myth. There is such a thing as tamper-proof hardware components, and they can protect against plenty of threats.
Security isn’t all or nothing, it’s about understanding what the different threats are and adequately protecting against them. Not everyone is trying to protect against attackers with millions of dollars at their disposal. There is plenty of value to deterring 99% of attackers with physical access.
The idea of security as all or nothing, and that physical access thus defeats all security measures, are security tropes that need to die. You can see how obviously wrong they are when you consider that just about every security system depends on proper behavior by trusted human beings, who are never 100% reliable.
> when you consider that just about every security system depends on proper behavior by trusted human beings, who are never 100% reliable
...and I think that's perfectly fine and IMHO required. I've long been a proponent of the philosophy that a little bit of insecurity is what keeps society in general from turning into a complete dystopia; but unfortunately, paranoia and the search for "perfect security" are driving it in that direction.
In other words, striving for perfect security is treacherous precisely because humans are not 100% reliable. The same way you would probably not want "perfect" law enforcement by the government.
Yes, but no. I mean, you are probably familiar with FIPS 140-2's security levels [0], and the ESP32 probably doesn't meet any of them. (Not even Level 1, which is roughly something you can achieve almost purely in software; that's why OpenSSL has this mode.)
I'd argue that if you want to use some kind of device as part of your security system, and that part has to endure temporary physical access by unauthorized third parties, then you need something that is designed for that. Considering software broken when it's clearly not designed to withstand physical tampering ... is a bit silly. (Though considering it broken in terms of IP protection is not surprising; it was never really designed for that either.)
Though, of course, you're absolutely correct that compared to its price (or cost), it's a lot more secure than an empty floppy (yet similarly simple - except you can't toggle an e-fuse by hand), or early smartphones (or early anything that was complex, ran every kind of software as root, and so naturally was full of holes).
I don't disagree with other parts of your post, but I still think protecting against the scenario where an attacker has physical access to your computer is basically pointless. Especially if it comes with a very significant loss of freedom.
If a malicious person has entered your home or workplace, access to your computer should be low on the list of worries.
Android handles this decently well: it allows you to install whatever you want on the device, but unlocking the device for custom firmware first wipes it, so user data is safe.
This would be more akin to jailbreaking your Nintendo Switch and installing Linux. An IoT platform that's intended to be secure can be tricked into revealing its keys.
Most consumers aren't going to write custom firmware for their lightbulbs.
Of course, I think this exploit is impractical for a lot of cases given how the ESP32 is typically used, but, ymmv.
> Might as well call the PC a broken platform since you can install your own OS.
More like calling a PC broken if you can install your own OS even after you've enabled Secure Boot and a TPM (in which case, the security features are objectively broken)
> This is only bad for draconian IoT manufacturers who want to enforce their terms of service and artificial limitations on hardware they think consumers are leasing but consumers think they are buying.
No kidding. What really grinds my gears is the fact that these authoritarian "security" people are effectively helping to tighten the nooses around everyone else, and very eager to do it too. It's one thing to post about an exploit you've found and help the community, but I'll never agree or help anyone who goes snitching to the company about it. In the "old school" hacking culture you would be called a corporate sellout, or worse, for doing that.
I think most people are assuming that the user breaking this device is the owner and therefore don't see the potential threats this hack realizes.
A perfect example of how this could be a problem would be the modification of a utility provider's smart meter. The homeowner hacks the firmware of their electricity meter to show a 10% reduction in power consumption.
I'm sure there are several more applications of this exploit by end users who are not the owners of the hardware, making it a big enough threat that manufacturers would consider using a more secure device.
It's easy enough to tap into the service ahead of the meter. Or if you've taken the meter apart, adjust the analog signal conditioning. And so power companies monitor aggregate usage per neighborhood, and if there is a discrepancy, go looking.
In general most people are honest, most of the others are deterred by stiff penalties, and these issues are kept in check at "human scale". DRM schemes are more likely to be used to erode long-held precepts, rather than being needed to enforce them.
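A sketch of that discrepancy check, with a made-up threshold (utilities' actual methods are more involved):

```c
#include <stddef.h>

/* Flag a neighborhood when the feeder-level measurement exceeds the sum
 * of its smart-meter reports by more than tolerance_kwh; tampered meters
 * under-report, so the gap shows up here. The threshold is illustrative. */
int neighborhood_suspicious(double feeder_kwh,
                            const double *meter_kwh, size_t n_meters,
                            double tolerance_kwh)
{
    double reported = 0.0;
    for (size_t i = 0; i < n_meters; i++)
        reported += meter_kwh[i];
    return (feeder_kwh - reported) > tolerance_kwh;
}
```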
If I'm not mistaken, buying a single device and tearing it down like this would yield keys that let you create "official" firmware for all the other devices of its kind, and set up a fake update site allowing you to remotely exploit all of the others, yes?
If so this is a fairly serious hack especially for devices that auto-update OTA.
They should permit OTA only if the site they download from over TLS has a cert signed by the developer/manufacturer, or at least by a public CA with a CN matching the host name... so you'd have to physically access each device and not just MITM them.
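A minimal sketch of pinning the update server's certificate with ESP-IDF's `esp_https_ota` (the v3/v4-era one-shot API; the URL and embedded-cert symbol are placeholders that depend on your build configuration):

```c
#include "esp_http_client.h"
#include "esp_https_ota.h"
#include "esp_system.h"
#include "esp_log.h"

/* Update-server CA cert embedded in the firmware image; the exact
 * symbol name depends on how the .pem file is embedded in your build. */
extern const char ca_cert_pem_start[] asm("_binary_ca_cert_pem_start");

static void do_ota_update(void)
{
    esp_http_client_config_t http_cfg = {
        .url      = "https://updates.example.com/fw.bin", /* placeholder */
        .cert_pem = ca_cert_pem_start,  /* pin our CA: a MITM presenting
                                           any other cert is rejected   */
    };
    if (esp_https_ota(&http_cfg) == ESP_OK) {
        esp_restart();                  /* boot into the new image */
    } else {
        ESP_LOGE("ota", "update failed; keeping current image");
    }
}
```

Note that pinning only protects delivery over the network: once the Secure Boot key is extracted as in the article, an attacker with physical access can still flash a forged image directly.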
Some hardware comes with firmware that can't be overwritten unless it's properly signed by the manufacturer to prevent an attacker from being able to get low-level control of devices that couldn't otherwise be detected. Example: https://www.cisco.com/c/en/us/products/collateral/security/c...
It is, of course, possible to replace an entire physical device with your own hacked one, and have nobody be the wiser. But the theory goes, that would be a lot harder than just copying rooted firmware into a device remotely. (The above system was hacked this year, though)
I wonder why they weren't using Public-Private crypto in the first place?
Perhaps because it would use up more space in the ROM? If so, I wonder what functionality they dropped from the ROM to add it now? I somehow doubt they made the ROM bigger - that would be very expensive at this stage of the chip's lifecycle.
Mainly because it wouldn't help with this issue. The code signing used in devices like phones and game consoles is to prevent users from running unapproved software on the given hardware. The issue here is users taking the software and running it on unapproved hardware.
At some point, the software has to be decrypted on the physical device to run. The best you can ever do is put enough physical hoops in the way to make it impractical to defeat.
Something about e-fuses seems quite mystical. The idea of a computer program deliberately and permanently damaging its own hardware (or hardware it is attached to) using a mechanism so close to regular operation (current flowing in memory) but for a good reason rather than to cause harm, and in such an information rich way.
It's different from, say, a robotic tool using its tooltip to maim itself, and different from one robot building another, because at the e-fuse level of detail it's so much more information-dense.
Perhaps it’s like a tattoo? Perhaps I’m thinking of the ship tattoos in Surface Detail by Iain M Banks?
Another way to think of it is as a memory-mapped set of wires that are made too thin and, when read, correspond to ones. If ones are written to that region, those bits read as zeroes forever afterward. This is kind of the reverse of how memory normally works.
Of course the actual mechanism used in OTP memory is different...
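For intuition, a toy software model of the parent's description (real eFuse programming is electrical, and the actual polarity convention varies by vendor): each bit reads as 1 until it is burned, and burning is one-way.

```c
#include <stdint.h>

/* Toy model of a one-time-programmable word, following the parent's
 * convention: every bit reads as 1 until it is "burned", after which
 * it reads as 0 forever. Burning is monotonic; nothing un-burns a bit. */
typedef struct {
    uint32_t burned;                 /* 1 bits = fuses already blown */
} otp_word_t;

static uint32_t otp_read(const otp_word_t *w)
{
    return ~w->burned;               /* blown fuses read back as 0 */
}

static void otp_burn(otp_word_t *w, uint32_t bits)
{
    w->burned |= bits;               /* OR-only: writes can never clear */
}
```

After `otp_burn(&w, 0x1)`, bit 0 reads as 0 in every subsequent `otp_read`, no matter what is written later.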
I think it would be interesting to learn the history of e-fuses as applies to CPU architecture... That is, where/how/when were they invented, what was the first CPU to use them, for what purpose, and which CPU's have used them since that point in time... Maybe I'll post to Ask HN or one of the StackOverflow websites about this in the future...
I've heard e-fuses in general are vulnerable to optical inspection under polarized light after delidding the part. So if someone capable really wanted to clone a device, it's quite possible they were already able to get the e-fuse key values.
I once used the e-fuse feature of another part for bootloader integrity. I wasn't worried about encryption, but the part would validate the bootloader's integrity when it was encrypted. If the integrity check failed, the part would keep searching for a valid image. It was an easy way to get some protection against flash corruption.
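A sketch of that pattern, with a hypothetical `crc32` helper and slot layout (the real part's mechanism and checksum were vendor-specific):

```c
#include <stdint.h>
#include <stddef.h>

typedef struct {
    const uint8_t *base;   /* start of the image in flash         */
    size_t         len;    /* image length in bytes               */
    uint32_t       crc;    /* checksum stored when it was flashed */
} image_slot_t;

extern uint32_t crc32(const uint8_t *data, size_t len); /* hypothetical */
extern void jump_to_image(const uint8_t *base);         /* hypothetical */

/* Boot the first slot whose contents still match its stored checksum,
 * so corruption in slot 0 (e.g. power loss mid-update) falls back to a
 * known-good backup instead of bricking the board. */
void boot_first_valid(const image_slot_t *slots, size_t n_slots)
{
    for (size_t i = 0; i < n_slots; i++) {
        if (crc32(slots[i].base, slots[i].len) == slots[i].crc)
            jump_to_image(slots[i].base);   /* does not return */
    }
    for (;;) { /* no valid image: halt, or drop into a recovery loader */ }
}
```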
Indeed, this is the reason ICs used in credit cards don't use them. But embedded flash can still be mechanically probed, and this is allegedly how EMV cards are being cloned.
It does, but a major part of flash encryption is to protect your supply chain, i.e. to keep people from cloning your boards and just dumping your software onto them. It's also a bit of security through obscurity (which, despite the memes, can be an important piece of defense in depth) to make the MCUs a bit more difficult to attack if you don't know the code that's running.
Indeed. It's odd to see "Pwn" in the title, and then read the details and have to completely reverse the context. This situation is closer to the original sense of "own", as in "home ownership". If some squirrels get into your home and you evict them, you wouldn't say you "pwnt" your house.
> I quickly identify a pure HW processing 500us before the beginning of the UART ascii strings ‘ets June 2018’ corresponding to the BootROM process.
> This HW activity is probably the eFuses Controller initialisation, and a load of the eFuses values into some dedicated buffer memory, to be used by the Flash controller for further steps.
How would one come to this specific conclusion without any prior knowledge of the boot ROM?
The efuses are still read as part of hardware initialization, before one gets into the boot ROM, so it's reasonable to assume that this 500 µs is still "hardware" init, during which time it's typical for an MCU to be reading input from pins to learn about voltage, clock, mode-selector jumpers, etc.
And the way all of those things work is by setting registers so that they're visible to software, either _still_ in a register or mapped into the address space.
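To make that concrete: on the original ESP32, the eFuse controller's read-data registers sit at a fixed MMIO base (DR_REG_EFUSE_BASE, 0x3FF5A000 in the TRM), and software reads back whatever the controller latched during init. A rough sketch (the register layout and word count are illustrative; check the TRM before relying on them):

```c
#include <stdint.h>
#include <stdio.h>

/* eFuse controller base on the original ESP32 (DR_REG_EFUSE_BASE in the
 * TRM). The first words are the BLK0 read-data registers holding values
 * latched at init; offsets for the other blocks differ. */
#define EFUSE_BASE 0x3FF5A000u

static inline uint32_t efuse_rdata(unsigned i)
{
    return *(volatile uint32_t *)(EFUSE_BASE + 4u * i);
}

void dump_efuse_blk0(void)
{
    for (unsigned i = 0; i < 7; i++)   /* BLK0 is 7 words on the ESP32 */
        printf("BLK0[%u] = 0x%08x\n", i, (unsigned)efuse_rdata(i));
}
```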
Edit: I checked your profile and see that you're an embedded engineer, so I must have missed some nuance in your question, because power-glitching the boot sequence to mess with hardware init is a really popular vector for attacking embedded devices. Please feel free to disregard my reply.
Props for the effort, but who expects a cheap Chinese MCU for consumer products to be resilient against glitching attacks? You don't use that stuff in high-security settings anyway. As for consumer products resilient to advanced hardware attacks, I can only think of the iPhone and some consoles. Anything else?
Some Silicon Labs ARM MCUs and radios have a hardened crypto core and hardened bootloaders. Their closest equivalent to the ESP32 is the WGM160P, but it lacks Bluetooth and costs more.
Disclaimer: used to work there but this is all public information
How about Zigbee products? A few years ago I worked on the development of a wireless product for the traffic-control industry. Security was of great importance, and we opted for a more costly low-power MCU/radio combination using TI MSP430-series processors and Anaren radios. At the time we were more concerned about hackers spoofing our radio signals, but thinking about it now, physical hacking of our firmware would be just as big a threat.
We use these extensively for our higher-security needs. Yes, it's Microsoft, but it's a great product and solves many problems in this space (updating, security, etc.).
If you're interested in this problem space you should definitely check out the ChipWhisperer. They make some great hardware for doing this kind of testing.
Could this get around a locked bootloader on a Sony Xperia Z5 Compact? (As in, the normal sony-website-enabled bootloader unlock NOT allowed when checking in service menu)