Good politics: they had custom code for parts of OpenSSL, they published it when it seemed needed, the open-source community told them what was wrong, they fixed it and apologized. No bullshit.
Also: the way this played out makes an awesome story for the next time Akamai experiences internal pushback against participating in open source. "Remember that one time it saved our bacon" is a political goldmine.
Them and everybody else. "Remember that time akamai ran vulnerable code in production for 13 years and the bug got patched two days after they open-sourced it" should be able to drive open source contributions at all kinds of companies.
It won't matter. The response will be, "Without releasing the source no one would've ever found the bug. Now think of how many bugs were found that haven't been responsibly disclosed!"
They are only in trouble now because they released the patch. If no one outside the company had seen that patch, they could have spotted the problem and fixed it without anyone noticing and without creating a PR disaster.
They probably wouldn't, but they could.
That is one argument someone could use against "participating in open source".
It's a bit too late for saving bacon; the patch should have been publicly reviewed before it went into production. So no bacon saved, but there is hope for a smoke detector.
If they had not released the patch, then it's entirely possible that black hats would have been able to make off with the keys from their customers - and that would have been a real disaster.
I just want to say I don't think you deserve to get voted down (as you currently have been). I was thinking the same thing about this story - it's very interesting because it highlights the (utilitarian) philosophical battle around Open Source. It's really not totally clear in which case they were safer. Your position should at least be considered.
Look, I like open source as much as the next guy, but this is myopic - one might as well say 'remember that one time we used open source stuff and we had to spend weeks scrambling to fix things?'. Please let's not fool ourselves into believing things that simply aren't true - it detracts from the things we can leverage.
Willem's letter to Akamai addressed three areas where their patch was likely insufficient:
(1) The first question, then, is: are 'p', 'q' and 'd' the only sensitive parameters? NO. (The Chinese Remainder Theorem parameters are included in OpenSSL private keys by default, and they are enough to recover the private key.)
(2) Second question: Does OpenSSL ever copy the private key, or parts of it, after the key has been read? YES.
(3) Third question: Does OpenSSL allocate temporary variables for intermediates when performing operations with a private key? Can they be used to recover the private key? YES and YES.
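To make (1) concrete, here is a toy sketch (my own illustration, not code from Willem's letter): if just the CRT exponent dP = d mod (p-1) leaks out of the protected area, the public (n, e) is enough to factor n and rebuild the whole key.

```python
# Toy illustration of claim (1): a leaked CRT exponent dP = d mod (p-1),
# together with the public (n, e), factors the modulus with one gcd.
from math import gcd

def recover_p(n, e, dP, base=2):
    # e*dP == 1 (mod p-1), so base^(e*dP) == base (mod p);
    # the difference is a multiple of p, and gcd with n exposes p.
    return gcd(pow(base, e * dP, n) - base, n)

# Tiny textbook key: p = 61, q = 53, e = 17
n, e = 61 * 53, 17
d = pow(e, -1, (61 - 1) * (53 - 1))
dP = d % (61 - 1)
p = recover_p(n, e, dP)
print(p, n // p)   # 61 53 -- the full private key follows immediately
```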
We are, in fact, evaluating those other claims, and testing them in our labs. We are also evaluating claims made by other researchers, both publicly and privately; and we hope to end up with both a better library out of this, as well as a better understanding of the hazards we do and don't provide safeties against.
Validating his first claim was sufficient to undermine our belief in key safety; the others are therefore only relevant for protection against future attacks. In that light, it was important to advise our customers and the community at large.
I think it is important to remember, when validating flaws, that we show attacks are possible, not that they are necessarily demonstrated. Just because one cannot construct an exploit doesn't mean that something is secure.
Why would their update need to address more than just (1)? That's the only part of the problem that they have to deal with. The OpenSSL team should be the one answering (2) and (3), since it's OpenSSL that copies the PK/parts after it's been read, or allocates temp vars for intermediates. If Akamai chooses to patch those two problems and then releases it to open source like their initial patch, then they can answer them, but otherwise, what's the point?
(2) & (3) are not bugs for OpenSSL because only Akamai is trying to segregate the key data.
In upstream OpenSSL the intermediates are stored in the same memory pool as the key data. Thus the intermediates do not expose privileged data anywhere it is not already present.
Without addressing issues (2) & (3), Akamai's patch is not a line of defense. It does not guard against dirty-memory exploits like Heartbleed; it only increases the difficulty of exploitation.
The part I still don't get, because I haven't seen it discussed so far, is that their original patch mentioned they had been using a variant of the patch "for a decade", but I never got why they didn't seek to get it merged upstream a decade ago. Or why they released "a variant" of their patch instead of the real thing to begin with.
It is only part of the video, but they talk about searching Google and finding partial private keys, e.g. ---- BEGIN PRIVATE KEY ---.
Even with text missing, it is trivial to recover the entire key, because of redundancy that Akamai neglected to take into account. I had never really thought about how those ASCII strings are encoded, but it's actually the six or so key values, encoded. If some are missing, you can recover the others.
In retrospect, why shouldn't the private key use an RFC 822-like key/value format so this is obvious? I don't see why it has to be encoded at all; the format is unnecessarily obscure.
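For the curious, here's a quick look at what's actually inside one of those blobs (my own sketch, assuming the third-party `cryptography` package): the PEM text is just base64-wrapped DER holding n, e, d, p, q and the CRT values, and most of them can be rebuilt from the rest.

```python
# Peek inside a PEM/DER private key and show the redundancy: given the public
# (n, e) and any single prime, everything else can be reconstructed.
# (Assumes the third-party "cryptography" package is installed.)
from cryptography.hazmat.primitives.asymmetric import rsa

key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
nums = key.private_numbers()                 # p, q, d, dmp1, dmq1, iqmp
n, e = nums.public_numbers.n, nums.public_numbers.e

# Pretend only p survived the redacted paste; recover the rest from (n, e, p).
p = nums.p
q = n // p
d = pow(e, -1, (p - 1) * (q - 1))            # a valid private exponent

m = 0xDEADBEEF
assert q == nums.q
assert pow(pow(m, e, n), d, n) == m          # recovered exponent decrypts correctly
```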
The rest of this talk is also great. Most of it is accessible to non-cryptographers (like me), as long as you understand basic arithmetic and have an undergrad-level overview of public key crypto (which they give). They provide very readable and runnable Sage scripts.
I really like the batch GCD attacks. Very elegant and effective.
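For anyone who hasn't seen it: the core idea, in a naive pairwise sketch of my own (the talk's Sage scripts use product trees to make it scale to millions of keys), is that any two moduli sharing a prime factor are both broken by a single gcd.

```python
# Naive pairwise version of the batch-GCD idea: moduli sharing a prime factor
# are both factored by one gcd. (The real attack uses product/remainder trees.)
from math import gcd

def shared_factors(moduli):
    for i, n1 in enumerate(moduli):
        for n2 in moduli[i + 1:]:
            g = gcd(n1, n2)
            if 1 < g < n1:                   # common prime -> both keys factored
                yield n1, n2, g

# Toy moduli: 77 = 7*11, 91 = 7*13, 221 = 13*17
print(list(shared_factors([77, 91, 221])))   # [(77, 91, 7), (91, 221, 13)]
```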
Right before this all came out our Akamai rep came on site and swore up and down they were not vulnerable. They need to quickly communicate internally as well that a vulnerability existed and send account managers back out to work on high priority cert rollovers.
Our rep said the same thing even when we insisted they rotate our certs on day 1.
Of course we're still waiting. I don't know whose decision it was to not reissue all certs on day 1, but this is an epically bad decision, bordering on gross negligence.
No, this was not a good response. It was well written and took responsibility for bad code, but it failed to address the elephant in the room: why were keys not immediately rotated?
I've said this a number of times lately, but it bears repeating that the golden rule of security is: If a compromise occurs, assume that everything is compromised. Even if their approach was tested to be perfect, they shouldn't have made the assumption that that actually reflected reality.
They were extraordinarily irresponsible. There's simply no other way to slice it.
No, that goes too far the other way. If I have legitimate reason to believe that an exploit does not affect me (especially if it's because I acknowledged its possibility and programmed defensively against it), why would I automatically act as if I were breached too?
Your argument is totally correct if I am simply unsure---if I don't know whether an exploit affects me but I suspect it might, I assume the worst and act accordingly. But in this case Akamai had good reason to believe they were unaffected. (And, when someone demonstrated otherwise, they responded responsibly.)
> If I have legitimate reason to believe that an exploit does not affect me (especially if it's because I acknowledged its possibility and programmed defensively against it), why would I automatically act as if I were breached too?
If you have a memory read out of a process that contains sensitive data, you should always assume that the sensitive data was breached, even if you "programmed defensively against it". Period. No exceptions.
They had good reason to believe they were unaffected, which is why they shouldn't have hit the "oh shit, the world is ending" button. But the decision to not roll their keys in the face of a breach was a massively irresponsible one. We have how many years of major bugs to look at? And in how many cases are the likes of Akamai (that is, organizations with theoretical mitigations in place) actually safe when they think they are?
Always, always assume your protections are faulty in a way that a motivated attacker will figure out.
Take a look at how much protection people's legitimate reasons to believe that this didn't affect them have given so far - the only people who haven't been proven wrong were the ones who weren't running a vulnerable OpenSSL. Their arguments all assumed that because they couldn't imagine a way to get the keys there wasn't one, when in fact they just hadn't thought about it enough.
> If I have legitimate reason to believe that an exploit does not affect me (especially if it's because I acknowledged its possibility and programmed defensively against it), why would I automatically act as if I were breached too?
Your argument hinges on the word "legitimate." I don't do security, but "it looks good to me" is not a legitimate reason to believe that the exploit does not affect you (for any value of "me"). OTOH "I've never used the versions of OpenSSL that have this bug" is a legitimate reason to believe that you're not affected, e.g. Tarsnap's explanation [1]. I'd also recommend that post for another take on what defensive programming looks like in this context.
They believed that they were covered and that no compromise could/did occur. It's a mistake they now acknowledge but they have owned up to it and are taking action - just as you suggest.
I believe their actions are honest and completely reasonable. If they had known they still had an issue, I seriously doubt they would have publicised the patch and risked the issue they now have.
> I believe their actions are honest and completely reasonable. If they had known they still had an issue, I seriously doubt they would have publicised the patch and risked the issue they now have.
I completely agree that their actions are honest, and I seriously doubt they would have played things this way if they knew their protections were faulty. I'm not saying that they're acting maliciously; I'm saying that they acted in an arrogant and irresponsible way.
If an attacker can read memory out of a process containing sensitive data, the assumption should always be that they got the keys to the kingdom even if they didn't. I'm not saying they didn't do their best to defend their service -- on the contrary, I think that their defense is a good one -- but that they played chicken with an exploit and ended up on the wrong side of it.
> If an attacker can read memory out of a process containing sensitive data, the assumption should always be that they got the keys to the kingdom even if they didn't
We both agree on this.
> but that they played chicken with an exploit and ended up on the wrong side of it.
I disagree with this. They thought they had a valid technical solution and did the honourable thing by releasing it. Unfortunately they were wrong, but that's a bug and not an arrogance issue.
This is the perfect example to point people to when they ask why the company shouldn't just maintain an internal fork or write its own version of an open source project.
Actually contributing the code you write back helps _everyone_.
I would like to know how many security-related patches they are keeping to themselves. And I don't think Akamai is the only one doing this. I hope this situation will prove that security needs a bit more patch review and a bit less secret sauce.
I suspect their patches are oriented toward paranoia. Their particular patch (keeping what they believed to be all critical values in a special region of memory used only for critical things) had no obvious necessity -- it didn't obviously provide more security -- but was just paranoia. Paranoia isn't always necessary, or even desirable; hence, the remainder of the patches have no obvious external utility, so they presumably haven't released them.
They may also have performance improvements, conceivably, and those have a more obvious rationale for keeping secret.
As Andy tweeted, we built that particular patch to keep unencrypted secrets off disk -- it actually was intended to provide tangible security benefit, it wasn't just paranoia:
https://twitter.com/csoandy/status/455307255895060480
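(For readers wondering what that means mechanically: the standard trick for keeping plaintext secrets off disk is to pin their pages so they can never be swapped out. A rough Linux-only sketch of that idea in Python, not Akamai's actual patch:)

```python
# Rough Linux-only sketch (not Akamai's patch): pin a buffer with mlock() so the
# plaintext key material held in it can never be paged out to swap.
import ctypes, mmap

PAGE = mmap.PAGESIZE
libc = ctypes.CDLL("libc.so.6", use_errno=True)

secret_arena = mmap.mmap(-1, PAGE)           # anonymous, zero-filled mapping
addr = ctypes.addressof(ctypes.c_char.from_buffer(secret_arena))
if libc.mlock(ctypes.c_void_p(addr), ctypes.c_size_t(PAGE)) != 0:
    raise OSError(ctypes.get_errno(), "mlock failed (RLIMIT_MEMLOCK too low?)")

secret_arena[0:3] = b"key"                   # secret stays resident in RAM
```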
If you folks have used this technique for years as I believe it has been stated, any particular reason why it wasn't contributed back to OpenSSL earlier?
Why is everyone only talking about SSL keys? Heartbleed allowed an attacker to fetch arbitrary data, which could also have included SSH keys or any other authentication credentials or information the server held, for itself or for other services. For example, if your site had Facebook API keys, it's possible those have been leaking too.
Yes, it is. Leaking memory contents across processes would be a pretty serious bug in the kernel. So when your process asks the kernel for memory (via sbrk or mmap) the memory it gets is zeroed out.
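(Easy to check from userspace; Python's mmap module wraps the same anonymous-mapping call:)

```python
# Freshly mapped anonymous pages arrive zero-filled from the kernel, so a plain
# allocation never shows you another process's old data.
import mmap

buf = mmap.mmap(-1, 4096)    # anonymous mapping, no backing file
print(buf[:16])              # b'\x00\x00...' -- all zero bytes
```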
When a company says "we're not changing our potentially compromised keys because we're protected by this magic box" and you punch a million holes in the magic box, your goal should be to make it well known that the protection is faulty. He accomplished that quickly, and now they're finally doing the right thing.
If the Heartbleed bug was still viable against your service, I'd say that he made the wrong call and should've emailed your team about it. But here, we're basically talking forensics.
I've written at length about the decision to not roll the keys as soon as Heartbleed was discovered, so I won't repeat myself, but in short: him publicizing this, as well as the Cloudflare challenge immediately getting slapped down, should make it clear to everyone that if you have a memory read, you should assume everything is compromised.
Hopefully this is a good thing for security in general.
For what it's worth, I think that the patch you guys threw out there (and have been using for a while) is a great idea, and I really hope it ends up in mainline OpenSSL. I think you guys have acted 100% properly from a tech perspective in this whole thing, even if a bug did pop up (when do they not?).
I disagree strongly with the business decision made w.r.t. keys, but I hope people don't take that as me ragging on the tech team over there; you guys are doing good work.
I don't know akamai's relationship with open source projects, but it sounds like they had local changes and tried to karma-whore them as, "trust us, our customers were protected, and we contribute to open source." I hope I'm wrong; otherwise they deserve the public shaming they are receiving. Ultimately, we need more engagement from commercial organizations benefiting from open source projects.
This does not inspire confidence in akamai for me. These days people on the internet seem to get super excited when various companies write post mortems on how they screwed up. Yay, they screwed up and were honest about it!!! I find those to be wildly overrated. I'd rather there not be a screw-up to begin with. And I don't particularly care if you come clean or not.
I know bugs are written and bugs happen. It's an unfortunate fact of life. And anyone who posts their code will likely have holes poked in it by the community. But the akamai thing really sets off warning flags. The fact that they thought it would work and it doesn't, at all, does not sit right.