Every time some OpenSSL bug is announced, I remember my issues with TLS (I don't know if this bug is TLS-related or not).
Would be nice if TLS supported:
1) "hot" certificates with 24h ttl signed-off by a longer-living certificate on a more secure machine. Having 1-year certificate private key deployed on a web server is crazy. Especially since revocation does not really work.
2) Threshold multi-signature certificates for both CAs and end-user certificates.
3) CA certificates locked to specific TLDs (was there an RFC about something like that already?), so a Russian CA cannot sign a certificate for a Canadian TLD.
4) Ultimately, blockchain-based name pinning at the DNS level.
The last three do not really relate to the case where a bug in OpenSSL reveals a private key stored on a web server.
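For illustration, here's a rough sketch of what item 1 could look like, using the Python "cryptography" package (this is not something TLS itself provides today; all names and file paths are made up). The long-lived CA key stays on the secure machine, which mints a fresh 24-hour leaf certificate for the webserver every day:

    import datetime
    from cryptography import x509
    from cryptography.x509.oid import NameOID
    from cryptography.hazmat.primitives import hashes, serialization
    from cryptography.hazmat.primitives.asymmetric import rsa

    # Long-lived CA key: lives only on the secure signing machine.
    # "ca_key.pem" is an illustrative path, not anything real.
    with open("ca_key.pem", "rb") as f:
        ca_key = serialization.load_pem_private_key(f.read(), password=None)

    # Fresh leaf key, generated daily; the only key the webserver ever holds.
    leaf_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

    now = datetime.datetime.utcnow()
    leaf_cert = (
        x509.CertificateBuilder()
        .subject_name(x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, u"www.example.com")]))
        .issuer_name(x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, u"Example CA")]))
        .public_key(leaf_key.public_key())
        .serial_number(x509.random_serial_number())
        .not_valid_before(now)
        .not_valid_after(now + datetime.timedelta(hours=24))  # the 24h TTL
        .add_extension(
            x509.SubjectAlternativeName([x509.DNSName(u"www.example.com")]),
            critical=False,
        )
        .sign(ca_key, hashes.SHA256())
    )
    # Ship leaf_key + leaf_cert to the webserver; leaking them only
    # matters until the certificate expires tomorrow.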
In brief, it attempts to address weaknesses in PKI operations, and revocation in particular, by allowing issuance of short-lived certificates based on a set of validation rules. The README has some detail about the tool: http://git.openstack.org/cgit/stackforge/anchor/tree/README.....
Just in case the stackforge link disappears, the project is being moved right now into OpenStack proper. There's no dependency though, so you can easily use it on its own.
We'll release version 1 soon, so the API will change just a little bit, but the main idea remains: issuing short-lived certificates from a single location (or multiple; it works without changes in multi-master mode).
If you have any configuration issues, get in touch, we're happy to hear about new use cases for this tool.
Yes, the main reason it was created is OpenStack deployments where Anchor is used for securing internal communications, so the PKI is an internal one (most likely).
We cannot change how the CAs work in public networks unfortunately, but if they do, we're going to be ready :)
The fact that 3) was not built in from the start pains me to no end. It should have been flying first class on Priority Airlines, with a penthouse in the Hilton bugtracker, since 1996. Instead, it's #3 on someone's list somewhere in a forum in 2015.
Oh my God! What? How is this not a major, major thing? How are Mozilla and Google not pushing hard for government CAs to have these, like, stat? And for OpenSSL to actually check them?
I am bewildered. What happened? Is this just apathy?
Enforcement of nameConstraints is inconsistent at best.
I experimented with name constraints a couple years ago for a private CA project, with the idea that I could restrict the private CA to issuing only names within a chosen subdomain.
I remember being able to enforce nameConstraints on the subjectAltName, but I was never able to get it to enforce anything on the subject Common Name. In theory new certificates should always have a critical subjectAltName extension, but since clients fall back to the Common Name when the SAN is absent, the CN gap makes nameConstraints worthless in practice.
It's also possible that my X.509-fu is not strong enough, or that I was testing with an older version of OpenSSL that didn't implement it.
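For what it's worth, here's roughly what that experiment looks like with the Python "cryptography" package (a sketch with made-up names, not the original setup). Building the extension is the easy part; whether anything honors it is up to each client's chain validator, which is exactly the problem:

    import datetime
    from cryptography import x509
    from cryptography.x509.oid import NameOID
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import rsa

    root_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    ca_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

    now = datetime.datetime.utcnow()
    constrained_ca = (
        x509.CertificateBuilder()
        .subject_name(x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, u"Constrained CA")]))
        .issuer_name(x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, u"Private Root")]))
        .public_key(ca_key.public_key())
        .serial_number(x509.random_serial_number())
        .not_valid_before(now)
        .not_valid_after(now + datetime.timedelta(days=3650))
        .add_extension(x509.BasicConstraints(ca=True, path_length=0), critical=True)
        # Only names under this subtree should validate -- but only for
        # clients that enforce nameConstraints at all, and typically only
        # against the subjectAltName, not the subject Common Name.
        .add_extension(
            x509.NameConstraints(
                permitted_subtrees=[x509.DNSName(u"internal.example.com")],
                excluded_subtrees=None,
            ),
            critical=True,
        )
        .sign(root_key, hashes.SHA256())
    )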
> 1) "hot" certificates with 24h ttl signed-off by a longer-living certificate on a more secure machine. Having 1-year certificate private key deployed on a web server is crazy. Especially since revocation does not really work.
This seems like the kind of thing that Let's Encrypt could be good at: if you have a fully automated renewal process, why have certificates last a long time?
1. Store your private keys in a separate process, either on the same machine or on a remote, more secure host, and offload private key operations to that service. This obviously requires an encrypted connection between your webserver and the private key service, but you gain real security from no longer having the private key in your public-facing web server's address space.
2. Another technique you can add is to separate the data plane from the control plane on your public-facing webserver: run a stripped-down process that just handles the low-level reading and writing of buffers to the wire, with a fast pipe connecting it to a separate process doing all the HTTP logic. That way you can lock down the data-plane service and make it harder to exploit, since it has a much smaller attack surface than a full-fledged web server. (A toy sketch of the split follows.)
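Here's a toy sketch of that data-plane/control-plane split in Python, assuming plain TCP and a made-up unix socket path for the pipe to the HTTP process (a real data plane would also own the TLS record layer):

    import socket
    import threading

    CONTROL_PLANE = "/run/http-control-plane.sock"  # assumed path

    def pump(src, dst):
        # Shovel bytes one way until the source side closes.
        try:
            while True:
                data = src.recv(65536)
                if not data:
                    break
                dst.sendall(data)
        except OSError:
            pass
        finally:
            try:
                dst.shutdown(socket.SHUT_WR)
            except OSError:
                pass

    def handle(client):
        # One upstream connection to the control plane per client.
        upstream = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        upstream.connect(CONTROL_PLANE)
        threading.Thread(target=pump, args=(client, upstream), daemon=True).start()
        threading.Thread(target=pump, args=(upstream, client), daemon=True).start()

    listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    listener.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    listener.bind(("0.0.0.0", 443))
    listener.listen(128)
    while True:
        conn, _ = listener.accept()
        handle(conn)

Note the public-facing process parses nothing: no HTTP, no headers, just buffers, which is what makes it hard to exploit.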
Your webserver needs access to the private key when it starts. Is it really a big enough gain to be worth it to move it to another server? If I root the webserver, I can presumably just read the key from that service.
OK, so if my TLS termination host gets rooted... I guess the assumption is that the host is hardened and less likely to be exploited, since it's just doing one well-defined thing?
My ignorance may be showing: how would I completely isolate the private keys from the public-facing service? I suppose using an accelerator card would do it?
You run an "RSA signing server" that accepts connections only from the internal IP of your webserver (or maybe only TLS connections from across the internet but only if the client connecting to it has a client certificate signed by your own self-signed CA, but that depends on topology).
It runs a very simple app that accepts requests to sign something and responds with that something signed by your RSA private key. The code for this is minimal and easy to audit. You teach your web server to use this thing during the TLS handshake, to sign the ephemeral key exchange.
If the web server is hacked, the attackers gain the ability to sign things with your private key, but they don't get the private key itself; they'd need to hack the "RSA signing server" for that. They can't stockpile signed ephemeral key exchanges, either. Once you detect the hack, snapshot and kill the compromised server, and start a new one from a clean backup, they lose the ability to impersonate your site.
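To make that concrete, here's a toy version of such a signing server in Python with the "cryptography" package. The framing, address, and key path are all my own illustrative assumptions; a real deployment would authenticate the webserver (client certs, firewall rules) and sign exactly what the handshake requires:

    import socket
    import struct
    from cryptography.hazmat.primitives import hashes, serialization
    from cryptography.hazmat.primitives.asymmetric import padding

    # The private key lives only in this process, on this host.
    with open("server_rsa_key.pem", "rb") as f:  # illustrative path
        key = serialization.load_pem_private_key(f.read(), password=None)

    def recv_exact(conn, n):
        buf = b""
        while len(buf) < n:
            chunk = conn.recv(n - len(buf))
            if not chunk:
                raise ConnectionError("peer closed early")
            buf += chunk
        return buf

    listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    listener.bind(("10.0.0.2", 9000))  # internal IP only, as described above
    listener.listen(16)

    while True:
        conn, _ = listener.accept()
        with conn:
            # Made-up framing: 4-byte length, then the bytes to sign
            # (e.g. the ephemeral key-exchange parameters).
            (n,) = struct.unpack("!I", recv_exact(conn, 4))
            blob = recv_exact(conn, n)
            sig = key.sign(blob, padding.PKCS1v15(), hashes.SHA256())
            conn.sendall(struct.pack("!I", len(sig)) + sig)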
This is an excellent explanation of how private key offloading works, thank you.
BTW, you can store the private keys in an HSM on your private key server for an additional layer of protection, but nobody uses TLS accelerator cards anymore that I'm aware of. You can do the relevant crypto on commodity CPUs and still have plenty of throughput.
Yes. My first idea for detecting this: keep a count of signatures performed on the private-key-holding server and a count of TLS handshakes on your webserver, and compare the two.
Does that actually help? If the attacker is running arbitrary code as the webserver, they can use the webserver's access to the service to MITM anyone trying to connect to you.
There is no limitation in TLS preventing you from cycling your keys every 24 hours. Most CAs let you do unlimited reissuance. It makes public key pinning hard (impossible?), though.
Entrust, for instance, lets you purchase certs essentially on a subscription plan: you can hold one cert valid for 3 years (and pay the reduced 3-year rate), and reissue it with 24-hour expirations every day.
I personally wouldn't go with an expiration that low because of the operational overhead, but a few weeks or a month is attractive. It still significantly limits the downside versus 1-3 year certs. Basically, any (non-root) cert valid for over a year should be considered against best practice at this point.
I don't understand #1 - what prevents you from doing this today? When you generate a certificate you get to pick the expiration date; you are free to make it as short as you want. Don't intermediate certificates exist to implement this strategy?
And who is going to issue you a reasonably-priced intermediate cert? Especially since PKIX name constraints don't actually work, so that intermediate cert would let you sign just about anything.
Here's [0] the relevant section of the X.509 RFC (Name Constraints). Unfortunately, last time this was discussed on HN, someone mentioned that Name Constraints are not supported by all client software, making it unsafe to rely on them.
I'd imagine (based on very superficial knowledge) that DANE would achieve something to that effect. But it's pretty much dead because apparently DNSSEC wasn't all that great.
It makes losing/leaking a private key less of a problem, because it limits the exposure to a 24h window. It also makes (webserver) key revocation largely unnecessary, because the certificate automatically becomes invalid after 24h.
This announcement narrows down the proximate cause of the bug to changes introduced in the 1.0.1 line.
Assuming this is a defect in a new feature (rather than a bugfix gone awry), that means there's a fairly limited number of culprits: SCTP, DTLS-SRTP, NPN, RSA-PSS, TLS v1.1, TLS v1.2, or SRP.
There was a sort-of interesting memory corruption flaw in OpenSSL 1.0.1 SRP (interesting because the corrupting copy was buried in bignum operations, so you had to squint and read OpenSSL BN code as if it were string copies). I remember when our team found it, we looked somewhat carefully at the rest of the code for similar flaws.
SCTP and DTLS seem a little more likely.
Edited: tired, forgot to write "SRP" in the first sentence.
Given OpenSSL's track record [1], I recommend switching to LibreSSL [2] if possible. They tore through OpenSSL, pulled out all the horrors they found, and beat it into shape. OpenSSL's code was so unbelievably bad that there are certainly more problems lurking in there.
I think what the grandparent is wondering is whether Hacking Team had a 0day in OpenSSL which this will fix, or whether the timing is coincidental. (I don't know the answer, but if they did, it's probably in that 400GB dump.)
I'm mostly just wondering what they mean by HIGH. Something as bad as code execution or Heartbleed, or "just" something like bad DHE checking?
A policy which unfortunately lumps DoS in with remote code execution, both as "high". They're both significant, but one is clearly going to give us all a much worse day than the other, so we're still left to wonder: how bad is this one?
This is a weird question, since almost nothing uses RSA's library as its TLS library. Every mainstream OS that BSAFE is available for already provides a TLS library.
This is a little like asking whether it's safer to use PHP or DTLS.
PolarSSL said it wasn't affected by Heartbleed. There are quite a few non-OpenSSL libraries out there which might or might not be affected by any given bug in OpenSSL. I just remember PolarSSL because I stumbled on that claim while reading about users of the Frama-C bug-finding tool, which they apparently use.
The LibreSSL people showed, one commit at a time, that OpenSSL was just poor coding through and through. I'd expect any implementation that paid more attention to code quality to do better. That's just one part of getting a crypto library right, though.
You say those OpenSSL alternatives can be dangerous. Yet you never recommend against OpenSSL, despite it proving itself to be quite dangerous in more ways than just the cryptographic. Strange double standard.
Anyway, Fox IT [1] recently used PolarSSL in their OpenVPN respin. It has been immune to a number of issues that hit OpenSSL, while its mailing list indicates steady work at finding and fixing its own problems. They improved the cryptographic defaults, too. The effort is open source. If you see non-OpenSSL crypto problems, feel free to publish them and suggest improvements so that people in or outside those projects can make the systems better. So far, you mainly issue blanket recommendations against the alternatives while pushing dangerous stuff (OpenSSL) on readers.
Note: At least you endorsed two alternatives to OpenSSL in this one. A first.
That's straightforward, and it means we agree on an alternative (LibreSSL). You haven't addressed the converse issue, though: you give only vague warnings with a broad word ("cryptographic"). I'm not even necessarily disagreeing with you on PolarSSL. The problem is that you quickly dismiss the alternatives without details, while you don't do the same for OpenSSL despite known, horrific details justifying avoiding it. So I guess the dispute boils down to these issues:
1. What's the specific reason those libraries suck worse than OpenSSL (which SUCKS) and where did you publish that for peer review/improvement?
2. Why don't you treat OpenSSL the same way for all its problems and recommend what you believe is a decent alternative (e.g. LibreSSL)? (Double standards always bother me in this field.)
That's the consistent trend in these threads: denying several libraries for vague reasons while being fine with a known-bad one despite non-vague reasons against it.
My first question was serious: if there were a secure and reliable commercial SSL library out there, I know people would pay for it. No one wants to deal with these recurring OpenSSL issues while the project works to clean up its code.
Whether or not they're any good, I'll at least say that hardly anyone will pay for a secure alternative. It's hard to sell more secure anything to businesses, much less a protocol library they have to integrate into everything, and especially at the prices charged by the companies that might be competent enough to build a good library. They want to recoup their engineering investment, while users want the product for next to nothing.
There's also the problem of integrating it into GPL software, which many companies use. Companies specializing in software IP don't want their stuff released as GPL just because it was used in a GPL app. There are ways to skirt around this, but they add complexity. Stuff like this is why I recommend BSD-style licenses, so that good proprietary code can be integrated with it.
I agree most small-time companies would not pay extra for a library, even though doing so would make sense given the financial risk you take by relying on a free option.
With that said, major vendors who sell very expensive gear built on open source libraries like OpenSSL could afford to pay a license fee per device and pass that price on to their enterprise customers. An enterprise customer would gladly pay an extra 500-1000 dollars for a stable SSL/TLS library if it meant they wouldn't have to upgrade their devices every ~8 weeks due to OpenSSL bugs. It's cheaper to pay for a more stable/secure library (if one exists) than to upgrade mission-critical devices that often (or worse, get hacked).
One thing you can say is that a bank has a lot to lose--they'll invest in whatever it takes to secure their networks and devices.
They could, and that is one of the models. It's a very niche, tiny group that would, though. I mostly saw developments in high-end smartcards, premium guards in defense, and custom work for government/commercial customers by high-assurance contractors... that's pretty much it. There's so little work in the high-security field that I straight up left it and now mainly do R&D on various problems. Even the NSA uses HAIPE and SCIP internally for Type 1 (their best) stuff; they clearly didn't trust SSL/TLS, or even default IPsec, from the get-go. Most buyers just use whatever is cheapest, with the majority of those buying "certified" products doing it for extra government sales and false due diligence (C.Y.A.).
That includes banks. They get breached all the time in various ways while trying to hide it or obscure what exactly happened; I've seen this myself. One said the industry goal is to keep losses at about 6% or less of risky revenues. They have just enough security (and incompetent enemies) to achieve this. The other trick is "investing in" politicians to keep liability laws in their favor and block most lawsuit risk. Past those two, most banks are focused on just cranking out more profit, like everyone else.
Post-Snowden, we've seen increased demand for real security. Yet it requires you to ditch a fully-featured OS, most Internet functionality, a bit of performance, and a significant chunk of your wallet. Further, the widespread use of IT and security that are shit means most people don't know what a strong offering would even look like. These combine to make the sales process for high security an uphill battle. Not likely to change: even I tell newcomers to treat it as a hobby and stick to mainstream INFOSEC practices for job security. We embed our style invisibly where we can, though. ;)