1. C & S exchange messages to agree on versions, ciphersuites, and nonces.
2. S->C certificate, which includes an RSA public key.
3. C verifies certificate against its local cache of CA roots.
4. C->S random secret encrypted under the RSA key (the "pre-master secret").
5. C & S derive (the same) set of MAC keys, crypto keys, and crypto parameters.
6. C & S verify every message of the handshake with the MAC keys.
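The derivation in steps 4-5 can be sketched in a few lines. This is a toy illustration, not the real TLS PRF: the key-derivation function here is a stand-in using HMAC-SHA256, and the key sizes and labels are made up for illustration.

```python
# Toy sketch of steps 4-5 above (NOT the real TLS PRF; HMAC-SHA256 is a
# stand-in key-derivation function for illustration only).
import hashlib
import hmac
import os

def derive_keys(pre_master, client_nonce, server_nonce):
    """Both sides run this with identical inputs, so they get identical keys."""
    seed = client_nonce + server_nonce
    block = hmac.new(pre_master, b"master secret" + seed, hashlib.sha256).digest()
    # Split the output into separate MAC and encryption keys.
    return {"mac_key": block[:16], "enc_key": block[16:]}

client_nonce = os.urandom(32)   # exchanged in the clear during step 1
server_nonce = os.urandom(32)
pre_master = os.urandom(48)     # step 4: sent encrypted under the server's RSA key

# Step 5: client and server independently derive the same keys.
client_keys = derive_keys(pre_master, client_nonce, server_nonce)
server_keys = derive_keys(pre_master, client_nonce, server_nonce)
assert client_keys == server_keys
```

The point is that the pre-master secret is the only input an eavesdropper doesn't see; everything else travels in the clear, yet both ends still converge on identical keys.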
Other useful things to know:
* SSL/TLS operates over a "record layer" of TLV-style messages. The TLS record layer itself supports fragmentation, which is a little crazy.
* The server can ask the client to send a certificate too; this is common on backend connections and unfortunately not common with browsers, because the UI is terrible.
* Less commonly, C & S can opt for a "DHE" (ephemeral Diffie-Hellman) key exchange, in which the RSA key from the certificate is used to sign a DH key exchange (DH allows both sides to use random public keys instead of long-term fixed keys, but suffers from exposure to MITM attacks --- the RSA key from the cert "breaks the tie" in a MITM situation, making the exchange secure). This has the advantage of ensuring that even if an attacker has been recording all your traffic for years, she can't compromise a server's private key and then decrypt older connections. This is called "forward secrecy".
* The two common cipher suites used on most connections are AES in CBC mode and RC4. AES-CBC chunks plaintext into 16 byte blocks, padding the last block if there are insufficient bytes to fill it. Until TLS 1.1, TLS ran CBC in a continuous stream over the whole connection, using the last block of the most recent message as the IV for the next, which gave rise to the BEAST flaw. RC4 is a stream cipher that encrypts byte-at-a-time --- but nobody trusts RC4 much.
* TLS 1.2 (IIRC) introduces AES-CTR, which runs AES as a stream cipher.
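The forward-secrecy property of the DHE bullet above is easy to see with toy numbers. This uses an illustration-sized prime that offers no security; real DHE uses 2048-bit groups, and the server's half of the exchange is signed with the cert's RSA key.

```python
# Toy ephemeral Diffie-Hellman (the prime here is illustration-sized and
# offers zero security; real DHE uses 2048-bit groups).
import random

p, g = 23, 5                       # public group parameters
a = random.randrange(1, p - 1)     # client's ephemeral secret
b = random.randrange(1, p - 1)     # server's ephemeral secret

A = pow(g, a, p)   # client sends A in the clear
B = pow(g, b, p)   # server sends B, signed with the cert's RSA key

shared_client = pow(B, a, p)
shared_server = pow(A, b, p)
assert shared_client == shared_server

# a and b are discarded after the handshake, so a later compromise of the
# server's long-term RSA key cannot recover this shared secret from a
# recording of the traffic. That's forward secrecy.
```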
Hmmm. I'm sure browser certs is a well-worn topic, so maybe you could just point me to a proposal/discussion about it, but I don't really see the benefit client keys have over per-user symmetric tokens if each app is its own CA. Symmetric tokens can also be distributed/stored without special browser support.
The cryptosystem required to do per-app certificates is already resident in every mainstream browser; we are literally a UI/UX fix away from having that working.
Meanwhile, to a first approximation, zero people have tokens.
Web servers would have to be retrofitted to sign, distribute, and authenticate client certificates, so there is O(apps) work to be done. For symmetric tokens, each app would just have to store the token in the user DB, and then tell the client to hold it in local storage, which is probably less work for the server admin, and also doesn't require browser support.
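The symmetric-token scheme being argued for amounts to very little server-side code. A hypothetical sketch (the function names and in-memory "database" are made up for illustration):

```python
# Hypothetical sketch of the per-user symmetric-token scheme described above:
# the server mints a random token per user and checks it on later requests.
import hmac
import secrets

user_db = {}  # stand-in for the app's user database

def issue_token(user_id):
    token = secrets.token_hex(32)
    user_db[user_id] = token       # server stores it alongside the user record
    return token                   # client keeps this in local storage

def authenticate(user_id, presented):
    stored = user_db.get(user_id, "")
    return hmac.compare_digest(stored, presented)  # constant-time comparison

t = issue_token("alice")
assert authenticate("alice", t)
assert not authenticate("alice", "wrong-token")
```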
I've always wondered what, if anything, stops someone from eavesdropping on or duplicating the initial handshake before the communications are encrypted. If you capture the negotiated cipher and understand the schema used, you should be able to decode the otherwise secure traffic.
In step 4 above, the client sends data to the server that is encrypted with the server's public key. You don't have the server's private key, so you cannot decrypt that data, but the server can. So you cannot duplicate things, even if you are watching.
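To make that concrete, here is textbook RSA with tiny numbers. Real TLS uses 2048-bit or larger keys plus padding, so this is only an illustration of why watching the wire isn't enough.

```python
# Toy RSA with tiny primes (real TLS uses >= 2048-bit keys and padding;
# this only shows why an eavesdropper can't recover the pre-master secret).
p, q = 61, 53
n = p * q                            # public modulus
e = 17                               # public exponent; (n, e) is in the certificate
d = pow(e, -1, (p - 1) * (q - 1))    # private exponent, known only to the server

pre_master = 42                      # step 4's secret, as a small integer
ciphertext = pow(pre_master, e, n)   # this is all the eavesdropper sees

# Only the holder of d can invert the operation:
assert pow(ciphertext, d, n) == pre_master
assert ciphertext != pre_master
```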
I agree with the commenters on SE that say "Thomas Pornin will have the best answer". While I absolutely understand the utility of SE's competition to produce the best message, I find it destroys any interest I have in contributing to the site.
I just finished reading this book: "SSL and TLS: Designing and Building Secure Systems"[1] and I now have a _much_ better understanding of how SSL works, what security it provides, how it provides it, and when and where to use it.
It is a thorough read and not the easiest, but you can always pick and choose what chapters you're interested in. I highly recommend it. You can buy it from Amazon here (not an affiliate link):
That's a misstatement of Moxie's position, which is unfortunate because Moxie's position is important, probably correct, and needs all the credibility it can get.
What's broken about SSL/TLS is the current CA model. Since SSL/TLS was introduced, we've been running with almost exactly the same trust "UX": a hidden browser config panel listing a series of complicated-sounding trusted root CAs, each with the authority to sell or transfer their business to some other entity, or even to delegate the authority to sign certificates to other organizations.
That's absurd; it's a security model that clearly can't work in the real world --- and, more, demonstrably hasn't worked. SSL CAs have been caught red-handed selling their authority for dubious reasons. For instance, Trustwave sold a CA=YES certificate to an undisclosed third-party corporation solely for the purpose of making it easier (not "possible" but merely easier) for that corporation to monitor their own users.
We need a radical rethinking of the UI/UX and trust model behind SSL/TLS, and Moxie's idea of decentralizing that trust points in the right direction --- so that, say, the ACLU could operate a sort of CA root that would vouch for Verisign's signatures on core ecommerce sites but would not accept a crazy delegation from Iran.
The protocol, on the other hand, is for all its warts the best-tested crypto protocol in possibly the history of computing. Baby & bathwater, and all that.
I remember seeing an interview where someone asked a Chrome team member if they had plans to support Moxie's idea. They said, roughly, that if Chrome supported it, Google would have to run some notary servers, and people would rest assured that Google is running them and not be bothered to run their own, leaving Google with having to support this for the entire internet.
Thankfully, the world does not depend on Google to move decentralized trust for TLS forward; since it's mostly a UX change, and it's confined to a very small part of the TLS stack (the verification of server certificates), it can be retrofitted over existing infrastructure with neither changes in server software nor major changes to browsers. We can probably do it via plugins.
What can those of us who like this decentralized CA whitelist idea do to help it gain adoption? Can I start telling Chrome right now to use a whitelist maintained by an external source? Should I just pick someone's whitelist (e.g. ACLU, EFF, yours) and trim my browser and OS whitelists to only use those?
Also, how does this affect SSL certificate "pinning" as implemented in Chrome? I guess it doesn't, since even if you have a pinned cert for a specific domain, Chrome will still verify the trustworthiness of the CA that signed it?
Sorry, I should also have mentioned TACK, Moxie and Trevor Perrin's proposed system of allowing servers to dynamically update certificate pins. As many people here know, Chrome already has a system of pinned keys, which means that as far as Chrome is concerned, Chrome is the final arbiter of GMail's public key, not Trustwave or any other CA. TACK allows browsers to keep a cached list of pins in somewhat similar fashion to HSTS, which caches a list of servers that must use TLS.
TACK is just a proposed standard right now; I have no idea where it's going. But it's a good band-aid on the existing CA system.
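At its core, a pin check is just "does the key the server presented hash to something I've cached for this host?" A simplified sketch (Chrome actually pins hashes of SPKI structures and TACK adds signed, updatable pins; the key bytes below are placeholders):

```python
# Simplified pin check (Chrome pins SPKI hashes; TACK adds signed, dynamic
# pins -- those details are omitted here).
import hashlib

def spki_hash(public_key_bytes):
    return hashlib.sha256(public_key_bytes).hexdigest()

known_good_key = b"server public key bytes"   # placeholder for real DER bytes
pins = {"mail.example.com": {spki_hash(known_good_key)}}  # cached pin list

def check_pin(host, presented_key_bytes):
    cached = pins.get(host)
    if cached is None:
        return True   # no pin cached: fall back to ordinary CA validation
    return spki_hash(presented_key_bytes) in cached

assert check_pin("mail.example.com", known_good_key)
assert not check_pin("mail.example.com", b"key from a mis-issued certificate")
```

The crucial property: a certificate from a mis-issued CA chain fails the pin check even though it would pass ordinary CA validation.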
Several things come to mind; let's take Convergence (http://www.convergence.io) as an example. First, the software (the Convergence plugin and notary code) needs to be stable. This is mainly the job of the project team, but they might need help. In my personal experience, the code works well; I haven't checked recently to see how it performs for others. The Convergence plugin is a hack, so it probably has a few rough edges.
Second, the protocol and the code need to be independently reviewed. This hasn't happened as of yet, but I am sure it will if the popularity of Convergence increases. I'd be willing to give it a go at some point.
Third, we need a good-enough infrastructure to start with. SSL Labs (which I run at Qualys) sponsors Convergence by providing 4 notary servers (2 in the US, 2 in Europe). These notaries are installed by default, so you could say that the infrastructure is decent (at the current level of usage).
Finally, we need to have the technology available in all major browsers, at the very least pre-installed and available as an option, but -- ideally -- fully integrated ("This looks like a self-signed certificate; please wait for a moment while we verify that you are not under an attack"). A big problem for adoption is that browsers are lacking APIs for this type of work.
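The notary model described above boils down to asking several independent vantage points what certificate they see for a host and requiring agreement. A sketch with simulated notary responses (the fingerprint strings and quorum rule here are made up for illustration; the real Convergence protocol differs in the details):

```python
# Sketch of a Convergence/Perspectives-style notary check: accept the
# certificate only if enough independent notaries saw the same one.
# Notary responses are simulated; fingerprints are placeholder strings.
from collections import Counter

def notary_consensus(observed_fp, notary_fps, quorum=3):
    """Accept if at least `quorum` notaries report the same fingerprint."""
    counts = Counter(notary_fps)
    return counts.get(observed_fp, 0) >= quorum

fp = "ab12cd34"  # fingerprint of the cert our connection received
assert notary_consensus(fp, ["ab12cd34", "ab12cd34", "ab12cd34", "ab12cd34"])

# A MITM near us shows us a different cert than the notaries see:
assert not notary_consensus("ee99ff00", ["ab12cd34"] * 4)
```

Because a local attacker can't also sit between the host and every notary, disagreement is a strong MITM signal, with no CA involved.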
That's what the Perspectives plugin for Firefox (http://perspectives-project.org/) does, but, as with all these things, it probably won't see wide adoption until it's implemented in a browser by default (and it's good enough that it will have very few false positives or negatives).
I wonder what building a Chromium version would entail. Do you think it would be a lot of work? I feel like it would have been done already if it were possible with the present extension APIs.
I'unno, man. It's probably significantly lower than 99.99% of chrome users that stick with Google as their default search despite the first-run search engine selection dialog, but also likely not too much lower, either.
What's important is that they give users the choice -- not just in a tucked-away settings pane or whatever, but actually flash the question in the user's face in a meaningful way. If search engine monopolies are important enough to add "friction" to user interaction, then I'd say CA oligopolies are at least doubly so. The conflict-of-interest factor isn't present in the case of CAs, because, AFAIK, Google doesn't run any themselves (yet), but I think it's still very important to give users the choice.
I also want to point out that Google runs all kinds of infrastructural nodes all up and down the internet's stack. They have no problem pioneering high-performance DNS to move the web forward, and they're even running their own fiber optic network, for crying out loud. They're huge on promoting IPv6 adoption (mainly because it will remove any significant cap on the internet's, and thus their, growth). I think they can handle a few SSL notaries.
But security behind the scenes doesn't contribute to a palpably sexy image of the web in the masses' minds, so it doesn't really help Google's bottom line enough for them to care. It's a bit like their "speed" initiative, complete with JS CDNs: terrible for privacy (third-party resources send referrers to Google when fetched), but by increasing performance it makes the web seem like a more serious platform in the subconscious minds of users.
Sure, they can handle a few CAs, but then you've switched from the CA oligopoly to a Google monopoly. A significant fraction of the web would basically be running with Google as its sole CA.
Thanks, by the way, HN moderators, for expanding the short URL the site generated for sharing. Now it doesn't keep track of how many people visited the URL, and I missed the Announcer badge. May I ask why?
I didn't use a URL shortener; Stack Exchange just gives me a short link when I click the share button. And if they're banned, how could I have posted this...