The DROWN Attack (drownattack.com)
791 points by jsnell on March 1, 2016 | 194 comments



The vulnerability here is tricky to exploit but actually simple to describe.

There's a padding oracle in the RSA padding scheme (PKCS#1 v1.5) used by both TLS and SSLv2; by repeatedly sending permuted versions of a ciphertext to an SSLv2 server, you can gradually discover the plaintext. Both SSLv2 and TLS have countermeasures for this attack.

But SSLv2's countermeasures are sabotaged by the crappy ciphers it also supports. In both TLS and SSLv2, the anti-padding-oracle trick is that the server detects bad messages and then generates a fake message to continue running the protocol with, instead of aborting (which would reveal to the attacker that the message was corrupt, thus enabling the padding oracle). But when SSLv2 does that, the attacker can detect that it did, because the cipher key lengths in SSLv2 are so short that they can be brute-forced. The attacker knows when the SSLv2 server replaced its message with a fake one, and thus has a working padding oracle.
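
To make that concrete, here's a rough sketch (Python, with hypothetical decrypt_export and looks_valid helpers; not the paper's actual procedure) of how the enumerable 40-bit export keyspace turns the server's reply into a yes/no answer about the padding:

    def padding_was_valid(server_verify, challenge):
        # If the RSA padding was bad, the server substituted random key material,
        # so no key in the 2^40 export keyspace decrypts ServerVerify to something
        # containing our challenge. If the padding was good, one key will.
        for key in range(2 ** 40):  # enumerable in practice with GPUs / rented compute
            if looks_valid(decrypt_export(server_verify, key), challenge):
                return True   # real key material was used: padding was accepted
        return False          # only a substituted key makes sense: padding was rejected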

The big problem here is that people run old SSLv2 servers with the same RSA keypairs as their TLS servers. So you can take messages you captured from the TLS servers, and, with some very clever message manipulation owing to an older Bardou paper, make them intelligible to SSLv2, and use the SSLv2 padding oracle to decrypt them.

It's a great, great paper. The Bardou "trimmer" stuff was news to me too! :mind-blown-emoji:

The top line takeaway on DROWN for most people seems to be "export ciphers are evil". I think: (a) boring! (b) misses the more important point.

To me, the real problem here is RSA. Virtually every system on the Internet that does RSA uses PKCS1v15 padding (DNSSEC, which is only now being rolled out after nearly two decades of standards work, uses PKCS1v15 padding!). Moreover, RSA directly exposes an encryption primitive, unlike DH+signature forward-secure protocols, and RSA ciphertexts are surprisingly malleable.
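
"Malleable" in the sense that anyone can turn a ciphertext into a related one using only the public key. A minimal sketch with toy textbook-RSA numbers (no padding, tiny primes; real keys are 2048+ bits, and Python 3.8+ is assumed for the modular inverse):

    p, q, e = 61, 53, 17
    n = p * q                            # toy modulus
    d = pow(e, -1, (p - 1) * (q - 1))    # private exponent

    m = 42
    c = pow(m, e, n)                     # ciphertext for m

    s = 7
    c2 = (c * pow(s, e, n)) % n          # attacker uses only the public key (n, e)
    assert pow(c2, d, n) == (s * m) % n  # c2 decrypts to s*m mod n

It's exactly this multiplicative structure that Bleichenbacher-style attacks lean on.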

To me, the real takeaway: RSA is obsolete. Stop using it.

We walk you through this vulnerability in our challenges here: http://cryptopals.com/sets/6/ --- start with #46


> The vulnerability here is tricky to exploit but actually simple to describe.

Yessss finally

> There's a padding oracle

hysterical sobbing sounds


An "oracle" is simply a circumstance in which an adversary can coerce you to use your secret key to do a useful computation, some result of which is revealed.

Padding oracles are a subset of "error oracles", which are oracles that come from exception processing. The "useful computation" you're being forced into doing is "decrypting, checking for errors, and then somehow signaling the error."

Think of error oracles this way: imagine a situation in which an adversary can hand you a ciphertext which, when decrypted, will trigger an error (in a padding oracle, that error is "the padding is wrong"). Now imagine that the error, revealed to the adversary, allows them to infer one bit of the plaintext. Not very useful by itself, but, as a final step, imagine that the attacker can permute the same ciphertext, generating (or not generating) an error and allowing them to reveal a different bit of the plaintext.

As long as they can keep doing that, they'll eventually recover the whole plaintext.
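
In pseudocode-ish Python, with hypothetical oracle() and permute() callables standing in for the protocol-specific parts, the whole attack is just a loop:

    def recover(ciphertext, oracle, permute, nbits):
        bits = []
        for i in range(nbits):
            # craft a variant of the ciphertext whose validity depends on bit i
            candidate = permute(ciphertext, i)
            bits.append(1 if oracle(candidate) else 0)  # error / no error = one bit
        return bits

Real attacks (Bleichenbacher '98, Vaudenay's CBC padding oracle) need much cleverer permute() functions than that, but the shape is the same.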

Here's a good starting point:

http://cryptopals.com/sets/6/challenges/46/


This is a great explanation and I appreciate you providing it.

I think the point GP was making was that he got literally 4 words into the "simple to explain" explanation before realizing this explanation is going to be like tons of other things he reads here and elsewhere that require so much prior and specialized knowledge as to be nearly impossible for someone lacking a high level of experience in the area to understand.

I often feel this way when security topics come up on HN. I've been writing code a long time and I think I'm of moderate intelligence and I often have no idea what anyone is talking about. I try not to lose sleep over it because I understand that security can be really, really complicated (even if the issue is "easy to explain") and so I'm just not going to get it.


Right. Well, you're not going to get the exploit from a simple description of the attack. DROWN is built on some well-known attacks on RSA, most notably Bleichenbacher's 1998 padding oracle attack, but the specifics of how to manipulate an RSA ciphertext to make TLS records intelligible to SSLv2 are hairy even if you've implemented BB98 before --- I'm going to call this "stunt Bleichenbachering".

So it's not very important that you understand exactly how to go from "errors/exceptions generated by the target" to "recovery of plaintext". Even if you understand the specifics of those kinds of attacks, they're even more complicated (and fun) here.

The interesting nut of the attack is: there's a well-known error oracle in both TLS and SSLv2, but it's long since been mitigated in both protocols. However, details of the crappy crypto in SSLv2 conspire to make the mitigation for padding oracles in SSLv2 ineffective, and you can use the SSLv2 oracle to attack TLS ciphertexts if TLS and SSLv2 ever share keys.


Are error oracles why security software nearly always has poor error messages and therefore poor usability? Or is that just tradition?


No, that's just tradition. Error oracles are a crypto thing; the errors don't even need to be exposed to users for them to be a problem (for instance, the Lucky13 TLS attack uses timing to discern exceptions, not actual error messages).


Well, at a high enough level there is some connection there.

I think everyone can understand why "Error: The fourth character of your password is wrong" is both more "usable" and insecure.


A tradition I would love to see go bye-bye.

Had to deal with a program recently that gave a "corrupted MAC" error on the server. All searching etc. hinted at it being a network issue. Only after having fiddled with the network for ages and getting nowhere did I set up a loopback test and found the same error.

Turns out the program i was dealing with was using an old lib.


Imagine it like you're looking through a keyhole at a computer monitor.

It's setup so that you can only see the very last letter in what you think is a really long sentence someone has typed into a text editor.

With your bluetooth keyboard you move to the beginning of the line and repeatedly hit the spacebar, taking note each time what the new letter is that you can see through the keyhole.


I ran across a related issue working on a project that I think helped me understand the broader issue better. I'll relate it here and see if it helps.

We had user search functionality for an internal business application. We allowed searching by several fields, such as name, and phone number, and allowed partial matches, where the first part of a field matched.

Phone numbers were restricted information for some users, and we didn't display them in the search results. But you could search by those fields, if you already knew the value.

It turned out we left 'partial matching' available in the phone number field.

If you knew the name of a particular person, you could do the following:

Enter their name to confirm a single match shows up.

Then enter the same search, but put a single digit in the phone field, say "0". If no result came back, you knew that their phone number didn't start with that digit. So you try the next, 1, 2, 3... when you hit, say 5, you get a result. Now you know their phone number starts with a 5.

So you start on the next digit. 50-, 51-, 52-... and you get the next digit. Eventually you can reveal their entire phone number, and it was fast enough to do manually.

It wasn't super critical for the internal system, so we changed the phone numbers to exact match only and that was fine. But let's continue.

Let's say we had a REALLY slow system, and it checked each digit one at a time, taking one second per digit, and sent back a failure when it hit a non-matching digit. We aren't leaking information DIRECTLY; you get no records back for partial matches. But we ARE leaking information indirectly. The attack now works roughly like this:

Do the same verification search to get a record.

Now do the same digit-by-digit search, starting with 0. Carefully time the delay between when you submit the query and when the error shows up. If it's almost instant, you know the first digit failed. If it takes about a second to give you an error, you know the first digit worked.

You then continue to the second digit. One second fails, two seconds verifies.

Using only the timing you can determine each digit, and reveal the entire number. This is not the main information channel; the timings are what's called a side channel. The service that gives you this information by responding with an error and varying the time is an 'oracle' because it answers your questions.
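
Roughly, in code (Python, with a hypothetical search(name, phone_prefix) call standing in for the slow server described above):

    import time

    def recover_phone_number(name, search, digits=10):
        known = ""
        for _ in range(digits):
            best_digit, best_time = None, -1.0
            for d in "0123456789":
                start = time.monotonic()
                search(name, known + d)            # only the elapsed time matters
                elapsed = time.monotonic() - start
                if elapsed > best_time:            # slowest failure = one more digit matched
                    best_digit, best_time = d, elapsed
            known += best_digit
        return known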

This seems clear at a one-second time scale and obvious when it's explained, but it turns out that even over public networks with enough tries you can determine very small time differences, on the millisecond or less scale.

It turns out there are a lot of variations on this, some of them are very complex, and secure systems need to work very hard to run in the same amount of time ("constant time") for any query to avoid leaking information this way. The timing exploit described above is a very simplified form of the root of the described vulnerability.
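
For reference, "constant time" in the simplest case just means examining the whole value instead of bailing out at the first mismatch, e.g.:

    def constant_time_equal(a: bytes, b: bytes) -> bool:
        if len(a) != len(b):
            return False
        diff = 0
        for x, y in zip(a, b):
            diff |= x ^ y      # accumulate differences; no early exit
        return diff == 0

(In practice you'd use a vetted primitive like hmac.compare_digest rather than rolling your own.)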

This situation helped me understand 'timing attacks' and 'oracles' in a more mundane coding context, and hopefully it will help others.


This really helped me understand a few security concepts including the fundamentals of this attack. Thanks for the post.


This is a really good explanation. Thank you.


The problem is that RSA is also used for most certificates, including signing certificates. This attack also implicates digital signatures. While we've made good progress shifting towards DH/ECDH ciphersuites -- and TLS v1.3 will eliminate RSA entirely -- we have essentially no deployment of DSA/ECDSA certificates. It's all RSA. So "getting rid of RSA" sounds good, but won't happen in TLS/SSL anytime soon.

Moreover, if we can't get rid of export ciphersuites after twenty years, what is the probability that we'll manage to get rid of RSA in just a few? I'm not that optimistic.

There would be very little problem with RSA in TLS if people had moved to modern schemes like OAEP and RSA-PSS. The problem is that we didn't, and we're stuck with obsolete crap. Moving away from obsolete crap isn't the solution, it's the definition of the problem.


Moving away from obsolete crap isn't the solution, it's the definition of the problem.

One could argue that the CA/Browser forum has achieved some success with moving away from SHA-1. As a spectator, I don't understand why this process is not repeated for similar obsolete primitives or standards.


I read a blog post by a guy with long experience with this. What happens is large players demand that there be a 'reasonable' deadline for compliance. And then half the companies involved sit on their hands for two and a half years and then demand an extension. And then another, and next thing you know you're still using RSA fifteen years after people knew they needed to stop using it.

Only solution I can think of is to create some sort of license where once the sunset deadline is established, the license to use it expires hard on the deadline.


That's very interesting, do you happen to have a link for the blog post?



Thanks, that would be the one. I get this feeling that encryption protocols and standards often end up in all sorts of dank corners of the web infrastructure, and finding and updating all of these is a really messy task. And I suspect service providers and their customers haven't been really good at keeping track of everything.


Fascinating. I still feel I'm missing something basic here: If Microsoft, Google and Mozilla announce they're not going to accept any particular crypto primitive two years from now, and this time there won't be any exceptions, CAs and websites just have to abide, don't they?


The browsers say what they accept, the server says what it provides and something in the intersecting set will be used.

If (as a random example that didn't annoy me at all for 2 years) a website also needs to support SmartTV devices which only accept obsolete certificates, then your server either breaks them or keeps the obsolete support.


Then a bunch of big companies announce they'll use another browser to be able to keep using it


Another browser beside Chrome, Firefox and IE? OK, so Symantec announces that they will only use Opera. Even then, they have to deal with their customers, website operators who need a certificate trusted by the big 3 browsers, leaving. In fact, now that Let's Encrypt certificates are free, it seems like this is the Symantec CA's worst nightmare.


Not CAs, but clients like banks


RSA OAEP and PSS are just as unlikely to win the Internet as ECDH and curve signatures. But if you had to pick one to win, you'd pick curves.


Why argue about which battle to fight instead of trying to do both? Get widespread support for ECDSA; get RSA implementations up to date with modern padding schemes. We don't need to pick and choose which modern crypto to use; use modern crypto everywhere.

Obviously upgrading old clients is hard, but that's going to be a problem regardless. So in new standards that need RSA (e.g. something where the private key needs to be able to encrypt and sign), push for OAEP. If they don't need RSA, push for modern curves.


Because even if you use OAEP, RSA is more error-prone --- particularly at a design level --- than curve cryptosystems and, for most applications, doesn't allow you to express anything you couldn't express better with curves.


What are the arguments against RSA OAEP?


Public key encryption transforms are one of the biggest foot-guns in cryptography. OAEP is at least not prima facie broken, but using it still exposes you to the design risks of building with public key encryption.

(There are attacks against OAEP, but they're less common and not intrinsic to the design the way PKCS1v15's are).


Even if RSA does go away, and it will, public-key encryption primitives won't. The post-quantum craze is mostly made up of encryption and signature primitives, so we have that to look forward to. Even the lattice-based key agreement we have is basically fancy KEM, which leads to things like [1].

My impression is that RSA never really got the "djb treatment". The people designing OAEP and friends were mostly theorists concerned with security reductions, not implementation issues. I think an idiot-proof RSA scheme could be devised, but it is now way too late for that.

[1] https://eprint.iacr.org/2016/085


> RSA is obsolete. Stop using it.

The problem is, the ONLY curves that are supported in practice are NIST P-256 (prime256v1) and P-384 (secp384r1), which aren't considered safe. So, I guess, many are reluctant to switch (because we're non-experts and don't know real implications, but we heard that there's something not right there - and shouldn't one be wary?).

Neither Curve25519 nor Curve448 - which are said to be safe - are usable with X.509 and TLS, yet. Still waiting for that RFC to get past the draft status.

So, what should we do?


> The problem is, the ONLY curves that are supported in practice are NIST P-256 (prime256v1) and P-384 (secp384r1), which aren't considered safe.

I don't believe this is accurate. It is true that there are complaints about the curves' lack of parameter transparency and the difficulty of implementing them securely. But I don't think people are actually suggesting the curves are not safe to use conceptually.

ECDSA over P-256 should be good to use until the new curves are adopted (which will take a while), as long as you're not writing implementations yourself. It is true that the new curves will be a better alternative once they are available, given that they are designed for transparency and ease of implementation.


You have two choices then:

    - Use RSA until 25519/448 land in TLS
    - Switch to ECDSA in the interim
Personally, I would switch even though ECDSA isn't great.

The concerns over P-256 and P-384 are more academic than anything: They're hard to implement safely and without side-channels. Read: hard, not impossible.

You shouldn't be writing your own ECDSA implementation, however. That'd be foolhardy.

EDIT: I use secp384r1 for signing random_compat, so if anyone gets bit by this recommendation, I will too: https://github.com/paragonie/random_compat/blob/master/dist/...


The NIST curves also have constants which may or may not be manipulated.

Bruce Schneier recommends against using them.


Even Bernstein doesn't argue that the NIST curve seeds are actually backdoors. Schneier isn't a curve researcher; in fact, he's more like an anti-curve pundit. I'm not sure his opinion is all that powerful.

Regardless, I'm not suggesting new cryptosystems should use the NIST P-curves. They shouldn't; those curves are just as tricky to use as RSA.


What about the NSA "freaking out"[1] about ECC in general?

[1]: http://blog.cryptographyengineering.com/2015/10/a-riddle-wra...


Rodents of unusual size? I don't think they exist.


Your vote of confidence is overwhelming. ;)


I would recommend against using them if your adversary is the NSA and your threat model comes wrapped in tin foil (and if I didn't, I'd get ignored anyway).

If so, use NaCl/libsodium at the application layer and don't rely on ECDSA alone.

If your threat model is "criminals", ECDSA is less insane than RSA (provided, once again, you're not implementing it yourself, but relying on an implementation developed by a team of cryptographers and security engineers).


Threat model wrapped in tin foil? I don't understand what you mean, are you suggesting paranoia?

My threat model includes NSA dragnets but not being specifically targeted by the NSA.


In that case, active attacks against Weierstrass field arithmetic aren't part of your threat model, and ECDSA/ECDH over the NIST curves is fine.


So this is something that can't be done en masse? Okay, thanks.


>> Threat model wrapped in tin foil? I don't understand what you mean, are you suggesting paranoia?

That term likely means one of two things: guarding against a particularly capable attacker, or what others would call paranoia.


NSA dragnets won't decrypt things using dodgy curves for signatures (ECDSA), only things using dodgy curves for key exchange (ECDH).


Use Curve25519.


drdaeman says that Curve25519 isn't usable with X.509 and TLS. Your reply is "Use Curve25519". What are you saying? That drdaeman is wrong, and Curve25519 can be used with TLS? Or are you saying, "don't use TLS"? Or... what?


The spec for Ed25519 signatures in TLS is not yet ready: https://tools.ietf.org/html/draft-irtf-cfrg-eddsa-02

X25519 for key exchange in TLS is ready and is in openssl master (and in boringSSL). I don't know the status of NSS support, or plans for SChannel and CoreCrypto support.

X25519 key exchange and Ed25519 signatures have been deployed in nacl, libsodium, ssh, etc. for a while.

EDIT: NSS ticket for X25519 Key Exchange: https://bugzilla.mozilla.org/show_bug.cgi?id=957105


Frankly I don't do security critical things on TLS anymore. I'm tired of having to wait in limbo for months to fix known vulnerabilities.


Other protocols are probably just as broken, we just haven't found the vulnerabilities yet. Few protocols (if any) get as much scrutiny as SSL/TLS.


This seems like what a sensible person would do.

On the other hand, placing evergreen confidence in "TLS", believing that it is "good enough" or concluding "it's all we've got" are lines of thinking that do not make sense to me. The vulnerabilities just keep coming, one after another.

High speed crypto not part of TLS that, as another commenter put it, is "considered safe". Does it exist?

Useful software that is written from the start with such care that it does not need to be continuously patched ad infinitum. Nonexistent? (No need to answer. I know the truth.)

Getting something added to TLS seems difficult enough, but getting something removed seems impossible. Like all bad software, TLS has numerous "features" I do not need and will never use. OpenSSL is like a museum of cryptography, preserving the obsolete for posterity.

Long live TLS. May it forever waste my time and energy.


What do you use instead? SSH? IPSec?


Go to datacenter. Open cage. Plug in keyboard. Oldschool.


Are you shielding the keyboard from EM leakage? I don't open a cage, I get in one and do my work in there... No more EM eavesdropping.


Unless there is an enemy antenna with you inside the cage. A real security pro uses a pneumatic keyboard.


No magnetized needle and a steady hand?


Thanks for the excellent summary.

"The big problem here is that people run old SSLv2 servers with the same RSA keypairs as their TLS servers. So you can take messages you captured from the TLS servers, and, with some very clever message manipulation owing to an older Bardou paper, make them intelligible to SSLv2, and use the SSLv2 padding oracle to decrypt them."

That is a lesson worth remembering: something acquired in one context might become valuable in an attack in another context. I've seen this concept show up repeatedly in hacking and security analysis. Clever how this attack applied it.

I've occasionally wondered if that could be turned into some mental framework or heuristics for generically applying it to various security analyses. Some systematic way, maybe semi-automated, of saying: we've collected all these pieces of information about protocols, configs, etc. Now what does that automatically imply in terms of attacks, or even connections that might lead to them?

Not sure that it's feasible in the general case. I used it in prototype security scanners before the commercial ones showed up. Might be something to be had researching the concept further. (shrugs)


As JP Aumasson said https://twitter.com/veorq/status/683360050199552001

    1998: Bleichenbacher's padding oracle attack on RSA
    2016: still vulnerable systems
    http://framework.zend.com/security/advisory/ZF2015-10
I second "stop using RSA".


RSA isn't obsolete. PKCS1v1.5 padding is obsolete.


I'm with you on this one. This 'RSA is evil' conclusion is like taking a look at [1]---or the gazillion Schnorr-style signature scheme nonce leaks published over the years---and concluding that elliptic curves are evil.

[1] http://web-in-security.blogspot.com/2015/09/practical-invali...


I concede that "RSA is obsolete" is somewhat hyperbolic, given OAEP and PSS, but RSA is dangerous, more dangerous than modern curve crypto. Developers overestimate their ability to mitigate dangers.


I think developers of elliptic curve crypto overestimate their ability too. Avoiding side channels in RSA is easier than avoiding them in ECC - fewer moving parts.


Developers that are likely to expose exploitable side channels in curve software are just as likely to expose them in RSA software. But developers that use curve software are going to avoid a bunch of vulnerabilities that are specific to RSA. They should use curves, and avoid RSA.

But I don't know why I'm letting you off the hook on this. Can you be as specific as you can about the additional moving parts you're referring to? Is your comment about the difficulty of getting constant time key agreement or verification specific to legacy Weierstrass curves? And are you concerned about timing leaks in something other than scalar multiplication? If so, what are those other things?

When you say there are "more moving parts", are you referring to the fact that there's both a modular reduction and a scalar multiplication that need to be protected?


The amount I tend to learn after you get invested in a thread and start busting out things I've never heard of is honestly my counterpoint to arguing on the Internet being unproductive. That it is Percival you are fencing makes me look forward to the rest of this thread, because EC vs RSA is an interesting dialogue and you both have a lot to say on it. It might look like arguing to both of you, but scraps of useful information do fling off and I appreciate your arguing.

This thread and others like it make me think we need a security (and general) debate series.


Colin and 'pbsd are much smarter than me, and learning stuff is actually why I'm so happy to pick fights with them.

I was a blog-arguer before HN, and a Usenet person before blogs, and a BBS person before that, and most of what I've learned in my whole career is traceable somehow to Internet arguments.

I really do think I'm right about RSA, though.


I come back to my debate point. I think tech is in a state now where we definitely have personal policy positions but as engineers, we are reluctant to frame what we do that way. This is in contrast to politics where most issues are acceptably on a spectrum, because that is more of a "soft" field. I suspect "RSA is obsolete" is one of those things where you've carved out a position on one end, much like Schneier carved out his position on one end of curves as discussed upthread. I'm not calling you wrong, mind, as that comparison might imply: just a surprising opinion and you've certainly earned it, and I like hearing about it, and I think of you as one colorful end of a spectrum on this and a few other points. And throughout my life the extreme opinions have often panned out (i.e. Snowden), so I don't dismiss them much any more.

Maybe you're crypto's Bernie. Feel the Ptacek.


In my experience, this is how developers see crypto:

    RSA: "I learned that as an undergrad. It's just multiplication, I'll use GMP!"
    ECC: "Whoa what the hell is this? I better use a library."
These are the same developers who ask on StackOverflow how to decrypt MD5, of course.


> These are the same developers who ask on StackOverflow how to decrypt MD5, of course.

Rainbow tables. Duh!


That's not decryption.


Of course not.

(Though I wonder if you can make a pedantic argument that if the domain of your one-way function is suitably restricted (e.g. to passwords humans actually come up with), it does become bijective onto its image.)


> The top line takeaway on DROWN for most people seems to be "export ciphers are evil". I think: (a) boring! (b) misses the more important point.

The US government never misses a propaganda opportunity: "uncrackable crypto and impenetrable devices are evil". But technical people so easily miss the marketing value of sound bites.

This is a clear-cut case in which yet another major Internet security vulnerability is significantly caused by US government policy. Why shouldn't we run with this as a way to explain to the public why regulation of crypto is bad?


Another valuable takeaway is that forward secrecy is well worth having. In this case, any previously collected trove of sessions that used DHE/ECDHE is not decryptable using DROWN.


Having had to briefly use an old PalmOS phone when my previous phone died, I can say that very few HTTPS sites still support SSLv2. However, most imap/pop servers seem to still support it.


This. I typed in one of my domain names and found out my Postfix had SSLv2 enabled for some odd reason. No idea why; I always assumed the defaults had SSLv2 properly disabled. Wrote an explicit `smtpd_tls_protocols = TLSv1, TLSv1.1, TLSv1.2, !SSLv2, !SSLv3`, reloaded the daemon, ran `openssl s_client -connect localhost:25 -starttls smtp -ssl2`, saw the negotiation failure; hope I'm good now.


This sounds really bad. Is it?

So, if a server is accepting SSLv2 at all, then all connections (including TLS connections) to the same server are practically compromised. That is, assuming both SSLv2 and TLS use the same key. Until now, having allowed SSLv2 seemed to only jeopardize clients foolish enough to use it, but now it endangers every single client. Is that right?

Two very important things come from that:

Everyone who had sslv2 enabled at any point in the past must get a new SSL certificate right away. Right?

Windows XP has TLS disabled by default, which in practice means unavailable. So we must cut them off. Therefore Windows XP is dead-dead, starting today. Right?


> Everyone who had sslv2 enabled at any point in the past must get a new SSL certificate right away. Right?

No, unlike Heartbleed, the key isn't leaked directly. Revocation and generating a new key isn't necessary - just patch your stuff and disable SSLv2.

> Windows XP has TLS disabled by default, which in practice means unavailable. So we must cut them off. Therefore Windows XP is dead-dead, starting today. Right?

IIRC XP supports TLS 1.0.


Ah, so it's not exfiltrating the key; it uses the SSLv2 oracle to break captured TLS traffic.

Ok, that makes it a lot less scary. Pretty bad still, but at least the keys are not compromised.


It does support TLS, but it's disabled out of the box, and no one who is still running XP is likely to turn it on.


That seems to be the case only for IE 6, not 7 or 8[1] (which are available on XP).

[1]: https://en.wikipedia.org/wiki/Template:TLS/SSL_support_histo...


Thanks for rescuing me from my own ignorance! I was reading a different chart that wasn't as clear.


Note that there's an SSLv3 after SSLv2 -- both are broken, but SSLv2 is now a much bigger liability than before.

You shouldn't have had SSLv2 enabled for many years now. If you need SSLv3 for WinXP support, this bug doesn't change anything in that regard.


Good point on v3.

I kept v2 running because we couldn't bring ourselves to block connections from our clients' clients; it's not our place to tell them what to do, especially if it only harms them and no one else. This time it harms everyone, including high-privilege users, so it's a lot easier to justify.


Only if they are using IE6, I think. That being said, I think there are some IE7 and IE8 upgrades that inherit the old default from IE6.


> To me, the real takeaway: RSA is obsolete. Stop using it.

RSA also has the nice property of being deterministic.

ECDSA may not be, depending on how you implement it. EdDSA is.

This matters a great deal in a world where it's important to assume your hardware may have some adversarial properties. It's much easier for your ECDSA device to purposefully leak your private key than it is for RSA (both because there's an explicit covert channel available, and also because of how much smaller elliptic key-pairs are in practice).

Also, as we prepare for a post-quantum crypto world, this might be a bad time for shorter keylengths.

There are lots of great reasons to use curves, but I think describing RSA as obsolete is a little premature.


Don't use ECDSA, and don't use RSA.

I don't think reasonable key lengths are going to make much of a difference if quantum computing becomes a practical threat. All the RSA cryptosystems and all the ECC cryptosystems will fall.

Meanwhile: the literature suggests that RSA and classical DH keys are threatened much more by conventional computing advances than curve keys; curve keys resist index calculus attacks.


To me, as a SW engineer, the mitigation against the padding oracle attack in SSLv2 (pretend it's correct and generate a random PMS) seems like a very bad hack. Is there some better, more modern way to protect against bad input that eliminates this entire class of problems, similar to how we use encrypt-then-HMAC or GCM for encryption?
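
For reference, the TLS flavour of that mitigation (roughly per RFC 5246 section 7.4.7.1) looks like the sketch below; rsa_decrypt_pkcs1 is a hypothetical helper here, and the real RFC text also worries about doing this without observable branch or timing differences:

    import os

    def recover_premaster(encrypted_pms, client_version):
        # client_version is the 2-byte version from ClientHello; a premaster
        # secret is 48 bytes: those 2 bytes followed by 46 random bytes.
        fake = client_version + os.urandom(46)       # built unconditionally, up front
        try:
            pms = rsa_decrypt_pkcs1(encrypted_pms)   # may raise on bad padding
        except Exception:
            return fake                              # never signal the padding failure
        if len(pms) != 48 or pms[:2] != client_version:
            return fake
        return pms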


And the rotation modulo after that to do a lattice attack \o/


From a pool of 11 million scans of HTTPS sites, I could only find ~265k targets with SSLv2 enabled [1]. That's 2.4%, not 25%.

A breakdown of the types of targets that have SSLv2 enabled would be useful to understand how they reached that number. It's possible that they scanned much much more than HTTPS on port 443, and found a lot of embedded devices with poor SSL configurations.

At any rate, you should verify the configuration of your websites. There are many tools to do that, and we publish configuration samples to make it easy: https://mozilla.github.io/server-side-tls/ssl-config-generat...

[1] https://twitter.com/jvehent/status/704657734810148864

[edit] the page https://drownattack.com/top-sites shows that sites like yahoo.com are vulnerable, but there are no details as to why it is listed as vulnerable. yahoo.com does not allow connections with SSLv2, but some of its subdomains do, so maybe the top-level domain is listed because some of its subdomains are vulnerable?

[edit 2] according to one of the researchers, the scanner "check if pubkey (not cert) runs on SSLv2. Then mark all others with that pubkey vuln" (src: https://twitter.com/seecurity/status/704665265712308224)


> [edit 2] according to one of the researchers, the scanner "check if pubkey (not cert) runs on SSLv2. Then mark all others with that pubkey vuln" (src: https://twitter.com/seecurity/status/704665265712308224)

That seems to make sense, given that any SSLv2 service using a pubkey that's used on a non-SSLv2 service can be used in an attack against the latter.

The main takeaway from this is probably that it's a good idea to separate your SSL keys by service (or, more specifically, by DISTINCT($attack_surface), where $attack_surface is your software stack, OpSec, etc.). This would limit the impact of a number of vulnerabilities (think: Heartbleed) and should generally become a best-practice.


Due to CVE-2015-3197, many servers using OpenSSL reported they supported no SSLv2 ciphers, which causes OpenSSL-based SSLv2 clients to hang up. However, if you aggressively choose a cipher and continue the handshake, the server will still negotiate the connection. As a result, most existing scanning tools vastly underestimate SSLv2 support.


Wait, what? Is there a PoC script?


It's not enough to not run SSLv2 on the target; SSLv2 can't be available on any server that shares the same RSA keypair. It's a cross protocol attack, where attackers use the SSLv2 server as a tool to attack the TLS server.


Yes, as long as one of the servers that share the keypair has SSLv2 enabled.


has SSLv2 enabled or has CVE-2015-3197 and accepts SSLv2 cipher suites even with SSLv2 disabled.


From the article: "For the third time in a year, a major Internet security vulnerability has resulted from the way cryptography was weakened by U.S. government policies that restricted exporting strong cryptography until the late 1990s."

The bearing of this on the ongoing "backdoor" and "just one device" discussions is critical. This is direct evidence of real harm done by weakening encryption.


The attack is viable without export ciphers. The real issue here is the continued and unabated prevalence of PKCS#1 v1.5 padding (really, of RSA in general).


Oh, I agree. I don't mean that the weakness created the vulnerability entirely, thank you for clarifying. I appreciate the excellent research and security investigation done here. It was excellently articulated and presented, and we need to keep improving our publicly available tools and finding these issues.

What I mean is that, today, many many years since the creation of the suite, the single-target cost has dropped to 'only' $440 per target for non-weakened ciphers. I love the money-to-execute-attack metric.

That's still high enough to slow down the viability of broad attacks for most attackers. What about in the past? How much was that back when the suite was created? Much higher. How much sooner did the weakened version make the attack efficient?

I don't mean that the government is the source of all crypto flaws; I mean that we have a specific example of a government-mandated crypto 'back door' weakness causing a specific harm, making exploitation of a security flaw substantially easier. This is direct evidence of the harm caused by undermining security, which the government is currently adamant can be done "safely". The security and hacker communities know that this is not true, and this is a timely counterexample.

This is direct evidence that weakening cryptography, including back-doors, and special "one time" access fundamentally cause harm by undermining the security of cryptography for everyone.


The MITM attack is only viable due to a bug introduced by handling of export ciphers, even though it affects non-export ciphers. So export ciphers are the real issue here.





If you want to check your servers for various other attacks with a shell script:

https://testssl.sh/

Also, I'm not seeing any guides on fixes for Dovecot yet. If you built from source or the defaults aren't working, you can use the following:

    ssl_cipher_list = ALL:!LOW:!SSLv2:!EXP:!aNULL
Something more secure (blocks other vulnerabilities):

    ssl_cipher_list = ALL:!ADH:!LOW:!SSLv2:!SSLv3:!EXP:!aNULL:!RC4:+HIGH:+MEDIUM
The second one also covers SSLv3. You can read more on why to disable this at: http://disablessl3.com/

Also, you may need to update your Exim configs as well:

    tls_require_ciphers = AES128+EECDH:AES128+EDH
    openssl_options = +no_sslv2 +no_sslv3
If there's anything else I'm missing, let me know.


Please don't put !SSLv3 in the cipher list. I had to help out Pinboard who did this: https://twitter.com/yuhong2/status/602545883775836161


  ssl_cipher_list = ALL:!LOW:!SSLv2:!EXP:!aNULL
Does this just disable all SSLv2 ciphers, or disable SSLv2 via SSL_OP_NO_SSLv2? The former might not be enough, unless your OpenSSL version includes fixes from 1.0.2f and g.


Dovecot also supports disabling SSLv2 directly:

    ssl_protocols = !SSLv2 !SSLv3


From the FAQ:

"In technical terms, DROWN is a new form of cross-protocol Bleichenbacher padding oracle attack. It allows an attacker to decrypt intercepted TLS connections by making specially crafted connections to an SSLv2 server that uses the same private key."


> Disabling SSLv2 can be complicated [...]

If users are accidentally enabling a feature that's been insecure for 20 years, the vulnerability is that your configuration mechanism is too complicated to understand.


So, disable SSLv2. That is not new information. I like how the FAQ insists you are still at risk if you've disabled SSLv2 because you still might be using it somewhere else!!! In other words, if you've not disabled SSLv2 everywhere, then there are still places where SSLv2 is enabled. Thank you.


I expect you already understand this, but to be clear: the risk appears to be that even one forgotten old server (or service) running SSLv2 can put your up to date, highly secure servers at risk (if you've shared keys between them).

To my eye, the way you've phrased it here could be read to imply "any server with SSLv2 disabled is safe". The linked report says this is incorrect, and its inaccuracy is responsible for roughly half of the vulnerabilities that they have observed.


So, disable SSLv2 and get a new key, and only share it between servers with SSLv2 disabled.


Note that the private key isn't actually leaked (as opposed to Heartbleed), so it wouldn't be necessary to revoke the old certificate and use a new key. Rather, DROWN uses an existing SSLv2 service with the same key as an Oracle to decrypt (usually secure) TLS connections.

It's still a good idea to not share keys to limit the exposure for future attacks.


Also, disabling SSLv2 in versions of openssl up til January did not actually disable SSLv2.


Or, more specifically, disabling all SSLv2 ciphers didn't disable SSLv2 entirely. Disabling SSLv2 as a protocol worked.


Quote from paper: "Unfortunately, during our experiments we discovered that OpenSSL servers do not respect the cipher suites advertised in the ServerHello message. That is, the client can select an arbitrary cipher suite in the ClientMasterKey message and force the use of export cipher suites even if they are explicitly disabled in the server configuration. The SSLv2 protocol itself was still enabled by default in the OpenSSL standalone server for the most recent OpenSSL versions prior to our disclosure."

Hopefully OpenSSL still honored a configuration setting to disable SSLv2?


It would depend on exactly how the code disables SSLv2 - if it just disables all SSLv2 ciphers, you're SOL, see my other comment: https://news.ycombinator.com/item?id=11202785


Yes, seems that way.


I don't know that much about encryption at all. But I was wondering if the fact that the private key for a certain web server could be uncovered using this attack implies that all the encrypted data that could have been gathered by, say, NSA's PRISM, while the site was using that certificate is now available to them in 'plain text'?


Unlike Heartbleed, the actual private key is not leaked to the attacker. Rather, a SSLv2-enabled server allows you to build an oracle with which you can decrypt a TLS connection (modern crypto, secure, etc.) to a server that uses the same key.

With Perfect Forward Secrecy, any DROWN compromise is limited to the period where an attacker is actively running the attack against you. Previous PFS sessions cannot be decrypted. Without PFS, I'm not sure, but my gut feeling is that previous traffic could be decrypted if someone actively attacks you on an unpatched system now.


This is why complexity is so terribly evil in secure protocols. All these features, feature flags, upgrades, downgrades, state transitions... it's all surface area for bugs. A complete state diagram for a modern SSL/TLS stack would be huge and contains edge cases that are probably undocumented and poorly understood.

Do boring crypto: https://cr.yp.to/talks/2015.10.05/slides-djb-20151005-a4.pdf

Boring crypto means minimal state, minimal code (and therefore easy to audit), a minimal set of algorithms with new algorithms and encodings and such being added only when absolutely necessary, etc. Crypto "flexibility" means edge case bugs and lots of code where bugs can hide.

There is in general an exponential relationship between LOC and complexity and bugs. In crypto and security code this is really bad.



Oh come on, it targets SSLv2. You better have a _damn_ good reason for still having SSLv2 enabled on your systems.

If you didn't, you had this one coming.


> Oh come on, it targets SSLv2. You better have a _damn_ good reason for still having SSLv2 enabled on your systems.

"Unfortunately, during our experiments we discovered that OpenSSL servers do not respect the cipher suites advertised in the ServerHello message. That is, the client can select an arbitrary cipher suite in the ClientMasterKey message and force the use of export cipher suites even if they are explicitly disabled in the server configuration. The SSLv2 protocol itself was still enabled by default in the OpenSSL standalone server for the most recent OpenSSL versions prior to our disclosure."


I think it's a little bit more complicated than that, and depends on how SSLv2 was disabled.

From the OpenSSL 1.0.2f release notes:

  SSLv2 doesn't block disabled ciphers

  A malicious client can negotiate SSLv2 ciphers that have been disabled on
  the server and complete SSLv2 handshakes even if all SSLv2 ciphers have
  been disabled, provided that the SSLv2 protocol was not also disabled via
  SSL_OP_NO_SSLv2.
So if you disabled all SSLv2 ciphers, but didn't actually disable SSLv2 itself, you could still complete a SSLv2 handshake. If your server software disables SSLv2 via SSL_OP_NO_SSLv2, then you're fine. This seems to be the case for (at least) nginx, based on how I read the code.
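
For code that drives OpenSSL through Python's ssl module, the protocol-level disable (the moral equivalent of SSL_CTX_set_options(ctx, SSL_OP_NO_SSLv2) in C) looks roughly like this; merely restricting the cipher list via set_ciphers() would not have been enough, per the advisory quoted above:

    import ssl

    ctx = ssl.SSLContext(ssl.PROTOCOL_SSLv23)  # negotiate the best mutually supported version
    ctx.options |= ssl.OP_NO_SSLv2             # disable the SSLv2 protocol itself
    ctx.options |= ssl.OP_NO_SSLv3             # and SSLv3, for POODLE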


One reason: Using a distribution that uses old OpenSSL versions (see my own post).


Is there anyone developing an exploit for this attack? You'll need a vulnerable OpenSSL version to test. After major Linux distros release the patch, it'll be difficult to obtain a vulnerable OpenSSL.

Timemachine (debian) is a tool that constructs a Docker image of a Debian base system. You can choose a Debian distribution and a date in the past. Then install any (vulnerable) package of your choice for experiment. https://github.com/CSLDepend/timemachine

This tool is a part of a security testbed that we are developing. We have created container images of recent attacks in our repo, e.g., https://github.com/CSLDepend/itestbed/tree/master/repo/mitm/...

If anyone is interested in developing such testbed for reproducible security experiments, let me know @pmcao


Are you familiar with version control systems?


The site appears to be getting hit pretty hard, here's Google's cached page: https://webcache.googleusercontent.com/search?q=cache:Hi1cki...


For those using nginx:

All nginx versions >= 0.8.19 (Oct 2009), and backported to >= 0.7.65 (Feb 2010), have SSLv2 disabled in default configuration[1][2]. Versions >= 1.9.1 (May 2015) also disable SSLv3 by default, which is not affected by this particular attack but suffers from the POODLE attack.

If you are on a version of nginx covered above, just ensure your nginx configuration does not have an "ssl_protocols" directive explicitly enabling SSLv2 (and SSLv3 for POODLE).

If you are on an older affected version of nginx, check your configuration to make sure you exclude SSLv2 (and typically SSLv3) with something like:

    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;

It is not necessary to worry whether your nginx is built against openssl 1.0.2g, as this release simply disables SSLv2 support by default. If you've covered your bases with your nginx configuration, the openssl update is not strictly required.

[1] http://nginx.org/en/docs/http/configuring_https_servers.html (versions listed at very bottom of page)

[2] https://drownattack.com/nginx


From the paper: "In order to decrypt one TLS session, the attacker must passively capture about 1,000 TLS sessions using RSA key exchange, make 40,000 SSLv2 connections to the victim server and perform 2^50 symmetric encryption operations"

In other words, while this compromises TLS when SSLv2 is enabled, it seems practical only for targeted use by state actors, and even then it's easily detectable by network capture.


If by "state actors" you mean "people who can afford $500 of AWS compute", then yes.


Basically, it looks like this affects servers that still support SSLv2. From the mitigation notes:

> To protect against DROWN, server operators need to ensure that their private keys are not used anywhere with server software that allows SSLv2 connections.

Also, I like this snippet:

> Disabling SSLv2 can be complicated and depends on the specific server software.


According to the info page, SSLv2 can only be disabled on OpenSSL by having the right (newer) version of OpenSSL installed.

I just checked Debian versions.

Wheezy (oldstable): Much too old OpenSSL version, according to the info site.

Jessie (stable): Still too old OpenSSL version.

Stretch (testing): Still too old OpenSSL version.

Sid (unstable): The same version of OpenSSL as Stretch -- still too old.

What to do?


From the recent Debian security advisory (DSA-3500-1) about this:

"Additionally the EXPORT and LOW ciphers were disabled since thay could be used as part of the DROWN (CVE-2016-0800) and SLOTH (CVE-2015-7575) attacks, but note that the oldstable (wheezye)[sic!] and stable (jessie) distributions are not affected by those attacks since the SSLv2 protocol has already been dropped in the openssl package version 1.0.0c-2."

So it seems that neither Jessie nor Wheezy are affected by this.


Thank you very much!


Typically, most server software has configuration options that allow you to specify which protocol versions are permitted (i.e. ssl_protocols in nginx, SSLProtocol in apache). Sometimes, they also bring defaults that are stricter than what OpenSSL supports by default.

The difference between older OpenSSL versions shipped by distributions and the latest version is not that newer protocols aren't supported, but that they haven't completely removed support for older, insecure protocols. For example, SSLv2 was only disabled by default in OpenSSL 1.0.2g, which was released today. Meanwhile, a lot of server software (e.g. nginx) had it disabled by default for quite some time now.

This is mostly relevant if you have code that uses OpenSSL directly - but then it's probably not a good idea to rely on OpenSSL defaults anyway.

tl;dr Older OpenSSL versions are probably fine as long as your server software has good defaults.


Ok, can you tell in short, which server software is involved?

Nginx is not the only one, I guess. At least ssh should also be in the boat ... and the mail server .... and ....


OpenSSH is a different project that doesn't use OpenSSL, so that's not affected.

As for other software, it depends on the defaults and how their code disables SSLv2 (i.e. whether they just disable all SSLv2 ciphers, or disable the actual protocol with SSL_OP_NO_SSLv2).

Anyway, it's likely that most distributions will backport the fix soon for all supported OS versions (either by disabling SSLv2 entirely or by including the fix from OpenSSL 1.0.2f for the bug that allowed SSLv2 handshakes even when all SSLv2 ciphers were disabled).


> According to the info page, SSLv2 can only be disabled on OpenSSL by having the right (newer) version of OpenSSL installed.

Not quite. If you disable the SSLv2 protocol, you're fine. If you have the SSLv2 protocol enabled but all the SSLv2 ciphersuites disabled, you're not fine. You can disable the SSLv2 protocol in versions of openssl previous to today's.


Jessie has 5 changesets on top of 1.0.1k which fix 14 CVE's:

http://anonscm.debian.org/viewvc/pkg-openssl/openssl/branche...

And similar for the other debian versions.


Ok, is there any CVE that covers this attack / disables SSLv2?

This is rather opaque to me. It would be nice if somebody could give better advice on this soon.


DROWN is CVE-2016-0800. There are a lot of CVE's in openssl's advisory released 10 minutes ago: https://www.openssl.org/news/secadv/20160301.txt

None are fixed yet of course in debian. And not in ubuntu either: http://changelogs.ubuntu.com/changelogs/pool/main/o/openssl/...


Actually Ubuntu is not affected because it already has SSLv2 disabled: http://people.canonical.com/~ubuntu-security/cve/2016/CVE-20...

You won't see a fix appear in the changelog because there is nothing to fix in the Ubuntu packages.


Disable SSLv2 everywhere.


To make it more clear:

According to the info page, I can disable SSLv2 on OpenSSL only by installing a newer version of OpenSSL, which is not available on Debian (as far as I found).

(I also updated my original post)


You misunderstand how Debian works.

Debian practically never updates to new versions of software (until you upgrade Debian). Instead they "backport" security fixes into the older software versions, preserving the old version numbers but adding some stuff on the end to reflect Debian's changes. The intent is that you get "only" security fixes, never features or improvements.

So when you see "OpenSSL-1.0.c-stuffgoeshere" you are not looking at "OpenSSL" openssl anymore, but a version that Debian customized, probably to add security fixes. I say "Debian" here, but really most distros do it (RHEL, Ubuntu, CentOS, etc.) I'm just not familiar enough with their processes to comment specifically.

Debian has disabled SSLv2 in its OpenSSL packages since 2010 [0], and if you are running a Debian OpenSSL version later than 1.0.0c-2 your OpenSSL version is not vulnerable. The current version of OpenSSL in stable is 1.0.1k-3+deb8u2, so unless your server has been under a rock for 5 years you should be fine. And if it's been under a rock for 5 years you have a lot of security vulnerabilities to be worried about.

Of course you may have installed OpenSSL from somewhere else, or you may be using some other software for SSLv2 that doesn't involve OpenSSL at all. So merely upgrading your OpenSSL version is not a silver bullet, you need to think about every TLS deployment you have and how it might be used.

More broadly, use this vulnerability as a wakeup call to learn about "where your software comes from", because everybody has a role to play in staying secure, including users. Debian is maintained by volunteers; you might be happier with a commercial vendor who guarantees response times. Debian backports security fixes; you might be happier with a distribution that upgrades to new vendor versions which may have avoided the confusion here.

[0] https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=589706


> you might be happier with a commercial vendor

Might be a viable option.

I just don't know a commercial vendor, that also gives more transparency.

I knew that there are also fixes from the Debian team that add to the core functionality. But my problem is still the transparency. It is just very difficult in such a case to find out the relevant changes if you lack the time to follow all security changes in the distribution.

When such a thing pops up, like today, it is very tedious work for people like me to find all the pieces involved. So many packages that could potentially be involved, so many applications (e.g. web server, SSH server, ...), and every one of them could be a hole. Here, I would appreciate some more focused information about the particular distribution.

I am using Debian because of its good reputation -- but of course if you could point out a commercial distribution with more transparency, it would be worthwhile!


SSLv2 and SSLv3 are dropped at compile time in Jessie.


SSLLabs is the go-to place for checks regarding your SSL/TLS setup.

https://www.ssllabs.com/ssltest/

DROWN is not yet marked there but you can look for "SSL 2" within the protocols section of the report.


Mostly. See the DROWN FAQ [1] regarding the SSLLabs test.

[1] https://www.drownattack.com/#faq-ssllabs


> the attack exploits a fundamental weakness in the SSLv2 protocol that relates to export-grade cryptography that was introduced to comply with 1990s-era U.S. government restrictions.

Thanks again US government.


For those using ManageIQ / CloudForms, here's a blog post and a policy to check your Red Hat systems for compliance.

http://cloudformsblog.redhat.com/2016/03/01/drown-openssl-vu...


Additional write-up on this, with more technical detail: https://blog.qualys.com/securitylabs/2016/03/01/drown-abuses...


I checked out the services I use and most of them are OK. I was surprised to find Namecheap [0] did not pass, however. Kind of ironic given they sell SSL certs...

[0] https://test.drownattack.com/?site=namecheap.com


Here's OpenSSL's announcement on disabling SSlv2: https://marc.ttias.be/openssl-announce/2016-03/msg00002.php


It would be fine if this were just a public service announcement, but then there is the huge political rant at the bottom. The use of a vulnerability as a megaphone for personal opinions is very annoying.


How can I test if my servers are vulnerable?


Does Cloudflare block this?


I didn't find any affected CloudFlare sites on their test site, so I assume they don't allow SSLv2 connections.

You should probably patch your origin servers anyway.


These marketed attacks with special logos drive me up the wall. If I ever discover one I'll <edit!> give it a rude name and force everyone to look at a silly picture to go with it.


I know you didn't mean to do it, but because it's easier for people to snark about attack names, or react to snark, than it is to discuss a complicated crypto attack, this tangent about attack names takes up a pretty big chunk of the thread and really cruds the discussion up.

Whatever you might think of the name, this is bona fide important crypto work; it's one of the more interesting TLS attacks ever discovered. Please try to be careful about distracting from the important stuff with silly stuff like naming.


Then maybe next time they won't distract people with silly marketing, forcing the conversation to be about substance.


I'm on the fence over the "marketing". On one hand it is easier to communicate terms like DROWN, GHOST and Heartbleed than, say, CVE-2016-0800, meaning knowledge of the vulnerability spreads quicker and sites are patched more readily. Alternatively it could be a distraction from good security management practices, where only the "marketed" vulnerabilities get patched instead of generally tracking the security of critical pieces of your infrastructure.


Yep; at this point, non-technical management is trained that "it's serious if it's got a cool name."


Seems a bit harsh and unnecessary. This is so much clearer than a security bulletin on a mailing list I'll never see. I'm pretty sure our sites are safe but now I'll remember to check.

Rebels without a cause...


But Heartbleed was the first one, and its marketing made it get everywhere. Regular newspapers, etc.

Instead of hating people for marketing vulnerabilities, tackle why people-in-power don't care about vulns when they're communicated the old way.


> But Heartbleed was the first one

Perhaps with a logo - I never really pay much attention to logos - but it wasn't the first to have an accessible name. Just off the top of my head I can name 3 SSL vulnerabilities that predated Heartbleed: BEAST, CRIME, BREACH.

Personally I can't see how having a logo changes much in terms of press coverage since radio and printed newspapers tend not to publish the logos, online publications have a large resource of stock images they can use (eg Getty) and the TV would likely want video footage anyway.

I think the nature of the bug with Heartbleed also played an important role in its coverage. It was quite a simple exploit to explain to the lay-person (albeit imprecisely) when compared to BEAST, CRIME, BREACH, POODLE, etc., which require a much greater understanding of SSL/TLS. We do see other unbranded exploits reported by the mainstream media, such as websites being hacked, DDoS attacks, etc., and in all of those instances the content is easily communicated in a 30 second soundbite.


And if we step outside SSL vulnerabilities:

- ILOVEYOU
- Melissa
- Slammer

We've been naming attacks for a long time. Perhaps one difference is that these days, a vulnerability is named if it's deemed high-risk/high-impact. In the past, vulnerabilities became well-known names once they'd been seen to do serious damage.


Let's not forget the Millennium bug. That thing also had a logo: http://ichef.bbci.co.uk/news/304/media/images/79938000/jpg/_...


Honestly, I think that would require a generation or two of a modified education system to bring about the desired result. You gotta deal with the here and now sometimes.


> But Heartbleed was the first one, and its marketing made it get everywhere. Regular newspapers, etc.

This is a blip. As soon as this escalates and everybody starts "Marketing" their latest security vulnerabilities they will once again be lost in the storm.


I'd be interested to know why this is the case. It's a hunch, but I reckon that this sort of 'branding' makes a serious problem more obvious and memorable. I mean, I remember 'Heartbleed' and 'GHOST', but I don't really remember the details of say CVE-2014-2523.


I guess it depends whether Heartbleed and GHOST are actually more serious than the ones without names.


Would be wonderful, you could also develop a fix and call it COCKBLOCK

Edit: dear Ben, you let us down :(


Then develop a workaround for the fix and call it COCKKNOCKER.

Edit: Well, OP edited their comment and now mine makes no sense... time to take a break.


Sorry! I need a more pseudonymous HN account.


No worries mate... I really do need a break heh


Well, how long before the novelty names run out of meaning? For starters there are probably more issues out there than there are catchy American English anachronisms. I vote for using the names of Mesoamerican gods, like Hachäk'yum or Xbalanque!


I think GitHub should start doing something like this for issues and pull requests.

Instead of the nondescript

- Issue #2654: misplaced comma on page 3 of documentation

- Issue #2653: open() should accept relative file path

it should be

- Quetzalcoatl's Rage: misplaced comma on page 3 of documentation

- The Dying Curse of Buluc Chabtan: open() should accept relative file path


That would make issue trackers / change logs more interesting to read. ;-) A lot!


It gets worse, some people produce a whole video around it: https://www.youtube.com/watch?v=3NL2lEomB_Y


That video is fun to watch though and does a pretty good job of explaining how that attack works.


I agree, but it also felt like one of those TV Shop videos where you just know that you're being sold to.


I guess we just rebrand some minor edge case of a well-known attack, give it a stupid name and use it to promote our security consultancy, is that it?

EDIT: Sorry, for consultancy read "peer-reviewed paper".


Marketing helps raise awareness of these problems; even non-tech folk have heard of Heartbleed. More awareness probably means software is more likely to be updated.


I don't like it because it puts too much useful information on one domain. Will the researchers still be paying for the domain in 5-10 years?


I think JSFuck beat you to it: http://arstechnica.com/security/2016/02/ebay-has-no-plans-to... ('Clever "JSFxxK" technique allows hackers to bypass eBay block of JavaScript.')


I dunno, bencollier49. Seems like a strange hill to die upon.


What's with this trend of vulnerabilities getting a catchy name, domain and even a logo?

Is that because people don't pay attention when there's just a CVE number?


Why does every bug need its own domain name?


It's a lot easier to convince the CTO to do something if there's marketing behind it.


If you look at the bottom of the page, you will see that some of the underlying motivation for the site is to shill for a particular political stance.


> shill for a particular political stance

Government agencies lobby for what they want (restrictions on speech, limits on privacy, back doors into crypto), but we technical people should stick to technical discussion only?

A technical report should never discuss what government policy caused the bug, political decisions that could have avoided it, and their effect on society?

By the way (to save a click), the political stance is this (quoted from the article):

"Today, some policy makers are calling for new restrictions on the design of cryptography in order to prevent law enforcement from “going dark.” While we believe that advocates of such backdoors are acting out of a good faith desire to protect their countries, history’s technical lesson is clear: weakening cryptography carries enormous risk to all of our security."


I guess DROWN has a better ring to it than DROWE


Is this new? I would say it's already widely known as an SSL downgrade attack.

https://en.wikipedia.org/wiki/Downgrade_attack


Nope, this is different.

Your machine might not support SSLv2 at all (so you couldn't be downgraded), but the existence of SSLv2 on the server (or on a different server running SSLv2 with the same key, say an e-mail server) allows the attack.



