These are not the SSL certs you're looking for (dankaminsky.com)
68 points by sp332 on Sept 1, 2011 | 17 comments



I think that Dan is heavily invested in trying to "fix" DNS, and that he's arrived at DNSSEC as a way to do that. The problem for him, like most DNSSEC advocates, is that nobody cares about DNSSEC, and so it's not really happening.

People are starting to care about CAs, though, and so it seems convenient for DNSSEC advocates to hitch their wagon to a solution for that issue, because it might actually get DNSSEC moving.

The trouble is that they don't really get the problem. The problem with CAs today is a lack of trust agility, and instead of improving trust agility, DNSSEC makes it worse.

People object that we already have to trust the TLDs and the registrars, that it's already bad, that we're already beholden to parties we wish we weren't. I don't understand how this resonates with anyone -- if we acknowledge that things are bad, we should be trying to move away from that, not throwing our hands up, embracing it, and moving towards it.


I do not follow what this post is trying to say.

First, many grafs about what appears to be a wild goose chase for a forged Facebook cert that turns out to be real.

Then, ruminations on OCSP and serial number assertions, with the apparent idea that we need better cryptographic methods to deal with CAs that misbehave so that we don't have to apply the Internet CA Death Penalty so often.

Then the notion that we have "1500 CAs", so what we really need to do is invest every DNS zone with CA powers (because if 1500 is too many, millions is better?).

We need fewer CAs.

We need to apply the Internet CA Death Penalty way more often. I think we should do it at random, just to keep people on their toes.

We need a better mechanism for determining CA roots than "the right set of tickets was filed in Mozilla Bugzilla".

We need a better UX for certificate failures.

None of these things require profound infrastructure changes. Smart people can build this stuff as a side project. Moxie Marlinspike is doing exactly that with CONVERGENCE.IO.


Then the notion that we have "1500 CAs", so what we really need to do is invest every DNS zone with CA powers (because if 1500 is too many, millions is better?).

I think you've shot rather wide of the mark here. DNS zones do not have CA powers, at least not in the sense that payp4l.com could vouch for paypal.com. Sure, PayPal can bone up its own zone. Or .com could screw up (that'd be bad). But an honest appraisal would recognize that the DNSSEC solution means there is exactly one CA per domain, and you know in advance who it is. That's way better than having to guess which of 1500 CAs is the legit signer.


You're right; the "1500 vs. 1,000,000" thing was an unfair potshot.


We shouldn't need to trust the CAs. Moving the issue to DNS means that each website can verify its own identity, so there is no need to trust more than one CA (the root DNS).

Of course there are downsides to this too, but it could be better than the current situation.


DNSSEC does not allow websites to verify their own identity; just like the CA system, DNSSEC is a PKI. But unlike the SSL/TLS CAs, Mozilla can't fire .COM or .BIZ when their operators misbehave.

Among the many, many issues with DNSSEC, one of the most obvious problems with it is that it perpetuates the broken model we have now, where a small number of gatekeepers is allowed to make all of the most important trust decisions for every user of the entire Internet.

There is simply no reason why DNS administrators should get to determine whether your bank account or a Chinese dissident's email is safe. I agree that Verisign shouldn't have the sole vote in that either.

That's what's great about approaches like Moxie Marlinspike's. What I can't fathom is why anyone would want to make something like Moxie's plan harder, by baking a centralized PKI even deeper into the core architecture of the Internet.


The argument for it (not saying I accept it) is that the definition of the "authentic site" largely resides in the database of the DNS registrars anyway.

Currently, many CAs authenticate new applicants primarily on the basis of this "domain control", i.e., can they receive an email at this domain. So DNSSEC wouldn't be changing anything there.

IMHO, the idea of providing cryptographic authentication for DNS records isn't inherently a bad one and it's only natural that it would follow the same hierarchy as the name delegation itself.

In one sense it delegates more freedom than the CA system does (assuming you don't purchase yourself a sub-CA). However, in other fundamental ways it represents a highly centralized system of control.


DNSSEC advocates (not saying you're one) tend to talk about the idea of cryptographic authentication of DNS records as if it were free. But it's not free. It will cost many tens of millions of dollars. It will contribute to the instability of the Internet as a whole as well-meaning but necessarily inexpert administrators misconfigure their zones. And because DNS is at a lower layer of the stack than TLS, those failures will be more severe.

For instance, read any tutorial on socket programming, or browse socket code on Github or Google Code Search. Look how a DNS name is normally resolved in code: by calling gethostbyname(name). Look how the return value of that call is handled: either the struct hostent it returns is NULL, in which case "host not found", or it's non-NULL, in which case the domain name is good. Where's the error channel?

There are DNSSEC advocates --- presumably none of whom have shipped a product in C before --- who talk as if this is a minor detail. But look at Daniel J. Bernstein's "ipv6mess" article:

http://cr.yp.to/djbdns/ipv6mess.html

How do we reach the magic moment? How do we teach every server on the Internet to talk to clients on public IPv6 addresses? How do we teach every client on the Internet to talk to servers on public IPv6 addresses?

Answer: We go through every place that 4-byte IPv4 addresses appear, and allow 16-byte IPv6 addresses in the same place.

Isn't it the same deal for DNS? Aren't we talking about either forklifting out or breaking huge amounts of deployed code?

And to what end? DNSSEC doesn't solve the PKI problem we have on the Internet. When people in the Czech Republic go to Google, aren't they often going to GOOGLE.CZ? When people in Italy go to Amazon, don't they go to AMAZON.IT? Who controls .CZ? .IT? .COM? Certainly not Amazon and Google.

What we have on the Internet is a failure of security policy. It's a human factors problem and a UX problem, not a failure of the core Internet protocols. By acting as if a band-aid on those core protocols is going to fix the problem, we:

* defer the real solution to the problem for another 10 years while we wait-and-see whether DNSSEC helps (hint: it won't really)

* make it even harder to apply UX solutions to the problem by pushing the decisions that need to be made further down the stack

* incur costs, instability, and, yes, probably a good deal of new security flaws, to no good end.


Look how a DNS name is normally resolved in code: by calling gethostbyname(name). Look how the return value of that call is handled: either the struct hostent it returns is NULL, in which case "host not found", or it's non-NULL, in which case the domain name is good. Where's the error channel?

Don't forget the influence of Mozilla, Google, Microsoft, and Apple. While they may not own even the majority of the lines of code, the code they do own is disproportionately visible.

Some of their stuff actually does its own name resolution, or even provides the implementation for a whole operating system. I believe some of them are shipping some actual DNSSEC client code today.

Combine that with the work done by ISC and this thing isn't exactly alone whistling in the wind.

There are DNSSEC advocates --- presumably none of whom have shipped a product in C before --- who talk as if this is a minor detail.

Hmm, well Paul Vixie counts at least double in my book.

Isn't it the same deal for DNS? Aren't we talking about either forklifting out or breaking huge amounts of deployed code?

Sure, it will probably take 10 years before the absence of DNSSEC data can be interpreted as a name resolution failure. But technology can move a lot faster now that MS IE and Windows don't have 90% market share on the client side anymore.


I'm not talking about Paul Vixie.


But there's only so much hand-picking of your examples that you can get away with. Your argument would be stronger if it accounted for more data points.


I'm sorry, I didn't mean to be so terse. I just wanted to make it clear that I was not in that instance sniping at Paul Vixie.


We go through every place that 4-byte IPv4 addresses appear, and allow 16-byte IPv6 addresses in the same place. Isn't it the same deal for DNS?

Actually, I don't think so. The intermediate state on the upgrade path for DNSSEC is infinitely easier than for IPv6, which is what DJB seems to be pointing out.

Of course, in many circumstances the real security benefits are not obtained as long as the attacker can simply strip off the DNSSEC information and downgrade the client back to 'compatible mode'. But this seems to be an inherent difficulty in strengthening authentication in any diverse ecosystem of endpoints.


You're right that unlike IPv6, DNSSEC does not require a unanimous "flag day" upgrade. And you're right that that's relevant. Partly, this is because by design, DNSSEC only protects DNSSEC servers; the path between a modern browser and the DNS server configured by DHCP is not protected by DNSSEC.


Which is something DHCPv6 (and the ridiculous amount of new autodiscovery added to IPv6 in general) missed an opportunity to provide.


At the risk of bucketing myself as a crackpot: I'm actually not a big fan of IPv6 either. I don't believe that in 50 years anyone is going to care what an IP address is; we'll have long since built new overlays on top of whatever transports we come up with, and IP will be an archeological curiosity.

The network should be dumb; it's the endpoints that need to be smart.


I agree completely about the dumb network. Security, in particular, lives in the endpoints.

But still, net-boot BIOSes and BOOTP/DHCP protocols have been around since 1985 (RFC 951). They show no signs of going away. Possibly the problem of bootstrapping the configuration and security relationships will be with us as long as there are security boundaries on the networks.

And there will be security boundaries on networks as long as there are mediated boundaries in the real world.



