Let's Encrypt is just about the best thing to happen to the web this decade. It really is huge to have encryption be something that can Just Work. It is, by far, the most popular feature we've added to Virtualmin in many years (and we had support from very early on due to the high demand for it).
Wildcard certs have been a common request, but you can already specify multiple domains using SAN. And, since you can issue a new LE cert on-demand, it's actually not necessary for a lot of use cases that would have required a wildcard in the past. In the bad old days, getting a new certificate (even just to change details like adding a name to it) was time-consuming and often cost money. Most of the time when our users have needed a wildcard, it was just because they wanted to save a little money by having all of their subdomains on one cert; and not so much that they really needed to be able to spring up dozens/hundreds/thousands of new subdomains that could just automatically be secured. If you have a fixed number of names that need SSL with the same cert, you can already do that today with Let's Encrypt.
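As a concrete illustration of the multi-name SAN approach, a single certbot invocation can cover several fixed names on one cert (the flags are real certbot options, but the domains and webroot path here are placeholders):

```shell
# One cert covering several named subdomains, validated via webroot.
certbot certonly --webroot -w /var/www/example \
    -d example.com -d www.example.com -d mail.example.com -d blog.example.com
```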
Nonetheless, this is great. I'm just super impressed by how effective the LE folks have been at improving the state of security on the web, and for free! They really deserve every kind word and every donation dollar that comes their way.
Can I ask for a virtualmin feature here wrt SSL certs (and maybe specifically LE)?
I know it's not the first request, but an easy way from the GUI to force all requests to be SSL would be great. The only way I see in the GUI is a "redirect everything" option, but... that also redirects requests for ".well-known", which means renewal requests don't work (I've been hit by this a few times). Some recommended ways of handling "always use SSL" with LE renewals would help.

> [...] that also redirects requests for ".well-known" which means renewal requests don't work
What issue did you run into with this? Let's Encrypt follows redirects both to HTTP and HTTPS and accepts practically any certificate for redirect targets when validating via http-01, including self-signed, expired and mismatching certificates (which isn't a problem since the initial request is plaintext anyway).
That's a great request (though I thought .well-known would still work through an http->https redirect, as long as nothing else messed with the request; we do a redirect on virtualmin.com and our LE renewals work OK...but, I'll look deeper into that).
It can be surprisingly tricky to get redirects right, because web apps often take over early request processing in an htaccess file for things like nice URLs and special assets directories, and there can be surprising interactions between redirect rules. But I suspect we could make it work in the majority of cases pretty easily. I'll add it to my todo list.
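In the meantime, a common pattern is to exempt the ACME challenge path from the redirect. A minimal sketch for Apache with mod_rewrite (this is an assumption about a typical setup, not Virtualmin's actual implementation, and as noted above the exemption may not even be strictly necessary since http-01 follows redirects):

```apache
# Redirect everything to HTTPS except ACME challenge requests,
# so renewals keep working even if something downstream mangles redirects.
RewriteEngine On
RewriteCond %{REQUEST_URI} !^/\.well-known/acme-challenge/
RewriteRule ^ https://%{HTTP_HOST}%{REQUEST_URI} [R=301,L]
```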
This is great news for organizations. I work at a large Fortune 150, and there are lots of services that require wildcard certs. We have a process to get these from our internal CA as well as a third-party - the internal CA is automated, but the third-party (for external services) can be slow and cumbersome, to the point where many departments just buy their own cert. And then a year later, they move on, forget, etc, and suddenly we have services that have expired certs and there's a scramble to fix them.
This move by Letsencrypt should hopefully make them the standard for any external service that doesn't require an EV cert.
>This move by Letsencrypt should hopefully make them the standard for any external service that doesn't require an EV cert.
I'm kind of worried about this myself.
No matter how well intentioned, secure, or "good" Let's Encrypt is, having a significant portion of the world's TLS be under one umbrella isn't a good thing.
I'm hoping that we will begin to see other services pop up that are similar to Let's Encrypt (free, even using the ACME protocol) so that we don't have too many of our eggs in one basket here.
I work on the Let's Encrypt project and this is my own personal opinion, not necessarily the opinion of EFF or ISRG.
There are several reasons that it could be good to have multiple publicly-trusted CAs out there, though most of them don't depend at all on the relative market share of the CAs at a given point in time.
Security risk: If a particular CA is compromised through an attack and its intermediate certificates need to be revoked, it would be good to have other CAs already ready to continue issuance to the public. Even if the CA intends to resume operations after a compromise and user-agents are OK with that, it probably wouldn't be prepared to resume operations immediately.
Geographic risk: It would be good for availability to have CA datacenters on multiple continents, not just one.
Jurisdictional risk: The government of the country where a CA operates could try to compel the CA to stop issuing particular certificates, for example in order to enforce international sanctions, or to facilitate espionage by trying to make it harder for people to get authenticated encrypted connections to certain services. A government could even force a CA in its territory to cease operating entirely. (There's also jurisdictional risk in the other direction, of governments trying to compel misissuance, but this risk is strictly increased by having trusted CAs in more jurisdictions. In general, having more CAs increases the risk of misissuance, while decreasing the risk of certificates being unavailable to a particular site because people don't like that site or its operators for some reason.)
Continuity risk: It would be safer to have CAs with more different kinds of funding for their operating expenses.
Institutional/governance risk: A particular CA might some day decide to do things that relying parties find improper. Having more CA alternatives can give the relying parties more plausible leverage to get the CA to align its practices with their preferences. (As with the jurisdictional risk point, only a decision not to issue certain certificates can be directly addressed in the short term by having other alternatives. A decision to issue certificates that other people think shouldn't have been issued can probably only be addressed this way by removing trust from a particular issuer.)
Looking over this thread, I do want to emphasize again that misissuance risk gets worse, not better, when there are more CAs. If you're particularly afraid that CAs will be issuing certs improperly because they get attacked or coerced or do a bad job of validation or internal controls, you should probably want fewer CAs rather than more, at least as a response to that particular concern. This is because CAs in the X.509 PKI can't "contradict" another CA's issuance; every assertion about a binding between an identifier and a public key is cumulative and operates in parallel and in addition to every other assertion.
I don't recall anything like that having happened before, even in the DigiNotar case, where the CA was thoroughly compromised. The keys must be kept in HSMs, so even with a fully compromised issuance system, the keys themselves are typically safe - which isn't much of a relief at that point.
There were a couple of cases where CAs like Trustwave or CNNIC signed intermediate certificates that were capable of issuing publicly-trusted certificates for organizations who lacked the required audits. They were typically intended for corporate/internal MitM proxies, though there was no technical enforcement in place for this, and they could've been used for any MitM attack. The recent investigations into Symantec's CA showed similar, but slightly more complex cases.
I just reviewed Chapter 4 of Ivan Ristić's book and the only incidents that might be considered compromises on this level were DigiNotar and NICCA, which both led to revocation of intermediates. However, the book doesn't explain technically what the exact nature of the compromises was, so I'm not sure either of them involved an actual compromise of the private key material itself.
There were many other incidents involving problems with behavior of PKI participants, and I'm sure reading this chapter will give people a sense that the ability to remove trust from intermediate CAs is an important ability.
To use a classic example, who's going to go to bat for you when they get a demand from a government agency - the EFF or the Hong Kong Post Office? I know where I'll place my bets.
Go pull up your certificate authority list and ask yourself, for each one of them, whether you trust that company more or less than Let's Encrypt.
Let's Encrypt publishes auditable logs of all issued certificates, they're backed by some of the biggest names in online privacy, and I trust them much more than other CAs.
I for one would be happy if I could delete all other providers from my browser.
CAs ultimately are centralized and too trusting, giving that same level of trust to less trustworthy companies just damages the overall security of TLS. There's no distributed trust model for CAs, it's pretty much all or nothing, so in the case of CAs, distribution is not a security benefit like it is in say Tor or Bitcoin, but a problem as it means the attack surface has widened.
Short of going to a new, completely decentralized solution like the proposed DNSSEC extensions or Namecoin, a single, very secure CA is probably better than a lot of not secure, often government influenced CAs.
> having a significant portion of the world's TLS be under one umbrella isn't a good thing.
Why is that? The damage from a CA being hacked is not proportional to the size of the CA - they are all equally (small number of exceptions notwithstanding) capable of issuing certificates for any domain which will be trusted by all major browsers.
Is there another aspect I'm not considering? While I see how it feels like a troubling thing, I'm struggling to actually come up with any real consequences of it.
It's been a while since I had to deal with OCSP breakage, but if it breaks because an OCSP server is down, doesn't that mean the browser or web server is misconfigured? Of course, if browsers are misconfigured out of the box, that doesn't help at all...
It wasn't as simple as the OCSP server being down. It was returning bad request (HTTP 400) responses. When the good responses expired from caches, the bad responses started going out and breakage started spreading. LE detailed this in their postmortem, which I linked.
Punishing CAs for bad behavior (i.e., security problems) has more collateral damage the bigger a CA is. Right now, if a CA is bad enough, browsers just stop accepting their certificates. After a certain size that becomes infeasible, removing a lot of pressure from that CA.
No, browsers don't do that. See how WoSign was distrusted[0]. Basically, they still trusted existing certificates, but stopped trusting new certs (whether renewed or brand new). Through this, they kept collateral damage to a minimum while still carrying out the CA death sentence.
The trouble is that's only possible with the CA's cooperation, because they have the ability to backdate certificates by falsifying the date. In the case of WoSign, Mozilla threatened to distrust them completely if they did that, but if it's infeasible to remove a CA, that threat may be ineffectual.
This kind of forgery can be mitigated by requiring all certificates to be published to a Certificate Transparency server upon issuance. You can't backdate a public ledger that is being watched by third parties.
The pressure will come from the public. If they damage their reputation, people will be less willing to donate, which will pretty directly influence their income stream.
Assuming Facebook's numbers represent two-thirds of all web users, I'd be surprised if Let's Encrypt has more than 30,000 donors.
If we're quibbling about "the public", then the GP comments only make sense if "the public" means "people who aren't IT professionals", in which case I'd warrant that there are far fewer than 30k donors who aren't IT professionals; indeed, it's got to be ~0.
I can't see donor details on the LE pages, though. Mind you, at approximately 300k certs issued daily on average (https://letsencrypt.org/stats/), I concede I could easily be orders of magnitude out in my guesswork.
It's not just about access to their private key, but also downtime (expected or otherwise), and bugs in the cert verification process.
I don't know of anything concrete, but I can imagine an attack that exploits the verification process on their servers to have them sign domains they shouldn't, or DDoS attacks on them to prevent people from renewing their certificates. The bigger they are, the juicier a target they are for these kinds of things. If they were the provider of 50% of the internet's TLS certificates, you could take down half the internet by continually DDoSing a single company!
Hell, I can already imagine someone sending a bunch of signing requests spoofed as someone else, locking that person out of renewing due to rate limiting.
Not to mention that even the country they operate in can be a big deal.
Let's Encrypt strongly encourages you to use a tool that does automatic renewal a month before the cert expires. If someone manages to DDoS Let's Encrypt for an entire month, I think we're firmly into "you have bigger problems" territory. (Among other things, if 50% of the internet were in fact on LE, major internet providers like CloudFlare and Akamai and Google would start offering to run LE directly on their own infrastructure after a week or so of this.)
Bugs in the cert verification process are the same amount of risk regardless of whether everyone is using the CA or nobody is, as long as the CA is trusted. There's nothing gained by putting your eggs in multiple baskets.
Also, these all seem like hypotheticals when the old-school CAs have had OCSP downtime, bugs in the cert verification process, incompetent staff signing and publicly logging google.com certs to test their infrastructure, governments asking and receiving unconstrained intermediates, unconstrained intermediates as a publicly advertised product, etc.
You're right, but size doesn't really factor into any of your points.
Assume for instance that the country of Hackeristan manages to have one of its authorities accepted in major web browsers. This authority is only meant to sign Hackeristan domains and only signs a tiny amount of certificates.
Now let's imagine that this authority is compromised: maybe the Hackeristan government wants to intercept connections to Gmail, maybe the authority is vulnerable to hackers. One way or another, it signs a bogus *.google.com certificate. Well, it's game over: since the authority is trusted by all major browsers, everybody's vulnerable, even though it was a tiny CA. Only certificate pinning can save you now.
Yes, but if LE were the only major CA, then you could attack "Company A" by impersonating them and making lots of signing requests, causing them to hit rate limits and taking "Company A" offline.
If LE was found to be incompetent and lost control of their private key, browsers would be much less willing to remove them as trusted if they were a significant portion of the web.
And things like the impact of DDoSing LE to take their OCSP servers down and things like that still grow with their size.
To clarify, I love LE and I use them almost exclusively. But I'd feel better if there were others trying to follow in their footsteps.
The parent makes the point that it's not necessarily the case since hacking any trusted CA (no matter the size) lets you generate certificates for anything. If letsencrypt was hacked today it could be used to generate a valid google.com certificate for instance, even though Google's certificate is normally issued by their own authority.
It's a weakness of the current authority architecture really, trusting a CA is an all or nothing decision. If any of the authorities is compromised you're vulnerable until you remove the CA from your browser, regardless of the number of legit certificates it issued.
Fortunately, this change should help diversify the number of certificate authorities for people to use. In the explanation of the wildcard support, they link to another post that explains it is enabled by their rollout of ACME v2.
Wildcard support is one advantage of ACME v2, but another advantage they list is "ACME v2 was designed with additional input from other CAs besides Let’s Encrypt, so it should be easier for other CAs to use" - https://letsencrypt.org/2017/06/14/acme-v2-api.html
So, in addition to this functional improvement to Lets Encrypt, the change should enable more automated CA options in the future.
However, it's still better than having expired certs and a team trying to figure out who owns the app, trying to get in touch with them, asking them to update the cert, finding out that they are no longer with the company...
It's even worse when the service is something that a small team created as a POC - which then became customer facing and mission critical, with the team having moved on to something else.
And it's funny how often this happens over a holiday weekend.
Yes, I know that the issues are deeper and more to do with large company process and bureaucracy than anything technical. But at least you can have secure services that don't fall over.
It's not 'artificial', for multiple reasons. You correctly address one by listing the cost, but the other is that they limit you to only being able to configure a certificate to a resource they manage, so that they can rotate that certificate transparently for you.
Yes, they could branch beyond that and create a special category of certificates to issue, one that has a cost and that gives you access to the private key, but that isn't really distinguishable from any other certificate provider out there. In fact, Let's Encrypt offers that for free. Why would Amazon decide to compete with a paid product against a free one, when there's no benefit to the consumer to warrant paying?
>Why would Amazon decide to compete with a paid product against a free one, when there's no benefit to the consumer to warrant paying?
They compete with free Cloudflare caching today. And free DNS services. And much cheaper VPS services. There are various reasons customers might choose low cost over free.
For me, since I use ACM already, for AWS hosted resources, I would appreciate the advantage of using it for other resources. Even ones on AWS, like a cert on a Lightsail instance, for example.
I say artificial because all that's missing really is a link to download the cert.
Why are we relying on single providers? Shouldn't it be a consensus system of some form? Say, month-old certs get passed on in bulk outside of the main network; then, if the original provider is compromised, their cert info disagrees with the "consensus" providers, indicating a compromise at some point.
High profile sites can buy multiple top level certificates (with mutual signing, say); sites needing less security can fallback on a simplified consensus system (maybe like above).
Sure - this was in reference to my top level comment, but I see that this dropped lower down the page.
I work for a large Fortune 150, one that you've heard of, and we have a security team that is constantly scanning our network for weaknesses and potential exploit vectors. They will kill (firewall off) any sites that might compromise the network and tell the application owner to fix the issue before they allow it back on the public net.
And let's be perfectly clear: EV certs are basically a money grab by CAs trying to provide some kind of value-add. It used to be that ecommerce sites would have to get an EV cert; now, with Chrome desktop showing "Secure" in the address bar, the visual representation of an EV cert isn't nearly as important.
I worked in a few places that had a *.company.com which covered, obviously, everything under that domain.
That meant if that wildcard cert leaked then our EV cert for, say, checkout.company.com would be essentially compromised too.
Not to mention that if you have a wildcard cert, it's rather likely you're passing it around between servers, with lots of scope for leakage.
I really think that if you feel the need to do wildcard certificates, then you should at least try to figure out another way around it. I'm not saying you absolutely must never use them, but be incredibly mindful of what is at stake and limit the scope and availability of such certs as much as possible.
For instance: don't put the same wildcard on mail servers and IM servers and git servers and so on; a compromise of one will compromise them all, and the revocation system is not good enough.
I agree that wildcards aren't great if they're being passed around an organization to avoid registering a few extra certs, but they are very useful in a few circumstances such as sandstorm.io: every app session uses a different subdomain to prevent cookie leakage, and registering that many certs would overwhelm LE. I'd imagine there are other cases out there involving automatically created subdomains that will benefit.
Like I said, there are uses for wildcard certs; I'm just arguing against their use en masse. People should be perfectly aware of the ramifications and sandbox appropriately (*.tenant.sandstorm.io or whatever).
Everyone keeps saying SaaS is the reason for the use of wildcard certs, and I would absolutely argue the point that multi-tenancy's weakest point is the fact that if you get compromised, the scale can be broad. Why intentionally weaken that system? LE can handle thousands of domain creations a minute, and they've been very forthcoming with lifting limits for people on domain creation.
The downside is that your servers need a little overhead for vhost creation, but that could be automated with less than a day of ops work.
I believe a while ago the sandstorm people spoke to LE who advised that it wasn't a good idea.
I'll stand by the assertion that vhosts are probably still better off with a wildcard cert if it's the difference between a single server using a single cert vs a single server holding thousands of certs. In a node compromise it's the same either way. If different servers are serving different subdomains then sure, subdomain certs are the better way to go.
Couldn't you have a cert per subdomain? If you're using Let's Encrypt, you almost certainly have automated renewal in place, so you could allocate a cert when you give them a subdomain.
You can only put 100 domains on a Let's Encrypt cert, so if you're a site like Tumblr that's going to be a whole bunch (hundreds of thousands or millions of certs) of mayhem.
I agree with the case for using a wildcard, but just to play devil's advocate - why can't they take the CloudFlare approach?
Generate a cert for 100 of your client's domains, use that cert across those domains. Cut your 50m domains down to 500,000 certificates. Serving the right certificate for the right domain is a simple enough task.
As new tumblr domains are registered, generate more certs in batches of 100 domains.
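The batching arithmetic above is easy to sketch. This is a hypothetical illustration of the grouping step, not how Cloudflare or Tumblr actually implement it:

```python
# Let's Encrypt allows up to 100 SANs per certificate.
MAX_SANS = 100

def batch_domains(domains, batch_size=MAX_SANS):
    """Yield successive groups of domains, each small enough for one SAN cert."""
    for i in range(0, len(domains), batch_size):
        yield domains[i:i + batch_size]

# Hypothetical customer subdomains: 250 names -> 3 certs (100 + 100 + 50).
# At this rate, 50 million domains collapse to 500,000 certificates.
domains = [f"user{i}.example.com" for i in range(250)]
batches = list(batch_domains(domains))
print(len(batches), len(batches[0]), len(batches[-1]))  # 3 100 50
```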
I doubt anyone would ever seriously suggest putting millions of SANs on a single certificate, but 100 isn't too farfetched.
Absolutely. I use a wildcard for this and doing a cert for each sub would just be a lot of hassle and potential for things to go wrong with no security benefit.
That is a good solution for kube, thanks for the reference.
My situation is a bit different: hosting a bunch of subs on the same servers.
With one wildcard I have one server conf with one cert and use the hostname to rewrite each request to the correct directory.
If I did a cert for each sub, the nginx conf would need thousands of server config blocks, each with its own cert. I haven't tested, so maybe nginx would handle this just fine, but it is easier to just go with a single wildcard and not worry about it.
As far as I can tell, there is no security advantage to having multiple certs instead of one wildcard, since I would have all the certs on the same server anyway; but if anyone knows of any, I would be happy to hear it.
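For the curious, the single-server-block setup described above looks roughly like this in nginx (a sketch; the domain, cert paths, and directory layout are hypothetical):

```nginx
# One server block, one wildcard cert, document root chosen per subdomain.
server {
    listen 443 ssl;
    # A regex server_name with a named capture grabs the subdomain label.
    server_name ~^(?<sub>[^.]+)\.example\.com$;

    ssl_certificate     /etc/ssl/wildcard.example.com.crt;
    ssl_certificate_key /etc/ssl/wildcard.example.com.key;

    # Route each hostname to its own directory.
    root /var/www/subs/$sub;
}
```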
I would argue that they're useful but not mandatory.
Is having separate certs really that big of a deal? Operational overhead with LE is next to none and if you're scared of hitting limits you can contact LE to have the limits increased.
Pre-LE you'd use an other authority that provided wildcard certificates. That's what they're for after all, why would you want to hack your way around them?
For the cases you mention, I mostly agree. Wildcard certs are critical for many SaaS use cases though, where you need a cert for each customer on a product domain: company1.example.com, company2.example.com, etc.
> I would argue that they're useful but not mandatory.
> Is having separate certs really that big of a deal? Operational overhead with LE is next to none and if you're scared of hitting limits you can contact LE to have the limits increased.
I don't think it's fair to say the operational overhead would be next to none. As it stands right now, most services that use per-user subdomains get wildcard certificates. Let's assume that wouldn't be an option for a service like tumblr, which has per-user subdomains. They have ~350 million users, that's not really practical to maintain. And that's just one big site, there are plenty of others. This would not just overwhelm the issuance capacity of most CAs, but it would also be a problem for many other components of the Web PKI, like Certificate Transparency log servers, which have so far only needed to handle a total of ~450 million certificates (many of which are duplicates that have been logged to multiple servers).
There's certainly a place for wildcard certificates, but I definitely agree that they should be used sparingly.
This is definitely true, and single-domain or SAN certificates should continue to be the first option administrators consider when deploying TLS. Wildcards make sense for things where the number of subdomains is unmanageable and where each of those subdomains share the same attack vector (i.e. are handled by the same load balancer, etc.)
The fact that validation for wildcard certificates is limited to the DNS challenge will hopefully ensure that most users will continue to use non-wildcards, as the other challenges are significantly easier to automate.
For those concerned with others in their organization obtaining/using wildcards unsafely: set a CAA record prohibiting wildcard issuance from any CA and be done with it.
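For example, a zone file fragment like this does the job (per the CAA spec, an issuewild property whose value is ";" forbids every CA from issuing wildcards; the domain and the permitted CA are placeholders):

```
; Allow Let's Encrypt to issue ordinary certs, but no CA may issue wildcards.
example.com.  IN  CAA  0 issue "letsencrypt.org"
example.com.  IN  CAA  0 issuewild ";"
```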
Looking forward to using these at Cloudflare for our Universal SSL product. Our plan is let customers choose which CA they would like us to issue from, with sane defaults based on constraints imposed by other settings, e.g., CAA record prohibiting issuance from one of the CA choices.
Wildcards were one of the two big blockers for Let's Encrypt adoption at a lot of organizations. The second blocker, the operational discipline to automatically refresh the cert and restart services every <90 days, will likely be the only excuse left.
It does. "service nginx reload" (and similar commands, like systemctl reload nginx in systemd territory) sends SIGHUP to the nginx master process on all distributions I'm aware of, and that will cause the certificate and key files to be re-read.
I've been using this in production for more than a year now, and if you google around a bit, most guides for automating renewal on nginx[1] will use that command.
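Concretely, a cron entry along these lines is the usual wiring (the schedule is arbitrary; --deploy-hook is a real certbot flag whose command runs only after a certificate is actually renewed):

```
# Run twice daily; certbot only renews certs nearing expiry, and the
# deploy hook reloads nginx only when a renewal actually happened.
0 3,15 * * * certbot renew --quiet --deploy-hook "systemctl reload nginx"
```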
There are lots of organizations where certificates are "managed" by a recurring Outlook meeting on a 2-year cycle, where you hope that employee is still around. Moving to an automated system like certbot (which I use and love) with proper logging and email alerts is the right way to go! But that can require a lot of communication between the server teams, web teams, IT teams, etc. There are plenty of places where it seems more convenient (true or not) to just deal with $100 every couple of years than spend the time to implement new processes.
At work, we use a handful of SAN certs from Let's Encrypt. We have a few wildcard certs too, which we obviously had to buy. I'm glad to see LE offer wildcards, but I don't know that the lack of them held back too many people.
As for large organizations and the 90-day limit, I find dealing with it at work isn't a big deal. We have so many certs that they have to be automated anyway. Even if we only had a few, the process is much easier and faster than it used to be to have someone in the company buy a cert and figure out what files to get to us. Now we can just take care of it, no credit card required. An easy cert every 90 days or so, or one that is much more work once a year? Let's Encrypt has my vote, and people who want to make excuses will never run out of them.
I am genuinely curious how much this will affect the cert providers' commercial business, other than Let's Encrypt not being able to issue EV certs. Does anyone have a resource that talks about this?
When we moved our certs away from COMODO, we received a sales call from one of them. They found the contact info for an executive here and told them that by replacing our COMODO certs with another brand, we were at "tremendous" risk of our websites not working on the latest iPhones and iPads.
The entire call (which we ended up pulling and listening to, and then sent back to COMODO as a prime example of why we're through with their business) was designed to have a non-technical decision maker make an impulse decision over the phone to buy thousands of dollars' worth of certificates again.
Wildcard certs from Let's Encrypt cannot come soon enough.
Fortunately (and unfortunately), everyone all the way up to the assistant vice president came from software engineering and systems engineering. So when Comodo did their little scam, the AVP called bullshit on them and told them off. (The "unfortunately" is that while some of the higher-ups are technically exceptional, they have low regard for people skills.)
We're on LE for 90%. There's a client (there always is...) that demands Network Solutions certs. Yet they cannot put into words why that's their need, other than stupid bullyish business practices.
We're still trying to wrap our heads around how LE plans to offer wildcards... But I digress.
Ugh, when I was a security analyst for an enterprise, I'd occasionally have Network Solutions call me and try to sell me certificates. I'd explain that it's not my decision, you've got the wrong person, and how did you get this number? It turned out they'd call the front desk or the help desk and say they'd found a security hole on our public-facing websites. The security hole was that we used another company for our certs.
The real security hole was that the operators were patching through salesmen directly to the security staff without verifying who they were...
I'm not privy to billing discussions, unfortunately. I know how we get the certs: we just tell our contact that we need a long cert for X machine, and 2-3 days later it shows up in our email.
It's a pain, but we have only 14 machines we oversee, with 3-year certs on each. Nagios takes care of alerts within 60 days, so we can easily get the request in on time.
I would feel even more comfortable if I were able to pay a nominal sum. Even $1 per cert would go a long way in securing their infrastructure. Maybe Let's Encrypt does not want to handle the hassle of managing payments.
I personally donate to Let's Encrypt once a year. The trouble with donations at my company, however, is that 'gifting' money is a lot more complicated than buying something.
For instance, we are supporters of Vim, but we couldn't make a direct donation to the project. Our corporate policies make things a little stiff, as any donation like this is seen as potential publicity, so we couldn't move forward on that. However, we do often buy things for corporate events from Amazon, so we could use the affiliate link to buy our stuff and still contribute to the project.
There was another instance (I can't remember which project it was, as I didn't deal with it directly) where you could donate directly or buy 'swag' (T-shirts, cups, etc.). I remember one of the teams that wanted to contribute funds managed to expense it as swag for their department, so everyone got hats, t-shirts, pens, and coffee cups, because, again, they couldn't donate directly.
And of course, sponsorship is usually out of the question, because they don't want to be known as supporting one specific thing or another.
Sometimes it's just easier to make a 'sale' than it is to get a donation from huge users of your product.
Machines don't (on the whole) have wallets. So even for one dollar the effect is that now auto-renewal isn't possible, a human must intervene to pay. Donations, however, are appreciated.
That's only true if it's prepaid. LE could, in theory, simply send an invoice to the account's email after the renewal (or on a billing cycle, say yearly). Not that I think it'd be a good idea.
Question on this topic - is there a method of encrypting subdomains when you don't own the domain?
An example: I run a vm that exposes mysubdomain.azure.com, can I turn on ssl at that level? A google search says "no" but I figure this is a place where someone might have a workaround.
Each FQDN is treated separately, so generally speaking, if you can demonstrate control for a FQDN under an ICANN TLD, you can obtain a certificate.
One thing you have to keep in mind is rate limits. Unless the (parent) domain owner has registered the domain in question as a public suffix[1], you, together with all other users who have subdomains under the parent domain, will be limited to 20 certificates per week.
Some domains, like for example the hostnames EC2 instances get that resolve to their public IP, have also been explicitly blacklisted because they are generally not assigned to anyone for longer periods of time, and it would be easy to mint certificates for a large number of those hostnames by just spawning tons of EC2 instances, which would make those certificates largely useless.
Finally, domain owners may decide to prevent issuance using CAA DNS records, which are supported by Let's Encrypt.
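As an illustration, a CAA policy for a hypothetical example.com that only permits Let's Encrypt to issue (zone-file syntax; all names are placeholders) might look like:

```
; Allow only Let's Encrypt to issue for example.com and its subdomains
example.com.  IN  CAA  0 issue "letsencrypt.org"
; Optionally forbid wildcard issuance entirely
example.com.  IN  CAA  0 issuewild ";"
```

A conforming CA must check these records before issuing, so this blocks issuance by every other public CA for the whole tree under example.com.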
Sure, LetsEncrypt can issue certificates for that domain. If you have a webserver you control that runs on port 80, you can use Certbot[1] to get a certificate for that domain.
[1]: https://certbot.eff.org/
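For example, with Certbot's webroot plugin (a hypothetical invocation; the webroot path is an assumption, adjust to your server layout):

```shell
# Requires certbot installed, DNS pointing at this host,
# and port 80 reachable from the internet
sudo certbot certonly --webroot -w /var/www/html -d mysubdomain.azure.com
```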
It seems like LetsEncrypt should support that, per e.g. [1] - I haven't tried it myself, but I don't see any obvious howlers in that thread, or any a priori reason why it should not work given correct arguments to certbot and a service configuration that permits ownership verification to succeed.
It looks as though Azure itself also provides a CA [2], or at least resells another CA's services, for use with apps hosted on the platform. Depending on your needs, that may be a better alternative, though certainly it will also be more costly. It also appears [3] that the only route that service offers to satisfy the subdomain requirement is a wildcard cert, so there's that.
Each DNS zone (azure.com, mydomain.azure.com, otherdomain.mydomain.azure.com) is separate, and they can all be given independent TLS certs. The only relationship between azure.com and mydomain.azure.com, is that the azure.com name servers delegate DNS for mydomain.azure.com to the name servers of mydomain.azure.com.
So you can turn on encryption at that level, and using Let's Encrypt, the private key for your cert would be unique for you. So private keys for azure.com won't be able to decrypt traffic for mydomain.azure.com.
Afaik you would have to register a domain, and point alias.example.com to alias.azure.com via a CNAME record.
But for SSL etc. to work, you would also have to set up your VM so it "knows its own (new) name" (alias.example.com).
[You could also use an A record with the IP, but I'm guessing guaranteeing subdomain.azure.com points to the right IP is easier than updating the IP on updates etc. to the VM.]
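The CNAME setup described above, in zone-file syntax (all names illustrative):

```
; In the example.com zone: alias.example.com follows the Azure hostname,
; so it keeps working even if the VM's public IP changes
alias.example.com.  IN  CNAME  mysubdomain.azure.com.
```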
You can encrypt any subdomain at which you can serve an answer to Let's Encrypt's ACME Challenge [1][2].
That being said, I know they have a blacklist of certain domains. I've seen it once with amazonaws.com [3], and it's possible they have similar entries for azure.com, heroku, etc. They don't publicly release their blacklist.
I don't see why this wouldn't be possible: just tell Nginx (or whatever proxy) to serve the subdomain on 443 with appropriate TLS options and the root domain on 80. To a large extent, subdomains are treated as different sites w.r.t. security. But it's possible Azure has some particular settings that make this impossible.
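A minimal nginx sketch of that kind of setup, which also leaves the ACME http-01 path un-redirected so certificate renewals keep working (hostnames and paths are illustrative):

```nginx
server {
    listen 80;
    server_name mysubdomain.azure.com;

    # Let ACME http-01 validation through without redirecting
    location /.well-known/acme-challenge/ {
        root /var/www/letsencrypt;
    }

    # Redirect everything else to HTTPS
    location / {
        return 301 https://$host$request_uri;
    }
}

server {
    listen 443 ssl;
    server_name mysubdomain.azure.com;
    ssl_certificate     /etc/letsencrypt/live/mysubdomain.azure.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/mysubdomain.azure.com/privkey.pem;
    # ...site config...
}
```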
What verification strategy are they using to determine when a wildcard cert can be created? I see the discussion on https://github.com/letsencrypt/acme-spec/issues/64 suggesting that they validate a sampling of randomly-generated subdomains, but it's unclear if that's actually the strategy they're using (and an obvious downside with that strategy is it won't work for a client that wants a wildcard cert for whatever reason but hasn't configured DNS to handle arbitrary subdomains, though you could of course argue that these clients don't actually need wildcard certs).
Currently we're only planning to allow DNS method validation for wildcards. You'll have to validate the base domain via DNS, that's all. No HTTP or file-based validation option.
We decided not to offer HTTP-based (file-based) validation via randomly-generated subdomains for wildcards in part because if you're required to set up random subdomains you're modifying DNS to do that, and if you're already modifying DNS you might as well just use the DNS validation method.
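For anyone curious what "validate via DNS" involves mechanically: the dns-01 challenge has you publish a TXT record at _acme-challenge.&lt;domain&gt; whose value is derived from the challenge token and your ACME account key (per RFC 8555 / RFC 7638). A minimal sketch, using the example EC public key from RFC 7517 purely as an illustration:

```python
import base64
import hashlib
import json

def b64url(data: bytes) -> str:
    """Base64url without padding, as ACME requires."""
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def jwk_thumbprint(jwk: dict) -> str:
    """RFC 7638 thumbprint: SHA-256 over the canonical JSON of the
    required JWK members, in lexicographic order, no whitespace."""
    required = {"EC": ("crv", "kty", "x", "y"), "RSA": ("e", "kty", "n")}
    canonical = json.dumps({k: jwk[k] for k in required[jwk["kty"]]},
                           separators=(",", ":"), sort_keys=True)
    return b64url(hashlib.sha256(canonical.encode()).digest())

def dns01_txt_value(token: str, account_jwk: dict) -> str:
    """Value for the _acme-challenge TXT record:
    base64url(SHA-256(token "." thumbprint))."""
    key_auth = f"{token}.{jwk_thumbprint(account_jwk)}"
    return b64url(hashlib.sha256(key_auth.encode()).digest())

# Example EC public key from RFC 7517, for illustration only
jwk = {"kty": "EC", "crv": "P-256",
       "x": "MKBCTNIcKUSDii11ySs3526iDZ8AiTo7Tu6KPAqv7D4",
       "y": "4Etl6SRW2YiLUrN5vfvVHuhp7x8PxltmWWlbbM4IFyM"}
print(dns01_txt_value("some-challenge-token", jwk))
```

The resulting string goes into the TXT record; the CA looks it up and compares, which is why controlling DNS for the base domain is both necessary and sufficient.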
Glad to hear it. So the assumption is that if you control the DNS for example.com then you control the DNS for all subdomains? That seems like a reasonable assumption.
I used to use the DNS challenge for my (few) sites. Unfortunately I'd had many problems, so I switched to HTTP. These problems boiled down to:
1. Namecheap's API is rubbish: it's so heavily rate-limited that after doing 2 or 3 renewals in an hour it basically stopped working.
2. Propagation delays - I don't know if this was provider specific for Namecheap and Gandi, but sometimes lego would just hang waiting for LE to confirm propagation.
HTTP challenge works fine and is far easier if punched into the load balancer rather than relying on each back end.
Note: none of these problems is the fault of LE, from what I can see. I'm going to see if any of the ACME clients support updating Google Cloud DNS, which I use now, as its propagation time seems minimal.
Thanks for your work btw in securing the internet.
One great advantage of wildcard certificates is the privacy of the domains.
If you use customer1.mycorp.com, customer2.mycorp.com, etc., the names of your clients are exposed twice:
- if you issue one certificate with all the domains, all the domains are readable in the certificate (Cloudflare's free cert has this issue too)
- all the LE certificates are published in Certificate Transparency logs. So you can detect if anyone issues a cert. for your domain, but anyone can view the certificates you issued.
With a wildcard certificate, the subdomains used are not public.
Note this issue applies to "internal" subdomains too. You probably don't want to expose the hostname of your backoffice (admin.mycorp.com) or your new top secret project (linux.microsoft.org).
Correction: you can also enumerate through NSEC3, the most common (and default) mode of deployment; NSEC3 turns enumerable zone entries into the equivalent of a password hash file, which can be cracked.
There's a hack to prevent this that seeds the zone with false entries, but it requires the server to operate as an online signer. Since this is essentially incoherent to the design of the protocol (which makes major cryptographic and usability sacrifices to enable offline signers), there's an "NSEC4" being worked on now.
The company I work for is a large user of Let's Encrypt certs (we order them for our customers' sites). It doesn't look like we'll be able to use this, since we don't control our customers' DNS.
Yup. We use a lot of Let's Encrypt certs with domain validation via http-01 where our internal API can handle all the requests and validation without the end user requiring any technical knowledge.
It seems they will evaluate other options, but it's hard to imagine they would use something as convenient as http-01 for wildcards as then it opens up the platform to major abuse.
How would it open up the platform for abuse (serious question, not snark)? The CA we use to get wildcard certs for our customers uses a challenge process very similar to LE's http-01.
Serious question: what does LetsEncrypt buy me that I could not get from having a knob in applications and browsers that lets me accept self-signed certs?
To be clear, the reason I am asking is that historically a CA was intended to be a way to validate "who" you are talking to. LetsEncrypt is providing a signed cert that does not validate an entity. It just solves the self signed cert, which could also be solved in applications by having a setting to "Accept Self Signed Certs". Some apps and appliances already have this.
If you ask LetsEncrypt for a certificate for www.google.com you won't be able to get it as you cannot solve the challenge LetsEncrypt issues to check that you actually own www.google.com. Creating a self-signed certificate for www.google.com on the other hand is something everyone can do.
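Concretely, for the http-01 challenge the CA fetches http://&lt;domain&gt;/.well-known/acme-challenge/&lt;token&gt; and expects a "key authorization" string that only the holder of the ACME account key can compute (RFC 8555). A sketch of that computation; the EC key below is the example public key from RFC 7517, used purely for illustration:

```python
import base64
import hashlib
import json

def key_authorization(token: str, account_jwk: dict) -> str:
    """RFC 8555 key authorization: token "." base64url(JWK thumbprint).
    The CA expects exactly this string to be served at
    http://<domain>/.well-known/acme-challenge/<token>."""
    # RFC 7638 thumbprint: canonical JSON of the required JWK members
    required = {"EC": ("crv", "kty", "x", "y"), "RSA": ("e", "kty", "n")}
    canonical = json.dumps(
        {k: account_jwk[k] for k in required[account_jwk["kty"]]},
        separators=(",", ":"), sort_keys=True)
    thumbprint = base64.urlsafe_b64encode(
        hashlib.sha256(canonical.encode()).digest()).rstrip(b"=").decode()
    return f"{token}.{thumbprint}"

# Example EC public key from RFC 7517, for illustration only
jwk = {"kty": "EC", "crv": "P-256",
       "x": "MKBCTNIcKUSDii11ySs3526iDZ8AiTo7Tu6KPAqv7D4",
       "y": "4Etl6SRW2YiLUrN5vfvVHuhp7x8PxltmWWlbbM4IFyM"}
print(key_authorization("some-challenge-token", jwk))
```

Being able to serve that string at that URL is what demonstrates control of the domain; a self-signed cert demonstrates nothing to anyone else's browser.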
"paypal.com in the name" is a bit too ambiguous. You can register a subdomain like "paypal.com.example.com" and acquire a certificate for that, that's correct. There have been no mis-issuances under the actual domain "paypal.com", to my knowledge.
Here's a blog post explaining why Let's Encrypt does not think it should be the CA's job to prevent this[1]. At least two browser vendors seem to share this sentiment[2][3].
Hopefully not. Domain name similarity does not fall within the scope of DV certs.
For the usability problem you're hinting at (people mistaking DV certs for EV certs), I believe web browsers should consider demoting the color of the padlock displayed for DV certs from green to plain text color, while still retaining the padlock symbol (plain HTTP would still be red). This would provide enough distinction between the two types of certs without retraining the normal end user; "look for the green padlock" would still hold.
That said, 17k (even multiples of this) is still a rounding error compared to the total number of certs issued. I believe the public good done here far outweighs the bad.
I always disable mixed content, so I honestly wouldn't know how browsers indicate it. I was under the impression that yellow was used. Apologies for the confusion.
My point in my previous comment was that browsers should consider exposing the distinction between EV and DV certs to the user in a way that doesn't break their mental model of how browsers indicate the security of websites. How this is implemented is probably better handled by others more knowledgeable in UI design than I.
Safari (at least on the Mac) shows EV certificates with a green padlock and the organization name, which I think makes it nicely clear. PayPal shows up as "<green padlock> PayPal, Inc. www.paypal.com" whereas a scam site will just show "<gray padlock> paypal.com.scammers-r-us.com."
Teaching people to look for this might be hard, though.
Organisation names are not, to people's surprise, globally unique. I don't have a Mac but a common "solution" there is to add a country flag, so an Australian firm named Top Burgers gets a different flag icon from an Irish firm by the same name.
But wait, is the burger place you like the Irish one or the Australian one? The faux German decor and the American accent of their spokespeople on TV give no hint. Turns out - neither, the Top Burgers you love are legally named Upper Deck Barbecue and Burger Company, Inc., and so their EV would need that mouthful on it.
So yeah, EV isn't worthless, but it's probably not going to fix anything much you'd actually care about. If I ran a business with PayPal's money I'd get an EV cert because the price is a rounding error. But for 99.99% that's money they could spend on security or customer service improvements that'd see an actual return.
It'll be way harder to get an EV certificate for "Paypal Inc." than to get a DV certificate for paypal.com.scammers-r-us.com. Getting two legitimate companies mixed up is a problem, but far less of one than getting a legitimate company mixed up with a scammer.
I would agree that it would not be scalable or fair to LetsEncrypt to police all of them. Would it be feasible to maybe just police the top 50 or top 100 financial institutions?
All public CAs are obliged (by the Baseline Requirements agreed with Mozilla, Apple, Microsoft etc.) to operate a "high risk" list of names for which they will do additional manual checks. For Let's Encrypt the effect of requiring "manual checks" is that you can't get a certificate because they only do automatic issuances.
However the BRs deliberately don't say what should or should not be on the list. Is Gmail as important as a Russian bank? Probably not if you're Russian!
Also of course CAs are not exactly rushing to reveal everything on their lists, for much the same reason you don't get told every security measure in place at your local bank.
Finally, bad guys will react to any such restriction, if they can't get paypal.example they'll try paypa1.example, not allowed that? How about paypa1-web.example? Even the rules LE have in place today cause problems for somebody a few times per month because their South American trucking business has the same initials as a German bank or whatever.
Why do we constantly hear that Lets Encrypt need to police this but it hasn't been an issue in years of commercial CAs doing exactly the same thing?
I ran phishing susceptibility tests for years before LE and would often just expense a $9 certificate for something similar to paypal.com and never had an issue. In fact any time it came up, I got a sales pitch about "this is why you should pay for an OV cert".
CAs in practice don't verify who you are; they just verify that you are the entity that controls a particular domain name.
LetsEncrypt does this automatically, for free, and in a more user-friendly way. For information on the security considerations involved, see [1]. These are similar considerations to those of most DNS-based CA verification methods (which is most of them).
> CAs in practice don't verify who you are; they just verify that you are the entity that controls a particular domain name.
They do that at a minimum. OV and EV certs require more work and do verify who you are (for some definition of "who") and that's where the more expensive CA's add value.
How do we make it more obvious to non technical people the difference between a DV cert and other certs, in a way they will completely understand?
I understand what browsers do today, and I don't believe it helps protect people. It should be very clear what type of transport security is in use, along with a score, what type of identity has been verified, and what that means. I honestly don't know the answer to this, but I do know the existing methods in browsers just don't give a clear picture to non-technical people.
It's not about the CA, it's about the level of validation your chose. All major CAs offer all three levels of validation - DV, OV, EV. DV doesn't involve any verification of personal/company details, no matter the CA.
You don't get to say "all major CAs" referring to everybody else now, Let's Encrypt is definitely a major CA too now :-)
Also some (maybe you don't think they're major?) don't offer DV. After all they can't be cheapest or fastest so why not focus on a product with higher value.
LetsEncrypt verifies that the server making the request controls the domain against which the certificate is issued. Their certificates are most definitely not self-signed.
Anyone can generate a self-signed cert for any domain, LetsEncrypt only allows somebody who can demonstrate some level of control over the content in that domain.
Here's a concrete scenario: you host linuxbender.com and use a self-signed cert. You set your browser to trust self-signed certs. When you connect your browser to linuxbender.com, I MITM you. I serve you a _different_ self-signed cert, which you then trust. I can read your traffic.
In this scenario, if the site was secured with LetsEncrypt, you wouldn't have to trust self-signed certs, and I wouldn't be able to MITM you.
Key pinning goes some way toward combating this issue too, but doesn't solve every case.
In addition to what the other replies have already said: Domain Validated certificates have been a thing for years before Let's Encrypt entered the market. Most sites have used them and all mainstream CAs have automated the validation procedure and do not verify the entity requesting the certificate, only domain control/ownership.
There's different grades of validation, some of which audit your organization's legal structure to make sure you are actually a legitimate company. Let's Encrypt does domain validation, which validates that the entity it is issuing a certificate for has control of the domain name.
In practice, I doubt most people that use web browsers actually know the difference. They just see a green lock and assume everything is good to go.
I think that you are over-estimating just how much "validation" was done for a cert under the old model. An LE cert, like most standard CA certs, just verifies control of the domain in question at some point in the recent past. A self-signed cert does not even rise to that level, and making self-signed certs anything more than an oddball testing tool invites easy MITM.
You're right. It's not literally the same. However, it is effectively identical.
It is absolutely trivial for even a 5 year old to click a button and perform a downgrade attack on this type of an https:// connection. That is why accepting any self-signed certificates should be treated identically to http:// connections.
The issue is that if there is something like that, it has to be on by default. Otherwise, there's no difference between that and using self-signed certs now.
This is fantastic news. I just wish x509/browsers/other clients could be fixed with proper support for scoping, so signing a CA cert limited to a single domain wouldn't be a big deal.
That way, Letsencrypt could've just signed a CA cert that was authorised to sign certs for anything under example.com - but not for anything else - and we could bootstrap trust in internal/local CAs just as we now do with certificates.
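The X.509 mechanism for this actually exists: the name constraints extension (RFC 5280), though client enforcement has historically been spotty, which is presumably why no public CA will sign such a cert for you. A sketch of an OpenSSL extension config for a scoped CA of this kind (section name and domain are illustrative):

```
# openssl x509v3 extension section for a CA certificate that may only
# sign leaf certs for example.com and its subdomains
[ v3_scoped_ca ]
basicConstraints = critical, CA:TRUE, pathlen:0
keyUsage = critical, keyCertSign, cRLSign
nameConstraints = critical, permitted;DNS:example.com
```

Clients that honor the extension would reject anything this CA signs for names outside example.com.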
Wow, very cool. I wonder if elliptic curve certificates and intermediaries via certbot are also in the pipeline. P-256 can be issued via manual CSR and EC intermediaries are scheduled "before September" according to https://letsencrypt.org/upcoming-features/
I recently read about how Plex got trusted SSL certificates for all their users in partnership with DigiCert, and was really curious if a similar scheme could be accomplished with Let's Encrypt. The scheme required wildcard certificates so I figured it wouldn't be possible. But with this announcement, maybe it would be! I work on a product that generates a self-signed cert and so our customers always get a cert warning. They can replace the cert with their own if they like, but some customers aren't set up to do that. Offering an alternative where we securely mediate creation of a trusted SSL cert would be fantastic.
If your product consists mainly of a HTTPS service with some particular Internet accessible fully qualified domain name, say https://benth-app.customername.example/ where your customer owns customername.example then it's possible already today although you should ensure the customer is told what you're up to of course.
If your service doesn't provide HTTPS or customers don't have it accessible from the public Internet then you'd need cooperation from them unless you yourselves control the DNS records involved.
Wildcard issuance and validation hasn't been implemented yet, but I see no reason why this shouldn't be possible once the feature is rolled out. My best guess is that you'll be able to mix wildcards and FQDNs to your liking, provided that you can demonstrate control of all domains.
In my best Oprah voice: "Subdomains for everybody! You get a secured subdomain.. you get a secured subdomain..."
In seriousness, I pine for the post-`~username`, pre-`/username` days when services would hand out subdomains for user management. It still happens -- viz Tumblr, etc. -- but I feel like it's less frequent than it used to be. One reason I would avoid it is that wildcard certs are pricey. Nice to see that commoditize a bit come January.
ACME is on the path to IETF standardisation, and all of Let's Encrypt is Free Software, so Microsoft absolutely could enable this in a future IIS version. If you're a customer it can't hurt to tell MS you want this.
Meanwhile, you're at the mercy of third parties, probably volunteers, to make what you want possible.
Wildcard certs, which are generally seen as a security risk and whose most legitimate uses could have been covered by higher per-domain issuance limits, will be supported.
But S/MIME, the email encryption option that actually works out of the box in basically every mail client, sorry, nothing doing.