It's still a pain in the ass to manage wildcard certificates with letsencrypt.
Especially when your DNS registrar does not support DNS changes via API.
And even if the registrar supports it, you have to build and maintain the code that talks to the API. Yuck.
I wonder why they don't allow whoever controls the domain name to use /.well-known/acme-challenge on the bare domain to create wildcard certs that are valid for all subdomains of that domain.
So if you have example.com at a place which does not have an API, you can point _acme-challenge.example.com via CNAME to _acme-challenge.example.org, which you may have at a place that does.
Or you can point it to a sub-domain, so _acme-challenge.example.com points to _acme-challenge.dnsauth.example.com, and then you have dnsauth.example.com live on a DNS server in your DMZ.
This can be used for internal hosts as well (if you have split-horizon DNS). So if you have websrv1.int.example.com, you can put _acme-challenge.websrv1.int.example.com as a CNAME that points to _acme-challenge.websrv1.dnsauth.example.com, with dnsauth.example.com living in your DMZ. You do not have to have an A(AAA) record for websrv1 in your external DNS. You'd have to write some glue so that your LE client talks to dnsauth.example.com to add/remove the dns-01 verification records.
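A minimal sketch of that glue, assuming certbot as the LE client, a BIND-style server for dnsauth.example.com that accepts dynamic updates with a TSIG key, and the int -> dnsauth name mapping from above (all names and paths here are made up):

    #!/bin/sh
    # Hypothetical --manual-auth-hook: certbot exports CERTBOT_DOMAIN and
    # CERTBOT_VALIDATION before calling the hook.
    # Map websrv1.int.example.com -> websrv1, then publish the token
    # under the dnsauth zone that the CNAME points at.
    host="${CERTBOT_DOMAIN%.int.example.com}"
    nsupdate -k /etc/letsencrypt/dnsauth.tsig <<EOF
    server ns1.dnsauth.example.com
    update add _acme-challenge.${host}.dnsauth.example.com. 60 IN TXT "${CERTBOT_VALIDATION}"
    send
    EOF

A matching cleanup hook would run the same nsupdate with "update delete" once validation is done.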
> Then you need two domain name servers to serve your domain.
Or, as I explained further down, a sub-domain.
> And you need to write and maintain code to talk to the API of the second nameserver.
Which was several dozen lines of shell (used as a hook in our LE client), which we haven't needed to touch since we wrote it a year ago. Hardly a Sisyphean task.
> That's what I call a pain in the ass.
Often less of a pain than dealing with many CAs manually.
I mean, it's a pain, but why is this LE's problem? If the company hosting your domain doesn't support programmatically updating records, the least-effort path is to pay someone that does, like the ~$0.20/month it costs to host the records somewhere with an easier-to-use API.
Unless you're using a very obscure DNS hosting service, you shouldn't need to write any code. There are ready-made plug-ins for just about everyone.
I'm gonna focus on certbot since it's the main player in this space, but there are other ACME clients that might have better support for other providers. (Example invocations follow the list.)
* Route53. The plug-in is straightforward; the relevant IAM policy to allow a service account to change your records is already written for you. You end up just copying the tokens and the ARN of the zone into the config file and you're off.
* Google Cloud DNS. Google's IAM system is a little more complicated if all you want is a DNS hosted zone, but once you have a service account with the right permission and the JSON blob in place, the plug-in is actually easier to use, since it can programmatically find your zone based on the name instead of you copying the ARN.
* DNSimple and DigitalOcean. No IAM policy to fiddle with. Just generate an API token from your account, plug it into the config file, and you're done.
* RFC2136. Not super useful unless you're doing on-prem stuff, but really nice if you are. The config format for this one is super finicky and you'll be reading docs to generate the keys, but once you have it working it's pretty smooth.
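For a flavor of how little there is to it, here are sketches of the first and last cases (domains, file paths, and key material are placeholders):

    # Route53: credentials come from the usual AWS config/env vars.
    certbot certonly --dns-route53 -d example.com -d '*.example.com'

    # RFC2136: the finicky part is the credentials INI with your TSIG key,
    # along the lines of:
    #   dns_rfc2136_server = 192.0.2.1
    #   dns_rfc2136_name = keyname.
    #   dns_rfc2136_secret = <base64 TSIG secret>
    #   dns_rfc2136_algorithm = HMAC-SHA512
    certbot certonly --dns-rfc2136 \
        --dns-rfc2136-credentials /etc/letsencrypt/rfc2136.ini \
        -d example.com -d '*.example.com'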
How about any provider on the list that's supported by the lexicon utility:
> Lexicon provides a way to manipulate DNS records on multiple DNS providers in a standardized way. Lexicon has a CLI but it can also be used as a python library.
Azure DNS can be scripted from *nix using the Azure CLI and from Windows using PowerShell. You can use the certbot verification hooks to run the requisite scripts.
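For example, a hypothetical auth hook wrapping the Azure CLI could be as small as this (resource group and zone name are placeholders; certbot sets the CERTBOT_* variables):

    #!/bin/sh
    # Publish the dns-01 token into an Azure DNS zone.
    az network dns record-set txt add-record \
        --resource-group my-dns-rg \
        --zone-name example.com \
        --record-set-name _acme-challenge \
        --value "${CERTBOT_VALIDATION}"

You'd wire it up with certbot's --manual-auth-hook, plus a cleanup hook that calls "az network dns record-set txt remove-record".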
No, you only need an additional name server to serve one name, and it doesn't even have to be particularly reliable or fast or anything for most uses, as long as it's available at some point for a cert renewal before the old cert expires.
So, you can just put that nameserver on any machine that has a static IP address, or even a semi-static address, like, your home router or something.
Additionally, there are pre-built tools designed for exactly this purpose (e.g. acme-dns), so it's not like you even have to write any code to set this up.
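With acme-dns, for instance, the whole setup reduces to two records in your real zone plus a sanity check (names and the assigned subdomain are placeholders):

    # In the example.com zone, delegate a throwaway name to the acme-dns box
    # and point the challenge name at the subdomain acme-dns assigned you:
    #   auth.example.com.            IN NS     acme-dns-host.example.com.
    #   _acme-challenge.example.com. IN CNAME  <uuid>.auth.example.com.
    # Verify the delegation resolves before pointing a client at it:
    dig +short CNAME _acme-challenge.example.com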
More of a pain in the ass than working with a traditional CA's web frontend: generating your signing request, waiting for them to call and verify because you want EV, then downloading the bundle from their site, extracting it, and putting it into the (hopefully) automation that pushes it out to all of your servers needing the wildcard?
> Then you need two domain name servers to serve your domain.
If you are a mega-corp and have multiple country-specific domains that redirect (apple.ca/ -> apple.com/ca/) then having most of your DNS be "static" and your scripts only talk to one 'canonical' (sub-)domain (e.g., dnsauth.apple.com) may be less hassle: instead of dealing with scripting n>1 domains, you only have to deal with n=1.
I haven't looked at DNS in depth in a very very long time. My recollection is that it is a pretty simple protocol. Is that still the case?
If it is, I wonder if it would be reasonable to build a small stand-alone DNS server specifically to use with Let's Encrypt for this. You run this on your own server, and it only handles the domains that you need to satisfy Let's Encrypt.
Point the _acme-challenge record there with a CNAME on your "real" DNS server, and whenever you are getting or renewing a Let's Encrypt certificate, bring up the small stand-alone server.
When the certificate issues or is renewed, shut down the small stand-alone server.
Some Googling of the form "simple DNS server $LANG" for various programming languages $LANG turns up a few that could provide nice starting points.
Perl: on meta::cpan there is Net::DNS::Nameserver, meant to provide a local nameserver for testing local client resolvers, but it would probably be adaptable.
Python: [1] is under 50 lines, using no imports other than "socket". It's very minimal, but it shows how to deal with the protocol. [2], [3], and [4] all use a library called "dnslib". [2] is nearly 150 lines and more complicated than we need; it's even using multithreading. Same for [3]. [4] is similar but much smaller (comparable to using Net::DNS::Nameserver in Perl). Another library for simple DNS in Python is pydnserver [5].
PHP: I didn't find anything simple. All I found was much more full featured and many more lines of code than we need.
Go: [6].
Bash: [7]. I'm not sure what disturbs me more. That someone made this, or that I searched for it.
My personal take on this is that with easy automation wildcard certificates simply shouldn't be used any more.
In the past, one reason for wildcards was that it's too annoying to request certs for each subdomain. With automation this reason goes away.
The other reason is that you can have "secret hostnames". But if your security relies on secret hostnames, that's a bad idea to begin with. You still leak the hostnames via DNS and, as long as we don't have ubiquitous DoH+ESNI, to the network as well.
Wildcard certs, on the other hand, have certain risks. A vulnerability in the TLS stack on subdomain1.example.org may compromise the security of subdomain2.example.org if they share the same cert.
Say you have a *.example-usercontent.com wildcard certificate for domains like user-1234.example-usercontent.com, and you have millions of users. A wildcard certificate is appropriate because:
* LetsEncrypt rate limits are a thing
* The domains exist to leverage origin sandboxing in browsers, but are served by the same infrastructure. It's not more secure (but it is more complicated) to have more certificates here.
Generally, the assumption that two subdomains are served by independent infrastructure is often wrong. Think of things like blogger.com/blogspot.com. So the concern about compromising keys doesn't really apply.
Sandstorm.io serves each session of each document on a different subdomain, which yes, isn't an all-around "secret hostname" (access control is not managed solely via this strategy, of course), but it defeats a lot of possible dangers or ways to tamper with an app.
It would be extremely prohibitive to have to request a certificate for each session of each access to a document, before even discussing the rate limits of Let's Encrypt.
I've used wildcards a lot for securing internal servers. Have a public-facing "internal.my.domain", get a wildcard for that, and handle "internal.my.domain" internally, so we have valid SSL certificates for internal services, which is otherwise a pain in the ass.
Also, when running something like a kubernetes or openshift cluster, having dynamic ingress/routes is very easy, and offering your devs SSL that is not only on by default but mandatory, with close to zero configuration, is great.
Wildcards can be used to get around problems in managing access to resources in a multi-tenant environment. If you have 10 product teams, and they all serve different products under a single zone, you may need to grant them all access to modify your nameserver as well as be able to create their own certs at will, if they were going to do independent automated certs. But with a wildcard, you simply give them all the same cert and independently configure the nameserver to point the right records at the right ALBs, and now none of them have any access other than to just serve whatever request hits its ALB.
This would be a nice world to live in, but I stand up more domains in a day for review apps than LetsEncrypt will issue in a week. I don’t even have a large engineering org.
It’s 50 certs per week, not very much.
Honestly I’d pay LE happily for a paid tier option; I would love to subsidize this piece of internet infrastructure in return for all the certs I need.
This seems like a good idea... until your self-hosted DNS server starts getting DoS attacked. I've had seemingly innocent servers practically taken off the Internet with UDP/53 floods, which are very easy for any 12-year-old to execute.
The most annoying thing about it is the short expiry time.
I'm using a wildcard certificate so my internal services can be accessed without resorting to installing a self-signed CA cert on all my devices. I've set up a script to renew it every month, but unfortunately distributing the resulting certificate to all internal services is proving difficult. There is no simple way to update the cert in my managed switches or the IPMI interface on my servers without resorting to custom scripts to upload it via the web interface.
If it was a once-a-year job I could do it manually, but these certificates need to be regularly replaced which makes it a PITA.
letsencrypt support for wildcards has improved greatly. Now that DNSControl also manages certs, we use it to manage our wildcard certs... even ones with multiple wildcards. Shameless plug: https://github.com/StackExchange/dnscontrol
That said... every time someone uses a wildcard cert I think it should be considered a bug. It solves a lot of problems, but opens up others. I'd like to reduce our use of them significantly. Now that letsencrypt lets me create new certs within minutes (seconds?) instead of days, in a fully automated manner, it's easier and easier to reduce my need for wildcard certs.
Wildcard certs don't expose your internal host names in the public Certificate Transparency logs. Issue an SSL certificate for a new domain and you immediately get hit with random requests hoping you left a default open for a split second.
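You can watch this happen: CT logs are public and searchable, e.g. through crt.sh (the jq incantation is just one way to slice the output):

    # Enumerate every name ever logged for a domain and its subdomains.
    curl -s 'https://crt.sh/?q=%25.example.com&output=json' \
        | jq -r '.[].name_value' | sort -u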
I got this working with Traefik recently, which has code written to work with APIs from a bunch of DNS providers. I had to switch my domain from Google Domains to Digital Ocean to get it to work though (Google Domains is missing an API).
The docs, however, are terrible. I only got it to work because a friend told me that labels in the docker configuration form "groups" where you define everything (what I mean is that I create labels which cover how to match the traffic, what to do with the traffic I matched, etc., and all of this relies on one word shared between the different lines).
The community is not very helpful either.
Besides that, once you understand the idea, it works really, really well.
I'm using certbot with ansible for wildcards, and cert-manager + external-dns in EKS. Both work like magic, but I'm using route53. Wildcard validation works the same as regular validation for dns-01. I haven't built or maintained any portion of that code. You may want to take a closer look.
This is what I was feeling the other week with the last discussion: the core issue is a bad tool. Use a better client-side tool that doesn't generate a new hash on every Let's Encrypt update. Then you only touch the domain settings one time.
Initially, ISRG was funded almost entirely through large donations from technology companies. In late 2014, it secured financial commitments from Akamai, Cisco, EFF, and Mozilla, allowing the organization to purchase equipment, secure hosting contracts, and pay initial staff. Today, ISRG has more diverse funding sources; in 2018 it received 83% of its funding from corporate sponsors, 14% from grants and major gifts, and 3% from individual giving.
This is good, because having something that owns ca. 60% of the SSL certificates on the internet supported by corporate largesse is not a good route forward. I'm glad that's shifting.
It's fantastic that Let's Encrypt made short(er)-lived certificates possible. With automation in place it doesn't matter if the cert lives a year or a week. Eventually it will get short enough that we won't have to bother with revocation lists, OCSP, etc.
In the medium term the intent here is Delegated Credentials. The TLS working group is putting the finishing touches on a mechanism which goes like this:
You still have TLS certificates like today, but the private key corresponding to the public key in that certificate lives on a dedicated machine that's not accessible to the outside world, rather than your web server.
This dedicated machine uses the private key to sign tiny messages which basically say "I, the owner of this TLS certificate you've seen, authorise this web server for a short period (say 24 hours) to prove its identity using a short-lived key which you can verify using public key P".
Then your web servers serve up the TLS cert (for which they don't know the private key), this short message with P in it, and proof that they do know their short-lived private key, which can be verified using P. Browsers check the certificate, the delegated credential message and so on, and you've got all the same security but with much shorter-lived credentials.
This way, when somebody finds a zero day in your tremendously complicated web site and uses it to steal the private key, that key is only valid for maximum 24 hours and so you can close their window of opportunity relatively swiftly if you're on top of it. Meanwhile the dedicated server for making the signed messages is not publicly accessible and can be simpler and thus more secure.
This isn't really a way forward for making toy web sites with 1000 visitors per day, but if you're building anything where you already have multiple servers possibly in different locations then it's more interesting.
I have deep concerns about the push to criminalize and ban non-HTTPS web sites from functioning in mainstream web browsers. Many flee to Let's Encrypt certificates, which have short lifetimes.
What if Letsencrypt gets bought by Google / Microsoft and the free lunch is over? The end of the free web as we know it... ?
I'm not too worried. If SSL becomes mandatory and no cheap/free solution is available there'll be a very strong incentive to build a new CA doing what Let's Encrypt does to replace it. Then it's "just" about getting browsers to trust the new CA and at this point I think the main threat is the Chrome monopoly, not Google buying Let's Encrypt.
After all the CA system is effectively decentralized.
> Then it's "just" about getting browsers to trust the new CA and at this point I think the main threat is the Chrome monopoly, not Google buying Let's Encrypt.
Even for Let's Encrypt it took 3 years before they were comfortable abandoning the IdenTrust cross-signed certificate, and they won't even do it until July 2020, so a whole 5 years.
And that's a very popular NPO supported by many corporations.
ISRG is a California public benefit corporation, and is recognized by the IRS as a tax-exempt organization under Section 501(c)(3) of the Internal Revenue Code. Our EIN is 46-3344200.
And the protocol for talking to Let's Encrypt is open, so if Let's Encrypt turned evil, you could go to other providers without updating your overall process.
I am also concerned about the centralisation of everyone going to Let's Encrypt. Yes, they are non-profit, but they are located in the United States and must follow US laws.
Too big to fail: It's not just for banks any more.
If LE is compromised a few times and some fraudulent facebook.com or google.com certs leak out, how long before Firefox/Chrome/Edge blacklist their root cert like they did with Symantec[1], and end up breaking half the internet?
I understand that ACME is an open standard, but can someone point me to an alternative ACME provider that isn't "please call us for a quote" enterprise-grade?
Seems like a reasonable fear, but it is mitigated by short certificate lifespans. Symantec was much more painful to distrust due to multi-year certificate lengths.
That is not strictly required though. LE did this in order to speed up the rollout, but you can also skip that step and submit the root CA to the appropriate places, e.g. Mozilla's CA Certificate Program.
And then you lose all the older browser installations...
There was a reason LE went the way they went, and basically it is backwards compatibility.
Anyway I'm not arguing that it would be impossible to create an LE2 but it would not be that simple. Especially if the buyer is someone already in the certificates business that wants to destroy the "free-for-all" concept.
This is just a symptom. The underlying problem is the whole model of the SSL PKI. To quote Ivan Ristic's Bulletproof SSL and TLS:
> There’s an inherent flaw in how Public Key Infrastructure (PKI) operates today: any CA is able to issue certificates for any name without having to seek approval from the domain name owner. It seems incredible that this system, which has been in use for about 20 years now, essentially relies on everyone—hundreds of entities and thousands of people—doing the right thing.
My hope is that in time we will move to something like DANE[1].
It all depends on what root CAs the major browsers ship with. If browser vendors consider it good to have free certs available, like they seem to, then EFF/Mozilla/etc could start a "Let's Encrypt II" in case something happened to them.
This math is off because the vast majority of Silver sponsors are at $10k, and some of the sponsors at all levels are in-kind, meaning we get something other than cash from them.
In 2019 we spent $3.35M in cash (we came in a little under our projected budget of $3.6M). We raised $3.82M in cash between sponsors, grants, and individual giving.
The philanthropy-based funding model, coupled with the fact that this is now load-bearing Internet infrastructure, still makes me quite nervous, but I'm glad that you're able to sock away ~$500k for a rainy day.
ICANN is a natural monopoly because DNS was designed to have a single, central authority. Switching to a new DNS root without ICANN's cooperation would be a monumental task, and they know it.
But setting up a new CA, while expensive and time consuming, is very doable. If Let's Encrypt somehow became as corrupt as ICANN, there'd be incentive for some organization to create an alternative.
> pledge not to take big donations from corporates
You mean the people who actually benefit financially from the service? Look, I get the sentiment but this is a case where the incentives are remarkably aligned. Companies that derive real business value from the service foot the bill while the rest of the world gets free certs.
Tis a shame web browsers are only able to use Kerberos to authenticate connections and not additionally to provide confidentiality and protect the integrity of the communications channel.
Certificates are pain in the arse and it depresses me that we still don't have a way to deal with compromised private keys.
I'm always annoyed by the fact that no spec exists for an `intermediate CA for a particular domain`. If it existed, multi-level wildcard pain would disappear and the Subject Alternative Names stuff would not be needed.
> and Subject Alternative Names stuff is not needed.
Somebody already chimed in about name constraints, but I'm going to emphasise again that SANs are not an "alias" mechanism. SANs are how to use the Internet's names for things with the X.509 standard. The X.509 certificate is intended as part of the X.500 global directory system. Have you used the global directory system? No, because it was never built. And so X.509's names aren't appropriate for the Internet, which actually was built.
PKIX (RFC2459 and successors) documents how to use X.509 for the Internet, and it defines Subject Alternative Name for writing three popular kinds of names things have on the Internet: DNS names†, IP addresses‡ and email addresses.
Historically when Netscape invented SSL in the mid-1990s they abused the X.509 Common Name field to put a DNS name as text, having nowhere else to put it and billions of other more important problems to solve. But CN is arbitrary human readable text, not a great way to write DNS names. There have been way too many bugs as a result, and that's before IDNs existed. For a SAN dnsName there's deliberately exactly one correct way to write an IDN, but if you're abusing Common Name it's unclear what you should do.
So, after PKIX was standardised it was required to write SANs in all certificates. The old Common Name was grandfathered in, but all certificates in the Web PKI should use either SAN dnsName or ipAddress or both as appropriate.
Modern web browsers don't look anywhere else. Your Chrome or Firefox isn't trying to parse mysterious text elsewhere in the Subject to see if it might be an FQDN, it just reads the SANs and parses those, the rest is for humans only.
† Yes both kinds (PKIX uses A-labels here)
‡ Yes both kinds (IPv4 and IPv6)
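You can check which names a certificate is actually valid for with OpenSSL (the -ext selector needs OpenSSL 1.1.1 or later):

    # Print only the SANs; this is what browsers match against.
    openssl x509 -in cert.pem -noout -ext subjectAltName
    # The Subject (including any legacy CN) is for humans only.
    openssl x509 -in cert.pem -noout -subject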
X.509 Name Constraints extension is what you are looking for.
They are VERY expensive when ordered from a public CA, but we use them at work for internal PKI.
When I was setting up internal PKI back in 2015, I wanted to limit the CA to be less dangerous. At the time name constraints did not work. At all.
It wasn't possible to create a CSR with the field in OpenSSL, because the config parser didn't know about the key. So I did what any self-respecting person would do: I created the CSR manually with the low-level API, plugging in the OID directly. When I tried to sign that one, the openssl libs just blew up with BIO_read_XXX errors everywhere.
I then tried the same thing with golang's TLS stack. Trying to operate on a CSR with name constraints triggered a panic. So I gave up - no name constraints on internal CA.
Never got to try it out, but considering how the client libraries behaved on seeing the flag, it would have been amusing to see how different clients behaved when served with a certificate chain that ended up in name constrained CA.
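For what it's worth, current OpenSSL releases do understand the config key, so today the experiment looks something like this (a sketch; file names, the section layout, and the constrained domain are mine, and ca.key is assumed to exist):

    cat > ca.cnf <<'EOF'
    [ req ]
    distinguished_name = dn
    x509_extensions    = v3_constrained_ca
    [ dn ]
    [ v3_constrained_ca ]
    basicConstraints = critical, CA:TRUE, pathlen:0
    keyUsage         = critical, keyCertSign, cRLSign
    nameConstraints  = critical, permitted;DNS:.internal.example.com
    EOF
    # Self-signed CA cert that may only vouch for names under internal.example.com.
    openssl req -new -x509 -key ca.key -out ca.crt -days 365 \
        -subj "/CN=Example Internal CA" -config ca.cnf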
Yes, they are. The extension should be marked as critical, so if the client does not understand it it should error out.
At least Java, Go, curl and all major browsers support it.
I believe clients that don't support name constraints extension won't pass certificate chain verification should the root, intermediate or certificate itself have a name constraint defined.
Good question. I'm guessing that modern browsers such as Google Chrome and Firefox do. When it comes to agents such as wget, curl, mstsc.exe, etc., I'm not so sure.
There is an alternative ACME server implementation available from Buypass: https://www.buypass.no/ssl/products/acme. Haven't tried it personally, but they launched support for ACME v2 in late 2019.
ACME's secret sauce is the name validation challenges, not present in SCEP and other prior standards. Roughly the same people worked on two things: one of them is made very visible in ACME, a published standard, but just as important is the other side of the coin.
The Baseline Requirements https://cabforum.org/baseline-requirements-documents/ from the CA/Browser Forum set out shared rules for how a publicly trusted CA must do its job. From the outset the BRs have been clear that a CA needs to somehow validate that it's issuing the certificate for some-fqdn.example.com to the people who actually have some-fqdn.example.com or else this PKI is futile. But until relatively recently the BRs were pretty vague on how exactly they ought to do that. As a consequence some CAs did an admirable job, many did a passable job, some were a bad joke either through not grasping the threat model or just ordinary incompetence.
A trusted CA, for example, once had a system which would check you owned www.example.com by connecting to example.com over HTTP and making a request for a magic document, say http://example.com/xyzzy.html, then grepping the reply from the server for a magic string, which was the same as the name of the document, in this case xyzzy; if it was found, the check passed. But wait a minute: this means if your server says "404 Not Found: xyzzy.html was not found", the check passes and you get a certificate. Oops. Now, not checking for a 200 OK was a bug, but even if you do check for 200 OK, this validation is clearly more of a polite "Keep out" notice to bad guys than any sort of actual defence.
So in the same period Let's Encrypt was being set up and ACME was being defined, the CA/B Forum also reformed the BR definition of how to validate DNS names using some of the same personnel. The result is the Ten Blessed Methods (although right now there are actually more than ten of them) and ACME is an automation of just three of those specific methods. Since then CA/B Forum has also worked on obsoleting the riskier and less useful methods and creating newer safer ones. Today that would be /.well-known/pki-validation/xyzzy.html safely in a reserved namespace, and it'd need a value inside it that's either entirely random or is determined by some other factor not under an attacker's control (in Let's Encrypt the equivalent string is a hash of the LE user's public account key).
Some of the Ten Blessed Methods are inherently kinda manual. Some just aren't very universal. So ACME and Let's Encrypt focus on the three which are most suitable for automation, surfaced in a way that is hopefully even more secure than required by the BRs and made available to everyone.
For me it was a cherry on top of my home-built pipeline. I was able to learn general cert usage through setting up my CI/CD. It was fun and cost me nothing.
GitLab for CI, Nexus for jars and docker containers, and traefik for routing traffic.
It's super easy if you set them up using docker. I will move to Jenkins CI though. Its plugin ecosystem brings you much more value than GitLab's.
Adrian does a great job of summarizing the Let's Encrypt paper. He highlights all the important sections and gives the reader a complete overview of the system. Thanks for the summary.
It's funny how free certificates are now easier to use than overpriced ones. Knock on wood, letsencrypt works well, except when it doesn't, e.g. when you remove a subdomain from your webserver and then it refuses to renew the cert for the entire domain. But still, it's the best thing to happen to the web in a long time.
There's no alternative when you have multiple SANs; all of them have to be validated. But since they are free and easy to automate, you can just issue a certificate for every domain.
> when you remove a subdomain from your webserver and then it refuses to renew the cert for the entire domain
Most Let's Encrypt clients have the ability to automatically exclude any subdomains that have stopped working. However they don't enable this by default because they have no way of knowing if the domain is really gone or if a janitor just happened to unplug the server to plug in their vacuum when the renewal was attempted.
While we're seeing great progress in securing the web via HTTPS, email still lacks fundamental security. While I would love to see a functioning PGP or other decentralized setup, network effects are a strong force and it's still too much hassle for non-technical people.
Free S/MIME would help me distribute a secure e-mail setup to friends & family.
Looking for free or cheap S/MIME certificates, I found no satisfying provider. They were either expensive, or generated the certificates server-side (not trustworthy), or chained to an incompatible/untrusted root authority, or required dedicated Windows software for generating certificates.
LE for domain-validated code signing would be good as well. Knowing an executable came from a trusted domain even if it was mirrored or rehosted sounds nice.
Nope! For various reasons, including that the IdenTrust cross-signature has a path length constraint that doesn't allow any longer chains for this path. Also, Let's Encrypt doesn't want to be responsible for supervising the correctness of your CA's operations.
In theory this could be practical if you look at name-constrained delegations but it seems to me that there are a lot of practical problems with making this widespread. If you happen to be interested in discussing them in more detail, come over to https://community.letsencrypt.org/ and create a new "Issuance Policy" thread and we can get into it in more depth. But the short answer to your question is simply no—Let's Encrypt doesn't offer this service, isn't currently permitted to offer this service, and isn't interested in offering this service.
This kind of service would be extremely valuable for some use-cases. But I can see a hard-to-solve issue here. Imagine the private key for domain1.com leaked and an ordinary person (not the owner) wants to revoke the corresponding certificate. It's enough to tell the CA that issued the certificate, and it'll revoke it. Now imagine the private key for sub1.domain2.com leaked. It's signed by the domain2.com operator, who's just a small business or some hobbyist who doesn't even read his mail. Should letsencrypt be responsible for revoking those certificates? What if there are 4 billion of them (e.g. 1.1.1.1.domain2.com, 1.1.1.2.domain2.com and so on)? Should letsencrypt reach out to the domain2.com owner and ask him to revoke? Now it's human labour and can't be automated. Should letsencrypt just revoke the domain2.com CA if any of its subdomains were compromised? That's not good either. Should we allow the domain2.com owner to issue valid certificates for his subdomains without following proper security practices (like not distributing private keys to his devices)? Not a good idea either.
While Let's Encrypt is great, I keep worrying that what happened to the team managing OpenSSL, which resulted in the Heartbleed bug, could happen to the Let's Encrypt team. It would be a disaster.
Can you give some more insight on what happened to the team managing openssl during that time? I remember being on-call at a job when the news dropped.