I encountered this problem on my project Soundsync. It's controlled by a webpage that needs to connect to every Soundsync peer on the current local network. I couldn't use HTTP because I need to instantiate an AudioWorklet (which has the same security requirements as a WebWorker). The big browser warning for a self-signed HTTPS certificate is a deal-breaker for a lot of non-tech-savvy users (who are my target audience).
In the end, I used WebRTC from an external HTTPS webpage I'm hosting and a "mailbox" service where two peers can post and retrieve messages keyed by a UUID. To communicate this UUID to the peer on the local network, I first used an <img> tag with the UUID in the URL query string, but this method was recently broken by increased security measures in browsers. I now use two methods:
- Bonjour: every peer listens for every Bonjour request on the network; from the webpage I make a request to https://soundsync-UUID.local/. The peer then extracts the UUID from the request and connects.
- TLS Server Name Indication: I use sslip.io to connect to https://UUID_IP.sslip.io/. This resolves to the local-network IP of the peer, which uses the full domain name sent in the TLS handshake (SNI) to extract the conversation UUID. This method doesn't always work because some routers filter out DNS records that resolve to 192.168.X.X.
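For anyone curious how the SNI trick can work mechanically, here is a rough Python sketch (not the actual Soundsync code; the hostname layout, regex, port, and certificate paths are assumptions) of a peer pulling the UUID out of the SNI field during the handshake:

```python
# Rough sketch (not the actual Soundsync code) of a peer extracting a
# conversation UUID from the SNI of an incoming TLS handshake, for URLs like
# https://<uuid>-192-168-1-42.sslip.io/. Hostname layout, regex, port and
# certificate paths are assumptions for illustration only.
import re
import socket
import ssl

UUID_PREFIX = re.compile(r"^(?P<uuid>[0-9a-fA-F-]{36})[.-]")

def sni_callback(conn, server_name, context):
    # Called by OpenSSL during the handshake; server_name is the SNI value.
    if server_name:
        match = UUID_PREFIX.match(server_name)
        if match:
            conn.conversation_uuid = match.group("uuid")  # stash for later

ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
ctx.load_cert_chain("peer-cert.pem", "peer-key.pem")  # self-signed is enough here
ctx.sni_callback = sni_callback

with socket.create_server(("0.0.0.0", 443)) as listener:       # needs privileges for port 443
    with ctx.wrap_socket(listener, server_side=True) as tls_listener:
        conn, addr = tls_listener.accept()                      # handshake happens here
        print("conversation UUID:", getattr(conn, "conversation_uuid", None))
```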
This whole process is very hacky and doesn't always work, but I haven't found anything better. We still don't have a good way to make self-hosting easy for anyone while keeping it secure and not dependent on an external service.
As much as I dislike it for most things, your project sounds like the ideal candidate for an Electron app!
Edit: Just to clarify, I meant because an Electron app could be coded to allow a self-signed certificate, or allow AudioWorklets over http, or whatever other solution makes sense for your use-case.
I personally believe that browsers will eventually be forced to turn off CA verification (by default) of local network traffic (reserved IP spaces and the .local TLD) in non-enterprise browsers.
Enterprise browsers will no doubt continue to perform CA verification by default, and I expect a preference will be offered to let home users (rare) and enterprise profile management (common) override this to enable/disable as needed for whatever custom localnet definition they require.
No other solution can work here, as by definition anything in localnet can neither be verified by an external entity nor issued in any trustable or permissible manner that is resilient enough for home users to configure, navigate, and manage on their own.
This is a predictable outcome of HTTPS Everywhere, and all major browsers have chosen not to deliver this yet for simple https://whatever.local requests, as tested in each of them today. While local network traffic might not technically need to be HTTPS as urgently to keep users safe in a home scenario, browsers further narrow the list of HTTP-available features each year, and with the advent of HTTP/2 and HTTP/3 effectively requiring TLS, the ancient status quo that's being delivered no longer meets the needs of modern networked homes.
I encourage the major browsers to step up and define their direction forward in coordination with each other, so that further wreckage is not introduced into the home networking ecosystem. The lack of progress here is indirectly harming users by teaching them that "http:// is fine if something says it is"; as is well understood, the judgement of whether to wire-encrypt or not cannot (by default) be entrusted to end users following instructions.
If there are existing discussions underway by any of the major browsers, I'd love an opportunity to learn more about them from links or stories shared in this discussion.
An alternate solution to the same problem would be to streamline running your own local certificate authority. For example: if the local router was its own certificate authority (one that spoke ACME), and OSes and/or browsers were configured to check with the local gateway for their cert authority. You could add the Name Constraints extension and restrict it to local-only (reserved) TLDs.
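To make the Name Constraints idea concrete, a hedged sketch with the pyca/cryptography library of what such a router-local root could look like (the subject, lifetime, and constrained zones are illustrative, not a proposal for any actual format):

```python
# Hedged sketch (pyca/cryptography) of a router-local root CA carrying an X.509
# Name Constraints extension so it is only ever valid for local name spaces.
import datetime

from cryptography import x509
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.x509.oid import NameOID

key = ec.generate_private_key(ec.SECP256R1())
name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "Home Router Local CA")])
now = datetime.datetime.now(datetime.timezone.utc)

ca_cert = (
    x509.CertificateBuilder()
    .subject_name(name)
    .issuer_name(name)  # self-issued root
    .public_key(key.public_key())
    .serial_number(x509.random_serial_number())
    .not_valid_before(now)
    .not_valid_after(now + datetime.timedelta(days=5 * 365))
    .add_extension(x509.BasicConstraints(ca=True, path_length=0), critical=True)
    .add_extension(
        # The key part: clients that honor Name Constraints will reject anything
        # this CA signs outside these subtrees, so it cannot vouch for public sites.
        x509.NameConstraints(
            permitted_subtrees=[x509.DNSName("local"), x509.DNSName("home.arpa")],
            excluded_subtrees=None,
        ),
        critical=True,
    )
    .sign(key, hashes.SHA256())
)
```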
I don’t consider this approach to be possible to operate usefully at scale with completely non-technical users, as they aren’t interested in operating a minor bureaucracy for their thermostat to work.
I am capable of operating my own CA out of my home in a matter of minutes in order to solve this problem, yet even I consider it a waste of my time to do so. In-home transport encryption should not demand a burden from the user. To quote Brazil:
“Listen, this whole system of yours could be on fire and I couldn't even turn on the kitchen tap without filling out a twenty-seven B stroke six.”
It's a more plausible scenario, to me, that browsers will become stricter with CA verification. Non-technical users will be permitted easy access to resources secured by third-party-signed certificates, but they'll be forced to plumb the confusing depths of their browser / OS to make changes that would allow access to resources protected with untrusted certificates (assuming they're even allowed to make such changes, which I think will also eventually be taken away).
Nobody in the industry wants anything locally-hosted by end users to work well. Locally-hosted means not "cloud"-hosted. That means no sweet, sweet recurring subscription revenue stream or centrally-stored user data to "monetize".
> Nobody in the industry wants anything locally-hosted by end users to work well. Locally-hosted means not "cloud"-hosted. That means no sweet, sweet recurring subscription revenue stream or centrally-stored user data to "monetize".
This doesn't work for things like routers and modems that need to be configured before you can even access the cloud.
There have been "cloud" router products already. Cisco / Linksys had some, if memory serves. Have the router pull DHCP from the ISP, call home to the mother ship, then have the user configure it with an "app" on their phone. The vast majority of router buyers don't want to configure anything anyway.
To me the cleanest solution would be something ACME-like coupled with DHCP and/or 802.1X (or something like that) that would allow routers to advertise a) a CA certificate applicable to the local network only and b) a place where hosts can (semi-)automatically get CSRs signed. Of course, in practice that seems very difficult to actually get landed, especially as it is somewhat complicated to make sure that rogue routers cannot somehow impersonate external hosts.
If there's going to be some automated mechanism for newbies that effectively involves phoning-home or having some external centralized dependency at run time (even if it's only DNS requests), then I'd like that to be optional, including in the official builds.
One of the reasons I run OpenWrt is to avoid some of the undesirable crud of commercially-packaged routers. And the crud is getting much worse, especially with convergence with IoT, where there are conventions of often gratuitous (lazy and/or predatory) centralized dependencies.
OpenWrt seems to have a good amount of civic-minded volunteers using it, not only driven by router manufacturers doing versions for their devices (where there are a lot of unfortunate commercial pressures to grab all the intimate data they can, create dependency/engagement/subscriptions/etc.). So I suspect there are others who are reluctant to do this like a move-fast-and-break-things conventional IoT play, even if that can be convenient in some ways.
Personally, for now I just make my own self-signed TLS cert for the OpenWrt router's web admin console. Longer term, I want to make a dedicated hardware console for it. I don't expect everyone to do that, but I can imagine reasonably usable near-term solutions for bootstrapping a self-signed cert, integrated with the existing OpenWrt first-time config procedures, which would be easier than the bigger barrier of installing firmware for the first time on a lot of these devices.
My Motorola cable modem (MB8600) has this same issue with its status page, and Motorola's "solution" is strictly worse than just being unsecured. It uses a self-signed certificate, which requires clicking through security warnings in every browser.
As a bonus, it seems to generate the certificate based on a random seed at boot time. So if you power it off for long enough, it generates a new different self-signed certificate that causes un-click-through-able security warnings in all browsers other than IE11.
For Safari, I can at least delete the saved certificate from the keychain and get the usual "untrusted certificate" warning which I can click through instead of the hard stop "impersonation" warning.
The kicker is that you can't even do anything other than reboot from the modem status page. It doesn't even expose a firmware upgrade; that's controlled by the ISP.
I'm not sure that this is a real issue. Self-signed certificates can be accepted under a TOFU (trust on first use) pattern; the user only has to click through a scary browser warning once, and then "save" the exception locally on her machine.
That only works on Firefox. For all other browsers, you need to go through an extremely cumbersome process to install the certificates in the Windows trust store. This is not even possible on my work-issued laptop for example.
It's really annoying that the only way to provide secure HTTPS is to make your local devices dependent on a public service and abuse DNS resolution.
Hopefully as more and more people encounter this problem, some router-based certificate signing protocol will become known and accepted, maybe even with some kind of extension to DHCP to allow advertising the root cert of the router. Of course, a lot of care would need to be taken to make it hard to accidentally trust some cafe wi-fi...
> That only works on Firefox. For all other browsers, you need to go through an extremely cumbersome process to install the certificates in the Windows trust store.
This is a bug in those browsers, then. It's totally fine to use the system certificate store as a default, but it should be possible to trust other certificates at a more localized level - perhaps for a limited time, but not in a way that's limited to a single browser session.
Chrome is building their own root certificate store[1].
I personally believe this is because so many users are stuck with old Android devices where the store is no longer maintained. Folks like Let's Encrypt are unable to get a foothold[2], with so many decaying unmaintainable operating systems out there. Chrome & every other app doing 3rd party HTTPS have to start creating their own root certificate stores that they can maintain, since the dead operating system on such a vast % of devices will not. Tragic situation, in my opinion, and with so few unlockable phones, buyers have limited ability to maintain or repair their own devices.
A scary warning might not be an issue for more technical users like us, but for a non-technical user, it's a relevant obstacle. A non-technical user does not have the knowledge and experience to know whether the warning is legitimate or not. In the worst case, the user learns to ignore that type of warning from then on, which only leads to the warning being made harder and harder to ignore (we're in that treadmill already; this warning used to be much less scary and much easier to ignore).
Non-technical users should rely on their local IT support to address issues such as this. We're talking about OpenWRT configuration here, which is quite a "relevant obstacle" anyway for other reasons.
Or just have them use SSH. Every popular OS besides Android (that's OS X, Windows, GNU/Linux, iOS, all the BSDs) ships with an SSH client (almost always OpenSSH).
It would be nice if they all came with a VTE that worked over Bluetooth. Windows used to come with its own VTE for dialing modems, so obviously users can handle it.
> - HTTP access, insecure and some browser functionality is unavailable for pages/SPA's served over HTTP.
> - HTTPS and tell the user (maybe in a previous HTTP page before a redirect) to dismiss the browser big warning.
That's not really true though, right? I think the options are actually:
- HTTP access, insecure and some browser functionality is unavailable for pages/SPA's served over HTTP.
- HTTPS, insecure and tell the user (maybe in a previous HTTP page before a redirect) to dismiss the browser big warning, but some more web APIs work.
Unless I'm mistaken, with a self-signed cert you gain none of the actual security guarantees of HTTPS with a CA, mostly because there is no authenticity and MITM is trivial. I think it's a bad idea to pretend to a user that a connection is secure when it actually isn't. The solution, I think, would be to open up web APIs to local HTTP connections and create a verification system for self-signed devices, like Signal, Matrix, and probably other systems have.
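As a rough illustration of the kind of verification meant here (a sketch only, not how Signal or Matrix actually encode their codes; the address, hash choice, and grouping are placeholders): the client derives a short code from the device's certificate and asks the user to compare it with one the device shows on its own display.

```python
# Sketch: derive a short human-comparable code from a device's self-signed
# certificate, Signal-safety-number style. All specifics are placeholders.
import hashlib
import ssl

# Fetch the certificate without validating it; the manual comparison below is
# what replaces validation.
pem_cert = ssl.get_server_certificate(("192.168.1.1", 443))
digest = hashlib.sha256(pem_cert.encode()).hexdigest()

# First 24 hex characters, grouped for readability. A real scheme would hash
# the DER-encoded public key and use a sturdier encoding.
code = " ".join(digest[i:i + 4] for i in range(0, 24, 4))
print("Compare with the code shown on the device:", code)
```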
My completely-uninformed bet is that the ability to access sites with self-signed or expired certificates will get hidden behind a flag and ultimately removed from Chrome. Probably in 2-3 years?
Then the IT people will install some other browsers. Because there's a lot of embedded stuff out there (HVAC, UPSes, etc) that will never have "real" certificates, and those need to be accessible.
So if Chrome wants to piss off the Alpha Geeks in an organization, the folks who decide what software is used, not allowing any kind of override is a good way to do it.
I would say this is a strong argument against browsers treating HTTP traffic that way. There needs to be a solution for this use case.
Training people to ignore warning messages is clearly worse than just using http and letting the browser respond by disabling some features for safety.
If only there were a way to signify trust for a self-signed certificate or CA under a particular domain name, directly using control of the domain to anchor this...
I wonder if the router manufacturer could just have the router connect to a Dynamic DNS provider, register the router's current IPv6 address (so we don't have to worry about NAT from an evil modem), and get a Let's Encrypt cert over ACME. Synology does something like this if you open the right ports for your NAS.
Every IoT device doing this might be annoying for Let’s Encrypt, but routers I think are worth it considering they stand guard for the rest of the home network.
Sometimes you need to enter access credentials in order to enable internet access for your router. How will you do that (outside of telnet/ssh) if it needs internet so that you can connect to it?
Apologies if this was mentioned in the article or discussion (didn't see it, but also didn't read every last word), but what's wrong with having the device generate its own CA cert that it uses to issue a server cert for itself? The CA cert can be imported into the user's browser as a trusted CA, enterprise style, either via HTTP (if still allowed) or through OOB paths such as USB/uSD or some hack such as SMB.
Disclosure: we implemented this approach in a project I work on and...so far so good.
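For what it's worth, a hedged sketch (pyca/cryptography) of roughly what that approach looks like; names, lifetimes, and SAN entries are placeholders, and key storage, renewal, and the import-into-the-browser step are out of scope.

```python
# Sketch: the device mints a CA for itself once, then issues its own server
# certificate for its LAN names. Illustrative only.
import datetime
import ipaddress

from cryptography import x509
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.x509.oid import NameOID

now = datetime.datetime.now(datetime.timezone.utc)

def make_cert(subject, issuer, public_key, signing_key, *, is_ca, san=None, days=365):
    builder = (
        x509.CertificateBuilder()
        .subject_name(subject)
        .issuer_name(issuer)
        .public_key(public_key)
        .serial_number(x509.random_serial_number())
        .not_valid_before(now)
        .not_valid_after(now + datetime.timedelta(days=days))
        .add_extension(x509.BasicConstraints(ca=is_ca, path_length=None), critical=True)
    )
    if san:
        builder = builder.add_extension(x509.SubjectAlternativeName(san), critical=False)
    return builder.sign(signing_key, hashes.SHA256())

ca_name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "MyDevice Local CA")])
srv_name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "mydevice.local")])

ca_key = ec.generate_private_key(ec.SECP256R1())
ca_cert = make_cert(ca_name, ca_name, ca_key.public_key(), ca_key, is_ca=True, days=3650)

srv_key = ec.generate_private_key(ec.SECP256R1())
srv_cert = make_cert(
    srv_name, ca_name, srv_key.public_key(), ca_key, is_ca=False,
    san=[x509.DNSName("mydevice.local"),
         x509.IPAddress(ipaddress.ip_address("192.168.1.1"))],
)

# This is the file the user would import into their browser/OS trust store.
with open("mydevice-local-ca.pem", "wb") as f:
    f.write(ca_cert.public_bytes(serialization.Encoding.PEM))
```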
That is a terrible idea. Regrettably, adding a custom CA causes it to be trusted for everything, and a compromise of my router should not end up compromising my computer.
> A router that needs a certificate contacts the OpenWrt CA over an encrypted channel and sends its SSH host key; the CA hashes the key and reserves the "SSH-hash.luci.openwrt.org" host name for the device. The OpenWrt CA sends back a nonce to the router, which creates a certificate signing request (CSR), hashes it, and then signs the hash, nonce, and a timestamp using its SSH host key. The CSR and signed message are sent back to the OpenWrt CA, which verifies the information and signature, signs the certificate, and sends it back to the device.
Yeah, that's not going to work.
The CA/Browser Forum has clear guidelines on the validation/verification methods required before a certificate can be issued and you're not going to find a CA that will allow something like this (even from a subordinate CA because, as TFA points out, they are responsible for all issuing performed by their subordinates).
Folks have been trying to come up with a good solution for this for ages and one hasn't been found yet. At present, I think the best solution is to just create your own private PKI and issue your own certificates. If you don't want to deal with that, you can use ACME with DNS-based validation to get a certificate for your desired hostnames (every 90 days).
What's interesting is that their proposed scheme would already work with Let's Encrypt acting as the CA. I'm not sure why they think they need to have a subordinate CA.
Keep the goofy ssh key based dns registration and provide users with a SSH-hash.luci.openwrt.org A record pointing to their public IP, then just let LE and ACME do the rest.
Mikrotik's RouterOS (and I'm sure other commercial router distros) already provides me this in the form of <stringOfHex>.sn.mynetname.net (though I have to do the ACME myself via script or from a host inside the network).
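A sketch of one way the "SSH-hash" label in such a name could be derived (the key path, hash, and label length here are my own assumptions, not anything OpenWrt or Mikrotik actually specifies):

```python
# Sketch: derive a DNS label from the router's public SSH host key by hashing
# it and hex-encoding a prefix. On an OpenWrt/dropbear box you would first
# export the public key with
# `dropbearkey -y -f /etc/dropbear/dropbear_ed25519_host_key`.
import base64
import hashlib

with open("/etc/ssh/ssh_host_ed25519_key.pub") as f:
    key_blob = base64.b64decode(f.read().split()[1])  # "ssh-ed25519 <base64> comment"

label = hashlib.sha256(key_blob).hexdigest()[:32]  # keep the DNS label short
print(f"https://{label}.luci.openwrt.org/")
```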
> Keep the goofy ssh key based dns registration and provide users with a SSH-hash.luci.openwrt.org A record pointing to their public IP, then just let LE and ACME do the rest.
This presumes you only have one OpenWrt device and that it's on a public IP.
There's even a way around this: the DNS record can point to a server that the OpenWrt people control, and ACME can be used to get a cert issued to that server. Then the cert can be passed to the router that's running OpenWrt. And after that, the DNS can start pointing to the router's private LAN IP. Repeat every time the cert needs a refresh.
To mitigate issues while the refresh is happening, the DNS server running on the router could intercept requests for its own name. Of course that doesn't work if the client on the LAN is pointing directly to an off-LAN DNS server, so it's not a perfect solution. But presumably, with a low A-record TTL, "downtime" of the local DNS name could be kept to a minimum, and the cert renewal could be scheduled to take place in the middle of the night.
Not sure if any of this process would violate Let's Encrypt's policies and get the account banned, though.
One place I used to work had a daemon running on your computer, but wanted to control it from the browser without a mixed content warning, so they made control.example.com a CNAME to localhost and shipped the same certificate with the daemon. The cert got revoked when the CA found out about it.
I use this in my own setup. When Let's Encrypt issues a DNS challenge, the Cloudflare plugin completes it. The only requirements are:
(a) owning a domain
(b) having cloudflare control it
(c) somewhere to run the certbot renewal cron job
If you don't like (b), you can swap out Cloudflare for any of the other DNS providers. I chose it because it's free.
For devices that can't run certbot (e.g. a managed network switch), I run certbot on a Raspberry Pi and use a deploy hook to push the certificate to the device. The code for deploying to a TP-Link switch is open source:
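(Not that linked code, but for anyone unfamiliar with deploy hooks: certbot simply runs your script after a successful renewal and exports the RENEWED_LINEAGE and RENEWED_DOMAINS environment variables, so a hook can be as small as something like the sketch below. The scp/ssh upload and the remote "apply-cert" command are hypothetical; every device has its own way of accepting a certificate.)

```python
#!/usr/bin/env python3
# Sketch of a certbot --deploy-hook script. RENEWED_LINEAGE points at the
# live/<name> directory containing the freshly renewed files.
import os
import pathlib
import subprocess

lineage = pathlib.Path(os.environ["RENEWED_LINEAGE"])
fullchain = lineage / "fullchain.pem"
privkey = lineage / "privkey.pem"

# Hypothetical upload + apply step; substitute your device's actual API.
subprocess.run(["scp", str(fullchain), str(privkey), "admin@switch.lan:/tmp/"], check=True)
subprocess.run(["ssh", "admin@switch.lan",
                "apply-cert /tmp/fullchain.pem /tmp/privkey.pem"], check=True)
```

Wired up with something like `certbot renew --deploy-hook /path/to/the/script` from a cron job on the Pi.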
If they implemented that scheme, I bet that many other projects, including proprietary ones, would use their service, as this problem is common and somewhere between hard and impossible to solve.
Just checked my TP-Link router: it uses HTTP in the house, and I'm fine with it.
The web page could default to HTTP, then at the top of that page advise users to choose HTTPS-with-browser-warnings (providing a link for immediate use), which is safer for local use. The default would still be HTTP.
Most routers are wireless with WPA2-PSK, which is encrypted. If someone knows how to use the shared PSK to decrypt HTTP traffic and you need to defend against that, you already know you should use the HTTPS option.
While cracking WPA2 isn't trivial, it's not hard to redirect network traffic once you've managed to get in. Whether it's through abusing a flaw in the router's WPS functionality or plain old brute force, it's not impossible for an attacker to get into your network.
From there a simple flood of ARP packets will allow the attacker to take over your network traffic and listen in on any unencrypted traffic, using the router admin password as an escalation within the network infrastructure.
Of course, such an attack is unrealistic in a home network scenario. It doesn't matter, because most consumer routers probably contain some unauthenticated RCE or authentication bypass anyway. Consumer routers aren't the most secure products and nearly every time a security researcher starts digging into their firmware, something spectacular comes out of it.
It's not focused on TLS certificates, but the Autonomic Networking Integrated Model and Approach (ANIMA) working group[1] has a sizable share of its documents and drafts dedicated to shipping or setting up devices with pre-set certificates. Most of this relates to their Bootstrapping Remote Secure Key Infrastructure (BRSKI) work[2]. I'll let BRSKI describe itself in its own words, with the start of its abstract:
> This document specifies automated bootstrapping of an Autonomic Control Plane. To do this a Secure Key Infrastructure is bootstrapped. This is done using manufacturer-installed X.509 certificates, in combination with a manufacturer's authorizing service, both online and offline. We call this process the Bootstrapping Remote Secure Key Infrastructure (BRSKI) protocol. Bootstrapping a new device can occur using a routable address and a cloud service, or using only link-local connectivity, or on limited/disconnected networks.
The BRSKI document does, IMO, quite a robust job of laying out the environments, challenges, and constraints folks like OpenWrt face in trying to set up devices.
BRSKI presents a specific approach, which, no, does not immediately seem super useful/simple enough for OpenWrt. A device (Pledge) contacts a Manufacturer Authorized Signing Authority (MASA) service. The MASA returns a Voucher that points it to a Join Registrar (and Coordinator), which the device uses for Enrollment into the domain, Imprinting of the domain's material, and exchange of a Local Identity. The standard also includes ways for the MASA to collect domain information (so you can register your registrar) and to provide optional Ownership Tracking of devices.
OpenWrt could become a MASA, users could host Join Registrars for their domains, and nodes could securely go get the root trust key they ought to have from that Join Registrar. Since the domain generated that key, if the device were to present its HTTPS interface with that key or one derived from it, HTTPS ought to work.
In an alternate universe, ISPs could just give each customer a subdomain under `isp-cos-customers.com`. An RFC-standard mechanism for updating DNS records (dynamic updates, RFC 2136) could then give that customer full control over their subdomain. Their router could get a cert through ACME.
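A hedged sketch of that idea with dnspython RFC 2136 dynamic updates (the zone, TSIG key, record name, and server address are all made up; in practice the ISP would hand the customer the key along with the delegated subdomain):

```python
# Sketch: the customer's router updates its own record in the ISP-delegated
# zone via an RFC 2136 dynamic update authenticated with a TSIG key.
import dns.query
import dns.tsigkeyring
import dns.update

keyring = dns.tsigkeyring.from_text({"customer-1234.": "bWFkZS11cC1zZWNyZXQ="})

update = dns.update.Update("customer-1234.isp-cos-customers.com",
                           keyring=keyring, keyname="customer-1234.")
update.replace("router", 300, "AAAA", "2001:db8::1")  # the router's current address

dns.query.tcp(update, "198.51.100.53")  # the ISP's authoritative name server
```

The router could then publish the `_acme-challenge` TXT record the same way and complete a DNS-01 validation to get its certificate through ACME.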
The solution would be to create a DNS record for something like newinstall.openwrt.net, use that to generate a certificate from Let's Encrypt, and include it in the OpenWrt builds. Such builds would be rebuilt every 2.5 months and would have a lifespan of 2 months.
The only problem I see with that is that now you have to set up CI and automated builds for a vast number of targets (which I'm not sure already exists).
The public DNS record would then have 192.168.1.1 as its A record (or whatever the default is in OpenWrt).
Don't do this for any published software. If you want to build your own in-house systems this way knock yourself out, but for anything published this is not at all safe as I explain in the linked LWN comments.
If anybody other than the subscriber has the private key they can (and should) revoke the certificate. With Let's Encrypt you can even trivially automate this, fetch the new build, extract the private key, do the ACME revocation call with the signature from that key as proof and the certificate is revoked.
This is also a violation of the subscriber agreement for any public CA, it's unlikely they'll do anything about that beyond revoking the certificate, but they might, especially if you keep doing this after being told not to.
Having the private key is only sufficient to passively eavesdrop on TLS 1.2 (and earlier) RSA key exchange, which is not going to happen with any halfway modern browser.
You could actively MITM (and pass traffic through) with the private key on any version, because in this and most cases HTTPS is not mutually authenticated, so the OpenWrt server doesn't know it is talking to your eavesdropping system rather than to the web client.
> Such builds would be rebuilt every 2.5 months and would have a lifespan of 2 months.
Sounds like a fast way to hit flash storage's write-cycle limit and corrupt the device. The devices that OpenWrt targets mostly have flash memory with a limited number of write cycles[1] before the memory fails.
There's no reason that HTTP has to be insecure for the purposes here - it should be possible to implement browser-side encryption enough to securely transmit the password.
You're never supposed to reinvent the encryption wheel - but this would be a use case for it.
> it should be possible to implement browser-side encryption enough to securely transmit the password.
That only helps against passive eavesdroppers. Without a way to authenticate the device, a man-in-the-middle can easily pretend to be the device and receive the password.
Well, you could implement public-key authentication in JavaScript with a password-derived key pair. But someone could still intercept your session cookie. You could take it even further and build out an encrypted channel (WebSocket, or just message-based) to send all the data over, and use client-side rendering. Probably too complicated, but if implemented properly, it would work.
No matter how complicated you make it, a man-in-the-middle pretending to be the device can simply replace your authentication JavaScript. The only way to stop an active attacker from doing that is to authenticate the device to the browser, and this cannot be done in unauthenticated JavaScript sent from the device.