I'm most worried about the "long tail" of often very interesting, useful, and rare content (a lot of it from a time when the Internet was far less commercialised) that is unlikely to be hosted on HTTPS, and whose owner may have forgotten about it or can't be bothered to do anything about it, but which still serves a purpose for visitors. The "not secure" label will drive a lot of visitors away, and may even lead to the death of many such sites.
Imagine someone who knew enough to set up a site on his own server a long time ago and had left it alone ever since. Maybe he'd considered turning it off a few times, but just couldn't be bothered to. Now he suddenly gets contacted by a bunch of people telling him his site is "not secure". Keep in mind that he and his visitors are largely not highly knowledgeable in exactly what that means, or what to do about it. It could push him over the edge.
...and then there are things like http://www.homebrewcpu.com/ which might never have existed if HTTPS had been strongly enforced all along.
I understand the security motivation, but I disagree very very strongly with these actions when it also means there's a high risk of destroying valuable and unique, maybe even irreplaceable content. In general, I think that security should not be the ultimate and only goal of society, contrary to what seems the popular notion today. It somewhat reminds me of https://en.wikipedia.org/wiki/Slum_clearance .
(I also oppose the increased centralisation of authority/control that CAs and enforced HTTPS will bring, but that's a rant for another time...)
Unfortunately, if the owner of the content is not interested in keeping the site up, the content will be lost sooner or later anyway. He probably also does not bother to install security updates, and he will most likely stop paying the bills at some point (domain name, hosting server, etc).
Installing LetsEncrypt is not much work and he might be motivated if a lot of people ask him. If he is really not interested, it is probably best to archive the website to a real archive and hope they make sure the content remains available. Unfortunately this also means that the archive will no longer be found on Google or most other search engines. It is really a shame that there is no effort on Google's side to make sure archived content can be found among other search results.
> Installing LetsEncrypt is not much work and he might be motivated if a lot of people ask him.
Assuming he has direct access to his server, of course. If he's on a shared host, he may not have the option to use LetsEncrypt, but may be forced to buy a certificate from the hosting company.
Many hosting companies already give out free LetsEncrypt certificates with the ease of a checkbox. And those that don't will have plenty of incentive to do so, because their customers will spam their support saying that their website is not safe anymore (and many will switch if the price for a certificate is unreasonable).
I suspect people who commonly look at the older or weirder parts of the Internet will get used to the "not secure" warning. If you don't care enough to fix it, you probably don't care much about growing a mainstream audience anyway.
"Not secure" doesn't mean you have to shut it down, it just means it's not secure.
This is part of a trend of gradually increasing production values for mainstream sites; it makes the less polished stuff look dated, but in a way that's just going back to the old days when the Internet was an obscure place with a small audience.
I completely agree... this will just dilute the impact of "not secure", and after a bunch of safe interactions with "not secure" sites, it might actually make the label meaningless to the masses.
That being said, I think the owners of sites will still feel the sting in their gut seeing or hearing their site is insecure or that browsers are scaring visitors away...so I think it will speed up adoption even further and I do agree that https will become the default.
But why hide it so that normal web user can't access it?
Checking what Cert-company signed the certificate is important to trust a site.
Before it was one left click and it showed the company that signed the cert (optional you could view the cert itself)
Now many steps like click on hamburger button -> More tools -> Developer Tools -> Security tab (bottom of screen) -> View Certificate (shows certificate itself)
What a trainwreck of a decision to remove this vital info. E.g. YCombinator's cert is from "Comodo"; if it were suddenly from somewhere funky, that would lessen the trust.
I'm sure it was just a mistake to remove it, some cargo cult decision.
Must be a Chrome thing. In Firefox 54 it's three clicks: lock icon, ">", "more info". Actually the second click already gives some summary including issuer.
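For the curious, the issuer is also easy to check outside the browser entirely. A quick stdlib-only Python sketch (the helper names are mine, and the hostname is just an example):

```python
# Sketch: check who signed a site's certificate without digging
# through browser menus. Python stdlib only.
import socket
import ssl

def issuer_name(cert):
    """Pull the issuer's organizationName (or commonName) out of the
    nested tuples returned by SSLSocket.getpeercert()."""
    fields = dict(rdn[0] for rdn in cert.get("issuer", ()))
    return fields.get("organizationName") or fields.get("commonName")

def fetch_issuer(host, port=443):
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=10) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            return issuer_name(tls.getpeercert())

# Usage (hits the network, so commented out):
# print(fetch_issuer("news.ycombinator.com"))
```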
Interesting you link to homebrewcpu.com, since one of the links on that page directs to a blog where all the images on the homepage have been broken by Photobucket's recent policy change. I feel like in the future people will be mystified at how much of the early internet just gradually vaporized.
People who have created and then abandoned such sites are very unlikely to self-host on their own hardware or VPS. Their web hosts will simply get certs through Let's Encrypt or other and things will continue working fine.
I'd hope LAN addresses are included, because while you might use some services at home, most people primarily encounter LAN services on public Wifi. It would be sensible to have HTTPS used there. They'd have to come up with a new approach to login pages (currently most systems MITM HTTP requests), but otherwise it would make sense to get a warning when you access pages on your hotel or in-flight Wifi without encryption.
> most people will primarily use LAN services in public Wifis
[citation needed] - I would say most people will primarily use LAN services in private home/corporate networks. But I don't have a citation either.
These are different use cases. Maybe one needs the warning and the other doesn't. But putting the warning on everything and overloading the user with them is not the right solution.
Using a self-signed CA is a pity, because installing it on every phone, tablet, laptop, TV, PC, ... is cumbersome in a home network, and making all hosts public and depending on an external CA for local resources just to avoid scary warnings cannot be the right solution either.
> [citation needed] - I would say most people will primarily use LAN services in private home/corporate networks. But I don't have a citation either.
Sorry I don't have a citation here. But looking at my coworkers and family, none of them uses services in their home LAN. Most won't even know how to access the router.
On the other hand it's very common to use Wifi in public transport (at least here in London for the tube), airports, trains and hotels. Few people consider security when using these hotspots, making sure that SSL is enabled for all pages would be an improvement.
> On the other hand it's very common to use Wifi in public transport (at least here in London for the tube), airports, trains and hotels. Few people consider security when using these hotspots, making sure that SSL is enabled for all pages would be an improvement.
But do you use LAN services on that network? I think you will use that public hotspot to check facebook, hn, search something on google and stuff like that, not connect to some service in the local network of the hotspot.
Many captive portals operate as a LAN service, where you may be expected to type a password. If the public Wifi has a roaming agreement, that password may even be for your ISP or mobile phone provider.
Yes I agree. But I see that as an exception. The Captive Portal is ONE thing that needs to be secured and be accessible inside the LAN of a public wifi hotspot. I see that as a special case, and public HS providers can work around that (using a domain for the CP they own and got a cert for). But imho that does not mean that browsers should throw security warnings at users for everything else inside their home lan. At least until we have a solution that works for everyone. And
- depending on an external service for your internal home stuff, or
- installing a self signed CA on every device you own
is not a solution.
Until we have something we can replace the status quo with, we should not deprecate it.
They don't have printers? They don't connect to a NAT router before their cable modem? They never tether their computer to their phone? Their DVR doesn't have an app to control it? None of them have their receiver connected to their iTunes library?
While quite a few users do use services on their home LAN, contrary to the parent poster's statement, I believe the majority of users don't use those services via a web browser.
For instance (to draw from your examples):
1) Most people either install the printer drivers from the manufacturer's web site, or let Windows/OSX auto-discover and auto-configure the printer without a web browser.
2) Most users just use the wifi information provided by the cable/DSL provider when they installed the modem, use the WPS button, or install the app that came with their router rather than use a web browser to configure it.
3) Most users will not tether their computer to their phone.
4) Most don't install a DVR app on their phone. If they did, their phone probably communicates with their DVR via some intermediate cloud service.
5) If their receiver auto-connects to their iTunes library, it probably won't complain about that library not having a public certificate either.
All these things are present in the many tech-savvy houses in my area ... and every single one of them (with the exception of the printer) is missing from less tech-oriented houses here. Printers are also usually connected locally, when needed, in many of my acquaintances' houses.
There's a clear divide that I see daily between households that understand their tech, and households that simply use whatever was set up for them. That divide seems to be growing wider, lately, too ...
Seems like something that should tie onto the public/private network distinction that Windows makes, though I can't remember ever seeing it in other OSes.
I would really love to see some kind of LAN TLS solution that doesn't rely on requiring you to have your own CA. I've thought a fair bit about the problem, but haven't come up with any solutions that I like.
Browsers, rightfully so, don't accept self-signed certificates. Active Directory and Group Policies can push out a trusted self-signed root CA certificate and generate certs that endpoints can use, but that's a pain in the ass and usually requires central IT to manage.
Please someone come up with something I haven't thought of yet that doesn't break the internet but gets useful certs onto my LAN!
That any jerkoff with the right work shirt and demeanor can convince your coworkers that they need to service equipment at your demarc - and a lan turtle ends up connected to a trunk-enabled port on your switch that nobody notices for a year.
We are rapidly moving away from perimeter solutions and toward zero trust networking models. Yes, you should encrypt and authenticate inside your LAN.
...but those services, served over HTTP, are not secured and can be spied upon by your insecure Chinese webcam or by the 3% of routers that have malware, isn't that correct? So the browser should say so; it's up to you to ignore the warning.
Well, it's my LAN, and I can decide what I trust and what not. For example, in my home network I have different VLANs: one for things I reasonably trust, and one for the Chinese webcams and IoT lightbulbs.
If my router has malware, then even if I talk to it over HTTPS I'm screwed anyway; I don't think that is something I can fix with HTTPS.
> This one. Realistically all those devices are depending on external services already.
This is codifying (IMHO) bad practices and a really brittle architecture. Why would you want this? One of the reasons IoT security is a mess is that devices need to talk to internet endpoints instead of staying inside the NATted, firewalled LAN.
I think the NATed, firewalled LAN is an untenable concept: it blurs the line between private and public, a line that security requires us to make clear and sharp. If it's connected to a network, it's exposed enough that we should treat it as connected to a public network (which greatly simplifies our threat model); if it's not secure enough for that, it needs to be built as local-only hardware.
Way off on a tangent here, but can you point me in the direction of some documentation for VLANing specific devices, giving them access to the internet for whatever they need to do, the ability to access them over the LAN by trusted devices, but not giving them the ability to call into the secure LAN?
Say you have a VLAN-capable switch and a spare PC with a couple NICs to use as a router. I use the Sophos XG Home Edition for example but other router setups (like the mentioned pfsense) should be similar.
Say ports 1-4 on the switch are your trusted devices, 5-8 are your untrusted, and 9-10 are the LAN side of the router. There are other ways to do it but this is probably simplest and easiest to mentally map. Set ports 1-4 and 9 as access ports for VLAN 100. Set ports 5-8 and 10 as access ports for VLAN 200. Now you have two virtual networks hooked into one physical, partitioned by the router.
You then set up eth0 (port 9) as trusted LAN and eth1 (port 10) as untrusted LAN in the router, and give each an IP range (Separate subnets at a minimum). Now all trusted can talk to each other and all untrusted can talk to each other, but nothing together yet.
You then set routes and rules (port filtering, SNAT, DNAT, etc) for port access from either network to the other to finely control what ports are available to what. You do the same thing with eth2 which would be the connection from the router to the modem for internet access.
Not a full tutorial but hopefully that points you in the right direction.
That's exactly what I did, but I can't point you to a howto or something like that. I just bought a vlan capable switch and a pfsense router and set everything up.
3% of routers in Canada and the US have some kind of malware in them, and almost all other countries are worse off. You might not need HTTPS, but I need you to have it so I can trust downloading a PDF when I'm on your site. I need it so I don't get MITM by some shitty black hat tracking company. I need it so while I travel I don't have to worry about shitty governments tracking what I'm reading or watching.
And if your answer is "sure, but there will be other websites that are HTTP" my response is "yeah, but one day soon when enough of the web is secure I'm going to disallow all HTTP connections from my browser". And eventually I'll disallow all connections that aren't on HSTS preload lists. And eventually, hopefully, I'll disallow websites that don't have HPKP with long expires.
> eventually, hopefully, I'll disallow websites that don't have HPKP with long expires
I doubt HPKP will ever see wide adoption. At least not in the form it has now. It's just too damn easy to bork the config and take your entire site offline with no way to remedy that error.
Proxies that do https inspection act as a client to the destination server, decrypt traffic, then reencrypt locally using a self-signed (or otherwise non publically trusted) certificate.
In short, you can intercept the traffic, but it relies on the client explicitly trusting your certificate. This is the foundation of all security on the web.
We weren't using an off-the-shelf proxy, rather some nice debugging tools with extra help from the networking layering.
Maybe TLS is more foolproof against website replication with DNS spoofing and a few other tricks, but they did work without issue on SSL 1.0 connections, using modified web servers.
You'd still not be able to MITM traffic while I see the same fingerprint as usual. Either the traffic looks like HTTP to me, or it's a different certificate.
We did it, I already explained a bit in another post.
As mentioned this was done with SSL 1.0, I don't know if a similar set of tricks could be done with TLS, as I am out of these type of applications since 2000.
Now believe whatever you want, but I am sure of what I programmed in 2000, the infrastructure we had and gladly explain it in job interviews where confidentiality is assured.
SSL 1.0 wasn't even supported by many of the earlier builds of Internet Explorer on Windows XP, with SSL 2 getting deprecated around the time of XP. Even SSL 3 - the last of the SSL protocols - has now largely been deprecated bar a few misconfigured servers.
Furthermore TLS 1.0 has been supported since XP but even that is now in the process of being deprecated.
So your attack, if it did depend on SSL v1.0, is so outdated that it's not even worth mentioning. And certainly not in the way that you announced "let me tell you it is quite easy to MITM by any company owning part of the connection."
(Please excuse me using "Windows XP" for approximate timeframes. I can't remember the exact year for when these protocols were phased out but I can remember which devices we had to support).
edit: It was bugging me that I didn't really know much about SSL 1.0 compared with the later protocols so I decided to do a bit of reading. It turns out the reason I don't know much about SSL 1.0 was because it was never publicly released[1].
Which makes me even more puzzled about your anecdote, as why would you even want to run an SSL 1.0 proxy when even back in the year 2000 no devices were supporting it. Either way it's certainly not proof of the ease with which SSL can be MITMed.
I won't comment on how you built your proxy but HTTPS has come a long way since, eg TLS wasn't common back then and SSL has since been deprecated.
These days the only viable ways to MITM HTTPS are to attack the connection while it's still plain text HTTP (ie before the browser redirects to https://; this is where the HSTS header comes into play: it's cached by browsers, telling them to default to HTTPS and never attempt a plain text connection), or to form your own HTTPS connection with your own CA-signed certificate for the target site. That would mean you'd either need to compromise a signing authority or have your own CA certs already installed on the victim's PC (the latter is what some bad ISPs reportedly do when they inject ads).
Edit: I believe some corporate network proxies also work on the principle of having their own CA certs on their business workstations - so maybe this was how your proxy worked as well?
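The HSTS header mentioned above is just a plain response header that browsers cache. A rough Python sketch of what a client might extract from it (the field names are my own; RFC 6797 defines the real grammar):

```python
# Sketch: what a browser caches from a Strict-Transport-Security
# header value. See RFC 6797 for the actual syntax rules.
def parse_hsts(value):
    policy = {"max_age": None, "include_subdomains": False}
    for directive in value.split(";"):
        directive = directive.strip()
        if directive.lower().startswith("max-age="):
            policy["max_age"] = int(directive.split("=", 1)[1].strip('"'))
        elif directive.lower() == "includesubdomains":
            policy["include_subdomains"] = True
    return policy

# A typical one-year policy covering subdomains:
print(parse_hsts("max-age=31536000; includeSubDomains"))
# -> {'max_age': 31536000, 'include_subdomains': True}
```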
Or for smtp and imap traffic to do a downgrade attack (STARTTLS downgrade). The committee that set encryption to be optional and downgradable in modern smtp/imap over TLS should be placed against a wall. In the history of moronic decisions... All that to save a port.
I can't disclose much, but can give a very basic idea regarding SSL.
That information eventually needs to be unpacked, so you get a replica of the destination site with the same certificates, which you can easily download by pretending to be a browser, but running on your modified server, which then unwraps, repackages and then forwards the request to the original server.
Since one owns the server, it is also relatively easy to disable whatever validations the SSL algorithm requires at library level.
So I don't know TLS, and since 2000 I haven't messed with this kind of stuff, but I don't see why similar approaches wouldn't be possible.
You can't download the private part of the TLS certificate by pretending to be a browser. You can't publicly download it at all. Your browser comes with a chain of public certificates from reputable CA vendors, and you verify that a site is who it claims to be by using their public certificate (which your browser downloads).
The only way they can succeed in proving who they are, is if their server has access to the corresponding private key of the certificate, which is why you can't spoof a properly secured site unless you either hack their server, crack the encryption, or install your phony certificate on the client (which is what corporations sometimes mandate). That's it.
It's been 17 years since 2000. Whatever security hole you may have used (if any) has long since been patched. The weak cyphers used in some SSL versions which you may have depended on are mostly gone (certainly from high profile targets).
If what you state is true, now in 2017, then there would have to be a huge conspiracy which involves all browser vendors (including Mozilla), as well as all national governments and the EU, that hides this fact from the people. All software developers who do catch on (the relevant source code is open source after all) would have to be bribed or threatened into silence too.
You are either misinformed or consciously spreading misinformation.
> It's been 17 years since 2000. Whatever security hole you may have used (if any) has long since been patched. The weak cyphers used in some SSL versions which you may have depended on are mostly gone (certainly from high profile targets).
I very much doubt he was even doing anything that clever. Since it was a corporate network he probably just had their corporate CA certificate already baked into the company machine images / build scripts. He might not even have been aware this was happening if the IT department was large enough that coworkers other than himself managed the desktops.
The "I can't disclose much" argument on a 2 decade old hack is effectively just saying "I can't remember the details" or "I wasn't directly involved in setting up the proxy". Either way, while much has changed in the last 17 years, the practice of some businesses installing their own CA certificates on company assets has been a fairly standard way for corporate proxies to intercept HTTPS traffic. And a great deal less trouble than relying on SSL vulnerabilities since you already own and deploy the destination hardware+software anyway.
I don't really understand what lawyers have to do with the discussion but I certainly do believe you that you had an in-house proxy that did MITM HTTPS traffic. Lots of businesses do. The problem is you said "let me tell you it is quite easy to MITM by any company owning part of the connection" using that as your example. While your anecdote may be true, the statement it is trying to support simply isn't true and neither was your description of how SSL works (eg the proxy server being able to disable the client libraries SSL checks).
So it's not that I don't believe your anecdote - I'm sure that did happen in some form or other - but the nature of the exploit either isn't how you described or is so old and long since patched that it hasn't been exploitable in more than a decade thus isn't relevant to the statement you were trying to support.
You're loosely describing the 2nd attack I mentioned but overlooking the issue of the SSL handshake. You cannot perform your attack without having the server's private key, which, as the name suggests, is privately stored on the remote TLS endpoint. The private key is used to confirm that the client has received the website's certificates without getting MITMed. Since you don't have the website's private key you cannot use their public key in the way you described.
The workaround for that is to create your own SSL certificate for that website (this will create you a public certificate and private key). You will also need that new certificate CA-signed, otherwise the client (whose own SSL validations you cannot remotely disable, as you implied) will give the user a warning about an untrusted certificate (the message will vary from client to client). This means you either need access to the targeted client to install your own CA certs, or you need access to a compromised signing authority.
They are not. Either I get content as the keyed server sent it, or I do not. MITM of an HTTPS source under the same domain is not possible (the caveat being manually accepting a new certificate in my local browser).
People who can MITM you when you use HTTPS: the company that owns the certificate, the software on your local system, people who are capable of subverting one or the other (i.e. state-level attackers), any of the 150-odd CAs if it's willing to burn its entire business to do so (certificate transparency).
> I need it so while I travel I don't have to worry about shitty governments tracking what I'm reading or watching.
For this reason I used a VPN while travelling overseas.
Most places have wifi of varying quality. To my dismay/horror, I found that Hilton charges EUR15/day for the privilege of using a VPN.
Does Hilton actually block VPNs on their standard internet, or is "VPN access" just code for "we'll give you a public IP so that IPsec traffic won't get swallowed"?
You won't have confidence that your visitors see the content you serve as you intend because it can be modified in transit. Your visitors will be broadcasting at least the exact address of the pages they read on your blog, and are vulnerable to anything that gets injected into your traffic in transit (e.g. malicious script).
Every website should use HTTPS. It's the right thing to do. It's not hard to do these days.
There are legacy setups like S3 with custom domains, probably github pages with custom domain.
But in both cases there is no reason not to be using cloudfront or cloudflare.
I am hopeful github/tumblr and other static hosting services will add LetsEncrypt setup once browsers start showing these as "not secure". There will be a dip in user engagement otherwise, as the main article points out.
I'm using gh pages for my site, on a custom domain. I really don't want to use cloudflare or similar services for personal reasons. Afaict there's not really any other option though :/ I'd love it if GitHub were able to provide me with a LetsEncrypt cert. I've written to them about this before, but they said there was nothing planned yet.
FWIW, I use Gitlab Pages for the same, and they allow you to provide your own cert (be it LetsEncrypt or otherwise). This has become a good bit easier to administer since LE added support for DNS verification, but you're right, it would be cool to see these providers add built-in automated LE certs.
I've been doing the same. Unfortunately, it seems that if I want to update my cert I have to remove the domain and then add it back in.
Are you aware of a workaround? Otherwise, I agree with the ease now that LE supports DNS verification. I just wish we could edit existing domains in Gitlab.
You could plausibly use cloudfront, I don't know what tumblr's policies are.
But these aren't technical limitations! I don't know why Tumblr restricts you from using cloudflare, presumably because they want to control your content.
My two cents, don't use a free hosting service. S3+cloudfront is dirt cheap and will do the trick.
That is only a facade of security, since you have to assume trustworthy behavior on the part of Cloudfront and Cloudflare.
I'd rather deal with the very occasional “why not secure?” e-mail rather than pretend my site is actually secure. (I only serve static pages, not forms.)
That doesn't really change the parent comment. Just because it's harder sometimes doesn't mean it shouldn't be done.
A current project of mine has some usability issues because it accepts external API requests but I won't allow non HTTPS. It would give me great pleasure to accept all possible APIs, but it's just not right to open users to that risk.
It is, unless you want to use a custom domain with it. In the latter case your only choice to retain some semblance of HTTPS is to put it behind a CDN, but that still breaks end-to-end HTTPS. The problem really has to be solved at GitHub's end, by allowing users to upload their own certs.
You can set your own cert on the CDN though, and have the CDN request origin objects over https from the github domain. Since the CDN is the endpoint for your domain, you'll still be presenting a custom domain to your users.
Wait, how is serving your content through Github's servers using your own certificate any more secure than letting GitHub or CloudFlare provide the cert?
It's arguably more secure than letting cloudflare do it (adding a new party doesn't make the chain more secure) but it doesn't have to be more secure than using githubs cert. It's just that you can't use githubs cert with your own custom domain.
For $5/month you could throw up what is likely to be an adequate nginx proxy on Lightsail or similar and serve LetsEncrypt certs for all of your projects/domains.
It's not really that hard. Set up auto-renew, make sure the box is up to date and properly firewalled. You won't really have to think much about it.
Exactly. I remember viewing my website over a mobile connection in the UK and getting an ISP banner at the top of the page! Injected right into my HTML!
My understanding is that you will already be ranked lower on search engines.
My main concern with this happening is that browsers are going to get a reputation for being 'alarmist', so when something really goes wrong, they won't be able to communicate it effectively.
Yes, search engines will penalize you. ISPs will intercept traffic and change it. Your site will load slower. The problems with plain HTTP go on and on.
I believe the Chrome team has (or at least had) long term plans to mark all HTTP sites as actively dangerous, form fields or not. So, you'll be fine for now, but there will come a point when you will need to implement HTTPS.
HTTPS is pain in the neck and _currently_ I hate it from the bottom of my heart.
TL;DR: if you have a commercial service or device running in a local network, forget HTTPS and service workers; use HTTP and HTML5 appcache.
-- RANT starts here --
It would be lovely when every website and webapp uses HTTPS. But for a significant amount of them it's just not f..... possible without driving users completely insane.
If the HTTPS server doesn't (and never will) have a public domain, forget about encryption and security, and forget about using service workers. The following examples can't, for the love of god, ever provide HTTPS without completely f..cking up the user experience due to self-signed certificate warnings:
1) internal corporation services, websites and webapps.
2) services that run in a local private network like on a Raspberry Pi.
3) webapps which are served via a public HTTPS website, but need to talk via CORS to local unsecured services, like a Philips hue bridge or any other IoT device which is in the local network but only provides HTTP. These will treat users to a shiny mixed-content warning.
.... JUST use self-signed certificates, they said.
NO.
For normal users the UX of self-signed certificates is just non existent, it's a complete mess! It will scare the sh't out of users and will almost always look like your service is plain malware.
It looks much more secure to serve a good'ol HTTP site with no encryption at all.
> 1) internal corporation services, websites and webapps
If not for hostname validation, how would you even /know/ that you're talking to the "internal corporation service" rather than someones MITM proxy? And how would you feel if people on the same LAN could see and modify all your interactions with those services?
> 2) services that run in a local private network like on a Raspberry Pi
Depending on the type of service you may or may not want TLS. If you visit the service by IP address and specific port anyway, you can easily add an exception for your internal IP. This will never be used like so by non-tech-savvy people.
> 3) webapps which are served via public HTTPS website, but need to talk via CORS to local unsecured services
I cannot think of any reason to /not/ want to use HTTPS on this. It's horrible how things like the Philips Hue bridge work and rely on insecure HTTP to control your home lighting.
Don't blame browsers for warning people about their insecure systems and appliances. Instead blame their creators or manufacturers, as they're the ones who can fix this situation.
> It's horrible how things like the Philips Hue bridge work and rely on insecure HTTP to control your home lighting.
The Philips Hue bridge REST API is accessible in the local network like http://192.168.1.123/api/ .... which is great, since apps/webapps can talk to the bridge without a cloud or Philips server in between.
And this is the very problem: it's not possible for Philips to add HTTPS support to the Hue bridge without some sort of cloud roundtrip to a Philips server, while keeping the very cool feature of talking to the bridge only within the local network.
Because how could that be deployed without self-signed certificates and the usual browser exceptions and warnings?
If it is an HTTP API, you don't really need a public certificate. You can have a long-term self-signed certificate on the device and check that the thumbprint hasn't changed every time you connect from your client. Those big warning windows are for connecting to it from a browser, not a REST client.
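The thumbprint check described above fits in a few lines of Python, using only the standard library. This is a minimal sketch, assuming a SHA-256 pin of the device's DER cert was recorded at pairing time; the host, port, and pinned value are placeholders:

```python
import hashlib
import socket
import ssl

def cert_fingerprint(der_cert: bytes) -> str:
    """SHA-256 fingerprint of a DER-encoded certificate."""
    return hashlib.sha256(der_cert).hexdigest()

def connect_pinned(host: str, port: int, pinned_sha256: str) -> ssl.SSLSocket:
    """Open a TLS connection and verify the server cert by fingerprint
    instead of by CA chain -- suitable for a self-signed device cert."""
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
    ctx.check_hostname = False       # no public hostname to validate
    ctx.verify_mode = ssl.CERT_NONE  # chain not checked; the pin does the work
    sock = ctx.wrap_socket(socket.create_connection((host, port)),
                           server_hostname=host)
    der = sock.getpeercert(binary_form=True)
    if der is None or cert_fingerprint(der) != pinned_sha256:
        sock.close()
        raise ssl.SSLError("certificate fingerprint changed -- possible MITM")
    return sock
```

The pin replaces the CA chain entirely, which is exactly why this works fine for a REST client but can't help a browser: the browser has no place to store the pin on first contact.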
Cross-domain is needed, otherwise an app/webapp couldn't talk to the bridge since the bridge only serves the REST API.
However, in order to send control commands and query light states, the app/webapp needs to authenticate and create an account, which is only possible for a few seconds after pressing a physical button on the bridge.
We have the same issue with Glowing Bear (https://github.com/glowing-bear/glowing-bear). It's a web frontend for an IRC client (WeeChat) that connects directly to WeeChat via WebSockets. Sort of like self-hosted irccloud without a cloud. We really want everyone to use encrypted connections [0] and push people onto the TLS version of Glowing Bear. But some people host their WeeChat on their local network, and you can't (realistically) get a certificate for a local IP. So for those people we need to open an unencrypted websocket (ws://1.2.3.4), which isn't possible from an https site. Ideally we'd like to disallow unencrypted connections to non-local destinations but that's practically impossible to determine in JS. It's a super annoying problem.
Disallowing unsafe websockets from secure origins is one of those policies that is a really good idea 99.5% of the time but for those last 0.5% of use cases, it's a major pain in the bum.
[0] WeeChat has a /exec command to execute arbitrary commands, and the client has access to that --- not great when you transmit your password in plain text.
> it's not possible for Philips to add HTTPS support to the hue bridge without some sort of cloud roundtrip to a Philips server
I don't see why not. The alternative is freedom. Philips doesn't have to lock their devices. That's a choice they made, sadly the choice that most companies make.
> Because how could that be deployed without self-signed certificates and the usual browser exceptions and warnings?
The fact that your browser warns you about insecure communication happening from that web page is a good thing. Even if you deliberately choose to accept that and believe that there's no other way for this particular service/device.
The simple fact that you accept some insecure traffic doesn't make it secure.
> > it's not possible for Philips to add HTTPS support to the hue bridge
> I don't see why not.
That's not a constructive argument. I don't see how they could make it work?
Even if they somehow solve the problem of giving these devices domain names, and even if they generate a separate private key for each unit, the key and cert are going to be embedded in the firmware, and a sufficiently sophisticated attacker will just extract them and become able to impersonate some Philips device.
How is the user of another device going to tell whether he is connecting to his device, or to a malicious neighbour impersonating it: establishing Philips-signed HTTPS with the victim on one side, another connection to the victim's device on the other, and MITMing the victim?
You would have to make all users install a trusted certificate authority tied to their individual device. Which is a UX disaster in current browsers and also a security disaster, because if this becomes a norm, sooner or later somebody will sell you a toy device bundled with a CA crafted to give him the ability to impersonate any website. And you'll trust this CA because you want to play with the toy.
This maybe could be made to work with some improvements in browser UI. Make it easier to add new roots of trust. Make it easier to learn and/or limit what websites these certs will be authorized to authenticate. But nothing like that exists now.
> The fact that your browser warns you about insecure communication happening from that web page, that's a good thing. [...] The simple fact that you accept some insecure traffic, doesn't make it secure.
True. As somebody pointed out elsewhere in this thread, this warning will become another EU cookie banner nothingburger.
> > > it's not possible for Philips to add HTTPS support to the hue bridge
> > I don't see why not.
> That's not a constructive argument. I don't see how they could make it work?
Missing from your quote: The alternative is freedom. Philips doesn't have to lock their devices.
If Philips (and other companies; obviously this doesn't relate to just Philips) provided the community access to their devices and software rather than locking them out, I believe this problem would not exist.
The original issue is that a public website (used over TLS) that interacts with local network devices without TLS shows warnings about insecure communication. Again, the warning is shown because it /is/ insecure. There are plenty of alternatives for securely interacting with an IoT device. Plain HTTP from a public website is just not one of them. For example, look at how Apple's HomeKit has implemented that. HomeKit is not usable from a public web page in a web browser. That's a good thing. (aside: I'm not a big fan of HomeKit, but their security is not bad)
So if vendors are annoyed with browser warnings, it's because /they/ are doing the wrong thing, not the browsers.
> sufficiently sophisticated attacker will just extract them and become able to impersonate some Philips device
Just like on any website. Just because something isn't 100% unbreakable, doesn't mean it's a bad idea (you do lock your doors, don't you?)
> Missing from your quote: The alternative is freedom. Philips doesn't have to lock their devices.
> If Philips (and other companies, obviously this doesn't relate to just Philips) would provide a community access to their devices and software rather than locking them out, I believe that this problem would not exist.
The problem is technical and won't be fixed by just opening the software.
> There are plenty alternatives of securely interacting with an IOT device.
Please name just one which works for webapps, besides HTTPS.
> So if vendors are annoyed with browser warnings, it's because /they/ are doing the wrong thing, not the browsers.
HomeKit is nice but not available to webapps; apps, of course, can take advantage of several security mechanisms.
My whole rant is about browsers and HTTPS in non public networks.
For webapps which want to talk to IoT devices there is only HTTPS and there is _no_ sane way to provide robust, local!, access to a LAN device via HTTPS.
Here are some requirements (these are real; I'm working on an IoT-ish product):
* webapp must be served via HTTPS, either from the IoT device or vendor site
* it must just work! If the webapp is served from the IoT device, the user shall not be required to install a certificate or set an exception (because it then looks like scary-as-hell malware)
* the webapp must work offline (service worker or appcache) without internet connection
* webapp must be able to talk directly to the device, no cloud or vendor server inbetween
* the IoT device which provides a secured REST API might be in a LAN which is NOT connected to the internet --
so the '<random-stuff-id>.vendor.com DNS resolves to device IP with a Let's Encrypt CA' approach won't work here (otherwise a nice hack)
To my knowledge it's technically not possible today to build such an HTTPS-secured webapp in a local network without breaking the mentioned requirements.
I think this is part of the larger problem of the relentless "cloud first" movement that the whole industry seems to have adopted. I feel like there isn't any new software, device or standard in development by now that doesn't demand constant internet access and a dedicated background service. Even basic things that should have no business relying on internet access get swallowed by it. (Browsers, operating systems, cars...)
The economic incentives that push everyone in that direction are obvious but I think in the end that will lead to more harm than good.
That being said, I can understand that making IoT devices directly accessible from web page JS could cause some security headaches:
As an example, apparently a lot of recent exploits were caused by programs opening a loopback-only REST service for IPC. Those services weren't secured because, hey, if someone can talk to loopback, the system is compromised anyway. The developers didn't realize that any webpage open in a browser can do that via script (respecting CORS) and so even loopback services should be considered exposed to the internet.
I can imagine that an IoT device offering a browser-accessible REST interface might cause similar non-obvious attack vectors. So at the least, it would have to implement some kind of user management and authentication - which might be challenging for small devices.
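One cheap mitigation for the loopback-service problem: browsers attach an Origin header to cross-origin requests, while native local clients normally don't, so a loopback IPC service can refuse anything that looks browser-initiated. A minimal sketch with Python's stdlib HTTP server; the handler, port, and policy are illustrative, not any particular product's API:

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

def is_browser_cross_origin(headers) -> bool:
    """Browsers set Origin on cross-origin (incl. CORS) requests;
    a local non-browser IPC client typically does not."""
    return headers.get("Origin") is not None

class LoopbackOnlyHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if is_browser_cross_origin(self.headers):
            # A web page open in the user's browser is probing us: refuse.
            self.send_error(403, "cross-origin access forbidden")
            return
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.end_headers()
        self.wfile.write(b"ok")

# Binding to loopback alone is NOT enough -- that's exactly the point above:
# HTTPServer(("127.0.0.1", 8765), LoopbackOnlyHandler).serve_forever()
```

This is defense in depth, not authentication; a proper token or user-management scheme is still needed for anything sensitive.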
I think what we really need is some kind of dedicated standard for browsers talking to things on the LAN. Such a standard could then handle discovery, certificate management and authentication/permissions in one go - and would enable browsers to present a good UI for those steps.
However, right now everyone seems too busy developing intricate rube-goldberg machines[1] to care, and the agenda of the browser vendors seems to go in the opposite direction -- so I don't have high hopes.
I think practically, the most feasible step right now is to forego browsers and build an app instead. Then you have to deal with the headaches of app development but at least you get a nice user experience without any backend services...
> The problem is technical and won't be fixed by just opening the software.
I strongly disagree. The main reason is that if Philips had opened their firmware to the public, it would have had different protocols by now than just HTTP with a poor man's JSON API.
> HomeKit is nice but not available to webapps, apps of course can take advantage of several security mechanisms.
That's basically my point: do /not/ use a web app to control local insecure IoT devices.
> Please name just one which works for webapps, beside HTTPS.
Use locally resolvable DNS names and wildcard certificates signed by commonly trusted (public) CAs. It's been done before (Plex does something like this IIRC).
* update: I just noticed another comment [1] that mentions Plex with a link to some technical details [2].
> it would have had different protocols by now than just HTTP with a poor mans JSON API.
Sure, but now you can't control the gadget from the browser and the vendor needs to write an application or something for whatever shitty OS you want to use.
> Use locally resolvable DNS names and wildcard certificates signed by commonly trusted (public) CAs. It's been done before (Plex does something like this IIRC).
Not that simple. Public CAs will likely only give you certs for domains you own (like plex.direct) and your users generally don't have nameservers authoritative for such domains on their LANs (maybe you could pull it off if you are a router vendor, but not with IoT light bulbs) so they have to query your public nameserver and the system fails without Internet connection.
And there is no easy solution: if your light bulb could register an xxx.philips.com domain via UPnP on your router or via SMB on your Windows box, it would be very much unclear what exactly should prevent it from registering philips.com as well.
> Just like on any website. Just because something isn't 100% unbreakable, doesn't mean it's a bad idea (you do lock your doors, don't you?)
Don't you think it's a completely different thing to extract keys from a remote server (try https://news.ycombinator.com/ for example) and a physical gadget you own?
Doubly so if the gadget is open source, as you apparently prefer.
Not really; for a hacker both are remote servers, aren't they? I agree that in practice many security updates are not provided for IoT devices (another reason for FOSS), so it might get easier, and at the same time less relevant, to extract the keys.
A gadget being open source doesn't mean the private keys are. Most internet servers are running open source software (BSD, Linux).
In my opinion the manufacturer should /not/ have your gadget's private key. But that's not really related to this problem.
I was talking about a different scenario: I buy the same kind of light bulb you own, extract its private key and use it to either:
1. impersonate your light bulb, because they both have the same key
2. impersonate my light bulb, because you and your browser can't tell the difference
To prevent such attacks, each device needs its own certificate and key and then furthermore you need one of the following:
1. each certificate is signed by a unique CA which you add to your browser's list of trusted CAs so that it doesn't trust other devices' certs because they are signed by different CAs
2. each device has a globally unique domain and you type this domain into the browser
> 1. impersonate your light bulb, because they both have the same key
Definitely don't give both the same key.
> 2. impersonate my light bulb, because you and your browser can't tell the difference
Who cares if you can impersonate your own lightbulb?
> 1. each certificate is signed by a unique CA
This doesn't change either scenario. If they shared a key then custom CAs don't stop impersonation. If each device has its own key and CA then they still can't impersonate your device, and they still can impersonate their device.
> 2. each device has a globally unique domain and you type this domain into the browser
Typing in "jzhf.hue.com" sounds easier than figuring out what IP has been assigned to the device.
> Who cares if you can impersonate your own lightbulb?
For MITM: you think you are connecting to your device, but actually it's my proxy (DNS spoof, ARP spoof, TCP hijack, ...). You still get the green bar in your browser saying "über secure Philips lightbulb"; you just don't know it's mine, because the domain matches and it's signed by the same CA (assuming neither of these protections is in place).
> If each device has its own key and CA then they still can't impersonate your device, and they still can impersonate their device.
Without manual installation of my CA your browser won't accept the certificate ripped from my device.
You said in another post that providing correct address is better than per-device CA. No doubt it's more convenient in a commercial product, assuming you can solve the DNS problem somehow (which doesn't seem possible without working Internet connection or editing hosts file). From pure security standpoint though, I feel like per-device CA has an added advantage of resistance to typosquatting. But it's getting academic now, it's hard to squat if it takes buying a physical device with the right ID.
Plex achieves this with a very convoluted setup [1] - they set up a DNS server so that 1-2-3-4.625d406a00ac415b978ddb368c0d1289.plex.direct returns IP address 1.2.3.4, then they issue a single user a wildcard certificate for *.625d406a00ac415b978ddb368c0d1289.plex.direct
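The DNS half of that setup is just mechanical string munging; here's a sketch in Python of what such a resolver convention maps between. The format is reproduced from the comment's description of plex.direct, so treat the details as illustrative:

```python
def lan_hostname(ip: str, user_id: str, base: str = "plex.direct") -> str:
    """Encode a LAN IP into a per-user, wildcard-cert-friendly hostname,
    e.g. 1.2.3.4 -> 1-2-3-4.<user_id>.plex.direct."""
    return f"{ip.replace('.', '-')}.{user_id}.{base}"

def decode_ip(hostname: str) -> str:
    """The custom DNS server's side: recover the IP from the first label."""
    first_label = hostname.split(".", 1)[0]
    return first_label.replace("-", ".")
```

The wildcard cert for *.<user_id>.plex.direct then matches every IP the user's server might sit on, which is the whole trick: one cert per user, no cert per IP.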
Of course, you have to get a special deal from a CA at who-knows-what cost -- likely meaning open source projects need not apply. And you get a dependency on cloud infrastructure: if they stop issuing certs you end up in a bad place. And you get a giant, ugly URL. And you have to make a DNS lookup, so traffic leaves your network anyway.
It's an ugly solution with a lot of downsides - but I doubt the CA/Browser Forum plans to give people much choice in the matter, so it's their way or the highway :-|
Wildcard certs are not a solution to this problem. Sharing a private cert with all customers isn't what the solution does; every customer gets their own cert.
Second, Let's Encrypt has low limits of 20 certs per week. So imagine VLC added a Plex-like streaming feature: they'd need far, far more than 20 certs a day given how large their user base is.
> Wildcard certs are not a solution to this problem. Sharing a private cert with all customers isn't what the solution does; every customer gets their own cert.
That's not what I mean. I mean the same solution as described by michaelt above, that is, provide a different wildcard cert per user.
> Second, Let's Encrypt has low limits of 20 certs per week. So imagine VLC added a Plex-like streaming feature: they'd need far, far more than 20 certs a day given how large their user base is.
Remember that the limit is only on the number of new users; Let's Encrypt has a renewal exemption that lets you renew your certs even after hitting the 20/week limit. So while it might still not be enough for VLC, I don't think it's a problem for most projects. Plus you can always use more than one domain.
Pretty much any open source project that needed certs similar to Plex's would pass this limit the moment they mentioned it on HN. Why should an open source project have to register hundreds of domains just to handle this case? Someone else gave a long list of the number of devices and services running in his house that need certs like Plex's. Effectively every router, NAS, IP camera, and other networked device that exposes a web interface -- and therefore every open source project behind those: OpenWRT for example, FreeNAS, ZoneMinder, etc...
BTW, who really is Let's Encrypt, why should I trust them, why should I trust they won't disappear once plain HTTP is no longer supported by cargo-cult-security-conscious browsers?
It seems to me like providing certificates isn't exactly free, in itself.
> once plain HTTP is no longer supported by cargo-cult-security-conscious browsers
There already are people talking about such a possibility, and some even appear to believe it would be a good idea.
Of course what happens then is that without Let's Encrypt you are stuck paying other CAs to have anything published on the Web at all.
<tinfoil hat on>LE is a conspiracy of CAs to phase out unencrypted HTTP and ensure them infinite money stream.
<tinfoil hat off>Even if it isn't, LE will disappear five months after their mission is done because what the heck, why bother.
I just wonder if there is any reason to believe that users of LE are any smarter than kids accepting free candy from pedos? Maybe there are reasons but I just haven't heard them yet.
Ah, I think I'm missing an assumption you're making: that LE is indispensable (or almost) for browsers to deprecate HTTP.
Personally, I think the deprecation (as in, the warning bells and reduced priority, not full blocking) was going to happen anyway, and LE was mostly inconsequential, even if it makes the transition easier.
As for LE being a CA conspiracy, I don't think that makes much sense considering their funders (e.g. Mozilla, Google) and those funders' relationships with existing CAs (see WoSign, Symantec). But anything's possible.
> That's not a constructive argument. I don't see how they could make it work?
Give each one a subdomain that resolves to its local IP, and give it a valid certificate for that subdomain.
> extract them and become able to impersonate some Philips device.
Or the attacker could just have a real, non-impersonated Philips device. If the user deliberately points their browser at the wrong device's site, nothing can save them. This is a very different problem from securing access to the correct site.
> You would have to make all users install a trusted certificate authority tied to their individual device.
That's not true, and I don't even understand what benefit that would have.
If you have a way to deliver a CA, instead you should deliver the correct address of the device. This makes 'MitM' impossible without any downsides.
>If not for hostname validation, how would you even /know/ that you're talking to the "internal corporation service" rather than someones MITM proxy?
If you control the physical hardware, and you control all the users on it (as in a corporate network), then you can know that nothing is amiss.
>And how would you feel if people on the same LAN could see and modify all your interactions with those services?
The fact that they physically can doesn't mean they will.
For 2 & 3:
https has two modes: self-signed and certified. Certified requires that you have a public-facing domain name, such as "news.ycombinator.com". Devices on private networks can't have public domain names. Consumer devices on public networks could have domain names, but this would be very difficult to configure. Without a domain name, https must be done as self-signed.
With self-signed, when you first interact with the server, it could be anyone. Self-signed https only gives you the guarantee that any further interactions beyond this first one are with the same server as the first one. It should be clear that you can still be MITMed under this mode, so long as the attacker can intercept the first message you send after a reboot. If you're scared of network ninjas sneaking into your house in the middle of the night and intercepting your packets, self-signed https is no better than http.
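That "same server as the first one" guarantee is exactly SSH's known_hosts model, and it can be made concrete with a tiny trust-on-first-use store. A sketch in Python; the JSON file format and function names are made up for illustration:

```python
import json
import os

def tofu_check(store_path: str, host: str, fingerprint: str) -> str:
    """Trust-on-first-use: record a host's cert fingerprint the first
    time we see it, and flag any later change as a possible MITM."""
    store = {}
    if os.path.exists(store_path):
        with open(store_path) as f:
            store = json.load(f)
    known = store.get(host)
    if known is None:
        store[host] = fingerprint   # first contact: trust and remember
        with open(store_path, "w") as f:
            json.dump(store, f)
        return "first-use"
    return "ok" if known == fingerprint else "MISMATCH"
```

The whole scheme's security rests on that first "first-use" result being unattacked, which is the point made above: an attacker who intercepts the very first connection wins.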
>With self-signed, when you first interact with the server, it could be anyone. Self-signed https only gives you the guarantee that any further interactions beyond this first one are with the same server as the first one.
If I create a CA and install that CA's public key in my browser, then use that CA to sign the cert for a device on my network, why exactly will it "be anyone"?
Unrelated, but this push for HTTPS for everything isn't without downsides. Many apps gather extensive data when running on my devices and they communicate that data back to some central location, sometimes under the guise of functionality, sometimes straight up nefariously, but always with a side effect of giving that central entity a complete record of what I'm doing and often also exfiltrates my data (contacts, etc.)
Honestly, I've been tempted to set up a transparent TLS terminating proxy at my home to give myself some possibility of seeing wtf is coming and going from my network.
> If not for hostname validation, how would you even /know/ that you're talking to the "internal corporation service" rather than someones MITM proxy?
Because a crap ton of Linux software comes with its own set of bundled root CAs instead of using the system defaults. Welcome to the configuration nightmare that is setting up Anaconda, npm, AWS CLI, Python (Requests library), Git, etc. for working with something like Zscaler.
The issue is that Zscaler may have flaws, and that even if the validation is performed flawlessly, the introduced risk is not zero…
Usually one would have to trust the root CAs, but with TLS interception we also have to trust the MITM software's trust in the root CAs. This increases the attack surface instead of decreasing it.
For a security appliance it does a pretty bad job; sure, there may be reasons why you want to look into traffic, but then the aim is to control the communication. And control doesn't come for free.
> 1) internal corporation services, websites and webapps.
For this use case companies usually provide an internal CA, which signs their certificates and is trusted by all company machines. We have various customers which do this and it works just fine.
Small companies/small groups of developers have no idea how to implement and manage this, but think that it should be easy.
I've recently been approached by a group of developers to enable SSL on their internal sites. When I mentioned that this would take some time, the response was "why can't you just use LetsEncrypt?"
I replied that LE only works on external facing sites, not internal sites. The next response was "fine, why don't we make it all external facing?"
I'm still trying to explain that their CI server (Jenkins, with its history of remotely exploitable vulnerabilities), and their internal OAuth2 server should not be public facing.
Google is moving away from network-centric security and VPNs. See https://cloud.google.com/beyondcorp/ . The threat model is a bit different but you could also follow their approach and put an auth proxy in front of Jenkins and deploy it on the public Internet.
But yeah, don't expose Jenkins to the Internet directly. Last month I saw a Jenkins instance that was mining bitcoins. The worm had used one of Java's serialisation vulns to get into the box and install the miner.
Not at all, it means the proxy can be attacked over the Internet. Just like the VPN can be attacked over the Internet. Once you're past that it's the same story.
Specifically... Let's Encrypt and most other CAs no longer issue certs for domains that are not under legal ccTLDs or gTLDs.
Not so many years ago, Microsoft recommended that organisations use [companyname].local as their internal DNS zone[1], as .local will never be an external zone, so there would be no conflict. Then along came cloud integration and an increased need for edge services, and .local no longer worked well as a solution. Servers needed certs with both the local domain and a new external domain in them, which became a security nightmare. Then (about a year ago) CAs stopped issuing certs for domains that weren't sub-domains of proper TLDs, which all but killed the concept of these internal non-legal domains.
So, unless you are prepared to roll your own CA, AND instruct your internal (non-MS-domain-member) users how to manually install an untrusted cert, signing internal sites that do not have a legal domain name is a complete non-starter.
---
[1] Now of course they recommend a sub-domain of your public domain name (site1.company.com), or a reserved public domain name that you don't use externally (site1-company.com). Which is all well and good, but what about the 100s of legacy kit you've got on the old name... ~sigh~
It is pretty easy to manage your own CA: make a Debian VM, install something like XCA, and it is literally a few clicks to generate and issue certificates and set up certificate authority root certificates.
And why would I trust a company I work at to be able to sign certificates for every single website on the internet? Especially if I need to install that root certificate on a personal device?
If you need such a device to do your job, maybe ask them to provide one so you can keep work off your personal device anyway? I disconnected my phone and so forth from work email and other services some time ago, and I'm not going back!
Why would you need to install it in a personal device? Just add an exception. It's still better than plain HTTP since you can check the fingerprint against your work PC, which already validated the cert.
This fails utterly when you can't control your clients. My student society for example ran into this problem. Students bring their own laptops and installing our root certificate on all of them is infeasible (if they even would allow us to do so). As a consequence, we need to expose critical internal services on the public internet, some of which contain private user data.
Additionally, if you let anyone bring their own device in a diverse semi-public environment like a school, you owe it to the students and faculty alike to provide them with some protection against creative types placing fake wifi access points in busy places, trying to play man-in-the-middle for any credentials and other stuff sent to your local services. HTTPS does that.
Using a proper FQDN for each service only makes everything easier to maintain.
A public domain name costs the price of a coffee (and less than a raspberry pi) and you can get a certificate for free with Let's Encrypt. There is really no reason to resort to a private CA unless you want to MITM your client's connection.
You don't need to expose your server to the public internet to use let's encrypt. I use DNS authorization and it works perfectly.
Even if you could, I would highly recommend against doing that, given that this would grant you access to every HTTPS connection that isn't HPKP-secured.
I actually have all webservices in my home network secured by https. All you need to do is get a cheap VPS, install nginx and tinc, and then proxy /.well-known/acme-challenge/ to your internal servers. Either set up domain or IP hijacking so the public IP is routed inside your LAN. Done.
If I can do this for me and my cat in my spare time, you can do this for your university.
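The proxy step of that setup is a one-line location block in nginx on the VPS. A hedged sketch of the config fragment; the server name and the internal address are placeholders for whatever your tinc tunnel exposes:

```nginx
# On the public VPS: answer Let's Encrypt HTTP-01 challenges by
# forwarding them over the VPN tunnel to the internal host.
server {
    listen 80;
    server_name nas.example.com;      # public name the cert is issued for

    location /.well-known/acme-challenge/ {
        proxy_pass http://10.0.0.2;   # internal server, reached via tinc
    }
}
```

The certbot client then runs on the internal machine as if it were publicly reachable, and the issued cert never has to leave the LAN.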
If you can't control your clients, maybe use a captive-portal-style landing page with a link to install the local certificate, or something along those lines. It's also useful to have one wireless network (SSID/VLAN) for BYOD that just has internet access, and as such doesn't need the certificate, and one that has access to internal services, which does.
Western Digital solves #2 and #3 for the MyCloud EX4 by somehow issuing real browser-trusted certs to each device for the domain device<mac_address>.wd2go.com using their intermediate CA "Western Digital Technologies Certification Authority" (https://www.censys.io/certificates/eb94f8e2c8d0c8338bb8ba40e...), which is in turn issued by COMODO. Now, not everyone has an intermediate CA locked to their own domain, but maybe that's the issue? X.509 has the ability to restrict CAs to particular domains (e.g. see the "path constraint" on WD's CA in the info link above), so if it was easy to be issued a CA cert for your own domain, couldn't that be a potential solution to this problem?
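The per-device naming WD uses is easy to reproduce mechanically; here's a sketch of the scheme as described above. The exact label format is inferred from the comment, so treat it as illustrative:

```python
def device_hostname(mac: str, base: str = "wd2go.com") -> str:
    """Derive a globally unique, certifiable hostname from a device MAC,
    e.g. "AA:BB:CC:DD:EE:FF" -> "deviceaabbccddeeff.wd2go.com"."""
    label = "device" + mac.replace(":", "").replace("-", "").lower()
    return f"{label}.{base}"
```

Since the MAC is globally unique, each device can get its own real cert for its own name, which is what makes this avoid the shared-key impersonation problem discussed earlier in the thread.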
1) Internal company webapps: just install the company root cert and create properly signed certs under the corporate internal CA. Installing the certificate across the network is easily automatable on Windows, OSX and Linux. The only issue is Firefox, as it uses its own trust store. Any senior admin who can't figure it out with the resources available (plenty of information online) should be replaced; it is not that hard.
2) With regards to the Raspberry Pi: anyone who can write code can also learn to create their own CA. The only difference is that there is probably no automation for adding it to the trust store; however, it is only a 2-3 click install in most cases.
You are assuming that you control all client machines. Unfortunately that is not always possible, and it is often far from being the admin's technical decision.
The admin usually can't fire the upper management.
It's possible to purchase certs signed by pre-trusted CAs extremely cheaply ($9/year/name) that can then be used on internal services. This is not a difficult problem to solve.
You can't buy certs for non.public.domain.local. So you must control the CA list on all client machines and use a self-signed cert.
The assumption that there is a solution to the problem does not take into consideration that sometimes these changes are not possible.
If it were up to me, everyone would be using public domains with split DNS zone views for public/private environments, but Microsoft's DNS service doesn't even support that.
Also, why do I get a certificate warning that looks the same for an IP (https://192.168.1.1), for which you cannot buy certificates? What about 10.x.x.x or even 127.0.0.1? As far as I know you can also no longer purchase a certificate for public IPs.
Just watch how consumer router manufacturers are going to work around this: by either re-educating their users to ignore the red warnings, or by only selling cloud-managed and locked devices, which sucks for everyone.
If you're in a corporate intranet environment, you ought to have an intranet CA, as well as the means to distribute the CA's certificates securely to all deployed machines within the intranet.
I dislike intranet CAs because they allow your company to intercept and play MITM with every other website you visit (except for certificate-pinned websites)...
I'd prefer that Chrome write "insecure" if there's a non-public CA in your chain.
1) Every employee of every company needs to have some level of trust in their company. They trust their company to make payroll, and they trust their company at a reasonably high level to follow local laws and regulations, including reporting threats and violations against their physical safety. That doesn't mean that employees should trust their employers with their deepest darkest secrets and life savings, or that there aren't different types of trust; it just means that trust is a spectrum. Arguing that you should fully trust every one of the shadowy public CAs pre-installed in your OS and browser, which you know absolutely nothing about and have not personally vetted nor have personal relationships with, but not the intranet CA your employer operates, is rather clearly an irrational assertion.
2) If you decide not to trust your employer's CA, and your employer has provided you with a machine to access intranet sites, then you clearly cannot trust accessing Internet sites for personal reasons on your employer-provided device, not because the CA cannot be trusted but because it's irrational to distrust the CA but also trust the employer-provided device, which may have a keylogger and other tracking software installed.
3) If you decide not to trust your employer's CA and your employer operates a BYOD environment, then you are free to bring a separate device for work purposes, on which you trust your employer's CA but refrain from accessing personal accounts, instead only accessing personal accounts on devices which your employer doesn't know about.
Minor nitpick regarding the "except for certificate-pinned websites" part:
HPKP does not validate pins if the certificate chain terminates in a user-installed trust anchor like an intranet CA. The RFC [1] leaves the behavior undefined (see Section 2.4), and I'm not aware of any popular implementation that would honor the pin in the case of a user-installed certificate.
This can be incredibly frustrating if you're trying to protect against MITM attacks; but at the same time, I can follow the browser developers' line of thought that goes "if we were to enforce it, users would just jump ship to the next available browser".
It depends on what they're used for but I agree that many companies seem to spy on their employees this way.
Commonly I only trust internal CAs for specific internal websites and don't allow them to MITM just any website. On occasion this has meant not being able to use the company's wifi and dealing with 4G instead.
Sorry but I have to call BS on that. I always bring my own device and never have bypassed certificate errors. A company will either have certificates for their own apps (commonly running on specific subdomains of their own domain) or have their own internal CA that you can trust on your own device.
Certificate errors on "internal" web apps are just as bad as on the rest of the internet.
If you trust a company's internal CA then aren't you trusting them to issue certificates for every website and not just their own? Isn't that dangerous?
All browsers can tell you which certificate signed the one in use. Unfortunately a recent Chrome UI change made this a pain to get to in Chrome; in the other browsers it's just a click on the lock in the address bar. It soon becomes obvious if the company is MITMing all SSL connections.
Because a normal person will "just click the lock in the address bar" on every single HTTPS website he visits to make sure his company isn't MITMing him, right?
You can get a certificate for a local IP address. If you own foo.com, you can get a cert for local.foo.com from letsencrypt that points to 192.168.1.5. Obviously you can't use HTTP verification, but you can use DNS verification, point _acme-challenge.local.foo.com to a public server and run certsling or another acme client on that server.
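For illustration, the DNS side of that setup might look like the following zone snippet (all names and addresses here are hypothetical; acme-helper.foo.com stands in for the public server running the ACME client):

```
; local.foo.com resolves to a private address, but its DNS-01
; challenge record is delegated to a public host that the ACME
; client on that host can answer for.
local.foo.com.                  IN A      192.168.1.5
_acme-challenge.local.foo.com.  IN CNAME  acme-helper.foo.com.
```

Let's Encrypt only needs to see the challenge record publicly; it never has to reach 192.168.1.5 itself.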
> It looks much more secure to serve a good'ol HTTP site with no encryption at all.
Not for long it won't: "Eventually, we plan to label all HTTP pages as non-secure, and change the HTTP security indicator to the red triangle that we use for broken HTTPS."
>1) internal corporation services, websites and webapps.
If this is an internal corporation service, why don't you bake your own self-signed root certificate into every computer in your network? Then you can generate as many certificates from your own root as you like, and they'll all magically be valid on corporate computers.
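A minimal sketch of that flow with openssl (the company name and hostname are made up; a real deployment would also want name constraints on the CA and proper key protection):

```shell
# 1) Create the corporate root CA (self-signed, 10-year validity)
openssl req -x509 -newkey rsa:2048 -nodes -days 3650 \
  -keyout ca.key -out ca.crt -subj "/CN=Example Corp Internal CA"

# 2) Generate a key and CSR for an internal service
openssl req -newkey rsa:2048 -nodes \
  -keyout wiki.key -out wiki.csr -subj "/CN=wiki.corp.example"

# 3) Sign the CSR with the CA; modern browsers require a subjectAltName
printf "subjectAltName=DNS:wiki.corp.example\n" > san.ext
openssl x509 -req -in wiki.csr -CA ca.crt -CAkey ca.key \
  -CAcreateserial -days 825 -extfile san.ext -out wiki.crt

# 4) Verify the chain; ca.crt is what you distribute to client trust stores
openssl verify -CAfile ca.crt wiki.crt
```

Only ca.crt needs to be pushed to client machines; the CA key stays locked away and every internally issued cert then validates automatically.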
> 1) internal corporation services, websites and webapps.
That one's straightforward. Set up a corporate CA and use it for your internal certificates. Or, operate your corporate services on your real domain name, and use real publicly-trusted certificates - whichever is easier.
Clients need the CA cert in their trust store, not servers. Clients get it by the act of enrolling into an AD or FreeIPA domain.
On the docker side (or rather on the reverse proxy that provides access to them) you are solving different problem and it does not matter whether the key/cert is provided by your internal CA or third-party one.
The problem is you can't do this for every Docker image you have, particularly in a large organization. It defeats the whole point of having base images if you need to "include" Dockerfiles. If Docker had a way to build from multiple base images, that might fix the issue, but I believe they removed that bug/feature a while back.
Have every local machine's name resolve, publicly, to an internet-routable machine that lets you obtain a certificate; then, on your corporate DNS, serve the address of the local machine it's actually supposed to go to, and you can use that cert internally.
You could use Let's Encrypt for this and have free certs.
> 1) internal corporation services, websites and webapps.
When firefox started throwing warnings on my intranet sites that are accessed via OpenVPN i did this:
1) move all internal sites to valid FQDN
2) push DNS settings over OpenVPN so that clients resolve the names with the internal DNS service and don't leak names. Names within the VPN resolve to internal IP addresses.
3) set up a catch-all website, point the wildcard *.company.name domain to it, and make letsencrypt certs for the internal domains
4) copy the valid certs to the intranet webserver. Done. Everything working ok.
I suspect the big reason it hasn't happened yet is that it would require ISPs to replace tens of thousands of dollars of hardware, and it would increase support requests in the short term ("site XYZ is broken but it's fixed when I turn off IPv6").
Is the hardware you're talking about the network equipment controlled by the ISP, or the routers and modems in customers' homes? I'd be surprised if the former hadn't been IPv6-ready for many years now, but I can imagine many customers are still using ancient hardware left over from when they first signed up for service.
If you have cable, a number of providers have been making customers upgrade to the newest modem. A single old modem that doesn't support docsis 3 will slow down everyone in your neighborhood.
Really? Interesting... can you elaborate/provide some reading?
I'm a software engineer with a smidge of basic networking experience so not completely clueless, but definitely inexperienced with DOCSIS and this sort of residential networking stuff.
For real. I've seen it more these days. A friend in a po-dunk town in Northern California I visited had IPv6 from Comcast. I was kinda shocked since my fiber Gigabit ISP in Seattle didn't have IPv6 rolled out to residential customers yet.
It seems more important than ever to roll out IPv6, since at some point IPv4 addresses are going to become incredibly scarce. Imagine a permanent/reserved IPv4 address on DigitalOcean/AWS/Vultr going from a few dollars a month to $70/month or $100/month. Forget network neutrality; regular people won't even be able to host their own content in a way everyone else can reach.
There's a real and immediate value for users if HTTPS is used. I don't see the immediate value for end users when IPv6 is used. Worst case, the lack of NATs makes tracking easier.
IPv6 is certainly necessary but nothing users have to worry about.
HTTPS has Google's bully pulpit behind it, and even though HTTPS has its issues, they pale into insignificance compared to IPv6's "problematic" design choices.
This is true. I can intercept your username/password during login by being in close proximity to you if you are logging into their website over plaintext HTTP. Not possible if it is protected by TLS (HTTPS).
Well that's what the S in HTTPS stands for. I am pretty sure that anybody who knows the difference between HTTP and HTTPS also knows that "security" is not binary.
I hope they did some user testing to see how people actually behave in the presence of such warnings but in my experience it does nothing. Worse, it's in an environment that is already rife with little messages in corners trying to get your attention (ads) so users may be more "blind" when browsing than usual.
The success of "Let's Encrypt" suggests that a key part of the problem wasn't a lack of user complaints about security. Rather, it was a lack of a sane model (both technically and economically) for setting up and maintaining certificates. In the end, people maintaining sites already had 100 other things to worry about and weren't going to get around to HTTPS with anything less.
You can setup a Let's Encrypt certificate on your server and use Full SSL (strict). It will also make switching away from Cloudflare in the future easier.
That's right. Cloudflare doesn't try to lock people in with artificial constraints. Use Let's Encrypt for your origin. Very soon we hope to support Let's Encrypt completely for our main certs (once they have wildcard support).
CloudFlare's "Flexible SSL" (https://www.cloudflare.com/ssl/) offers encryption/authentication from CloudFlare's server to the client, but none from the origin server to CloudFlare's. Which means that is a vector by which the content could be sniffed or modified in transit.
It's a "better than nothing" option, as there are a slightly higher number of actively exploited attack vectors that apply to the client to CDN connection than the CDN to origin server, such as "free" wifi that injects ads, malicious ISP DNS, and the like. But it's not actually secure, as the origin server to CDN connection could be tampered with, and just because there are fewer active attacks that would be likely to affect that connection right now, doesn't mean that someone won't come along later and hijack such a connection.
CloudFlare offers other TLS options that do include encryption and authentication between the origin server and CDN, but they do require that you set up a certificate on your server, so if all you're trying to do is enable TLS (and don't care about the CDN), just installing a cert on the origin server and using TLS is probably a simpler option that using CloudFlare.
We encourage users to use Strict mode which requests and validates a certificate from the origin.
It's great that shared web hosting providers and others are starting to make it easy to acquire and install a certificate, but that hasn't always been the case.
EDIT: We also provide an API that will provision a free certificate for your origin: https://blog.cloudflare.com/cloudflare-ca-encryption-origin/. The certificate is optimized for communication with our edge (essentially just as small a chain as possible, as we don't need the intermediate to walk to the root). Either that or use certbot from EFF/Let's Encrypt.
But most people don't want to automate, or don't know how. HTTPS is supposed to be used on every site, but not every site is set up by a developer. That's a basic flaw that will make these kinds of HTTPS-only policies problematic.
Not really. You put nginx in front of Varnish and terminate your TLS there. It's not that much more work. Hint: make sure you have an X-Forwarded-For entry in your nginx config. Your root location would look something like:
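A sketch of what that nginx config could look like (ports and certificate paths are illustrative; Varnish is assumed to listen on 127.0.0.1:6081):

```nginx
server {
    listen 443 ssl;
    server_name example.com;

    ssl_certificate     /etc/letsencrypt/live/example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;

    location / {
        proxy_pass http://127.0.0.1:6081;   # Varnish
        proxy_set_header Host $host;
        # Preserve the real client IP for Varnish and the backends
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto https;
    }
}
```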
> Any host that still requires a dedicated IP for https is woefully out of date.
May be true in a perfect world but this is my web host (bluehost):
>> Note: Since SSL Certificates are Domain/IP specific, you must first Purchase a Dedicated IP before purchasing or having an SSL Certificate installed on your account. They will NOT work with a shared IP address.
I'm pretty sure a lot of other hosting services still go that route.
I'm not sure whether they just don't support SNI yet, or whether they are artificially enforcing this restriction so that they don't have to deal with IE6 not working.
It's sadly not just the servers but also the clients... still you're right about that being a really long tail at this point.
Edit:
A quick search seems to indicate that the only typical consumer-facing system that doesn't support SNI is IE on Windows XP. It's pretty safe to have a catch-all bucket that tells such users to use a modern browser that includes security patches, or to upgrade to a different OS.
It's not. It's supposed to use a publicly routable address. Private addresses were an unfortunate hack that got massively overused when people would've been much better off putting the same effort into using IPv6.
Down this path lies the IoT security apocalypse. Imagine every cheap, unupgradable IoT lightbulb with a publicly routable IP address. If IPv6 were widely adopted tomorrow, I'd still run my home LAN services behind a NAT.
Addressability != access. By all means firewall your devices (though I'd strongly recommend something more granular than a perimeter firewall - particularly in the days of insecure IoT devices, an attack could easily be coming from inside the network), but they can still have proper addresses.
Delegate address space the whole way through your internal network - you should get a large enough block from your upstream ISP that this is fine. If you're too big for a single upstream ISP you should have your own AS number and participate in internet routing.
If this is truly disconnected from the Internet then yeah HTTPS is unsuitable - it fundamentally relies on the idea that there's a central, universal definition for who owns "foo.com" so that users can rely on talking to the correct "foo.com".
Easy. Get a certificate for some name, say foo.example.com. Point foo.example.com, in your internal network's resolver, to 192.168.1.1. Use foo.example.com in the browser instead of 192.168.1.1. Done.
Who is supposed to do this exactly? Me the consumer who buys a router, or me the manufacturer of said router? (router/access point/whatever it's called...)
OK, and (1) how are you supposed to trust that the manufacturer won't get hacked one day (or whatever) and the IP address won't change to something external/malicious? (2) what if I don't have an internet connection and don't have a DNS server on the gateway that can reply to such a query?
1) If you trust them to write secure router firmware you can trust them to keep their HTTPS certificates safe - the latter is a lot easier than the former. 2) Router intercepts all DNS requests and responds with its own IP, and responds to HTTP calls with HTTP 428, like already happens and like OSes already deal with appropriately.
> (1) If you trust them to write secure router firmware you can trust them to keep their HTTPS certificates safe
wha? uhm, no. Just because I trust you to do something correctly once that doesn't mean I trust you to keep something else safe for all eternity.
> 2) Router intercepts all DNS requests and responds with its own IP
Actually, what if I have multiple of these routers in (say) a chain? I have to go physically find the one I need so I can connect an Ethernet cable to it and bypass all the others? I can't just connect to the one I want directly by its IP address?
> Actually, what if I have multiple of these routers in (say) a chain? I have to go physically find the one I need so I can connect an Ethernet cable to it and bypass all the others? I can't just connect to the one I want directly by its IP address?
Ah, I misunderstood, thought you were talking about a "captive portal"-type use case. If you're talking about having the router host some config interface like any other webserver then I'd say like any other webserver it should be able to generate its own certificate and CSR for a hostname you configure it with, and you submit that to your internal CA, or directly to let's encrypt or similar provider.
Er, what "internal CA" are you even talking about? Like imagine my grandma gets Comcast, her internet is not working, and I tell her to go to 10.0.0.1 to see if it shows anything. Suddenly she's supposed to get an HTTPS error warning her there's an MITM attack? Or am I supposed to tell her to install a root cert in her machine and every other machine she might connect in the future?
Or heck, what if I'm just connecting to my damn scanner in my network? Or what if it's a guest trying to do that? "Sorry auntie, you'll have to install my self-signed cert as a root cert before you can use my scanner's web interface to scan your pic"?
If your router or scanner is to be accessible over the network then it needs its own name and it needs to be able to certify that that's its name. Anything else is just too dangerous. A user expects addresses they enter into the browser to mean the same thing on any connection; having a few "magic" addresses that go one place on one network and another place on another network is a recipe for users getting hacked.
For the consumer use case, maybe the router gets a unique default address in the manufacturer's namespace (router12345.linksys.com) and ships with a certificate for that name and that name printed on the box, just like we do for the admin password. Since it's a router it's probably running the DNS for your network (at least in the consumer use case) so it can route requests for itself correctly. For scanners or similar, the router would need to update its DNS when the scanner joins the router's network - a lot of routers already do this within the local domain based on DHCP registrations, so this ought to be simple if it's not already done. Crucially this part isn't security-critical - if you try to print a confidential document on your network printer while you're on your neighbour's wifi, the worst their router can do is not route you, because an evil endpoint won't have your printer's certificate.
> If your router or scanner is to be accessible over the network then it needs its own name and it needs to be able to certify that that's its name. Anything else is just too dangerous. A user expects addresses they enter into the browser to mean the same thing on any connection; having a few "magic" addresses that go one place on one network and another place on another network is a recipe for users getting hacked.
...a recipe for users getting hacked? on a home network? by whom exactly? my family? The router is already firewalling the entire network against the internet. Can you describe the exact attack scenario you're imagining?
User tries to print a confidential document. Prints it on their neighbour's printer, or a printer somewhere on the internet, instead.
User tries to grant their soundsystem access to their google music. Gets cut off and asked to reconnect as they're walking out the door. Ends up granting the cafe's soundsystem access instead, and maybe that gets combined with another exploit to give someone else at the cafe access to their documents.
User is in the habit of using the same username/password everywhere, enters it into http://192.168.1.1 on some hostile network.
User knows to use a password manager, but password manager is happy to put their router password into some other router's login; attacker uses this to subvert their router
Thief replaces the home webcam with one that supplies a dummy image, takes their time to clear the place out.
You're not describing the attack, you're describing the damage that could be done after an attack has already compromised the system. I'm saying describe the exact attack scenario, i.e. how any of these could be made possible in the first place.
Impersonate popular home devices on public wifi networks, or on the Internet. Exploit that "push the button on the router to allow this device to join" thing that was popular (but vulnerable) a few years ago. Subvert an insecure IoT device on the target's network. Attack from their friend's compromised device when they connect to the target's wifi, or just use their credentials. Splice into ethernet cable where it runs through a maintenance floor or a cabinet on the outside of the building. Once you're on the network either ARP spoof or just register with the router under the same name (perhaps after DoSing the legitimate device).
The network is not completely public, but even a home user's network is too weakly-defended to just blindly trust to any device connected to it.
Let's pretend this is an ideal world; could ISPs just automatically assign DNS entries to their customers' IP addresses? The router could figure out its public name via a reverse DNS lookup, then complete a Let's Encrypt / ACME challenge for a certificate for that domain name. (I have no idea how the customer ends up knowing the domain name, though. If ISPs are supposedly so eager to "differentiate" their product, hell, an easy-to-use interface for full control over <yourname>.ISP.com would actually be a decent feature; but then, I don't know what would make non-hackers care about that.)
Users are tuned to seeing the green icon and "secure" as trustworthy; that's over a decade of UX conditioning. Suddenly seeing "Not secure" will definitely have an impact on engagement. (I know we're both guessing -- we'll have to wait for concrete numbers to make a call.) I'd put the engagement change at around 10%, and 20% for any site that is login/payments related.
They have been seeing that for months now (I see it often too, on sites that are slow to change). I have not heard anyone complaining about loss of traffic. It's becoming transparent to the users, imho, like the EU cookie prompt.
How about a warning in Chrome that says "You're about to use Chrome to visit this website, and thus send everything about yourself to Google to do whatever they want with", for all websites, starting in Chrome ~67?
Say you have a plain Debian 8 install, running a typical LAMP stack serving a single domain.
If you want to make it use a LetsEncrypt cert and serve the domain over HTTPS - what would be the minimum number of steps on the command line to make it do that?
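For an Apache-based LAMP stack, the certbot Apache plugin is probably close to the minimum. A sketch (package names and availability are assumptions; on Debian 8, certbot came via jessie-backports):

```shell
# Install certbot with the Apache plugin
apt-get install -t jessie-backports python-certbot-apache

# Obtain a cert and let certbot rewrite the Apache vhost for HTTPS
certbot --apache -d example.com -d www.example.com

# Renewal is scheduled automatically; a manual dry run to confirm:
certbot renew --dry-run
```

So roughly three commands, assuming DNS already points at the box and port 80 is reachable for the HTTP-01 challenge.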
The company I work for has 3000+ websites served from a single application. Whilst we offer everyone HTTPS when they log in (everything's on a subdomain at this point, so a wildcard cert does the trick), we don't have HTTPS enabled for general access to their websites.
I've pushed for the 3000+ websites to be served over HTTPS, but there's serious reluctance from the sysadmins about creating and managing such volumes of certificates. So I'm guessing it's easy if you only have a handful of services, but at volume it becomes a little trickier. If there's an easy way to solve this problem, I'd love to hear it.
I have been anticipating this but have had better things to spend my limited time on. I have more than 135 sites I need to convert to https and they are load balanced. I don't think letsencrypt handles load balanced sites yet. My management is against wildcard certs. This might push them over the edge in favor of wildcard certs.
Do you have a URL for documentation that shows how to set up the certs? Our certs are on each web server and not in the load balancer. I suppose we could also put the certs into the load balancer, but I don't have control over that.
HTTPS gives your ISP less of your information to collect, analyze and sell to advertisers which in turn protects the value of Google's information about you. I think the changes to Chrome are well-intentioned, but can't help but smile at how this side-effect favors Google's business.
It's more like some endpoints are 95% secure whereas cloudflare flexible ssl is 5% secure. Conflating those as "not 100%" is far more misleading than rounding them off to "secure" and "not secure". If https:// doesn't mean traffic is encrypted as it passes over the public internet then it means nothing, and that's what happens when you use cloudflare.
Those comments are saying that because the last hop (Cloudflare → Github) will still be unencrypted. You may disagree that it doesn't make it insecure, but that doesn't mean they're uninformed.
The FULL option in fact requires HTTPS even for the last hop. It just accepts any certificate which isn't as good as only accepting a valid certificate. But the last hop doesn't have to be clear-text any more.
How do I do that with GitHub Pages? In my case (glowing-bear.org), I'd like to tell Cloudflare to accept valid certificates for glowing-bear.github.io (or *.github.io), because that's the origin certificate. But I haven't found an option to do so.
Github has no provision for this. So it's more a Github issue than a Cloudflare one. The latter has the Full (but not strict) SSL option for precisely this situation, which is arguably better than going with Flexible SSL.
Right, but if someone can snoop the connection between Cloudflare and your server, chances are they are in control of some intermediate machine and can MITM, injecting their own self-signed cert.
Allowing transparent downgrades of self-signed certificates would be a big security hole. For example, suppose I add the following to my website:
<script src="https://cdn.example.com/awesome.js">
By doing so, I am requiring the script to be served securely. If we allowed self-signed certificates, anyone could generate a self-signed certificate for cdn.example.com and serve a malicious script to my users.
> Allowing transparent downgrades of self-signed certificates would be a big security hole.
Automatically generated self-signed certificates should have replaced all plaintext HTTP 15-20 years ago. The big security hole was allowing passive surveillance and ISP-level page injection vandalism[1]/attacks[2].
The web could have been almost completely protected from several classes of attack a decade ago, but for this stupid insistence on conflating protection from third-party eavesdropping or corruption in transit with authentication of the server. These are entirely separate problems that do not need to be solved at the same time.
> I am requiring the script to be served securely
You're requiring it to be served over HTTPS, which doesn't necessarily mean "secure", because "secure" covers several different goals. You're also strongly trusting the PKI system. Do you trust all the certificate authorities your browser includes by default?
Of course, because HTTP still exists, the initial request for the HTML that contains your <script> tag could be sent plaintext and thus modified during transit in many different ways.
> serve a malicious script to my users.
That can still happen without proper pinning, or if the local browser downgrades the request back to HTTP. Unfortunately this isn't particularly uncommon with corporate/school proxy, in-flight wi-fi services that forge certificates[3], and Superfish-style junk all removing both the encryption and the authentication provided by TLS.
Regarding your specific example about loading Javascript referenced in an HTML document's <script> tag, the solution is to validate the data, not the server. The valid server can still send incorrect data. If you include hashes of a page's subresources[4], the browser can validate the integrity of the file it received.
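For illustration, an SRI integrity value is just a base64-encoded digest of the file's bytes. A sketch of computing one (the script contents here are made up):

```python
import base64
import hashlib

def sri_hash(data: bytes) -> str:
    """Compute a Subresource Integrity value for use in an
    integrity="..." attribute on a <script> or <link> tag."""
    digest = hashlib.sha384(data).digest()
    return "sha384-" + base64.b64encode(digest).decode("ascii")

script = b'console.log("hello");\n'
# The result goes into:
#   <script src="..." integrity="sha384-..." crossorigin="anonymous">
print(sri_hash(script))
```

The browser recomputes the same digest over whatever bytes actually arrive and refuses to execute the script on a mismatch, regardless of how the serving connection was authenticated.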
Of course, even with self-signed certificates replacing plaintext HTTP, ISP injection/vandalism would be really easy. The ISP would terminate the TLS, inject some annoying stuff, and then re-encrypt with another auto-generated certificate. Without verification by public CAs, the client could never detect the MITMing.
The client can detect a change to a new certificate. Obviously self-signed certificates have problems. The main point is that they do protect against some attacks, and raise the complexity/cost: running a MITM takes a lot more time, effort, and resources than simple deep packet inspection of plaintext packets.
> Without the verification by public CAs
While there isn't much support in current client software, verification doesn't have to be from a CA. In an ideal world, your bank (or whomever) could hand out some sort of dongle (or maybe as a QR (or similar) code on a card?) that had a certificate that could be used for direct verification of their internet services independent of any CA, or in combination with CA verification.
> The client can detect changing to a new certificate
Not if the client has never visited the site before and doesn't have a known-good self-signed certificate pinned locally to check against. And if the client did have such a certificate pinned, revocation by the legitimate owner of the self-signed certificate becomes impossible, since the client won't trust the new self-signed certificate being presented to it, without out-of-band communication of said intent to revoke and manual intervention on the client side.
> dongle
Again, the problem is certificate revocation. Physical dongles cannot easily be revoked. Corporate intranets deal with catastrophic compromise of their internal CA certificates by re-imaging all corporate machines with new certificates and restoring from off-site backups where needed - prescribing that for customer machines is impossible.
PKI is like monitoring - it must rely on external services to be dependable and effective.
I already said that self-signed certs are not going to solve every problem. They solve some problems, which is better than the plaintext HTTP that we should have retired over a decade ago. Obviously you should validate the server - probably through the usual PKI methods - whenever possible.
> revocation
Revocation would happen in the usual manner. The dongle is just a minor example of another way to provide validation. Obviously each methods will have their own benefits and limitations. I'm not saying we should replace PKI with physical dongles; I'm suggesting that alternative (non-PKI) methods are possible and they can not only coexist, they can also corroborate each other.
> That can still happen without proper pinning, or if the local browser downgrades the request back to HTTP.
What? What kind of browser would downgrade the request to HTTP?
> Unfortunately this isn't particularly uncommon with corporate/school proxy, in-flight wi-fi services that forge certificates
Which require a cert signed by a CA already in the client's machine.
> Regarding your specific example about loading Javascript referenced in an HTML document's <script> tag, the solution is to validate the data, not the server. The valid server can still send incorrect data. If you include hashes of a page's subresources[4], the browser can validate the integrity of the file it received.
If you don't have HTTPS, how can you be sure that the SRI hash wasn't tampered with?
Sorry, that should be the browser's local environment, not just the browser itself. An obvious example is sslstrip.
Right. Which would still work if all HTTP connections were replaced by HTTPS with self-signed certs, as you proposed. sslstrip, which must have MITM control to do that downgrade, would just terminate the connection and re-encrypt it with its own cert.
Which is why PKI HTTPS everywhere is the reasonable solution.
Of course. That happens.
Right. Nothing can protect you if you deliberately undermine it.
Loading static resources from other domains is very common, especially from ad networks.
Right, and SRI is certainly useful, but you still need PKI HTTPS on every site to bootstrap it. And since the only reason to avoid HTTPS is to avoid the encryption penalty, automatically generated self-signed certificates wouldn't be used anyway.
> what about self signed certificates? wouldn't it be great if these websites were treated like http ones
In the not too distant future, they will be, though perhaps not in the way you had in mind: HTTP sites will start showing similar indications of insecurity, just like sites with broken HTTPS.
But SSL does help prevent non-state actors from accessing my Facebook feed while I'm connected to free Wi-Fi? (Remember the FireSheep days? They still exist for so many websites, up to 40% of the Internet...)
Sure, by using free Wi-Fi, or an ISP, or any number of other scenarios, I'm giving up privacy. I understand that, even if non-technical folks don't. But you can't argue that SSL hinders things, even if it doesn't go far enough.
It even prevents most state actors from accessing them. There are probably only a handful of state actors that have the means to get access to your Facebook data. If you travel in any other country, the host state won't be able to break SSL or to get access to your data otherwise.
And that can be very important if you travel in countries where a wrong post in your timeline could get you in trouble.
You're being downvoted because issuing a cert for MITM on an "undermined CA" is, in many cases, a very quick way to get that CA burned. If you have a compromised resource that valuable, you're going to use it very very sparingly.
What's your recommendation? I don't think anyone is saying this is the be all and end all. I'd rather be susceptible to hacking by nation-states while protecting myself from all the 1337 hax0rs out there.
Browser vendors can guarantee privacy the same way that the best (independently audited, open-source) encrypted messaging clients can guarantee privacy. It's not rocket science, but it's never been a goal of browser vendors.
I can use Signal to communicate to another user without being subject to surveillance. But I cannot use my browser to chat with another browser user with the same degree of immunity to surveillance.
Let's say some xyz website's server was just another Signal client. I can talk to it from my Signal client in a way that is not subject to State surveillance. But I can't do that using HTTPS as my security model.
If I acquire a SIM card that takes over your friend's phone number, I can install Signal and sign up as your friend. Then when you use Signal to communicate with your friend, you are actually communicating with me. How is that different to HTTPS?
Interesting, ok. From the above, I am not sure what you are suggesting is more secure about Signal compared to HTTPS, though. In both cases you start out with an address (phone number or domain) which you want to use to communicate with a third party securely. You require a trusted third party to link the address to an identity so that you know you are communicating with the right person. With HTTPS you are trusting the CA, with Signal you are trusting Signal. The CA will have checked for ownership of the domain and Signal will have checked for ownership of the phone number. After you have established secure communication with the end party, you have equivalent guarantees of privacy for future communications with them using either protocol.
I think you're mistaken. I believe that the communication between your computer and a web server is 100% as secure as a Signal communication, assuming that your browser and the server are using the latest TLS standards and large keys.
What is insecure about a browser is that you can't be sure who is at the other end of the communication. That includes man-in-the-middle attacks, which are basically a variant of the case where you are not talking to the person you think you are.
That problem with Signal is solved by proving the identity of the person you are talking to out-of-band with the application, for example by linking up with your friend while you are standing next to each other, or verifying the link by a phone call.
I hope you can see that there's no directly analogous way to secure a web session for the consumer web, since the process of verifying the identity of the server manually would never scale. The solution to this problem -- ta da -- is to create authorities who you trust who will independently certify the server's identity -- certificate authorities, if you will. And yes, once you add CA's in the middle, the line of trust becomes an n-tree of trust, and you have a much higher risk of things going wrong.
If you want a fully secure Web experience -- delete all CA's except any that you independently audit. Then manually validate the SSL certs of every server you want to talk to and add it manually to your cert store. Now you have a Signal-like experience for the web. Good luck.
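The manual-trust model described above amounts to certificate pinning. As a rough sketch of what that looks like in practice (the host and expected fingerprint are placeholders; fetching a live certificate requires network access):

```python
import hashlib
import ssl

def pem_fingerprint(pem: str) -> str:
    """SHA-256 fingerprint of a certificate given in PEM form."""
    der = ssl.PEM_cert_to_DER_cert(pem)
    return hashlib.sha256(der).hexdigest()

def server_fingerprint(host: str, port: int = 443) -> str:
    """Fetch the server's leaf certificate and fingerprint it (needs network)."""
    return pem_fingerprint(ssl.get_server_certificate((host, port)))

# Placeholder: in the manual-trust model you would record this fingerprint
# after verifying the server's identity out-of-band, then refuse to talk
# to the server if it ever changes.
EXPECTED = "<fingerprint verified out-of-band>"

# Usage (requires network access):
#   if server_fingerprint("example.com") != EXPECTED:
#       raise SystemExit("fingerprint mismatch - possible MITM")
```

This is the Signal-style verification step, done by hand, for one server; the point of CAs is precisely that nobody wants to do this for every site they visit.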
How is it a scare tactic? It's the reality, there's not much more room for an unencrypted web nowadays. Troy can feel free to advertise whatever he wants, and what he says is still true.
It's his MO. Every article of his is decently researched with lots of fluff, always points to another article of his, and always ends in some kind of conversion goal.
True, but a man's gotta eat, and security is more and more critical these days. I don't begrudge him making a living while beating the drum of be-more-security-aware.
A modus operandi is someone's habits of working, particularly in the context of business or criminal investigations, but also more generally. It is a Latin phrase, approximately translated as method or mode of operation.