> Allowing transparent downgrades of self-signed certificates would be a big security hole.
Automatically generated self-signed certificates should have replaced all plaintext HTTP 15-20 years ago. The big security hole was allowing passive surveillance and ISP-level page injection, whether vandalism[1] or outright attacks[2].
The web could have been almost completely protected from several classes of attack a decade ago, but for this stupid insistence on conflating protection against third-party eavesdropping or corruption in transit with authentication of the server. These are entirely separate problems that do not need to be solved at the same time.
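To make the "automatically generated" part concrete, here's a rough sketch of what a server could do on first start instead of falling back to plaintext HTTP. It assumes Python's third-party cryptography package; the hostname and lifetime are placeholders:

    import datetime

    from cryptography import x509
    from cryptography.hazmat.primitives import hashes, serialization
    from cryptography.hazmat.primitives.asymmetric import rsa
    from cryptography.x509.oid import NameOID

    # Generate a throwaway key pair and a certificate signed by that same key
    # (issuer == subject, i.e. self-signed).
    key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, u"example.invalid")])
    now = datetime.datetime.utcnow()
    cert = (
        x509.CertificateBuilder()
        .subject_name(name)
        .issuer_name(name)
        .public_key(key.public_key())
        .serial_number(x509.random_serial_number())
        .not_valid_before(now)
        .not_valid_after(now + datetime.timedelta(days=365))
        .sign(key, hashes.SHA256())
    )

    # The server would write this out and use it for its TLS listener.
    open("selfsigned.pem", "wb").write(cert.public_bytes(serialization.Encoding.PEM))

No CA, no cost, no configuration: that's the whole point.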
> I am requiring the script to be served securely
You're requiring it to be served over HTTPS, which doesn't necessarily mean "secure", because "secure" covers several different goals. You're also strongly trusting the PKI system. Do you trust all the certificate authorities your browser includes by default?
Of course, because HTTP still exists, the initial request for the HTML that contains your <script> tag could be sent plaintext and thus modified during transit in many different ways.
> serve a malicious script to my users.
That can still happen without proper pinning, or if the local browser downgrades the request back to HTTP. Unfortunately this isn't particularly uncommon with corporate/school proxies, in-flight wi-fi services that forge certificates[3], and Superfish-style junk, all of which remove both the encryption and the authentication provided by TLS.
Regarding your specific example about loading JavaScript referenced in an HTML document's <script> tag, the solution is to validate the data, not the server. The valid server can still send incorrect data. If you include hashes of a page's subresources[4], the browser can validate the integrity of the file it received.
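As a rough sketch of how that works (the file name and CDN URL are made up; the SRI spec[4] is the authoritative reference), the integrity value is just a base64-encoded hash of the exact bytes you expect the browser to receive:

    import base64
    import hashlib

    # Hash the exact bytes of the subresource and encode the digest the way SRI expects.
    data = open("app.js", "rb").read()
    digest = base64.b64encode(hashlib.sha384(data).digest()).decode()

    # Emit the tag; the browser refuses to execute the file if the hash doesn't match.
    print('<script src="https://cdn.example/app.js" '
          'integrity="sha384-%s" crossorigin="anonymous"></script>' % digest)

The crossorigin attribute matters for cross-origin scripts, since SRI requires the resource to be fetched with CORS.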
Of course, even with self-signed certificates replacing plaintext HTTP, ISP injection/vandalism would still be really easy. The ISP would terminate the TLS, inject some annoying stuff, and then re-encrypt with another auto-generated certificate. Without verification by public CAs, the client could never detect the MITMing.
The client can detect changing to a new certificate. Obviously self-signed certificates have problems. The main point is that they do protect against some attacks, and raise the complexity/cost of an attack. Running a MITM takes a lot more time, effort, and resources compared to simple deep packet inspection on plaintext packets.
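A rough sketch of that detection, along trust-on-first-use lines (hostnames are examples, error handling omitted): remember the fingerprint of the certificate seen on first contact and complain if it ever changes.

    import hashlib
    import socket
    import ssl

    pins = {}  # hostname -> SHA-256 fingerprint of the cert seen on first contact

    def check_pin(host, port=443):
        # Accept self-signed certs; we only care whether the certificate changes.
        ctx = ssl.create_default_context()
        ctx.check_hostname = False
        ctx.verify_mode = ssl.CERT_NONE
        with socket.create_connection((host, port)) as sock:
            with ctx.wrap_socket(sock, server_hostname=host) as tls:
                fp = hashlib.sha256(tls.getpeercert(binary_form=True)).hexdigest()
        if host not in pins:
            pins[host] = fp  # first visit: nothing to compare against yet
        elif pins[host] != fp:
            raise RuntimeError("certificate for %s changed since first visit" % host)

This is essentially the trust-on-first-use model SSH clients have used for years: the first connection is a leap of faith, every later one isn't.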
> Without the verification by public CAs
While there isn't much support in current client software, verification doesn't have to come from a CA. In an ideal world, your bank (or whoever) could hand out some sort of dongle, or maybe a QR-style code printed on a card, containing a certificate that could be used for direct verification of their internet services, independent of any CA or in combination with CA verification.
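As a sketch of what client software could do with such an out-of-band certificate (the file name and hostname are hypothetical), it would trust only that one certificate rather than the whole system CA bundle:

    import socket
    import ssl

    # Trust only the certificate handed out by the bank itself, not the system CAs.
    ctx = ssl.create_default_context(cafile="bank_root.pem")

    host = "online.examplebank.invalid"
    with socket.create_connection((host, 443)) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            print("verified against the out-of-band certificate:", tls.version())

Passing cafile means the default system trust store is not loaded, so only the bank's own certificate can anchor the chain.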
> The client can detect changing to a new certificate
Not if the client has never visited the site before and doesn't have a known-good self-signed certificate pinned locally to check against. And if the client did have such a certificate pinned, revocation by the legitimate owner of the self-signed certificate becomes impossible, since the client won't trust the new self-signed certificate being presented to it, without out-of-band communication of said intent to revoke and manual intervention on the client side.
> dongle
Again, the problem is certificate revocation. Physical dongles cannot easily be revoked. Corporate intranets deal with catastrophic compromise of their internal CA certificates by re-imaging all corporate machines with new certificates and restoring from off-site backups where needed - prescribing that for customer machines is impossible.
PKI is like monitoring - it must rely on external services to be dependable and effective.
I already said that self-signed certs are not going to solve every problem. They solve some problems, which is better than the plaintext HTTP we should have retired over a decade ago. Obviously you should validate the server - probably through the usual PKI methods - whenever possible.
> revocation
Revocation would happen in the usual manner. The dongle is just a minor example of another way to provide validation. Obviously each method will have its own benefits and limitations. I'm not saying we should replace PKI with physical dongles; I'm suggesting that alternative (non-PKI) methods are possible, and they can not only coexist, they can also corroborate each other.
> That can still happen without proper pinning, or if the local browser downgrades the request back to HTTP.
What? What kind of browser would downgrade the request to HTTP?
> Unfortunately this isn't particularly uncommon with corporate/school proxies, in-flight wi-fi services that forge certificates
Which require a cert signed by a CA already trusted on the client's machine.
> Regarding your specific example about loading JavaScript referenced in an HTML document's <script> tag, the solution is to validate the data, not the server. The valid server can still send incorrect data. If you include hashes of a page's subresources[4], the browser can validate the integrity of the file it received.
If you don't have HTTPS, how can you be sure that the SRI hash wasn't tampered with?
Sorry, that should be the browser's local environment, not just the browser itself. An obvious example is sslstrip.
Right. Which would still work if all HTTP connections were replaced by HTTPS with self-signed certs, as you proposed. sslstrip, which must have MITM control to do that downgrade, would just terminate the connection and re-encrypt it with its own cert.
Which is why PKI HTTPS everywhere is the reasonable solution.
Of course. That happens.
Right. Nothing can protect you if you deliberately undermine it.
Loading static resources from other domains is very common, especially for ad networks.
Right, and SRI is certainly useful, but you still need PKI HTTPS on every site to bootstrap it. And since the only reason to avoid HTTPS is to avoid the encryption penalty, automatically generated self-signed certificates wouldn't be used anyway.
[1] https://arstechnica.com/tech-policy/2014/09/why-comcasts-jav...
[2] https://citizenlab.ca/2015/04/chinas-great-cannon/
[3] https://arstechnica.com/information-technology/2016/02/why-y...
[4] https://www.w3.org/TR/SRI/