This is why we need resource pinning in the browser.
The webservice would assert that the resources it's sending now are the same resources it will continue to send in the future. The browser is in a prime position to enforce this.
In the event the resources ever change, the browser should refuse to allow the changed resources to run and notify the user what has happened in a way that is at least as scary as broken TLS. If it's a legit deployment (say, because the service has updated the backend), then this should be independently verifiable out of band, e.g. via a blog post, a public changelog, etc. The process to accept the new deployment would need to be opt-in. If the user chooses not to opt in, the browser may continue using the old resources that were being served up in the past.
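A minimal sketch of the behavior described above, assuming a trust-on-first-use model: the browser remembers a hash of the resources it first received for an origin and refuses changed resources until the user explicitly opts in. All names here are illustrative, not any real browser API.

```python
# Hypothetical sketch of browser-side resource pinning (trust on first use).
# PinStore and its methods are made-up names for illustration only.
import hashlib


class PinStore:
    def __init__(self):
        # origin -> sha256 hex digest of the pinned resource bundle
        self.pins = {}

    def check(self, origin: str, resources: bytes, user_opted_in: bool = False) -> bool:
        """Return True if the resources may run; False means block and warn loudly."""
        digest = hashlib.sha256(resources).hexdigest()
        pinned = self.pins.get(origin)
        if pinned is None:
            # First visit: pin whatever we see now.
            self.pins[origin] = digest
            return True
        if pinned == digest:
            # Same resources as before: fine.
            return True
        if user_opted_in:
            # User verified the new deployment out of band (blog post,
            # changelog, etc.) and accepted it explicitly.
            self.pins[origin] = digest
            return True
        # Resources changed without opt-in: refuse to run them.
        return False
```

The key property is that a silent, targeted swap of the served assets fails the check; the service can still ship updates, but only through a user-visible, opt-in step.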
Without some mechanism like this, verification of the claims that services like ProtonMail make remains intractable.
I don't want to sound negative, but nearly 100% of the web relies on quickly updating, fresh resources, so I wouldn't hold my breath waiting for "resource pinning" to happen.
For the record, one can already do this if every resource used Subresource Integrity. Hashes of leaf resources would be embedded in parent resources up to the root document, which you could announce out-of-band (e.g. https://example.com on 23rd of November 2017 has hash 1234566...). Then you'd have a cryptographic proof (like a Merkle tree) that nothing in the page changed.
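For concreteness, here is how an SRI integrity value is computed: hash the exact bytes of the resource and base64-encode the digest, prefixed with the hash algorithm (sha384 is the commonly recommended choice). The example script content below is made up.

```python
# Compute a Subresource Integrity (SRI) value for a resource, matching what
# browsers verify against the `integrity` attribute.
import base64
import hashlib


def sri_hash(resource_bytes: bytes) -> str:
    """Return an SRI integrity string using sha384."""
    digest = hashlib.sha384(resource_bytes).digest()
    return "sha384-" + base64.b64encode(digest).decode("ascii")


script = b"console.log('hello');"  # illustrative resource content
print(sri_hash(script))
# The parent document would then embed it as:
# <script src="app.js" integrity="sha384-..." crossorigin="anonymous"></script>
```

Because each parent embeds the hashes of its children, pinning the root document's hash transitively pins the whole tree, which is what gives the Merkle-tree-like property.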
There are no standards and protocols in place for this, and there's no browser that enforces this.
If you think that taking something that's 80% there and filling in the last 20% for yourself counts as something that's "already" possible, then nothing is new and everything is already possible.
> Hashes of leaf resources would be embedded in parent resources up to the root document that you could announce out-of-band (e.g. https://example.com on 23rd of November 2017 has hash 1234566...)
This is really janky and not at all what I'm talking about. What I'm talking about is as simple as what happens now, e.g., "GitLab/Mastodon/Whatever XX.x Released".
> There are no standards and protocols in place for this, and there's no browser that enforces this.
And there never will be, especially for web apps, because no parties are interested in this. Look at what happened with HPKP: it looked good on the surface, but it turned out that extreme security is a little bit too extreme.
> If you think that taking something that's 80% there and filling in the last 20% for yourself counts as something that's "already" possible, then nothing is new and everything is already possible.
I'm just pointing out that you can already construct a scheme with the same security properties as what you described. If you'd rather wait for some hypothetical standard and implementation that will probably never happen then that's your decision.
> This is really janky and not at all what I'm talking about. What I'm talking about is as simple as what happens now, e.g., "GitLab/Mastodon/Whatever XX.x Released".
Perfect is the enemy of good, and "GitLab/Mastodon/Whatever XX.x Released" seems to be just good enough. For paranoid people, OpenPGP is there to verify build artifacts.
Are you an authority on this? Or just trying your hand at being a pundit with an endless supply of unsubstantiated stop energy?
> I'm just pointing out that you can already construct a scheme with the same security properties as what you described.
No, you can't. You're writing as if the "you" here is the party in control of the service backend—the developer. That's not what this is about. This is about how you—the user—can trust that out of the n times you visited the site it didn't serve up tampered assets to backdoor the process. If this were about developers, we wouldn't be having this discussion; the developer doesn't need to request proof that he or she hasn't done any tampering to covertly introduce a backdoor.
> "GitLab/Mastodon/Whatever XX.x Released" seems to be just good enough.
I'm convinced at this point that either you're just responding without actually giving any consideration to the words coming from either one of us, or I'm having a frustrating exchange with a chatbot.
I'm the one who wrote that a release announcement on the project blog suffices to verify out-of-band that the user should expect the resources to change. You're the one who wrote this:
> Hashes of leaf resources would be embedded in parent resources up to the root document that you could announce out-of-band
So why are you now trying to explain to me that a release announcement blog post is "good enough"? Clearly if I didn't think so, I wouldn't have argued for it.