IPv6 does indeed complicate things. I suspect we'll end up trying a few things before finding the right answer, starting with a) allowing network admins to configure IP ranges that correspond to the network they control, and b) examining the local network to infer a private range.
Happily(?), IPv4 networks are still pervasive, and this proposal seems clearly valuable in those environments.
The core assertion behind this proposal is that devices and services running on a local network can continue making themselves available to external networks if and only if they can update themselves to make that desired relationship explicit. If they can't update themselves, they also can't fix security bugs, and really must not be exposed to the web.
Correct. In the status quo, you will be best served by looking at solutions similar to what Plex is shipping (https://blog.filippo.io/how-plex-is-doing-https-for-all-its-...). ACME's DNS-based challenges might even make this easier today than it was when Plex designed that mechanism.
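For concreteness, here's a minimal sketch of the Plex-style scheme in TypeScript: embed the device's local IP in a hostname under a domain you control, let a wildcard DNS record resolve it, and obtain a publicly trusted certificate for that name via an ACME DNS-01 challenge. The device ID, base domain, and hostname layout below are illustrative assumptions, not Plex's actual details.

```typescript
// Hypothetical Plex-style hostname scheme: a wildcard DNS record under
// `base` resolves "<dashed-ip>.<deviceId>.<base>" back to the embedded
// local IP, and an ACME DNS-01 challenge can mint a publicly trusted
// certificate for "*.<deviceId>.<base>".
function localHttpsHostname(localIp: string, deviceId: string, base: string): string {
  // "192.168.1.10" -> "192-168-1-10.abc123.devices.example.com"
  return `${localIp.split(".").join("-")}.${deviceId}.${base}`;
}

console.log(localHttpsHostname("192.168.1.10", "abc123", "devices.example.com"));
// -> 192-168-1-10.abc123.devices.example.com
```

The browser then connects to the device by that name over HTTPS, while DNS quietly resolves it to the private address.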
Longer-term, it seems clear that it would be valuable to come up with ways to teach browsers how to trust devices on a local network. Thus far, this hasn't been a heavy area of investment. I can imagine it becoming more important if we're able to ship restrictions like the ones described here, as they make the ability to authenticate and encrypt communication channels to local devices more important.
The proposal does not attempt to force private network resources to use TLS. That would be an excellent outcome, but it's difficult to achieve in the status quo, and it's a separate problem best addressed on its own.
The proposal _does_ require pages that wish to request resources across a network boundary to be delivered securely, which therefore requires resources that wish to be accessible across network boundaries to be served securely (as they'd otherwise be blocked as mixed content). This places the burden upon those resources which wish to be included externally, which seems like the right place for it to land.
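As an illustration (the device hostname and endpoint below are hypothetical): a page served over HTTPS simply can't fetch a plain-HTTP resource from a local device, so any device that wants to be reachable from such pages ends up needing to serve TLS.

```typescript
// Runs in a page served from https://app.example.com (illustrative names).
async function readDeviceStatus(): Promise<string> {
  // Blocked as mixed content: an https page may not fetch over plain http.
  // await fetch("http://192.168.0.20/status");

  // Allowed, provided the browser trusts a certificate for this name
  // (e.g. one issued via the DNS-based approach sketched above):
  const res = await fetch("https://192-168-0-20.abc123.devices.example.com/status");
  return res.text();
}
```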
We're proposing treating cookies as `SameSite=Lax` by default (https://tools.ietf.org/html/draft-ietf-httpbis-rfc6265bis-03...). Developers would be able to opt into the status quo by explicitly asserting `SameSite=None`, but to do so, they'll also need to ensure that their cookies won't be delivered over non-secure transport by asserting the `Secure` attribute as well.
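A sketch of what that looks like on the wire, as a minimal Node server (the cookie names and values are illustrative; a real deployment would sit behind TLS, since `Secure` cookies are only delivered over secure transport):

```typescript
// Minimal Node.js sketch of the cookie attributes described above.
import * as http from "http";

http.createServer((_req, res) => {
  res.setHeader("Set-Cookie", [
    // No SameSite attribute: under the proposed default, browsers
    // treat this cookie as SameSite=Lax.
    "prefs=dark; Path=/",
    // Explicit opt-in to cross-site delivery: SameSite=None is only
    // honored when paired with Secure.
    "session=abc123; SameSite=None; Secure; Path=/; HttpOnly",
  ]);
  res.end("ok");
}).listen(8080);
```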
This is exactly the information I was looking for when I opened the Chromium blog post. Technical and to the point. Is there a reason why this couldn't be appended to the blog post?
If we're not the audience, then who is? This was posted to the Chromium open-source blog, which is typically a developer-heavy blog (with previous topics like "Hint for low latency canvas contexts"). Throwing a few reference links at the bottom shouldn't harm their message with the less technically savvy.
I'm just guessing. Something else that sparks joy for me: the fact that Google will never give any of their announcements the titles they'd be justified in using, like "OMG, WE KILLED CSRF!", and that I'll have to dig in a bit to see how big a deal what they just did is. Every "Improving privacy and security on the web" post is a little gift I get to unwrap. It's like Justin Schuh and Mike West's version of "one more thing".
The post does go into a good deal of technical detail. A challenge is that even experienced web developers didn't know much about SameSite prior to this announcement.
Unfortunately, crawling isn't a terribly effective way of evaluating breakage, as the crawler doesn't sign in, and therefore doesn't attempt to federate sign-in across multiple sites. That's part of the reason we're not shipping this change today, but proposing it as a (near-)future step.
To that end, we've implemented the change behind two flags (chrome://flags/#same-site-by-default-cookies and chrome://flags/#cookies-without-same-site-must-be-secure) so that we can work with developers to help them migrate cookies that need to be accessible cross-site to `SameSite=None; Secure`.
Ideally, we won't unintentionally break anything when we're confident enough to ship this change.
That's done via port-forwarding. That is, Chrome is talking to the loopback interface on a particular port. The server listening at that port forwards the requests across the debugging bridge to the phone, and ferries the response back across in the same way.
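A toy TCP forwarder in Node makes the shape of this concrete. To be clear, the real bridge is adb's forwarding; the ports and endpoint here are made-up stand-ins:

```typescript
// Toy loopback port-forwarder: the browser connects to 127.0.0.1:9222, and
// every byte is relayed to another endpoint (standing in for the debugging
// bridge), with responses ferried back over the same connection.
import * as net from "net";

const LISTEN_PORT = 9222;        // loopback port the browser talks to
const BRIDGE_HOST = "127.0.0.1"; // stand-in for the adb bridge endpoint
const BRIDGE_PORT = 9333;

net.createServer((client) => {
  const upstream = net.connect(BRIDGE_PORT, BRIDGE_HOST);
  client.pipe(upstream);   // requests flow toward the device
  upstream.pipe(client);   // responses come back the same way
  const close = () => { client.destroy(); upstream.destroy(); };
  client.on("error", close);
  upstream.on("error", close);
}).listen(LISTEN_PORT, "127.0.0.1");
```

From the browser's perspective, it's only ever talking to loopback, which is why this flow looks like local traffic.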
It should be unaffected by the suggestion in this document.