If I were developing a new protocol, and I thought more than just web browsers might want to use it, I would develop it as a new standard internet transport protocol. None of this trying to avoid the work of getting industry adoption. Talk to Cisco, Juniper, IBM, Google, Microsoft, Apple, Intel, Broadcom, TP-Link, Huawei, Foundry, Avaya, Marvell, Foxconn, etc. Get it adopted in silicon and software, and get the industry actually excited about an upgraded protocol. They get to use it to push sales of new devices, so it's great for them. You get to use it to work around stupid intermediary networking issues.
I am so glad to see these old standards (tcp, http) finally being offered a better, more secure alternative. But the most problematic of all in my opinion is smtp. Anyone aware of an equivalent of the quic/http2 initiative for smtp?
dns would be the second most problematic one, but as with ipv6, dnssec exists; it's just not used enough to get a chance to take over.
It's been a few years since I did server e-mail management, but SMTP/email in 2018 is quite different from SMTP/email in 1985.
For starters, all large e-mail providers (Gmail/Live) require you to use TLS, so most mx<>mx connections are already secured. Additional protocols like SPF, DKIM and DMARC are securing origins and messages and reducing spam and spoofing.
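For reference, those origin policies are just DNS TXT records anyone can inspect. A minimal sketch, assuming the dnspython package (the domain names are placeholders):

    # Fetch the TXT records where SPF and DMARC policies are published.
    # Requires dnspython; domains are placeholders.
    import dns.resolver

    def txt_records(name):
        return [b"".join(r.strings).decode() for r in dns.resolver.resolve(name, "TXT")]

    print(txt_records("example.com"))         # SPF, e.g. "v=spf1 include:... -all"
    print(txt_records("_dmarc.example.com"))  # DMARC, e.g. "v=DMARC1; p=reject; ..."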
Yeah, but that TLS is negotiated via STARTTLS, which is trivially downgradable; most SMTP servers on the internet use invalid (self-signed or expired) certs, and no one validates anything. As for SPF and DKIM, they're nice hints for spam scoring but are not used to block spoofed traffic.
So all these features are a thin layer of lipstick on the pig and don't actually solve the problem.
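To make the downgrade point concrete: stripping STARTTLS is nothing more than an on-path box deleting one line from the EHLO response. A toy illustration of just the filtering logic (not a real attack tool):

    # Toy illustration of STARTTLS stripping: remove the capability line and a
    # client that doesn't insist on TLS silently continues in plaintext.
    def strip_starttls(ehlo_response: str) -> str:
        kept = [l for l in ehlo_response.splitlines() if "STARTTLS" not in l.upper()]
        return "\r\n".join(kept) + "\r\n"

    reply = "250-mx.example.net\r\n250-SIZE 35882577\r\n250-STARTTLS\r\n250 8BITMIME\r\n"
    print(strip_starttls(reply))  # the client never learns TLS was on offer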
Even with invalid certificates and fairly weak crypto TLS protects us against passive eavesdroppers, which is not nothing.
In 1998 if I had a fibre tap for a big backbone carrier, and a line speed regex engine, I could pull matching text out of basically everything travelling over that carrier, silently and with just my small box maybe labelled "Lawful intercept capability" in one corner of one cage in one data centre.
Today that doesn't get me much, because it's encrypted. Even if it's encrypted using 3DES and RSA with a self-signed certificate for localhost, I can't read it, and I certainly can't parse it at line speed and run it through my regular expression system to keep the juicy bits, so even if I can decrypt some of that later I'll need to keep it all somewhere until I get that chance. Ugh.
So now an adversary has become _active_ and that changes the nature of the game, because while I can't detect passive eavesdroppers (except with "Quantum encryption" which is a fun lab toy but not a realistic component of everyday communications) I can detect active ones.
Not everybody cares if they're detected. I doubt Chinese or Russian intelligence agencies are much bothered that visiting journalists know they're bugged; if anything, it just helps intimidate them. But if (like the aforementioned NSA, or Mossad) your country depends upon the pretence that it is above such shenanigans, it sure is embarrassing to keep getting caught... and it also makes it much harder to pretend that everybody defending is just being paranoid.
There's an SMTP-STS protocol in the works to establish persistent secure relationships between MTAs, which should alleviate a lot of the STARTTLS downgrade pain. Last I checked, most of the major email providers were backing it.
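The draft (since published as MTA-STS) has the sending MTA discover a policy over HTTPS, which a STARTTLS-stripping attacker on the SMTP path can't quietly remove. A rough sketch of the lookup, assuming the well-known URL from the spec and a placeholder domain:

    # Rough sketch of an MTA-STS policy fetch; the domain is a placeholder.
    import urllib.request

    def fetch_mta_sts_policy(domain):
        url = f"https://mta-sts.{domain}/.well-known/mta-sts.txt"
        with urllib.request.urlopen(url, timeout=10) as resp:
            text = resp.read().decode()
        policy = {}
        for line in text.splitlines():
            if ":" in line:
                key, value = line.split(":", 1)
                policy.setdefault(key.strip(), []).append(value.strip())
        return policy  # e.g. {"version": ["STSv1"], "mode": ["enforce"], "mx": [...], ...}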
We've seen so many issues with certificate authorities, that validating ownership of certificates based on that system is completely pointless if you're looking for trust.
Like, remember Trustico, the guys who emailed 20,000 of their customers' private keys? They're still in business, and still working with one of the largest CAs: https://www.trustico.com/
Note that Trustico isn't a CA, just a reseller. Also, it is much easier to avoid having your reseller (or your CA, or the guy who cleans your pool) email out your private key if you pick your own key and, you know, keep it private, rather than letting them choose it and just hoping they deleted their copy.
I actually think we've been doing a fairly good job of cleaning up the Web PKI in recent years and I note that other trusted institutions like major banks and newspapers haven't got spotless records either.
Trustico is "just" a reseller, but one working with Comodo, an extremely trusted CA. Which calls a lot into question: If we can't trust Trustico (we can't), but they're able to issue Comodo certs, we really can't trust Comodo either. (Beyond the fact that really, if Comodo is still willing to work with Trustico, it's clear Comodo is also not a trustworthy entity.) And even if you or I don't trust Comodo, our web browsers do, because Google and Mozilla have decided Comodo is trustworthy.
I don't understand how anyone can suggest that web PKI isn't totally broken as long as this remains the case.
And banks don't have perfect records, but I'm not being forced to use a bank. If Google wants to use its monopoly to force us all to use PKI, maybe it should fix PKI first.
> If we can't trust Trustico (we can't), but they're able to issue Comodo certs, we really can't trust Comodo either.
I can't fault this logic, which is why I'm happy to tell you that this isn't how resellers work and hasn't been for many years. They can't "issue Comodo certs", they're just a middleman taking a cut.
A reseller isn't trusted by the CA at all in Web PKI terms. They handle some customer service stuff and (like an airline discounter) they allow the headline prices to stay high so that the "real" prices seem cheaper.
You don't need to trust Trustico at all, as a Relying Party you're depending on Comodo to do their job, independent of Trustico. If you are a Trustico customer, you needn't trust them beyond the fact that you'll be sending them money, and of course any outfit might take the money and run. If you're very naive and you allow Trustico to have your private key (an arrangement Comodo told them to stop when it signed them up as a reseller) then they have your private key. So, never do that, not with Trustico, not with Comodo, not with anybody. The entire point of a private key is that it's private, we can't make it any clearer than that.
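Concretely, "pick your own key" just means generating the key pair locally and sending the CA (or reseller) nothing but the CSR. A sketch with the Python cryptography package (recent versions; the name and file paths are placeholders):

    # Generate the key pair locally; only the CSR ever leaves the machine.
    from cryptography import x509
    from cryptography.hazmat.primitives import hashes, serialization
    from cryptography.hazmat.primitives.asymmetric import rsa
    from cryptography.x509.oid import NameOID

    key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    csr = (
        x509.CertificateSigningRequestBuilder()
        .subject_name(x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "example.com")]))
        .sign(key, hashes.SHA256())
    )

    with open("example.key", "wb") as f:   # stays with you, always
        f.write(key.private_bytes(serialization.Encoding.PEM,
                                  serialization.PrivateFormat.TraditionalOpenSSL,
                                  serialization.NoEncryption()))
    with open("example.csr", "wb") as f:   # this is all the CA ever needs to see
        f.write(csr.public_bytes(serialization.Encoding.PEM))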
> JMAP is a modern standard for email clients to connect to mail stores. It therefore primarily replaces IMAP + SMTP submission. It does not replace MTA-to-MTA SMTP transmission. JMAP was built by the community, and continues to improve via the IETF standardization process. Upcoming work includes adding contacts and calendars (replacing CardDAV/CalDAV).
Thanks. That's interesting, but my mail pain point is the smtp-to-smtp protocol. Client to smtp server is less problematic; your client will shout if the certificate is invalid. Smtp to smtp is where the phishing, spamming, etc. happens.
So who are the bad actors they’re freezing out with this spec? They mention problems with middle boxes which I’ve heard before, but no finger pointing.
I can't find any mention of "bad actors" or "freezing out" anybody or anything so I'm going to guess you meant just generally which middle boxes are "bad" and the answer is all of them, almost by definition.
The specifications we're dealing with here don't (and in modern protocols, this is quite deliberate) allow for any middleboxes. The only, minimal way to implement such a thing correctly in the face of that situation is to act as a full proxy, which is going to _suck_ for performance and your customers aren't going to pay for a product that throttles their connectivity badly nor for the hardware that would let you run a line speed proxy.
So they don't; they try to make an end run around protocol compliance instead, and typically the idea goes something like this:
During connection setup we'll inspect everything and implement whatever rules are key to our product, but mostly we'll pass things between the real client and server transparently, only intervening as necessary for our role.
Then, we can "whitelist" most connections, and let them continue at line speed without actually being inspected further.
Unlike a full proxy, this design breaks, messily, when optional protocol features are understood by the client and server but not the middlebox. This is because either the middlebox pretends to understand when it doesn't (so client and server seem to get their mutually agreed new feature, but if it has any impact on how the protocol is used, it breaks mysteriously since the middlebox didn't know), or the middlebox squashes everything it doesn't understand and then steps out of the way, expecting that to work out OK even though the client and server now misunderstand each other's situation.
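If it helps, the shape of that logic, and the failure mode, looks roughly like this (a purely illustrative sketch, nothing like any vendor's actual code):

    # Purely illustrative: a "look at the handshake, then whitelist" middlebox.
    # The bug is baked into the design: unknown options get stripped (shown here)
    # or blindly passed through, and either way something breaks later.
    KNOWN_OPTIONS = {"MSS", "SACK", "WSCALE", "TIMESTAMPS"}
    whitelisted_flows = set()

    def inspect_syn(flow_id, options, policy_ok=True):
        surviving = [o for o in options if o in KNOWN_OPTIONS]   # squash the rest
        if policy_ok:
            whitelisted_flows.add(flow_id)   # later packets skip inspection entirely
        return surviving

    print(inspect_syn(("203.0.113.7", 443), ["MSS", "SACK", "SHINY-NEW-EXTENSION"]))
    # -> ['MSS', 'SACK']: the new feature silently vanishes on this path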
> generally which middle boxes are "bad" and the answer is all of them, almost by definition.
NATs and firewalls are the first two classes of middleboxes that come to mind, and I wouldn't consider either of them inherently "bad".
As pointed out in the article, NATs suffer from the shortcoming that without visibility into stream semantics (e.g. SYNs / RSTs / FINs), they often fall back to using arbitrarily set timeouts that can sever long-lived connections (e.g. an idle SSH session, where messages might be infrequent).
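That's also why long-lived clients end up papering over it with keepalives sent more often than the typical mapping expires. A sketch of the usual socket options on Linux (the intervals are just illustrative, and the TCP_KEEP* names are platform-specific):

    # Keep an otherwise idle TCP connection alive across NAT idle timeouts by
    # probing more often than the mapping expires (Linux-specific option names).
    import socket

    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)
    s.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPIDLE, 60)   # first probe after 60s idle
    s.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPINTVL, 30)  # then every 30s
    s.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPCNT, 4)     # give up after 4 misses
    s.connect(("example.com", 22))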
In my view, "bad" middleboxes are those that lead to protocol ossification -- TLS 1.3 (also from the article) is a good example of that. With encrypted control state, middleboxes (without cooperation by one of the endhosts) are forced to treat QUIC packets as opaque UDP blobs.
Part of the problem is that some middleboxes don't actually follow the robustness principle, and will in fact strip unrecognized protocol options or drop the packets entirely.
That's not inherent to the provided functionality, but rather an implementation detail of existing boxes.
There are objectively bad NATs, yes, but extending the lifetime of IPv4 by 20 years and providing isolation between internal and external networks are not inherently bad.
NA(P)T is fundamentally incompatible with the guarantees set out by the IP standards that specify packets to be transmitted unmodified end-to-end. Their entire idea is to change the IP address and higher-level protocol identifiers such as ports.
A basic case that they break is when applications embed IP addresses in the data. The "timeout" problem in the article is also impossible to avoid in a guaranteed-correct way, since the NA(P)T cannot know when a flow is finished and the mapping is safe to recycle.
Since these things are basically forbidden by the standards, their functionality has never been standardized. Hence the wild west of varying timeouts, heuristics, and various more-or-less broken attempts to munge application-level data (ALG).
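To make the timeout problem concrete, the heart of a NA(P)T is just a mapping table with guessed expiry times. A toy sketch (nothing like production code; the 30-second timeout is an arbitrary stand-in for a vendor's arbitrary choice):

    # Toy NAPT translation table.  The expiry is pure guesswork: for UDP, or for
    # TCP flows whose FIN/RST the box never saw, it cannot know whether a quiet
    # mapping is dead or just idle.
    import time

    IDLE_TIMEOUT = 30              # seconds; an arbitrary vendor choice
    mappings = {}                  # (proto, src_ip, src_port) -> [ext_port, last_seen]
    next_ext_port = 40000

    def translate(proto, src_ip, src_port):
        global next_ext_port
        now = time.time()
        for k in [k for k, (_, seen) in mappings.items() if now - seen > IDLE_TIMEOUT]:
            del mappings[k]        # may sever a perfectly healthy idle connection
        key = (proto, src_ip, src_port)
        if key not in mappings:
            mappings[key] = [next_ext_port, now]
            next_ext_port += 1
        mappings[key][1] = now
        return mappings[key][0]

    print(translate("udp", "192.168.1.10", 51515))   # e.g. 40000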
> NA(P)T is fundamentally incompatible with the guarantees set out by the IP standards that specify packets to be transmitted unmodified end-to-end.
A small nitpick, but some fields of the IP packet are meant to be modified in transit: for instance, the TTL, the ECN bits, and the fragmentation fields. The checksum field is defined so it can be updated incrementally (without being recomputed from scratch) to match these changes.
But yeah, other than these fields in the IP header, and a few hop-by-hop headers or options, packets are not meant to be modified in transit (other than fragmentation, but this applies once the packet fragments are put together).
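For the curious, that incremental update is RFC 1624's one's-complement trick, HC' = ~(~HC + ~m + m'), where m is the old 16-bit word and m' the new one. A quick sketch:

    # Incremental IPv4 header checksum update (RFC 1624): when one 16-bit word
    # changes from m to m', the checksum is patched without re-summing the header.
    def ones_add(a, b):
        s = a + b
        return (s & 0xFFFF) + (s >> 16)

    def update_checksum(old_ck, old_word, new_word):
        x = ones_add(~old_ck & 0xFFFF, ~old_word & 0xFFFF)
        x = ones_add(x, new_word & 0xFFFF)
        return ~x & 0xFFFF

    # e.g. a router decrementing TTL 64 -> 63 (TTL shares a word with protocol=6, TCP);
    # 0x1C46 is a placeholder for whatever checksum arrived in the header.
    print(hex(update_checksum(0x1C46, (64 << 8) | 6, (63 << 8) | 6)))
    # -> 0x1d46: the checksum goes up by 0x0100 when the TTL drops by one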
> extending the lifetime of IPv4 by 20 years and providing isolation between internal and external networks are not inherently bad.
It's hard to see how things would be worse in the alternative history without NAT. IPv6 deployment would have come much sooner - it's been ready for so long even in our current timeline.
Ambiguous addresses (RFC1918) aren't good for isolation, firewalls are. It's a common security problem that people end up joining different RFC1918 networks, and then don't know what the ACLs mean anymore. Isolation is provided by firewalls, not NATs after all. A NAT's job is to try to proxy traffic back and forth, not block it.
If you require any type of IP Helper/NAT_* module, you are going to have problems with it at some point, and those problems will be very opaque to the end user, and possibly even the administrator.
For example, all VOIP and online gaming would be in a better place if IPv4 had been taken out and shot years ago. The loss of the 1:1 mapping between server and client ports makes everything worse.
NAT is a minor nuisance for protocol design and most games have exactly zero problems with NATs. I think games are actually one of those things where NATs are really not a big issue, because the high packet rate will never produce timeout issues.
SIP is a different story, but SIP (and, it seems, all other VoIP stacks as well) is mind-bogglingly problematic in every way imaginable, so NAT is a problem, but NAT is definitely not your only problem with VoIP.
(I consider VoIP hugely impressive for turning what literally was "connect two wires, polarity doesn't really matter" with a debugging experience of "if you don't hear the tone, the wire is broken" and a reliability of "it works" into "well you need a fully-loaded computer connected to the internet running an insane software stack with more compatibility hacks than your unsightly mother" with a debugging experience of "well f---" and a reliability of "I'm just going to use my cellphone, whose battery life is literally less than one day")
NAT prevented the deployment of SCTP, which was envisioned to be a better TCP, just to name one example. And stopped countless protocol ideas from getting off the drawing board.
Thanks for this. It's very informative, even though it's not the packet-level analysis I was initially looking for.
>However, we observed that QUIC performs significantly worse than TCP when the network reorders packets (Figure 2).
>Upon investigating the QUIC code, we found that in the presence of packet reordering, QUIC falsely infers that packets have been lost, while TCP detects packet reordering and increases its NACK threshold.
Under what conditions does packet reordering usually occur?
From the looks of the figures, it seems like packet-reordering is a function of the x-axis value (rate-limit of some sort?).
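For what it's worth, the mechanism in the second quote is packet-threshold loss detection: a packet is declared lost once packets sent sufficiently later have been acknowledged. A rough sketch of the idea, with the threshold value assumed here rather than taken from the paper:

    # Rough sketch of packet-threshold loss detection.  Under heavy reordering it
    # fires even though nothing was actually lost.  The threshold is an assumption
    # for illustration, not the value used by any particular implementation.
    REORDER_THRESHOLD = 3

    def presumed_lost(unacked_packet_numbers, largest_acked):
        return [pn for pn in unacked_packet_numbers if largest_acked - pn >= REORDER_THRESHOLD]

    # Packets 5..8 arrive (and are acked) before packet 4 does:
    print(presumed_lost(unacked_packet_numbers=[4], largest_acked=8))   # -> [4], a spurious "loss"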
Sounds very interesting indeed. I wish more of those secure protocols would focus more on some sort of obfuscation. This is to stop repressive regimes from simply blocking all this impressive privacy tech, yielding zero actual benefit to end users.
I don't know about the differences between gQUIC and the new QUIC, but I've always hated gQUIC, to the point that I always disabled it on my machine. The main reason is that my ISP always prioritizes TCP over UDP, so during the congested period (6-10pm), gQUIC is useless.
Yes, but given that most people have very little choice of ISP, and so much hate has already been directed at ISPs with little change in outcome, directing the hate at any other group is more effective.
I had a funny problem with my router QoS a couple of years ago. It was classifying unknown UDP traffic as junk, and QUIC counted as unknown. This problem was entirely self-inflicted, I was using crappy firmware and I misconfigured it. But I can easily imagine other network infrastructure treating UDP traffic poorly.
Which is a mistake. A lot of UDP traffic should be priority for low latency; games, for instance. The problem is that there's also a lot of bulk transfer happening over UDP, like Bittorrent. Long story short, traffic shaping is hard.
The BitTorrent protocol uses UDP only when running a congestion control mechanism on top of it (uTP). uTP is aware of when the send buffer is filling and when latency is being introduced, so it automatically backs off and gives interactive traffic a chance to be exchanged with priority.
The standard BitTorrent protocol uses TCP by default and is arguably worse: since it opens more TCP connections by nature than client-server file delivery does, it has a better chance of being prioritized.
So BT over UDP with the uTP transport protocol is in fact a clever way to balance heavy and interactive traffic.
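For anyone curious what that back-off looks like, uTP is a LEDBAT-style scheme: it measures its own queuing delay against a fixed target and shrinks the window once the queue grows. A toy sketch of the window update (constants are illustrative, not the real client's):

    # Toy LEDBAT-style window update: bulk transfer backs off as its self-induced
    # queuing delay approaches the target, leaving room for interactive traffic.
    TARGET_DELAY = 0.100   # seconds of tolerated queuing (illustrative)
    GAIN = 1.0
    MSS = 1400.0

    def update_cwnd(cwnd, base_delay, current_delay, bytes_acked):
        queuing_delay = current_delay - base_delay
        off_target = (TARGET_DELAY - queuing_delay) / TARGET_DELAY   # negative once over target
        cwnd += GAIN * off_target * bytes_acked * MSS / cwnd
        return max(cwnd, MSS)

    cwnd = 20 * MSS
    print(update_cwnd(cwnd, base_delay=0.030, current_delay=0.170, bytes_acked=1400))
    # smaller than 28000: the queue got too deep, so the transfer yields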
I think you're missing the point that this is a type of ratchet in getting the intermediates to stop doing arbitrary traffic shaping/queueing/etc. (Ideally it would end up being a ratchet for net neutrality, but let's see...)
A lot of Chrome traffic is already QUIC-based[1], and I think people will be pissed off if their experience is suddenly degraded because some asshole ISP decided to wholesale drop UDP in favor of TCP.
Regardless of any of that -- the only way forward for privacy is end-to-end encryption and QUIC helps achieve that, up to "secret key availability"[2].
[1] I think Firefox is also experimenting with it?
[2] That is, any entity serving you a page must have the right secret keys. This is basically the case with SSL-only web sites already.
I too would drop UDP before TCP, as UDP likely won't get resent.
A great example of intuition going wrong. VoIP won't work at all during congestion even if it's not causing the congestion. If anything, dropping TCP before UDP is likely to be more helpful since TCP will back off. But now that uTP and QUIC exist, such L4 discrimination is probably just counterproductive.
I find it interesting how everything has to be encrypted and 'secure' now. The excuse is always the NSA, but let's be honest: A new transport protocol isn't going to protect you from them. I think there's a lot more to the cellular baseband and Intel ME backdoors than anyone can imagine.
Google is going to do the same thing with this they did with HTTPS. Soon enough, you'll be penalized through search and/or Chrome for not supporting it.
The incentive for Google is for all browsers to support QUIC; it saves them a lot of hardware costs and increases performance. They couldn't care less whether your website uses QUIC.
You're receiving downvotes because you're propagating the "pushing encryption is motivated by Google's evil agenda" narrative that's currently in vogue with a subsection of HN commenters, for reasons unknown to me, because quite frankly the arguments are ridiculous.
Try blocking your network's access to www.google-analytics.com (with a fast-loading block notice page) and you will see that most webpages become unusable with a 30-second delay before page load completes.
If QUIC only works when the network has no partitions, then QUIC doesn't work.