
The article gives an example of why it's difficult to improve TCP further.

TCP Fast Open was standardized 8 years ago and is barely used. This is because updating TCP requires kernel updates, which just isn’t going to happen on most mobile devices.

Thus, moving the protocol to userspace makes a lot of sense.
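For context, here is a minimal sketch of what client-side TCP Fast Open looks like on Linux (kernel 3.7+). The helper name tfo_send is hypothetical, and the fallback #define covers older libc headers. The point is that the entire API surface is one flag, yet none of it works unless the running kernel implements it, which is exactly the deployment problem:

    /* Sketch: client-side TCP Fast Open on Linux (kernel >= 3.7).
       The whole feature is one flag -- but the kernel has to
       implement it, which is the deployment problem above. */
    #include <sys/types.h>
    #include <sys/socket.h>
    #include <netinet/in.h>
    #include <arpa/inet.h>

    #ifndef MSG_FASTOPEN
    #define MSG_FASTOPEN 0x20000000  /* for older libc headers */
    #endif

    ssize_t tfo_send(const char *ip, unsigned short port,
                     const void *buf, size_t len)
    {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0)
            return -1;

        struct sockaddr_in addr = { .sin_family = AF_INET,
                                    .sin_port = htons(port) };
        inet_pton(AF_INET, ip, &addr.sin_addr);

        /* MSG_FASTOPEN replaces the separate connect(): the data
           rides in the SYN, saving a round trip on repeat visits. */
        return sendto(fd, buf, len, MSG_FASTOPEN,
                      (struct sockaddr *)&addr, sizeof(addr));
    }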




> Thus, moving the protocol to userspace makes a lot of sense.

Raw IP sockets are accessible from the same userspace-facing APIs as e.g. UDP sockets and don't require climbing up the stack. Unfortunately, operating systems started to consider custom protocol implementations security risks, but rather than reverse that thinking we've just continued to abstract up past it.

In reality I think "where it is implemented in code" was a small portion of QUIC's design choices compared to "IPv4 NAT & external firewalling has ossified protocols", which is a similar story of "just abstract up to avoid the issues". Unfortunately, in that case I don't think abstracting up is as permanent a solution as it was on the OS side.
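To illustrate the "same userspace-facing APIs" point: a custom transport can sit directly on IP through the ordinary socket call, one layer below UDP. A hedged sketch follows; the function name is hypothetical, protocol number 253 is reserved for experimentation per RFC 3692, and the CAP_NET_RAW requirement is the OS-level gatekeeping described above:

    /* Sketch: a custom transport directly over IP -- same socket
       API as UDP, one layer down. Requires CAP_NET_RAW, i.e. the
       "security risk" gatekeeping mentioned above. */
    #include <sys/socket.h>

    #define MY_PROTO 253  /* reserved for experimentation, RFC 3692 */

    int open_custom_transport(void)
    {
        /* The kernel delivers raw IP payloads for proto 253 to
           this socket (and to any other socket bound to it). */
        return socket(AF_INET, SOCK_RAW, MY_PROTO);
    }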


Raw sockets don't really allow for multiple applications to use the same custom protocol. If, for example, chrome and firefox were both running, which gets packets destined for the QUIC transport protocol? The kernel wouldn't know; without the UDP header it can't distinguish flows.

Likewise NAT devices typically support UDP flows today due to their prevalence in games, but if you introduce a new transport protocol at the IP layer, they wouldn't be able to identify which flow (and therefore which NATed endpoint) the packet is destined for.


> Raw sockets don't really allow for multiple applications to use the same custom protocol. If, for example, chrome and firefox were both running, which gets packets destined for the QUIC transport protocol? The kernel wouldn't know; without the UDP header it can't distinguish flows.

In reality raw sockets work such that the question is the reverse of what you describe. The kernel will check two things:

- Which raw sockets are bound to the protocol number seen in the packet

- Which raw sockets have issued "connect" to the sending IP

Any and all raw sockets that match these will receive the packets. In that sense the protocol (QUIC) needs some way to identify streams, so that if e.g. both Chrome and Firefox browse to the same server they don't interfere with each other; QUIC has this innately due to the way it implements encryption. Ideally the OS would also allow a raw socket to register something akin to a BPF filter, as that would make it as efficient as UDP socket tracking even in the edge cases.
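A sketch of those two delivery rules, under the same assumptions as above (experimental protocol number 253; 192.0.2.1 is a documentation address). Every raw socket bound to the protocol number gets its own copy of each matching packet, and connect() narrows delivery to one sender:

    /* Demonstrates the two kernel checks described above. A
       second process opening its own proto-253 raw socket would
       receive its own copy of every matching packet. */
    #include <stdio.h>
    #include <sys/types.h>
    #include <sys/socket.h>
    #include <netinet/in.h>
    #include <netinet/ip.h>
    #include <arpa/inet.h>

    int main(void)
    {
        /* Check 1: bind to the protocol number. */
        int fd = socket(AF_INET, SOCK_RAW, 253);
        if (fd < 0) {
            perror("socket (needs CAP_NET_RAW)");
            return 1;
        }

        /* Check 2: only accept packets from this sender. */
        struct sockaddr_in peer = { .sin_family = AF_INET };
        inet_pton(AF_INET, "192.0.2.1", &peer.sin_addr);
        connect(fd, (struct sockaddr *)&peer, sizeof(peer));

        unsigned char pkt[2048];
        ssize_t n = recv(fd, pkt, sizeof(pkt), 0);
        if (n > 0) {
            /* AF_INET raw sockets hand us the IP header too; the
               transport payload starts at ihl * 4 bytes. */
            struct iphdr *ip = (struct iphdr *)pkt;
            printf("got %zd bytes, payload at offset %d\n",
                   n, ip->ihl * 4);
        }
        return 0;
    }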

> Likewise NAT devices typically support UDP flows today due to their prevalence in games, but if you introduce a new transport protocol at the IP layer, they wouldn't be able to identify which flow (and therefore which NATed endpoint) the packet is destined for.

This is actually what I was referring to when I said:

> "IPv4 NAT & external firewalling has ossified protocols" which is a similar story of "just abstact up to avoid the issues"

We continue to make non-choices to build up the stack rather than implement systems that are interchangeable.


Chrome and Firefox could develop a standardized system service which would deliver packets to the proper application. NAT is not needed in the bright IPv6 world of the future.

Though I don't know what's wrong with UDP. 8 bytes of header overhead on a ~1450-byte IP payload is about 0.55% of bandwidth, and the checksum overhead should be negligible.


> system service which would deliver packets to the proper application

That's expensive. Even if you avoid copying the packets by sharing memory between processes, there are still a lot of context switches...


I think TCP Fast Open is a bad example for this. None of the common socket libraries that I know of (never mind HTTP libraries) have gained support for TCP Fast Open yet.
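For reference, the server side on Linux is a single setsockopt() on the listening socket; the helper name here is hypothetical, but TCP_FASTOPEN has been in <netinet/tcp.h> for years. This is exactly the kind of knob a socket library has to surface before applications can adopt the feature:

    /* Sketch: server-side TCP Fast Open on Linux. One extra
       option on the listener -- the knob libraries would need
       to expose. qlen bounds the pending-TFO-request queue. */
    #include <sys/socket.h>
    #include <netinet/in.h>
    #include <netinet/tcp.h>

    int enable_tfo(int listen_fd)
    {
        int qlen = 16;
        return setsockopt(listen_fd, IPPROTO_TCP, TCP_FASTOPEN,
                          &qlen, sizeof(qlen));
    }

The client side additionally needs sendto() with MSG_FASTOPEN (or the newer TCP_FASTOPEN_CONNECT option), which is why plain connect()/send() wrappers never pick it up for free.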


It's a bad example because it's badly adopted?


It's a bad example of middle-boxes causing ossification, as the adoption has been more limited by library/framework support than middle-boxes blocking it.


Chicken-and-egg problem. It doesn't tend to work reliably because of middle boxes, so there is no push to implement it widely. It is not widely implemented, so there is little pressure to update middle boxes.


That doesn't sound like a particularly convincing reason. In order for mobile devices to benefit from HTTP/3, commonly used HTTP client libraries will have to be updated, which usually happens on a similar timescale to kernel updates anyway.


Much easier to update an app and its HTTP library than mobile device kernels. Particularly given how a large proportion of mobile devices are unsupported and won't get kernel upgrades any more.


The thing is, there are two possible extremes:

1. Our design is complete, error-free and designed to stand the test of time. It will be in common use, largely unchanged, in 40 years' time. The TCP of the 2010s. There is widespread industry support. We want to move to the application layer to ease the initial roll-out.

2. Our design will need to change every few years; even we, the authors, don't think it's finished. This is a Google-only project, and most vendors are refusing to support it because they think it's badly designed crap. The ActiveX of the 2010s. We want to move to the application layer so we can force it through without anyone else's support.

Where are we on the spectrum between those two options? I don't know.


Great question!


An app can control which HTTP library it's using and even bypass the built-in one on the mobile device. That's not possible in the case of TCP.

So no, it’s not the same.

Another issue they mention in the article is middleboxes, which will basically never be upgraded and will never support new TCP features.


Nor will they support new HTTP features.



