I use the ratpoison tiling WM with custom bindings; it's kind of like tmux or screen for Xorg, where you stack apps, for example Firefox, vim, ssh, tmux/screen, and terminals. Anyway, it's up to you how you search: ctags, man, grep, the web, etc.
This setup is agile but still not as fast as LSP or Copilot; I prefer learning and remembering over speed. Personal taste :-)
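For anyone curious what such bindings look like, here's a hypothetical ~/.ratpoisonrc snippet; the keys and programs are made up for illustration, not the actual config described above:

```
# ~/.ratpoisonrc -- example bindings (hypothetical)

# use Ctrl-t as the prefix key, like screen
escape C-t

# prefix + c opens a terminal
bind c exec xterm

# prefix + f opens the browser
bind f exec firefox

# prefix + s splits the current frame
bind s split
```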
>However, in those discussions, a related concern was identified: confusion between QUIC-the-transport-protocol and QUIC-the-HTTP-binding. I and others have seen a number of folks not closely involved in this work conflating the two, even though they're now separate things.
>
>To address this, I'd like to suggest that -- after coordination with the HTTP WG -- we rename the HTTP document to "HTTP/3", using the final ALPN token "h3". Doing so clearly identifies it as another binding of HTTP semantics to the wire protocol -- just as HTTP/2 did -- so people understand its separation from QUIC.
Google didn't make the distinction between the transport layer and the HTTP layer on top when they called their development "QUIC"; it was one thing. The IETF decided to split these during standardization.
It doesn't matter, because every re-implementation of that old Unix tool is made for the same purpose and mostly accepts the same arguments.
For example, you can make a tar archive and pipe it to nc. On the other server, nc accepts the data and pipes it to tar to unpack. This shows the real power of Unix pipes.
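As a locally runnable sketch of the same idea (the `src`/`dest` directory names are placeholders), tar's ability to write to stdout and read from stdin is what makes the nc trick work:

```shell
# tar writes the archive to stdout ("-"); the second tar reads from stdin.
# On two machines you'd put nc (or ssh) in the middle instead of a local pipe.
mkdir -p src dest
echo "hello" > src/file.txt

# archive src, stream it through the pipe, unpack into dest
tar cf - src | tar xf - -C dest

cat dest/src/file.txt
```

Over a network, the receiver would run something like `nc -l -p 9000 | tar xf -` and the sender `tar cf - src | nc host 9000` (exact nc flags differ between the BSD and GNU variants).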
Stupid question: how do you solve (de)fragmentation and out-of-order delivery with an unreliable protocol? What are the use cases, and is it possible to keep it simple?
You generally only use this for data where only the latest update matters. So if you get "Packet 18" "Packet 30" "Packet 19", you take 18, then 30, then ignore 19. The canonical example is an object's position; usually if you know what something's position is at 30 then information about where it was at 19 is so out of date it's useless.
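The "take the newest, drop the stale" logic above can be sketched in a few lines; the function and variable names here are made up for illustration:

```python
# Latest-update-wins over an unreliable transport: keep the highest
# sequence number seen so far and silently drop anything older.

latest_seq = -1

def handle_packet(seq, payload, apply_update):
    """Apply payload only if it's newer than anything seen so far."""
    global latest_seq
    if seq > latest_seq:
        latest_seq = seq
        apply_update(payload)
    # else: stale packet, dropped

positions = []
for seq, pos in [(18, "pos@18"), (30, "pos@30"), (19, "pos@19")]:
    handle_packet(seq, pos, positions.append)

print(positions)  # ['pos@18', 'pos@30'] -- packet 19 arrived after 30, ignored
```

A real implementation would also have to handle sequence-number wraparound, but the core idea is just this comparison.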
The gain here is that you typically get the latest information as fast as possible. The downside, of course, is that quality degrades pretty steeply on connections that are even a little high-latency or lossy; UDP drops packets even on a wired LAN, for example. Practically all games use this method, and they all have lots of smoothing and prediction tech to make it seem like you're getting constant position updates -- which you almost certainly aren't.
You can also use this for data that doesn't matter -- although if it doesn't matter you should question why you're sending it in the first place.
If you need to 'solve' those, you use a reliable protocol instead, like TCP. In terms of actual implementation, protocols that do this need to keep state, regularly notify the other side of what they've received, and retransmit packets that appear to have been lost. Doing it 'simply' is easy; maximising efficiency/performance is harder.
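The 'simple' version is essentially stop-and-wait ARQ. Here's a self-contained sketch (all names invented, channel loss simulated) showing the keep-state / acknowledge / retransmit loop:

```python
import random

# Stop-and-wait sketch: the sender resends each packet until the
# receiver acknowledges it, over a channel that randomly drops things.

random.seed(1)

def lossy_send(queue, item, loss=0.3):
    """Deliver item unless the simulated channel drops it."""
    if random.random() >= loss:
        queue.append(item)

def transfer(data):
    received = []
    seq = 0                     # sender state: next unacknowledged packet
    while seq < len(data):
        channel, acks = [], []
        lossy_send(channel, (seq, data[seq]))   # send packet seq
        for pkt_seq, payload in channel:        # receiver side
            if pkt_seq == len(received):        # in-order, not a duplicate
                received.append(payload)
            lossy_send(acks, pkt_seq)           # ack (may also be lost)
        if seq in acks:                         # ack arrived: advance
            seq += 1
        # else: "timeout" -- the loop retries the same packet
    return received

print(transfer(["a", "b", "c"]))  # ['a', 'b', 'c'], despite the losses
```

Real protocols like TCP improve on this with sliding windows (many packets in flight), cumulative/selective acks, and adaptive timeouts -- that's where the efficiency work goes.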
The other answer gives a good example of when you don't actually want to fix the issues you mention.