Hacker News new | past | comments | ask | show | jobs | submit | popee's comments login

I use the ratpoison tiling WM with custom bindings; it's kind of like tmux or screen for Xorg, where you stack apps, for example Firefox, vim, ssh, tmux/screen, terminals. Anyway, it's up to you how you search: ctags, man, grep, the web, etc.

This setup is agile, but still not as fast as LSP or Copilot. I prefer learning and remembering over speed, though; personal taste :-)


C is Sparta! :-)


Perl 6 is factually the wrong name; it's an entirely new language and as such deserves a different one.


Yeah, just reject any access from their subnets


And if you don't like ORMs, you can always use a query builder like Knex.


Like totally


> Their second upgrade they called QUIC (pronounced "quick"), which is being standardized as HTTP/3.

Isn't QUIC a new transport-layer protocol based on UDP, and, if I remember correctly, won't HTTP/3 be the HTTP binding for QUIC?

You might think this is nitpicking, but HTTP is an application-layer protocol, so it's a little bit confusing to me.


>However, in those discussions, a related concern was identified; confusion between QUIC-the-transport-protocol, and QUIC-the-HTTP-binding. I and others have seen a number of folks not closely involved in this work conflating the two, even though they're now separate things.

>

>To address this, I'd like to suggest that -- after coordination with the HTTP WG -- we rename our the HTTP document to "HTTP/3", and using the final ALPN token "h3". Doing so clearly identifies it as another binding of HTTP semantics to the wire protocol -- just as HTTP/2 did -- so people understand its separation from QUIC.

https://mailarchive.ietf.org/arch/msg/quic/RLRs4nB1lwFCZ_7k0...

TL;DR the rename is to resolve the confusion.


Why not just HTTP/QUIC? Using 3 seems to strongly suggest that it is the next generation of HTTP. They knew that but pretended it wasn't relevant.


Because there's a decent chance that it will be the next generation of HTTP.

If it doesn't pan out they'll just move on. Remember IPv5?


Aha, thanks for clearing that up


Google didn't make the distinction between the transport layer and the HTTP layer on top when they called their development "QUIC"; it was one thing. The IETF decided to split the two during standardization.


Smart decision, because QUIC as a transport layer could be good for other protocols built on top of it.

But hey, who knows; SCTP never took off, but we are talking about Google here.


netcat


the one that comes with nmap?


Doesn't matter, because every re-implementation of that old Unix tool is made for the same purpose and mostly accepts the same arguments.

For example, you can make a tar archive and pipe it to nc. On the other server, nc accepts the data and pipes it to tar for unpacking. This shows the real power of Unix pipes.
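A minimal sketch of that pipeline (the hostname and port here are made up, and flag spellings vary between netcat implementations):

```shell
# Receiving host: listen on a port and unpack whatever arrives.
nc -l 9000 | tar -xf -

# Sending host: archive a directory and stream it straight over the network.
tar -cf - mydir | nc receiver.example.com 9000
```

Nothing touches disk in between: tar writes the archive to stdout, nc forwards the bytes, and the remote tar unpacks them as they arrive.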


I asked, because the one from nmap actually supports multiplexing.


He is right, but I feel fat now


Stupid question: how do you solve (de)fragmentation and out-of-order delivery with an unreliable protocol? What are the cases, and is it possible to keep it simple?


You generally only use this for data where only the latest update matters. So if you get "Packet 18" "Packet 30" "Packet 19", you take 18, then 30, then ignore 19. The canonical example is an object's position; usually if you know what something's position is at 30 then information about where it was at 19 is so out of date it's useless.
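The 18/30/19 rule above fits in a few lines (names here are illustrative, not from any real engine):

```python
def apply_updates(packets):
    """packets: (seq, position) pairs in arrival order.
    Keep only updates newer than anything already seen."""
    latest_seq, position = -1, None
    for seq, pos in packets:
        if seq > latest_seq:      # newer than the latest accepted update
            latest_seq, position = seq, pos
        # else: stale (e.g. 19 arriving after 30) -> silently dropped
    return latest_seq, position

# Arrival order 18, 30, 19: 18 and 30 are applied, 19 is ignored.
print(apply_updates([(18, "a"), (30, "b"), (19, "c")]))
```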

The gain here is that you typically get the latest information as fast as possible. The downsides, of course, are that there's a pretty steep decline on connections that are even a little high latency or lossy. UDP drops packets even on wired LAN, for example. Practically all games use this method, and they all have lots of smoothing and prediction tech to make things seem like you're getting constant position updates -- which you almost certainly aren't.

You can also use this for data that doesn't matter -- although if it doesn't matter you should question why you're sending it in the first place.


If you need to 'solve' those, you use a reliable protocol instead. Like TCP.

In terms of actual implementation, protocols that do this need to keep state, regularly notify the other side of what they've received, and retry if packets appear to have been lost. Doing it 'simply' is easy; maximising efficiency/performance is harder.
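As a toy illustration of that state-keeping (all names assumed, and a simulation rather than real sockets), a stop-and-wait loop over a lossy channel looks like:

```python
import random

def transfer(data, loss_rate=0.3, seed=0):
    """Deliver every item in order despite simulated packet loss.
    Simplification: data packets can be lost, acks cannot."""
    rng = random.Random(seed)
    received, expected = [], 0
    for seq, item in enumerate(data):
        while True:                        # sender: retry until acked
            if rng.random() < loss_rate:   # packet lost in transit
                continue                   # timeout fires -> resend
            if seq == expected:            # receiver: keep new packets,
                received.append(item)      # drop duplicates
                expected += 1
            break                          # ack reaches the sender
    return received

# Every item arrives exactly once, in order, despite the simulated loss.
print(transfer(["a", "b", "c"]))
```

Real protocols pipeline many packets in flight and handle lost acks too, which is where the efficiency work goes.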

The other answer gives a good example of when you don't actually want to fix the issues you mention.

