Hacker News

The problem isn't CURL, but HTTP. It's actually surprisingly complex and really needs to die. That'll never happen though.


Plenty of applications use TCP and UDP, or even custom protocols on top of raw IP.

Not everything needs to come through the browser.


Not everything on top of HTTP comes through the browser, either ;)


More and more APIs use HTTP as the transport for various reasons, but one of them is that HTTP has accounted for a large number of edge cases and has been tested "in the fire" - rolling your own custom API with security and encryption is much more fraught with danger.


But I would bet that those only use the GET or POST functionality of HTTP, and that the real reason it's used is because it's easy to quickly get something up and running.

It's a short-term gain, but quite inefficient in the long run. ZeroMQ is a perfectly valid replacement for HTTP in internal networks. There's no reason for everything to be human readable.
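The point that internal traffic needn't be human-readable can be sketched with a minimal length-prefixed binary framing over a socket pair (plain stdlib sockets here rather than ZeroMQ, purely as an illustration of the idea):

```python
import socket
import struct

def send_msg(sock, payload: bytes) -> None:
    # Prefix each message with a 4-byte big-endian length header.
    sock.sendall(struct.pack("!I", len(payload)) + payload)

def _recv_exact(sock, n: int) -> bytes:
    buf = b""
    while len(buf) < n:
        chunk = sock.recv(n - len(buf))
        if not chunk:
            raise ConnectionError("peer closed connection")
        buf += chunk
    return buf

def recv_msg(sock) -> bytes:
    # Read the fixed-size header first, then exactly that many bytes.
    (length,) = struct.unpack("!I", _recv_exact(sock, 4))
    return _recv_exact(sock, length)

if __name__ == "__main__":
    a, b = socket.socketpair()
    send_msg(a, b"hello")
    print(recv_msg(b))  # b'hello'
```

ZeroMQ handles the framing, reconnection, and fan-out patterns that this sketch leaves out; the framing itself is the whole "protocol".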


Except HTTP 2.0/3.0 has very little to do with HTTP 1.0, and is more akin to use TCP/UDP directly anyway.

It just happens to get a free pass over port 80.


And that's the other major reason - if your traffic LOOKS like web traffic, it will slip through firewalls and other middleware devices.


Micro-services, for example, are pretty much all HTTP + JSON. The only place for HTTP is the browser, and purely for historical reasons. Using it for anything server-to-server is a waste of network and CPU.
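The network-overhead point is easy to quantify: the same record serialized as JSON versus a fixed binary layout (a hypothetical three-field record, stdlib only):

```python
import json
import struct

# Hypothetical sensor reading: id (u32), temperature (f64), flags (u16).
record = {"id": 12345, "temperature": 21.75, "flags": 3}

json_bytes = json.dumps(record).encode()
# "!IdH" = big-endian u32 + f64 + u16 = 14 bytes total.
packed = struct.pack("!IdH", record["id"], record["temperature"], record["flags"])

print(len(json_bytes), len(packed))  # the binary form is a fraction of the JSON size
```

And that's just the body; HTTP headers typically add another few hundred bytes per request on top.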


Thankfully it is going away with gRPC and friends.
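For reference, a gRPC service is declared in a .proto schema and compiled into typed client/server stubs; a minimal sketch with purely illustrative names:

```proto
syntax = "proto3";

// Hypothetical service; the names are illustrative only.
service Echo {
  rpc Say (Msg) returns (Msg);
}

message Msg {
  string text = 1;
}
```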


Which is built on HTTP/2, another monster, not even fully implemented by web browsers - which is why gRPC doesn't work from web browsers.


Just as we managed to do that so well with SMTP and FTP ;)


but curl is so *much* more than just http.


Which is why it is cURL and not cHTTP



