
Let's be honest here. Curl to 98% of people is http/1 requests with some post params and maybe a JSON body, with some custom headers. Most language standard libs (which might themselves be using libcurl) can facilitate writing that relatively quickly. And probably with a more 'modern' CLI.

Curl though, is a very wide breadth project. It currently supports the following protocols: DICT, FILE, FTP, FTPS, GOPHER, HTTP, HTTPS, IMAP, IMAPS, LDAP, LDAPS, MQTT, POP3, POP3S, RTMP, RTMPS, RTSP, SCP, SFTP, SMB, SMBS, SMTP, SMTPS, TELNET and TFTP.

Each one of those is a massive undertaking of protocol/spec management, edge case handling, cross compatibility, hacks, and much much more to make them work. Then to put a stable C API on top of it all to make a cross-language toolkit is a MASSIVE undertaking.

But, again, the "idiots" who posted those comments aren't talking about any of this. Perhaps they shouldn't have to care. Most folks I know don't even know about the other protocols curl supports and have only interfaced with curl through its HTTP support. Frankly, there are nicer HTTP CLIs out there with less code that _can_ be written in a weekend (assuming they piggyback on an HTTP lib).

What Daniel Stenberg achieved is giving the world a fantastic, reliable, cross protocol, cross platform and cross language network library that can be used as the foundation for many projects. I'm sure few of those cited would claim they could do that.



A question that arises out of this is, should the 90% use cases be handled by a small, simple tool, or by the tip of the iceberg of a large, complex tool such as curl?

I can see the advantages of standardizing on a complex, powerful tool for simple use cases. For one thing, it may be the only way to standardize: simple versions are too easy to write, and therefore you get dozens of competitors, none of whom are popular enough to shake out their edge case bugs.

It's also nice not to have to find, install, and learn a new tool when you stray over that 90% boundary.

With software dependencies, I think the advantages of small, simple libraries win out over generality. Supporting powerful use cases makes an API more complex and often means that simple, 90% use cases require weird incantations to make them work. Here I think of the times I've had to get into the guts of Jackson despite never doing anything remotely exotic with JSON. The 95% of your codebase that can be simple should be simple, so you can devote your attention to the things that need to be complex.


Sometimes yes, sometimes no.

Counter-example: Jenkins. It does what you ask of it, its base install is "naked" and only contains the minimum functionality in the core.

Everything then becomes a plugin. Git. GitHub. Branch for multi-branch pipelines. Credentials management. And on and on and on.

Now you have to stay on top of maintaining the plugins in addition to the core. Also, many plugins require other plugins, so just to do some basic stuff like set up a multi-branch pipeline from a GitHub repo you're suddenly staring down the barrel of dozens and dozens of bespoke plugins with varying levels of quality and support.

A monolithic application like curl is a dream to me by comparison. Everything is tested in every release. Sub-components are kept up to date by the maintainer. No plugins fighting each other's plugins.

From afar it's easy to praise simplicity and modularization, but honestly monoliths can be undervalued too.


I can definitely see your point, having experienced the same thing with plugins for SBT, the Scala build tool. I didn't really consider the case of a small core with a multitude of plugins as a twist on the small, simple tool. I think you're right that a plugin architecture lets a thousand flowers bloom, but you don't get long-term stability, because people move on to other tools and stop maintaining the plugins they wrote.

For example, VSCode plugins are great because VSCode is thriving, and Emacs packages are a crapshoot because many of the programmers who wrote them have moved on. Eventually VSCode plugins will be like Emacs packages.


Also: node. Everything is a module, and every module requires a hundred more. Projects with thousands of dependencies become common. No one understands what is actually “under the hood” and hardly anyone cares. “It just works” most of the time. Good enough.

Sad.


well written monoliths are the dream.....

curl is a utility, like power.

you don't worry about electricity not being able to power your TV because it was designed for light bulbs, it just works.


I would be surprised if nobody had tried to make a mini-curl that could go in busybox. The idea of having a tiny version of the program which handles the 90% cases by itself but can call out to the real-deal bigger brother when necessary is a nice one. This sort-of happens already with lots of common tools which are shadowed by shell builtins, why not curl?


Perhaps not exactly what you are looking for but there is: https://curl.se/tiny/


For embedded systems we'll usually build our own curl with the exact protocol features we need.

No surprise but curl can do that too!


> A question that arises out of this is, should the 90% use cases be handled by a small, simple tool, or by the tip of the iceberg of a large, complex tool such as curl?

But nobody is forced to use curl? People use it because it's convenient, shoot themselves in the foot, and then lash out at the author of the tool for their own choices. Where's the fault?


> But nobody is forced to use curl?

I like curl, but this isn't true. When you're SSH-ing onto a box, you often don't have permissions to install your favorite CLI tool, and even if you do have said permissions it's inconvenient to have to install it each time you SSH onto the box (not a major inconvenience, mind you). Moreover, in many cases you need to run a script or some other software that depends directly on curl.

In general the "nobody is forced to use it" arguments rarely pan out (I remember this was a canned argument from C++ folks circa 2011: "C++ is the best language because it has every feature and if you don't like some features, you aren't forced to use them!").


Well, your distro chose to include one lib that can be used with most protocols out there. They include the multi-tool, and many of the apps also included on the box require the tool.

If you want to use something else, you have to install it on the host. Every box comes with bash, but we still install other languages and frameworks on the host so we can run our applications with tools that make sense.

If you want a different tool, make it part of your default install.


The point is that you’re not always the person who gets to decide which distro, which tools to install atop the distro, or which dependencies your scripts will use. If you own those choices, of course you can add in your own tool, but you frequently don’t own those choices.


But can't this charge be levied against all tools and utilities? If you don't have permission to bring your own tools, it stands to reason that you'll have to use the tools already in place, be that curl or some other random assortment of literally anything else. I don't much see the moral basis behind showing up at the construction yard and then lamenting your lack of choice simply because your employer only brought Makita-brand tools.


No one picks curl to be on a box; it's a core library for everything else on the host. It's not a Makita drill, it's more like electrical power at the site, and this guy is complaining that it runs on gas or propane.

Doesn't stop anyone else from using a power saw or charging batteries. If you need propane, bring it.


I guess I don't understand the complaint? You're worried that other people are using it for their own projects? The reason it's on every box is because it exposes the API and it's a library for half the shit on the host.

If you don't get to choose anything on the host, why are you concerned with what other people use? If you install your own apps, add whatever lib you want.


I feel the alternative is not better; a lightweight, bare-minimum lib and no ability to install your own is definitely worse.


For sure, try to build out a Linux host without libcurl.


I agree. At any rate, the system default curl is rarely compiled with all those features enabled, so in the end one sticks to the http/https/ftp/ftps subset of curl all the time.


Yeah, very easy to write an HTTP client on top of Berkeley sockets. Until chunked responses come into play, and HTTPS, and HTTP/2, and HTTP/3...

(This is actually something I'm very worried about: it used to be easy to cobble together a very basic HTTP client, but not anymore with the new HTTP protocol versions and all the bells and whistles.)
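To give a feel for why even plain HTTP/1.1 is harder than raw sockets suggest, here is a rough sketch of decoding a chunked transfer-encoded body. This is a simplification I wrote for illustration (it ignores chunk extensions beyond stripping them, and trailer headers entirely); it is not how any particular client implements it:

```python
def decode_chunked(body: bytes) -> bytes:
    """Decode an HTTP/1.1 chunked transfer-encoded body (simplified sketch)."""
    out = b""
    pos = 0
    while True:
        # Each chunk starts with its size in hex, terminated by CRLF.
        crlf = body.index(b"\r\n", pos)
        size = int(body[pos:crlf].split(b";")[0], 16)  # strip any chunk extension
        if size == 0:
            break  # a zero-sized chunk marks the end of the body
        start = crlf + 2
        out += body[start:start + size]
        pos = start + size + 2  # skip the chunk data and its trailing CRLF
    return out
```

For example, `decode_chunked(b"4\r\nWiki\r\n5\r\npedia\r\n0\r\n\r\n")` yields `b"Wikipedia"` — and that's before you get to connection reuse, TLS, or HTTP/2 framing.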


And of course, CURL deals with multiple protocols as the OP mentioned. I've used libcurl for sending emails and FTP but mainly for HTTP.

Using the multi interface, it's relatively trivial (a few hundred lines of C) to fetch using hundreds or thousands of concurrent connections. There are even some courtesy opts for rate limiting per host.

It's so good he/they built c-ares for asynchronous DNS lookups, IIRC something that wasn't so readily available in years gone by.


Agreed. They're harder to debug, MITM, trace, etc.

As a silver lining: I have been less worried recently, seeing that most of the adoption of http/2-3 (at least their odd features) has been at the edge. Most developers are still writing http/1 endpoints and leaving the edge to optionally up-convert them for transport efficiency.


> What Daniel Stenberg achieved is giving the world a fantastic, reliable, cross protocol, cross platform and cross language network library that can be used as the foundation for many projects.

One of which, curl(1), is a constant ass-and-life saver for me personally and professionally. Of course the library itself ends up being used by me as well (in C, C++ and Tcl). curl to me is in the league of sqlite aka "absolutely gorgeous and more often than not all you need"


In the midst of so much open source drama and angst, I'm still just profoundly grateful for Daniel Stenberg's work on curl.

I'm no expert on open source management but Daniel has for years been a positive and steady force working on curl.

This is him publishing the direct criticisms he's receiving as "thanks" for his efforts and it guts me he's got to put up with these ignorant comments.


Thanks for sharing this insight, I hadn't realized it was capable of all those protocols.

I fall firmly into "those 98%" you mention.

Dlang's stdlib (only) implementation for making network requests is quite literally a direct "curl" wrapper:

https://dlang.org/phobos/std_net_curl.html

You've changed my mind on this being an odd design decision.

Baking in libcurl for the user (or optionally allowing them to dynamically link) if they want networking was maybe the most sane/pragmatic choice you could have made for a new language.


True. I'm kind of surprised (again) by Daniel, after 20 years of maintaining curl, being so salty about "people not appreciating the achievement". Yes, for the overwhelming majority of users curl in 2021 is just a simple http/https request client. And, yes, in 2021 that could be written over a weekend with a better CLI, mainly because most languages already have all the important stuff implemented in a [standard] library (ironically, sometimes via libcurl).


> I'm kind of surprised (again) by Daniel after 20 years of maintaining curl being so salty about "people not appreciating the achievement"

You're surprised by him being salty? I can't imagine what it's like to be on the receiving end of the water torture of people continually belittling the work you have done over 20 years and given to the community.


Not to mention the guys who threatened to kidnap/kill him if he didn't fly like 10000km immediately, at his own expense of course, to solve their bug pro bono.


That was honestly my first thought. Of course it's easy to rewrite curl nowadays. It can't be anything more complicated than slapping a command line interface on top of libcurl...


At first I laughed... then I thought about the sheer number of command line options that you can find in the man pages. After thinking about that, even writing the command line interface would be a non-trivial undertaking.


Documentation takes work too.


I took it as more bemused, and given the audience I suspect it's more of a "look and laugh".


I suspect curl and rsync power more things on the internet than people realize. They are both completely amazing tools that I use almost daily.


> Let's be honest here. Curl to 98% of people is http/1 requests with some post params and maybe json body, with some custom headers.

100%, that's why we see so many smaller projects pop up (on GitHub and the like) that support basically just this. No shade to those projects, improving the UI for this subset is a worthy cause, it's just not anything near cURL.


Curl has been a life saver for me when I had to interact with a legacy third-party server. The Python requests module was regularly hanging when making requests to this server, whereas I never observed these issues with curl. Curl appears to be better at dealing with such legacy edge cases, as it has been around for more than two decades.


Why even get mad at curl? If you hate curl and all you want is a simple http/1 client just spend a few minutes to write a Python script w/requests that takes verbs and payloads from the command line rather than make the curl author's day shitty.
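The parent's suggestion really is a few minutes of work. A minimal sketch of such a script — written here with the stdlib urllib instead of requests so it has no dependencies; `build_request` and `main` are names I made up for illustration:

```python
import sys
import urllib.request


def build_request(method, url, payload=None):
    """Build an HTTP request from CLI-style arguments: verb, URL, optional body."""
    data = payload.encode() if payload is not None else None
    return urllib.request.Request(url, data=data, method=method.upper())


def main(argv):
    # Usage: mini-curl METHOD URL [BODY]
    req = build_request(argv[1], argv[2], argv[3] if len(argv) > 3 else None)
    with urllib.request.urlopen(req) as resp:  # this is the actual network call
        sys.stdout.write(resp.read().decode(errors="replace"))


if __name__ == "__main__":
    main(sys.argv)
```

Of course, this covers only the happy path — no redirects policy, no retries, no proxies, none of the legacy-server edge cases curl has accumulated fixes for.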


I'm one of the 98%, and I don't like using cURL for this reason -- the CLI feels clunky and its gigantic number of options are distracting for my simple use case.

Can anyone recommend a good alternative for the base case?


https://httpie.io/ is pretty nice.


I recognize the huge amount of time and effort that went into making cURL, but I agree that the gigantic number of options is distracting. When I read the cURL man page, I have a hard time finding what I want because there are literally dozens of screens of options.

That said, I still will reach for cURL even when simpler options exist because it's ubiquitous. Same thing with Bash, grep, and sed.


I've loved using `xh` lately.

https://github.com/ducaale/xh


wget maybe


> SMB

Woah. This is genuinely impressive.


Yeah. I once thought I would have a look at what it would mean to just send a single file over SMB. What better way than to just reverse engineer the little bits of communication between a client and a server?

The simplest solution would have been to give up the instant I started reading the captured traffic. I gave up after about 40 minutes of trying to figure out where to start.


As someone who fell into the category of:

> "Most folks I know don't even know about the other protocols curl supports and have only interfaced with curl through its http."

Just curious if you could elaborate on this one specifically and why you found it especially interesting.


SMB is... not very straightforward. I needed to reverse engineer it once to write an exploit, and it's difficult to properly formulate even basic requests, let alone the more complex stuff the protocol supports.


This begs the question of why all of these protocols should be in one library, instead of one each.

If a project is only going to ever use http, why would I need to bundle in all of the others?

Genuinely curious what the advantages are


What better thing to do with a universal resource locator syntax than to have a universal client?

> If a project is only going to ever use http, why would I need to bundle in all of the others?

Your project might only ever think to use http, but my project might appreciate having one client for any URL, instead of having to parse URLs myself and decide on separate clients based on a part of it.
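What that dispatch looks like if you roll it yourself is roughly the following — a hypothetical sketch with made-up handler names, standing in for the per-protocol clients you would otherwise have to find and wire together (libcurl does the equivalent internally):

```python
from urllib.parse import urlparse

# Hypothetical per-protocol handlers; each would be a separate
# client library if you weren't using a universal one like libcurl.
def fetch_http(url):
    return f"http fetch of {url}"

def fetch_ftp(url):
    return f"ftp fetch of {url}"

HANDLERS = {"http": fetch_http, "https": fetch_http, "ftp": fetch_ftp}

def fetch(url):
    """Dispatch any URL to the client for its scheme."""
    scheme = urlparse(url).scheme
    try:
        return HANDLERS[scheme](url)
    except KeyError:
        raise ValueError(f"no client for scheme {scheme!r}")
```

With one library behind a single `fetch`, adding a protocol is a config change rather than a new dependency with its own API to learn.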


It is a good library API with good support for async and that sort of thing. As a user it is less work to integrate the Nth protocol in a library I understand than to read the docs and try to reverse engineer some new API model of how to do things, including async, configuration, TLS/cert management, etc. There are a thousand ways to skin the cat, and libcurl is a good one. I want to learn a bunch of new distributed protocols because those are real things; I don't want to learn a bunch of different APIs to do the same things, because those are arbitrary wrappings of the real thing. If an API is successful enough it becomes real, but cleverness in API design isn't that useful without the popularity.


My first thought is flexibility and interoperability. If you have one tool, cross platform, that supports a wide variety of protocols, you can use it on both ends of a connection, and easily swap out protocols without having to: 1) install a new tool on both sides 2) learn a new API for each protocol 3) bug check for it

It also lets you do all this dynamically, so switching protocols on the fly is trivial.



They can rewrite curl (http(s) only) in Python using requests library.



