* Wget's the interactive, end-user tool, and my go-to if I just need to download a file. For that purpose, its defaults are saner, its command-line usage is more straightforward, its documentation is better organized, and continuing an incomplete download is a single obvious flag (-c); curl can resume too (-C -), but you have to go digging for it.
* Curl's the developer tool: it's what I'd use if I were building a shell script that needed to download. The command-line tool is more unix-y by default (it outputs to stdout), and it's more flexible in its options. It's also present by default on more systems; of note, OS X ships curl but not wget out of the box. Its backing library (libcurl) is also pretty nifty, but not really relevant to this comparison. (A quick side-by-side of the defaults is below.)
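To make the default-behavior difference concrete (the URL here is just a placeholder):

    wget https://example.com/file.tar.gz     # saves ./file.tar.gz, shows a progress bar
    curl https://example.com/file.tar.gz     # streams the body to stdout, ready for a pipe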
This doesn't really need to be an "emacs vs. vim" or "tabs vs. spaces"-type dichotomy: wget and curl do different things well and there's no reason why both shouldn't coexist in one's workflow.
> This doesn't really need to be an "emacs vs. vim" or "tabs vs. spaces"-type dichotomy: wget and curl do different things well and there's no reason why both shouldn't coexist in one's workflow.
Totally agree. I love curl for manually testing API requests and responses. It's usually a huge part of finding my way around a new API that doesn't have a client library available for whatever language I'm using at the time.
I also use it for weird requests that need special headers or authentication.
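Something like this, for example (the endpoint and token are made up):

    curl -H 'Accept: application/json' \
         -H 'Authorization: Bearer s3cr3t' \
         https://api.example.com/v1/widgets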
Wget is the first thing I turn to when I'm downloading anything remote from the command line or scraping some remote content for analysis.
Yes, wget is fantastic for mirroring www and ftp sites and I use it a lot for that purpose. It's magic [0]. I hadn't realized that it didn't support compression though, which might explain why it's so slow in some cases. Not normally a problem as it just runs in the background on a schedule.
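For anyone curious, the usual mirroring incantation looks something like this (real wget flags, placeholder URL):

    wget --mirror --convert-links --page-requisites --no-parent https://example.com/docs/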
Curl supports gzip and deflate. It would be great if support for sdch and br were added too. Brotli is in Firefox 44 and can be enabled in Chrome Canary with a flag. SDCH has been in Chrome for a while and is used on Google servers.
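Worth noting that curl only negotiates compression if you ask for it: --compressed sends the Accept-Encoding header and transparently decodes the reply, though as far as I know it only works if the binary was built with zlib support:

    curl --compressed https://example.com/page.html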
The latest Win64 build of curl doesn't seem to support gzip or deflate. I have to remove those options when copying a request from Chrome's developer tools and pasting it into a script. I'd report a bug, but their page doesn't seem to have an obvious link for that.
Pretty much. If I want to save a file: wget. If I want to do _anything_ else: curl. Yes you can write files with curl, no I don't use that functionality very often. I don't think of them as "end user" vs "developer" use cases so much as them being two great tools for different tasks. I do wish that -s was the curl default, since that stderr progress output is pretty lame.
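The incantation I end up typing is something like (placeholder URL):

    curl -sS -o file.tar.gz https://example.com/file.tar.gz   # -s: no progress meter, -S: still print errors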
Stupid question, but how do things like this resume from where they left off? Wouldn't the server need to cooperate? Is that built into HTTP?
This is also how download accelerators worked (back in the late nineties and early aughts): they opened several connections, each fetching a different byte range, to maximize bandwidth usage.
Why? How could it be more useful? HTTP byte ranges are incredibly flexible, since one request can specify many byte ranges at once (it's almost too flexible: a dumb server can easily be overwhelmed by a maliciously complicated request).
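e.g. curl's -r will take several ranges in one shot (the server answers with a multipart/byteranges body, which curl writes out raw; placeholder URL):

    curl -r 0-99,500-599 https://example.com/file.bin -o parts.bin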
It handles the basic case of fetching the remainder of an interrupted download, and it can also serve partial downloads, e.g. a video stream where the user jumps to different places in the movie.
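Mechanically, resuming is just a Range header plus a 206 Partial Content reply; wget's -c works out the offset to ask for from the size of the partial file on disk. A rough sketch (the byte offset and URL are made up for illustration):

    wget -c https://example.com/big.iso
    # roughly equivalent to hand-rolling it:
    curl -H 'Range: bytes=1048576-' https://example.com/big.iso >> big.iso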
The article is not coming up for me; perhaps it's been ycombinated. Anyway, I agree curl is more of a developer tool, although using it to download files is not the first thing that comes to mind. I use it daily to identify caching issues and redirect issues. The -SLIXGET flags in particular are very useful for this.
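For the curious, that bundle expands to -S (show errors), -L (follow redirects), -I (print headers only), and -X GET (force a GET, since some servers answer HEAD requests differently). Something like this, with -s added to silence the progress meter and a placeholder URL:

    curl -sSLIXGET https://example.com/asset.js | grep -iE 'HTTP/|cache-control|location'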
Huh. I've always thought of them as the opposite: wget is the full-featured spidering tool, and curl is the easy one to run when I need a quick command-line fetch or want to bang web stuff into a janky copy-and-paste workflow.
In that case I'd love for it to include wget-style download resuming, though; but given the differences, I don't think merging the two tools would be more useful than just adding that feature to curl.
Edit: never mind, I just learned that wget can download a page's resources and even rewrite the links to point at the local copies (-k, apparently); that's a fair way off from curl's purpose. Better to keep them separate tools, although wget might benefit from using libcurl so it doesn't have to implement a lot of things like HTTPS or HTTP/2 itself.
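e.g. (placeholder URL):

    wget -p -k https://example.com/article.html   # -p: grab page requisites, -k: rewrite links to local copies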
I totally agree, except that over the years I've been using aria2 more and more instead of wget. aria2 supports HTTP/HTTPS, FTP, SFTP, BitTorrent, and Metalink, with the same sane syntax and defaults as wget.
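Its CLI is aria2c, and the big win is splitting one download across connections; e.g. (placeholder URL):

    aria2c -x4 -s4 https://example.com/big.iso   # -x: max connections per server, -s: number of splits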
curl is the one where I can never remember whether to use -o or -O when trying to download a file under its original filename, so I just use wget instead, because that's faster than reading the curl man page.
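For anyone else who always forgets, the mnemonic that finally stuck for me is that capital -O keeps the Original (remote) name:

    curl -O https://example.com/file.tar.gz                  # saves ./file.tar.gz
    curl -o renamed.tar.gz https://example.com/file.tar.gz   # saves ./renamed.tar.gz
    wget https://example.com/file.tar.gz                     # no flag needed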
I have the same problem. Curl is "unix-y" in the sense that the default options are optimized for a shell script and make no sense for interactive usage.
bropages is where it's at for that kind of stuff. curl is actually their usage example :)
Their example will just dump the webpage to stdout, which is almost certainly not the desired behavior given the comment. Then they include a second example for a use case that almost nobody has, instead of giving the option everyone is actually looking for.