There was a bit of a PR "war" between curl and wget a long time ago. At some point curl just became everyone's go-to and even the reference implementation for HTTP.
Many moons ago, I was in the small-ISP business, and I discovered the aftermath of an attempted script-kiddie hack on one of our servers. When I examined the logs I realized we were extremely vulnerable to the remote code execution exploit, but had been completely saved by two things: they kept trying to use curl to install the payload, but we only had wget installed; and their scripts were extremely Linux-centric, but we were running FreeBSD.
That's funny, I remember the exact same thing. It was either Joomla or WordPress for us, and we were saved only by virtue of having everything in jails, and those jails having a very limited (and un-Linux-like) userland.
Curl's success has more to do with its quality (and ubiquity) as a C library. I think the curl CLI somewhat got "taken along for a ride" with all the improvements the curl lib picked up on its way to becoming the de-facto standard HTTP library.
Because the library gets ported to every new platform anyway (it's what people use, so it's among the first things ported to any new architecture), the CLI gets support for everything new "for free" and can outcompete wget, because it's always _there_ and works the same.
Now, there's also the fact that the curl CLI is a fantastic piece of software in its own right, but in terms of features I don't think there's that much between curl and wget for simple CLI use cases (but I still use curl).
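To give a sense of why the library is so easy to lean on, the whole common case is more or less this (a minimal sketch of the libcurl "easy" interface; the URL is just a placeholder):

    /* Fetch a URL and write the body to stdout (libcurl's default behaviour).
     * Build with something like: cc fetch.c -lcurl */
    #include <stdio.h>
    #include <curl/curl.h>

    int main(void)
    {
        curl_global_init(CURL_GLOBAL_DEFAULT);
        CURL *h = curl_easy_init();
        if (!h)
            return 1;

        curl_easy_setopt(h, CURLOPT_URL, "https://example.com/");  /* placeholder URL */
        curl_easy_setopt(h, CURLOPT_FOLLOWLOCATION, 1L);           /* follow redirects */

        CURLcode rc = curl_easy_perform(h);   /* body goes to stdout by default */
        if (rc != CURLE_OK)
            fprintf(stderr, "curl: %s\n", curl_easy_strerror(rc));

        curl_easy_cleanup(h);
        curl_global_cleanup();
        return rc == CURLE_OK ? 0 : 1;
    }

Most language bindings wrap roughly this handful of calls, which is part of why it tends to be the first thing available on any new platform.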
wget (and HTTrack) can parse downloaded HTML and CSS to find additional URLs to fetch recursively, and mirror entire sites that way, while curl is "just" a very, very complete HTTP client.
Though wget isn't perfect for mirroring either: it will typically download large numbers of resources redundantly, differing only in URL query params even though they refer to the same content, such as the comment links on typical WeirdPress sites. Ideal would be a customizable HTTP client based on libcurl, with HTTP/3, auth token, and keepalive support etc., driven by "href handlers" triggered by event-driven markup parsers for SGML (though CSS and JS imports need special treatment).
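Something along those lines could look roughly like this (very much a sketch: the strstr() scan is a crude stand-in for a real event-driven markup parser, handle_href() is just an invented name for the handler, and the URL is a placeholder):

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <curl/curl.h>

    struct buf { char *data; size_t len; };

    /* libcurl write callback: accumulate the response body in memory. */
    static size_t collect(char *ptr, size_t size, size_t nmemb, void *userdata)
    {
        struct buf *b = userdata;
        size_t n = size * nmemb;
        char *p = realloc(b->data, b->len + n + 1);
        if (!p)
            return 0;                          /* abort the transfer */
        b->data = p;
        memcpy(b->data + b->len, ptr, n);
        b->len += n;
        b->data[b->len] = '\0';
        return n;
    }

    /* The "href handler": drop the query string so /post/1?replytocom=42 and
     * /post/1?replytocom=43 collapse to the same URL before deduplication. */
    static void handle_href(char *url)
    {
        char *q = strchr(url, '?');
        if (q)
            *q = '\0';
        printf("would enqueue: %s\n", url);    /* real code: dedupe + schedule fetch */
    }

    int main(void)
    {
        struct buf body = {0};
        curl_global_init(CURL_GLOBAL_DEFAULT);
        CURL *h = curl_easy_init();
        curl_easy_setopt(h, CURLOPT_URL, "https://example.com/");  /* placeholder */
        curl_easy_setopt(h, CURLOPT_FOLLOWLOCATION, 1L);
        curl_easy_setopt(h, CURLOPT_WRITEFUNCTION, collect);
        curl_easy_setopt(h, CURLOPT_WRITEDATA, &body);

        if (curl_easy_perform(h) == CURLE_OK && body.data) {
            /* Crude scan standing in for a parser firing an event per href attribute. */
            for (char *p = body.data; (p = strstr(p, "href=\"")) != NULL; ) {
                p += strlen("href=\"");
                char *end = strchr(p, '"');
                if (!end)
                    break;
                *end = '\0';
                handle_href(p);
                p = end + 1;
            }
        }
        curl_easy_cleanup(h);
        curl_global_cleanup();
        free(body.data);
        return 0;
    }

A real mirroring tool would of course also resolve relative URLs, keep a visited set, and reuse connections; the point is just that the fetching side is already solved by libcurl and only the URL-discovery policy needs to be pluggable.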
Curl certainly isn't my go-to, because it's so complicated. For most uses I can just "wget http://blahblah" and it just works. With curl I have to look up the flags to save the file, etc. Curl is certainly more powerful, but that power comes with a steep usability cost for the common case.