wget (and HTTrack) can parse downloaded HTML and CSS to find additional URLs to fetch recursively, and can mirror entire sites that way, while curl is "just" a very, very complete HTTP client.
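For reference, a typical wget mirroring invocation looks something like this (all real flags per the wget manual; example.com is a placeholder):

    wget --mirror --convert-links --adjust-extension --page-requisites --no-parent https://example.com/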
wget isn't perfect for mirroring, though: it will typically download large numbers of resources redundantly when URLs differing only in query params refer to the same content, as on typical WordPress sites with comment links. Ideal would be a customizable HTTP client based on libcurl, with HTTP/3, auth token, and keepalive support etc., extensible for example via "href handlers" triggered by event-driven markup parsers for SGML (though CSS and JS imports need special treatment).
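To make that concrete, here is a minimal sketch of the idea against the real libcurl easy API: the write callback scans each chunk for href attributes and hands them to a hypothetical on_href() hook, which is where fetch/skip policy (e.g. dropping URLs that differ only in query params) would live. The strstr() scan is a stand-in, not a serious parser; a real implementation would feed the stream to an event-driven SGML/HTML parser and handle attributes split across chunk boundaries.

    /* Minimal sketch: libcurl fetch with a naive "href handler" hook.
     * on_href() and the URL are hypothetical placeholders. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <curl/curl.h>

    /* Hypothetical handler: decide whether to queue this URL. */
    static void on_href(const char *url, size_t len)
    {
        printf("found href: %.*s\n", (int)len, url);
    }

    /* Write callback: scan each received chunk for href="..." attributes.
     * libcurl chunks are not NUL-terminated, so make a terminated copy
     * before using strstr(). (Naive: misses hrefs split across chunks.) */
    static size_t scan_chunk(char *data, size_t size, size_t nmemb, void *userp)
    {
        size_t total = size * nmemb;
        (void)userp;
        char *buf = malloc(total + 1);
        if (!buf)
            return 0; /* returning != total aborts the transfer */
        memcpy(buf, data, total);
        buf[total] = '\0';
        for (char *p = buf; (p = strstr(p, "href=\"")) != NULL; ) {
            p += 6;
            char *end = strchr(p, '"');
            if (!end)
                break;
            on_href(p, (size_t)(end - p));
            p = end + 1;
        }
        free(buf);
        return total; /* whole chunk consumed */
    }

    int main(void)
    {
        curl_global_init(CURL_GLOBAL_DEFAULT);
        CURL *curl = curl_easy_init();
        if (curl) {
            curl_easy_setopt(curl, CURLOPT_URL, "https://example.com/");
            curl_easy_setopt(curl, CURLOPT_WRITEFUNCTION, scan_chunk);
            /* Reusing this handle for queued URLs gives connection
             * keepalive for free; HTTP/3 can be requested via
             * CURLOPT_HTTP_VERSION on builds with HTTP/3 support. */
            curl_easy_perform(curl);
            curl_easy_cleanup(curl);
        }
        curl_global_cleanup();
        return 0;
    }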