
You still need a bundler because browsers will process only ~6 HTTP requests at a time per host, so if your code (with all its dependencies) is split across many JS files you will hit that limit very quickly. HTTP2/3 makes parallel fetching more efficient over the wire but does not change the limit of max concurrency imposed by the browser.



I actually think the main issue isn't the number of requests, but that you can't know which additional files you need to load before loading some of them. Say moduleA depends on moduleB, which depends on moduleC: only after downloading moduleB do you know that you have to download moduleC as well. So with a deep tree this quickly becomes very slow?
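
A minimal illustration of that waterfall (the module names are just placeholders); the browser can't see each import until the file containing it has arrived:

    // moduleA.js: the entry point, fetched first
    import { b } from "./moduleB.js";
    console.log(b);

    // moduleB.js: only once this arrives does the browser learn about moduleC
    import { c } from "./moduleC.js";
    export const b = c + 1;

    // moduleC.js: discovered last, a full round trip after moduleB
    export const c = 41;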


This is the problem Preload headers & Early Hints are meant to help with. https://web.dev/preload-critical-assets/ https://developer.chrome.com/blog/early-hints/

You need some server-side intelligence to analyze each module & determine what preload headers to send. But then the browser knows what to request, even before content starts coming.
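
Roughly, something like this sketch (assuming a plain Node server; the preload map and file layout are made up): once you know which modules a file imports, you can announce them with a 103 Early Hints response and/or a Link: rel=modulepreload header before the body goes out.

    import { createServer } from "node:http";
    import { readFileSync } from "node:fs";

    // Hypothetical precomputed map: module URL -> modules it is known to import.
    const preloadMap: Record<string, string[]> = {
      "/moduleA.js": ["/moduleB.js", "/moduleC.js"],
    };

    createServer((req, res) => {
      const deps = preloadMap[req.url ?? ""] ?? [];
      const links = deps.map((d) => `<${d}>; rel=modulepreload`);
      if (links.length > 0) {
        // Interim 103 Early Hints response (Node >= 18.11), so the browser
        // can start fetching the dependencies before the body is ready.
        res.writeEarlyHints({ link: links });
        // And a regular Link header on the final response as a fallback.
        res.setHeader("Link", links);
      }
      res.setHeader("Content-Type", "text/javascript");
      res.end(readFileSync(`.${req.url}`)); // no error handling; just a sketch
    }).listen(8080);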


If it's calculated at deployment time, it's functionally the same cost as making a bundle, without its benefits.

If it's calculated at runtime, it's an additional cost & delay, plus you'd need a specialized server (or at least middleware) to handle the requests.


Bundles have a colossal disadvantage: change one thing and boom, your user is re-downloading a pretty big bundle. Fine-grained file resolution means apps can grow & evolve with very little user cost.

People harp on and on about the benefits of bundles for compression, but man, it's so shortsighted & stupid. It favors only the first-load situation. If your user actually comes back to your app, those advantages all disappear. Personally I'd rather help the people who use my app regularly.

Second, the days of bundles being better at compression are numbered. Work has been ongoing to figure out how to send compression dictionaries separately. With this, 98% of the compression advantage goes out the window. https://github.com/WICG/compression-dictionary-transport
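
The gist, illustrated with zlib's ordinary dictionary option in Node (the actual proposal negotiates dictionaries over HTTP with its own headers and Brotli/Zstandard encodings, which this sketch doesn't reproduce): the server compresses the new version against the old version the client already has, so only the changed bytes really travel.

    import { deflateSync, inflateSync } from "node:zlib";

    // Pretend v1 is the script the user already has cached, and v2 is the
    // slightly changed version after a deploy. (Toy strings, not real bundles.)
    const v1 = Buffer.from("export function greet(name) { return `Hello, ${name}!`; }");
    const v2 = Buffer.from("export function greet(name) { return `Hello there, ${name}!`; }");

    // Plain compression of v2, knowing nothing about v1.
    const plain = deflateSync(v2);

    // Compression of v2 using v1 as a shared dictionary: bytes the client
    // already has are referenced instead of re-sent.
    const withDict = deflateSync(v2, { dictionary: v1 });
    console.log(plain.length, withDict.length); // withDict should come out much smaller

    // The client, still holding v1, reconstructs v2 from the small delta.
    const restored = inflateSync(withDict, { dictionary: v1 });
    console.log(restored.equals(v2)); // true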

Neither of your approaches sounds like what I'd do. Personally I would build an HTTP server that takes raw uncompressed source. When asked for a file the first time, it compresses it & builds the dependency map in parallel, & saves both of these out, maybe as a .gz with some xattrs on it. Or store that data in memory, whatever. The first request takes a few extra ms, but the server still transparently does the thing. Developer mode is just a tool that watches the file system & clears those caches, nothing more, & can potentially be completely separate.
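
A minimal sketch of that idea (assuming Node; the regex import scanner and in-memory cache are deliberate simplifications): on the first request for a module, gzip it and scan its imports, cache both, then serve the cached copy with modulepreload hints from then on.

    import { createServer } from "node:http";
    import { readFile } from "node:fs/promises";
    import { gzipSync } from "node:zlib";

    // In-memory cache: module path -> { gzipped body, direct imports }.
    // A real server might persist this next to the source instead.
    const cache = new Map<string, { gz: Buffer; deps: string[] }>();

    // Crude import scanner; a real implementation would use a proper JS parser.
    const importRe = /from\s+["']\.(\/[^"']+)["']/g;

    async function load(path: string) {
      let entry = cache.get(path);
      if (!entry) {
        const src = await readFile(`.${path}`, "utf8");
        const deps = [...src.matchAll(importRe)].map((m) => m[1]); // "./x.js" -> "/x.js"
        entry = { gz: gzipSync(src), deps };
        cache.set(path, entry); // first request pays the cost, later ones hit the cache
      }
      return entry;
    }

    createServer(async (req, res) => {
      try {
        const { gz, deps } = await load(req.url ?? "/");
        if (deps.length > 0) {
          res.setHeader("Link", deps.map((d) => `<${d}>; rel=modulepreload`));
        }
        res.setHeader("Content-Type", "text/javascript");
        res.setHeader("Content-Encoding", "gzip");
        res.end(gz);
      } catch {
        res.statusCode = 404;
        res.end();
      }
    }).listen(8080);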

Bundles are just so awful. They complicate what used to be an elegant, understandable, clear world of computing. We can & should try to get back to resources, where it makes sense. And the cards are lining up to make this horrible un-web, un-resourceful kludge potentially obsolete. I'm excited we might make the web make sense again for end users. They deserve to be out of the bad times.


Also, you would need as many round trips as your dependency tree is deep, no? No matter how parallel it is.


> HTTP2/3 […] but does not change the limit of max concurrency imposed by the browser.

No. HTTP/2 allows far more than 6 requests at a time; within a single connection you're limited by the MAX_CONCURRENT_STREAMS value in the server's SETTINGS frame and by the browser's willingness to take advantage of it; AIUI, in Firefox, for example, that limit is 100.[1]

From there, you're limited by connection bandwidth and any brief head-of-line (HoL) blocking caused by dropped packets (but not by HoL blocking at the server).

[1]: https://stackoverflow.com/questions/36835972/is-the-per-host...
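
For illustration (Node's http2 module; the cert file names and handler are placeholders): a server advertises how many concurrent streams it will accept in its initial SETTINGS frame, and that figure, not a fixed 6, is what bounds concurrency on the connection.

    import { createSecureServer } from "node:http2";
    import { readFileSync } from "node:fs";

    // maxConcurrentStreams is sent to the peer in the initial SETTINGS frame;
    // it caps how many requests can be in flight on this connection at once.
    const server = createSecureServer({
      key: readFileSync("server-key.pem"),   // placeholder key/cert files
      cert: readFileSync("server-cert.pem"), // (browsers require TLS for HTTP/2)
      settings: { maxConcurrentStreams: 100 },
    });

    server.on("stream", (stream) => {
      stream.respond({ ":status": 200, "content-type": "text/plain" });
      stream.end("ok");
    });

    server.listen(8443);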


You might be right, and my initial assessment is incorrect. The real reason HTTP/2 doesn't solve the loading problem with many files is the depth of imports across all dependencies: the browser loads the entry file, sees its imports, fetches those URLs, then discovers new imports, starts fetching those, discovers more, and so on recursively. So the slowness comes from the latency of each round trip (easily 50-500 ms), not from how many files the browser has in flight simultaneously, as I assumed. A dependency chain ten modules deep at 100 ms per round trip is roughly a second of pure waterfall before the last module even starts downloading.


With HTTP/2, wouldn't we use one stream per request, with multiplexing allowing a huge number of parallel requests?


HTTP/2 improves on that bottleneck, but not as much as you'd expect. I'm struggling to find relevant benchmarks right now, but anecdotally, even on localhost with a dev pipeline that doesn't bundle (such as Vite), any reasonably complex application takes many seconds to fetch thousands of small JS files.


This is something I’m facing now. Even hundreds of files slow things down. Code splitting is the answer, but that adds some other complexity that we may not want.





