Does this mean that, in theory, I could skip the build/bundling step entirely?
E.g. I could have a backend project in whatever language I wanted, and then, instead of having an npm frontend project, I could:
1. use a jsconfig.json and JSDoc to gradually type with TypeScript, but without needing tsc, since my editor gives me feedback anyway
2. use ES modules to handle splitting and caching the various .js files of my app
3. use this import maps feature so that I can still gzip and add the file's digest to the filename. That way I can safely cache every .js forever, since if there's a change the filename will differ, and the import map means I can still import from friendlyname rather than friendlyname-hashed-value (see the sketch below)
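For point 3, a minimal sketch of what that could look like (file names and hashes here are made up):

    <!-- index.html -->
    <script type="importmap">
    {
      "imports": {
        "app":   "/js/app-3f9c1b.js",
        "utils": "/js/utils-a91d02.js"
      }
    }
    </script>
    <!-- module code keeps importing the friendly names; the map resolves
         them to the hashed, cache-forever files -->
    <script type="module">
      import { start } from "app";   // actually fetches /js/app-3f9c1b.js
      start();
    </script>

When a file changes, only its hash (and the map) changes; every import site stays the same.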
What am I missing from the bundle approach:
Optimal file transfer - all in one JS bundle, but if I have HTTP/2 then I don't care, because the overhead of multiple files is gone
Minify - I don't care since I gzip anyway
Sourcemaps - I don't care, since the files presented to the browser are the ones I wrote; maps are redundant
I quite like React, but I think I'd like simplifying my toolchain more and using something like htmx instead.
EDIT: I want to qualify the "I quite like React" statement. I have pure components only, no useState. For state handling in the small I'm fine with plain useReducer, but Zustand is nicer for my cases since I can also use it to prevent the subtree re-renders of the barebones reducer/context approach.
> all in one JS bundle, but if I have HTTP/2 then I don't care, because the overhead of multiple files is gone
I think everyone who makes this statement has not tried it. While it is somewhat true for A->B, A->C (if not using compression), it is definitely not true for A->B, B->C. The deeper your chain is (regardless of whether it is within the same package or across different ones), the bigger the overhead of fetching, parsing, building the dependency map, fetching dependencies, and repeating.
I say this as someone that has deployed both approaches at scale and A/B tested them.
The benefits of compression should not be understated either. You probably have many repeated things in your files that can be compressed efficiently. Even small stuff like `<div` or `useState` can make a huge difference when you consider them over a larger codebase. This part could have been fixed with SDCH, but only Chrome and LinkedIn seemed to care about that before it was deprecated and removed.
> Does this mean that, in theory, I could skip the build/bundling step entirely?
You can, but you must write your app in something the browser understands (JS not TS, CSS not Sass, etc.) and use native modules. For example, here is the test harness for a custom module, written in pure HTML with no build step: https://github.com/javajosh/simpatico/blob/master/combine2.h.... Here is a more complex (and much older) example from Crockford: https://www.jslint.com/
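To make that concrete, a minimal no-build sketch (file names are hypothetical):

    <!-- index.html: shipped exactly as written -->
    <script type="module">
      // the browser resolves relative imports natively, no bundler involved
      import { greet } from "./greet.js";
      document.body.textContent = greet("world");
    </script>

    // greet.js
    export function greet(name) {
      return `hello, ${name}`;
    }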
And yes, the experience developing this way is quite nice!
Having the code in the browser be the code you wrote is...so refreshing.
I highly recommend it.
> You can, but you must write your app in something the browser understands. And yes, the experience developing this way is quite nice! Having the code in the browser be the code you wrote is...so refreshing.
This just reminds me how quickly the years pass. It's weird for me to think that developers may have never really worked on anything without a build step.
They may not fully understand that the only things the browser understands are HTML, JS, and CSS (yes, and some other stuff, but the big 3).
Not TS, JSX, SASS, etc. Which is very strange, but I know it can happen ... because I personally know someone I had to explain this to, after a career change and a React-focused bootcamp.
My first major project that used JavaScript was in 1996, so that is probably why. JavaScript back then was a bit "primitive". I remember too many years of abject pain. Way too many. Even the next 15 weren't that great, until ES6 arrived.
Now I'll take TypeScript, a build step, and the resulting tree-shaken, minified, transpiled result any day.
Browsers have been very good at incorporating ideas from the community and strengthening the standards every time. Things started getting really good with ES6, HTML5, and CSS3, and now that IE is entirely gone and most browsers are evergreen, it's a much different universe.
Apart from being pleasant and fast to work with, the benefit of coding without a build step is to the community, as it allows us to learn from each other's code, as in the early days.
> the benefit of coding without a build step is to the community
Programmers don't necessarily want things to be easy. It thwarts the ability to practice resume-driven development and extract money from consulting schemes* and/or draw salaries where a substantial part of the worker's dayjob is in dealing with stuff that isn't strictly necessary (or even a good idea to start with). To frame it another way, high barriers to entry are virtually synonymous with better job security whereas lower barriers lead to increased competition in the job market.
* This is more in the realm of "emergent phenomena" and "revealed preference" than it is conscious scheming, but it's not any less real.
You can actually load Babel into the browser and run it there if you want to deliver your script in a language other than JS. I wrote this jsfiddle not that long ago as proof that you can write JSX directly in the page in a <script type="text/jsx"> element: https://jsfiddle.net/smLa1bco/
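For anyone curious what that looks like, a rough sketch using @babel/standalone and the React UMD builds from a CDN (the URLs are just illustrative):

    <script src="https://unpkg.com/@babel/standalone/babel.min.js"></script>
    <script src="https://unpkg.com/react@18/umd/react.production.min.js"></script>
    <script src="https://unpkg.com/react-dom@18/umd/react-dom.production.min.js"></script>

    <div id="root"></div>

    <!-- Babel standalone compiles this tag in the browser at page load -->
    <script type="text/babel" data-presets="react">
      const App = () => <h1>Hello from in-browser JSX</h1>;
      ReactDOM.createRoot(document.getElementById("root")).render(<App />);
    </script>

The obvious trade-off is that every visitor pays the cost of compiling JSX at runtime, so this is more of a prototyping trick than a production setup.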
If you like React but also don't want a build step, take a look at Preact (only 3kb gzipped) + HTM, their alternative to JSX that uses tagged template literals, so it works in the browser (but also can be compiled away in a build step)
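A small sketch of what that looks like, assuming you load both packages from a CDN such as esm.sh (URLs are illustrative):

    <script type="module">
      import { h, render } from "https://esm.sh/preact";
      import htm from "https://esm.sh/htm";

      // bind htm to Preact's h() so tagged templates produce vnodes
      const html = htm.bind(h);

      function App({ name }) {
        return html`<h1>Hello ${name}!</h1>`;
      }

      render(html`<${App} name="world" />`, document.body);
    </script>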
Lit also requires no build step and is shipped only as standard JS modules. It also uses file extensions in all imports so that the required import map to access all files is very short (one index map + one prefix map * 4 core packages). See https://lit.dev
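A sketch of what such a map might look like if you self-host the four core packages (paths and entry-file names here are assumptions, not copied from the Lit docs):

    <script type="importmap">
    {
      "imports": {
        "lit": "/vendor/lit/index.js",
        "lit/": "/vendor/lit/",
        "lit-html": "/vendor/lit-html/lit-html.js",
        "lit-html/": "/vendor/lit-html/",
        "lit-element": "/vendor/lit-element/index.js",
        "lit-element/": "/vendor/lit-element/",
        "@lit/reactive-element": "/vendor/reactive-element/reactive-element.js",
        "@lit/reactive-element/": "/vendor/reactive-element/"
      }
    }
    </script>

The trailing-slash entries are the "prefix maps": because every internal import ends in .js, one prefix per package covers all of its files.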
Lit is definitely on my radar, I'm just trying to figure out a project I want to use it on. I want to update my personal website but Lit's SSR is "experimental" so I'll probably wait and try SvelteKit or something else.
Unfortunately not. Gzip is applied per file, and multiple small gzipped files don't compress as well as a single large one.
Additionally, you get a cascade of downloads if you have multiple levels of imports: it will download a file, parse it, download its dependencies, and so on.
Now this may not be a big deal in some cases, but the overhead is still not gone.
Side note: server push is gone so there’s no way to avoid the cascade.
> Minify - I don't care since I gzip anyway
That's not how it works. The two are complementary: minification can drop a lot of dead code and comments; gzipping alone won't do that.
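A toy example (hypothetical code) of what a minifier with dead-code elimination removes that gzip merely compresses:

    // before: comments and a dead branch still ship to the user
    export function formatPrice(value) {
      // TODO: support currencies other than USD
      const DEBUG = false;
      if (DEBUG) {
        console.log("formatting", value);
      }
      return `$${value.toFixed(2)}`;
    }

    // roughly what a minifier emits: the comment and the dead branch are
    // gone entirely, not just compressed
    export function formatPrice(n){return`$${n.toFixed(2)}`}

Gzip would shrink the first version nicely over the wire, but the browser still has to parse all of it.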
Frankly in the particular case where you have cascading downloads of small files, HTTP/1 is so unbelievably bad at that compared to HTTP/2 (especially in cases where the user-agent throttles TCP connections, like the limits of ~6 in a browser) that the "overhead" argument isn't really relevant because it might imply they're roughly comparable in some way. They aren't, in my experience, it's more like "Does it work" versus "Does it not work." I'm talking like one-or-two orders of magnitude in performance difference, in practice, in my cases.
Server push wasn't ever going to help use cases like this because pushing those files doesn't actually work very well; only the client knows about the state of its own cache (i.e. the server will aggressively push things even when the client doesn't need them). I tried making it work in some cases very similar to the "recursively download based on import structure" problem, and it was always just a complete wash in practice.
103 Early Hints are a better solution for that class of problem where the server has some knowledge over the request/response pattern and the "structure" of the objects its serving for download. They're also easier to support and implement, anyway.
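Roughly what that exchange looks like on the wire (paths are illustrative):

    HTTP/1.1 103 Early Hints
    Link: </js/app.js>; rel=preload; as=script
    Link: </css/site.css>; rel=preload; as=style

    HTTP/1.1 200 OK
    Content-Type: text/html
    ...

The server sends the 103 as soon as it knows which assets the page will need, so the browser can start those fetches while the real response is still being generated (for module graphs, rel=modulepreload is the relevant hint where it's supported).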
You probably still want to use a dependency manager (npm, yarn, etc.) to pull in and self-host your own dependencies. You don't want to pull your dependencies from an external source, as that will only ever make your site less reliable.
> use ES modules to handle splitting and caching the various .js files of my app
Yeah, I was surprised at how well ES modules just worked when I put together this tiny repro: https://idb-perf-repro.netlify.app/ (index.js imports two relative files)
> use this import maps feature so that I can [...] add the file's digest to the filename
So you've still got build tooling and a layer of indirection. Maybe this is simpler, but you'll still need tooling to manage hashing files and creating import maps. I don't think import maps really give you anything here, because if you're renaming all your files on deploy, you may as well just update the imports as well rather than using an import map.
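To put a size on "tooling", something like this is roughly all it takes (a rough Node sketch; the src/ and dist/ layout is an assumption):

    // hash-and-map.mjs: copy src/*.js to dist/ with content hashes in the
    // filenames and emit an import map from friendly names to hashed files
    import { createHash } from "node:crypto";
    import { readdir, readFile, writeFile, mkdir, copyFile } from "node:fs/promises";

    await mkdir("dist", { recursive: true });
    const imports = {};

    for (const file of await readdir("src")) {
      if (!file.endsWith(".js")) continue;
      const contents = await readFile(`src/${file}`);
      const hash = createHash("sha256").update(contents).digest("hex").slice(0, 8);
      const hashed = file.replace(/\.js$/, `-${hash}.js`);
      await copyFile(`src/${file}`, `dist/${hashed}`);
      imports[file.replace(/\.js$/, "")] = `/js/${hashed}`;
    }

    await writeFile("dist/importmap.json", JSON.stringify({ imports }, null, 2));

Whether that still counts as "no build step" is exactly the point: it's small, but it's still tooling.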
> Optimal file transfer - all in one JS bundle, but if I have HTTP/2 then I don't care, because the overhead of multiple files is gone
Test and validate this. Does HTTP/2 really remove all the overhead of requesting hundreds of files vs. one large file? I don't think that is the case.
> but I think I'd like simplifying my toolchain more and using something like htmx instead.
Simplifying your toolchain by removing type safety from your views.
I think you're right, but if you need tree shaking, some sort of packing (minification of variable names, etc.), or JSX, you would still need a build step. I don't know if tree shaking and packing are that relevant for most people, though.
If you aren't packing anything, the browser natively tree-shakes ESM for you. It doesn't load modules that aren't imported, for one obvious thing. In a "loose" package of ESM modules the browser may never even see most of the tree.
Beyond that ESM "module objects" (the proxy that the import represents and is most directly seen in an `import * as someModule` style import) in most of the browsers are also built to be readonly treeshakeable things in that they may not be immediately JIT compiled and are generally subject to garbage collection rules like everything else, and being readonly proxies may garbage collect sooner than average. So the browser will lazily "treeshake" by garbage collection at runtime in a running app. (Though if you are relying heavily on that it likely means your modules are too big and you should consider smaller, looser, less packed modules and relying on the first bit where stuff not imported is never seen by the browser.)
You still need a bundler because browsers will process only ~6 HTTP requests at a time, so if your code (with all dependencies) has many JS files you will be throttled by that limit real quick. HTTP2/3 makes parallel fetching more efficient over the wire but does not change the limit of max concurrency imposed by the browser.
I actually think the main issue isn't the number of requests, but that you can't know which additional files you need to load before loading some of them. E.g. if you have moduleA depending on moduleB depending on moduleC, only after downloading moduleB will you know that you have to download moduleC as well. So with a deep tree this quickly becomes very slow?
You need some server-side intelligence to analyze each module & determine what preload headers to send. But then the browser knows what to request, even before content starts coming.
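A sketch of what the flattened result can look like in the HTML, assuming something has already walked the import graph of a hypothetical app.js:

    <!-- declare the whole chain up front so the browser doesn't discover
         app.js -> util.js -> fetcher.js one round trip at a time -->
    <link rel="modulepreload" href="/js/app.js">
    <link rel="modulepreload" href="/js/util.js">
    <link rel="modulepreload" href="/js/fetcher.js">
    <script type="module" src="/js/app.js"></script>

The same hints can be sent as Link response headers (or 103 Early Hints, mentioned elsewhere in the thread) instead of inline tags.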
Bundles have a colossal disadvantage: change one thing and boom, your user is re-downloading a pretty big bundle. Fine-grained file resolution means apps can grow & evolve with very little user cost.
People harp on and on about the benefits of bundles for compression, but man, it's so shortsighted & stupid. It favors only the first-load situation. If your user actually comes back to your app, these advantages all disappear. Personally I'd rather help the people that use my app regularly.
Second, the days of bundles being better at compression are numbered. Work has been ongoing to figure out how to send compression dictionaries separately. With this, 98% of the compression advantage goes out the window. https://github.com/WICG/compression-dictionary-transport
Neither of your approaches sounds like what I'd do. Personally I would build an HTTP server that takes raw uncompressed source. When asked for a file the first time, it compresses & builds the dependency maps in parallel, & saves both of these out, maybe a .gz with some xattrs on it. Or store that data in memory, whatever. The first user takes a couple extra ms of hit, but the server transparently still does the thing. Developer mode is just a tool to watch the file system & clear those caches, nothing more, & can potentially be completely separate.
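A bare-bones sketch of that idea in Node (everything here, including paths, is an assumption, and the dependency-map half is left out):

    // serve.mjs: gzip each file on first request, then cache the result in memory
    import { createServer } from "node:http";
    import { readFile } from "node:fs/promises";
    import { gzipSync } from "node:zlib";

    const cache = new Map(); // url -> gzipped buffer

    createServer(async (req, res) => {
      const path = "." + req.url; // naive; a real server must sanitize this
      if (!cache.has(path)) {
        try {
          cache.set(path, gzipSync(await readFile(path)));
        } catch {
          res.writeHead(404).end();
          return;
        }
      }
      res.writeHead(200, {
        "Content-Type": "text/javascript", // assumes JS-only; real code would map by extension
        "Content-Encoding": "gzip",
      });
      res.end(cache.get(path));
    }).listen(8080);

The first request for each file pays the compression cost; everything after that is served straight from memory.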
Bundles are just so awful. They complicate what used to be an elegant understandable clear world of computing. We can & should try to get back to resources, if it makes sense. And the cards are lining up to make this horrible un-web un-resourceful kludge potentially obsolete. I'm excited we might make the web make sense again for end-users. They deserve to be out of the bad times.
> HTTP2/3 […] but does not change the limit of max concurrency imposed by the browser.
No. HTTP/2 allows far more than 6 requests at a time; within a single connection it's limited by the max concurrent streams setting in the SETTINGS frame and the browser's willingness to take advantage of it; AIUI, in Firefox, e.g., that limit is 100.[1]
From there, you're limited by connection bandwidth and any brief HoL blocking caused by dropped packets (but not by HoL blocking caused at the server).
You might be right, and my initial assessment is incorrect. The real reason HTTP/2 doesn't solve the loading problem with many files is the depth of imports across all dependencies: the browser loads the entry file, sees its imports, fetches those URLs, then discovers new imports, starts fetching those, discovers more, etc., recursively. So the slowness is caused by the latency of each round trip (easily 50ms-500ms), not by how many files the browser has in flight simultaneously, as I assumed.
HTTP/2 improves on that bottleneck, but not as much as expected. I'm struggling to find relevant benchmarks now, but anecdotally, even on localhost, when using a dev pipeline without a bundler (such as Vite), any reasonably complex application takes many seconds to fetch thousands of small JS files.
This is something I’m facing now. Even hundreds of files slow things down. Code splitting is the answer, but that adds some other complexity that we may not want.
> Lack of tree shaking will yield larger downloads
... assuming all other things are kept equal. Observations show, however, that the development practices most commonly associated with (and promoted by) the dev cohort that reflexively reaches for this type of tooling just end up with ballooning app sizes anyway (and usually with more jank and less responsiveness on the user end, too). A tool/method/process that delivers a shrink in the range of 10% after 10x bloating still loses when more thoughtful tool/dependency use (or lack thereof) will tend to just sidestep those problems entirely.
The only thing missing is bundling CSS files. Rails now defaults to "pinning" JavaScript; however, any CSS that may be bundled with a package still presents an issue.