I built a tool relevant to this the other day. I was fed up with how hard it was to download ES modules from a CDN and store them locally so I could build apps without that CDN dependency, so I made this:
This downloads the ESM version of Observable Plot from the jsDelivr CDN, figures out its dependencies, downloads them as well (40 files total!) and rewrites their imports to be local, not CDN paths.
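If memory serves, the invocation is roughly this; the output-directory argument is from memory, so check the project README rather than trusting it:

    pip install download-esm
    download-esm @observablehq/plot ./static/js/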
Genuinely curious — what language do you think has good/better package management? Every time I start a Python project, for example, there's a bunch of time and frustration spent getting pip and virtual envs working.
As a python and node dev, I think NPM wins out just because I don't have to "activate" an environment when I want to use it. The environment is always right there where my code is.
You didn't ask me this question, but I work in Ruby, and off the top of my head, have no complaints about Bundler. Switching Ruby versions between projects is not something that is handled natively in any way (unsurprisingly?) and I know that I've struggled a lot with `rvm`, but since switching to *env (rbenv, pyenv, nodenv) I can safely say I do not have any struggles on that front either.
I feel ruby's whole problem in this space is that everyone recommends a different tool. I've used several. None are terrible. But walking into an environment and getting told everyone uses a different tool is exhausting.
Only times I've run into dependency issues with .NET are when a library expects certain native DLLs to already be present on the system, but that's pretty rare (and fairly easy to solve).
In my mind, anyone setting out to create a package manager, or a new language that will need one, needs to at least match the functionality of Gems/Bundler or Hex/Mix. They've been around long enough that it feels like table stakes at this point.
Mix does more than just dependencies; here, though, I'm referring to table stakes in the narrower scope of dependency management.
They are talking about client development so none of the answers are going to compare.
You aren’t grabbing ESMs from CDNs for a Node server app. And even if you are, that’s just not a situation that’s happening outside of an ecosystem that straddles client development.
> (...) but languages with a better standard library seem less affected by dependency hell.
Let's ignore the hand-waving over what "better standard library" means.
Dependency hell has zero to do with standard libraries, and is exclusively related to modularity. If your language supports third party modules that can depend on third party modules, you have dependency management. Managing dependencies so far is hell, no matter how you go about it.
This particular incarnation of the problem is a thing because this is actually not (standards-compliant) JS. This is a direct consequence of people infecting everything with non-standard NodeJS-isms (or NodeJS-inspired isms).
The problem I'm trying to solve is "give me an ECMAScript module version of this package, and fetch all of the dependencies too, in a way that I can use in a browser without needing a build system".
If you know of the npm recipe for doing that I'd love to learn it - I'm fine running npm once to grab the module versions of things (which I intend to vendor in my repo), as long as I don't have to use any build tooling after that point.
A wrapper bundle is what I prefer right now, here is a wrapper bundle I made for CodeMirror 6. It just provides the needed exports from the library. It can also be verified on the client side with UNPKG or jsDelivr. My client side build prints out the integrity attribute. I still need to write the npm bundling code so at the moment I am relying on UNPKG.
The rollup output is nice. I don't have terser, though I may add it later. When I added a couple exports the diff was just the change in my code.
A page using the library can include the integrity attribute, and anyone can go here, check the HTML and JavaScript file, run the build, and compare the output (codeberg.page is like GitHub Pages in that the repo contents control what it serves): https://macchiato.codeberg.page/editor-lib-codemirror/
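The general shape is a single entry module that re-exports what the app needs, plus a small Rollup config, roughly like this (a simplified sketch, not the exact files; the exports shown are just examples):

    // editor-bundle.js - re-export only what the app actually uses
    export { EditorView, keymap } from "@codemirror/view";
    export { EditorState } from "@codemirror/state";
    export { defaultKeymap } from "@codemirror/commands";

    // rollup.config.mjs - resolve the npm packages, emit one ES module
    import { nodeResolve } from "@rollup/plugin-node-resolve";

    export default {
      input: "editor-bundle.js",
      output: { file: "dist/editor-bundle.js", format: "es" },
      plugins: [nodeResolve()]
    };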
npm + an import map generator should work. I think you might be able to use the jspm import map generator via the `jspm link` command: https://jspm.org/docs/jspm-cli/stable/#link
I always assumed the worst that could happen is somebody decides to download that CDN's entire library into my RAM by simply requesting the appropriate URLs until Varnish starts dropping the modules I wanted to cache in the first place.
Thanks for the tip about esm.sh. For some reason I hadn't heard of it before, or maybe I had but it was before I fixed issues with ES module exports in my library. Previously I was using Skypack, but it didn't bundle dependencies (at least as far as I could tell).
Thanks to your tip, I can query public datasets with SQL with a one line import in an observable notebook [0] :)
I understand there may be various reasons to avoid JS bundling, but if what you're trying to avoid is complexity, check out Vite. It's zero config and can work on a regular directory (no node_modules, only index.html, styles, scripts). `vite` runs a devserver with hmr, `vite build` generates dist. It may be useful to you even if you stick to esm downloading, due to out-of-the-box support for all the tech like ts, sass, less, etc.
It's basically an http server like npm's http-server, but with all modern tools automagically included.
So, the thing with Vite is that it’s great as long as your use case is in the happy paths it caters to. But if you diverge from them, its config story rapidly devolves to:
- A very large subset of Rollup, which is so large that the docs seem to be self-mocking about it
- Another very large subset of ESBuild, which is smaller but still non-trivial
- A whole bunch of Vite-specific stuff that’s only partly (and incompletely/inconsistently) abstracting the omitted portions of those subsets
- An enormous amount of terminology, some very poorly described in any docs if you even know which to consult
- Plugins you’ll definitely need and which are a superset of Rollup’s, and the abstraction is so leaky that you’re often better off skipping the Vite keyword in your searches
I mean, even in the worst case it's not Webpack, but it's astonishing how close it can feel, and how easily.
And I don’t mean this nearly as harshly as it probably sounds. Vite really is great if it’s designed for your use case. But it’s far from the no-headaches panacea even I thought it was a few months ago.
you give it an import map and a list of urls and it either bundles + minifies to a single .js file or rewrites the paths and saves everything as a .zip
I'm really looking forward to the ECMAScript proposal on Type Annotations making it into browsers some day, as this would effectively unlock transpiler-free typescript development, leaving just a simple optional bundling step for production.
Does this mean that in theory i could skip the build/bundling step entirely?
E.g. i could have a backend project in whatever language i wanted, but then instead of having an npm frontend project i could:
1. use a jsconfig.json and jsdoc to gradually type with typescript but without need for tsc since my editor gives me feedback anyway
2. use es modules to handle splitting and caching the various .js files of my app
3. use this import maps feature so that i can still gzip and add the digest of the file to the filename, so i can safely say "cache every .js forever": if there's a change, the filename will differ, and this feature means i can still import from friendlyname rather than friendlyname-hashed-value. For example:
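(hashes and export names below are made up for illustration)

    <script type="importmap">
    {
      "imports": {
        "app": "/js/app.3f9a1c.js",
        "charts": "/js/charts.b82d07.js"
      }
    }
    </script>
    <script type="module">
      import { start } from "app";  // friendly name, cache-busted file
      start();
    </script>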
What am i missing from the bundle approach:
Optimal file transfer - all in one js bundle, but if i have http2 then i don’t care because the overhead of multiple files is gone
minify - i don’t care since i gzip anyway
sourcemaps - i don't care since the files presented to the browser are the ones i wrote; maps are redundant
I quite like react but i think i’d like simplifying my tool chain more and using something like htmx instead.
EDIT: i want to qualify the “i quite like react” statement, i have pure components only, no useState, for state handling in the small i’m fine with pure useReducer’s but zustand is nicer for my cases since i can also use that to prevent subtree re-renders of the reducer/context barebones approach.
> all in one js bundle, but if i have http2 then i don’t care because the overhead of multiple files is gone
I think everyone who makes this statement has not tried this. While it is somewhat true for A->B, A->C (if not using compression), it is definitely not true for A->B, B->C. The deeper your chain is (regardless of whether it is within the same package or across different ones), the overhead of fetching, parsing, building the dependency map, fetching dependencies, and repeating is pretty huge.
I say this as someone that has deployed both approaches at scale and A/B tested them.
The benefits of compression should not be understated either. You probably have many repeated things in your files that can be efficiently compressed. Even small stuff like `<div` or `useState` can make a huge difference when you consider them over a larger codebase. This part could have been fixed with SDCH, but only chrome and linkedin seemed to care about that before it was deprecated and removed.
>Does this mean that in theory i could skip the build/bundling step entirely?
You can but you must write your app in something the browser understands (js not ts, css not sass etc) and use native modules. For example, here is the test harness for a custom module, written in pure html with no build step: https://github.com/javajosh/simpatico/blob/master/combine2.h.... Here is a more complex (and much older) example from Crockford: https://www.jslint.com/
And yes, the experience developing this way is quite nice!
Having the code in the browser be the code you wrote is...so refreshing.
I highly recommend it.
> You can but you must write your app in something the browser understands. And yes, the experience developing this way is quite nice! Having the code in the browser be the code you wrote is...so refreshing.
This just reminds me how quickly the years pass. It's weird for me to think that developers may have never really worked on anything without a build step.
They may not fully understand that the only things the browser understands are HTML, JS and CSS (yes, and some other stuff, but the big 3).
Not TS, JSX, SASS, etc. Which is very strange, but I know it can happen ... because I personally know someone I had to explain this to, after a career change and a React-focused bootcamp.
My first major project that used JavaScript was in 1996, so that is probably why. JavaScript back then was a bit "primitive". I remember too many years of abject pain. Way too many. Even the next 15 weren't that great until ES6 arrived.
Now I'll take TypeScript, a build step, and the resulting tree-shaken, minified, transpiled result any day.
Browsers have been very good at incorporating ideas from the community and strengthening the standards every time. Things started getting really good with ES6, HTML5 and CSS3, and now that IE is entirely gone and most browsers are evergreen, it's actually a much different universe.
Apart from being pleasant and fast to work with, the benefit of coding without a build step is to the community, as it allows us to learn from each other's code, as in the early days.
> the benefit of coding without a build step is to the community
Programmers don't necessarily want things to be easy. It thwarts the ability to practice resume-driven development and extract money from consulting schemes* and/or draw salaries where a substantial part of the worker's dayjob is in dealing with stuff that isn't strictly necessary (or even a good idea to start with). To frame it another way, high barriers to entry are virtually synonymous with better job security whereas lower barriers lead to increased competition in the job market.
* This is more in the realm of "emergent phenomena" and "revealed preference" than it is conscious scheming, but it's not any less real.
You can actually load babel into the browser and run it there if you want to deliver your script in a language other than js. I wrote this jsfiddle not that long ago as proof you can write JSX direct in the page in a <script type="text/jsx"> element: https://jsfiddle.net/smLa1bco/
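Outside of a fiddle, the standalone Babel setup looks roughly like this (a minimal sketch, not the fiddle's exact contents; the unpkg URLs are just one way to load the scripts):

    <div id="root"></div>
    <script src="https://unpkg.com/react@18/umd/react.production.min.js"></script>
    <script src="https://unpkg.com/react-dom@18/umd/react-dom.production.min.js"></script>
    <script src="https://unpkg.com/@babel/standalone/babel.min.js"></script>
    <script type="text/babel" data-presets="react">
      // Babel transpiles this block in the browser before it runs
      const App = () => <h1>Hello from in-page JSX</h1>;
      ReactDOM.createRoot(document.getElementById("root")).render(<App />);
    </script>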
If you like React but also don't want a build step, take a look at Preact (only 3kb gzipped) + HTM, their alternative to JSX that uses tagged template literals, so it works in the browser (but also can be compiled away in a build step)
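A minimal sketch of what that looks like (the esm.sh URLs are just one way to get the modules; a vendored copy works the same):

    import { h, render } from "https://esm.sh/preact";
    import htm from "https://esm.sh/htm";

    // bind htm to Preact's h() - tagged template literals instead of JSX
    const html = htm.bind(h);

    function App({ name }) {
      return html`<h1>Hello, ${name}!</h1>`;
    }

    render(html`<${App} name="world" />`, document.body);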
Lit also requires no build step and is shipped only as standard JS modules. It also uses file extensions in all imports, so the import map needed to access all files is very short (one index entry plus one trailing-slash prefix entry for each of the 4 core packages). See https://lit.dev
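For anyone curious, the map ends up roughly this shape: one entry for each package index plus one trailing-slash entry per package so deep imports resolve too (the /vendor/ paths here are a hypothetical self-hosted layout; a CDN works the same way):

    <script type="importmap">
    {
      "imports": {
        "lit": "/vendor/lit/index.js",
        "lit/": "/vendor/lit/",
        "lit-html": "/vendor/lit-html/lit-html.js",
        "lit-html/": "/vendor/lit-html/",
        "lit-element": "/vendor/lit-element/index.js",
        "lit-element/": "/vendor/lit-element/",
        "@lit/reactive-element": "/vendor/reactive-element/reactive-element.js",
        "@lit/reactive-element/": "/vendor/reactive-element/"
      }
    }
    </script>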
Lit is definitely on my radar, I'm just trying to figure out a project I want to use it on. I want to update my personal website but Lit's SSR is "experimental" so I'll probably wait and try SvelteKit or something else.
Unfortunately not. Gzip is applied per-file and multiple small zips aren’t as compressed as a single zip.
Additionally, you get a cascade of downloads if you have multiple levels of imports, so it will download a file, parse it, download its dependencies, etc.
Now this may not be a big deal in some cases, but the overhead is still not gone.
Side note: server push is gone so there’s no way to avoid the cascade.
> minify - i don’t care since i gzip anyway
That’s not how it works. The two things are complementary. Minification can drop a lot of dead code and comments, gzipping alone won’t do that.
Frankly in the particular case where you have cascading downloads of small files, HTTP/1 is so unbelievably bad at that compared to HTTP/2 (especially in cases where the user-agent throttles TCP connections, like the limits of ~6 in a browser) that the "overhead" argument isn't really relevant because it might imply they're roughly comparable in some way. They aren't, in my experience, it's more like "Does it work" versus "Does it not work." I'm talking like one-or-two orders of magnitude in performance difference, in practice, in my cases.
Server push wasn't ever going to help use cases like this because pushing those files doesn't actually work very well; only the client knows about the state of its own cache (i.e. the server will aggressively push things even when the client doesn't need them). I tried making it work in some cases very similar to the "recursively download based on import structure" problem, and it was always just a complete wash in practice.
103 Early Hints are a better solution for that class of problem where the server has some knowledge over the request/response pattern and the "structure" of the objects it's serving for download. They're also easier to support and implement, anyway.
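On the wire it's just a provisional response carrying Link headers ahead of the real one, something like this sketch (paths made up; worth verifying which rel values your target browsers actually honor in Early Hints, since support has been rolling out gradually):

    HTTP/1.1 103 Early Hints
    Link: </js/app.js>; rel=modulepreload
    Link: </js/deps/util.js>; rel=modulepreload

    HTTP/1.1 200 OK
    Content-Type: text/html
    ...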
You probably still want to use a dependency manager (npm, yarn, etc) to pull in and self-host your own dependencies. You don't want to pull your dependencies from an external source as that will only ever make your site less reliable.
> use es modules to handle splitting and caching the various .js files of my app
Yeah, I was surprised at how well es modules just worked when i put together this tiny repro https://idb-perf-repro.netlify.app/ (index.js imports two relative files)
> use this import maps feature so that i can [...] add the digest of the file to the filename
So you've still got build tooling and a layer of indirection. Maybe this is simpler, but you'll still need tooling to manage hashing files + creating import maps. I don't think import maps really gives you anything here, because if you're renaming all your files on deploy you may as well just update the imports as well, rather than using import map.
> Optimal file transfer - all in one js bundle, but if i have http2 then i don’t care because the overhead of multiple files is gone
Test and validate this. Does HTTP2 really remove all overhead from requesting 100s of files vs 1 large file? I don't think this is the case.
> but i think i’d like simplifying my tool chain more and using something like htmx instead.
Simplifying your toolchain by removing type safety from your views.
I think you're right, but if you need treeshaking, some sort of packing (minification/mangling of variable names, etc) or jsx, you would still need a build step. I don't know if treeshaking and packing are that relevant for most people tho
If you aren't packing anything the browser natively treeshakes ESM for you. It doesn't load modules that aren't imported, for one obvious thing. In a "loose" package of ESM modules the browser may never even see most of the tree.
Beyond that ESM "module objects" (the proxy that the import represents and is most directly seen in an `import * as someModule` style import) in most of the browsers are also built to be readonly treeshakeable things in that they may not be immediately JIT compiled and are generally subject to garbage collection rules like everything else, and being readonly proxies may garbage collect sooner than average. So the browser will lazily "treeshake" by garbage collection at runtime in a running app. (Though if you are relying heavily on that it likely means your modules are too big and you should consider smaller, looser, less packed modules and relying on the first bit where stuff not imported is never seen by the browser.)
You still need a bundler because browsers will process only ~6 HTTP requests at a time, so if your code (with all dependencies) has many JS files you will be throttled by that limit real quick. HTTP2/3 makes parallel fetching more efficient over the wire but does not change the limit of max concurrency imposed by the browser.
I actually think the main issue isn't number of requests, but that you can't know which additional files you need to load before loading some of them. Aka if you have a moduleA depending on moduleB depending on moduleC. Only after downloading moduleB will you know that you have to download moduleC as well. So with a deep tree this quickly becomes very slow?
You need some server-side intelligence to analyze each module & determine what preload headers to send. But then the browser knows what to request, even before content starts coming.
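Whether that's sent as preload headers or emitted straight into the HTML, what the browser needs is the same information up front, roughly this (paths made up), so it can start fetching moduleB and moduleC before it has even parsed moduleA:

    <link rel="modulepreload" href="/js/moduleA.js">
    <link rel="modulepreload" href="/js/moduleB.js">
    <link rel="modulepreload" href="/js/moduleC.js">
    <script type="module" src="/js/moduleA.js"></script>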
Bundles have a colossal disadvantage. Change one thing and boom, your user is re-downloading a pretty big bundle. Fine-grained file resolution means apps can grow & evolve with very little user cost.
People harp on and on about the benefits of bundles for compression, but man, it's so shortsighted & stupid. It favors only the first-load situation. If your user actually comes back to your app, these advantages all go away, disappear. Personally I'd rather help people that use my app regularly.
Second, the days of bundles being better at compression are numbered. Work has been ongoing to figure out how to send compression dictionaries separately. With this, 98% of the compression advantage disappears out the window. https://github.com/WICG/compression-dictionary-transport
Neither of your approaches sounds like what I'd do. Personally I would build an http server that takes raw uncompressed source. When asked for a file the first time, it compresses & builds the dependency maps in parallel, & saves both of these out, maybe a .gz with some xattr on it. Or store that data in memory, whatever. The first user gets a couple extra ms hit, but the server transparently still does the thing. Developer mode is just a tool to watch the file system & clear those caches, nothing more, & can potentially be completely separate.
Bundles are just so awful. They complicate what used to be an elegant understandable clear world of computing. We can & should try to get back to resources, if it makes sense. And the cards are lining up to make this horrible un-web un-resourceful kludge potentially obsolete. I'm excited we might make the web make sense again for end-users. They deserve to be out of the bad times.
> HTTP2/3 […] but does not change the limit of max concurrency imposed by the browser.
No. HTTP/2 is allowed far more than 6 requests at a time; within a single connection it's limited by the max concurrent streams setting in the SETTINGS frame and the browser's willingness to take advantage of it; AIUI, e.g., in Firefox, that limit is 100.[1]
From there, you're limited by connection bandwidth and any brief HoL blocking caused by dropped packets (but not by HoL blocking caused at the server).
You might be right, and my initial assessment is incorrect. The real reason why HTTP2 doesn't solve the loading problem with many files is the depth of imports across all dependencies - the browser loads the entry file, sees its imports, fetches those URLs, then discovers new imports, starts fetching those, discovers more, etc recursively. So the slowness is caused by the latency of each round trip (easily 50ms-500ms), and not by how many files the browser has in-flight simultaneously, as I assumed.
HTTP2 improves on that bottleneck but not as much as expected. I'm struggling to find relevant benchmarks now, but anecdotally even on localhost when using a dev pipeline without a bundler (such as Vite), any reasonably complex application takes many seconds to fetch thousands of small JS files.
This is something I’m facing now. Even hundreds of files slow things down. Code splitting is the answer, but that adds some other complexity that we may not want.
> Lack of tree shaking will yield larger downloads
... assuming all other things kept equal. Observations show, however, that development practices most commonly associated with (and promoted by) the dev cohort that reflexively reach for this type of tooling just end up with ballooning app sizes anyway (and usually with more jank and less responsiveness on the user end, too). A tool/method/process that delivers a shrink in the range of 10% after 10x bloating still loses when more thoughtful tool/dependency use (or lack thereof) will tend to just sidestep those problems entirely.
the only thing missing is bundling CSS files. Rails now defaults to "pinning" javascript; however, any css that may be bundled with the package still presents an issue.
This lets me improve my load times after each deploy by several factors.
Be aware that the example that is provided in the article is quite dangerous. If for any reason the content of the script is altered, the user will execute it anyway.
With normal scripts you have the integrity="sha-..." option that secures the content. With importmaps we only have a couple of github issues and no solution.
> With normal scripts you have the integrity="sha-..." option that secures the content. With importmaps we only have a couple of github issues and no solution.
Yeah, I think it's a Bad Thing for security, to release importmaps without integrity.
Reminder: vendor your imported scripts if you’re doing this. I shouldn’t have to trust or give data to a CDN to use your website. There isn’t a meaningful performance win either since browsers now cache per domain for privacy reasons.
> vendor your imported scripts i̶f̶ ̶y̶o̶u̶’̶r̶e̶ ̶d̶o̶i̶n̶g̶ ̶t̶h̶i̶s̶
FTFY
I vendor everything my webpages need, better to ensure that the webpage always works as long as my server is up. Also means I can download the page and use it offline.
It almost but not quite goes without saying, but if the import is (for example) the YouTube embed client script, you should not put an integrity check on it. Because you expect YouTube to put out a new client script from time to time. If there’s an integrity check based on an old version of the script, it will just break your page.
You're passing IP addresses and who knows what else to these CDN providers by using them. Add-ons like Decentraleyes were created because this leaks info, SRI or not.
To elaborate on the more pragmatic answers: vendoring generally refers to adding the source code of a dependency to your source control repository rather than having an install step (or expecting libraries to be provided by system level package managers) as part of your development process.
In JavaScript these days, dependencies are usually loaded from npm so avoiding the bundling step for client-side code would naively mean replacing those dependencies with links to a third-party service like unpkg, which serves code published to npm for the browser.
As others have explained, vendoring in this case would mean shipping those dependencies as part of your application code so they are served from your own domain. Of course at the moment a lot of dependencies will likely simply not work if imported directly from the browser.
To clarify, it isn't vendoring to have an external script in your build output. Just because it's technical jargon doesn't mean it can't have a clear meaning.
I assume it means the same thing as to vendor anything else: to bundle dependencies as a part of your application instead of giving users "links" to the dependencies served by a third party.
This likely stems from the proposers building on existing "in-the-wild" development patterns: it's become a convention for DSLs in the JS development community.
Even sticking strictly to JSON (the DSL can be any syntax), this has a few advantages over SGML-style attributes:
1. Supports nesting to any depth
2. Easier to deserialize from an in-memory object in the most common server-side languages
3. Supports easy developer/hacker copypasta into a JS file for debugging.
4. Broader/better expand/collapse support in text editors (in the case of very large lists)
---
> I'd have to look up every time I wanted to use it
I'd likely have to look up the HTML attribute names just as much - I recall having to do this a lot at first with the viewport meta when it was introduced - the only real hurdle to get over with this is to internalise the following pattern (which, as mentioned, some in the JS community are already familiar with):
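i.e. a JSON blob of bare-specifier-to-URL pairs in a script tag, roughly like this (the package name and path are placeholders):

    <script type="importmap">
    {
      "imports": {
        "lodash": "/vendor/lodash-es/lodash.js"
      }
    }
    </script>
    <script type="module">
      import { debounce } from "lodash";
    </script>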
It matches exactly the on-disk format for deno.json / import_map.json.
Based on my limited experience with it, I was super-stoked to see it be basically: “oh, what I was messing around with last night when trying out deno!”
The current implementation is so that you can `<script type="importmap" src="path/to/your/importmap.json"></script>`. JSON is a well-known on-disk data format with lots of tools that support working with it and manipulating it (including JS itself, natively in the browser).
ESM was specced out in 2015. It hurts so bad that we are in 2023 & using modules in a modular way is still incredibly ill supported in the web ecosystem. :(
It's really wild seeing the tremendous amount of work being put in to ESM just to come up with a solution that still isn't as fast, secure, or ergonomic as bundling.
1. Import maps are basically a compile-time feature. I thought the point of all of this was to not have to have toolchains?
2. Imports significantly lag behind "old school" scripts in security. You can't use integrity hashes with them, and yet most of the examples you see are about importing random scripts from all over the web. This is actually hilariously related to point (1), since you can only escape the realization that this is a feature a compiler is supposed to spit out if you direct the imports to some third party resource. So in order to contrive a situation where you would manually craft an import map, they are forced to show an example that is less secure than the stuff that has worked in every browser for 10 years.
3. Not directly related to import maps, but you'll probably end up using them together: in order to attempt to achieve similar performance to just bundling your code, the amazing solution we've come up with is to... put a <link rel="modulepreload"> tag in your head element for each script you want to be fetched immediately. Yes, so much better than the ridiculous old method of using a script tag for each script. Now you just use a link tag for each script instead. I'm sure someone will tell you that this is better because you don't have to do that. Yes, yes, so much better that the default behavior of script loading is slow unless you go read about a bunch of "tips and tricks" to get it working with reasonable performance.
It's always great to take one concept like "importing a script" and turn it into an entire ecosystem of terminology that spans languages and platforms, custom JavaScript syntax for the imports themselves, a set of different HTML tags for actually getting them into your page (script, link, etc.), and JSON for mapping names to URLs. Great. Super simple.
ESM has always suffered from a weird shifting argument. Whenever you point out that the performance and security isn't great, all of a sudden it's a developer ergonomics feature, not meant to be for production. Then when you point out that the ergonomics suck (like not being able to do bare imports), you're told you should be using it in tandem with a toolchain that auto-generates import maps or something. So what's the benefit? I already had a tool that could do that, and it worked in every browser and didn't ignore the security advancements we made 10 years ago.
The web is a very wide world out there, and companies and people have a wide diversity of architectures and methods for frontend apps that benefit from import maps.
Consider:
- Many people want to skip bundling altogether for as long as is possible. It's really nice to have a build-free (or almost build-free) app, especially when you're starting a greenfield or personal project. Why get mired down in researching the latest third-party build tools before writing code?
- Third-party dependencies can be vendored (included in your own deployed assets on a domain you own). This ought to be everyone's default - CDNs shouldn't be used in production for new projects anyway. If you're loading from your own domain, hashing the asset isn't necessary.
- Sometimes you may have dependencies you wrote yourself that you'd like to load with an import map.
- Some architectures call for both a build/bundle step and an import map. Consider a micro-frontend architecture -- this allows many decoupled teams to work on different parts of an app that are composed together at runtime. Each MFE is built/bundled prior to deployment, but an import map helps the browser load each one when it is needed. See https://single-spa.js.org/docs/recommended-setup/
> I already had a tool that could do that, and it worked in every browser and didn't ignore the the security advancements we made 10 years ago.
But that tool (WebPack or RequireJS, right?) had to shim a lot of non-native features into the browser runtime, and at build time it was slow and very complex since there were so many module types that needed to be supported. Modern tooling has much less build-time complexity, requires little or no runtime module library, and is more performant since it's often written in Rust. (See Vite, esbuild, etc.)
Browser features that allow simplifying and/or eliminating third-party tools are always a net win, IMO.
> Many people want to skip bundling altogether for as long as is possible. It's really nice to have a build-free (or almost build-free) app, especially when you're starting a greenfield or personal project. Why get mired down in researching the latest third-party build tools before writing code?
But ESM doesn't actually give you this, this is precisely the point I'm making. ESM doesn't become ergonomic without the use of some sort of tooling. Perhaps you have your setup that just spins something up and auto-compiles an import maps or something. Great! But that's not at all the promised dream of finally escaping tooling. It's the same thing we had before, just outputting different stuff. Different stuff that happens to be worse in a lot of ways.
Import maps are a perfect representation of this. It's essentially asking you to hand-write a manifest file just to be able to type "import 'lodash'". This is not an improvement or simplification to the web ecosystem. In my experience, the people who are actually using any of this stuff are doing so through some build pipeline. And again, this is unfortunate since no one should be shipping this to production since it is both slower and less secure.
Handwriting an importmap is no worse than handwriting a stack of `<script src="https://some.cdn.somewhere.example.com/jquery"></script>` global imports, but has the advantage that importmaps are lazy (aren't used until import time) and don't pollute the global namespace.
Just like the bad old jQuery days maybe you'll see a proliferation of importmap CDN lines to just copy/paste manually in READMEs, no need for further tooling than your OS clipboard.
Some CDNs are even building themselves to be easy to handwrite into importmaps (https://esm.sh/ for one example mentioned in other threads around here).
It gets complicated again if you don't want to rely on CDNs and prefer to have local copies, but that's always been the package management game in a nutshell (and before npm took over the front end there was the much simpler bower, and there are already tools trying to be an ESM "bower").
You really don't have to worry about these things until your module graph is > 100 modules. You are falling into "all or nothing" thinking here; just because you need tools in some use-cases doesn't mean that a feature is worthless. Not needing tools for small projects and then needing them for large projects is a pretty good tradeoff.
I'm not sure what you mean by all or nothing thinking here. The problem with imports is precisely that it is incredibly cumbersome to use without a bunch of additional "add-ons". You can't just "use a little bit of it." Want to do the supremely simple task of importing a JSON file? Better make sure your browser supports import assertions (assuming one even knows these exist). Want to not have to import a long incomprehensible URL? Better write a manifest file called an import map that only recently got full browser support. You can't even trivially add a script tag with an ESM module because it won't work from a file:// domain; you have to start up a simple server for your simple.html file.
So I have no idea where this notion of not needing tools comes from. I think what people actually mean is "my tools support ESM out of the box". It is not trivial to get a nice experience with zero tooling. Again, you at minimum need to run a server (and I believe you have to fiddle with the MIME types for python's one line simple server too, although perhaps that has now been rectified). The simplest projects require strictly more tools to use this thing.
And on the opposite end of the scale, it appears that you agree that of course you will use tooling. Only hopefully those tools aren't generating imports but just bundling, because if not they are creating slower and less secure resources.
ESM imports have fallen into the C++ trap: every year the determination is made that "if we just add these 5 extra features (both to the ECMAScript spec and W3C spec), then they'll finally be great out of the box!" This is how we ended up with import.meta and dynamic import and import maps and import assertions and modulepreload link tags, which are all still less flexible than require/bundler, and somehow still don't have any answer whatsoever to integrity checksums that originally shipped in 2015. It's absurd.
To be clear, I am not saying there shouldn't have been "an" import spec. I am saying this one is tragically flawed and remains realistically a bad option (unless you are using a compiler that is just going to turn it into roughly the same minified mumbo jumbo it was turning your requires into before). This is fairly well accepted at TC39. The spec was rushed. Nothing would be allowed to ship with that level of consideration anymore.
Usually someone at this point asks "well what would you have done?!". I would have punted on import until after async/await. Many of the most fundamental issues in imports come from having to have conceived them in a pre-async/await world. They are essentially a "top-level-await-ish" statement, in a world with async/await. However, had async/await and top-level-await shipped first, and then import was introduced, you could have started with the expression form of import (await import()), which would not have required any new syntax (you could just use destructuring, the same way you do now), and not required an "exercise left to the host environment" hand-wavy async-link resolution system, but instead used the actual existing semantics of the language. We ended up having to do it anyways with dynamic import.
It is so weird that `x as y` becomes `{ x: y }` and `x` becomes `{ default: x }` merely by moving the entrypoint around in the file. It would also be a lot more transparent to the user: by typing `const x = await import("blah")`, you aren't (erroneously) led to believe that this is a synchronous action when it isn't. It's also way easier to add additional features to a function (such as integrity hashes or assertions or whatever), vs. in a statement where it requires a parser-level change, and for no real benefit other than causing people to have to learn a bunch of weird new syntax that increasingly feels out of place.
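To make the `{ x: y }` / `{ default: x }` point concrete, here are the two forms side by side (module names made up; the halves are alternatives, not meant to sit in the same file):

    // statement form: new syntax, resolution handled opaquely by the host
    import { parse as parseConfig } from "./config-parser.js";
    import settings from "./settings.js";

    // expression form: plain destructuring over an async call
    const { parse: parseConfig } = await import("./config-parser.js");
    const { default: settings } = await import("./settings.js");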
> I'm not sure what you mean by all or nothing thinking here.
I mean this:
> Want to do the supremely simple task of importing a JSON file?
Surely you realize that "importing JSON" is not something every project or page needs. By "all or nothing thinking" I mean the idea, which you expand in this response, that unless it supports everything it's not worth ever using. But that's a false dichotomy. A lot of projects are simple. A lot don't need all of these advanced bundler features. Those projects can (and do) use bare ESM in the browser. And that's ok.
To answer your question more directly, the reason they do it this way is because bundling dozens of features into a spec and then releasing them all at once has never worked in the history of web standards. For better or worse, doing one small thing at a time is what works.
As someone on TC39, I can tell you that's not why this was done this way. It's an accident of history, and recognized as a rushed spec. The async/await-first, then import-expression path I described would actually follow the "don't ship one huge thing" (that then takes 5 years to implement) advice that you are suggesting.
> You can't even trivially add a script tag with an ESM module because it won't work from a file:// domain, you have to start up a simple server for your simple.html file.
This isn't an ESM problem. Last I checked this is still a per-browser security decision that applies to all JS, not just ESM, because browsers fear they can't trust malware or adware not to spread if file: were too open. (Thanks to such things in the IE days existing.) Too many developers are in Chrome 24/7 and forget that Firefox still exists and has good development tools and that Firefox allows file:// origin for scripts still in 2023 (with some caveats, but fewer than "doesn't run at all" as has been the Chromium default for a long time).
Also, I recall there are command line flags in Chrome to allow it if you really do want to use Chrome for development and can't just spin up a simple server on localhost.
It is an ESM problem, since it works just fine with non-ESM script tags. ESM isn't some theoretical thing, it's the reality of how it works, in particular when we are specifically discussing the practical reality of using it. And the fact is that it absolutely is more cumbersome than the old technologies (and more cumbersome than alternative approaches that could have been taken). The fact that we're now talking about launching Chrome with special command line flags demonstrates this. I mean, the rest of my comment that delves into all the other problems proves it even more, but the fact that it fails the bare minimum requirement of just loading when you drag and drop a file into the browser is fairly bad.
> It is an ESM problem, since it works just fine with non-ESM script tags.
This is not my experience. I've had many cases where Chrome security blocked non-ESM script tags to file: URLs (that Firefox was much more fine with).
The security mechanisms in the browser have gotten extremely sensitive (and rather baroque) and blocking file: by default is a part of that. This isn't an ESM-specific thing in any way, this is an overall browser security thing. It is a response to the arms race of malware.
People that have been writing/developing apps and scripts designed for local use (such as Twine) have been complaining about Chrome security defaults for years now for non-ESM development. "Drag-and-drop" of Twine games is generally only reliable on Firefox. There are so many obscure Chrome security issues that can break such games. This is not a new complaint for ESM.
(I didn't break down the "problems" with the rest of your comment because it was a bit of a rant and a lot of it felt to me more like opinions and bike shedding rather than "problems". I didn't think I can convince you that ESMs are good and wasn't going to try. I can point out the facts that script loading from local files is a Chrome [and IE] problem more than an ESM problem, because the facts back that up.)
If you expect to be able to develop with the file protocol you're going to be disappointed. Essentially all modern features don't work with it; not just ESM. You can't use Service Workers with the file protocol either.
There are obvious benefits to having the granularity of loading and caching script files match the granularity of how those files are used on different pages or updated at different times.
Bundling everything is fine for most apps, but sometimes you have a need for code splitting, or sharing libraries between scripts from different origins, or some mix of preloading and dynamic loading. So it is nice to have a standard for these things.
Hear, hear. Today, bundlers may get you to first page load faster. But if a user comes back and you've shipped two small fixes, all those extra wins you get from compressing a bunch of files at once fly out the window & you're deep in the red. If you have users that return to your site, and your site is actively developed, bundling is probably a bad tradeoff.
We see similar fixedness in the field all over the place: people freaking love small Docker image sizes & will spend forever making it smaller. But my gosh the number of engineers I've seen fixate on total download size for an image, & ignore everything else, is vast. Same story, but server side: my interest is in the download size for what v1.0.1 of the Docker container looks like once we already have v1.0.0 already shipped. Once we start to consider what the ongoing experience is, rather than just the first time easy-to-judge metric, the pictures all look very different.
Then there's the other thing. The performance reasons for bundling are being eaten away. Preload & Early Hints are both here today & both offer really good tools to greatly streamline asset loading & claw back a lot of turf, and work hand-in-glove with import-maps. The remaining thing everyone points out is that a large bundle compresses better (but again at the cost of making incremental updates bad). The spec is in progress, but compression-dictionary-transport could potentially obliterate that advantage, either make it a non-factor, or perhaps even a disadvantage for large bundles (as one could use a set of dictionaries & go discover which of your handful of dictionaries best compress any given piece of code). These dictionaries would again be a first-load hit not different from a bundle's hit, but could then be used again and again by users, to great effect again for incremental changes. https://github.com/WICG/compression-dictionary-transport
Bundles are such an ugly stain on the web, such an awful hack that betrays the web's better resourceful nature. Thankfully we're finally making real strides against this opaque awful blob we've foisted upon this world, our users, and ourselves. And we can start to undo not just the ugliness, but the terrible performance pains we've created by bundling so much together.
This has repeatedly been demonstrated to be false in real-world tests. One bundled file performs way better, especially when you take tree shaking, minimization (which has performance implications even after load, in terms of memory usage), and compression into account. A small change in one part of the code can result in lots of other code being removed from a library you use. Code splitting usually does a better job of achieving the (suspect) goal of making these incremental changes more performant, since the chunks are chosen in order to optimize this vs files which are "chunked" along human organizational lines, which do not necessarily correlate highly with diffs.
If browsers hadn't removed the shared cache for script resources, there might be an argument here. But they did, giving each domain its own cache to avoid fingerprinting, so even that goes out the window.
On the one hand people worry about the massive node_modules folder in the cases where it doesn't matter (when it's just files on disk in a local node app), and on the other hand, we create things like import maps that will literally take this dependency graph and translate it one-to-one to a higher latency HTTP version, where it absolutely will be a problem.
I did a write up of how I use import maps to avoid bundling JS on my static site and cache modules more effectively. I mostly use JS for little experiments and generative art and such, so I have a number of utility modules. These get hashed, and the name of each is resolved in the import map. Original modules are kept for browsers without import map support (without the immutable cache header).
There are a few gotchas. The browser won't use the import map to resolve an entry point in a script tag, for example. Content security policy is a painful one too for static sites like mine (the import map counts as a script, so you have to hash the map and put that in the CSP header).
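Concretely it ends up looking something like this (hash values are placeholders):

    <!-- inline import map; the CSP must allow this exact script body -->
    <script type="importmap">
    {
      "imports": {
        "/js/util.js": "/js/util.3f9a1c2b.js"
      }
    }
    </script>

and the header carries the sha256 of that inline text:

    Content-Security-Policy: script-src 'self' 'sha256-...'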
So, any progress on being able to bundle native modules and have them import properly into inline script tags?
Right now if you want bundling you have to give up native modules, because the only way to get a browser to register a native module is by making a separate HTTP request for that module.
The closest I could think of to doing this would be an importmap with a data URI per module, but I've no idea if you can load importmaps as separate files or if they have to be inlined. If they have to be inlined that defeats the purpose.
And that reminds me of the rebuttal: such "embedding/blessing" of libraries would give them a considerable tailwind and stifle innovation. Suppose React/styled-components/Apollo/Highcharts is already available on the client. In that case, it becomes much harder to consider alternatives, and any new contenders (Vue/Solid/urql/emotion/etc) would never get traction.
Yeah, cross-domain public caching had mostly theoretical benefits and never really worked that well in practice. Glad browser vendors moved away from it.
Check the link in my top comment. Most problems are related to privacy.
A website could always tell if you loaded the resource or had it in cache. If you had it in cache, it means you have previously visited a website that used the same library. Now, if multiple such inferences can be made, it could be used for many things: fingerprinting, marketing purposes, identifying a user, etc.
I am responsible for a large microfrontend portal application, which consists of multiple modules and applications, which are wired at runtime with module federation. The build process is handled by webpack for each module...
I just wonder if import maps could replace module federation, now or in the long run?
Can we use React with JSX in browser without any compiler using this? I'm sure people have tried it but I can't find it. I just want an `index.html` file that I can write React code in it and play around with things...
React is already often slow enough. Pushing compilation steps to the client is inefficient, especially when you consider that JSX compiles to basic functions and you could have used those functions since the first version.
That has been possible for some time with Babel[0] and was also referenced in the old React docs[1]. But, as stated there and in the sibling comment here, it's not really suitable for production use.
From the react docs: "it makes your website slow and isn’t suitable for production"
I don't really like blanket statements like this or think they are appropriate. It's so counter to the spirit of the web.
Like, what does 'slow' mean in an absolute statement like this? It depends entirely on the system and usecase. It's just code at the end of the day. If it works for you, go for it!
It's interesting that this is your perception of "the spirit of the web" whereas other people are going to say that requiring JavaScript at all is "counter to the spirit of the web".
They are many libraries that use tagged template string to allow this, the most interesting to my eyes is the one made by preact dev. It's just named htm [0], it's made by preact but it can totally be used with react.
The most interesting thing about this one is that it uses no custom syntax for JS variables (no @attribute or similar when you must pass an object as props).
You can import the Fragment component of the React package or stack them directly, it works too.
Normally, the empty bracket syntax should work, they have a few issues closed and open that talk about it in their repo.
JSX expressions are compiled to calls of a JSX(tag, atts, kids) function. That function can be redefined, enabling different libraries to use JSX in the way they want. Here is an example of using JSX with Preact:
import { h, render, Component } from 'preact';
JSX = h; // let's use PReact::h() as a driver of JSX expressions
Reading the article it seems to me you must code the import map which basically defines aliases for your scripts. Then you must use the aliases to actually get access to the scripts you want to use.
You must specify the absolute URL of the imports whether you use import-maps or not so what's the benefit? Does this reduce the amount of code?
I'm sure there is a benefit I just don't see it on first reading of this article.
Basically package.json but without package.json. I.e., no-bundle JS is back.
There are still some issues, specifically with things like React, where it's kind of hard to guarantee that if you use React, you'll get the same version that "react-relay" etc. are using.
Deno has been very informative on this. They're adding package.json support to the runtime as a polyfill, but I'm hopeful they can swing back to import maps fully over time.
I see. So the benefit is that you externalize your imports into a file separate from your code. Right? I can see the benefit of that. It is like meta-data about your module, kept separate from it. And several modules can use the same imports-map, right? That means the amount of code needed goes down.
Thanks for the explanation. I assume I got it right, did I?
Or now I wonder, must the imports-map be defined in the same module where it is used?
No, you might have one "global" importmap in your HTML file and that applies to the entire graph of modules you load after that, which also means that you can "chain" importmaps and can also do things like pin the versions of dependencies of dependencies in that top level importmap. That sort of thing will need package-management-like tooling and dependency "packages" built for it, but that gets back to the idea that importmaps in theory are a lighter weight replacement for package.json/package-lock.json files.
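A sketch of that kind of top-level map (versions and paths invented for illustration): the plain "imports" entry pins what the whole module graph gets for a bare specifier, and a "scopes" block can give one subtree a different pinned copy.

    <script type="importmap">
    {
      "imports": {
        "react": "/vendor/react-18.2.0/index.js"
      },
      "scopes": {
        "/vendor/legacy-widget/": {
          "react": "/vendor/react-17.0.2/index.js"
        }
      }
    }
    </script>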
I built a new Rails app with importmap-rails last year and it was an absolute joy to work with. Being able to forgo all of the JS compilation bs was a massive simplification. I've wasted so much time in the past dealing with Babel and Webpack so finally being able to kick all of it to the curb was a joyous day.
That might be handy during development, but production would still need a "build step" for generating things like SRI hashes, asset filenames, sitemaps, etc.
that's really cool but this will only be over when >external< importmaps and decent browser dev tool support are there too. inlining is a pain in most setups and when things don't work as expected it's harder than it should be to find out why a prefix did not work.
More details here: https://simonwillison.net/2023/May/2/download-esm/
I'm now considering adding import maps support: https://github.com/simonw/download-esm/issues/4