Browsers are optimized for "eager" loading. Link your images, CSS, and JS just like you did when you first learned HTML, then configure your server's caching headers. For JS, there's the async attribute.
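To make that concrete, a minimal sketch of the eager approach (filenames are placeholders, not from the article):

    <!-- Plain, eager asset links; the browser fetches everything up front,
         and caching headers make repeat visits cheap. -->
    <link rel="stylesheet" href="/css/site.css">
    <img src="/img/logo.png" alt="logo">

    <!-- async downloads the script without blocking HTML parsing;
         it runs as soon as it arrives, so don't rely on execution order. -->
    <script src="/js/app.js" async></script>

On the server side, a long max-age (e.g. Cache-Control: max-age=31536000 on fingerprinted filenames) is the usual way to get those repeat-visit cache hits.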
Back in the day, the warez/emulator group Damaged Cybernetics required its members to use a standard page background and images, at standard URLs. That way, users who browsed through their sites would have already cached the images and wouldn't need to load them again when viewing another member's site. In the days of dialup this was a big deal.
It's not good practice to rely on a user's cache. Even if they visit your site twice in the span of a day, they may have already cleared their cache or pushed your files out with other sites' files.
This article demonstrates a pretty good method for reducing unnecessary byte transfer, which ultimately speeds up page loading.
Caching is still a good idea, because if a user decides to browse to another page on the same site in the same visit (or refresh/post a comment), a significant fraction of the data required for it has already been loaded. If a user does come back, there's always a chance that your stuff will still be in the cache.
Lazy loading leads to more requests, which means more latency and slower apps. Better to handle this stuff as a build step with RequireJS or browserify.
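For example, with browserify the code stays in separate modules in source but ships as a single request (file names here are just for illustration):

    // main.js - browserify walks these require() calls at build time and
    // emits one bundle, so the browser makes a single request instead of many.
    var header = require('./header');
    var feed = require('./feed');

    header.render(document.getElementById('header'));
    feed.render(document.getElementById('feed'));

Something like browserify main.js -o bundle.js at build time, and the one bundle gets served gzipped with far-future cache headers.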
Not all assets are code. At Trulia, we lazy load images, but we bundle our code and CSS into packages and serve each in a gzipped, cacheable request.
For sites running at scale, lazy loading code might not really be a viable option. But if it is, it's certainly a strategy to consider and not just dismiss out of hand. Each strategy has its trade-offs.
Because you're making dubious gains: you save a couple of KB of bandwidth but sacrifice the responsiveness of your app by introducing an additional round trip for each new piece of code. It may look fine to you, but there are plenty of people who don't have super-low-latency, high-bandwidth internet connections.
It's important to understand that there are tradeoffs with lazy loading. There's a spectrum, right?
On one end, you could serve all of your JS bundled with browserify on the first page load, which, for SPAs, will frequently block rendering until the entire bundle is downloaded, parsed, and executed. If your bundle is small, who cares? But once your app grows to a large enough size, that initial delay may no longer be worth it.
On the other end, you could serve almost no assets with the initial page load and request what you need when you need it. Your initial render will be blazingly fast, but the first interaction might be slowed by the latency of fetching the assets needed to execute it.
Somewhere in the middle you may find a solution involving sending half of your assets initially and requesting the other half when you need them.
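A rough sketch of the "request it when you need it" end, to make the trade-off concrete (the loader and the photo-editor example are made up for illustration):

    // Minimal on-demand loader: inject a script tag the first time a
    // feature is needed, and remember that it loaded so later calls are free.
    // (A real implementation would also de-duplicate in-flight requests.)
    var loaded = {};
    function loadScript(src, done) {
      if (loaded[src]) { done(); return; }
      var s = document.createElement('script');
      s.src = src;
      s.async = true;
      s.onload = function () {
        loaded[src] = true;
        done();
      };
      document.head.appendChild(s);
    }

    // Hypothetical rarely-used feature: the initial render ships without it,
    // but the first click pays an extra round trip before anything happens.
    document.getElementById('open-editor').addEventListener('click', function () {
      loadScript('/js/photo-editor.js', function () {
        window.PhotoEditor.open(); // assumes photo-editor.js defines this global
      });
    });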
It's not helpful to write off various methods of asset loading because of assumptions about latency, bundle size, or DOMContentLoaded event delay. The important part of this article is that it speaks to the various approaches to this problem. I'm guessing the author took the time to weigh the pros and cons of these approaches, chose this one which particularly works for him, and wrote about it to share his findings.
This isn't a trivial problem with one solution. Dig into the performance of your apps while wearing multiple hats and take what works best for your situation ;)
Well put. My (and, I think, others') concern with the article is that it advocates splitting all your code into separate files (great for maintainability and managing your code base), but then loading each of them individually on demand (probably a bad idea for all the reasons everyone's mentioned).

I'm a lead dev on a fairly large JavaScript app, and we've taken that middle route: we use a build process with Grunt and RequireJS to compile our 300+ individual script files into a handful of concatenated files divvied up by page/component. Our application is large enough that loading everything at once (even concatenated/minified/gzipped) doesn't make sense, but RequireJS/the r.js optimizer made it really easy to both figure out our usage pattern and consolidate our core set of modules. The process we use creates a different version of our app script for each entry page, and then each page has its own separate module that can be loaded on navigation.
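For anyone curious, that kind of split maps onto an r.js build config roughly like this (module names are illustrative, not our actual layout):

    // build.js - run with: node r.js -o build.js
    ({
      baseUrl: 'src',
      dir: 'dist',
      modules: [
        // Shared core: the modules every entry page needs.
        { name: 'core' },
        // Per-page bundles: each excludes what core already contains, so
        // navigating to a page only pulls in that page's own modules.
        { name: 'pages/search',  exclude: ['core'] },
        { name: 'pages/listing', exclude: ['core'] }
      ]
    })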
Definitely, and I think your concerns are valid, but if you found that loading each file on demand yielded the optimal performance for your app, wouldn't you go with it? Additionally, I think the method the author is presenting can be very well suited to both partial bundles and individual files. For what it's worth, you and I are in similar positions and it sounds like we have very similar approaches with our own apps. As long as readers understand that this is one possible solution to a nontrivial problem, I'm happy with it.
Optimizing for cache hits and only loading the code for the view you need are not mutually exclusive. Your build can package optimized assets at a page or component granularity. But parsing a bit too much JavaScript is not nearly as slow as making extra requests, doubly so considering that the biggest unoptimized assets are the dependencies that every page needs, like jQuery, Angular, or whatever. I have worked on several Gmail-sized apps and, to date, the profiler has told me not to bother.
Let's be clear: I'm not advocating lazy-loading assets like jQuery or your main app codebase. I'm talking about your ancillary files that may not get used on every visit, especially if your site is a single-page app.
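With RequireJS that kind of on-demand load is just the async require call (module and variable names here are hypothetical):

    // Fetch an ancillary module only when the user actually reaches the feature.
    require(['reports/export'], function (exportReport) {
      exportReport(currentView); // currentView: whatever state the feature needs
    });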