I think I kind of hate lazy loading (shkspr.mobi)
136 points by edent on Sept 14, 2023 | 173 comments



Not a subway rider, and I hate another form of lazy loading: webstore filters. You go to buypc.com, pc parts / ram, check the 16GB box, it starts loading. Later you check 8GB (it starts loading), uncheck 16GB (it starts loading again), check "2x kits" - as you may have guessed already, it starts loading. You see the page and want to scroll it, but the second load finishes and the content changes. Then the third. If you want to filter a list by a handful of parameters, there will be almost the same number of loads and re-renders, all insanely latent. The cherry on top: every time you check a box, the load-completion routine scrolls the page back to the top, so you miss a couple of clicks and check the wrong checkboxes.

The irony is that a full category list in JSON format, which could be instantly filtered on the client side, would be smaller than the couple of jpegs they serve with each product card. And it would cut their servers' requests per minute by literally an order of magnitude.


If I understand correctly, your beef is with the fact that they don't have an "Apply filters" button, not with lazy loading?

Also they’re probably not trying to save bandwidth but database servers.


Not OP, but I think the beef is that the act of applying a filter requires a reload of anything at all ("hey, server, please send me the list of only 16GB pairs this time"), when it's perfectly possible to just send the full list of items to the client initially and then have filtering apply instantly, client-side, on that already loaded list.

At least that's the beef I have, and was reminded of when reading the parent comment.
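Something like this is all it takes once the list is on the client (a minimal sketch; the field names and products are invented for illustration):

  // one-time download per category, then everything happens locally
  const products = [
    { name: 'Kit A', capacityGB: 16, isKit: true },
    { name: 'Stick B', capacityGB: 8, isKit: false },
  ];

  function applyFilters(list, { capacityGB, kitOnly }) {
    return list.filter(p =>
      (capacityGB == null || p.capacityGB === capacityGB) &&
      (!kitOnly || p.isKit));
  }

  // instant, no network round trip per checkbox
  console.log(applyFilters(products, { capacityGB: 16, kitOnly: true }));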


Rather than sending the full dataset (which could be fine if it's small), just do serverside filtering. And rendering for the most part.

It drives me nuts how much completely useless JS is written. 90% or more of web apps could easily be serverside rendered with maybe a few small scripts added.

Instead we have this shit, people writing hundreds or even thousands of lines of JS to make this dumb ass auto-refreshing search/filter page whose only purpose is to make a bunch of pointless requests and computation while I'm in the process of building my query.

Just have the filters be a form and have the search button submit it. The backend executes the query, builds the page and returns it. Easy peasy, no pointless requests, no sending megabytes of pointless data in response to every request. No complex JS (which by the way is an absolute shit-tier language for anything beyond 100 LOC) logic.
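For example, something as plain as this does the whole job with zero client-side logic (a sketch; names invented):

  <form action="/search" method="get">
    <label><input type="checkbox" name="capacity" value="16"> 16GB</label>
    <label><input type="checkbox" name="kit" value="2x"> 2x kits</label>
    <button type="submit">Search</button>
  </form>
  <!-- GET /search?capacity=16&kit=2x - the server runs the query,
       renders the result list, and returns a complete page -->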

I may be biased due to my personal experience, but I find that the vast majority of JS I'm forced to deal with shouldn't exist. It's either a result of bad application design, bad API design, JS that just does things HTML and CSS do better, or dumb ass requirements like "we need this filter page to update the results every time the user does anything". No you don't, you just need a way to build a query and a way to submit it. And paginate it.

"But page loads take too long that's why we use SPAs" they take too long because you're sending megabytes of pointless JS that shouldn't exist. Remove the JS and they're fast. Plus SPAs are frequently slow as shit anyway, because most web developers just kind of suck and write shitty code. Like sending a huge Json document with every request rather than handling it in the backend.


But it isn't huge. I just checked how much data my local pc store fetches for the first page of RAM, 18 items total. It does too much ajax-in-json bs to estimate exactly, but let's assume each product uses 500 bytes, which is more than reasonable. That's roughly 9kb total. The first RAM image on that page is 8kb. So the pictures on that page alone are roughly 18x bigger than that json. That means if there were 18 pages of RAM - and there were actually 35 - the whole json would be as big as one or two pages' worth of images. In other words, unless a user changes just one filter and then buys immediately, sending the whole list is more effective.

As a consumer, this all feels like being caught between two fires. One side creates the stupidest UX possible, when one change could fix it, while the other side claims it's all bs anyway and we must go medieval. Can't we just listen to a user for once?


I said:

> sending the full dataset (*which could be fine if it's small*)

In the specific case of search/filter pages, I prefer the server-side rendered experience as a user. If I want to just check the box for RAM and search instantly, I can do that. But if I want to build a more complex query I don't need it to keep updating over and over.

I don't necessarily hate the auto-updating pages if they are implemented well. I just don't think they're any better, so why waste dev hours on it? It costs money to maintain that code, and the more of these pointless little JS applications you bake into your website the more expensive it is to build and maintain.


I've had a similar conversation with my lead before and my understanding is the web application does not reach out to the database server for search results. We query the elastic index instead where possible. Now I don't know much about elastic but my local development can take as much as 16GB RAM just for elastic.

You'd think it shouldn't be possible. I have fewer than three thousand styles and fewer than thirty thousand style-plus-color options, but strange legacy ERP decisions mean you have weird "data points" like a memory capacity of zero for a power bank. Well, duh. It is a power bank. Why do we need to store data for it saying its memory capacity is zero?


The reason you store memory capacity for a power bank (and for an adjustable chair) is that the alternative is storing entity-attribute-value rows in a joined table. Each approach has drawbacks and advantages; I'll use either of the two depending on the application.

However, I hope that you weren't really storing the integer 0 for the memory capacity of a power bank. It should be null. Most databases handle such sparse matrices very well, it really is not an issue.


Also not OP, but yes... how many bytes are saved here, versus just getting the whole list, versus how many bytes of JS bloat were loaded up front, versus the user experience - it's often wildly out of any reasonable proportion, providing nil value.


Yeah. Also, IMO the annoyance isn't necessarily about the bytes transferred. It's the other things that make that reload really annoying when it happens. Bad UX.


Then the server sends you 2k products every time because you might want one? This is insanity.


Why is it insanity? Web sites have no problem sending megabytes of largely worthless data and code as a matter of course anyway. Why can't some of it be useful?


Because the application files are cached and aren't sent on every request.


I just don't get it. There's nothing that could beat a one-time JSON download per product category. Not even "apply filters", which I see less and less often over the years. Unless a store has thousands of products and vague categories, ofc (but not in my cases).

If there is something beyond Hanlon's razor, I'd like to understand by which metric it works and how. E.g. how fetching 30 rows 10 times through different params compares to fetching 200-300 rows once, assuming various sorts of caching, etc. If everyone does it, it's either a common methodology or a common UI-fad incompetence. But then even a shallow search should yield at least some results. It doesn't.


A large reason why virtually all modern web stores suck as much as they do is the insane amount of bot bullshit they need to deal with.

Most storefronts are designed more or less as an obstacle course to make any sort of scraping or automated retrieval more obvious: this includes having beyond-useless filters, no working search function, nebulous item names, paginating 4 items at a time, etc.


Why stop bots, though? Price comparison sites are amazingly useful.


Competitors use them to ever so slightly undercut your prices, get a higher position on said comparison sites, and run you out of business.


If you aren’t on comparison sites at all because of your anti-scraping measures, is that better?


Not to defend the practice but I think the intent is to allow scraping (or API access) only by the price comparison site but not by competitors. (Why wouldn't the competitors then just scrape the price comparison site? Because that site also has anti-scraping measures in place...)


Yup, or sometimes you even click another filter while it's loading and that filter click just gets turned off again after the reload.

Very bad UX


Hardly any app is designed for subway riders. If they were, we'd have widespread preloading (like half an hour of tweets pre-loaded), seamless delayed posting (sending a new post when the internet comes back online), and visibility into what's downloading and uploading so users can make informed choices. As with all accessibility affordances, this would help people beyond the targeted group - like air travelers, folks living in rural areas, or those out of the country without service. They say wifi coverage in trains is going to take another 25 years in the NYC subway system. Until then, I hope we pay more attention to users with intermittent internet access.


I dunno, a number of apps I use on the subway are designed for such uses, including audiobook apps, podcast apps and many news apps (e.g. The Economist). But yes, other apps that could be designed that way may not work as well as they should. But even Netflix and YouTube can suggest downloads in advance now. Some apps will never be fully updated - for example, while the standard Mail app works great offline (it's getting it to connect sometimes that seems to take forever), the Gmail app always wants to upload a photo fully to the cloud before sending an email, and this trips me up every time, because it never shows a progress indicator and it doesn't allow the email to be sent in the background before attachments upload. It's amazing how in 2023, with all these engineering resources and the lessons that can be learned from apps like WhatsApp, we still build apps that assume always-on connectivity. It makes sense - even my local subway system is getting cellular service these days - but it's still doubly irritating if you're in an area with poor signal. (The signal's poor, plus the app doesn't work or requires a re-launch, or... etc. etc.)


I'm with you - a lot of them are, but too many are not. I personally use the NYT and FT news apps in the subway quite a bit.

In terms of pain points, the biggest is Safari. It gets rid of website state pretty quickly. I have tons of disk storage on my phone; why not serialize to disk? It could even be a "frozen" mode that still lets you scroll saved articles. It wouldn't be so useful for web apps, of course.


Love The Economist. They silently download the newest edition in the background, even when the app isn't open. I think a year ago the download was only triggered once you opened the page and went to "Weekly." Possible, but easy to forget.


It's the subway where you are that's not designed for apps.

On Korean subways I stream video at max res throughout the ride.


Same in Taiwan. I think they must have cellphone antennas underground.


One of the deep-level lines on the London Underground (the Jubilee line) has 4G. It's provided by a "leaky feeder", apparently.


I know the subway in Warsaw does have them.


We're always going to have situations with intermittent network connections. It's best to design for that from the first. An offline-first experience is good even if you have a good network connection: lower latency (read from disk) and lower energy (the phone radio will be on for less time).


That comes at the cost of increased unnecessary bandwidth and battery usage though.

As a developer I'm much more interested in optimising for that than in a minor improvement to the experience during connectivity issues.


The infrastructure of Asian capitals is the best in the world; that's exactly the point - 99% of the world experiences a degraded network (see comments in this thread to that effect). It's frustrating when software isn't built in ways that are resilient to that.


Most western European and all Australian cities I've traveled in had perfectly fine connectivity on underground trains.


I'm not surprised that this is the case (I've made that same decision myself). But there are some apps that I just don't understand: airline ticketing apps, theme park apps, etc. How is handling offline usage not one of the first requirements?


Gmail does not cache email attachments so ticket attachments are always iffy at busy venues. I usually just screenshot the ticket QR code now so I have it no matter the network situation at a crowded event.


If you have Youtube's paid version, you can have the app automatically preload content it thinks you'd like.

This has saved my ass plenty of times when my ADHD brain needed content in places where I didn't have internet.


That sounds like paying a subscription for a crack dealer in a rehab centre :p


I need that CONTENT! :<


you can also just download some videos from youtube you know you want to watch and save yourself the cash


The game Metro is unsurprisingly good. Turn-based games work well.


Even if network conditions are great, lazy loading breaks browser operations like "Find" with Ctrl-F. It's only gonna search what's in the viewport. I think lazy loading should only be used when there's an actually infinite stream of content to load. But stuff like the twitter replies section shouldn't be this way. I shouldn't have to scroll all the way down to make sure search works.


This article is talking specifically about <img loading="lazy">, not progressively-loaded content, which is quite a different kettle of fish.

(Specifically, everyone agrees progressively loading your content is to be avoided if possible, whereas “add loading=lazy to your images” is common performance advice.)
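For reference, that advice amounts to a one-attribute change (sketch; filenames are placeholders):

  <!-- fetched only when it nears the viewport -->
  <img src="photo.jpg" loading="lazy" alt="A photo">

  <!-- fetched immediately (the default) -->
  <img src="photo.jpg" alt="A photo">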


I'm not typically a Jira hater (I know, I know) but this is definitely a gripe I have with it. The backlog view lazy loads. Tiny little scraps of text! Ok so there's rank, priority, and maybe a few other data points to query for. But still, come on!


> lazy loading breaks browser operations like "Find" with Ctrl-F. It's only gonna search what's in the viewport

Sort of. If you quickly add a letter and then remove it while the page is loading it'll refresh the results though (sort of like a React force re-render)


This is a solved problem with Safari since at least High Sierra days. You add any webpage to your reading list and with appropriate settings you'll have a full local copy.

https://support.apple.com/guide/safari/keep-a-reading-list-s... relevant quote below...

Save a page in your Reading List to read when you’re not connected to the Internet: Control-click the page summary in the sidebar, then choose Save Offline. You can also swipe left over the page summary, then click Save Offline. To automatically save all pages in your Reading List, choose Safari > Preferences, click Advanced, then select “Save articles for offline reading automatically.”


I would like something similar for Mozilla Firefox.

On a very basic level, maybe Mozilla Firefox's reader view could ignore the loading="lazy" hint and load all images. One step deeper, maybe we could keep these articles in some kind of local cache?

Come to think of it, what happened with Pocket, that Mozilla acquired? We were supposed to open source the pocket server like five years ago. Did it happen? Does anyone even care about pocket (the read offline app) anymore?

Edit: https://github.com/Pocket/extension-save-to-pocket/issues/75


I live in Germany and so do most of my users, so I take the spotty internet rather seriously. There are lots of holes in the coverage even in the capital. Then there is the underground.

However loading the images early wastes bandwidth, and it hurts SEO. In my case the user can live without those images, so it's a sacrifice I'm willing to make. I hope that descriptive alt tags will help.

On the other hand, the whole text is there from the start and every other way is wrong.


> it hurts SEO. In my case the user can live without those images, so it's a sacrifice I'm willing to make.

This is one of the big problems with the modern web: it puts you in this doom loop - sacrifice the user to please Google to get users.


I feel this. So much of what I do to appease Google Search Console ends up actually hurting the user experience. Blind rules lacking context.


The performance metrics in pagespeed/lighthouse etc. are not really assessing performance directly.

You can have a slow, bloated website but great metrics, because you’re doing the “right” things.

However lazy loading images that are not on screen is generally a good thing. Images are typically the heaviest part of a website and loading them only when needed is good.

The article in question acknowledges that, but is criticizing the implementation and failure mode.


> However lazy loading images that are not on screen is generally a good thing.

No. An article should always be loaded in full. I clicked a link to a document; that means I wish to view it. Not half, not the first page, the whole thing. If you have tons of content in a single document, you should probably paginate anyway. Images are usually few, and if they're not totally unoptimized (which you should fix before reaching for lazy loading), not a huge drain. By all means, lazy load all the unimportant banners, ads and UI elements. Better still, don't add them at all. Only for infinite scroll would lazy loading make sense, and even then batched 10-20 ahead (or more), so I don't spend time waiting half a second for each and every image.

The heaviest part is nearly always ads and analytics. Cut those down first.


These days JS is often the heaviest part of a website, both in absolute bytes downloaded and in the amount of main-thread time it consumes.


What you say about comparing download size: this very much depends on the site. You might be right in terms of trends.

I haven't seen any statistical evidence, but from my personal experience a typical website with some images and some JS is often heavier on the image side. Fonts can be large too. PNGs are heavy. Videos are the heaviest but rare.

However, the per-byte impact on performance (which is a very vague quality) is very different. Often JS degrades performance more than images.


I agree wholeheartedly, although for the most part the performance metrics match my needs as a user.

In this case it means not using bandwidth unless it's needed. I lazy-load images for the same reason I serve 4 different image sizes.


Back in the olden days when I was closer to the web, and we didn't have the browser to do lazy loading for us, we would always load everything 'below the fold' after a timeout, if the user hadn't scrolled down (or left the page). Seems not unreasonable to not load images that aren't visible during the initial load, but pull them in a bit later. There are some priority things you could try, but priorities don't always work the way we might want.

MDN says loading="lazy" doesn't apply unless JavaScript is enabled, so it's easy peasy to use JavaScript to make the images un-lazy after onload.
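Something along these lines (a sketch; the 3-second delay is arbitrary):

  // once the page has loaded, wait a bit, then un-lazy whatever is left
  window.addEventListener('load', () => {
    setTimeout(() => {
      document.querySelectorAll('img[loading=lazy]')
        .forEach(img => { img.loading = 'eager'; });
    }, 3000);
  });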


In the early days, Google used those alt tags to help with SEO, but now we have the Eli Pariser Google filter bubble to give us the SEO experience we are looking for as it tries to combat those bots manipulating Search.


Eh, mistakenly clicking an ad or irrelevant content when trying to click a link because the page loaded some async content at the last millisecond is far worse.


It is all bad because now all the images are async content too.


They do that on purpose.


It’s a data driven decision. They a/b tested and found users were more interested in clicking on ads when images were lazy loaded.


Or a more mundane reason: you need to add the dimensions to the image to reserve the space on the page and prevent the layout from jumping. That takes extra effort and is not something people think about. If there are any places that a/b tested changing layouts specifically, I expect it's extremely few. It's much easier to make the background a clickable ad area, and some pages do that explicitly.
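That is, roughly the difference between these two (a sketch; width/height let the browser reserve the box before the image bytes arrive):

  <!-- the layout jumps when the image finally loads -->
  <img src="banner.jpg" loading="lazy" alt="">

  <!-- the box is reserved up front, nothing moves -->
  <img src="banner.jpg" width="1200" height="400" loading="lazy" alt="">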


Why does anyone need to add the dimensions? Surely the build process can do this ahead of time or the server can do it on the fly. It only needs to be thought of once.


You still need to either use a fully integrated dev/build system which allows it, or explicitly add the placeholders. The build process of course could add the actual numbers, but I don't think I've ever seen that happen outside of CMS content. (Specifically content, not even the theme / static stuff)


Lazy loading breaks a lot of functionality.

For example, I open a webpage and then go offline. Later, I scroll down, but the content is missing.

Another time, I open a webpage and save it. But most of the content is not saved, because it has not been loaded.

The network activity is actually higher for a page with lazy-loading elements.

All-around, it does not make any sense to me, personally.


Both of these seem like uncommon edge cases. I can't remember the last time I actually "saved a webpage" - I think in my nearly 30 years of using the web, that functionality hardly ever worked well in the first place. I do use print-to-PDF fairly often, but honestly don't know how lazy loading works in that scenario. If you choose to print all pages and it doesn't include lazy-loaded images, I'd just consider that a bug.


For you, they are uncommon edge cases. For me, they are everyday usage.


The problems lazy loading solves is an everyday issue for me. So ...

I guess the lack of user control here is the annoying part. Hopefully in the next round of standards updates this makes it in.


I mean it wouldn't be hard to run a simple user script to add or remove all lazy attributes, though this isn't a solution for everyone, but most people don't care either.


Given how much link rot there is I would have wanted all my non-porn non-bank website visits to be saved to some encrypted folder, in some way.

Especially since Google got so bad you can't refind webpages you know exist.


I wish browsers did this automatically so that the back button works properly instead of being just a guess on whether you pressed it or not.


I wish the back button would bring back the rendered view of a page from a cache so that it would be (a) instant, and (b) not make any network requests.

It seemed to me that older versions of the Opera browser did this, and the text-based "links" browser seems to do this today.


I'm confused - my browser does exactly that, and so do others' around me.

I've even had an issue on a project recently caused exactly by the rendered view being saved for the back button. I've also had to demonstrate a possible issue with full page loaders on a regular website twice for the same back button reason - because apparently people (or at least several of my colleagues) don't find browsers' native request loading indication enough anymore... Rotten by JS loaders and SPA's, I guess?


It feels like this is something the browser can solve by treating these edge cases as bugs to fix. Especially saving the webpage with lazy loading. I've encountered similar issues when using the iOS Share menu to convert a webpage to a PDF or Reader PDF, where sometimes you have to scroll the webpage first before it exports correctly. These are bugs that web browsers need to fix. Similarly, there should be some kind of warning or progress bar if you have downloads in progress but want to go to airplane mode. It could be optional, but like when you close your browser while downloads are still running, it could ask if you want to go offline after the downloads finish, for example.


> I open a webpage and then go offline. Later, I scroll down, but the content is missing.

This bites me often, too. It's always infuriating.


You could hold down the space bar until you are at the end, then save.


That is a great idea, but actually does not work, because the browser redownloads the page from scratch when I save. Also, it has to be done for every page, which I would have to remember to do each time.


ah right! I forgot about that. For a single page:

(Firefox) load everything > ctrl+a > right click > view source (have to wait a bit for that to work) > ctrl+a again > copy > paste into text document > save as > example.html

It is a truly insane workflow, as if you are trying to do something weird.

Better to just right click > take screenshot > whole page or use the cli https://firefox-source-docs.mozilla.org/devtools-user/taking...


> There's no way to disable Lazy Loading on Android Chrome or Android Firefox.

I haven’t tested it, but I would expect that Firefox for Android would still support going to about:config and changing… hmm, looks like dom.image-lazy-loading.enabled is no more, so I suppose you can just set dom.image-lazy-loading.root-margin.bottom to an enormous number (probably don’t need to worry about top/left/right, but you can do them for good measure if you want).


If you'd tested, you would have found that unfortunately access to about:config has been disabled on the release (and I think Beta, too?) versions of Firefox for Android.


Back when I was using Android, I ran Firefox Nightly, just like I do on my laptop, so I knew about:config worked! I was not aware it was disabled on release, which seems pretty roundly just a silly and unreasonable restriction.


I generally dislike lazy loading. My time is almost always worth more than data transfer costs, so I'd much rather load up a bunch of stuff I might not see, so that on the off chance I fast-scroll down the page I don't have to wait for anything.


You're thinking of your own costs; the developer is more likely thinking of their own than yours.


As a developer adding lazy loading... nah. The motivation is more about prioritising resources. I want some images to load after everything else to free up the bandwidth for more critical resources. If I had the option of "when the connection is idle for 2s", instead of the current implementation of "lazy", I'd use that.

I'm sure there are some other services which are more image heavy and will save some money this way, but it's not the only reason.


Ok, so "monetary cost" and/or "performance cost" -- either way, it's for the benefit of the site (whether fiscal or reputational benefit). Although a desire for the latter is fairly well shared by all parties.


As a web dev… I honestly have never thought of it that way in terms of cost.

I would rather a user see the whole page, consume all the content, it’s not a big cost.


Then you’ve probably just not worked on pages of that scale so far.

Some very large sites maintain alternate versions of images in case the browser supports something more efficient than JPEG; that can save about 30% - at the expense of a lot of complexity and extra server-side storage. Yet they still do it: it's still worth it for them.

But nothing beats not loading the image at all!

As a web user, I personally also don’t particularly like it.


Most content-heavy media sites make use of CDNs, and the expenses involved are generally reasonable. It's unlikely that a slightly heavier image would have a significant impact on their costs.

I think this might be more of a well-intentioned effort to help end-users save on bandwidth, but it could lead to a bad experience as OP pointed out in their blog post.


CDN storage costs are actually very high; only large websites that basically run their own CDNs store many different versions of the image to optimize for bandwidth.

The company I worked for did, but they didn't pay for a CDN, just loaded everything from one server


Unlike self-hosting, CDNs (and many cloud computing services) are typically paid by the byte of outbound traffic, so sending 30% less per image often results in a direct 30% savings!


We do this too and believe me, it’s not to reduce the server workload.

It’s just progressive enhancement: The browsers that support a format get the lighter version, the ones that don’t, get a heavier fallback.


If only we could progressively enhance a page rather than stunt it via lazy loading.


That would require a new browser feature or image metadata tag that indicates “load only until here if not visible”, right?

Or do you mean web developers should manually swap assets based on visibility, rather than make a binary load/don’t load decision?


Image-wise, isn't GIF capable of that? I remember back in the dialup days some image type would be quickly visible but very pixelated, then slowly get more detail as more of the image downloaded.


The JPEG standard includes an option for displaying images progressively so that you see large blocks of color that continue to refine into more pixels as the image downloads. In the mid 90's on a 14.4K modem you had plenty of time to watch this happen. I miss the effect, but fortunately most connections are now fast enough where the intermediate step wouldn't be noticeable for a single image.


That doesn’t help today though (at least not without additional mechanisms): Each individual internet connection is often fast enough, and actually used bandwidth is more of a concern than saturating an individual connection.


Lazy loading also allows any network operator (and FVEY et al) to surveil your progress reading any particular web page based on when the additional image requests are generated.


How are they inspecting your network traffic?


They don't need to. They can infer based on the size and timing of the transfers. Nothing for a while then suddenly a huge download? Probably a picture. If the sizes are unique enough they might even figure out which.

https://en.wikipedia.org/wiki/Traffic_analysis

To stop this, we'd have to saturate the link 100% of the time even when no useful communications are taking place.


But how do they know what you're looking at and how are they using this information to do anything useful? I get you can probably differentiate between text and video, but I'm having trouble understanding exactly what information they would get from your bandwidth consumption.


If there's an image of size X and they observe a download of X bytes, they can infer you downloaded and possibly looked at that image. The file size alone might be unique enough to allow that.

What they do with this information is anyone's guess. Just viewing something could put you into some kind of government watchlist. They could use parallel construction against you.


They aren't going to see behind the TLS curtain, but they would see (assuming no DNS encryption) a domain name lookup followed by various traffic patterns; either:

Bursts (page loads) with near silence in between, maybe just some non-human-triggered traffic from scripts that poll.

Bursts (page loads) with quite a bit more of a human-triggered cadence in between, if lazy loading during scrolling occurs.

But mouse-tracking analytics probably result in a similar leak, if not better.


Even with encrypted DNS, the unencrypted host name is usually part of the TLS handshake (or the server wouldn’t know what certificate to present to you).


True, though there seems to be decent momentum toward ECH (which supplants ESNI) lately:

https://blog.cloudflare.com/handshake-encryption-endgame-an-...

https://www.reddit.com/r/CloudFlare/comments/wp6yve/what_are...


But you also usually keep the connection alive for multiple requests.


To the same host though, so that doesn’t really help with privacy, no?


I don't care. At some point the law of diminishing returns makes some bit of exfiltration of user interaction pointless to worry about.


Seems fixable at the standards level. The spec could define `loading="lazy"` as a (strong) hint for the client, but not a rule. Meanwhile, clients should allow disabling lazy loading. (The same way a valid client can disable Javascript, for instance.)

This is only a partial solution as no more than a tiny minority would think to actually disable lazy loading. But at least there would be an option for the author and others who operate in contexts where the lazy loading is harmful.


It is a hint, inasmuch as the behaviour depends on the lazy load root margin which is implementation-defined, with a handful of suggested factors to take into account in choosing the defaults. Disabling lazy loading is done by setting the lazy load root margin infinitely large. Relevant spec: https://html.spec.whatwg.org/multipage/urls-and-fetching.htm.... In Firefox, you can control this by setting the dom.image-lazy-loading.root-margin.{top,left,right,bottom} prefs in about:config.
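For example (the values are just an arbitrarily large margin):

  dom.image-lazy-loading.root-margin.top = 1000000
  dom.image-lazy-loading.root-margin.bottom = 1000000
  dom.image-lazy-loading.root-margin.left = 1000000
  dom.image-lazy-loading.root-margin.right = 1000000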


There used to be a time when you could load up a YouTube video on the site, go offline, and then watch the video.

You can't do that any more, unless you actively find a way to download the said video.


Uh, youtube absolutely has a save for offline viewing option.


In the browser and without paying for Premium on mobile?


You can do it with YouTube premium


Or use whatever YouTube-dl is called now.


I feel this comment in my very core. I have an unusually hard time remembering “yt-dlp”. Glad I’m not the only one


I just have to type yt and let shell history search find it for me every time.


I created a shell alias.


yt-dlp


Subway riders are a corner case. The amount of energy, bandwidth, resources saved with lazy loading must be staggering.

Travellers may someday (and some do now) look back on losing signal with fondness. A time to reflect, meditate, plan, create, stop consuming for a little while, a welcome break.


Several million people ride the London Underground every day. https://tfl.gov.uk/info-for/media/press-releases/2022/februa...

You're welcome to meditate if you want while crammed into a stranger's armpit. The rest of us would just like to read the news or catch up with our friends.


In what fantasy is a loud subway a place to meditate and reflect?


Images are often junk though. What's better: to have the text of the article to read through the tunnel? Or some bullshit stock images that bear no relation to the story.

In the now olden days of the internet, people on low-bandwidth connections used proxies that stripped away images.


> There's no way to disable Lazy Loading on Android Chrome or Android Firefox.

I would think you could effectively disable lazy loading by using a bookmarklet for individual pages, or a userscript in Tampermonkey for whole sites or patterns of sites.

It's not as simple of a solution as a browser option, but I figure solving your problem beats complaining about it.
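For example, as a bookmarklet (a sketch; it flips every lazy image on the current page to eager):

  javascript:document.querySelectorAll('img[loading=lazy]').forEach(i => i.loading = 'eager')

The same line in a Tampermonkey userscript applies it per site, though images injected later by scripts would need a re-run or a MutationObserver.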


How's this for "lazy loading"? My KaiOS fake-Android Alcatel flip phone (sold to me by Verizon) shows the "time" the moment you remove it from the pocket and open it up.

Then you start counting to yourself: one one thousand, two one thousand, 3, 4, 5, 6...

After about six seconds, sometimes a little less, the display updates to show the current ACTUAL time. The one it shows before that could be 5, 20, 40 minutes or up to an hour and a half AGO.

I have never hated a phone so much. My hatred is palpable and intense. Of course the so-called-OS is the "latest" version which is only because Verizon abandoned it as soon as they rolled it out. And there are no options available that change this broken behavior.


If the images are important to you, just scroll to the bottom of the article before reading to force the loading.

Also: it's a pity that the lowsrc attribute was deprecated. I know there are alternatives using JS, but it was useful.


I'd like to see lowsrc return, renamed to something like a placeholder attribute.


I wrote some code to convert links to epub, and the lazy-loading images some websites use were a huge pita - finding the correct image file to pack into the epub file - until I figured out a common enough way to handle them: https://b.yuxuan.org/lazy-loading-image-url2epub


It's not a fix, but whenever I have to travel I set up a "reading list" on my RSS feed. Between that and physical books, I tend to fill in the time.

Actually, in general I would like to see RSS (or even a more draconic, constrained form of RSS) be a relevant web standard for traditional information dense webpages.


I particularly dislike lazy loading when it is really a form of procrastination, i.e., web frameworks and runtimes that defer class loading, initialization and the like until the first request comes in (as if there were hope that it would never occur) yet report themselves "started/ready" before that.


On my blog, I have a script that goes through lazy loaded images (which is all of them) and flips one to eager (in order down the page) every 5 seconds. Keep a page open for a minute, all images will be loaded, but if you close after 10 seconds, almost nothing extra is loaded.
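Roughly like this (a sketch of that approach, not the actual script):

  // promote one lazy image to eager every 5 seconds, in document order
  const pending = [...document.querySelectorAll('img[loading=lazy]')];
  const timer = setInterval(() => {
    const img = pending.shift();
    if (!img) { clearInterval(timer); return; }
    img.loading = 'eager';
  }, 5000);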


If you open an article on a train you could always scroll to the bottom of the page before you start reading it in order to load all the resources.


There ought to be a "fringe reception mode" (extension, perhaps) which automates that, ideally without the user noticing. Just as TFA calls for.


To me, that’s one of these “reading mode” apps.

It works ok most of the time, but I really wish there was a standard for that that browsers could just implement, rather than these (often centralized) apps having to reverse engineer it.

The one I use even belongs to a browser vendor, yet they’re trying to constantly upsell me a premium subscription.


> There's no way to disable Lazy Loading on […] Android Firefox.

  about:config

  dom.image-lazy-loading.enabled


That doesn't work on the Android version of Firefox. It goes to a blank page with no settings.

See https://connect.mozilla.org/t5/ideas/firefox-for-android-abo...


Huh. Excluded from stable? That’s a bit silly. I only used Nightly and it worked there last time I tried (maybe a year ago).


Sorry, I use (and recommend) the Iceraven fork for Android. Works there.


Thank you!


The issue is, as usual, the lack of convenient user control to match the behavior best suited to that user's specific situation


The decision here is completely in the hands of the user agent; use a browser that lets you disable lazy loading.


I like such a short article, given the attention span I have these days. This could have been a tweet.


In Firefox you can set dom.image-lazy-loading.enabled to false.


This is why my country has no subways; instead we have flyways.


I hate it soooo much. I've got Gigabit Internet and it makes it feel like I'm back on dialup waiting for images to load as I scroll rather than having the page ready for me.


Why did we go from pre-rendered pages generated with multi-threading on a server to single-threaded JavaScript sequentially loading things in the front end?


That has nothing to do with the complaint here.


I thought this was the reason behind increasing use of lazy loading, especially in the context of non-live data like images and text


No, it's a traffic optimization to avoid loading images you don't see until they are visible on the page.


It's curious that some folks hate the side effects of insufficient infrastructure while missing that the lack of modern infrastructure is the root problem.

It is entirely possible to get a cell signal on a train, even in tunnels. We as a society just choose not to build out the necessary infrastructure.


As a software developer I have very little control over rail infrastructure. However I have almost total control over the HTML attributes I emit.


As a user, I have very little control over 99.999% of all sites' infrastructures. We were talking about lazy loading on a train. Most users have no control over the HTML attributes they receive.

As software developers make up a vanishingly small amount of all users on trains, it seems prudent to highlight their collective experience, not the portion of a one-off that you can control as an individual contributor.

I'm well aware that you or I as individuals have very little control over public transit infrastructure. It's a shame we cannot team up as a group to solve these issues rather than fixate on our experiences solely through a personal lens.


Adding physical networking infrastructure to all the trains and everything else is a much bigger job than adding a setting to a browser that lets the user opt out of lazy loading.


Ah, it's the users who are wrong!

In all seriousness, as engineers we of course can't assume the happy path is common, and to ensure good UX more often than not we need to account for these cases.


I'm not sure how you possibly got that reading from what I wrote.

I'm discussing a solution rather than a series of ephemeral and incomplete fixes. Not that developers can solve public transit issues, but that society at large is the only one that can truly solve these issues.

The users are not wrong in my assertion nor do I think disabling lazy loading is the solution. Disabling lazy loading is an ephemeral and incomplete fix that helps some scenarios while degrading the experience in other scenarios.

The solution is improved infrastructure. I don't see why this position is controversial.


So: developers can't change the way society has chosen to prioritize infrastructure; poor user experience is the result of this societal failure; thus there is nothing that can be done to improve user experience?

It seems more likely that the expectation that users must have consistent connectivity to load (and keep loaded) the most fundamental aspects of a website (images and text) is a poor assumption, and that the hubris of leaving this type of assumption unchecked is the actual root problem.


It is the web developer's choice whether or not to deal with bad/poor internet connections. Most do not.

Lazy loading makes it impossible to do anything to manage a poor connection.


> Lazy loading makes it impossible to do anything to manage a poor connection.

This was the exact problem lazy loading was supposed to solve


If the problem was out-of-viewport images impeding the receipt of assets that do affect the viewport (CSS, fonts, etc.) then it would be good to postpone fetching those images just until everything else is received, rather than postponing until they're closer to the viewport.


Fun fact, if you type your password, it will show up as ***


Makes me wonder what other great ideas we've implemented.


Which is indeed why it's a terrible idea.


In this case it's also the choice of the user; they could choose to ignore the lazy loading attributes (I know OP mentions their browser doesn't support it, but they could choose another browser).


"Best viewed in Internet Explorer" is absolutely not something that needs to make a comeback.


Seems hard not to these days, especially with service workers which can cache resources for offline use.


Right, because it's expensive and difficult and would make the train cost more.

You could have truly wireless electricity power your phone and never have to recharge. You just need to invest in more infrastructure.


> Right, because it's expensive and difficult and would make the train cost more.

That you (and others) think network connectivity is anything higher than a rounding error in terms of rail infrastructure costs is likely part of the problem.


Software is easier to adapt than everything else and users don't care why your website doesn't work properly.

Building a website with the assumption that the user will have a perfect internet connection is like writing an application without error handling. When something goes wrong you can't just blame reality for being imperfect.


Europe has the benefit of having had a lot of roads created by the Romans. Infrastructure takes time.


But it burns in a day?


A weird complaint if you ask me.

Browsers and developers should try to do their best with limited resources. So in most cases that means making predictions about user behavior. Sometimes that means preloading content (e.g. if you're on a landing page, preloading content from the next likely page will make it feel much more performant) and sometimes lazy loading to not waste resources if a user isn't going to see content in the first place.

But of course there will be "bad branch predictions" sometimes. I think "having a page load before I get into the tunnel but not scrolling down enough to have content load before I lose Internet connection" certainly seems like a case that it's fair not to optimize for.


Instead of all the complexity and code and branch prediction, how about just sending the page and the images? If lazy loading is super important, the browser can lazy load and render off screen. Don't make thousands of solutions for thousands of platforms on the server side.


I don't understand your comment at all.

Before the loading attribute on img, developers did have lots of custom solutions for lazy loading. A major point of lazy loading is to save resources on the server that would be wasted when serving an image that is never seen.
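The typical bespoke version looked something like this (a sketch): keep the real URL in a data attribute and only promote it to src when the image approaches the viewport.

  // markup: <img data-src="photo.jpg" alt="..."> - no src, so nothing downloads up front
  const io = new IntersectionObserver((entries) => {
    for (const entry of entries) {
      if (!entry.isIntersecting) continue;
      entry.target.src = entry.target.dataset.src; // start the real download
      io.unobserve(entry.target);
    }
  }, { rootMargin: '200px' }); // begin a little before the image scrolls into view
  document.querySelectorAll('img[data-src]').forEach(img => io.observe(img));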


I like the loading attribute, but it hasn't been supported in Safari until very recently, so the solution has had to be a bespoke server + javascript solution to please Google. Still think a better solution is for the browser to optimize what is "below the fold" and make the decision based on user preference (I want to load the whole page). And most importantly, Google shouldn't weigh in on how to implement pages.


I agree but changing the world to that would be quite a fuss. But if that was already where we are at it would be better. But the JS as an Operating System per site ship has sailed.


I agree. A simple way to solve it is to make the article readable without images. Good old alt tags! A data science heavy article may not benefit. But a dozen words can paint a picture.



