You're right if you're using the web to display content. That was the case in the 90s, and in that case, yes, a simple index.html with an <h1> and a <p> is fast, responsive, etc. But with webapps being more and more common, one could argue that displaying text is not necessarily the web's main purpose anymore. If you try to access Figma with a text-based browser it's gonna crap the bed, so it fails the test, but is that even a relevant test? The web is bloated, but it didn't get bloated just because engineers were bored; it had genuine use cases where doing more than just displaying text was needed. And it wasn't ONLY for marketing purposes (though I'm sure that played a big part).
There's also SEO as a simple reason why JS-heavy sites correlate inversely with content quality. Yes, I know Googlebot attempts a time- and memory-bound render of a JS site to arrive at a DOM for text extraction, but this won't work with other search engines, and it will never work as well or as quickly as serving static HTML to Googlebot, no matter what.
I use NoScript for this, plus I keep running into sites that present a screen stating "Cloudflare is checking". That Cloudflare check requires JavaScript to be enabled, so I just move on; to me that means the site can't even count me as a 'view'. Makes things a bit easier for me too :)
Wow. I wonder how you use stuff like instant messaging, music streaming and so forth. Do you also skip all browser-based desktop apps because you hate JavaScript so much?
No need to get upset at the guy. It's their choice to use JavaScript the way they want, rather than letting any site that asks for it use it. Most websites with articles don't need JavaScript, and only use it to push ads or paywall articles. If there is a webapp they want to use, they can always whitelist it.
As for music streaming and instant messaging, not everyone has/needs Discord, Slack, Teams, or anything like that. If you absolutely need it for work, just whitelist it. No problemo. And, not everyone streams music. I barely used Spotify myself until I got an office job and needed something to fill the boredom.
The issue isn't about stopping Javascript as a whole. The issue is about permissions management. Not every site needs it, so not every site should have it. If you need it to work for a webapp, just enable it.
I'm not upset, I am just curious and think the guy had quite the ignorant take on the whole subject matter.
I kind of disagree though; most websites do need JavaScript even if they contain nothing but an article. Extremely few devs will care about the astonishingly small minority that disables JavaScript by default, and saying that the websites that do this are trash, as so many people who favor this kind of thinking do, is ignorant.
The JavaScript hate is strange, because it's like hating programs written in Python for no other reason than that they're written in Python. I think it comes down to devs not being able to choose: the only real choice has already been made for them, so they end up hating it.
But even alternatives like Phoenix LiveView / Hotwire that minimize the use of JavaScript require JavaScript to work properly. In time I believe most websites will ship a wasm binary, and then this debate will be over forever and JavaScript will become irrelevant.
This is what Twitter does. It's possibly the least performant major website I've used.
Conditional loading certainly can improve perf. in theory. I've yet to see any evidence it does so in practice. The aggregate of bundle size, bundle parsing, client-side execution overhead, and the added latency of the plethora of metadata normally bundled with API responses is more than enough to negate any actual perf. gains.
As for "easier to maintain", I've never seen anyone even try to make that argument in theory, nevermind practice. Pretty sure it's widely accepted even by advocates of this architecture that it's a trade-off of perf. gains for ease-of-maintenance losses.
Just because Twitter does that doesn't mean it is the case everywhere else.
It moves some of the rendering work from the backend (querying the data and generating markup) to the browser (querying the API and generating the content from the responses).
At my current job, it's made a significant improvement. The server returns compact JSON data instead of HTML, so it's easier to generate the data and uses less bandwidth.
It also feels faster to the user, because when they change search parameters only part of the page changes, rather than the entire page reloading.
As for "easier to maintain", that may be subjective. Code to generate a simple HTML template from results is replaced by JavaScript code to hit the API and generate the DOM. Although HTML5 templates makes that much easier.
I'm not saying it's impossible - glad to hear you've successfully implemented it in your workplace. I'm just saying that by-and-large it has the opposite effect to the stated intent.
If most examples of a strategy make things worse, and only one person uses that strategy to improve things, then going around saying "everyone is doing it wrong" rather than questioning the strategy isn't particularly sound.
I've built plenty of (small) client-side rendered UIs myself that lazy-load content; I know the trade-offs and I even believe I can achieve a performant outcome on my own. But that's anecdotal. In the wild, I have not seen a single major website improve perf. via lazy-fetched content rendering.
I can't say I've seen a single site where that worked in practice as advertised. Also, sometimes it introduces UX annoyances (e.g. the back button not working as expected).
It's one of those things that, in theory, can work if absolutely everything is done right and nothing else is done differently. E.g. if the only difference between the JS-enabled and JS-disabled versions of the site was how the content changes and nothing else (no additional JS frameworks, functionality or whatever), then yes, it most likely can be faster (though for the difference to be noticeable, the site needs to be rather heavy in the first place).
The problem is that in practice this comes with a bunch of other baggage that not only throws the benefit out of the window but also introduces a bunch of other issues.
Is there any evidence of this? All the sites I've seen that use extra requests to load text always seem to take multiple seconds to load, whereas most pages that use server-side rendering generally load in under 100 ms.
It's a tradeoff; basically the question is "will most users need to read all the content or not?" Displaying everything at once without making extra queries is best, but it's not always possible. The frontend is fetching from the backend, so it's going to say "hey, send me all the comments from all the posts from November 2021". If there are 3 of them, that's fine, but if there are 23,000 you can't really load everything at once. That's why we use pagination on the backend: we say "hey, send me results 1 to 25 of the comments from all the posts from November 2021". This way the frontend only displays 25 comments for a quick page load, and we hope that will be enough.

To display the other comments, either we ask the backend how many pages of 25 elements there are and render that many pagination links (pagination), or we simply have the frontend request the next page once the user reaches the bottom (infinite scroll).

Even if displaying all the content is possible, if there's content that only 1% of your users will read, you might want to offer faster loading for the 99% and add a few seconds of loading for the 1%.
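To make that concrete, here's a rough sketch of the frontend side of that exchange. The /api/comments endpoint, its parameters and the { items, totalPages } response shape are all invented for the example:

    interface CommentPage {
      items: { author: string; body: string }[];
      totalPages: number;
    }

    async function loadComments(page: number, pageSize = 25): Promise<CommentPage> {
      // "send me results 1 to 25 of the comments from November 2021"
      const res = await fetch(
        `/api/comments?month=2021-11&page=${page}&pageSize=${pageSize}`
      );
      return res.json();
    }

    // Classic pagination: load page 1, read totalPages, render that many links.
    // Infinite scroll: call loadComments(currentPage + 1) when the user nears the bottom.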
>This way the frontend only displays 25 comments for a quick page load
Many years ago, smart frameworks implemented smart stuff like displaying only what is visible. For example, you could have a table with 1 million rows, but your HTML page would not create 1 million row elements; you create GUI widgets only for the visible part, and as the user scrolls you recycle the existing widgets.
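Here's a minimal sketch of that technique in plain DOM code, just to show the shape of it. The element ids, the fixed row height and the data are made up for illustration:

    // A million logical rows, but only enough real DOM rows to cover the viewport.
    const ROW_HEIGHT = 30; // assume fixed-height rows, in px
    const items = Array.from({ length: 1_000_000 }, (_, i) => `Row ${i}`);

    const viewport = document.querySelector("#viewport") as HTMLDivElement; // scrollable container
    const spacer = document.querySelector("#spacer") as HTMLDivElement;     // tall child that gives the scrollbar its full range
    spacer.style.position = "relative";
    spacer.style.height = `${items.length * ROW_HEIGHT}px`;

    // Create just enough row elements to fill the viewport, plus one spare.
    const poolSize = Math.ceil(viewport.clientHeight / ROW_HEIGHT) + 1;
    const pool = Array.from({ length: poolSize }, () => {
      const row = document.createElement("div");
      row.style.position = "absolute";
      row.style.height = `${ROW_HEIGHT}px`;
      spacer.appendChild(row);
      return row;
    });

    function fill(): void {
      const first = Math.floor(viewport.scrollTop / ROW_HEIGHT);
      pool.forEach((row, i) => {
        const index = first + i;
        row.style.top = `${index * ROW_HEIGHT}px`; // move the recycled row into place
        row.textContent = items[index] ?? "";      // and refill it with the right data
      });
    }

    viewport.addEventListener("scroll", fill);
    fill();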
As a practical example, go to a YouTube channel page: they load only 2 or 3 rows of videos and you have to scroll to force more to appear. That means you can't do a Ctrl+F search, and it's also less efficient, because as you scroll the items at the top are not recycled and reused, so more memory is probably used.
The JSON for all the videos is not huge: some strings with titles and thumbnail URLs, maybe some numbers. The issue is that it's not possible to natively do the best/correct thing; only recently did we get lazy loading, for example. Basically, HTML was designed for documents, while frameworks/toolkits designed for apps did the correct thing many years ago... That's an explanation, but no excuse for why things are such a shit show with pagination today.
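For reference, the native lazy loading mentioned above boils down to a single attribute (in markup it's just loading="lazy" on the <img> tag); a tiny sketch with an invented URL and container id:

    // The browser defers fetching the image until it is close to the viewport.
    const img = document.createElement("img");
    img.src = "/thumbnails/video-123.jpg"; // made-up URL for the example
    img.loading = "lazy";
    document.querySelector("#grid")!.appendChild(img);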
The argument is that JS-heavy site design indicates worthless content on average. Not that it's easier to maintain for the site owner (which might or might not be the case), or more realistically, creates job opportunities for "web developers".
Google Maps is one of the sites I still whitelist, but I often reconsider this decision, and I'm ready to find a replacement.
Today's Google Maps is a shadow of its original self, which did have a no-JS version, by the way. It has gradually gotten simultaneously heavier, less convenient, more annoying, and less useful, and I've just about had it.
Just off the top of my head: it no longer displays zip codes, takes a long time to load, has missing street names on the map, often promotes features I do not want while taking away features I do want, and is covered so thickly with paid-promotion items that I can barely find somewhere to click that isn't an ad.