I remember doing this back in 2006 before Chrome even came out. Remember when it was a good idea to cache the .length property because it was O(N) on some browser (I think an early IE), so you'd write your for-loops like this?
for (var i = 0, len = arr.length; i < len; ++i) { ... }
Hell, the whole premise for React (the virtual DOM) is based on outdated performance advice. Chrome has used a dirty-bit for DOM manipulations since a year or two after React came out; manipulating the DOM is within a factor of ~2-4 of setting a property on a JS object (and much faster than constructing & copying whole new JS objects, which incurs a GC cost), it's just that you really want to avoid interspersed manipulations & queries, which force a page reflow:
parent.appendChild(document.createElement('div')); // Fast; ~50 us
let w = parent.offsetWidth; // Slow, forces reflow; ~20 ms
let h = parent.offsetHeight; // Fast again; no page modifications
parent.appendChild(document.createElement('div')); // Fast; just sets dirty bit
parent.appendChild(document.createElement('div')); // Still fast; dirty bit already set
But that's the nature of a lot of technical rules of thumb. They get stale as the underlying stack beneath them changes.
> the whole premise for React (the virtual DOM) is based on outdated performance advice. Chrome has used a dirty-bit
This mentality sorta falls into the exact pitfall GP mentioned (i.e. what you really want is for IE/Edge to be significantly faster bringing speed gains for everyone else along the way, not for Chrome to be microscopically faster at the expense of every other browser)
There are real and significant costs in things like random access of .childNodes or bottom-up DOM construction in some browsers. At the height of the whole virtual dom thing, there was a lot of super browser-specific micro-optimizations going on, a lot of it w/ a heavy focus on Chrome, with IRHydra and stuff.
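Roughly the kind of pattern I mean, as a hedged sketch (visit is a placeholder function): indexed access into a live NodeList in a loop, versus a plain sibling walk, which some engines handled very differently at the time.

for (let i = 0; i < parent.childNodes.length; i++) {
  visit(parent.childNodes[i]); // repeated indexed access into a live NodeList
}

for (let node = parent.firstChild; node !== null; node = node.nextSibling) {
  visit(node); // single forward walk, no repeated indexing
}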
It got a bit ridiculous when I figured out that you could make a significant dent in one of the popular benchmarks at the time not by tweaking JS anymore, but by removing unused CSS rules from bootstrap.css...
But as you said, engines change quickly and I'm not sure it's really worth chasing micro-optimizations anymore. We're getting to a stage where the only way to get faster is to simply run less code. Some frameworks are getting the idea (Svelte, Marko).
Or build better APIs into the browser and optimize them in the C++. IMHO that's the best long-term solution. Moves glacially slow, though, since there's a large engineering cost that needs the coordination of each major browser vendor.
I was a big fan of Polymer/WebComponents in 2014, but they kinda dropped the ball. In hindsight I wish they'd just implemented the React API directly in the browser. (Though depending on how Google vs. Oracle goes in the Supreme Court, maybe that'll become illegal, sigh.)
This worked amazingly well for querySelector (vs jQuery's sizzle and friends at the time), but IIRC, JIT is so good now that there are APIs written in JS _because it's faster to do so than to use AOT_. I recall reading somewhere that the cost of jumping between JS and C++ land was a source of slowdowns with the DOM API, so there's that too.
But overall, I agree with the idea of incorporating the ideas that are working into the platform. There have been various discussions over the years about how virtual dom could work as a native browser API, but unfortunately, the needle hasn't moved there at all.
Thanks -- correct me if I'm wrong but I thought lit-element is the bit that does components and lit-html (I think the naming is pretty bad here) is just the rendering bit, is that right?
I was already on the fence about what to try next, and Svelte is at the top of the list if I have any problems with lit-element... lit-element just seems a bit lighter and more standards-positive so I wanted to give it a go.
These days though, I'm basically not considering investing in any libraries/frameworks that don't offer SSR with competent hydration. IMO it's the closest we get to the holy grail in frontend -- separation from the backend (which I argue is a benefit), and the SEO-friendliness, no-JS compatibility, and speed of server-side rendering.
lit-element doesn't have a good SSR story just yet (it's experimental[0]), and Svelte has sapper and ElderJS[2], so it's already ahead there...
React would be pretty horrible to implement in the browser. It can't use most of the DOM API: you can't set properties on elements, can't add event handlers to elements, can't create comment nodes. And VDOM and diffing is one of the slower ways to update the DOM.
We should build templating and DOM updates into the browser, but we should do better than React.
All browsers have used a dirty bit for layout for at least the past 2 decades (source: I have worked on browser engines for 2 decades). This is not some new Chrome thing. And all browsers continue to have that same hazard that querying layout-dependent properties forces a synchronous style recalc and layout.
The benefit of React (if any; it has a lot more overhead than vanilla JS+DOM) is that you can't hit that specific pitfall of forcing layout interspersed with DOM or style manipulation.
This is a slightly weird example because it's adding an empty <div>, so you can imagine it could be optimized if it provably has no effect on layout.
But let's imagine instead that the node added to parent was a div with a text child. Or, slightly less obviously, the div is getting added to a document with a style rule of `div { width: 1000px; }`. In that case, the width could change. So the browser engine's options are:
1. Return a stale value from parent.offsetWidth for now, and just lazily update style at the next event loop iteration.
2. Synchronously update style and layout (note, this update is not as expensive as a from-scratch layout), and return the up-to-date value of parent.offsetWidth.
It turns out that, historically, the earliest browsers with scripting did option (2), and websites came to depend on it. So browsers had to keep on doing it, and so forth. Many folks in the web standards world would like to find a way out of this dilemma, where DOM mutation doesn't risk this kind of performance hazard.
You could also imagine an extra bad option:
3. Every time the DOM (or the CSSOM) is mutated, synchronously update layout.
This is super expensive in the face of repeated DOM mutations. Repeated DOM mutations (e.g. adding multiple elements, setting multiple attributes) are way more common than repeatedly getting style/layout-dependent attributes. (3) has the same observable functional behavior as (2), but it's a lot slower, because it will do a lot of unnecessary layouts.
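To make that concrete, here's a hedged sketch (parent is whatever element you're mutating): typical code does a batch of writes and then maybe one layout-dependent read, so under (2) you pay for a single style/layout pass, while under (3) every single mutation would pay for its own.

for (let i = 0; i < 100; i++) {
  // under (2): each append just sets the dirty bit
  // under (3): each append would run a full layout
  parent.appendChild(document.createElement('div'));
}
let width = parent.offsetWidth; // under (2): the one synchronous style recalc + layout happens here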
I'm not totally sure if this explains everything you were wondering about, but I hope it helps some.
> is that you can't hit that specific pitfall of forcing layout interspersed with DOM or style manipulation.
This isn't true though: useLayoutEffects that perform a read/write, littered through your code, will quite easily induce layout thrashing, and there's no way of batching those reads and writes across a tree
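A minimal sketch of what I mean, with a made-up component (MeasuredBox is hypothetical): each instance reads layout and then writes a style inside useLayoutEffect, so mounting a bunch of them interleaves reads and writes down the tree, and every read after a previous instance's write can force another synchronous layout.

import { useLayoutEffect, useRef } from 'react';

function MeasuredBox({ children }) {
  const ref = useRef(null);
  useLayoutEffect(() => {
    const width = ref.current.offsetWidth;      // layout read
    ref.current.style.maxHeight = width + 'px'; // layout write, dirties layout again
  });
  return <div ref={ref}>{children}</div>;
}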
Isn't it a good habit to store the length of the array regardless of browser implementation? Technically, accessing a variable is simply faster than a property access on an object, and this wouldn't be a case of premature optimization either--just sound coding practice.
They're close enough not to matter on most modern browsers - I suspect V8 actually hoists the field access out of the loop when it compiles it (loop-invariant code motion is a really well-understood compiler optimization at this point). I would definitely put this in the premature optimization bucket.
It doesn't really matter nowadays anyway, because now I write my for-each loops like:
for (let elem of arr) { ... }
or
arr.forEach(elem => { ... });
(Well, technically now I write Android & C++ code and do leadership/communication stuff, but I brushed up on my ES6 before getting the most recent job.)
because the people who built the JS spec decided that there should be a brand new heap object created every iteration. At the time, there was thought that escape analysis would let them optimize away this object, but from what I can tell, ten years later, engines are really bad at it. Escape analysis is a heuristic, and it needs to be conservative.
And yes, this isn't a micro-benchmark. At least in my application, performance is mostly bounded by GC pauses and collection, not slow code execution. Anything to reduce your GC pressure is going to be a good improvement... but note that modern frameworks like React are already basically trashing your heap, so changing out your loops in an already GC-heavy codebase won't really do much.
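For reference, a hedged desugaring of what a for-of loop does per the spec (doSomething is a placeholder): every iteration goes through the iterator protocol, and each next() call is specified to return a fresh { value, done } result object, which is the per-iteration allocation I'm talking about unless escape analysis manages to elide it.

const it = arr[Symbol.iterator]();
let step = it.next();   // fresh { value, done } object
while (!step.done) {
  doSomething(step.value);
  step = it.next();     // another fresh { value, done } object, every iteration
}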
It can only hoist the length out of a for loop if it can prove that the length doesn't change, i.e. the array isn't modified. Otherwise it does have to check it each iteration. This is pretty hard in general since any time you call an external function it could potentially have a reference to the array and modify it.
I suspect the length access is just so fast that the difference between hoisting it out and not is immeasurable.
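A small hedged example of that aliasing problem (the function names are made up): once the loop body calls something opaque, the engine has to assume the array might have been resized, so the length check stays in the loop; with no escaping calls it's free to hoist it.

function processAll(arr, callback) {
  for (let i = 0; i < arr.length; i++) { // length can't be hoisted: callback might push/splice arr
    callback(arr[i]);
  }
}

function sumSquares(arr) {
  let total = 0;
  for (let i = 0; i < arr.length; i++) { // nothing here can touch arr, so the check can be hoisted
    total += arr[i] * arr[i];
  }
  return total;
}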
That very much depends on how good the JIT is, certainly many AOT compilers would understand this pattern and inline the callback, resulting in very similar optimised code.
In cases where order doesn't matter you can avoid the array length question altogether by decrementing:
let index = arr.length;
while (index > 0) { // checking before the first access also keeps empty arrays safe
  index -= 1;
  console.log(arr[index]);
}
As a side note, whether it takes longer to access a variable or an object property is largely superficial (and depends on the size of the object), because caching the property implies creating a new variable in which to store it. There is time involved in creating that new variable just as there is time involved in accessing an object's property.
As an added bit of trivia: in the 1970s a software developer named Paul Heckel, known for the Heckel diff algorithm, discovered that access to object properties is faster than accessing array indexes half the time. That was in the C language, but it holds true in JavaScript.
In 2009 we standardized on this form for all Google Search JS for-each loops, because we were literally counting bytes on the SRP:
for(var i=0,e;e=a[i++];){ ... }
Could result in problems if the array contained falsey values, but we just didn't do that.
Nowadays, like I mentioned above, I'd just do
arr.forEach((elem, i) => { ... });
Which last time I checked was significantly slower than the for-loop, but I've learned my lesson about trying to optimize for browser quirks that may disappear in a year or two. :-)
>[...] discovered that access to object properties is faster than accessing array indexes half the time. That was in C language, but it holds true in JavaScript.
Hm. So you're saying that indexing into a hash map can be faster than indexing into an array? How would this be possible? I mean, under the hood a hash map is going to be an array too, which is being indexed based on the hash value...
If the code is hot, there is no perf difference in modern JavaScript engines. They will speculate, and both your var access and length property access turn into just a simple memory read or a constant.
> Technically, accessing a variable is simply faster than a property access on an object
A good compiler will make it so that they are largely equivalent. In C-based languages this is one of the first things a compiler will do, and I am sure that every JavaScript engine does this kind of thing too when possible (actually, it may even have an easier time doing it because it may be able to skip pointer analysis).
If it can be proved immutable, all modern JS engines will at worst hoist to the earliest point of immutability. With the amount of inlining modern JS JITs can do, they can often do a better job of proving immutability than C[++] compilers.
Components were an implementation detail of an immediate-mode rendering paradigm for the DOM.
Previously we had been doing retained-mode rendering. We would modify & shuffle around the pieces of the page.
React's components were there to let you re-render the app quickly. When state changed, a new render happened, with new elements emitted. You didn't think of what elements used to be there.
React was about the virtual dom. It was about creating an abstraction to let us not have to regard what was on the page when we were deciding what is on the page. Incidentally, imo, that involved components, but components, while comprising numerous html elements, serve a very similar function to html elements (especially to web components), and were not, imo, a particularly novel part of React. Yes, there was a lot to their creation & implementation, there was a lot of tech work that went into making Components a thing. But components, to me, are far overshadowed by the vdom, by the performance & speed of a data-system designed to go diff & enact desired state into the live DOM tree.
In kubernetes world, we'd call the vdom a controller. It reads the canonical state, the component tree, and ensures the target DOM machinery is kept up to date & reflects this desired state.
You've got it backwards. The creators of React have clarified this many times as well. The vdom is an implementation detail, components and declarative render are the motivation.
The vdom is interesting, but the right history isn't starting there. The premise was the ability to build bigger, better, more reliable, easier to understand/maintain/refactor apps. Components with declarative render + top down data flow enable that, vdom is just the only way to make it work without being too slow.
I disagree. Not that the vdom is an implementation detail. But to me, the fact that there is an immediate-mode rendering API for the web is far more notable than the fact that "Components" take the place of HTML elements.
Everything interesting & notable about components relates to the fact that they are immediate-mode things. Nothing else about them is particularly notable or interesting or important; the rest has parity with what HTML elements did/do.
Regardless of intent or deliberation, this, to me, is the clear & obvious technical difference that underpins whatever goals the team thought they were shooting for. It's the major characterization of how React was different from other webdev we'd tried. Everything else is downstream of that specific choice, for how to "draw" HTML: immediate-mode.
Well then we agree, you're just calling components immediate mode and throwing away all the other things that make them interesting.
Immediate mode has a specific meaning that isn't really what React is doing. There's no concept of avoiding double buffering, immediate mode re-renders everything every frame while vdom avoids as much work as possible and updates are only triggered by actions.
But there's more that's interesting about them than just being declarative. A pure render function, a standardized prop boundary, top down data flow, and lifecycles (further improved by hooks, which are basically algebraic effects, which make real hot reloading work) are just as important, and all of those fall under "components".
Enyo, the WebOS framework, actually had a great declarative component model without vdom many years before React.
I'd point out Steven Wittens' Model View Catharsis[1] as some discussion that adopts similar framing to my own, that emphasizes the Immediate vs Retained mode distinction as a core way to analyze different web toolkits.
> Immediate mode has a specific meaning that isn't really what React is doing. There's no concept of avoiding double buffering, immediate mode re-renders everything every frame while vdom avoids as much work as possible and updates are only triggered by actions.
I've pointed above to others with framing similar to my own. I think you are over-focusing & refusing to see a similarity that is quite present. From a programmer perspective, React is about calling React.render(myJsx, domElement), again and again and again. How much more immediate mode does it get? "Redraw the world" is the premise.
As for double buffering, that's pretty much what the vdom is doing! There's the current buffer, there's the new world, and the vdom machinery is pushing the new buffer onto the old buffer once the render completes.
> But there's more that's interesting about them than just being declarative. A pure render function, a standardized prop boundary, top down data flow, and lifecycles (further improved by hooks, which are basically algebraic effects, which make real hot reloading work) are just as important, and all of those fall under "components".
These are all good characteristics of React, and part of its total package that defines it. Interesting, yes! I think I am under-attributing the different feel of components versus where the web was before. A lot of this feels, to me, like an incidental discovery on the way to a bigger phase change, from retained to immediate. I see a lot of those fingerprints in other immediate-mode rendering places. But it's very new to the web, best I can tell. I need to go back & re-review Enyo. Been a while. Ahh, the heady days of two-way data-binding!!
To hash out some specific points: declarative is only part of it. The DOM is declarative. But the DOM is not immediate-mode rendering, it's a retained system. The declarativeness feels normal?
Pure render functions are neat to see on the web, yeah. A lot of other immediate mode systems have this, geometry shaders being a large class of systems that often are pure functions.
The prop boundary seems like something the DOM already has & used a lot, if not quite so heavily bounded. This is just properties on elements, only expressed in a slightly different way: that distinction doesn't draw any major note for me; it's an interesting twist, but just a re-embodiment of what was. That it's a harder boundary now doesn't do a ton for me; elements have been powered by their properties since DOM1 & it's very normal on Custom Elements.
Top down data-flow again feels like something that emerges relatively naturally in most scene-graph rendering systems, not unique to React. The web itself is a big top-down renderer, always has been. It's weird because I both see tons of parallels between templates (which often nest or accept children or slots) and React components, but also I see that the feel is quite different, that components somehow are different, although I struggle to characterize how they really are and how this has changed webdev.
Not sure how I feel about lifecycle either. Willing to give this one to components. Definitely not a very immediate-mode idea, more like something we'd see in a scene-graph though.
React and web components are different tools trying to solve different problems.
Web components are primarily about trying to add new virtual HTML tags to "extend the platform".
React and other frameworks are about trying to build interactive applications that require larger-scale UI management, efficiently, on top of the DOM, by defining pieces of that UI as a tree of reusable components (and using techniques that web components don't have available to them).
If I want to add a color picker to an otherwise static HTML page, I might use a color picker web component.
If I want to build a meaningful-sized app, I'd reach for React.
This declarative narrative is more or less a marketing gag, isn't it? Mixing logic and markup has always been an antipattern in web development. However, React calls this declarative and this is the new black now?
I've been doing web development since the 1990s and I've always thought spreading a UI element across 3 separate files (often in different locations) was an anti-pattern (or 5+ files in 5 different folders if you want MVC).
React is awesome because it allows feature-aligned separation of concerns (each component has a single job - render everything about a specific element - which is usually a well defined part of a specific use case).
Jsx is the best UI system ever in terms of productivity - speaking from experience: I’ve implemented production apps using dozens of UI frameworks/platforms - Html, WYSIWYG, Flash, WindowsForms, WebForms, Ajax, Asp.Net Mvc, Razor, WPF, Xaml, Silverlight, Knockout, Handlebars, PhoneGap, Ionic, Bootstrap, MaterialUI, Angular2, React w/ Class Components, React w/ Mobx, React w/ Hooks
I can tell you pros/cons of each of those. But at the end of the day I can develop an entire app in days in React+Hooks which would take me weeks in most any other.
> Remember when it was a good idea to cache the .length property because it was O(N) on some browser (I think an early IE), so you'd write your for-loops like this?
I do not, mostly because I came online at a time that Internet Explorer was already uncool and Mozilla was clearly better, and also I had no idea what JavaScript was ;) But I can only imagine what the browser monoculture of the time was like, viewing the echo of it many years later with Blink.