I'm surprised people in this thread are so opposed to trying to gain status. It took me too long to realize, but status, like money, is just a tool at your disposal in society.
You can be too obsessed with status, just like you can be too obsessed with money, and use them for the "wrong" reasons.
But there are things money lets you do that you couldn't do otherwise, like buying a home for your family or bootstrapping a company.
For status, it's the same. Going to X school gives you an easier chance of getting into X company. Knowing X amount of people gives you more opportunities to meet other well-connected people, allowing you to gain more diversified interactions and/or spread your own influence. Having a trustworthy reputation gives people a reason to give you a minute of their time, and sometimes that first impression is all you need to start a relationship or get funding for your company.
Finally, there are things that status gives you that all the money in the world won't.
Definitely agree. And before someone argues with the expected rebuttal, I think there’s a difference between chasing happiness and rationally acting upon your wants.
In the US at least, “the pursuit of happiness” is such a fundamental concept, people really aren’t challenged to question it.
Isn't it a good habit to store the length of the array regardless of browser implementation? Technically, accessing a variable is simply faster than a property access on an object, and this wouldn't be a case of premature optimization either--just sound coding practice.
They're close enough not to matter on most modern browsers - I suspect V8 actually hoists the field access out of the loop when it compiles it (loop-invariant code motion is a really well-understood compiler optimization at this point). I would definitely put this in the premature optimization bucket.
It doesn't really matter nowadays anyway, because now I write my for-each loops like:
for (const elem of arr) { ... }
or
arr.forEach(elem => { ... });
(Well, technically now I write Android & C++ code and do leadership/communication stuff, but I brushed up on my ES6 before getting the most recent job.)
because the people who built the JS spec decided that there should be a brand new heap object created every iteration. At the time, the thought was that escape analysis would let engines optimize away this object, but from what I can tell, ten years later, engines are still bad at it. Escape analysis is a heuristic, and it has to be conservative.
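For anyone curious where that per-iteration object comes from: here's a hand-desugared sketch of what `for...of` means under the iterator protocol (this is the spec-level behavior, not necessarily what any particular engine emits after optimization). Each call to `next()` returns a fresh `{ value, done }` result object:

```javascript
// Rough equivalent of: for (const elem of arr) { seen.push(elem); }
const arr = [10, 20, 30];
const seen = [];
const it = arr[Symbol.iterator]();
let step = it.next();          // allocates a { value, done } result object
while (!step.done) {
  seen.push(step.value);
  step = it.next();            // a fresh result object every iteration
}
console.log(seen);             // [10, 20, 30]
```

Unless escape analysis proves those result objects never escape, each one is a heap allocation the GC eventually has to collect.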
And yes, this matters outside of micro-benchmarks. At least in my application, performance is mostly bounded by GC pauses and collection, not slow code execution. Anything that reduces your GC pressure is a real improvement... but note that modern frameworks like React are already basically trashing your heap, so swapping out your loops in an already GC-heavy codebase won't really do much.
It can only hoist the length out of a for loop if it can prove that the length doesn't change, i.e. that the array isn't modified. Otherwise it does have to check it each iteration. This is pretty hard in general, since any time you call an external function, that function could hold a reference to the array and modify it.
I suspect the length access is just so fast that the difference between hoisting it out and not is immeasurable.
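If you'd rather make the invariance explicit than rely on the JIT to prove it, the classic pattern is to cache the length in the loop header yourself (a minimal sketch; safe only as long as the body doesn't resize the array, which is exactly the condition the JIT has to establish):

```javascript
const arr = ['a', 'b', 'c'];
const out = [];
// Read arr.length once and reuse it; the body must not add or
// remove elements, or n goes stale.
for (let i = 0, n = arr.length; i < n; i++) {
  out.push(arr[i]);
}
console.log(out); // ['a', 'b', 'c']
```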
That very much depends on how good the JIT is; certainly many AOT compilers would understand this pattern and inline the callback, resulting in very similar optimized code.
In cases where order doesn't matter, you can avoid the array-length question altogether by decrementing:
let index = arr.length;
while (index > 0) {
  index -= 1;
  console.log(arr[index]);
}
As a side note, whether it takes longer to access a variable or an object property is largely superficial, and depends on the size of the object, because caching the property implies creating a new variable in which to store it. There is a cost to creating a new variable just as there is a cost to accessing an object's property.
As an added bit of trivia: in the 1970s a software developer named Paul Heckel, known for the Heckel diff algorithm, found that accessing object properties was faster than accessing array indexes about half the time. That was in C, but it holds true in JavaScript.
In 2009 we standardized on this form for all Google Search JS for-each loops, because we were literally counting bytes on the SRP:
for(var i=0,e;e=a[i++];){ ... }
Could result in problems if the array contained falsey values, but we just didn't do that.
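To spell out the falsey-value hazard for anyone who hasn't seen this pattern: the assignment-as-condition form terminates at the first falsey element, not at the end of the array, so a `0`, `''`, `null`, or `false` in the data silently truncates the loop:

```javascript
// The sentinel loop exits at the first falsey element (0 here),
// so 3 is never visited.
const a = [1, 2, 0, 3];
const visited = [];
for (let i = 0, e; (e = a[i++]); ) {
  visited.push(e);
}
console.log(visited); // [1, 2]
```

Which is why it only works if you control the data and can guarantee no falsey values, as the parent says.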
Nowadays, like I mentioned above, I'd just do
arr.forEach((elem, i) => { ... });
Which, last time I checked, was significantly slower than the for-loop, but I've learned my lesson about trying to optimize for browser quirks that may disappear in a year or two. :-)
>[...] found that accessing object properties was faster than accessing array indexes about half the time. That was in C, but it holds true in JavaScript.
Hm. So you're saying that indexing into a hash map can be faster than indexing into an array? How would this be possible? I mean, under the hood a hash map is going to an array too, which is being indexed based on the hash value...
If the code is hot, there is no perf difference in modern JavaScript engines. They will speculate, and both your variable access and your length property access turn into just a simple memory read or a constant.
> Technically, accessing a variable is simply faster than a property access on an object
A good compiler will make it so that they are largely equivalent. In C-based languages this is one of the first things a compiler will do, and I am sure that every JavaScript engine does this kind of thing too when possible (actually, it may even have an easier time doing it because it may be able to skip pointer analysis).
If the length can be proved immutable, all modern JS engines will at worst hoist the access to the earliest point of immutability. With the amount of inlining modern JS JITs can do, they can often do a better job of proving immutability than C[++] compilers.
Sure - but waiting to announce (and hopefully release) for over a month after your competitor is SHIPPING is a sure way to lose sales. At the very least they need to leak potential pricing and performance in the meantime.
Anyway I am still highly, highly skeptical Nvidia is going to move a lot of units so close to launch. The first batch is going to sell out in moments and from then on they will only be available way above retail price. Happens every time...
Also, many are probably looking into a new PC upgrade with the new RTX GPUs. I know I am. But I will most likely go for the Intel 10900 CPU instead of the 3900X; the 10900 is also cheaper than the 3900X and XT in my country.
I also map my Caps Lock to Control, but don't forget that Ctrl + [ is the equivalent of Escape in vi/vim. It's a chord, but it ends up being much more convenient for me since the keys are both comfortably around the home row keys.
Maybe not as comprehensive, but https://cronometer.com/ goes to great lengths to break down foods by amino acids (super helpful for protein tracking and weightlifting) and micronutrients.