I prefer to use a variable name that briefly describes the type of the object.
So if my "this" object is actually an AwesomeTableWidget instance, I might do:
var table = this;
I like the bit of extra documentation it provides... Or maybe this is just a remnant of all the C code I've written over the years, where it's common to have an object pointer as the first argument to a function and it's usually named like this.
That should be accessed via window.self anyway, so it doesn't really matter. There are so many poorly named global variables (top, etc.) that worrying about shadowing them seems pointless.
Note: Arrow functions cannot help us when the function is defined on the object literal as it will resolve to the enclosing lexical scope, which in this case is the global scope.
This is not true. With an arrow function, the example prints "Object Name". MDN says: "An arrow function does not create its own this context, so this has its original meaning from the enclosing context." [1]
(Here, the "enclosing context" means the context in which the arrow function itself was created, not the context in which its surrounding function was created.)
var name = "Global Name";
var obj = {
  name: 'Object Name',
  printName: function() {
    setTimeout(() => {
      console.log(this.name);
    }, 2000);
  }
};
obj.printName(); // prints "Object Name"

var printNameReference = obj.printName;
printNameReference(); // prints "Global Name"
You will see that "Global Name" is still printed. The reason is that an arrow function captures `this` from its enclosing lexical context, and the arrow function passed to setTimeout is created fresh each time printName is invoked. The bare printNameReference() call binds printName's `this` to the global object (because of how it is invoked), so the arrow function created inside it captures that global `this` as well.
Making the printName function itself an arrow function does not help either.
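A quick sketch of that failure mode, assuming a plain (non-strict) script; `obj2` is an invented name for this variant:

```javascript
var name = "Global Name";

var obj2 = {
  name: "Object Name",
  // An arrow function has no `this` of its own: it uses the `this`
  // of the scope where it was written (the global/module scope here),
  // so it can never see obj2's properties through `this`.
  printName: () => {
    console.log(this === obj2, this && this.name);
  }
};

obj2.printName(); // logs false, and never "Object Name"
```

In a browser script the second value is "Global Name"; in a Node module it is undefined, because top-level `this` there is not the global object. Either way, the arrow function never sees `obj2`.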
Since nobody has mentioned it yet, my first introduction to that = this; was JavaScript: The Good Parts,
Unearthing the Excellence in JavaScript by Douglas Crockford
Well worth a read for understanding the differences and relative powers of a prototypal language and how to get it to behave almost like a classical one. I think the one big thing it missed was immediately invoked function expressions.
I'm not a JavaScript expert so I may be wrong, but I don't think that's right is it?
If you are in a function scope (which you would be in the context of this article, as we're talking about what the receiver is at different points), then variables assigned without any modifier are defined as locals, in the scope of that function. They are hoisted to the function scope, out of any block scope, unless you use 'let', but they're still locals, aren't they?
No, GP is correct. If you omit the `var` (or `let`/`const` in es6) then you are implicitly assigning a property on the global object (`window` in the browser), in essence setting a global variable. Never omit `var`.
If you are in a function scope, `var` declarations are hoisted to the top of the function scope, outside of any block scope, as you described. `let` and `const` declarations from es6 are block scoped. Personally I highly recommend never using `var` and simply adopting es6 and using `let` and `const`, preferring `const`, everywhere as they have significantly simpler and more intuitive semantics.
I agree with the way you've responded. But there can be a case where omitting var on a variable definition will not assign it on the global object. That is, if you hit a variable by the same name as you go up the scope stack, you will just overwrite that variable's value. Example:
function f1() {
  var v = 'val';
  var f2 = function() {
    v = 'inner'; // assigns f1's v; does not hit the global scope
  };
  f2();
  console.log(v); // 'inner'
}
f1();
console.log(v); // ReferenceError: v is not defined -- it never became a global
"Assigning a value to an undeclared variable implicitly creates it as a global variable (it becomes a property of the global object) when the assignment is executed."
Right. This is a typical language design mistake. It happens in phases:
- "We're not going to have declarations in our language. First use defines a variable".
- "We need some way to define local variables. But if we make that the default, existing code will break. So the default has to be global."
- Embrace the pain.
There are variants on this theme. Python is local by default. Go has variable creation by assignment with ":=", but with weird semantics for multiple assignment.
Other classic mistakes are "we don't need a Boolean type; 1 and 0 are enough", and "there's only one kind of integer". Fixing those later always sucks.
This is an ugly convention. By always using `that` you lose the opportunity to pick a meaningful, more descriptive variable name. Why not use:
var widget = this
Besides, by using a more functional approach and arrow functions you can free your code from linking methods together via the `this` keyword.
That's way more confusing. At least with `that`, you know it is pointing at the current instance. If you see `anyOtherVariableName` you have to know what it was assigned.
Is it a poor pattern to use #bind a lot instead of this or that? I tend to put most of my code in procedure-like functions and bind them to whatever context I need, if not the regular this.
It's absolutely useless anyway unless you're trying to slightly optimize cpu and memory usage for at least hundreds of instance creations. Typically this scenario would be found in libs, not in application code.
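For concreteness, a hedged sketch of the bind-in-constructor pattern being discussed (`Widget` and `label` are invented names):

```javascript
function Widget(name) {
  this.name = name;
  // Bind once per instance so the method can be handed to setTimeout,
  // event listeners, etc. without losing its receiver. The per-instance
  // cost only matters if you create very many instances.
  this.label = this.label.bind(this);
}

Widget.prototype.label = function () {
  return this.name;
};

var w = new Widget("table");
var detached = w.label;  // a bare reference, no receiver
console.log(detached()); // "table" -- still bound to w
```

Without the bind call, `detached()` would look up `this.name` on the global object (or throw in strict mode), which is exactly the trap `var that = this` works around in other ways.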
I completely ignore `this`, prototypes, classes, Function.prototype.bind, etc. The code is much cleaner and easier to follow for devs of all skill levels.
Also, even if you like these features, var that = this is unneeded if you use a transpiler and have access to the more modern JS constructs like arrow functions.
If you're completely getting rid of "this" references then you're probably following a functional style and will keep data and functions separate, passing pure data as arguments rather than combining data and functions into classes.
If you have closures, you can simulate objects. In many different ways, most of them bad. If you have an interpreter with decent variable semantics, you get closures. That's how LISP did objects in the 1980s, before people knew better. Javascript somehow got it from there.
> you get closures. That's how LISP did objects in the 1980s, before people knew better
That one can use closures for objects was demonstrated with Scheme in the 70s. See, for example: "Scheme: An Interpreter for Extended Lambda Calculus". MIT AI Lab. AI Lab Memo AIM-349. December 1975.
It's also a typical topic in education to show how to use closures to implement a primitive object system, similar to how one learns to 'implement' numbers in pure lambda calculus.
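The closure-as-object idea in miniature (`makeCounter` is an invented textbook-style example):

```javascript
function makeCounter() {
  var count = 0; // private state, reachable only through the closures below
  return {
    increment: function () { count += 1; return count; },
    value: function () { return count; }
  };
}

var counter = makeCounter();
counter.increment();
counter.increment();
console.log(counter.value()); // 2 -- no `this` or prototype involved anywhere
```

Each call to makeCounter produces an independent "instance", and the state is genuinely private, which is more than JavaScript's own object system offered before private fields.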
But actual object systems in Lisp from the 70s/80s mostly did not use closures as objects. None of LOOPS, Flavors, CLOS, KRL, KEE, FRL, ... uses closures for their implementation. Even in 'Object Lisp', which is relatively close to Javascript's objects, they don't use closures:
Despite writing ES6 with arrow functions, I find that I prefer writing `const self = this` to relying on arrow-function lexical scoping. Maybe that will change over time, but for now it seems to make the intent far clearer and less likely to get messed up in a future refactoring.
One common example would be to refer to instance properties of an object from a method that is attached to the object's prototype.
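A small sketch of that pattern (`Loader` and `tag` are invented names for illustration):

```javascript
function Loader(base) {
  this.base = base;
}

Loader.prototype.tag = function (items) {
  const self = this; // capture the instance explicitly
  return items.map(function (item) {
    // Inside this plain callback, `this` would NOT be the Loader
    // instance -- but `self` still is, and says so by name.
    return self.base + "/" + item;
  });
};

console.log(new Loader("img").tag(["a.png", "b.png"])); // ["img/a.png", "img/b.png"]
```

An arrow callback would work here too; the `self` spelling just survives being converted to or from a plain function during refactoring.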
Actually I think using bind() would be a very unusual thing, but that could just be my style of writing code (I don't think I've ever used it in any code I have written over the years).
Don't get me wrong: I love JavaScript, I write piles of it all the time, and I said "was" not "is" because it's been vastly improved since it was originally badly designed, but I still get screwed by accidentally using "this" in the wrong context.
It's such an easy accident to make, and it's so hard to see, because when you're looking at code you wrote, you see what you meant to write, instead of what's actually there. Programming languages are user interfaces, and "this" is a user interface dark pattern, an unforced design mistake that doesn't make the language any more powerful, but lays a deadly trap for beginners that also bites pros.
It's tricky to understand and hard to explain (this is actually a dynamically bound keyword, not a lexically bound variable), so it's fodder for the thousands of blog posts explaining it, like the one we're discussing.
Must be because I have a strong python background, but generally I find "self" makes more sense than a wild "that"!