Side anecdote, but a number of people at my company were making fun of a candidate who used labels to break out of loops in JS. “Those don’t exist” they joked. I remember thinking, “I thought they did,” but didn’t say anything. The sad part is I really like the people I work with and they are generally smart engineers. There is just such an arrogance to the interview process in tech it is almost unreal. I haven’t noticed a lack of qualified candidates in tech, just a lack of humility in many of the gatekeepers. Whiteboard pissing contests.
Some years ago I got into a dispute with an interviewer about some C++ construct. He claimed it wouldn't work. I claimed it would. I asked him to put it through the compiler but he refused.
Same in your situation. Why didn't they try it out before making a judgement?
I've been in this position a few times, one way or another. It's a bad position to be in. If you're wrong, you're labeled as unknowledgeable. If you're right, you're labeled as "not a culture fit". I've found that no matter how humble you try to be, the interviewer isn't very gracious when they're right and isn't humble enough when they're wrong.
"At some point you have to have confidence in what you know (right or wrong). You can't be trying every darn thing."
I view it as a test. When an interviewee or a new guy at work tells me something that I don't know or think is wrong, I always give him the benefit of the doubt and verify it together with him. If he is right, I know we have somebody who brings something to the table. If he is wrong, that's clearly a negative that counts against him. If that happens a few times, I will stop listening to that person.
If it was something I had not seen before (which is already extremely unlikely given that I've spent a significant amount of time reading the official spec of the language to write a parser...) I'd still have given them the benefit of the doubt and maybe asked to clarify what they were doing. More than correcting people, I love learning new obscure things.
> I haven’t noticed a lack of qualified candidates in tech, just a lack of humility in many of the gatekeepers. Whiteboard pissing contests.
This mirrors my interviewing experience. People read so much into every answer during these stupid whiteboard interviews. I'm guessing that the candidate didn't get the position because they didn't seem to know JS well enough?
Precisely the reason I don't like to focus interviews on language gotchas and semantics.
The only exception I made was a quick pre-screen (to be completed in their own time with no pressure) in which candidates received 3 or 4 code snippets and were asked a couple of questions about them, which they answered in plain English, explaining their thought process in however much detail they wanted. (Things like a piece of code where "this" would break because of context, or a cross-domain request that would fail, and some ways to approach solving it.)
Even then, that was used as a way to say "ok, this candidate clearly has good understanding, we'll not hang about too long during the interview process on that topic".
If they didn't mention what we thought the answer was, it didn't count against them at all, it just gave us an area to explore in the interview. (Because who's to say our pre-screen question wasn't the problem?)
> Side anecdote, but a number of people at my company were making fun of a candidate who used labels to break out of loops in JS. “Those don’t exist” they joked. I remember thinking, “I thought they did,” but didn’t say anything. The sad part is I really like the people I work with and they are generally smart engineers. There is just such an arrogance to the interview process in tech it is almost unreal. I haven’t noticed a lack of qualified candidates in tech, just a lack of humility in many of the gatekeepers. Whiteboard pissing contests.
And this is why if people mention a whiteboard as part of the interview process, I don't bother. I've seen similar arrogance related issues in regards to security/crypto questions. The ability to demonstrate the code actually runs and/or is actually broken from a security perspective is an invaluable part of the process.
I agree with your points, but there is another part to this: production code has to be easy to understand. If a developer has a habit of writing code that is more complex than it needs to be, it would definitely raise some red flags for me during an interview.
It could be worse. I was interviewing for a team lead position and I started talking about EcmaScript 6. One of the devs said "we don't use EcmaScript, we use JavaScript..."
Interesting read. I quite like the pipe operator as I think it would make a lot of code easier to read.
I would caution people against using some of these "features" though as writing clever code that leverages obscure techniques is NOT good for readability and maintainability.
The person who has to maintain your code will probably be able to more easily grok some boring if else statements than some weird ternary thing that leverages the comma operator.
Not that I necessarily disagree, but one counter argument would be "why even use the pipe method? You could just say
square(increment(square(2)))
and get the same result, right?"
The point is readability. It's UX for the programmer, reducing cognitive load in understanding what's going on. Pipes are fantastic in the terminal and have never been replaced there because they are very simple glue piecing together the complex bits with a simple-to-understand model.
If we can have that in JS, I'd be quite pleased even if there are other ways to do the same thing.
The advantage of the operator and the `pipe` helper over regular function composition is that you can emphasize the flow of data through a pipeline of functions.
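For readers who haven't seen one, a `pipe` helper along these lines is easy to sketch (the helper and the `square`/`increment` names are illustrative, not from the proposal):

```javascript
// A minimal, hypothetical `pipe` helper: the first argument is the
// starting value, the rest are unary functions applied left to right.
const pipe = (value, ...fns) => fns.reduce((acc, fn) => fn(acc), value);

const square = n => n * n;
const increment = n => n + 1;

// Reads top-to-bottom in data-flow order, unlike nested calls.
pipe(2, square, increment, square); // → 25, same as square(increment(square(2)))
```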
The only advantage in readability I see here is that the operator is hard to mistake for anything else; it sticks out of the code more than a seemingly regular function call does. But this is not an advantage that justifies introducing the operator.
Other than that, UX and cognitive load is the same.
Pipes can indeed be fantastic! ;)
Apropos composition: why not add a composition operator while we're at it? Or we could define something like:
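One plausible shape for such a definition (a sketch; `compose` is an assumed name, not a real JS built-in):

```javascript
// Hypothetical right-to-left composition helper, the classic
// counterpart to a left-to-right pipe.
const compose = (...fns) => x => fns.reduceRight((acc, fn) => fn(acc), x);

const square = n => n * n;
const increment = n => n + 1;

const f = compose(square, increment); // f(x) = square(increment(x))
f(2); // → 9
```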
For me there is also something more ineffable that's important here. A question of whether adding a new built-in operator fits within the general philosophy or some sort of "look & feel" of the language, or whatever. I think JavaScript seems to be quite conservative, when it comes to bringing in new operators. Perhaps that's a good thing. Maybe not. But so far, I remain unconvinced.
Fuzzy words and your opinion, so I can't exactly argue with that.
But here's a try ;)
I take it that the meaning behind the words you used is:
clean:
    no need for `pipe(` prefix and `)` postfix

expressive:
    can convey a pipeline in a distinctive and unique way
    (with a special operator as opposed to a regular function)
Let me use the same words, but with different meaning, to say "the pipe function is much cleaner and expressive IMO".
The meaning would be:
clean:
    looks (is) the same as regular function application,
    no need for dirtying the syntax up with a special operator

    need to only press one extra key per operand/argument
    (,) as opposed to 3 (shift+|, >) ;)

expressive:
    can convey a pipeline just as well as the operator;
    it's a regular function, so it's first class,
    unlike the operator

    it's variadic, so you can combine it with
    spread operator syntax like:

        pipe(data, ...pipeline1, ...pipeline2)
        // where pipeline1 and pipeline2
        // are arrays of functions
On readability, all the code samples here are using pipe operators spread out on one line, but for anything nontrivial I think the point is to use it thusly:
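Presumably the intended formatting is something along these lines (proposal syntax, so not runnable today; the function names are illustrative):

```
const result = data
    |> validate
    |> normalize
    |> transform
    |> serialize;
```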
Yeah readability is largely about what you're used to. I find the nested function calls more readable, as I've used that notation for decades going all the way back to high school math.
Imho, languages that break the arbitrary difference between functions and methods do this best, so there's no difference between Square(myNum) and myNum.Square(), since .Square() works nicely for pipe-style usage.
Is there a good way to see the intermediate results at each stage in the pipe when things go wrong? Pipes are concise but perhaps harder to debug depending on what tools you have.
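For example, something like this, with each stage bound to a named variable (a sketch; the function names are assumed):

```javascript
// Each stage gets a name, so a debugger breakpoint (or a quick
// console.log) can inspect any intermediate value.
const square = n => n * n;
const increment = n => n + 1;

const squared = square(2);              // 4
const incremented = increment(squared); // 5
const result = square(incremented);     // 25
```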
With the above, I can add in a breakpoint and inspect what's going on at each stage with access to other debugging tools. Log statements don't seem nearly as good as this. The above is nice as well in that you can give intuitive names to the intermediate results.
If you need names for intermediate results, etc., then you don't want the pipeline operator/function. In such cases the things you say are true. But when you want to be more concise, with the operator/function, you can.
BTW A nice feature of a code editor would be if you could magically switch back and forth between code with intermediate named variables and pipelined version of the same code, e.g. by selecting the code and invoking an option.
And a native operator can be optimized just about the same amount as a built-in function. So maybe add the function instead of the operator to the language?
One issue with a `pipe` function is that it is an arity of N. That makes it almost impossible to optimize for all the edge cases. In contrast, the pipe operator is always fixed at an arity of 2. Implementing a static Function.pipe in terms of the primitive operator becomes easy enough. Functional languages have played with both the function and the operator, but they keep coming back to the operator because it's more readable.
    //implementation of your pipe in terms of the (potential) pipe operator
    Function.pipe = (fn, ...args) => {
        //let's use a pseudo Duff's device
        switch (args.length) {
            case 0: return fn;
            case 1: return fn |> args[0];
            case 2: return fn |> args[0] |> args[1];
            case 3: return fn |> args[0] |> args[1] |> args[2];
            case 4: return fn |> args[0] |> args[1] |> args[2] |> args[3];
            case 5: return fn |> args[0] |> args[1] |> args[2] |> args[3] |> args[4];
            case 6: return fn |> args[0] |> args[1] |> args[2] |> args[3] |> args[4] |> args[5];
            case 7: return fn |> args[0] |> args[1] |> args[2] |> args[3] |> args[4] |> args[5] |> args[6];
            case 8: return fn |> args[0] |> args[1] |> args[2] |> args[3] |> args[4] |> args[5] |> args[6] |> args[7];
            default:
                var mod = args.length >> 3; //div by 8
                var rem = -8 * mod + args.length; //mult add
                fn = Function.pipe(fn, ...args.slice(0, rem));
                while (rem < args.length) {
                    fn = fn |> args[rem++] |> args[rem++] |> args[rem++] |> args[rem++]
                            |> args[rem++] |> args[rem++] |> args[rem++] |> args[rem++];
                }
                return fn;
        }
    };
It's a little off topic, but I'd love to see the addition of well-known symbols for all the operators to allow custom overloading. Lua does operator overloading with metatables. No reason JS can't too. At that point, pipe, compose, and even spaceship become much more useful.
    var x = {
        foo: [1, 2, 3],
        [Symbol.add](rightHandSide) { //binary
            return this.foo.concat(rightHandSide);
        },
        [Symbol.incr]() { //unary
            this.foo = this.foo.map(n => n + 1);
            return this.foo;
        }
    };

    //default to the standard operator if the
    //left side is primitive or if the
    //left side has no matching Symbol
    x + 3;              //same as x[Symbol.add](3)
    [0].concat(x++);    //=> [0, 1, 2, 3]
    //same as [0].concat(x); x[Symbol.incr]();
    [0, 1].concat(++x); //=> [0, 1, 2, 3, 4]
    //same as x[Symbol.incr](); [0, 1].concat(x);
You can also replace all `a |> b` with a fixed-arity `pipe2(a, b)` in your code and again, no new operator needed. Anyway, I don't think such optimization issues are enough to justify adding this operator to JavaScript. In other functional languages it may make sense, because operators work differently there (e.g. they are interchangeable with functions, there is support for overloading, etc.).
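Such a fixed-arity helper is trivial to write (a sketch; `pipe2` is an assumed name that mirrors what `a |> b` would desugar to):

```javascript
// Hypothetical fixed-arity helper: pipe2(value, fn) is exactly
// what `value |> fn` would mean under the proposal.
const pipe2 = (value, fn) => fn(value);

const square = n => n * n;
pipe2(pipe2(2, square), n => n + 1); // → 5
```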
> I'd love to see the addition of well-known symbols for all the operators to allow custom overloading.
Sure, if we had support for first-class overloadable operators then a proposal to add a standard operator like `|>` would make sense. But we don't, so a better one would be to add `Function.pipe` (simple, fits into the language better) or allow operator overloading, etc. (also may be reasonable, but it's a major change in the language).
A small self-promotion, given the relevant context: I was frustrated with the lack of support for the pipeline operator and came up with this: https://github.com/egeozcan/ppipe - I currently use it on a couple of projects and it really helped make my code more understandable.
I'm surprised the author hadn't heard of void. There was a time when using it in links (aka `javascript:void(0)`) was very common.
Also, yeah, JavaScript has labels, but using them is usually a bad idea. While they may seem useful, they can be harder to reason about in all but the simplest code. You're almost always better off restructuring your code instead of using labels.
How did the void(0) thing work in links? I remember being frustrated by it when I couldn't see where the link would take me, but never understood what was going on.
Back then, you'd use it with the `href` attribute to prevent the link from going anywhere (it would return `undefined`), then call a function with the `onclick` attribute.
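The pattern looked roughly like this (illustrative only; `doSomething` is a made-up handler name):

```html
<!-- href must be a valid URL, so javascript:void(0) evaluates to
     undefined and the browser navigates nowhere; the real work
     happens in onclick. -->
<a href="javascript:void(0)" onclick="doSomething()">Click me</a>
```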
Use case for labelled breaks: collision detection in video game, where anything colliding with anything else ends the game, resets a counter, spawns Godzilla, who knows. With labels, you can break out of the top loop very efficiently.
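A minimal sketch of that pattern (the entity data and collision rule are made up for illustration):

```javascript
// Labeled break for collision detection: exits BOTH loops as soon
// as any pair of entities collides.
const entities = [{ x: 0 }, { x: 5 }, { x: 5 }];
let collided = false;

outer:
for (let i = 0; i < entities.length; i++) {
    for (let j = i + 1; j < entities.length; j++) {
        if (entities[i].x === entities[j].x) {
            collided = true;
            break outer; // a plain break would only exit the inner loop
        }
    }
}
// collided → true
```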
You can't efficiently get back into it, though, which can be a problem for a game's event loop. If you wrap it in a function, you can return from that function anywhere inside the loop, and let its caller handle whatever state change incurred the exit and then restart the loop, if it wants, by calling the function again.
On the other hand, if the event loop's job also includes rendering the game's UI, then leaving it by any means will freeze the display until it's restarted. That's probably bad too, although I suppose in theory it could be turned into a game mechanic. Absent that, this technique might make more sense in a case where you're running the exitable loop in question on a secondary thread (i.e. a worker), and doing things with it which can be safely interrupted - maybe changing levels or something, where the player isn't expecting to do anything during a UI transition, and you need to await the arrival of some resources over the network and then set up the game state before restarting the event loop to resume normal play? I don't know, that's a bit contrived and probably flawed in a critical way, but I think it makes the same basic sense as offloading heavy and necessarily synchronous work to a worker thread does in general.
It used to be fashionable to use `void 0` for undefined values, both because it's shorter and because it used to be the case that you could actually redefine `undefined` to have some arbitrary value.
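In other words:

```javascript
// `void` evaluates its operand and yields undefined, regardless of
// what the identifier `undefined` might have been rebound to in
// old, pre-ES5 code.
console.log(void 0 === undefined); // true
console.log(void "anything");      // undefined
```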
IMO it seems unwise to use features like these which you wouldn't expect other people to know on a shared codebase. C also has the comma operator and, while I find it can be used to write things elegantly, I've yet to find a place where it's easier for another person to understand at first scan than using more common idioms.
"unwise to use features like these which you wouldn't expect other people to know"
I understand what you mean, but most of the described features has been in the language for ages, some of them more than 20 years.
I think the reason why so many aren't aware of the features, is because they learn the language by looking at code written by others or from simplified tutorials, and never look at the formal specification.
I remember how I struggled back in the late 90's to find good information, and how I read the ECMA-262 specification when I found it. Whenever a new specification is released I download it and read through it to see all the new features, and also to refresh my memory about the old ones.
I wholeheartedly agree with you. This is something I've been struggling to get across to a colleague for the last 6 months. He always wants to use things that are completely new and sometimes just weird. He always believes this new way of coding is better. When the rest of us have to make changes in his code, we have to dig through the MDN documentation to figure out how that code works. We are trying to solve business problems here; we are not interested in unnecessary coding challenges.
Eslint can be configured to forbid language constructs based on the level of standardization they've reached - in both directions; it can as well forbid "var" as it can "let" and "const". You can also include it in your test and CI pipelines, so that code which violates the specified restrictions cannot be shipped at all. We do this where I work, and while I admit I initially found it somewhat confining, the benefit of being able to more quickly grasp a new codebase among the many that constitute our infrastructure has proven extremely valuable over time.
Granted, it's a fair bit of work to set up in the first place - more in deciding what to allow and what not to; eslint itself is very friendly in my experience. And someone who just flat-out refuses to play along might not be a problem this tool can solve - although perhaps you can all reach a modus vivendi around an acceptable level of novelty somewhere short of the bleeding edge. (We use TC39 level 3. You might prefer something more conservative, such as level 4, or some version of ECMA-262 proper.) But it's worth a look, I think, as a tool that might reduce the headaches you seem frequently to be stuck with. I hope you find it useful!
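For instance, ESLint's built-in `no-restricted-syntax` rule can ban specific constructs by AST node type (a sketch of an `.eslintrc.json`; the particular selectors and messages are illustrative, not what any real team uses):

```json
{
    "rules": {
        "no-restricted-syntax": [
            "error",
            { "selector": "LabeledStatement", "message": "Labels are disallowed here." },
            { "selector": "SequenceExpression", "message": "Avoid the comma operator." }
        ]
    }
}
```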
I dislike this reasoning; I've heard it many times with all kinds of things, but mostly programming languages. I dislike it because the threshold you pick is completely arbitrary. What is to stop you from lowering yourself to the lowest common denominator?
As a counter argument... switch this reasoning from syntax to algorithm, if an algorithm is too complex for most people to understand at first glance, do you throw it away in favour of a simple but inferior choice for the problem at hand?
I'm not GP poster, but I'd like to dispute this point.
> the threshold you pick is completely arbitrary
The threshold you pick is the result of a cost/benefit analysis weighing ease of understanding against whatever the benefit of that feature is.
In the case of the algorithm, I'd want to know how much faster the complex one runs or how much better its results are. If the answer is, "a fraction of a percent" in both cases, I probably don't care, unless maybe I'm Google and handling enough volume that a fraction of a percent matters.
> switch this reasoning from syntax to algorithm, if an algorithm is too complex for most people to understand at first glance, do you throw it away in favour of a simple but inferior choice for the problem at hand?
This is not the same problem. Syntax is only a way of writing the same code in a different way.
An algorithm also solves the same problem. But that doesn't really matter. The point is: it is the same problem. There are many attributes, many choices beyond syntactic choice, that affect comprehension for people unfamiliar with your code.
Many of them are opinionated and arbitrary, and it's easy to draw the line based on your own experience and unwittingly cast out people with more or less knowledge than yourself for being too obscure or too common.
You never know where in the code someone will see something for the first time. Are you going to put a comment every place you use obscure features? That would be doubly annoying.
…and not forget anything. Learning is a good thing, but obscure features tend to be forgotten quite quickly because nobody uses them. I recently relearned a very obscure Bash feature by reading a draft I wrote about it 4 years ago. I'll probably forget it again.
If it's used so rarely that you forget what it does, but it only takes a minute to remind yourself about next time you see it - and if when used in those rare cases it significantly simplifies an implementation and reduces effort - then is that such a bad thing?
> If it's used so rarely that you forget what it does, but it only takes a minute to remind yourself about next time you see it - and if when used in those rare cases it significantly simplifies an implementation and reduces effort - then is that such a bad thing?
It's not, if it meets your second and third conditions. I don't think the comma operator or `void` qualify as things that can "significantly simplify an implementation". Even the labels thing looks like a great opportunity for spaghetti code.
Agree! With that being said, teams should also strive to learn as much of the language as possible. Languages won't evolve if we don't care to try out new syntax.
I don't think what matters is whether someone could be expected to know a feature, but rather whether they could be expected to correctly guess what it does.
That comma operator is pretty odd; someone could easily be surprised by that behavior. Labeled breaks and additional arguments to setTimeout, though, do exactly what they look like. You might be surprised that the code works, but you wouldn't be so surprised at what it does.
Specifically in the third clause of a for loop, if you have two iterator variables being incremented (e.g. at different strides), it is clearer to use the comma operator, I think. A very narrow use case, but it's convenient that it exists.
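That case looks like this (a made-up example with two indices at different strides):

```javascript
// Two indices advancing at different strides in one for loop; the
// comma operator lets both updates live in the third clause.
// (The comma in `let i = 0, j = 0` is a declaration list, not the
// comma operator — only `i += 1, j += 2` uses the operator.)
const out = [];
for (let i = 0, j = 0; j < 10; i += 1, j += 2) {
    out.push([i, j]);
}
// out → [[0,0], [1,2], [2,4], [3,6], [4,8]]
```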
When I was just a neophyte, I wondered whether JavaScript supported multidimensional arrays, so I wrote array[x, y] = something and it appeared to work (reading array[x, y] returned the written value, after all). It was only when I tried to store multiple values with the same y-coordinate that I realized my mistake.
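The gotcha being described is this:

```javascript
// Inside the brackets, `1, 2` is the comma operator: it evaluates 1,
// discards it, and uses 2 as the index. So array[1, 2] is array[2].
const array = [];
array[1, 2] = "a";        // actually writes array[2]
console.log(array[1, 2]); // "a" — reads array[2], so it looks like it worked
console.log(array[0, 2]); // "a" — same slot! the "x-coordinate" is ignored
```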
I wouldn't be surprised if the syntax was even older. And the functionality has likely been used in some academic language from decades ago. It's just reverse function composition after all.
They seem to have been added around the same time. Another comment on the parent attributes it to F#, which based on a quick googling precedes both OCaml and Elixir. It wouldn't surprise me if the operator preceded that either, but I don't have enough time to explore its origin.
Although to be fair, using | for "or" has always been silly, as has making the bitwise operator the short one when the logical one gets more usage in the real world. Hindsight is 20/20 of course.