Hacker News
JS things I didn’t know existed (air.ghost.io)
275 points by fagnerbrack on Jan 20, 2018 | hide | past | favorite | 98 comments



Side anecdote, but a number of people at my company were making fun of a candidate who used labels to break out of loops in JS. “Those don’t exist” they joked. I remember thinking, “I thought they did,” but didn’t say anything. The sad part is I really like the people I work with and they are generally smart engineers. There is just such an arrogance to the interview process in tech it is almost unreal. I haven’t noticed a lack of qualified candidates in tech, just a lack of humility in many of the gatekeepers. Whiteboard pissing contests.
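For the record, labels do exist in JS. A minimal sketch of what the candidate presumably did (the function and sample data are my own, not from the thread):

```javascript
// A labeled break exits both loops at once, not just the inner one.
function findPair(matrix, target) {
  let found = null;
  outer:
  for (let i = 0; i < matrix.length; i++) {
    for (let j = 0; j < matrix[i].length; j++) {
      if (matrix[i][j] === target) {
        found = [i, j];
        break outer; // without the label, only the inner loop would stop
      }
    }
  }
  return found;
}
```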


Some years ago I got into a dispute with an interviewer about some C++ construct. He claimed it wouldn't work. I claimed it would. I asked him to put it through the compiler but he refused.

Same in your situation. Why didn't they try it out before making a judgement?


I've been in this position a few times, one way or another. It's a bad position to be in. If you're wrong, you're labeled as unknowledgeable. If you're right, you're labeled as "not a culture fit". I've found that no matter how humble you try to be, the interviewer isn't very gracious when they're right and isn't humble enough when they're wrong.


At some point you have to have confidence in what you know (right or wrong). You can't be trying every darn thing.

Wait does JS really have curly braces? Not sure now, I haven't read the spec from cover to cover three times; better go try it.

Okay, super exaggerated example (but I like those; they get the point across).

Also, empirical approaches can appear to confirm intuitions that are wrong; like printf("%d %d\n", i++, i++) producing the wrongly expected answer, even though modifying i twice like that is undefined behavior in C.


"At some point you have to have confidence in what you know (right or wrong). You can't be trying every darn thing."

I view it as a test. When an interviewee or a new guy at work tells me something that I don't know or think is wrong, I always give him the benefit of the doubt and verify it together with him. If he is right, I know we have somebody who brings something to the table. If he is wrong, that's clearly a negative that counts against him. If that happens a few times I will stop listening to that person.


Empirical approaches and underlying theory are both valuable and should be used together. Abstract theory is useless by itself.

Hell, simply writing a program to accomplish some specific goal is an empirical exercise that further enriches existing understanding.

Of course, as you note, empirical guess-and-check knowledge with no theoretical understanding is also a problem.


If it was something I had not seen before (which is already extremely unlikely given that I've spent a significant amount of time reading the official spec of the language to write a parser...) I'd still have given them the benefit of the doubt and maybe asked to clarify what they were doing. More than correcting people, I love learning new obscure things.


> I haven’t noticed a lack of qualified candidates in tech, just a lack of humility in many of the gatekeepers. Whiteboard pissing contests.

This mirrors my interviewing experience. People read so much into every answer during these stupid whiteboard interviews. I'm guessing that the candidate didn't get the position because they didn't seem to know JS well enough?


Precisely the reason I don't like to focus interviews on language gotchas and semantics.

The only exception I made was a quick pre-screen (to be completed in their own time with no pressure) in which candidates received 3 or 4 code snippets and were asked a couple of questions about them, writing their answers out in plain English, explaining their thought process in however much detail they wanted. (Things like a piece of code where "this" would break because of context, or a cross-domain request that would fail, and some ways to approach solving it.)

Even then, that was used as a way to say "ok, this candidate clearly has good understanding, we'll not hang about too long during the interview process on that topic".

If they didn't mention what we thought the answer was, it didn't count against them at all, it just gave us an area to explore in the interview. (Because who's to say our pre-screen question wasn't the problem?)


> Side anecdote, but a number of people at my company were making fun of a candidate who used labels to break out of loops in JS. “Those don’t exist” they joked. I remember thinking, “I thought they did,” but didn’t say anything. The sad part is I really like the people I work with and they are generally smart engineers. There is just such an arrogance to the interview process in tech it is almost unreal. I haven’t noticed a lack of qualified candidates in tech, just a lack of humility in many of the gatekeepers. Whiteboard pissing contests.

And this is why, if people mention a whiteboard as part of the interview process, I don't bother. I've seen similar arrogance-related issues in regard to security/crypto questions. The ability to demonstrate that the code actually runs and/or is actually broken from a security perspective is an invaluable part of the process.


I agree with your points, but there is another part to this: production code has to be easy to understand. If a developer has a habit of writing code that is more complex than it needs to be, it would definitely raise some red flags for me during an interview.


It could be worse. I was interviewing for a team lead position and I started talking about EcmaScript 6. One of the devs said "we don't use EcmaScript, we use Javascript....

I did get the job.


Interesting read. I quite like the pipe operator as I think it would make a lot of code easier to read.

I would caution people against using some of these "features" though as writing clever code that leverages obscure techniques is NOT good for readability and maintainability.

The person who has to maintain your code will probably be able to more easily grok some boring if else statements than some weird ternary thing that leverages the comma operator.

Just because you can doesn't mean you should.


I wonder if there is really a need for this operator.

Compare

  2 |> square |> increment |> square
and

  pipe(2, square, increment, square)
where `pipe` is defined as

  const pipe = (...args) => args.reduce((prev, curr) => curr(prev));
So, given that, what would be a strong argument for introducing the operator to JavaScript?
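For comparison, here's the `pipe` helper above made runnable, with the example functions filled in (`square` and `increment` are my own stand-in definitions):

```javascript
// Variadic pipe: the first argument is the seed value, the rest are functions.
const pipe = (...args) => args.reduce((prev, curr) => curr(prev));

const square = x => x * x;
const increment = x => x + 1;

pipe(2, square, increment, square); // (2² + 1)² = 25
```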


Not that I necessarily disagree, but one counter argument would be "why even use the pipe method? You could just say

  square(increment(square(2)))
and get the same result, right?"

The point is readability. It's UX for the programmer, reducing cognitive load in understanding what's going on. Pipes are fantastic in the terminal and have never been replaced there because they are very simple glue piecing together the complex bits with a simple-to-understand model.

If we can have that in JS, I'd be quite pleased even if there are other ways to do the same thing.


The counter argument would be wrong.

The advantage of the operator and the `pipe` helper over regular function composition is that you can emphasize the flow of data through a pipeline of functions.

The only advantage in readability I see here is that the operator is hard to mistake for anything else; it sticks out of the code more than a seemingly regular function call does. But this is not an advantage that justifies introducing the operator.

Other than that, UX and cognitive load is the same.

Pipes can indeed be fantastic! ;)


Apropos composition: why not add a composition operator while we're at it? Or we could define something like:

  const compose = (...args) => pipe(...args.reverse())
  
and then

  compose(square, increment, square, 2)
  
;)

For me there is also something more ineffable that's important here. A question of whether adding a new built-in operator fits within the general philosophy or some sort of "look & feel" of the language, or whatever. I think JavaScript seems to be quite conservative, when it comes to bringing in new operators. Perhaps that's a good thing. Maybe not. But so far, I remain unconvinced.


How would you add extra arguments to a function with your method?


You could just make the function, like square, accept arguments and then return a function that can be composed in that way.


That gives it a huge disadvantage over just using the pipe operator, particularly with standard functions:

  const pow = x => y => Math.pow(x, y);
  pipe(2, pow(5), square);
compared to just:

  2 |> Math.pow(5) |> square


The referenced operator proposal advocates the same solution[0] as @yladiz suggested here, so it's not clear to me where you got this:

  2 |> Math.pow(5) |> square
  
from? This would mean the same thing as:

   Math.pow(5, 2) |> square
right? This seems like a pretty hairy thing to implement in the language. Also I don't think this syntax is very clear.

A more concise way of doing this with `pipe` would be:

  pipe(2, $ => Math.pow(5, $), square)
  
Compared to the operator:

  2 |> $ => Math.pow(5, $) |> square
So no disadvantage at all.

[0] https://github.com/tc39/proposal-pipeline-operator#user-cont...


Oops, you're right. I was confusing the proposal with how it works in Elixir.

I really like the partial application example from your source:

  let newScore = person.score
    |> double
    |> add(7, ?)
    |> boundScore(0, 100, ?);
Partial application is a separate stage 1 proposal but together with the pipe operator I really prefer this:

  let x = 2
    |> Math.pow(5, ?)
    |> square
to this:

  let x = pipe(2, $ => Math.pow(5, $), square)


How about this:

  let x = pipe(2
    , Math.pow(5, ?)
    , square
  )
  
;)


Sure but the proposed syntax is much cleaner and expressive imo.


Fuzzy words and your opinion, so I can't exactly argue with that.

But here's a try ;)

I take it that the meaning behind the words you used is:

clean:

    no need for `pipe(` prefix and `)` postfix
expressive:

    can convey a pipeline in a distinctive and unique way
    (with a special operator as opposed to regular function)


Let me use the same words, but with different meaning, to say "the pipe function is much cleaner and expressive IMO".

The meaning would be:

clean:

    looks (is) the same as regular function application,
    no need for dirtying the syntax up with
    a special operator

    need to only press one extra key per operand/argument
    (,) as opposed to 3 (shift+|,>) ;)
expressive:

    can convey a pipeline just as well as the operator;
    it's a regular function, so it's first class,
    unlike the operator

    it's variadic, so you can combine it with
    spread operator syntax like:

      pipe(data, ...pipeline1, ...pipeline2)
      // where pipeline1 and pipeline2
      // are arrays of functions


This, and void, and promises:

> 2 |> square |> increment |> square

So, looks like any sufficiently complex language nowadays carries a badly self-compatible, inextensible, limited set of popular Haskell libraries.

And also Lisp's seq. Other Lisp features are left for implementation on a Lisp interpreter once your application gets large.


Readability and not having to define a function / include a dependency of common functions (see leftpad).


On readability, all the code samples here are using pipe operators spread out on one line, but for anything nontrivial I think the point is to use it thusly:

    initialValue
      |> doSomething
      |> doSomethingElse
      |> reportResults
In which case I think it's extremely readable compared to the alternatives.


Compare:

  pipe(initialValue
      , doSomething
      , doSomethingElse
      , reportResults
  )
  
Is it really so much more readable with the operator?


Readability I would argue is a matter of taste here, so not a very strong argument.

As for the second thing, I'd agree, but instead of introducing an operator, we could make the helper a built-in feature.


Yeah readability is largely about what you're used to. I find the nested function calls more readable, as I've used that notation for decades going all the way back to high school math.


Imho, languages that remove the arbitrary distinction between functions and methods do this best, so there's no difference between Square(myNum) and myNum.Square(), since .Square() works nicely for pipe-style usage.


Sure, but JavaScript is not one of those languages unfortunately.


Is there a good way to see the intermediate results at each stage in the pipe when things go wrong? Pipes are concise but perhaps harder to debug depending on what tools you have.


    function tee(arg) {
      console.log(arg);
      return arg;
    }
    
    2 |> square |> tee |> increment |> square;


Or even, using another cute toy from the article:

    const tee = arg => (console.log(arg), arg);


So I'm comparing pipe usage to this e.g.

    const result1 = square(2);
    const result2 = increment(result1);
    const result3 = square(result2);
With the above, I can add in a breakpoint and inspect what's going on at each stage with access to other debugging tools. Log statements don't seem nearly as good as this. The above is nice as well in that you can give intuitive names to the intermediate results.


If you need names for intermediate results, etc., then you don't want the pipeline operator/function. In such cases the things you say are true. But when you want to be more concise, with the operator/function, you can.

BTW, a nice feature of a code editor would be if you could magically switch back and forth between code with intermediate named variables and the pipelined version of the same code, e.g. by selecting the code and invoking an option.


You can define a debug version:

  const pipe_debug = (...args) => args.reduce((prev, curr) => {
    console.log('pipe debug; curr:', curr, ', prev:', prev)
    return curr(prev)
  })
and when you want to see intermediate results of a `pipe` call, you append `_debug` to the name.


One great reason is that the composing function can be optimized away much more easily with the native operator.


And a native operator can be optimized just about the same amount as a built-in function. So maybe add the function instead of the operator to the language?


One issue with a `pipe` function is that it has an arity of N. That makes it almost impossible to optimize for all the edge cases. In contrast, the pipe operator is always fixed at an arity of 2. Implementing a static Function.pipe in terms of the primitive operator becomes easy enough. Functional languages have played with both the function and the operator, but they keep coming back to the operator because it's more readable.

    //implementation of your pipe in terms of the (potential) pipe operator
    Function.pipe = (fn, ...args) => {
      //let's use a pseudo Duff's device
      switch (args.length) {
        case 0: return fn;
        case 1: return fn |> args[0];
        case 2: return fn |> args[0] |> args[1];
        case 3: return fn |> args[0] |> args[1] |> args[2];
        case 4: return fn |> args[0] |> args[1] |> args[2] |> args[3];
        case 5: return fn |> args[0] |> args[1] |> args[2] |> args[3] |> args[4];
        case 6: return fn |> args[0] |> args[1] |> args[2] |> args[3] |> args[4] |> args[5];
        case 7: return fn |> args[0] |> args[1] |> args[2] |> args[3] |> args[4] |> args[5] |> args[6];
        case 8: return fn |> args[0] |> args[1] |> args[2] |> args[3] |> args[4] |> args[5] |> args[6] |> args[7];
        default:
          var mod = args.length >> 3; //div by 8
          var rem = -8 * mod + args.length; //mult add
          fn = Function.pipe(fn, ...args.slice(0, rem));

          while (rem < args.length) {
            fn = fn |> args[rem++] |> args[rem++] |> args[rem++] |> args[rem++] |> args[rem++] |> args[rem++] |> args[rem++] |> args[rem++];
          }
          return fn;
      }
    };
It's a little off topic, but I'd love to see the addition of well-known symbols for all the operators to allow custom overloading. Lua does operator overloading with metatables. No reason JS can't too. At that point, pipe, compose, and even spaceship become much more useful.

    var x = {
      foo: [1,2,3],
      [Symbol.add](rightHandSide) {//Binary
        return this.foo.concat(rightHandSide);
      },
      [Symbol.incr]() {//Unary
        this.foo = this.foo.map(x => x += 1);
        return this.foo;
      }
    }
    //default to standard operator if
    //if left side is primitive or if
    //left side has no matching Symbol
    x + 3; //same as x[Symbol.add](3)

    [0].concat(x++); //=> [0,1,2,3]
    //same as [0].concat(x); x[Symbol.incr]();

    [0, 1].concat(++x); //=> [0, 1, 2, 3, 4]
    //same as x[Symbol.incr](); [0, 1].concat(x);


You can also replace all `a |> b` with a fixed-arity `pipe2(a, b)` in your code and again, no new operator needed. Anyway, I don't think such optimization issues are enough to justify adding this operator to JavaScript. In other functional languages it may make sense, because operators work differently there (e.g. they are interchangeable with functions, there is support for overloading, etc.).

> I'd love to see the addition of well-known symbols for all the operators to allow custom overloading.

Sure, if we had support for first-class overloadable operators then a proposal to add a standard operator like `|>` would make sense. But we don't, so a better one would be to add `Function.pipe` (simple, fits into the language better) or allow operator overloading, etc. (also may be reasonable, but it's a major change in the language).


You won't be able to type `pipe` with Flow/Typescript. But they can (and had) added support for the |> operator.


> You won't be able to type `pipe` with Flow/Typescript.

Don't know what you mean by that.

> But they can (and had) added support for the |> operator.

Doesn't seem like they did: https://github.com/Microsoft/TypeScript/issues/17718 https://github.com/facebook/flow/issues/5443


A small self-promotion, given the relevant context: I was frustrated with the lack of support for the pipeline operator and came up with this: https://github.com/egeozcan/ppipe - I currently use it on a couple of projects and it really helped make my code more understandable.


I'm surprised the author hadn't heard of void. There was a time when using it in links (aka `javascript:void(0)`) was very common.

Also, yeah, JavaScript has labels, but using them is usually a bad idea. While they may seem useful, they can be harder to reason about in all but the simplest code. You're almost always better off restructuring your code instead of using labels.
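One common restructuring, for what it's worth: hoist the nested loops into a function and use an early `return` instead of a label (a sketch with made-up names):

```javascript
// Instead of `outer: ... break outer;`, an early return exits both loops,
// and it also gives the search a name and a value.
function firstNegative(rows) {
  for (const row of rows) {
    for (const value of row) {
      if (value < 0) return value;
    }
  }
  return undefined;
}
```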


How did the void(0) thing work in links? I remember being frustrated by it when I couldn't see where the link would take me, but never understood what was going on.


Basically it defers to a javascript function instead of using the web's normal way of linking. An example of such a link would be:

    <a href="javascript:void(0)" onclick="dostuff()">Click me</a>
href="javascript:void(0)" basically tells the browser to do nothing when a link is clicked. It's like returning false from an event listener.

onclick="dostuff()" will then listen for clicks and call the `dostuff` function to update the page or load a new one.

Nowadays it's considered bad practice, and to be honest it wasn't exactly good practice in those days either, but it was very commonly used.


https://developer.mozilla.org/en-US/docs/Web/JavaScript/Refe...

Back then, you'd use it with the `href` attribute to prevent the link from going anywhere (it would return `undefined`), then call a function with the `onclick` attribute.

E.g.

  <a href="javascript:void(0)" onclick="clickHandler()">Click me</a>


these links have an onclick handler.

If the handler goes missing or fails, this construct will prevent the link from doing what it usually would.

But using a button is better in such cases...


Use case for labelled breaks: collision detection in video game, where anything colliding with anything else ends the game, resets a counter, spawns Godzilla, who knows. With labels, you can break out of the top loop very efficiently.


You can't efficiently get back into it, though, which can be a problem for a game's event loop. If you wrap it in a function, you can return from that function anywhere inside the loop, and let its caller handle whatever state change incurred the exit and then restart the loop, if it wants, by calling the function again.

On the other hand, if the event loop's job also includes rendering the game's UI, then leaving it by any means will freeze the display until it's restarted. That's probably bad too, although I suppose in theory it could be turned into a game mechanic. Absent that, this technique might make more sense in a case where you're running the exitable loop in question on a secondary thread (i.e. a worker), and doing things with it which can be safely interrupted - maybe changing levels or something, where the player isn't expecting to do anything during a UI transition, and you need to await the arrival of some resources over the network and then set up the game state before restarting the event loop to resume normal play? I don't know, that's a bit contrived and probably flawed in a critical way, but I think it makes the same basic sense as offloading heavy and necessarily synchronous work to a worker thread does in general.


It used to be fashionable to use `void 0` for undefined values, both because it's shorter and because it used to be the case that you could actually redefine `undefined` to have some arbitrary value.
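In other words, `void <expr>` always evaluates to `undefined`, regardless of the expression:

```javascript
// `void` discards its operand's value and yields undefined.
console.log(void 0 === undefined);          // true
console.log(void "anything");               // undefined
console.log(typeof (void (1 + 1)));         // "undefined"
```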


IMO it seems unwise to use features like these which you wouldn't expect other people to know on a shared codebase. C also has the comma operator and, while I find it can be used to write things elegantly, I've yet to find a place where it's easier for another person to understand at first scan than using more common idioms.


"unwise to use features like these which you wouldn't expect other people to know"

I understand what you mean, but most of the described features has been in the language for ages, some of them more than 20 years.

I think the reason so many aren't aware of the features is that they learn the language by looking at code written by others or from simplified tutorials, and never look at the formal specification.

I remember how I struggled back in the late 90's to find good information, and how I read the ECMA-262 specification when I found it. Whenever a new specification is released I download it and read through it to see all the new features, and also refresh my memory about the old.


I wholeheartedly agree with you. This is something I've been struggling with for the last 6 months, trying to tell a colleague. He always wants to use things that are completely new and sometimes just weird. He always believes this new way of coding is better. When the rest of us have to make changes in his code, we have to dig through the MDN documentation to figure out how that code works. We are trying to solve business problems here; we are not interested in unnecessary coding challenges.


Eslint can be configured to forbid language constructs based on the level of standardization they've reached - in both directions; it can as well forbid "var" as it can "let" and "const". You can also include it in your test and CI pipelines, so that code which violates the specified restrictions cannot be shipped at all. We do this where I work, and while I admit I initially found it somewhat confining, the benefit of being able to more quickly grasp a new codebase among the many that constitute our infrastructure has proven extremely valuable over time.

Granted, it's a fair bit of work to set up in the first place - more in deciding what to allow and what not to; eslint itself is very friendly in my experience. And someone who just flat-out refuses to play along might not be a problem this tool can solve - although perhaps you can all reach a modus vivendi around an acceptable level of novelty somewhere short of the bleeding edge. (We use TC39 level 3. You might prefer something more conservative, such as level 4, or some version of ECMA-262 proper.) But it's worth a look, I think, as a tool that might reduce the headaches you seem frequently to be stuck with. I hope you find it useful!


I dislike this reasoning; I've heard it many times with all kinds of things, but mostly programming languages. I dislike it because the threshold you pick is completely arbitrary: what is to stop you from sinking to the lowest common denominator?

As a counter argument... switch this reasoning from syntax to algorithm, if an algorithm is too complex for most people to understand at first glance, do you throw it away in favour of a simple but inferior choice for the problem at hand?


I'm not GP poster, but I'd like to dispute this point.

> the threshold you pick is completely arbitrary

The threshold you pick is the result of a cost/benefit analysis weighing ease of understanding against whatever the benefit of that feature is.

In the case of the algorithm, I'd want to know how much faster the complex one runs or how much better its results are. If the answer is, "a fraction of a percent" in both cases, I probably don't care, unless maybe I'm Google and handling enough volume that a fraction of a percent matters.


> switch this reasoning from syntax to algorithm, if an algorithm is too complex for most people to understand at first glance, do you throw it away in favour of a simple but inferior choice for the problem at hand?

This is not the same problem. Syntax is only a way of writing the same code in a different way.


An algorithm also solves the same problem. But that doesn't really matter. The point is: it is the same problem; there are many attributes, many choices beyond syntax, that affect comprehension for people unfamiliar with your code.

Many of them are opinionated and arbitrary, and it's easy to draw the line based on your own experience and unwittingly dismiss people with more or less knowledge than yourself as being too obscure or too basic.


None of these are difficult to explain in a one or two line comment if the rest of your code is clean.


You never know where in the code someone will see something for the first time. Are you going to put a comment every place you use obscure features? That would be doubly annoying.


Or you could encourage colleagues to learn stuff.


…and not forget anything. Learning is a good thing, but obscure features tend to be forgotten quite quickly because nobody uses them. I recently re-learned a very obscure Bash feature by reading a draft I wrote about it 4 years ago. I’ll probably forget it again.


If it's used so rarely that you forget what it does, but it only takes a minute to remind yourself about next time you see it - and if when used in those rare cases it significantly simplifies an implementation and reduces effort - then is that such a bad thing?


> If it's used so rarely that you forget what it does, but it only takes a minute to remind yourself about next time you see it - and if when used in those rare cases it significantly simplifies an implementation and reduces effort - then is that such a bad thing?

It’s not, if it meets your second and third conditions. I don’t think the comma operator or `void` qualifies as something that can "significantly simplify an implementation". Even the labels thing looks like a great opportunity for spaghetti code.


But all of these can be replaced with readable code that doesn’t need a comment to be understood.


Agree! With that being said, teams should also strive to learn as much of the language as possible. Languages won't evolve if we don't care to try out new syntax.


I don't think what matters is whether someone could be expected to know a feature, but rather whether they could be expected to correctly guess what it does.

That comma operator is pretty odd; someone could easily be surprised by that behavior. Labeled breaks and additional arguments to setTimeout, though, do exactly what they look like. You might be surprised that the code works, but you wouldn't be so surprised at what it does.


Specifically in the third clause of a for loop, if you have two iterator variables being incremented (e.g. at different strides), it is more clear to use the comma operator, I think. Very narrow use case, but it is convenient that it exists.
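E.g., walking two counters at different strides (a small made-up example):

```javascript
// The comma operator lets both counters advance in the for loop's
// update clause: `i++, j -= 2` is one expression.
const pairs = [];
for (let i = 0, j = 10; i < j; i++, j -= 2) {
  pairs.push([i, j]);
}
console.log(pairs); // [[0, 10], [1, 8], [2, 6], [3, 4]]
```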


    const x = (1, 2, 3) // 3
    x === (1, 2, 3)
Javascript's tuples have some great compression!


When I was just a neophyte, I wondered whether JavaScript supported multidimensional arrays, so I wrote array[x, y] = something and it appeared to work (reading array[x, y] returned the written value, after all). It was only when I tried to store multiple values with the same y-coordinate that I realized my mistake.
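What actually happens there: `x, y` inside the brackets is the comma operator, so only `y` is used as the key. A short illustration:

```javascript
// `arr[x, y]` evaluates x, discards it, and indexes with y alone.
const arr = [];
arr[1, 2] = "a"; // same as arr[2] = "a"
arr[3, 2] = "b"; // also arr[2] — overwrites "a"

console.log(arr[2]); // "b"
console.log(arr[1]); // undefined
```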


The pipe operator exists exactly like the proposal in Elm, and I miss it so much in js.

It doesn’t really make sense to put it in the article, it is completely unrelated to the other arcane and unsafe features that precede it.


Out of curiosity, what makes these features unsafe?


Sorry, unsafe was a bad choice of words. More something like (programmer) error-prone.


What a lovely article. It was interesting, and I learned about the comma operator, which is really awesome.


Using the comma operator for branch specific actions in a ternary operator expression is actually really nice.


We should only use language features which spark joy.


So many functional aspects of JS make for an exciting future. If only chrome/firefox would catch up with webkit with regard to tail calls.

The latest comments from the devs on the bug in chrome is just depressing.


I knew about half of these, and the other half was still pretty awesome. I’m not entirely enthused by the pipeline operator, but oh well.


Given how long I've had the misfortune of working with JS, I can't believe I wasn't aware of the `dataset` one!


Bet you didn't hear about Symbols either, then


Glad to see that Elixir's pipe operator is beginning to spread to other languages!


https://en.m.wikipedia.org/wiki/Pipeline_(Unix)

First implemented in Unix V3 in 1973, according to Wikipedia.



Here is the history of the |> symbol as best as I can tell:

1994: Isabelle/ML (part of Isabelle proof assistant tooling) (https://blogs.msdn.microsoft.com/dsyme/2011/05/17/archeologi...)

2005 and earlier: F#

2012: Elm

2013: Elixir (https://github.com/elixir-lang/elixir/pull/751)

2013: OCaml (https://ocaml.org/releases/4.01.0.html)


Haskell has had the dot (.) operator which does the same thing for probably longer, but I have no reference sorry. Haskell is >30 years old.


Dot has the arguments swapped.

  f |> g = g . f


The $-operator is similar, though with the arguments swapped; Haskell's exact equivalent of |> is `&` from Data.Function.


Surely you mean OCaml's pipe operator? They have had it since at least 2013: http://ocaml.org/releases/4.01.0.html

I wouldn't be surprised if the syntax was even older. And the functionality has likely been used in some academic language from decades ago. It's just reverse function composition after all.


They seem to have been added around the same time. Another comment on the parent attributes it to F#, which based on a quick googling precedes both OCaml and Elixir. It wouldn't surprise me if the operator preceded that either, but I don't have enough time to explore its origin.


> F#, which based on a quick googling precedes both OCaml and Elixir

F# can’t precede OCaml because it’s based on it. OCaml was first released in 1996, while F# appeared only in 2002.


Was it in Elixir or F# first?


Bash?


Technically yes, but it is interesting that "|>" seems to be the syntax that is being adopted amongst multiple languages.


Well, we can't use "|".


Although to be fair, using | for "or" has always been silly. So was making the bitwise operator the short one, given that the logical one gets more usage in the real world. Hindsight is 20/20 of course.



