What’s New in ES2019 (tildeloop.com)
433 points by davman on July 30, 2019 | 398 comments



Honest question, not meant to be inflammatory.

If we still need to target es5 4 years later, and transpilation is standard practice, why bother? Is the evolution of JS not directed in practice by the authors of Babel and Typescript? If no one can confidently ship this stuff for years after, what's the incentive to even bother thinking about what is official vs. a Babel-supported proposal?

I like the idea of idiomatic JS with powerful modern features, but in practice every project I’ve seen seems to use a pretty arbitrary subset of the language, with different ideas about best practices and what the good parts are.


That's not unique to JS. Some compilers took over a decade to implement C99 features. Most of us C coders were still defaulting to C89 for portability well into the late 00's. But now C99 is mainstream enough that you can (mostly) safely target it.

If you never release new standards you'll never be able to use them. I realize that in the JS world 4 years is basically an eternity, so it's hard to project that far, but I'm sure that if you still have to write javascript code 10 years from now you'll be happy to be able to use the features of ES2019.

Regarding transpilation surely it's not as standard as you make it out to be? It's popular to be sure but handwritten javascript is not that rare nowadays, is it?


> Regarding transpilation surely it's not as standard as you make it out to be?

The impression I got from working on various projects over the past couple of years was that if the build doesn't include Babel/Webpack/Parcel/etc then you're not doing a 'professional' job.

> ... handwritten javascript is not that rare nowadays, is it?

I love handwriting vanilla javascript, though I only really get the opportunity to do it in my personal projects.

When I made the decision earlier this year to rewrite my canvas library from scratch, I made a deliberate choice to drop support for IE/Edge/legacy browsers. (I can do this because nobody, as far as I know, uses my library for production sites). Being able to use ES6+ features in the code - promises, fat arrows, const/let, etc - has been a liberation and a joy and made me fall in love with Javascript all over again. Especially as the library has zero dependencies so when I'm working on it, it really does feel like working in uncharted territory where reinventing a wheel means getting to reinvent it better than before.

I wish paid work could be such fun.


Agreed! My personal project is vanilla JS, no dependencies, and no attempt to support old browsers, and it's so much fun to work in. And, since Real Life sometimes puts side projects on hold for a year or two, it's nice to know that picking it back up won't involve re-learning, updating or replacing a bunch of stuff I've forgotten the details of.


> The impression I got from working on various projects over the past couple of years was that if the build doesn't include Babel/Webpack/Parcel/etc then you're not doing a 'professional' job.

If you have an alternative recommendation that's not these tools, I'd be curious to hear it.


> It's popular to be sure but handwritten javascript is not that rare nowadays, is it?

Long time front end developer here. In the last couple of years I can't recall seeing even a single project without a build pipeline (not that they don't exist, I just haven't encountered them at my day job, first or third party).


I don't understand what is so appealing about not having a build pipeline for JavaScript, other than the ability to very quickly test and learn things directly in the browser, which of course anyone can still do.

For anything remotely important, you're almost certainly going to already want a build pipeline to do things like concatenating/minifying code, running tests, and deploying. Adding a transpilation step to extend support to older browsers comes with almost no cost to time or maintainability, assuming you're using well-supported things like Babel and its official plugins.


The appealing thing about not having a build pipeline is not having it. Not having another thing to maintain and update/upgrade/debug. I've gone through grunt, gulp, webpack and parcel. Somewhere along the way I realized that with es6 imports and css variables I don't really need it.

I've talked with frontend devs who only worked on projects with build steps about this and they often seem perplexed and surprised by just how simple it can be if you don't make it complex.


Can you imagine a modern SoundCloud, Spotify, Facebook, Airbnb or Slack without a build pipeline? There's nothing appealing about that.

It's nice and simple for small projects but let's not act like putting a script tag in HTML is some divine wisdom that newbies don't understand.


Yes I can. All of those except for Facebook (because Facebook is a sprawling platform) seem like they could reasonably be built without a build system. And with ES6 modules it's not "putting a script tag in HTML", you have a proper module loading system without needing any builds or libs.

I'm not saying it's some sort of lost knowledge like Damascus steel, I'm just saying that many people these days are so caught up into the current dogma of JS that they never consider a simpler but still modern path.
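
To make that concrete, the no-build setup is basically just an entry module referenced from a `<script type="module">` tag, something like this (the file names here are made up, obviously):

    // app.js, loaded in the page with <script type="module" src="app.js">
    import { renderHeader } from './components/header.js';

    renderHeader(document.querySelector('header'));

    // components/header.js
    export function renderHeader(el) {
      el.textContent = 'Hello from a native ES module';
    }

The browser resolves the imports itself; no bundler involved.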


> Adding a transpilation step to extend support to older browsers comes with almost no cost to time or maintainability

They may be simple concepts, but there's definitely still an added cost with transpilation and minification steps.

If you work with Babel long enough you'll encounter scenarios where the transpiling didn't quite work as expected and breaks in-browser.

When you're debugging minified code in-browser using source maps, it's not all that uncommon to get different line numbers and stack traces than you get when loading up the original unminified source. I can recall a number of occasions where I had to deploy unminified source for in-browser debugging because the source maps were obfuscating the real error.

https://www.joelonsoftware.com/2002/11/11/the-law-of-leaky-a...


I'm using Webpack + Babel... but literally only for features after async/await. All major, current browsers support it. That's the baseline support for the apps I'm working on now.

The payload to polyfill promises, regenerator and async is huge, and without them, my payload is significantly smaller. For preset-env...

      {
        loose: true,
        modules: false,
        useBuiltIns: 'usage',
        corejs: 3,
        targets: {
          browsers: ['edge 15', 'safari 10.1', 'chrome 58', 'firefox 52', 'ios 10.3'],
        },
        // every target above supports async/await natively, so skip those transforms
        exclude: ['transform-async-to-generator', 'transform-regenerator'],
      },
From here, I only need the features not part of stage 4 that I want to add. I have the following in a test script...

    try {
      // throws a SyntaxError on engines without async (arrow) function support
      eval('(function() { async _ => _; })();');
      if (typeof fetch === 'undefined') throw new Error('no fetch');
    } catch (e) {
      // missing async or fetch: send the browser to the legacy (transpiled) entry point
      window.location.replace('/auth/legacy.html');
    }
With async functions and the fetch API available, you get a LOT in the box.
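
For instance, the sort of thing that used to need a promise polyfill plus an XHR wrapper is now just this (a sketch, with a made-up endpoint):

    async function loadUser(id) {
      const res = await fetch('/api/users/' + id);   // hypothetical endpoint
      if (!res.ok) throw new Error('HTTP ' + res.status);
      return res.json();
    }

    loadUser(42).then(user => console.log(user.name)).catch(console.error);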


I have no concerns about having a build pipeline per se. My concern is that it's still way too complex and difficult. I think it could be made to be much simpler for 90 percent of cases, while still maintaining optional complexity for the remaining 10 percent of cases that need it.


>I don't understand what is so appealing about not having a build pipeline for JavaScript

We don't have one at my work (Fortune 100 company), but it has nothing to do with whether it is appealing or not. It has to do with the fact that the projects we are working on started years before there was any such thing as a build pipeline in the web world, and it's not pragmatic to add one in at this point. Maybe some decade we will get the budget to start over from scratch (haha!) and get modern things like a build pipeline.


I agree in broad terms, but as a counterpoint, those build pipelines are being created and used primarily by folks who initially learned by handwriting javascript.

While less of a consideration for established teams, it dramatically raises the barrier to entry for those who are learning as well as those who currently have much simpler requirements. Those are the future members of that established team. Adding hurdles to the process of learning will, in the long run, result in a smaller, more homogeneous pool of experienced developers to draw upon. It's a self-perpetuating cycle - making folks learn by using complex build scripts will result in experienced developers who then create further complexity because they've self-selected to be those who happen to think in that mode.

That's not intended to suggest that progress is bad. Rather that it's a bit short-sighted of us, as an industry, not to consider the ramifications of enforcing a workflow that happens to be trivial from the limited perspective of long time front end developers.

{ Edit to try to make the last paragraph sound less accusatory. Sorry! That wasn't my intention. }


There are tools like create-react-app that configure complicated build pipelines for you, and there's parcel, a simple common-case build pipeline.


Again, just for the sake of playing Devil's Advocate, the fact that we rely entirely upon such helpers (and the overwhelming number of mutually exclusive options available) is a potential indicator that we're making it more complex than we have to[1].

The syntactic sugar that we're adding to make our own jobs easier is arguably making things disproportionately more difficult for anyone who doesn't already have the same baseline level of knowledge. That includes those who are trying to learn to do our jobs after we've all moved on. That would notably have also included all of us, earlier in our careers.

I'm just suggesting that it's worth considering whether that seems sustainable in the long run.

[1] ... for relatively simple tasks. Complex tooling happens to be a very good tool for complex tasks, like the sort of professional work that you seem to be referring to. No matter how good our hammer may be, however, not every task always wants to be a nail. create-react-app and parcel are examples of excellent hammers.


The main complex task that's being addressed is making JavaScript that runs on a variety of browsers, not just the latest Firefox.

I'm afraid I don't know what you have in mind as far as syntactic sugar that makes things harder for learners. Every new syntax I can think of makes JS easier to learn and use.


The biggest issue I've seen with these is that at some point they do break and then you need a knowledgeable person to fix it. The pipeline is basically a black box unless you possess the knowledge of how all the parts work and interact with each other.


I dunno how to put this in a non-controversial way, but...at some point, I switched from thinking a desirable, throw-the-money-at-them developer is somebody who writes a lot of code to somebody who understands things and is capable of understanding new ones when they arise.

I'm not saying that to excuse thrash. Thrash is bad. But like..."oh, at some point you'll have to have somebody who actually understands how the thing we use to make money works"--you should have that person anyway! It's an existential threat to your business not to.

Understanding things is our job. Unless you're somewhere where understanding things isn't valued, and 1) it's gonna eventually fail, and 2) you have better places to be.


> somebody who understands things and is capable of understanding new ones when they arise.

Some of us won't let go of the crazy dream that things should be simpler. I want this not because I'm lazy or because I don't understand them but because - dammit - things should be simpler.


Complex things are complex. Attempting to simplify complex systems below this threshold of complexity invariably ends in lost capability or lost fingers when the sharp edge you didn't know was there sneaks up on you.

And web browsers are very complex beasts by design, by accretion, and by necessity.

Sorry, I guess. Does bemoaning it actually help anything?


But we're not talking about web browsers per se - we're talking about the javascript ecosystem. The complexity of which is only indirectly related to that of the browser.


I made this recently just because I wanted to prove to myself that it was possible to have a modern frontend application without all the overhead of build tooling and I have to say I'm pretty happy with it. You can use some of the most important things in browsers today like CSS variables, and ES module imports.

https://gitlab.com/WA9ACE/frontend-starter


That's an interesting project template. Manually managing dependencies and using backbone sounds quite fun!


I was able to create a full web app using Preact with no build pipeline at all. It was not the best experience, mostly because of the lack of concatenation.

I ended up adding Parcel for easy Typescript support.


I work for a Fortune 100 company and we still don't have a build pipeline :-(


It's a bit different with compiled code though, right? If you stop supporting the previous C revision, it means the person compiling your code needs to upgrade their compiler; the executable will work for everyone on that platform once built.

Whereas in JS-land, the support for upgrades is even more trailing because it's the end-users who need to upgrade, not just the individual/organization doing the packaging.


To a lesser extent, C does have the same trailing problem with end users as well, in libc / VCRUNTIME and other standard lib bundles.

On Windows VCRUNTIME installs are mostly painless, easily backgrounded, and people don't realize they sometimes have as many as 100s of different versions installed side-by-side because every game installed wants a slightly different version.

But on Linux it's often as much as three-fourths of the reason that code needs to be rebuilt so often for every different Linux distribution: most distributions lock to a single libc and prefer that every app dynamically link that specific libc. Apps get held back by distributions slow to adopt libc updates all the time on Linux, and app devs try to stick to common libc versions based on distribution popularity (user preference), which isn't dissimilar to the lagging browser problem.

(Then there are arguments about statically bundling libc / VCRUNTIME, etc.)

It's not as bad as JS land on average, but that doesn't mean that C is immune to the same problem. As soon as you are dealing with shared libraries / platforms / runtimes, you run into having to deal with what users are willing to install (and practically no platform is immune, depending on trade-offs one is willing to take).


Different, yes, but the difference is quantitative, not qualitative.

Once a transpile step is required, it doesn't make as much of a difference what is on the other side.


> If you stop supporting the previous C revision, it means the person compiling your code needs to upgrade their compiler

FYI, this doesn't happen in C, all C versions are backwards compatible - you can compile C89 code on any compiler supporting C99, C11 or C18.


Where C11 > C99.

Talk about naming problems / sorting problems. They've adopted Microsoft's versioning system where Windows 8 is newer than both Windows 98 and Windows 2000.

Does anyone remember the Y2K problem? Anyone?????

They worked out that naming things properly (where computers are involved) was a big problem back in the 1960s. But we still only allocated 3 bytes for the version of C or something...?


parent means if code starts using C99 features that won't work in C89, then the person with a C89 compiler needs to upgrade their compiler.


Huh? The compiled JS will continue to work just fine for the end-user. Not sure what you're getting at.


If your C code is upgraded from C n to use C n+1 features, then it breaks compilers that do not support C n+1. This is not a problem for end users, because most people do not compile the code themselves, but use pre-built binaries. So it is a problem for the package maintainer to deal with: the package maintainer upgrades their compiler and the new binaries just work for the end user, because once compiled it doesn't matter if the source was C n or C n+1.

If your JS code is upgraded from ES n to ES n+1, you need to polyfill and transpile in perpetuity, otherwise end users with access to only ES n browsers will not be able to run your code at all.

Comparing C spec updates and JS spec updates is comparing apples to oranges.


> If your JS code is upgraded from ES n to ES n+1, you need to polyfill and transpile in perpetuity, otherwise end users with access to only ES n browsers will not be able to run your code at all.

I'm still not seeing the difference. End users of JS applications don't compile their code the same way end users of a C binary don't compile the code.


If you ship an app with ES6 js and a user with a browser that doesn't support ES6 features tries to access it, then it won't work for them.

The user doesn't manually compile anything, but their choice (sometimes lack of choice) of browser determines which version of the spec they can run. So you end up polyfilling forever because some significant chunk of your user base is tied to IE8 or worse.

All the original commenter meant is that the people making the software can't ever guarantee what version of JS the end user's browser can handle.


> If you ship an app with ES6 js and a user with a browser that doesn't support ES6 features tries to access it, then it won't work for them.

Right, but if the developer transpiles their source code and distributes it then the JS end-user is in the same situation as the C binary user. Distributing ES6 code without transpiling it is akin to distributing C source code without compiling: the end-user would need to have an up to date compiler/runtime to execute the code.


Sure, but JavaScript isn't compiled but typically interpreted (or JITed).


Sure, but this is a distinction without a difference from the perspective of the end-user.


> That's not unique to JS. Some compilers took over a decade to implement C99 features.

I sometimes wonder what's the point of new versions of C, too.

Also, a new feature I want to use needs only be supported by one C compiler: the one I'm using. With JS, I need all of them to support it.


>I sometimes wonder what's the point of new versions of C, too.

At this point I'm mostly fine with the feature set of C99 so I can live without the newer standards. Actually there's stuff in newer standards that I find questionable, but that's a different discussion.

C99 on the other hand was sorely needed, if only for standardizing some basic features that up until then were only available as vendor-specific extensions. Things like inline, stdint.h, bools, variadic macros, restrict, compound literals and more[1].

Writing code without these features is often severely limiting. Or rather, you probably won't be limited but you'll have to rely on vendor extensions and write non-portable code. Or maybe you'll use a thousand-line long configure script to make the code portable.

>Also, a new feature I want to use needs only be supported by one C compiler: the one I'm using. With JS, I need all of them to support it.

If you're making proprietary software that makes sense, if you're developing open source code you very much care about portability and compatibility. I care about the compiler I use, the compiler OpenSuse uses, whatever garbage Visual Studio uses, the compiler FreeBSD uses etc...

Besides I have basic code snippets I wrote over a decade ago that I still use today, regardless of the environment. That's valuable too.

[1] https://en.wikipedia.org/wiki/C99


> Also, a new feature I want to use needs only be supported by one C compiler: the one I'm using.

I've always had the impression that C programmers also care about standards compliance, and aren't typically willing to marry their project to a particular compiler.

At least, it's the language community where you see "language lawyers". I'm sure there are "language lawyers" in other language communities, but I've never seen discussions about what causes "undefined behavior" or what's "implementation dependent" or discuss interpretations of particular passages of the standard like I do with the C and C++ communities.


I don't think I ever write C code that I intend to only be compiled using one compiler. One of the primary reasons I use C is to write cross-platform code, so I care about GCC, Clang, and VS, at least.


GCC and Clang are both cross-platform, and users typically don’t care about which compiler you used to produce the binary. At least until the early 2010s plenty of projects used GCC-specific features and wouldn’t compile with Clang. The situation must be better now but I bet GCC is still the only blessed compiler for many maintenance-only code bases.


That's not right, the same is true of JS. You'd run a "build" step on a system you control to transpile your fancy modern JS back as far as you'd like to officially support, and put the output JS file on your webserver.

Using modern JS features does not require universal support.


> It's popular to be sure but handwritten javascript is not that rare nowadays, is it?

Even if you’re only targeting evergreen browsers, the popular build tools also perform minification and dependency resolution/linking. There’s so much a tool like webpack can do for you that I imagine it will remain hugely popular even as the need for transpilation wanes.


> Regarding transpilation surely it's not as standard as you make it out to be? It's popular to be sure but handwritten javascript is not that rare nowadays, is it?

If you are building a modern web "app", not a one off set of web pages, then yes, it is the standard. It would be very weird to not see a compile (transpilation) step.


It's becoming less and less necessary. If you don't care about browsers with tiny usage you can ship ES6 as-is these days. All the main browsers support ES6 modules. And HTTP2 means having all those modules in different files doesn't have the huge performance impact it once did. About the only thing left (that isn't easily solvable) is stripping comments and minifying.


While that's true, dealing with compatibility issues and making sure all of your services are using http2 is arguably more work than just using babel.


Versioning files is still a need too.


I'm just making a handwritten microproject. It's to be included in a very old gigantic internal platform so I didn't want to add any more build steps to it. Everything was going smoothly, until someone said... it's not working in IE11 and some clients are still using it. We ended up converting the js to an older version in a half automated/half manual way with babel... I'm so used to using some of the new features I didn't even remember they didn't exist before. I used babel before, but only now did I realise how much pain it saved me over the years.


As of VS2017, MSVC still missed a bunch of C99 features (or they're buggy as hell):

> Variable Length Arrays are not supported (although these are now officially optional)

> restrict qualifier is not supported, __restrict is supported instead, but it is not exactly the same

> Top-level qualifiers in array declarations in function parameters are not supported (e.g. void foo(int a[const])) as well as keyword static in the same context

They only started seriously working on actual C99 support (aside from the bits of C99 which were part of C++) for VS2013 or so.

Though to be fair it seems both Clang and GCC are still missing bits and bobs:

> The support for standard C in clang is feature-complete except for the C99 floating-point pragmas.

For GCC I found https://gcc.gnu.org/c99status.html; it's unclear how up-to-date it is, or whether GCC is still missing any required feature.


> Regarding transpilation surely it's not as standard as you make it out to be? It's popular to be sure but handwritten javascript is not that rare nowadays, is it?

I'd say for the last 5 years 90% of my browser JS projects used Webpack and Babel.


> Most of us C coders were still defaulting to C89 for portability well into the late 00's. But now C99 is mainstream enough that you can (mostly) safely target it.

Sure, but if C came out with a new standard every year, you'd essentially never be on the latest version. Isn't there at least a valid argument for slowing down a bit to give the implementations a chance to catch up instead of having a new ES 20XX every year?


The implementations that are moving at all are keeping up fine. The biggest question for web developers is whether they support IE. If you do, you're stuck on ES5. If not, you can use most features from ES2015 and up. Once the next release of Edge is out, that will expand greatly.

Even if you support IE, you can still reap the benefits of modern JS if you're willing to do some differential serving. You can use the module/nomodule pattern to serve ES2017 without making changes to the server.
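
The declarative version is just a `type="module"` script plus a `nomodule` fallback script in the HTML. If you'd rather do the same split from JS, a rough equivalent (bundle names are made up) looks like:

    // Browsers that understand ES modules ignore `nomodule` scripts and, as a
    // rough proxy, also support most ES2017 syntax; older browsers get the
    // transpiled bundle instead.
    var s = document.createElement('script');
    if ('noModule' in s) {
      s.type = 'module';
      s.src = '/app.modern.js';   // shipped mostly as-authored
    } else {
      s.src = '/app.legacy.js';   // transpiled + polyfilled ES5 build
    }
    document.head.appendChild(s);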


Sadly the "mostly" is because threads :(

It is the part of C99 I was most excited about, and I was very disappointed that it is so poorly supported.


Threads are C11, not C99.


I write small lambdas in JS all the time. Generally it's just a couple of SDK calls so it's not worth the toolchain.


>If we still need to target es5 4 years later, and transpilation is standard practice, why bother? Is the evolution of JS not directed in practice by the authors of Babel and Typescript? If no one can confidently ship this stuff for years after, what's the incentive to even bother thinking about what is official vs. a Babel-supported proposal?

For one, not everyone works on the client. I can write for Node and use everything v8 supports without ever touching Babel and Typescript.

>I like the idea of idiomatic JS with powerful modern features, but in practice every project I’ve seen seems to use a pretty arbitrary subset of the language, with different ideas about best practices and what the good parts are.

Good parts/best practices are orthogonal to native features and libs, which is what we're discussing here.


> I can write for Node and use everything v8 supports without ever touching Babel and Typescript.

You totally can do that--but you probably shouldn't, because writing TypeScript is better for you and for future you. ;)


How will TypeScript instead of plain JS help the future me? Unless of course you meant that as a joke (going by the wink).


Type safe code improves code readability, and the future you will most likely read the code you are writing today.


If you're more comfortable with or most familiar with strongly typed languages, then yes. But don't assume everyone shares your tastes. Java for example is a very strongly typed language, and I find the verbosity makes it harder to parse.

Good, future proof code can be written in any language. There's lots of still functioning, well maintained js written 10yrs back out there.


"I didn't have to write a test to know that I am passing a data structure correctly across every module of my application" is not a "taste" thing and comparing TypeScript to nominative typing isn't a reasonable and fair tactic.

We have legacy JavaScript dependencies and most of them worry me. I don't stay up at night worrying about the domain and range of TypeScript ones even if their testing is weak. There's a reason for that. TypeScript is loose where it should be loose and yet strongly encourages flexible but very clear typing that provides significant resistance to broken contracts and misunderstood semantics. I won't hire a web or Node developer who has a problem with it today because understanding the value of gradual and structural typing is, I strongly think, evidence of baseline competence.

"I write worse code, slower" is defensiveness masked as "a taste thing." We've improved the state of the art and it's better here.


If you write JavaScript with a C# frame of mind, of course you'll find it awkward and hard to test. TypeScript is basically JavaScript written in a C# frame of mind, and there is nothing wrong with that. If it helps some people be more productive, then more power to them. But don't fool yourself that TypeScript is the best way to do JavaScript. If you're finding writing maintainable apps in plain JavaScript hard, then you are simply doing it wrong. You can learn how to do it right (hint: it's a functional, not a class-based, language) and add one more tool to your box, or you could stick with TypeScript and your comfort zone.

I have worked with TypeScript, and I enjoyed it. It reminded me of ActionScript 3, a language I really enjoyed that's more or less dead now. And I also enjoy plain JS, and it's just more powerful and expressive.

> I won't hire a web or Node developer who has a problem with it today because understanding the value of gradual and structural typing is, I strongly think, evidence of baseline competence.

If you think the benefits of "gradual and structural typing" cannot be achieved in JavaScript, you clearly don't understand the language. And that's ok. It is also ok to hire people you are comfortable with, rather than those who challenge and disrupt your way. And I wouldn't want to work somewhere where the devs are too rigid or tribal about tools. I'm more interested in what I'm building, rather than the tools used.


I don't have a good way to describe this post as anything but not-even-wrong. "Hint, it's a functional language"--types are orthogonal to whatever functional-versus-OO dichotomy you've invented in your head. Gradual and structural typing are components of type systems of either static or inferential basis and JavaScript is neither--so, definitionally, you can't achieve them in JavaScript. And given that TypeScript is a strict superset of JavaScript, it's absolutely bonkers to claim that JavaScript is "more powerful and expressive". Should one need to go outside those gradual type strictures, `any` is right there.

I knew JavaScript cold well before I was using TypeScript and I've shipped both mobile and web applications using JavaScript, as well as writing a solid amount of high-performance backend systems. And I ended up writing an eye-gouging number of tests that go away with TypeScript. It is literally, not figuratively, a way to dismiss entire classes of bugs. I may not understand JavaScript, but I'm not the one making factually inaccurate assertions about stuff up in here and that makes me really wonder how solid your conclusions can possibly be.

But as a gentle observation: people don't usually go on about "tribal" developers when the facts are on their side.


Your problem is the emotional denial that there is an engineering trade-off TypeScript makes by adding strict typing to JS. That trade-off is less flexibility and expressiveness, and the gain is more fool-proof code that's more familiar and comfortable to those used to languages based on classes and strong typing.

> Gradual and structural typing are components of type systems of either static or inferential basis and JavaScript is neither

Whoa! That's a lot of words to basically say "JavaScript does not have static or inferential/derivable types". Kind of like the way type annotations create all that syntax noise. And I was talking about achieving the BENEFITS of "gradual and structural typing". It is done for the benefits, not for its own sake. Do you even know the benefits? In proper JS, the same is easily achieved by duck-typing, and run-time checking whenever necessary.

> And given that TypeScript is a strict superset of JavaScript, it's absolutely bonkers to claim that JavaScript is "more powerful and expressive"

I touched on this a bit when talking about trade-offs. A superset does not mean super like Superman; it means an added layer of abstraction. You are moving further away from the metal, and leaving some decisions to the TypeScript compiler. That means giving away power and flexibility to the compiler. A good engineer understands, and is not in denial about, the trade-offs they are making.

> But as a gentle observation: people don't usually go on about "tribal" developers when the facts are on their side.

What really happens is that people outgrow their tribal instincts once they realize that these are all just tools that come and go; it is the underlying ideas that matter. JS absorbed all the best ideas from jQuery, and jQuery went away. It has integrated all the best ideas from CoffeeScript (JS with a Python frame of mind), and CoffeeScript is no longer the rage it was a few years back. Same is happening with TS, which is why my money is still on JS. The fact is, JS will outlive TS.


Java is verbose because it is Java, not because it is statically typed. Let's look at a more modern and ergonomic language instead, like TypeScript, which is much more relevant to the OP.

The typing there might add a few extra characters to your function definitions but you gain them back in terms of less need for documenting the argument types in comments. You hardly ever see typing inside of functions in Typescript so it hardly adds any verbosity.


Java was used as an example of strong typing taken to the extreme. Something YOU can relate to, as a TypeScript user. In terms of verbosity, the difference between Java and TypeScript is like the difference between TypeScript and JavaScript.


I do appreciate the comparison and have suffered enough Java in my life to fully understand it. But I have to tell you again, Java is not verbose because it has "taken strong typing to the extreme" (correlation vs causation), it's just a badly designed language. From the same family of languages, look at C# and Kotlin for example: they are both much stronger than Java (minus checked exceptions) but at the same time far less verbose. There are many other examples of this; modern C++ is stronger and less verbose than C++98. Rust can also many times be less verbose than C++ or C and at the same time stronger.

In many cases typescript can also reduce verbosity because of more modern features and transpiling. For example, early versions had foreach long before it was in ES and before Babel was as widespread as it is today. In fact you don't even need to add type annotations at all to run typescript; you can let it try its best on plain javascript and, in those places where it's not obvious from context, just add the extra :string.


IMHO the current state of affairs is actually the best possible scenario.

- Experimental language proposals can be tested in the wild

- Real non-ivory-tower feedback is raised to TC39

- Everything feeds into the canonical ES spec (~no splintering)

- Us regular folk are able to harness new syntax immediately

- Users continue to have their old runtimes supported


It is not only the best possible scenario, it's a beautiful elegant solution that 10 years ago nobody would have dreamed of. This really propelled the web as a platform (although opinions vary on whether the web should be a platform at all).


I agree. 10 years ago I wouldn't have believed the language would be evolving (mostly improving) at the pace it currently is.


I find these to be good points, but there is something missing for me that makes this "best possible scenario" only pretty good.

JS is a notoriously quirky and inconsistent programming language. Clearly it's sufficiently usable for writing complex, powerful and reliable programs, but it's error-prone for non-experts and encourages programming patterns that make importing accidental complexity the norm.

For many programming situations it'd be easy to just pick a different language, but obviously this isn't the case for writing browser-based programs.

The best possible scenario for me would somehow involve deprecation and removal of the nasty parts of JS, and a path towards a smaller, simpler, more consistent language. Right now it feels like the cost of forever backwards compatibility is paid every day, in every project, and it's completely wasteful, given that transpile-and-polyfill is widely considered best practice.

Whether this could be the job of TC39 or some other institution could go either way.


What parts of JS do you suggest removing? JS has its quirks, but it is not a particularly big language.


JavaScript was very simple compared to most languages. Adding a bunch of extra syntax variants, different types of scoping rules, magic syntax has made the language overly complex. Thankfully you can still use the "good parts".


Typescript removes some of the nastiest JavaScript features. Of course that's not its raison d'être and it includes a bunch of stuff you may not care for. However, it is what is currently occupying the niche you're asking for. And it has all the advantages the parent mentions.


As a purely backend engineer with a passionate hatred for NPM, I love writing ES6 and native javascript. All of the newer features (tbh, I don't know which are new, but I guess 'const', 'arrow functions', 'default parameters', 'string interpolations', 'filter/map' are definitely a few of them) made javascript more like the back end language that I work with. I wrote at least a thousand lines of native javascript for the monkey scripts and my personal websites, and I consider the experience a good one. I applaud the advancement of the javascript language itself and I consider the package management the source of all evil for javascript.


What's your take on Node and backwards compatibility? Since you get to choose your runtime, is it valuable to stick with the same always-add, never remove approach as the browser? NPM taken out of the picture, I get writing one language everywhere would be nice. Do you just consider the bad parts to be the price to pay for not needing to keep track of the distinctions?

I've recently been working in Electron, and I find having app logic in both browser JS and Node to be more of a frustrating uncanny valley than a help. I suspect I'm in the minority on this one though, at least amount people with workaday skill set in client side JS.


I feel like you're mentioning forward compatibility instead of backward compatibility. ES5 code should run fine in ES6 AFAIK, and that's the extent of backward compatibility that should ever be needed. I haven't touched node all that much, and I'm in the camp that while it's good to just write one language everywhere, javascript still exhibits many of the key features of a browser language, which makes running it locally 'unnatural'. Same with trying to get python to run in a browser; maybe we should just embrace that we need to learn more languages, as that is generally the case for us software engineers.


From my perspective, it is less about what we can use in the browser without transpilation, and more about where the js ecosystem is heading. I personally only want to write JS that is in spec because in 5 years something like decorators might not be a thing. Basically when I'm figuring out what language features I care about, I'm thinking about how hard it will be for other engineers to work on the codebase. Things that are not in spec are going to be less familiar and take more time to learn. I think there's a lot of momentum with the spec and something to point to where you can say "you should learn this syntax/feature because it's in the spec."


> Basically when I'm figuring out what language features I care about, I'm thinking about how hard it will be for other engineers to work on the codebase. Things that are not in spec are going to be less familiar and take more time to learn.

I agree, this is super important. My inclination whenever I dig into a JS project has been to use lodash/underscore everywhere for everything, assuming that it is popular enough that someone will be able to maintain it without much headache, and I can actually get stuff done without breaking my brain over JavaScript's notorious quirks. I'm curious at what point this stops being a good practice. It certainly was 4 years ago.


This came up in my team's standup today. We were analyzing old dependencies and Lodash was in the list and none of us could justify why we needed it anymore. So I guess that point is nowish. Same for jQuery.


I hardly use lodash. What do you use it for? I presume most of your use cases can be easily handled by just using JS from the ECMAScript spec.

Personally, I only use lodash for the debounce functionality.
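
Even debounce is small enough to inline if that's the only thing keeping the dependency around. A bare-bones sketch (none of lodash's leading/trailing/maxWait options, and `searchInput`/`search` below are placeholders):

    function debounce(fn, wait) {
      let timer;
      return function (...args) {
        clearTimeout(timer);
        // reset the timer on every call; fn only runs after `wait` ms of silence
        timer = setTimeout(() => fn.apply(this, args), wait);
      };
    }

    // e.g. only fire once typing has paused for 250ms
    searchInput.addEventListener('input', debounce(e => search(e.target.value), 250));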


Different products/teams can move at different speeds if they target a specific market. For example, for internal apps teams can expect that browsers are up-to-date, or that even only a specific browser will ever run the app. Also, people who write JS for back-end (eg node.js) or an embedded engine can control the environment their code runs in, and can use more up-to-date features.

It’s similar to all the new tweaks and elements in HTML or DOM. If you’re working on Wikipedia, you will likely never get to use them. But if you work on a more niche app, they become quite useful. Over time, old browsers die out, and the amount of people who can use new features expands; early adopters do the testing for the late majority.


> we still need to target es5 4 years later,

This is a dramatic improvement over the 10 year lifespan of IE6 with es3. Once the need to target es5 drops, the next level gets even shorter - I don't think we'll get less than 2 years as a practical matter, but even at a 2 year delay, that's still regular progress.

> a pretty arbitrary subset of the language

Best practices don't come from a mathematical model - programming is about communication and being compatible with user/business demands (which keep changing). Thus, best practices come about from a lot of experimentation and retrospective. That's ongoing. This doesn't mean it won't settle down. Heck, as it is, a lot of dynamic best practices influence the direction of non-dynamic languages - all of which arises from time and experimentation.


> If we still need to target es5 4 years later, and transpilation is standard practice, why bother?

Your “we” isn’t everyone else’s. Some places need to support very old browsers but even in places like that usually not every app does. Those people are pushing the state of the art forward since they do real work outside of the standards process and that provides useful feedback to both the standards committees and browser developers.


Fair, JS is everywhere and serves many use cases. The implicit "we" in that statement is developers in 2019 who are building client side applications that need to get a lot done, run in most installed browsers, and remain maintainable to a 5-8 year horizon. Not everyone for certain, but probably enough to generalize, at least on this forum.


You tell such users: "Your browser is too old, please update it" and direct them to the Chrome and Firefox websites.

With the exception of Safari, all major browsers support auto-updating, and anybody who intentionally uses an outdated version of Safari is a masochist with a deviant fetish for error messages.


Or y'know, someone who doesn't see a reason to purchase a new Apple device every N years of forced obsolescence.


Feature phones, smart-tv/watches etc. do not auto update. Old versions of android, iOS, Windows. Non-mainstream OS's running non-mainstream browsers. 1% of the market is still a lot of users.


And in the idealized fantasy world of Ayn Rand, any giant dinosaur enterprise corporation who insists on using antique outdated Jurassic browsers deserves to be bitch-slapped by the invisible hand of the market into extinction for their outrageously dangerous security and privacy policies.

Too bad the real world doesn't work the same as it does in self-indulgent libertarian porn...


The real world works exactly like that. It's called System Requirements. If you want to use a piece of software, you look at the requirements and if you don't meet the requirements then you can't install it.

All websites have system requirements, no matter how non-libertarian they seek to be, for example most websites will not work on IE6.

It is not a fantasy to inform users that they don't meet the requirements, it is actually the courteous thing to do. If more websites were bold enough to inform users of their outdatedness then we would not be having this conversation.


This is especially true for businesses. I've had a few cases where clients balked at browser support policies, but when I presented them with the cost multiplier it'd add, not to mention the cost of features / security, that turned it into a calculation, and suddenly the "hard requirement" turned out to be one VP who refused to upgrade from IE5 Mac, and they decided not to pay more for that.


I mean the real world is not like Atlas Shrugged, which is archetypical self-indulgent libertarian fantasy porn. The invisible hand isn't God setting things right and making life fair, it's rich assholes (who worship Ayn Rand and think they're God) rigging the system to cheat for themselves.

NSFW link:

https://www.cbsnews.com/news/ayn-rand-vs-the-invisible-hand-...


ha!


Again, that really depends on what features you use and who your clients are. It's entirely possible that a team may not need IE11 support now or that they've moved in the direction of low overhead for modern clients and a legacy mode (possibly transpiled) for the <1% of people who are affected.

At some point this is a business decision like anything else and the security considerations in particular have helped get people onto evergreen browsers faster than used to seem possible.


> Is the evolution of JS not directed in practice by the authors of Babel and Typescript?

In practice? I suppose in practice, JavaScript is driven by everyone in aggregate and what people consider to be "JavaScript" rather than "something that can be made to run in traditional JavaScript environments like browsers." I'm not sure if you mean that, or simply who directs actual changes to the traditional JavaScript environments (like browsers) themselves.

But yeah, if transpilation tools are reliable and continue to be well-maintained, you can say "why bother updating the 'official' language and implementation in browsers?" But I don't understand how this is a bad thing. You're getting the best of both worlds: browsers will implement new JavaScript features and optimizations, and some dev teams can also use build tools to use those new features, and other potential new features, and still make their work available to older browsers.

It doesn't seem like a problem to me, unless you're thinking about all the language development effort in the JavaScript community as a fixed pie such that "non-official" language development like Babel and TypeScript take away effort that otherwise would be allocated to official language development. And I certainly don't think that is the case.


We're in the age of "evergreen" browsers, where almost everyone is on a browser that auto updates. New features can be useable in maybe 2 years now, depending on your audience.


[laughs in enterprise web app]


Those people should just stick to IE5 anyway.

How about simply compiling IE5 into WebAssembly, wouldn't that solve the problem? ;)


Equally honest counter-question: all modern browsers essentially support ES2019 already, so why would anyone still need babel outside of turning JSX into JS because React is still a hot tech, or in order to turn normal modern code into the kind of legacy code that >99% of your visitors won't need?

I'm sure there are plenty of build systems out there that still indiscriminately turn things into ES5, because "that's how we wrote it years ago and it still works", but anyone who actually cares about performance will think twice before using babel today to turn nice, clean, concise modern code into incredibly verbose and shimmed legacy code, and will certainly think twice before serving ES5 code to users on a modern browser.


There's a handful of unsupported features I play with... the null operators and pipelines in particular. But I do set my presets for pretty modern support as a baseline. I still use webpack as well.


Webpack is just a bundler, it doesn't turn modern code into overly verbose legacy code on its own. Babel's the real troublemaker because by default it's still acting like it's the original 6to5 package. Using it to turn draft features into working ES2018 is sensible, using it to convert all the way down to ES5 for anything that isn't IE11 (which is basically the only legacy browser left at this point), not so much.


Agreed. That's why my baseline is at least async support. Every modern browser has supported it for about 2 years now, and regenerator+async was the single biggest part of legacy transforms. I did have a second config for legacy, but at my current job I was able to drop support, and just show unsupported browsers a message page now.


You only need to target old es if you're deploying a website for a big company. Most people don't need IE6 compat. For example, people writing node apps can choose which version of node they deploy.


Maybe I'm missing something, but they said es5, not es4?


I assume GP meant es5; es4 was an attempt to add static typing to JS and was never shipped. ActionScript 3 was the only implementation AFAIK.


Not true at all; With Node.js I don't have to wait for a browser to update, and V8 is often up-to-date with the latest stuff. And as a full-time JS dev, I have not gotten the feeling that the transpiler authors get to determine what's in the language. Honestly if they add anything that isn't destined to be part of the spec, I wouldn't use it because then you're tied into that transpiler for good.


A few off the top of my head:

1. Node.js greatly benefits from these features. Transpiling is not as prevalent there

2. Some people do exclusively target "evergreen" browsers and don't care about IE support

3. Those that do differential bundling (different bundles per browsers) can see quite a performance boost by not transpiling on newer browsers


Ad 2) It isn't just about IE anymore, as more and more older machines get left behind by current OSes. E.g., for a 10 year old Mac Pro, which is still a viable machine, Firefox is currently the only evergreen option which is still receiving updates. (Safari is largely OS-dependent, as is WebKit, and Chrome/Chromium cancels support as soon as the OS is phased out.)


> If we still need to target es5 4 years later

This is only true if you're writing JavaScript for the client and you have to support IE11. For many companies, the usage for IE11 is so low now that it can be safely dropped, for instance my SaaS products all just target evergreen browsers.


It's still a moving target though. Either you transpile, and your target is just a config option, or you are in the business of being super selective about which "official" language features are ok to use for your user base. I can't imagine the business case for anything but write anything, transpile to whatever the lowest common denominator of the day may be. Outside of one-off scripts, I don't imagine many browsers actually will be running any of these new ES features until they become a compile target themselves in 8 years.


One place this works is when your clients are able and willing to use the latest browsers to use your software.

The primary benefit is that I don't have to rely on hacks.

This is how I build the backend to my CMS and my clients are happy to keep their browsers updated. (nearly trivial to do these days).

This also means that as soon as I see a new JS/CSS feature that will eventually become mainstream, I can use it in my admin as soon as both major browsers (Firefox/Chrome) support it. And even sooner if it's not a critical feature. (eg, I can skip adding a feature like "lazy loading" because browsers will have it built in eventually, etc...)

On the public facing front end, that is a different story though. It's motivation to keep things simple.


If you target modern browsers only, and if you're building non-web facing apps, you should be able to... you can absolutely use this stuff today.

Right now, for example, the apps I'm working on require at least async/await support as a minimum test.


I hear you. I look forward to seeing the (support) death of IE11 in 2025 (!!!), which is likely all too close to my own retirement.

Also about the “subset” thing: the last 10 years or so I have been moving away from OOP, and more towards FP, so stuff like class support or Typescript have been a big yawn, anyway.

We had “C with classes” (i.e. C++ as it was known at the time) shoved down our throats in Uni back in the 80s, since garbage collection was impractical. That “wisdom” turned out to be short lived, and thus my migration towards FP (beyond the Lisp I had in an AI class back in the 80s)


Having these features in the standard gives browsers and transpilers a common, well-specified target to work towards.

If you only need to support a subset of browsers you can turn off compilation/polyfills for specific features, which sometimes leads to better performance and smaller bundles.
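
With @babel/preset-env that's mostly a matter of narrowing `targets`; a sketch (exact setup varies by version and whether you keep targets in a browserslist file):

    // babel.config.js (assumed file name)
    module.exports = {
      presets: [
        ['@babel/preset-env', {
          // only transform/polyfill what browsers with native ES module support lack
          targets: { esmodules: true },
          useBuiltIns: 'usage',   // inject core-js polyfills only where referenced
          corejs: 3,
        }],
      ],
    };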

I think of it like TypeScript and Babel running ahead experimenting with new ideas, ES following along turning the good ideas into a spec, and browsers taking up the rear implementing the spec. It’s a pretty decent system.


There are certain sectors in which backwards compatibility is important, but most people developing can confidently use ES2015+ features directly in the client without polyfill.


JS isn't compiled ahead of time, so all the runtime engines (mainly browsers) need to adopt the new features as well. This takes time. These new features are part of the official ES2019 standard, and Babel makes it possible to use these new language features and transpile them to code that all the browsers can run. What's not to love? This is actually a great (if not amazing) thing.


In my experience, you don't really have to target es5 anymore. Most browsers support new features all the way through es2018. My Babel / Webpack config is pretty minimal these days.


Because everything has been moving to evergreen browsers on desktop. If only mobile devices would do the same, you could have a future where you would not have to.


I like TypeScript's (stage-3) approach to feature adoption.


Gating on stage-3 really helps. (I have done the same on projects I work on and it helps) There are times when you need to consider the complexity of the specification and the amount of contention around it (decorators / SIMD for example).

IMO, anything below that is effectively a custom macro anyway (for which you may want to consider sweet.js or babel.macro to make it clear that this may change and help you find places you use the feature). Real-world feedback may change anything from syntax to behavior (`flatMap -> flat`, `Object.observe -> Proxy`, `EventEmitter -> Observable -> Emitter?`, and the it-feels-like-dozens-of-options pipeline syntax)


It wasn't TS's idea, stage 3 exists purely for getting feedback from implementations about implementing and using the feature.


It was TS's 2.0+ idea to stick to Stage 3 like glue. Especially prior to 1.0, but even during 1.x, TS implemented proposals in stages earlier than 3 (or even not yet staged). One of the regrets you sometimes hear from TS devs is that they added decorators way too early (it's still not Stage 3, it's increasingly likely that if it ever hits Stage 3 it will look very different, and it may never hit Stage 3 at all because there is a bunch of opposition). Even though that is behind an "experimental" flag, there's a huge amount of production code written with decorators in TypeScript. (Largely thanks to the Angular ecosystem that became hugely dependent on decorators.)


You don't necessarily need to target ES5.

What percentage of your users is on IE11? Are you making money from them? Can you serve only them a compiled bundle?


Almost all new features can be implemented in ES5, so instead of transpiling you can patch old engines. E.g. Array.prototype.flatmap
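
A rough sketch of that kind of patch (nowhere near spec-complete: no handling of holes, species, or other edge cases):

    if (!Array.prototype.flatMap) {
      // map each element, then flatten the mapped results one level
      Array.prototype.flatMap = function (callback, thisArg) {
        return this.map(callback, thisArg).reduce(function (acc, part) {
          return acc.concat(part);
        }, []);
      };
    }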


That's Array.prototype.flat


> If we still need to target es5 4 years later

You don't. Unless you care about IE11 (for most consumer products and mobile apps isn't necessary) you can use many of the features up through ES2016 or later without transpilation. My business uses JS classes, arrow and async functions, and new prototype methods without issue.


Are you proposing that we just never improve the language?


Now if we could just get pattern matching[1] and optional chaining[2], that would really elevate things.

[1] https://github.com/tc39/proposal-pattern-matching

[2] https://github.com/tc39/proposal-optional-chaining


Pattern matching is a really important feature but I strongly dislike that proposal because it feels like it introduces a bunch of single purpose syntax that's going to restrict the ability to evolve the language in future. It feels like it's an addon, not a holistic solution.

I would much, much rather that type annotation syntax gets standardised first, because it is comparatively easy to build pattern matching when that's in place but going the opposite direction is difficult. What is a type if not a pattern?


It seems that only `when` and `->` are the new syntax, no? The arrow seems like something that could be replaced with `=>`, but anyway, neither seems to be very restrictive with regard to future syntax. (After all, they are only defined in a `case` context, so they can be reused for whatever future purpose outside of it.)

Plus it's a stage 1 proposal, meaning it's far from settled.

Could you link the type annotation proposal? I can't find it.


There is no type annotation proposal.


TypeScript is becoming the de facto type annotations proposal.


Let me know when (type inference of) partial function application and object literals are easy in Typescript and I’ll consider it.


Do you have a specific example of code that's hard to annotate in TypeScript? I've been using it for about a year without major issues (except somewhat slow compile times).



In JS, the pattern is the type. That is the point of duck typing.


> In JS, the pattern is the type.

How do you match against a `Buffer` pattern/type?


In the current proposal,

    case (value) {
        when { constructor: Buffer } ->
            console.log("It's a Buffer!")
    }
That doesn't handle subtyping, though, and doesn't work across realms.


Oh interesting you can match on that!

However I think my point stands that javascript has runtime conceptions of types that go beyond duck typing.


Optional chaining recently reached stage 3, the babel plugin is available and TS is going to adopt it for 3.7.0.
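For anyone who hasn't seen it, the gist (per the stage 3 proposal syntax; today you'd need the Babel plugin or TS 3.7+, and `user` / `onClick` here are just placeholders):

    const street = user?.address?.street; // undefined instead of a TypeError
    const result = onClick?.();           // call only if onClick is not null/undefined

    // roughly the manual equivalent people write today
    const street2 = user && user.address ? user.address.street : undefined;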


Oh, I didn't know TS was so close to having it, that's great news

EDIT: here is the confirmation TS 3.7.0 got tagged: https://github.com/microsoft/TypeScript/issues/16#issuecomme...

EDIT 2: wow, just noticed that the issue ID is "16" and it has been open since Jul 15, 2014 (I guess good things take time ... ;) )


This is awesome and hopefully 2020 bound. The only thing I remember from my few months of looking into Groovy was the "elvis operator" and being very jealous of it.


I adore pattern matching when done well, but I'm sorry -- that syntax seems terrible and confusing. The use of `{ }` seems arbitrary. In JS braces are already used for both scope-delimiting and object literal notation; now they're going to have a third job? There has to be a better way.

I understand it's based on destructuring; the syntax still just doesn't work for me.


OCaml, Standard ML, F#, Erlang, and Rust all use that syntax for pattern matching. What other syntax would you propose to test values in a JS object?


Yeah, very excited about optional chaining, which a few days ago got accepted to stage 3 ("candidate"), so just one more stage to go (stage 4, "finished"). I guess and hope it will be ready for ES2020



Optional chaining is stage 3, and major engines are actively implementing it.


These two and decorators (including function/assignment decorators) are the top 3 on my wishlist.


What's the point of pattern matching? Why not just a switch statement?


Switch pivots off of one value; pattern matching allows you to match on types and destructured values, and to conditionally execute based on the existence (or lack thereof) of those values or types. It's much more flexible than switch (but arguably more complex).

See Scala's[1] or Swift's[2] implementations.

[1] https://docs.scala-lang.org/tour/pattern-matching.html

[2] https://docs.swift.org/swift-book/ReferenceManual/Patterns.h...
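A rough sketch of the difference, using the stage-1 `case`/`when` syntax quoted elsewhere in this thread (the exact syntax is still very much in flux, and `res`, `handleData`, `showNotFound` are just placeholders):

    // switch compares one value against constants
    switch (res.status) {
        case 200: handleData(res.body); break;
        case 404: showNotFound(); break;
    }

    // the proposal matches on shape, destructures, and binds in one step
    case (res) {
        when { status: 200, body } -> handleData(body)
        when { status: 404 } -> showNotFound()
    }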


Waiting for Optional Chaining. It's in Stage 3 already.


And this is why I love Kotlin so much. It has all of these and so much more. Really excited to have .flat() and .flatMap() in ES2019; they're among my favorite array features in Kotlin. Now I just need something similar in Python to avoid having to use itertools.chain or weird double comprehensions.


A year back I dropped a proposal idea on the EcmaScript discussion list; I hope it gets picked up sometime.

My idea is that `let`, `var` and `const` return the value(s) being assigned. Basically I miss being able to declare variables in the condition part of `if` statements that are scoped only to the lifetime of the `if()` block (including `else` blocks).

Something along these lines:

    if( let row = await db.findOne() ) {
         // row available here
    }

    // row does not exist here
The current alternative is to declare the variable outside the `if()` block, but I believe that is inelegant and harder to read, and it also requires you to start renaming variables (ie. row1, row2...) because they leak beyond their intended scope.

As prior art, Golang's:

      if x := foo(); x > 50 {
        // x is here
      } else {
        // x is here too
      }
      
      // x is not scoped here
And Perl's

      if( ( my $x = foo() ) > 50 ) {
           print $x
      }


Also see assignment expressions recently adopted in python 3.8: https://www.python.org/dev/peps/pep-0572/


In Python things are a bit different because declarations and assignments are ambiguous.


They're not ambiguous, assignments are declarations unless the variable was pre-declared (via global or nonlocal).

Assignment was specifically made a statement to avoid the confusion / typo risks of `=` v `==`.


They are not ambiguous if you have the whole file in your head. This is why there are keywords like "global" & "nonlocal".


> They are not ambiguous if you have the whole file in your head.

They're not ambiguous period, python's `=` always performs a local declaration unless overridden via the keywords you mentioned.

> This is why there are keywords like "global" & "nonlocal".

It's the exact opposite of your statement: `global` and `nonlocal` indicate non-local bindings, because by default all bindings are local and you do not need to have "the whole file in your head".


Also its addition to Python was controversial, to say the least. (I have been enjoying using it, however)


I know this doesn't match up entirely with what you want syntax wise, but the following works and I consider it quite elegant.

  {
    let row;
    if (row = await db.findOne()) {
      //
    }
    else {
      //
    }
  }


It's not the same. For instance, you could not use a `const` variable.

Also, having "phantom" scope blocks gets very nasty to read once you have more involved logic, as the block itself has no implied meaning and the programmer has to walk a few lines into it to see what's going on.


You could always make it a labelled block statement !


Seems like pretty trivial bikeshedding. How about just:

    const user = await db.findOne()
    if (user) ... else ...
Typescript can even narrow the type to null vs. User in each branch block.


Would the else branch trigger on a falsy value, or on an exception? It's a bit confusing, as it looks like it would handle exceptions on the else branch, even though we know we have to wrap await in try/catch for that.

Other than saving a few characters (the variable name), I don't see any benefit of this, while it makes code harder to read.

Too bad the `with` [0] keyword has been reserved for crap, it sounds nice (not for this, but maybe for something else).

>> and also requires you to start renaming variables (ie. row1, row2...) due them going over their intended scope

Variable shadowing [1] is a really bad practice that makes it hard for people to collaborate and keep the code sane. Bad habits are not a reason for language changes.

[0] - https://developer.mozilla.org/en-US/docs/Web/JavaScript/Refe...

[1] - https://en.wikipedia.org/wiki/Variable_shadowing


You can also declare variables in conditional statements in C/C++ like https://godbolt.org/z/h0BR8K

  int foo(int);
  int bar(int x) {
    if (int y = foo(x)) return 0;
    return x;
  }


Nice. I like the use of the let/var/const keyword to disambiguate from the accidental `if (x = y)` when intending to use `==`.


Only helps with const. Better use linting to avoid such issues.


> Only helps with const.

Can you elaborate?


If the `x` in the example was a const, it would throw an error because the code attempts to assign the value of `y` to `x`. You cannot assign a new value to a const.


What I'm saying is that you wouldn't accidentally type `let x = y` when you meant to type `x == y`


This has been discussed a few times on IRC, but no one has made a proposal yet afaik.


FWIW Swift lets you do this, and it's great. Guard statements even better.


You can also do this in Ruby, though the scope is different.


In my opinion this is a terrible idea, since it's very easy (I do it all the time) to accidentally write `if (foo = bar)` instead of `if (foo == bar)`. If that were valid syntax it would be a huge footgun. I'd be onboard with it if it required a different syntax.


I'm down if you require a let/var/const in front of it:

    if (foo = bar()) { // syntax error!
    }

    if (let foo = bar()) { // works fine
    }

    if (const foo = bar()) { // also works fine
    }

    if (var foo = bar()) { // also also works fine
    }


> if (foo = bar()) { // syntax error!

That's already an error in strict mode (which would presumably be on for anyone writing bleeding-edge JS).


It's only an error in strict mode if there's no variable in scope named foo.


And what if foo had already been defined in the scope?


The new one shadows it.


Local scope might not be what the programmer wants, though, depending on circumstance. Pseudocode:

  i = 3;
  foo = \0;

  while ( i-- ) {
      if ( foo = resultFromFooApi() ) {
          break;
      }
      sleep 1;
  }

  return foo;


But JS already allows that.


It's great to see JS getting some of the features of better planned languages.

But I'm still very nervous about some of the stuff mentioned here with regard to mutation. Taking Rust and Clojure as references, you always know for sure whether or not a call to e.g. `flat` will result in a mutation.

In JS, because of past experience, I'd never be completely confident that I wasn't mutating something by mistake. I don't know if you could retrofit features like const or mut. But, speaking personally, it might create enough safety-net to consider JS again.

(Maybe I'm missing an obvious feature?)


Mutation is a real weakness of Javascript. I think the general idea is "methods don't mutate unless they are ancient". For example Array.map (an IE9-era feature) doesn't mutate, Array.sort (an IE5.5-era feature) does. Similarly a "for(let i=0; i<arr.length; i++) arr[i] = b" loop will obviously mutate modified elements while a "for(let e of arr) e = b" won't; the trend is towards less mutability in new features.

Proper immutable support (or a stronger concept of const) would also help with this.
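A quick illustration of the split (standard behavior, nothing fancy):

    const a = [3, 1, 2];
    const doubled = a.map(x => x * 2); // new array; a is untouched
    a.sort();                          // mutates in place; a is now [1, 2, 3]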


> for(let e of arr) e = b

Is that just

  arr.map( e => b )

?


It doesn't do anything; it was my attempt to give a simple example that is somewhat obvious while glossing over the complexity:

    let arr = [{a: 1, b: ["a", "b"]}, {a: 9, b: ["a","c"]}];
    let b = "!"

    for(let e of arr) e = b;
    console.log(arr)
[{a: 1, b: ["a", "b"]}, {a: 9, b: ["a","c"]}] (unmodified)

    for(let e of arr) e.a = 2;
    console.log(arr)
[{a: 2, b: ["a", "b"]}, {a: 2, b: ["a","c"]}] (modified)

    for(let e of arr) {let copy = {...e}; copy.a = 4;}
    console.log(arr)
    
[{a: 2, b: ["a", "b"]}, {a: 2, b: ["a","c"]}] (unmodified)

    for(let e of arr) {let copy = {...e}; copy.b[0] = "!";}
    console.log(arr)
    
[{a: 2, b: ["!", "b"]}, {a: 2, b: ["!","c"]}] (modified)

The most frustrating thing about all of this is that the best way to make a deep copy to avoid all unwanted modification is JSON.parse(JSON.stringify(arr))


Thanks for sharing more context. I see what you were trying to illustrate with your earlier examples now.

Why not make a little library so you can do these kinds of things safely without reimplementing them every project? Maybe call it safeCopy or something.


...honestly, I don't see much complexity here. Understanding of reference types and the difference between deep copy and shallow copy makes it pretty straightforward - the result is the same as it would be in python or java.


I don't think Python or Java are examples of good or simple behavior here. Java at least has a strong culture of "copy everything or make it immutable at class boundaries" while javascript libraries often leave you guessing.

Examples where this is easy are C, where every copy and allocation is very explicit, C++, which has a good `const` concept that's actually useful, or Rust with its ownership concept that makes mutability very obvious.


Providing it was syntactically correct, the first call would assign b to the loop variable e for each iteration (pretty pointless).

The arr.map, providing you had a variable on the left side, would output the result of each iteration into said array (so an array containing all b values) - however I guess you meant arr.map(e => e)


> Providing It was syntactically correct

Providing arr is iterable (e.g. an array) it's perfectly valid js. Your linter might scream at you to add braces and a semicolon, but neither is needed for correctness here.


Nope. I meant arr.map(e=>b) .

I try to reimplement pointless code, I get more pointless code.


Nope. The first does not do anything. The second makes a new array of length 'arr.length' with 'b' in every cell


No, it's broken code and doesn't effectively do anything.


TypeScript readonly properties and types are amazing for this: ReadonlyMap<K, V>, ReadonlyArray<T>, Readonly<T>.


Yes, this is a real problem and I've been bitten by it more times than I can count. Now I always keep this handy website ready: https://doesitmutate.xyz/


TypeScript does a pretty good job here if you're willing to add a bit of extra syntax:

  const a = [1,2,3]
  a.push(4)

  const b: readonly number[] = [1,2,3]
  b.push(4) // Property 'push' does not exist on type 'readonly number[]'.


Well, Object.freeze in plain JS can help too.

    > Object.freeze([1,2,3]).push(4)
    TypeError: can't define array index property past the end of an array with non-writable length (firefox)
    Uncaught TypeError: Cannot add property 3, object is not extensible (chrome)
Of course, it will only blow up at runtime. But better than not blowing up at all, creating heisenbugs and such.

I often find myself writing classes where the last step of a constructor is to Object.freeze (or at least Object.seal) itself.


For what it's worth, there are only two, maybe three methods in that entire list that mutate where it's not obvious: sort, reverse, and (maybe) splice. All the other methods (like push, pop, fill, etc) are methods whose entire purpose is to mutate the array.


That was my first impression. But then the same logic applies to concat. ("I want to add another array").


Sometimes I don’t. Actually usually I don’t, I do a lot of [].concat(a, b).


Thanks for sharing.

I think this is the kind of thing you just have to learn when you use any language. But when you're switching between half a dozen, being able to rely on consistent founding design principles really makes things easier. And when there aren't any, this kind of guide helps.


I really like Python's design here, if we are not talking about full on language features to enforce immutability, where functions that mutate never return a value and are named with verbs (e.g. 'sort()'), while functions that don't mutate return their value and are named with adjectives (e.g. 'sorted()'). This feels natural - mutations are actions, while pure functions are descriptions.

The only real downside is the lack of return values mean you can't chain mutations, but personally that never bothered me.


i used to like that distinction as well but verbs are too useful to let mutating stuff use them all up! and pure functions are actions as well, they just result in a new thing. also, some verbs sound awkward adjectified: take, drop, show, go, get ...


That sounds pretty reasonable. I can see the case for mutation support, but the unpredictable nature of it is what is frustrating and dangerous.


Coming from PHP, we’re used to it. Half the methods have $haystack, $needle, and the other half use them in the other order.


I feel a better form for this site would be:

Mutates: push, pop, shift, unshift, splice, reverse, sort, copyWithin

Does Not Mutate: everything else


At the same time, all this copying leads to an immense amount of garbage, which can really slow apps down with GC pauses. I really wished JavaScript had support for true immutable structures (a la Immutable.js) since these things do add up.

In my side project, which is a high performance web app, I was able to get an extra ~20fps by virtually removing all garbage created each frame. And there's a lot of ways to accidentally create garbage.

Prime example is the Iterator protocol, which creates a new object with two keys for every step of the iteration. Changing one for loop from for...of back to old-style made GC pauses happen about half as much. But you can't iterate over a Map or Set without the Iterator protocol, so now all my data structures are hand-built, or simply Arrays.

I would like to see new language features be designed with a GC cost model that isn't "GC is free!" But I doubt that JavaScript is designed for me and my sensibilities....
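For illustration, this is the kind of rewrite being described; whether the allocation is actually observable depends on the engine and how well it optimizes iterators (`arr` and `handle` are placeholders):

    // goes through the iterator protocol: conceptually one { value, done }
    // result object per step
    for (const x of arr) {
      handle(x);
    }

    // classic indexed loop: no per-step allocations
    for (let i = 0; i < arr.length; i++) {
      handle(arr[i]);
    }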


Does shallow copying have the same issues? For example, `let foo = { x: 1, ...bar }` just makes a new object with references to bar's members.


Shallow copying will create a new object, and thus, some (small) amount of GC garbage. Less than a deep copy, for sure, which means less frequent pauses, but still garbage to clean up nonetheless.


You could always look at ClojureScript. I know I am.

  Array.flat() => flatten
  Array.flatMap() => mapcat
  String.trimLeft() => triml, trimr 

Symbols are great but they’re much more useful when you can write them as (optionally namespaced) literals, which are much faster to work with:

  (= :my-key :your-key) ;; false
  (= :my-key :my-key) ;; true

Object.entries() and Object.fromEntries() are both covered by (into). You can use (map) and other collection-oriented functions directly with a hashmap, it will be converted to a vector of [k v] pairs for you. (into {} your-vector) will turn it back into a new hashmap.

And...all of these things were already in clojurescript when it was launched back in 2013! Plus efficient immutability by default, it’ll run on IE6, and the syntax is now way more uniform than JS. I’m itching to use it professionally.


I really like the lisp convention of using !'s to indicate mutation. Like set-car! .

In javascript you kind of have to reason backwards and declare your variables as immutable (const). Though there are still some bugaboos; object fields can still be overwritten even if the object was declared with const.


const only means the variable itself can't be reassigned though, and really the main complaint about mutation comes from Array methods. Like Array.pop will mutate the array, and you have to use Array.slice for the last item instead if you want to keep your array.


JS already has immutable objects with `Object.freeze()`.

Personally I just use TypeScript which can enforce not mutating at compile time (for the most part).


Thanks. But can I then add and remove items from an immutable object to create new objects?

Part of the immutable value proposition is being able to work with the objects. Based on [0], freezing feels more like constant than immutable. And the 'frozenness' isn't communicated through the language - I could be passed a frozen or unfrozen object and I wouldn't know without inspecting it.

And freeze isn't recursive against the entire object graph, meaning the nature of freezing is entirely dependent on the implementation of that object.

I really like the language-level expression and type checking of Rust. But it does require intentional language design.

I'm not criticising JS (though I think there are plenty of far better langauges). Just saying that calling `freeze` 'immutable' isn't the full story.

[0] https://developer.mozilla.org/en-US/docs/Web/JavaScript/Refe...


> But can I then add and remove items from an immutable object to create new objects?

Yes, although the new object is not frozen by default. Adding is quite straightforward, especially with the spread syntax

    let x = { a: 1, b: 2 };
    Object.freeze(x)
    let y = {...x, b: 3}
    // y == { a: 1, b: 3 }
Removing is less intuitive

    let x = { a: 1, b: 2 };
    Object.freeze(x)
    let { b, ...y} = x;
    // y == { a: 1 }

> And the 'frozenness' isn't communicated through the language

Yes, but given that JS is a dynamic language I wouldn't expect anything different (everything must be inspected at runtime).

> And freeze isn't recursive against the entire object graph

You're right, although one could quickly implement a recursive version.

In any case I find Object.freeze not very useful, since trying to mutate a frozen object simply ignores the operation (at least outside strict mode); I think that most of the time trying to do that should be considered an error, and I would prefer to have an exception raised.


Object.freeze is kind of constant, but you can still easily copy and work with the objects if you need to, for example, the following is valid for most objects:

    const foo = Object.freeze({ a: 1, b: 2 })
    const fooCopy = { ...foo }
And you are right that Object.freeze doesn't work recursively (although making it work recursively is fairly easy to implement yourself if you use it a lot).

But like it or not JS isn't a language with a powerful type system, and it doesn't pretend to have one so knocking it for that is like knocking Python for using whitespace, or knocking Rust for needing a compiler.

Luckily, Typescript and Flow have most of what you are asking for, and they work pretty damn well across the entire ecosystem.

Off the top of my head, I know typescript has the ability to mark things as read-only even at the individual property level. [1] And they have tons of the type checking nice-ness that you can expect from other "well typed" languages like Rust.

[1] https://basarat.gitbooks.io/typescript/docs/types/readonly.h...


Have you looked at Immer.js? It allows you to express modifications to immutable objects as a series of imperative operations.

In my experience most "immutability" in JS is enforced by convention or, at best, static type systems. It's not ideal, but it works.
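A small sketch of what that looks like with Immer's `produce` (from memory, so double-check against the docs; the data is made up):

    import produce from "immer";

    const base = { todos: [{ text: "write", done: false }] };

    const next = produce(base, draft => {
      // reads like mutation, but base is left untouched
      draft.todos[0].done = true;
      draft.todos.push({ text: "ship", done: false });
    });

    base.todos[0].done; // false - original unchanged
    next.todos[0].done; // true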


Lenses in ramda work well, too, if you don't mind being functional in your js code.

I suppose that still only fits in the "immutable by convention" category, though.


In other words you’re concerned that some Array methods mutate the array (push, pop) and some don’t (map, concat)?

If so then yeah, that can be annoying and/or confusing.


For me the worst is slice/splice.


Kind of stupid, but I imagine the 'p' in 'splice' being an axe that chops the array :D Works for me...


I think OP was saying slice returns a new copy while splice mutates in place.
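Concretely:

    const a = [1, 2, 3, 4];
    const b = a.slice(1, 3);  // b is [2, 3]; a is still [1, 2, 3, 4]
    const c = a.splice(1, 2); // c is [2, 3]; a is now [1, 4]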


Yes, it stems from arrays. But extends from there to any collection, built in or custom. Kind of bridges the built in type system but it's really about the expressivity of the language.

It becomes especially important in React where you share objects up and down an immutable structure of objects.


Yep. If you've bitten the apple of JavaScript tooling you can kinda sorta rig up compiler-enforced immutable data structures with TypeScript. But IMO if you're going that far it's much easier/well-documented to just use Elm or something.


If this is a major concern for you, you might want to use something like immutableJS[0]. More often than not I’ve found it unnecessary except in very contained parts of a large app, but in case it’s helpful I wanted to point it out.

[0] - https://github.com/immutable-js/immutable-js


Another one that I personally prefer is Immer.js[0]

[0] - https://github.com/immerjs/immer


I really love the Ruby naming conventions around this: `!` indicates mutation.

  2.4.1 :001 > a = [4,3,5,1,2]
   => [4, 3, 5, 1, 2]
  2.4.1 :002 > a.sort
   => [1, 2, 3, 4, 5]
  2.4.1 :003 > a
   => [4, 3, 5, 1, 2]
  2.4.1 :004 > a.sort!
   => [1, 2, 3, 4, 5]
  2.4.1 :005 > a
   => [1, 2, 3, 4, 5]


A method named "method!" means that it's a somehow "unsafe" version of the method "method". A lot of the time it means "destructive version," but if there's no non-destructive version, the destructive one won't have a ! (eg, Array#shift), and sometimes ! means something else (eg, Kernel#exit! is like Kernel#exit, but doesn't run any at_exit code).


Always try to limit the scope of variables. Use local scope. And also functions within functions. The nice part when limiting scope is that you never have to scroll or look elsewhere in order to understand what a piece of code does.


I still don't understand why neither JS's var nor let allows you to redefine a variable with the same name.

It makes chaining things while debugging so much harder:

  let a = a.project();
  let a = debug(a);
  let a = a.eject();
vs

  let a1 = a.project();
  let a1d = debug(a1);
  let a2 = a1d.eject();


I always assumed it was to protect against accidental naming errors, confusion about what a declaration is, and copy/paste issues. When I first started writing Rust and saw shadowing was a thing I thought it was a terrible idea. I'm more open towards it now; the strong static analysis Rust does helps, and it can improve code quality if used in small amounts. However, it can still be quite confusing.

Given that JS doesn't restrict the type of a declaration you can just assign a new value to it, place it in a small scope or use a chain.


>> However, it still can be quite confusing.

I don't know - it's never confusing to me. I just use the IDE that allows me to view the types of the variables whenever I need to see them.

IDE also highlights the definitions and then the usages of the variable, including the syntax scope where it's used.

You're definitely using the wrong tools for the job if you get confused with that little detail.

>> Given that JS doesn't restrict the type of a declaration you can just assign a new value to it, place it in a small scope or use a chain.

Yeah, but I don't want to semantically assign a new value to the variable. I want this to be a new variable, because it is a new variable.

So the point of var is slightly exaggerated, because they could have gone the Python way and simply allowed any assignment to also act as a declaration.


var and let tell the compiler the scope in which it should declare the variable (oversimplifying). If var or let are not present, the variable is declared in the global scope; unless you're in strict-mode, then you get a scolding.

So var and let are only tangential to the whole declaration process and only indicate the scope in which the variable is bound.

I feel confused. Why do you want your assignment statements to be prefixed with var's and let's?


If you put var in front you don't have to worry about reassigning a variable from a parent scope. You also make assignment and comparison semantically different, for example `var foo = 1;` vs `foo = 1` vs `if (foo = 1)`, and it's thus easier to spot bugs and understand the code.


What IDE (+plugins?) are you using ?


    let a = a.project();
    a = debug(a)
    a = a.eject();
This is perfectly legal.


So what's the point of having let, then?


let declares a block-scoped variable, while var declares a function-scoped variable.

Limiting a variable's scope can help avoid subtle and potentially annoying errors. For example, if you use a let variable inside an if block, it'll only be accessible by code inside the block. If you use var inside an if block, your variable will be visible to the entire function.
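For example:

    function demo() {
      if (true) {
        var a = 1; // function-scoped
        let b = 2; // block-scoped
      }
      console.log(a); // 1
      console.log(b); // ReferenceError: b is not defined
    }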


The truth is that if you need to declare a variable outside your current scope you should probably declare it in the outside scope in the first place.

In the scenario with var/let I need to grok the code in order to tell which variables are visible in my current scope.


I tend to mainly use const, and let only when necessary. I never use var and I set the linter to scream at me about it. IE11 supports const/let, so unless you develop an application for IE compatibility mode (my condolences) I don't see a reason to use var. On the other hand, var mainly bites you when declaring closures inside loops; also, hoisting is just plain weird.


I find hoisting a convenient feature, as I can declare the variable close to where it's used. It means I do not have to break the flow of how the code reads, making the code easier to understand and less bug prone. Example:

    if(something) var foo = 1;


So how do you maintain the invariant that this variable is used only if <something> is true _down the line_?


It's only logical that it will be undefined if it's never assigned. With var you can just declare anywhere, while with let it feels like a chore when you have to declare in a parent block, eg. outside one if-block for the variable to be accessible within another if-block. Lexical function scope works very well in an async language with first-class functions: you deal mostly with functions, which can access their closures at any time, so it's logical that the function should define the scope, not if-statements or for-loops.


let also does some magic in for-loops, creating a new variable for each iteration, basically creating a closure.

It also throws an error if it's used before it's declared.

Basically, let fixes some minor issues that hard-bitten JavaScript developers have learned to avoid.
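The loop case in particular:

    const fns = [];
    for (var i = 0; i < 3; i++) fns.push(() => i);
    fns.map(f => f()); // [3, 3, 3] - every closure shares the same i

    const gns = [];
    for (let j = 0; j < 3; j++) gns.push(() => j);
    gns.map(g => g()); // [0, 1, 2] - a fresh j per iteration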


block scope


Without getting into the merits of allowing redefines in a loosely typed language, the simple reason why JS can't support this is hoisting.

Any var/let statement of the form var a = 1; is interpreted as 2 statements: (1) the declaration of the variable, which is hoisted to the beginning of the variable's scope, and (2) the setting of the value, which happens at the location of the var statement.

Having multiple let statements would mean the same variable is declared and hoisted to the same location multiple times. So it's basically unnecessary and breaks hoisting semantics.

In addition, the downside risk of accidentally redefining a variable is probably far greater than the semantic benefits of making the redefinition clear to a reader (esp since I think that benefit is extremely limited in a loosely typed language like JS anyways).
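To make the hoisting point concrete:

    console.log(x); // undefined - the declaration is hoisted, the assignment is not
    var x = 1;

    console.log(y); // ReferenceError - y is in the temporal dead zone
    let y = 2;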


That's not quite accurate. It's actually the reverse.

Think of the closure as an object. It contains variables like `this`, `arguments`, a pointer to the parent closure, all your variables, etc.

The interpreter needs to create this closure object BEFORE it runs the function. Before the function can run, it has to be parsed. It looks for any parameters, `var` statements, and function statements. These are all added to the list of properties in the object with a value of `undefined`. If you have `var foo` twice, it only creates one property with that name.

Now when it runs, it just ignores any `var` statements and instead, it looks up the value in the object. If it's not there, then it looks in the parent closure and throws an error if it reaches the top closure and doesn't find a property with that name. Since all the variables were assigned `undefined` beforehand, a lookup always returns the correct value.

`let` wrecks this simple strategy. When you're creating the closure, you have to specify if a variable belongs in the `var` group or in the `let` group. If it is in the `let` group, it isn't given a default value of `undefined`. Because of the TDZ (temporal dead zone), it is instead given a pseudo "really undefined" placeholder value.

When your function runs and comes across a variable in the let group, it must do a couple checks.

Case 1: we have a `let` statement. Re-assign all the given variables to their assigned value or to `undefined` if no value is given.

Case 2: we have an assignment statement. Check if we are "really undefined" or if we have an actual value. If "really undefined", then we must throw an error that we used before assignment. Otherwise, assign the variable the given value;

Case 3: We are accessing a variable. Check if we are "really undefined" and throw if we are. Otherwise, return the value.

To my knowledge, there's no technical reason for implementing the rule of only one declaration aside from forcing some idea of purity. The biggest general downside of `let` IMO is that you must do extra checks and branches every time you access a variable else have the JIT generate another code path (both of which are less efficient).


We are on the same page.

My point is having 2 let or var statements doesn't actually do anything on the interpreter side.

If JS allowed 2 var/lets without complaining, it would be entirely a social convention as to what that meant, since it would have no effect on the actual code that was run.

And the social convention benefit (which could more easily be achieved with just putting a comment at the end) is probably far outweighed by the many real examples I've seen where someone has accidentally created a new variable without realizing that variable already exists in scope.

Disallowing multiple vars helps linters identify these situations (which are far more common with var's function level scoping than let's block level scoping).


This human gets it. I'd like to add that if you would like to be clear about what your assignment is doing, put some comments in there.


Another disadvantage of reusing the same variable name is the order of the statements is usually important and it isn't always obvious how, especially when you do this in longer functions.

When you're refactoring, you then have to be much more careful when moving lines of code around. With unique names, you get more of a safety net (including compile-time errors if you're using something like TypeScript).


If you use let it means you intend to use that binding "later", up to the end of the scope, and within that scope the value shall not change: that's the whole point.

If you want a variable you can assign successive different values to, it's an entirely different thing, and there have always been var and the assignment operator for that.


>> within that scope the value shall not change: that's the Whole point.

That's pure BS. This is only true for atomic (primitive) values; the contents can change (under let, var and const), as we can easily see with Array.push, for example.


I think you mean `const`. `let` can be reassigned.


What’s wrong with

  var a = ...;
  a = a.project();
  a = debug(a);
  a = a.eject();


That can't show the programmer's intent: whether it is mutation (e.g. a = a + someNum) versus defining a new variable with a different type (but a similar meaning, so the same name) (e.g. someKindOfData = [...someKindOfData]).

Rust allows this, and it really clears code up. I don't have to make up different identifiers for the same data in different representations. (e.g. I would do the above code in JS as... someKindOfDataAsArray = [...someKindOfObjectAsNodeList])


    let projected = a.project();
    let debugged = debug(projected);
    let ejected = debugged.eject();


And you need to change 3 lines in total (also you need to make changes mid-lines, too) in order to simply view the data in between, vs only 1 line.

And by the way - if you paid attention in the first place my post actually has exactly what you've just written.


Meaningful names? I don't think so.


And what is that meaning for if the functions you are calling have proper names?


That array.flat() and array.flatMap() stuff is great to see. Always having to rely on lodash and friends to do that type of work. Exciting to see how JS is evolving.


I dig it too, but you could previously flatten an array with concat and the spread operator: [].concat(...array)

Lodash wasn't necessary.


No this is not an alternative, it will fail if the array is too large, as you will exceed the maximum number of arguments a function will accept (which is implementation defined).

In general the spread operator should only be used for forwarding arguments not for array operations.
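For example (the exact limit and error vary by engine; the sizes here are just illustrative):

    const big = Array.from({ length: 500000 }, () => [1]);

    [].concat(...big); // every element becomes a separate function argument;
                       // may throw a RangeError / stack overflow
    big.flat();        // fine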


> In general the spread operator should only be used for forwarding arguments not for array operations.

Not quite. You should also use the spread operator when you are spreading an iterable into an array:

    const unique = [...new Set(arr)];


Seems like a needlessly unreadable alternative to Array.from, unless you're combining multiple iterables or values and iterables, e.g.

    const unique = [...a, ...b];
You might expect that concat would work, but it doesn't handle arbitrary iterables:

    > [].concat([1][Symbol.iterator](), [2][Symbol.iterator]())
    [Array Iterator, Array Iterator]
    > [...[1][Symbol.iterator](), ...[2][Symbol.iterator]()]
    [1, 2]


Yeah but that reads so gross. Look at it. It’s actually painful.

Much rather have the magic word “flat”


Heh, didn't know that. Thanks for the tip :)


Don't do that, it won't work for arrays greater than a certain size


interviewers furiously searching for a new question


I wonder if there are enough use-cases for .flat to not default to Infinity.

It's also confusing that `arr.flatMap()` is not equivalent to `arr.map().flat()`, but to `arr.map().flat(Infinity)`


According to MDN `arr.flatMap()` is indeed equivalent to `arr.map().flat()` (without the Infinity). [1] Testing in the Chrome Devtools it also seems to be the case:

  x = [[[1, 2]], [[2, 3]], [[3, 4]]]
  x.flatMap(x=>x)
  output: [[1,2], [2,3], [3,4]]
  x.map(x=>x).flat()
  output: [[1,2], [2,3], [3,4]]
  x.map(x=>x).flat(Infinity)
  output: [1, 2, 2, 3, 3, 4]
[1]: https://developer.mozilla.org/en-US/docs/Web/JavaScript/Refe...

> It is identical to a map() followed by a flat() of depth 1, but flatMap() is often quite useful, as merging both into one method is slightly more efficient


Is there any reason it isn't called mapFlat? FlatMap indicates to me that there's a flattening, then a mapping, not the other way around.


Ah, I missed that. I though flatMap had depth of Infinity.


Infinity was originally the default (which feels intuitive), but here's the argument for changing it [0]:

"I think Array#flatten should be shallow by default because it makes sense to do less work by default, it aligns with existing APIs like the DOM Node#cloneNode which is shallow by default, and it would align with the existing ES3 pattern of using Array#concat for a shallow flatten. Shallow by default would also align with flatMap too."

[0] https://github.com/tc39/proposal-flatMap/issues/9


flatMap always seems so unintuitive to me for some reason. I messed with it a bunch in C#'s LINQ and RxJS and I already forgot the purpose. Not sure what it is that makes it so unnatural for me.


Imagine we have a list of some values and map a function that returns a list of some other type. In other words, for every value in the list, we replace it with the results of calling our function, which returns a list. We have introduced nesting and now are operating on a "list of lists of values" rather than a list of values.

However, generally you don't want to operate on a list of lists and are trying to process each value one by one -- the nesting doesn't add anything. In this case, we use flatMap, which "flattens" or concatenates the interior lists so we can operate on them like it's just a big stream of values.

This is also the case for another type like `Optional`, which represents either a value `T` or the absence of a value. An optional can be "mapped" so that a function is applied only if there is a value `T` present. flatMap works the same way here, where if you want to call another method that also produces an `Optional`, flatMap will "unwrap" the optional since you never really want to work with the type `Optional<Optional<T>>`.


FWIW, I called the same function 'gather' in some of my code. You're producing lists of results for each member, and gathering them all together in order.

'map' as a function name isn't great either, since we have the same name for a data structure. What it has in its favor is being short and traditional.


JS map = C# Select

JS flatMap = C# SelectMany


it should be faster than lodash too, i should hope


I read a similar article a few days ago (might be interesting too) (not mine): https://medium.com/@selvaganesh93/javascript-whats-new-in-ec...

And also here is a good recap of ES 6/7/8/9 (just in case you missed something) (also not mine): https://medium.com/@madasamy/javascript-brief-history-and-ec...


Object.fromEntries will be super useful, surprised it’s taken this long to become a native feature.


Really though, don't know why this isn't the most mentioned feature in this thread, processing objects in JS has always been annoying.


What would some common/helpful use cases be?


I often use Object.entries so that I can use Array.filter/map/foreach on an object, but then I need to use Array.reduce to hack it back into an object. Object.fromEntries solves this.
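For example:

    const prices = { apple: 1, pear: 2, melon: 6 };

    // before: entries -> filter -> reduce back into an object
    const cheapOld = Object.entries(prices)
      .filter(([, price]) => price < 5)
      .reduce((acc, [name, price]) => ({ ...acc, [name]: price }), {});

    // ES2019: fromEntries closes the loop
    const cheap = Object.fromEntries(
      Object.entries(prices).filter(([, price]) => price < 5)
    );
    // { apple: 1, pear: 2 }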


Yeah, this is what I'm most excited for. filter and map are my preferred array traversal methods. reduce can be awkward though, and people abuse it by using it to replace or combine filter and map. fromEntries almost makes reduce unnecessary when working with objects.


Functional transformation of objects. There are lots of HOFs working on arrays / iterators but none working on objects. entries/fromEntries allow easily converting from an object to entries, manipulating the entries sequence, and converting back to an object.

The last step is pretty annoying without it.

The entire thing is very common in Python, where Object.entries() is spelled `.items()` and `Object.fromEntries(…)` is spelled `dict(…)`


It's a lot more verbose than _.mapValues() is, but it's nice to have a relatively simple solution without using a library.


Why did they create a flatMap method? What is wrong with .map(...).flat()? Can they improve the performance by combining it that much?


Let me preface this with acknowledging that you're entirely correct that you don't really need flatMap. The following is just some background that might explain why it was included.

If you're familiar with C#'s LINQ and its reliance on SelectMany, it's somewhat easier to see the significance.

In C#'s LINQ you might write something like:

    from host in sources
    from value in fetch_data_from(host)
    select create_record(value, host)
with flatmap (and some abuse of notation) you can more easily implement this as:

    sources.flatMap( host => fetch_data_from(host)
           .flatMap( value => create_record(value, host)) )
If you dig even further you'll find that what makes this powerful is that the flatMap, together with the function x => [x], turns arrays into a Monad. The separate functions map and flat also work, but this adds more conditions. Haskell folks tend to prefer flatMap because most of the conditions for a Monad can be encoded in its type signature (except [x].flatMap(x => x) == x, but that one is easy enough to check).


You could optimise either, I don't think that's the point. It's just a convenience that more clearly expresses the intent of the code where it's used. Imagine an example where the callback to map is quite long; seeing the flatMap identifier alerts you to the fact that the callback returns arrays, even before you start reading.

You'll find equivalents in all the JS utility libraries and most functional programming language standard libraries (and languages like Ruby with functional-ish subsets), so there's a lot of evidence that people who write code in that style like to have such a function available.


They could have also implemented flat in terms of flatMap: flatMap(x => x)

I personally feel flatMap is a much more used method than flat, so if you want to remove one, I would remove flat.


> They could have also implemented flat in terms of flatMap: flatMap(x => x)

Flat can flatten any level of nesting (it just defaults to 1), so would be difficult to implement in terms of flatMap.


You could reproduce that behaviour of 'flat' by doing something like:

    function flatten(x, n = 1) {
        return n > 0 ? x.flatMap(y => Array.isArray(y) ? flatten(y, n - 1) : y)
                     : x;
    }


You could also blow up your stack as there is no requirement whatsoever that javascript implementations be tail-recursive.


It is very common for people to implement flatmap themselves or get it from a library of higher-order functions. So now people can use flatMap without doing those two things.


And more importantly what is essentially verbatim in your question:

Why is flatMap = map().flat() and not flat().map()


Because the concept originated in languages where function composition is what's emphasized, rather than method chaining.

  flatmap = flat ∘ map


arr.flat(Infinity) seems like a strange decision for flattening the entire array - wouldn't the most common number of levels to flatten an array be all levels, in which case I'd expect arr.flat() to flatten the whole array but in this case it's just 1 level.
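For reference:

    const nested = [1, [2, [3, [4]]]];
    nested.flat();         // [1, 2, [3, [4]]] - one level by default
    nested.flat(2);        // [1, 2, 3, [4]]
    nested.flat(Infinity); // [1, 2, 3, 4]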


Infinity was originally the default (which feels intuitive), but here's the argument for changing it [0]:

"I think Array#flatten should be shallow by default because it makes sense to do less work by default, it aligns with existing APIs like the DOM Node#cloneNode which is shallow by default, and it would align with the existing ES3 pattern of using Array#concat for a shallow flatten. Shallow by default would also align with flatMap too."

[0] https://github.com/tc39/proposal-flatMap/issues/9


I’m trying to think of a single use case where I’d want to flatten exactly one level of an array and can’t.


If your array is contained in itself, flat(Infinity) gives a stack overflow. My guess is they wanted the default behavior to be safer/saner.


Finally, flatMap is here.

I really hope there could be syntactic sugar for flatMap, like do-notation in Haskell, for-comprehensions in Scala, and LINQ in C#, instead of a type-limited version like async/await.

Another thing: the pipe operator seems to be very welcome among the proposals. There would be no more awkward .pipe(map(f), tap(g)) in RxJS then.


Some half decent stuff in here, but I heavily disagree with changing the output of toString. That might cause problems if someone is expecting one output, but the new version creates something new. I don't see a reason why they couldn't have just added a new function functionCode() or something similar. It would give people the functionality they want, without destroying backwards compatibility.


It's already been deployed in the wild for about a year.


Can someone please point me to the rationale behind the new toString() function?


Rather than how complete it is, the real improvements of the proposal are:

1. it all but requires that ES-defined functions stringify to their source code. Pre-ES2019 that's implementation-defined

2. it standardises the placeholder for the case where toString can't or won't produce ECMAScript code (e.g. host functions). This could otherwise be an issue: with implementation-defined placeholders, subsequent updates to the standard might make a placeholder unexpectedly syntactically valid, whereas with a standard placeholder future proposals can easily avoid making it valid

3. the stringification should be cross-platform as the algorithm is standardised
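In practice, the behavior being standardised looks roughly like this (modulo exact whitespace in the first output):

    function add(a, b) {
      // comments are now part of the required output
      return a + b;
    }
    add.toString();
    // "function add(a, b) {\n  // comments are now part of the required output\n  return a + b;\n}"

    Math.max.toString();
    // "function max() { [native code] }" - the standardised placeholder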


Ok that makes sense... I was really confused because that's already more or less toString()'s behavior on modern browsers, though the whitespace discrepancy is important to define.


That's in part because it was fed back through browsers during the specification process.


If you are interested in all the gory details, take a look at the TC-39 proposal (especially the Goals section) and associated GitHub issues:

https://tc39.es/Function-prototype-toString-revision/

https://github.com/tc39/Function-prototype-toString-revision...


I don't know the answer to your question, but I would like to add to it to gain some clarity for myself: why not give the caller the option to have comments eliminated? An optional parameter `includeComments` bool with default `false` would provide backward compatibility while allowing those who need the comments to request them.


That’s a surprisingly small amount of change. I’ll leave it to others to determine if that’s a good or bad thing.


Very good!

Personally, I am not a fan of languages growing. I think C is awesome, because everyone can understand the code, and doesn’t have to be a language lawyer like with C++. Concepts, Lambdas, crazy Template preprocessing, and more. The team can just work, pick up any module and read it without magic.

In C++ I am not even sure if a copy constructor would run vs an overloaded = operator without looking it up.


Clojure didn't change a lot either. In these years of 'disruption' it feels odd. But it may just be that we forgot what stability and maturity look like.


Existing Clojure APIs don't change, but Clojure adds a lot more than this each year. Look at spec, reducers, transducers &c


Well, reducers and transducers were big announcements. IIRC Clojure 1.10 didn't add any feature of that scale, and a few people mentioned that it was quite a "smaller" release, without criticizing. spec seems important but is still in alpha.


Technically most of this is not even language changes but simple changes to the standard APIs. E.g. flatMap is something I've missed and worked around by implementing it myself a few times. Not a big deal, and nice that they added it. In any case, I use TypeScript by default now and have converted most of the code I care about at this point. I think most of this improves TypeScript as well, so overall it's a good thing.


That's true for most of ECMAScript's history - they tend to introduce convenience methods that people needed, built, and relied on.

How much of ES5.5+ was guided by jQuery?


Isn't the new function string representation backwards incompatible? Having struggled with javascript's lame error tooling I could see people actually using it in production too.


Why are empty elements in an array allowed? oO

[1,2,,3]


They need to ensure that any combination of characters will produce some result.


Quite useful in situations such as:

> (str.match(regexWithGroup) || [, null])[1]

I.e. if the regex matches, then give me the first group (1st index) otherwise give me null.


Because you can create that array anyway, like so:

  var arr = [];
  arr[0] = 1;
  arr[1] = 2;
  arr[3] = 3;
so there's not really much downside to also allowing a literal syntax for the same thing.


The part about parameterless catch reveals a lot about the philosophy of the language. For me, silencing errors like this is a bad practice. You may still produce a sane error in the catch, but the design leans toward silencing things.

I really love languages that force you to handle errors up to the top level.


It does, though I wouldn't come to the same conclusion. It gives more freedom to the developer. There are a lot of cases where errors are, in fact, expected. Parsing user-inputted JSON. Making a request to check internet connection availability. Any promise that is going to be consumed by an async/await pattern. In those cases (and many more), you may want to try something simply to see if it works. That's completely different from, say, being on the backend trying to write to a database which may or may not fail.

In those cases, forcing the extra parameter in the catch, even though you are not using it, is slightly annoying. I mean, it's literally 3 characters, but in this age of linters encouraging you not to declare arguments you don't use, it just feels unnatural.
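The JSON-probing case is the canonical example of the ES2019 optional catch binding:

    function isValidJSON(text) {
      try {
        JSON.parse(text);
        return true;
      } catch { // no unused (err) parameter for the linter to complain about
        return false;
      }
    }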


The problem is that the catch block will catch all errors, so if a different type of error to the one you expected occurs then it gets silently swallowed. This is the equivalent of catching the generic Exception class in python. Horrendous terrible practice in anything but the hackiest scripts.


Why would you ever want to ignore the error parameter?

I really hate when a system tells me "Unknown error occurred" or "Either this or that happened" because the software doesn't care to be specific with the errors.

You should at least log the error message, not ignore it.


I see what you're saying - for me, my experience with Java checked exceptions put me off the idea.

In Java it led to lots of exception wrapping and leaky abstractions.

Not sure what the answer is - although my golang experience was better.


The answer is one of two possibilities: either make the IDE a little dumber and stop highlighting uncaught exceptions, or make the IDE a lot smarter and make it highlight when you catch an exception but don't do anything useful with it.

Java programmers need to be comfortable letting exceptions have the default behavior until they're sure they have a better idea. Declaring throws is usually enough.


My current best guess is Java's checked exception silliness comes from the enterprise framework culture. Too much abstraction, architecture, cleverness, turgidity. I joke that Spring is an Exception obfuscation framework.

I've always really liked checked exceptions in my own designs. Though I'm not crazy about the syntax.


Java blew it on generic exceptions.

  stream.map(f).collect(...)
should be able to throw anything f can throw, but instead f has to wrap everything, which makes people give up and stop declaring checked exceptions.


What's a good example of this sort of language?


Elm (https://elm-lang.org/) is a language that compiles to JavaScript that doesn't have runtime exceptions.


Neat, thanks


There's Elm, PureScript and many other functional programming inspired languages. But, TypeScript is a safe bet.


It's very exciting to see how JavaScript is evolving!


Honest question, not playing. What is exactly exciting about it? 2020 is around the corner and the language created back in 1995 is only now getting features that have been standard in many other languages, either as part of the core language or the standard library for decades.


Well, I'd think that was pretty obvious, right? It's exciting because if you spend the day writing JavaScript I don't give a hoot that another language has had that feature for decades because I don't get to use that other language.

Are you implying that no other languages ever add features that other languages have? All languages except JS are feature complete? Come on..


Well now, having an option to use other languages to script web pages, that would be exciting.


The better JS gets, the more I want to use it. JS with Promises + async/await is now one of my favorite languages.


How do you cancel promises?


You'll need to come up with a better bad-faith "ha gotcha!" than that. That's not even a feature of the promise/future construct in most languages.

But you can google around for some ideas for writing an async pipeline that you need to cancel, like taking some sort of abort/poison object that can be consumed concurrently outside of the pipeline.

For example, https://developer.mozilla.org/en-US/docs/Web/API/AbortContro...


Great response! Yeah, I remember trying to do some parallel jobs with promises some time ago and having to use Sindre's node libraries to have anything remotely useful. Node would leak tons of memory. Not a pleasant experience at all. I feel that as soon as I need to do something interesting, I hit some sort of technical limitation with node and js.


It's exciting to see the language that we use for the web gain these features (albeit late)! These changes will allow us to follow better programming practices with less overhead.


I can write things in Javascript 5x faster than I can write things in C#. I am finally convinced .NET Core is inferior to Node for that reason, unless I am writing something that needs to be super accurate/performant.

And it has brackets.


I feel after ES6 and async/await in ES7 we're getting pretty meager upgrades.

IMO the three features that would make a much more significant impact in front end work are:

- optional static types

- reactivity

- some way to solve data binding with the DOM at the native level


Yep, waiting for private methods (Stage 3) and private-fields.


Ummm... 25^2 is 625. 15^2 is 225 (see the Object.fromEntries example). I mean, I knew JavaScript math was a bit sloppy due to the use of floating point everywhere, but I hope it's not THAT bad...
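For reference, the article's example is presumably along these lines (the object here is made up):

  const input = { a: 5, b: 15, c: 25 };
  const squared = Object.fromEntries(
    Object.entries(input).map(([key, value]) => [key, value ** 2])
  );
  // => { a: 25, b: 225, c: 625 }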


I think it's been changed after you posted this comment. It now shows that 15^2 is 225.


That Function.prototype.toString change is probably going to break some Angular.js code that relies on scanning function argument names for dependency injection.
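Roughly what that scanning looks like (a simplified sketch, not Angular's actual implementation):

  // Angular.js-style DI infers dependencies from parameter names in fn.toString()
  function controller($scope, $http) { /* ... */ }

  const argNames = controller.toString()
    .match(/\(([^)]*)\)/)[1]
    .split(',')
    .map(s => s.trim())
    .filter(Boolean);
  // => ['$scope', '$http']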


The Function.prototype.toString change has been in chrome for maybe a year now btw.


Oh wow. Looks like I’m wrong.


TC39 has been fairly conservative about not breaking real-world code. Array.prototype.flat was named flat and not flatten because of compatibility issues with old versions of MooTools.


It also requires that comments be stored, wasting memory: previously, just enough of the AST needed to be kept to regenerate the JavaScript. You can't throw away the comments because you can't predict where code might call someVarHoldingAFunc.toString()

It seems like an unnecessary change - if the source needs to be accessed then get the source file.


ES doesn't store quite enough of the AST to recreate the function anyway. Most functions reference variables and functions defined in a lexical scope outside themselves. Getting the source with toString(), and passing that to eval(), doesn't reproduce the function, and sometimes it compiles ok but the function does the wrong thing.

Usually it doesn't matter, because these methods are used on functions that luckily only use well known names, mainly properties of window/global.

But it's a risk, and I've seen subtle bugs caused by the assumption that the function's .toString() can be run through text substitutions and eval to get back a variant of the original code.

Contrived example:

  let x = "wrong variable";

  function f() { let x = "I am x"; return function g() { return x; } } 
  f()()
  => 'I am x'
  
  eval(f().toString())
  g()
  => 'wrong variable'


Agreed. I guess the argument for it is that the source is reflected more accurately, but... who cares? Maybe I'm not understanding what people do with [function].toString() in the real world.


What's even worse is that now frameworks (and viruses) will store data in comments.

I know this because I have wanted to use fn.toString() as part of some meta-programming many years ago (but couldn't because comments were not stored).

I am sure a lot of effort went into making a good decision, aiming for a good outcome, but this smells like a bad one.


Engines hold the entire source anyway, with the exception of XS, which returns "[native function]" for everything.


So, what coffeescript features are we still missing after this round?


I think the flatMap method is pretty solid evidence for my contention that the language is really getting bloated now.


Is that all there is that's new? Maybe it's a good thing that JavaScript is finally slowing down.


Under symbol.description:

const test = Symbol("Desc");

testSymbol.description; // "Desc"

---------

Should testSymbol be replaced with test?


Fixed the typo, thanks for the heads up!


Why does flatMap exist? What's against `sentence.map(...).flat()`?


Hard to believe that const arr4 = [1, 2, , 4, 5]; is valid.


Seems perfectly logical to me: an array of length 5 with four populated elements and one empty one (at index 2). Though I did double-check my understanding that the length property would report 5 rather than 4 (it does).

What looks out of place to you in that example?

Would it make more sense to you with a very slightly less arbitrary example, perhaps arr = ['Value for 0', 'Value for 1', , 'Value for 3', 'Value for 4']; instead of simple mapping ints to ints?

Because array contents are mutable [even if the array variable itself is declared const], that empty slot (index 2) may be populated at a later point in the code.


> What looks out of place to you in that example?

Most languages don't have sparse arrays so it's really weird.

> Would it make more sense to you with a very slightly less arbitrary example, perhaps arr = ['Value for 0', 'Value for 1', , 'Value for 3', 'Value for 4']; instead of simple mapping ints to ints?

You'd usually put an explicit `null` there, especially as HOFs skip "empty" array cells so

    ['Value for 0', 'Value for 1', , 'Value for 3', 'Value for 4'].map(_=>1)
returns

    [1, 1, , 1, 1]
which is rarely expected or desirable.


Doesn't the fact that flat() by default only flattens one level mean you can still create nested output by emitting nested values?


Yes, it appears so. From MDN:

> The flatMap() method first maps each element using a mapping function, then flattens the result into a new array. It is identical to a map() followed by a flat() of depth 1, but flatMap() is often quite useful, as merging both into one method is slightly more efficient.

https://developer.mozilla.org/en-US/docs/Web/JavaScript/Refe...
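For example, emitting nested arrays from the callback leaves one level of nesting intact:

  [1, 2, 3].flatMap(n => [[n, n * 10]]);
  // => [[1, 10], [2, 20], [3, 30]]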


FlatMap is often most useful when recursing, in my experience.


I have used recursion two times irl outside of interviews.


Used with flatMap, it's a terribly good way of doing something to everything in an arbitrary object and all its deep values. I do something like this every month or so.

It's one of those tricks you use a lot once you've seen the problems it's applicable to.
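A sketch of that trick, using a hypothetical deepValues helper that collects every leaf value of a nested object:

  const deepValues = obj =>
    Object.values(obj).flatMap(v =>
      v !== null && typeof v === 'object' ? deepValues(v) : [v]
    );

  deepValues({ a: 1, b: { c: 2, d: { e: 3 } } });
  // => [1, 2, 3]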


seems like a real missed opportunity to add string.leftPad()


Seems like a real missed opportunity to look at the Python stdlib and add a bunch of missing functionality to JS so that people don't have to import a metric ton of third party libraries (or worse, write it themselves) to add something you get for free in most other scripting languages.

Even the trim operations they added fall short of the target. In Python (and tcl, by the way) you can specify which characters to trim.

So close, yet, so far.


You can use a regex for other characters; the point of the trim functions is that they match the somewhat complex Unicode whitespace.
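For instance, a rough stand-in for Python's strip('/'):

  '///path/to/thing///'.replace(/^\/+|\/+$/g, '');
  // => 'path/to/thing'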



But we still have trimLeft() and trimRight(), and in true JS tradition we need some more redundancy for symmetry's sake.


As in the MDN article:

    All major engines have also implemented corresponding trimLeft and trimRight functions - without any standard specification. 
So ES2019 implements trimStart() and trimEnd(), which are symmetrical to padStart() and padEnd(), but the trimLeft() and trimRight() aliases are maintained so as not to break working code.


That makes sense; "left"-trimming RTL text could be confusing.


You're right, I was just being tongue-in-cheek.



At this point, I'm convinced that Javascript is basically a jobs creation program.

We go on adding fancy new syntax for little or no gain. The whole arrow-function notation, for example, buys nothing new compared to the old notation of writing "function(....){}" other than appearing to keep up with the functional fashion of the times.

Similarly, Python, which was resistant to the idea of 20 ways to do the same thing, also seems to be heading toward crazy things like the "walrus" operator, which increases the cognitive load by being a little more terse while not solving any fundamental issue.

Nothing wrong with the functional paradigm, but extra syntax should only be added when it brings something substantially valuable to the table.

Also, features should be removed just as aggressively as they are added, otherwise you end up with C++ where you need less of a programmer to be able to tell what a given expression will do and more of a compiler grammar lawyer who can unmangle the legalese.


>The whole arrow function notation, for example, buys nothing new compared to the old notation of writing "function(....){}" other than appearing to keep up with functional fashion of the times.

Incorrect - the main advantage is that fat-arrow syntax keeps the lexical `this` of the enclosing context, so you don't need the that = this antipattern.
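A quick, contrived illustration of the difference:

  // ES5: function(){} gets its own `this`, so you capture it manually
  function Ticker() {
    this.count = 0;
    var that = this;
    setInterval(function () { that.count++; }, 1000);
  }

  // arrow functions use the `this` of the enclosing scope instead
  function ArrowTicker() {
    this.count = 0;
    setInterval(() => { this.count++; }, 1000);
  }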


In that case, they should have deprecated the "function(){}" notation or at least made it such that arrow function doesn't overlap it.

The current scene is that most people don't know what the real difference between the arrow and function notations is, and this leads to a lot more bugs than if the two didn't overlap. Overall, my point is that this just makes for poor ergonomics and a larger number of avoidable bugs.


> The current scene is that most people don't know what the real difference between arrow and function notations

That's hard to believe unless you're working on the most amateur of teams.

There's a point where you have to expect people to understand the most basic concepts of the language/tools they're hired to use. This shouldn't require more than a simple 5min pull-aside of the junior developer.

Also, you can't change function(){}'s dynamic `this` binding without breaking the web, which is a major downside for your suggested upside of developers not having to learn the distinction. function(){} was always confusing from day one. ()=>{} is a move back toward intuitiveness.


> arrow function notation

Arrow functions bind this to the lexical scope, which is useful. (In a regular function the value of this depends on how it's called.)

> python which was resistant to the idea of 20 ways to do the same thing

This was in comparison to Perl which intentionally has an unusual excess of different ways to do things.

> "walrus" operator

Simplifies a very common pattern.

  m = re.match(r"my_key = (.*)", text)
  if m:
      print(m.group(1))
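  # with the walrus operator (Python 3.8+) the same thing becomes:
  if m := re.match(r"my_key = (.*)", text):
      print(m.group(1))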


Arrow functions aren't just new syntax; they keep the `this` context in which they are declared.

It allows devs to do the following:

onClick={() => doSomething()}

Without having to worry about binding the function to the correct context.


Arrow functions made `this` much more intuitive while maintaining backwards compatibility. That's definitely bringing something to the table.


arrow functions are not just a function() alternative, but they solve the whole 'var self = this'/'.bind(this)' boilerplate.

It's one of the best JS improvements of the last 10 years.


Can we get some additions that replace the garbage one-liner `is-even`/`is-odd` npm libraries that are a scourge?


You mean something like x % 2 === 1?

I'm a JS fan but had to admit I chuckled at the implementation of is-even: https://github.com/jonschlinkert/is-even/blob/master/index.j...


That's hilarious.

What's not hilarious is that, after removing the essentially useless error-checking, is-even is literally just `(n % 2) === 1`. On one hand, JS desperately needs a standard library, on the other hand, JS devs can be so infuriatingly lazy and obtuse.


> is-even is literally just `(n % 2) === 1`

Pretty sure that tells you if a number is odd. I guess maybe there is a reason these libraries exist.


That was just a typo, I meant the source for is-odd. I copy-pasted it. I use that exact syntax all the time in code, you don't need to be patronizing.


Most of this is sugar, e.g. flat() and flatMap(), which feels out of scope for a language spec.

Function.toString being more accurate is helpful.

But real progress would be removing dangerous backtracking regular expressions in favor of RE2: https://github.com/google/re2/wiki/Syntax
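The classic illustration of the problem (just a sketch, not something to run against untrusted input):

  // nested quantifiers plus a non-matching tail = catastrophic backtracking
  const evil = /^(a+)+$/;
  evil.test('a'.repeat(30) + 'b');
  // a backtracking engine can take on the order of 2^30 steps here; RE2 guarantees linear time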



