Hacker News
ECMAScript 2017 Language Specification (ecma-international.org)
600 points by samerbuna on July 11, 2017 | 241 comments



Proposals [0] that made it into ES8 (“what’s new”):

* Object.values/Object.entries - https://github.com/tc39/proposal-object-values-entries

* String padding - https://github.com/tc39/proposal-string-pad-start-end

* Object.getOwnPropertyDescriptors - https://github.com/ljharb/proposal-object-getownpropertydesc...

* Trailing commas - https://github.com/tc39/proposal-trailing-function-commas

* Async functions - https://github.com/tc39/ecmascript-asyncawait

* Shared memory and atomics - https://github.com/tc39/ecmascript_sharedmem

The first five have been available via Babel and/or polyfills for roughly 18 months, so they’ve been used for a while now.

[0] https://github.com/tc39/proposals/blob/master/finished-propo...
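A quick tour of the first few, assuming an ES2017-capable engine (results shown in comments):

    const obj = { a: 1, b: 2 };
    Object.values(obj);                      // [1, 2]
    Object.entries(obj);                     // [['a', 1], ['b', 2]]
    '5'.padStart(3, '0');                    // "005"
    '5'.padEnd(3, '0');                      // "500"
    Object.getOwnPropertyDescriptors(obj).a; // { value: 1, writable: true, enumerable: true, configurable: true }
    Math.max(1, 2, 3,);                      // 3 — trailing comma in the call is now legal
    async function wait(ms) {                // async functions implicitly return promises
      return new Promise(resolve => setTimeout(resolve, ms));
    }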


Interesting that String padding made it in -- sort of jumps out as the simplest of these additions. I wonder how much of that had to do with negative PR for JS-land due to left-pad-gate.


You can read that disaster in two ways...

I consider the fact that a stupid-simple package was depended-upon by so many mature libraries as an indication it should be a language feature.

Working with network protocols I find myself needing padding functions all the time, and there isn't really an elegant way to do so inline, so I welcome this addition.


> I consider the fact that a stupid-simple package was depended-upon by so many mature libraries as an indication it should be a language feature.

Unfortunately neither the npm module nor the browser version really do what most people want and string handling in javascript is still a minefield.

'\u{1F4A9}'.padStart(5, '1') => "111\u{1F4A9}" // oops — only three pad characters, since the emoji already counts as two (shown as an escape because HN strips the emoji itself)

'\u{1F4A9}'.length => 2

[...'\u{1F4A9}'].length => 1 //WTF?

'mañana'.padStart(7, '1') => "1mañana" // ok

'man\u0303ana'.padStart(7, '1') => "mañana" // oops

'man\u0303ana'.length => 7

[...'man\u0303ana'].length => 7 // WTF? Why doesn't this match the behavior of [...'\u{1F4A9}'].length ?

'man\u0303ana'.normalize('NFC').padStart(7, '1') => "1mañana" // OK

I do understand the unicode issues here, but the inconsistency in the APIs from a user perspective and lack of any fully cross browser support for sane string processing in Javascript means we still have only a few options:

1) Don't do string processing in javascript at all.

2) Include a library to make it sane, these are usually huge as they usually need large lookup tables.

3) Accept that things won't always be correct.

This is one example but the lack of a sane standard library in Javascript is one of the biggest problems the web has right now. I'd be curious to know how many bytes of JS are loaded on the average website just to work around the lack of standard library support for basic functionality, I'd bet it's a very large number. Another fun one: Try to parse a URL and append extra query params to it, correctly.
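For the URL case, the WHATWG URL API covers it where available — but it's a browser/Node API rather than part of ECMAScript, and IE doesn't have it, which is exactly the problem. A sketch assuming a modern engine:

    const url = new URL('https://example.com/search?q=caf%C3%A9');
    url.searchParams.get('q');              // "café" — decoded for you
    url.searchParams.append('page', '2');   // re-encoded correctly on the way out
    url.toString();                         // "https://example.com/search?q=caf%C3%A9&page=2"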


    'mañana'.padStart(7, '1') => "1mañana" // ok
    'man\u0303ana'.padStart(7, '1') => "mañana" // oops
You are being disingenuous here. Those are different strings, with different lengths (try copying this into the console):

    'ma\u00F1ana'.length     // 6 — precomposed ñ (U+00F1)
    'man\u0303ana'.length    // 7 — n followed by a combining tilde (U+0303)
The latter has two stacked characters. These issues are inherent to Unicode and `padStart` is treating the strings correctly. If you need normalization, use the .normalize method you mentioned yourself.

This is a major improvement: double-wide and stacked characters have been there since ES3, but now the language is providing standard tools to work with them.
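For reference, what normalization buys you here (assuming `normalize` is available; escapes used so the two forms stay visible):

    'man\u0303ana'.length                   // 7 — n followed by a combining tilde
    'man\u0303ana'.normalize('NFC').length  // 6 — recomposed into a single ñ
    'ma\u00F1ana'.normalize('NFD').length   // 7 — and decomposed again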


> If you need normalization, use the .normalize method you mentioned yourself.

If it were so simple...

`normalize` doesn't exist in IE at all, nor in Safari before 10, so to take this advice we need a polyfill. As you may expect, polyfilling Unicode normalization isn't pretty: it requires a massive lookup table.

The best polyfill out there, unorm, clocks in at ~38KB gzipped. Now, keep in mind there are a half dozen or more iframes on many web pages; each would have to load its own copy, and it's unlikely the caching would overlap, for a number of reasons. Also keep in mind that building or loading code based on browser support isn't realistic in many cases, so if I want to use normalize, everyone pays the bandwidth penalty, not just the IE11 users. Of course this is only one part of the problem. Want to iterate over grapheme clusters? That'll be another massive library. Etc, etc.

The browser JS ecosystem is full of these problems, it's not just text processing. If you've ever wondered why a site needs to load 2MB of javascript, it's because that's about what is needed to create a cross browser compatibility layer and a reasonable standard library.


> Also keep in mind that code builds / loading based on browser support isn't realistic in many cases, so if I want to use normalize, everyone pays the network bandwidth usage penalty not just the IE11 users.

Switch to loading it via JS modules and using HTTP/2 to keep connection lag low on cellular 3G connections? I agree, more needs to be done to address these kinds of edge cases. A similar problem occurs with locale-aware date parsing and formatting.


Let's not talk about Javascript date/time.


I don't think it's really fair to say that he's being disingenuous here: regardless of the underlying byte form, the strings look indistinguishable, and users (devs) will often expect them to behave as such.

I think it would be less confusing to define .length as the number of characters and have an additional .size method returning the number of bytes (I'm assuming that's what .length returns, if not it's even more confusing).

Of course, that already wasn't done - meh.


> have an additional .size method returning the number of bytes (I'm assuming that's what .length returns, if not it's even more confusing).

It's actually not the number of bytes, it's the number of...'codepoint pieces' is what it could be called I guess? Javascript's language level string implementation is something like UCS-2 with the addition of surrogate pairs being allowed, but counted as separate 'characters' for things like length and index access. It's some twisted middle ground between UCS-2 and UTF-16.
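A small illustration of that split (escapes used to avoid any copy/paste mangling):

    const poo = '\u{1F4A9}';
    poo.length;          // 2 — two UTF-16 code units (a surrogate pair)
    poo.charCodeAt(0);   // 55357 (0xD83D, the high surrogate)
    poo.charCodeAt(1);   // 56489 (0xDCA9, the low surrogate)
    poo.codePointAt(0);  // 128169 (0x1F4A9, the actual code point — ES6+)
    [...poo].length;     // 1 — the string iterator walks code points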


That seems deranged to me. Like a true length calculation, it still requires a complex (albeit cacheable) calculation to resolve, but it fails to return the length of the string in terms of the number of characters as they would naturally be presented.

I understand a need in some contexts to distinguish between a character and its subsequent modifiers - but I do not see such a context here.

Design by committee?


> [...'man\u0303ana'].length => 7 // WTF? Why doesn't this match the behavior of [...'\u{1F4A9}'].length ?

The key thing to remember is that iteration over Unicode strings only makes sense as iteration over code points, not UCS-2 characters, not bytes, not grapheme clusters. The JS String iterator was very deliberately made to iterate over code points. That length reports UCS-2 characters is a historical mistake. That padding is operating on UCS-2 characters is probably a reflection of the fact that the operation isn't well-defined beyond ASCII.
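That shows up anywhere the iteration protocol is used, not just spread:

    const s = 'a\u{1F4A9}b';
    s.length;              // 4 — UTF-16 code units
    [...s].length;         // 3 — code points
    Array.from(s).length;  // 3 — same iterator underneath
    for (const ch of s) {
      // visits 'a', '\u{1F4A9}', 'b' — one code point at a time
    }
    // Grapheme clusters are still a different question:
    [...'man\u0303ana'].length;  // 7 — the combining tilde is its own code point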


> The key thing to remember is that iteration over Unicode strings only makes sense as iteration over code points, not UCS-2 characters, not bytes, not grapheme clusters.

There are tons of situations where interating over grapheme clusters is what you want to do.


And tons of situations where you want neither of the two (e.g. NFD vs. NFC). The Cairo graphics library has utilities for text rendering that are explicitly called "toy text" functions in its reference, leaving serious rendering to Pango. That's fair. Languages should not call their strings "Unicode strings" unless the details are covered by dedicated libraries with distinct names for code-point/code-unit/etc. lengths, iterators, and so on. There is no such thing as a single string length or "char" anymore. A string is blank or non-blank; anything beyond that is too complex to be part of any stdlib. Even "blank" is not so obvious today.


That's why I hated the "it might break stuff" arguments against defining "string" interfaces in terms of characters (including combining characters), and always using UTF-8 for the internal, in-memory encoding. It would have made a lot of this easier.

As to your last bit, I tend to favor encodeURIComponent and have done it correctly... the main reason, is to avoid "+" vs " " in query strings.


It seems like certain code points (like '\u{1F4A9}', aka the poop emoji) are a single character but the string reports a length of 2. That is the root of all of those problems. One of your "problems", the length of an array with a single string element, isn't a problem.


> One of your "problems", the length of an array with a single string element, isn't a problem.

You're misunderstanding the code; it's using the spread operator on a string:

[...'word'] => ["w", "o", "r", "d"]

What is being demonstrated is that under the hood, javascript stores astral plane codepoints as surrogate pairs and strings operate on 'characters' which is why '\u{1F4A9}'.length => 2. But, when the spread operator is applied to a string, it breaks up the string into codepoints, not characters. This is also why [...'man\u0303ana'].length => 7, the combining tilde is a separate codepoint.

This is an example of how wonky string processing in JS is, [...string].length is actually the most straightforward way to get a count of codepoints in a string.


> One of your "problems", the length of an array with a single string element

That's not what it is. Look again, and pay attention to the ... part.


That's really helpful, thanks.


Any Unicode code point above U+FFFF requires 2 UTF-16 code units


Holy fuck! The Fractal Of Bad Design all over again...


Honestly, I feel most of the blame falls to NPM for allowing publishers to delete packages. This doesn't happen in other ecosystems (e.g. Java).


Afterward, they changed their policies so that this can't happen again.


Most of the blame falls to npm Inc for bending over backwards to a corporation for a bogus trademark claim they weren't even involved in.

Sure, left-pad being deleted was what resulted in most people's problems but this was just the fallout from npm Inc forcibly reassigning an actively used package name from a major open source contributor to appease a company that didn't even threaten them directly.


I believe the string padding proposal predated the "leftpad" incident by a year or so.


I love the trailing commas for function calls. Wish we could get trailing commas for all of JSON too!


It's called JSON5 and it also has comments. The standard JSON will never "change"


For that you'd need to wait until all major parser libraries support it, aaand that all deployments get updated. That means that little change is an at least 10 year long project.


What I want are leading commas. Much tidier and one small thing you wouldn't have to edit/think about when adding and removing list items.


Better yet: automatic comma insertion! (I'm only half joking.)


Unless you need to add something to the beginning.


I prefer comma first notation.

    const hello =
      [ 'one'
      , 'two'
      , 'three'
      ]
would be easier to visually parse than:

    const hello = [
      'one',
      'two',
      'three',
    ]
Since the commas actually work as a guide.


Why? I just don't find myself doing too much JSON manual editing. A few config files and such. Occasional editing of a server response or call for testing.

I'm basing this on the assumption that the desire for trailing commas is making editing easier and reducing noise in diffs.

Just curious if there are niches where people are spending a lot of time manually editing JSON or if the use case is something else entirely.

I love them in code, despite initial resistance.


The use case for me is JSON blobs (mostly configuration files like package.json) that are checked into version control. Same motivations - reducing line-oriented diff noise.


I suspect they borrowed this idea from Golang's "composite literal" spec, which is a really nice feature: https://dave.cheney.net/2014/10/04/that-trailing-comm


Trailing commas have been allowed in C composite literals since the dawn of time.


Thanks for sharing! Interesting to me that separate repositories are used for each proposal, which is news to me. Although I can see some nice benefits to doing so.


I think that’s because a lot of proposals are initially developed away from TC39 by individuals or small groups.


Please don't call it ES8. It contributes to confusion around the language. ES6 was renamed to ES2015. There is no such spec as ES7 or ES8.


I know I’m technically wrong and it’s too late to edit the comment...

...but in my experience usage of ES6/ES7/ES8 as names far outweighs usage of ES2015/ES2016/ES2017. Calling it ES2017, while technically correct, would in my opinion be far more confusing to most people than calling it ES8.


Usage is switching, and ES2017 is the name we should standardize on to avoid confusion. For example, TypeScript accepts the targets "ES6" and "ES2015" as synonyms, but it doesn't have "ES7" or "ES8" targets, only "ES2016" and "ES2017".

I personally prefer the terse version, but there are advantages to using the year-based naming.


In my experience, the usage has started swinging the other direction. While in general I agree that ES6, ES7, etc. would be less confusing, it will only confuse people more if the spec is officially called one thing but some people call it something else. If someone wanted to learn about, say, "ES10", they'd have to search for both ES10 and ES2019 to ensure they got what they wanted. I think it's better that everyone agrees to use the official naming scheme to avoid that kind of confusion. But, you know, that's just my opinion.


This is mostly symbolic. The annual ECMAScript 'editions' aren't very significant now except as a talking point.

What matters is the ongoing standardisation process. New JS features are proposed, then graduate through four stages. Once at stage four, they are "done" and guaranteed to be in the next annual ES edition write-up. Engines can confidently implement features as soon as they hit stage 4, which can happen at any time of year.

For example, async functions just missed the ES2016 boat. They reached stage 4 last July [1]. So they're officially part of ES2017 – but they've been "done" for almost a year, and landed in Chrome and Node stable quite a while ago.

[1] https://ecmascript-daily.github.io/2016/07/29/move-async-fun...


As far as I understood it:

ECMAScript is more a thing for backwards compatibly.

Things get proposed and HAVE to be implemented in runtimes before they get into ECMAScript.

They want to know if things can be implemented nicely before they standardize them.

This is more a game between ECMA and Browser vendors, JS compilers (like Babel) and vendors of other JS runtimes like Node.js.

If you are an enduser, i.e. a user of Babel, it's more a question of the support you get from the Babel devs.

They say JSX is on? No matter what ECMA says, as long as Babel gets this compiled into ECMAScript conforming JS.


I think it does matter to end user developers though. Once something is in the standard, you can rely on it. It won't change in a backwards-incompatible way.

That means that you're not imposing an unusual configuration on downstream users, and depending on the environments you support, at some point you won't even need to run Babel at all.

With IE11 approaching end of life, and more and more sites dropping support for it, we're rapidly approaching a time when we can assume the vast majority of users are on a modern, evergreen browser. And the last version of node that didn't have full ES6 support just went out of long term support.


> Once something is in the standard, you can rely on it. It won't change in a backwards-incompatible way.

That's what stage 4 means. Completely finalised. Appearing in the next 'edition' document is just a formality. Once it's stage 4, it's effectively in the standard. If you restrict yourself to stage 4 features, you have all the same guarantees you mentioned. If you restrict yourself to the last published edition, you'd have missed out on async functions until almost a year after they were completely finalised, for example.

> the last version of node that didn't have full ES6 support

No version of Node supports ES6 imports yet, which is a huge part of ES6, and it won't be supported for at least another year. This emphasises the point that following ES editions is pointless.

All that matters is what combination of individual features is supported by your engine. Thinking in editions has pretty much no benefit.


I agree. My point was in contrast to the attitude that once something is in babel that's all that matters.


The purpose of https://github.com/babel/babel-preset-env is that it handles all this automatically (not run when it's supported natively)
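A sketch of what that looks like in a `.babelrc` (the browser list here is just an example — tune it to your actual audience):

    {
      "presets": [
        ["env", {
          "targets": { "browsers": ["last 2 versions", "ie >= 11"] },
          "useBuiltIns": true
        }]
      ]
    }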


I don't understand who gets to use these things. We have to support back to IE11 so other than some things we've managed to find effective polyfills for, we basically just threw our hands up and code as if it's 2012.


Most modern projects should probably use Babel and a bundler like Webpack for web targets. You may even want to separate legacy and modern browsers into separate targets so the 80-90% of users on evergreen browsers get a smaller download.


Yea, we probably should. Too many feature requests, not enough time to let us clear out the technical debt and make our stack nice and modern :(


I'm only two years into my current project and already am feeling this bite :-\


how smaller do you think things could get?


Depends, are you including ES5 shims, not to mention the corejs stuff for ES6, regenerator etc if you're using async. It all adds up. Given it's not that much in terms of size, but it's a lot in bloat for browsers that don't need it.


> I don't understand who gets to use these things

You, several years from now.

The inclusion of features in the standard is an important part of the evolution of the language. But it's just one part. More recent browsers have already implemented many of these features, and there will come a day when they're pretty much everywhere. Just be patient.

(source: I'm old and remember the era when our sites had to work with Netscape 4 due to its market share)


I feel you. I'm also stuck with IE11 for the time being. No CSS variables, no ES6 symbols. Babel takes the edge off it but there's still plenty we're missing out on for the time being.

Not so long ago, we had to support IE8. Before that, IE6. If you go back a little further IE5 for Mac was the biggest pain point. Before that, Netscape 4 and IE4, which were often mutually incompatible.

There will always be a legacy system many people have to support at any given time. But over time the number of people that has to worry about it will shrink along with its market share. You probably don't see many support tickets about Netscape 4 these days anymore.

Having these standards today means the next generation of legacy browsers will support them when you're stuck with them in the future. That you're not chasing the bleeding edge doesn't mean you don't benefit from these advancements, just that you'll do so with a delay, and likely far less gradually than those privileged enough only to care about the evergreens.


Use CSSNext if you want css varibles now


Err... no.

CSS custom properties are interesting because they are dynamic. CSSNext just translates them statically, hence why it restricts definitions to `:root`. So the only thing CSSNext gives you over other "variables in CSS" preprocessors is that the syntax looks like actual CSS.

But good luck swapping out colors at runtime without recompiling the generated CSS, which is trivial with "CSS-in-JS" solutions like styled-components or glamorous.

EDIT: Also CSS variables are often advocated with `@apply` replacing preprocessor mixins. But `@apply` is pretty much dead outside faux-css preprocessors: http://www.xanthir.com/b4o00 so you probably shouldn't rely on CSSNext for this unless you understand that even if it looks like CSS what you're writing won't necessarily ever work in a browser.


People using node for more than just building.

People writing internal tools.

People writing software for tech-savvy users with modern hardware.

People using babel to compile to a supported version.


You could use Babel.


> This is mostly symbolic. The annual ECMAScript 'editions' aren't very significant now except as a talking point.

A spec is never symbolic. If a library claims it implements ES2017 then its consumer expects the library to follow the entire spec.


Libraries don't implement ES, JavaScript engines do.

And JavaScript engines don't implement ES editions, they implement ES features. These features now have a five stage process ending with their inclusion in the next edition of the language spec.

Most engine vendors want to be on the bleeding edge, so they start implementing features long before they are fully stable.

The reason the editions are symbolic at this point is that they are just snapshots of the collection of features that have reached stage 4 by the time they are published.


The per-feature specs are finalised significantly earlier than the yearly official versions. Nothing practically happens or changes.


True, but in reality such claims almost always have caveats. Like how almost no one supports ES6 imports yet.


To be fair, ES module semantics were not part of ES6. ES6 only defined the syntax. The appropriate semantics for implementing ES modules are currently still not 100% stable last I checked, especially because of Node which had its own module system already that needs to be unified with ES modules somehow without breaking everything or making it awkward.


I would really love to see an object map function. I know it is easy to implement, but since they seem to be gaining ranks through syntax sugar, why not just have a obj.map( (prop, value) => ... ) ? :)


Is Object.entries(obj).map((prop, value) => ...) close enough? Object.entries is newly standard in ES8.


Not really object map. Does not return a new object ;)


Ah, well this would return a new object:

    Object.entries(obj).map((prop, value) => ...).reduce((newObj, [prop, value]) => newObj[prop] = value);
I see your point about verbosity :)


Shouldn't it be ".map(([prop, value]) => ...)" ?

I like to write it this way, using as much syntax sugar as possible:

const mapObject = fn => obj => Object.assign( ...Object.entries(obj).map( ([key, value]) => ({ [key]: fn(value) }) ) );
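Usage is at least readable, even if the definition is dense:

    const double = mapObject(x => x * 2);
    double({ a: 1, b: 2 });  // { a: 2, b: 4 }
    // (Caveat: Object.assign() with no arguments throws, so this version falls over on {}.)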


>using as much syntax sugar as possible

It's not possible to say this on the internet without being rude so, apologies, but, why code like that? It genuinely was a struggle to parse (in my brain) your oneliner of code there. If I found this in our code base it would be a huge waste of time and energy.


No offense taken.

I said I like to write it this way, not that I always do it.

> why code like that

It's a fun exercise.

> it would be a huge waste of time and energy

I don't fully agree on this: from the name and signature it's pretty clear what the function does, so you don't need to waste time "parsing" the details. And with unit tests it is also very low-risk.

But that's speculative, in a real project I'd just use lodash.


In production code, if I ever have clever one-liners, they're usually offset with just as many additional lines of comments to explain what it's doing and why it works. Plus, this function is generic and would probably be extracted into a utility module, anyway.

To your point, though, this one is pretty esoteric.


The trend of style over readability: Google recommends avoiding list comprehensions in Python, yet 99% of Stack Overflow Python questions have some convoluted list comprehension answer.


I really like that... I've tended to use Object.assign in a reducer, but this works too, and is a little cleaner imho


Yep, it should be. Thanks for catching it.


Your linter will also complain about modifying newObj within the reducer.


Which is a problem with using linters, not with programming.


Spread syntax should cover linter issues

  Object.entries(obj).map(...).reduce((acc, [prop, value]) => ({ ...acc, [prop]: value }), {})


isn't that hideously inefficient? you're making a new object each iteration.


But purity.


it's very unlikely this is going to be the bottleneck in your application


I wouldn't be so quick to write that off as a performance concern. Creating tons of unnecessary objects is JavaScript's equivalent to programming without regard for cache locality at a lower level, due to the way the JITs work.


It's very unlikely you'll get any benefit from doing it the inefficient way either. Does eslint not support "ignore this part" comments?


Yes it does

// eslint-disable-line your-rule-to-ignore


In those cases, I usually use

    Object.assign(acc, { [prop]:value })
Though I'm not sure whether `[prop]` is better or worse than not creating a new interstitial object.


why not just use acc[prop] = value ?


Assignment does not return `acc`.


Which linter rule is that?


The one that doesn't like modifying function arguments?


Usually that only disallows reassignment of function arguments. For example, eslint's no-param-reassign rule by default allows modifying the properties of the function argument. However, the "props" parameter for this rule can disallow even the modification of properties of a function's argument.
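For reference, the stricter form is opt-in — a sketch of the relevant `.eslintrc` fragment:

    {
      "rules": {
        "no-param-reassign": ["error", { "props": true }]
      }
    }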


Even in es5 you can do Object.keys().reduce().
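Something like this, presumably (a plain ES5 sketch):

    function mapValues(obj, fn) {
      return Object.keys(obj).reduce(function (acc, key) {
        acc[key] = fn(obj[key], key);
        return acc;
      }, {});
    }

    mapValues({ a: 1, b: 2 }, function (v) { return v * 2; }); // { a: 2, b: 4 }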


Object.assign({},obj)


Perhaps use the new ‘Map’ object in ES6 instead?


I believe that one of the main reasons not to do this is that adding anything to `Object.prototype` means that now everything inherits that new method.

So you have weird things like `'foo'.map()`, `( 42 ).map()`, `new WeakMap().map()`, `false.map()`, etc.


Those all sort of make sense to me.

'foo' is a string and can be iterated over.

42 is a number that can be provided to a function. Not sure about this one, tbh. Could be useful for passing a number to a function if and only if the number is valid (not NaN).

WeakMap(), I don't know what it is but I assume it's a map that can be iterated over.

mapping 'false' would fit well the Maybe monad.


But what are the own properties of a number? There's nothing to iterate, and therefore nothing to map.

WeakMap is a new-ish type whose keys are objects. The references to those objects are weakly held, which means that an object `x` whose only reference is a WeakMap key will still be eligible for garbage collection. They are non-iterable for exactly that reason.

Also: how do you iterate over `false` or a function? What about user-created classes?

There are so many cases where the behavior would either be nonsensical or at least unintuitive that (in my personal and completely subjective opinion) they outweigh the slight convenience of being able to "map" a plain old JavaScript object.


No answer for WeakMap, and like I said, not sure about integers, but if you take a look at Haskell monads, you'll see that surprising things like Maybe (booleans, essentially, or rather container classes that either contain something or don't) can be iterated over. It's a very useful concept that abstracts somewhat the idea of "iterable", to "do something with all the things", where in the case of Maybe, all the things is either one thing or nothing.

Granted, in JavaScript maybe this wouldn't make sense.


Actually WeakMap() could not be iterated over as its keys are not enumerable by design - you can't "know" at any given time what the list of keys actually contains.


Fair enough.


Adding a `map` method to every single object is not a backwards-compatible change. That's why those new methods are under the `Object` global.
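A quick illustration of the breakage (don't actually do this):

    Object.prototype.map = function () { /* ... */ };

    for (var key in { a: 1 }) {
      console.log(key);  // logs "a", then "map" — every for...in loop now sees it
    }
    // Defining it non-enumerable via Object.defineProperty avoids the for...in
    // problem, but can still collide with code that feature-tests 'map' in obj.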


Until then, lodash.mapValues is your friend :P


Notably, with shared memory and atomics, pthreads support is on the horizon.

https://kripken.github.io/emscripten-site/docs/porting/pthre...

Granted it may be limited to consumption via Emscripten, it is nevertheless now within the realm of possibility.

For those who cannot grok the gravity of this -- proper concurrent/parallel execution just got a lot closer for those targeting the browser.


Are locks and mutable shared memory a good thing? Advances in modern programming languages aim to abstract threads away (e.g. .NET's Task Parallel Library), because mutable memory shared between threads is a minefield (atomic, locks, mutexes, compare-exchange, etc.).

Not only are these difficult to reason about and program against, introducing locks over mutable memory causes performance problems and are a speed bump to high concurrency. (Heck, just yesterday on HN, a story was published "24-core CPU and I can't move my mouse" [0] - the culprit? Low-level locks.)

Creating too many threads introduces performance problems, too, thanks to CPU context switching; pools must be created and threads reused from those pools.

And now in JS land we're trying to introduce threads and mutable shared memory? Are we just going to learn these lessons over again?

[0]: https://randomascii.wordpress.com/2017/07/09/24-core-cpu-and...


You can only send strings to webworkers in javascript. Without shared memory you have to send a serialised copy of your data to the other process which then has to deserialise it. In most cases this is perfectly fine. But if you have a lot of data then moving it around might take more time than the computation itself. In this case you're forced to use shared memory.


Not true, you can send ArrayBuffers to webworkers as transferable objects without copying: https://developers.google.com/web/updates/2011/12/Transferab...
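A sketch of the transfer, assuming `worker` is an existing Web Worker:

    const buf = new ArrayBuffer(32 * 1024 * 1024);
    worker.postMessage({ buf }, [buf]);  // second argument is the transfer list — moved, not copied
    console.log(buf.byteLength);         // 0 — the sender's copy is now detached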


TPL abstracts threads away, but not mutable shared memory, nor any primitives used to synchronize access to such from concurrent tasks.


TPL (and PLINQ) raise the level of abstraction. They discourage the use of mutable shared memory and low-level locks. They encourage the use of immutable or thread-safe data structures.

Joe Duffy, who had originally done quite a bit of initial work on the TPL, argues in favor of this paradigm[0] of raising the abstraction level.

I feel like introducing the low-level tools without providing a raised abstraction level may be beneficial in the long run (you need low-level tools to build high-level abstractions). However, without those abstractions, I suspect a generation of JS developers will need to re-learn all the multi-threading lessons of the last few decades.

[0]: http://joeduffyblog.com/2016/11/30/15-years-of-concurrency/


I don't see how TPL discourages the use of mutable shared memory any more so than a simple thread pool. Note: I'm not saying that it's not a higher level of abstraction, only that it's abstraction in a different direction (the one that has nothing to do with sharing or mutability).

I'm not surprised that TPL author discourages mutable shared memory and locks - this is the conclusion to which anyone dealing with concurrency a lot arrives sooner rather than later. But that's orthogonal to TPL design.

Now, PLINQ does attack the whole mutable shared memory thing head-on. But it's a much higher level of abstraction on top of TPL.


We have the great benefit of stewards having already learned those lessons.


Shhhh node people still pretend threads don't exist


Hmm... can't tell if this is an attempted Node slam, but figured I would add that you can mimic threads / concurrent tasks by utilizing the child_process or cluster modules.


As a matter of fact, it certainly makes things a fair bit more pleasant than the current state of affairs, as shared array buffers are another data primitive on top of bidirectionally passing strings/buffers.


Not sure that anybody is "pretending" anything. Seems like you're just going out on a limb to be condescending.


I hope those kinds of comments do not start making their way towards HN. Reddit is getting so bad lately, an influx is probably due.


What I wish ECMAScript had was true support for number types other than the default 32-bit float. I can use 32 and 64 bit integers using "asm.js", but this introduces other complications of its own -- basically, having to program in a much lower level language.

It would be nice if EcmaScript could give us a middle ground -- ability to use 32/64 bit integers without having to go all the way down to asm.js or wasm.


Tangential to your point, but JS numbers are actually 64-bit floats (capable of safely representing 53-bit ints), not 32-bit.

But yes, 100% agreed; this is a major pain point of the language right now. Nice to hear that there's finally some movement on fixing it (as per arthurdenture's comment)!
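Concretely:

    Number.MAX_SAFE_INTEGER                   // 9007199254740991 (2^53 - 1)
    Math.pow(2, 53) === Math.pow(2, 53) + 1   // true — integer precision runs out here
    Number.isSafeInteger(Math.pow(2, 53))     // false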


There's a BigInt proposal at stage 2, which might meet some of your needs: https://github.com/tc39/proposal-bigint


It's parseable in Babel 7, issue to implement the transform is https://github.com/babel/proposals/issues/2


In the last couple of years we've seen a small number of significant improvements like async/await but mostly small tepid improvements like string padding, array.map(), etc. It's like TC39 are simply polishing JS.

I'd like to see TC39 tackling the big problems of JS like the lack of static type checking. I'm tired of looking at a method and having to figure out if it is expecting a string, or an object.

We had EcmaScript4 about 10 years ago with plenty of great features but TC39 killed it. And yeah, it probably made sense since the browser vendor landscape was very different back then. Today it would be possible to implement significant changes to the language much like the WebAssembly initiative.


The reason ES4 was killed was that there was no consensus between the stakeholders. It would have been a breaking change, making conforming engines incompatible with all existing JavaScript code.

How well major breaking changes like that work out on the web can be seen from the massive success (cough) of XHTML -- I dare you to find me one notable site correctly using XHTML with the appropriate MIME type (rather than just sending XHTML as HTML tag soup using `text/html`).

Web Assembly works because it is not a breaking change to JavaScript. It's actually defined in a way that is mostly orthogonal to JavaScript -- it shares some of the APIs but most likely there will be (low-level) APIs only Web Assembly code can use and some (high-level) APIs that only exist for JS.


I'm not advocating for breaking changes, in fact static type checking in ES4 was optional, the language remained just as dynamic if you wanted to use it that way.


Checkout TypeScript. It's great.


I know, but it's not JavaScript.


For the most part it is JavaScript. JavaScript is valid Typescript, because Typescript is a superset. Typescript just gets you type annotations and "future" JavaScript features, none of which you're forced to use.


Yes I know, but again, it's not JavaScript and there will always be headaches here and there.

For example: https://vuejs.org/v2/guide/typescript.html


And, when you compile it to JavaScript, you can choose the target language level. So, you could write Typescript and have it compile such that in essence, all it does is strip out the type annotations.


Really hate the naming for JS standards.. ES2017, ES8, ECMA-262. Way to confuse people :/


"The versions of Unix are numbered in a logical sequence: 5, 6, 6PWB, 7, 4.1, III, 4.3, V, and V.3." -- quoted from memory from The Unix-Haters Handbook

So you see, they're following long-standing industry practice :-)


Angular 1, angular 2, angular 2.1... Angular (note, Angular 2 is now Angular! And so is every version after this! Angular 1 still exists though).

Surface Pro. Surface Pro 2 (note: Surface Pro is now surface pro 1!), Surface Pro 3, Surface Pro 4, Surface Pro. By the way we also have Surfaces (1, 2, and 3?) as well as Surface Books, and Surface Laptops! :D

Android: here's a bunch of fucking candy

OS X: here's a bunch of fucking animals. By the way, sometimes people call your MacBooks 10,1 or some shit.


> Android: here's a bunch of fucking candy

whose initials follow alphabetical order


You're behind the times. OSX has moved on from animals to nature preserves. Much more logical.


Nitpick: "MacBookPro11,1" "Macmini7,1", etc refers to the hardware model: the first number being the machine's generation and the second one being its revision.


iPad, iPad 2, The New iPad, iPad, iPad Air, iPad Air 2, iPad Pro, iPad


The .NET and .NET Core version numbers are another minefield, and I still can't remember (and don't really care) what's what. Your choices are either to laugh or be filled with rage. I'm coming round to the first one having mostly opted for the second in recent months.


What's particularly weird about .NET version numbers? They're monotonically increasing, with the exception of .NET Core getting a reset to 1.0 (which kinda makes sense, since it's a different and incompatible product).


.Net Framework, .Net Core and .Net Standard all have different version numbers, which correspond to different things. In addition, since Core and Standard are both in the 1.Xs currently, it can be confusing. Finally, the .Net Core SDK is versioned differently from the run time, both somewhere in the low 1.Xs right now.

    >dotnet
    Microsoft .NET Core Shared Framework Host
      Version : 1.1.0
      Build   : 928f77c4bc3f49d892459992fb6e1d5542cb5e86

    >dotnet --version
    1.0.0-preview2-1-003177

The version numbers aren't necessarily weird, but the versioning sure is.

Disclaimer: MS Employee who uses Dotnet, but doesn't work on it.


Might be worth posting this

https://docs.microsoft.com/en-us/dotnet/standard/net-standar...

I should probably print it out

I was in version hell yesterday, targeting a standard (1.5) because I thought it would help me run as .NET Core 1 whilst supporting a 4.6.2 .NET Framework dll but that broke something else. It was a juggling game of trade offs


Ah, I keep forgetting about the Standard. This makes sense (as in, it does make less sense now that it makes sense).


Framework 1.4 > 4.7 > 4.6.1?


Just copying another idea from Java I'd guess:

Java 1.4 -> Java 5.

(Yes, I'm a Java programmer)


1.4 is not really "greater than" 4.7, they're just two different parallel tracks.


Don't forget openSUSE: 13.1, 13.2, 42.1, 42.2, 42.3, 15.0


Xbox, Xbox 360, Xbox One.


Windows 1, 2, 3, 95, 98, 2000, ME, XP, Vista, 7, 8, 10


ES2017 is the conventional name now that there's annual publications of the spec. ES8 is someone reading the edition number on the document and trying to be terse. ECMA-262 is the formal name of the spec document, not the language the spec defines. All slightly different things, only one of which names the language itself, officially.


Which one is it that names the language officially?


ECMAScript 2017. That's the name given by the specification document, which is itself officially known as ECMA-262, 8th edition.


So which name is the correct one?


Ecmascript 2017.


Is calling it "Javascript" even still correct, or should it be called "ECMAScript"? If someone says they write "Javascript using ES2017" are they talking utter nonsense?

I can find some references to "Javascript" being an implementation of ECMAScript, but that doesn't appear to be true [1]. Mozilla's implementation is SpiderMonkey, Google's is V8, and Microsoft has Chakra and JScript.

[1] https://en.wikipedia.org/wiki/ECMAScript#Implementations


History time!

Netscape created a language which they called LiveScript when it first shipped in a beta build, largely influenced by Java. By the time of Netscape Navigator 2.0 beta 3, they had negotiated a license for the Java name from Sun and renamed the language JavaScript (and, as wamatt pointed out, Sun owned the JavaScript trademark, now owned by Oracle following the Sun acquisition).

IE3 shipped a reverse-engineered copy of JavaScript, which they called JScript because they didn't have a license to the JavaScript trademark.

Later in the same year that Netscape Navigator 2.0 and IE3 shipped, Netscape submitted a specification of their JavaScript language to Ecma. What followed, as far as I'm aware, was a bit of a debate about naming: JavaScript was basically out of the question due to it being a Sun trademark, JScript was a MS term that Netscape didn't want to legitimise, and hence the compromise was ECMAScript (why this uses the pre-1994 capitalisation of Ecma is a good question!).

So, essentially, JavaScript, JScript, and ECMAScript are three names for the same language.

This then gets a bit complicated as Netscape and then Mozilla referred to ECMAScript revisions as JavaScript versions, and then started adding non-standard extensions as new JavaScript versions, though they've basically killed that now.


> So, essentially, JavaScript, JScript, and ECMAScript are three names for the same language.

This is a little wrong/misleading.

ECMAScript is the specification; JavaScript, JScript, and ActionScript, are various implementations of the specification. Each implementation provides additional features not described in ECMA specs, such as access to ActiveX and the local computer in JScript.


Mozilla's view is that JavaScript is a language, of which they provide two implementations: SpiderMonkey and Rhino.[1]

As far as I'm aware, MS has used JScript to refer to both their original implementation and the language it implements, and likewise JScript.NET is both the implementation and the language it implements. I can't find any citation for this, however.

Adobe considers ActionScript a language, with implementations including AVM.[2]

[Edit:] I just realised you could also potentially mean "implementation" insofar as they define a host environment for ECMAScript, except JavaScript especially does not: just look at the difference between SpiderMonkey-in-a-browser and SpiderMonkey-on-the-CLI.

[1]: https://developer.mozilla.org/en-US/docs/Web/JavaScript/Abou... [2]: http://wwwimages.adobe.com/content/dam/Adobe/en/devnet/actio...


JavaScript is the de facto name of the language, at least the version used in web pages.


"largely influenced by Java"

I believe that's backwards. Javascript was based on Scheme & Self, the Java resemblance is only superficial; added later for marketing purposes when they picked up the JavaScript name.


The syntax was largely influenced by Java, although Scheme & Self influenced more of the semantics than Java did. From memory of what Brendan has written before, the language was meant to be, per managerial plans, "Java but a scripting language".


The syntax sure is much closer to Java than Scheme or Self. So was the obscuring of prototypal inheritance with the new keyword along with the Math object.


Unfortunately because of marketing and the (general) lack of technical expertise by recruiters, you gotta just put the whole kit and kaboodle on your resume. Mine looks like:

Javascript, JS, ES2015, ES2016, ES2017, ES5, ES6, ES7, ES8, Ecmascript 2015, Ecmascript 2016, Ecmascript 2017, HTML, HTML5, CSS, CSS3.... etc.

I worked as a recruiter so I know it's necessary because 1: recruiters ctrl+f like a mofo, and also your resume won't pop up on linkedin/ other sites if you don't hit their tags, and 2: recruiters will scan the "technologies" section to make sure you "have" the language that was given to them on their spec sheet.

Spec sheet will look like:

    Web Developer
    3+ years experience
    Knowledge of Javascript, HTML, CSS3

If you have CSS but not CSS3 the recruiter may not realize the difference (or may not realize that pretty much the most important thing on that spec is "Web Developer") and probably won't call.

So in short is calling it Javascript correct? shrug Nothing is correct in this mad world!


I guess if we're supposed to keep our resumes to 1 page we have to use 4 point font?


I use 10, but yea. I mean the bit I just commented about can be smashed onto one to three lines at the top, not a big deal.


Really short and accurate answer from People Who Know about why you don't see 'JavaScript' used as often:

- Oracle owns the trademark on JavaScript (through buying what was left of Sun, via the Sun-Netscape Alliance)

- Nobody trusts Oracle

A good idea proposed is just to call the language itself 'JS' and have it not stand for anything.


Sort of related: JS is officially written as "JavaScript" (yeah, ikr) and surprisingly the trademark is owned by Oracle.


This goes back to the Netscape / SUN partnership. SUN allowed Netscape to use their trademarked name Java when they created JavaScript. Microsoft created JScript. When they decided to standardize the language into a spec, they did so with ECMA. The involved parties couldn't agree on a name so they settled on the highly original name ECMAScript as a compromise.

"Eich commented that "ECMAScript was always an unwanted trade name that sounds like a skin disease.""


And, just to round this out, SUN is all-caps (rarely) because it was originally an acronym for Stanford University Network.


TIL and can now go to sleep


Think of it as JS 8 came out in 2017 and is defined in a document with an irrelevant ID.


> Way to confuse people :/

JavaScript, that's a scriptable version of Java, right?

/sarcasm


Can anyone recommend a good book or guide for someone who knows pre-ES6 javascript but wants to learn all the latest ES6+ features in depth?


I found the You Don't Know JS series books by Kyle Simpson very helpful.

The last one is for ES6.

https://github.com/getify/You-Dont-Know-JS


I found it helpful to go through the airbnb style guide. They have lots of examples of the "bad" (often old) way to write something, along with the "good" (often new) way. Then I updated one of my projects to follow those guidelines.



I've found Pony Foo's articles to be excellent:

https://ponyfoo.com/articles/tagged/es6-in-depth


This is a very nice and comprehensive overview:

ES6 (very comprehensive!) http://es6-features.org/#Constants

ES7 (succinct, easy to read) https://h3manth.com/new/blog/2015/es7-features/


http://www.react.express is intended as a learning resource for those used to ES5 Javascript. Granted, it's for learning React through modern Javascript, so it may not be exactly what you're looking for, but it has interactive sections for new ES6 and ES7 features.


I took a class on ES6 and curated it in a github repo: https://github.com/JacobWylie/ES6

If applicable there are examples of both before and after conventions as well as links to more technical external resources.


exploringjs.com


I like Dr. Rauschmayer's blog, as well: http://2ality.com/

His blog is full of useful and specific deep dives into individual ES proposals as they reach certain stages.


Dr. Rauschmayer also has several books free to read online available here http://exploringjs.com/. I definitely recommend his books they are often quite thorough.


There's also Nicholas C. Zakas' book understanding es6. It's free to read online too. https://leanpub.com/understandinges6/

I haven't read this one but I started one of his other books that was a very large book of JavaScript showing warts and all.

I would definitely support these authors though by buying the books if you have the money to do it.


Here’s what’s in it: https://github.com/tc39/proposals/blob/master/finished-propo...

And some interesting tweets by Kent C. Dodds: https://twitter.com/kentcdodds/status/880121426824630273

Edit: fixed KCD's name. Edit #2: No, really.


If you're a Brit, this is a funny misspelling of Kent's name.

https://en.wikipedia.org/wiki/Ken_Dodd


Whoops, that's absolutely why I misspelled it!


> Edit: fixed KCD's name.

No you didn't.


Oh, dear lord.


Regardless of what gets included in the spec, I hope people think critically about what to use and what not to use before they jump in. Just because something is shiny and new in JS, it doesn't mean you have to use it or that it's some sort of "best practice."


For anyone wondering what's NodeJS support of ES8.

Everything is supported, except "Shared memory and atomics"

[1] http://node.green


Shared memory? Is that also maybe going to be implemented in Chrome? I would not have expected shared memory with code running on the web/in a browser. I'm confused about the goals of ECMA and how ECMA relates to JavaScript.


That's memory shared between the main thread and Web Worker threads. While it's exciting from an "easily implementing multi-threaded applications" point of view, given the nature of the objects, it doesn't actually allow sharing any data that couldn't be shared without the new spec.
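A minimal sketch of the new primitives, assuming an engine that ships SharedArrayBuffer/Atomics and an existing `worker`:

    // main thread
    const sab = new SharedArrayBuffer(4);
    const counter = new Int32Array(sab);
    worker.postMessage(sab);      // shared — not copied, not transferred

    // inside the worker:
    // onmessage = e => Atomics.add(new Int32Array(e.data), 0, 1);

    Atomics.load(counter, 0);     // both sides observe the same memory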


Is there a "What's new" section?


The "finished" proposals have a good list:

https://github.com/tc39/proposals/blob/master/finished-propo...


I've been looking at the stage 2 and 3 proposals. I have a difficult time finding use for any of them except for Object spread/rest. The stage 4 template string proposal allowing invalid \u and \x sequences seems like a really bad idea to me that would inadvertently introduce programmer errors. I do hope the ECMAScript standardization folks will raise the barrier to entry for many of these questionable new features that create a maintenance burden for browsers and ES tooling and a cognitive burden on programmers. It was possible to understand 100% of ES5. I can't say the same thing for its successors. I think there should be a freeze on new features until all the browser vendors fully implement ES6 import and export.


Has there been any progress on supporting 64-bit integers?


The BigInt proposal[0] is stage 2, meaning we should see some implementations soon.

[0]: https://github.com/tc39/proposal-bigint
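As currently proposed (stage 2, so details may still change), BigInts get an `n` suffix and their own type:

    const big = 9007199254740993n;  // the first integer a Number can't represent exactly
    typeof big;                     // "bigint"
    2n ** 64n;                      // 18446744073709551616n
    // Mixing BigInt and Number in arithmetic throws rather than silently losing precision.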


Heck, what about any kind of integers, so we can use bitwise operations without `double -> int -> double` casts?
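For context, today's bitwise operators work by truncating the double to a signed 32-bit integer and converting back:

    Math.pow(2, 31) | 0   // -2147483648 — wrapped into int32 range
    2.9 | 0               // 2 — the classic (ab)use as a truncation trick
    0xFFFFFFFF | 0        // -1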


I wish this-binding sugar would get promoted into stage 1.


The :: syntax is one of my favorite proposals - does anyone know what's holding it up?


I should really learn ES6


Literally after seeing this on #1 felt like, "Wtf? I just started learning ES6"


The additions in ES2016 and 2017 are minimal compared to the massive changes ES2015 (aka "ES6") introduced. Check the full list of them here: https://github.com/tc39/proposals/blob/master/finished-propo...


Yes, I have been through the proposals. Love these new changes. Psychologically it still felt daunting when I first saw it! :P


Looks like ECMA's site is overloaded. Here's a Wayback Machine link for the lazy: https://web.archive.org/web/20170711055957/https://www.ecma-...


If you're using Chrome, you can also go to cache://www.ecma-international.org/publications/standards/Ecma-262.htm (replacing https:// with cache://)


That's just doing a google search, so you could do "cache:{URL}" in firefox's search field too


I'd like to be able to capture object modifications like Python's magic __getattr__ __setattr__ __delattr__ and calling methods that do not exist on objects. In the meantime I am writing a get, set, delete method on my object and using those instead



I've used Proxy in the past. It doesn't allow for the capture of both `object.attr_does_not_exist` and `object.methodDoesNotExist()` simultaneously, it is one or the other. I will admit that I am trying to use another lang's paradigms in JS, but only because it is so similar and it makes sense to me.

But I may be able to use Proxy just to capture object changes. I will try it out


Could you have your "get" handler do its thing and have it forward on the arguments it received to the "apply" handler if necessary with an extra flag argument and vice versa?
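Something along these lines, perhaps — a rough sketch where `target`, `handleMissing` and `recordChange` are all placeholders:

    const watched = new Proxy(target, {
      get(obj, prop, receiver) {
        if (!(prop in obj)) {
          // both obj.missing and obj.missing() come through here first;
          // returning a function gives you missing-method semantics too
          return (...args) => handleMissing(prop, args);
        }
        return Reflect.get(obj, prop, receiver);
      },
      set(obj, prop, value) {
        recordChange('set', prop, value);
        return Reflect.set(obj, prop, value);
      },
      deleteProperty(obj, prop) {
        recordChange('delete', prop);
        return Reflect.deleteProperty(obj, prop);
      }
    });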



There's defineProperty in ES5. I've used it to create an ORM where foo.bar = 1 executes async SQL: UPDATE foo SET bar = 1.
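A rough sketch of that shape — `runSql` is hypothetical, just to show where the write gets intercepted:

    function ormColumn(obj, table, column) {
      let current;
      Object.defineProperty(obj, column, {
        enumerable: true,
        get() { return current; },
        set(value) {
          current = value;
          runSql(`UPDATE ${table} SET ${column} = ?`, [value]);  // hypothetical async call
        }
      });
    }

    // ormColumn(foo, 'foo', 'bar'); foo.bar = 1;  -> fires the UPDATE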


That sounds pretty cool. Are there any public examples that I can learn from? Thanks


I'd also love to see examples.


Really interesting how bad the only JavaScript code used on their own site is: https://www.ecma-international.org/js/loadImg.js


Do what I say, not what I do.


I've seen worse.


I made a short sum-up of changes in this specification here: http://espadrine.github.io/New-In-A-Spec/es2017/


What is up with decorators?


It's now at stage-2


>AWB: Alternatively we could add this to a standard Dict module.

>BT: Assuming we get standard modules?

>AWB: We'll get them.

lol


> Kindly note that the normative copy is the HTML version;

Am I the only one who finds this ironic..


What's ironic about it?


Wait, so async generators and web streams are 2018 or 2016?


Time to update https://es6cheatsheet.com

What's the feature you're most excited about?


Definitely async functions. JavaScript code will be _so much cleaner_ thanks to them. Can't wait for the much needed improvement in readability.


You don't have to wait, it is already supported server-side and client-side.


Async/await has been implemented in node.js for a long time, in fact I consider that not using it at the server side is almost criminal...


async/await +1

has been possible with babel on older versions of the JS, but nice to have it after all officially.
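The before/after is roughly this (`fetchJson` is a stand-in for anything that returns a promise):

    // promise chain
    function load(id) {
      return fetchJson('/items/' + id)
        .then(item => fetchJson(item.ownerUrl)
          .then(owner => ({ item, owner })));
    }

    // async/await
    async function load(id) {
      const item = await fetchJson('/items/' + id);
      const owner = await fetchJson(item.ownerUrl);
      return { item, owner };
    }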


Atomics & shared memory. We use a ton of array buffers along with web workers to transmux HLS video in the browser (https://github.com/video-dev/hls.js). Being able to share memory across the worker instance will hopefully save a lot of memory/time.


Off topic: I've been using with hls.js at work recently for some stream debugging. Great stuff!


Thanks!


Array comprehensions. Anytime I switch from Python to JS, it's the first thing I miss.


Heh, maybe JS becomes finally usable just before WebAssembly takes off, rendering it obsolete :-D


Wherein a Hackernews suggests an unknown, unpopular technology will replace a known, popular one in short order.


Come on, many people look down on JavaScript for a reason - if the new standard allows proper multithreading, it suddenly gets closer to being a "usable" language as defined by many. But if they had a choice of another language running on top of WebAssembly (or whatever you name the web's VM), I am not sure they would even consider learning JS, despite it finally being on par with their favorite language.


JS won't be entirely obsolete. You won't be able to access the DOM from WebAssembly. It will be way easier to make way more performant web apps though.


DOM access is coming to WebAssembly after GC is supported.

https://github.com/WebAssembly/design/blob/master/Web.md


oh, is that all!


Nice 90s style website ECMA!


It's funny that the ECMA website is not more modern. Maybe they are too busy with the spec to change it. Who knows?


It's a site read by a bunch of programmers. Why does it need to be pretty? We can read lightly styled text quite easily. It's how most engineers I know prefer their technical documentation.


I'm not clear on if it's dated and ugly or just way too future for me.



