Node v6.9.0 (LTS) (nodejs.org)
309 points by dabber on Oct 18, 2016 | 90 comments



When a native Promise incurs a rejection but there is no handler to receive it, a warning will be printed to standard error.

Probably the most important change other than improvements in es2015 support.


Yes, this footgun is why I've been really wary of the `fetch()` plus es6-promise polyfill bandwagon. I've been giving that a wide berth. Examples why:

- http://requirebin.com/?gist=8f13d5147c1c252ab1691115bfa8b7c5

- http://requirebin.com/?gist=06e8c85fcaca56c9651c6aabd0d91476


How is that any different from any other way of ignoring potential errors...

    fs.writeFile(file, data, function(err){});
I'd spent most of the last month dealing with a codebase that ignored potential errors systematically... which was a HUGE problem when you WANT to throw an error down that hole... "turtles all the way down" and they're eating all the errors.

It's not limited to Promises, or even unique.


Because it's not just an 'ignoring of errors', it's the fact that all exceptions (as in actual errors from the runtime) are silently swallowed. In your example, if you make a typo in that function body, an exception will be thrown, your program will exit and you will see the stack trace.

  var fs = require('fs');

  fs.writeFile('foo.data', 'some data', function (err) {
    // A typo here will lead to an exit and a stack trace
    sadsad
  });
Whereas in the Promise examples it is quite literally silently swallowing ALL exceptions thrown, including typos etc.
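A minimal sketch of the kind of thing I mean, just a bare promise with no `.catch()` (and no global rejection hook):

    Promise.resolve().then(function () {
      // typo -> ReferenceError, but a strictly spec-following polyfill
      // prints nothing and the page carries on as if nothing happened
      sadsad
    });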

It is limited to Promises, because they're supposed to help us deal with error propagation, and pass exceptions through to `.catch()` handlers. But if that catch handler is missing, crickets, even for outright invalid code. That doesn't happen with any other construct in JS afaik.


In that example the same error gets thrown with a Promise, because you're mostly talking about a parse error at file load time (`Promise.resolve().then(() => I am a syntax error)`).

The parent's point is that you still aren't doing anything with the callback errors (that first parameter to most node-style callbacks) and those silent swallows are a lot harder to deal with than Promise's silent swallows because node callback error handling is a convention that is easily missed/forgotten/ignored, whereas Promises always "bubble" exceptions "down the chain" and fixing global unhandled exception handlers in Promises is a lot easier (and done at a platform level) than fixing a codebase full of bad node-style callbacks.


I disagree. A developer choosing to not handle the 'error' param in an error-first callback at all is incompetence from a node dev, akin to writing 'try-except-pass' in Python, whereas silent swallowing of runtime exceptions is a bug of the runtime and inexcusable.

Btw, I prefer Promises and the error-bubbling approach in principle for the reasons you're getting at, but the current implementations (especially in polyfills like es6-promise) are simply broken, due to the standards themselves being lacking. See this comment for the fact that the fix has been in draft for a while now: https://github.com/stefanpenner/es6-promise/issues/70#issuec...


I disagree. Developers are human and if they have to remember to write extra code to get a thing to do the right thing, they are likely to forget. (try-except-pass is an explicit opt out; if (err) throw err is a "required" opt in.)

This "bug" in some of the Promise runtimes could just as easily be handled by forcing every Promise chain to end in a .catch((err) => /* ... */). Requiring even this little bit is less onerous than node-style callbacks where every "link in the chain" needs explicit opt-in, with Promises you only "have to" opt in and deal with boundaries, just like try-catch in most languages.

Yes, it's not great that Node and some of the browsers initially "missed" the "great try/catch in the sky" catch all logging, but again that is easily resolved at the runtime platform level whereas node can't possibly tell you if a developer missed an if (err) throw err somewhere in a callback chain 10-levels deep and across three or four dependencies.


I agree that handling promises should be by default better than remembering to 'if (err) throw err' but my original comment was about how spec-adhering promise polyfills like es6-promise are basically broken because they silently swallow the most egregious errors.

They silently swallow all runtime exceptions, not just 'the error from reading the file you told me to read', which by any sane standard should have SOME KIND of error handling, and, for anyone serious about programming in a dynamic language, test coverage as well.

Those two things are different issues, on different levels.

> Yes, it's not great that Node and some of the browsers initially "missed" the "great try/catch in the sky" catch all logging, but again that is easily resolved at the runtime platform level

That's all my original comment said. The platforms (actually the spec) failed to get it right initially, the implementation was broken and now it is fixed.


Seriously? It takes 5 seconds to add an eslint rule to avoid this issue.

http://eslint.org/docs/rules/handle-callback-err


The time this bit me the most, the error was swallowed in a dependency of a dependency of a gulp plugin. Do you also eslint node_modules/**/*.js?


In my experience, it has too many false positives to turn on.


Sorry to hear about your experience. Been using it for years as part of our build processes with no false positives.

You're always welcome to write a replacement if you find it unsuitable.


I don’t need a replacement, just offering an explanation for why some people’s needs might not be satisfied by this rule.


In some scenarios, particularly when writing scripts, error handling boilerplate code tends to obscure the nominal flow, which can make the script overall harder to debug and thus net more likely to have errors.

Coincidentally, I was annoyed by this very fact a few years ago, so I developed https://www.npmjs.com/package/callback-wrappers which makes the boilerplate for common error handling scenarios (logging, throwing, exiting, emitting, etc.) much terser and moves it to the end of the functions so it's less in your face. I never made much use of it, and it's so non-canonical that I'm loath to advocate it too much, but it would make me happy if some sort of standard syntactic sugar could be introduced that allowed comparable brevity.


> it is quite literally silently swallowing ALL exceptions thrown

Technically, it propagates the error along to any dependent promises. If nobody's handling errors, then yes, they get swallowed. If that bugs you, all you have to do is add this once when your app starts up:

    process.on('unhandledRejection', function(ex) {
      console.error(ex.stack);
      process.exit(1);
    });
I think that should have been the default, personally, but it's a nuisance at worst. Throwing out promises because of this is absurd thinking.


Killing a process on an exception you can't handle is sort of best practice anyway in Node--assuming you're not building important financial services or whatever in Node (lol).

Like, you want to be careful not to open yourself up to cascading failure, but if my application hits a random exception, the 100ms startup time of the JavaScript VM means it's almost always worth the cost of restarting to ensure the application isn't in an unstable state.


I'd rethink that. Every connection to the server closes, and whatever else it might be doing stops, because someone hits your syntax error? And they completely DoS your server by slowly refreshing the page?

Surely the best practice is to use something like koa or otherwise create a promise chain with a top-level 500 handler.
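Roughly like this (handleRequest is just a stand-in for your actual routing/handler code; nothing here is koa-specific):

    var http = require('http');

    http.createServer(function (req, res) {
      Promise.resolve()
        .then(function () { return handleRequest(req, res); })
        .catch(function (err) {
          // log it, answer 500, keep the process alive
          console.error(err.stack);
          res.statusCode = 500;
          res.end('Internal Server Error');
        });
    }).listen(3000);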


Depending on how much state your applications carry forward, I think killing the process is often the right thing to do. Otherwise, it's very easy to end up with connection leaks or just downright confusing state that can't always easily be traced back to what should have been an exception.

I run a node app that serves ~500M requests a day and is configured to die on exceptions; it just means that we do everything we can to prevent them (unit tests, linters, etc.) and take them very seriously when they happen.


What makes you think I can't handle it? I definitely can handle it, and Promises give me the ability to handle them.


In my example I was talking about avoiding the es6-promise polyfill in the browser, because it adheres so strictly to the spec that it has no equivalent of the safety net you give in your Node example.


Yeah polyfills might not do the right thing. Natively, both exceptions and rejections stop the next thing from happening, give you a trace in the console, then everything continues at the next tick of the event loop. If a polyfill skips the trace part, that's definitely bad.


Yep, I know. I'm specifically complaining about the current trend of casting off old ajax methods/libraries and moving to 'fetch()' polyfills, as currently the two main popular fetch modules both recommend 'es6-promise' as a polyfill, which has serious problems in this regard: https://github.com/matthew-andrews/isomorphic-fetch/issues/1...

Technically however, what that polyfill is doing is 'on-spec'. Note that your `process.on('unhandledRejection')` thing is clearly outside spec as `process` is a Node thing, not a JS thing.
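(For what it's worth, newer browsers have started shipping a rough equivalent, also outside the Promise spec itself and typically absent from plain polyfills:)

    // where supported (e.g. recent Chrome); not part of the Promise spec
    window.addEventListener('unhandledrejection', function (event) {
      console.error('Unhandled rejection:', event.reason);
    });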


Personally I don't want my process to crash because I didn't handle some unimportant writeFile call correctly. This is why I use Promises, so I can handle exceptions.


I came to the comments section to post this exact comment. This is a hugely impactful change! Super happy to see this land.


I wish the browsers moved on with support as fast as Node does, so that we could get rid of the stupid hundreds of MBs of dependencies needed to transform an arrow function.


So two things.

First, it doesn't matter how fast browsers move. They're moving pretty damn quick right now if you ask me, but it doesn't matter, because you need to know your audience and what browser(s) they use.

Second, I don't understand the want or need to include "hundreds of MBs of dependencies to transform an arrow function". Why do you have to use an arrow function? Yeah, it looks nicer, but if you're complaining about how much dependency it brings in, why not just not use it? Don't get me wrong, I like ECMAScript 2015, but if you don't want to bring in the huge amount of transpiler dependencies, ECMAScript 5 is still super easy to write.


Exactly my point. Arrow functions and destructuring assignments are great, but we've put up with function() { ... } for the last 20 years, so I'm pretty certain we can wait a couple more years rather than get bogged down in these crazy tooling chains to achieve so little.

And if we insist on crazy tooling chains, how about getting some real benefit – like CoffeeScript (the ?-operator, anyone?) or TypeScript (yes, typing can help, just look at Swift)?


The engine that Node uses is already in Chrome, and IIRC all the major current browser versions already have 97% support (tail call optimization notwithstanding; I think for Safari this is still a vNext issue).

You can use one of a number of modern targets, which will avoid the need for the various polyfills etc... but if you need current Safari and any IE support, you will at least need dual build paths, which isn't too bad. I kind of wish there was a babel preset meant to be used in conjunction with the Financial Times' polyfill.io.


I've been working on https://github.com/babel/babel-preset-env/ and there's a similar request although using babel-polyfill rather than the service https://github.com/babel/babel-preset-env/issues/20


Browsers already support arrow functions (and did before Node), but there’s no reason to break compatibility with old browsers just to get arrow functions.

If that’s the only thing you’re transforming, my advice would be to stop and just use function () {}.
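The whole fight is over the difference between something like:

    var double = (n) => n * 2;                       // needs transpiling for older browsers
    var doubleES5 = function (n) { return n * 2; };  // runs everywhere

which is not much of a difference at all.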


> we can get rid of stupid hundreds of MBs dependencies to transform a arrow function.

Depending on your target market, you may be stuck on ES5. Your users' browser behavior determines when you can remove the shims and transforms, and if they aren't upgrading you can't really force them to upgrade their browsers.


>force them to upgrade their browsers

Please yes


Hmm, Node is trailing Chrome; what exactly are you talking about?


They do. Try writing an arrow function into your browser's console right now.

http://caniuse.com/#feat=arrow-functions


Bluebird already has this, and is more efficient than native. But this is definitely a step forward.


I'm not debating the point, but how is that possible?



I'm not sure how it's possible, but it's definitely not rare. Lodash's methods are faster than their respective native implementations (Lodash `map` > `Array.prototype.map`).


As a general answer, doing something simple in pure JS can be more efficient than the overhead associated with making a native call from JS. The costs are going to vary depending on how the JIT handles things. That's probably the best general reason though as to how it's possible.
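A rough way to see it for yourself (treat this as a sketch, not a benchmark; numbers vary a lot by engine and version):

    var arr = [];
    for (var i = 0; i < 1e6; i++) arr.push(i);

    console.time('Array.prototype.map');
    arr.map(function (n) { return n * 2; });
    console.timeEnd('Array.prototype.map');

    console.time('plain for loop');
    var out = new Array(arr.length);
    for (var j = 0; j < arr.length; j++) out[j] = arr[j] * 2;
    console.timeEnd('plain for loop');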


As I understand it, native promises don't make native system calls. They are still pure js. They are just implemented differently from Bluebird which optimizes for the most common use cases.


http://programmers.stackexchange.com/questions/278778/why-ar...

I'm not sure all of it still stands. Bluebird is in C. Native is in JavaScript.

Edit: I'm wrong


Bluebird isn't written in C. It doesn't even depend on a compiled node addon.



Sounds like a step in the right direction!



Agreed, I've been using it a bit and it is quite good. Especially since if you use --debug-brk and launch the debugger at the beginning, you can enable pause-on-uncaught-exception and then if you crash you can access all state at that point.


Wow I had no idea this existed.


You can even use VS Code to get an IDE-like debugging experience with it.


For those using nvm:

nvm install lts/boron && nvm alias default lts/boron

You may need to update nvm before that works:

( cd "$NVM_DIR" git fetch origin git checkout `git describe --abbrev=0 --tags --match "v[0-9]*" origin` ) && . "$NVM_DIR/nvm.sh"


If you use zsh I wrote a little wrapper for nvm that adds a few handy features. Upgrades can be done with `nvm upgrade`.

https://github.com/lukechilds/zsh-nvm


I'm trying to install it (via antigen) but it complains that I already have nvm installed (well, I do, but should I remove it?)


It should work fine with existing nvm installs, have you commented out the line where you're manually sourcing nvm?

If you have can you paste the exact message you're getting?


I haven't commented out any lines.

    λ ~ ◆ antigen bundle lukechilds/zsh-nvm
    .Cloning into '/Users/drinchev/.antigen/repos/https-COLON--SLASH--SLASH-github.com-SLASH-lukechilds-SLASH-zsh-nvm.git'...
    remote: Counting objects: 228, done.
    remote: Compressing objects: 100% (47/47), done.
    remote: Total 228 (delta 16), reused 0 (delta 0), pack-reused 178
    Receiving objects: 100% (228/228), 29.54 KiB | 0 bytes/s, done.
    Resolving deltas: 100% (91/91), done.
    Checking connectivity... done.
    Installing nvm...
    fatal: destination path '/Users/drinchev/.nvm' already exists and is not an empty directory.
    fatal: Not a git repository (or any of the parent directories): .git
    fatal: Not a git repository (or any of the parent directories): .git
    λ ~ ◆ 
Anyway, I think the problem is that it's trying to install nvm, which I already have. Not sure I want to delete my .nvm, since I'm going to lose all my node installs.


Ahhh, zsh-nvm requires that nvm has been installed via git.

Although it shouldn't be trying to install over your previous installation, it checks if nvm exists first with `[[ ! -f "$NVM_DIR/nvm.sh" ]]`.

Out of interest what does: `[[ ! -f "$NVM_DIR/nvm.sh" ]] && echo "nvm doesn't exist" || echo "nvm exists"` return?

If you wanna try it out you could backup your "$NVM_DIR/versions" folder and restore it. That holds all your node installs and global modules.


    λ ~ ◆ [[ ! -f "$NVM_DIR/nvm.sh" ]] && echo "nvm doesn't exist" || echo "nvm exists"
    nvm doesn't exist
    λ ~ ◆
Yeah, but tomorrow I'm GMT+2 :D


Strange...

Is it a symlink or something? Does `ls "$NVM_DIR"` list nvm.sh anywhere?


No :(

I've opened an issue, because I think we're polluting HN

https://github.com/lukechilds/zsh-nvm/issues/11


> v6.9.0 marks the transition of Node.js v6 into Long Term Support (LTS) with the codename "Boron"

Welcome, Boron! We've been waiting for you :)


Oddly I've had this tab open for a week or so now: http://madscript.com/boron/


> Support has been dropped for Windows Vista and earlier and macOS 10.7 and earlier.

Windows Vista was released in 2007, OS X 10.7 in 2011.

For some reason, third-party software seems to support old Windows versions much longer than old OS X versions. Meanwhile, Apple stops supporting old hardware in new OS X versions quite quickly (IMHO), probably for business reasons.

I had one Macbook in the past that I had to put Linux on, just because most software had dropped support for the newest OS X version that machine could run.


Sierra supports Macs back to 2009, so that's 7 years, which I think is decent. If you want to use a 10-year-old notebook, I doubt you're going to be troubled by a 3-year-old OS.

Apple may cease supporting older hardware earlier than Microsoft does for a very easy reason: because they can. With the insane amount & combinations of hardware Windows has to support, there's no point in adding anything that's somewhat specific – plus they have a lot more of the "enterprisey" customers with 15-year support contracts and underfunded IT budgets.

There are quite a few Macs in large companies nowadays, but from what I've seen, they are much more likely to adopt a consumer-like support system, i.e. giving root to the users or letting them individually buy & upgrade within a certain budget. They also tend to update more often, possibly because buying a Mac is already an indicator that they're willing to spend more / that they care about their tools.

It's also the complete opposite on mobile phones: it's not rare for Android phones to never even get a version that was already released when you bought them, whereas iPhones are good for about 4 iOS versions, I believe.


Microsoft invests heavily in backwards compatibility. Apple could too since they have more money than they know what to do with but I take it they prefer the ability to iterate quickly and make breaking changes for the sake of moving things forward.


For the sake of selling more devices


Can we get an estimated time of arrival for ES2015 imports in Node.js? I understand that the support needs to come from V8 first of all.


They just had a meeting with TC39 about it. Here's the writeup from the node side:

https://hackernoon.com/node-js-tc-39-and-modules-a1118aecf95...


> Unlike require(), however, import() returns a Promise, allowing (but not requiring) the loading of the underlying module to be performed fully asynchronously.

Promises, afaik, are always asynchronous. Maybe the spec changed (I'd welcome it for sure) but last I read it, resolve or rejection handlers must be called in the next tick. Even if something can be executed immediately, it'll still wait till next tick. This obviously means there's always a wait associated with promises, hence they are always asynchronous.
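For example, even with an already-settled promise the handler only runs after the current synchronous code finishes:

    var p = Promise.resolve('already settled');
    console.log('before');
    p.then(function (v) { console.log('then:', v); });
    console.log('after');
    // prints: before, after, then: already settled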

I wish this article would go into more detail on what the edge cases are with the unambiguous module syntax – I thought that was an unusually elegant solution in this community.


> loading of the underlying module to be performed fully asynchronously

Right now, Node uses require(). What he means here is that while this does return a Promise, the underlying loading of the module doesn't have to be asynchronous; you could just do something like this:

    // (illustrative only: `import` is a reserved word, so you couldn't
    // actually define a function with this name yourself)
    function import(module) {
      return Promise.resolve(require(module));
    }
Which would fill the requirement of returning a Promise, but would actually load the module using the normal synchronous means.

> I thought that was an unusually elegant solution in this community.

The biggest issue with unambiguous module syntax is that, since you have to parse modules before finishing the resolution, it could drastically lengthen startup time.


Thanks for the clarification!

Regarding parsing modules – that's a fair point but surely this has to be done anyway? For ESMs you have to parse them to build the binding map, and for CJS you have to parse to evaluate; either way the modules get parsed at some point so why would this in any way be an additional overhead?


> Promises, afaik, are always asynchronous.

In this scenario, I believe the idea is that the module could be loaded synchronously, but the Promise would still be resolved or rejected in the next tick.


Ah right, that makes sense. Thanks for clarifying!


TLDR: TC39 and NodeJS people are very friendly, and are cooperating to bring native ES modules into NodeJS ASAP. The delay is due to technical obstacles, and not politics/philosophy/power plays. Some progress has been made, and smart guys&gals are working to make this integration seamless for all complex edge cases, although Node may end up using a different file extension for pure ES modules (.ejs). Currently no timeframe is available, even as a rough estimate.


Best article I've read on the subject. Thank you for posting this.

It does seem like it'll be a while before we can use this in a Node.js LTS.


No idea, but for now you can use rollupjs.


The ES6 improvements make Node very powerful. I'm a Python developer and the improvements to the language are impressive. No support for array unpacking and import statements just yet though.


> Very large arrays are now truncated when passed through util.inspect(), this also applies to console.log() and friends.

This is a funny one, in conjunction with two facts about npm: 1) npm has no public programmatic API (and zero docs on it), 2) the npm devs encourage you to exec() stuff and read stdout, as all the internal methods can be changed/removed at any time, even in non-semver-major releases.

I used to have a script doing `npm view | grep ...` which now fails under Node 6. The best solution, I guess, is to use a hardcoded version of npm as a dependency and rely on its programmatic API rather than stdout.


The output of npm view is essentially a version of the package.json. That's a pretty standard structure. Some of the fields can be formatted in different ways. But why would you want to grep through JSON when you can parse it?


It's not standard JSON, it's `console.dir()` output: keys are not quoted, so any JSON parser would choke on it, and due to the change in the console methods you can now get "..." in long arrays. It's unparseable (not valid JSON), and now ungreppable too (full arrays are no longer printed).
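If the npm you're shelling out to supports the global `--json` flag, that at least gives you something a parser can handle; roughly (`lodash` standing in for whatever package you're inspecting):

    var execFile = require('child_process').execFile;

    execFile('npm', ['view', 'lodash', 'versions', '--json'], function (err, stdout) {
      if (err) throw err;
      var versions = JSON.parse(stdout);
      console.log(versions.length + ' published versions');
    });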


I am assuming this means they will tag v7 soon so I can easily use it with nvm?

Also, does anyone know if there is a way to get babel to output code targeting Node v7? Like, just stuff in the 'latest' preset that isn't out-of-the-box (with or without the flag).


You can use a Node-specific preset like others have mentioned, but I'm working on https://github.com/babel/babel-preset-env for all future versions as well.


You want the babel-preset-latest-minimal package.

It does exactly that: feature-based Babel transforms.


There have been presets for Node 4, Node 5, etc so I imagine there will be one for v7 once it's released.


Why does a 6.9 release imply 7.0? I'd think it implies 6.10.


I'm a little surprised that "LTS" doesn't mean at least five years of support like Ubuntu.


It doesn't surprise me considering the speed of development with Node and its ecosystem. It has only been 7 years since its birth.


Contributors willing to do the work are welcome ;-)


I know it's not completely related, but does anyone know if ExpressJS runs on 6.9?


Why wouldn't it?


Hopefully an official 'Boron' tagged docker image will follow shortly.



I'm stoked about the ES6 support.



