
A few folks have replied with the usual "use C/C++/Java instead", but in the real world it's often impractical (or rather commercially indefensible) to spin up a separate environment with its own training, testing, automation, documentation and maintenance overheads. A blanket rejection of Node for CPU-heavy tasks is naive.

On the issue of performance, V8 lets JavaScript run pretty quickly. Yes, there are languages that broadly offer faster execution, but that's far from the only factor in choosing a solution.

The main issue from my perspective is that the event loop can easily get blocked by CPU-bound tasks, preventing it from doing other things, e.g. responding to HTTP requests. You hit a similar problem with a Java servlet runner: if a couple of your threads are bogged down on CPU-heavy tasks then they can't be responding to requests.
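For concreteness, here's a minimal sketch of that blocking (not from the comment above; the endpoints and the fib() workload are made up): a single synchronous handler stalls every other request on the same Node process.

    // block-demo.js: a CPU-bound handler blocks the single-threaded event loop.
    const http = require('http');

    function fib(n) {              // deliberately slow, CPU-bound work
      return n < 2 ? n : fib(n - 1) + fib(n - 2);
    }

    http.createServer((req, res) => {
      if (req.url === '/slow') {
        const result = fib(42);    // several seconds of pure CPU
        res.end(`fib(42) = ${result}\n`);
      } else {
        // While /slow is running, even this trivial route can't respond,
        // because the event loop is stuck inside fib().
        res.end('pong\n');
      }
    }).listen(3000);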

My personal preference would be to split CPU-heavy operations out so that they happen elsewhere, regardless of language, e.g. having large PDFs generated by an internal microservice rather than by the web server, or maybe via a queue in some cases. But that's just a personal preference.
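As a rough sketch of that split (the pdf-service host, port and routes here are assumptions, not anything from this thread), the web-facing process can simply stream the request through to an internal service:

    // The front-end web server hands PDF generation to an internal
    // service instead of doing the CPU-heavy rendering itself.
    const http = require('http');

    http.createServer((req, res) => {
      if (req.url.startsWith('/report.pdf')) {
        // Hypothetical internal microservice that renders the PDF.
        const upstream = http.get(
          { host: 'pdf-service.internal', port: 8080, path: req.url },
          (pdfRes) => {
            res.writeHead(pdfRes.statusCode, pdfRes.headers);
            pdfRes.pipe(res);          // stream the PDF straight through
          }
        );
        upstream.on('error', () => {
          res.writeHead(502);
          res.end('PDF service unavailable\n');
        });
      } else {
        res.end('ok\n');               // everything else stays responsive
      }
    }).listen(3000);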




It's relatively straightforward (but moderately involved) to split out CPU-heavy operations in node.js so they don't block the event loop. A rough sketch would look like this:

* Write the CPU-heavy code in C++ and bridge it back to node.js as an add-on: http://nodejs.org/api/addons.html

* node.js is built atop libuv, so use libuv's work queues to offload the CPU-heavy code to a worker thread: http://nikhilm.github.io/uvbook/threads.html (a sketch of the JavaScript side follows below)
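To make the shape of that concrete, here's a hedged sketch of the JavaScript side only, assuming a hypothetical compiled addon at ./build/Release/heavy.node that exports computeAsync(input, callback) and schedules its work with uv_queue_work:

    // The event loop stays free while the addon's work runs on
    // libuv's threadpool; Node just sees an ordinary async call.
    const http = require('http');
    const heavy = require('./build/Release/heavy.node'); // assumed addon

    http.createServer((req, res) => {
      heavy.computeAsync(req.url, (err, result) => {
        if (err) {
          res.writeHead(500);
          return res.end('computation failed\n');
        }
        res.end(`result: ${result}\n`);
      });
    }).listen(3000);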


Really?

Companies I worked for always had two environments, one for new features and one for performance.

PHP and C, for example: new features were implemented in PHP, and if they caught on and needed better performance they got reimplemented in C.


Sure, if the companies you've worked for are in a field that needs those incremental performance gains and are willing to pay for them, then that's totally rational.

Typically I see client-side performance concerns outweighing server-side ones by a ratio of around 70/30, and within that server-side slice the time is dominated by I/O, waiting on data stores or the file system, at 90/10 or more. That puts the saving actually available to language or algorithm changes in the app layer at under 3% (30% × 10%) for the kinds of apps I've worked on.

I usually work at companies that are starting out, looking for product-market fit, where those marginal gains aren't worth the cost of reimplementing.


Fair enough. The companies I talked about didn't do this right from the start; most of them only started after a few years.


sounds expensive $$$


> can easily get blocked by CPU-bound tasks,

That's not a problem with Node if you implement a distributed queue system. The job-queue processes will block, but not your web server.
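As one hedged illustration of that split (the parent doesn't name a library; BullMQ and a local Redis on the default port are assumptions here), the web process only enqueues jobs and a separate worker process does the blocking work:

    // web.js: the HTTP process only enqueues; it never runs the heavy code.
    const { Queue } = require('bullmq');
    const pdfQueue = new Queue('pdf', { connection: { host: '127.0.0.1', port: 6379 } });

    async function enqueueRender(docId) {
      await pdfQueue.add('render', { docId });   // returns almost immediately
    }
    enqueueRender(123);

    // worker.js: run as a separate process; blocking here is harmless.
    const { Worker } = require('bullmq');

    function renderPdf(docId) {
      // stand-in for real CPU-heavy PDF rendering
      let checksum = 0;
      for (let i = 0; i < 1e9; i++) checksum = (checksum + i) % 7;
      return { docId, checksum };
    }

    new Worker('pdf', async (job) => renderPdf(job.data.docId),
               { connection: { host: '127.0.0.1', port: 6379 } });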


Yep, you could use a queue. On the plus side, that isolates the work from the fragility of a Node process. On the downside, it brings a specialist infrastructure requirement and often complex configuration rules, and it can be awkward when you need to return the result within a single HTTP request/response cycle.


What about web workers? I believe the whole idea behind them was to allow heavy tasks to run in background threads. I haven't used them personally, but a quick search indicates that they are available on node.js via npm [0]. Unless I've misunderstood the whole concept and they would still be able to block your server's responses to HTTP requests.

[0] - https://www.npmjs.com/package/webworker-threads
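For concreteness, the worker idea looks roughly like this in Node. This sketch uses the built-in worker_threads module rather than the package in [0], so the API differs, but the concept (push the heavy task onto another thread so the main thread keeps serving requests) is the same:

    // main.js: offload a CPU-bound task so the event loop keeps responding.
    const { Worker } = require('worker_threads');

    function runHeavyTask(n) {
      return new Promise((resolve, reject) => {
        const worker = new Worker('./heavy-task.js', { workerData: n });
        worker.once('message', resolve);   // result computed off the main thread
        worker.once('error', reject);
      });
    }

    runHeavyTask(42).then((result) => console.log('fib:', result));

    // heavy-task.js: runs in its own thread; blocking here is harmless.
    const { parentPort, workerData } = require('worker_threads');
    const fib = (n) => (n < 2 ? n : fib(n - 1) + fib(n - 2));
    parentPort.postMessage(fib(workerData));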


Yep. Web workers, child processes, whatever works. My preference is to use microservices to keep things isolated. That comes with an overhead of a millisecond or two, but for most purposes that's fine.
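For the child-process variant, a minimal sketch (the file names and message shape are made up):

    // parent.js: fork a child so the web process never runs the heavy code.
    const { fork } = require('child_process');

    function heavyInChild(payload) {
      return new Promise((resolve, reject) => {
        const child = fork('./heavy-child.js');
        child.once('message', (msg) => { resolve(msg); child.kill(); });
        child.once('error', reject);
        child.send(payload);
      });
    }

    heavyInChild({ pages: 200 }).then((msg) => console.log('rendered:', msg));

    // heavy-child.js: a separate OS process; blocking here is harmless.
    process.on('message', ({ pages }) => {
      let work = 0;                           // stand-in for real rendering
      for (let i = 0; i < pages * 1e7; i++) work = (work + i) % 7;
      process.send({ pages, work });
    });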



