Firefox 53 produced a log that matched what I expected, but when I ran it a few more times the entries came out in a different order, which I then read was the order they were actually supposed to appear in.
>Microsoft Edge, Firefox 40, iOS Safari and desktop Safari 8.0.8 log setTimeout before promise1 and promise2 - although it appears to be a race condition. This is really weird, as Firefox 39 and Safari 8.0.7 get it consistently right.
So Firefox 39 may have gotten it consistently right on his machine, Firefox 40 did not on his machine, and Firefox 53 still does not on mine.
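For context, the snippet being discussed is presumably something very close to the example from the tasks/microtasks article linked at the bottom of the thread:

  console.log('script start');

  setTimeout(function () {
    console.log('setTimeout');
  }, 0);

  Promise.resolve()
    .then(function () {
      console.log('promise1');
    })
    .then(function () {
      console.log('promise2');
    });

  console.log('script end');

  // Per the spec, promise callbacks are microtasks and run before the next
  // task, so the expected order is:
  // script start, script end, promise1, promise2, setTimeout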
One thing that really helped me understand the JS event loop was realizing that the callback from an async call CAN block. Something like
setTimeout(() => {
  var x = 0;              // start from 0; leaving x undefined would skip the loop entirely
  while (x < 2000000) {
    x += 1;               // CPU-bound work: nothing else can run until this callback returns
  }
});
will block. Event listeners, timeouts, and xhr requests just wait to receive a response. They're not actually doing anything while that happens. And their callbacks basically just become 'synchronous' when they get that response.
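A quick way to see that: even if the timer has already expired, its callback has to wait for the running code to finish (the timings here are arbitrary):

  setTimeout(function () {
    console.log('timer callback');    // queued after ~100 ms, but can't run yet
  }, 100);

  var start = Date.now();
  while (Date.now() - start < 1000) {} // CPU-bound: ties up the only thread for 1 s

  console.log('loop done');            // always logs before 'timer callback'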
Usually when people refer to something as blocking / non-blocking they mean I/O bound tasks not CPU-bound.
Something like a while loop blocking is surely to be expected?
Many would argue this is not an example of a blocking call. A good example would be writing/reading from localStorage (as some browsers won't return until it's written to physical storage, i.e. the HDD).
Well, a novice coder wouldn't know that. The end behavior isn't going to be any different than writing to localStorage (nothing can run until it's done), right? And you very well could move that code to a different thread via a web worker, couldn't you? So if it walks like a duck, and it blocks like a duck...
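For what it's worth, the web-worker version of that busy loop might look roughly like this (an inline worker via a Blob URL, purely as a sketch):

  // The same CPU-bound loop, but run off the main thread.
  var workerSrc = 'var x = 0; while (x < 2000000) { x += 1; } postMessage(x);';
  var blob = new Blob([workerSrc], { type: 'application/javascript' });
  var worker = new Worker(URL.createObjectURL(blob));
  worker.onmessage = function (e) {
    console.log('worker finished with', e.data); // the main thread was never blocked
  };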
I am highly confused as to what you might have thought other than that the callback itself could block... would you mind sharing? (I am an educator and teach a class on programming languages, so my question here is one of some serious interest: to understand the beginner's mindset better.)
I just had a hard time building a mental model of what the async function was doing, and the analogy I had heard before made it sound more like a multi-threaded process (something I also didn't fully grasp).
The analogy I had heard was that of a check-in window at a doctor's office: the nurse would give you a form to fill out and when you were done, you could get back in line and hand the form back to the receptionist. I think the analogy was trying to show that there weren't multiple windows to check in at, nor did you have to stand at the window while you filled out the form, preventing people behind you from getting the form. But the reason it's a bad analogy, and why I was confused, was that it made it appear that you (the patient) are the callback function, and you're doing some processor-intensive work when you sit down to fill out the form. It would have been clearer had the analogy specified that you are actually a backend server/web-worker, but even that doesn't fit right.
So, I think it would be best to avoid using analogies. The event loop isn't that complicated: async functions just sit around waiting for a response, and they stick their callback on the event queue when they get that response. When the call stack is cleared (of its synchronous functions), the first callback in the event queue moves to the call stack (i.e., it gets called), and that process repeats until the queue is cleared.
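A rough sketch of that model in pseudo-JS (eventQueue is a made-up name; no engine is literally implemented this way):

  // Not real engine code, just the mental model:
  while (true) {
    var callback = eventQueue.shift();  // oldest queued callback, if any
    if (callback) {
      callback();                       // runs to completion on the (now empty) call stack
    }
    // rendering, input handling, etc. get a turn between callbacks
  }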
I can understand the confusion. I think the construct with an inline function that executes something "later but synchronously" is quite special and very specific to the event loop pattern.
If I try to forget all I know about JS, I can think of two different _wrong_ ways to interpret this code:
a) executes the callback immediately in-line, as if it were synchronous.
b) executes the callback at a later time, in a different thread.
Indeed, nothing in the JavaScript _language_ indicates that it has to be single-threaded. Then the author continues by comparing this with C and talks about threads. Again, nothing in the C programming language indicates that it's multi-threaded.
C is low level enough that you can mutex around non-thread-safe stuff without big changes to the underlying implementation. I suppose the JS language spec itself doesn't rule out multi-threading, but I imagine that would be a fairly big endeavor to re-do the guts of any implementation. The current limitations of web workers seem to confirm that.
With a concurrency-safe garbage collector (which isn't even hard to come by) I would argue the exact same is true of JavaScript. It really isn't a language detail and I would argue it would be a weekend project to get real multithreading working for JavaScript (as in, to the point where it works as well as in C, which is to say it might require tons of mutex locks everywhere).
It can be done, and has been in Rhino, Nashorn, and graal.js. The problem isn't in the language per se but in the libraries provided as standard (mutexes, thread safe hashes/property access etc.) and in the implementation. It's much easier to build a JIT when you don't have to worry about concurrent threads of execution, and converting a single threaded JIT to a robust multithreaded one can take a lot of work.
Right. Whether it's the browser's XMLHttpRequest or localStorage or Node.js's readFileSync, JS can block on I/O.
But since the language is single-threaded (or at least, it defines nothing about multiple threads with shared memory), there is a strong incentive not to have blocking functions.
Indeed. Any time JavaScript is executing, there is blocking. One has to be really careful to avoid CPU-bound code in JavaScript or all the benefits of the "non-blocking" model go down the drain.
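In Node terms, the contrast looks like this (assuming some local file data.txt exists):

  var fs = require('fs');

  // Blocking: nothing else runs until the read completes.
  var data = fs.readFileSync('data.txt', 'utf8');
  console.log('sync read done');

  // Non-blocking: the read is handed off and the callback is queued when it finishes.
  fs.readFile('data.txt', 'utf8', function (err, contents) {
    if (err) throw err;
    console.log('async read done');
  });
  console.log('this logs before the async read finishes');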
The event loop architecture is also heavily used in iOS / Cocoa, although it is often not well understood by developers. Each thread has an event loop, including the main UI thread, and many weird behaviors can be understood better once you know a bit about them.
Which made me wonder if a simple implementation of agent-based concurrency in server-side Swift couldn't simply be one agent = one event loop, plus a way to prevent direct calls across agent boundaries. The server is not iOS, but I suppose some of the language facilities should already be there and make it easier to implement.
The interesting question is more or less which GUI frameworks are not built around a single-threaded event loop. Qt and GTK/GLib are too. Adobe AIR too. I think the Java frameworks (AWT, Swing) are as well.
You could probably do the same with PeekMessage in win32, maybe call it periodically in a green threading library, or similar. But the most common mechanism is an event loop. I'm sure people have developed X applications that talk the protocol directly, but it isn't the typical way to do it.
This doesn't mention server-side JavaScript at all. There are lots of blocking routines in Node.js, the standard library is full of them.
But it is interesting that it doesn't mention the oft-repeated meme "JavaScript is single-threaded". It would be nice to see an example of parallel number-crunching without WebWorkers. Is that possible?
I haven't read the article, but my simplistic understanding is that Node.js is single-threaded for your _app_ code, while the async IO that happens is handled via an internal thread pool. So number crunching in your code will pretty much always block the rest of your code (your in-flight IO operations will continue until they complete, but their callbacks can't run until your number crunching is done). I am very tired, and not thinking straight, so I've probably messed up the explanation a bit.
Lately, I've been playing with libev and thread-pools together in Nim; so you write async/await stuff everywhere with futures, including number crunching, which gets delegated out to the thread pool and yields back to your main thread when it's done. It's quite nice, but I forgot how complex this stuff gets!
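Coming back to the Node point above, the behaviour is easy to observe (a sketch; data.txt is just an assumed local file):

  var fs = require('fs');

  fs.readFile('data.txt', function (err, data) {
    console.log('read finished');       // only runs once the loop below has returned
  });

  // CPU-bound work: the read may well complete in the background while this
  // spins, but its callback can't run until the call stack is clear.
  var start = Date.now();
  while (Date.now() - start < 2000) {}
  console.log('crunching done');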
> the async IO that happens is done via an internal thread-pool
Not quite, since using threads internally (a thread per IO operation) would defeat the purpose of an async system, which exists specifically to avoid paying threading costs (context switches, etc.). In layman's terms, the kernel simply signals when "stuff happens", libuv invokes your C callback, and the result is propagated to JS in a platform-agnostic manner.
There are still threads and thread pools in V8 and node for stuff like calculating PI (or, more realistically, password hashing and other number crunching), but that is mostly unrelated to async IO itself.
> using threads internally (thread per IO operation) would defeat a purpose of async system
Yes and no.
Having single-threaded JS (no matter whether the runtime uses epoll) allows JS concurrency concerns to be limited to a small number of places, i.e. no pre-emption. This makes concurrency in JS easier to reason about.
Not without making calls to C-bound processes that plug into the event loop, or having two independent JS runtimes talk to each other, which is also possible on the web through iframes, or in Node via spawning child processes (a rough sketch of the latter follows below this comment).
Single-threaded is the desired behavior in the case of a web UI and of Node, which under the hood is multi-process C/C++ with JS responding to message queues.
It's pretty simple. Functions block. The messaging queue that triggers function calls (in the event-driven, setTimeout, XHR, and talking-to-another-JS-runtime cases) doesn't. So yes, you have to be careful not to lock up your entire server by processing images with JS function calls in the same runtime.
But the huge benefit you get out of this tradeoff is that you don't ever have to worry about two things trying to change something at the same time. Everybody is locked until your functions are finished and with async that doesn't typically take very long.
The synchronous lib calls in Node are mostly for convenience. I use them all the time when doing simple file I/O stuff where I just want to read something from a file, do something to it and stick it in a var or a new file. In a server or more complex application, however, you always want to use async callbacks.
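A rough sketch of the child-process route mentioned a few comments up (the file names and the work split are purely illustrative):

  // parent.js - one runtime hands work to another and gets a message back
  var fork = require('child_process').fork;
  var child = fork('./crunch.js');          // illustrative file name, see below
  child.on('message', function (result) {
    console.log('child computed', result);  // arrives via the parent's event loop
  });
  child.send({ from: 0, to: 50000000 });

  // crunch.js - runs in a separate process, so it can crunch in parallel
  process.on('message', function (range) {
    var sum = 0;
    for (var i = range.from; i < range.to; i += 1) sum += i;
    process.send(sum);
  });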
js is single threaded. parallel number crunching is not possible.
You can fake it by kicking off multiple executions over a data set and chunking the workload across it, so they progress incrementally and in step. The data might be small enough (or the browser fast enough) to give the impression of things being done in parallel.
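Something along these lines (a sketch; the function and parameter names are made up):

  // Process a big array in slices, yielding to the event loop between slices.
  function processInChunks(items, chunkSize, onDone) {
    var i = 0;
    function doChunk() {
      var end = Math.min(i + chunkSize, items.length);
      for (; i < end; i += 1) {
        // ...do the expensive per-item work here
      }
      if (i < items.length) {
        setTimeout(doChunk, 0);   // yield so other callbacks (and the UI) get a turn
      } else {
        onDone();
      }
    }
    doChunk();
  }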
In my time I have encountered quite a few descriptions of how the event loop works in JavaScript.
What I would like to find is _why_ they work like this. What is so important that this is the way it must be done? We live with browsers that lock up for a while until they figure out that some code has inadvertently done a while(true). Promises and similar callback amelioration techniques took ages to turn up, when without the event model they might never have been required at all. Is there a reason for this? Is it a good reason?
The reason is that JavaScript runs within the same event loop as the browser GUI itself.
The history of JavaScript is that it started as a sort of "quick and dirty" scripting language at Netscape. Creating a completely separate event model wasn't possible in those circumstances, so JavaScript ended up being shaped by the single-threaded lowest common denominator of Win32 and other platforms that Netscape ran on.
Because it's a great (quick and easy) way of making sure JavaScript and the browser's internal rendering functions don't run at the same time and cause data races. Asynchronous callbacks share the same queue as GUI tasks, and those tasks can run whilst JS is waiting on an action or an I/O task to finish.
As the UI uses the same thread as JS, if JS blocks, this would lock up the whole window (clicking on things would do nothing), so it's important functions don't take too long or suspend the thread.
There was an experiment within servo to see if rendering and JS could run in separate threads entirely, but I'm not sure how that went? https://github.com/servo/servo/wiki/Design
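As a concrete illustration of the "locks up the whole window" point (the button id is just for the example):

  document.querySelector('#slow-button').addEventListener('click', function () {
    var start = Date.now();
    while (Date.now() - start < 5000) {} // 5 s of CPU work on the UI thread
    // For those 5 seconds nothing repaints and no other clicks are handled,
    // because rendering shares this thread with JS.
  });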
The tradeoff is that JavaScript programmers don't have to understand concurrency or deal with locking. Threads in Java (which became popular around the same time) are trickier both to implement and to use. Early multithreading was buggy, even at the OS level.
You also don't gain a lot from multithreading in typical UI event handling. Most event handlers have to finish quickly anyway or the user will notice lag in updating the display. A UI that responds to events slowly and out of order would confuse users, even if it had no concurrency bugs (which is unlikely).
This reminded me of Phil Roberts' excellent talk a couple of years ago at ScotlandJS: "Help, I'm stuck in an event loop". Well worth the 20 minute watch. https://vimeo.com/96425312
Practically speaking, I think for the most part, when folks talk about blocking execution, they are referring to the browser locking up while the long-running process is running.
To gracefully set expectations for the user, a processing indicator will buy the application some time before the user gets impatient. The setTimeout 0 pattern is useful to delay the long-running process just enough for the browser to redraw the UI and throw up the processing indicator (a rough sketch follows below).
One can further achieve another state in the processing indicator by leveraging time-delayed CSS animations, which run on a thread parallel to JS. So you could use CSS to augment the content of the processing dialog while the blocking JS is running.
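A sketch of the pattern being described (the element id and the work function are illustrative names, and browsers don't strictly guarantee a paint before a 0 ms timeout):

  function doExpensiveWork() {            // placeholder for the real long-running task
    var start = Date.now();
    while (Date.now() - start < 3000) {}
  }

  document.querySelector('#spinner').style.display = 'block';  // assumes a #spinner element

  setTimeout(function () {
    doExpensiveWork();                                          // blocks, but the spinner is already on screen
    document.querySelector('#spinner').style.display = 'none';
  }, 0);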
setTimeout uses the same message queuing system as XHR, DOM events, or communicating with stuff that has its own stack like iframes or workers. That's why a function with a timeout of 0 will not fire until after its enclosing function has popped. Functions block. Waiting for messages doesn't. That's the model for JS concurrency.
https://jakearchibald.com/2015/tasks-microtasks-queues-and-s...
http://www.c-sharpcorner.com/article/overview-of-micro-tasks...