The code they are benchmarking is written in C, then compiled to asm.js (presumably with Emscripten). In one version, they remove the "use asm" header, but the code appears identical.
That seems like a strange test. They aren't testing JS against asm.js, or even compiled-to-vanilla-JS vs compiled-to-asm.js. Both versions are compiled to asm.js (complete with all the strange annotations like num|0); only one has the asm.js optimizations disabled. I don't know enough about asm.js to know if those annotations are slower than vanilla JS in unoptimized engines, but I suspect they might be.
I was also surprised that asm still won in Chrome - I thought Chrome optimized for asm-like code without checking for the "use asm" flag.
> I don't know enough about asm to know if their annotations are slower than vanilla JS in unoptimized engines, but I suspect they might be.
The opposite is true, for the most part. JS engines, even without asm.js optimizations, utilize the fact that the | operator emits a 32-bit integer (per the JS semantics), so it helps their type inference.
(The engine needs to be good enough to get rid of the actual 0 value in the |0 coercions, but JS engines have been that good for several years now anyhow.)
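To make the |0 pattern concrete, here is a minimal sketch of the asm.js-style coercion being discussed (the function and values are illustrative, not taken from the benchmark):

```javascript
// The |0 annotation forces the result of the bitwise OR to a signed
// 32-bit integer (per the ECMAScript ToInt32 semantics), which is the
// type hint asm.js-style code gives the engine.
function add(x, y) {
  x = x | 0;           // coerce parameter to int32
  y = y | 0;
  return (x + y) | 0;  // coerce the result, so it stays int32
}

// int32 overflow wraps around instead of producing a float:
console.log(add(2000000000, 2000000000)); // -294967296
// fractional values are truncated toward zero:
console.log(3.7 | 0); // 3
```

This is why the coercions can help rather than hurt a non-asm.js engine: the |0 tells its type inference that the value is always an int32, and a good optimizer then drops the actual OR-with-zero entirely.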
> I was also surprised that asm still won in Chrome - I thought Chrome optimized for asm-like code without checking for the "use asm" flag.
Chrome detects "use asm", and enables TurboFan on such code. All JS engines today detect "use asm", except for JavaScriptCore.
This was very infuriating. The server seemed to be under load and took several minutes to load. I was not going to wait for it to load again to retry the simulation.
Interesting. I slowed a game down to 1000ms per move so I could realistically follow what was going on. The game started 1.Nf3 Nf6 2.e4?
So White sacrifices (loses?) a pawn for no compensation on move 2. Not really sure what this says in respect of the experiment in general, but it does smell a little.
Also: showing the dark squares would be a massive usability advance! And it would be nice if they respected some simple and fundamental chess conventions (for example, presenting the moves the way I did in my comment above - move numbers increment after whole moves, not half moves).
Well, I guess if it really was out of a book, it would imply that the quality of the engine itself wasn't at fault, but rather a bad opening book. Also, without that line being thoroughly researched, it'd be difficult to say whether it's a blunder, or a clever sacrifice. Again, it could be an incomplete opening book which led the engine to offer the sacrifice, but then didn't have info on how to capitalise on it.
Though in this case, I think it's almost certainly a blunder, that line does not appear to be a well known opening.
There are opening books, and there are opening books. 1. Nf3 Nf6 2. e4 can no doubt be found in some opening books due to the sheer fact that someone, somewhere, played it, and it got archived. Then yes it's a line that could get selected at random.
It's not a serious opening though, it looks frivolous - could be played in simultaneous games, or in bullet chess for its surprise value etc.
Given the insane amount of opening theory research humanity has done by now, if 2.e4 here were even remotely close to a clever sacrifice, the chess community would have noted it long ago.
I set it to as slow as it could go so I could watch the game. It was weird because it got to a point where it declared a draw, but the pieces were still moving around so I left it alone. Eventually a couple more pieces were captured and the non-optimized side managed to win, so it updated from "draw" to "checkmate".
I think it rather means that the search time was short enough that neither search tree got deep enough to form advantageous moves. (When I check the detailed output, it seems that the asm.js version visits at least 2x as many nodes as the un-optimized version, independent of Time-per-turn)
That doesn't mean anything; asm.js only increases the number of positions searched. If it searched more positions and still lost, it means either it searched the same depth on average (the extra computational power wasn't enough to finish a deeper search) or it just happened to lose despite the advantage. Check whether it's searching more positions on average.
When you push it up to 1000ms per move, since the jump in cost between searching 15 plies ahead and 16 plies ahead is so large, both engines end up searching 15 plies and win / lose about half the time, even though asm.js is visiting more nodes.
At high level chess (which this is), white has a significant advantage over black. Always picking asm.js as white (and not providing a setting to change it) is a bit disingenuous.