A group of smart WebAssembly experts tried to start a company around the idea of incrementally porting to WebAssembly, and wrote an excellent post-mortem of why that didn't work: https://zaplib.com/docs/blog_post_mortem.html
That was interesting to read. Apart from the wasm/Rust experiment itself, they got real customers and validated their idea in the real world. Still an amazing result.
Looking to get the perspective of some HN'ers here. I am primarily an embedded engineer working with C. Our (small) software team also has a few web people who work on the web UI of our embedded Linux product. Obviously, there is a ton of potential for shared code between the two products using WASM. I recently ported one of our C libraries to WASM as an experiment with Emscripten and was blown away by how easy it was. I can't run the unit tests in the browser yet, but it seems to have worked perfectly. However, the JS "glue" code produced by the Emscripten compiler is really scary looking, and as I understand it, is basically an abridged C runtime that needs to run in the browser. (A rough sketch of what calling into that glue looks like is included below the questions.) My questions are therefore:
1) Is it too early to adopt WASM for production use in a small company with limited resources?
2) If we did decide to go all in on WASM, what sort of gotchas might we be dealing with?
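For context, calling into the generated glue typically ends up looking something like this (rough sketch, not my exact code; `parse_version` is just a placeholder export name, and it assumes cwrap was kept in the Emscripten build):

    // Rough sketch. Assumes the library was built with something like:
    //   emcc parser.c -o parser.mjs -sMODULARIZE -sEXPORT_ES6 \
    //     -sEXPORTED_FUNCTIONS=_parse_version \
    //     -sEXPORTED_RUNTIME_METHODS=cwrap
    // `parse_version` is a placeholder for a C export: int parse_version(int);
    import createModule from './parser.mjs';

    const Module = await createModule();

    // cwrap wraps the C export in a plain JS function.
    const parseVersion = Module.cwrap('parse_version', 'number', ['number']);

    console.log(parseVersion(2)); // runs inside the wasm module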
> Is it too early to adopt WASM for production use in a small company with limited resources?
It really depends on what you are doing with it.
For things like "I've got this C lib that I want to be able to call from the web" it's probably fine. For things like "I want to write my whole webpage in wasm" it's not ready for that (and, frankly, likely never will be).
It's a matter of finding the right fit.
> If we did decide to go all in on WASM, what sort of gotchas might we be dealing with?
Depends on what you mean by "all in." If you mean "I want ALL my webapp logic to live in wasm," then the biggest gotcha you'll face is that shipping stuff into and out of the VM (including DOM elements) is really painful. IMO, UX continues to be best done with JavaScript or compile-to-JavaScript languages (TypeScript, for example). Using a language like C, C++, or Rust to do UX work will simply not be pleasant, will be hard to hire for, and will complicate a lot of your build system.
Now, if by all in you mean "This logic only lives in this library and we are shipping that out as a WASM module" then I think wasm will work well there. Better, in fact, than other options like kotlin to native or J2CL (IMO).
That said, there are fledgling UX frameworks written entirely for WASM. However, they are all relatively young and AFAIK, not exactly well established.
This becomes an engineering decision and balancing act that I don't think can be 100% answered for your company.
I don't expect any of the UI work will ever be done in WASM (because it doesn't need to be shared with the embedded side of things). It is more about porting standalone libraries that would be useful for both, like file parsing and networking libraries. (In particular, the library I ported was a file parser.)
In that case, yeah, it will work. You just have to realize that WASM has a REALLY tight sandbox around it, so getting a file/network stream into it will be somewhat of a pain.
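To make that concrete, with Emscripten-style glue the usual dance is: read the file in JS, allocate a buffer in the wasm heap, copy the bytes across, then call your export. A rough sketch (`parse_buffer` is a hypothetical export, and it assumes _malloc/_free and HEAPU8 were kept in the build):

    // Rough sketch: feeding a File (e.g. from <input type="file">) into a
    // wasm parser through Emscripten glue. `parse_buffer` is a hypothetical
    // C export: int parse_buffer(const uint8_t *buf, size_t len);
    async function parseFile(Module, file) {
      const bytes = new Uint8Array(await file.arrayBuffer());

      // Copy the bytes into the wasm linear memory (its own little sandbox).
      const ptr = Module._malloc(bytes.length);
      Module.HEAPU8.set(bytes, ptr);

      try {
        return Module._parse_buffer(ptr, bytes.length);
      } finally {
        Module._free(ptr); // the wasm heap is not garbage collected
      }
    }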
WASM works best for CPU-bound work and limited GPU processing (through WebGL).
Never say never, but the DOM and JavaScript environment relies pretty heavily on garbage collection. WASM is inherently not garbage collected, and that makes it really hard to share data between the environments, so I'd say that it is unlikely.
There's nothing about WebAssembly that's "inherently" not garbage collected. There is a "Host GC" proposal [0] along with a "reference type" proposal that would allow holding handles to DOM or JS objects from WebAssembly and using the JS GC for WebAssembly objects.
That's not an issue if you have a way to pin/unpin references as they move into the non-GCed world; a lot of GCed languages like Java/Python have native bridging.
It is a good way to get memory leaks, but a lot of websites already have those.
I think WASM is ready for production use. Emscripten does produce big blobs of JavaScript. If that bothers you, it's possible to do WASM without Emscripten. If your embedded code doesn't use a lot of libc, then it might actually work for you. https://surma.dev/things/c-to-webassembly/
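If that route appeals, the loading side is pleasantly small, since the browser's built-in WebAssembly API is all you need. A minimal sketch, assuming a freestanding build with no libc along the lines of that article (`add` is just a placeholder export):

    // Sketch: loading a wasm module with no Emscripten glue at all.
    // Assumes a freestanding clang build roughly like the article's, e.g.
    //   clang --target=wasm32 -nostdlib -Wl,--no-entry -Wl,--export-all -o add.wasm add.c
    // where add.c defines: int add(int a, int b) { return a + b; }
    const { instance } = await WebAssembly.instantiateStreaming(
      fetch('add.wasm'),
      {} // import object: empty, since nothing from JS/libc is needed
    );

    console.log(instance.exports.add(2, 3)); // 5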
The big gotcha used to be debugging but Chrome dev tools actually has DWARF support now. Of course as an embedded developer you're probably already used to poor debugging support.
Heavy use of libc for file I/O and dynamic memory allocation, but not much else. Some of our code could be converted to using static memory buffers, but we also use some third-party libraries that are not so easily changed. Thanks for the link, I will have a look!
It's hard to beat JavaScript because the VMs are amazing. If you know a few tricks, like how to let the VM know you want integer math [1], you can get performance that's not too far off from native C in many cases. If you have a JavaScript application with a few hot functions that are slow, optimizing the JavaScript usually makes more sense than reaching for WASM.
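For anyone unfamiliar with the trick in [1]: it's just a bitwise OR with zero, which truncates to a 32-bit integer and signals integer math to the VM. A minimal sketch:

    // `| 0` truncates to a 32-bit integer and, in the asm.js tradition,
    // signals to the VM that it can stay on an integer fast path.
    function dot(ax, ay, bx, by) {
      ax = ax | 0; ay = ay | 0;
      bx = bx | 0; by = by | 0;
      return (((ax * bx) | 0) + ((ay * by) | 0)) | 0;
    }

    // It also doubles as a cheap float -> int cast:
    const cell = (3.7 / 0.5) | 0; // 7

    console.log(dot(2, 3, 4, 5), cell);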
In my experience the place wasm really shines is in implementing custom data structures.
A wasm b-tree, skip list, rope, etc seems to outperform the equivalent javascript code by many times.
Edit: I just whipped up a real benchmark to test this, replaying some text editing traces[1].
Replaying the automerge-perf editing trace in javascript naively takes 610ms. Using a javascript based skip list it takes 77ms[2]. In native rust, with a rust port of the same skip list code I can process the same editing trace in 6ms[3]. Or 20ms when that rust code is compiled to wasm.
So in this case, we're seeing about a 4x performance improvement using wasm, or 13x performance improvement using native code.
This makes sense to me; graphs made of JavaScript objects are not going to be as fast as C, although I think knowledge of how the JIT works can still help you get a lot closer than 10x (e.g. ensuring everything is monomorphic). Now if you implemented a b-tree completely inside of a typed array, instead of using JavaScript objects as the nodes, that would be how you would approach C speeds. But if you need a lot of custom data structures, WASM is probably a better choice at that point.
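To make that concrete, here's a toy sketch of the idea (a plain binary search tree rather than a b-tree, and it ignores growth and deletion for brevity): the "nodes" are just integer offsets into a single Int32Array, so the structure allocates no per-node JS objects.

    // Toy sketch: a binary search tree whose nodes are 3 int32 slots
    // (value, left, right) packed into one Int32Array.
    const SLOTS = 3, VALUE = 0, LEFT = 1, RIGHT = 2, NIL = -1;

    class IntTree {
      constructor(capacity = 1024) {
        this.heap = new Int32Array(capacity * SLOTS); // no growth, sketch only
        this.count = 0;
        this.root = NIL;
      }
      _alloc(value) {
        const node = this.count++ * SLOTS;
        this.heap[node + VALUE] = value;
        this.heap[node + LEFT] = NIL;
        this.heap[node + RIGHT] = NIL;
        return node;
      }
      insert(value) {
        if (this.root === NIL) { this.root = this._alloc(value); return; }
        let node = this.root;
        for (;;) {
          const side = value < this.heap[node + VALUE] ? LEFT : RIGHT;
          const next = this.heap[node + side];
          if (next === NIL) { this.heap[node + side] = this._alloc(value); return; }
          node = next;
        }
      }
      contains(value) {
        let node = this.root;
        while (node !== NIL) {
          const v = this.heap[node + VALUE];
          if (v === value) return true;
          node = this.heap[node + (value < v ? LEFT : RIGHT)];
        }
        return false;
      }
    }

    const t = new IntTree();
    [5, 2, 8, 1].forEach(v => t.insert(v));
    console.log(t.contains(8), t.contains(3)); // true false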
About a decade ago I hand-ported the chipmunk2d physics engine to javascript. After porting it, I tried about 30 different things trying to make the code run faster. Some things I tried made it faster. Some made it slower.
The most performance-critical section is the "arbiter applyImpulse" physics code[2], which is all math.
I tried putting everything in a big TypedArray to avoid all the object lookups, but performance got slightly worse, not better. I still have no idea why. I think the reason is that the optimizer couldn't optimize the TypedArray lookups at the time, not in the same way it could optimize math on normal objects.
Maybe v8 is more clever now? I dunno!
The biggest performance win I got was from inlining vector fields. Chipmunk heavily uses {x, y} 2d vectors. Lots of objects looked like this: {pos: {x, y}, velocity: {x, y}, ...}. Flattening everything to {posx, posy, velocityx, velocityy, ...} made the code uglier, but it caused a massive performance improvement because it reduced the number of objects allocated on the heap. (From memory, this one change reduced heap allocations by about 70%, and it caused a double-digit performance improvement overall.)
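Roughly the shape of the change, as an illustrative sketch (not the actual Chipmunk code):

    // Before: every body carries two extra heap-allocated vector objects.
    function makeBodyNested(x, y) {
      return { pos: { x, y }, velocity: { x: 0, y: 0 } };
    }

    // After: the same state flattened into one object, zero nested allocations.
    function makeBodyFlat(x, y) {
      return { posx: x, posy: y, velocityx: 0, velocityy: 0 };
    }

    function stepFlat(body, dt) {
      // Uglier to read, but the hot loop never touches a nested object.
      body.posx += body.velocityx * dt;
      body.posy += body.velocityy * dt;
    }

    const b = makeBodyFlat(0, 0);
    b.velocityx = 1;
    stepFlat(b, 0.016);
    console.log(b.posx); // 0.016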
As I see it, Javascript will always be much slower than other languages for complex data structures because everything is stored on the heap.
10 years ago was around the time asm.js was just getting started, and that's what drove a lot of these optimizations. I think V8 is a lot more clever now. And the other browsers have largely caught up as well (or even surpassed V8 in some cases). And you can now safely ignore IE :)
I hear what you're saying, but my intuition disagrees with you. asm.js was already optimized in browsers when I did that test. I think people tend to overestimate how much subsequent versions of V8 improve performance in niche areas like TypedArrays, especially now that V8 is very mature.
I suspect that if I re-ran that test, performance would have improved across the board, but I'd get the same relative result with the TypedArray code.
It would be fascinating to try though. I wish I kept the code.
Yeah, you shouldn't do it all over the place prophylactically. It's a technique for optimizing functions that are performance critical. Although I find `| 0` surprisingly well known these days. I sometimes use it in places where I want a cast to integer.
Most of those optimizations stem from Mozilla's asm.js (http://asmjs.org/), which specified them as an explicit signal. Note that if you're still supporting IE (commiserations, and I hope you can decommission it ASAP) this will slow down your code (only in IE, of course) instead of speeding it up.
He makes a point of showing how hard it is to achieve the same performance as JS with wasm, but the majority of the problems he runs into are AssemblyScript-related. Also he doesn't mention that wasm is faster to parse than JS. I'd be interested in a comparison of JS and wasm from the POV of a UI-heavy, compute-light app.
To see results, show "vanillajs-1" and "wasm-bindgen", then hide everything else. WASM is about 6% slower, 18% longer startup, and 66% more memory usage.
Note this is a UI benchmark so the results are overwhelmingly dominated by DOM interop. Things should improve if the WASM interface types proposal ever lands.
I get that it would have a startup cost but I'm shocked that it uses more memory. Everyone complains about what a memory hog electron apps are and I assumed it was because of JS. But maybe we need something other than micro benchmarks to compare.
The absolute numbers aren't that different. You need 100s of MB just to display an empty Chrome window. Then the 66% is the difference excluding that baseline, e.g. it might be 300MB to load Chrome, then you measure 301MB vs 301.66MB. I guess that might be a bit misleading.
I think the electron memory usage comes down to the massive HTML spec and bloated old browser codebases. It's not JS. Node+V8 doesn't have nearly the memory usage of Chrome+V8.
That difference is what I'd be interested in anyway as a developer who focuses on PWAs (where the cost of chrome is paid for already). I'm trying to work out whether a WASM stack would help runtime performance, memory usage, and startup time, but based on these numbers it doesn't look so inspiring.
While the parsing step alone may indeed be faster, I don't think that aspect by itself is enough to make a difference for an application.
Browsers are often able to defer parsing of JS functions until they're called, so dead JS code doesn't cost much in parsing time.
In JS, the first run goes through an interpreter (without a delay for compilation/optimization), so JS has pretty low latency for initial execution.
JS is relatively easy to split into pieces and lazy load (there are various bundlers that support "chunking"), and not loading code is faster than the fastest parser. WASM could theoretically do that too, but current languages and tooling are more geared towards monolithic executables. For UI, where time to interactive matters most, JS will likely do better than a big blob of WASM.
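For reference, the JS side of that splitting is just a dynamic import(), which bundlers turn into a separately fetched chunk (minimal sketch; './heavy-editor.js' is a made-up module name):

    // './heavy-editor.js' (a made-up module) is only fetched and parsed
    // when the user actually opens the editor.
    async function openEditor(container) {
      const { createEditor } = await import('./heavy-editor.js');
      return createEditor(container);
    }

    document.querySelector('#edit')
      .addEventListener('click', () => openEditor(document.body));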
WASM still needs to call out to JS for DOM interactions, so if your UI is DOM-based, you'll need a bunch of JS anyway, and have JS<>WASM communication overhead.
Browsers do a fast partial parse to figure out things like scoping and variable names, but skip actually building an AST so it's still a lot quicker than full parsing (e.g. https://v8.dev/blog/preparser)
Does wasm support anything other than monolithic executables? The last I used it, multiple wasm bundles had to be loaded up by JavaScript and glued together with JavaScript.
I'm all in on Wasm in the medium-to-long term, as I think it will eventually dominate everything (embedded, mobile and web). The comparison with JS is a bit unfair, as JS has had a 20+ year head start plus billions of dollars (yes, billions) in R&D to make it as performant as possible.
I think JS is a great language/platform and I consider V8 an amazing piece of engineering, it would be on my list for the seven wonders of the digital world. Wasm will get there someday and will take the place of JS, IMO.
We use WASM today and it's not for any performance reasons. Hopefully, like you say that will become an advantage too down the road as WASM matures. But, simply, there are parts of our application which we feel much more comfortable with and we find much more manageable being implemented in Rust and compiled to WASM. So for us WASM grants portability and language choice in a first class manner (as opposed to transpiling to and from JS and all the glue that entails). It's a real shot at dismantling the browser JS hegemony.
It sounds more like Rust gave you that choice, because it would have been just as good if the rust code transpiled to JS. I’m not knocking WASM just pointing out that you’re not praising any particular qualities of WASM, but of Rust.
Yes, but there's also a reason why the Rust -> JS route never took off.
Wasm is positioning itself as the common runtime for a lot of frontends (term borrowed from LLVM). You could see it as some sort of spiritual successor to the JVM, and it may actually accomplish the goal of "Write once, run anywhere" to a much greater extent.
(Btw, I am not criticizing Java here, I've worked with it and the JVM and other derivative languages for almost a decade and they all are quite valuable tools!)
That just runs wasm inside Graal (and thus OpenJDK), so I don't think it is an indication of anything, other than Graal's truly exceptional novelty of being the one VM to rule them all (referencing Graal's white paper).
Personally I've never cared about the performance aspect. If you care about that, you probably shouldn't be targeting the browser in the first place, but instead a native app. What has always turned me off about WASM is that all the tutorials I've read say you need some kind of JavaScript shim to get it working. That doesn't feel like a first-class solution to me. If I can do WASM without any JavaScript, then I am interested.
Everything has to be in the browser now. People don't download and run programs any more.
Even if they want to, the programs have to be approved by Master Control. Apple [1] and Microsoft [2] are gradually tightening the restrictions for installing an executable on their platform. Downloading a program and installing it is now called "sideloading". Even for desktops.
I have about 20 exes in a folder that say you're wrong. I also have about 9 other folders with unzipped programs that say you're wrong. Nothing is stopping anyone from downloading an executable file and running it on a desktop computer. Nor should it.
How do you think people do software development? Or do you just think new programs come from magic pixie dust?
"Nothing is stopping anyone from downloading an executable file and running it on a desktop computer." Have you ever had a job? If you're in a workplace you probably don't have admin rights to install software on your desktop unless you are a dev, and even then it's not exactly guaranteed.
For home computers, there's now "Windows S".[1] This is totally locked down.
"To increase security and performance, Windows 11 in S mode runs only apps from Microsoft Store."
It's possible, for now, to turn off Windows S mode. For now. Usually. The Windows Store server has to approve turning it off. In an enterprise configuration, some in-house server has to approve.
> It's possible, for now, to turn off Windows S mode. For now. Usually.
As much as I enjoy seeing someone trash Microsoft, this is just a dumb take. Like I said to the other user, how do you think software development happens? Microsoft wants people to develop using Windows [1][2][3]. You can only do that by installing additional software. This is not going away, and never will. Yes, some corporate settings will have locked-down computers, but that's how it's already been for several decades.
> If you're in a workplace you probably don't have admin rights to install software on your desktop unless you are a dev, and even then it's not exactly guaranteed.
OK then, ask the admin to install it, I don't see the problem. If it's appropriate work software, then it shouldn't be an issue. A web browser should be for... browsing the web. If you're simply using a browser to bypass admin restrictions, it seems like you're doing something wrong.
An admin won't just install it. Any software installed on a system connected to the corporate network is a security risk and would need to be vetted.
This takes weeks and can often take months. If the choice is between writing the business case, getting management sign off and waiting weeks for deployment vs running it immediately in the browser, I'm choosing the browser.
My read on that comment was more along the lines of everything must also be available to run in the browser now because users demand it.
The performance of apps run in the browser has improved so much lately that users' habits and expectations have shifted.
There are good use cases for WASM in the browser where the js shim is transparently provided for you, e.g. for game engines that export to web. As for non-web use, I believe the wasm runtimes that exist generally don't require js shims.
Also even if you don't care about performance, WASM can arguably provide a sizable security benefit for a number of use cases such as not-fully trusted plugins.
Performance is important, but the point of WebAsm was to have an asm for other languages to compile to in the first place, so we could finally stop making embarrassing utterances like “JavaScript is the assembly language of the Web.”
> custom compiler for a TypeScript-like language targeting WebAssembly. The reason I like AssemblyScript (or ASC for short) is because it allows the average web developer to make use of WebAssembly without having to learn a potentially new language like C++ or Rust. It's important to note that the language is TypeScript-like. Don't expect your existing TypeScript code to just compile out of the box. That being said, the language is intentionally mirroring the behaviors and semantics of TypeScript (and therefore JavaScript), which means that the act of "porting" TypeScript to AssemblyScript is often mostly cosmetic, usually just adding type annotations.
I haven't had the time to read more, but I'd be curious to learn why they didn't just target full TypeScript compatibility. That would reduce the barrier to entry materially.
I was wondering about the use case.
From what little I've read, AssemblyScript is an alternative for devs who want to stick to a TypeScript-style language with some fine-grained control over lower-level WebAssembly. I see some utility for niche cases where performance is needed for complex data structures; not entirely convinced though, especially with V8 giving excellent JS performance out of the box.
Really? Aside from the dark red links on dark blue background, I couldn't disagree more - it's a rare example where picking custom colours for text & background really does work nicely.
The way I understand it, WebAssembly is faster because you skip the parsing of JavaScript. If parsing JavaScript is not a bottleneck in your app, it's not going to be faster. I'm also under the impression that people want it because they can use a language other than JavaScript.
I could be wrong, but that's my going-by-at-60-mph, looking-out-the-window view.
Interesting. That was an out-of-date misunderstanding on my part. I thought it was a binary JS AST. Thanks for the comment instead of just a negative vote.