Hi chinedufn, this looks very interesting. I am at the same stage as you re. "I REALLY want to use this for everything ...".
I see you're using wasm_bindgen quite a bit, but from a cursory look at the code, I can't figure out something:
I can see how you're using wasm in the front-end, but could you please elaborate a bit more on how it's used in the backend? I presume that your backend is Rust based?
I spent last weekend porting some code from NodeJS to a wasm module. I got it sort of working on Node, but haven't tried it on the browser yet (ran out of weekend).
I'm quite optimistic about wasm [and Rust with wasm_bindgen]. I'd been thinking of writing a native add-on for Node, but the idea that I can use wasm instead is exciting.
1. Pulls in your application crate
2. Initializes your app (more or less sets the initial state)
3. Renders your app's virtual DOM into an HTML string
4. Serves the HTML to the client, along with the initial state serialized to JSON in a script tag (using serde_json)
5. Also serves the WASM script and the JS that initializes the WASM
So that's your server crate. Then your client crate also pulls in your same application crate and you compile it to WebAssembly and that's what runs browser side.
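To make that flow concrete, here's a rough sketch of what the server crate's handler might look like. This is illustrative only, not Percy's actual API; `render_app`, the state shape, and the `/app.js` path are all made up.

```rust
use serde::Serialize;

#[derive(Serialize)]
struct State {
    click_count: u32,
}

// Stand-in for the shared application crate's view: in a real app this
// would build the virtual DOM and render it to a string.
fn render_app(state: &State) -> String {
    format!("<button>Clicked {} times</button>", state.click_count)
}

// Steps 2-5 from the list above, squeezed into one handler.
fn handle_index() -> String {
    let state = State { click_count: 0 };  // 2. initial state
    let html = render_app(&state);         // 3. virtual DOM -> HTML string
    let json = serde_json::to_string(&state).unwrap();

    // 4. initial state in a script tag, 5. the JS that loads the WASM
    format!(
        "<html><body>\
         <div id=\"app\">{}</div>\
         <script>window.initialState = {};</script>\
         <script src=\"/app.js\"></script>\
         </body></html>",
        html, json
    )
}
```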
Feel free to let me know if any of that was poorly explained!
This is good enough for an example, but for a "real" web server you wouldn't do this; you'd use a framework that handles a lot of this kind of thing for you. I think this code was adapted from the final chapter of the Rust book, which is more about learning how to write a simple thing than about being production-ready.
Isomorphic - runs the same in both the web browser and on the server (e.g. Node.js).
Virtual dom - not the real dom tree but a representation of the dom using nested objects. You can diff two versions of the tree (current and updated) and apply the changes to the real browser dom. This makes building pretty UIs super easy since a view is essentially a function of state. Diffing the virtual dom and patching the real dom is much faster than iterating over the real dom for every update.
On the server side, you use the virtual dom to render to an html string.
So true isomorphism is basically when the same code is used in both browser and server. The server renders the initial page load to an html string; once loaded, the browser uses the same view code to make further UI changes without causing a page reload (single page app). Good for search engines and crawlers, good for blazing fast performance and instantaneous interactivity.
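A toy sketch of the idea in Rust (a generic illustration, not any particular library's API): a virtual node type, a diff that emits patches instead of mutating the DOM, and a server-side render to an HTML string.

```rust
#[derive(Clone, Debug, PartialEq)]
struct VNode {
    tag: String,
    text: String,
    children: Vec<VNode>,
}

#[derive(Debug)]
enum Patch {
    // `path` is the list of child indices from the root to the changed node.
    SetText { path: Vec<usize>, text: String },
    Replace { path: Vec<usize>, node: VNode },
}

// Compare two versions of the tree and collect patches; a browser-side
// runtime would then apply these to the real DOM in one go.
// (Toy version: ignores added/removed children.)
fn diff(old: &VNode, new: &VNode, path: Vec<usize>, patches: &mut Vec<Patch>) {
    if old.tag != new.tag {
        patches.push(Patch::Replace { path, node: new.clone() });
        return;
    }
    if old.text != new.text {
        patches.push(Patch::SetText { path: path.clone(), text: new.text.clone() });
    }
    for (i, (o, n)) in old.children.iter().zip(&new.children).enumerate() {
        let mut child_path = path.clone();
        child_path.push(i);
        diff(o, n, child_path, patches);
    }
}

// Server side: the same tree renders straight to an HTML string.
fn to_html(node: &VNode) -> String {
    let children: String = node.children.iter().map(to_html).collect();
    format!("<{0}>{1}{2}</{0}>", node.tag, node.text, children)
}
```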
Some definitions from my understanding: "isomorphic" - can render on server and client side with same code, "virtual dom" - non-browser DOM, i.e. often just a HTML data model useful for mutating and then diffing to minimize changes to the page DOM, "dom" - the data model of the page.
An isomorphic virtual DOM is much like an inverted-index homomorphic shadow DOM, but instead of using a shadowed mapping they virtualize the original elements themselves.
Yup. Otherwise he's basically talking about a structure-preserving index that maps DOM elements - that are rendered outside the main document DOM tree - to a file.
I don't think many would understand what an inverted-index homomorphic shadow DOM is, though I think I can cobble together a very very vague understanding.
In a regular DOM there's just a tree of elements, and this makes many operations tedious, e.g. finding elements with a given set of classes and so on. So with an inverted-index shadow DOM you get another DOM with element shadows and an index of element properties back to the shadow elements (making it an inverted index). Then simple boolean retrieval can be used instead of DOM traversal. Much more efficient. The actual reason why you want to use shadowing instead of direct-mapped nodes/elements is that shadowing enables E2C (element change coalescing): instead of shadow element changes directly transferring over to a change of the actual DOM element, you can batch changes on shadow elements together and change a bunch of DOM elements in one go, which avoids unpartitioned (and therefore wasteful) re-renders by the browser engine.
(Ok, before anyone goes running off telling their colleagues about this great new tech: I literally made all of this up on the fly except the batching stuff. That's actually one of two reasons why this whole shadow-DOM-stuff exists. The other is encapsulation. I think this whole thread is a most beautiful demonstration of Poe's law in its original form.)
I think "shadow-DOM-stuff" refers to the concept of virtual DOMs. The shadow DOM isn't really a virtual DOM though, sure. It's just encapsulated parts of the regular old DOM.
> An isomorphic virtual DOM is much like an inverted-index homomorphic shadow DOM, but instead of using a shadowed mapping they virtualize the original elements themselves.
Not sure if this is ironic, but a 5 year old definitely wouldn't understand this!
Yew is awesome and just knowing that something like that was possible inspired Percy. I also looked at Yew's `html!` macro when figuring out how Percy's could / should work.
One difference is that Yew is powered by stdweb and Percy is powered by wasm-bindgen.
I'm personally SUPER bullish on wasm-bindgen because it's been designed from day 1 to be able to take advantage of the host bindings proposal when it materializes.
Host bindings tl;dr is that instead of needing to go through JS to interact with browser APIs you can interact with them directly.
Another difference is that to my knowledge Yew doesn't support server side rendering ( which was why I couldn't use it even though I wanted to :( ).
Without having used Yew I don't want to comment any further than those high level differences.
I can say that a big focus of Percy is to be a grab bag of modules / tooling for frontend Rust web apps, with an emphasis on letting you swap out the parts that you think are bad for other people's better implementations. That dream isn't realized yet... but I think that Rust's generics / traits could make this feel very clean!
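As a hypothetical illustration of what that could look like (not Percy's actual design): if each concern sits behind a trait, the app stays generic over the implementation and swapping one out is a change at the call site rather than a fork of the framework.

```rust
// Hypothetical: any diffing strategy is just a trait implementation.
trait Differ {
    fn diff(&self, old_html: &str, new_html: &str) -> Vec<String>;
}

struct NaiveDiffer;
impl Differ for NaiveDiffer {
    fn diff(&self, old_html: &str, new_html: &str) -> Vec<String> {
        if old_html == new_html {
            vec![]
        } else {
            vec![format!("replace `{}` with `{}`", old_html, new_html)]
        }
    }
}

// The app is generic over its differ, so callers can plug in their own.
struct App<D: Differ> {
    differ: D,
}

impl<D: Differ> App<D> {
    fn update(&self, old_html: &str, new_html: &str) -> Vec<String> {
        self.differ.diff(old_html, new_html)
    }
}
```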
>One difference is that Yew is powered by stdweb and Percy is powered by wasm-bindgen.
>I'm personally SUPER bullish on wasm-bindgen because it's been designed from day 1 to be able to take advantage of the host bindings proposal when it materializes.
I find it funny that you mentioned that. I was actually trying out wasm-bindgen/yew recently and ended up using yew because of wasm-bindgen's limitations (mostly related to using anything with generics in the type signature).
The experience will probably improve in the future, but right now yew's actor model seems like a much simpler way to encapsulate rust libraries.
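For anyone curious about the generics limitation: `#[wasm_bindgen]` exports have to be concrete, so a generic function can't be exposed to JS directly; you end up writing monomorphized wrappers by hand. A small sketch (the function names are made up):

```rust
use wasm_bindgen::prelude::*;

// A generic helper from a plain Rust library -- fine inside Rust...
fn max_of<T: PartialOrd + Copy>(items: &[T]) -> Option<T> {
    items.iter().copied().fold(None, |acc, x| match acc {
        Some(m) if m >= x => Some(m),
        _ => Some(x),
    })
}

// ...but it can't carry #[wasm_bindgen] itself, so you export a concrete
// wrapper for each type you actually need on the JS side.
#[wasm_bindgen]
pub fn max_of_f64(items: Vec<f64>) -> f64 {
    max_of(&items).unwrap_or(f64::NAN)
}
```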
The use of the html macro is incredibly nice; no need for a special "JSX" language, it's all built in thanks to macros. I'm working on the same thing for python (using pyxl/mixt/lys and brython) and you just proved that there's interest in that!
If they moved their virtual DOM (the meat of the work React does and its primary bottleneck) over to webassembly, it would almost certainly be faster. I can't really speak to the download size, though I wouldn't be surprised if that improved too.
It's worth keeping in mind that WebAssembly can't access the DOM directly; it has to call into JS in order to do that. I haven't investigated it myself but I'd wager that because of this, any performance benefits would be negligible.
WebAssembly also can't access JS object structures directly, which means that the virtual DOM would have to live in WebAssembly land in the first place, i.e. anything generated by JS that you want to end up in the DOM would have to be copied over first. Or rather, you want all your VDOM-generating code to be WebAssembly code as well if you want the whole thing to be efficient.
The whole point of a virtual DOM is to simulate and reconcile changes before pushing those changes to the real DOM; it would be fairly clean to do all that processing in wasm and then send the result over to JS for reification.
It's true that there would be some challenges when it comes to "sending" data to the virtual DOM; in particular, event objects could get complicated. I assumed this project has solved that problem, but I didn't read very deeply into it.
Keep in mind that WebAssembly strings are not JavaScript strings, so there is additional data conversion overhead that needs to be accounted for, versus a JavaScript implementation. And it's not just events and DOM updates. If you wanted to preserve React's render API, the input data for re-rendered DOM needs to be converted from JS before doing a diff.
With slower data conversion and faster internal calculation, the overall result could plausibly be either slower or faster. I don't see any way to know without doing some experiments. It would certainly be more complicated.
It seems like the sort of JavaScript library where WebAssembly would be an easy win would have low API bandwidth (not much data crossing the boundary) and do a lot of expensive internal calculations. As a UI framework, React does some internal calculations but has relatively high API bandwidth.
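To make the conversion overhead concrete: with wasm-bindgen, a JS string handed to an exported function is copied into wasm linear memory, and any returned String is copied back out, so an API that's chatty across the boundary pays that cost on every call. A minimal sketch:

```rust
use wasm_bindgen::prelude::*;

// Calling this from JS copies `text` into wasm memory, runs the work,
// then copies the result back out as a new JS string.
#[wasm_bindgen]
pub fn shout(text: String) -> String {
    text.to_uppercase()
}
```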
If you don't need server side rendering you can have one Rust file, but you still need a JS file to initialize the WebAssembly module, so just one file isn't really possible.
For a server side rendered app you need to have a cargo workspace with 3 crates (they can be in the same repo so not a huge deal).
One crate is a `cdylib` that you compile to WebAssembly to serve your app to the client. This is a light wrapper around your actual application.
One crate is your actual application.
And one crate is your server, which is also a light wrapper around your actual application.
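A sketch of what that workspace might look like (crate names are illustrative, not prescribed by Percy):

```toml
# my-app/Cargo.toml (workspace root)
#
#   my-app/
#   |-- app/      <- your actual application
#   |-- client/   <- cdylib wrapper, compiled to WebAssembly
#   `-- server/   <- server binary, renders the same app to HTML
[workspace]
members = ["app", "client", "server"]

# my-app/client/Cargo.toml: the client crate is a cdylib so it can be
# built for wasm; it just wraps the shared application crate.
# [lib]
# crate-type = ["cdylib"]
#
# [dependencies]
# app = { path = "../app" }
# wasm-bindgen = "0.2"
```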
what i want to see is to have all logic centered around features, and have a framework compile the server side and client side out of one logical piece of code, if that even makes sense.
a bit like in the old days of php when you'd have the server side code and html rendered in there, but this time nearly everything is running on the client unless the client can't be trusted.
you could have some kind of ring system, where code ranges from "never runs on the client" right up to "always runs on the client", but all in one file.
ASP.NET does this: you use its components and it generates client side javascript. In a more functional way, there's another language that does this called Opa! http://opalang.org/
React already does this, though perhaps not in the way you're imagining. You can use react-dom/server with your client side code to render out HTML templates, then when you run the code on the client side it'll call componentDidMount(), where you can add client-specific code.
Another good tool, if you want to write your front end in a great general purpose language, is Scala.js. It allows you to reuse the same Scala code for backend and front end.