Some classes of errors, yes, but this still has a giant hole in it, one the writeup itself calls out:
"But there’s a hole in this scheme. We could copy an unrooted pointer — a JS<T> — to a local variable on the stack, and then at some later point, root it and use the DOM object. In the meantime, SpiderMonkey’s garbage collector won’t know about that JS<T> on the stack, so it might free the DOM object. To really be safe, we need to make sure that JS<T> only appears in traceable DOM structs, and never in local variables, function arguments, and so forth.
This rule doesn’t correspond to anything that already exists in Rust’s type system. Fortunately, the Rust compiler can load "lint plugins" providing custom static analysis. These basically take the form of new compiler warnings, although in this case we set the default severity to "error"."
Requiring additional layers of static analysis on top of unsafe code (and still not catching all the error cases) is beginning to sound an awful lot like plain old C or C++ again.
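To make the hazard concrete, here's roughly the pattern the lint has to rule out. This is a minimal sketch; all types and helper functions are hypothetical stand-ins, not Servo's real API:

```rust
// Hypothetical, simplified types modeled on the writeup's description.
struct Node;

struct JS<T> { ptr: *const T }   // unrooted pointer; the GC cannot see it
struct Root<T> { ptr: *const T } // rooted; registered so the GC keeps it alive

fn get_first_child(_parent: &JS<Node>) -> JS<Node> {
    JS { ptr: std::ptr::null() }
}

fn maybe_allocate() {
    // In a real engine, any allocation here could trigger a GC pass.
}

fn root<T>(unrooted: JS<T>) -> Root<T> {
    Root { ptr: unrooted.ptr }
}

fn hazard(parent: &JS<Node>) {
    // The pattern the lint must forbid: a JS<T> copied into a local.
    let child: JS<Node> = get_first_child(parent);
    maybe_allocate(); // the GC may run here and free the node `child` points at
    let _rooted: Root<Node> = root(child); // rooted too late: use-after-free risk
}
```

Nothing in the type system flags `hazard` as wrong, which is exactly why the extra lint pass is needed.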
It actually suggests that we're misusing the trait in the short term, since it's one of the few that has built-in deriving magic in the compiler. In the future that won't be the case, but Rust is still a work in progress.
It's still pluggable (it's just a normal syntax extension); only the #[deriving(...)] syntax itself is not overloadable. E.g. a custom attribute #[js_traceable] could use all the back-end code of #[deriving], differing only in surface syntax.
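For illustration, the back-end output is mechanical field-by-field recursion. A hand-written equivalent of what a hypothetical #[js_traceable] derive might emit could look like this (trait and type names invented for the sketch):

```rust
// Everything here is a hypothetical stand-in for Servo's real machinery.
struct Tracer; // stand-in for SpiderMonkey's JSTracer

trait JSTraceable {
    fn trace(&self, tracer: &mut Tracer);
}

struct JS<T> { ptr: *const T } // unrooted pointer into the JS heap

impl<T> JSTraceable for JS<T> {
    fn trace(&self, _tracer: &mut Tracer) {
        // The real impl would call into SpiderMonkey (e.g. JS_CallTracer)
        // to mark the pointed-to object as reachable.
    }
}

struct Element;

// Given `#[js_traceable] struct Node { ... }`, the derive would expand to
// an impl that simply traces every field in order:
struct Node {
    parent: JS<Element>,
    first_child: JS<Element>,
}

impl JSTraceable for Node {
    fn trace(&self, tracer: &mut Tracer) {
        self.parent.trace(tracer);
        self.first_child.trace(tracer);
    }
}
```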
What kind of encoding? There are plenty of JSON libraries that use encode/decode methods, so the naming isn't unheard of in this context. And Visitable would be too generic; this is really about serialization.
The interaction between native code and JS on the DOM is certainly an area of complexity, with plenty of opportunity for subtle bugs in a browser implementation. I wonder if the observation that native code can only manipulate the DOM in a limited fashion compared to JS could be used to simplify things.
For example, the initial DOM tree will be created by native code when loading the document, and it needs to be freed as a whole when another (or the same) page is loaded. JS, on the other hand, can create, add, and remove objects on it, but those must not be freed while some part of the tree still references them. This suggests to me that some sort of ownership scheme is appropriate. In the case of Servo they've decided to make the GC own everything, which certainly makes a lot of things easy, but it might be interesting to think about whether being able to transfer object ownership between the GC and something else would have any advantages, e.g. "all objects in the DOM tree of the document are owned by their parent, and objects not in the DOM tree because they've either been removed from it or newly created by JS are owned by the GC."
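Purely as an illustration of that proposal (not anything Servo actually does), the two regimes could be expressed as two kinds of edge, with detachment transferring a node from one to the other. Rc stands in crudely for a GC handle:

```rust
// Illustrative sketch of the dual-ownership idea; all types hypothetical.
use std::rc::Rc;

struct Node {
    children: Vec<NodeRef>,
}

enum NodeRef {
    Tree(Box<Node>), // owned by the parent: freed when the subtree is torn down
    Gc(Rc<Node>),    // owned by the collector: removed or script-created nodes
}

// Detaching a child would transfer it from tree ownership to GC ownership.
fn detach(child: Box<Node>) -> Rc<Node> {
    // A real system would register the node with the collector here.
    Rc::from(child)
}
```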
> "all objects in the DOM tree of the document are owned by their parent, and objects not in the DOM tree because they've either been removed from it or newly created by JS are owned by the GC."
How does this solve the issue pointed out in the article (i.e. a JS object referencing a DOM object that references the JS object back)? In any case, if something is "owned", you must ensure it isn't referenced by anything outside the same "ownership" tree; otherwise you can get use-after-free errors. In general, GC is the only viable option for big cyclic object graphs with unpredictable lifetimes.
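The cycle in question is easy to state concretely. Here's its shape, using Rc as a stand-in for reference-counted handles (illustrative types only):

```rust
use std::cell::RefCell;
use std::rc::Rc;

// Stand-ins for the two heaps' views of each other: a DOM node holding a
// script-visible expando object, which in turn holds the node.
struct DomNode { expando: RefCell<Option<Rc<JsObject>>> }
struct JsObject { element: RefCell<Option<Rc<DomNode>>> }

fn main() {
    let node = Rc::new(DomNode { expando: RefCell::new(None) });
    let obj = Rc::new(JsObject { element: RefCell::new(Some(Rc::clone(&node))) });
    *node.expando.borrow_mut() = Some(Rc::clone(&obj));
    // Each object now keeps the other alive: no ownership tree or reference
    // count will ever free this pair. Only a tracing GC that sees the whole
    // graph can collect it once it becomes unreachable.
}
```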
Not right now. It's a huge amount of work to create something competitive, and as soon as you add a JIT you lose most of the safety benefits that Rust provides. However, js.rs (https://github.com/TomBebbington/js.rs) does already exist.
But because Oilpan is written in C++, there is still some manual annotation of GC'd members that needs to be done.