As a contributor to Deno, I am actually quite surprised that this got resurfaced on Hacker News after the hype in June last year.
That being said, I suggest checking out this video (recorded this April) for updated information about Deno, since things have changed quite a bit since the initial announcement: https://youtu.be/z6JRlx5NC9E
(Edit: fixed link, posted the wrong one. Why would YouTube think I want to share ads...)
Curious about Deno's development stage: can the brave already run it in production, or are there too many breaking changes in the pipeline, making it better to wait?
The brave? Sure - but there will likely be breaking changes. I actually run a small Deno server in production for a non-critical service and it's been working out fine :]
Check out the benchmarks. I'm pretty sure normal usage (i.e. an actual HTTP server, not specialized benchmark-only use cases) is quite a bit slower than Node.js.
As a long-time Node.js developer I took a look at the project, but the security flags are still a really half-baked implementation. You almost always end up flipping all the security switches off because your application needs every feature (network, filesystem, etc.).
No package signing, no flags per module, no syscall whitelisting, etc.
You turn all permissions on if you are actually using it like Node, for writing servers. But when you are running a (potentially untrusted) code snippet to perform certain operations (like encoding/decoding base64 values), you do want a sandbox.
(For Node-like usage, we actually also have a basic implementation of whitelisted directories/files for certain permissions, though it's not perfect at the moment.)
We have also discussed the possibility of implementing an in-memory file system. (Maybe we will also bring sessionStorage to Deno.)
Flags per module are slightly trickier, while whitelisting certain features is trivial to implement.
Package signing is indeed a topic that Deno contributors need to discuss further and possibly find a solution for.
Sure, but logic doesn't exist in a vacuum, so in most cases you are going to import your input data or export your results through the filesystem or the network.
At the time it was deemed too difficult to implement in Node.js after the fact, which makes sense of course.
But I'm disappointed that Deno didn't go for a bolder, more secure approach. The current system seems pretty pointless.
The definition of "per module" is actually the more interesting question: technically every single file is its very own module from the view of V8. This is reinforced in Deno, since we are trying to be web compatible, so there is not even a package.json-like construct for you to identify the "root" of any library, even more so as there is no more magical index.js resolution.
I'm really sad that I lost my gist on this, but I was working on a system in Node.js for defining the capabilities of modules in an ES module graph (as you suggest) where there might not be defined package boundaries. The implementation complexity (it required changes to upstream V8) resulted in us going with Node policies as they are today.
I don't remember the exact API, but basically an ES module graph is a directed graph (i.e. edges have a direction), and because there is only one entrypoint in Node, we can therefore create a hierarchy of modules. From there, you start at the root with X permissions, and each module can reduce the permissions of the modules it imports (or reduce its own permissions, but obviously it can't raise them afterwards).
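A minimal sketch of that narrowing idea (the API and permission names here are illustrative, not the lost gist): each import edge can only intersect the parent's permission set, so a permission dropped anywhere up the chain can never be regained downstream.

```javascript
// Permissions flow down the import graph; an edge can only narrow them.
function narrow(parentPerms, requestedPerms) {
  // the child gets the intersection of what it asks for and what the
  // parent actually holds
  return new Set([...requestedPerms].filter((p) => parentPerms.has(p)));
}

// root entrypoint starts with some initial permission set X
const root = new Set(["net", "read", "write"]);

// a library imported by the root drops "write" for everything below it
const lib = narrow(root, new Set(["net", "read"]));

// a transitive dependency asks for "write", but it was already dropped
// upstream, so it cannot be raised back
const dep = narrow(lib, new Set(["net", "write"]));
```

After this, `dep` holds only `"net"`, even though it requested `"write"` - the hierarchy makes permission grants monotonically decreasing from the root.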
In a world of hypervisors, containers, AppArmor and the like, is a sandbox really a significant advantage over Node.js?
Actually, I saw the YouTube presentation and it didn't convince me to switch from Node.js. The differences from Node.js are so small, especially when you're already using TypeScript. The only significant change is ES modules with the ability to resolve modules from URLs (plus the sandbox).
For certain purposes (especially writing servers), I would actually suggest Go and Rust as much better replacements for Node.js :) (Ryan also mentioned Go as a better alternative for fast servers in the original JSConf video, if you noticed.)
Ryan and some of us consider the project quite experimental, and the initial focus was not on writing super powerful servers (though it should not be too bad). A pleasant scripting environment with more attention to features that Node has not tried out is more or less the goal.
(Fun fact: Ryan has been into machine learning and data visualization for some time, so Deno was created in the hope of somehow competing with Python in certain aspects.)
Yes it’s a real shame it isn’t based on object capability security. The resource ID concept makes it really hard to do per-module restrictions, because resources provide global ambient access to anything. And it sounds to me like the dispatch model based on typed arrays means that this is baked in on a fundamental level (unless there are some unforgeable handles that Ryan didn’t mention in his talk).
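To make the contrast concrete, here is a toy sketch (all names are illustrative, not Deno's actual internals) of the two models: a global resource table keyed by integer IDs, where holding the number is enough to act on the resource, versus an object-capability handle, where the object reference itself is the unforgeable token.

```javascript
// Resource-ID model: a process-global table keyed by integers.
// Any code that holds (or guesses) the integer gets ambient access.
const resourceTable = new Map();
let nextRid = 0;

function openByRid(name) {
  const rid = nextRid++;
  resourceTable.set(rid, { name });
  return rid;
}

function readByRid(rid) {
  // nothing ties the rid to a particular caller or module
  return resourceTable.get(rid).name;
}

// Object-capability model: the returned handle is the only way in.
// There is no global table to index into, so access cannot be forged.
function openCapability(name) {
  const state = { name };
  return { read: () => state.name };
}

const rid = openByRid("secret.txt");
const forged = readByRid(rid); // any module holding the integer can do this

const cap = openCapability("secret.txt");
const viaHandle = cap.read(); // only holders of the `cap` object can read
```

With the rid model, restricting a single module means auditing everywhere an integer could leak; with capabilities, the reachability of the handle object *is* the access-control policy.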
Node.js had a PR to add that (with packages enforced, but not your 'own' code), and Node has policies for loading untrusted code (https://nodejs.org/api/policy.html).
Personally I isolate with OS level containers as I think it's a lot more robust and tested but I definitely see the merit in Deno exploring this - even if it doesn't really work yet it's interesting.
On Node, this can actually be very difficult or even impossible if you are using 3rd-party libraries.
For example, some Node internals like networking require the invoker to attach to an obscure .on("error") event to avoid uncaught errors, and a lot of the time these 3rd parties are not aware of it.
I'm all for Deno being built from the ground up to properly crash on uncaught errors. Silently ignoring them is a really stupid decision.
I never understood why this is necessary. Using promises means you get to use a try-with-resources-style construct for handling non-memory resources safely. As such, you no longer need to crash on uncaught errors, except for errors pertaining to resource disposal.
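A sketch of the try-with-resources pattern this comment has in mind (the `withResource` helper is hypothetical, not a standard API): acquisition, use, and disposal are sequenced with promises so that errors from `use` propagate as ordinary rejections while disposal is still guaranteed to run.

```javascript
// Hypothetical helper: acquire a resource, run `use`, always dispose.
async function withResource(open, close, use) {
  const resource = await open();
  try {
    return await use(resource);
  } finally {
    await close(resource); // disposal runs even if `use` rejected
  }
}

// usage with a fake resource, recording the lifecycle order:
const log = [];
const demo = withResource(
  async () => (log.push("open"), { id: 1 }),
  async () => { log.push("close"); },
  async (r) => (log.push("use"), r.id)
);
```

Because every failure surfaces as a rejected promise at the call site, the caller decides what to do with it - there is no out-of-band event channel where an unnoticed error has to crash the process.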
I think the reason is that, in my networking example, the network I/O is an "interrupt" kind of event that is triggered outside of normal execution - like when there's a socket timeout.
It probably didn't have to be designed this way, but it was designed pre-promises, and I guess they are loath to change it.
I would imagine you have a watchdog of some sort (perhaps a special URL) and have something probe it every 15-60 seconds or so to see if it is up; if it does not respond, spin up a new instance and kill the failed one.
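One tick of that probe-and-replace loop could look like the sketch below (all names are illustrative; a real watchdog would hit the health URL over HTTP and manage OS processes):

```javascript
// One watchdog tick: keep the instance if the probe succeeds,
// otherwise kill it and spin up a replacement.
function watchdogTick(instance, probe, spawn) {
  if (probe(instance)) return instance; // still healthy, keep it
  if (instance.kill) instance.kill();   // kill the failed instance
  return spawn();                       // spin up a new one
}

// usage with fake instances standing in for real server processes:
const makeInstance = () => ({
  alive: true,
  kill() { this.alive = false; },
});

const healthy = makeInstance();
const dead = makeInstance();
dead.alive = false;

const probe = (i) => i.alive;           // stand-in for the HTTP health check
const kept = watchdogTick(healthy, probe, makeInstance);   // unchanged
const replaced = watchdogTick(dead, probe, makeInstance);  // fresh instance
```

In practice you would run this on an interval (the 15-60 seconds mentioned above) and treat a probe timeout the same as a failed probe.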
Watch "10 Things I Regret About Node.js" (https://youtu.be/M3BM9TB-8yA) from the creator of both Node and Deno to understand his motivations behind the Deno project. A very intriguing talk.
It is very interesting, although I didn't really understand the 'security' part. The motivation seems to be twofold: some bad things have happened because of compromised npm packages, and V8 happens to have a robust sandbox. This sounds like a solution looking for a vaguely defined problem. The illustrative example he gives is a 'malicious linter'. Is a malicious linter that important a threat?
In the example the linter itself is not malicious, but is used to deliver a malicious program that can have unrestricted filesystem access. Not vague at all - see the recent news about the 'event-stream' package being used to steal cryptocurrency wallets.
You are right; we will try to replace it with some faster serialization mechanisms (after a huge internal refactor lands). See the talk I posted - Ryan mentions it near the end.
I admittedly haven't researched too deeply, but are there any examples/docs on embedding Deno in another Rust program and/or writing/exposing Rust libs with a TS API?
Running a local benchmark to compare Node.js and Deno gave me the same magnitude of performance difference. I like the concepts behind Deno, but performance should stay a top priority - even more so for a new technology looking for future adoption. If Deno gets faster than Node.js, I'll adopt it. If it stays 5x less performant than Node.js, I'll skip it.
Are you running Deno 0.10.0 versus Node? Since there has been some internal refactoring, it should now be at around 80% of Node's basic HTTP req/sec (ref: https://deno.land/benchmarks.html#all , though the benchmark might not cover everything).
Overhead from FlatBuffers is a major reason for the slowdown, and we are seeking to get rid of it.