Overview of the talk, from a comment[0] on that video:
> The goal of Node was event driven HTTP servers.
>
> 5:04
> 1 Regret: Not sticking with Promises.
> * I added promises to Node in June 2009 but foolishly removed them in February 2010.
> * Promises are the necessary abstraction for async/await.
> * It's possible that unified usage of promises in Node would have sped up the eventual standardization of async/await.
> * Today Node's many async APIs are aging badly due to this.
>
> 6:02
> 2 Regret: Security
> * V8 by itself is a very good security sandbox
> * Had I put more thought into how that could be maintained for certain applications, Node could have had some nice security guarantees not available in any other language.
> * Example: Your linter shouldn't get complete access to your computer and network.
>
> 7:01
> 3 Regret: The Build System (GYP)
> * Build systems are very difficult and very important.
> * V8 (via Chrome) started using GYP, and I switched Node over to follow suit.
> * Later, Chrome dropped GYP for GN, leaving Node the sole GYP user.
> * GYP is not an ugly internal interface either - it is exposed to anyone who's trying to bind to V8.
> * It's an awful experience for users. It's this non-JSON, Python adaptation of JSON.
> * The continued usage of GYP is probably the largest failure of Node core.
> * Instead of guiding users to write C++ bindings to V8, I should have provided a core foreign function interface (FFI)
> * Many people, early on, suggested moving to an FFI (namely Cantrill) and regrettably I ignored them.
> * (And I am extremely displeased that libuv adopted autotools.)
>
> 9:52
> 4 Regret: package.json
> * Isaac, in NPM, invented package.json (for the most part)
> * But I sanctioned it by allowing Node's require() to inspect package.json files for "main"
> * Ultimately I included NPM in the Node distribution, which made it the de facto standard.
> * It's unfortunate that there is a centralized (even privately controlled) repository for modules.
> * Allowing package.json gave rise to the concept of a "module" as a directory of files.
> * This is not a strictly necessary abstraction - and one that doesn't exist on the web.
> * package.json now includes all sorts of unnecessary information. License? Repository? Description? It's boilerplate noise.
> * If only relative files and URLs were used when importing, the path defines the version. There is no need to list dependencies.
>
> 12:35
> 5 Regret: node_modules
> * It massively complicates the module resolution algorithm.
> * vendored-by-default has good intentions, but in practice just using $NODE_PATH wouldn't have precluded that.
> * Deviates greatly from browser semantics
> * It's my fault and I'm very sorry.
> * Unfortunately it's impossible to undo now.
>
> 14:00
> 6 Regret: require("module") without the extension ".js"
> * Needlessly less explicit.
> * Not how browser JavaScript works. You cannot omit the ".js" in a script tag's src attribute.
> * The module loader has to query the file system at multiple locations trying to guess what the user intended.
>
> 14:40
> 7 Regret: index.js
> * I thought it was cute, because there was index.html
> * It needlessly complicated the module loading system.
> * It became especially unnecessary after require supported package.json
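(Aside: the last point under regret 4 is essentially the model Deno later adopted -- when imports are URLs, the version can live in the path itself, so there is nothing to declare in a separate manifest. A minimal sketch, with a made-up module URL:)

```ts
// The dependency and its version live in the import specifier itself;
// no package.json entry is needed. (The URL is illustrative, not a real module.)
import { leftPad } from "https://example.com/left-pad@1.3.0/mod.ts";

console.log(leftPad("42", 5, "0")); // -> "00042"
```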
> * Many people, early on, suggested moving to an FFI (namely Cantrill) and regrettably I ignored them.
A bit off-topic, but Dahl referenced Cantrill here, whom I took to be Bryan Cantrill, one of the authors of DTrace. From his Twitter (https://twitter.com/bcantrill) I then found that just last month he started a new "computer company", which sounds super interesting, especially given his past experience and the passion he seems to have for attempting to solve a tough, bold problem:
For more context, Cantrill held a senior role at Joyent, which has Node.js listed as one of its products on Wikipedia and was the corporate sponsor of Node.js for a long time.
Thanks. This largely made me realize that most decisions behind Node.js were made on the fly, rather arbitrarily, without too much thought put into them, and it helped me compare and contrast it with Go. :)
> most decisions behind Node.js were made on the fly, rather arbitrarily, without too much thought put into them
That seemed rather apparent even at the time - what's been more interesting is watching others defend some of these decisions as if a lot of thought had been put into them, and as if they're some example of great architecture.
Not specifically ragging on Node.js - I see this a lot in various projects - small, minor decisions compound over time, and even if they were not originally planned or intended to have significance, at some point they acquire it. Often people who weren't involved in the original decisions think there's a lot more 'there' there behind them, when, usually, there isn't.
Anyone know what the status of this is, or how close to release 1.0 is? I'd really like to dig in and give it a try, but it's hard to know just how much time I could waste pre-1.0.
I looked at the status page and it seems like things are moving along toward the ~1.0 release, but as an outsider I can't gauge whether that's three months or a year away.
I'm mostly just really excited for this, as there's a great deal of it (Deno) that makes a shitload more sense to me than Node (module loading is one, $HOME/.deno/bin is another); and being so heavily integrated with Rust appeals to me in a big way (I actually understand Rust, but C++ always leaves me instantly mind-fucked).
I believe the current goal is by the end of January (and the main blocker right now is v8 inspector support (debugging in Chrome devtools)).
However, internally there are quite a few things that need refactoring. Since Deno aims for better Web spec compliance, we need an overhaul of the Worker implementation we have right now. So the existing API interfaces under the Deno namespace are likely to stabilize soon, but many of the Web-compat APIs still need frequent updates.
I know people love asking when 1.0 is coming out, but please don't feel rushed! A delayed, thoughtful launch is infinitely better than an earlier rushed launch!
You probably don't need me to tell you this stuff though... it's just annoying to see people pester every project about 1.0. I get that it's probably just their excitement.
I know you guys banned this guy [1] for trolling, but he certainly has a fast V8/TCP stack that bypasses Node's internals.
From the benchmarks, Deno looks similar to Node in net/http perf. Is there any specific reason you guys cannot get significantly more perf out of Deno? (Alex has always said and shown that V8 is not the bottleneck.)
Does v1 mean that some of the existing architectural choices are frozen and any significant perf improvements become impossible?
TBH I am probably not the right person to answer this. Bartek has been working on a lot of internal refactoring these days, and he has had some discussion with Bert (the presenter in the dotJS video) on the topic. Maybe ask him in the Gitter chatroom?
Oh, great news! It's closer to 1.0 than I realized. Been checking in on the project occasionally, but been waiting till everything was more settled.
In a GitHub thread ry mentioned, "Currently async ops are executed on the same thread as the issuing Isolate - however I’m quite sure we can improve this and execute async ops in different threads." [1]
Is this what I think it is? Does it mean in the future I can write 'await fn()' and it'll run in a different thread? Or am I a fool and hoping too much?
I think it's likely that in this context, "ops" refers to the low level runtime primitives Deno provides and implements in Rust. E.g. read bytes from a file. I doubt you could transparently multithread `await` without clashing with JS's semantics.
I guess it is about asyncing to a different isolate+thread, not multithreading a single one. Locking issues do not go away just because it is JavaScript, Rust, and 2020.
Another comment [0] provided a list of features differentiating Deno from Node, but once it's complete, in what situations would it be more appropriate to use Deno rather than Node?
I think it's intended to just be a "better node" in as many respects as possible. Better security, more coherent module/package system, async/promises from the start rather than hacked in, first class security/sandboxing options.
The cool stuff, I think, is where Deno is different: able to be used as a library from other Rust or C-compatible languages, able to compile your JavaScript and Deno itself together into a single binary, etc.
Also, scripting feels nice in Deno so far. And the ease of using a script written by someone else, hosted on the web, without worrying about it taking over my machine, is cool IMHO.
1. A secure environment. You must authorize access to capabilities like the file system or network interface. Because access is denied by default, you can write systems automation (arbitrary computation resident in memory) without it leaking outside the computer.
2. Management sanity. Deno is written in TypeScript, so it doesn't need something like @types/node to interface with TypeScript the way Node does. This is rarely problematic, but when it does come up it's painful. For example, in Node the HTTP response type is http.IncomingMessage. The port number of the originating server is in there at response.socket.server.port, but @types/node currently types response.socket as net.Socket, which doesn't have a server property, so you get a compile error (see the sketch after this list).
3. Distribution sanity. If you download the @angular/cli framework from NPM it is, at this time, 250 NPM dependencies and 26.6 MB for just the CLI interface. The full framework is closer to 90 MB and 1140 NPM dependencies. That is insanity. The Node binary plus installer for Windows is only 18.6 MB. Deno wants to solve this by wrapping a project into a single archive for package distribution. Hopefully that will result in smaller packages, where dependencies are fewer and intentional, instead of the nonsense of stuff like NPM or Maven.
4. ES Modules are a very nice way to manage and organize code. I am using them now in a large Node project I am writing, without Node's silly conventions for ES modules. In Node, ES modules are experimental, and there are virtually no dependencies available that exclusively use ES modules and ship a TypeScript types library, so if you go down that path you must be dedicated.
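To make point 2 concrete, a minimal sketch of the workaround that kind of type mismatch forces (the localhost URL and the structural cast are illustrative; the property layout is as described above):

```ts
import * as http from "http";

http.get("http://localhost:8080/", (response) => {
  // At runtime the originating server's port is reachable at
  // response.socket.server.port (per the comment above), but @types/node
  // types response.socket as net.Socket, which has no `server` property.
  // A structural cast (a workaround, not a fix) satisfies the compiler:
  const socket = response.socket as typeof response.socket & {
    server: { port: number };
  };
  console.log(socket.server.port);
  response.resume(); // discard the body
});
```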
Deno does transparent compiling and caching of TS into JS. This allows interesting behavior, such as the ability to import TS files, with an explicit .ts extension, from JS files, e.g. `import A from "https://example.com/a.ts";`
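Concretely, that lets plain JavaScript pull in a typed TypeScript module with no build step; Deno type-checks, compiles, and caches the TS on first run (file names below are made up):

```ts
// point.ts -- an ordinary TypeScript module
export function distance(
  a: { x: number; y: number },
  b: { x: number; y: number },
): number {
  return Math.hypot(a.x - b.x, a.y - b.y);
}

// main.js -- plain JavaScript importing the .ts file directly:
//
//   import { distance } from "./point.ts";
//   console.log(distance({ x: 0, y: 0 }, { x: 3, y: 4 })); // 5
```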
> I don't know, I feel like Node.js has a fairly full-featured and useful standard library?
IMO not really. We have many Java and JS projects where I work, and I can't find a single Java project that has bigger dependencies than even our smallest JS project, or even an empty "create react app". One of our Java projects is 200k+ lines and still has smaller dependencies than create react app or an empty project created with the Angular CLI.
In JS, it's normal to pull in 500+ libraries, and this is only because the standard library is so lacking. Our Java projects, even massive ones, pull in fewer than 100.
In Java you also have Guava and Apache Commons. Those cover about 95% of data structure needs for a project not covered by the JVM. JS has nothing comparable.
People lament about how "big" the JVM is, but you trade complexity of the VM for complexity of your own app. In practice Java projects are smaller and use less space and memory because your dependencies aren't using 10 different libraries to pad whitespace.
This also makes it much easier to mentally transition between Java projects. If they were built in the last decade, they're probably using the standard library, Guava/Commons, hibernate, and some HTTP interface. I can move between Java projects at work no problem, but if it's Node or front-end 90% of the time I'm going to spend a few days learning a new framework to get anything done.
Small standard libraries are very harmful to the ecosystem. I hope to God they do it right with Deno: batteries included, like Python and Java.
The average size of these JS packages is much smaller than the Java packages. "Create React App" is not a Node.js app proper; even though it uses Node and NPM to build and run, the end result is a statically served package that gets deployed as HTML, CSS, and JS. CRA would be a front end app, and front end apps have completely different needs than Node.js backend apps.
My experience with Java projects is >1GB of RAM usage minimum (although it's been a couple of years since I've written Java, so I don't know what improvements the JVM has made), whereas a Node app can run just fine on <100MB.
Again, conflating Node.js backend and JS front end apps is not a fair comparison to Java as a backend platform. Stick to apples and apples.
Node.js has quite a few standard libraries, although it's very common to use a framework like Express (and Express has been the top framework on Node.js since the earliest days of Node). The most difficult thing about going back to an older Node code base is that promises weren't officially supported, so there's often a 3rd-party library for that, or callbacks were used instead (a sketch of the bridge below).
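For example, an older callback-style call can be bridged to promises without a 3rd-party library using Node's built-in util.promisify (available since Node 8; the file name here is hypothetical):

```ts
import * as fs from "fs";
import { promisify } from "util";

// Older codebases: the callback style that predates official promise support.
fs.readFile("config.json", "utf8", (err, data) => {
  if (err) throw err;
  console.log(data.length);
});

// The same call bridged to promises, so it works with async/await:
const readFile = promisify(fs.readFile);

async function main() {
  const data = await readFile("config.json", "utf8");
  console.log(data.length);
}
main();
```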
The main benefit of Node.js, to me, remains the use of a single language across the front end and back end. You can often use the same libraries, so you suffer very little context-switching penalty when bouncing back and forth.
A single language is an advantage if you're trying to save money by hiring people who know both ends. We don't really share any code between our JS frontends and Node backends, and I've never worked at a place that did. As far as I can tell, that's a hopeful myth.
Working between languages is easy as long as you use a cross-language object format like gRPC. IMO a single language is overrated and the context-switching cost is basically zero. Is it hard to switch between two verbal or written languages you know? People do it mid-sentence; I don't see how it's any different with programming languages.
Node packages tend to be smaller, but there are far, far more of them in an average project. And unlike Java, they're not stored as a zip on disk and as class files when built, so in both cases they're using a lot more space. Java bytecode was designed to be compact like WASM; it takes far less space than even minified source, and it's zip-compressed for deployment into a jar on top of that.
Node dependencies being more fragmented is generally a bad thing. I constantly have to replace libraries when our code scanners find vulnerable npm packages that are no longer maintained. Since Java is far less fragmented, I hardly ever have to do this with our Java codebases.
Node's standard libraries are terrible compared to Java's and Python's, but I can't make a case for that beyond saying they clearly are.
I agree with you on frontend vs. backend being a bad comparison, but even our regular Express apps take up more space and memory.
Java using lots of RAM is also a myth. It might have been true many years ago, but Java generally uses about 1/3 the memory to store the same structures because there's a lot less overhead per object and field. The only reason most Java apps use lots of RAM is that people configure the garbage collector to use as much RAM as possible so it collects less often. Node doesn't use as much RAM because nobody really changes the GC defaults.
Humans have a huge context-switching penalty when switching between operating systems, written languages, and of course programming languages. Hell, even switching browser tabs incurs a context-switching cost. The more complex the interface, the more difficult the context switch is. The process and mindset for debugging Java and JavaScript are totally different. The tools used for each are different. Thinking about the data structures of Java vs. JavaScript alone is a context switch. Onboarding new devs requires them to learn architectural patterns (as well as master two languages) that are totally different in most cases, because JS code tends to be written much differently than Java as a result of Java's heavy focus on OOP. Nothing wrong with any of this, and I'm not attacking Java by any means; we could easily have the same convo about C++ vs. Go or Python vs. Rust.
It's not about "sharing code" between front end and back end. Although you can share code, and I have in several projects, what I mentioned above was sharing libraries. That's very common: for example, Moment.js, Lodash, and i18n utils. I also use the Chrome debugger tools on both front end and back end. I have even shared architectural patterns for state management.
I agree Java dependencies are less error prone because they're bundled up as Java byte code and I get the appeal there.
I agree Python has a fantastic standard library and Node is spartan by comparison. That was always a trade-off that Node consciously made from the beginning though.
I don't see much trouble context switching between HTML, CSS, and JS on the frontend. I don't see much trouble switching between some other language on the backend and SQL, JSON, macro, etc.
Context switching is a real thing, and sometimes creates penalties. But sometimes it increases productivity. For me it is very helpful to switch into SQL mode when grabbing data from a store, then switching to Go/Python/JS/Rust/C++/Ruby/etc thinking for the business rules around that data.
Frontend JS and backend JS are not really the same language or environment. They are different in important ways. But they are so very similar that it can be easy to be in "frontend" thinking mode when working on the backend, or vice versa. And I have many many times seen that cause problems.
That's not to say that it is bad to use the same language in both places. I'm just pointing out that there are pluses to context switching as well as negatives.
> Is it hard to switch between two verbal or written languages you know? People do it mid sentence, I don't see how it's any different with programming languages.
Well, to be fair, humans have dedicated hardware for natural language encoding/decoding ;)
We (fly.io) use Deno pretty intensely. We really like:
* No npm spaghetti, just requiring a URL is a win
* It'll build a lightweight static binary. Usually way smaller than the equivalent node Docker image
* The stdlib and every community lib were written at a time when async/await was the norm. So they're all very nice to use compared to wildly inconsistent node libs
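To make the first and third points above concrete, here is roughly what a minimal Deno HTTP server looked like pre-1.0 (the std version number is illustrative): the dependency is a pinned URL, and the std API is async from top to bottom.

```ts
// Pinned, versioned URL import -- no npm, no package.json.
import { serve } from "https://deno.land/std@0.50.0/http/server.ts";

// The pre-1.0 std HTTP server hands out requests as an async iterable,
// so the code is async/await-native rather than callback-based:
for await (const req of serve({ port: 8000 })) {
  req.respond({ body: "hello from deno\n" });
}
```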
Very interesting to hear. I didn't realize Deno was "production ready", I'd been curious about it for a long time.
Your comment inspired me to study Deno deeper and consider it for practical use. In addition to the pros you've listed, seamless integration of TypeScript sounds fantastic.
Would love to hear more about how Deno is used in your company!
Watch the talk linked above. He goes into a lot of the reasons. Security is definitely a big one. Other things seem to be addressing problems that originated from poorly informed decisions early in development that the node.js community has had to deal with ever since (require() based module resolution, npm, etc...).
Node.js's require, the lexically scoped module system, encourages you to lift out code as modules rather than "include files". The difference between an "include file" and a "module" is that a module can have dependencies of its own yet still be reused elsewhere, while an "include file" depends on the environment where it is included. Now, if you have a module that can be reused elsewhere, why not share it with the world? So instead of copying random pieces of code from here and there, with Node.js you got an official library repository, aka NPM. (A minimal sketch of the distinction follows.)
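The distinction in miniature (file names and the helper are hypothetical):

```ts
// greet.ts -- a module: it names its own dependency explicitly, so it
// can be reused (or published) anywhere.
import { capitalize } from "./strings.ts"; // hypothetical helper module
export const greet = (name: string): string => `Hello, ${capitalize(name)}!`;

// An "include file", by contrast, would silently assume the surrounding
// environment already defines `capitalize` (e.g. via an earlier <script>
// tag or file concatenation), so it cannot be reused on its own.
```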
You don't really need NPM hosting with Deno. What you need is an index: sources can be hosted elsewhere independently, and instead people can build sites for popular module lookups.
A package manager is different from package hosting. In the case of Deno you can still build and use third party package managers (and they might be able to utilize Deno's import-maps support to achieve some "magic")
We currently support Rust message-passing based plugins. Rust sources can be fetched with Cargo or other alternative tools.
Maybe (just "maybe" for now) we will be able to support plugin imports. In that case dylibs could be pulled down directly from the network just like normal files (don't take my words as official though -- we have not decided on that yet)
I wasn’t just thinking about the features themselves, but the big picture. It does seem a bit exploratory to me (weren’t the non-V8 parts written in Go once?)
So many! First-class TS for starters, vs TS being a bolt-on, which always brings build steps and config. Drastically improved module and security systems. And lastly, it fills the need for a portable, JIT-compiled, statically typed scripting language with "good" dev ergonomics, which doesn't really exist in the mainstream ATM. I love that the README calls it out as a replacement for bash and python scripts. Bash is useful for running other programs but is terrible for scripting. For those of us who drink the types koolaid, it's lighter and "more correct" than Python. I thought maybe Julia was going to fill this need, but it hasn't really taken root.
Yep, that is the intent of the Deno standard modules -- the goal is to have a standard library large enough to cover most use cases (and one that does not depend on external sources), while making sure it stays independent from the Deno binary to avoid bloat and allow choice (you can definitely implement your own alternatives)
Go's stdlib mistakenly sorted files by default on directory walks (with no option to not sort them). This of course means a directory must be fully iterated before you can process entries or bail out of processing. I wonder if Deno ported this directly or reworked it.
There are some other issues with file stats as well; requiring multiple syscalls to retrieve information that can be had in just one.
It's a mystery to me why they made such a high-level decision in such low-level code without an escape hatch. There's been a lot of talk, but it probably won't be fixed any time soon due to backwards compatibility.
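For comparison, today's Deno API streams entries, so a walk can bail out early (this is the current Deno.readDir, an AsyncIterable; I haven't checked what the API looked like at the time of this thread):

```ts
// Deno.readDir yields entries lazily, so we can stop as soon as we find
// what we want -- no full (let alone sorted) listing is built first.
for await (const entry of Deno.readDir("/var/log")) {
  if (entry.name.endsWith(".err")) {
    console.log("found:", entry.name);
    break; // bail out; the rest of the directory is never enumerated
  }
}
```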
readdir is problematic in ways godirwalk works around.
Of course one could always just make direct syscalls, or use a library that does, but this was a comment about the stdlib. These would likely be implemented differently in hindsight, per discussions on the Go project; and there have been discussions about improving them somehow in Go 2 (readdir2?).
I remember seeing something a long time ago suggesting that type hints do not really offer much performance benefit here. Also, TypeScript types aren't that reliable anyway, since you can always @ts-ignore them. The real impact of TS on runtime performance is more or less an encouragement for developers to create objects of similar shapes, which are easier to optimize.
My understanding is that non-sound type systems are not helpful for runtime optimizations. (In fact, to the extent that they operate at runtime at all, they will slow things down, by adding extra checks.) Only sound type systems give you enough guarantees to start doing optimizations with.
It's not releasing with any GUI functionality, though in theory there's no reason it can't go the same route as Electron :)
You might also be interested in webview, which allows programs to use the native system webview without needing each application to bring their own copy of a browser everywhere!
It isn't clear from the materials, but is a capabilities-based security model being put in place, similar to WASI? Or is it just the beginnings of one, with blanket `--allow-read`, `--allow-net`, and such?
It's probably for the same reason that Node.js uses C++. JavaScript is not understood natively by your hardware, so bindings are created to facilitate interaction between the programmer and lower-level processes.
Yeah, I thought that too. But assuming the loaded code has no access to any of those things either, I'm not sure what the concern is. Though if you gave network access to your code and it loaded other code from the internet which inherited those permissions, that's pretty terrifying (especially since it's not some edge case) - I assume this has been thought through, since it's so obvious.
"Note that we did not have to provide the --allow-net flag for this program, and yet it accessed the network. The runtime has special access to download imports and cache them to disk."
By default, downloading remote imports is allowed network access, but when the downloaded scripts run, they are subject to the network permission settings.
Also, if you are worried about remote imports in general, there are `--no-remote` (turn off http[s] resolution) and `--cached-only` (only resolve a remote module if it is already downloaded and saved in cache) flags on `deno run`
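A small sketch of the distinction: the import below is fetched while the module graph is resolved (no flag needed, per the quoted docs above), while the fetch() made by the script itself is permission-gated. The file name and import URL are illustrative:

```ts
// fetch_demo.ts
// Downloading this import needs no --allow-net: the runtime has special
// access to fetch and cache imports.
import { pad } from "https://example.com/fmt@1.0.0/mod.ts";

// A network call made by the script itself IS permission-gated:
//   deno run fetch_demo.ts              -> PermissionDenied on fetch()
//   deno run --allow-net fetch_demo.ts  -> succeeds
const res = await fetch("https://example.com/");
console.log(pad(String(res.status), 5)); // hypothetical helper usage
```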