From the comments here it looks like the project is missing a comparison table with other, similar tools. There are already questions about nvm, npx [1], asdf-vm [2], nodenv [3] (which also relies on shims).
I've only used nvm so far (it was enough effort to get the team to adopt one such tool, and nvm was the main one at the time), but as far as I can see the main differences are speed, native Windows support, and being able to specify both the Node and npm/Yarn versions; the latter is an important part of getting reproducible results across machines as well. Additionally, nvm at least encourages pinning to a major version, allowing people's minor/patch versions to diverge. (The upside, I suppose, is that this can save you quite a bit of disk space, with relatively manageable risks.)
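To make the pinning difference concrete, a sketch (version numbers are illustrative):

```sh
# nvm pins via .nvmrc, often to a major version only,
# so minor/patch versions can drift between machines:
echo "16" > .nvmrc                    # any Node 16.x satisfies this

# Volta pins exact Node *and* package-manager versions into package.json:
volta pin node@16.14.2 yarn@1.22.18
# which records something like:
#   "volta": { "node": "16.14.2", "yarn": "1.22.18" }
```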
Nix is one of the few package managers that really obviates building these custom solutions over and over for each different language.
You can also use Nix on top of any distro. Its UX isn't as good as a custom-built solution, but if you're a polyglot you'll appreciate not needing a different solution for Java, Node, Python, etc.
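A rough sketch of the polyglot appeal (package names are from nixpkgs; exact attribute names vary by channel):

```sh
# One tool drops you into a shell with all four runtimes available,
# instead of juggling nvm + pyenv + jenv + gvm side by side:
nix-shell -p nodejs python3 jdk go
```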
Unfortunately, Windows support is not there yet (or am I wrong?).
I stopped using Linux as my main OS. I was tired of maintaining my OS with "but I can do whatever I want" as the only argument, and I'd been doing that for 10 years (with Debian, Gentoo, and even LFS at some point). No more.
Windows 10 works well. WSL works well. Docker Desktop works well (unless you use WSL2, in which case it's CPU/RAM/disk-IO hungry). Chocolatey works well.
WSL works well, until it doesn't. For a non-C# (or non-Microsoft-ecosystem) dev, Windows offers very little advantage for development. Every tool used mostly by Unix-loving folks requires workarounds (when possible) to make it work on Windows. Terminal clients are subpar at best; the only selling point I see for Windows is the ability to play games. And that's without mentioning the privacy issues. Most mainstream Linux distros are rock-solid stable. You can use out-of-the-box Ubuntu as your daily driver with no configuration besides installing the software you like. I'm not familiar with Gentoo, but yeah, if you install an arcane distro you could have issues with drivers, etc. macOS is pretty solid too.
My daily driver is Debian on WSL2, using WezTerm (highly recommend) + tmux for the terminal, and VS Code integration for the IDE. Aside from struggles getting custom subdomains to point to localhost, things have been quite stable.
I never thought I'd be saying this, but I find the Windows window manager the best out of the box, and as a bonus I get to do my Unreal/VR dev side projects without booting into another OS.
I stayed on WSL1 for the shared localhost. And WSL2 still has some performance issues related to memory usage and disk IO.
But yeah, I agree, Windows 10 has a great UX. I switched to Windows because of WSL, and I ended up using it less and less over time: ssh, Ansible, and avoiding cross-compilation toolchains when building releases; that's about it.
Yeah, I am in a similar boat. WSL 2 with Docker build environments gives me faster build times than my Mac. My default Windows Terminal profile is a Linux prompt with Zsh. It works surprisingly well. I built a Windows gaming rig and then wondered, "hmmm, this thing is fast. Can I work on it too?" And I was delighted that our toolchain didn't need any tweaking.
> for `helloworld` projects maybe, yeah, have you worked on a large Rust project?
No need to be condescending...
Yes, as a matter of fact, I have. Never ran into any problem with `cargo` or `rustup`.
> have you tried setting up ENV variables in Windows? is a journey of clicks into config windows and options, and this is just an example.
Git Bash still has a .bashrc; I set my environment variables there just like I did on Linux. For production I use Docker.
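For what it's worth, a sketch of what that looks like (paths are illustrative):

```sh
# ~/.bashrc in Git Bash (same syntax as on Linux):
export JAVA_HOME="/c/Program Files/Java/jdk-17"
export PATH="$JAVA_HOME/bin:$PATH"
```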
> I would rather be in Unix
good for you
> than just being able to play Fornite
No need to be condescending... I'm not even playing on my WORK computer...
> and having to "filter my outgoing traffic"
I hope you still do that on your Linux OS, because with the amount of `curl | sudo bash` out there, I wouldn't trust it either.
Where does the idea come from that only Windows is subject to telemetry/monitoring? Whenever you `sudo apt install` something, you trust someone else. If you're paranoid enough not to trust Windows, why would you trust any third-party software, even on Linux?
> it seems too much work. I just want something that "works".
My workflow works for me; I never tried to convince you the way you're trying to convince me. Leave me be.
> have you tried setting up ENV variables in Windows?
It's quite easy.
> is a journey of clicks into config windows and option
Or, in CMD.exe (e.g., via a .bat file), it's a simple call to `setx`.
Or, in PowerShell or Windows PowerShell, `[Environment]::SetEnvironmentVariable`.
This assumes you want to durably set variables associated with the machine or user; transitory settings can be done via `set` (CMD.exe) or `$env:NAME` (either PowerShell).
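A sketch of the above (the variable name and value are illustrative):

```powershell
# Durable, per-user (new sessions pick it up):
setx VOLTA_HOME "$env:USERPROFILE\.volta"
# ...or via the .NET API, which works in either PowerShell edition:
[Environment]::SetEnvironmentVariable('VOLTA_HOME', "$env:USERPROFILE\.volta", 'User')

# Transitory, current session only:
$env:VOLTA_HOME = "$env:USERPROFILE\.volta"     # PowerShell
# set "VOLTA_HOME=%USERPROFILE%\.volta"         # CMD.exe equivalent
```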
In my experience, though, Windows 10 has proved really annoying. Over the past 8 weeks:
1) Windows 10 update has broken sound (I occasionally have to disable/enable the Realtek Audio device)
2) Windows 10 update has fixed sound (without seeming to have changed the Realtek Audio driver)
3) Has throttled the performance of my graphics card (the hash rate on Autolykos V2 dropped by 10x after an update), without having changed the graphics driver
4) Windows 10 Pro won't allow certain applications to be downloaded or installed (they're immediately quarantined), and Virus & Threat Protection isn't visible in the Control Panel. This is apparently a common issue [1]
Generally, Windows 10 seems to take control of configuring the operating system away from the user while adding a bunch of "telemetry" and "application monitoring". Not too happy. I'm in the process of moving to NixOS as my primary system [2]. I'll boot from a flash drive with a live version of the OS including all applications and files I need for local development; the flash drive will mount the OS into RAM; I'll unplug the flash drive and continue my development from RAM; I'll shut down my machine at the end of the day and have a clean slate the next time I boot into the OS [3].
Of course, but what's the point? I use WSL for 3-4 programs, that's it. My entire toolchain is Windows-compatible, and installed through chocolatey with the default configuration.
Volta shows promise and I like that it's fast. But to me it seems like it will be hard to adopt until it supports the `.node-version` file (a working standard among Node version managers). For example, while some may want to use Volta, others will likely be using `nvm`, and a project's CI likely uses `nvm`. Volta has a `volta` property in `package.json` to pin the Node and npm versions. That's nice, and more specific than the `engines` property, which can be a range of versions. But it would be better to keep things DRY and leverage the working standard for how Node version managers track the Node version. (Otherwise a project has to maintain Node version information in multiple places.)
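For illustration, the duplication concern looks like this (version numbers hypothetical):

```sh
# What nvm/nodenv users and CI read today:
cat .node-version               # -> 16.14.2

# What Volta reads instead; `volta pin` writes into package.json:
volta pin node@16.14.2
#   "volta": { "node": "16.14.2" }
# The same fact now has to be maintained in two places.
```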
In addition to the points raised in the other comments, a big reason we decided not to use `engines` in Volta is that `engines` can be a range: one of the design goals of Volta is to allow for reliable, reproducible environments, and a range is inherently not reproducible, since new versions are released all the time and the same range can resolve to a different version tomorrow.
Yeah, conceptually the setting is intended to be closer to a lockfile than a semantic version.
Additionally, even with semantic versions, `engines` is often specified as something like `12 || 14 || >= 16`, spanning multiple major versions, which is exactly where breaking changes can (and do) show up.
The `engines` field is generally aimed at package consumers: when you run `npm install`, it tells you whether the package is compatible with your local Node version.
Pinning the Node.js version with tools like nvm, on the other hand, is for developers of the package (or app) who want to use the same version of Node locally and in production, for example.
It functions more like a suggestion than a hard restriction. And, as far as I'm aware, tools such as NVM plain ignore it in favour of their own configuration files.
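To make that concrete: npm treats `engines` as advisory unless you opt in to enforcement (a sketch; the range is illustrative):

```sh
# package.json fragment:
#   "engines": { "node": ">=14" }

npm config set engine-strict true   # opt in to enforcement
npm install                         # now errors if your local Node violates "engines"
```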
We've been using volta at sentry.io for probably close to two years now, and I have to say, it's one of those tools I almost never think about because it legitimately "just works" (and is very very fast).
It has definitely helped to keep our team's environment in sync as we grow in # of people building Sentry. It's pretty much eliminated the need to ask 'are you on node / yarn x.y.z?'
I switched from nvm to Volta. It is so much faster! If you had nvm switching your Node versions per directory in your bash config, you know it sometimes takes up to a second to cd into a directory. Volta does it so much better with `volta pin`.
nvm lets you install and switch Node versions; Volta does it automatically. It creates a proxy executable that checks package.json and .nvmrc and automatically chooses the proper Node version. It also remembers the version in use when you install globally (npm -g), so global tools keep working forever, without worrying "can I update Node? Are all my global deps compatible?"
Also, Volta is cross-platform. nvm is macOS only, and nvm-windows is a totally different project (with slightly different behaviors, etc.).
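A quick sketch of that workflow (versions are illustrative):

```sh
volta install node@16.14.2    # default Node for your user
volta install typescript      # global tool, remembered with the Node it was installed under
cd my-app
volta pin node@18.16.0        # project-local version, recorded in package.json
node --version                # the shim resolves 18.16.0 here, 16.14.2 elsewhere
```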
> nvm lets you install and switch Node versions; Volta does it automatically. It creates a proxy executable that checks package.json and .nvmrc and automatically chooses the proper Node version
nvm also changes the runtime automatically based on the version in .nvmrc, as Volta does (via shell integration rather than nvm itself). nvm does not read the engines version from package.json, because that version is not the _required_ version but rather a suggested one. Just because the author hasn't bumped it since Node 4 while you're on Node 5 doesn't mean the package doesn't work on 5; it just means the author hasn't updated it.
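That auto-switching comes from shell integration; a minimal zsh sketch along the lines of the one in nvm's README:

```zsh
# Re-check for an .nvmrc on every directory change:
autoload -U add-zsh-hook
load-nvmrc() {
  if [ -n "$(nvm_find_nvmrc)" ]; then
    nvm use     # switch to the version named in .nvmrc
  fi
}
add-zsh-hook chpwd load-nvmrc
load-nvmrc      # also run once for the initial shell
```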
> remembers the version in use when you install globally (npm -g)
That's actually a pretty nifty feature; it sounds like a good idea, and I might give Volta a try, because I've ended up in that situation many times (or worse: upgrading Node and suddenly missing a bunch of binaries without realizing it).
> nvm is macOS only
nvm is also cross-platform (it works on Linux too); it just happens not to work on "standard" Windows, since nvm works via environment variables (and aims to be POSIX compliant), something Windows is notoriously shitty at. Although many Windows devs use WSL, which nvm also works with (and supposedly MSYS and Cygwin too, but I never tried those).
nvm requires sourcing a very slow-loading script. The overhead it added to every shell session was painful enough that I went looking for alternatives. Even today, going by the experiences of some nvm holdouts, it's still frustrating.
I had all kinds of workarounds for that, even a custom zsh thing that tried to automate it as much as possible. Thankfully I no longer have to deal with this :)
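Mine looked roughly like this, a sketch of the common lazy-loading trick (not my exact config):

```zsh
# Defer sourcing nvm's slow script until node/npm/nvm is first invoked:
nvm()  { unset -f nvm node npm; . "$NVM_DIR/nvm.sh"; nvm "$@"; }
node() { unset -f nvm node npm; . "$NVM_DIR/nvm.sh"; node "$@"; }
npm()  { unset -f nvm node npm; . "$NVM_DIR/nvm.sh"; npm "$@"; }
```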
Although `nvm use` shouldn't even be necessary if your terminal is configured that way. Not having .nvmrc versions is less explicit, though: how would I switch between, e.g., the LTS and newest Node versions in Volta?
You might as well recommend them the Alabama State Defense Force too; it's about as related to asdf the version manager as your example of the Common Lisp ASDF.
That's not really going to help most people. It's not hard to obfuscate the malicious parts of the script.
I remember someone saying that you could essentially backdoor a target machine by rolling back certain libraries by a few weeks to undo security patches. I don't know modern *nix package management well enough to know if that's true, but it's a scary idea.
I have never heard of asdf. When we started using Volta (back then called Notion) more than two years ago, it replaced nvm for us, which was just very slow and frustrating.
So I'm not sure what asdf does or how it behaves, but Volta is basically a painless "forget that you even have it installed" version of nvm.
Would be curious to hear what makes it complicated. Volta is, in my experience, a "works out of the box" type of arrangement. It installs quickly, and once you have it, you no longer think about it.
As this is another shim-based manager, you can expect that at some point someone on your team will have a broken setup, either because shim generation failed for some reason or because something else messed with the PATH the tool relies on.
This has been my experience with rbenv, asdf, and all other solutions that rely on shims.
Depending on what you are trying to do (for example, running a short-lived CLI), you'll incur a slight performance cost because of the bash/zsh/fish shell that has to be spawned before running your code. In Ruby's case, rbenv/asdf adds around 20-50ms (could be more, depending on your shell and its initialization code).
> you'll incur a slight performance cost because of the bash/zsh/fish shell that has to be spawned before running your code
While Volta does have a shim, it’s written in a sensible systems language (Rust)[0], so it does not spawn a shell. It only determines the correct process and arguments to run and launches it as a subprocess. There will be overhead to read from disk and determine which version of node/npm to run and a syscall to actually launch the subprocess, but that should be very minimal and nowhere near the cost of initializing a shell.
The result is that Volta feels faster than similar tools that are written in scripting/shell languages. It also enables better Windows support since there’s no reliance on a system having a POSIX shell.
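If you want to measure the shim overhead yourself, something along these lines should work (hyperfine is a third-party benchmarking tool, not part of Volta):

```sh
hyperfine 'node --version'                   # through Volta's shim
hyperfine "$(volta which node) --version"    # the resolved binary, bypassing the shim
```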
I remember contributing a fix for an expansion issue in either bash or zsh (can't really recall which) for nvm. Just running it was very slow. This slowness is primarily why I usually use Guix/Nix on my distro of choice instead.
I'm glad this exists, as I can imagine this way of deciding which Node to use is much faster than how nvm does it.
There seem to be so many of these tools for JavaScript: nvm, npx, n, asdf, Volta. Coming from a Python background, I don't understand why the tooling here is so much more complete than for other languages.
> Coming from a Python background, I don't understand why the tooling here is so much more complete than for other languages.
Is Python immune to this problem? I think not; in fact, I think it's much worse, because we have to "hack" how packages are installed by creating a "virtual environment". How many different tools are there to manage Python dependencies?
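The standard dance, for comparison:

```sh
python3 -m venv .venv              # the per-project "hack": an isolated environment
. .venv/bin/activate
pip install -r requirements.txt
```

And that's before choosing between pip, pipenv, Poetry, Conda, pyenv...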
Cool, but am I a complete grandpa if I'm a bit alarmed by how everyone these days seems to think curl-bashing with strangers is totally normal and fine?
Please, instead of creating something that's language/tool-specific, work on asdf-vm [0]. We all benefit if we don't have to remember which version manager to use for which tool/language/whatever.
nvm will never support Deno, as its scope is specific to Node.js. Volta, however, could in the future be a single tool to manage Deno and Node.js environments, which would be a killer feature for me, apart from the better performance. The Volta devs are sceptical but still not opposed to the idea.
So many things wrong with this comment (bypassing the obvious cargo-culting of "curl | bash needs to die, yugga yugga").
So your assumption here is that you can take control of the distribution channel that lives on get.volta.sh. OK, what other distribution methods can you think of that A) allow easy distribution of binaries and B) don't allow Alice to ship malicious binaries once she has taken over the distribution channel?
If you were to use APT, and an attacker had control of the keys used for publishing to the repository, the same thing would happen as with "curl | bash", just via APT instead.
> if I can take over that server, I have a direct channel into all their customers
Volta is open source, and I can find nothing to suggest you can pay for anything, or even donate. Calling the "users" customers is really weird.
And even with that, if you take over the server you don't get a "direct channel into all of their users"; you get a channel to people who are currently installing Volta. The first would be a huge deal; the second is a big deal.
-----
I really wish this "curl | bash needs to die" meme would die. People are just echoing what they've heard others say, without really considering how hard the "if I can take over that server" part is, and how it applies to any distribution channel, not just "curl | bash". It might be a leftover from the HTTP (without TLS) days, but it's hardly a criminal offense.
No, you seem to have missed my point. I'm only comparing them as distribution channels, not the workflows. If APT is compromised, it's as bad as any other distribution channel being compromised.
What's with the "You don't even seem to understand the difference between apt and debian"? I've used both for countless years by now; pretty sure I have a solid understanding. If anything in particular seems like a misunderstanding, please point it out. (Edit: this is probably because I said "publishing to APT", yeah? If so, you would technically be correct, but I think most people understand what I mean.)
> If you have any issue with how debian manages repo keys
I don't, and I don't think anything in my comment says so either. I'm simply pointing out that if we assume the distribution channel is compromised, any distribution channel could distribute malicious software, not just the "curl | bash" method.
Also, as a reminder from the site guidelines:
> Be kind. Don't be snarky. Have curious conversation; don't cross-examine. Please don't fulminate. Please don't sneer, including at the rest of the community. Please respond to the strongest plausible interpretation of what someone says, not a weaker one that's easier to criticize. Assume good faith.
> I'm simply pointing out that if we assume the distribution channel is compromised, any distribution channel could distribute malicious software, not just the "curl | bash" method.
And from that you conclude that all channels must be equal? That's a logical fallacy. There are trusted channels like the Debian repos, or maybe Cargo, PyPI, etc. And then there is this abomination.
This is like comparing a vaccination at your doctor's office to a needle found in the park (the needle looks respectable, of course) and stating "well, theoretically, my doctor could inject me with poison as well, so it's probably OK to take this one here".
I love me some PyPI, but it's hardly a trusted channel. There's no approval process for uploading, and there's very limited code inspection (until last fall there was none). There have been repeated examples of typosquatting malware resident on PyPI, which is why there are tons of security products designed specifically to let you use PyPI safely.
> And from that you conclude that all channels must be equal? That's a logical fallacy. There are trusted channels like the Debian repos, or maybe Cargo, PyPI, etc. And then there is this abomination.
Are you asking me or telling me? The rest of your message assumes I already answered the question.
The `curl | bash` smells weird to me, but I still do it from reputable-enough sources, since someone who doesn't like it must be reviewing it.
That said, there’s ample evidence that the “enough eyeballs” stuff doesn’t work as well as we’d like in _any_ of these situations. If a sophisticated actor wanted to insert something malicious I’m not sure that one method is easier than the other.
What would be a safer alternative? At some point you'll have to trust volta.sh unless you're willing to audit all of the code manually.
One potential issue with `curl | sh` is that if the connection dies prematurely, you may end up executing half a script, which can lead to all sorts of issues. But judging by the current version of the script, they Did It Right by declaring a bunch of functions and calling their "main" on the last line, ensuring that the script does nothing if not fully loaded.
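The pattern in question, sketched (function names are illustrative, not Volta's actual script):

```sh
#!/usr/bin/env sh
# Nothing executes at the top level while the script is still downloading...
download_release() { echo "fetching..."; }
install_files()    { echo "installing..."; }

main() {
  download_release
  install_files
}

# ...until the very last line. A truncated script never reaches this call.
main "$@"
```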
Safer and more secure? Ask the users to run `cargo install X` (the Cargo equivalent of `pip install`).
Even more secure than that? Upload a signed tarball to multiple mirrors and generate the tarball from a public git repo in a reproducible manner.
Ideal? Publish to the distro repositories and leverage the review process these guys already have in place (I know their audience mostly runs MacBooks, but that's a different problem to solve).
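On the consumer side, the signed-tarball option might look like this (URLs and key distribution are illustrative and hand-waved):

```sh
curl -LO https://example.com/volta-1.0.0.tar.gz
curl -LO https://example.com/volta-1.0.0.tar.gz.asc
gpg --verify volta-1.0.0.tar.gz.asc volta-1.0.0.tar.gz   # fails if tampered with
```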
> I know their audience mostly runs MacBooks, but that's a different problem to solve
I disagree with the assumption that "their audience" mostly uses MacBooks, but regardless.
This sentence really captures what you're getting wrong. You assume an audience they are targeting, then decline to provide any sort of solution for that audience, even while acknowledging that there is another problem that needs solving; you're only interested in providing solutions that fit your idea of the problem.
All of the things you've mentioned require an already-installed toolchain for managing packages. "curl | bash" is popular because of its small footprint and the relative ease of making it as safe as many other distribution methods.
Not that I don't also vastly prefer the package manager I use. But I do realize there's a world outside of me as well, with different problems that might require different solutions with different tradeoffs.
First of all, I am pretty certain that the majority of JavaScript developers run a MacBook when they have the choice. Second, no, I don't think I'm "getting something wrong". It's not my fault that people use macOS and don't have a decent package manager (although I hear good things about Homebrew, it's nowhere near rpm/DNF, afaik). It's also not my fault that people who do have a decent package manager don't contribute enough to get tools like Volta packaged. Still, I can warn about the stupid idea of executing random shell scripts from the internet for the sake of a "convenient small recipe". Even for that I get flak and attempts to justify the behavior. So no, I don't feel obliged at all to solve their problems. This vector will be exploited some day, and it will be embarrassing for some people.
Essentially, I think people are lazy about these things and don't want to think about what they're doing with their systems. If I had an idea for how to make things this comfortable and still safe and secure, I would sell it. But I don't really see a way to solve this except moving the code to a trusted channel with a dedicated approval process.
I agree you shouldn't pipe uninspected code into bash, but it's not that hard to just read the script first. When I visit https://get.volta.sh in a browser, I see a human-readable shell script with comments and everything.
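That is, instead of piping straight into a shell:

```sh
curl -fsSL https://get.volta.sh -o volta-install.sh
less volta-install.sh        # read it first
sh volta-install.sh
```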
[1]: https://news.ycombinator.com/item?id=27023701
[2]: https://news.ycombinator.com/item?id=27023909
[3]: https://news.ycombinator.com/item?id=27023881