Sum types (and pattern matching), first-class results, ownership, good performance, and a rich ecosystem turn out to be quite nice for a general-purpose language once you’ve passed the hurdle of the borrow checker, even ignoring all the zero-cost abstraction stuff.
Also the really solid and safe multithreading. You might struggle a bit getting all your ducks in a row, but past that you’re reasonably safe from all sorts of annoying race conditions. And rayon is… good.
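To make that concrete: a minimal sketch of the rayon experience, assuming `rayon = "1"` as a dependency (the data and names here are made up for illustration). Swapping `iter()` for `par_iter()` is usually all it takes, and the compiler rejects anything that would introduce a data race:

```rust
// Hypothetical example: count words across documents on all cores.
// Assumes the rayon crate is a dependency (rayon = "1" in Cargo.toml).
use rayon::prelude::*;

fn main() {
    let docs = vec![
        "the quick brown fox".to_string(),
        "jumped over the lazy dog".to_string(),
        "rust and rayon".to_string(),
    ];

    // par_iter() spreads the work over a thread pool; the closure only
    // reads shared data, so no locks are needed and no data race is possible.
    let total_words: usize = docs
        .par_iter()
        .map(|doc| doc.split_whitespace().count())
        .sum();

    println!("total words: {}", total_words);
}
```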
Not sufficient for GUI applications, where the Rust experience remains not great, but for CLI or even TUI?
I’ll misquote something I recently saw on Twitter because it’s very much my experience: “and once again I end up RIIR-ing a one-off tool I wrote in Python and wonder why I didn’t just do that from the start”.
There must be something else, because most of what Rust brings to the table is what functional languages have been providing for ages, just with rebranded names.
Existing functional languages had their own issues:
1. Haskell: had to deal with cabal hell
2. Scala: Java toolchain, VM startup time, dependencies requiring differing Scala versions.
3. F#: .NET was considered too Microsofty to be taken seriously for cross platform apps.
4. OCaml: "That's that weird thing used by science people, right?" - Even though Rust took a decent number of ideas from OCaml, Rust got validated by early users like Mozilla and Cloudflare, so people felt safer trying it.
5. Lisp: I don't think I need to retell the arguments around Lisp. Also, a lot of the things Rust touts as FP-inspired benefits around type systems aren't really built into Lisp, since it's dynamically typed; those come more from the Haskell/Scala school.
There is truth to that; the "something else" is a different set of trade-offs for some of the other things that have usually been associated with FP languages.
Rust feels like the love-child of part of OCaml (for the sum types), part of C (very small runtime, ability to generate native code, interop with C libs, etc.), part of npm (package manager integrated with tooling, large discoverable list of libraries), etc...
Borrow checking is a bit newer, but I'm pretty sure there is an academic FP language that pioneered some of the research.
No-one is planning to give Rust the medal of best-ever-last-ever language any time soon.
In my case, I use it because it is dead simple to get a standalone, lean, fast, native executable (on top of the other functional programming features). Cargo is a huge part of what I love about Rust.
I have a great example.
We have 100s of Markdown files. I needed a link checker with some additional validations.
Existing tools took 10-20 minutes to run.
I cooked up a Rust validator that uses the awesome pulldown-cmark, reqwest and rayon crates. Rayon let me do the CPU bits concurrently, and reqwest with streams made it dead simple to do the 1000s of HTTP requests with a decent level of concurrency. Indicatif gave me awesome console progress bars.
And the best part: the CPU-bound portion runs in 90ms instead of minutes, and the HTTP requests finish in around 40 seconds, primarily limited by how fast the remote servers are over a VPN halfway around the world.
No attempt made to optimise, .clone() and .to_owned() all over the place. Not a single crash or threading bug. And it will likely work one year from now too.
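For the curious, the CPU-bound half of such a tool can be surprisingly small. The sketch below is not the actual validator described above, just an illustration of the idea; it assumes `pulldown-cmark = "0.9"` (where `Tag::Link` is a tuple variant) and `rayon = "1"` as dependencies, and leaves out the reqwest side entirely:

```rust
// Sketch: extract every link from a set of Markdown documents in parallel.
use pulldown_cmark::{Event, Parser, Tag};
use rayon::prelude::*;

/// Collect the destination URLs of all links in one Markdown document.
fn extract_links(markdown: &str) -> Vec<String> {
    Parser::new(markdown)
        .filter_map(|event| match event {
            // In pulldown-cmark 0.9 a link start event carries
            // (link type, destination URL, title).
            Event::Start(Tag::Link(_, dest, _)) => Some(dest.to_string()),
            _ => None,
        })
        .collect()
}

fn main() {
    // Stand-ins for the contents of the real Markdown files.
    let documents = vec![
        "See [the docs](https://example.com/docs).".to_string(),
        "A [broken link](https://example.com/404) here.".to_string(),
    ];

    // rayon parallelises the parsing across all cores; collecting owned
    // Strings (rather than borrowing) keeps the borrow checker happy at
    // a tiny cost, much like the .clone()/.to_owned() mentioned above.
    let links: Vec<String> = documents
        .par_iter()
        .flat_map(|doc| extract_links(doc))
        .collect();

    println!("found {} links: {:?}", links.len(), links);
}
```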
Reading your comment made me realize another thing: Using rust often feels like the language is a successful attempt to take the best parts of a lot of other languages and put them together into a single, rational collection of features. Most of what's in there isn't new, but it all fits together well in one place so I don't feel like I have to make a devil's bargain for important features when I start out.
Are the checks somewhat time-stable? Couldn't some of the checking (and network requests) be avoided by caching? For example by assuming that anything OK'd within the last hour is still OK.
> most of what Rust brings to the table is what functional languages have been providing for ages
In a relatively familiar package & paradigm, with great package management (Haskell is the best of the functional languages there, and it’s a mess), and with predictable and excellent performance.
I think Cargo doubling as both build tool and package manager is a big factor here. The combination of Cargo + crates.io makes it very easy to write some Rust code and make it available to anyone with Cargo on their system. Either by `cargo install nu` to build it from sources on crates.io or `cargo install` inside the git repo to build my own customized version. No more waiting for distro packagers to keep up.
Putting this together makes for a nice environment to distribute native system tools. And in the last few years we've seen a wave of these tools becoming popular (ripgrep, bat, fzf, broot, exa and some others).
Thank you for listing these. I had a look at them and they are really useful utilities. Now the challenge is to try to change the muscle memory that relies on the traditional unix commands!
"It's so easy!" yes if you have the language and tools de jour installed and up to date. I want none of that.
It was node and npm.
Then go.
Now Rust and cargo.
Oh, I forgot ruby.
And all this needs to be up to date or things break. (And if you do update them then things you are actively using will break.)
I don't need more Tamagotchis; in fact, the fewer I have, the better.
What happened to .deb and .rpm files? Especially since these days you can have GitHub Actions or a GitLab pipeline do the packaging for you. I couldn't care less what language you are using; don't try to force it down my throat.
Many of the popular Rust CLI tools like ripgrep, exa, delta, etc. -do- have package manager install options.
How dare people writing cli tools not package them conveniently for my distro. The horror of using cargo instead of cloning the source and praying make/meson/etc works.
Feel free to package and maintain these tools yourself for your distro if you want.
The problem with those is they require global consistency. If one package needs libfoo-1.1 (or at least claims to), but something else needs libfoo-1.2+, we can't install both packages. It doesn't take long (e.g. 6 months to a year) before distro updates break one-off packages.
I think some people try hacking around this by installing multiple operating systems in a pile of containers, but that sounds awful.
My preferred solution these days is Nix, which I think of as a glorified alternative/wrapper for Make: it doesn't care about language, "packages" can usually be defined using normal bash commands, and it doesn't require global consistency (different versions of things can exist side by side, seen only by packages which depend on them).
I'm the parent that you replied to. In my eyes there is nothing wrong with .deb and .rpm files. In fact, many of these tools are available for download in these formats and some others (Docker, Snap, etc). And it is good that they are, but it comes with extra work to set up the pipelines/builds.
The concept of a language-specific package manager distributing not only libraries but also executables isn't new. Go get, ruby bundler, python pip, cargo and npm all have this feature.
I was originally answering a question about why we suddenly see all these "written in Rust" tools pop up. I think that is partly because Cargo provides this easier way to distribute native code to users on various platforms, without jumping through additional hoops like building a .deb, and setting up an apt repository.
Sometimes you just want to get some code out there into the world, and if the language ecosystem you are in provides easy publishing tools, why not use them for the first releases? And if later your tool evolves and becomes popular, the additional packaging for wider distribution can be added.
Ease of use and familiarity are different things. Tooling around rust really is easy, when the alternatives (for equivalent languages) are CMake, autotools, and the like.
As it stands, I can brew install ripgrep and it just works. I don’t need to know it’s written in Rust. If, for some reason, Homebrew (or whatever other package manager) is lagging behind and I need a new release now, cargo install is a much easier alternative compared to, again, other tools built in equivalent languages.
> Rich Hickey emphasizes simplicity’s virtues over easiness’, showing that while many choose easiness they may end up with complexity, and the better way is to choose easiness along the simplicity path.
The problem with .deb and .rpm is your dependencies: some things aren't packaged, and you end up having to build separate packages for each major Debian and Red Hat release to link against the correct dependency version.
I'd love that to all be "one-command automated", but I haven't seen such a thing, unlike cargo, which I do find I can be productive with after a one page tutorial.
100% agree. I find it very funny, but in a sarcastic and totally wrong way, when a project's README has an Install section that reads:
Run "cargo install myProject"
I know Rust, so Cargo is not alien to me. But come on, you know that your install instructions are a bit shitty.
Please, choose a target distro, then test your instructions in a clean Docker container. THEN you can sit down knowing you wrote proper guidance for users.
EDIT because this comment is being misunderstood: I meant that you should make sure your instructions work as-is from a clean installation of your intended distro(s), regardless of how you prefer to do so; using a Docker container is just my preferred method, but you can also do a clean VM or whatever else, as long as you don't assume anything beyond a default installed system.
Hold on, do you not see the insane contradiction of not wanting to rely on having cargo installed but requiring something is deployable and tested in a docker container? What?!
No, you misunderstood. I meant that if you're going to document a block of command-line instructions, you should first make sure those commands work as-is in a clean system.
A very easy way to do this (for me anyways) is using a Docker container. I use this method to test all of my documented commands. But there are other ways, of course, like using a clean VM. Regardless, just test the commands without assuming the customized state of your own workstation!
The point is that if I follow the hypothetical instructions of running "cargo install something", the result will probably be "cargo: command not found". When you test this in a clean system and find this error message, this places on you the burden of depending on Cargo, so the least you should do is to make sure "cargo" will work for the user who is reading your docs. At a minimum, you should link to somewhere that explains how to install Cargo.
tldr: you should make sure your instructions work as-is from a clean installation of your intended distro(s), regardless of how you prefer to do so.
You're telling me that people who want to replace a command-line utility are the same people who can't install a toolchain (or just download a binary and put it in their path)?
As a single-sample statistic I can share with you, I like to think I'm a well seasoned C/C++ developer, and have experience with all sorts of relatively low-level technical stuff and a good grasp on the internals of how things (like e.g. the kernel) work.
Yet I got confused the first time ever some README told me to run "npm install blah". WTF is NPM? I didn't care, really, I just wanted to use blah. Conversely, later I worked with Node devs who would not know where to even start if I asked them to install a C++ toolchain.
The point is don't assume too much about the background of the people reading your instructions. They don't have in their heads the same stuff you take for granted.
Don't focus on the specifics, consider the NPM thing an analogy for any other piece of software.
I've found instances where some documentation instructions pointed to run Maven, and the commands worked in their machine because Maven is highly dependent on customizations and local cache. But it failed in other machines that didn't have this parameter configured, or that package version cached locally. And trust me, Maven can be _very_ obtuse and hard to troubleshoot, too much implicit magic happening.
Testing in a clean container or VM would have raised those issues before the readme was written and published. Hence my point stands, testing commands in a clean system is akin to testing a web page in a private tab, to prevent any previous local state polluting the test.
Testing in a clean container tests deploying in a clean container. For me, I run a computer :) Maven sounds like a nightmare tbh so I can understand that that specific piece of software has warts. That said, a good piece of package management software will be relatively agnostic to where it's run and have a dependable set of behaviours. I much prefer that to a world where every bit of software is run on any conceivable combination of OS and hardware. What an absolute drain on brain space and creative effort!
If it's deployable and tested in a Docker container, it's much easier to generate user images; it takes the onus away from the user, and the developer can just put it on the AUR or publish a .deb.
You happen to have cmake or autotools installed, others happen to have cargo installed.
Once cargo/cmake/autotools/make/npm/mvn/setup.py/whatever runs, the process of taking the output and packaging it for your preferred distro is the same.
There's more work involved if you want a distro to actually pick it up and include it in their repos around not using static linking, but if you're asking for a .deb/.rpm on github actions, that's not needed.
Binary releases seem uncommon from my perspective. Every time I go to install a piece of software written in Rust from homebrew, it invariably starts installing some massive Rust toolchain as a dependency, at which point I give up and move on.
Maybe it's a case of the packagers taking a lazy route or something, or maybe there is a reason for depending on cargo. I have no idea.
Isn't homebrew specifically supposed to build from source? e.g. the example on the homepage of a recipe is running ./configure && make on wget.
The fact that you installed the Xcode CLI tools when you first installed Homebrew (which is why that wget example works, because Homebrew itself requires them), and that you only get Cargo the first time you install a Rust dependency, seems to be what you're really complaining about.
Homebrew tries to install binaries by default. (They call them bottles)
Building from source happens if a suitable 'bottle' isn't available, or when `--build-from-source` is specified with the install command.
I know cargo is installed only once, but I don't want cargo. I don't build Rust software myself, so I don't want to have it hanging out on my system taking up space purely just so I can have one or two useful programs that were written in Rust and depend on it. I'll just go with some other alternative.
Perhaps the packagers on your platform went that extra mile to build binary packages. Taking a quick look, the Homebrew formula[0] for ripgrep on macOS just lists a dependency on Cargo (rust) and then seems to invoke the cargo command for installation. I'm not well versed in Ruby though, so my interpretation could be wrong.
I don't want to come off as entitled, either. I know the Homebrew folks are doing a ton of brilliant, ongoing work to make it work as well as it does, so I can't really blame them for potentially taking a shortcut here.
If it installs a bottle, then does it still require installing Rust? If so, then maybe that's a shortcoming of Homebrew.
Either way, it kinda seems like you're complaining about Homebrew here. Not Rust.
If having Cargo/Rust on your system is really a Hard No (...why?), then I guess find a package manager that only delivers whatever is necessary, or, if available, use the project's binary releases: https://github.com/BurntSushi/ripgrep/releases/tag/13.0.0
Ruby requires an interpreter at runtime. JavaScript too. Rust produces standalone binaries. So no, "things don't break" and you only compile things once.
// I couldn't care less about deb or rpm files so don't try to force that down my throat.
There's no win/win scenario when comparing shared libraries to static binaries. On the one hand, static binaries are more user-friendly. But they take the responsibility for keeping your OS secure away from the OS/distro maintainers.
For example, if a vulnerability is found in a crate, you then have to hope that every maintainer of a Rust project that imports said crate diligently pushes out newer binaries quickly. You then have multiple applications that need to be updated rather than one library.
This may well be a future problem we'll end up reading more about as Rust, Go and others become more embedded in our base Linux / macOS / etc install.
I agree that it's not ideal, but unfortunately bad decisions by Linux distributions and package maintainers have trained me as a user to avoid the package managers if I want up to date software with predictable and documented defaults.
Good package manager, broad ecosystem of packages, no header files, helpful compiler messages. It offers a good alternative to C++ for native applications.
So much this. After having encountered ADTs for the first time in Haskell and later on in Rust and other languages, any language without sum types (like Rust's enums) feels wholly incomplete. And the destructuring pattern matching is the cherry on top.
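To make the point concrete, here is a tiny, self-contained illustration (plain std Rust, nothing project-specific): the enum is a sum type, the `match` must cover every variant, and destructuring pulls the payload out in the same step.

```rust
// A sum type: a value is exactly one of these variants, each with its own payload.
enum Shape {
    Circle { radius: f64 },
    Rectangle { width: f64, height: f64 },
    Triangle { base: f64, height: f64 },
}

fn area(shape: &Shape) -> f64 {
    // Destructuring pattern match: the compiler errors out if a variant
    // is forgotten, and each payload is bound to names in one step.
    match shape {
        Shape::Circle { radius } => std::f64::consts::PI * radius * radius,
        Shape::Rectangle { width, height } => width * height,
        Shape::Triangle { base, height } => 0.5 * base * height,
    }
}

fn main() {
    let shapes = [
        Shape::Circle { radius: 1.0 },
        Shape::Rectangle { width: 2.0, height: 3.0 },
        Shape::Triangle { base: 4.0, height: 5.0 },
    ];
    for s in &shapes {
        println!("area = {}", area(s));
    }
}
```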
ADTs and pattern matching really do it. Older languages certainly have their pull and can do a lot, but with many modern coders having an interest or even an education in higher-level mathematics, easily being able to do that puts you way ahead of the competition. Even the hyper-pure Haskell now runs its own ecosystem and has actual systems written in it.
Backwards compatibility can be a heavy burden for a programming language. C++ could be a much simpler, ergonomic language by eliminating features and idioms that are no longer convenient to use.
Achieving mastery in C++ requires a lot of work. C++ projects require a coding standard. The decision space you have when working in C++ is much larger than when working with Rust due to the proliferation of language idioms.
Rust, on the other hand, as a newer language, can benefit from the experience of working with languages such as C++, and provide a better experience right from the beginning. In fact, Rust was created by Mozilla as a response to the shortcomings they perceived in C++.
40 years is a long time, so the experience will almost certainly degrade a fair bit. The notion of editions in Rust makes allowances for breaking changes while still keeping backwards compatibility; I'm very curious to see whether the problems it solves outweigh the complexity in the compiler.
I am not convinced that editions are much better than language version switches.
They only work in the ideal case where all dependencies are available as source code, the same compiler is used for the whole compilation process, and only for light syntactic changes.
With 40 years of history, several Rust compilers in production, corporations shipping binary crates, expect the editions to be as effective as ISO/ECMA language editions.
Two out of three of those things have nothing to do with editions. The final one is basically saying “you can’t make huge changes,” and I’m not sure how that’s a criticism of the possibility of “in 40 years there will be too many changes.”
The design of editions is such to specifically reduce implementation complexity in the compiler, for this reason. The majority of the compiler is edition-agnostic.
My specific concern there is that, while the compiler frontend for any one given edition becomes individually simpler, the whole compiler becomes a bit more complicated, and things like IntoIterator, which pjmlp mentioned elsewhere, imply changes across several editions.
This is not a major problem when editions means {2015, 2018, 2021}, but in a world where we also have 2024, 2027, ... editions, this becomes increasingly tricky.
The Rust compiler is roughly “parse -> AST -> HIR -> MIR -> LLVM IR -> binary.” I forget exactly where editions are erased (and I’m on my phone so it’s hard to check), but for sure it’s gone by the time MIR exists, which is where things like the borrow checker operate. Edition-based changes only affect the very front end of the compiler, basically. This is a necessary requirement of how editions work.
For example, it is part of the interoperability story; because the main representation is edition agnostic, interop between crates in different editions is not an issue.
… I don’t know how to say this truly politely, but let’s just say I’ve had a few conversations with pjmlp about editions, and I would take the things he says on this topic with a large grain of salt.
When Rust editions reach about 5 in the wild, feel free to prove me wrong by mixing binary crates compiled with two Rust compilers, mixing three editions into the same executable.
You can also impolitely tell me how it will be any different from /std=language for any practical purposes.
Again, the ABI issue has nothing to do with editions. You can already build a binary today with three editions (though I forget if 2021 has any actual changes implemented yet) in the same executable. Part of the reason I said what I said is that every time we have this conversation you say you want to see how it plays out in practice, and we have shown you how multi-edition projects work, and how you can try it today, and you keep not listening. It’s FUD at this point.
It is different because those are frozen, editions are not (though in practice editions other than 2015 will rarely change). They make no guarantees about interop, and my understanding is that it might work, but isn’t guaranteed. If you have sources to the contrary I’d love to see them!
There is the similarity that the editions don't really matter for ABI, but otherwise editions are substantially different from the std switch.
C/C++ std switches freeze the entire language and disable newer features. Editions don't. Rust 2015 edition isn't an old version of Rust. It's the latest version of Rust, except it allows `async` as an identifier.
Editions don't really have an equivalent in C, but they're closer to being like trigraphs than the std compilation switch.
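A concrete illustration of the scale of these changes, using the example everyone cites: `async` became a keyword in the 2018 edition. This is just a sketch of the behaviour; the edition is selected via the `edition = "..."` field in Cargo.toml.

```rust
fn main() {
    // Under edition 2015 this line compiles, because `async` is an
    // ordinary identifier there. Under 2018 and later editions it is a
    // keyword and this is a syntax error, so it stays commented out here:
    // let async = 42;

    // Raw identifiers work regardless of edition (on any reasonably
    // recent compiler), which is one reason crates on different
    // editions can keep calling into each other.
    let r#async = 42;
    println!("{}", r#async);
}
```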
That only works because the Editions get updated after being released.
The same can happen to ISO C and C++, that is what technical revision documents are for.
> Therefore, the answer to the question of, say, “what is a valid C++14 program?” changes over time, up until the publication of C++17, and so forth. In practice, the situation is a bit more complicated when compiler vendors offer conformance modes for specific language revisions (e.g. -std=c++11, -std=c++17). Vendors may consider defect resolutions to apply to any historic revision that contains the defect (whereas ISO considers only the most recent publication as the Standard).
This is a thread about a shell. That’s very much the sort of project that is typically written in C, and also the sort of project that really benefits from being written in something safer.
It’s fine if writing systems languages doesn’t appeal to you, but they fulfil an important niche. V8, HotSpot, CPython all have to be written in something.
I write a lot of personal small utilities in Rust these days: The tooling is more modern than C++, it's more consistent cross platform, and it doesn't suffer the VM startup time of Python, which I would have used previously.
"Security bugs" are after all just a specific class of bugs and are still a huge nuisance in non-critical applications as a crash one could leverage for circumventing some security boundaries means most often just a unexplained crash for the common user, which just wants to use a tool in good faith.
So, reducing security bugs means fewer crashes on weird input, fewer leaks (better resource usage), and just a more stable tool, as some classes of bugs (which may or may not be security relevant) are eliminated completely. That's a big deal, because with Rust the (runtime) cost that other languages pay to avoid these bugs is just not there (or is much smaller).
Why do you not? No memory leaks, no security issues stemming from such. No random crashes from memory misuse, and none of the pain of debugging the issues that do exist. It's like a higher-level language, but lower. You get to do things your way, except when your way is fundamentally unsafe. The code is then easier to debug, as it actually describes what's happening and issues are mostly algorithmic, while the application gets a massive boost in reliability and security.
Security is like the issue now, along with reliability. That's what people need and want. Rust offers that.
Rust is perfectly happy to leak memory. Leaks are not considered unsafe. There was actually a bit of a reckoning around the 1.0 release where people widely assumed the language wouldn’t leak, and leaks were proven to be safe behaviour.
Oh? Perhaps I need to reconsider my past trust in Rust. In retrospect it makes sense; interop without leaking memory would be damn near impossible.
Still, I expect it to be very hard to do accidentally. In C all you need to do is have your mind go blank for a moment. Which isn't that uncommon, especially if you're on crunch or something.
First, the language can't save you from getting the program semantics wrong (e.g. if you never delete an entry from a hashmap even after you're done with it, you're leaking that memory). No language can save you from leaks as a general concept.
Second, Rust makes a very specific promise — freedom from data races. Leaking resources does not actually break that promise, because it doesn't allow you to access that resource any more.
Unintentional leaks are rare in Rust; the main issue is reference-counting loops not being cleaned up automatically. Future versions of Rust might even offer some support for unleakable 'relevant types' (the dual of the existing 'affine types', where leaks are not considered incorrect) for better support of very advanced, type-directed resource/state management.
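For anyone surprised by this: the two classic ways to leak in perfectly safe Rust are a reference-counting cycle and `std::mem::forget`. A minimal sketch, with no `unsafe` anywhere:

```rust
use std::cell::RefCell;
use std::rc::Rc;

// A node that can point at another node, which allows building a cycle.
struct Node {
    next: RefCell<Option<Rc<Node>>>,
}

fn main() {
    // 1. A reference-counting cycle: a -> b -> a. Neither count ever
    //    reaches zero, so both allocations live until the process exits.
    let a = Rc::new(Node { next: RefCell::new(None) });
    let b = Rc::new(Node { next: RefCell::new(Some(Rc::clone(&a))) });
    *a.next.borrow_mut() = Some(Rc::clone(&b));

    // 2. mem::forget: explicitly skip a destructor. Entirely safe; the
    //    memory is simply never reclaimed.
    let buffer = vec![0u8; 1024];
    std::mem::forget(buffer);

    // No use-after-free and no data race -- just memory that is never
    // freed, which is exactly the distinction being drawn above.
}
```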
Rust isn't the only language that offers that. In fact, most languages offer that. Even Pascal is safer than C. Or, if we're really concerned about security, then we should be advocating that our shells be written in Ada. But clearly there's more to it than that....
It's also worth remembering that the biggest causes of RCE in shells haven't been buffer overflows. It's been fundamental design problems from the outset (e.g. Bash executing functions stored in environment variables (Shellshock), and CI/CD pipelines or other programs forking out to the shell without sanitising user input or having sufficient RBAC in place).
Don't get me wrong, I have nothing against Rust. I think it's fantastic that we're seeing diversity in the ecosystem. But we need to be careful not to judge a project simply because of its use of Rust (or choice of other language). The reality is soooo much more complicated and nuanced than is often made out on HN.
Maybe we could be writing things in Ada. I don’t know; it’s a language that’s been on my radar for several years but I haven’t actually dug into it yet.
That said, we need something to replace C — and Rust seems to be picking up momentum. Rust seems good enough a replacement to me, and that’s enough for me to cheer it on.
I do agree that “written in rust” isn’t as big a guarantee of quality as people here assume though.
We've already had languages that could replace C. Ironically Pascal was replaced by C on home systems. But Rust isn't a C replacement, it's a C++ replacement.
HN talks about Rust like there was a void before it but there wasn't. I think it's great that the community have finally gotten behind a safer language and I think Rust is a worthy candidate for the community to get behind. But I'm sick of reading about Rust as if it's a silver bullet. HN badly needs to get past this mindset that Rust is the only safe language (it is not), and that programs are automatically safer for being programmed in Rust (in some cases that might be true but in most cases it is not).
I remember learning to program back in the days when people would mock developers for using structured control flow blocks because "GOTOs are good enough". While the Rust movement is, thankfully, the inverse of that, in that people are questioning whether older, arcane paradigms need to be disrupted, there is still a weird following surrounding Rust that has the same emotive worship without actually looking at the problems being discussed. People seriously suggest everything should be written in Rust, or harp on about the language as if it's one of a kind. There are plenty of domains that better suit other, safe, languages and plenty of developers who personally prefer using other, also safe, languages. Rust isn't the right tool for everything.
And the fact that I've seen people advocate Rust ports of programs written in Haskell, OCaml and Go because "it's safer now it's rewritten in Rust" is a great demonstration for how absurd the cargo culting has become.
My point isn't that Rust is a bad language or that people shouldn't be using it. Just that people need to calm down a little when discussing Rust. Take this case for instance: most shells out there these days are written in safe languages. The last dozen or so shells I've seen posted on HN have been programmed in Python, LISP, Go, C# and Scala. It's really only the old boys like Bash and Zsh that are C applications. So Nushell isn't all that unique in that regard. But because it's the only one out of a dozen that was written in Rust, it's the only shell that has a page of comments commending the authors for their choice of language. That's a little absurd, don't you think?
> it's the only shell that has a page of comments commending the authors for their choice of language.
There isn't "a page of comments commending the authors" here, so I have no clue what you are talking about? The main Rust discussion is in a subthread which someone specifically started by asking "why Rust", at which point you can't really fault the Rust fans for explaining why Rust.
Rust is far from "one of a kind". There's a similar-ish project for C at https://ziglang.org/, and to be honest, there have been 20 such projects in the past, 6000 if you count all the total failures, I just like this one.