Another data point: After writing / (re)writing 20K+ lines of code as a CLI side project in Rust (without touching async), I think I can say it's the best language for me after 20+ years of experience in other languages. I like the compiler, and I learn a lot from clippy.
The "hard" part about Rust is that you have to "unlearn" some of the basic mechanisms like scope and ownership that you bring from other languages. It doesn't allow you to build quick and easy solutions that let you lie to yourself (or your boss.) You have to spend more time upfront to compile and fix all warnings, but in my experience so far, fixing these at the right time (i.e. at compile time) is better than fixing them after someone writes a GitHub issue. I can't say it eliminates all bugs, but my trust in my Rust code is around 10x higher than in my Python code for the same problem. (And I've been writing Python since 2002.)
The pain comes from async. Over time, I came to this conclusion: if someone tells me that Rust is nice and you only need to change your mind, this person doesn't write async code. Or they write very, very straightforward, if not primitive, async code and don't touch HOFs, traits, and similar stuff at all.
However, when you write networking code, you typically use async. The worst role is being an async library author (me).
People talk about C++ suffering from its commitment to zero-cost abstraction, but the same thing applies to Rust async. While async may theoretically be the fastest possible way to write asynchronous code, it feels like an order of magnitude more painful than the CSP/channel-based approach used in languages like Go and Clojure (and the upcoming Java Loom).
Personally if I had to write async code that required anything other than the absolute minimum possible latency, I'd prefer to write Go, and I say that as someone who thinks Go's lack of generics was an absolutely terrible idea.
I made the mistake of trying to learn Rust while doing async programming.
IMO, when it comes to concurrency, it's a matter of picking your poison:
Threaded Rust: No overhead of a GC, but overhead of context switches and multiple stacks.
NodeJS: No overhead of context switches and multiple stacks, but the overhead of a highly optimized GC. (And I suspect that the GC can do tricks like run when the process is waiting on all tasks.)
Only if your application can limit the number of threads to the number of physical cores.
I.e., if you're doing a web server with a thread or process for each incoming web request, you're blocking and context switching. If you have to have locks, you're also blocking and context switching.
This is why async programming models are common, they move the logic of blocking and context switching into the language and runtime, where the compiler can juggle more concurrent tasks in a single thread. It's just harder to do in Rust because, to oversimplify, things that are in stack memory in a threaded environment are now on the heap. In C#/NodeJS, this difference is transparent, but in Rust it's not.
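To illustrate the "stack moves to the heap" point: Rust compiles an `async fn` into an anonymous state machine, and locals held across an `.await` become fields of that state machine; any heap allocation of it is explicit rather than implicit. A rough sketch (the `step` function is invented for illustration):

```rust
use std::future::Future;
use std::pin::Pin;

// `big` must survive across the .await, so it becomes a field of the
// compiler-generated state machine rather than an ordinary stack local.
async fn step(n: u64) -> u64 {
    let big = [n; 16];
    std::future::ready(()).await;
    big.iter().sum()
}

fn main() {
    let fut = step(1);
    // The state machine is at least as large as the array it preserves.
    assert!(std::mem::size_of_val(&fut) >= std::mem::size_of::<[u64; 16]>());
    // Unlike C#/NodeJS, putting it on the heap is an explicit, visible step:
    let _boxed: Pin<Box<dyn Future<Output = u64>>> = Box::pin(step(2));
}
```

In C# or NodeJS the runtime performs the equivalent of that `Box::pin` for you; in Rust it is part of your code, which is where much of the friction comes from.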
Async in something like C# is much less painful precisely because it doesn't try to be a zero-cost (or at least as low cost as possible) abstraction. When the language can allocate stack frames on the heap implicitly as needed, and there's GC to clean them up, things "just work".
Go now has generics. The performance of using them is hit or miss it seems.
Your opinion is valid, but I would say, if you aren't juicing for the best performance, you can adopt easier patterns for async. There are comments on this post detailing how to go about doing that. Or, yeah, use another language if you want.
Can I do this without having to wrap half the libraries in the ecosystem if I want to use them without worrying about async? C# has that issue: the ecosystem buys heavily into async, so it can be hard to avoid it.
On embedded, you can write straightforward asynchronous code without using *Async*. You do it using DMA and interrupts, perhaps with static analysis etc. There are efforts to use Async on embedded Rust to abstract these, but it's not required.
Of note re networking: My observation is that the Rust Async ecosystem only covers TCP and higher. There are loads of Async TCP, HTTP etc libs, but nothing that can do anything lower than that! At that point, you're looking at perhaps `socket2`, and `smoltcp`; the latter, of note, also works on embedded, and goes lower than TCP, despite its name.
My best guess is that a lot of people are using Rust for TCP and HTTP level web programming, e.g. servers, where spawning hundreds or more IO-bound tasks at once makes sense; the area where Async shines. Why I'm confounded:
#1: Rust excels at low-level programming; i.e. it's one of a select group of languages capable of this (along with C, C++, Ada, and Zig)
#2: Web application programming in Rust has a long way to go to get to the level of Django and Rails. It only has Flask analogues.
Neither of those goals are served by the existing Async-based ecosystem; it occupies a spot in between.
> The pain comes from async. Over time, I came up to this conclusion: if someone tells me that Rust is nice and you only need to change your mind, this person doesn't write async code.
Async code is a pain in almost any language. Certainly any language that differentiates between async code and non-async code has the async code be a pain.
> Certainly any language that differentiates between async code and non-async code has the async code be a pain.
Function colouring is not the only problem with async code. The difference is that concurrency in other high-level languages usually doesn't break down polymorphism and other language features. Also, they don't push you to deal with lifetimes, which is a serious issue in Rust async.
Writing async code in C# is a lot easier for me than in Rust. Unfortunately, I haven't had a chance to write async in functional languages such as Haskell or F#, which are well-known for elegant concurrency.
I'm not sure if C# async was derived from F#, but it definitely looks very similar. The main difference is that in F# async is vastly more customizable (but also slower, because the compiler can't make certain assumptions due to said customizability).
Writing this kind of async code in Haskell (and to some extent OCaml) is much nicer, because you can abstract over the asyncness of code. This can't be done in Rust or C# because the type system isn't powerful enough (no higher-kinded types). To be fair, adding HKTs to Rust's existing type system is a challenging theoretical problem in itself.
In a sense, but the "ill-formed, no diagnostic required" hack allows for scenarios where what you wrote is nonsense, and a human can explain why, but your C++ compiler doesn't have that insight, so it compiles anyway and does... something. This avoids needing to teach the machine how to determine if what you did was sound.
But this is of course not a very safe way to write software.
If the author is foolish enough not to use the language features that have existed since C++17 to validate template code, surely.
I also don't find debugging Rust macros that fun, yet most likely the answer will be that the macro author didn't take enough care, and that Rust is great for writing gigantic DSL macros.
> If the author is fool enough to not use the language features that exist since C++17 to validate template code, surely.
To be sure, the fact that the diagnostics aren't required does not forbid them from being provided, but it does mean you'd need to know whether you've been provided with such diagnostics and how effective they actually are. Unless the answer is "I have diagnostics and they are 100% effective", you're in the same situation.
> I also don't find debugging Rust macros that fun
Which kind? I don't find debugging the declarative macros too hard, they are after all just expanding what you wrote according to some simple rules, and you can ask the compiler to show that expansion to you.
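As a concrete illustration: a declarative macro really is just rewrite rules over tokens, and the third-party `cargo expand` subcommand can show you the expansion. A toy sketch (`max2` is a made-up macro):

```rust
// A declarative macro: the compiler expands calls according to these
// simple pattern-matching rules, nothing more.
macro_rules! max2 {
    ($a:expr, $b:expr) => {
        if $a > $b { $a } else { $b }
    };
}

fn main() {
    // `cargo expand` (or rustc's expansion output) would show these calls
    // rewritten into plain if/else expressions.
    assert_eq!(max2!(3, 7), 7);
    assert_eq!(max2!(10, 2), 10);
}
```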
Procedural macros present unlimited potential for exciting debugging because now you're essentially modifying the compiler at runtime. A C++ pre-processor macro can cause some nasty problems but it's not going to run a different compiler... [Technically Mara's nightly-crimes only runs the same compiler with different flags, but it could run a different one if she'd needed to do that]
You might be right on this. I have written a lot of rust and a fair bit of network code, but I never went near async. I'm not exactly sure what my problem with it is, but I started writing rust before async was a thing and always felt more comfortable with finite state machines and polling for network code (I guess I am atypical).
1. Async for trivial things is straightforward and easy
2. Rust actively discourages some async patterns to protect you from some memory misuse edge cases
It does limit your freedom in writing code that e.g. relies a lot on async callbacks, but there's a reason.
The first time I did a massive async project (think a Rust binary maxing all cores, executing the largest possible number of async fns doing various things in parallel, from fetching data online to running TensorFlow), I came at it all wrong and wrote something that would have worked in Node or Haskell but was a pain to compile in Rust.
After days of pain I understood what rust wanted and nowadays I use the same pattern and it's fairly easy for me. Just another tool in the shed.
I disagree. I write Rust, I write mainly async Rust and while my code is maybe not the most sophisticated I use a lot of async traits and generics. I still find it one of the best languages I know, definitely my favourite for all around coding at the moment.
That's probably true. I've experimented a bit with tokio for local IO-bound tasks, but decided it's unnecessarily complex. I think some patterns will emerge from current Rust async development eventually. Good parts will be made easier and bad parts will be removed, but digesting such concepts usually requires time.
IMHO the time for doing analysis like this is before submitting code, not before compiling it.
Ideally you want code without unused variables, implicit type casts, etc. in a repository. But when you are locally testing code you're in the process of writing, it is very unproductive to have to care about unused variables because you commented out one line to see the difference, or to change casts everywhere because you temporarily changed the type of something. It'd be nice to only do the work of cleaning up such issues in the final version of your change.
These checks are enabled by default in many environments, build tools, scripts, etc., and it's not trivial to disable them, or doing so requires a full recompile.
So I wish a language or build environment would use the concept of "development time" vs "submitting time" for different sets of warnings-as-errors.
This is one recent change in Zig which really annoys me: unused variables are now errors. Languages that complain every time there's an unused variable become useless during experimentation and quick hacking. They turn my hyperfocus into death by a thousand paper cuts.
I hate it with a passion. Please respect my mental flow, if I'm trying some ideas out, it's just rude to stop me in my tracks to tell me I forgot to comment out a variable. Who cares.
I will fight anyone who thinks this is a good idea.
For a variety of reasons, Zig doesn't do warnings.
Personally I don't get the big deal with unused variables being errors, from either direction. I'm not sure what it accomplishes to make them errors and I'm not sure why people complain so much about having to comment them out.
If you comment out a statement, you cannot know for sure how many unused variables this caused unless you visually scan the entire preceding code, which costs time.
So the only way to find out is to compile and then have the compiler tell you that it refuses to continue because there's an unused variable at line X.
So now you needed not one but two compiles to do something that should have taken one, which also costs time.
But it's then also possible that due to commenting out line X, there's now yet another unused variable (or more) somewhere. Etc... (Unless the language offers a way to fake-use the variable, like (void) cast in C++. Then at least you only have one wasted compile and not recursively more)
And then if you want to re-enable the original line that you commented out, you need to also remember to uncomment those other lines.
Repeat this process many times a day when you're really in the middle of developing something, and the total time this silliness costs makes it a bad tool, not a good tool, for efficient programming in the flow.
The goal of making unused variables errors is to prevent bad submitted code, but that should not cost you time during development. Only enforce that check at the end, not during.
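For contrast, Rust treats unused variables as a warning rather than an error, and gives you cheap ways to opt out while experimenting; a small sketch (the names are made up):

```rust
fn compute() -> i32 {
    7
}

fn main() {
    let total = 40 + 2;
    // Commenting out the next line would leave `total` unused: Rust warns
    // but still compiles, so experimentation isn't blocked.
    println!("{total}");

    // Underscore prefix: intentionally unused, no warning at all.
    let _scratch = "work in progress";

    // Another idiom for deliberately discarding a value.
    let result = compute();
    let _ = result;
}
```

The warning still shows up in the build output, so the cleanup work the parent comment wants to defer to "submitting time" is visible without ever blocking a dev-loop compile.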
I think that one thing that we really need to revisit in PLs is sorting out different kinds of comments. Like, in most languages, a comment is just whitespace, with no semantic distinctions. More recently, docstrings got some special handling. But what's really needed is a kind of comment that's specifically about commenting out code; ideally, working on syntax tree level (so you can comment out one entire {...} block, say), and with the compiler being aware that the stuff inside is code, and handling it accordingly for purposes such as these.
Earlier in Rust, I saw people recommending `#![deny(warnings)]` (always turn warnings into errors) but I think the community has shifted since then, because I feel like I don't see that as much and instead see people enforcing warnings-as-errors only in CI.
Speaking of warnings, something I appreciate about Rust is you only see warnings for your own code and not for your dependencies so you can crank up the warnings to whatever level you want without being blocked by dependencies (minus macros). Granted, there could be times where looking at high risk warnings for dependencies could be important.
I'm using cargo-limit crate for "cargo lcheck", "cargo ltest" or "cargo lclippy" to show errors before warnings before clippy warnings. You're right that "cargo check" gives the same priority to warnings as errors, and it's annoying.
I have found that as I gain experience with rust, I spend less and less time “upfront to compile and fix all warnings”. I think rust requires a different design philosophy. It just takes time to adopt it into your mind.
That's true for me as well, but I learned these after many failures. Now, I start by writing enums and structs for the problem, then iterate the design with functions and if it's really needed add these functions as impl of a certain struct/enum. This is the inverse of "Object-Oriented" design where one has to start from interfaces and manipulate data to satisfy the interfaces.
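As a hedged illustration of that data-first workflow (the `Task` type and its functions are invented for the example), the progression might look like:

```rust
// Step 1: model the problem with enums and structs first.
#[derive(Debug, PartialEq)]
enum Status {
    Pending,
    Done,
}

struct Task {
    name: String,
    status: Status,
}

// Step 2: iterate on the design with free functions...
fn complete(task: &mut Task) {
    task.status = Status::Done;
}

// Step 3: ...and only promote behavior into an impl once it has settled.
impl Task {
    fn is_done(&self) -> bool {
        self.status == Status::Done
    }
}

fn main() {
    let mut t = Task {
        name: "write docs".into(),
        status: Status::Pending,
    };
    complete(&mut t);
    assert_eq!(t.name, "write docs");
    assert!(t.is_done());
}
```

This is roughly the inverse of interface-first OO design: the data shapes drive the API rather than the other way around.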
I wrote servers in Go as well, and I like the language. But after Rust, I believe that at some point I started to enjoy solving the "problems" Rust brings; not that it's rationally better than Go.
Using C++17/C++20 with g++'s sanitizers (e.g. for undefined behavior) and static analyzers, along with clangd as the LSP, catches most if not all coding errors at editing and compile time. In the past two years, once my C++ program compiles warning-free, it has seemed bug-free as well.
In my experience this is untrue... I've worked with C++ on and off (albeit sometimes reluctantly, so maybe I'm projecting some misery) and a lot of errors in C++ cannot be caught until it starts instantiating the templates. Sometimes it also fails when linking. Sure, you can always go through the whole build/compile process, but for serious C++ projects that's impractical (even with stuff like (s)ccache) because of how long it takes. And I'm a code-quality tooling/intellisense freak to the point it can be annoying for others, and I've never seen a truckload of tooling come even close to simply running "cargo check".
I've only experienced waiting ~10 minutes at most (on decent dev machine) and it frustrated me to no end, but even more serious projects sometimes take hours. Maybe concepts help with this?
And the "jUsT dOnT uSe HaLf tHe LaNgUaGe" argument holds as much water as a leaky bucket, because it's hard to agree on what the good parts are in a large team, unless it's strictly enforced and audited somehow.
I've tried Rust on and off; since I don't need it for work, I never had time to fully dive into it. The news about Rust on the web just seems too good to be true; are there any cons compared to modern C++?
Yes, Cargo and the tooling etc. are great, but I can set up a full C++ build env quickly too; not as good, but enough for daily coding.
A big con is that the ecosystem is young still. You can generally find stuff for almost anything at this point (even enterprisey stuff like OData client generators), but they're not "established" the same way their C++ equivalents would be.
GUI and game dev spaces in Rust are very innovative in how they solve some problems, and I'm fairly enthusiastic about both spaces, but I'd be reluctant to recommend them for production, because they're still immature/developing compared to anything in C++.
Unless you change header files deeply engrained into your code base (or your entire code is templates), incremental builds should not reach 10 minutes, not to mention hours...
That's cool. I last worked in C++11 (in 2012, with OpenCV) and used Boost's smart pointers to achieve memory safety, but I was probably overdoing those at the time and don't remember it as a good experience. (I remember I spent nights with valgrind, but it may have been my own confusion at the time; I don't remember the details.) I'll take a look at these as well to brush up my knowledge of the modern features of the "grand old language." Thank you.
To be honest that doesn't seem too convincing for a new language. It actually reminds me of the early dogmatism of Java which today is often seen as too restrictive.
Rust makes an argument about safety, but only for a select few types of errors, and the costs seem rather high.
It brought a discipline to my memory/variable management and functional problem solving, as Haskell brought a discipline in my typeful thinking some time ago. I don't claim it's "the best language for all use cases" but I'm glad I'm learning it.
That's true, though in my experience those quick solutions lead to more quick solutions, and in total that leads to a mess in the long run. But if that long run will never come, it may be better not to bother. These are all sensible choices.
I wish there was a Rust-like programming language that was just a little bit higher level than Rust. I like Rust's wide ecosystem with high quality packages, the nice type system, traits, the idea that my code generally runs pretty fast even if I'm being lazy about writing good code, and my code usually working correctly if it compiles.
I care about speed and correctness but Rust makes me also care about ownership and lifetimes even when I don't really want to care about that stuff. I've written Rust for years now and this still occasionally can really slow me down. Rust is still a very nice language so I just deal with it :) no pain no gain.
I would love a Rust-with-GC or something in that spirit that is similar to Rust but automatically figures out lifetimes as much as possible and GCs whatever it cannot figure out automatically. I've looked at languages like Nim and read about research languages like Koka so I'm optimistic something will emerge from this space I really like.
I think that's ocaml. Higher level, with GC, nice type system, no traits but signatures and higher order modules might be enough for you, and a compiler that produces fast native binaries.
There are plenty of languages to choose from if you allow a GC. I think Rust's niche is systems-level programming where a garbage collector is an impossibility. I wish there was an easier to use language with Rust's features which occupied this niche. I find myself reaching for modern C++ instead of Rust when I want to be productive :(
> I find myself reaching for modern C++ instead of Rust when I want to be productive :(
You are not the only one, this is also my experience.
And if you think about it: This is fine.
There is a reason why we are not using TLA+ or formal proof frameworks when we want to do quick prototyping: safety has a cost like everything else.
It can be a run-time cost (in case of ARC or GC languages) or a mental effort / productivity cost but it is still a cost.
Same here, I like Rust a lot. However, on one side I already have JVM and .NET languages for 99% of the stuff I do, and for the remaining 1%, having had C++ burned into my brain since 1994, along with its 40 years of tooling, I'd rather spend the time using it.
I think you are pretty spot on here. I write Rust and Haskell most of the time, where the choice depends on the kind of project I'm working on. Also I've written Rust professionally but Haskell has been confined to random hobby projects.
I love Haskell but it has its own flavor of problems. I've meant to brush up a bit on OCaml; I've heard it has interesting features in its module system that Haskell does not have (maybe what you call "higher order modules").
Yup. Professional OCaml dev here, higher order modules are pretty cool. But OCaml shows its age and without typeclasses I don't find it very pleasant to program in. Most days I'd rather use Rust to be honest.
There is only one OCaml build system (dune) that people use, unless they write their own. But yes, you have the freedom of using an alternative to the OCaml standard library if you wish.
> I care about speed and correctness but Rust makes me also care about ownership and lifetimes even when I don't really want to care about that stuff.
I don't think one can care about speed and correctness without caring about lifetimes and ownership. Even if you do not care about memory ownership and use garbage collection, there is a plethora of other things that you have to take care of. For example, if you've just sent an RPC/HTTP request and a user closed a particular window, should you abort the request or not? What if the user has closed all windows except one? Can they actually close window X without doing something in window Y first? What should happen if you forgot to abort the request and now the callback is called, but the original caller is gone?
I think after the (correct) first sentence you are a bit mistaken here. The example you describe can be solved correctly and with good performance in garbage-collected languages such as Haskell or Scala. I would even say that those languages make a correct solution easier than Rust does, but they trade away a bit of performance for it (e.g. by using immutable data structures instead of an ownership model). But for async stuff, I think this is actually simpler.
However, when performance is key, then Rust is by far the easiest solution to get it performant and still correct.
"good performance" is a relative term. You may consider Haskell or Scala having "good performance" but for others having garbage collection itself is a big no. So you are either left with Rust's approach where you limit the programmer, or C++'s approach where you leave memory management mostly to the programmer
My response was focussed on the part of the challenges of async logic. And what OP essentially said is that Rust does not only help performance because of the ownership model (vs. garbage collection), but also because it makes it easy to come up with correct and performant solution to async-related problems.
My point was that, when ignoring the performance benefits of ownership model vs. garbage collection, a language like Haskell or Scala makes a performant and correct solution for async problems easier than Rust does - at least at the current point in time.
You can get great performance from garbage collected languages, to be honest. I guess what you can't get is perfectly predictable latency — but that only matters for a pretty narrow class of applications.
I feel this fact is grossly underappreciated. Caring about speed and correctness but not ownership and lifetimes (and thread safety and race conditions and...) is like caring about road safety but not caring about headlights. You're not avoiding ownership and lifetime problems, you're just avoiding looking at them.
Thread safety, and memory model safety, and all this important stuff, is important! But ownership and lifetimes are models of reality that Rust asserts as part of its domain language, which, like all models, are approximations of reality, not reality itself. They're useful to the extent that they help to solve a given problem. And not all problems fit into the assumptions that they assert.
I've bounced around a few languages this year, looking for a language to use in my spare time for fun / enjoyment / skills growth (I have a similar set of criteria to you). I've looked at rust, swift, haxe, zig, and now nim, and I think I'll be sticking with nim. I like that it's seemingly simple, and takes care of memory management for you, but you still have access to pointers if you want them and can extend the language with its macros. It seems like a language that I'll be able to grow with / pick up complexity as I want it, but by default I'll have concise readable code. What was your take on nim?
* Compiling is pretty fast too (compared to Rust or Haskell).
* It's easy for me to bind to whatever C library I have even if nobody made nice Nim packages for it. This one was important for the project I was working on.
* There is a GC, but if you choose the proper GC it will be based on reference counting (or if you are compiling to JavaScript, it'll use JavaScript's GC). It means I don't have to care about cleaning up resources. My memory may be wrong, but I think Nim also tries to remove reference-counting checks when it can. I haven't ever checked the compiled code to see how good a job it does at this.
* The language rarely complains that something in my code is wrong; there's no borrow checker complaining. Even with little experience I was able to write some quite complicated code. It's like Nim wants to do its best to compile and run my crappy code.
* It's easy to read. Maybe because it looks so much like Python and I have lots of Python experience. Nim does not want to complain about your code unless it has to. Despite being statically typed, you don't have to write type definitions that much.
There are things I don't like as much:
* The story for running threads in Nim is not great. You can run OS-level threads, but it's a bit janky (you'll have to care about what data can be shared). It wouldn't stop me from writing multi-threaded software though, if I really needed threads.
* The documentation could be a bit better. I think Nim project should take this giant page: https://nim-lang.org/docs/manual.html and reorganize it and make sure the language is easy to read and find stuff. I often have trouble finding documentation on some language feature. I think the project is acknowledging this right in the first paragraph "This document is a draft! Several of Nim's features may need more precise wording. This manual is constantly evolving into a proper specification."
This one is harder to pin down to specific examples, but the language feels a bit immature. I feel like there's a bunch of half-baked features that were thrown in on a whim. And obviously the package ecosystem isn't as wide as in more established programming languages. I looked it up and Nim has apparently existed since 2008; I'm sure it used to be way more immature ;)
Despite the negatives, I think Nim is an amazing tinkerer's language. I can very quickly write programs that will be speedy and easy to read. I'm not sure I'd start a very complicated large project in Nim though. I am bullish that Nim will mature and its userbase will grow, fixing some of the warts.
In a few months I plan to take a long vacation and work on a video game with my friend. There is a high chance that I'm going to use Nim for this project; to make something that runs both in browser and also natively.
Good summary, I haven't touched threads but I'd agree with everything else. I'd also add that since the language feels dynamic, I feel like my code is less structured, but I think this might just be because all the boilerplate you have to write in other languages isn't there. So maybe my code is just as unstructured and with nim I can't be fooled by boilerplate into thinking it is.
I've been following The Nature of Code and making a little 2D platformer at the same time using Nim + raylib. Compiling to wasm I found to be even simpler than when doing the same thing with C++. It's really nice being able to compile to native or wasm with a single compiler flag.
Try D; it has most of the features you just described. It feels Pythonic and it's GC'd by default[1].
There is ongoing work on a borrow checker for the D language, and, pardon the pun, I believe D is borrowing just the right amount of features from Cyclone and Rust for safer compiled software without overly complex syntax.
Swift could be a lovely general-purpose language. I like the balance Swift strikes between ease of use, expressiveness and performance. Like, Swift has an equivalent of Option, but there's syntax sugar for it. Rust has String / &str / etc. To get an SSO string you need to pull in an external crate. Swift just has a built-in, good, general-purpose SSO string as part of the language.
But adding random half baked features to swift seems to be on the promotion path at Apple. Nothing is well documented. The language doesn’t feel stable and there isn’t much of a broad community like there is with rust, javascript and python. It’s a pity!
I would argue that swift is pretty distant compared to the ML family of languages - specifically Haskell. Besides garbage collection, the biggest difference between Haskell and Rust is that Haskell already has higher kinded types.
Scala might just be that. It has a very strong type system which is quite similar to Rust's, lifetimes are managed by the JVM's state-of-the art GC, and all in all it is a very expressive language. Think of python-level expressivity, but all statically typed with type inference. It also has a similar stance on the functional-imperative question as Rust has - it prefers functional concepts but lets you write imperative code when you want to hand-optimize something.
Oh, and I almost forgot to point out that it can just use basically one of the largest ecosystems (Java's), and can also compile to js or native (for the latter there is scala native as well as graal).
It is a GCd language, a borrow checker is very seldom needed outside of that (and frankly, one should just use try-with and similar constructs for other kinds of resources), so why would it need one?
As for mutability, it is not enforced on a language level (only shallowly), but the standard library, the language primitives and basically everything makes control over mutability very good. In practice it won't be much different than Rust with the interior mutability pattern.
Oh I see. No it does not have it enforced at the language level, but it relies heavily on immutable data structures, and the type system is strong enough to express a complete actor-based concurrency library.
But since it is interoperable with Java and that exposes low-level primitives of concurrency, it can’t really be made guaranteed data-race free.
Your responses were quite informative, thank you. Even for me, Scala looks more interesting now. I know some Java adepts; I'll try to convince them to climb a few steps.
I really like the idea of something like vlang (https://vlang.io), as I've tried multiple times to find the motivation to learn Rust, but always end up going back to golang for the simplicity.
Unless there is some very compelling reason to need Rust, a lot of people would be better off with Golang or Vlang, because it would make their lives easier in terms of more general usage and ease of use.
Without diving into completely obscure programming languages, you can look at Scala, F#(doesn't scale super great ime), or I guess golang.
That said, they all don't give you what rust gives you and have their own troubles.
The truth is, either you care about lifetimes and ownership, or you don't really care much about speed and correctness. Not saying that's targeted at you as a person, but the royal "you". Even in C/C++ I have to think about that stuff; there are just no tools for it.
Fwiw you can import crates to have a gc in rust. To me it defeats the purpose. Wishing you the best on your search
It's interesting that you find Go to be Rust-adjacent. Setting aside the USPs of each language (goroutines and borrow checker respectively) and just speaking about the general experience of writing code, I find Go painful for all the reasons that I find Rust pleasant.
The best summary I can give of Rust is that it was designed by people who learned from the mistakes of the programming languages that came before it.
The best summary I can give of Go is to quote Rob Pike's response [0] to a request for syntax highlighting in the Go playground:
> Syntax highlighting is juvenile. When I was a child, I was taught arithmetic using colored rods (http://en.wikipedia.org/wiki/Cuisenaire_rods). I grew up and today I use monochromatic numerals.
It's like my favorite John Quincy Adams quote: "It is what it is, and it ain't what it ain't." Different strokes for different folks, you know? At the end of the day, if folks code at all and it makes them happy, that's a good thing.
It is mostly an example of the mindset that I think GP is trying to illustrate.
Go has nil where Rust has Option.
Go has weird not-quite-tuple returns & if err != nil where Rust has Result.
Go has no real enum concept, where Rust has its powerful enums and matching constructs.
Go has generics, but only after a decade of pressure from users (and even then, they are much much less useful than Rust's type system).
I like Go and I feel very productive in it, but its commitment to simplicity is dogmatic in many ways and it very much is missing milestone advancements in PLs from the past several decades. It could easily have been made in the 90s.
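The nil/Option and err/Result contrast above can be sketched in a few lines of Rust. This is a minimal illustration with made-up `find_user` and `parse_port` functions, not code from any real library:

```rust
// Option makes "might be absent" explicit; the compiler forces the caller
// to handle the None case, so there is no nil-dereference equivalent.
fn find_user(id: u32) -> Option<&'static str> {
    match id {
        1 => Some("alice"),
        _ => None,
    }
}

// Result replaces Go's `val, err := ...; if err != nil { ... }` pattern
// with a type the caller cannot silently ignore.
fn parse_port(s: &str) -> Result<u16, std::num::ParseIntError> {
    s.parse::<u16>()
}

fn main() {
    match find_user(1) {
        Some(name) => println!("found {name}"),
        None => println!("no such user"),
    }
    match parse_port("8080") {
        Ok(port) => println!("port {port}"),
        Err(e) => println!("bad port: {e}"),
    }
}
```

Both types are plain enums, which is why the "no real enum concept" point and the nil/err points are really the same complaint.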
I would love a language that combined the best aspects of Go and F#: expressive, great type system, fast compilation, native binaries, opinionated formatter, excellent concurrency, comprehensive standard library, consistent documentation, etc.
Throw in great compile to JS capabilities with small bundle sizes and the ability to map Typescript typings into the language so you can seamlessly consume well typed libraries, and man. Killer language, right there.
I feel like Go was built with primary design considerations that are often not considered publicly and often at odds with what programmers want out of a language. I saw one of the first public talks from the creators at Google I/O and they stressed two things: compilation speed and new developers coming up to speed quickly. From what they said, Google had a few C++ projects with multi-hour compilation times that, when profiled, showed about 90% of the time was spent reading header files. So a core Go philosophy was single-pass compilation to cut down compile times as much as possible. Similarly they stressed that the focus on simplicity meant there wasn’t as much variation in style of Go code and new programmers—even those unfamiliar with Go—could quickly come up to speed and contribute to a project.
Viewed through this lens, the resistance on the part of the creators to changes that compromise these values even a little bit makes sense. Generics take time to get used to and any code base that makes extensive use of them will take longer to get up to speed in, even if it enables you to move faster later on.
That talk has really shaped how I look at Go. I think it solves problems that Google has (really large projects built by teams that have a ton of turnover) really well. But as with a lot of things that emerge from Google, it’s a solution to a problem that not too many other companies face. The ones that do will get an awesome tool that’s proven to work. But the ones for whom it’s 90% of what they need are going to get a lot of pushback getting that last 10% accepted because it already does almost exactly what Google needs it to do and any departure from that will be, in their minds, counterproductive.
I agree, I've mostly made peace with what Go is and I still enjoy using it.
It just tantalizes me because of how close it is to my ideal general-purpose programming language. There is a large middle ground between the minutes-long compile times of Rust and the seconds-long compile times of Go.
It doesn't; primarily, the way I saw the discussion heading was (warning: strawmen ahead):
> "Go is a good alternative to Rust because it is easy to write performant, concurrent code"
> "Go is not very comparable to Rust. I won't describe why, here's a quote by Rob Pike and I'll strongly imply it's because its creators deliberately avoided complexity, even where useful."
> "You did not explicitly criticize anything about Go, therefore it must not have flaws"
Then I came in and described exactly where I feel Go ignores the state of the art in PLs.
> But what does any of that have to do with (Rust-shaped vomit)
Go doesn't need all those type system gymnastics because it does not have the problem of the borrow checker to deal with and doesn't promise to avoid segfaults for you.
It is a serialization of relative lifetimes, basically. Whenever you write code in any language that doesn't have a GC, you either have to maintain a mental map like this to avoid bugs, or you make the language do it for you like Rust does.
Yeah I don't mean to criticize Go's generics for being less featureful given how late they were introduced, but they do currently prevent me from building any kind of mapping/filtering/pipeline style code because of their limitations (no generic type parameters on methods). A Go implementation of Result or Option could paper over the lack of sum types if we only had that.
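For context, the mapping/filtering/pipeline style being referred to looks roughly like this in Rust, where `filter`, `map`, and `collect` are generic methods on the `Iterator` trait; Go's restriction that methods cannot introduce their own type parameters is what rules this shape out there:

```rust
// Generic methods compose freely into a pipeline. Each step introduces its
// own type parameters (the closure types, the output collection), which is
// exactly what Go's current generics do not allow on methods.
fn even_squares(nums: &[i32]) -> Vec<i32> {
    nums.iter()
        .filter(|&&n| n % 2 == 0)
        .map(|&n| n * n)
        .collect()
}

fn main() {
    println!("{:?}", even_squares(&[1, 2, 3, 4, 5, 6])); // [4, 16, 36]
}
```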
The more I think about it, what I really want is something with the ML feeling of Rust, but in the space that Go occupies (good performance GC languages). Go frustrates me because it nails the runtime and tooling side of things but falls far short of it in the other ways I mentioned.
There's no shortage of good-perf languages with a GC: F# would be one prominent example that sounds very similar to what you describe.
The usual objection is that it requires a .NET runtime, whereas Go produces a single self-contained executable. But .NET can produce self-contained executables these days.
Yes, .NET can produce binary executables, but is it a common practice? I’m asking because I think the ecosystem matters a lot. If it’s a common practice, then it’s more tested and more stable and you get more tools and documentation.
It's a relatively recent feature, at least the "pure" implementation (what they had before was, essentially, a self-extracting binary), so it's still gaining popularity. But seems to be fairly common with containers.
As the sibling commentor said, syntax highlighting is only one example meant to illustrate Go's sometimes unnecessarily frustrating design. In case it wasn't clear, the example wasn't that Rob Pike doesn't like syntax highlighting. The example was that Go's playground doesn't have syntax highlighting (even as an option) for a reason as arbitrary as "because I said so".
That is why I find Rust and Go to be dissimilar. Rust is a language full of good ideas, almost all of which didn't originate in Rust. Go feels like it was designed with a very specific brief - "we want C but with GC and easy concurrency" - but whose designers otherwise had an NIH syndrome-like aversion to good ideas and common sense.
I don't agree with your framing. You are stating a 'moral' proposition, i.e. Rust == Good Ideas AND Go == Bad Ideas, and then applying the 'moral' position to the language as a whole. So you end up saying, in brief (and acknowledging that I do not think you are actually making a moral claim about the qualities of these languages): Rust is Good, Go is Bad, and therefore they are dissimilar. I don't think the reasoning you're putting forward actually says anything about the languages and their similarity/dissimilarity.
I am not saying that Go and Rust are similar or not. I don't really agree with GP's comment that Go should be in the running for a Rust-with-GC/slightly-higher-level-Rust, but the frankly dismissive "Rust Good, therefore not similar to Go Bad" is not justified, even though the conclusion regarding similarity is probably correct.
Also, this is a little off topic, but I have seen multiple people say that Go 'was designed with a very specific brief - "we want C but with GC and easy concurrency"' and then go on to complain about the lack of things like generics, destructuring match syntax, functional concepts, etc. But all of those discussions, including your comment, start out with what seems to be an acknowledgement that Go's design had a very specific target. I agree that Go is basically the fulfillment of the brief you gave, i.e. C with GC and easy concurrency. So, when the target is C with two unique additions, why does everyone seem confused that Go doesn't have all of these extra 'good ideas and common sense'?
I'm not trying to turn this into a thread on Go's merits or lack thereof. I just don't understand why so many people seem to think that Go somehow didn't do exactly what it set out to do. You don't have to think what they decided to design was a worthwhile language, but to expect a lion to be a shark, or an apple be a steak, is just illogical.
I appreciate your comment. My intention wasn't "Go is dissimilar because it's bad and Rust is good" but I can see why you'd think that. I'll attempt to clarify.
The reason I like and recommend Rust is the number of decisions it gets right which are completely orthogonal to the borrow checker. It's clear that a lot of thought was put into the unexciting parts of the language. That's why I like using it even though I don't particularly need the borrow checker and I'd be happy with GC. Some examples off the top of my head:
- Expression-oriented nature makes code easy to write and nice to read
- Compilation errors are as clear and helpful as possible
- Comprehensive yet skimmable API documentation
In short, Rust is a nice language outside of the borrow checker.
A lot of the things which strike me as nice about Rust can't be said about Go.
For that reason I think Go is a surprising suggestion for someone who likes Rust but doesn't care about ownership and lifetimes.
There are a number of minor but valid frustrations I encounter when writing Go which are completely orthogonal to the brief of "C but with GC and easy concurrency". I'd understand if these problems were a result of the language's goals but often they seem to exist for no particular reason. In that regard I think Go is quite dissimilar to Rust.
Your comment makes your point very well. Honestly, so did the first comment. Sometimes I find myself so tired of just lurking and reading HN that I just text-dump some comment onto a totally valid parent comment.
I like Go a lot. At my previous employer a lot of the systems we wrote for log processing were written in Golang. And I thought it was really nice; I can compile my code really quickly and the code in general runs pretty fast. I've only used it in the space of big-data file processing though.
My only real issue with Go is that I feel the language part itself is simplistic and doesn't have the kind of type/trait/typeclass system that Rust or Haskell has. In Haskell I especially like doing DSLs using monads, e.g. I did an integer-problem-to-SAT-problem DSL and another DSL for a NetHack-playing AI. In Go making these DSLs is more pain. Although I think that would be true for Rust as well, just not as much.
Hare is not rust adjacent, sorry. It can't even represent the (small) stdlib of rust since it lacks generics, RAII... Not to mention the memory safety. It belongs in a different class of languages.
Go's lack of expressivity and weak-as-hell type system are hardly close to Rust; I can barely see a similarity other than both being PLs. Hell, Java is probably closer to Rust than Go is; at least it is also nominally typed.
Just wacky I had to scroll this far to see Go mentioned. In certain contexts Rust is very impressive, in others Go is a much better choice. I'm really starting to hope that we'll see some of the important advancements in Rust packaged up in a better language.
These one-liners aren't super-constructive, my own comment was hardly in-depth, so whatever...
On any team or in a community you're most likely to have a range of skill levels across members. A language can be many things: a powerful tool or a strong barrier.
When I first heard about Rust I thought we were on the verge of getting some real advancements across a much larger community of developers and scope of projects. Instead I think we're headed further down a path of at least 3 major groups of PL (scripting, memory-managed, precise semantics). Who knows, maybe it's for the best. If so we better get busy improving the inter-op.
In addition to the Ocaml recommendations I suggest Haxe, which is basically "Ocaml concepts transposed into a compiler targeting various GC runtimes". (Compiler is also written in Ocaml - it's not kidding around) Easy to pick up if you already know JS syntax, and basically covers the "best-of" of static inferred type systems, but you can easily break out of it if you need dynamic or low-level behavior.
Downside is that it's not convenient if you just want one standard library and runtime because it targets all of them. You have to justify the trade-offs involved in that, but it's a good secret weapon.
D lang is incorporating a borrow-checking system. I'm not certain how far along they are with it. Walter Bright posts here often and will know. D has a wide range of GC and non-GC techniques that may meet your needs.
It puts the reference counting in the type system. Like the default string type and arrays (=vectors) are reference counted. And for speed you could use manual memory management
Any string literal is like an `Arc<String>`. E.g. in Delphi you can now write string concatenation like:
var a = 'bcd';
var b = 'xyz';
var c = a + b;
which corresponds to this Rust:
let a = Arc::new(String::from("bcd"));
let b = Arc::new(String::from("xyz"));
let c = format!("{}{}", a, b);
I think the big thing that hurt Pascal initially was the poor start it got off to. By the time things like Borland Pascal came around and fixed the weaknesses of having explicitly started as a teaching language, the damage had already been done and C had pulled ahead.
As for nowadays, I'd say there are three things that help people to bounce:
1. The big name (Delphi) is proprietary and Free Pascal's documentation feels like it hasn't caught up with various lessons that were learned about how to document a toolchain and standard library since the early 2000s.
(And that's before you discover that, apparently, the API documentation is manual enough that the official stance on the Free Vision TUI component's documentation is "go find a copy of the Turbo Vision book from the Borland Pascal manuals", and that the Free Pascal Wiki either neglected to mention one or two classes when they were saying what is yet to be reimplemented or neglected to mention additional restrictions present in the DPMI port.)
2. Similar to with Ada, the ecosystem people are normalized into as they learn programming is leaning more and more strongly on the C-descended syntaxes, making Wirth-style syntaxes feel more alien.
I have to admit, compared to something like Rust, there's a certain off-puttingness to having so much verbosity and ceremony in the structure of the block constructs. To exaggerate a bit to get the point across, it's sort of like Pascal expects me to remember all the layouts and lines for what an empty tax form looks like well enough to draw it from memory before filling it out... and that sense of discomfort doesn't go away if I delegate it to my code snippets tool.
It lends an ambient sense that there's an iceberg of structure I don't understand and Pascal expects me to remember how many separate peaks it should have poking out of the water and where they should be, without me understanding the topography of the submerged portion.
(And yet I still recommend it as the Java/C# equivalent for DOS retro-hobby computing, since it's safer than C and has a much richer library of bundled functionality and comparable performance.)
3. The Free Pascal APIs are aggressively 90s and don't have enough examples to break you from your 2010s-and-beyond expectations. (I was almost tearing my hair out over how to get status updates from their zip extraction code before I realized that I was fixated on the Qt/GTK/DOM/etc. idea of using some kind of signal.connect(my_callback) function rather than using subclassing to set an event handler.)
Just, in general, it's an experience that's alien to current language trends and growing more so, the documentation available before you're committed enough to pay for it is wanting, and if you're willing to pay hobbyist prices, you need to do your own research on what exists to pay for.
It's an embeddable scripting language with the goal of being a Rust-like language that supports hot reloading of functions AND data. To achieve the latter, it uses GC'ed memory such that memory can easily be mapped when the memory's type changes.
It's still in early development but maybe one day will serve your needs :)
I think some aspects of ownership are useful, e.g. move semantics and mutable/immutable references. withoutboats had a good post about a Rust-with-ownership-and-GC language a while back: https://without.boats/blog/revisiting-a-smaller-rust/
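To make "move semantics and mutable/immutable references" concrete, here is a minimal sketch (the `consume` helper is made up for illustration). The commented-out line is the kind of use-after-move the compiler rejects:

```rust
// Takes ownership of the String; after the call, the caller's binding is dead.
fn consume(s: String) -> usize {
    s.len()
}

fn demo() -> Vec<i32> {
    let mut v = vec![1, 2, 3];
    // Any number of shared (&T) borrows may coexist...
    let r1 = &v;
    let r2 = &v;
    assert_eq!(r1.len() + r2.len(), 6);
    // ...but an exclusive (&mut T) borrow is only allowed once the shared
    // borrows are no longer used. One writer XOR many readers, at compile time.
    let w = &mut v;
    w.push(4);
    v
}

fn main() {
    let s = String::from("hello");
    let n = consume(s);
    // println!("{s}"); // error[E0382]: borrow of moved value: `s`
    println!("consumed {n} bytes");
    println!("{:?}", demo()); // [1, 2, 3, 4]
}
```

These two rules are arguably useful even in a hypothetical GC'd Rust, since they prevent aliasing bugs independently of memory reclamation.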
Would it be possible to do something like Rust+Go where you integrate Go code natively into Rust? So Go pointers become `GoPtr<T>` where `GoPtr<T>: Copy + GC` etc. Then all Go structs implement `GoStruct` which would have a static method called `reflect` that returns all the information required for type reflection. One thing I love about Rust is its genericity that at least makes imagining such things possible, though I do expect a lot of engineering effort.
People don't like to mention C# since, like PHP, it has a bad rep from its early days, but C# 10 and .NET 6 hit all the sweet spots people are mentioning. C# and .NET now run on Linux and Mac and compile to a single executable like Go, with tree shaking so the binary is much reduced. I don't really care, but it's a shame people don't take another look at C#. When I learned Rust I was surprised how similar it felt to C#, but much more ergonomic.
I think Rust will eventually repeat the story of C++.
1) There will emerge more ergonomic languages that solve real-world problems that Rust attempted to solve.
2) Rust's issues with type system and async will be largely resolved, so it could be used in areas where it is absolutely required.
Is Rust's most-loved status simply Stockholm syndrome? I've really tried with Rust, and we simply don't get on. I hear all the arguments about CVEs and wonder if Rust will reduce the number of these problems simply by disabling the programmers who create them. I understand the draw, I want to love it, but I think I might not be smart enough.
Since Rust is so early in its adoption and there are few jobs, many of the responders are hobbyists who are into Rust, like it, and want to keep using it; not necessarily people who use it for work. And since the survey only counts people who have used it in the past year, it ignores all the people who tried Rust before that and didn't like it.
As an extreme example, let's say I created a language in 2020, the entire software community tried it and they absolutely hated it and unanimously rejected it as the worst language ever, but it still has 2 users (my mom and I) who want to keep using it - that language would score 100% Most Loved on Stack Overflow's survey even though it was hated by the overwhelming majority of people who tried it.
With this formulation you create the strange effect that if people hate a language enough to stop using it, the language's most loved metric goes up after a year.
This selection effect cannot explain Rust's standing relative to other new languages and neither can it explain why Rust's popularity (as measured by "most loved") increases over time as more and more companies are using it.
Here's the history of Rust's "most loved" percentage going back to 2015
I think that's consistent with my line of thinking - if in 2015 some people tried it and decided that it's not for them, they drop off from the sample and % loved goes up in following years.
Not claiming that this explains the entire increase in the percentage (I have no way of knowing) but I don't think this data contradicts my reasoning either.
In order for this line of thinking to explain the statistics, either (1) Rust would have had no users over the time period or (2) more than the loved percentage of new users like the language (mean goes up).
It's pretty trivial to see that Rust adoption is increasing, so you can discount (1).
Rust is really hard. So while people who don't like JavaScript continue to use it anyway, people who don't like Rust are likely to drop it like a brick.
Rust is too painful for hobby projects, IMO. Over the years, I've worked in C, C++, Go, Perl, Python, PHP, Java, Scala, JS, Tcl, ... others I've forgotten, most professionally and well as for personal projects.
I've used Rust for a hobby project; it was more painful before, but they have fixed a lot of the churn issues. The ecosystem is more or less stable now, compared to before.
1. Squashing bugs that show up after the code successfully compiles (or, in Python's case, appears to work) is a big drain on motivation.
2. The ecosystem's focus on API stability means that, once it works the way I want, the costs of ensuring "it built/ran yesterday, so it should build/run today too" will be minimized.
3. I don't have to write as many unit tests to feel I can trust it with my data.
TL;DR: Once you're on the same wavelength as the compiler, Rust is great for projects where your motivation is purely intrinsic.
(Well, that and the compiler has been constantly improving. Things especially got much better after non-lexical lifetimes landed.)
I'm a bit surprised that Rust is really the top language here. I would have expected the top ranked languages to be nowadays-relatively-unpopular languages with dedicated long-time users, like Lisp, Tcl, Perl, and APL.
There's probably an additional layer of substantial selection bias here. But maybe that explains the effect in the first place, greybeard Tcl users possibly didn't bother to take the survey while Clojure and Rust evangelists did.
For most people who program for a living, the hype over Rust means little. They need to use what is in the industry right now. Often, that is a tried and true language that is relatively easy to learn and use.
Having a complex programming language which limits you and controls how you use it does not sound appealing to those who need a feature done, fast.
> Having a complex programming language which limits you and controls how you use it does not sound appealing to those who need a feature done, fast.
This is so interesting to me. I'm a Rust programmer by trade (as in - I'm not a hobbyist, I actually write Rust for work). We've found that, while the feature work is a bit slower in Rust than in other languages the company used to use (mostly Python), they tend to require a lot less maintenance down the line (less bugs, easier refactoring), and so it ends up canceling out a bit.
I also never find that Rust limits or controls me in my day to day work. There are some things you cannot do easily, sure, but those things tend not to matter in application code. Linked lists are hard, graphs are hard, but when writing an app, I just use an existing dependency like petgraph or std::collections::LinkedList instead of reimplementing those data structures. And if you need to write those things, it's not like it's impossible; you just need to drop down to unsafe Rust and ideally figure out some safe way to expose that API.
Really, the biggest disadvantage of Rust is that its learning curve makes onboarding newcomers much harder if they don't have experience in the language. For Python/Go/JS/Ruby you don't really have this problem, even if a potential recruit doesn't have experience in the language, they will probably be able to pick it up as they go without too much trouble. But in Rust, it can take a fairly significant amount of time to get up to speed and stop fighting the compiler, in the order of several months.
This feels like a common pattern. Languages like Rust and Go seem to pick up a lot of users from the world of dynamic languages, who then incorrectly associate the productivity gains of static typing with the specific language they picked.
A lot of programs written in Rust would almost certainly be better off being written in Kotlin. The one in the article is a good example. Why are they writing a messenger bot in Rust? That doesn't seem like the sort of use case Rust was targeted at. It's also a modern language with a lightweight syntax and pretty good static typing, great refactoring tools etc, but it's way more user friendly. There's no borrow checker because it's GCd, compile times are much faster (at least for the JVM version) etc.
You can also use Kotlin/Native or GraalVM native-image to produce standalone executables that don't need a full JVM, if that's a requirement for your use cases. The native images are astounding. They can start faster than programs written in C and their memory usage is also way less than a typical JVM app. Downside is of course compilation time but you can avoid that by just developing on the normal JVM and then AOT compiling at the end when it's time to release.
AFAICT Rust is a general purpose programming language. People can use it how they want. How do you learn a recent programming language? "hey boss I'm going to write production code in a new language that I haven't learned. Lol YOLO"
The funny thing about this is, using Rust in production is way safer than an old language like C or even C++. So the YOLO part isn't crazy here. If you don't use unsafe, you cannot (read: can not) get data races, among a host of other common bugs.
Now if we were talking about some other "new languages" not naming names, you can memory leak kilobytes writing hello world... Yikes.
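As a sketch of the data-race claim above: the only way this cross-thread counter compiles is with thread-safe wrappers like `Arc<Mutex<_>>`; handing a plain `&mut i32` to several threads is a compile error, not a runtime bug:

```rust
use std::sync::{Arc, Mutex};
use std::thread;

fn parallel_count(threads: usize) -> i32 {
    // Arc gives shared ownership across threads; Mutex gives exclusive
    // access to the data. Without both, spawn() refuses to compile.
    let counter = Arc::new(Mutex::new(0));
    let mut handles = Vec::new();

    for _ in 0..threads {
        let counter = Arc::clone(&counter);
        handles.push(thread::spawn(move || {
            *counter.lock().unwrap() += 1;
        }));
    }
    for h in handles {
        h.join().unwrap();
    }
    let total = *counter.lock().unwrap();
    total
}

fn main() {
    println!("{}", parallel_count(4)); // 4
}
```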
Well, the dispatcher problem is applicable outside of messenger bots too. But your point stands. I think it was a mistake that teloxide is written in Rust. I just wanted to learn a new language, so I thought, why not write such a library in Rust? Don't make my mistakes.
> We've found that, while the feature work is a bit slower in Rust than in other languages the company used to use (mostly Python), they tend to require a lot less maintenance down the line (less bugs, easier refactoring), and so it ends up canceling out a bit.
Python to Rust is a pretty radical swing from one approach to language design to another. I'd expect you could solve most of the maintenance problems with Python by switching to something statically typed but with garbage collection (e.g., C#, Java, or go), without incurring the costs of moving to Rust.
Well, we have other reasons to choose Rust rather than a GC'd language (mostly that we do a lot of FFI, which Rust makes much easier than Go or Java). I do agree that, when a GC'd language fits the problem space, it tends to be the better choice.
GC has little to do with FFI though. In C# P/Invoke, you basically just declare a static method as external and specify the library where it lives, and that's that. So far as I know, Java has a similar story these days, no need to write JNI wrappers by hand etc. Go is special not because of GC, but because of its green threads.
GC and FFI do have a lot to do with each other, though, because FFI usually introduces unmanaged objects.
When a lot of what you're doing is interacting with low-level platform APIs, you end up having a lot of those unmanaged objects. After a certain point, the upsides of using a GC kind of disappear because you still have a lot of places you have to worry about those objects.
Of course, this can be worked around by providing managed wrappers around those unmanaged objects, but at a certain point it becomes easier to just drop down to an unmanaged but safe language (like Rust) that models the unmanaged resources more accurately. In my experience, it's somewhat easier to provide safe Rust wrappers around your average Windows API than it is to provide a C# managed wrapper around the same.
---
And yes, go is special because it has very smol stacks, so doing any kind of FFI on it is a bad idea.
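The "safe wrapper around an unmanaged API" pattern mentioned above can be sketched with a trivial example: binding libc's `strlen` and hiding the unsafety behind a function that establishes the precondition (a NUL-terminated string) by construction:

```rust
use std::ffi::CString;
use std::os::raw::c_char;

// Raw binding to the C function; calling it is unsafe because the
// compiler cannot check that the pointer is valid and NUL-terminated.
extern "C" {
    fn strlen(s: *const c_char) -> usize;
}

// Safe wrapper: CString::new guarantees NUL termination and no interior
// NULs, so the unsafe call's precondition holds for every caller.
fn c_strlen(s: &str) -> usize {
    let c = CString::new(s).expect("no interior NUL bytes");
    unsafe { strlen(c.as_ptr()) }
}

fn main() {
    println!("{}", c_strlen("hello")); // 5
}
```

Real bindings (Windows APIs, PyO3, etc.) are bigger, but the shape is the same: one small unsafe core, a safe boundary everyone else uses.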
Unless your code is strictly glue between those lower-level APIs, GC is still a benefit for the majority of it. And, on the other hand, the lack of it in FFI is, at worst, similar to C... except you still have all the other language features (like, say, null safety or pattern matching) at your disposal.
I'm genuinely curious as to what would make FFI in Rust easier than in C#, assuming an apples-to-apples comparison (i.e. use of "unsafe" and associated features in both cases). Most complexity in P/Invoke shows up when you try to use it to automatically map data to managed data types; but in modern C#, you might as well just use raw pointers, stackalloc, spans, etc.
Rust seems to excel in having an ecosystem that provides binding helper libraries like PyO3, Neon and napi-rs, Helix, etc. which both ensure "if it compiles, you're doing the C glue properly" and provide a layer for automatic type conversion.
...so requiring the use of `unsafe` is often akin to forbidding Cargo/pip/npm/etc. and then calling the language unproductive, or faulting a procedural/imperative language for performing badly when you code as if you're writing Haskell.
I'm still learning Rust, but the idea of having to use unsafe features to implement something as simple as a linked list seems "wrong" to me. What am I missing?
I kind of hit a wall with Rust after realizing that something like doubly linked lists are difficult because when two nodes are referring to a node between them, you don't have a clear owner. So basically all situations where you have two or more references to an object need to be thought out carefully, and for me it was a bit of a let down (even though I fully understand the reasoning behind it and why it's useful). For now I stick with languages with GC.
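For what it's worth, the "no clear owner" problem described above can be expressed in safe Rust by making ownership asymmetric: `next` owns the following node (`Rc`), while `prev` is a non-owning back-pointer (`Weak`). A minimal two-node sketch:

```rust
use std::cell::RefCell;
use std::rc::{Rc, Weak};

struct Node {
    value: i32,
    next: Option<Rc<RefCell<Node>>>,   // owning forward link
    prev: Option<Weak<RefCell<Node>>>, // non-owning back link: no Rc cycle,
                                       // so nodes are freed when dropped
}

fn link_two(a: i32, b: i32) -> (i32, i32) {
    let first = Rc::new(RefCell::new(Node { value: a, next: None, prev: None }));
    let second = Rc::new(RefCell::new(Node { value: b, next: None, prev: None }));

    first.borrow_mut().next = Some(Rc::clone(&second));
    second.borrow_mut().prev = Some(Rc::downgrade(&first));

    // Walk forward via the strong link, then back via the weak one.
    let forward = first.borrow().next.as_ref().unwrap().borrow().value;
    let back = second.borrow().prev.as_ref().unwrap().upgrade().unwrap().borrow().value;
    (forward, back)
}

fn main() {
    println!("{:?}", link_two(1, 2)); // (2, 1)
}
```

It works, but the `Rc<RefCell<_>>` ceremony (and the runtime borrow checks it implies) is exactly the friction being described; a GC'd language just lets both neighbors point at the middle node.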
So here's a weird thought, why should you ever write your own doubly linked list? Why shouldn't you use one that a community of people has vetted and juiced for performance instead? In C what I just said is for some reason heresy.
But yes you are right, anytime you have two or more accesses to a data structure you do have to think a little harder about it. Usually you just want to use a reference to that object, sometimes it makes sense to clone it, sometimes you want to pass it by value and return it by value. In the case of doubly linked lists you probably want to drop into unsafe code.
I get it though, GC languages are more zen in that regard.
Writing your own data structures allows you to write it to suit the specific needs/constraints of your problem/domain. Typically that means you get something that is significantly simpler, faster and with a different feature set.
Libraries are typically optimized for generic use, so are often more complex, slow, and difficult to extend with new features (both coding wise or the features might not be suitable / have trade offs for other people).
Won't let me reply to fizzynut, but this is a reply to you.
Yes making data structures is important, and the reality of that is, there are people in the ecosystem creating fancy ones which can be leveraged for your use case.
If that doesn't exist then yeah, you have to write them. Sometimes that's difficult (you need to use unsafe), sometimes it's easyish and you can use all safe code (I did this for a graph type). It can be done (lots of people are doing it), but I think what it shows is that data structures can be difficult to do correctly.
In a worst case scenario you could write it in asm inside Rust if you really need low-level control, or write it in C/unsafe Rust and use FFI.
Basically worst case scenario in rust is writing something closer to C.
Lots of people writing their own data structures and libraries should be a reflection of the different needs and diversity of the ecosystem, but if it's a reflection of difficulty then it tends to lead to a much narrower ecosystem.
That isn't to say a narrow ecosystem is necessarily bad, it can often be good if a particular language is used for a specific domain as all the libraries and data structures will likely be a better fit for your problems. Going outside of those constraints can often be very punishing in those languages though.
The fact is that writing a doubly linked list correctly is hard and it's easy even for veterans to miss things. Rust is going to put those invariants in your face.
My advice would be not to judge any language on how easy it is to implement a doubly linked list. 99% of the code you write will likely not be that, and that particular data structure exposes a lot of choices and trade-offs at a time when you'd be better served learning other parts of the language.
Unpopular opinion but the whole memory safe idea is more niche than HN commenters would have us believe. Most of us are writing web apps and services that do not have strict memory requirements nor catastrophic failure modes.
Dynamic languages and GC'd languages cover most of what our employers are paying us for: web apps, backend services. It is ironic to build super safe software, then deploy it on Kubernetes, which is written in a GC'd language prone to nil dereference errors.
Rust has its place, but it's a very small place. These days computers are so fast there's companies worth billions built on Ruby and Python, if you go native with a GC language you'll cover 98% of your needs.
Rust IMHO has gone too far towards safety at the expense of developer ergonomics and pragmatism. We're so expensive that adopting a subpar language that is easier to grok is often a savvier business decision.
Indeed. I'm a Rust dev somewhere it actually matters (video dev - we have to process 60 frames a second, for hours/days on end, with completely predictable performance, without crashing once) and I'd never pick it if I just wanted to build a webapp where nothing really matters and if the whole thing falls over we can just put it in a retry loop.
There's plenty of code out there that /cannot/ be written in Ruby/Python/Go/C#. Rust solves some really hard problems for people working on that type of code. But it's not what most people on HN are working on.
I think the problem is that a lot of web developers do not really understand that this type of code exists, and that languages which do not target web development can be exciting.
Your video app is a perfect example of where Rust shines. I'd add embedded development, data-intensive, fault-intolerant systems and game engines are perfect for this language.
But without even going as far as frontend web applications, stuff like backend services, system services, GUI applications, CLI applications, etc. doesn't actually need all that much memory safety and control over allocations.
Completely agree! When you need to compute numbers or deal with raw bytes, you have C. When you need to be high-level, you have Python, Java, C#, etc. Rust is applicable only in areas where you need both a high-level and systems language. Examples include interpreters, operating systems, browsers, game engines, etc. The hype of Rust just exceeds its problem domain.
This feels a bit like complaining that a tank is badly designed because nobody drives them on the highway...
My understanding is that Rust was designed to be a safe(er) language to write low-level system code than C/C++, not to compete with Python/Ruby/Java for web development applications.
I'm not complaining about Rust. It's a very good language.
I'm complaining about everybody on programming forums, including HN, seeing Rust as the replacement for most, if not all languages. My opinion is that Rust's ideal problem space is much narrower than people think.
Bingo, it's a true systems language. That said, the fact that people can and do write web applications with it (for practical reasons) kind of says something about its breadth.
There seems to be a myth that Rust is unusual in being memory safe. All modern GCed languages are memory safe (assuming you don't do anything obviously unsafe like manipulate raw pointers, which some languages might let you do if you really want to).
>It is ironic to build super safe software, then deploying them on kubernetes, written in a GC'd language prone to nil dereference errors.
Nil dereference errors don't demonstrate an absence of memory safety. If the runtime checks for a nil reference before dereferencing and then raises an exception, that's perfectly safe.
I'm not sure if you understand what safe means in this context. One common example is modifying a vector element by reference. If you grab a reference to an item, push something else to the vector, then try to access through that reference, the vector may have reallocated during the push and you're either no longer looking at valid memory because it's been freed, or you're looking at a stale copy of the element.
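For what it's worth, this exact scenario is the one Rust's borrow checker rules out at compile time. A minimal sketch, with the rejected line left as a comment:

```rust
fn main() {
    let mut v = vec![1, 2, 3];
    let first = &v[0]; // immutable borrow into the vector's buffer

    // v.push(4); // ERROR: cannot borrow `v` as mutable because it is also
    //            // borrowed as immutable - the push could reallocate and
    //            // leave `first` dangling.
    assert_eq!(*first, 1);

    // Once the last use of `first` is past, mutation is allowed again:
    v.push(4);
    assert_eq!(v.len(), 4);
}
```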
I don't think the null handling in rust is about memory safety so much as it is their choice in error handling. If you come across a null pointer dereference and your code doesn't handle it, that's a hidden bug. Triggering an exception and handling it is a valid approach, but there are trade-offs. If no code handles the nil exception for example then it's no better than crashing as you would in C. Wrapping things up in try catch can lead to missing recoverable errors or unexpected states. The choice to disallow null is one to avoid common bugs, not one for memory safety, as far as I understand it.
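To illustrate the "disallow null" choice: in Rust, absence is modeled as `Option`, and the compiler forces every caller to deal with the `None` case before using the value. A small sketch (`find_user` is a made-up function for illustration):

```rust
// There is no null reference to forget to check; a missing value is an
// Option, and the type system makes the "nil" path explicit.
fn find_user(id: u32) -> Option<&'static str> {
    match id {
        1 => Some("alice"),
        _ => None,
    }
}

fn main() {
    // `match`, `if let`, or combinators like `unwrap_or` handle absence:
    let name = find_user(2).unwrap_or("anonymous");
    assert_eq!(name, "anonymous");
    assert_eq!(find_user(1), Some("alice"));
}
```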
I'm 100% sure that I understand what safe means in this context. All modern GCed languages are memory safe and will not access invalid memory regions unless you use specifically unsafe features.
> One common example is modifying a vector element by reference. If you grab a reference to an item, push something else to the vector, then try to access through that reference, the vector may have re allocated during the push and you're either no longer looking at valid memory because it's been freed, or you're looking at a stale copy of the element.
There aren't that many languages which both use GC and allow you to take references to elements of a vector. Go would be one example. I can assure you that you cannot access undefined memory regions by doing this in Go. (What would happen is that the original allocation backing the vector would be retained in addition to the new allocation.)
>If no code handles the nil exception for example then it's no better than crashing as you would in C
C programs are not guaranteed to crash when a pointer has an invalid value. Dereferencing a NULL pointer will reliably cause a crash if you're running the code on top of a modern general purpose operating system with memory protection, but invalid pointers (which don't necessarily have to be NULL) have the potential to access arbitrary memory regions without raising any kind of error.
As far as I remember, some languages like Java are memory safe even if the code produces data races. That means you can't have a reference to a value supposed to be of type A being actually of type B.
That's part of what the Rust borrow checker does for you: tracking who owns what and enforcing single ownership makes data races that much harder to cause.
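As a concrete sketch of single ownership plus explicit synchronization: shared mutable state across threads has to go through something like `Arc<Mutex<_>>`, and dropping the `Mutex` to mutate the integer directly from both threads simply would not compile:

```rust
use std::sync::{Arc, Mutex};
use std::thread;

fn main() {
    // Arc gives shared ownership across threads; Mutex serializes mutation.
    // Handing out plain `&mut` aliases to the same i32 from several threads
    // is rejected by the compiler, which is how data races are prevented.
    let counter = Arc::new(Mutex::new(0));
    let mut handles = Vec::new();

    for _ in 0..4 {
        let counter = Arc::clone(&counter);
        handles.push(thread::spawn(move || {
            for _ in 0..1000 {
                *counter.lock().unwrap() += 1;
            }
        }));
    }
    for h in handles {
        h.join().unwrap();
    }
    assert_eq!(*counter.lock().unwrap(), 4000);
}
```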
Turns out I'm the one who forgot what memory safety is o.o I think the part that makes Rust's memory safety special is that it achieves it without garbage collection.
The reference to the slot in a vector has to be a kind of object which tracks the vector itself, and an offset into the vector. It can't just be a wrapper for a naked address.
You might be right for the majority, but there are many other large groups who don't have computers that are more than fast enough. There are a lot of interesting and complex problems that can be solved in something like rust that the languages you name cannot be used for. Right now I'm using C++, but rust is of real interest to me (as is ADA, which in some variants lets me prove my code correct)
Google is investigating Rust as part of a multi-pronged strategy and, while it has the "this is not an official Google project" boilerplate, the `autocxx` crate (a header-parser/code-generator addon for the `cxx` interop crate) is by Google employees.
The thing about Rust's ownership model is that a reference is not ownership, it's 'borrowing', and should be treated as such. But - and here's the kicker - the borrow lasts as long as that reference is stored, not just when it's accessed, i.e. dereferenced. This makes a number of data structures and patterns difficult or impossible to implement, as you can't keep arbitrary references to things. You can't really make an opaque handle, for example, that does operations through a stored reference. You have to store some ID or something, and then when you want to do operations you need to pass the original container to the function anyway - so what's the point of even having reference types at that point?
IMHO Rust's lifetime analysis should consider borrows only at the point of dereference, not just whenever you encounter an & symbol. They already do that with raw pointers - you can have as many pointers as you want but you only need to mark `unsafe` if you actually dereference one. So at least someone on the team knows how to make that work functionally.
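For reference, here is what the raw-pointer behavior mentioned above looks like: creating, copying, and storing raw pointers is entirely safe; only the dereference requires an `unsafe` block. A minimal sketch:

```rust
fn main() {
    let mut x = 42;

    // Making and copying raw pointers needs no `unsafe`; they are just data
    // and carry no borrow as far as the checker is concerned.
    let q: *mut i32 = &mut x;
    let p: *const i32 = q; // copying a raw pointer is safe

    // Only the actual dereference is an unsafe operation.
    unsafe {
        *q += 1;
        assert_eq!(*p, 43);
    }
}
```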
You’re missing that you don’t have to use unsafe, you use a wrapper type which is essentially a smart reference-counted pointer. Rust makes these things explicit.
> And if you need to write those things, it's not like it's impossible, you just need to drop down to unsafe rust, and ideally figure out some safe way to expose that API.
I'm sure you know this already, but for the people reading this: first of all, `unsafe` is very rare, and it's more "less safe" than fully "unsafe"; i.e. it's still safer than C++.
And the second thing is that, often you don't have to drop into unsafe Rust, it's possible to achieve most stuff without it, but might incur a small performance cost (idk if some of those can be optimised away by compilers or not).
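One common safe alternative, relevant to the "store an ID" pattern mentioned upthread, is a Vec-backed arena whose handles are plain indices. The sketch below is illustrative (the `Arena`/`Handle` names are made up): the cost is a bounds check, and there is no protection against stale indices, but no borrow is held across operations:

```rust
// Handles are plain indices: they carry no lifetime, so they can be
// stored anywhere, at the price of a runtime bounds check on access.
struct Arena<T> {
    items: Vec<T>,
}

#[derive(Clone, Copy)]
struct Handle(usize);

impl<T> Arena<T> {
    fn new() -> Self { Arena { items: Vec::new() } }
    fn insert(&mut self, item: T) -> Handle {
        self.items.push(item);
        Handle(self.items.len() - 1)
    }
    fn get(&self, h: Handle) -> Option<&T> { self.items.get(h.0) }
    fn get_mut(&mut self, h: Handle) -> Option<&mut T> { self.items.get_mut(h.0) }
}

fn main() {
    let mut arena = Arena::new();
    let a = arena.insert("node A");
    let b = arena.insert("node B");
    // Handles stay valid across later insertions; no borrow is held.
    assert_eq!(arena.get(a), Some(&"node A"));
    *arena.get_mut(b).unwrap() = "node B'";
    assert_eq!(arena.get(b), Some(&"node B'"));
}
```

Crates like `slotmap` and `generational-arena` harden this idea against stale handles with generation counters.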
> Having a complex programming language which limits you and controls how you use it does not sound appealing to those who need a feature done, fast.
This sounds as insane to me as a carpenter who works with power tools saying: "Having safety measures which limit you and control how you have to do certain things does not sound appealing to those who need a house built, fast."
I mean, while my understanding of the construction market is rudimentary at best - isn't it actually the case that construction often _does_ end up rushed, or compromised at design level, exactly because of that?
It's not good, and when a bridge falls over the results are a bit more... dramatic and undeniable - but it's not really my impression that other branches of engineering are particularly immune from corner-cutting and deliberate under-estimation.
Edit: though I'll say, just having someone "at random" say they expect better in their industry is an anecdotal evidence that it is at least a bit better. Also, you get to work outside. Truly, grass is greener on the other side of the road.
That is an option, or if you don't want to deal with a GC because it's eating 70% of your clock cycles under load, you can use Rust to get performance gains and still be writing safe code...
To get to a low GC overhead in Java (and perhaps other languages too) you have to pay with an increase in memory consumption. Sometimes as much as 100% additional RAM to avoid frequent full GC scans.
That's true, but malloc/free based systems also have a relatively high memory overhead due to fragmentation and programmers being worse at inserting frees than the GC. It's not at all clear that the C/C++ model of memory management has lower than 100% overhead for long running programs.
Rust only limits you from writing nonsense code. Seriously, that's what the compiler is doing, and that's what people are complaining about: "Why won't this language let me write bad code!" I get it, it's hard to learn at first, but it sounds a little silly to people who have spent the time learning the language...
Believe it or not, industry has adopted and is continuing to adopt Rust: Microsoft, AWS, government agencies in Europe, the Linux kernel itself (the only other language allowed there is C - think about it). It's really not all hype; people like it because it prevents a class of problems from ever happening while still performing very well.
I invite you to join the community and learn it. It's a lot of fun once you get proficient with it.
This is the ideal, but not always the reality. The Rust borrow checker isn't perfect, so sometimes it rejects code that should work perfectly fine. Hopefully as the language matures (Polonius, GATs, HRTBs…) these cases will become rarer.
A doubly linked list is a classic example. To implement this in Rust you need to use unsafe code.
Come to think of it, an even simpler example is a mutable iterator for a typical data structure. Such iterators usually require 'unsafe' code to implement, even though they are perfectly safe to use. https://stackoverflow.com/questions/63437935/in-rust-how-do-...
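That said, for slices specifically a mutable iterator can be written entirely in safe Rust using the `mem::take` + `split_first_mut` trick; it's for trees, linked lists, and the like that `unsafe` tends to creep back in. A sketch:

```rust
// A mutable iterator over a slice, in fully safe Rust, by repeatedly
// splitting off the first element.
struct IterMut<'a, T> {
    rest: &'a mut [T],
}

impl<'a, T> Iterator for IterMut<'a, T> {
    type Item = &'a mut T;
    fn next(&mut self) -> Option<&'a mut T> {
        // Take the slice out of `self` so the returned &'a mut T is not
        // tied to the shorter borrow of `self`.
        let slice = std::mem::take(&mut self.rest);
        let (first, rest) = slice.split_first_mut()?;
        self.rest = rest;
        Some(first)
    }
}

fn main() {
    let mut data = [1, 2, 3];
    for x in (IterMut { rest: &mut data }) {
        *x *= 10;
    }
    assert_eq!(data, [10, 20, 30]);
}
```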
Unsafe in Rust means you are skirting guarantees given by the language. It doesn't mean the code will blow up when run. It's just an explicit way to tell the compiler you are operating at your own risk. So when you do get UB from using unsafe, you look at those places rather than your entire code base...
A lot of people don't seem to realize how easy it is to get UB in other languages. The C spec, for example, says a source file that doesn't end in a newline is UB...
I also don't think this was an example of code that should compile but doesn't. Rust's ownership rules make this task hard, but there are lots of people using doubly linked lists in Rust... Yes, they used unsafe code to do it, and yes, it does compile.
The question was show us a case where the borrow checker was wrong.
>Unsafe in rust means you are skirting guarantees given by the language. It doesn't mean the code will blow up when run. It's just an explicit way to tell the compiler you are operating at your own risk.
I'm fully aware of this. That's why I put 'unsafe' in quotes.
However, it's surely fair to point out that you can't implement doubly linked lists or mutable iterators in Rust without giving up Rust's usual guarantee of memory safety. If this guarantee is not actually that big of a deal, as you appear to be suggesting, then why all the hype about it from Rust advocates?
>The question was show us a case where the borrow checker was wrong.
No, that was not the question. OP just asked for 'rejected code that should work perfectly fine'. There are of course loads of examples of such code. Perhaps the simplest is the case of mutable iterators that I mentioned. A correctly implemented mutable iterator is perfectly safe, but rejected by Rust's borrow checker unless certain parts of the implementation are marked unsafe.
It seems you are slightly misunderstanding the point of 'unsafe' as a concept.
And no, memory safety is a huge deal, it is just that the borrow checker cannot verify the soundness of certain code, meaning you have to provide the guarantees normally given to you outside 'unsafe' blocks.
Yes, this means that a few data structures require 'unsafe', but you should be creating safe wrappers around these structures;
'unsafe' won't propagate up your code and poison everything.
> it is just that the borrow checker cannot verify the soundness of certain code
And I was just providing examples of such code for someone who asked. Honestly, some Rust folks get so defensive it makes them very prone to misinterpret simple factual statements about Rust as criticism.
Apparently you don’t disagree with any of the factual statements that I’m making. You just have some vague unsubstantiated feeling that I don’t ‘get’ Rust.
I'm not fighting your claim that the borrow checker has perfectly reasonable situations it can't deal with. That's why 'unsafe' exists. I've already said that.
You're adding other claims and statements that make me question if you actually understand the thing you are criticising.
If you'd say what those claims and statements were, then we could have a conversation. It's not conducive to a good discussion to reply just by saying "you're wrong and you don't get it".
"I invite you to join the community and learn it. It's a lot of fun once you get proficient with it."
I will if the Rust community agrees on a lighter, less complex language core.
"Microsoft, AWS, government agencys in Europe, the Linux kernel itself (the only other language allowed there is C - think about it)."
Again, they are using only a subset of the language. Linus allowed it in the kernel once devs agreed only to use the Rust core features. They even had to make a new memory allocator, IIRC.
You’re confusing standard library naming. Rust has a layered standard library, “core” and then “std” on top of it. They’re using the core library, because that’s what you do in an OS context. But as far as I know they don’t restrict any language features.
They also didn’t have to write a new allocator; they did extend the interface of the “alloc” library (which sits between core and std) which was then also accepted upstream.
So, what software can you make without relying on "std" library?
I think that a lot of complexity in Rust comes from many different ways to handle memory. You have your Arcs, Boxes, Cells, etc. Then you have a core "alloc" library.
Maybe if everyone agreed on a single reliable method to handle heap memory (since using stack is easy), Rust wouldn't be so hard to use.
The syntax also could be improved. Seeing `Box<T>` or whatever nested three or more levels deep hurts my eyes.
You don't have access to the C standard library in the Linux kernel either, so what is your point?
IME, the core library of Rust is much nicer to work with than the absolute bare-bones landscape that is C when compiled with -nostdlib.
Boxes, cells, arcs, etc. exist for different purposes and if you'd take two hours to actually read about them and their uses, you'd understand why they exist.
You can write lots of stuff. The stuff you lose access to is mostly things that rely on the OS to function, so you lose file-system stuff, networking stuff, things like that. Of course, people have written their own implementations of these things when it makes sense to do so.
The problem with Rust is its community. A bunch of groupies acting worse than K-pop stans on Twitter. If you don't make abstractions and write spaghetti code, or if you clone everything, then yeah, Rust sort of works.
This is not the case if you are designing a library. Much of what is routinely used to capture semantics into mainstream C++ libraries is wholly impossible to express in Rust. (C is not even a participant, here.) This is not about template metaprogramming, just ordinary stuff.
Most of that difference is in things that happen at compile time, so this is not a question of Turing completeness.
> Much of what is routinely used to capture semantics into mainstream C++ libraries is wholly impossible to express in Rust. (C is not even a participant, here.) This is not about template metaprogramming, just ordinary stuff.
All the above languages are Turing complete, so if you can express it in one you can express it in another. The question isn't whether you can write it; the question is how hard it is to do, and how performant the code will be.
The heart of C++ is destructors: a bit of code that you can write and the compiler will ensure runs when a value goes out of scope/is deleted. You can do this in C by remembering to manually call the right code when doing cleanup, but it is easy to forget and thus error prone. (I think Rust has this too?)
C++ gives you the ability to do a virtual base class interface, which - as most people know - just means it writes a vtable behind the scene for you. Sometimes people write a vtable by hand in C: it is just a struct of function pointers, but the syntax to do it in C++ is a lot nicer. If you need an interface of some sort the win goes to C++ because the syntax is a lot nicer. (I'm not sure what Rust does about interfaces, I think it has something)
C++ gives you control over copying structs. In C, structs are only copied memberwise; if the struct has a pointer, you need to keep track of both copies so you don't free it early. In C++ you write a copy constructor that will make a copy of the pointee. You can do this in C by remembering to call the right function when copying a struct, but the default is the wrong thing. (I'm not sure what Rust has here, but at the very least the borrow checker will stop you from making a mistake.)
C++ gives you move semantics - a way to express that a struct is going out of scope, but only after a different one has taken over the contents. This is a variation of the previous, except that you know the original doesn't need valid data anymore, so you just copy pointers and null them in the original. C doesn't have this concept; you can get around it with pointers and manual copying of structs in the right places, but the code is ugly. (Again, I'm not sure what Rust does; if nothing else, the borrow checker should allow the compiler to make some optimizations along these lines.)
There are a lot more areas where C++ gives you syntax to write correct code that C does not. Rust intentionally doesn't have some of them (class inheritance has been abused often, but I still find it useful enough in a few cases that I think rust is wrong for throwing it out), and in other cases has come up with a better syntax. Overall I don't know enough about Rust to judge it, but I'll take C over C++ anyday.
Note, the above is about the advantages of C++ over C. C++ has a lot of warts that are out of scope for that discussion. I am not claiming C++ is perfect. If you are starting a new project you should seriously consider your language options, including some not mentioned here.
I will note that Rust has all of those features you mentioned.
1) Traits
2) You have to be explicit, by implementing the Clone trait, otherwise arguments are passed by reference or moved.
3) Rust has both RAII and compiler-checked moves - you can't access something after it has been moved, unlike in C++.
Rust doesn't have class inheritance, no, but often combining traits and delegating to a "parent" field works. Although sometimes that can be considerably more cumbersome, yeah.
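To make the comparison concrete, here is a rough sketch of the Rust counterparts to the C++ features listed above: `Drop` for destructors, trait objects for virtual interfaces, `Clone` for explicit copies, and moves as the default (with use-after-move rejected at compile time). The names here are made up for illustration:

```rust
// Trait object ≈ virtual base class interface (vtable behind the scenes).
trait Shape {
    fn area(&self) -> f64;
}

#[derive(Clone)] // explicit, opt-in copying, ≈ a copy constructor
struct Square {
    side: f64,
}

impl Shape for Square {
    fn area(&self) -> f64 {
        self.side * self.side
    }
}

// Drop ≈ destructor: runs deterministically when the value leaves scope.
struct Logger(String);

impl Drop for Logger {
    fn drop(&mut self) {
        println!("dropping {}", self.0);
    }
}

fn main() {
    let s = Square { side: 3.0 };
    let dyn_shape: Box<dyn Shape> = Box::new(s.clone()); // vtable dispatch
    assert_eq!(dyn_shape.area(), 9.0);

    let moved = s; // moves are the default; using `s` now would not compile
    assert_eq!(moved.side, 3.0);

    let _log = Logger("scope guard".to_string());
} // `_log` is dropped here, printing its message
```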
You can also see it this way: the philosophy behind C++ is that the existence of a pointer implies ownership. In C, the existence of a pointer only implies liveness (normally; but really the code can decide on its own whether it's safe to dereference the pointer).
With the C++ approach I've found myself overthinking the problem many times. But with the insight that pointers are just data, and memory management can be independent, it turns out that programming with raw pointers need not be hard. There is no problem passing a pointer without "move semantics" shenanigans.
There is no sense, in C++, in which a naked pointer implies or suggests ownership. It is the opposite: the continued validity of a pointer value depends on ownership maintained elsewhere. Nowadays libraries keep track of ownership, most commonly by delegating that responsibility to the standard std::unique_ptr, for familiarity. Not doing that indicates something different is in play.
(It is, in fact, UB even in C to copy the value of a pointer to an object that does not exist anymore. This is the case regardless whether it had pointed to or into a stack or a heap object.)
In C, there is no way to express library ownership of a pointer. So, in C there is always potential for confusion about ownership.
I give you one advice: Think twice before writing this way again. You are not putting enough effort into understanding what I'm trying to say, and coming across as an arrogant prick.
Admittedly, I didn't make my point sufficiently clear, but this isn't even about being right or wrong. (And, once more, I understand what you say. Stop belittling me.)
You must be stuck in C++98 land. In C++11, a pointer always means live (though you may not have the right to store that pointer for later). We use smart pointers to enforce ownership.
I know. I'm not stuck in C++98 land. I'm talking about how problems are approached in C++ and more generally, OOP land.
In large parts of the community, there seems to be this idea that (almost) each little pointer (naked or not isn't my point here) should also be a handle to control the lifetime of the object that it points at.
I'm not saying that it has to be this way, but there is a reason why something like unique_ptr (that can be quite tedious to use in my experience) has become so popular in many projects - IMO it is overused, often leading to more typing work instead of less.
If one is sufficiently used to this way of modelling, it can come as a surprise that memory management needn't be so hard, and doesn't require smart pointers or anything, if the program design (global flow of data etc.) is sufficiently clear.
What I'm about to write isn't to pick at you - I think it's a systemic issue, and "professional programmers" unfortunately have limited influence on it, at least right now - though I believe "we" could change it - but not while acting as individuals.
> Having a complex programming language which limits you and controls how you use it does not sound appealing to those who need a feature done, fast.
See, I don't think Rust makes that _harder_ - I think it surfaces the real complexity, makes it undeniable. So far in my - by now 20-year - career, when I've seen most teams faced with complexity that might either slow down a project or be swept under the rug, the latter was always chosen.
There are _business reasons_ for that, software projects don't live in a vacuum - but those reasons are... _depressing_ at best - by far the biggest one I've seen is that a correctly estimated project would never be approved. Let's just say that figuring that out is a cognitive hazard that can easily increase risk of burning out - literally the sort of knowledge you're better off ignoring.
So ultimately my line of thinking is, Rust's problem in the business sense is basically the same as when you try to code in it: it _tells you that you're trying to sweep something under the carpet_. It tells you that your feature's cost is under-estimated. And to be fair: what can you do about it? The feature wasn't truly estimated by you. It was, almost certainly, estimated by your boss to be "one sprint at most," before he even asked you, and "please estimate this" has an expected answer, and giving any other one has a high social cost. And goddess protect you if your answer might call the entire business model into question.
I don't think any of us can change this, nor that the software engineering workforce is ready to push for changing this. Because even if we convince _our boss_ - the investors will just pick someone else, after all.
--
Anyhow, now you'd be excused in expecting me to advocate some sort of gatekeeping - but no, what I like about Rust is that it attempts to make programming more accessible in a _very different way_ to how many high-level programming languages did. For example:
- it's not _just_ a systems programming language. It doesn't divide programming into "stuff for anyone" and "eldritch monstrosities for the Chosen Ones" - eldritch stuff is just around the corner, in the same language,
- it doesn't try to hide the complexity - it tries to give you tools to _explore it_,
- and, expanding on the latter, its community values documentation and teaching over restricting use cases.
I don't think Rust is entirely _successful_ at that - but I think that the different model for accessibility is, if anything, more interesting than the borrow checker. I don't think we really need a single language - a garbage collector can be performant and useful, for example, and does it really _have to_ actually result in sweeping complexity under the rug?
IMHO it would be good for Rust to become a popular niche/specialist language for systems programming.
Currently Rust is not as complete or well defined as Ada/SPARK or MISRA C/C++ combined with commercial analyzers (Astree for example). At the same time Rust is "too sound and rigorous" for mainstream programmers.
Rust advocates are in the losing battle of forcing the mainstream to adopt the "it does not compile unless the computer says it's ready" style. Reliability and soundness come with a cost.
Most software is not worth the cost. When you adopt a subscription model, fixing bugs is where the money is. If your app works flawlessly without updates, Apple removes it after a few years.
I for one, as someone who has programmed professionally in both OCaml and Ada, still don't see where Rust is supposed to fit.
If I was doing something high-level, I would probably use OCaml, which has nicer features. If I wanted to write safe code, Ada offers a better experience with the availability of SPARK for the parts I would probably end up wanting to prove. If I just want to write concurrent code with the certainty I could find someone to maintain it, Go seems the way to go.
Rust seems to want to be a replacement for Ada but seems to actually attract programmers used to high level language chasing the hype.
What’s the story with Rust and verification nowadays?
I think Rust has a good chance to take market share from Ada and OCaml. Its ecosystem (packages, documentation) is already more comprehensive thanks to its broader appeal and being in fashion. I don't see why stronger verification shouldn't appear at some point. The progress and the need are there.
>> What’s the story with Rust and verification nowadays?
It is in progress, but likely several years to go before an MVP is released. Ferrocene [1] is the primary effort that I am aware of, and it seems to be making progress towards a verified rustc suitable for safety-critical work.
Ferrous Systems is working with AdaCore and just released the Ferrocene Language Specification [2] to formally document the Rust subset that Ferrocene will use.
It partially succeeds at that, because while it has a much better story in being safe by default, there are plenty of C++ use cases where Rust still hasn't a story to sell.
Ick, most programmers aren't going to write MISRA C/C++ either. Even if people claim they do, it seems way too easy to just turn it off or ignore it. Take the recent reports on the Toyota gas pedal bugs: they'd completely abandoned it.
Gotta say SPARK does seem to provide much of what it claims. Interesting that NVIDIA seemed to be using it for their secure bits.
"From the data obtained, we can make the following key observations. First, there are 9 out of 72 rules for which violations were observed that perform significantly better (α = 0.05) than a random predictor at locating fault-related lines. The true positive rates for these rules range from 24-100%. Second, we observed a negative correlation between MISRA rule violations and observed faults. In addition, 29 out of 72 rules had a zero true positive rate. Taken together with Adams' observation that all modifications have a non-zero probability of introducing a fault, this makes it possible that adherence to the MISRA standard as a whole would have made the software less reliable."
It's so hard for me to believe that someone actually thinks this.
Rust really is a great programming language that solves a lot of problems other languages don't. It does so by introducing a pretty clever paradigm.
The Rust community is the least toxic programming community I've ever seen. Leaps and bounds ahead of Go and Python, and light years ahead of C, etc.
Like, use your head: is VBA therefore the language everyone tried and loved and had to move on from? You're trolling yourself, lol. Rust is great, but yes, you have to learn it, and yes, that is tricky at first. Join a Rust chatroom or discussion board and give it a whack.
> Rust really is a great programming language that solves a lot of problems other languages don't. It does so by introducing a pretty clever paradigm.
Were you around when Scala was the end all be all of programming shiny things?
"Early on, Scala rode a wave of hype that frankly surprised even me: hype around pushing syntactic boundaries, hype around reactive architectures, hype around functional programming, hype around the Apache Spark project. Much of that hype has since died down, and there was a period of backlash and negativity both within the Scala community and outside of it. Since then, even the backlash has faded, and what is left is a reasonable, boring language steadily advancing and providing a great platform for general software engineering."
Not every concern needs to be solved at source level. My brief crit of both Scala and Rust is that they do not take a modular approach and instead have opted for a 'comprehensive' approach that necessarily entails (arguably) unreasonable levels of complexity at syntactic and semantic layers of source code.
I appreciate your view on scala and rust. I was around when scala became popular and wrote it in industry for a while. It wasn't a great time.
I think Rust differs from Scala in what it is trying to do. I also think Rust's complexity is backed by functionality that defines its paradigm. Scala, on the other hand, has a few too many "I think this would be nice to support" features that make it messy to deal with.
Rust imo isn't messy in that regard. Does Scala still have its place? Sure. Does Rust have its place? Yes.
Also, if you read the posts on here, there's a lot less hype going for Rust than the other way around. Its success isn't really due to hype; it's because people like what it does for them. Lots of people hate on it, usually baselessly.
Sorry for the offtopicness, but could you please stop creating accounts for every few comments you post? We ban accounts that do that. This is in the site guidelines: https://news.ycombinator.com/newsguidelines.html.
You needn't use your real name, of course, but for HN to be a community, users need some identity for other users to relate to. Otherwise we may as well have no usernames and no community, and that would be a different kind of forum. https://hn.algolia.com/?sort=byDate&dateRange=all&type=comme...
I was making new accounts for each different topic I posted on. For example, I only used this account for this entire topic. I'll stick with crabby grabby. Sorry, I didn't know.
I've hardly said anything about Rust (and I'm actually learning it at the moment). My point is that Stack Overflow's most-loved metric can be misleading. It's more of a least-wanting-to-jump-ship-right-now metric.
You're not wrong. Being an ex-data geek, I can say almost all metrics are flawed. We could even look at the most popular language and realize how that is flawed/not very meaningful too. Thanks for clarifying. I didn't mean to get on your case, but it just didn't sit well with me. I see a lot of people bashing Rust in here who have obviously never tried to learn it.
> Rust really is a great programming language that solves a lot of problems other languages don't. It does so by introducing a pretty clever paradigm.
Rust's "pretty clever paradigm" lifts a large set of low-level concerns into the application domain, and consequently makes them the responsibility of the application programmer.
This makes sense for a subset of programming contexts, where the application programmer needs to have and assert specific positions in those domains, in order to produce viable programs.
The problem is that, overall, very few programming contexts actually benefit from this level of specificity. An HTTP service implementing a business-level capability absolutely does not give a shit about ownership lifetimes. The fact that Rust "solves" this category of issues is completely irrelevant to this class of program.
Definitely not. I've been a full-time Rust developer for a long time now. Whenever I need to write some C#, Go, or JS, I honestly feel quite blind, with a hand tied behind my back. I don't have the expressiveness of Rust's type system, and I don't have the safety of the strong compiler, so I have to test my code a lot more thoroughly to be confident. With Rust I'm pretty confident in my code from the start.
Can support this. I'm a full-time C# SE, but learning Rust on weekends has taught me so much and given me an entirely new perspective when it comes to reasoning about memory and data structures.
I don't know why, but I haven't seen many posts on how the combination of Rust's trait system, implicit conditional returns, and strict enforcement of im/mutability of state lets you handle a significantly larger number of application states in the same amount of code. Other languages usually have you writing thousands of lines of verbose error-handling and state-validation code, and traits allow a superior way to compose behavior compared to standard interfaces / abstract classes.
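A tiny sketch of what that combination looks like in practice (all type and method names here are invented, not from the parent): an enum forces every state to be handled at compile time, and a trait with a default method composes behavior over plain data types without an inheritance hierarchy.

```rust
// Hypothetical example: a payment state machine. The compiler rejects any
// `match` that forgets a variant, replacing runtime state-validation code.

enum Payment {
    Pending { retries: u8 },
    Settled { amount_cents: u64 },
    Failed(String),
}

trait Describe {
    fn name(&self) -> String;
    // Default method: composed behavior every implementor gets for free.
    fn describe(&self) -> String {
        format!("<{}>", self.name())
    }
}

impl Describe for Payment {
    fn name(&self) -> String {
        // Adding a new variant without handling it here is a compile error.
        match self {
            Payment::Pending { retries } => format!("pending ({retries} retries)"),
            Payment::Settled { amount_cents } => format!("settled ({amount_cents}c)"),
            Payment::Failed(reason) => format!("failed: {reason}"),
        }
    }
}

fn main() {
    let p = Payment::Settled { amount_cents: 1250 };
    println!("{}", p.describe()); // <settled (1250c)>
}
```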
What does Rust's compiler tell you that C#'s doesn't? You still have guaranteed memory safety in C#, and the type checker will verify against all type errors.
I realize that Rust's compiler might catch more concurrency problems, but your comment is written as if you feel "blind" even writing single threaded code.
Rust's advantages don't stop at concurrency or type safety. In my experience, the type system is actually much more concise, and the APIs are a lot more explicit, than in any other language I've tried. It's clear whether the argument you're passing is going to be mutated or not, and move semantics help a lot in designing good APIs (not having to remember to close a File, or not being able to use it after it's closed, for example). Not to mention sum types with pattern matching, being expression-based, etc. I also find that with Structs and Enums being values instead of references (as opposed to Classes in other languages), reasoning about the code is a lot more straightforward, which goes back to my point about explicit mutability. A huge part of it also goes to the great libraries created by the community, which usually care about API soundness.
It's very common in Rust to do a big refactor or write a program from scratch and have it work on the first or second try, which may release a good amount of dopamine for some.[0]
Not saying the language is perfect, but it can feel pretty good, to the point of being addictive, and also freeing, to know you can think less about the variants and let the compiler do it for you; and the reliability of the resulting program is usually pretty amazing. It can also be annoying, especially when you're learning and try doing something that just isn't viable in an easy way (e.g. manually creating a linked list).
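As a concrete (entirely made-up) illustration of the move-semantics point above: a method that takes `self` by value consumes the handle, so use-after-close becomes a compile error rather than a runtime bug. `Conn` here is a stand-in for something like `std::fs::File`.

```rust
// Hypothetical connection handle, invented for illustration.
struct Conn {
    id: u32,
}

impl Conn {
    fn send(&mut self, msg: &str) -> usize {
        // Mutation requires `&mut self`, so mutability is visible at call sites.
        msg.len() + self.id as usize
    }

    fn finish(self) -> u32 {
        // Takes ownership: the handle is gone after this call returns.
        self.id
    }
}

fn main() {
    let mut c = Conn { id: 7 };
    let sent = c.send("ping");
    let id = c.finish();
    // c.send("again"); // compile error: `c` was moved by `finish`
    println!("{sent} {id}"); // 11 7
}
```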
> Whenever I need to write some C# or Go or JS I honestly feel quite blind with a hand tied behind my back. I don't have the expressiveness of rusts type system and I don't have the safety of the strong compiler
How would you compare it to Typescript? Having become a recent convert, I feel the same way going back to Javascript now
Typescript is great, but it is ultimately a gradually typed language that is built around compiling to Javascript, and that has extensive interaction with untyped Javascript code.
You often end up with obscure issues that a good type system should prevent. Maybe because the third party typings for a package are faulty or incomplete, maybe because the compiler just gave up on a complex expression and fell back to any, or because the developer just gave up and used `as any`.
Typescript is also really constrained by keeping JS compatibility. A prime example is the lack of real sumtypes. Instead you have to use objects with a string tag and do if/switch on that tag to disambiguate the concrete type, which is incredibly awkward when compared to proper compiler sum type support.
I'd much rather use Typescript than Javascript, but it's far from perfect.
Yeah, there were many times I was programming TS, and was very surprised at what the type system permitted. I started to realize type safety was best effort, or at times even performative, but by no means something TS can guarantee.
I'm not the same person that you responded to, but I feel much less secure in TS than in C# due to the fact that typing is not a requirement, and that type definitions might differ from the actual library/framework code. It's still a big improvement over plain JS though.
How you feel relates to you and your preferences and doesn't really say anything about Rust versus let's say Go.
Is your software more performant and does this translate to additional customers or reduced costs? Is your software more stable and does this result in reduced support costs or happier customers?
Testing effort is likewise not a decisive matter. In fact it's quite likely that what one wins time-wise on the testing side is lost on the development side.
Not the person you're responding to, but imo the main thing missing from Go is ADTs. After using these in Rust and Swift, a programming language doesn't really feel complete without them.
That said, I think Go's simplicity has a lot of advantages over Rust for a great many use-cases. Imo Rust almost feels like a prototype for a great language which will finally get ownership-based memory management right. The goals of the language are admirable, and the tooling is great, but it's such a complex language once you start getting into topics like lifetimes and async.
I think it shines for certain use-cases where the tradeoffs it makes genuinely add value, but in a lot of ways I think it's more of a niche language for people who love esoteric programming topics than something with the potential to go truly mainstream.
>> the main thing missing from Go is ADTs. After using these in Rust and Swift, a programming language doesn't really feel complete without them
What are the differences between an ADT (plus pattern matching, I'd reckon?) in Rust/Swift vs the equivalent in Go (tagged interfaces + switch statement)?
One has exhaustive matching at compile time; the other has a default clause (non-exhaustive matching). Although there's an important nub here with respect to developer experience: it would be idiomatic in Go to use static analysis tooling (e.g. Rob Pike is on record saying that various checks, including this one, don't belong in the compiler and should live in go vet). I've been playing with Go in a side project and using golangci-lint, which invokes https://github.com/nishanths/exhaustive. Net result: in both Go and Rust, I get a red line of text annotated at the switch in VS Code if I miss a case.
Taking a step back, there isn’t a problem you can solve with one that you can’t solve with the other, or is there?
I'm not a Go expert, but imo the main difference is ergonomics and clarity. Rust/Swift-style ADTs plus pattern matching give you a very concise and readable way to declare and use sum types, and the Go way seems to have more boilerplate.
Also, with Rust you have more robust pattern matching. You don't have to match only on the type; you can match on complex criteria (e.g. foo is a Circle with foo.radius < 10).
I find this type of programming very expressive and easy to reason about.
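A minimal sketch of the Circle example above (the types and threshold are invented to match the parent's description): matching not just on the variant but on its data, with a guard.

```rust
// Hypothetical shapes, mirroring the Circle/Rectangle discussion above.
enum Shape {
    Circle { radius: f64 },
    Rect { w: f64, h: f64 },
}

fn classify(s: &Shape) -> &'static str {
    // The match is exhaustive *and* can branch on the payload via guards.
    match s {
        Shape::Circle { radius } if *radius < 10.0 => "small circle",
        Shape::Circle { .. } => "big circle",
        Shape::Rect { w, h } if w == h => "square",
        Shape::Rect { .. } => "rectangle",
    }
}

fn main() {
    println!("{}", classify(&Shape::Circle { radius: 3.0 }));  // small circle
    println!("{}", classify(&Shape::Rect { w: 2.0, h: 2.0 })); // square
}
```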
Yeah exactly, imo the way the Go version requires you to declare an interface, in this case with a no-op method essentially acting as a tag, and declare conformance on your types is quite noisy and feels like a hack.
Also I prefer with Rust-style sum types that the declaration happens in one place. In the Go version your Shape types could be scattered around all over the place, which makes it harder to parse and understand if you're reading unfamiliar code.
This Go code is wildly non-idiomatic. Type assertions via `.(type)` are a tool of last resort, not something that an application developer should turn to as a solution to a problem, and certainly not post-1.18, which permits generics.
There is no way in Go to express an enum whose possible values are of different types.
In Go, code will typically "offer" concrete types and "accept" abstract interfaces. Abstractions over concrete types are expressed by the consumers who want to operate on those types, rather than by the producers who provide implementations of them. So your Shape would probably end up as an interface defined by whatever consuming code wanted to treat Circles and Rectangles equivalently.
Rust and Go are surely different classes of programming language. Rust is a systems language and Go is garbage collected. That is, Go is in the camp with Ruby, Python and Javascript, whereas Rust is in the camp with C and C++. Garbage collected languages are easier to use than non-garbage collected languages, so if you can use them, you should.
Probably comes down to your definition of a systems language.
E.g. if that means implementing low-level things like cryptography APIs, or being able to define the exact memory layout of objects, access raw pointers, or write inline ASM, then that obviously disqualifies Ruby, Python, and JS, but it would mean Go qualifies.
No it's not. There are tradeoffs, definitely, and one of them is visible in the article: at the moment doing lifetimes and async is hard. But you don't have to do it, most of the time you use `Arc` or `Arc<Mutex>` and call it a day. In async web frameworks you very rarely need explicit lifetimes, because the workload is mostly isolated - a handler gets an input from the request, you compute the output, you return it. Any shared state is typically behind a shared pointer (like `Arc`).
I feel like this article is written from the perspective of a library author who always starts by going for a zero-allocation generic API. In production apps you very rarely do that; it's usually plenty fast anyway.
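For concreteness, here's a stdlib-only sketch of the "`Arc<Mutex>` and call it a day" approach (no web framework; plain threads stand in for handlers, and the function name is invented): shared state goes behind a shared pointer, and no explicit lifetimes appear anywhere.

```rust
use std::sync::{Arc, Mutex};
use std::thread;

// Spawn `workers` threads that each bump a shared counter `per` times.
fn count_parallel(workers: usize, per: usize) -> u64 {
    let hits = Arc::new(Mutex::new(0u64)); // shared "request counter"
    let handles: Vec<_> = (0..workers)
        .map(|_| {
            let hits = Arc::clone(&hits); // each "handler" gets its own handle
            thread::spawn(move || {
                for _ in 0..per {
                    *hits.lock().unwrap() += 1; // lock, mutate, unlock
                }
            })
        })
        .collect();
    for h in handles {
        h.join().unwrap();
    }
    let n = *hits.lock().unwrap();
    n
}

fn main() {
    println!("{}", count_parallel(4, 1000)); // 4000
}
```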
If you default to Arc everywhere, aren’t you essentially just implementing a slow GC?
This type of thing comes up often when people try the language. They re-implement some part of a program originally written in a fast GC language, and then wonder why it's slower.
I feel like the power of the language comes through in specific workloads, or when you take the additional time to avoid naive code. That’s why it is so verbose and rich, it gives you more control. And this is something that advocates often clearly state.
As @therockhead already replied: you won't put every single variable behind `Arc`. It will only be shared state: queues, handlers that you need to move between threads/tasks, etc.
> In async web frameworks you very rarely need explicit lifetimes, because the workload is mostly isolated - a handler gets an input from the request
Was going to post the same comment. Most of my hobbyist Rust programming has been via the Actix web framework, and I only run into the most trivial borrow checker issues. I guess my projects aren't complex or interesting enough :).
Rust makes it easier and faster to create certain classes of applications, specifically applications that originally would be written in C and C++. Things that are hard in Rust, are even harder in C and C++. So Rust is liberating and 'rewriting' complicated applications can be satisfying because you can move faster. For me personally I am playing with Wayland and display streaming and it is very satisfactory so far to be doing this in Rust, whereas I would not want to try this in C++. The code I am writing is much simpler than the existing C++ code.
"Things that are hard in Rust, are even harder in C and C++."
Incorrect. The handler-dispatcher problem illustrated in the article is utterly trivial in C++ and many other programming languages. No need to struggle and stretch your brain as if you were in the Math Olympiad.
I am learning Rust myself, and I think Rust fans are spreading misinformation and propaganda by saying Rust makes things easier than other languages. No, Rust makes things VERY difficult, and you need to study and learn the Rust way of doing things, which is significantly different from other programming languages.
We need a BIG design patterns book for Rust, one that takes lots of common design problems and shows the idiomatic Rust way of doing them.
But simply stating that C++ code for a given design problem is harder than Rust is utterly wrong and demonstrably false. Rust tooling is definitely simpler. Rust coding is definitely not.
This handler-dispatcher problem becomes trivial if you do it in unsafe mode: just transmute all lifetimes to 'static and call it a day. That is what you do in C++ all the time (implicitly). So the C++ version would be easier to get compiling, but harder to get right, because you'd have to prove the lifetimes are correct anyway.
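A deliberately scary sketch of what "transmute all lifetimes to 'static" means (illustration only; the function name is made up, and leaning on this in real code is exactly how you get use-after-free):

```rust
// DANGER: this erases the borrow checker's knowledge. It is the same
// implicit bargain C++ makes: correctness of the lifetime is now on you.
unsafe fn pretend_static(r: &str) -> &'static str {
    std::mem::transmute(r)
}

fn main() {
    let s = String::from("hello");
    // Sound only because `s` happens to outlive every use of `fake`;
    // the compiler is no longer checking that for us.
    let fake: &'static str = unsafe { pretend_static(&s) };
    assert_eq!(fake, "hello");
    println!("{fake}");
}
```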
Would you agree that it is "easier to do it correctly" in Rust, because Rust makes lifetimes explicit? I think that's what Rustaceans are generally trying to say.
I don't think anyone is trying to claim that Rust is some trivial language. It's not. But it makes systems-level concerns explicit; flying blind is much harder, in my opinion. The complexity is technically the same, but the language has training wheels that other languages don't.
> Things that are hard in Rust, are even harder in C and C++.
That is not true of C++. There are many valuable things that are much easier to express in C++ than Rust, owing to C++ being a significantly more expressive language. Rust also doesn't make it easy to write software where ownership is intrinsically ambiguous and must be algorithmically resolved at runtime without wrapping the object, a common case e.g. for high-performance storage.
I would agree that everything is harder in C, though.
Where's the proof that Rust makes it easier and faster to create applications? In my experience it's harder and slower compared to other languages like Go and even C++, but you get memory-safety in return while keeping a very good runtime performance profile.
I have been way more productive with Rust than with C++.
For instance, I haven't yet had to debug weirdly mutating memory, deal with the immense nuanced possibilities of doing the same thing (there's a reason for the C++ Core Guidelines), stuff like the rule of five, writing cross-platform CMake files that include various libraries, ...
Not proof of the general statement, but another data point from a Rust adopter.
Rust removes a lot of the debugging and performance tuning that happens later, in production. Is it easier/faster while you are learning? Nope. When you understand it, sure, it can be. This depends on the person, just like someone might be faster or slower with a specific framework within the same language.
Imo I write programs faster than in other languages because the compiler helps me out. Your mileage may vary.
>Things that are hard in Rust, are even harder in C and C++
This is absolutely not true. Something hard in Rust but almost trivial in C++ (via fold expressions): mapping or folding a function or operator over a tuple of heterogeneous types. E.g. to sum over a tuple of arbitrary numeric types:
This is a pretty weird example. Why do you want to sum over a tuple of different data types in the first place? This kind of has a code smell I can't put my finger on...
Fwiw, Rust doesn't let you dynamically index tuples, because Rust's take on tuples is different.
Should your tuple actually be a struct with a "sum" method on it? Imo, probably. Or should your numeric types all be the same type for better memory layout? Maybe...
Just because another language lets you do something doesn't mean you should do it. Kind of like in JS: does it make sense to divide a number by a string (not sure if you can do this, but guessing it's valid JS)? Probably not. Is it a feature or a bug? You decide. Should this be part of, say, the Kotlin standard? Probably not.
I think I'm quite fad-resistant (others would use less positive words to describe the same attribute). I still find Rust truly excellent, and the most joy I've had programming since Haskell (while at the same time being a lot more mainstream and pragmatic choice for collaborative projects than Haskell).
Rust is young, very ambitious, of course general purpose enough to do everything in it, but its ecosystem has serious gaps. (And some of those are dependent on planned language level features.)
...
I spent a lot of time reading r/rust, various blog posts before going beyond helloworld in it.
To me it's in the same category as Scala (ZIO) or TypeScript. Both are very powerful and a joy to work with, until the low-level limitations kick in (JVM or JS).
However I don't interpret that as a sign of oh we need to go back to assembly, but more like, okay, we need to accept that our tools and the problems we use them on are not in harmony (yet).
Totally agree. Scala and TypeScript are both usually wonderful to write.
Though I might argue that one of Scala's biggest issues in my eyes, performance, is mostly _not_ a JVM limitation.
For example, a Scala for-loop calls functions on each iteration, and immutable containers need to be garbage collected for each modification. Both of these could be compiled away in many common cases (like C++ or Rust do, but Scala's compiler doesn't, unless they've changed that in the last couple of years).
Oh, I don't think the performance is bad; the JVM is an amazing technological platform. But it uses a lot of RAM, its GC pauses are bad for latency, and the Scala/Java/JVM ecosystem is simply not as high quality as Rust's. (See all the security issues: log4j2, the JDK ECDSA rewrite, just to name a few that come to mind from the last few months.)
And for TS I don't think I have to mention the quality of the underlying JS ecosystem :)
On the other hand, of course, Rust is still on 1.x, and GATs are still on the roadmap (though the RFC recently entered its final comment period, yaay!, and then a bunch of legitimate criticism was submitted, so it might still take years).
I don't have very deep knowledge of the Scala compilers, but Scala 3 is a huge revamp, so I wouldn't be surprised to see that change. Though both of the mentioned cases seem easy to "fix" in the JIT compiler -- the JVM is really good at inlining, and short-lived objects are almost free.
Did Scala 3 finally clean up the syntax? Scala is a difficult language because there are so many ways to do things. I ran away from it after dealing with it professionally, off and on, for a year.
Which is a shame because I did a hobby project with it and it was super fun and liberating compared to old java.
Well, they didn't break the existing language, so I'm not sure. It depends on what you find readable/difficult. Contrary to the usual opinion of the language, I believe it is not "difficult": it has far fewer exceptional rules than Java, for example. Sure, some features are more expressive, and that comes with big responsibility. But to actually answer your question: some implicit usage has indeed been cleaned up, making it much less magical-looking.
But Scala 3 has been a very big update, including a new compiler flag that turns the language "null-safe" (so a nullable String will have String | Null as its type). So it might be worth taking a new look at the language.
Null safety sounds great. My big gripe was all of the symbols and how they had different meanings in different contexts. That said, in most cases it's not a real problem, but with a team you can end up with some messy stuff.
Yeah I agree that operator overloading should be used very sparingly, I often dislike it in Haskell as well. Fortunately it seems to be less and less often used in modern libraries.
I actually feel this way about C++ to some extent. I love that C++ has zero-overhead abstractions. I hate that using them requires programming in an obtuse and indirect meta-language.
If you ever read it (I think it was Modern C++ Design?), they point out that the huge "trick" in C++ was someone figuring out you could make a compile-time branch by declaring arrays of size 0, or some such nonsense. They then wrapped that feature in a template so you could try to use it. At some point it was formalized so it's no longer based on arrays of size 0, but still, read any Boost library and look at how unreadable it is. It might be nice to use, but it shouldn't be so hard to write!
But, I think the fact that it is hard to write brings joy indirectly to many programmers. They get a dopamine hit for "solving the puzzle of finally getting their cryptic incantation to work". Maybe they write a bitset class. It takes 500-1000 lines of code. Or maybe they make a lighter weight fixed size array and again it's 1000+ lines of template code. They forget that the goal of coding is shipping, not having fun solving the incantation puzzle.
It doesn't seem like it takes 1000+ lines to do these things.
An application developer spending hours refining a method/class/function to its most optimized form is usually not worth the time. It's only going to be used in the one application, so the ROI is much lower.
The author of a library with wide usage spending hours refining a method/class/function to its most optimized form may actually be worth the time, because that effort is amortized over hundreds or even thousands of applications. The ROI is much higher in that case.
Determining when it's correct to make that tradeoff is an important part of the job. Boost probably makes the correct tradeoff. But your average C++ application dev might need to make a completely different one.
It's not a matter of smarts but grit. Modern society cultivates short attention spans, impatience, and a need for instant gratification. It's really uncomfortable, and costly, for many to spend hundreds of hours cultivating a new skillset. So, many give up shortly after starting to pursue it. However, those who persist and achieve capabilities come to value what those capabilities offer over alternatives.
If you ask an embedded developer (as in bare metal, no OS), who is also into electronics, instant gratification and especially instant feedback, is what actually makes you choose this path of interest in the first place.
It's a matter of style and character how you cultivate new skills, and Rust wants you to cultivate them in a certain way that some people are just not made for. YMMV.
I'm an embedded developer, and for me it's the opposite. Not saying everyone is like me, just saying not everyone is like you. I find regular software development to provide more instant gratification, I like the result of systems interacting with each other physically. Something about that just feels really cool to me. So much that I spent a few years as an FPGA developer, designing hardware and writing drivers for it, and also writing C on a soft core microcontroller running on an FPGA, touching registers that configure things, messing with i2c busses, I just find it all really amazing, in the same way I find car engines spinning at 6000+rpm and managing timings is incredible. Regular software development has faster results and gratification, at least from my perspective. And to be clear, I'm not saying you're wrong, or that your perspective is any less valid than mine
Rust is really great for a large subset of problems where there are really no good alternative. (Low level problems, where the next best alternative is C or C++. And if C++ looks better than C, it's probably not low level enough.)
But if you deviate from those, into higher level programs, a language with garbage collection will always be much more productive than Rust. So we have a set of people doing the things that Rust does best, and celebrating that they have a great language, and a set of people trying to fit the square language into a round hole, and complaining that it doesn't fit at all.
Added to that, there is a wide space of problems where high-level languages do not have appropriate support, but you can always hack your way in with a low level one. Those are the worst, because there is just no good option, there's just one that you can beat into working.
The advanced documentation is also incredible and crystallized a lot of CS concepts for me. There is no impedance mismatch between the beginner docs and the advanced ones, and they reference extra learning materials... Super grateful to everyone on the documentation team.
Read the Rust book and practice for two weeks; you'll get the hang of it. It's just not a language where you can pick up 80% of it in a day, like BASIC or Go or something.
I did OK with Rustlings and I read "The Book". I'd say I'm largely struggling with async and lifetimes.
I have spent most of my career in C, C++, and Objective-C, and have recently tried Zig, which I enjoyed, possibly because of its brevity. I also read through the Jakt programming language docs, which were much more familiar to me, but I think if you only used Rust with ARC it would be a simple language to adopt, so that's not really a fair comparison.
I think what I really need is a project rather than learning resources, with time being our greatest enemy, I need motivation to get over the hump so to speak.
(Like I said, I might not be smart enough, or I might just be profoundly lazy).
I've been learning Rust with a pet project of mine (an Apple II emulator). That was a bit too complex for a start, as I had threads and a bit of architecture needed to structure my code correctly. But in the end, I now understand how Rust does threads and memory allocation, and I'm much less intimidated by the type system. I still don't get lifetimes.
Moreover, a lot of libraries in Rust need you to understand the type system thoroughly to use them, and some of them use the type system to enforce specific behavior (without telling you :-)). For example, the logger mechanism in Rust makes it very hard to dynamically change the logger implementation at run time, but it doesn't explain why (in the end, I think it has to do with thread safety, but I'm not sure).
It took me about the equivalent of 30 full days to get there. Although I'm an experienced programmer (C++, assembly, Java, Python, R), the last 15 years have been 99% Python. Had I continued with C++ all those years, I'm sure I'd understand Rust much better.
So, just keep going and really listen to the borrow checker. The thing is really smart and sometimes, after a lot of pain, you understand that your own mental model was wrong :-)
> Moreover, a lot of libraries in Rust need you to understand the type system thoroughly to use them, and some of them use the type system to enforce specific behavior (without telling you :-)). For example, the logger mechanism in Rust makes it very hard to dynamically change the logger implementation at run time, but it doesn't explain why (in the end, I think it has to do with thread safety, but I'm not sure).
Mainly because the logger is a static value. If you really want to change your logger at runtime, you have to pay a little for synchronization (i.e. locking with a mutex); other than that, it's not that hard.
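A stdlib-only sketch of that tradeoff (this facade is invented for illustration; if I recall correctly, the real `log` crate's `set_logger` is one-shot, which is the restriction being discussed): put the logger behind a static lock, and pay for that lock on every call in exchange for being able to swap the implementation at runtime.

```rust
use std::sync::RwLock;

// A swappable logging callback. Box<dyn Fn> so any closure can be installed.
type Logger = Box<dyn Fn(&str) -> String + Send + Sync>;

// Still a static value, but now guarded by a lock instead of set-once.
static LOGGER: RwLock<Option<Logger>> = RwLock::new(None);

fn set_logger(l: Logger) {
    *LOGGER.write().unwrap() = Some(l);
}

fn log(msg: &str) -> String {
    // Every log call takes the read lock: that's the synchronization cost.
    match &*LOGGER.read().unwrap() {
        Some(l) => l(msg),
        None => String::new(),
    }
}

fn main() {
    set_logger(Box::new(|m| format!("plain: {m}")));
    println!("{}", log("hello"));
    // Swapped at runtime, which the one-shot design deliberately forbids.
    set_logger(Box::new(|m| format!("json: {{\"msg\":\"{m}\"}}")));
    println!("{}", log("hello"));
}
```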
There are some very practice-oriented books for Rust: Zero to Production, Rust in Action. And a third one that I didn't read; it's game-programming focused and seems pretty good. Something with "handmade".
I believe the game programming one you are thinking of is "Hands-on Rust" by Herbert Wolverson. I haven't read all of it; the parts I have read are quite good, but I'm not big into games, so I switched over to Rust in Action.
That's actually one of the reasons that I asked. I am the author of Rust in Action and wondered if the person having trouble had investigated one of the more practical resources.
It's a language designed by people whose favourite language is not even pronounceable.
Did you think that such a group of people, who place their love for the language above practical things like readability, would come up with a new language that cared about the user-experience[1]?
As we see all the time with programming languages, readability is more important than the more abstract stuff.
> I hear all the arguments about CVEs and wonder if rust will reduce the number of these problems simply by disabling the programmers that create them.
That sounds like a different way of saying "the global number of CVEs will be reduced by reducing the number of applications written". After all, if you "disable" the programmers who write systems programs, they aren't going to get magically replaced by programmers who don't write systems programs.
You're just gonna have fewer programmers.
[1] A language is an interface between a user and their problem. If this is not highest priority of a language design, the language may never take off in any meaningful sense. See Monads.
I personally don't like Rust (I like to have a garbage collector for what I do), but I don't think it's a hard to read language.
Readability is an incredibly subjective thing. For example, APL might look like an absolute mess to most people but to the people who know it, it's incredibly easy to read. All "readability" means in this context is that it's familiar to you.
In my opinion, the only truly hard to read languages are languages that require you to keep large amounts of context in memory while reading a specific part of the codebase, be it due to lack of support for abstractions for proper encapsulation, messy overloading, insane inheritance hierarchies, or something else.
EDIT: Readability also requires you have a codebase that's well written, and even that's subjective.
> For example, APL might look like an absolute mess to most people but to the people who know it, it's incredibly easy to read.
But if we use that as a bar ("you have to know it"), then all languages are equally readable.
Pretty much all the popular languages converge on similar syntax and/or grammar.
You can say that it is because that particular set of syntax came first, or you can say that languages that didn't have similar syntax were abandoned by developers.
There is equal evidence (or lack thereof) for either claim.
GP brought up a thing that’s certainly not equally represented in all languages. There are physical limitations for reading, and the amount of novel context you can juggle.
There is a balance between verbosity and conciseness that makes code more or less readable to different people.
If you have few experts who are maintaining a codebase over a longer time, then verbosity gets in the way. If you have a wider skill distribution and fast turnover, then you want verbosity.
Additionally I think there are more objective features. Like ambiguity, simplicity, formatting, naming, visual hierarchy...
I got that, but the point is still very much relevant. If places switched to Rust from C++ (or Go, or C), they'd write less software because there will never be as many people who understand Rust as there are people who understand Go.
If it were at all easy to pick up, there'd be fewer stories from people who tried learning it and abandoned the effort.
I cannot think of any person who reported abandoning Go because it was too hard to learn.
While the article is an absolutely excellent analysis of Rust from the low, PL-level ergonomics point of view, it seems to largely miss (or, intentionally, skip) the bird's eye point of view of complexity inherent to software development.
I was recently tasked with developing a fragment of a mobile app written in Objective-C++ (an awesome PL workhorse, BTW) that mixed multi-threading (multiple dynamic UI layers, multiple levels of background processing) and async (multiple over-the-network downloads, multiple database updates, etc.). To make a long story short: even though I've been coding C++ for 20 years, I am still a bit depressed about how badly I got my own @$$ handed to me on my first try. It took a ton of bug hunting and code refactoring to get everything right. Being forced by Rust to throw together a couple of somewhat inelegant reference-counted pointers is... trivial compared to the benefits Rust offers (safety).
I was also recently tasked with investigating serverless. To make a long story short: because serverless is billed by the amount of memory used and the CPU time required to execute a lambda, Rust is so far coming out on top in my analysis over interpreted and garbage-collected languages, to such an extent that it makes the difference not just between a "good" and a "very good" solution, but between an "impossible" and a "feasible" one, cost-wise. Being forced by Rust to read a couple of books on how the new concepts of ownership and lifetimes work is... trivial compared to the benefits Rust offers (efficiency).
If you define "feasibility of a commercial success" as "finding a pain point and being the only solution for it on the market", then it becomes more and more likely that, with time, Rust will win over many other PLs.
Rust is not an appropriate solution for iOS, I don’t think it’s even possible to use it except through some hacks. So the options aren’t buggy Objective-C++ vs inelegant but safe Rust.
It’s having a product vs. not having one and the latter guarantees a lack of commercial success.
I'm not sure what you mean - I mentioned Objective-C++ mobile development only as an example of how challenging multi-threaded/async programming can be. The problem is analogous on other platforms and can be simplified by Rust on the platforms that Rust supports. Also, it wasn't Objective-C++ that was buggy, it was my brain that was bug-prone :). (BTW, when it comes to iOS development, I consider Swift to be significantly more complex than Rust.)
I disagree. If you want common code between all platforms (iOS/Android/Windows/macOS) your choices are pretty limited. Right now I work on a large codebase where C++ is used to solve the code sharing issue, but would much prefer it to be Rust.
I liked your thoughts but disagree with the conclusion.
It seems that serverless is cost prohibitive in many scenarios, and requires lots of optimization to be cost effective. So isn't it more of a fault of serverless than a strength of rust?
I see your point, but it all depends on whether the choice of a particular organizational infrastructure architecture is a priority for you or not.
If infrastructure is a priority, (e.g., "we don't really want to hire anyone just to setup/monitor/secure that Linux box with all those REST API server/database processes, thus we prefer serverless), serverless is important and Rust performance is its strength.
If infrastructure is not a priority, (e.g., "I was gonna roll out my own VPS anyway, I love this stuff"), serverless is all about its faults and Rust is irrelevant (it is logical to choose an easier PL, e.g., the excellent Golang).
I've been doing a lot of Rust the last two weeks. Like others have said, if you stay away from async, it's not that bad. But I also have plenty of experience in OCaml and Standard ML and Scala, and also C++. Still, I'm finding my C++ experience is more hindrance than help most times.
Async & Tokio is pure hell if you have any kind of shared mutable state. And it's tough to fight 20-30 years of experience modeling things in an object-oriented fashion; I find tying state & methods into structs and using 'self' on methods in OO fashion just makes things far worse when async is involved.
The thing that really bugs me about Rust and I think it's the core of what really confuses many people is that it upends our normal intuitions about scoping. In other languages we're used to thinking of the scope of things in terms of lexical scope at the block/function/module/class level, and inside that scope, anything is game. You can almost visually see it. Usually.
Rust can work this way, but mostly doesn't by default. It's like you were programming in C++ but instead of normal copies and references, almost every single variable use was a std::move. Profoundly unintuitive. I just used this variable, why can't I use it again?!
I think many of the things in the language -- lifetimes and borrows especially -- should have been modeled with a more explicit syntax instead of hiding inside existing standard language features (type parameterizations and assignment)
>Rust can work this way, but mostly doesn't by default. It's like you were programming in C++ but instead of normal copies and references, almost every single variable use was a std::move. Profoundly unintuitive. I just used this variable, why can't I use it again?!
This was my experience of rust as well. I've since forgotten exactly what I was trying to do (it was a year ago now), but a seemingly innocuous check of the variable I was using meant I couldn't use it immediately afterwards. I have no issues with move semantics in C++, but making that the default behavior in rust made it a pain in the ass to use, especially when it doesn't seem to be consistent. I ended up rewriting a week's worth of rust in C++, including the unit tests, over two days.
Yeah to be clear I'm not opposed to the semantics of how Rust manages lifetimes. I'm ... disturbed.. by the syntax.
It feels "bolted on".
Variable assignments and usages that borrow should be clear with a different syntax.
Lifetimes should not be lumped in the same box with other type parameters but instead broken out visually into some other sort of annotation on the type.
BTW I also think "std::move" in C++ is a hack. They should have introduced a new syntactical element for that as well instead of masquerading it underneath a pseudo-function.
couldn’t agree more. I’ve been thinking a bit about rust recently and why it’s hard to write when starting out, and I think needlessly verbose syntax is about 10% of the problem and not enough syntax around borrowing/the memory model is 90% of the problem. In other words, Rust syntax is minimalist in all the wrong places.
A lot of borrow-checking-related stuff in rust forces you (until you internalize how it works) into a compile/fix loop that is horribly inefficient and tedious. If performing moves had special syntax, it'd be palpably obvious to even new rust programmers that a move is happening and that assignment does not (in most cases) work as it does in about 99% of other programming languages.
also, important things that invert or alter this behavior should have special syntax. For example, I'm still not sure how any client code is supposed to know that some library type implements Copy (which complicates things further by inverting Rust's inversion of typical semantics). It seems the only way is "try it and see" or read the docs. I guess the idea is that you shouldn't necessarily need to know this, but I do think rust would have benefited from making borrow and memory handling more transparent and obvious with special syntax around it (beyond lifetimes; at the point of use, not just in signatures). As it stands, it seems to want to give users the control that comes with manual memory management combined with the sort of "hiding away" of details that GCs traditionally afford.
> Rust can work this way, but mostly doesn't by default. It's like you were programming in C++ but instead of normal copies and references, almost every single variable use was a std::move. Profoundly unintuitive. I just used this variable, why can't I use it again?!
You just need to get used to it, really. As you've noted, it's about our habits. The ownership model makes perfect sense after some experience in Rust, unlike that stuff with asynchronous programming.
Most of the pain here comes from the unholy trifecta: combining async, lifetimes, and dynamic dispatch with trait object closures, which is indeed very awkward in practice.
Async support is incredibly half-baked. It was released as an MVP, but that MVP has not improved notably in almost three years.
There are lots of ideas, some progress on the fundamental language features required (GAT, existential types, ...) and the random RFC here and there, but progress is painfully slow.
This isn't because no one cares, but because Rust's async implementation (which is very cool due to the low overhead) and its interactions with the other language features require complicated extensions to the type system. It does seem to me like there might be a lack of resources/coordination/vision ever since the Mozilla layoffs, but that's a different topic.
If you can avoid async I would recommend doing so. The problem is that the entire ecosystem has completely shifted to async. There are almost no active / popular libraries related to network IO that haven't switched over.
To be clear: using async is fine if you know what you are doing, and it can provide incredible performance. But if you do, keep it simple: avoid lifetimes and, most importantly, don't attempt advanced trait shenanigans. If you do need traits, just return BoxFutures without lifetimes, throw in lots of clone(), share as little data as possible, use Arc<tokio::sync::Mutex<_>>, and call it a day.
> Most of the pain here comes from the unholy trifecta: combining async, lifetimes, and dynamic dispatch with trait object closures, which is indeed very awkward in practice.
Even in regular Rust, trying to get too clever with lifetimes can cause serious pain. The usual culprit is complex code that tries to never allocate memory. "Oh, well this closure borrows this parameter from the parent function, and then stores a reference to it in this stack-based structure inside the closure, and then we pass everything by reference to this higher order function..." Just say no.
If you add async to the mix, then you need to keep your ownership simple. If you have a long-running async function, then pass parameters by value! If you have a polymorphic async function, then return your result in a Box. (This also breaks up the generated state machine used to implement async functions, and can reduce binary size.)
So much of this pain is caused by premature optimization.
In C++, if you get too clever, you eventually make a mistake and segfault. In Rust, if you get too clever, it eventually becomes impossible to satisfy the borrow checker. Most of the time, the solution is to be less clever.
Async Rust was a fascinating experiment, but in practice, I think it turned out to be a tool that's best used conservatively. I've written some very stable and high-performance production code using async Rust. But I keep it simple when I can.
> So much of this pain is caused by premature optimization.
Part of the problem is that Rust itself has such lofty aspirations to do it all (particularly "abstraction without overhead"), and is largely successful at that. So if I compromise on efficiency, it feels like my code is unworthy of the language it's written in.
Also, it's easy to feel that the slightest inefficiency puts us on a slippery slope to the bloat and slowness of something like Electron. I'd like to fight back against that bloat, the upgrade treadmill, and the rapid obsolescence that serves hardware makers but hurts poor people. On the one hand, there was good software in the 80s and 90s that made liberal use of heap-allocated, reference-counted objects with dynamic dispatch. On the other hand, there were slow, bloated desktop applications before the rise of Java, C#, Electron, etc. So I don't know where the right balance is at.
I agree; if you use dynamic `Arc`s almost everywhere in your code, what's the point in a systems PL whose essence is in managing static lifetimes? Sometimes people use Rust just because many other languages suck; they are choosing between two evils: tedious programming in Rust or inadequacies of another language. It should not be like this. This is why I think we need a high-level, no-BS version of Rust.
> This is why I think we need a high-level, no-BS version of Rust.
F# is probably what you're looking for if you want functional-lite programming with a strong type system, but don't want to deal with the pains caused by static resource management.
F# has some nice things in it: I especially like how it handles multiple return values. But compared to the language it is most clearly descended from, OCaml, it ties you to .NET Core, and it lacks an ML-like module system and macros.
If you want to use .NET, F# is a fine choice. But otherwise, and especially in the context of an alternative to Rust, I recommend OCaml.
It might sound superficial but the simple fact that there is seemingly a hard dependency[0] on Visual Studio (Code) is a major turnoff for me. I am quite attached to my Vim/command line workflow.
That link includes documentation for building and running F# projects using only the .net core framework. Seems like the main set of headings is misleading, making it sound like you must install VS or (shudder) VSCode to get F# installed.
Vote here for Haskell, which receives great hate on HN because it allows essentially unlimited abstraction.
Meanwhile it has one of the fastest concurrent+GC runtimes, obliterating Ocaml in this regard, and competing handily with JVM or CLR langs.
Finally, it has standard abstractions for concurrency and parallelism that are basically flawless.
Clearly, the hate comes from the association of Haskell with category theory, which is frankly useless/borderline harmful to the working Haskell programmer.
CT was critical to core library devs being able to deliver the main workaday abstractions that make Haskell such an unreasonably productive app language. But since 2012 you can simply use these main abstractions and enjoy a level of safety and maintainability that Rust, C++, OCaml, Kotlin, and Java can't touch.
It's not perfect, but there's been huge improvements of late to tooling (cabal) so there's never been a better time. I really wish the tide could turn, because if you're willing to take on rust for low-level, you're missing out if you don't try haskell for everything else server-side.
> I agree; if you use dynamic `Arc`s almost everywhere in your code, what's the point in a systems PL whose essence is in managing static lifetimes?
You can have the best of both worlds: In many cases, I simply put an object into an Arc, clone that a bunch of times, and pass/store it to wherever it's needed. Then, within loops, I pass a reference to local function calls (instead of cloning the Arc for every call). For any given v: Arc<T>, doing &*v is zero-cost – unlike in many other languages, where you'd have to pass the Arc itself, which would involve atomic increments/decrements without escape analysis.
Hide memory management under a language runtime. Make a single Fn trait/string type instead of multiple ones. Add effect polymorphism to deal with function colours. Remove async/.await and express it using algebraic effects, do the same for streams and iterators.
How it'd be designed and implemented is probably a theme for a separate blog post, not a HN comment.
Functors and modules are nice but do not have the convenience of traits and type classes. The problem with functors and modules is that you have to manually instantiate the right ones when you want to use some operation. Type classes and traits build the right instantiations for you based on the types.
I want to write `toString (1,true)` not `ToStringPair(ToStringInt)(ToStringBool).toString (1,true)`.
> Which parts of Rust would you give up to make this happen? And how would this be implemented?
Traits, maybe? Switch to an OO model more closely aligned to what the majority of developers understand. Dump all of the line-noise-type syntax.
My Own Toy Language (ComingRealSoonNow)^tm that I started designing had exactly one goal - prevent the majority of memory errors, not prevent ALL memory errors. All I wanted was to indicate when/where an object will be destroyed/mutated. That's enough for me.
Actually, something like traits (as a concept) is what powers COM and WinRT, and is the basis of the Objective-C protocols that influenced interfaces in more common OOP languages.
Also the basis for one programming language that used to be widespread in the enterprise for quick and dirty solutions, VB and its ecosystem of OCX libraries.
It is more widespread than people think, because when many argue about OOP, they miss the full spectrum of how OOP is approached.
Traits seem, to me, a lot like generic functions and methods in CLOS (Common Lisp). It's just that CLOS doesn't bundle a group of generic functions into a trait, they're a la carte.
> Actually something like traits (as concept) is what powers COM, WinRT, the basis of Objective-C protocols that influenced interfaces in more common OOP languages.
So? Giving up Rust-Traits doesn't mean that I will give up on interfaces as a whole.
I don't like the way C++ does interfaces, for example, but that doesn't mean that my ideal language won't have interfaces.
You hit the nail on the head, at least for me. I've found Go to be the right sweet spot between efficiency and productivity. I can't get anything done in Rust. My worst enemy is myself.
I like the thick stdlib in Go, but it feels a lot slower even than Java. The thin stdlib and giant dependency trees are my primary complaint with Rust. I like most of the syntax, but async is not my threading style; I mostly use fine-grained threads like rayon provides. I'm currently more excited about Julia, though. Same dependency hell, but super fast and flexible, and the code looks good.
That is what I usually do. And you can supposedly precompile everything if you just want to deploy a service so restarts are short. I have not done that though.
I just tell everyone, just write Rust like you'd write Scala. Don't try to optimize the crap out of anything until you need it, especially if no one else is going to use your code.
Just make the allocations.
Even if you write it that way it's still significantly more performant and lightweight than the alternatives without much loss in productivity.
This is good advice. Write simple code first. Another generic thing I've come across is, "if the borrow checker is screaming at me when I hit a patch of code, I probably have a design flaw not a programming problem".
I've seen a lot of new people also prematurely over generalizing code. I know it sounds terrible to say, but if you probably aren't going to reuse it, you really don't have to prepare your code base just in case you might later. Decide that later and move along...
In industrial machines, there's such a thing as holding it wrong. In some cases, that means getting maimed (that old-school table saw doesn't care whether it chews on wood or flesh); in others, injury is precluded by a clunkier mode of operation (you need to press two buttons separated at roughly arm's length, to make sure they can't be actuated while you have an arm in the way of the heavy arm-eating chunk of metal). The former is C and unsafe Rust; the latter is safe Rust and GC languages.
You're trying to draw a comparison with a consumer product with a design flaw.
It's not the only way, but if you make the tradeoff of going with manual memory management, then you should accept what the tradeoff entails. Essential complexity is non-reducible: you either manage it automatically through a GC or choose a model that, while letting you ignore it quite often, will make you think about it in the end.
> If you have a polymorphic async function, then return your result in a Box.
Agreed. Copy, Clone, Box, Arc, RwLock, etc. are your friends. Don't be afraid to use them.
You don't need as much performance as you think you do. Passing things around on the heap is fine for most applications. The Rust compiler is often smarter than you think.
And, when the Rust compiler is dumb or your code is getting stuck, you have code that you can understand and find the hotspot of. Now you can be clever.
I generally go by the principle: borrow if I can without effort; otherwise, I think for a second about whether the code is in the critical path or the object is megabytes in size. If neither is the case, I clone without shame. The remaining 0.1% gets my attention (Arc, lifetime annotations, refactoring, or whatever is appropriate).
The important part is perspective. Nobody cares if you clone your command line arguments ten times during startup. As Knuth put it, optimize the critical 3% of your program that matters; keep the rest simple.
Exactly. Even very experienced people will be easily misled to believe something is performance-sensitive -- computers are ridiculously fast. Like honestly, if it is not a tight loop chances are it seriously doesn't matter what you do. Remember, we write plenty of shell scripts that literally spawn a new process for almost every line, and yet they feel instantaneous.
I've been using Rust for about two months. At the beginning I was trying to write my usual C/C++ code, got completely entangled in lifetime relationships, and the code ended up becoming a mess.
Nowadays as soon as the compiler starts mentioning lifetimes I see that as a warning => I then take a step back and change approach/design => no problems.
I use a little bit of async (I create dedicated variables that are moved into the async functions) and classic multithreading (I use "mpsc" to exchange data between the main thread and subthreads), and so far I have never had a single segfault nor any kind of weird behaviour, which is incredible compared to some other languages, at least for my programming skills :)
> If you have a long-running async function, then pass parameters by value! If you have a polymorphic async function, then return your result in a Box.
I've taken to making heavy use of the smallvec and smartstring crates for this. Most lists and strings are small in practice. Using smallvec / smartstring lets you keep most clone() calls allocation-free. This in turn lets you use owned objects, which are easier to reason about - for you and the borrow checker. And you keep a lot of the performance of just passing around references.
I tried to use async rust a couple of years ago, and fell on my face in the process. Most of my rust at the moment is designed to compile to wasm - and then I'm leaning on nodejs for networking and IO. Writing async networked code is oh so much easier to reason about in javascript. When GAT, TAIT and some other language features to fix async land I'll muster up the courage to make another attempt. But rust's progress at fixing these problems feels painfully slow.
However, the constant checking of whether something is on the stack or the heap is a big performance problem for smallvec. It results in terrible CPU branch prediction.
I don't see how it is bad for branch prediction. If you use the same smallvec object in a loop and don't insert any elements (so that it doesn't allocate on the heap), the branch will always go in the same direction, making it easy and cheap to predict. And I would think that in most use cases your smallvec object will remain in one state, not changing in between.
> So much of this pain is caused by premature optimization.
Rust is normally used only when high performance is of utmost importance, so it will always attract people who want to optimise everything.
> Most of the time, the solution is to be less clever
I've done it myself and saw performance degrade, as was expected. That's fine, but by that point you might as well just use another language that has none of the hard problems and end up having very similar performance, which is what I did.
> Rust is normally used only when high performance is of utmost importance, so it will always attract people who want to optimise everything.
Which is not the way to do things. Profile, then optimize.
I'm writing a metaverse client that's heavily multithreaded and can keep a GPU, a dozen CPUs, and a network connection busy. Only some parts have to go fast. The critical parts are:
* The render loop, which is in its own higher-priority thread.
* Blocking the render loop with locks set during GPU content updating, which is supposed to be done in parallel with rendering.
* JPEG 2000 decoding, which eats up too much time and for which 10x faster decoders are available.
* Strategies for deciding which content to load first.
* Strategies for deciding what doesn't have to be drawn.
Those really matter. The rest is either minor, infrequent, or not on the critical path.
I use Tracy to let me watch and zoom in on where the time goes in each rendered frame. Unless Tracy says performance is a problem, it doesn't need to be optimized.
Coming from the gaming industry, I think you might want to measure how far you can go with single-threaded rendering. There is a limit to content and code beyond which it becomes a brick wall later.
Here is an example from SIGGRAPH 2021 where Activision presents what multithreaded rendering looks like: https://youtu.be/9ublsQNbv6I
PS: I don't work with Activision; it's just a public example that illustrates industry practice.
I'm using Rend3/WGPU, where multithreaded rendering is coming, but isn't here yet. Work is underway.[1]
The Rust game dev ecosystem is far enough along for simple games, but not there yet when you need all the performance of which the hardware is capable.
Cool video that matches best practices. Reducing memory footprint is always good and laying out things in memory is also good way to speed things up without changing amount of work.
I have trouble with the concept of WGPU. GPUs are complex enough by themselves without bolting on an abstraction that comes from the Web. But that's just me; it's not important, since I'm not a 3D programmer myself. I'm more of an engine / CPU-optimization guy.
My interest in Rust and this topic is that I would like to see fine-grained task-parallel systems written in Rust, instead of systems with a separate render thread, which became a bottleneck years ago. I wish you good luck and hope to see a success story about Rust.
WGPU's API is basically Vulkan. It exists mostly to deal with Apple's Metal. Metal has roughly the same feature set as Vulkan, but Apple just had to Think Different and be incompatible. I'm not supporting the Wasm or Android targets. Android and browsers have a different threading model, and I don't want to deal with that at this stage. Linux/Windows/Mac is enough for now.
Thought for the near future: will VR and AR headgear have threads, or something more like processes with a shared-memory model, as in JavaScript land?
(That video isn't me, it's the Rend3 dev, who also works on WGPU.)
In Zig, we are writing a moderately complex application that calls mmap many times in the first few seconds and never thereafter. We can use every part of the stdlib and most libraries because we can pass in allocators backed by the memory we reserved at startup.
> If you can avoid async I would recommend doing so. The problem is that the entire ecosystem has completely shifted to async. There are almost no active / popular libraries related to network IO that haven't switched over.
Yes. I've been complaining about async contamination for some time. I'm writing heavily threaded code, with threads running at different priorities, and libraries which want async get in the way.
If you look at the poster's example, the "Arc" version is very close to the Go version. And if it didn't use "async", it would be even closer.
Go's green threads simplify things. It's real concurrency; you can block. But there are times when you have to lock.
As I've said before, if you're writing webcrap, use Go. The libraries for web-related stuff are stable and well-exercised, since Google uses them internally.
I agree, I don't understand why so many people lately seem to want to use Rust for web domain stuff.
I don't like Go, I hated the year I had to work in it @ Google. But frankly, it's better suited for 'server' type stuff, unless you're talking about a very specific type of server that has super intense latency guarantees. And now that Go has generics, I'd probably hate it less.
Go is the new Java. Rust is the new C++. Let's just stick with that.
I do it first and foremost because I dislike the error checking story in Go, and writing Rust for web stuff doesn't really feel too dissimilar to hacking on web frameworks of years past. In fact, I'd argue that web domain stuff is where Rust is relatively mature.
But this all said, there's definitely a point to be made here - Go works fine for so many use-cases, people should use it if they like it or it fits the story better. The idea of "one true language" has never worked out.
Generics have been around for some time now, and people don't seem too keen on using them for anything aside from data-structure specialisation, because frankly dynamic dispatch via interfaces hits such a sweet spot: you get a lot done with very minimal overhead. The added syntactic complexity is very rarely justified, and containers are perhaps one of the cases where it is. Otherwise, from what I've observed, generics in Go were massively over-hyped but failed to gain the traction that was initially expected. I feel like there's a very good reason for that.
Generics as a feature are almost invisible when they exist and are done well, but their absence is a huge pain in the ass. They are mostly needed for libraries, so your average dev won't write them often, but there they are invaluable.
>"I don't understand why so many people lately seem to want to use Rust for web domain stuff."
>"Rust is the new C++. Let's just stick with that."
I write "web domain stuff" in C++ and it is incredibly easy (well for me at least). In C++ I could always use my own styles / paradigms / patterns etc. etc. Not forced to any particular way. And modern C++ is incredibly safe if one wishes.
So if it is a bad idea to write "web stuff" in Rust, then it is anything but the new C++.
That's not my experience. I'm three times more productive writing web apps in C# than in C++, and that's not even taking into account compile times, hot reload, and unit testing.
Maybe I'm the worst C++ programmer in history and maybe I didn't spend too much time in trying to write web apps in C++ - it was just to test if it's a viable approach - but that was my particular experience.
Aside from writing more boilerplate code and the language being more ceremonious, I find that I miss the frameworks and libraries that are very easy to integrate with each other and that I take for granted in C#. The story is probably the same with any other language used for the web: Java, JS, Ruby, Python, and even the (in)famous PHP.
I think the situation could be much better if someone made some nicely designed frameworks and libraries, but I guess no one is interested, as C++ is perceived as a "not for web" language. People who are into C++ are generally systems programmers who are not into the web, and people who are into the web are taught that they only have to use "web languages".
That is really a shame, because in some situations there would be a huge benefit to having performant web apps; scaling out is not always a solution to a performance problem.
>"I didn't spend too much time in trying to write web apps in C++ - it was just to test if it's a viable approach - but that was my particular experience."
I rewrote web apps written in PHP and Python. Those were of a decent size, and in my case the app-specific code was about the same size in C++ as in the other two. The performance was orders of magnitude better.
My applications do not contain millions of lines of code, and maybe because of this and the way the code is organized, I do not really suffer from long compile times. Usually it is just a few seconds. Good enough for me.
It depends on the ecosystem. While I agree with you, at least on the Microsoft stack there has always been nice tooling for writing C++ web applications.
Before .NET we had ATLServer, then C++/CLI.
They also published some frameworks for writing web APIs in C++.
Now, I would also advise C# and then, if needed, calling into C++ via P/Invoke (or C++/CLI, C++/WinRT if on Windows), rather than exposing C++ directly to the wire.
I wrote plenty of vaguely web stuff in C++ at Google. A lot of services at Google are C++. But it's kind of not how the rest of the industry expects things to be done. And most of that stuff there is now moving to Go.
>And most of that stuff there is now moving to Go.
Go is in the same performance class as Java and C#. Sure, if you don't need the best performance, you can move to another language. But then, why did you start using C++ in the first place?
I really don't believe that this high-level low-level language thing is true. Sure, both C++ and Rust are incredibly expressive and cool languages: they give you control over low-level details, yet make it very easy to ignore these for the most part. But there will surely come a time when you have to deal with them (it may not happen at first write, but it will definitely happen at refactor, when adding a new feature, etc.). Managed languages make such refactors trivial, while C++ and Rust (even with its very advanced type system) make these harder due to you having to rearchitect the whole program from a memory model perspective. Sure, it can be trivial in many cases, but not always.
So all in all, I really don't think that the (long-running) productivity and maintainability of managed languages can be approached by these low-level langs. And that is fine, (thank God) not everything is a dumb CRUD web app, there are very real niches where that low-level detail is a necessity.
>"Managed languages make such refactors trivial, while C++ and Rust (even with its very advanced type system) make these harder due to you having to rearchitect the whole program from a memory model perspective. Sure, it can be trivial in many cases, but not always."
This is not the function of the language but the ability of the developer to properly architect their code.
And tools like Visual C++ / CLion have very advanced refactoring features.
> In C++ I could always use my own styles / paradigms / patterns etc. etc.
That's also one of its major disadvantages, unless you literally rewrite the entire thing when major maintainer changes happen, because otherwise you get a mix of different C++ styles in your codebase which leads to nobody being able to maintain it.
I don't know; I never got heavily onto the C++ inheritance bandwagon when it was popular in the late 1990s / early 2000s, rather preferring aggregation.
If your application is actually using the OO parts of the language that way, even without modern C++ it's fairly easy to maintain, as the different styles etc. are also encapsulated in their classes. Then hopefully the top level is using some kind of message-passing interface or similar to avoid trying to glue everything together into a god class.
In other words, there are a few fairly easy-to-understand rules that allow people to do their own thing without creating a maintainability nightmare, even with very large C++ codebases. If an experienced engineer/architect with a track record of successful C++ projects is in charge during the initial application design, you should end up with a fairly maintainable application.
I'm not sure this is really a C++ thing, though, rather than a general engineering thing. If the main architecture is well thought out and understandable, a lot of sins can be buried in places where they can't create application-wide chaos.
There are a lot of things that can make C++ applications suck, but you don't tend to hear about the success stories on your favorite board; those systems silently do their job. So many of the things people rail about with C and C++ simply aren't problems when an appropriate engineering culture is maintained. For example, having solid unit tests for most of the base classes being aggregated means that running them under address sanitizers and similar tools will find the errors that aren't picked up by static analysis.
Yes, sometimes things sneak by, but I'm not sure there are any languages (Java, Rust, etc.) that solve that problem completely.
> because otherwise you get a mix of different C++ styles in your codebase which leads to nobody being able to maintain it.
I'd be really interested in meeting someone who has enough mental flexibility to learn C++ but not enough to accommodate different code styles in a single codebase.
Modern C++ isn't really safe. Safe means the compiler catches you, and generally C++ compilers don't. A trivial example is iterator invalidation: even with all warnings and errors turned on, compilers don't catch it.
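To make the contrast concrete: in C++, a range-for loop that `push_back`s into the vector it is iterating compiles cleanly and may invalidate the iterator at runtime. Rust refuses to compile the equivalent (a borrow held across the push), which forces a rewrite like this index-based sketch (function name is made up for illustration):

```rust
// Rust rejects `for x in v.iter() { v.push(...) }` at compile time,
// because the iterator borrows `v`. The forced rewrite copies each
// element out by index, so no reference into `v` survives the push.
fn double_evens(v: &mut Vec<i32>) {
    let n = v.len(); // only visit the original elements
    for i in 0..n {
        let x = v[i]; // copy out; no borrow of `v` is held...
        if x % 2 == 0 {
            v.push(x * 2); // ...so growing the Vec here is fine
        }
    }
}
```

The point is not that the index version is clever; it's that the compiler makes the invalidating version unrepresentable, whereas C++ leaves it to sanitizers and luck.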
Can we cut it out with the hyperbole? Using "It Isn't Really Safe Unless It Is Written In My Favourite Niche Language" as an argument is .. well ... ridiculous. You can extend that argument to any practical programming language.
So ... Rust ... "Isn't Really Safe" because you can get into a point where memory is corrupted, where your application deadlocks, or threads starve, etc.
Haskell ... "Isn't Really Safe" because it is not possible to formally verify the logic.
Etc, ad infinitum ...
Maybe go for "Not As Safe As". After all, I don't even like C++ (see my comment history), but it's certainly possible (and not very hard) to get about 90% of Rust safety in C++.
Safety is not a binary, it's on a spectrum. Saying that a language is either safe or unsafe implies that the "safe" language is actually safe while the "unsafe" language is completely deadly. That's certainly not true.
> Rust ... "Isn't Really Safe" because you can get into a point where memory is corrupted
I dare you to find memory corruption in safe Rust that isn't already on an issue tracker.
> Saying that a language is either safe or unsafe implies that the "safe" language is actually safe while the "unsafe" language is completely deadly.
Tell that to the people who got owned by https://googleprojectzero.blogspot.com/2021/12/a-deep-dive-i... , most likely human rights activists targeted by less-than-savory regimes. Memory corruption bugs are so frequent that it is possible for NSO Group to sell to pretty much any regime, without the bugs being classified as top-secret information.
(To be clear, I’m not saying that there’s no issues that aren’t in the tracker yet. But there are a bunch of them that are. More will absolutely be found as time goes on, that’s just how these things are.)
Safety isn't an all or nothing thing, and in most cases, rigorous safety comes with tradeoffs. In some applications those tradeoffs are worth it, and in others they're not. A small project probably doesn't need to be provably correct, because it's easy enough to analyze it, and you aren't gaining much if the more rigorous language is unwieldy (they often are); similarly, a large enterprise project is harder to analyze, and has larger costs if there are CVEs, so a safer language is probably a good fit. Note that this doesn't necessarily mean rust, though -- garbage collected languages sit here, too. Rust sits in the niche of "big enough and / or critical enough to justify rigorous safety", and "high performance really matters".
We use rust for web dev purely for the type safety. Having business logic encoded as a state machine in an enum with compile time checked matches makes the world of difference
Those benchmarks are highly gamed. Don't trust them. Stuff like hardcoded response length ... Also, they use http and DB pipelining, which hardly reflects most real world usecases.
I find that part a hurdle, actually, despite loving Rust. I'm struggling with magic tools like wasm-pack that take over the build process. Cargo is fine – I know what Cargo does and how and why it calls rustc. But finding out what wasm-pack and friends do has been an uphill battle for me.
Why can't I just compile to wasm32 with Cargo? What's missing?
As far as I know you can, using the wasm32-unknown-unknown target. I think wasm-pack does extra stuff, like supporting different output targets (a Node.js module, a webpack-compatible format), which is outside the scope of something like Cargo, IMO.
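For what it's worth, a bare Cargo build to wasm is just this (a sketch of the usual workflow; paths depend on your crate, and wasm-bindgen is mentioned only as the glue step that wasm-pack automates on top):

```shell
# One-time: install the target for the current toolchain.
rustup target add wasm32-unknown-unknown

# Plain Cargo build; no wasm-pack involved.
cargo build --release --target wasm32-unknown-unknown

# The raw .wasm module lands here. What wasm-pack adds on top is mainly
# running wasm-bindgen to generate JS glue, plus npm packaging metadata.
ls target/wasm32-unknown-unknown/release/
```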
Why? Because it's stupid fast, and fast means serving an order of magnitude (or more) clients before requiring scale-up. Scale-up means $. No stop-the-world GC pauses, etc. That's basically it.
To be fair, I usually prefer to use Go as well, though lately Rust is more appealing.
I mean, there's no world in which my personal tastes wouldn't prefer Rust over Go -- I quite dislike writing Go. But as an engineer it is my responsibility to use the right tool for the job, not pick tools based on my tastes or gut feelings.
There's different kinds of fast. For most kinds of fast that people doing 'web' type things need, a garbage collector and a VM are not going to be the bottleneck. Efficient management of workloads across blocking I/O is going to be where the hard work is.
Now, I've written ad servers and video streamers, and other low latency high throughput things, and yes, I'd probably reach for C++ or Rust there. But some of the jobs I've seen lately posting for Rust, I do question. Even if I'm tempted to apply, because I'd like to get $$ to work in Rust.
Yeah, it's hard to say without context; I just gave the generic answer. I'm not sure you ever 'need' a garbage collector, and if avoiding one lets me save $20k in cloud expenses a year on a small team, it's probably worth it, because after scale-out $20k isn't $20k anymore.
That said, people misuse technology all the dang time (I've done it). And in general I agree: Go is usually enough and has the best cloud ecosystem. I've had trouble with large Go codebases exploding over time and requiring a lot of bodies to maintain, compared with Rust/Scala.
Sometimes it's hard to speculate when one tool is clearly better than another. Some companies just want to use new stuff to sound cooler...
Rust does not in general have an order of magnitude advantage over Go. 2 or 3 would be closer, and that's with some nontrivial attention paid to optimization, not "you write Rust and it's automatically always faster".
Super high-end stuff can outclass Go by that much, like if you're seriously using DPDK or something, and in those cases I strongly recommend Rust over Go. There are some other niches like that. But in general, it's not an order of magnitude.
It really depends. I agree it's not always an entire order of magnitude; I didn't mean to paint that picture for passersby who don't know better. I'm not trying to touch the "this lang vs. that lang" performance conversation with a ten-foot pole...
10x improvement over Go is a reasonable expectation, in a limited set of environments, generally at a scale where you're looking at using every last bit of a 32-core or 64-core machine and doing a lot of memory work. You should also expect to be spending a lot of time optimizing that Rust to get there. But when you need that level of performance, you also basically made a mistake starting with Go in the first place.
However, by far the bigger problem is people thinking their little web service serving out 5 requests a second requires that level of optimization when in fact even a Go implementation will use <1% of the CPU and no other resources to speak of. Dynamic scripting languages, IMHO, have skewed a lot of people's performance views. Go is already more power than most problems need, in terms of raw performance, and it's only a sliver, a niche where that's the difference between a Go vs. Rust choice.
It's a delicate understanding, but an important one for a professional.
I would not suggest using DPDK with Rust, unless you want to write a lot of stuff for yourself and wrangle with some tricky `unsafe` code. Sticking to C or C++ will make your life a lot easier.
Pretty sure you could hire 5 rust developers pretty easily. A lot of people in the rust community are dying to find a job that will let them write rust professionally...
Perhaps in TechEmpower benchmarks, which to be fair aren't the worst way of taking frameworks out for a run to see what they can do, but at the end of the day they're very synthetic workloads. Interesting applications are piles of business logic where a slightly faster framework doesn't really buy you much.
Anecdata are all I can provide, but the equivalent (Google) batch job in C++ vs. Java is often an order of magnitude better overall for a variety of reasons. It might only be 0.5-3x faster, but it also uses fewer cores and less memory to do the same amount of work.
Having written C++ and Java services, the median C++ service performs better than the median Java service. Some of that is less pointer chasing, some of it is better libraries. Some services will be slow no matter what you write them in. There are too many variables to quantify, and "performs better" is a real load-bearing term. Sometimes it's less CPU or memory; sometimes it's latency.
I still primarily work on Java services, and they perform pretty well. That said, the operational dynamism of Java services is a real thorn in my side. Beyond the GC, there's a lot going on in the runtime including JIT that just makes JVM services less predictable. Even classloading is a source of unpredictability, unless you eagerly load classes.
FWIW, I think Go does a pretty good job being predictable too. AOT compilation helps there, you're not worrying about new tasks needing to JIT. Go mostly worries about GC (often irrelevant), warmup of state (like TCP connections), and avoiding footguns that leak Goroutines. :) At least Go's footguns rarely blow your whole leg off.
>As I've said before, if you're writing webcrap, use Go. The libraries for web-related stuff are stable and well-exercised, since Google uses them internally.
I don't get why you are so dismissive about web programming.
The Go libraries are good because it's easy to write good libraries in Go. And it's easy to write good libraries in Go, because of design decisions. Same for C#, F#, Java, Python and more.
Go's ecosystem is amazingly stable. Its rich standard library helps, but in general, there's a lot of attention to backward compatibility.
Once an application has been written in Go, updating dependencies or using a more recent compiler version is a breeze. Nothing breaks.
Rust code on the other hand comes with a high maintenance cost. The ecosystem is still very unstable. Core dependencies constantly have breaking API changes, or get abandoned/deprecated/superseded. Keeping everything up to date is not trivial and very time consuming.
I abandoned projects due to this, and others use dependencies with known vulnerabilities, that may not even compile any more at some point. But dealing with changes in Rust dependencies instead of the actual application logic is not fun.
So, for a project that has to be maintained long-term, I would choose Go anytime. Productivity is so much higher, especially when including maintenance cost.
> The ecosystem is still very unstable. Core dependencies constantly have breaking API changes, or get abandoned/deprecated/superseded. Keeping everything up to date is not trivial and very time consuming.
I'm only a hobby coder but this has hit me a few times. I suspect this will level out in time, though.
This has been discussed on the Rust forums. There are too many widely used packages that are stuck at version 0.x, with no stability guarantee. The basic HTTP library, "hyper", is at 0.14.19, and it's had breaking changes more than once. 48,300,708 downloads.
* "image": 0.24.2 (Read and write common image formats, 7,257,076 downloads)
Rust needs a push to get everything with more than a million downloads up to version 1.x. Then the semantic versioning rules are supposed to require no breaking changes for existing code without changing the major version number.
> The problem is that the entire ecosystem has completely shifted to async. There are almost no active / popular libraries related to network IO
The ecosystem, or the ecosystem where network IO is a thing? Surely that's just a corner of the Rust library ecosystem. I have almost never used network IO (databases, http,...) in 20 years of programming, and zero times in Rust.
There is a big (and might I say extremely comfortable) world of programming where everything is on one machine, and things are CPU-bound instead of IO-bound.
Completely agree. I wrote a multithreaded compute-heavy program in Rust, semi-ported from a C++ version, and sidestepped async completely—just used old school thread primitives. It was delightful! Once I sorted out the data structures and messaging, the borrow checker and Send/Sync traits made implementation nearly trivial, and absolutely no memory corruption or accidental non-atomic clobbering. It took a bunch of hours with gdb and valgrind to achieve the same stability with the C++ code, and to this day I'm not 100% sure I got every edge case.
I’m a Rust newbie. Mind if I ask: are you referring to using threads and locks and queues and such?
Does Rust give you rope to hang yourself when doing it without async or does it continue to be very specific about forcing you to guarantee that you’re not going to run into races and whatnot?
Rust marks cross-thread shared memory as immutable in the general case, and allows you to define your own shared mutability constructs out of primitives like mutexes, atomics, and UnsafeCell. As a result you don't get rope to hang yourself with by default, but atomic orderings are more than enough rope to devise incorrect synchronizations (especially with more than 2 threads or memory locations). To quote an earlier post of mine:
In terms of shared-memory threading concurrency, Send and Sync, and the distinction between &T and &Mutex<T> and &mut T, were a revelation when I first learned them. It was a principled approach to shared-memory threading, with Send/Sync banning nearly all of the confusing and buggy entangled-state codebases I've seen and continue to see in C++ (much to my frustration and exasperation), and &Mutex<T> providing a cleaner alternative design (there's an excellent article on its design at http://cliffle.com/blog/rust-mutexes/).
My favorite simple concurrent data structure is https://docs.rs/triple_buffer/latest/triple_buffer/struct.Tr.... It beautifully demonstrates how you can achieve principled shared mutability, by defining two "handle" types (living on different threads), each carrying thread-local state (not TLS) and a pointer to shared memory, and only allowing each handle to access shared memory in a particular way. This statically prevents one thread from calling a method intended to run on another thread, or accessing fields local to another thread (since the methods and fields now live on the other handle). It also demonstrates the complexity of reasoning about lock-free algorithms (https://github.com/HadrienG2/triple-buffer/issues/14).
I find that writing C++ code the Rust way eliminates data races practically as effectively as writing Rust code upfront, but C++ makes the Rust way of thread-safe code extra work (no Mutex<T> unless you make one yourself, and you have to simulate &(T: Sync) yourself using T const* coupled with mutable atomic/mutex fields), whereas the happy path of threaded C++ (raw non-Arc pointers to shared mutable memory) leads to pervasive data races caused by missing or incorrect mutex locking or atomic synchronization.
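A minimal sketch of the `&Mutex<T>` point made above: the protected data lives inside the mutex, so the only way to reach it from any thread is through `lock()`, and the compiler enforces it (the function and numbers are illustrative, not from the original post):

```rust
use std::sync::{Arc, Mutex};
use std::thread;

// The data lives *inside* the Mutex; there is no raw pointer to forget
// to lock, unlike the happy path of threaded C++.
fn parallel_sum() -> i32 {
    let total = Arc::new(Mutex::new(0));
    let mut handles = Vec::new();
    for i in 1..=4 {
        let total = Arc::clone(&total);
        handles.push(thread::spawn(move || {
            // The guard derefs to &mut i32; the borrow cannot outlive the lock.
            *total.lock().unwrap() += i;
        }));
    }
    for h in handles {
        h.join().unwrap();
    }
    let result = *total.lock().unwrap();
    result
}
```

Trying to hand out a `&mut i32` that escapes the guard's lifetime, or sending a non-`Send` type into `thread::spawn`, simply fails to compile, which is exactly the "principled approach" the parent describes.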
Re: fearless concurrency... Would Rust prevent you in general from writing code that could deadlock, btw?
Thread1: takes lock A, ..., tries to take lock B
Thread2: takes lock B, ..., tries to take lock A
Looks like you should be able to pass Mutex<A> and Mutex<B> to both threads (otherwise, what's the point of a mutex if there's no way to share the data it protects?), so it doesn't look like Rust prevents you from hitting this scenario.
No, Rust doesn't prevent deadlocks; a deadlock is safe (it isn't what you wanted, but it's safe). There are well-known strategies to avoid deadlock (in any language).
In the trivial example you gave, one strategy just insists we take locks in alphabetical order. Thread 2 can't take lock A, because it already has lock B and that's not the correct order.
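A sketch of that lock-ordering strategy in Rust (names invented for the example; here the fixed order is the mutexes' addresses, but any total order, alphabetical included, works the same way):

```rust
use std::sync::{Mutex, MutexGuard};

// Deadlock avoidance via a fixed global lock order: both threads always
// acquire the "earlier" mutex first, so the circular wait in the
// Thread1/Thread2 example above cannot form. Assumes `a` and `b` are
// distinct mutexes (locking the same one twice would self-deadlock).
fn lock_both<'a>(
    a: &'a Mutex<i64>,
    b: &'a Mutex<i64>,
) -> (MutexGuard<'a, i64>, MutexGuard<'a, i64>) {
    let pa = a as *const _ as usize;
    let pb = b as *const _ as usize;
    if pa <= pb {
        let ga = a.lock().unwrap();
        let gb = b.lock().unwrap();
        (ga, gb)
    } else {
        let gb = b.lock().unwrap();
        let ga = a.lock().unwrap();
        (ga, gb)
    }
}
```

Note that nothing in the type system forces you to go through a helper like this; it's a discipline, which is exactly why Rust can't promise deadlock freedom.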
No, deadlock-free code requires some additional structure. There is no general way that I know of to prevent deadlocks in any software with non-trivial lock graphs, but there are standard techniques to detect deadlocks programmatically so that they can be broken and resolved. OLTP databases figured out how to do this decades ago, but those techniques are expensive for general purpose programming.
The common method for deadlock-free code is roughly that when a thread is required to wait on a lock owned by a second thread, it checks to see if the second thread is waiting on a lock already owned by the first thread. This requires that the lock graph essentially be a high-performance and concurrent global structure.
Locks like this can be expensive, particularly under high concurrency or contention, so they aren't used for most software. If you can fit your software in a simpler model e.g. where locks are singular or only acquired as a DAG, then much higher performance options are available that don't require deadlock detection.
In the general case you're right, it's equivalent to the halting problem. The outline of the proof by reduction: set up two communicating processes in a way that will deadlock iff a particular loop in one process fails to terminate. So if you had a deadlock detector for arbitrary communicating processes, you could turn it into a termination detector for arbitrary loops.
Yes, deadlock-free lock systems are an ordinary part of OLTP database engines. They don't prevent deadlocks per se so much as detect them and dynamically resolve them.
The mechanism is costly but elegant. If a lock you are trying to acquire is owned by another thread, you inspect the locks you own to determine if that thread is waiting on one of your locks. When a deadlock is detected, there are several strategies to automatically resolve it e.g. rolling back one of the threads to a point where forward progress can be safely serialized.
No one wants to use these mechanics for ordinary code, due to their cost. For the fashionable thread-per-core software architectures, deadlocks aren't something you commonly have to worry about.
There are two kinds of threading bugs: deadlocks, which are easy to detect, and race conditions, which are far more difficult to detect and fix.
AFAIK Rust helps with the latter, not the former, which is still a very big improvement (much more than if it were the other way around).
Just to add, AFAIK Rust only prevents data races, not race conditions in general. Which is still a huge help, but concurrency is still hard without a much more restricted model.
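To illustrate the distinction: the (invented) function below has no data race, since every access goes through the mutex, yet it still has a classic check-then-act race condition that Rust happily compiles.

```rust
use std::sync::Mutex;

// No data race: every access is locked. Still a race condition:
// between the check (lock #1) and the withdrawal (lock #2), another
// thread can drain the balance, so we may overdraw. Rust's guarantees
// stop the former, not the latter.
fn racy_withdraw(balance: &Mutex<i64>, amount: i64) -> bool {
    let enough = *balance.lock().unwrap() >= amount; // check
    if enough {
        *balance.lock().unwrap() -= amount; // act (TOCTOU window between the two locks)
        true
    } else {
        false
    }
}
```

The fix is to hold one guard across both the check and the act, which is a logic decision the borrow checker can't make for you.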
Yeah, but it doesn't help on shared-memory process concurrency, and we all know that in 2022 the best way to ensure secure software is to go back to processes.
I don't understand. What about this thread would make you fear running Rust? Fear of learning Rust's memory model and "fighting with the borrow checker", as they say, sure. But Rust never claimed that you wouldn't have to climb a steep learning curve; it claims that if you make it up, the result can't hurt you via a memory fault. So "fallen from the light" seems a little overdramatic.
"Fallen from light" would mean breaking its safety promises. No promises have been broken; the async developer experience is just less than a silver bullet and needs a lot of work. If you're so disappointed by that fact that you'd say Rust has "fallen from light", then you've not been paying attention at all, or your expectations need serious calibration.
Yes, Arc for structured data passed between threads, atomics for smaller things like counters or job-queue length. Of course, the fastest synchronization primitive is nothing at all :)
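That combination looks roughly like this sketch (the workload is invented): an `Arc` shares the read-only structured data, and a plain atomic carries the small counter, so no locks are needed at all.

```rust
use std::sync::atomic::{AtomicUsize, Ordering};
use std::sync::Arc;
use std::thread;

// Arc shares the immutable Vec between workers; the only mutable shared
// state is a counter, so an atomic suffices and no Mutex is needed.
fn count_evens(data: Arc<Vec<u64>>, threads: usize) -> usize {
    let evens = Arc::new(AtomicUsize::new(0));
    let mut handles = Vec::new();
    for t in 0..threads {
        let data = Arc::clone(&data);
        let evens = Arc::clone(&evens);
        handles.push(thread::spawn(move || {
            // Each worker scans a strided subset of the shared Vec.
            for v in data.iter().skip(t).step_by(threads) {
                if v % 2 == 0 {
                    evens.fetch_add(1, Ordering::Relaxed);
                }
            }
        }));
    }
    for h in handles {
        h.join().unwrap();
    }
    evens.load(Ordering::Relaxed)
}
```

`Relaxed` is enough here because the `join`s already order the final read after all the increments; a counter read mid-flight would want more care.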
> Async support is incredibly half-baked. It was released as an MVP, but that MVP has not improved notably in almost three years.
As the primary mover of the MVP (who stopped working on Rust shortly after it was launched), I'm really sad to see this. I certainly didn't imagine that 3 years later, none of the next steps past the MVP would have even made it into nightly. I don't want to speculate as to why this is.
I also recommend avoiding async if you don't need. Unfortunately for people who don't need it but do want to do a bit of networking, a huge part of the energy behind Rust is in cloud data plane applications, which do need it.
> As the primary mover of the MVP (who stopped working on Rust shortly after it was launched)
At the risk of rehashing something that has already been discussed to death, did you stop working on Rust because of the difficulty of launching that MVP? I imagine that all of the arguments involved, particularly about things that are prone to bike-shedding like the await syntax, could be exhausting.
Any idea where we should send money to get things moving again? Lack of money is always the biggest problem in open source, right?
FWIW, I only started seriously using Rust in the past year. I quietly watched and waited while the async/await MVP was being developed. I didn't participate in any discussions. On the one hand, that means I didn't exacerbate any exhausting arguments. On the other hand, I wasn't actively supportive either.
That's not why I stopped working on Rust. Given that Amazon, Google, Microsoft and others all employ people to work on Rust, lack of money is certainly not the problem. There has never been more money in Rust development.
> This isn't because no one cares, but because Rust's async implementation (which is very cool due to its low overhead) and its interactions with the other language features require complicated extensions to the type system. It does seem to me like there might be a lack of resources/coordination/vision ever since the Mozilla layoffs, but that's a different topic.
I wonder how much technical debt is slowing things down. It seems like we keep hitting a breaking point where we'll finally be forced onto polonius and chalk but people keep finding ways to extend the existing implementation to make things work, kicking the can down the road.
Wow, this really speaks to me. I've sunk hundreds of hours in the last few weeks into a tokio-based thing using a bunch of async and ... what a nightmare. It truly is half-baked.
And yet it seems like so many things have tied themselves to this async/tokio mast. It's not a good look for Rust.
>There are almost no active / popular libraries related to network IO that haven't switched over.
To extend this with a slight caveat: for many of those network IO libraries, there's still often a sync alternative. E.g., ureq in place of reqwest works for many use-cases and doesn't bring in an entire tokio runtime for a blocking request. You can find sync DB libraries.
Some other crates have started to catch on and offer them as a feature-enabled adapter (e.g., Sentry does this and can use ureq in the background).
I feel like a real problem, though, is that there's a division of eyeballs across these boundaries. I would chip in to funding work on a "reqwest-non-tokio-adapter" or something that utilizes all the same reqwest types, but avoids Tokio.
(And I like Tokio! I'm basing a new project on it as we speak. I just cringe every time I need to use reqwest::blocking because of something the sync alternatives haven't gotten around to implementing.)
> To be clear: using async is fine if you know what you are doing, and it can provide incredible performance. But if you do, keep it simple: avoid lifetimes and most importantly: don't attempt advanced trait shenanigans - if you do need traits, just return BoxFutures without lifetimes, throw in lots of Arc<Mutex<_>>, clone() and call it a day.
It seems like everyone doing async Rust goes through a long journey before arriving at this conclusion. Once you know the pitfalls you can navigate around them, but I hit a lot of dead ends along the way.
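For what it's worth, that "boxed futures plus Arc<Mutex<_>>" style needs nothing beyond std to demonstrate. A minimal sketch, with the `Counter` trait invented purely for illustration and a busy-poll `block_on` standing in for a real runtime (`BoxFuture` in the futures crate is essentially an alias for the `Pin<Box<dyn Future + Send>>` type spelled out here):

```rust
use std::future::Future;
use std::pin::{pin, Pin};
use std::sync::{Arc, Mutex};
use std::task::{Context, Poll, RawWaker, RawWakerVTable, Waker};

// The "keep it simple" pattern: an async trait method written as a plain
// method returning a boxed future that owns everything it touches
// (no borrowed lifetimes), with shared state behind Arc<Mutex<_>>.
trait Counter {
    fn increment(&self) -> Pin<Box<dyn Future<Output = u64> + Send>>;
}

struct SharedCounter {
    value: Arc<Mutex<u64>>,
}

impl Counter for SharedCounter {
    fn increment(&self) -> Pin<Box<dyn Future<Output = u64> + Send>> {
        // Clone the Arc so the future owns its state outright.
        let value = Arc::clone(&self.value);
        Box::pin(async move {
            let mut v = value.lock().unwrap();
            *v += 1;
            *v
        })
    }
}

// Minimal busy-poll executor, only so the sketch runs without pulling
// in tokio; a real program would use a proper runtime instead.
fn block_on<F: Future>(fut: F) -> F::Output {
    fn raw_clone(_: *const ()) -> RawWaker {
        RawWaker::new(std::ptr::null(), &VTABLE)
    }
    fn raw_noop(_: *const ()) {}
    static VTABLE: RawWakerVTable =
        RawWakerVTable::new(raw_clone, raw_noop, raw_noop, raw_noop);
    let waker = unsafe { Waker::from_raw(RawWaker::new(std::ptr::null(), &VTABLE)) };
    let mut cx = Context::from_waker(&waker);
    let mut fut = pin!(fut);
    loop {
        if let Poll::Ready(out) = fut.as_mut().poll(&mut cx) {
            return out;
        }
    }
}

fn main() {
    let counter = SharedCounter { value: Arc::new(Mutex::new(0)) };
    let first = block_on(counter.increment());
    let second = block_on(counter.increment());
    println!("{first} {second}"); // prints "1 2"
}
```

The cost is one allocation and one atomic refcount per call, which is exactly the trade the parent quote recommends making until profiling says otherwise.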
Anecdotally, this is the pain felt most across the entire Rust language. There are more ways not to do something than to do something. This makes it hard for beginners to pick up, and difficult for projects to scale.
I've heard people complain about languages having too many ways to do one thing. Never have I heard the opposite complaint hahaha. You can't please everyone.
I feel you, I really do, but it has its place. Quite often you really do want to execute multiple different things in the background and wait for all of them to return before proceeding.
In pseudocode:
1. var logResults = Background writeLogServiceStarted() // Sent to different machine
2. var authoResults = Background performAuthorisation() // Performed by a 3rd party
3. var userSettings = Background getUserSettings(request.currentUser) // Stored in DB
4. var results = Background executeQuery(request.query, authoResults) // Different DB
5. var response = Background generateResponse(results, userSettings)
6. wait (logResults, response)
7. transmitResponse (response)
The current async/await solution doesn't really make this as clear as the above though: The code is littered with some form of unwrapping/wrapping at every step hiding the actual intention, the call stack is marked as async making it hard to figure out where and when a sync function can make a call, etc.
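For comparison, the dependency graph in the pseudocode maps fairly directly onto plain threads in Rust; a std-only sketch, with all service functions invented as trivial placeholders:

```rust
use std::thread;

// Invented placeholder services standing in for the pseudocode's steps.
fn write_log_service_started() -> &'static str { "logged" }
fn perform_authorisation() -> bool { true }
fn get_user_settings(user: &str) -> String { format!("settings:{user}") }
fn execute_query(query: &str, authorised: bool) -> String {
    assert!(authorised);
    format!("results:{query}")
}
fn generate_response(results: String, settings: String) -> String {
    format!("{results}+{settings}")
}

fn main() {
    // Steps 1-3 run in the background concurrently.
    let log = thread::spawn(write_log_service_started);
    let autho = thread::spawn(perform_authorisation);
    let settings = thread::spawn(|| get_user_settings("alice"));

    // Step 4 needs the authorisation result; step 5 needs results + settings.
    let authorised = autho.join().unwrap();
    let results = execute_query("q", authorised);
    let response = generate_response(results, settings.join().unwrap());

    // Step 6: wait on the remaining background work before responding.
    log.join().unwrap();

    // Step 7: transmit.
    println!("{response}"); // prints "results:q+settings:alice"
}
```

With async/await the same shape would typically use `tokio::spawn` plus `join!`, but as the parent notes, the wrapping and unwrapping at each step tends to obscure this dependency structure rather than highlight it.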
The JVM guys are working on that with Loom, and, because of its multi-language nature, that can be brought across to many other languages too. Including, oddly, Rust, because Rust compiles with LLVM and GraalVM has a Truffle interpreter for Rust. I doubt anyone would actually want to run an app that way today especially as it's kind of cutting edge stuff and the Rust ecosystem is forcing async anyway, but in principle you could run a non-async Rust server on the JVM with millions of lightweight threads. It'd preserve the safety properties of the language and even the memory layouts, because Truffle doesn't force GC or Java-style memory layouts on the languages it runs. You can even AOT compile stuff but that requires the enterprise edition.
In your experience what languages would you say handle async well? Genuinely curious. I’ve only ever done JS professionally for a decade but started branching out into python, rust, and kotlin due to personal projects.
For example, in your typical web application a Goroutine is spawned for every incoming connection. This basically gives you a dedicated runtime for each simultaneous connection. In Node or Python, this would be the equivalent of starting a whole new process for every request. But in Go, there's very little cost to doing it this way.
The advantage is that each connection basically never blocks waiting for another user.
In Node and Python, we talk about handling tens to hundreds of requests per second per server.
In Go, we talk about handling thousands to tens of thousands per second. It's just orders of magnitude more throughput.
Can't comment on Python, but I once worked on a Node.js codebase that could only handle tens of requests per second, it was highly unusual and doing some incredibly dumb things like pinging out to Redis to check for DDOS protection counters before starting to serve any request.
Node is more susceptible to poor code and out of band IO slowing it down, but the V8 runtime itself is cpp and simple services are close to just writing decorators over a cpp webserver.
It's obscenely fast and good enough for most purposes.
Scala. With the functional effect systems, like Cats Effect or ZIO, you get superpowers.
Not only can you write programs that are "async", but you also get easy retries (and other tricks), safe refactorability (because of its Pure FP nature), reliable and painless resource management and some other goodies like STM (Software Transactional Memory).
Haskell has always been the best. Go did the same as Haskell. In either case IO is automatically async. If you want concurrency you create very lightweight threads. This approach works very well for most use cases, but when you want the best performance possible the lightweight threads can still be too much. Zig is going for a lowest-possible-overhead approach like Rust but has an interesting take: https://kristoff.it/blog/zig-colorblind-async-await/
That's definitely a description of GHC Haskell. All network IO in Haskell goes through a subsystem called the IO manager, which makes use of platform-appropriate high-performance non-blocking APIs. (Actually, I think Windows isn't getting an IOCP implementation until the next major release, but Windows has always been a bit undersupported by GHC.)
The nice thing about this is... You don't have to care. Haskell is a good enough programming language to just put the platform-specific non-blocking hell APIs in a library and let you write code that looks linear and blocking. If you want more control you can get down to the low level non-blocking APIs, but that's usually not going to be worth the trouble.
Yeah, many people don't even see a reference about Haskell IO being async. That's what the "automatic" part is doing.
Haskell IO is basically as async as Javascript, as in every operation is fully asynchronous, you need to call some foreign function if you want otherwise. Except that you have parallelism and can have concurrency too added if you want.
Interestingly, I've become a big fan of node's single-threaded "one big loop" model, which means multitasking is cooperative instead of preemptive. This strikes me as more honest, somehow. It doesn't distract you with abstractions (like threads) that don't make sense in this context. Most production workloads these days will be a docker process assigned to (at best) a single sticky core/thread on a blade somewhere - so in terms of resources, node is quite honest about what you actually have to work with. (This as opposed to, say, a Java process, which wants to believe it has control over an entire physical server CPU, and when you run it in a docker process, you're just exercising virtualization overhead if you use Thread, etc.)
That said, if you have a physical server, Java is quite good, especially with the upcoming Project Loom improvements. The langspec and vmspec have gotten fat the last 20 years, but at least those documents exist. Plus there is OpenJDK which is a great enabler and calmer of nerves; there's a reason so many alt langs target the JVM, and they are good. Groovy, Clojure, Kotlin, Scala are all first-rate languages, IMHO. And with projects like Quarkus you can more easily build native executables that bundle the JRE and make distribution very Rust-like.
Another environment I like specifically for async operation, but mostly by reputation, is Erlang and its BEAM VM. Erlang itself is such an interesting language, being dynamic, functional, immutable, without traditional control structures (!) but relying heavily on recursion and pattern matching, and of course the "actor model" and extremely lightweight "processes" were invented here (and promptly ported elsewhere, as with Akka). It was also created by one of the nicest human beings I've ever experienced, Joe Armstrong, may he rest in peace.
I'm not sure what you mean here, but if you refer to OracleJDK here then there is basically only OpenJDK for quite some time now -- OracleJDK is just an (optionally) paid support version of the same codebase. Also, most other vendors are pretty much just tiny patched OpenJDKs also, with some niche exceptions.
You must be young. Java was not always distributed like this, and although it was "open source" few people compiled it, and the JRE and JDK was distributed by Sun (then Oracle) primarily through a user interactive web UI. The OpenJDK existed alongside this for some time, but then supplanted the proprietary binaries. That was a relief because it meant Java was actually (not just theoretically) open source now, which meant it was safe from deprecation, disablement, and all the other negative aspects of control that come with de facto proprietary software.
I cannot, for the life of me, recommend Elixir enough! You write your code without ever thinking of words like "async" or "await" and the VM handles it for you!
So this. You have to learn to think differently about your problem. But then when you do, so many of these other issues just go away. A small amount of our product offering is implemented in Elixir. I wish more of it was. It’s my favorite part of the whole thing.
Any language which is dataflow based will be great in an async context. Async is hard in most languages because they are imperative, which stands in stark contrast to the whole idea of asynchrony. This whole exercise of adding async features to various imperative languages is coming at the problem from the wrong way around in my opinion. It's a recognition that async programming is hard in imperative languages, and the thought is that maybe this could be made better with more language features. But the sad truth is that async features clash hard with the imperative nature of most mainstream languages.
Server workloads are different from client workloads here, and network/IO-heavy workloads are different from CPU-heavy workloads. You'll tend to get misleading advice from people only familiar with one of them. Especially if the client is an asymmetric architecture like ARM has and Intel is moving to.
Swift's design is made to be good for IO workloads on smaller clients, though it hasn't got as many tools for the other end.
It has been a sec but if I were to do another multi-threaded async Rust project I would do one thread per async runtime and explicitly pass anything that needed to be shared.
This should be more ergonomic, as it should get rid of everything needing to have Send/Sync traits. I also suspect it may be more performant, as I am not sure how good the async runtimes are about keeping scopes pinned to a particular core so it's not constantly jumping around and busting the L1 caches (which would be extremely detrimental to compute latency and bandwidth)... Happy to be schooled on any of this.
But what about when you have some threads slacking off, and others too busy? It would be nice in this case to use those idle threads, even if it means a little bit of CPU cache thrashing. And I believe this is what Tokio offers with a work-stealing thread pool.
True, but I suspect that without a truly global prescient scheduler it is almost never worth it to core switch unless you generally have really long tasks.
For an efficient core context switch, the scheduler must accurately predict that the source (current) core won't be free for the duration of the full core context switch, and that the sink core will be free by the time the meta context gets there and will have been free by the time the rest gets there. Otherwise, the scheduler ends up thrashing the CPU (it is actually a bit worse, as a future task might need the same context, so you have to be aware of the future too). So, for the scheduler to know this it would need to be:
- Global: The only scheduler on the system or basically rafting with all the other schedulers on the system
- Prescient: The scheduler(s) would need to be able to predict all tasks, their context, and work time per task perfectly, which could really only happen when everything is static and hence deterministic.
For example, I think most tasks people are throwing at async are web requests. Most actually take the core an order of magnitude less time to compute than the time it takes to pass the context from one core to another, and they are all unpredictable to the scheduler. In this scenario I could see the scheduler taking up the majority of computational time on the system. So turn on multi-threading + async on a quad core and you will get worse bandwidth and latency (always) for all your pains.
EDIT: Although this single data point would tell me I am wrong (see description):
A naive question: do I have to use async to build a web service or RPC service? If I code in Java, I rarely need to worry about async since a service framework will take care of concurrency. I may throw in thread local and a scheduler here and there to handle some shared context or background tasks, or using some bounded queue to manage some boutique concurrency. Will using Rust be similar? Or I’d have to know all the async as people mentioned in this thread?
No, you don't. Threaded HTTP servers and standard, blocking network clients are available, just not that en-vogue. The very highest performance servers and clients will be async, because threading doesn't get you there, but then again, you (as in: application developer) almost never need that kind of performance in your HTTP stack anyway (this does not make using Rust for such a project pointless in any way).
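To make that concrete, here's a minimal thread-per-connection sketch in plain std Rust with no async anywhere (the single-connection structure and the canned reply are just to keep the example self-contained; a real server would loop over `listener.incoming()`):

```rust
use std::io::{Read, Write};
use std::net::{TcpListener, TcpStream};
use std::thread;

// Accept one connection and hand it to a dedicated thread: the classic
// blocking one-thread-per-connection shape, as in traditional Java servers.
fn serve_one(listener: TcpListener) {
    let (mut stream, _addr) = listener.accept().unwrap();
    let worker = thread::spawn(move || {
        // Pretend this is a request handler doing blocking I/O.
        stream.write_all(b"hello").unwrap();
        // `stream` drops here, closing the connection.
    });
    worker.join().unwrap();
}

fn main() -> std::io::Result<()> {
    let listener = TcpListener::bind("127.0.0.1:0")?; // ephemeral port
    let addr = listener.local_addr()?;
    let server = thread::spawn(move || serve_one(listener));

    // Act as the client in the same process, just to exercise the server.
    let mut client = TcpStream::connect(addr)?;
    let mut reply = String::new();
    client.read_to_string(&mut reply)?;
    server.join().unwrap();
    println!("{reply}"); // prints "hello"
    Ok(())
}
```

This scales fine into the thousands of connections on modern OSes; async only becomes necessary well beyond that.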
The last paragraph is the key. Just use `Arc` or `Arc<Mutex>`, it will be fast enough. That's why my first article on a new blog is precisely about that: https://itsallaboutthebit.com/arc-mutex/
I might be (probably am) missing something, but for the use case described, it seemed odd to me to have borrowing come into play at all. I would think that rather than a borrowed reference with a lifetime, the dispatcher could just consume the object coming in. The term “moving” is a bit misleading in that it carries with it an implied heavyweight operation that likely doesn't come into play at all.
The last thing I would take away from async/await in Rust is that it's "half baked." It's incredibly deeply thought out with years of RFCs, great contribution work that required both low level implementations in nightly and creating extensions to the memory model, and extensive bike shedding and discussion with the community on surface APIs.
Like a lot of things in Rust, it's incredibly well thought out but its implementation just isn't complete to the point that it is usable by other programmers. A lot of Rust libraries and components of the Rust standard library are "half baked" in this way.
async/.await in Rust is a perfect example of "code duplication" in the language core. Okay, you have a nice syntax for performing do-notation on futures, but what about iterators, streams, etc? We have generators in nightly and some third-party macros for streams, which sucks.
A proper algebraic effect system could resolve the problem. You can take a look at Koka to see how elegantly it abstracts common control flow patterns using the concept of an algebraic effect.
Algebraic effects are still a research area - no mainstream language has them fully working - but basic Haskell-style higher-kinded types should not be so much of a stretch, and are a necessity if you want reusable libraries for this kind of stuff.
> A proper algebraic effect system could resolve the problem.
It could also take as long as the async MVP itself to get off the ground, and you can't know if it wouldn't hit its own snags when deployed at scale. (From long ago, I remember the "non-parametric dropck" RFC as an illustration of a neat concept colliding with reality.)
Languages with mainstream aspirations evolve under greater pressure. I don't know this, but I strongly suspect that async was necessary for Rust to gain acceptance at, say, Amazon. So it couldn't practically have been designed with a wide-open timeframe. The result is what we have; one can uncharitably call it "half-baked" and tut-tut about the sharp edges, but it's workable if you know what to avoid. (But it's so tempting to imagine what could have been, eh?)
Rust is like C++ in the following way: it has many features and a complex type system, but that doesn't mean you should use all its features all the time!
Dancing around with lifetimes can be premature optimization. Yes you can write very efficient code that way but if you find yourself spending tons of time fighting the borrow checker you might be overdoing it.
I tend to use Arc<> a lot in async code. It makes things relatively straightforward and easy to reason about. Mixing lifetimes with async is probably the most confusing thing you can possibly do.
> Rust is like C++ in the following way: it has many features and a complex type system, but that doesn't mean you should use all its features all the time!
... Until third-party libraries push you to use specific features.
I get the async frustration, but keep in mind that there are really only three possible ways of handling this:
(1) Async in the core language and do async I/O.
(2) A fat runtime like Go that implements lightweight concurrency.
(3) Everyone hand-rolls their own implementations of select, kqueue, epoll, etc. loops as well as optimizations like io_uring for every single application.
Rust chose (1) because (2) is off the table as it's a systems language and (3) is more painful and bug-prone than (1).
As for how to implement async: I am having trouble thinking of a significantly better model than Rust and the fact that the Rust community hasn't dramatically improved async shows that a bunch of people who are way smarter than me about languages and type systems are also having a tough time here. Doing async natively in a systems language with virtually no inherent overhead is some seriously difficult stuff.
I personally think the biggest oops with Rust async is the absence of a runtime in the standard library, forcing everyone to pick a third-party library and causing fragmentation around which one to use. Tokio seems like the clear winner but having played with both Tokio and Smol I really think the latter is better designed. Tokio does not implement structured concurrency and makes dealing with lifetimes harder. If we used Smol there would be less of a temptation to just throw Arc<> everywhere and fewer memory leak bugs around forgotten-about tasks.
(For those curious about the latter: Smol JoinHandles abort when dropped while Tokio just forgets about tasks and leaves them running when handles are dropped.)
> Rust is like C++ in the following way: it has many features and a complex type system, but that doesn't mean you should use all its features all the time!
I have come to the same opinion about Scala. Stick to the basics (i.e. the Scala Book[0], and it doesn't even have to be all of it) and it is a joy to use.
Not related to the Rust library situation, but it made me wonder: since Rust does not even presume a stdlib, would a standard library with green-thread-aware implementations, as in Go, be possible in Rust?
(i definitely could have worded that better up above, to your point!)
Just got started with Rust and yeah, my biggest pain points so far were lifetimes and async code. I finally replaced tokio with multithreaded blocking code, which seems to be much simpler and more familiar. The problem with async was that some of the libraries I needed (e.g. for QUIC) didn't support it, so you had an unholy mixture of blocking and non-blocking code.
Coming from a C++ background I appreciate Rust's novel approach of borrow checking, though I'm not sure if it's more "elegant" than C++'s approach with unique pointers, move constructors etc.. What I love is the modern ecosystem and build tooling around the language (including documentation and package management), for me that's Rust's main advantage over C++.
I've successfully replaced tokio with the lower-level mio in one such case. After all, threads and async serve different purposes: threads when you're computing a lot, async when you're waiting a lot, no?
I would say that threads work _very well_ for concurrency in _most_ apps. Thread context switches have gotten much much cheaper over the years, and the idea that threads are heavyweight is very outdated.
True, java apps have comfortably run 10s of thousands of threads without issue historically.
Coroutines are nice as they allow trivially writing an app which scales to millions of concurrent operations. Rust is nice as it lets us write code for every occasion, async is awful as it poisons the entire ecosystem. The same result could have been achieved with a few tactically inserted yield statements for I/O and locking which are pretty much the only times that async or coroutines help.
I'm just an average web developer, so I could be wrong here, but that's not my understanding of it.
Your application has n threads to begin with, if you use async, these will be utilized to schedule tasks.
You're basically trusting your language to properly pause and restart the procedures.
If you spawn new threads, these will be managed by your kernel. If that kernel is Linux, that it already has a pretty good algorithm to pause and restart them, so you're not really wasting a lot of resources.
So I believe there are effectively two differences: threads can scale up to the limit of your os/kernel and hardware. Async can scale up to the level of the threads your language spawns to handle these tasks.
Threads trust the kernel/OS to prioritize workloads, async trusts the language
> If you spawn new threads, these will be managed by your kernel. If that kernel is Linux, that it already has a pretty good algorithm to pause and restart them, so you're not really wasting a lot of resources.
Thread context switches are pretty expensive. If you use an async runtime such as tokio, all your tasks will be spawned on a fixed number of threads and you won't have any context switches on a task switch.
Opinions are pretty divided on this one these days. There was a time when this was very much true but there is some evidence that it is no longer, practically speaking, an issue except in extreme cases. You do pay a cost but it's not immediately obvious that the cost is material. I reach for a threadpool first and async when it becomes clear that I need the extra performance. 99% of the time the threadpool is more than good enough.
The important difference is in purpose, not implementation. Most async runtimes use a thread or more per core, and the OS is doing that thread scheduling, so it's not the either-or situation that you describe.
The interesting thing about async (especially async-await) is that it is a fluent way to write a program that waits for events (e.g. I/O), without resorting to inefficient designs like thread-per-client in order to achieve concurrency.
Software (even in C) has made use of this concept for a long time through poll() and friends; async-await is just a higher level abstraction over the same concept.
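A tiny std-only Rust sketch of that readiness-loop idea: a non-blocking socket plus "try, and retry later on WouldBlock" instead of parking a thread per connection. (A real event loop would block in poll()/epoll between attempts rather than returning; the function name is invented.)

```rust
use std::io::ErrorKind;
use std::net::TcpListener;

// Drain every connection the OS has already queued, then report
// "not ready" instead of blocking: the core of any readiness loop.
fn drain_pending(listener: &TcpListener) -> std::io::Result<u32> {
    let mut accepted = 0;
    loop {
        match listener.accept() {
            Ok((_stream, addr)) => {
                println!("readable: accepted {addr}");
                accepted += 1;
            }
            // Not ready: a real event loop would go back to poll()/epoll
            // here and resume when the OS signals the socket is readable.
            Err(e) if e.kind() == ErrorKind::WouldBlock => return Ok(accepted),
            Err(e) => return Err(e),
        }
    }
}

fn main() -> std::io::Result<()> {
    let listener = TcpListener::bind("127.0.0.1:0")?;
    listener.set_nonblocking(true)?;
    // No client has connected yet, so the socket is not readable.
    assert_eq!(drain_pending(&listener)?, 0);
    Ok(())
}
```

async/await in Rust wraps exactly this pattern: a `Poll::Pending` return is the WouldBlock branch, and the waker is how the loop learns it should try again.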
> The important difference is in purpose, not implementation.
I couldn't disagree stronger on that one.
Abstraction is generally good, but this becomes a purely philosophical discussion as soon as you ignore the actual implementation when talking about the differences in how something actually behaves and should be used. Philosophical discussions like that have their place, but are imo pointless in the context of what should be used when; they're generally better placed in the context of deciding whether you want to implement an abstraction.
I do fundamentally agree with the rest of your comment however.
That's what I meant by trusting your language to prioritize workloads vs your OS.
I'm not saying that implementation doesn't matter. I'm saying that there isn't a fundamental implementation difference between "async" and "threads". Threads are used in both cases. It's a different choice of abstractions over the same underlying concepts.
FYI, Rust (the language) has no concept of prioritising workloads; there is no async runtime in the language.
Even with any of the async runtimes, workloads (i.e. tasks doing work) are prioritised by the OS. It's just threads executing code. Async runtimes are primarily concerned with the gaps between work, like waiting for I/O or other events.
I always wonder about code bases using bolt-on reference counting. I would count C++ shared_ptr and Rust's Arc among them. At this point wouldn't it be better to just use a regular GC language? What I don't like in modern C++ is that almost everything seems to be heap allocated, when it certainly doesn't need to be. In case of GC language you at least get fast allocations and heap compaction. But it is true that resource release is more predictable with RC. On the other hand you get those cascades of releases in RC.
As always it is a game of trade offs, but it is something that bothers me.
> At this point wouldn't it be better to just use a regular GC language?
No, because you wouldn't slap `Arc` on every single variable. The example from the blog post is a dispatcher and `Arc` is then needed for dispatched functions. The same might be true for queues and other stuff where you need to move stuff between threads, but other than that it would be mostly `Arc` free.
As an example: when you write a web service in one of the frameworks like Actix, you typically use `Arc` for stuff that you need to share between all of the handlers like DB connections, queues, counters etc. All of the other stuff would be relatively straightforward Rust without much consideration for lifetimes.
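A std-only sketch of that shape, with `AppState` and thread-based "handlers" as invented stand-ins for framework-managed state like Actix's `web::Data`:

```rust
use std::sync::{Arc, Mutex};
use std::thread;

// Hypothetical shared application state: the DB pools, queues and
// counters you'd share across all handlers; here just a counter.
struct AppState {
    requests: Mutex<u64>,
}

fn run_handlers(state: Arc<AppState>, n: u64) {
    let handles: Vec<_> = (0..n)
        .map(|_| {
            // Each "handler" gets its own cheap Arc clone; the state
            // itself is shared, not copied.
            let state = Arc::clone(&state);
            thread::spawn(move || {
                *state.requests.lock().unwrap() += 1;
            })
        })
        .collect();
    for h in handles {
        h.join().unwrap();
    }
}

fn main() {
    let state = Arc::new(AppState { requests: Mutex::new(0) });
    run_handlers(Arc::clone(&state), 4);
    println!("{}", *state.requests.lock().unwrap()); // prints 4
}
```

Only this shared top-level state needs `Arc`; the per-request code inside each handler stays ordinary owned Rust with no lifetime gymnastics.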
I've been doing async Rust since the tokio 0.x combinator days, and I remember being like OP. For some reason `Arc` did not exist in my mind and it was a struggle appeasing the borrow checker.
The Go version is similar to the final Rust version; except the Go version is forcing you to use Arc everywhere[1]. Seriously, just use Arc (or Arc<Mutex<>>). In 70% of cases you are wrestling with the borrow checker trying to do something dangerous; in the other 30% the borrow checker is wrong and isn't smart enough to understand what you are trying to do. In both cases, 95% of the time, you aren't creating enough objects per second to justify getting rid of that atomic add.
I'm not even sure this is limited to async code either. I've seen plenty of code by C++ "zero cost" gods which abuse lifetimes in order to avoid a single clone or Arc. At least the compiler made sure you won't segfault, but I'm not sure the complexity is worth it over just an Arc.
"If you can't handle me at my BoxFuture, Arc<Mutex<_>> and #[async_trait], you don't deserve me at my zero cost abstractions and fearless concurrency."
It also blurs the advantage of Rust as a lifetime validation tool if you use reference counting anyways. But it seems that it's the only viable approach for async at the moment.
let id2 = id.clone();
let watch_id2 = watch_id.clone();
let id3 = id.clone();
let watch_id3 = watch_id.clone();
let ready2 = Arc::downgrade(&ready);
let video_stream = async move {
    // use id2, watch_id2, id3, watch_id3, ready2
};
My realization is that in most cases an Arc is the right tool for the job, especially in concurrent applications. I see it as the borrow checker doing its job: if you have two threads that need to share data and one of them may go away at any moment, the only way you can ensure the memory stays around is with "runtime borrow checking", e.g. a garbage collector.
In C++, the way this is solved is usually "just trust me, I won't use this memory here"
The overall effect of Arcs being more usable is something that was true before Rust. It is an emergent property of Rust too, because it's still true. The alternative to using Arc or similar used to be validating your own lifetimes manually and inevitably making show-stopping mistakes in C/C++. Now the alternative is machine-checking it and being able to never make mistakes. But Arc is still more ergonomic than either.
If Rust forces you to use Arc where in C/C++ you would normally just be freewheeling your lifetimes, by complaining about your lifetimes, a lot of people are going to see that as "Rust is really annoying and gets in my way, it's making me choose between a tangle of lifetime bounds and explicit Arcs". The truth is that it is teaching you that your normal style would have led to mistakes and you should never have been coding that way. So people go on complaining about being re-taught until their normal style works in Rust, or until they give up. (I don't think that's what you're doing; this is mainly just to say Arcs are not a waste of the language's power, rather an effect of the language's power to know when your freewheeling would be mere lifetime guesstimation and quite possibly buggy in C++.)
The author's point is strong though.
> the process of designing APIs is affected by numerous arbitrary language limitations like those we have seen so far.
There's a long way to go in being able to perfectly express all the ways you might want to use lifetimes. Sometimes you come across a situation when a few of these limitations converge on one spot, and you get the author's problem. But equally, the author is trying to write something that has probably never had its lifetime rules described before. Yes, languages with GCs can do it easily. That was also true of lock-free queues & running destructors. GCs have it easy in a ton of different areas around lifetimes. The core complaint here is that Rust does not have a GC and we have had it really easy being able to rely on one & not having to write down exactly the lifetime bounds that could make it work statically. The author is complaining that there are hundreds of compiler issues preventing some of this being expressed... well, yeah, nobody has ever, EVER, tried to write this stuff down before in a machine-readable way, we are slowly covering the untrodden ground, and you might be in an especially untrodden area. I think that's a pretty good excuse for the compiler having issues and some things being not quite expressible, but it's also an important caveat to "fearless concurrency" and I think the post illustrated that well.
> except the Go version is forcing you to use Arc everywhere
The overhead of atomic reference counts everywhere is higher than the overhead of a good garbage collector. This thinking is biased against Go for no reason.
I don't think it's biased against Go. An `Arc` (or garbage collector) is the correct way to model lifetimes for complex programs whose lifetimes are managed at runtime rather than compile time. In the context of both languages, rarely is the overhead of an atomic add or GC going to be the limiting factor of your program, just as Java programs are efficient. While static memory management is nice, it's not the sole reason I continue to use Rust. I've been using Go since 1.0, and there are two times where I have ever thought "My program is slow because of GC", and in one of those times, it was solved by simply upgrading Go.
A language is starting to approach maturity when articles complaining about it or explaining how to work around its fundamental flaws come to exceed articles bragging about having got something, anything, working in it.
Some people are just so ahead of the average. I thought I was pretty good knowing the basics of Rails/Databases at that age, but I was nowhere near the level of writing coherent posts on languages.
I enjoy Rust, but it's not that simple. It really is more work to accomplish certain tasks in Rust than many other languages even when what you're doing is safe.
The narrative that "Rust isn't hard" is getting tiresome, and I say this as someone who writes a lot of Rust. Let's be honest that Rust can be harder than many other programming languages in many ways, but those of us who use it believe the upsides and tradeoffs are worth the minor to moderate increase in difficulty.
Pretending Rust is easy just sets beginners up for disappointment when they get into Rust and realize it wasn't what they were sold. Or worse, they start doubting themselves when they encounter the hard parts because Rust fans were busy insisting it's all easy.
Writing widely useful, performant, reusable, correct and stable libraries is very hard. Rust is the easiest language to do that in.
If someone says programming in another language is easier, it's because they are not attempting (or are failing) to do one or more of those things. Those things are not always important so that's fine, but a lot of programmers (myself included) have this dream of being able to solve a problem once rather than over and over again, and that's why Rust appeals to us, and why we argue against Rust being hard - because it's actually easier for our particular usecase, but clearly not everyone has that same usecase.
> Rust provides very substantially less support to library designers than C++ does. Anyone creating ambitious libraries finds Rust a big step down.
Before I used Rust my main language was C++ where I specialized in writing libraries, and this is a ridiculous statement.
It's only recently that the C++ standard library has gained enough functionality to do even some basic things in a portable way, so you're relying on other libraries to provide that, yet there is no standardized way to declare dependencies on those other libraries. There is no module system to ease structuring library code. There is no hygiene - a ton of stuff you include will just pollute the global namespace. There is no standard way to version your code. There is no standard way to update a library. Everyone uses incompatible string types - seriously, if you think Rust has too many string types, wait until you find out that every freaking C++ library represents strings differently, sometimes using the same types though! There is no standard place to publish libraries. Even basic language types like `int` differ massively from platform to platform, or even between different compilers on the same platform. Each compiler's preprocessor behaves slightly differently. The programmer must manually forward declare their functions, types, etc, and the rules are different for inline/templated code. All code is unsafe, and yet the rules for what constitutes UB are informal at best. (Whereas in Rust, the rules for UB are also not fully defined yet, but this is only relevant for the minority of your code which is not safe). All experienced C++ programmers think in terms of lifetimes, and yet cannot express this through the type system, so this must be documented informally. There is no standardized coding style or format.
I'm going to stop now just because I'm bored but this list could go on for a very long time...
What makes UB in C++ is spelled out in its International Standard; there is no specification for Rust, just an implementation. It has been many years since C++ preprocessors differed notably from one implementation to the next. Nothing in C++ is global except what you choose to make global.
In fact today C++ lifetimes are expressed in the type system, and this is an example of what C++ enables a library to provide.
If, working as a library designer, Rust was not a big step down, you were neglecting to provide users of your libraries much of the value you could have offered. Your remarks suggest that libraries you delivered were closer to C than C++.
> What makes UB in C++ is spelled out in its International Standard
That's not true: the standard only specifies that some things are UB; it is not exhaustive, nor is it unambiguous. Furthermore, no current C++ compiler is compliant even with those parts of the standard which everyone agrees on (e.g. see proposals to introduce a "byte" type to LLVM to resolve known miscompilations). There is work to improve this, such as defining an explicit memory model, but Rust is ahead of C++ here.
> In fact today C++ lifetimes are expressed in the type system, and this is an example of what C++ enables a library to provide.
Do you have an example of this, or are you talking about using smart pointers? Smart pointers are about ownership, not lifetimes.
> Your remarks suggest that libraries you delivered were closer to C than C++.
Most of my remarks were about consuming other libraries from my library, which is not something I have control over. Sure, I can use smart pointers and other modern C++ features in the API I expose... That doesn't change any of the points I mentioned.
> Pretending Rust is easy just sets beginners up for disappointment when they get into Rust and realize it wasn't what they were sold. Or worse, they start doubting themselves when they encounter the hard parts because Rust fans were busy insisting it's all easy.
You could be describing me. I recently convinced my boss to let me write a server in Rust for the safety, speed, etc. After being two weeks overdue, I threw it all out and wrote a working version in modern C++17 in an afternoon. Of course part of the issue was language familiarity, but I think what I was trying to do was also objectively harder in Rust in many respects, and the ecosystem of crates was less mature than the battle-tested set of libraries I was using in C++.
At the end of the day I want to ship code and move on to the next project. Rust wasn't helping me there.
Rust isn't hard for the things you try to solve in Rust.
The author compares Rust code with Go code. Those two languages serve entirely different purposes, with entirely different mechanisms behind them.
Go does what the author wants with ease because the runtime handles all the complicated parts for you. You tell Go what you want, and it will try to solve all the memory management/threading/memory safety issues for you, usually succeeding.
Rust doesn't do that. Rust expects you not just to tell it what you want, but also how you want it to happen.
I think comparisons between Rust and Go/C#/Java are what will really trip up a beginner. Rust has a lot of nice features found in higher level languages, but it's decidedly not a higher level language. Rust operates in the space of C and C++, where a small mistake can cause memory corruption no debugger will ever be able to unravel, but where a well placed byte of padding can accelerate a program by as much as 30 percent.
I think the difficulty in Rust lies in that it will enforce correctness. Competing languages are less strict about that, especially when it comes to threading. You tell them a piece of memory is safe to use across thread boundaries and they'll believe you, and most of the time you can rely on race conditions not screwing you over. A C program can be short, fast, and clear, as long as you leave out the error checking and resource management in case of failures; with Rust you often don't get that luxury. Writing correct code is a slow, tedious, painful experience, and in Rust you have to live with that pain (unless you throw `unsafe {}` around everywhere).
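For illustration, a minimal sketch of the kind of cross-thread code Rust will accept: the compiler refuses to share a plain mutable value across threads, so you reach for something `Send + Sync` such as `Arc<Mutex<_>>` (this is just the simplest such wrapper, not the only option):

```rust
use std::sync::{Arc, Mutex};
use std::thread;

fn main() {
    // Rust will not let a plain `&mut i32` cross thread boundaries;
    // the compiler demands Send + Sync types before this even compiles.
    let counter = Arc::new(Mutex::new(0));

    let handles: Vec<_> = (0..4)
        .map(|_| {
            let counter = Arc::clone(&counter);
            thread::spawn(move || {
                for _ in 0..1000 {
                    // Locking is enforced by the type system, not convention.
                    *counter.lock().unwrap() += 1;
                }
            })
        })
        .collect();

    for h in handles {
        h.join().unwrap();
    }

    // With the lock enforced, the count is deterministic.
    assert_eq!(*counter.lock().unwrap(), 4000);
}
```

The point is that the equivalent "just trust me" version with a bare shared integer is rejected at compile time, which is exactly the strictness described above.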
I believe that teaching programming should follow a bottom-up approach, but many others disagree. If you've dabbled in assembly, done some multi threaded C(++) and experienced the challenges in low-level code, Rust should be enjoyable enough to learn after cursing at the borrow checker a bunch of times. If you're a top-down learner, though, you'll run face-first into low-level problems and their complexities for seemingly no good reason.
In fairness, for the kind of software that C++ is particularly suited to, the idiomatic software architecture is thread-per-core, which has the distinction of being almost entirely single-threaded at the code level. Race conditions aren't a meaningful concern because data virtually never crosses thread boundaries. The bigger issue, particularly and mostly for C code, is object lifetime management.
If maximum performance, whence thread-per-core architectures originate, is not the objective, then GC languages start to become more attractive and C++ may not be the right tool for the job. And in those cases, Rust may not be either.
You're not wrong, but the crux of the problem described in the article is that Rust's object lifetimes are very hard to get right (even for the people working on the compiler) when working with cross-thread code.
I'm no professional Rust dev but I wouldn't have written the code like this; I know Rust isn't particularly suited for this style of callback mechanism and I know not to try and force this paradigm into Rust the same way.
For example, I think the author would have had a much easier time if, instead of passing async futures around, they'd used channels or some other message-passing mechanism in combination with a bunch of blocking threads to communicate events. Such a mechanism would also translate into Go quite easily (less so for other languages, though).
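As a rough sketch of that channel-based alternative (the `Event` type here is hypothetical, standing in for whatever the callbacks would have carried), `std::sync::mpsc` plus a blocking thread looks like:

```rust
use std::sync::mpsc;
use std::thread;

// Hypothetical event type replacing the callback parameters.
#[derive(Debug, PartialEq)]
enum Event {
    Connected(u32),
    Data(String),
    Done,
}

fn main() {
    let (tx, rx) = mpsc::channel();

    // A worker thread sends events instead of invoking callbacks.
    let producer = thread::spawn(move || {
        tx.send(Event::Connected(1)).unwrap();
        tx.send(Event::Data("hello".to_string())).unwrap();
        tx.send(Event::Done).unwrap();
        // `tx` is dropped here, which closes the channel.
    });

    // The consumer blocks on the channel until it closes.
    for event in rx {
        println!("got event: {:?}", event);
        if event == Event::Done {
            break;
        }
    }

    producer.join().unwrap();
}
```

No futures, no pinning, no lifetime annotations on closures: ownership of each event simply moves through the channel, which is the Go-like shape the comment is describing.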
This example was deliberately picked to show a complex problem with writing Rust. I don't think this represents a challenge you'd face very commonly if you were programming Rust all day, at all. It's not bad criticism, but it appears to imply a much wider problem than there really is, in my opinion.
I don't really know what kind of programs require such an elaborate callback system commonly enough where it even makes sense to use Rust. C#, Java, and Go are fast, easy to write, and each have libraries to do almost anything you want. That 10-30% speed boost you can achieve with well-written Rust is probably not really worth the effort, especially with upcoming AOT compilation features in C#.
Rust isn't a solution to all problems, and neither is any other programming language.
> I think the difficulty in Rust lies in that it will enforce correctness. Competing languages are less strict about that, especially when it comes to threading.
Enforcing correctness at compile time is not the only way to ensure correctness.
Some do enjoy solving language puzzles (so choose Rust) and some prefer thinking before coding and prefer solving design puzzles. I personally prefer that latter, as the 'hard' problems are intellectually interesting, solving them is satisfying, and over the years the design lessons build upon each other. At which point you don't need a Mommy Dearest Compiler to ensure correctness.
This is the "don't do anything wrong" model of software development, and while it works well for some, we have enough experience as an industry to know it doesn't scale.
Crucially, it's hard to prove whether or not you've actually solved whatever design issue you wanted to overcome. Such a proof usually would entail some sort of analysis of the program as written (because it may actually differ from your design). To perform this analysis, you may want to annotate the lifetimes of the various objects as they are declared, so that you can track (for example) that some memory is not accessed after it is freed, or any other number of issues.
This lifetime analysis as you would imagine can be very tedious and complicated, so you would perhaps want to automate the process. And that's essentially why Rust's borrow checker exists. It's almost inevitable that it should exist imo. Seems completely obvious after the fact.
> All type systems will have meaningful and true propositions which are apparent to the programmer but not yet to the language team... Some of what the author is complaining about matches my conversations with people who aspire to be Rust library authors — that you're often trying to hijack the type system because you <do> actually know better.
Rust catches some problems (like data races and use-after-free). Safe Rust translated into correct C++ is still correct. Rust also fails to catch some problems (like preventing out-of-bounds indexing at compile time); admittedly idiomatic C++ fails to catch bounds errors at runtime. And when encountering problems that Safe Rust cannot solve (like the generic lifetime quagmires in the original post), C++ often makes it possible (and easier than esoteric programming languages like Unsafe Rust) to solve the problem correctly in the current situation; though admittedly, ensuring you haven't missed any UB cases, and validating that your assumptions don't break later on, is difficult (Unsafe Rust is better at marking unsafe code for future readers).
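To make the bounds-checking point concrete: safe Rust can't reject an out-of-bounds index at compile time, but indexing is checked at runtime (a panic, not UB), and `get` turns the failure into a value instead. A minimal sketch:

```rust
fn main() {
    let v = vec![10, 20, 30];

    // Checked access: `get` returns an Option instead of risking UB.
    assert_eq!(v.get(1), Some(&20));
    assert_eq!(v.get(5), None);

    // Plain indexing is bounds-checked at runtime and panics on error,
    // rather than silently reading out-of-bounds memory as an unchecked
    // C++ operator[] may.
    let result = std::panic::catch_unwind(|| v[5]);
    assert!(result.is_err());
}
```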
> Enforcing correctness at compile time is not the only way to ensure correctness.
> some prefer thinking before coding and prefer solving design puzzles
Yeah, you just need to guarantee that the person working on it considered all edge cases, had uninterrupted time to think, thought about how the edge cases interact, didn't make a single mistake, wasn't sleepy or under the influence of substances, and perfectly translated it into the program without a single semantic error (an off-by-one, say).
Easy.
That's why I code in Malbolge Lisp CodeGen that outputs Brainfuck.
If you can solve all your problems by thinking before writing code then you will never see a compiler error, unless you see a compiler bug. If you're solving your problems beforehand by thinking about them but end up solving language puzzles, you clearly haven't thought enough.
If the benchmark is "projects that are known for idiomatic C", such as Redis or Sqlite, we know that even they introduce memory corruption errors that lead to vulnerabilities every now and then. You're not better than them.
>I think comparisons between Rust and Go/C#/Java are what will really trip up a beginner. Rust has a lot of nice features found in higher level languages, but it's decidedly not a higher level language. Rust operates in the space of C and C++, where a small mistake can cause memory corruption no debugger will ever be able to unravel, but where a well placed byte of padding can accelerate a program by as much as 30 percent.
I agree. But if you use Rust for web programming, it is fair to compare it with C# or Java.
On the other hand, C and C++ feel easier to read and write, and a bit more productive, than Rust.
So the question is vs C# and Java: is the performance worth the pain and loss of productivity?
And vs C and C++: is the guarantees made by Rust worth the pain and loss of productivity?
I'd argue that in some cases the answer can be yes, while in others it can be no. So there's no universally good or bad choice; it really depends on the project, team, budget, and much more.
I don't know why you'd possibly want to use Rust for web programming, to be honest. When you add a full stack of databases and entities, Rust barely becomes faster than ASP.NET or Spring Boot. I messed with it for fun, but I don't think I'd pick Rust as a web server language any time soon.
The only reason I can think of is the WASM space, which Rust lends itself very well to, to reuse the same entities and data structures in the front end. Then again, you'll end up writing a terribly bloated web UI and other languages have similar bindings.
I think for new projects where C++ makes sense, Rust probably makes more sense. There are some edge cases (if you expect to be operating on trees in memory, for example, or if you're interfacing with libraries written in other languages) but I think Rust is generally better for such system tools. That assumes that you have in house Rust devs, of course; if you're a C++ shop, you'll have to teach everyone a new language before the switch makes sense.
The C(++) crowd is difficult to teach other languages because they, more than any dev group I've encountered, seem to have a larger number of vocal people who think their code is perfect, that they will never produce bugs, and that all those compiler errors warning about failing edge cases are unnecessary because they know best.
I'm in the same boat; I spend a bit of my working day writing Rust. It's a great language, but it's very cumbersome and I find it suffers from poor ergonomics, like it's annoying to type.
I find it enjoyable to type for the most part. Explicit type casts aren't a treasure or anything, but then I think about why I am doing it and I can't really complain.
I feel you, but my experience differs in the end, I guess. To be fair, turbofishes were something I discovered by intuition, so maybe I'm a lost soul.
Programming should be hard only if the problem you are trying to solve is hard. Creating a boring CRUD app shouldn't be hard. Predicting market trends for stocks and options with a good degree of success should be hard.
All type systems will have meaningful and true propositions which are apparent to the programmer but not yet to the language team. Choosing a typed language is choosing to have a relationship with a living & evolving team/ecosystem, one which can improve over time in their ability to express and prove propositions or provide ergonomic abstractions.
Some of what the author is complaining about matches my conversations with people who aspire to be Rust library authors — that you're often trying to hijack the type system because you <do> actually know better.
I'm not so sure. There are a lot of error states that will never happen in production with other languages, because the language/libraries handle those conditions gracefully without you needing to care. Rust pushes this to the edge, _which is good if you want that control_, but it is a cost you have to pay.
This right here. Rust is a low-level language that makes a whole dimension of implicit knowledge explicit.
This is a very good thing, but if you are a programmer who is used to "copy, paste, and then it works", you will have a very, very, very bad time with it. Rust forces you to think about memory. In an age where dynamic typing is so prevalent, this seems like a fading art.
I started picking up Rust a bit over two years ago, and it was HARD. I didn't have teammates who knew it to help me out, so I used community resources (/r/rust, exercism.io, etc.) I'm still not great with the language, but I'd say I'm about as proficient in it as anything else. The progress is slow enough that you find many more opportunities to quit, but once you do become productive, I think the hard work really pays off.
I've written before about some of my woes in taking a service to prod with mostly Python code. It was easy to write and get running, but then it had all sorts of errors that were basically impossible to detect until runtime. Kind of a nightmare to maintain. Transitioning it to Rust was a lot of work, but I've literally only had one error that I had to fix, which was caused by my faulty assumption about incoming data that I simply `.unwrap()`ped.
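A hedged sketch of the pattern described above (the `Record` type and its field are hypothetical, not the commenter's actual code): replacing a bare `.unwrap()` on an assumption about incoming data with an explicit `Result` forces the failure case to be handled at the call site:

```rust
// Hypothetical incoming record; an absent field models the kind of
// faulty assumption about incoming data described in the comment.
struct Record {
    user_id: Option<u64>,
}

// A bare `record.user_id.unwrap()` panics when the assumption fails.
// Returning a Result makes the failure explicit and recoverable.
fn handle(record: &Record) -> Result<u64, String> {
    record
        .user_id
        .ok_or_else(|| "record missing user_id".to_string())
}

fn main() {
    let good = Record { user_id: Some(7) };
    let bad = Record { user_id: None };

    assert_eq!(handle(&good), Ok(7));
    assert!(handle(&bad).is_err());
}
```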
I initially was interested in Rust for the promises of higher-performance, lower memory footprint compute, and that is a great aspect of it. But the correctness guarantees that I've experienced make all the hard work I went through absolutely worth it.
Sure, it would be great if the learning curve was less steep. But that doesn't mean the hard work isn't worth it.
Rust and Python sit at almost extreme opposite ends of programming paradigms. The question on the table remains if the excessive complexity of the Rust language is factually worth the pain.
I’d agree they’re at opposite ends of the complexity spectrum: it’s very difficult to write a moderately complex python program which uses even a single dependency and is bug free, then deploy it to the machine of a non-technical user. Rust meanwhile excels at this task.
In Python, "it’s very difficult to write a moderately complex program which uses even a single dependency and is bug free" - whereas in Rust, it's very difficult to write pretty much anything, but if you manage to do it, it has less bugs, (and being a single binary, is easier to install - but you don't need Rust for that)?
lol "excessive complexity". Compared to what? Every language if you learn it to sufficient depth starts to appear that way, and Rust is mostly displacing C++ which is insanely more complex.
If you need performance, compiled binaries, etc., then yeah, it sure can be worth it. In some cases you don't have a product if you can't get the performance to reach a certain level. In other cases you spend a lot more money tracking bugs in production that Rust just doesn't allow to happen in the first place. Every project has different priorities; some projects don't need Rust at all, some greatly benefit from it. The complexity is worth it for me, even for hobby projects, mostly because after a few weeks of toiling with it, I learned it wasn't that complicated, and what it gives me in peace of mind far surpasses that.
You are basically asserting that performance & correctness are beyond the reach of developers who build systems in other languages. So based on this I suppose all systems developed in other languages are either performing subpar and/or are incorrect?
"Is the pain worth it" is the question. My 42 years of professional development tells me the answer is no. (I lean towards programming pleasure and not pain). YMMV.
I think the idea is that performance and correctness are easier to achieve with rust. Once you get over the initial mountain of learning. The mountain is a different height for each person.
I would say most systems are either slower or less correct, yes. The nice thing about being a developer is that these costs are mostly hidden from us. The business pays for more compute if you churn out slow code, and churning out bugs just increases your job security.
Imagine if you had to pay for the performance loss or production bugs directly though...
For the rare case where the small speedup is otherwise worth the lower productivity of Rust, it can still be a worse choice, because you have a disadvantage in iteration speed when reworking the code for a faster problem-solving strategy or algorithm.
The strong and static type system makes refactoring and iteration pace extremely predictable in Rust. And it's often quick to do as well.
With dynamic languages you can pretend you're done in an hour and then endure a lot of production bugs. That's not being more productive than Rust. That's playing pretend that a complexity doesn't exist.
On the contrary, I find the speedup due to rust to be quite large and the productivity to be quite acceptable. Refactoring for me is so much easier when I can rely on the compiler to catch so much.
>I initially was interested in Rust for the promises of higher-performance, lower memory footprint compute, and that is a great aspect of it. But the correctness guarantees that I've experienced make all the hard work I went through absolutely worth it.
I'd go with 1/3 less performance by using Java or C#. While having better productivity. If we are talking a web service, of course.
As someone that has only dabbled in Rust, the post reminds me of C++ and its mind-boggling templating system. Rust even seems to provide you with the multi-page compile errors.
Is it really as bad, or does the post only highlight the misery you get when you're working on the fringes of what the language is capable of doing?
For me the notable difference is that the Rust compile errors are almost always about the problem I actually have.
They can be quite verbose, but that's largely from showing me what the problem is exactly. e.g. Rustc will show you where you borrowed X and then where you tried to modify it after forgetting it was borrowed, not just go "You can't do that, it's already borrowed" and expect you to figure out where and how or fly into a rage because you're sure you didn't borrow it.
The error[E0308] mismatched type message with a type whose name repeats itself several times is a bad sign, yeah, you probably should not make a type like that. But most of the rest look like helpful Rust errors to me.
To be fair, async (and generators, closures, and Iterator) can expand to fairly hairy types from innocuous-looking code. A thing I do to simplify the output is to rely on `impl Trait` or `Box<dyn Trait>` to erase the type at strategic points, but that's working around my inability to find a way to make rustc figure out whether the full type is relevant or not right now.
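A small sketch of that erasure trick: both `Box<dyn Trait>` and `impl Trait` hide the nested adapter type that an iterator chain would otherwise leak into every signature (and every error message):

```rust
// Without erasure the return type would be a deeply nested
// Map<Filter<Range<u32>, ...>, ...> that appears verbatim in errors.
fn evens_squared(limit: u32) -> Box<dyn Iterator<Item = u32>> {
    Box::new((0..limit).filter(|n| n % 2 == 0).map(|n| n * n))
}

// `impl Trait` achieves the same simplification without a heap
// allocation, at the cost of still being one fixed concrete type.
fn odds(limit: u32) -> impl Iterator<Item = u32> {
    (0..limit).filter(|n| n % 2 == 1)
}

fn main() {
    let squares: Vec<u32> = evens_squared(7).collect();
    assert_eq!(squares, vec![0, 4, 16, 36]);

    let odd: Vec<u32> = odds(6).collect();
    assert_eq!(odd, vec![1, 3, 5]);
}
```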
> Rust even seems to provide you with the multi-page compile errors.
rustc strives to provide you as much relevant context as possible. When hitting very verbose output, it is a hint that what you're trying to do is difficult and will require considered design that the compiler can't help you with. The compiler is trying to help you clarify your code so that it can understand it.
Looking at the diagnostics shown in the blog post, I see only one that is indeed terrible[1], but those in particular are getting less and less verbose as we tackle them one by one with more targeted diagnostics. I also see the lifetime errors (the closure one when you specify the argument type, and the one about the "`Execute` impl is not general enough"), which I wish gave more context.
As support for HRTBs lands, related errors will stop happening as much and the code will indeed work (or the compiler will be able to tell you what alternative syntax you should be using).
And, in a general plea for people writing Rust, when you encounter a subpar diagnostic, file a ticket[2].
It's actually really fun once you get the hang of it and realize the compiler is there to help you ship code that runs! This person hit a rough spot and got very frustrated basically... It happens, it's part of learning.
The hardest thing about Rust is that it tells you straight up when you don't understand something you are trying to do. For people who are otherwise productive and think they know certain things, this can be an ego slap. In return it makes you a better programmer, because you learned something; it's just a bitter pill sometimes.
In my opinion this person should have gone to rust discord/discourse, and asked for help. A lot of this would have been explained. Some of what they dealt with are rough edges that require practice to overcome though. But yea, lots of people are happily shipping async code in rust, right now. It's very possible.
Some people also get into C++ templates and build complicated things with them for fun (eg the whole subfield of template metaprogramming came from inventing a clever abuse of the accidental power of templates as a poor man's macro system)
C++ templates are an inscrutable nightmare, the reputation is deserved. Rust is a bit different.
Any software engineer that doesn't have a love/hate relationship with C++ templates is lying. On one hand they are extremely opaque and not user friendly, unnecessarily so. On the other hand, mastery of that dark art allows metaprogramming that you could only dream of in other systems languages -- the modern C++ template facility is extremely powerful. The number of C++ software engineers that achieve this level of metaprogramming mastery is quite small. In fairness, the C++ language has been intentionally evolving to make metaprogramming much easier than it used to be, and it has made massive strides in that direction.
Rust lacks the expressiveness of C++ template metaprogramming facilities in significant ways. However, it is plausible that it will gain them eventually. The question is if it is possible to support advanced metaprogramming without the train wreck that is C++. I think it is eminently possible to do better, the question is how long it will take other systems languages to have metaprogramming expressiveness similar to current incarnations of C++.
Is there a tutorial-style (or anything beginner-focused) resource for learning modern C++ that you could recommend? A lot of the learning material is about earlier versions (for understandable reasons), or at least don't seem to reflect C++ as it is today, and the best practices of writing code with the tools it provides.
I would argue that the meta-programming facilities that are in Rust are more expressive and can be used to implement things that are not possible in C++. For example embedded dsl and similar. There is likely some cases where templates can express something that cannot be expressed with generics though.
Unreal Engine vs Unity. Unreal Engine has a visual editor that allows developers to accomplish most tasks without entering the code editor, thanks to a visual scripting language that mirrors the C++ counterpart 100%. I believe it is not possible to do that in C# (Unity's language).
Rust is a systems language. To be a systems PL, it is very important not to hide underlying computer memory management from a programmer. For this reason, Rust pushes programmers to expose many details that would be otherwise hidden in more high-level languages. Examples: pointers, references and associated stuff, memory allocators, different string types, different Fn traits, std::pin, et cetera.
Rust is a static language. This is better explained in my previous essay “Why Static Languages Suffer From Complexity”. To restate, languages with static type systems (or equivalent functionality) tend to duplicate their features on their static and dynamic levels, thereby introducing statics-dynamics biformity. Transforming a static abstraction into its dynamic counterpart is called upcasting; the inverse process is called downcasting. Inside push_handler, we have used upcasting to turn a static handler into the dynamic Handler type to be pushed to the final vector.
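A hypothetical reconstruction of that `push_handler` upcast (the `Handler` alias and `Router` type are illustrative, not the essay's actual code): boxing a generic closure into a trait object is the static-to-dynamic move being described, and it lets differently-typed handlers share one vector:

```rust
// The dynamic side: one erased type covering every handler.
type Handler = Box<dyn Fn(&str) -> String>;

struct Router {
    handlers: Vec<Handler>,
}

impl Router {
    fn new() -> Self {
        Router { handlers: Vec::new() }
    }

    // The generic parameter `F` is the static side; `Box::new` is
    // the upcast into the dynamic `Handler` type.
    fn push_handler<F>(&mut self, f: F)
    where
        F: Fn(&str) -> String + 'static,
    {
        self.handlers.push(Box::new(f));
    }
}

fn main() {
    let mut router = Router::new();
    // Two closures with distinct concrete types share one Vec.
    router.push_handler(|req| format!("echo: {}", req));
    router.push_handler(|req| req.to_uppercase());

    let responses: Vec<String> =
        router.handlers.iter().map(|h| h("ping")).collect();
    assert_eq!(responses, vec!["echo: ping", "PING"]);
}
```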
C and C++ are both systems languages and static typed languages (I don't know of any systems language which is dynamic typed) and don't feel to me so hard.
C# has "pointers, references and associated stuff, memory allocators, different string types" as features but it is at the same time a higher level language and you make use of those features only when you need to deal with low level stuff, no pain implied.
So, to me, these reasons don't quite explain why "Rust is hard".
Maybe in C++ it's easier to get a program to compile, but it's dramatically harder to make sure you've checked all the boxes you're supposed to check to make sure your code is safe and complies with best practices.
For instance, consider the rule of 5 [1]. There are so many rules like this in C++, so much complicated stuff you're supposed to know to write anything at all, and much of it is just _accidental_ complexity: it's there for historical reasons, or for language design reasons that in retrospect don't make much sense.
Rule of 5 was a C++11 thing, maybe even mainly a C++03 thing.
Your complaints about C++ have aged badly. C++20 is not the same language as C++11, which was not the same language as C++03.
Rust's complexity is rapidly approaching C++'s. In 5 years, if Rust hasn't fizzled, it will get there. Only being used to it will make it seem any simpler.
> C++20 is not the same language as C++11, which was not the same language as C++03
This is not a good thing. In fact, it's a shockingly bad thing. C++ is basically 40 years old, and still every few years the C++ community feels like, "OK, now we know how to get C++ right; yeah the old C++ was a mess, but now we just need to add these 12 new features and it'll all be good."
Unfortunately many of the problems with C++ are due to mistakes in its early design. It was wrong from the beginning.
I've been programming Rust for 6 years. C++ for over 20. I'll take Rust every day of the week and twice on Sunday.
If it is bad that C++ evolves, then it is worse for Rust, which has been evolving much faster, trying to catch up. It is still just barely possible Rust could avoid fizzling. For that to happen, it will need to change enough so that the many, many more people who reject than embrace Rust, as it is today, choose differently tomorrow.
Evolution is necessary to stay relevant; the only alternative is stagnation. If your language or your code of five years ago does not embarrass you today, it only means you haven't learned anything new.
Rust is indeed changing pretty fast. But it's new. And most of the design decisions work well together. The language as of 2015-2016 was a reasonable language, and the changes since then have very rarely been of the form "This earlier design was a mistake, and we need to fix it all."
But C++, even in its pre-template 1980s form, was already full of wrong decisions (granted, many of those decisions were done for compatibility with C).
Including headers is the wrong way to share code. C-style macros are a bad idea. The whole copy constructor idea instead of move semantics, and then an explicit clone function when you need that, was the wrong idea. The complex mess of various constructor types. The friend mechanism to control access is complicated and unnecessary - why not just a simple module system? The complicated inheritance system - virtual, public, protected, private inheritance - what is this goop? The implicit conversions between integer types has caused countless problems. The language is extraordinarily hard to parse.
All of these things (and plenty more) I feel perfectly comfortable saying are just outright bad.
In Rust, all the basic language stuff... the module system, the traits, trait objects, the mechanism for defining new types as enums or structs, even the borrow checker... it's all fine, and it works well together. It's definitely not perfect, I have my gripes. But when I write C++ I can never forget for one second that it's a conglomeration of baffling design decisions and one attempt after another to patch over prior mistakes. I never feel anything like that with Rust.
> If your language or your code of five years ago does not embarrass you today
I mean I get where you're coming from with this, but I would say if an entire community that's been around for 40 years can look back every 5 years and always say "the way we were doing things 5 years ago was completely wrong," that's a serious problem.
In just another few years, if Rust does not fizzle, you will feel the same way about code in Rust as it evolves and improves.
But how C++ was coded to older Standards was not wrong at the time. That code was chosen judiciously according to features of the language as it existed then, and our understanding of it and our world.
Changes in languages are not random. Every last change in each new C++ Standard was in response to recognition that code would be better using the proposed change. Failing to use the new features as intended, after, would amount to choosing not to write code better.
Rust started at a snapshot of how problems of coding were understood at one time. Our understanding today and the world we program for have changed markedly from that time. Both will continue changing. What is good Rust will also change as the world, our understanding of it, and the language all change.
> In just another few years, if Rust does not fizzle, you will feel the same way about code in Rust as it evolves and improves.
I've written fairly extensive amounts of code in C, C++, Rust, Python, Java, OCaml, Haskell, Clojure, Common Lisp, Scheme, and probably some others that aren't coming to mind right now. All of these languages except Rust have been around for decades (and I guess Clojure is about a decade and a half). My opinion of these languages vary. They all have their warts. Some of these languages I don't particularly like. Some of them are complicated, some are fairly simple.
But C++ is the only one where I feel like the basic features of the language are actively working against me. It's the only one where I feel like such a huge portion of language design decisions were utterly baffling and wrong.
IME some members of the C++ community can be fairly insular. They just can't see how things are done outside their world. You expect me to believe that I'll eventually feel the same way about Rust as I do about C++, but I know that's unlikely to be true because C++ is the only language that I feel this way about.
I'm very confused. Rust is mainstream? C, C++, Java, C#, Python, JavaScript are all what I would call mainstream. Rust is not a language that comes to mind at all.
Rust has hit the point where it's in the conversation for what language you'd write a new project in, even for big companies (typically the most lethargic). I'd say it's at the level of mainstream where Rust is being written in a significant portion of the industry, but not at the level of mainstream where it's somewhere in the stack behind most of organised society.
Pretty damn mainstream nowadays, yes. Quite a few big tech companies are either actively doing much of their new systems programming in Rust, or else dipping their toes in.
I honestly don't know of a big tech company that is happy with the idea of just continuing to use C++ indefinitely - they are _all_ looking for alternatives, and Rust is the most obvious option.
> Quite a few big tech companies are either actively doing much of their new systems programming in Rust, or else dipping their toes in
I don't think that's true. It seems to me that a lot of people believe this simply because a lot of other people believe it. You see it everywhere. "Oracle rewriting MySQL in Rust". "Microsoft rewrites Skype in Rust". "Linus Torvalds rewriting Linux in Rust". I have yet to see proof of any of it.
The Google team I work on (ChromeOS, crosvm[0] specifically) has been transitioning to Rust for a lot of our new services (mostly crosvm and stuff that interacts with it) and I couldn't be happier :)
The claim was "continuing to use C++ indefinitely." For that not to be true they'd have to be looking at replacing C++ entirely. Moving everything to rust/go/whatever. Do you foresee Google doing that?
Google as a company? Not necessarily, I don't have a crystal ball.
ChromeOS as a product? Maybe. (note: not Chrome)
ChromeOS platform internals? I could totally see it. You don't just rewrite everything from C++ to Rust, because that wouldn't make much sense, but more and more new services/products are being spun up that use Rust (we have docs and stuff https://chromium.googlesource.com/chromiumos/docs/+/master/r...).
More people pick up coding C++ professionally in any given week than the total being paid to code Rust full time.
The bigger a company is, the more various languages people there dabble in, and the less it means that somebody there dabbles in your favorite. You would better add up revenue (NB: not market cap) of companies specializing in using your favorite language; but that number would be disappointingly small for any language as far from maturity as Rust still is.
I really do not like the minimal stdlib idea, but as far as the language itself, I don't see the problem.
Have people never experienced "Lots of pain" debugging?
All those hoops modern languages make you do are just the same things you'd have to do yourself.
Instead of having to use some language construct to do whatever it is, you do it the simple and obvious way... but all reasoning to prove safety is on you.
Instead of "Learn these 50 features", which is hard but not impossible, you have to "Lol IDK git gud and don't do bugs", which is not only hard, but it's all on you to even figure out if you did it right.
I am fluent in C/C++, Java, Scala. I really wanted to like Rust, but it is hard. The borrow checker is hard, and the language API is not intuitive and/or ergonomic.
In my opinion, easy rust is easy, hard rust is hard. A lot of people take for granted how hard some things are to do correctly until they are asked to do them correctly. Not saying this to put you down, we all struggle with this until we get over the hump.
My best advice for loving errr learning rust is put it aside for a few weeks after being annoyed with it. Survey some other language like idk Haskell, then try it again.
> I have the unsubstantiated theory that experienced developers have a harder time than less experienced developers when learning Rust. You need to forget a lot of constructs that work well enough in the languages you already know because they introduce things that go against the single owner enforcement that Rust has, whereas somebody with less experience will simultaneously accept restrictions as "just the way it is" and not seek out more performant constructs that can be much harder to understand or implement.
> Rust has a curse (it has many, but this one is critical): inefficient code is generally visible. Experienced developers hate to notice that their code is inefficient. They will recoil at seeing Arc<RefCell<T>>, but won't bat an eye at using Python. I know because I have the same instinct! This makes it much harder to learn Rust for experienced developers because they start with the "simple Rust code that will work but is slightly inefficient" and in an effort to improve it they land squarely in parts of the language they haven't yet developed a mental model for.
This is not a critique of Rust, but my experience is that I needed to build something distributed cross-platform as a statically linked binary. So I narrowed my choices to Go and Rust. I wanted to learn Rust and was only so/so on Go, and I really tried. I knew this project would go slower because I'd be picking up a new language, but while my brain was still hurting from Rust I peeked at Go, and although it can get a little convoluted, overall it's a readable language where I felt like I could start writing decent code within days.
Since it was mostly about building something for users and not just my own learning, I had to go with Go.
Not sure if I will ever learn rust but it would have to be for a hobby project or solve a problem for me that is fundamental to the desired outcome that go, python etc could not.
While I can see why zero-cost abstractions are important in Rust, I was wondering how much simpler Rust async could have been if zero-cost had not been a hard requirement. The way I see it, Rust async is most useful for IO-bound applications, which can afford some non-zero abstraction CPU cost.
If zero-cost wasn't an aim then it would indeed be much easier. For example, async fn in traits isn't (yet) in the language. The type system level changes to make them possible are being actively worked on (it is high priority!), but if we didn't care about "you can't write more efficient code by hand", then we would have provided that feature with the semantics of the async-trait crate[1], which provides a macro that turns `async fn foo() -> i32 { 42 }` into `fn foo() -> BoxFuture<'static, i32> { Box::pin(async { 42 }) }`. Going down that route would have "silenced" that criticism, at the cost of locking us into subpar behavior at least until the next edition. The approach of waiting until things are baked well enough to land in stable is frustrating as a user today, but I believe it has a better chance of standing up to scrutiny long term. In the meantime, the community is able to "plug" those failings, at the cost of worse dev UX due to not being built in.
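That desugaring can be sketched with std types only. Note this is an illustrative sketch, not the crate's actual output: `BoxFuture` here is a hand-rolled alias standing in for futures::future::BoxFuture, and `block_on` is a toy executor with a no-op waker, just enough to drive an immediately-ready future; `Fetch` and `Fixed` are invented names.

```rust
use std::future::Future;
use std::pin::Pin;
use std::task::{Context, Poll, RawWaker, RawWakerVTable, Waker};

// Hand-rolled stand-in for futures::future::BoxFuture.
type BoxFuture<'a, T> = Pin<Box<dyn Future<Output = T> + Send + 'a>>;

// What `async fn fetch(&self) -> i32` desugars to under the boxing approach.
trait Fetch {
    fn fetch(&self) -> BoxFuture<'static, i32>;
}

struct Fixed;

impl Fetch for Fixed {
    fn fetch(&self) -> BoxFuture<'static, i32> {
        Box::pin(async { 42 })
    }
}

// Toy executor: polls with a no-op waker until the future is ready.
fn block_on<T>(mut fut: BoxFuture<'_, T>) -> T {
    unsafe fn clone(_: *const ()) -> RawWaker {
        RawWaker::new(std::ptr::null(), &VTABLE)
    }
    unsafe fn noop(_: *const ()) {}
    static VTABLE: RawWakerVTable = RawWakerVTable::new(clone, noop, noop, noop);

    let waker = unsafe { Waker::from_raw(RawWaker::new(std::ptr::null(), &VTABLE)) };
    let mut cx = Context::from_waker(&waker);
    loop {
        if let Poll::Ready(value) = fut.as_mut().poll(&mut cx) {
            return value;
        }
    }
}

fn main() {
    assert_eq!(block_on(Fixed.fetch()), 42);
}
```

The cost being discussed is visible right in the signature: every call allocates a Box and dispatches through a vtable, which a hand-written poll implementation could avoid.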
Compare the Rust Handler-Dispatcher program with the Go Handler-Dispatcher program and look at the truly extraordinary complexity that Rust adds to what is effectively a simple design problem that any of us have implemented several times, if your career has had a few years of original coding.
Handler-Dispatcher is not some complicated design pattern. It is very basic. Its implementation in any language should be short and elegant. This is NOT something you should be losing sleep over.
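For concreteness, here is roughly what a minimal synchronous Handler-Dispatcher looks like in Rust (all names invented for illustration); it's the async, generic-heavy variants where the complexity complaint really bites:

```rust
use std::collections::HashMap;

// A handler responds to an event payload.
trait Handler {
    fn handle(&self, payload: &str) -> String;
}

struct Echo;

impl Handler for Echo {
    fn handle(&self, payload: &str) -> String {
        format!("echo: {}", payload)
    }
}

// The dispatcher owns boxed trait objects keyed by event name.
struct Dispatcher {
    handlers: HashMap<String, Box<dyn Handler>>,
}

impl Dispatcher {
    fn new() -> Self {
        Dispatcher { handlers: HashMap::new() }
    }

    fn register(&mut self, event: &str, handler: Box<dyn Handler>) {
        self.handlers.insert(event.to_string(), handler);
    }

    fn dispatch(&self, event: &str, payload: &str) -> Option<String> {
        self.handlers.get(event).map(|h| h.handle(payload))
    }
}

fn main() {
    let mut d = Dispatcher::new();
    d.register("ping", Box::new(Echo));
    assert_eq!(d.dispatch("ping", "hi"), Some("echo: hi".to_string()));
    assert_eq!(d.dispatch("unknown", "hi"), None);
}
```

The synchronous version stays short; lifetimes, Send bounds, and pinned futures are what balloon the async equivalent.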
Rust, unfortunately, is even MORE complicated than C++ in language complexity today. Good tooling does not eliminate the burden of extraordinary complexity that Rust wants to enforce on its adopters.
I don’t think it’s true that any given “basic” design pattern should should necessarily be simple in every language.
Certain things are short and elegant in Rust. Certain things are short and elegant in Python. Each optimises for different things.
I also don’t believe it places “extraordinary complexity” on the shoulders of the programmer; this complexity is often a sign you’re doing something wrong, or working against the grain of what Rust is optimised for.
They’re not doing anything wrong, but working with generics, traits and async code is very much against the grain of the 2018/2021 editions of Rust.
The async story hasn’t finished, and is definitely the area that needs most improvement in terms of developer experience. There are a number of gated features and RFCs that propose solutions to these problems (e.g. GATs), but they haven’t been moved into the core language yet.
That doesn’t mean Rust is inherently bad or difficult, only that we’re working with an early version. Rust is young! We didn’t have futures until 2019, iirc.
Lots of core Rust concepts (lifetimes, ownership, references) need to be rethought for how they fit into asynchronous programming, and how to make everything compose well together.
I find rust and scala quite similar on learning curves. You have to reach a certain level before you can work with the language and library ecosystem.
The benefits proposed by proponents of both languages are there but after the mean of the bell curve.
Also, a beginner is asked to work with external interfaces and libraries, because the language and its standard library are deliberately minimal in certain ways; the answers are supposed to come later. So there is a lot to take in at the very first step.
Both languages are still working on some answers and kept options open. So "why" of that is also complicated.
Performance, safety, ease of use. You can only pick two.
I like the strategy that Nim employs. Because of the multiple memory management strategies you can choose from (different types of garbage collectors, reference counting, manual memory management, or no memory management), you can have different combinations of performance, safety, and ease of use, so it can fit nicely with what you are trying to accomplish.
Out-of-topic, but it's funny how some pro-Rust comments actually read like pro-Haskell comments. Haskell has more extensive and thorough typing than Rust, so anything good about `trait` is even better in Haskell. You just have to "unlearn" more to get used to pure functional programming.
The borrow checker statically prevents having mutation and aliasing at the same time. Aliasing is when two names refer to the same state; mutable aliases make it possible for "someone else" to act on that state while you're using it.
Consider the humble for-each loop:
for (var x : xs) {
    xs.remove(0);
}
If you're not careful, modifying a collection while you're traversing it can cause you to visit elements multiple times, or skip some elements entirely. There are effectively two tasks occurring at the same time -- interleaved, not parallel, but still concurrent: the ordered traversal and the action on each element. Both tasks can view the list, but one of them also modifies it behind the other's back.
This problem simply can't happen in Rust (without going out of your way, at least), because you can't independently mutate a list you're already iterating over.
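A quick sketch of what this looks like in practice: the mutating loop is rejected at compile time, and the idiomatic fix expresses the mutation as a single operation on the collection.

```rust
fn main() {
    let mut xs = vec![1, 2, 3, 4];

    // The loop from above does not compile in Rust:
    //
    //     for x in &xs {
    //         xs.remove(0); // error[E0502]: cannot borrow `xs` as mutable
    //     }                 // because it is also borrowed as immutable
    //
    // The iterator holds a shared borrow of `xs` for the whole loop,
    // so no mutable borrow can exist inside the body.

    // Idiomatic alternative: state the mutation as one operation,
    // here keeping only the even elements.
    xs.retain(|x| x % 2 == 0);
    assert_eq!(xs, vec![2, 4]);
}
```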
everytime i sit down to give rust a try i get catapulted down some esoteric rabbit hole about conceptual abstraction over computer memory that rust invented for my safety that isn't fully fleshed out conceptually or in implementation
all my rust programs are provably safe though, because i never finish them and no one ever uses them
I see a lot of people here saying something like "Just use Arc<Mutex>, it will be plenty fast".
As someone who does it all the time, I've noticed that code becomes less readable and harder to comprehend that way. Important types stand out much less in function signatures once you have Arc<Mutex<InterestingType>>. Rust is already extremely verbose, and adding additional layers to types doesn't help. Not to mention that now, every time you want to access the value, you need to do a dance with calling .lock().
Rust pro tip: use a type alias.
As for using lock() - you'd have to do it in any language in some way. If you have sharing and mutability, you need some kind of synchronization.
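A sketch of the alias tip (`Shared`, `Counter`, and `bump` are made-up names):

```rust
use std::sync::{Arc, Mutex};

// Invented alias: keeps `Counter` visible in signatures instead of
// burying it inside Arc<Mutex<...>> everywhere.
type Shared<T> = Arc<Mutex<T>>;

struct Counter {
    value: u64,
}

fn bump(counter: &Shared<Counter>) {
    // The .lock() dance is still needed; the alias only helps readability.
    counter.lock().unwrap().value += 1;
}

fn main() {
    let counter: Shared<Counter> = Arc::new(Mutex::new(Counter { value: 0 }));
    bump(&counter);
    bump(&counter);
    assert_eq!(counter.lock().unwrap().value, 2);
}
```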
You don’t need full synchronization if you’re not going cross threads. RefCell and the like don’t use atomic operations or barriers or anything fancy like that.
I personally believe that Rust is an exceptional language and it’s ahead of its time. Seriously, can you think of any other language with as incredible a type system, incredible tooling, incredible performance, incredible package management and general package quality, incredible compilation target support, and practically no backwards-compatibility quirks?
But Rust is a really bad general-purpose language. Because Rust isn’t a general-purpose language. It’s designed for high-performance, low-memory, safe computing. The only reason it’s even remotely considered “general-purpose” is because of how great it is. Unless you actually need to write something that’s high-performance, low-memory, or safe (or there’s a really important package in Rust that you can’t find anywhere else), stick to something like Swift or Kotlin or Java or TypeScript. Or even JavaScript or Python or Lua if you’re only writing small scripts.
-
Rust abstracts almost nothing about the OS. In order to understand Rust, you need to understand at a low level how computers actually work and how languages compile. You need to understand the what, why, and how of threads, the stack / heap, I/O, endianness, etc.; and language features like dynamic dispatch, references, async runtimes, memory management, etc. These language features are all implemented in Rust, but they’re explicit: you have to think about them and choose your implementation.
People say that Rust is hard because of the borrow checker but there’s much more than that: dyn Traits (you have to understand when a trait can and cannot be dyn), Unsized types (and why there are so many type parameters, because everything must be sized), closure traits, multi-threaded communication, what async actually does, etc. Heck, we almost got you to manually choose allocation schemes (they’re default type parameters, which is weird because Rust doesn’t have many implicit defaults).
You often face decisions like
- should this trait be a static / type parameter, or dynamic / Box<dyn Trait>?
- Is this value borrowed or owned or Cow?
- Should this be a unique reference / Box or a shared reference / Rc or a shared-across-threads Arc or not a reference at all?
- How do we control this value and communicate across threads, do we make the whole thing thread-safe or message passing or provide a shared Mutex/AtomicBool?
- How do we manage async, do we set up an async runtime ourselves (and if so how do we configure this runtime), or how do we add integration with existing async runtimes?
- Should this be a Cell / RefCell because we don’t want to or can’t pass a single mutable reference around? And when / how do we dereference the RefCell as to ensure it doesn’t get accessed anywhere else until the dereference is dropped?
- Can we do this safely, aka encode it in a way the compiler understands? And if not, how do we a) do it correctly unsafe and b) make the unsafe code as short as possible?
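The first of those decisions, static type parameter vs. dynamic Box<dyn Trait>, can be sketched like this (trait and types invented for illustration):

```rust
trait Shape {
    fn area(&self) -> f64;
}

struct Square(f64);

impl Shape for Square {
    fn area(&self) -> f64 {
        self.0 * self.0
    }
}

// Static dispatch: monomorphized per concrete type, calls can inline, no vtable.
fn area_static<S: Shape>(shape: &S) -> f64 {
    shape.area()
}

// Dynamic dispatch: one compiled function, calls go through a vtable.
fn area_dyn(shape: &dyn Shape) -> f64 {
    shape.area()
}

fn main() {
    let sq = Square(3.0);
    assert_eq!(area_static(&sq), 9.0);
    assert_eq!(area_dyn(&sq), 9.0);

    // dyn is what makes heterogeneous collections possible:
    let shapes: Vec<Box<dyn Shape>> = vec![Box::new(Square(2.0))];
    assert_eq!(shapes[0].area(), 4.0);
}
```

Both calls compute the same thing; the trade-off is code size and inlining (static) against flexibility and smaller binaries (dynamic).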
-
And this explicitness is part of what makes Rust so amazing, because other languages make these decisions for you. But every decision has costs, and either they a) incur higher overhead, or b) lose safety guarantees.
e.g. in JavaScript everything is on one thread, so no data-races but no true parallelism; everything is garbage collected, so no use-after-free or double-free but you get GC spikes; everything is a dynamic boxed reference and types are checked at runtime, so no passing complex types and type parameters everywhere, but calls are much slower and there is no true type-safety.
Or in C++, there are multiple threads with shared memory but you must explicitly avoid data-races; there is no GC overhead but you must explicitly manage allocs/frees; you can choose whether to pass/store things as references or values and there’s a type system, but values can be assigned / casted to the wrong types leading to memory corruption.
Rust is the only mainstream language where you can do unsafe-but-efficient stuff, but it’s actually safe because it’s checked at compile-time.
-
But this explicitness also makes Rust a bad general-purpose language. e.g.
If you want, you can pretty much make everything in Rust like this: #[tokio::main] for the implicit async runtime, everything Rc<RefCell<T>> for no borrowing concerns, dyn Any with upcasting / downcasting, etc. This is mainly at the cost of a ton of extra syntax. And of course you get higher overhead and lose compile-time guarantees, which are now checked at runtime.
Or better yet, what you should actually do, is write your code which doesn’t need high performance, low memory, or security guarantees in another language. And possibly interface it to Rust code with FFI.
Because Rust isn’t a general purpose language. It tries to be general-purpose, and is to an extent, but it’s ultimately built for niche performance and reliability guarantees, and you just can’t have those without user overhead.
I spent about 10 years as a professional C programmer, userland and kernel on Linux, BSD, and WinAPI systems, and I feel like I have a pretty solid grip on "how computers actually work and how languages compile" (I've written compilers with codegen), and "the what, why, and how of threads, the stack / heap, I/O, endianness, etc.", as well as (from C++) "dynamic dispatch, references, memory management etc.." and, like everyone coding in the 2000s, "async runtimes" as well.
Rust is hard. It's not hard because you have to understand all those things. It's hard because you have to understand how Rust understands all those things. Different kettle of fish.
I'm perpetually leery of "you'd have an easier time with Rust if you had a better grip on the CS of systems programming". No, that's not it at all.
(It's fine that Rust is hard. I recognize the achievements, and see where it's a near-perfect fit; I'm looking forward to Rust kernel code. But then, there's a reason almost nothing I write is in-kernel, even when I need "performance".)
Rust has opinions. They are all at least arguably reasonable opinions. But, often enough, you have sound reasons to do things differently. Then, Rust will fight you.
Rust is a pretty good language, and is getting better, but it just takes a very long time for a language to mature. There are no short cuts.
When Rust is mature it will be very far from simple. People will talk about which language subset they are working in. It goes with the territory. Tools that adapt to the real world get as complicated as the world they serve.
>It’s designed for high-performance, low-memory, safe computing.
Meh. If it had been designed for low-memory computing, it wouldn't have had an implicit static global allocator, or a standard library that panics on allocation failures. Currently you have to choose between having no upper bound on your program's memory usage, or using an allocator with an upper bound and accepting that you'll crash on OOM, or giving up on almost the entire third-party crates ecosystem and write with no_std.
The upcoming effort to stabilize std::alloc::Allocator and the A in `Vec<T, A>` etc. doesn't help, because every piece of third-party code that currently uses `Vec<T>` is implicitly using `Vec<T, Global>`, and using other allocators in your own code will do nothing to change that. Heck, libstd's own std::io::Read::read_to_end requires a `Vec<u8, Global>`.
Maybe someone in the future will invent some bastard child of Zig's comptime, Zig's explicit allocators and everything else from Rust, and that will be the language to rule them all.
The evolution of JavaScript from "DHTML" to jQuery to "JS: The Good Parts", browser runtime speed improvements and the introduction and development of TypeScript really highlights to me that languages that are complex (e.g. how JS objects work) can be explained, improved on, etc.
I guess what I mean by this is just as C++ got move semantics and auto keywords, it's very possible for a language to choose to evolve in ways that can make code easier to maintain and more expressive. TypeScript or Kotlin are examples of this. Also .NET core and Java since Gradle.
Some things rise in popularity despite their complexity. Objective-C and Swift come to mind... as well as writing truly portable Unix code or Kubernetes.
Personally, I'm waiting for the inevitable "Rust: The Good Parts" that I can feed into my linter and help myself and others on my team understand which parts are minefields of complexity and which parts are incredibly useful once understood.
In JS, there were multiple JS APIs and common practices with global state and more that just fell off the face of the earth. Most IDEs now say what is modern and what isn't, backed with new syntax and conventions that avoid traps in understanding like rebinding this for magic syntax, or using lots of stringly-typed functions.
The only problem I can see is that improvements don't come for free, or quickly. In C++'s case, some of the improvements took multiple decades before they were implemented and half a decade to become widespread and recommended...
I don’t buy it. To write meaningful C that you actually understand requires all the same knowledge of how computers work, yet C is a tremendously simpler language than Rust and has an easy-to-comprehend memory model.
The difference is all the stuff Rust adds on top of fundamentals to make it easier to write correct code with respect to memory usage (in other words, it’s a sophisticated system to make sure you always pair one malloc with one free and don’t make any of the other mistakes C permits partly as a result of its simplicity)
Yeah, I get that Rust tries very hard to be general-purpose. It also has a great ecosystem, has rare features like proper ADTs/macros and no annoying backwards-compatibility quirks, and it's one of the best languages for WebAssembly. So there are reasons you might want to write something in Rust that doesn't need to be low-overhead.
But even when you're doing everything single-threaded with Rc and RefCell and Box<dyn>, you can still run into pitfalls. More significant, when you do this your code becomes incredibly verbose. You have to constantly specify over and over again you're doing this the high-overhead way and not the weird optimal way, it gets annoying.
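The verbosity in question looks roughly like this (a single-threaded sketch; `SharedVec` is an invented alias):

```rust
use std::cell::RefCell;
use std::rc::Rc;

// Invented alias for the "high-overhead way": shared ownership plus
// runtime-checked interior mutability, all on one thread.
type SharedVec = Rc<RefCell<Vec<i32>>>;

fn main() {
    let shared: SharedVec = Rc::new(RefCell::new(vec![1, 2, 3]));
    let alias = Rc::clone(&shared); // a second owner of the same data

    alias.borrow_mut().push(4); // every access needs borrow()/borrow_mut()
    assert_eq!(shared.borrow().len(), 4);

    // The remaining pitfall: overlapping borrows still panic at runtime.
    //
    //     let b = shared.borrow();
    //     shared.borrow_mut(); // panics: already borrowed
}
```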
Honestly, I think Rust should have an "easy mode", where everything is implicitly wrapped in Arc / dyn / Mutex, and when you write code it's almost like you're working in a truly general-purpose "regular" language. Except it interfaces perfectly with regular Rust and compiles to Rust, so you can still use your favorite cargo packages and even jump into real Rust when you need to.
The issue with that is Rust is already stretching itself kind of thin. So this "easy mode" has to be implemented so it's very simple and you can completely ignore it if you want. Maybe it will just be adding better FFI support to Rust, or maybe it will be another ambitious language like Rust but has better support for implicit easiness.
Rust is hard, from a systems programming perspective, not because of anything about the problem domain but because Rust has an unusually narrow perspective of what systems programming entails. And it is unforgiving for software that has requirements outside that perspective, which happens more often than I think some Rust proponents allow.
Modern systems software architectures are thread-per-core, involve a lot of DMA, etc for performance reasons. That is language agnostic, though most is written in C++ because it is amenable to it. This obviates a lot of the safety features Rust obsesses over. It often feels like the Rust community hasn't acknowledged that this is how systems software actually works these days, or the deficiencies of the language in supporting these software architectures.
The trait_alias[1] nightly feature might be what you want :)
It was dormant for a long time, but it seems like it might be getting some love towards stabilization (or outright removal, I guess) at some point in the medium future.
I've been toying with Rust on and off for a few years. I find it very difficult to be productive for many of the same reasons that TFA lists.
> Finally, imagine that Rust’s issues dissapear, it is high-level, and has uniform feature set. That would presumably be close to the theoretical ideal of a high-level, general-purpose programming language for the masses. Funnily enough, designing such a language might turn out to be a less intimidating task than original Rust, since we can hide all low-level details under an impenetrable shell of a language runtime.
Holy smokes yes. Rust introduces some extremely powerful features that other languages seriously need. Matching on an enum (although Rust also needs a more traditional enum) is awesome. Rust's error handling, IMO, is a step in the right direction away from exceptions. I even like the fact that I can enforce single ownership of an object, or have methods that take ownership of an object. And differentiating between mutable versus immutable without needing different types is infinitely valuable.
Furthermore: My favorite feature of Rust is that it's 100% compiled with no need for a framework or runtime.
Oh, don't argue semantics. It's not like I have to tell someone to install .Net, Java, Python, etc., in order to run a Rust program. (Or figure out how to bundle it in an installer.)
By your logic every language, except for assembly, has a runtime.
I'm not arguing, just adding some detail to a slightly incorrect fact.
It's true that you usually don't have to ask anyone to install anything else to run a Rust program, because the default experience of using Rust + Cargo bundles the runtime for you. But the same can be done with the languages you used as examples as well. Many programs do in-fact include .net, a JVM or a Python runtime so the users don't have to care about it.
Biggest example is probably Blender, which ships with Python (and doesn't require an installer). Many IDEs made with Java also ship their own JVM runtimes.
One area that I think we'll see some progress on soon is "Ease of use" in Rust. A major change in the last few months for the github trending repositories has been the appearance of full applications built in rust. As more full software products are written we should start to see what works and what doesn't work. IDE support in rust is getting better, but still has a long way to go.
What Rust will likely need is a set of libs similar to Guava and Guice to make coding actually productive. We already see that starting to emerge with the Tokio ecosystem, but it would be helpful to have stronger support outside of the async community.
Excuse my ignorance, but my small experience with Rust convinced me that the language actually promotes functional programming (you get bizarre errors when trying a Go or C++ style of programming). Registering handlers doesn’t sound functional at all. A quick search about functional reactive streams leads to interesting solutions, for example tokio::sync::broadcast. I would love to hear your opinion on this.
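tokio::sync::broadcast is one answer; the underlying channel idea can be sketched with only std::sync::mpsc (single consumer here, unlike broadcast, which delivers each event to every subscriber):

```rust
use std::sync::mpsc;
use std::thread;

fn main() {
    let (tx, rx) = mpsc::channel::<i32>();

    // Instead of registering handlers, a task owns the receiving end
    // and reacts to events as they arrive, CSP style.
    let consumer = thread::spawn(move || {
        rx.iter().map(|event| event * 2).sum::<i32>()
    });

    for event in 1..=3 {
        tx.send(event).unwrap();
    }
    drop(tx); // closing the channel ends the consumer's loop

    assert_eq!(consumer.join().unwrap(), 12);
}
```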
> 1. garbage collection is amazing for developer productivity if you can afford it.
Probably not for everybody. I haven't noticed any significant productivity boost when coming from C++ to Java years back, and then I also can't see any significant productivity decrease when switching from Java to Rust. I feel the unavailability of GC is widely offset by other features that make Rust more productive than Java. Surely, I got stuck once or twice while learning (Rust is harder than Java) but once it clicked, I know which things to avoid and generally it goes very smoothly. Now there are many things I love about Rust and I constantly miss in Java.
(The number of times I wished I had RAII in Java) > (The number of times I wished I had tracing GC in Rust).
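The RAII being missed is Rust's Drop trait; a minimal sketch of deterministic cleanup at scope exit (`Guard` is an invented type standing in for a file handle or lock):

```rust
use std::cell::RefCell;
use std::rc::Rc;

// A guard that records when it is dropped, standing in for closing a
// file or releasing a lock deterministically at scope exit.
struct Guard {
    log: Rc<RefCell<Vec<&'static str>>>,
}

impl Drop for Guard {
    fn drop(&mut self) {
        self.log.borrow_mut().push("released");
    }
}

fn main() {
    let log = Rc::new(RefCell::new(Vec::new()));
    {
        let _guard = Guard { log: Rc::clone(&log) };
        log.borrow_mut().push("working");
    } // _guard goes out of scope here; Drop runs immediately, no GC pause

    assert_eq!(*log.borrow(), vec!["working", "released"]);
}
```

In Java the equivalent needs try-with-resources plus an AutoCloseable, and only for lexically scoped resources; Drop also runs when a value is moved into a container that is later dropped.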
if you were a competent C++ programmer, yeah, i can absolutely see that; nowadays, most people start with a GC-ed language and get tripped up all the time if things they take for granted aren't there. unsurprisingly.
I too am a hardcore fan of Rust, but whenever I see incomplete features like GATs / HRTBs / no async functions in traits / too much boilerplate code even when it's unnecessary, it makes me annoyed. However, when I look at the other side of the aisle, I think Rust is still a good PL.
I think a big factor that makes rust hard is the prevalence of special syntax everywhere. These are special character sequences that could have been expressed another (more verbose) way, that were introduced for dev convenience. It makes the code extremely hard to read for me.
i haven't met one dev who's interested in rust for solving a problem. they just want to use rust for its own sake. which is ok. whatever. but that tells you something.
Author makes a claim that Rust is hard because it's a systems language, i'd like to claim that Rust is hard because it hides the systems part of systems programming.
Systems programming is just UNIX/POSIX/WIN32 programming: it's writing software that interacts with the OS. Rust hides much of this in language abstraction (for good reason), but the trade-off is that using the language becomes complicated.
Don't all languages hide OS APIs under their own libraries and abstractions? How is Rust special in this? IMO Rust is hard because it (the borrow checker) forces you to keep track of data and state manually. If you were aiming for practically bugless C/C++ I believe you'd have to expend pretty much the same effort, but Rust forces you to do it, while C/C++ leave it up to you.
Rust really has a steep learning curve. I have 20 years of C and C++ experience. It did not take me as much time to learn languages like JavaScript (especially async JavaScript) as it is taking me to understand borrowing and other features of Rust.
I initially decided to use Rust for developing a microservice for my video conferencing startup HeyHello.video. I think I am going to stick to JavaScript or use Go.
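For others hitting the same wall: the core borrowing rule is "any number of shared borrows XOR exactly one mutable borrow, each ending at its last use". A minimal illustrative sketch:

```rust
fn main() {
    let mut v = vec![1, 2, 3];

    // Any number of shared (&) borrows may coexist...
    let a = &v;
    let b = &v;
    assert_eq!(a.len() + b.len(), 6);
    // ...and with non-lexical lifetimes they end at their last use above.

    // So a single mutable borrow is now allowed.
    let m = &mut v;
    m.push(4);
    // let c = &v; // ERROR while `m` is live: cannot also borrow `v` as shared

    // `m`'s last use was push(), so `v` is usable directly again.
    assert_eq!(v, vec![1, 2, 3, 4]);
    println!("borrows ended at last use, not at end of scope");
}
```

The part that takes unlearning is that borrows end at their last *use*, not at the closing brace, so reordering statements often fixes an error.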
Unpopular opinion: I find rust code unreadable. And I consider myself a polyglot. I am sure all those symbols have a special meaning but it's like perl to me. Just throw some random special characters and they all mean something.
Perl did nothing wrong, but I echo the sentiment you have about Rust.
Last year I ported a mid-size internal project to it as part of a resolution to learn a new language, and the experience left me feeling like it was just me who "didn't get it". Building things took a lot longer, not just in the amount of code, but also the compile time. Ultimately compile time was the dealbreaker. On more than one occasion I'd have to go back to the git history to remember what it was I had planned to do next.
Rust pro tip: don't compile til you're really done. Use something like cargo check or rust-analyzer to quickly spot errors without wasting time on the rest of compilation.
You are not the only one. I find rust code unreadable too even though I can understand it. I wish the type signatures and function definitions were on separate lines a la Haskell. That would have drastically improved the legibility.
Anything with trait bounds beyond `fn foo<T: Trait>(t: T)` should put them in the `where` clause (and in this case, it arguably should be `fn foo(t: impl Trait)`).
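For concreteness, the three equivalent spellings side by side (toy functions, names mine):

```rust
use std::fmt::Display;

// Inline bound: fine for a single simple bound.
fn show_inline<T: Display>(t: T) -> String {
    format!("{}", t)
}

// `impl Trait` in argument position: lighter still, when the type
// parameter isn't referred to anywhere else in the signature.
fn show_impl(t: impl Display) -> String {
    format!("{}", t)
}

// `where` clause: the signature stays readable once bounds multiply.
fn show_where<T, U>(t: T, u: U) -> String
where
    T: Display,
    U: Display + Clone,
{
    let _extra = u.clone();
    format!("{}/{}", t, u)
}

fn main() {
    assert_eq!(show_inline(1), "1");
    assert_eq!(show_impl(2), "2");
    assert_eq!(show_where(3, 4), "3/4");
    println!("all three forms behave identically");
}
```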
I had that same issue but got over it once I started diving into Rust. What really helped me are the compiler error messages. Without them I would have given up learning it.
I find Rust easier to read in a real IDE with semantic highlighting (although the IntelliJ plugin constantly breaks this) and inlay type hints, so that everything doesn't end up blurring together.
I tell new programmers to start with either Python or JavaScript. The pay is the same or better, and it's much easier.
I've been programming for almost a decade, and I still can't understand lower-level languages like Rust and C++.
Why, if you can design a new language, not make something with the simplicity of Python?
If you're going to compile the binary anyway, have the compiler figure out the types and optimize.
This is why JavaScript is eating the world. We don't need legions of hardcore software engineers. We need people who use a bit of Python, JavaScript, etc, to make their jobs easier.
The future of programming is easier, higher-level languages. Rust has some applications, but it's too hard for most.
Python can be simple, to the degree it is simple, because it makes so many of your decisions for you.
In Rust the design space is very large. For instance, if your new type is going to refer to some data, should it own the data, should the data be on the heap (in a box), should it carry a safe reference (if so, mutable or not?), should the data be reference counted, or possibly atomic reference counted, or maybe for some reason you need to keep an unsafe raw pointer to the data?
In Python this is simple because the decision is already made for you: your new type keeps (essentially) a reference counted ref to the data it needs. That’s it. That’s all you can do.
That’s great if that decision is always adequate for your application. If you need extra flexibility you’ve got a problem.
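To make that design space concrete, the main options in one place (toy types, all names mine; the raw-pointer case is noted but left out since it needs `unsafe`):

```rust
use std::rc::Rc;
use std::sync::Arc;

struct Data {
    n: u32,
}

// 1. Own the data outright (inline in the struct).
struct Owns { data: Data }

// 2. Own it on the heap, e.g. for indirection or large values.
struct Boxed { data: Box<Data> }

// 3. Borrow it: a lifetime now ties this type to the data's owner.
struct Borrows<'a> { data: &'a Data }

// 4. Share it within one thread via reference counting.
struct Shared { data: Rc<Data> }

// 5. Share it across threads via atomic reference counting.
struct SharedAcrossThreads { data: Arc<Data> }

// 6. A raw `*const Data` is also possible, but only usable in `unsafe` code.

fn main() {
    let d = Data { n: 7 };
    let owned = Owns { data: Data { n: 7 } };
    let boxed = Boxed { data: Box::new(Data { n: 7 }) };
    let borrowed = Borrows { data: &d };
    let shared = Shared { data: Rc::new(Data { n: 7 }) };
    let threaded = SharedAcrossThreads { data: Arc::new(Data { n: 7 }) };
    assert!(owned.data.n == boxed.data.n
        && borrowed.data.n == shared.data.n
        && threaded.data.n == 7);
    println!("five safe ways to hold the same data");
}
```

Roughly, Python's single built-in choice corresponds to option 4/5 everywhere; Rust makes you pick, which is exactly where the extra design effort goes.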
I wonder what approach acts as the best mentor for someone who wants to eventually develop a good mental model for how systems work. The C approach where you suffer at runtime and then have to debug ferociously, or the Rust approach where your mentor hits you with a stick all day?
I had C programming experience before touching Rust. I may be biased, but I think this is the right way to understand how computer systems work. Otherwise, Rust's design choices will make little sense to you.