
One of the slides lists:

> Has driven changes in upstream Rust: more try_ methods for Vec that don't panic in OOM: https://github.com/rust-lang/rust/pull/95051

I was curious to have a look at that PR, but it seems it was closed after a long discussion (mainly because it would add ~30% more methods to Vec?). So which changes landing in upstream Rust is the bullet point referring to? Was the Keyword Generics Initiative born out of this?



> (mainly because it would add ~30% more methods to Vec?)

Sort of. Rather than bolting fallible methods ad hoc onto an existing type, the feeling was that it would be better to take a step back and actually design this properly. That includes letting third-party crates experiment with different options.

Maybe we should have a FallibleVec type? Maybe common vec-like methods could be abstracted out into a `RawVec` type? Maybe both? Maybe the (unstable) `Allocator` API could be adapted to better suit all these cases? Whatever the case, it's not great to be adding a ton of methods in the heat of the moment.


They actually split these changes into their own crate I think:

https://github.com/microsoft/rust_fallible_vec


Panicking on OOM was always a questionable design decision.

It doesn't always mean that your app has no memory; it just means that your chosen allocator has no free memory. That's not always an unrecoverable situation.


A few things to say about this:

1. It's not always possible to detect memory allocation failure (e.g., Linux overcommit). So many applications will have to design their operation around the possibility that out-of-memory means the kernel will kill their process anyway, in order to support those platforms.

2. Memory allocations tend to be pretty close to omnipresent. If you consider a stack overflow to be a memory allocation failure, then literally every call is a chance for memory allocation failure. But even before then, there's often the use of small amounts of heap memory (things like String in particular).

3. Recovery from OOM is challenging, as you can't do anything that might allocate memory in the recovery path. Want to use Box<dyn Error> for your application's error type? Oops, can't do that, since the allocation of the error for OOM might itself cause an allocation failure!
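One common workaround for point 3 is to make the OOM case a payload-free variant of your error enum, so reporting the failure itself allocates nothing. A minimal sketch; the `AppError` type and `grow` helper here are made-up illustrations, not anything from std:

```rust
use std::collections::TryReserveError;

// Hypothetical error type: the Oom variant carries no heap data,
// so constructing it on the recovery path performs no allocation.
#[derive(Debug, PartialEq)]
enum AppError {
    Oom,
    Other(&'static str),
}

impl From<TryReserveError> for AppError {
    fn from(_: TryReserveError) -> Self {
        AppError::Oom
    }
}

// Grow a buffer, mapping allocation failure to the allocation-free variant.
fn grow(v: &mut Vec<u8>, n: usize) -> Result<(), AppError> {
    v.try_reserve(n)?;
    Ok(())
}
```

This is exactly why `Box<dyn Error>` is a poor fit for the failure path: boxing requires an allocation, while a unit enum variant does not.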


You can get a view into what Rust looks like with fallible allocation in Rust for Linux, since Linus required this. So e.g. Rust for Linux's Vec only has try_push() and you'd better have successfully try_reserve'd enough space to push into or it may fail.

https://rust-for-linux.github.io/docs/alloc/vec/struct.Vec.h...

NB The prose for Vec here, including examples, is copied from the "real" Vec in Rust's standard library, so it talks about features like push but those are not actually provided in Rust for Linux.
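You can approximate the same discipline in stable Rust today, since `Vec::try_reserve` is stable: reserve fallibly up front, and the subsequent writes cannot allocate. A sketch against the ordinary std Vec; the `append_all` helper is made up:

```rust
use std::collections::TryReserveError;

// Reserve fallibly first; the extend_from_slice afterwards fits in the
// reserved capacity, so it performs no allocation and cannot abort on OOM.
fn append_all(dst: &mut Vec<u8>, src: &[u8]) -> Result<(), TryReserveError> {
    dst.try_reserve(src.len())?;
    dst.extend_from_slice(src);
    Ok(())
}
```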


I think it speaks to the fact that Rust's original/ideal use case (writing a web browser) is slightly higher-level than actual kernel-level OS work (just like C++'s is). It's expanded into kernel territory, and done a good job of it, but there are places like this where a choice was made that creates some dissonance.

If you're writing a high-performance userspace application, there's a good chance you don't want to deal with handling an error in every single place where your code allocates. I think Rust made the right choice, even though it means some growing pains as it starts being used in kernels.


Not quite, because Mozilla themselves use forked data structures with fallible allocation.


Interesting! Do they use those everywhere throughout Firefox or only in special situations?


It really depends on what you are doing. If you're writing an application running on an operating system, you don't need out-of-memory handling; it will even make programming harder.


Like I said, if your allocator is actually the system allocator then yes maybe you're right. If instead you're doing something like using an arena allocator then OOM isn't a huge deal, because all you've done is exhaust a fixed buffer rather than system RAM; totally recoverable. There are huge performance gains to be had with using custom allocators where appropriate.


Sure, and you can do that with Rust today. There's nothing stopping you from writing a custom data structure with its own custom allocator. The "abort on OOM" policy is not a property of the language, it's a property of certain collections in libstd that use the global allocator.
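As a sketch of that idea, here is a toy bump arena that treats exhaustion as a recoverable `None` rather than an abort. Everything here is illustrative (the `Arena` type is made up, not a real `Allocator` implementation):

```rust
use std::cell::Cell;

// Toy bump arena: hands out offsets into a fixed buffer and reports
// exhaustion as None. "OOM" here just means the buffer is full --
// system RAM is untouched, so the caller can recover.
struct Arena {
    buf: Vec<u8>,       // backing storage, allocated once up front
    used: Cell<usize>,  // bump pointer
}

impl Arena {
    fn with_capacity(n: usize) -> Self {
        Arena { buf: vec![0; n], used: Cell::new(0) }
    }

    // Returns the offset of n freshly reserved bytes, or None when
    // the arena is exhausted.
    fn alloc(&self, n: usize) -> Option<usize> {
        let start = self.used.get();
        if n > self.buf.len() - start {
            return None; // recoverable: the fixed buffer is simply full
        }
        self.used.set(start + n);
        Some(start)
    }
}
```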


I think the point here is that users would like to use Vec, HashMap etc using that arena allocator and handle OOM manually instead of having to write their own collection types.


Then their problem is not a lack of try_, it's a lack of custom allocators.


I think you missed "Add support for custom allocators in Vec": https://github.com/rust-lang/rust/pull/78461


If that's the first time you touch the last bits of your private arena, you can trigger the OOM killer.


True, but it's also worse than that: if an unrelated application starts consuming slightly more memory, it can trigger an OOM kill of your application.


That's not necessarily the case though. It may be worth it for, say, a document viewer to catch the OOM condition and show a user-friendly error instead of dying. Of course, Linux with overcommitted memory can't do this. But on Windows, that's totally a thing that can happen.


I was curious so I did a Brave search to find out if that behavior can be changed. You can supposedly (I haven't tried it) echo 2 into /proc/sys/vm/overcommit_memory and the kernel will refuse to commit more memory for a process than the available swap space and a fraction of available memory (which is also configurable). See https://www.win.tue.nl/~aeb/linux/lk/lk-9.html#ss9.6 for more details.

I usually write my programs to only grab a little more memory than is actually needed, so I might play around with this at home. I wonder if this has led to a culture of grabbing more memory than is actually needed, since mallocs only fail at large values if everything is set the traditional way.

Defaulting to overcommit seems risky. I'd much rather the system tell me no more memory is available than just have something segfault. I could always wait a bit and try again, or at the very least shut down the program in a controlled manner.


Disabling overcommit on a generalist Linux system is a terrible idea because:

* fork+exec (fork momentarily commits a full copy of the parent's address space)

* some tools or even libs map gigantic areas of anonymous memory but only touch a few bits of it.


You can add enough swap space that fork+execve always works in practice (although vfork or vfork-style clone is obviously better if the goal is to execve pretty much immediately anyway). Linux allows reserving address space with PROT_NONE, populating it later with mprotect or MAP_FIXED, and many programs do it like that.

However, I stopped using vm.overcommit_memory=2 because the i915 driver has something called the GEM shrinker, and that never runs in that mode. That means all memory ends up going to the driver over time, and other allocations fail eventually. Other parts of the graphics stack do not handle malloc failures gracefully, either. In my experience, that meant I got many more desktop crashes in mode 2 than in the default mode with the usual kernel OOM handler and its forced process termination.


It is the best behavior for the language you are writing your browser engine in.

The thing is that, ironically, a browser engine is only marginally inside Rust's niche. (Or maybe it's even marginally outside; at this point I don't think anybody knows.) And for most things that fit squarely at the focus of the language, it is a bad choice.


The original design had no allocator parameter on collections and no alloc crate. If you cared about allocation, you'd use your own data structures with your own allocator in a no_std binary.

The alloc crate came later, and custom allocator support came later still; it is not even stable yet.


For people who might be confused: setting a custom global allocator is possible in stable Rust, but the Allocator trait isn't stable yet, so specifying the allocator for a specific instance of a Vec isn't possible in stable.

https://doc.rust-lang.org/std/alloc/index.html#the-global_al...

https://github.com/rust-lang/wg-allocators
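To illustrate the stable half of that split: a `#[global_allocator]` can be declared today. A minimal sketch that merely forwards to the system allocator (the `Passthrough` name is made up):

```rust
use std::alloc::{GlobalAlloc, Layout, System};

// Stable today: swapping the process-wide allocator.
// This one just forwards every call to the system allocator.
struct Passthrough;

unsafe impl GlobalAlloc for Passthrough {
    unsafe fn alloc(&self, layout: Layout) -> *mut u8 {
        unsafe { System.alloc(layout) }
    }
    unsafe fn dealloc(&self, ptr: *mut u8, layout: Layout) {
        unsafe { System.dealloc(ptr, layout) }
    }
}

#[global_allocator]
static GLOBAL: Passthrough = Passthrough;
```

What remains unstable is the per-collection allocator parameter (the `Allocator` trait and constructors like `Vec::new_in`), which is what you'd need to put one particular Vec in an arena.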


It really is one of my major gripes with Rust at the moment.


Indeed - I can understand that some languages like JavaScript don't care; that's fine.

But the entire value proposition of Rust is reliability and predictability: use this in critical applications. And this is the first time this language is being used in a major OS.

The fact that these changes weren't accepted is not a good sign.


As mentioned elsewhere, a different design is being pursued. In addition, lots of similar changes have already landed as part of the Rust-in-Linux work, which has many of the same needs.

In addition, Rust doesn't require you to use allocation, ever. It was originally expected that users who can't handle allocation failures would eschew libstd in favor of libcore (a subset of libstd with all the allocating parts removed).


> And this is the first time this language is being used in a major OS.

Sorry to be pedantic, but that's not really the case: https://en.wikipedia.org/wiki/Rust_for_Linux



