Somewhat unrelated, but it's worth pointing out that noexcept is especially relevant to move semantics.
In fact, most C++ developers believe that throwing an exception in a noexcept function is undefined behavior. It is not: the behavior is defined to call std::terminate. Which leads one to ask how the runtime knows to call it. It knows because noexcept functions are compiled as if wrapped in a hidden try/catch that decides whether to call it. The result is that noexcept can actually hurt performance, which is surprising behavior. C++ is just complicated.
I’m not well versed in C++’s exception system, but why can’t the unwind system itself call std::terminate? Why does it need to be the annotated method (that unwinding returns to)?
It doesn’t need to be, but the annotated function can still miss optimization opportunities, because it must be compiled as if it had a try-catch block if the compiler can’t prove the absence of exceptions, and this places constraints on reordering and inlining.
On the other hand, the guarantee given by noexcept can enable optimizations in the caller.
A try { } catch block that calls terminate has no overhead. Normally the constraints on reordering exist because, e.g., constructor/destructor semantics and other side effects need to be accurately preserved during unwinding, but here any exception is going to result in a call to terminate, and automatic-storage destructors are not going to run.
This was the entire point of noexcept versus the throw() specifier...
This is unfortunately not always true, even with a table-based unwinder. In order to detect the noexcept frame and call terminate(), the unwinder must be able to see the stack frame that has noexcept. This means that the compiler must suppress tail call optimization when a noexcept function tail calls a non-noexcept function.
Because the exception can’t be allowed to escape the function marked noexcept. No matter the actual implementation, the exception has to be effectively caught in that no except function.
I also find it difficult to conceive of a case where adding noexcept would lead to slower/longer code, other than arbitrary noexcept-based overloads such as the one in TFA.
The article describes the performance implications of the hash table caching hashes. That it decides whether to do so by examining the noexcept-ness of the passed-in types doesn't make noexcept itself a pessimization.
And, as you can see on the sibling thread where I'm being downvoted, there is an actual pessimization: since a noexcept function requires an eh_frame, it will not be able to tail-call (except for noexcept functions).
-fno-exceptions doesn't get rid of exceptions, it just causes your program to abort when one is thrown, which sounds--kind of worse? How do you deal with (for example) a constructor that fails? And if you're using the Standard Library, it is very difficult to avoid code that might throw an exception.
> How do you deal with (for example) a constructor that fails?
The usual alternative is to have a factory function that returns std::optional<T> or std::expected<T, Err>. This avoids two-stage init, but has other tradeoffs.
Works rather well in Rust, but in C++ it's not as nice, partly because of the lack of something like Rust's question-mark operator. (To be fair, that's kind of work-aroundable with compiler extensions, but MSVC for example doesn't have most of them.)
In other words, you introduce an invalid state to every object and make constructing objects a lot more cumbersome. The first is the exact opposite of the (imo highly desirable) "make invalid states unrepresentable" principle, and the second is also a pretty extreme cost to productivity. I wouldn't say this is never worth it, but it's a very high price to pay.
A better solution than what rmholt said is to have a static method that returns std::optional<T>. This way if T exists, it's valid.
Later you get into a whole other debate about movable types and what T should be once it's moved from (for example, if you want to turn that std::optional<T> into a std::shared_ptr<T>) when it only has non-trivial constructors. An idea that just popped into my head: give T a constructor from std::optional<T>&& that moves from it and resets the optional. But then it's not a real move constructor. With move constructors, a lot of the time you just have to accept that they'll leave the other object in some sort of empty or invalid state.
I'm not saying it's perfect but it's better than dealing with C++ exceptions. At least with error codes you can manually do some printing to find out what went wrong. With C++ exceptions you don't even get a line number or a stack trace.
You do get invalid states with exceptions, in a much worse way. Think of an exception thrown from a constructor, or, even better, from a constructor of one of the members, before the constructor's body is even reached. Not to mention that managing state becomes much more complicated when every statement can cause stack unwinding unexpectedly.
> (...) throwing an exception in a noexcept function is undefined behavior.
Small but critical correction: noexcept functions can throw exceptions. What they cannot do is allow exceptions to bubble up to their callers. This means it is trivial to define noexcept functions: just add a try-catch block that catches all exceptions.
Hmm... If you were reading the documentation for function foo() and it read "if the argument is negative, foo() throws an exception", would you understand that the function throws an exception and catches it internally before doing something else, or that it throws an exception that the caller must catch?
> If you were reading the documentation for function foo() and it read "if the argument is negative, foo() throws an exception" (...)
I think you didn't understand what I said.
In C++ functions declared as noexcept are expected to not allow exceptions to bubble up. If an exception bubbles up from one of these functions, the runtime calls std::terminate.
This does not mean you cannot throw and handle exceptions within such a function. You are free to handle any exception within a noexcept function. You can throw and catch as many exceptions as you feel like while executing it. You just can't let them bubble up out of the scope of your function.
I think you're the one who didn't understand me. You made a correction about the meaning of the phrase "throwing an exception". The point of my question is that your correction is incorrect, because if you read the sentence "if the argument is negative, foo() throws an exception" you would indeed understand that an exception will unwind the stack out of the foo() call in such a situation. There's no difference between "foo() allows an exception to bubble up" and "foo() throws an exception"; both phrases describe the same situation.
I suppose you are technically correct that noexcept functions can throw internally. But that's just being pedantic, isn't it? From the observer/caller point of view the function won't ever throw. It will always return (or abort).
> I suppose you are technically correct that noexcept functions can throw internally. But that's just being pedantic, isn't it?
No. There are comments in this thread from people who are surprised that you can still handle exceptions within a noexcept function. Some seem to believe noexcept is supposed to mean "don't use exceptions within this scope". My comment is intended to clarify that, yes, you can throw and catch any exception from within a noexcept function, because noexcept does not mean "no exceptions within this scope" and instead only means "I must not allow exceptions to bubble up, and if one does anyway, just kill the app".
Yeah. Throwing from a noexcept function is often a better abort() than abort() itself because the std::terminate machinery will print information about whatever caused the termination, whereas abort will just SIGABRT.
> what.. noexcept throws exception..? what kind of infinite wisdom led to this
Not wisdom at all, just a very basic and predictable failsafe.
If a function that is declared not to throw any exception happens to throw one, the runtime treats that scenario as an egregious violation of its contract. Consequently, as it does with any malformed code, it terminates the application.
There’s a reason both Go and Rust eschew exceptions. They’re something that superficially seemed like a great idea but that in practice complicate things by creating two exit paths for every function. They also don’t play nice with any form of async or dynamic programming.
C++ should never have had them, but we have sane clean C++ now. It’s called Rust.
IMO the pendulum swung too far with Rust. The experience is better than C++, but the template generics system is not very powerful. They ostensibly made up for this with macros, but (a) they're annoying to write and (b) they're really annoying to read. For these reasons Rust is much safer than C++ but has difficulty providing fluent interfaces like Eigen. There are libraries that try, but AFAICT none match Eigen's ability to eliminate unnecessary allocation for subexpressions and equivalently performing Rust code looks much more imperative.
Rust doesn't have a template system per se. C++'s templates are closer to C's macros. Rust has a typed generics system which does impose additional limits but also means everything is compile time checked in ways C++ isn't.
I agree that Rust's macros are annoying. I think it was a mistake to invent an entirely different language and syntax for it. Of course Rust also has procedural macros, which are macros written in Rust. IMHO that's how they should all work. Secondary languages explode cognitive load.
I'm not attached to the word "template", I just wanted to clarify that they're not Java-style generics with type erasure. If you'd like me to use "monomorphizing generics" instead I'm game :)
Even procedural macros are annoying, though. You need to make a separate crate for them. You still need to write fully-qualified everything, even standard functions, for hygiene reasons. Proc macros that actually do, erm, macro operations and produce a lot of code cause Rust's language server to grind to a halt in my experience. You're effectively writing another language that happens to share a lexer with Rust (what's the problem with that? Well, if I'd known that I'd need another language to solve my problem I might not have chosen Rust...).
For all its warts, using constexpr if and concepts in C++ is much easier for macro-like programming than dealing with Rust's two worlds of macros and special syntax.
If static reflection does indeed land in C++26, this experience will get even better.
Rust panics are basically exceptions, aren’t they? Typically they aren’t caught without terminating. But you totally can. And if you’re writing a service that runs in the background you’ll probably have to.
Rust Result is basically a checked exception. Java makes you choose between adding an exception to "throws" or catching it, Rust makes you choose between declaring your function as returning Result or checking if you got an Err.
The only difference is that Rust has better syntactic sugar for the latter, but Result is really isomorphic to Java checked exceptions.
Panic could be said to be the same as an unchecked exception, except you have a lot more control on what causes them. The panic you get from calling unwrap() on an Option is the same as a NullPointerException, but you have full control on which points of the program that can generate it.
Rust goes to substantial lengths to allow unwinding from panics. For example, see how complicated `Vec::retain_mut` is. The complexity is due to the possibility of a panic, and the need to unwind.
I’ve never written any Java so your comparisons are lost on me.
Rust Result is great. I love it.
The root article was talking about C++. Rust panic is basically the same as a C++ exception afaict. With the caveat that Rust discourages catching and resuming from panics. But you can!
Catching panics is best-effort only. In general, Rust panics can't be caught. (Even if a program is compiled with panic=unwind, this can change to abort at run-time.)
If you find exception-free code, which is necessarily littered with exit-value checks at every call, which discourages refactoring and adds massive noise, then you can call the decision to eschew exceptions "sane" and "clean", but I find the resulting code to be neither. Practically speaking, exit codes will often not be checked at all, or an error check will be omitted by mistake, thereby interrupting the entire intended error-handling chain. Somehow that qualifies as a better outcome, or more sane? Perhaps it does for a code base that is not changing (write-once) or is not expected to change (i.e. throwaway code written in "true microservice" style).
It is better. That doesn't make it perfect but it's better.
This is to be expected; in fact, Rust has to be a lot better to even make a showing, because C is the "default" in some sense. You can't just be similarly good; you have to be significantly better for people to even notice.
I expect that long before 2050 there will be other, even better languages, which learn from not only the mistakes Rust learned from, but the mistakes in Rust, and in other languages from this period.
Take Editions. C was never able to figure out a way to add keywords. Simple idea, but it couldn't be done. They had to be kludged as magic with an underscore prefix to take advantage of an existing requirement in the language design, in C++ they decided to take the compatibility hit and invalidate all code using the to-be-reserved words. But in Rust they were able to add several new keywords, no trouble at all, because they'd thought about this and designed the language accordingly. That's what Editions did for them. You can expect future innovation along that dimension in future languages.
It's categorically better because it's memory-safe. We just had another RCE bug in the Windows TCP/IP stack and it's 2024. This should not be happening.
> The outcome is that libstdc++’s unordered_set has performance characteristics that subtly depend on the true name and noexceptness of the hash function.
> - A user-defined struct H : std::hash<std::string> {} will see smaller allocations and more calls to the hash function, than if you had just used std::hash<std::string> directly. (Because std::hash<std::string> is on the blacklist and H is not.)
> - A user-defined hasher with size_t operator() const noexcept will see smaller allocations and more calls to the hash function (especially during rehashing). One with size_t operator() const will see larger allocations and fewer calls to the hash function.
Also, I hope that if you’re reading this post in a year or two (say, December 2025), these specific examples won’t even reproduce anymore. I hope libstdc++ gets on the ball and eliminates some of this weirdness. (In short: They should get rid of the blacklist; pay the 8 bytes by default; introduce a whitelist for trivial hashers specifically; stop checking noexceptness for any reason.)
FWIW: I've had your comment at the back of my mind for the past 5 days, and I haven't yet come up with any clever way to work around it. I think you're simply right: to "fix" this in "new code" would break ABI for anyone who passed e.g. `vector<unordered_set<T>>` across an ABI boundary. (I.e., anywhere that "new code" might construct an unordered_set object according to the "new ABI," which object might later be destroyed by "old code" which assumed the "old ABI," leading to heap corruption.)
So yeah, it is probably physically impossible for libstdc++ to "get on the ball" in the way I'd hoped. Darn.
If you care about performance, you should consider using Abseil's hash tables instead of unordered_set and unordered_map. The Abseil containers are mostly drop in replacements for the standard unordered containers, except they have a tweaked API (e.g. erase() returning void) that admits important performance optimizations and (except the node based ones) use open addressed hash tables instead of chaining (i.e. each hash bucket is no longer a linked list). You end up with 2x-3x speedups in a lot of scenarios. The standard containers can't be fixed to match the performance of the Abseil ones because 1) the specification API requires pessimization, 2) Linux distributors are reluctant to break C++ "ABI" (such as it is).
I mean, of course you should follow this article's advice on noexcept, but there's a whole world of associative container performance optimizations out there.
FWIW, I’ve been working on a project that has transitioned from a standard-ish CMake super build to vcpkg and it has been a fantastic upgrade in terms of usability and reliability.
There are huge swaths of the C++ lands map I’ve not visited, so I’m sure there are areas where it wouldn’t be a good choice, but I personally think vcpkg has been great.
I 100% agree with the other comment: Abseil and Folly are both broken monstrosities by design... the build system garbage is just one reason almost nobody uses them outside facebook and google.
The chaining hashtable in liburcu is truly lock free, based on a real build system, and in my experience outperforms everything facebook and google have ever published: https://github.com/urcu/userspace-rcu
It is also pointless, because the question is not how to build the library, the question is how to integrate and reference it in the project where you want to use it.
Sure, and packaging the artifacts of the liburcu build is going to be trivial in any package management system. It makes very few assumptions about how it's going to be used. Using Folly requires you to subscribe to the Folly worldview of dependency management.