In square(), we multiply the input by itself and return the result in rax. In square2(), we call the function with an extra hidden pointer argument and store the output Result via that pointer. That's really bad considering how central this fat-return-value model is to Rust programming.
x86_64 has nine caller-saved registers. We should be using all of them for function return values. Yes, I know that we're returning by pointer because the Box has a non-trivial destructor. That's irrelevant, a holdover from the bad old C++ ABI. Rust should be returning from square2() via two registers, and I hope it implements this optimization before freezing its ABI.
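For reference, a minimal sketch of the kind of pair being compared (signatures assumed; the linked godbolt may differ in detail):

    // Returned directly in rax: a plain machine word.
    pub fn square(x: u64) -> u64 {
        x * x
    }

    // Box has a non-trivial destructor, so the compiler (as of this
    // discussion) returns this via a hidden out-pointer (sret) rather
    // than in registers.
    pub fn square2(x: u64) -> Result<Box<u64>, ()> {
        Ok(Box::new(x * x))
    }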
> That's irrelevant, a holdover from the bad old C++ ABI.
It's not irrelevant, it's doing whatever's most convenient given that the caller is to drop the Result<> later. As mentioned in the blogpost you link to, the [[clang::trivial_abi]] attribute changes how these things happen, such that it's no longer clear whether the caller or callee is responsible for a value that's passed via trivial_abi. I don't think it would make sense for Rust to adopt this, especially not in all cases.
> That's really bad considering how central this fat-return-value model is to Rust programming.
It's not 'fat return values' that lead to this, it's just boxed return values. Or more generally, values with non-trivial drop implementations.
It's not unclear in the slightest. If the value is dropped before the callee returns, the callee drops it. If the value is dropped after the callee returns, the caller drops it. The claim that returning by pointer is better for a caller that's going to drop the value later makes no sense. If you think there's an actual problem, point it out and be specific.
The problem isn't the boxing. The problem is that the boxing triggers a needlessly inefficient calling convention, and returning boxed values is central to Rust. The language implementation as it is today uses the machine inefficiently. Rust can make up its own ABI. Returning via pointer when the architecture has tons of registers available is just bad ABI no matter what excuses people use to justify bad performance.
My mental model says that the cost of those may be about the same, with no clear winner over many benchmarks. I've even seen recently a claim that using two registers was measured to be slower than stack memory return when tested in another language, so it's been on my TODO list to performance-test it (I work on another language runtime). But I don't actually expect to be able to measure the difference; instead I expect the latency often simply disappears in the execution of the `ret` instruction. Plus, for a non-trivial function, returning via a register may mean you have to materialize an extra load, whereas the box could have been filled much earlier and kept a tiny bit more of the stack hot.
Sure, that part of the stack is going to be in L1 cache, and there's not a huge difference between that and the register file. But think about the code size cost too. The icache hit won't show up in a microbenchmark, but when you have 100,000 of these functions, the difference will be real.
> I’ve even seen recently a claim that using two registers was measured to be slower than stack memory return
I find that difficult to believe. Those output registers are undefined anyway. I get that store-to-load forwarding may hide the latency hit of the stack access, but the same applies to the registers too. At best, the stack pattern runs at the same speed as register return, but with bigger code size. How on earth could a register return be worse? If it really were worse, we'd use a pointer return for multi-word PODs, and on x86_64 we do actually use a pair of registers to return a pair of words.
I agree it's hard to believe it could be faster. But we expect there must be _some_ cutoff where more registers is worse; it's just a question of where. More-than-one being usually worse seems surprising to me too. I suspect it's unlikely anyone actually benchmarked it when making that decision, and that they just went with the "obvious must" reaction as well. On reflection, I feel like there could be some reasons the stack pattern may sometimes be better than or equivalent to registers for multi-value returns. To your point, though, curiously the Win64 ABI uses only one return register (unlike the SysV ABI, which allocates up to two integer registers as you say).
I’m guessing their microbenchmark may have been that if the compiler had to spill the value earlier, it’s better to be able to spill directly to the sret pointer, than to need extra code to reload it at the end.
The bigger reason I usually see that this doesn't matter is that if you've missed the inlining opportunity on something that small, you're already so far from optimal {code size, performance, memory usage} that it's really too premature to be optimizing the shape of this code (and the ABI).
You’re completely right. But AFAIK there isn’t much of a push to cement the ABI as no one believes we’re “there” with what we have. That’s one of the things I like about rust (although a certain class of developers have a problem with that), the emphasis on trying to get things right rather than merely on time.
I'm somewhat surprised to learn that there's no plan to have Rust support non-byte-addressable architectures, such as are found in DSPs (that is, architectures where the addressable word is something other than 8 bits). It shouldn't be too hard to add such support, at least for no_std use.
Rust relies on LLVM for architecture support, and LLVM itself doesn't support non-8-bit-byte architectures very well.
(Similarly, LLVM's poor support for IEEE 754 sticky bits, FP exceptions, and rounding mode is one factor in Rust's lack of support for these features).
Personally, I think it's great that Rust places radical limits on ISA and C ABI like octet addressing, two's complement integers, and size of bool being 1.
What non-octet-addressing DSPs do you find to be important these days and why don't octet-addressing DSPs work for the use cases?
Is there any point in supporting DSPs? They seem to be getting phased out in favor of FPGAs or chips with more standardized architectures. And LLVM has a little support for some higher-end DSPs like Hexagon, but most toolchains are based on a modified GCC or an in-house C compiler like TI's compiler: http://processors.wiki.ti.com/index.php/TI_Compiler_Informat...
Rust seems to be targeting a development environment that's a bit more constrained than C and C++. C++ is general enough to work everywhere. Rust seems specialized to, well, the kinds of systems that run Firefox. Embedded in the shape of the language is not only byte addressing, but also the idea that memory is practically unlimited and that recovery from allocation failure isn't important.
Rust may be perfectly fine in its domain, but my position is that these baked-in assumptions make its domain a subset of C++'s.
Nothing in the language assumes unfailing allocation. It's only the current standard library APIs that don't support fallible allocation, and there is work underway to remedy this.
The fundamental mistake in the Rust programming language is avoiding the use of exceptions for reporting errors. It's because the Rust designers eschewed exceptions that lots of standard library functions that need to allocate memory are also no-fail. If we reported allocation failure with Result, we'd have Result _everywhere_, which would have been an ergonomic disaster, especially before the "?" operator appeared.
If Rust had been designed to use exceptions from the start, this problem wouldn't exist, and it'd be a much nicer language besides. Some people seem to just have a philosophical or aesthetic aversion to exceptions, which is a shame, because exceptions work wonderfully in practice.
Rust's development path is especially unfortunate because when you return Result everywhere and use the "?" operator consistently, what you end up writing looks just like exceptional code, but with weird warts and limitations from the language essentially implementing exceptions with return codes instead of first-class affordances built into the language itself.
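To see the resemblance concretely, here's a sketch using std APIs (any chain of fallible calls would do):

    use std::fs::File;
    use std::io::{self, Read};

    // With `?` after every fallible call, the happy path reads almost
    // exactly like exception-based code: errors propagate implicitly
    // to the caller.
    fn read_config(path: &str) -> io::Result<String> {
        let mut file = File::open(path)?;
        let mut contents = String::new();
        file.read_to_string(&mut contents)?;
        Ok(contents)
    }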
It's because Rust got error handling wrong that I still prefer C++ despite Rust's advantages in variable lifetime reification. Exceptions turn out to be so useful in general-purpose programming that Rust had to add them in a hacky way after the fact using macros and special operators.
I understand that my position is controversial, but it's still my opinion and it's going to inform my language choice.
I disagree. In fact, I'd go a step further the opposite direction and say that the inclusion of panics was a mistake in the language design, though we're stuck with it now. To be fair, they do allow much better diagnostics when things fail, so this is more of a matter of simplicity vs developer economics, unless I'm missing something.
I agree that supporting panics was a mistake as long as exceptions aren't really supported. Panics force everyone to pay for the possibility of unwinding without getting to use unwinding for exceptions. It's unfortunate. If you're going to have to consider the possibility of unwinding, running destructors, and so on, just add exceptions!
You don't need the panic mechanism to get good stack traces. You can get good stack traces in plain C with abort(): walking the stack does not require unwinding the stack! Confusing these two concepts is just another example of the unfixable confusion baked into the core of Rust that ruins an otherwise-interesting language.
If you don’t want to pay for landing pads, you can compile your program with panic=abort, and you won’t get them. As you say, you can still get stack traces.
The only reason that this is feasible is precisely because panics are not used for recoverable error handling in Rust; libraries don't break if you turn this option on.
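Concretely, that's a one-line profile setting (here in Cargo.toml):

    [profile.release]
    panic = "abort"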
Well, panics should not be used for recoverable error handling in Rust. There's no guarantee that turning off panics in favor of aborts won't break the assumptions of some library. That library should probably break though, since, as you said, panics aren't meant to emulate application logic exceptions.
    use std::panic::catch_unwind;

    fn main() {
        let err = catch_unwind(|| panic!("B"));
        println!("Look ma, recoverable error handling! {:?}", err);
    }
Can I go further and disable stack traces as well, without giving up entirely on std? I don't want my release builds to have any of the extra code required to walk the stack. Just let the app crash; if I want a stack trace, I'll use a debugger.
Almost all of the bloat you get for stack traces is the extra information embedded in the stack frames, which you still need if you want your debugger to be able to display stack traces. If you don't want them to get printed on panic, you can give it a custom panic hook that doesn't do the stack trace walk.
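A sketch of such a hook, using std::panic::set_hook (the body is just illustrative):

    use std::panic;

    fn main() {
        // Replace the default hook: report the panic message, but never
        // walk the stack for a backtrace.
        panic::set_hook(Box::new(|info| {
            eprintln!("fatal: {}", info);
        }));
        panic!("boom");
    }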
Not downvoting you because I think you bring up meaningful opinions.
I have to say that I much prefer errors as values though. With exceptions I never know what I might get, with error values on the other hand I know exactly what I'll get. And the "?" operator makes it very low friction.
It can sometimes be annoying to wrap error values into an appropriate type though, especially if you combine error types from multiple libraries
> With exceptions I never know what I might get...
Exactly this. It's just as Joel Spolsky wrote in that fifteen-year-old article that hit the front page yesterday: exceptions are an invisible goto.
> It can sometimes be annoying to wrap error values into an appropriate type though, especially if you combine error types from multiple libraries
For one-off code, the quick-error crate is invaluable. For production code, I find this is one more indication of the value of the Rust philosophy. Forcing you to address the issue means you get a chance to thoughtfully consider what API you wish to offer your consumers. Most likely, you want to wrap the original errors so details of your dependencies don't become part of your public API.
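A sketch of that wrapping pattern (ConfigError and read_port are invented names):

    use std::io;
    use std::num::ParseIntError;

    // The public error type wraps the dependencies' errors, so io::Error
    // and friends don't leak into your API surface.
    #[derive(Debug)]
    pub enum ConfigError {
        Io(io::Error),
        Parse(ParseIntError),
    }

    impl From<io::Error> for ConfigError {
        fn from(e: io::Error) -> Self { ConfigError::Io(e) }
    }

    impl From<ParseIntError> for ConfigError {
        fn from(e: ParseIntError) -> Self { ConfigError::Parse(e) }
    }

    pub fn read_port(path: &str) -> Result<u16, ConfigError> {
        // `?` converts each underlying error through the From impls above.
        let s = std::fs::read_to_string(path)?;
        Ok(s.trim().parse()?)
    }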
> Exactly this. It's just as Joel Spolsky wrote in that fifteen-year-old article that hit the front page yesterday: exceptions are an invisible goto.
Exception semantics are well-defined. Exceptions clean up the stack, unlike longjmp and such, and in C++, you can mark which functions throw and which ones don't. When you write Rust code using Result everywhere and propagate all errors using "?", you end up with code that looks just like code that uses exceptions, but with extra syntax noise and decreased efficiency due to the need for runtime branching instead of exception dispatch tables.
When you call a Rust function that can fail, do you carefully think through all the various failure paths and reason about everything from scratch? No. You call a function. The compiler complains, you insert a "?", and the compiler stops complaining. You don't reason about all possible errors, and you shouldn't have to reason about all possible errors. Exceptions free program logic on the common path from the mental overhead of the error path, and that's a good thing.
Languages that start off boldly exception-free, like Rust and Go, end up reinventing the try/catch model because it turns out that exceptions are actually incredibly useful. As long as exceptions have been around, people have claimed that they've made code hard to follow. I reject these claims. They're just not consistent with my experience.
In my experience, it's not difficult at all to understand control flow in exceptional systems, and there are many large C++ systems that you probably use all the time that work with exceptions without major problems. It's the weirdest thing: people who claim that exceptions make C++ unusable on Monday go on to write some Java or Python on Tuesday and don't seem to have immense cognitive difficulty in these languages. That you should avoid exceptions in C++ is obsolete conventional thinking from the 1990s era of bad compilers.
> When you call a Rust function that can fail, do you carefully think through all the various failure paths and reason about everything from scratch?
From scratch? No, of course not. The type of the function tells me just what errors I need to expect.
It's really not complicated. Either you don't care about the error (or it "shouldn't ever happen"), in which case you .expect("some message") and call it a day, or else you're writing professional code and you absolutely need to think about the error cases first.
> Exceptions free program logic on the common path from the mental overhead of the error path, and that's a good thing.
This is backwards from the correct way to write professional software. It is, unfortunately, quite common. The proper way to engineer bug-free software is to address the error cases first.
Making a habit of ignoring what errors might happen is a sure way to get exploitable vulnerabilities and generally unreliable software.
> The proper way to engineer bug-free software is to address the error cases first.
And that's what exceptions do: they handle the error cases in a uniform and automatic way so your program can focus on the logic paths. In the vast majority of cases, you "address" the "error cases" in a given stack frame by cleaning up and propagating all errors to your callers. Both exceptions and Rust's "?" operator automate this process: C++ exceptions just do it better.
Don't conflate not handling errors with letting errors propagate automatically. Exceptions "address" all errors in a uniform way without polluting the logic with irrelevant details of the errors themselves. Generality and cleanliness are hallmarks of good code. When a language feature allows the right thing to happen by default, that's a good language feature.
> professional
Are you suggesting that anyone who uses exceptions is an amateur? Billions of dollars say otherwise. People write "professional" exceptional code all the time. Don't try to present your position as the only "professional" one.
I suppose Joel does a better job explaining it than I do, so I'll just quote him verbatim.
"In other words, the more information about what code is doing is located right in front of your eyes, the better a job you’ll do at finding the mistakes. When you have code that says
    dosomething();
    cleanup();
"… your eyes tell you, what’s wrong with that? We always clean up! But the possibility that dosomething might throw an exception means that cleanupmight not get called. And that’s easily fixable, using finally or whatnot, but that’s not my point: my point is that the only way to know that cleanup is definitely called is to investigate the entire call tree of dosomething to see if there’s anything in there, anywhere, which can throw an exception, and that’s ok, and there are things like checked exceptions to make it less painful, but the real point is that exceptions eliminate collocation. You have to look somewhere else to answer a question of whether code is doing the right thing, so you’re not able to take advantage of your eye’s built-in ability to learn to see wrong code, because there’s nothing to see."
This scenario is contrived. Use RAII and you never have to worry about whether cleanup() is called. You can even write, with some macros, something like
    FINALLY(cleanup());
    dosomething();
And the right thing happens all the time automatically. But you shouldn't even have to use something like this, because cleanup should be implicit in the types you're using, e.g., unique_ptr.
> But the possibility that dosomething might throw an exception means that cleanup might not get called.
You shouldn't care whether dosomething() throws: you should be using the language's scoped cleanup facilities to clean up on scope exit anyway, and doing that improves code clarity whether or not you use exceptions.
You're doing the wrong thing and then complaining that exceptions make code unclear. It's like complaining that a car doesn't float after you've driven off a dock.
Besides: exactly the same bug can occur in Rust, because panics. On the panic path, your cleanup() won't be called. And with "?", it's also easy to forget to call cleanup(), because "?" acts just like rethrowing an exception.
Exceptions, in practice, produce clear code. All the confusion scenarios people sometimes claim arise from the use of exceptions are just contrived.
C++ unfortunately falls over when you throw an exception from a destructor, and it also doesn't let us return error codes from destructors (not that any other language does).
How do you deal with this exceptional destructor problem in practice? Or does it not come up enough to warrant concern?
Destructors shouldn't fail: that's why they're implicitly noexcept in C++11. (You can, however, throw inside a destructor so long as you catch before returning.)
In practice, it's not a problem. If you really want to do something fallible on every success path, you can use an IIFE or a named function to isolate everything before the thing you want to do on the success path.
    ~MyRAIIFile() {
        underlying_file.close(); // also threw something
    }
So, AddSomeStuffToFile threw an exception, and the destructor also threw an exception, meaning I had two exceptions in flight which was undefined behavior and caused some weirdness. It took many hours to track down this particular problem...
I can see that one correct answer would be to put a try-catch around the .close() call, but that's the wrong place for that logic in my case; I want the caller of the destructor to decide what to do to recover. Even Java's checked exceptions would cause chaos here. Only returning an error in the destructor's return type (with a must-use annotation of course) would force me to handle this situation at compile time... but C++ can't do that.
The problem is that your close is fallible in the first place. In general, resource reclaim code should always be infallible. When you deallocate a resource, be it a file descriptor or a chunk of memory, you're returning something to the system. You're providing a gift. The kernel should never refuse this gift.
Linux confuses the issue somewhat. close(2) can report errors, and I'm guessing that when your close() throws an error, it's just propagating something it got from the operating system.
Thing is, close(2) errors aren't really errors. There are three cases: 1) close(2) succeeds; 2) close(2) fails with EBADF; and 3) close(2) fails with some other error. In case #1, there's no problem. In case #2, your program has a logic bug and you should abort, not throw. In case #3, the close operation itself actually succeeded, and the kernel is just reporting some error that occurred during file writeback in the meantime.
Errors in case #3 should be ignored. If you care about file durability, call fsync(2) before close. Catching and propagating IO errors from close(2) ensures nothing, since the kernel is allowed to defer potentially-failing IO operations until after the close!
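In code, that advice looks something like this (a sketch; File::sync_all is std's wrapper around fsync(2)):

    use std::fs::File;
    use std::io::{self, Write};

    fn write_durably(path: &str, data: &[u8]) -> io::Result<()> {
        let mut f = File::create(path)?;
        f.write_all(data)?;
        f.sync_all()?; // deferred writeback errors surface here, not at close
        Ok(())         // the implicit close's result can safely be ignored
    }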
For case #2, isn't it a bit presumptuous of MyRAIIFile to make the decision to abort the entire program? It would be nice if the destructor could report the error upward to whoever called it, so they can decide whether to log or abort.
When you say "In general, resource reclaim code should always be infallible", that sounds kind of optimistic (as this example shows, cleanup code is fallible); the question is just where we handle it. So, should I instead read this statement as "destructors shouldn't fail"? And if so, is that because of the C++ limitation that destructors can't return a value, or fundamentally a best practice unrelated to the language?
> isn't it a bit presumptuous of MyRAIIFile to make the decision to abort the entire program?
No. Closing an invalid file descriptor is a logic bug. It's just as bad as an invalid pointer. When you notice one of these, you crash, because to continue means operating in some unknown and potentially dangerous state.
The usual advice is to add a TryClose method to your MyRAIIFile class that can signal failure, and that also keeps the object around in some well-defined state. This doesn't force you to handle the situation properly, but at least it makes it possible to do so.
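In Rust terms, that advice might look like this sketch (MyRaiiFile and try_close are invented names; flush stands in for whatever fallible teardown applies):

    use std::fs::File;
    use std::io::{self, Write};

    struct MyRaiiFile {
        inner: Option<File>,
    }

    impl MyRaiiFile {
        // The explicit, fallible close: consumes the wrapper and lets the
        // caller decide how to handle failure.
        fn try_close(mut self) -> io::Result<()> {
            if let Some(mut f) = self.inner.take() {
                f.flush()?;
            }
            Ok(())
        }
    }

    impl Drop for MyRaiiFile {
        // The implicit path stays infallible: best effort, errors ignored.
        fn drop(&mut self) {
            if let Some(mut f) = self.inner.take() {
                let _ = f.flush();
            }
        }
    }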
> Besides: exactly the same bug can occur in Rust, because panics. On the panic path, your cleanup() won't be called. And with "?", it's also easy to forget to call cleanup(), because "?" acts just like rethrowing an exception.
It's harder. If you're typing ?, you're aware that you're dealing with a result or option (with the values of the error case clearly documented in the type system), and aware of the fact that you might be returning early, and have forced your own function's signature to also be a result or option.
In contrast, I've had code where finger->Position.X occasionally threw. Because a finger was released, invalidating the old finger ID, before it even gave you the chance to process the finger-up event and realize you shouldn't query that finger. Did you know you need to wrap every finger position query in a try/catch? I didn't, so this was just a rare crash bug for a while. What exception did I catch? I don't remember (but not a null reference nor an access violation exception), and it's not documented, so in new code I guess I'd just try { ... } catch (Platform::Exception^) {} despite the fact that we all know catch-all do-nothing statements are terrible. At least I can minimize the scope!
For bonus points, the exception handling overhead was heavy enough that the framerate of the game we were working on would stutter if you pawed at the touch screen. Better than crashing, but still terrible. Not having source access to the throwing code, and touch release events being delayed, this was the least horrible option available though. Yaaaay.
Was this a nasty edge case and not indicative of all use of exceptions? Yes. Do I eventually encounter such a nasty edge case in most, if not all, large scale projects involving exceptions in practice? Also yes. Often enough that it influences my preferred error handling mechanisms, even.
Do I encounter such nasty edge cases in return-based error handling? Not in the wild. The use of error codes makes it much clearer what can fail. On the rare occasion I've encountered something similar, it's been when harshly stress-testing cross-platform API abstractions in contrived tests, while disambiguating between multiple error codes. Occasionally, the underlying API does something strange and returns an unexpected error code (either a unique error code, or one of the usual error codes in unusual circumstances) and I take a suboptimal error handling path.
Usually, at worst, the end user would've needed to retry something, even if I hadn't caught and worked around the underlying API weirdness.
> Exceptions, in practice, produce clear code.
Do you catch NullReferenceExceptions instead of writing basic if conditionals to check values for null? Probably not. For such unexceptional cases it's less clear (which reference, exactly, was null?), and the performance overhead is often unacceptable.
For more exceptional uses of exceptions, you can sometimes get clearer code. It's often brittle and mishandles rare edge cases, but it's clearer for the happy path at least. But I will happily sacrifice a little of that clarity to make that code handle the edge cases properly instead of crashing - because those exceptional cases aren't really all that exceptional after all.
If you're wrapping lots of things in try/catch, you're doing something very wrong in the first place. People who think you need to do that are using exceptions wrong. It's no wonder that they come to dislike them.
Your example sounds like a badly structured piece of code. If your finger-query code is racing against your event processing code, that's a bug. You violated the function's contract. The exception was telling you about the bug. Don't shoot the messenger.
Maybe you wanted to write the moral equivalent of optional<Coordinate> getFingerPosition(FingerID finger). Nothing stops your using the optional value pattern in exceptional code.
I'm sympathetic. Should contracts be more explicit in code? Sure. In C++, the default should be noexcept, and it should be a compiler error to call a non-noexcept function from a noexcept one. But that's an argument for doing exceptions better, not an argument for abolishing them.
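For what it's worth, the optional-value pattern mentioned above translates directly (all names invented for illustration):

    use std::collections::HashMap;

    struct Coordinate { x: f32, y: f32 }

    struct TouchState {
        fingers: HashMap<u32, Coordinate>,
    }

    impl TouchState {
        // A released finger simply yields None; no exception can race the
        // event processing.
        fn finger_position(&self, id: u32) -> Option<&Coordinate> {
            self.fingers.get(&id)
        }
    }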
> Do you catch NullReferenceExceptions instead of writing basic if conditionals to check values for null?
If a pointer is null where it's not supposed to be null, you crash. That's a contract violation, and failing fast in the face of contract violation is the right thing to do. Are you one of those people who writes out null checks at the start of every function? I dislike code like that very much.
> If you're wrapping lots of things in try/catch, you're doing something very wrong in the first place. People who think you need to do that are using exceptions wrong. It's no wonder that they come to dislike them.
I agree something has gone terribly wrong. But in this case, it's the initial API design. Not my fault!
> Your example sounds like a badly structured piece of code. If your finger-query code is racing against your event processing code, that's a bug. You violated the function's contract. The exception was telling you about the bug. Don't shoot the messenger.
That's what it sounds like, and if the original API was sane you'd be correct. The original API was not sane. As I recall, this might've been when handling finger-moved events: the position query threw because a not-yet-received finger-up event had already invalidated the finger.
One could say that I violated the contract by checking the position of released fingers. But that contract was designed in such a way as to be impossible to consistently fulfill. It's terrible API design, and arguably a bug, but not my bug - I just wrote the workarounds.
One can share blame with the API authors, but this is a recurring pattern with APIs that use exceptions, so I'm willing to share the blame with exceptions too - they seem prone to misuse.
> Maybe you wanted to write the moral equivalent of optional<Coordinate> getFingerPosition(FingerID finger). Nothing stops your using the optional value pattern in exceptional code.
The API I exposed (wrapping the underlying system API) was similar. Well, I didn't use optional, because my API wasn't prone to race conditions despite the underlying system API being prone to race conditions. (Returning a position that's been stale for a few milliseconds seemed acceptable in that case.)
Of course, this had the performance problems I mentioned earlier, but those were fundamentally unfixable.
> I'm sympathetic. Should contracts be more explicit in code? Sure. In C++, the default should be noexcept, and it should be a compiler error to call a non-noexcept function from a noexcept one. But that's an argument for doing exceptions better, not an argument for abolishing them.
Except I've yet to see exceptions done well. Java tried to go a step further with checked exceptions, but that turned out pretty poorly too. That's an argument for abolishing them until such a time as someone actually does do them better.
EDIT: Particularly topical in this thread - almost every single exception system I've ever used has caused me grief trying to translate errors across C ABI boundaries at some point or another. Per https://doc.rust-lang.org/nomicon/unwinding.html, unwinding across FFI boundaries is undefined behavior - and I've seen some really nasty bugs from C++ exceptions, C longjmps, C# exceptions, Ruby exceptions, etc. all trying to unwind over ABI boundaries too. And then I get to try and sanitize a whole slew of call sites to not invoke undefined behavior. Yuck.
> If a pointer is null where it's not supposed to be null, you crash. That's a contract violation, and failing fast in the face of contract violation is the right thing to do. Are you one of those people who writes out null checks at the start of every function? I dislike code like that very much.
There are plenty of cases where optional-and-missing values are represented with null in most null supporting programming languages. I'll typically lean towards the null object pattern to get rid of the null checks where sane and possible, but sometimes null checks are the sane and simple solution. This is the case I was asking about.
But sure, let's cover cases where it's not supposed to be null too. I won't just check/bail/ignore, that's terrible for the debugging and bug finding experience. But I probably don't want megs of minidump and a confused end user's error report, just because they installed a mod pack that was missing a sound asset resulting in a null pointer somewhere, either. Just crashing is also unacceptable - I want the error report, which I might not get, and the gamer is probably happier with a missing sound effect instead of a crash.
Instead, I'd rather insert a null check. For the dev side, you can have your check macros insert an automatic breakpoint, log (for the mod makers), report the error via sentry.io (for catching released bugs in your unmodded game), or even explicitly crash for your internal builds (so you and QA can find bugs). Just as easy to debug as a crash (if not easier thanks to smart error messages), much less end user angst.
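A sketch of that kind of check macro (everything here is invented; real versions hook into logging and crash reporting):

    // Evaluates to the unwrapped value, or logs and bails out of the
    // enclosing ()-returning function when the value is missing.
    macro_rules! check_some {
        ($opt:expr, $msg:expr) => {
            match $opt {
                Some(v) => v,
                None => {
                    eprintln!("check failed: {}", $msg); // or log/report
                    if cfg!(debug_assertions) {
                        panic!("check failed: {}", $msg); // internal builds crash
                    }
                    return; // ship builds keep running, minus the effect
                }
            }
        };
    }

    fn play_sound(assets: &std::collections::HashMap<String, Vec<u8>>, name: &str) {
        let _sound = check_some!(assets.get(name), name);
        // ... play the sound ...
    }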
> Instead, I'd rather insert a null check. For the dev side, you can have your check macros insert an automatic breakpoint, log (for the mod makers), report the error via sentry.io (for catching released bugs in your unmodded game), or even explicitly crash for your internal builds (so you and QA can find bugs). Just as easy to debug as a crash (if not easier thanks to smart error messages), much less end user angst.
I really wish it were feasible to write more code as out-of-process components. The Right Way to handle unreliable mods is to just run them in their own process where they can't hurt anything. I really like that COM tried to make this approach easy. I feel like we've regressed since COM's heyday.
This is my reason for hating exceptions. And impure functions. But I frequently use impure functions for interop or performance reasons. I haven't found a reason to throw exceptions.
Yep, RAII is an eminently reasonable way to address the narrow question of cleanup. In fact, that's how idiomatic Rust code would handle it, too.
That still leaves the general question of how to tell what errors a function may alert you to. Sanity suggests including it in a type signature, either in the return type or with checked exceptions. From what I've seen, the much more common solution in exception-using code is to ignore or forget the possibility of error.
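For the cleanup half, a minimal Rust sketch (Cleanup and do_something are invented): the cleanup lives in a Drop impl, so it runs on every exit path, whether a normal return, a `?` early-out, or a panic.

    struct Cleanup;

    impl Drop for Cleanup {
        fn drop(&mut self) {
            println!("cleanup ran"); // runs on return, `?`, and panic alike
        }
    }

    fn do_work() -> Result<(), String> {
        let _guard = Cleanup;
        do_something()?; // even if this bails out early, drop() still runs
        Ok(())
    }

    fn do_something() -> Result<(), String> {
        Err("oops".into())
    }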
> Sanity suggests including it in a type signature, either in the return type or with checked exceptions
Yet basically all of Rust's std::io propagates just io::Result<T>, which is Result<T, io::Error>, where io::Error is a catch-all error that could represent anything vaguely related to the outside world. It's as useful as writing "throws Exception" in Java. C# gets along fine in practice using unchecked exceptions everywhere.
Your point would be stronger if, say, File::create specifically said it could fail only with things like disk-full, file-exists, and so on. But all it actually says is "this can fail". The information richness you're describing does not exist in practice.
The problem with File::create is that it must interoperate with the existing OSes. And it turns out, File::create can fail for many many reasons. Your file can be on disk, it can be remotely accessible on the network, it can be virtual, etc... And the OS can return just about any error. Linux lists the errors returned on each operation, but they're really just advisory and not exhaustive.
So for io::Error, we're basically constrained by what the OS provides. Not much Rust can do about it, I'm afraid.
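You can still branch on the portable io::ErrorKind subset for the cases you care about, though; a sketch:

    use std::fs::File;
    use std::io::{self, ErrorKind};

    // Handle the one failure we care about; propagate everything else.
    fn open_or_create(path: &str) -> io::Result<File> {
        match File::open(path) {
            Err(e) if e.kind() == ErrorKind::NotFound => File::create(path),
            other => other,
        }
    }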
that just means the language isn't being understood, in a very basic and very dangerous way.
you don't need to check the call tree. without `finally`, it is not "always", period. we can debate at length about if that leads to desirable outcomes or not, but the behavior is unambiguous.
By that same logic we should not write multithreaded code, ever. It is still a good idea to strive for code where wrong things look wrong (which largely requires collocation), but you can't put that above everything else.
I'm not sure I follow. Sure, we shouldn't just have threads blindly working from the same memory, but it seems perfectly reasonable to use tools like Go channels or MPSC queues to write sane code.
Everything can fail. Anyone who knows Java knows the difference between “a; b” (b depends on a) and “try {a} finally {b}” (b always runs). It’s not even worth the effort to prove that dosomething can fail today, just assume it will after someone changes something.
This is precisely the power of the Rust model of errors: nothing can fail unless either a) the type signature says it can, or b) it's not sane to try to recover from the failure.
> This is backwards from the correct way to write professional software. It is, unfortunately, quite common. The proper way to engineer bug-free software is to address the error cases first.
I don't want to write bug-free software. It requires tremendous effort, and even more tremendous effort to test every path (which usually isn't done; people keep pretending that their software is bug-free, and very few really do test everything). I want to write software that works for its happy path and predictably fails for the unhappy path. When that happens, I'll either rework the software to add the relevant paths or fix the source of the unhappy path.
The issue is that, if you've not taken the time to consider the possibility of the error occurring, the failure is by definition unpredictable. It's predictable only if the caller 1) knows it can happen, which is often not the case in exception-using code, and 2) specifies the expected behavior, which is by design rarely the case in exception-using code.
People who write exceptional code consider the possibility of error all the time: there's just no need to mark this constant background possibility with syntax in every possible spot. When exceptional code calls a function, the programmer is saying that the default error behavior --- unwind and propagate to the caller --- is the right thing. It's the right thing in the overwhelming majority of cases, and it's so common that we don't need special syntax to say so.
It's as if you're saying there's no difference between using an exception and not "consider[ing] the possibility of [an] error occurring". The lack of special syntax for this default behavior is not the same thing as error sloppiness.
Rust's "?" operator does the same damn thing, except that you need to type a character on your keyboard to say so. With respect to the standard error contract, "?" in Rust functions as an "I agree" button. And we all know that people's eyes glaze over when prompted with endless "I agree" buttons and they just click "yes". "?" in Rust is the "I consent" button on a cookie warning popup.
Well, I'm writing Java code, and an exception can be thrown from literally every function call. It's hard not to expect it; it's easier to find the code that never throws exceptions. It does not make the code any harder to write. Basically, you use try-finally for resource release and avoid long-lived mutable objects (which survive the request in a typical request-response style server), or you write code extra carefully so those objects don't stay in a bad state, but such objects are extremely rare. So in the end it's not hard to write that code.
There's been ongoing work to support AVR, which is an 8-bit architecture. It hasn't landed in tree yet due to codegen bugs in LLVM, last I checked.
Regarding systems with segmented memory models, I think it would be kind of cool if a language like Rust, with its emphasis on both safety and zero-overhead abstractions, could be cross-compiled for my first computer, the Apple IIGS, with its 65C816 processor. But I admit I don't want it enough to put any serious work into making it happen. Probably time to let that nostalgia go.
You'd need to get the architecture supported by LLVM first. This can be a hurdle, because LLVM backends need to be actively maintained to keep up with the rest of the code. The MC68K (far more popular than the 65C816) is still not supported for example, and work on it is really just starting.
Segmented memory can be supported by C itself in a quasi-standard way, because the "Embedded C" technical report has a proviso for "multiple address spaces". Loosely speaking, this means that a "fully general" pointer type, encompassing all possible address spaces, still has to exist; but address-space specific pointers are also made possible.
Another possible avenue is mrustc, an alternative Rust compiler which compiles Rust code to C code. Every platform has a C compiler (though maybe not C11, which is what mrustc targets).
> To my knowledge the last great bastion of these properties being violated is some DSPs (Digital Signal Processors), because they really don't like 8-bit bytes.
Now, there's definitely a tangent to be explored there. Anyone know more about this?
The C spec has a concept called CHAR_BIT that specifies how many bits are in a byte. Looking for systems where it’s not 8 will give examples. I always link to https://stackoverflow.com/questions/32091992/is-char-bit-eve... which has a TI DSP as the first answer, for example.
https://rust.godbolt.org/z/jN2AA_
In C++, we can address this ABI wart using the [[clang::trivial_abi]] attribute: see https://quuxplusone.github.io/blog/2018/05/02/trivial-abi-10...
There's no reason Rust can't do the same thing.