
Rust error handling can be concise thanks to the try! macro, but macros bring their own problems (like making it more difficult to write refactoring and static analysis tools).
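
For anyone who hasn't seen it, here is a minimal sketch of what try!-based propagation looks like (the function name, file path and error type are just illustrative):

    use std::fs::File;
    use std::io::{self, Read};

    // Each try!(...) either yields the Ok value or returns early from the
    // function, converting the error with From::from on the way out.
    fn read_config(path: &str) -> Result<String, io::Error> {
        let mut file = try!(File::open(path));
        let mut contents = String::new();
        try!(file.read_to_string(&mut contents));
        Ok(contents)
    }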

Haskell error handling can be concise thanks to monads, but monads need higher-kinded types, which bring their own share of complexity.

The conversation on "RFC: Stabilize catch_panic" on Rust's issue tracker illustrates some of the unsettled questions I had in mind (https://github.com/rust-lang/rfcs/pull/1236).

For example, kentonv wrote:

All code can fail, because all code can have bugs. Obviously, we don't want every single function everywhere to return Result<T, E> as a way to signal arbitrary "I had a bug" failures. This is what panic is for.

graydon wrote:

Currently you've adopted a somewhat-clunky error-type with manual (macro-assisted) propagation. Some rust code uses that correctly; but much I see in the wild simply calls unwrap() and accepts that a failure there is fatal.

ArtemGr wrote:

The only way to maintain both the safety and the no-panic invariants is to remove the panics from the language whatsoever. Explicit errors on bounds check. No assertions (you should make the assertion errors a part of the function interface instead, e.g. Result). Out of memory errors returned explicitly from every heap and stack allocation.

If you'd like to keep the assertions, the smooth allocations and other goodies then you either need a way to catch the panics or end up making programs that are less reliable than C. No modern language crashes the entire program on an out-of-memory or an integer overflow, but Rust will.

The libraries we have, they do panic, it's a matter of fact. Within the practical constraints and without some way of catching panics you can't make a reliable program that uses external crates freely.

BurntSushi wrote:

If something like catch_panic is not stabilized, what alternative would you propose? (Option<T> and Result<T, E> are insufficient.)
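
To make the discussion concrete, here is a rough sketch of what catching a panic looks like with this kind of API. I'm using the std::panic::catch_unwind name here, so treat the exact name and signature as subject to the RFC's outcome:

    use std::panic;

    fn main() {
        // Run code that may panic and turn the panic into an Err,
        // instead of unwinding past this point and killing the program.
        let result = panic::catch_unwind(|| {
            let v: Vec<i32> = Vec::new();
            v[0] // out-of-bounds index panics
        });
        assert!(result.is_err());
    }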

On the same topic, there is this post about introducing a `?` operator or a `do` notation (inspired by Haskell) to streamline error handling:

http://m4rw3r.github.io/rust-questionmark-operator/

And there is RFC 243, about "First-class error handling with `?` and `catch`":

https://github.com/rust-lang/rfcs/pull/243
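
To give a feel for it, under a `?` operator the try! example above would read roughly like this (a sketch of the proposed syntax):

    use std::fs::File;
    use std::io::{self, Read};

    fn read_config(path: &str) -> Result<String, io::Error> {
        // Each `?` propagates the error, just like try!, but without
        // the macro noise.
        let mut file = File::open(path)?;
        let mut contents = String::new();
        file.read_to_string(&mut contents)?;
        Ok(contents)
    }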

But I'm sure you're quite aware of these discussions :-)

My general feeling is that, whatever programming language you consider (Python, JavaScript/Node, Go, Rust, Haskell, Erlang, etc.), the right way to handle errors is still an open question.



> Rust error handling can be concise thanks to the try! macro, but macros bring their own problems (like making it more difficult to write refactoring and static analysis tools).

No, it's not more difficult to write static analysis tools. You use libsyntax as a library. Refactoring tools, maybe, but it's a lot better than refactoring with code generation :)

> For example, kentonv wrote:

How does that describe an unsolved problem? It illustrates that Rust's bifurcation of errors into Result and panics works.

> graydon wrote:

I think it's a relatively minor issue that would be solved with "?" or something like what Swift does. Switching to Go's system would make it worse; Graydon's criticism applies even more so to Go than to Rust.

> ArtemGr wrote:

Catching panics is important, yes. No argument there. It doesn't change the overall structure of Rust's error handling story, though.


> No, it's not more difficult to write static analysis tools.

I agree it's solvable, but I'd argue it's a bit more difficult to write static analysis tools when macros are involved. But maybe I'm missing something.

Here is an example:

The subsystem types in the sdl2 crate starting in 0.8.0 are generated by a macro, so racer has issues evaluating the type of video(). A workaround is to explicitly declare the type of renderer. (source: https://github.com/phildawes/racer/issues/337)
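
The general shape of the problem and the workaround, with a made-up macro standing in for what sdl2 does (illustrative only, not the actual sdl2 API):

    // A macro that generates a wrapper type, similar in spirit to the
    // macro-generated subsystem types in sdl2.
    macro_rules! subsystem {
        ($name:ident) => {
            pub struct $name;
            impl $name {
                pub fn version(&self) -> u32 { 1 }
            }
        };
    }

    subsystem!(VideoSubsystem);

    fn video() -> VideoSubsystem { VideoSubsystem }

    fn main() {
        // racer struggles to infer this type, because VideoSubsystem only
        // exists after macro expansion:
        let _v = video();

        // The workaround: spell the type out explicitly.
        let v: VideoSubsystem = video();
        println!("{}", v.version());
    }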

But my real concern is how to write refactoring tools when macros are involved. It seems a lot harder than writing static analysis tools, because the refactoring tool wants to examine the source code with macros expanded, but has to modify the source code with macros unexpanded. In other words, the tool has to map from source with expanded macros back to source with unexpanded macros. How do you solve that?

As a sidenote, I agree that refactoring generated code doesn't sound fun either :-)

> How does that describe an unsolved problem? It illustrates that Rust's bifurcation of errors into Result and panics works.

I quoted kentonv here because it shows that Rust and Go have converged towards structurally similar solutions to error handling, using two complementary mechanisms: explicit error checking on one hand (Result<T, E> in Rust, multiple return values in Go) and panic/recover on the other.

The big difference is that Rust has sum types (instead of Go's multiple return values) and macros (try! instead of repeating `if err != nil { return err }` in Go).
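
Roughly speaking, try! just generates that Go-style boilerplate for you. A sketch of what `try!(File::open(path))` amounts to (the real macro also routes the error through From::from so error types can be converted):

    use std::fs::File;
    use std::io;

    fn open_config(path: &str) -> Result<File, io::Error> {
        // The expanded form: same shape as Go's `if err != nil { return err }`.
        let file = match File::open(path) {
            Ok(f) => f,
            Err(e) => return Err(std::convert::From::from(e)),
        };
        Ok(file)
    }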

> Catching panics is important, yes. No argument there. It doesn't change the overall structure of Rust's error handling story, though.

You're right.


> a bit more difficult to write static analysis tools when macros are involved. But maybe I'm missing something.

Most Rust static analysis tools hook into the compiler and get this for free.

Racer has that problem because racer implements a rudimentary minicompiler that's much faster than the Rust compiler. When you want autocompletion, it needs to be fast, and running a full type check is a non-starter here. So you implement your own "type searcher" which is able to perform some level of inference and search for items. Being deliberately incomplete, it doesn't handle some cases; it looks like macros are one of them. Since racer uses syntex, handling macros would not be much harder (just run the macro visitor first; three lines of code!), but I assume it doesn't for performance reasons or something.

(disclaimer: I only have a rough idea of racer's architecture; ICBW)

> But my real concern is how to write refactoring tools when macros are involved

This is a problem with refactoring whether or not you're using tools. And as you mention, there's exactly the same problem with generated code. If anything, Rust macros being hygienic is nicer here, since you can trace back where the generated code comes from and _attempt_ to refactor the source.

And macros like try! do not affect refactoring tools at all, since they're self-contained. It's user-defined macros that mess things up.


> Most Rust static analysis tools hook into the compiler and get this for free. Racer has that problem because racer implements a rudimentary minicompiler that's much faster than the Rust compiler.

I didn't know that. Understood. Thank you for the explanation.

> And macros like try! do not affect refactoring tools at all, since they're self-contained. It's user-defined macros that mess things up.

What do you mean by "self-contained"? How is it different from user-defined macros?


> What do you mean by "self-contained"?

It doesn't introduce any new identifiers or anything. As far as refactoring is concerned, it's just another block with nothing interesting inside it. This is sort of highlighted by the fact that we can and do plan to add syntax sugar for try!() -- if it were a language feature it wouldn't cause refactoring issues, so why should it cause any as a macro?

User-defined macros (there may be some exported library macros that do this too, but try! is not one of them) may define functions or implement traits, which might need to be modified by your refactor, which in turn means fiddly modification of the macro internals.
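
A contrived example of the kind of macro I mean (names made up):

    trait Shape {
        fn area(&self) -> f64;
    }

    // A user-defined macro that generates a trait impl. Renaming `area` with
    // a refactoring tool now means reaching into the macro body itself.
    macro_rules! impl_shape {
        ($t:ty, $area:expr) => {
            impl Shape for $t {
                fn area(&self) -> f64 { $area }
            }
        };
    }

    struct UnitSquare;
    impl_shape!(UnitSquare, 1.0);

    fn main() {
        println!("{}", UnitSquare.area());
    }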

(Also, note that due to Rust's macro hygiene, all variables defined within a macro are inaccessible in the call region, unless the identifier was passed in at the call region. This helps too.)
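
A tiny illustration of the hygiene point (made-up macro):

    macro_rules! make_temp {
        () => {
            let tmp = 42;
        };
    }

    fn main() {
        make_temp!();
        // `tmp` was defined inside the macro, so hygiene keeps it out of
        // scope here; this line would not compile if uncommented:
        // println!("{}", tmp);
    }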


Thanks for the very clear answer.


> like making it more difficult to write refactoring and static analysis tools

As one of the people behind a lot of the out-of-tree static analysis in Rust (clippy, tenacious, Servo's lints) I'd disagree. Performing static analysis across macro boundaries is easy.

The only problem Clippy has with macros is that the UX of the linting tool is muddled up at times. Clippy checks for many style issues, but sometimes the style issue is internal to the macro.

For example, if Clippy has a lint that checks for `let foo = [expression that evaluates to ()]`, it's quite possible that, due to the generic nature of macros, a particular macro invocation will contain a let statement that binds a unit value. This isn't bad: the style violation is inside the macro, and not something the user should worry about. So we do some checking to ensure that the offending code really comes from the user, and not from a macro expansion, before emitting the lint. This isn't much work either; the only hard part is remembering to insert the check on new lints when it's relevant.
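
To make that concrete, a sketch of the situation (the macro and the exact lint are illustrative):

    // Written directly by the user, this would trip a "let binding to a
    // unit value" style lint:
    //     let x = println!("hello");

    // But the same pattern can appear inside a perfectly reasonable macro:
    macro_rules! capture {
        ($e:expr) => {{
            let result = $e; // binds `()` whenever $e evaluates to unit
            result
        }};
    }

    fn main() {
        // The lint should not fire here: the "violation" lives inside the
        // macro, not in the user's code.
        let _ = capture!(println!("from a macro"));
    }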

But anyway, the UX of clippy is orthogonal to the static analyses provided.

(I also don't recall us ever having issues with `try!`)

> The conversation on the "RFC: Stabilize catch_panic",

FWIW most of the points are fixed with the catch and ? sugar that you mention later.

> My general feeling is that, whatever programming language you consider (Python, JavaScript/Node, Go, Rust, Haskell, Erlang, etc.), the right way to handle errors is still an open question.

Sure, but that isn't a very useful statement when comparing languages. The OP was making a relative claim, comparing Go to C#. Saying that "all languages have problems with error handling" doesn't add much, since the question being discussed was whether Go's error handling is nicer than C#'s.


I replied to pcwalton in a sibling comment:

https://news.ycombinator.com/item?id=11222862



