Checked exceptions: Java’s biggest mistake (2014) (literatejava.com)
114 points by flying_sheep on Sept 11, 2020 | 317 comments



I like checked exceptions. Yes, sometimes (especially in the oldest APIs when they were still figuring this stuff out), they were overused but mostly I think they encourage developers to really think about what happens in the failure case.

I notice this especially with less experienced developers and remote calls - a lot of JS code I’ve reviewed in the past assumes the remote call will always work, yet Java code from the same developer will almost always correctly handle the situation, simply because the exception is explicitly required to be handled.


Right.

They don't just "encourage" developers to consider error paths, they "force" them to do so.

The concept of checked exceptions is very sound, as is the more general concept of compiler enforced error checking. Very, very few languages have that (only Java and Kotlin in the mainstream league).

Languages with a solid implementation of algebraic data types offer a good first step in that direction, but they still require users to manually bubble and compose monadic values, which introduces an unnecessary, and sometimes intractable, level of obfuscation and boilerplate.

All other languages provide weaker approaches to this concept that are library enforced, not language enforced, and therefore more prone to being overlooked since they require discipline from the developer.


I conjecture that much of the legitimate pain from checked exceptions in Java was because the language for talking about exceptions was so limited.

For instance, consider fold or map functions. There was no way of saying "this might throw whatever that might throw", so the only options were "this won't throw anything and that cannot throw anything" or "this might throw anything".

Without the necessary flexibility, developers are "forced" to consider not only error paths but also manifold impossible paths that aren't easily distinguished from legitimate error paths.


Kotlin has no mechanism to force error handling. You may suggest that we use sealed classes as return values and match on them, which is okay-ish, but suffers from two big issues:

1. You are not forced to use the result, so if you call a function for its side effects, the compiler will not warn you that you should unwrap the value. Rust and Swift do not have this issue.

2. This is extremely tedious, as you mention. But it's specific to Kotlin and not generally true, as you later suggest. Haskell has monad comprehensions (do notation), Scala has the same thing, Rust has the ? operator.

Also, regarding Kotlin: no library, not even the standard library, does anything other than throw unchecked exceptions on all kinds of failures. Kotlin is a step backwards, IMO.

Also PHP has checked exceptions, like Java.


On 2 specifically, just to um-actually a minor detail: Rust does have this problem. This code: https://repl.it/repls/LoathsomeCompleteColdfusion

    #[must_use]
    fn get() -> i32 { 
      5
    }

    fn main() {
      get();
    }
will compile and run. It does emit a warning by default at least though:

    warning: unused return value of `get` that must be used
     --> main.rs:7:3
      |
    7 |   get();
      |   ^^^^^^
      |
      = note: `#[warn(unused_must_use)]` on by default
Without the `#[must_use]`, you won't even get that warning. Result has it, so it's common, but it's definitely not enforced. To get the Result's value though, yes - there's no way around it.


You said you were picking on my second point, but you actually addressed the first.

I was specifically talking about Result and Option, which, as you mentioned, are marked must_use. It's correct that it's a warning, but I think that's good enough. You won't accidentally forget to use it.


> Kotlin has no mechanism to force error handling.

I was referring to nullability support, which is similar to Option/Maybe, except that it's enforced by the compiler. This is pretty unique to Kotlin at the moment (honorable mention to Ceylon which offered similar support).


Unique to Kotlin? Don’t Swift and TypeScript have this feature too?

Maybe it’s just the languages I’ve been working with recently, but I think of this as something that’s breaking into the mainstream as language designers are starting to agree that it’s a good idea.


> I was referring to nullability support, which is similar to Option/Maybe, except that it's enforced by the compiler.

Option/Maybe is enforced by the compiler on most languages that have it. Nullability as part of type signatures differs not because it is compiler-enforced, but because it is not nestable; it provides no equivalent of Maybe[Maybe[T]].

> This is pretty unique to Kotlin at the moment.

C# 8, Python’s typing module, Java 8 via the Nullness Checker, Sorbet for Ruby, and lots of other statically-typed languages or static-analysis packages for languages (even dynamically-typed languages) provide nullability enforcement. It's definitely not unique to Kotlin.


> Option/Maybe is enforced by the compiler on most languages that have it.

It's enforced by exactly zero languages.

It's up to the developer to decide that their function should return an Option/Maybe instead of the naked value.

By definition, any library construct is NOT enforced by the compiler, because... well, it's a library construct and not a language construct.


> It's enforced by exactly zero languages.

It's enforced in exactly the same way a nullability constraint is; if you don't use Maybe[T] instead of T, you can't use None (presuming T itself isn't Maybe[U]), just as if you don't use a nullable type you can't use null.


I like Kotlin's null most of the time, but I actually prefer Option still because every once in a while I need to nest them. You can nest Option, but not null.


I think you should look into some more programming languages before you claim that this is unique to Kotlin. ;)


You are not forced per se, and can have the same level of obfuscation. It's pretty common to find code ignoring exceptions or wrapping them in RuntimeExceptions, like:

  try {} catch (Exception e) {
    throw new RuntimeException(e);
  }
Also just doing nothing at all, or the famous catch-and-log.


At which point it's super easy to identify in code review and slap the developer on the wrist.

I find these arguments that posit incompetent/ignorant developers as a hurdle a bit strange. If they are going to handle errors incompetently even when explicitly forced to handle them, I can't imagine how poor their code will be without any assistance from the compiler. And it seems awful to think that you'd have no way to identify such poor handling in review - you'd have to look up every function they call and manually check whether it can return an error.


E.g. Go. Go can make sure you are aware that an error exists (often; linters do a pretty good job here too), but it does next to nothing to help you handle it correctly.

From personal experience: yes, little to no compiler help on errors takes an enormous amount of effort by both authors and reviewers (and future readers) to ensure correct handling. The vast majority of the time it's just `if err != nil { return err }`, which is very frequently sub-optimal. But without knowing the call in complete detail, you can't judge if that's true or not... and it may have changed since you last saw it.

IDEs help that kind of "is this optimal/correct" question quite a lot, but they can't verify it either. It's question-marks all the way down, unless you fully know all the code you call, which is often infeasible.


The dev may need to. If you are implementing an interface method and the signature does not include the exception, it must be wrapped in the implementation. You’d like this not to be the case, but it’s better than trying to rewrite major dependencies. I hope they can find a more specific RuntimeException subclass to throw, but that’s a relatively minor quibble.

I’d like to suggest a different POV for your comment on code review. Every method can throw exceptions; that’s life with the JVM. You don’t declare IllegalArgumentException, ArrayIndexOutOfBoundsException, NoSuchElementException, etc. Yet your code needs to deal with them, and usually does nothing, because it means the element doesn’t make it into a collection, the rest of the object doesn’t get constructed, etc. Avoiding raw RTE, and instead using an RTE subclass that conveys the information you care about, works fine. Code review avoids raw RTE, and all places that might care about IOException causes etc. can process the caused-by of the wrappers.

In what kind of code has this technique been an actual problem? I agree standards and techniques need to be rigorously applied.


You aren't forced to handle it well, you are forced to handle it.


You aren't forced to write your happy path code well, either. What's your point?


The point seemed to be that checked exceptions are a bit of a help, whereas you seem to be responding to an implied claim that they aren't helpful. I think that's why you're being downvoted.


Perhaps. I definitely read it as them saying checked exceptions aren't helpful because you can still handle the errors poorly. But I can see how I might have misinterpreted the intent.


I suppose I wasn't clear. You did misinterpret what I said, I intended to convey the exact opposite.


Once upon a time, I worked on a C++ project with another, experienced developer who kept commenting out -Wall from the Makefiles with the comment, "Nobody has time for that."


You are still forced to deal with it.

In your example, the developer just chose to write crappy code, and there's no defense against that.


Not sure if Rust counts as mainstream by your metric, but it does have compiler enforced error handling in the form of the Result sum type.


> compiler enforced error checking. Very, very few languages have that

  __attribute__ ((warn_unused_result))
in C/C++ with GCC and Clang is great.


And standardized as [[nodiscard]] in C++17.



Avoiding the “boilerplate” of error handling means that you have implicitly coupled code now. Exceptions make local reasoning impossible. I guess you may want that in a small minority of cases, but I don’t understand the benefit they give you overall. It’s just a shortcut and a hack, and I’d rather see explicit control flow 100% of the time.


> they encourage developers to really think about what happens in the failure case.

It's a noble goal, but once I started thinking about what happens in the failure case, I came to the conclusion that checked exceptions are no help here:

- there are always unchecked exceptions. I found it useful to think that any function might throw. So if extra reporting or graceful shutdown is required, just catch everything (sketched below)

- in most cases I have no idea how to recover from error: just keep throwing it to the caller until someone knows what to do. I want it to be the default behavior and I don't want to clutter my code with all the catch-wrap-rethrow boilerplate.
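A minimal sketch of that catch-everything-at-the-top style (run and report are placeholder names, not real APIs):

  public static void main(String[] args) {
    try {
      run();                  // run() is a placeholder for the whole program
    } catch (Throwable t) {
      report(t);              // last-chance handler: report, then shut down gracefully
      System.exit(1);
    }
  }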


> in most cases I have no idea how to recover from error: just keep throwing it to the caller until someone knows what to do. I want it to be the default behavior and I don't want to clutter my code with all the catch-wrap-rethrow boilerplate.

What's your alternative? Using return values? But then you are doing the bubbling, manually.

Exceptions offer a more elegant, lower-boilerplate approach to this problem: if your code can't handle an exception, just declare it in your signature and ignore it. This is really the best approach to this problem:

- The error cases are part of the function's signature (as they should be).

- The language takes care of bringing the exception to the right handler.

- Your code can proceed with the assumption that all the values are sound.
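A minimal Java sketch of that style (loadConfig is a made-up example):

  // The failure mode is declared, not handled here; the language
  // bubbles the IOException up to whichever caller can deal with it.
  static byte[] loadConfig(java.nio.file.Path path) throws java.io.IOException {
    return java.nio.file.Files.readAllBytes(path);
  }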


I've found that in practice it leads to very generic exception names.

When a new failure mode is found for a low-level component, in other languages devs would add a new specific exception and things would generally work. In Java, devs have to modify every intermediate component to handle or pass the new exception type, so they end up just using an existing one that is super generic.


>What's your alternative? Using return values? But then you are doing the bubbling, manually.

Huh? The alternative is unchecked exceptions, which are already present in Java and therefore have to be accounted for anyway.

Removing checked exceptions from Java would greatly simplify the APIs that use them, with absolutely zero cost.


Return values are a million times better than checked exceptions because return values compose and return values don't collide with the rest of the language.

Checked exceptions are basically monads, with all the disadvantages of monads but none of the advantages. There is a reason no language since has copied them.


> Return values are a million times better than checked exceptions because return values compose

No they don't, not even when they're monads (you need monad transformers to compose monads since they don't universally compose).


Technically you only need swap (eg:

  swap :: Either e2 (Either e1 a) -> Either e1 (Either e2 a)
  swap (Left z) = (Right (Left z))
  swap (Right (Left y)) = (Left y)
  swap (Right (Right x)) = (Right (Right x))
) to implement join (at the functor `Either e1 . Either e2`) and then a general monad-compose. You could also use:

  join :: f1 (f2 (f1 (f2 a))) -> f1 (f2 a)
  -- join = fmap join . join . fmap swap -- if you have swap
directly, which doesn't even require f1 and f2 to be monads in the first place (though they do need to be applicative for return/pure/unit). (See http://web.cecs.pdx.edu/~mpj/pubs/RR-1004.pdf for technical details.)

swap is generally trivial (see above) for any sane error-reporting monad, although might be a bit more difficult if you're shoving error-handling logic into them.


Except that many languages just do worse. Go, Kotlin, Clojure (very different philosophy, though), C#...


IMO the solution is to either:

* Use return values and let people deal with the boilerplate, à la Optional in Java. We use Optionals to replace null values and IMO the boilerplate is 100% worth it. I've also used return values that can encode possible errors in Java (see the sketch after this list).

* Use unchecked exceptions and expect people to understand the methods they call and the exceptions those methods can throw, which should be documented in Javadoc instead of a throws clause. This needs to happen regardless, as methods can throw unchecked exceptions the caller might need to be aware of, so in reality this doesn't involve extra work.

For me, either of these solutions is preferable to checked exceptions.
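A rough sketch of the first option using Java 17+ sealed types (ParseResult and parsePort are invented for illustration):

  sealed interface ParseResult {
    record Ok(int value) implements ParseResult {}
    record Err(String message) implements ParseResult {}
  }

  static ParseResult parsePort(String s) {
    try {
      return new ParseResult.Ok(Integer.parseInt(s));
    } catch (NumberFormatException e) {
      return new ParseResult.Err("not a number: " + s);
    }
  }

The possible failure is then part of the return type, and callers have to pattern-match to get at the value.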


Agreed that the boilerplate of Optional is worth it. That doesn't mean it's actually good. With Java it's just a choice between "bad" and "worse".


Unchecked exceptions were intended for problems that your code shouldn't really be expected to handle; catching them at the top level, reporting the error, and letting the user figure out what to do is about the only reasonable answer.

Checked exceptions are part of your API, things that your code should be dealing with, and catch-wrap-rethrow is often the right thing. It only becomes a problem with Java programmers' tendency towards thin, shallow abstractions.


> there are always unchecked exceptions

I’m not sure this is necessarily true.

There are always unchecked Throwables, but Exceptions and Errors are quite distinct things.


I tend to also point out in a lot of these discussions that "checked exceptions" and "java's implementation of checked exceptions" are different things.

Java's implementation has undeniable issues, e.g. in designing stuff with callbacks. It can and often does lead to "everything's runtime" or "everything throws Exception" or other hell-scapes people have heard of. Personally I still prefer them, for pretty much the same reason you mentioned - they are effective, especially with a bit of restraint.

Checked exceptions as a concept are not bound to that. And I wish more languages would make use of them. They can be just as flexible as ADTs, which are pretty widely approved of... because they describe exactly the same thing, just short-circuiting rather than requiring an explicit return. On that front, it's "exceptions" vs "returns" and there are plenty of opinions and tradeoffs between them.


Perhaps because the complexity budget of trying to get checked exceptions right is so high that no one even bothers to try. Once you find yourself able to just return values representing an ADT, then why bother? If the point of a checked exception is that the caller should try handling it immediately, then just return a value like IO<T> instead of returning T but maybe throwing IOException.


It depends on your ADT implementation, but sometimes yes. E.g. Rust makes a pretty good case for not needing exceptions, with the `?` operator, implicit "into" conversions, and a `match` operator that allows returning from the func and not just from the match branch's closure.

In a language without some or all of those (or without ADTs at all, you could have them just for exceptions for instance), ADT-matching to many specific types can get pretty onerous... or you need to do an equivalent to the "catch Exception -> throw RuntimeError" safety-erasing nonsense that this article is rightfully claiming is a problem. Shoving error-types into a separate bucket though often leaves you with a single "return" type, possibly also removing generics entirely, which is trivial to deal with in the happy path in all cases. Optimizing code for the happy path is one of the reasons people like exceptions, so that's potentially significant.

---

edit: ah, no, IMO a large part of the point of exceptions at all (checked included) is that the caller can ignore it by forwarding it implicitly, punting it up the chain without effort. Checked largely just makes sure it's visible in your type signature so you cannot do that silently, unlike runtime exceptions / panics / etc. ADTs typically require handling immediately, exceptions are the opposite of that.


But Java has no language support for dealing with a Try/Result type. It would be objectively worse than just using checked exceptions, IMO.


It's definitely not ideal, but I use Optional to replace nulls in Java and I think it works fine. Similarly you can return values that could represent an error. Or, you can throw an unchecked exception. Callers should be aware of all exceptions a method can throw, especially since those can include unchecked exceptions the compiler doesn't tell you about. Since the caller should do this work anyways, using unchecked exceptions is more than acceptable.


Optional is okay. Of course, you could still be a real jerk and return null instead of Optional<T> and the compiler won't warn me at all and I'll just get an NPE at run time... But it probably won't happen.

The problem with Try/Result is that you still have to do manual unwrapping and/or early returns. Scala and Haskell have monad comprehensions to make this less noisy. Rust has the ? operator. Java has nothing. This makes your code WAY more noisy than having a try-catch inside your non-trivial function.

In a proper world, callers should NOT be aware of all exceptions a function can throw. That's exactly the point of having checked and unchecked exceptions. Checked means the library author thought there is a chance that you might want to handle the exception locally. Unchecked means the library author does not want you to try to recover - they've already determined you're screwed.


I don't think this is true in theory or in practice. Java's Integer.valueOf(String) throws an unchecked exception if it fails, and you should very much be aware of it and catch it in many situations.

Also, in many situations, it doesn't make sense to catch IOException but rather let it propagate. The set of exceptions you should not be expected to catch is generally a very narrow subset of unchecked exceptions, like java.lang.LinkageError or NPE due to internal bug in the called method.


Ack. Totally agree about Integer.valueOf. I think I agree about IOException, too. Definitely most other people agree that it should've been unchecked, and that does seem reasonable, as long as it's really only thrown for "oh shit, someone unplugged the hard drive" errors.

I still think that in theory, the distinction I articulated would be proper.

I'd always prefer returning algebraic data types rather than checked exceptions if I were inventing a new language. But given that Java has nothing in the way of that, I'd still say that one should attempt to follow the hypothetical distinction I articulated, even if Java itself fails at it...


One of the bigger problems is that exceptions are slow to throw. Thus the recommendation is not to use them for cases the caller could branch on. A perfect example is a missing element in a remote system. An old API would throw some sort of ElementNotFoundException. With the addition of Optional, though, you have an object that is easier to use, conveys your intention better, and doesn't have any of the performance drawbacks. I think what it comes down to is that when dealing with situations that might be handled by the caller, you should use some sort of result object instead of an exception. This leaves exceptions only for the cases where the result can't be handled, in which case you don't need checked exceptions.
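A sketch of that shape of API (Directory and findEmail are made-up names):

  import java.util.Map;
  import java.util.Optional;

  class Directory {
    private final Map<String, String> emails = Map.of("42", "a@example.com");

    // Absence is encoded in the return type, so the caller can branch
    // on it without paying for a thrown exception and its stack trace.
    Optional<String> findEmail(String id) {
      return Optional.ofNullable(emails.get(id));
    }
  }

  // usage: directory.findEmail("7").orElse("unknown")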


> A perfect example is a missing element in a remote system. An old api would throw some sort of a Element Not Found Exception.

Wait, if the example involves a remote system how can the Exception be the bottleneck? Even generating the completely optional stacktrace shouldn't take that much time.


Exceptions are slow to throw, but they're meant to be exceptional so that's OK. Good implementations impose no overhead the rest of the time. This is not the case for return type based systems, where you can easily end up with an allocation that must be GCd later.


Not as slow as many people think, especially with things like OmitStackTraceInFastThrow


OmitStackTraceInFastThrow has caused so many headaches in debugging production issues, and each time now the answer has been to just disable it. The real answer is to stop trying to abuse exceptions as a form of general control flow.


For some reason the most talked about "alternative" to exceptions seems to be "use optional". I would assume that this has the same debug issues as omitting the stack trace.


> I would assume that this has the same debug issues as omitting the stack trace.

I don't expect so. If I "forget to handle" an optional, I get an error at compile time pointing at the lines in question. If I forget to handle an exception with an omitted stack trace, IIUC, I am told at runtime that there's an error somewhere in my program. It's much easier to debug the former.


> I like checked exceptions. Yes, sometimes (especially in the oldest APIs when they were still figuring this stuff out), they were overused but mostly I think they encourage developers to really think about what happens in the failure case.

I agree and like them in theory, but in practice the only realistic thing to do with an exception is to get away from this section of code as quickly as possible and get back up to a layer where the user/system can be notified of the failure in some way. Cases where you actually can gracefully handle an exception, like a missing file, tend to be something you should check for rather than rely on exceptions anyway.


Well... I will take the bullet and confess that I do like checked exceptions. When they are not mis- or overused, they convey exactly what needs to be handled. You don't need to handle it? Pass it up the stack. That is a great fit, in my opinion, for applications designed with fault-tolerance features and built on many external components. Not saying the design of CE is perfect: they might be too broad to tell you the exact case to handle, leaving you in the dark, and in a call chain of a()->b()->c()->d() there might be cases where b and c shouldn't need the contract in their method signatures - maybe the compiler could decide whether an exception is orphaned or needs to be handled.


I just skimmed the article. It doesn't seem to address the reason I hate the thing: sometimes I don't care about errors at all. Let's say I'm writing a kleenex program to explore some feature. Not nice if you make me write all kinds of boilerplate.

Or maybe I just want to defer the error management to a higher level. Again, busywork declaring exceptions.


If you don't care about exceptions, just put

  try {
    // your code
  } catch (Exception e) {
    throw new RuntimeException(e);
  }

around it and you'll be fine.

The thing about Java is that the code lasts for a really long time because when it fails, it does so usually with an exception that points to the problem. When promises hang in NodeJS or memory corruption happens in C, it's much more annoying to track down problems.


There's also this magic function:

  private static <T extends Throwable> T sneakyThrow(Throwable ex) throws T {
     throw (T) ex;
  }
Due to generic type erasure, the unchecked cast to T is a no-op at runtime, and at the call site the compiler infers T as RuntimeException, so it looks like this method doesn't throw anything checked.


You can do that - I do it all the time myself - but I'd stop short of saying, "you'll be fine." That approach has its own downside, which is that it makes life obnoxious if you ever do want to actually handle exceptions. Then you'll have to add extra logic to determine if the exception you've caught is the actual error, or just a wrapper of some type. So your error handling inevitably becomes more error-prone.

If you can get your team to commit to a "let it crash" policy, though, then it's pretty doable.


You’ll end up double wrapping your exceptions though so your stack traces get a bit uglier.

Better to check if it’s a runtime exception and throw it again without wrapping it. Only wrap if it’s not a runtime exception. Also you should check if the thread was interrupted and, if so, set the interrupt flag again.

The worst part of this is having that dance littered throughout all your code.
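The dance, roughly (rethrow is a hypothetical helper, not a standard API; uses Java 16+ instanceof patterns):

  static RuntimeException rethrow(Exception e) {
    if (e instanceof RuntimeException re) {
      return re;                             // don't double-wrap
    }
    if (e instanceof InterruptedException) {
      Thread.currentThread().interrupt();    // restore the interrupt flag
    }
    return new RuntimeException(e);          // wrap only checked exceptions
  }

  // call sites: throw rethrow(e);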


Lombok has @SneakyThrows which does pretty much this but without the additional wrapping.


Or, you can use C# and not put anything at all.


Then just wrap them in RuntimeException and rethrow them.

I mean, why are you writing a kleenex program in Java? Do you not want to declare a return type for your functions either? Java's checked exceptions are just a way to return multiple types from your methods: a success value and failure values.


Everything about what you, and others, are saying seems incredibly wrong to me. I guess it's a point of view thing. Very different mindsets. So it would be more interesting knowing why, but I'm at a loss here.

Why should I wrap the simplest code in boilerplate? OK, maybe it's too late for that, since I need to write several lines of boilerplate just to say hello world. People argue that it's just a template that wraps the program. OK, but now it's something more fine-grained: every function must be wrapped.

The way I see it, simple code must be simple. If you want to do complex things, the language should allow you to do it some way. But not at the expense of everyday tasks. It's an elemental usability consideration.

> I mean, why are you writing a kleenex program in Java?

See? The mindset again. That question I can't understand. Actually I find it annoying, as in "what kind of language is Java that you think kleenex functions are not allowed? does it think it's too good for my crappy functions?"

> Do you not want to declare a return type for your functions either?

If there isn't a return value, I don't.

> Java's checked exceptions are just a way to return multiple types from your methods: a success value and failure values.

Why do you need checked exceptions, as opposed to regular unchecked, typed exceptions?


> The way I see it, simple code must be simple. If you want to do complex things, the language should allow you to do it some way.

The simple code must be simple, but not simpler than that. For exploratory code it's fine to ignore the failure paths; for production code, it isn't. Java's checked exceptions force you to operate in the "production code" mode, so the language isn't super-convenient for prototyping. I'd personally like all exceptions to be checked, but with "checked exceptions as warnings" mode by default - with an understanding that production code will be built with "warnings = errors" switch.

> If there isn't a return value, I don't.

What if there isn't a return value in the success case, but there is one in case of failure (i.e. the error)?

> Why do you need checked exceptions, as opposed to regular unchecked, typed exceptions?

You don't need them, but you probably want them. Unchecked exceptions are invisible in the method's signature; to get a list of exceptions you need to handle, you'd have to dig through implementations of everything downstream of your method.

There's a trend across different languages to use algebraic data types (Result, Expected, etc.) to allow returning a valid result XOR an error, which by design make you deal with a possible error if you want to get the result. Exceptions are essentially the same thing, except with less boilerplate (in a C++/Java-style language), but you need checked ones to actually force the programmer to deal with failure modes.


Force is the key word for me. It's the "we know better than you" language designer mindset. My point is that if they really knew better, they wouldn't be making this anti-usability calls.


The language isn't forcing you. The developer who chose to use a checked exception is forcing you.

Is the language "forcing" you when someone writes a function that returns a String? What if I wanted it to be an Int?


Edit: OK, someone is downvoting ALL comments because opinions. No more comments by me. I'll delete everything that I can. Seriously, if someone doesn't want to read opposing opinions, I guess it's their right. And mine is to STFU, for good.

Edit2: hey TeMPOraL, I don't care too much about that, it's the feeling of being in a community where someone thinks this is an acceptable behaviour.

I've been writing here for very long. But this is getting ridiculous. Anything that is minimally controversial gets downvoted. And the thing is that I am already censoring myself a lot.


> It doesn't make much of a difference if Sun, the authors of the language, or Sun, the authors of the runtime, is screwing me.

But it matters for me, the programmer using modules written by other co-workers and third party libraries, that I'm made aware of what can fail and at what point, when I'm using these modules/libraries. It also matters to me that my tools (e.g. the compiler) force me to handle these cases correctly, or at least warn me when I'm not - lest I ship broken code through carelessness or ignorance.

The quality of Sun's API design is a different issue. This is about giving people tools to express and enforce error handling semantics in software they design.

EDIT:

> Edit: OK, someone is downvoting because opinions. No more comments by me.

It's good to not be attached to imaginary Internet points. They come and go and ultimately don't matter much. And FWIW, bad downvotes often get countered, and the score of a given comment settles to something reasonable over time.


Java generics aren't powerful enough; an exception signature is a union type whose size varies. There's no way to declare that a method's exception signature depends on the signatures of the arguments for each call. No way to take a lambda or method reference and throw whatever that could throw. Checking is nice but IMHO not worth giving up higher-order functions.


Sounds like an issue with generics and lambdas in Java


It's not boilerplate. It's part of the contract of using the code underneath - potential error states are part of the prototype, and you have to deal with them just as much as you have to deal with the fact that argument 2 is a String.

Don't want to? Throw it on up the stack, but do so in the knowledge that your program will fall over at the first problem.

And even that is better than carrying on with (for example) a null that then trips up some random bit of code later.


> The way I see it, simple code must be simple. If you want to do complex things, the language should allow you to do it some way. But not at the expense of everyday tasks. It's an elemental usability consideration.

As you already mentioned, that ship has sailed the second you decided to use Java. Every function already has to be wrapped in `class`, which is obnoxious.

> See? The mindset again. That question I can't understand. Actually I find it annoying as in "what kind of language is Java that you thing Kleenex functions are not allowed? does it thinks it's too good for my crappy functions?"

My point wasn't celebrating boilerplate. It's about having a statically typed language. If it's too much to either add `throws Exception` to your function signature or to write a try {} catch {} block, then it must certainly already be too much to write `class MyClass { void myMethod() }`, no?

> If there isn't a return value, I don't.

You still have to write `void`, don't you? And checked exceptions are supposed to be something you want to force on callers of your code- like a return value. You don't want to return anything? Then return `void` and don't throw any checked exceptions.

> Why do you need checked exceptions, as opposed to regular unchecked, typed exceptions?

Because it's part of your API. If you write:

`int fooMethod(int input) {...}`

Then you are saying "If you call `fooMethod` with an int you will get an int." If you always throw an exception on input == 3, then your API is now lying. You should either return a type that encodes the possibility of not being an int OR throw a (checked) InputWasThreeException, to be honest to your caller.
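A sketch of the second option (InputWasThreeException and the doubling logic are invented for illustration):

  class InputWasThreeException extends Exception {}

  int fooMethod(int input) throws InputWasThreeException {
    if (input == 3) throw new InputWasThreeException();
    return input * 2;
  }

Now the failure mode is in the signature, and the compiler holds every caller to it.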

Java is a poor language. Ideally there would be an ergonomic way to use a Result/Try type a la Rust and Scala. If such a thing existed, I would stop advocating for checked exceptions immediately.


> Ideally there would be an ergonomic way to use a Result/Try type a la Rust and Scala. If such a thing existed, I would stop advocating for checked exceptions immediately.

On that note: would it improve things, compared to checked exceptions that cause compile warnings if not handled?

I'm currently dealing with a similar problem on the C++ side - a codebase that's using a lot of tl::expected<> and "functional interface" instead of exceptions. And the more I work with it, the more I realize this introduces a lot more boilerplate/bookkeeping to achieve the same goal exceptions would, with little to no benefit - except the possibility of the error being visible in the function signature. Which wouldn't be a problem for exceptions if "throws" in C++ weren't broken.


The way Rust does it is not perfect, but it's the best I've used so far, IMO.

Rust, if you are not familiar, has a special operator that lets you call a function that returns Result and automatically unwrap it, returning early if it's an Err rather than an Ok. It will even automatically CONVERT the error for you if you've defined the proper trait implementation to convert the one error type into the other.

So, in Rust the boilerplate happens ahead of time (implementing the conversion trait), outside of your function's internal logic. In your function you just write:

    fn foo() -> Result<String, Error> {
      let f = something_that_can_fail()?;
      uses_the_success_value(f)
    }
No try{}catch{}, no match statements, nothing. Just a question mark.

Swift also strikes a nice compromise, IMO. It just has a throws tag like C++, except it actually works.

I never considered a warning-level check for Java's checked exceptions. That sounds like a nice idea at first blush.


> let's say I'm writing a kleenex program to explore some feature. Not nice if you make me feel all kind of boilerplate.

Use Groovy. Seriously, you can code it up with zero boilerplate, use the REPL, notebooks, etc to get your idea into shape, all with 100% interop with your Java libraries.

When you are done, if parts of it should graduate to "real code", you can pretty easily port it over or keep parts of it in Groovy and just take the elements over that need to be Java for robustness, etc.

This is actually my standard workflow in development now.


It has been a while since I touched Java, but I thought you could just add `throws Exception` to `main` if you don't care about errors?


And to every other method too


If it's exploratory throwaway code then you don't care anyway; if it's meant to last, this forces you to actually deal with failure modes instead of pretending they don't exist.


I've never heard the term "kleenex program", what does that mean?


You write it, use it once, then throw it away.


I don't think a language like Java should be optimized for exploration code, it should be optimized for long lived codebases that want to maintain stability and quality.

I'd also disagree that adding a throws to the signature is much boilerplate at all, especially in the age of IDEs.


I don't really dislike the concept of checked exceptions, but the awful way they prevent the usage of functional interfaces and modern Java in general is pretty infuriating. Functional interfaces and code that uses them should have a generic way of being transparent to exceptions, but I'm not sure it can be done without breaking the language.

Nevertheless, I still believe that Java's biggest mistake is not checked exceptions, but the stupid distinction between primitive types and Objects (caused because all the cruft present in the latter would have made basic operations prohibitive in terms of performance, at least for early Java versions) and all the associated boxing. I have seen some extreme cases of performance degradation because of that (fortunately a refactor to use arrays solved the problem, but this is not always possible).


> Functional interfaces and code that uses them should have a generic way of being transparent to exceptions

This irks me too. It's been a while but I believe the workaround is just to define your own (checked) FunctionE, SupplierE, ConsumerE functional interfaces. But maybe that causes other problems I've forgotten.

> the stupid distinction between primitive types and Objects

I also dislike the distinction between primitives and Objects. But I don't think your argument follows. Performance is why the distinction exists. Objects have the cruft and primitives don't.


Performance is why you should use different implementation details on the metal. The reason why there's a distinction at the language level, though, is because Java's designers chose to leak implementation details into the language's high-level semantics.

This distinction wasn't such a big deal in 1995. It was perhaps even the preferred way of doing things according to contemporary values. Nowadays, though, Java is a very different language that lives in a very different cultural milieu. So, nowadays, the language-level distinction, in combination with some other design decisions that came later, is absolutely a source of performance problems. Any code that tries to do things like store numbers in one of the standard collection classes will quickly devolve into a horrible mess of memory overhead and pointer chasing. It's led to this perverse situation where it's honestly not too hard to beat Java performance in a dynamically typed language like Clojure or Python, simply because dynamic languages make it easy to encapsulate the implementation details (and therefore to choose a more performant run-time implementation) in a way that Java's type system doesn't really allow.

You can still keep Java toward the top of the well-known performance benchmark leaderboards, but only by coding as if Java 5 never happened. Which is an option that's only tenable for toy use cases like benchmarks.


I don't disagree about the penalties of boxing and chasing pointers and moving things around on the heap. And I reiterate my dislike of the ergonomics of having a primitive/Object distinction, especially when it comes to collections.

But I'm skeptical of the argument that the language designers could have improved performance while hiding the implementation details from us. If anything it feels like the reverse is true. Here's a few additional implementation details I'd like access to:

* I want to allocate an object (by value - not by pointer) straight onto the stack (not the heap).

* I want to specify that a class completely opts out of inheritance so that the compiler knows which exact method I'm calling (static dispatch) and can inline appropriately.

* (If tail-call optimisation existed) I'd like the ability to annotate a method as such, and have the compiler reject it if TCO couldn't be done.

> It's led to this perverse situation where it's honestly not too hard to beat Java performance in a dynamically typed language like Clojure or Python, simply because dynamic languages make it easier to encapsulate the implementation details (and therefore to choose a more performant run-time implementation) than Java's type system does.

I'm skeptical of this too. It sounds a lot like the "Java faster than C++" I've been hearing for like 15 years, e.g.

> https://trs.jpl.nasa.gov/bitstream/handle/2014/18351/99-1827...

> Java performance can exceed that of C++ because dynamic compilation gives the Java compiler access to runtime information not available to a C++ compiler.


> * I want to allocate an object (by value - not by pointer) straight onto the stack (not the heap).

The JVM does escape analysis to perform this in cases where it can prove the lifetimes will work.

> * I want to specify that a class completely opts out of inheritance so that the compiler knows which exact method I'm calling (static dispatch) and can inline appropriately.

I think this is the final modifier?

The problem with all of these is that, despite Java having static types, the JVM is fairly dynamic. It's difficult to really guarantee anything at the source level because you don't know what classes will actually get loaded into the JVM. Even without inheritance, you can load multiple versions of the same class simultaneously with different classloaders. Thus it's difficult to guarantee any optimizations at the source level in Java. The JVM has to see what classes actually get loaded before it can make optimization decisions.

Having different classes at runtime and compile time is actually pretty common too. E.g. in Gradle, if you have an implementation dependency on liba, which depends on libb=2.0, you can also have a runtime dependency on libc, which depends on libb=3.0. This would cause you to compile against libb=2.0 but run against libb=3.0.


re: performance, I am guilty of making a general statement when I should have made a much more specific one: It's not too hard to beat Java for numerical applications.

The root of the situation is that it's pretty easy for the dynamic languages to choose an unboxed representation on the back-end while still exposing a reasonably generic interface to the programmer. The ergonomic payoff there is immense. I submit, for example, that the obscenely rich ecosystem of high-performance data analysis packages that has been built on top of numpy just could not happen in Java. Java forces you to either accept boxing, or write specialized overloads for every specific combination of argument types you want your function to be able to handle.

Java does have some specialized numeric libraries, generally built using code generation, but they aren't widely used because the ergonomics aren't great, and building any additional general-purpose tooling on top of them requires still more code generation. And that's just not a fun way to work. It's brittle.


Huh? Java has Interfaces.

fastutil.di.unimi.it provides unboxed generic collections and is over 10 years old.


>This irks me too. It's been a while but I believe the workaround is just to define your own (checked) FunctionE, SupplierE, ConsumerE functional interfaces. But maybe that causes other problems I've forgotten.

You can do that, but you need to do it for every exception type ): I'd love it if Java had templated exceptions.

>> the stupid distinction between primitive types and Objects

One of the slowest things I found out about C# was floats being objects and `a < b` (or something similar) turning into a call stack about 7 levels deep.


> You can do that, but you need to do it for every exception type ): I'd love it if Java had templated exceptions.

It does have them. You can write an interface with a type parameter `<E extends Throwable>` and declare functions inside it with `throws E`
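A minimal sketch of that (ThrowingFunction is a made-up name):

  @FunctionalInterface
  interface ThrowingFunction<T, R, E extends Throwable> {
    R apply(T t) throws E;
  }

  // E is inferred per use site, e.g.:
  //   ThrowingFunction<Path, byte[], IOException> reader = Files::readAllBytes;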


You can do this, but it's not ideal. A function can list multiple exceptions in its throws clause, which share no common type except Throwable, but this combination of exception types can't be represented by a single type parameter (except for the common base type). So in a lot of cases, you lose information about the specific exceptions your function can throw, which limits the value of using checked exceptions.


> One of the slowest things I found out about C# was floats being objects

Floats are value types not objects: https://docs.microsoft.com/en-us/dotnet/csharp/language-refe...

Maybe you were dealing with floats that had been boxed?


They were probably boxed yeah, we were dealing with unity physics stuff and that's what the benchmarks showed.


That's right, I would run into even bigger performance issues if I used Integer/Long/Double/etc. everywhere instead of the primitive types. The problem is not that primitive types are not objects, but rather that generic code will only accept objects and not primitive types.


So don't use generic code in critical inner loops.


The other alternative is to use the error monad in combination with railroad-style programming. Of course you also want infix operators like (>>), (>>?) and so on.


Yes, objects have the cruft and primitives don't. So you use primitive types whenever you can, including for mathematical operations (unless the numbers are big and you need a BigInteger or BigDecimal, but that's another story). So far so good. But then, because of type erasure, collections (and literally anything generic except arrays) only accept objects, meaning that if you need a set of longs, you use a set of Longs instead. And if you are using some complex algorithm that relies on maps or sets of primitives, where you are constantly adding or removing elements, boxing and unboxing will kill your performance. I have been hit by this more than once. The last time I measured this (about half a year ago, which was the last time I encountered this in my job), the difference in memory usage was several GB, and in particular there were about 7 GB used just for the Long class.

There are third party libraries that alleviate this. I use koloboke a lot, for example (which works for maps and sets, but lacks sorted collections and multimaps, which I also need to use very often); but the problem remains for anything slightly complex, and it's not uncommon to find yourself writing two almost identical copies of a relatively complicated method or class, one for objects and another one for ints (and maybe a third one for longs, a fourth one for floats...), because otherwise you hit the same problem.

So yeah. Primitive types are not objects and that limits them because Java doesn't work well with things that are not objects. C++ is much better in this sense, because generic code is actually generic and any type will do.
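To make that concrete, a small sketch of the two shapes of code (the example is invented):

  import java.util.Set;

  class BoxingDemo {
    // The generic collection forces each long into a heap-allocated Long.
    static long sumBoxed(Set<Long> ids) {
      long total = 0;
      for (long id : ids) total += id;   // unboxes on every iteration
      return total;
    }

    // The primitive version works over a flat, cache-friendly array.
    static long sumPrimitive(long[] ids) {
      long total = 0;
      for (long id : ids) total += id;
      return total;
    }
  }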


fastutil.di.unimi.it

You don't need to rebox in performance sensitive code.


I think that neither of those are Java's worst offense. It's that all references are allowed to be null.


I think the syntactic awkwardness in Java generally has been a huge issue, and it's a shame, because people hold it against checked exceptions rather than the language.

For example, it took forever to be able to handle two unrelated exceptions with the same catch block, and incredibly common things like closing file handles would require nested try / catch blocks in all the primary catch clauses (and now every utility library ends up with "closeQuietly" or similar ...).
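Both of those pains eventually got language-level fixes in Java 7 - multi-catch and try-with-resources - sketched here with made-up helpers (handle, log, path):

  try (BufferedReader in = Files.newBufferedReader(path)) {  // auto-closes
    handle(in.readLine());
  } catch (IOException | IllegalArgumentException e) {       // one block, two unrelated types
    log(e);
  }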

I think if Java had done it much better there would be a massively different opinion of CE.


> The biggest argument against “checked” exceptions is that most exceptions can’t be fixed. The simple fact is, we don’t own the code/ subsystem that broke. We can’t see the implementation, we’re not responsible for it, and can’t fix it.

Here's what Oracle has to say:

> Here's the bottom line guideline: If a client can reasonably be expected to recover from an exception, make it a checked exception. If a client cannot do anything to recover from the exception, make it an unchecked exception.

https://docs.oracle.com/javase/tutorial/essential/exceptions...

- checked exception for recoverable errors

- unchecked exception for non-recoverable errors

So the argument that most errors can't be recovered from is _not_ a reason to abandon checked exceptions. It's a reason to reserve checked exceptions for those cases in which recovery is likely.

The main argument in this article appears to be based on a misunderstanding.


It's interesting that this is how it's described, as it's at odds with actual usage. From my memory, going back to the early 2000s, it was seen as bad form to use an unchecked exception in "user land" code - these were looked at as only for very severe internal failures like out of memory or similar. This was probably encouraged by IDEs that will auto-generate code for you for handling, rethrowing, adding to type signatures, etc., which proliferates the exceptions up through API layers. But it also seems at odds with the language itself... for example, a simple file-not-found will generate a checked IOException out of the standard libraries. It seems strange to suggest there is recovery from that (beyond choosing another file).

Even though I completely agree that checked exceptions make Java a huge pain to work with, I do feel something has been lost when I use languages that don't support them or any alternatives. Having no way to enforce systematic handling of an error isn't a great solution either.


> It seems strange to suggest there is recovery from that (beyond choosing another file).

That's the thing though, right? If you're writing a program with a UI you want to pop up a box saying "File not found, choose another".

A server program might format an appropriate response, log it and send it back to the client.

A batch process may just want to bail with a code.

So it would seem to make sense to have it checked...


> If a client can reasonably be expected to recover from an exception, make it a checked exception. If a client cannot do anything to recover from the exception, make it an unchecked exception.

Trying to make those decisions as a library author is a sort of combination of the tail wagging the dog and self-fulfilling prophecies.

Depending on the client's ethos and operating parameters, they may or may not reasonably want to try to recover from anything. Java's designers decided that failing to reserve memory for a new heap allocation isn't something that people might want to recover from, and so it really is impractical to do so - for lack of documentation as much as for any technical reason. On the other hand, Zig decided that developers should be able to recover from it, and so it's really not such a big deal for Zig developers.

Personal example: I tend to be a "let it crash" person, so I generally write code that throws exceptions that really are hard to handle. Not because of any fundamental reason, just because that attitude leads me to be a bit more loosey goosey about organizing and documenting the exceptions that I throw. I get away with it because it's all internal projects. If I were writing an open source library meant for public consumption, I'd probably try to be more disciplined, so that users could choose their own moral paradigm.


As a library writer, you almost always can't decide whether your user can recover or not. So checked exceptions would be used very rarely in libraries. They might be useful for application code. But they were abused in the Java standard library, and now most people hate them. At this point I would rather see them gone. If I wanted to force API users to handle a result, I would probably just use something like Optional.


People hate them because they want to quickly write unreliable software but are using a language designed for building reliable software


Java is positioned and used as a general-purpose programming language. I don't think it was specifically designed for building extra-reliable software.


Checked exceptions are the poor man's sum types.

They're overly ceremonious for relatively limited functionality (they don't encompass the broad range of circumstances where you would want to have multiple return types) and compose poorly (e.g. with lambdas).


That guideline ends up less great than it sounds, in practice, since the kind of error a client can recover from is very client-sensitive. A general-use library, like Java's, needs to throw checked exceptions for anything that /any/ caller could reasonably catch. If you're writing, say, a file converter, then there's nothing you can reasonably do about an IOException. But plenty of other programs can show an error dialog and keep moving. There's usually going to be some sort of exception that's completely out of scope for you.

As unfortunate as that is though, I don't really think it's something that can be done better. Ultimately errors are breaks in a function's abstraction. There's nothing that a readFile() function can do about a missing file, because it doesn't know how that file was selected or what you're going to use it for. The only way around the error would be absurdly coarse abstractions like readUserSelectedFileInGuiProgram() and readCriticalFilePackedWithProgram().

Better syntax can reduce the boilerplate - algebraic types plus match expressions are certainly an improvement. But the guidelines for when you should return one end up the same as the guidelines for a checked exception, since they're expressions of the same idea.


Misuse of checked exceptions pre-dates this advice and some of those libraries are very slow to go away.

Library and framework developers mostly only know when a client cannot recover from an exception; rarely can they make a good judgement about when a client can recover from one, yet they have to bake that choice in.

It's more flexible to encode error or otherwise abnormal returns into the return type - there is basically no reason to prefer a checked exception.


> Here's what Oracle has to say:

doing the opposite of what Oracle has to say seems a good guideline in life


The argument is that an error that can be recovered from isn't an exception. It should be handled with standard guards (bounds check, null check, format check, etc). There's no checked exception that can't be better solved with an if/then statement.


> The argument is that an error that can be recovered from isn't an exception. It should be handled with standard guards (bounds check, null check, format check, etc). There's no checked exception that can't be better solved with an if/then statement.

That's a philosophical position that is largely driven by the language (and I suppose the ecosystem around it). I happen to agree with that position, so I prefer languages like rust and go over java.

But I also know that if I try to fight the customs of the language I'm working in, it'll end in a lot of pain and unnecessary angst; so, if I find myself using Java, I grit my teeth and used checked exceptions.


Checked exceptions are optional in Java. If you look at even a long-standing enterprise platform like Spring, there are almost no checked exceptions. I always make it a policy on Java projects to not allow checked exceptions, and it's never a problem. Anything in the standard lib may have to be dealt with, but I don't allow commits with any throws declaration.


Why should all the callers have to redundantly implement all the validity checks instead of implementing them once in the callee and reporting the result?


The callee should not return invalid results. It can return empty data or some kind of results wrapper, or just throw a runtime exception if something really catastrophic happened.


How would eliminating checked exceptions mitigate terrible API design?

I've never understood the angst. All the arguments reduce down to mitigating terrible abstractions.

The Correct Answer is better APIs. Mostly, that means don't pretend the network fallacies don't exist. Embrace them. Which generally means work closer to the metal.

I'll say it another way. The problem is EJB, ORMs, Spring, etc. The obfuscation layers.

Someone smarter than me will have to rebut the functional programming points. I'd just use a proper FP language. Multiparadigm programming is a strong second on the list of stuff you shouldn't do. (Metaprogramming is first.)


> How would eliminating checked exceptions mitigate terrible API design?

Any API which has used checked exceptions was made worse by those, because Java's checked exceptions are bad, and their use runs actively against good APIs. So not having them would have made the corresponding APIs less bad (not necessarily good, mind) by definition.


Thank you for replying. It helps me to better articulate my thinking. You're my Rubber Duck.

Why would network programming (I/O, persistence, etc) look any different whether the language was C, Java, GoLang or other?

Most of my code is error checking and handling. (And now logging too, which I'll ignore here.) Is this abnormal? (Being rhetorical.)

Plenty of noob code ignores errors. I sort of thought we all decided that was suboptimal. Java's response was checked exceptions.

The only argument I've ever heard that made any sense is the silliness of catching exceptions so far removed from the root cause that your code can't do anything about it.

So don't do that.

Really, why would any one design a system that way? Because network programming is messy? Because it'd be neat to compartmentalize the messiness?

I've been able to cleanly separate the value add business logic from real world messiness exactly one time. I was in control of the full stack, end to end. Imagine something like a useful BizTalk. I had been inspired by postfix. My engine would pass work to plugins, which didn't have to do any I/O of their own. My work anticipated serverless and AWS Lambda, if those programming frameworks (paradigms) were better designed.

It now occurs to me that the checked exception abolitionists are advocating Happy Path Programming.

The only other feasible Happy Path Programming strategy I know of is Erlang. I've only done an Erlang tutorial, nothing in prod, so this is just a guess.


> Most of my code is error checking and handling.

> Plenty of noob code ignores errors.

You can encode errors into the return type of the function to force people to deal with errors. Here's a good example:

  byte[] read() throws IOException
    // vs
  Optional<byte[]> read()
Both cases force the user to consider failure cases.

Checked exceptions are essentially modifying the return type already, just in a different syntax. It's inconvenient because the nonstandard syntax for the type doesn't work well with all the other return types. It's still a bit inconvenient to use the other option in Java, though, because there's no language-level support for real tagged unions (also known as sum types or union types). https://en.wikipedia.org/wiki/Tagged_union
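To illustrate, here's roughly what a hand-rolled tagged union could look like with sealed types from much newer Java than existed at the time (a sketch only; the names are made up):

    sealed interface ReadResult {
        record Ok(byte[] data) implements ReadResult {}
        record Err(IOException cause) implements ReadResult {}
    }
    // Callers have to take both cases apart, e.g. with instanceof
    // checks or a pattern-matching switch in newer Java.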

The other class of not handling errors takes a bigger conceptual leap:

  void write(byte[]) throws IOException
    // vs
  Optional<Void> write(byte[])
The caller could just ignore the value in Java, so your criticism is still valid. However, you can force the caller to never implicitly ignore return values. If the function 'throws' by returning a value, you have to explicitly ignore it:

  val _ = write(bytes);
If the function can't throw (i.e. the return type is strictly 'void'), then you don't have to do anything:

  write(bytes);


Optional<T> isn't a replacement for typed exceptions as you can't pass or inspect the error details.


That's correct. I used Optional because it's included in Java and it's simpler. With Optional I could explain handling the presence of errors, but not the inspection of errors.

A full replacement is the Result type (a specialized 'Either'): https://doc.rust-lang.org/std/result/


Yeah. Although, that has an explicit error type which is good and bad. The errors are consistent and known but you're restricted to throwing them together into a single type. I kind of like C#'s Task type where you can still use the catch block pattern matching if you want to. Maybe dropping the catch syntax is better though.


> Both cases force the user to consider failure cases

But only one allows you to pass back information about the source of the failure rather than just the existence of the failure.


Interesting. I'd love to see more of these kinds of alternatives.

I don't have experience with monads, union types, and the like for network programming.

I dimly recall some nodejs stuff which moved the err and result values from the callback to the method's return, which I also found appealing. (Or maybe that was golang. Sorry, am replying on phone.)


> Why would network programming (I/O, persistence, etc) look any different whether the language was C, Java, GoLang or other?

Because different languages provide different abstractive tooling and can thus yield different solutions?

> Plenty of noob code ignores errors. I sort of thought we all decided that was suboptimal. Java's response was checked exceptions.

And it was a bad one. Which is OK, everybody makes mistakes. That doesn't change the fact that Java's checked exceptions are bad.

> The only argument I've ever heard that made any sense is the silliness of catching exceptions so far removed from the root cause that your code can't do anything about it.

Made any sense with respect to what? Checked exceptions? They're bad because their classification is often incorrect (exceptions are checked which should not have been, mainly IOException), and because Java provides no abstractive capabilities over them: you can't build tooling around them, or intermediate layers which are not aware of the specific checked exceptions, without losing that specificity (you can't be polymorphic over checked exceptions). They're either extremely leaky or completely erased, and things have gotten worse as Java has integrated more functional features, which have run head-first into these issues.

Plus they're inconvenient, because managing exceptions in Java is as verbose as everything else, and by definition checked exceptions require more management than unchecked ones.

> It now occurs to me that the checked exception abolitionists are advocating Happy Path Programming.

That's certainly not correct. Checked exceptions "abolitionists" just consider Java's checked exceptions to be bad.


> That doesn't change the fact that Java's checked exceptions are bad

You keep saying that but without justifying it.

Let's start from the beginning: what is bad exactly, and why:

- Checked exceptions in general

- Java's implementation of checked exceptions

- Or the way some Java API's use checked exceptions

?


> what is bad exactly

The phrase you quoted says exactly what.

> and why

The next paragraph I wrote explains why.


Some checked exceptions make a lot of sense to try to catch and fix. FileNotFoundException is something my code can probably recover from - you asked for a file, it's not there, let me ask for a different file. Having a file reading method declare that it throws FileNotFoundException can be a helpful reminder to make sure you handle that possibility.

But then there are other types of checked exceptions that are almost certainly unrecoverable because they happen way down in some other third party code. And then you get the endless chain of "throws" all the way back up the code base.


> But then there are other types of checked exceptions that are almost certainly unrecoverable because they happen way down in some other third party code. And then you get the endless chain of "throws" all the way back up the code base.

No you don't. When your code calls ThirdPartyAPI and it throws one of these checked exceptions that you know you can't recover from, you wrap it in a RuntimeException and rethrow. Then it's totally invisible to the rest of your code.
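In code, that looks something like this (ThirdPartyAPI and its checked exception are hypothetical):

    try {
        ThirdPartyAPI.doWork();
    } catch (ThirdPartyCheckedException e) {
        // Unrecoverable here: wrap and rethrow, so the rest of the
        // codebase never sees the checked type.
        throw new RuntimeException(e);
    }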

Likewise, you should never have a method that throws 10 kinds of checked exceptions. You should be writing your own custom exceptions that wrap downstream exceptions into forms that are useful for you and your code (or people who will use your code).

The biggest issue with checked exceptions is that people refuse to think through their unhappy paths.


Wrapping exceptions when re-throwing can be really useful. I think that this feature is often underused, especially by people who complain about checked exceptions.


E.g. Future wraps any Throwable in ExecutionException, which is a checked exception. But ExecutionException could wrap any exception! It may as well just throw Exception.

I really would have loved e.g. Future<Value,IOException|FooException>. Obviously it gets erased in the executor, but if your code holds on to it, it could maintain checked exceptions over the async boundary.
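For reference, this is what the erased situation looks like today (a sketch; Value and task are assumed):

    Future<Value> f = executor.submit(task);
    try {
        Value v = f.get();
    } catch (InterruptedException e) {
        Thread.currentThread().interrupt();
    } catch (ExecutionException e) {
        Throwable cause = e.getCause();  // could be literally anything
    }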


I suspect that most Java devs have literally never considered anything beyond "Handle it here or rethrow it as-is".


I think there's a point to be made here about how checked exceptions interact with Java's type system. FileNotFoundException is a pretty great checked exception - it's concrete enough that the caller actually has a chance of doing something useful in response. But most of the Java standard library is designed with rather abstract APIs.

Take java.io.Reader - it represents an arbitrary input source, so the Reader.read() method is declared to throw the very generic IOException. The subclasses don't make these any more concrete, leading to absurdities like StringReader.read() having a checked IOException.
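To make the absurdity concrete (this is the real java.io API; the catch block is genuinely dead code):

    Reader r = new StringReader("hello");
    try {
        int c = r.read();   // reads from an in-memory string...
    } catch (IOException impossible) {
        // ...yet the checked IOException must still be handled
    }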


> FileNotFoundException is something my code can probably recover from - you asked for a file

Actually, no. Sometimes it is fatal, and sometimes it's not.

If the file being opened is expected to be part of the distribution and your app can't start without it, it's fatal.

If it's a file picked by the user, it's most likely recoverable.

The main problem is that every single one of these exceptions should be able to be either runtime or checked, and that choice should be made by the application.

Open question: should this decision be made at the call site or the use site?


Call site vs use site is a great dichotomy for this discussion.

My position is that the caller decides how to deal with exceptions. When I'm adding a new operation to my web app, I won't add any exception handling at all. Why would I? There's a central exception handler in my app, which will log the exception, analyze its type, and return an appropriate status code to the caller (e.g. 500 or 400). If there's a kind of failure my centralized handler doesn't handle correctly, I'll add an explicit handler in the new operation, which does the right thing in some cases, while the rest of the issues are still handled centrally.
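Concretely, such a central handler might look something like this (a Spring-style sketch; the specific mappings are made up):

    @ControllerAdvice
    class CentralExceptionHandler {
        @ExceptionHandler(IllegalArgumentException.class)
        ResponseEntity<String> badRequest(IllegalArgumentException e) {
            return ResponseEntity.status(400).body(e.getMessage());
        }

        @ExceptionHandler(Exception.class)
        ResponseEntity<String> serverError(Exception e) {
            // log, then map everything else to a 500
            return ResponseEntity.status(500).body("internal error");
        }
    }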

Checked exceptions make me have dummy handlers all over the place.


Whenever I need to convert a string to a UTF-8 byte array, or do a SHA-256 hash, I need to catch UnsupportedEncodingException or NoSuchAlgorithmException respectively. These are ubiquitous standards built into every JVM. It's impossible to recover from these exceptions, but it's impossible to not have these standards. It's one example of the astronaut architecture[1] that I've needed to work around.
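A minimal sketch of the situation (both the charset and the algorithm are guaranteed to exist on any conforming JVM):

    try {
        byte[] bytes = "hi".getBytes("UTF-8");
        MessageDigest sha = MessageDigest.getInstance("SHA-256");
    } catch (UnsupportedEncodingException | NoSuchAlgorithmException e) {
        // can never happen, yet the compiler insists on a handler
        throw new AssertionError(e);
    }
(Since Java 7, "hi".getBytes(StandardCharsets.UTF_8) sidesteps the first case entirely; MessageDigest has no equivalent constant-based overload as far as I know.)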

I have a throwable type JVMNotSupportedError[0] specifically to wrap these possible-but-never-thrown exceptions. Its sole reason for existence, theoretically, is to yell at the user to get a better JVM.

[0] https://github.com/theandrewbailey/toilet/blob/master/libWeb...

[1] https://www.joelonsoftware.com/2001/04/21/dont-let-architect...


I've been using Scala quite heavily recently, having mostly used Haskell for years, with distant memories of Java. Scala allows Java methods to be called, but doesn't bother with checked exceptions, which has bitten me quite a few times.

My preferred style of error-handling is Option/Either, since I can implement the 'happy path' in small, pure pieces; plug them together with 'flatMap', etc.; then do error handling at the top with a 'fold' or 'match'.

Exceptions break this approach; but it's easy to wrap problematic calls in 'Try' (where 'Try[T]' is equivalent to 'Either[Throwable, T]').

The problem is that Scala doesn't tell me when this is needed; it has to be gleaned from the documentation, reading the library source (if available), etc.

I get that a RuntimeException could happen at any point; but to me the benefit of checked exceptions isn't to say "here's what you need to recover from", it's to say "these are very real possibilities you need to be aware of". In other words checked exceptions have the spirit of 'Either[Err, T]', but lack the polymorphism needed to make useful, generic plumbing. The article actually points this out, complaining that checked exceptions have to be handled/declared through 'all intervening code'; the same can actually be said of 'Option', or 'Either', or 'Try', etc., but the difference is that their 'intervening code' is usually calculated by the higher-order functions provided by Functor, Applicative, Monad, Traverse, etc.

It's similar to many developers' first experience of Option/Maybe: manually unwrapping them, processing the contents, wrapping up the result, then doing the same for the next step, and so on. It takes a while to grok that we can just map/flatMap each of our steps on to the last (or use 'for/yield', do-notation, etc. if available). It would be nice to have a similar degree of polymorphism for checked exceptions. Until then, I'd still rather have them checked (so I can convert them to a 'Try'), rather than getting no assistance from the compiler at all!
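In Java terms, that shift looks something like this (findUser/findAddress are hypothetical):

    // Manual unwrapping, the typical first attempt:
    Optional<User> user = findUser(id);
    if (user.isPresent()) {
        Optional<Address> address = findAddress(user.get());
        // ...and so on, one level of nesting per step
    }

    // The same pipeline with flatMap/map:
    Optional<String> city = findUser(id)
        .flatMap(u -> findAddress(u))
        .map(a -> a.city());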


I absolutely agree; a checked exception is pretty much an Either<Exception, RESULT>, it only has a different (arguably ugly) syntax, and I don't get all the hate. A "sufficiently intelligent compiler" could have turned signatures with throws declarations into signatures returning Either<Exception, T>, but the hivemind has spoken and checked exceptions were simply ignored (see Scala and Kotlin).


Kotlin made the worst mistake here. At least Scala actually gives you ways to do error handling somewhat ergonomically with for comprehensions. Kotlin gives us nothing. Sure, you're "supposed" to return sealed classes and match on them. But it's so tedious and non-obvious that I've literally never seen it done in any Kotlin code I've depended on.


They're so similar you can easily translate between the two e.g. https://github.com/unruly/control/blob/master/src/test/java/...


The compiler can still help without needing to resort to checked exceptions. For example, `Either[T, Either[FileNotFoundCheckedException, RuntimeException]]` is basically isomorphic to a function that returns T or throws and has the same monad support, but can still force the developer to handle the checked exceptions.

I'm not sure if it really addresses the underlying concern that the article presents though, which seems more like checked exceptions seem to be used in a way where the developer has no recourse anyways, so surfacing it through a monad or checked exception doesn't matter.


> For example, `Either[T, Either[FileNotFoundCheckedException, RuntimeException]]` is basically isomorphic to a function that returns T or throws and has the same monad support, but can still force the developer to handle the checked exceptions.

Yes, and I tend to write my code this way (although I find right-biased 'Either' a bit cleaner). The problem is (a) the mountain of JVM code which uses checked exceptions instead plus (b) the Scala compiler completely ignoring that exception information. Solving (a) is unrealistic, but (b) is an entirely self-imposed decision by the Scala developers. Their type checker could have incorporated checked exceptions in exactly the way you describe, and I wouldn't have any complaints (about exceptions, at least... null is a whole different beast ;) )


What's the difference to modeling the "happy path" with function calls one after another, with an outermost try block that can catch any exception that might have happened inbetween?

These functions wouldn't even need to care about Some[T]; T is enough.


There are three possibilities:

                         | Polymorphic exceptions | Monomorphic exceptions
    ---------------------+------------------------+------------------------
      Checked exceptions | Unsupported            | Java
    ---------------------+-------------------------------------------------
    Unchecked exceptions |                      Scala
If exceptions are unchecked then there is no difference between "inner" code and "outermost" code; the compiler cannot tell us that a handler is needed/missing. This is the case in Scala. The advantage is that we can compose code which throws and which doesn't throw (this is essentially dynamic typing for exceptions).

If exceptions are checked then there is a difference between "inner" code and "outermost" code: the inner code has 'throws Foo' annotations, the "outermost" code doesn't. The compiler will spot missing handlers (i.e. when our outermost code can throw). There are two ways this could be done:

If checked exceptions aren't polymorphic then we need to make multiple versions of higher-order functions, like List::map: one version which doesn't throw, one which can throw one exception, one which can throw two exceptions, etc. (these exceptions can be kept generic, but the number of them must be explicit). For example if we have a lambda which can throw KeyNotFound we can't use it with the standard List::map method, since that only accepts lambdas which don't throw. We could make an alternative method 'public List<B> mapE(FunctionE<A, B, E> f) throws E', but that wouldn't work for lambdas which can throw FileNotFound and PermissionDenied; we could write a 'public List<B> mapEE(FunctionEE<A, B, E1, E2> f) throws E1, E2', but that wouldn't work for three exceptions, and so on. AFAIK this is the current situation in Java.
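A sketch of the single-exception variant described above (FunctionE and mapE as named in the comment, written here as a static helper):

    @FunctionalInterface
    interface FunctionE<A, B, E extends Exception> {
        B apply(A a) throws E;
    }

    static <A, B, E extends Exception> List<B> mapE(List<A> xs, FunctionE<A, B, E> f) throws E {
        List<B> out = new ArrayList<>();
        for (A x : xs) {
            out.add(f.apply(x));
        }
        return out;
    }
    // This works for lambdas that throw exactly one checked exception;
    // a lambda throwing two would need a separate FunctionEE, and so on.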

If checked exceptions could be polymorphic, similar to row polymorphism ( https://en.wikipedia.org/wiki/Row_polymorphism ) or algebraic effect systems ( http://lambda-the-ultimate.org/taxonomy/term/35 ), then we would have the best of both worlds. In this setup the 'E' in an annotation like 'throws E' doesn't stand for a name, but for a set of names. Higher-order functions like 'map' can throw the same set of exceptions as the lambda they're given, and that set could have any size: if the lambda can't throw any exceptions then map's set of exceptions is empty; if it can throw five types of exception then map's set of exceptions contains those five; and so on. This becomes even clearer for things like function composition:

    public <A, B, C> Function<A, C> compose(Function<B, C> f, Function<A, B> g)
If 'f' can throw something from set E1 and 'g' can throw something from set E2, then 'compose(f, g)' can throw something from set E1∪E2. Likewise if something can throw E1 and we have handlers for E2, then the result can throw E1 \ E2.

AFAIK the JVM can't do this, nor can those languages which typically target it (Java, Scala, Kotlin, etc.; Idris has algebraic effects and it can run on the JVM, although it's not the standard target)


How aren't Java exceptions already polymorphic? "catch (Exception e)" will also match subclasses of Exception, won't it? And in the composition example, the compiler would correctly see that f(g(...)) can throw any subclass of E1 or E2. The only problem admittedly is that there's poor support for checked exceptions when used with lambdas, but isn't that just a language problem and not a JVM issue?

Although it's not possible to infer the exceptions automatically you can write a function to compose functions with exceptions and it will be checked at compilation time like you'd expect: https://gist.github.com/shawnz/5e9a0d344a6a693b46c662c5c8124... (EDIT: Actually they can be inferred to some extent.. example updated)

NVM. I see you addressed the possibility of doing this already.


There are two ways we can make exceptions polymorphic:

- Subtype polymorphism lets us say things like 'throws Exception', 'catch (Exception e) {...}', etc. and this will work for any sub-class of Exception, e.g. 'FileNotFound'. This works by upcasting: essentially discarding some of the information about the type, so the intermediate code can rely on a smaller interface. If we try to downcast it later, we need to handle the possibility that it doesn't match; e.g. we can write 'catch (FileNotFound e) {...}', but that won't remove the 'throws Exception' annotation, since we haven't handled the other possibilities.

- Parametric polymorphism (AKA generics) lets us say things like 'throws E', where E can be instantiated to any specific class, e.g. 'FileNotFound'. This doesn't upcast: the full type information is propagated (but the generic steps aren't allowed to use it). We don't need to downcast: the type checker will instantiate the generic types to that specified by the source, and see if the destination type matches. If 'E' is instantiated to 'FileNotFound', and we write 'catch (FileNotFound e) {...}', then the annotation will be removed, since there's nothing else to handle.

Hopefully the problem with the generic approach is clear from your example: we have to re-implement things over and over for different numbers of exceptions ('FuncThrowingOneException', 'FuncThrowingTwoExceptions', etc.)

Thinking about it, the situation is similar to Haskell's "constraint kinds": Haskell can "constrain" types (i.e. require interfaces), e.g.

    showBoth :: (Show a, Show b) => a -> b -> String
    showBoth x y = show x ++ ", " ++ show y
This is roughly equivalent to the Java:

    <A extends Show, B extends Show> String showBoth(A x, B y) {
      return x.show() + ", " + y.show();
    }
The GHC compiler has an extension "constraint kinds", where the constraints are treated more like normal values (similar to exceptions). The interesting part for this scenario is that each constraint is treated as a single value, so something like "(Show a, Show b)" is a single (tuple) value. Yet the type checker is smart enough to look inside such tuples, e.g. it knows that "(Show a, Show b)" implies "Show a", etc. It also doesn't care about order, e.g. "(Show b, Show a)" will work just as well; or nesting, e.g. if "c1" is "(Foo a, Bar a)" and "c2" is "(Bar a, Baz c)" then "(c1, c2)" is equivalent to "(Foo a, Bar a, Baz c)".

Those are the sort of features that would make generic exceptions much nicer, since we could put 'throws E' on everything, and be able to instantiate E to a single exception (like "FooException"), or a tuple of multiple exceptions (e.g. "(FooException, BarException, BazException)"), or a tuple of no exceptions "()". There's probably a way to encode this already, but it would require manually packing, re-arranging and unpacking the exceptions at every use-site.


The right way isn't Either / Option (though it's a lot more viable with top-level type inference); it's unchecked exceptions.

Checked exceptions have bimodal usage, from the programmer POV. Either you care about the exception, and you deal with it very close to the throw point, or you don't care about the exception, and it should be handled far far away, across many stack frames.

The former isn't a problem. It's the right thing if e.g. you try to open a file and the file is missing, and you have reasonable error handling logic to retry, open a different file, replace the file and try again, whatever.

The latter is where the issue lies. If you're handling errors far away, then you're handling lots of different errors there, and you're not distinguishing between them, because there's too many different failure modes. You're most likely just in a loop logging errors, or terminating. You're too far from the cause of the exception to do anything specific with it, the context is lost. So the effort to transport the exception type throughout the call graph is pointless.

tl;dr: checked exceptions are fine near the leaf of the call graph, but are increasingly pointless towards the trunk.


Unchecked exceptions are the worst of both worlds: you can call a function that will throw an exception that you could recover from but you have no idea, so you don't even know you should catch it.

At least, Either/Option force your code to take errors into account.

You still have to bubble them up and compose them manually, though, which is why checked exceptions shine.


> You're too far from the cause of the exception to do anything specific with it, the context is lost.

Which is perfectly fine. If you _can_ handle the exception near where it happened, do so. Otherwise bubble it up and log it or whatever.

Consider a REST service. If a DB or other error happens, I can retry right then and there if business logic calls for it (probably not), or just kick the can down the road where something appropriate like a 500 response serializer will take care of it. That’s a great pattern in my opinion.


> If you're handling errors far away... the context is lost. So the effort to transport the exception type throughout the call graph is pointless.

I get what you're saying, and things like 'Try[T]' lose the specifics of the error type too (we just get a 'Failure(Throwable)').

My problem occurs earlier on: which code might throw exceptions, and why? Java's checked exceptions let methods declare "I can fail with an AccessDenied error (along with all the usual stuff like NullPointerException, etc.)", and the compiler will make sure that's handled somewhere (even if it's just a log and quit).

Unchecked exceptions are implicit; the signatures don't tell us they exist, and the compiler doesn't check that they're handled. This makes sense as a last resort, for things like OutOfMemory (although Zig would disagree!); but in general it's unhelpful and dangerous. I think it's a poor choice on Scala's behalf to treat all exceptions this way. Their only redeeming feature is short-term convenience; it's essentially an instance of static versus dynamic typing. "Exception polymorphism" (which I imagine would look something like row polymorphism) would make checked exceptions more convenient, since we wouldn't need exception-specific boilerplate, and this might be enough to solve the problem.

Maybe the JVM might gain such a feature in the future, but until then we can use 'Either' or 'Try' to achieve a similar thing: they show us explicitly which methods can fail (answering the question in my second paragraph), and they force us (via the type checker) to handle the error case somewhere (even if that's just a generic log+quit handler at the top level, as you say).


So you're saying checked exceptions are perfectly fine.

I'll write my library code with checked exceptions. You call my library code in one of your methods. The compiler tells you to do something about the possible failure. If you want to handle it there, you handle it with a try{}catch{}. If you don't, you wrap it in a RuntimeException and rethrow it so your top level handler can deal with it.

Perfect.

Unchecked exceptions make it easy to fuck up the case where you actually might want to handle it close to the call.


`Result<T, Box<dyn Error>>` from Rust (since I'm not a Haskeller) is a result that can contain any error. Which is exactly what unchecked exceptions are.

Rust developers can fluently switch and convert between concrete `Result<T, SomeErrorEnumeration>` and `Result<T, Box<dyn Error>>`. It works beautifully. Libraries usually enumerate their errors (leaves), applications usually just throw everything into one universal error bag.


> Rust developers can fluently switch and convert between concrete `Result<T, SomeErrorEnumeration>` and `Result<T, Box<dyn Error>>`

That's something different, since both cases tell you that errors might occur.

In Java we can do the following:

    public int concreteChecked() throws FileNotFound {
      if (bar) throw new FileNotFound();
      return 42;
    }

    public int polymorphicChecked() throws Exception {
      if (bar) throw new FileNotFound();
      return 42;
    }

    public Either<FileNotFound, Integer> concreteEither() {
      return bar ? new Left(new FileNotFound()) : new Right(42);
    }

    public Either<Exception, Integer> polymorphicEither() {
      return bar ? new Left(new FileNotFound()) : new Right(42);
    }

    public int unchecked() {
      if (bar) throw new RuntimeException(new FileNotFound());
      return 42;
    }
I think your 'Result' examples are like the third and fourth examples above: using a sum type, differing by whether the error is more/less specific.

The first and second use checked exceptions, again differing in whether the error is more/less specific. Importantly: these will refuse to compile if we don't have 'throws ...' in their signature.

The last example uses an unchecked exception: if we throw 'RuntimeException' (or a subclass), we don't need to put 'throws ...' in the signature, and hence the compiler won't tell us to put a 'catch' block anywhere.


In Rust you can do `Result<T, dyn SomeErrorInterface>` as well, which makes it able to express everything exactly like Java can. It's just Rust community generally doesn't bother with the taxonomies of errors. You either get a concrete list, or "any error".


I think you're making a good point here if you look at it as an application developer. In my experience, checked exceptions are helpful when using a library. Would you recommend this approach for libraries as well or do checked exceptions have more of a role to play there?


I think checked exceptions are backwards. If you use a `throws` declaration the caller must catch it. It quickly becomes quite onerous. (Especially if you come from the philosophy that an exception often means "abort this program - unless somebody catches this". In small programs you might just want it to crash early.) And even worse, it is not exhaustive. You can always get a RuntimeException or a NullPointerException from nowhere.

It would be great if they worked the other way around. Instead of forcing the caller to catch an exception, they would guarantee that no exception leaves a certain block.

So you would have a function

    void MyFunc() onlythrows IOException {
        first();
        second();
    }
And the compiler would statically guarantee that no other exception can leak out of it - because first and second have been marked `onlythrows IOException` or are "pure" and cannot throw at all.

For sure you'd need an escape hatch, like Rust's "unsafe". And it would not be very useful around legacy libraries. But it would be tremendously useful if you could drop a block like

   neverthrows NullPointerError { ... }
in your code and be sure that everything inside is null safe! I asked about this a few years ago on StackExchange [1] but so far I never heard about it anywhere else.

[1] https://softwareengineering.stackexchange.com/questions/3497...


> I think checked exceptions are backwards. If you use a `throws` declaration the caller must catch it.

I'm not sure I'm reading your comment right, but this is plainly false. The caller can add a throws declaration themselves and catch anything.

It seems to me that what you're advocating bears no difference at all with checked exceptions. The "unsafe" escape hatch is called RuntimeException. "throws" behaves exactly like your advocated "onlythrows".

The only actual difference you're proposing, AFAICS, is that you'd like java to handle nulls differently - and I think we can all agree on that.


> Caller can add a throws declaration themselves and catch anything.

You are right. I meant someone in the call stack has to catch, not the immediate caller.

That I mentioned null pointers is a red herring. I want to add a block that tells the compiler: "prove that no exceptions (checked or unchecked, not even NPE) can escape outside of this block!"

The escape hatch is about this: imagine you have a function that throws if the argument is odd, but you know you will only call it with an even number:

    void myFunc(int i) never_throws {
        swear_never_throws(NumberOddException) {
            throwsIfOdd(i*2);
        }
    }
(Apologies for the pseudo-code, I haven't made up a nice syntax.) If you mark your function that no exception can escape (not even an unchecked exception!), but the compiler sees that throwsIfOdd can throw NumberOddException, you must of course assert that what you are doing is OK.


It's an interesting idea. My only issue is that if you decide to call an existing function from your "neverthrows" function, there's no way to tell whether it's okay to call that function without actually running the compiler. And if you want to take that opportunity to do an audit of null correctness, you're at the mercy of the compiler's error message to hopefully tell you where the offending code is.


You shouldn't be getting NPEs at random. And if you want to protect against them, just catch them.

Your "neverthrows" looks to me just like a different way of expressing try/catch


If you use try/catch, you can either swallow the exception, deal with it, or rethrow. But sometimes you don't know how to handle or rethrow an exception. Maybe you don't even know that the library code you are calling is going to throw!

A "neverthrows" block would not compile if there is anything inside that can throw (even a runtime exception). Its a way of drawing a line. A library author could use it to make sure no unexpected exceptions can bubble up.

And yeah, if you get NPEs at random you are making a mistake. If I could just choose to not make mistakes, I would :-D. But until then, I'd prefer the compiler to check my work. (And catching them is not an option. What do you do then? Terminate the program? Ignore them? They need to be found at compile time, not at runtime.)


The confusion regarding checked exceptions (which are fine, if not misused; see aazaa's answer) was made much worse by IDEs such as Eclipse, which would generate:

      catch (MyCheckedException e) {
        e.printStackTrace();
      }
This causes unreliable programs, since the programmer will initially only think about the successful path. Eventually, the exception will get thrown, and things will break in weird ways. They may not notice it quickly, because the stack trace will be buried in logs. Alternatively, if the IDE default had been

    throw new RuntimeException(e)
or something similar, which would crash the program, the programmer would have noticed it more easily. Of course, the program would still be broken, but better crash hard and violently than subtly and confusingly.


Checked exceptions are far from fine, and this has nothing to do with IDE code generation.

Monads are highly cumbersome, unwieldy, and difficult to use, but Haskell programmers put up with them anyways because they get specific value from them, being able to say things like:

* effect tracking

* continuations

* tracking which methods perform I/O

* tracking errors

Java checked exceptions are like monads, but worse: they are even more unwieldy and interact poorly with the rest of the language. And yet, unlike monads in a language like Haskell, they provide practically zero value for their cost.

There is a reason no language since Java has copied checked exceptions as a feature, including C# which started as a direct rip-off of Java. There are better ways to encode errors into a method signature to try to force the caller to consider them than checked exceptions.


According to Gosling, including classes/inheritance was his biggest regret.

I once attended a Java user group meeting where James Gosling (Java's inventor) was the featured speaker. During the memorable Q&A session, someone asked him: "If you could do Java over again, what would you change?" "I'd leave out classes," he replied.

https://www.infoworld.com/article/2073649/why-extends-is-evi...


Possibly unpopular opinion: Java's biggest mistake, by far, was annotations that define behavior at runtime.

So now we have consultingware like Spring where, if something isn't working, it could be because you missed an annotation somewhere, or put the right annotation in the wrong place. Which annotation? Where? Maybe you'll find out a week from now that you made a mistake, when a customer finds a bug in production.

This took all of the compile-time checking goodness that you got from Java and threw it in the garbage. Now you either have to call an expensive consultancy, read books/manuals about your gigantic framework (fun!), go on forums, etc. You can't just use your coding skills.

I still often use Java for my side projects because I love it without runtime annotations, but thank god for the rise of Golang. I'd rather deliver pizza than go back to the misery that is annotation-driven development in Java.


Worth noting that Go has this too in the form of struct tags on fields, which people use to define behaviour like reading from a db or json, and which have similar pseudo-languages stuffed into strings - people are even trying to extend them into structured data now. Madness.

It's quite possible to avoid them of course and I prefer just to use code to instantiate objects rather than learning yet another configuration language attached to fields.


> Possibly unpopular opinion: Java's biggest mistake, by far, was annotations that define behavior at runtime.

Now add to that the fact that if the runtime can't load the annotation class via the classloader, it just silently pretends the annotation isn't there.


That's as demonic as "assert" silently doing nothing by default.


I believe you're right, though I think it's Spring's inversion of control rather than the annotations themselves.

The problem with inversion of control is that if you get it wrong, there is no feedback. All you get is "nothing happened". When it works, it just works, which is great. When it doesn't work, it doesn't work, and "it doesn't work" is practically impossible to Google. Spring had the same problems, worse, with XML configuration.

The solution is exactly as you say: let programmers program. Solve the problem with debugging tools that you already know, rather than introducing a whole new meta-meta-programming environment without any debugging support. (If you attach a Java debugger to a running Spring program to step through the point where it's failing to find your annotation, you will regret it.)


I actually miss this in C# and wish the .net community had gone in with bytecode rewriting the way the Java community did.


The new source generators in C# 9 seem like a good compromise. Rather than rewriting code they can only add code but that ensures some amount of predictability.

https://github.com/dotnet/roslyn/blob/master/docs/features/s...


The alternative is heaps of XML or JSON configuration, detached from the code where it's used. Consider that if you want to inject a bean in Spring XML, it requires creating a bean definition that in turn defines all the beans injected into it, which in turn require their own bean definitions, etc. Then you have to declare exactly which field/method the bean is injected into. If you were developing software in the pre-annotations days, you'd understand how much that sucks. The annotation approach is much better in comparison.


Is it though?

Why not just use code (generated if required) to read xml or json for each class? That way it is clear, in the source code and can perform other transformations as required.


> Is it though?

Yes. It's what the Java world was before annotations became a thing.

> Why not just use code (generated if required) to read xml or json for each class?

Because it didn't happen that way, because Java is way too verbose for that, and because "imperative declarations" are a horrible thing.


You're describing what was, not what could have been. There's nothing to preclude just having a function on your classes to read from json or db data.

Perhaps the culture is stronger than the language though.


> inject

I see the problem.

The whole point of spring XML was so you didn't have to "write code" to wire things up. Now we're using annotations to replace XML - we're writing code so we don't have to write code. It makes no fucking sense.

Annotation injections are a completely ridiculous turn of events.


> Now we're using annotations to replace XML - we're writing code so we don't have to write code. It makes no fucking sense.

It does though. Turns out that writing XML configuration means you don’t get to take advantage of the context associated with an annotation. I.e., if I put @Inject on method something(X value) in class A, all the context of what type to inject and where comes along for the ride. In XML I have to explicitly specify every single bit of context, and oh yeah, if I rename “A”, “X”, or “something” I better fix that in the XML or my program will blow up. Not a good look for a programming language that already gets grief about being overly verbose! Annotations just flat out make the configuration part of the equation easier.


Wiring up dependencies via XML is indeed a bad idea for the reasons you specify. But if you're going to wire up dependencies in code, you don't need annotations. Java already has a method for declaring and providing dependencies for a class: writing and calling constructors, which is clearer and checked by the compiler.
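For example (names hypothetical), the plain-constructor version of that wiring:

    class OrderService {
        private final OrderRepository repo;

        OrderService(OrderRepository repo) {  // the dependency is explicit
            this.repo = repo;
        }
    }

    // A single composition root wires everything, checked by the compiler:
    OrderRepository repo = new JdbcOrderRepository(dataSource);
    OrderService service = new OrderService(repo);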


It's important to understand that spring gets around the wiring problem by being a system for declaring and using global variables. This turns out to be a fundamental waste of time, as the whole point of making things dependency-injectable in the first place was to avoid global variables.

However, every spring bean is a global variable, and every bean reference, whether explicit or implicit via autowiring, is a reference to a global variable. Spring results in bad, un-modular, hard-to-debug, and hard-to-maintain wiring code. Just say "no" to runtime dependency injection frameworks.


> I have to explicitly specify every single bit of context,

You still are doing that, just in a less clear way and a less debuggable way. Most other languages get along without meta-languages (there are some frameworks that are spring-like) because being explicit is better than being implicit.


> You still are doing that, just in a less clear way and a less debuggable way.

That’s debatable to a degree. If I put @Inject on a field it’s pretty clear what’s going on just from a quick glance of the source code. By contrast, I don’t know injection of some field happened _unless_ I take a gander at the XML config. And the debuggability of both approaches is the same, the injection manager is doing the same magic under the covers, only the configuration is different.


You can easily and correctly summarize Spring as follows. Spring tries to solve the problem of "applying parameters to functions is boilerplatey" by creating a DSL for declaring and using global variables, because global variables don't have to be passed around! And yet, ironically, the reason we do parameter passing to functions (constructors, factory methods, etc.) is to avoid global variables.

Spring is a fundamentally flawed waste of time, and it represents one of the biggest mistakes of the Java community.


Is turning Java into a dynamic programming language really the way to go to fix this, though?


Consider that people to this day still dog Java for being overly verbose, and then consider how much boilerplate (code and XML) is required to make something like a Spring REST controller that includes dependency injection. You can boil that down to basically one small class with a couple of annotations these days. The old way is absolutely dreadful in comparison.


Java was already that. The grandparent is just noting what was there before annotations arrived.

Well they're actually being kind, XML was a step up from having the annotations in docstrings. That was infuriatingly bad.


Annotations are great as long as the consuming framework reflects the annotations exactly once for JIT/bookkeeping. Often the people I see who hate annotations are invoking reflection (explicitly or implicitly) in critical code paths.
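A minimal sketch of that reflect-once pattern (the Inject annotation stands in for whatever the framework actually uses):

    private static final Map<Class<?>, List<Field>> CACHE = new ConcurrentHashMap<>();

    static List<Field> injectableFields(Class<?> c) {
        // Reflection runs once per class; hot paths only hit the cache.
        return CACHE.computeIfAbsent(c, k ->
            Arrays.stream(k.getDeclaredFields())
                  .filter(f -> f.isAnnotationPresent(Inject.class))
                  .collect(Collectors.toList()));
    }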


On a slight tangent, I'd rather have only checked exceptions, so I know exactly what might throw where. Instead, I have to rely on potentially outdated Javadoc and debugging runtime exceptions in production to find them.


You still need unchecked exceptions for errors that truly should never happen and are not accounted for. Even Rust and Go have unchecked exceptions in the form of "panics".


I wish that handling/rethrowing wasn't required, but that at dev-time I could just ask, hey, what's every kind of exception that could be thrown out of this expression & then decide which ones I wanted to handle at this level, if any.


I think this feeds into my pet peeve that program source code shouldn't be text. The need to have everything spelled out in a readable fashion in text files is a major facilitator of boilerplate code in many situations (like long throws statements in Java). More semantic representations of source code have the potential to be more selective in their handling and display of such things. There would be more ways to show automatically deduced information or to filter out code/information that is irrelevant right now.


You can ... ?

Handle some, declare the method throws the others?


I don't want to have to write the declares part. I don't want my callers to have to do so, either. I want the nice list of unhandled exceptions the java compiler gives you, but I don't want to have to do anything about it if I'm cool with those exceptions being tossed up a level.

Kinda like if everything was a RuntimeException, but I had a way to figure out what are all the subclasses of RTE that this expression could produce.

Consider how Swift does exceptions. You mark methods as throwing methods, but you don't say what kinds of exceptions can be thrown. If you call a throwing method, you have to write a catch, but in order to figure out what kinds of different things can be thrown, I have to rely on documentation or on examining the source. AFAIK, there's no way to figure out all the different types of exceptions that can be thrown.

I want something in the middle. I want the conciseness of swift, but the info provided by java while I'm writing. I'm no language designer, so I don't even know if that's possible, but it's the kind of pony I want.


That Swift way sounds terrible to me.

I do want to know what the error-path contract is, and I do want it enforced that I deal with them. I've seen too much crap in my time that just assumes the happy path, when error handling and recovery are just as important, IMHO


Today, you can write this:

  while (iterator.hasNext())
      System.out.println(iterator.next());
Take away unchecked exceptions and iterator.next()'s NoSuchElementException either becomes checked (requiring handling code that's unreachable) or gets removed (hope no-one ever forgets to check hasNext)


Don't get me wrong, this is bad, but Java projects are littered with checks for logically, but not technically, unreachable code.

This is one of those things that should never have been done with exceptions (I'm looking at you, Python) but with Maybe. Then you can either handle the None case or prove to the compiler that it's impossible.


Defending unchecked exceptions with bad iterator interface design is a little weird.

`next()` should return an option that you can `map` over.


If checked exceptions overall are Java's biggest mistake, then the InterruptedException implementation in particular is the second.

It conflates thread management with exception handling in a way that's difficult to understand and implement correctly. The relationship between InterruptedException and the Thread.isInterrupted() method is a particular pain point for coders.


What, in your opinion, would be a better mechanism for interrupting threads in Java (aside from just making it an unchecked exception)?

Something like Erlang, where any process will just die upon being sent the exit message, whether it's blocked on IO or receive or busy in the CPU, would definitely be simpler and easier to reason about. But that capability carries a runtime cost.

Go's mechanism is also arguably simpler, in which goroutines are not first class objects and if you want to be able to interrupt one then you have to write ad hoc logic using a channel that you provide specifically for the purpose. But in 99.9% of cases, I think Java's more complicated mechanism with first class threads is more convenient.

I would make it an unchecked exception, though. And I wish the old java.io.Socket operations and similar methods would throw InterruptedException.


This is a really hard problem. If you use exceptions at any level they have to be generated consistently, or the solution will be what we have now.

Because that's the other thing--you can't guarantee InterruptedException will even be delivered to a thread. An underlying library can just eat it, or the thread could be waiting on a socket [1], etc. This kind of behavior is the bane of correctness, or even of getting operations like clean server shutdown to work at all in some cases.

So I think this really needs to be something like CSP that's built into the language in a way that makes the behavior consistent in all cases even if it introduces coding patterns that have other costs. Java _did_ get object locking and data visibility right by building in simple primitives like the synchronized keyword into the language. You can create deadlocks but the behavior is clean enough it's not hard to program around them.

I would also be fine with just dying as long as there is a way to clean up shared data structures. However, that can't be manual because it's just about impossible to ensure that such cleanups are correct. Databases use transactions to get around that problem.

[1] https://stackoverflow.com/questions/1024482/stop-interrupt-t...


Don't forget ClosedByInterruptException: if you are accessing a file using Java NIO (but not the classic Java IO), and the thread is interrupted, the file will be automatically closed. There's no way I know of to keep the file open (which you might want, for instance, when the file is shared by several threads), other than not using Java NIO or not using thread interruption.


Aaaagh! I forgot about that one. This is like Lucy pulling away the football from Charlie Brown.


I don't think the whole concept of checked exceptions is a mistake, although the way they're implemented certainly is. In my experience, the problems with them almost always stem from one of two issues:

1. Built-in exceptions that are checked but should be unchecked, IOException being the main offender (I don't mean things like FileNotFoundException; I mean the kind you can get if the OS returns -EIO)

2. Lack of exception polymorphism, preventing you from doing things like l.stream().filter(SomeClass::somePredicateThatMayThrow), even if the function that you're doing it from can throw the same exception that the predicate can
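As a concrete illustration of point 2 (the wrapper names here are made up), the usual workaround today is:

    // Predicate.test declares no checked exceptions, so a throwing
    // predicate has to be smuggled past the stream API:
    l.stream().filter(x -> {
        try {
            return SomeClass.somePredicateThatMayThrow(x);
        } catch (SomeCheckedException e) {
            throw new RuntimeException(e);
        }
    });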

I think checked exceptions would be great and nobody would hate them if those two problems were fixed.


This is still one of the best pieces written on error handling. http://joeduffyblog.com/2016/02/07/the-error-model/


I sorely miss them in .NET, especially with libraries without any kind of documentation regarding the errors that they throw.

Also checked exception haters always overlook the fact that CLU, Modula-3 and C++ did it first.


If anything, I think unchecked exceptions were the bigger mistake in Java.

Checked exceptions are pretty much isomorphic to the result types that are oh so fashionable these days (see also Rust); unchecked exceptions, on the other hand, are completely invisible and unpredictable craziness.


`OutOfMemoryError` is the root of unchecked exceptions: you can't put it everywhere, because then it might as well be nowhere.

Rust improves on this by using a different syntax for unchecked exceptions (`panic!` vs `Result`) which provides the benefit of discouraging unchecked without preventing it.


Aren't panics in Rust unrecoverable in release builds?

Except for this minor point, I agree - you can't really make a managed memory system without resorting to unchecked exceptions. Array index out of bounds is another good example of an almost unavoidable exception that would only pollute the code if it had to be checked everywhere (as is NullPointerException, though that could of course be mitigated by not having nulls in the language).


I don't believe so, no.

You can set panics to automatically abort, though. But that's opt-in.

The real point is that panics are absolutely unchecked exceptions, but it's really awkward to catch them and the standard library and entire Rust ecosystem has a good, solid, culture around returning Results whenever reasonable.

The problem with Java is that a bunch of people have no idea what they are "supposed" to do with the unhappy path parts of their programs.


Well, as long as the option of disabling panic handling exists and is somewhat widely used in real applications, library writers can't rely on panics as an error-handling mechanism, so generally they have to treat it as if panics can abort the program.

I think the problem of deciding what to do on the unhappy path is often very difficult, a lot of the time much harder than the actual happy path. This is true regardless of the error reporting/bubbling mechanism.


There are unwind panics and abort panics, it is the choice of the one panicking. Also if you are a library maintainer the binary can choose to disable unwind panics completely.

On the topic of array index out of bounds Rust provides three ways to do indexing `obj[i]` which panics on out of bounds, `obj.get(i)` which returns `Option<T>` and `obj.get_unchecked(i)` which is the C/C++ style "if I go outside the bounds of the array it is undefined behavior" (and thus is marked unsafe). The first avoids polluting and provides the easiest syntax, the second allows you to opt into "I don't know if this is in bounds" and the final one is designed for instances where you otherwise can prove the index is in bound to avoid the conditionals to check.


And badly in at least the case of C++. I was a fan of Java checked exceptions initially because they actually worked, compared to C++ exceptions, which conflicted with threading models based on setjmp/longjmp. We banned them in my projects in the mid-1990s for this reason. (We were programming on Sybase OpenServer before it implemented native threads.)


C++ never had checked exceptions in Java's sense. The only purpose of declaring exceptions was so that the runtime could throw a nastier error if some exception other than the listed ones was raised.

The modern noexcept specifier is closer to checked exceptions, but has learned from some of Java's mistakes (for example, a function template can be conditionally noexcept based on input arguments).


The concept was the same, though.

And they were the inspiration for Java's.


Libraries can convey that information through IntelliSense with an XML comment:

  <exception cref="FileNotFoundException">Thrown when file is not found</exception>


Have you missed the "especially with libraries without any kind of documentation" part of the sentence?


XML documentation is part of the package and development experience when using an IDE. It’s way different than trying to find some random Wiki page online. I don’t consider them equivalent.


I'm a relative newcomer to Java, and, being a newcomer, I have put some effort into exploring as many corners of the language as I can. One that's proven to be a particular puzzle is checked exceptions. But I think I finally understand them now.

I quickly found that checked exceptions just do not play nice with any sort of functional-style programming, like the article describes. But the problem goes so much deeper than that. Checked exceptions are also, as far as I can tell, incompatible with an object-oriented programming style. More or less for the same reason that they interact poorly with FP. The fundamental problem is that checked exceptions don't really play nice with polymorphism or higher-order programming of any type.

Which takes us to the crux of how I understand them now: Checked exceptions may not go well with FP and OOP, but they make all the sense in the world if you're doing procedural programming. There, you're not trying to create deeply nested abstractions, and you're not messing around (much) with polymorphism tricks. The code's very lexically organized, with little in the way of dependency injection or higher-order programming. When you're programming procedurally, it's fine to be exposed to the implementation details of stuff further down on the call graph, because you're the one who put it there in the first place.

And that, in turn, means that checked exceptions are not really a mistake. They're just a piece of evolutionary history. Because, early on, Java wasn't really an object-oriented language. It was a deeply procedural language with some object-oriented features. It arguably still is, it's just that there's been a big cultural shift toward trying to take a more object-oriented approach since Java 5 came along and made it more practical to do so.


I agree with the part of your post where you show that checked exceptions are bad, but IMO Java has been OO from the start. And if by procedural code you mean code that has zero abstractions, then yes, checked exceptions aren't a problem there, because their big problem is that you can't abstract over them. However, I find this statement more to be an obvious tautology than a statement that checked exceptions are really useful in any situation.


My bias there is that I'm one of those Alan Kay worshipping hard-liners who thinks there's a lot more to object-oriented programming than simply using objects. Similar to how there's more to functional programming than using first-class procedures.

So yeah, it's true, Java has classes. But its culture and idioms and standard libraries and even some language features (checked exceptions, for example) are forever pushing developers toward procedural idioms. Less so now, perhaps, but intensely so in the 1990s.


IIRC, Alan Kay says OO is message passing, encapsulation, and extreme late binding. In java:

* message passing is virtual method invocation

* encapsulation is through public vs private members

* late binding is through the JVM dynamic linking system.

While Alan Kay may have had something much more flexible like Smalltalk in mind, I believe the Java OO system meets the spirit of OO well enough and makes certain tradeoffs for good reasons.


Alan Kay was also fairly specific about what he meant by encapsulation, and it's not just making things private. For example, he wrote, "Doing encapsulation right is a commitment not just to abstraction of state, but to eliminate state oriented metaphors from programming." And, later in the same paper, "Human programmers aren't Turing machines—and the less their programming systems require Turing machine techniques the better."

Java's programming culture tends to favor a shallower approach to OOP that cleaves very closely to the fiddly, imperative, state manipulation-oriented approach that is emblematic of procedural programming.

A great example of this is Java 8's streams API. There's absolutely no way to evaluate a stream without introducing a state change that will alter its behavior. Which means that, unlike for almost any other comparable API in another language, you can't safely pass around and share instances of Stream for fear that someone might break it on you. It's the shallowest possible interpretation of the abstraction in question, and making it that way was a deliberate decision motivated by the tacit understanding that Java programmers have a fundamentally procedural way of thinking about the world, and would be confused by something that was truly declarative. The only other language I'm familiar with that takes the same approach is another popular "procedural programming with objects" language, Python.
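Concretely, for anyone who hasn't been bitten by it, a Stream is single-use:

    import java.util.stream.Stream;

    class StreamDemo {
      public static void main(String[] args) {
        Stream<Integer> s = Stream.of(1, 2, 3);
        s.count(); // terminal operation; consumes the stream
        // The next line throws IllegalStateException:
        // "stream has already been operated upon or closed"
        s.forEach(System.out::println);
      }
    }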

This isn't to say that people can't do principled object-oriented programming in Java. Just that few people do. In part because, if you try, you'll end up constantly picking fights with the JDK. Not entirely unlike how you can do principled functional programming in Java, but it's an uphill struggle.


That statement is neat but it looks to have little to do with programming and more to do with programming style. What about Smalltalk, the language, for instance, helps you "eliminate state oriented metaphors" any better than any other programming language?


Virtual method invocation is already "late binding" in quite a few contexts, although dynamic linking certainly takes it further.


> message passing is virtual method invocation

a virual method call is definitely not a message[1]

[1] https://davedelong.tumblr.com/post/58428190187/an-observatio...


Method calls and virtual method calls are not the same thing.


While we're talking about terrible decisions, can you guess what the following code will print?

  String s = null;
  switch (s) {
      default:
          System.out.println("Hey");
  }
Hint: it will throw NullPointerException.


I'm confused why you'd want it to do anything else. Perhaps you could give a more realistic example to motivate the argument for why the behaviour is wrong or inconvenient?


It's inconvenient because null could be considered a case

  switch(s) {
    case null: return false;
    case "y": return true;
    default: return false;
  }
The broader issue is that Java handles null inconveniently.

1) Every object can be null, switch requires the argument to be non-null, and the type system doesn't warn you when NPEs are possible. A type system which handles nullability could fail to compile if 's' is nullable. Kotlin does this, and it lets you opt in to the NPE with some convenient syntax:

  switch(s!!)
2) It's inconvenient to handle null as a value. To properly handle the null case without throwing, you need to do one of the following:

  if(s == null) { ... }
  else switch(s) { ... }

  switch(s == null ? "some-default" : s)
The first way can be made more convenient if you change switch to work on nullable values. The second way is inconvenient, so people generally skip it. If you want switch to only work on non-null values, there're more convenient syntaxes to handle null, such as the 'elvis operator':

  switch(s ?: "some-default")


You could also use Optional, like:

  Optional.ofNullable(s)
      .map(i -> { switch (i) { ... } /* return a value */ })
      .orElse(/* default */)


I assume this is because null is false-y in many other languages, and can be used in switch/if statements and so on. Rather than having to be treated like an error.


Don't worry, there's a whole new set of terrible decisions with switch expressions.


Mistake maybe but I disagree on the ‘biggest’ part.

For me type erasure is a bigger issue. I get that it was done for backwards compatibility but the drawbacks imposed by that decision seem to only grow as more time passes and more new compromises have to be made.


It could have been handled in the class loader. By the time the spec had gone anywhere, Java was already transitioning rapidly to servers. The memory usage argument was always bullshit. The JIT internals could have used many-to-one type metadata to share function bodies between concrete instantiations of a particular generic.

I’m sure a bunch of material would have been written describing how to avoid runtime code duplication by tweaking your class hierarchies.


What, in your opinion, is the biggest drawback of type erasure?


Some things that came to my mind:

- List<T> doesn't "just work" with non-reference types. It needs boxing, which increases memory usage and introduces stuff like boxed ints being null.

- We need special functional interfaces for non-reference types for that reason (e.g. IntConsumer).

- This also affects Stream<T>, so we need IntStream etc.

- A generic method must have a parameter of its generic type (or it has to belong to a class that has this generic type); the type argument is otherwise indistinguishable at runtime. For example, a method like ImmutableList.CreateBuilder<T>() is not possible in Java (that example is from C#'s collection types).

Type erasure mostly comes into play when looking at non-reference types. For reference types, it seems to work pretty well (although it's weird that Map<String, String> will have the same runtime type as Map<Object, Object>). The last point I mentioned is not good, but no deal breaker. If generics were reified as in .NET, we wouldn't have any of these restrictions.

With type erasure, we ironically have to write more java code while not being able to express stuff in an abstract manner (Stream<T> is incompatible with IntStream).


Thankfully, C# team made the right call and used reification.


Type erasure makes deserialization a pain. It has led to multiple incompatible ways of indicating what a generic type contains on top of the existing type system. In practice it means using generic types on DTOs is a pita.


I just want to throw a plug for the concept of Railway Oriented Programming. It can be laid on top of almost any functional-ish language that can implement some sort of Result(Ok,Err) return type. And to the OP's point, you can catch exceptions where they occur and `return Result(Err(details))`.

We applied ROP with great success at a fintech where we wanted to clean up a block of business logic with many failure paths. Instead of a forest of nested conditionals or try/catch mess, there was a very simple happy path with clear handling for all the errors.

Here's a good start. Ignore the language details, the concept is universal. https://fsharpforfunandprofit.com/rop/
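For the Java-inclined, a bare-bones sketch of such a Result type (names are illustrative; libraries like Vavr provide richer versions):

    import java.util.function.Function;

    // Minimal Ok/Err type for railway-style chaining. An Err short-circuits
    // every later step, so the happy path reads straight through.
    abstract class Result<T> {
      static <T> Result<T> ok(T value) { return new Ok<>(value); }
      static <T> Result<T> err(String details) { return new Err<>(details); }
      abstract <R> Result<R> then(Function<T, Result<R>> next);

      static final class Ok<T> extends Result<T> {
        final T value;
        Ok(T value) { this.value = value; }
        <R> Result<R> then(Function<T, Result<R>> next) { return next.apply(value); }
      }

      static final class Err<T> extends Result<T> {
        final String details;
        Err(String details) { this.details = details; }
        <R> Result<R> then(Function<T, Result<R>> next) { return Result.err(details); }
      }
    }

A caller can then write parse(input).then(this::validate).then(this::save), where parse/validate/save stand in for whatever your pipeline steps happen to be, and deal with the single Err at the end.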


I've been using java since Pre-1.0 and professionally since 1999 and I don't see checked exceptions as being particularly high up the list of java issues.

Even with all the functional stuff they almost never seem to really create a major issue.

My biggest issue with Java is just the way they've caved and constantly added new stuff that is always grafted on so it's never quite as good as a language that focuses on that programming paradigm from the start.

But none of the problems in the language compare to the scale & scope of the problems caused by Java's default developer & architect culture. The culture is terrible... everything gets overcomplicated, overabstracted, etc. and you've got charismatic charlatans convincing wide swaths of developers to misuse and abuse language features in ways that have made a lot of people hate the language and have produced a lot of buggy and hyper inefficient code.

Java itself doesn't have to be bloated, slow, buggy, and a massive memory hog. But the java developer community has continually made decisions to structure their java software in a way that makes that the default condition of Java systems. The way the Java language constantly gets new giant features grafted on plays into this.. everyone jumps on the latest language addition and misuses it for a few years before they come to understand it. By the time it's understood there's something new to move onto and abuse.

Java became everything about C++ it was originally supposed to simplify.


The reason for the complexification is the culture of unit testing, and in particular cargo culting that idea to mean every class needs a corresponding test class, every method needs at least one corresponding test method, etc.

In order to tests classes in isolation like this, all dependencies must be injected, so they can be stubbed or mocked.

Because there's no universally available IoC framework or baked-in functionality, everyone takes the library approach to DI, which means factories of various levels of abstraction, and an excess of parameters.

But in practice most production Java systems do run with an IoC container, which in turn makes more fun things possible - AOP, interceptors, automatic transactions, all manner of magic.

(There's no doubt that checked exceptions were a mistake, to my mind, but the mistake is being repeated in Rust, so I guess that lesson hasn't been learned. But perhaps all checked exceptions needs is a better type system to manipulate the error types through the control flow graph. We'll see.)


> In order to tests classes in isolation like this, all dependencies must be injected, so they can be stubbed or mocked.

This really boils down to which school of testing you follow. London (Mockist) vs Detroit/Chicago (Classicist). [0] is a good article on the different views of testing. Made me think more about it.

I find that using fewer mocks, and unit tests that integrate many classes (vs one set of tests for each class) makes Java programming more fun- but it is very much a Detroit school way. And that's okay, but it's good to really think about the differences.

[0]https://medium.com/@adrianbooth/test-driven-development-wars...


Good article/comment. Thanks!


> AOP, interceptors, automatic transactions, all manner of magic

Magic stuff like this makes maintenance a nightmare. All these random attributes turn regular code into a 4d jigsaw puzzle - the complexity OP is talking about.


The Java community is so enormous that it has lots of different cultures. It's easy to pick on the enterprise architect yadda yadda culture because, well, they deserve it. But there are people doing FP in Java, realtime financial in Java, embedded systems in Java, games in Java... there are just a LOT of Java programmers out there.

Stop talking about the Java community like it's a monolithic thing. It's far too big for that.


I recommend Brian Goetz’ talk about language stewardship. I don’t think what you’ve said here is true, and it kind of bums me out because some of these features took years to develop so they wouldn’t break people on ancient versions. That deserves much respect and appreciation.

https://m.youtube.com/watch?v=2y5Pv4yN0b0


> My biggest issue with Java is just the way they've caved and constantly added new stuff that is always grafted on so it's never quite as good as a language that focuses on that programming paradigm from the start.

Isn’t that basically the #1 issue with C++? They needed more and more features to compete with newer languages and, in the end, the language feels like a Swiss Army Knife: it’s got a tool for every situation, but all those tools make it impossible to get a grip.


I think most of C++'s additions have avoided the problems Java ran into.

C++ is pretty clear on the right way to do things at a given point in time. You will run into a similar problem when working with old code: you need to decide whether to abandon the new features to stay consistent, rewrite your program to make it consistently new, or do something in between and be inconsistent.

The problem with C++ is that the prevalence of macros and #include make backwards compatibility effectively impossible to patch around.

So you end up having to support ancient C++ code working as it was written decades ago working exactly the same. You can't even use file level flags because the object file in almost every case is going to #include old code.

The compiler can't stop you from doing things the wrong way due to this. You could have other tooling that warns you that your code is making X mistake but the compiler can't enforce that or assume that you don't make that mistake due to this extreme backwards compatibility requirement.

Side note: backwards compatibility in C++ is great and fundamental as it prevents fragmentation of the language between different incompatible dialects. The requirement alone doesn't fundamentally cause C++'s problems it just eliminates the easiest way to solve them. (Assuming Python 3 was easy)


I wouldn't agree at all. I think that for all its warts, Java is a fundamentally simpler language than C++ (well, basically every language in any kind of use is fundamentally simpler than C++).

With the exception of generics a long time ago, Java hasn't really added any major language-changing features. Stuff like lambdas, try-with-resources, even annotations, are only minor quality-of-life improvements to things people were already doing. Project Loom (fibers) and value types will probably be the largest additions to the language in its history; we'll see how they pan out.

By contrast, C++ has several ways of doing absolutely anything: 4 kinds of pointers (pointers, references, r-value references, unique/shared pointers), 3 ways of creating functions (top-level, functor, lambdas), 3 ways of initializing objects (constructor, copy assignment, initializer list), and on and on. A variable in C++ is characterized by a type, const-ness, volatile-ness, mutability (if it's a field), being a value, reference or rvalue reference, visibility (if it's a field of a class), and probably others that I'm missing right now.


I didn't say C++ was a better language or more easily used.

I said the specific problem called out about Java didn't occur to C++.

I would go into detail about your other comments, but it looks like you never actually dived into the language, just looked at the complexity of its syntax and got annoyed.


I think Java is much closer to having one right way to do things. In C++, there are still code-bases that don't use std::string, and not (just) for backwards compatibility reasons.

I wasn't commenting on the syntax, but the semantics. All of my examples are cases where there are legitimate choices to be made, with different trade-offs, which themselves depend on other choices. I admit that I have very little professional experience with C++, but I have some hobby esoterism, and I have read quite a bit about the language.

It is very expressive and can be quite marvelously simple and powerful, but that comes at a very high cognitive cost while working with it. This is especially true when you have a piece of code that contains a bug - that is the time when you need to think about all of the semantics of the code, even the ones you would normally ignore, because you already know someone did something wrong, so now you can't rely on how things are supposed to be.


> I think most of C++'s additions have avoided these problems in the way Java ran into.

I strongly disagree.

C++ is a giant, warty bag of cats with wires hanging out everywhere. It's so bad, and so arbitrary, that many institutions I know of don't permit their teams to program in C++ per se, but rather in a strict house subset of C++. That is a bad sign for a language.

This is not the situation Java is in yet. You can hate many of the language features added to Java recently (I certainly do). But it's completely plausible to know all of them, and more or less know all of their implications. This is just impossible in C++, full stop.


I love how I said Java has problems C++ doesn't have and get responses saying C++ is a terrible language.

Java has problems C++ doesn't have and nothing you said says anything to counteract that point.


There are two kinds of exceptions: (1) those that can (and should) be handled in a meaningful way, and (2) those there's no way to handle and you should just crash.

A big part of the reason people hate checked exceptions is that actually doing #1 correctly is really damn hard. It's a whole separate dimension of complexity that your design needs to tackle.

A compiler that checks exceptions forces you to do it. Abruptly, if you're new to the language. It flips the floodlights on at full brightness and makes you see the full scope of the problem. It's tempting to shoot the messenger.


The biggest mistakes in Java are checked exceptions, NullPointerException, and primitive types. But which one of these is the worst mistake really depends on your perspective, and the mood of the day.


- primitive types + not having value types

FTFY


Rust has a system equivalent to checked exceptions.

However Rust, unlike Java, has a great macro system and can thus easily generate higher level exceptions wrapping the lower level ones.


Rust's macro system is not what makes its "system equivalent to checked exceptions" bearable.

That the language provides tools to operate on both results themselves and their contents (in part because results are reified and thus normal values of the language, and in part because of specific tooling like `?`) is what does that.

Also that there is no issue of misclassification as in Java, because everything is a result and that's that.


I don't use any of the Rust error helpers. I don't mind a bit of boilerplate.

I don't hate Java's checked exceptions. But I also actually craft my own Exception types when I write a Java package. I think that's the biggest mistake that devs make. In Rust you have to combine errors into composite error types. In Java you should do that.


In my eyes, Java's biggest mistake is that the byte type is signed instead of unsigned. Masking a signed byte with (b & 0xFF) causes so much needless pain, and I have never wanted to use a signed byte. On the other hand, I appreciate that Java doesn't have unsigned versions of every integer type; that simplifies things a lot. As for checked exceptions, I'm still undecided on whether they're a good or bad thing.
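For anyone who hasn't hit it, the dance looks like this:

    byte b = (byte) 0xE0;  // bit pattern 1110 0000, stored as -32
    int wrong = b;         // sign-extends to -32 (0xFFFFFFE0)
    int right = b & 0xFF;  // masks back to 224, the value you usually meant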


I like checked exceptions. They make you aware of what exceptions might happen, and make you think about how to handle them.


The biggest problem with checked exceptions is that the application writer decides what is a recoverable error. Not the library designer. I could easily imagine a program where SQLException is a recoverable error. But I don't write those types of programs.


at a high level, doesn't this make sense though? the library author knows whether their code can recover from a certain exception, but they don't know how important their library is to the client application. in your SQL example, if the application's whole purpose is to track inventory using that database, then yeah, SQLException is probably unrecoverable. but if the database is only needed for some ancillary functionality, the application writer probably just wants to tell the user "hey, this feature isn't available right now" and let them keep on using the rest of the program. maybe I'm missing some context here? I'm primarily a c++ dev, and I don't know much about SQL either. in my project, a crash is only acceptable if we are out of memory or if data corruption is imminent.


well the problem is that SQLException is not a good exception at all. it actually includes the following errors:

    - database connection closed
    - SQLTimeoutException (o.O)
    - protocol exceptions like duplicated key
    - query exceptions
the problem is that some of them are recoverable, but usually it's more of a hassle, and duplicated key exceptions are easier to handle in app code via upsert, etc.

so basically the problems are library designer errors and not application writer errors.

checked exceptions should only be exceptions that a developer COULD handle, and not ones that he can't

in C# the library designers usually create a "Result" object that has a boolean for success or a status enum, instead of using exceptions for failures. i.e. most i/o errors are not recoverable, thus C# does not force you to recover from them.


Yeah it's reasonably common for people to try and salvage checked exceptions by saying people are doing them wrong but at some point you realize the best you're doing is trying to put lipstick on a pig.

Here's a bigger problem: checked exceptions pollute your API and expose implementation details. Consider an API that stores and retrieves objects. A particular implementation does so by writing them to a database via JDBC so you get SQLExceptions. You have two basic approaches:

1. Include SQLException in your function signatures so the caller can deal with it. This exposes the implementation detail; or

2. You can hide it by transforming that checked exception into something specified for your API. At this point, what benefit have you gained from SQLException being a checked exception? You're hiding that detail.
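To make (2) concrete, a sketch (StorageException, Widget and insertRow are hypothetical stand-ins for your API's exception and the JDBC-backed internals):

    public void store(Widget w) throws StorageException {
      try {
        insertRow(w); // internal JDBC call, declared to throw SQLException
      } catch (SQLException e) {
        // Callers see only the API-level exception; the JDBC detail
        // is preserved as the cause for debugging.
        throw new StorageException("could not store widget", e);
      }
    }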

For (1), you're baking checked exceptions into your function signatures such that it can be really difficult to change later on.

If you sit down and think about the practicalities the argued upsides of checked exceptions are essentially nonexistent and unchecked exceptions are actually strictly superior.

Here's another pattern that happens with checked exceptions in Java:

    try {
      doSomeSQL();
    } catch (SQLException e) {
      // do nothing
    }
You will see people do this all the time because they don't want to deal with the checked exceptions. A better catch-clause is:

    throw new RuntimeException(e);
You can argue people shouldn't do the first (and they shouldn't), but unchecked exceptions will simply bubble up unless you deliberately swallow them. That's a way better default. Defaults matter.


> checked exceptions pollute your API and expose implementation details

Exceptions aren't an implementation detail. They're part of the API contract. Checked exceptions make this explicit, but people don't like that because error handling is hard and pretending that errors don't happen is easy.


I'd argue against that, because the problem is not that error handling is hard; the problem is that it is mostly unnecessary. If you wrote a hello world that writes directly to STDOUT, you would need to handle IOException. But it's basically useless to handle any kind of IOException in that case, because if your output device is broken, you can only let your program crash. And that is how most exceptions end up being handled, which is exactly what the k8s hype is about.

fail fast, start from scratch. You can't handle a network error in your application, but you can kill everything that lost network access to your database and recreate it where database access is still possible. You don't need to do that in your application; your infrastructure should handle it, and that's why most checked exceptions in the Java stdlib are basically stupid. 90% of the stuff reuses exceptions for ease of use, but 90% of it should be RuntimeException, and 90% of those should be split up between stuff that I COULD recover from and stuff that I can't (i.e. SSLException should be split into certificate validation exceptions and protocol violations that mostly can't be handled without reconfiguring the JVM! I mean, there is an SSLProtocolException, but it's basically useless since it reports a totally different thing.)


I've literally never seen somebody catch a SQLException and do nothing with it - you quickly learn that just doesn't work, and it would never pass a code review.

With respect to exposing implementation details:

If the caller is the one who passed in the connection, then it makes complete sense to throw a SQLException so the caller can deal with it.

If the connection was opened in your method, then yes, you probably should wrap it, handle it properly, clean up the connection, and then throw a different exception.

The only time it's ambiguous is if the connection came from an instance variable. That's often a poor design, but usually means it came indirectly from the caller and so it still makes sense to throw SQLException.

In general, SQLException works fairly well as a checked exception. If some code generates a SQLException, your transaction failed and you should generally abort it (or you can check the specific error code if you are prepared to handle anticipated failures such as a duplicate key error). If it generates any other exception, you can continue working with the database (such as saving the failure status to a table).


> You can hide it by transforming that checked exception into something specified for your API. At this point, what benefit have you gained from SQLException being a checked exception? You're hiding that detail.

This is what you should do, yes. And I gained from SQLException being checked, because I am reminded by the compiler that I need to wrap it in my API's Exception value. If I can't handle it or don't think a caller of my code can/should handle it, I'll wrap it in an unchecked exception and rethrow.

> Here's another pattern that happens with checked exceptions in Java:

I mean... what do you want? You can't fix stupid. That's a really obvious mistake that should never ever make it past code review. I can shit on Java for making everything nullable and thus actually difficult to figure out if you should null-check something. But I can't shit on Java for someone writing code like that.


> checked excpetions should only be exceptions that a developer COULD handle and not ones that he can't

And that's kinda the problem; you can declare the ones a user MIGHT be able to recover from, but there's still the chance of unrecoverable or unforeseen errors, so you wind up declaring THROWS EXCEPTION anyway...

> in C# the library designers usually create a "Result" object that has a boolean of succeded or status of enum instead of using exceptions for failures. i.e. most i/o errors are not recoverable, thus c# does not enforce you to recover from them.

Depends on the operation, but yes. Either there's a Try___ pattern (where the boolean is the result, and an 'out' parameter from the method is your parsed value) or some will use an enumeration pattern.

Still, I'm a fan of Option for these sorts of things nowadays...


I fucking hate SQLException. It's a giant pain in the ass to dissect it to figure out if it's an error because the database exploded, or because I made a syntax error, or because something real failed like a uniqueness or foreign key violation.


> - protocol exceptions like duplicated key

> - query exceptions

Even that is just barely scratching the surface. Postgres has ~250 error codes, and while not all of them can be triggered by all operations or statements, there's way more granularity in there than there is in just two piddly exceptions.


The sad bit about checked exceptions is that everyone compares any sort of error tracking to them and immediately dismisses many useful ideas.

Java's checked exceptions got the worst possible combination of error tracking. It's optional, so you don't even get to see whether a function throws (kind of like the billion-dollar null mistake), and it's based on classes, which means a lot of irrelevant names leaking throughout the codebase.

Like with nulls, the main value is being able to claim that a function doesn't ever throw, at all. Apple's Swift got this just right.

A simpler system of "throws / doesn't throw" and with optionally polymorphic variants / unions to complement it would go much further.


The biggest mistake of C# is not having checked exceptions. Your carefully written program can crash because someone modified a dependency to throw a new exception. So the only way to make your program resilient against such changes is to catch the base Exception class, which everyone agrees is wrong (because of "swallowed" exceptions). In Java the compiler alerts you if someone modifies a dependency to throw a new exception, which is good.

See long discussion here: https://forum.dlang.org/thread/hxhjcchsulqejwxywfbn@forum.dl...


Arguably, a careless library developer can break your code in infinitely many ways other than changing the exception type, even in Java.

I'll take a one-in-a-million chance of a careless library developer randomly changing exception types over writing `throws` and `try/catch` statements a million times even when I don't need to handle any exceptions at all.


I've found checked exceptions pretty useful. I can pass context-specific details to the exception and encapsulate the message formatting in the exception itself (which helps if I'm throwing it in multiple places). They also let me decide the HTTP return code based on the exception. E.g. using Spring Boot's controller advice, I can map a group of exceptions to user errors (say, bad request) and another group to service errors (say, internal server error), and I don't have to worry about where the exception is being thrown from - it'll return all the details with the correct return code to the user.
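A minimal sketch of that mapping (BadInputException and DownstreamException being hypothetical grouping types; the Spring annotations are real):

    import org.springframework.http.HttpStatus;
    import org.springframework.http.ResponseEntity;
    import org.springframework.web.bind.annotation.ControllerAdvice;
    import org.springframework.web.bind.annotation.ExceptionHandler;

    @ControllerAdvice
    class ApiExceptionMapper {
      // Anything classified as a user error becomes a 400...
      @ExceptionHandler(BadInputException.class)
      ResponseEntity<String> badRequest(BadInputException e) {
        return ResponseEntity.status(HttpStatus.BAD_REQUEST).body(e.getMessage());
      }

      // ...and anything classified as a service failure becomes a 500,
      // no matter where in the call stack it was thrown.
      @ExceptionHandler(DownstreamException.class)
      ResponseEntity<String> serverError(DownstreamException e) {
        return ResponseEntity.status(HttpStatus.INTERNAL_SERVER_ERROR).body(e.getMessage());
      }
    }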


Having used Java somewhat, the biggest pain point I encountered with checked exceptions was their incompatibility with highly generic code (aka streams). What if all library functions taking lambdas (e.g. map, filter) took throwing functions as parameters (i.e. had an extra generic X argument for the exception type, like R apply(T arg) throws X) and simply rethrew those exceptions, AND the exceptionless functions (R apply(T arg)) were a subtype of the throwing version so they are compatible? I haven't touched Java in a while so I may have forgotten a thing or 2 about its type system.
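Java's generics can actually express most of this today; what they can't express is "throws nothing" being a subtype of "throws X". A sketch with illustrative names:

    import java.util.ArrayList;
    import java.util.List;

    @FunctionalInterface
    interface ThrowingFunction<T, R, X extends Exception> {
      R apply(T t) throws X;
    }

    class Maps {
      // map() just redeclares X, so the caller handles exactly what
      // the lambda can throw -- no wrapping into RuntimeException.
      static <T, R, X extends Exception> List<R> map(
          List<T> in, ThrowingFunction<T, R, X> f) throws X {
        List<R> out = new ArrayList<>();
        for (T t : in) {
          out.add(f.apply(t));
        }
        return out;
      }
    }

One wrinkle: when the lambda throws nothing, inference tends to resolve X to its Exception bound, so callers may have to pin it explicitly, e.g. Maps.<String, String, RuntimeException>map(...).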


I would prefer there were just a semantic way to find out what exceptions a method might throw. I develop a lot of Java and often end up just rethrowing a checked exception as RuntimeException or AssertionError.

A long time ago, I developed quite a lot of code in Ada83. Our team found having to use documentation to express what exceptions might be thrown led to many errors. I was pleased when Java came around that this was expressed directly in the function declaration.

But then it became clear that it led to a lot of boilerplate.

I would like the throws keyword to just be an indication of what might be thrown and not require that I catch it.


I don't mind Java checked exceptions but of the languages in this style I think I prefer C#'s Task type. It combines a promise with a simple IsFaulted boolean and a way to rethrow the caught exception at your discretion (or it will throw if you accidentally access the result getter).

You can use catch block style with typed exception handling or simple boolean checks depending on the situation and what you prefer.


Java has checked exceptions, unchecked exceptions, and errors.

Some checked exceptions, such as InterruptedException, should really be something else. I've very rarely seen this exception handled properly by anyone. Often, a general catch Exception will also catch this, and while the code will work just fine in most circumstances, random threads will not go away when they should.

It's a mess.
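For reference, the usual minimally-correct handling, when you can't just let it propagate, is to restore the interrupt flag:

    try {
      Thread.sleep(1000);
    } catch (InterruptedException e) {
      // Re-set the flag so code further up the stack can still see
      // the interruption; swallowing it is what strands threads.
      Thread.currentThread().interrupt();
    }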


Java's biggest mistake was not including lambdas and generics from the very beginning. Checked exceptions are a feature because they force you to think about how you will handle exceptions: usually one of die, try again, or try something else. Forcing the programmer to make these decisions explicitly is a good thing.


C# also adopted lambdas and generics after its first release, but made all the right transition decisions, so its type system isn't a mess.


If the exceptions thrown by your code aren't part of the interface, then pretty much nothing is. That means strong typing is actually Java's biggest mistake.

This isn't to say that checked exceptions are always used well. (What exactly am I supposed to do about an exception from .close()?)


Most codebases use Lombok; you can use @SneakyThrows or @SneakyThrows(SpecificException.class) - https://projectlombok.org/features/SneakyThrows
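Usage looks roughly like this (a sketch; the checked IOException is rethrown without being declared or wrapped):

    import java.nio.file.Files;
    import java.nio.file.Path;
    import lombok.SneakyThrows;

    class Reader {
      @SneakyThrows // compiles despite the undeclared IOException
      String read(Path p) {
        return Files.readString(p);
      }
    }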


I'm surprised how controversial this is; checked exceptions are a mistake. There is a reason C#, Go, Swift, Rust, Scala, etc. don't have them, and it's not because these language authors don't know what they are doing.

Checked exceptions have all the disadvantages of monads:

* Checked exceptions don't compose with each other. You have to create manual witnesses of composition (methods throwing multiple exception types, or wrapping exceptions into new exceptions like a monad transformer)

* Checked exceptions infect the type system, having to be copied everywhere.

However, they have none of the advantages of monads, and more disadvantages besides. Java the language does not provide any facilities for abstracting over checked exceptions, and they interact terribly with any design involving higher order functions or any other higher-level abstraction.

It's time for the java community to admit they got this one wrong and move on.


> Java the language does not provide any facilities for abstracting over checked exceptions

Can you explain what you mean by this?


I can't write a method like this:

public <T, R> R higherOrderFunction(Function<T, R> f) throws whatever f throws { }

I can write a method that takes in a function object that throws zero checked exceptions. I can write a method that takes in a function that throws exactly one type of checked exception. I can write a method that takes in a function object that can throw two types of checked exceptions. And so on. But this involves lots of copy-pasting, which is the opposite of abstracting.

Checked exceptions are not first-class citizens in the language (see https://en.wikipedia.org/wiki/First-class_citizen). Unlike return values, I can't, for instance, in the general case assign the possible checked exception thrown by a method to a variable without losing type information. To do so would require sum types, a feature which most popular languages don't have.

On the other hand, if I create a class like IO<T>, which represents a possible return value of T or an IOException, then that is first class in the language and I can do anything with it that I can do with any other first-class value in the language.


So you effectively want -

interface ThrowingFunction<T, X extends Exception> { T execute() throws X; }

public <T, X extends Exception> T higherOrderFunction(ThrowingFunction<T, X> f) throws X { }

Honestly I have no idea if such a construct is possible. You can write a method that throws a superclass of exceptions.

At this point though I'm going to say I also don't see the utility. By the time we're getting so abstract we're also getting into code that can be quite hard to reason about and debug.

Exceptions are first class in java AFAICT, they're just objects. You can store anything in them and pass them around freely.


I write a lot of Java, and like checked exceptions. The problem is rather the opposite, basically all of the concrete exception classes that you might think to use (e.g. IllegalArgumentException) are unchecked!


Sure and they are a big, big mistake, but biggest? I can name bigger ones. Default virtual is quite the whopper, for example.


Given that Java is basically Objective-C ideas, but with C++ like syntax to cater to mainstream developers I wouldn't consider it a mistake.

What I consider a mistake is not making override a proper keyword.


> Default virtual is quite the whopper, for example.

Interesting. Could you elaborate on why you find that so?

By way of illustration regarding why I ask: I've been writing Java code for 20+ years now, and I can't remember a single time that the "default virtual" behavior wasn't what I wanted. And I wrote C++ before moving to Java so I'm familiar with the approach of requiring one to mark methods as virtual explicitly. I've just always found that the Java approach makes sense. What am I missing?


Not thinking about the design of the language is a necessary adaptation for Java coders; the cognitive dissonance would be debilitating.

The reasoning goes stepwise.

1. In a good design, a class interface represents an abstraction. The public interface provides access at the level of the abstraction, obscuring details of implementation.

2. If it is a good abstraction, by definition it maps the public view to an internal model that differs from the public view.

3. Inheritors provide different implementations of the abstraction by defining their own versions of inherited virtual-function signatures, operating on the internal, abstracted model.

4. Because these virtual functions operate on the implementation, they should not be exposed to the client. Exposing them would violate the abstraction, exposing the internal model.

5. The public members of the base class translate abstract operations to operations on the internal model, and call virtual functions that provide variable concrete operations on it.

6. Therefore, it is wrong for the public members of the base class to be virtual: their interface is the public view of the abstraction. It is their job to map that view to the internal model, and to call the virtual functions that implement the model.

7. Similarly, it is wrong for the virtual functions to be public, because they implement the internal model. Making them public would expose it.

All that said, since Java offers only one organizing mechanism, the class, it necessarily gets used for everything, and not just for well-designed abstractions. It is not wrong to treat language features as a pot of mechanisms, and use them in ways that do not match their designed intent. Coding Java, there is no choice about it; classes are all you have, so you use them for everything, and the well-designed abstraction may be a rarity among all the other uses, and might not exist at all in many programs.

But Java's limited feature set is out of scope for discussion of the designed purpose of virtual functions. Used according to the Object Oriented model, public virtual functions are an oxymoron. But as mechanism, they are as usable as any other, if they work.


My understanding is that default virtual is preferred because the JIT compiler is capable of inlining or devirtualizing most of the calls.

It would be a huge mistake in a statically compiled language because the static compiler doesn't have as much information as a JIT in flight.


nullability and no value types other than primitives.


Looking it up, “virtual” seems to mean “capable of being overridden by a subclass”. Rather than being a whopper, wouldn’t that be the natural thing for a pure-ish OOP language?


The sibling comments are talking about performance, but there's also a conceptual disadvantage to overriding.

There are two different ways to create a subclass in OOP. Java treats the subclass and the superclass as the same object. If the subclass calls a method on itself, it may go to the superclass first. This is a valid way to do things, but it can be dangerous and confusing.

Another way to create a subclass is to create two objects: the superclass and the subclass. The subclass has a reference to the superclass, but they are not the same object. If the subclass doesn't override one of the superclass's methods, then it implicitly proxies the method call to the superclass. If the superclass calls a method on itself, then it goes to itself rather than the subclass.

The difference between the two approaches is whether you want your class implementation to be open to extension (monkey-patching). Allowing class inheritance (Java's approach with the virtual methods) means that subclasses can override methods you define. It can be convenient, but it can also be a foot-gun. Class composition (proxies) prevents monkey-patching by subclasses.
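The difference shows up when the superclass calls one of its own methods. A sketch:

    class Base {
      void greet() { name(); } // a self-call
      void name() { System.out.println("base"); }
    }

    // Inheritance: one object, so Base's self-call dispatches to the override.
    class Sub extends Base {
      @Override void name() { System.out.println("sub"); }
    }

    // Composition: two objects; Base's self-call never leaves Base.
    class Wrapper {
      private final Base inner = new Base();
      void greet() { inner.greet(); } // forwards
      void name() { System.out.println("wrapper"); }
    }

    // new Sub().greet()     prints "sub"
    // new Wrapper().greet() prints "base"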


If you know that a method isn't virtual, calling that method is just a function call with an extra argument for 'this'. If it's virtual, you have to look it up in a vtable, probably dereferencing two pointers in the process and making it harder on your instruction fetcher.

Java makes a lot of common classes final (e.g. String and Optional) so that it can avoid this. As I understand it, it's also got a lot of pixie dust to predict this some of the time, but then you can't rely on it.


When the JIT was finally smart enough to handle lookups better, they didn’t change the types.

Final classes encourage the creation of utility classes, which can often become a problem.


Yes that's the benefit.

One downside is that every method now requires following a pointer, which is pretty awful for performance on modern hardware.


Methods being default virtual? Or, perhaps more accurately, being only virtual? If that's what you mean, that may well be the first time I've ever heard someone complain about this. I used Java for literally more than a decade and I never once thought "man, I wish I could create a non-virtual function".

This seems like some C++ pattern holdover where you obviously do have virtual vs non-virtual methods. 99% of the time I suspect it's just a source of extra bugs and the vtable lookup really isn't the performance hit (most of the time) you think it is.


You can create non-virtual methods in Java, just make them final.


Oh, so the OP really means methods being non-final by default? Yeah that makes way more sense and is a valid criticism.

I don't think it's that big a deal however because, in practice, libraries that use inheritance as an external API are almost an anti-pattern at this point. I forget who said it but someone (Josh Bloch? Scott Meyers?) said at one point "inheritance is an implementation detail". Or maybe it was "inheritance breaks encapsulation"? Something like that anyway.

I 100% agree with this. At that point it doesn't really matter if your methods are final or not. Ideally you make your leaf classes final and move on with your life. But designing for inheritance is probably not going to end well regardless other than some notable utility classes (eg AbstractMap).


As others have mentioned in the thread, the issue is that virtual calls are more expensive because you have to resolve the correct function in the related vtable.

So instead of just setting up and executing the call, you have to traipse through memory a bunch of times, affecting caches and making things slower.

But the thing is, Java's compiler works at runtime, so it can either inline the function (basically dump the code into the call site instead of invoking) or devirtualize it (if the same object is called over and over, the JIT can just remember the last lookup and add a guard if the target changes).

In, say, C++, making everything virtual would be crazy, because the compiler can only safely inline in certain situations and can rarely devirtualize a call.


Performance is irrelevant to the principle involved.


!?!


I also see people saying they miss checked exceptions because developers always forget to check the returned error code...


Checked exceptions are hardly Java's biggest mistake.


It's easy to see that checked exceptions are bad... no other language tried to copy the idea. A worthy experiment, but that's all. What else did not get copied? Classloaders.


Actually, PHP has checked exceptions exactly like Java


Well, that I did not know. But I'll take it as the exception that proves the rule if only one language copied it.


It's certainly not an endorsement of checked exceptions. Just an FYI. And, further, PHP is basically just copying every feature they can from Java for the last several years.


My biggest complaint with Java is generics. I am not against generics; I don't like the way they were implemented in Java, and they copy/pasted the same shit to C#.



