Computer programming is young enough as a field that the mainstream hasn't realized it isn't possible to fit everything into one unified taxonomy. (Biologists and librarians have both spent over a century trying, and they have a lot of insight on the matter.) Putting types into a hierarchy is a useful abstraction sometimes, but there's not some Grand Truth in it -- often, attempts at cramming disparate elements into a hierarchy just create extra complexity. Most systems will keep trying to turn back into graphs unless you anchor the hierarchy in a specific context (what does 'a IS-A b' mean, really?), and the context tends to shift with the problem definition.
Worse still, it's the sort of complexity that looks like work, even though it's usually more like flailing in quicksand.
I think not having to deal with class hierarchies is a big reason why some people find dynamically-typed languages so freeing. I'm still fairly new to Erlang, but I think its push towards modeling things as a graph of communicating processes, with a relatively flat type structure, is worth considering as an alternative. It explicitly focuses on the message protocol between actors rather than a tree of subtypes. Granted, I haven't worked on a large enough project to see it break down and get ugly yet. (C++-style OO sounds good on paper, too.)
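Erlang would be the natural language for this, but as a rough sketch in Python (all names here are my own invention): a "process" owns its state and exposes nothing but the small set of messages it understands, so there's no subtype tree at all, just a protocol.

```python
import queue
import threading

def counter(inbox, replies):
    # A tiny "process": its entire interface is the messages it accepts.
    count = 0
    while True:
        msg = inbox.get()
        if msg == "incr":
            count += 1
        elif msg == "get":
            replies.put(count)
        elif msg == "stop":
            return

inbox, replies = queue.Queue(), queue.Queue()
threading.Thread(target=counter, args=(inbox, replies), daemon=True).start()

# Callers only know the message protocol, not the process's internals.
for _ in range(3):
    inbox.put("incr")
inbox.put("get")
result = replies.get()
inbox.put("stop")
print(result)  # 3
```

Real Erlang adds supervision, distribution, and selective receive on top of this; the sketch only shows the basic shape of "state behind a mailbox".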
Regardless, it isn't classes per se that lead to OO's problems, but the tendency to overuse inheritance. Among statically-typed languages, Haskell's typeclasses seem like another good solution - they break the inheritance hierarchy up into a composition of properties.
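Haskell itself would be the place to show typeclasses properly; as a loose Python analogy (using `typing.Protocol`, which matches on capabilities structurally rather than by ancestry - class names here are invented for illustration), each "property" is a separate interface a type can satisfy independently, with no common ancestor:

```python
from typing import Protocol

class Showable(Protocol):
    def show(self) -> str: ...

class Comparable(Protocol):
    def cmp(self, other) -> int: ...

# Point opts into the Showable capability without inheriting from anything.
class Point:
    def __init__(self, x, y):
        self.x, self.y = x, y
    def show(self):
        return f"({self.x}, {self.y})"

def describe(v: Showable) -> str:
    # Any value with a matching .show() qualifies, regardless of class tree.
    return "value: " + v.show()

print(describe(Point(1, 2)))  # value: (1, 2)
```

The point of the analogy: capabilities compose side by side instead of stacking into one hierarchy, which is roughly what typeclass constraints like `(Show a, Ord a) => ...` buy you in Haskell.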
Also, late binding in OOP means any class can end up responsible for maintaining its invariants even after any of its methods has been overridden by some future subclass. That's not necessarily true even with dynamically-typed languages.
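A minimal Python illustration of that hazard (class names invented for the example): the base class's internal calls dispatch to whatever override the actual object carries, so a careless subclass can silently void an invariant the base class thought it enforced.

```python
class Account:
    # Intended invariant: balance never goes negative.
    def __init__(self, balance):
        self.balance = balance

    def withdraw(self, amount):
        if amount > self.balance:
            raise ValueError("insufficient funds")
        self.balance -= amount

    def drain(self):
        # Late binding: this calls whatever withdraw() the object really has.
        self.withdraw(self.balance)

class LoggedAccount(Account):
    def withdraw(self, amount):
        # The override forgets the guard, so the base class can no longer
        # guarantee its own invariant anywhere it relies on withdraw().
        print(f"withdrawing {amount}")
        self.balance -= amount

acct = LoggedAccount(50)
acct.withdraw(80)  # no error raised: the invariant is now broken
print(acct.balance)  # -30
```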
It isn't because programming is too young, it's because programming is commercial. The OOP mania is a new thing; it's not something we've been trying to do forever and are only now realizing is problematic. It was pushed by manager types who just wanted as many reports as possible and something that let them divide arbitrarily simple jobs among all of them, and it was allowed to grow by a class of programmers who didn't know any better because they were taught just enough to churn out the sort of programs industry figured it needed. Now it's everywhere, and even people who know it's crap just have to deal with it to get anything done.
Your points concerning inheritance are good but "the field is young" seems like just a pseudo-profound explanation for the situation. CS as a field really can't be compared in this way to other fields - it's not really a science since it doesn't deal with the discovery of verifiable facts. It is more like some combination of mathematics, humanities, management and engineering.
CS has to connect very fuzzy things, human beings, with very exacting things, computers. The problems of CS aren't really about the age of the discipline but about the difficulty of the domain. For example, when you start to get into biology-as-programming-with-DNA, you are dealing with an even more complex domain, and biologists look to CS for clues in this situation.
That's why I mentioned librarians. Professional catalogers have spent over a century trying to figure out how to organize resources so that people doing research can find relevant information, and classifications that are useful for everyone prove surprisingly elusive. The Dewey Decimal system is from the 1870s, after all. (Its flaws are well-known, now, and there are several other cataloging systems in use.)
I was thinking about a quote (I think it's from Philip Greenspun or Joe Armstrong, though in ten minutes of googling I wasn't able to source it. Anyone?) observing that programming seems to be in a pre-scientific phase (in the Kuhnian sense) -- there still isn't a broad consensus about major ideas, so advances often come via books and manifestos rather than gradually through organized research.
(Edit: The closest match I'm getting is regarding AI. I've been reading about Prolog, so that might be where I read the quote...)
The thing is that the phrase "pre-scientific" implies that CS is destined to reach a scientific phase. I don't know if that's at all certain.
... Another thing to think about is how Google today often lets one find more cross-links than any hierarchical system would, yet Google itself is not a well-defined system built on the semantics of the items involved, but rather a "dumb" algorithm that only looks at the links between things...
I don't know if programming (as opposed to computer science, in the research sense) will ever attain "science" status, because we don't stay in one place long enough to get settled. If we were all still writing terminal apps on 80x25 green screens, the science of how to do that would be pretty well settled by now, with recipes readily available for every situation a normal programmer would ever hit, and easy access to people who could do the somewhat harder cases.
"What about the really hard cases?" you might ask, and the entire point of my hypothetical is that if we can't do it in 80x25, we don't do it.... and the contrast to the real world where we kept going is exactly my point. So, when will programming finally stabilize enough to become a science? I don't know, but I don't see it even in the long term (30+ years).
That's kind of how I think of the goal of finding a grand unified theory in physics. It seems to me that if such a thing is ever found, it means there is some unifying and "correct" organization to the universe and that'd revolutionize a lot of stuff when the same ideas and structures are applied to all other fields. It'd be a huge win if this worked out - and not just for physics. In my warped and uninformed mind, the grand unified theory, the perfect programming model, P=NP, and countless other such edges of theory are all different facets of the same entity that we have yet to identify - assuming it even exists in an identifiable way in our reality.