As a philosophy grad who finds this stuff mildly interesting, I wish the "rationalists" had chosen a different name for their movement. While "rationality" brings to mind Plato or Descartes, this movement appears to be a kind of hyper-empiricism informed by cognitive psychology, AI research, and Bayesian statistics. In any case, if you're going to be an empiricist, it strikes me as a pretty decent way to go.
One thing the parent didn't bring up is that there was a philosophical movement called "rationalism" that preceded the contemporary movement that refers to itself by the same name. These so-called "Continental Rationalists" included philosophers like Descartes and Leibniz. The British Empiricist movement, featuring Locke, Berkeley, and Hume, arose in response to the Continental Rationalists.
One of the key points of contention between these two movements was the problem of "innate knowledge". Rationalists believed that we are born with certain kinds of knowledge (a view known as innatism). Leibniz believed that we are born with basic mathematical knowledge and reasoning. Descartes thought he had many innate ideas -- perhaps most famously, his proof of the existence of God relied on his having an innate idea of God.
It was not until Locke came along that anyone presented a systematic theory attempting to account for the acquisition of basic forms of knowledge (e.g. difference and similarity, truth and falsehood) while maintaining an initial "tabula rasa" state.
I wouldn't lump the contemporary rationalist movement in with either of these previous movements -- it is probably best considered an extension of logical positivism, which owes its foundations to figures like the early Wittgenstein.
However, positivism brushed up against many philosophical problems: semantics, the analytic-synthetic distinction, etc. Things got worse when Wittgenstein eventually wrote "Philosophical Investigations", which effectively contradicted everything he had written in the "Tractatus Logico-Philosophicus" -- the foundational text of logical positivism. The two works are so different that people generally distinguish between the "Tractatus" and "post-Tractatus" Wittgenstein.
One of the key critics of logical positivism was Karl Popper -- best known for formalizing the idea of falsifiability -- who eschewed induction altogether. I highly recommend his "The Logic of Scientific Discovery" (which is not to say that I agree with Popper, but he is an important figure in the philosophy of science).
Thomas Kuhn's "The Structure of Scientific Revolutions" calls into question the very idea of there being a "scientific truth" that is independent of historical assumptions. Contemporary heirs of this kind of historically focused thought, like Ian Hacking, try to make room for the ideas of "scientific fact" in some contexts (particle physics) but question it in others (psychology). I highly recommend Hacking's "Making Up People" -- a brilliant collection of essays which is still friendly to people who haven't read a great deal of Foucault (another important figure in recent philosophy of science).
Second to last in my brief and certainly not comprehensive list is Hilary Putnam. It's hard to precisely pin down Putnam's thought -- as a philosopher he is remarkably open to criticism; he has even written papers in response to himself. Putnam has mostly remained committed to scientific realism, which basically states that scientific theories are usually more-or-less true. Early on, he was a metaphysical realist who later became sharply opposed to that school. Frankly, I don't know where he stands today, but read his essay "Brains in a Vat" if you're at all interested in skepticism.
Finally, this list of alternatives to positivism would be incomplete without Richard Rorty. His landmark text, "Philosophy and the Mirror of Nature", rejects the possibility of an epistemology altogether. I can't recommend Rorty's writings enough -- I believe he and Foucault are the two most important philosophers of the last 50 years.
I hope that I've given some more background on the rationalist/empiricist debate and contemporary alternatives to proper empiricist and positivist thought. I certainly cannot give you anything comprehensive in the space of a HN comment -- my only goal has been to hint at various schools of thought on this matter. Pierre Hadot wrote that philosophy is a way of life more so than a domain of thought -- and I tend to believe that in life, the journey is more important than the destination. In that respect, it is impossible for me to give you a firm answer to the question you asked of the OP: "if not empiricism, then what?" -- but I hope that my comments help you find where you stand.
> Thomas Kuhn's "The Structure of Scientific Revolutions" calls into question the very idea of there being a "scientific truth" that is independent of historical assumptions.
OK, let's do this.
We have a scientific theory.
From it, we derive some engineering discipline, which uses the theory to, essentially, make predictions about what will happen if we do this to that, with the property that, if the predictions hold true, we'll have something useful.
The people following the engineering discipline create things.
Those things work.
Does that not, then, validate the scientific theory?
And if that scientific theory is validated, does that not knock Kuhn on his ass?
Because the forces to which the engineers' artifact is subject don't give a rat's ass what our current culture says. They were the same billions of years ago and will be the same billions of years hence, whether or not our species, or intelligent life at all, is around.
OK, some fields of science don't make sense without humans to study. Right. But others will still be just as true if we're wiped out and replaced by sapient Corvidae, or not replaced at all.
Kuhn never denies that there are facts about the universe that we can observe. And I agree with you about this general class of example: no number of scientific discoveries or paradigm shifts will ever make the photoelectric effect cease occurring.
Kuhn's argument is that science occasionally undergoes what he calls "paradigm shifts" -- a low-level shift in assumptions about a certain realm of scientific thought that fundamentally changes the way we approach a particular field. One example Kuhn gives is the Copernican Revolution. Copernicus, as we all know, proposed the heliocentric solar model. Before Copernicus, most people used Ptolemy's epicycles to model the movement of planetary bodies. Initially, it worked, but the cracks started to show as observations accumulated. A major shift in our assumptions about the organization and modeling of planetary bodies had to occur before scientific progress could move forward.
In this sense, scientific knowledge progresses in giant shifts, rather than linearly or incrementally. Consider how the theory of atomism was disrupted by the Rutherford gold foil experiment, or what the double-slit experiment did for physics. A dominant paradigm must always make way for a new paradigm in order for scientific progress to occur.
The upshot of all of this, according to Kuhn, is that the criteria for scientific truth are always caught up in certain historical assumptions and that we have to take these assumptions into account when assessing the veracity of a given theory. He doesn't say that there aren't facts about the universe, but rather, that the scientific approach to understanding the universe is caught up in a paradigmatic frame which makes it impossible to derive a simple, objective algorithm/process for scientific discovery.
> The upshot of all of this, according to Kuhn, is that the criteria for scientific truth are always caught up in certain historical assumptions and that we have to take these assumptions into account when assessing the veracity of a given theory.
If your historical assumptions are true in a useful way, your science will progress to the point where you have more-useful engineering; if they're wrong in an important way, your science will stall out, or give you the wrong answers. If they're neither, and they don't affect the ultimate utility of your predictions one way or the other, it all becomes a bit academic: Is it even useful to say your theories are wrong if they keep making good predictions and allow scientific progress to be made? Note well that atomism (pre-quantum) and geocentrism eventually stopped making good predictions, stopped being a gateway to more complete theories, or both. (For example, geocentrism is probably impossible to integrate with universal gravitation.)
"Truth" is something mathematics has access to, not physics. Truth-With-A-Capital-T is Absolute, Perfect, Incorruptible, and utterly inconsistent with reality as it is outside of the symbol-games we play in our minds, because Platonism is downright insane.
Therefore, "scientific truth" is contingent, sure, but it's contingent on more than mere fashions. It's contingent on experimentation and experiments don't care if your histories are contingent one way, the other, or the other way entirely. Nobody's histories were contingent enough to imagine the Earth repelled small rocks.
So Kuhn agrees that there is a universe and that there are, at least potentially, facts about that universe humans are capable of discovering. That puts him a few up on some philosophers. However, I don't agree that our criteria for scientific truth are fully entwined with our historical accidents, as long as we rely on science to predict what the non-human world is going to do.
That doesn't invalidate the scientific method, nor does it preclude having a best-known model that matches all the currently available experimental data. New data that invalidates a model will necessitate a new model, or a new model may come along with better predictive power.
And to use current rationalist terminology: any given scientific model has an associated probability estimate for being true, which is very close to but not equal to 1; any work built on top of that model will depend on the truth of the model; and invalidating a model in favor of a new one requires re-evaluating any work based on that model. The "giant shifts" you're referring to occur when a model lower down the stack, with a pile of things built on it, gets invalidated or replaced by a better model.
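As a toy illustration of that framing (the numbers here are invented for illustration, not drawn from any real experiment), here is how a probability estimate that starts very close to 1 can collapse once anomalous observations accumulate:

```python
# A minimal sketch of Bayesian model re-evaluation. All probabilities
# below are made-up assumptions chosen to illustrate the dynamics.

def posterior(prior, p_obs_given_model, p_obs_given_alt):
    """Bayes' rule: P(model | observation)."""
    evidence = prior * p_obs_given_model + (1 - prior) * p_obs_given_alt
    return prior * p_obs_given_model / evidence

# Start nearly certain the model is right...
p = 0.999
# ...then observe repeated anomalies that are unlikely under the model
# (p = 0.01) but plausible under some unknown alternative (p = 0.5).
for _ in range(5):
    p = posterior(p, p_obs_given_model=0.01, p_obs_given_alt=0.5)
print(p)  # after five anomalies, confidence has collapsed
```

A single anomaly barely dents the estimate (0.999 drops to about 0.95), but the updates compound: the "paradigm shift" moment is when the accumulated evidence finally overwhelms a prior that everyone treated as near-certainty.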
On a day-to-day basis, you don't typically re-evaluate the validity of Newtonian or relativistic physics. Most of us regularly use Newtonian models despite knowing that they don't exactly match how the universe works. And we know that relativistic models don't exactly match how the universe works either (notably on a very small scale), though we don't have better models yet that work on both small and large scales.
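To put a rough number on "don't exactly match": the Lorentz factor gamma = 1 / sqrt(1 - v²/c²) measures how far relativistic time dilation departs from the Newtonian assumption that gamma is always exactly 1. A quick sketch:

```python
import math

# Speed of light in m/s.
c = 299_792_458.0

def gamma(v):
    """Lorentz factor for speed v; Newton implicitly assumes this is 1."""
    return 1.0 / math.sqrt(1.0 - (v / c) ** 2)

# A car at ~108 km/h: the relativistic correction is on the order of
# 1e-15, far below any engineering tolerance.
print(gamma(30.0) - 1)

# At 90% of light speed the correction dominates: gamma is about 2.29,
# and the Newtonian model is simply wrong.
print(gamma(0.9 * c))
```

This is exactly why nobody re-derives their bridge calculations relativistically: at human scales the two models are indistinguishable to more decimal places than we can measure.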
You're right, but neither I nor Kuhn ever said that it did. In fact, he echoes your thoughts about how models replace one another:
"First, the new candidate must seem to resolve some outstanding and generally recognized problem that can be met in no other way. Second, the new paradigm must promise to preserve a relatively large part of the concrete problem-solving activity that has accrued to science through its predecessors."
He doesn't deny that there is such a thing as scientific progress; he only means to model scientific progress as an episodic cycle in which existing paradigms run into insoluble problems, and these problems are only resolved when the old paradigm is replaced.
It wasn't so long ago that Einstein declared that God doesn't play dice with the universe. Kuhn doesn't deny the existence of scientific facts or the utility of the scientific method -- he only hopes to illustrate that the notion of scientific truth is contingent on certain assumptions and that these assumptions often get in the way of future progress.
It's also important to remember that Kuhn wrote "The Structure of Scientific Revolutions" back in 1962. At that time, he was largely responding to the logical positivists. While I think a lot of the contemporary rationalist movement is caught up in old logical positivist modes of thought, your own willingness to invalidate models based on evidence wasn't fully developed before Kuhn and Popper brought such thinking into the mainstream.
> He doesn't say that there aren't facts about the universe, but rather, that the scientific approach to understanding the universe is caught up in a paradigmatic frame which makes it impossible to derive a simple, objective algorithm/process for scientific discovery.
I don't see how he gets from A to B. How does the fact that we can obviously only build models based on the past -- since the future is not accessible to us -- prove that it is impossible to refine models to asymptotically approach a hypothetical fundamental truth?
We may not have a formalized algorithm. But whatever is running on human brains has worked so far.
If I boil this down it sounds to me like he's claiming that a system that takes its past states as one of its inputs (observation of the universe being another) is incapable of refining scientific theories? Which, given the fact that's exactly how scientific discovery has been done so far, seems false.
There is only so much I can do to summarize his argument in an online forum without replicating the entire book. I recognize that I'm not doing the full text justice, but it's hard to go into much more depth without simply recommending that you read the book itself.
> If I boil this down it sounds to me like he's claiming that a system that takes its past states as one of its inputs (observation of the universe being another) is incapable of refining scientific theories? Which, given the fact that's exactly how scientific discovery has been done so far, seems false.
I wouldn't say that paradigm shifts like the Copernican revolution, the discovery of elementary particles, or the uncovering of quantum mechanics are acts of "refining" existing theories -- they largely involved throwing away a significant amount of work and starting from scratch. Scientists since Ptolemy created extraordinarily complex epicycle models to explain the movement of planetary bodies from a geocentric perspective. When the Copernican heliocentric model became accepted, all the work on epicycles became more or less useless.
Now, we didn't have to abandon Newtonian mechanics entirely, but quantum mechanics has replaced it in most fields dealing with particles and small numbers of atoms.
Kuhn's argument is that science has advanced, but not through a simple process of "refining". According to Kuhn, science isn't generally a linear or incremental process -- it is a cyclical one in which our existing models cease being useful and we have to find a better one.
I hope that helps! If you're interested, I highly recommend reading the text -- Kuhn was an excellent writer [1].
I think this is a very naive refutation of what the point was about Kuhn.
First off, the notion that engineering "validates" science. What do you mean by validate? Do you mean successful engineering informed by a set of scientific principles, somehow, in a scientifically rigorous way, renders those principles true? As in, end of story true?
Because most of mechanical engineering is done on the assumption that force equals mass times acceleration. The industrial revolution yielded countless engineering marvels on the back of Newton--cars, trains, breathtaking buildings and bridges. But the theory isn't "true," because Albert Einstein did some thinking and realized that all of the physics change as you go really fast--something "engineering" missed, despite the fact that the stuff "worked."
But somehow it feels wrong to say that Newton was wrong, right? Because in his world, with the kind of thinking and set of scientific instruments available to him, and the battle-hardened inverse square phenomena [1] that could be painstakingly measured and applied in engineering, it was infallible. It was true. But only in historical context.
1) Actually, not quite so. The strange orbit of Mercury was out of line with Newton's equations -- so much so that a planet, Vulcan, conveniently placed beyond the reach of Earth's observation, was invented to explain it. And so the theory was saved, until General Relativity explained the anomaly away, too. So much for "truth."
> I think this is a very naive refutation of what the point was about Kuhn.
I don't deny that it's naive.
> First off, the notion that engineering "validates" science. What do you mean by validate? Do you mean successful engineering informed by a set of scientific principles, somehow, in a scientifically rigorous way, renders those principles true? As in, end of story true?
Nothing is end-of-story true except in mathematics, where we have access to absolute truth by virtue of first having accepted an axiom system as absolutely true within a context and then having accepted some logical rules as being capable of turning one absolute truth into another absolute truth within the same context.
Mathematics is absolute, but it's only valid within the abstract context of that branch of mathematics.
Physics, for example, is only conditionally true, contingent on us finding evidence which refutes a given theory, but it is applicable to the real world.
So engineering provides evidence that the theories we have can predict the behavior of the Universe at least in the context where they're being applied. A theory is only validated in the world in which it is tested. Granted. However, to the extent it is tested and validated, that validation should be accepted as worthwhile, as opposed to being written off as something culturally contingent.
> Because most of mechanical engineering is done on the assumption that force equals mass times acceleration. The industrial revolution yielded countless engineering marvels on the back of Newton--cars, trains, breathtaking buildings and bridges. But the theory isn't "true," because Albert Einstein did some thinking and realized that all of the physics change as you go really fast--something "engineering" missed, despite the fact that the stuff "worked."
Right, and Einstein's predictions about how the acceleration of a massive particle to near light speed would affect its measurement of time were not validated by engineering but experimentation. And it's also true that bridge engineering validates Newton as much as it does Einstein and Dirac, for example, because it operates in a world where all three theories are "valid" in the sense of "if you use them to help make your bridge, they will not cause it to fall down", and it validates whatever ideas the ancient Roman bridge-builders had, at least if the bridge is of a style the Romans made. I grant all of that.
Philosophically, then, we're back to Popper, in that negative results push science forwards, whereas positive results only make us more sure that the ground we're standing on is solid. We shouldn't ignore positive results, though, because the bridge will still stand even after the next paradigm shift; we should further accept all theories as provisionally correct. That much seems fairly mainstream, philosophically speaking.
However, we are moving forwards. We are able to explain more observations than we have been able to in the past. We are not just moving in circles, with each paradigm shift undoing all of our work and sending us back to square one. We learn to make better and better bridges, to bring this back to engineering.
> But somehow it feels wrong to say that Newton was wrong, right? Because in his world, with the kind of thinking and set of scientific instruments available to him, and the battled hardened inverse square phenomena [1] that could be painstakingly measured and applied in engineering, it was infallible. It was true. But only in historical context.
Newton's laws were always provisional. We now know them to be incomplete, but still useful for human-scale construction, on Earth or in space or on other bodies entirely. They've been subsumed into more modern theories as a special case; they're the equations you observe when you set the parameters to be similar to what humans will experience first-hand. And, as you said, they couldn't explain Mercury, which modern theories can, so they were incomplete even before we had GPS satellites to falsify their predictions. (I mean, they were observably incomplete. Our observation doesn't dictate what reality is; any solipsists can kindly imagine that I don't exist and refrain from communicating with me.)
So engineering does validate theories, but validation isn't enough to winnow theories until you come up with some test some of them fail. That's just Popperian philosophy, though, isn't it? That's just the philosophy of science that all the cool kids are so done with right now, right? My point is that we shouldn't imagine that the validation is worthless, or imagine that it can be undone, because any new theory will have to explain precisely the same behavior as the old one, paradigm shift or no.
I don't really have time to respond to your comments right now, but I did want to make one tangential remark.
> Mathematics is absolute, but it's only valid within the abstract context of that branch of mathematics.
Mathematics itself went through a paradigm shift in the early 20th century, known as the "foundational crisis". At the time, mathematicians began running into paradoxes which existing theories could not properly address, including Russell's Paradox.
In response, mathematicians developed sets of formal axioms (nowadays most people use ZFC, although von Neumann–Bernays–Gödel set theory and other variations are sometimes used) intended to provide a mathematical foundation free of such paradoxes and contradictions.
However, as Gödel's Incompleteness Theorems demonstrated, no recursively axiomatizable set of foundational axioms strong enough to express arithmetic can be both consistent (free of contradictions) and complete (able to deduce all mathematical truths).
So, while it is true that mathematical proofs are formally valid deductions from a set of axioms, it is worth recognizing that the relationship between mathematics and truth is somewhat more complex than it seems. As it stands, for any such axiomatic system there are infinitely many mathematical statements that it can neither prove nor refute. Some philosophers have even sought to identify 'quasi-empiricism' in mathematical thought [1].
And if you find that interesting, you'll love James Conant's paper on Logically Alien Thought [2].
My point about Mercury was important because it shows that there is an appreciable level of "give" that a theory has before the scientific community agrees that there is something wrong with it. That level of give is socially determined. It does matter whether the discoverer of the anomaly is a Cambridge PhD or a crackpot with no credentials. The measurement instruments matter, and the fallibility of those instruments plays into the acceptance of the results, too. A Popperian viewpoint is somewhat naive because what constitutes a falsification is incredibly fraught! Read Lakatos. He models scientific progression as a series of research programmes that have "hard cores" of belief that are protected by ancillary theories. In the event of a negative result, it's those theories that are investigated first. For example: is my telescope correct? Is the theory of light that informs my telescope correct? Is there a dark planet influencing things that I can't see? In hindsight, Mercury should have falsified Newton, because all of the falsifying observations were valid. But it didn't, because reasons.
We (and by we I mean Popperians) want to believe that science is a series of universally positive logical assertions that can be cut down by a single negative observation, as logic would dictate. But we don't always know what the criteria are for successful negative observations. The criteria are less rigid and well defined than we would be willing to admit. They vary from community to community. Robert Millikan won the Nobel Prize for measuring the charge of the electron with his brilliant oil-drop experiment. Only problem? His measurement was wrong. As folks tried to repeat it, they deviated more and more from his original measurement, until, many repetitions and many publications later, they landed on the correct value. If you were to plot the accepted measurement of the electron's charge against time, you would see it drift very slowly from an arbitrary incorrect value to the correct one. You have to ask: how on earth is this possible? Bias, authority, imprecision over truth criteria -- all at play. And I think it's this sociological fuzziness, in play in many thousands of small ways, that leads us to at least question the assumptions on which truth is founded.
> We (and by we I mean Popperians) want to believe that science is a series of universally positive logical assertions that can be cut down by a single negative observation, as logic would dictate. But we don't always know what the criteria are for successful negative observations. The criteria are less rigid and well defined than we would be willing to admit.
Bayesian reasoning helps here, I think, because people are wrong, and different people are wrong with different probabilities. For example, overturning mass-energy conservation because someone said they saw a professor turn into a small cat or a strange spacecraft appear and disappear is not reasonable: The probability of one person being wrong or insane is a lot higher than the probability of something really well-verified being completely incorrect.
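In odds form, the asymmetry is stark. A quick sketch (every number below is invented purely for illustration):

```python
# Bayes' rule in odds form: posterior odds = prior odds * likelihood ratio.
# All figures are illustrative assumptions, not real estimates.

p_theory_wrong = 1e-9                       # prior: a well-verified law fails
prior_odds = p_theory_wrong / (1 - p_theory_wrong)

# The eyewitness report is guaranteed if the theory really is wrong, but
# witnesses are mistaken, fooled, or lying maybe 1 time in 1000 even when
# the theory is fine.
likelihood_ratio = 1.0 / 1e-3

posterior_odds = prior_odds * likelihood_ratio
p_posterior = posterior_odds / (1 + posterior_odds)
print(p_posterior)  # still tiny: roughly one in a million
```

Even granting the report a thousand-fold evidential boost, the posterior probability that the theory is wrong stays around one in a million. The testimony should shift our credence, just not by anywhere near enough to matter.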
Is it political at times? Yes. Can it be improved? Sure. But it is flawed, not completely broken, and I think Kuhn makes too much of the flawed-ness which encourages people to imagine that it's completely broken and therefore the next paradigm shift will validate homeopathy.
I don't think anybody is claiming that science is "completely broken," but there are some who want nice, clean, logical delineations, who think that science is filling out some invisible, giant truth table. And that by each assertion in that truth table, there's a straightforward "this is how to falsify me" entry that scientists can look up and enact.
Based on how science actually works, this notion is fanciful. No such table exists. If you were to have the luxury of asking the top physicists, say, to create such a table for you, they'd very likely all look different.
Also, your comment regarding homeopathy is something of a strawman. Paradigms are incommensurable. If we do incur a paradigm shift in our lifetimes, it's likely that our current ways of speaking about science will be unable to capture it.
The hang-up in these discussions is the concept of "truth."
Newton's physics are useful in engineering; in fact they have been good enough for almost every engineering project mankind has ever undertaken, including complex stuff like landing a robot on a comet. But are they true?
Well, we know they are not complete. There is evidence that Newton can't explain; that's how we ended up with our modern theories of quantum mechanics and general relativity. But are those true?
Because we know we cannot reconcile them with one another. And then there is dark matter and dark energy, which are as yet unexplained, and might comprise about 95% of the known mass/energy of the universe.
More prosaically, think of the last time you saw a bird fly. What's the "truth" there? Your observation, or the theories of quantum mechanics and gravity, which we believe govern the matter and energy of the bird?
In programming we have a concept of "leaky abstractions"--build a layer of abstraction on top an underlying technology, and chances are at some point, someone will have to descend to the underlying technology to fix a bug or find an optimization.
What if our entire history of scientific observation is just a collection of leaky abstractions? We have no way of telling in advance when we've reached the bottom. So the theory we think might be "true" today, might turn out to be a leaky abstraction tomorrow.
Edit to add: Cultural and historical assumptions raise their heads when we find the holes in the abstractions and are trying to explain them. In the absence of reliable theories, or in the face of seemingly incompatible observations, we just try to sort of apply what we already know.
Kuhn's idea is not that scientific knowledge might be wrong. It's that human beings might be wrong when they think they know what the truth, or reality, is.
Einstein said of quantum mechanics, "God does not play dice," and he spent years trying to prove that the universe is as deterministic as he believed it to be. That's what the voice of historical assumptions sounds like.
>From it, we derive some engineering discipline, which uses the theory to, essentially, make predictions about what will happen if we do this to that, with the property that, if the predictions hold true, we'll have something useful.
We actually use logic along with induction to get a lot of theories. The fact that it works is proved by the scientific method, not the other way around.
There are a lot of ways to be mistaken that you're sweeping under the rug. For example, the geocentric model worked reasonably well for predicting the movements of planets.
The scientific revolutions that Kuhn was talking about often don't make the old model entirely invalid, but rather an approximation that works reasonably well in a limited domain. So the old model is not entirely true, but it's sorta true. If you're the sort of person who wants to say that theories are either true or false, it's an edge case that's not easily handled.
This is a fairly uncharitable interpretation of Kuhn's thought. Please read my above explanation of Kuhn's thought or, better yet, skip the middleman and read "The Structure of Scientific Revolutions" and get it from the man himself :)
Aren't we past Kuhn these days? For instance, Imre Lakatos takes it further than Kuhn with his "research programmes", and Larry Laudan takes it further than both with his research traditions and problem-solving. And then we have all the -isms (positivism, empiricism, constructive empiricism, pragmatism, realism), which aim to explain the progress of knowledge (in the natural sciences, that is).
You have to remember that, in this era where there are more books in existence than is possible for a single person to read within a lifetime, most people haven't heard of everything.
I've certainly never heard of the two names you dropped.
Instead of empiricism? Probably some variant of pragmatism in the direction of Wilfrid Sellars and Robert Brandom. Try this on for size: section two of Brandom's article "Pragmatism, Inferentialism, and Modality in Sellars' Arguments against Empiricism."[1] Brandom's argument (which he has given different versions of elsewhere) is that having a belief is necessarily a social endeavor, because the content of that belief can never be fully specified by observations. Philosophers[2] have tried and failed to formalize observations in order to specify the content of beliefs in a formal language, but it never works. Brandom suggests an alternate route: the content of a belief is inseparable from the role it plays in the social "game" of asking for and giving reasons.
Here's a key passage: "Observational vocabulary is not a vocabulary one could use though one used no other. Non-inferential reports of the results of observation do not form an autonomous stratum of language. In particular, when we look at what one must do to count as making a non-inferential report, we see that that is not a practice one could engage in except in the context of inferential practices of using those observations as premises from which to draw inferential conclusions, as reasons for making judgments and undertaking commitments that are not themselves observations. The contribution to this argument of Sellars’s inferential functionalism about semantics lies in underwriting the claim that for any judgment, claim, or belief to be cognitively, conceptually, or epistemically significant, for it to be a potential bit of knowledge or evidence, to be a sapient state or status, it must be able to play a distinctive role in reasoning: it must be able to serve as a reason for further judgments, claims, or beliefs, hence as a premise from which they can be inferred. That role in reasoning, in particular, what those judgments, claims, or beliefs can serve as reasons or evidence for, is an essential, and not just an accidental component of their having the semantic content that they do."
EDIT2: This might be a more helpful explanation. Elsewhere, Brandom elaborates on this view by redefining beliefs as commitments. A commitment entails some other commitments, and is mutually incompatible with others. For example, if I believe that I'm eating an apple, this entails the belief that I'm eating a fruit. It's mutually incompatible with the belief that I'm eating a squid. It also entails that I believe things I might not even know about the apple, such as the chemical formulas for the various sugars I'm eating. The point is, if I play the game of reasoning correctly--mostly by dispensing with mutually incompatible commitments when they arise--others (e.g. botanists, chemists) can hold me accountable for all the correct beliefs about the apple. The "content" of the apple, for me, is nothing other than the series of moves I make in this game.
EDIT1: You should also read rpedroso's comment, which does a great job exploring the many alternatives that arose after empiricism reached its impasse.
> this movement appears to be a kind of hyper-empiricism informed by cognitive psychology, AI research, and Bayesian statistics.
Opening intro from wiki for Logical Positivism asserts,
> Logical positivism and logical empiricism, which together formed neopositivism, was a movement in Western philosophy that embraced verificationism, an approach that sought to legitimize philosophical discourse on a basis shared with the best examples of empirical sciences.
Stephen Bond had a neat essay, perhaps a bit dubious on the AI/declarative side, about this exact topic. He writes a bit acerbically, but read at least past "For a long time I accepted this explanation at face value...".
Thanks for the link to the essay. The images are interesting, I didn't get that anything was going on until the third one. It's pretty good reading once you pick up on his humor. This passage is a gem:
> The reluctant scion of a domineering steel tycoon, one of the wealthiest men in Europe, Wittgenstein spent his childhood hob-nobbing with the highest of imperial Viennese high society — and society doesn't get much higher than that. Groomed from an early age to take over his father's industrial empire, Wittgenstein instead fucked off to England at the earliest opportunity, to devote his life to the study of the most uncommercial and practically useless subject he could find. Russell proved an ideal mentor, and Predicate Logic an ideal subject.
The wikipedia definition isn't the clearest definition of logical positivism. If I had to define logical positivism in one sentence (which, in philosophy, is almost universally a bad idea), I would say something like this:
Logical positivism is a philosophical movement which maintains that statements are only meaningful if they can be formally derived or empirically verified.
In that respect, I think that logical positivism and the kind of empiricist approach identified by the OP have a significant amount in common. Similarly, I think the approaches share many of the same problems.
I'm seriously surprised at how many people here are fans of Yudkowsky, given his lack of any real productive output beyond thought experiments and fanfiction, the distinctly weird stuff he judges as right by taking his philosophy to the extremes (e.g. "it's better for one person to be tortured for fifty years than for an extremely large number of people to each get a speck of dust in their eye"), and the way he treats certain fringe positions as absolutely true despite the lack of any solid evidence to support them (for example, many-worlds theory).
> I'm seriously surprised at how many people here are fans of Yudkowsky
In my (admittedly extremely limited) exposure to him as a person through videos of his talks and some of his transhumanist writing, I didn't really get the impression that he's anything but a very nice and gifted person. So I think this practice of judging or dismissing entire lives by applying a simplistic theme is an antipattern, and this is true here, too. There's a reason why ad hominem attacks are generally frowned-upon.
Yes, I personally found the certainty with which several conclusions are asserted as universal truths on LessWrong off-putting at times, especially when used in conjunction with the Rationality label, which insidiously implies that any other analysis of the subject matter would be inherently irrational.
However, some subjectively bogus tenets notwithstanding, I still think it's a valiant and intellectually stimulating attempt at building a new philosophical framework which could potentially keep up with science and future human development. At the very least LessWrong Rationality is a good basis for an ongoing discourse on the subject, and at its best it demonstrates a unique exploration of ethics and, indeed, rationality.
You may argue most or even all of this framework is lifted from earlier philosophical and scientific achievements, but in some areas standing on the shoulders of giants is actually a good sign you're in the right place.
> You may argue most or even all of this framework is lifted from earlier philosophical and scientific achievements, but in some areas standing on the shoulders of giants is actually a good sign you're in the right place.
The good bits are not original and the original bits are not good.
The problem is that LessWrong has a habit of neologism - so EY will use his own term for something ("fallacy of gray" is one example - known for 2000 years as the "continuum fallacy"), then his young readers, who have met whatever it is for the first time ever, will think his work is much more original and significant than it is 'cos they can't find his term for it. This cuts them off from 2000 years of thinking on the topic and increases LW's halo effect.
What about people like me that would never have learned about the "continuum fallacy" if it weren't for Eliezer's willingness to stoop to my level and explain things like I'm 5 (or, more accurately, like I'm a fan of Harry Potter)?
I personally don't care one bit if the good bits aren't original. They are approachable, and nobody else has done that for me. So I applaud Eliezer and his efforts, regardless of whether or not he has broken ground philosophically.
Would you have known that everyone else had been calling it the continuum fallacy all that time? No, you wouldn't - you'd think Yudkowsky was uniquely insightful.
Furthermore, you wouldn't learn anything beyond the limits of Yudkowsky's knowledge, or - more importantly - that there was anything beyond those limits.
The habit of neologism makes stuff impossible to look up, and creates the illusion that this is new ground, not old, and that there isn't already a world out there.
su3su2u1, debating this matter with Scott Alexander (Yvain), sums up a lot of the problems with the worldview (one I'm as familiar with as anyone who doesn't actually drink the Kool-Aid can be, having spent around four years on LW and read not only the Sequences through twice but literally all of LessWrong from the beginning twice). I largely agree with the summary: https://storify.com/lacusaestatis/sssp-su3su2u1-debate
I'll quote one telling bit, which points out the level after Bayes:
> Heck, there are well defined problems where using subjective probability isn’t the best way to handle the idea of “belief”- when faced with sensor data problems that have unquantified (or unquantifiable) uncertainty the CS community overwhelmingly chooses Dempster-Shafer theory, not Bayes/subjective probabilities.
Do you remember the Sequences post mentioning the words "Dempster-Shafer"? Me neither.
(And then there's the use of "Bayesian" to mean things that nobody else uses the term for. As su3su2u1 puts it: "I suspect I’d be hard-pressed to write about probability theory in a way that wouldn’t fit some idea you cover by the word 'Bayesian.'")
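For readers who, like me, had to look it up: Dempster-Shafer theory assigns belief mass to *sets* of hypotheses rather than to single hypotheses, and fuses independent sources with Dempster's rule of combination. A minimal sketch (the mass assignments are made-up numbers for illustration):

```python
def combine(m1, m2):
    """Dempster's rule of combination for two mass functions.

    Each mass function maps frozensets of hypotheses to belief mass.
    Intersect every pair of focal elements, then renormalize by the
    total conflict K (the mass that landed on empty intersections).
    """
    combined = {}
    conflict = 0.0
    for a, p in m1.items():
        for b, q in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + p * q
            else:
                conflict += p * q  # the two sources contradict here
    return {s: mass / (1.0 - conflict) for s, mass in combined.items()}

# Two "sensors" with partial, set-valued beliefs about the weather:
m1 = {frozenset({"rain"}): 0.6, frozenset({"rain", "sun"}): 0.4}
m2 = {frozenset({"sun"}): 0.5, frozenset({"rain", "sun"}): 0.5}
fused = combine(m1, m2)
```

Note how mass on the whole set {rain, sun} expresses ignorance directly, something a single subjective probability can't do; that's the feature the quote is pointing at.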
Yudkowsky definitely gets credit as a good pop science writer. The habit of neologism, not so much, and neither does the encapsulated, self-referential world that LW builds around it. In philosophy, Yudkowsky is the quintessential Expert Beginner: http://www.daedtech.com/tag/expert-beginner
untiltheseashallfreethem notes in http://untiltheseashallfreethem.tumblr.com/post/107159098431... : "I think Eliezer did a great service in writing these ideas up. But they are not his ideas, and I’m really worried that a lot of people read LessWrong, see that Eliezer is right about this stuff, assume he came up with it all, and then go on to believe everything else he says." And that's a serious problem when the good stuff is not original, and the original stuff is not good.
I haven't read everything on LessWrong, nor do I have time to keep up on the meta-discussion of Eliezer's neologism habits, but I can say that I've never thought that he invented any of the concepts that I learned through HPMOR or the sequences that I have read.
On the contrary, he seems very intent on citing the very books and people from which he learned these things. At least in my more limited experience. You definitely seem to have studied up on the issue much more than I.
It took way too much reading to realise there wasn't actually a "there" there, that none of the pointers-to-pointers-to-explanations actually resolved in the end. The evidence is pretty clear that I have way too much time on my hands.
> e.g. "it's better for one person to be tortured for fifty years than for an extremely large number of people to each get a speck of dust in their eye"
The number in question was 3^^^3 using Knuth's Up Arrow Notation. As he explains, that's a lot of people: (3^(3^(3^(... 7625597484987 times ...)))).
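Knuth's up-arrow notation is defined recursively; here's a minimal sketch (it only terminates for tiny inputs, since 3↑↑↑3 is a power tower of 3s roughly 7.6 trillion levels high):

```python
def up(a, n, b):
    """a ↑^n b in Knuth's up-arrow notation: n arrows between a and b."""
    if n == 1:
        return a ** b          # one arrow is plain exponentiation
    if b == 0:
        return 1               # base case of the recursion
    return up(a, n - 1, up(a, n, b - 1))

# 3↑↑3 = 3^(3^3) = 3^27 = 7,625,597,484,987 -- the height of the
# power tower that 3↑↑↑3 (i.e. 3^^^3) denotes. Don't try up(3, 3, 3).
tower = up(3, 2, 3)
```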
If we decide that one person being tortured for 50 years is worth a quick blink by all those other people, then a still tiny number of people (say, 7,625,597,484,987) being tortured would be worth everyone else blinking furiously for their entire lives.
So it's not about deciding one person's fate. It's about being consistent, because if you make an exception for one person, that turns into several hundred billion, then that small sacrifice everyone else was to carry now has destroyed everyone else, too.
(I think the issue is that people tend to round down "speck of dust" to 0, then multiply, then find of course any amount of torture isn't worth 0.)
Personally, I take a more negative view and see the immense possibility for suffering as a reason that we should seek to destroy the entire multiverse. It's just not very nice otherwise.
I've thought about this and concluded that this thought experiment ignores the subjective way people experience reality. If a dust speck fell in your own eye, you would round that experience down to zero. You wouldn't even remember the event an hour later. I think this subjective experience of reality should be taken into account.
As for the part of the argument that dust specks in so many eyes would cause the death of a small fraction of the people, that's far from a given but for the sake of "steelmanning" let's assume that's true. At some point my own ethics place death as a higher utility outcome than prolonged suffering. How many deaths vs how much suffering? That's hard to quantify.
> I think the issue is that people tend to round down "speck of dust" to 0, then multiply, then find of course any amount of torture isn't worth 0.
Why not round it down to 0?
There's some level of discomfort in everyday life to start with that's effectively negligible for any normal person, simply because it's a prerequisite for interacting with the world (getting a raindrop in your eye, dealing with a minor wedgie, having an itchy nose, whatever).
If you round down before multiplying, you get an invalid answer. First multiply it out. So in my example: torturing 7,625,597,484,987 people for 50 years, versus that many specks of dust for each of everyone else. (7 trillion is essentially 1 compared to 3^^^3, right?) 7 trillion specks of dust is enough to turn all those other lives into torture, since it works out to a few thousand specks a second over even a very long life.
That's why you can't round down first. Eliezer's entire intent there is to illustrate that by default, we suck at scale, we're scope insensitive.
Another way: If everyone in the US gave just a penny directly to a poor person, once in a lifetime, that person would be wealthy for life and be OK, right? A penny rounds down to zero, thus we can determine that the right action is to always give money directly to poor individuals, as this will cure poverty at a cost of nothing.
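The order of rounding really is the whole disagreement; in integer cents (320 million is an assumed, rough US population):

```python
POPULATION = 320_000_000   # rough US population; assumed for illustration
penny_cents = 1

# Round each contribution down to "negligible" first, then aggregate:
per_person_rounded = 0                                   # a penny ~ nothing
total_if_round_first = per_person_rounded * POPULATION   # $0

# Aggregate first, then judge the total:
total_dollars = (penny_cents * POPULATION) // 100        # $3,200,000
```

Same inputs, opposite conclusions, depending only on whether you round before or after summing; that's the scope-insensitivity point.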
You're right that if the only choice, ever, in the entire multiverse was this one 50 years or 3^^^3 specks of dust, then sure, you don't need to follow the rules/math. But we're never going to be faced with just a single decision. Instead, we should be consistent and follow the numbers to determine utility.
> If you round down before multiplying, then you get an invalid answer.
Your argument here is based on the premise that the subjectively-inflicted-pain of "a whole lot of specks of dust in a person's eye" can be treated as a multiple of "one speck of dust in a person's eye", which to me is the point of absurdity that makes the whole thought exercise fall apart.
Yes, it's the sort of thing that makes people go to second-order utilitarianism.
(It's a bit like when someone says something absurd about Uber or Bitcoin and, when called out, says "But that's Econ 101!" Yes, but in second and third year you find out it's all a bit more complicated than that.)
That some arbitrarily minimal amount of subjective pain can be presumed to be actually distinguishable from the general background unpleasantness of being human in the first place.
Edit: Rephrased to remove gibberish resulting from temporary brain/keyboard disconnect.
For some reason HN won't let me reply to Houshalter's post, but...
> objectively
How can pain be objective? The universe doesn't give a shit that some lumpy sacks of meat dislike certain experiences, and exactly the same action can be unpleasant for some people but pleasant for others. (BDSM enthusiasts, for example, often actively seek out experiences that other people would consider painful.)
That depends on the context. If it's acupuncture, for example, it can be quite good, even if the precise sensation is exactly the same as the otherwise unpleasant case of poking myself with a needle while sewing.
It doesn't matter if it's "distinguishable". It's objectively worse. A pin prick might not be noticeable but a million is the worst torture imaginable.
You're assuming that dust specks across different people can be added, and then asserting that as evidence that dust specks across different people can be added. The addition is ridiculous.
In real-world AI, they tend to go for maximin (often loosely called "minmaxing"), which avoids such absurdities. Maximin says "don't torture people; your dust speck has literally been defined to be insignificant."
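To make the contrast concrete, here's a sketch of a maximin rule against naive utility-summing, with made-up utility numbers:

```python
def maximin_choice(actions):
    """Pick the action whose single worst outcome is least bad."""
    return max(actions, key=lambda a: min(u for u, _ in actions[a]))

def sum_choice(actions):
    """Naive utilitarian rule: pick the action with the largest total utility."""
    return max(actions, key=lambda a: sum(u * n for u, n in actions[a]))

# Each action is a list of (utility, number of people affected) pairs.
# The numbers are hypothetical, chosen so the two rules disagree.
actions = {
    "torture_one":    [(-1_000_000, 1)],
    "speck_everyone": [(-1, 2_000_000)],
}
```

With enough specks the naive sum flips to preferring the torture, while maximin never does, because it only ever looks at the worst single outcome; that's the "defined to be insignificant" point.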
That in real-life AI, they don't use simplistic utilitarianism.
You're dodging the core of the problem, as stated above: "However, when pushed too far, those tools tend to break down—but the rationalist answer to that breakdown is all too often to embrace the model and discount the reality." https://news.ycombinator.com/item?id=9204442
Just because you love your model doesn't make its conclusions at extremes true.
It's not so hard to explain: he writes well and generally has something interesting to say. This is more than enough to keep people reading.
That doesn't mean everything he writes stands up, but he's smart enough that you have to think a while about what might be wrong with his arguments, and that's entertainment enough.
I'm seriously unsurprised to find the token why-does-anyone-like-Yudkowsky-he-doesn't-have-a-PhD-and-hasn't-even-saved-the-world-yet criticism in a thread about his Harry Potter fan fiction.
He's a great writer, and that should be more than enough for the purposes of recommending a thing he wrote.
I left this out from the parent post because it's a subjective judgment, but I absolutely disagree that he's a "great" writer, or even anything really more than "serviceable" at best.
"Scientific parable fiction" is a pretty narrow genre (though I'd personally argue that Yudkowsky does a pretty poor job of it - for example, Harry constantly assuming things without testing them and just happening to be right because the author says so).
alexanderwales writes relatively short, punchy stories that explore specific academic and narrative themes, and, importantly, generally work extremely well as stories even if you discount the thought experiment aspects.
> I'm seriously unsurprised to find the token why-does-anyone-like-Yudkowsky-he-doesn't-have-a-PhD-and-hasn't-even-saved-the-world-yet criticism in a thread about his Harry Potter fan fiction.
I don't think that this was the kind of criticism that was expressed, but on the other hand it's always easier to fight a straw man.
I haven't read all of the "Sequences," nor am I familiar with most details of the Less-Wrong-mindset's idiosyncrasies. However, I just re-read the whole book and I'm not sure either of those concepts have any bearing on this work.
There is a bit of assertion that "timeless physics" is representative of true reality, but my reaction was to look it up and find that it's an interesting fringe theory. For all I know, EY doesn't even assert that same theory's truth. Nothing harmful in sparking curiosity, especially in things such as statistics and logical biases, which is where HPMOR puts its emphasis.
Interesting. I still have no problem with it, of course, buttressed by my quick glance at that page to see, at the very top:
Warning: The central idea in today's post is taken seriously by serious physicists; but it is not experimentally proven and is not taught as standard physics.
Today's post draws heavily on the work of the physicist Julian Barbour, and contains diagrams stolen and/or modified from his book "The End of Time". However, some of the arguments here are of my own devising, and Barbour might(?) not agree with them.
Yeah. Despite the disclaimer, he then goes on to assume it's literally true. And, for relevance to this post, he does so in the story as well, in Chapter 28.
> There is no “true math of quantum mechanics.” [...] These are different mathematical formulations, over different spaces, that are completely equivalent.
> What Hariezer is doing here isn’t separating the map and the territory, its reifying one particular map (configuration space)!
> I also find it amusing, in a physics elitist sort of way (sorry for the condescension) that Yudkowsky picks non-relativistic quantum mechanics as the final, ultimate reality. Instead of describing or even mentioning quantum field theory, which is the most low-level theory we (we being science) know of, Yudkowsky picks non-relativistic quantum mechanics, the most low-level theory HE knows.
> So this is more bad pedagogy: timeless physics isn’t even a map, it's the idea of a map. [...] It seems very odd to just toss in a somewhat obscure idea as the pinnacle of physics.
One important thing, for those who find the main character insufferable at times: this is not a simple Mary Sue story.
When you notice Harry doing something dumb, oblivious, overconfident, condescending, etc. do not assume that this is the author's personality leaking through. It may be the intended reading.
I only read a little of it, but every time a character did something dumb, oblivious, overconfident, condescending, I braced myself for a thought missile.
In fact, Harry's personality is a Clue, or rather Bayesian evidence to use when predicting future events in the story, or deducing the true nature of the mysterious events that started the story.
Given that the early parts of the story are written in such a way that the narrative treats Harry's behavior as completely right and justified, it seems far more likely to be a retcon done years after the fact than some secret master plan Yudkowsky had in mind from the beginning.
The ending was explicitly foreshadowed in Ch. 1, including advance quotes. And, like, all the other early chapters. Nobody familiar with HPMOR could possibly take seriously the notion that this was a retcon.
No, but it's pretty close to one. Guess who bit his maths teacher at age seven when said teacher didn't know what a logarithm was?
Also, read this: http://lesswrong.com/lw/k9r/cognitive_biases_due_to_a_narcis... It's a perfectly normal literary analysis of HPJEV as a narcissist and/or raised by narcissists. It's not Ph.D-quality rigour, but it does back its claims. Note the amazing special pleading in the comments - fans outraged someone would dare analyse their favourite thing in less than glowing terms.
They seriously think they can get this thing a Hugo, somehow evading all artistic critique along the way.
I'm very happy about this. I got to around chapter 85, but, due to the author's time constraints, I put it down at that point. I'll have to pick it up and start over. It is well worth the read.
Ditto. Started reading it a few weeks ago and just caught up two days before the end. Certainly ranks among my favorite novels now. Almost literally could not put it down until I finished. Thanks, Eliezer.
Thanks Eliezer. I loved it. People new to HPMOR will be able to gorge on it, but reading it in episodes and anticipating the next one with a bunch of people made it something special.
I've recommended this book to sooo many people, I think it is one of the best ways to introduce rational thinking & understanding to people (well... at least to Harry Potter fans).
Fascinating and excellent story. I was originally "eye rolling" at the diatribe against death, but I changed my mind when I realized how few people really have different and thought provoking opinions on what we think about the status quo. The rationality of the argument got past my original response, in time.
Similar here -- there are plenty of huge pitfalls there, if we don't die; but that's not the same thing as saying the pitfalls are insurmountable, and I value things that get me thinking without just hitting "oh, that's dumb" after a bit of reflection.
You would get more out of it having had the back story of at least watching the first movie (it's only 3 hrs). But the Yudkowsky version of it stands on its own.
Fan fiction at best is a grey area and at worst is blatant copyright infringement.
The community seems more than happy to leave it in that place rather than put it to the test by publishing something like that or attempting to make money from that.
As soon as someone tries to make money from it, even if just to recoup self-publishing costs there will be a lawsuit and it'll be decided concretely, probably not in the community's favor.
Yudkowsky has already made money off it by gating chapters behind receiving a certain amount in donations, which is part of why I find it really strange that he's apparently now trying to explicitly get Rowling's attention.
I didn't know that, donations are baiting the lion, I suppose it helps though that his story is almost completely separate from canon. That's a very risky path to walk though.
I offered to post pre-written chapters at a higher speed (1 per 2 days instead of 1 per 4 days) if a medium-sized charity I cofounded many years earlier made its annual donation target during a matching grant. Beware that HPMOR has a hatedom, and don't believe everything you're told about it.
The Author's Notes recently:
"I decided long ago that once HPMOR was fully written and published, I would try to get in touch with J. K. Rowling to see if HPMOR could be published in book form, maybe as HJPEV and the Methods of Rationality, with all profits accruing to a UK charity. I’m not getting my hopes up, but I do have a rule telling me to try rather than automatically giving up and assuming something can’t be done. If any reader thinks they can put me in touch with J. K. Rowling, or for that matter Daniel Radcliffe, regarding this matter, I do hereby ask them to contact me at yudkowsky@gmail.com."
I loved it. But I did have second thoughts when it started turning dark after I had already recommended it to a friend as adequate reading for his very young, Potter-obsessed son. After the troll incursion, I was thinking "Uh oh, what have I done?!?".
On a lighter note, I was hoping right to the end that the last transfiguration would end up producing a house elf, ideally Dobby. :D
Eliezer, if you're reading this: thanks. It was awesome.
No, it never stopped being awesome! But over time and many plots the focus changed and it became a bit more serious... It's totally worth every minute of reading! It's really great!
We have no way to know if homulilly got drunk and puked on the floor in public, but we do know that Yudkowsky and assorted Lesswrong visitors had a reaction to the Roko's basilisk concept that most relatively "normal" people would judge as embarrassing or humorous.
That was one situation in the long history of a community with high standards of reasoning and discussion, that produces tons of interesting and educative content. Are you really going to judge the whole community by a single overreaction they AFAIR later admitted was an overreaction?
I'm not really saying anything about the community other than my own immediate reaction to it. I've read a bit of hpmor and found it fairly entertaining in an Ayn Rand sort of way but I have trouble investing the time required in online fiction as vast as it is and I've found the fanbase to be fairly obnoxious in my personal interactions.
For the record, while I wont deny that I've had more to drink than I could handle in the past, I've always made it to the toilet in time to avoid any public embarrassment.
I'm in agreement regarding the absurdity and group think of Less Wrong, for reasons that aren't that relevant to this thread. That said, HPMOR is a fairly entertaining read, once you can get past the feeling that you're having an ideology forced on you through Harry's point of view. The book reminds me of things Ayn Rand has written in that sense.
(FYI: If you follow qrendel's above link to /r/XKCD you'll see a detailed list of the false claims in the RationalWiki article as they existed at that time. After repeatedly being provided such lists, David Gerard claims every time that no list of false claims have ever been provided him. Checking the XKCD link will show you that this is outright and verifiably false.)
I reply that you are blatantly lying about an easily checkable matter, and I request the interested observer to follow the link and check.
(Wow. Am I missing something about status games that make the above open lie a good move for David Gerard? He's done it repeatedly, too. I'm confused, how is this a good thing from his perspective? Am I playing into his hands somehow by calling him on it each time?)
I have repeatedly asked for a list of the inaccuracies, since I substantially wrote, researched and cited the article http://rationalwiki.org/wiki/Roko%27s_basilisk . Yudkowsky called me a LYING LIAR WHO LIES LIES LIES but the only claim of an actual lie he could come up with was where it said this was an incentive to donate to MIRI. Given that the original Basilisk post and the preceding post were literally about how to come up with lots of money to donate to MIRI (then SIAI), I'm not really willing to accept that as a lie.
For the claim of inaccuracies: that article is cited to the phrase level for good reason. If you're claiming inaccuracies, you're going to need actual details of why the cites are wrong.
Mileage varies! I'd like to affirm and congratulate your wise decision to stop reading at that point instead of continuing to torture yourself, and advise against anyone recommending parent to try again or go back. HPMOR is not going to be everyone's cup of tea!
For comparison, "Harry Potter and the Natural 20" is a better piece, because the author just wants us to have a good time, and is not constantly trying to prove how smart he is. (HPMOR reminded me a bit about Freakonomics in that respect. But Freakonomics is way worse.)
HPMOR could do with some radical editing, perhaps?
The Strategy of Conflict is one of THE BEST books for ANYONE to read. Schelling is one of the guys whose job it was to do game theory when "I know that you know that I know" had nuclear annihilation at the other end if you got it wrong. He came up with the Red Telephone between the White House and the Kremlin.
As a book that is literally about negotiating your way out of terminal nuclear war, I've also found it an excellent practical guide to raising a toddler.
Dr Strangelove is actually a treatise on why Smart Contracts are a terrible idea nobody should want - the plot is literally an unstoppable Smart Contract gone wrong.
I thought I was missing something because I only hear great things about it but I couldn't stand it. I tried twice to read it but could never understand the appeal. It's nice to know I'm not alone.
Same here. I'm not sure what those who enjoy the series get out of it, but I really couldn't see the appeal. Perhaps the author's use of HP as a vehicle for his philosophical ideas was just too jarring.
Among other things, I've often seen people try to sell HPMOR as, "Harry Potter fanfiction in which Harry applies the scientific method to the magical world of Harry Potter!" And that shows up a little [^1] but is by and large not the content of the work. Harry does very few experiments to verify his hypotheses and is actually a broadly incurious character. For example: Harry is nominally interested in eliminating death, but never once investigates the many magical mechanisms which seem to eliminate death, preferring to just talk about it instead. Similarly, he comes up with hypotheses as to how magic works, but never bothers to investigate them beyond speculation.
Additionally, a lot of the "solutions" to problems Harry has are relatively unsatisfying: Harry often circumvents a problem by "clever" applications of the rules, but because the rules of magic as given both in Rowling and in Yudkowsky are ill-defined and even self-contradictory, this seems less clever and more like arbitrary author fiat. Harry is also given a small time machine early on in the plot, and so more often than not, the solution to his problems is, "Use the time-turner again."
Finally, I personally found the writing style to be generally in need of editing—not terrible but certainly not polished—but I consider that to be a smaller problem than the above plot-related issues. EDIT: All this doesn't mean that you shouldn't like it! These are just problems that I had with the work.
[^1]: The bit where Harry tries to experimentally prove that P=NP is probably my favorite part of the entire thing.
While I think the line you were fed about its premise was incorrect, I don't think you read as far as chapter 28, did you?
The thing is... HPMOR isn't about the scientific method. It's about rationality. It's not a story about distilling a universal theory of magic or triumphing over death; it's a story about a kid who's had a rationalist education fumbling his way through a world that refuses to actually make sense.
I read well through chapter 70. I did imply that scientific investigation shows up a little, after all! I agree that the line people gave me wasn't correct, but I've heard that line enough that I felt it was appropriate to dispel that particular notion.
Perhaps also germane is that I am not personally a believer in Less Wrong-style rationality, and so the intellectual content of the work was not relevant to me except in the detached way that philosophical or political or religious schools can be interesting to non-adherents (which is one reason I read as much as I did.) Whether the story accomplishes its actual goal, then, is something I can't judge, but I can say that, as a non-rationalist (irrationalist?) I didn't find it to be a good or engaging story.
But as I said in my previous comment, this is my own reaction, and I include it for informative reasons, not because I think others should necessarily share it!
It wasn't apparent in my own comment, but I don't put much stock in the "intellectual content of the work" myself. I dislike LessWrong and self-described rationalists, including Yudkowsky as far as I know him. I agree with su3su2u1 that, insofar as Yudkowsky intended HPMOR as a pedagogical vehicle, it was not really well done.
That said, it is rare that a work of fanfiction manages to be a legitimate deviation and a self-contained work that both comments on and reflects its originator usefully. Generally speaking, I prefer to give all fiction the benefit of the doubt: whatever its failures, it is hard to ignore that it was successfully written, and as someone who has tried my hand at writing, I know firsthand the challenge that was overcome. There's a "Man in the Arena" quote that belongs here. I did find it to be a good and engaging story, though, if painfully in need of an editor, which is not a unique remark in the land of fanfiction.
If you want to really have fun, compare HPMOR with the Left Behind series, which qualifies as fanfiction as much as anything. There are probably some fascinating parallels to be found with their relationship to their original works and author intentions and the like. You can find a much higher quality counterpart to su3su2u1 in Fred Clark's Slacktivist blog, which dissects it on a page-by-hilarious-page basis.
This is the kind of thing which could very easily turn into a flame war, and is much further off-topic. I'm not interested in that kind of argument right now, so all I will say in this thread is this:
My personal experience with Less Wrong-style rationalism, to simplify the situation aggressively, is that it has a core of good, useful tools (Bayesian reasoning, strict positivism, utilitarianism) that I have no problem with. However, when pushed too far, those tools tend to break down, and the rationalist answer to that breakdown is all too often to embrace the model and discount the reality. This general refusal to regard their core tools with suspicion results in beliefs which are paradoxically irrational: when faced with, e.g., utilitarianism condoning torture to prevent mild discomfort, the rationalist response is not, "Perhaps human experience does not map straightforwardly to integers; we should re-examine our tools," but rather, "As our mathematical tools are of course correct, we must accept this conclusion." This belief in math-over-matter is a major factor (though not the only one) in my skepticism towards the kind of rationalism promoted by Less Wrong.
And yet no one has really made a convincing argument for why the conclusions of utilitarianism contradict the axioms of utilitarianism.
And no one seems to offer the rather simple solution that utility is non-linear (so dust in one's eye is only a vanishingly small fraction as bad as a century of torture), and people who mock Eliezer's utilitarianism hypocritically do so using sweatshop-built and sweatshop-recycled computers, clothes, and food.
The strongest argument against rationalist utilitarianism seems to be that people don't like the cognitive dissonance it imposes on hypocrites.
The linearity of utility is irrelevant to the dust-specks-or-torture problem, because you can always increase the number of people receiving dust specks until their summed disutility exceeds that of torturing a single individual for 50 years.
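To make that scaling argument concrete, here is a toy sketch in Python. The two disutility constants are entirely made-up illustrative assumptions, not anyone's canonical values; the point is only the shape of the argument.

```python
# Toy model of the dust-specks-or-torture scaling argument.
# Both constants below are illustrative assumptions.
TORTURE = 1_000_000.0  # assumed disutility of torturing one person for 50 years
SPECK = 1e-9           # assumed disutility of one dust speck: tiny but nonzero

def total_disutility(per_person, n_people):
    """Linear (summed) aggregation of identical individual harms."""
    return per_person * n_people

# However small SPECK is, some finite population tips the scale:
n = 1
while total_disutility(SPECK, n) <= TORTURE:
    n *= 10
print(n)  # a finite n at which the summed specks outweigh the torture
```

No choice of a smaller SPECK defeats this: the loop just runs a few iterations longer before the sum crosses the threshold.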
And calling people hypocrites because things like involuntary organ donation give them "cognitive dissonance" is absurd. Also regarding hypocrisy, note that Yudkowsky works for an organization whose goal is to protect humanity from Skynet, while people in Africa are starving. (I'm not personally attacking him; he can work on whatever he likes, and I think governments of wealthy nations have both the resources and the responsibility to alleviate starvation, while individual efforts are mostly pissing in the wind. But the parent poster brought up sweatshops &c., so I went there.)
The really funny part of HPMOR is that, while it was intended to demonstrate rationalism, what I got out of it was that rationalism doesn't actually work.
I'm unwilling to post spoilers, which makes this unfairly undiscussable, but the meaning of the original prophecy did not--could not--come down to rationalism, for reasons that were repeated a few times through the fic. I can't tell whether this was intentional on Yudkowsky's part, and I don't really care, since it changes nothing about the rest.
For the record, though, when I disparage utilitarianism, I promise it isn't just "Eliezer's".
>And yet no one has really made an convincing argument of why the conclusions of utilitarianism contradict the axioms of utilitarianism.
No, they've said "this is a reductio ad absurdum, perhaps reality is a bit more complicated than that." You don't get to assert your assumption - that simplistic utilitarianism works as advertised - as evidence.
Consider, for instance, minmaxing as an alternative (as real-life AI work tends to do). What answer does that give?
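For what it's worth, a toy sketch of that minmax alternative, again with made-up harm values: rather than summing harms across people, pick the option whose worst-off individual fares best.

```python
# Minimax choice: minimize the worst harm suffered by any single person.
# The harm values are illustrative assumptions, not canonical ones.
TORTURE = 1_000_000.0  # assumed worst-case harm under the torture option
SPECK = 1e-9           # assumed worst-case harm per person under the speck option

# Each option is summarized by the worst single-person harm it inflicts;
# the speck option's worst case is SPECK no matter how many people it hits.
options = {
    "torture one person for 50 years": TORTURE,
    "dust speck for arbitrarily many people": SPECK,
}

def minimax_choice(opts):
    """Return the option with the smallest worst-individual harm."""
    return min(opts, key=opts.get)

print(minimax_choice(options))  # the speck option, regardless of population size
```

Under this rule the specks win no matter how large the population grows, which is exactly why the choice of aggregation rule, not the arithmetic, is what's actually in dispute.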
Thanks for explaining your views. My intent was not to spark a flame war; I've heard quite a few critiques of Less Wrong, but most of the authors expressed their disagreement without explaining what they disagreed with (perhaps it was obvious to them).
I did enjoy HPMOR, but there are many reasons why people wouldn't.
Take one it has in common with the Harry Potter franchise: consider that "J.K." Rowling's publisher asked her to hide her gender. [1] Obviously there are many absurdities in such a system. Unicorns and horcruxes are perfectly fine, but god forbid you have a lead character who's (say) a black girl. Whites and males can't be expected to empathize with her!
I can enjoy a product while acutely aware of such things, since our backwards culture leaves little else to conveniently enjoy otherwise. But I decided not to go to the local Wrap Party, given the sort of people most likely to know about HPMOR and lesswrong.com posts, and what I researched of the local organizers. Better things to do with my day.