Here's a theory of what's happening, both with you here in this comment section and with the rationalists in general.
Humans are generally better at perceiving threats than they are at putting those threats into words. When something seems "dangerous" abstractly, they will come up with words for why---but those words don't necessarily reflect the actual threat, because the threat might be hard to describe. Nevertheless the valence of their response reflects their actual emotion on the subject.
In this case: the rationalist philosophy basically creeps people out. There is something "insidious" about it. And this is not a delusion on the part of the people judging them: it really does threaten them, and likely for good reason. The explanation is something like "we extrapolate from the way that rationalists think and realize that their philosophy leads to dangerous conclusions." Some of these conclusions have already been made by the rationalists---like valuing people far away abstractly over people next door, by trying to quantify suffering and altruism like a math problem (or to place moral weight on animals over humans, or people in the future over people today). Other conclusions are just implied, waiting to be made later. But the human mind detects them anyway as implications of the way of thinking, and reacts accordingly: thinking like this is dangerous and should be argued against.
This extrapolation is hard to put into words, so everyone who tries to express their discomfort misses the target somewhat, and then, if you are the sort of person who only takes things literally, it sounds like they are all just attacking someone out of judgment or bitterness or something instead of for real reasons. But I can't emphasize this enough: their emotions are real, they're just failing to put them into words effectively. It's a skill issue. You will understand what's happening better if you understand that this is what's going on and then try to take their emotions seriously even if they are not communicating them very well.
So that's what's going on here. But I think I can also do a decent job of describing the actual problem that people have with the rationalist mindset. It's something like this:
Humans have an innate moral intuition that "personal" morality, the kind that takes care of themselves and their family and friends and community, is supposed to be sacrosanct: people are supposed to both practice it and protect the necessity of practicing it. We simply can't trust the world to be a safe place if people don't think of looking out for the people around them as a fundamental moral duty. And once those people are safe, protecting more people, such as a tribe or a nation or all of humanity or all of the planet, becomes permissible.
Sometimes people don't or can't practice this protection for various reasons, and that's morally fine, because it's a local problem that can be solved locally. But it's very insidious to turn around and justify not practicing it as a better way to live: "actually it's better not to behave morally; it's better to allocate resources to people far away; it's better to dedicate ourselves to fighting nebulous threats like AI safety or other X-risks instead of our neighbors; or, it's better to protect animals than people, because there are more of them". It's fine to work on important far-away problems once local problems are solved, if that's what you want. But it can't take priority, regardless of how the math works out. To work on global numbers-game problems instead of local problems, and to justify that with arguments, and to try to convince other people to also do that---that's dangerous as hell. It proves too much: it argues that humans at large ought to dismantle their personal moralities in favor of processing the world like a paperclip-maximizing robot. And that is exactly as dangerous as a paperclip-maximizing robot is. Just at a slower timescale.
(No surprise that this movement is popular among social outcasts, for whom local morality is going to feel less important, and (I suspect) autistic people, who probably experience less direct moral empathy for the people around them, as well as to the economically-insulated well-to-do tech-nerd types who are less likely to be directly exposed to suffering in their immediate communities.)
Ironically paperclip-maximizing-robots are exactly the thing that the rationalists are so worried about. They are a group of people who missed, and then disavowed, and now advocate disavowing, this "personal" morality, and unsurprisingly they view the world in a lens that doesn't include it, which means mostly being worried about problems of the same sort. But it provokes a strong negative reaction from everyone who thinks about the world in terms of that personal duty to safety, because that is the foundation of all morality, and is utterly essential to preserve, because it makes sure that whatever else you are doing doesn't go awry.
(edit: let me add that your aversion to the criticisms of rationalists is not unreasonable either. Given that you're parsing the criticisms as unreasonable, which they likely are (because of the skill issue), what you're seeing is a movement with value that seems to be being unfairly attacked. And you're right, the value is actually there! But the ultimate goal here is a synthesis: to get the value of the rationalist movement but to synthesize it with the recognition of the red flags that it sets off. Ignoring either side, the value or the critique, is ultimately counterproductive: the right goal is to synthesize both into a productive middle ground. (This is the arc of philosophy; it's what philosophy is. Not re-reading Plato.) The rationalists are probably morally correct in being motivated to highly-scaling actions e.g. the purview of "Effective Altruism". They are getting attacked for what they're discarding to do that, not for caring about it in the first place.)
I finally gave in and created an account because of your comment. It's beautifully put. I would only perhaps add that, to me, the neo-rationalist thing looks the most similar to things that don't work yet attract hardcore "true believers". It's a pattern repeated through the ages, perhaps most intimately for me in the seemingly endless parades of computer system redesigns: software, hardware, or both. Randomly one might pick "the new and exciting Digg", the Itanium, and the Metaverse as fairly modern examples.
There is something about a particular "narrowband" signaling approach, where a certain kind of purity is sought, with an expectation that, given enough explaining, you will finally get it, become enlightened, and convert to the ranks. A more "wideband" approach would at least admit observations like yours do exist and must be comprehensively addressed to the satisfaction of those who hold such beliefs vs to the satisfaction of those merely "stooping" to address them (again in the hopes they'll just see the light so everyone can get back to narrowband-ville).
(Thank you) I agree, although I do think that the rationalists and EAs are way better than most of the other narrowband groups, as you call them, out there, such as the Metaverse or Crypto people. The rationalists are at least mostly legitimately motivated by morality and not just by a "blow it all up and replace it with something we control" philosophy (which I have come to believe is the belief-set that only a person who is convinced that they are truly powerless comes to). I see the rationalists as failing due to a skill issue as well: because they have so defined themselves by their rationalism, they have trouble understanding the things in the world that they don't have a good rational understanding of, such as morality. They are too invested in words and truth and correctness to understand that there can be a lot of emotional truth encoded in logical falsehood.
edit: oh, also, I think that a good part of people's aversion to the rationalists is just a reaction to the narrowband quality itself, not to the content. People are well-aware of the sorts of things that narrowband self-justifying philosophies lead to, from countless examples, whether it's at the personal level (an unaccountable schoolteacher) or societal (a genocidal movement). We don't trust a group unless they specifically demonstrate non-narrowbandedness, which means being collectively willing to change their behavior in ways that don't make sense to them. Any movement that co-opts the idea of what is morally justifiable---who says that e.g. rationality is what produces truth and things that run counter to it do not---is inherently frightening.
They aren’t motivated by morality. They just are more moral relative to the niches you referred to.
Any group that focuses on their own goals of high-paying jobs, regardless of the morality of those jobs or how they contribute to the structural issues of society, is not that good. Then donating money while otherwise being okay with the status quo---not touching anything systemic in such an unjust world but supposedly focusing on morality---is laughable.
I see it as moral action with a skill issue, which is how I see a lot of bad things in the world. People are very good at missing the forest for the trees.
In their defense they do try to do the calculations: is a high-paying job okay on net if you give most of the money away? depends on the job; depends on if somebody else would have it if you didn't; etc. Not that there is a rigorous way to do them but it's very much a group that does try.
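Just to make "try to do the calculations" concrete, here's a toy back-of-the-envelope version in Python. Every number and parameter name is invented for illustration; it's a sketch of the shape of the counterfactual reasoning, not anything the EA crowd has actually published:

    # Toy "earning to give" net-impact sketch. All numbers are made up;
    # the point is only the shape of the counterfactual calculation.
    def net_impact(salary, donate_frac, job_harm, replace_prob, replace_donate_frac):
        your_donations = salary * donate_frac
        # If you turn the job down, someone else probably takes it, causes the
        # same harm, and donates (much) less -- only the difference is yours.
        replacement_donations = replace_prob * salary * replace_donate_frac
        harm_attributable_to_you = (1 - replace_prob) * job_harm
        return your_donations - replacement_donations - harm_attributable_to_you

    # e.g. a $300k job, donating half, modest harm, near-certain replacement:
    print(net_impact(300_000, 0.5, 20_000, 0.9, 0.05))   # -> 134500.0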
> Ironically paperclip-maximizing-robots are exactly the thing that the rationalists are so worried about. They are a group of people who missed, and then disavowed, and now advocate disavowing, this "personal" morality, and unsurprisingly they view the world in a lens that doesn't include personal morality, which means mostly being worried about problems of the same sort. But it provokes a strong negative reaction from everyone who thinks about the world in terms of that personal duty to safety.
I had not read any rationalist writing in a long time (and I didn't know about Scott's proximity), but the whole time I was reading the article I was thinking the same thing you just wrote... "why are they afraid of AI, i.e. the ultimate rationalist taking over the world". Maybe something deep inside of them has the same reaction to their own theories as you so eloquently put above.
I don't read these rationalist essays either, but you don't need to be a deep thinker to understand why any rational person would be afraid of AI and the singularity.
The AI will do what it's programmed to do, but its programmers' morality may not match my own. What's more scary is that it may be developed with the morality of a corporation rather than a person. (That is to say, no morals at all.)
I think it's perfectly justifiable to be scared of a very powerful being with no morals stomping around!
Those corporations are already superhuman entities with morals that don’t match ours. They do cause a lot of problems. Maybe it’s better to figure out how to fix that real, current problem rather than hypothetical future ones.
This parallel has been drawn. Charlie Stross [0] in particular thinks the main difference is that pre-digital AIs behave much more slowly, so that other entities (countries, lawmakers, …) have time to react to them.
One mistake you're making is thinking that rationalists care more about people far away than people in their community. The reality is that they set the value of life the same for all.
If children around you are dying of an easily preventable disease, then yes, help them first! If they just need more arts programs, then you help the children dying in another country first.
That's not a mistake I'm making. Assuming you're talking about bog-standard effective altruists---by (claiming to) value the suffering of people far away the same as that of those nearby, they're discounting the people around them heavily compared to other people. Compare to anyone else, who values their friends and family and community far more than those far away. Perhaps they're not discounting them to less-than-parity---just to less than most people do.
But anyway this whole model follows from a basic set of beliefs about quantifying suffering and about what one's ethical responsibilities are, and it answers those in ways most people would find very bizarre by turning them into a math problem that assigns no special responsibility to the people around you. I think that is much more contentious and gross to most people than EA thinks it is. It can be hard to say exactly why in words, but that doesn't make it less true.
To me, the non-local focus of EA/rationalism is, at least partially, a consequence of their historically unusual epistemology.
In college, I became a scale-dependent realist, which is to say that I'm most confident of theories / knowledge at the 1-meter, 1-day, 1 m/s scales and increasingly skeptical of our understanding of things that are bigger/smaller, have longer/shorter timeframes, or move at faster/slower velocities. Maybe there is a technical name for my position? But it is mostly a skepticism about nearly unlimited extrapolation using brains that evolved under selection for reproduction at a certain scale. My position is not that we can't compute at different scales, but that we can't understand at other scales.
In practice, the rationalists appear to invert their confidence, with more confidence in quarks and light-years than daily experience.
> no special responsibility to the people around you
Musing on the different failure-directions: Pretty much any terrible present thing against people can be rationalized by arguing that one gadzillion distant/future people are more important. That includes religious versions, where the stakes of the holy war may be presented as all of future humanity being doomed to infinite torment. There are even some cults that pitch it retroactively: Offer to the priesthood to save all your ancestors who are in hell because of original sin.
The opposite would be to prioritize the near and immediate, culminating in a despotic god-king. This is somewhat more familiar; we may have more cultural experience and moral tools for detection and prevention.
A check on either process would be that the denigrated real/nearby humans revolt. :p
> they're discounting the people around them heavily compared to other people
This statement of yours makes no sense.
EAs by definition are attempting to remove the innate bias that discounts people far away by instead saying all lives are of equal worth.
>turning them into a math problem that assigns no special responsibility to the people around you
All lives are equal isn't a math problem. "Fuck it blow up the foreigners to keep oil prices low" is a math problem, it is a calculus that the US government has spent decades performing. (One that assigns zero value to lives outside the US.)
If $100 can save 1 life 10 blocks away from me or 5 lives in the next town over, what kind of asshole chooses to let 5 people die vs 1?
And since air travel is a thing, what the hell does "close to us" mean?
For that matter, from a purely selfish POV, helping lift other nations up to become fully advanced economies is hugely beneficial to me, and everyone on earth, in the long run. I'm damn thankful for all the aid my country gave to South Korea; the scientific advances that have come out of SK have damn well paid back any tax dollars my grandparents paid, many orders of magnitude over.
> It can be hard to say exactly why in words, but that doesn't make it less true.
This is the part where I shout racism.
Because history has shown it isn't about people being far or close in distance, but rather in how those people look.
Americans have shot down multiple social benefit programs because---and this is the reason people who voted against those programs directly gave---"white people don't want black people getting the same help white people get."
Whites in America have voted, repeatedly, to keep themselves poor rather than lift themselves and black families out of poverty at the same time.
Of course Americans think helping people in Africa is "weird".
> If $100 can save 1 life 10 blocks away from me or 5 lives in the next town over, what kind of asshole chooses to let 5 people die vs 1?
The thing about strict-utilitarian-morality is that it can't comprehend any other kind of morality, because it evaluates the morality of... moralities... on its own utilitarian basis. And then of course it wins over the others: it's evaluating them using itself!
There are entirely different ethical systems that are not utilitarian which (it seems) most people hold and innately use (the "personal morality" I'm talking about in my earlier post). They are hard to comprehend rationally, but that doesn't make them less real. Strict-utilitarianism seems "correct" in a way that personal morality does not because you are working from the premise that "only things that I can understand like math problems can be true". But what I observe in the world is that people's fear of the rationalist/EA mindset comes from the fact that they empirically find this way of thinking to be insidious. Their morality specifically disagrees with that way of thinking: it is not the case that truth comes from scrutable math problems; that is not the point of moral action to them.
The EA philosophy may be put as "well sure but you could change to the math-problem version, it's better". But what I observe is that people largely don't want to. There is a purpose to their choice of moral framework; it's not that they're looking at them all in a vacuum and picking the most mathematically sound one. They have an intrinsic need to keep the people around them safe and they're picking the one that does that best. EA on the other hand is great if everyone around you is safe and you have lots of extra spending money and what you're maximizing for is the feeling of being a good person. But it is not the only way to conceive of moral action, and if you think it is, you're too inside of it to see out.
I'll reiterate I am trying to describe what I see happening when people resist and protest rationalism (and why their complaints "miss" slightly---because IMO they don't have the language to talk about this stuff but they are still afraid of it). I'm sympathetic to EA largely, but I think it misses important things that are crippling it, of the variety above: an inability to recognize other people's moralities and needs and fears doesn't make them go away; it just makes them hate you.
> The thing about strict-utilitarian-morality is that it can't comprehend any other kind of morality, because it evaluates the morality of... moralities... on its own utilitarian basis.
I can comprehend them just fine, but I have a deep-seated objection to any system of morality that leaves behind giant piles of dead bodies. We should be trying to minimize the size of the pile of dead bodies (and ideally eliminate the pile altogether!)
Any system of morality that boils down to "I don't care about that pile of dead bodies being huge because those people look different" is in fact not a system of morality at all.
Well, you won't find anyone who disagrees with you here. No such morality is being discussed.
The job of a system of morality is to synthesize all the things we want to happen / want to prevent happening into a way of making decisions. One such thing is piles of dead bodies. Another is one's natural moral instincts, like their need to take care of their family, or the feeling of responsibility to invest time and energy into improving their future or their community or repairing justice or helping people who need help, or to attend to their needs for art and meaning and fun and love and respect. A coherent moral system synthesizes these all and figures out how much priority to allocate to each thing in a way that is reasonable and productive.
Any system of morality that takes one of these criteria and discards the rest of them is not a system of morality at all, in the very literal sense that nobody will do it. Most people won't sell out one of their moral impulses for the others, and EA/rationalism feels like it asks them to, since it asks them to place zero value on a lot of things that they inherently place moral value on, and so they find it creepy and weird. (It doesn't ask that explicitly; it asks it by omission. By never considering any other morality and being incapable of considering them, because they are not easily quantifiable/made logical, it asks you to accept a framework that sets you up to ignore most of your needs.)
My angle here is that I'm trying to describe what I believe is already happening. I'm not advocating it; it's already there, like a law of physics.
Perhaps part of it is that local action can often be an order of magnitude more impactful than the “equivalent” action at a distance. If you volunteer in your local community, you not only have fine-grained control over the benefit you bestow, you also know for a fact that you’re doing good. Giving to a charity that addresses an issue on the other side of the world doesn’t afford this level of control, nor this level of certainty. For all you know most of the donation is being embezzled.
I think another part of it is a sort of healthy nativism or in-group preference or whatever you want to call it. It rubs people the wrong way when you say that you care about someone in a different country as much as you care about your neighbors. That’s just…antisocial. Taken to its logical conclusion, a “rationalist” should not only donate all of their disposable income to global charities, they should also find a way to steal as much as possible from their neighbors and donate that, too. After all, those people in Africa need the money much more than their pampered western neighbors.
What creeps me out is that I have no idea of their theory of power: How will they achieve their aims?
Maybe they want to do it in a way I’d consider just: By exercising their rights as individuals in their personal domains and effectively airing their arguments in the public sphere to win elections.
But my intuition is they think democracy and personal rights of the non-elect are part of the problem to rationalize around and over.
Would genuinely love to read some Rationalist discourse on this question.
Why is "theory of power" a necessary framing? "Theory of power" seems to be only a popular idea in "online left progressive" circles which are based on pop readings of post-structuralist continental philosophy. There's plenty of other schools of thought out there and there have been many criticisms of the idea of "theory of power" altogether.
Reading critiques of Hegel is a great starting point for this reading.
Whether you accept it or not though, there's lots of non-rationalist schools that reject the need for a "theory of power".
I didn’t mean the term in anything but a colloquial sense: They believe certain outcomes are right and proper. Such outcomes don’t typically manifest themselves out of good intentions. What’s the plan?
When Curtis Yarvin is at least in your orbit, these should not be surprising questions to get.
you've latched on to some academic sense of the term but I understood exactly what the person you're replying to meant. They mean: are these people you would respect and trust to be in charge of things? What would they do? And a lot of people's vibe-check says it's a big unknown question mark, or worse, which is not something you can put faith in.
It's hard to argue with vibes which is what makes today's culture war based politics so difficult to weigh in on conclusively. It's all just vibes after all. One person's vibes can be cap to someone else.
> Humans are generally better at perceiving threats than they are at putting those threats into words.
Not only that, but this is exactly the kind of scenario where we should be giving those signals the most weight: The individual estimating whether to join up with a tribe. (As opposed to, say, bad feelings about doing calculus.)
Not only does it involve humans-predicting-humans (where we have a rather privileged set of tools) but there have been millions of years of selective pressure to be decent at it.
This is a fantastic comment, but it also underscores the thing I really dislike about most internet conversations about philosophy: a lack of reading prior art. Utilitarian morality certainly isn't a new concept though the rationalists today are some of its strongest standard bearers. But around the time when utilitarianism was starting to take hold, rigorous philosophical thinking had exactly the debates that come up here.
Part of the reason I enjoy rationalist discourse more is because, even if they are unabashedly utilitarian, they try to rigorously derive philosophy. Most internet discourse on philosophy is, as you say, just vaguely derived around gut feelings. But philosophy can and has been thought of rigorously. Virtue ethics and continental morality are both schools of thought that reject utilitarian ethics but are much more meaty than the sort of internet "no but my neighbors" intuition that you see in full force, and the weird insistence of these internet commenters on continuing to use their vague moral intuition without being rigorous about their own thoughts.
No offense, but this way of thinking is the domain of comic book supervillains. "I must destroy the world in order to save it." Morality is only holding me back from maximizing the value of the human race 1,000 or 1,000,000 years from now type nonsense.
This sort of reasoning sounds great from 1000 feet up, but the longer you do it the closer you get to "I need to kill nearly all current humans to eliminate genetic diseases and control global warming and institute an absolute global rationalist dictatorship to prevent wars or humanity is doomed over the long run".
Or you get people who are working in a near panic to bring about godlike AI because they think that once the AI singularity happens the new AI God will look back in history and kill anybody who didn't work their hardest to bring it into existence because they assume an infinite mind will contain infinite cruelty.
the correct answer is probably that it could contain an infinite amount of any of these things, but you don't know which one it's going to be a priori, and you get one shot to be right.
heh. It would not be an overstatement to say I've been working on better putting the issue with the rationalists into words for ten years. glad it's resonating with some people.
> The explanation is something like "we extrapolate from the way that rationalists think and realize that their philosophy leads to dangerous conclusions."
I really like the depth of analysis in your comment, but I think there's one important element missing, which is that this is not an individual decision but a group heuristic to which individuals are then sensitized. Individuals don't typically go so far as to (consciously or unconsciously) extrapolate others' logic forward to decide that it's dangerous. Instead, people get creeped out when other people don't adhere to social patterns and principles that are normalized as safe in their culture, because the consequences are unknown and therefore potentially dangerous; or when they do adhere to patterns that are culturally believed to be dangerous.
This can be used successfully to identify things that are really dangerous, but also has a high false positive rate (people with disabilities, gender identities, or physical characteristics that are not common or accepted within the beholder's culture can all trigger this, despite not posing any immediate/inherent threat) as well as a high false negative rate (many serial killers are noted to have been very charismatic, because they put effort into studying how to behave to not trigger this instinct). When we speak of something being normalized, we're talking about it becoming accepted by the mainstream so that it no longer triggers the ‘creepy’ response in the majority of individuals.
As far as I can tell, the social conservative basically believes that the set of normalized things has been carefully evolved over many generations, and therefore should be maintained (or at least modified only very cautiously) even if we don't understand why they are as they are, while the social liberal believes that we the current generation are capable of making informed judgements about which things are and aren't harmful to a degree that we can (and therefore should) continuously iterate on that set to approach an ideal goal state in which it contains only things that are factually known to be harmful.
As an interesting aside, the ‘creepy’ emotion (at least IIRC in women) is triggered not by obviously dangerous situations but by ambiguously dangerous situations, i.e. ones that don't obviously match the pattern of known safe or unsafe situations.
> Sometimes people don't or can't practice this protection for various reasons, and that's fine; it's a local problem that can be solved locally. But it's very insidious to turn around and justify not practicing it: "actually it's better not to behave morally; it's better to allocate resources to people far away; it's better to dedicate ourselves to fighting nebulous threats like AI safety or other X-risks instead of our neighbors".
The problem with the ‘us before them’ approach is that if two neighbourhoods each prioritize their local neighbourhood over the remote neighbourhood and compete (or go to war) to better their own neighbourhood at the cost of the other, generally both neighbourhoods are left worse off than they started, at least in the short term: both groups trying to make locally optimal choices leads (without further constraints) to globally highly suboptimal outcomes. In recognition of this a lot of people, not just capital-R Rationalists, now believe that at least in the abstract we should really be trying to optimize for global outcomes.
Whether anybody realistically has the computational ability to do so effectively is a different question, of course. Certainly I personally think the future-discounting ‘bias’ is a heuristic used to acknowledge the inherent uncertainty of any future outcome we might be trying to assign moral weight to, and should be accorded some respect. Perhaps you can make the same argument for the locality bias, but I guess that Rationalists (generally) either believe that you can't, or at least have a moral duty to optimize for the largest scope your computational power allows.
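To put a rough number on that respect for the discounting heuristic (a sketch of my own, with an invented failure rate, not anything the Rationalists endorse): if you assume each year carries some fixed probability that a predicted far-future benefit simply never materializes, then the weight you should put on it decays exponentially, which is exactly what the discounting ‘bias’ does:

    # If a promised future benefit has an (assumed) 3% chance per year of
    # never materializing, its expected weight decays exponentially, i.e. the
    # future-discounting heuristic falls out of plain uncertainty.
    p_fail = 0.03
    for years_out in (1, 10, 100, 1000):
        weight = (1 - p_fail) ** years_out
        print(f"{years_out:>5} years out: weight {weight:.3g}")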
yeah, my model of the "us before them" question is that it is almost always globally optimal to cooperate, once a certain level of economic productivity is present. The safety that people are worried about is guaranteed not by maximizing their wealth but by minimizing their chances of death/starvation/conquest. Up to a point this means being strong and subjugating your neighbor (cf most of antiquity?) but eventually it means collaborating with them and including them in your "tribe" and extending your protection to them. I have no respect for anyone who argues to undo this, which is I think basically the ethos of the Trump movement: by convincing everyone that they are under threat, they get people to turn on those that are actually working in concert with them (in order to enrich/empower themselves). It is a schema problem: we are so very very far away from an us vs. them world that it requires delusions to believe we are in one.
(...that said, progressivism has largely failed in dispelling this delusion. It is far too easy to feel as though progressivism/liberalism exists to prop up power hierarchies and economic disparities because in many ways it does, or has been co-opted to do that. I think on net it does not, but it should be much more cut-and-dry than it is. For that to be the case progressivism would need to find a way to effectively turn on its parasites, that is, rent-extracting capitalism and status-extracting moral elitism).
re: the first part of your reply. I sorta agree but I do think people do more extrapolation than you're saying on their own. The extrapolation is largely based on pattern-matching to known things: we have a rich literature (in the news, in art, in personal experience and storytelling) of failure modes of societies, which includes all kinds of examples of people inventing new moral rationalizations for things and using them to disregard personal morality. I think when people are extrapolating rationalists' ideas to find things that creep them out, they're largely pattern-matching to arguments they've seen in other places. It's not just that they're unknowns. And those arguments are, well, real arguments that require addressing.
And yeah, there are plenty of examples of people being afraid of things that today we think they should not have been afraid of. I tend to think that that's just how things go: it is the arc of social progress to figure out how to change things from unknown+frightening to known+benign. I won't fault anyone for being afraid of something they don't understand, but I will fault them for not being open-minded about it or being unempathetic or being cruel or not giving people chances to prove themselves.
All of this is rendered much more opaque and confusing by the fact that everyone places way too much stock in words, though. (e.g. the OP I was replying to who was taking all these criticisms of the rationalists at face-value). IMO this is a major trend that fucks royally with our ability as a society to make moral progress: we have come to believe that words supplant emotional intuition in a way that wrecks our ability to actually understand what people are upset about (I like to blame this trend for much of the modern political polarization). A small example of this is a case that I think everyone has experienced, which is a person discounting their own sense of creepiness from somebody else because they can't come up with a good reason to explain it and it feels unfair to treat someone coldly on a hunch. That should never have been possible: everyone should be trusting their hunches.
(which may seem to conflict with my preceding paragraph... should you trust your hunches or give people the chance to prove themselves? well, it's complicated, but it also really depends on what the result is. Avoiding someone personally because they creep you out is always fine, but banning their way of life when it doesn't affect you at all or directly harm anyone is certainly not.)
One thing I'd like to add though is that I do think there is an additional piece being discarded irrationally. They tend to highly undervalue everything you're describing. Humans aren't Vulcans. By being so obsessed with the risks of paperclip-maximizing-robots they devalue the risks of humans being the irrational animals they are.
This is why many on the left criticize them for being right wing. Not because they are, well some might be, but because they are incredibly easy to distract from what is being communicated by focusing too much on what is being said. That might be a bad phrasing, but what I mean is that when you look at this piece from last year about prison sentence length and crime rates by Scott Alexander[0], nothing he says is genuinely unreasonable. He's generally evaluating the data fairly and rationally. Some might disagree there but that's not my point. My point is that he's talking to a nonexistent group. The right largely believes that punishment is the point of prison. They might _say_ the goal is to reduce crime, but they are communicating based on a set of beliefs that strongly favors punitive measures for their own sake. This causes a piece like that to miss the forest for the trees, and it can be seen by those on the left as functionally right wing propaganda.
Most people are not rational. Maybe some day they will be but until then it is dangerous to assume and act as if they are. This makes me see the rationalists as actually rather irrational.