> tension between being "rational" about things and trying to reason about things from first principles.
Perhaps there's tension on a meta level. If you already have high confidence in something, reasoning it out again may be a waste of time. But of course the rational answer to a problem comes from reasoning about it; and of course chains of reasoning can be traced back to first principles.
> And the general absolutist tone of the community. The people involved all seem very... full of themselves? They don't really ever show a sense of "hey, I've got a thought, maybe I haven't considered all angles to it, maybe I'm wrong - but here it is". The type of people that would be embarrassed to not have an opinion on a topic or say "I don't know".
Doing rationalism properly is hard, which is the main reason that the concept "rationalism" exists and is invoked in the first place.
Respected writers in the community, such as Scott Alexander, are in my experience the complete opposite of "full of themselves". They often demonstrate shocking underconfidence relative to what they appear to know, and counsel the same in others (e.g. https://slatestarcodex.com/2015/08/12/stop-adding-zeroes/ ). It's also, at least in principle, a rationalist norm to mark the "epistemic status" of your think pieces.
Not knowing the answer isn't a reason to shut up about a topic. It's a reason to state your uncertainty; it's still entirely appropriate to explain what you believe, why you believe it, and how likely you think it is that you're right.
I suspect that a lot of what's really rubbing you the wrong way has more to do with philosophy. Some people in the community seem to think that pure logic can resolve the is–ought problem (https://en.wikipedia.org/wiki/Is%E2%80%93ought_problem); though plenty of non-rationalists also act this way, in my experience. Or they accept axioms that don't resonate with others, such as the linearity of moral harm: the idea that the harm caused by unnecessary deaths is objective and quantifiable - whether in number of deaths, Years of Potential Life Lost, or whatever else - and, furthermore, that it's logically valid to do numerical calculations with such quantities, as described at/around https://www.lesswrong.com/w/shut-up-and-multiply.
> In the pre-AI days this was sort of tolerable, but since then... the frothing at the mouth, convinced of the end of the world... just shows a real lack of humility and lack of acknowledgment that maybe we don't have a full grasp of the implications of AI. Maybe it's actually going to be rather benign and more boring than expected.
AI safety discourse is an entirely separate topic. Plenty of rationalists don't give a shit about MIRI and many joke about Yudkowsky at varying levels of irony.