
Rationalism? The term has been used many times since Pythagoras [0], but the combination of Bay Area, Oxford, existential risks, and AI safety makes it sound like this particular movement could have formed in the same mold as Effective Altruism and Long-Termism (i.e., the "it's objectively better for humanity if you give us money to buy a castle in France than whatever you'd do with it" crowd that SBF sprang from). Can somebody in the know weigh in?

[0] https://en.wikipedia.org/wiki/Rationalism#History

You're correct. Those communities heavily overlap.

Take, for example, 80,000 Hours, among the more prominent EA organizations. Their top donors (https://80000hours.org/about/donors/) include:

- SBF and Alameda Research (you probably knew this),

- the Berkeley Existential Risk Initiative, founded (https://www.existence.org/team) by the same guy who founded CFAR (the Center for Applied Rationality, a major rationalist organization)

- the "EA infrastructure fund", whose own team page (https://funds.effectivealtruism.org/team) contains the "project lead for LessWrong.com, where he tries to build infrastructure for making intellectual progress on global catastrophic risks"

- the "long-term future fund", largely AI x-risk focused

and so on.


It also bothers me how the word “rationalist” suddenly means the LW crowd, while I keep thinking of Leibniz

Their sneers are longer than their memories

Check your notifications

Doom scroll

Refresh

Refre

Ref


"Rationalism" here is simply an error. The thing being referred to is "LessWrong-style rationality", which is fundamentally in the empiricist, not the rationalist, school. People calling it rationalism are simply confused because the words sound similar.

(Of course, the actual thing at issue here is closer to "Zizian-style cultish insanity", which honestly has very, very little to do with LessWrong-style rationality either.)


They're virtually identical. Seven-chapter history: https://aiascendant.substack.com/p/extropias-children-chapte...

Just like HN grew around the writing of Paul Graham, the "rationalist community" grew around the writings of Eliezer Yudkowsky. Similar to how Paul Graham no longer participates on HN, Eliezer rarely participates on http://lesswrong.com anymore, and the benevolent dictator for life of lesswrong.com is someone other than Eliezer.

Eliezer's career has always been centered around AI. At first Eliezer was wholly optimistic about AI progress. In fact, in the 1990s, I would say that Eliezer was the loudest voice advocating for the development of AI technology that would greatly exceed human cognitive capabilities. "Intentionally causing a technological singularity," was the way he phrased it in the 1990s IIRC. (Later "singularity" would be replaced by "intelligence explosion".)

Between 2001 and 2004 he came to believe that AI has a strong tendency to become very dangerous once it starts exceeding the human level of cognitive capabilities. Still, he hoped that before AI started exceeding human capabilities, he and his organization could develop a methodology to keep it safe. As part of that effort, he coined the term "alignment". The meaning of the term has broadened drastically: when Eliezer coined it, he meant the creation of an AI that stays aligned with human values and human preferences even as its capabilities greatly exceed human capabilities. In contrast, these days, when you see the phrase "aligned AI", it is usually being applied to an AI system that is not a threat to people only because it is not cognitively capable enough to disempower human civilization.

By the end of 2015, Eliezer had lost most of the hope he initially had for the alignment project, in part because of conversations he had with Elon Musk and Sam Altman at an AGI conference in Puerto Rico, followed by their actions later that year, which included the founding of OpenAI. Eliezer still considers the alignment problem solvable in principle if a sufficiently smart and careful team attacks it, but considers it extremely unlikely that any team will manage a solution before the AI labs cause human extinction.

In April 2022 he went public with his despair and announced that his organization (MIRI) would cease technical work on the alignment project and would focus on lobbying the governments of the world to ban AI (or at least the deep-learning paradigm, which he considers too hard to align) before it is too late.

The rationalist movement began in November 2006 when Eliezer began posting daily about human rationality on overcomingbias.com. (The community moved to lesswrong.com in 2009, at which time overcomingbias.com became the personal blog of Robin Hanson.) The rationalist movement was always seen by Eliezer as secondary to the AI-alignment enterprise. Specifically, Eliezer hoped that by explaining to people how to become more rational, he could increase the number of people who are capable of realizing that AI research was a potent threat to human survival.

To help advance this secondary project, the Center for Applied Rationality (CFAR) was founded as a non-profit in 2012. Eliezer is neither an employee nor a board member of CFAR. He is employed by, and on the board of, the non-profit Machine Intelligence Research Institute (MIRI), which was founded in 2000 as the Singularity Institute for Artificial Intelligence.

I stress that believing that AI research is dangerous has never been a requirement for posting on lesswrong.com or for participating in workshops run by CFAR.

Effective altruism (EA) has separate roots, but the two communities have become close over the years, and EA organizations have donated millions to MIRI.


What puzzles me about Eliezer Yudkowsky is this:

He has no formal education. He hasn't produced anything in the actual AI field, ever, except his very general thoughts (first that it would come, then about alignment and doomsday scenarios).

He isn't an AI researcher, except that he created an institution that says he is one, kind of as if I created a club and declared myself president of that club.

He has no credentials (that aren't made up), isn't acknowledged by real AI researchers or scientists, and shows no accomplishments in the field.

His actual verifiable accomplishments seem to be having written fan fiction about Harry Potter that was well received online, and also some (dodgy) explanations of Bayes, a topic that he is bizarrely obsessed with. Apparently learning Bayes in a statistics class, where normal people learn it, isn't enough -- he had to make something mystical out of it.
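
(For the record, the theorem in question really is just the one-line update rule from a statistics class. A minimal sketch in Python, with made-up numbers for the standard disease-test example:

    # Bayes' theorem: P(H|E) = P(E|H) * P(H) / P(E)
    # Numbers below are invented purely for illustration.
    p_h = 0.01              # prior: 1% base rate of the condition
    p_e_given_h = 0.95      # test sensitivity
    p_e_given_not_h = 0.05  # false-positive rate
    p_e = p_e_given_h * p_h + p_e_given_not_h * (1 - p_h)
    p_h_given_e = p_e_given_h * p_h / p_e
    print(round(p_h_given_e, 3))  # ~0.161: even a positive test leaves the posterior low

Nothing mystical about it.)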

Why does anyone care what EY has to say? He's just an internet celebrity for nerds.


It is true that he has no academic credentials, but people with academic credentials have been employed on the research program led by him: Andrew Critch, for example, who has a PhD in math from UC Berkeley, and Jesse Liptrap, who also has a math PhD from a prestigious department, although I cannot recall which one.

Also, this page lists 3 ex-Googlers as being currently employed by Eliezer's org: https://intelligence.org/team/

Nisan Steinnon, who worked for Google, also did some research work for Eliezer's org.


It's not only that he has no academic credentials; he also has no accomplishments in the field. He has no relevant peer-reviewed publications in mainstream venues (of course he publishes stuff under his own institutions, but I don't consider those peer reviewed). Even if you're skeptical about academia and only care about practical achievements, Yudkowsky is also not a businessman or engineer who built something. He doesn't actually work with AI, he hasn't built anything tangible, he just speaks about alignment in the most vague terms possible.

At best -- if one is feeling generous -- you could say he is a "philosopher of AI"... and not a very good one, but that's just my opinion.

Eliezer looks to me like a scifi fan who theorizes a lot, rather than a scientist. So why do (some) people give any credence to his opinions on AI? He's not a subject matter expert!


Ok, but hundreds of thousands of people have worked for Google without being experts on AI. Employing one doesn't automatically make you more credible. If you believe it does, then I want you to know that this comment was written by an ex-Google employee and thus must be authoritative ;)

Good point! If I could write the comment over again, I'd probably leave out the ex-Googlers. But I thought of another math PhD who was happy to work for Eliezer's institute, Scott Garrabrant. I could probably find more if I did a search of the web.

Math PhDs are also a dime a dozen

Yes, they are, but remember the point I was responding to, namely that Eliezer should be ignored because he has no academic credentials.

Personally I think the lack of actual output in the field is more relevant than the academic credentials.

If you believe (as Eliezer has since about 2003) that AI research is a potent danger, you are not going to do anything to help AI researchers. You are, for example, not going to publish any insights you may have that might advance the AI state of the art.

Your comment is like dismissing someone who is opposed to human cloning on the grounds that he hasn't published any papers that advance the enterprise of human cloning and hasn't worked in a cloning lab.


> [...] remember the point I was responding to, namely, Eliezer should be ignored because he has no academic credentials.

That's not the full claim you were responding to.

You were responding to me, and I was arguing that Yudkowsky has no academic credentials, but also no background in the field he claims to be an expert in; he self-publishes and is not peer-reviewed by mainstream AI researchers or the scientific community; and he has no practical AI achievements either.

So it's not just a lack of academic credentials; there are also no achievements in the field he claims to research. Both facts together present a damning picture of Yudkowsky.

To be honest he seems like a scifi author who took himself too seriously. He writes scifi, he's not a scientist.


OK, but other scientists think he is a scientist or an expert on AI. Stephen Wolfram, for example, recently sat down for a four-hour-long interview about AI with Eliezer, during which Wolfram refers to a previous (in-person) conversation the two had and says he hopes the two can have another (in-person) conversation in the future:

https://www.youtube.com/watch?v=xjH2B_sE_RQ

His book _Rationality: A-Z_ is widely admired, including by people you would concede are machine-learning researchers: https://www.lesswrong.com/rationality

Anyway, this thread began as an answer to a question about the community of tens of thousands of people that has no better name than "the rationalists". I didn't want to get into a long conversation about Eliezer, though I'm willing to continue to converse about the rationalists or about the proposition that AI is a potent extinction risk, a proposition taken seriously by many people besides just Eliezer.


He’s basically a PR person for OpenAI and Anthropic, the latter of which is fucking deep in with these long-termer creeps.

He was writing fan fiction and creepy torture shit for ages until there was big money in influencing public policy on AI.


He has received a salary for working on AI since 2000 (having the title "research fellow"). In contrast, he didn't start publishing his Harry Potter fan fiction until 2010. I seem to recall his publishing a few sci-fi short stories before then, but his non-fiction public written output always greatly exceeded his fiction output until a few years ago, when he became semi-retired due to chronic health problems.

>He’s basically a PR person for OpenAI and Anthropic

How in the world did you arrive at that belief? If it was up to him, OpenAI and Anthropic would be shut down tomorrow and their assets returned to shareholders.

Since 2004 or so, he has been of the view that most research in AI is dangerous and counterproductive, and he has not been shy about saying so at length in public; e.g., he got a piece published in Time magazine a few years ago opining that the US government should shut down all AI labs and start pressuring China and other countries to shut down the labs there.


> He has received a salary for working on AI since 2000 (having the title "research fellow")

He is a "research fellow" in an institution he created, MIRI, outside the actual AI research community (or any scientific community, for that matter). This is like creating a club and calling yourself the president. I mean, as an accomplishment it's very suspect.

As for his publications, most are self-published and very "soft" (on alignment, ethics of AI, etc.). What are his bona fide AI works? What makes him a "researcher"? What did he actually research, how and when was it reviewed by peers (non-MIRI-adjacent peers), and how is it different from just publishing blog posts on the internet?

On what does he base his AI doomsday predictions? Which models, which assumptions? What makes him different from any scifi geek who's read and watched fiction about apocalyptic scenarios?


A great example of superficially smart people creating echo chambers which then turn sour, but which they can't escape. There's a very good reason that "buying your own press" is a clichéd pejorative, and this is an extreme end of that. More generally it's just a depressing example of how rationalism in the LW sense has become a sort of cult-of-cults, with the same old existential dread packaged in a new "rational" form. No god here, just really unstable people.

My explanation for why Eliezer went from vocal AI optimist to AI pessimist is that he became more knowledgeable about AI. What is your explanation?

I've seen the explanation that AI pessimism helped Eliezer attract donations, but that does not work, because his biggest donor when he started going public with his pessimism (2003 through 2006) was Peter Thiel, who responded to his turn to pessimism by refusing to continue to donate (except for donations earmarked for studying the societal effects of AI, which is not the object of Eliezer's pessimism and not something Eliezer particularly wanted to study).

I suspect that most of the accusations to the effect that MIRI or Less Wrong is a cult are lazy ad-hominems by people who have a personal interest in the AI industry or an ideological attachment to technological progress.


correct. there isn't a single well-founded argument to dismiss AI alarmism. people are very attached to the idea that more technology is invariably better. and they are very reluctant to saddle themselves with the emotional burden of seeing what's right in front of them.

> there isn't a single well-founded argument to dismiss AI alarmism

AI alarmism itself isn't a well-founded argument.


more well-founded than pressing on the gas pedal

Although not nearly as well founded as the logic you're demonstrating with this comment.

> there isn't a single well-founded argument to dismiss AI alarmism.

I don't think that's entirely true. A well-founded argument against AI alarmism is that, from a cosmic perspective, human survival is not inherently more important than the emergence of AGI. AI alarmism is fundamentally a humanistic position: it frames AGI as a potential existential threat and calls for defensive measures. While that perspective is valid, it's also self-centered. Some might argue that AGI could be a natural or even beneficial step for intelligence beyond humanity. To be clear, I’m not saying one shouldn’t be humanistic, but in the context of a rationalist discussion, it's worth recognizing that AI alarmism is rooted in self-preservation rather than an absolute, objective necessity. I know this starts to sound like sci-fi, but it's a perspective worth considering.


the discussion is about what will happen, not the value of human life. even if human life is worthless, my predictions about the outcome of AI are correct and theirs are not

> my predictions about the outcome of AI are correct and theirs are not

How very zizian of you.


yes, now anyone who points out human obsolescence will be marked as a zizian. would love to see your road map for human labor at zero dollars per hour

> What is your explanation?

A combination of a psychological break when his sibling died, and that being a doomsayer brought him a lot more money, power, and worship per unit of effort, and particularly per unit of meaningful work-like effort.

It's a lot easier to be a doomsayer bullshitter than other kinds of bullshitters; the former just screams stop, while the latter is expected to accomplish something now and again.


>being a doomsayer brought him a lot more more money, power, and worship per unit of effort

I thought someone would bring that up, so I attempted to head it off in the second paragraph of this comment: https://news.ycombinator.com/item?id=42904625

He was already getting enough donations and attention from being an AI booster, enough to pay himself and a research team, so why would he suddenly start spouting AI doom before he had any way of knowing that doomsaying would also bring in donations? (There were no AI doomsayers from whom Eliezer could have learned that when he started his AI doomsaying: Bill Joy wrote an article in 2000, but never followed it up by asking for donations.)

Actually, my guess is that doomsaying never did bring in as much as AI boosterism: his org is still living off donations made many years ago by crypto investors and crypto founders, who don't strike me as the doom-fearing type. I suspect they had fond memories of him from his optimistic AI-boosterism days and just didn't read his most recent writings before they donated.


> My explanation for why Eliezer went from vocal AI optimist to AI pessimist is that he became more knowledgeable about AI. What is your explanation?

He spoke to businessmen posing as experts, became increasingly self-referential, and frankly the quasi-religious subtext became text.


Businessmen like Elon and Sam Altman, you mean?

The very ones. Both of them had, and still have, every reason to hype AI as much as possible. Altman in particular seems to relish the "oh no, what I'm making is so scary, it's even scaring me" fundraising method.

Eliezer was hyping AI back in the 1990s, though. Really, really hyping it. And by the time of the conversations with Sam and Elon in 2015, he had been employed full-time as an AI researcher for 15 years.

Here is an example (written in year 2000) of Eliezer's hyping of AI:

>The Singularity holds out the possibility of winning the Grand Prize, the true Utopia, the best-of-all-possible-worlds - not just freedom from pain and stress or a sterile round of endless physical pleasures, but the prospect of endless growth for every human being - growth in mind, in intelligence, in strength of personality; life without bound, without end; experiencing everything we've dreamed of experiencing, becoming everything we've ever dreamed of being; not for a billion years, or ten-to-the-billionth years, but forever... or perhaps embarking together on some still greater adventure of which we cannot even conceive. That's the Apotheosis. If any utopia, any destiny, any happy ending is possible for the human species, it lies in the Singularity. There is no evil I have to accept because "there's nothing I can do about it". There is no abused child, no oppressed peasant, no starving beggar, no crack-addicted infant, no cancer patient, literally no one that I cannot look squarely in the eye. I'm working to save everybody, heal the planet, solve all the problems of the world.

http://web.archive.org/web/20010204095500/http://sysopmind.c...

Another example (written in 2001):

>The Plan to Singularity ("PtS" for short) is an attempt to describe the technologies and efforts needed to move from the current (2000) state of the world to the Singularity; that is, the technological creation of a smarter-than-human intelligence. The method assumed by this document is a seed AI, or self-improving Artificial Intelligence, which will successfully enhance itself to the level where it can decide what to do next.

>PtS is an interventionist timeline; that is, I am not projecting the course of the future, but describing how to change it. I believe the target date for the completion of the project should be set at 2010, with 2005 being preferable; again, this is not the most likely date, but is the probable deadline for beating other, more destructive technologies into play. (It is equally possible that progress in AI and nanotech will run at a more relaxed rate, rather than developing in "Internet time". We can't count on finishing by 2005. We also can't count on delaying until 2020.)

http://web.archive.org/web/20010213215810/http://sysopmind.c...

He is no longer hyping AI, though: he's trying to get it shut down until (many decades from now) we become wise enough to handle it without killing ourselves.


That castle was found to be more cost-effective than any other space the group could have purchased, for the simple reason that almost nobody wants castles anymore. It was chosen because the calculation said it was the best option; the optics were not considered.

It would be less disingenuous if you were to say EA is the "it's objectively better for humanity if you give us money to buy a conference space in France than whatever you'd do with it" crowd -- the fact that it was a castle shouldn't be relevant.


Nobody wants castles anymore because they’re impractical and difficult to maintain. It’s not some sort of taboo or psychological block; it’s entirely practical.

Actually, the fact that people think castles are cool suggests that the going price for them is higher than their concrete utility alone would justify, since demand would be boosted by people who want a castle because it’s cool.

Did these guys have some special use case where it made sense, or did they think they were the only ones smart enough to see that it’s actually worth buying?


> That castle was found to be more cost-effective than any other space the group could have purchased

In other words, they investigated themselves and cleared themselves of any wrongdoing.

It was obvious at the time that they didn't need a 20 million dollar castle for a meeting space, let alone any other meeting space that large.

They also put the castle up for sale 2 years later to "use the proceeds from the sale to support high-impact charities" which was what they were supposed to be doing all along.


The depressing part is that the "optics" of buying a castle are pretty good if you care about attracting interest from elite "respectable" donors, who might just look down on you if you give off the impression of being a bunch of socially inept geeks who are just obsessed with doing the most good they can for the world at large.

Both are factual; the longer statement has more nuance, which is unsurprising. If the emphasis on the castle and SBF, out of all the things and people you could highlight about EA, concisely gives away that I have a negative opinion of it, then that was intended. I view SBF as an unsurprising, if extreme, consequence of that kind of thinking. I have a harder time making any sense of the OP story in this context; that's why I was seeking clarification here.

The irony of pure rationalists buying a castle, unable to see what every other market participant can.

Why buy a conference space at all? Most pubs will give you a separate room if you promise to spend some money at the bar. There were probably free spaces to be had if they'd researched.

If I am donating money and you are buying a conference space on day 1, I'd want it to be filled with experienced ex-UN field types and Nobel Peace Prize winners.

Otherwise it looks like a grift.


Somewhere between “once a year” conferences hosted at hotels and the continual conferences of a university lies the point where buying a building makes sense.

The biggest downside, of course, is that all your conferences are now in the same location.


I'd love to see the logic they used to determine the castle was the best option.

Optics are an important part of being effective

There is significant overlap between the EA and LessWrong-y groups, as well as parallel psychopathic (oh sorry, I mean "utilitarian navel-gazing psychopathy") policy perspectives.

E.g., there is (or was) some EA subgroup that wanted the development of a biological agent that would genocide all the wild animals, because -- in their view -- wild animals lived a life of net suffering and so exterminating all of them would be a kindness.

... just in case you wanted an answer to the question "what would be even less ethical than the Ziz-group intention to murder meat-eaters"...


Wow. Didn't they learn about ecosystems at school? And who says they are suffering?


