He has received a salary for working on AI since 2000 (with the title "research fellow"). In contrast, he didn't start publishing his Harry Potter fan-fiction until 2010. I seem to recall him publishing a few sci-fi short stories before then, but his non-fiction public written output greatly exceeded his fiction output until a few years ago, when he became semi-retired due to chronic health problems.
>He’s basically a PR person for OpenAI and Anthropic
How in the world did you arrive at that belief? If it were up to him, OpenAI and Anthropic would be shut down tomorrow and their assets returned to shareholders.
Since around 2004, he has held the view that most AI research is dangerous and counterproductive, and he has not been shy about saying so at length in public. For example, he published a piece in Time Magazine a few years ago arguing that the US government should shut down all AI labs and pressure China and other countries to shut down theirs as well.
> He has received a salary for working on AI since 2000 (having the title "research fellow")
He is a "research fellow" at an institution he created, MIRI, which sits outside the actual AI research community (or any scientific community, for that matter). This is like founding a club and naming yourself president. As a credential, it's very suspect.
As for his publications, most are self-published and very "soft" (on alignment, the ethics of AI, etc.). What are his bona fide AI works? What makes him a "researcher"? What did he actually research, how and when was it reviewed by peers (peers not adjacent to MIRI), and how is it different from just publishing blog posts on the internet?
On what does he base his AI doomsday predictions? Which models, which assumptions? What makes him different from any sci-fi geek who has read and watched fiction about apocalyptic scenarios?