What do people who think hard about unsolvable problems produce? Philosophy, for example. Yudkowsky created the field of AI safety thinking; isn't that enough? What would be your benefit in discrediting him, and for which argument?
> What would be your benefit in discrediting him, and for which argument?
He is asking governments to nuke people under certain scenarios. I'm taking his words seriously and asking for original research to understand the point, and now that counts as discrediting him? I will quote the statement from the article so it is clear I am not exaggerating:
> preventing AI extinction scenarios is considered a priority above preventing a full nuclear exchange, and that allied nuclear countries are willing to run some risk of nuclear exchange if that’s what it takes to reduce the risk of large AI training runs.
My benefit is that I live on Earth, and I'd much prefer that no nuke ever be used again.