I find it hard to believe that Yudkowsky et al. think we're facing a threat that is many times greater in magnitude, and yet are completely unwilling to act.
Here's what he advocates for in the article:
>If intelligence says that a country outside the agreement is building a GPU cluster, be less scared of a shooting conflict between nations than of the moratorium being violated; be willing to destroy a rogue datacenter by airstrike.
>Make it explicit in international diplomacy that preventing AI extinction scenarios is considered a priority above preventing a full nuclear exchange, and that allied nuclear countries are willing to run some risk of nuclear exchange if that’s what it takes to reduce the risk of large AI training runs.
Does this sound like someone who has a deeply ethically-ingrained prohibition against violence?
If Yudkowsky believes Israel's brand of preventive sabotage to be unethical, why does he advocate for exactly that style of measure against data centers in this very article?
https://en.wikipedia.org/wiki/Assassination_of_Iranian_nucle... https://en.wikipedia.org/wiki/2020_Iran_explosions