> It would perhaps be too cynical to say that AGI existential risk rhetoric has become a cynical hustle, intended to redirect the attentions of regulators toward possibly imaginary future risks, and away from problematic but profitable activities that are happening right now.
Perhaps, but perhaps not. As the author notes, the ones who want to slow down AI are more worried about the hypotheticals arising from their AI fanfic than about the real risks of the transformer-architecture models and ML systems that exist now. It's reasonable to ask whether the grifters lean on existential risk as a diversion while they build the next Big Tech monopoly.