I'm with you on disinformation around AI—that is surely happening—but I disagree on the target. Intelligible AI does more than merely prevent accidental discrimination: it allows humans to iterate more quickly on their approach and then cross-train models that lack the intelligibility requirement, closing the gap with what is possible without this restraint while simultaneously speeding up human understanding of AI approaches and algorithms.
No, with AI the thing that sounds like total bullshit to me is the way some government officials talk of the risks of AI in frenetic terms, while speaking as if the risk will only become apparent once AI is capable of contemplating abstract concepts and cognition. The reason I think this is bullshit is that I consider those preconditions 20 years away at the very earliest[0], while the real threat is already present: humans harnessing AI are already powerful enough, especially state actors.
[0] More likely 100 years away, or generating so much thermal waste that their true hazard is mitigated.