tl;dr: current AI capabilities are exaggerated by cherry-picked sample data, and the fear fueled by famous smart people is unwarranted.
Do any of those cited give specific timelines? Even if we are very far away, do you really doubt that one day machines will have superhuman intelligence? I take that as pretty much a given, whether it's 50 or 500 years from now. What I'm not so sure of is whether fear is an appropriate response.
Altman's bacteria handwashing analogy doesn't hold up. We don't care about bacteria because they have no central nervous system or consciousness on any level. However, we go out of our way to protect animals that can feel pain and experience emotions because it's what we've decided through our intelligence and higher reasoning is the moral thing to do. Stats show that the more intelligent and educated the human, the more likely he is to behave morally as our greatest moral philosophers define it. Why would super intelligent machines buck this trend?
> However, we go out of our way to protect animals that can feel pain and experience emotions because it's what we've decided through our intelligence and higher reasoning is the moral thing to do.
We care about and protect some animals, yeah. However, we also industrially butcher hundreds of thousands of other animals every day to satisfy our goals.
> Stats show that the more intelligent and educated the human, the more likely he is to behave morally as our greatest moral philosophers define it. Why would super intelligent machines buck this trend?
There's a pretty large jump from "more intelligent => more moral" to "behaves according to human morality and satisfies human goals".
> Do any of those cited give specific timelines? Even if we are very far away, do you really doubt that one day machines will have superhuman intelligence? I take that as pretty much a given, whether it's 50 or 500 years from now
Why? If you extrapolate from the amount of progress we have made toward AGI in the last 50 years (ie, none), then it's reasonable to argue that we will still have made no progress 50 or 500 years from now.
There are intellectual problems that humans aren't capable of solving; it wouldn't make any sense to talk about "superhuman intelligence" if that wasn't the case. The currently available evidence suggests that "constructing an AGI" might very well be one of those problems.
> If you extrapolate from the amount of progress we have made toward AGI in the last 50 years (ie, none)
That's an odd way of defining progress.
> There are intellectual problems that humans aren't capable of solving; it wouldn't make any sense to talk about "superhuman intelligence" if that wasn't the case.
A superhuman intelligence doesn't necessarily have to come up with solutions humans would never think of; it just needs to come up with a solution in less time, or with less available data, or with fewer attempts.
I think you're anthropomorphising. It's not a given that machine intelligence and (human) morality/ethics are intertwined. What if the AI mind/intelligence is of such a high level that it regards us like we do bacteria?
It's probably safer to engineer the AI in such a way that it is guaranteed to be friendly than to trust that it will turn out that way and do nothing. Even a naive hedonistic calculus is probably safer than assuming that (human) ethics will result from more intelligence.
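To make that concrete, here is a minimal sketch of what a naive hedonistic calculus might look like, assuming a toy setup where each candidate action's expected pleasure and pain per affected being are already known numbers (all names and values below are hypothetical, purely for illustration):

```python
from dataclasses import dataclass

@dataclass
class Effect:
    """Hypothetical expected impact of an action on one affected being."""
    being: str
    pleasure: float  # expected units of pleasure produced
    pain: float      # expected units of pain produced

def hedonistic_utility(effects: list[Effect]) -> float:
    """Naive Bentham-style calculus: total pleasure minus total pain,
    summed with equal weight over every being affected."""
    return sum(e.pleasure - e.pain for e in effects)

def choose_action(options: dict[str, list[Effect]]) -> str:
    """Pick the action whose summed utility is highest."""
    return max(options, key=lambda name: hedonistic_utility(options[name]))

# Toy example: even this crude rule prefers the option that avoids harm.
options = {
    "help_humans":   [Effect("human", 5.0, 0.0), Effect("human", 4.0, 0.5)],
    "ignore_humans": [Effect("human", 0.0, 3.0), Effect("human", 0.0, 2.0)],
}
print(choose_action(options))  # -> "help_humans"
```

The point is only that an explicit, engineered criterion like this, however crude, at least makes the AI's treatment of us a design decision rather than something we hope falls out of intelligence on its own.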
> What if the AI mind/intelligence is of such a high level that it regards us like we do bacteria?
We don't disregard bacteria because they are insufficiently intelligent; we disregard them because we believe they can't feel or suffer. So it's not a question of "level" of intelligence.