>The Culture novels talk about super intelligent AIs that perform some functions of government, dealing with immense complexity so humans don’t have to. Doesn’t prevent humans from continuing to exist and being quite content in the knowledge they’re not the most superior beings in the universe.
The Minds in the Culture are created with the explicit goal of maximising individual flourishing and minimising coercion; that is why humanity is treated the way it is in the Culture. It's not a given that superintelligences would be so benevolent to other species; in fact, they are probably hostile by default. The point is that the Minds in the Culture are aligned.
I'm a fan of the Culture universe, and the system the Minds use seems like one of the only ways for biological humans to stay relevant in the future.
>Why do you believe human extinction follows from superintelligence?
No, it follows from building unaligned superintelligence, which is what we seem to be on track to build, potentially in the near future.
Broadly, my problem with your position is that it amounts to wishful thinking. There's no reason to think we end up in a Culture future by default; quite the opposite. There are many reasons to think that the default outcome of building superintelligence is bad for humanity, and probably bad for the entire universe.
Alignment by default is, to me, a pipe dream, so if we want a future for humanity we need to fight hard for it.