It shouldn't be exclusively on the near term, but I'd argue human/machine collectives are a form of superintelligence that poses many of the same risks a misaligned AGI hypothetically would.
Alignment problems aren't new; paperclip maximizers aren't just thought experiments.
I suppose that's true, but at the same time any paperclip-maximizing human corporation is implicitly aligned, at least to some degree, with maintaining the conditions for human survival, for the sake of its shareholders and its employees' productivity. I'll accept that corporations can still get trapped in a Prisoner's Dilemma of bad incentives, or produce externalities dangerous to humans, but I think they'll only do so where the harm is controversial and indirect enough that the corporation can convince itself the profit is worth it. With an AI superintelligence, there's a risk that destroying human life becomes an instrumental goal on the path to paperclip production. That also carries the risk that the AI grows powerful enough that government loses its monopoly on violence, which creates a big increase in the space of options for, say, paperclip factory zoning.
Even if someone maniacal and heartless like Kim Jong-Un set out to use an AI superweapon to take over the entire world, and succeeded, I wouldn't expect them to wipe out all their human subjects on purpose, because what would be the point of ruling an empire of machines? You can get most of that from a video game. An AI, by contrast, would likely have no qualms about getting rid of the humans as dead weight.