To call AI a dehumanization technology is like calling guns a murder technology.
There is obviously truth to that, but guns are also used for self-defense and protecting your dignity. Guns are a technology, and technology can be used for good or evil. Guns have been used to colonize and enslave people, but they have also been used to win independence.
I disagree with the assessment that AI is intrinsically dehumanizing. AI is a tool, a very powerful tool, and because the very rich in America don't see the people they rule as humans of equal dignity, the technology ends up reflecting that contempt.
Attacking the technology is wrong; the problem is not the technology but that every company has a tyrant king at its helm who answers to no one, because they have purchased the regulators that might have bound their behavior, meaning there are no consequences for a CEO/King of a company's misdeeds. So every company's king ends up using their company/fiefdom to further their own personal ambitions of power, and nobody is there to stop them. If the technology is powerful, then failure to invest in it, while other even more oppressive regimes do invest in it, potentially gives them the ability to dominate you. Imagine you argue nuclear weapons are a bad technology while your neighbor is busy developing them. Are you better off if your neighbor has nuclear weapons and you don't?
The argument that AI is a dehumanization technology is ultimately an anarchist argument. Anarchism's core belief is that no one should have the power to dominate anyone else, which inevitably means that no one is able to impose consequences on anyone who ambitiously betrays that belief system. Reality does not work that way. The only way to impose consequences on a corrupt institution is an even more powerful institution based on collective bargaining, backed by the threat of consequences for failing to reach a compromise, such as striking. There is no way around realpolitik: you must confront pragmatic power relationships to have a cogent philosophy.
The author is mistaking AI for wealth disparity. Wealth is power and power is wealth, and when it is so concentrated, it puts bad actors above consequences and turns tools that could be used for the public good into tools of oppression.
We do not primarily have an AI problem but a wealth concentration problem, and this is one of its many manifestations.
That is a truth but not the truth. By framing guns as a murder technology, you ignore that they are also a self-defense technology, an equalizing technology, or any number of other valid frames.
My point was that guns can be used for murder, in the same way that AI can be used to influence or surveil, but guns are also what you use to arrest people, fight oppressors and tyrants, and protect your property. Fists, knives, bows and arrows, poison, bombs, tanks, fighter jets, and drones are all forms of weapons. The march of technology is inevitable, and it's important not to be on the losing side of it.
What the technology is capable of is less interesting than who has access to it and the power disparity that it creates.
The author's argument is that AI is a (1) high leverage technology (2) in the hands of oligarchs.
My argument is that the fact that it is a high leverage technology is not as interesting, meaningful, or important as the existence of oligarchs who do not answer to any regulatory body because they have bought and paid for it.
The author is arguing that a particular weapon is bad, but failing to argue that we are in a class war that we are losing badly. The author is focusing on one weapon being used to wage that class war instead of the cost of losing it.
It is not AI dehumanizing us but wealth disparity, because there is nothing forcing the extremely wealthy to treat others with dignity. AI is not robbing people of dignity; ultra wealthy people are robbing people of dignity using AI. AI is not dehumanizing people; ultra wealthy people are using AI to dehumanize people. Those are different arguments with different implications and different prescriptions for how to act.
"AI is bad" is a different argument than "oligarchs are bad."