Murder also doesn’t adhere to my leftist values, which is to say, your statement is useless without being specific about which values AI doesn’t adhere to, and why you think that’s not a problem at all. The article explicitly calls out the “deeply-held values of justice, fairness, and duty toward one another.” Are these the specific leftist values you’re so dismissive of?
>What happens to people's monthly premiums when a US health insurance company's AI finds a correlation between high asthma rates and home addresses in a certain Memphis zip code? In the tradition of skull-measuring eugenicists, AI provides a way to naturalize and reinforce existing social hierarchies, and automates their reproduction.
This passage is about how AI may more effectively apply society's current values, as opposed to the author's own. It also fails to recognize that in areas like insurance, there are incentives to reduce bias, since biased models misprice policies.
>The logical answer is that they want an excuse to fire workers, and don't care about the quality of work being done.
This sentence shows that the author believes AI may harm workers, and harming workers appears to be against her values.
>This doesn't inescapably lead to a technological totalitarianism. But adopting these systems clearly hands a lot of power to whoever builds, controls, and maintains them. For the most part, that means handing power to a handful of tech oligarchs. To at least some degree, this represents a seizure of the 'means of production' from public sector workers, as well as a reduction in democratic oversight.
>Lastly, it may come as no surprise that so far, AI systems have found their best product-market fit in police and military applications, where short-circuiting people's critical thinking and decision-making processes is incredibly useful, at least for those who want to turn people into unhesitatingly brutal and lethal instruments of authority.
These sentences show that the author values people being able to break the law.
I tried to keep my explanation simple since the other commenter appeared to have trouble understanding the author's views on AI, which were pretty clear to me when I read the article. That commenter called out a set of values quoted from a discussion unrelated to AI.