Very interesting to see this on HN - Heidegger's theory of technology was ground-breaking then and has only become more timely.
If anybody has questions about this article, Mark Blitz was my advisor in grad school and I wrote an MA thesis applying Heidegger's theory to modern AI research. Happy to clarify any of the issues raised here since, even with Blitz's explanation, Heidegger can be a tough nut to crack.
The Heidegger material is in the fourth section, 'Artificial Intelligence In Technological Society', but a rough summary: Heidegger argues that the essence of technology involves a command to human beings to order the world as a standing-reserve of resources. Our specific place in this schema is as the being who can do the work of ordering: revealing the world as orderable standing-reserve and then actually putting it into a particular order (e.g. that of a supply chain). However, modern AI, with its emphasis on pattern recognition, search, optimizing for a goal, etc., is also moving toward performing that same function.

This has two implications. First, it suggests why AI research heads in this direction: because this is how human intelligence manifests within the technological framework (and thus it suggests that AI research could benefit from trying to replicate other aspects of the human mind). Second, it means that modern AI might alter a Heideggerian theory of technology by making it possible for humans to delegate the work of ordering to AI, which would not have appeared possible in Heidegger's day. If we can in fact hand off the task of ordering to AI, how should we do that, and what effects might it have on human culture? While it appears to be a loss of agency, it may even have a liberating effect, in the same way other technologies have liberated us from previous forms of labour.
Heidegger is against it, yes, but he's also clear that we're stuck within technology as a historical destining of Being. We can't just escape technology by thinking differently about it, but must wait heedfully for the historical development of Being (as he lays out in The Turning, just after QCT). In fact, the fulfilment of technology's own reductionism is a necessary step, as only the full concealment of Being in technology allows us to recognize that oblivion for what it is and thus to overcome it. As such, I think it's a reasonable argument that AI taking over the work of ordering from humans may help us escape the incomplete agency of ordering the standing-reserve, so that we're no longer forced to see the world merely as standing-reserve.
By the way, I saw your comment about Heidegger and Asian religious philosophy below - are you familiar with the Kyoto school? I've signed up for the Halkyon Guild's upcoming course on Nishitani Keiji, and am vaguely hoping his theory of the self-overcoming of nihilism can be brought to bear on the questions Heidegger raises about technology.
I have read briefly about the Kyoto school, but haven't delved deeply into it yet.
In Heidegger I hear echoes of Buddhism, the Tao Te Ching, and Hinduism.
Recently I came across "The Book of Tea"[1], which Tomonobu Imamichi claimed Heidegger plagiarized. I've only just started reading it, and so far it seems nothing like Heidegger, but maybe it gets more interesting later.
I disagree. You can separate the author's observations (namely, the identified pattern of revealing standing reserve) from the author's intentions (standing reserve = bad).
However, one can easily conceive of an AI that "obsoletes" humanity from the supply chain, so-to-speak, which would theoretically allow humans unbridled opportunity to "be", as Heidegger envisions it. Once human labor is obsolete, what will become of humanity?
Will play also be obsolete? Because work can be play, and play can be work.
Work often has a negative connotation in today's world, but it can also be positive, fun, and rewarding.
The economic whip (i.e. the incentive of working to survive) might become obsolete, but that doesn't mean that people will stop wanting to work for fun, fulfillment, or to make original contributions.
It's unlikely many people would willingly choose to be garbagemen and toilet cleaners, and we'll probably all breathe a sigh of relief when those jobs are finally automated, but will we stop wanting to make art or music when AI can make it "just as good"?
Even in more practical fields than art or music, it's more likely that we'll see AI-human collaborations rather than AI doing everything by itself without human input.
What is human is constantly in flux anyway. Even if we consider just technology's effect on the human, there are humans now with all sorts of prostheses that make them "more than human" in some ways, and this trend itself will likely continue, with things like more and more advanced neural interfaces, memory upgrades, sensory enhancement, etc.
The future of humanity will likely be as some sort of cyborg.
While the study's predictions about human behavior have since been called into question (and a human version of these experiments would be horribly unethical), it does bear thinking about what the world would look like if all work were performed by automated means, free of human interaction.
On one hand, there are many jobs that would be seldom missed by most people. On the other, people can find fulfillment in just about anything.
I think the distinction lies in the fact that "play" activities are done for their own sake. This is an essential part of the experience of "being" (IMO) as it entails performing activities simply because one enjoys them.
Compare this to "work" in the sense of a problem that needs to be solved with the application of human resources as standing-reserve.
I agree that the concept of "humanity" is constantly evolving. I think the ways in which society will change in order to reflect this evolution in humanity are super interesting to explore.