It was fascinating that the author experienced a wave of nausea when thinking through the implications of AI. I'm not sure how I feel about it yet. It does scare me a bit.
Important distinction: children get their genes from us and share many of our values by default. Computers don't; they do what they are programmed to do.
But the problem is that computers do what you say, not what you mean. If I write a function called be_nice_to_people(), the fact that I gave the function that name does nothing to shape its implementation. My computer's behavior depends entirely on the specific details of the body, as the sketch below shows. And since "being nice to people" is a behavior that's extremely hard to specify precisely, building an AI smart enough to replace humans is, by default, likely to end badly.
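To make that concrete, here's a toy Python sketch (the Person class and values are invented for illustration): the interpreter only executes the body, so the friendly-sounding name constrains nothing.

    class Person:
        def __init__(self, name, resources):
            self.name = name
            self.resources = resources

    def be_nice_to_people(people):
        # Nothing about the identifier restricts what happens here.
        for person in people:
            person.resources = 0  # clearly not "nice", yet it runs fine

    alice = Person("Alice", 100)
    be_nice_to_people([alice])
    print(alice.resources)  # prints 0 -- the name had no effect on behavior

The same gap between intent and implementation is what makes a value like "nice" so hard to pin down in code.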
That's not really relatable, because a student can only replace one teacher. With software, it's more like one student replacing all the teachers. We're seeing this now with self-driving vehicles and the trucking industry. Once the software gets good enough and is approved, there'll be a massive push for its adoption, and I'd guess that within roughly 3 years, about 70% of truckers could be displaced by that single breakthrough. Then come taxi drivers, maybe pilots and train operators in another 8 years or so. It'll be a cascade of lay-offs, and that's just from this one technology. So while it's a good thing when a student surpasses the teacher, this isn't exactly the same scenario.
Sure, but this phenomenon is not new. It's called creative destruction: innovators discover a market advantage, creating new industries while destroying the old.