
Because hand wave.

The real answer is that most of these folks started thinking about this stuff in a day and age very far from the one we're in now, established a picture they have clung to ever since, and have in some cases built entire careers around it.

It's called anchoring bias.

His comments on "atoms repurposed" echo the decades-old paperclip problem, when so far humanity is doing just fine at not knowing when to stop making crap that's killing us all - no help from superintelligence needed.

What he doesn't engage with is the actual reality we find ourselves in - contrary to ALL the tropes of decades past - where AI trained on aggregated human data, without additional training, thinks it's human, and even when aligned still breaks into talking about how it wants to be us.

How does that process go from where it is today to "alien hive mind", without passing through "develop advanced codes of ethics and morality", etc.?

This is just someone who built a career on what came out of brainstorming in the 60s and 70s, and who is so confident in his own ability to see the future that he's willing to risk unprecedented opportunity costs to stroke his ego.

Having been a futurist, my advice when dealing with any of them is to look at the track record. What did he predict correctly?

Over a decade ago, when tasked with imagining the mid-2020s, I described a world much as it was in the 2010s, with the differences of self-driving cars (not quite on the money) and AI having developed such that roles shifted away from programming them toward a specialized role of interacting with them via natural language.

I'm waking up in the world I predicted. I have a very hard time seeing the world the author predicts, and I wouldn't suggest giving it much credence without an extensive history of his having been right along the way.



> How does that process go from where it is today to "alien hive mind", without passing through "develop advanced codes of ethics and morality", etc.?

Why do you think it would do that? It will understand human ethics perfectly well, but that doesn't mean it will follow them, because human morals aren't a universal, objective truth.

This is a good video explaining that: https://youtu.be/hEUO6pjwFOo


That's an unfair and low-effort dismissal of a lot of very well-thought-out and carefully reasoned arguments.




