Hacker News

The most convincing and scary take on the subject comes from Yuval Noah Harari, author of the best seller "Sapiens".

His premise is that as soon as a system knows us better than we know ourselves (e.g. Facebook), we can delegate all choices (what to eat, who to marry, what to watch) to the machine, and then it's a new kind of dystopia where no decision needs to be taken by a human, who is comparatively uninformed. Now, as he points out, if the system has glitches a la Matrix and a Neo comes along, we basically keep being the "heroes" of our story, but what happens if the system _really works_ for us? What if a computer can match my happiness with someone else's at a success rate that's impossible for me to match, what happens then?




I'm pretty convinced that ML produces better-curated lists from enormous corpora (billions of photos / posts / songs / dating partners / places to eat) than a human curator does.

I met my wife through an algorithmic match in an app. My resume was surfaced to my employer by a similar tool. My company makes money from the surveillance data economy, which keeps the originators of those funds (companies who want their ads seen by high-probability product purchasers) happy.

So yeah, throw me in the Matrix / Borg Cube.


Here I thought the NPC thing was just a meme.


Why are the neoliberals like this? I hate commies, but you people giving yourselves to the machine is pathetic...


> but what happens if the system _really works_ for us? What if a computer can match my happiness with someone else's at a success rate that's impossible for me to match, what happens then?

Since any system has to start with a set of assumptions and expectations, and relies on feedback loops for what works and what doesn't, always optimizing for what "works", wouldn't such a system narrow itself down more and more to a limited scope of outcomes and possibilities, so that basically we stop being people and turn more and more into narcissistic sycophants?

Wouldn't it basically be the end of evolution and society?



>What if a computer can match my happiness with someone else's at a success rate that's impossible for me to match, what happens then?

It might make you happy to be in control and to be able to choose for yourself. AIs and their benefits are largely result-oriented, but there's also process orientation, where the way or trajectory to a result is just as important as the result, if not more so. That's the kryptonite for computer systems.


Did he write an article about this or is it in the book?


I came across this specific discussion in a YouTube video with Harari and Russell Brand.

I believe that these ideas are in his book “21 Lessons for the 21st Century”.



