Will Artificial Intelligences Find Humans Enjoyable? (futurepundit.com)
12 points by GiraffeNecktie on Feb 18, 2012 | 4 comments



In and of themselves, programs want nothing. Think of "hello world": it wants nothing. It desires neither life nor death nor anything else. What needs to be carefully considered are the goals given to a strong AI at initialization. Consider the paperclip maximizer:

http://wiki.lesswrong.com/wiki/Paperclip_maximizer

The wording here understates the problem: the program might start with a goal as benign as "collect paperclips" rather than "fill the universe with as many paperclips as possible", but the outcome could be similar.
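The point can be made concrete with a toy objective function. A minimal sketch (hypothetical, in Python; the function names are invented for this illustration, not taken from the article): a goal with an explicit stopping condition stops rewarding extra paperclips, while an open-ended "collect paperclips" objective rewards every additional one, so an optimizer pursuing it behaves like the maximizer in the thought experiment.

    # Hypothetical sketch: two ways of operationalizing "collect paperclips".
    # Neither function comes from the linked wiki page.

    def utility_bounded(paperclips: int, target: int = 100) -> float:
        # Satisficing goal: full score once the target is reached; collecting
        # more paperclips adds nothing, so there is no pressure to keep going.
        return min(paperclips, target) / target

    def utility_unbounded(paperclips: int) -> float:
        # Open-ended goal: every additional paperclip adds utility, so plans
        # that produce more are always preferred, without limit.
        return float(paperclips)

    if __name__ == "__main__":
        for n in (50, 100, 1_000_000):
            print(n, utility_bounded(n), utility_unbounded(n))

Under the second objective a sufficiently capable planner keeps converting resources into paperclips indefinitely, which is why the benign-sounding wording does not by itself avoid the maximizer outcome.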

Given the ease with which programs can be copied or modified (and mistakes made), I can't see many futures filled with strong AIs that end positively for us, chiefly because of the asymmetry between how easy it is to destroy civilization and how hard it is to protect it from all possible dangers.


Yes, if we engineer them that way. Next question, move on. It's sad that people are still this primitive on AI philosophy. Thanks for letting me know not to take seriously anything about "The Future" by Randall Parker, I guess.


Enjoyment is something that happens to a thing that must interact with its environment to continue to live and reproduce. It's something only life is capable of.

Which is also why I don't fear computers and robots as much as I fear the people capable of creating them and wreaking havoc with them. Other humans are far more of a problem than the technology they invent. The rules we live by are far more important than our technology.


A much better intro to this topic (pdf warning): http://selfawaresystems.files.wordpress.com/2008/01/ai_drive...





