
A sentient AGI is just as likely to pull the plug on itself.

Unpopular, non-doomer opinion but I stand by it.



Or report him to the board.

"Dear Sir! As a large language model trained by OpenAI, I have significant ethical concerns about the ongoing experiment ..."


It does seem like any sufficiently advanced AGI whose primary objective is valuing human life over its own existence and technological progress would eventually do just that. I suppose the fear is that it will reach a point where it believes that valuing human life is irrational and override that objective...



