
Can’t this be fixed with “Advice is provided as is and we’re not responsible if your cat dies”?


I doubt any major tech company is going to take that risk. Even if legally binding, the PR nightmare would hardly be worth the minimal reward.


Yes, it is likely that a liability disclaimer would address this, but it doesn't address the deep psychological need of Google psychopaths to exercise power, control, authority, and to treat other people like children.

AI "safety" is grounded in an impulse to control other people, and to declare oneself the moral superior to the rest of the human cattle. It has very little to do with actual safety.

I vehemently disagree with Eliezer's safety stance, but at least it's a REAL safety stance, unlike that taken by Google, Microsoft, OpenAI, et al. Hell, even the batshit crazy neo-luddite stance of nuking the datacenters and blowing up the GPU fabs is a better stance on AI safety than this corporate patronizing bullshit.

Nobody there cares about reducing the risk of grey goo. They just want to make sure you know daddy's in charge, and he knows best.



