Yes, it is likely that a liability disclaimer would address this, but it doesn't address the deep psychological need of Google psychopaths to exercise power, control, authority, and to treat other people like children.
AI "safety" is grounded in an impulse to control other people, and to declare oneself the moral superior to the rest of the human cattle. It has very little to do with actual safety.
I vehemently disagree with Eliezer's safety stance, but at least it's a REAL safety stance, unlike that taken by Google, Microsoft, OpenAI, et al. Hell, even the batshit crazy neo-Luddite stance of nuking the datacenters and blowing up the GPU fabs is a better stance on AI safety than this corporate patronizing bullshit.
Nobody there cares about reducing the risk of grey goo. They just want to make sure you know daddy's in charge, and he knows best.