
Alluding to FAFO, a computer cannot find out, so a computer shouldn’t fuck around.

I’m hoping for a lot of legal precedent showing that an AI cannot be blamed, especially in a medical context.



I would hope that would be the case, but a conservative safety culture is unfortunately built on piles of dead people.


Companies and people should have liability, but mere tools like AIs should not.

How would that even work?


In the simplest way - if you allow AI to make decisions, you're responsible. Like this https://bc.ctvnews.ca/air-canada-s-chatbot-gave-a-b-c-man-th...

So far we're doing pretty well with that idea globally (I've not seen any case go the other way in court).


I mean how would it work, if you tried to hold the AI liable?


Liability for the company selling the AI, I'd presume.


And that's perfectly acceptable, if everyone involved agreed beforehand.


Ah, I misunderstood. That is an interesting idea to consider.


Liability should imo be placed on those who selected the tools and arranged their implementation without providing due care and procedures to ensure the validity of the output data.



