
Companies and people should have liability, but mere tools like AIs should not.

How would that even work?



In the simplest way - if you allow AI to make decisions, you're responsible. Like this https://bc.ctvnews.ca/air-canada-s-chatbot-gave-a-b-c-man-th...

So far we're doing pretty well with that idea globally (I've not seen any case go the other way in court).


I mean, how would it work if you tried to hold the AI itself liable?


Liability for the company selling the AI, I'd presume.


And that's perfectly acceptable, provided everyone involved agreed to it beforehand.


Ah, I misunderstood. That is an interesting idea to consider.


Liability should, imo, fall on those who selected the tools and arranged their deployment without exercising due care and establishing procedures to validate the output.



