Not entirely.

The risk raised in the article is that AI is being promoted beyond its actual scope (pattern recognition and generation) into legal and moral decision-making.

The techno-optimists will claim that legal/moral choices may be nothing more than the sum of various pattern-recognition mechanisms...

My take is that this framing misses a deeper point: AI cannot have a human-centered morality/legality because it can never be human. It can only ever amplify the existing biases in its training environments.

By decoupling the gears of moral choice from human interaction, whether by choice or by inertia, humanity is removed from the very mechanisms that amplify moral and legal action (or, in some perverse cases, that amplify the biases intentionally).



To build on your point, we need only look at another kind of entity with a binary reward system that is inherently amoral: the corporation. Though it has many of the same rights as a human (in the US), the corporation itself is amoral; we rely on the humans within it to retain a moral compass, often to their own detriment, which is a foolish arrangement.

Even further, AI has only learned from what we've articulated and recorded, so its inherent biases are only those of our recordings. I'm not sure how that sways the model, but I'm sure that it does.



