It's an almost impossible task. ML reflects the world and the data in the world. If you ask for an anime-style man, models will pretty much universally generate white men because the dataset of anime characters almost universally contains white characters. The model isn't wrong; it's generating exactly what already exists in the world. And there are an infinite number of scenarios and biases that it reflects which you will never be able to manually flag.

It reminds me a lot of the early self-driving car debate, where there were endless surveys asking whether the car should run over the two old ladies or the one child studying medicine. In the end we decided it was an unreasonable burden and just accepted that ML doesn't need to make impossible moral judgements.




> the dataset of anime characters almost universally contains white characters

Japanese viewers and the creators themselves see the majority of anime characters as Japanese, not white. That you see them as white says more about you.



