It's an almost impossible task. ML reflects the world and the data in the world. If you ask for an anime-style man, models will pretty much universally generate white men, because the dataset of anime characters almost universally contains white characters. The model isn't wrong; it's generating exactly what already exists in the world. And there are an infinite number of scenarios and biases it reflects which you will never be able to manually flag.
It reminds me a lot of the early self driving car debate where there were endless surveys asking if the car should run over the 2 old ladies or the one child studying medicine. And in the end we decided it was an unreasonable burden and just accepted that ML doesn't need to make impossible moral judgements.