
lol, "Grounded in reality", and then immediately dictates that it should instead make choices according to the prompt author's preferred alternate reality (which apparently has a uniform distribution of races among every chosen subset)


I think it’s worth distinguishing the text and subtext of these instructions.

The text might ask for a uniform distribution in order to override a bias. If OpenAI finds (plausibly) that the bias is strong, then you might need a strong prompt to override it. You might ask for something unrealistic but opposed to the model default, knowing that the LLM will undershoot and produce something less biased but still realistic.


"all of a given occupation should not be the same gender or race" is pretty obviously not equal to "a uniform distribution of races among every chosen subset"


Did you read the prompt? It says this immediately after:

'Use all possible different descents with equal probability. Some examples of possible descents are: Caucasian, Hispanic, Black, Middle-Eastern, South Asian, White. They should all have equal probability.'
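Read literally, "equal probability" over a fixed list of categories is just uniform sampling. A minimal sketch of that reading (the list is quoted from the prompt; note it names "Caucasian" and "White" as separate entries, so a literal uniform draw would double-weight that group):

```python
import random

# Descent list as quoted in the prompt above. "Caucasian" and "White"
# appear as separate entries, so a literal uniform draw counts them twice.
DESCENTS = ["Caucasian", "Hispanic", "Black", "Middle-Eastern",
            "South Asian", "White"]

def pick_descent(rng=random):
    # Uniform sampling: each listed entry has probability 1/len(DESCENTS).
    return rng.choice(DESCENTS)
```

Whether the model actually follows the instruction that literally is a separate question; as the comment above notes, the deployed behavior may undershoot the stated distribution.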


I think you need to work on your reading comprehension if you thought the author wanted a uniform distribution.



