Hacker Newsnew | past | comments | ask | show | jobs | submitlogin

I agree people aren't great at this either and my post said as much.

However we're familiar with the human limits of this and LLMs are currently much worse.

This is particularly relevant because someone suffering from the mistaken belief that LLM's could explain their reasoning might go on to attempt to use that to justify the misapplication of an LLM.

E.g. fine tune some LLM using resume examples so that it almost always rejects Green-skinned people, but approve the LLMs use in hiring decisions because it is insistent that it would never base a decision on someone's skin color. Humans can lie about their biases of course, but a human at least has some experience with themselves while a LLM usually has no experience observing themself except for the output visible in their current window.



Consider applying for YC's Fall 2025 batch! Applications are open till Aug 4

Guidelines | FAQ | Lists | API | Security | Legal | Apply to YC | Contact

Search: