
> low in general, then you are less confident about that datum

It’s very rarely clear or explicit when that’s the case, which makes sense given that the LLMs themselves do not know the actual probabilities.



Maybe this wasn't clear, but the probabilities are a low-level variable that may not be exposed in the UI; it IS exposed through API as logprobs in the ChatGPT API. And of course, if you have access to the model weights, as with a Llama LLM, you may have even deeper access to this p variable.
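For context, a minimal sketch of what this looks like with the OpenAI Python SDK. The API call is shown in comments only; the logprob values below are illustrative stand-ins, not real model output:

```python
import math

# With the OpenAI Python SDK (v1+), per-token log-probabilities are
# requested via the `logprobs` and `top_logprobs` parameters:
#
#   from openai import OpenAI
#   client = OpenAI()
#   resp = client.chat.completions.create(
#       model="gpt-4o-mini",  # any chat model that supports logprobs
#       messages=[{"role": "user", "content": "Capital of France?"}],
#       logprobs=True,
#       top_logprobs=3,
#   )
#   token_logprobs = resp.choices[0].logprobs.content
#
# Each returned token carries a logprob; exponentiating recovers the
# model's probability for that token. Illustrative values:
sample_tokens = [("Paris", -0.01), ("Lyon", -5.2), ("Berlin", -7.8)]

for token, logprob in sample_tokens:
    p = math.exp(logprob)
    print(f"{token}: p = {p:.4f}")
```

Note that these are probabilities over next tokens, not calibrated confidence in facts, which is part of why interpreting them is tricky.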


> it IS exposed through API as logprobs in the ChatGPT api

Sure, but they are not necessarily easy to interpret or reliable.

You can use them to compare a model’s confidence across several different answers to the same question, but anything beyond that gets complicated and isn’t necessarily that useful.
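The comparison use case can be sketched as summing per-token logprobs for each candidate answer to the same question. The logprob values here are hypothetical, not taken from a real model:

```python
import math

# Hypothetical per-token logprobs for two candidate answers to the
# same question; in practice these would come from the API response.
answers = {
    "Paris": [-0.02, -0.05],          # tokens of answer A
    "Marseille": [-1.3, -0.9, -2.1],  # tokens of answer B
}

# Sequence probability = exp(sum of token logprobs); summing in log
# space avoids underflow for long sequences.
scores = {a: math.exp(sum(lps)) for a, lps in answers.items()}

best = max(scores, key=scores.get)
print(best, scores[best])
```

The resulting numbers are only meaningful relative to each other within this one comparison, which is the limitation the comment is pointing at.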



