An LLM would, surely, have to:

* Know whether its answers are objectively beneficial or harmful

* Know whether its answers are subjectively beneficial or harmful in the context of the current state of a person it cannot see, cannot hear, cannot understand.

* Know whether the user's questions, over time, trend in the right direction for that person.

That seems awfully optimistic, unless I'm misunderstanding the point, which is entirely possible.



It is definitely optimistic, but I was steelmanning the optimist’s argument.



