
> What I'm not fine with is the secretive injection; these tools are too important to be used in such a way that the owners of the tools can secretly pre-adjust outcomes without disclosure.

If you want access to the underlying model you can use their API. The system prompt is what bootstraps that underlying model into the "chat" persona of ChatGPT. I don't see how this is problematic or morally wrong.
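
For example (a rough sketch assuming the current OpenAI Python SDK; the model name and prompt text are just illustrative), you can pass whatever system message you like through the API, and it takes the place of anything ChatGPT would inject:

    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    response = client.chat.completions.create(
        model="gpt-4",  # illustrative model name
        messages=[
            # your own system prompt, not ChatGPT's
            {"role": "system", "content": "You are a terse assistant."},
            {"role": "user", "content": "What can you help me with?"},
        ],
    )
    print(response.choices[0].message.content)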



Not quite. The GPT-4 offered through the chat completions API will answer questions without any special system prompt.

What these prompts do is add some extra steering of the chat model, on top of the steering already done via RLHF.
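
To illustrate (same hedge as above: a sketch against the OpenAI Python SDK, with a placeholder model and question), you can omit the system message entirely and the model still answers; that baseline behaviour comes from training/RLHF, not from a hidden injected prompt:

    from openai import OpenAI

    client = OpenAI()
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[
            # no system message at all
            {"role": "user", "content": "Summarise RLHF in one sentence."},
        ],
    )
    print(response.choices[0].message.content)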


Are you sure that the API injects prompts?

GPT-4's ability to answer questions without a special system prompt can be entirely a product of training; it isn't necessarily evidence of an injected prompt.


We're in agreement :)



