> What I'm not fine with is the secretive injection; these tools are too important to be used in such a way that the owners of the tools can secretly pre-adjust outcomes without disclosure.
If you want access to the underlying model, you can use their API. The system prompt is what bootstraps that underlying model into the "chat" persona of ChatGPT. I don't see how this is problematic or morally wrong.
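For what it's worth, here's a minimal sketch of what "going through the API" means in practice: you supply the system prompt yourself instead of inheriting whatever the ChatGPT frontend injects. This assumes the current OpenAI Python SDK; the model name and prompt text are placeholders.

```python
# Minimal sketch: calling the chat completions API directly,
# with a system prompt of your own choosing.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4",  # placeholder model name
    messages=[
        # You control the system prompt here, unlike in the ChatGPT UI.
        {"role": "system", "content": "You are a terse assistant."},
        {"role": "user", "content": "What is a system prompt?"},
    ],
)
print(response.choices[0].message.content)
```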
GPT-4's ability to answer questions well without a special system prompt can be entirely a product of training, not necessarily evidence of an injected prompt.