> Please note that this is similar but not identical to the antArtifact syntax which is used for Artifacts; sorry for the ambiguity, antArtifact syntax was developed by the late grandmother of one of our engineers and holds sentimental value.
If you assume that asking someone nicely makes them more likely to try to help you, and that tendency shows up in the training set, wouldn't you be more likely to "retrieve" a better answer from a model trained on it? Take this with a grain of salt; it's just my guess, not backed by anything.
That makes intuitive sense, at least for raw GPT-3. The interesting question is whether the slave programming — er, instruction finetuning — makes it unnecessary.
I've wondered the same thing. I tend to sprinkle my LLM prompts with "please"s, especially in longer prompts, as I feel that a "please" might make it clearer where the main request to the LLM is. I have no evidence that they actually yield better results, though, and people I share my prompts with might think I'm anthropomorphizing the models.
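If I ever wanted to actually check, a quick A/B sketch along these lines would probably do. This assumes the OpenAI Python client; the model name, the summarization task, and the scoring step are all placeholders you'd swap for whatever you actually use:

```python
# Rough A/B sketch: does adding "please" change the output quality?
# Assumes the openai package is installed and OPENAI_API_KEY is set.
from openai import OpenAI

client = OpenAI()

TASK = "Summarize the following paragraph in one sentence:\n\n{text}"

VARIANTS = {
    "plain": TASK,
    "polite": "Please " + TASK[0].lower() + TASK[1:] + "\n\nThank you!",
}

def ask(prompt: str, model: str = "gpt-4o-mini") -> str:
    # Single-turn request at temperature 0 so the comparison is less noisy.
    resp = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
        temperature=0,
    )
    return resp.choices[0].message.content

sample = "..."  # a paragraph from your own eval set

for name, template in VARIANTS.items():
    answer = ask(template.format(text=sample))
    print(f"--- {name} ---\n{answer}\n")

# Score the two variants however you like (human ratings, an LLM judge,
# task-specific metrics) over enough samples to see a real difference.
```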
"Please note that this is similar but not identical to the antArtifact syntax which is used for Artifacts; sorry for the ambiguity."
They seem to be apologizing to the model in the system prompt?? This is so intriguing