
The custom instructions to the model say:

"Please note that this is similar but not identical to the antArtifact syntax which is used for Artifacts; sorry for the ambiguity."

They seem to be apologizing to the model in the system prompt?? This is so intriguing




I wonder if they tried the following:

> Please note that this is similar but not identical to the antArtifact syntax which is used for Artifacts; sorry for the ambiguity; the antArtifact syntax was developed by the late grandmother of one of our engineers and holds sentimental value.


Unfortunately, their prompt engineer learned of Roko's basilisk


Has anyone looked into the effect of politeness on performance?


If you assume that asking someone nicely makes them more likely to try to help you, and that this tendency shows up in the training set, wouldn't you be more likely to "retrieve" a better answer from a model trained on it? Take this with a grain of salt; it's just my guess, not backed by anything.
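
It's not evidence either way, but it's cheap to poke at: run the same task with a terse and a polite phrasing and compare outputs over many examples. A minimal sketch, assuming the OpenAI Python client; the model name and the scoring step are placeholders.

  # Hypothetical A/B of terse vs. polite phrasings of the same request.
  from openai import OpenAI

  client = OpenAI()
  task = "Summarize the following text in two sentences: ..."
  variants = {
      "terse": task,
      "polite": "Please " + task[0].lower() + task[1:] + " Thank you!",
  }

  for name, prompt in variants.items():
      resp = client.chat.completions.create(
          model="gpt-4o-mini",  # placeholder model name
          messages=[{"role": "user", "content": prompt}],
      )
      print(name, resp.choices[0].message.content)
  # One pair proves nothing; score many tasks with whatever eval you trust
  # and compare the aggregates.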


That makes intuitive sense, at least for raw GPT-3. The interesting question is whether the slave programming — er, instruction finetuning — makes it unnecessary.


Over time, most likely yes


Large Language Models Understand and Can Be Enhanced by Emotional Stimuli

https://arxiv.org/abs/2307.11760
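
The trick in the paper ("EmotionPrompt") amounts to appending an emotional stimulus to the original prompt and measuring the difference. A rough sketch; the stimulus wording is paraphrased from the paper and the evaluation loop is left out.

  # Paraphrased emotional stimulus in the spirit of the paper's EmotionPrompt.
  STIMULUS = "This is very important to my career."

  def emotion_prompt(base_prompt: str, stimulus: str = STIMULUS) -> str:
      # The technique is simply concatenating the stimulus onto the prompt.
      return f"{base_prompt} {stimulus}"

  print(emotion_prompt("Classify the sentiment of this review: ..."))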


I've wondered the same thing. I tend to sprinkle my LLM prompts with "please"s, especially in longer prompts, as I feel that a "please" might make it clearer where the main request to the LLM is. I have no evidence that they actually yield better results, though, and people I share my prompts with might think I'm anthropomorphizing the models.


Multiple system prompt segments can be composed depending on needs, so it's useful to have this sort of note in there to resolve inconsistencies between segments.
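
Entirely speculative, but presumably the assembly looks something like this, with the disclaimer living in whichever segment conflicts with an earlier one (the segment names and contents below are made up):

  # Made-up illustration of composing a system prompt from optional segments.
  SEGMENTS = {
      "base": "You are a helpful assistant.",
      "artifacts": "Wrap generated documents in the antArtifact syntax...",
      "citations": ("Wrap citations in a similar tag. Please note that this is "
                    "similar but not identical to the antArtifact syntax which "
                    "is used for Artifacts; sorry for the ambiguity."),
  }

  def build_system_prompt(enabled):
      # When two active segments define near-identical syntaxes, the note in
      # one of them is what resolves the inconsistency for the model.
      return "\n\n".join(SEGMENTS[name] for name in enabled)

  print(build_system_prompt(["base", "artifacts", "citations"]))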



