People who get really poor results probably don't realize that their prompts aren't very good.
I think some users assume the model can't do something before they even try, so their prompts don't take advantage of all the capabilities the model provides.
I think you're right.