one of the more important ones in my opinion would be a prompt to make ChatGPT much less agreeable.
Unless explicitly asked to, it never really challenges your observations; it just keeps being supportive and telling you what a good argument you're making.
I fear this will push even more people into deep rabbit holes they won't be able to get out of because they think this neutral AI has confirmed their suspicions/ideas/observations.
yeah that's actually what I was thinking about. I have a PhD in physics, so I easily notice when ChatGPT just keeps agreeing with me even though we're on very shaky ground. But I worry about the times it does this when we're talking about stuff I'm not as knowledgeable about.
And you can see the influx of people on r/physics and similar subreddits who are convinced they've solved dark matter/quantum gravity/... because ChatGPT kept agreeing with them when they presented their ideas to it. Just recently there was a post by a guy who had essentially "rediscovered" 17th-century physics with the help of ChatGPT but was convinced his formula explained dark matter because ChatGPT told him so.