> ChatGPT and other LLMs are known to hallucinate. What if its advice is wrong, or makes you worse in the long term, because it's just regurgitating whatever at you?
Therapy isn't magic always-correct advice either. It works by shifting your focus, attitudes, and thought patterns through social influence, not by giving you the right advice at each and every step.
Even if the content is just whatever, being heard out in a nonjudgmental manner, acknowledged, and prompted to reflect does a lot of good.
I get your point. I think it would bother me that it's a robot/machine vs a real human, but that's just me. It's the same way that venting to my pet is somewhat cathartic, but not nearly as much as doing the same with my SO/parents/friends.
I don't disagree with you. It feels somehow wrong to engage in theory of mind and the concomitant effects on your personality with an AI owned by a corporation. If OpenAI wished to, they could use it for insidious manipulation.