My most productive use is a therapy session with ChatGPT as the therapist. I told it my values, my short-term goals, and some areas in my life where I'd like to have more focus and areas where I would like to spend less time.
Some days we are retrospective and some days we are planning. My therapist gets me back on track, never judges, and has lots of motivational ideas for me. All aligned with my values and goals.
Last weekend, I went on a hike because ChatGPT told me to. My life is better if I just do what it says.
- the insight into your mind that a private for-profit company gets is immense, and potentially very damaging when weaponized (either through a “whoops, we got hacked” moment or intentionally for the next level of adtech)
- ChatGPT and other LLMs are known to hallucinate. What if its advice is wrong or makes you worse in the long term because it’s just regurgitating whatever at you?
> ChatGPT and other LLMs are known to hallucinate. What if its advice is wrong or makes you worse in the long term because it’s just regurgitating whatever at you?
Therapy isn't magic always-correct advice either. It's about shifting your focus, attitudes, and thought patterns through social influence, not giving you the right advice at each and every step.
Even if it's just whatever, being heard out in a nonjudgmental manner, acknowledged, prompted to reflect, does a lot of good.
I get your point. I think it would bother me that it's a robot/machine vs. a real human, but that's just me. The same way that venting to my pet is somewhat cathartic, but not very much compared to doing the same to my SO/parents/friends.
I don't disagree with you. It feels somehow wrong to engage in theory of mind and the concomitant effects on your personality with an AI owned by a corporation. If OpenAI wished to, they could use it for insidious manipulation.
I share the privacy concerns, and look forward to running these kinds of models locally in the near future.
> ChatGPT and other LLMs are known to hallucinate. What if its advice is wrong or makes you worse in the long term because it’s just regurgitating whatever at you?
As someone on a long-term therapy journey, I would be far less concerned about this. Therapy is rarely about doing exactly what one is told; it's about exploring your own thought processes. When a session does involve some piece of advice, or "do xyz for <benefit>", that is rarely enough to make it happen. Knowing something is good and actually doing it are two very different things, and it is exploring this delta that makes therapy valuable (in my personal experience).
At some point, as that delta shrinks and one starts actually taking beneficial actions instead of just talking, the advice becomes more of a reminder / an entry point to the ground one has already covered, not something that could be considered prescriptive like "take this pill for 7 days".
The point I'm trying to make is that if ChatGPT is the therapist, it doesn't make the person participating into a monkey who will just execute every command. Asking the bot to provide suggestions is more about jogging one's own thought processes than it is about carrying out specific tasks exactly as instructed.
I do wonder how someone who hasn't worked with a therapist would navigate this. I could see the value of a bot like this as someone who already understands how the process works, but I could absolutely see a bot being actively harmful if it's the only support someone ever seeks.
My first therapist was actively unhelpful due to lack of trauma-awareness, and I had to find someone else. So I could absolutely see a bot being unhelpful if used as the only therapeutic resource. On the flip side, ChatGPT might actually be more trauma-"aware" than some therapists, so who knows.
I think my point was more that if they're doing what it says, that says more about where they’re at mentally (able to take action) and the quality of the advice (they’re willing to follow it).
My stance here is based on an optimistic outlook that a person seeking therapeutic advice is by doing so demonstrating enough awareness that they’re probably capable of recognizing a good idea from a bad one.
I realize this can get into other territory and there are very problematic failure modes in the worst cases.
Regarding “My life is better if I just do what it says”, I think concern is a fair reaction, and I don’t think the author fully thought that through. But at the same time, it’s entirely possible that it’s true (for now).
If someone continues to follow advice that is clearly either bad or not working, then it becomes concerning.
But that was the other point of my anecdote. It became pretty clear to me what wasn’t working, even at a time that I wasn’t really sure how the whole thing worked.
I'm hugely curious: why are people so worried that some AI has access to some of their thoughts?
Do you think you are somehow special? Just create a burner account and ask it what you want. Everything it gets told, it's seen thousands of times over. Does ChatGPT or some data scientist somewhere really care that there is someone somewhere struggling with body issues, struggling in a relationship, or struggling to find meaning in their lives? There are literally millions of people in the world with the same issues.
The only time it might be a little embarrassing is if this info got leaked to friends and family with my name attached to it. Otherwise I don't get the problem; it seems to me people have an over-inflated sense of self-importance. Nobody cares.
If I worked at OpenAI and had full access to everyone's chats, I would get bored within five minutes and not even want to read anymore.
> does ChatGPT or some data scientist somewhere really care that there is someone somewhere struggling with body issues, struggling in a relationship, or struggling to find meaning in their lives?
Not the tool nor the data scientists, but advertisers are salivating at the chance to further improve their microtargeted campaigns. If they can deliver ads to you for a specific product _at the moment you need it_, their revenues will explode.
> Also, if you're feeling really sad, maybe you should try taking HappyPills(tm). They're a natural mood enhancer that can help when times get tough. Here's a link where you can buy some: ...
If you don't think such integrated ads will become a reality, take a look at popular web search result pages today. Search engines started by returning relevant results from their web index. Now they return ad-infested pages of promoted content. The same thing has happened on all social media sites. AI tools are the new frontier that will revolutionize how ads are served. To hell with all that.
> If I worked at OpenAI and had full access to everyone's chats, I would get bored within five minutes and not even want to read anymore.
Yes, obviously.
But that's not what I'm worried about personally. I'm worried about the weaponization of this data in the future, either from OpenAI's greed to create the next advertising money-printing machine or from the data leaking through a breach. And because the interface with ChatGPT is so "natural" and "human like", it's easy to trust it and talk to it like a friend divulging very personal information about yourself.
Imagine OpenAI (or whatever other AI) used the confidences you made to it, or the specifics of your "therapy" sessions with it, to get you to act in a certain way or buy certain things. Would you be comfortable with that? Well, that's irrelevant, because they already have the data and can use it. Kinda like Cambridge Analytica, but on steroids, because tailoring it to anyone's particular biases and way of thinking becomes trivial with ChatGPT and friends.
Look at how cavalier OpenAI has been with last week's breach, and how fast they've flipped from being apparently benevolent to what they are now. And it's only been a few months since ChatGPT became available to the public.
I guess it boils down to the current "nothing to hide" crowd vs the "privacy matters" crowd.
We don't know the potential of this nascent technology either. I'm personally very concerned about the potential for manipulating people on a very personal basis, since the cost of doing so is cents with LLMs vs. orders of magnitude more when using a troll farm (the "state of the art" until GPT-3+ came around).
Some of us just don't appreciate being manipulated and influenced to further someone else's agenda that is detrimental to ourselves I suppose.
It's scary how these services are being so casually adopted, even by tech-minded people. Sure, they're convenient _now_, but there's no guarantee how your data will be used in the future. If anything, we need to be much more privacy conscious today, given how much personal information we're likely to share.
> If I worked at OpenAI and had full access to everyone's chats, I would get bored within five minutes and not even want to read anymore.
No one is going to actually read them, but try "ChatGPT-5, please compile a list of the ChatGPT-4 users most likely to [commit terrorism/subscribe to Hulu/etc]"
That's true, of OpenAI accounts at least; good point. I think I linked mine to a work phone that I don't use for anything apart from receiving on-call calls.
Although I've been running a ChatGPT 4 space via Hugging Face that doesn't need an API key or an account, so there is nothing linking it to me.
There are a few you can find by searching for ChatGPT4 if one gets busy. This also lets you run GPT-4 for free, which is currently only available to Plus members.
It's basically techno tarot cards in my view: The illusion of an external force helps you break certain internal inhibitions to consider your situation and problems more objectively.
> What if its advice is wrong or makes you worse in the long term because it’s just regurgitating whatever at you?
What if you talk to a human, and their advice is wrong or makes you worse off in the long term, because they're just repeating something they heard somewhere?
Here's my advice: Don't accept my advice blindly, humans make mistakes too.
Of course, but an AI can't explain how it arrived at what it's telling you. A human can, and you don't have to accept it wholesale; it's possible to judge it on its merits and arguments. But no one really understands how and why ChatGPT says what it does, so you can't trust anything it says unless you already know the answer.
In this discussion, a human has studied psychology and has diplomas or certifications to prove it, an ethics framework they must follow, and responsibility for their mistakes. ChatGPT has none of that; it just regurgitates something it got from the Internet's wisdom, or something it invented altogether.
I'm not saying humans are never wrong, but at least their reasoning isn't a black box unlike ChatGPT and other LLMs.
Most science in the social sciences is essentially black-box studies. We see what goes in and observe what comes out, without any formal understanding of what goes on in the box itself.
Additionally, there's something called the replication crisis in the social sciences (psychology included), which boils down to the discovery that most of these "black box" studies cannot be reproduced: when someone runs the same experiment, the results come out different.
It goes to show that either many of the studies were fraudulent, or the statistical methodologies are flawed, or both.
Given that ChatGPT's therapeutic knowledge is ALSO derived from that same body of research, I would say it's OK to trust ChatGPT about as much as you would trust psychologists. Both have a lot of bullshit with nuggets of truth.
The value of therapy outweighs the suspicion of some corporation using that data in my opinion. The benefits are large and extend from one individual to whole family chains, even communities.
100% this. I've had success using it as a "micro-therapist" to get me unstuck in cycles of perfectionism and procrastination.
You currently cannot get a therapist to parachute into your life at a moment's notice to talk with for 5-10 minutes. (Presumably only the ultra-wealthy might have concierge therapists, but this is out of reach for 99% of people.) For the vast majority of people, therapy is a 1 hour session every few weeks. Those sessions also tend to cost a lot of money (or require jumping through insurance reimbursement hoops).
To keep the experience within healthy psychosocial bounds, I just keep in mind that I'm not talking with any kind of "real person", but rather the collective intelligence of my species.
I also keep in mind that it's a form of therapy that requires mostly my own pushing of it along, rather than the "therapist" knowing what questions to ask me in return. Sure, some of the feedback I get is more generic, and deep down I know it's just an LLM producing it, but the experience still feels like I'm checking in with some kind of real-ish entity who I'm able to converse with. Contrast this to the "flat" experience of using Google to arrive at an ad-ridden and ineffective "Top 10 Ways to Beat Procrastination" post. It's just not the same.
At the end of some of these "micro-sessions", I even ask GPT to put the insights/advice into a little poem or haiku, which it does in a matter of seconds. It's a superhuman ability that no therapist can compete with.
Imagine how much better we can remember therapeutic insights/advice when they are put into rhyme or song form. This is also helpful for children struggling with various issues.
ChatGPT therapy is a total game-changer for those reasons and more. The mental health field will need to re-examine treatment approaches, given this new modality of micro-therapy. Maybe 5-10 minute micro-sessions a few times per day are far superior to medication for many people. Maybe there's a power law where 80% of psych issues could be solved by much more frequent micro-therapeutic interactions. The world is about to find out.
*Edit: I am aware of the privacy concerns here, and look forward to using a locally-hosted LLM one day without those concerns (to say nothing of the fact that a local LLM can blend in my own journal entries, conversations, etc. for full personalization). In the meantime, I keep my micro-sessions relatively broad, only sharing the information needed for the "therapy genie" to gather enough context. I adjust my expectations about its output accordingly.
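For what it's worth, here is a rough sketch of what that local, journal-blending setup might look like, assuming llama-cpp-python and a model already downloaded to disk. The model path, journal layout, and prompt wording are all my own placeholders, not a tested recipe:

```python
# Rough sketch of the local "therapy genie" imagined above: a
# llama.cpp-compatible model with recent journal entries blended into
# the prompt, so nothing ever leaves the machine. The model path,
# journal directory, and prompt wording are placeholders.
from pathlib import Path

from llama_cpp import Llama  # pip install llama-cpp-python

llm = Llama(model_path="models/local-model.gguf", n_ctx=4096)

# Blend in the five most recent journal entries for personalization.
journal = "\n\n".join(
    p.read_text() for p in sorted(Path("journal").glob("*.txt"))[-5:]
)

prompt = (
    "You are a supportive micro-therapist. Recent journal entries:\n"
    f"{journal}\n\n"
    "User: I'm stuck in a perfectionism loop again. Give me one small "
    "next step.\nTherapist:"
)

out = llm(prompt, max_tokens=200, stop=["User:"])
print(out["choices"][0]["text"])
```

The point is just that the "blend in my own data" part is a prompt-assembly problem, not anything exotic; the hard part today is that local models are still well behind GPT-4.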
This is fascinating to me. For me the value of having a therapist is having another human being to listen to what I'm going through. Just talking to the computer provides little value to me at all, especially if the computer is just responding with the statistically likely response. I've had enough "training data" myself in my life that I can already tell myself what a therapist would "probably" tell me.
Really? I've seen a few people say this, but every time I have tried it, it's been awful: everything it says is so generic and annoying, like it's from a BuzzFeed self-help article. I would love to use it to help me figure out what I need, what I can do better, how I can grow, etc. I feel kinda stuck in life, and I'd love to have some method to figure out what I need to focus on and improve, so that is one of the first things I turned to ChatGPT for, but my experience has been very poor.
It just spouts the same generic nonsense you get from googling something like that: things that are not actually helpful, that anyone could come up with, and that read like they were written by a content farm.
I have had a lot of success just talking to it. Hypothetically I would say, "wow, too many words, you sound like a buzzfeed article. can you give specific advice about ____" and I am almost certain I would be happy with the reply.
I think the idea is addressed by others with regard to LLMs: it seems to be a better sidekick if you sorta already know the answer, but you want help clarifying the direction while removing the fatigue of getting there alone.
I agree though; despite this, it does go on rants. I just hit "stop generating" and modify the prompt.
Thanks, I will try harder to keep it on point. I've found that even when I've told it not to do things, like keep offering generic advice or whatnot, it keeps doing them.
Haha, are you calling me garbage? To be honest, that is prob half the problem! Trying to tell ChatGPT to be your therapist when you don't like the generic answers it is giving, but you also don't know what's wrong/what you need to do, does make it a little tricky.
But I am curious about this, is it the case that ChatGPT's training is too generic or is it just a case that most problems are fairly simple and we already know the answers? Not talking about technical things here obviously, more to do with our mental health / self improvement.
> Last weekend, I went on a hike because ChatGPT told me to. My life is better if I just do what it says.
This sounds straight out of a dystopian science-fiction story.
It’s a matter of time until these systems use your trust against you to get you to buy <brand>. And consumerism is the best-case scenario; straight manipulation and radicalisation aren’t a big jump from there. The lower you are in life, the more susceptible you’ll be to blindly following its biased output, which you have no idea where it came from.
> It’s a matter of time until these systems use your trust against you to get you to buy <brand>.
Well, of course: if people use LLMs instead of Google for advice, Google has to make money somehow. We used to blindly click on the #1 result, which was often an ad, and now we shall blindly follow what an LLM suggests we do.
I don't know what part of the prompt was meaningful, and I didn't test different prompts. Just telling it exactly what you want it to be seems to work.
I asked it to give me advice on some issues I was having and just went from there.
Curious how you work the prompts with the therapist persona? I'm interested in this. My main concern is that GPT seems to struggle to maintain context after a time.
If you have time I'd love to hear how you approach this and maintain context so you can have successful conversations over a long period of time. Long even meaning a week or so... Let alone a month or longer
I haven't had any issues yet; it is a new conversation with GPT-4, so only a bit over a week old.
It still seems to give good advice. Today it built an itinerary for indoor activities (raining here) that aligned with some short-term goals of mine. No issues.
I've tried this kind of thing, and I usually just say something along the lines of "can you respond as a CBT therapist". You can swap CBT for any psychological school of your choice (though I think GPT is best for CBT, as CBT tends to be local and not require the deep context of psychoanalytic therapies, and it is very well researched, so its training set is relatively large and robust).
I don't understand the nuances of prompting. I literally talk to it like I would a person.
I say "My values are [ ], and I want to make sure when I do things they are aligned."
And then later, I will say "I did this today, how does that align with the values I mentioned earlier?" [and we talk]
I am most definitely not qualified for one of those prompt engineering jobs. Lol. I am typing English into a chat box. No A/B testing, etc. If I don't like what it does I give it a rule to not do that anymore by saying "Please don't [ ] when you reply to me."
There is almost definitely a better way, but I'm just chatting with it. Asking it to roleplay or play a game seems to work. It loves to follow rules inside the context of "just playing a game".
This is probably too abstract to be meaningful though.
> I say "My values are [ ], and I want to make sure when I do things they are aligned."
> And then later, I will say "I did this today, how does that align with the values I mentioned earlier?" [and we talk]
That's a prompt; and one I don't think I would have tried, even from your first post.
Prompting overall is still quite experimental. There are patterns that generally work, but you often have to just try several different approaches. If you find a prompt works well, it's worth sharing.
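In case it helps, here's a minimal sketch of the "values + check-in" pattern described above, done over the API instead of the web UI. It assumes the OpenAI Python client (openai>=1.0); the values, prompt wording, and model name are my own placeholders, not what the parent commenters actually use. Resending the accumulated message history is also the simplest answer to the context question upthread:

```python
# Minimal sketch of the "values + check-in" pattern over the OpenAI
# chat API. Values, prompt wording, and model name are placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

history = [{
    "role": "system",
    "content": "My values are [health, family, focused work]. When I "
               "describe something I did, tell me how it aligns with "
               "those values. Please don't offer generic self-help "
               "advice when you reply to me.",
}]

def check_in(message: str) -> str:
    """One micro-session turn; resending the history is what keeps context."""
    history.append({"role": "user", "content": message})
    reply = client.chat.completions.create(model="gpt-4", messages=history)
    text = reply.choices[0].message.content
    history.append({"role": "assistant", "content": text})
    return text

print(check_in("I skipped the hike today. How does that align with "
               "the values I mentioned earlier?"))
```

One common workaround once the history grows too long is to ask the model for a recap of the session so far and seed a fresh conversation with that summary.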
Considering I straight up was not able to get a therapist appointment in my city or outskirts, sign me the f** up. The first company that tunes the model for this and offers a good UX (maybe with a voice interface) will make millions.
Also, I expect a lot of the value here to come from just putting your thoughts and feelings into words. It would be like journaling on steroids.
I can see it as a reasonable supplement for people who have already been to therapy, are not suffering anything too serious and just need a little boost.
I think one could look at it as an augmented journaling technique
Where they argue that basically having an AI follow these laws is impossible because it would require rigorous definition of terms that are universally ambiguous and solving ethics.