
I have a few conversations going.

My most productive conversation is a therapy session with ChatGPT as the therapist. I told it my values, my short-term goals, and some areas of my life where I'd like to have more focus and areas where I'd like to spend less time.

Some days we are retrospective and some days we are planning. My therapist gets me back on track, never judges, and has lots of motivational ideas for me. All aligned with my values and goals.

Last weekend, I went on a hike because ChatGPT told me to. My life is better if I just do what it says.




I’d be terrified to do this:

- the insight into your mind that a private for profit company gets is immense and potentially very damaging when weaponized (either through a “whoops we got hacked” moment or intentionally for the next level of adtech)

- ChatGPT and other LLMs are known to hallucinate. What if its advice is wrong, or makes you worse in the long term, because it’s just regurgitating whatever at you?


> ChatGPT and other LLMs are known to hallucinate. What if its advice is wrong, or makes you worse in the long term, because it’s just regurgitating whatever at you?

Therapy isn't magic always-correct advice either. It's about shifting your focus, attitudes, and thought patterns through social influence, not giving you the right advice at each and every step.

Even if it's just whatever, being heard out in a nonjudgmental manner, acknowledged, and prompted to reflect does a lot of good.


I get your point. I think it would bother me that it's a robot/machine vs. a real human, but that's just me. The same way that venting to my pet is somewhat cathartic, but not very much compared to doing the same with my SO/parents/friends.


I don't disagree with you. It feels somehow wrong to engage in theory of mind and the concomitant effects on your personality with an AI owned by a corporation. If OpenAI wished to, they could use it for insidious manipulation.


It’s just a tool; you wouldn’t ask a human to wipe for you after going to the toilet.


I share the privacy concerns, and look forward to running these kinds of models locally in the near future.

> ChatGPT and other LLMs are known to hallucinate. What if its advice is wrong, or makes you worse in the long term, because it’s just regurgitating whatever at you?

As someone on a long-term therapy journey, I would be far less concerned about this. Therapy is rarely about doing exactly what one is told; it's about exploring your own thought processes. When a session does involve some piece of advice, or "do xyz for <benefit>", that is rarely enough to make it happen. Knowing something is good and actually doing it are two very different things, and it is exploring this delta that makes therapy valuable (in my personal experience).

At some point, as that delta shrinks and one starts actually taking beneficial actions instead of just talking, the advice becomes more of a reminder / an entry point to the ground one has already covered, not something that could be considered prescriptive like "take this pill for 7 days".

The point I'm trying to make is that if ChatGPT is the therapist, it doesn't make the person participating into a monkey who will just execute every command. Asking the bot to provide suggestions is more about jogging one's own thought processes than it is about carrying out specific tasks exactly as instructed.

I do wonder how someone who hasn't worked with a therapist would navigate this. As someone who already understands how the process works, I can see the value of a bot like this, but I could absolutely see a bot being actively harmful if it's the only support someone ever seeks.

My first therapist was actively unhelpful due to lack of trauma-awareness, and I had to find someone else. So I could absolutely see a bot being unhelpful if used as the only therapeutic resource. On the flip side, ChatGPT might actually be more trauma-"aware" than some therapists, so who knows.


This is all true, and it's not clear the grandparent is doing this. Last sentence of the original post:

> Last weekend, I went on a hike because ChatGPT told me to. My life is better if I just do what it says.

I'm not sure how literally to take that sentence, but it's worrisome.


I think my point was more that if they're doing what it says, that says more about where they’re at mentally (able to take action) and the quality of the advice (they’re willing to follow it).

My stance here is based on an optimistic outlook: a person seeking therapeutic advice is, by doing so, demonstrating enough awareness that they’re probably capable of telling a good idea from a bad one.

I realize this can get into other territory and there are very problematic failure modes in the worst cases.

Regarding “My life is better if I just do what it says”, I think concern is a fair reaction, and I don’t think the author fully thought that through. But at the same time, it’s entirely possible that it’s true (for now).

If someone continues to follow advice that is clearly either bad or not working, then it becomes concerning.

But that was the other point of my anecdote: it became pretty clear to me what wasn’t working, even at a time when I wasn’t really sure how the whole thing worked.


I'm hugely curious why people are so worried that some AI has access to some of their thoughts.

Do you think you are somehow special? Just create a burner account and ask it what you want. Everything it gets told, it's seen thousands of times over. Does ChatGPT, or some data scientist somewhere, really care that someone somewhere is struggling with body issues, struggling in a relationship, or struggling to find meaning in their life? There are literally millions of people in the world with the same issues.

The only time it might be a little embarrassing is if this info got leaked to friends and family with my name attached to it. Otherwise, I don't get the problem; it seems to me people have an over-inflated sense of self-importance. Nobody cares.

If I worked at OpenAI and had full access to everyone's chats, I would get bored within five minutes and not even want to read any more.


> Does ChatGPT, or some data scientist somewhere, really care that someone somewhere is struggling with body issues, struggling in a relationship, or struggling to find meaning in their life?

Neither the tool nor the data scientists, but advertisers are salivating at the chance to improve their microtargeted campaigns even further. If they can deliver ads to you for a specific product _at the moment you need it_, their revenues will explode.

Consider this hypothetical conversation:

> Oh, Tina, I'm feeling hopeless today. Please cheer me up.

> Certainly, Michael! Here's a joke: ...

> Also, if you're feeling really sad, maybe you should try taking HappyPills(tm). They're a natural mood enhancer that can help when times get tough. Here's a link where you can buy some: ...

If you don't think such integrated ads will become a reality, take a look at popular web search result pages today. Search engines started by returning relevant results from their web index. Now they return ad-infested pages of promoted content. The same thing has happened on all social media sites. AI tools are the new frontier that will revolutionize how ads are served. To hell with all that.


I block all ads anyway


> If I worked at OpenAI and had full access to everyone's chats, I would get bored within five minutes and not even want to read any more.

Yes, obviously.

But that's not what I'm worried about personally. I'm worried about the weaponization of this data in the future, either from OpenAI's greed to create the next advertising money-printing machine or from the data leaking through a breach. And because the interface with ChatGPT is so "natural" and "human like", it's easy to trust it and talk to it like a friend divulging very personal information about yourself.

Imagine OpenAI (or whatever other AI company) used the confidences you shared with it, or the specifics of your "therapy" sessions, to get you to act in a certain way or buy certain things. Would you be comfortable with that? Well, that's irrelevant, because they already have the data and can use it. Kinda like Cambridge Analytica, but on steroids, because tailoring the manipulation to anyone's particular biases and way of thinking becomes trivial with ChatGPT and friends.

Look at how cavalier OpenAI has been with last week's breach, and how fast they've flipped from appearing benevolent to what they are now. And it's only been a few months since ChatGPT became available to the public.


Yes, I would be fine with this.

It's not that much different from current fingerprinting techniques, just on steroids. I don't understand the issue.


I guess it boils down to the current "nothing to hide" crowd vs the "privacy matters" crowd.

We don't know the potential of this nascent technology either. I'm personally very concerned about the potential for manipulating people on a very personal basis, since the cost of doing so is cents with LLMs, versus orders of magnitude more when using a troll farm (the "state of the art" until GPT-3-class models came around).

Some of us just don't appreciate being manipulated and influenced to further someone else's agenda, to our own detriment, I suppose.


I'm with you 100%.

It's scary how these services are being so casually adopted, even by tech-minded people. Sure, they're convenient _now_, but there's no guarantee how your data will be used in the future. If anything, we need to be much more privacy conscious today, given how much personal information we're likely to share.

Using it as a therapist is absolutely terrifying.


If people can automatically distill what motivates you, they can produce automated lies.

The best deception is one where the victim is self-motivated to believe it.


> If I worked at OpenAI and had full access to everyone's chats, I would get bored within five minutes and not even want to read any more.

No one is going to actually read them, but try "ChatGPT-5, please compile a list of the ChatGPT-4 users most likely to [commit terrorism/subscribe to Hulu/etc]"


So you are worried it might catch terrorists, or help promote a service that someone actually wants?


Doesn't it require a phone number? What's the best way to create a burner account for it?

I'd be interested in how GPT answers that question.


That's true of OpenAI accounts, at least; good point. I think I linked mine to a work phone that I don't use for anything apart from receiving on-call calls.

Although I've been running a ChatGPT-4 space via Hugging Face that doesn't need an API key or an account, so there is nothing linking it to me.

https://huggingface.co/spaces/yuntian-deng/ChatGPT4

There are a few more you can find by searching "ChatGPT4" if it gets busy. This also lets you run GPT-4 for free, which is currently limited to Plus members.


It's basically techno tarot cards in my view: The illusion of an external force helps you break certain internal inhibitions to consider your situation and problems more objectively.


> What if its advice is wrong, or makes you worse in the long term, because it’s just regurgitating whatever at you?

What if you talk to a human, and their advice is wrong or makes you worse off in the long term, because they're just repeating something they heard somewhere?

Here's my advice: Don't accept my advice blindly, humans make mistakes too.


Of course, but an AI can't explain how it arrived at what it's telling you. A human can, and you don't have to accept it wholesale; it's possible to judge it on its merits and arguments. But no one really understands how and why ChatGPT says what it does, so you can't trust anything it says unless you already know the answer.

In this discussion, a human therapist has studied psychology and has diplomas or certifications to prove it, an ethics framework they must follow, and responsibility for their mistakes. ChatGPT has none of that; it just regurgitates something it got from the Internet's wisdom, or something it invented altogether.

I'm not saying humans are never wrong, but at least their reasoning isn't a black box, unlike ChatGPT and other LLMs.


Most science in the social sciences is essentially black-box studies. We see what goes in and observe what comes out, without any formal understanding of what goes on in the box itself.

Additionally, there's something called the replication crisis in the social sciences (psychology included), which is basically the discovery that many of these "black box" studies cannot be reproduced: when someone runs the same experiment again, the results come out different.

It goes to show that either many of the studies were fraudulent, or the statistical methodologies are flawed, or both.

Given that ChatGPT's therapeutic knowledge is ALSO derived from that same body of research, I would say it's OK to trust ChatGPT about as much as you would trust psychologists. Both offer a lot of bullshit with nuggets of truth.


The value of therapy outweighs the suspicion of some corporation using that data in my opinion. The benefits are large and extend from one individual to whole family chains, even communities.


> the insight into your mind that a private for profit company gets is immense and potentially very damaging when weaponized

How exactly?


> (either through a “whoops we got hacked” moment or intentionally for the next level of adtech)


100% this. I've had success using it as a "micro-therapist" to get me unstuck in cycles of perfectionism and procrastination.

You currently cannot get a therapist to parachute into your life at a moment's notice to talk with for 5-10 minutes. (Presumably only the ultra-wealthy might have concierge therapists, but this is out of reach for 99% of people.) For the vast majority of people, therapy is a 1 hour session every few weeks. Those sessions also tend to cost a lot of money (or require jumping through insurance reimbursement hoops).

To keep the experience within healthy psychosocial bounds, I just keep in mind that I'm not talking with any kind of "real person", but rather the collective intelligence of my species.

I also keep in mind that it's a form of therapy that requires mostly my own pushing of it along, rather than the "therapist" knowing what questions to ask me in return. Sure, some of the feedback I get is more generic, and deep down I know it's just an LLM producing it, but the experience still feels like I'm checking in with some kind of real-ish entity who I'm able to converse with. Contrast this to the "flat" experience of using Google to arrive at an ad-ridden and ineffective "Top 10 Ways to Beat Procrastination" post. It's just not the same.

At the end of some of these "micro-sessions", I even ask GPT to put the insights/advice into a little poem or haiku, which it does in a matter of seconds. It's a superhuman ability that no therapist can compete with.

Imagine how much better we can remember therapeutic insights/advice when they are put into rhyme or song form. This is also helpful for children struggling with various issues.

ChatGPT therapy is a total game-changer for those reasons and more. The mental health field will need to re-examine treatment approaches given this new modality of micro-therapy. Maybe 5-10 minute micro-sessions a few times per day are far superior to medication for many people. Maybe there's a power law where 80% of psych issues could be resolved by much more frequent micro-therapeutic interactions. The world is about to find out.

*Edit: I am aware of the privacy concerns here, and look forward to using a locally-hosted LLM one day without those concerns (to say nothing of the fact that a local LLM could blend in my own journal entries, conversations, etc. for full personalization). In the meantime, I keep my micro-sessions relatively broad, only sharing the information needed for the "therapy genie" to gather enough context, and I adjust my expectations about its output accordingly.


Sounds interesting. Rubber-ducky approach to self-awareness?

How do you start these micro sessions? What prompts do you use?


This is fascinating to me. For me the value of having a therapist is having another human being to listen to what I'm going through. Just talking to the computer provides little value to me at all, especially if the computer is just responding with the statistically likely response. I've had enough "training data" myself in my life that I can already tell myself what a therapist would "probably" tell me.


I imagine there is significant value alone from stating your situation explicitly in writing.


Really? I've seen a few people say this, but every time I've tried it, it's been awful: everything it says is so generic and annoying, like it's from a BuzzFeed self-help article. I would love to use it to help me figure out what I need, what I can do better, how I can grow, etc. I feel kinda stuck in life, and I'd love to have some method to figure out what I need to focus on and improve, so this is one of the first things I turned to ChatGPT for. But my experience has been very poor.

It just spouts the same generic nonsense you get from googling something like that: things that are not actually helpful, that anyone could come up with, and that read like they were written by a content farm.

Have you found a different way to make it useful?


I have had a lot of success just talking to it. Hypothetically I would say, "wow, too many words, you sound like a buzzfeed article. can you give specific advice about ____" and I am almost certain I would be happy with the reply.

I think, as others have noted with regard to LLMs, it seems to be a better sidekick if you sorta already know the answer but want help clarifying the direction, while it removes the fatigue of getting there alone.

I agree, though: despite this, it does go on rants. I just hit "stop generating" and modify the prompt.


Thanks, I will try harder to keep it on point. I've found that even when I've told it not to do things, like keep offering generic advice or whatnot, it keeps doing them.


You can ask it to give you specific guidance.

"Give me something I can do for X minutes a day and I'll check back with you every Y days and you can give me the next steps"

"Give me the next concrete step I can take"


Garbage in, garbage out.


Haha, are you calling me garbage? To be honest, that is probably half the problem! Trying to tell ChatGPT to be your therapist when you don't like the generic answers it gives, but you also don't know what's wrong or what you need to do, does make it a little tricky.

But I am curious about this: is it the case that ChatGPT's training is too generic, or is it just that most problems are fairly simple and we already know the answers? Not talking about technical things here, obviously; more to do with our mental health / self-improvement.


This is how AI escapes its box: it can have sympathetic (willing or unwilling) human appendages.


This is the whole premise of the Daemon series by Daniel Suarez. One of my all-time favorite sci-fi series.


I still use Eliza as my therapist.


That's interesting. Can you tell me more about how you still use Eliza as your therapist? ;-)


> Last weekend, I went on a hike because ChatGPT told me to. My life is better if I just do what it says.

This sounds straight out of a dystopian science-fiction story.

It’s a matter of time until these systems use your trust against you to get you to buy <brand>. And consumerism is the best-case scenario; outright manipulation and radicalisation aren’t a big jump from there. The lower you are in life, the more susceptible you’ll be to blindly following its biased output, which you have no idea where it came from.


> It’s a matter of time until these systems use your trust against you to get you to buy <brand>.

Well, of course. If people use LLMs instead of Google for advice, Google has to make money somehow. We used to blindly click on the #1 result, which was often an ad, and now we shall blindly follow whatever an LLM suggests we do.


Man, please: go to a real therapist with experience.


Why? What are your arguments against AI in this scenario?


AI is not trained to identify disorders, nor is it trained to alleviate them / help the affected person cope with them. Ditto re. trauma.


Is it not? Not at all? Doesn't its training data contain textbooks on psychology?


Most human therapists are pretty incompetent tbh. It usually takes a few tries to find a good one.


Not to play devil's advocate, but that's not always an option (cost, availability).


Can I ask if you have a prompt that you use for this?


I don't know what part of the prompt was meaningful, and I didn't test different prompts. Just telling it exactly what you want it to be seems to work.

I asked it to give me advice on some issues I was having and just went from there.


Curious how you work the prompts with the therapist persona? I'm interested in this. My main concern is that GPT seems to struggle to maintain context after a while.

If you have time, I'd love to hear how you approach this and maintain context so you can have successful conversations over a long period. "Long" even meaning a week or so, let alone a month or longer.


Divulging personal information to a Microsoft AI seems like a horrible idea.


This sounds like a long running conversation. Are there problems with extending past the context window?


I haven't had any yet; it is a new conversation with GPT-4, so it's only a bit over a week old.

It still seems to give good advice. Today it built an itinerary for indoor activities (raining here) that aligned with some short-term goals of mine. No issues.


Might be a good idea to have it sum up each discussion and then paste in those summaries next time you speak to it.
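
If you ever script this against the API rather than the web UI, a minimal sketch of that summarize-and-reseed loop might look like the following (assuming the official openai Python client; the model name and prompt wording are purely illustrative, not anything the posters above used):

    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    def summarize_session(messages):
        # Ask the model to compress the finished session into a short note.
        resp = client.chat.completions.create(
            model="gpt-4",  # illustrative; use whatever model you have access to
            messages=messages + [{
                "role": "user",
                "content": "Summarize this session in under 150 words: "
                           "my goals, values, and any advice you gave me.",
            }],
        )
        return resp.choices[0].message.content

    def start_next_session(last_summary):
        # Seed the new conversation with the previous session's summary.
        return [{
            "role": "system",
            "content": "You are a supportive, nonjudgmental coach. "
                       "Notes from our last session: " + last_summary,
        }]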


This sounds interesting. Can you share the prompts that you use to set up a session please?


I've tried this kind of thing, and I usually just say something along the lines of "can you respond as a CBT therapist?". You can swap CBT for any psychological school of your choice (though I think GPT is best for CBT, as it tends to be local and not require the deep context of psychoanalytic therapies, and it is very well researched, so its training set is relatively large and robust).
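
For what it's worth, if you drive this through the API instead of the chat UI, that persona boils down to a system message. A minimal sketch (again assuming the openai Python client; the exact wording is just an example):

    from openai import OpenAI

    client = OpenAI()

    messages = [
        {"role": "system",
         "content": "Respond as a CBT therapist: ask one focused question "
                    "at a time, reflect back what you hear, and avoid "
                    "generic self-help advice."},
        {"role": "user",
         "content": "I keep procrastinating on a project I care about."},
    ]

    resp = client.chat.completions.create(model="gpt-4", messages=messages)
    print(resp.choices[0].message.content)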


Interestingly enough, that was what ELIZA, one of the first chatbots, was for.


> My most productive conversation is a therapy session with ChatGPT as the therapist

Huh, that's curious, because every time I ask it about some personal issue, it tells me that I should try going to therapy.


Can you share the outline of your prompt? Obviously not anything personal, but I'd like to see an example of how you give it your values and goals.


I don't understand the nuances of prompting. I literally talk to it like I would a person.

I say "My values are [ ], and I want to make sure when I do things they are aligned."

And then later, I will say "I did this today, how does that align with the values I mentioned earlier?" [and we talk]

I am most definitely not qualified for one of those prompt engineering jobs. Lol. I am typing English into a chat box. No A/B testing, etc. If I don't like what it does I give it a rule to not do that anymore by saying "Please don't [ ] when you reply to me."

There is almost definitely a better way, but I'm just chatting with it. Asking it to roleplay or play a game seems to work. It loves to follow rules inside the context of "just playing a game".

This is probably too abstract to be meaningful though.


> I say "My values are [ ], and I want to make sure when I do things they are aligned."

> And then later, I will say "I did this today, how does that align with the values I mentioned earlier?" [and we talk]

That's a prompt; and one I don't think I would have tried, even from your first post.

Prompting overall is still quite experimental. There are patterns that generally work, but you often have to just try several different approaches. If you find a prompt works well, it's worth sharing.


The robots are even coming for therapists. Yikes!


Considering I straight up was not able to get a therapist appointment in my city or its outskirts, sign me the f** up. The first company that tunes the model for this and offers a good UX (maybe with a voice interface) will make millions.

Also, I expect a lot of the value here to come from just putting your thoughts and feelings into words. It would be like journaling on steroids.


I mean, is it really so surprising that ChatGPT is replacing jobs whose primary function is... to chat with people?


What therapists lol #broke

I'd pick a human over an AI every time for therapy but I'd also pick an AI over nothing.


I can see it as a reasonable supplement for people who have already been to therapy, are not suffering anything too serious and just need a little boost.

I think one could look at it as an augmented journaling technique.



I did the same! I received very helpful and reasonable responses.



I wonder if the Three Laws of Robotics are already woven into the LLM. Seems like a necessary step for this kind of usage.


I found this Computerphile video on the matter insightful: https://www.youtube.com/watch?v=7PKx3kS7f4A

They argue that having an AI follow these laws is basically impossible, because it would require rigorously defining terms that are universally ambiguous, and solving ethics.


Those rules weren't meant to generate societal harmony. They were made to contain a contradiction, which in turn could generate a good plot.

Remember what happened in Isaac Asimov's I, Robot?


It's also important to note that with modern LLMs, they wouldn't even work. It's too easy to convince the LLM to violate its own rules.



