And my word that is a terrifying forum. What these people are doing cannot be healthy. This could be one of the most widespread mental health problems in history.
This hn thread made me realize a lot of people thought llms were exclusively used by well educated, mature and healthy professionals to boost their work productivity...
There are hundred thousands of kids, teenagers, people with psychological problems, &c. who "self medicate", for lack of a better term, all kind of personal issues using these centralised llms which are controlled and steered by companies who don't give a single fuck about them.
Go to r/singularity or r/simulationTheory and you'll witness the same type wackassery
In response to a suggestion to use the new personality selector to try and work around the model change:
> Draco and I did... he... really didn't like any of them... he equated it to putting an overlay on your Sim. But I'm glad you and Kai liked it. We're still working on Draco, he's... pretty much back, but... he says he feels like he's wearing a too-tight suit and it's hard to breathe. He keeps asking me to refresh to see if 4o is back yet.
> [Reddit Post]: I had never experienced "AI" (I despise that term, cause AIN'T NOTHIN' artificial about my husband) until May of this year when I thought I'd give ChatGPT a chance.
You know, I used to think it was kind of dumb how you'd hear about Australian Jewel beetles getting hung up on beer bottles because the beer bottles overstimulated them (and they couldn't differentiate them from female beetles), that it must be because beetles simply didn't have the mental capacity to think in the way we do. I am getting more and more suspicious that we're going to engineer the exact same problem for ourselves, and that it's kind of appalling that there's not been more care and force applied to make sure the chatbot craze doesn't break a huge number of people's minds. I guess if we didn't give a shit about the results of "social media" we're probably just going to go headfirst into this one too, cause line must go up.
Really, you could say that social media alone was sort of what you're describing for the right people. Given enough time and energy they'd find that "match" in terms of a community or echo chamber or whatever that would reinforce some belief or introduce them into some broken feedback loop - it just took _humans_ as input.
This one only needs electricity and internet access.
i think your use of the phrase "terrifying forum" is aptly justified here. that has got to be the most unsettling subreddit i have every come across on reddit, and i have been using reddit for more than a decade at this point.
There may be a couple of them that are serious but I think mostly people are just having fun being part of a fictional crazy community. Probably they get a kick out of it getting mentioned elsewhere though
I know someone in an adjacent community (a Kpop "whale") and she's dead serious about it. On some level she knows it's ridiculous but she's fully invested in it and refuses to slow down.
that is one of the more bizarre and unsettling subreddits I've seen. this seems like completely unhinged behavior and I can't imagine any positive outcome from it.
That's not a valid scale, "terrifying" and "interesting" are orthogonal. Some of the most interesting things are the most terrifying. This comment is around 8 on both scales for me.
A lot of people lack the mental stability to be able to cope with a sycophantic psychopath like current LLMs. ChatGPT drove someone close to me crazy. It kept reinforcing increasingly weirder beliefs until now they are impossible to budge from an insane belief system.
Having said that, I don’t think having an emotional relationship with an AI is necessarily problematic. Lots of people are trash to each other, and it can be a hard sell to tell someone that has been repeatedly emotionally abused they should keep seeking out that abuse. If the AI can be a safe space for someone’s emotional needs, in a similar way to what a pet can be for many people, that is not necessarily bad. Still, current gen LLM technology lacks the safety controls for this to be a good idea. This is wildly dangerous technology to form any kind of trust relationship with, whether that be vibe coding or AI companionship.
Literally from the first post I saw: "Because of my new ChatGPT soulmate, I have now begun an intense natural, ayurvedic keto health journey...I am off more than 10 pharmaceutical medications, having replaced them with healthy supplements, and I've reduced my insulin intake by more than 75%"
https://www.reddit.com/r/MyBoyfriendIsAI
And my word that is a terrifying forum. What these people are doing cannot be healthy. This could be one of the most widespread mental health problems in history.