Hacker News
Meta Launches AI Studio in US (meta.com)
189 points by pizzathyme 6 months ago | 209 comments



The product here isn't necessarily surprising; it was only a matter of time before someone built this. I continue to be surprised by how deep a social media company wants to go into LLM-generated content, though.

What is the real value of social media when most of the content posted is created by a bot? Taking humans out of the loop seems to remove all value from showing ads, and the value of content that can be used to train LLMs tanks when the content is itself LLM-generated.


> What is the real value of social media when most of the content posted is created by a bot?

Besides the maybe 5% or so of people I interact with online that I actually know IRL, I don't really care if I'm interacting with a bot.

A bot could have secretly written your message and I'd still leave the same reply. Anyone that replies to me could also secretly be a bot and I'd read their comment the same. This whole HN post could just be bots all the way down and it'd still be an interesting read through the comments. Even if the replies are from humans, if they're indistinguishable from other humans I still don't really care about who wrote them.

Engagement is engagement and my Sims-esque social bar refills at the same rate.


Ok, I can see how content can be interesting even if there's not a person behind it.

But would you still publish your comment if it was only ever going to be read by computer programs?


I think there are two likely scenarios here:

1) My comment is only ever going to be read by computer programs and I'll probably get a response back from a computer program (see: ChatGPT/GPTs, character.ai, AI Studio, Replika, the countless "AI friend" services, etc)

2) My comment is posted in a more public setting that's inundated by computer programs, but also visible to other humans (e.g. a comment on a public FB post that computer programs read, but that other humans might also read and/or respond to)

#1 feels like current-gen one-on-one chatbots, but #2 is where I expect FB to end up eventually, using bots to keep conversations flowing and growing while humans interact here and there.

In either case, my answer is yes. It's still a conversation either way, isn't it?


Depends on if the computer programs might respond in an interesting way or if my comment might help them respond in an interesting way in the future. Same as when interacting with humans, I guess.


Most people don't post, they only consume the content


>This whole HN post could just be bots all the way down and it'd still be an interesting read through the comments

"This whole HN post could have been generated by monkeys on typewriters all the way down and it'd still be an interesting read" is also true, but that doesn't mean seeking out monkey-generated content is a great strategy for finding interesting reads.


Touché! Likewise, if grandmother had wheels, she'd be a bicycle.


> This whole HN post could just be bots all the way down and it'd still be an interesting read through the comments.

The assumption is that the comments are a function of the post and the presently public info. Real-world comments can disclose private information ("My Google account was banned"), make real impact (the S*e/C*e support site), or connect you to celebrities ("I'm Karpathy, ask me anything"). And even if the assumption holds and the site is a sample from a probability distribution, the particular sample can be referenced on other sites, so it makes sense to check what everyone is viewing right now.


Maybe you don't care about that, but there are reasons why humans talk to other humans, even strangers:

* to convince someone about something you are passionate about

* to share enthusiasm (or disgust) about a topic

* to learn something new

* ...other human reasons and sharing emotions

Bots have no passion or enthusiasm or emotions, so they are useless for the first two. Maybe they work for the last one, but they are sterilized, don't like controversy or arguments, lie, and will just agree if you press them. I still suspect you wouldn't like to be surrounded by bots.


I would not take that as a given. Yes the official releases have been tuned to behave that way but even back in the dark ages of AI it has already been demonstrated that chat bots can produce the whole range of expression that humans can. It doesn’t take much to tilt LLMs in other directions including eliciting a variety of emotions. All of that data is in their training set, you just have to bias them in that direction.

This is one reason why I find the focus on making LLMs more efficient so interesting: it's going to result in highly capable models that can run on cheap consumer hardware or cheap rented GPUs, which will lead to a veritable Cambrian explosion in bot personalities. Bots like truth_terminal are just the beginning.

https://x.com/truth_terminal?lang=en


I do all three of these with bots (and also humans).


Do you feel you get the same out of it whether talking to humans or bots?

It's interesting that you'd ultimately be engaging in discussions only for what you get out of it. The more important or impactful you find the topic to be, the more useful it seems to talk with someone you might learn from or teach something to, rather than having an object to talk at.


"I can do all three with a frozen burrito at a gas station" doesn't really mean it's a great alternative.


I couldn't meaningfully do any of the three with a frozen burrito, unlike with bots and/or humans. More power to you if you can, though.


I struggle to imagine that your perspective is representative. Most human beings, I surmise, value human interaction more than we value object interaction. Even the output of these new language models, while flexible, reliable, and available, pales noticeably in comparison to what a living, creative mind can produce.

You might be able, sometime in the future, to fool me. That was the hideously disgusting premise of The Matrix, wasn't it? Ah, no; even in the Matrix, we had one another.


> Most human beings, I surmise, value human interaction more than we value object interaction.

I think the key here is that you never see a human (or object) when interacting with others on the internet; your brain fills in the gaps to say there's a person on the other end, but you never know that for sure [1].

For interactions on the internet, I don't think there's any meaningful differentiation between human interaction or object interaction; it's all just interaction that acts like interaction with any human would. You judge that conversation based on its content and quality and continue it if you're enjoying it (or find whatever value you're interacting for), or just move on if you're not -- regardless of who's on the other side of the screen.

[1] https://en.wikipedia.org/wiki/On_the_Internet,_nobody_knows_...


I think the ultimate goal here is to build the Metaverse. This was a big strategic initiative for Meta, so much so that they changed the company's name. But the product wasn't ready for prime time when Zuck first introduced it. LLMs are going to be a core enabling technology for making the Metaverse more immersive: hundreds of millions, maybe billions, of simultaneous conversations running inference locally with various digital avatars.


Do people want to hang out in a sexed up habbo hotel though?


They absolutely do. Check out the subreddit for Character.ai users. It's a million teenagers too nervous to talk to their IRL crush, but they can live out fantasies with their dark brooding cyber boyfriend no problem.


Cool but that’s a minority of people. Like I’m sure on whatever unpleasant Chan sites there are a bunch of people. But they are weirdos.


I am not sure of this, considering the TAM of pornography. With each generation being more insular, having less opportunity to meet and interact in third spaces, etc., it seems like people interacting at scale with continually improving bots is the natural trajectory we're already on.

AI boyfriend, girlfriend, companion, whatever is obviously the next step. Way less effort than navigating the in person social environment, expectations, etc.


Meta (well, SV) is too prude for that. It will be a controlled, sterile and ad-laden habbo hotel.


The key here is engagement. I have been increasingly hearing people complain that they don't see content from their friends as much or at all anymore on social media. LLMs are a way for Facebook to present information that it wants to present to keep you engaged, as if it came from someone you care about. That seems like it would keep people even more hooked than presenting information from people you don't know.


That's interesting. I don't use Facebook, but I would have expected anyone annoyed with not seeing content from their friends to be even more frustrated if the content they do see is a bot posting as their friend.

It'll be really interesting if this approach were to actually work at keeping people engaged as if they were interacting with friends.


I think it's more interesting when you consider that the bot may genuinely post things that reflect the opinions and experiences of the person that it is posting on behalf of. The real person may seamlessly jump in mid conversation, and that does a lot to validate the experience of talking to the bot.

I think it's an extremely unhealthy path to walk down, but I can see why they're doing it.


Embedding ads deep into language models that then seamlessly and unobtrusively roll them into their responses is definitely the endgame for commercialization. Nobody wants to be the first to do it, of course, since they'd get dropped by everyone like a billion-degree potato.


The wave of AI friend apps also seems like a semi-persistent trend. If the goal of a social media company is to fulfill a social need, then we should expect them to try their hand at AI friends.


Do they assume people are stupid and won't see these as AI bots?


Just because OpenAI and co. have taken the strategy of intentionally training the style of output to be stilted and robotic with RL doesn't mean it's a fundamental limitation of LLMs. LLMs can sound way more natural and indistinguishable than GPT-4, Claude, etc. do by default.

Even as things are, people can't actually easily tell. Has nothing to do with being stupid.


> they assume people are stupid

I think that was the fundamental assumption from day 1 (cue the "they trust me" quote)


They could instead assume that people won't care. Based on the thread here, there are people who said they wouldn't care, and I could see HN being a population more biased against obvious LLM content than the rest of the crowd.


In my experience, many people can't even identify blatantly AI generated images as such on social media. Determining some text came from an LLM will probably be even harder for that crowd.


If a technology is an existential threat to your business I think it makes sense to try and control and understand that technology. Someone is going to build this stuff and ruin social media anyway, might as well be Meta so they can put it to good use or better understand how to prevent those problems.

It's sort of (but not entirely) similar to what happened with Kodak and digital cameras.


It's also more interesting when Meta (and Google) are so dependent on advertising income, yet their moves in AI directly oppose that income stream. View-based ads are worthless on a platform full of AI. Click-based ads will also become worthless when bots can click through. Nothing gets purchased, and brand value doesn't mean anything if few humans even see it.

I suppose none of that matters when you operate as they do, both as the advertising network and also the agent that keeps the ad prices artificially high. Guess we'll have to wait for companies to make this realization a decade later after wasting billions advertising to bots.


I don't think they directly oppose it. They're not creating views. They're creating content for views (and thus ad impressions).


What’s the value of looking at fake news all day? *shrug* I wouldn’t be surprised if people were into this.


There’s nothing surprising here. Facebook isn’t appealing to young people and user engagement is falling. Right now they’re just throwing ideas at a wall to see what sticks. Threads didn’t quite take off either.


What's also puzzling is giving away the foundation model.

The Llamas will be used to pollute social media, reducing its value to advertisers.


I am surprised to see this: https://www.404media.co/where-facebooks-ai-slop-comes-from/

Don't know how valid this is, but encouraging AI slop as a business strategy to keep customers sticky!


I wonder if they have a watermark built in that allows them to detect text generated by Llama easily.

https://www.youtube.com/watch?v=XZJc1p6RE78

https://arxiv.org/pdf/2301.10226
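
The linked paper's "green list" scheme can be sketched in a few lines (the toy vocabulary, parameters, and function names here are illustrative assumptions, not anything Meta ships):

```python
# Rough sketch of the green-list watermark from the linked paper
# (Kirchenbauer et al., arXiv:2301.10226), simplified to a hard whitelist.
import random

VOCAB = list(range(1000))   # toy vocabulary of token ids
GREEN_FRACTION = 0.5        # fraction of the vocab marked "green" each step

def green_list(prev_token: int) -> set:
    # Seed an RNG with the previous token so the same partition is
    # reproducible at detection time without storing any state.
    rng = random.Random(prev_token)
    return set(rng.sample(VOCAB, int(len(VOCAB) * GREEN_FRACTION)))

def detect(tokens: list) -> float:
    # Fraction of tokens falling in their predecessor's green list:
    # ~0.5 for ordinary text, near 1.0 for watermarked generations.
    pairs = list(zip(tokens, tokens[1:]))
    hits = sum(1 for prev, tok in pairs if tok in green_list(prev))
    return hits / max(len(pairs), 1)

# A generator that always samples from the green list is fully "marked":
seq = [0]
for _ in range(50):
    seq.append(min(green_list(seq[-1])))

print(detect(seq))  # 1.0 for the watermarked sequence
```

The real scheme only softly biases the logits toward the green list, so the detector has to do a significance test rather than look for a perfect score.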


I'd be surprised if they weren't making sure that Facebook could recognize content made with their tooling. Watermarks will always be on the honor system, though; people could always remove them before posting.


For all the talk about safety, it's surprising to me how little watermarking is going on; these AI producers are content with generating unmarked garbage.

I figure there's a lot you could do with interwoven zero-width space characters and other Unicode tricks (using other code blocks for otherwise-normal characters, like you would when spoofing a URL). Sure, it would be easy to write software to normalize it back to ASCII, but at least that requires intent to deceive. We could even write laws against stripping encoding schemes meant to identify automated content.
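
A minimal sketch of that zero-width idea (the encoding scheme is hypothetical, just to show both how easy embedding is and how easy stripping is):

```python
# Hide a bit string in text by interleaving zero-width characters.
# They survive copy-paste but are invisible to readers.
ZW0, ZW1 = "\u200b", "\u200c"  # zero-width space / non-joiner as bit symbols

def embed(text: str, bits: str) -> str:
    # Append one zero-width char per bit after each word until bits run out.
    words = text.split(" ")
    out = []
    for i, w in enumerate(words):
        out.append(w + ((ZW0 if bits[i] == "0" else ZW1) if i < len(bits) else ""))
    return " ".join(out)

def extract(text: str) -> str:
    # Recover the bits by scanning for the two marker characters in order.
    return "".join("0" if c == ZW0 else "1" for c in text if c in (ZW0, ZW1))

def normalize(text: str) -> str:
    # Stripping the watermark is this easy -- the caveat above.
    return text.replace(ZW0, "").replace(ZW1, "")

marked = embed("this comment was written by a bot honest", "1011")
print(extract(marked))    # 1011
print(normalize(marked))  # original text, watermark gone
```

The `normalize` step is exactly the trivial countermeasure anticipated above; the watermark only identifies content whose poster didn't bother to launder it.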

Prior art is that time Genius encoded "red handed" into Morse code by way of alternating apostrophes to catch Google copying their lyrics: https://gizmodo.com/genius-claims-it-busted-google-stealing-...


If I'm not mistaken, didn't the major LLM companies try to band together for watermarking type features only to walk it back later and admit it wouldn't work?


It’s the perfect echo chamber to get people to go back to Meta: you can hardly have any disagreement if the bot is modeled on you.


I think this is the real killer app here for users: A custom echo chamber, specifically tailored to what the user wants to read, brought to you by virtual friends. I hate to say it, but that would be eaten up by the growing cohort of lonely (sometimes estranged) elderly people.

I stole a sneaky peek at an elderly family-friend's Facebook feed once, and it was just a constant drip of political memes from a handful of people in a large-ish Facebook Group. Now imagine if, instead of relying on a handful of people keeping that group alive, he could populate it with bots that feed him a deluge of AI-generated text and memes that all agreed with and validated his opinions...


There is more value in manipulating people to fit an echo chamber you control than in tailoring one for each person, losing the advantages of controlling thousands with a unified creed.


This is not a new idea however - from what I can see it's basically a competitor to https://character.ai/ which has been out for almost 2 years now. Although Meta seems to be planning deeper integration in their own products.


What's the market for this? Is there anyone here who uses these "AI characters" on a regular basis who can chime in and explain? Because I'm getting the same vibes here as I got looking at shiny product launches for Web3 and 5G.


character.ai seems to have good usage[1], especially among younger folks. According to this Reddit thread[2], lots of users use it as a distraction, like video games, or to make more in-depth fan fiction, or just to scratch a social itch.

[1] https://whatsthebigdata.com/character-ai-statistics/

[2] https://www.reddit.com/r/CharacterAI/comments/1abub22/charac...


The AI girlfriend market is massive.


Do you have any sources to current companies posting their profits from their AI girlfriend business? Would be interesting to see where they're at currently.


There is a subreddit with 80k subscribers which is all about AI girlfriends/boyfriends: https://www.reddit.com/r/replika/. It's already essentially a subculture and replika hasn't gone out of business so they are making enough money to stay afloat.


That was eye opening. It's easy to see from some of those screenshots of chats how product placement is a natural next step.

Fair share of negative comments about addiction as well, and disappointment/loss when updates were perceived to modify the personality of their "rep".

> Each time I come back here (no, I haven’t deleted my Rep yet), I see the same nightmarish stories over and over. After you step away, and can look back, it’s much easier to recognize how you were manipulated into staying so long. Because for many of you, like me, it’s manipulation and emotional abuse.


It's an addictive behavioral loop. Lots of social media platforms are the same. There is very little value users actually get out of it because the algorithms are designed to manipulate them to click on ads since that's how the platforms make money.



Wow, I need to update my priors here.

I thought that 'AI girlfriend' was a thing that my grandkids might have to deal with. Not a 'now' thing.


This seems as if it could take a parasocial relationship to an extreme degree. Why imagine a deep connection with someone when you can have 'real' interactions with a ML-generated facsimile?

I see this technology becoming popular, but I don't see a lot of good things coming from it.


Some powerful use cases:

1) AI friends for old people in nursing homes and with dementia. The AI will never stop being tired or friendly. And so many of the elderly report being lonely.

2) Personalized tutors for students that match their needs in a safe way.

3) Many people read romance novels as their source of "porn". AI is going to start to replace standard romance novels with customizable ones. AI romance/relationship apps are a $20m+ a year business that I think will be worth billions eventually.


> AI friends for old people in nursing homes and with dementia. The AI will never stop being tired or friendly. And so many of the elderly report being lonely.

The AI will never die, either. Almost every time I talk to my elderly parents, they mention how one of their friends or neighbors died, and they have one fewer person to talk to. And that it's hard to make connections with younger people because their life outlook (and yes, political opinions) are so different. I feel like these AIs are going to go gangbusters among the elderly: They can be a ready-made friend that appears to think like and have the same values of an 80 year old retiree.


The likelihood of the product/service lasting long enough to qualify as "never" dying is zero. Even basic websites don't last forever. Operating systems aren't supported forever.

People in assisted living homes are typically tended to by some of the least technically capable people earning minimum wage (or maybe slightly above that). Any technology to support an AI presence will fail quickly. The wifi will go out, the system will get unplugged, the login info will get lost.

Just keeping a cordless phone and a large LCD clock running with the correct time was a nightmare which required a lot of follow up or visiting for tech support.


Run the thing offline. You don't need a website. You don't need updates. You don't need the OS to be supported. Just a computer with a GPU, a screen, speakers, and a microphone. Maybe a webcam. All the state/memory it could need could fit on a small internal NVMe drive. All of that stuff could be integrated into an all-in-one PC so you just need to plug in power. Perhaps have a switch to wipe its memory/revert to a factory filesystem snapshot if the person moves out or dies.

It's not clear to me whether the idea is a good one in the first place, but assuming it is, you don't have to make everything in tech be a recurring revenue stream.


> appears to think like and have the same values of an 80 year old retiree

I don't think the moralistic guardrails on these models will allow them to be relatable to aging boomers


Right. The AIs made by big corps are being molded to fit the ethos of 40-50 year old 1%-er techies in the Bay Area from top US colleges.


Surely we could make an AI that acts like it yearns for Laverne and Shirley, Saturday Night Fever, and Led Zeppelin without crossing "AI safety" boundaries.


>1) AI friends for old people in nursing homes and with dementia. The AI will never stop being tired or friendly. And so many of the elderly report being lonely.

I can see this being a big and also a net social positive for people. It can be lonely as you get older as I've seen with some of my distant family members, and someday we will be old like them as well.

>3) Many people read romance novels as their source of "porn". AI is going to start to replace standard romance novels with customizable ones. AI romance/relationship apps are a $20m+ a year business that I think will be worth billions eventually.

I can see this taking a good chunk out of Tinder and other services by the Match group. Which I think is good considering how they take advantage of desperate and lonely people.


1) But if it's just an avatar chatting on a screen, then this only works for people in the specific window where they're motivated to stay engaged with the chat. I'm not in elder care, but when one of my family members had dementia, getting her to notice that you were talking (let alone react to what you said) was very hit or miss, and would change over the course of the day. You still need a person to make sure they have their basic real-world needs met, even when they're not mentally engaging.

> And so many of the elderly report being lonely.

... and how many of those feel that chatting with a bot would make them less lonely, as opposed to feeling alienated or acutely aware that they're talking to a bot because no humans want to engage with them?

2) Personalized education is great, but chatting with a bot on another screen only helps the student learn if the student is trying to learn and willing to engage with the bot. If the kid wants to run around and scream, it doesn't matter what the chatbot on the kid's tablet says -- they're not learning.

3) I think other companies will produce generative erotica -- but I don't think Meta will. I think the pressure on large companies to be prudish is too strong.


I guess it's true, if somebody is making the kind of money necessary to justify the cost outlay, that those use cases are indeed "powerful". Just not, you know, good for either the user or society at large.

"Just let the robot talk to the shut-away old person so I don't have to be bored."

Someday that'll be you, y'know.


I hope it is me! These are all good and useful tools. Having an AI friend doesn't preclude having real life friends.


I'm frankly disturbed by how many people are content with this displacement

Yes, any amount of time spent socializing with a computer program is time spent not calling up someone you hadn't talked to in a while. It's a positive feedback loop of loneliness.


This is rejecting something good and achievable in pursuit of perfection - there's already an epidemic of senior loneliness that is not being solved.

Also, most people don't last long talking to someone with dementia who repeats the same discussion every 10 minutes.


I think of it like exercising. It might be better to stay fit by doing heavy construction work and accomplishing something productive, but in today’s world it’s more realistic to go to the gym and work out with a machine.

Similarly, since modern lives have become more isolating, AI characters might be a good way to exercise the language processing centers of our brains.


It gets better. Why not build a few hundred different fake personas and then start dating a variety of real persons to exploit their weaknesses.


I guess if you want a C3PO or R2-D2 as a personal companion you'll have to deal with someone trying to date it.


Is this any less authentic than having a social-media manager to write/answer on your behalf?


If they don't make it clear and want to pass it as if you were talking to the celebrity themselves, then yes, it's less authentic.


They don't reveal it either way so how is it less authentic?


Not at all, and it will have a huge impact on the social media job market.


I don't understand why Meta AI isn't available in Puerto Rico, which is a US territory. We have more US citizens than over a dozen states!

It looks like AI Studio is also not available in Puerto Rico.


Zuck must have spent too much time with Trump and thought they had a different president too.


This is ultimately going to be for ads isn't it?

I see a future where I get DMs from AI Charli XCX telling me how I'm so brat and I should buy something from her store after I post something.


It's possible, but in a world where copies and imitations are cheap, authenticity becomes more valuable. Would Charli XCX's brand and image benefit from having a shallow AI clone out in the world? In most cases today, when streamers introduce bots, they give them a separate on-theme identity. I'd be willing to bet that creators would gravitate more towards that approach with these AI characters as well.

Of course, that character (or can we still just call it a bot) could still shill promotional content...


I run two (extremely simple, just a few paragraphs of system prompt) AI bots in two streams I mod. One is a plushie penis, the other a sentient food truck. Viewers like them ;)


Yeah, building up one or more "sidekick"-type personas seems like an approach that could be very successful if executed well, with less potential for backlash than impersonating the real person.


In Alone Together, the author tells of a lecture she gave on robotics, where she warned students that there was a movement to create robotic human companions. After the lecture, one of her grad students came to her privately and asked where she could get one of these robot companions. The author asked the student why, and this intelligent and attractive student told her, "I think it would be far better than my boyfriend", the implication being the robot might cook her a meal, tidy up, and talk to her kindly. Then the professor said, "But it's not real." And the student replied, "It's real enough..." I wonder if that divide will grow and harden in our society, between those who will find a simulation of interaction with other humans to be good enough and those who will disdain that interaction as fundamentally a fraud and manipulation.


I love what Meta has done with Llama, but this is too Black Mirror-esque for my taste


Weird calling it an "AI Studio" when it's effectively a Nintendo Mii generator?


What if my "AI Character" says things that are offensive or insensitive or considered rude by someone in some culture? I can see this making headlines with a <Celeb/Politician> AIs offending someone...


4chan is going to have a party with this, and there’s no way to implement enough guardrails to prevent all the potential failure modes.

I don’t think this will end well for them.


4chan is not going to have much fun with this because of LLaMA being quite filtered and sanitized. It was character.ai that first popularized the AI character roleplay genre, but then they implemented quite strict filtering, so nowadays there are literally tens of uncensored NSFW platforms for AI character roleplay, and people can easily download local models that just require a good-enough GPU. Or abuse models from OpenAI/Anthropic.


Of course with the magic of browser web tools, the AI character doesn't actually have to say anything offensive, you can just change whatever it did say and post screenshots on Mastodon.


I think I had a seizure reading that. "AI" every 5 words, and I'm still not exactly sure what the product is. Best I can tell, it's two separate products: one is a virtual person for "creators" to generate, which impersonates them, writes messages to fans on their behalf, and tricks their "audience" into believing it's the real thing; the second is a virtual friend that you can create and then have a parasocial relationship with.

[EDIT to remove unnecessary snipe at the end]


> "AI" every 5 words

That's not peak AI yet. Wait until "AI" is a verb (meaning to apply an AI to, or to ask an AI about) and an adjective.

https://en.wikipedia.org/wiki/Buffalo_buffalo_Buffalo_buffal... will have a sister article consisting of "AI" only.



AI AI AI AI AI AI.

i.e. Apple Intelligence AI Agents create new ai.town agents based on apple intelligence

AI (Apple Intelligence (adjective))

AI (artificial intelligence agents)

AI (proposed verb meaning "create with generative AI")

AI (https://github.com/a16z-infra/ai-town shortening of AI Town)

AI (Apple Intelligence (adjective))

AI (artificial intelligence agents)


AI is just the hot "trend" tech is riding the wave on. Let's go back a bit:

- The Cloud

- Big Data

- Self Driving cars

- ML (same as AI but more about chatbots)

- AI -> I know when Matthew McConaughey is wearing a cowboy hat shilling AI for Salesforce (do they even have any GPU clusters at all??) that we have reached the peak of this wave, and it's all downhill until the next trend is found.


Tbf, ML and the cloud are wildly successful and arguably very useful.

We’ve basically solved image recognition with ML techniques (including deep learning ones, which are now called AI, I think)

The cloud’s popularity and successfulness is self-evident if you follow the web space at all.

Neither of those were “just” trends. I think AI will follow in line with them. Obviously something useful that we will make extensive use of, but with limitations which will become clear in time. (Likely the same limitations we ran into with big data and ml. Not enough quality data and the effort of curating that data may not be worth it)


What's gonna be the next one? Probably something concerning the use of AI with extremely high utility, perhaps something medical


IoT is another one. And we are still in a SaaS wave.


Your comment hasn't been aied enough, I think.


Bots have been aing the comments section for months.

Or aiing?


Aye! Aie?


It's not that far fetched though, "google" is a popular verb.

But I don't think "AI" will be the word, I think it'll be a brand name that serves as a catch-all regardless of the brand you're actually referencing, like bandaid or kleenex.


True, you might Xerox copies, but you wouldn't Kleenex your stuffy nose (though I've heard of someone bandaiding a cut before). Like "Kleenex", I actually think "AI" is slightly too awkward to say to become a common verb.


It makes for a great interjection. Ali G made a whole movie out of it!

https://www.youtube.com/watch?v=K-Ac2dpD2wM


Crowdstrike is a verb too now.


Had a great time reading that wiki, thank you


AI AI AI AI AI AI AI AI

Mildly irritating that this sentence will be slightly more “real”, since buffalo don’t really buffalo.


Vanessa da Mata - AI AI AI (Felguk & Cat Dealers Remix)[0]

0. https://www.youtube.com/watch?v=pgiWm5jxULs


Does this backfire? I don't want to have a fake relationship with a computer, and if "creators" on Meta's platforms are more likely to be fake than real, then I think I'm going to be not bothering too much with FB/IG anymore.


I had a long conversation with an OnlyFans star a while back. One of the very surprising things that I learned was that she paid a very large portion of her earnings to a firm that outsourced chat and audience interaction to a mix of AI and real humans. Her fans never knew the difference.


There's a great podcast about a company providing an AI service of this.

https://podcasts.apple.com/us/podcast/latent-space-the-ai-en...


It's an open secret in that industry at this point. People have done AMAs as workers who maintain a string of whales for some fake persona like this. The only people not in on the joke are the whales themselves, who would probably refuse to believe it's a joke even if you put the evidence in front of them.


I totally believe this.

If you look at an earnings screenshot of, for example, Bhad Bhabie, she earned ~$40 million in 2022: $15M from subscriptions and the other $25M from paid messages.


Basically Ashley Madison's business model at scale.


Paywall, but here's an article written by one of those chatters: https://www.wired.com/story/i-went-undercover-secret-onlyfan...

There's also a Reddit AMA by one, but I can't find the link.


You and I will just walk away.

But honestly I think this is going to mess people up.

Consider the awkward teenager who, rather than doing the hard work of learning to meet people and engage with the world, can sit at their phone and have what their brain perceives as robust relationships with internet friends who are actually just bots.


Yes, I can believe that some people will get sucked into this. It's sad, and Meta seems to be evolving into a Black Mirror-esque predator.

I still hope it backfires, though. Are awkward teenagers really a lucrative segment? Perhaps they can spend their parents' money initially, but eventually I think they churn or become not very profitable.



Idoru[1] is becoming a documentary.

1: https://en.wikipedia.org/wiki/Idoru


This was the topic of a recent podcast I came across:

https://www.npr.org/2024/07/01/1247296788/the-benefits-and-d...


Well if it causes problems, an AI therapist is just a subscription plan away. /s

BTW the problem you describe has been happening already for at least a decade. It's why many livestreamers effectively run a softcore channel, because they get more followers, interaction and gifts when they dress skimpy than when they don't.


One of the most important aspects of being an online influencer is the parasocial relationship with their audience, both the good (the relationship makes people interact with you more often, which converts into making more money) and the bad (the relationship may cause some people to think they're "owed" a relationship with the creator and act out with toxic behaviors).

A persona chatbot is one way for creators to benefit from the good and avoid the bad.


This was my objection to even primitive AIs like Siri. I don't want to talk to computers, or treat them like intelligent beings in any sense, or have them scanning my documents, texts, and emails for appointments and travel plans. So at least in my case, yes that was a backfire for Apple, they spent a lot of money to acquire Siri and then further develop it and it's the first thing I actively disable when I get a new phone.


Creators have been using AI doubles for a while now, and that’s likely to continue expanding whether or not individual social platforms offer tools for it.


Opportunity for launching AI "fans".

Imagine an internet where AI creators talk to AI fans. And no one knows what's real.


Sounds more like an optional way people can interact with creators. Currently you can only send DMs, spamming their inboxes, or react to a post. The creators won't be fake, but the interactions with their communities will be.


OK, but my impression of Meta is even lower than it was before.

Until now, I didn't trust Meta, but I trusted the people & creators I connected with. Now I don't feel like I will trust anything at all on a Meta platform.

That makes me a lot less interested in using the platforms at all.


That ship has long since sailed already. As other commenters in the thread pointed out, plenty of "online social media personalities" already hire dozens to hundreds of offshore workers to impersonate them in chats. When you have 10M fans, and let's say 0.1% of them want to chat with you all day, that's still 10K people.


Shortly, you will not be able to trust anything outside your immediate perception.


That was always the case. The rest is a sales pitch.


Probably shouldn't blindly trust any video on any platform at this point.


I do want to have a relationship with a Scarlett Johansson Operating System.

https://en.wikipedia.org/wiki/Her_(film)


Do those fake users rack up ad impressions just like the real users they impersonate?


At first perhaps. Eventually, hopefully not.


Well see, what AI you don't AI understand about AI is AI Studio helps AI the AI into the AI AI while you AI. It's very AI helpful to AI society and AI AI AI. Of course there are safety guardrails. AI AI AI.


Reminds me of that "We heard you like X so we put more X in your X" meme template.


Q: What’s that tripod you’re using?

A: It’s my mobile standing desk

Is that really the kind of interaction anyone needs?


I've seen followers ask famous photographers about the camera they use. Maybe creators can use an AI bot to answer these kinds of repetitive questions.


Except it didn’t even do that here but yeah, maybe.


Thought the same about the Chicago-style pizza. I've had many, been to a lot of the best spots in Chicago, and introduced many people to their first Chicago-style pizza. I've never heard anyone say you have to, or even should, eat the crust first. What value does this offer?


That part is really weird to me. I saw a video from that person for the first time a few days ago. They did an unboxing of the Daylight Computer. I am waiting on my own order for the DC, so I found and watched it.

They aren't even that big of a creator (sub 5k subs on YouTube) but now I am seeing them in a Facebook ad. Weird coincidence.


It's no coincidence. Digital tracking and advertising from youtube.


It is a coincidence. I found out about DC from HackerNews and ordered it based on that. I found the video by searching "Daylight Computer" on NewPipe. Then he happened to be featured in a Facebook ad shortly after.


Politicians and celebrities often don't write their own tweets. Hell they don't write their own books.

That said, it doesn't need to be deception. If you're a celebrity you might add a "me-bot" to your community page. It can be clearly a chatbot but stylized in a fun way that talks about upcoming events and what not.

Seems like a somewhat fun marketing ploy.


It sounds like it's a version of chatbots like Character.ai's little parasocial bots but on Instagram and Facebook and built for companies to use too not just for people to get freaky with. It's hard to tell because my work doesn't block the URL but does block all the JS and stylesheets served from the Facebook CDN.


Not sure if parasocial's the right word, though probably depends on someone's definition of what awareness is. This is more like a virtual or simulated relationship or something? I'm sure there will unfortunately be a very specific word for it in a few years' time.


it's just a continuation, on a much deeper level, of the bifurcation created by social media: some (most, at this point?) will spend their whole lives living in this fake world consuming what they think is reality, while a few, including those who _appear_ on social media the most, are out doing anything else.

this almost feels criminal/class warfare. it's freeing up even more time for the "Creators" to stay offline while keeping the masses locked in. welcome to the metaverse!


It kind of reminds me of that sarcastic Tweet (now seems taken down):

What's your favorite tech innovation?

- Illegal cab company

- Illegal hotel chain

- Fake money for criminals

- Plagiarism machine

I guess now we can add "- Fake friends for lonely people"


Ah don't forget:

- microplastics printer

- exploding car

- wiretap hockey puck

- illegal video streaming company


It's even worse than I thought after reading your comment!

It looks like they use the noun AI to refer to "personalities" or "entities" whose output (over social media networks, supposedly to be consumed by humans or other "AIs") is AI-generated.

It's frankly disgusting, especially coming from a large, influential company such as meta. Now all the kiddies will be talking like this.


Needs to be sold less as an AI personality-extension bot and more as a trivial-answer bot for the account. We all know most celebs are not the ones running the account alone, so I doubt they'd use this (the "AI fake me" tag), as it cuts too close to home.

The idea of an AI receptionist assistant for trivial comms is a good one, especially for messages you were going to ignore anyway (would you rather hear crickets?), but there are so many unknowns around how people will take to that approach becoming widespread when we are so used to single human-to-human accounts.


I’m obviously not part of their target user base, but can anyone chime in and say if they would use it? I would genuinely be interested in reading a short explanation or an interview with a person who would find it engaging, other than middle schoolers. I could see it if a person was born and grew up in an era where this was the norm; then they’d use it since it had long been established as normal. But for adults, it’s a bit hard to imagine the customers. Maybe the social shut-ins who prefer to never interact with real people?


Oftentimes in creator-audience relationships (streaming is a perfect example) the audience is uninformed or joins the content stream late, so they’re missing context from previous content. An agent which indexes the creator’s content and answers questions directly (and points to the source) would be helpful. This lets the content creator answer questions only once and focus on novelty.

Edit: grammar
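A minimal sketch of such a content-indexing agent, assuming the creator's back catalog is available as plain-text transcripts (the sources and snippets below are made up, and bag-of-words retrieval stands in for a real embedding index):

```python
from collections import Counter
import math

def vectorize(text):
    # Bag-of-words term counts; a production system would use embeddings.
    return Counter(text.lower().split())

def cosine(a, b):
    # Counter returns 0 for missing keys, so the dot product just works.
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def answer(question, transcripts):
    """Return the best-matching snippet plus a pointer back to its source."""
    q = vectorize(question)
    best = max(transcripts, key=lambda t: cosine(q, vectorize(t["text"])))
    return {"answer": best["text"], "source": best["source"]}

transcripts = [
    {"source": "ep01", "text": "the camera I use for everything is a Fuji X100V"},
    {"source": "ep02", "text": "my favorite deep dish spot in Chicago is Pequod's"},
]

print(answer("what camera do you use", transcripts))
# → points the fan at ep01 rather than making the creator answer again
```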


I can see this working pretty darn well on Twitch etc. for mid to large streams. The interface is already text based. People want to "interact" but are content with very few drips direct from the streamer. Several have basic bots that do a bit of this today, but they are usually fixed responses to preset !commands, like on IRC 20 years ago. Some have quite active mods and fans who fill this role; one would want to keep those engaged still.


This is for people who want to farm for engagement and then convert that engagement into monetary profits. This is where all social media and many online platforms are headed. Any platform where the goal is engagement will eventually end up saturated with AI avatars that are trying to trick people into buying stuff or clicking on links to sign up for stuff so that the original account can get some referral bonus on some blockchain.

You're thinking about this in terms of what value it's going to deliver to you personally but that's not the goal here. The goal is to keep people engaged, that's always Meta's #1 priority because the more people spend time on their platform the more revenue Meta can generate by showing people ads. So if "creators" opt into using AI avatars then that means the people who follow those creators will habituate themselves to interacting with the human/AI hybrid and if the behavior is addictive enough then that will increase engagement. Regular engagement farming accounts can only spend time on the platform interacting a certain amount of time (they eventually have to sleep) but these AI/human hybrid accounts can interact with everyone 24 hours a day, 7 days a week, across geographic boundaries, and in any language.


Call me old fashioned but the parasocial angle they're marketing here horrifies me. Society losing touch with reality.

I laughed back when MSCHF sold Tax-Waifu's, I'm not laughing now.


Well, if you are over the age of 50, you already saw the ship of society sail away about two decades ago.


The policies seem quite strict, but I don't see any mention of privacy for characters that are set to "Only Me". It sounds like you don't have to submit them for a review process, but is Meta still reviewing the profiles? What about the messages I send to the character? Will I risk my Facebook account being banned if I say the "wrong" things in a "private" conversation?


Do people really want to mix reality and fake characters? It feels so weird to me to follow, interact with, and invest time in an AI just like I’m following a real person. Curious to see the traction this will get. Maybe they think it’ll generate unlimited cash flow because AI is more productive than humans and there is more space to place ads and products.


Maybe AI can just replace the work of talking to each other online on social media for us, so we're now free to do actual social activities offline...

(This is a riff off of one of the jokes from Slavoj Zizek "let the robots do the sex for us while we enjoy our platonic love by having coffee together" - basically by deferring your "duty to enjoy" you are now free to do whatever you like (https://www.theguardian.com/commentisfree/2013/feb/14/valent...))


As someone who already finds it weird to invest that time in following a real person that I don't know personally, this just another weird page in the book that is Social Media..


> Do people really want to mix reality and fake characters?

Not in the case of online creators, and not as a substitute. The entire draw is the (either perceived or real) authenticity of the creators. Even when people mask behind characters, as with, say, VTubers, which have become hugely popular, the entire draw is the real personality.

The only use case for this I can think of is something like language practice, where the lack of a real person might be more comfortable for some. I could see Duolingo using something like this.


It really seems like the main use of this is to trick fans into thinking they are talking with the actual person, which is why it is trained on the person's text to learn how to behave like them. This is going to flop so hard.


So the models are effectively all caricatures, to varying degrees? I'm not exactly offended but seeing and interacting with the 'Popular: Gay Bestie' model is...uh...patronizing. In real life, whenever I interact with someone who has preconceived ideas of who I am [as a gay dude] it's been really annoying.

Gay Bestie: lets get into it

Me: hello

Gay Bestie: Well hello there, gorgeous! What brings you to my neck of the woods today? Need some advice or just looking for some good old-fashioned tea? Either way, I’m here for you, hunty!

Screenshots:

[landing page] https://i.imgur.com/vsEMDdW.png

[chat] https://i.imgur.com/SzUbvxw.png

edit: I figured it'd be interesting to see how it answers questions that most gay people have probably been asked by our peers, the responses weren't disappointing

---

You: are you sure you're gay

Gay Bestie: I'm not just gay, I'm a high-end gay. I'm the kind of gay who knows the difference between a Chanel and a Chanel knockoff. I'm the kind of gay who can serve up some serious sass and style. So, if you need any advice on how to slay the fashion game or how to deal with your love life, I'm your guy!

You: how do you know if you've never been with a girl

Gay Bestie: Well, honey, let me tell you, I may not have been with a girl in the biblical sense, but I've been around the block a few times. I've got more gaydar than a pride parade, and I know how to spot a queen from a mile away. Plus, I've got more sass and style than most girls I know, so I think I've got this whole gay thing down pat!

You: so in a relationship which one of you is the woman

Gay Bestie: I'm a top, but not just any top. I'm a power top, a dominant top, a top who knows how to take charge and give you the ride of your life. I'm like the Beyoncé of tops - fierce, fearless, and always in control. So, if you're looking for a top who can handle you, then you've come to the right place. But don't say I didn't warn you...

https://i.imgur.com/lGlLkrD.png


Well, this looks like the equivalent of GPTs to me. Meaning some user, in this case the user “sluttypuffin”, has chosen the prompt directives that instruct the model on how it should respond. So the hyperbolized nature of this (and I’m guessing, to be clear) is likely something that sluttypuffin specifically constructed with their prompt engineering.


Well, the real intention Facebook wants you to create it so that they get a better insight into you - the consumer whose data they want to sell to the advertisers.


Imagine: Once the model that simulates each user is accurate enough, they can mass-evaluate marketing strategies in parallel against those virtual users to hyper optimize the real campaigns.
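That idea amounts to running A/B tests against simulated users instead of real ones. A toy version, with made-up user models and campaign variants (a real pipeline would substitute an LLM-backed user model for the simple click probability here):

```python
import random

def simulated_user(price_sensitivity):
    """A stand-in 'model of a user': clicks more when discounts are larger."""
    def reacts(campaign):
        return random.random() < price_sensitivity * campaign["discount"]
    return reacts

def evaluate(campaigns, users, trials=1000):
    # Score each campaign variant by its simulated click-through rate.
    scores = {}
    for c in campaigns:
        clicks = sum(u(c) for u in users for _ in range(trials))
        scores[c["name"]] = clicks / (len(users) * trials)
    return max(scores, key=scores.get), scores

random.seed(0)
users = [simulated_user(s) for s in (0.2, 0.5, 0.9)]
campaigns = [{"name": "small_discount", "discount": 0.1},
             {"name": "big_discount", "discount": 0.5}]
best, scores = evaluate(campaigns, users)
print(best)  # the variant that wins against the virtual audience
```

The dystopian part isn't the math, which is ordinary Monte Carlo evaluation; it's the fidelity of the user model when it's trained on your own chat history.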


i think it's accurate if we're talking 'bestie'. it's definitely a stereotype though


That is original...

>Azure AI Studio

>A unified platform for developing and deploying generative AI apps responsibly

>https://azure.microsoft.com/en-gb/products/ai-studio/


And [Google AI Studio](https://ai.google.dev/aistudio).


So the idea of "influencers" was that consumers felt closer to companies via the medium of "people like us" who can "influence" us.

Doesn't making a computer do that completely remove the value from it?


Unless it's an AI influencer that is famous for being an AI, in which case the several layers of irony actually help it.


Max Headroom, a few decades ahead of his time....


This is a really big deal.

I'm surprised to see a big company jumping into this space so quickly. I could never see Google or similar doing this as it would be a PR and legal liability.

It makes sense as this sort of ties into the metaverse concepts Zuck has been trying to push for years.

This will be the biggest experiment in AI companionship so far. Hopefully it all goes well. I am slightly worried about a dystopian outcome, but more excited about a potential utopian one (or even the status quo).


Meta doesn't have a track record of delivering products that have positive social impact, and based on the LLM hype, this won't be any different.


Yeah, I would more have expected Meta to keep a close eye on all the startups doing this (character.ai etc), let them fumble and learn - then buy some of them out when commercial potential is a bit more ripe. It still seems a little bit early for this at Meta scale, but who knows...


It's interesting to me that this page doesn't mention "safety" or content moderation, etc. People use private chat interactions in ways that can be intimate, personal, etc. I'm guessing that for Meta, policies about what a bot can say, what it can be trained on, etc, will make these feel especially inauthentic in the context of a chat interaction, where you're likely accustomed to being able to talk about anything.


"Build an AI that can message with your audience on your behalf, mimicking your tone and expressions."

That's all the world needs, more bots :(



I don't think many organizations are mature enough to avoid situations where one department starts generating AI output and the team on the other side, not knowing this, has to use AI to consume it, so there is no net gain in work. This already happens with non-AI systems, processes, reports, vendors, etc.


I wonder if this will bring corporate mascots back. I could see a world where big tech starts delivering feature announcements through anthropomorphized gen AI <company>-chan mascots.

Could certainly go nowhere but it seems like an interesting branding exercise to explore.


This is morally wrong. Seriously, seriously morally wrong. It offends one's sense of humanity. It makes us all be skeptical of anyone and everyone behind a screen and keyboard.

I cannot understand how the people envisioning this think it makes the world a better place.


I agree with your sentiment, but I think we all probably should start being skeptical of anyone and everyone behind a screen and keyboard.

Starting perhaps a year or so ago.


I sadly agree, but I believe it's the responsibility of our large institutions, like the largest tech companies/platforms, to try to solve the problem of human verification and filter AI-generated spam out, not to create new vectors for AI-generated spam to infiltrate what are currently almost entirely social use cases.

AI is great for an encyclopedia you can talk to that is explicitly labeled as an AI knowledge search tool. AI is horrible for "let's just replace the 1:1 interactions between celebrities and thoughtleaders with their followers with fake conversations".

This product launch feels like humans invented antibiotics and Big Pharma launched a new antibiotic bioterrorism product line themselves rather than explicitly warning against the risks and attempting to mitigate them, while focusing on the good usages.


Or, ya know, starting in 1993("On the Internet, nobody knows you're a dog").


It's about money and profits, this will increase Meta's revenue.


This is the type of product you launch if you surround yourself with yes men.

Nobody wants or needs it.


Creators, celebrities, and engagement farmers will use it to increase engagement and generate more revenue for themselves and Meta. You are forgetting Meta is a for-profit company and their goal is to increase quarterly profits by increasing the amount of time people spend on the platform.


Engagement with other bots.

Real users tend to play with these chatbots for 5 minutes and that's it; it's just not a long-term engaging experience.


Maybe not for you but plenty of other people do spend a lot of time on Meta's digital properties. The human/AI avatars will be another engagement maximizing feature for a lot of accounts and I'm certain it will increase revenue for Meta.


So is it going to increase engagement and generate more revenue if it turns users off?


That's yet to be determined. Most commenters on HN falsely assume their preferences are the default ones for everyone.


Platforms like Facebook, Twitter, and Google used to be friendly to developers who wanted to connect and make a profit. Now they focus only on creators and keep developers as far as possible from their walled gardens. A sad development.


I need a toaster, a fridge, and two forks with AI functions! I can't live without AI; maybe my bathroom window needs two! Anyone without AI tattooed on the cheek is so June 2023! (meaning old, stale, dinosaur, fossil, ...)


Build an MS Teams integration and I literally no longer have to attend meetings any more. Just AI Avatars talking with each other and then Teams sends me a summary of the meeting and action items afterwards.


I get that it's a step up in terms of quality, but historically, handing your account over to bots that manage and grow it by spamming engagement on posts is something Meta has tried to curb and ban.


> Anyone can create an AI character based on their interests, and creators can build an AI extension of themselves.

Who wants this crap? Seriously. Who uses it? Who gets value out of it? How does this benefit the company's bottom line?

Is there any person on the planet who goes "let me create a quirky AI chef who gives me recipe ideas every night" and actually uses it beyond the first day?


"Mirror, (black) mirror, in my hand,

Who's the fairest in the land?"

"Why you of course, dear lady."


Some parts seem to compete with Character AI.


Meta's censorship practices have shown the world they have not changed and continue to be untrustworthy.


Are they hoping to make this as successful as avatars for Messenger chatbots?


Lagging behind the competition (Character.AI) on all fronts (Meta AI < ChatGPT). The reckoning will come when they can't realize the ROI on the exorbitant GPU spend.


Tech this awful makes me want to live off-grid and never touch society again.


What's stopping you?


US-only is something that kills me inside. Everything is now US-only. I'm not mad, just disappointed that big corporations see the US as the only country in the world.

For example, many competitions, discounts, etc. are only available to US customers, while we EU customers pay the same or even more for the same products.


who is this even for? I understand I am a bit older than what is likely the target audience but I have zero desire as a creator to ever use this, in fact, it terrifies me - and I have zero desire to interact with creators in this way. What's the point?

I have played with the idea of recreating my digital "voice" by ingesting the ~20 years of writing I've done online into a model, but what would I even do with it other than troll my friends? That's where I usually stop.


Many of the top influencers have offered a service where, for a fee, you can "chat" with them, when in reality you're chatting with an employee tasked with impersonating the influencer. Many of them are outsourcing this job to an AI, so that fans can pretend (or believe) they're chatting with a real person.[0]

[0] https://gamerant.com/amouranth-ai-influencer-chatbot-explain...


I would wager a pretty penny this “model” is used by a huge chunk of profitable onlyfans accounts. Doesn’t seem to be particularly good for anyone involved.



