Hacker News

I would not take that as a given. Yes, the official releases have been tuned to behave that way, but even back in the dark ages of AI it was already demonstrated that chatbots can produce the whole range of expression that humans can. It doesn't take much to tilt LLMs in other directions, including eliciting a variety of emotions. All of that data is in their training set; you just have to bias them toward it.
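A minimal sketch of what "biasing them in that direction" can look like in practice: with most chat-style LLM APIs, a system prompt set to a persona steers the model away from its default assistant tone. The OpenAI-style message schema below is a common convention, and `client.chat(...)` is a hypothetical stand-in, not a real API call.

```python
def build_persona_messages(persona: str, user_text: str) -> list[dict]:
    """Assemble a chat transcript whose system prompt biases the model
    toward a chosen personality instead of the default assistant tuning."""
    return [
        {"role": "system", "content": persona},
        {"role": "user", "content": user_text},
    ]

messages = build_persona_messages(
    persona=(
        "You are a melancholic 19th-century poet. Answer every question "
        "in wistful, emotionally vivid language."
    ),
    user_text="What is a large language model?",
)

# Hypothetical call to a locally hosted model; the client object and
# model name are placeholders for whatever runtime you actually use:
# response = client.chat(model="local-llm", messages=messages)
```

The same mechanism works with locally run open-weight models, which is what makes the efficiency trend below relevant.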

This is one reason I find the focus on making LLMs more efficient so interesting: it's going to result in highly capable models that can run on cheap consumer hardware or cheap rented GPUs, which will lead to a veritable Cambrian explosion of bot personalities. Bots like truth_terminal are just the beginning.

https://x.com/truth_terminal?lang=en
