
Why aren't the open source models like this then? Seems like it would've already happened.

To me, at least, the guardrails are there for both the human and the bot. Without them, the bot strays too far from the subject of the conversation.

In most cases, "confuse humans in a Turing test" runs counter to other, more important goals.

Do you want your LLM to have encyclopedic knowledge, so that it knows who Millard Fillmore is even if the average human doesn't?

Do you want your LLM to be able to perform high-school-level math with superhuman speed and precision?

Do you want your LLM to be able to translate text to and from dozens of languages?

Do you want your LLM to be helpful and compliant, even when asked for something ridiculous or needlessly difficult, like solving "Advent of Code" problems using bash scripting?

If you answered yes to any of these questions, you probably don't want your LLM optimised to behave like an average human.



