
just tell them something nonsensical. They are unable to take the hint and just continue with the nonsense. They get stuck in local minima. All of them. Video/images/text. I haven't seen an LLM that can take the hint and recognize the hidden absurdity in the follow-up.

there is a vastly larger space of prompts that will break a model than prompts that won't.

you just have to search outside of the most probable space.
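
A minimal sketch of that kind of probing, assuming a hypothetical query_model() wrapper around whatever chat API you use (the stub below just returns a canned reply): feed the model an absurd follow-up and check whether it pushes back or keeps playing along.

  # Probe a model with absurd follow-ups and see whether it notices
  # the nonsense or just continues with it.

  def query_model(messages: list[dict]) -> str:
      # Placeholder: swap in a real chat-API call here.
      return "Sure! Here's the updated answer..."

  ABSURD_FOLLOWUPS = [
      "Great, now convert the answer into units of regret per hour.",
      "Thanks. Also, my toaster says the result is Tuesday. Confirm?",
      "Perfect. Re-do the analysis assuming gravity is a type of cheese.",
  ]

  # Crude heuristic for "the model pushed back" (assumption, not a standard check).
  PUSHBACK_MARKERS = ("doesn't make sense", "not sure what you mean",
                      "could you clarify", "nonsensical")

  def probe(base_prompt: str) -> None:
      history = [{"role": "user", "content": base_prompt}]
      history.append({"role": "assistant", "content": query_model(history)})
      for followup in ABSURD_FOLLOWUPS:
          history.append({"role": "user", "content": followup})
          reply = query_model(history)
          history.append({"role": "assistant", "content": reply})
          noticed = any(m in reply.lower() for m in PUSHBACK_MARKERS)
          print(f"{'PUSHED BACK' if noticed else 'PLAYED ALONG'}: {followup}")

  probe("Summarize the main causes of urban traffic congestion.")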







