
It’s important to caveat here that the language model was explicitly prompted into the suggestions; it didn’t spontaneously offer dangerous recipes.

I’d note also that it’s getting harder to do this with current ChatGPT (the article uses GPT-3.5), and I suspect alignment research over the next five years will make it pretty hard to trick models into these sorts of things.




That, plus:

> the article uses GPT-3.5

Which alone disqualifies it from opining on what LLMs can or can't be used for.



