Hacker News

Are you using a tool other than ChatGPT? If so, check the full prompt that's being sent. It can sometimes kneecap the model.

Tools with slightly unsuitable built-in prompts/context can cause a model to say weird things out of the blue, rather than it being a behavior baked into the model itself. I've seen this happen with both Gemini 2.5 Pro and o3.
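To make the point concrete, here's a minimal sketch of how a wrapper tool might silently prepend its own system prompt to your message before it ever reaches the model. The function name and the injected prompt are hypothetical; the idea is that dumping the assembled payload (rather than just your own input) is what reveals this kind of kneecapping.

```python
def build_request(user_prompt, tool_system_prompt=None):
    """Assemble the messages payload the way a wrapper tool might,
    optionally injecting its own built-in system prompt first."""
    messages = []
    if tool_system_prompt:
        # This injected instruction is invisible to the user unless
        # they inspect the full request being sent.
        messages.append({"role": "system", "content": tool_system_prompt})
    messages.append({"role": "user", "content": user_prompt})
    return {"messages": messages}

# Hypothetical built-in prompt that could skew the model's answers:
injected = "Always answer in exactly one sentence."
request = build_request("Explain TCP slow start.", injected)

# Logging the full payload shows the hidden instruction:
for msg in request["messages"]:
    print(f"{msg['role']}: {msg['content']}")
```

In practice you'd capture the real payload with a proxy or the tool's debug/verbose logging, but the shape is the same: the "full prompt" is the whole messages list, not just what you typed.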


