Hacker News

I just tried it and sure enough, 3 Bs. But switch the model to "ChatGPT 5 Thinking" and it gets the answer right.

Is that where we're going with this? The user has to choose between fast and dumb or slow and right?



Fast: when wrong is good enough.


Acceptable in the business world.


If you look at the "reasoning" trace of gpt-oss when it handles this issue, it repeats the word with spaces inserted between every letter. If you have an example that you can get the dumber model to fail on, try adjusting your prompt to include the same thing (the word spelled out with spaces between each letter).

This isn't a solution or a workaround or anything like that; I'm just curious if that is enough for the dumber model to start getting it right.
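To make the experiment above easy to repeat, here's a minimal sketch of building such a prompt. The word, letter, and phrasing are hypothetical stand-ins; substitute whatever example the dumber model fails on.

```python
def spaced(word: str) -> str:
    # Mirror what the reasoning trace does: the word with a space
    # inserted between every letter.
    return " ".join(word)

def letter_count(word: str, letter: str) -> int:
    # Deterministic ground truth to compare the model's answer against.
    return word.lower().count(letter.lower())

# Hypothetical example; swap in your own failing word/letter.
word, letter = "blueberry", "b"
prompt = (
    f'How many times does the letter "{letter}" appear in "{word}"?\n'
    f"Spelled out: {spaced(word)}"
)
print(prompt)
print("ground truth:", letter_count(word, letter))
```

If the spaced-out spelling alone flips the answer, that would suggest the failure is mostly a tokenization artifact rather than a reasoning gap.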


Isn't that usually the choice for most things?




