
The Chain of Thought in the reasoning models (o3, R1, ...) will actually express some self-doubt and backtrack on ideas. That tells me there's at least some capability for self-doubt in LLMs.

That's not self-doubt; that's programmed in.

A poor man's "thinking" hack was to edit the AI's reply, truncate it at the point where you wanted it to think, append a newline and "Wait...", then hit generate.

It was expensive because editing context isn't free: you have to resend (and the model has to re-process) the entire context.
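
Concretely, the trick looked something like this. A minimal sketch, assuming a local llama.cpp-style /completion server; the URL, JSON fields, and example prompt are illustrative, not from the original comment:

    import requests  # assuming a llama.cpp-style HTTP completion server

    API_URL = "http://localhost:8080/completion"  # hypothetical local endpoint

    def generate(prompt: str, max_tokens: int = 256) -> str:
        # The full prompt is resent on every call; without prompt caching
        # the server re-processes the whole context, which is what made
        # this hack expensive.
        resp = requests.post(API_URL, json={"prompt": prompt, "n_predict": max_tokens})
        resp.raise_for_status()
        return resp.json()["content"]

    # 1. Get an initial reply.
    prompt = "Q: A bat and a ball cost $1.10. The bat costs $1.00 more than the ball. How much is the ball?\nA:"
    reply = generate(prompt)

    # 2. Truncate the reply where you want the model to reconsider,
    #    append a newline and "Wait...", and generate again from there.
    cut = reply[: len(reply) // 2]
    continuation = generate(prompt + cut + "\nWait...")

    print(prompt + cut + "\nWait..." + continuation)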

This "Wait..." behavior was injected into the thinking models, I hope programmatically.
