
A few days ago the QwQ-32B model was released; it uses the same kind of reasoning style. So I took one sample and reverse engineered the prompt with Sonnet 3.5. Now I can just paste this prompt into any LLM. It's all about expressing doubt, double-checking, and backtracking on itself. I'm kind of fond of this response style; it seems more genuine and open-ended.

https://pastebin.com/raw/5AVRZsJg
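
(For anyone who wants to try it programmatically: a minimal sketch, assuming an OpenAI-compatible chat API via the official Python SDK, and that the pastebin contents above were saved locally as reasoning_prompt.txt. The model name and example question are just placeholders.)

    # Send the reverse-engineered reasoning prompt as the system message.
    # Assumes the pastebin text was saved to reasoning_prompt.txt;
    # model name and question are placeholders.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    with open("reasoning_prompt.txt") as f:
        system_prompt = f.read()

    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": "How many r's are in 'strawberry'?"},
        ],
    )
    print(response.choices[0].message.content)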



An aside ...

Isn't it wonderful that, after all of these years, the pastebin "primitive" is still available and usable ...

One could have needed pastebin, used it, then spent a decade not needing it, then returned for an identical repeat use.

The longevity alone is of tremendous value.


Interestingly, this prompt breaks o1-mini and o1-preview for me (they jump straight from "thinking" to "finished thinking" without outputting anything, including the thinking steps), while 4o works as expected.

Maybe it breaks some specific syntax required by the original system prompt? Though you'd think OpenAI would know how to prevent that, given their function-calling API and all, so it might just be tripping some anti-abuse mechanism that stops short of showing a warning.


I tried this with LeChat (Mistral) and ChatGPT 3.5 (free), and they start responding to "something" in that style, but... without any question having been asked.


And then, once the answer is found, is an additional prompt given to tidy up and present the solution clearly?
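
(Purely as an illustration of that two-pass idea, not anything the model does on its own: a sketch assuming the same OpenAI-compatible Python client and a locally saved reasoning_prompt.txt; the follow-up wording and model name are made up.)

    # Hypothetical two-pass flow: first get the reasoning-style answer,
    # then ask the model to restate only the final solution.
    from openai import OpenAI

    client = OpenAI()
    with open("reasoning_prompt.txt") as f:
        system_prompt = f.read()

    messages = [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": "What is 17 * 24?"},
    ]
    first = client.chat.completions.create(model="gpt-4o", messages=messages)
    messages += [
        {"role": "assistant", "content": first.choices[0].message.content},
        {"role": "user", "content": "Now tidy that up and state only the final answer, clearly."},
    ]
    second = client.chat.completions.create(model="gpt-4o", messages=messages)
    print(second.choices[0].message.content)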


A prompt is not a substitute for a model that is specifically fine-tuned to do CoT with backtracking etc.


Thank you for doing that work, and even more for sharing it. I will have to try this out.


Thanks, I love this



