Hacker News

The chain of thought is incredibly useful; I almost don't care about the answer now. I just follow whatever I find interesting in the way it broke the problem down. I tend to get tunnel vision when working on something for a long time, so it's a great way to revise my work and make sure I'm not misunderstanding anything.


Yesterday, I had it think for 194 seconds. At some point near the end, it said "This is getting frustrating!"


I must not be hunting the right keywords, but I was trying to figure this out earlier. How do you set how much time it "thinks"? If you let it run too long, does the context window fill up so it's unable to do any more?


It looks like their API is OpenAI-compatible, but their docs say that they don't support the `reasoning_effort` parameter yet.

> max_tokens: The maximum length of the final response after the CoT output is completed, defaulting to 4K, with a maximum of 8K. Note that the CoT output can reach up to 32K tokens, and the parameter to control the CoT length (reasoning_effort) will be available soon. [1]

[1] https://api-docs.deepseek.com/guides/reasoning_model
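To make the parameter behavior concrete, here's a minimal sketch of a request body for that OpenAI-compatible endpoint, following the docs quoted above. The model name `deepseek-reasoner` comes from the linked docs; the prompt and token values are just illustrative assumptions.

```python
import json

# Sketch of a chat-completions request for DeepSeek's reasoning model.
# Per the quoted docs: max_tokens bounds only the FINAL answer (default 4K,
# max 8K), while the chain-of-thought output can separately reach 32K tokens.
payload = {
    "model": "deepseek-reasoner",  # model name per the linked docs
    "messages": [
        {"role": "user", "content": "Walk me through this proof step by step."}
    ],
    "max_tokens": 8000,  # caps the final response, not the CoT
    # "reasoning_effort": "high",  # documented as not supported yet
}

print(json.dumps(payload, indent=2))
```

So until `reasoning_effort` ships, there's no knob for how long it "thinks": you can only cap the final answer, and the CoT budget is fixed by the service.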



