
> which is it generates an outline, it kind of revises that outline, it generates a detailed version of the script and then it has a kind of critique phase and then it modifies it based on the critique

I’m seeing this hold true in almost every application.

Chain of thought is not the best way to improve LLM outputs.

Manual divide and conquer, with an explicit outlining or planning step, works better. Then address each plan step in turn, each in a separate response.

I have yet to experiment with revision or critique steps. What kinds of prompts have people tried for those parts?
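For what it's worth, the multi-pass flow quoted above (outline, revise the outline, expand each step separately, critique, revise) can be sketched roughly like this. This is only an illustration, assuming a hypothetical `llm(prompt) -> str` callable; swap in whatever client you actually use:

```python
# Sketch of a plan-then-execute-then-critique pipeline.
# `llm` is a hypothetical callable (prompt in, completion out);
# any real API client or local model wrapper would fit.

def drafting_pipeline(llm, task: str) -> str:
    """Run each stage as a separate call rather than one long prompt."""
    outline = llm(f"Write a brief outline for: {task}")
    revised_outline = llm(
        f"Revise this outline, fixing gaps or ordering problems:\n{outline}"
    )
    # Divide and conquer: expand each outline step in its own response.
    sections = [
        llm(f"Expand this outline step into full prose:\n{step}")
        for step in revised_outline.splitlines()
        if step.strip()
    ]
    draft = "\n\n".join(sections)
    critique = llm(f"Critique this draft; list concrete problems:\n{draft}")
    return llm(
        "Revise the draft to address the critique.\n"
        f"Draft:\n{draft}\n\nCritique:\n{critique}"
    )
```

The point of the separate calls is that each stage gets the model's full attention and a fresh context, instead of asking one prompt to plan, write, and self-correct all at once.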


