The usual workflow I see from skeptics is to throw a single vague sentence at the LLM and expect it to figure out the correct end result, then keep pasting in small chunks of code, expanding the context with poorly specified instructions.

LLMs are tools that need to be learned. Good prompts aren’t hard, but they do take some effort to build.
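To make that concrete, here's a rough sketch of what I mean by putting effort into the prompt (the `ask_llm` helper and the prompt wording are just illustrative, not any particular API):

  # Vague prompt, the kind that produces disappointing results:
  # ask_llm("fix my parser")

  def build_prompt(code: str, error: str) -> str:
      # Spell out the role, the context, the task, and the expected output
      # up front instead of drip-feeding code with no instructions.
      return (
          "You are reviewing a Python CSV parser.\n\n"
          f"Code:\n{code}\n\n"
          f"Observed error:\n{error}\n\n"
          "Task: explain the cause of the error and propose a minimal fix.\n"
          "Output: the corrected function only, followed by a one-line summary."
      )

The second version takes a minute longer to write, but it gives the model the same things you'd give a new colleague: context, the actual problem, and what a good answer looks like.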
