The usual workflow I see from skeptics is to throw a random sentence at the LLM and expect it to figure out the desired result, then keep sending small chunks of code, expanding the context with vague instructions.
LLMs are tools that need to be learned. Good prompts aren’t hard, but they do take some effort to build.
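As a rough illustration of that effort, a prompt with explicit sections (role, task, context, expected output) tends to work better than a bare question. The helper below is purely hypothetical; the function name and section labels are my own, not any library's API:

```python
# Hypothetical sketch: a structured prompt instead of a "random sentence".
# build_prompt and its field labels are illustrative assumptions.

def build_prompt(role: str, task: str, context: str, output_format: str) -> str:
    """Assemble a prompt with explicit sections instead of a bare question."""
    return "\n\n".join([
        f"Role: {role}",
        f"Task: {task}",
        f"Context:\n{context}",
        f"Expected output format: {output_format}",
    ])

prompt = build_prompt(
    role="You are a senior Python reviewer.",
    task="Identify the bug in the function below and explain the fix.",
    context="def add(a, b):\n    return a - b",
    output_format="One sentence naming the bug, then the corrected code.",
)
print(prompt)
```

The point is not this particular template, but that spelling out the task and the expected output up front is most of the "effort" a good prompt requires.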