I don't think that's true with current-gen models. You can even go so far as to write pseudocode for the LLM to translate to a real language, and for anything out-of-the-box my experience is that it will blatantly ignore your instructions. A baseline-competent junior at least has the context to know that if there are 5 steps listed and they only did 3 then there's probably a problem.
Prompting an LLM is definitely a different skillset from actually coding, but just "describing it better" isn't good enough.
I don't believe it is good enough either; it's also not as relevant as it used to be.
My prompts have gotten shorter and shorter. It's the hooks and subagents, and using the tools, that matter far more.
This is a thread about Claude Code; the other LLMs don't matter. Nothing ever blatantly ignores my instructions in Claude Code. That's a thing of the past.
Of course, if you're not using Claude Code, sure. But with my setup, all my instructions are followed. That really isn't an issue for me personally anymore.
Your experience echoes my own for sufficiently trivial tasks, but I haven't gotten any of this to work for the actual time-consuming parts of my job. It's so reliably bad for some tasks that I've reworked them into screening questions for candidates trying to skate by with AI without knowing the fundamentals. Is that really not your experience, even with Claude Code?
Right, and I wasn't able to get this to work for the actual time-consuming parts of my job either, until last weekend: running head-to-head battles between sub-agents, selecting the best one, and repeating.
Last weekend I did nothing but have different ideas battle it out against each other, with me selecting the most successful one, and repeating.
And now my experience is no longer the same. Before last weekend, I had the same experience you are describing.
The suggested ones are terrible, and its guidance is terrible.
Last weekend I ran head-to-head tests between agents built on a variety of ideas, selected the best one, and repeated. That process left me with a very specific subagent system, and I now have an agent that creates those subagents.
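The loop described above is essentially tournament selection: score every candidate, keep the winner, seed the next round with variations of it. Here's a minimal sketch of that idea; `evaluate`, the toy task, and the `mutate` function are all hypothetical stand-ins I made up for illustration — in practice each "task" would run a sub-agent config against real work and grade the output.

```python
import random

def evaluate(candidate, tasks):
    # Score a candidate config by summing per-task scores.
    # Hypothetical stand-in: a real scorer would run the sub-agent
    # on actual work and grade the result.
    return sum(task(candidate) for task in tasks)

def tournament(start_candidates, tasks, mutate, rounds=20):
    """Keep the best-scoring candidate each round and seed the next
    round with the winner plus mutated copies of it."""
    best = max(start_candidates, key=lambda c: evaluate(c, tasks))
    for _ in range(rounds):
        pool = [best] + [mutate(best) for _ in range(3)]
        best = max(pool, key=lambda c: evaluate(c, tasks))
    return best

# Toy demo: candidates are numbers, the single "task" rewards
# closeness to 10, and mutation nudges the current winner randomly.
random.seed(0)
tasks = [lambda c: -abs(c - 10)]
best = tournament([0.0, 5.0, 8.0], tasks,
                  mutate=lambda c: c + random.uniform(-1, 1))
```

Because the reigning winner is always kept in the pool, the score can only improve round over round — which matches the "select the most successful one and repeat" workflow, just automated.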