> I can't evaluate the correctness of a choice without either knowing the motivation for it, or by learning the problem domain well enough to identify and make the choice myself - at which point the convenience of the AI solution is abnegated because I may as well have written it myself.
I think I usually accept code that falls into the latter case - the convenience is that I don't need to spend any real energy implementing the solution or thinking too deeply about it. Sometimes the LLM produces a more interesting approach, one I hadn't considered initially but that is actually nicer than what I wanted to do (afaik). Often it does what I want, or something close enough to what I would've written - it just does it instantly, instead of me manually typing, searching docs, adding types, and correcting the code. If it does something weird that I don't agree with, I modify the prompt to align more closely with the solution I had in mind. Much like Google, sometimes the first query doesn't do the trick and a reformulation is required.
I wouldn't trust an LLM to write large chunks of code that I couldn't have written or figured out myself - for me it's a coding accelerant, not an autonomous engineer (maybe that's where our PoVs diverged initially).
I suspect the similarity with PRs is that when I'm assigned a PR, I generally know enough about the proposed modification to have an opinion on how it should be done and on the benefits/drawbacks of each implementation. The divergence from a PR is that I can ask the LLM to change its approach in just a few seconds, and keep asking for changes until I'm satisfied (so it doesn't matter if the LLM chose an approach I don't understand - I can just ask it to align with the approach I believe is optimal).