
> just run the code given by the copilot and check that it works

You've misunderstood my point. I'm not discussing the ability to check whether the code works as _I_ believe it should (as you say, that's easy to verify directly, by execution and/or testing); I'm referring to asking about intention or motivation of design choices by an author. Why this data structure rather than that one? Is this unusual or unidiomatic construction necessary in order to work around a quirk of the problem domain, or simply because the author had a brainfart or didn't know about the usual style? Are we introducing a queue here to allow for easy retries, or to decouple scaling of producers and consumers, or...? I can't evaluate the correctness of a choice without either knowing the motivation for it, or by learning the problem domain well enough to identify and make the choice myself - at which point the convenience of the AI solution is abnegated because I may as well have written it myself.

(ref: "Code only says what it does" - https://brooker.co.za/blog/2020/06/23/code.html)

And, yes, you can ask an LLM to clarify or explain its choices, but, like I said, the core problem is that they will confidently and convincingly lie to you. I'm not claiming that humans never lie - but a) I think (I hope!) they do it less often than LLMs do, and b) I believe (subjectively) that it tends to be easier to identify when a human is unsure of themself than when an LLM is.



> I can't evaluate the correctness of a choice without either knowing the motivation for it, or by learning the problem domain well enough to identify and make the choice myself - at which point the convenience of the AI solution is abnegated because I may as well have written it myself.

I think I usually accept code that falls into the latter case - the convenience is that I don't need to spend any real energy implementing the solution or thinking too deeply about it. Sometimes the LLM produces a more interesting approach that I hadn't initially considered but is actually nicer than what I wanted to do (afaik). Often it does what I want, or something similar enough to what I would've written - it just does it instantly, instead of me manually typing, searching docs, adding types, and correcting the code. If it does something weird that I don't agree with, I modify the prompt to align more closely with the solution I had in mind. Much like Google, sometimes the first query doesn't do the trick and a reformulation is required.

I wouldn't trust an LLM to write large chunks of code that I wouldn't have been able to write/figure out myself - it's more of a coding accelerant than an autonomous engineer for me (maybe that's where our PoVs diverged initially).

I suspect the similarity with PRs is that when I'm assigned a PR, I generally have enough knowledge about the proposed modification to have an opinion on how it should be done and on the benefits/drawbacks of each implementation. The divergence from a PR is that I can ask the LLM to change its approach in just a few seconds, and keep asking for changes until I'm satisfied (so it doesn't matter if the LLM chose an approach I don't understand - I can just ask it to align with the approach I believe is optimal).



