
Why does it matter where the code came from if it is correct?


I really hope you're not a software engineer saying this. But here's a lightning round of issues:

1. code can be correct but non-performant, be it in time or space. A lot of my domain is fixing "correct" code so it's actually of value.

2. code can be correct, but unmaintainable. If you ever need to update that code, you are adding immense tech debt with code you do not understand.

3. code can be correct, but not fit standards. Non-standard code can be anywhere from harder to read, to subtly buggy with some gnarly effects farther down the line.

4. code can be correct, but insecure. I really hope cryptographers and netsec aren't using AI for anything more than generating keys.

5. code can be correct, but not correct in the larger scheme of the legacy code.

6. code can be correct, but legally vulnerable. A rare but expensive edge case that may come up as courts catch up to LLMs.

7. and lastly (though the list certainly doesn't end here), code can be correct, but people can be incorrect: they change their whims and requirements, or otherwise add layers to navigate while making the product. This leads back to #2, but it's important to remember that as engineers we are working with imperfect actors under non-optimal conditions. Our job isn't just to "make correct code"; it's to navigate the business and keep everyone aligned on the mission from a technical perspective.
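Point 1 is easy to make concrete. A minimal sketch (the function names are hypothetical, purely for illustration): both versions below are "correct" in the input/output sense, but only one is acceptable once the input gets large.

```python
def dedupe_quadratic(items):
    """Correct, but O(n^2): 'in' on a list rescans the list for every element."""
    seen = []
    for item in items:
        if item not in seen:  # O(n) scan per element
            seen.append(item)
    return seen


def dedupe_linear(items):
    """Same observable behavior, but O(n): a set gives O(1) membership checks."""
    seen = set()
    out = []
    for item in items:
        if item not in seen:
            seen.add(item)
            out.append(item)
    return out


# Identical results on any input...
assert dedupe_quadratic([3, 1, 3, 2, 1]) == dedupe_linear([3, 1, 3, 2, 1]) == [3, 1, 2]
# ...but on a million-element list, only the second is usable in practice.
```

Both pass the same tests; a reviewer who only checks outputs would wave either through.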


This is a straw man. Consider that from my perspective, each of your points amounts to saying "code can be correct, but incorrect" (take a look at number 5) and you may realize that your argument does not make any sense:

1. If code is "correct" but non-performant when it needs to be performant, then it's not correct.

2. If code is "correct" but unmaintainable when it needs to be maintainable, then it's not correct.

3. If code is "correct" but does not fit standards when it needs to fit standards, then it's not correct.

4. If code is "correct" but not secure when it needs to be secure, then it's not correct.

5. If code is "correct" but not correct when it needs to be correct, then it's not correct.

6. If code is "correct" but legally risky when it needs to be legally not risky, then it's not correct.

7. If code is "correct" but people think it's incorrect when they need to think it's correct, then it's not correct.

The person who submits the code for code review is effectively asserting that the code meets the quality standards of the project to which they are submitting the code. If it doesn't meet those standards, then it's not correct.


I agree with the overall point you're making in this comment…except that I don't think this is what "correct" means in the way that both I and the other person who replied thought you meant.

We took you to mean correct as in: given the right inputs, you get the expected outputs. And in that case, our objections do apply. In addition, if correct does mean overall fit-to-purpose the way you are suggesting here, then by gosh my point stands and no code generated by AI is correct! (Because of a variety of factors outside of simply "does the output of this code indicate that it seems to be working")


> if correct does mean overall fit-to-purpose the way you are suggesting here, then by gosh my point stands and no code generated by AI is correct

This is patently false per my experience generating code with LLMs. It was not a lot; it changed one line to update a global variable to a new value per my request. It was exactly the “correct” change per the stated instructions. (Okay, not exactly, because it added an extra newline that wasn’t there before and which I didn’t want.)

It is certainly a fallacy to say that “no code generated by an AI is correct”. Unless you are making a point about the semantics of what is making the code “correct” (as in, is it the human reviewer or AI generator?), my point is that, in theory, the human reviews the code and submits changes for further review. The code was still generated by an AI and it can still be precisely “correct” for a given intended change.

It is understandable that you misunderstood my meaning because I was rather unclear about it (though “correct” is still the closest word I can think of to mean what I mean). However, it’s a bit wild that you say you do understand that meaning before turning around to say that it actually supports your point with a vague claim of a “variety of factors”. I actually get the feeling, based on this response, that your argument is effectively refuted by the point I raised. I’m willing to keep an open mind if you’d like to show me that I’m wrong; maybe I’m just missing something.


Why does it matter where the paint came from if it looks pretty?

Why does it matter where the legal claims came from if a judge accepts them?

Why does it matter where the sound waves came from if it sounds catchy?

Why does it matter?

Why does anything matter?

Sorry, I normally love debating epistemology but not here on Hacker News. :)


I understand the points about aesthetics but not law; the judge is there to interpret legal arguments and a lawyer who presents an argument with false premises, like a fabricated case, is being irresponsible. It is very similar with coding, except the judge is a PM.

It does not seem to matter where the code or the legal argument came from. What matters is that they are coherent.


>It does not seem to matter where the code or the legal argument came from.

You haven't read enough incoherent laws, I see.

https://www.sevenslegal.com/criminal-attorney/strange-state-...

I'm sure you can make a coherent argument for "It is illegal to cry on the witness stand", but not a reasonable one for actual humans. You're in a formal setting being asked to recall potentially traumatic incidents. No decent person is going to punish an emotional reaction to such questions. Then there are laws simply made to serve corporate interests (the "zoot suit" law mentioned in that article, for instance; jaywalking is another famous one).

There's a reason an AI Judge is practically a tired trope in the cyberpunk genre. We don't want robots controlling human behavior.


An AI judge is not what I'm talking about, and I think that would be a terrible idea. The only thing I'm expecting an AI lawyer to do is generate text that may or may not read as a coherent legal argument. It is the human lawyer's responsibility to present the argument to the court, and it does not matter whether the argument came from their head or from a computer; they are responsible for it, just as a programmer is responsible for the code they include in a pull request.



