I've been thinking about this - you're right that LLMs are not going to be deterministic (AIUI) when it comes to producing code to solve a problem.
BUT neither are humans: if you give two different humans the same task, then, unless they copy one another, you will get two different results.
Further, as those humans evolve through their career, the code that they produce will also change.
Now, I do want to point out that I'm very much still at the "LLMs are an aid, not the full answer... yet" point, but a lot of the argument against them seems to be (rapidly) reaching the point where it's no longer valid (AI slop and all).
You keep making these claims as though you are some sort of authority, but nothing you have said has matched reality.
I mean, full credit to you for your disingenuous goalpost shifting and appeals to authority, but reality has no time for you (and neither do I anymore).