It's not, but it does matter. LLMs, being next-word guessers, perform differently with different inputs. It's not hard to imagine a feedback loop where bad code generates worse code and good code generates more good code.
My ability to get good responses from LLMs has tracked with me writing better code, writing better docstrings, and using autoformatters.
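To make that concrete, here's a hypothetical before/after (the names are mine, not from any real project). In my experience the second version steers completions far better than the first:

    # Sparse context: the model has almost nothing to condition on.
    def proc(d, f):
        ...

    # Richer context: real names, type hints, and a docstring,
    # run through an autoformatter like black.
    def filter_records(records: list[dict], keep) -> list[dict]:
        """Return only the records for which keep(record) is True."""
        return [r for r in records if keep(r)]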
I don't think that feedback loop is really a loop, because code that doesn't actually do its job doesn't stay popular for long. We already have a great source of selection pressure to weed out shitty products that don't function: users and their $.
There is nothing about LLMs that biases them toward "better" code. LLMs are every bit as good at making low-effort reddit posts as they are at writing essays for Harper's Magazine. In fact, there are a lot more shit reddit posts (and horrible student-assignment GitHub repos) out there than there are Harper's Magazine articles.
The only thing standing between your LLM and bad code is the quality of the prompt (including the context and the hidden OEM prompt).
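For what it's worth, here's a rough sketch of what I mean by "the prompt" (none of these names are a real vendor API, just an illustration of the moving parts):

    # Everything the model actually sees, concatenated.
    HIDDEN_OEM_PROMPT = "You are a helpful coding assistant..."  # set by the vendor, invisible to you
    context = '''
    def filter_records(records, keep):
        """Return only the records for which keep(record) is True."""
        return [r for r in records if keep(r)]
    '''  # your code, docstrings, open files
    user_message = "Add input validation to filter_records."

    prompt = "\n\n".join([HIDDEN_OEM_PROMPT, context, user_message])
    # completion = model.generate(prompt)  # vendor-specific call, sketched only

Garbage in any one of those three pieces drags the whole completion down.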