Because it discourages learning based on the mediocrity of AI. I find that idea perpetuates mediocrity (not just of the AI itself, but of whatever it's used for).
It's like saying: "I don't want to learn how to write a good story, because AI always suggests a bad one anyway." Maybe that conveys the idea better.
It's not at all clear to me what this has to do with the practical delivery of software. In languages that LLMs handle well, with a careful user (ie, not a vibe coder; someone reading every line of output and subjecting most of it to multiple cycles of prompting) the code you end up with is basically indistinguishable from the replacement-level code of an expert in the language. It won't hit that human expert's peaks, but it won't generally sink below their median. That's a huge accelerator for actually delivering projects, because, for most projects, most of the code need only be replacement-grade.
Why would I valorize discarding this kind of automation? Is this just a craft vs. production thing? Like, the same reason I'd use only hand tools when doing joinery in Japanese-style woodworking? There's a place for that! But most woodworkers... use table saws and routers.
It's not about the delivery of software; it's about the avoidance of learning based on the mediocrity of AI. I.e., the original post literally cites LLMs being poor at Rust suggestions as a reason to avoid the language.
That implies that proponents of this approach don't want to pursue any learning that would require them to exceed the mediocrity level set by the AI they rely on.
For me it's obvious that it has a major negative impact on many things.
Your premise here being that any software not written in Rust must be mediocre? Wouldn't it be more productive to just figure out how to evolve LLM tooling to work well with Rust? Most people do not write Rust, so this is not a very compelling argument.
Rust is just an example here, not essential to the point. If someone evolves LLMs to work better with Rust, they'll still be mediocre at something else, and using that as an excuse to avoid learning is problematic in itself; that's what I'm saying.
Basically, decide whether to learn Rust based on whether it helps you solve your problems better, not on whether some LLM is useless or not in that particular case.
> Why would I valorize discarding this kind of automation? Is this just a craft vs. production thing?
The strongest reason I can think of to discard this kind of automation, and do so proudly, is that it's effectively plagiarizing from all of the experts whose code was used in the training data set without their permission.
No plausible advance in nanotechnology could produce a violin small enough to capture how bad I feel about our profession being "plagiarized", after decades of rationalizing movie piracy by appealing to the cultural importance of Star Wars.
Artists can come at me with this concern all they want, and I feel bad for them. No software developer can.
I disagree with you about the "plagiaristic" aspect of LLM code generation. But I also don't think our field has a moral leg to stand on here, even if I didn't disagree with you.
I'm not making an argument from grievance about my own code being plagiarized. I actually don't care if my own code is used without even the attribution required by the permissive licenses it's released under; I just want it to be used. I do also write proprietary code, but that's not in the training datasets, as far as I know. But the training datasets do include code under a variety of open-source licenses, both permissive and copyleft, and some of those developers do care how their code is used. We should respect that.
As for our tendency to disrespect the copyrights of art, clearly we've always been in the wrong about this, and we should respect the rights of artists. The fact that we've been in the wrong about this doesn't mean we should redouble the offense by also plagiarizing from other programmers.
And there is evidence that LLMs do plagiarize when generating code. I'll just list the most relevant citations from Baldur Bjarnason's book _The Intelligence Illusion_ (https://illusion.baldurbjarnason.com/), without quoting from that copyrighted work.
I don't mean to attribute to you the overwhelmingly common sentiment about intellectual property claims for things other than code, and I'm sorry that I communicated as much (you didn't call me on it, but you'd have had every right to).
I stand by that argument, but acknowledge it isn't relevant here.
My bigger thing is just that, having written many thousands of lines of backend code with an LLM (just yesterday), none of what I'm looking at can meaningfully be described as "plagiarized". It's specific to my problem domain (indeed, to my extremely custom stack), and what isn't domain-specific is just extremely generic stuff (opening a boltdb, printing a table with lipgloss), assembled precisely.
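For a sense of scale, the "extremely generic stuff" is on the order of the following sketch (my own illustration, not code from that project; the file name, bucket name, and table contents are invented):

```go
package main

import (
	"fmt"
	"log"

	"github.com/charmbracelet/lipgloss"
	"github.com/charmbracelet/lipgloss/table"
	bolt "go.etcd.io/bbolt"
)

func main() {
	// Open (or create) a boltdb file -- boilerplate any Go
	// developer would write essentially the same way.
	db, err := bolt.Open("app.db", 0o600, nil) // "app.db" is made up
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	// Ensure a bucket exists (the name "widgets" is invented here).
	if err := db.Update(func(tx *bolt.Tx) error {
		_, err := tx.CreateBucketIfNotExists([]byte("widgets"))
		return err
	}); err != nil {
		log.Fatal(err)
	}

	// Print a table with lipgloss -- equally generic.
	t := table.New().
		Border(lipgloss.NormalBorder()).
		Headers("KEY", "VALUE").
		Row("widgets", "0")
	fmt.Println(t)
}
```

There's only one idiomatic way to write any of that; the interesting part of the program is what gets assembled around it.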