All of this fighting against LLMs is pissing in the wind.
It seems that LLMs, as they work today, make developers more productive. It is possible that they benefit less experienced developers even more than experienced developers.
More productivity, and perhaps very large multiples of productivity, will not be abandoned because of roadblocks constructed by those who oppose the technology for whatever reason.
Examples of the new productivity tool causing enormous harm (eg: a bug that brings down some large service for a considerable amount of time) will not stop the technology if it is delivering considerable productivity.
Working with the technology and mitigating its weaknesses is the only rational path forward. And those mitigations can't be a set of rules that completely strip the new technology of its productivity gains. The mitigations have to work with the technology to increase its adoption or they will be worked around.
> It seems that LLMs, as they work today, make developers more productive.
I think this strongly depends on the developer and what they're attempting to accomplish.
In my experience, most people who swear LLMs make them 10x more productive are relatively junior front-end developers or serial startup devs who are constantly greenfielding new apps. These are totally valid use cases, to be clear, but it means a junior front-end dev and a senior embedded C dev tend to talk past each other when they're discussing AI productivity gains.
> Working with the technology and mitigating its weaknesses is the only rational path forward.
Or just using it more sensibly. As an example: is the idea of an AI "agent" even a good one? The recent incident with Copilot[0] made MS and AI look like a laughingstock. It's possible that trying to let AI autonomously do work just isn't very smart.
As a recent analogy, we can look at blockchain and cryptocurrency. Love it or hate it, it's clear from the success of Coinbase and others that blockchain has found some real, if niche, use cases. But during peak crypto hype, you had people saying stuff like "we're going to track the coffee bean supply chain using blockchain". In 2025 that sounds like an exaggerated joke from Twitter, but in 2020 it was IBM legitimately trying to sell this stuff[1].
It's possible we'll look back and see AI agents, or other current applications of generative AI, as the coffee blockchain of this bubble.
> In my experience, most people who swear LLMs make them 10x more productive are relatively junior front-end developers or serial startup devs who are constantly greenfielding new apps. These are totally valid use cases, to be clear, but it means a junior front-end dev and a senior embedded C dev tend to talk past each other when they're discussing AI productivity gains.
I agree with this quite a lot. I also think that those greenfield apps quickly become unmanageable by AI once you need to start applying solutions that are unique/tailored to your objective, or you want to start abstracting some functionality into reusable components and base classes that the AI hasn't seen before.
I find AI very useful to get me from beginner to intermediate in codebases and domains that I'm not familiar with, but once I get the familiarity, the next steps I take mostly without AI because I want to do novel things it's never seen before.
But this doesn't mean that the model/human combo is more effective at serving the needs of users! It means "producing more code."
There are no LLMs shipping changesets that delete 2000 lines of code -- that's how you know "making engineers more productive" is a way of talking about how much code is being created...
My wife's company recently hired some contractors and they were touting their productivity with AI by saying how it allowed them (one person) to write 150k lines of code in 3 weeks. They said this without sarcasm. It was funny and scary at the same time that anyone might buy this as a good outcome. Classic lines-of-code metric rearing its ugly head again.
I think you’re arguing against something the author didn’t actually say.
You seem to be claiming that this is a binary, either we will or won’t use llms, but the author is mostly talking about risk mitigation.
By analogy it seems like you’re saying the author is fundamentally against the development of the motor car because they’ve pointed out that some have exploded whereas before, we had horses which didn’t explode, and maybe we should work on making them explode less before we fire up the glue factories.
I didn't see the post as pissing into the wind so much as calling out several caveats of coding with LLMs, especially on teams, and ideas on how to mitigate them.
It is funny (ego): I remember when React was new and I refused to learn it; had I learned it earlier I probably would have entered the market years earlier.
Even now I have this refusal to use GPT, whereas my coworkers lately have been saying "ChatGPT says" or "this code was created by ChatGPT", idk. I take pride in writing code myself/not using GPT, but I also still use Google/StackOverflow, which you could say is a slower version of GPT.
This mindset does not work in software. My dad would still be programming with punch cards if he thought this way. Instead he's using Copilot daily, writing microservices, and isn't some annoying dinosaur.
Yeah, it's pros and cons. I also hear my coworkers saying "I don't know how it works," or there are methods in the code that don't exist.
But anyway, I'm at the point in my career where I'm not learning to code/can already do it. Sure, some languages are new and it can help there with syntax.
edit: Another thing I'll add: I can see the throughput thing. It's like when a person has never used OpenSearch before and it's a rabbit hole; with anything new there's that wall you have to overcome. We'll get the feature done, but did we really understand how it works... do we need to? Idk. I know this person can barely code, but because they use something like ChatGPT they're able to crap out walls of code, and with tweaking it will work eventually -- I am aware this sounds like gatekeeping on my part.
Ultimately, personally, I don't want to do software professionally; I'm trying to save/invest enough and then get out, just because the job part sucks the fun out of development. I've been in it for about 10 years now, which should have been plenty of time to save, but I'm dumb/too generous.
I think there is healthy skepticism too, vs. just jumping on the bandwagon like everyone else. Really my problem is just that I'm insecure/indecisive; I don't need everyone to accept me, especially if I don't need money.
Last rant: I will be experimenting with agentic stuff, as I do like the idea of Jarvis -- making my own voice recognition model that runs locally.