AI does 0% of my work and we are actively hiring. As someone mentioned on another AI thread, if AI is so good, why aren't people just doing 15 PRs a day on open source projects like Node.js, React, Kubernetes, Linux, Ansible, etc.?
AI is sometimes a productivity booster for a dev, sometimes not. And it's unpredictable when it will and won't be. It's also not great at signaling when you should be skeptical of its output.
In any sufficiently complex software project, much of the development is about domain knowledge, asking the right questions, balancing resources, guarding against risks, interfacing with a team to scope, vet, and iterate on features, analyzing customer feedback, thinking of new features, improving existing ones, etc.
When AI is a productivity booster, it's great, but modern software is an evolving, organic product that requires a team to maintain, expand, and improve it. As of yet, no AI can take the place of that.
You don't use any AI - drafting documentation, writing boilerplate code, transcribing meetings, simplifying team communications, searching product documentation, as an alternative to Google or Stack Overflow, creating presentations, as a brainstorming partner? I would consider all of these "work".
If you say AI does 0% of your work, I'd say you're either a genius, behind the curve, or being disingenuous.
LLMs do 0% of my work; I don't know what to tell you. LLMs are like 3 years old, and I learned to do everything I know without them. How is that hard to believe? How do you think all the software you use every day, including this site itself, was written? 99.99% of it without any LLMs at all.
Do I use LLMs as an alternative to Googling? Absolutely. That doesn't mean AI is doing my job. Google and Stack Overflow also do 0% of my job. It's great as a reference tool. But if you're going to be that pedantic, we've got to count any help I receive from any human or tool as doing some % of my job. Do I count the open source software I build on? Do I count Slack as doing some % of my job since I don't have to go into the office and interface with everyone face-to-face? Does Ford get some of the credit for building the vehicle that gets me to the office?
Have I used a meeting transcription tool? Occasionally, yeah. That doesn't mean it does any part of my work. My job was never to transcribe meetings. Do I use it to brainstorm? No, I've found it's fairly useless for that. Do I use it to create presentations? No, I just write my slides the old-fashioned way.
It's hard for newer people to understand that people from our generation learned how to do things and to use our brains. The younger tech folks only do things that the computer automatically knows how to do and/or AI can help with. Even UI design and basic stuff is purely done on web-based tools with prebuilt components and AI help these days. I'm completely shocked day to day when I interface with front end/design. I would venture to guess there aren't many front end peeps who even know how to hand-code HTML & CSS these days.
In my opinion, the only thing AI is helpful for is doing all the menial boilerplate nonsense that is only necessary because of the inexperienced people in charge of so many projects. For example, setting up 30 totally unnecessary GitHub Actions, etc. Anything that is worth doing, I'd rather do myself and not lose my skills.
How can AI search documentation, if the documentation is thousands of obsolete and contradictory Jira tickets, a few outdated Confluence pages with mail attachments, and a handful of Excel files on SharePoint?
Supposedly there is a way to get an AI to do exactly this; we have it slated as an "intern project", which feels ironic in itself: using an intern to figure out how to get an AI to rectify our Jira and train on our Confluence to help us and our users.
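Roughly, the naive version we have in mind is "dump everything into a retrieval index and let the model cite sources". A minimal sketch of that idea, with a toy bag-of-words stand-in where a real embedding model would go (all names and the sample tickets are made up):

```python
import math
from collections import Counter

# Toy stand-in for a real embedding model: bag-of-words counts.
# (The intern would swap in an actual embedding model here.)
def embed(text: str) -> Counter:
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Pretend these were exported from Jira/Confluence beforehand.
docs = [
    ("JIRA-123", "auth service must use LDAP (decided 2019)"),
    ("JIRA-456", "auth service migrated to OAuth2 (decided 2023)"),
    ("CONF-7", "how to request access to the auth service"),
]

index = [(src, embed(text)) for src, text in docs]

def search(query: str, k: int = 2):
    q = embed(query)
    return sorted(((cosine(q, vec), src) for src, vec in index), reverse=True)[:k]

print(search("auth mechanism"))
# The two contradicting tickets tie at the top, and nothing in the
# index knows that JIRA-456 superseded JIRA-123. That's the hard part.
```

Retrieval itself is the easy bit; deciding which of two contradicting tickets is still true is the part nobody has handed us a tool for.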
No, even if you got clearance... What's that gonna help with? The point was that the Jira tickets are obsolete and likely contradict each other thanks to changing requirements over time. More advanced tooling might be able to guess by looking at the git history and double-checking via linked tickets, etc., but no tooling available today actually does this.
And that's coming from someone who has repeatedly gone on record saying "my expectation for our industry is a gigantic contraction because of LLMs"... but this isn't a scenario that's plausible with current models.
You're sending the intellectual property of your employer to a third party without their consent. Hell, it's much worse than that, as it sounds like they've explicitly told you not to.
Is that so hard to believe? My work uses a proprietary language, something like ABAP for SAP [1]. AI has ingested a lot of the documentation available on the internet, but it cannot tell the difference between versions. So AI-generated code often uses functions that are correct but deprecated.
And don't get me started on the "time savings" for boilerplate documentation. It messes up every time.
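I can't paste the ABAP here, but the same failure mode is easy to show in Python, to pick a language everyone can check: the suggestion runs fine today, so nothing looks wrong, yet it leans on a call the docs have since deprecated.

```python
from datetime import datetime, timezone

# What a model trained on older docs tends to suggest: this runs,
# but utcnow() is deprecated as of Python 3.12 (and returns a naive
# datetime with no tzinfo attached).
stamp = datetime.utcnow()

# The current, non-deprecated equivalent: an aware datetime in UTC.
stamp = datetime.now(timezone.utc)
```

Now imagine the same thing in a niche proprietary language where the model has seen ten versions of the docs and has no idea which one your system runs.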
We used to do all our math on slide rules. They're just as effective as they always were.
But when you're being graded on a curve, standing still can still mean falling behind.
Which isn't to say that AI is definitively ahead of the curve; I think we're a bit early for that. But as actual answers to your actual questions - it's important because if everyone else gets ahead of you, your boss will STOP paying you.
(and if you're "good at AI", you can at least make bank until the bubble bursts)
Are you saying everyone who isn't barely starting their career is a genius? In the current state of things I'd gladly take mediocre work from a human over slop from an AI.
Seriously, this. Doing code reviews on LLM-created code is so frustrating. If the code was submitted by a junior engineer, I could get on Zoom with them and educate them, which would make them a better teammate, advance their career goals, and make the world slightly better. With AI-created code, the review process is a series of tiny struggles to dig out of the hole the LLM created and get back to baseline code quality, and it'll probably be the same Sisyphean struggle with the next PR.
I had to review code that couldn't even do a straightforward map, filter, and reduce properly. But with management pushing hard for AI use, I feel powerless to push back against it.
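I don't want to reproduce the submitted code, but for scale, here's roughly all a correct version amounts to (a toy Python sketch with made-up data). One common trip-up, for what it's worth, is forgetting the reduce initializer, which blows up on empty input:

```python
from functools import reduce

# Made-up example: total price of in-stock items with a 10% discount.
items = [
    {"price": 100.0, "in_stock": True},
    {"price": 250.0, "in_stock": False},
    {"price": 40.0, "in_stock": True},
]

available = filter(lambda i: i["in_stock"], items)       # filter
discounted = map(lambda i: i["price"] * 0.9, available)  # map
total = reduce(lambda acc, p: acc + p, discounted, 0.0)  # reduce
# The 0.0 initializer matters: without it, reduce() raises a
# TypeError when the filtered sequence is empty.

print(total)  # 126.0
```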
Ha! Not just "the next PR" - in my experience, about 30% of the time you tell it "hey, this slop you gave me is horribly broken because <reason>", and it says "you're absolutely right! I totally 100% understand the problem now, and I'll totally 100% fix that for you right now!", and then proceeds to deliver exactly the same broken slop it gave me before.
It knows that an apologetic tone and an acknowledgment of your critique is the most probable response for it to generate. But that's very different from actually understanding how it should change the code.
Since that study demonstrated that experienced developers currently suffer a decline in productivity when using LLMs, it's perfectly plausible that less experienced/junior developers, who normally struggle with syntax or simple tasks like organizing their code, are the ones experiencing a boost in productivity from LLMs.
Thus, it seems the devs benefitting the most from LLMs are the ones with a skill issue/more junior/early in their career.
How open are you to the possibility that it's the other way around? Because the study suggests that it's actually the junior code monkeys who benefit from LLMs, and that experienced software engineers instead see a decline in their productivity.
At least that's what the only available study so far shows.
That's corroborated by my experience mentoring juniors: the more they struggled with basic things like syntax or expressing their thoughts clearly in code, the more benefit they got from using LLM tools like Claude.
Once they go mid-level and above, the LLMs are a detriment to them. Do you currently get a big benefit from LLMs? Maybe you are earlier in your career?
I think you are making a couple of very good points but getting bogged down in the wrong framework of discussion. Let me rephrase what I think you are saying:
Once you are very comfortable in a domain, it is detrimental to have to wrangle a junior dev with low IQ, way too much confidence, but encyclopedic knowledge of everything, instead of just doing it yourself.
The dichotomy of Junior vs. Senior is a bit misleading here: every junior is uncomfortable in the domain they are working in, but a Senior probably isn't comfortable in all domains either. For example, many people I know with 10+ years of SE experience aren't very good with databases and data engineering, which is becoming an increasingly large part of the job. For someone who has worked 10+ years on Java backends and is now attempting to write Python data pipelines, coding agents might be a useful tool to bridge that gap.
The other thing is creation vs. critique. I often have my code, writing, and planning reviewed by Claude or Gemini, because once I have created something, I know it very well, and I can very quickly go through 20 points of criticism/recommendations/tips and pick out the relevant ones. And honestly, that has been super helpful. Used that way around, Claude has caught a number of bugs, taught me some new tricks, and made me aware of some interesting tech.
Those "experienced" actually are just senior code monkeys if u ask me, it's trivial right ? I don't assume the reason why, but it's just illogical for a junior to get benefits and the seniors don't. The wrong ones here is the "experienced".
I know how to use the AI tools for my purpose (that's why i use them), and of course, to make the impossible possible. Even if i failed to do so, it's not the decrease in productivity because without them, i don't think i can do better than the LLM.
> Those "experienced" actually are just senior code monkeys if u ask me, it's trivial right
Well, it seems you are not open to discussion. There is no reason to disparage the senior devs who participated in the study just because you don't like its results. But the study happened, and it is clear: experienced developers are the ones who suffered from using LLMs.
> it's just illogical for juniors to get benefits while seniors don't
Experienced car drivers won't benefit from a YouTube tutorial on how to drive; junior drivers might. That's similar to junior developers potentially being the ones who can benefit from the basic things an LLM can help you with, e.g. helping you with syntax, structuring your thoughts, and writing a scaffold to get you started. Those are concerns that experienced developers don't need help with, just as experienced drivers don't need YouTube tutorials on how to shift gears. There is nothing illogical in that premise. Do you agree?
> I don't think I can do better than the LLM
I most certainly can tell you that there are 1000s of developers who can do infinitely better than any of the current LLMs, and those developers are fairly often senior. It seems the skill issue you mentioned at the beginning of your post might actually be on your side.
That study was not conducted well at all. The participants hadn't learned how to use these tools. For example, one was interviewed later and said that a lot of the time they would wait for an agent, get distracted playing with something irrelevant, and then forget to go back until much later. That problem has solutions they simply weren't aware of.
> For example, one was interviewed later and said that a lot of the time they would wait for an agent, get distracted playing with something irrelevant, and then forget to go back until much later
Counterpoint: the agents are the reason for the distraction.
> That problem has solutions they simply weren't aware of.
Counterpoint: there are no other current studies that suggest otherwise. Given the impact of LLMs on open source (net negative - maintainers are drowning in slop, e.g. curl: https://daniel.haxx.se/blog/2025/07/14/death-by-a-thousand-s...), maybe it makes sense to be a bit more critical of LLMs' supposed gains.
Let's see what we have so far:
- The only study to date suggests a net negative effect on experienced developers from using LLMs
- OSS maintainers are rejecting AI generated PRs due to low quality
- No other studies have come out so far to suggest otherwise
Based on my anecdotal experience and based on the currently available evidence, for me the conclusion is clear: LLMs and agents are mostly hype.
You keep talking about "no other studies" as if that holds power, but the strength of your argument rests on a single study.
It's no surprise to me that devs who are accustomed to working on one thing at a time, due to fast feedback loops, have not learned to adapt to parallelizing their work (something that has been demonized at agile-style organizations) and instead sit and wait on agents and start watching YouTube. The study reflects usage of emergent tools without training, and with regressive training on previous-generation sequential processes, so I would expect these results. If there is any merit in coordinating multiple agents on slower-feedback work, this study would not find it.
Productivity could just be simple automation. You're describing just one part of the whole process. My point still stands: if you cannot get an LLM to benefit you, you are the problem.
The responses in this thread capture the absurdity of the AI hype so well that it's satirical, even. Putting all the blame for AI's deficiencies on "bad prompting", denying concrete evidence, and refusing to provide any in return is a recurring pattern in these discussions. The repeated angry name-calling towards experienced developers who failed to uphold your beliefs is the cherry on top.
I'd just like to point out just how sad, self-defeating, and ignorant this statement is.
I could literally teach a below-average-intelligence 16-year-old how to write better code than any LLM I've ever seen - if they're interested and willing to learn.
My understanding is that a code monkey just does what they're told. All the planning and behind-the-scenes negotiations that the senior devs and management do are completely opaque to them.
> You don't use any AI - drafting documentation, writing boilerplate code, transcribing meetings, simplifying team communications, searching product documentation, as an alternative to Google or Stack Overflow, creating presentations, as a brainstorming partner? I would consider all of these "work".
> If you say AI does 0% of your work, I'd say you're either a genius, behind the curve, or being disingenuous.
There are reasons that seasoned OSS developers (like the creator of curl) reject AI PRs: https://news.itsfoss.com/curl-ai-slop/. Additionally, the only study to date measuring the impact of LLMs on experienced developers found a modest 19% decline in productivity when they used an LLM for their daily work.
Now, we could ponder the reasons why the study showed experienced developers getting a decrease in productivity while you anecdotally experience a boost in "productivity" - but why think about things when we can ask an LLM?
- experienced developers -> measured decrease of productivity
- you -> perceived increase of productivity
Here is what ChatGPT-5 thinks about the potential reason (AI slop below):
"Why You Might Feel More Productive
If senior developers are seeing a decline in productivity, but you are experiencing the opposite, it stands to reason that you are more junior. Here are some reasons why LLMs might help junior developers like you to feel more productive:
Lower Barrier to Entry
- LLMs help fill in gaps in knowledge—syntax, APIs, patterns—so you can move faster without constantly Googling or reading docs.
- Confidence Boost You get instant feedback, suggestions, and explanations. That can make you feel more capable and reduce hesitation.
- Acceleration of Learning You’re not just coding—you’re learning as you go. LLMs act like a tutor, speeding up your understanding of concepts and best practices.
- More Output, Less Friction You might be producing more code, solving more problems, and feeling that momentum—especially if you are just starting your coding journey."