I agree with you, and it's confusing to me. I do think there is a lot of emotion at play here - rather than cold rationality.
Using LLM-based tools effectively requires a change in workflow that a lot of people aren't ready to try. Everyone can share an anecdote of an LLM producing stupid or buggy code, but there is far too much focus on where we are now, rather than on the direction of travel.
I think existing models are already sufficient; we just need to improve the feedback loop. A lot of the corrections and direction I give to LLM-produced code could 100% be done by a better LLM agent. In the next year I can imagine tooling that:
- lets me interact fully via voice
- a separate "architecture" agent ensures that any produced code is in line with the patterns in a particular repo
- compile and runtime errors are automatically fed back in and automatically fixed
- a refactoring workflow mode, where the aim is to first get tests written, then get the code working, and then get the code efficient, clean and with repo patterns
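To make the "errors fed back in" idea concrete, here's a minimal sketch of the loop I mean. This is illustrative only: `ask_llm` is a hypothetical placeholder for whatever model API you'd use, and the compile check is just Python's stock `py_compile` module. The shape of it - compile, capture errors, feed them back, retry - is the whole idea.

```python
# Minimal sketch of a compile-error feedback loop (illustrative, not any real tool's API).
import subprocess

def ask_llm(prompt: str) -> str:
    """Hypothetical stand-in: send a prompt to your model of choice, return its reply."""
    raise NotImplementedError

def compile_check(path: str) -> str:
    """Try to byte-compile a Python file; return stderr ('' means success)."""
    result = subprocess.run(
        ["python", "-m", "py_compile", path],
        capture_output=True, text=True,
    )
    return result.stderr

def fix_until_compiles(path: str, max_rounds: int = 5) -> bool:
    for _ in range(max_rounds):
        errors = compile_check(path)
        if not errors:
            return True  # clean compile, nothing left to feed back
        source = open(path).read()
        fixed = ask_llm(
            f"This file fails to compile:\n{source}\n"
            f"Compiler output:\n{errors}\n"
            "Return the corrected file, nothing else."
        )
        open(path, "w").write(fixed)
    return False  # gave up; a human (or a better agent) takes over
```

Today I do each of those round trips by hand; the point is that nothing in the loop actually requires a human.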
I'm excited by this direction of travel, but I do think it will fundamentally change software engineering in a way that is scary.
> Using LLM based tools effectively requires a change in workflow that a lot of people aren't ready to try
This is a REALLY good summary of it, I think. If you lose your patience with people, you'll lose your patience with AI tooling, because interacting with AI is fundamentally so similar to interacting with other people.
Exactly, and LLM-based tools can be very frustrating right now - but if you view the tooling as a very fast junior developer with very broad but shallow knowledge, you can develop a workflow that, for many (but not all) tasks, is much, much faster than writing the code by hand.