> But the idea of letting an LLM write/move large swaths of code seems so incredibly irresponsible.
People felt the same about compilers for a long time. And justifiably so: the idea that compilers are reliable is quite a new one; finding compiler bugs used to be pretty common. (Those experimenting with newer languages still get to enjoy the fun of this!)
How about other code generation tools? Presumably you don't take much umbrage with schema generators? Or code generators that take a schema and output library code (OpenAPI, Protocol Buffers, or even COM)? Those can easily take a few dozen lines of input and output many thousands of LoC, and because they are part of an automated pipeline, even if you do want to fix the code up, any fixes you make will be destroyed on the next pipeline run!
But there is also a LOT of boring boilerplate code that can be automated.
For example, the necessary code to create a new server, attach a JSON schema to a POST endpoint, validate a bearer token, and enable a given CORS config is pretty cut and dry.
If I am ramping up on a new backend framework, I can either spend hours learning the above and then copy and paste it forever more into each new project I start up, or I can use an AI to crap the code out for me.
(Actually, once I was setting up a new server and decided not to copy and paste but to write it out myself. I flipped the order of two `use` directives and it cost me at least 4 hours to figure out WTF was wrong....)
> As a programmer of over 20 years
I'm almost up there, and my view is that I have two modes of working:
1. Super low level, where my intimate knowledge of algorithms, the language and framework I'm using, of CPU and memory constraints, all come together to let me write code that is damn near magical.
2. Super high level, where I am architecting a solution using design patterns and the individual pieces of code are functionally very simple, and it is how they are connected together that really matters.
For #1, eh, for some popular problems AI can help (popular optimizations on Stack Overflow).
For #2, AI is the most useful, because I have already broken the problem down into individual bite size testable nuggets. I can have the AI write a lot of the boilerplate, and then integrate the code within the larger, human architected, system.
> So the thought of opening up a codebase that was cobbled together by an AI is just scary to me.
The AI didn't cobble together the system. The AI did stuff like "go through this array and check the ID field of each object and if more than 3 of them are null log an error, increment the ExcessNullsEncountered metric counter, and return an HTTP 400 error to the caller"
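That quoted instruction maps almost one-to-one onto code. A sketch of what the AI produces (the `metrics` object and the field names are hypothetical stand-ins for whatever the real codebase uses):

```javascript
const metrics = { ExcessNullsEncountered: 0 }; // stand-in metric counter

// Checks the `id` field of each object; if more than 3 are null, logs an
// error, increments the metric, and returns 400 for the caller to send back.
function checkIds(items) {
  const nullCount = items.filter((item) => item.id === null).length;
  if (nullCount > 3) {
    console.error(`too many null IDs: ${nullCount}`);
    metrics.ExcessNullsEncountered += 1;
    return 400;
  }
  return 200;
}
```

The human still decides that this check should exist, where it lives, and what "more than 3" means for the business; the AI just types it out.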
Edit: This just happened
I am writing a small Canvas game renderer, and I was having an issue where text above a character's head renders off the canvas. So I had Cursor fix the function up to move the text under the character if it would have been rendered above the canvas area.
I was able to write the instructions out to Cursor faster than I could have found a pencil and paper to sketch out what I needed to do.
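The fix itself is tiny once stated precisely, which is why describing it in prose beat sketching it. Something along these lines (the offset value and function name are made up; the real renderer passes this y to `ctx.fillText`):

```javascript
const LABEL_OFFSET = 12; // px between the character and its label, hypothetical

// Compute the y coordinate for a character's label: normally above the head,
// but flipped below the character when it would land above the canvas (y < 0).
function labelY(characterY) {
  const above = characterY - LABEL_OFFSET;
  return above < 0 ? characterY + LABEL_OFFSET : above;
}
```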