> A lot of the concerns you describe make me think you work in a larger company or team and so both the organizational stakes (maintenance, future changes, tech debt, other people taking it over) and the functional stakes (bug free, performant, secure, etc) are high?
The most financially rewarding project I worked on started out as an early stage startup with small ambitions. It ended up growing and succeeding far beyond expectations.
It was a small codebase but the stakes were still very high. We were all pretty experienced going into it, so we each had preferences for which footguns to avoid. For example, we shied away from ORMs because they're the kind of dependency that can get you stuck in the mud. Pick a "bad" ORM, spend months piling code on top of it, and then find out that you're spending more time fighting it than being productive. But by then you don't have the time to untangle yourself from that dependency. Worst of all, at least in our experience, it's impossible to really predict how likely you are to get "stuck" this way with a large dependency. So the judgment call was to avoid major dependencies like this unless we absolutely had to.
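A sketch of the alternative that kind of judgment call usually points to: a thin, hand-rolled data-access layer with plain SQL behind small functions, so there is no large dependency to outgrow later. This is illustrative only; the table and function names are hypothetical, not from the project described above.

```python
import sqlite3

# Thin data-access layer: explicit SQL behind small functions.
# Swapping the storage backend later means rewriting these few
# functions, not untangling an ORM from the whole codebase.

def connect(path=":memory:"):
    conn = sqlite3.connect(path)
    conn.row_factory = sqlite3.Row  # dict-like row access
    conn.execute(
        "CREATE TABLE IF NOT EXISTS users ("
        " id INTEGER PRIMARY KEY,"
        " email TEXT NOT NULL UNIQUE)"
    )
    return conn

def add_user(conn, email):
    cur = conn.execute("INSERT INTO users (email) VALUES (?)", (email,))
    conn.commit()
    return cur.lastrowid

def user_by_email(conn, email):
    row = conn.execute(
        "SELECT id, email FROM users WHERE email = ?", (email,)
    ).fetchone()
    return dict(row) if row else None
```

The tradeoff is more boilerplate up front in exchange for queries you can read, profile, and change without fighting a framework's abstractions.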
I attribute the success of our project to literally thousands of minor and major decisions like that one.
To me almost all software is high stakes, unless it's so trivial that nothing about it matters at all; but that's not what these AI tools are being marketed for, is it?
Something might start out as a small useful library and grow into a dependency that hundreds of thousands of people use.
So that's why it terrifies me. I'm terrified of one day joining a team or wanting to contribute to an OSS project, only to be faced with thousands of lines of nonsensical autogenerated LLM code. If nothing else it takes all the joy out of programming computers (although I think there's a more existential risk here). If it were a team I'd probably just quit on the spot, though I have that luxury and probably would have caught it during due diligence anyway. If it's an OSS project I'd nope out and not contribute.
Adding here due to some resonance with this point of view: this exchange lacks a crucial axis. What kind of programming?
I assume the parent post is saying "I ported thousands of lines of <some C family executing on a server> to <python on standard cloud environments>." I could be very wrong, but that is my guess. Like any data-driven software machinery, these tools have massive inherent bias toward, and extra resources for, the <current in-demand thing>; in this guess-case that is Python running in a standard cloud environment, perhaps with the loaders and credentials parts too.
Those who learned programming in the theoretical way know that many, many software systems are possible in various compute contexts. Those working on hardware teams know that there are many kinds of computing hardware. And, as another off-the-cuff example, there is so much circa-2004 web-interface code waiting to be brought to newer, cleaner setups.
I am not <emotional state descriptor> about this sea change in code generation; code generation is not at all new. It is the blatant stealing and LICENSE washing of a generation of OSS that gets me. Those code-generation machines are repeating their inputs. The authors never agreed, and no one asked them, either.