
Exactly my thoughts reading this article. Luckily, if over the next few years we get thousands of projects written using 'AI', there will be a need for someone to debug and fix all of that broken software.

Or maybe not; maybe it will be cheaper to just slap on another ten k8s pods to mitigate the poor performance...



I believe we passed the point of “bad software written, bad software deployed, business as usual” long ago, when AWS/GCP/Azure became an important requirement in job descriptions.

A bad piece of software can be decently hidden by burning more money on cloud bills, which gives leadership the inflated sense that their products are doing ground-breaking work at global scale.

With AI, I would not be surprised if quality actually improves and cost comes down (or stays the same). Of course, more bad software will be written now that many aspiring entrepreneurs can realize their dream idea of a Spotify clone, then sacrifice their life savings on complex cloud bills; the ever-so-profitable rise in cloud revenue will be cited as a benefit of AI while companies do some more layoffs to jack up stock prices.

The real reckoning will come (it always does; nature and the economy work in cycles) when the damage caused by excessive layoffs comes due and everyone scrambles to rehire people in a few years. Unlike Ford's innovation of replacing horse carts, software is prevalent in every aspect of our lives, much like doctors, lawyers, and the civil service. So we have to honestly play the game until the wave turns, then cash in by making a 200x killing, just like the businesses are cashing in right now.


> I believe we passed the point of “bad software written, bad software deployed, business as usual” long ago, when AWS/GCP/Azure became an important requirement in job descriptions.

> A bad piece of software can be decently hidden by burning more money on cloud bills, which gives leadership the inflated sense that their products are doing ground-breaking work at global scale.

Doesn't this apply to almost all software out there nowadays?

Bloated enterprise frameworks (lots of reflection and dynamic class loading on the back end, wasteful memory usage; large bundles and very complicated SPAs on the front end), sub-optimal DB querying (the classic N+1 pattern; see the sketch below), bad architectures, inefficient desktop and mobile apps built on web technologies for the sake of faster iteration speed, OS package management that makes it hard to halt updates and doesn't integrate well with the rest of the system (e.g. snap packages), operating systems with ads in the start menu or multiple conflicting UI styles within it (Windows), game engines that are hard to use well to the point where people scoff just hearing UE5, and so on.
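To make the DB point concrete: "sub-optimal querying" is very often the N+1 pattern, one query per row where a single JOIN would do. A minimal Python sketch against an in-memory SQLite database (the schema and data here are made up purely to keep it runnable):

    import sqlite3

    # Hypothetical schema and data, just enough to make this runnable.
    db = sqlite3.connect(":memory:")
    db.executescript("""
        CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT);
        CREATE TABLE orders (id INTEGER PRIMARY KEY, user_id INTEGER, total REAL);
        INSERT INTO users VALUES (1, 'alice'), (2, 'bob');
        INSERT INTO orders VALUES (1, 1, 9.99), (2, 1, 5.00), (3, 2, 12.50);
    """)

    # N+1: one query for the users, then one more query per user.
    users = db.execute("SELECT id, name FROM users").fetchall()
    for user_id, name in users:
        totals = db.execute(
            "SELECT total FROM orders WHERE user_id = ?", (user_id,)
        ).fetchall()  # N extra round-trips that a single JOIN would avoid

    # The same data in one round-trip.
    rows = db.execute(
        "SELECT u.name, o.total FROM users u JOIN orders o ON o.user_id = u.id"
    ).fetchall()

Harmless at two users; a very different story at two million, especially when the database is on the other end of a network hop.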

Essentially just Wirth's law, taken to the extreme by companies and individuals optimizing for shipping quickly and for things that catch attention, instead of having good engineering underneath it all: https://en.wikipedia.org/wiki/Wirth%27s_law

Not the end of the world, but definitely a lot of churn, and I don't see things improving anytime soon. If anything, I fear that our craft will be cheapened a lot by the prevalence of LLMs and possible over-saturation of the field. I do use them like any other tool when it makes sense to do so... but so does everyone else.


> With AI, I would not be surprised if quality actually improves and cost comes down (or stays the same). Of course, more bad software will be written now that many aspiring entrepreneurs can realize their dream idea of a Spotify clone, then sacrifice their life savings on complex cloud bills; the ever-so-profitable rise in cloud revenue will be cited as a benefit of AI while companies do some more layoffs to jack up stock prices.

At this point we are all speculating, really. But from a logical point of view, LLMs are trained on code written by humans. As more and more code is written by LLMs instead, models will be trained on content written by other models. It will be very hard to distinguish which code on GitHub was written by a human and which by a model (unless the quality differs substantially). If that happens, I would say the quality of the code they write will drop. Or the quality of the models will drop. Or model-written code will keep using pre-LLM patterns, because model-written code will not be part of the training data. It may be that LLM-written code will work but be hardly comprehensible to humans. For now, models lack the negative feedback loop that humans have ('oh, the code does not compile', or 'the code compiles but throws an exception', or 'the code compiles and works but performs poorly').
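That missing feedback loop is at least mechanically closable: run the generated code, capture the error, feed it back into the prompt. A rough sketch of the idea in Python, where `generate` is a hypothetical stand-in for an LLM call (not any particular vendor's API):

    import subprocess, sys, tempfile
    from typing import Callable

    # Sketch only: `generate` is a hypothetical callable wrapping an LLM,
    # taking a prompt string and returning candidate Python source.
    def generate_with_feedback(generate: Callable[[str], str],
                               prompt: str, attempts: int = 3) -> str | None:
        feedback = ""
        for _ in range(attempts):
            code = generate(prompt + feedback)
            with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
                f.write(code)
            # Run the candidate and capture the traceback, as a human would.
            result = subprocess.run([sys.executable, f.name],
                                    capture_output=True, text=True, timeout=30)
            if result.returncode == 0:
                return code  # it runs; says nothing yet about performance
            feedback = f"\n\nThe previous attempt failed with:\n{result.stderr}"
        return None

Even a loop like this only gets you to 'it runs'; the third failure mode ('works but performs poorly') is much harder to feed back automatically.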

Anyway, I am sure there will be an impact on the whole industry, but I doubt models will become the primary source of source code. A helpful tool for sure, but not a drop-in replacement for developers.



