
Sometimes I wonder why we would want LLMs to spit out human-readable code at all. Wouldn't a better future be one where LLMs generate highly efficient machine code, and we only read the "source map" for debugging? Wasn't source code just for humans anyway?
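For a concrete (if loose) analogy: Python already does a version of this. It compiles source to bytecode and keeps a mapping back to source lines, which is what tracebacks and debuggers rely on. A minimal sketch using the standard dis module:

    import dis

    def add(a, b):
        return a + b

    # The left-hand column of the dis output is the source line number --
    # effectively a built-in "source map" from the lower-level bytecode
    # back to the human-readable source.
    dis.dis(add)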


You just reinvented the compiler.


Because you can't trust what the LLM generates, so you have to read it. Of course, the question then becomes whether you can trust your developer either.


I'd reply that LLMs just aren't capable of that. They're okay with Python and JS simply because there's a lot of training data out in the open. My point is that we seem to be delegating the future to tools that can only generate critical code in languages originally designed to be easy to learn... it doesn't make sense.


I think they spit out human-readable code because they've been trained on human authors.

But you make an interesting point: eventually AIs will be writing code for other AIs and machines, and human verification can be an afterthought.



