Sometimes I wonder why we would want LLMs to spit out human-readable code. Wouldn't a better future be one where LLMs generate highly efficient machine code, and we eventually read a "source map" only for debugging? Wasn't source code just for humans anyway?
I'd reply that LLMs simply aren't capable of that. They're okay with Python and JS only because there's so much training data out in the open. My point was that it seems like we're delegating the future to tools that generate critical code in languages originally designed to be easy for humans to learn. It doesn't make sense.