Hacker News

Maybe the solution isn't to train LLMs on human-written code, but to train some sort of text-to-AST generator on descriptions of programs and then the output of programs. That way it doesn't learn to include human-inspired bugs at least.
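To make the idea concrete: a model emitting ASTs directly would output tree nodes rather than source text. A minimal sketch of what that target representation looks like, using CPython's own `ast` module (the function name `add` and the tree shape here are just illustrative, not any particular model's output format):

```python
import ast

# Build the AST for `def add(a, b): return a + b` directly,
# without ever emitting human-readable source text.
func = ast.FunctionDef(
    name="add",
    args=ast.arguments(
        posonlyargs=[],
        args=[ast.arg(arg="a"), ast.arg(arg="b")],
        kwonlyargs=[], kw_defaults=[], defaults=[],
    ),
    body=[ast.Return(value=ast.BinOp(
        left=ast.Name(id="a", ctx=ast.Load()),
        op=ast.Add(),
        right=ast.Name(id="b", ctx=ast.Load()),
    ))],
    decorator_list=[],
)
module = ast.Module(body=[func], type_ignores=[])
ast.fix_missing_locations(module)  # fill in the line/column info compile() needs

# The tree compiles and runs without a source-text round trip.
namespace = {}
exec(compile(module, filename="<ast>", mode="exec"), namespace)
print(namespace["add"](2, 3))  # 5
```

A tree-structured output space like this also rules out syntax errors by construction, which is one argument for the approach.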


I always wondered why we bother having an AI generate human-readable code. Isn't it a machine? Why not have it generate machine code? Of course, it wouldn't be readable, but would it be performant? Maybe.
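The readability gap the comment is pointing at is easy to see even one layer down from source. A quick illustration with CPython's disassembler (the function `square` is just a stand-in example):

```python
import dis

def square(x):
    return x * x

# The bytecode below is what the CPython VM actually executes;
# it runs fine but is far less readable than the one-line source.
dis.dis(square)
```

Whether skipping the readable layer buys performance is a separate question; it mainly removes the human's ability to review what was generated.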

I guess my point is: if an AI is doing computing, why should its way of computing resemble a person's? It's much closer to the machine than I am. Why not take advantage of that?


Because then you'd have no control over what the AI/code does. If you blindly want to trust the AI, you might as well just make the AI the black box between your input and output.


Haha, what do you think people do with AI/ML? Do you think that's currently under control?


Isn’t that kind of what neural networks are, in a way? Large blobs of computation we can't really parse fully.


Yea, but there are a bunch of layers, lol, between that specification and the machine, and I'm wondering if there's a way to reduce them, assuming they're a bottleneck. But I'm not sure they are; I'm just guessing.



