
My current suggestion is to treat it as the work of a just-onboarded intern. It will save you some time, but you still need to walk through the code to make sure it will work as intended.


First, it's worth noting the code in the blog post is not "production code," but rather one-off or periodically used scripts for accelerating manual business processes, with results that are easy to manually check.

But in regards to production code, I agree. When code is committed to a codebase, a human should review it. Assuming you trust your review process, it shouldn't matter whether the code submitted for review was written by a human or a language model. If it does make a difference, then your review process is already broken. It should catch bad code regardless of whether it was created by human or machine.

It's still worth knowing the source of a commit, but only as context for understanding how it was generated. You know humans are likely to make certain classes of error, and you can learn to watch out for the blind spots of your teammates, just like you can learn the idiosyncrasies and weak points of GPT-generated code.

Personally, I don't think we're quite at "ask GPT to commit directly to the repo," but we're getting close. The constant refrain of "try GPT-4" has become a trope, but the difference is immediately noticeable. Whereas GPT-3.5 will make a mistake or two in every 50-line file, GPT-4 is capable of producing fully correct code that you can run successfully on the first try. At the moment it works best for isolated prompts like "create a component to do X" or "write a script to do Y," but if you can provide it with the interface to call an external function, then suddenly that isolated code is just another part of an existing system.
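To make the "provide it with the interface" point concrete, here's a minimal sketch (the names are invented for illustration, not from the post): if the prompt includes the signature of an existing function, the generated code can call straight into it.

    from dataclasses import dataclass

    @dataclass
    class Customer:
        id: str
        email: str

    def fetch_customer(customer_id: str) -> Customer:
        """Existing function in our codebase; only this signature is pasted into the prompt."""
        return Customer(id=customer_id, email=f"user-{customer_id}@example.com")

    # The kind of code the model can return when given the signature above plus
    # "write a function that builds a welcome message for a customer":
    def welcome_message(customer_id: str) -> str:
        customer = fetch_customer(customer_id)  # calls the interface we supplied
        return f"Welcome aboard, {customer.email}!"

    print(welcome_message("42"))

Because the model is told how fetch_customer is called rather than asked to invent it, the output slots into the existing system instead of living as a standalone snippet.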

As tooling improves for working collaboratively with large language models and providing them with real-time contextual feedback on code correctness (especially for statically analyzable or type-checked languages), they will become increasingly indispensable to the workflow of productive developers. If you haven't used Copilot yet, I encourage you to try it for at least a month. You'll develop an intuition for what it's capable of and will eventually wonder how you ever coded without it. Also make sure to try prompting GPT-4 to create functions, components, or scripts. The results are truly surprising and exciting.


My experience has been that it's faster to write the code yourself than to go through a just-onboarded intern + review + fixes.


The time savings isn't down to quality; the difference is that an LLM does in seconds what an intern does in hours or days.


Yes, but part of that time is an investment in the intern's professional development. Everyone started there at some point.

That can be hard to remember, though, when deadlines are unrealistic and helping someone inexperienced do the work takes twice the effort.



