
Having "industry professionals" in this sort of study actually puts it in the top tier of studies. Most studies don't even have that, they're all undergrad based.

(Sometimes people express frustration that we don't pay much attention to "the science" in the programming field, and my response is generally to tell such people to take a closer look at the "science" they're trying to wave around. Studies based on "a class full of juniors in college" top out at zero value, and they can easily go negative, since they can be actively wrong about how the topic under study affects professionals.)

In this case, though, I'd submit that one doesn't need to run some sort of enormous study to establish the point that these code assistants are not a magic gateway to quality code at any experience level. I've been banging this drum on HN just from an understanding of how the tech works. Confabulation engines can't help but confabulate. You can't trust them. This GPT stuff is perhaps more a window into human psychology than a useful tool; we've never built an AI that so strongly emits signals of confidence before. They're the best artificial confidence-game players ever.
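To make that concrete, here's a hypothetical sketch in Python (the function and its bug are invented for illustration, not drawn from any particular model's output) of the kind of plausible, confident-looking code an assistant can emit:

    # Hypothetical example of confident-but-wrong assistant output.
    # Everything here is invented for illustration.
    def median(values):
        """Return the median of a list of numbers."""
        ordered = sorted(values)
        mid = len(ordered) // 2
        # Subtle bug: for even-length input this returns the upper of
        # the two middle values instead of their mean, and an empty
        # list raises IndexError. Nothing in the output signals any
        # uncertainty about either case.
        return ordered[mid]

Nothing distinguishes that from a correct implementation until you verify it yourself, which is exactly the point.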



> one doesn't need to run some sort of enormous study to establish the point that these code assistants are not a magic gateway to quality code at any experience level

You just have to use it a couple of times to figure this out. It's pretty obvious what the limitations are, and most programmers are smart enough to understand what it is and what it isn't.

This is why I'm skeptical it will be a problem: it's not being sold that way, and after using it, that will be obvious.

So anyone dumb enough to treat it like that and trust the output blindly probably wasn't a good programmer before. And if they keep doing it, they don't have proper incentive structures to stop producing buggy code (senior devs, bosses, customers, etc. will notice that the output is getting worse, at a minimum when the product breaks or during QA).


> In this case, though, I'd submit that one doesn't need to run some sort of enormous study to establish the point that these code assistants are not a magic gateway to quality code at any experience level.

For me the interesting question is not whether they can improve quality.

The interesting question is, given that they can be used to produce code faster (as a sort of autocomplete on steroids), whether that improvement can be achieved in a way that doesn't involve a decrease in quality.

I think it's possible, for sufficiently competent professionals who can spot and correct mistakes on the fly, and I have anecdotal evidence to support the idea, but it would be nice to see serious research around it.
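As a hypothetical illustration of what "spot and correct on the fly" looks like (the snippet is invented, not taken from any real assistant session):

    # Hypothetical assistant suggestion: the slice is off by one and
    # quietly sums n + 1 items instead of n. (Invented for illustration.)
    def recent_total_suggested(amounts, n):
        return sum(amounts[-n - 1:])

    # What a reviewer who catches the mistake ships instead; the n > 0
    # guard also avoids the amounts[-0:] trap, which slices the whole list.
    def recent_total_fixed(amounts, n):
        return sum(amounts[-n:]) if n > 0 else 0

The speed gain only holds up if that review step reliably happens; that's the part serious research would need to measure.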


> This GPT stuff is perhaps more a window into human psychology than a useful tool

It would be a great search engine if it cited its sources (but then people would notice it's basically copying code from the internet). It actually is good at surfacing the names that come up in a given context, so you can go search for them yourself. But only if you already know enough to establish the context.



