Hacker News

If there's anything in this press release to justify how "characters added by AI" is any truer a reflection of quality than commit count is of productivity, I didn't see it.

It's a short release and I read it twice, so if it was there I feel like I'd have noticed.




In its current state, I look at it as just a much smarter coding auto-complete, which is still very useful.

With that perspective, "characters added by AI" is an ok metric to track.


I haven't found it particularly smart, at least in its GitHub Copilot incarnation.

Per the metrics I added to the integration when I began to trial it, I accepted about 27% of suggestions.

I didn't track how many suggestions I accepted unmodified, because that would have been orders of magnitude more difficult; I would be fascinated to see Google's solution to the same problem documented, but doubt strongly that I will. I'm sure it's entirely sound, though, and like all behavioral science in no sense a matter of projection, conjecture, interpretation, or assumption.
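A minimal version of the accept/reject tally described above might look like the following. This is a hypothetical sketch, not Copilot's actual telemetry or API; it just assumes you can hook each suggestion event and record whether it was accepted.

```python
def acceptance_rate(decisions):
    """Fraction of suggestions accepted.

    decisions: list of bools, one per suggestion shown,
    True if the suggestion was accepted.
    """
    if not decisions:
        return 0.0
    return sum(decisions) / len(decisions)

# e.g. 27 acceptances out of 100 suggestions shown:
rate = acceptance_rate([True] * 27 + [False] * 73)
print(rate)  # 0.27
```

Note this says nothing about whether an accepted suggestion survived unmodified, which is the much harder measurement discussed above.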

I turned off the Copilot integration months ago, when I realized that the effort of understanding and/or dismissing its constant kibitzing was, on net, adding more friction to my process than its occasions of usefulness alleviated.

I do still use LLMs as part of my work, but in a "peer consultant" role, via a chat model and a separate terminal. In that role, I find it useful. In my actual editor, conversely, it was, more than anything else, a constant, nagging annoyance; the things it suggested that I'd been about to write were trivial enough that the suggestion itself broke my flow, and the things it suggested that I hadn't been about to write were either flagrantly misguided for the context or - much worse! - subtly wrong, in a way that took more time to recognize than simply writing it by hand in the first place would have.

I've been programming for 36 years, and it's been well more than two decades since I did any other kind of paying work. The idea that these tools are becoming commonplace, especially among more junior devs without the kind of confidence and discernment such tenure can confer, worries me - both on behalf of the field, and on theirs, because I believe this latest hype bubble ill serves them in a way that will make them much more vulnerable than they should need to be to other, later, attacks by capital on labor in this industry.


On a related note, Microsoft published a press release last year [1] in which they seemed to suggest that a 30% acceptance rate for Copilot suggestions amounted to a 30% productivity boost for devs.

> users accept nearly 30% of code suggestions from GitHub Copilot

> Using 30% productivity enhancement, with a projected number of 45 million professional developers in 2030, generative AI developer tools could add productivity gains of an additional 15 million “effective developers” to worldwide capacity by 2030. This could boost global GDP by over $1.5 trillion

They were probably just being disingenuous to drum up hype, but if not, they'd have to believe that:

1) All lines of code take the same amount of time to produce

2) 100% of a developer's job is writing code
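Taking the release's own figures at face value, the implied model is just straight multiplication: acceptance rate equals productivity gain, applied uniformly to every developer. A sketch of that naive arithmetic (note the product comes out to 13.5 million, a bit short of the quoted "15 million", so the release presumably assumed a somewhat higher gain):

```python
# Figures quoted from the press release; the model (rate == gain) is
# the assumption being criticized above, not an endorsed method.
accept_rate = 0.30            # "users accept nearly 30% of code suggestions"
developers_2030 = 45_000_000  # projected professional developers in 2030
gdp_boost = 1.5e12            # claimed global GDP boost

effective_devs = developers_2030 * accept_rate
print(effective_devs)              # 13.5 million "effective developers"
print(gdp_boost / 15_000_000)      # ~$100,000 of GDP per claimed effective dev
```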

[1]: https://github.blog/2023-06-27-the-economic-impact-of-the-ai...





