
My concern is that students and novices are going to be using this without the ability to double-check the tool's output. It inspires overconfidence: the code looks okay at the surface level, so bugs go unnoticed. A younger generation using this as a crutch, treating their own creations as a black box, will not have an adequate feedback mechanism to learn from their mistakes. Code quality and performance will deteriorate over time. You, an expert, learned without this crutch; your use case is frankly uninteresting.

Amusingly, I'd predict that without careful curation, buggy code will tend to self-replicate: tools that indiscriminately slurp public code will enter a death spiral, because the novices outnumber the experts. It's only a matter of time before viruses are written to propagate through this garbage stream. http://www.underhanded-c.org/
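
For the unfamiliar: the Underhanded C Contest rewards code that reads as correct to a reviewer but hides a deliberate flaw. A minimal sketch of the style (my own hypothetical illustration, not an actual contest entry): a bounds check that looks right but is defeated by signed/unsigned conversion.

    #include <stdio.h>
    #include <string.h>

    /* Looks like a routine bounds check, but a negative len slips
       through: -1 > 64 is false, so the check passes, and memcpy's
       size_t parameter then converts -1 to SIZE_MAX. */
    static void store_packet(const char *src, int len) {
        char buf[64];
        if (len > (int)sizeof(buf))
            return;                    /* rejects "oversized" input */
        memcpy(buf, src, (size_t)len); /* len = -1 copies ~SIZE_MAX bytes */
        printf("stored %d bytes\n", len);
    }

    int main(void) {
        store_packet("hello", 5);      /* fine */
        /* store_packet("hello", -1);     passes the check, overflows buf */
        return 0;
    }

A reviewer skimming that function sees a length check and moves on, and a tool trained on a million similar functions will happily reproduce the pattern.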



I definitely agree with your point about it being used as a crutch. My criticism was more about how the authors evaluated AI’s effect on writing secure code. I’m not saying they shouldn’t have student participants, but the sample should be representative across skill demographics.

To me it’s comparable to a study that makes a general claim about driving ability with lane assist, when two-thirds of the participants only have their learner’s permits.


What is the current feedback mechanism, and would they not use existing feedback mechanisms if available? Professionally, someone should be there to enforce quality and to mentor, but students and hobbyists, even without AI assistants, often don't have anyone to tell them "this is bad, this is best practice" except Stack Overflow.



