90-95% of programmers I've worked with can't "grok" asynchronous or multi-threaded programming, and when they're forced to write async code anyway, it ends up littered with bugs.
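To make this concrete, here's a hypothetical minimal example of the classic class of bug I mean: a non-atomic read-modify-write on shared state. The function names are mine, not from any real codebase.

```python
import threading

counter = 0
lock = threading.Lock()

def unsafe_increment(n):
    # BUG: "counter += 1" is a read-modify-write, not an atomic operation.
    # Two threads can read the same old value and one increment gets lost.
    global counter
    for _ in range(n):
        counter += 1

def safe_increment(n):
    # Holding the lock makes the whole read-modify-write atomic.
    global counter
    for _ in range(n):
        with lock:
            counter += 1

def run(worker, n=100_000, threads=4):
    # Reset the counter, run `threads` workers, and return the final total.
    global counter
    counter = 0
    ts = [threading.Thread(target=worker, args=(n,)) for _ in range(threads)]
    for t in ts:
        t.start()
    for t in ts:
        t.join()
    return counter
```

With the locked version, `run(safe_increment)` always totals `threads * n`; with the unsafe version the total can silently come up short, which is exactly the kind of bug people ship without noticing.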
ChatGPT doesn't need to find 100% of bugs in 100% of code out there.
Think of it as a "better linter". It's cheap, and finding just 50% of bugs would be an enormous step up for quality in most code bases.
No way. A linter is deterministic. ChatGPT is all over the place and, for me, it’s wrong almost every time I ask it anything. I wouldn’t trust it to tell me the ingredients in a pepperoni pizza. I’m definitely not letting it give me programming advice.
Ironically, I had a lot of trouble with a particular recipe (pan-fried gyoza with a crispy bottom), and it was only GPT-4 that gave me a working recipe!
The lack of determinism can be considered a type of strength. Run it multiple times! It might find different bugs each time.
Humans are the same, by the way. If you show a random set of programmers random snippets of code, you'll get a non-deterministic result. They won't all find all of the bugs.
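The "run it multiple times" idea can be sketched as unioning the findings of several non-deterministic review passes. Everything here is hypothetical: `ask_reviewer` is a stand-in for a real model (or human) call, simulated with random misses.

```python
import random

# Seeded bugs for the simulation; a real reviewer wouldn't know these.
ALL_BUGS = {"off-by-one in loop", "unchecked None", "race on shared list"}

def ask_reviewer(code: str) -> set[str]:
    # Stand-in for a non-deterministic reviewer: each pass catches each
    # bug with only 50% probability, so single runs are unreliable.
    return {bug for bug in ALL_BUGS if random.random() < 0.5}

def review_many(code: str, passes: int = 10) -> set[str]:
    # Union the findings across passes: coverage only ever grows.
    found: set[str] = set()
    for _ in range(passes):
        found |= ask_reviewer(code)
    return found
```

A reviewer that finds each bug only half the time misses a given bug across 10 independent passes with probability 0.5^10, i.e. less than 0.1% — the non-determinism averages out in your favor, as long as you also filter out the false positives each pass adds.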
> The lack of determinism can be considered a type of strength. Run it multiple times! It might find different bugs each time.
I’ve only tried it with code a little bit, but what I find is that it gives me hallucinations: output I can't make sense of, and then I have to spend my time working out that it’s not accurate. I don’t want to run on that treadmill.
I’m guessing software will continue the trend of getting less reliable as more people are willing to generate it via an AI.