What I got out of that essay is that you should distrust most LLM responses unless you're willing to do as much work, or more, yourself confirming the accuracy of an unreliable and deeply flawed partner. Whereas if a human "hallucinated a non-existent library or method you would instantly lose trust in them." But, for reasons, we should either give the machine the benefit of the doubt or manually verify everything.
> If your reaction to this is “surely typing out the code is faster than typing out an English instruction of it”, all I can tell you is that it really isn’t for me any more. Code needs to be correct. English has enormous room for shortcuts, and vagaries, and typos, and saying things like “use that popular HTTP library” if you can’t remember the name off the top of your head.
Using LLMs as part of my coding work speeds me up by a significant amount.