I have to say I finally "caved in" to LLMs last month.
While I still think Copilot is useless, I recently had a very complex piece of code that did a lot of crazy bit-flipping and XORing, and I had no idea what it was doing, so I threw it at ChatGPT... and it knew what it was doing.
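To give a flavor of what I mean, it was code in roughly this vein (a made-up Go sketch, not the actual code):

```go
package main

import "fmt"

// A made-up sketch of the kind of opaque bit manipulation I mean,
// not the actual code: an integer hash that mixes bits with shifts,
// XORs, and magic multipliers.
func mix(x uint32) uint32 {
	x ^= x >> 16
	x *= 0x7feb352d
	x ^= x >> 15
	x *= 0x846ca68b
	x ^= x >> 16
	return x
}

func main() {
	fmt.Printf("%08x\n", mix(42))
}
```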
I also needed to rewrite the original code in PHP (for... reasons), even though I know very little PHP. And it did that! It got a few things wrong and I had to correct a bunch of stuff (based on domain knowledge), but... it helped me a ton.
I still can't imagine using it daily for a domain and language I already know (that's why I never used Copilot). But it genuinely helped me in measurable ways when it was something new.
Effectively using AI tools is a skill, much like effectively using Google is a skill. You've already seen glimpses of what it can do. I suggest you keep experimenting to find the boundaries where it works reliably and where it doesn't.
I'm using Copilot daily. I don't use it to write code instead of me, but I do use it to generate lots of obvious code, exactly the way I would have written it. I know when to expect it to do its job perfectly and when I need to supervise it. And I know when I'd spend more time editing the generated code than writing it myself, so in those cases I just write it.
I don't think AI brings 10x or even 2x to my productivity, so you can get by without it. But I can certainly say that using Copilot makes programming less tedious, in the same way that autocomplete, auto-imports, and similar IDE features make programming less tedious.
I also think that whether Copilot helps or not depends on the type of code you're writing. If you're very careful about DRY and your language doesn't have much boilerplate, maybe you'd find it less useful. For example, when I'm writing Go, every second line is of the form `if err != nil { return fmt.Errorf("cannot bla: %w", err) }`. The only "intellectual" part here is the error message, and Copilot generates it 99% perfectly, along with the surrounding code.
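To make that concrete, here's a hypothetical snippet (the function name and file path are made up) showing how little of this pattern actually takes thought:

```go
package main

import (
	"fmt"
	"os"
)

// loadConfig is a hypothetical example; the name and the file path
// are made up. Every fallible call is followed by the same
// if-err-wrap-return block, and the only part that takes any
// thought is the message passed to fmt.Errorf.
func loadConfig(path string) ([]byte, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return nil, fmt.Errorf("cannot read config %q: %w", path, err)
	}
	return data, nil
}

func main() {
	if _, err := loadConfig("config.json"); err != nil {
		fmt.Println(err)
	}
}
```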
That's basically my experience. It's great for learning or getting things done when the subject is related to one you know well (i.e. you understand the fundamentals and can verify responses quickly).
It's not so good for a completely new subject, or one you have a lot of experience in.
They're not useful when you're completely new to a subject? On the contrary, I've found that they are excellent when you have limited knowledge in a domain and less useful when you have expertise. They have allowed me to go from zero to a functioning MVP on numerous computer vision projects, even though I have zero experience in the field.
Do you have programming experience? I probably didn't explain it well; what I meant is that if you have zero relevant experience, it can be difficult to verify correctness.
For example, I'm comfortable with frontend development but hadn't used web workers or WebSockets. ChatGPT was useful for getting up to speed quickly. I've had less luck with topics that are completely new to me; one example is coming up with a training regimen for long-distance running. I have to manually verify every little thing, which ends up taking longer than doing the research the old-fashioned way.
I'd be surprised if you could go from zero to a useful CV app with LLMs, but it's possible I just haven't given it a fair shake.