You're mostly right in your experience. I have spent quite a bit of time trying to get ChatGPT to be a worthwhile piece of my workflow, and I guess sometimes it is, but most of the time, with the basic code, config, or content I try to generate, it gets very fundamental things incorrect. It feels like it's mostly just hype these days.
Can you give an example of a technical software question where you found it wasn't helpful? I'll see if I can get a good answer and post the permalink for you. I suspect you're not phrasing your questions well.
Can you please specify whether you use (paid) GPT-4? Would you kindly provide links to a few examples of very fundamental things incorrect?
My experience: the free version made up a lot of things but still felt very useful - enough to make me want to upgrade to the paid version. With the paid version, I notice very rarely that it hallucinates. It does make errors, but it can correct them when I provide feedback. It is possible that I just do not notice the errors you would notice; it is also possible that we use it differently. I would like to know.
> Can you please specify whether you use (paid) GPT-4?
Paid.
> Would you kindly provide links to a few examples of very fundamental things incorrect?
No, definitely not.
> I notice very rarely that it hallucinates.
Unsure of what "hallucinates" means in this case. Some examples of things I've used it for: docker configuration, small blocks of code, generating a cover letter, proofreading a document, YAML validation, questions about various software SDKs. The outcome is usually somewhere on the spectrum of "not even close/not even valid output" to "kind of close but not close enough to warrant a paid service". When I ask for a simple paragraph and I get a response that isn't grammatically correct/doesn't include punctuation, I'm not sure what I'm paying for.
>> Unsure of what "hallucinates" means in this case
The term "hallucinations" is now commonly used for instances of AI making stuff up - like when I asked ChatGPT (before I had a paid account) to recommend 5 books about a certain topic, and two of the recommended books looked totally plausible, but when I tried to find them, I discovered there are no such books. This is where I see a big difference between GPT-3.5 and GPT-4.
>> I get a response that isn't grammatically correct/doesn't include punctuation
What punctuation? If you mean stuff like commas separating complex sentences, my English is definitely not good enough to spot that. But your mention of punctuation reminded me of the problems ChatGPT has with my native language... any chance you are using ChatGPT in a language other than English?