I feel the same way. Anytime someone says they don't find LLMs all that useful, the exact same comments come out:
"They clearly aren't using the right model!"
"It's obvious they don't know how to prompt, or they would see the value."
"Maybe it can't do that today, but GPT-5 is just around the corner."
I feel more and more that people have just decided that this is a technology that will do everything you can imagine, and no evidence to the contrary will change their priors.
For me, an LLM is a faster Stack Overflow without the fear of my question being closed. I know the first answer won't be what I want. I know I will have to refactor it to suit my style. I know it will be full of subtle bugs.
Oh, and I expect it to be free; I ain't paying for this, just like I wasn't paying for Stack Overflow.
Finally, I hope that in a few years I will be able to just "sudo apt-get install llm llm-javascript llm-cooking llm-trivia llm-jokes" and it will all run locally on my low-end computer, and when I report a bug, it will be fixed six months later when I update the OS.
You are paying, the same way you paid for Stack Overflow: you become part of the process, ask follow-up questions, and deepen the knowledge of the system.
The same applies to AI. The old learning material is gone; your interaction is now the new learning material and ground truth.
PS: Hourly rates for software engineers range from €11 to €213, so one hour on Stack Overflow, searching and sub-querying to resolve a problem, costs you or your employer up to €213. It really depends on what you have negotiated.
And unlike Stack Overflow, which is available to everyone online and has an open content license, the IP that users contribute to ChatGPT-style services is entirely proprietary to the company.
I am not interested in feeding their machine with my brainpower. On the other hand, I happily contribute to Stack Overflow and to open-source software and hardware. I do not think I will integrate LLMs into my engineering workflow until there is a good service or solution that builds up the commons. The huge companies already have far too much influence over key aspects of a knowledge-based society.
Every month, every update, every week, every contract change, every day: the setting is in another menu, in another castle, the setting is a princess, kiss the button, the button is now on, now off, now dark-patterned, now it isn't, deliver proof of work to set the setting, solve 1 captcha, solve 20... come on, it's rodeo time.
I'm with you. Every time I've used LLMs in my work, I've ended up spending more time tidying up after the robot than it would have taken to just do the work myself from the start. I can believe there are some tasks it can do very fast, but my experience is that if you're using it for anything that matters, you can't trust it enough to let it work on its own, and so you just end up doing the work twice.
It's like having an unusually fast-but-clumsy intern, except interns learn the ropes fast and understand context.
More likely it's confirmation bias; most of us ask the wrong questions, trying to confirm what we already believe rather than choosing questions that might falsify our beliefs.
I have some standard tests for LLMs: write a web app version of Tetris, write a fluid dynamics simulation, etc., and these regularly fail (I must try them again on 4o).
But also, I have examples of them succeeding wildly, writing a web based painting app just from prompting — sure, even with that success it's bad code, but it's still done the thing.
As there are plenty of examples to confirm what we already believe, it's very easy to get stuck, with nay-sayers and enthusiasts equally unaware of the opposite examples.
I mean, when people say it doesn't work for them, would it kill them to give links to the chats on ChatGPT.com so everyone can see the prompts used? When they do, it's a different conversation: like the number of R's in strawberry, or 9.11 vs. 9.2. When the complaints are generic with no links, the responses are similarly generic, because both sides are just biased toward believing they're right and the other side is the one that's wrong.
I welcome people picking apart chats that I link to. It's not that I believe LLMs are magic and refuse to adjust my model of how good these things are and aren't, but when people don't give specific evidence, it's hard to actually move the conversation forward.
Because yeah, these things are plenty stupid and sometimes have to be tricked into doing things (which is stupid, but here we are). They're also pretty amazing, but like any hammer, not everything is a nail.
It codes great for me and helps me deliver faster, better-tested code with more features. It literally saves time and money every day. If you can't do that, maybe, just maybe, you have a you problem. But there are many like you in this thread, so you are not alone.
"They clearly aren't using the right model!"
"It's obvious they don't know how to prompt, or they would see the value."
"Maybe it can't do that today, but GPT-5 is just around the corner."
I feel more and more that people have just decided that this is a technology that will do everything you can imagine, and no evidence to the contrary will change their priors.