Hacker News

Have you been using ChatGPT to solve work problems? I have, and it's impressive. This technology can do remarkable things, and companies should be using it to solve problems. And of course it will keep improving. Think about what happens when companies start to build corporate LLMs trained on their docs, code, knowledge base, support tickets, etc. You're going to have superhuman support. Sure, at first this might just assist the current staff, but they are going to be so much better equipped to solve problems.

Imagine for a second that there is actually an LLM agent that can not only understand what you want but actually do things too. Why not have these systems reset passwords, check on the status of a refund, update a mailing address, change billing info, cancel an account, email out documents, etc.? This frees up people to work on more important things. The hardest part of all this, pre-ChatGPT, was fully understanding what the person wanted. That's pretty much solved now.

I think we're headed to a future where you'll actually prefer an LLM agent to a human, because it will know everything and can solve your issue in seconds. It's like when you win the lottery and get the support person who's been at the company 15 years and knows everything inside and out. That's what these LLMs can be.



What do you mean by “work problems”? Writing regex? SQL? Exclusively software development?

Outside of this, even using it for “summarizing” documents, you are lucky if it doesn’t distort or twist meanings to the point of uselessness, except now you have spent as much or more time checking its work as you would have spent just doing it yourself. Checking others’ work is much harder than writing it yourself.

Every time I’ve attempted to ask it something I can’t answer myself or through immediate googling it has been completely useless.

I’m unconvinced that the people raving about it aren’t just developers with a poor eye for nuance who don’t realise how much information they are giving away in their questions. Horses can count, if you give them enough context.

It seems to be generally good at novelty style transfers.

> It's like when you win the lottery and get the support person who's been at the company 15 years and knows everything inside and out. That's what these LLMs can be.

This is pure fantasy, extrapolating what you want to see into an arbitrary future where it’s true. More likely it gaslights the customer into thinking problems are their fault until they give up, but that scenario is mildly cheaper for the companies, which no longer need to pay humans to do the runaround.


I recently gave a code review to a colleague where the regex they had was obviously unfit for its purpose and I politely informed them of such. They responded "Then why would ChatGPT have told me to use it?"
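The thread doesn't show the actual regex, so here is a hypothetical sketch of the failure mode being described: a pattern that handles the obvious test case and silently misses realistic input. The task (matching dollar amounts), both patterns, and the test strings below are all invented for illustration.

```python
import re

# Hypothetical: a plausible-looking pattern an LLM might suggest for
# validating dollar amounts. It accepts "$42" but rejects amounts with
# thousands separators or cents.
llm_suggested = re.compile(r"\$\d+")

# A version that actually fits the (assumed) purpose:
# optional ",NNN" groups and an optional two-digit cents part.
corrected = re.compile(r"\$\d{1,3}(?:,\d{3})*(?:\.\d{2})?")

print(bool(llm_suggested.fullmatch("$42")))        # True  -- looks fine
print(bool(llm_suggested.fullmatch("$1,299.99")))  # False -- unfit for purpose
print(bool(corrected.fullmatch("$1,299.99")))      # True
```

The failure is quiet: with `re.search` instead of `fullmatch`, the first pattern would even "find" `$1` inside `$1,299.99`, which is exactly the kind of confidently wrong output a reviewer has to catch by hand.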

I trust exactly 0 output from any LLM. The problem with any of this sort of generative AI is that there's nothing that stops it from hallucinating facts and spewing those with a confident tone. Until we can figure out the trust and validation step, none of it is truly helpful.

I'm not a luddite, I just find these tools to be woefully lacking. Anything they can do takes me more time to validate than just doing it myself.


It's also pretty terrible at non-generic-webcrap coding.




