
Why not ignore the hype, and just quietly use what works?

I don’t use anything other than ChatGPT 4o and Claude Sonnet 3.5v2. That’s it. I’ve derived great value from just these two.

I even get wisdom from them too. I use them to analyze news, geopolitics, arguments around power structures, urban planning issues, privatization pros and cons, and Claude especially is able to give me the lay of the land which I am usually able to follow up on. This use case is more of the “better Google” variety rather than task-completion, and it does pretty well for the most part. Unlike ChatGPT, Claude will even push back when I make factually incorrect assertions. It will say “Let me correct you on that…”. Which I appreciate.

As long as I keep my critical thinking hat on, I am able to make good use of the lines of inquiry that they produce.

Same caveat applies even to human-produced content. I read the NYTimes and I know that it’s wrong a lot, so I have to trust but verify.




I agree with you, but that's simply not how these things are being sold and marketed. We're being told we don't have to verify: the AI knows all, it's undetectable, it's smarter and faster than you.

And it's just not.

We made a scavenger hunt full of puzzles and riddles for our neighbor's kids to find their Christmas gifts from us (we don't have kids at home anymore, so they fill that niche and are glad to because we go ballistic at Christmas and birthdays). The youngest of the group is the tech kid.

He thought he had us when he realized he could use ChatGPT to solve the riddles and ciphers. It recognized the Caesar cipher with a shift of negative 3, but then made up a random phrase with words of the same length as the "solution." So the process was right, but the outcome was outlandishly incorrect. It wasted about a half hour of his day...
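The funny part is how little actual work the model skipped: a Caesar shift is a few lines of code. A minimal sketch (the riddle's real ciphertext isn't given in the thread, so the message below is made up):

```python
def caesar_shift(text: str, shift: int) -> str:
    """Shift each letter by `shift` positions, wrapping around the alphabet.
    Non-letters (spaces, punctuation) pass through unchanged."""
    out = []
    for ch in text:
        if ch.isalpha():
            base = ord('A') if ch.isupper() else ord('a')
            out.append(chr((ord(ch) - base + shift) % 26 + base))
        else:
            out.append(ch)
    return ''.join(out)

# Encoding with a shift of -3 means decoding is just shifting back by +3.
ciphertext = caesar_shift("merry christmas", -3)
plaintext = caesar_shift(ciphertext, 3)
print(ciphertext)  # the shifted gibberish
print(plaintext)   # round-trips back to "merry christmas"
```

Unlike the model, the code either decodes correctly or fails visibly; it can't invent a plausible-looking wrong answer.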

Now apply that to complex systems or just a simple large database, hell, even just a spreadsheet. You check the process, and it's correct. You don't know the outcome, so you can't verify unless you do it yourself. So what's the point?

For context, I absolutely use LLMs for things that I know roughly but don't want to spend the time to do. They're useful for that.

They're simply not useful for what they're being marketed as: solving problems whose answers you don't already know.


>We're being told we do not have to verify. The AI knows all.

Where are you being told all of these things? I haven't heard anything like it.



