
Agreed, but I'm also open to the likely possibility that LLMs genuinely work quite well in a few niches that I don't happen to work in, like writing run-of-the-mill React components where open source training data is truly abundant.

In day-to-day work I could only trust it to help with the most conventional problems that the average developer experiences in the "top N" most popular programming languages and frameworks, but I don't need help with those, because search engines are faster and lead to more trustworthy results.

I turn to LLMs when I have a problem that I can't solve after at least 10 minutes of my own research, which probably means I've strayed off the beaten path a bit. This is where response quality goes down the drain. The LLM succumbs to hallucinations and bad pattern-matching: disregarding important details, suggesting solutions to superficially similar problems, parroting inapplicable conventional wisdom, and summarizing the top five Google search results and calling it "deep research".





> LLMs genuinely work quite well in a few niches that I don't happen to work in, like writing run-of-the-mill React components where open source training data is truly abundant

I write run-of-the-mill React components quite often, and this has not been my experience with AI either, so I really don't know what gives.



