
Interesting. In my experience LLMs hallucinate so much on stuff I know about that I instinctively challenge most of their assumptions and outputs, and I've found that this kind of dialectic exchange brings the most out of the "relationship," so to speak, co-creating something greater than either of us could in isolation.

Relevant 2018 essay by Nicky Case «How To Become A Centaur»: https://jods.mitpress.mit.edu/pub/issue3-case/release/6




I haven't used LLMs a lot and have just experimented with them in terms of coding.

But about a year ago, I had a job cleaning up a bunch of, let's call them, reference architectures. I mostly didn't touch the actual architecture content; where that needed changing, I went directly to the original authors.

But a lot of them lacked context setting and background. None of it was rocket science; I could have written it myself. But the LLM I used, Bard at the time, gave me a pretty good v0.9 for some introductory paragraphs. Nothing revelatory, but it probably saved me an hour or two per architecture, even including my time to fix up the results. Generally, nothing was outright wrong, but some of it felt less relevant and other material I felt was missing.


> in my experience LLMs hallucinate so much on stuff I know about that I instinctively challenge most of their assumptions and outputs

In my experience most people don’t do that. Therein lies the problem.



