
Your problem may be with those jackasses at work.

I get very useful answers from ChatGPT several times a day. You need to verify anything important, of course. But that's also true when asking people.




I have never personally met a malicious actor who knowingly dumps unverified shit straight from GPT. However, I have met people IRL who gave way too much authority to those quantized model weights and got genuinely confused when the generated text didn't agree with human-written technical information.

To them, ChatGPT IS the verification.

I am not optimistic about the future. But perhaps some amazing people will deal with the error for the rest of us, like how most people don't worry about floating-point error, and I'm just not smart enough to see what that would look like.
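The floating-point analogy is apt: the error never goes away, but everyday code works around it with tolerances rather than exact comparisons. A minimal Python illustration of the kind of error most people never think about:

```python
# Decimal fractions like 0.1 have no exact binary representation in
# IEEE 754 doubles, so tiny errors creep into ordinary arithmetic.
import math

total = 0.1 + 0.2
print(total)        # 0.30000000000000004, not 0.3
print(total == 0.3) # False

# The standard workaround: compare with a tolerance, not exact equality.
print(math.isclose(total, 0.3))  # True
```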


Reminds me of the stories about people slavishly following Apple or Google maps navigation when driving, despite the obvious signs that the suggested route is bonkers, like say trying to take you across a runway[1].

[1]: https://www.huffpost.com/entry/apple-maps-bad_n_3990340


There are some people I trust on certain topics such that I don't really need to verify them (and it would be a tedious existence to verify everything).


Exactly. If you don't trust anybody, who would you verify with?



