
It’s not just you, and your reaction makes perfect sense.

When someone wrote a post, that implied some kind of research and insight you might not have arrived at on your own without a significant time investment and perhaps a number of specific skills or contacts. You could follow the logic and the sources and evaluate each step of the journey. Even if a conclusion were wrong, you might still glean important accurate information, such as a specific resource to use in the future, or appreciate the investigative journey the author went on.

When a post starts by citing an LLM as the source of the information, the author might as well just share the prompt and end the post there. You can run the query yourself and interpret the result on your own. Saying they used an LLM is like saying they used a search engine, clicked a result without thinking, and based everything on it, except they won't even tell you the URL. You have zero idea whether the information is accurate and can't even trust or learn from the analysis.

Like you, I'm not trying to single out this particular submission as an example. Rather, I'm attempting to decode where that feeling (again, you're not alone) might come from in general.




I agree with the general thrust of this, but it's worth noting that the author does _slightly_ better than is typical for LLM-based analyses: they released the dataset of book-labeled posts. You can at least estimate the false-positive rate from that by hand-checking a random sample of the results, as in the sketch below. (You can't estimate the false-negative rate, though, since the posts the LLM missed aren't in the dataset.)
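A minimal sketch of that sampling audit in Python, assuming the released dataset can be loaded as (post, llm_says_book) pairs. Every name here is illustrative, not taken from the author's code:

    import random

    def human_confirms(post):
        # Manual audit step: show the post text and ask a reviewer
        # whether the LLM's positive label was actually correct.
        answer = input(f"Book-related? [y/n]\n{post}\n> ")
        return answer.strip().lower().startswith("y")

    def estimate_false_positive_rate(labeled_posts, sample_size=100, seed=0):
        # Hand-check a random sample of the posts the LLM labeled
        # positive and return the fraction the reviewer rejects.
        positives = [post for post, llm_says_book in labeled_posts
                     if llm_says_book]
        if not positives:
            raise ValueError("no positively labeled posts to sample")
        rng = random.Random(seed)
        sample = rng.sample(positives, min(sample_size, len(positives)))
        rejected = sum(1 for post in sample if not human_confirms(post))
        return rejected / len(sample)

Even a sample of ~100 posts is usually enough to tell a roughly 2% error rate apart from a 20% one, which is the distinction that matters for trusting the analysis.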

Ideally, authors would attempt some sort of validation at the LLM-labeling step and present it, but that rarely happens with these sorts of posts. I think that's pretty telling.



