
What you’re missing here comes down to diversity and volume.

Diversity: because language models in some sense capture an average (even if that average is taken over a subset of the training data), their comments will not be very diverse. I suspect that if you try sampling away from the center of the distribution, you’ll find the quality degrades into nonsense.
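
(A minimal sketch, not from the parent comment, of what “sampling away from the center of the distribution” can mean in practice, assuming temperature-scaled softmax sampling over made-up next-token logits: raising the temperature flattens the distribution, so low-probability tokens get drawn far more often.)

    # Minimal sketch: temperature-scaled sampling over hypothetical next-token logits.
    # Higher temperature flattens the distribution ("away from the center"),
    # so tail tokens are drawn more often.
    import numpy as np

    rng = np.random.default_rng(0)

    def sample_token(logits, temperature=1.0):
        """Sample a token index from logits at the given temperature."""
        scaled = np.asarray(logits, dtype=float) / temperature
        scaled -= scaled.max()  # numerical stability
        probs = np.exp(scaled) / np.exp(scaled).sum()
        return rng.choice(len(probs), p=probs)

    logits = [4.0, 2.0, 0.5, -1.0]  # hypothetical scores for four candidate tokens

    for t in (0.7, 1.0, 2.0):
        draws = [sample_token(logits, t) for _ in range(1000)]
        print(t, np.bincount(draws, minlength=len(logits)) / 1000)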

Volume: the biggest problem is that you’ll end up with an overwhelming flood of merely plausible comments. Language models cannot reason, which means they are incapable of producing the best insights (though they can probably stumble onto insight through “monkeys at typewriters” effects).

So the smart human who has analysed the subject really well, who currently has to rise above a certain amount of background noise, will now get completely swamped by it.



