It might be hubris, delusion, or some combination of the two, but while reading through comments, some of them jump out at me as ChatGPT-generated. I'm sure the tech will advance and eventually appear more organic, but for now it seems to me to be more of a spam issue than an issue of appearing authentic.
Absolutely not, it would produce zero value since it can't generate interesting insight. Ask it why cow's eggs are better than monkey's eggs. Will it reply, "Wait, are you trying to trap me here? I know I'm just software, but cows and monkeys have placentas and don't lay eggs"?
The vast majority of Hacker News comments don't generate interesting insight, either, and most conversations are banal and repetitive anyway. And that's not just here, it's everywhere: Sturgeon's Law prevails over everything.
But if ChatGPT can be trained to remain civil and avoid prejudice and conspiracy theory, the quality of Hacker News would objectively improve even if everything ChatGPT says is vacuous nonsense. And it won't all be nonsense; as the software improves, less and less of it will be. You can't say that about people, unfortunately. The quality of people tends to degrade over time: the more people, the more degradation. That's how you get Eternal September. ChatGPT, however, can maintain a high degree of quality over time, unimpeded by irrationality or emotion.
Maybe we could split the difference and have all comments filtered through ChatGPT. Run sentiment analysis to catch and remove or edit low-quality comments where feasible.
Wouldn't that be an ideal application for ChatGPT etc?
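The filtering idea above can be sketched in a few lines. This is a toy illustration only: the keyword heuristic stands in for a real sentiment/quality model (or a call to ChatGPT), and the word lists, scoring weights, and threshold are all made-up assumptions, not anything a real moderation system uses.

```python
# Toy sketch of comment moderation by quality score.
# The scoring below is a crude keyword heuristic standing in for a real
# model call; HOSTILE, SUBSTANTIVE, and the threshold are illustrative.

HOSTILE = {"idiot", "stupid", "garbage"}        # hypothetical bad-signal words
SUBSTANTIVE = {"because", "however", "evidence"}  # hypothetical good-signal words

def quality_score(comment: str) -> int:
    """Penalize hostile words, reward substantive ones."""
    words = [w.strip(".,!?").lower() for w in comment.split()]
    score = 0
    score -= 2 * sum(w in HOSTILE for w in words)
    score += sum(w in SUBSTANTIVE for w in words)
    return score

def moderate(comments, threshold=0):
    """Keep only comments scoring at or above the threshold."""
    return [c for c in comments if quality_score(c) >= threshold]

kept = moderate([
    "You are an idiot, this is garbage.",
    "I disagree, because the evidence points the other way.",
])
print(kept)  # the hostile comment is dropped, the substantive one kept
```

In practice the scoring function would be replaced by an actual model query, and "edit" rather than "remove" would require a generation step; the filter-by-threshold structure stays the same.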