
The most extreme cases of self-citation are definitely bad news.

There seem to be some authors and author groups who rely almost entirely on self-citation for their citation impact, allowing them to get by with irrelevant or unchecked work. It might be possible to detect that with a metric like self-citations (or self-citations where the author holds a prominent position in the citing paper's author list) as a fraction of overall citations.
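For concreteness, here's a minimal sketch of what that first metric might look like. The data format, names, and threshold are all hypothetical, not taken from any real bibliometric database or API:

    # Sketch of a self-citation rate, assuming a hypothetical dataset of
    # citation events, each a pair (citing_authors, cited_authors) of sets.
    def self_citation_rate(citation_events, author):
        """Fraction of citations received by `author` that come from
        papers the author also (co-)wrote."""
        total = 0
        self_cites = 0
        for citing, cited in citation_events:
            if author in cited:       # a citation received by this author
                total += 1
                if author in citing:  # ...from a paper they co-wrote
                    self_cites += 1
        return self_cites / total if total else 0.0

    # Example: two of three citations to "A. Author" are self-citations.
    events = [
        ({"A. Author", "B. Coauthor"}, {"A. Author"}),   # self-citation
        ({"A. Author"}, {"A. Author", "C. Colleague"}),  # self-citation
        ({"D. Other"}, {"A. Author"}),                   # independent
    ]
    print(self_citation_rate(events, "A. Author"))  # ~0.67

Even this toy version shows the problem with a hard cutoff: a rate of 0.67 could be a citation mill or a small group pioneering a niche technique, which is exactly why it should stay exploratory.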

But overall, it seems like this metric should be limited to exploratory use. There are wholly legitimate cases of frequent self-citation, like mathematicians pioneering a new technique, or astronomy research groups which cite a large support team and produce many sequential findings. Discerning an apparent citation mill like Vel Tech R&D from a legitimate research group like the LSST requires thought, not just statistics.

Meanwhile, the most egregious self-citers are usually doing something else wrong too. Robert Sternberg wasn't just self-citing, he was reusing large amounts of text without acknowledgement, and abusing his journal editorship to publish his own works without peer review. The Vel Tech author in the article seems to be citing his own past works which are irrelevant beyond vaguely falling in the same field, and the enormous range in his work (from food chain models to neurobiology to machine learning to fusion reactors) makes me suspect it's either inaccurate or insignificant.

Ioannidis is damn good at what he does, and was far too sensible to broadly condemn high self-citation researchers. But it would be a real shame to see self-citation rate blindly added to university standards the way citations and impact factor were. The lesson here is that reducing academic impact to statistical measures of papers doesn't work, not that we need some more statistical measures.



