You need a source of trust in these systems. Journals used to fill that role: they had high standards, upheld by editors who selected only worthy publications. Today, many journals are not as trustworthy as they once seemed. It has also become easier to flood journals with submissions and to bullshit your way into publication. The incentive to publish a lot is much stronger now that grant money depends heavily on citation counts. Journals, in turn, can publish more, faster, and with lower submission standards to earn more money. The system is basically eating itself, and we haven't found a cure yet.

Filtering for self-citations is useful for identifying the bubbles. But it is not sufficient to determine whether those bubbles contain only hot air, or whether the scientists involved are actually working on something of substance in a narrow field where few others publish.
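
As a rough illustration of what that filter might look like, here's a minimal Python sketch over a toy citation graph; the paper data, the 0.5 cutoff, and the author-level granularity are all invented for the example, not taken from any real tool:

    # Minimal sketch of the self-citation filter over a toy citation
    # graph. The data, the 0.5 cutoff, and the author-level view are
    # all assumptions made up for this example.
    papers = {
        # paper id: (set of authors, papers it cites)
        "p1": ({"alice", "bob"}, ["p2", "p3"]),
        "p2": ({"alice"},        ["p1"]),
        "p3": ({"carol"},        ["p1"]),
    }

    def self_citation_rate(author):
        """Fraction of citations to `author`'s papers made by papers
        that `author` (co-)wrote."""
        own = {pid for pid, (auths, _) in papers.items() if author in auths}
        total = self_cites = 0
        for auths, refs in papers.values():
            for ref in refs:
                if ref in own:
                    total += 1
                    if author in auths:
                        self_cites += 1
        return self_cites / total if total else 0.0

    # Flag authors whose incoming citations are mostly self-citations.
    authors = {a for auths, _ in papers.values() for a in auths}
    bubbles = {a: r for a in authors if (r := self_citation_rate(a)) > 0.5}
    print(bubbles)  # {'alice': 0.666...}

A high rate flags a bubble, but as noted above it can't tell a citation ring apart from a legitimately narrow field.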

"It Is Difficult to Get a Man to Understand Something When His Salary Depends Upon His Not Understanding It" -Upton Sinclair


Citations should primarily serve to mention relevant work, which often includes authors' earlier works.

The real problem is the abuse of citation metrics and journal brand names (and especially journal-based metrics) as a means of evaluating researchers. What we need is a way of evaluating researchers that does not depend on where they publish or what they cite.

(But I would say that, given that I work on one such system.)
