Just to be clear, I understood what you were saying before. I just disagree. I think the right approach is to select trusted experts and let them make the decisions about their fields.

Again, since this is a tech community, let me use it for an analogy. Evaluating technical hires is a classic problem for non-technical founders: they aren't qualified to judge.

The right solution is not to find some gameable metric of tech-ness, like LoC/day or GitHub stars. Instead, one relies either on direct, experience-based trust or on some form of indirect trust, e.g., finding a technical expert you already trust and having that person interview your first tech hires.

Yes, having expert humans make the decisions is imperfect. But so is a managerialist approach. And the advantage of using expert humans, rather than a gameable metric and managerial control, is that we have centuries of experience with how people go wrong and many good approaches for countering it.




This raises a simple issue: how would the non-experts in an area choose the most appropriate set of experts? In this case it would correspond to funding agencies or governments needing to decide on a "fair" way to establish the right experts to ask. It is very difficult to suggest a way to do this that would not correlate strongly with "highly successful under the current system". That group of experts would, of course, have a strong bias towards the current system.


I think we solve this problem not by finding a universal approach, but through heterogeneity.

We fund academic work because we see value in it. But there are many different kinds of value, so I think it's appropriate that we have many different universities with many different departments, many different funding agencies, and many different foundations. Each group has its own heuristics for picking the seed experts.

There are still systemic biases, of course, but that's true of any approach. And distributed power is much more robust to that than centralized power or a single homogeneous system.


It seems like "selecting trusted experts" alone would defer more to human subjectivity and bias than would be necessary if objective measures were used wherever possible.

Existing community- and expertise-based moderation and reputation systems might not be directly transferable or adequate. But they show there are decentralized measures of reputation, new to this century and not yet tried, that may be preferable to a small group of kingmakers.

I think the biggest problem is leadership and getting the community to cooperate in trying something different. It's not just that no one can mandate these things; it's that multiple constituencies have widely diverging interests: authors, universities, corporations, journals.


I understand why it seems that nominally objective measures would be better. But I don't think cross-field, non-gameable, objective measures of research quality are practically possible.

I also don't think it's a problem that different groups have different interests, etc. As I say elsewhere, I think that diversity is the solution.


You could be right, but I don't see how it can be known with any confidence until a few approaches are given extended, good-faith trials. There are anecdotal examples supporting both scenarios, and the problem seems too important, and too unknowable in advance, not to test drive whatever the top two or three approaches turn out to be.

>I also don't think it's a problem that different groups have different interests, etc.

I don't see how you can deny that getting the community to cooperate in trying something different is a major hurdle.

For how many years have the important problems in the academic process been widely known? And how much success has there been to date in adopting any fundamental changes?

It seems on its face to be crucial.


I doubt there's a single solution, so I think trying to get people in many, many fields to coordinate will just slow down improvement. If anything, the drive to centralize and homogenize, which is part of managerialism, is a big part of some of the prominent problems in academia.



