
I am surprised the committees were "tasked with a 22.5% acceptance rate". Couldn't more than 77.5% of the submissions have been of poor quality?



NIPS gets a ton of submissions, so the law of large numbers governs pretty strongly. Imagine that each paper submitted is independently either good or bad, with 22.5% probability of being good. With 1660 submissions, the total number of good papers follows a Binomial(1660, 0.225) distribution, which has mean 374 and standard deviation 17. Under this model, the fraction of good papers would be somewhere in the range 20.5-24.5% (corresponding to a two-standard-deviation window around the mean) in 95% of reviewing cycles. So even though the quality of the individual papers is totally random, the randomness mostly "cancels out" and the overall number of good submissions is relatively constant.
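A quick way to sanity-check those numbers (a minimal Python sketch; the 1660 submissions and 0.225 probability are just the figures quoted above, and per-paper independence is of course the simplifying assumption):

    import math

    n, p = 1660, 0.225                       # submissions; assumed P(paper is "good")
    mean = n * p                             # expected number of good papers
    std = math.sqrt(n * p * (1 - p))         # binomial standard deviation

    lo, hi = mean - 2 * std, mean + 2 * std  # ~95% window (two standard deviations)
    print(f"mean ~ {mean:.0f}, std ~ {std:.0f}")      # mean ~ 374, std ~ 17
    print(f"fraction: {lo / n:.1%} to {hi / n:.1%}")  # 20.5% to 24.5%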

Of course this is assuming an objective standard for what constitutes a "good" paper. As others have pointed out, the only really meaningful standard is "how does this paper compare to other work being done in this field"? So it's also reasonable to think of NIPS's goal as just trying to present the best papers that were written in any given year, not as bestowing a strictly-defined stamp of objective quality.


When I've arranged conferences, we had a certain number of time slots. There was some flexibility, in that we could allocate a longer time for two talks or a shorter time for three, depending on the talks.

It could be that they had a first pass at a schedule, used that to set a first cut for the reviewers, then adjusted the schedule once they figured they needed to add another 42 papers.

Also, not being accepted does not mean that a paper is poor. They used a ranking system, so it only means that others had papers which appeared to be better.


I am more curious about the inverse: what if more than 22.5% of papers were of acceptable quality? Wouldn't that leave each committee free to pick and choose, thus artificially inflating their disagreements?


Yes, and I think that is essentially why we're seeing these disagreements.

I've heard from lots of professors that a good conference gets a lot of "very-good-but-not-great" submissions, and the job of the program committee is to pick the best among these. I wouldn't be surprised at all if minor personal preferences (which from the outside look rather random) ended up having a big say in the fate of a particular paper. Maybe some reviewers are more forgiving of poorly written but technically strong papers, maybe some reviewers consider certain fields "dead" and so are biased against them, reviewers have wildly different standards on how extensive an experimental analysis must be to be acceptable, ...


Highly (almost vanishingly) unlikely at a "top-tier" venue like NIPS.


Can you say more about why you believe this is true?


Consider the demographic submitting to NIPS. It's a self-selected group within the top researchers in the world in that area. The best people in the field don't want to be seen publishing in the so-called "second-tier" conferences, so they will submit exclusively to the likes of NIPS. And if you're an up-and-coming researcher or research group, you will want to establish credibility by publishing in these sorts of venues, and you will almost surely send your best work there. Add to this the fact that this is a "hot" field, with more and more researchers and research groups getting into it and trying to publish, and I think it's very likely that NIPS gets a lot more good papers than it can possibly accept.


What does "poor quality" mean? There is no absolute standard for quality. "Poor" is something like "less good than usual compared to the recent work in this community". So the top-scoring third-ish of papers sent to the currently-converged-on favourite venue of a community are pretty much by definition not poor. Unless something very weird indeed happens one year. There are usually only very few really excellent papers, though. Most papers are filler in retrospect.

Also, conferences need to accept a decent number of papers so that people will show up and cover the costs of the meeting. Venues are usually booked long before the program is fixed.


Ok, we've gone from "top tier venue, basically impossible to have a large fraction of poor papers submitted" to "Most papers are filler in retrospect" and "conferences need to accept a decent number of papers so that people will show up and cover the costs of the meeting". I guess if I am deciding whether or not to hire a professor I would be tempted to disregard publications in this conference.


It depends what you are optimizing for.

Number of publications is a proxy for how much funding a professor can generate. Not much else.

> "Most papers are filler in retrospect" and "conferences need to accept a decent number of papers so that people will show up and cover the costs of the meeting"

These aren't in conflict. Conferences are often more about networking than the papers. Many papers are filler, but often only in retrospect; they are not obviously filler when presented.


> I would be tempted to disregard publications in this conference.

That was not something I suggested. NIPS is a very good conference, and a paper there is suggestive of quality work. Lots of past NIPS authors have been acqui-hired or regular-hired by Google and Facebook recently in their machine learning spending sprees, for example.


I think it's a bit harsh to call the papers "filler", but the reality is that most papers (in CS, anyway) are incremental work on important but well-studied problems, or work on problems that are fairly narrow or not universally considered important. Reviewers tend to have wildly divergent opinions on how important or interesting that kind of work is.


The "in retrospect" was an important part of that point. Reviewers don't have access to it when reviewing.

Some conferences and journals have a retrospective prize for the best paper of, say, ten years ago. It's a neat way to recognize papers that turned out to be useful.



