
> A decent journal will stop using reviewers who do a lousy job, and complaints by authors about their paper's reviewers ought to be listened to and not dismissed as sour grapes.

As a data point from ML conferences: I had a reviewer give me a strong reject with only a few sentences of justification. The first was a generic "not novel" with no further explanation. The rest was a complaint that my paper had serious flaws, pointing only to an intentionally redacted link and a broken cross-reference. We reported the reviewer to the AC; the other two reviewers were borderline and weak accept, but both had low confidence (of course the inane reviewer had high confidence). The result was that the reviewer's response to our rebuttal was much angrier, and the weak accept said "authors addressed my concerns, but I've decided to lower my score."

I tell this story because at a certain point it stops being a problem with bad reviewers and becomes a problem with bad management. The most important thing a system can have is self-correcting behavior. A system that gives a bad-faith actor a slap on the wrist and then allows them to escalate their bad behavior and influence others is not a well-functioning system. Granted, this is just one example, but I've seen many others (such as a theory paper being rejected for lack of experiments). I've started collecting author tweets about these kinds of situations, but I suspect there are many more that aren't easily conveyed on social media.

I really do want to promote discussion of these things because I think we should always be improving our systems. You're right that nothing is ever perfect, but we can still recognize flaws without being consumed by the impossible goal of perfectionism.




Indeed, my paper to CACM based on this:

https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2399580

was shitcanned by a Microsoft reviewer (if you're the one and you're reading this: GFY)

First of all, he ignored it for six months, and it wasn't until I begged the editor that he got it moving. Then he gave one round of critique, and when I addressed it, he said, "the patent law changed" and closed the process. He knew nothing about patents.

As I said, the editors should not defer to these douchebags. Fortunately I don't need pubs for my career, but some people do.


Funny enough, none of my first-author works have been "published," but since they are on arXiv I have a competitive citation count and h-index (most of my citations come from those first-author works). The only group upset at me is my grad school. It's extremely frustrating, because I have every indication of doing good work and the community clearly agrees. The funniest part of it all is that I've offered to show them my reviews so we can discuss whether I'm doing something wrong or "just having bad luck." But no one wants to even entertain the idea that the system is dysfunctional.
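
For anyone unfamiliar with the metric: the h-index is just the largest h such that you have h papers with at least h citations each. A minimal sketch in Python, with made-up citation counts (not my actual record):

    def h_index(citations):
        # Largest h such that at least h papers have >= h citations each.
        ranked = sorted(citations, reverse=True)
        h = 0
        for rank, count in enumerate(ranked, start=1):
            if count >= rank:
                h = rank
            else:
                break
        return h

    # Illustrative numbers only: four papers have >= 4 citations each, so h = 4.
    print(h_index([120, 45, 30, 9, 3, 1]))  # prints 4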

I will never understand why in CS we use a conference system as our indicator of merit. It is an antagonistic, zero-sum game. There is no recourse for a bad-faith review, and you are incentivized to reject works: rejecting is easier and less time-consuming (every paper has flaws and limitations; just point at them and ignore context), you marginally increase the chance of your own paper getting in, and venues use acceptance rates as the main measure of their prestige (if I cared enough, I'd just get LLMs to spam papers to abuse this). Every part of the system that does something beneficial for the scientific community relies entirely on people acting in good faith, against their other incentives, for the purity of science. That clearly doesn't scale, and it clearly sets the stage for bad actors to overwhelm the system.

I think we only continue with it because they've successfully convinced good-faith actors that there is no other way and that it isn't as bad as it is. I just wonder how much money and time is lost to all of this, before we even account for the ridiculousness of charging for what arXiv does for free: the paper is given to them for free, the reviewing is done for free, and much of the organizing committee is free labor too. Something this important shouldn't have so many similarities to a scam that extracts money from governments and universities.


I am also in a conference-centric CS subfield, but I have published quite a lot in journals as well (both because of multidisciplinary collaborations and because my country has a coarse-grained, metric-based evaluation system where conferences don't count for much, so at least part of my work has to go to journals to appease the system).

In my experience, journal reviewing is much worse, and especially much more corrupt, than conference reviewing.

At conferences you get bad actors with almost no oversight or accountability, true. But at least they typically don't have an axe to grind, because accept/reject decisions tend to be binary (there is no "revise and resubmit", at most optional recommendations for the final version that aren't effectively enforced).

At journals, the "revise and resubmit" option gives reviewers and editors a lot of leverage to push papers their way, and I have very often seen reviewers tacitly and subtly hint that the paper would easily be accepted if the authors included three or four references to papers by Author X (irrelevant to the subject matter, but of course Author X will be the reviewer or a friend). Sometimes it's clear that the reviewers didn't read the paper; their sole interest in it is getting their citations included, which is why they accepted the review and all they look at. Editors could in theory prevent this, but often they just echo reviewer comments, and sometimes they are complicit or even the originators of the corruption (it's also typical for the editor to ask you to cite a bunch of papers from the same journal, even if they're unrelated, because that inflates the impact factor). In any of these cases, since the author is in a position of weakness and there's always the career of some PhD student or other at stake, one doesn't typically dare to complain unless the case is really blatant (which it typically isn't, because they know how to make the point subtly). It's easier to go along with it, add the citations (knowing they will lead to acceptance), and move on.

This has happened to me very often, and I'm not talking about shady special issues in journals that churn out papers like sausages; I'm talking about the typical well-regarded journals from major traditional publishers. At conferences I've gotten all sorts of unfair rejections, of course, but at least the papers I've published accurately reflect my views and I can stand behind them. In journals, maybe half of my papers contain material that doesn't make sense and was added only to appease a corrupt reviewer.

I find that many CS authors who haven't had to publish in journals have a "grass is always greener" mentality and expect that if we moved to journals we would get a fairer review process... and if at some point we do move, they are in for a reality check (not saying that's your case, of course. There are also people who have published in journals and disagree with me because of different experiences! And there are some journals that don't tend to engage in this kind of corruption, it's just that there aren't many of them).


Yeah, I don't submit to journals often, but I've had similar experiences. They have also asked for significant additional experiments requiring significant compute. For example, they wanted us to try our technique on an additional 3-5 networks (all of the same general architecture we had already used), and it was very clear that one of the reviewers was an author of one of those networks (the other two presumably added to "mask" themselves). It would have doubled the project's compute, and they weren't happy when we responded that we simply don't have the budget and aren't convinced the experiments would be meaningful, but that we'd be happy to add those networks to our comparison table. All three reviewers suggested that the paper did not have much value, even though it already had >200 citations via arXiv by that time... (several works built on ours and even got published...)

It's a really weird political game, and I just think we need to move to a system where reviewing is treated as a collaborative/allied effort (as opposed to an adversarial one), or we rethink the whole system entirely and let reviews happen naturally (i.e., submit to OpenReview and allow comments). It's just very clear that a system can't scale if it relies on all members acting in good faith, especially when competition is this high.

Edit: one thing I find interesting is that I contextualize differently from other reviewers. If I see an experiments section where they have a node of 2080 Tis, I think "okay, students or a small lab; they're compute-bound, so are the experiments they performed the right ones under those constraints?" Most reviewers don't seem to contextualize in this manner, and so I see many act as if every lab has access to many A100 nodes. I think this matters because small labs can still do essential work; we just need to recognize that the signal may not be as strong. Their workload is probably higher due to the lack of compute, too (and I subsequently tend to see deeper analysis and less reliance on letting the numbers do all the talking). I don't think the GPU-poor "aren't contributing" because they can't do good work; I think they can't contribute because of gatekeeping. It's insane to expect academics to compete with big tech in a one-to-one comparison. If all you care about is benchmarks, then compute always wins.


> Then he gave one round of critique, and when I addressed it, he said, "the patent law changed" and closed the process.

Was this back in 2014, or more recently?


Slightly after that.

The thing is, "the patent law" had not changed with regard to obviousness (103). CLS Bank v. Alice, which I think you're alluding to, was about 101.


No, I just meant: how much could it have changed since 2014 if this was also 2014 or soon after? It seems like a weird thing for a reviewer to say without further qualification.

I'd have to go ask my dad about how all this works specifically lol


OK. CLS Bank was in 2014.

103 is about obviousness. 101 is about patentable subject matter. I actually had an application rejected on 101 grounds after CLS Bank.


That makes sense to me, since large classes of abstract ideas are not patentable anymore.



