I reread this list every now and then; it never loses its power. I've never reviewed papers in math or hardcore CS like the topics mentioned here, but here's a similar informal list that I mentally use when I review articles for IEEE:
* Bad English: I'm not talking about a few typos here and there (although too many typos are a problem), but gross grammatical mistakes that make sentences unparseable. The thought here is that sloppiness in language correlates with sloppiness in research.
* Omitting important references, or including too many self-references: This is a sure sign that the authors have not really done a good study of the area.
* Sloppy LaTeX equation formatting: Similar to language sloppiness, this also shows that either the authors have not read enough technical articles to see how it's done, or that they don't care (or worse, they cannot learn). See the brief before/after sketch following this list.
* Glossing over outliers in results: Usually results tables would contain outcomes that don't fit into the model/approach the authors are proposing. I look to see if this is discussed.
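To make the LaTeX point concrete, here is a hypothetical before/after sketch (the equation and variable names are made up purely for illustration):

    % Sloppy: math typed in text mode, no punctuation, ad-hoc spacing.
    The error is e = y - f(x) where f is the model and x is the input.

    % Cleaner: math mode throughout, with the displayed equation
    % punctuated as part of the sentence.
    The error is
    \begin{equation}
      e = y - f(x),
    \end{equation}
    where $f$ is the model and $x$ is the input.

It's a small thing, but authors who have read a lot of well-typeset papers tend to get it right without thinking.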
I've rejected papers for bad English, but not because I assume sloppy language will result in sloppy research. Rather, I'm reviewing for conferences which don't have a "major revision" option, and I have to assume that accepted papers will not be heavily revised. So if the submitted paper has language that is sloppy and hard to understand, then I assume the published paper would be the same. And that's not acceptable, because no one benefits from a paper which is hard to understand because of its presentation (not its content).
Don't know about this one. There is probably a correlation there, but we should stop at that. I think the temptation to explain it can lead to all kinds of nasty stereotypes.
There are two types of bad English. There's English from non-native speakers, which can be hard to read at times, but it's not a red flag. Then there's the other type of bad English. It's really hard to describe, but after you've seen it a few times, you can pick up on it within a page or so. It's... a certain type of incoherence and lack of logical thinking that superficially resembles logical thinking. The difference manifests in the structure of how they communicate, from low-level grammar all the way up to top-level organization of the paper.
Since I can't really describe what I'm talking about, I'll give the most blatant and obvious example I know of: Time Cube[1]. Even if you ignore the content and just focus on the sentence structure, it's incoherent, often failing to parse as valid English, with a variety of ambiguous or undefined referents, and freely introducing new undefined concepts. If Time Cube is 100, most papers you would see in this category are never higher than a 3 or 4. But even at that level, the lack of clarity at the structural level usually implies a similar lack of coherence at the content level.
I disagree. My point was not that being able to write perfect English is a prerequisite for good research. However, consistently bad usage is, to me, an indicator of either a lack of self-knowledge/criticism (being unaware of one's own level of knowledge) or of indifference, both of which are red flags. It's an easy matter to have your paper reviewed by a native speaker or an English major from your university.
You perhaps wouldn't argue that attention to clean, professional attire when going to an interview leads to nasty stereotypes, and fashion sense generally has no bearing on, say, coding ability (I think they are inversely proportional), yet what would you think if someone showed up in shorts and a shirt to an interview?
I realize I'm being pedantic here, but you always need to qualify statements like "what would you think if someone showed up in shorts and a shirt to an interview" with "outside of the Bay Area."
Shorts, a T-shirt, and flip-flops are fine attire for a technical interview at most places you would want to work here. It might not be as acceptable at HP, IBM, or Oracle (or those sorts of places), but any <500-employee company looking for a technical employee, particularly a pre-IPO startup, is interested in your ability to deliver, and shorts are completely acceptable [1].
This is relevant to your meta-point though, which is: "know your field's audience, realize what values they'll judge you on, and take care to deliver on those values - it will demonstrate you have both awareness and the ability to focus on details important in that field."
[1] The [un]fortunate flip side of this lack of attention to what you are wearing is that it comes with an overwhelming focus on what-you-know and what-you-can-do. You can't hide behind a suit and a congenial manner.
Even before reading the paper I was 99% sure it was a hoax. Something about the video clinched it and I was sure. But many others (even here on Hacker News) were not sure. They (like me) found the idea fascinating and did not want to dismiss it out of hand. Many people weighed in on the extreme physics and engineering challenges that would have to be overcome to make it possible. But the video's defenders pointed out again and again that the nay-sayers should read and directly address the explanatory technical documents that the "pilots" had posted on their web site.
I didn't even look at the pilots' web site, and I expect most other skeptics didn't either. But should we have?
I ask because the video makers later admitted it was a hoax. So should we, who were so skeptical, really have spent time reading that paper and picking it apart point by point? Or was it enough to let time prove us right?
His argument against computer-checked proofs is wrong, because doing a computer-checked proof of your claim is NOT NEARLY as common as wearing your seat belt. If it were, then all the big proofs would come with a computer-checked proof, just as wearing a seat belt is the norm when the police are nearby.
I don't think it's an argument against computer-checked proofs. He even seems to acknowledge that automatic verification is a good idea. Unfortunately, the world we live in is filled with proofs that cannot be checked by a computer.
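For readers who haven't seen one, here is a toy sketch of what a machine-checked proof looks like, written in Lean 4 (the theorem names are made up for illustration; this is not from any paper discussed here):

    -- Toy machine-checked proofs in Lean 4. The kernel replays and verifies
    -- every step, so a reviewer does not have to trust the author's prose.

    -- Checked by computation: both sides reduce to the same numeral.
    theorem two_plus_two : 2 + 2 = 4 := rfl

    -- Checked against a core library lemma (which was itself verified).
    theorem my_add_comm (a b : Nat) : a + b = b + a := Nat.add_comm a b

    -- A claim the checker rejects: uncommenting this line fails to compile.
    -- theorem bogus : 2 + 2 = 5 := rfl

The catch, as the parent says, is that most published proofs are written in prose and have never been translated into a form a checker like this can consume.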
At some point, there might be nothing left to do except to roll up your sleeves, brew some coffee, and tell your graduate student to read the paper and report back to you.
You can also delegate initial paper evaluation to your colleagues - ask them to let you know whether the paper is plausible.