
On a general note, how do meta-analyses handle publication bias? For example, this study used 343 studies to derive its conclusions. But what if there were 5,000 studies showing no difference between organic and non-organic food that were never published or peer reviewed because they were deemed "not interesting"?


Generally: https://en.wikipedia.org/wiki/Publication_bias#Effect_on_met...

From the paper:

> Funnel plots, Egger tests of funnel plot asymmetry and fail-safe number tests were used to assess publication bias (37) (see online supplementary Table S13 for further information).

(Some fail-safe tests are completely worthless and misleading, but I haven't looked to see which they used.)
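For reference, the classic fail-safe number is Rosenthal's: how many unpublished null studies would have to be sitting in file drawers to drag the combined p-value above 0.05. A minimal sketch in Python, assuming one-tailed p-values per study (this may well not be the variant the paper used):

    import numpy as np
    from scipy.stats import norm

    def rosenthal_failsafe_n(p_values):
        # One-tailed z-score for each study's p-value.
        z = norm.isf(np.asarray(p_values, dtype=float))
        k = len(z)
        # Stouffer's combined Z is sum(z)/sqrt(k); solve for how many extra
        # z=0 studies would pull it down to the 0.05 critical value.
        return max(0.0, z.sum() ** 2 / norm.isf(0.05) ** 2 - k)

The standard criticism is that it assumes the missing studies average exactly zero effect and ignores effect sizes entirely, so it can return reassuringly huge numbers even when bias is severe.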

> Strong or moderate funnel plot asymmetry consistent with a publication bias was detected for approximately half of the parameters. However, it is not possible to definitively attribute discrepancies between large precise studies and small imprecise studies to publication bias, which remains strongly suspected rather than detected where asymmetry is severe (see Table 1 and online supplementary Table S13).
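For the curious, Egger's test is just a regression: standardize each effect by its standard error and regress against precision; an intercept far from zero signals small-study asymmetry. A rough sketch (generic interface, not the paper's actual analysis):

    import numpy as np
    import statsmodels.api as sm

    def egger_test(effects, ses):
        # Standardized effects regressed on precision (1/SE); the intercept
        # estimates funnel plot asymmetry (Egger et al. 1997).
        effects, ses = np.asarray(effects, float), np.asarray(ses, float)
        fit = sm.OLS(effects / ses, sm.add_constant(1.0 / ses)).fit()
        return fit.params[0], fit.pvalues[0]  # intercept and its p-value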

There's a table of all the endpoints they looked at, like the various antioxidants, with a column grading publication bias from none to high:

> Publication bias was assessed using visual inspection of funnel plots, Egger tests, two fail-safe number tests, and trim and fill (see online supplementary Table S13). Overall publication bias was considered high when indicated by two or more methods, moderate when indicated by one method, and low when indicated by none of the methods. The overall quality of evidence was then assessed across domains as in standard GRADE appraisal.
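The aggregation rule itself is mechanical; paraphrased in Python (my reading of the quoted text, not the authors' code):

    # One boolean per method: funnel plot, Egger test, fail-safe N,
    # trim and fill -- True if that method indicated bias.
    def bias_grade(method_flags):
        n = sum(map(bool, method_flags))
        return "high" if n >= 2 else "moderate" if n == 1 else "low"

    bias_grade([True, False, False, True])  # -> 'high'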

Copy-pasting the publication bias column and running it through `xclip -o | sort | uniq -c` shows 16 parameters where publication bias was estimated to be low or 'none', 17 'medium', and 3 'strong'. That said, publication bias tests are considered fairly weak: you need a lot of studies before you can be confident bias isn't there. Looking at figure 3, the number of studies per parameter varies from what looks like a low of 4 to a high of 332, so the publication bias estimate for the latter is probably good, but for the former it means next to nothing.
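The same tally without leaving Python, if you'd rather not shell out (`bias_column.txt` is hypothetical, just wherever you saved the pasted column):

    from collections import Counter

    # One grade label per line, as pasted from the table.
    with open("bias_column.txt") as f:
        print(Counter(f.read().split()))
    # e.g. Counter({'medium': 17, 'low': 16, 'strong': 3})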


They usually have a definition for the kinds of studies they'll use. For example, they might say "only studies of this size between these years that measure this thing". I'll bet if you read the full analysis, you'll find something like that in it.


You're correct. The full study has a cool flow chart of how they selected their papers, how they removed certain studies, mixed in other studies, etc. But it doesn't handle the case of papers that were never published in the first place, and I'm not even sure that case is possible to handle, which is what I'm asking.


In addition to publication bias, it's also very easy to pick selection criteria that produce the result you're trying to get, and then justify those criteria after the fact.



