
Replicating cancer papers: this is one of those ideas that, although pointless and impractical, is seemingly impossible to criticize.

Now the leaders of this project are basically saying - we can't do difficult experiments, and can't replicate studies which have already been replicated by others.

What have we learned exactly, except that experimental biology is difficult?



> pointless and impractical

These papers underlie a lot of modern research and treatments. If they aren't reproducible, then the papers' results, and the other papers that build on those results, come into question. This is a serious, documented problem [0].

> What have we learned exactly, except that experimental biology is difficult?

The problem wasn't that the project couldn't do difficult experiments; it was that the original papers didn't provide enough information. That's what we learned here.

[0]: https://en.wikipedia.org/wiki/Replication_crisis


I disagree with your characterisation of the problem and the proposed solution.

Cancer research is not an edifice constructed on foundational results. It is not like physics in this regard, where, for example, accurate measurement of the gravitational constant is vitally important, and repeat measurements over time are generally more precise but not wildly different, nor do they call the constant's importance into question.

Take one of the papers they reproduced: "Transcriptional amplification in tumor cells with elevated c-Myc". This is an interesting result obtained under highly contrived experimental conditions. Nobody is going to base a career, a drug development program, a patient's treatment, or even a PhD on this result alone. I say this as a translational cancer scientist and medical oncologist.

The contribution of this paper is to our knowledge of the biology of Myc, which has a multitude of context-dependent actions. Myc is studied in a variety of different ways using many different methods, and if this paper were being repeated today, the technologies and techniques would be quite different. The result of this paper is not plugged into the central dogma of cancer biology, setting us down an erroneous path for the next thousand years.

So, to return to my original question: what have we learned in trying to replicate this study that we didn't already know? The money would have been better spent on orthogonal validation and extension of the result using modern techniques; another name for this is 'science'. The replication crisis framing suggests that this routine extension/validation process is somehow less important than going back and repeating the original experiment, which I think is a complete misunderstanding.

You also seem to be saying that we should ask researchers to document all experimental conditions in such excruciating detail that a pastry chef or meteorologist could walk into a lab and successfully reproduce the experiment. That is an impossibly high bar to set for scientists who are already working under very difficult conditions, and it is not the solution.


>"what have we learned in trying to replicate this study, that we didn't already know?"

It sounds like you think it doesn't matter whether that result was published vs. "Transcriptional amplification in tumor cells with depressed c-Myc".

If I misread that title, replace it with whatever the opposite result would be in this case. Anyway, like I said elsewhere, if it isn't worth trying to replicate, then the original study should never have been funded. How many of these studies exist just as a "jobs program"?

It sounds like you think it is most of them, in which case, great. We can then easily cut out the 90%-plus of funding currently going toward jobs-program stuff and devote it to the <10% that is worthwhile...


The effects of Myc have been tested in many different ways since that study. Just type "Myc" and "transcription" into PubMed and you can see for yourself.

I'm not sure what you are trying to achieve by making outlandish comments about cutting funding or jobs programs (the idea that cancer research is a jobs program is utterly hilarious!!), but it doesn't really seem like you read my comment.


I don't see what's confused you about my post.

If no one cares whether the authors got it all wrong and "Transcriptional amplification in tumor cells with elevated c-Myc under conditions xyz" should actually be "Transcriptional amplification in tumor cells with depressed c-Myc under conditions xyz", then why was this funded?

If someone does care then it should be replicated.


> If someone does care then it should be replicated.

It is tested in other forms, but it isn't generally replicated in the sense you seem to think is paramount. Nor should it be. I'll give you a silly example: would you support a project to go back and replicate electromagnetism experiments performed at the start of the century? Say we do repeat Millikan's oil drop experiment and get a different result (which is actually what happened): does this mean there is a reproducibility crisis in physics? If we don't repeat the exact experiment, does that mean Millikan shouldn't have received funding? Why is replicating the result the way Millikan did it more useful than doing other related experiments with more sophisticated or different apparatus? The latter is actually MORE useful.


>"would you support a project to go back and replicate electromagnetism experiments performed at the start of the century?"

Yes, of course! That is a great idea. Everyone should be doing this experiment in high school or undergrad science class by now. In fact that seems to be a thing:

https://hepweb.ucsd.edu/2dl/pasco/Millikans%20Oil%20Drop%20M...

https://www.ucd.ie/t4cms/EXP18%20millikan.pdf


Maybe this makes more sense to you:

If no one cares whether the authors got it all wrong and "Transcriptional amplification in tumor cells with elevated c-Myc under conditions xyz" should actually be "Transcriptional suppression in tumor cells with elevated c-Myc under conditions xyz", then why was this funded?


Maybe not a pastry chef or meteorologist, but certainly someone reasonably "skilled in the art" should be able to, with minimal difficulty.


What's the point of cancer papers at all if they're not reproducible? Are we just sinking government money into artificially inflated bio portfolios?


> although pointless and impractical

I disagree. Consider computational biology papers – you should be able to show me code that takes the raw data and turns it into your results. Peer review is a sort of global, public code review in this case. There is a lot of value (education, validation, propagation, sharing, etc), and it's completely practical. Sadly, we (as a field) are nowhere near that, IMO.
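To make that concrete, here is a minimal sketch of what "code that takes the raw data and turns it into your results" could look like. The file names and the normalisation step are hypothetical placeholders, not taken from any particular paper:

    # Hypothetical sketch of a "raw data in, published numbers out" script.
    # File names and the analysis step are illustrative placeholders.
    import pandas as pd

    RAW_COUNTS = "raw_counts.csv"       # raw data deposited alongside the paper
    RESULTS_OUT = "table1_summary.csv"  # the summary table reported in the paper

    def main():
        # Load the deposited raw data exactly as archived.
        counts = pd.read_csv(RAW_COUNTS, index_col=0)

        # Every transformation from raw data to the reported numbers lives in
        # code, so a reviewer can re-run the whole pipeline end to end.
        cpm = counts.div(counts.sum(axis=0), axis=1) * 1e6  # counts per million
        summary = cpm.mean(axis=1).sort_values(ascending=False)

        summary.to_csv(RESULTS_OUT, header=["mean_cpm"])

    if __name__ == "__main__":
        main()

The specific analysis doesn't matter; the point is that a reviewer can run one script against the deposited data and diff the output against the paper's tables.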


I have not said that reproducing a result is pointless or impractical. If you are a computational biologist, you will surely know that there are endless benchmarking papers comparing published methods for sequence alignment, variant calling, differential expression, genome reconstruction, phylogeny reconstruction, cell segmentation, etc. You will also know that if the methodology behind a result is obscure, it will not be repeated or cited much, and an accessible re-implementation of the method will easily overtake it. Published data sets are also endlessly re-analysed. If someone publishes a code-heavy paper without any code, they get chewed out on Twitter or at conferences (rightly so). Top-tier journals have also started requiring publication of custom code.

My argument is that a 'reproducibility project' of the kind described above is pointless and impractical. I do not see evidence that this project has taught us anything.



