Hacker News
Response to “Unexpected mutations after CRISPR-Cas9 editing in vivo” (biorxiv.org)
121 points by lucapinello on July 5, 2017 | hide | past | favorite | 49 comments




I love that. Xiaolin Wu is really going in on them. More science needs to be like that.


In most labs there are journal clubs that function as this, but a lot of the discussion stays within the institute.



They did whole-genome sequencing of mice treated with CRISPR-Cas9 and compared them to a different mouse to see whether any mutations were caused?

Why didn't they just take samples from the mouse before and after?


Some of my colleagues speculated that this was not the intended study. Originally, they might have tried to delete a gene, but the expected phenotype didn't show up, which would have put an end to that line of research. But instead of completely abandoning the project they figured they could sequence the mice that were still around to see what the off-target effects were, and they thought they found something surprising and important.


Does that qualify as "p-hacking?"


Amid all the stupidity and hype around blockchain tech, the idea of committing the hypothesis of your research to an immutable record beforehand so as to avoid p-hacking seems to be one of the best ideas I've heard, provided you could avoid the problem of people simply committing a tournament of alternative answers...


Even more interesting is the proposal I've seen where you submit the intro, theory, and methods sections for review at a journal before you do the experiments. Once what you're planning to do is deemed good and interesting by the reviewers, you get the go-ahead, and the journal will publish your results no matter what they are. Not only does it prevent p-hacking, it also removes the bias against negative results.

FWIW, anti-p-hacking measures are already implemented for clinical trials; they have to register expected outcomes at the beginning.


Journals should not be given the power to implicitly decide what research gets done, and conversely they will never accept being forced to publish all results, because of the vast amount of research that goes nowhere.

The whole value for a journal is to be able to select once the results are known because that's how you build prestige.

All you are doing, in effect, is laying out a good case for why journals should, and probably will, simply die: this level of transparency would be a great thing to have but is incompatible with the publishers' model.


Many of the journals currently publishing these kinds of Registered Reports only publish special issues, which might help with your concern about journals being given control over what work is done in a field, though it also limits how much of the literature can benefit from the new approach.

Another way of thinking about a journal's incentives is that, by moving the primary peer-review stage to before a study has actually been conducted, you greatly expand the number of studies a journal will interact with, which may provide a larger and more valuable role for entities like journals in a world where actual publishing is a trivial matter.


The format is called "Registered Reports" and a current list of journals that use it is available at https://cos.io/rr/


Except we don't need a blockchain for it: just publish it in a well known place where others can collect it.


They are suggesting a blockchain to prevent mutation of the publication (pun intended) after the fact.


The "where others can collect it" part solves that: Suppose that university A announces their study method and university B makes a mirror. Then if A were to forge the announcement after the fact, B could tell. (You would need cryptographic signing, though, to see if the record mirrored by B has not been tampered with.)


> (You would need cryptographic signing, though, to see if the record mirrored by B has not been tampered with.)

Yes, hence the blockchain, which provides all that in a fairly convenient package.

You could even piggyback on any existing public blockchain, for a few fractions of a penny.


If you're making a notary system, the part you can get from a blockchain is a very minor percent of your code, and comes with a lot of complicated baggage. It's not convenient at all.


Or you could publish an announcement in a newspaper or similar. This problem has been solved for a long time and doesn't really need fancy crypto.


You could also just distribute a hash of the original text, or the entire original text itself. No need for a blockchain and all that processing.
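A minimal sketch of the hash-commitment idea, using only Python's standard-library hashlib; the registration text and hypothesis here are invented purely for illustration:

```python
import hashlib

# Hypothetical pre-registration text (illustrative only).
registration = (
    b"Hypothesis: treatment X reduces outcome Y. "
    b"Primary analysis: two-sided t-test, alpha = 0.05."
)

# Before the study: publish only this digest somewhere well known.
commitment = hashlib.sha256(registration).hexdigest()

# After the study: release the full text; anyone can re-hash and compare.
def verify(published_text: bytes, published_digest: str) -> bool:
    return hashlib.sha256(published_text).hexdigest() == published_digest

print(verify(registration, commitment))                   # True
print(verify(registration + b" (amended)", commitment))   # False
```

Any after-the-fact edit to the text changes the digest, so a mirror holding only the 64-character hash is enough to catch tampering.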


The problem with using a blockchain for this is that there are legitimate reasons to withdraw such a registration, ranging from mistyping (s/increase/decrease anyone?) to, depending on your kind of study, the inclusion of personally identifiable information. The solution currently being pursued is to expand pre-registration, a practice from clinical research, into other fields.

In pre-registration a trusted intermediary is used to store a read-only copy of your study design and materials, ideally including a pre-analysis plan that specifies the analysis you will run for hypothesis testing (so that we can cut P-hacking out of the picture at the same time). That intermediary can allow researchers to withdraw registrations while preserving a stub that shows everyone a registration used to be there and why it was removed.

That is how pre-registrations work on the Open Science Framework (https://osf.io), a cross-disciplinary FOSS web tool run by the Center for Open Science.


The real problem is not p-hacking (though that is bad) but the lack of replication. All problems in science can be solved with replication, but almost nobody does it because it is near impossible to get a replication study published in a worthwhile journal.

No other problem in science is more important to solve.


Just like peer review is a necessary step for a study to be taken seriously, replication studies should be regarded as the direct next step. There needs to be incentive to reproduce the experiment.


Replication is far more important than peer review. I would rather have 1/10th the number of "advances" published if I could rely on the results being true and, even more importantly, robust.

When I used to run a lab we used to only consider results that had been replicated in at least two unconnected labs. Everything else was just a waste of time (on average).


Eh, I would classify it as simple exploratory/observational science. Not all science is done to prove a hypothesis. Sometimes you just go take a look at a forest / cave / stretch of the sky / animal genome, and see what you find.

Once you have enough observations, then you start forming and testing hypotheses.


Great question. The answer is yes, but you can control for it.


Hard to believe, isn't it? You can see why the field was furious.


They have a response paper, though:

"Here we provide additional confirmatory data and clarifying discussion, including sequencing data showing extensive heterozygous mutations throughout the genome in the CRISPR treated mice, which are all progeny of inbred mice purchased from a commercial vendor (JAX). The heterozygosity in these cases cannot be parentally inherited. The summary statements in our Correspondence reflect observations of a secondary outcome following successful achievement of the primary outcome using CRISPR to treat blindness in Pde6b/rd1 mice. As the scientific community considers the role of WGS in off-target analysis, future in vivo studies are needed where the design and primary outcome focuses on CRISPR off-targeting. We agree that a range of WGS controls are needed that include parents, different gRNAs, different versions of Cas9, and different in vivo protocols. We look forward to the publication of such studies. Combined, these results will be essential to fully understand off-targeting and can be used to create better algorithms for off-target prediction. Overall, we are optimistic that some form of CRISPR therapy will be successfully engineered to treat blindness."

http://www.biorxiv.org/content/early/2017/06/23/154450


Thanks, I was looking for this info earlier (https://news.ycombinator.com/item?id=14448371):

>"In our case, 56 zygotes were harvested from six pregnant females bred to six stud males and injected with CRISPRCas9." http://www.biorxiv.org/content/early/2017/06/23/154450

So they started with 56 zygotes, of which 11 survived injection of Crispr/etc. Of these 11, five and two mice showed some evidence of containing NHEJ/HDR cells, respectively.

In the case of NHEJ, they do not tell us what percent of cells were "edited" per mouse (or I missed it when I read that paper earlier). However for HDR it was only about 1/3 and 1/5 for the two mice they found. So this suggests the "editing" is occurring not at the zygote stage, but somewhat later (based on the percentages, perhaps at the 4 or 8 cell stages).

Assuming the edit/mouse rate is similar for NHEJ and HDR (somewhat dubious), it looks like 4x56 = 224 to 8x56 = 448 cells were treated with crispr/etc and ~ 7 got "edited". This gives ~1-3% "edits" per treated cell.
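The arithmetic above can be checked in a few lines of Python, under the same assumptions as the comment (editing at the 4- to 8-cell stage, and a similar per-cell edit rate for NHEJ and HDR):

```python
zygotes = 56
edited = 5 + 2            # 5 mice with NHEJ evidence + 2 with HDR evidence

cells_4 = 4 * zygotes     # treated cells if editing occurred at the 4-cell stage
cells_8 = 8 * zygotes     # ... or at the 8-cell stage

print(cells_4, cells_8)                                      # 224 448
print(f"{edited / cells_8:.1%} to {edited / cells_4:.1%}")   # 1.6% to 3.1%
```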



Great example of why the peer review system is broke.

I've seen first hand how the peer review system works in the United States. Foreign postdocs studying in the United States are desperate for a Green Card. They peer review as many papers as they can, as quickly as they can, to prove something or other for the purposes of securing a Green Card. Some of these foreign postdocs do a good job; many do not. Also, overworked professors don't always take the time to carefully read a paper and don't care. The result is that a lot of papers make it through peer review that shouldn't.

I'd rather have my paper peer reviewed live on the internet by allowing people to comment on it beneath a PDF link than to suffer through another drawn out peer review process. People who care have a text-box where they say what they want, and if no one cares then it indicates I need to do research on something else!


I do not think this is correct, at least for most of the sub-fields of computer science and physics. Prestigious conferences and journals invite only well known researchers to review, who are mostly professors. Few post-docs get invited. Each paper gets multiple reviews (3-5), and conflicting reviews get discussed making the review process quite rigorous.

Reviewing a paper is mostly charity and gets you nothing in return, except the expectation that when you submit a paper, someone will do the same for you. It does not help in the Green Card process at all, nor does it boost your academic credentials.

Regarding overworked professors: they are allowed to delegate reviews internally to students to reduce the time they have to spend.


It seems that for the glamor journals in biology it definitely is broken, at least for click-baity manuscripts like this. This paper was immediately shat upon by everyone in my department. There's a running joke often said when things like this crop up: "just because it's in Cell/Nature/Science doesn't mean it's false".


> I do not think this is correct, at least for most of the sub-fields of computer science and physics. Prestigious conferences and journals invite only well known researchers to review, who are mostly professors. Few post-docs get invited.

What about PhDs? You've never seen PhDs review papers in CS? It happens for prestigious conferences too.


CS is diverse in its field cultures, but yes. You don't even need a PhD to be on a PC (program committee), but usually every field has a crowd of reviewers that defines it by constantly being on PCs.


Experience in CS, Math or Physics does not necessarily carry over to other branches of science.


Seriously, I don't know of any top-quality journal that uses large numbers of postdocs as reviewers. In academia, being invited by a top journal as a reviewer is an honor; it means one is recognized by one's peers. In my field I can't think of any quality journal whose reviewers are mostly postdocs.

Also, your pointing fingers at foreigners is quite disturbing. Why do you think they caused the problem? Do you have any evidence? I don't think you could say this in any academic workplace without unfavorable consequences.


He mentioned them because their unique situation creates perverse incentives.

Your last line is one of the most frustrating parts of academia. Not being able to criticize misaligned incentives simply because it involves foreigners is crazy.


>Your last line is one of the most frustrating parts of academia. Not being able to criticize misaligned incentives simply because it involves foreigners is crazy.

I don't know why you summarize my lines this way. I basically want to know whether he can offer some evidence to support that idea.


What does happen very frequently however is a prestigious researcher being asked to review papers, and passing them to his/her postdocs.


>Foreign postdocs studying in the United States are desperate for a Green Card. They peer review as many papers as they can, as quickly as they can, to prove something or other for the purposes of securing a Green Card. Some of these foreign postdocs do a good job; many do not.

I have never heard of this. No one knows whether you review a paper; it has no bearing on your job security or your visa status. This post is baffling.


You might not be credited by name but having a college professor who is willing to say you are indispensable to the productivity of the department isn't worth nothing.


He would likely be more inclined to argue this if a postdoc helps his lab produce more research or takes care of a lot of admin stuff


That's it - a publication from the postdoc is, in career terms, worth a fortune, while a paper review is worth a nickel


The comment on visas potentially refers to the fact that one of the criteria for certain visa categories is that one be distinguished in one's field. Serving as a reviewer demonstrates that one has acted as a judge of one's peers, and is therefore of superior standing.


Also a great example of the system working as intended. A group publishes a dubious study with potentially important implications and people queue up to refute it, correcting the scientific record.


> Great example of why the peer review system is broke.

s/broke/broken/

(unless you meant bankrupt, then broke is fine).

> Foreign postdocs studying in the United States are desperate for a Green Card. They peer review as many papers as they can, as quickly as they can, to prove something or other for the purposes of securing a Green Card. Some of these foreign postdocs do a good job; many do not. Also, overworked professors don't always take the time to carefully read a paper and don't care. The result is that a lot of papers make it through peer review that shouldn't.

Well, it's unfortunate that you seem to have experienced the worst review processes. It is also unfortunate that you automatically assume that it's _foreign_ postdocs doing the shoddy work. Maybe you had bad luck, or maybe you're oversimplifying for the sake of argument, dunno, but that does not jibe very well with what I've seen, all the way up from being a student author, student reviewer, reviewer, PC member, journal reviewer, and Program Chair of a conference.

At least in CS, the number of reviews done by postdocs is not a large percentage. Most PCs are composed of academic and industry researchers, primarily professors and professionals, and increasingly reviewers are no longer allowed to use sub-reviewers without attribution. The number of postdoc PC members is low, and for good reason: postdocs typically last less than two years, and during this period they are more focused on publishing their own research as a last push to improve their academic standing before finding a job. I get the impression that it's known that postdocs are too busy to do a good job at reviewing.

> I'd rather have my paper peer reviewed live on the internet by allowing people to comment on it beneath a PDF link...

I guess you never had a Boaty McBoatface moment. I think open review without experts would be disastrous, as the nitty-gritty details that absolutely have to be right should be vetted carefully by experts, rather than spending forever in a shouting match with the loudest voices in the crowd. Can you imagine peer review by YouTube comments? *shudder*


The peer review system is broke because this manuscript was published? You seem to be arguing two opposing ideas. More rigorous peer review, be that on a pre-print server (comment box) or during publication (suffering the drawn out process), would've prevented this paper from being published. Many journals welcome post-publication review, like this response, as well. There are reasons why the peer review system is broken, but this doesn't seem like one of them.


> Great example of why the peer review system is broke.

Don't know about that, but the inbred cloned mouse business seems to have quality control issues.


Is a green card that much of a big deal to justify that kind of behavior?



