This recent note in an "obscure but high-profile journal" (what?!) is entirely a technical observation. I fear the CRISPR phenomenon has attracted a lot of people, even scientists, who know just enough about the topic to want to affect stock prices. This also seems to be the Wired article's interpretation, despite its own semi-clickbaity headline: "But most scientists, while skeptical of the results, were more disappointed in the way the paper was blown out of proportion."
The promise of CRISPR was not that it was to be the singular tool that would change how genetics and biology behaved. Rather, the Cas9 protein could be used as a cheap and fast technique to colocalize with specific DNA sequences. And that is fantastic, useful, and worth all of the praise the tool has gotten. But that's it.
That Cas9 also happened to allow modification at specific sites, using its inherent DNA cleavage capability, was a great little bonus - and that it could be cleverly controlled to use that capability to actually, today, cure some diseases, was amazing.
There is very little in biology that is binary in nature - it takes a lot of energy to maintain such an entropic dam between two states. So a protein that cuts at ATTGCTTGTA with 80%/hr/molecule efficiency will also cut ATTGGTTGTA with some non-0% efficiency as well. Every scientist who works with Cas9 knows this (or should). I don't think Cas9 is fantastic for its ability to cleave DNA - we've had restriction enzymes for a long time - but rather for its ability to colocalize with arbitrary DNA sequences. And with that colocalization we can now bring to bear all the rest of the fantastic tools we already have in synthetic molecular biology. And that is the interesting part of our future - in which Cas9 is but a single (very useful) tool in our kit.
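To make the "non-0%" point concrete, here's a toy rate model in Python. The Poisson-process framing and the per-mismatch penalty below are assumptions of mine for illustration, not measured values:

    # Toy model only: real off-target activity depends on mismatch position,
    # PAM, chromatin state, delivery, etc. The penalty constant is made up.
    import math

    ON_TARGET_P_PER_HOUR = 0.80    # the "80%/hr/molecule" figure from above
    PENALTY_PER_MISMATCH = 0.01    # assumed ~100x rate drop per mismatch

    def cut_probability(mismatches, hours):
        """P(a given site has been cut by time t), treating cutting as a Poisson process."""
        k_on = -math.log(1 - ON_TARGET_P_PER_HOUR)          # rate giving 80% in the first hour
        rate = k_on * (PENALTY_PER_MISMATCH ** mismatches)  # slower at off-target sites
        return 1 - math.exp(-rate * hours)

    for mm in (0, 1, 2):
        print(f"{mm} mismatches: 1 hour = {cut_probability(mm, 1):.4f}, 1 year = {cut_probability(mm, 24 * 365):.4f}")

Even with a steep penalty per mismatch, an off-target site creeps toward certainty if the protein sticks around long enough - which is exactly why the time dependence mentioned further down matters.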
“If it squirms, it's biology. If it stinks, it's chemistry. If it doesn't work, it's physics. And if you can't understand it, it's mathematics.”
― Magnus Pyke
Anyway, thinking about your example it occurred to me that someone using Cas9 should be able to look at the sequenced genome and identify all sequences sufficiently similar to the target sequence that could be unintentionally affected by Cas9. There's one idea for an experiment. As for mitigation, they could try to find another specimen (fat chance) or species which does not have such similar sequences. Of course, mutations will still happen for a myriad of reasons other than Cas9, as they always do.
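A minimal sketch of that search, assuming a plain Hamming-distance scan. It ignores the PAM requirement, indel ('bulge') sites, and the reverse strand - real off-target finders like Cas-OFFinder handle all of that - but it captures the idea:

    # Find genome positions within a few mismatches of the guide sequence.
    def off_target_candidates(genome, guide, max_mismatches=3):
        n = len(guide)
        hits = []
        for i in range(len(genome) - n + 1):
            window = genome[i:i + n]
            mismatches = sum(a != b for a, b in zip(window, guide))
            if mismatches <= max_mismatches:
                hits.append((i, window, mismatches))
        return hits

    # Made-up fragment containing the two example sites from earlier in the thread:
    print(off_target_candidates("CCATTGCTTGTACGATTGGTTGTACC", "ATTGCTTGTA", max_mismatches=1))
    # -> [(2, 'ATTGCTTGTA', 0), (14, 'ATTGGTTGTA', 1)]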
Sure - those are good ideas, and they do (did) such searches prior to determining which sequence to look for in the genome. Just a few caveats off the top of my head though:
- When you are sequencing a genome, how do you know whether or not the windowed sequence you have is found only once in the genome? A solvable problem for most sequences, but still not a trivial caveat.
- Organisms utilize genomic duplication to create backups/redundancies/variants of their most critical systems. These backups eventually 'drift' from their parent, but how much drift is required before you can be sure a backup won't be accidentally 'found' by Cas9?
- Cas9, like squirmy biology, is time dependent. If you give it 100 years it will likely cut almost any sequence at some point during that period. So how do you measure/account for the literal number of Cas9 proteins, much less the amount of time they remain in a given nucleus?
- DNA is relatively hardy as far as biological molecules go, but there are all sorts of different kinds of chemical errors (much less intentional modifications) associated with DNA that could make a given sequence more/less likely to be misread by Cas9.
- Are all your gRNAs actually the exact sequence you think they are? RNA is much less stable than the hardy DNA mentioned above. What if some of your RNA is being modified before it even gets to go homing around for its complementary sequence?
If you calculate information density, it turns out that 20 base pairs is VERY specific. The human genome is ~3 billion base pairs, and each position is one of four bases. So a 15-base-pair sequence is one of 4^15, roughly 1 billion, possibilities. As long as the bases were evenly distributed (which they are not), a given random 15-mer would be expected to occur ~3 times per genome. A 20-base-pair sequence, one of 4^20 (~10^12) possibilities, would be expected far less than once per genome - an obscenely localized search space. But again, biology is mushy, so it's more of a time-dependent lossy grep than a true exclusive search. There are of course a lot of caveats to this kind of back-of-the-envelope calculation, but on the whole it's mostly accurate. 20 base pairs contains a LOT of information, and is generally sufficient to be unique in a particular genome.
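If anyone wants to sanity-check that arithmetic, the uniform-base version is a few lines of Python:

    # Expected exact matches of a random k-mer in a ~3 Gbp genome,
    # assuming (falsely, as noted above) evenly distributed bases.
    GENOME_SIZE = 3_000_000_000

    for k in (15, 20):
        possibilities = 4 ** k
        expected_hits = GENOME_SIZE / possibilities
        print(f"{k}-mer: 4^{k} = {possibilities:.2e} possibilities, ~{expected_hits:.4f} expected matches per genome")

That works out to roughly 3 expected hits for a 15-mer and about 0.003 for a 20-mer, which is why 20 bases is generally enough to be unique in a given genome.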
To be clear - the fact that gene therapy is "messy" has been one of the biggest issues holding it back for the past 20 years. This isn't a new concern based on "one study," as Wired seems to frame it. CRISPR is just the latest and most promising method, and it should be framed in that context. One step forward, not a silver bullet.
I'd argue that it's closer to five steps forward than one, on a ten step scale.
CRISPR is a gigantic leap forward in simplifying and expediting gene editing across the board. It has also significantly reduced the cost. The tools, such as SHERLOCK from Broad, that are already cropping up around CRISPR are well beyond what was previously possible.
China, as one example, has found it so easy and inexpensive to work with that they're likely to rapidly leap ahead of the West in gene editing due to their willingness to allow for higher rates of risk. They'll have perhaps dozens of human trials under way by the end of 2017 or mid-2018. Just on the back of CRISPR, they're going to go from lagging far behind in biotech to being a global leader within two decades. Technologically they're going to skip the decades of traditional pharma/biotech build-up, much as countries jumped to cellphones and bypassed landlines.
> due to their willingness to allow for higher rates of risk
You're making this sound more harmless than it is. I'd rather say "due to their willingness to reject and destroy all previously commonly held ethical boundaries of science".
Oh, we'll tolerate it. We'll definitely complain, but nobody's actually going to put any skin in the game. There won't be any significant embargoes, much less military action.
Then, some years down the line, we'll make use of the benefits of whatever technology the Chinese develop. It would be evil not to, given how it could spare people the suffering of genetic diseases, or create carbon-capturing crops, or whatever.
Derek Lowe is an awesome writer about these kinds of issues. He highlights some of the best criticisms out there, notably that it's not clear whether the increased rate of apparent mutation, and the striking number of mutations shared between the two treated animals, was due to the "experimental" animals being siblings while the "control" animal was more distantly related. This article isn't the final word by far, and I don't think the authors claimed it to be either. They wanted to put a concern out there for the scientific community to investigate, as well they should, and the community will figure this out.
Remember though, CRISPR is by definition a mutagen. It creates rather substantial disruption in DNA structure (double-stranded breaks), and relies on intrinsic DNA repair processes to effect the desired end result. This is likely to be a somewhat noisy process, and we shouldn't be terribly surprised if unexpected changes do occur. Indeed, the scientific community is well aware of these limitations, and I think the popular press shouldn't sell CRISPR as equivalent to programming DNA the same way we write computer programs.
Off-targets are something you always look for. In a research setting, you'll usually be back-crossing to the parental line to clean up the background anytime you transform something.
My experience is in plants, so my point of reference is Agrobacterium tumefaciens transformation (agro). To say that CRISPR is cleaner is a massive understatement. It takes a little doing, though, with some labs optimizing for particular species. Some of these optimized results have been astounding, often with no detectable off-site modifications and 100% efficiency (no need to wait for the T2s for homozygotes)!
I'm not worried. I suppose it's good to point out that it's not ready for full-scale gene-therapy yet, but I'm not aware of anyone saying that it is.
I had a roommate once who was a molecular biologist. I told him I was excited about the CRISPR developments and his response was "Yeah, man. Fucking CRISPR." -- in a tone of voice which I read as "I'll wait until the hypetrain passes on this one before sharing your enthusiasm."
I guess his domain knowledge allowed him to foresee challenges the rest of us are now becoming aware of.