Using a bacteria-derived antiviral mechanism against our own viruses sounds obvious to anyone who hears about CRISPR's origins for the first time, I'd hazard.
A less-explored idea is the one I first got from the title: would overuse of CRISPR lead to "superbug" issues like those we've had with antibiotics? This sounds like something interesting to work on, even before large-scale use begins.
I guess a lot could go wrong if viruses started to become resistant to bacterial immune-system mechanisms.
I've been thinking about the nature of security in biology. How do you keep enemies out when you are guarding something valuable like an oasis or your heart/brain?
Well, we can install layers of access control, like cell membranes. We can cordon off the nuclear power plants (like mitochondria) behind a "dumb" API that produces energy in return for nutrients and avoids "smart contracts".
In the medieval sense, we can dig a moat and install a drawbridge. Bridges add latency, but at least you can retract them during wartime. But the enemy is pretty clever, so they carry long ladders across the desert that can span the moat.
So you add a winding cave before the moat so the ladders don't fit. But then they wise up and pay spies behind your lines to lower the bridge, which is analogous to hijacking your immune system, or they invent a folding ladder, and so on.
But none of this helps you if your enemy can consume you at a macro scale by swallowing you whole. So complex species tend to grow larger or add multiple layers of specialised topology (organs) with disposable individuals - not to mention growing their societies.
I suspect there exists a "Shannon Information Theory of Survival" that can guarantee a defensible strategy as long as you can inject sufficient variance into the environment (high pressure, low pressure, vacuum, hot, cold, acidic, etc.) so that it would be more costly for an attacker to usurp you than to forge an alliance.
The key is that you can't really stop the bacteria from coming in. You have to actually feed them so that they don't get desperate enough to blow a hole through your defenses. But what you feed them is a complex mix of polysaccharides that take a long time to chew through.
https://www.nature.com/articles/nature23292
Your variance hypothesis is basically correct. The goblet cells of the epithelium express ~20 enzymes that decorate the protein backbone of mucus with six O-linked sugars in such a way that the composition is never the same, because of all the different linkage combinations possible, which means that no one bacterium can sweep through and gain permanent access.
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4982847/
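The combinatorics do a lot of the work in that variance argument. As a back-of-envelope sketch (the chain-length and linkage counts below are invented toy numbers, not measured glycobiology; only the "six sugars" figure comes from the comment above):

```python
# Toy count of distinct linear glycan chains: each position picks one
# of `sugars` residues, each junction one of `linkages` linkage types.
SUGARS = 6     # from the comment above
LINKAGES = 3   # invented for illustration
MAX_LEN = 5    # invented for illustration

def distinct_chains(sugars, linkages, max_len):
    """Sum over chain lengths 1..max_len of sugars^n * linkages^(n-1)."""
    return sum(sugars ** n * linkages ** (n - 1)
               for n in range(1, max_len + 1))

print(distinct_chains(SUGARS, LINKAGES, MAX_LEN))  # 666906 per site
```

Even with these small toy parameters the per-site diversity is in the hundreds of thousands, which is the point: no single recognition strategy pays off for an invading strain.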
I was looking for this reference yesterday but couldn't find it:
Cooperative Metabolic Adaptations in the Host Can Favor Asymptomatic Infection and Select for Attenuated Virulence in an Enteric Pathogen
https://www.ncbi.nlm.nih.gov/pubmed/30100182
It's worth being careful with analogies, though. No matter how clever, graphic, or apt they might seem, the point of an analogy is simply to help the human mind understand a complex process; it doesn't necessarily provide any new information about that process, or about what would and wouldn't work to affect it. Empirical evidence is what is interesting and applicable in practice. Analogies can be used after that evidence is collected, to put in a biology book and make students sweat less. But trying to use analogies to generate new information is over-intellectualizing, which is rarely wise.
> But trying to use analogies to make up new information is over-intellectualizing - rarely a wise thing to do.
Is it? It seems like analogies can often point in the right direction to look.
Of course analogies are not sufficient and you do need to do the actual empirical work, but it also seems a bit unwise to discard their ability to detect similarities. The only reason you can make a good analogy is that there is an underlying shared abstraction.
What you will actually find is that pathogens are generally pretty bad at what they do, and nature selects against them because they tend to kill their hosts too quickly. Many organisms that infect without causing symptoms propagate far more genetic material than pathogens do.
Similarly, we don't want to be too defensive. It's expensive, and the more complex our immune systems become, the greater the chance of autoimmune disorders. The more energy we spend protecting ourselves, the less we spend growing. Therefore we will never develop perfect immune systems, and pathogens will never be perfect killers.
The arms race analogy does a poor job of illustrating that in actuality, host-pathogen interactions are something of a dynamic equilibrium at this point.
That would be a very interesting thing to explore indeed. I suspect we could already see some of it appear with some simple cellular automata. We could probably get many insights from it, and better predict issues such as resistant germs.
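A minimal grid automaton already shows the kind of threshold behaviour worth probing here. A sketch under toy assumptions (SIR-style cell states, one-step recovery, invented infection probabilities; nothing is calibrated to real epidemiology):

```python
import random

W = 30  # toy grid width

def run_epidemic(beta, seed=0):
    """Grid cellular automaton: 0=susceptible, 1=infected, 2=recovered.
    One case starts in the centre; returns the fraction ever infected."""
    rng = random.Random(seed)
    grid = [[0] * W for _ in range(W)]
    grid[W // 2][W // 2] = 1
    while any(1 in row for row in grid):
        nxt = [row[:] for row in grid]
        for y in range(W):
            for x in range(W):
                if grid[y][x] != 1:
                    continue
                nxt[y][x] = 2  # infected cells recover after one step
                for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    ny, mx = y + dy, x + dx
                    if (0 <= ny < W and 0 <= mx < W
                            and grid[ny][mx] == 0 and nxt[ny][mx] == 0
                            and rng.random() < beta):
                        nxt[ny][mx] = 1
        grid = nxt
    return sum(c == 2 for row in grid for c in row) / (W * W)

# Below the percolation threshold the outbreak fizzles locally;
# above it, it sweeps the grid. Letting beta mutate between runs
# would be one crude way to watch "resistance" evolve.
print(run_epidemic(0.1), run_epidemic(0.9))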
> with disposable individuals
And this is also something interesting: if an individual is too resistant, you risk propagating a disease to a large portion of the population. If each individual is too weak, you risk taking down the whole population. There are probably some evolutionary pressures at play: a population with a rampant disease (a parasite, for instance) might be less competitive than one that sheds its infected members.
One potential risk I see would be that CRISPR becomes too easy for anyone to do, and someone starts selling dodgy kits that allow simple mistakes to turn into complex problems.
An extreme outcome might be like the movie "I Am Legend" (a horrible title for that movie): someone gets desperate to cure a late-stage cancer in a loved one and ends up creating a pathogen that manages to spread to other people and wipe out a portion, or all, of the population.
Or perhaps someone starts selling kits that "make your muscles bigger", but in 0.7% of the population, that kit also makes the person's heart become oversized.
Is it safe to assume that these scenarios are too difficult to accomplish? I'm asking because people will do what people can do, and we will see online CRISPR kits before long.
The smart thing to do (and this has already been done in vitro) is to use phages to deliver spacers guided to antibiotic resistance genes. That way, the cells delete their own antibiotic resistance genes, and the phages propagate the changes throughout the bacterial population. Bonus points if you use a budding phage.
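The self-propagating part is what makes this attractive, and a toy model shows why. A sketch under invented assumptions (well-mixed population, made-up rates and sizes; not a model of any real phage system):

```python
import random

def spread_spacer(pop=10_000, initial_carriers=100,
                  contact_rate=0.5, generations=40, seed=0):
    """Toy well-mixed model: carriers of a phage-delivered
    anti-resistance spacer convert resistant cells each generation,
    and every converted cell becomes a carrier in its turn."""
    rng = random.Random(seed)
    resistant = pop - initial_carriers
    carriers = initial_carriers
    for _ in range(generations):
        hits = sum(rng.random() < contact_rate for _ in range(carriers))
        hits = min(hits, resistant)
        resistant -= hits
        carriers += hits  # converted cells propagate the construct
    return resistant

print(spread_spacer())  # resistance collapses once carriers dominate
```

Because each convert becomes a new delivery vehicle, the carrier count grows geometrically, so even a tiny initial dose clears the resistance gene from the whole toy population; with `contact_rate=0`, nothing spreads at all.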
The "superbug" would probably need to take the form of essentially "encrypting" the genes to make them impossible to recognize, but somehow still having a way to decode them.
Could happen, but might also be evolutionarily difficult.
That would probably be the best thing that could happen, because then you just target that one mechanism, which just one superbug has. When it cannot decrypt properly, it dies. It would also be almost impossible: it's like a single mutation that changes the whole genome at once.
The delivery method is generally the missing piece.
For the first of the reports mentioned here the authors used bacterial conjugation [1] as the delivery method. They note that:
"In culture conditions that enhance cell-to-cell contact, conjugation rates approach 100% with the cis-acting plasmid."
So, I guess what is less clear is whether this would be the case in a real bacterial infection. And would this generalize to a broad spectrum of pathogenic bacteria?
The delivery mechanism in the second, anti-viral, paper is less clear to me. Perhaps someone else can comment.
In general, I still like the idea of using bacteriophages. But finding broad-spectrum bacteriophages seems problematic. Using DNA sequencing as a diagnostic to target therapies seems like an interesting idea (once DNA sequencing becomes cheap enough and easily available).
I'm not so convinced about using phages anymore, at least not on the same large scale that antibiotics are used (which is an economic necessity).
Bacteria co-evolve with phages, especially using CRISPR (https://www.nature.com/articles/s41586-019-1662-9) and can adapt much faster to a given phage strain than to antibiotics.
The really wild thing will be if autologous CAR-T or CAR-M can become a thing. Then you could just dump in already-targeted and angry T-cells or macrophages to fight the infection. Additionally cool: fighting things like mold, yeast, and protist infections could then become "routine".
If the patient isn't too sick, you could even get to in situ programming of endogenous lymphocytes and myeloid cells a la Matthias Stephan's nanoparticle loaded with DNA or mRNA. https://stephanlab-fhcrc.squarespace.com/research-projects
I wonder if using CRISPR would create a new class of pathogens tailored to a small subset of the population (or even to one individual). That would make research on them impossible, and so a cure also impossible. We can use CRISPR only if we have studied and understood how the pathogen impacts our DNA. But over time, if pathogen and individual DNA diverge far enough, then no single piece of research can serve as a generic cure. That would really suck and make cures a feature for the rich. I hope someone tells me I'm dead wrong.
There's only so much variation between people. Everyone uses the same proteins, more or less, to do the same things. Therefore there are only so many ways to attack cells, and you can bet that nature has already tried them all. There's a limited angle of attack, plus we already have broad immune systems to protect against most ways.
There are other problems with bioterrorism as well. If you make something lethal, it tends not to spread well because it kills the host too quickly. That's why far more people get the cold than Ebola. Pathogens tend to reach equilibrium within a generation inside a population. The narrower and more specific the group it infects, the harder it is for a pathogen to transmit.
You just need to combine a harmless virus with a deadly one, such that the whole thing gets copied, but the deadly part remains dormant until it detects a particular sequence in the host.
I'm sure that's easier said than done, but it seems plausible that genetic engineering could progress to the point where targeted bioweapons become a real concern.
Because of the nature of viral genetics, if there's no selection pressure on the lethal part, it will mutate or be lost entirely before it ever reaches the target.
Sure. You could even do a STUXNET type attack where the pathogen is highly transmissible and perfectly benign to everyone but your target. Release it in a train station in Budapest and wait.
Extremely unlikely. Viruses would lose the gene if it's non-essential, before it got to your target. They're designed to make lots of mistakes in replication to diversify quickly, and without selective pressure, will not retain high fidelity genetic material. Sure, you could mutate the polymerase, but then you have a replication problem as well. How is this virus spread? Particularly without inducing a cough. There's no guarantee of exposure.
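That decay argument can be put in rough numbers. A sketch where the knockout rate and generation count are invented round figures (real per-replication mutation rates vary enormously by virus):

```python
# If the dormant "deadly part" confers no replication advantage,
# knockouts accumulate neutrally, so the fraction of lineages still
# carrying it decays geometrically with each replication cycle.
def intact_fraction(knockout_prob, generations):
    """Fraction of lineages with a still-working payload."""
    return (1 - knockout_prob) ** generations

# e.g. a 0.1% per-replication inactivation chance over 3000 cycles:
print(f"{intact_fraction(1e-3, 3000):.1%}")  # ~5% still armed
```

Even a tiny per-cycle loss rate compounds hard over the many replication cycles between release and reaching a distant target, which is the core of the objection.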
Bacteria have more genetic stability but are even harder to spread because most people are already colonized and disruption of that homeostasis is often difficult, and symptomatic.
None of that even addresses how you'd engineer something to be benign to everyone but one person.
It's at least a couple orders of magnitude more horrible than that.
All we have is the binaries. There's no high-level language they can "decompile" into, because the binaries were not compiled from anything. They were literally "grown" from previous binaries by randomness and time. There was no mind guiding the changes so the only way to deduce some design lines is by looking at the system constraints. Nothing makes "sense" besides that.
That's how you get, for example, the perception-of-time subroutine entangled with the vision routine [1].
Oh and the hardware is the same deal: we can examine existing working "processors", but each one was "grown", not designed.
As far as clean code goes, I tend to think of DNA as the opposite. It is worth noting, especially in bacteria, that DNA is a scarce and expensive resource. So evolution tends to prioritise efficient use of DNA, rather than clean and orderly use. The result is that the same bit of DNA will often do several things. This gets extreme in viruses, where a short genome will have two genes literally on top of each other and simply use different compilers to generate their respective proteins. It only gets messier from there (in all organisms). It's worth remembering that some genes are not on your side and like to copy themselves (transposons), your immune system will adapt itself (bacterial CRISPR does this too), and things sometimes just break randomly.
Finally, unlike code, none of this has any meaningful process control; it's all random things influenced by random fluctuations in the environment!
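The "different compilers" trick has a compact illustration: the same nucleotide string, read at a one-base offset, yields a completely different protein. A sketch with a made-up sequence and a deliberately minimal codon table (only the codons actually used below, not the full genetic code):

```python
# Minimal codon table covering just the codons in the toy sequence.
TABLE = {"ATG": "M", "GCT": "A", "AAA": "K", "CTA": "L",
         "TGG": "W", "AAC": "N", "TAT": "Y"}

def translate(seq, frame):
    """Translate a DNA string into amino acids, starting at `frame`."""
    codons = (seq[i:i + 3] for i in range(frame, len(seq) - 2, 3))
    return "".join(TABLE[c] for c in codons)

SEQ = "ATGGCTAAACTATGG"  # invented 15-nt toy sequence
print(translate(SEQ, 0))  # MAKLW
print(translate(SEQ, 1))  # WLNY -- same DNA, entirely different protein
```

Shifting the reading frame by one base reshuffles every codon boundary, which is how a virus packs two gene products into one stretch of genome.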
Hey, it's hard to refactor API's constantly because requirements keep changing. Commit more, or commit 'better': choose one for evolution. ;-)
Joke aside, epigenetics in our funny imagery here stands out as an "interface" to me: how a particular (DNA) codebase reacts and presents itself differently to different environments and different I/O, and somehow manages to share this on-the-fly engineering with descendants.
Biology truly is fascinating from an engineer's standpoint.