This week, CIDD had a discussion of the "gain-of-function" experiments that have recently been controversial in virology. I couldn't attend, but it's an interesting topic, so I've curated a useful (though not exhaustive) bibliography touching on the history, along with two stories where the issue has arisen -- influenza evolution and botulinum discoveries. Scroll down to investigate.
The whole argument here feels like it's in denial of the modern realities of biotechnology. To illustrate, allow me to draw a parallel with cybersecurity.
In the early days of home computers, the idea of a computer virus and digital life, with depictions like Tron (1982) and Wonderwork's "Hide and Seek" (1984), was fantastic. But, over three decades, the silicon ecology some of us dreamed of has failed to appear. Instead of being ingredients in the emergence of a new life as rich as our own, computers and their networks have become heavily engineered and highly constrained systems. Perhaps there is little practical difference between the dream and the reality, since the complexity of our networks now escapes our collective grokking. But what is important is that these are systems we have built in our own images of order and regularity -- they are not organic or spontaneously magical -- we create them, and they are the digital appendages with which we attempt to bend the world to our wills.
And as an outside observer might guess, we have adapted war to new forms suitable for application to these silicon systems. Our computer systems designed for communication, accounting, and regulation are targets for attack with malware in various forms. There are laws against the creation and dissemination of this malware, but they do little good -- thieves and extortionists pollute the internet with it. Malicious software is easy to make, the lack of diversity in our systems creates larger opportunities out of small chinks in our security practices, and the perforation of national borders by communications networks makes the risk of reprisal very small.
So, our computers face attacks that we cannot avoid. Our best option, then, is to bolster our defenses, and so many programmers have entered security and cyber-defense roles where they comb the code gardens, seeking out and patching the security holes that malware exploits. Even in our heavily engineered computer systems, this is a daunting game of whack-a-mole that seems like it will go on for years.
Now, why am I talking about computer security in a public blog on biosecurity and "gain-of-function" experiments in virology? Because there is a systematic parallel between the cybersecurity problems and the biosecurity problems. Each of us humans lives in an organic shell. Our genomes and affiliated proteins are our operating systems, while viruses and parasites are one form of malware that attacks us from the outside world. But in this case, our code and our parasites' code are evolved under natural selection, not engineered. Evolved systems are both robust and buggy -- they quickly evolve adequate defenses against the common insults we face, but they have great difficulty evolving and maintaining resistance against rarer threats -- there's just not enough selection to stamp out all of the deleterious noise in the code. So it seems, in all likelihood, that we are very buggy pieces of bioware, with scads of security holes just waiting to be exploited by germs.
If we have so many holes in our defenses, why aren't these holes being exploited? Well, right now, most of the germs doing the exploiting are also naturally evolved organisms -- they can evolve up a strong fitness gradient relatively easily, but making the big programming changes needed to exploit a novel hole is (fortunately) an exceedingly rare event. With an engineered germ, the story would be very different. Then we must ask ourselves why germs aren't being engineered. First, it's hard -- much harder than coding cyber-malware. The DNA re-encoding, protein interactions, and testing procedures are all expensive in terms of time, money, and expertise. Second, there's no clear and easy pay-off for the effort, at individual or state scales. But given the rapid rate of advancement of technology and the general economic instability, both of these hurdles may go away. And if that happens, we'll be playing the biosecurity game the same way we're playing the cybersecurity game, only from a much weaker starting point.
It seems that if we really want to take biosecurity seriously, we need to start looking for ways to patch our own security bugs. This raises the all-too-uncomfortable and taboo specter of genetic engineering of humans. This does not have to be the scary eugenics of the early 20th century, but I don't think it's something we can stick our heads in the sand over much longer.
The American Experience did a documentary with supporting material on the offensive US biological weapons program, which was unilaterally ended in 1969 by Richard Nixon. Since then, all research has been defensive. The Federation of American Scientists also tells the story.
While there is no longer weapons research, biological research on microparasites and infectious diseases continues around the world. While this research is targeted at improving human health, some experiments can be dual-use, accidentally illuminating and enabling potentially dangerous applications of new discoveries. The National Academies has a brief introduction to dual-use research issues available.
For a topic this young and complicated, there is a great diversity of opinions and positions that need to be considered. To help, the National Academies ran a survey on what biologists collectively think about these kinds of issues.
A couple of years ago, there was a story on NPR about the discovery of a new botulinum toxin.
Last year, the story made the news again because of the withholding of scientific information, including DNA sequences, from interested parties.
In 2005, scientists reconstructed the 1918 influenza virus genome.
The National Science Advisory Board for Biosecurity held an October 2014 meeting that you can watch online. The interesting material starts in the video around the 1:00 mark, including Marc Lipsitch's talk at about 1:30. NAS hosted a follow-up symposium in December 2014.