The rapid improvements in DNA synthesis technology hold the potential to revolutionize biosciences in the near future. Traditional genetic engineering methods are template dependent and make extensive but laborious use of site-directed mutagenesis to explore the impact of small variations on an existing sequence “theme”. De novo gene and genome synthesis frees the investigator from the restrictions of the pre-existing template and allows for the rational design of any conceivable new sequence theme.
Viruses, being amongst the simplest replicating entities, have been at the forefront of the advancing biosciences since the dawn of molecular biology. Viral genomes, especially those of RNA viruses, are relatively short, often less than 10,000 bases long, making them amenable to whole genome synthesis with the currently available technology. For this reason viruses are once again poised to lead the way in the budding field of synthetic biology – for better or worse.
The chemical synthesis of nucleotide chains took its first infant steps soon after the discovery of the DNA double helix. The race to elucidate the genetic code was driven by the use of triplet sequences of ribonucleotides synthesized by liquid-phase chemistry. Depending on their sequence, these triplets selectively interacted with amino-acylated tRNA (the codon:anticodon recognition) (Nirenberg and Leder, 1964; Soll et al., 1965), which led to the assignment of codons to their respective amino acids, and to much deserved Nobel Prizes for these heroic efforts in the earliest days of synthetic biology. Khorana’s group “raced” to synthesize the first DNA copy of the 75 base pair long tRNA-Ala in 1970 (Agarwal et al., 1970), a monumental task requiring 20 man-years of labor, only to outdo itself in 1979 with a 207 bp DNA cassette containing the tyrosine suppressor tRNA gene (Khorana, 1979).
The innovations of synthesizing DNA oligonucleotides (“oligos”) on solid supports (Letsinger and Mahadevan, 1965) combined with new activated phosphoramidite nucleosides (Caruthers et al., 1987) led to steady improvements in the availability of quality oligos up to 100 bases long. This resulted in a boost in gene synthesis activity throughout the 1990’s that continues unabated today. Some of the most notable synthesis achievements are summarized in Figure 1 (Agarwal et al., 1970; Becker et al., 2008; Blight, Kolykhalov, and Rice, 2000; Cello, Paul, and Wimmer, 2002; Chan, Kosuri, and Endy, 2005; Edge et al., 1981; Ferretti et al., 1986; Gibson et al., 2008; Gupta et al., 1968; Kalman et al., 1990; Khorana, 1979; Kodumal et al., 2004; Nirenberg and Leder, 1964; Pan et al., 1999; Soll et al., 1965; Stemmer et al., 1995; Tian et al., 2004). Significant landmarks include the synthesis of an entire 2.7 kb plasmid sequence by Stemmer et al. (Stemmer et al., 1995), the 4.9 kb MSP-1 gene of Plasmodium (Pan et al., 1999), the 7.5 kb poliovirus genome as the first synthetic self-replicating organism (Cello, Paul, and Wimmer, 2002), and the 32 kb polyketide synthase gene cluster (Kodumal et al., 2004). The trend has culminated in the recent synthesis of 582,970 base pairs corresponding to the first artificial bacterial genome by the group of Craig Venter (Gibson et al., 2008). Starting with 101 prefabricated segments of 5–7 kb in length (purchased from commercial vendors), Gibson et al. used state-of-the-art methods and brute force to assemble larger and larger DNA pieces, at first by recombination in bacteria, and finally in yeast (Gibson et al., 2008). Alas, the synthetic genome was not, or could not be, “booted” to life by transplanting it into an “empty” chassis, as the group had shown previously with a natural genome (Lartigue et al., 2007). Therefore, the first synthetic autonomous life form is still just below the horizon.
It is not yet possible to synthesize entire genes as long continuous strands of DNA from scratch. Rather, all synthetic genes are assembled from short, custom-made, single-stranded DNA oligonucleotides, or “oligos”. Oligos are by and large still synthesized the same way as they were 15 or 20 years ago. Through incremental improvements in instrumentation and higher throughput, oligos have become a cheap commodity for use in standard recombinant DNA technologies. But, more than anything else, great demand and even greater competition among manufacturers have driven oligo prices down about 10-fold over the past 15 years (Figure 2). In comparison, the prices of finished, sequence-confirmed gene synthesis by commercial gene foundries have plummeted 50-fold in only 10 years (Figure 2). As a reference point, at the outset of the poliovirus synthesis project in 1999, commercial gene synthesis was simply unheard of. As recently as 2000, after much searching, we found a vendor who agreed to synthesize parts of the genome by special arrangement at a price of $12/bp (Cello, Paul, and Wimmer, 2002).
In an ideal world, an efficient and economical de novo gene synthesis platform would combine cheap, error-free oligo synthesis with accurate assembly methods. Neither is currently available. There are two dramatically different methods of synthesizing oligos. In the traditional, time-proven method of solid-phase oligo synthesis, each oligo is synthesized individually, on a separate small column or in a well of a multiwell plate. The method is high yielding but costly ($0.10–0.20 per nucleotide synthesis cost), which is a critical consideration if the oligos are needed for the assembly of long DNA sequences. Since the overlapping oligos covering a 1 kb gene total roughly 2,000 nucleotides, the price given above translates into an oligonucleotide cost of approximately $200–400 per kilobase of finished sequence, and that is for the raw material only.
The development of optical deprotection chemistries heralded a new era of parallel synthesis methods on micro biochips (Fodor et al., 1991) that can be used for both oligo and peptide synthesis. Depending on the chip platform being used, several thousand to hundreds of thousands of distinct oligonucleotides can theoretically be synthesized on a single chip.
In an ingenious extension, Tian and colleagues (Tian et al., 2004) mated the light-induced deprotection chemistry with microfluidic technology that allows the programmable synthesis of thousands of individual oligonucleotides on a tiny chip (Figure 3A). At the heart of this method is the Digital Light Processing (DLP) technology that was developed for digital projectors and high-definition projection TV sets. On a microfluidic chip containing a labyrinth of thousands of connected tiny reaction chambers (Figure 3C), each chamber is computer-addressable by a light beam generated on a digital micromirror device (Singh-Gasson et al., 1999) (akin to the individual color light spots making up the projection-TV picture). A DNA synthesis mixture containing the first nucleotide (A, for instance) is pumped through the system. Here, A only “sticks” in the chambers that call for an A at the specific position in their sequence, which are the ones being illuminated at that time (Figure 3A). Although all chambers receive the same synthesis mixture at any given time, no reaction occurs in the chambers that are “left in the dark” (in the example above, the ones that need a C, G, or T at their corresponding position). After the first reaction, the A-mix is washed out, and the next reaction mix, containing the next nucleotide, is pumped in; the process is repeated, four times in total. After all four nucleotide reaction mixes have gone through the chip, the oligonucleotide chain in each chamber has grown by at least one nucleotide of the desired sequence.
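The chamber-addressing logic can be sketched in a few lines of Python; the chamber names and target sequences below are invented for illustration, and the chemistry is reduced to string concatenation:

```python
# Toy simulation of light-directed, chamber-addressable oligo synthesis.

def synthesize_on_chip(targets):
    """Grow every target oligo by cycling the four nucleotide mixes.

    targets: dict mapping chamber id -> desired oligo sequence.
    Returns a dict of the synthesized chains (error-free by construction).
    """
    chains = {chamber: "" for chamber in targets}
    # Continue pumping mixes until every chamber holds its full-length oligo.
    while any(len(chains[c]) < len(targets[c]) for c in targets):
        for base in "ACGT":  # the four reaction mixes, pumped through in turn
            for chamber, target in targets.items():
                pos = len(chains[chamber])
                # A chamber is "illuminated" (deprotected) only if its
                # sequence calls for this base at the current position.
                if pos < len(target) and target[pos] == base:
                    chains[chamber] += base
    return chains

oligos = synthesize_on_chip({"c1": "ATGC", "c2": "GGAT"})
```

Each pass through the four mixes extends every unfinished chain by at least one base, mirroring the cycle described above.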
At the end of the reaction the oligonucleotides are eluted from the chambers as a single pool. Each of the oligo sequences is only present in minute quantities. This may present a challenge in further increasing the throughput by increasing the number of reaction chambers per chip, while decreasing chip size. Tian et al. demonstrated the potential power of this technology for the synthesis of large numbers of oligonucleotides to be used in synthetic gene assembly (Tian et al., 2004).
Companies already offer parallel on-chip-synthesized custom oligo mixtures that are amenable to gene synthesis (LC Sciences, Houston, Texas). Currently the price of a pool of 3,912 90-mers is approximately $1000. This technology is still very much in the exploratory stage. One inherent difficulty of the method is that all oligos are released from the chip as a mixture. The low yields of oligos that come off the chip (10^7–10^8 molecules per sequence) are insufficient to drive a gene assembly reaction, which mandates a post-synthesis PCR amplification step before the oligos can be used. For this purpose each oligo is synthesized with two flanking generic adaptor sequences, which allow amplification of all oligos in parallel in a single PCR reaction using the corresponding adaptor primer pair (Figure 4) (Tian et al., 2004). Using distinct sets of adaptors on distinct subsets of oligos in the same chip-synthesis reaction allows the subsequent selective amplification of a desired subset of oligos, for instance a set necessary for the assembly of one particular gene. Therefore, in a separate reaction a different set of oligos can be amplified from the same chip-eluted oligo mix. Fractionating the entire oligo pool into gene-specific subsets in this way reduces the complexity of the mixture, increases the concentration of each specific oligo, and reduces potential interference or cross-hybridization from other oligos in the pool. This will be especially useful as the number of individual sequences synthesized on the chip increases. The higher the number of discrete oligo sequences synthesized per chip, the lower the absolute yield per oligonucleotide (sub-femtomole range), because the total yield of DNA is a direct function of the total reaction surface on the chip. With more distinct oligos, the potential for unwanted cross-hybridizations during the gene assembly step also increases.
The second drawback of chip-based oligo synthesis is that the PCR-amplified oligos are now in double-stranded form. The presence of a perfectly matched antisense strand may reduce the efficiency of the subsequent assembly of these oligos into larger genes. The assembly reaction depends on the complementarity of the overlapping “construction” oligos, those designed to build the gene, and the antisense oligos are likely to compete more effectively for the same hybridization partner. To overcome this problem, the desired single-stranded construction oligos can be selectively enriched by specific hybridization to antisense selection oligos affixed to a column, followed by elution (Tian et al., 2004). When done under sufficiently stringent conditions, this procedure also eliminates a significant fraction of error-containing oligos, as they produce mismatches with the selection oligo and consequently elute from the column at a lower temperature. On the downside, this method requires twice as many selection oligos as there are construction oligos. In other words, to produce one chip’s worth of construction oligos one needs two additional chips’ worth of selection oligos, tripling the cost of synthesis (Tian et al., 2004). This brings the current “rock-bottom” cost of the final construction oligos, before gene assembly, to about $0.03/bp.
While these new multiplex synthesis systems are technically feasible it is our understanding that the major suppliers of large synthetic DNA for now continue to assemble genes from individually synthesized overlapping oligonucleotides by traditional methods.
The sheer number of different oligonucleotides synthesized on a chip mandates the use of new software programs to handle the complexity of possible interactions of the various oligo sequences in the mix (Czar et al., 2009). Several software programs are freely available to design optimal sets of assembly oligonucleotides.
There are two basic methods available for assembling long DNA sequences, such as virus genomes, from short overlapping synthetic oligonucleotides: direct assembly PCR, and ligase chain reaction (LCR) followed by fusion PCR with flanking primers.
Assembly PCR is based on the principle of stepwise elongation of the amplicon, the piece of DNA formed in an amplification event, by one oligonucleotide at each end of the growing amplicon with each PCR cycle (Stemmer et al., 1995), and on the ability of intermediate products to act as overlapping megaprimers to assemble even larger amplicons (Figure 4). Theoretically, the reaction continues until the two outermost oligos are incorporated to give the full-length product. The full-length product is subsequently amplified with an excess of the two flanking PCR primers. In practice, obtaining large DNA fragments in a single assembly reaction is exceedingly difficult. For this reason, and for error-management purposes, it is generally necessary to first synthesize, clone and verify the sequence of several intermediate-size sub-fragments (500–1000 bp). These can then be linked to form larger genes by fusion PCR or by standard cloning methods.
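As a toy illustration of the assembly principle, the following Python sketch greedily fuses fragments that share an exact terminal overlap. In a real reaction the oligos alternate between the two strands and overlaps are matched by hybridization; the sequences and the fixed 6-base overlap here are invented for simplicity:

```python
# Minimal in-silico sketch of overlap assembly from "oligos",
# treating all fragments as lying on one strand.

OVERLAP = 6  # bases shared between neighboring fragments (assumed)

def assemble(fragments, overlap=OVERLAP):
    """Repeatedly fuse two fragments whose ends share an exact overlap,
    mimicking megaprimer extension, until one full-length product remains."""
    frags = list(fragments)
    while len(frags) > 1:
        merged = False
        for i, a in enumerate(frags):
            for j, b in enumerate(frags):
                # a's 3' end primes on b's 5' end: extend a with the rest of b
                if i != j and a[-overlap:] == b[:overlap]:
                    frags[i] = a + b[overlap:]
                    del frags[j]
                    merged = True
                    break
            if merged:
                break
        if not merged:
            raise ValueError("no overlapping pair found")
    return frags[0]

gene = assemble(["ATGGCTAGCTAA", "AGCTAACCGGTT", "CCGGTTGACTGA"])
```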
The ligase chain reaction (LCR) is similar in that it uses overlapping oligos. But unlike in PCR assembly, oligos for LCR have to be designed to anneal without gaps between them, head to toe, forming annealed stretches of DNA that are then joined using a thermostable DNA ligase (Barany, 1991). In contrast to PCR assembly, where a single oligo is added at each end of a synthon in each cycle, during LCR several overlapping oligos can be ligated to one another. Owing to the thermostability of the ligase, LCR can be cycled much like a PCR reaction, leading to the assembly of longer and longer chains, but with no net amplification. The desired product is finally amplified by PCR using gene-flanking primers.
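The head-to-toe requirement can be captured in a minimal sketch with invented sequences: each upper-strand oligo must anneal flush against its neighbor on the template strand before the ligase can seal the nick:

```python
def lcr_join(oligos, template):
    """Join upper-strand oligos that anneal head-to-toe (no gaps)
    along a template strand; returns the ligated product."""
    pos = 0
    product = ""
    for oligo in oligos:
        # Each oligo must anneal exactly at the current position:
        # a gap or mismatch leaves a nick the ligase cannot seal.
        if template[pos:pos + len(oligo)] != oligo:
            raise ValueError(f"oligo does not anneal gaplessly at position {pos}")
        product += oligo  # the thermostable ligase seals the nick
        pos += len(oligo)
    return product

product = lcr_join(["ATGCCG", "TTAAGG"], template="ATGCCGTTAAGG")
```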
Regardless of the many variations on the theme of how to assemble a large synthetic DNA, at the core of all current methods are chemically synthesized oligonucleotides. The downward price trend for oligos has slowed significantly over the past 5 years and appears to be bottoming out (currently in the $0.10–0.20/base range). As the price gap, and therefore the profit margin, between finished synthetic genes and their oligo building blocks narrows, it can be expected that oligo-based gene synthesis prices will soon follow. For long DNA synthesis to become economical, radically new technologies need to be developed that either reduce the errors in run-of-the-mill oligos by orders of magnitude, or allow de novo gene synthesis independent of the error-prone oligonucleotide chemistry, perhaps through enzyme-based synthesis of long, accurate polynucleotides. Barring such a breakthrough, the routine synthesis of bacterial or larger genomes will likely remain prohibitively expensive for some time to come. As a case in point, the recent synthesis of the Mycoplasma genome (Gibson et al., 2008) cost an estimated $10 million (Herper, M. 2007, http://www.forbes.com/2007/06/28/venter-synthetic-bacteria-tech-science-cx_mh_0628venter.html). At the research level, on the other hand, once gene synthesis hits the $0.10–0.20/bp price range, synthesis will very likely replace traditional recombinant DNA methods for many smaller-scale cloning projects within the next few years.
A major problem with genes assembled from overlapping oligos is the inherent error rate of about 1% during the chemical synthesis of the oligos themselves. The most frequent error is the failure to incorporate bases, due to less-than-perfect deprotection of the reactive groups or incomplete coupling of the incoming nucleotide. There appears to be a rather hard limit that prevents improving oligo accuracy during the synthesis step much beyond 1 in 100. Therefore several techniques are being employed, often in combination, to improve the accuracy of the oligos and the assembled DNA intermediates.
The quality of the oligos critically determines the practical size of the synthesis intermediates that need to be cloned and sequence verified (Carr et al., 2004). If sequence errors are randomly distributed along the length of the DNA, an error rate of 1 in 600 would make it impractical to assemble a DNA longer than 1–2 kb in a single reaction without intermediate sequence verification (Figure 5).
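The impact of the oligo error rate on practical assembly size can be illustrated with a short calculation, assuming independent, randomly distributed per-base errors:

```python
def error_free_fraction(length_bp, error_rate=1 / 600):
    """Probability that an assembled fragment of the given length
    carries no errors, given a fixed per-base error rate."""
    return (1 - error_rate) ** length_bp

# At 1 error in 600 bases, roughly one in five 1 kb clones is perfect,
# while a perfect 10 kb assembly becomes a very rare event.
f_1kb = error_free_fraction(1000)
f_10kb = error_free_fraction(10000)
```

This is why intermediate sub-fragments of 500–1000 bp are cloned and sequence verified before being joined into larger constructs.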
In many cases it is desirable to express a gene of interest (often a human gene) in a heterologous, more economical expression system, such as bacteria or yeast. All too often, however, the codon usage within the gene is at odds with the codon usage of the new host species. As a result, the gene expresses poorly. Thus, the need for “codon optimization” was born (Itakura et al., 1977). During codon optimization the codon usage of the gene is altered to reflect that of the host species by replacing suboptimal codons with preferred synonymous codons. Since this often involves many simultaneous sequence changes, it is best done by de novo gene synthesis. Probably the best known example of codon optimization is the “humanization” of the Green Fluorescent Protein (GFP) of the jellyfish A. victoria (Zolotukhin et al., 1996). Codon optimization is currently still the most prevalent reason for de novo gene synthesis (Gustafsson, Govindarajan, and Minshull, 2004).
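In essence, codon optimization is a synonymous-substitution pass over the open reading frame. The sketch below uses a tiny, invented preference table rather than a real organism's codon-usage data:

```python
# Hypothetical host preferences: each entry maps a disfavored codon to a
# preferred synonymous codon for the same amino acid.
PREFERRED = {
    "CTA": "CTG",  # Leu
    "CGA": "CGT",  # Arg
    "TCA": "AGC",  # Ser
}

def codon_optimize(orf):
    """Swap disfavored codons for preferred synonyms; the encoded protein
    is unchanged because only synonymous codons are substituted."""
    codons = [orf[i:i + 3] for i in range(0, len(orf), 3)]
    return "".join(PREFERRED.get(c, c) for c in codons)

optimized = codon_optimize("ATGCTACGATAA")  # ATG CTA CGA TAA
```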
In some instances gene synthesis has been used to recreate a DNA sequence from a publicly available sequence database in an effort to sidestep licensing, patenting or material transfer issues.
It is theoretically possible to synthesize a bacterial genome in which the redundancy of the genetic code is eliminated, such that each amino acid in every bacterial protein is represented by exactly one codon only. Thus, only 20 codons plus one Stop codon would be needed to synthesize all the bacteria’s own genes. At the same time, the remaining 43 “orphaned” codons could be freed up to specify non-natural amino acids. Bacteria with such an expanded genetic code could one day become a powerful chassis for the production of artificial proteins (Carr and Isaacs, 2006).
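The codon arithmetic behind this idea is simple:

```python
# 64 possible triplet codons; a fully non-redundant organism would use
# one sense codon per amino acid plus a single stop codon, freeing the
# rest for reassignment to non-natural amino acids.
total_codons = 4 ** 3          # 64 possible triplets
used = 20 + 1                  # 20 sense codons plus one stop
orphaned = total_codons - used # 43 reassignable "orphaned" codons
```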
Viruses are amongst the simplest replicating genetic systems. For this reason they have been at the forefront of the advancing biosciences since the dawn of molecular biology. Their small genome sizes (most RNA virus genomes are 10 ± 5 kb) make them amenable to whole genome synthesis with the currently available technology. For this reason viruses are poised to lead the way in the budding field of synthetic biology.
A significant use for genome synthesis is the recreation of viruses, or perhaps other organisms in the future, for which no intact natural template is available. The synthesis of the 1918 flu virus was accomplished by piecing together sequence fragments recovered from victims buried in the Alaskan permafrost and archived tissue samples (Tumpey et al., 2005). The creation of bat SARS coronavirus (Becker et al., 2008) and of HIV from chimpanzee feces (Takehisa et al., 2007) also fall into this category. A clever extension of this idea has been the resurrection of live infectious retroviruses assembled from a consensus of ancient remnants that are endogenous to the human genome, and which have perhaps been inactive for millions of years (Dewannieux et al., 2006; Lee and Bieniasz, 2007). Once the stuff of science fiction movies, these “Jurassic Parkesque” projects are likely to be just the teaser trailers of the coming attractions of this budding synthetic technology.
Through the process of natural selection, evolution favors systems that work, especially those that work better than their direct predecessors and competitors. This selection process however does not follow what humans would consider a logical design process. Evolutionary changes are small and incremental following a one-directional ratchet that does not move backward. There is no “reset” button that allows evolution to jump back to an earlier version and try again. De novo gene and genome synthesis provides this virtual reset button by allowing the creation of any conceivable genome at will and at once, no matter how different from its predecessor.
One recurring theme in viral genomes is the evolution of overlapping reading frames. This space-saving measure allows a virus to encode portions of two proteins on the same stretch of genome sequence, but in two different reading frames. Studying individual genes and proteins of such a virus genetically and biochemically poses a problem for the experimenter, since manipulating one protein inadvertently changes the other. To simplify these interdependencies in the genome, Chan and colleagues redesigned and synthesized parts of the bacteriophage T7 genome, eliminating the overlapping reading frames (Chan, Kosuri, and Endy, 2005). In the resulting virus, the individual genes could then be manipulated and studied independently, a process they called “refactoring”, in analogy to the process of redesigning and improving computer code while retaining its basic function.
The basic mechanism of mRNA translation is preserved from the simplest virus to the most complex organism. Viruses, just like human cells, need to produce mRNA molecules, which are used to convert their genetic information into proteins. Different viruses have devised different strategies to accomplish this, and have different ways to store this genetic information in their genomes. Invariably, however, viruses need to divert the host’s cellular machinery for the translation of their proteins, as they themselves cannot execute this function. The degeneracy of the genetic code (several synonymous codons specify the same amino acid) gives an organism the flexibility to encode a given protein sequence in its genome in an unimaginably large number of ways. The poliovirus polyprotein, for instance, could be encoded by a staggering 10^1100 different mRNA sequences, all of them specifying the same protein sequence (for comparison, the number of atoms in the observable universe is estimated to be on the order of 10^80). This raises the question to what extent the natural encoding of a gene is optimal or special. The cell’s preference for one synonymous codon over another to specify the same amino acid is termed “codon bias”. It is thought that codon bias is correlated with the abundance of the corresponding cognate tRNAs in the cell. Consequently, rare codons are associated with suboptimal translation of an mRNA. In addition, the frequencies with which two codons occur next to one another in the genome are not what would be statistically expected from the frequencies of the two codons that make up the pair, a phenomenon called “codon-pair bias”. Some codon-pair combinations are statistically greatly underrepresented while others are greatly overrepresented. The significance of codon-pair bias has been largely unknown and underappreciated.
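A simplified codon-pair bias measure compares how often a pair is observed with what the two codons' individual frequencies would predict; a published codon-pair score additionally normalizes for amino-acid pair frequencies, and the counts below are invented:

```python
from math import log

def codon_pair_score(pair_count, count_a, count_b, total_pairs):
    """log(observed / expected) for a codon pair.

    Negative values mark underrepresented (disfavored) pairs,
    positive values overrepresented ones; zero means the pair occurs
    exactly as often as the individual codon frequencies predict.
    """
    expected = (count_a * count_b) / total_pairs
    return log(pair_count / expected)
```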
We have recently shown that it is possible to exploit the codon-pair bias phenomenon for the synthesis of novel live attenuated forms of viruses with remarkable properties (Coleman et al., 2008). By large-scale, computer-aided redesign of the viral genome we engineered hundreds of silent mutations into poliovirus. These mutations were targeted to introduce a maximum number of unfavorable synonymous codon pairs, without changing codon bias or protein sequence. By forcing a virus to “make do” with this heavily biased synthetic genome we showed that viral protein translation is greatly reduced. Thus, codon-pair-deoptimized viruses cannot reproduce their genetic information as quickly as their wild-type cousins, which puts them at a decisive disadvantage against the host’s innate and adaptive immune defenses. One of the major benefits of the whole-genome deoptimization strategy is that the resulting attenuated viruses are phenotypically and genotypically extremely stable. The attenuation (att) phenotype depends on many hundreds, even thousands, of silent mutations, each by itself virtually inconsequential, a “death by a thousand cuts”. Therefore, the fitness gain from reverting individual mutations appears to be too small to drive genetic selection, and thus reversion apparently does not occur (Coleman et al., 2008). We termed this process of perturbing intrinsic viral genome biases by synthetic genome redesign SAVE, for Synthetic Attenuated Virus Engineering (Figure 6).
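A toy sketch of the recoding idea: permuting synonymous codons among positions changes codon pairing while leaving both the protein sequence and the overall codon counts (codon bias) untouched. The synonym table here is a tiny illustrative subset of the genetic code, and unlike the published SAVE design, the shuffle is random rather than guided toward a minimal codon-pair score:

```python
import random

# Illustrative subset of the genetic code: codon -> amino acid.
SYNONYMS = {"CTA": "L", "CTG": "L", "CGA": "R", "CGT": "R"}

def shuffle_synonymous(orf, seed=0):
    """Permute codons among positions encoding the same amino acid.

    Codon counts (codon bias) and the encoded protein are preserved;
    only the codon pairing along the sequence changes.
    """
    rng = random.Random(seed)
    codons = [orf[i:i + 3] for i in range(0, len(orf), 3)]
    for aa in set(SYNONYMS.values()):
        positions = [i for i, c in enumerate(codons) if SYNONYMS.get(c) == aa]
        pool = [codons[i] for i in positions]
        rng.shuffle(pool)  # redistribute this amino acid's codons
        for i, c in zip(positions, pool):
            codons[i] = c
    return "".join(codons)
```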
SAVE attacks a virus at one of the most fundamental processes common to all living systems, the translation of proteins, for which viruses depend on the host cell’s machinery. Thus SAVE can be predicted to work on most, if not all, viruses. The rational genetic changes imposed on SAVE-designed viral genomes are completely independent of protein sequence. The viral protein sequences, and therefore their functions, remain 100% preserved in the recoding process. Therefore an understanding of the proteins’ functions is not necessary, sidestepping the need for much of classical virology in order to produce an attenuated vaccine candidate in a very short time, with a predictable degree of attenuation, in virtually any virus system. Viruses live lives of genetic austerity, and therefore do not usually carry unnecessary genes around. By that rationale most viral gene products can be considered essential. Depending on the virus system, interfering with the synthesis of several of those gene products just a little bit turns out to pack a great punch against the overall fitness of the virus (Coleman et al., 2008; Mueller et al., 2006).
Using the SAVE method we can profit from these genomic biases that have arisen over evolutionary time-scales and turn them upside down and inside out, undoing eons of viral evolution. If we think of evolution as “walking” along a dirt path, SAVE allows us to “leap” across the evolutionary universe at warp speed. Since it is evident that many viruses have actively selected against the occurrence of certain sequence features, such as unfavorable codons and codon pairs, as well as other sequence motifs, the whole-genome recoding approach by de novo synthesis will very likely have a profound effect on any virus.
Since SAVE targets a virus at the level of protein translation, a function elementary to all viruses, we believe this approach is applicable to many virus systems in which a few basic requirements are met.
Provided these requirements are met, the SAVE strategy can successfully be employed for the redesign and synthesis of viruses.
Synthetic virology, i.e. the redesign and synthesis of custom-tailored whole virus genomes, has become economically feasible with recent rapid improvements in DNA synthesis technology. This holds the potential to revolutionize the way virology and vaccinology are done. Viral genomes, especially those of RNA viruses and retroviruses, are short enough to be amenable to whole genome synthesis with currently available technology. Such freedom of design provides tremendous power to perform large-scale redesign of DNA/RNA coding sequences and to study the impact of large-scale changes in codon bias, codon-pair bias, dinucleotide bias, GC content, RNA secondary structures, and other sequence signatures on viral fitness, with the aim of developing a new platform for vaccine design and genetic engineering.
What is synthetic biology? It is neither a field in its own right, nor a separate science. It is perhaps best described as an improvement of existing enabling technologies that are beginning to penetrate mainstream sciences, as they become more and more economical. This has led to an “organized” crossover of different scientific fields (e.g. biology, chemistry, mathematics, engineering etc.) that promises to yield organisms with useful biochemical pathways never seen before.
The new reality of synthetic genes and genomes calls for a fundamental revision of the way biology is taught to students. The Johns Hopkins University has already embraced these cutting-edge developments and is now offering an undergraduate course in which the students collaboratively work toward synthesizing the yeast genome. Impressively, within only one year this unified effort resulted in the synthesis of hundreds of 750 bp cassettes amounting to the 280 kb of yeast chromosome III (Dymond et al., 2009). An equally imaginative and playful introduction to the engineering of biological systems is fostered by the International Genetically Engineered Machine competition (iGEM; http://www.igem.org) organized by synthetic biologists at MIT. Here undergraduate teams compete in designing and building genetic circuits and systems from an ever-expanding toolkit of standard genetic parts, or “BioBricks™” (Goodman, 2008).
However substantial the excitement about synthetic biology, it faces equally big scepticism and “fear of the new” in our society. Perhaps a disservice to their own science is the tendency of some researchers in the synthetic biology field to overstate its novelty and uniqueness. The most commonly cited public concerns with regard to synthetic biology are the ethical implications connected with the creation of “new life forms” and the fear of synthetic “killer viruses”. These sentiments are often picked up and fuelled by the media, potentiating the perceived fear of the uncertain.
Virtually every organism ever modified in molecular or genetic research is by definition a new life form. This definition could be expanded to all naturally occurring organisms that genetically differ from their parent, in other words: all living creatures. Why would an organism created by synthetic methods be qualitatively different? The question presents itself: “Why do we, as a society, worry more about the possibility of a synthetic designer pathogen, when some of the worst pathogens known to mankind are still raging?” Measles virus, as a case in point, is one of the most contagious human viruses. As recently as 2000, approximately 777,000 people per year died from measles, and in third-world countries with poor health care systems the fatality rate can be as high as 28% (Perry and Halsey, 2004). Annually, 250,000–500,000 people die from complications of the flu (WHO, 2003). Additionally, only a few critical mutations in the H5N1 bird flu virus separate us from a virus that can easily spread amongst humans and lead to an influenza pandemic. The AIDS pandemic, caused by primate viruses that jumped the species barrier to humans, claims approximately 2 million lives annually (http://www.avert.org/worldstats.htm). In 2003, the world barely escaped a pandemic caused by a SARS coronavirus now thought to have jumped from bats to humans ((Becker et al., 2008) and references therein).
Although, in theory at least, we have the capacity to generate any genetic sequence that we can conceive, what we can do with this capacity is in fact quite limited. While it is easy to think up fantastic and scary scenarios of a synthetic killer virus wiping out mankind, bio-terrorists and the brightest scientific thinkers alike would be hard pressed to say what such a designer super-pathogen would look like. In reality, all that can be accomplished via synthesis, for now and for some time to come, is to emulate, copy and recreate what mother nature has brought forth and thrown at us incessantly throughout our history on this planet. It is possible to produce variations on an existing theme. It is not yet possible to design from scratch a qualitatively new pathogen that is completely different from any organism that exists now or has existed in the past. The level of abstraction required to “piece together” qualitatively new life forms from defined off-the-shelf parts (genes) is far from being realized (Goler, Bramlett, and Peccoud, 2008). It is probably this misconception, trumpeted by the media, that strikes a chord of fear in the general population.
All the above considerations notwithstanding, de novo genome synthesis, like many technologies in the past, does hold a potential for dual use. And unlike many technologies before it, nuclear technology for instance, which requires immense resources that cannot escape detection, the intentional misuse of genome synthesis technologies will become increasingly undetectable. It seems next to impossible that genome synthesis can ever be government-regulated effectively. The technology and its components are already too ubiquitous, and too easy to jury-rig from off-the-shelf parts. The nature of genome synthesis is such that in the very near future pathogens can, and perhaps will, be synthesized in the proverbial hobbyist’s basement, in a high-school science lab, or by a bio-terrorist organization. These possibilities are not an academic’s hyperbole either. In fact a grass-roots “bio-hacker” culture is already flourishing outside the realm of academia, industry and government oversight (Nair, 2009). When considering these issues our society would be prudent to shift focus from the prevention of such dual-use proliferation to preparing for it. The latter may include the development of new vaccines and/or the stockpiling of available vaccines against the most likely bio-terrorist agents.