The rapid advances of high-throughput sequencing technologies have dramatically propelled metagenomic studies of the microbial communities that exist in various environments. Fundamental questions in metagenomics include the identities, composition and dynamics of microbial populations, as well as their functions and interactions. However, the massive quantity and complexity of these sequence data pose tremendous challenges for data analysis. These challenges include, but are not limited to, ever-increasing computational demands, biased sequence sampling, sequence errors, sequence artifacts and novel sequences. Sequence clustering methods can directly answer many of the fundamental questions by grouping similar sequences into families. Clustering analysis also addresses several of the challenges in metagenomics: a large redundant data set can be represented by a small non-redundant set, in which each cluster is represented by a single entry or a consensus; artifacts can be rapidly detected through clustering; and errors can be identified, filtered or corrected using the consensus of the sequences within a cluster.
Metagenomics [1, 2] is a genomic approach that uses culture-independent sequencing to study microbial populations in different environments. It offers an unprecedented view of the identities, composition, dynamics, functions and interactions of the diverse microbial world and has become an important tool in many fields, such as ecology, energy, agriculture and medicine.
Earlier metagenomics projects, such as Sargasso Sea , human gut  and soil , relied on traditional Sanger sequencing technology, so most of these projects had limited throughput. In recent years, the rapid advances of next-generation sequencing (NGS) technologies , such as 454, Illumina, SOLiD, PacBio and Ion Torrent, dramatically propelled the expansion of metagenomics research, and large ‘waves’ of metagenomic sequencing projects were launched to study a diverse range of microbial communities in their environments, such as the virome , farm animals  and the human microbiome [9, 10]. It is widely expected that many more environmental and microbiome samples will be studied with NGS technologies. However, the intrinsic complexity and massive quantity of metagenomic data create tremendous challenges for data analysis.
First, because of the sheer number of sequences, all kinds of sequence analyses, including database search, multiple alignment, sequence mapping, assembly and phylogenetic analysis, are becoming more time-consuming and memory-demanding and require more manual effort for parsing the output results. Second, the growth of sequence data in the public databases has been very uneven because of highly biased efforts toward model organisms and populations or environments of special interest. Metagenomic analyses that rely on comparison with these biased and redundant reference databases may lead to incorrect conclusions. Third, different NGS techniques and protocols show quite different biases and artifacts. For example, single-cell multiple displacement amplification produces coverage that is non-uniform by orders of magnitude . Many sequencers generate tens or hundreds of copies of artificially duplicated reads from the same templates . Fourth, NGS platforms have higher error rates than traditional Sanger sequencers and also have platform-specific error patterns, such as homopolymer indels for 454 and Ion Torrent reads and degraded quality at the 3′-ends of Illumina reads. Finally, sequence errors and artifacts propagate from reads to protein sequences, which can be false genes, fragmented genes or genes with frame-shift errors.
Clustering analysis, a method that identifies and groups similar objects, is a powerful tool for exploring and studying large-scale complex data. It can effectively resolve many of the challenges stated earlier. Through sequence clustering, a large redundant data set can be represented by a small non-redundant (NR) set, which requires less computation. Errors can be identified, filtered or corrected using the consensus of the sequences within a cluster. In addition, many fundamental questions in metagenomics can be readily addressed by clustering, such as the identification of gene families and the classification of species in a population. Since the infancy of metagenomics, clustering analysis has thus been an essential part of the field for applications such as identification of artificial duplicates [12, 13], classification of operational taxonomic units (OTUs) , protein family analysis [15, 16] and transcriptomics analysis .
In this article, we will discuss several common clustering applications in metagenomics and the methodologies for different types of analysis.
Sequence clustering is not a new topic; it existed long before the emergence of metagenomics and NGS technologies. In the past, many clustering programs were available for protein sequences, such as ProtoMap , ProtoNet , RSDB , GeneRAGE , TribeMCL , ProClust , UniqueProt , OrthoMCL , MC-UPGMA , Blastclust  and CD-HIT [28–31]. Many methods were also developed for clustering expressed sequence tags (ESTs), such as UniGene , TIGR Gene Indices , d2_cluster  and several others [35–37].
Many of the above clustering methods require all-against-all comparison of sequences for optimal results, so they are very computationally intensive for large data sets. CD-HIT introduced a fast heuristic that greatly reduces this requirement. With the rapid growth of sequence data, the fast CD-HIT program has therefore become a very popular clustering tool; it has been widely used in many areas, such as preparing NR reference databases . CD-HIT uses a greedy incremental algorithm: sequences are first sorted by decreasing length, and the longest one becomes the seed of the first cluster. Each remaining sequence is then compared with the existing seeds. If its similarity with any seed meets a pre-defined cutoff, it is grouped into that cluster; otherwise, it becomes the seed of a new cluster. More recently, several new fast programs, including Uclust , DNACLUST  and SEED , have been developed using greedy incremental approaches similar to that introduced by CD-HIT. These methods use various heuristics and achieve high speed in clustering NGS sequences. Herein, we briefly introduce the features and functions of these programs.
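The greedy incremental scheme described above can be sketched in a few lines of Python. This is only an illustration: the `identity` function here is a naive position-wise match over the shorter sequence, whereas CD-HIT avoids most pairwise comparisons through k-mer filtering and uses banded alignment for the actual identity calculation.

```python
def identity(a, b):
    """Toy similarity: fraction of matching positions over the shorter sequence."""
    n = min(len(a), len(b))
    if n == 0:
        return 0.0
    return sum(x == y for x, y in zip(a, b)) / n

def greedy_incremental_cluster(seqs, cutoff=0.9):
    """Sort sequences by decreasing length; each sequence joins the first
    cluster whose seed it matches at >= cutoff, otherwise it seeds a new
    cluster. Returns a list of (seed, members) pairs."""
    clusters = []
    for s in sorted(seqs, key=len, reverse=True):
        for seed, members in clusters:
            if identity(seed, s) >= cutoff:
                members.append(s)
                break
        else:
            clusters.append((s, [s]))
    return clusters
```

Because each sequence is compared only against existing seeds rather than against every other sequence, the number of comparisons shrinks dramatically when the data set is redundant.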
CD-HIT [28–31] is a comprehensive clustering package. The current version (v 4.5) has seven programs. CD-HIT and CD-HIT-EST cluster protein and deoxyribonucleic acid (DNA) data sets, respectively. CD-HIT-454 identifies duplicates in 454 reads. PSI-CD-HIT clusters proteins at low identity cutoffs (20–50%). CD-HIT-DUP identifies duplicates in single or paired Illumina reads. CD-HIT-LAP identifies overlapping reads. CD-HIT-OTU is a multi-step pipeline that generates OTU clusters for ribosomal ribonucleic acid (rRNA) tags from the 454 and Illumina platforms. CD-HIT uses a heuristic based on statistical k-mer filtering to speed up clustering calculations. It also supports multi-threading, so it can run in parallel on multi-core computers. CD-HIT is open source software available from http://cd-hit.org. It is also available from the cd-hit web server , the CAMERA web portal  and the WebMGA server  for metagenomic data analysis.
Uclust  follows CD-HIT’s greedy incremental approach, but it uses a heuristic called Usearch for fast sequence comparison. It also gains speed by comparing each query with only a few top candidate seeds instead of the full database. Uclust can run on DNA, protein and rRNA sequences. Currently, its 32-bit pre-compiled binaries are freely available from http://www.drive5.com/usearch/. DNACLUST  also follows the greedy incremental approach; it uses a suffix array to index the input data set. Unlike CD-HIT and Uclust, which can process both proteins and DNA, DNACLUST only works on DNA sequences, and it is suited to clustering highly similar DNA, especially rRNA tags. It is available as an open source program at http://dnaclust.sourceforge.net/. SEED  only works with Illumina reads and only identifies near-duplicates with up to three mismatches and three overhanging bases. It uses an open hashing technique and a special class of spaced seeds, called block spaced seeds. SEED is also open source software, available at http://manuals.bioinformatics.ucr.edu/home/seed.
Although some programs claim to be faster than others, such claims are usually based on a particular type of sequence and specific clustering parameters (e.g. an identity cutoff). Herein, we do not intend to make a side-by-side performance comparison but simply list some examples that we ran to give some hints about the speed and results of some common clustering analyses with these programs (Table 1). CD-HIT and Uclust often produce comparable results in both protein and DNA clustering tests. SEED is faster than the other programs in clustering Illumina reads, but it yields many more clusters. Except for SEED, the other three programs all work on rRNA sequences, where Uclust is the fastest and CD-HIT gives the fewest clusters.
NGS platforms, such as 454 and Illumina, commonly produce artificially duplicated reads, which can lead to overestimated abundances of species, genes or functions. The duplicates originate from the same template but are sequenced separately, so they can be exactly identical, or nearly identical with variable read lengths (454 reads) and mismatches caused by sequencing errors.
In 454 data sets, duplicated reads can make up 11–35% of the raw reads . As finding identical sequences is very easy, only exact duplicates were identified and removed in some early studies . Nearly identical duplicates have been considered since the study by Gomez-Alvarez et al. . Gomez-Alvarez’s method applies CD-HIT-EST to cluster the reads at 90% identity and then parses the clustering results. Later, CD-HIT-454  was introduced by re-engineering CD-HIT-EST. CD-HIT-454 is faster and more accurate than Gomez-Alvarez’s method. It identifies duplicates that are either exactly identical or meet the following criteria: (i) reads must be aligned at their 5′-ends; (ii) for sequences of different lengths, the shorter read must be fully aligned with the longer one (the seed) and (iii) they have less than a user-defined percentage of indels and substitutions (default 4%). The default cutoff, which is trained according to the pyrosequencing error model, maximizes the sensitivity and specificity of duplicate identification in 454 reads. Another common, easy way to find duplicates is to compare prefixes and consider reads duplicates if they share a common prefix of a certain length. Both MG-RAST  and IMG/M  use prefix checking to identify duplicates. Prefix checking is faster but less accurate than CD-HIT-454. CD-HIT-454 needs only a few minutes to process a typical 454 data set with less than a million reads, so it is still very efficient, similar to the original CD-HIT.
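A toy version of the CD-HIT-454 duplicate criteria, ignoring indels for simplicity (the real program uses a banded alignment that handles indels as well as substitutions), might look like this:

```python
def is_454_duplicate(seed, read, max_diff=0.04):
    """Toy duplicate check: both reads are anchored at their 5'-ends, the
    shorter read must lie fully within the longer one, and the difference
    rate over the aligned region must not exceed `max_diff` (default 4%,
    as in CD-HIT-454). Only substitutions are modeled here."""
    short, long_ = (read, seed) if len(read) <= len(seed) else (seed, read)
    mismatches = sum(a != b for a, b in zip(short, long_))
    return mismatches / len(short) <= max_diff
```

With this rule, a 100-base read with up to four substitutions relative to its seed would still be flagged as a duplicate, matching the 4% default.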
For Illumina data sets, prefix checking is more advantageous: it is faster than CD-HIT-454, and it fits the characteristics of Illumina reads, which have fewer indels and poorer quality at the 3′-ends. For paired-end Illumina reads, a reasonable way to find duplicates is to check the prefixes at both ends. This function is available in CD-HIT-DUP.
When removing duplicates, a question that needs to be considered is ‘are these duplicates all artificial’? The experimentally observed duplicated sequences also include natural duplicates, i.e. those that happen to be duplicates by chance. So, simply removing all duplicates may also cause an underestimation of abundance associated with natural duplicates. The CD-HIT-454 article investigated the occurrence of natural duplicates for different types of metagenomic samples and found that (i) the rate of natural duplicates highly correlates with the read density (the number of reads divided by genome size); (ii) for high-complexity metagenomic samples, natural duplicates make up a few percent of all duplicates and (iii) for viral metagenomic samples or metatranscriptomics, natural duplicates can be more abundant than artificial duplicates. These guidelines help to decide whether to remove or to keep duplicated reads in a metagenomic sample .
Metagenomic projects (e.g. [9, 47, 48]) often survey both genomic DNA and 16S rRNA. The latter is used to estimate microbial diversity, which is often quantitatively described in OTUs. Because of read-length limitations, it is not practical to sequence the full length of the 16S rRNA gene (~1.5 kb), so 16S rRNA studies often use individual variable regions (V1–V9) or sections that cover a few variable regions (e.g. V1–V3 and V3–V5). Pyrosequencing of 16S rRNA amplicons has been the dominant approach in rRNA studies. Finding OTUs from 16S rRNA tags can be readily addressed by clustering. Conventionally, tags with ≥97% identity are placed in the same OTU at the species level. CD-HIT  and DOTUR  were often used for OTU clustering in early studies.
However, a big problem in OTU analysis is that directly clustering the raw rRNA reads, or even the high-quality reads, often greatly over-estimates the diversity. A recent review  analyzed a list of methods and discussed solving this problem at the clustering-algorithm level. That article suggested using average-linkage-based hierarchical clustering methods, such as ESPRIT , instead of greedy incremental methods, such as CD-HIT  and Uclust , for OTU clustering.
In the meantime, many other studies [52–56] found that the single biggest cause of the over-estimation problem is sequence errors, or noise, so new methods such as SLP , PyroNoise , Denoiser  and AmpliconNoise  focus on identifying and removing sequence noise. All these methods find sequence errors through clustering analysis and are based on the principle that a high-abundance cluster can recruit small clusters and singletons, which contain more sequence errors. SLP clusters the actual rRNA tags, and the rest of the methods cluster the original flowgram data. Currently, the best performing method among them is AmpliconNoise , which has been benchmarked with several commonly used Mock data sets; these data sets are artificial mixtures of 16S rRNA clones from a number of known species at different abundance levels.
Although the speed of AmpliconNoise is considerably improved over its predecessor (PyroNoise), it is still quite computationally intensive. Recently, CD-HIT-OTU was introduced into the CD-HIT package. CD-HIT-OTU also uses a multi-step clustering method to remove reads with sequence errors and achieves results comparable with AmpliconNoise. However, as CD-HIT-OTU clusters sequences instead of flowgram data and inherits unique heuristics from CD-HIT, it is orders of magnitude faster than AmpliconNoise and other methods such as Denoiser. Table 2 lists the performance of CD-HIT-OTU, AmpliconNoise and Denoiser (implemented in QIIME ) in clustering the Mock benchmark data sets  at the 97% identity level.
CD-HIT-OTU has the following steps: (i) raw reads with ambiguous base calls are removed. Reads are also removed if their 5′-ends do not match a user-provided primer sequence or a consensus built from the first k bases at the 5′-ends of all reads (k=6 by default, adjustable by users). For long reads, the tail portion beyond the median read length is trimmed off at the 3′-end. (ii) Processed reads are clustered at 100% identity using CD-HIT-DUP. At this step, the reads from a unique rRNA template form one large primary cluster (containing the error-free reads) and some small clusters, which contain reads with sequence errors. (iii) The representative sequences from step (ii) are sorted by abundance and then clustered by CD-HIT-EST at a threshold that allows up to two mismatches. For example, 200-bp reads are clustered at 99.0% identity, so that small clusters are recruited into their primary clusters. (iv) Let x be the median size of the small clusters recruited into the most abundant primary cluster with two mismatches. Clusters smaller than x are dominated by reads with more than two errors from the most abundant template, so these clusters are removed. Herein, x is often very small (2 or 3), so rare species are still kept in the analysis. (v) The remaining representative sequences from step (ii) are clustered into OTUs using CD-HIT-EST (parameters: -c 0.97 -n 10 -l 11 -p 1 -d 0 -g 1). Herein, option ‘-c 0.97’ means 97% identity. (vi) The non-representative tags are recruited into the OTUs using CD-HIT-EST-2D (parameters: -c 0.97 -n 10 -l 11 -p 1 -d 0 -g 1).
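The noise-filtering rule of step (iv) can be sketched as follows. The dictionary layout used here is an assumption for illustration, not CD-HIT-OTU's internal data structure:

```python
import statistics

def filter_error_clusters(primary_to_recruits, cluster_sizes):
    """Toy version of CD-HIT-OTU's step (iv): let x be the median size of
    the small clusters recruited into the most abundant primary cluster;
    clusters smaller than x are treated as error-dominated and dropped.
    `primary_to_recruits` maps each primary cluster id to the ids of the
    small clusters it recruited; `cluster_sizes` maps every cluster id to
    its read count."""
    top = max(primary_to_recruits, key=lambda p: cluster_sizes[p])
    recruits = primary_to_recruits[top]
    x = statistics.median(cluster_sizes[r] for r in recruits) if recruits else 1
    return {c for c, n in cluster_sizes.items() if n >= x}
```

Because x is typically 2 or 3, genuine rare species (whose clusters, though small, recur across error-free reads) survive the filter while one-off error reads are discarded.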
The ultra-high speed of CD-HIT-OTU allows clustering of the multi-million rRNA tags pooled from a series of related samples. Such clustering can significantly increase the accuracy of OTU identification, because tags shared by different samples validate each other. Clustering pooled samples may identify very rare OTUs, which may be missed if individual samples are processed independently. We applied CD-HIT-OTU to two pooled data sets, Human_gut_V6  and Human_body_V2 ; these include 33 gut samples from obese and lean twin families and 815 samples from different body sites, respectively (Table 3). CD-HIT-OTU took only a few minutes for these two data sets.
In this analysis, we found that for the 33 human gut data sets, clustering the pooled samples identified 19–80 more rare OTUs than clustering individual samples. For the 815 human body data sets, clustering pooled samples found up to 50 more rare OTUs. Clustering pooled samples also provides a very straightforward way to define a ‘core microbiome’ and to compare the diversity and composition of samples. For example, we calculated NAT50 for each sample. Herein, NAT50 is a diversity indicator we defined, which stands for the number of most abundant taxonomic groups covering 50% of the population. Figure 1 shows that obese samples have lower diversity than lean samples. Note that the abundance of an OTU is the abundance of its rRNA genes and may not be the abundance of the species, because the rRNA copy numbers are unknown; however, rRNA gene abundance largely correlates with species abundance. The full results for human gut and human body are also available as examples with the CD-HIT-OTU software, which is available from http://weizhongli-lab.org/cd-hit-otu. CD-HIT-OTU is also available as a web server within WebMGA , a collection of web servers for metagenomic data analysis.
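From its definition in the text, NAT50 can be computed with a short function like this (a sketch of the stated definition; the interface is ours, not a published implementation):

```python
def nat50(abundances):
    """NAT50: the number of most abundant taxonomic groups whose cumulative
    abundance covers 50% of the total population."""
    total = sum(abundances)
    running, n = 0, 0
    for a in sorted(abundances, reverse=True):
        running += a
        n += 1
        if running * 2 >= total:
            return n
    return n
```

A sample dominated by one group (e.g. abundances 50, 30, 10, 10) has NAT50 = 1, whereas an evenly spread sample needs many groups to reach 50%, so a lower NAT50 indicates lower diversity.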
As shown in the previous section, clustering-based approaches address sequencing errors in rRNA tags very well. Similar clustering analyses can also filter out errors in genomic and metagenomic reads and, therefore, improve sequence assembly, gene prediction and other analyses. However, finding errors in genomic reads is more difficult than in rRNA tags, which can be aligned at their 5′-ends because they all start with the same universal primers. For genomic reads, several existing methods detect sequence errors through various clustering approaches. For example, FreClu  and EDAR  use k-mer frequency; Hammer  uses a Hamming graph and ECHO  clusters overlapping reads through k-mer hashing. These methods avoid very time-consuming full-length sequence alignment in clustering the reads. However, full-length sequence alignment is feasible using ultra-fast sequence clustering algorithms. For example, the analysis in the SEED article  shows that genome assembly can be notably improved by assembling only the cluster representatives.
Herein, we show an example of using clustering-based filtering to improve metagenome assembly. Metagenomic samples often contain a small number of dominant organisms along with hundreds or more of less abundant species. Because of sequencing errors, major problems in metagenome assembly often occur for the high-abundance species. Clustering methods, including the k-mer frequency-based approaches, benefit from high sequence redundancy, from which a better consensus can be derived. So, the assembly difficulties for the dominant species in a metagenome can be effectively corrected.
Herein, as a demonstration, we use an Illumina data set representing the high-abundance species of a human gut sample (MH0006) from the MetaHIT project  at http://gutmeta.genomics.org.cn. The MetaHIT study also provided Sanger reads for sample MH0006 as a reference and assembled them into 995 contigs. We mapped the Illumina reads to these reference contigs using SOAP2  (option: -M 4); 144 contigs have a coverage of at least 200. The reads mapped to these contigs were selected as a high-abundance subset, which contains 36,175,286 75-bp paired-end reads. The clustering-based approach has the following steps: (i) reads are clustered with CD-HIT-EST (options: ‘-c 0.96 -n 10 -r 1 -aS 0.5 -b 2 -G 0’); (ii) for each cluster, we keep at most N reads with the best average quality score per base and filter out the extra sequences, where N is a redundancy cutoff parameter and (iii) the remaining reads are assembled at different N levels and the optimal assemblies selected. The comparison of the contigs from the original reads and the filtered reads using Velvet  and SOAPdenovo  is shown in Figure 2. The filtered data sets largely improve the N50 and the longest contig length. In fact, for the unfiltered data set, the coverage is so low that there is no valid N50. The accuracy and coverage are also much higher with the filtered reads.
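Step (ii) of this procedure, keeping at most N best-quality reads per cluster, can be sketched as follows; the `(read_id, mean_quality)` pair layout is an assumption for illustration:

```python
def cap_clusters(clusters, n_keep):
    """Redundancy filter: within each cluster, keep at most `n_keep` reads
    with the best average per-base quality. `clusters` maps a cluster id to
    a list of (read_id, mean_quality) pairs; returns the kept read ids."""
    kept = []
    for reads in clusters.values():
        best = sorted(reads, key=lambda r: r[1], reverse=True)[:n_keep]
        kept.extend(read_id for read_id, _ in best)
    return kept
```

Choosing N trades redundancy against error content: a small N discards most error-bearing copies of over-represented templates while retaining enough coverage for assembly.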
There are two reasons why we used the high-abundance subset instead of the full MH0006 data. First, sequence errors deteriorate assembly mainly for the high-abundance reads, so our filtering method only improves the high-abundance species. Second, we evaluated the contigs assembled from the Illumina reads by comparing them with high-quality references (contigs from the Sanger reads). Most contigs assembled from the low-abundance Illumina reads cannot be mapped to any reference sequences, so we cannot evaluate these contigs.
In metagenomic projects, an important annotation step is to query the reads or open reading frames (ORFs) against reference databases of known genomes, DNA or proteins with an alignment program such as the basic local alignment search tool (BLAST) , BWA , BLAT , FR-HIT  or Rapsearch . Because of the huge size of both the reference databases and the query, such database searches can be very time consuming. However, both the reference databases and the query sample can be very redundant, so simply using NR data sets may save a great deal of computation time and, in some cases, also improve the accuracy of the database search .
As illustrated in Figure 3, before database searching, both the reference database and the query are clustered at certain similarity thresholds. Then, the NR sequences (i.e. representatives) from the query are aligned to the NR sequences in the reference database. Finally, the annotation results are copied from the representatives to other sequences in the same clusters.
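The final annotation-transfer step amounts to copying each representative's hit to every member of its cluster; a minimal sketch, with assumed dictionary layouts:

```python
def transfer_annotations(rep_hits, cluster_members):
    """Copy annotations from NR representatives to their cluster members.
    `rep_hits` maps a representative id to its database hit; `cluster_members`
    maps a representative id to the ids of all sequences in its cluster
    (a representative with no entry is treated as a singleton)."""
    annotations = {}
    for rep, hit in rep_hits.items():
        for member in cluster_members.get(rep, [rep]):
            annotations[member] = hit
    return annotations
```

Only the representatives are ever aligned against the reference database; the remaining sequences inherit their annotations for free, which is where the overall speed-up comes from.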
A big concern with this approach is how much difference there is between annotations calculated from the NR data sets and from the original full data sets. The key to this approach is to use appropriate, conservative clustering parameters, such that the clusters are homogeneous with respect to the annotation goals. For example, to obtain species-level taxonomy annotations for 16S rRNA reads by aligning them to a reference rRNA database such as Silva , RDP  or Greengenes , we can use 97% as the species-level identity cutoff; the query and reference should then be clustered at a similarity cutoff greater than 97%. If the goal is to annotate ORFs using the KEGG database , then clustering both the KEGG reference sequences and the ORFs at 90% will be harmless, because sequences sharing 90% identity will rarely belong to different KEGG orthology groups.
To further reduce the annotation difference between NR data sets and full data sets, this approach should only be used to cluster sequences of similar length and with enough overlapping regions (Figure 3D), instead of other clustering settings (Figure 3B, C). The CD-HIT program has many parameters such as sequence length, alignment length and alignment coverage for users to finely tune the clustering process to form more homogeneous clusters.
Table 4 lists the clustering results for commonly used reference databases in metagenomic studies at conservative thresholds. Herein, the sizes of the NR data sets are 28–58% of the original ones. After clustering, the size of an NR query data set, which depends strongly on the sequencing depth, can often be 50% to many times smaller than the original data set. So overall, annotation using NR data sets can easily be accelerated 10-fold.
Reference-based metagenome annotation by comparison with known sequences is essential but has drawbacks, with the biggest limitation being the inability to annotate novel sequences. Large metagenomes and those from under-explored environments contain a large number of novel genes, which might be specific to the environment. These novel proteins can well be overlooked by reference-based annotation.
Clustering analysis is the most effective way to discover novel gene families from large data sets. This has been demonstrated by the global ocean sampling (GOS) study, which identified 3995 novel protein clusters from 17.4 million ORFs . Other large-scale studies, such as MetaHIT , also found novel gene families through sequence clustering.
Clustering metagenomic proteins into families is more complicated than creating an NR data set. In both the GOS and MetaHIT projects, the analyses started by removing highly similar sequences (95–98% identity), followed by several steps of protein clustering. In GOS, these steps include (i) calculation of all-against-all similarities using BLAST; (ii) construction of core sequence clusters, which are dense sub-graphs in the whole graph, where the vertices are sequences and the edges are defined by a set of very stringent similarity cutoffs; (iii) calculation of sequence profiles for large core clusters using FFAS  and PSI-BLAST ; (iv) creation of protein families by merging core clusters using FFAS profiles and (v) recruitment of small clusters and singletons into large core clusters using PSI-BLAST profiles. In MetaHIT, families were clustered from all-against-all BLAST results with an algorithm called MCL, whose details were not described in the MetaHIT article.
The above clustering pipelines require very time-consuming BLAST calculations (e.g. GOS used 1 million CPU hours), so they are not very feasible for small labs, which now also generate large-scale metagenomic data using NGS technologies. Earlier, the speed of the GOS clustering pipeline was improved  by adopting CD-HIT as a fast clustering and recruiting tool. An independent study using only CD-HIT to build protein families from GOS proteins was also introduced later . This CD-HIT-based clustering produced results comparable with the original GOS study but used only ~10 000 CPU hours. Thus, this approach is more suitable for projects with large data sets that lack the computational resources for exploring protein families.
The above CD-HIT-based protein family finding process has three clustering steps, where each subsequent step clusters the representative sequences generated in the previous step. The similarity thresholds for these three steps are 90%, 60% and 30%, respectively. The first two steps use regular CD-HIT, and the last step uses PSI-CD-HIT, which also allows an alternative e-value threshold. The details of this method are described in Refs.  and , where the method was further improved. The GOS data set is available from the CAMERA project at http://camera.calit2.net. Herein, we demonstrate this easy approach to protein family identification using MetaHIT data. The original 14,792,886 proteins were downloaded from the MetaHIT project at http://gutmeta.genomics.org.cn/. These proteins were hierarchically clustered at 90%, 80%, 60% and 30% identity or an e-value of 1e-6. These four steps took 10, 2, 102 and 720 CPU hours and produced 3,076,514, 2,471,148, 1,554,866 and 732,063 clusters, respectively. We used a four-step clustering for the MetaHIT data set, instead of the three-step clustering we used earlier for the GOS data set, because the four-step clustering provides better classification accuracy. The cluster distributions of MetaHIT are illustrated along with the GOS clusters produced in Ref.  (Figure 4A, B). Compared with the GOS ORFs, more than half of which are spurious because of the six-reading-frame translation, the MetaHIT genes predicted using MetaGene  have far fewer false ORFs. So, with similar clustering parameters, the MetaHIT data set has far fewer small clusters and singletons than GOS (Figure 4A). Cluster distributions for MetaHIT clusters grouped as known and novel are shown in Figure 4C and D. Herein, novel clusters are those with no detectable similarity to Pfam families  using HMMER3 . The 732,063 MetaHIT clusters contain 20,328 large clusters with at least 20 NR proteins, and 2580 of them are novel (Figure 4C), covering ~9% of all MetaHIT sequences (Figure 4D).
These novel clusters may represent human gut-specific gene families and should be further investigated.
As a summary, the tools tested in this article are listed in Table 5.
The key parameters of these programs include the clustering similarity cutoff and some algorithmic parameters. For the clustering similarity cutoff, CD-HIT, CD-HIT-EST, Uclust, CD-HIT-454, PSI-CD-HIT and the OTU clustering packages use sequence identity; DNACLUST uses a similarity cutoff based on edit distance, which is very similar to sequence identity; SEED and CD-HIT-DUP allow a certain number of mismatches. Word size is the most important algorithmic parameter for many programs (Table 5). The choice of word size depends on the clustering similarity cutoff and the type of sequences (protein or DNA). A higher similarity cutoff works with a longer word, which yields higher clustering speed. An important property of clustering methods is whether the results change when the order of the input sequences is different. Most methods introduced herein, including the programs in the CD-HIT package, Uclust and DNACLUST, sort sequences by length and process them from long to short. The OTU clustering packages (e.g. CD-HIT-OTU) sort sequences by abundance and process them from high to low. So, the order of the input sequences does not change the output clusters, except when input sequences of the same length (or abundance) appear in a different order. Reads in most Illumina data sets have identical lengths, so the clustering results for Illumina reads depend on the order of the input sequences.
National Institutes of Health [R01RR025030 and R01HG005978 to W.L.] and the Gordon and Betty Moore Foundation [CAMERA project].
Weizhong Li is an Associate Research Scientist at the Center for Research in Biological Systems at University of California San Diego. Dr. Li has a background in computational biology. His research focuses on developing computational methods for sequence, genomic and metagenomic data analysis.
Limin Fu is a Postdoctoral Associate at the Center for Research in Biological Systems at University of California San Diego. Dr. Fu’s background is mathematics. His research focuses on bioinformatics algorithm development.
Beifang Niu is a Postdoctoral Associate at the Center for Research in Biological Systems at University of California San Diego. Dr. Niu was trained as a computer scientist. His research focuses on next-generation sequence analysis.
Sitao Wu is a Staff Scientist at the Center for Research in Biological Systems at University of California San Diego. Dr. Wu has a background in electrical engineering. His research interests include protein structure prediction and metagenomics.
John Wooley is a Professor of Pharmacology and Associate Vice Chancellor, Research at the University of California San Diego, as well as a member of the Center for Research in Biological Systems and the California Institute of Telecommunications and Information Technology. Dr. Wooley’s background is biophysics and his current research interests include structural genomics and metagenomics.