The increasing use of DNA microarrays in biomedical research, toxicogenomics, pharmaceutical development, and diagnostics has focused attention on the reproducibility and reliability of microarray measurements. While the reproducibility of microarray gene expression measurements has been the subject of several recent reports, there is still a need for systematic investigation into which factors contribute most to the variability in measured expression levels observed among different laboratories and different experimenters.
We report the results of an interlaboratory comparison of gene expression array measurements on the same microarray platform, in which the RNA amplification and labeling, hybridization and wash, and slide scanning were each individually varied. Identical input RNA was used for all experiments. While some sources of variation have measurable influence on individual microarray signals, they showed very low influence on sample-to-reference ratios based on averaged triplicate measurements in the two-color experiments. RNA labeling was the largest contributor to interlaboratory variation.
Despite this variation, measurement of one particular breast cancer gene expression signature in three different laboratories was found to be highly robust, showing a high intralaboratory and interlaboratory reproducibility when using strictly controlled standard operating procedures.
An outline of the processes involved in both certified clinical reference material production and clinical reference measurement procedure development at the National Institute of Standards and Technology (NIST), the national metrology institute of the United States, is presented. The role that NIST and other national metrology institutes play in the metrological traceability of certified reference material is discussed. Highlighted are the challenges associated with the development of reference measurement systems for complex clinical analytes, such as proteins, and examples of existing efforts in this area are given. Examples of recent international collaborations in developing certified reference materials for analytes such as cardiac troponin I, brain natriuretic peptide, and serum creatinine demonstrate the close cooperation that national metrology institutes must have with the clinical community to establish complete reference measurement systems.
DNA microarrays currently provide measurements of sufficiently high quality to allow a wide variety of sound inferences about gene regulation and the coordination of cellular processes to be drawn. Nonetheless, a desire for greater precision continues to drive the microarray research community to seek higher measurement quality through improvements in array fabrication, sample labeling and hybridization. We prepared oligonucleotide microarrays by printing 65-mer oligonucleotides on aldehyde-derivatized slides as described in the previous study xxx. We improved data reliability by removing enzymatic bias during probe labeling and by hybridizing under more stringent conditions. This optimized method was used to profile gene expression patterns for nine different mouse tissues and organs, and MDS analysis of the data showed both strong similarity between like samples and a clear, highly reproducible separation between different tissue samples. Three other microarrays were fabricated on commercial substrates and hybridized following the manufacturers' instructions. The data were then compared with the in-house microarray data and with RT-PCR data. The microarray printed on the custom aldehyde slide was superior to microarrays printed on commercially available substrate slides in terms of signal intensities, background and hybridization characteristics. The data from the custom substrate microarray generally showed good agreement with RT-PCR data for quantitative changes of up to 100-fold in transcript abundance. However, more accurate comparisons will become possible as more genomic sequence information accumulates in the public domain.
Microarrays have the potential to significantly impact our ability to identify toxic hazards by the identification of mechanistically relevant markers of toxicity. To be useful for risk assessment, however, microarray data must be challenged to determine reliability and interlaboratory reproducibility. As part of a series of studies conducted by the International Life Sciences Institute Health and Environmental Science Institute Technical Committee on the Application of Genomics to Mechanism-Based Risk Assessment, the biological response in rats to the hepatotoxin clofibrate was investigated. Animals were treated with high (250 mg/kg/day) or low (25 mg/kg/day) doses for 1, 3, or 7 days in two laboratories. Clinical chemistry parameters were measured, livers removed for histopathological assessment, and gene expression analysis was conducted using cDNA arrays. Expression changes in genes involved in fatty acid metabolism (e.g., acyl-CoA oxidase), cell proliferation (e.g., topoisomerase II-Alpha), and fatty acid oxidation (e.g., cytochrome P450 4A1), consistent with the mechanism of clofibrate hepatotoxicity, were detected. Observed differences in gene expression levels correlated with the level of biological response induced in the two in vivo studies. Generally, there was a high level of concordance between the gene expression profiles generated from pooled and individual RNA samples. Quantitative real-time polymerase chain reaction was used to confirm modulations for a number of peroxisome proliferator marker genes. Though the results indicate some variability in the quantitative nature of the microarray data, this appears due largely to differences in experimental and data analysis procedures used within each laboratory. In summary, this study demonstrates the potential for gene expression profiling to identify toxic hazards by the identification of mechanistically relevant markers of toxicity.
Several studies, including the MicroArray Quality Control (MAQC) project, have reported that concordant results in relative expression measures can be obtained in different microarray laboratories using the same RNA samples. These encouraging results have led to increased interest in and utilization of microarray technology in clinical and diagnostic assays, which require stringent regulatory approval. Many laboratories monitor microarray quality metrics calculated from each hybridization image, such as background or percent present. While these internal metrics provide essential information for individual hybridizations, they are not sufficient for validating overall competency of the facility, including precision and accuracy of the results, which may vary between laboratories and over time. Comprehensive assessments of laboratory performance must include inter-laboratory comparisons, such as a proficiency testing program, where identical samples are tested in multiple facilities.
We have initiated a proficiency testing program for microarray laboratories using the Affymetrix platform. In previous testing rounds, we compared data from 18 laboratories over a nine-month period using rat samples. We are now expanding the program to additional laboratories using human samples. Three replicates of the two human MAQC reference RNA samples were repeatedly tested in multiple facilities. The poster will show the results of several analyses to assess the repeatability and comparability of each laboratory's results.
Using a microarray platform for allergy diagnosis allows for testing of specific IgE sensitivity to a multitude of allergens, while requiring only small volumes of serum. However, variation of probe immobilization on microarrays hinders the ability to make quantitative, assertive, and statistically relevant conclusions necessary in immunodiagnostics. To address this problem, we have developed a calibrated, inexpensive, multiplexed, and rapid protein microarray method that directly correlates surface probe density to captured labeled secondary antibody in clinical samples. We have identified three major technological advantages of our calibrated fluorescence enhancement (CaFE) technique: (i) a significant increase in fluorescence emission over a broad range of fluorophores on a layered substrate optimized specifically for fluorescence; (ii) a method to perform label-free quantification of the probes in each spot while maintaining fluorescence enhancement for a particular fluorophore; and (iii) a calibrated, quantitative technique that combines fluorescence and label-free modalities to accurately measure probe density and bound target for a variety of antibody–antigen pairs. In this paper, we establish the effectiveness of the CaFE method by presenting the strong linear dependence of the amount of bound protein to the resulting fluorescence signal of secondary antibody for IgG, β-lactoglobulin, and allergen-specific IgEs to Ara h 1 (peanut major allergen) and Phl p 1 (timothy grass major allergen) in human serum.
The maturing of gene expression microarray technology and interest in the use of microarray-based applications for clinical and diagnostic applications calls for quantitative measures of quality. This manuscript presents a retrospective study characterizing several approaches to assess technical performance of microarray data measured on the Affymetrix GeneChip platform, including whole-array metrics and information from a standard mixture of external spike-in and endogenous internal controls. Spike-in controls were found to carry the same information about technical performance as whole-array metrics and endogenous "housekeeping" genes. These results support the use of spike-in controls as general tools for performance assessment across time, experimenters and array batches, suggesting that they have potential for comparison of microarray data generated across species using different technologies.
A layered PCA modeling methodology that uses data from a number of classes of controls (spike-in hybridization, spike-in polyA+, internal RNA degradation, endogenous or "housekeeping genes") was used for the assessment of microarray data quality. The controls provide information on multiple stages of the experimental protocol (e.g., hybridization, RNA amplification). External spike-in, hybridization and RNA labeling controls provide information related to both assay and hybridization performance whereas internal endogenous controls provide quality information on the biological sample. We find that the variance of the data generated from the external and internal controls carries critical information about technical performance; the PCA dissection of this variance is consistent with whole-array quality assessment based on a number of quality assurance/quality control (QA/QC) metrics.
These results provide support for the use of both external and internal RNA control data to assess the technical quality of microarray experiments. The observed consistency amongst the information carried by internal and external controls and whole-array quality measures offers promise for rationally-designed control standards for routine performance monitoring of multiplexed measurement platforms.
Robustness is defined as the ability to maintain performance in the face of perturbations and uncertainties, and sensitivity is a measure of the system deviations generated by perturbations to the system. While cancer appears to be a robust yet fragile system, little computational and quantitative evidence demonstrates robustness tradeoffs in cancer. Microarrays have been widely applied to decipher gene expression signatures in human cancer research, and quantification of global gene expression profiles facilitates precise prediction and modeling of cancer in systems biology. We provide several efficient computational methods based on systems and control theory to compare robustness and sensitivity between cancer and normal cells using microarray data. We introduce a measurement of robustness and sensitivity based on a linear stochastic model, which reveals oscillations in the p53 feedback loops and demonstrates a robustness tradeoff: cancer is a robust system with some extreme fragilities. In addition, we measure the sensitivity of gene expression to perturbations in other genes' expression and in kinetic parameters, discuss nonlinear effects in the p53 feedback loops, and extend our method to robustness-based cancer drug design.
robustness tradeoffs; sensitivity analysis; robustness-based cancer drug design; feedback loops of p53
To characterize global structural features of large-scale biomedical terminologies using currently emerging statistical approaches.
Given rapid growth of terminologies, this research was designed to address scalability. We selected 16 terminologies covering a variety of domains from the UMLS Metathesaurus, a collection of terminological systems. Each was modeled as a network in which nodes were atomic concepts and links were relationships asserted by the source vocabulary. For comparison against each terminology we created three random networks of equivalent size and density.
Average node degree, node degree distribution, clustering coefficient, average path length.
Eight of 16 terminologies exhibited the small-world characteristics of a short average path length and strong local clustering. An overlapping subset of nine exhibited a power law distribution in node degrees, indicative of a scale-free architecture. We attribute these features to specific design constraints. Constraints on node connectivity, common in more synthetic classification systems, localize the effects of changes and deletions. In contrast, small-world and scale-free features, common in comprehensive medical terminologies, promote flexible navigation and less restrictive organic-like growth.
While thought of as synthetic, grid-like structures, some controlled terminologies are structurally indistinguishable from natural language networks. This paradoxical result suggests that terminology structure is shaped not only by formal logic-based semantics, but by rules analogous to those that govern social networks and biological systems. Graph theoretic modeling shows early promise as a framework for describing terminology structure. Deeper understanding of these techniques may inform the development of scalable terminologies and ontologies.
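The graph-theoretic measures named above (average node degree, clustering coefficient, average path length) can be illustrated with a minimal, dependency-free sketch. The toy adjacency structure is invented for illustration; real terminology networks have orders of magnitude more nodes:

```python
from collections import deque

def avg_degree(adj):
    # Mean number of neighbours per node in an undirected graph.
    return sum(len(nbrs) for nbrs in adj.values()) / len(adj)

def clustering_coefficient(adj, v):
    # Fraction of a node's neighbour pairs that are themselves linked.
    nbrs = list(adj[v])
    k = len(nbrs)
    if k < 2:
        return 0.0
    links = sum(1 for i in range(k) for j in range(i + 1, k)
                if nbrs[j] in adj[nbrs[i]])
    return 2.0 * links / (k * (k - 1))

def avg_path_length(adj):
    # Mean shortest-path length over all connected node pairs,
    # via breadth-first search from every node.
    total, pairs = 0, 0
    for src in adj:
        dist = {src: 0}
        q = deque([src])
        while q:
            u = q.popleft()
            for w in adj[u]:
                if w not in dist:
                    dist[w] = dist[u] + 1
                    q.append(w)
        for node, d in dist.items():
            if node != src:
                total += d
                pairs += 1
    return total / pairs

# Toy "terminology" network: a triangle of concepts plus a pendant node.
g = {"a": {"b", "c"}, "b": {"a", "c"}, "c": {"a", "b", "d"}, "d": {"c"}}
```

A short average path length together with clustering well above that of a size-matched random network is the small-world signature the study tests for.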
Many researchers are concerned about the comparability and reliability of microarray gene expression data. The recent completion of the MicroArray Quality Control (MAQC) project provides a unique opportunity to assess reproducibility across multiple sites and comparability across multiple platforms. The MAQC analysis presented to support the conclusion of inter- and intra-platform comparability/reproducibility of microarray gene expression measurements is inadequate. We evaluate the reproducibility/comparability of the MAQC data for 12,901 common genes in four titration samples generated from five high-density one-color microarray platforms and the TaqMan technology. We discuss some of the problems with using the correlation coefficient as a metric for inter- and intra-platform reproducibility, and with the percentage of overlapping genes (POG) as a measure for evaluating the gene selection procedures used by MAQC.
A total of 293 arrays were used in the intra- and inter-platform analysis. A hierarchical cluster analysis shows distinct differences in the measured intensities among the five platforms. A number of genes show a small fold-change in one platform and a large fold-change in another, even though the correlations between platforms are high. An analysis of variance shows that thirty percent of genes exhibit inconsistent expression patterns across the five platforms. We illustrate that POG does not reflect the accuracy of a selected gene list: a non-overlapping gene can be truly differentially expressed under a stringent cutoff, and an overlapping gene can be non-differentially expressed under a non-stringent cutoff. In addition, POG is unsuitable as a selection criterion; it can increase or decrease irregularly as the cutoff changes, and there is no criterion for choosing a cutoff that optimizes POG.
Using various statistical methods, we demonstrate that there are differences in the intensities measured by different platforms and by different sites within a platform. Within each platform, the patterns of expression are generally consistent, but there is site-to-site variability. Evaluation of data analysis methods for use in regulatory decisions should take the no-treatment-effect case into consideration: when there is no treatment effect, a fold-change cutoff combined with a non-stringent p-value cutoff could result in a 100% false-positive selection rate.
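The POG metric criticized above is simple to state: the fraction of the top-k genes shared by two significance-ranked DEG lists. A minimal sketch (gene names invented) also shows the irregular behaviour as the cutoff k changes:

```python
def pog(ranked_a, ranked_b, k):
    # Percentage of overlapping genes between the top-k entries of two
    # significance-ranked DEG lists.
    top_a, top_b = set(ranked_a[:k]), set(ranked_b[:k])
    return 100.0 * len(top_a & top_b) / k

# Two hypothetical ranked DEG lists from two studies.
a = ["g1", "g2", "g3", "g4", "g5"]
b = ["g2", "g1", "g4", "g6", "g3"]
scores = [pog(a, b, k) for k in (1, 2, 3)]
```

Here POG jumps from 0% at k=1 to 100% at k=2 and back down at k=3, illustrating why no cutoff "optimizes" POG.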
Molecular biomarkers that are based on mRNA transcripts are being developed for the diagnosis and treatment of a number of diseases. DNA microarrays are one of the primary technologies being used to develop classifiers from gene expression data for clinically relevant outcomes. Microarray assays are highly multiplexed measures of comparative gene expression but have a limited dynamic range of measurement and show compression in fold change detection. To increase the clinical utility of microarrays, assay controls are needed that benchmark performance using metrics that are relevant to the analysis of genomic data generated with biological samples.
Ratiometric controls were prepared from commercial sources of high quality RNA from human tissues with distinctly different expression profiles and mixed in defined ratios. The samples were processed using six different target labeling protocols and replicate datasets were generated on high density gene expression microarrays. The area under the curve from receiver operating characteristic plots was calculated to measure diagnostic performance. The reliable region of the dynamic range was derived from log2 ratio deviation plots made for each dataset. Small but statistically significant differences in diagnostic performance were observed between standardized assays available from the array manufacturer and alternative methods for target generation. Assay performance using the reliable range of comparative measurement as a metric was improved by adjusting sample hybridization conditions for one commercial kit.
Process improvement in microarray assay performance was demonstrated using samples prepared from commercially available materials and two metrics - diagnostic performance and the reliable range of measurement. These methods have advantages over approaches that use a limited set of external controls or correlations to reference sets, because they provide benchmark values that can be used by clinical laboratories to help optimize protocol conditions and laboratory proficiency with microarray assays.
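The area under the ROC curve used here as a diagnostic-performance metric can be computed without tracing the curve, via the rank (Mann-Whitney) formulation: AUC is the probability that a randomly chosen true positive scores above a randomly chosen true negative. A small sketch with made-up scores:

```python
def roc_auc(pos_scores, neg_scores):
    # AUC = P(score of a random positive > score of a random negative);
    # ties count as one half. O(n*m) pairwise form, fine for a sketch.
    wins = 0.0
    for p in pos_scores:
        for n in neg_scores:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(pos_scores) * len(neg_scores))

# Hypothetical detection scores for truly changed vs. unchanged genes.
auc = roc_auc([0.9, 0.8, 0.4], [0.5, 0.3, 0.2])
```

An AUC of 1.0 means the assay ranks every changed gene above every unchanged one; 0.5 is chance performance.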
A new, rugged, precise, accurate and fast primary method of measurement has been proposed for the determination of gold in various gold articles. Precise and accurate measurement of gold is the primary requirement for hallmarking and for trading gold internationally, as billions of dollars of gold are traded worldwide for various applications. At present, fire assay (ASTM E 1335-08) is the only internationally accepted standard test method for the determination of gold, but it is time consuming, cumbersome and requires expertise to perform. In the present investigation, a gravimetric method has been developed based on direct determination of gold after reducing it to the zero-valent state with hydroxylamine hydrochloride. Gravimetry is the most reliable technique, with the highest metrological qualities in comparison to titrimetry and instrumental methods, and its results are directly traceable to SI units; the results of a gravimetric method are accepted without reference to a standard of the same quantity. Several experiments were carried out with and without impurities, and it was concluded that gold can be determined accurately and precisely in the presence of several impurities. Five replicates of approximately 0.2 g gold samples were analyzed following the proposed method, and the percentage purity was found to be 99.993 ± 0.0056 at the 95% confidence level (k = 2). The combined uncertainty in the gold measurement was also evaluated from the potential sources of the method according to the EURACHEM/GUM guidelines.
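The uncertainty evaluation mentioned above follows the standard GUM pattern: independent standard uncertainty components (weighing, purity of reagents, repeatability, and so on) are combined in quadrature, then multiplied by a coverage factor k = 2 for approximately 95% confidence. A sketch of that arithmetic, with invented component values:

```python
def combined_uncertainty(components):
    # Root-sum-of-squares combination of independent standard
    # uncertainty components (GUM law of propagation, unit sensitivities).
    return sum(u * u for u in components) ** 0.5

def expanded_uncertainty(components, k=2):
    # Expanded uncertainty at ~95% confidence uses coverage factor k = 2.
    return k * combined_uncertainty(components)

# Hypothetical components, e.g. balance, repeatability (same units).
u_c = combined_uncertainty([3.0, 4.0])
U = expanded_uncertainty([3.0, 4.0])
```

The reported value is then quoted as measurand ± U (k = 2), as in the 99.993 ± 0.0056 result above.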
The evolution of high-throughput technologies that measure gene expression levels has created a data base for inferring gene regulatory networks (GRNs), a process also known as reverse engineering of GRNs. However, the nature of these data has made this process very difficult. At present, several methods exist for discovering qualitative causal relationships between genes with high accuracy from microarray data, but large-scale quantitative analysis on real biological datasets has not yet been performed, as existing approaches are not suited to real microarray data, which are noisy and insufficient.
This paper performs an analysis of several existing evolutionary algorithms for quantitative gene regulatory network modelling. The aim is to present the techniques used and offer a comprehensive comparison of approaches, under a common framework. Algorithms are applied to both synthetic and real gene expression data from DNA microarrays, and ability to reproduce biological behaviour, scalability and robustness to noise are assessed and compared.
Presented is a comparison framework for assessment of evolutionary algorithms, used to infer gene regulatory networks. Promising methods are identified and a platform for development of appropriate model formalisms is established.
Microarray technology has become a widely used tool in the biological sciences. Over the past decade, the number of users has grown exponentially, and with the number of applications and secondary data analyses rapidly increasing, we expect this rate to continue. Various initiatives such as the External RNA Control Consortium (ERCC) and the MicroArray Quality Control (MAQC) project have explored ways to provide standards for the technology. For microarrays to become generally accepted as a reliable technology, statistical methods for assessing quality will be an indispensable component; however, there remains a lack of consensus in both defining and measuring microarray quality.
We begin by providing a precise definition of microarray quality and reviewing existing Affymetrix GeneChip quality metrics in light of this definition. We show that the best-performing metrics require multiple arrays to be assessed simultaneously. While such multi-array quality metrics are adequate for bench science, as microarrays begin to be used in clinical settings, single-array quality metrics will be indispensable. To this end, we define a single-array version of one of the best multi-array quality metrics and show that this metric performs as well as the best multi-array metrics. We then use this new quality metric to assess the quality of microarray data available via the Gene Expression Omnibus (GEO), using more than 22,000 Affymetrix HGU133a and HGU133plus2 arrays from 809 studies.
We find that approximately 10 percent of these publicly available arrays are of poor quality. Moreover, the quality of microarray measurements varies greatly from hybridization to hybridization, study to study, and lab to lab, with some experiments producing unusable data. Many of the concepts described here are applicable to other high-throughput technologies.
High-density DNA microarrays require automatic feature extraction methodologies and software. These can be a potential source of non-reproducibility in gene expression measurements. Variation in feature location or in signal integration methodology may contribute significantly to the observed variance in gene expression levels.
We explore sources of variability in feature extraction from DNA microarrays on Nylon membrane with radioactive detection. We introduce a mathematical model of the signal emission and derive methods for correcting biases such as overshining, saturation or variation in probe amount. We also provide a quality metric which can be used qualitatively to flag weak or untrusted signals or quantitatively to modulate the weight of each experiment or gene in higher level analyses (clustering or discriminant analysis).
Our novel feature extraction methodology, based on a mathematical model of the radioactive emission, reduces variability due to saturation, neighbourhood effects and variable probe amount. Furthermore, we provide a fully automatic feature extraction software, BZScan, which implements the algorithms described in this paper.
Motivation: According to current consistency metrics such as the percentage of overlapping genes (POG), lists of differentially expressed genes (DEGs) detected in different microarray studies of a complex disease are often highly inconsistent. This irreproducibility problem also exists in other high-throughput post-genomic areas such as proteomics and metabolomics. A complex disease is often characterized by many coordinated molecular changes, which should be considered when evaluating the reproducibility of discovery lists from different studies.
Results: We propose the percentage of overlapping genes-related (POGR) metric and its normalized form (nPOGR) to evaluate the consistency between two DEG lists for a complex disease, taking correlated molecular changes into account rather than only counting gene overlaps between the lists. Based on microarray datasets for three diseases, we show that although the POG scores for DEG lists from different studies of each disease are extremely low, the POGR and nPOGR scores can be rather high, suggesting that the apparently inconsistent DEG lists may be highly reproducible in the sense that they are significantly correlated. Assessing different discovery results for a disease with the POGR and nPOGR scores should substantially reduce the apparent uncertainty of microarray studies. The proposed metrics could also be applicable in many other high-throughput post-genomic areas.
Supplementary information: Supplementary data are available at Bioinformatics online.
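The idea behind POGR can be sketched in a few lines: a gene in list A counts as a hit if it appears in list B or is significantly correlated with at least one gene in list B. The correlation lookup below is a hypothetical precomputed table, standing in for the correlation analysis described in the paper; gene names are invented:

```python
def pogr(degs_a, degs_b, correlated):
    # POGR: percentage of genes in list A that either overlap list B or
    # are significantly correlated with at least one gene in list B.
    # 'correlated' maps gene -> set of correlated genes (assumed
    # precomputed from expression data; hypothetical here).
    in_b = set(degs_b)
    hits = sum(1 for g in degs_a
               if g in in_b or correlated.get(g, set()) & in_b)
    return 100.0 * hits / len(degs_a)
```

With an empty correlation table, POGR reduces to plain overlap counting (POG); the correlation term is what lets apparently disjoint lists score as consistent.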
Commutable reference materials (RMs) are suitable for end-users for evaluating the metrological traceability of values obtained using routine measurement systems. We assessed the performance of 6 routine measurement systems with validated secondary RMs.
We tested the homogeneity, stability, and commutability of 5 minimally processed human serum pools according to the standard guidelines. The serum pools were assigned values as per the reference procedure of the United States Centers for Disease Control and were used to evaluate the trueness of results from 6 commercial measurement systems based on enzymatic methods: 3 glucose oxidase (GOD) and 3 hexokinase (HK) methods.
The prepared RMs were validated to be sufficiently homogeneous, stable, and commutable with patient samples. Method bias varied across systems: GOD01, -0.17 to 2.88%; GOD02, 1.66 to 4.58%; GOD03, -0.17 to 3.14%; HK01, -3.48 to -0.85%; HK02, -3.83 to -0.11%; and HK03, -1.82 to -0.27%.
We observed that the prepared serum glucose RMs were qualified for trueness assessment. Most of the measurement systems met the minimal quality specifications.
Glucose; Reference material; Commercial system; Bias
The ability to extract meaningful information from transcriptome technologies such as cDNA microarrays relies on the precision, sensitivity and reproducibility of the measured values for a given gene across multiple samples. Given the lack of a ‘gold standard’ for the production of microarrays using current technologies, there is a high degree of variation in the quality of data derived from microarray experiments. Poor reproducibility not only adds to the cost of a given study but also leads to data sets that are difficult to interpret. For glass slide DNA microarrays, much of this variation is introduced systematically, during the spotting, or deposition, of the DNA onto the slide surface. In order to reduce this type of systematic variation we tested spotting solutions containing different detergent additives in the presence of one of two different denaturants and determined their effect on spot quality. We show that spotting cDNA in a solution consisting of the zwitterionic detergent 3-[(3-cholamidopropyl)dimethylammonio]-1-propane sulfonate (CHAPS) in the presence of formamide or dimethyl sulfoxide yields spots of superior quality in terms of morphology, size homogeneity and signal reproducibility, as well as overall intensity, when used with popular, commercially available slides.
There is a growing demand for highly parallel gene expression analysis with whole genome coverage, high sensitivity and high accuracy. Open systems such as differential display are capable of analyzing most of the expressed genome but are not quantitative and generally require manual identification of differentially expressed genes by sequencing. Closed systems such as microarrays use gene-specific probes and are, therefore, limited to studying specific genes in well-characterized species. Here, we describe Tangerine, a PCR-based system that combines the scope and generality of open systems with a robust and immediate identification algorithm using publicly available sequence information. By combinatorial analysis of three independent and complete DNA indexing profiles, each displaying the complete set of expressed transcripts on capillary electrophoresis, the method allows transcripts to be simultaneously quantified and identified. The method is sensitive, accurate and reproducible, and is amenable to high-throughput automated operation.
Trueness must be independent of the analytical platform, and measurements must be comparable regardless of the analytical procedure used. Traceability requirements for the clinical laboratory run via national metrology institutes, reference (calibration) laboratories and, finally, the routine laboratory. Traceability information required by today's clinical laboratory may be requested from the manufacturer of the analytical kits or found on the internet. Traceable laboratory results will greatly enhance the role of the laboratory in patient management.
Interlaboratory comparison of microarray data, even on the same platform, poses several challenges. RNA quality, RNA labeling efficiency, hybridization procedures and data-mining tools can all contribute to variation in each laboratory. On Affymetrix GeneChips, about 11-20 different 25-mer oligonucleotides are used to measure the level of each transcript. Here, we report that 'labeling extension values (LEVs)', which are correlation coefficients between probe intensities and probe positions, are highly correlated with the gene expression levels (GEVs) in eukaryotic Affymetrix microarray data. By analyzing LEVs and GEVs in 2414 publicly available CEL files from 20 Affymetrix microarray types covering 13 species, we found that correlations between LEVs and GEVs exist only in eukaryotic RNAs, not in prokaryotic ones. Surprisingly, Affymetrix results from the same specimens analyzed in different laboratories could be clearly differentiated by LEVs alone, leading to the identification of 'laboratory signatures'. In the examined dataset, GSE10797, filtering out high-LEV genes did not compromise the discovery of biological processes constructed from differentially expressed genes. In conclusion, LEVs provide a new filtering parameter for microarray analysis of gene expression, and it may improve the inter- and intralaboratory comparability of Affymetrix GeneChip data.
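As described, an LEV for a probe set is just the Pearson correlation between the probe intensities and their position indices along the transcript. A minimal sketch of that calculation (intensity values invented; real probe sets have roughly 11-20 probes):

```python
def lev(intensities):
    # Labeling extension value: Pearson correlation between each probe's
    # intensity and its position index along the probe set (1..n).
    n = len(intensities)
    positions = list(range(1, n + 1))
    mx = sum(positions) / n
    my = sum(intensities) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(positions, intensities))
    vx = sum((x - mx) ** 2 for x in positions)
    vy = sum((y - my) ** 2 for y in intensities)
    return cov / (vx * vy) ** 0.5
```

An LEV near +1 means intensity rises steadily along the probe positions, the pattern the paper links to labeling extension; near 0 means no positional trend.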
A remarkable feature of development is its reproducibility, the ability to correct embryo-to-embryo variations and instruct precise patterning. In Drosophila, embryonic patterning along the anterior-posterior axis is controlled by the morphogen gradient Bicoid (Bcd). In this report, we describe quantitative studies of the native Bcd gradient and its target Hunchback (Hb). We show that the native Bcd gradient is highly reproducible and is itself scaled with embryo length. While a precise Bcd gradient is necessary for precise Hb expression, it still has positional errors greater than those of Hb expression. We describe analyses further probing the mechanisms of Bcd gradient scaling and the correction of its residual positional errors. Our results suggest a simple model of a robust Bcd gradient sufficient to achieve scaled and precise activation of its targets. The robustness of this gradient is conferred by its intrinsic property of "self-correcting" the inevitable input variations to achieve a precise and reproducible output.
The MicroArray Quality Control (MAQC) project evaluated the inter- and intra-platform reproducibility of seven microarray platforms and three quantitative gene expression assays in profiling the expression of two commercially available Reference RNA samples (Nat Biotechnol 24:1115-22, 2006). The tested microarrays were the platforms from Affymetrix, Agilent Technologies, Applied Biosystems, GE Healthcare, Illumina, Eppendorf and the National Cancer Institute, and the quantitative gene expression assays were the TaqMan® Gene Expression PCR Assay, Standardized (Sta) RT-PCR™ and QuantiGene®. The data showed great consistency in gene expression measurements across different microarray platforms, different technologies and test sites. However, SYBR® Green real-time PCR, another common technique used by half of all real-time PCR users for gene expression measurement, was not addressed in the MAQC study. In the present study, we compared the performance of SYBR Green PCR with TaqMan PCR, microarrays and other quantitative technologies using the same two Reference RNA samples as the MAQC project. We assessed SYBR Green real-time PCR using commercially available RT2 Profiler™ PCR Arrays from SuperArray, which contain primer pairs that have been experimentally validated to ensure gene specificity and high amplification efficiency.
The SYBR Green PCR Arrays exhibit good reproducibility among different users, PCR instruments and test sites. In addition, among the quantitative methods and microarrays evaluated in this study, the SYBR Green PCR Arrays showed the highest concordance with TaqMan PCR, and a high level of concordance with the other platforms, in terms of both fold-change correlation and overlap of the lists of differentially expressed genes.
These data demonstrate that SYBR Green real-time PCR delivers gene expression measurements that are highly comparable with those from TaqMan PCR and high-density microarrays.
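The two concordance metrics used in cross-platform comparisons of this kind, correlation of per-gene fold-changes and overlap of differentially-expressed-gene lists, can be sketched as follows. The function names, threshold and toy data are illustrative assumptions, not the MAQC or SuperArray analysis code:

```python
# Illustrative sketch of two cross-platform concordance metrics:
# (1) Pearson correlation of per-gene log2 fold-changes, and
# (2) Jaccard overlap of differentially-expressed-gene (DEG) lists.
import math

def log2_fold_changes(sample_a, sample_b):
    """Per-gene log2 ratio of sample A over sample B."""
    return {g: math.log2(sample_a[g] / sample_b[g]) for g in sample_a}

def fold_change_correlation(fc1, fc2):
    """Pearson correlation of log2 fold-changes over shared genes."""
    genes = sorted(set(fc1) & set(fc2))
    x = [fc1[g] for g in genes]
    y = [fc2[g] for g in genes]
    mx, my = sum(x) / len(x), sum(y) / len(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def deg_overlap(fc1, fc2, threshold=1.0):
    """Jaccard overlap of genes called DE (|log2 FC| >= threshold)."""
    de1 = {g for g, v in fc1.items() if abs(v) >= threshold}
    de2 = {g for g, v in fc2.items() if abs(v) >= threshold}
    return len(de1 & de2) / len(de1 | de2) if de1 | de2 else 1.0

# Toy fold-changes for three genes on two hypothetical platforms:
fc1 = {"a": 2.0, "b": 0.1, "c": -1.5}
fc2 = {"a": 1.8, "b": 0.2, "c": -1.2}
print(round(fold_change_correlation(fc1, fc2), 2))  # -> 1.0
print(deg_overlap(fc1, fc2))                        # -> 1.0
```

In practice the DEG threshold also incorporates a statistical significance cutoff; the fixed fold-change cutoff here is a simplification for illustration.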
The expression microarray is a frequently used approach to study gene expression on a genome-wide scale. However, the data produced by the thousands of microarray studies published annually are confounded by “batch effects,” the systematic error introduced when samples are processed in multiple batches. Although batch effects can be reduced by careful experimental design, they cannot be eliminated unless the whole study is done in a single batch. A number of programs are now available to adjust microarray data for batch effects prior to analysis. We systematically evaluated six of these programs using multiple measures of precision, accuracy and overall performance. ComBat, an Empirical Bayes method, outperformed the other five programs by most metrics. We also showed that it is essential to standardize expression data at the probe level when testing for correlation of expression profiles, due to a sizeable probe effect in microarray data that can inflate the correlation among replicates and unrelated samples.
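The probe-level standardization mentioned above amounts to z-scoring each probe across samples before correlating expression profiles, so that a shared probe effect cannot masquerade as similarity between samples. A minimal sketch, with a toy matrix and names that are illustrative assumptions rather than the study's code:

```python
# Sketch of probe-level standardization before profile correlation.
# Each probe is z-scored across samples; without this, probes' very
# different baseline intensities inflate the sample-sample correlation.
from statistics import mean, pstdev

def standardize_probes(matrix):
    """matrix[probe] = intensities across samples; returns z-scored copy."""
    out = {}
    for probe, values in matrix.items():
        m, s = mean(values), pstdev(values)
        out[probe] = [(v - m) / s if s else 0.0 for v in values]
    return out

def pearson(x, y):
    mx, my = mean(x), mean(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = (sum((a - mx) ** 2 for a in x) *
           sum((b - my) ** 2 for b in y)) ** 0.5
    return num / den

# Four probes with very different baselines, measured in four samples:
matrix = {
    "p1": [10.0, 10.4, 9.8, 10.2],
    "p2": [1.0, 1.2, 0.9, 1.1],
    "p3": [5.0, 4.6, 5.2, 4.9],
    "p4": [0.1, 0.3, 0.2, 0.15],
}
s1 = [v[0] for v in matrix.values()]
s2 = [v[1] for v in matrix.values()]
print(round(pearson(s1, s2), 2))  # -> 1.0 (inflated by the probe effect)

z = standardize_probes(matrix)
z1 = [v[0] for v in z.values()]
z2 = [v[1] for v in z.values()]
print(round(pearson(z1, z2), 2))  # much lower once probe effects are removed
```

The raw correlation is driven almost entirely by the probes' baseline intensities, which every sample shares; after standardization, only within-probe variation remains, which is the signal relevant to comparing samples.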
While microarrays hold considerable promise in large-scale biology on account of their massively parallel analytical nature, there is a need for compatible signal amplification procedures to increase sensitivity without loss of multiplexing. Rolling circle amplification (RCA) is a molecular amplification method with the unique property of product localization. This report describes the application of RCA signal amplification for multiplexed, direct detection and quantitation of nucleic acid targets on planar glass and gel-coated microarrays. As few as 150 molecules bound to the surface of microarrays can be detected using RCA. Because of the linear kinetics of RCA, nucleic acid target molecules may be measured with a dynamic range of four orders of magnitude. Consequently, RCA is a promising technology for the direct measurement of nucleic acids on microarrays without the need for a potentially biasing preamplification step.