Since its introduction, quantitative real-time polymerase chain reaction (qPCR) has become the standard method for quantification of gene expression. Its high sensitivity, large dynamic range, and accuracy have led to the development of numerous applications and to ever-growing numbers of samples to be analyzed. Data analysis consists of a number of steps, which have to be carried out in several different applications. Currently, no single tool is available that incorporates storage, management, and multiple analysis methods covering the complete pipeline.
QPCR is a versatile web-based Java application that allows users to store, manage, and analyze data from relative-quantification qPCR experiments. It comprises a parser to import data generated by qPCR instruments and includes a variety of analysis methods to calculate cycle-threshold and amplification-efficiency values. The analysis pipeline includes technical and biological replicate handling, incorporation of sample- or gene-specific efficiency, normalization using single or multiple reference genes, inter-run calibration, and fold-change calculation. Moreover, the application supports assessment of error propagation throughout all analysis steps and allows statistical tests to be conducted on biological replicates. Results can be visualized in customizable charts and exported for further investigation.
We have developed a web-based system designed to enhance and facilitate the analysis of qPCR experiments. It covers the complete analysis workflow, combining parsing, analysis, and chart generation in a single application. The system is freely available at
Despite the central role of quantitative PCR (qPCR) in the quantification of mRNA transcripts, most analyses of qPCR data are still delegated to the software that comes with the qPCR apparatus. This is especially true for the handling of the fluorescence baseline. This article shows that baseline estimation errors are directly reflected in the observed PCR efficiency values and are thus propagated exponentially in the estimated starting concentrations as well as ‘fold-difference’ results. Because of the unknown origin and kinetics of the baseline fluorescence, the fluorescence values monitored in the initial cycles of the PCR reaction cannot be used to estimate a useful baseline value. An algorithm that estimates the baseline by reconstructing the log-linear phase downward from the early plateau phase of the PCR reaction was developed and shown to lead to very reproducible PCR efficiency values. PCR efficiency values were determined per sample by fitting a regression line to a subset of data points in the log-linear phase. The variability, as well as the bias, in qPCR results was significantly reduced when the mean of these PCR efficiencies per amplicon was used in the calculation of an estimate of the starting concentration per sample.
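The efficiency-from-the-log-linear-phase idea described above can be sketched in a few lines. This is an illustrative reconstruction, not the published algorithm: the window bounds, function name, and synthetic numbers are my own.

```python
# Hedged sketch: estimate per-sample PCR efficiency by fitting a regression
# line to log-transformed fluorescence readings in the log-linear phase.
# (Window bounds and names are illustrative, not the authors' algorithm.)
import numpy as np

def window_efficiency(fluorescence, start, stop):
    """Fit log10(F) vs. cycle over [start, stop) and return the implied
    per-cycle amplification efficiency E, where F_n ~ F0 * E**n,
    plus the extrapolated starting fluorescence F0."""
    cycles = np.arange(start, stop)
    slope, intercept = np.polyfit(cycles, np.log10(fluorescence[start:stop]), 1)
    return 10 ** slope, 10 ** intercept

# Synthetic single-reaction data: F0 = 1e-6, true efficiency 1.90
true_E, F0 = 1.90, 1e-6
curve = F0 * true_E ** np.arange(40)
E_hat, F0_hat = window_efficiency(curve, 15, 25)
```

On noiseless synthetic data the regression recovers the generating efficiency exactly; on real data, the abstract's point is that the choice of baseline shifts these log values and hence the slope, which is why baseline reconstruction matters.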
Quantitative PCR (qPCR) is a workhorse laboratory technique for measuring the concentration of a target DNA sequence with high accuracy over a wide dynamic range. The gold-standard method for estimating DNA concentrations via qPCR is quantification cycle (Cq) standard curve quantification, which requires the time- and labor-intensive construction of a standard curve. In theory, the shape of a qPCR data curve can be used to quantify DNA concentration directly by fitting a model to the data; however, current empirical model-based quantification methods are not as reliable as standard curve quantification.
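For context, the standard-curve quantification treated here as the gold standard reduces to a linear regression of Cq on log concentration. A minimal sketch with synthetic numbers (function names are mine):

```python
# Hedged sketch of Cq standard-curve quantification (synthetic data).
import numpy as np

def fit_standard_curve(log10_conc, cq):
    """Regress Cq on log10(concentration); the slope encodes the
    amplification efficiency E via slope = -1 / log10(1 + E)."""
    slope, intercept = np.polyfit(log10_conc, cq, 1)
    efficiency = 10 ** (-1.0 / slope) - 1.0
    return slope, intercept, efficiency

def quantify(cq, slope, intercept):
    """Read an unknown's concentration off the fitted standard curve."""
    return 10 ** ((cq - intercept) / slope)

# Synthetic 10-fold dilution series at the ideal 100% efficiency
# (slope = -1/log10(2), i.e. about -3.32 cycles per decade):
log10_conc = np.array([6.0, 5.0, 4.0, 3.0, 2.0])
true_slope = -1.0 / np.log10(2.0)
cq = 38.0 + true_slope * log10_conc
slope, intercept, eff = fit_standard_curve(log10_conc, cq)
```

An unknown's Cq is then converted to a concentration with `quantify`; the labor cost the abstract objects to is the dilution series needed to fit `slope` and `intercept` in the first place.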
We have developed a two-parameter mass action kinetic model of PCR (MAK2) that can be fitted to qPCR data in order to quantify target concentration from a single qPCR assay. To compare the accuracy of MAK2-fitting to other qPCR quantification methods, we have applied quantification methods to qPCR dilution series data generated in three independent laboratories using different target sequences. Quantification accuracy was assessed by analyzing the reliability of concentration predictions for targets at known concentrations. Our results indicate that quantification by MAK2-fitting is as reliable as standard curve quantification for a variety of DNA targets and a wide range of concentrations.
We anticipate that MAK2 quantification will have a profound effect on the way qPCR experiments are designed and analyzed. In particular, MAK2 enables accurate quantification of portable qPCR assays with limited sample throughput, where construction of a standard curve is impractical.
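The MAK2 recurrence, D(n+1) = D(n) + k·ln(1 + D(n)/k), can be forward-simulated directly. The sketch below only illustrates the model's shape; the parameter values are arbitrary synthetic choices, not fitted estimates.

```python
# Hedged sketch: forward simulation of the two-parameter MAK2 recurrence.
import numpy as np

def mak2(d0, k, cycles):
    """Simulate D_{n+1} = D_n + k*ln(1 + D_n/k) for a given initial
    target amount d0 and rate parameter k."""
    d = [d0]
    for _ in range(cycles):
        d.append(d[-1] + k * np.log1p(d[-1] / k))
    return np.array(d)

# Arbitrary illustrative values: tiny initial target, k sets the scale
# at which per-cycle growth departs from doubling.
sim = mak2(d0=1e-8, k=1e-2, cycles=40)
```

When D is far below k, ln(1 + D/k) is approximately D/k and each cycle doubles the target; as D approaches and exceeds k the per-cycle gain falls off, producing the curve shape that fitting exploits to recover d0 without a standard curve.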
Reverse transcription quantitative real-time PCR (RT-qPCR) is a key method for measurement of relative gene expression. Analysis of RT-qPCR data requires many iterative computations for data normalization and analytical optimization. Currently no computer program for RT-qPCR data analysis is suitable for analytical optimization and user-controllable customization based on data quality, experimental design as well as specific research aims. Here I introduce an all-in-one computer program, SASqPCR, for robust and rapid analysis of RT-qPCR data in SAS. This program has multiple macros for assessment of PCR efficiencies, validation of reference genes, optimization of data normalizers, normalization of confounding variations across samples, and statistical comparison of target gene expression in parallel samples. Users can simply change the macro variables to test various analytical strategies, optimize results and customize the analytical processes. In addition, it is highly automatic and functionally extendable. Thus users are the actual decision-makers controlling RT-qPCR data analyses. SASqPCR and its tutorial are freely available at http://code.google.com/p/sasqpcr/downloads/list.
Real-time quantitative PCR (qPCR) is a widely used technique in microbial community analysis, allowing the quantification of the number of target genes in a community sample. Currently, the standard-curve (SC) method of absolute quantification is widely employed for these kinds of analyses. However, the SC method assumes that the amplification efficiency (E) is the same for both the standard and the sample target template. We analyzed 19 bacterial strains and nine environmental samples in qPCR assays, targeting the nifH and 16S rRNA genes. The E values of the qPCRs differed significantly, depending on the template. This has major implications for quantification. If the sample and standard differ in their E values, quantification errors of up to several orders of magnitude are possible. To address this problem, we propose and test the one-point calibration (OPC) method for absolute quantification. The OPC method corrects for differences in E and was derived from the ΔΔCT method with correction for E, which is commonly used for relative quantification in gene expression studies. The SC and OPC methods were compared by quantifying artificial template mixtures from Geobacter sulfurreducens (DSM 12127) and Nostoc commune (Culture Collection of Algae and Protozoa [CCAP] 1453/33), which differ in their E values. While the SC method deviated from the expected nifH gene copy number by 3- to 5-fold, the OPC method quantified the template mixtures with high accuracy. Moreover, analyzing environmental samples, we show that even small differences in E between the standard and the sample can cause significant differences between the copy numbers calculated by the SC and the OPC methods.
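The efficiency-corrected calibration at the heart of the OPC approach can be sketched as a back-extrapolation from the quantification cycles, with a template-specific E for standard and sample. The function and variable names below are mine, and the numbers are synthetic:

```python
# Hedged sketch of one-point-calibration-style efficiency correction.
import math

def one_point_calibration(n_std, cq_std, e_std, cq_smp, e_smp):
    """Absolute target copies with template-specific efficiencies:
    each cycle multiplies the template by (1 + E), E in [0, 1], so at a
    common threshold N0 * (1 + E)**Cq is equal for standard and sample."""
    return n_std * (1.0 + e_std) ** cq_std / (1.0 + e_smp) ** cq_smp

# With equal efficiencies, OPC reduces to the classic result: a sample
# appearing log2(10) ~ 3.32 cycles later holds 10-fold fewer copies.
tenfold = math.log2(10)
n = one_point_calibration(1e5, 15.0, 1.0, 15.0 + tenfold, 1.0)

# With a lower sample efficiency (here 0.85 vs. the standard's 1.0),
# naively assuming the standard's E underestimates the sample:
naive = 1e5 * 2.0 ** (15.0 - (15.0 + tenfold))
opc = one_point_calibration(1e5, 15.0, 1.0, 15.0 + tenfold, 0.85)
```

Because the correction sits in an exponent, even modest E differences compound over 15-20 cycles, which is the source of the order-of-magnitude errors the abstract reports.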
Quantitative PCR (qPCR) is more sensitive than microscopy for detecting Pneumocystis jirovecii in bronchoalveolar lavage (BAL) fluid. We therefore developed a qPCR assay and compared the results with those of a routine immunofluorescence assay (IFA) and clinical data. The assay included automated DNA extraction, amplification of the mitochondrial large-subunit rRNA gene and an internal control, and quantification of copy numbers with the help of a plasmid clone. We studied 353 consecutive BAL fluids obtained for investigation of unexplained fever and/or pneumonia in 287 immunocompromised patients. No qPCR inhibition was observed. Seventeen (5%) samples were both IFA and qPCR positive, 63 (18%) were IFA negative and qPCR positive, and 273 (77%) were both IFA and qPCR negative. The copy number was significantly higher for IFA-positive/qPCR-positive samples than for IFA-negative/qPCR-positive samples (4.2 ± 1.2 versus 1.1 ± 1.1 log10 copies/μl; P < 10⁻⁴). With IFA as the standard, the qPCR assay sensitivity was 100% for ≥2.6 log10 copies/μl and the specificity was 100% for ≥4 log10 copies/μl. Since qPCR results were not available at the time of decision-making, these findings did not trigger cotrimoxazole therapy. Patients with systemic inflammatory diseases and IFA-negative/qPCR-positive BAL fluid had a worse 1-year survival rate than those with IFA-negative/qPCR-negative results (P < 10⁻³), in contrast with solid-organ transplant recipients (P = 0.88) and patients with hematological malignancy (P = 0.26). Quantifying P. jirovecii DNA in BAL fluids independently of IFA positivity should be incorporated into the investigation of pneumonia in immunocompromised patients. The relevant threshold remains to be determined and may vary according to the underlying disease.
Quantitative real-time PCR (qPCR) has been the method of choice for the quantification of mRNA. Due to the various artifactual factors that may affect the accuracy of qPCR, internal reference genes are most often used to normalize qPCR data. Recently, many studies have employed computer programs such as GeNorm, BestKeeper and NormFinder to select reference genes, but very few statistically validate the outcomes of these programs. Thus, in this study, we selected reference genes for qPCR of liver and ovary samples of yellow (juvenile), migratory (silver) and 11-KT-treated juvenile New Zealand shortfinned eels (Anguilla australis) using the three computer programs and validated the selected genes statistically using REST 2009 software and the Mann-Whitney test. We also tested the repeatability of the best reference genes by applying them to a data set obtained in a similar experiment conducted the previous year.
Out of six candidate genes, the combination of 18s and eef1 was found to be the best statistically validated reference for liver, while in ovary it was l36. However, discrepancies in gene rankings were found between the different programs. Also, statistical validation showed that several genes put forward as the best by the programs were, in fact, regulated, making them unsuitable as reference genes. Additionally, eef1, which was found to be a suitable, though not top-ranked, reference gene for liver tissue in one year, was regulated in another.
Our study highlights the need for external validation of reference gene selections made by computer programs. Researchers need to be vigilant in validating reference genes and in reporting the rationale for their use in published studies.
“Candidatus Accumulibacter” and total bacterial community dynamics were studied in two lab-scale enhanced biological phosphorus removal (EBPR) reactors by using a community fingerprint technique, automated ribosomal intergenic spacer analysis (ARISA). We first evaluated the quantitative capability of ARISA compared to quantitative real-time PCR (qPCR). ARISA and qPCR provided comparable relative quantification of the two dominant “Ca. Accumulibacter” clades (IA and IIA) detected in our reactors. The quantification of total “Ca. Accumulibacter” 16S rRNA genes relative to that from the total bacterial community was highly correlated, with ARISA systematically underestimating “Ca. Accumulibacter” abundance, probably due to the different normalization techniques applied. During 6 months of normal (undisturbed) operation, the distribution of the two clades within the total “Ca. Accumulibacter” population was quite stable in one reactor while comparatively dynamic in the other reactor. However, the variance in the clade distribution did not appear to affect reactor performance. Instead, good EBPR activity was positively associated with the abundance of total “Ca. Accumulibacter.” Therefore, we concluded that the different clades in the system provided functional redundancy. We disturbed the reactor operation by adding nitrate together with acetate feeding in the anaerobic phase to reach initial reactor concentrations of 10 mg/liter NO3-N for 35 days. The reactor performance deteriorated with a concomitant decrease in the total “Ca. Accumulibacter” population, suggesting that a population shift was the cause of performance upset after a long exposure to nitrate in the anaerobic phase.
The XML-based Real-Time PCR Data Markup Language (RDML) has been developed by the RDML consortium (http://www.rdml.org) to enable straightforward exchange of qPCR data and related information between qPCR instruments and third party data analysis software, between colleagues and collaborators and between experimenters and journals or public repositories. We here also propose data related guidelines as a subset of the Minimum Information for Publication of Quantitative Real-Time PCR Experiments (MIQE) to guarantee inclusion of key data information when reporting experimental results.
Numerous models for use in interpreting quantitative PCR (qPCR) data are present in recent literature. The most commonly used models assume the amplification in qPCR is exponential and fit an exponential model with a constant rate of increase to a select part of the curve. Kinetic theory may be used to model the annealing phase and does not assume constant efficiency of amplification. Mechanistic models describing the annealing phase with kinetic theory offer the most potential for accurate interpretation of qPCR data. Even so, they have not been thoroughly investigated and are rarely used for interpretation of qPCR data. New results for kinetic modeling of qPCR are presented.
Two models are presented in which the efficiency of amplification is based on equilibrium solutions for the annealing phase of the qPCR process. Model 1 assumes that annealing of complementary target strands and annealing of target and primers are both reversible reactions and reach a dynamic equilibrium. Model 2 assumes all annealing reactions are nonreversible and equilibrium is static. Both models include the effect of primer concentration during the annealing phase. Analytic formulae are given for the equilibrium values of all single- and double-stranded molecules at the end of the annealing step. The equilibrium values are then used in a stepwise method to describe the whole qPCR process. Rate constants of the kinetic models are the same for solutions that are identical except for possibly having different initial target concentrations. qPCR curves from such solutions are thus analyzed by simultaneous non-linear curve fitting, with the same rate constant values applying to all curves and each curve having a unique value for initial target concentration. The models were fit to two data sets for which the true initial target concentrations are known. Both models give a better fit to observed qPCR data than other kinetic models in the literature. They also give better estimates of initial target concentration. Model 1 was found to be slightly more robust than Model 2, giving better estimates of initial target concentration when parameters were estimated from qPCR curves with very different initial target concentrations. Both models may be used to estimate the initial absolute concentration of target sequence when a standard curve is not available.
It is argued that the kinetic approach to modeling and interpreting quantitative PCR data has the potential to give more precise estimates of the true initial target concentrations than other methods currently used for analysis of qPCR data. The two models presented here give a unified model of the qPCR process in that they explain the shape of the qPCR curve for a wide variety of initial target concentrations.
Quantitative polymerase chain reaction; qPCR; Kinetic model
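As a toy illustration of the stepwise kinetic idea, the sketch below lets denatured target strands compete between reannealing with their complements and binding free primers each cycle. This is my own drastic simplification for intuition only, not either of the paper's equilibrium models:

```python
# Heavily hedged toy model: per-cycle efficiency emerges from competition
# between target reannealing and primer binding (illustrative only).
import numpy as np

def kinetic_qpcr(t0, p0, kt=1.0, kp=1.0, cycles=40):
    """During each annealing step, a denatured strand is primed with
    probability ~ kp*P / (kp*P + kt*T); only primed strands are extended.
    t tracks target amount, p tracks free primers (arbitrary units)."""
    t, p = t0, p0
    curve = []
    for _ in range(cycles):
        frac = kp * p / (kp * p + kt * t)  # fraction of strands primed
        new = frac * t
        t += new                  # extended strands add to the target pool
        p = max(p - new, 0.0)     # primers are consumed
        curve.append(t)
    return np.array(curve)

curve = kinetic_qpcr(t0=1e3, p0=1e12)
```

Early on, primers are in vast excess, the primed fraction is near 1 and the target doubles each cycle; as target accumulates and primers deplete, efficiency falls smoothly toward zero, reproducing the plateau without any imposed cycle-dependent efficiency function, which is the qualitative point of the kinetic approach.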
While many decisions rely on real-time quantitative PCR (qPCR) analysis, few attempts have hitherto been made to quantify bounds of precision accounting for the various sources of variation involved in the measurement process. Besides influences of more obvious factors such as camera noise and pipetting variation, changing efficiencies within and between reactions affect PCR results to a degree that is not fully recognized. Here, we develop a statistical framework that models measurement error and other sources of variation as they contribute to fluorescence observations during the amplification process and to derived parameter estimates. Evaluation of reproducibility is then based on simulations capable of generating realistic variation patterns. To this end, we start from a relatively simple statistical model for the evolution of efficiency in a single PCR reaction and introduce additional error components, one at a time, to arrive at stochastic data generation capable of simulating the variation patterns witnessed in repeated reactions (technical repeats). Most of the variation in the observed values was adequately captured by the statistical model in terms of the foreseen components. To recreate the dispersion of the repeats' plateau levels while keeping the other aspects of the PCR curves within realistic bounds, additional sources of reagent consumption (side reactions) enter into the model. Once an adequate data-generating model is available, simulations can serve to evaluate various aspects of PCR under the assumptions of the model and beyond.
The purpose of this manuscript is to describe a reliable approach to quantitative real-time polymerase chain reaction (qPCR) assay development and project management, which is currently embodied in the Excel 2003-based software program named “PREXCEL-Q” (P-Q) (formerly known as “FocusField2-6Gallup-qPCRSet-upTool-001,” “FF2-6-001 qPCR set-up tool” or “Iowa State University Research Foundation [ISURF] project #03407”). Since its inception (developed from 1997 to 2007), the program has been well received and requested around the world, and it was recently unveiled by its inventor at the 2008 Cambridge Healthtech Institute's Fourth Annual qPCR Conference in San Diego, CA. P-Q was subsequently mentioned in a review article by Stephen A. Bustin, an acknowledged leader in the qPCR field. Due to its success and growing popularity, and the fact that P-Q introduces a unique, defined approach to qPCR, a concise description of what the program is and what it does has become important. Sample-related inhibitory problems of the qPCR assay, sample concentration limitations, nuclease treatment, reverse transcription (RT) and master mix formulations are all addressed by the program, enabling investigators to quickly, consistently and confidently design uninhibited, dynamically sound, log-linear-amplification-capable, high-efficiency reactions for any type of qPCR. The current version of the program can handle an unlimited number of samples.
PCR; qPCR; RT; gene expression; inhibition; RNA integrity; micro-array; real-time PCR; software
Reverse transcription-quantitative polymerase chain reaction (RT-qPCR) is the gold standard technique for mRNA quantification, but appropriate normalization is required to obtain reliable data. Normalization to accurately quantitated RNA has been proposed as the most reliable method for in vivo biopsies. However, this approach does not correct differences in RNA integrity.
In this study, we evaluated the effect of RNA degradation on the quantification of the relative expression of nine genes (18S, ACTB, ATUB, B2M, GAPDH, HPRT, POLR2L, PSMB6 and RPLP0) that cover a wide expression spectrum. Our results show that RNA degradation could introduce up to 100% error in gene expression measurements when RT-qPCR data were normalized to total RNA. To achieve greater resolution of small differences in transcript levels in degraded samples, we improved this normalization method by developing a corrective algorithm that compensates for the loss of RNA integrity. This approach allowed us to achieve higher accuracy, reducing the average error for quantitative measurements to 8%. Finally, we applied our normalization strategy to the quantification of EGFR, HER2 and HER3 in 104 rectal cancer biopsies. Taken together, our data show that normalizing gene expression measurements while also accounting for RNA degradation allows much more reliable sample comparison.
We developed a new normalization method of RT-qPCR data that compensates for loss of RNA integrity and therefore allows accurate gene expression quantification in human biopsies.
The quantitative polymerase chain reaction (qPCR) is widely utilized for gene expression analysis. However, the lack of robust strategies for cross-laboratory data comparison hinders the ability to collaborate or to perform large multicentre studies conducted at different sites. In this study we introduced and validated a workflow that employs universally applicable, quantifiable external oligonucleotide standards to address this issue. Using the proposed standards and data-analysis procedure, we obtained perfect concordance between expression values from eight different genes in 366 patient samples measured on three different qPCR instruments and matching software, reagents, plates and seals, demonstrating the power of this strategy to detect and correct inter-run variation and to enable exchange of data between different laboratories, even when not using the same qPCR platform.
RNA degradation can distort or prevent measurement of RNA transcripts. A mathematical model for degradation was constructed, based on random RNA damage and exponential polymerase chain reaction (PCR) amplification. Degradation, measured as the number of lesions/base, can be quantified by amplifying several sequences of a reference gene, calculating the regression of Ct on amplicon length and determining the slope. Reverse transcriptase–quantitative PCR (RT–qPCR) data can then be corrected for degradation using lesions/base, amplicon length(s) and the relevant equation obtained from the model. Several predictions of the model were confirmed experimentally; degradation in a sample quantified using the model correlated with degradation quantified using an additional control sample and the ΔΔCt method and application of the model corrected erroneous results for relative quantification resulting from degradation and differences in amplicon length. Compared with RIN, the method was quantitative, simpler, more sensitive and spanned a wider range of RNA damage. The method can use either random or specifically primed complementary DNA and it enables relative and absolute quantification of RNA to be corrected for degradation. The model and method should be applicable to many situations in which RNA is quantified, including quantification of RNA by methods other than nucleic acid amplification.
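The regression-on-amplicon-length recipe maps directly to code. A sketch under the stated random-damage assumption (the intact fraction of templates of length L is exp(-λL)); the numbers are synthetic and the function names mine:

```python
# Hedged sketch: estimate lesions/base from Ct vs. amplicon length,
# then correct a measured Ct for degradation.
import numpy as np

def lesions_per_base(lengths, cts):
    """With intact fraction exp(-lam*L), measured Ct rises linearly:
    Ct = Ct_intact + lam * L * log2(e). The slope of Ct on L therefore
    gives lam = slope * ln(2)."""
    slope, intercept = np.polyfit(lengths, cts, 1)
    return slope * np.log(2), intercept

def corrected_ct(ct, lam, length):
    """Remove the degradation penalty for a given amplicon length."""
    return ct - lam * length / np.log(2)

# Synthetic reference-gene data: lam = 0.002 lesions/base, intact Ct = 20
lam_true = 0.002
L = np.array([70.0, 150.0, 250.0, 400.0])
ct = 20.0 + lam_true * L / np.log(2)
lam_hat, ct0 = lesions_per_base(L, ct)
```

Once λ is known, any target's Ct can be corrected using its own amplicon length, which is what lets the method fix the amplicon-length-dependent biases the abstract describes.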
Pathway-targeted or low-density arrays are used increasingly in biomedical research, particularly those based on quantitative real-time PCR. Typical QPCR arrays contain 96-1024 primer pairs or probes, and they bring with them the promise of reliably measuring differences in target levels without the need to establish absolute standard curves for each and every target. To achieve reliable quantification, all primer pairs or array probes must perform with the same efficiency.
Our results indicate that QPCR primer pairs differ significantly in both reliability and efficiency. They can only be used in an array format if the raw data (so-called CT values for real-time QPCR) are transformed to take these differences into account. We developed a novel method to obtain efficiency-adjusted CT values. We introduce transformed confidence intervals as a novel measure to identify unreliable primers. We introduce a robust clustering algorithm to combine the efficiencies of groups of probes, and our results indicate that using n < 10 cluster-based mean efficiencies is comparable to using individually determined efficiency adjustments for each primer pair (N = 96-1024).
Careful estimation of primer efficiency is necessary to avoid significant measurement inaccuracies. Transformed confidence intervals are a novel method to assess and interpret the reliability of an efficiency estimate in a high-throughput format. Efficiency clustering as developed here serves as a compromise between the imprecision of assuming uniform efficiency and the computational complexity and danger of over-fitting when using individually determined efficiencies.
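The core of an efficiency-adjusted CT transform can be sketched as follows. This shows the generic rescaling idea, not necessarily the exact transform the authors implement:

```python
# Hedged sketch: put CT values from primer pairs with different
# amplification factors onto a common base-2 scale.
import numpy as np

def efficiency_adjusted_ct(ct, efficiency):
    """`efficiency` is the per-cycle multiplier (ideal: 2.0).
    Since N0 ~ efficiency**(-CT), the comparable quantity across
    primer pairs is CT * log2(efficiency)."""
    return ct * np.log2(efficiency)

# The same template measured with two primer pairs of different
# efficiency yields different raw CTs but identical adjusted values:
threshold, n0 = 1e10, 1e4
for eff in (2.0, 1.8):
    raw_ct = np.log(threshold / n0) / np.log(eff)  # cycle where n0*eff**ct hits threshold
    adj = efficiency_adjusted_ct(raw_ct, eff)
```

On this adjusted scale, a difference of 1 always means a 2-fold difference in starting material regardless of which primer pair produced the CT, which is what makes array-wide comparisons legitimate.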
Quantitative real-time PCR (qPCR) is the gold standard for the quantification of specific nucleic acid sequences. However, a serious concern has been revealed in a recent report: supercoiled plasmid standards cause significant over-estimation in qPCR quantification. In this study, we investigated the effect of plasmid DNA conformation on the quantification of DNA and the efficiency of qPCR. Our results suggest that plasmid DNA conformation has significant impact on the accuracy of absolute quantification by qPCR. DNA standard curves shifted significantly among plasmid standards with different DNA conformations. Moreover, the choice of DNA measurement method and plasmid DNA conformation may also contribute to the measurement error of DNA standard curves. Due to the multiple effects of plasmid DNA conformation on the accuracy of qPCR, efforts should be made to assure the highest consistency of plasmid standards for qPCR. Thus, we suggest that the conformation, preparation, quantification, purification, handling, and storage of standard plasmid DNA should be described and defined in the Minimum Information for Publication of Quantitative Real-Time PCR Experiments (MIQE) to assure the reproducibility and accuracy of qPCR absolute quantification.
Fitting four-parameter sigmoidal models is one of the established methods for analyzing quantitative real-time PCR (qPCR) data. We observed that these models often yield suboptimal fits due to their inherent constraint of symmetry around the point of inflection. We therefore found it necessary to employ a mathematical algorithm that circumvents this problem by utilizing an additional parameter to accommodate asymmetrical structures in sigmoidal qPCR data.
The four-parameter models were compared to their five-parameter counterparts by means of nested F-tests based on the residual variance, thus providing a statistical measure of higher performance. For nearly all qPCR data we examined, five-parameter models resulted in a significantly better fit. Furthermore, accuracy and precision in the estimation of efficiencies and the calculation of quantitative ratios were assessed with four independent dilution datasets and compared to the most commonly used quantification methods. The five-parameter model exhibited accuracy and precision more similar to those of the non-sigmoidal quantification methods.
The five-parameter sigmoidal models outperform the established four-parameter model with high statistical significance. The estimation of essential PCR parameters such as PCR efficiency, threshold cycles and initial template fluorescence is more robust and has smaller variance. The model is implemented in the qpcR package for the freely available statistical R environment. The package can be downloaded from the author's homepage.
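The four- vs. five-parameter comparison can be reproduced in miniature with synthetic data. Below is a hedged sketch using a Richards-type five-parameter log-logistic; this parameterization is one common choice, not necessarily the one used in the qpcR package:

```python
# Hedged sketch: fit symmetric (4-param) and asymmetric (5-param)
# sigmoids to a synthetic asymmetric qPCR curve, then compare them
# with a nested F-test on the residual sums of squares.
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import f as f_dist

def sig4(x, b, c, d, e):
    """Symmetric four-parameter logistic."""
    return c + (d - c) / (1 + np.exp(b * (x - e)))

def sig5(x, b, c, d, e, g):
    """Five-parameter log-logistic; g controls asymmetry (g=1 -> sig4)."""
    return c + (d - c) / (1 + np.exp(b * (x - e))) ** g

x = np.arange(1, 41, dtype=float)
y = sig5(x, -0.5, 0.0, 100.0, 20.0, 0.7)  # noiseless asymmetric curve

p4, _ = curve_fit(sig4, x, y, p0=[-0.5, 0.0, 100.0, 20.0], maxfev=10000)
p5, _ = curve_fit(sig5, x, y, p0=[-0.5, 0.0, 100.0, 20.0, 1.0], maxfev=10000)
rss4 = np.sum((y - sig4(x, *p4)) ** 2)
rss5 = np.sum((y - sig5(x, *p5)) ** 2)

# Nested F-test: does the extra asymmetry parameter significantly help?
df_extra, df_res = 1, len(x) - 5
F = (rss4 - rss5) / df_extra / (rss5 / df_res)
p_value = f_dist.sf(F, df_extra, df_res)
```

Because the generating curve is genuinely asymmetric, the symmetric model cannot fit it exactly and the F-test flags the five-parameter model as significantly better, mirroring the abstract's finding on real data.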
Motivation: Quantitative real-time polymerase chain reaction (qPCR) is routinely used for RNA expression profiling, validation of microarray hybridization data and clinical diagnostic assays. Although numerous statistical tools are available in the public domain for the analysis of microarray experiments, this is not the case for qPCR. Proprietary software is typically provided by instrument manufacturers, but these solutions are not amenable to the tandem analysis of multiple assays. This is problematic when an experiment involves more than a simple comparison between a control and treatment sample, or when many qPCR datasets are to be analyzed in a high-throughput facility.
Results: We have developed HTqPCR, a package for the R statistical computing environment, to enable the processing and analysis of qPCR data across multiple conditions and replicates.
Availability: HTqPCR and user documentation can be obtained through Bioconductor or at http://www.ebi.ac.uk/bertone/software.
Quantitative polymerase chain reactions (qPCR) are used to monitor relative changes in very small amounts of DNA. One drawback of qPCR is reproducibility: measuring the same sample multiple times can yield data so noisy that important differences can be dismissed. Numerous analytical methods have been employed to extract the relative template abundance between samples. However, each method is sensitive to baseline assignment and to the unique shape profiles of individual reactions, which gives rise to increased variance stemming from the analytical procedure itself.
We developed a simple mathematical model that accurately describes the entire PCR reaction profile using only two reaction variables that depict the maximum capacity of the reaction and feedback inhibition. This model allows quantification that is more accurate than existing methods and takes advantage of the brighter fluorescence signals from later cycles. Because the model describes the entire reaction, the influences of baseline adjustment errors, reaction efficiencies, template abundance, and signal loss per cycle could be formalized. We determined that the common cycle-threshold method of data analysis introduces unnecessary variance because of inappropriate baseline adjustments, a dynamic reaction efficiency, and also a reliance on data with a low signal-to-noise ratio.
Using our model, fits to raw data can be used to determine template abundance with high precision, even when the data contains baseline and signal loss defects. This improvement reduces the time and cost associated with qPCR and should be applicable in a variety of academic, clinical, and biotechnological settings.
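A maximum capacity plus feedback inhibition is the signature of a logistic-type cycle map. The stand-in below is not the authors' exact two-variable model, only a hedged illustration of the curve shape such a full-profile fit captures:

```python
# Heavily hedged stand-in: a discrete logistic cycle map in which
# per-cycle gain is throttled as fluorescence F nears capacity fmax.
import numpy as np

def logistic_qpcr(f0, fmax, cycles=40):
    """Simulate F_{n+1} = F_n * (1 + (1 - F_n/fmax)): near-doubling
    while F << fmax, smooth saturation at the capacity fmax."""
    f = [f0]
    for _ in range(cycles):
        f.append(f[-1] * (1.0 + (1.0 - f[-1] / fmax)))
    return np.array(f)

sim = logistic_qpcr(f0=1e-6, fmax=1.0)
```

Fitting such a map to a whole raw curve uses the bright, high signal-to-noise plateau cycles as well as the exponential phase, which is the abstract's argument against threshold-based analysis of the dim early cycles.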
Quantitative real-time PCR (qPCR) is the method of choice for specific and sensitive quantification of nucleic acids. However, data validation is still a major issue, partially due to the complex effect of PCR inhibition on the results. If undetected, PCR inhibition may severely impair the accuracy and sensitivity of results. PCR inhibition is addressed by prevention, by detection, and by correction of affected results. Recently, a new family of computational methods for the detection of PCR inhibition, called kinetics outlier detection (KOD), has emerged. KOD methods are based on comparing one or a few kinetic parameters describing a test reaction to those describing a set of reference reactions. Modern KOD methods can detect PCR inhibition reflected by a shift of the amplification curve of merely half a cycle, with specificity and sensitivity >90%. Based solely on data analysis, these tools complement measures to improve and control pre-analytics. KOD methods require no additional labor or materials, do not affect reaction accuracy or sensitivity, and can be automated for fast and reliable quantification. This review describes the background of KOD methods, their principles, assumptions, strengths and limitations. Finally, the review provides recommendations on how to use KOD and how to evaluate its performance.
Accurate quantification of nucleic acids by competitive (RT)–PCR requires a valid internal standard, a reference for data normalization and an adequate mathematical model for data analysis. We report here an effective procedure for the generation of homologous RNA internal standards and a strategy for synthesizing and using a reference target RNA in quantification of absolute amounts of nucleic acids. Further, a new mathematical model describing the general kinetic features of competitive PCR was developed. The model extends the validity of quantitative competitive (RT)–PCR beyond the exponential phase. The new method eliminates the errors arising from different amplification efficiencies of the co-amplified sequences and from heteroduplex formation in the system. The high accuracy (relative error <2%) is comparable to that of the recently developed real-time 5′-nuclease PCR. Corresponding computer software has also been devised for practical data analysis.
Quantitative real-time PCR has revolutionized many aspects of genetic research, biomedical diagnostics and pathogen detection. Nevertheless, the full potential of this technology has yet to be realized, primarily due to the limitations of the threshold-based methodologies that are currently used for quantitative analysis. Prone to errors caused by variations in reaction preparation and amplification conditions, these approaches necessitate construction of standard curves for each target sequence, significantly limiting the development of high-throughput applications that demand substantive levels of reliability and automation. In this study, an alternative approach based upon fitting of fluorescence data to a four-parametric sigmoid function is shown to dramatically increase both the utility and reliability of quantitative real-time PCR. By mathematically modeling individual amplification reactions, quantification can be achieved without the use of standard curves and without prior knowledge of amplification efficiency. Combined with provision of quantitative scale via optical calibration, sigmoidal curve-fitting could confer the capability for fully automated quantification of nucleic acids with unparalleled accuracy and reliability.
Quantitative real-time PCR (qPCR) has become a gold standard for the quantification of nucleic acids and microorganism abundances, in which plasmid DNA carrying the target genes is most commonly used as the standard. A recent study showed that the supercoiled circular conformation of DNA appears to suppress PCR amplification. However, the extent to which different structural types of DNA (circular versus linear) used as the standard may affect quantification accuracy has not been evaluated. In this study, we quantitatively compared qPCR accuracies based on circular plasmid standards (mostly in supercoiled form) and linear DNA standards (linearized plasmid DNA or PCR amplicons), using the proliferating cell nuclear antigen gene (pcna), a ubiquitous eukaryotic gene, in five marine microalgae as a model gene. We observed that PCR using circular plasmids as the template gave threshold cycle numbers 2.65-4.38 cycles higher than equimolar linear standards. While the documented genome sequence of the diatom Thalassiosira pseudonana shows a single copy of pcna, qPCR using the circular plasmid as the standard yielded an estimate of 7.77 copies of pcna per genome, whereas the linear standard gave 1.02 copies per genome. We conclude that circular plasmid DNA is unsuitable as a standard, and linear DNA should be used instead, in absolute qPCR. The serious overestimation by the circular plasmid standard is likely due to the undetected lower efficiency of its amplification in the early stage of PCR, when the supercoiled plasmid is the dominant template.
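For scale, the reported Cq offsets convert directly into fold errors, assuming roughly 2-fold amplification per cycle. A quick arithmetic check:

```python
# A delay of dCt cycles at ~2-fold amplification corresponds to a
# 2**dCt fold error in the estimated copy number.
fold_low = 2 ** 2.65    # smallest reported offset -> ~6.3-fold
fold_high = 2 ** 4.38   # largest reported offset  -> ~20.8-fold

# The pcna estimates (7.77 vs. 1.02 copies/genome) imply a ~7.6-fold
# bias, inside that range:
pcna_bias = 7.77 / 1.02
assert fold_low < pcna_bias < fold_high
```

This is why a 2.65-4.38 cycle shift between circular and linear standards is far from a cosmetic discrepancy: it translates into roughly 6- to 21-fold quantification error.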