N Engl J Med. Author manuscript; available in PMC 2011 September 3.
Published in final edited form as:
PMCID: PMC3066456

The ClinicalTrials.gov Results Database — Update and Key Issues

Deborah A. Zarin, M.D., Tony Tse, Ph.D., Rebecca J. Williams, Pharm.D., M.P.H., Robert M. Califf, M.D., and Nicholas C. Ide, M.S.



BACKGROUND
The ClinicalTrials.gov trial registry was expanded in 2008 to include a database for reporting summary results. We summarize the structure and contents of the results database, provide an update on relevant policies, and show how the data can be used to gain insight into the state of clinical research.


METHODS
We analyzed ClinicalTrials.gov data that were publicly available between September 2009 and September 2010.


RESULTS
As of September 27, 2010, ClinicalTrials.gov received approximately 330 new and 2000 revised registrations each week, along with 30 new and 80 revised results submissions. We characterized the 79,413 registry records and 2178 results records available as of September 2010. In a sample cohort of results records, 78 of 150 (52%) had associated publications within 2 years after posting. Of the results records available publicly, 20% reported more than two primary outcome measures and 5% reported more than five. Of a sample of 100 registry-record outcome measures, 61% lacked specificity in describing the metric to be used in the planned analysis. In a sample of 700 results records, the mean number of different analysis populations per study group was 2.5 (median, 1; range, 1 to 25). Of these trials, 24% reported results for 90% or less of their participants.

CONCLUSIONS
ClinicalTrials.gov provides access to study results not otherwise available to the public. Although the database allows examination of various aspects of ongoing and completed clinical trials, its ultimate usefulness depends on the submission of accurate, informative data by the research community.

The ClinicalTrials.gov trial registry was launched more than a decade ago. Since that time, it has evolved in response to various policy initiatives. The registry now contains information on more than 100,000 clinical studies and has emerged as a key element of many public health policy initiatives aimed at improving the clinical research enterprise. In 2008, a database for reporting summary results was added to the registry. In this article, we present an update on relevant policies, summarize the structure and contents of the results database, and show how the data can be used to gain insight into the state of clinical research.


Section 801 of the Food and Drug Administration Amendments Act (FDAAA)1 expanded the legal requirements for trial reporting at ClinicalTrials.gov. It was passed into law amid concerns about ethical and scientific issues affecting the design, conduct, and reporting of clinical trials,2 including the suppression and selective reporting of results based on the interests of sponsors,3 unacknowledged alterations of prespecified outcome measures,4 “offshoring” of human-subjects research,5 and failure to report relevant adverse events.6 Among other things, the FDAAA mandates the submission of summary results data for certain trials of drugs, biologics, and devices to ClinicalTrials.gov, whether the results are published or not,7 and imposes substantial penalties for noncompliance. The law’s scope is not limited to industry-sponsored trials intended to support marketing applications but includes studies not intended to inform FDA action (e.g., comparative-effectiveness trials of approved drugs or devices), regardless of sponsorship. Table 1 summarizes the scope of key reporting requirements of the FDAAA and two other policies: the registration policy of the International Committee of Medical Journal Editors8 and regulations being implemented by the European Medicines Agency for registration and results reporting of clinical drug trials conducted in the European Union.9,10

Table 1
Scope of Interventional Studies Covered by Major Reporting Policies.*


Data in ClinicalTrials.gov are self-reported by trial sponsors or investigators by means of a Web-based system.7 Registration information is generally reported at trial inception. Each record contains a set of mandatory data elements that describe the study’s purpose, recruitment status, design, eligibility criteria, and locations, as well as other key protocol details.11 Additional information may be provided with the use of optional data elements. Before public posting, ClinicalTrials.gov conducts a quality review of the submitted information. Each trial (regardless of the number of study sites) is represented by a single record, which is assigned a unique identifier (i.e., NCT number). Each record is expected to be corrected or updated throughout the trial’s life cycle, and all changes are tracked on a public archive site that is accessible from each record (through a “History of Changes” link). Summary results data are entered in the results database after a trial is completed or terminated (Table 2). Once posted, results records are displayed with corresponding registry (summary protocol) information for each study. Resources and links to additional information are inserted by the National Library of Medicine to enhance the overall usefulness of the database. ClinicalTrials.gov is designed to benefit the general public by expanding access to trial information, but different parts of the database are likely to be of more or less direct use to different audiences.

Table 2
Summary Objectives and Description of Requirements for the Results Database.

QUALITY ASSURANCE
ClinicalTrials.gov uses automated business rules to alert data providers when required information is missing or when certain data elements are internally inconsistent. After passing automated validation, all submissions are individually reviewed before public posting to assess whether entries are complete, informative, internally consistent, and not obviously invalid; specific criteria for this assessment are described on the Web site.15 Although the review of summary protocol information is generally straightforward, that of results submissions is more complex. The goal, at a minimum, is to determine whether entries provide an accurate depiction of the study design and whether the results can be understood by an educated reader of the medical literature. Some invalid data can be detected by staff; however, other data cannot be verified because ClinicalTrials.gov does not have an independent source of study data (e.g., “624 years” is clearly an invalid results entry for mean age, whereas “62.4 years” may or may not be the true mean age). Submissions are not posted on the public site until quality requirements are met; if any important problems are detected (Table 3), results records are returned to the data providers for revision. However, individual record review has inherent limitations, and posting does not guarantee that the record is fully compliant with either ClinicalTrials.gov or legal requirements.
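To illustrate the distinction drawn above between detectable and unverifiable errors, the following is a minimal, hypothetical sketch of an automated validation rule; the field names and plausibility thresholds are ours, not ClinicalTrials.gov's actual business rules.

```python
def validate_record(record):
    """Return a list of problems found in a submitted results record (sketch)."""
    problems = []
    # Rule 1: required fields must be present (illustrative field names).
    for field in ("enrollment", "mean_age_years"):
        if record.get(field) is None:
            problems.append(f"missing required field: {field}")
    # Rule 2: flag values that automated checks CAN catch as implausible
    # (e.g., a mean age of "624 years"). A plausible-but-wrong value such
    # as 62.4 cannot be verified without an independent source of study data.
    age = record.get("mean_age_years")
    if age is not None and not (0 <= age <= 120):
        problems.append(f"implausible mean age: {age}")
    return problems

print(validate_record({"enrollment": 200, "mean_age_years": 624}))
# ['implausible mean age: 624']
print(validate_record({"enrollment": 200, "mean_age_years": 62.4}))
# [] -- passes automated checks even if 62.4 is not the true mean
```

This is why posting does not guarantee correctness: automated rules catch only the first kind of error, and individual record review has its own limits.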

Table 3
Quality Review Criteria.

RELATION TO PUBLICATION
ClinicalTrials.gov is designed to complement, not replace, journal publication. The results database provides public access to a complete set of summary results in a structured system that supports search and analysis. These data are primarily tabular in format and largely lack narrative portions. The database facilitates identification of acts of omission (e.g., incomplete reporting of outcome measures) and acts of commission (e.g., unacknowledged changes to prespecified outcome measures). Journals select research articles for publication on the basis of their target audiences, and the articles supplement reported data with peer-reviewed discussions of background, rationale, context, and implications of findings. Journal editors who abide by the standards set by the International Committee of Medical Journal Editors recognize these complementary roles and consider manuscripts for publication even when the results of a trial have already been posted on ClinicalTrials.gov.


Table 4 provides summary data on registry and results records for interventional studies that were publicly available on September 27, 2010. As of this date, approximately 330 new registrations and 2000 revised registrations had been submitted each week.

Table 4
Characteristics of Interventional Study Records Posted at ClinicalTrials.gov as of September 27, 2010.*


All studies registered at ClinicalTrials.gov are eligible for results submission; however, submission of results is required only for trials covered by the FDAAA (Table 2). Approximately 30 new and 80 revised results records had been received each week; we estimate that full compliance with the FDAAA would lead to results submission for approximately 40% of newly registered studies, or more than 100 new results records per week.
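The projection above follows directly from the stated figures, as this one-line check shows (assuming roughly 330 new registrations per week and 40% FDAAA coverage, as in the text):

```python
# If ~40% of the ~330 studies newly registered each week fall under the
# FDAAA, full compliance implies roughly this many new results submissions
# per week -- well over 100, versus the ~30 actually received.
new_registrations_per_week = 330
fdaaa_covered_fraction = 0.40
expected_results_per_week = round(new_registrations_per_week * fdaaa_covered_fraction)
print(expected_results_per_week)  # 132
```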

The results of 3284 registered trials had been submitted by 666 data providers. Of these trials, 2324 had been posted publicly; the remainder either were undergoing quality-assurance review by staff or were returned for correction.

Of 2178 clinical trials with posted results records, 20% had more than two reported primary outcome measures and 5% had more than five. For some studies, posted results include more than 100 primary and secondary outcome measures. The FDAAA requires the reporting of all primary and secondary outcome measures, and ClinicalTrials.gov does not limit the number of primary and secondary outcome measures that can be listed. Other prespecified and post hoc outcome measures may also be listed.

Of the 2324 posted results entries, 14% were linked to a PubMed citation through an indexed NCT number16; other publications that may exist could be found only through focused PubMed searches. We randomly selected a sample of 150 posted results records in September 2009 and conducted manual searches in an attempt to identify all associated publications. Using all available data, we found that 38 of these studies (25%) had an associated citation in September 2009, and 78 (52%) by November 2010. Although this percentage may continue to increase, it is unlikely that all outcomes from these studies will be published.17


A growing number of researchers are using ClinicalTrials.gov data to examine various aspects of the clinical research enterprise. For example, recent studies evaluated registration records to analyze trends in the globalization of the clinical research enterprise,5,18 the level of selective publication of study results,19,20 and the degree of correspondence between registered and published outcome measures.19-21 Scoggins and Patrick reviewed registration records to identify the types of trials for which patient-reported outcomes were most likely to be reported and the specific instruments used.22 Some authors of systematic reviews have also integrated ClinicalTrials.gov into their search strategies.23 The integrity of trial reporting is a common theme among these studies, which generally focus on whether prespecified procedures in the study protocol (and any subsequent amendments) are appropriate and were followed. This interest has been fueled by highly publicized cases in which trial protocols were not followed, and the subsequent publication of partial results was considered to be misleading.24 The requirement for registering outcome measures at trial inception is designed to address two problems: publication of only some measures and unacknowledged changes in prespecified measures.17,21,25

We used ClinicalTrials.gov data to examine two data fields that are integral to the interpretation of study results: outcome measure and analysis population. These can be considered the “numerator” (outcome measure) and “denominator” (analysis population), respectively, of a study result (e.g., the number of events per number of participants in each group studied). The accuracy and specificity of the information within these fields partly determine their usefulness to a reader, as well as their usefulness for assessing the fidelity of published reports to prespecified protocols. (Summaries of the methods used in these analyses are available in the Supplementary Appendix, available with the full text of this article at NEJM.org.)

SPECIFICATION OF OUTCOME MEASURES
ClinicalTrials.gov instructs data providers to report the specific measure and time frame for each primary and secondary outcome measure at registration, reflecting the current international standard for trial reporting.26,27 Experience with reporting outcomes in a tabular format in the results database has emphasized the need for the description of a measure to be specific enough to form the rows of the results table (with comparison groups as columns). In addition to time frame, a fully specified outcome measure includes information about the following: domain (e.g., anxiety), specific measurement (e.g., Hamilton Anxiety Rating Scale), specific metric used to characterize each participant’s results (e.g., change from baseline at a specified time), and method of aggregating data within each group (e.g., a categorical measure such as the proportion of participants with a decrease of 50% or more) (Fig. 1).

Figure 1
An Example of the Four Levels of Specification in Reporting Outcome Measures.

We reviewed the first primary outcome measure, as initially registered, from 100 randomly selected non–phase 1 clinical trials in August 2010. Entries were assessed for whether a specific time frame was provided and were categorized according to level of specification (Fig. 1). We categorized 36% as level 1 (i.e., domain only), 25% as level 2, 26% as level 3, and 13% as level 4; overall, 72% included a specific time frame. When only a specific measurement or domain is registered, as occurred in 61% of the entries in our sample, post hoc choices of the specific metric or method of aggregation could mask the fact that multiple comparisons were conducted, potentially invalidating the reported statistical analyses and allowing for cherry-picking of results. Some argue that the method of aggregation (level 4) is part of the statistical analysis plan and may properly be specified later — after data accrual but before unblinding. The archive feature of ClinicalTrials.gov enables those viewing such records to see the originally registered outcome measure and the full timeline of changes (if relevant).
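The four-level scheme described above is cumulative: an entry's level is the deepest component present with all shallower ones also present. A minimal sketch of that logic follows; the component names are ours, and since real registry entries are free text, the actual study relied on manual review rather than code like this.

```python
# Cumulative components of a fully specified outcome measure (Fig. 1).
LEVELS = ("domain", "measurement", "metric", "aggregation")

def specification_level(entry):
    """Return 1-4 per the cumulative scheme (0 if even the domain is absent)."""
    level = 0
    for component in LEVELS:
        if entry.get(component):
            level += 1
        else:
            break  # a missing component caps the level
    return level

entry = {
    "domain": "anxiety",
    "measurement": "Hamilton Anxiety Rating Scale",
    "metric": "change from baseline at 8 weeks",
    "aggregation": None,  # method of aggregation not specified
}
print(specification_level(entry))  # 3
print(specification_level({"domain": "anxiety"}))  # 1
```

Under this scheme, the 61% of sampled entries at levels 1 and 2 are exactly those lacking the metric and aggregation components.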


ANALYSIS POPULATIONS
The analysis population is another source of potential bias in results reporting. Substantial distortion of results can occur if all data are not accounted for or if missing data are not handled appropriately. The use of different analysis populations for different outcomes may not be noticed by many readers, but it can exert a strong effect on reported results. In a sample of 700 records (representing 1749 study groups and 5160 outcome measures), the mean number of different analysis populations per study group with at least one participant was 2.5 (median, 1; range, 1 to 25). The magnitude of the difference across groups and outcomes varied. To further explore the magnitude of these differences, we evaluated the percentage of participants who started the study and were analyzed for the first primary outcome measure in a sample of 684 eligible studies (representing 1706 groups). Approximately 31% of trials included 100% of participants in the analysis, and 24% of trials reported results for 90% or less of their participants (see the Supplementary Appendix). Determination of the appropriateness of the analysis population for any specific outcome analysis would require a detailed methodologic review of each study and would potentially involve subjective judgments.
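The denominator check described above reduces to a simple ratio: participants analyzed for the first primary outcome divided by participants who started the study. A hedged sketch, with illustrative field names and the 90% threshold from the text:

```python
def fraction_analyzed(started, analyzed):
    """Fraction of participants who started the study that were analyzed."""
    if started <= 0:
        raise ValueError("no participants started the study")
    return analyzed / started

# Hypothetical trial: 250 participants started, 210 analyzed for the
# first primary outcome measure.
frac = fraction_analyzed(250, 210)
print(f"{frac:.0%} of starters analyzed")  # 84% of starters analyzed
if frac <= 0.90:
    # ~24% of sampled trials fell into this category.
    print("reported results for 90% or less of participants")
```

Whether a smaller analysis population is appropriate (e.g., a prespecified per-protocol set) is the part that the text notes requires methodologic review, not arithmetic.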


In the past 5 years, prospective registration of clinical trials has become standard practice. Public reporting of summary results, independent of the interests of the trial sponsor, represents the next step in this international experiment in systematic disclosure of clinical trial information. It has been 2 years since the launch of the results database, and people can now access summary trial data that were not previously available publicly. Researchers, policymakers, and others can now examine features and trends of the clinical research enterprise that were previously difficult to study. For example, methodologists may evaluate the appropriateness of designing trials with many primary outcome measures. Policymakers may consider how current patterns in the use of data monitoring committees might affect the quality and safety of the resulting research. Others may use the data to monitor trends in the clinical research enterprise and raise questions about the portfolio of trials relative to public health needs. The ultimate usefulness of the registry and results database will become apparent as more trial information and results are posted and as persons with different interests and needs incorporate these data into their analyses. ClinicalTrials.gov is continually adding features and linkages to facilitate the use of the data by different audiences, and other groups repackage these data for more specific audiences. For example, one such site augments ClinicalTrials.gov registry data to serve the breast-cancer community. The Clinical Trials Transformation Initiative is leading an effort to develop a publicly accessible research-quality data set in order to facilitate examination of the clinical research enterprise.28

When one is using the data in ClinicalTrials.gov, however, certain limitations should be kept in mind. First, there are undoubtedly trials that are not registered in ClinicalTrials.gov or any other publicly accessible registry. Coverage in ClinicalTrials.gov is likely to be most complete for trials of drugs or devices that are sponsored by U.S.-based or multinational organizations (e.g., major pharmaceutical companies). Second, some records are missing information (e.g., optional data elements) or contain imprecise entries. We are not able to impose requirements beyond those of the prevailing federal law; trials registered since the passage of the FDAAA have to meet more requirements than do older trials, but investigators who use all ClinicalTrials.gov data will encounter many records with missing fields. In addition, given the demands of individual record review, some problematic entries will find their way onto the public site. Third, new registry and registration policies are being implemented in specific regions and countries around the world. The World Health Organization has established a search portal that includes data from ClinicalTrials.gov and 11 other registries, totaling more than 123,500 records as of November 23, 2010. However, overlapping scope and inadequate international coordination have contributed to the difficulty of determining the precise number of unique trials being conducted.

Disclosure requirements for clinical trials continue to evolve. In the United States, the FDAAA calls for the expansion of the basic results database through rulemaking “to provide more complete results information” and mandates the consideration of issues such as requiring results reporting for trials of drugs and devices that have not been approved by the FDA, the inclusion of narrative summaries, and the submission of full study protocols. In general, a guiding principle is that expansion of the registry and results database should only improve on, not reduce, the functionality and usefulness of ClinicalTrials.gov. Information about the status of the rulemaking process, including notification of the opportunity to provide comments, can be found at ClinicalTrials.gov. Internationally, the European Medicines Agency is planning to make summary protocol and results information publicly available for clinical trials of approved and unapproved drugs conducted in the European Union. Efforts are under way to ensure the compatibility of the European Medicines Agency database with ClinicalTrials.gov, thus potentially minimizing reporting burdens for those conducting multinational trials and supporting seamless access to results from many parts of the world.29

Despite the change in cultural expectations regarding trial disclosure and the fact that many trial sponsors and investigators are successfully meeting the requirement to submit summary results, our experience to date indicates that others are still struggling. In addition, the poor quality of some submitted entries is troubling. As Beecher observed in 1966, a “truly responsible investigator [emphasis in the original]” is essential if the rules governing clinical research are to have the intended effect.30 Similarly, the usefulness of ClinicalTrials.gov ultimately depends on whether responsible investigators and sponsors make diligent efforts to submit complete, timely, accurate, and informative data about their studies.

Supplementary Material



We thank Drs. Alastair J.J. Wood, Harlan M. Krumholz, and Joseph S. Ross for comments on earlier versions of this manuscript; Sarah O. Kornmeier and Annice M. Bergeris for help with data analysis; and Jonathan McCall for editorial assistance.


Disclosure forms provided by the authors are available with the full text of this article at NEJM.org.


1. Public Law 110-85. Food and Drug Administration Amendments Act of 2007.
2. Zarin DA, Tse T. Medicine: moving toward transparency of clinical trials. Science. 2008;319:1340–2. [PMC free article] [PubMed]
3. Chan AW. Bias, spin, and misreporting: time for full access to trial protocols and results. PLoS Med. 2008;5(11):e230. [PMC free article] [PubMed]
4. Vedula SS, Bero L, Scherer RW, Dickersin K. Outcome reporting in industry-sponsored trials of gabapentin for off-label use. N Engl J Med. 2009;361:1963–71. [PubMed]
5. Glickman SW, McHutchison JG, Peterson ED, et al. Ethical and scientific implications of the globalization of clinical research. N Engl J Med. 2009;360:816–23. [PubMed]
6. Curfman GD, Morrissey S, Drazen JM. Expression of concern: Bombardier et al., “Comparison of upper gastrointestinal toxicity of rofecoxib and naproxen in patients with rheumatoid arthritis,” N Engl J Med. 2000;343:1520–8. [PubMed]N Engl J Med. 2005;353:2813–4. [PubMed]
7. Tse T, Williams RJ, Zarin DA. Reporting “basic results” in ClinicalTrials.gov. Chest. 2009;136:295–303. [PubMed]
8. Laine C, Horton R, DeAngelis CD, et al. Clinical trial registration — looking back and moving ahead. N Engl J Med. 2007;356:2734–6. [PubMed]
9. Communication from the Commission regarding the guideline on the data fields contained in the clinical trials database provided for in Article 11 of Directive 2001/20/EC to be included in the database on medicinal products provided for in Article 57 of Regulation (EC) No 726/2004. European Commission, ed Official Journal of the European Union. 2008 2008/C 168/02.
10. Guidance on the information concerning paediatric clinical trials to be entered into the EU Database on Clinical Trials (EudraCT) and on the information to be made public by the European Medicines Agency (EMEA), in accordance with Article 41 of Regulation (EC) No 1901/2006. European Commission, ed Official Journal of the European Union. 2009 2009/C 28/01.
11. ClinicalTrials.gov protocol data element definitions (May 2010 draft).
12. Schulz KF, Altman DG, Moher D. CONSORT 2010 statement: updated guidelines for reporting parallel group randomized trials. Ann Intern Med. 2010;152:726–32. [PubMed]
13. International Conference on Harmonisation. Technical requirements for registration of pharmaceuticals for human use: tripartite harmonized ICH guideline E3: structure and content of clinical study reports. [PubMed]
14. Ide NC, Loane RF, Demner-Fushman D. Essie: a concept-based search engine for structured biomedical text. J Am Med Inform Assoc. 2007;14:253–63. [PMC free article] [PubMed]
15. ClinicalTrials.gov PRS and U.S. Public Law 110-85. 2010.
16. Bethesda, MD: National Library of Medicine; 2009. Clinical trial registry numbers in MEDLINE/PubMed records.
17. Chan AW, Krleza-Jerić K, Schmid I, Altman DG. Outcome reporting bias in randomized trials funded by the Canadian Institutes of Health Research. CMAJ. 2004;171:735–40. [PMC free article] [PubMed]
18. Thiers FA, Sinskey AJ, Berndt ER. Trends in the globalization of clinical trials. Nat Rev Drug Discov. 2008;7:13–4.
19. Ross JS, Mulvey GK, Hines EM, Nissen SE, Krumholz HM. Trial publication after registration in a cross-sectional analysis. PLoS Med. 2009;6(9):e1000144. [PMC free article] [PubMed]
20. Bourgeois FT, Murthy S, Mandl KD. Outcome reporting among drug trials registered in Ann Intern Med. 2010;153:158–66. [PMC free article] [PubMed]
21. Mathieu S, Boutron I, Moher D, Altman DG, Ravaud P. Comparison of registered and published primary outcomes in randomized controlled trials. JAMA. 2009;302:977–84. Erratum, JAMA 2009;302:1532. [PubMed]
22. Scoggins JF, Patrick DL. The use of patient-reported outcomes instruments in registered clinical trials: evidence from Contemp Clin Trials. 2009;30:289–92. [PMC free article] [PubMed]
23. Kassab S, Cummings M, Berkovitz S, van Haselen R, Fisher P. Homeopathic medicines for adverse effects of cancer treatments. Cochrane Database Syst Rev. 2009;2:CD004845. [PubMed]
24. Marusic A, Haug C. The journal editor’s perspective. In: Foote M, editor. Clinical trial registries: a practical guide for sponsors and researchers of medicinal products. Basel, Switzerland: Birkhäuser; 2006. pp. 13–26.
25. Dwan K, Altman DG, Arnaiz JA, et al. Systematic review of the empirical evidence of study publication bias and outcome reporting bias. PLoS One. 2008;3(8):e3081. [PMC free article] [PubMed]
26. The Consolidated Standards of Reporting Trials (CONSORT) Group check list: item 6 — study outcomes.
27. Geneva: World Health Organization; WHO Trial Registration Data Set (Version 1.2.1), items 19 (“Primary Outcome(s)”)and 20 (“Key Secondary Outcomes”)
28. Clinical Trials Transformation Initiative. Improving the public interface for use of aggregate data in
29. European Commission, Health and Consumers Directorate-General. Implementing technical guidance — list of fields for results-related information to be submitted to the ‘EudraCT’ clinical trials database, and to be made public, in accordance with Article 57(2) of Regulation (EC) No 726/2004 and Article 41 of Regulation (EC) No 1901/2006 and their implementing guidelines 2008/C168/02 and 2009/C28/01. June 1, 2010.
30. Beecher HK. Ethics and clinical research. N Engl J Med. 1966;274:1354–60. [PubMed]