We developed CONSORT 2010 to assist authors in writing reports of randomised controlled trials, editors and peer reviewers in reviewing manuscripts for publication, and readers in critically appraising published articles. The CONSORT 2010 Explanation and Elaboration provides elucidation and context to the checklist items. We strongly recommend using the explanation and elaboration in conjunction with the checklist to foster complete, clear, and transparent reporting and aid appraisal of published trial reports.
CONSORT 2010 focuses predominantly on the two-group parallel randomised controlled trial, which accounts for over half of trials in the literature.2
Most of the items from the CONSORT 2010 Statement, however, pertain to all types of randomised trials. Nevertheless, some types of trials or trial situations dictate the need for additional information in the trial report. When in doubt, authors, editors, and readers should consult the CONSORT website for any CONSORT extensions, expansions (amplifications), implementations, or other guidance that may be relevant.
The evidence based approach we have used for CONSORT also served as a model for the development of other reporting guidelines, such as for reporting systematic reviews and meta-analyses of studies evaluating interventions,16 and observational studies.18
The explicit goal of all these initiatives is to improve reporting. The Enhancing the Quality and Transparency of Health Research (EQUATOR) Network will facilitate development of reporting guidelines and help disseminate them: www.equator-network.org provides information on all reporting guidelines in health research.
With CONSORT 2010, we again intentionally declined to produce a rigid structure for the reporting of randomised trials. Indeed, SORT19 tried a rigid format, and it failed in a pilot run with an editor and authors.20
Consequently, the format of articles should abide by journal style, editorial directions, the traditions of the research field addressed, and, where possible, author preferences. We do not wish to standardise the structure of reporting. Authors should simply address checklist items somewhere in the article, with ample detail and lucidity. That stated, we think that manuscripts benefit from frequent subheadings within the major sections, especially the methods and results sections.
CONSORT urges completeness, clarity, and transparency of reporting, which simply reflects the actual trial design and conduct. However, as a potential drawback, a reporting guideline might encourage some authors to fabricate the information suggested by the guidance rather than report what was actually done. Authors, peer reviewers, and editors should vigilantly guard against that potential drawback and refer, for example, to trial protocols, to information on trial registers, and to regulatory agency websites. Moreover, the CONSORT 2010 Statement does not include recommendations for designing and conducting randomised trials. The items should elicit clear pronouncements of how and what the authors did, but do not contain any judgments on how and what the authors should have done. Thus, CONSORT 2010 is not intended as an instrument to evaluate the quality of a trial. Nor is it appropriate to use the checklist to construct a “quality score.”
Nevertheless, we suggest that researchers begin trials with their end publication in mind. Poor reporting allows authors, intentionally or inadvertently, to escape scrutiny of any weak aspects of their trials. However, with wide adoption of CONSORT by journals and editorial groups, most authors should have to report transparently all important aspects of their trial. The ensuing scrutiny rewards well conducted trials and penalises poorly conducted trials. Thus, investigators should understand the CONSORT 2010 reporting guidelines before starting a trial as a further incentive to design and conduct their trials according to rigorous standards.
CONSORT 2010 supplants the prior version published in 2001. Any support for the earlier version accumulated from journals or editorial groups will automatically extend to this newer version, unless specifically requested otherwise. Journals that do not currently support CONSORT may do so by registering on the CONSORT website. If a journal supports or endorses CONSORT 2010, it should cite one of the original versions of CONSORT 2010, the CONSORT 2010 Explanation and Elaboration, and the CONSORT website in its “Instructions to authors.” We suggest that authors who wish to cite CONSORT should cite this or another of the original journal versions of the CONSORT 2010 Statement and, if appropriate, the CONSORT 2010 Explanation and Elaboration.13 All CONSORT material can be accessed through the original publishing journals or the CONSORT website. Groups or individuals who wish to translate the CONSORT 2010 Statement into other languages should first consult the CONSORT policy statement on the website.
We emphasise that CONSORT 2010 represents an evolving guideline. It requires perpetual reappraisal and, if necessary, modifications. In the future we will further revise the CONSORT material considering comments, criticisms, experiences, and accumulating new evidence. We invite readers to submit recommendations via the CONSORT website.
Box 1: Noteworthy general changes in CONSORT 2010 Statement
- We simplified and clarified the wording, such as in items 1, 8, 10, 13, 15, 16, 18, 19, and 21
- We improved consistency of style across the items by removing the imperative verbs that were in the 2001 version
- We enhanced specificity of appraisal by breaking some items into sub-items. Many journals expect authors to complete a CONSORT checklist indicating where in the manuscript the items have been addressed. Experience with the checklist noted pragmatic difficulties when an item comprised multiple elements. For example, item 4 addresses eligibility of participants and the settings and locations of data collection. With the 2001 version, an author could provide a page number for that item on the checklist, but might have reported only eligibility in the paper, for example, and not reported the settings and locations. CONSORT 2010 removes that ambiguity by requiring authors to provide page numbers in the checklist for both eligibility and settings
Box 2: Noteworthy specific changes in CONSORT 2010 Statement
- Item 1b (title and abstract)—We added a sub-item on providing a structured summary of trial design, methods, results, and conclusions and referenced the CONSORT for abstracts article21
- Item 2b (introduction)—We added a new sub-item (formerly item 5 in CONSORT 2001) on “Specific objectives or hypotheses”
- Item 3a (trial design)—We added a new item including this sub-item to clarify the basic trial design (such as parallel group, crossover, cluster) and the allocation ratio
- Item 3b (trial design)—We added a new sub-item that addresses any important changes to methods after trial commencement, with a discussion of reasons
- Item 4 (participants)—Formerly item 3 in CONSORT 2001
- Item 5 (interventions)—Formerly item 4 in CONSORT 2001. We encouraged greater specificity by stating that descriptions of interventions should include “sufficient details to allow replication”3
- Item 6 (outcomes)—We added a sub-item on identifying any changes to the primary and secondary outcome (endpoint) measures after the trial started. This followed from empirical evidence that authors frequently provide analyses of outcomes in their published papers that were not the prespecified primary and secondary outcomes in their protocols, while ignoring their prespecified outcomes (that is, selective outcome reporting).4 22 We eliminated text on any methods used to enhance the quality of measurements
- Item 9 (allocation concealment mechanism)—We reworded this to include mechanism in both the report topic and the descriptor to reinforce that authors should report the actual steps taken to ensure allocation concealment rather than simply report imprecise, perhaps banal, assurances of concealment
- Item 11 (blinding)—We added the specification of how blinding was done and, if relevant, a description of the similarity of interventions and procedures. We also eliminated text on “how the success of blinding (masking) was assessed” because of a lack of empirical evidence supporting the practice as well as theoretical concerns about the validity of any such assessment23 24
- Item 12a (statistical methods)—We added that statistical methods should also be provided for analysis of secondary outcomes
- Sub-item 14b (recruitment)—Based on empirical research, we added a sub-item on “Why the trial ended or was stopped”25
- Item 15 (baseline data)—We specified “A table” to clarify that baseline and clinical characteristics of each group are most clearly expressed in a table
- Item 16 (numbers analysed)—We replaced mention of “intention to treat” analysis, a widely misused term, by a more explicit request for information about retaining participants in their original assigned groups26
- Sub-item 17b (outcomes and estimation)—For appropriate clinical interpretability, prevailing experience suggested the addition of “For binary outcomes, presentation of both relative and absolute effect sizes is recommended”27
- Item 19 (harms)—We included a reference to the CONSORT paper on harms28
- Item 20 (limitations)—We changed the topic from “Interpretation” and supplanted the prior text with a sentence focusing on the reporting of sources of potential bias and imprecision
- Item 22 (interpretation)—We changed the topic from “Overall evidence.” Indeed, we understand that authors should be allowed leeway for interpretation under this nebulous heading. However, the CONSORT Group expressed concerns that conclusions in papers frequently misrepresented the actual analytical results and that harms were ignored or marginalised. Therefore, we changed the checklist item to include the concepts of results matching interpretations and of benefits being balanced with harms
- Item 23 (registration)—We added a new item on trial registration. Empirical evidence supports the need for trial registration, and recent requirements by journal editors have fostered compliance29
- Item 24 (protocol)—We added a new item on availability of the trial protocol. Empirical evidence suggests that authors often ignore, in the conduct and reporting of their trial, what they stated in the protocol.4 22 Hence, availability of the protocol can instigate adherence to the protocol before publication and facilitate assessment of adherence after publication
- Item 25 (funding)—We added a new item on funding. Empirical evidence points toward funding source sometimes being associated with estimated treatment effects30