Data were derived from the IMPAC2 Data File maintained by the NIH Office of Extramural Research. The data set included R01 (standard research grant) applications reviewed by CSR during the January 2009 review round and constitutes a sample of all R01 application reviews.
A research application is defined as clinical when the principal investigator indicated involvement of human subjects in the proposed research by checking “yes” on page 1 of the grant application form in response to a query about involvement of human subjects. Excluded from this definition are applications that identified human subjects in the research but claimed Exemption 4. Exemption 4 applies to research involving the collection or study of existing data, documents, records, pathological specimens, or diagnostic specimens, if these sources are publicly available or if the information is recorded by the investigator in a manner in which subjects cannot be identified, directly or through identifiers linked to the subjects. This definition of clinical research captures research on mechanisms of disease, therapeutic interventions, clinical trials, development of technologies, epidemiological and behavioral studies, outcomes research, and health services research.
Study sections that review R01 applications are of two main types: Standard Review Groups (SRGs) and Special Emphasis Panels (SEPs). SRGs are panels with defined charters describing their areas of scientific expertise. They typically meet three times a year and are composed of appointed members who serve for four years, combined with temporary members who serve once to provide supplemental expertise. The initial peer review panels meet face-to-face and usually include 20 to 35 reviewers.
This analysis includes only SRG panel data. It does not include reviews occurring in SEPs, since these reviews use many formats (e.g., face-to-face and web-based discussion) that could complicate the interpretation of results. The small number of R01 applications reviewed by SEPs also limits the ability to analyze results from these meetings.
For each application, a minimum of three panel members are assigned as reviewers: at least two provide written critiques, and one may serve only as a discussant who adds to the comments made by the first two assigned reviewers. All three assigned reviewers provide independently derived “preliminary” priority scores using a standardized set of evaluation criteria.
At the beginning of the panel discussion, the assigned reviewers verbally re-state their independent assessments of the appropriate priority scores for the panel. After the discussion, the assigned reviewers are asked to verbally re-state their priority scores, which may be the same as or different from their pre-discussion preliminary scores. Because these post-discussion scores are not captured, and because they would have been modified by the discussion and thus are not independent, they were not part of this assessment.
If an application was deemed “non-competitive” (quantitatively, in the lower half of applications reviewed by the panel) by unanimous agreement of the members, it was not discussed at the meeting and did not receive a final priority score. Because these applications received no final priority score, they could not be included in this study. Approximately 50% of the applications went on to further discussion by the full panel.
The panels scored applications in increments of 0.1, from 1.0 (highest merit) to 5.0 (lowest merit). The individual panel members, including the three assigned reviewers, independently and privately score the application after the discussion; this preserves the confidentiality and independence of personal voting. CSR review panel members were allowed to score applications up to 0.5 points outside the range stated by the assigned reviewers without indicating to the other members of the panel that they were “out of range.” Reviewers were not allowed to vote more than 0.5 points outside this range unless they made a statement to the panel, since such a score could reflect a substantive difference of opinion or fact that had not been fully explored during the discussion.
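The out-of-range voting rule described above can be sketched as a simple check. This is an illustrative Python sketch only, with hypothetical scores; the function name and values are our own, while the 0.5-point allowance follows the rule as stated.

```python
# Illustrative sketch of the out-of-range voting rule (hypothetical values).
def out_of_range(vote, assigned_scores, allowance=0.5):
    """Return True if a member's vote falls more than `allowance` points
    outside the range spanned by the assigned reviewers' scores, in which
    case the member must make a statement to the panel before voting."""
    low, high = min(assigned_scores), max(assigned_scores)
    return vote < low - allowance or vote > high + allowance

assigned = [1.8, 2.1, 2.3]          # hypothetical assigned-reviewer scores
print(out_of_range(2.7, assigned))  # False: within 0.5 of the 1.8-2.3 range
print(out_of_range(3.0, assigned))  # True: more than 0.5 above 2.3
```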
For the purposes of this study the preliminary score refers to the average of the three assigned reviewers' independent scores, expressed to two decimal places. The final priority score of the SRG, also reported to two decimal places, is the average score of all the voting members of the panel (some members may be in conflict and would not participate in the discussion or vote).
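The two score definitions above are straightforward averages; the following Python sketch, using hypothetical scores, shows how they are computed. The two-decimal reporting follows the text; the specific values are invented for illustration.

```python
# Sketch of the score definitions (hypothetical values).
def average_score(scores):
    """Average a list of priority scores, reported to two decimal places."""
    return round(sum(scores) / len(scores), 2)

# Preliminary score: average of the three assigned reviewers' scores.
assigned = [1.8, 2.1, 2.3]                     # hypothetical
preliminary = average_score(assigned)

# Final priority score: average over all voting panel members
# (members in conflict do not vote and are absent from the list).
panel_votes = [1.8, 2.0, 2.1, 2.3, 2.5, 1.9]   # hypothetical
final = average_score(panel_votes)

print(preliminary)  # 2.07
print(final)        # 2.1
```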
The ICs use the priority scores and a calculated percentile ranking to assist in their funding decisions. Using percentile ranking enables ICs to integrate the outcomes of multiple SRGs. After the SRG meeting, each application that received a final priority score was also assigned a percentile value. The percentile for an R01 application is its relative rank within that SRG: the calculated percentile value specifies the percent of applications with scores equal to or better than that application's. The base used for calculating the SRG percentile for an application is defined by all R01 applications assigned for review by the SRG over three review rounds, whether the application was discussed and scored or not. Because preliminary scores are based on only three members of the study section, there is no base, and thus preliminary scores are not percentiled. Therefore a direct comparison of the preliminary evaluation with the final percentile for the panel is not possible, and comparisons and analyses were limited to priority scores.
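The percentile calculation described above can be illustrated with a small Python sketch. The scores and base below are hypothetical, and for simplicity the example treats the base as a list of scored applications only; as noted in the text, the actual base also counts applications that were assigned but never discussed and scored.

```python
# Sketch of the percentile rule: the percentile of an application is the
# percent of applications in the base with scores equal to or better
# (i.e., lower) than that application's score. All values are hypothetical.
def percentile(score, base_scores):
    """Percent of base scores equal to or better (<=) than `score`."""
    better_or_equal = sum(1 for s in base_scores if s <= score)
    return 100.0 * better_or_equal / len(base_scores)

base = [1.5, 1.8, 2.0, 2.4, 2.4, 3.1, 3.5, 4.0]  # hypothetical base
print(percentile(2.0, base))  # 37.5: 3 of 8 scores are 2.0 or better
```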