Eval Health Prof. Author manuscript; available in PMC 2014 May 14.
PMCID: PMC4019992
NIHMSID: NIHMS578281

Common Metrics to Assess the Efficiency of Clinical Research

Abstract

The Clinical and Translational Science Award (CTSA) Program was designed by the National Institutes of Health (NIH) to develop processes and infrastructure for clinical and translational research throughout the United States. The CTSA initiative now funds 61 institutions. In 2012, the National Center for Advancing Translational Sciences, which administers the CTSA Program, charged the Evaluation Key Function Committee of the CTSA Consortium to develop common metrics to assess the efficiency of clinical research processes and outcomes. At this writing, the committee has identified 15 metrics in 6 categories. It has also developed a standardized protocol to define, pilot-test, refine, and implement the metrics. The ultimate goal is to learn critical lessons about how to evaluate the processes and outcomes of clinical research within the CTSAs and beyond. This article describes the work involved in developing and applying common metrics and benchmarks of evaluation.

Keywords: Clinical Research, Common metrics, CTSA, Efficiency of clinical research, Evaluation

Introduction

Researchers continue to struggle with the slow pace at which findings are translated from bench to bedside and, in particular, with the amount of time required to conduct clinical trials and publish results. In cancer research, for example, Dilts and colleagues (2009) found there are almost 300 distinct processes involved in activating a phase III trial and that the median time from conception to activation is over 600 days. While many clinical trials in various fields are never completed because of recruitment and other problems, Ross and colleagues (2012) found that the results of completed trials are published within 30 months in fewer than half of cases and that the overall publication rate is only 68%.

In an attempt to improve processes involved in clinical research, the National Institutes of Health (NIH) created and funded the Clinical and Translational Science Award (CTSA) Program. Administered by the National Center for Advancing Translational Sciences (NCATS), this program was designed to develop infrastructure for clinical and translational research at different institutions throughout the United States. The CTSA initiative now funds 61 institutions and thus represents the largest NIH-funded program to date (CTSA Central, accessed 2013). Although CTSA-funded institutions have responded to the challenge of improving the processes of clinical and translational research in various ways, they all have core entities that help improve the efficiency of research by eliminating barriers and offering specialized services to investigators. Examples of such cores include biostatistics, regulatory compliance, informatics, and clinical research facilities cores.

Recognizing the importance of evaluation, NIH has required that each institution applying for a CTSA have an evaluation plan that explains in detail how it would evaluate its program and assess its use of funds if it were to receive an award. NIH has included this requirement in every Request for Applications for CTSAs since the program's inception in 2005. Many institutions employ a variety of evaluation methods, including surveys, bibliometric analyses, and social network analysis, to gain a better understanding of teams and multidisciplinary research. Evaluators at each CTSA institution are members of the Evaluation Key Function Committee. This committee has 4 workgroups and 2 interest groups, focused on methodology (bibliometrics, qualitative methods, and social network analysis) and on learning and defining best practices (research translational mapping and measurement, definitions, and shared resources).

The Evaluation Key Function Committee meets regularly via conference calls and annually at face-to-face meetings to share best practices in evaluation and to collaborate on evaluation projects. The purpose of the committee is not to engage in a national evaluation. In fact, during the first 6 years of the CTSA program, the NIH employed consultants to conduct a national evaluation, focusing on a summative approach and studying progress of the CTSAs, with special emphasis on the accomplishments of scholars trained through the Research Training and Education Key Function (Rubio, Sufian, & Trochim, 2012). The national evaluation was not designed to generate tools or metrics for the individual institutions but, rather, to report on what the institutions with CTSAs had accomplished.

In 2012, the acting director of the Division of Clinical Innovation within NCATS, which manages the CTSA program, charged the Evaluation Key Function Committee with generating common metrics to assess the efficiency of clinical research processes and outcomes. These metrics can then be used for benchmarking, allowing each institution to see where its performance on efficiency falls relative to other CTSA-funded institutions. The intent is to give institutions a tool for deciding whether to undertake process improvements that would increase the efficiency of clinical research at their institution. Collectively, the data can be used to document the efficiency of clinical research across all of the CTSA-funded institutions.

Also in 2012, the Institute of Medicine (IOM) was charged with evaluating the CTSA Program. In its report, the IOM argues for common metrics that can be used consistently at all CTSA sites to demonstrate the progress of the CTSA Program (IOM, 2013). The overarching goal of the CTSA Program is to improve health; however, as the report notes, evaluating that goal directly is neither feasible nor practical. What the CTSA Program can do is develop common metrics that demonstrate improvements over time in the efficiency of clinical research.

Development of Clinical Research Metrics

In an effort to develop common metrics, the chair and co-chair of the Evaluation Key Function Committee began by asking an evaluation liaison from each of the 61 institutions to meet with the principal investigator of his or her institution's CTSA, generate a list of 5 to 8 metrics for clinical research processes and outcomes, and bring the list to the annual face-to-face meeting. At the October 2012 face-to-face meeting, the 127 participants met in small groups and shared their metrics. Each group was assigned a facilitator and was asked to rank the metrics in order to identify the top 5–10. The facilitators from the small groups then met to synthesize the top metrics, a process that resulted in 15 metrics they believed to be the most promising and feasible to collect. The following day, participants used a clicker system to rate each metric on its importance and on the feasibility of collecting data on it. All of the metrics were strongly endorsed by the participants.

The committee presented a list of the 15 metrics, along with the importance and feasibility scores for each metric, to the CTSA Consortium Steering Committee (CCSC), which consists of principal investigators from each CTSA institution. The CCSC gave its enthusiastic and unanimous support for the evaluation committee's effort.

The 15 metrics (Table 1) can be grouped into 6 categories: clinical research processes, careers, services used at the institution, economic return, collaboration, and products. While the rationales for most of these categories are evident, the rationales for the careers and collaboration categories deserve mention. The 2 metrics in the career category (career development and career trajectory) reflect the training of the investigators, and the 2 metrics in the collaboration category (researcher collaboration and institutional collaboration) reflect the willingness to engage in multidisciplinary approaches to conducting clinical research (i.e., investigators from different disciplines collaborating on research) and overcoming barriers to this research. Together, the training and collaboration affect the efficiency of the research endeavors.

Table 1
Fifteen Metrics Developed by the Clinical and Translational Science Award (CTSA) Consortium to Assess the Efficiency of Clinical Research

The Evaluation Key Function Committee formed a smaller workgroup (the Common Metrics Workgroup) to further refine each metric. To help define the metrics consistently, the members of this workgroup examined and modified a template of measure attributes developed earlier by the National Quality Measures Clearinghouse (NQMC) and the Agency for Healthcare Research and Quality (AHRQ, accessed 2012). Our modified template is shown in Table 2.

Table 2
Template of Measurement Attributes to Be Used by the Clinical and Translational Science Award (CTSA) Consortium*

In modifying the template, the Common Metrics Workgroup recognized that descriptive data needed to be specified and collected in conjunction with each metric. The descriptive data provide a context for the metric and enable better interpretation of its values. For example, expectations regarding the time that elapses between receipt of a grant award and recruitment of the first study subject would vary based on the type of study (phase I, II, or III) and whether the disease being studied is common or rare.
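To make the pairing of a metric value with its descriptive data concrete, the following is a minimal sketch in Python; the field names (e.g., study_phase, rare_disease) are illustrative assumptions, not the Consortium's actual data dictionary.

```python
# A minimal sketch (not the Consortium's actual data model) of pairing a
# metric value with the descriptive data that give it context.
from dataclasses import dataclass
from typing import Optional


@dataclass
class MetricObservation:
    institution_id: str           # de-identified site code
    metric_name: str              # e.g., "time_noga_to_first_accrual_days"
    value: float                  # measured value for one protocol
    study_phase: Optional[str]    # descriptive data: "I", "II", or "III"
    rare_disease: Optional[bool]  # descriptive data: rare vs. common condition


# The same 90-day interval reads differently for a phase III trial of a
# rare disease than for a phase I trial of a common one.
obs = MetricObservation("site_07", "time_noga_to_first_accrual_days", 90.0,
                        study_phase="III", rare_disease=True)
print(obs)
```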

Definition of the Proposed Metrics

At this writing, three metrics have been defined: 1) time from institutional review board (IRB) submission to approval, 2) studies meeting accrual goals, and 3) time from notice of grant award to study opening (Tables 3–8). While these metrics may seem straightforward, defining them proved challenging.

Table 3
IRB Completion Time (Time from IRB submission to approval)
Table 8
Problems With Subject Recruitment

The time from IRB submission to approval is defined as the number of days between the date that the IRB office received the IRB application for review and the date that the IRB gave final approval with no IRB-related contingencies remaining. One challenge in defining the metric was that some institutions require a scientific review of a protocol before the IRB reviews it, while other institutions do not. For institutions that do require a scientific review, we grappled with whether to define the first date as the date the proposal was submitted to the IRB or the date the scientific review was completed. The goal was to provide a definition that could be applied consistently by all institutions. Thus, whenever such differences arose, we used descriptive data to document them and to refine the definitions. This approach increased the amount of data that must be collected for the metrics.
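As an illustration of the date arithmetic behind this definition, here is a minimal Python sketch, assuming ISO-formatted dates and a hypothetical record layout; the scientific-review date is retained only as descriptive data, in line with the approach described above.

```python
# A minimal sketch of the IRB completion-time calculation, assuming
# ISO-formatted dates; field names are illustrative, not a prescribed schema.
from datetime import date


def irb_completion_days(submitted: str, approved: str) -> int:
    """Days from IRB receipt of the application to final approval."""
    return (date.fromisoformat(approved) - date.fromisoformat(submitted)).days


# Descriptive data record whether (and when) a separate scientific review
# occurred, so sites with and without that step remain comparable.
record = {
    "irb_received": "2013-02-01",
    "irb_approved": "2013-03-15",
    "scientific_review_completed": "2013-01-20",  # None if not required
}
record["irb_completion_days"] = irb_completion_days(
    record["irb_received"], record["irb_approved"])
print(record["irb_completion_days"])  # 42
```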

When we tried to define the second metric, studies meeting accrual goals, we found that it was too broad to capture as a single metric. Instead, we needed to describe 4 metrics to capture the original intent: studies with adequate accrual (recruitment/retention), length of time spent in recruitment, study start-up time, and problems with subject recruitment.

The third metric, called time from notice of grant award to study opening, was probably the least problematic, but it also presented challenges. We ended up defining study opening as the date that the first subject provided informed consent for participation in the study. Since the prevalence of the disease being studied can greatly affect the length of time until study opening, we included information about disease prevalence in the descriptive data section.

Development of a Standardized Protocol

In addition to defining the first 3 metrics, the Common Metrics Workgroup developed a standardized protocol to do the following: define the remaining metrics, recruit CTSA institutions to pilot-test the metrics, use the results to refine the metrics, implement the refined metrics across the CTSA Consortium, and create benchmarks. The Common Metrics Workgroup recognizes the need to work with the leadership of the CTSA Consortium (e.g., the CCSC) to implement the metrics.

A standardized protocol enables the Common Metrics Workgroup to solicit assistance from other groups that may be interested in helping to define the common metrics. Within the Evaluation Key Function Committee, for example, groups include the definitions workgroup, which has already engaged in defining key constructs of the CTSA Consortium, and the bibliometric workgroup, which could be instrumental in defining metrics regarding publications.

To help implement our protocol, we are creating a database that will contain the various lists of metrics that were brought to the 2012 face-to-face meeting of the Evaluation Key Function Committee. While our initial efforts will focus on defining the 15 metrics listed in Table 1, the database will enable us to prioritize other metrics that need to be defined. As we progress through this process, we will strive to minimize the redundancies across the metrics and to keep the number of metrics to be implemented at a minimum. The intent is for the common metrics work to be useful, not burdensome to institutions.

Because we have a large number of metrics to consider, we anticipate that the metrics will be rolled out in waves, with definitions introduced every 4 months and with pilot-testing instituted during each new wave. For each wave, we will recruit 3–5 CTSA institutions to pilot-test the metrics for 6 weeks. During this time, we will ask each institution to gather data on at least 10 protocols and input the data into REDCap™ (Research Electronic Data Capture), an electronic data capture tool hosted at Vanderbilt University (Harris et al., 2009). REDCap is a secure, web-based application designed to support data capture for research studies, providing 1) an intuitive interface for validated data entry; 2) audit trails for tracking data manipulation and export procedures; 3) automated export procedures for seamless data downloads to common statistical packages; and 4) procedures for importing data from external sources.
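As a rough illustration of what a pilot site's data collection might look like, the following minimal Python sketch assembles hypothetical per-protocol records and writes a flat CSV of the kind that could be imported into an electronic data capture project; the column names are assumptions, not the actual pilot instrument.

```python
# A minimal sketch of assembling pilot data on ~10 protocols as a flat CSV
# suitable for import into an electronic data capture project. Column names
# are illustrative assumptions only.
import csv

pilot_records = [
    {"protocol_id": "P001", "irb_received": "2013-02-01",
     "irb_approved": "2013-03-15", "study_phase": "II"},
    {"protocol_id": "P002", "irb_received": "2013-02-10",
     "irb_approved": "2013-04-02", "study_phase": "III"},
    # ... remaining protocols collected during the 6-week pilot
]

with open("pilot_metrics.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=list(pilot_records[0].keys()))
    writer.writeheader()
    writer.writerows(pilot_records)
```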

At the end of 6 weeks, we will ask the piloting institutions to complete a brief survey about the feasibility of collecting data and about the barriers and obstacles they encountered. Then we will review all of the data to determine if the metric needs to be further refined. Given the diversity in how research is conducted and implemented by the CTSA institutions, we believe that the definitions will have to undergo several iterations. For example, some institutions have an electronic IRB submission and review process, while other institutions still rely on paper applications. These differences will impact the way in which data can be collected.

By collecting the data from all of the institutions in one database, we will be able to use the database for benchmarking. The long-term plan is that for each metric, each CTSA institution will be able to log in to the system, generate a report that displays the de-identified distribution of responses for the metric, and then determine where it lies on the continuum of all institutions. The benchmarking information will remain confidential for individual institutions. Institutions can use the data to determine if they should develop a process improvement plan to increase the efficiency of clinical research at their institution.
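The benchmarking calculation itself is simple; the following minimal Python sketch, using invented values, shows how an institution might locate its own metric value within the de-identified consortium distribution.

```python
# A minimal sketch of the benchmarking idea: place one institution's value
# within a de-identified distribution of a metric. Values are invented.
def percentile_rank(own_value: float, all_values: list[float]) -> float:
    """Percent of reporting institutions at or below our value."""
    at_or_below = sum(1 for v in all_values if v <= own_value)
    return 100.0 * at_or_below / len(all_values)


# Hypothetical median IRB completion times (days) reported by other sites.
consortium_values = [35, 42, 48, 51, 60, 63, 70, 77, 85, 92]
own_median = 60

print(f"{percentile_rank(own_median, consortium_values):.0f}th percentile")
# Lower is better for a completion-time metric, so a high percentile here
# may signal the need for a process improvement plan.
```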

Future Directions

The future of clinical research depends on developing significant efficiencies so that findings can be translated from bench to bedside more quickly and with fewer resources. The metrics that we are working to define and implement can help move us in that direction, but common metrics are not a panacea; they are a first step in assessing several areas for possible improvement.

The overwhelming support for this work across the CTSA funded institutions and the enthusiasm of the CCSC have strengthened our commitment to establishing common metrics for clinical research. Using these common metrics, we will learn critical lessons about how to evaluate and change the processes and the outcomes of clinical research within CTSA funded institutions. We believe that this, in turn, will affect clinical research throughout the academic research community and beyond.

Table 4
Time from Notice of Grant Award (NOGA) to Date of First Accrual (FA)
Table 5
Studies With Adequate Accrual (Recruitment)
Table 6
Length of Time Spent in Recruitment
Table 7
Study Start-up Time

Acknowledgments

Funding: The project reported here was supported by the National Institutes of Health (NIH) through the Clinical and Translational Science Award (CTSA) Program. The NIH CTSA funding was awarded to the University of Pittsburgh (UL1 TR000005).

Footnotes

Declaration of Conflicting Interests: The author declares no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.

REFERENCES

  • Agency for Healthcare Research and Quality (no date). Template of Measure Attributes. [Accessed December 4, 2012]; http://www.qualitymeasures.ahrq.gov/about/template-of-attributes.aspx.
  • CTSA Central. [Accessed June 26, 2013]; https://ctsacentral.org/institutions.
  • Dilts DM, Sandler AB, Cheng SK, Crites JS, Ferranti LB, Wu AY, Finnigan S, Friedman S, Mooney M, Abrams J. Steps and time to process clinical trials at the Cancer Therapy Evaluation Program. Journal of Clinical Oncology. 2009;27:1761–1766.
  • Harris PA, Taylor R, Thielke R, Payne J, Gonzalez N, Conde JG. Research electronic data capture (REDCap): A metadata-driven methodology and workflow process for providing translational research informatics support. Journal of Biomedical Informatics. 2009;42:377–381.
  • IOM (Institute of Medicine). The CTSA Program at NIH: Opportunities for advancing clinical and translational research. Washington, DC: The National Academies Press; 2013.
  • Ross JS, Tse T, Zarin DA, Xu H, Zhou L, Krumholz HM. Publication of NIH funded trials registered in ClinicalTrials.gov: Cross sectional analysis. BMJ. 2012;344:d7292.
  • Rubio DM, Sufian M, Trochim WM. Strategies for a national evaluation of the Clinical and Translational Science Awards. Clinical and Translational Science. 2012;5:138–139.