The recent debate on healthcare reform in the US has drawn attention to national
policies to improve the quality, safety and cost-effectiveness of patient care. Yet, the delivery of healthcare is a local
phenomenon, dictated by local standards and policies.1
Some industrialized nations have recognized this, and are using hospital-based "health technology assessment" or "comparative effectiveness" centers to improve care locally.2,3
Medical centers in the US, however, have invested less in such infrastructure.2,3
As the passage of healthcare reform prompts us to consider new national strategies to improve patient care, we explore in this Perspective the role that local hospital-based comparative effectiveness centers can play.4,5
The terms "comparative effectiveness" and "health technology assessment" similarly refer to an evaluation of the benefits, harms, and costs of drugs, devices, and clinical practices.6,7
Comparative effectiveness (CE) reviews use scientific evidence to compare one approach to another to estimate incremental benefit. In the US, these reviews are developed by the Evidence-based Practice Centers8
and other Effective Health Care Program partners9
of the Agency for Healthcare Research and Quality (AHRQ), as well as for-profit entities, medical professional societies, and payers. Reviews are subsequently used by payers and purchasers to inform coverage or reimbursement decisions. The national non-profit Patient-Centered Outcomes Research Institute established by healthcare reform may also play a role in the development and dissemination of future CE reviews, as well as funding of CE research to inform clinical practice.10,11
In contrast to reviews created by national bodies or local payers, reviews created by hospital-based CE centers are funded by their home institutions to help inform decision making on the ground, from device purchasing and drug formulary choices to decisions involving clinical practice. These centers can adapt reviews from outside agencies to their local settings and develop new reviews to address their local needs. In addition, they can use local utilization, outcomes, and cost data to fill gaps in the evidence and enhance the relevance of reviews.12
Some hospital-based CE centers have even funded local researchers to address the evidence gaps identified in their own local reviews.13
Most importantly, these centers can play a critical role in implementing report findings, including integrating them into computerized clinical decision support (CDS) or quality improvement (QI) initiatives, and measuring their impact using administrative or clinical data. Such centers thus help to create and foster a culture of evidence-based practice at their local institutions (Table).
Table. Comparing and Contrasting National and Hospital-Based Comparative Effectiveness Centers in the United States
Few studies have examined the impact of hospital-based CE centers on healthcare practices and costs. The Technology Assessment Unit at McGill University Health Center is one that has been evaluated. Of the 27 reports generated in its first five years, 25 were fully implemented, with 6 (24%) recommending investment in new technologies and 19 (76%) recommending rejection, for a reported net hospital savings of $10 million.14
In the US, integrated health systems and managed care organizations like Kaiser Permanente have clear incentives to establish CE centers, and a few have formally done so. Incentives are also aligned for other hospitals to establish CE centers, since most hospital payers reimburse inpatient care through prospective payment systems. Reimbursement based on diagnosis-related groups (DRGs), which provide a fixed payment per patient hospitalization,15 coupled with the recent push by payers toward quality improvement via value-based purchasing, encourages hospitals to provide the best care at the lowest possible cost. Hospital administrators can use CE centers to maximize the value generated from each dollar the hospital spends, which is especially important as the costs of providing care rise in the face of decreasing reimbursements, such as those resulting from healthcare reform.10
Individual providers practicing in fee-for-service models, however, may have fewer incentives to improve cost-effectiveness. In these models, providers are paid for the services they perform, not for the value those services provide. This is the case in many ambulatory settings and even in many inpatient units of hospitals, where providers are reimbursed for the evaluations and procedures they perform. Hence, in fee-for-service models, hospital administrators, rather than the individual clinicians practicing in the hospitals, have the greatest incentive to provide cost-effective care, and thus to support a hospital-based CE center.
Yet, even in fee-for-service models, the return on investment (ROI) for a CE center can be significant. This is because CE centers can support evidence-based practice at an organizational level, which can improve pay-for-performance and publicly reported metrics, potentially resulting in higher reimbursements and market share. Hospital-based CE centers that disseminate evidence through CDS can also help their organizations meet certification criteria for "meaningful use" of their electronic health records, resulting in further reimbursement increases.16
Despite the potential economic benefits, most US hospitals do not have a formal CE center. Instead, many rely on outsourced or less formal evaluations to inform a relatively narrow set of decisions regarding formularies, marketing, and large capital purchases. In many cases, CE is the work of individuals or committees who may not have the expertise to appraise or synthesize scientific evidence adequately, and may be at risk for conflicts of interest. This is especially the case when CE is performed at the level of a clinical department, rather than a hospital, where an evaluation may be too narrow in scope and biased towards interventions performed by that department. Such individuals and committees often rely on financial analyses as well as political clout to help them make decisions.17,18
By contrast, more formal hospital-based CE centers are staffed by individuals trained in evidence-based medicine, who use systematic and objective methods to identify and synthesize scientific evidence from the hospital perspective. For example, the Center for Evidence-based Practice at the University of Pennsylvania Health System is staffed by two hospitalist co-directors trained in epidemiology, two research analysts, a librarian, a health economist, and primary care and infection control liaisons, totaling 4.5 full-time equivalents (FTE). More than 100 reports have been completed for hospital committees and leaders since the Center was established in 2006. These guidelines and systematic reviews have examined topics ranging from lower-cost practices affecting the quality and safety of care, such as the comparison of heparin versus saline for catheter flushing,19 to higher-cost and emerging technologies, such as the use of telemedicine in critical care.12 Descriptions of other hospital-based CE centers in the US are limited, but a survey of hospital-based centers located internationally suggests common characteristics: 1) they are often located in public or academic hospitals; 2) they usually consist of more than one member, most often including clinicians, administrators, economists, and epidemiologists; 3) they focus on clinical practice as well as administrative decision making; 4) they assess devices, drugs, and procedures; 5) they examine effectiveness, safety, and cost data; 6) they use workshops and websites for dissemination; and 7) they target internal users as well as those in collaborative networks.2
Despite their benefits, there are a number of challenges to operating a hospital-based CE center. First, centers need to balance academic rigor with operational efficiency to complete reviews in a timely way so that they can inform decisions. Working with leaders to prioritize projects, limiting the scope of reports to the issues most critical to a decision, and using existing reviews when available can help achieve this balance. Second, it can be challenging to consider costs when published cost analyses are not available, or are not conducted from the hospital perspective. However, when critical to a decision, such analyses can often be performed locally and populated with hospital-specific cost data. Third, CE can be viewed as a threat to innovation, particularly innovations perceived to help medical centers retain or enhance market share. Similarly, providers not educated in evidence evaluation may be resistant to processes informed by CE. By involving key stakeholders up front, and making decisions in a fair and consultative manner, these negative impressions can usually be overcome. Moreover, it is important to acknowledge explicitly that evidence-based decisions are informed not only by evidence, but also by resource and value considerations, such as how important a particular technology might be to a given market. A final challenge is the fear of liability on the part of providers, particularly when policies informed by CE are not followed. Yet, we believe that an organizational approach to addressing clinical questions, one that fosters a culture of evidence-based practice, is the best defense against malpractice.
Implementation of CE reviews can also present challenges. Reviews with the most impact usually begin with clearly defined questions and next steps, are developed in a timely way alongside key stakeholders, and are valid and actionable. When reviews focus on drugs or devices, integrating their results into practice is often straightforward: the reviews can be presented to the relevant formulary or device purchasing committees to inform their decisions. However, when reviews involve changing clinical practice, implementation requires collaboration between key stakeholders and clinical and administrative leaders, and often the use of IT tools like CDS. The Veterans Affairs Quality Enhancement Research Initiative (QUERI) is a prime example of how such implementation can occur.20
Measuring the impact of reviews is straightforward for those focusing on drugs or devices. One can simply determine whether reviews have been presented to the appropriate committees, and whether committee decisions were consistent with review recommendations. In contrast, extramural funding and more robust data are often required to adequately measure the effect of reviews informing practice changes. Potential sources of such funding include the National Institutes of Health Clinical and Translational Science Awards21 and evidence dissemination grants such as those funded by the American Recovery and Reinvestment Act.22
So how does one set up a hospital-based CE center? Based on our experience, a center could start virtually, at the desk of a single clinician trained in systematic reviews and critical appraisal of the literature, who has an interest in quality and patient safety. The hospital would have to support the clinician's time, and depending on the size and complexity of the hospital, a 0.5 FTE commitment may be a reasonable start. Having access to a biomedical library, and a librarian's assistance with developing search strategies, choosing information resources, and retrieving articles, would be crucial. But the critical component of success is the support of high-level clinical leadership. In our case, the chief medical officer of our health system was the driving force behind our CE center, and initially encouraged stakeholders to access the center as a resource. This allowed us to demonstrate the strength and utility of our center early in its development. A hospital-based CE center also needs to identify and build relationships with the multiple leaders making clinical decisions and policy across a hospital. As stakeholders begin to realize an ROI from the center's reviews, the unit may be able to attract further resources to hire analysts to perform systematic reviews, and integrate more closely with QI and IT staff to implement reviews of clinical practice, and measure their impact.
One potential limitation of the hospital-based model is the redundancies or inefficiencies that may result from the independent activities of multiple local centers. Yet, the local nature of this model is also its greatest strength, for it allows centers to address local priorities, take local considerations into account, and use local evidence when gaps in the scientific literature exist. In addition, local centers can complement and strengthen the activities of a national center by providing expertise to adapt, implement and measure the impact of national evidence reports locally.5
Moreover, when evidence from a national center is not available, hospital-based centers can create CE evidence to address their own local questions, and post these reports on a nationally coordinated site for others to adapt and implement.
As a concrete next step, we recommend an in-depth evaluation of the activity and impact of hospital-based CE centers already established across the US. If these centers prove effective, then start-up and maintenance of these local activities could be supported by incentives from national payers like Medicare and accreditors like the Joint Commission, as well as the local value they create at their own institutions.