More often than not, commentaries highlight studies with important new positive findings. Instead, we highlight three papers that we believe are important despite having largely negative or null findings. We are all familiar with the efforts, particularly in the last decade or so, that have been undertaken to improve the quality and efficiency of medical care. Changing the behavior of physicians and other health care workers is at the crux of many of these efforts. Some progress has been made in improving quality, but three papers in this issue deal with various aspects of these efforts, and they highlight the difficulty of communicating and bringing about some of the changes and improvements that many would agree need to occur.
The papers report on three commonly used approaches for changing clinical practice. Clinical pathways and computerized decision support have been advocated to improve adherence to practice guidelines, and Quality Improvement Organizations (QIOs) have been charged with assisting providers in quality improvement (QI) activities. Tierney et al. (2005) report on a randomized trial of just-in-time, tailored, guideline-based suggestions delivered to clinicians via an electronic medical record to improve outpatient management of asthma or chronic obstructive pulmonary disease. They found that the intervention had no effect on adherence to recommendations, medication compliance, quality of life, satisfaction, or emergency room/hospital visits.
Clinical pathways, structured care plans that note the essential elements of care by hospital day, have been widely used by hospitals in efforts to improve the quality and efficiency of care. In a second paper, Dy et al. (2005) report that only seven of 26 pathways used at a large academic medical center had the desired impact. They report on a qualitative analysis to describe characteristics that differentiate effective from ineffective pathways. They found that many of these care plans were unused. Even among the few that did appear to have an impact, their effectiveness may not have been related to their actual use.
In another paper, Bradley et al. (2005) surveyed hospital quality management directors about their interactions with QIOs, formerly known as Peer Review Organizations (PROs), which are contracted by the Centers for Medicare & Medicaid Services (CMS) to promote quality of care in the Medicare program. They found that hospital quality management directors had largely favorable views of QIOs, but the findings also indicate that the interactions may have been fairly limited and superficial. A fifth of the sample could not name anyone at the QIO involved in a major initiative. About a quarter of the sample could recall no contact with the QIO on the initiative, and half of those who could recall no contact also desired no contact!
Taken together, these papers indicate that computerized suggestions to improve care are often ignored, that care plans to improve hospital care are often ineffective and/or go unused, and that outside technical assistance to improve quality is unwanted by many. Other studies have documented benefits of clinical pathways, of computerized reminders, and of Medicare's quality improvement efforts (Campbell et al. 1998; Grol et al. 2003; Jencks et al. 2003). Indeed, a few of the authors' prior studies, using the same techniques, found the interventions effective (McDonald et al. 1984). There are several possible explanations for these disappointing findings. Perhaps these three studies differed from earlier ones in their methodological approaches, or perhaps they suffered from design weaknesses. However, we believe they were well conducted and employed appropriate health services research techniques (including randomized trials, qualitative analysis, and survey research), and fatal methodological flaws are not likely to explain differences from previous reports. Perhaps the particular interventions were not useful in actual clinical situations or were otherwise poorly designed or inadequately implemented. This would be consistent with the finding that clinical pathways seemed to be more effective for procedures with less complex patients (Dy et al. 2005). The interventions, however, appear to be a reasonable representation of the state of the art. If anything, they were likely to have been more systematically and faithfully implemented because they were part of a study.
Perhaps the novelty of these interventions simply wore off. It is worth noting that in their previous studies, Tierney and colleagues found that the same computer-based reminder system improved physician adherence to preventive care recommendations (McDonald et al. 1984). Similarly, Dy et al. (2005) indicate that the first critical pathways at their institution were more effective. We have also noted a waning in efficacy over time with similar QI interventions (Horowitz and Chassin 2002; Halm et al. 2004). It is possible that the first time a new technology or management tool is deployed, its newness garners precious attention from the target physicians. Early projects may also be those aimed at “low-hanging fruit,” the easiest, highest-yield problems to solve.
Many of these initial efforts also involve strong local clinical leaders, a factor known to be critical to success. QI initiatives may have started out as local and internal undertakings in which all of the principals, often personal colleagues, were well known within the institution. This is undoubtedly part of the social marketing power that drives the “opinion leader” or “clinical champion” effect. The growth of the quality movement, however, has meant that these initiatives now increasingly come from health systems, health plans, or regulatory bodies. Thus, these externally driven QI programs may no longer be viewed as by or about “us” but rather about “them.” From a pessimistic perspective, this may mean that such messages are viewed cynically as motivated by external special interests. A more benign explanation is that there are so many messages about changing practice from so many different messengers that they all begin to look and sound the same. Practitioners are barraged at work and at home with paper and electronic “educational” and “new policy” information from their employers, specialty societies, health departments, drug companies, pharmacy benefit management companies, health plans, and regulatory bodies, among other organizations. In addition, they are constantly required to respond to paper, telephone, and e-mail requests for more information and documentation from these same organizations and from referring groups (e.g., home health agencies, pharmacies). QI messages that arrive over the same paper, e-mail, and telephone media risk being drowned out in the information overload and data requests.
While there is no scientific way to tease out which of these many forces conspired against the QI initiatives featured in this issue, we suggest that, taken together, we may be seeing the beginning of a phenomenon we call “quality improvement fatigue,” or “QI fatigue” (a subset of the larger issue of information overload). While only time will tell whether other QI initiatives will face increasingly stiff challenges, it is important for advocates of change to recognize that approaches that may once have been effective may no longer be so, and that more efficient and persuasive ways of communicating QI messages are needed. In addition, programs need to frame their requests for provider behavior change in a timely and specific fashion, ideally in a way that actually makes it easier for physicians to do the right thing, or at least reduces their effort and hassles. Financial incentives may also play an important role in engaging physicians and other providers, as pay-for-performance programs are testing. Similarly, public disclosure of quality measures in the form of report cards or quality data can motivate interest in performance improvement via both “carrot” and “stick” mechanisms. Although there has been some empirical work, and there are policy initiatives supporting both approaches, much more conceptual and empirical research needs to be done to understand how and when these approaches can be used effectively.
The more that systems solutions make it easier for practitioners to do the right thing at the right time, the better. At times, it will make sense to design systems so that actions cannot be ignored. Tierney and colleagues describe how simply disabling the escape key on a keyboard may increase adherence to computerized decision support (Tierney et al. 2005). In other cases, this may mean designing systems that bypass the clinician when it makes sense to do so, as has been done with standing orders for vaccinations (Dexter et al. 2004) or with rapid response teams to manage hospitalized patients with worsening conditions (Bellomo et al. 2004).
As major payers and regulators such as CMS, JCAHO, and managed care plans require and push even more QI efforts, the QI airspace is about to get even more cluttered. The three studies in this issue offer a cautionary tale: what may have worked in the past may not work again, or as readily. The challenges for those seeking to change physician and organizational behavior may only be getting greater. Physicians and other providers all want improved patient outcomes, reduced adverse events, and greater patient satisfaction, but this will need to happen in the context of growing QI fatigue. If QI interventions have been ignored or resisted, it is possible that their proponents have failed to present a clear, compelling case for what exactly needs to get done, why, when, and by whom. Having failed to make that case, it should not be surprising that anything other than the simplest requests goes unheeded, no matter how well intentioned.