We found that an EBQI approach to helping practices integrate guideline-adherent smoking cessation care into routine practice achieved a limited set of evidence-based process changes but failed to improve patient quit rates. We explore several possible explanations for our results and their implications for implementing evidence-based practices.
First, we found that practices, as they had intended, succeeded in implementing increased smoking cessation clinic referrals. Practices rated such referrals as their top-ranked QI strategy (as opposed to primary care or telephone-based guideline alternatives), despite advice from the study's expert panel to the contrary. While the published literature on the efficacy of smoking cessation supports smoking cessation clinic referral as a virtual gold standard (Fiore, Jorenby, and Baker 1997
), using this approach to improve outcomes across large and heterogeneous primary care populations may not be effective. For example, QI teams may not have adequately considered the impact of attempting to direct a larger flow of patients to a scarce resource, such as the potential for “bottlenecks” due to limited capacity. Experts also cautioned teams about the substantial evidence that many patients do not agree to referral, and that many who agree do not actually attend (Thompson et al. 1988
; Van Sluijs, Van Poppel, and Van Mechelen 2004
), making referrals unrealistic for large segments of the primary care population. In addition, referral delays may reduce the immediacy of PC physician responsiveness to patient readiness to change. Recent evidence regarding the effectiveness of smoking cessation helplines (i.e., Quitlines) and e-mail messaging suggests that immediate intervention may be especially beneficial (Fiore et al. 2004
; Lenert et al. 2004).
On one level, the choice of the referral strategy by QI teams is understandable. Busy PC physicians may have found the referral option, accompanied by rigorous evidence of smoking cessation clinic effectiveness, especially attractive. Study practices also experienced substantial increases in patient volume over the course of the study, making referral, with its low burden on clinician time, potentially easier to implement than alternatives that require more PC-based participation. Increasing patient volume may also have contributed to generally reduced levels of organizational slack (Rogers 1995
) for undertaking more complex or difficult strategies. The referral strategy, however, may not have had a large enough reach across primary care patients to impact population smoking cessation outcomes.
We expected our EBQI approach to enable participating sites to implement packages of recommended evidence-based strategies geared to accommodating the full range of needs of primary care smokers. Instead, the EBQI process resulted in QI plans with a mix of evidence-based and nonevidence-based interventions, and many of the most promising strategies were not implemented as intended. In response to expert feedback, the practices tried to add several PC-based activities (e.g., provider feedback reports) to their QI plans on top of their focus on smoking cessation program referrals. Ultimately, however, few practices succeeded in implementing these additional strategies, with the exception of provider and patient education, neither of which is considered sufficient in and of itself. Most practices ended up incorporating additional unplanned strategies that were not evidence based (i.e., good ideas that lacked prior empirical evidence of effectiveness).
So why did the intervention practices listen selectively to the evidence and the advice of smoking cessation “experts”? One possibility is that our intervention practices’ more ambitious QI plans were not accompanied by adequate resources. In applying EBQI to depression care improvement, for example, study practices applied to their organizations for specific resources to implement their proposed QI strategies (Rubenstein et al. 2006
). The study also provided QI team members with paid release time and on-site QI facilitation. In contrast, our study provided only education and facilitation from a distance. Without more support, stakeholder teams may accomplish the “plan-do” (PD) phase of PDSA cycles, without investing in the remaining processes necessary to accomplish true change (Walley and Gowland 2004
). While we provided considerable data on local smoking cessation-related performance to the QI teams (e.g., smoking cessation visit rates), we did not provide information technology (IT) tools for them (e.g., no reminders or templates). IT capabilities might have boosted practice success (Hawe et al. 2004
). Overall, more attention should be paid in future smoking cessation QI efforts to the level and types of resources needed to accomplish major change (Flottorp, Havelsrud, and Oxman 2003).
Another possibility is that, while we tried to stack the QI processes in favor of the evidence, it was easier for participants to expand on something that was already in place (the smoking cessation clinic) than to take on new initiatives focused on counseling and treatment in primary care. Many participants in the initial priority-setting meetings voiced comfort with having a smoking cessation clinic to solve their performance problems. Consistent with findings from depression care improvement, practices tended to choose passive strategies such as education rather than active change strategies (Sherman et al. 2007a
), whereas the active strategies that encompass organizational change may be most effective (Stone et al. 2002).
In addition, tension exists between wanting to learn from an expert and the reality that stakeholders do not want to be told what to do and may feel that the expert does not know “our patients” or “our place.” The EBQI process also relies on local authority to make organizational changes, and is thus dependent on the strength of local leadership (Rubenstein et al. 2002
). Also, these practices were not selected based on perceived need to improve smoking cessation, and there is some evidence that practices choosing to buy help, rather than partake of offered help, fare better (Parker et al. 2007
). The high level of practice choice may have helped stakeholders to “own” the process and outcomes of their EBQI experience, and might be expected to increase the likelihood of their sustaining adopted changes, but it may also have contributed to the observed variability between sites.
After this intervention, our next studies focused instead on premade intervention options (i.e., pick A or B) (Sherman et al. 2007b
). What is not clear is where the handoffs in this strategy occur. In other words, when does the researcher walk away and the practice stand alone? Unlike QI models promulgated by the Institute for Healthcare Improvement that require substantial meeting time within the context of “collaboratives” (Pearson et al. 2005
), EBQI is designed to leverage initial planning meetings into local innovation and ownership. The literature is not clear on how these alternate QI models differ (Mittman 2004
), but one issue remains central to all of them: the need for better insight into how to make the QI process reflect real life.
By the luck of the draw, intervention practices also appeared to be at an early disadvantage compared with control practices. Control practices appeared more likely to be early adopters of smoking cessation interventions at baseline. Specifically, a greater proportion of them used tools to encourage PC counseling and gave PC clinicians authority to prescribe smoking cessation medications (e.g., nicotine patches), and their baseline rates of smoking cessation clinic attendance were higher than those in intervention practices based on administrative data and patient self-report. By 12-month follow-up, intervention practices had significantly increased their attendance rates, while control practices had lost some ground at the practice level. The EBQI process may have helped convince managers and providers of the value of smoking cessation improvement (Michie et al. 2004
) and in turn given them a structured process for successfully implementing at least one facet of evidence-based care they had targeted (i.e., increased referrals to smoking cessation programs). However, because the groups were not balanced at baseline despite randomization, it is difficult to interpret with certainty the cause for equivalence at follow-up. If intervention practices were in fact later adopters, then we may be observing a natural catch-up process independent of EBQI.
While intervention practices accomplished higher attendance rates practice-wide, EBQI-fostered changes failed to have an impact on patient outcomes in the form of smoking cessation. Control practices’ efforts, without support from EBQI implementation, accomplished equivalent quit rates. There are several possible explanations for this result. First, our central outcome was smoking cessation rates in the population of smokers in intervention practices. If the best possible “evidence-based treatment” achieved a 10 percent increase in cessation rates (i.e., a very reasonable intervention) and the implementation method (here, EBQI) improved delivery of this intervention by 30 percent (i.e., a very successful QI implementation strategy), at best we can achieve a 3 percent increase in population cessation rates. While effects of this size can be difficult to measure in the context of a scientific study, an effect of this order would be important from a public health standpoint. Second, both intervention and control practices were operating under national VA performance measures incentivized at network and facility levels. Thus, control practices may have moved forward on a host of QI activities in the absence of an EBQI process to foster priority-setting, external expert review, and practice feedback. Instead, their practice feedback came in the form of nationally provided measures of local smoker identification and tobacco counseling rates, which may have resulted in their higher levels of PC-based smoking cessation interventions at baseline. The value of PC-based changes is further supported by our patient-level trial results demonstrating that the strongest independent predictor of smoking cessation was usually receiving one’s health care from a primary care provider.
Our study lends itself to several teaching points for research–clinical partnerships. First, researchers must be cautious not to oversell the potential absolute impacts (e.g., percent change) of evidence-based practice when applied to the practice or population of patients served. In essence, pushing a large volume of patients into a small “box” (i.e., smoking cessation clinics), even if it is a great “box,” is not going to work; only a fraction of smokers will be affected. Second, practicing clinicians and managers must be mindful that even small but consistently positive impacts at the population level may still yield important benefits (i.e., 3 percent of 46,000 smokers translates into almost 1,400 fewer smokers, with concomitant improvements in health status and potential cost savings over time). Viewing these efforts as learning partnerships and using them to confront barriers, address local resources (human and financial), and refine processes in the spirit of continuous improvement in the context of the evidence will shorten the learning curve and improve the yield of future initiatives.
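The back-of-the-envelope arithmetic above can be made explicit. The sketch below simply restates the illustrative figures from the text (the 10 percent treatment effect, 30 percent delivery improvement, and 46,000 smokers); the variable names are ours, not the study's:

```python
# Illustrative arithmetic from the text (hypothetical best-case figures, not study data).

treatment_effect = 0.10   # cessation-rate increase among patients who receive the treatment
delivery_gain = 0.30      # share of the smoker population newly reached by the QI effort

# Only newly reached patients benefit, so the population-level effect
# is the product of the two rates.
population_effect = treatment_effect * delivery_gain
print(f"Population cessation increase: {population_effect:.0%}")   # 3%

# Applied to a population of 46,000 smokers (figure from the text):
smokers = 46_000
fewer_smokers = population_effect * smokers
print(f"Fewer smokers: {fewer_smokers:,.0f}")                      # 1,380 ("almost 1,400")
```

This is why even a "very successful" implementation strategy paired with a "very reasonable" intervention yields an effect that is small in a trial but meaningful at population scale.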
This study has a number of notable limitations. First, in the absence of a practice registry of smokers, we had to screen thousands of patients to identify a systematic sample of smokers. We used enrollment weights to address patterns of refusal and noncontact. We also incurred significant sample losses at follow-up; we used attrition weights to address potential response biases that might have resulted in retaining smokers at advanced stages of change (Emery et al. 2000
). We empirically found that patients’ readiness to change was not predictive of participation at follow-up. Consistent with the veteran population of VA users, our sample of smokers also over-represented older men, limiting the generalizability of patient-level results to similar groups. We measured smoking cessation attendance rates using national VA administrative data files, and may not have captured all visits due to local coding differences.
We also found discrepancies in rates of smoking cessation clinic attendance between administrative and survey data at follow-up. Administrative data demonstrated comparable attendance rates between intervention and control practices (i.e., intervention sites had “caught up”), while patient-reported attendance was higher in control practices. While we randomly sampled clinic visitors, it is possible that enrolled smokers represented more frequent users (Lee et al. 2002
). We also had access only to the age and gender of nonparticipants, limiting the precision of our ability to weight to the population of smokers. Patient-reported histories of referral and attendance were also higher in control practices at baseline, so it should not be surprising that they remained higher at follow-up. The time windows covered by the two data sources also differed somewhat (Rubenstein 2006).
The VA has been a leader in nationally implementing smoking cessation guidelines, made possible by computerized reminders, routine feedback of chart-based audits, and performance incentives since the mid-1990s (Ward et al. 2003
). While the focus on Ask has helped the VA achieve remarkable results on screening for tobacco use and physician counseling, we believe our findings indicate that attention to Assess, Assist, or Arrange in the “5 A's” is now warranted, as rates of smoking cessation treatment remain low (Anderson et al. 2002
; Jonk et al. 2005
). VA's common purpose and priorities are also important vehicles for knowledge creation when QI is armed with research evidence (Francis and Perlin 2006
). EBQI holds promise for overcoming barriers to translating evidence into practice (Shojania and Grimshaw 2005
), by making relevant research knowledge, data and tools accessible to managers and QI teams (Solberg et al. 2004
). However, EBQI poses pitfalls for practices not prepared to support their priorities with organizational resources for training, IT support, and protected time to design and implement planned QI activities (Solberg et al. 2000
; Feifer et al. 2004).