Objective. To evaluate the impact of a locally adapted evidence-based quality improvement (EBQI) approach to implementation of smoking cessation guidelines into routine practice.
Data Sources/Study Setting. We used patient questionnaires, practice surveys, and administrative data in Veterans Health Administration (VA) primary care practices across five southwestern states.
Study Design. In a group-randomized trial of 18 VA facilities, matched on size and academic affiliation, we evaluated intervention practices’ abilities to implement evidence-based smoking cessation care following structured evidence review, local priority setting, quality improvement plan development, practice facilitation, expert feedback, and monitoring. Control practices received mailed guidelines and VA audit-feedback reports as usual care.
Data Collection/Extraction Methods. To represent the population of primary care-based smokers, we randomly sampled and screened 36,445 patients to identify and enroll eligible smokers at baseline (n = 1,941) and follow-up at 12 months (n = 1,080). We used computer-assisted telephone interviewing to collect smoking behavior, nicotine dependence, readiness to change, health status, and patient sociodemographics. We used practice surveys to measure structure and process changes, and administrative data to assess population utilization patterns.
Principal Findings. Intervention practices adopted multifaceted EBQI plans, but had difficulty implementing them, ultimately focusing on smoking cessation clinic referral strategies. While attendance rates increased (p<.0001), we found no intervention effect on smoking cessation.
Conclusions. EBQI stimulated practices to increase smoking cessation clinic referrals and try other less evidence-based interventions that did not translate into improved quit rates at a population level.
Tobacco use is the leading preventable cause of mortality, accounting for 435,000 deaths in the United States (Mokdad et al. 2004) and 4.8 million deaths due to tobacco worldwide (Ezzati and Lopez 2003). In the United States alone, smoking is responsible for $157 billion in annual health-related economic losses, which translates into each pack of cigarettes sold in the United States leading to $3.45 in medical expenditures and $3.73 in lost productivity (MMWR 2002).
While the prevalence of tobacco use has decreased dramatically over the last 40 years chiefly through public health interventions (MMWR 2003), we have not seen further declines in the past decade despite the availability of an increasing array of efficacious treatments (Ranney et al. 2006). Routine treatment of smokers by physicians has been a national health objective, but physician detection of smokers, counseling of smokers to quit, and prescription of pharmacotherapy to aid them in quitting have been well below quality standards (Thorndike et al. 1998). Dissemination of smoking cessation clinical practice guidelines during the mid-1990s offered substantial promise for making greater inroads by promoting evidence-based recommendations targeting changes in physician behavior and adaptive changes in health care settings. In particular, the guidelines reflect strong empirical evidence of the value of systematic screening for tobacco use, advising smokers to quit, and providing smoking cessation treatment (both counseling and medications) (Cromwell et al. 1997; Fiore 2000). The guidelines offer an explicit roadmap for integrating interventions with demonstrated effectiveness into routine clinical care (Fiore, Jorenby, and Baker 1997; Raw, McNeil, and West 1999).
However, most guideline dissemination efforts have met with only limited success. While early dissemination encouraged primary care physicians to ask about smoking and advise their patients to quit, few practicing physicians met criteria for adequate counseling to help smokers quit and fewer still provided smokers with necessary assistance or arranged follow-up services (Goldstein et al. 1998; DePue et al. 2002). System-level interventions have generally been recommended to help individual clinicians adopt guideline-adherent practices (Bero et al. 1998; Solberg 2000). These include improved forms of documentation to record smoking status (e.g., intake forms, “vital sign” stamps, stickers), clinician prompts or reminder systems for fostering guideline-adherent actions, provision of on-site educational materials, designation of a coordinator or clinical “champion,” training of nurses/support staff in person or by phone to replace physician counseling, audit-and-feedback of clinician guideline adherence, computerized decision support, and incentives (Lichtenstein et al. 1996; Katz et al. 2004; An et al. 2006). Adoption of these approaches into practice settings, however, involves organizational change (Wensing and Grol 1994; Oxman et al. 1995). There is growing consensus that efforts to implement research into practice through organizational change (Stone et al. 2002) depend in large part on the degree to which they account for, or are adapted to, the context of individual practices (Grol 1992, 1997) to facilitate diffusion (Rogers 1995).
Quality improvement (QI) methods assist practices in implementing improvements that are synchronous with local needs, priorities, and resources. Evidence-based quality improvement (EBQI) methods are based on the premise that practices will have greater success in achieving true improvements through organizational change using prior evidence from the literature as a guide for their activities. The objective of this study was to evaluate the effectiveness of an EBQI method for enabling health care managers, rather than researchers, to implement evidence-based smoking cessation interventions in the context of local practice needs and under routine conditions and to determine its impact on practice-level smoking cessation.
We approached for participation all facilities with three or more primary care providers and 3,000 or more primary care patients in the three contiguous and geographically proximal Veterans Health Administration (VA) health care networks in the southwestern United States. Three facilities declined to participate due to competing demands and time constraints. The research team's home institution was excluded from trial participation. We paired the remaining 20 eligible VA facilities on size and academic affiliation, blocking on VA network to control for the influence of network-level policies, resource allocation, and leadership differences. One site's Institutional Review Board (IRB) was closed, eliminating the site and its matching facility, leaving 18 sites.
We used a group-randomized trial design to evaluate the impact of this EBQI approach to smoking cessation guideline implementation (Campbell et al. 2000). We randomly allocated one site from each pair to the intervention (n = 9) or control group (n = 9). IRB approval was maintained at all final participating sites.
Using the U.S. Public Health Service smoking cessation guidelines as our foundation for evidence-based practice recommendations, we designed and used an organizational QI intervention comprised of physician and patient educational materials, structured evidence review, local priority setting, QI plan development and adaptation, and site-specific audit-and-feedback, supplemented with ongoing expert review. At each intervention practice, 30-minute didactic sessions on population-based approaches to smoking cessation were delivered, followed by implementation planning using an expert panel process to help local opinion leaders set institutional priorities. Participating sites and their QI teams, rather than researchers, were the decision makers, ensuring local control of the adaptation of intervention features (Rubenstein et al. 1995). In addition to promoting detection of smokers through screening, smoking cessation experts from the research team provided evidence summaries for the main approaches advocated by the Department of Defense (DoD)/VA clinical practice guidelines for smoking cessation—namely, referral to a smoking cessation program, treatment within primary care, or telephone counseling (DoD/VA, 2004)—and offered recommendations for minimum protocols and implementation strategies that could be used to achieve organizational benefit (Sherman et al. 2005).
Following the panel, each intervention practice received a compendium of smoking cessation resource materials and tools for patients and providers shown to be effective in practice, as well as a QI manual outlining intervention processes and linking sites with research team assistance in obtaining and implementing any of the best practices included in the compendium. Academic experts participated in monthly audio or video conferences with site leadership to facilitate ongoing local adaptation of the prioritized interventions into explicit QI plans that laid out specific steps and resources necessary to act upon their chosen strategies (e.g., telephone outreach, provider profiling). Site leaders were encouraged but not required to revise their QI plans based on expert review and input. Bimonthly newsletters highlighted practice successes and challenges among participating sites. Each practice also received quarterly audit-and-feedback progress reports, including comparisons with other sites. Table 1 describes implementation characteristics.
Control sites received guideline copies. Because smoking cessation was among VA's performance measures, all sites also received audit-and-feedback reports from externally audited random patient records, which rank ordered site- and network-level performance (Kizer 1999).
We enrolled a representative cross section of smokers by screening random samples of all practice patients with three or more visits in the previous 12 months. Patients were contacted by trained interviewers using computer-assisted telephone interviewing, who described the study, administered informed consent using standardized protocols, and screened for smoking status (California Tobacco Surveys 2002). Current smokers were enrolled and further surveyed in the same call between March 22, 2000 and February 23, 2001; 12-month follow-up interviews were conducted on approximate anniversary dates between April 9, 2001 and January 2, 2002.
Following enrollment, trained telephone interviewers administered a 20–30-minute survey instrument including measures of smokers’ sociodemographics (e.g., age, gender, race–ethnicity, insurance), health status (including the Veterans’ SF-36 [Jones et al. 2001], Mental Health Index [Veit and Ware 1983], depression [Andresen et al. 1994], physical activity [Ainsworth, Jacobs, and Leon 1993], alcohol use [Saunders et al. 1993]), smoking behavior, nicotine dependence (Heatherton et al. 1991), readiness to change (DiClemente et al. 1991), and prebaseline treatment experience. Random interviews were selected for supervisor “observation” (i.e., silent listening-in to a live interview) in accordance with California law. At follow-up, smokers were resurveyed regarding their smoking behavior and treatment experience. Site surveys and intervention logs were used to track the implementation decisions of each primary care practice (Sherman et al. 2006). The number of unique patients, visits and mean visits per patient for primary care and smoking cessation clinics for intervention and control practices were obtained using designated clinic stop codes (301 and 323 for primary care; 707 for smoking cessation) from the VA Outpatient Clinic file.
Baseline characteristics of enrolled patients were compared by intervention group using t-tests for continuous variables and χ2 tests for categorical variables. Variables hypothesized to be predictive of smoking cessation were tested; predictor variables significantly correlated with smoking cessation were evaluated for collinearity and subsequently included in the logistic regression model in an intent-to-treat analysis (Mickey and Greenland 1989).
We used random hot deck imputation (Little 1988) to handle missing covariate values (fewer than 1–2 percent). We weighted patient data for the probability of enrollment (i.e., to better represent the target population of all primary care users) and attrition (i.e., to adjust for nonresponse between baseline and follow-up) using recursive logistic regression models (Rubin 1987). Predictor variables for enrollment weights included study randomization blocks and patient demographics (age and gender). Predictors for attrition weights included patient age, readiness to change, access to other public or private insurance, disability status (i.e., recipient of worker's compensation or Social Security Income), and reliance on VA care (i.e., exclusive VA use versus non-VA and low-VA use). Final weights were the reciprocal of the product of the model-predicted probabilities for the stages of enrollment and follow-up.
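The two-stage weighting described above can be sketched as follows. The probabilities used here are illustrative stand-ins, not the study's actual model-predicted values, which came from the recursive logistic regression models.

```python
# Sketch of the two-stage weighting scheme (illustrative values only).
# In the study, enrollment and attrition probabilities were predicted by
# recursive logistic regression models; here they are hard-coded.

def final_weight(p_enroll: float, p_retain: float) -> float:
    """Final weight = reciprocal of the product of the model-predicted
    probabilities of enrollment and of completing 12-month follow-up."""
    return 1.0 / (p_enroll * p_retain)

# A hypothetical patient predicted to have a 40 percent chance of enrolling
# and an 80 percent chance of completing follow-up:
w = final_weight(0.40, 0.80)
print(round(w, 3))  # 3.125 -- this respondent "stands in" for ~3 target-population smokers
```

Patients who were unlikely to enroll or to be retained thus receive larger weights, pulling the analytic sample back toward the full population of primary care smokers.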
We used weighted logistic regression (SPSS version 11.5) to evaluate intervention effects, adjusting for patient-level predictors of smoking cessation (30+ days without smoking) at 12-month follow-up. We assessed the intraclass correlation coefficient (ICC) to determine the need for cluster adjustment; because the ICC did not differ significantly from zero, an unadjusted analytic approach was used. We assessed goodness of fit with the Hosmer–Lemeshow statistic (Hosmer and Lemeshow 2000).
We analyzed smoking cessation clinic attendance rates by identifying all unique patients with evidence of a visit to a smoking cessation clinic stop using codes in VA administrative data files for 12 months before (fiscal year or FY1999), during (FY2000) (concurrent with patient enrollment of the smokers’ cohort), and after EBQI implementation (FY2001). We divided these counts by the estimated number of smokers in each practice (the estimated prevalence of smoking multiplied by the number of primary care patients seen in the same time period) and expressed the results as rates per 1,000.
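The rate calculation above can be made concrete with a short sketch; the practice figures below are hypothetical, not study data.

```python
def attendance_rate_per_1000(unique_attendees: int,
                             smoking_prevalence: float,
                             primary_care_patients: int) -> float:
    """Smoking cessation clinic attendance rate per 1,000 estimated smokers.

    The denominator is the estimated number of smokers in the practice:
    estimated smoking prevalence times the number of primary care patients
    seen in the same period.
    """
    estimated_smokers = smoking_prevalence * primary_care_patients
    return unique_attendees / estimated_smokers * 1000

# Hypothetical practice: 20 percent smoking prevalence (as in the screening
# sample), 25,000 primary care patients, 350 unique clinic attendees.
print(round(attendance_rate_per_1000(350, 0.20, 25_000), 1))  # 70.0 per 1,000
```

Normalizing by the estimated smoker population, rather than raw attendee counts, is what allows comparison across practices of different sizes and across years with shifting patient volumes.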
Randomized practices were equivalent at baseline, with exceptions of higher use of provider incentives and tracking of screening performance in intervention practices and more provider counseling feedback and pharmacotherapy prescription authority in control practices (Table 1). Kickoff intervention meetings, ranging from five to 13 participants, all included the heads of Medicine and/or Primary Care, while senior leadership (i.e., director) was present in six of nine practices. Practice-level site champion qualifications varied (four Ph.D.s, three R.N.s, one M.D., and one other). All intervention practices (100 percent) chose and endorsed their top five priorities for smoking cessation, received tailored expert advice to help develop locally customized QI plans, and then received expert feedback on them. All QI plans focused on smoking cessation clinic referrals as the main guideline implementation strategy. Local QI plans for accomplishing this goal varied, with emphasis on smoking-related provider education, feedback, and patient education (Table 1).
Intervention practices’ actual levels of implementation varied (Table 1). Provider and patient education activities were most commonly implemented (both rated as “not too difficult” to implement). While more broadly proposed, provider feedback (e.g., # of referrals, # on nicotine replacement therapy) was only partially implemented in a single site (rated “very difficult” in sites unable to implement). Changes to primary care (e.g., brief counseling, computerized reminders) and to smoking cessation clinics (e.g., counselors or “health techs” hired) were each planned and implemented in only two sites (and rated as “very difficult”). Provider incentives linked to referral behavior were only partially implemented in one site (rated “difficult”). Five practices (three of which were unable to implement the majority of the activities in their QI plans) added new activities not part of their original plans. These new activities were generally not part of the evidence-based toolkit that they were provided and thus varied in the degree to which they were evidence based (e.g., addition of 10 hours of psych tech time, increased availability of a variety of behavioral programs, establishment of a separate smoking cessation pharmacy clinic, addition of clinical pharmacists to provide individual counseling, and tobacco cessation clinical reminder).
Intervention practices saw a large absolute increase in smoking cessation clinic referrals (from 1,137 to 1,926 smokers seen) compared with control practices (from 1,979 to 1,993) (p<.0001) between baseline and 12-month post-EBQI implementation (Table 2). Intervention practices also demonstrated increased attendance rates (from 56.0 to 72.6 per 1,000) versus control practices whose attendance rates actually declined in the face of influxes of more primary care patients to the system (from 77.0 to 68.3 per 1,000) (p<.0001). Among those who attended, smokers in intervention practices had more visits post-EBQI (3.9 versus 3.5, p<.001).
We screened over 36,000 randomly sampled primary care patients to identify and enroll about 1,000 smokers in intervention and control practices, with a 56 percent response rate at 12-month follow-up (Figure 1). Overall, 77 percent of primary care patients had smoked in their lifetimes, while 20 percent were current smokers. Enrolled patients tended to be men (94 percent), with a high school diploma or better (86 percent), married (52 percent), and average age of 63.7 years.
We found no baseline differences in sociodemographics, health habits, readiness to change, or primary care visits (Table 3). Control site patients were more likely to smoke every day, to wake up to smoke, and to have tried nicotine patches, attended a smoking cessation program, and tried other ways to quit preintervention. They reported lower general health status and greater receipt of disability or Social Security Income, and were less likely to live alone. They were also more likely to use primary care providers for their health care compared with specialists.
At 12-month follow-up, 8.7 and 9.0 percent of enrolled smokers quit in intervention and control sites, respectively (Table 4). Control practice smokers reported more nicotine patch prescriptions and more referrals to smoking cessation programs, consistent with their self-reported histories pre-EBQI implementation; attendees, in turn, were more likely to be prescribed Zyban or Wellbutrin. Adjusting for baseline differences (noted in Table 3), we found no intervention effect on quit rates (Table 4). We had a 71 percent chance of detecting a statistically significant difference at the p<.05 level between intervention and control smokers in post hoc power analyses (assuming a two-sided test). Across all practices, patients who quit were more likely to see primary care providers for their usual care (OR=2.68, 95 percent CI 1.41–5.68) and less likely to be everyday smokers (OR=0.47, 95 percent CI 0.28–0.79) (Table 4).
We found that EBQI approaches to helping practices implement guideline-adherent smoking cessation care into routine practice achieved a limited set of evidence-based process changes but failed to improve patient quit rates. We explore a number of possible explanations for our results and their implications for implementing evidence-based practices.
First, we found that practices, as they had intended, succeeded in implementing increased smoking cessation clinic referrals. Practices rated such referrals as their top-ranked QI strategy (as opposed to primary care or telephone-based guideline alternatives), despite advice from the study's expert panel to the contrary. While the published literature on the efficacy of smoking cessation supports smoking cessation clinic referral as a virtual gold standard (Fiore, Jorenby, and Baker 1997), using this approach to improve outcomes across large and heterogeneous primary care populations may not be effective. For example, QI teams may not have adequately considered the impact of attempting to direct a larger flow of patients to a scarce resource, such as the potential for “bottlenecks” due to limited capacity. Experts also cautioned teams about the substantial evidence that many patients do not agree to referral, and that many who agree do not actually attend (Thompson et al. 1988; Van Sluijs, Van Poppel, and Van Mechelen 2004), making referrals unrealistic for large segments of the primary care population. In addition, referral delays may reduce the immediacy of primary care (PC) physician responsiveness to patient readiness to change. Recent evidence regarding the effectiveness of smoking cessation helplines (i.e., Quitlines) and e-mail messaging suggests that immediate intervention may be especially beneficial (Fiore et al. 2004; Lenert et al. 2004).
On one level, the choice of the referral strategy by QI teams is understandable. Busy PC physicians may have found the referral option, accompanied by rigorous evidence of smoking cessation clinic effectiveness, especially attractive. Study practices also experienced substantial increases in patient volume over the course of the study, making referral, with its low burden on clinician time, potentially easier to implement than alternatives that require more PC-based participation. Increasing patient volume may also have contributed to generally reduced levels of organizational slack (Rogers 1995) for undertaking more complex or difficult strategies. The referral strategy, however, may not have had a large enough reach across primary care patients to impact population smoking cessation outcomes.
We expected our EBQI approach to enable participating sites to implement packages of recommended evidence-based strategies geared to accommodating the full range of needs of primary care smokers. Instead, the EBQI process resulted in QI plans with a mix of evidence-based and nonevidence-based interventions, many of the most promising of which did not get implemented as intended. The practices tried to add several PC-based activities into their QI plans (e.g., provider feedback reports) on top of the focus on smoking cessation program referrals in response to expert feedback. Ultimately, however, few practices succeeded in implementing these additional strategies, with the exception of provider and patient education, neither of which are considered sufficient in and of themselves. Most practices ended up trying to incorporate additional unplanned strategies that were not really evidence based (i.e., good ideas but lacked prior empirical evidence of their effectiveness).
So why did the intervention practices listen selectively to the evidence and the advice of smoking cessation “experts”? One possibility is that our intervention practices’ more ambitious QI plans were not accompanied by adequate resources. In applying EBQI to depression care improvement, for example, study practices applied to their organizations for specific resources to implement their proposed QI strategies (Rubenstein et al. 2006). The study also provided QI team members with paid release time and on-site QI facilitation. In contrast, our study provided only education and facilitation from a distance. Without more support, stakeholder teams may accomplish the “plan-do” (PD) phase of PDSA cycles, without investing in the remaining processes necessary to accomplish true change (Walley and Gowland 2004). While we provided considerable data on local smoking cessation-related performance to the QI teams (e.g., smoking cessation visit rates), we did not provide information technology (IT) tools for them (e.g., no reminders or templates). IT capabilities might have boosted practice success (Hawe et al. 2004). Overall, more attention should be paid in future smoking cessation QI efforts to the level and types of resources needed to accomplish major change (Flottorp, Havelsrud, and Oxman 2003).
Another possibility is that, while we tried to stack the QI processes in favor of the evidence, it was easier for participants to expand on something that was already in place (the smoking cessation clinic) than take on new initiatives focused on counseling and treatment in primary care. Many participants in the initial priority-setting meetings voiced comfort with having a smoking cessation clinic to solve their performance problems. Consistent with findings from depression care improvement, practices tended to choose passive strategies such as education rather than active change strategies (Sherman et al. 2007a), whereas the active strategies that encompass organizational change may be most effective (Stone et al. 2002).
In addition, tension exists between wanting to learn from an expert and the realities of not wanting to be told what to do and the notion that the expert does not know “our patients” or “our place.” The EBQI process also relies on local authority to make organizational changes, and is thus dependent on the strength of local leadership (Rubenstein et al. 2002). Also, these practices were not selected based on perceived need to improve smoking cessation, and there is some evidence that practices choosing to buy help, rather than partake of offered help, fare better (Parker et al. 2007). The high level of practice choice may have helped stakeholders to “own” the process and outcomes of their EBQI experience, and might be expected to increase the likelihood of their sustaining adopted changes, but also supported the observed variability between sites.
After this intervention, our next studies focused instead on premade intervention options (i.e., pick A or B) (Sherman et al. 2007b). What is not clear is where the handoffs in this strategy occur. In other words, when does the researcher walk away and the practice stand alone? Unlike QI models promulgated by the Institute for Healthcare Improvement that require substantial meeting time within the context of “collaboratives” (Pearson et al. 2005), EBQI is designed to leverage initial planning meetings into local innovation and ownership. The literature is not clear on how these alternate QI models differ (Mittman 2004), but one issue remains central to all of them and that is the need for better insights on how one gets the QI process to reflect real life.
By the luck of the draw, intervention practices also appeared to be at an early disadvantage compared with control practices. Control practices appeared more likely to be early adopters of smoking cessation interventions at baseline. Specifically, a greater proportion of them used tools to encourage PC counseling and gave PC clinicians authority to prescribe smoking cessation medications (e.g., nicotine patches), and their baseline rates of smoking cessation clinic attendance were higher than those in intervention practices based on administrative data and patient self-report. By 12-month follow-up, intervention practices had significantly increased their attendance rates, while control practices had lost some ground at the practice level. The EBQI process may have helped convince managers and providers of the value of smoking cessation improvement (Michie et al. 2004) and in turn given them a structured process for successfully implementing at least one facet of evidence-based care they had targeted (i.e., increased referrals to smoking cessation programs). However, because the groups were not balanced at baseline despite randomization, it is difficult to interpret with certainty the cause for equivalence at follow-up. If intervention practices were in fact later adopters, then we may be observing a natural catch-up process independent of EBQI.
While intervention practices accomplished higher attendance rates practice-wide, EBQI-fostered changes failed to have an impact on patient outcomes in the form of smoking cessation. Control practices’ efforts, without support from EBQI implementation, accomplished equivalent quit rates. There are several possible explanations for this result. First, our central outcome was smoking cessation rates in the population of smokers in intervention practices. If the best possible “evidence-based treatment” achieved a 10 percent increase in cessation rates (i.e., a very reasonable intervention) and the implementation method (here, EBQI) improved delivery of this intervention by 30 percent (i.e., a very successful QI implementation strategy), at best we can achieve a 3 percent increase in population cessation rates. While effects of this size can be difficult to measure in the context of a scientific study, an effect of this order would be important from a public health standpoint. Second, both intervention and control practices were operating under national VA performance measures incentivized at network and facility levels. Thus, control practices may have moved forward on a host of QI activities in the absence of an EBQI process to foster priority-setting, external expert review, and practice feedback. Instead, their practice feedback came in the form of nationally provided measures of local smoker identification and tobacco counseling rates that may have resulted in their higher levels of PC-based smoking cessation interventions at baseline. The value of PC-based changes is further supported by our patient-level trial results demonstrating that the strongest independent predictor of smoking cessation was seeing a primary care provider for one's usual care.
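The back-of-envelope arithmetic in the explanation above (a 10 percent treatment effect delivered to 30 percent more of the population) can be written out explicitly:

```python
# Population-impact arithmetic from the text above (illustrative, using the
# paper's own hypothetical figures, not measured study effects).

treatment_effect = 0.10       # absolute gain in quit rate among those treated
delivery_improvement = 0.30   # additional share of smokers reached by the QI strategy

# The best-case population-level gain is the product of the two:
population_gain = treatment_effect * delivery_improvement
print(f"{population_gain:.0%}")  # 3%
```

This multiplicative structure is why even a highly effective treatment paired with a highly successful implementation strategy can produce a population effect too small to detect in a trial of this size, yet still meaningful for public health.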
Our study lends itself to several teaching points for research–clinical partnerships. First, researchers must be cautious in overselling the potential absolute impacts (e.g., percent change) of evidence-based practice when applied to the practice or population of patients served. In essence, pushing a large volume of patients into a small “box” (i.e., smoking cessation clinics)—even if it is a great “box”—is not going to work and only a fraction of smokers will be affected. Second, practicing clinicians and managers must be mindful that even small but consistently positive impacts at the population level may still yield important benefits (i.e., 3 percent of 46,000 smokers translates into almost 1,400 fewer smokers, with the concomitant improvements in health status and potential cost savings over time). Viewing these efforts as learning partnerships and using them to confront barriers, address local resources (human and financial) and refine processes in the spirit of continuous improvement in the context of the evidence will shorten the learning curve and improve the yield of future initiatives.
This study has a number of notable limitations. First, in the absence of a practice registry of smokers, we had to screen thousands of patients to identify a systematic sample of smokers. We used enrollment weights to address patterns of refusal and noncontact. We also incurred significant sample losses at follow-up; we used attrition weights to address potential response biases that might have resulted in retaining smokers at advanced stages of change (Emery et al. 2000). We empirically found that patients’ readiness to change was not predictive of participation at follow-up. Consistent with the veteran population of VA users, our sample of smokers also over-represented older men, limiting the generalizability of patient-level results to similar groups. We measured smoking cessation attendance rates using national VA administrative data files, and may not have captured all visits due to local coding differences.
We also found discrepancies in rates of smoking cessation clinic attendance between administrative and survey data at follow-up. Administrative data demonstrated comparable attendance rates between intervention and control practices (i.e., intervention sites had “caught up”), while patient-reported attendance was higher in control practices. While we randomly sampled clinic visitors, it is possible that enrolled smokers represented more frequent users (Lee et al. 2002). We also had access only to age and gender of nonparticipants, limiting the precision of our ability to weight to the population of smokers. Patient-reported histories of referral and attendance were also higher in control practices at baseline, so it should not be surprising that they remained higher at follow-up. Time windows also differed somewhat (Rubenstein 2006).
The VA has been a leader in nationally implementing smoking cessation guidelines, supported since the mid-1990s by computerized reminders, routine feedback of chart-based audits, and performance incentives (Ward et al. 2003). While the focus on Ask and Advise has helped the VA achieve remarkable results in screening for tobacco use and physician counseling, our findings suggest that attention to the Assess, Assist, and Arrange steps of the “5 A's” is now warranted, as receipt of smoking cessation treatment remains low (Anderson et al. 2002; Jonk et al. 2005). The VA's common purpose and priorities are also important vehicles for knowledge creation when QI is armed with research evidence (Francis and Perlin 2006). EBQI holds promise for overcoming barriers to translating evidence into practice (Shojania and Grimshaw 2005) by making relevant research knowledge, data, and tools accessible to managers and QI teams (Solberg et al. 2004). However, EBQI poses pitfalls for practices that are not prepared to support their priorities with organizational resources for training, IT support, and protected time to design and implement planned QI activities (Solberg et al. 2000; Feifer et al. 2004).
Joint Acknowledgment/Disclosure Statement: This study was funded by the VA HSR&D Service (Project #CPG 97-012). The trial was registered through ClinicalTrials.gov (Registry No. NCT00012987). Dr. Farmer was supported by a VA HSR&D career development award (Project #MRP 04-221). Dr. Yano was supported by the original grant, and subsequently by the VA Greater Los Angeles HSR&D Center of Excellence (Project #HFP 94-028) and a VA HSR&D Research Career Scientist Award (Project #RCS 05-195). We acknowledge the support of our site principal investigators: Timothy Carmody, Ph.D., Carol Chavez, N.P., Linda Ferry, M.D., Michael Gould, M.D., Betty Hedrick, N.P., Linda Hill, R.N., James Howard, M.D., Susan Blair Knepper, F.N.P., Charles McCreary, M.D., Matthew Meyer, Ph.D., Celia Michael, Ph.D., Nicole Miller, Ph.D., Sharon Rapp, Ph.D., Mitch Rice, R.N., M.N., Robert Smyer, M.D., David Webb, M.D., Brian Yee, M.D., and Sheila Young, Ph.D. We also acknowledge MingMing Wang, M.P.H., for administrative data acquisition and analysis, Ismelda Canelo, M.P.A., for administrative support, and the many interviewers and staff at California Survey Research Services Inc. (CSRS) in Van Nuys, CA. We would also like to thank the editors and reviewers for their extremely helpful, substantive input leading to improved reporting of these trial results.
Disclosures: The authors have no relevant financial interests or advocacy positions pertaining to this manuscript. VA policy requires submission of a copy of manuscripts on acceptance for internal preparation of briefings and/or press releases as needed in anticipation of publication, but they do not undergo or require internal peer review or comment periods. Preliminary versions of this work were presented at the VA HSR&D Annual Meeting (2004, 2006) and the Society for General Internal Medicine (2003).
Disclaimers: Views expressed in this article are those of the authors and do not necessarily represent the views of the Department of Veterans Affairs.
The following supplementary material for this article is available online:
Appendix SA1. Author matrix.
This material is available as part of the online article from http://www.blackwell-synergy.com/doi/abs/10.1111/j.1475-6773.2008.00865.x (this link will take you to the article abstract).
Please note: Blackwell Publishing is not responsible for the content or functionality of any supplementary materials supplied by the authors. Any queries (other than missing material) should be directed to the corresponding author for the article.