Transl Behav Med. 2016 June; 6(2): 202–211.
Published online 2016 February 22. doi: 10.1007/s13142-016-0389-5
PMCID: PMC4927452

Implementation of collaborative goal setting for diabetes in community primary care

Andrea S. Wallace, PhD, RN (corresponding author), Yelena Perkhounkova, PhD, Andrew L. Sussman, PhD, MCRP, Maria Hein, MSW, Sophia Jihey Chung, PhD, RN, and Toni Tripp-Reimer, PhD, RN, FAAN

Abstract

Collaborative goal setting (CGS) is a cornerstone of diabetes self-management support, but little is known about its feasibility and effectiveness during routine care. The aim of this study was to evaluate the implementation of an existing CGS intervention when integrated by primary care staff. Using a mixed-methods approach guided by the RE-AIM framework, intervention adoption, implementation, reach, and effectiveness were evaluated over 12 months. Three of four sites adopted the CGS intervention, in which 521 patients with type 2 diabetes (9–29 % of those targeted) received CGS. For those with suboptimal glycemic control (A1C ≥ 7.5 %), %A1C decreased by 1.1 for those receiving CGS (n = 204, p < 0.001) compared to 0.4 for a group who did not (n = 41, p = 0.23). Practice characteristics influenced adoption and implementation, while isolation of CGS from the remainder of clinical care likely influenced reach and effectiveness. CGS may benefit patients with diabetes, but a lack of integration by practice staff is a key barrier to overcome during implementation.

Keywords: Diabetes mellitus, Self-management, Implementation research, Pragmatic designs

BACKGROUND

Collaborative goal setting (CGS) between health-care providers and patients has been proposed as a strategy for providing diabetes self-management support in primary care [1, 2]. Because of its demonstrated effectiveness in increasing patients’ self-efficacy and motivation [3–5], CGS has been proposed as a measure of clinical quality [6, 7] and is a component of the popular Chronic Care Model [8] and a part of Patient-Centered Medical Home (PCMH) certification criteria [9]. However, goal setting and follow-up support activities are seldom reported in primary care [10–12], and it is not well understood how CGS can be feasibly, effectively, and sustainably integrated into busy primary care practice settings.

Based on formative work in 2010 with Federally Qualified Community Health Centers (FQHCs) across the state of Iowa, we identified a need for means to improve the quality of diabetes self-management support provided to patients. Subsequently, with input from Community Health Center (CHC) leadership, the living with diabetes (LWD) intervention was identified from among interventions incorporating CGS as potentially feasible to implement into existing practice structures. Developed by a national team through an intensive, participatory process, the purpose of the LWD intervention is to offer a way of promoting diabetes self-management across all literacy groups [13]. The LWD intervention includes (a) a brief process for engaging in CGS by assisting patients in developing short-term, explicit, and attainable goals for health behavior change and (b) a low-literacy patient education book (Diabetes Guide) that uses plain language and photographs to communicate diabetes-related concepts and to reinforce the goal setting process. In a feasibility study conducted in three academic medicine clinics, non-clinician research staff engaged in CGS using the Diabetes Guide in an initial in-person session and by phone at 2, 4, and 12 weeks. The intervention resulted in improved patient-reported outcomes (e.g., self-efficacy, self-care) [14, 15]. The Diabetes Guide (available in English, Spanish, and Chinese) and a written pamphlet and brief video for health-care providers about how to engage in CGS using the Diabetes Guide are publicly available through the American College of Physicians (http://www.acponline.org/).

Early engagement with community practice sites during study planning uncovered a great deal of enthusiasm for trying the LWD intervention as a means of facilitating CGS. However, different interests on the part of clinical staff and clinical researchers emerged: practice managers and clinicians sought to quickly and efficiently improve the quality of care provided to patients within the logistical and financial contexts of their practice sites, and academic researchers sought to conduct a rigorous study regarding the effectiveness of the LWD intervention in improving diabetes outcomes. While the LWD intervention was designed with feasibility and adaptability in mind (i.e., can be conducted by non-clinicians and involves 10–15 min per contact), there is little evidence about its use and effectiveness in improving patient outcomes when incorporated into practice structures without research-related resources. In fact, unpublished evaluation data collected in 2010 suggested that those ordering the LWD materials were not using the Diabetes Guide to facilitate and reinforce CGS with patients but were, rather, using it as a simple, one-time, patient education handout. As a result, practice partners repeatedly communicated a need to focus on means for implementing the LWD intervention before investing in an experimental trial.

Despite the widespread support for CGS as a strategy for providing diabetes self-management support in primary care, evidence has focused largely on its use as part of group medical visits [16], comprehensive diabetes self-management classes [17], or counseling provided by certified diabetes educators [18]. Only one randomized controlled trial, using the LWD intervention, has examined implementation and effectiveness of CGS in individual, primary care sessions [19]. The study’s findings generally favored conducting CGS via telephone with contracted “health coaches” outside practice workflow: this appeared to be the more reliable means of delivering the intervention (reflected by a greater number of contacts with patients and a greater recall of contacts) and resulted in greater improvement in patient outcomes. However, randomization to the intervention by practice site resulted in baseline group differences; thus, differences in patient outcomes might be explained by contextual variables such as differences in practice supports, diabetes guideline adherence, or access to care. Further, recent studies have reported high dropout rates and patient distrust of health behavior interventions delivered entirely by telephone [20], creating additional confusion among the clinical research teams regarding the best strategy for implementation (e.g., counseling by primary care provider (PCP) vs. other clinical staff, conducted during or outside usual clinical workflow) and suggesting a need for more thoroughly examining causal mechanisms that may underlie the effectiveness of CGS in primary care.

PURPOSE

In order to better identify CGS strategies that can be feasibly implemented into routine care delivery as a precursor to testing larger-scale effectiveness, the purpose of this study was to assess the implementation processes of the LWD intervention when introduced in real-world primary care conditions.

METHODS

We used a quasi-experimental, mixed-methods approach proposed to speed translation of health-care research into practice settings [21, 22] in four FQHCs serving medically underrepresented and underserved patient populations. Community health centers (CHCs), which provide comprehensive primary and preventive care regardless of patients’ ability to pay, have a strong focus on quality improvement and have adopted a standard system to collect and report quality measures for prevalent conditions among underserved communities [23]. Participating CHCs were those that volunteered, had existing relationships with the Institute for Clinical and Translational Science at the University of Iowa, and represented a geographically diverse sample.

We introduced the LWD intervention to participating CHCs with Web-based instruction about how to use the intervention and evaluated it using the popular RE-AIM (reach, effectiveness, adoption, implementation, maintenance) model. The RE-AIM model, which has been widely applied in the evaluation and planning of clinical and public health interventions, has proven particularly helpful in understanding the external validity of behavioral interventions targeting multiple social and organizational levels (i.e., patient, provider, site, community) [24–26]. Data collection and analysis in this study were organized by the five criteria proposed in the RE-AIM model: adoption (the intervention agents willing to initiate a program), implementation (adherence to the various elements of an intervention’s protocol), reach (number, proportion, and representativeness of participants), efficacy/effectiveness (impact of an intervention on important outcomes), and maintenance (institutionalization of a program or intervention) [27, 28]. See Fig. 1 for an overview of the study using the expanded CONSORT model linking the RE-AIM framework, data sources, and measures used [21, 29]. The University of Iowa Institutional Review Board approved all aspects of the study.

Fig. 1
Expanded CONSORT model (columns: RE-AIM issue, content, data source)

Implementation resources

In a preliminary study with participating CHCs, described elsewhere [30], we applied a participatory process to identify resources to support integration of the LWD intervention into the workflow of primary care settings. The resulting Web-based materials include (a) the background and purpose of the LWD intervention; (b) clear instruction in how the LWD intervention was designed to be introduced during an initial in-clinic goal setting session with follow-up via telephone at 2 and 4 weeks; (c) interactive, time-limited training modules for health-care providers (e.g., physicians, nurses, medical assistants) giving specific strategies for goal setting using the Diabetes Guide; (d) instruction in basic principles for practice change for those responsible for planning and carrying out the goal setting intervention; and (e) information on how to use and troubleshoot the interactive trainings.

Procedures, sources, and measures

Data were collected through (a) structured surveys with clinicians, (b) open-ended surveys of program champions, (c) focus group interviews with clinical staff and program champions, (d) disease registries, (e) clinical encounter records, and (f) electronic health records. To aid interpretation, procedures, sources, and measures are presented sequentially and linked to RE-AIM criteria (see Table 1).

Table 1
Participants across all sites and measures by RE-AIM criteria

Clinician surveys (February 2012)

Six months before intervention use, an e-mail was distributed to providers and staff involved in diabetes care asking them to complete the well-validated and widely used Assessment of Chronic Illness Care (ACIC), a 28-item survey that evaluates clinical structures and processes supporting effective chronic illness care [8, 31]. A total of 23 providers and staff from the four sites completed baseline ACIC surveys, representing a 50–72 % response rate among targeted providers and staff in each site (Table 1). Related to the RE-AIM model, clinician surveys provided evidence of adoption.

Longitudinal patient sample (RE-AIM reach and effectiveness, March–May 2012)

Patients across the four sites meeting eligibility criteria for inclusion (type 2 diabetes, English or Spanish speaking, seen for at least 1 year in the practice, ability to be reached by telephone) were approached and recruited during routine clinical visits (n = 99). All participants consented to (a) contacts at 3, 6, and 9 months after intervention implementation and (b) medical record review.

The reason for this longitudinal sample was to understand the effect of the intervention under real-world conditions, so no attempt at randomization was made [29]. Rather, recruiting a sample of patients and following them during the implementation period was meant to provide additional estimates of reach, and access to a comparison group exposed to similar clinical conditions (i.e., practice context and medical management), but not necessarily to the intervention over the 9-month study period. In other words, related to the RE-AIM model, data from the longitudinal patient sample provided evidence of intervention reach (receipt of intervention) and effectiveness (comparison of changes in glycemic control for those not receiving the intervention vs. those who did).

Program champion surveys (June 2012)

Once the hard copy Diabetes Guide and newly developed Web-based resources were in place, each practice site was asked to identify a “program champion” to facilitate the use of the intervention (see Table 1).

After being introduced to the LWD intervention and associated Web-based implementation resources but prior to initiating the intervention in the clinic setting, program champions were asked to complete an 8-item, open-ended survey focusing on anticipated barriers and facilitators. Given the small number of program champions, answers were aggregated and summarized for confidentiality. Related to the RE-AIM model, program champion surveys provided evidence of adoption.

Intervention implementation planning (June–July 2012)

Once program champions were trained in the intervention, they were asked, as a step in the implementation plan, to use the Web-based introductions and trainings to introduce and facilitate group presentations to all providers and practice staff regarding the intervention.

Intervention implementation (August 2012)

Because understanding the implementation process and external validity of the intervention was the primary concern of this study, details of how to implement goal setting and follow-up were left entirely to the practice sites (other than offering the newly developed Web-based implementation resources and the hard copy Diabetes Guides). Each site was asked to use the implementation resources to facilitate group presentations to all providers and practice staff and to implement the intervention in August 2012.

Clinician and staff focus groups (April 2013)

To gain detailed information about implementation, 8 months after clinics began using the intervention with patients, all practice staff involved in the intervention were invited to attend focus group sessions held in the three participating sites. The 1-h discussions with 24 staff members (see Table 1) across the three sites were digitally audio-recorded, transcribed, and reviewed by the research team. Related to the RE-AIM model, clinician and staff focus groups provided details of implementation.

Disease registries, clinical encounter, and electronic health records (June 2013)

The use of intervention materials, goals, and follow-up was tracked through clinic-based disease registries used by those conducting the intervention, which, related to the RE-AIM model, provided evidence of intervention reach and effectiveness. Because the targeted population was patients with type 2 diabetes, de-identified clinical encounter data were then used to estimate the total number of patients with a known diagnosis of type 2 diabetes seen at least once during the first 9 months after implementation of the LWD intervention. Finally, A1Cs were abstracted from electronic health records data for two groups of patients: (a) the longitudinal sample (n = 99, 16 of whom received the intervention and 83 of whom did not) consented before clinic-wide implementation and followed for 1 year and (b) all other patients (n = 505) who received the intervention during the 9-month implementation period.

Intervention reach was estimated using two methods. First, using disease registry data (which provided the number of patients receiving the intervention) and encounter data (which provided estimates of the total number of patients with type 2 diabetes seen during the first 9 months after implementation at each site), the number of patients receiving the intervention was divided by the number of all patients with type 2 diabetes seen during that period. Second, consented patients (i.e., the longitudinal sample) were contacted at 3, 6, and 9 months after clinic-wide implementation and asked whether they had received the Diabetes Guide during a recent primary care visit.
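
As a rough illustration of the first method, the sketch below divides a registry count of intervention recipients by an encounter-based count of patients with type 2 diabetes seen during the period. The site counts shown are hypothetical placeholders rather than the study's values, and the same arithmetic could just as easily be done in a spreadsheet, as the authors describe.

/* Minimal sketch of the first reach estimate: patients receiving the
   intervention (disease registry) divided by all patients with type 2
   diabetes seen during the 9-month period (encounter data). The counts
   below are hypothetical placeholders, not the study's values. */
data reach;
  input site $ n_received n_t2dm_seen;
  reach_pct = 100 * n_received / n_t2dm_seen;   /* reach as a percentage */
  datalines;
A 150 520
B 90 700
C 60 680
;
run;

proc print data=reach noobs;
run;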

Because clinical encounter data (i.e., the total number of patients seen during the 9-month period) resulted only in de-identified counts, these data could not be linked to individual changes in A1C. As such, estimates of intervention effectiveness relied on secondary analyses of electronic health records data from both the longitudinal sample of patients (some of whom received the intervention and some of whom did not) and all patients who received the intervention during the 9-month implementation period. A1Cs recorded 12 months prior to receiving the patient materials and setting an initial goal and the last recorded A1C between 3 and 9 months after receiving the materials were abstracted for each patient. For those who did not receive the intervention, A1Cs recorded prior to clinic-wide implementation (August 1, 2012) and the last recorded A1C between 3 and 9 months after clinic-wide implementation were used.

The selected time ranges for these data collections were informed by the nature of the A1C laboratory value, which estimates average blood glucose over the preceding 3 months, and by clinical guidelines recommending that A1Cs be assessed quarterly in patients with suboptimal control [32]. Further, because clinical guidelines currently recommend targeting A1Cs of ≤6.5–7.0 % and because practices focus on patients with suboptimal control, analyses were limited to patients with baseline A1Cs ≥ 7.5 % and complete data for the two time periods.
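
A hypothetical sketch of this abstraction step is shown below. The dataset and variable names (a1c_labs, patient_id, index_date, baseline_a1c, lab_date, a1c) are assumptions made for illustration, not the study's actual data structures.

/* Hypothetical sketch: for each patient with baseline A1C >= 7.5 %, keep
   the last A1C recorded 3-9 months after the index date (the date the
   patient received the materials and set an initial goal, or August 1,
   2012 for comparison patients). Dataset and variable names are assumed. */
proc sql;
  create table followup_a1c as
  select patient_id, a1c as followup_a1c, lab_date
  from a1c_labs
  where baseline_a1c >= 7.5
    and lab_date between intnx('month', index_date, 3, 'same')
                     and intnx('month', index_date, 9, 'same')
  group by patient_id
  having lab_date = max(lab_date);   /* last qualifying A1C per patient */
quit;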

Follow-up contact with practice leaders (February 2014)

Maintenance of the intervention was assessed through contact with program champions and practice leadership, asking if they continued conducting CGS with patients using the Diabetes Guide. Related to the RE-AIM model, information from practice leaders provided evidence of intervention maintenance.

Data analysis and integration

Findings from clinician and program champion surveys and from focus group transcripts were analyzed for content and summarized for each practice site. Next, themes across sites were compared and contrasted while taking into consideration data regarding reach and patient outcomes. We organized our analysis and findings in accordance with the RE-AIM framework, grouping both descriptive features of each site and the adoption and implementation elements identified as most salient.

MS Excel was used for simple calculations of the encounter estimates provided by practice sites and of the number of consented patients reporting receipt of intervention materials during the 9-month implementation period. SAS 9.3 was used for statistical analyses related to estimates of intervention effectiveness. Descriptive statistics were calculated, and the intervention and comparison groups were compared at baseline on age, BMI, and A1C using independent-samples t tests.
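
A minimal sketch of the baseline comparison, assuming a one-row-per-patient dataset with hypothetical names, is shown below; PROC TTEST reports both the pooled and the Satterthwaite (unequal-variances) results, the latter being the form cited later for the subgroup with unequal variances.

/* Baseline comparison of intervention vs. comparison groups on age, BMI,
   and A1C with independent-samples t tests. Dataset and variable names
   are hypothetical. */
proc ttest data=baseline;
  class group;          /* intervention vs. comparison */
  var age bmi a1c;
run;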

To examine the effect of the intervention on A1C, the SAS procedure MIXED was used to fit a linear mixed model (LMM) to A1C, after evaluating its distribution for normality and homogeneity of variance. The LMM approach was chosen because it (a) uses all available data from repeated measurements on the same person to provide more precise estimates of the effects [33, 34] and (b) provides valid estimates and tests under the assumption that the data are missing at random [35]. A residual maximum likelihood (REML) approach was used to estimate parameters, with Kenward and Roger’s adjustment to standard errors [36]. Group (intervention vs. comparison), time (before and after intervention), and interaction between group and time were fitted as fixed effects. The primary test for this analysis was the test of the interaction between group and time. A significant interaction would suggest that groups changed differently from pre- to post-intervention, regardless of baseline differences. Pre- to post-intervention changes in A1C were tested using parameter estimates from the fitted LMM. For the tests of interaction, α = 0.10 was used, and for comparisons of means, α = 0.05 was used.
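
A minimal sketch of how such a model could be specified in SAS PROC MIXED is shown below, assuming a long-format dataset (one row per patient per time point) with hypothetical variable names; it illustrates the approach described rather than reproducing the authors' code.

/* Linear mixed model for A1C with fixed effects for group, time, and
   their interaction, REML estimation, Kenward-Roger standard errors, and
   an unstructured covariance for the two repeated measures per patient.
   Dataset and variable names are hypothetical. */
proc mixed data=a1c_long method=reml;
  class group time patient_id;
  model a1c = group time group*time / solution ddfm=kr;
  repeated time / subject=patient_id type=un;
  lsmeans group*time / diff;   /* pairwise differences, including pre- to
                                  post-intervention change within each group */
run;

With this parameterization, the Type 3 F test for group*time corresponds to the group-by-time interaction test described above, and the LSMEANS differences provide the within-group pre- to post-intervention changes.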

RESULTS

Adoption

Three of the four CHCs initially engaged in the study moved forward with implementation. Site D was unable to identify a program champion, citing competing priorities and staff turnover as reasons for not implementing the intervention. The ACIC scores from sites A, B, and C indicated that respondents felt there was “reasonably good support for chronic illness care.” In contrast, participants from the site opting not to move forward with implementation (site D) indicated having “basic support for chronic illness care.”

Program champions were all clinical care providers (PharmD, ARNP, MD) with a wide range (3 months to 17 years) of experience at each site. All indicated having an average to above average comfort level in practice change strategies. All three sites opting to move forward with the intervention had gained recognition as a PCMH by the National Committee for Quality Assurance during the year prior to implementing the LWD intervention and acknowledged that the intervention helped maintain PCMH criteria.

All program champions identified that a major competing priority for implementing the intervention was the transition to electronic health records. They anticipated a lack of understanding of how the intervention fit the needs of the practice and, as a result, lack of support for time and resources that would need to be dedicated to carry out the intervention. They believed that these could be overcome by the practice focusing efforts on the clinical aspects associated with the intervention (vs. data entry) and a commitment on the part of the practice administration and research team to maintain access to the intervention materials (i.e., Diabetes Guide and online trainings). They felt that the one factor that would assist with this process was enthusiasm for a straightforward, uniform intervention to facilitate behavior change.

Implementation

See Table 2 for an overview of the similarities and differences in implementation pathways among the three practice sites.

Table 2
Similarities and differences in the implementation pathways among practice sites

Two primary models for goal setting and follow-up with patients emerged from the three sites, driven by roles already created in the practices. One model relied on expanding the role and responsibilities of clinic nurses (RNs) who had contact with the entire population of patients with diabetes (site A); the other relied on case managers (i.e., RNs, PharmDs, certified health educators) whose roles focused on providing additional education and support to patients identified as high risk by PCPs (sites B and C).

Site A identified clinic nurses as those who would deliver goal setting and follow-up support and introduced the intervention to them at a group meeting. Each nurse at site A was assigned to work with two providers on a regular basis and therefore came to know patients over time. Each nurse was instructed to give the Diabetes Guide to patients with diabetes, to explain its purpose, and to negotiate a goal. If patients were unable or unwilling to set a goal that day, nurses would follow up by phone later in the week. For patients who set a goal, nurses would call after 1 to 2 weeks and assess progress. If patients said they had completed the goal, nurses would sign off on the charts. For patients not completing a goal, the nurse would offer encouragement, problem solve, and offer another session.

In contrast, sites B and C relied on health professionals (PharmDs, RNs) already in the role of care managers/diabetes educators to conduct goal setting and follow-up support. While there were some differences in the roles and responsibilities of the care managers/diabetes educators across the two sites, patients were referred to the care managers by PCPs if they were at risk for developing diabetes, newly diagnosed, poorly controlled, or in need of additional education or assistance. In both sites, the intervention was introduced during meetings for the larger community of providers, and the care managers/diabetes educators completed the online training individually. Site B asked that the Diabetes Guide be used with all patients with diabetes in contact with care managers/diabetes educators. At site C, the use of the Diabetes Guide and associated goal setting with a patient was left up to the individual care manager/diabetes educator. Both sites asked that care managers/diabetes educators conduct a minimum of two follow-up calls, at 2 and 4 weeks after the initial goal setting session.

All three sites tracked goals through templates in disease registries and linked them with electronic health records. Focus group participants delivering the intervention to patients (i.e., nurses, care managers/diabetes educators) reported communicating patients’ goals to primary care providers through the electronic health records (site A) or in person (sites B and C). However, primary care providers in all three sites, while acknowledging they valued the counseling and were willing to reinforce messages, had minimal recollection regarding patient goals.

Reach

Based on the number of patients with a diagnosis of type 2 diabetes seen in each site during the same time period, the reach of the intervention ranged from 9 to 29 % of patients potentially benefitting from the intervention. These estimates using encounter data were confirmed by the longitudinal sample, all of whom had been seen over the course of the study period, in which 16 of the 69 patients (23 %) from the three implementing sites received the intervention. See Table 3 for the estimates of reach by site characteristics and implementation pathway used.

Table 3
Characteristics of practice sites, model adopted, and estimated reach among patients with type 2 diabetes mellitus (T2DM) with suboptimal glycemic control seen during implementation period

Effectiveness

Table 4 shows the mean age, BMI, and A1C at baseline, and the changes in A1C from before to after the intervention, for all patients who received the intervention and a comparison group composed of those in the longitudinal sample who did not receive the intervention. The intervention and comparison groups overall were similar at baseline with respect to age, BMI, and A1C. However, when limited to patients with A1C ≥ 7.5 %, the mean A1C was higher for the intervention group than for the comparison group (A1C = 10.0 %, SD = 2.1, vs. A1C = 9.3 %, SD = 1.3; t = −3.04, df = 88, using Satterthwaite’s approximation due to unequal variances, p = 0.003).

Table 4
Patient characteristics, mean %A1C change, and % with clinically important (≥0.5) A1C reduction in intervention and comparison groups

The interaction between group and time was significant at α = 0.10 for all sites combined (p = 0.06), suggesting that the groups changed differently rather than the change being due solely to differences at baseline among those with inadequate glycemic control. The effect of time was significant for the intervention group (A1C decreased by 1.1 % from a baseline of 10.0 %, SD = 2.4, p < 0.001) but not for the comparison group (A1C decreased by 0.4 % from a baseline of 9.3 %, SD = 1.8, p = 0.23). At the site level, only sites A and C had data for both intervention and comparison groups. The interactions between group and time were not significant at these sites, due to the smaller samples compared to all sites combined (p = 0.26 and p = 0.28, respectively). However, at both sites, A1C significantly decreased for the intervention groups (p = 0.003 and p < 0.001) but not for the comparison groups (p = 0.94 and p = 0.08). At site B, which had only an intervention group, A1C also decreased significantly (p = 0.001).

Because mean changes in A1C ranged from negative to positive, and standard deviations for the difference scores were greater than the mean differences (indicating heterogeneity of change), the percentage of patients with clinically important (≥0.5 %) reductions in A1C was calculated for the intervention and comparison groups. Table 4 shows variability among sites, but a greater proportion of patients in the intervention group experienced a 0.5 % or greater improvement in %A1C.

Maintenance

According to both program champions and administrators, all three sites adopting the intervention wished to continue using the LWD intervention. However, in all sites, staff turnover had begun to limit its use. Particularly at practice site A, where an enthusiastic nurse manager had left the practice, nurses and administrators were skeptical about being able to maintain the LWD intervention despite support for doing so. Further, two practices reported that, while they continued conducting collaborative goal setting with patients, the cost of the Diabetes Guide had started to serve as a barrier to its continued use, and there was no formal strategy for conducting collaborative goal setting without the LWD intervention.

CONCLUSION

Our results demonstrate that, while collaborative goal setting using an existing health literacy diabetes intervention can be successfully delivered by practice staff and seems to benefit patients with suboptimal glycemic control, it is unlikely to reach the majority of patients with diabetes when introduced without additional resources. Even though practice sites used different tactics to implement the intervention, it reached only a minority of patients and the lack of integration of the intervention into the remainder of clinical care may further limit its effectiveness and maintenance. Data from multiple stakeholders uncovered themes among the practice sites, suggesting that future research activities should focus on establishing the processes related to, and effectiveness of, outside resources aiming to provide behavior change counseling and improve the quality of self-management support offered to patients with diabetes.

At baseline, the decision to adopt the intervention was influenced by practice sites viewing the goal setting and follow-up support prescribed by the intervention as a means of meeting requirements for National Committee for Quality Assurance (NCQA)-PCMH recognition. In this way, it seems that these policy initiatives may be successful drivers of practice change related to diabetes self-management support. In addition, the baseline ACIC survey of PCPs and practice staff found that the site not moving forward with implementation was the one with the fewest resources in place to deliver chronic illness care. These findings confirm the work of others who have recently identified the importance of incentives, resources, and priorities as necessary elements in the process of engaging practices in improvement efforts [37] and suggest that assessing and supporting infrastructure may be an important, but often overlooked, first step toward implementing and sustaining new interventions.

Decisions regarding how to implement the LWD intervention were influenced by a desire to integrate activities within existing roles, from which two primary models emerged for establishing and following up on patient goals: one added goal setting and follow-up support to the responsibilities of clinic nurses who remained in their usual ambulatory clinic roles (site A), and one incorporated nurses, pharmacists, and health educators already in roles responsible for providing ongoing care to high-risk patients referred to them by PCPs (sites B and C). However, these two implementation strategies potentially explain differences in the reach of the intervention. In site A, where clinic RNs came in contact with all patients seen by the PCPs, the intervention reached 29 % of patients targeted for the intervention during the 9-month period evaluated, while site B reached 13 % and site C reached 9 % of patients targeted to receive the intervention. Having individual care managers/health educators in sites B and C pre-determine who may or may not benefit from the intervention among a subset of diabetes patients already labeled as high risk may have further restricted its reach in these practice settings. In addition, this difference in selection may explain the higher baseline A1Cs among patients with less than optimal glycemic control in the effectiveness analyses. However, it is currently unknown whether this form of tailoring may be an appropriate use of resources, potentially resulting in less reach but greater impact among patients with already poorer diabetes outcomes.

In this study, the use of electronic health records did not result in a coordinated approach involving primary care providers but, rather, in an isolated activity on the part of those setting goals with patients: although PCPs expressed enthusiasm for reinforcing LWD goals, clinical notes in the EHR were insufficient to inform and activate them. These findings suggest that, even in environments actively adopting new care delivery models such as those envisioned by the NCQA, significant work may need to be done to coordinate communication among all members of the health-care team. Further, given recent evidence that brief counseling by PCPs leads to health behavior changes among patients with pre-diabetes [38, 39], the inability of practice sites to develop channels for communicating patients’ health goals to PCPs, and to develop means for having PCPs reinforce goals with patients, may impact effectiveness and suggests an area for further research.

Finally, and perhaps most importantly, the limited reach of our intervention across the three clinic sites that used different implementation models suggests the continued need to explore and compare alternative methods of delivering diabetes self-management support. Others attempting to implement collaborative goal setting with the LWD intervention have concluded that diabetes self-management support may be most reliably delivered via telephone, with resources outside the practice structure [19, 40]. However, there is scant evidence about how telephone-based approaches using staff outside the competing demands of busy primary care practice settings compare in terms of patient receptivity and effectiveness. Models that attempt to strike a balance between integration within practice settings and feasibility of delivery, such as call centers with structured communication with PCPs, ought to be explored in future research. In addition, research related to ongoing diabetes self-management support needs to be placed in the context of reimbursement models, without which proactive care management and coordination like the CGS model described cannot be sustained.

This study is limited in several ways. First, because this was a trial emphasizing implementation in a real-world context, patients were not randomized to the non-intervention group and intervention fidelity was not measured. Further, incomplete medical records data resulted in an inability to test for statistical significance across all sites, and incomplete demographic data from unconsented patients did not allow for a thorough comparison between those who did and did not receive the intervention. Our experiences serve as an important reminder that, even in systems with electronic health records, missing clinical, laboratory, and/or demographic data continue to be a challenge in clinical research and may serve as an area for future work with CHCs. Related to the implementation process, our assessment focused on the earliest stages of implementation, so it is unclear how practice context and processes may affect long-term maintenance. Finally, it is unclear whether the experiences of these practice sites, and those working within them, are generalizable to the larger population of CHCs. Because all three settings were CHCs serving vulnerable patient populations, these findings may not be generalizable to other types of primary care settings.

Data collected during this study provide important information to better guide more rigorous effectiveness studies focusing on translation to practice. Our analyses suggest that those receiving the intervention changed differently from those who did not, and the fact that all patients included in the analyses were seen in the practice sites during the study period eliminates lack of follow-up as an explanation of group differences. The overall benefit of the intervention and the consistency of themes across sites both confirm and expand on the work of others who have sought to identify barriers to effective diabetes management in primary care [41], suggesting that competing demands, communication gaps between practice staff, and role integration for those assuming expanded responsibilities for counseling and educating patients with diabetes are important areas to address in efforts to implement diabetes-related behavioral counseling in primary care.

Acknowledgments

The authors would like to thank their community-based partners, Bery Engebretsen, Barbara Ericson, Mary Venteicher, Emily Garcia, Chris Espersen, and Sachin Bagade for supporting the work described, Darren DeWalt, Brian Mittman, Edith Parker, and Gary Rosenthal for the expertise they lent to this project, Hilary Seligman and Liana Castel for reviewing the manuscript, Tyler Goss and Andrea Mulhausen-Johnson for assisting with data collection, Linda Curran for her editorial assistance, and the clinicians and staff who graciously participated.

Support for this study was provided by the Robert Wood Johnson Foundation, Nurse Faculty Scholars Program (no. 68031; A. Wallace, P.I.), and by the University of Iowa’s Institute for Clinical and Translational Science (National Center for Research Resources no. UL1RR024979).

Compliance with ethical standards

Conflict of interest

The authors whose names are listed have no affiliations with or involvement in any organization or entity with any financial interest in the subject matter or materials discussed in this manuscript.

Adherence to ethical principles

All procedures were conducted in accordance with the study protocol approved by the University of Iowa Institutional Review Board.

Footnotes

Implications

Practice: Collaborative goal setting is unlikely to reach the majority of diabetes patients when implemented into existing clinical structures by primary care staff.

Research: Future investigations into the effectiveness of collaborative goal setting as a means of supporting the self-management efforts of patients with diabetes ought to focus on interventions that can feasibly and sustainably integrate resources outside routine primary care service delivery.

Policy: Although collaborative goal setting is a quality indicator for self-management support provided to patients with diabetes and has the potential for improving patient outcomes, it needs to be supported by intensive efforts and outside resources that go beyond what can be feasibly delivered during routine primary care service delivery.

Andrea S. Wallace is an Assistant Professor at the University of Iowa College of Nursing.

Yelena Perkhounkova is a Statistician/Biostat Manager at the University of Iowa College of Nursing.

Andrew L. Sussman is an Assistant Professor at the University of New Mexico School of Medicine.

Maria Hein is a Research Associate and Data Manager at the University of Iowa College of Nursing.

Sophia Jihey Chung is a Graduate Research Assistant at the Department of Nursing, University of Ulsan Daehakro 93, Nam-Gu, Ulsan, South Korea, 44610.

Toni Tripp-Reimer is a Professor and Senior Advisor to the Dean at the University of Iowa College of Nursing.

References

1. Bodenheimer T, Handley MA. Goal-setting for behavior change in primary care: an exploration and status report. Patient Educ Couns. 2009;76:174–180. doi: 10.1016/j.pec.2009.06.001.
2. MacGregor K, Handley M, Wong S, et al. Behavior-change action plans in primary care: a feasibility study of clinicians. J Am Board Fam Med. 2006;19:215–223. doi: 10.3122/jabfm.19.3.215.
3. Estabrooks PA, Nelson CC, Xu S, et al. The frequency and behavioral outcomes of goal choices in the self-management of diabetes. Diabetes Educ. 2005;31:391–400. doi: 10.1177/0145721705276578.
4. Lorig K. Action planning: a call to action. J Am Board Fam Med. 2006;19:324–325. doi: 10.3122/jabfm.19.3.324.
5. Marks R, Allegrante JP, Lorig K. A review and synthesis of research evidence for self-efficacy-enhancing interventions for reducing chronic disability: implications for health education practice (part II). Health Promot Pract. 2005;6:148–156. doi: 10.1177/1524839904266792.
6. Haas L, Maryniuk M, Beck J, et al. National standards for diabetes self-management education and support. Diabetes Care. 2013;36(Suppl 1):S100–S108. doi: 10.2337/dc13-S100.
7. Institute for Healthcare Improvement. Self-management support for people with chronic conditions. Available at: http://www.ihi.org/knowledge/Pages/Changes/SelfManagement.aspx. Accessibility verified September 4, 2015.
8. Wagner EH, Austin BT, Davis C, Hindmarsh M, Schaefer J, Bonomi A. Improving chronic illness care: translating evidence into action. Health Aff. 2001;20:64–78. doi: 10.1377/hlthaff.20.6.64.
9. Stange KC, Nutting PA, Miller WL, et al. Defining and measuring the patient-centered medical home. J Gen Intern Med. 2010;25:601–612. doi: 10.1007/s11606-010-1291-3.
10. Duke SA, Colagiuri S, Colagiuri R. Individual patient education for people with type 2 diabetes mellitus. Cochrane Database Syst Rev. 2009;21(1):CD005268.
11. Glasgow RE, Whitesides H, Nelson CC, King DK. Use of the Patient Assessment of Chronic Illness Care (PACIC) with diabetic patients: relationship to patient characteristics, receipt of care, and self-management. Diabetes Care. 2005;28:2655–2661. doi: 10.2337/diacare.28.11.2655.
12. Wallace AS, Carlson JR, Malone RM, Joyner J, DeWalt DA. The influence of literacy on patient-reported experiences of diabetes self-management support. Nurs Res. 2010;59:356–363. doi: 10.1097/NNR.0b013e3181ef3025.
13. Seligman HK, Wallace AS, DeWalt DA, et al. Facilitating behavior change with low-literacy patient education materials. Am J Health Behav. 2007;31(Suppl 1):S69–S78. doi: 10.5993/AJHB.31.s1.9.
14. DeWalt DA, Davis TC, Wallace AS, et al. Goal setting in diabetes self-management: taking the baby steps to success. Patient Educ Couns. 2009;77:218–223. doi: 10.1016/j.pec.2009.03.012.
15. Wallace AS, Seligman HK, Davis TC, et al. Literacy-appropriate educational materials and brief counseling improve diabetes self-management. Patient Educ Couns. 2009;75:328–333. doi: 10.1016/j.pec.2008.12.017.
16. Naik AD, Palmer N, Petersen NJ, et al. Comparative effectiveness of goal setting in diabetes mellitus group clinics. Arch Intern Med. 2011;171:453–459. doi: 10.1001/archinternmed.2011.70.
17. Gonzales R, Handley MA. Improving glycemic control when “usual” diabetes care is not enough. Arch Intern Med. 2011;171:1999–2000. doi: 10.1001/archinternmed.2011.496.
18. Weinger K, Beverly EA, Lee Y, et al. The effect of a structured behavioral intervention on poorly controlled diabetes: a randomized controlled trial. Arch Intern Med. 2011;171:1990–1999. doi: 10.1001/archinternmed.2011.502.
19. Wolf MS, Seligman H, Davis TC, et al. Clinic-based versus outsourced implementation of a diabetes health literacy intervention. J Gen Intern Med. 2014;29:59–67. doi: 10.1007/s11606-013-2582-2.
20. Jubelt LE, Volpp KG, Gatto DE, Friedman JY, Shea JA. A qualitative evaluation of patient-perceived benefits and barriers to participation in a telephone care management program. Am J Health Promot. 2015;30:117–119. doi: 10.4278/ajhp.131203-ARB-610.
21. Kessler R, Glasgow RE. A proposal to speed translation of healthcare research into practice: dramatic change is needed. Am J Prev Med. 2011;40:637–644. doi: 10.1016/j.amepre.2011.02.023.
22. Klesges LM, Estabrooks PA, Glasgow RE, Dzewaltowski D. Beginning with the application in mind: designing and planning health behavior change interventions to enhance dissemination. Ann Behav Med. 2005;29(2):66–75. doi: 10.1207/s15324796abm2902s_10.
23. National Association of Community Health Centers. Research and data. Available at: http://www.nachc.com/research-data.cfm. Accessibility verified September 4, 2015.
24. Glasgow RE, Green LW, Klesges LM, et al. External validity: we need to do more. Ann Behav Med. 2006;31(2):105–108. doi: 10.1207/s15324796abm3102_1.
25. Glasgow RE, Lichtenstein E, Marcus AC. Why don’t we see more translation of health promotion research to practice? Rethinking the efficacy to effectiveness transition. Am J Public Health. 2003;93:1261–1267. doi: 10.2105/AJPH.93.8.1261.
26. Glasgow RE. Translating research to practice: lessons learned, areas for improvement, and future directions. Diabetes Care. 2003;26:2451–2456. doi: 10.2337/diacare.26.8.2451.
27. Glasgow RE, Vogt TM, Boles SM. Evaluating the public health impact of health promotion interventions: the RE-AIM framework. Am J Public Health. 1999;89:1322–1327. doi: 10.2105/AJPH.89.9.1322.
28. Virginia Polytechnic Institute and State University. What is RE-AIM? Available at: http://www.re-aim.hnfe.vt.edu/about_re-aim/what_is_re-aim/index.html. Accessibility verified September 4, 2015.
29. Gaglio B, Phillips SM, Heurtin-Roberts S, Sanchez MA, Glasgow RE. How pragmatic is it? Lessons learned using PRECIS and RE-AIM for determining pragmatic characteristics of research. Implement Sci. 2014;9:96. doi: 10.1186/s13012-014-0096-x.
30. Wallace AS, Sussman AL, Anthoney M, Parker EA. From intervention to innovation: applying a formal implementation strategy in community primary care. Nurs Res Pract. 2013;605757.
31. Bonomi AE, Wagner EH, Glasgow RE, VonKorff M. Assessment of Chronic Illness Care (ACIC): a practical tool to measure quality improvement. Health Serv Res. 2002;37:791–820. doi: 10.1111/1475-6773.00049.
32. American Diabetes Association. Standards of Medical Care in Diabetes—2015 abridged for primary care providers. Clin Diabetes. 2015;33:97–111. doi: 10.2337/diaclin.33.2.97.
33. Brown H, Prescott R. Applied mixed models in medicine. 2nd ed. Hoboken: John Wiley; 2006.
34. Westfall P, Tobias R, Wolfinger R. Multiple comparisons and multiple tests using SAS®. 2nd ed. Cary: SAS Institute Inc.; 2001.
35. Allison PD. Handling missing data by maximum likelihood. Orlando: SAS Global Forum; 2012.
36. Kenward MG, Roger JH. Small sample inference for fixed effects from restricted maximum likelihood. Biometrics. 1997;53:983–997. doi: 10.2307/2533558.
37. Goldberg DG, Mick SS, Kuzel AJ, Feng LB, Love LE. Why do some primary care practices engage in practice improvement efforts whereas others do not? Health Serv Res. 2013;48(2 pt 1):398–416. doi: 10.1111/1475-6773.12000.
38. Dorsey R, Songer T. Lifestyle behaviors and physician advice for change among overweight and obese adults with prediabetes and diabetes in the United States. Prev Chron Dis. 2011;8:A132.
39. Yang K, Lee YS, Chasens ER. Outcomes of health care providers’ recommendations for healthy lifestyle among U.S. adults with prediabetes. Metab Syndr Relat Dis. 2011;9:231–237. doi: 10.1089/met.2010.0112.
40. Schillinger D, Handley M, Wang F, Hammer H. Effects of self-management support on structure, process, and outcomes among vulnerable patients with diabetes: a three-arm practical clinical trial. Diabetes Care. 2009;32:559–566. doi: 10.2337/dc08-0787.
41. Elliott DJ, Robinson EJ, Sanford M, Herrman JW, Riesenberg LA. Systemic barriers to diabetes management in primary care: a qualitative analysis of Delaware physicians. Am J Med Qual. 2011;26:284–290. doi: 10.1177/1062860610383332.
