Researchers at the National Opinion Research Center (NORC), the University of Chicago, and the MidWest Clinicians' Network surveyed HC staff to measure the possible effects of HDC implementation. Survey instruments were developed collaboratively, with input from researchers directly involved in HDC implementation efforts. The design was further informed by the results of qualitative, structured interviews previously conducted with 40 staff members at eight HCs. This range of input was necessary because we anticipated wide variation among HC staff in knowledge of and participation in the HDC. In the end, we developed five versions of the survey instrument, each tailored to the respondent's specific role at the HC and in the collaborative effort. Most items were common across respondent types; common questions were identical in wording, response options, and format, although some respondent types received additional questions not asked of others. Most survey items used a five-point Likert response scale and addressed attitudes toward the QI effort, including perceptions of clinical outcomes and costs, support from staff and leadership, resource availability, and employee satisfaction.
The sampling frame was restricted to centers in the Midwest and West Central regions of the United States that had participated in the HDC for at least one year. Although the HDC officially covers several conditions, our sampling criteria did not depend on any official or unofficial expansion of the protocols to other diseases. Of the 173 HCs identified, 95 percent provided complete staff listings, including individual names, positions at the center, and role, if any, in the HDC. Our sample was then drawn from this list and included all identified Chief Executive Officers, Executive Directors, Medical Directors, HDC Team Leaders, and HDC Team Members, plus a random sample of up to three staff members not participating in the collaborative effort. A respondent's position at the HC had no effect on his or her eligibility, and the sample was not restricted to practitioners.
Data were collected between March and December 2004 following the standards of Dillman's Total Design Method (Dillman 1978). The initial mailing went to more than 1,500 eligible respondents, and up to two additional copies of the questionnaire were sent to nonrespondents via express delivery. To further increase response, we conducted telephone prompting and mailed additional surveys along with letters of support from relevant BPHC officials. These efforts produced a final overall response rate of 68 percent, with questionnaire-specific rates ranging from 79 percent for Team Leaders and 71 percent for Team Members to 58 percent for staff not participating in the Collaborative.
Outcome Measures and Covariates
To examine the impact of the HDC on reported job satisfaction, we analyzed data from two of the five surveys described earlier, the Team Leader and Team Member surveys, as these respondent types were most heavily involved in the HDC effort. Two measures of job satisfaction are addressed: staff morale and team member burnout. The concept of employee burnout has evolved over time but is generally defined by an employee's feelings of discouragement and dissatisfaction and a decline in effectiveness at work (Maslach, Schaufeli, and Leiter 2001).
All respondents were asked the following questions regarding morale and burnout. Perceived change in staff morale was measured by a single survey item that asked, “During calendar year 2003, to what degree did the Collaborative worsen or improve staff morale at your center?” Response options were “greatly worsened,” “somewhat worsened,” “no change,” “somewhat improved,” and “greatly improved,” with higher scores reflecting perceptions of improved staff morale. Perceived staff burnout was measured by a single survey item that asked respondents to rate their degree of agreement with the statement, “Team members became burned out because of their Collaborative duties.” The response format was a five-point Likert-type scale ranging from “Strongly Disagree” to “Strongly Agree,” with higher scores reflecting greater agreement with the statement.
Predictive variables are grouped into distinct categories: HDC quality of care outcomes; HDC integration; institutional support of the HDC; the use of incentives; and demographic characteristics. These categories include both modifiable and nonmodifiable characteristics of respondents and HCs. Each category is described below, and the specific questionnaire items used as independent variables and covariates in our models are presented in the table below.
Questionnaire Items Used as Independent Variables and Covariates with Respective Response Options
HDC quality of care outcomes are measured by two separate indicators. These measures are not used to evaluate whether actual quality of care increased as a result of the HDC effort; rather, they serve as surrogates for overall HDC performance in our analysis of the predictors of staff satisfaction. The first measure is a composite summary score of five items measuring respondents' self-reported perceptions of overall success of the HDC effort, perceived value of the collaborative effort, and perceived improvements to processes and to patient outcomes and satisfaction. Each component used a five-point Likert response scale; summing the five items gave a total score from 5 to 25. The resulting summary score was then rescaled, based on its distribution, into a final three-point scale ranging from “little or no improvement” to “great improvement.” Complete descriptions of each component measure are provided in the table above.
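The construction of the composite outcome score can be sketched as follows. This is a minimal illustration only: the two cut points used for the three-level rescaling are assumptions standing in for the distribution-based thresholds the paper describes but does not report.

```python
def summary_score(items):
    """Sum five 1-5 Likert items into a 5-25 composite score."""
    assert len(items) == 5 and all(1 <= i <= 5 for i in items)
    return sum(items)

def rescale(total, cuts=(11, 18)):
    """Map the 5-25 composite onto a three-point scale:
    1 = little or no improvement, 2 = moderate improvement,
    3 = great improvement. The cut points are hypothetical
    stand-ins for the distribution-based thresholds in the paper."""
    if total <= cuts[0]:
        return 1
    if total <= cuts[1]:
        return 2
    return 3
```

For example, a respondent answering near the scale midpoint on all five items (totals of 12 to 18 under these assumed cuts) would fall in the middle, "moderate improvement" category.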
For a second, independent assessment of outcomes, eight regional HDC Cluster Coordinators were asked to evaluate the success of each center with which they worked. (Cluster Coordinators work with multiple HCs, providing technical assistance through telephone calls, information management, access to key materials required for the HDC effort, and regional meetings.) For each HC, the following question was asked: “For 2004, how would you rate the overall performance and participation of each of the following centers in the HDC?” Response categories were “Excellent,” “Very Good,” “Good,” “Fair,” and “Poor.” In making these assessments, Cluster Coordinators had access to self-reported performance measures from each HC, but it is unknown to what extent this information was used, and these data were not available to us for supplemental analyses.
HDC Integration measures the degree to which collaborative activities are embedded within centers, including staff participation levels. Institutional support, in contrast, addresses allocation of resources by HCs, while incentives refer to whether HDC team members receive monetary or nonmonetary benefits for their participation.
We further divided predictive variables into those that were modifiable and those that were nonmodifiable by HDC policy changes. There are no established definitions of modifiable and nonmodifiable variables for QI programs. We defined a nonmodifiable variable as any respondent or HC characteristic that was fixed at the time the survey was completed (e.g., length of tenure at the HDC, rural location) and not affected by changes in HDC policies. In comparison, modifiable characteristics were those that could be influenced by policy change, such as the provision of incentives for QI activities or opportunities for professional development of new skills related to the HDC. These definitions and assignments were determined by consensus of all investigators, but we recognize that specific characterizations could be debated.
To ensure accurate and consistent measurement of HC-level, nonmodifiable descriptors, we also included key measures from the Uniform Data System (UDS), an annual compilation of demographic information on BPHC-funded HCs and their patient populations. Specifically, UDS data were used to identify (1) the percentage of patients who are primarily non-English speaking; (2) the percentage of HC patients with private health insurance coverage; (3) the total number of medical staff full-time equivalents (FTEs) at each HC; and (4) the total number of medical visits per FTE. UDS data are from 2004 and are reported at the HC level; records for HCs with multiple sites describe the characteristics of all associated sites combined, whereas survey data are reported for each specific site individually.
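The HC-level rollup convention noted above can be made concrete with a small sketch. The site records and all figures below are invented for illustration; the point is only that a multi-site HC's UDS measures describe the combined sites, so ratios such as visits per FTE must be computed from pooled totals rather than averaged across sites.

```python
# Hypothetical site-level records for one multi-site HC (numbers invented).
sites = [
    {"fte": 10.0, "visits": 30000, "non_english": 1200, "patients": 8000},
    {"fte": 5.0,  "visits": 12000, "non_english": 300,  "patients": 4000},
]

# UDS-style HC-level figures combine all associated sites.
total_fte = sum(s["fte"] for s in sites)
visits_per_fte = sum(s["visits"] for s in sites) / total_fte
pct_non_english = 100 * sum(s["non_english"] for s in sites) / sum(
    s["patients"] for s in sites
)
```

Note that pooling matters: the per-FTE rate computed from combined totals generally differs from the mean of the two sites' individual rates.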
Descriptive statistics were used to characterize respondents and HCs. Associations of the job satisfaction indices with modifiable and nonmodifiable characteristics of HCs and individuals were evaluated using mixed linear regression models (Drum 2002; SAS Institute Inc. 2004). Mixed models include both fixed and random factors; here, the fixed factors are the covariates of interest and the single random factor is the HC. Specifying HC as a random factor incorporates the variation due to clustering of respondents within HCs, yielding appropriate standard errors for the regression coefficients.
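The random-intercept structure described above can be written out explicitly. The notation here is ours, not the paper's, but it is the standard form of the model being fit:

```latex
Y_{ij} = \beta_0 + \mathbf{x}_{ij}^{\top}\boldsymbol{\beta} + u_j + \varepsilon_{ij},
\qquad u_j \sim N(0, \sigma^2_u), \qquad \varepsilon_{ij} \sim N(0, \sigma^2_e),
```

where \(Y_{ij}\) is the satisfaction outcome for respondent \(i\) in HC \(j\), \(\mathbf{x}_{ij}\) holds the fixed covariates, and \(u_j\) is the HC-level random intercept. The within-HC clustering that motivates the random factor is summarized by the intraclass correlation \(\sigma^2_u / (\sigma^2_u + \sigma^2_e)\).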
Bivariate associations were examined first. Next, for each dependent variable, four preliminary models were developed (not shown), each including measures that intersected by modifiability and unit of analysis (i.e., Model 1: nonmodifiable HC variables [e.g., HC located in a rural area]; Model 2: modifiable HC variables [e.g., provider participation in the HDC decreased]; Model 3: nonmodifiable team member variables [e.g., gender of respondent]; Model 4: modifiable team member variables [e.g., received extra money for work on the HDC]). Finally, separate multivariate models including all covariate domains were fit for each dependent variable. Covariates that were significant in the preliminary models were candidates for inclusion, and stepwise procedures were used to obtain the most parsimonious model (Searle, Casella, and McCulloch 1992; Verbeke et al. 1997). This multimodel approach was completed twice for each dependent variable: once controlling for HDC outcomes, to measure the direct effects of the covariates above and beyond their effects through success of the HDC, and once omitting HDC outcomes, to estimate the combined direct and indirect effects of the covariates. Below we present results from these final, most parsimonious models.
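The stepwise search for a parsimonious model can be sketched in its simplest (forward) form. Everything here is illustrative: the candidate covariate names and the score table are invented, and the toy `score` function stands in for a model-fit criterion such as AIC evaluated on a fitted mixed model, which is what SAS would compute in practice.

```python
def forward_select(candidates, score):
    """Greedy forward selection: repeatedly add the covariate that most
    improves the score (lower is better), stopping when no addition helps."""
    selected, best = [], score([])
    improved = True
    while improved:
        improved = False
        for c in candidates:
            if c in selected:
                continue
            s = score(selected + [c])
            if s < best:
                best, best_cov, improved = s, c, True
        if improved:
            selected.append(best_cov)
    return selected, best

# Toy score table standing in for a fit criterion (values invented):
SCORES = {
    frozenset(): 100.0,
    frozenset({"incentives"}): 90.0,
    frozenset({"rural"}): 95.0,
    frozenset({"incentives", "rural"}): 92.0,
}
score = lambda sel: SCORES[frozenset(sel)]
```

Under this toy table, adding "incentives" improves the score but subsequently adding "rural" does not, so the search stops with the single-covariate model.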
Distributions of the dependent variables were examined before analysis to assess the suitability of a linear model. Mixed linear regression models were fit using PROC MIXED in SAS version 9.1.