Recurrent operational problems in teaching clinics may be caused by the different medical preferences of patients, residents, faculty, and administrators. These preference differences can be identified by cultural consensus analysis (CCA), a standard anthropologic tool.
This study tests the exportability of a unique CCA tool to identify site-specific operational problems at 5 different VA teaching clinics.
We used the CCA tool at 5 teaching clinics to identify preference differences among these groups. We averaged the CCA results for all 5 sites. We compared each site with the averages in order to isolate each site's most anomalous responses. Major operational problems were independently identified by workgroups at each site. Cultural consensus analysis performance was then evaluated by comparison with workgroup results.
Twenty patients, 10 residents, 10 faculty members, and 10 administrators at each site completed the CCA. Workgroups included at minimum: a patient, resident, faculty member, nurse, and receptionist or clinic administrator.
Cultural consensus analysis was performed at each site. Problems were identified by multidisciplinary workgroups, prioritized by anonymous multivoting, and confirmed by limited field observations and interviews. Cultural consensus analysis results were compared with workgroup results.
The CCA detected systematic, group-specific preference differences at each site. These were moderately to strongly associated with the problems independently identified by the workgroups. The CCA proved to be a useful tool for exploring the problems in depth and for detecting previously unrecognized problems.
This CCA worked in multiple VA sites. It may be adapted to work in other settings or to better detect other clinic problems.
Many VA teaching clinics experience recurrent operational problems that are barriers to improvement and are a waste of time and money. In order to understand why a problem recurs, it may be useful to examine stable, possibly self-replicating factors such as conflicting expectations about how the health care system should function. To assess this possibility, our team has adapted an anthropologic technique called cultural consensus analysis (CCA), which was designed to identify the medical expectations of different groups in a clinic. This CCA is a preference sorting task, in which individuals sort 16 laminated cards, each printed with 1 statement about things that might happen in a clinic.1 We report here on a pilot study designed to test the exportability of this CCA tool in 5 different teaching clinics.
Cultural consensus analysis is a method developed by anthropologists to identify groups with shared values. It uses a mathematical model similar to that used by psychometricians in test construction, but identifies coherence between individuals rather than between questions. It assumes that cultural knowledge is shared and systematically distributed within groups.2, 3 The affiliation of individuals to groups is inferred by similarity of responses to a set of meaningful statements (in this case the 16 cards). We have deployed the CCA as a tool to focus attention on important, recurrent problems in teaching clinics and thus act as a shortcut to remediation.
We selected CCA as our method of choice because it is uniquely capable of uncovering specific difficult-to-detect differences in medical preference between groups that we hypothesize are root causes of most recurrent problems. In previous research, we used a CCA tool to detect preference differences between groups in our clinic1 and found that these differences were strongly associated with recurrent operational problems in this clinic.4 Some of these medical preference differences were tacit, not being recognized by participants until pointed out by the CCA instrument. Preliminary evidence suggests that this tool may work at other VA clinics,5 and this study is designed to test the exportability of the CCA to other clinic settings. To examine this, we were interested in 3 questions:
This study was part of a larger study performed at 5 VA teaching clinics between March 2002 and July 2004. The affiliated Human Subjects Divisions and Institutional Review Boards approved the project at all sites. In summary, we collected and analyzed CCA data. We compared each site's data with the group average in order to identify CCA results (differences in medical preferences) that were extreme at that site. We searched for associations between these extreme responses and the recurrent problems at that site, as generated by interdisciplinary workgroups using a structured protocol. Although these sites were selected by convenience for this pilot study, they represented a variety of sizes, network affiliations, and levels of service.
We had previously developed a CCA tool from in-depth ethnographic observations, interviews, and focus groups.1 This CCA tool consisted of a set of 16 cards with 1 statement per card about “things that could happen during a clinic visit.” We asked a convenience sample of 20 patients, 10 residents, 10 faculty members, and 10 administrators at each site to rank order these cards by order of importance to them. Sample sizes were calculated to assure 95% confidence of answering at least 90% of the questions per group norm.2 Each statement was printed on laminated 3 × 5 cards (see Table 1). The CCA exercise was carried out by a single trained research assistant (RA) at each site using the methods from our prior study.1 Briefly, subjects were approached individually and asked to sort the cards in a private or semiprivate area. Their responses were recorded on a standardized form, and later transferred to Excel spreadsheets for analysis.
We used the analytic method described by Smith et al.1 An N×N matrix was constructed (where N is the number of subjects), and each element was filled with the proportion of statements that pair of subjects ranked identically. Each subject is assumed to have a specific (but unknown) cultural knowledge, C, or proportion of the group's “correct” rankings (also unknown to us) that they can correctly distinguish. The proportion of matches between subjects can be shown to be a function of these individual “C” values.6 The matrix is analyzed for the difference between the observed data and that which would be predicted by various assigned values of C for each subject. The solution with the minimum sum of squared discrepancies provides the best estimate of the cultural competence for each subject.7
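The estimation step described above can be illustrated with a short Python sketch. This is not the Anthropac implementation and the function names are ours; it assumes the simplified model in which the expected proportion of matches between two subjects is the product of their competences, and it ignores chance agreement.

```python
import numpy as np
from scipy.optimize import minimize

def match_matrix(rankings):
    """Proportion of statements each pair of subjects ranked
    identically. rankings: (n_subjects, n_statements) rank positions."""
    r = np.asarray(rankings)
    n = r.shape[0]
    m = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            m[i, j] = np.mean(r[i] == r[j])
    return m

def estimate_competence(m):
    """Choose per-subject competences C in [0, 1] so that C_i * C_j
    best reproduces the observed off-diagonal match proportions,
    by minimizing the sum of squared discrepancies."""
    n = m.shape[0]
    iu = np.triu_indices(n, k=1)  # each pair once, diagonal excluded
    def loss(c):
        return np.sum((m[iu] - np.outer(c, c)[iu]) ** 2)
    res = minimize(loss, x0=np.full(n, 0.5), bounds=[(0.0, 1.0)] * n)
    return res.x
```

Under this toy model, a set of subjects whose rankings agree closely receives uniformly high competence estimates, while a dissenting subject's estimate is pulled toward zero.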
The “correct” orders (unknown to us), which would align that subject with his or her cultural partners, were then calculated a posteriori, using Bayes' theorem.2 By correct order, we are not making a judgment about appropriateness. We are simply identifying the most likely overall group order that would explain the existing individual rankings observed. For instance, at nearly every site, the statement Have the same doctor for more than 1 year was ranked #1 by almost all patients. Therefore, #1 is assumed to be the correct ranking of this statement for the group “patients.” In comparison, the group “faculty” ranked this statement #10. We assumed that the prior probability of a statement being #1 for a group was 1/16 (all statements having an equal chance) and adjusted for the conditional probabilities calculated using the actual rankings by each subject and their estimated cultural knowledge.
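The Bayesian step can be sketched as follows. The response model here is an assumption for illustration, not spelled out in the text: with probability equal to their competence a subject reports the group's correct rank, and otherwise a uniformly random one of the 16 ranks. The function name is ours.

```python
import numpy as np

def posterior_correct_rank(observed_ranks, competences, n_ranks=16):
    """Posterior probability that each rank position k is the group's
    'correct' rank for one statement, given each subject's observed
    rank and estimated competence C. Assumed response model: with
    probability C a subject reports the correct rank; otherwise a
    uniformly random rank. Prior: uniform, 1/16 per rank."""
    obs = np.asarray(observed_ranks)
    c = np.asarray(competences, dtype=float)
    log_post = np.full(n_ranks, np.log(1.0 / n_ranks))  # log prior
    for k in range(1, n_ranks + 1):
        # likelihood of each subject's response if k were correct
        p = np.where(obs == k, c + (1 - c) / n_ranks, (1 - c) / n_ranks)
        log_post[k - 1] += np.log(p).sum()
    post = np.exp(log_post - log_post.max())  # stable normalization
    return post / post.sum()
```

For the Have the same doctor example, if nearly all patients rank the statement #1, the posterior mass concentrates on rank 1 for the patient group.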
We operationally defined as “enthusiasm” the rank order, or preference (out of a possible 16), given to a CCA topic by each group. Thus, a statement determined to have a “correct” rank of “3” would indicate that the group has more “enthusiasm” for that topic than for a topic “correctly” ranked at “10,” and the statement ranked “1” would be most important to that group. Patients are most enthusiastic about Have the same doctor… in the above example because it is ranked #1 by them. The consensus analysis was performed using Anthropac software (Borgatti, 1992, Anthropac 4.0, Columbia Analytic Technologies).
In order to focus on site-specific CCA differences (and to diminish the effect of group differences observed consistently across all sites), we performed an intersite CCA analysis. We compared each individual site's rankings (by group and for each CCA statement) with the average ranking for that group and statement across all of the other sites. For example, we took the consensus ranking by patients (P) at site 1 for CCA statement 1 (P at site 1, CCA 1) and subtracted the average of the corresponding rankings at all of the other sites.
For the statement Have the same doctor for more than 1 year, the patient rankings at the 5 sites were 1, 6, 1, 1, and 1. Averaging the other sites, we would expect a ranking of 1 (site 1)+1 (site 3)+1 (site 4)+1 (site 5) divided by 4, or #1, at site 2. The actual ranking of #6 therefore shows anomalously low enthusiasm by patients at that site.
This allowed us to observe, for each group at each individual site, how the CCA statements deviated from the average of all other sites. We calculated the standard deviation of this ranking difference for each site (all groups and all CCA statements). We then focused our analysis on only those CCA statements whose ranking differences were most anomalous, which we defined as enthusiasm more than 1 standard deviation above or below the mean. We further defined a more stringent standard of outlier CCA statements as those that had at least 2 groups that were anomalous and were polarized in opposite directions (1 more enthusiastic, 1 less).
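The intersite comparison and the two thresholds above can be sketched in Python. This is an illustrative reconstruction, not the study's code: the array layout and function names are ours, and we treat a difference as anomalous when it falls more than the chosen number of standard deviations from that site's mean difference.

```python
import numpy as np

def intersite_differences(ranks):
    """ranks: (n_sites, n_groups, n_statements) array of consensus
    rankings. For each cell, subtract the mean ranking of the *other*
    sites for the same group and statement (leave-one-out mean)."""
    r = np.asarray(ranks, dtype=float)
    n_sites = r.shape[0]
    other_mean = (r.sum(axis=0, keepdims=True) - r) / (n_sites - 1)
    return r - other_mean

def flag_outliers(ranks, sd_threshold=1.0):
    """Flag 'anomalous' responses (difference more than sd_threshold
    standard deviations from that site's mean difference) and
    'outlier' statements (at least 2 groups anomalous at one site,
    polarized in opposite directions)."""
    diff = intersite_differences(ranks)
    mu = diff.mean(axis=(1, 2), keepdims=True)   # per-site mean
    sd = diff.std(axis=(1, 2), keepdims=True)    # per-site SD
    anomalous = np.abs(diff - mu) > sd_threshold * sd
    signed = np.sign(diff) * anomalous
    # Polarized: for one statement at one site, at least one group
    # deviates in each direction. Note that a lower rank number means
    # higher enthusiasm.
    polarized = (signed.max(axis=1) > 0) & (signed.min(axis=1) < 0)
    return anomalous, polarized  # polarized: (n_sites, n_statements)
```

In the Have the same doctor example, the patient ranking of 6 at site 2 against rankings of 1 elsewhere would produce a large positive difference (lower enthusiasm) and be flagged as anomalous for that group.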
To evaluate whether CCA performance detected substantive problems in the clinics, we systematically selected a multidisciplinary workgroup at each site to identify, in their own view, their recurrent problems. We used a “key informant”8 sampling technique to identify at minimum: a patient, senior resident, faculty member, nurse, and receptionist or clinic administrator at each site (although illness affected the final composition of the workgroup at some sites; see Table 2). Each workgroup was instructed to brainstorm about typical problems. These problems were recorded on a white board or flip chart. The brainstorming process continued until no new problems were identified. The problems were then entered, 1 to a line, in a word processor and a copy was printed for each workgroup member. A 3-round Delphi process (anonymous multivoting) was then used to prioritize the problems at each site.
The prioritized problem lists generated by the workgroup were reviewed with each site coordinator for verification and elaboration. Confirmation and further elaboration were provided by site-visit field-notes from our team's observation of the workgroup, observations of the teaching clinics, and limited interviews.
These data were analyzed to determine the top 2 problems specific to each site. Discrepancies were discussed and clarified by e-mail with the site coordinator, who provided further details. Follow-up visits were conducted with the workgroups at all sites to confirm the top 2 problems for each site. We used N-vivo qualitative analysis software (NUD*IST Vivo 1.0, Scolari, Sage Publications Software, Thousand Oaks, CA) to support this analysis.
The workgroup problems were then associated with the CCA outlier results to determine whether or not the CCA tool was successful in detecting operational problems. We looked for conceptual correspondence between a site's most anomalous CCA statements, on the one hand, and the problems expressed by the workgroups, on the other. Initially, we examined only outlier CCA statements, the most extreme CCA ranking differences, where the same CCA statement had at least 2 anomalous group responses polarized in opposite directions (1 with higher than average enthusiasm and 1 with lower than average enthusiasm). We compared these with the top 2 problems identified by the independent workgroups.
Next, we examined intersite difference graphs at each site to determine whether the CCA results contributed new information toward the remediation of known recurrent problems, and whether the CCA tool detected new problems.
The CCA tool performed as expected at all 5 sites. Nearly every group met strict criteria (eigenvalue ratio >3)2 for shared preferences at each site (data not presented), there were systematic differences in the ranked preferences, and these differences corresponded to specific groups in the clinic.
When compared with the “gold standard” of self-identified problems in clinic, the CCA performed surprisingly well. Table 3 shows the 2 major problems identified by the interdisciplinary workgroups at each site and that site's “outlier” CCA statements. Outlier CCA statements occurred at 4 of the 5 sites, and corresponded conceptually to at least 1 of the major problems at 3 of the 5 sites (sites 1, 4, and 5). For instance, the 2 major problems identified at site 1 were: tension about an electronic medical record that the administration had just adopted and the faculty disliked, and significant productivity demands on faculty impacting the teaching mission. The only outlier CCA statements at this site were Use a computer to check the patient record, where administrators were high-enthusiasm outliers and faculty were low-enthusiasm outliers; and Senior doctor around to answer questions or review student doctor's work (a composite of 2 separate CCA statements that tended to perform identically), where patients were high-enthusiasm outliers (perhaps perceiving a greater need for resident supervision?) and residents and faculty were low-enthusiasm outliers (perhaps perceiving the faculty overburden?). Although the CCA tool performed well at all sites (identifying large between-group preference differences), the value of CCA for detecting problems and guiding interventions is much higher at sites that are performing below expectations (data not shown).
We further examined the intersite CCA performance graphs for each site (Figs. 1 and 2) and found that these frequently did contribute new information toward understanding a known recurrent problem or identifying a new problem. In the figures, a point above the diagonal means that a CCA statement is valued more than average by this group at this site; a point below the diagonal means it is valued less. The values marked by (•) indicate the outlier CCA statements summarized in Table 3. The values marked by () indicate other CCA statements that were anomalous (more than 1 SD above or below the mean) but did not polarize at least 2 groups in opposite directions.
Figure 1 shows the performance of our CCA tool at this site. The outlier CCA statements at this site, (•), were strongly associated with the major problems. The fact that patients were high-enthusiasm outliers for the CCA statement Senior doctors review student doctor's work suggested that faculty overburden may be affecting the quality of resident supervision as perceived by patients. Observations and interviews at this site confirmed this concern. This was new information that triggered administrators to decrease faculty overburden.
In addition, there was 1 other anomalous CCA response. Residents were high-enthusiasm outliers for the statement See the patient within 15 minutes of the appointment time (). This unexpected CCA finding led faculty to interview residents about any need for increased efficiency. Several procedural inefficiencies were identified and corrected, and a tension between inpatient and outpatient demands, whose magnitude had gone unrecognized, led to decreased resident panel sizes.
There were no CCA outlier statements here (no polarization between at least 2 groups for the same statement, data not shown). However, the anomalous CCA statements at this site are modestly associated with 1 of the site's major problems. The problem “increased productivity demands (partially because of newly adopted open access)” is reflected in the increased enthusiasm shown by faculty, residents, and administrators for the statement Get quick treatment for the pain or sickness the patient is feeling; in the enthusiasm by administrators for the statement Stay on time to see as many patients as possible; and in the lower enthusiasm by faculty for the statement Have senior doctors around to answer questions for student doctors. The lower than average enthusiasm by patients for the statement Have the same doctor for more than 1 year suggests that productivity pressure, although identified as a major problem, did not affect continuity at this site.
The outlier CCA results at this site had some association with major problems (perhaps reflecting the communication problem, data not shown). Unexpected anomalous CCA findings were strongly validated at this site. There was an unusual amount of agreement between faculty and administration compared with other sites, with high enthusiasm by both groups for the statement Talk to patients until they understand what the doctor is doing (administrators usually low) and low enthusiasm by both groups for the statement Get quick treatment for the pain or sickness the patient is feeling (administrators usually high). Administrators were also less enthusiastic than average about another “efficiency” statement See the patient within 15 minutes of the appointment time. This was explained during our second site visit by the fact that this was the only site where all the administrators surveyed were still active clinicians and valued the front-line perspective.
Because of this strong “face validity” of CCA at this site, faculty wanted to further investigate the anomalous enthusiasm by residents for the statement Have senior doctors around to answer questions for student doctors, and took quite seriously the paradoxical outlier responses by residents (who are usually computer enthusiasts) and faculty (who are often computer-phobic) to the statement Use a computer to check the patient record. The site planned to review their faculty-resident supervision model based on the former CCA result. The latter CCA result was explained during the site visit. We discovered that faculty had gone from paper records to an electronic medical record (EMR) and were enthusiastic, while residents had to use 3 or 4 different EMRs and found this one the most cumbersome. The site was unaware of the magnitude of resident unrest about this EMR, and planned to revisit the issue.
Figure 2 shows the performance of our CCA tool at this site. The CCA results at this site have a strong association with the site's major problems. The major problem “increased number of patients with fewer staff” is reflected in the anomalous faculty and administration's high enthusiasm for the CCA statement See the patient within 15 minutes of the appointment time. The outlier statement Have senior doctors around to answer questions for student doctors suggests that the productivity pressure (because of fewer staff and more patients) is affecting supervision in the training program.
The other major problem “many patients without a primary care doctor” is reflected in the anomalous high enthusiasm by residents and faculty (and nearly so by patients), for the statement Have the same doctor for more than 1 year.
The CCA results at this site show a strong association with the site's major problems (data not shown). The major problem “next open appointment in primary care and specialty clinics” is reflected in the outlier responses to the statement Get quick treatment for the pain or sickness the patient is feeling by faculty (high enthusiasm) and patients (low enthusiasm). This suggests that the productivity pressure on faculty may be creating a patient perception of insufficient time during each visit. In addition, the anomalous results for the CCA statements Talk to patients until they understand what the doctor is doing (faculty and administration low) and See the patient within 15 minutes of the appointment time (faculty and administration high) also suggest significant productivity pressures.
The other major problem “communication between departments and with resident when outside VA” may be reflected in the anomalous high enthusiasm by patients for the statement, Let the patient know about lab results.
Our version of CCA, developed specifically for use in VA teaching clinics, shows promise for identifying and explaining recurrent operational problems in this setting. Our CCA tool may be more useful than other methods of problem detection because it has credibility with constituents that other data may not have; it identifies differences empirically (rather than relying on perceptions); it is founded on extensive prior ethnography, rich with examples; it is based on explicit mathematical criteria; and it explicitly compares the rankings between groups, which we have shown can lead to new insights. The CCA tool also makes explicit tradeoffs in medical preferences, such as faculty time spent on seeing patients versus supervision of residents, and it helps to objectify the impact of 1 clinic goal on another.
This CCA tool was found to be moderately to strongly associated with site-specific problems in VA teaching clinics, especially at lower functioning sites, where the potential value of this tool for guiding interventions is much higher. Even the absence of a common problem was detectable by the CCA. Patients at site 2, where continuity was highest, had an unusually low enthusiasm for Have the same doctor for more than 1 year. Conversely, this statement ranked highest at the other sites, where continuity was a common problem.
Cultural consensus analysis does provide new information useful for the remediation of these problems. Although we may often recognize that a problem exists, CCA can pinpoint where and why tensions exist that sustain the problem and how severe it is. This is especially true when efforts toward 1 value, such as high productivity, are shown to be unexpectedly impacting another, such as the quality of residency supervision (as seen at site 1).
As with any research method, the CCA occasionally produces unexpected results. Sometimes, these results reflect new, unrecognized problems (such as resident overload at site 1). Other times, they are not explained and may represent “noise” in the data.
That said, this pilot study has some limitations that should be considered. It was conducted at a small number of sites (5), and some results may have been influenced by factors inherent to the operation of the larger hospital context in which the teaching clinics are embedded. Also, it was conducted in the VA system and may not generalize to other types of teaching clinics.
Therefore, the current CCA tool may benefit from modification. Because only 10 of the 16 CCA statements were potent enough triggers to create anomalous results, a simpler CCA based on these 10 statements alone may be easier to use and perform just as well. This suggestion will need to be tested, as there are potential drawbacks. For example, communication issues were common on the problem lists but were not well detected by the current CCA tool. Also, problem areas, and thus the important CCA statements, may be different at other types of teaching clinics. Finally, the size of this study precludes any comment on nuances such as the difference between productivity pressures identified by anomalous enthusiasm for See the patients within 15 minutes… versus Stay on time to see as many patients as possible. Further research on the CCA method and this specific tool is warranted.
This CCA tool, which takes about 5 minutes per subject and requires a relatively small sample size, may be used as the first step in identifying and remediating important operational problems in VA teaching clinics. “Outlier” CCA statements (anomalous and polarizing) can focus attention on the largest problems. Other anomalous statements can often illuminate contributions to the problem, the impact of this problem on other clinic missions, and can even identify previously unrecognized problems. Once the CCA identifies areas of concern (e.g., anomalously high resident enthusiasm for efficiency at site 1), these can be further addressed within the context of the particular clinic (e.g., need for process improvement and decreased panel size at site 1). Cultural consensus analysis response was associated with at least 1 of the self-identified major problems at all sites, and could work even better with the modifications suggested above. Interested users can create the tool by copying the statements from Table 1 (1 to a card) onto 3 × 5 cards for sorting. The analytical program is available on the internet for $75.10
The CCA method may also be useful in other settings. Our experience suggests that it may not be necessary to repeat the extensive observations, interviews, and focus groups used to create this tool. For instance, the tensions between productivity pressure and time spent on teaching, which this CCA tool detects well, are common at most teaching clinics in the United States. We believe that the current CCA statements could be modified, based on brief focus groups with knowledgeable insiders, at almost any teaching clinic and this should be tested. CCA continues to show promise as a tool for identifying the subtle preference differences that sustain recurrent problems and poor performance in teaching clinics.
This material is based upon work supported by the Office of Research and Development, Health Services R&D Service, Department of Veterans Affairs (grant #PCC 01-178).