Research concept and design: Chaufan, Karter, Moffet, Quan, Parker, Schillinger, Fernandez; Acquisition of data: Karter, Moffet, Quan, Parker, Schillinger, Fernandez; Data analysis and interpretation: Chaufan, Moffet, Quan, Parker, Kruger, Fernandez; Manuscript draft: Chaufan, Karter, Quan, Parker, Kruger, Schillinger, Fernandez; Statistical expertise: Karter, Quan, Parker; Acquisition of funding: Karter, Schillinger, Fernandez; Supervision: Chaufan, Karter, Fernandez; Administrative: Karter, Moffet, Quan, Kruger, Schillinger, Fernandez
Language barriers negatively impact health care access and quality for US immigrants. Latinos are the second largest immigrant group and the largest, fastest growing minority. Health care systems need simple, low-cost, and accurate tools to identify physicians with Spanish language competence. We sought to address this need by validating a simple and low-cost tool already in use in a major health plan.
We conducted a web-based survey in 2012 among physicians caring for patients in a large, integrated health care delivery system. Of the 2,198 survey respondents, 111 were included in an additional analysis involving patient reports of those physicians’ fluency.
We compared physicians’ responses to a single-item Spanish language self-assessment tool (measuring “medical proficiency”) with patient-reported physician language competence and with two validated physician self-assessment tools (measuring “fluency” and “confidence”).
Concordance of medical proficiency was moderate with patient reports (weighted Kappa .45), substantial with fluency (weighted Kappa .76), and moderate-to-substantial with confidence (weighted Kappas .53 to .66).
The single-question self-reported medical proficiency tool is a low-cost tool useful for quickly identifying Spanish competent physicians and is potentially suitable for use in clinical settings. A reasonable approach for health systems is to designate only those physicians who self-assess their Spanish medical proficiency as “high” as competent to provide care without an interpreter.
Approximately 21 million Latinos in the United States have limited English proficiency (LEP) and report speaking English less than “very well.”1 Language barriers impact access, quality, and safety of health care, and are an important contributor to health care disparities disproportionately affecting immigrants.2-6 LEP patients often seek physicians who can speak their native language and need confidence in a health system’s ability to identify these physicians. Therefore, health systems need tools to assess a physician’s ability to deliver linguistically appropriate care or to plan for language access services via professional interpreter services or bilingual staff.
There are three ways to assess a physician’s foreign language competence: 1) asking patients to evaluate a physician’s language competence; 2) administering formal language competency tests; and 3) asking each physician to evaluate his/her own language competence (self-assessment). The first two methods provide a more accurate assessment of physician language competence; however, they are significantly more expensive and time consuming than physician self-assessment. A patient’s evaluation of their provider’s language competency is a reasonable standard as it likely has more face validity than formal language testing. However, gathering patient reported evaluations is time-consuming and potentially costly. Formal language testing can also be costly. A large commercial vendor reports that a professional language competence assessment takes about 40 minutes per participant to administer and is priced at about $100 per test (personal communication, August 25, 2015). In contrast, physician self-assessment is inexpensive and easy to implement, but the evidence that this approach yields accurate information is limited.
In a previous study of physician Spanish language competence, our team validated two physician self-assessment tools against patient reports in a research setting.7 One of these tools measured physician general fluency in Spanish whereas the other measured physician clinical confidence in using Spanish for specific clinical tasks. Both were strongly and positively correlated with patient report of their physician’s Spanish language competence, a reasonable reference standard.7 These tools have shortcomings, however. The first tool, fluency, may be considered too general to generate accurate responses for clinical care, and the second tool, confidence, involves four questions, making it cumbersome to administer. Finding simple, validated tools that identify physicians with Spanish language competence would be highly useful. One potential tool, consisting of a simple, single question asking physicians to rate their clinical Spanish language competence, is already in use in a major health plan, but it has not been validated.
In this study, our goal was to address the need to identify clinical Spanish competence by validating this single question and low-cost tool by comparing question responses with patient report and with our own previously validated self-assessment measures of physician Spanish language fluency and confidence. Based on our prior work, we hypothesized that the single question, Spanish proficiency self-assessment tool would be useful only at the extremes of the self-report scale.
We conducted a web-based survey from August to October, 2012, among all physicians caring for diabetes patients in a large, integrated health care delivery system (Kaiser Permanente Northern California). The objective of the survey was to ask physicians about their Spanish language competence and use of interpreter services. We contacted each physician via the health care system’s internal email system. The survey was conducted using DatStat® software and run on a secure server. Physicians were ineligible for the survey if they were no longer employed by the health care system, or if there was an “out-of-office” reply that covered the entire survey period. Physician demographic data were obtained from health care system administrative records. The Institutional Review Board of the University of California, San Francisco and Kaiser Permanente Northern California approved the study.
In the survey, we asked physicians to report their Spanish language competence via a brief, self-assessment tool currently in use at the Kaiser Permanente Northern California health care system. This tool asks physicians to assess their Spanish language proficiency in four domains: speaking, reading, writing, and discussing medical issues (Figure 1). Our study focused only on the responses regarding the domain of discussing medical issues, which we refer to as medical proficiency. The medical proficiency question used a 4-point Likert scale. We collapsed responses into three categories (not at all/low; moderate; and high).
We also asked physicians via the survey to respond to the two previously validated self-assessment tools mentioned earlier, which we refer to as fluency and confidence, respectively.7 The fluency question asks physicians, “How would you rate your level of fluency in Spanish?” on a 5-point Likert scale. We collapsed fluency responses into three categories (none/poor; fair; very good/excellent) because in our prior work, the categories of “none” and “poor,” and “very good” and “excellent,” were associated with similar patient responses in measures of comprehension and interpersonal care.7 To assess physician’s medical Spanish language confidence, we presented four hypothetical clinical interactions with a Spanish-speaking patient without the use of an interpreter: 1) a straightforward history and physical; 2) lifestyle counseling (eg, diet); 3) medication reconciliation; and 4) depression diagnosis and treatment. Physicians were asked to respond to each scenario on a 4-point Likert scale that we collapsed into three categories (not at all confident; somewhat confident; confident/very confident) because our prior work showed that physicians who responded “confident” or “very confident” had similar patient responses to queries on physician language skills.7
A second data source was the Diabetes Study of Northern California (DISTANCE) survey8 from which we extracted patients’ perspectives on their physicians’ Spanish language competence (N=111). Respondents were asked: “Without using an interpreter, how well does your personal physician speak your language?” Responses were on a 6-point Likert scale, collapsed, as in prior work, into three broader categories (does not speak my language/poorly, fair/well, very well/excellently).
For those physicians who responded to the survey (N=2,198), we compared their responses to the medical proficiency question (the new tool) with their responses to the fluency and confidence questions (previously validated tools).
Of the 2,198 survey respondents, we identified the subset of physicians whose patients had responded to the DISTANCE question about their physician’s Spanish language competence. For those physicians with both a patient evaluation of their Spanish language competence and survey data on self-assessment of Spanish language competence, we compared responses to the medical proficiency question with patient report from the DISTANCE survey.
We generated crosstabs of patient report of physician Spanish language competence with medical proficiency, fluency, and each confidence item, and calculated weighted Kappa statistics. We also generated crosstabs of medical proficiency with fluency and each confidence item, and estimated weighted Kappa statistics to analyze the degree of concordance. We report weighted Kappa statistics because the data are ordinal and the weighted Kappa statistic demonstrates the degree of agreement/disagreement while correcting for chance agreement. This conservative correction is particularly appropriate when a large percentage of people are clumped at one or the other end of the scale on both measures. Based on recommended practice, we classified a weighted Kappa statistic of .00-.20 as poor/slight concordance; .21-.40 as fair; .41-.60 as moderate; .61-.80 as substantial; and .81-1.00 as almost perfect concordance.9
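For readers unfamiliar with the statistic, the weighted Kappa computation described above can be sketched in a few lines. This is a minimal illustration, not the study's analysis code: the ratings below are hypothetical, and linear weighting is an assumption, as the paper does not state which weighting scheme was used.

```python
# Minimal sketch of weighted Kappa for paired ordinal ratings on the
# collapsed 3-point scale (0 = not at all/low, 1 = moderate, 2 = high).
# Hypothetical data; linear weights are an assumption for illustration.
from sklearn.metrics import cohen_kappa_score

self_report    = [0, 0, 1, 2, 2, 2, 1, 0, 2, 1]  # physician self-assessment
patient_report = [0, 0, 2, 2, 2, 1, 1, 0, 2, 0]  # paired patient report

# Linear weights penalize disagreement in proportion to its distance on
# the ordinal scale, and the statistic corrects for chance agreement.
kappa = cohen_kappa_score(self_report, patient_report, weights="linear")
print(round(kappa, 2))
```

With these toy data, most pairs agree exactly and the rest differ by one category, so the statistic falls in the range the paper would label "substantial" (.61-.80).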
In total, 2,198 physicians participated in the survey and were included in our study. Participants’ mean age was 47 years. The sample was evenly distributed between women (51%) and men (49%), and a majority (57%) of participants had graduated from medical school more than 15 years earlier. The largest ethnic group was White (51%), followed by Asian (25%) and Hispanic (7%). The majority (73%) were specialty physicians while the remaining physicians (27%) were primary care providers (Table 1).
In this analysis of the 2,198 participants (small discrepancies in table totals reflect missing data; Table 2, Table 3), agreement between medical proficiency and the previously validated measures of Spanish language competence, fluency and confidence, was high for physicians who self-reported high or low Spanish language competence. For example, the majority (89.2%) of physicians who rated themselves in the “not at all/low” category of medical proficiency also rated themselves in the “none/poor” category of fluency, whereas the majority (89.2%) of those who rated themselves in the “high” category of medical proficiency also rated themselves in the “very good/excellent” category of fluency. In contrast, agreement was lower in the middle group, where nearly one-third (29.2%) of physicians who rated themselves as “moderate” in medical proficiency rated themselves in the “very good/excellent” category of fluency. The overall weighted Kappa was .76, indicating substantial concordance. Similarly, when comparing medical proficiency against each of the confidence items, agreement was high at the extremes, with column percentages ranging from 71 to 98, yet lower in the middle response category, where column percentages ranged from 23.7 to 31.4, yielding overall weighted Kappas that ranged from .53 to .66, ie, moderate to substantial concordance.
In the subset analysis involving 111 physician-patient pairs, when comparing medical proficiency to patient report of physician Spanish language competence (Table 4), we also found high agreement among the high and low response categories and low agreement in the middle category. The majority (86.9%) of physicians rated by their patients as speaking Spanish “very well/excellently” reported their medical proficiency as “high”; the majority (75%) of physicians rated by their patients as speaking Spanish “poorly/not at all” reported their medical proficiency as “low” or “not at all.” However, only a minority (20.6%) of physicians rated by their patients as speaking Spanish “fair/well” reported their medical proficiency as “moderate.” The overall weighted Kappa was .45, a moderate concordance, for this comparison. In analyses comparing patient report of physician language competence with fluency and confidence, we found similar patterns of agreement, with an overall weighted Kappa of .48 for fluency and weighted Kappas ranging from .38 to .46 for the four confidence measures (data not shown).
In this study, we validated a single-question, self-assessment tool of medical Spanish fluency that is already in use in a large US health system. We found that the Spanish medical proficiency tool, a question asking physicians to report their Spanish language proficiency when discussing medical issues, performed well when compared with a reasonable reference standard, patients’ reports of their physician’s Spanish competency. The single-question medical proficiency tool also had a high level of agreement with two previously validated measures of physician self-reported Spanish language competence, fluency and confidence. Concordance between medical proficiency and these two measures ranged from moderate-to-substantial, with high agreement in the high and low response categories of competence but low agreement in the middle response categories. Our results support judicious use of this simple tool for health care systems seeking to identify physicians with high levels of Spanish competence.
Our findings support the practice of referring patients seeking a Spanish-speaking physician to only those physicians who self-assess their Spanish medical proficiency as high. The medical proficiency tool does not discriminate as well in the middle range of Spanish language competency and physicians who rate themselves in the middle range are a heterogeneous group, whose competence in Spanish may not be sufficient to see Spanish-speaking patients without a certified interpreter.
Our results are consistent with other studies indicating that self-report may be useful in clinical practice only when clinicians self-rate at the top of the scale and not in the mid-scale range. In one study, clinicians rated their foreign language competence twice, once on a generic 3-level scale and then on a 5-level scale that included detailed descriptors of competence for each level.10 The scales had only moderate correlation, and the largest discrepancy between the scales was observed among those who had self-rated in the middle response category on the 3-level scale, reflecting heterogeneous proficiency levels. Formal language testing also reveals heterogeneous proficiency in the middle response category of self-assessment scales. In a recent study of pediatric residents who used a Spanish self-assessment tool, one in three residents reporting moderate Spanish proficiency did not test at their self-reported level.11 Self-assessment difficulties are not limited to physicians: a study of dual-role staff who served as interpreters for LEP patients revealed that, in formal language testing, 20% of those tested had only “basic” skills, inadequate for interpreting.12 Other small studies comparing clinician self-report with formal language testing have had similar findings, with high correlations only at the extremes of the self-report scales.13,14
Larger studies conducted in real world settings of self-assessment vs patient report or formal testing would advance the field, as physicians may be sensitive to practice incentives and exaggerate or discount their self-report of Spanish language competence. As systems move to incentivize bilingual speakers,15 research is needed to contrast physician self-assessment of Spanish language competence against the two other, more expensive modes of assessment, ie, patient report of physician language competence and formal language testing. Crucially, all forms of competence assessment should be examined as they relate to clinical outcomes.
Our study has important limitations. First, the physician self-assessment of Spanish language competence was conducted as part of a research survey. Physician self-assessment may vary in a clinical setting, particularly if reporting high Spanish language proficiency is associated with perceived desirable outcomes (eg, higher pay for bilingual competence) or with undesirable outcomes (eg, greater or lesser number of Spanish-speaking patients). Second, the medical proficiency tool was validated against patient report in a small sample (n=111) so estimates may be imprecise. Third, the reference standard of patient report may be itself imperfect, as patients’ reports of their doctor’s Spanish fluency may be subject to recall bias, or conflated with how much they like their doctor or how committed their physician is to communication with them. Fourth, we were unable to examine clinical outcomes, including patient comprehension, which might help interpret the middle response category of the physician self-report scale where many physicians cluster.
Health plans are trying to meet the growing needs of LEP patients by providing the information necessary for patients to choose language competent physicians. There is a need for tools to evaluate physician Spanish language competence that are easy to administer and low cost. The new medical proficiency tool is a simple, self-report, single question, and the characteristics we describe make it suitable for identifying Spanish competent physicians. A reasonable approach for health systems caring for the growing population of LEP Spanish-speaking patients is to designate only those physicians who self-assess their Spanish medical proficiency as “high” as competent to provide care without an interpreter. More research is needed to identify the clinical consequences, if any, when physicians who identify themselves as moderately medically proficient choose to provide clinical care to LEP patients without using a professional interpreter. Depending on resource limitations, it is also reasonable to offer formal language testing to the subset of physicians who self-assess as “moderate” and are interested in providing care for LEP patients without an interpreter.15,16
While our team had already identified two reliable and low-cost tools, the medical proficiency tool has several advantages. It utilizes a question about Spanish competence that is more specific to medical practice than the question utilized to assess fluency, so it may help respondents reflect more accurately on their medically relevant language competence. It also has the advantage over confidence of involving a single question rather than four, and thus it is more likely to be answered. Moreover, there are cost and time advantages to using a single question as an assessment tool. This medical proficiency tool is appropriate for use in research to identify Spanish competent physicians and would be practical for use in clinical settings.
In conclusion, physician self-assessment of Spanish language competence through a single, simple question about medical proficiency identifies physicians with high clinical Spanish language competence with accuracy comparable to patient reports and to previously validated self-report measures.
Funding for this study was provided by R01 DK090272 from the National Institute of Diabetes and Digestive and Kidney Diseases (NIDDK). This study was an ancillary study of DISTANCE which provided additional support for data collection and analysis via DK081796, DK080726, and DK065664, from NIDDK and R01 HD046113 from the National Institute of Child Health and Human Development. Drs. Karter’s and Schillinger’s work was also supported by the Center for Diabetes Translational Research (P30DK092924). Dr. Fernandez was additionally partially supported by K24 DK102057 from the National Institute of Diabetes and Digestive and Kidney Diseases (NIDDK).