We believe this is the first systematic review to critically assess the quality of studies that determine whether educational interventions to improve the cultural competence of health professionals are associated with improved patient outcomes. This paper updates the 2005 findings of Beach et al.13 by examining new curricular offerings (adding four new studies) and provides an analysis of research quality. As is the case with many educational studies,20,24 researchers faced threats to external validity. Importantly, the majority of the studies did not provide sufficient information on the curricula, providers/learners or patients to allow replication. Many studies lacked descriptions of variables that may have influenced results; for providers, these include prior training, age, ethnicity, gender, baseline attitudes and skills, and motivation to participate in training. Patient factors were not adequately accounted for a priori. Provider and patient race and language concordance and their potential effects were not consistently reported. Generalizability of findings was limited because study communities and settings were often unique. Moreover, some studies had multiple objectives, with cross-contamination affecting patient outcomes and making it difficult to isolate the effect of provider training from system changes. The studies, albeit of limited quality, reveal a trend toward a positive impact on patient outcomes. However, overall, the current evidence is neither robust nor consistent enough to derive clear guidelines for CC training to generate the greatest patient impact. It is also possible that CC training as a standalone strategy is inadequate to improve patient outcomes and that concurrent systems-level changes, such as those directed at reducing errors or improving practice efficiency, and the inclusion of interpreters and community health promoters on the health care team, are needed to optimize its impact.
Our review used a comprehensive search strategy and a systematic process to assess study quality and identify potential reasons for inconsistent results. However, we were challenged to find an ideally designed tool for quality assessment. By using both the STROBE criteria29,30 (Appendix) and the MERSQI,20 we strove to maximize the validity of the quality review.
Synthesizing existing conceptual models of cultural competence with an established framework for evaluating methodological rigor in education research, we propose an algorithm (Fig.) as a guide to achieving excellence in methodological design. This model addresses both experimental (randomized, cluster-randomized or quasi-randomized) and field (pre/post case-control or cohort, or cross-sectional) designs. The algorithm is based first on the theoretical framework described by Cooper and colleagues,39 in which the quality of providers, including their cultural competence, is one of four mediators (the others being appropriateness of care, efficacy of treatment and patient adherence) of high-quality patient outcomes (categorized as health status, service equity and patient views of care). They stated that '… important limitations of previous studies include the lack of control groups, nonrandom assignment of subjects to experimental interventions, and use of health outcome measures that are not validated. Interventions might be improved by targeting high-risk populations, focusing on quality of care and health outcomes'. Second, we built on the model of methodological excellence advocated by Reed et al.,20 who noted that existing educational studies of the highest quality used randomized controlled designs, had high response rates, and employed objective data, valid instruments, and statistical methods that included appropriate subgroup analyses and accounted for confounding variables.
Fig. Suggested algorithm for educational studies on patient outcomes.
We recommend that educators consider the figure a realistic guiding roadmap. When designing the conceptual framework of proposed studies, we advocate that researchers consider the strength of the existing evidence linking cause and effect40 when performing sample size calculations, as well as the reproducibility and generalizability of the results (internal and external validity, respectively). We propose that the description of providers/learners include information on past training, demographics, cultural and linguistic background, baseline skills and attitudes, and the health system (context) within which they function. Patients should be characterized by medical condition, demographics, health literacy, language proficiency, health beliefs, socioeconomic background and other potential confounders. The curriculum implemented should be described in sufficient detail for replication, including core resources and teachers. The cost of training should be made explicit. Study designs should anticipate the type of subsequent analyses testing the relationship between the intervention and patient outcomes.29,39 Provider educational interventions are often distant from clinical outcomes, and subjective constructs such as patient trust and the quality of the patient experience, assessed with validated measures,41 have emerged as outcomes of intrinsic value that should also be considered in the cause-effect dynamic. In addition to traditional objective clinical indicators, outcomes should include process measures of the patient-physician partnership,42,43 which may be considered intermediate or standalone goals in the attainment of the best health care quality. All reasonable confounders should be captured to rule out alternative hypotheses and increase confidence in the results. Heterogeneity of providers and patients should be accounted for by subgroup analyses when reporting results, as CC training may have differential effects on different patients by ethnicity or disease. The durability of training effects on patient outcomes should be tested. Where system interventions other than provider training are concurrently introduced, three separate study arms may be needed to isolate the effect of training from the system change. As our systematic review revealed, these standards have not been adequately met. However, two registered trials currently underway (results pending) appear to meet many of the suggested criteria for experimental design, and results are eagerly awaited44,45 (personal communication with first author).
In an era of systems interventions, using models such as the patient-centered medical home,46,47 interprofessional education48 and teamwork training49 to achieve high-quality care, a purely disease-oriented approach with attention only to clinical interventions is no longer adequate. Educators can and should take up the challenge of isolating specific training strategies as cost-effective and sustainable interventions for improving health care quality, particularly for chronic diseases. However, the quality of educational research has been shown to be directly associated with study funding,29 and we acknowledge that prohibitive cost, noted as a limiting factor in at least one study we assessed,36 limits the implementation of rigorous studies. Solutions to improve study quality include increasing power through multi-institutional studies, similar to multicenter clinical trials, and the development of multi-institutional shared databases.50 For curricula that apply across health professions (such as cross-cultural communication skills), providers/learners could be combined and results analyzed by subgroup; this approach allows resources to be pooled, reducing cost. Curricular standardization and quality control can be better achieved when materials are developed and delivered by expert groups with rigorous peer assessment. Training materials should be based on transparent, evidence-based, reproducible and validated techniques that incorporate attention to baseline competencies. As materials are developed, universal, affordable access would help advance the field.
In conclusion, we assert that there is a critical need for increased resources to examine education as an independent intervention to improve health outcomes. The same level of planning, attention and scrutiny should be invested in comparative efficacy studies of educational interventions as in clinical and health services research. In light of our findings and proposed algorithm, a modified or new validated tool for evaluating the quality of such studies would be desirable.