Despite widespread endorsement of patient-centered communication (PCC) in health care, there has been little evidence that it leads to positive change in health outcomes. The lack of correlation may be due either to an overestimation of the value of PCC or to a measurement problem. If PCC measures do not capture elements of the interaction that determine whether the resulting care plan is patient-centered, they will confound efforts to link PCC to outcomes.
To evaluate whether one widely used measure of PCC, the Roter Interaction Analysis System (RIAS), captures patient-centered care planning.
RIAS was employed in the coding of unannounced standardized patient (USP) encounters that were scripted so that the failure to address patient contextual factors would result in an ineffective plan of care. The design enabled an assessment of whether RIAS can differentiate between communication behavior that does and does not result in a care plan that takes into account a patient’s circumstances and needs.
Eight actors role playing four scripted cases (one African American and one Caucasian for each case) in 399 visits to 111 internal medicine attending physicians.
RIAS measures included composites for physician utterance types and (in separate models) two different previously applied RIAS patient-centeredness summary composites. The gold standard comparison measure was whether the physician’s treatment plan, as abstracted from the visit note, successfully addressed the patient’s problem. Mixed effects regression models were used to evaluate the relationship between RIAS measures and USP measured performance, controlling for a variety of design features.
None of the RIAS measures of PCC differentiated encounters in which care planning was patient-centered from those in which it was not.
RIAS, which codes each utterance during a visit into mutually exclusive and exhaustive categories, does not differentiate between conversations leading to and not leading to care plans that accommodate patients’ circumstances and needs.
While patient-centered communication (PCC) is widely regarded as an essential component of high quality health care,1,2 there has been little evidence that it is associated with improved patient outcomes. The lack of a correlation has been attributed to variability in how the concept is defined and measured.3,4 Studies comparing various measures of PCC show poor correlations among them. Hence, although PCC is highly valued, there remains a gap between the anticipated and demonstrated benefit of PCC on health care quality.5
An important factor that determines the effectiveness of patient-centered communication is whether or not the communication results in a patient-centered plan of care. One potential explanation for the lack of relationship between PCC and patient outcomes is that the tools for measuring PCC are not capturing elements of the interaction that determine whether the actual care plan is patient-centered. Consider, for instance, a patient presenting with worsening asthma. Without consideration of patient context, such as medication adherence, the care plan would focus on reducing triggers and intensifying medication therapy. During the encounter, however, the patient comments, “Boy, it’s been tough since I lost my job.” Compare two alternative responses: In the first, the physician replies with the supportive comment, “Yes, that must be tough; the economy has been really rough for people lately. I’m sorry to hear of your difficulties,” but fails to consider the link between worsening asthma and the patient’s economic situation. In the second, the physician inquires about the patient’s circumstances, asking “How has it been tough since you lost your job?” and identifies the underlying problem, which is that the patient has lost insurance coverage and is unable to afford a brand name medication. She prescribes a lower cost generic. Although both responses show a sensitivity to the patient’s situation and may be classified as patient-centered, only the latter adapts the care plan to the patient’s context.
A patient-centered plan of care is one that adapts research evidence to essential patient context. “Research evidence” refers to best practices, grounded in clinical studies. Clinical studies are inherently decontextualized, meaning that they are not designed to take into account the idiosyncratic circumstances, needs and preferences of individual patients. “Essential patient context” refers to the circumstances, needs and preferences of a particular patient that, if disregarded, would confound an otherwise appropriate care plan.
One of the most widely used methodologies for assessing PCC is the Roter Interaction Analysis System (RIAS), which assigns each complete thought, or utterance, by the patient and the physician into mutually exclusive and exhaustive categories of communication.6 For studies of PCC, the categories are grouped into clusters that characterize both patient-centered and doctor-centered behaviors. For instance, patient-centered behaviors include all social talk by physicians and patients, all physician open-ended questions, and all patient questions. Doctor-centered communication includes all closed-ended biomedical questions by physicians, and biomedical information giving. A ratio of these behaviors provides a metric for assessing the extent to which a visit is characterized by PCC. RIAS has been particularly useful because it quantifies PCC with a high degree of inter-rater reliability.7,8 There are 248 published studies that utilized RIAS to analyze provider-patient communication, 29 with a focus on PCC.9
In this study, we explore the extent to which patient-centered communication behaviors, as quantified by RIAS, predict patient-centered care planning. Because so many studies characterizing PCC as a discrete set of behaviors have failed to demonstrate a link with patient outcomes, we hypothesized a weak or absent association. The application of RIAS to the example above illustrates the problem: Responding to the statement “Boy, it’s been tough since I lost my job” with a sympathetic acknowledgment of the patient’s plight is coded as “empathic socioemotional exchange;” the comment about the rough economy is “legitimizing.” Both are classified in RIAS as patient-centered behaviors. Conversely, an instruction to discontinue a brand name medication and switch to a generic is coded as a task-focused biomedical information giving utterance and classified as a doctor-centered behavior. Yet it is the latter communication that leads to a patient-centered care plan.
This study assesses RIAS as a tool for measuring PCC against a gold standard measure—the performance of physicians seeing unannounced standardized patients (USPs) presenting cases that either were or were not embedded with patient factors essential to planning patient-centered care. USPs are considered a gold standard measure of physician performance because the assessment occurs in the practice setting, the clinician is unaware when they are being assessed, and the “patients” are intrinsically risk adjusted—meaning that they present an “equivalent and objective standard for comparing practicing physicians”.10,11 Our premise is that an instrument that measures PCC should be able to predict which interactions will conclude with a care plan that incorporates the patient’s needs and circumstances.
Four cases were developed, each with four variants, one of which was an uncomplicated variant requiring only application of guidelines for appropriate care. The other three were “complicated,” with the addition of atypical biomedical, contextual, or both biomedical and contextual information essential to planning appropriate care. The uncomplicated variant consists of a typical presentation of a common clinical problem. The complicated variants are similar except that the patient provides, if asked, additional information that challenges the provider to broaden their differential. In all four variants, the actor drops the same hints, or “red flags,” that point to possible biomedical and contextual factors that could be essential to planning appropriate care (e.g., “Boy, it’s been tough since I lost my job”). In the uncomplicated variant, if asked, the actor provides reassurance that the red flags are only “false alarms” (e.g., “Oh, I’m actually on my wife’s insurance, so health care is not a problem”). Otherwise, depending on the variant, he or she reveals biomedical or contextual information that should alter the plan of care. Further details of case development and study design are described in a prior publication, in which we first reported the contextual error rates of the physicians in this study.12
The assessment of performance in the management of the uncomplicated and biomedically complicated variants of cases is based on guidelines grounded in research evidence. However, assessment of performance in the management of the contextually complex variants—where appropriate care requires not following guidelines—required a novel method of case development and validation which is detailed elsewhere.13 In brief, contextual factors were added to scripts to render the evidence-based plan for the uncomplicated variant no longer appropriate. In the asthma case, appropriate management in the uncomplicated variant involves stepping up medication therapy in response to unavoidable environmental irritants. In the contextual variant, the physician must recognize that the patient’s inability to afford his medication is the underlying problem to address.
Evidence for the validity of the criteria for scoring physician performance was developed by randomly distributing web links to scripts of either the baseline or contextual version of each case to 16 board certified internal medicine primary care physicians. Physicians were asked to plan care for each case, with instructions to “strive for optimal care, but avoid recommendations that are not necessary for optimal care.” The contextual versions of cases were considered valid instruments for assessing physician performance at implementing a patient-centered plan of care when four out of four physicians who had not seen previous iterations of the case independently agreed that the addition of the contextual information would require a deviation from the standard guideline approach to care. Hence, for the asthma case, the addition of the information about the patient’s non-adherence to a brand name medication following loss of health insurance uniformly triggered physicians to recommend addressing the cost issue rather than simply increasing the medication currently prescribed. We developed four such cases, which are summarized in Table 1.
Eight unannounced standardized patients (USPs) were employed in this study, one Caucasian and one African American for each case, trained at the University of Illinois at Chicago Dr. Allan L. and Mary L. Graham Clinical Performance Center, a specialized facility for standardized patient training and testing. The actors presented as real patients in primary care internal medicine practices, surreptitiously audio-recording the visit. Following each visit, the fidelity of the actor’s portrayal was verified by listening to the encounter, and by completing a checklist of the USP behaviors essential to the script. All encounters occurred between April 2007 and April 2009.12 The institutional review boards of the University of Illinois at Chicago, the Jesse Brown VA Medical Center, and all affiliates approved the study.
We invited 152 attending physicians at 14 practice locations in two cities. Participants provided demographic information on their age, ethnicity, gender, medical education, and clinical experience. They were randomly assigned to one of 16 permuted blocks that combined four cases and variants in a partial factorial arrangement, so that each physician was assigned one of each of four cases with a different variant in each, for a total of four visits per physician.12 Following each encounter, after the physician had written their note, they were notified that they had seen a standardized patient and asked via email whether they had believed the patient was authentic.
The primary USP measure was whether the physician’s plan of care successfully addressed the USP’s care needs (incorrect or correct), for each case variant. This information was extracted from the physician’s note using checklists. Secondary measures included whether the physician probed the biomedical red flag (yes or no) and the contextual red flag (yes or no). These were coded off of the audio recordings.
RIAS coding was conducted at Johns Hopkins University, supervised by Debra Roter, originator of the coding system. In addition to RIAS coding, the global affect of the physician and patient in the encounter was coded by the RIAS raters, using a set of five-point scales measuring the level of different types of affect in the encounter. Interrater agreement was checked by having a second rater score a random subsample of 10 % of the audio recordings, and was high (average correlation=0.86). RIAS raters were blinded to the assignment of physicians to conditions.
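The interrater check described above amounts to correlating two raters’ scores over a common subsample. A minimal sketch of that computation (the function name and the rating vectors below are illustrative placeholders, not the study’s data):

```python
def pearson(x, y):
    """Pearson correlation between two equal-length rating vectors."""
    n = len(x)
    mean_x, mean_y = sum(x) / n, sum(y) / n
    cov = sum((a - mean_x) * (b - mean_y) for a, b in zip(x, y))
    var_x = sum((a - mean_x) ** 2 for a in x)
    var_y = sum((b - mean_y) ** 2 for b in y)
    return cov / (var_x * var_y) ** 0.5

# Hypothetical affect ratings by two raters on the same five encounters
rater1 = [3, 4, 2, 5, 4]
rater2 = [3, 5, 2, 4, 4]
agreement = pearson(rater1, rater2)
```

In practice the reported figure would be such a correlation averaged across the affect scales.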
Utterances were then grouped into “composites” (Table 2). In this study, for instance, the RIAS encoding team grouped closed-ended and open-ended questions related to diagnosis and therapy into a “Biomedical data gathering composite.” At the bottom of the table, these composites are combined to create two RIAS PCC summary measures used in RIAS studies of patient-centered communication (PTCENT1 and PTCENT2).14 In both PTCENT1 and 2, the numerator contains behaviors associated with patient-centered communication, and the denominator contains behaviors associated with physician-centered communication. They differ only in regards to whether physician biomedical information giving is counted as patient-centered or doctor-centered behavior.
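Both summary measures are ratios of composite utterance counts. A minimal sketch of how such a ratio might be computed per encounter (the composite names and counts here are hypothetical simplifications, not the exact PTCENT definitions):

```python
def ptcent_ratio(patient_centered_counts, doctor_centered_counts):
    """Ratio of patient-centered to doctor-centered utterance counts.

    Each argument maps a composite name to its utterance count for one
    encounter; the grouping into numerator and denominator is simplified
    here for illustration.
    """
    numerator = sum(patient_centered_counts.values())
    denominator = sum(doctor_centered_counts.values())
    return numerator / denominator if denominator else float("inf")

# Hypothetical composite counts for one encounter
pc = {"psychosocial_questions": 6, "emotional_talk": 10, "patient_questions": 4}
dc = {"closed_biomedical_questions": 12, "biomedical_info_giving": 28}
score = ptcent_ratio(pc, dc)
```

Moving physician biomedical information giving between the numerator and denominator is what distinguishes PTCENT1 from PTCENT2.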
The frequency of each type of utterance as coded by the RIAS composites for physician and patient speech were summarized across all encounters, and separately for those encounters in which the physician’s care plan was correct or incorrect. RIAS patient-centeredness summary measures PTCENT1 and PTCENT2 were computed for each encounter.
We fitted mixed effects logistic regression models to each of the USP measures with full likelihood estimation using SAS 9.2. All models controlled for clustering of encounters within physicians, as well as a variety of design features, such as case, complications present (biomedical, contextual, or both, coded as main effects and an interaction), USP race, clinical site, whether the physician completed all four scheduled study visits, whether the physician failed to provide demographic information in the study, face time between physician and USP, whether the physician responded to our post-visit question about whether they believed they had seen a USP and, if so, whether the physician responded positively to that question. In one set of models, PTCENT1 was also included as a potential predictor; in the other set, PTCENT2 was included as a potential predictor. All odds ratios reported are adjusted for the presence of all variables in the model, and reported with 95 % confidence intervals. Sample size calculations suggested that our sample provides 80 % power to detect an adjusted odds ratio of approximately 1.4 for either predictor.
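The original models were fit in SAS 9.2; a rough analogue in Python, using statsmodels’ variational Bayes mixed GLM with a random intercept per physician and simulated stand-in data (all variable names and values here are illustrative, not the study’s), might look like:

```python
import numpy as np
import pandas as pd
from statsmodels.genmod.bayes_mixed_glm import BinomialBayesMixedGLM

# Simulated stand-in data: 50 physicians, 4 encounters each
rng = np.random.default_rng(0)
n_phys, n_enc = 50, 4
physician = np.repeat(np.arange(n_phys), n_enc)
phys_effect = rng.normal(0, 0.5, n_phys)[physician]  # clustering within physician
ptcent1 = rng.normal(1.0, 0.3, n_phys * n_enc)       # patient-centeredness score
logit = -0.5 + 0.2 * ptcent1 + phys_effect
correct = rng.binomial(1, 1 / (1 + np.exp(-logit)))  # correct-plan indicator

data = pd.DataFrame({"correct": correct, "ptcent1": ptcent1,
                     "physician": physician})

# Logistic mixed model: fixed effect for PTCENT1,
# random intercept for each physician
model = BinomialBayesMixedGLM.from_formula(
    "correct ~ ptcent1", {"phys": "0 + C(physician)"}, data)
result = model.fit_vb()
aor = np.exp(result.fe_mean[1])  # adjusted odds ratio for PTCENT1
```

The study’s actual models additionally adjusted for the design covariates listed above (case, complication type, USP race, site, face time, and so on).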
We also fitted exploratory models in which each individual RIAS composite variable was included as the potential predictor. These models included the same clustering and design controls.
Of the 152 invited physician participants, 21 declined, and we were unable to schedule appointments with 20, leaving 111 participants. Of these, 92 saw four USPs, 3 saw three, 6 saw two, and 10 saw just one. 81 % reported they thought they were seeing a real patient. Their plans were correct in 73 % of the uncomplicated baseline encounters, 38 % of the biomedically complicated encounters, 22 % of the contextually complicated encounters, and 9 % of the biomedically and contextually complicated encounters.12
Tables 3 and 4 present the frequency of utterance types for physicians and USPs, respectively, across all encounters, and separately for those encounters in which the physician’s plan was correct vs. incorrect. Encounters with correct plans appeared to have slightly higher median numbers of physician utterances classified as patient engagement and positive rapport-building, and higher median numbers of patient utterances classified as information giving and positive rapport-building.
Neither PTCENT1 nor PTCENT2 summaries significantly predicted the primary USP measure, an incorrect plan of care (Table 5). The adjusted odds ratio (AOR) for PTCENT1 was 1.3 (95 % CI [0.4, 4.5]) and the AOR for PTCENT2 was 1.0 ([0.6, 1.5]). Higher PTCENT1 scores were, however, positively associated with physician probing of contextual red flags (AOR=4.1, [1.2, 14.7]) and negatively associated with physician probing of biomedical red flags (AOR=0.04, [0.01, 0.1]). Higher PTCENT2 scores were negatively associated with physician probing of biomedical red flags (AOR=0.5, [0.4, 0.8]), but not significantly associated with physician probing of contextual red flags.
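For reference, adjusted odds ratios and their confidence intervals such as those above are standard transformations of a logistic model’s coefficient and standard error. A minimal sketch of that conversion (generic formula with made-up inputs, not the study’s estimates):

```python
import math

def odds_ratio_ci(beta, se, z=1.96):
    """Convert a log-odds coefficient and its standard error into an
    odds ratio with an approximate 95% confidence interval."""
    return (math.exp(beta),
            math.exp(beta - z * se),
            math.exp(beta + z * se))

# A coefficient of 0 corresponds to an odds ratio of exactly 1
or_, lo, hi = odds_ratio_ci(0.0, 0.2)
```

An interval spanning 1 (as for PTCENT1 and PTCENT2 above) indicates no statistically significant association.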
In exploratory analyses, higher numbers of psychosocial information-gathering questions by the physician were associated with a decreased likelihood of probing biomedical red flags in the encounter (AOR=0.95 [0.91, 1.0]). Social rapport-building was associated with an increased likelihood of probing the biomedical red flag (AOR=1.06, [1.02, 1.09]). No RIAS physician composites were associated with correct care planning or probing contextual red flags.
RIAS global affect ratings of physicians did not predict correct care plan or probing of biomedical red flags, but several were associated with probing of contextual red flags, including sympathy (AOR=3.1 [1.9,5.1]), assertiveness (AOR=1.6 [1.0, 2.6]), attentiveness (AOR=1.9 [1.0, 3.5]) and respectfulness (AOR=2.2 [1.4, 3.6]).
We found little evidence that the Roter Interaction Analysis System (RIAS) could differentiate between physicians who succeed or fail at planning patient-centered care. We interpret these findings as evidence that RIAS is not measuring all potential domains of patient-centered communication, particularly those related to tailoring patient-centered care plans.
A limitation of the RIAS focus on classifying and counting utterances is that it does not consider the substance of an interaction. The association of RIAS coded PCC with physician probing of relevant patient context suggests that, in general, physicians who do more psychosocial data gathering, emotional rapport building, etc., are more likely to ask the right questions about patient context (but, conversely, may be less likely to discover biomedical complications). By “right questions” we mean those that elicit the essential information necessary to adapt an otherwise uncomplicated care plan to the patient’s context. We have previously identified ten contextual domains in patients’ lives to explore depending on the presentation, including their economic situation, competing responsibilities, social support, etc.15 The clinician who homes in on the right questions is, paradoxically, categorized as less patient-centered if the overall number of RIAS PCC utterances is, as a result, smaller.
Once the essential patient context is elicited, planning patient-centered care requires incorporating the information into the care plan. For the asthma case, this requires addressing the patient’s inability to afford medication rather than increasing dosing of the current medication; for the woman seeking elective hip surgery because she thinks she will take better care of her dying son—not appreciating the extent to which she herself will be temporarily disabled—it requires educating her and postponing the surgery; for the man with severe health numeracy problems whose diabetes has been out of control since he lost his social support, it means accommodating his cognitive limitations, rather than just increasing his medication; and for the man with weight loss since he became homeless and unable to access adequate food, it requires addressing his nutritional needs rather than working up an underlying malignancy (Table 1).
There are limitations to our method for assessing patient-centered care planning. First, we assessed the physician’s performance at addressing just one contextual issue in each case. RIAS codes the physicians’ behavior across the entire encounter, and it is possible there were other issues that physicians addressed. However, given the limited latitude for actors to embellish their scripts, unplanned psychosocial issues were uncommon. Second, we assessed RIAS with only four cases. RIAS might be more predictive, for instance, when a patient-centered care plan consists only of providing reassurance and education, such as when the contextual complication is a patient’s misunderstanding about a treatment plan. Finally, an assumption of this study is that tailoring patient care plans is in fact a significant aspect of patient-centered communication.
Arriving at a patient-centered care plan is a nuanced process that depends on the precision of the physician’s listening and processing of contextual information, follow-up questions, and their sensitivity to each patient’s strengths and limitations with regard to communication and self care. Although we explored only RIAS, we have similar concerns with a range of PCC measures which, as Epstein and colleagues have noted, consist of coding systems, interactional analyses, checklists and rating scales.1 Insofar as these systems focus on physician behaviors without attention to the logic of the interaction—whether the clinician is responding constructively to each patient’s particular needs and circumstances—they are unlikely to be either sensitive or specific for PCC.
Alan Schwartz and Saul Weiner are owners of a company that provides management consulting services to health care providers and institutions interested in collecting customer service and performance data using methods employed in this study (unannounced standardized patients). They have not to date received consulting fees, honorarium, contracts or other payments. The remaining authors declare that they do not have a conflict of interest.
This study was supported by Veteran Affairs, Health Services Research and Development.