Patients want all their concerns heard, but physicians fear losing control of time and interrupt patients before all concerns are raised.
We hypothesized that when physicians were trained to use collaborative upfront agenda setting, visits would be no longer, more concerns would be identified, fewer concerns would surface late in the visit, and patients would report greater satisfaction and improved functional status.
Post-only randomized controlled trial using qualitative and quantitative methods. Six months after training (March 2004 – March 2005), physician-patient encounters in two large primary care organizations were audiotaped, and patients (1460) and physicians (48) were surveyed.
Experimental physicians received training in upfront agenda setting through the Establishing Focus Protocol, including two hours of training and two hours of coaching per week for four consecutive weeks.
Outcomes included agenda setting behaviors demonstrated during the early, middle, and late encounter phases, visit length, number of raised concerns, patient and physician satisfaction, trust and functional status.
Experimental physicians were more likely to make additional elicitations (p<0.01) and their patients were more likely to indicate agenda completion in the early phase of the encounter (p<0.01). Experimental group patients and physicians raised fewer concerns in the late encounter phase (p<0.01). There were no significant differences in visit length, total concerns addressed, patient or provider satisfaction, or patient trust and functional status.
Collaborative upfront agenda setting did not increase visit length or the number of problems addressed per visit but may reduce the likelihood of “oh by the way” concerns surfacing late in the encounter. However, upfront agenda setting is not sufficient to enhance patient satisfaction, trust or functional status. Training focused on physicians instead of teams and without regular reinforcement may have limited impact in changing visit content and time use.
Recent studies suggest it would take 24 hours per day for a primary care physician to address all the acute, chronic and preventive needs of a panel of 2,500 patients.1-3 Providers are understandably concerned about having sufficient time to address all their patients’ issues during their visits.4,5 For their part, patients want to voice all their concerns, but research has shown that their physicians interrupt them on average within 18–23 seconds from the start of their interactions.6,7 In this study, we taught physicians to use The Establishing Focus protocol (EF)—a collaborative upfront agenda setting model designed for use in primary care settings. The model was designed by one of our authors (LM) to support the patient-provider relationship by addressing provider concerns about time management and patient needs to have all their concerns heard. Physicians are encouraged to elicit and prioritize patient concerns and, when necessary, use negotiation to develop a mutually agreed upon agenda. EF showed promise in a pilot study,8 and variants9,10 of EF are used for training medical students, residents and community physicians in many settings. We assessed EF’s use in practice and its impact on visit length, patient and provider satisfaction, agenda size, and how agenda topics were raised over the course of encounters. This experimental effort is the first of which we are aware that tests the impact of training clinicians to elicit patients’ reasons for their visits at the outset of the encounter.
According to self-determination theory, patients demonstrate more motivation to adhere to recommended treatments and have better health outcomes if their providers afford them choice, encourage self-initiation, provide a rationale for recommended actions, and accept their decisions.11,12 We predicted that if a full list of patient concerns was elicited and collaboratively prioritized with provider concerns: a) patients would be more satisfied and report better functional status, b) visits would be no longer than control group visits even if more problems were mentioned, and c) fewer new problems would be raised in the closing moments of EF encounters, as demonstrated in previous observational studies.13,14
We conducted a randomized controlled post-only educational intervention with patients clustered within physicians to examine the qualitative and quantitative outcomes of EF training and usage (Fig. 1).
This design randomly assigns subjects to condition but collects response data only at follow-up. The design is less costly and intrusive to its subjects while providing better control of confounds, including history, maturation, and instrumentation. The intervention consisted of two phases informed by previous successful randomized trials to improve communication skills15-17: 1) a two-hour group training session led by Mr. Mauksch, which included an overview of the protocol, a videotape demonstration, role-play practice, and moderated group discussion, and 2) coaching by trained behavioral scientists, who shadowed the physicians for two hours per week over four weeks. At the training session, physicians received an EF handbook. After the first coaching session they received a video and a cue card detailing EF behaviors. Because Mr. Mauksch developed EF and trained the study participants, he did not take part in data coding activities, to avoid biasing our conclusions.
The trained sequence of EF skills included: 1) orienting the patient to the EF process, 2) asking the patient to list concerns, 3) making space for pressing patient stories early on, when necessary, 4) avoiding premature ‘diving’ into diagnostic sequences or patient story telling before a full agenda has been set, 5) asking the patient to prioritize their concerns, 6) when necessary, negotiating priorities with the patient, and 7) seeking confirmation and commitment from the patient. Cognitive cues, after skills 1 and 4, reminded physicians to “ask yourself whether you feel able to address all the patient's concerns” and, if not, to use prioritization, negotiation, and a follow-up appointment for deferred issues.
Participants Between July 2003 and October 2004 we invited all physicians (n=75) in a convenience sample of 12 community-based primary care clinics serving the Puget Sound region to participate in this study. A total of 59 (79%) physicians consented to participate. Physicians were randomly assigned to the intervention or the control group stratified by clinic and gender. Forty-eight physicians participated in all aspects of the study (Table 1). Thirty-one worked in a university-affiliated primary care network consisting of eight neighborhood clinics. Seventeen physicians worked in a consumer-governed, nonprofit health care system. Physicians received CME credits and a $150 payment for participation. Institutional review boards at each institution approved the study.
Patients were recruited approximately 6 months following completion of physician EF training (March 2004 – March 2005). Eligibility criteria included: being 18 years or older, acting as their own legal guardian, having seen the physician at least twice in the previous two years, having no serious cognitive impairment, and fluency in English. Clinic staff advised study coordinators when eligible patients arrived. The majority (71%) of patients approached agreed to participate. Most (98%) participants completed the study questionnaires following the visit. Concerns about the burden of completing the survey were uncommon (<5%). Visits were audio recorded and patients were paid $20. Prior to analysis, scheduled health maintenance exams were removed from the data set because the physician view of agenda setting in these visits is heavily influenced by quality of care criteria. Thirteen patients were removed because their encounters were shorter than three minutes. Our final data set included an average of 30 visits per physician (n=1460) (Table 2).
Patient Questionnaires Questionnaires were established research instruments selected to represent a wide range of variables important to describing patient physical and mental status, satisfaction, trust, and perceptions of the patient/physician relationship. Patient self-report instruments included: 1) the SF-8 (24-hour version), a functional status measure with sub-scales for physical and mental health, 2) the Primary Care Evaluation of Mental Disorders (PRIME-MD)18 depression sub-scale, 3) the Patient Health Questionnaire (PHQ-15)19 to assess somatization, 4) the Medical Outcomes Study Participatory Decision-Making Scale (PDM)20 assessing perceptions of physician decision-making style, 5) the Health Care Climate Questionnaire (HCCQ)21 assessing beliefs regarding physician supported autonomy, 6) the trust sub-scale of the Primary Care Assessment Survey (PCAS)22 assessing confidence in physician integrity and competence, and 7) the Mauksch et al.8 scale used to assess patient satisfaction in the EF pilot study. We hypothesized these scales would be positively impacted when a physician adopted EF skills. The PDM, HCCQ, PCAS trust, and Mauksch scales provide a multidimensional view of patient satisfaction.
Physician Questionnaires Immediately following each patient encounter, physicians completed a self-report questionnaire assessing satisfaction with the visit and perceptions of difficulty experienced with the patient. This questionnaire included a subset of six items developed from the Difficult Doctor Patient Relationship Questionnaire (DDPRQ)23 used to assess physicians’ perceptions of the relationship they held with their patients.
Audio Coding Trained coders coded each encounter for the presence of key linguistic and quantitative data, including patient and physician raised concerns and EF behaviors. All coding achieved acceptable inter-rater reliabilities (kappas>0.70 for patient and physician raised concerns; kappas>0.60 for EF behaviors). Random sampling was used to select a sufficiently powerful sample of encounters for estimating patient voiced concerns. Purposeful sampling was used for the qualitative linguistic assessments. Funding constraints did not allow coding of all audio files.
Establishing Focus Protocols Four trained raters, blinded to condition, listened to 936 audio files selected using purposeful sampling and coded for the presence of the EF behaviors taught in training. Since the full protocol was rarely demonstrated, but specific behaviors were present in many encounters, we defined adoption of collaborative upfront agenda setting as exhibiting or not exhibiting one of three behavioral combinations: 1) physician requested a list of concerns OR initiated an additional elicitation AND the patient indicated that they had completed listing their concerns; 2) physician asked for a list of concerns OR initiated an additional elicitation AND demonstrated negotiation or prioritization; 3) physician made multiple additional elicitations OR asked for a list of concerns multiple times.
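The three behavioral combinations defining adoption can be expressed as a simple decision rule. A minimal sketch in Python follows; the field names are hypothetical illustrations, not the study's actual coding variables:

```python
def adopted_upfront_agenda_setting(enc):
    """Return True if a coded encounter meets any of the three behavioral
    combinations used to define adoption of collaborative upfront agenda
    setting. `enc` is a dict of coded counts and flags (hypothetical names)."""
    asked_list = enc["asked_for_list"] > 0
    additional = enc["additional_elicitations"]
    # 1) request for list OR additional elicitation, AND patient indicated completion
    combo1 = (asked_list or additional > 0) and enc["patient_indicated_completion"]
    # 2) request for list OR additional elicitation, AND negotiation or prioritization
    combo2 = (asked_list or additional > 0) and (enc["negotiation"] or enc["prioritization"])
    # 3) multiple additional elicitations OR multiple requests for a list
    combo3 = additional > 1 or enc["asked_for_list"] > 1
    return combo1 or combo2 or combo3
```

This framing treats adoption as encounter-level and binary, mirroring the paper's "exhibiting or not exhibiting" definition.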
Physician and Patient Raised Concerns Five trained raters, blinded to provider condition, coded randomly selected audio files (n=746) for patient and provider raised concerns.
Time Spent with Physician Physician entry and exit times were recorded for every audible file longer than three minutes (n=1282). Total face-to-face time spent with the physician was calculated.
Skill Use and Concerns by Phase of Encounter We assigned each behavior and concern to one of three encounter phases: the first third of the encounter (capped at 300 seconds from the start), the last third (capped at the final 300 seconds of the encounter), and the middle (the time remaining after removal of the first and last phases). We coded each demonstrated EF behavior and each physician or patient raised concern as occurring in one of these phases. This coding was completed for a random subset of encounters (n=679).
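The phase-assignment rule can be sketched as a small function. This is an illustration of the rule as described, assuming each outer phase is one third of the encounter capped at 300 seconds; exact boundary handling in the study's coding scheme is not specified:

```python
def encounter_phase(t, total):
    """Assign a timestamp t (seconds from physician entry) to an encounter
    phase. The early and late phases are each one third of the total
    encounter length, capped at 300 seconds; the middle is the remainder."""
    outer = min(total / 3.0, 300.0)  # duration of each outer phase
    if t < outer:
        return "early"
    if t >= total - outer:
        return "late"
    return "middle"
```

For a 10-minute (600-second) visit each outer phase spans 200 seconds, while for a 20-minute visit the 300-second cap applies.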
The patient/physician encounter represented our primary unit of analysis. Exploratory analysis of unadjusted comparisons between the intervention and the control groups were first conducted to assess differences using SPSS 15.0 and included t-tests and analysis of covariance (ANCOVA) tests controlling for attitudes and functional status in delineating the relationship between patient variables and coded behaviors. When significant associations were not revealed we elected to report bivariate relationships examining mean differences. Because encounters were clustered within physicians possibly resulting in significant intra-class correlations, and a violation of independence among study subjects, hierarchical linear model analyses were used to generate unbiased estimators. Analyses were conducted using the SPSS 15.0 MIXED procedure. Means, confidence intervals, and statistical tests reported for patient level data reflect these unbiased estimates.
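For readers outside SPSS, the clustering logic can be sketched as a random-intercept model in Python's statsmodels. The data frame, column names, and values below are illustrative assumptions, not the study's data:

```python
import pandas as pd
import statsmodels.formula.api as smf

# Illustrative data: one row per encounter, encounters clustered within physicians.
df = pd.DataFrame({
    "visit_length": [850, 920, 780, 810, 900, 760, 880, 840, 795, 905, 830, 770],
    "group":        [1,   1,   1,   0,   0,   0,   1,   1,   1,   0,   0,   0],
    "physician":    [1,   1,   1,   2,   2,   2,   3,   3,   3,   4,   4,   4],
})

# Random-intercept model: a fixed effect for training condition plus a
# per-physician random intercept, analogous in spirit to the SPSS 15.0
# MIXED procedure the study used to obtain unbiased estimates.
model = smf.mixedlm("visit_length ~ group", df, groups=df["physician"])
result = model.fit()
print(result.summary())
```

The per-physician random intercept is what accounts for intra-class correlation, so the standard error on `group` reflects clustering rather than treating the 1460 encounters as independent.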
Different sampling techniques were used to address questions to be answered by the data. For example, random sampling was employed for counting concerns and purposeful sampling was used for coding EF behaviors. These different sampling techniques represent the authors’ attempts to best sample the data for the specific analyses being conducted.
Visit Length Adjusted times for control physicians averaged 886.36 seconds (95% CI=805.79 – 966.94) while the trained physicians averaged 796.79 seconds (95% CI=720.73 – 872.85). This 90-second difference was not statistically significant (p=0.10).
Adoption Table 3 shows that trained physicians (mean=0.26, 95% CI=0.18 – 0.34) were significantly (p<0.001) more likely than control physicians (mean=0.13, 95% CI=0.05 – 0.21) to demonstrate one of the three behavioral sets described as upfront agenda setting. Trained physicians made significantly more (p=0.01) additional elicitations (mean=0.20, 95% CI=0.15 – 0.26) than control physicians (mean=0.10, 95% CI=0.04 – 0.15) in the first interview phase. The patients of trained physicians (mean=0.11, 95% CI=0.08 – 0.14) were significantly more likely (p=0.01) to indicate that the naming of concerns was complete during the first phase than were control patients (mean=0.05, 95% CI=0.02 – 0.07). However, other upfront collaborative agenda setting behaviors taught in the protocol were negligible in both the control and intervention groups (all ps>0.05).
No differences were seen between the control and experimental groups on adjusted means for number of concerns voiced in Phase 1 or Phase 2 or for the total number of concerns voiced across the encounter (all ps>0.10). However, the recorded total concerns for Phase 3 were significantly higher (p=0.01) in the control group (mean=1.96, 95% CI=1.66 – 2.27) than in the experimental group (mean=1.46, 95% CI=1.18 – 1.73) due to increased concerns named by physicians and patients (Table 4).
Physician Satisfaction with Patient Encounters EF training had no effect on physician reported difficulty or satisfaction (all ps>0.10) with the encounter.
Patient Self-Reported Health Status and Experience Significant differences (Table 5) were not found for the SF-8 Physical score, the PHQ Depression Severity Index or Panic score, the HCCQ, the PCAS, the PDM, the PHQ-15 somatization score, or satisfaction items (all ps>0.10). Control encounters did, however, differ significantly (p=0.01) from intervention encounters on the SF-8 Emotional score.
Training in upfront agenda setting did demonstrate a measurable impact on physician and patient variables. In the early phase of the visit experimental group physicians were more likely to ask for additional concerns and patients were more likely to indicate that they had named a complete list of concerns. Patients and physicians brought up fewer problems in the third phase of experimental encounters. Experimental visits were 90 seconds shorter with 0.4 fewer concerns. While these two differences were not significant, they may have clinical importance and deserve further study. Training did not influence patient satisfaction, trust, patient functional status, or perceptions of the patient/physician encounter.
Upfront agenda requests did not increase the total number of concerns per encounter compared to the control group. Some evidence suggests that early interruptions may not affect the number of concerns ultimately named by patients.24 One explanation for our findings is that an implicit, mutual contract is created between physician and patient about what will be covered in the encounter when upfront elicitations are used. EF patients were more likely than non-EF patients to indicate agenda completion (e.g., “that’s it”) in the first interview phase, suggesting an implied agreement not to bring up a new topic. Our findings are consistent with the observational research by White, Levinson and Roter13 who found that physicians who made early efforts to understand patient concerns and feelings were less likely to address late breaking “oh by the way” problems.
Explanations for why orientation, prioritization, and negotiation were not adopted include: 1) their use may represent a degree of power sharing that conflicts with the culture of medical training, 2) institutional pressures to meet quality of care criteria for chronic illnesses may override attending to patient priorities, 3) physicians may fear displeasing patients by not attending to all their concerns, and 4) experimental group physicians may not have felt the need to use these skills because of decreased concern about “oh by the way” issues.
How do we explain that additional upfront elicitations did not affect patient reports on an assortment of functional status and satisfaction parameters including trust and shared decision making? Our intervention taught the skills of agenda setting—the collaborative identification and ranking of concerns. Our training did not teach essential skills related to relationship development, empathy, understanding the patient perspective or shared decision-making in creating a plan.25 Enhancing the patient experience likely requires skills beyond upfront agenda setting—specifically demonstrating curiosity about patient beliefs and helping the patient feel less anxious about illness26 and more able to engage in self management.27
Our findings suggest that some of the responsibility for longer visits may rest with physicians who raise new problems late in the visit. Flocke et al.1 found that in the 73% of primary care visits with multiple problems, 36% of the time physicians brought up additional problems. This parallels our findings where 42% of all problems raised in the third phase of the interview were raised by physicians. In our study control physicians raised 34% more problems in the final moments than did experimental physicians whose visits had fewer total third phase concerns.
All physicians in this study used upfront agenda setting less than half the time, which has implications for how effectively time is used in an encounter. Tai-Seale et al.28 found that in primary care encounters with a median of six topics, one problem gets most of the attention and the remaining problems get an average of 1.1 minutes each. These researchers further reported that the number of problems or lifestyle topics addressed in a visit was inversely related to the likelihood of arriving at clear management decisions.29
Several limitations deserve mention. Successfully trained behaviors may have been extinguished in the 6-month period between training and data collection. It might also be that the coding schemes applied to the audio data were not sufficiently nuanced to fully illuminate patient and provider interactions. The algorithm we used to determine phases of the patient interview was based on rational expectations following review of the audio-tapes, but not formally rooted in what was directly occurring within the encounter. Finally, we did not study how upfront agenda setting affects patient expression of sensitive issues or physician detection of occult concerns. Other research30 found that asking, “is there something else?” decreased the prevalence of unmet concerns compared to asking, “is there anything else?”. Our physicians were trained to ask “anything else?”.
Our findings suggest that collaborative upfront agenda setting may alter time use in primary care encounters, opening up ways to spend time more effectively. However, similar training efforts that are targeted solely at physicians may not produce lasting behavior change. Medical communication takes place in relationships embedded within diverse organizational cultures31 serving a variety of patient populations. Therefore, training should account for the preparation and experience of physicians and accommodate the diversity of patient, medical, social and psychological variables. Future efforts to sustain physician behaviors may need to include a patient activation component (e.g., training patients to create a list of concerns prior to meeting the physician).32 Incorporating health care team members such as medical assistants33 and nurses could help patients name concerns prior to meeting the physician. Institutional support for patient engagement and team efforts to elicit concerns would be essential. To impact health outcomes, communication training needs to extend beyond upfront agenda setting to include skills such as emotion handling, eliciting the patient’s perspective, self management support, and co-creating a plan. Upfront agenda setting may help health care team members protect time for these and other important interaction skills.34
The authors wish to express their appreciation to Helene Starks, PhD for her excellent project management and to Ron Epstein, MD, Wendy Levinson, MD, and Kim Marvel, PhD for their design and analysis advice.
This work was supported by funds from the Agency for Healthcare Research and Quality, grant R01 HS 13172-01.
Conflicts of Interest Mr. Mauksch declares that he receives consultation and training fees from health care organizations to train health care providers in upfront agenda setting and other communication skills.