This initiative focused on developing a peer evaluation program that taught trainees about the components of high-quality consultation notes and then engaged them in self-improvement through reflection on performance data and practice-based learning (). The program included the development of a consultation assessment tool, followed by the implementation of the peer evaluation process across our Department’s fellowship programs.
Process map for “Improving Quality of Fellow-Written Consult Notes” project
Development of Quality of Consultation Measures
Development of consultation performance metrics began with a literature review, followed by focus groups with both referring physicians (attendings and housestaff) and consulting physicians (specialty fellows) to review findings from the literature search, refine the list of potential metrics, and identify important domains not captured in the literature. Discussions focused on the characteristics of effective consultation notes, the most important elements of consultation notes, and the practice of giving and receiving feedback about written consultations. In this way, the focus groups provided essential input on the elements of high-quality consultation notes.
We identified five quality domains, consistent with prior literature5,6: 1) Reason for consultation (specifying a consult question; explaining thought processes for conclusions); 2) Diagnostic plan (citing rationales for recommended labs or studies); 3) Therapeutic plan (listing medications with proper dose, route, schedule; discussing procedures and peri-procedural tasks); 4) Communication (documenting verbal discussions with providers; providing anticipatory guidance); and 5) Educational value (citing relevant articles; providing a well-developed differential diagnosis).
Development of Consultation Assessment Tool
Using these domains, we developed an electronic Quality of Consultation Assessment Tool (QCAT) through an iterative process. The QCAT was refined to maximize objectivity and reproducibility with input from key stakeholders (attending specialist physicians, fellowship directors, and Department physician leaders). The final instrument, built with an online database technology7 for easy dissemination and use, comprised a 13-point checklist across the 5 equally weighted domains and two global measures of quality rated on 5-point Likert scales (). A percentage score was calculated for each domain of each note: “Yes”, “Somewhat”, and “No” answers were awarded 5, 3, and 0 points, respectively, and the sum of those points formed the numerator. The total number of possible points, after excluding questions answered “Not Applicable”, formed the denominator. Domain scores were then averaged to calculate the total score for each consult note.
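The scoring scheme above can be sketched in code. This is a minimal illustration, not the QCAT’s actual implementation; the domain names follow the five domains listed earlier, but the answer encoding and data layout are assumptions.

```python
# Sketch of QCAT-style scoring (hypothetical data layout).
# Answers: "Yes" = 5 pts, "Somewhat" = 3 pts, "No" = 0 pts,
# "N/A" items are excluded from both numerator and denominator.

ANSWER_POINTS = {"Yes": 5, "Somewhat": 3, "No": 0}

def domain_score(answers):
    """Percentage score for one domain: earned points over possible points."""
    scored = [a for a in answers if a != "N/A"]
    if not scored:
        return None  # domain entirely N/A; exclude it from the average
    earned = sum(ANSWER_POINTS[a] for a in scored)
    possible = 5 * len(scored)
    return 100 * earned / possible

def total_score(domains):
    """Average the domain percentages (domains are equally weighted)."""
    scores = [s for s in (domain_score(a) for a in domains.values())
              if s is not None]
    return sum(scores) / len(scores)

# Example note with answers grouped by the five quality domains:
note = {
    "Reason for consultation": ["Yes", "Somewhat"],
    "Diagnostic plan": ["Yes", "No", "N/A"],
    "Therapeutic plan": ["Yes", "Yes", "Yes"],
    "Communication": ["No", "Somewhat"],
    "Educational value": ["Somewhat", "Yes", "No"],
}
print(round(total_score(note), 1))  # 62.7
```

Note how excluding “Not Applicable” items shrinks the denominator rather than penalizing the note, and how averaging the five domain percentages (rather than pooling all items) keeps the domains equally weighted regardless of how many checklist items each contains.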
Quality of Consultation Assessment Tool (QCAT)
Implementation of a peer evaluation program
Following several rounds of pilot testing, we established baseline “quality” scores using the QCAT on a random selection of inpatient fellow-written consultation notes. Notes were identified using billing data and confirmed via medical record review. Concurrently, we developed an electronic Users’ Guide to the QCAT that introduced the tool, explained how results would be tracked and disseminated, and provided rationales for QCAT quality measures. Additionally, the Users’ Guide provided sample consultation notes with guidance on how to critically evaluate them and ways in which those notes could be improved to achieve higher quality scores.
We initiated the peer evaluation program with a kick-off email to our Department’s fellows and fellowship directors. This communication described the program, highlighted baseline quality data at the divisional level, and provided the QCAT Users’ Guide. Fellows were asked to use the QCAT to blindly evaluate 3–5 peer-written initial consultation notes that had been redacted to remove patient and provider identifiers.
Results, consisting of domain scores and total scores, were compiled at the divisional level and distributed to fellows and fellowship directors (phase 1). Several months later, the peer evaluation process and dissemination of results were repeated (phase 2).
Early Experiences with QCAT
Overall, our fellow-written initial consultation notes were of average baseline quality (mean score 60%), with deficiencies in the “Communication” and “Education” domains (mean scores 29% and 52%, respectively). No changes in the quality of consultation notes were observed between baseline, phase 1, and phase 2. Fellow participation was low (27%) in all time periods.
Participating fellows stated that the QCAT provided valuable information about important components of consultation notes and that the program offered some opportunity for self-reflection, though not enough to drive meaningful change.
Enhancing the program to achieve success
After this pilot phase, we refined our program to increase fellow participation. First, we recruited divisional QCAT champions. These fellows self-identified as interested in quality improvement and patient safety and were willing to act as ambassadors to their divisions. QCAT champions were responsible for increasing their colleagues’ participation and for presenting results and areas for improvement to their peers. Second, we hosted a kick-off event for QCAT champions to officially recognize their work and to provide a forum to brainstorm ways to increase their colleagues’ participation. This venue allowed for cross-fertilization of ideas and best practices across divisions. Third, we asked fellows to evaluate fewer notes but at more regular intervals throughout the year, thereby providing feedback to fellows and fellowship directors on a monthly rather than quarterly basis.