OBJECTIVE: To determine whether physicians document office visits differently when they know their patients have easy, online access to visit notes.
PATIENTS AND METHODS: We conducted a natural experiment with a pre-post design and a nonrandomized control group. The setting was a multispecialty group practice in Minnesota. We reviewed a total of 400 visit notes: 100 each for patients seen in a rheumatology department (intervention group) and a pulmonary medicine department (control group) from July 1 to August 30, 2005, before online access to notes, and 100 each for patients seen in these 2 departments 1 year later, from July 1 to August 30, 2006, when only rheumatology patients had online access to visit notes. We measured changes in visit note content related to 9 hypotheses for increased patient understanding and 5 for decreased frank or judgmental language.
RESULTS: Changes occurred for 2 of the 9 hypotheses related to patient understanding, both in an unpredicted direction. The proportion of acronyms or abbreviations increased more in the notes of rheumatologists than of pulmonologists (0.6% vs 0.1%; P=.01), whereas the proportion of anatomy understood decreased more in the notes of rheumatologists than of pulmonologists (−5.9% vs −0.8%; P=.02). One change (of 5 possible) occurred related to the use of frank or judgmental terms. Mentions of mental health status decreased in rheumatology notes and increased in pulmonology notes (−8% vs 7%; P=.02).
CONCLUSION: Dictation patterns appear relatively stable over time with or without online patient access to visit notes.
Abbreviations: CI = confidence interval; OR = odds ratio
Electronic medical records make it possible to share information easily with patients. Many leading health care systems offer patients secure online access to portions of their medical record, such as medication, problem, and allergy lists; immunization records; and laboratory results.1-5 Multiple studies indicate that patients are eager for more information from their physicians, including visit notes.6-10 Under the Health Insurance Portability and Accountability Act of 1996,11 patients are, in fact, entitled to review their complete medical record. Although routinely sharing visit notes remains rare,12-14 the increasing availability of online records makes easy patient access inevitable.15-19
Physicians tend to write visit notes for themselves or other physicians, in part because clinical training typically has not addressed sharing notes with patients. Physicians have expressed concern that sharing visit notes with patients could lead them to write more vague (and potentially less precise) notes so as to avoid upsetting patients, who might misunderstand, be confused by, or be offended by more direct and detailed notes.20-25 We designed this study to test whether physician concerns that visit notes would change are warranted. It represents the first content analysis of physicians' visit notes in the peer-reviewed literature.
This study aims to understand whether making visit notes available online to patients affects how physicians document the visit. We reviewed the literature and interviewed 10 physicians in the study setting to develop hypotheses around the study objective. We asked the physicians to describe how and why the content of visit notes might change and to provide specific examples of possible changes in terminology.
On the basis of these interviews, we hypothesized 2 types of changes: (1) increased ease of patient understanding and (2) decreased use of frank or judgmental language. Within these 2 categories of changes, we identified 14 specific hypotheses. Nine hypotheses reflected the potential for patients' increased ease of understanding visit notes, and 5 addressed decreased use of frank or judgmental terms. These hypotheses and their operational definitions are described in Tables 1 and 2.
The study was conducted in a multispecialty group practice. The practice's approximately 660 physicians deliver primary and specialty care to about 25% of the population in the western suburbs of Minneapolis. The Park Nicollet Institutional Review Board approved the study and waived the need for informed consent.
In September 2005, the group practice began offering patients online access to their medical records. Patients who enrolled in this secure Web service had access to information about their registration, medications, health problems, allergies, immunizations, laboratory test results, and selected radiographs.
A natural experiment provided a pre-post design with a nonrandomized control group. The experiment arose when online medical records became available to all patients in the group practice: the rheumatology department included visit notes in the online record, whereas most other departments did not.
For a control group, we selected the pulmonology department, which had chosen not to release its visit notes. Like rheumatologists, pulmonologists see many patients with chronic conditions.
Visit notes were selected on the basis of 3 characteristics: physician, diagnosis, and date. We selected notes of rheumatologists who (1) worked full-time during both study periods and (2) made visit notes available to their patients online. Five of the 6 rheumatologists met these criteria. We selected notes of pulmonologists who worked full-time during both study periods. Of the 7 eligible pulmonologists, we randomly chose 5 for the control group.
We also selected notes that contained diagnoses of 1 or more of 3 conditions commonly seen by physicians in these subspecialties: rheumatoid arthritis, systemic lupus erythematosus, or fibromyalgia for rheumatologists and chronic obstructive pulmonary disease, asthma, or pulmonary nodule for pulmonologists.
We selected visit notes from these physicians for these diagnoses for 2 time periods. Visit notes from July 1 to August 30, 2005 (before patients could access their medical records online) served as a baseline, against which follow-up visit notes from July 1 to August 30, 2006, 1 year after baseline, were compared. Rheumatology patients had online access to their visit notes; pulmonology patients did not. We excluded notes of any patients who had not consented to have their medical records used for research.
Sample size is traditionally determined by defining clinically relevant differences in the measures of interest on the basis of published findings. For this topic, no previous research was available to guide the sample size. To ensure that we captured examples for each hypothesis, we randomly selected 20 notes for each of the 10 eligible physicians for each time period, providing a total sample of 400 visit notes (100 notes from each department for each time period).
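The sampling scheme above can be sketched as a fixed-size random draw per physician. This is an illustrative reconstruction, not the study's actual code; the physician and note identifiers are hypothetical.

```python
import random

def sample_notes(notes_by_physician, per_physician=20, seed=7):
    """Draw a fixed-size random sample of visit notes per physician.

    notes_by_physician maps a physician ID to that physician's list of
    eligible note IDs (identifiers here are hypothetical).
    """
    rng = random.Random(seed)  # fixed seed so the draw is reproducible
    return {md: rng.sample(notes, per_physician)
            for md, notes in notes_by_physician.items()}

# 10 physicians x 20 notes x 2 time periods = 400 notes overall;
# one time period's draw is shown here (200 notes).
eligible = {f"MD{i}": [f"note_{i}_{j}" for j in range(50)] for i in range(10)}
drawn = sample_notes(eligible)
```

Sampling without replacement within each physician keeps every physician equally represented, so no single prolific dictator dominates the sample.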
We postulated that a meaningful change in the note content would be a 15% increase in any of the hypothesized areas of change related to ease of patient understanding or a 15% decrease in any of the hypothesized areas of change related to use of frank or judgmental language.
We interviewed 5 patients to gain insight into the lay person's grasp of medical terminology. In face-to-face interviews, these volunteers reviewed a copy of 1 of their recent visit notes and discussed the parts they did and did not understand. The results of the interviews helped us to assign words and phrases to understood or not understood coding categories.
We excluded words physicians could not—or probably would not—change, such as standard headings (eg, Subjective, Objective, Assessment, and Plan)26,27 and standard abbreviations (eg, VS for vital signs). We also excluded the patient's age and sex, as well as some parts of speech, such as articles, conjunctions, pronouns, and selected verbs. Additional exclusions were references to other parts of the visit note (eg, “as noted above”), to time periods (eg, “at this point”), and to numbers (eg, for medication doses). Of all words in the visit notes, 41% were coded.
Generally, we coded individual words rather than phrases. We combined words, however, and coded them as a unit when (1) the combination was integral to the meaning, such as blood pressure, or (2) the individual words would be assigned to coding categories different from the category used for the words combined, such as false negative. Patients may understand the individual words false and negative but not their meaning when combined. Idioms (eg, run down) were also coded as a unit.
We recognized that patients' familiarity with terms would vary on the basis of their experience.28,29 Thus, we coded a patient's own medical condition and common diagnoses (eg, diabetes, hypertension) as understood. Newly diagnosed conditions, rule-out diagnoses, and less common diagnoses were coded as not understood.
We designed a hierarchy of categories to emphasize the hypotheses most integral to ease of patient understanding and use of frank or judgmental language. Positive and negative words and phrases took precedence over terminology understood or not understood. For example, pleasant was coded as a positive word describing behavior rather than as terminology understood. Medical jargon took precedence over acronyms and abbreviations. For instance, WDWN (well-developed, well-nourished) was coded as medical jargon rather than as an acronym or abbreviation. Medication frequency understood or medication frequency not understood took precedence over acronyms or abbreviations. Accordingly, t.i.d. was coded as medication frequency not understood rather than as an acronym or abbreviation. Acronyms and abbreviations took precedence over anatomy understood or anatomy not understood. For example, HEENT (head, eyes, ears, nose, throat) was coded as an acronym or abbreviation rather than as anatomy not understood.
Stories and mental health status were coded separately. A story was characterized by content not specifically related to a medical issue (eg, “patient will be going on a trip soon”). It was coded both by the words it comprised and as an overall assessment. We also coded each visit note as to whether the patient's mental health status was mentioned.
Four of the authors with differing professional perspectives served as coders: 2 registered nurses (C.E.C., E.A.K.), 1 physician (A.C.K.), and 1 health services researcher (J.B.F.). Before coding began, a research assistant assigned a study number to each visit note and removed all identifying information, including the date of the office visit and the names of the physician and patient. One coder (C.E.C.) reviewed each deidentified note and underlined the words to be coded. Coders worked in randomly assigned pairs. Each coder was paired with each of the other 3 coders, forming 6 unique pairs; equal numbers of notes were randomly assigned to each pair. Each coder within a pair independently coded the content of each visit note. The pair then met to compare results and resolve any discrepancies. To further ensure coding consistency, we developed lists of frequently used words and their codes. After coding all notes, we did an electronic search for frequently encountered words to confirm that we had assigned the same code for all occurrences of a particular word.
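The pairing design above (4 coders yielding 6 unique pairs, with notes divided among the pairs) can be sketched as follows. The coder labels are illustrative, and the exact balancing scheme (round-robin after shuffling, so pair workloads differ by at most one note) is an assumption for illustration.

```python
import itertools
import random

# Four coders; combinations of size 2 gives the 6 unique pairs.
coders = ["RN1", "RN2", "MD", "HSR"]
pairs = list(itertools.combinations(coders, 2))

def assign_notes(note_ids, pairs, seed=3):
    """Shuffle deidentified notes and deal them to pairs round-robin,
    so each pair receives a near-equal share."""
    rng = random.Random(seed)
    shuffled = list(note_ids)
    rng.shuffle(shuffled)
    buckets = {pair: [] for pair in pairs}
    for i, note in enumerate(shuffled):
        buckets[pairs[i % len(pairs)]].append(note)
    return buckets

buckets = assign_notes([f"note{n:03d}" for n in range(400)], pairs)
```

Pairing every coder with every other coder spreads any individual coder's bias evenly across the sample instead of concentrating it in one subset of notes.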
We used χ2 tests to assess differences in time periods within each department for all hypotheses related to proportions; when the expected frequencies were less than 5, we used the Fisher exact test instead. Two-sample t tests were used to compare differences in time periods within a department for hypotheses regarding continuous variables (eg, the number of words in the plan and the assessment).
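The χ2-versus-Fisher decision above hinges on the expected cell frequencies of the 2x2 table (period x outcome). A minimal sketch of the Pearson statistic and the expected-frequency check, using illustrative counts rather than the study's data:

```python
import math

def chi_square_2x2(a, b, c, d):
    """Pearson chi-square statistic (1 df, no continuity correction)
    for the 2x2 table [[a, b], [c, d]].

    Also reports whether all expected cell frequencies are >= 5;
    when they are not, the Fisher exact test is preferred.
    """
    n = a + b + c + d
    expected_ok = all(
        (row * col) / n >= 5
        for row in (a + b, c + d)
        for col in (a + c, b + d)
    )
    stat = n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))
    return stat, expected_ok

# Illustrative counts only: notes mentioning mental health status
# (yes/no) at baseline (19/81) vs follow-up (11/89) in one department.
stat, ok = chi_square_2x2(19, 81, 11, 89)
```

In practice one would compare `stat` against the χ2 distribution with 1 degree of freedom (e.g., the 3.84 critical value at α=.05) to obtain the P value.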
To determine whether the content of rheumatology notes changed more than that of the pulmonology notes, we used a 2-level general linear mixed model with a logit link for each hypothesis regarding proportion, with individual notes (lower level) nested within a variable for dictating physician (higher level). The sample size for each model was the total number of words in the denominator of each hypothesis. The dependent variable was an indicator variable at the word level signaling whether the word was coded in the numerator of each hypothesis. Fixed independent variables in all models included effects for department, time period, and a department by time interaction. Correlations between words in the same note and between notes dictated by the same physician were considered as random effects in all models. For the hypotheses about the total number of words in the assessment and plan, a 2-level linear mixed model was used with total number of words as the dependent variable, the same fixed effects as earlier described, and the random effect of correlation between notes dictated by the same physician. The main regression coefficient of interest in all models, and the one that is reported throughout the results and tables, was the interaction between department and time period. This coefficient tests whether changes from baseline to follow-up differed between rheumatology and pulmonology visit notes. All testing was 2-sided, and a significance level of α=.05 was used throughout the analysis. All statistical computations were done using statistical software SAS, version 9.1 (SAS Institute, Cary, NC).30
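The fixed-effect structure described above (department, time period, and their interaction) corresponds to a standard dummy coding, sketched below. This is one common parameterization shown for illustration; the exact coding used by SAS may differ.

```python
def design_row(department, period):
    """Fixed-effect design row for one coded word: intercept, department
    indicator, time-period indicator, and their product.

    The product term (department x time) carries the coefficient of
    interest: it captures whether the baseline-to-follow-up change
    differs between departments.
    """
    d = 1 if department == "rheumatology" else 0
    t = 1 if period == "follow-up" else 0
    return [1, d, t, d * t]

# Only rheumatology follow-up observations activate the interaction column.
rows = [design_row(dep, per)
        for dep in ("pulmonology", "rheumatology")
        for per in ("baseline", "follow-up")]
```

Because the interaction column is nonzero only for rheumatology follow-up words, its coefficient measures the departure of rheumatology's change over time from pulmonology's change over time, exactly the contrast the models report.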
Raw data (proportions or counts) are provided in Tables 3 and 4. The P value in these tables, however, is from each model's department by time interaction term. When comparing baseline and follow-up notes within each department, we found no significant differences in the pulmonology department. Rheumatology notes had 3 statistically significant changes in the follow-up period compared with baseline: a significantly higher proportion of medication trade names to all medication names (from 60% to 62%; P=.02), a significantly higher proportion of acronyms or abbreviations to all words coded (from 2.4% to 3.0%; P<.001), and a significantly smaller proportion of anatomy understood to all words coded as anatomy (from 75.0% to 69.1%; P<.001).
When examining the results of the regression models, we identified 3 statistically significant changes, 2 related to ease of patient understanding and 1 to use of frank or judgmental terms (Tables 3 and 4). The changes related to ease of patient understanding occurred in an unpredicted direction. The proportion of anatomy understood to all words coded as anatomy decreased more in the notes of rheumatologists than of pulmonologists (−5.9% vs −0.8%; P=.02) and the proportion of acronyms or abbreviations to total words coded increased more in the notes of rheumatologists than of pulmonologists (0.6% vs 0.1%; P=.01). The proportion of notes mentioning mental health status decreased from 19% to 11% for rheumatology and increased from 5% to 12% for pulmonology (P=.02).
No changes occurred for the 7 remaining hypotheses related to ease of patient understanding or for 4 of the 5 hypotheses related to use of frank or judgmental language.
The Figure presents the adjusted odds ratios (ORs) and 95% confidence intervals (CIs) for the main regression coefficient (department by time interaction) for each hypothesis. The variable department by time interaction represents the change over time in rheumatology notes compared with the change over time in pulmonology notes. Compared with baseline, rheumatology notes were less likely than pulmonology notes to have anatomy understood (OR, 0.77; 95% CI, 0.62-0.96) and were less likely to mention mental health status (OR, 0.20; 95% CI, 0.05-0.78). Rheumatology notes were more likely to use acronyms or abbreviations (OR, 1.24; 95% CI, 1.04-1.47).
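The ORs and CIs in the Figure derive from the logit-scale interaction coefficients via exponentiation. A minimal sketch of that conversion; the coefficient and standard error below are hypothetical values chosen so the result lands near the acronym OR reported above.

```python
import math

def odds_ratio_ci(beta, se, z=1.96):
    """Convert a logit-scale coefficient and its standard error into an
    odds ratio with a 95% Wald confidence interval: OR = exp(beta),
    CI = exp(beta +/- 1.96 * SE)."""
    return (math.exp(beta),
            math.exp(beta - z * se),
            math.exp(beta + z * se))

# Hypothetical interaction coefficient and SE, for illustration only.
or_, lo, hi = odds_ratio_ci(beta=0.215, se=0.088)
# Yields an OR of about 1.24 with a CI of about (1.04, 1.47).
```

Because the interval is symmetric on the log scale, a CI excluding 1 on the OR scale corresponds to a coefficient significantly different from 0 at α=.05.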
Although we found limited changes from baseline to follow-up, we observed other patterns of interest. Overall, the visit notes from the 2 departments were remarkably similar in length, averaging 454 words in both time periods. Considering combined data from both time periods, the rates of medical jargon (8% in rheumatology, 7% in pulmonology) and acronyms or abbreviations (3% in each department) were virtually identical.
Notes from the departments, however, differed in other ways. With regard to ease of patient understanding, the rheumatology notes were more frequently coded as having terminology understood (78% in rheumatology vs 69% in pulmonology; P<.001), medication frequency understood (85% vs 59%; P<.001), anatomy understood (72% vs 58%; P<.001), and medication trade names (60% vs 38%; P<.001). The pulmonology notes were more frequently coded as having medication route understood (60% vs 90%; P=.003). Rheumatology notes had fewer words in the assessment portion of the note (mean, 33.4 vs 73.3 words; P<.001) and more words in the plan (mean, 127.3 vs 49.5 words; P<.001).
With regard to use of frank or judgmental terms, the departments differed significantly. The rheumatologists' notes were less frequently coded as having the word obese as a reference to weight (33% vs 94%; P<.001), negative words to describe appearance (42% vs 72%; P=.006), or negative words to describe behavior (9% vs 32%; P=.02). Rheumatology notes more frequently had stories (25% vs 7%; P<.001).
Nonetheless, weight references and subjective descriptions about appearance and behavior were uncommon for both departments. Only 15% (60/400) of the visit notes mentioned weight; 23% (90/400), any description of appearance; and 26% (105/400), any description of behavior. Mental health status was rarely mentioned (47/400 [12%]), and few stories were noted (63/400 [16%]).
To our knowledge, this report represents the first analysis of the content of physicians' visit notes in the peer-reviewed literature. Contrary to many physicians' expectations, we found little change in the content of visit notes when they were made available to patients online. Changes occurred for only 3 of the 14 hypotheses. Although the increase in the rate of acronyms or abbreviations used was statistically significant, the absolute rate of acronyms or abbreviations remained very low (about 3%). The decrease in words coded as anatomy understood was also modest (6%). One anticipated change, that reference to the patient's mental health status would decrease, was borne out, suggesting that the rheumatologists became more sensitive than the pulmonologists to the use of words such as anxious or depressed when their patients had ready access to visit notes.
Because of the study's novelty, no established framework was available to guide our content analysis. We referred to the patient interviews to help identify terminology as understood and not understood for the coding schema. For example, one patient noted that anyone with a specific diagnosis would be more familiar with terminology in that area “than somebody walking down the street. Anybody gets a certain education by just having whatever it is.” On the basis of that and similar comments, we concluded that a patient who has been diagnosed as having fibromyalgia would recognize the condition, whereas someone who was simply being evaluated for it probably would not. Conversely, we postulated that conditions such as hypertension would be familiar to patients both with and without the condition. A second patient commented, “I think hypertension is something that's being used so much with the ads, with the drug companies, that people probably understand it.” Although the science of determining what patients understand is very much in its infancy,31,32 work is ongoing to develop consumer health vocabularies.
On the basis of our coding, we estimated patients would understand 70% to 80% of the visit note. This estimate is consistent with estimates from patients themselves in other studies. Two-thirds of patients in a 2007 study found that physician notes were easy to understand or were neutral on the topic.6 In another study, 80% of patients found that consultation details were easy to understand.13 Patients interviewed said they did not expect to understand everything in visit notes but wanted to follow the gist of the note. This observation is consistent with observations made by Golodetz12 more than 3 decades ago that “80 percent of patients felt they had understood enough to satisfy themselves.” In a recent review, Baxter et al33 noted that most of the reviewed studies demonstrated that patients accept medical terminology.
With a single exception, our second general hypothesis that use of frank or judgmental terms would decrease was not borne out. In both the intervention and control groups, frank or judgmental references were uncommon, allowing little room for decreases. Some physicians interviewed told us of patients chastising them for recording information in medical records that the patients interpreted as pejorative (eg, describing patient as obese). These physicians indicated they had already (before patient access to records online) adjusted their dictation to avoid terminology they knew some patients found inflammatory. They did not mention being particularly sensitive to observations about mental health before their patients had online access. Changes in mention of mental health status within a department after vs before online availability were not significant for either rheumatology (P=.11) or pulmonology (P=.08); however, the overall change in mention of mental health status between departments over time was significant (P=.02). We attribute this overall significance to small cell counts moving in opposite directions (rheumatology decreased from 19% to 11%; pulmonology increased from 5% to 12%).
The study took advantage of a natural experiment but was not a randomized, controlled trial. Rheumatologists could have been predisposed to sharing medical record information with their patients even before online access to visit notes was available.
We examined only 2 departments, both treating patients with chronic illnesses. Notes reflecting minor, acute, emergent, or surgical problems may exhibit different patterns. We reviewed visit notes of patients with chronic conditions because we thought they would incorporate more personal information and that patients would be more likely to need and understand these notes to manage their condition.
Another limitation of the study is that the authors, not patients, made judgments about patients' likelihood of understanding visit notes. We made these judgments in part on the basis of patient interviews and extensive patient experience. On the basis of our past research in this patient population, we assumed patients had at least a high school education,8 but this assumption may not be true of other patient populations.
Previous discussion about the content of visit notes has been driven primarily by interest in reducing costs,34 supporting payment,35-37 improving safety,38 and avoiding litigation.39 Future studies should broaden the focus to understand how to make visit notes useful to patients.
As we work to increase patient engagement,40-43 patient activation (ie, the ability of patients to manage their health and health care),44-46 and patient responsibility,47,48 we should strive to ensure that patients have all the information they need. Visit notes contain critical information regarding how well the physician heard the patient (subjective), test results and physical findings (objective), the physician's assessment (assessment), and the plan for future care (plan). Without access to such information, patients may be unduly handicapped in assuming the responsibilities the current health care system places on them.
Growing interest in having patients assemble personal health records from multiple physicians further underscores both the need and the opportunity for patients to have access to visit notes. Increasingly, we are developing technologically feasible ways for patients to easily gather disparate medical information about their care and share it with other physicians wherever they go.49-53 Including visit notes in electronic data-collection tools can make personal health records more complete and hence more helpful for supporting collaborative care among clinicians and engaging patients in self-managing health problems.
The results of this study provide important evidence to inform the current debate about the release of visit notes online. Physician resistance to sharing visit notes is widespread, but their concerns about changing content may be unjustified. Our results indicate that, in the setting studied, changes to the content of the visit note were minimal when the notes were made available online to patients.
The authors acknowledge the important insights from the physicians and patients interviewed. Scott Glickstein, MD, provided additional insight and access to the Rheumatology Department. Beverly Gray, Senior Research Assistant, copied and blinded all visit notes for date of visit and name of patients and physician.
This study was funded by a grant from the Curtis L. Carlson Family Foundation and the Park Nicollet Institute.