J Gen Intern Med. 2012 August; 27(8): 968–973.
Published online 2012 March 17. doi:  10.1007/s11606-012-2033-5
PMCID: PMC3403130

Use of an Electronic Problem List by Primary Care Providers and Specialists

Adam Wright, PhD (corresponding author),1,2,3 Joshua Feblowitz, MS,1,2,3 Francine L. Maloney, BA,2 Stanislav Henkin, BA,1,2 and David W. Bates, MD, MSc1,2,3,4



Accurate patient problem lists are valuable tools for improving the quality of care, enabling clinical decision support, and facilitating research and quality measurement. However, problem lists are frequently inaccurate and out-of-date and use varies widely across providers.


Our goal was to assess provider use of an electronic problem list and identify differences in usage between medical specialties.


Chart review of a random sample of 100,000 patients who had received care in the past two years at a Boston-based academic medical center.


Counts were collected of all notes and problems added for each patient from 1/1/2002 to 4/30/2010. For each entry, the recording provider and the clinic in which the entry was recorded were collected. We used the Healthcare Provider Taxonomy Code Set to categorize each clinic by specialty.


We analyzed the problem list use across specialties, controlling for note volume as a proxy for visits.


A total of 2,264,051 notes and 158,105 problems were recorded in the electronic medical record for this population during the study period. Primary care providers added 82.3% of all problems, despite writing only 40.4% of all notes. Of all patients, 49.1% had an assigned primary care provider (PCP) affiliated with the hospital; patients with a PCP had an average of 4.7 documented problems compared to 1.5 problems for patients without a PCP.


Primary care providers were responsible for the majority of problem documentation; surgical and medical specialists and subspecialists recorded a disproportionately small number of problems on the problem list.

KEY WORDS: patient problem list, electronic medical records, primary care


The patient problem list comprises an essential part of the modern electronic medical record. Improved problem list documentation has been associated with a higher quality of care and greater adherence to evidence-based guidelines.1 Effective clinical decision support (CDS) interventions also frequently depend on the problem list, since many CDS rules require accurate, coded problem entries.2 An electronic problem list can also be a valuable tool for quality and research initiatives as it facilitates the rapid creation of patient registries.3

As part of Stage 1 and 2 “meaningful use” guidelines, providers are required to actively “maintain an up-to-date problem list of current and active diagnoses,” with 80% of patients having at least one problem recorded or an indication of no known problems.4,5 However, research has shown that problem lists are frequently inaccurate and out-of-date.6–10 In a previous study, we demonstrated that common problems were frequently omitted from the problem list at one large hospital network—completeness ranged from 4.7% for renal disease to a maximum of 78.5% for breast cancer.11 In previous qualitative research, we found that provider attitudes toward appropriate use and content vary widely, and that problem lists are frequently perceived as inaccurate, incomplete and out-of-date.9,10

In order to shape effective problem list policy and meet “meaningful use” goals, it will be necessary to gain an improved understanding of current problem list usage patterns. Based on prior research,9,10 we hypothesized that primary care providers (PCPs) would be the primary problem list users, while specialists would use the tool much less frequently. For the purposes of this paper, “primary care” is defined to include providers specializing in family medicine, internal medicine (excluding subspecialties), geriatrics and pediatrics. Our goal was to assess the differences in problem list use across specialties and subspecialties and to quantify these differences for the first time.



The study was reviewed and approved by the Partners HealthCare Human Subjects Committee. This study was carried out at Brigham & Women’s Hospital (BWH) in Boston, MA. All BWH clinicians use the Longitudinal Medical Record (LMR), a self-developed, ONC-ATCB-certified outpatient EHR. Care delivered in the outpatient setting (excluding the emergency department) is documented in the LMR—inpatient care is documented in another system. Surgeons use the LMR to document outpatient visits, although surgical records are documented in a different system. Many PCPs affiliated with BWH have used the LMR for a number of years, and the remaining PCPs and specialists have used the LMR since a major initiative to bring all providers online in 2007. The LMR includes an electronic problem list tool that allows providers to document patient problems using both coded and uncoded entries. Providers are strongly encouraged to use coded entries because they drive clinical decision support features. Problems are coded using a proprietary terminology that is mapped to SNOMED CT. When an uncoded entry is added, the clinician is alerted and asked to confirm that he or she wishes to make an uncoded entry. The LMR problem list is distinct from the billing system—physicians can bill for any diagnosis regardless of whether it is on the problem list, and billing diagnoses entered do not feed the problem list. The problem list appears on the main summary screen of the LMR (shown in Figure 1). Problem list use is technically required by hospital and Joint Commission policy,12 but this requirement is not consistently or centrally enforced. Providers receive only basic technical training in how to add problems using the LMR, but no training is provided on what problems to add or who should add them. BWH does not have a formal policy on who is responsible for maintaining the problem list or which conditions belong on it.
In a previous qualitative study on the use of the problem list by clinicians, none reported receiving specific guidance on problem list utilization.10 When a provider modifies the problem list, the LMR records the identity of the provider and the clinic they are logged into at the time of the change.

Figure 1.
The LMR summary screen with the problem list shown in the center.


In order to study provider use of the problem list by specialty, we randomly selected 100,000 patient records from the LMR. Participants were drawn from the group of all patients with at least one visit recorded in the LMR from 2007 to 2008 and two or more outpatient notes in their record (n = 839,300). Once this sample was defined, we collected all available problem and note data for each of the 100,000 patients dating from 1/1/2002 to 4/30/2010. We limited the note and problem data collected to practices affiliated with the BWH network, which includes hospital-affiliated primary care and specialty clinics, community clinics and two federally qualified health centers.

The total problem count included all problems recorded on each patient’s current problem list and the identity of the provider who added it and the clinic that they were signed into. The total note count for each patient included all available note headers—the descriptive header stored with each note (e.g. Cardiology Note). In addition to progress notes, the note count included letters to patients, “no show” notes and other entries that are recorded in the notes module of the LMR because the LMR does not make a distinction between these note types. Like problem entries, all notes were tagged with the provider and the clinic they were signed into. We selected total note volume as a proxy for visit volume in each specialty. Note volume is an imperfect proxy because some clinics may have different prevalence of non-progress-note storage; however, schedule data were not available for all specialties so it was the best proxy available.

Identifying Specialty

LMR problem and note records do not contain direct information on the specialty of the recording provider—only the provider’s identity and clinic. We initially attempted to link providers directly to specialties using the Partners enterprise master provider file and the National Provider Identifier database. However, this linking proved inexact because some providers were listed under general categories (e.g. “medicine” for a cardiologist) and some providers practice more than one type of medicine. All providers, however, have schedules and see patients in one or more clinics, which are almost always specific to a single specialty in our network, so we used the clinic the provider was logged into to classify the data by specialty. Since a provider must log into the proper clinic to view their schedule and properly classify notes, the clinic appears to be a more robust indicator of specialty than the provider record alone.

There was no existing mapping from clinics to specialties, so we mapped the clinics manually, using the Healthcare Provider Taxonomy Code Set13 to classify each specialty. The Healthcare Provider Taxonomy Code Set defines the type, classification and specialization of health care providers, and is a mandated standard for NPI registration and HIPAA transactions.

Because of issues with legacy data in the LMR, there are many clinic designations which are no longer in current use, or which are used very infrequently. To make the mapping task manageable, we only mapped clinics that had generated at least 50 notes or had added at least five problems to the problem list over the study period.
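The inclusion filter described above can be sketched as follows. This is an illustrative reconstruction, not the authors' actual code; the clinic names and counts are hypothetical, and only the thresholds (at least 50 notes or at least five problems) come from the paper.

```python
# Hypothetical sketch of the clinic inclusion filter: a clinic is mapped to a
# specialty only if it generated >= 50 notes or added >= 5 problems over the
# study period. Clinic names and counts below are invented for illustration.
from collections import namedtuple

Clinic = namedtuple("Clinic", ["name", "note_count", "problem_count"])

def meets_inclusion_criteria(clinic, min_notes=50, min_problems=5):
    """Return True if the clinic's activity meets either threshold."""
    return clinic.note_count >= min_notes or clinic.problem_count >= min_problems

clinics = [
    Clinic("Cardiology", 1200, 40),     # included: high note volume
    Clinic("Legacy Clinic A", 12, 1),   # excluded: below both thresholds
    Clinic("Geriatrics", 30, 8),        # included: problem count meets threshold
]
mapped = [c for c in clinics if meets_inclusion_criteria(c)]
```

Clinics failing both thresholds would be left unmapped and, as described below, grouped into the “Other” category.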

After mapping all clinics that met our inclusion criteria, we identified 72 separate categories of provider specialty. We combined highly related subspecialties where appropriate (e.g., “Anesthesiology” and “Anesthesiology: Pain Medicine” were combined into “Anesthesiology”). In addition, we excluded a small group of notes (n = 4,932) that were simply scanned consent forms. A portion of all clinics could not be classified according to the Code Set, including: 1) clinics providing a mix of services across specialties, 2) entries without a specific clinic recorded and 3) clinics that did not reasonably map to an existing Code Set term. Additionally, a number of clinics were not classified because they did not meet the threshold inclusion criteria. These groups were combined into the category “Other,” which encompassed 0.4% of problems and 6.2% of notes. We were left with 43 distinct categories with total note counts and problem counts.

Additional Analyses

In addition to mapping clinics to their associated specialties, we assessed the proportion of patients in our sample who had an assigned PCP at BWH. A patient’s PCP can be recorded directly in the LMR, but this field is not reliably populated. Instead, to determine which patients had BWH-affiliated assigned PCPs, we examined whether they had any notes recorded by a BWH PCP during the entire study period. This proxy method relied on the following assumptions: 1) a patient with notes recorded by a BWH PCP had a very high likelihood of having an assigned PCP (BWH PCPs do not schedule urgent care visits with patients without an assigned PCP), and 2) a patient without any notes recorded by a BWH PCP was very unlikely to have an assigned PCP (patients without any primary care notes recorded during the study period most likely do not receive regular care within the system). Consequently, we believe this to be a reasonable method of assessing the proportion of patients with a BWH PCP. Once all patients were classified using this method, we calculated the average total number of problems on patient problem lists for both BWH PCP and non-BWH PCP patients.
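The proxy classification above amounts to a simple membership test over each patient's note history. The sketch below is illustrative only: the specialty labels and patient records are hypothetical, and the primary care set follows the paper's definition (family medicine, internal medicine, geriatrics, pediatrics).

```python
# Illustrative sketch of the PCP-assignment proxy: a patient is classified as
# having a BWH PCP if any of their notes was recorded in a primary care clinic.
# Patient data below are invented for illustration.
PRIMARY_CARE = {"internal medicine", "family medicine",
                "geriatric medicine", "pediatrics"}

def has_assigned_pcp(note_specialties):
    """True if any note for this patient came from a primary care clinic."""
    return any(s in PRIMARY_CARE for s in note_specialties)

patients = {
    "patient_a": ["cardiology", "internal medicine"],  # has a primary care note
    "patient_b": ["orthopedic surgery", "neurology"],  # no primary care notes
}
pcp_status = {pid: has_assigned_pcp(specs) for pid, specs in patients.items()}
```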

For statistical analyses, Student’s t-test was used for continuous variables and the chi-square test for categorical variables, using R version 2.13.1 (R Foundation for Statistical Computing, Vienna, Austria). Statistical significance was set at two-tailed p < 0.05.


A total of 2,264,051 notes and 158,105 problems were recorded in the BWH LMR during the defined study period (approximately 8.3 years). The average number of problems added per note was 0.07. The complete results are shown in Table 1 with the note and problem count for each identified specialty and the rate of problems added per note. Overall, 49.1% of patients had an assigned PCP affiliated with BWH. The average problem list length at the end of the study period was 4.7 (standard deviation [SD] 5.2) problems for patients with an assigned BWH PCP and 1.5 (SD 3.6) problems for patients without an assigned BWH PCP (p < 0.001).
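A rough back-of-the-envelope check of the reported comparison (4.7 [SD 5.2] vs. 1.5 [SD 3.6] problems) can be made from the summary statistics alone. This sketch approximates the group sizes from the 49.1% PCP share of the 100,000-patient sample; the paper's actual analysis was performed in R, so this is only an illustration of why p < 0.001 is unsurprising.

```python
# Approximate Welch t-statistic from the summary statistics reported above.
# Group sizes are estimated (49.1% of 100,000 patients had a BWH PCP);
# the means and SDs come from the paper.
import math

n1, mean1, sd1 = 49_100, 4.7, 5.2   # patients with an assigned BWH PCP
n2, mean2, sd2 = 50_900, 1.5, 3.6   # patients without an assigned BWH PCP

se = math.sqrt(sd1**2 / n1 + sd2**2 / n2)   # standard error of the difference
t_stat = (mean1 - mean2) / se
# t_stat is on the order of 100, consistent with the reported p < 0.001.
```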

Table 1
Note and Problem Counts by Specialty (in Descending Order by Number of Problems Added)

PCPs added 82.3% of all problems to problem lists, despite only writing 40.4% of all notes (RR = 6.86, 95% Confidence Interval 6.77-6.95, p < 0.001 compared to specialists). Internal medicine, pediatrics, geriatric medicine and family medicine accounted for 77.1%, 2.8%, 1.3%, 1.2% of total problems added, respectively. With the exception of obstetrics and gynecology, rheumatology, cardiology and infectious disease, all other specialties accounted for less than 1% of problems added. Internal medicine, pediatrics, geriatrics and family medicine also had some of the highest rates of problems added per note: 0.146 problems/note (4th of 43 categories), 0.077 problems/note (9th of 43), 0.970 problems/note (1st of 43) and 0.093 problems/note (7th of 43) respectively. Other specialties with high rates of problems added per note included transplant surgery, pulmonary disease, hematology/oncology, infectious disease, obstetrics and gynecology and cardiology.
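The reported rate ratio follows directly from the two shares: PCPs' problems-per-note rate divided by the specialists' rate. The sketch below reproduces the point estimate from the percentages in the text (the confidence interval requires the underlying counts and is not recomputed here).

```python
# Reproducing the reported RR from the shares in the text: PCPs added 82.3%
# of problems while writing 40.4% of notes; specialists account for the rest.
# Total counts are taken from the paper (they cancel in the ratio).
total_problems = 158_105
total_notes = 2_264_051

pcp_rate = (0.823 * total_problems) / (0.404 * total_notes)    # PCP problems/note
spec_rate = (0.177 * total_problems) / (0.596 * total_notes)   # specialist problems/note
relative_rate = pcp_rate / spec_rate                           # ≈ 6.86, as reported
```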

As mentioned previously, Partners undertook an initiative in 2007 to bring all providers live on the LMR. In order to ensure that our results were not confounded by this initiative, we repeated the same analysis using a subsample of data from 1/1/2008 to 4/30/2010. This analysis revealed a similar pattern: PCPs were responsible for 73.4% of problems added but wrote only 35.6% of notes. Specialists’ share of problem list use increased slightly, but their volume of care also increased—the ratio of problem and note shares between the two groups remained the same, with PCPs performing slightly more than double the share of problem list documentation after controlling for note volume.


These results show that, at practices affiliated with this hospital in an integrated delivery system in which all providers use the LMR, PCPs document more than four-fifths of all problems, even though they account for less than half of all care provided as measured by note volume. Providers in internal medicine alone documented the vast majority of patient problems on the problem list (~77%). In addition, patients with an assigned PCP had a substantially higher average number of problems at the end of the study period. In contrast, specialists documented very few problems relative to the number of notes they generated. For example, specialists in orthopedic surgery, psychiatry and neurology documented fewer than 500 total problems during the study period despite generating over 100,000 notes. However, there were some notable exceptions to this pattern, namely obstetrics and gynecology and transplant surgery, which generated a relatively large number of problem list entries given their volume.

These results quantify, for the first time, a difference in problem list use between PCPs and specialists and validate previous qualitative findings on problem list documentation patterns.9,10 It is important to note that PCPs have, in general, used the EHR longer than specialists in this network, and that the problem list is integrated so that everyone can see the problems entered by others. This is especially important in light of the substantial financial incentives that have been invested to increase provider use of EHRs and improve care.

The meaningful use (MU) criteria are intended to encourage both PCPs and specialists to use EHRs in ways that will result in tangible improvements in quality, safety, and efficiency, and an accurate problem list is a cornerstone of the EHR. Stage 1 and 2 MU rules stipulate that 80% of all patients must have an up-to-date problem list. Our findings suggest that it may be especially important to track how specialists use problem lists. Dealing with this issue should become easier as problem lists are populated and data exchange begins to occur, which should decrease the need for multiple individuals to enter problems for the same patient across different sites. Furthermore, institutional and national policy on problem list use may need to differ based on provider specialty: a specialist who sees a patient only once or twice should not be required to enter a full problem list, but should record any new problems that he or she diagnoses.

Nonetheless, the difference in problem list use across specialties is concerning given that many specialists are actively involved in diagnosing patient problems, and about half the patients seen by specialists did not have a PCP in the Partners system. While PCPs are probably more likely to diagnose and document common problems, it is unlikely that the observed discrepancy can be explained solely by variations in the problems that each type of specialist deals with. The more likely explanation is that many problems are also going undocumented when diagnosis is carried out by a specialist.

This finding is consistent with our prior qualitative study on problem list utilization by PCPs and specialists.10 However, there were some notable exceptions to the pattern: obstetrics and gynecology, transplant medicine and oncology all had relatively high use of the problem list compared to other specialties. Some of this difference may be attributable to the fact that providers in these specialties are more likely to have a longitudinal relationship with patients and thus more incentive or confidence to document problems.

The observed variation between PCP and specialist use of the problem list is likely due to differences in culture and governance with regard to the patient problem list.9,10 PCPs, who may possess greater awareness of a patient’s clinical state over time, may feel more responsibility for and ownership of the problem list. In contrast, specialists, who may interact with individual patients in a more condition-focused and time-limited fashion, may believe that the responsibility for maintaining the problem list rests on the shoulders of a patient’s PCP. However, the difference may not be entirely cultural. For example, some patients may already have a diagnosis made and entered by the PCP before seeing a specialist for that specific condition, negating the need for the specialist to enter a coded problem. Nevertheless, certain specialties do appear to have a high rate of problem list use, despite the dominant pattern of lower overall specialist problem list usage. Additional research, both quantitative and qualitative, will be needed in order to explore proper problem list policy across specialties and characterize the quality and accuracy of problem documentation.

As MU goals generate increased use of EHRs and problem documentation, we will need to devise policies for problem list use that will improve the value of this shared good for all providers. Even though most clinicians integrate the problem list into their workflow, PCPs tend to shoulder the responsibility for maintaining its accuracy but also believe that specialists should share the responsibility for maintaining the list to make it as accurate as possible. Additionally, problem list usage differs by clinic, where formal or informal leaders set expectations for the use and maintenance of the problem list.10 Although some guidelines for use exist,14 more specific and consistently applied directives need to be established in order to maximize the utility of this important tool. Specifically, by developing consensus on the optimal use of the problem list among clinicians themselves, specific guidelines could be put into place that govern its use. If such rules were established and the problem list became more accurate, more providers might feel responsible for maintaining an accurate and up-to-date problem list, especially since the MU regulations do not specify what is or is not a problem or who is responsible for maintaining the problem list. Finally, EHR vendors could contribute to improved problem list use by expanding product functionality. For example, if some orders within a CPOE system required documentation of a specific problem in order to carry out a related order, documentation of problems might improve dramatically; such requirements could exclude orders for preventive care or for acute, self-limited problems that do not belong on the problem list.


This study has a number of limitations. First, it was more challenging than anticipated to establish the specialty of each provider. We could not identify an adequate method for automatically mapping providers to associated specialties. While we believe that using clinics is an effective method of ascertaining specialty, and that it has the significant advantage of accommodating providers with multiple specialties, the mapping is nevertheless imperfect and resulted in a small amount of lost data. In addition, whether a given patient had an assigned PCP in the system was also determined via a proxy method. Although we believe our underlying assumptions to be reasonable, a small number of patients may have been misclassified.

Second, this study was conducted at a single academic medical center, and thus our results may not generalize to other sites using different EHRs. Additionally, clinicians at our site have reported using alternatives to the problem list, including the past medical history section of progress notes, discharge summaries, medication lists, relying on their own memory, and asking the patient directly about medical problems.10 We did not review any progress notes for the presence of a problem list and cannot comment on whether a certain subset of clinicians uses this method. Next, to identify specialty, we looked at the clinic level rather than the specialty of a specific provider, and do not know whether a note was generated by an attending physician or a junior clinician. However, since BWH is a large teaching hospital, we expect that there are trainees in every clinical setting and do not believe that residents in any one clinic significantly influenced the results.

Another limitation, which is twofold, is our use of notes as a proxy for volume: first, clinical volume is an imperfect proxy for the volume of new diagnoses made by providers—it is possible that many visits do not yield any new diagnoses, and the rate of new diagnoses per visit might differ by specialty, potentially introducing bias. We considered using billing diagnoses to identify potential new clinical diagnoses; however, we previously found that billing codes have a low positive predictive value for clinical problems.11 Second, notes are an imperfect proxy for visit volume, as not all visits result in a note and notes are sometimes written for other purposes. It is possible that the relationship between note volume and visit volume differs systematically between specialties, but we believe that the size of such differences would be insufficient to explain the magnitude of the differences seen in problems per note across specialties.

Finally, the specific reasons for the observed differences in problem list use were beyond the scope of this investigation. Additional research is needed in order to determine the factors that drive the differences in problem list use across specialties. Further research might also look into additional behaviors, such as updating or inactivating problems.


Our findings suggest that in this setting PCPs are responsible for the majority of problem documentation and specialists record a disproportionately small number of problems on the problem list. More research is needed to understand what documentation rates are appropriate, especially for specialists. Additionally, specific and consistently applied policies are needed to encourage appropriate use of the problem list across specialties.


This work was supported by a grant from the Partners Community HealthCare Incorporated (PCHI) System Improvement Grant Program and approved by the Partners HealthCare Institutional Review Board. PCHI was not involved in the design, execution or analysis of the study or in the preparation of this manuscript.

Prior Presentations

None to report.

Conflict of Interest

The authors declare that they do not have a conflict of interest.


1. Hartung DM, Hunt J, Siemienczuk J, Miller H, Touchette DR. Clinical implications of an accurate problem list on heart failure treatment. J Gen Intern Med. 2005;20:143–7. doi: 10.1111/j.1525-1497.2005.40206.x. [PMC free article] [PubMed] [Cross Ref]
2. Wright A, Goldberg H, Hongsermeier T, Middleton B. A description and functional taxonomy of rule-based decision support content at a large integrated delivery network. J Am Med Inform Assoc. 2007;14:489–96. doi: 10.1197/jamia.M2364. [PMC free article] [PubMed] [Cross Ref]
3. Wright A, McGlinchey EA, Poon EG, Jenter CA, Bates DW, Simon SR. Ability to generate patient registries among practices with and without electronic health records. J Med Internet Res. 2009;11:e31. doi: 10.2196/jmir.1166. [PMC free article] [PubMed] [Cross Ref]
4. Poon EG, Wright A, Simon SR, et al. Relationship between use of electronic health record features and health care quality: results of a statewide survey. Med Care. 2010;48:203–9. doi: 10.1097/MLR.0b013e3181c16203. [PubMed] [Cross Ref]
5. Meaningful Use Workgroup Presentation to HIT Policy Committee. 2011. Available at: Accessed June 8, 2011.
6. Kaplan DM. Clear writing, clear thinking and the disappearing art of the problem list. J Hosp Med. 2007;2:199–202. doi: 10.1002/jhm.242. [PubMed] [Cross Ref]
7. Szeto HC, Coleman RK, Gholami P, Hoffman BB, Goldstein MK. Accuracy of computerized outpatient diagnoses in a Veterans Affairs general medicine clinic. Am J Manag Care. 2002;8:37–43. [PubMed]
8. Tang PC, LaRosa MP, Gorden SM. Use of computer-based records, completeness of documentation, and appropriateness of documented clinical decisions. J Am Med Inform Assoc. 1999;6:245–51. doi: 10.1136/jamia.1999.0060245. [PMC free article] [PubMed] [Cross Ref]
9. Feblowitz J, Wright A. The Patient Problem List: An Ethnographic Study of Primary Care Provider Use and Attitudes. AMIA 2011 Annual Symposium. Washington, D.C.; 2011 (under review).
10. Wright A, Maloney F, Feblowitz J. Clinician attitudes toward and use of electronic problem lists: a thematic analysis. BMC Med Inform Decis Mak. 2011;11:36–45. doi: 10.1186/1472-6947-11-36. [PMC free article] [PubMed] [Cross Ref]
11. Wright A, Pang J, Feblowitz J, et al. A method and knowledge base for automated inference of patient problems from structured data in an electronic medical record. J Am Med Inform Assoc. 2011;18:859–67. doi: 10.1136/amiajnl-2011-000121. [PMC free article] [PubMed] [Cross Ref]
12. Information Management Processes (Standard IM 6.40): 2008 Comprehensive Accreditation Manual for Hospitals: The Official Handbook. Oakbrook Terrace, Illinois: Joint Commission Resources; 2008.
13. McMullen CK, Ash JS, Sittig DF, et al. Rapid assessment of clinical information systems in the healthcare setting. An efficient method for time-pressed evaluation. Methods Inf Med. 2010;50:299–307. [PubMed]
14. Bonetti R, Castelli J, Childress JL, et al. Best practices for problem lists in an EHR. J AHIMA / Am Health Inf Manag Assoc. 2008;79:73–7. [PubMed]
