Med Care. Author manuscript; available in PMC 2013 October 1.
Published in final edited form as:
PMCID: PMC3444625

Effectiveness of an Electronic Health Record-Based Intervention to Improve Follow-up of Abnormal Pathology Results: a Retrospective Record Analysis

Archana Laxmisan, MD MA,1,2 Dean F. Sittig, PhD,3 Kenneth Pietz, PhD,1,2 Donna Espadas, BS,1,2 Bhuvaneswari Krishnan, MD,2,4 and Hardeep Singh, MD MPH1,2


Background and Objective

On March 11, 2009, the Veterans Health Administration (VA) implemented an electronic health record (EHR)-based intervention that required all pathology results to be transmitted to ordering providers via mandatory automated notifications. We examined the impact of this intervention on improving follow-up of abnormal outpatient pathology results.

Research Design and Subjects

We extracted pathology reports from the EHR of two VA sites. From 16,738 pre- and 17,305 post-intervention reports between 09/01/2008 and 09/30/2009, we randomly selected about 5% and evaluated follow-up outcomes using a standardized chart review instrument. Documented responses to the alerted report (e.g., ordering follow-up tests or referrals, notifying patients, and prescribing/changing treatment) were recorded.


Measures

Primary outcome measures included the proportion of timely follow-up responses (within 30 days) and the median time to direct response for abnormal reports.


Results

Of 816 pre- and 798 post-intervention reports reviewed, 666 (81.6%) and 688 (86.2%), respectively, were abnormal. Overall, there was no apparent intervention effect on timely follow-up (67.1% pre- vs. 69.3% post-intervention; p=0.4) or median time to direct response (8 days vs. 8 days; p=0.7). However, logistic regression uncovered a significant intervention effect (pre-intervention OR, 0.7; 95% CI 0.5-1.0) after accounting for site-specific differences in follow-up, with a lower likelihood of timely follow-up at one site (OR, 0.4; 95% CI 0.2-0.7).


Conclusions

An electronic intervention to improve test result follow-up at two VA institutions using the same EHR was found effective only after accounting for certain local contextual factors. Aggregating the effect of EHR interventions across different institutions and EHRs without controlling for contextual factors might underestimate their potential benefits.

Keywords: Anatomic pathology, electronic health record, communication, follow-up, post-analytic phase


Lack of follow-up of test results is a widely prevalent safety concern.(1-4) While delays in follow-up of anatomic pathology results have been described in cancer-related malpractice claims,(5) the prevalence of this problem is unknown. The use of electronic health records (EHRs) can potentially overcome these concerns. However, electronic solutions to facilitate communication have had varied success,(1,6-9) and few studies have explored the complex intersection between the pathology laboratory and the clinical microsystem.(10)

In the Veterans Health Administration (VA), test results are communicated from the laboratory to providers through automated messages in a “View Alerts” inbox within the EHR.(11) Alerts are delivered asynchronously (as with email, the sender and receiver need not be simultaneously engaged) and are distinct from the synchronous order-check alerts dynamically created during order entry, for example, to warn providers of potential drug interactions. Asynchronous alerts of this type are “passive” alerts and do not directly interrupt workflow or necessitate an immediate response. They are not unique to the VA; various EHR systems have comparable notification tools.(12-13)

Across VA sites, local policies and committees decide which types of test result alerts are “mandatory” (notifications that cannot be switched off by providers).(11) Anatomic pathology deals with the diagnosis of disease based on gross, microscopic and molecular examination of tissues. Until recently, alerts for these results were not mandatory. Although providers can customize their alert settings to receive some additional non-mandatory notifications, many providers only use locally-set default options.(11) In response to concerns that important findings might be missed in such scenarios, a national intervention was implemented throughout the VA on March 11, 2009, requiring all pathology results (normal or abnormal) to be transmitted to ordering providers as mandatory alerts (i.e., an automatically generated notification for every result). We examined the impact of this intervention on follow-up of abnormal pathology results in the outpatient setting of two VA sites. We hypothesized that timely follow-up would improve post-intervention at both institutions, with decreased time to follow-up and elimination of missed abnormal results.



Methods

We evaluated test result follow-up outcomes through retrospective chart reviews six months pre- and post-intervention. The project was approved by the institutional review board.


We selected two geographically distant VA sites (Sites A and B) where the local default for anatomic pathology alerts was non-mandatory for ordering providers prior to March 2009. Sites were chosen based on the research team’s convenience and the feasibility of data extraction and review. Site A is larger; both are teaching, tertiary-care referral centers with multispecialty clinics and community-based satellite clinics that provide care to urban and rural veterans.

At both sites, we reviewed the EHR and collected data on normal and abnormal outpatient pathology results reported between 09/01/2008 and 02/27/2009 pre-intervention and between 04/01/2009 and 09/30/2009 post-intervention. The total number of reports generated during the study period was comparable at these sites (Site A: 8497 pre-intervention and 8839 post-intervention; Site B: 8241 pre-intervention and 8466 post-intervention).

Data Sources

We queried the EHR database to generate a list of outpatient pathology reports that contained information on patient-identifiers, ordering-provider name, type of pathology report (surgical pathology or cytology reports) and date. Other descriptive variables (ordering-provider type, specialty and final results) and follow-up outcomes were collected through manual record-reviews.

Study population and sample

We identified 16,738 pre-intervention and 17,305 post-intervention reports at both sites. In the absence of prior data, we estimated our review sample size based on the intervention leading to a significant reduction in time to follow-up action of 5 days (from an average follow-up time of 12 days for abnormal results, based on pilot reviews, to an anticipated 7 days). This suggested a minimum sample size of 284 abnormal tests per group (power of 0.80, alpha=0.05, total abnormal tests=568). Based on an abnormal-to-normal test ratio of 7:3, we randomly selected records on a monthly basis, oversampling by 5 records at each site to account for any inappropriate records (i.e., inpatient records miscoded as outpatient, duplicate reports, etc.). Thus, we randomly selected just over 800 total reports for chart review pre- and post-intervention.
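The power calculation above can be sketched with a standard two-sample normal approximation. The paper does not report the standard deviation of follow-up time, so the value used below (21.3 days) is a hypothetical choice for illustration only, picked to show how a target in the neighborhood of the reported 284 abnormal tests per group could arise:

```python
import math
from statistics import NormalDist

def n_per_group(delta, sigma, alpha=0.05, power=0.80):
    """Two-sample normal-approximation sample size per group for detecting
    a difference in means of `delta` with common standard deviation `sigma`."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # two-sided test
    z_beta = NormalDist().inv_cdf(power)
    return math.ceil(2 * ((z_alpha + z_beta) * sigma / delta) ** 2)

# Detecting a 5-day reduction (12 -> 7 days); sigma of 21.3 days is an
# assumed, illustrative value -- it is NOT reported in the study.
n = n_per_group(delta=5, sigma=21.3)
print(n)  # 285 with this assumed sigma
```

With a smaller assumed spread the required sample shrinks quadratically, which is why the unreported standard deviation is the key input here.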

Chart Reviews

At each site, a trained physician-reviewer collected data from the EHR using a standardized data collection instrument. Reviewers were trained during pilot testing to ensure consistent data collection on follow-up actions.(8-9,14) Definitions of follow-up actions and the respective examples are listed in Table 1. Lack of follow-up was defined as absence of direct response to the test and indirect follow-up actions in situations where follow-up was required.

Table 1
Follow-up Outcomes and Definitions

Outcome measures

Primary outcome measures (i.e. most sensitive to intervention) included:

  1. Proportion of abnormal reports pre-and post-intervention with timely follow-up action (i.e. within 30 days).
  2. Median time to documentation of a direct response to an abnormal report.

Pre-and post-intervention secondary outcome measures included:

  1. Proportion of abnormal reports with lack of follow-up at six months
  2. Proportion of abnormal reports with documentation of patient notification of test result.


Statistical Analysis

In addition to generating descriptive data, we compared the distributions (as proportions) of several independent variables within each group, including ordering provider specialty (primary care, medical or surgical subspecialties) and ordering provider type. Each individual report was considered the unit of analysis because each required a unique action.(15) Associations between dichotomous or categorical variables were tested using the chi-squared test. Two-sided p-values were used throughout, with statistical significance defined as p<0.05. Each variable was first tested individually for inclusion in the overall logistic model using logistic regression with that variable as the only covariate; all variables with p<0.25 were included as candidates. A generalized estimating equation (GEE) model was used for the overall logistic regression to account for patients being nested within providers.(16) All analyses were performed using R statistical software version 2.10.1 and SAS 9.2.
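As a concrete illustration of the chi-squared comparison described above, the following minimal Python sketch (not the authors' R/SAS code) applies the Pearson chi-squared test to the overall timely follow-up counts reported in the Results (447/666 pre- vs. 477/688 post-intervention):

```python
def chi_squared_2x2(table):
    """Pearson chi-squared statistic (1 df, no continuity correction)
    for a 2x2 contingency table given as [[a, b], [c, d]]."""
    (a, b), (c, d) = table
    n = a + b + c + d
    row = (a + b, c + d)          # row totals
    col = (a + c, b + d)          # column totals
    stat = 0.0
    for i, obs_row in enumerate(table):
        for j, obs in enumerate(obs_row):
            expected = row[i] * col[j] / n
            stat += (obs - expected) ** 2 / expected
    return stat

# Rows: pre / post intervention; columns: timely / not timely follow-up.
stat = chi_squared_2x2([[447, 666 - 447], [477, 688 - 477]])
print(round(stat, 2))  # prints 0.77
```

The statistic (about 0.77) falls well below the 3.84 critical value for 1 degree of freedom at alpha=0.05, consistent with the reported p=0.4 for this comparison.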


Results

Sample Characteristics

Of 830 pre- and 807 post-intervention reports reviewed, 23 charts with missing data were excluded from analysis; abnormal reports included 666 of 816 (81.6%) and 688 of 798 (86.2%), respectively. Post-intervention, there were more general medicine practitioners [227/798 (28.5%) vs. 176/816 (21.6%)] and fewer dermatologists [135/798 (16.9%) vs. 174/816 (21.3%)] and pulmonologists [14/798 (1.8%) vs. 32/816 (3.8%)] in our sample. There were more Papanicolaou smears [257/798 (32.2%) vs. 211/816 (25.9%)] and fewer shave biopsies [101/798 (12.7%) vs. 139/816 (17.0%)] performed post-intervention in our sample. Trainees accounted for about half of the ordering providers [382/798 (47.9%) post-intervention vs. 427/816 (52.3%) pre-intervention].

We further compared characteristics of pathology reports in pre- and post-intervention groups at each site (Table 2). Proportionally fewer trainees and nurse practitioners were sampled at Site A. More malignant lesions were found in reports sampled from Site A [14.1% vs. 7.8% pre-intervention and 11.4% vs. 6.4% post-intervention].

Table 2
Site Specific Differences In Characteristics Of Pathology Reports In Pre-Intervention And Post-Intervention Groups

Primary Outcomes

Overall, timely follow-up for abnormal reports was not significantly changed post-intervention [447/666 (67.1%) pre- versus 477/688 (69.3%) post-intervention; p=0.4]. There was no significant change in the rate of timely follow-up at either site, although Site B had higher rates of timely follow-up [104/117 (88.9%) pre- and 111/117 (94.9%) post-intervention; p=0.09] compared with Site A [343/549 (62.5%) pre- and 366/571 (64.1%) post-intervention; p=0.6]. The median time to direct response was unchanged post-intervention [8 days (inter-quartile range (IQR) 5-18 days) vs. 8 days (IQR 5-15 days), respectively; p=0.65]. Individually, it was unchanged at 15 days for Site A but decreased slightly at Site B from 9 to 7 days (p>0.05).

Table 3 shows outcomes in terms of follow-up. Direct responses to abnormal reports were unchanged post-intervention (p=0.3).

Table 3
Types of Follow-Up Actions Taken On Pathology Reports

Secondary Outcomes

Lack of follow-up for abnormal reports at 6 months decreased post-intervention [10.1% pre- vs. 3.1% post-intervention; p<0.05] (Table 3). Site A accounted for nearly all reports without follow-up [11.8% pre- vs. 4.2% post-intervention; p<0.05]. Overall, documentation of patient notification for abnormal reports decreased slightly post-intervention [423/666 (63.5%) pre- vs. 411/688 (59.7%) post-intervention; data not shown]. Individually, documentation at Site B was higher [100/117 (85.5%) pre- and 97/117 (82.9%) post-intervention vs. 323/549 (58.8%) pre- and 314/571 (54.9%) post-intervention at Site A; data not shown].

Logistic Regression

In a logistic regression model for timely follow-up (Table 4), an intervention effect was demonstrated; the pre-intervention group was less likely to receive timely follow-up (OR, 0.7; 95% CI 0.5-1.0). Site-specific differences existed; Site A was less likely to provide timely follow-up (OR, 0.4; 95% CI 0.2-0.7), even after accounting for differences in provider and test report characteristics. The following specialties were significantly more likely to be associated with timely follow-up after accounting for the possible intervention and site effects: Hematology/Oncology (OR, 8.7; 95% CI 1.3-57.5), Pulmonology (OR, 24.4; 95% CI 3.3-181.2), and Urology (OR, 5.3; 95% CI 1.8-15.6). Older patients (>80 years) were more likely to receive timely follow-up (OR, 1.6; 95% CI 1.1-2.4).

Table 4
Multivariable (step-wise) logistic regression model of predictors of timely follow-up
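To make the adjusted odds ratios above concrete: an OR of 0.7 for the pre-intervention group means its odds of timely follow-up were 0.7 times the post-intervention odds. The sketch below converts this to the probability scale, using the observed post-intervention rate (477/688) as an illustrative baseline; because the reported OR is adjusted for covariates, this back-of-the-envelope conversion is illustrative only:

```python
def apply_odds_ratio(p_baseline, odds_ratio):
    """Return the probability implied by multiplying the odds of
    `p_baseline` by `odds_ratio`."""
    odds = p_baseline / (1 - p_baseline)
    new_odds = odds * odds_ratio
    return new_odds / (1 + new_odds)

p_post = 477 / 688                        # observed post-intervention rate (~0.693)
p_pre_implied = apply_odds_ratio(p_post, 0.7)
print(round(p_pre_implied, 3))            # about 0.613
```

Note that on baseline probabilities near 0.7 an OR of 0.7 corresponds to a roughly 8-percentage-point difference, which is larger than the crude 2-point gap; this is the kind of divergence between adjusted and unadjusted effects the model uncovered.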


Discussion

An EHR-based “mandatory” notification of anatomic pathology results improved the proportion of patients who received follow-up at six months. However, an intervention effect on timely follow-up was shown only after accounting for various site, provider and test variables in a logistic regression model. After controlling for facility differences, certain types of specialists and older patients were more likely, and trainees less likely, to be associated with timely follow-up. Follow-up was remarkably different at the two study sites despite the use of the same EHR. This likely reflected differences in local practices and workflow features that we were unable to capture using chart review.(17) Our findings suggest that technology-based interventions to improve test results management in different organizations are likely to exert a highly variable “real-world” effect even when health care systems and technology are similar.

To our knowledge, this is the first study to establish rates of follow-up of anatomic pathology results in the setting of an integrated EHR. Our study also has significant implications for EHR-based interventions targeting effective communication of test results. Despite being the same intervention delivered in the same EHR, the intervention had no impact on the pre-existing differences in follow-up patterns between the two sites. Implementing and using health information technology effectively in complex systems requires addressing many contextual factors beyond the technology itself.(18-25) Local “socio-technical” factors, such as existing workflows or practices, concomitant quality improvement initiatives, and other contextual factors (personnel and organizational features, etc.), must be taken into account.(26)

Although further qualitative work is essential to fully understand our findings, several contextual factors could plausibly explain these differences.(27) For instance, there are few standardized clinical practices or workflows for fail-safe management of test results, and the level of institutional support providers receive for test result management activities is variable. Individual provider factors related to how they manage test results in the EHR might be especially prominent and need to be explored further.(11) Some providers might not have been able to access alerts; for instance, certain specialists and trainees who rotate within the VA might not be able to access the EHR remotely. Currently, these alerts reliably go only to a single person (i.e., the ordering provider), who might be off-site. Site-specific differences in management of alerts sent to trainees may exist, but test result follow-up by trainees was still untimely after controlling for these differences. Additionally, many providers can receive over 50 different types of notifications a day,(28) and with such a large number of notifications, a “needle in a haystack” phenomenon might result in which abnormal pathology reports are under-prioritized or overlooked.(29) This might explain why general medicine providers, who typically receive more alerts, were less likely to provide timely follow-up than sub-specialists.

Our study limitations include the lack of a control group to account for temporal trends; this was not feasible because the intervention was a natural experiment implemented throughout the VA. Improvements may have occurred beyond six months post-intervention, which we did not measure. While our study findings might not be considered generalizable beyond the VA, many EHRs are adopting notification systems similar to the VA's, and our lessons could be useful for them. Finally, we relied on EHR documentation to determine outcomes and might have missed undocumented actions. If anything, however, documentation should have been higher post-intervention, because a VA directive coincidentally implemented in March 2009 required all test results to be communicated to patients within 14 days of the result and for this communication to be documented in the EHR.(15)

In conclusion, our study suggests that aggregating the effect of EHR interventions across different institutions and EHRs without controlling for local “socio-technical” contextual factors might underestimate their potential benefits.


Funding Source: The study was supported by an NIH K23 career development award (K23CA125585) to Dr. Singh, the VA National Center of Patient Safety, Agency for Health Care Research and Quality and in part by the Houston VA HSR&D Center of Excellence (HFP90-020). These sources had no role in the design and conduct of the study; collection, management, analysis, and interpretation of the data; and preparation, review, or approval of the manuscript.



Abbreviations

Health information technology
Veterans Health Administration
electronic health record
Computerized Patient Record System
Veterans Health Information Systems and Technology Architecture


Conflicts of Interest



Dr. Laxmisan and Dr. Pietz had full access to all of the data in the study and take responsibility for the integrity of the data and the accuracy of the data analysis.



1. Casalino LP, Dunham D, Chin MH, Bielang R, Kistner EO, Karrison TG, et al. Frequency of Failure to Inform Patients of Clinically Significant Outpatient Test Results. Arch Intern Med. 2009 Jun 22;169(12):1123–9. [PubMed]
2. Singh H, Thomas E, Mani S, Espadas E, Khan M, Arora H, et al. Will Providers Follow-up on Abnormal Test Result Alert If They Read It? Society of General Internal Medicine 31st Annual Meeting, Pittsburgh, Pennsylvania, April 9-12, 2008. JGIM. 2008 Apr;23(supplement 2):374.
3. Hysong S, Sawhney M, Wilson L, Sittig D, Esquivel A, Singh H. Management of Electronic Health Record Based Alerts of Abnormal Test Results: A Qualitative Approach. 2010
4. Hysong S, M S, L W, D S, A E, M W, et al. Improving outpatient safety through effective electronic communication: a study protocol. Implementation Science. 2009;4(1):62. [PMC free article] [PubMed]
5. Gandhi TK, Kachalia A, Thomas EJ, Puopolo AL, Yoon C, Brennan TA, et al. Missed and delayed diagnoses in the ambulatory setting: a study of closed malpractice claims. Ann.Intern.Med. 2006 Oct 3;145(7):488–96. [PubMed]
6. Elder NC, McEwen TR, Flach J, Gallimore J, Pallerla H. The management of test results in primary care: does an electronic medical record make a difference? Family medicine. 2010 May;42(5):327–33. [PubMed]
7. Wahls T, Haugen T, Cram P. The continuing problem of missed test results in an integrated health system with an advanced electronic medical record. Jt.Comm.J.Qual.Patient Saf. 2007 Aug;33(8):485–92. [PubMed]
8. Singh H, Thomas EJ, Sittig DF, Wilson L, Espadas D, Khan MM, et al. Notification of abnormal lab test results in an electronic medical record: do any safety concerns remain? Am.J.Med. 2010 Mar;123(3):238–44. [PMC free article] [PubMed]
9. Singh H, Thomas EJ, Mani S, Sittig D, Arora H, Espadas D, et al. Timely follow-up of abnormal diagnostic imaging test results in an outpatient setting: are electronic medical records achieving their potential? Arch.Intern.Med. 2009 Sep 28;169(17):1578–86. [PMC free article] [PubMed]
10. Raab SS, Grzybicki DM. Measuring quality in anatomic pathology. Clin.Lab.Med. 2008 Jun;28(2):245–59. vi. [PubMed]
11. Hysong S, M S, Wilson L, Sittig DF, Espadas D, Davis T, et al. Provider Management Strategies of Abnormal Test Result Alerts: A Cognitive Task Analysis. JAMIA. 2010;17:71–7. [PMC free article] [PubMed]
12. McDonald C, Abhyankar S. Clinical Decision Support and Rich Clinical Repositories: A Symbiotic Relationship: Comment on “Electronic Health Records and Clinical Decision Support Systems. Archives of Internal Medicine. 2011 Jan 24;171(10):903–5. [PMC free article] [PubMed]
13. Sequist TD, Gandhi TK, Karson AS, Fiskio JM, Bugbee D, Sperling M, et al. A randomized trial of electronic clinical reminders to improve quality of care for diabetes and coronary artery disease. Journal of the American Medical Informatics Association: JAMIA. 2005 Aug;12(4):431–7. [PMC free article] [PubMed]
14. Singh H, Esquivel A, Sittig DF, Murphy D, Kadiyala H, Schiesser R, et al. Follow-up actions on electronic referral communication in a multispecialty outpatient setting. J Gen Intern Med. 2011;26(1):64. [PMC free article] [PubMed]
15. Department of Veterans Affairs Ordering and reporting of test results-VHA Directive 2009-019. 2009
16. Hardin J, Hilbe J. Generalized estimating equations. Chapman & Hall/CRC; Boca Raton, Fla.: 2003.
17. Metzger J, Welebob E, Bates DW, Lipsitz S, Classen DC. Mixed results in the safety performance of computerized physician order entry. Health Aff (Millwood) 2010 Apr;29(4):655–63. [PubMed]
18. Romano MJ, Stafford RS. Electronic health records and clinical decision support systems: impact on national ambulatory care quality. Arch.Intern.Med. 2011 May 23;171(10):897–903. [PubMed]
19. Desroches CM, Agarwal R, Angst CM, Fischer MA. Differences between integrated and stand-alone E-prescribing systems have implications for future use. Health.Aff.(Millwood) 2010 Dec;29(12):2268–77. [PubMed]
20. Linder JA, Ma J, Bates DW, Middleton B, Stafford RS. Electronic health record use and the quality of ambulatory care in the United States. Arch.Intern.Med. 2007 Jul 9;167(13):1400–5. [PubMed]
21. Keyhani S, Hebert PL, Ross JS, Federman A, Zhu CW, Siu AL. Electronic health record components and the quality of care. Med.Care. 2008 Dec;46(12):1267–72. [PubMed]
22. Zhou L, Soran CS, Jenter CA, Volk LA, Orav EJ, Bates DW, et al. The relationship between electronic health record use and quality of care over time. J.Am.Med.Inform.Assoc. 2009 Aug;16(4):457–64. [PMC free article] [PubMed]
23. Sittig DF, Singh H. A new sociotechnical model for studying health information technology in complex adaptive healthcare systems. Qual.Saf.Health.Care. 2010 Oct;19(Suppl 3):i68–74. [PMC free article] [PubMed]
24. Shekelle PG, Pronovost PJ, Wachter RM, Taylor SL, Dy SM, Foy R, et al. Advancing the science of patient safety. Ann.Intern.Med. 2011 May 17;154(10):693–6. [PubMed]
25. Taylor SL, Dy S, Foy R, Hempel S, McDonald KM, Ovretveit J, et al. What context features might be important determinants of the effectiveness of patient safety practice interventions? BMJ Qual.Saf. 2011 Jul;20(7):611–7. [PubMed]
26. Selvan MS, Sittig DF, Thomas EJ, Arnold C, Murphy RE, Shabot MM. Improving erythropoietin stimulating agent administration in a multi-hospital system through quality improvement initiatives: A pre-post comparison study. J Patient Safety. 2011 in press. [PubMed]
27. Singh H, Vij MS. Eight recommendations for policies for communicating abnormal test results. Joint Commission journal on quality and patient safety / Joint Commission Resources. 2010 May;36(5):226–32. [PubMed]
28. Murphy DR, Reis B, Sittig DF, Singh H. Notifications received by primary care practitioners in electronic health records: a taxonomy and time analysis. Am. J. Med. 2012 Feb;125(2):209.e1–7. [PubMed]
29. Murphy DR, Reis B, Kadiyala H, Hirani K, Sittig DF, Khan MM, et al. Electronic health record-based messages to primary care providers: valuable information or just noise? Arch. Intern. Med. 2012 Feb 13;172(3):283–5. [PubMed]