Int J Med Inform. Author manuscript; available in PMC 2010 December 1.
PMCID: PMC2784251
NIHMSID: NIHMS128451

Enhanced Identification of Eligibility for Depression Research Using an Electronic Medical Record Search Engine

Lisa Seyfried, MD,1 David Hanauer, MD, MS,2,3,4 Donald Nease, MD,5 Rashad Albeiruti, BS,1 Janet Kavanagh, MS,1,6 and Helen C. Kales, MD1,6,7

Abstract

Purpose

Electronic medical records (EMRs) have become part of daily practice for many physicians. Attempts have been made to apply electronic search engine technology to speed EMR review. This prospective, observational study compared the speed and accuracy of an electronic search engine vs. manual review of the EMR.

Methods

Three raters reviewed 49 cases in the EMR to screen for eligibility in a depression study using the Electronic Medical Record Search Engine (EMERSE). One week later, raters received a scrambled set of the same patients, including 9 distractor cases, and used manual EMR review to determine eligibility. For both methods, accuracy was assessed for the original 49 cases by comparison with a gold standard rater.

Results

Use of EMERSE resulted in considerable time savings; chart reviews using EMERSE were significantly faster than traditional manual review (p=0.03). The percent agreement of raters with the gold standard (i.e., concurrent validity) did not differ significantly between EMERSE and manual review.

Conclusions

Using a search engine optimized for finding clinical information in the free-text sections of the EMR can provide significant time savings while preserving reliability. The major power of this search engine derives not from a more advanced and sophisticated search algorithm, but from a user interface designed explicitly to help users search the entire medical record in a way that protects health information.

Keywords: Information storage and retrieval; Medical record systems, computerized; Databases; Clinical research; Medical informatics

INTRODUCTION

Documentation and information management are fundamental aspects of patient care. Health information technologies (Health IT) such as electronic medical records (EMRs), decision aids, computerized order entry, and electronic prescribing have rapidly become part of daily practice for many physicians [1]. A 2006 systematic review concluded that Health IT can significantly improve quality by increasing guideline adherence, enhancing disease surveillance, and decreasing medication errors [2]. While data on the impact of EMRs on clinical care are accumulating, there is less information on the use of EMRs in clinical research [3].

Since 1998, the University of Michigan Health System (UMHS) has used CareWeb [4], an in-house developed system for creating and viewing electronic documentation, as its unified EMR. Patient encounters, problem lists, and medication data are largely encoded as free text. This allows clinicians to enter data easily and rapidly, either by dictating or typing, without the constraints of a controlled medical vocabulary or a predefined document structure. In 2007 alone, over 108 million lines of text representing 2.6 million clinical documents were entered into CareWeb. Because these patient data are not coded, extracting information from them is challenging.

Given the time and effort required for manual chart review, attempts have been made to apply search engine technology to the EMR [5,6,7]. Existing approaches are often not optimal because of: 1) impracticality for patient records containing hundreds of documents (e.g., search in MS Word); 2) presentation of results in a manner that makes it difficult to efficiently review medical records and keep information for each patient distinct (e.g., text processing applications such as jEdit); and 3) inability to search for words with the case sensitivity associated with many medical terms, as well as security concerns arising from creation of an external data index (e.g., Lucene). The Electronic Medical Record Search Engine (EMERSE) was created at UMHS to provide a secure and efficient way to utilize the EMR for research and clinical data abstraction [8]. EMERSE offers an intuitive user interface for searching the EMR. Search results are shown in a format consistent with the organization of the EMR, segregated by individual patient and into separate categories for demographics, the problem summary list, clinical notes, and pathology and radiology reports. One of its most powerful features is the ability to perform batch searches across multiple patients. “Bundles”, or groups of search terms, can be created to perform standardized searches of patient lists. EMERSE search bundles enable the user to search for or ignore phrases, and support case-sensitive searches and wildcard matches. Using a bundle, the system searches the record for these terms and produces context-sensitive search results, or “hits”. EMERSE also helps protect health information: unlike manual chart review, records not relevant to the search produce no “hits” and are never displayed.

Currently EMERSE is being used in more than 150 research studies in a variety of medical departments across our entire health system of 3 hospitals, 30 health centers and 120 clinics. Our study team is utilizing EMERSE for several research studies on later-life depression for two main purposes: 1) to quickly and efficiently screen thousands of patients for study eligibility in a manner that protects personal health information; and 2) to search or “mine” the free-text medical record to examine patient health variables that could not previously be studied in large-scale administrative data.

While we and many other study teams at our institution have found EMERSE to be invaluable, the standard in clinical research remains manual chart review performed by trained chart abstractors. To add value, methods created for EMR information extraction need to be at least as accurate as, and faster than, manual review of medical records. The purpose of this study was to compare the accuracy and speed of eligibility chart reviews performed using EMERSE with those performed manually in the EMR. Based on our experience with this tool in several studies, we hypothesized that using EMERSE would be faster than manual chart review while maintaining accuracy.

METHODS

Setting

As the first step in an NIMH- and IRB-approved study, we use our EMR to screen for eligibility by identifying patients who attended appointments in participating family medicine and internal medicine clinics in the prior week. Eligibility criteria include: 1) age 60 or older; 2) a primary care provider recommendation of new depression treatment within the previous month; 3) White or African American race; and 4) no history of dementia, bipolar disorder, or schizophrenia. Confirming these criteria requires searching numerous chart sections.
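
For concreteness, the eligibility logic can be written as a simple predicate. The following is a minimal, hypothetical Python sketch; the Patient fields and example values are illustrative assumptions, not fields of the actual administrative database:

    from dataclasses import dataclass

    @dataclass
    class Patient:
        age: int
        race: str                # e.g., "White" or "African American"
        new_depression_tx: bool  # PCP recommended new depression treatment in past month
        history: set             # prior diagnoses, as lowercased strings

    # Diagnoses that exclude a patient from the study
    EXCLUDING_DX = {"dementia", "bipolar disorder", "schizophrenia"}

    def eligible(p: Patient) -> bool:
        """Apply the four eligibility criteria described above."""
        return (p.age >= 60
                and p.race in {"White", "African American"}
                and p.new_depression_tx
                and not (p.history & EXCLUDING_DX))

    # Hypothetical examples
    print(eligible(Patient(72, "White", True, {"hypertension"})))         # True
    print(eligible(Patient(65, "African American", True, {"dementia"})))  # False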

Material

For the depression study, a list of older patients who attended clinic visits is obtained weekly from an administrative database, averaging 1200 patients per week from the 12 participating primary care clinics. For the purposes of the current study, a one-week convenience sample of patients (n=1383) was screened for potential eligibility using EMERSE by an experienced staff recruiter. Patients’ medical records contained on average 102 free-text progress notes (range 6-322).

Of these patients, 13 were determined to be eligible. An additional 37 ineligible patients were randomly chosen from the set to form a review group of 50 cases. The study principal investigator (HCK) reviewed the eligibility decisions via manual chart review, using the eligibility criteria noted above, to form a gold standard for comparison. One eligible case was removed because of its complexity and ambiguity, leaving a sample of 49. Nine additional cases (4 eligible, 5 ineligible) were chosen from a subsequent data set for use as distractors in the second chart review. Time spent on these 9 additional records during manual review was subtracted from the total time measures.

Method

The search bundle was created from the same criteria used in manual screening. The principal investigator developed the initial bundle, which was then refined by the study team through several weekly iterations of chart screening (see Appendix A). This method for creating “bundles” has proven effective in a number of studies [9]. Figure 1a depicts an example of a bundle search result. Raters then used search results to review individual documents (Figure 1b) and determine eligibility.

Figure 1. EMERSE screenshots: (a) a bundle search result; (b) an individual document view.

Patient records were evaluated on two occasions by three raters with different levels of experience both in making study eligibility determinations and in using EMERSE (8 months, 4 months, and 1 month, respectively). Raters classified each case as either eligible or ineligible for the study and recorded the time spent in the screening process. One week later, raters received a scrambled set of the same patients along with 9 distractor cases and used manual review of CareWeb to determine eligibility. They were told that the second set consisted of patients different from the first.

Concurrent validity (the agreement between results obtained by one method and results obtained for the same population by a method acknowledged as the “gold standard”) was measured by comparing each rater’s eligibility decisions for the original 49 cases, under each method (i.e., manual review or EMERSE), with the gold standard. Cohen’s kappa (a measure of agreement) was used for these pairwise comparisons. For each rater, total chart review time, manual vs. EMERSE, was compared using a nonparametric two-sample median test. All analyses were conducted using SAS Version 9.1 (SAS Institute, Cary, NC).
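
The study analyses were run in SAS; purely as an illustration, analogous computations in Python might look like the following, using cohen_kappa_score from scikit-learn and Mood's median test from SciPy. All data values below are hypothetical placeholders, not the study data:

    from scipy.stats import median_test
    from sklearn.metrics import cohen_kappa_score

    # Hypothetical eligibility calls (1 = eligible, 0 = ineligible)
    rater = [1, 0, 0, 1, 0, 1, 0, 0, 1, 0]
    gold  = [1, 0, 0, 1, 0, 0, 0, 0, 1, 0]

    # Concurrent validity: agreement between one rater and the gold standard
    kappa = cohen_kappa_score(rater, gold)

    # Speed: hypothetical total review times (minutes) for the three raters
    emerse_times = [35, 42, 55]
    manual_times = [95, 110, 120]
    stat, p, grand_median, table = median_test(emerse_times, manual_times)

    print(f"kappa = {kappa:.2f}, median-test p = {p:.3f}")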

RESULTS

For all raters, regardless of experience, use of EMERSE resulted in considerable time savings (see Table 1); chart reviews using EMERSE were significantly faster than traditional manual review (p=0.03). The time savings were greater for the more experienced raters (raters 1 and 2), yet even the least experienced rater (rater 3) saved over an hour by using EMERSE.

Table 1
Total time (minutes) for chart review and agreement with gold standard rater, manual vs. EMERSE.

In terms of valid eligibility decision-making, the percent agreement of raters with the gold standard did not differ significantly between EMERSE and manual review. In the majority of cases where raters disagreed with the gold standard, the error was falsely classifying a patient as eligible. When screening for study inclusion, such false positives have less impact on targeted enrollment, as there are typically several later occasions in the process to reconfirm true eligibility.

DISCUSSION

In this study, use of a medical record search engine, EMERSE, was shown to be as accurate as, and significantly faster than, manual chart abstraction for the purposes of eligibility chart review. Rater experience did not affect the relative accuracy of the two methods.

These results indicate that a medical record search engine such as EMERSE has tremendous utility for reducing the time required for chart screening in clinical research, both for prospective subject eligibility screening and for retrospective chart reviews. In addition, such a tool could augment studies using large administrative databases with thousands of patients by searching for variables (e.g., Mini-Mental State Exam scores) that are embedded in free-text chart notes and would otherwise be impracticable to find. One of the unique features of EMERSE is that it is not fully automated; rather, it augments the search capabilities of a human abstractor, allowing clinical and research experience to inform the process. Indeed, EMERSE does not use document ranking techniques such as term frequency-inverse document frequency (tf-idf) weighting. Rather than retrieving a single “best” document that answers a clinical question, users must often review multiple documents, each with unique components that together help answer the question. Nevertheless, the utility of document ranking techniques should be explored in future studies.
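
For readers unfamiliar with such ranking, the following is a minimal sketch of the kind of tf-idf ranking that EMERSE forgoes, written with scikit-learn; the note snippets and query are invented examples, and this code is not part of EMERSE:

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    # Invented note snippets, not real patient data
    docs = [
        "Pt started on citalopram for major depression.",
        "Follow-up for hypertension; no mood complaints.",
        "Depression screen positive; discussed sertraline.",
    ]

    vec = TfidfVectorizer()
    doc_matrix = vec.fit_transform(docs)   # tf-idf weight per term per document
    query_vec = vec.transform(["citalopram depression"])

    # Rank documents by cosine similarity to the query
    scores = cosine_similarity(query_vec, doc_matrix).ravel()
    for i in scores.argsort()[::-1]:
        print(f"{scores[i]:.2f}  {docs[i]}")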

A medical record search engine such as EMERSE also has tremendous clinical utility, particularly in urgent situations. For example, a clinician who wonders whether a patient has ever had a trial of a certain medication can perform a simple electronic search rather than manually going through multiple prior chart notes. A recent study of EMR use in the Netherlands reported that, due to time constraints, over a third of practitioners chose not to search the EMR when they had questions about patients’ medical issues [10]. An even higher percentage reported giving up searches for information in the EMR because they required too much time.

A growing body of literature has explored the use of information retrieval systems for health care purposes [11], in some cases to help busy clinicians access appropriate literature to answer clinical questions at the time of a patient encounter [12,13]. In contrast, little work has been done to apply such search strategies and tools to the free-text documents in EMRs [5-7]. This may be due to privacy regulations, or it may be because a different set of tools is needed. Further work should explore these issues.

This study has some limitations restricting the generalizability of the results. Both the CareWeb EMR and EMERSE were developed and are in use at a single institution, the University of Michigan. However, EMRs are common at many institutions, and search engines like EMERSE could be adapted for those settings; since this investigation, we have modified EMERSE for use with the EMR of the Veterans Affairs Healthcare System in Ann Arbor for several studies. In addition, because we standardized users to a single “bundle”, we did not test each person’s ability to determine an ideal set of search terms; allowing raters to craft their own terms, however, would have made it more difficult to differentiate effects attributable to the users from those attributable to the search engine itself. Among the more than 150 studies at our institution currently using EMERSE, investigators often choose to standardize bundles among team members to ensure a consistent search protocol. Finally, with technology such as EMERSE, there is a chance that otherwise eligible cases might be missed because of unusual chart-note terminology or because a user is not adept with a search engine. However, prior work has also shown the opposite: cases missed by professional abstractors could be identified by EMERSE [8].

CONCLUSIONS

Using technologies such as an electronic medical record search engine to augment manual chart review for patient eligibility determination can result in significant time savings while preserving the ability to make valid decisions. The major power of EMERSE derives not from a more advanced and sophisticated search algorithm, but from a user interface designed explicitly to help users search the entire medical record in a way that protects health information.

Summary Points

What was already known

  • Electronic medical records (EMRs) have become part of daily practice for many physicians
  • Extraction of information contained in EMRs is needed for both clinical and research purposes
  • Extraction of clinically-relevant concepts from free-text data in EMRs is complex and time consuming
  • To add value, methods created for EMR information extraction need to be at least as accurate as, and faster than, manual review of medical records

What this study added to our knowledge

  • Compared the speed and accuracy of an EMR search engine to manual medical record review
  • Results showed that using a search engine optimized for finding clinical information in the EMR can provide significant time savings while preserving the ability to make valid decisions regarding clinical eligibility
  • Relatively simple and user-friendly search tools can effectively assist researchers in accessing rich clinical data in EMRs in an efficient manner

Acknowledgments

This research was supported by an NIMH grant (R21MH073002) awarded to Dr. Helen Kales. Portions of this manuscript were presented as a poster at the American Association for Geriatric Psychiatry Annual Meeting, Orlando, FL, March 2008.

Appendix A

EMERSE search bundle used in this study

Search terms or phrases are listed one per line. Multi-word phrases are placed in quotes. The @ symbol is a wildcard that will match anything as long as it is part of the word to which it is attached. The ^ symbol in front of a word means that the match should be case sensitive. Custom colors can be assigned to terms to help distinguish inclusion from exclusion terms. Phrases with a – symbol in front are ignored by EMERSE, which serves as a form of negation so that, for example, ‘psychosis’ will not be highlighted in the context of ‘no psychosis’.
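
To make these rules concrete, the following minimal Python sketch implements matching semantics like those described above with regular expressions; it is an illustration only, not EMERSE’s actual implementation:

    import re

    def term_to_regex(term):
        """Compile one bundle line into (regex, is_ignore).

        '@' is a wildcard within the word it is attached to, a leading '^'
        makes the term case sensitive, surrounding quotes mark a multi-word
        phrase, and a leading '-' marks an ignore (negation) phrase.
        """
        is_ignore = term.startswith("-")
        term = term.lstrip("-").strip('"')
        case_sensitive = term.startswith("^")
        term = term.lstrip("^")
        pattern = re.escape(term).replace("@", r"\w*")
        flags = 0 if case_sensitive else re.IGNORECASE
        return re.compile(r"\b" + pattern + r"\b", flags), is_ignore

    def find_hits(text, terms):
        """Return (pattern, match) pairs, skipping matches inside ignore phrases."""
        compiled = [term_to_regex(t) for t in terms]
        ignored = [m.span() for rx, ig in compiled if ig
                   for m in rx.finditer(text)]
        hits = []
        for rx, ig in compiled:
            if ig:
                continue
            for m in rx.finditer(text):
                if not any(s <= m.start() and m.end() <= e for s, e in ignored):
                    hits.append((rx.pattern, m.group()))
        return hits

    note = "No psychosis noted. Started citalopram for major depression."
    print(find_hits(note, ['"major depress@"', "citalopram", "psychosis",
                           '-"no psychosis"']))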

Inclusion

Phrases to find:
"major depress@"
"depressed mood"
^MDD
^MDE
antidepressant@
depress@
^SSRI
^SNRI
celexa
citalopram
lexapro
fluoxetine
prozac
paroxetine
paxil
zoloft
sertraline
bupropion
wellbutrin
deloxetine
cymbalta
mirtazapine
remeron
venlafaxine
effexor
trazodone
desyrel
desipramine
norpramin
nortriptyline
Pamelor

Phrases to ignore:
"depressed heart rate"
-zyban

Exclusion

Phrases to find:
schiz@
bipolar
mani@
psychosis
psychotic
"mood stabilizer"
lithium
valp@
depak@
carbamaz@
tegretol
gabapentin
neurontin
risp@
olanz@
Zyprexa
quetiapine
seroquel
haldol
haloperidol
abilify
aripiprazole
demen@
aricept
donepezil
namenda
memantine
galantamine
reminyl
rivastigmine
exelon
tacine
cognex
alz@
lewy
"cognitive impairment"

Phrases to ignore:
"no psychosis"
-"no mani@"
-"no dementia"


References

1. Simon SR, McCarthy ML, Kaushal R, et al. Electronic health records: which practices have them, and how are clinicians using them? J Eval Clin Pract. 2008;14:43–47.
2. Chaudhry B, Wang J, Wu S, et al. Systematic review: impact of health information technology on quality, efficiency, and costs of medical care. Ann Intern Med. 2006;144:742–752.
3. Ozcan YA, Kazley AS. Do hospitals with electronic medical records (EMRs) provide higher quality care? An examination of three clinical conditions. Med Care Res Rev. 2008 Feb; doi:10.1177/1077558707313437.
4. Bria WF, Shabot MM. The electronic medical record, safety, and critical care. Crit Care Clin. 2005;21:55–79, viii.
5. Malamateniou F, Vassilacopoulos G, Mantas J. A search engine for virtual patient records. Int J Med Inform. 1999;55:103–115.
6. Gregg W, Jirjis J, Lorenzi NM, et al. StarTracker: an integrated, web-based clinical search engine. AMIA Annu Symp Proc. 2003:855.
7. Fisk JM, Mutalik P, Levin FW, Erdos J, Taylor C, Nadkarni P. Integrating query of relational and textual data in clinical databases: a case study. J Am Med Inform Assoc. 2003;10:21–38.
8. Hanauer DA. EMERSE: the Electronic Medical Record Search Engine. AMIA Annu Symp Proc. 2006:941.
9. Hanauer DA, Englesbe MJ, Cowan JA, Campbell DA. Informatics and ACS-NSQIP: automated processes could replace manual record review. J Am Coll Surg. 2009 Jan;208(1):37–41.
10. Christensen T, Grimsmo A. Instant availability of patient records, but diminished availability of patient information: a multi-method study of GPs’ use of electronic patient records. BMC Med Inform Decis Mak. 2008;8:12.
11. Wiesman F, Hasman A, van den Herik HJ. Information retrieval: an overview of system characteristics. Int J Med Inform. 1997 Nov;47:5–26.
12. Magrabi F, Westbrook JI, Coiera EW. What factors are associated with the integration of evidence retrieval technology into routine general practice settings? Int J Med Inform. 2007 Oct;76:701–709.
13. Fontelo P, Liu F, Ackerman M. askMEDLINE: a free-text, natural language query tool for MEDLINE/PubMed. BMC Med Inform Decis Mak. 2005 Mar 10;5:5.