JAMA. Author manuscript; available in PMC 2014 March 20.
Published in final edited form as:
PMCID: PMC3711676

Comparison of presenting complaint vs. discharge diagnosis for identifying “non-emergency” emergency department visits

Maria Raven, MD, MPH, MSc, Assistant Professor of Emergency Medicine (corresponding author); Robert A. Lowe, MD, MPH, Professor; Judith Maselli, MSPH, Senior Statistician; and Renee Y. Hsia, MD, MSc, FACEP, Assistant Professor of Emergency Medicine



Context

Reduction in emergency department (ED) utilization is frequently viewed as a potential source for cost savings. One consideration has been to propose denying payment if the patient’s diagnosis upon ED discharge appears to reflect a “non-emergency” condition. This approach does not incorporate other clinical factors such as chief complaint that may inform necessity for ED care.


Objective

To determine whether ED presenting complaint and ED discharge diagnosis correspond sufficiently to support use of discharge diagnosis as the basis for policies discouraging ED use.

Design, Setting, and Participants

The New York University emergency department algorithm (EDA) has been commonly used to identify “non-emergency” ED visits. We applied the EDA to publicly available ED visit data from the 2009 National Hospital Ambulatory Medical Care Survey (NHAMCS) for the purpose of identifying all “primary care treatable” visits. For each visit with a discharge diagnosis classified as “primary care treatable” we identified the chief complaint. To determine whether these chief complaints correspond to “non-emergency” ED visits, we then examined all ED visits with this same group of chief complaints to ascertain the ED course, final disposition, and discharge diagnoses.

Main Outcome Measures

Patient demographics, clinical characteristics, and disposition associated with chief complaints related to “non-emergency” ED visits.


Results

Although only 6.3% (95% CI 5.8–6.7) of visits were determined to have “primary care treatable diagnoses” based on discharge diagnosis and our modification of the EDA, the chief complaints reported for these ED visits with “primary care treatable” ED discharge diagnoses were the same chief complaints reported for 88.7% (95% CI 88.1–89.4) of all ED visits. Of these visits, 11.1% (95% CI 9.3–13.0) were identified at triage as needing immediate or emergent ED care; 12.5% (95% CI 11.8–14.3) required hospital admission; and 3.4% (95% CI 2.5–4.3) of admitted patients went directly from the ED to the operating room.


Conclusions

Among ED visits with the same presenting complaint as those ultimately given a “primary care treatable” diagnosis based on ED discharge diagnosis, a substantial proportion required immediate emergency care or hospital admission. The limited correspondence between presenting complaint and ED discharge diagnosis suggests that discharge diagnoses are unable to accurately identify “non-emergency” ED visits.


With increasing medical care costs, policy-makers have turned to emergency department (ED) utilization as a potential source for cost savings. Although the assumptions driving this policy approach are unproven,1 recent attempts to reduce ED use have occurred in Medicaid programs.2–6 If implemented for Medicaid patients, such practices would likely prompt similar policies by other payers, potentially affecting access to ED care for other segments of the population.

One approach aimed at reducing ED use has been to deny or limit payment if the patient’s diagnosis on discharge from the ED appears to reflect a “non-emergency” condition.3,7,8 Legislatures or regulators in Tennessee, Iowa, New Hampshire, and Illinois have considered or enacted legislation or regulations that would limit payment for “non-emergency” ED visits by Medicaid enrollees, based on discharge diagnosis. Other states, including Arizona, Oregon, Illinois, Iowa, Nebraska, North Carolina, and New Mexico, have recently implemented or considered implementation of some level of copayment requirement for non-emergency use of the ED (personal communications, Craig Price, American College of Emergency Physicians; April 13, 2012 and February 11, 2013). Although criteria for determining “non-emergency” ED visits vary by state and no systematic review of states’ practices is available, Washington State recently drew attention for a proposal in which the payer may make a determination about payment based only on the ED discharge diagnosis and whether the patient is hospitalized during the ED visit, without other clinical information,9 and other states appear to have similar practices. For this approach to be effective at reducing “non-emergency” ED use without discouraging ED use for more serious conditions, it would be necessary to predict discharge diagnosis based on information available before the patient is seen in the ED – i.e., based on presenting symptoms. Many have questioned whether this approach is possible. For example, a 65-year-old patient with diabetes may be discharged with the “non-emergency” diagnosis of gastroesophageal reflux after presenting with a chief complaint of chest pain; however, that patient still required an emergency evaluation to rule out acute coronary syndrome.
In addition, there is concern that this approach may violate the prudent layperson standard, which establishes the “criteria that insurance coverage is based not on ultimate diagnosis, but on whether a prudent person might anticipate serious impairment to his or her health in an emergency situation.” 10

The purpose of this study was to determine the correspondence between ED presenting complaint and ED discharge diagnosis.


Methods

Study Design and Data Source

This study is a secondary analysis of data collected in the 2009 National Hospital Ambulatory Medical Care Survey (NHAMCS). As described by its developers, “The NHAMCS is an annual, national probability sample of ambulatory visits made to non-federal, general, and short-stay U.S. hospitals conducted by the Centers for Disease Control and Prevention, National Center for Health Statistics (NCHS). The multi-staged sample design is comprised of three stages for the ED component: 1) 112 geographic primary sampling units (PSUs); 2) approximately 480 hospitals within PSUs; and 3) patient visits within emergency service areas.”11 Although the survey includes visits to selected ambulatory care departments, this analysis focuses solely on visits to hospital emergency departments (EDs). Per NHAMCS protocol, trained hospital staff members abstract ED visit data using a structured data entry form during 4-week data periods randomly assigned to each sampled hospital. The sampled data are extrapolated to national estimates through use of assigned patient visit weights, which account for probability of visit selection, nonresponse, and the ratio of sampled hospitals to the hospital universe. The study was exempt from review by the institutional review board of the University of California, San Francisco.

Key Variables

ED visit with “primary care treatable diagnosis” based on ED discharge diagnosis

We sought a method for identifying “non-emergency” ED visits that would maximize the probability of successfully classifying such visits based on the ED discharge diagnosis. Although the process for defining “non-emergency” diagnoses varies by state and various lists of “non-emergency” diagnoses have been proposed,12,13 many are based at least in part on the Emergency Department Algorithm (EDA) developed at New York University.14,15 Although the EDA was developed for other purposes and the EDA developers caution that “the algorithm is not intended as a triage tool or as a mechanism to determine whether ED use is appropriate for required reimbursement by a managed care plan,”15 it has been used by policymakers both to characterize “overuse” of EDs in several states (e.g., Connecticut,16 Oregon,17 and Massachusetts18) and, in modified form, as a basis for denying payment for ED visits in Washington State.9,12

We selected the EDA as the basis for classifying “non-emergency” ED visits for this study, both because of its use for similar purposes and because its classification system is more evidence-based than others that have been proposed. The EDA was developed with input from emergency physicians and based on ED visit data abstracted from 5,700 ED visit records. After excluding visits for injuries, mental health, and drug- and alcohol-related conditions, physician reviewers used data including chief complaint, demographic data, duration of symptoms, presenting vital signs, and medical history to classify visits as “emergent” (requiring care in under 12 hours) or “non-emergent.” Emergent cases were then further categorized as “emergency, primary care treatable” or “emergency, ED needed,” based on whether the resources required during the ED visit (including radiology, blood work, etc.) are normally available in the outpatient setting, in the judgment of the algorithm’s creators. In addition, “ED needed” visits were categorized based on whether or not they were “preventable or avoidable” with timely and effective outpatient care.15,19

The final step in the development of the EDA was designed to allow the EDA to be applied to administrative datasets. To do so, the above classifications were “mapped” to the discharge diagnoses for each case in the sample to determine the percentage of sample cases in each of the 4 categories for each diagnosis.14 As stated elsewhere, “For instance, multiple patients in the sample were discharged with ICD-9 code 789.00 (abdominal pain, unspecified site). All were deemed to require care within 12 hours and were classified as emergent. Two-thirds of these patients were managed with resources available in primary care settings, while one-third received interventions not available outside the ED. Therefore, the ICD-9 code 789.00 is assigned a 0.67 probability of emergency, primary care–treatable and a 0.33 probability of emergent, ED needed.”20
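The probability mapping quoted above can be sketched in code. The following is an illustrative sketch only, not the actual NYU EDA tables: the probabilities for ICD-9 code 789.00 come from the example in the text, while the dictionary layout, category labels, and function names are assumptions for exposition.

```python
from collections import defaultdict

# Each ICD-9 discharge diagnosis maps to a probability distribution over
# the EDA categories observed for that diagnosis in the development sample.
# Only 789.00 is shown, with the 0.67 / 0.33 split quoted in the text.
EDA_PROBS = {
    "789.00": {  # abdominal pain, unspecified site
        "emergent_primary_care_treatable": 0.67,
        "emergent_ed_needed": 0.33,
        "non_emergent": 0.0,
    },
}

def classify_visits(visits):
    """Accumulate expected visit counts per EDA category.

    `visits` is an iterable of (icd9_code, visit_weight) pairs; each visit
    contributes its weight to every category, split by the EDA probabilities
    attached to its discharge diagnosis.
    """
    totals = defaultdict(float)
    for code, weight in visits:
        probs = EDA_PROBS.get(code)
        if probs is None:
            totals["unclassified"] += weight
            continue
        for category, p in probs.items():
            totals[category] += p * weight
    return dict(totals)

# Two weighted visits discharged with 789.00: their combined weight of 150
# is split 0.67 / 0.33 across the two emergent categories.
counts = classify_visits([("789.00", 100.0), ("789.00", 50.0)])
```

This fractional attribution is why, when applied to administrative data, the EDA yields probabilities rather than a single label per visit.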

We used 2 additional strategies to maximize the probability that our classification system would identify only “non-emergency” ED visits. First, based on the ED discharge diagnosis, we classified a visit as having a “primary care treatable diagnosis” only if the EDA predicted that the probability of the diagnosis being primary care treatable was 100% (Figure 1, Step 1). This approach yields a more limited set of “primary care treatable diagnoses” than that used by some other researchers.20–22 Second, because some policymakers have proposed denying payment only for visits after which the patient was discharged home, we excluded visits resulting in hospital admission. By eliminating these higher-acuity visits, we eliminated some of the higher-risk chief complaints associated with a diagnosis.
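The two conservative selection rules above (Figure 1, Step 1) can be sketched as a single predicate. This is a hypothetical illustration: the field names and the shape of the probability lookup are assumptions, not the study's actual code.

```python
def has_pct_diagnosis(visit, eda_pct_probability):
    """Return True if a visit qualifies as having a
    'primary care treatable diagnosis' under the study's two filters.

    visit: dict with 'diagnosis' (an ICD-9 code string) and
           'admitted' (bool, True if the visit ended in admission).
    eda_pct_probability: dict mapping ICD-9 code -> EDA probability (0-1)
           that the diagnosis is primary care treatable.
    """
    if visit["admitted"]:
        # Proposed payment denials target only patients discharged home,
        # so admitted visits are excluded up front.
        return False
    # Keep only diagnoses the EDA rates as 100% primary care treatable;
    # unknown diagnoses default to 0.0 and are excluded.
    return eda_pct_probability.get(visit["diagnosis"], 0.0) == 1.0
```

Requiring probability exactly 1.0 (rather than a high threshold) is what makes this classification deliberately conservative.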

Figure 1
Algorithm for creation of “non-emergency complaint” ED visits based on NYU ED algorithm

Chief Complaints Associated with “non-emergency” ED visits

Because patients present to the ED with a chief complaint, not with a discharge diagnosis, our next step in identifying “non-emergency” visits was to determine what chief complaints were associated with the “primary care treatable diagnoses” (Figure 1, Step 2). The NHAMCS database contains a field for the most important reason for visit (RFV), in which the patient’s chief complaint is coded according to a standardized classification system developed by the National Center for Health Statistics.23,24 This coding system is similar to ICD-9-CM coding, in that it allows conversion of free-text data into a structured system. The RFV coding for chief complaints has been widely used by NCHS in NHAMCS and other surveys.25,26 At each hospital, triage nurses document the patient’s chief complaint per hospital protocol. Then, chart abstractors trained by the National Center for Health Statistics review the patient record and record the verbatim text, which is later classified as a chief complaint using RFV codes by an NCHS contractor. “As part of the quality assurance procedure, a 10 percent quality control sample of Patient Record Forms is independently keyed and coded. Error rates typically range between 0.3 and 0.9 percent for various survey items.”11

For all visits with “primary care treatable diagnoses,” based on the ED discharge diagnosis, we generated a list of RFVs. We then identified all ED visits in the dataset with RFVs identical to those on our list. These are referred to as ED visits with “non-emergency complaints” (Figure 1, Step 3). We chose the term, “non-emergency complaint” because, if it were possible to prospectively identify ED visits with diagnoses on the “primary care treatable diagnosis” list, the chief complaints resulting in these diagnoses might not require ED care.
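Steps 2 and 3 of Figure 1 amount to a set-membership operation, sketched below under assumed field names (`rfv`, `pct_dx`): collect the RFV codes generated by visits with “primary care treatable diagnoses,” then select every ED visit presenting with any of those complaints.

```python
def non_emergency_complaint_visits(visits):
    """Identify ED visits with 'non-emergency complaints'.

    visits: list of dicts with
        'rfv'    - reason-for-visit (chief complaint) code
        'pct_dx' - True if the discharge diagnosis was classified as
                   'primary care treatable'
    """
    # Step 2: chief complaints seen among visits whose discharge
    # diagnosis was "primary care treatable".
    pct_rfvs = {v["rfv"] for v in visits if v["pct_dx"]}
    # Step 3: every visit in the dataset sharing one of those complaints.
    return [v for v in visits if v["rfv"] in pct_rfvs]
```

Note the asymmetry this creates: a single reflux discharge would place “chest pain” on the complaint list, and the function would then sweep in every chest-pain visit regardless of its eventual diagnosis, which is exactly the mismatch the study measures.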

Variables reflecting the acuity of “non-emergency complaint” ED visits

Next, for the group of ED visits with “non-emergency complaints,” we identified diagnoses, disposition, and other key factors related to the patient’s initial presentation and ED course. Demographic variables included age, gender, race/ethnicity, insurance type, region, and urban/rural status. We also included triage categories defined on a scale of 1–5 by nursing staff as “immediate,” “emergent,” “urgent,” “semi-urgent,” or “non-urgent.”27 Triage vital signs were classified as normal or abnormal using standards based on published guidelines.28,29 Pain was self-reported at presentation on a scale from 1–10, with 10 being the most severe. We also identified whether patients arrived by ambulance. We ascertained whether the visit resulted in hospital admission and, if so, whether the patient was admitted to an observation unit, to a standard bed, or to a higher level of care. NHAMCS does not distinguish between observation unit stays occurring in the ED and those occurring in the inpatient facility. For the purposes of this analysis, we grouped observation unit stays with inpatient admissions, reasoning that an observation unit admission indicated a patient could not safely be discharged home.

Statistical Analysis

If presenting complaint corresponded closely with the discharge diagnosis, visits with “non-emergency complaints” based on the reasons for visit would be expected to correspond to visits with “primary care treatable diagnoses.” In this situation, the number of visits with “non-emergency complaints” would be similar to the number of visits with “primary care treatable diagnoses.” Conversely, if presenting complaint corresponded poorly with the discharge diagnosis, multiple chief complaints would be expected to be associated with each diagnosis and multiple diagnoses would be associated with each complaint. In this situation, the number of visits with “non-emergency complaints” would exceed the number of visits with “primary care treatable diagnoses.” Therefore, we compared the proportion of ED visits with “primary care treatable diagnoses” based on ED discharge diagnosis to the proportion of ED visits with “non-emergency complaints” based on the reason for the visit. In addition, we calculated descriptive statistics for “non-emergency complaint” ED visits, presenting frequencies and proportions.

We report actual ED visits from the hospitals included in the NHAMCS sample, national estimates based on survey visit weights, and 95% confidence intervals (CIs) based on standard errors provided by NHAMCS. The analyses follow recommendations on the NHAMCS website for using the sampling weights in the dataset to project, for all ED visits in the United States, the proportions with the specified characteristics.11 Confidence intervals were calculated using standard methods for survey data collected with stratified sampling, based on weights provided by NHAMCS.30 All estimates conform to NCHS standards.11 Unweighted estimates based on fewer than 30 records are considered unreliable by NCHS and are marked by an asterisk. Estimates were sufficiently precise with a single year of data to avoid the need to combine multiple years of NHAMCS data, which would have added complexity to the analyses given changes in variable definitions (e.g., triage category) over time. All analyses were performed using SAS version 9.2 (SAS Institute, Cary, NC) and SUDAAN version 10.0 (RTI International, Research Triangle Park, NC).
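To show how visit weights enter such estimates, here is a minimal sketch of a survey-weighted proportion with a normal-approximation 95% CI, assuming a crude with-replacement approximation to the design. The published analysis used SUDAAN with the full stratified NHAMCS design variables; this simplified version is illustrative only and would understate or overstate variance relative to a proper design-based estimate.

```python
import math

def weighted_proportion(records):
    """Survey-weighted proportion with an approximate 95% CI.

    records: list of (weight, has_characteristic) pairs, where weight is
    the visit's sampling weight and has_characteristic is a bool.
    Returns (estimate, (lower, upper)).
    """
    total_weight = sum(w for w, _ in records)
    # Ratio estimator: weighted share of visits with the characteristic.
    p = sum(w for w, x in records if x) / total_weight
    n = len(records)
    # Linearized residuals of the ratio estimator p = sum(w*x) / sum(w),
    # treating each record as its own with-replacement sampling unit.
    resid = [w * ((1.0 if x else 0.0) - p) / total_weight for w, x in records]
    mean_r = sum(resid) / n
    var = n / (n - 1) * sum((r - mean_r) ** 2 for r in resid)
    se = math.sqrt(var)
    return p, (p - 1.96 * se, p + 1.96 * se)
```

In the real analysis, each record's weight reflects its probability of selection through the three-stage design, so a single sampled visit can represent thousands of national visits.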


Results

The 2009 NHAMCS dataset contains 34,942 records, each representing a unique ED visit. Of these visits, an estimated 6.3% (95% CI 5.8–6.7) had “primary care treatable diagnoses” based on the ED discharge diagnosis and our modification of the EDA. However, the presenting complaints associated with the ED visits (i.e., “non-emergency complaints”) were also the presenting complaints for 88.7% (95% CI 88.1–89.4) of all ED visits, reflecting poor correspondence between ED discharge diagnosis and chief complaint.

These findings were similar for age-stratified subgroups. For children under age 18, 5.5% (95% CI 4.7–6.3) had “primary care treatable diagnoses” and 90.0% (95% CI 88.6–91.1) had “non-emergency complaints.” For adults age 65 and older, 3.2% (95% CI 2.8–3.8) had “primary care treatable diagnoses” and 86.9% (95% CI 85.8–88.1) had “non-emergency complaints.” (See eTables 1 and 2 for additional age-stratified results.)

Of the ED visits for chief complaints identical to chief complaints generated by the group of ED visits with “primary care treatable” diagnoses, 9.3% (95% CI 8.2–10.5) had an “emergent” triage category; 3.7% (95% CI 3.4–4.1) of patients had been seen in the same ED within the last 72 hours; and 2.1% (95% CI 1.7–2.5) had been discharged from a hospital within the past 7 days (Table 1). In addition, 79.7% (95% CI 78.2–81.3) had at least one abnormal triage vital sign recorded. Although the most common vital sign abnormalities were respiratory rate (61.8%, 95% CI 59.9–63.8) and blood pressure (34.2%, 95% CI 32.7–35.8), patients presented with abnormal heart rates at 21.8% (95% CI 20.8–22.8) of visits, were hypoxic at 6.6% (95% CI 5.3–7.9) of visits, and were either hypo- or hyperthermic at 6.1% (95% CI 5.5–6.7) of visits. The mode of arrival for 13.8% (95% CI 12.8–14.8) of “non-emergency” patients was ambulance, and 38% (95% CI 35.9–40.1) reported pain scales ≥ 6 (Table 2).

Table 1
Demographic characteristics of study population with original “primary care treatable” diagnosesa and “non-emergency complaint” ED visitsb, NHAMCS 2009
Table 2
Severity of Illness Characteristics Associated with original “primary care treatable” diagnosesa and “non-emergency complaint” ED visitsb, NHAMCS 2009

Regarding disposition of patients with “non-emergency complaint” ED visits, 12.5% (95% CI 11.8–14.3) were admitted to the hospital. Of admitted patients, 11.2% (95% CI 9.5–12.9) were admitted to a critical care unit, 22.9% (95% CI 18.4–27.4) required step-down or telemetry monitoring, 3.4% (95% CI 2.5–4.3) required the operating room, and 7.0% (95% CI 5.7–8.4) of the admissions were to an observation unit (Table 3).

Table 3
Disposition and location of admitted patients for “non-emergency complaint” ED visitsa, NHAMCS 2009

There were 192 different “primary care treatable diagnoses” (eTable 3) and 304 “non-emergency complaints” (eTable 4) represented. “Unspecified disorder of the teeth and gums” was the most common “primary care treatable diagnosis” and accounted for 11.6% of ED visits with “primary care treatable diagnoses.” The 3 most common “non-emergency complaints” (Table 4) were toothache (10.05%), skin rash (5.99%), and abdominal pain, cramps, spasms (5.03%); other “non-emergency chief complaints” were as variable as skin itching, insect bite, ingrown nail, foreign body to eye, migraine headache, blood in stool, and symptoms of labor. For patients with “non-emergency complaint” ED visits, the 3 most common diagnoses identified were abdominal pain/unspecified site, acute respiratory infection, and chest pain/unspecified (Table 5).

Table 4
10 most common reasons for visit (“non-emergency complaints”) associated with “primary care treatable diagnoses”a, NHAMCS 2009
Table 5
27 most common discharge diagnoses associated with “non-emergency complaint” ED visitsa, NHAMCS 2009

Because our analysis was conducted in aggregate, it was possible that a subset of “primary care treatable” diagnoses might be concordant with chief complaints and therefore appropriate targets for discouraging ED use. Therefore, we used the same techniques described above to analyze some of the most common primary care treatable diagnoses individually (unspecified disorder of the teeth and supporting structures, diarrhea, and esophageal reflux). Each of these common “primary care treatable” diagnoses was associated with 20 or more “non-emergency complaints.” In turn, these “non-emergency complaint” visits were associated with 29 to over 300 distinct discharge diagnoses with a wide range of clinical severity, consistent with our overall study findings.


Discussion

Patients present to the ED with chief complaints, symptoms, and signs, but usually not with diagnoses. For a list of ED discharge diagnoses to be considered “non-emergency,” the ED discharge diagnoses must be predictable based on chief complaint information available at triage. Our study illustrates the challenges of mapping from discharge diagnosis to chief complaint. Although only 6.3% of ED visits had “primary care treatable” discharge diagnoses, the chief complaints reported for these visits encompassed 88.7% of all ED visits. If a triage nurse were to redirect patients away from the ED based on “non-emergency complaints,” 93% of the redirected ED visits would not have “primary care treatable diagnoses.” Adding vital signs to the decision rule would add little discriminatory power, because 79.7% of ED visits with “non-emergency complaints” had abnormal vital signs, including 76.8% of ED visits with “primary care treatable diagnoses” and 80.0% with other diagnoses.

These results highlight the flaws of a conceptual framework that fails to distinguish between information available at arrival in the ED and information available at discharge from the ED. The results call into question reimbursement policies that deny or limit payment based on discharge diagnosis. Attempting to discourage patients from using the ED based on the likelihood that they would have “non-emergency diagnoses” risks sending away patients who require emergency care.31–36 The majority of Medicaid patients, who stand to be disproportionately affected by such policies, visit the ED for “urgent or more serious” problems.37

Our results are in keeping with the original intention of the EDA. The EDA does not classify specific diagnoses as “non-emergency” or “primary care treatable,” as policy-makers have attempted to do.6 Instead, as the developers of the algorithm acknowledge, “few diagnostic categories are clear cut in all cases;”15 a discharge diagnosis related to a given ED visit can be in each of multiple categories (based on the initial complaint, vital signs, resources used in the ED, etc. that have been mapped to the discharge diagnosis),19 highlighting the complexity of the issue.

A limitation of this study is the choice of the EDA as the basis for classifying ED visits as “non-emergency.” In theory, a different list of “non-emergency diagnoses” might correspond better with chief complaints. We chose to use the EDA because – despite the intent of the EDA’s developers – the EDA has been modified for this purpose, and has been developed more rigorously than other proposed “non-emergency” diagnosis lists. We used two methodological strategies to try to optimize the classification system. First, in selecting ED visits with “non-emergency diagnoses” based on ED discharge diagnosis, we also limited the visits to those that did not result in hospital admission. Second, in using the EDA, we selected only diagnoses that, in the EDA classification system, had 100% probability of not requiring care in the ED. Had we chosen to classify visits less conservatively as other authors have – for example by including visits with low probability (but not zero probability) of needing ED care in our sample – it is likely that our results would reflect final diagnoses and ED visit characteristics of even greater severity. A list of “non-emergency diagnoses” such as the one recently proposed in Washington State,10,12 which includes some diagnoses that the EDA classifies as having substantial risk of requiring ED care, is likely to have worse performance than the approach we tested.

A second potential limitation of our study is that the only triage information we used was the patient’s chief complaint. It is possible that a combination of chief complaint and vital signs could map to diagnoses in a more helpful manner. However, previous attempts at developing triage decision rules based on chief complaint and vital signs have not succeeded.31,32 In addition, the majority of current state proposals to deny or limit Medicaid payments for ED visits use discharge diagnosis alone and do not incorporate other patient characteristics such as vital signs. A payment system that used vital signs as well as diagnosis would require more complex billing datasets and information technology than currently exist. Given the lack of alternative approaches, we anticipate that there will be further attempts to discourage ED use through retroactive denial for “non-emergency” diagnoses.

A complex interplay of community, patient, and health system factors influences ED use.38–40 Strategies aimed narrowly at reducing such use are unlikely to improve population health or to reduce health system costs.41,42 Instead, a more innovative and sustainable path forward is through policies that allow for the creation of integrated systems of health and community care where risk is shared and resources allocated rationally. It is possible that other diagnosis lists may correspond better with chief complaints that do not occur in true emergencies. Policy-makers who are considering such approaches that involve lists of diagnoses may wish to use our rather simple methodology to evaluate the proposed lists prior to implementation.

Supplementary Material

eTables 1–4


Acknowledgments

We especially thank Professor John C. Billings, JD, for his thoughtful input, Amy J. Markowitz, JD, for editorial assistance, and Nicole Gordon, BA, for her technical support; none of them received any financial compensation for their contributions. Dr. Hsia and Ms. Maselli had full access to all of the data in the study and take responsibility for the integrity of the data and the accuracy of the data analysis. Dr. Lowe has served as a consultant to the American College of Emergency Physicians regarding the appropriate use of emergency departments. Dr. Raven has served as a consultant for the United Hospital Fund related to patterns of ED utilization in New York City. This publication was supported by the Emergency Medicine Foundation, as well as the NIH/NCRR/OD UCSF-CTSI grant number KL2 RR024130 (RYH), and the Robert Wood Johnson Foundation Physician Faculty Scholars (RYH). Its contents, including design and conduct of the study; collection, management, analysis, and interpretation of the data; and preparation, review, and approval of the manuscript are solely the responsibility of the authors and do not necessarily represent the official views of any of the funding agencies.

Contributor Information

Maria Raven, Department of Emergency Medicine, University of California, San Francisco, 505 Parnassus Ave, San Francisco, CA 94707; Maria.raven@emergency.ucsf.edu; 917-499-5608 (mobile).

Robert A. Lowe, Department of Medical Informatics and Clinical Epidemiology, Department of Emergency Medicine, Department of Public Health and Preventive Medicine, Senior Scholar, Center for Policy and Research in Emergency Medicine (CPR-EM), Oregon Health and Science University, 3181 SW Sam Jackson Park Road, Mail Code BICC-504, Portland, Oregon 97239-3098; lowero@ohsu.edu; 503-494-7134.

Judith Maselli, Department of Medicine, University of California, San Francisco, 3333 California St, Box 1211, San Francisco, CA 94143-1211; jmaselli@medicine.ucsf.edu; 415-502-4068.

Renee Y. Hsia, University of California San Francisco, San Francisco General Hospital, Department of Emergency Medicine, 1001 Potrero Ave, 1E21, San Francisco, CA 94110; renee.hsia@ucsf.edu; 415-206-4612.


References

1. Lowe RA, Schull M. On easy solutions. Ann Emerg Med. 2011;58(3):235–238. [PubMed]
2. Matthews AW. Medicaid Cuts Rile Doctors: Hospitals Also Fight Washington State's Drive to Trim Emergency Room Visits. [Accessed December 21, 2012];The Wall Street Journal. 2012 Feb 25; Health Industry. Available online at
3. Smith VK, Gifford K, Ellis E, Rudowitz R, Snyder L. Moving Ahead Amid Fiscal Challenges: A Look at Medicaid Spending, Coverage and Policy Trends. Results from a 50-State Medicaid Budget Survey for State Fiscal Years 2011 and 2012. 2011 Oct.
4. Mortensen K. Copayments Did Not Reduce Medicaid Enrollees' Nonemergency Use of Emergency Departments. Health Affairs. 2010;29(9):1632–1650. [PubMed]
5. Dunn K. [Accessed January 15, 2013];Notice Regarding Medicaid Service Limits. Department of Health and Human Services Office of Medicaid Business and Policy; 2012. Available online at
6. Washington State's Medicaid Program Will No Longer Pay For Unnecessary ER Visits. [Accessed January 15, 2013];Kaiser Health News. 2012
7. Galewitz P. [Accessed Feb 11, 2013];Kaiser Health News: Hospitals Demand Payment Upfront from ER Patients with Routine Problems. 2012
8. Pear R. Many Medicaid Patients Could Face Higher Fees Under a Proposed Federal Policy. The New York Times. 2013 Jan 22; Health.
9. Kellermann AL, Weinick RM. Emergency Departments, Medicaid Costs, and Access to Primary Care--Understanding the Link. New Engl J Med. 2012;366(23):2141–2143. [PubMed]
10. Blachly L. ACEP Initiative Supporting 'Prudent Layperson' Standard Becomes Law in Health Care Reform Act. [Accessed December 25, 2012];ACEP News: Clinical Practice & Management. 2010
11. McCaig LF, Burt CW. Understanding and Interpreting the National Hospital Ambulatory Medical Care Survey: Key Questions and Answers. Ann Emerg Med. 2012;60(6):716–721. [PubMed]
12. Lowe RA. Evaluation of the Washington State HCA Proposed List of "Non-Emergency" Diagnoses. Oregon Health and Science University; 2012. [Accessed January 16, 2013]. Available online at
13. Texas Medicaid. Bimonthly update to the Texas Medicaid Provider Procedures Manual. National Heritage Insurance Company; 2003. available at
14. Billings JC. NYU ED Algorithm Background. [Accessed July 31, 2012]; NYU ED Algorithm.
15. Billings JC, Parikh N, Mijanovich T. Emergency Room Use: The New York Story. New York, NY: The Commonwealth Fund; 2000.
16. Greci LK. Issue Brief: Profile of emergency department visits not requiring inpatient admission to a Connecticut acute care hospital, Fiscal Year 2006–2009. 2010
17. OMPRO. Comparative Assessment Report: Emergency Department Utilization, Oregon Health Plan Managed Care Plans, 2002–2003. Portland, OR; Mar 18, 2005.
18. Massachusetts Division of Health Care Finance and Policy. Analysis in Brief. Vol 2004. Boston, MA: 2004. Non-emergency and preventable ED visits.
19. Billings JC, Parikh N, Mijanovich T. Issue Brief: Emergency Department Use in New York City: A Substitute for Primary Care? New York University and The Commonwealth Fund: 2000. [PubMed]
20. Lowe RA, Fu R. Can the emergency department algorithm detect changes in access to care? Acad Emerg Med. 2008 Jun;15(6):506–516. [PubMed]
21. Ballard DW, Price M, Fung V, et al. Validation of an algorithm for categorizing the severity of hospital emergency department visits. Med Care. 2010 Jan;48(1):58–63. [PMC free article] [PubMed]
22. Wharam JFLB, Galbraith AA, Kleinman KP, Soumerai SB, Ross-Degnan D. Emergency department use and subsequent hospitalizations among members of a high-deductible health plan. JAMA. 2007;297(10):1093–1102. [PubMed]
23. Centers for Disease Control and Prevention. [Accessed July 25, 2012];Ambulatory Health Care Data-2009 Survey Methodology. 2012
24. Schneider D, Appleton L, McLemore R. A reason for visit classification for ambulatory care. Vital Health Stat. 1979 Feb;2(78):i–vi. 1–63. [PubMed]
25. Raman S, Levin J, Hall D, Frey K. Using Categorization of Reason-for-Visit Strings as the Basis for an Outbreak Detection System — Minnesota, 2002–2003. MMWR. 2005;54.
26. Pitts SR, Niska RW, Xu J, Burt CW. National Hospital Ambulatory Medical Care Survey: 2006 Emergency Department Summary. 7. Vol. 6. Natl Health Stat Report; Aug, 2008. pp. 1–38. [PubMed]
27. McCaig LF, Woodwell D. Analyzing data from the NAMCS and the NHAMCS. [Accessed May 11, 2012]; Data Users Conference Presentations Web site; 2007.
28. Dickinson ET, Lozada KN. Trend Alert: The trending and interpretation of vital signs. Journal of Emergency Medical Services. 2010 Mar 1. [PubMed]
29. Normal Vital Signs Guidelines for EMS, by Age Group. [Accessed March 1, 2012];
30. Centers for Disease Control and Prevention. NHAMCS Micro-Data File Documentation. Atlanta, GA: National Center for Health Statistics; 2009. pp. 23–24.
31. Abbuhl SB, Lowe RA. The inappropriateness of "appropriateness". Acad Emerg Med. 1996;3(3):189–191. [PubMed]
32. Lowe RA, Abbuhl SB. Appropriate standards for "appropriateness" research. Ann Emerg Med. 2001;37(6):629–632. [PubMed]
33. Lowe RA, Bindman AB, Ulrich SK, et al. Refusing care to emergency department patients: Evaluation of published triage guidelines. Ann of Emerg Med. 1994;23(2):286–293. [PubMed]
34. Young GP, Lowe RA. Adverse outcomes of managed care gatekeeping. Acad Emerg Med. 1997;4:1129–1136. [PubMed]
35. O’Brien GM, Shapiro MJ, Wollard RW, O'Sullivan PS, Stein MD. “Inappropriate” emergency department use: a comparison of three methodologies for identification. Acad Emerg Med. 1996;3(3):252–257. [PubMed]
36. Birnbaum A, Gallagher EJ, Utkewicz M, Gennis P, Carter W. Failure to validate a predictive model for refusal of care to emergency department patients. Acad Emerg Med. 1994;1(3):213–217. [PubMed]
37. Sommers A, Boukus ER, Carrier ER. Dispelling Myths About Emergency Department Use: Majority of Medicaid Visits Are for Urgent or More Serious Symptoms. Washington, DC: Center for Studying Health System Change; 2012. [PubMed]
38. Lowe RA, Rongwei F, Ong ET, et al. Community Characteristics Affecting Emergency Department Use by Medicaid Enrollees. Med Care. 2009;47:15–22. [PubMed]
39. Cunningham PJ. What Accounts for Differences In The Use Of Hospital Emergency Departments Across U.S. Communities? Health Affairs. 2006;25(5):w324–w336. [PubMed]
40. Cheung PT, Wiler JL, Lowe RA, Ginde AA. National Study of Barriers to Timely Primary Care and Emergency Department Utilization Among Medicaid Beneficiaries. Ann Emerg Med. 2012 [PubMed]
41. Tyrance PH, Himmelstein DU, Woolhandler SW. US Emergency Department Costs: No Emergency. Am J Public Health. 1996;86(11):1527–1531. [PubMed]
42. Young GP, Wagner MB, Kellermann AL, Ellis J, Bouley D, for the 24 Hours in the ED Study Group. Ambulatory Visits to Hospital Emergency Departments. JAMA. 1996 Aug 14;276(6):460–465. [PubMed]