Data on EMR implementation came from the 1998–2007 HIMSS Analytics Databases. HIMSS Analytics annually surveys a sample of U.S. nonfederal hospitals affiliated with integrated health care delivery systems. The 2007 database included information on 5,066 hospital facilities (381 in California) and contained details on each hospital's adoption of EMR applications. Data on nurse staffing came from the 1998–2007 Annual Financial Disclosure Reports of the California Office of Statewide Health Planning and Development (OSHPD). OSHPD required acute care hospitals to annually submit a Financial Report, which contains information on costs, nurse staffing, discharges, and patient days by hospital unit. Data on nurse-sensitive patient outcomes came from the public version of the 1998–2007 OSHPD Patient Discharge Databases, which included information on patient risk factors, diagnosis codes, and in-hospital mortality.
The sample included medical–surgical acute units within short-term, general acute care hospitals in California. We excluded federal government, specialty, children's, and long-term acute hospitals. We excluded Financial Reports that were not based on 365 days of reported data. The analytical dataset was an unbalanced panel of 326 hospitals and comprised 2,828 hospital-year observations.
We relied on expert opinion to create measures of EMR sophistication. Based on the HIMSS EMR Adoption Model (Garets and Davis 2006), we grouped EMR applications into three categories representing "stage of EMR implementation" (Table SA1). Hospitals at "EMR stage 1" (EMR-S1) have started implementation of the three core ancillary department information systems—pharmacy, laboratory, and radiology—and a clinical data repository. EMR-S1 functionality is characterized by automation of the patient record, facilitating communication within and between departments, and improving access to clinical information. Hospitals at "EMR stage 2" (EMR-S2) have implemented all EMR-S1 applications and have started implementation of Nursing Documentation (DOC) and Electronic Medication Administration Records. EMR-S2 functionality is characterized by automation of nursing workflow processes, including clinical documentation and electronic recording of medication administration. Hospitals at "EMR stage 3" (EMR-S3) have implemented all EMR-S1 and EMR-S2 applications and have started implementation of Clinical Decision Support (CDS) and Computerized Physician Order Entry (CPOE). EMR-S3 functionality is characterized by automation of clinical decision processes, including order entry management and support of clinical decision making. Our classification of EMR stages is similar to the taxonomy developed by a consensus panel of experts (Jha et al. 2009), in which HIT is the general term and EMR (or EHR, Electronic Health Records) is one type of HIT, further classified into three categories based on the sophistication and completeness of relevant applications such as DOC, CDS, and CPOE.
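For illustration, the cumulative staging logic can be sketched as follows. The application flags and column names are hypothetical placeholders rather than actual HIMSS Analytics fields, and "started implementation" is collapsed into a single 0/1 indicator per application.

```python
import pandas as pd

# Hypothetical 0/1 flags (1 = implementation started) for each hospital-year record.
EMR_S1_APPS = ["pharmacy_is", "laboratory_is", "radiology_is", "clinical_data_repository"]
EMR_S2_APPS = ["nursing_documentation", "emar"]          # Electronic Medication Administration Records
EMR_S3_APPS = ["clinical_decision_support", "cpoe"]      # Computerized Physician Order Entry

def emr_stage(row: pd.Series) -> int:
    """Assign the cumulative EMR stage for one hospital-year record."""
    s1 = all(row[a] == 1 for a in EMR_S1_APPS)
    s2 = s1 and all(row[a] == 1 for a in EMR_S2_APPS)
    s3 = s2 and all(row[a] == 1 for a in EMR_S3_APPS)
    return 3 if s3 else 2 if s2 else 1 if s1 else 0

# Usage (df is a hospital-year panel with the flag columns above):
# df["emr_stage"] = df.apply(emr_stage, axis=1)
```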
The HIMSS Analytics Database reported the contract year for each application and its automation status in the current year (i.e., Automated/Live and Operational). Because we could not observe the actual start date of EMR implementation, we assumed that implementation began, on average, 1 year after the contract date. For observations with a missing contract date, we used the earliest reported year in which the application's status was Automated/Live and Operational.
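A minimal sketch of this imputation rule, assuming hypothetical column names for the HIMSS fields:

```python
import pandas as pd

def implementation_start_years(app: pd.DataFrame) -> pd.DataFrame:
    """Impute the implementation start year for each hospital-application pair.

    `app` is assumed to have one row per hospital-application-year with
    hypothetical columns: hospital_id, application, year, contract_year, status.
    """
    keys = ["hospital_id", "application"]

    # Default rule: implementation assumed to begin 1 year after the contract date.
    start = app.groupby(keys)["contract_year"].min().add(1)

    # Fallback when the contract year is missing: earliest year the application
    # was reported as Automated/Live and Operational.
    live = (
        app[app["status"] == "Automated/Live and Operational"]
        .groupby(keys)["year"]
        .min()
    )
    return start.fillna(live).rename("impl_start_year").reset_index()
```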
We measured the effect of EMR based on the implementation start date rather than the Live and Operational date for two reasons. First, we believed that process changes were likely to occur during the early phases of implementation, and we wanted to capture these workflow-related changes to business processes during this initial period. Second, EMR may become Live and Operational on a pilot basis rather than hospital-wide, and we were unable to determine when EMR became Live and Operational in each hospital's medical–surgical acute units. Any measurement error in the contract date would bias our estimates toward finding no effect of EMR. Thus, our measure of EMR implementation is relatively conservative and would underestimate the effect of EMR on costs, staffing, and outcomes.
We estimated the effect of EMR implementation for each EMR stage. Because EMR implementation is an incremental process spanning several years, we allowed the effect of EMR to vary over the first 3 years after implementation began (i.e., year 1, year 2, and year 3).
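One way to construct these year-specific indicators, assuming an imputed start year per stage (column names hypothetical; treating year 3 as exactly the third year rather than "year 3 and later" is an assumption of this sketch):

```python
import pandas as pd

def post_implementation_indicators(df: pd.DataFrame, stage: str) -> pd.DataFrame:
    """Add year-1/2/3 indicators for one EMR stage to a hospital-year panel.

    Assumes `df` has columns 'year' and f'{stage}_start_year' (imputed start year).
    """
    # Equals 1 in the year implementation began, 2 the following year, and so on.
    years_since = df["year"] - df[f"{stage}_start_year"] + 1
    for k in (1, 2, 3):
        df[f"{stage}_yr{k}"] = (years_since == k).astype(int)
    return df

# Usage: df = post_implementation_indicators(df, "emr_s1")
```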
Medical–Surgical Acute Unit Costs and LOS
We used the total direct cost, total discharges, and total patient days for the medical–surgical acute unit and created measures of cost per discharge, cost per patient day, and LOS. For cost per discharge, we divided total direct costs by total discharges. For cost per day, we divided total direct costs by total patient days. For LOS, we divided total patient days by total discharges.
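These ratio measures are straightforward to compute; the sketch below assumes hypothetical column names for the unit-level Financial Report fields in a hospital-year DataFrame `df`.

```python
# Unit-level cost and LOS measures, per the definitions above (column names hypothetical).
df["cost_per_discharge"] = df["total_direct_cost"] / df["total_discharges"]
df["cost_per_day"] = df["total_direct_cost"] / df["total_patient_days"]
df["los"] = df["total_patient_days"] / df["total_discharges"]
```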
Nursing Hours per Patient Day (HPPD), Nursing Skill Mix, and Nurse Cost per Hour
We specified measures of nursing HPPD, nursing skill mix, and nurse cost per hour. For nursing HPPD, we divided total productive hours by total patient days. We created separate variables for total nursing and for registered nurses (RN), licensed vocational nurses (LVN), and aides/orderlies (AID).
For nursing skill mix, we divided productive hours for each nurse type by total productive hours. We assumed that all Registry productive hours were for RNs. We created separate variables for RN percent, LVN percent, AID percent, and Registry percent.
For nurse cost per hour, we divided total salaries cost by total productive hours. This measure of cost per hour included the average hourly wage plus overtime. We created separate variables for RN cost per hour, LVN cost per hour, and AID cost per hour. For Registry cost per hour, we divided total contracted costs by total productive hours.
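A sketch of the staffing measures under the definitions above, assuming hypothetical column names in the hospital-year DataFrame `df`; Registry hours are treated as RN hours when forming total nursing hours, as stated above.

```python
# Staffing measures by nurse type (column names hypothetical).
nurse_types = ["rn", "lvn", "aid"]

# Total nursing productive hours, including Registry hours (assumed to be RN hours).
df["total_prod_hours"] = (
    df[[f"{t}_prod_hours" for t in nurse_types]].sum(axis=1) + df["registry_prod_hours"]
)
df["nursing_hppd"] = df["total_prod_hours"] / df["total_patient_days"]

for t in nurse_types:
    df[f"{t}_hppd"] = df[f"{t}_prod_hours"] / df["total_patient_days"]      # hours per patient day
    df[f"{t}_pct"] = df[f"{t}_prod_hours"] / df["total_prod_hours"]         # skill mix
    df[f"{t}_cost_per_hour"] = df[f"{t}_salaries"] / df[f"{t}_prod_hours"]  # wage plus overtime

df["registry_pct"] = df["registry_prod_hours"] / df["total_prod_hours"]
df["registry_cost_per_hour"] = df["registry_contract_cost"] / df["registry_prod_hours"]
```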
Nurse-Sensitive Patient Outcomes
We created measures of patient outcomes using the patient safety indicators (PSI) and inpatient quality indicators (IQI) from the Agency for Healthcare Research and Quality. We applied the PSI and IQI software to the patient discharge data to calculate a hospital-level risk-adjusted rate per 1,000 hospitalizations for each PSI and IQI indicator. Using all of the PSI and IQI indicators with equal weights, we created composite scores for rates of patient safety complications, in-hospital mortality for conditions, and in-hospital mortality for procedures. These standardized composite scores were defined as the ratio of observed to expected outcomes.
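One plausible construction of an equal-weight observed-to-expected composite is sketched below; the column names are placeholders for hospital-level outputs of the AHRQ software, not the software's actual field names.

```python
import pandas as pd

def composite_score(df: pd.DataFrame, indicators: list[str]) -> pd.Series:
    """Equal-weight composite: the mean ratio of observed to expected outcomes.

    Assumes hypothetical columns f"{i}_observed" and f"{i}_expected" for each indicator.
    """
    ratios = pd.concat(
        [df[f"{i}_observed"] / df[f"{i}_expected"] for i in indicators], axis=1
    )
    # Equal weight on each indicator's observed/expected ratio.
    return ratios.mean(axis=1)
```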
We also created variables for specific patient outcomes known to be associated with nursing care (Needleman, Kurtzman, and Kizer 2007). These variables included hospital rates of decubitus ulcer, failure to rescue, infections due to medical care, acute myocardial infarction (AMI) mortality, congestive heart failure mortality, and pneumonia mortality.
Descriptive statistics for all dependent variables by year are reported in Table SA2.
We found a small number of implausible values for unit costs, productive hours, discharges, and days. To minimize potential bias from data error, we trimmed the top and bottom 1 percent from the distributions of variables created from these measures.
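A sketch of the trimming rule follows; applying it variable by variable (rather than jointly) is our assumption for illustration.

```python
import pandas as pd

def trim_extremes(df: pd.DataFrame, cols: list[str],
                  lower: float = 0.01, upper: float = 0.99) -> pd.DataFrame:
    """Drop observations outside the 1st-99th percentiles of each listed column."""
    keep = pd.Series(True, index=df.index)
    for c in cols:
        lo, hi = df[c].quantile([lower, upper])
        keep &= df[c].between(lo, hi) | df[c].isna()  # retain missing values
    return df[keep]

# Usage: df = trim_extremes(df, ["cost_per_discharge", "cost_per_day", "nursing_hppd"])
```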
The longitudinal analysis specified fixed-effects regressions estimated by ordinary least squares. These regressions estimated the within-hospital association between EMR implementation and changes in costs, staffing, and outcomes at the same facility. The strength of fixed effects is the ability to control for confounding factors that vary across hospitals but are constant over time.
All regressions included control variables for staffed beds, case mix index, and a quadratic time trend; we estimated robust standard errors. We took the natural logarithm of all dependent variables (except nursing skill mix) and reported marginal effects, which can be interpreted as the approximate percent change in the dependent variable associated with EMR implementation.
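A minimal sketch of one such regression, entering hospital fixed effects as indicator variables and using heteroskedasticity-robust standard errors; the variable names and the particular outcome shown are placeholders.

```python
import numpy as np
import statsmodels.formula.api as smf

# Log outcome and a simple time trend (variable names hypothetical).
df["ln_cost_per_day"] = np.log(df["cost_per_day"])
df["trend"] = df["year"] - df["year"].min()

formula = (
    "ln_cost_per_day ~ emr_s1_yr1 + emr_s1_yr2 + emr_s1_yr3"
    " + staffed_beds + case_mix_index + trend + I(trend ** 2)"
    " + C(hospital_id)"  # hospital fixed effects
)
fit = smf.ols(formula, data=df).fit(cov_type="HC1")  # robust standard errors
print(fit.params.filter(like="emr_s1"))  # for log outcomes, roughly percent changes
```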
Descriptive statistics for hospital characteristics by year are reported in Table SA3.