OBJECTIVES: To determine the utility of a fall evaluation service to improve the ascertainment of falls in acute care.
DESIGN: Six-month observational study.
SETTING: Sixteen adult nursing units (349 beds) in an urban, academically affiliated, community hospital.
PARTICIPANTS: Patients admitted to the study units during the study period.
MEASUREMENTS: Nursing staff identifying falls were instructed to notify, using a pager, a trained nurse "fall evaluator." Fall evaluators provided 24-hour-per-day, 7-day-per-week coverage throughout the study. Data on patient falls gathered by fall evaluators were compared with falls data obtained through the hospital's incident reporting system.
RESULTS: During 51,180 patient-days of observation, 191 falls were identified according to incident reports (3.73 falls/1,000 patient-days), whereas the evaluation service identified 228 falls (4.45 falls/1,000 patient-days). Combining falls reported from both data sources yielded 266 falls (5.20 falls/1,000 patient-days), a 39% relative rate increase compared with incident reports alone (P<.001). For falls with injury, combining data from both sources yielded 79 falls (1.54 injurious falls/1,000 patient-days), compared with 57 falls (1.11 injurious falls/1,000 patient-days) filed in incident reports—a 28% increase (P = .06). In the 16 nursing units, the relative percentage increase of captured fall events using the combined data sources versus the incident reporting system alone ranged from 13% to 125%.
CONCLUSION: Incident reports significantly underestimate both injurious and noninjurious falls in acute care settings and should not be used as the sole source of data for research or quality improvement initiatives.
Falls in hospitalized patients are of significant public health interest for several reasons. First, they are one of the most common adverse events that occur in hospitalized patients, with reported rates of between two and five events per 1,000 patient-days.1–6 Also, falls are an important cause of disability and are a leading cause of injurious death in older adults.7,8 In addition, persons who sustain falls while hospitalized use more resources than persons who do not fall,9 and recently enacted Centers for Medicare and Medicaid Services regulations will limit hospital reimbursement for care related to a fall-related injury.10 Finally, fall-related injury is an important source of liability for hospitals.11,12
The Joint Commission on Accreditation of Healthcare Organizations has designated patient falls as one of its National Patient Safety Goals.13 Healthcare organizations, including acute care settings, must reduce the risk of patient harm resulting from falls. They must implement a fall reduction program and regularly evaluate the effectiveness of the program. Inherent within the evaluation component of the safety goal is the ability of organizations to track and monitor the rates of falls.
Nearly all studies on falls in institutional settings rely on incident reports to describe events, but incident reports are largely designed to minimize litigation,14 and several reports have suggested that there is poor capture of events.15,16 This article reports on the comparison of rates of fall events in an acute care setting using two sources of data: an incident reporting system and a formal fall evaluation service.
This 6-month observational study was conducted from September 9, 2005, to March 8, 2006, at Methodist Health Care of Memphis-University Hospital—a 693-bed urban, academically affiliated, community hospital in Memphis, Tennessee. The Methodist Hospital institutional review board reviewed and approved the research protocol. The study was conducted on 16 adult medical and surgical units (349 beds) as part of a larger study testing the efficacy of proximity alarm monitoring to reduce patient falls.
A fall was defined as a sudden, unintentional change in position resulting in the subject coming to rest on the ground or other lower level.2 Persons found on the floor or requiring assistance to the floor or lower level by nursing staff were also included. The definition of an injurious fall included major (e.g., fracture) and minor (e.g., bruises, lacerations) injuries.
The Fall Evaluation Service consisted of trained healthcare professionals (fall evaluators), who assessed each patient that sustained a fall using a standardized data collection tool. The fall evaluators in this study consisted of nurse managers, nurse supervisors, and study personnel. Thus, 24-hour-per-day, 7-day-per-week coverage was available. All fall evaluators attended a formal training session or received individualized instruction from members of the study team (RIS, LCR, DL).
To publicize the availability of the Fall Evaluation Service, a single pager (418-FALL) was used. The Fall Evaluation Service was announced at meetings of nurse managers. In addition, posters, magnets, and buttons were distributed to the 16 study units, and study personnel visited each unit to explain the service. To maximize exposure, visits to nursing units were conducted each weekday shift and over weekends. Each nursing shift was visited at least twice during the introduction of the program. Nursing staff were reminded that the service did not replace incident reporting.
After examination for injury, the fall evaluator completed a standardized assessment. This assessment consisted of an evaluation for injury; a brief physical examination including assessment of orientation and postural blood pressure change; and descriptions of the fall from the patient, witnesses, and nursing staff, including location, use of restraining and medical devices, and events preceding the fall. Copies of the instrument are available by request from the lead author (RIS). After completion of the instrument, fall evaluators reminded the nurses to complete an incident report. The fall evaluation took approximately 10 minutes to complete.
Incident reports were completed using a "pen and paper" method and filed with the Office of Clinical Risk Management. Data on the incident reports were limited and focused primarily on the location of the fall and any subsequent injury. The Office of Clinical Risk Management created a list of all fall incident reports using the traditional reporting system for this 6-month observational study period. The incident report listing was compared with the fall evaluation log for patient-specific identifiers (name, date of birth, and medical record number), nursing unit, and date of fall for matching. No incident reports were excluded because of ambiguous information.
Fall rates were determined using data from incident reports alone, fall evaluators alone, and combined data from incident reports and fall evaluators. Patient-days were determined for each nursing unit using hospital billing data. All fall rates were calculated as (number of fall events/patient-days) × 1,000. Relative rate increases in fall rates using the combined data sources versus either source alone were calculated as ((combined data source rate – single source rate)/single data source rate) × 100. Injurious fall rates were determined in a similar manner. All P-values were estimated using Poisson regression.
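The rate and relative-increase calculations described above are straightforward; as a minimal sketch (not study code), the arithmetic can be reproduced from the published counts — 51,180 patient-days and 191, 228, and 266 falls for the incident-report, fall-evaluator, and combined sources:

```python
# Sketch of the rate calculations described in the text, using the
# study's published counts. Function names are illustrative only.

PATIENT_DAYS = 51_180  # total patient-days of observation

def fall_rate(events: int, patient_days: int) -> float:
    """Falls per 1,000 patient-days: (events / patient-days) x 1,000."""
    return events / patient_days * 1_000

def relative_increase(combined_rate: float, single_rate: float) -> float:
    """Relative rate increase (%) of the combined sources over one source."""
    return (combined_rate - single_rate) / single_rate * 100

incident = fall_rate(191, PATIENT_DAYS)   # ~3.73 falls/1,000 patient-days
evaluator = fall_rate(228, PATIENT_DAYS)  # ~4.45 falls/1,000 patient-days
combined = fall_rate(266, PATIENT_DAYS)   # ~5.20 falls/1,000 patient-days

print(round(relative_increase(combined, incident)))   # ~39% vs incident reports
print(round(relative_increase(combined, evaluator)))  # ~17% vs fall evaluators
```

These reproduce the 3.73, 4.45, and 5.20 rates and the 39% and 17% relative increases reported in the Results; the P-values themselves come from Poisson regression, which is not sketched here.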
During the 51,180 patient-days of observation, 229 patients experienced 266 falls on the study units. These 229 patients had a mean age of 64 (range 24–101), 58% were female, 68% were African American and 30% were Caucasian. Most falls (n = 184, 69%) occurred on a general medical, cardiology, or surgical unit. Falls also occurred on neurology/neurosurgery (n = 59, 22%) and cancer care units (n = 23, 9%).
The incident reporting system identified 191 fall events (3.73 falls/1,000 patient-days), whereas the Fall Evaluation Service identified 228 fall events (4.45 falls/1,000 patient-days) (Table 1). Combining the fall events reported from both approaches yielded 266 falls (5.20 falls/1,000 patient-days), a 39% relative rate increase compared with incident reports alone and a 17% relative rate increase compared with fall evaluators alone. On the 16 nursing units, the relative increase of captured fall events using the combined data sources versus incident reports alone ranged from 13% to 125% (Table 2).
The fall rate recorded by incident reports in the 6 months before the study was 3.08/1,000 patient-days, which was slightly lower (P = .07) than the rate recorded using incident reports during the study. For falls with injury, combining data from both sources yielded 79 falls (1.54 injurious falls/1,000 patient-days), compared with 57 falls (1.11 injurious falls/1,000 patient-days) filed in incident reports—a 28% increase (P = .06).
Studies of the utility of incident reports for falls have been conducted in long-term care settings,17,18 and there is an emerging literature questioning the reliability of incident reports for hospital-related adverse events.16,19 For example, in one study of 574 hospitalized patients, 75 falls occurred, of which 26 (35%) were unreported.20 Fourteen of these unreported falls occurred in three patients. Patient interviews suggested that the staff knew about the unreported falls in the majority of cases. Furthermore, because the purpose of incident reports is to document extent of injury for risk management concerns, there is often no assessment of the physical conditions that predisposed the patient to falling. Determining risk factors related to fall events is necessary in planning and implementing targeted interventions to reduce falls in hospital settings. The Fall Evaluation Service confirms that incident reports miss many falls, including falls with injuries, that occur in hospitals. In addition, the information captured by the service can assist in determining underlying conditions and preventability of falls in acute care.
Several limitations of the study deserve comment. First, because the study was conducted in a single hospital, it is not clear whether the findings would be generalizable to other acute care facilities. Second, although nurses were reminded to complete an incident report, they may have thought the fall evaluation replaced the incident report. Thus, the underuse of incident reports may have been due to the study design. However, there was no decrease in incident reports during the period of study from the 6 months before the study, suggesting that this was not a large source of bias. Finally, it is possible that some falls were missed by both reporting approaches. Nevertheless, the results lend support to other findings that incident reporting systems underestimate fall events.
Although incident reports and fall evaluators missed falls, the addition of the Fall Evaluation Service increased the apparent fall rate nearly 40% and the injurious fall rate 28%. In no nursing unit did incident reports capture all falls, and in some units, the number of reported falls more than doubled when the two sources of data were combined. In conclusion, incident reports significantly underestimate falls in acute care settings and should not be used as the sole source of data for research or quality improvement initiatives.
The authors would like to thank Dr. Stephen T. Miller and the Methodist Healthcare administration for their support of the project. We also thank Daniel Clark and Danielle Squires, who determined patient-day information from the hospital billing data, and the Methodist Healthcare Nurse Managers who performed the fall evaluations.
The contents are solely the responsibility of the authors and do not necessarily represent the official views of the National Institute on Aging, National Institutes of Health.
Conflict of Interest: The project was supported by Grant R01-AG025285 from the National Institute on Aging. The editor in chief has reviewed the conflict of interest checklist provided by the authors and has determined that none of the authors have any financial or any other kind of personal conflicts with this paper.
Author Contributions: Concept and design: Shorr, Mion, Rosenblatt, Kessler; Acquisition of subjects and data: Rosenblatt, Lynch; Analysis: Shorr, Mion; Manuscript preparation: Shorr, Mion, Kessler.
Sponsor’s Role: None.