Changes in OSHA recordkeeping regulations appear to explain the reported decreasing rates of occupational injury and illness in the US
Counting how often and determining under what circumstances diseases occur are the basic principles upon which preventive health programmes are built. The lack of accurate surveillance information leads to the inability to allocate appropriate resources, the inability to initiate and prioritise targeted interventions, and the inability to evaluate the effectiveness of those interventions. The paper by Friedman and Forst1 in this issue (see page 454) highlights the complexity of answering what at first appear to be two relatively simple questions: (1) how many occupational injuries and illnesses occur each year? and (2) are the rates of occupational injuries and illnesses changing over time? The problem they address is how the rules and definitions for recording what counts as an occupational injury or illness may change the quantitative measures of those conditions. Although the data they examine are specific to the US, the issue they address is of concern for surveillance systems in all countries.
Theoretically, surveillance of work‐related injuries and illnesses has a potential advantage over reporting systems for non‐work‐related conditions, because the employer could be an additional reporting source that complements and expands healthcare system reporting sources such as clinicians or hospitals. Unfortunately, rather than using both healthcare and employer data, the US system for counting non‐fatal work‐related injuries and illnesses relies solely on employer reporting, despite the inherent limitations of an employer‐based system. These limitations include: (1) intentional underreporting, or discouragement of workers from notifying supervisors; (2) lack of understanding of reporting requirements by employers; (3) lack of priority/resources allocated by employers to maintaining records; and (4) lack of awareness of an occupational injury or illness by the employer because the worker received medical care from their personal health care provider. All of these limitations are in addition to the healthcare provider's lack of recognition and/or avoidance of diagnosing work‐related conditions. The incompleteness of the US employer‐based system has been documented.2,3 In 1992 the system for counting acute traumatic work‐related fatalities was changed to include multiple data sources such as death certificates, newspaper clippings, and police reports, as well as employer reports. Increasing the data sources provided a more accurate count, and the number of acute traumatic work‐related fatalities identified was twice as great in 1992 as in 1991, the last year in which the count depended solely on employer reporting. In addition, a single‐source employer‐based system is more open to political manipulation and self‐serving interests.
A surveillance system with an inherent undercount may still be viable as long as the limitations of the system are made very clear when the data are presented. (How many users of the US official statistics remember that these numbers exclude work‐related injuries and illnesses among all governmental workers, including high‐risk employees such as firefighters, as well as those among most agricultural workers and the self‐employed?) As long as the limitations remain constant over time, trends in the data may be evaluated. This evaluation of trends is an important use of a surveillance programme. However, changes in the surveillance programme, either intentional or unintentional, may require adjustments in the data or even make analysis of trend data meaningless.
Administrative changes in the Occupational Safety and Health Administration (OSHA) recordkeeping regulations in 1995 and 2001 appear to be the major explanation for the reported decrease in the rate of work‐related injuries and illnesses in the US from 1992 to 2003. Friedman and Forst's paper1 does a thorough job of analysing the possible causes of the decrease in the official statistics of occupational injuries and illnesses in the US since 1992. The alternative explanations for this decrease that they examined include: increases in employment with new hires, who would be expected to be at increased risk of injuries and illnesses; a numerator‐denominator bias caused by routine undercounting of the numerator combined with an increasing denominator; a shift from more to less hazardous employment; an increase in regulatory enforcement activity or staffing; and a change in sampling methods. None of these alternatives provided a viable explanation for what was seen in the data. Instead, most of the decrease in reported occupational injuries and illnesses between 1992 and 2003 is associated with two points in time. In 1995 companies began to submit data electronically or by mail, and in 2001 the definitions of what constituted a recordable occupational injury or illness were changed. The 1995 change meant that companies would no longer receive site visits at which background documentation could be reviewed, and could therefore correctly assume that there would be no review of their reporting accuracy. Examples of the 2001 definitional changes include: revising the distinction between a new injury and an aggravation of a previous injury; adding the requirement that an injury be “significant” before it is recordable; and excluding restricted work activity that occurred only on the day of injury. Collectively, these changes appear to have had a major effect on what was recorded.
The findings of the Friedman and Forst study1 are very significant: 83% of the reported decrease in occupational injuries and illnesses in the US from 1992 to 2003 was secondary to changes in recordkeeping rules and only 17% secondary to a true decrease in morbidity.
The results of this study should be a reminder to all personnel who manage surveillance systems and all those who use data from these systems—particularly those who use them to set policies and evaluate interventions—that relatively minor changes in data collection and definitions may have significant effects on the data outcomes.
The implications of this study's results are even more significant for US policy makers. They highlight the previously known weaknesses of the employer‐based US system to collect statistics on occupational injuries and illnesses, and call into question previous press releases that announced significant strides in reducing the public health burden of occupational injuries and illnesses.