Qual Saf Health Care. 2007 October; 16(5): 342–348.
PMCID: PMC2464979

Using a survey of incident reporting and learning practices to improve organisational learning at a cancer care centre

Abstract

Objectives

To motivate improvements in an organisational system by measuring staff perceptions of the organisation's ability to learn from incidents and by analysing their personal experience of incidents.

Methods

Respondents were questioned on the components of the incident learning system from both a personal and an organisational perspective. The respondents (n = 125) were radiotherapists, nurses, dosimetrists, doctors, and other staff at a major academic cancer centre. Responses were analysed in terms of per cent positive responses and response rate, differences between “frontline” and “support” staff, and the respondent's experience with incidents.

Results

Respondents were more familiar with and more positive about incident identification and reporting—the first two stages of incident learning. Their overall perception of incident learning was most influenced by the investigation and learning components of the system. Respondents in frontline positions were more positive than those in support positions about responding to, identifying and reporting incidents. On average, respondents experienced three incidents per year, of which two were reported, and two out of three reported incidents were investigated; the median respondent experienced and reported two incidents, none of which were investigated. Most incidents experienced were not captured by the organisation's existing incident reporting system.

Conclusion

The survey tool was effective in measuring the ability of the organisation to learn from incidents. Implications of the survey results for improving organisational learning are discussed.

The Institute of Medicine has suggested that healthcare organisations should work towards enhancing their safety culture, moving from a culture in which errors are viewed as personal failures to one in which errors are viewed as opportunities for system improvement.1,2 There have been similar calls for system improvement in Canada.3 As a result of these and other calls for change, many organisations have sought to measure safety culture as a first step towards improving their safety performance.4 A recent review suggests that although many safety climate survey tools have been published, only one of these surveys has been used to explore the effect of safety climate on patient outcomes.5

From a safety perspective, one measure of patient outcomes is the number of adverse events affecting patients. Our hypothesis is that an organisation with a greater ability to learn from incidents will have fewer incidents having an adverse effect on patients than an organisation with lesser ability to learn. We define an incident as an unwanted or unexpected change from a normal system behaviour, which causes, or has a potential to cause, an adverse effect to people or equipment. For every adverse event that has an impact on the patient, there may be hundreds of potential incidents and lower‐severity incidents that have little or no effect on patients. The organisational learning literature suggests that more effective organisational learning from these potential and lower severity incidents could lead to system improvements that will reduce the risk of adverse events.6,7,8 This motivates our interest in measuring organisational ability to learn from incidents.

An ideal survey instrument will measure the relationship between safety performance and organisational culture, but there is no existing “one size fits all” instrument. Indeed, the context and purpose of a culture assessment should determine the choice of the survey instrument.9 The survey tool described in this paper was developed for the purpose of assessing the ability of an organisation to learn from incidents. Although developed for a specific healthcare organisation, it could easily be generalised for use at other types of organisation.

The organisation in which this survey was conducted was the Tom Baker Cancer Centre, Calgary, Alberta, Canada, a tertiary cancer care centre of the provincial Alberta Cancer Board. Both the Tom Baker Cancer Centre and the Alberta Cancer Board are committed to providing safe, high quality and comprehensive cancer programmes to the 3.3 million population of Alberta. For many years, the Cancer Centre and the Alberta Cancer Board have employed a mechanism for tracking medical incidents involving harm to patients. This mechanism required the reporting of such incidents to the highest administrative levels and would have been regarded as adequate in the patient safety environment that existed until a few years ago. However, as pointed out above, with the publication of several seminal papers in the USA, Canada and elsewhere there is now increased awareness of patient safety issues in healthcare facilities.

Safety culture assessment tools can be used for evaluating the effect of safety interventions over time, for conducting internal and external benchmarking, and for satisfying corporate or regulatory requirements.10 We have recently introduced an improved system for learning from incidents in the radiotherapy programme at the Tom Baker Cancer Centre.11 We are using the survey tool described here in a longitudinal study of the effect of this new system on the organisation's ability to learn from incidents. We now describe the development of the incident learning survey tool, the baseline results obtained prior to implementation of the new system, and how the survey has stimulated improvements in incident learning practices.

Methods

Survey tool

The survey tool has two parts. Our choice of questions used in both parts of the survey was influenced by the literature on incident learning in high‐hazard industries.12,13 Incident learning is a continuous improvement cycle in which all five components shown in fig 1 should be present and effective for system improvement to occur.14

Figure 1 The incident learning system.

Box 1: Definitions given to survey participants

  • Incidents: Unwanted and unexpected events or conditions that represent changes from what you would normally expect. These are changes that cause, or have the potential to cause, an adverse outcome that could affect a patient, a colleague or you; or could impair quality, efficiency or effectiveness of the patient care system.

Using this definition, an incident will fall on a severity scale that ranges from a “close call” (near miss, near hit, etc.) or an “unsafe condition” having the potential to cause a loss, all the way to an adverse event causing injury, illness or death.

  • Incident learning: The organisation's ability to identify, report and investigate incidents, and to take corrective actions that improve the patient care system and reduce the risk of recurrence.

The first part of the survey, dealing with organisational learning from incidents, is designed to measure the five components. There are five to seven questions associated with each component. Respondents are provided formal definitions of the terms “incidents” and “incident learning” to help them answer the survey questions (box 1).

The second part of the survey elicits the respondents' personal experience with incidents, including how many incidents a year they typically experience, how many of these are reported, how many of the reported incidents are close calls and how many of the reported incidents are investigated. Respondents are also asked to rate several factors in terms of their importance as reasons why they would not report an incident. Finally, respondents are invited to describe a memorable incident that they were involved in, to summarise what they learned from this incident, and what they thought the organisation learned.

Data analysis

We re‐coded the raw data for this study in terms of positive response to a given question. A positive response was defined as a response that was positive in terms of incident learning. Many questions were worded such that agreement to the statement would be positive in terms of incident learning—for example, “My organisation allocates sufficient priority to incident reporting”. For these “positive” statements, a response of “strongly agree” or “agree” was interpreted as a positive response. Some questions were worded such that disagreement with the statement would be positive in terms of incident learning—for example, “Secrecy between different departments, specialisations or functions makes it difficult to learn from incidents.” For these “negative” statements, a response of “strongly disagree” or “disagree” was interpreted as a positive response.
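The re‐coding rule described above can be sketched in a few lines. The 1–5 Likert coding and the `reverse_worded` flag are illustrative assumptions, not the study's published coding scheme.

```python
# Sketch of re-coding Likert responses as "positive in terms of incident
# learning". Assumes a hypothetical 1-5 coding:
# 1 = strongly disagree ... 5 = strongly agree.

def positive_response(answer: int, reverse_worded: bool) -> bool:
    """Return True if the answer counts as positive for incident learning.

    For positively worded items, "agree" (4) or "strongly agree" (5) count;
    for negatively worded items, "disagree" (2) or "strongly disagree" (1) count.
    """
    return answer <= 2 if reverse_worded else answer >= 4

# "My organisation allocates sufficient priority to incident reporting",
# answered "agree" on a positively worded item:
assert positive_response(4, reverse_worded=False)
# "Secrecy ... makes it difficult to learn", answered "disagree":
assert positive_response(2, reverse_worded=True)
```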

We used reliability analysis and factor analysis15 to assess the internal consistency of the overall incident learning scale and each component of the scale. The components of the scale include the process steps in the incident learning cycle plus the overall perception of incident learning. For components of the scale having Cronbach α <0.7, questions contributing to low reliability were identified as possible candidates for discard. The components of the final scale had two to five questions per component and Cronbach α in the range of 0.711–0.866; the overall 25‐item incident learning scale had a Cronbach α of 0.944. Factor analysis was used to confirm the unidimensionality of the components of the final scale (shown in table 1 below). We considered the possibility that the eight discarded questions might be measuring some element of the incident learning system that had not been identified, but factor analysis found no relationships between the discarded items.

Table 1 Responses by component of the incident learning scale

We analysed the data in terms of per cent positive responses and per cent response rate to identify strengths and opportunities for the studied organisation. Multivariate regression analysis was used to examine the relationship between different components of the incident learning scale. The incident experience data were analysed and compared with the number of incidents reported in the existing incident reporting system; however, a direct comparison was difficult to make because multiple respondents may have experienced the same incident. Finally, responses from different groups of respondents were analysed and compared using independent sample t tests. We were interested in whether respondents had a different perception of incident learning depending on whether they interacted with patients directly. We defined positions that primarily provided direct treatment to patients as "frontline", and those that did not as "support". While recognising that all positions have varying degrees of patient contact, we attempted to identify two groups that were as distinct as possible in terms of their direct interaction with patients. Thus, we defined "frontline" staff to include doctors, registered nurses and radiotherapists. All other positions were classified as "support" staff.
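The frontline/support split amounts to a simple set membership test. The position labels below are illustrative; the survey's exact response options are not reproduced in the text.

```python
# Sketch of the frontline/support classification used in the analysis.
# Position labels are illustrative assumptions.
FRONTLINE_POSITIONS = {"doctor", "registered nurse", "radiotherapist"}

def staff_group(position: str) -> str:
    """Classify a position as 'frontline' (direct patient treatment) or 'support'."""
    return "frontline" if position.lower() in FRONTLINE_POSITIONS else "support"

# Doctors and radiotherapists are frontline; dosimetrists are support.
assert staff_group("Doctor") == "frontline"
assert staff_group("dosimetrist") == "support"
```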

Results

Survey response demographics

A total of 125 valid surveys were returned, giving a 29.3% (125/426) response rate. Eighteen of the 426 surveys (4.2%) were returned uncompleted (in many cases, uncompleted surveys were returned by staff who noted that they were too remote from operations to have an opinion on them). Surveys were returned from staff in a broad cross‐section of occupations, with 57% (71/125) of them being from staff in frontline positions (fig 2); 65% (81/125) of respondents work in the radiotherapy programme.

Figure 2 Occupation of survey respondents.

Organisational ability to learn from incidents

We interpreted the questions yielding the most positive responses as indicating areas of organisational strength or activities which the organisation did well. We interpreted the questions with the fewest positive responses as pointing to areas of organisational weakness or activities in which the organisation had opportunities to improve. Table 1 shows the results, sorted by the percentage of responses that were positive.

The three questions having the highest positive response (greater than 80%) were associated with individual and organisational commitment to identify and report incidents. The three questions with the lowest positive response (less than 40%) were associated with allocation of resources to incident investigations and the sharing of learning between organisations or organisational units. Multivariate regression analysis showed that the variance in the “overall perception” component of the scale was best predicted by a model including “investigation” and “learning” (R2 = 0.691, p<0.05). Although the components of the incident learning scale were correlated, the collinearity statistic for the regression analysis (VIF = 1.86) was well within the commonly accepted bound (VIF<10), indicating that collinearity did not compromise the regression.
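The R² and VIF quantities reported above can be sketched with ordinary least squares. This is a generic illustration, not the study's analysis code, and the arrays in the example are synthetic.

```python
import numpy as np

def ols_r2(X, y):
    """R^2 of an ordinary least-squares fit of y on X (intercept included)."""
    Xa = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(Xa, y, rcond=None)
    resid = y - Xa @ beta
    return 1.0 - float(resid @ resid) / float(((y - y.mean()) ** 2).sum())

def vif(x1, x2):
    """Variance inflation factor: 1 / (1 - R^2) from regressing one
    predictor on the other. Values near 1 indicate little collinearity."""
    return 1.0 / (1.0 - ols_r2(x2.reshape(-1, 1), x1))

# Synthetic example: a perfect linear relationship gives R^2 = 1,
# and orthogonal predictors give VIF = 1.
x = np.array([1.0, 2.0, 3.0, 4.0])
print(ols_r2(x.reshape(-1, 1), 2 * x + 1))
print(vif(np.array([1.0, 2.0, 1.0, 2.0]), np.array([1.0, 1.0, 2.0, 2.0])))
```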

Figure 3 shows the mean results for each component of the incident learning scale. In general, respondents were more familiar with, and felt more positive about, the early stages of the incident learning system (“identification and response” and “reporting”; see fig 1). Below are some examples of qualitative comments from respondents that illustrate what the organisation did well in terms of incident identification and reporting.

Figure 3 Mean survey responses for each component of the incident learning scale.

I would not consider NOT reporting an incident.

If I tried to hide the mistake then I would be disciplined. If an error is caught that I was unaware of, we will discuss it but there is no discipline.

In fig 3, there is a downward trend in positive response as we move around the incident learning cycle (fig 1). Respondents were least positive about “learning” and least familiar with “investigation”. Below are some examples of comments made by respondents that show the need for organisational improvements in incident investigations and learning.

I have received training in incident reporting but not in incident investigations.

I learn from my own incidents, but zero information about others.

Differences between frontline and support staff

We expected different views of incident learning to be expressed by respondents in frontline roles (n = 71), compared with respondents in support roles (n = 54). There was a significant difference (p<0.05) between the perceptions of frontline staff and support staff on the “identification and response” component of the incident learning scale, with frontline staff having a more positive response. There was no significant difference between the perceptions of frontline staff and support staff on any other components of incident learning.

Out of the 25 questions in the incident learning scale, there were four questions for which positive response of the two groups differed significantly (p<0.05). Frontline staff (response rate 100%) gave more positive responses than support staff (response rate 96%) to questions 1, 4, and 7 in table 1. Support staff (response rate 39%) gave more positive responses than frontline staff (response rate 92%) to question 22.

The responses to questions 1, 4 and 7 suggest that frontline staff felt more positive than support staff about responding to, identifying, and reporting incidents. Both groups had high response rates to these questions, so both groups were probably confident in their understanding of them. The more positive response from frontline staff may be because the existing incident reporting system is geared more towards clinical incidents than operational incidents. The more positive response from support staff to question 22, despite their lower response rate, could imply that support staff view the organisation's learning ability more optimistically than frontline staff do. The much higher response rate among frontline staff suggests greater familiarity and comfort in answering the question, but less confidence in the organisation's ability to share learning.

Experience with incidents

A high number of respondents (118/125, 94.4%) reported having experienced incidents, either as a participant or an observer. However, 16.8% (21/125) of respondents had experienced incidents but did not feel comfortable evaluating their experience with incident learning in the organisation. This may suggest that some people are experiencing incidents but not reporting them. It is also possible that some people are reporting incidents but there is no feedback and therefore no learning experience. On further examination, the respondents who experienced incidents but did not answer the question about their experience with incident learning were mostly support staff—that is, 90% (64/71) of frontline staff answered the question compared with 61% (33/54) of support staff. This may suggest that the organisation's existing incident reporting system is not capturing the kinds of incidents that support staff experience. A slight majority (49/97, 50.5%) of respondents felt positive about their overall experience with learning from incidents in this organisation, with no statistically significant difference in terms of positive response between frontline and support staff.

The number of incidents experienced varied by position, as shown in a box and whisker plot (fig 4). One extreme value, from a technologist who experienced 70 incidents in a year, was excluded from the plot and from the analysis. The reasons for this person's high number of incident experiences are unknown; for example, the person could have a very low threshold for perceiving a near miss incident. Excluding this extreme value from the data, the remaining respondents experienced an average of 3.0 incidents in a year.

Figure 4 Number of incidents experienced by staff position. Each box shows the middle 50% of data values, with a heavy horizontal bar at the median value. The “whiskers” show the distance from the smallest to the largest ...

Table 2 summarises the statistics for the four questions concerned with incident experience, excluding the extreme value case of 70 incidents. Using the mean values, about three incidents a year were experienced by respondents, of which two were reported. Of these two reported incidents, one of them was a close call. About two out of three incidents were investigated. However, the mean values were strongly influenced by the outliers such as a person experiencing 25 incidents, all of which were reported and investigated, and most of which were close calls. A different picture emerges if we look at the median values. In this case, two incidents were experienced and reported, none of which were close calls, and none of which were investigated.

Table 2 Personal experience with incidents

If a more effective system for learning from incidents had been in place, we might expect staff members to have reported all three of the incidents that they experienced on average, or at least the median value of two incidents a year. Thus, in a 400–500 member organisation such as the one studied we might expect 800–1500 incidents a year to be reported. However, this estimate is probably on the high side as it is possible that more than one respondent experienced the same incident and reported their experience in the survey.
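The arithmetic behind the 800–1500 figure can be written out explicitly; the inputs come straight from the text (400–500 staff, a median of two and a mean of three reportable incidents per person per year).

```python
# Back-of-the-envelope estimate of reportable incidents per year,
# illustrating the arithmetic in the text (no new data).
staff_low, staff_high = 400, 500           # organisation size range
median_per_person, mean_per_person = 2, 3  # incidents per staff member per year

expected_low = staff_low * median_per_person   # 800
expected_high = staff_high * mean_per_person   # 1500
print(expected_low, expected_high)
```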

Barriers to reporting

The survey included a list of the commonly cited reasons for not reporting an incident and asked respondents to rate each of them on a scale from “extremely important” to “not at all important”. The response rate for these questions was high (94–96%), and the results are tabulated (table 3) in rank order of importance as a reason for not reporting. Three of the top four reasons for not reporting an incident are organisational factors rather than personal factors. This suggests that the success of an incident learning system will depend on organisational encouragement for reporting potential and minor incidents, emphasis on incident reporting as an important part of one's job, and commitment of time and resources to incident investigation and corrective actions.

Table 3 Reasons why a respondent would not report an incident

An open‐ended question gave respondents an opportunity to identify any other reasons for not reporting incidents. Other reasons cited included: nothing will be done about it; lose patient's trust; unwilling to fill out form; unaware of the process; maintain relations/avoid conflict with person who made error; indifference; outside my area. On average, more than 50% of respondents said that the reasons for not reporting an incident were “not at all important”, suggesting that the barriers to incident reporting are in the organisational systems and priorities rather than in the willingness of the staff. Thus, the overall result suggests that staff willingness to report incidents should put the Cancer Centre in a good position to benefit once an effective incident learning system is in place.

Three open‐ended questions in this section of the survey invited respondents to share their experience with an incident they were involved in as a participant or observer. The text of each question was as follows:

  • Could you please describe a memorable incident that you have been involved in?
  • Could you please summarise what you learned from this incident?
  • In your opinion, what do you think the organisation learned from this incident?

Just over 50% of respondents took the opportunity to describe a memorable incident. In total, 63 incidents were described. These ranged from near misses, some of them potential major incidents, to serious and major incidents in which patients were harmed. In some cases the respondent reported what the organisation learned from the incident in terms of improved procedures. However, in many cases a respondent either did not respond to the question about what the organisation learned from the incident or wrote that the organisation did not learn anything from the incident. One respondent found the situation created by the incident reporting process to be “very difficult and unpleasant”, illustrating how easy it is for organisational learning barriers to discourage incident reporting.

Several of the respondents emphasised the importance of making system improvements so that the memorable incident that they described would not happen again. However, in some cases, it was not clear that the organisation put effective processes in place to ensure that identified improvements were implemented. Another case showed that whereas learning from incidents is difficult enough within a single organisation, it is challenging indeed when system improvements have to be coordinated between separate institutions. Several respondents described incidents after which individuals or departments took steps to reduce the risk of recurrence. However, there was a common theme in the respondents' comments that systemic improvement and cross‐organisational learning will not occur unless managers at all levels take action and have processes in place to make it happen.

Discussion

Using the survey results to reduce barriers to incident reporting

Previous research shows that not all incidents are reported, even when actual harm occurs, but especially when no harm occurs and the incident is a close call or near miss.16,17,18 Healthcare workers are especially unwilling to report adverse events to superiors. To solve this problem, which is counterproductive to organisational learning, alternative approaches are needed to focus on system failures before an adverse event occurs.19 This means that organisations must do a better job of learning from potential and minor incidents.

We found that many of the incidents the respondents said they were reporting were not finding their way into the organisation's existing incident reporting system. The quality assurance specialist responsible for collating incident data at the studied organisation stated that about 85 incidents per year had been reported over the past 3 years, a number that we confirmed by examination of the physical records. This official statistic is an order of magnitude fewer than the possible 800–1500 reportable incidents a year estimated from the survey results. Even if the same incident is experienced by four respondents, the reportable incidents estimate would be in the range of 200–375 a year, which is still about two to four times the number of incidents actually reported.

There are several possible explanations for this discrepancy. The first is that the discrepancy could result from non‐response bias if only the staff experiencing incidents responded to the survey. A second explanation could be that respondents were given a broad definition of an incident (box 1), including close calls or near misses, whereas organisational reporting requirements at the corporate level focus more on incidents causing harm or damage. Third, a respondent might have thought they had reported an incident by verbally informing their supervisor, but a completed incident form needs to be submitted for it to be counted in the incident statistics. The first explanation is unlikely, because the 125 respondents alone would account for 375 incidents a year being experienced and 250 being reported, unless about four people were experiencing the same incidents and about three were reporting them. The second and, to a lesser extent, the third explanations are more likely. The responses to the questions about barriers to reporting support these explanations: “I didn't think the incident was important enough to report” and “Desire to avoid work interruption and just carry on with the job” were the two highest ranked barriers. This suggests that an incident needs to be defined in the broadest possible terms to encourage more near miss reporting, and that management needs to create an expectation that incident reporting and investigation are an important part of everyone's job.
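The sensitivity check described above (four respondents sharing each incident) reduces to simple arithmetic:

```python
# Adjusting the survey-based estimate for shared incident experiences,
# following the assumption in the text that four staff may experience
# (and three may report) the same incident.
official_reports_per_year = 85   # confirmed from the organisation's records
survey_estimate = (800, 1500)    # naive estimate from the survey responses
respondents_per_incident = 4     # assumed overlap between respondents

adjusted = tuple(n // respondents_per_incident for n in survey_estimate)
ratios = tuple(n / official_reports_per_year for n in adjusted)
print(adjusted)  # (200, 375)
print(ratios)    # roughly 2.4x to 4.4x the official count
```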

Our results show that nearly 90% of respondents believe they can identify an incident, and nearly 81% of respondents would feel comfortable reporting an incident in which they had made an error or omission. Consequently, the Tom Baker Cancer Centre has introduced a broad definition of an incident and provided training to staff to assist them in incident identification. By lowering the threshold for incident reporting, it is more likely that an unexpected change from normal system behaviour having a potential to cause harm will be recognised as an incident.

Using the survey results to improve organisational learning

The results of this study show the importance of putting organisational priority on all aspects of the incident learning system, not just on reporting. The Cancer Centre had a pre‐existing incident reporting system (rather than a learning system), which emphasised the front end of the process. If an investigation took place, incident participants were generally not involved. As expected, the results show a marked difference in response for the front end of the incident learning system (identification and response, and reporting) and the back end of the system (investigation, corrective actions and learning). The regression analysis showed that the overall perception of the incident learning system was most influenced by the investigation and learning components of the system. Only 26% of respondents felt positive about the organisation allocating sufficient resources to incident investigations and only 43% felt positive about the organisation's overall ability to learn from incidents. The median responses in table 2 suggest that most people reporting incidents believed that the incidents they reported were not investigated. This suggests that the greatest opportunity for improving organisational learning at the Tom Baker Cancer Centre may not lie in improving staff's willingness to report incidents but rather in the Cancer Centre's ability to respond to them. Some departmental supervisors have used these survey results to justify a reallocation of resources to incident investigation and communication of lessons learned.

Key findings

  • An organisation's incident learning system is the set of activities associated with identifying, reporting and investigating incidents, taking corrective actions and sharing lessons learned.
  • Survey respondents were more positive about the front end of the incident learning system (identification, response and reporting) than they were about later steps in the process (investigation, corrective actions and learning).
  • The overall perception of organisational ability to learn from incidents was most influenced by the investigation and learning components of the system.
  • Respondents in frontline positions felt more positive than those in support positions about their ability to identify and respond to incidents, but both groups had similar perceptions of other components of the incident learning system.
  • The number of incidents that respondents said they were experiencing and reporting was an order of magnitude greater than were captured in the organisation's formal incident reporting system.

One of the challenges facing the Cancer Centre is to close the incident learning feedback loop by communicating the lessons learned back to staff and to other functions or specialisations that may benefit from the knowledge gained. We recommend that organisations provide training to staff on how to communicate incidents in writing to supervisors, and to supervisors on how to report incident learning back to staff. The survey results suggest that lack of communication between departments and functions at the Cancer Centre may be a barrier to organisational learning. In the “describe a memorable incident” section of the survey several respondents stated that they were not sure what the organisation learned because, even if an incident investigation had taken place, the lessons learned were not communicated back to them. The survey results would appear to justify inclusion of the incident participants in the investigation team. A barrier to doing this at the Tom Baker Cancer Centre and other healthcare organisations may be a cultural bias towards independent investigations in which the persons involved in the incident provide input but do not participate in problem solving.

Several of the qualitative comments emphasised the importance of ensuring that the incident learning system is free from attribution of blame. Although the overall results in table 3 suggest that there is generally a low fear of discipline, some respondents were fearful of negative consequences from reporting an incident. At the Tom Baker Cancer Centre, management has taken steps to ensure that the incident reporting process confers immunity from disciplinary or legal action. However, only a slight majority of respondents felt positive about their experience with incident learning, suggesting that the organisation has ample opportunity to improve.

Further research

The radiotherapy programme at the Tom Baker Cancer Centre has incorporated the results of this survey in the design of an improved incident learning system.11 As our longitudinal study of this new incident learning system progresses, we expect to gather more data that can be used to further assess the validity and reliability of the survey tool. A limitation of the present study is the low overall response rate. In future surveys we will use additional strategies to increase the response rate. If the survey tool is as successful as we expect it to be in measuring the effect of the incident learning system implementation, we would expect that this will encourage further research in developing tools to measure the effect of safety interventions in healthcare. Unfortunately we did not have any data on differences in management style between different departments, which could have had an influence on survey responses and personal experiences. Further research on subculture influences on organisational learning would be beneficial.

A limitation of this study is that the results may not be generalisable to other organisations. To assess the generalisability of the results, it would be necessary to administer the survey at other cancer centres, other healthcare organisations and at non‐healthcare organisations. A comparison of the survey results with an assessment of the incident learning systems at different organisations would be a fruitful extension of this work.

Summary and conclusions

We have developed a survey tool for measuring organisational ability to learn from incidents and demonstrated its application at the Tom Baker Cancer Centre. In the present study, the overall climate for incident learning was viewed as a positive one, with solid organisational commitment to the prevention of adverse events and a willingness among staff to report incidents, but there are definite opportunities for improvement.

The results of this study suggest that staff perceptions of system performance are positive at the front end of the incident learning system (identification and response) and less positive for components further around the cycle (investigation, corrective actions and learning). We conclude from this that a willingness to report incidents will not translate into organisational learning, and hence system improvement, unless these later stage components of the cycle are managed effectively and lessons learned are communicated back to staff.

The survey provided considerable information about the respondents' experience with incidents and about incident learning at the Cancer Centre. There was a large difference between the number of incidents that respondents say they are experiencing and reporting and the much lower number of incidents formally registered in the organisation's incident reporting system. In this paper we have discussed some possible explanations for this difference, and suggested changes that the Tom Baker Cancer Centre and other healthcare organisations can make to improve incident reporting and organisational learning.

Acknowledgements

We thank the Health Quality Council of Alberta for funding Dr Cooke's fellowship in patient safety and the Alberta Cancer Board for their permission to conduct and publish this research. We also thank the staff and Quality Assurance Committee at the Tom Baker Cancer Centre, and in particular Jodi Powers and Rahim Heshmati, for their assistance in conducting the survey.

Footnotes

Competing interests: None declared.

References

1. Kohn LT, Corrigan JM, Donaldson MS. To err is human: building a safer health system. Washington, DC: National Academy Press, 1999.
2. Institute of Medicine. Crossing the quality chasm: a new health system for the 21st century. Washington, DC: National Academy Press, 2001.
3. Baker GR, Norton PG, Flintoff V, et al. The Canadian adverse events study: the incidence of adverse events among hospital patients in Canada. CMAJ 2004;170:1678–86.
4. Singer SJ, Gaba DM, Geppert JJ, et al. The culture of safety: results of an organisation‐wide survey in 15 California hospitals. Qual Saf Health Care 2003;12:112–18.
5. Colla JB, Bracken AC, Kinney LM, et al. Measuring patient safety climate: a review of surveys. Qual Saf Health Care 2005;14:364–66.
6. Carroll JS. Organizational learning activities in high hazard industries: the logics underlying self‐analysis. J Manage Stud 1998;35:699–717.
7. Sitkin SB. Learning through failure: the strategy of small losses. In: Staw BM, Cummings LL, eds. Research in organizational behavior. Greenwich, CT: JAI Press, 1992:231–66.
8. Marcus AA, Nichols ML. On the edge: heeding the warnings of unusual events. Organization Science 1999;10:482–99.
9. Scott T, Mannion R, Davies H, et al. The quantitative measurement of organizational culture in health care: a review of the available instruments. Health Serv Res 2003;38:923–45.
10. Nieva VF, Sorra J. Safety culture assessment: a tool for improving patient safety in healthcare organizations. Qual Saf Health Care 2003;12(Suppl 2):ii17–ii23.
11. Cooke DL, Dubetz M, Heshmati R, et al. A reference guide for learning from incidents in radiation treatment. HTA Initiative No. 22. Edmonton, Alberta: Alberta Heritage Foundation for Medical Research, 2006.
12. Bird FE, Germain GL. Practical loss control leadership. Loganville, GA: Institute Publishing, a Division of the International Loss Control Institute, 1986.
13. Morrison LM. Best practices in incident investigation in the chemical process industries with examples from the industry sector and specifically from Nova Chemicals. Journal of Hazardous Materials 2004;111:161–66.
14. Cooke DL, Rohleder TR. Learning from incidents: from normal accidents to high reliability. System Dynamics Review 2006;22:213–39.
15. Hair JF, Anderson RE, Tatham RL, et al. Multivariate data analysis. New Jersey: Prentice Hall, 1998.
16. Cullen DJ, Bates DW, Small SD, et al. The incident reporting system does not detect adverse drug events: a problem for quality improvement. Jt Comm J Qual Improv 1995;21:541–48.
17. Weingart SN, Ship AN, Aronson MD. Confidential clinician‐reported surveillance of adverse events among medical inpatients. J Gen Intern Med 2000;15:470–77.
18. Stanhope N, Crowley‐Murphy M, Vincent C, et al. An evaluation of adverse incident reporting. J Eval Clin Pract 1999;5:5–12.
19. Lawton R, Parker D. Barriers to incident reporting in a healthcare system. Qual Saf Health Care 2002;11:15–18.