Health Serv Res. 2005 August; 40(4): 997–1020.
PMCID: PMC1361187

An Educational Intervention to Enhance Nurse Leaders' Perceptions of Patient Safety Culture



Objective

To design a training intervention and then test its effect on nurse leaders' perceptions of patient safety culture.

Study Setting

Three hundred and fifty-six nurses in clinical leadership roles (nurse managers and educators/CNSs) in two Canadian multi-site teaching hospitals (study and control).

Study Design

A prospective evaluation of a patient safety training intervention using a quasi-experimental untreated control group design with pretest and posttest. Nurses in clinical leadership roles in the study group were invited to participate in two patient safety workshops over a 6-month period. Individuals in the study and control groups completed surveys measuring patient safety culture and leadership for improvement prior to training and 4 months following the second workshop.

Data Collection/Extraction Methods

Individual nurse clinical leaders were the unit of analysis. Exploratory factor analysis of the safety culture items was conducted; repeated-measures analysis of variance and paired t-tests were used to evaluate the effect of the training intervention on perceived safety culture (three factors). Hierarchical regression analyses looked at the influence of demographics, leadership for improvement, and the training intervention on nurse leaders' perceptions of safety culture.

Principal Findings

A statistically significant improvement in one of three safety culture measures was shown for the study group (p<.001) and a significant decline was seen on one of the safety culture measures for the control group (p<.05). Leadership support for improvement was found to explain significant amounts of variance in all three patient safety culture measures; workshop attendance explained significant amounts of variance in one of the three safety culture measures. The total R² for the three full hierarchical regression models ranged from 0.338 to 0.554.


Conclusions

Sensitively delivered training initiatives for nurse leaders can help to foster a safety culture. Organizational leadership support for improvement is, however, also critical for fostering a culture of safety. Together, training interventions and leadership support may have the most significant impact on patient safety culture.

Keywords: Patient safety, safety culture, leadership, training intervention

Patient safety and medical error have emerged as important quality and public policy issues in health care. Studies of the incidence of adverse events (AEs) in acute care hospitals have been reported internationally (e.g., Brennan et al. 1991; Wilson et al. 1995; Vincent, Neale, and Woloshynowych 2001; Baker et al. 2004). These studies indicate that between 5 and 20 percent of patients admitted to a hospital experience an AE (defined in the Australian study as an unintended injury or complication that results in disability, death, or prolonged hospital stay and is caused by health care management rather than the patient's underlying disease [Wilson et al. 1995]), that roughly 50 percent of these AEs are judged to be preventable, and that AEs cost health care systems millions of dollars in additional hospital days. These incidence data, together with the release of the Institute of Medicine's (IOM 1999) report To Err Is Human, prompted several national policy documents with comprehensive plans and direction for policymakers, health care leaders, clinicians, and regulators about the system changes necessary to improve patient safety (AGPS 1996; Department of Health 2000; IOM 2001).

From the literature it is clear that most AEs are the result of cumulative effects of small errors involving both human errors and latent failure—failures arising from organizational and administrative processes and systems (Reason 1990). They tend to emerge from the interactions of multiple, related components within complex systems (IOM 1999). Accordingly, the potential for error and AE reduction exists at all levels of the health care system. However, although research to measure the incidence of AEs continues to grow, there has been less in the way of empirical research into strategies for helping front-line providers reduce AEs and improve patient safety. It has been suggested that more targeted studies of potential interventions to reduce AEs are needed (Leape et al. 1998; Davis et al. 2001). Moreover, in a recent study of various health-related organizations in Canada, nearly half of all organizations surveyed indicated that they were not able to effectively improve patient safety (Baker and Norton 2002).

This study involved the design of a training intervention and a test of its effect on nurse leaders' perceptions of patient safety culture. The relationship between senior leadership support and perceived safety culture was also investigated. Nurses in clinical leadership roles were chosen as the focus for this study because they are high-leverage actors in the quality improvement (QI) process, with the ability to lead change (Munro 2002; Batalden et al. 2003; Currie and Brown 2003). Our literature review revealed no controlled studies of patient safety interventions with this group. Additionally, a recent international study reported that nurses feel the quality of care is deteriorating and that AEs related to such things as medication errors and falls occur regularly (Aiken et al. 2001), suggesting the area of AE reduction will be seen as relevant to this group.


Few intervention studies to improve patient safety have been reported in the literature, although some do exist. Most of the reported studies concern the effects of computerized physician order entry systems for reducing AEs (e.g., Bates et al. 1998). Studies of the adaptation of Crew Resource Management in emergency departments found that observed clinical errors were reduced in teamwork-trained EDs (Morey et al. 2002). Finally, the literature includes descriptive accounts of error reduction processes undertaken in individual units or organizations (e.g., Brown, Riippa, and Shaneberger 2001).

Thus far, most assessments of QI initiatives do not use randomization or nonequivalent control groups (Samsa and Matchar 2000). However, randomized controlled trials (RCTs) are needed: a recent study revealed that while most QI studies based on before-and-after observations reported positive findings, three published RCTs of QI suggested no impact on clinical outcomes and no evidence of organization-wide improvement in clinical performance (Shortell, Bennett, and Byck 1998).

The literature also shows that interventions tend to be aimed at intermediate outcomes expected to reduce AEs rather than AE reduction itself. For instance, the Institute for Healthcare Improvement breakthrough collaboratives ultimately targeted at reducing adverse drug events actually used the implementation and development of various medication error prevention practices as the outcome measure (Leape et al. 2000). A more recent study by Pronovost et al. (2003) described a strategic plan aimed at improving intermediate outcomes of patient safety culture and safety climate. Patient safety interventions often focus on these kinds of intermediate or upstream outcome measures because testing models where reduction in AEs is the dependent variable poses serious challenges. First, such studies are vulnerable to problems associated with confounding. Second, the kinds of changes in systems and culture that many suggest are required to reduce AEs and improve patient safety are likely to be observed only after long periods of time: witness the case of anesthesia, where evidence from safety interventions implemented in the early 1990s to reduce preventable deaths is only now appearing in the published literature (Runciman and Moller 2001).

Patient safety culture, in addition to being an important outcome measure, has become a key research priority in its own right (Battles 2003; Battles and Lilford 2003). Many have argued that patient safety culture change is the key to reducing error in health care (e.g., Ohlhauser and Schurman 2001). While levers to improve safety, such as training and information technology, are important, it has been suggested that such initiatives cannot be successful in the absence of a culture of safety (Firth-Cozens 2003; Nieva and Sorra 2003). For instance, root cause analysis is unlikely to uncover latent sources of AEs amidst a culture of silence (Nieva and Sorra 2003). Although safety culture assessment tools can be used for a variety of purposes (Nieva and Sorra 2003), most empirical studies of safety culture in health care have thus far only provided descriptive data on safety culture at one point in time (e.g., Pronovost et al. 2003; Singer et al. 2003). There are reports that Johns Hopkins (Pronovost et al. 2003) and the VA (Nieva and Sorra 2003) are using safety culture measures for the purpose of evaluating interventions; however, no pre- and posttest studies have been reported in the literature at this time.

Given (1) the clear need for and dearth of controlled studies of patient safety interventions, (2) the fact that most patient safety intervention studies, for good reason, focus on upstream or intermediate outcome variables, and (3) the extent to which patient safety culture is argued to be a critical antecedent of AE reduction, this study sought to carry out a controlled test of an intervention designed to improve nurse leaders' perceptions of patient safety culture in acute care settings.

Conceptual Model

It has been suggested that, in order to improve patient safety and reduce AEs, efforts are needed in three areas: (1) improved measurement and feedback to increase the detection of AEs and to guide interventions to improve systems and care processes (Croskerry 2000; Baker and Norton 2001; Battles and Lilford 2003; Thomas and Peterson 2003); (2) tools and change strategies to redesign care and support teams and individual practitioners in identifying and preventing AEs (IOM 1999; Baker and Norton 2001; Reason 2002); and (3) visible leadership supporting patient safety improvement efforts (Barach and Small 2000; Reinertsen 2000; Mohr, Abelson, and Barach 2002; O'Toole 2002; Firth-Cozens 2003; Pronovost et al. 2003; Walshe 2003).

Because testing models and interventions where AE reduction is the dependent variable poses considerable challenges (noted above), this study aimed to improve perceptions of patient safety culture, an intermediate outcome theorized to be a necessary antecedent of AE reduction (Leape et al. 1998; Firth-Cozens 2001; Mohr, Abelson, and Barach 2002). We set out to test whether (a) a training intervention focused on safety science and safety tools and (b) leadership support influence nurse leaders' perceptions of patient safety culture (see Figure 1). Berwick (2002) and Vincent (1999), among others, argue that to truly move patient safety forward, initiatives are required at the individual level, the micro-unit of care, the organization, and the system/policy level. In this study we focused on the individual level of nurse leaders, who are key actors when it comes to improvement processes at the unit level (Munro 2002; Balogun 2003; Currie and Brown 2003).

Figure 1
This Study Examined Relationships Inside the Shaded Area


Methods

A prospective evaluation of a patient safety training intervention using a quasi-experimental untreated control group design with pretest and posttest was used. Nurses in clinical leadership roles in the study group were invited to participate in two different patient safety workshops over a 6-month period. Individuals in the study and control groups completed surveys measuring patient safety culture and leadership for improvement prior to the first workshop and 10 months later (4 months following the second workshop). Workshop 1 (a) introduced evidence from international studies on the incidence of AEs in hospitals, (b) taught about theoretical work in the areas of safety and human error (e.g., the work of J. Reason, L. Leape, R. Amalberti), and (c) introduced two simple tools, one for preventing errors of omission as described by Reason (2002) and one for learning from AEs and near misses related to medical devices as described by Amoore and Ingram (2002). Workshop 2 focused on the role of teamwork and leadership in improving safety and showed how the organization's incident report data were used for improvement. The workshop presentations are available from the first author.

Questionnaire Administration and Sample

The study and control groups were two Canadian multi-site teaching hospitals from different jurisdictions. At baseline (Fall 2002) and again at follow-up (Fall 2003) we asked the nursing office in each organization to identify all nurses in clinical leadership roles, including nursing directors, front-line nursing unit managers, and clinical educators (clinical nurse specialists, advanced practice nurses, nurse practitioners, etc.). There were 408 people identified as being in one of these roles at baseline and 417 at follow-up. In November 2002 baseline questionnaires, along with a covering letter, were mailed to subjects in the control group. During the same period, subjects in the study group were invited to attend the first intervention workshop. Baseline data were collected at the start of the workshop. Subjects in the study group who did not attend the first workshop were mailed the study questionnaire immediately following the workshop. We used a modified Dillman (1978) approach to increase response rates (all mailed questionnaires were followed up by reminder cards 2 weeks later and a second mailing to all nonrespondents 4 weeks after that). Posttest questionnaires were mailed to all nurses in clinical leadership roles in the study and control groups 10 months later, in September 2003. Unique ID numbers used at baseline were retained at follow-up so that each respondent's pretest and posttest data could be linked. The baseline response rate was 83 percent (338/408), the follow-up response rate was 72 percent (300/417), and 244 of the 356 subjects (69 percent) eligible at both baseline and follow-up returned both questionnaires. These 244 subjects were eligible for inclusion in our analyses. Nonrespondents did not differ from respondents with respect to role (director, front-line manager, educator) at baseline; however, at follow-up, directors were underrepresented in the respondent group and managers were overrepresented.

Questionnaire Content/Study Measures

The same study questionnaire, which contained three sections, was used for the pretest and posttest (a copy of the questionnaire is available from the first author). Part A of the questionnaire measured patient safety culture using 32 items with a Likert response scale (adapted from Singer et al. 2003; Capital Health Region, Halifax, NS). To determine the dimensionality of patient safety culture, exploratory factor analysis (EFA) using principal axis factoring and oblique rotation was performed on these 32 items. EFA revealed the presence of three nontrivial patient safety culture factors. Final decisions regarding which items to include in each of the three safety culture factors were based on the significance of factor loadings, theoretical links between items and constructs (including how best to treat items with significant loadings on more than one factor), and scale internal consistency (coefficient α). The three factors have been labeled (1) valuing safety (at the organization and department levels), (2) fear of negative repercussions, and (3) perceived state of safety. The factor loading matrix is available in the electronic Appendix A on the journal's website. A valuing safety variable was computed as the mean of 10 items measured using a five-point agree–disagree Likert-type response scale (e.g., “My organization effectively balances the need for patient safety and the need for productivity”). Using the same response scale, the fear of negative repercussions variable was calculated as the mean of four items (e.g., “Clinicians who make serious mistakes are usually punished”). Finally, the perceived state of safety variable was created as the mean of nine items (e.g., “I believe that health care error constitutes a real and significant risk to the patients that we treat”). The coefficient α's are 0.86, 0.73, and 0.66 for the valuing safety, fear of repercussions, and perceived state of safety scales, respectively.
For all three of these variables, negatively worded items were recoded so that a higher score always indicates a more positive culture.
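As a concrete illustration of the scoring just described (reverse-coding negatively worded items, averaging item responses into a scale score, and checking internal consistency with coefficient α), the Python sketch below applies the standard calculations to a made-up item matrix; the data and item counts are illustrative, not the study's.

```python
import numpy as np

def reverse_code(item, scale_min=1, scale_max=5):
    """Recode a negatively worded Likert item so a higher score is more positive."""
    return (scale_max + scale_min) - item

def cronbach_alpha(items):
    """Coefficient alpha for an (n_respondents x k_items) response matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()   # sum of item variances
    total_var = items.sum(axis=1).var(ddof=1)     # variance of the summed scale
    return (k / (k - 1)) * (1 - item_vars / total_var)

# Illustrative responses on a 5-point agree-disagree scale (invented data)
responses = np.array([
    [4, 5, 4, 4],
    [2, 2, 3, 2],
    [5, 4, 5, 5],
    [3, 3, 3, 4],
    [1, 2, 2, 1],
])
scale_score = responses.mean(axis=1)  # each variable is a mean of its items
alpha = cronbach_alpha(responses)
```

Factor extraction itself (principal axis factoring with oblique rotation) would precede this scoring step and is not shown here.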

Part B of the questionnaire contained nine items designed to measure Leadership for Improvement. Measured using a seven-point agree–disagree Likert-type scale, leadership for improvement reflects the extent to which a respondent feels senior leadership in his/her hospital values data (e.g., performance data) and supports using data to bring about improvement. Unidimensionality and reliability (α=0.84) of this measure have been previously established (Soberman Ginsburg 2003). Sample items include: “Senior managers in this organization are completely committed to the idea that if we study the way we do our work, we can make things better around here” and “This organization devotes resources to measurement initiatives, but the results often end up sitting on a shelf.” Part C of the questionnaire asked for information on respondent age, gender, and setting (inpatient or outpatient).

The intervention variable is a dichotomous variable. The entire study sample was invited to participate in the intervention workshops and workshops were scheduled at the most appropriate time as identified by the target audience (weekday morning from 7:30 to 10:30 a.m. with breakfast). However, participation was voluntary and approximately half (122) of the 240 clinical leaders in nursing in the study organization attended one or both workshops. Subjects in the study group who attended one or both of the intervention workshops were coded as 1 and subjects in the control group as well as subjects in the intervention group who did not attend either of the workshops were coded as 0 (see note 1 for details about why subjects were grouped in this manner).

Because this was a field experiment, the researchers needed to be familiar with the context in the study and control organizations. Accordingly, we conducted a series of semistructured interviews in the study and control organizations to help assess workshop impact and tool implementation (in the study organization) and broader contextual issues related to safety in both organizations. We interviewed a random sample of five workshop attendees in the study group and an additional group of 10 senior leaders and champions: five in the study organization and five in the control organization. Workshop attendees were asked why they attended the workshop, how they felt about the material presented, what information or tools they shared with staff on their unit, and what factors prevented them from using the workshop tools or moving forward with patient safety more generally. Senior leaders and champions were asked about the most important safety initiatives in their organization, barriers and enablers for moving patient safety forward, whether they saw themselves as leading safety organizations, and future safety initiatives. Although in-depth qualitative study of the implementation of safety practices, including barriers and facilitating factors, was beyond the scope of this study, some common themes that emerged from the interviews are described briefly in the discussion section, since they help to deepen our understanding of the workshop impact.


Analysis

As described above, EFA was performed to assess the dimensionality of the patient safety culture construct—our dependent variable. Although the intervention was delivered to a cluster of individuals (e.g., individuals embedded in one organization), it is reasonable to evaluate cluster-based interventions at either the individual or the cluster level (Ukoumunne et al. 1999). Because clusters (organizations) were used solely to separate the study and control groups, individual nurse clinical leaders are the unit of analysis.

To test whether the intervention had an impact on patient safety culture we used repeated-measures analysis of variance (ANOVA) crossing two groups (intervention workshop [study] versus no intervention workshop [control]) with two time periods: before the initial intervention workshop (pretest) and 10 months later (posttest). A significant interaction would support the presence of a treatment effect. Post hoc analysis (using separate paired t-tests for the study and control groups) was used to determine the nature of any differences. Hierarchical regression was used to test the unique effect of (a) demographic variables, (b) the workshop intervention, (c) leadership for improvement, and (d) the interaction between (b) and (c) on posttest measures of perceived safety culture. The repeated-measures ANOVA and hierarchical regression model were run three times, once with each of the three safety culture factors as the dependent variable. Multivariate analyses were not used because it was reasonable to expect that either of the key explanatory variables (the intervention and leadership for improvement) might impact differently on the three safety culture measures used as the dependent variables. For both procedures, assumptions were tested and there were no violations.
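To make the analytic logic concrete: in a 2 (group) × 2 (time) mixed design like this one, the group-by-time interaction is equivalent to comparing pretest-to-posttest change scores between the two groups (the interaction F equals the squared t from that comparison), and the post hoc tests are paired t-tests within each group. The Python sketch below uses simulated scores; the group sizes are borrowed from the study, but the data themselves are invented.

```python
import numpy as np

def paired_t(pre, post):
    """Paired t statistic for within-group pre/post change."""
    d = np.asarray(post, float) - np.asarray(pre, float)
    return d.mean() / (d.std(ddof=1) / np.sqrt(d.size))

def independent_t(x, y):
    """Pooled-variance two-sample t; applied to change scores, this tests
    the group-by-time interaction in a 2x2 mixed design (F = t**2)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    nx, ny = x.size, y.size
    sp2 = ((nx - 1) * x.var(ddof=1) + (ny - 1) * y.var(ddof=1)) / (nx + ny - 2)
    return (x.mean() - y.mean()) / np.sqrt(sp2 * (1 / nx + 1 / ny))

rng = np.random.default_rng(0)
# Simulated scores: study group improves by ~0.2 points; control group is flat
pre_s = rng.normal(3.3, 0.5, 78);  post_s = pre_s + rng.normal(0.2, 0.3, 78)
pre_c = rng.normal(3.5, 0.5, 165); post_c = pre_c + rng.normal(0.0, 0.3, 165)

t_interaction = independent_t(post_s - pre_s, post_c - pre_c)
F_interaction = t_interaction ** 2          # interaction F in the 2x2 design
t_study = paired_t(pre_s, post_s)           # post hoc test, study group
t_control = paired_t(pre_c, post_c)         # post hoc test, control group
```

The equivalence between the interaction term and the between-group test on change scores holds only for the two-group, two-occasion case; larger designs need a full mixed-model ANOVA.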

We performed the ANOVA just described on those 243 cases with usable pretest and posttest data. Of these, 93 were from the control organization and 150 from the study organization; however, only 78 of 150 study group respondents attended one or both of the intervention workshops. Accordingly, the treatment variable compared workshop participants (n=78) with nonparticipants from both organizations (n=165).1 The hierarchical regression analyses included 242 valid cases based on listwise deletion.


Results

Ninety-two percent of study and control group respondents were female at baseline. At baseline, a higher proportion of respondents in the study group (31 versus 20 percent in the control group) were older (age 50–59 years) and were front-line nurse managers (44 percent of respondents in the study group compared with 25 percent of respondents in the control group). A higher percentage of control group respondents were clinical educators (54 versus 37 percent in the study group) and directors (16 versus 5 percent in the study group) at baseline. These differences in demographics remained at follow-up. In terms of baseline safety culture scores, study group scores were significantly lower than control group scores on the valuing safety (t=3.8, p<.001) and the perceived state of safety variables (t=2.2, p<.05).

Electronic Appendix B provides descriptive statistics and zero-order correlation coefficients for all study variables. Managers were more likely to be older (r=0.36, p<.01) and educators were more likely to be younger (r=−0.30, p<.01). Attendance at a study workshop was negatively correlated with educator status (r=−0.26, p<.01) and positively correlated with manager status (r=0.15, p<.05). Attendance at a study workshop was also negatively correlated with two of the three baseline safety culture measures (r=−0.15 to −0.22, p<.01), suggesting that those who attended a workshop may have had more concerns about safety. As expected, the three baseline safety culture variables and the baseline leadership for improvement measure were positively correlated with the same measure at follow-up (r=0.54–0.65, p<.01), and the three safety culture variables were also significantly interrelated at baseline (r=0.28–0.43, p<.01) and follow-up (r=0.28–0.48, p<.01).

Results of the repeated-measures ANOVA are reported in Table 1. The interaction between group and time was significant for valuing safety (F(1, 241)=11.9, p<.001) and perceived state of safety (F(1, 241)=4.8, p<.05) but not for fear of negative repercussions (F(1, 241)=0.6, NS).

Table 1
Results of Repeated-Measures ANOVA

Post hoc analysis conducted separately for the study and control groups using paired t-tests indicated a significant increase in the valuing safety variable from a mean of 3.29 (SD=0.55) at pretest to 3.49 (SD=0.59) at posttest for the intervention group (t=−3.81, p<.001). There was no significant change in fear of repercussions (t=−.36, NS) or perceived state of safety (t=−0.99, NS) for the intervention group. For the control group, there was a significant decrease in perceived state of safety from 2.80 (SD=0.53) at pretest to 2.71 (SD=0.53) at posttest (t=2.48, p<.05) and there was no change in valuing safety (t=1.15, NS) or fear of repercussions (t=−0.82, NS).

Tables 2a, 2b, and 2c show the results of the three hierarchical regression analyses. In all three cases the results show that respondent demographics do not explain a significant amount of variance in any of the posttest safety culture variables: when entered into the regression model first (block 1), their effect is not significant (block 1 ΔR²=0.009–0.028, NS in Tables 2a, 2b, and 2c). We controlled for the relevant pretest measure of safety culture by entering it in block 2 and, as expected, the pretest safety culture measure explains a significant amount of variance in the posttest safety culture measure (block 2 ΔR²=0.28–0.32, p<.001 in Tables 2a, 2b, and 2c). For each of these three regression models the leadership for improvement variable was entered in block 3, a dummy variable for workshop attendance was entered in block 4, and the interaction between leadership for improvement and workshop attendance was entered in block 5. From here the results are described separately since they begin to diverge.
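The block-entry procedure just described can be sketched as a series of nested ordinary least squares models, with ΔR² computed as the increase in R² as each block of predictors is added. The Python below arranges simulated predictors into the study's five blocks; the variable names and data are illustrative only, not the study data.

```python
import numpy as np

def r_squared(X, y):
    """R^2 from an OLS fit of y on X (an intercept column is added here)."""
    X = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1 - (resid @ resid) / ((y - y.mean()) @ (y - y.mean()))

def hierarchical_r2(blocks, y):
    """R^2 and delta-R^2 as predictor blocks are entered cumulatively."""
    r2s, cumulative = [], None
    for block in blocks:
        cumulative = block if cumulative is None else np.column_stack([cumulative, block])
        r2s.append(r_squared(cumulative, y))
    deltas = [r2s[0]] + [r2s[i] - r2s[i - 1] for i in range(1, len(r2s))]
    return r2s, deltas

rng = np.random.default_rng(1)
n = 242
demographics = rng.normal(size=(n, 3))                    # block 1
pretest = rng.normal(size=(n, 1))                         # block 2
leadership = rng.normal(size=(n, 1))                      # block 3
workshop = rng.integers(0, 2, size=(n, 1)).astype(float)  # block 4 (dummy)
interaction = leadership * workshop                       # block 5

# Simulated posttest outcome driven mainly by pretest culture and leadership
y = (0.6 * pretest + 0.5 * leadership + 0.2 * workshop
     + rng.normal(scale=0.7, size=(n, 1))).ravel()

r2s, deltas = hierarchical_r2(
    [demographics, pretest, leadership, workshop, interaction], y)
```

A significance test for each ΔR² (the partial F-test reported in the tables) would compare the nested models' residual sums of squares; only the R² bookkeeping is shown here.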

Table 2a
Results of Full Hierarchical Regression Analysis (DV=Valuing Safety [Posttest])
Table 2b
Results of Full Hierarchical Regression Analysis (DV=State of Safety [Posttest])
Table 2c
Results of Full Hierarchical Regression Analysis (DV=Fear of Repercussions [Posttest])

Table 2a shows that leadership for improvement explains a significant amount of variance in valuing safety, over and above that which is explained by respondent demographics and the pretest valuing safety score (block 3 ΔR²=0.22, p<.001). Workshop attendance, when entered in block 4, explains additional significant variance in valuing safety (ΔR²=0.02, p=.001), while the interaction between leadership for improvement and workshop attendance does not explain any additional variance in valuing safety (ΔR²=0.00, NS) when added in block 5.

Table 2b shows that leadership for improvement explains a significant amount of variance in perceived state of safety, over and above that which is explained by the first two blocks of variables (block 3 ΔR²=0.055, p<.001). Workshop attendance, when entered in block 4, did not explain any additional variance in perceived state of safety (ΔR²=0.007, NS), nor did the interaction between leadership for improvement and workshop attendance (block 5 ΔR²=0.002, NS).

Table 2c shows that leadership for improvement explains a significant amount of variance in fear of repercussions, over and above that which is explained by the first two blocks of variables (block 3 ΔR²=0.03, p<.01). Workshop attendance, when entered in block 4, did not explain any additional variance in fear of repercussions (ΔR²=0.000, NS), while the interaction between leadership for improvement and workshop attendance does explain a significant amount of additional variance (block 5 ΔR²=0.034, p=.001). Post hoc testing and plotting (Figure 2) show that leadership for improvement explains significantly more variance in fear of repercussions for individuals who participated in the intervention than for individuals who did not.

Figure 2
Interaction between Leadership for Improvement and Workshop Attendance

Although the regression coefficients reported in Tables 2a–c appear to suggest that, in relative terms, leadership for improvement is the most important predictor of each of the three patient safety culture factors, comparisons between predictor variables are often unfair because one variable may be procedurally or distributionally advantaged (Cooper and Richardson 1986). For instance, the leadership for improvement variable may be procedurally advantaged in that there is a common methods bias between that variable and each of the three safety culture dependent variables: each was measured on the same wave-two questionnaire. The hierarchical regression analyses were rerun using the pretest measure of leadership for improvement (results not shown) to effectively rule out the possibility that the effects seen were strictly the result of this common methods bias.


Discussion

The goal of this study was to assess whether an intervention targeted at clinical leaders in nursing would lead to measurable improvements in participant perceptions of patient safety culture. The results yield several important findings. First, differences in baseline safety culture scores between the intervention and control groups suggest that these kinds of voluntary, invitational workshops attract certain individuals; in this case, those who gave significantly lower ratings of valuing safety and state of safety at baseline chose to attend one or more workshops. Although the analyses used here adequately control for these baseline differences, the differences do suggest that this type of safety workshop intervention may be more attractive to those individuals who have more pronounced safety concerns. Accordingly, efforts will be required to make these kinds of teaching workshops attractive to others.

Post hoc analysis of the repeated-measures ANOVA revealed that valuing safety increased significantly for the study group between pretest and posttest, while perceptions of the state of safety decreased significantly for the control group over the same period. The clinical significance of these differences can be gleaned by expressing the size of each difference as a proportion of the standard deviation (the effect size). The effect sizes seen here are small–medium (0.36) for the change in valuing safety and small (0.17) for the change in state of safety (Cohen and Cohen 1983). The increase in valuing safety for the study group suggests that educational workshops designed to enhance understanding of patient safety issues and provide concrete tools or direction for actions to improve safety do hold promise as a vehicle for moving patient safety culture forward. Interview data from the five randomly selected workshop participants provide additional insights. All participants indicated that although the workshops were successful at bringing sensitive issues to this audience in a nonthreatening manner, competing priorities and human resource constraints have made it difficult for participants to use the workshop tools with staff on their units. Moreover, these same constraints are also perceived to create a continuous stream of unsafe situations at the front lines. These interview data are consistent with the decrease in perceived state of safety among the control group, and they prompt us to consider why, for individuals who did not participate in an intervention workshop, perceived state of safety actually declined over the 10-month period under study. Of the three safety culture factors, perceived state of safety may be more open to deterioration than either valuing safety or fear of repercussions, both of which have more to do with how culture is established by superiors in the organization.
Declines in perceived state of safety might be expected in a health care environment that continues to experience decreased capacity or lack of new investment at the front lines; alternatively, perceived state of safety may decrease as a result of increased attention to patient safety and the incidence of medical error more generally. In other words, declines in perceived state of safety may reflect real deteriorations in this area or they may reflect perceptual shifts. Either way, these changes were not found among those who participated in an intervention workshop, suggesting that although these kinds of workshops may not help improve all aspects of safety culture, they may buffer against deteriorations in certain aspects of it.
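As a quick check of the arithmetic, the effect sizes cited above can be reproduced from the published means by expressing each pre-to-post difference as a proportion of the standard deviation. The text does not state which SD was used; dividing by the pretest SD is an assumption here, though it does reproduce the reported values.

```python
def effect_size(pre_mean, post_mean, pre_sd):
    """Pre-to-post change as a proportion of the pretest SD
    (which SD the source used is an assumption)."""
    return abs(post_mean - pre_mean) / pre_sd

# Means and SDs reported in the results section
valuing_safety = effect_size(3.29, 3.49, 0.55)    # study group increase
state_of_safety = effect_size(2.80, 2.71, 0.53)   # control group decline
```

Rounded to two decimals these give 0.36 and 0.17, matching the small–medium and small effect sizes reported above.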

Nonequivalent control group designs such as the one used in this study face threats to internal validity (Cook and Campbell 1979). For instance, in the absence of randomization, other causal explanations for differences between treatment and control groups (related to instrumentation, testing, history, maturation, mortality, or regression to the mean) need to be ruled out. Use of the same instrumentation in both groups at pretest and posttest rules out the threat of instrumentation as well as that of testing. The interviews conducted with senior leaders in both organizations enabled us to rule out the threats of local history and maturation (during the study period, both organizations struggled with, and failed to implement, any noteworthy safety initiatives that might have caused systematic changes in culture unrelated to the study intervention). High response rates leave little concern about differential study mortality in the two groups, and the fact that both baseline and follow-up scores on all three of the safety culture variables were near the middle of the response scale makes regression to the mean at posttest unlikely in either group.

Although we saw change in the anticipated direction for the study group, we did not see positive effects for all three aspects of safety culture. However, it is clear from the organizational literature (e.g., Schein 1992) that organizational culture (safety or otherwise) is difficult to change. Nieva and Sorra (2003, p. ii21) suggest a safety culture is hard to establish and that “There is much to learn regarding creating and sustaining culture change in health care and the tools that might be used in these transformation efforts.” Clearly this is an area in need of further study. They further argue that the emphasis in health care on efficiency, cost containment, infallibility, and norms of perfection “combine to create a culture contradictory to the requirements of patient safety” (Nieva and Sorra 2003, p. ii17).

Given the challenges associated with changing something as strongly entrenched as culture, it is helpful to consider whether certain aspects of safety culture may be more amenable to change or likely to change first. A study of safety culture transformation in six VA centers found that the first change was “the realization that errors are the result of a systematic rather than an individual problem” (IOM 2003, p. 299). In other words, learning and understanding about human factors and what constitutes safer systems may be among the first aspects of safety culture to change. Our study questionnaire contained a similar item (“I believe that most serious occurrences happen as a result of multiple small failures, and are not attributable to one individual's actions”) that enabled us to look for evidence of change in this area in our study sample. Paired analysis (not shown) revealed a significantly higher level of agreement with this statement at posttest among the study group but no change between pretest and posttest among the control group. As we continue to look for ways to improve safety and safety culture, it may be prudent to begin with educational interventions that teach the science of patient safety, what makes for safer systems, and the importance of highlighting systems problems over blame and human error.

Finally, findings from the hierarchical regression analysis are useful for understanding the unique effects of respondent demographics, the workshop intervention, and leadership for improvement on nurse leaders' perceptions of patient safety culture. Respondent demographics (including age and managerial role) did not explain a significant amount of variance in any of the three patient safety culture variables. After controlling for pretest culture, we found that leadership for improvement explains a significant amount of variance in all three safety culture variables (anywhere from 3 to 21 percent of additional variance). Workshop participation explained variance over and above the variables just described only for the valuing safety variable. Finally, we found evidence of a significant interaction between workshop participation and leadership for improvement for the fear of repercussions variable, suggesting that, together, leadership for improvement and training workshops are important for explaining variation in at least certain aspects of perceived safety culture. Our results are consistent with other work showing that success in making changes aimed at reducing adverse drug events was associated with strong leadership, among other variables (Leape et al. 2000). Indeed, this kind of leadership support has been suggested to play an important “agenda-setting” role in various other organizational improvement activities, including the utilization of research findings (Huberman 1994), perceptions of performance data (Soberman Ginsburg 2003), response to hospital performance data (Baker and Soberman 2001), and clinical involvement in CQI (Weiner, Shortell, and Alexander 1997). As noted by Pronovost et al. (2003), senior leaders need to become more visible to front-line staff as they try to improve safety; this can be done through initiatives such as executive walkabouts and executive adoption of a patient care unit. 
Others have also noted that while individuals' attitudes toward safety can change, this change is unlikely to be sustained without a strong organizational commitment to safety (Firth-Cozens 2003). Outside of health care, employee perceptions of the safety system were also found to be related to management commitment to safety, which, in turn, was related to injury rates (O'Toole 2002). Our interviews with senior leaders revealed that in both organizations we studied there are committed individuals at the senior level along with knowledgeable and dedicated champions who are not members of senior management. However, in both organizations senior leadership struggled to define and put in place an actionable patient safety program as of Fall 2003. Thus, although senior leadership support may be a critical variable for moving patient safety forward in health care organizations, more research is needed to understand how this support can be generated or inspired, as well as how it can be conveyed to organizational members. Additional studies might look at whether more tangible forms of senior leadership support can have a positive impact at the front lines—perhaps on the kind of tool implementation we failed to see materialize in this study.
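The hierarchical regression logic described above (entering pretest culture, then leadership for improvement, then workshop attendance, and reading off the increment in R² at each step) can be sketched with simulated data; the variable names, coefficients, and sample size here are illustrative assumptions, not the study's.

```python
import numpy as np

def r_squared(X, y):
    """R^2 from an OLS fit of y on X (an intercept column is added)."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    ss_tot = np.sum((y - y.mean()) ** 2)
    return 1 - np.sum(resid ** 2) / ss_tot

rng = np.random.default_rng(42)
n = 200
pretest = rng.normal(size=n)                    # baseline safety-culture score
leadership = rng.normal(size=n)                 # leadership-for-improvement score
workshop = rng.integers(0, 2, n).astype(float)  # attended the workshops (0/1)

# Simulated posttest score with modest leadership and workshop effects
posttest = 0.6 * pretest + 0.4 * leadership + 0.3 * workshop + rng.normal(size=n)

r2_block1 = r_squared(pretest[:, None], posttest)
r2_block2 = r_squared(np.column_stack([pretest, leadership]), posttest)
r2_block3 = r_squared(np.column_stack([pretest, leadership, workshop]), posttest)
delta_r2_leadership = r2_block2 - r2_block1  # variance added by leadership
delta_r2_workshop = r2_block3 - r2_block2    # variance added by attendance
```

Each ΔR² is the "additional variance explained" by that block; in the study, a product term (workshop × leadership) was entered in a further block to test the interaction.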

This study provides a model and empirical evidence related to mechanisms for improving nurse leaders' perceptions of patient safety culture. Although the measures used are relatively new, would benefit from further refinement and validation, and may have limited power in some of our analyses,2 this initial exploratory effort provides a useful model and set of measures that health services researchers can use to begin to study quantitatively the impact of various initiatives targeted at improving patient safety. One limitation of this study is that it is unclear how long it takes to see evidence of change in perceptions of safety culture. This study looked at changes over roughly a 1-year period; some estimate that it may take as long as 5 years to develop a culture of safety that is felt throughout an organization (IOM 2003). Moreover, our data do not enable us to comment on whether the observed changes will be sustained. A further limitation is that this study relied on self-report questionnaire data, which are subject to social desirability biases. Future studies would be strengthened by including unobtrusive measures alongside self-report measures; as has been noted in the literature, safety culture cannot be assessed solely through self-report quantitative surveys (Cooper 2000; IOM 2003; Marshall et al. 2003). Qualitative approaches can also provide a level of richness unavailable through exclusively quantitative assessments (Strauss and Corbin 1990). Finally, this study looked at safety culture from the perspective of one group, nurse leaders. Mechanisms for influencing other groups' perceptions of safety culture, and the safety culture of entire organizations, require further investigation.

In terms of the generalizability of the study findings, we had strong response rates (over 80 percent at pretest, over 70 percent at posttest, and 69 percent across both), thereby limiting concern about nonrespondent bias. Directors were, however, underrepresented in the respondent group at follow-up, suggesting that caution should be used when generalizing results to this group. Nonetheless, these study findings should be generalizable to front-line clinical leaders in nursing in large acute care hospitals.

Given the limited number of published studies that have systematically tested interventions to improve either patient safety or more upstream outcome variables such as safety culture or the implementation of safety practices and tools, it is critical that work in these areas be pursued. Future studies might also look at the role of middle managers and champions in such improvement initiatives as well as the conditions under which safety and culture change can be sustained. Theoretical models of what is needed to create safer systems (e.g., IOM 1999), which are widely available, need to be subjected to more rigorous empirical examination. Admittedly, controlled studies in this area are challenging; it is therefore critical that researchers attempt to at least carry out controlled studies using quasi-experimental approaches (Cook and Campbell 1979) in addition to using other qualitative and mixed methods approaches (Verhoef and Casebeer 1997). This study has attempted to move in this direction.


The authors wish to thank the front-line and mid-level hospital managers who responded to the study questionnaires for their time and responses. As well, many thanks to Wendy Spragins who ably managed the data collection and data entry aspects of the study and to Mirka Ondrack for her statistical guidance. Finally, the authors would like to acknowledge and thank the Adult Research Committee of the Calgary Health Region for helping to fund this study and the Canadian Health Services Research Foundation (CHSRF) for supporting the first author through a postdoctoral fellowship.


1Our analyses could have looked for a treatment effect comparing nurses in the study organization who attended with those in the study organization who did not attend an intervention workshop. However, such analyses would exaggerate any self-selection bias resulting from the fact that the intervention workshops were voluntary for those in the study organization. Accordingly, subjects in the study organization who did not attend a workshop were grouped with subjects in the control organization (where workshop attendance was not an option). Nonetheless, we did conduct repeated-measures ANOVA (not shown) crossing group by time period where the group variable compared workshop attendees with nonattendees from within the study organization only. The same significant interactions were found as those we report when the group variable compared workshop participants with all nonworkshop participants (see “Results”). We also compared culture scores for nonattendees from the study organization and the control organization using independent samples t-test (not shown) and found no significant differences between these groups suggesting it is reasonable to lump them together.

2Newer measures are susceptible to unreliability (the α for the perceived state of safety variable was 0.66, which is lower than the commonly used threshold of 0.70 defined by Nunnally [1978]). In addition, in regression analysis unreliability of measures can have serious deleterious effects on variance explained (O'Grady 1982)—effects that are even more dramatic when looking at variance explained by interaction terms (Busemeyer and Jones 1983; Evans 1985; Aiken and West 1991) as we attempt to do here.
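Footnote 2 turns on the internal-consistency reliability (Cronbach's α) of the perceived state of safety scale. A minimal sketch of the α calculation, with made-up item scores rather than the study data:

```python
import statistics

def cronbach_alpha(items):
    """Cronbach's alpha for k item-score lists (one list per item):
    alpha = k/(k-1) * (1 - sum of item variances / variance of total score)."""
    k = len(items)
    totals = [sum(scores) for scores in zip(*items)]
    sum_item_var = sum(statistics.variance(item) for item in items)
    return k / (k - 1) * (1 - sum_item_var / statistics.variance(totals))

# Hypothetical responses to three state-of-safety items (rows = items,
# columns = respondents)
items = [
    [4, 3, 5, 2, 4, 3],
    [4, 2, 5, 3, 4, 3],
    [3, 3, 4, 2, 5, 3],
]
alpha = cronbach_alpha(items)
```

Values at or above the conventional 0.70 threshold (Nunnally 1978) are usually taken as acceptable; the 0.66 reported in the footnote falls just short of it.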

Supplementary Material

The following material is available from


  • Aiken LH, Clarke SP, Sloane DM, Sochalski JA, Busse R, Clarke H, Giovannetti P, Hunt J, Rafferty AM, Shamian J. “Nurses' Reports on Hospital Care in Five Countries.” Health Affairs. 2001;20(3):43–53. [PubMed]
  • Aiken LS, West SG. Multiple Regression: Testing and Interpreting Interactions. Newbury Park, CA: Sage Publications; 1991.
  • Amoore J, Ingram P. “Learning from Adverse Incidents Involving Medical Devices.” British Medical Journal. 2002;325:272–5. [PMC free article] [PubMed]
  • Australian Government Printing Service (AGPS). The Final Report of the Taskforce on Quality in Australian Healthcare, The Taskforce on Quality in Australian Healthcare. Canberra, Australia: Australian Government Printing Service; 1996.
  • Baker GR, Norton PG. “Making Patients Safer! Reducing Error in Canadian Healthcare.” Healthcare Papers. 2001;2(1):10–31. [PubMed]
  • Baker GR, Norton PG. “Patient Safety and Healthcare Error in the Canadian Healthcare System: A Systematic Review and Analysis of Leading Practices in Canada with Reference to Key Initiatives Elsewhere.” 2002 Report to Health Canada, Contract HC-3-030-0121.
  • Baker GR, Norton PG, Flintoft V, Blais R, Brown A, Cox J, Etchells E, Ghali WA, Hébert P, Majumdar SR, O'Beirne M, Palacios-Derflingher L, Reid RJ, Sheps S, Tamblyn R. “The Canadian Adverse Events Study: The Incidence of Adverse Events in Hospitalized Patients in Canada.” Canadian Medical Association Journal. 2004;170(11):1678–86. [PMC free article] [PubMed]
  • Baker GR, Soberman LR. “Organizational Response to Performance Data: A Qualitative Study of Perceptions & Use of HR'99.” 2001 Working Paper, Department of Health Policy, Management, & Evaluation, University of Toronto.
  • Balogun J. “From Blaming the Middle to Harnessing Its Potential: Creating Change Intermediaries.” British Journal of Management. 2003;14(1):69–82.
  • Barach P, Small SD. “Reporting and Preventing Medical Mishaps: Lessons from Non-Medical Near Miss Reporting Systems.” British Medical Journal. 2000;320:759–63. [PMC free article] [PubMed]
  • Batalden PB, Nelson EC, Mohr JJ, Godfrey MM, Huber TP, Kosnik L, Ashling K. “Microsystems in Health Care: Part 5. How Leaders Are Leading.” The Joint Commission Journal on Quality and Safety. 2003;29(6):297–308. [PubMed]
  • Bates DW, Leape LL, Cullen DJ, Laird N, Petersen LA, Teich JM, Brudick E, Hickey M, Kleefield S, Shea B, Vander Vliet M, Seger DL. “Effect of Computerized Physician Order Entry and a Team Intervention on Prevention of Serious Medication Errors.” Journal of the American Medical Association. 1998;280(15):1311–6. [PubMed]
  • Battles JB. “Patient Safety: Research Methods for a New Field.” Quality and Safety in Health Care. 2003;12(suppl II):ii1.
  • Battles JB, Lilford RJ. “Organizing Patient Safety Research to Identify Risks and Hazards.” Quality and Safety in Health Care. 2003;12(suppl II):ii2–7. [PMC free article] [PubMed]
  • Berwick DM. “A User's Manual for the IOM's ‘Quality Chasm’ Report.” Health Affairs. 2002;21(3):80–90. [PubMed]
  • Brennan TA, Leape LL, Laird NM, Hebert L, Localio AR, Lawthers AG, Newhouse JP, Weiler PC, Hiatt HH. “Incidence of Adverse Events and Negligence in Hospitalized Patients: Results of the Harvard Medical Practice Study I.” New England Journal of Medicine. 1991;324:370–6. [PubMed]
  • Brown B, Riippa M, Shaneberger K. “Promoting Patient Safety through Preoperative Patient Verification.” AORN Journal. 2001;74(5):690–8. [PubMed]
  • Busemeyer JR, Jones LE. “Analysis of Multiplicative Combination Rules When the Causal Variables Are Measured with Error.” Psychological Bulletin. 1983;93:549–62.
  • Cohen J, Cohen P. Applied Multiple Regression/Correlation Analysis for the Behavioral Sciences. 2d ed. Hillsdale, NJ: Lawrence Erlbaum; 1983.
  • Cook TD, Campbell DT. Quasi-Experimentation: Design & Analysis Issues for Field Settings. Chicago: Rand McNally Publishing Co; 1979.
  • Cooper M. “Towards a Model of Safety Culture.” Safety Science. 2000;36:111–36.
  • Cooper WH, Richardson AJ. “Unfair Comparisons.” Journal of Applied Psychology. 1986;71:179–84.
  • Croskerry P. “The Feedback Sanction.” Academic Emergency Medicine. 2000;7(11):1232–8. [PubMed]
  • Currie G, Brown AD. “A Narratological Approach to Understanding Processes of Organizing in a UK Hospital.” Human Relations. 2003;56(5):563–78.
  • Davis P, Lay-Yee R, Schug S, Briant R, Scott A, Johnson S, Bingley W. “Adverse Events Regional Feasibility Study: Indicative Findings.” New Zealand Medical Journal. 2001;114(1131):203–5. [PubMed]
  • Department of Health. “An Organisation with a Memory.” 2000. Report of an Expert Group on Learning from Adverse Events in the NHS Chaired by the Chief Medical Officer. London: The Stationery Office.
  • Dillman DA. Mail and Telephone Surveys: The Total Design Method. New York: John Wiley & Sons; 1978.
  • Evans MG. “A Monte Carlo Study of the Effects of Correlated Method Variance in Moderated Multiple Regression Analysis.” Organizational Behavior and Human Decision Processes. 1985;36:305–23.
  • Firth-Cozens J. “Cultures for Improving Patient Safety through Learning: The Role of Teamwork.” Quality and Safety in Health Care. 2001;10:ii26–31. [PMC free article] [PubMed]
  • Firth-Cozens J. “Evaluating the Culture of Safety.” Quality and Safety in Health Care. 2003;12:401. [PMC free article] [PubMed]
  • Huberman M. “Research Utilization: The State of the Art.” Knowledge and Policy. 1994;7(4):13–33.
  • Institute of Medicine (IOM). To Err Is Human: Building a Safer Health System. Kohn LT, Corrigan JM, Donaldson MS, editors. Washington, DC: National Academy Press; 1999.
  • Institute of Medicine (IOM). Crossing the Quality Chasm: A New Health System for the 21st Century. Washington, DC: National Academies Press; 2001.
  • Institute of Medicine (IOM). Keeping Patients Safe: Transforming the Work Environment of Nurses. Washington, DC: The National Academies Press; 2003. Accessed Jan 5, 2004, available at
  • Leape LL, Kabcenell AI, Gandhi TK, Carver P, Nolan TW, Berwick DM. “Reducing Adverse Drug Events: Lessons from a Breakthrough Series Collaborative.” Joint Commission Journal on Quality Improvement. 2000;26(6):321–31. [PubMed]
  • Leape LL, Woods DD, Hatlie J, Kizer KW, Schroeder SA, Lundberg GD. “Promoting Patient Safety by Preventing Medical Error.” Journal of the American Medical Association. 1998;280(16):1444–7. [PubMed]
  • Marshall M, Parker D, Esmail A, Kirk S, Claridge T. “Culture of Safety (Letter).” Quality and Safety in Health Care. 2003;12:318. [PMC free article] [PubMed]
  • Mohr JJ, Abelson HA, Barach P. “Creating Effective Leadership for Improving Patient Safety.” Quality Management in Health Care. 2002;11(1):69–78. [PubMed]
  • Morey JC, Simon R, Jay GD, Wears RL, Salisbury M, Dukes KA, Berns SD. “Error Reduction and Performance Improvement in the Emergency Department through Formal Teamwork Training: Evaluation Results of the MedTeams Project.” Health Services Research. 2002;37(6):1553–75. [PMC free article] [PubMed]
  • Munro A. “Working Together—Involving Staff: Partnership Working in the NHS.” Employee Relations. 2002;24(3):277–90.
  • Nieva VF, Sorra J. “Safety Culture Assessment: A Tool for Improving Patient Safety in Healthcare Organizations.” Quality and Safety in Health Care. 2003;12:ii17. [PMC free article] [PubMed]
  • Nunnally J. Psychometric Theory. New York: McGraw-Hill; 1978.
  • O'Grady KE. “Measures of Explained Variance: Cautions and Limitations.” Psychological Bulletin. 1982;92:766–77.
  • Ohlhauser L, Schurman DP. “National Agenda: Local Leadership.” Healthcare Papers. 2001;2(1):77–8. [PubMed]
  • O'Toole M. “The Relationship between Employees' Perceptions of Safety and Organizational Culture.” Journal of Safety Research. 2002;33(2):231–43. [PubMed]
  • Pronovost PJ, Weast B, Holzmueller CG, Rosenstein BJ, Kidwell RP, Haller KB, Feroli ER, Sexton JB, Rubin HR. “Evaluation of the Culture of Safety: Survey of Clinicians and Managers in an Academic Medical Center.” Quality and Safety in Health Care. 2003;12:405–10. [PMC free article] [PubMed]
  • Reason JT. Human Error. Cambridge: Cambridge University Press; 1990.
  • Reason J. “Combating Omission Errors through Task Analysis and Good Reminders.” Quality and Safety in Health Care. 2002;11:40–4. [PMC free article] [PubMed]
  • Reinertsen JL. “Let's Talk about Error: Leaders Should Take Responsibility for Mistakes.” British Medical Journal. 2000;320(18 March):730. [PMC free article] [PubMed]
  • Runciman WB, Moller J. “Iatrogenic Injury in Australia.” 2001 A Report Prepared by the Australian Patient Safety Foundation.
  • Samsa G, Matchar D. “Can Continuous Quality Improvement Be Assessed Using Randomized Trials?” Health Services Research. 2000;35(3):689–702. [PMC free article] [PubMed]
  • Schein E. Organizational Culture and Leadership. San Francisco: Jossey-Bass; 1992.
  • Shortell SM, Bennett CL, Byck GR. “Assessing the Impact of Continuous Quality Improvement on Clinical Practice: What It Will Take to Accelerate Progress.” Milbank Quarterly. 1998;76(4):593–624. [PubMed]
  • Singer SJ, Gaba DM, Geppert JJ, Sinaiko AD, Howard SK, Park KC. “The Culture of Safety: Results of an Organization-Wide Survey in 15 California Hospitals.” Quality and Safety in Health Care. 2003;12:112–8. [PMC free article] [PubMed]
  • Soberman Ginsburg L. “Factors That Influence Line Managers' Perceptions of Hospital Performance Data.” Health Services Research. 2003;38(1):261–86. [PMC free article] [PubMed]
  • Strauss AL, Corbin J. Basics of Qualitative Research: Grounded Theory Procedures and Techniques. Newbury Park, CA: Sage; 1990.
  • Thomas EJ, Petersen LA. “Measuring Errors and Adverse Events in Health Care.” Journal of General Internal Medicine. 2003;19:61–7. [PMC free article] [PubMed]
  • Ukoumunne OC, Gulliford MC, Chinn S, Sterne JAC, Burney PGJ, Donner A. “Methods in Health Service Research: Evaluation of Health Interventions at Area and Organisation Level.” British Medical Journal. 1999;319:376–9. [PMC free article] [PubMed]
  • Verhoef MJ, Casebeer AL. “Bridging the Gap: Combining Qualitative and Quantitative Methods.” Abstract Published in American Journal of Epidemiology. 1997;145(11):S55.
  • Vincent CA. “The Human Element of Adverse Events.” Medical Journal of Australia. 1999;170:404–5. [PubMed]
  • Vincent C, Neale G, Woloshynowych M. “Adverse Events in British Hospitals: Preliminary Retrospective Record Review.” British Medical Journal. 2001;322:517–9. [PMC free article] [PubMed]
  • Walshe K. “Understanding and Learning from Organisational Failure.” Quality and Safety in Health Care. 2003;12:81–2. [PMC free article] [PubMed]
  • Weiner BJ, Shortell SM, Alexander J. “Promoting Clinical Involvement in the Hospital Quality Improvement Efforts: The Effects of Top Management, Board, and Physician Leadership.” Health Services Research. 1997;32(4):491–511. [PMC free article] [PubMed]
  • Wilson RM, Runciman WB, Gibberd RW, Harrison BT, Newby L, Hamilton JD. “The Quality in Australian Health Care Study.” Medical Journal of Australia. 1995;163:458–71. [PubMed]

Articles from Health Services Research are provided here courtesy of Health Research & Educational Trust