Health Serv Res. 2009 October; 44(5 Pt 1): 1563–1584.
PMCID: PMC2754548

Comparing Safety Climate between Two Populations of Hospitals in the United States



Objective

To compare safety climate between diverse U.S. hospitals and Veterans Health Administration (VA) hospitals, and to explore the factors influencing climate in each setting.

Data Sources

Primary data from surveys of hospital personnel; secondary data from the American Hospital Association's 2004 Annual Survey of Hospitals.

Study Design

Cross-sectional study of 69 U.S. and 30 VA hospitals.

Data Collection

For each sample, hierarchical linear models used safety-climate scores as the dependent variable and respondent and facility characteristics as independent variables. Regression-based Oaxaca–Blinder decomposition examined differences in effects of model characteristics on safety climate between the U.S. and VA samples.

Principal Findings

The range in safety climate among U.S. and VA hospitals overlapped substantially. Characteristics of individuals influenced safety climate consistently across settings. Working in southern and urban facilities corresponded with worse safety climate among VA employees and better safety climate in the U.S. sample. Decomposition results predicted 1.4 percentage points better safety climate in U.S. than in VA hospitals: −0.77 attributable to sample-characteristic differences and 2.2 due to differential effects of sample characteristics.


Conclusions

Results suggest that safety climate is linked more to the efforts of individual hospitals than to participation in a nationally integrated system or to measured characteristics of workers and facilities.

Keywords: Safety culture, safety climate, survey research, hospitals, integrated hospital networks, decomposition

Based on mounting evidence that better safety climate is related to lower incidence (Naveh, Katz-Navon, and Stern 2005; Hofmann and Mark 2006; Neal and Griffin 2006; Vogus and Sutcliffe 2007; Singer et al. 2008b) and greater reporting (Cohen et al. 2004; Weingart et al. 2004; Gandhi et al. 2005) of adverse events and to increased communication among managers and staff (Hofmann and Morgeson 1999), hospitals are focusing considerable effort on improving safety climate. Along with hospitals' own efforts, several voluntary, collaborative initiatives that could improve safety climate (e.g., the Leapfrog Group's patient safety leaps and the Institute for Healthcare Improvement's 5 Million Lives campaign) have garnered substantial participation among both public and private hospitals. "Benchmarking" of safety-climate survey results through participation in such collaborative initiatives is an effective way for hospitals to target quality improvement efforts. Benchmarking enables a hospital to compare its survey results with those of other hospitals, thereby facilitating identification of relative strengths and weaknesses, and it is being encouraged by numerous organizations. Since 2002, the Joint Commission's performance improvement standard (PI.01.01.01) has encouraged hospitals to collect data on staff perceptions of safety risks and improvement opportunities and to compare those data with external sources (Joint Commission on Accreditation of Health Care Organizations 2002). The Agency for Healthcare Research and Quality (AHRQ) established the Hospital Survey on Patient Safety Culture Comparative Database for this purpose in 2006; its 2009 database included safety-climate results from 622 hospitals (Sorra et al. 2009). Independent investigators engaged in benchmarking safety climate have also identified systematic differences in safety climate within and among hospitals, which provide clues to improving safety climate more generally (Singer et al. 2003; Thomas, Sexton, and Helmreich 2003; Makary et al. 2006; Sexton et al. 2006a, b, c; Vogus and Sutcliffe 2007; Hartmann et al. 2008; Singer et al. 2008a, 2009). Benchmarking safety-climate survey results across health care systems, however, remains difficult because of the coordination it requires.

Our own effort to measure and benchmark safety climate is unique in that we used essentially the same survey instrument and sampling and administration procedures at approximately the same time in two discrete populations of hospitals in the United States: the Veterans Health Administration (VA) health care system and a national sample, excluding VA hospitals. This provided a novel opportunity to compare safety climate across two populations encompassing great diversity in both individual hospital characteristics and overall organizational structure, and potentially to identify any hospital features systematically related to safer care.

Differences in safety climate between VA and other U.S. hospitals may exist inherently because the VA is a nationally integrated network, while few U.S. hospitals belong to large, integrated systems and none belong to nationally integrated systems. As a system, the VA enjoys distinct advantages in broadly implementing and enforcing compliance with standardized safety activities. The VA conducts several initiatives with the potential to improve safety climate. For example, the VA National Center for Patient Safety (NCPS) was established specifically to promote a systems approach to preventing and reducing harm to patients and to encourage hospitals to conduct root cause analyses after safety incidents (NCPS website, accessed on August 26, 2008).

In this paper, we examine differences in safety climate between 69 diverse U.S. hospitals and 30 VA hospitals using cross-sectional employee surveys. Given potential advantages in promoting strong safety climate in a nationally integrated network, we hypothesized that safety climate among VA hospitals would be stronger than among U.S. hospitals.

Differences in safety climate between the U.S. and VA samples may arise from multiple sources. First, there may be variation between the two samples in measured characteristics associated with safety climate (for instance, one sample may contain more large hospitals than the other). Second, the residual difference in safety climate between the two samples would be attributable to differential effects of those characteristics in the two health care systems (e.g., hospital size may affect VA hospitals differently than U.S. hospitals). We compared safety climate in these two settings in a way that allowed us to discern the relative impact of these potential sources of difference. We hypothesized that variance in observed sample characteristics would explain more of the difference in safety climate between U.S. and VA hospitals than would differential effects of sample characteristics on the two groups.


Methods

Data Sources

We used the Patient Safety Climate in Healthcare Organizations (PSCHO) survey to collect data on employees' perceptions of safety climate. While various instruments exist to measure hospital safety climate (Colla et al. 2005; Flin et al. 2006), the PSCHO instrument is the only one with established reliability and validity in both U.S. and VA hospital settings (Singer et al. 2007; Hartmann et al. 2008). PSCHO survey items use a five-point Likert scale ranging from "strongly agree" to "strongly disagree," with a neutral midpoint. Items reflect 12 dimensions that capture various aspects of safety climate. We divided these dimensions into three categories, based on the extent to which they described hospital (e.g., "Organizational Resources for Safety"), work-unit (e.g., "Unit Safety Norms"), and interpersonal (e.g., "Fear of Blame and Punishment") contributions to safety climate (Singer et al. 2007).

Because of modifications resulting from psychometric testing, two slightly different versions of the PSCHO survey were used in this study. In U.S. hospitals we used a 45-item instrument, while in the VA we used a 42-item instrument. The two versions have 41 common items, 39 of which map onto the 12 safety-climate dimensions. Both versions of the PSCHO also contained six close-ended demographic items.

Because the development of a strong safety climate necessitates a homogeneous focus on preventing safety failures, the PSCHO instrument is scored to highlight responses opposed to safety, which we refer to as "problematic responses." We generated scores for items, dimensions, and safety climate overall. First, we calculated the mean percent problematic response ("PPR") for a given item across all respondents. We then calculated the mean of all item means in a dimension and the mean of all item means in the survey. A lower mean indicates a better perception of safety climate. This method of scoring identifies areas of nonuniformity in safety focus that are of potential concern and that might benefit from interventions to improve the safety climate.
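As a concrete sketch, the scoring just described can be expressed in a few lines of Python. The function names and the choice of which response codes count as problematic are illustrative assumptions; the survey defines the problematic direction item by item.

```python
import numpy as np

def item_ppr(responses, reverse_scored=False):
    """Percent problematic response (PPR) for one survey item.

    `responses` holds 5-point Likert codes (1 = strongly agree ...
    5 = strongly disagree). As an illustrative assumption, codes 4-5
    are treated as opposing safety unless the item is reverse-scored.
    """
    r = np.asarray(responses, dtype=float)
    problematic = (r <= 2) if reverse_scored else (r >= 4)
    return 100.0 * problematic.mean()

def mean_of_item_means(item_pprs):
    """Dimension or overall score: the unweighted mean of item PPRs.

    A lower value indicates a better perceived safety climate.
    """
    return float(np.mean(item_pprs))
```

For example, an item answered 4, 5, 1, 2 by four respondents yields a PPR of 50 percent, and a dimension whose items score 10, 20, and 30 percent averages to a dimension PPR of 20 percent.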

Data on the characteristics of respondents' facilities were obtained from the 2004 American Hospital Association (AHA) Annual Survey of Hospitals. Using these data, we determined hospitals' nurse staffing ratios, bed size, teaching status, national census region, and urban or nonurban location (see Table 1).

Table 1
Respondents' Individual and Facility Characteristics

Approval from relevant Institutional Review Boards was granted before conducting the studies.


We used stratified random sampling strategies in both populations. The U.S. hospital sample represented non-VA public and private acute-care hospitals, approximately equally divided among U.S. census regions and size categories. The VA sample represented a balanced geographic distribution of VA hospitals in four performance strata based on AHRQ's Patient Safety Indicators (PSIs) (low, medium, high, and other), to minimize selection bias. Details of the sampling strategies have been summarized elsewhere (Hartmann et al. 2008; Singer et al. 2009).

Although we did not stratify the U.S. sample based on performance, the sample included 69 hospitals whose PSI rates were similar to those of all U.S. hospitals. Our recruitment strategy, however, dictated that average size and related characteristics would differ from the U.S. average (Singer et al. 2009). In addition, despite recruitment efforts, hospitals from the Midwest were underrepresented in our sample compared with the U.S. average.

The VA sample included 30 hospitals, including eight facilities each from the high, medium, and other PSI rate strata, and six from the low stratum. The VA facilities represented a balanced geographic distribution within each PSI stratum, with the exception of no low PSI hospitals in the West (Rosen et al. 2008).

Administration of Surveys

U.S. hospital survey administration took place from July 2006 to May 2007; the VA administration was conducted from December 2005 to May 2006. In both groups we sampled 100 percent of senior managers, defined as department head or above; 100 percent of active hospital-based physicians; and a random 10 percent of all other employees. Senior managers and physicians were over-sampled because of their relatively small numbers and their potentially low response rates, respectively.

For U.S. hospitals, we also sampled 100 percent of employees in three work areas in 12 larger hospitals with relatively high response rates in a 2004 survey administration so as to permit work-area-level analyses while maintaining respondent confidentiality. In these hospitals, we over-sampled employees in work areas that in 2004 were least likely to meet our 10-respondent minimum reporting requirement: laboratories (lab), operating rooms (ORs), and intensive care units (ICUs). Budget constraints drove this selection approach.

In the VA, to allow for analysis of work areas in which employees conduct work of intrinsically greater hazard, we also sampled 100 percent of employees in certain work areas in 10 randomly selected hospitals. The specific work areas were the OR, postanesthesia care unit, ICU, and emergency department. In this paper, we refer to these as “high hazard units” (HHUs).

The sampling frames in U.S. and VA hospitals consisted of 36,375 and 9,309 personnel, respectively. Both samples excluded individuals who no longer worked at the facility and those who used a survey response postcard to indicate that they did not wish to participate.

Analysis of Data

Weighting of Data

Two U.S. hospitals were excluded from analysis because AHA data suggested improbably high nurse staffing ratios. One VA hospital was dropped because it returned data for physicians only. For the remaining hospitals, we employed weighting techniques to reflect the two sampling frames accurately (Singer et al. 2003; Hartmann et al. 2008). Identical weighting calculations were performed for each sample. First, we determined separate sampling and nonresponse weights. Regarding the latter, for the U.S. hospital sample we calculated a nonresponse weight for each workgroup (senior managers, physicians, and other employees) within each hospital. In VA hospitals, we calculated four nonresponse weights: for senior managers, physicians, HHU employees, and regular staff for each hospital. Then, in both samples, we multiplied the nonresponse and sampling weights and used the resulting "combined weight" to calculate a proportional weight that accounted for hospital size differences.
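A minimal sketch of this weighting step in Python. The text does not spell out the exact formulas, so the inverse-probability form below is an assumption, and the function name is illustrative:

```python
def combined_weight(sampling_fraction, response_rate):
    """Combined weight for one workgroup within one hospital.

    Assumes the standard inverse-probability form:
      sampling weight    = 1 / sampling_fraction
      nonresponse weight = 1 / response_rate
    Their product is the "combined weight" described in the text.
    """
    return (1.0 / sampling_fraction) * (1.0 / response_rate)

# Example: a "10 percent of other employees" workgroup with a 50
# percent response rate; each respondent stands in for 20 employees.
frontline_weight = combined_weight(0.10, 0.50)
```

The combined weights would then be rescaled into the proportional weights that keep large hospitals from dominating the pooled estimates.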

Statistical Analysis

For all analyses, the unit of analysis was the individual. Initially, we compared sample characteristics of respondents in U.S. and VA hospitals. We compared overall mean PPR in each hospital, graphically distinguishing hospitals from the U.S. and VA samples. We assessed internal consistency reliability for the 12 dimensions of the PSCHO instrument by calculating Cronbach's α coefficients for proposed dimensions for each sample. We compared average PPR among U.S. and VA hospitals for each item and dimension.
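The internal-consistency check can be illustrated with a small Python function implementing the usual formula for Cronbach's α (a sketch on a toy matrix; the study computed α per dimension, per sample, on the survey data):

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents x n_items) array.

    alpha = k/(k-1) * (1 - sum of item variances / variance of total score)
    """
    X = np.asarray(items, dtype=float)
    k = X.shape[1]
    item_vars = X.var(axis=0, ddof=1)          # per-item variance
    total_var = X.sum(axis=1).var(ddof=1)      # variance of summed score
    return (k / (k - 1.0)) * (1.0 - item_vars.sum() / total_var)
```

Perfectly correlated items give α = 1; items with little shared variance, as in the two "Fear" dimensions discussed in the Results, pull α down.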

The dependent variable for all statistical models was PPR for each individual across all 39 PSCHO survey items, a summary measure we call “safety climate overall.” All models included variables describing individual respondents (i.e., gender, age, length of time at institution, job type, management category, and employment in HHU) and the facilities in which they worked (i.e., geographic region, hospital size, urban location, and nurse staffing ratio). Teaching status was not included in the models because major teaching status was correlated with large hospital size (r=0.5, p<.001) in both samples.

We examined the relationship between PPR and respondent characteristics in the United States and VA by estimating a separate hierarchical linear model (HLM) for each sample. To test the appropriateness of using two-level HLM to account for nesting of individuals within hospitals, we first ran random effects ANOVA “empty” models that included no independent variables (Snijders and Bosker 1999). Comparison of the two-level models with the linear regressions revealed significant differences at the hospital level in both samples (χ2=449 and 21 for U.S. and VA hospitals, respectively; both p<.001), indicating that there were meaningful differences in PPR among staff from different hospitals and that two-level random intercept models were preferred. The models did not assume that PPR was uniformly represented within a facility; rather, they allowed for variation within and across facilities at the individual level. We did not use three-level HLMs to account for work-area variance due to limitations of the work-area data.
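The logic of the "empty" model, partitioning variance in PPR into between-hospital and within-hospital components, can be sketched for a balanced design with one-way random-effects ANOVA estimators (an illustration only; the study's data are unbalanced and were fit as two-level HLMs by maximum likelihood):

```python
import numpy as np

def variance_components(scores_by_hospital):
    """Between- and within-hospital variance and the intraclass
    correlation (ICC) for equally sized hospital groups.

    A nonzero between-hospital component is what justifies a
    random-intercept model over pooled linear regression.
    """
    groups = [np.asarray(g, dtype=float) for g in scores_by_hospital]
    n = len(groups[0])                                  # respondents per hospital
    within = float(np.mean([g.var(ddof=1) for g in groups]))
    means = np.array([g.mean() for g in groups])
    between = max(means.var(ddof=1) - within / n, 0.0)  # method-of-moments estimate
    icc = between / (between + within) if (between + within) > 0 else 0.0
    return between, within, icc
```

If every hospital had the same mean PPR, the between-hospital component and the ICC would be zero and pooled regression would suffice; the significant χ² tests reported above indicate the opposite.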

To examine variance in observed sample characteristics between the U.S. and VA hospitals and differential effects of sample characteristics on PPR in the two groups, we conducted a regression-based decomposition approach developed by Oaxaca and Blinder (Blinder 1973; Oaxaca 1973). Oaxaca–Blinder decomposition has been broadly applied in the economics literature and more recently in health services research (Kirby, Taliaferro, and Zuvekas 2006; Shen and Long 2006; Hudson, Miller, and Kirby 2007). We refer to systematic variance between the two samples in characteristics associated with safety climate as the "sample-characteristics component" because it is explained by observable variation in sample characteristics. We estimated the sample-characteristics component by using the U.S. hospital model estimates as the reference model. The residual difference in PPR between the two samples is called the "unexplained component" and includes (a) differential effects of the sample characteristics in the model between the two health care systems, and (b) differences in unobserved factors such as differences in characteristics of patients. Thus, the sample-characteristics component in our Oaxaca–Blinder decomposition measures the expected difference in PPR assuming that the same model is applicable to both systems, and the unexplained component measures the extent to which the effects of observed and unobserved characteristics in the models differ between U.S. and VA hospitals. In other words, the unexplained component indicates how the U.S. and VA samples would differ if the distribution of the sample characteristics were exactly the same.
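On toy data, the two-fold decomposition can be sketched in Python with ordinary least squares (the study used weighted hierarchical models and Stata's Oaxaca module; the function and variable names below are illustrative):

```python
import numpy as np

def oaxaca_blinder(X_ref, y_ref, X_other, y_other):
    """Two-fold Oaxaca-Blinder decomposition using the first sample
    (here, the U.S. hospitals) as the reference.

    Returns (explained, unexplained), which by construction sum to
    mean(y_other) - mean(y_ref).
    """
    def ols(X, y):
        Z = np.column_stack([np.ones(len(X)), X])       # add intercept
        beta, *_ = np.linalg.lstsq(Z, y, rcond=None)
        return beta

    b_ref, b_oth = ols(X_ref, y_ref), ols(X_other, y_other)
    xbar_ref = np.concatenate([[1.0], np.asarray(X_ref).mean(axis=0)])
    xbar_oth = np.concatenate([[1.0], np.asarray(X_other).mean(axis=0)])

    explained = (xbar_oth - xbar_ref) @ b_ref    # sample-characteristics component
    unexplained = xbar_oth @ (b_oth - b_ref)     # differential-effects component
    return float(explained), float(unexplained)
```

The two components always sum to the raw difference in mean outcomes between the samples, mirroring how the −0.77 and 2.16 point components reported in the Results sum to the 1.39 point difference in predicted PPR.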

Analyses were conducted using Stata (version 9.2), including the Oaxaca module (Jann 2008) for the Oaxaca–Blinder decomposition.


Results

Survey Response

Among the 67 U.S. hospitals studied, 13,841 individuals responded to the survey (41 percent). Response rates for individual hospitals ranged from 13 to 100 percent. Response also varied by type of personnel, with 62 percent of senior managers, 20 percent of physicians, and 50 percent of frontline employees responding.

For the 29 VA hospitals, we obtained an overall response rate of 50 percent (4,581 respondents). Response rates varied among hospitals (26–73 percent) and among personnel (69 percent for senior managers, 38 percent physicians, 38 percent HHU personnel, and 60 percent other staff).

Comparison of Sample Characteristics

All comparisons between the demographic characteristics of U.S. and VA samples revealed statistically significant differences (p<.001; Table 1). Respondents in U.S. hospitals were considerably younger than those in VA hospitals. U.S. hospital personnel also worked less time at their facility and were less likely to be male. They were more likely than VA personnel to work in HHUs and to be nurses and senior managers. In the U.S. sample, respondents more often worked in hospitals from the West, categorized as large, and with higher nurse staffing ratios. They were less often in major teaching hospitals and urban areas.

Safety-Climate Perceptions by Hospital

The graph displays the overall PPR and 95 percent confidence interval for each of the 96 hospitals in our study, displayed from lowest PPR (best safety climate) to highest (worst safety climate), differentiating between U.S. and VA hospitals. The range in safety climate among U.S. hospitals is larger than among VA hospitals, based on point estimates. In U.S. hospitals PPR varied from 7.7 to 24.5 percent, and in VA hospitals the range was 11.6–23.3 percent. More than twice as many VA hospitals fell in the bottom half of the distribution (n=21) as in the top half (n=8). The results, however, place individual VA hospitals among both the top 10 and bottom 10 hospitals surveyed, and uncertainty in the point estimates suggests few meaningful differences between U.S. and VA hospitals (Figure 1).

Figure 1
Overall Percent Problematic Response (PPR) by Hospital, U.S. and VA Hospitals 2006

Comparison of Safety-Climate Perceptions

Table 2 presents survey results by item and dimension. Cronbach's α coefficients for all dimensions were within an acceptable range (0.6–0.8) except for "Fear of Shame" (0.4) and "Fear of Blame and Punishment" (0.5). The low reliabilities for the latter two scales reflect the reduced number of common items remaining after dropping those items that were not phrased identically in the U.S. and VA surveys. Results for these dimensions are presented because the domains represent potentially important aspects of safety climate; however, they should be regarded as tentative and interpreted with caution.

Table 2
Mean Percent Problematic Response (PPR) among All Respondents by Item and Dimension: U.S. and VA*

The overall average PPR (i.e., the mean of individual item means) was not significantly different between U.S. (mean=15.9, SD=1.61) and VA hospitals (mean=17.2, SD=1.56; p=.55). For 10 of the 12 individual dimensions, mean PPR was lower in U.S. than in VA hospitals. However, a smaller percentage of VA than U.S. respondents indicated fear of blame or punishment and that they had witnessed or participated in unsafe care.

Relationship of Safety Climate and Sample Characteristics

All respondent characteristics, with the exception of time worked in the hospital, related significantly and in the same direction to safety climate overall in both samples (Table 3a). Being male and being a nurse were positively related to PPR (worse safety climate), while age over 50, being a senior manager, and working in an HHU were negatively related to PPR (better safety climate). The magnitude of these relationships, however, differed somewhat by sample. PPR among men was higher than among women by 0.8 and 2.6 percentage points in U.S. and VA hospitals, respectively. PPR among senior managers was lower than among nonsenior managers by 4.6 percentage points in U.S. hospitals and by 7.3 percentage points in the VA.

Table 3a
Association of Individual Mean Percent Problematic Response (PPR) with Individual and Facility Characteristics

In contrast, characteristics of the facilities in which respondents worked related to safety climate in considerably different ways in the two samples. For U.S. hospitals, all facility characteristics with the exception of urban location related significantly to safety climate. For example, larger size was associated with higher PPR, while a higher nurse staffing ratio was associated with lower PPR. In the VA, only working in the South and working in an urban location were significantly correlated with safety climate, and in both instances the direction of correlation was opposite that of U.S. hospitals. VA employees working in the South had higher PPR than VA employees in the West, whereas employees in Western U.S. hospitals had higher PPR than employees in all other regions. VA employees working in urban hospitals had higher PPR than employees working in nonurban locations; the opposite was true for U.S. hospital employees.

Oaxaca–Blinder Decomposition of Safety-Climate Results

The Oaxaca–Blinder decomposition analysis allowed us to quantify the extent to which the predicted difference in PPR between the U.S. and VA hospitals was due to (a) variation in the two systems' distributions of sample characteristics (sample-characteristics component) and (b) differences in the U.S. and VA model characteristics as expressed by the values of the coefficients (unexplained component). The model calculated the sample-characteristics and unexplained components based on the coefficients from the U.S. hospital model for each variable and the difference between the distributions of each characteristic for each sample. That is, (X̄_VA − X̄_US)β̂_US gives the effect of the difference in characteristic X between U.S. and VA hospitals on the predicted PPR, using the U.S. hospital model coefficients (β̂_US) as the reference.

The net difference in safety climate, based on the predicted means of the U.S. and VA models, was a 1.39 percentage point higher predicted PPR for the VA (Table 3b). Some of this difference was attributable to observed sample characteristics. For example, being male was associated with higher PPR in both samples (Table 3a). Because the VA had more male respondents, the result was a higher predicted PPR for the VA sample, on average by 0.108 percentage points (Table 3b). On the other hand, because the VA sample had more respondents older than 50 than the U.S. sample, and this characteristic was associated with lower PPR, it contributed −0.298 percentage points to the VA's predicted PPR. The aggregate impact of variation between U.S. and VA hospitals in the distribution of sample characteristics was −0.766, suggesting that the predicted VA PPR should be 0.77 percentage points lower than that for U.S. hospitals (based on the U.S. hospital sample as the reference). However, the unexplained component accounted for a larger portion of the difference in safety climate between U.S. and VA hospitals than the sample-characteristics component: the differential effects of observed characteristics (i.e., differences in model coefficients) plus differences in unobserved characteristics predicted average PPR in the VA to be 2.160 percentage points higher than in U.S. hospitals.

Table 3b
Differences in U.S. and VA Safety Climate, Decomposition Results


Discussion

This study is the first to compare hospital safety climate between two fundamentally different sets of hospitals: one a nationally integrated hospital network, the other predominantly independent general acute-care hospitals. The study summarizes safety climate in the VA and other U.S. hospitals and factors influencing safety climate in each setting. Results also show how sample characteristics contribute to differences in safety climate between settings.

Overall, we found no difference in safety climate between U.S. and VA hospitals on average, based on descriptive statistics. Differences with respect to specific dimensions were significant, generally favoring U.S. hospitals. However, the ranges in safety-climate results among U.S. and VA hospitals substantially overlapped, suggesting that neither population has achieved superior safety climate. In addition, relative to high-reliability organizations, such as naval aviation, which serve as the "gold standard" for safety achievement despite hazardous and demanding conditions, safety climate in both U.S. and VA settings was considerably worse (Gaba et al. 2003). These findings do not support our first hypothesis, that participating in a nationally integrated hospital network would be associated with stronger safety climate. It appears that the potential advantages associated with the system's intense focus on safety improvement and its ability to implement its improvement program uniformly may have been outweighed by local considerations. While institutional programs may facilitate the ability of local managers to improve safety, they may not be targeted closely enough to the actual challenges of the workplace to make a difference alone.

We also found that characteristics of individuals influenced safety climate consistently across settings when controlling for other factors. Older age and greater seniority corresponded to more positive perceptions of safety climate, while working as a nurse or in an HHU was associated with more negative perceptions. These findings are consistent with studies showing that perceptions of safety climate differ by workgroup and management level (Pronovost et al. 2003; Sexton et al. 2006c; Singer et al. 2008a, 2009). In contrast, facility characteristics influenced safety climate differently in the U.S. and VA samples. Working in southern and urban facilities corresponded with higher PPR among VA employees and lower PPR in the U.S. sample. Other studies have found similarly mixed results regarding effects of geographical and structural characteristics within non-VA hospitals (Baldwin et al. 2004; Coburn et al. 2004; Loux, Payne, and Knott 2005; Longo et al. 2007). Also consistent with prior studies (Aiken et al. 2002; Stone et al. 2007; Weissman et al. 2007), we found that higher nurse staffing ratios were associated with lower PPR in U.S. hospitals.

Decomposition analysis examined the influence of (1) variation in the distribution of observed sample characteristics among personnel in an integrated network compared with other U.S. hospitals and (2) differential effects of sample characteristics in each group. The overall difference between the samples, that is, the influence of (1) and (2) together, was a 1.4 percentage point higher PPR for the VA. We hypothesized that variations in sample characteristics between settings would explain more of this difference in safety climate than would differences in effects of those sample characteristics. Our results do not support this hypothesis. Instead, it was the differential effects of sample characteristics that explained more of the difference in safety climate between U.S. and VA hospitals. The difference based on the distribution of all the VA sample characteristics compared with U.S. characteristics was negative, indicating that the VA would be expected to have a 0.77 percentage point lower PPR based on observed sample characteristics alone. The unexplained difference, indicating the differential effect of sample characteristics, was 2.2 percentage points higher PPR in VA than in U.S. hospitals. This second difference was driven primarily by two factors: region and location, both of which act in opposite directions on PPR in the U.S. and VA models, and by unobserved characteristics. Decomposition of the residual suggests that our model explained just 5.9 percent of the variation in the outcome measure. Future research should explore additional characteristics of hospitals and factors driving the effects of region and location in order to determine whether some modifiable factors may be involved that could provide leverage for change.

Our results suggest that characteristics of respondents and their work facilities influence safety-climate scores. Thus, in comparing safety climate among hospitals or over time in hospitals whose respondent characteristics may have changed, it is important to include known characteristics in analyses. Such longitudinal studies would also provide opportunity for research on how the effects of respondent characteristics on PPR change over time.

Results should be interpreted within the context of several limitations. This was a cross-sectional study; thus, we cannot make assertions about causality. We cannot explain the mechanisms underlying the effects of various factors on safety climate, nor can we differentiate the effect on safety climate of observed from unobserved characteristics in the unexplained component of the difference between samples. We also cannot rule out nonresponse bias as a factor in our results. The methodology in both settings aimed to maximize response rates while maintaining the voluntary and anonymous nature of the surveys. While the VA sample achieved a response rate similar to that of other studies of this type (Asch, Jedrziewski, and Christakis 1997; Jepson et al. 2005), the overall response rate in the U.S. sample was lower. We adjusted for nonresponse and sampling bias through the use of weights in our analysis; however, it is possible that results do not accurately represent the facilities or populations intended. A related issue is the representativeness of the hospitals in each sample. We used stratified random sampling in both settings, but because participation was voluntary, sampled facilities may differ from facilities in their respective populations in unanticipated ways. As noted, administration dates and recruitment and sampling strategies also differed slightly between the U.S. and VA samples. Although U.S. hospitals were recruited on the basis of size and region rather than PSI rates, those rates among the U.S. hospital sample did not differ from those of U.S. hospitals overall. In addition, within the U.S. hospital sample we found no difference when we compared overall mean PPR between over-sampled hospitals and the other hospitals in that sample. Finally, while our models included variables associated with safety climate in the literature, we were limited by the variables available in our datasets.

Nevertheless, the methodology employed in our study represents an advance over prior research. In particular, the decomposition analysis provides information about systematic differences in sample characteristics and about the effects of specific characteristics on safety climate in different settings. By achieving a more thorough understanding of what drives apparent differences in safety-climate survey results among hospitals, we can proceed more deliberately toward developing effective improvement interventions.
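For readers unfamiliar with the technique, the regression-based two-fold Oaxaca–Blinder decomposition (Blinder 1973; Oaxaca 1973) can be illustrated with a minimal numerical sketch. All data, covariates, and coefficient values below are synthetic and purely illustrative (they are not the study's variables or estimates); the sketch shows only how the raw mean difference between two samples splits exactly into a component explained by differences in sample characteristics and an unexplained component reflecting differential effects of those characteristics.

```python
import numpy as np

rng = np.random.default_rng(0)

def ols(X, y):
    # Ordinary least squares coefficients via a least-squares solve.
    return np.linalg.lstsq(X, y, rcond=None)[0]

# Hypothetical samples: columns are [intercept, tenure in years, urban indicator].
n = 500
def make_sample(beta, urban_share):
    tenure = rng.normal(10.0, 3.0, n)
    urban = rng.binomial(1, urban_share, n).astype(float)
    X = np.column_stack([np.ones(n), tenure, urban])
    y = X @ beta + rng.normal(0.0, 1.0, n)  # synthetic climate score
    return X, y

beta_a = np.array([70.0, 0.3, 1.0])   # illustrative effects, sample A
beta_b = np.array([69.0, 0.3, -1.0])  # illustrative effects, sample B
X_a, y_a = make_sample(beta_a, 0.6)
X_b, y_b = make_sample(beta_b, 0.8)

b_a, b_b = ols(X_a, y_a), ols(X_b, y_b)
xbar_a, xbar_b = X_a.mean(axis=0), X_b.mean(axis=0)

gap = y_a.mean() - y_b.mean()
explained = (xbar_a - xbar_b) @ b_b    # differences in sample characteristics
unexplained = xbar_a @ (b_a - b_b)     # differential effects of characteristics
# With an intercept in each regression, the identity holds exactly.
assert abs(gap - (explained + unexplained)) < 1e-9
print(gap, explained, unexplained)
```

Because each regression includes an intercept, the group mean of the outcome equals the mean of the covariates times the estimated coefficients, so the two components sum to the raw gap by construction; only the choice of reference coefficients (here, sample B's) is a modeling decision.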

The results presented suggest that continued efforts are needed to improve safety climate in hospitals. While participation in systems can provide some advantages in this regard, the large unexplained component of safety climate from the regression estimates suggests that other factors, such as hospitals' emphasis on creativity and innovation and their leaders' abilities to motivate, implement, and sustain improvement, may matter more.


Joint Acknowledgment/Disclosure Statement: Financial support for this research was provided by the VA Health Services Research and Development Service grant no. IIR 03-303-2. The authors would also like to acknowledge research support from Ms. Alyson Falwell.

Disclosures: None.

Disclaimers: None.

Supporting Information

Additional supporting information may be found in the online version of this article:

Appendix SA1: Author Matrix.

Please note: Wiley-Blackwell is not responsible for the content or functionality of any supporting materials supplied by the authors. Any queries (other than missing material) should be directed to the corresponding author for the article.


  • Aiken LH, Clarke SP, Sloane DM, Sochalski J, Silber JH. Hospital Nurse Staffing and Patient Mortality, Nurse Burnout, and Job Dissatisfaction. Journal of the American Medical Association. 2002;288(16):1987–93. [PubMed]
  • Asch D, Jedrziewski M, Christakis N. Response Rates to Mail Surveys Published in Medical Journals. Journal of Clinical Epidemiology. 1997;50(10):1129–36. [PubMed]
  • Baldwin LM, MacLehose RF, Hart LG, Beaver SK, Every N, Chan L. Quality of Care for Acute Myocardial Infarction in Rural and Urban US Hospitals. Journal of Rural Health. 2004;20(2):99–108. [PubMed]
  • Blinder AS. Wage Discrimination: Reduced Form and Structural Variables. Journal of Human Resources. 1973;8:436–55.
  • Coburn AF, Wakefield M, Casey M, Moscovice I, Payne S, Loux S. Assuring Rural Hospital Patient Safety: What Should Be the Priorities? Journal of Rural Health. 2004;20(4):314–26. [PubMed]
  • Cohen M, Kimmel N, Benage M, Hoang C, Burroughs T, Roth C. Implementing a Hospitalwide Patient Safety Program for Cultural Change. Joint Commission Journal on Quality and Safety. 2004;30(8):424–31. [PubMed]
  • Colla J, Bracken A, Kinney L, Weeks W. Measuring Patient Safety Climate: A Review of Surveys. Quality and Safety in Health Care. 2005;14(5):364–6. [PMC free article] [PubMed]
  • Flin R, Burns C, Mearns K, Yule S, Robertson E. Measuring Safety Climate in Health Care. Quality and Safety in Health Care. 2006;15(2):109–15. [PMC free article] [PubMed]
  • Gaba D, Singer S, Sinaiko A, Bowen J. Differences in Safety Climate between Hospital Personnel and Navy Aviators. Human Factors. 2003;45(2):173–85. [PubMed]
  • Gandhi T, Graydon-Baker E, Huber C, Whittemore A, Gustafson M. Closing the Loop: Follow-Up and Feedback in a Patient Safety Program. Joint Commission Journal on Quality and Patient Safety. 2005;31(11):614–21. [PubMed]
  • Hartmann C, Rosen A, Meterko M, Shokeen P, Zhao S, Singer S, Falwell A, Gaba D. An Overview of Patient Safety Climate in the VA. Health Services Research. 2008;43(4):1263–84. [PMC free article] [PubMed]
  • Hofmann D, Morgeson F. Safety-Related Behavior as a Social Exchange: The Role of Perceived Organizational Support and Leader–Member Exchange. Journal of Applied Psychology. 1999;84(2):286–96.
  • Hofmann DA, Mark B. An Investigation of the Relationship between Safety Climate and Medication Errors as Well as Other Nurse and Patient Outcomes. Personnel Psychology. 2006;59(4):847–69.
  • Hudson JL, Miller GE, Kirby JB. Explaining Racial and Ethnic Differences in Children's Use of Stimulant Medications. Medical Care. 2007;45(11):1068–75. [PubMed]
  • Jann B. The Blinder–Oaxaca Decomposition. ETH Zurich Sociology Working Paper No. 5 [accessed on June 9, 2009]. Available at
  • Jepson C, Asch D, Hershey J, Ubel P. In a Mailed Physician Survey, Questionnaire Length Had a Threshold Effect on Response Rate. Journal of Clinical Epidemiology. 2005;58(1):103–5. [PubMed]
  • Joint Commission on Accreditation of Health Care Organizations. Hospital Accreditation Standards. Oakbrook Terrace, IL: Joint Commission Resources; 2002.
  • Kirby JB, Taliaferro G, Zuvekas SH. Explaining Racial and Ethnic Disparities in Health Care. Medical Care. 2006;44(5, suppl):I64–72. [PubMed]
  • Longo DR, Hewett JE, Ge B, Schubert S. Hospital Patient Safety: Characteristics of Best-Performing Hospitals. Journal of Healthcare Management. 2007;52(3):188–204. discussion 04–5. [PubMed]
  • Loux S, Payne S, Knott A. Comparing Patient Safety in Rural Hospitals by Bed Count. In: Henriksen K, Battles JB, Lewin DI, editors. Advances in Patient Safety: From Research to Implementation. Vol. 1. Rockville, MD: Agency for Healthcare Research and Quality (AHRQ) and Department of Defense (DoD); 2005. pp. 391–404. [PubMed]
  • Makary M, Sexton J, Freischlag J, Holzmueller C, Millman E, Rowen L, Pronovost P. Operating Room Teamwork among Physicians and Nurses: Teamwork in the Eye of the Beholder. Journal of the American College of Surgeons. 2006;202(5):746–52. [PubMed]
  • Naveh E, Katz-Navon T, Stern Z. Treatment Errors in Healthcare: A Safety Climate Approach. Management Science. 2005;51(6):948–60.
  • Neal A, Griffin M. A Study of the Lagged Relationships among Safety Climate, Safety Motivation, Safety Behavior, and Accidents at the Individual and Group Levels. Journal of Applied Psychology. 2006;91(4):946–53. [PubMed]
  • Oaxaca R. Male–Female Wage Differentials in Urban Labor Markets. International Economic Review. 1973;14:693–709.
  • Pronovost P, Weast B, Holzmueller C, Rosenstein B, Kidwell R, Haller K, Feroli E, Sexton J, Rubin H. Evaluation of the Culture of Safety: Survey of Clinicians and Managers in an Academic Medical Center. Quality and Safety in Health Care. 2003;12(6):405–10. [PMC free article] [PubMed]
  • Rosen AK, Gaba DM, Meterko M, Shokeen P, Singer S, Zhao S, Labonte A, Falwell A. Recruitment of Hospitals for a Safety Climate Study: Facilitators and Barriers. Joint Commission Journal on Quality and Patient Safety. 2008;34(5):275–84. [PubMed]
  • Sexton J, Helmreich R, Neilands T, Rowan K, Vella K, Boyden J, Roberts P, Thomas E. The Safety Attitudes Questionnaire: Psychometric Properties, Benchmarking Data, and Emerging Research. BMC Health Services Research. 2006a;6(44) [PMC free article] [PubMed]
  • Sexton J, Holzmueller C, Pronovost P, Thomas E, McFerran S, Nunes J, Thompson D, Knight A, Penning D, Fox H. Variation in Caregiver Perceptions of Teamwork Climate in Labor and Delivery Units. Journal of Perinatology. 2006b;8:463–70. [PubMed]
  • Sexton J, Makary M, Tersigni A, Pryor D, Hendrich A, Thomas E, Holzmueller C, Knight A, Wu Y, Pronovost P. Teamwork in the Operating Room: Frontline Perspectives among Hospitals and Operating Room Personnel. Anesthesiology. 2006c;105(5):887–94. [PubMed]
  • Shen YC, Long SK. What's Driving the Downward Trend in Employer-Sponsored Health Insurance? Health Services Research. 2006;41(6):2074–96. [PMC free article] [PubMed]
  • Singer S, Falwell A, Gaba D, Baker L. Hospital Patient Safety Climate: Variation by Management Level. Medical Care. 2008a;46(11):1149–56. [PubMed]
  • Singer S, Gaba D, Falwell A, Lin S, Hayes J, Baker L. Patient Safety Climate in 92 US Hospitals: Differences by Work Area and Discipline. Medical Care. 2009;47(1):23–31. [PubMed]
  • Singer S, Meterko M, Baker L, Gaba D, Falwell A, Rosen A. Workforce Perceptions of Hospital Safety Culture: Development and Validation of the Patient Safety Climate in Healthcare Organizations Survey. Health Services Research. 2007;42(5):1999–2021. [PMC free article] [PubMed]
  • Singer SJ, Gaba DM, Geppert JJ, Sinaiko AD, Howard SK, Park KC. The Culture of Safety in California Hospitals. Quality and Safety in Health Care. 2003;12(2):112–8. [PMC free article] [PubMed]
  • Singer SJ, Lin S, Falwell A, Gaba D, Baker LC. Relationship of Safety Climate and Safety Performance in Hospitals. Health Services Research. 2008b;44(2, Part I):399–421. [PMC free article] [PubMed]
  • Snijders T, Bosker R. Multilevel Analysis: An Introduction to Basic and Advanced Multilevel Modeling. Thousand Oaks, CA: SAGE Publications; 1999.
  • Sorra J, Nieva V, Famolaro T, Dyer N. Hospital Survey on Patient Safety Culture: 2007 Comparative Database Report. (Prepared by Westat, Rockville, MD, under contract No. 233-02-0087, Task Order No. 18.) AHRQ Publication No. 07-0025. Rockville, MD: Agency for Healthcare Research and Quality; 2009.
  • Stone PW, Mooney-Kane C, Larson EL, Horan T, Glance LG, Zwanziger J, Dick AW. Nurse Working Conditions and Patient Safety Outcomes. Medical Care. 2007;45(6):571–8. [PubMed]
  • Thomas E, Sexton J, Helmreich R. Discrepant Attitudes about Teamwork among Critical Care Nurses and Physicians. Critical Care Medicine. 2003;31(3):956–9. [PubMed]
  • Vogus TJ, Sutcliffe KM. The Safety Organizing Scale: Development and Validation of a Behavioral Measure of Safety Culture in Hospital Nursing Units. Medical Care. 2007;45(1):46–54. [PubMed]
  • Weingart S, Farbstein K, Davis R, Phillips R. Using a Multihospital Survey to Examine the Safety Culture. Joint Commission Journal on Quality and Safety. 2004;30(3):125–32. [PubMed]
  • Weissman JS, Rothschild JM, Bendavid E, Sprivulis P, Cook EF, Evans RS, Kaganova Y, Bender M, David-Kasdan J, Haug P, Lloyd J, Selbovitz LG, Murff HJ, Bates DW. Hospital Workload and Adverse Events. Medical Care. 2007;45(5):448–55. [PubMed]
