To determine the rates of patient safety indicators (PSIs) in hospitalized children, the patient and institutional characteristics associated with their occurrence, and the degree of statistical difference derived from using three approaches to controlling for institution-level effects.
Pediatric Health Information System Dataset consisting of all pediatric discharges (<21 years of age) from 34 academic, freestanding children's hospitals for calendar year 2003.
The rates of PSIs were computed for all discharges. The patient and institutional characteristics associated with these PSIs were calculated. The analyses sequentially applied three increasingly conservative methods to control for institution-level effects: robust standard error estimation, a fixed effects model, and a random effects model. The degree of difference from a “base state,” which excluded institution-level variables, and between the models was calculated. The effects of these analyses on the interpretation of the PSIs are presented.
PSIs are relatively infrequent events in hospitalized children, ranging from 0 per 10,000 (postoperative hip fracture) to 87 per 10,000 (postoperative respiratory failure). Significant variables associated with PSIs included age (neonates), race (Caucasians), payor status (public insurance), severity of illness (extreme), and hospital size (>300 beds), all of which had higher rates of PSIs than their reference groups in the bivariable logistic regression results. The three different approaches to adjusting for institution-level effects demonstrated that there were similarities in both the clinical and statistical significance across each of the models.
Institution-level effects can be appropriately controlled for by using a variety of methods in the analyses of administrative data. Whenever possible, resource-conservative methods should be used in the analyses especially if clinical implications are minimal.
There have been an increasing number of descriptive analyses of pediatric medical errors since the release of the Institute of Medicine reports over 5 years ago (IOM 2000, IOM 2001, IOM 2004). These include single and multi-institution studies, studies of medication errors, and several broad characterizations of medical errors in hospitalized children. The incidence of medical errors in hospitalized children is estimated to range from 0.8 to 1.3 percent and is highly dependent upon the definitions and classifications used (Brennan et al. 1991; Localio et al. 1991; Smith, Langlois, and Buechner 1991; Rosen et al. 1992; Silber et al. 1992; Roos et al. 1995; Duke, Butt, and South 1997; Gawande et al. 1999; McCormick et al. 2000; Miller et al. 2001; Connors et al. 2001; Kaushal et al. 2001; Davis et al. 2002; Miller, Elixhauser, and Zhan 2003; Proctor et al. 2003; Slonim et al. 2003; Kanter, Turenne, and Slonim 2004; Miller and Zhan 2004; Sedman et al. 2005). The study of pediatric medical errors has been limited by the relatively small numbers of children hospitalized at individual institutions and the need for detailed prospective and retrospective reviews. One method of overcoming the methodological challenges of small sample sizes and the potential idiosyncrasies of data from a single institution is to use administrative datasets that contain large volumes of discharges from multiple institutions (McCormick et al. 2000; Miller et al. 2001; Miller, Elixhauser, and Zhan 2003; Proctor et al. 2003; Slonim et al. 2003; Kanter, Turenne, and Slonim 2004; Miller and Zhan 2004; Sedman et al. 2005).
Miller and colleagues developed an effective method for screening the large number of discharges in these datasets to identify potential medical errors (Miller et al. 2001; Miller, Elixhauser, and Zhan 2003; Miller and Zhan 2004). This method uses events known as patient safety indicators (PSIs). For pediatric patients, analyses of PSIs have included both community hospitals and Children's hospitals (Miller et al. 2001; Miller, Elixhauser, and Zhan 2003; Miller and Zhan 2004). However, when synthesized, the prior literature on medical errors in children during hospitalization is subject to two important caveats, both of which are informed by this study.
First, inpatient error estimates generated using the PSI method are likely to be higher in freestanding, academic Children's Hospitals where children with complex conditions receive their care, because children with more complex health care needs may be more prone to medical errors (Miller, Elixhauser, and Zhan 2003; Slonim et al. 2003). Therefore, a description of the types and frequencies of PSIs in a unique dataset of these institutions may help to prioritize efforts aimed at reducing their occurrence in specific subgroups of patients (Slonim et al. 2003). These associations and the incidence rates, however, may be misrepresented if the analytic approach inadequately considers the statistical methods used in the analysis.
The second caveat addresses the analytic approach and the methodological challenge that occurs when modeling a multi-institutional dataset (Normand, Glickman, and Gatsonis 1997; Snijders and Bosker 1999; McCulloch and Searle 2000; Williams 2000; Leyland and Goldstein 2001; Palta 2003; Loehlin 2004; Luke 2004; Skrondal and Hesketh 2004; Zaslavsky, Zaborski, and Cleary 2004). Patients being treated in the same hospital may be more similar to each other than those treated in other hospitals. Independence, one of the key assumptions for regression, is violated under these circumstances. Therefore, unmeasured hospital characteristics that introduce variability into the results must be appropriately controlled for. Several techniques can account for the “clustering” that exists in the dataset. Clustering represents the correlations associated with data that are organized at specific sites or levels. The techniques used to adjust for hospital clustering, which allow for measuring both patient- and hospital-level characteristics, are, as a group, referred to as hierarchical or mixed models (Goldstein 1987; Goldstein 1995; Gatsonis et al. 1995; Daniels and Gatsonis 1999; Landrum and Normand 1999; Landrum et al. 1999; Burgess and Lourdes 2000; Bryk and Raudenbush 2002; Hox 2002). The justifications for performing these models arise from empirical, statistical, and theoretical foundations, but their usefulness in practice remains questionable. The empirical justification arises from the possibility that the rates of PSIs differ at the individual hospital level (Goldstein 1987; Snijders and Bosker 1999; Williams 2000; Bryk and Raudenbush 2002; Palta 2003; Loehlin 2004; Luke 2004; Skrondal and Hesketh 2004). Second, a statistical justification arises from the fact that the cases are not independent, but are clustered by hospital and may have correlated errors.
Third, a theoretical justification would suggest that a multilevel model will more accurately estimate the extent to which both the patient-level and hospital-level characteristics influence PSI rates.
Consequently, we studied the incidence rates of PSIs in children hospitalized at a sample of academic Children's Hospitals and accounted for the effects of hospital “clustering” using three different statistical modeling techniques. We compared these models to a “base-state” that excluded institutional variables to determine the effect that each of these techniques may have on the characteristics associated with PSIs.
The Pediatric Health Information System (PHIS) dataset was used for these analyses. This dataset represents detailed hospital-based inpatient information from all discharges (n=385,157) from 34 independent, academic, free-standing, pediatric hospitals in the United States (PHIS). The participating institutions are affiliated with the Child Health Corporation of America (Shawnee Mission, KS), are noncompeting, and thus representative of a wide geographic distribution. The hospitals supply demographic, diagnostic and utilization data for the purposes of internal and external benchmarking. They are heterogeneous with respect to geographic location, bedsize, and average daily census. Data are submitted to PHIS and tested for reliability and validity before inclusion. The data warehouse function for PHIS is managed by Solucient, LLC (Evanston, IL).
The hospitals submit discharge-level data monthly from their internal data systems using a standardized discharge abstract. These data include the hospital identifier, medical record number, and admission and discharge dates, age, race, gender, payor, and up to 21 diagnoses and procedures. The data are then assigned an All Patient Refined Diagnostic Related Group (APR-DRG) severity level (3M Center, St. Paul, MN). The medical record numbers, billing numbers, physician identification, and zip codes are encrypted to protect the identity of the patients and assure compliance with federal regulations. These data are subjected to a number of reliability and validity checks before being processed. Data are accepted into the database once classified errors occur less frequently than a criterion threshold of 2 percent of a hospital's quarterly data (Solucient). If a hospital's quarterly data are unacceptable according to these standards, all of their quarterly data are rejected; however, these data can be resubmitted and reevaluated for inclusion at a later time (Solucient). During the study period, all data from each of the sites met the reliability and validity thresholds and were included for analysis.
All consecutive pediatric discharges from birth through age 21 years from participating hospitals between January 1, 2003, and December 31, 2003 were included. The unit of analysis for this investigation was the individual discharge. Patient readmissions were considered as separate admissions for the purpose of analysis. Patients with an obstetrical diagnosis code were excluded as many freestanding Children's Hospitals do not have active obstetrical services.
PSIs were developed by researchers at the Agency for Healthcare Research and Quality (AHRQ), Stanford University, and the University of California, Davis. The development and validation of these indicators was undertaken by the AHRQ and has been described in detail elsewhere (Miller, Elixhauser, and Zhan 2003). In summary, PSIs and the AHRQ software are a means of identifying medical errors in administrative data, such as PHIS. Algorithms using ICD-9-CM codes identify specifically coded occurrences in either the primary or secondary diagnosis fields of the dataset to derive the incidence (numerator) for a particular PSI. The number of discharges at risk of developing a particular PSI (denominator) is also determined by the software (Miller, Elixhauser, and Zhan 2003; Miller and Zhan 2004). For example, the PSI labeled “postoperative hematoma” is calculated by identifying those discharges with an ICD-9-CM code for postoperative hematoma as the numerator. The “risk pool” is derived from all surgical diagnoses as a postoperative hematoma can only occur when a surgical procedure is performed. PSIs and their incidence can then be determined with this approach (Miller, Elixhauser, and Zhan 2003; Miller and Zhan 2004; Sedman et al. 2005). Some of the definitions for the risk pool are controversial and may not be straightforward, particularly in pediatric patients. In addition, a quick inspection of the PSIs reveals that many are procedurally based and may not capture the broader problem of medical errors. Nonetheless, the PSIs have been adapted by Miller and colleagues as an approach to determining events compromising patient safety in the pediatric population and remain the prevailing paradigm for accomplishing this (Miller, Elixhauser, and Zhan 2003; Miller and Zhan 2004; Sedman et al. 2005). In addition, the use of PSIs and the AHRQ software permits comparisons across datasets. 
They therefore represent the established “state of the art” and provide a standardized methodology.
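The numerator/denominator logic described above can be sketched in a few lines of Python. The field names and the ICD-9-CM code used here are hypothetical illustrations only, not the actual AHRQ PSI specifications:

```python
# Sketch of the PSI rate calculation: each indicator has a numerator
# (discharges coded with the event) and a denominator (discharges "at
# risk"). Field names and codes below are invented for illustration.

def psi_rate_per_10000(discharges, event_codes, at_risk):
    """Rate of a PSI per 10,000 at-risk discharges.

    discharges  -- list of dicts with a 'diagnoses' list and a 'surgical' flag
    event_codes -- set of diagnosis codes that flag the indicator
    at_risk     -- predicate selecting the denominator ("risk pool")
    """
    pool = [d for d in discharges if at_risk(d)]
    if not pool:
        return 0.0
    events = [d for d in pool if event_codes & set(d["diagnoses"])]
    return 10000.0 * len(events) / len(pool)

# Toy example: a "postoperative hematoma" can only occur after surgery,
# so the risk pool is restricted to surgical discharges.
discharges = [
    {"diagnoses": ["998.12"], "surgical": True},   # hypothetical hematoma code
    {"diagnoses": ["486"], "surgical": True},
    {"diagnoses": ["486"], "surgical": False},     # excluded from risk pool
    {"diagnoses": ["493.90"], "surgical": True},
]
rate = psi_rate_per_10000(
    discharges,
    event_codes={"998.12"},
    at_risk=lambda d: d["surgical"],
)
```

One event among three at-risk surgical discharges yields a rate of roughly 3,333 per 10,000 in this toy data; the real AHRQ software applies far more detailed inclusion and exclusion rules.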
The primary outcome measure for analyses described below was PSIs as defined by Miller using the PSI methodology (Miller, Elixhauser, and Zhan 2003; Miller and Zhan 2004; Sedman et al. 2005). We determined the association of specific PSIs with the following independent variables obtained directly from existing data elements in the PHIS dataset or derived from PHIS data elements: (1) patient characteristics, (2) utilization characteristics, and (3) hospital characteristics. For each discharge with a PSI, the demographic and utilization variables were identified. The patient-level variables included age, race, gender, payor, diagnosis, disposition, and survival. The utilization variable was length of stay (LOS). The institution-level variables included geographic location (e.g., Eastern United States), bedsize, and average daily census.
As a means of understanding the institutional risk factors associated with the occurrence of a PSI, bivariable analyses examining the institution-level characteristics of interest for their association with a PSI were performed first. All of the outcomes were displayed as rates to allow for ease of comparison among and between groups and for the computation of odds ratios for patient-level characteristics. Next, a series of multivariable logistic regression models, adjusting for the clustering of patients within hospitals, were performed to test the net effects of patient characteristics clustered at the hospital level. To accomplish this, a model that did not include institution-level variables was performed first to establish a base-case scenario. Then, three separate approaches were tested and compared with each other and with the base case. Statistical analyses were performed using Stata (StataCorp LP, College Station, TX).
First, the robust standard error estimating equation (Huber–White) method as implemented by the “robust” and “cluster” options under the Stata command “logistic” was performed (http://www.stata.com). This commonly used and relatively straightforward approach uses more conservative estimates of standard errors to account for the variable contributions of institution-level effects on the outcome. This approach “linearizes” the between-cluster variance estimates of the hospitals. The estimates obtained are similar to those obtained by using generalized estimating equations (GEE) (Palta 2003). Nonetheless, the theoretical danger of this method is that it could continue to overestimate the coefficient and still underestimate the true standard errors because of a failure to appropriately account for the institutional level covariates. Furthermore, this aggregated model will confound both within and between effects at both the patient and hospital level.
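The intuition behind the cluster-robust (“sandwich”) correction can be shown in the simplest possible setting, an intercept-only model, where the estimator reduces to summing residuals within each cluster before squaring. This is an illustrative pure-Python sketch, not a re-implementation of Stata's `robust`/`cluster` machinery, which also applies finite-sample corrections omitted here:

```python
import math

def clustered_se_of_mean(values, clusters):
    """Cluster-robust (Huber-White "sandwich") standard error of a mean.

    Residuals are summed within each cluster (hospital) before squaring,
    so correlated errors inside a hospital inflate the variance estimate
    instead of canceling, as they would under the i.i.d. assumption.
    """
    n = len(values)
    mean = sum(values) / n
    cluster_sums = {}
    for v, g in zip(values, clusters):
        cluster_sums[g] = cluster_sums.get(g, 0.0) + (v - mean)
    var = sum(s * s for s in cluster_sums.values()) / (n * n)
    return mean, math.sqrt(var)

# Two hospitals whose patients' outcomes are shifted as a group: the
# clustered SE exceeds the naive SE because within-hospital residuals
# share the same sign.
values = [1.0, 1.2, 1.1, 0.1, 0.2, 0.0]
clusters = ["A", "A", "A", "B", "B", "B"]
mean, se = clustered_se_of_mean(values, clusters)
```

In this toy data the mean is 0.6 and the clustered SE is about 0.35, larger than the naive i.i.d. estimate, which is exactly the conservative behavior the text describes.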
Second, we performed a fixed-effects model that included an indicator variable for 33 of the 34 hospitals in the regression equations. This approach allowed us to control for the effects of the institution on the outcome of interest. In addition, this approach allowed us to see how patient-level effects may or may not change while controlling for the fixed effects of hospitals. This approach of including hospital-linked indicator variables essentially controls for any fixed yet unmeasured hospital differences in the model. While this approach represents an improvement over the aggregated model because it allows one to estimate within hospital effects, it may still be suboptimal. The use of an indicator variable may not provide “appropriate” standard errors. Furthermore, there is a loss of all between-hospital effects and this approach may not provide appropriate estimation of the individual level variation.
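The fixed-effects design can be sketched as building indicator (dummy) columns, one per hospital except the reference, so that 34 hospitals yield 33 columns. The hospital identifiers below are hypothetical:

```python
def hospital_dummies(hospital_ids, reference):
    """Indicator (dummy) columns for every hospital except the reference.

    Each discharge row carries a 1 in the column of its own hospital,
    absorbing fixed but unmeasured hospital-level differences into the
    regression intercepts.
    """
    levels = sorted(set(hospital_ids))
    cols = [h for h in levels if h != reference]
    rows = [[1 if h == c else 0 for c in cols] for h in hospital_ids]
    return cols, rows

# Toy example: 4 hospitals -> 3 dummy columns (the study used 34 -> 33).
ids = ["h1", "h2", "h3", "h4", "h2"]
cols, rows = hospital_dummies(ids, reference="h1")
```

Rows from the reference hospital are all zeros; every other hospital gets its own column, which is how any fixed, unmeasured hospital difference is controlled for.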
Finally, we performed a mixed effects model, using the xtlogit command of Stata, to account not only for the institution but also for the random contribution of an institution's characteristics to the outcome of interest (http://www.stata.com). With this approach, within- and between-hospital-level effects are accounted for by the introduction of two error terms, one representing the patient level and one representing the institution level, in the regression equation. These error terms represent “unmeasured” variation, as the “measured” component is already included in the equation. Increasingly complex modeling can account for additional levels of complexity by adding further error terms to the equation, each accounting for the unmeasured errors associated with a new level of analysis that might accompany the fixed effects. Increasingly conservative standard error estimates are achieved when the random effects for additional levels are considered, and because the hospitals are treated as a random variable, the results of the model are typically more generalizable to other patient cohorts than those of fixed-effects models.
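The data-generating process assumed by the random-intercept model can be made concrete with a small simulation: each hospital draws its own unmeasured deviation u_j, and patient-level noise enters through the Bernoulli outcome. This sketches only the model's assumptions, not Stata's xtlogit estimator, and all parameter values are invented:

```python
import math
import random

def simulate_random_intercept_logit(n_hospitals, n_per, beta0, sigma_u, seed=0):
    """Simulate binary outcomes from a random-intercept logistic model:

        logit(p_ij) = beta0 + u_j,   u_j ~ Normal(0, sigma_u**2)

    u_j is the hospital-level error term described in the text; the
    patient-level error enters through the Bernoulli draw.
    """
    rng = random.Random(seed)
    data = []
    for j in range(n_hospitals):
        u_j = rng.gauss(0.0, sigma_u)  # unmeasured hospital-level deviation
        p = 1.0 / (1.0 + math.exp(-(beta0 + u_j)))
        for _ in range(n_per):
            data.append((j, 1 if rng.random() < p else 0))
    return data

# Hypothetical parameters chosen so events are rare, loosely echoing the
# ~1 percent PSI rate reported in the study.
data = simulate_random_intercept_logit(
    n_hospitals=34, n_per=100, beta0=-4.6, sigma_u=0.5
)
overall_rate = sum(y for _, y in data) / len(data)
```

Fitting such a model recovers both the fixed coefficients and the variance of the hospital intercepts, which is what makes the estimates generalizable beyond the 34 sampled hospitals.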
Each of these three approaches has strengths and weaknesses associated with its execution, and comparing results from the first approach (least conservative) to the third approach (most conservative) (Goldstein 1987; Gatsonis et al. 1995; Goldstein 1995; Normand, Glickman, and Gatsonis 1997; Gatsonis 1998; Daniels and Gatsonis 1999; Landrum and Normand 1999; Landrum et al. 1999; Snijders and Bosker 1999; Burgess and Lourdes 2000; McCulloch and Searle 2000; Williams 2000; Leyland and Goldstein 2001; Bryk and Raudenbush 2002; Hox 2002; Palta 2003; Loehlin 2004; Luke 2004; Skrondal and Hesketh 2004; Zaslavsky, Zaborski, and Cleary 2004) provides an improved understanding of how institution-level effects may be operative in the occurrence of a PSI.
The study uses de-identified data and has been granted an exempt status from the Institutional Review Board.
Table 1 provides the rate of PSI events by PSI group for the 385,157 discharges in 34 academic Children's Hospitals in 2003. A total of 4,939 PSI events occurred during these hospitalizations. The number of discharges varied by PSI based upon the inclusion criteria for each PSI. The number of events in a PSI grouping ranged from 0 (postoperative hip fracture) to 1,395 (infection due to medical care). The rate of PSI events ranged from 0 (postoperative hip fracture) to 87 (postoperative respiratory failure) per 10,000 discharges.
“Postoperative Respiratory Failure” and “Failure to Rescue” were the most common reported PSIs with 87 and 80 events per 10,000 discharges, respectively (Table 1). Other more common PSIs included infections due to medical care (44 events per 10,000), decubitus ulcer (37 events per 10,000), and postoperative deep venous thrombosis or pulmonary embolism (35 events per 10,000).
Table 2 examines the bivariate relationships between the occurrence of a PSI and individually considered patient and organizational-level characteristics. There were significant differences in the rate of PSIs among discharges considered by patient age, race, payor, severity, disposition, and outcome (Table 2). Neonates had the highest rate of PSIs (1.8 percent), while preschool (ages 1–5 years) and school age (6–12 years) cases had significantly lower odds of experiencing a PSI (odds ratio [OR] 0.58 and 0.61, respectively). Black cases were 16 percent less likely than white cases to experience a PSI, but there were no differences in other racial groups. Privately insured cases and self-pay cases were less likely to experience a PSI during hospitalization than those cases that were government insured (OR 0.77 and 0.52, CI 0.65–0.91 and 0.40–0.67, respectively, both p<.05). There was a linear trend between illness severity and PSIs, with increasing severity being associated with a greater likelihood of events (Cochran–Armitage Test for Trend, p<.01). Cases discharged to home were less likely to have experienced a PSI than those who were transferred to another facility, received home care following discharge, or had other discharge needs (Table 2). Larger organizations had a higher risk of a PSI during a patient's hospitalization. Specifically, hospitals with >300 beds were more likely to have patients that experienced a PSI than those with <200 beds (OR 1.55, CI 1.20–1.99; p<.01). However, the “busyness” of a hospital as measured by average daily census did not appear to affect PSI occurrences. To test for an association between bedsize and average daily census, we tested their correlation and found that they were highly related to one another (r2=0.83, p<.001). Because average daily census was unrelated to PSIs in the bivariate analyses and to avoid multicollinearity in the model, we removed it from further analysis.
Patients discharged from hospitals in the North Central region had a significantly lower likelihood of a PSI than patients discharged from hospitals in the Northeast (OR 0.66; CI 0.51–0.86), although other geographic regions were not statistically different in their rates of occurrence.
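The odds ratios and Wald confidence intervals reported in these bivariable comparisons can be computed directly from a 2x2 table. The counts below are invented for illustration and are not taken from the study data:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and 95% Wald confidence interval from a 2x2 table.

        a = exposed with PSI,    b = exposed without PSI
        c = reference with PSI,  d = reference without PSI
    """
    or_ = (a * d) / (b * c)
    se_log = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(or_) - z * se_log)
    hi = math.exp(math.log(or_) + z * se_log)
    return or_, lo, hi

# Hypothetical counts loosely echoing the bedsize comparison: 150 PSI
# events among 12,000 discharges at >300-bed hospitals vs. 90 among
# 11,000 at <200-bed hospitals.
or_, lo, hi = odds_ratio_ci(150, 12000 - 150, 90, 11000 - 90)
```

With these invented counts the OR comes out above 1 with a CI excluding 1, the same qualitative pattern the text reports for larger hospitals.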
After examining the bivariate relationships between PSIs and various patient and hospital characteristics, the next important step was to investigate the associations in a multivariable model. This approach was used to determine which of the independent variables were associated with the occurrence of a PSI. There were statistically significant between-hospital differences in the likelihood of a PSI, with the odds ratios ranging from 0.23 to 1.22. However, hospital alone did not meaningfully explain variation in the likelihood of a PSI (R2=0.01).
Table 3 displays the results of the “base case” and three other models: a robust standard error estimation model, a fixed-effects model, and a random-effects model, with the results being virtually identical. This implies that the different approaches nearly equivalently adjusted for the effects of clustering on variance estimates at the hospital level related to the occurrence of a PSI. A multivariable model without the institutional covariates (base case) was added for reference to allow for a fair comparison. The relationship of age to the occurrence of a PSI persisted after adjustment, with all age groups having a higher likelihood of experiencing a PSI than neonates. Infants, preschoolers, and school age children had ORs for a PSI of 1.76, 1.63, and 1.76, respectively (Table 3). Adolescents were twice as likely as neonates to have an event (OR: 2.12; CI: 1.66–2.71). As in the bivariable analysis, African American patients continued to be at lower risk of PSIs than white patients. Payor status in the “Other/Self-pay” group was protective compared with public insurance and private insurance. Severity level was also independently related to a higher risk of a PSI. For patients who died, there was still a higher risk of experiencing a PSI (OR: 7.3, CI: 5.96–8.90, Table 3).
Importantly, regardless of the method used, the relationships at the hospital level were the same. There were some notable differences when compared with the unadjusted models. For example, hospital bedsize of >300 beds now had a higher likelihood of being associated with a PSI (Table 3). Further, after controlling for hospital characteristics, the North Central, South, and West regions become more similar to one another than was true in the bivariable analysis; however, these three regions are still relatively spared compared with the Northeast (Table 3).
These data provide some of the first results using hierarchical models for the investigation of the institutional associations with medical errors in children occurring during hospitalization. Although medical errors occur in a variety of settings, hospitals represent an environment where their description is particularly important. While medical errors recorded as discharge diagnoses could have occurred outside the hospital and led to the admission, available data suggest that this is generally not the case (Slonim et al. 2003). We found that PSIs are relatively rare occurrences for children hospitalized at academic pediatric institutions, occurring in approximately 1 percent of discharges. This research provides estimates regarding the incidence of medical errors using administrative data that are similar and clinically plausible given the unique subset of hospitalized pediatric patients analyzed (McCormick et al. 2000; Miller, Elixhauser, and Zhan 2003; Slonim et al. 2003; Kanter, Turenne, and Slonim 2004; Miller and Zhan 2004; Sedman et al. 2005).
The use of an administrative dataset provides extensive data on hospital encounters, patient characteristics, organizational structure, and resource utilization associated with the outcomes of interest. Their analysis has informed discussion of topics ranging from injury surveillance to resource utilization and health policy. Furthermore, Miller and colleagues have presented an organized, coherent, and reproducible framework for the study of medical errors using administrative data (Miller, Elixhauser, and Zhan 2003; Miller and Zhan 2004; Sedman et al. 2005), which was applied in this study. The analyses performed here contribute to the ongoing body of research on medical errors in content, as described above, but also in method.
This approach to understanding PSIs during hospitalization at freestanding academic Children's Hospitals is important. There are reasons to believe that children treated at this type of institution are at higher risk. A large sample size that allows the identification of risk factors in specific pediatric subgroups, like preschool and school age children, is helpful because the occurrence of a medical error is a particularly rare event. We found that the larger the institution, the more likely was the occurrence of a PSI. Because prior work on medical errors for adults and children routinely demonstrates that academic institutions have higher rates of medical errors than nonacademic institutions due to their program of care for more complicated patients, this finding could reflect a referral bias for larger academic settings. However, it is important to remember that this dataset represents only freestanding, academic Children's Hospitals and that this effect persists after adjustment for institution-level effects. Interestingly, this work demonstrates a higher rate of PSI events in hospitals located in the Northeast. None of the prior work in this field has identified a geographic region that was more prone to errors during hospitalization (Miller, Elixhauser, and Zhan 2003; Miller and Zhan 2004; Sedman et al. 2005).
In controlling for hospital-level characteristics that may affect the occurrence of a PSI, we performed three different methodological approaches from simple to more complex to be able to identify the sensitivity of the different methods to detect institutional influences on the occurrence of a PSI. Overall, each approach would be expected to increasingly account for the multiple levels of data and provide more conservative estimates of the standard errors.
First, we used robust standard error estimation. Second, we analyzed the results using a fixed-effects model that accounted for the characteristics of the institutions by using indicator variables in the logistic regression equations. Finally, a random effects model was performed by using the xtlogit command. Most notably, controlling for the effects of the institution by any of these three methods led to nearly uniform results. This is important because each subsequent approach is meant to account for a greater degree of the institutional variance. As these institutions are similar in their scope of practice, the results of these analyses may be particularly uniform. Furthermore, the increasingly complex analyses did little to affect the clinical meaningfulness of the findings.
The use of an administrative dataset like PHIS has several limitations including the underestimation of hospital reported medical errors because of occurrences that take place as an outpatient or after discharge, the potential for information bias due to the misclassification of clinical information, and errors in data entry due to nonclinical coders. While there is no statistical manipulation of data that can eliminate such problems, we were encouraged by the use of a standardized methodology for defining and measuring these PSIs and a dataset with strict validity and reliability standards that we hope has improved our ability to provide meaningful results.
Additionally, because hospital participation is voluntary, a selection bias may be imposed, as hospitals with a particular focus on quality improvement may be more likely to participate in the collaborative, thus biasing the results towards a lower occurrence of PSIs. Finally, the potential for inadequate adjustment for patient-level or hospital-level case mix is always present. To address this limitation, we used the APR-DRG methodology. However, it is possible that unexplained variation remains at the patient level with this approach. Our approach using hierarchical methods was proposed to account for organizational-level differences associated with the clustering of patients at institutions. However, despite these attempts, it is still possible that we have inadequately controlled for the interaction effects of the hospital on the outcome of interest.
This descriptive study does not, of course, constitute, or even directly lead to, interventions to reduce medical errors or PSI rates. However, we hope that it will contribute further to the descriptive epidemiology of medical errors in the large and potentially vulnerable group of pediatric patients hospitalized at academic Children's hospitals. Further, insofar as consistent associations have been identified in pediatric groups at risk of medical errors, intervention efforts might target these groups. These analyses have also provided additional clarity around the variety of methods used to explain organization-level variance, methods that are relevant to other analyses using administrative data for the study of medical errors and public health problems.
AHRQ K08 HS 14009-01 (Slonim and Turenne)
AHRQ K08 HS 13179-01 (Marcin)
Data were presented in part: Doctoral Dissertation Defense, July 29, 2005