Analysis of hospital quality scores shows that most hospitals are of similar quality and that some of the quality measures are excellent indicators of hospital quality. Looking at the histogram of hospital PRIDIT scores, we can see that the distribution of hospitals is centered on zero and that the numbers of hospitals with positive and negative scores (corresponding to better and worse hospitals, respectively) are about equal. Additionally, the tails of the distribution are very small, with a kurtosis of −0.23.
The interpretation of this result is that there are very few hospitals of extreme positive or negative quality. This is broadly consistent, though not directly comparable, with the results of Landrum, Normand, and Rosenheck (2003) and Werner and Bradlow (2006). Using PRIDIT, there is no reason a priori to expect hospital scores to be centered on zero and tightly distributed rather than, say, positively skewed (as they would be if most hospitals were of high quality and a few were of relatively poor quality). Prior studies using PRIDIT to investigate fraud among individuals found just such a result: most individuals were honest, but a few were very likely committing fraud (Brockett et al. 2002). The distribution of hospital scores therefore suggests that most people who have access to a hospital are receiving a common quality of treatment.
[Figure: Histogram of Hospital PRIDIT Scores]
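The scoring that produces these hospital scores can be sketched as follows: PRIDIT applies a RIDIT transformation to each binary or ordinal quality measure and then takes the first principal component of the transformed data, whose loadings are the measure weights and whose projections are the hospital scores. A minimal sketch of the Brockett et al. (2002) technique (function names, the correlation-matrix choice, and the sign convention are our own, not the authors' exact procedure):

```python
import numpy as np

def ridit_scores(x):
    """RIDIT-transform one ordinal/binary column: each category is scored
    P(lower categories) - P(higher categories), so scores have mean zero."""
    vals, counts = np.unique(x, return_counts=True)
    p = counts / counts.sum()
    cum_below = np.concatenate(([0.0], np.cumsum(p)[:-1]))
    cum_above = 1.0 - np.cumsum(p)
    scores = cum_below - cum_above
    return scores[np.searchsorted(vals, x)]

def pridit(data):
    """data: (n_hospitals, n_measures) array of quality measures.
    Returns (hospital scores, measure weights)."""
    F = np.column_stack([ridit_scores(col) for col in data.T])
    # Weights = leading eigenvector of the correlation matrix of the
    # RIDIT-transformed measures (first principal component).
    corr = np.corrcoef(F, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(corr)
    w = eigvecs[:, -1]
    if w.sum() < 0:  # sign convention (our assumption): positive = better
        w = -w
    return F @ w, w
```

Because RIDIT scores are mean-zero by construction, the resulting hospital scores are centered on zero mechanically; what the data determine is the shape of the distribution around zero, which is the point discussed above.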
There is, however, a distinction between normal practice and best practice, as shown by a small number of the quality measures. We report the weights associated with each measure, as well as their rank in absolute importance (on both an individual and a "binned" basis), in columns (3), (4), and (5) of the table. The top six measures are those with weights above 0.575. The PRIDIT weights are multiplicative (Brockett et al. 2002), so the best measure (patients given assessment of left ventricular function) is twice as good a quality indicator as the 16th best measure (patients given ACE inhibitor or ARB for left ventricular systolic dysfunction). This is the most important function of the weights in the PRIDIT analysis: they point to areas where improvement in Hospital Care measures will lead to the greatest improvement in hospital quality, a critical piece of knowledge in a resource-constrained world. The concept of "best practice" is consistent with the literature on high volume hospitals (HVHs), which indicates that these facilities are much less prevalent than non-HVHs, especially low volume hospitals (LVHs), and that they produce superior outcomes. For instance, Dudley et al. (2002) attributed 602 deaths in California to the use of LVHs rather than HVHs and showed that LVHs are much more prevalent.
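The multiplicative reading of the weights, and the two rank columns, can be illustrated with a toy calculation (the weight values and the bin size below are invented for illustration; the paper's actual weights appear in its table):

```python
import numpy as np

# Hypothetical PRIDIT weights for five measures (illustrative values only)
weights = np.array([1.10, 0.55, 0.80, -0.30, 0.60])

# Because the weights enter the score multiplicatively, the ratio of two
# weights is the relative strength of the two measures as quality signals:
relative_importance = weights[0] / weights[1]  # measure 0 is 2x measure 1

# Rank by absolute weight (the "individual" rank column), then group the
# ranks into coarse bins (the "binned" rank column; bin size assumed here).
order = np.argsort(-np.abs(weights))           # best measure first
ranks = np.empty_like(order)
ranks[order] = np.arange(1, len(weights) + 1)  # 1 = most important
bins = np.ceil(ranks / 2).astype(int)          # hypothetical bin size of 2
```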
We also find that structural factors are, to varying degrees, important indicators of hospital quality. Teaching hospitals are in general of higher quality than nonteaching hospitals, and the more teaching a hospital does, the higher its quality; this is our contribution to the ongoing debate over whether teaching hospitals' care is of higher quality and whether the effect is monotonic (see, e.g., Papanikolaou, Christidi, and Ioannidis 2006). We also find that privately owned and government-run hospitals are of higher quality; the omitted category, not-for-profit hospitals, is therefore associated with lower quality. In contrast, Geweke, Gowrisankaran, and Town (2003) found that "… public hospitals have the lowest quality," while their finding that "there are no definitive comparisons among ownership categories" is consistent with our finding that ownership type is less important than teaching status and many process measures. In addition, we find that acute care hospitals, accredited hospitals, and those offering emergency services are of higher quality, none of which is surprising, because achieving any of these characteristics requires significant investment in services.
We also report an alternative specification, in which we assess quality using only the clinical measures, in columns (6), (7), and (8). The first numbers to compare are the eigenvalues reported in columns (3) and (6). In PCA, the higher the eigenvalue, the better the discriminatory power of the measures used, so that when using PRIDIT, a higher eigenvalue corresponds to a higher chance of obtaining the true weights. In this study, using all 28 variables is superior to using the 20 clinical variables alone. The measure with the biggest change, a decrease, is patients given initial antibiotic(s) within 4 hours after arrival for pneumonia, whose weight falls from 0.298 to 0.166 and whose rank drops from 17th to 24th. The second biggest change is also a decrease, of 0.086, for patients given assessment of left ventricular function, although that measure remains the most important. There must be a correlation between the variation in hospital type, hospital ownership, or academic hospital type and these two process measures such that the inclusion of the structural characteristics improves our discriminatory power while making the two process measures less important indicators of quality.
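The eigenvalue comparison can be mimicked on simulated data: the leading eigenvalue of the correlation matrix of the measures summarizes how much a single common quality factor explains, so a larger value means sharper discrimination. A sketch under invented assumptions (the factor structure, loadings, and sample size below are illustrative, not the paper's data):

```python
import numpy as np

def leading_eigenvalue(X):
    """Leading eigenvalue of the correlation matrix of X: the PCA summary
    of how much one common factor explains across the measures."""
    corr = np.corrcoef(X, rowvar=False)
    return float(np.linalg.eigvalsh(corr)[-1])  # eigvalsh sorts ascending

# Invented example: 28 measures sharing one latent "quality" factor,
# of which the first 20 stand in for the clinical measures.
rng = np.random.default_rng(1)
common = rng.normal(size=(500, 1))              # latent hospital quality
X = 0.6 * common + rng.normal(size=(500, 28))   # noisy observed measures

ev_all = leading_eigenvalue(X)                  # all 28 variables
ev_clinical = leading_eigenvalue(X[:, :20])     # clinical subset only
# In this construction the 28-variable eigenvalue exceeds the 20-variable
# one: adding variables that load on the same factor raises the leading
# eigenvalue, consistent with the full set discriminating better.
```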