OBJECTIVE: Researchers, government, and the press often rank jurisdictions according to public health indicators; however, measures of uncertainty rarely accompany these comparisons. To demonstrate the variability associated with rankings that use public health measures, the authors examined the uncertainty associated with ranks based on three common methods used to derive public health indicators: age-adjustment, calculations based on census estimates, and calculations based on survey data.

METHODS: The authors observed the effect of changing the standard population from the 1970 population to the 1997 population on rank-order lists of jurisdictions according to age-adjusted 1998 mortality rates. They used a Monte Carlo method to calculate confidence intervals (CIs) around ranks based on census estimates of 1998 infant mortality rates and on 1999 Behavioral Risk Factor Surveillance System (BRFSS) survey data on the prevalence of hypertension.

RESULTS: Changing the standard year from 1970 to 1997 shifted seven states by at least three rank-order positions; two states shifted five positions. CIs associated with ranking by infant mortality rates were broad, with a mean width of 16 ranks. CIs around ranks for the prevalence of hypertension were also wide, with a mean width of 18 ranks.

CONCLUSION: While ranking based on public health indicators is an attractive and popular way of presenting public health data, caution and close examination of the underlying data are needed for proper interpretation. Alternative methods, such as longitudinal analysis or comparisons with standards, may prove more useful.
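The Monte Carlo approach to rank uncertainty described in the METHODS section can be illustrated with a short sketch. This is not the authors' code; it is a minimal example assuming each jurisdiction's indicator has a point estimate and a standard error, and that sampling variability is approximately normal. All rates and standard errors below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical point estimates and standard errors for 10 jurisdictions
# (e.g., infant mortality rates per 1,000 live births)
rates = np.array([5.1, 6.3, 4.8, 7.0, 5.5, 6.1, 4.9, 6.8, 5.9, 5.2])
ses   = np.array([0.4, 0.5, 0.3, 0.6, 0.4, 0.5, 0.3, 0.6, 0.5, 0.4])

n_sims = 10_000
# Draw simulated rates from each jurisdiction's sampling distribution
draws = rng.normal(rates, ses, size=(n_sims, rates.size))

# Rank jurisdictions within each simulated replicate (1 = lowest rate)
ranks = draws.argsort(axis=1).argsort(axis=1) + 1

# 95% CI for each jurisdiction's rank across the simulations
lo, hi = np.percentile(ranks, [2.5, 97.5], axis=0)
mean_ci_width = (hi - lo).mean()
```

Jurisdictions whose point estimates are close together will typically show wide, overlapping rank CIs, which is the pattern the abstract reports for infant mortality and hypertension prevalence.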