As the number of diagnoses of chronic conditions increased in individual patients, there was an associated increased risk of death. However, as the number of diagnoses increased among geographically defined populations (ie, across quintiles of diagnosis frequency), there was little relationship with population-based mortality. These apparently paradoxical findings were explained by a third observation: among patient subgroups with a given number of chronic conditions, there was a consistent stepwise decrement in case fatality as diagnosis frequency increased.
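The arithmetic behind this resolution can be sketched with invented numbers (an illustrative decomposition, not data from the study): population-based mortality is approximately the product of diagnosis frequency and case fatality, so if mortality is stable while diagnosis frequency rises across quintiles, the implied case fatality must fall in step.

```python
# Illustrative only: all numbers below are hypothetical, not study data.
# Decomposition: deaths/population = (diagnosed/population) x (deaths/diagnosed),
# ie, population mortality = diagnosis frequency x case fatality.

population_mortality = 0.05  # held constant across quintiles (the stable-mortality finding)
diagnosis_frequency = [0.10, 0.15, 0.20, 0.25, 0.30]  # rising across quintiles (hypothetical)

# With mortality fixed, the implied case fatality declines stepwise as frequency rises.
case_fatality = [population_mortality / f for f in diagnosis_frequency]
for f, cf in zip(diagnosis_frequency, case_fatality):
    print(f"diagnosis frequency {f:.2f} -> implied case fatality {cf:.3f}")
```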
As with all observational data, causality cannot be directly tested. Our analysis could be limited by residual confounding, ie, by variables that could explain both increased diagnosis frequency and declining case fatality. The conventional explanation for our findings would be that the geographic variation in diagnosis frequency reflects underlying differences in disease burden, ie, regions with high diagnosis frequencies must have sicker patients. However, this explanation fails to account for the stability of population-based mortality across quintiles of diagnosis frequency.
Another explanation would suggest that regions with high diagnosis frequencies are more effective in treating sick patients, thereby reducing mortality rates that otherwise would have been substantially increased. While the population of patients in the lowest quintile of diagnosis frequency may have fewer chronic conditions, those patients also may have poor access to health services (eg, fewer specialists, longer wait times), leading to increased case fatality. Conversely, the population in the highest quintile of diagnosis frequency may have more chronic conditions yet better access to services (and perhaps more experienced physicians), leading to decreased case fatality. However, to produce the observed pattern of reductions in case fatality in a stepwise relationship, another condition would have to be met: access and the ability to provide effective care must be directly related to diagnosis frequency. Given the required conditions, the conventional explanation is not particularly parsimonious.
An alternative explanation may be that geographic variation in diagnosis frequency substantially reflects the intensity of observation. This is consistent with the associations we report between diagnosis frequency and measures of physician encounters and diagnostic testing. It also provides a more parsimonious account of our case-fatality findings; ie, if diagnosis frequency reflects intensity of observation, then the pattern of case fatality we observed here would be expected: more testing and more opportunities to make diagnoses may translate into the typical patient given a diagnosis being less sick. The finding of Song et al,1 that the number of diagnoses accumulated by migrating Medicare beneficiaries is associated with the location to which they moved, provided evidence from a natural experiment that supports this hypothesis.
Our analysis has a number of limitations. First, although Medicare claims are the most complete population-based data available in the United States, they are not entirely complete. Specifically, beneficiaries enrolled in plans outside of fee-for-service (plans that receive capitated payments from Medicare, such as Medicare Advantage) are not included in claims data. Although the proportion of beneficiaries enrolled in these plans is not correlated with diagnosis frequency across HRRs (in fact, enrollment is most common in the very low and very high quintiles), the possibility of differential selection raises the question of whether the relationship we observed would be present in the entire Medicare population. This highlights the importance of developing mechanisms to capture data from capitated plans3 to foster population-based analyses.
Second, the ability of the logistic regression models to adequately isolate the effect of each individual condition from others may be limited. To address this concern, we repeated the analysis on the subset of patients with no other confounding diagnoses: those diagnosed with only 1 chronic condition. This analysis also showed lower case fatality in the very high quintile of diagnosis frequency for each of the 9 chronic conditions.
Third, the accuracy of coded diagnostic data is open to question. If inaccuracies are random and not associated with region, they would not affect our findings. Coding practices, however, could vary across regions. It is possible that coding is relatively incomplete in the lowest quintile and relatively complete in the highest quintile. Alternatively, these differences could be purposeful, that is, the result of “gaming” efforts to increase reimbursements (or improve apparent quality) in the highest quintile.4,5
Although we have no evidence that this is the case, such practices could explain our results.
The frequency of diagnoses reported in claims data is routinely used in methods for risk adjustment in comparative effectiveness research,6,7 in the evaluation of readmissions following hospitalization,8,9 and in paying insurance plans under the Medicare Advantage program.10
If diagnosis is not solely an attribute of underlying disease burden, adjustments based on frequency of diagnosis may introduce bias into efforts to compare outcomes, pay for health care, and assess the extent of geographic variation in health care delivery. On the other hand, if more diagnoses (and more frequent encounters and diagnostic testing as well as greater spending) improve outcomes, then standard methods of risk adjustment may provide a more accurate comparison of effectiveness and efficiency. Future research must further evaluate the contribution of the process of observation to diagnosis frequency and explore mechanisms to better measure disease burden.
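The risk-adjustment concern can be made concrete with a minimal simulation (invented parameters and a hypothetical `simulate` helper, not the study's methods): two regions with identical true disease burden, and therefore nearly identical mortality, differ only in how many observation-driven diagnoses they record. A risk model keyed to diagnosis counts would expect more deaths in the higher-coding region, flattering its adjusted outcomes.

```python
import random

random.seed(0)

# Hypothetical simulation, not study data: two regions share the same true
# disease burden; region B records extra "observation-driven" diagnoses.
def simulate(extra_dx, n=10_000):
    deaths = 0
    dx_total = 0
    for _ in range(n):
        severity = random.randint(0, 3)                 # true burden, identical across regions
        recorded_dx = severity + (random.random() < extra_dx)  # diagnoses actually coded
        deaths += random.random() < 0.05 * severity     # death depends on true burden only
        dx_total += recorded_dx
    return deaths / n, dx_total / n

mort_a, dx_a = simulate(extra_dx=0.0)   # region A: diagnoses track true burden
mort_b, dx_b = simulate(extra_dx=0.8)   # region B: frequent extra diagnoses

# Observed mortality is nearly identical, but region B carries more coded
# diagnoses per person, so a risk model built on diagnosis counts would
# "expect" more deaths there and make its adjusted outcomes look better.
print(f"region A: mortality {mort_a:.3f}, diagnoses/person {dx_a:.2f}")
print(f"region B: mortality {mort_b:.3f}, diagnoses/person {dx_b:.2f}")
```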