Objective. To assess revascularization and mortality after acute myocardial infarction (AMI) for all Medicare patients in fee-for-service (FFS) and health maintenance organization (HMO) settings in California.
Data Sources. Hospital discharge abstract and death certificate data linked with Medicare enrollment files for patients aged 65 and over with Medicare coverage (n=69,040) discharged from a California-licensed hospital in 1994–1996.
Study Design. Risk-adjusted results were assessed for HMOs and FFS, as well as for FFS beneficiaries from areas served by each plan.
Methods. Risk models were based on all sampled patients. The HMO patients were aggregated into 17 pseudoplans: 5 individual plans, 4 large plans split geographically (10 observations), and 2 “pseudoplans” of small HMOs. Observed versus expected 30-day mortality rates, lengths-of-stay (LOS) during the index hospitalization and any transfers, and revascularization (coronary artery bypass graft [CABG] surgery and/or percutaneous transluminal coronary angioplasty [PTCA]) during the index hospitalization or within 30 days after admission were calculated for each pseudoplan.
Principal Findings. The risk-adjusted death rate was slightly higher in FFS than in HMO settings (p<.01 with one risk adjustment model, n.s. with another). Three pseudoplans had significantly (p<.01) better than expected mortality rates. One pseudoplan was significantly worse (p<.05) with one risk adjustment model but not the other. The LOS and revascularization rates varied widely, but were not associated with outcomes. The plans with the best results had the lowest LOS and revascularization rates. These pseudoplans were less likely to have their patients initially admitted to a hospital with revascularization capability, but the hospitals they used had higher CABG volumes. Even if CABG facilities were available during the index admission, in these plans with better than expected mortality rates, revascularization was often postponed or carried out elsewhere.
Conclusions. For Medicare patients having an AMI in the mid-1990s in California, risk-adjusted outcomes were no different, or slightly better on average, for those in HMOs than in FFS. Not all plans performed equally well, so understanding what leads to differences in quality is more important than simple comparisons of HMOs versus FFS.
There is substantial controversy about the quality of care in health maintenance organizations (HMOs) and other forms of managed care. Although there is a growing literature (Miller and Luft 1994; 1997; 2002), many studies rely on data from a small number of often self-selected plans. Thus, the evidence, which suggests that quality is neither consistently worse nor better in HMOs, may reflect a biased sample. One exception included all 82 HMOs in the nation with 1,000 or more Medicare members, but its data are from 1989 (Clement et al. 1994).
The Medicare program encompasses a broad range of plans. Because claims are not used for payment, HMOs have not always submitted data to Medicare. Apparent differences in utilization or quality may therefore be due to biases in the data reported. Some studies supplement Medicare data with Surveillance, Epidemiology, and End Results (SEER) program information for cancer patients (Merrill et al. 1999; Potosky et al. 1999; Riley et al. 1999). Unfortunately, SEER covers only certain geographic areas with few plans, and diagnosis and treatment differences in cancer care may reflect patient, rather than clinician, decisions.
This study focuses on Medicare beneficiaries in California hospitalized for an acute myocardial infarction (AMI) in 1994–1996. The data are from hospital-submitted discharge abstracts, avoiding the inconsistent reporting of claims by HMOs. To address several limitations of previous work, it includes all HMOs with Medicare beneficiaries, and outcomes are clearly defined: length of stay, mortality, and revascularization within 30 days. Indications for admission after AMI are relatively clear; well-validated risk adjustment models are used to account for patient risk factors, and there is little patient discretion in AMI care.
Short-term outcomes and treatment of AMI are only one aspect of quality of care and, some might argue, one over which HMOs have little control. Overall cardiovascular disease rates would better reflect HMO effects on patient behavior and physician prescribing patterns, but would be confounded by benefit differences and patient orientation toward prevention. For AMI patients, HMOs might implement treatment protocols, but patients can certainly have a role in the choice of provider, length of stay, and decisions to transfer or revascularize. Moreover, several studies have examined outcomes and treatment for AMI patients in HMOs and fee-for-service (FFS) settings (Carlisle et al. 1992; Guadagnoli et al. 2000). The primary issue to be addressed in this paper, however, is not whether outcomes after AMI are better or worse for HMO enrollees, but whether there is sufficient homogeneity in outcomes across HMOs and groupings of FFS patients to usefully ask the question in that manner.
The California Office of Statewide Health Planning and Development (OSHPD) has published reports on outcomes for patients with AMI since 1996 (Zach, Romano, and Luft 1997). These California Hospital Outcome Project (CHOP) reports use discharge abstracts required of all state-licensed hospitals. Routine edits return questionable data for correction. The CHOP team examines patterns of coding for all variables to identify over- and undercoding, for example, too low a proportion of AMI patients with a diagnosis of hypertension, or too high a proportion with the site of infarct unspecified. A small number of hospitals and all their patients are thereby eliminated.
The confidential file of discharge abstracts includes the patient's social security number, date of birth, and sex, all of which are used to link multiple admissions for the same person. Prior admission data allow the identification of certain conditions, such as heart failure, as having been present before the AMI. During subsequent admissions, revascularization procedures may take place. Linkage to the death certificate file captures out-of-hospital deaths.
To account for differences at admission in the risk of death, the CHOP contractor (UCLA for the data used in this study) estimated two risk models with 30-day mortality as the dependent variable. Model A used only variables that clearly reflected the situation prior to admission, including age and sex, chronic comorbidities, for example, diabetes and hypertension, and AMI-specific risk factors such as site of infarction. Some diagnoses, such as heart failure, were included only if identified during an earlier admission. Model B included all the risk factors in Model A, plus diagnoses such as shock, that may have occurred subsequently during the stay. Appropriate interaction terms were included in both models. Model A somewhat undercompensates for true case-mix differences because it omits some variables that might be comorbidities; Model B somewhat overcompensates for risk differences because its variables may sometimes be complications of care. The models have been validated with chart data and have been used by other investigators (Zach, Romano, and Luft 1997; Krumholz et al. 1999).
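The two-model strategy can be sketched in code. The following is a toy illustration, not the CHOP specification: `x_pre` stands in for pre-admission risk factors (Model A), and `x_stay` for a diagnosis, such as shock, that may have arisen during the stay and is added only in Model B. The data are synthetic, and the small IRLS fitter is a minimal stand-in for a standard logistic regression routine.

```python
import numpy as np

def fit_logistic(X, y, iters=25):
    """Fit a logistic regression by iteratively reweighted least squares (Newton)."""
    X = np.column_stack([np.ones(len(X)), X])  # add intercept
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-X @ beta))
        W = p * (1 - p)
        # Newton step: beta += (X' W X)^-1 X' (y - p); tiny ridge for stability
        beta += np.linalg.solve((X * W[:, None]).T @ X + 1e-8 * np.eye(X.shape[1]),
                                X.T @ (y - p))
    return beta

def predict(beta, X):
    X = np.column_stack([np.ones(len(X)), X])
    return 1.0 / (1.0 + np.exp(-X @ beta))

# Synthetic illustration (not study data):
rng = np.random.default_rng(0)
n = 2000
x_pre = rng.normal(size=(n, 2))                       # pre-admission factors
x_stay = rng.binomial(1, 0.1, size=(n, 1)).astype(float)  # possible complication
logit = -1.5 + 0.8 * x_pre[:, 0] + 2.0 * x_stay[:, 0]
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-logit)))

beta_a = fit_logistic(x_pre, y)                       # Model A: pre-admission only
beta_b = fit_logistic(np.hstack([x_pre, x_stay]), y)  # Model B: adds in-stay diagnoses
p_a = predict(beta_a, x_pre)
p_b = predict(beta_b, np.hstack([x_pre, x_stay]))
```

Each patient thus carries two predicted probabilities, one per model, exactly as in the study's design.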
The risk models and de-identified patient-level data are in the public domain; however, identifiers were needed to select patients of specific HMOs. Medicare enrollment files identify beneficiaries with FFS coverage, and if in an HMO, the specific plan. Social security number, sex, and birth date can serve as linking variables, but are highly confidential. For this project, UCLA sent a file of all AMI patients to California Medical Review, Inc. (CMRI), which has the enrollment files for Medicare beneficiaries in California. California Medical Review, Inc. linked HMO enrollment codes to the discharge data, removed all identifiers, and forwarded the resulting file to the University of California, San Francisco (UCSF). This protocol was approved by the UCSF and the California Department of Health Services human subjects committees.
The CHOP risk models for all patients were reestimated on Medicare patients aged 65 and over. The probability of death was calculated twice for each patient, once using Model A risk factors and once using Model B risk factors. Patients residing outside California, or without coverage under both Medicare parts A and B were excluded. Patients were then grouped by HMO or FFS, based on their coverage in the month of their AMI.
To prevent the larger plans from being identifiable, each was split into two or three subplans, based on the geographic locations of their patients. (The primary concern here was not that the plans would be identifiable to the casual reader, but to each other. If plan-specific results were presented, some of the plans could probably figure out which point represented their own results, and then by elimination determine which belonged to their chief competitors.) Patient-level observations were aggregated to these pseudoplans or total FFS versus HMO patients. The probability of a pseudoplan having its observed number of deaths, given the estimated probability of death for each patient, was calculated using the Z-score. The proportion of patients having coronary artery bypass graft surgery (CABG) and/or percutaneous transluminal coronary angioplasty (PTCA) within 30 days after their AMI and the length of stay (LOS) during initial and transfer admissions were also calculated. Logistic models for revascularization and ordinary least-squares models for LOS were estimated using the risk variables from the mortality model.
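The observed-versus-expected test for a pseudoplan reduces to a normal approximation: with patient-level predicted death probabilities p_i, the expected count is the sum of the p_i and its variance is the sum of p_i(1-p_i). A minimal sketch (the function name and the numbers below are illustrative, not study values):

```python
import math

def pseudoplan_z(probs, observed_deaths):
    """Z-score for a pseudoplan's observed death count given patient-level
    predicted probabilities: Z = (O - E) / sqrt(sum p(1-p))."""
    expected = sum(probs)
    variance = sum(p * (1 - p) for p in probs)
    return (observed_deaths - expected) / math.sqrt(variance)

# Hypothetical plan: 1,000 patients each with predicted risk 0.18, 150 deaths observed.
z = pseudoplan_z([0.18] * 1000, 150)  # about -2.47, i.e., significantly fewer deaths
```

A Z below -1.96 corresponds to significantly better than expected mortality at p=.05.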
Medical practice may vary geographically within a state as large as California, and an HMO may reflect practice styles in the areas from which it draws its enrollees. Observed and expected death and revascularization rates and LOS were computed for FFS beneficiaries in each three-digit zip code area and were weighted by the proportion of each pseudoplan's patients to give the outcomes for “local” FFS patients.
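The "local" FFS benchmark is a weighted average, with weights equal to each pseudoplan's share of patients in each three-digit zip code area. A small sketch with hypothetical inputs:

```python
def local_ffs_rate(plan_zip_counts, ffs_rate_by_zip):
    """Weight FFS area outcome rates by a pseudoplan's zip-code mix.
    plan_zip_counts: {zip3: number of plan patients from that area}
    ffs_rate_by_zip: {zip3: FFS outcome rate in that area}"""
    total = sum(plan_zip_counts.values())
    return sum((n / total) * ffs_rate_by_zip[z]
               for z, n in plan_zip_counts.items())

# Hypothetical plan drawing 60% of patients from area '900', 40% from '941':
rate = local_ffs_rate({'900': 600, '941': 400}, {'900': 0.20, '941': 0.15})
# 0.6 * 0.20 + 0.4 * 0.15 = 0.18
```

The same weighting applies to death rates, revascularization rates, and LOS.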
Linkage of the discharge abstracts with the enrollment file yielded 69,040 discharges in 1994–1996 for Medicare patients aged 65 and over. The coefficients of the reestimated risk models based on just those patients aged 65 and over were very similar to the original CHOP models (see online version at http://www.blackwellpublishing.com/products/journals/suppmat/HESR/HESR02067/HESR02067sm.htm). The c-statistics of .713–.816 were similar to other studies using administrative data (Krumholz et al. 1999; Alter et al. 2001). More important for the purposes of this study, the models are well calibrated and are quite accurate for groups of patients. For example, using Model A, the lowest 5 percent of cases by risk had a 2.7 percent predicted death rate versus 2.4 percent observed, and the highest 5 percent had 49 percent predicted and 48 percent observed death rates.
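The tail-calibration check quoted above (lowest and highest 5 percent of cases by predicted risk) can be reproduced generically; this sketch assumes only parallel lists of predicted probabilities and 0/1 outcomes:

```python
def calibration_tails(pred, outcome, frac=0.05):
    """Mean predicted vs. observed rates in the lowest and highest `frac`
    of cases ranked by predicted risk."""
    order = sorted(range(len(pred)), key=lambda i: pred[i])
    k = max(1, int(len(pred) * frac))
    lo, hi = order[:k], order[-k:]
    avg = lambda idx, v: sum(v[i] for i in idx) / len(idx)
    return ((avg(lo, pred), avg(lo, outcome)),
            (avg(hi, pred), avg(hi, outcome)))
```

Close agreement between the predicted and observed rates in both tails, as reported here, indicates the model is well calibrated for groups of patients even if individual discrimination (the c-statistic) is moderate.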
For revascularization, the lowest 5 percent of cases by predicted risk had predicted and observed rates of 4.8 percent and 3.7 percent, while the top 5 percent of cases had predicted and observed rates of 58.8 percent and 54.9 percent, respectively. For LOS, the distributions are narrower, but the fit is also quite precise (see online version).
A total of 349 patients with zip codes outside California and 4,008 patients with only part A or only part B of Medicare were dropped, leaving 38,319 patients in FFS and 25,835 in HMOs. Of the latter, 2,128 were in HMOs with fewer than 300 patients and were initially grouped together. Nine large HMOs collectively accounted for 23,707 patients. Four of these were split geographically. The analysis was therefore based on 17 pseudoplans (5 moderate-sized ones representing themselves, 4 created from 2 of the large ones, 6 created from the 2 largest plans, and 2 from a geographic split of the small plan aggregation). The splits are based on contiguous three-digit zip code areas representing regions such as the Los Angeles, San Diego, or San Francisco Bay areas. An eighteenth observation represents FFS beneficiaries.
The 30-day observed death rate for FFS beneficiaries was 18.62 percent, versus expected rates of 18.29 percent with Model A (p=.059) and 18.48 percent with Model B (p=.335). For all HMO enrollees, the observed death rate was 15.85 percent, not significantly different from the expected rates of 16.26 percent and 15.97 percent for Models A and B, respectively. Comparing FFS and HMO enrollees in a logistic regression with a dummy variable for FFS coverage, mortality rates were significantly higher (p<.01) among FFS beneficiaries under Model A (standardized mortality ratio or SMR=1.047), but not under Model B (SMR=1.018). Thus, the initial “bottom line” is that Medicare beneficiaries in California HMOs in the mid-1990s fared no worse, and perhaps a little better, in terms of short-term mortality after a heart attack than their FFS colleagues.
Figure 1 brings the analysis down to the level of the pseudoplan and presents the observed and expected 30-day mortality rates using Model A for each pseudoplan and for FFS. (In the online version, the figures are presented in color, with the color scale linked to the Z-score from Model A. The red range indicates higher than expected mortality, but none of the values reaches a Z-score of 1.96, equivalent to p=.05. The blue range indicates better than expected mortality.) In the printed figures, pseudoplans with better than expected outcomes are indicated by open circles and those with worse than expected mortality by solid circles. The points are almost evenly split between more and fewer deaths than expected, but three pseudoplans have significantly fewer (p<.01) deaths than expected (indicated by a fat horizontal rectangle symbol). All the pseudoplans had lower expected death rates than FFS. Results were similar for Model B risk adjustment (not shown), except for one pseudoplan with a higher than expected death rate (p<.05), identified with a solid vertical rectangle symbol. These symbols are carried over across the figures. For example, in a figure focusing on length of stay (LOS), the pseudoplans with better than expected mortality are identified by the open circle and fat horizontal rectangle symbols.
To test the hypothesis that health plans merely reflect practices in their geographic areas, Figure 2 presents the ratio of observed to expected deaths, or SMR, in each pseudoplan versus the SMR for FFS patients in its local area. (Because HMOs often draw their enrollees from overlapping geographic areas, the groupings of FFS outcomes cannot be mutually exclusive.) Pseudoplan ratios range from 0.77 to 1.09; those for FFS patients in the same areas range from 0.96 to 1.11, with no apparent relationship (r=.004).
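The near-zero association reported here is a plain Pearson correlation across the pseudoplan points; a minimal sketch of the computation (the SMR lists below are hypothetical, not the study data):

```python
def pearson_r(x, y):
    """Pearson correlation, e.g., between pseudoplan SMRs and the SMRs of
    FFS patients in each pseudoplan's local area."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# Hypothetical: plan SMRs show no relationship to local FFS SMRs.
r = pearson_r([0.77, 0.95, 1.02, 1.09], [1.05, 0.96, 1.11, 0.98])
```

A value of r near zero, as found in the study (r=.004), indicates pseudoplan outcomes are not simply mirroring their local FFS environment.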
Controlling for patient risk factors, LOS for FFS patients was 0.79 days (p<.0001) longer for index plus transfer stays. (A transfer stay is defined as a second hospitalization that begins on the same day or the day after the end of the index hospitalization.) There was no overall relationship between pseudoplan and local FFS stays for either index admission or total LOS (not shown). Of FFS patients, 15 percent were transferred to another hospital versus 22 percent of HMO patients (12–33 percent range across pseudoplans). The pseudoplans with among the best outcomes tend to have short index stays followed by transfers (not shown).
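The transfer definition in the parenthetical is simple date arithmetic; a sketch, assuming discharge and admission dates are available for each stay:

```python
from datetime import date

def is_transfer(index_discharge: date, next_admit: date) -> bool:
    """A second hospitalization counts as a transfer if it begins on the
    same day as, or the day after, the end of the index hospitalization."""
    gap = (next_admit - index_discharge).days
    return gap in (0, 1)

# Discharged March 1, readmitted March 2: a transfer; readmitted March 3: not.
```

Index-plus-transfer LOS is then the sum of the two stays' lengths.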
There was substantially more variation in the 30-day revascularization rate among the pseudoplans (14.7 percent to 39.2 percent) than in FFS in their local areas (27.5 percent to 34.4 percent) (Figure 3). The three low-mortality outliers had substantially lower revascularization rates than was observed among FFS beneficiaries in their areas. Risk adjustment did not alter these findings. Although revascularization is unlikely to improve 30-day mortality, there is procedure-related risk. Thus, one may argue that the lower mortality rate among HMO enrollees might be due to their lower likelihood of receiving revascularization. It is useful, therefore, to examine the settings in which it is done.
Revascularization may depend on the capabilities of the hospital to which the patient was initially admitted (McClellan, McNeil, and Newhouse 1994). Figure 4 contrasts the overall revascularization rate within 30 days versus revascularization during the index admission. An additional 7–12 percent of the HMO patients underwent revascularization in another admission within 30 days of their AMI, with the percentage being greater in pseudoplans with low rates during the index admission. The three low-mortality outliers had very low index admission revascularization rates (2.8–5.2 percent), and even though three to four times as many procedures were done subsequently, their overall 30-day rates were still very low. The cluster of pseudoplan points in the lower left-hand corner, those with lower than expected mortality, had very low rates of revascularization on the initial admission.
The HMOs contract with, or direct their patients to, selected hospitals (Chernew, Scanlon, and Hayward 1998; Escarce et al. 1999; Erickson et al. 2000). Figure 5 shows that 57 percent of FFS patients are initially admitted to a hospital with CABG capability, and the mean volume of such hospitals is 268 CABG procedures per year. Ten of the pseudoplans exhibit a similar pattern, but six with better than expected outcomes have only 14–27 percent of their initial admissions occurring at CABG-capable hospitals. Figure 6 focuses on just those patients who ultimately received CABG surgery within 30 days. There is a cluster of pseudoplans in the upper left-hand corner with relatively low volumes and a high proportion of patients operated on in the index hospital. This pattern is similar to that of FFS, and none of these plans has better than expected mortality rates. In contrast, the pseudoplans with better outcomes operate during the initial hospitalization only half the time, with the exception of two plans that have very high CABG volumes in the hospitals used for their index admissions.
Medicare beneficiaries in California's HMOs admitted for acute myocardial infarction during 1994–1996 were younger and had fewer comorbidities than those in FFS. Thus, their lower observed 30-day mortality rate of 15.85 percent, relative to 18.62 percent for FFS patients, is not surprising. After accounting for patient differences in risk, the FFS rate was only slightly worse than the rate for HMO enrollees in general, but, given the large sample, the difference was statistically significant under Model A.
One does not join “HMOs in general,” however, and the results for individual pseudoplans suggest that it matters which plan one joins. Among the 17 pseudoplans examined, three had significantly better than expected mortality rates; one had worse than expected rates under one, but not both, risk models. At a p=.05 significance level, roughly one outlier observation would be expected by chance among 17 points. While chance might account for one outlier, it is unlikely to explain all three of the good outliers: two are significant (p<.01) in both models, and the third had values of p<.01 and p<.03.
There is substantial controversy about the identification of outliers, which may sometimes arise by chance. Chance may account for any one of the mortality results highlighted here, but the figures provide strong circumstantial evidence that some of these HMOs have different practice or hospital use patterns, and that these patterns are related to the differences in risk-adjusted mortality.
Two principal concerns with this type of analysis were addressed directly. Hospital payment by HMOs is often not dependent on diagnosis-related group (DRG) category, as is the case for FFS patients. Thus, some hospitals might “undercode” diagnoses for HMO patients. If so, HMOs would receive a lower predicted mortality rate than they truly “deserved.” If the variation in expected rates across pseudoplans reflected variable undercoding, there would be a negative relationship between the expected rate and the observed/expected mortality rate. This was not observed; in Figure 1 the low outliers have among the lowest expected mortality rates, and the high outlier has an unremarkable expected rate.
The second major area of concern was that differences across HMOs might merely reflect the geographic areas they serve. There are, indeed, substantial differences in risk-adjusted mortality rates among patients in FFS. Some areas exhibited 10 percent more deaths than expected—an excess greater than in any of the pseudoplans. Given that these “FFS areas” merely reflect the experience in overlapping locales from which HMO enrollees are drawn, and are not mutually exclusive, the true geographic variation is likely to be larger. Local FFS outcomes, however, were unrelated to outcomes across pseudoplans. It is possible, however, that HMOs select the best hospitals and clinicians within a geographic area, or may otherwise alter usual practices.
The data on LOS and revascularization support this conclusion. Although risk-adjusted total stays (index admission plus any transfers) were 0.79 days shorter for HMO patients, there was no relationship between LOS in the pseudoplans and their local area. The pseudoplans with the better mortality outcomes exhibit a consistent pattern that differs from both FFS and the pseudoplans with results that are similar to or worse than expected. The former tend to have a lower proportion of their AMI patients initially admitted to hospitals with CABG capabilities and tend to rely on referrals to other hospitals for revascularization. The CABG-capable hospitals these pseudoplan patients initially use have much higher average CABG volumes and the volume difference is even greater among the hospitals used for subsequent admissions (not shown). While many AMI patients are brought by ambulance to the nearest hospital, some are not, and this may account for the differential admission patterns. Even when admitted to a CABG-capable hospital, only half the patients who ultimately get CABG in the better pseudoplans have their procedure during the index admission, in contrast to the pattern among the other pseudoplans and FFS of heavy reliance on the index hospital, even if it is low volume. The substantial literature on the volume–outcome relationship for revascularization suggests these patterns are purposeful (Dudley et al. 2000).
Overall, these findings of marked plan differences are somewhat surprising. Kaiser is clearly different in its structure, financial incentives, and physician panel; the non-Kaiser HMOs in California contract in varying ways with physicians and hospitals or use intermediary medical groups (Robinson 1999). Although the identities of the HMOs were concealed in the linked file sent for analysis, different HMOs are represented among the “good” outliers. Those pseudoplans with worse than expected mortality rates (the solid points) had mortality and revascularization patterns similar to FFS in their local areas (see Figures 2 and 3). On the other hand, the open circle points (those with better than expected mortality) showed no relationship to local FFS patterns.
In many areas of the state, physician and hospital networks were very similar among non-Kaiser HMOs, so there was little reason to expect selective contracting to yield markedly different sets of providers. The HMOs do exercise some quality review, however, and some may exclude a small number of providers with quality problems, which would not be apparent from overall network breadth. The HMOs also develop practice guidelines, for example, with respect to use of beta-blockers after AMI, and may directly influence LOS, transfers, and revascularization. Some HMOs use hospitals similar to those used by FFS patients, and their outcomes appear similar. Other HMOs, through means as yet undetermined, apparently steer their patients to selected hospitals that are less likely to offer CABG surgery, but that have higher volumes if they do. Furthermore, these same plans are much less likely than FFS and the other plans to have CABG surgery done in the index admission even if the hospital is capable. The fact that these different patterns of care are associated with different risk-adjusted mortality rates is worthy of further investigation.
One might expect a few points among 17 to be outliers just by chance. One might also expect an organization like Kaiser to have different practice patterns for its enrollees in terms of the hospitals they use, LOS, and revascularization. The HMO enrollees probably have better outpatient drug coverage than does the average FFS beneficiary. The surprising result from this analysis is that those observations with better outcomes also have different practice patterns with respect to gross measures, such as the frequency and location of revascularization. Moreover, these different patterns appear for several HMOs, so it cannot be just a “Kaiser effect.”
In marked contrast to these California results, Cutler, McClellan, and Newhouse (2000) found little difference in the patterns of care for AMI patients in Massachusetts, and argue that nearly all the difference in HMO costs is due to lower charges. As indicated above, Kaiser does not account for all the observed differences in practice patterns. California, however, does have a much higher prevalence of group practices, and if some HMOs concentrate their patients in such groups, this may explain the differences in clinical patterns. It may also be the case that the presence of Kaiser has created a public acceptance of more coordination of care, and this may allow some of the more conventional independent practice associations to exercise more controls over where their patients will be treated.
These findings highlight the need to move beyond simple comparisons of FFS versus HMOs to deeper analyses of the reasons for performance differences. Remembering the substantial variation in practice patterns across the FFS “comparison plans,” all of which have FFS payments to physicians and DRG payments to hospitals, the variation across HMOs is probably not just due to financial arrangements. Instead, specific practice guidelines, quality review, and other features should be examined. The differential practice patterns and better outcomes in some plans may also be due to different physicians and hospitals. If so, one should learn how those providers were chosen.
The goal of the risk adjustment models in this paper was to adjust for differences in patient risk that were present on admission, while not adjusting for differences that might reflect either complications or treatment decisions. Some diagnoses, such as diabetes or hypertension, even if first noticed during the hospital stay, were almost certainly present at the time of admission. Other diagnoses, such as heart failure or shock, may have been present at admission or may have developed during it, and thus cannot simply be treated as present on admission. If a patient had a hospital admission prior to the index admission for AMI, and a diagnosis that could be either acute or chronic, such as congestive heart failure, was noted during the prior admission, then one could feel comfortable in using that diagnosis as a risk factor in the subsequent index admission. If such a condition were noted only during the index admission, it would not be included in the standard, or Model A, version. Some diagnoses, such as shock, are very powerful predictors of death, and are only relevant during the index admission. They were included in the Model B estimates, with the recognition that their inclusion might overcompensate for diagnoses that may actually have been complications.
When a patient has had prior admissions, it makes sense to include chronic conditions such as diabetes that appear on either record. There is a question, however, whether one should just add variables for the acute/chronic conditions, for example CHF, for those patients with prior admissions to a pooled model. Patients with multiple admissions may be different from those with just an index admission, and this may carry over to the effects of their chronic diseases. Thus, the sample was first split into patients with a prior admission in eight weeks and those without such a prior admission. Separate models were estimated for each subsample, both without and with the Model B variables representing conditions that may have been complications during the index admission. Thus, although there are four regressions, each patient received only two predicted probabilities, one from the A Model and one from the B Model, each with variables appropriate for whether the patient had prior admissions or not. The prior and no-prior groups were then melded together.
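The split-sample scheme can be illustrated schematically. In this toy version the "model" for each subsample is just its crude death rate; the real analysis fits the full logistic risk models separately for the prior-admission and no-prior groups and then melds the two sets of predicted probabilities:

```python
def split_sample_predict(patients):
    """Toy stand-in for the split-sample scheme: estimate one 'model' (here,
    simply the subgroup death rate) for patients with a prior admission within
    eight weeks and another for those without, then meld the predictions
    back into a single mapping of patient id -> predicted probability."""
    def rate(group):
        return sum(p["died"] for p in group) / len(group)
    prior = [p for p in patients if p["has_prior"]]
    noprior = [p for p in patients if not p["has_prior"]]
    r_prior, r_noprior = rate(prior), rate(noprior)
    return {p["id"]: (r_prior if p["has_prior"] else r_noprior)
            for p in patients}
```

In the actual analysis, each patient thus receives two predicted probabilities (one from the A model, one from the B model), each estimated with the variables appropriate to whether the patient had a prior admission.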
Table A1 includes the means of all the variables for the FFS and HMO enrolled populations used in developing the risk models. The total number of cases is somewhat larger than the sum of enrollees in the plans indicated in the paper because some cases were subsequently dropped for being out of state or for lacking both Part A and Part B Medicare coverage.
|Variable||Description||N (all)||Mean (all)||N (FFS)||Mean (FFS)||N (HMO)||Mean (HMO)|
|mortio30||Death within 30 days of admission||68511||0.174||42599||0.183||25912||0.159|
|npriors||Number of prior admissions||68511||0.313||42599||0.335||25912||0.276|
|iatxcabg||CABG in current or subsequent admission||68511||0.109||42599||0.111||25912||0.106|
|iatxptca||PTCA in current or subsequent admission||68511||0.174||42599||0.194||25912||0.141|
|priolagw||Weeks from most recent prior record||13484||9.146||8806||9.147||4678||9.145|
|sknulcrp||Chronic skin or ulcer condition||68511||0.005||42599||0.006||25912||0.004|
|nopriors||No prior admissions||68511||0.803||42599||0.793||25912||0.819|
|iageyrs||Age in years||68511||76.442||42599||76.747||25912||75.941|
|prcabg||History of prior CABG||68511||0.010||42599||0.011||25912||0.007|
|iadm94||Admission in 1994||68511||0.338||42599||0.361||25912||0.300|
|iadm95||Admission in 1995||68511||0.341||42599||0.339||25912||0.345|
|chfb||Congestive heart failure||68511||0.401||42599||0.406||25912||0.393|
|chrrenab||Chronic hypertensive renal failure, dialysis||68511||0.058||42599||0.062||25912||0.051|
|cnsdisb||Parkinson's, degenerative/demyelinating CNS disorders||68511||0.019||42599||0.020||25912||0.017|
|hrsecmab||History of primary or secondary malignant neoplasm||68511||0.017||42599||0.018||25912||0.015|
|htb||Hypertension with no note of renal or CHF||68511||0.453||42599||0.441||25912||0.473|
|iagechf||Age * CHF||68511||31.223||42599||31.754||25912||30.350|
|iagefem||Age * Female||68511||34.750||42599||36.716||25912||31.519|
|iageinf||Age * Site inferior||68511||16.451||42599||16.419||25912||16.502|
|iagesit||Age * Site other||68511||6.110||42599||6.590||25912||5.320|
|iantchf||Anterior * CHF||68511||0.121||42599||0.124||25912||0.116|
|ichfcab||CHF * History of CABG||68511||0.004||42599||0.005||25912||0.003|
|ichffem||CHF * Female||68511||0.190||42599||0.203||25912||0.169|
|iinfchf||Inferior * CHF||68511||0.063||42599||0.064||25912||0.062|
|isitchf||Other site * CHF||68511||0.041||42599||0.045||25912||0.035|
|ichfsho||CHF * Shock||68511||0.038||42599||0.039||25912||0.036|
|ishosep||Shock * Sepsis||68511||0.005||42599||0.005||25912||0.004|
|paymcal||Payor source MediCal||68511||0.023||42599||0.036||25912||0.002|
|payunin||Payor source uninsured||68511||0.005||42599||0.004||25912||0.005|
|acrenali||Acute or unspecified renal failure||68511||0.053||42599||0.057||25912||0.045|
|othcvai||Other cerebrovascular disease||68511||0.026||42599||0.027||25912||0.023|
|puledemi||Pulmonary edema, adult respiratory distress||68511||0.078||42599||0.081||25912||0.075|
|coatrbli||Complete atrioventricular block||68511||0.033||42599||0.035||25912||0.030|
|pventaci||Paroxysmal ventricular tachycardia||68511||0.083||42599||0.085||25912||0.079|
|amisequi||Catastrophic structural AMI complication||68511||0.003||42599||0.003||25912||0.003|
|vasinsui||Ischemic necrosis, infarct intestine/liver||68511||0.004||42599||0.004||25912||0.003|
|ishochf||Shock * CHF||68511||0.038||42599||0.039||25912||0.036|
|iacrchf||Acute renal failure * CHF||68511||0.036||42599||0.038||25912||0.032|
|icomage||Coma * Age||68511||0.964||42599||1.068||25912||0.792|
|icomchf||Coma * CHF||68511||0.006||42599||0.006||25912||0.005|
|ipulant||Pulmonary edema * Anterior||68511||0.026||42599||0.027||25912||0.024|
|ipulchf||Pulmonary edema * CHF||68511||0.053||42599||0.053||25912||0.052|
|iepiage||Epilepsy * Age||68511||1.596||42599||1.735||25912||1.367|
|ishoacr||Shock * Acute renal failure||68511||0.014||42599||0.015||25912||0.012|
|ishocom||Shock * Coma||68511||0.002||42599||0.003||25912||0.002|
|ishooth||Shock * Other site||68511||0.002||42599||0.003||25912||0.002|
|ishopul||Shock * Pulmonary edema||68511||0.020||42599||0.021||25912||0.020|
|ishocoa||Shock * Complete atrioventricular block||68511||0.008||42599||0.009||25912||0.006|
|ishopve||Shock * Paroxysmal ventricular tachycardia||68511||0.011||42599||0.011||25912||0.011|
|ishoasp||Shock * Aspiration pneumonia||68511||0.003||42599||0.004||25912||0.003|
|ishoami||Shock * Catastrophic AMI||68511||0.001||42599||0.001||25912||0.001|
|iacroth||Acute renal failure * Other site||68511||0.003||42599||0.003||25912||0.003|
|icomoth||Coma * Other site||68511||0.002||42599||0.002||25912||0.002|
|icompul||Coma * Pulmonary edema||68511||0.004||42599||0.004||25912||0.003|
|icomepi||Coma * Epilepsy||68511||0.002||42599||0.002||25912||0.001|
|icomasp||Coma * Aspiration pneumonia||68511||0.001||42599||0.001||25912||0.001|
|ipulasp||Pulmonary edema * Aspiration pneumonia||68511||0.007||42599||0.007||25912||0.006|
|icoapve||Complete atrioventricular block * Paroxysmal ventricular tachycardia||68511||0.004||42599||0.005||25912||0.004|
|OKplan||Case included in sample of plans after exclusions||64154||1.000||38319||1.000||25835||1.000|
Of the total number of patients, 55,467 had no admission for non-AMI reasons in the eight weeks before the index AMI admission, and 13,573 had such a prior admission, which allowed additional risk factors to be identified from the earlier record.
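The eight-week lookback rule that splits the sample into the priors and no-priors groups can be sketched as follows. This is a minimal illustration, not the study's actual claims-processing code; the function name and the example dates are hypothetical.

```python
from datetime import date, timedelta

def has_prior_admission(index_admit, prior_admits, window_days=56):
    """Return True if any earlier (non-AMI) admission falls within the
    eight-week (56-day) window before the index AMI admission."""
    cutoff = index_admit - timedelta(days=window_days)
    return any(cutoff <= d < index_admit for d in prior_admits)

# Hypothetical patient histories
ami_date = date(1995, 6, 1)
has_prior_admission(ami_date, [date(1995, 4, 20)])  # within 8 weeks -> True
has_prior_admission(ami_date, [date(1995, 1, 10)])  # outside window -> False
```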
The logistic regressions for the patients with and without prior admissions, using Models A and B, are shown in Tables A2–A5. The coefficients do not differ substantially from those estimated by the CHOP team on all AMI patients, with the exception of the UNDER35 age variable, which was necessarily excluded from these Medicare-only models.
[Tables A2–A5: logistic regression results for Models A and B in the prior-admission and no-prior-admission samples. Columns: Variable, Parameter Estimate, Standard Error, Wald Chi-Square, Pr > Chi-Square, Odds Ratio.]
C-statistics for Model A were .730 and .713 for patients with prior admissions and without prior admissions, respectively; for Model B, the corresponding c-statistics were .816 and .789. More important for the purposes of this study than the c-statistics, however, is the calibration of the model. Rather than relying solely on the Hosmer-Lemeshow statistic, which, with a very large number of observations, can yield significant chi-square values that are not particularly meaningful, plots of the observed and predicted results are more informative. Given the large number of cases, 20 cells are used rather than the 10 in the classic Hosmer-Lemeshow approach, providing finer resolution. Cases are ordered by the predicted value, and for each twentieth of the dataset the observed and predicted values are plotted.
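The 20-cell grouping described above can be sketched in a few lines. This is a generic reimplementation of the Hosmer-Lemeshow-style binning, not the study's own code; the function name is an assumption.

```python
def calibration_cells(pred, obs, n_cells=20):
    """Order cases by predicted risk, split them into n_cells equal-size
    groups, and return (mean predicted, observed rate) for each group --
    the 20-cell variant of Hosmer-Lemeshow grouping used in the text."""
    pairs = sorted(zip(pred, obs))
    n = len(pairs)
    cells = []
    for k in range(n_cells):
        chunk = pairs[k * n // n_cells:(k + 1) * n // n_cells]
        mean_pred = sum(p for p, _ in chunk) / len(chunk)
        obs_rate = sum(o for _, o in chunk) / len(chunk)
        cells.append((mean_pred, obs_rate))
    return cells
```

Plotting the paired values in each cell against one another yields the calibration curves shown in the figures.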
Figures A1 and A2 present the calibration curves for Models A and B, respectively. Both models exhibit a very wide range of predicted values, and these match the observed values quite well. As would be expected, because Model B includes some variables that may have been complications, its top range of predicted values was even higher, with the top 5 percent of cases having an average predicted value of 0.752 and an observed death rate of .721 (the maximum predicted value was actually 0.990736). As is often the case with the logistic link, the predicted values at the very high and very low ends of the spectrum were somewhat higher than they “should be.” Since HMOs tend to have a somewhat lower-risk enrollee base, there is a concern that this may account for their better than expected results. To test this, the predicted mortality rates were replaced by new predicted values derived from a regression of the 30-day death variable on the predicted, predicted squared, and predicted cubed terms. This allowed an even closer fit at the tails of the distribution. The lowest 5 percent of cases had a predicted rate of .0237 and an observed rate of .0231, while the next 5 percent had predicted and observed rates of .0362 and .0379. Likewise, the top two categories had predicted and observed rates of .3843 versus .3821 and .4606 versus .4696. The results of this calibration are shown in Figure A3. This much closer fit of predicted to observed risks made no difference in the results when aggregated to the plan level. The largest difference was .0005 relative to a risk level of .1771 in a small plan.
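The cubic recalibration described above amounts to an ordinary least-squares fit of the death indicator on the predicted risk and its square and cube. A minimal sketch, using synthetic data rather than the study's files (the function name and the assumed "true" risk relationship are illustrative only):

```python
import numpy as np

def recalibrate(pred, died):
    """Refit the 30-day death indicator on pred, pred^2, pred^3 (plus an
    intercept) by least squares, returning recalibrated predicted risks,
    as in the cubic-polynomial recalibration described in the text."""
    coefs = np.polyfit(pred, died, 3)   # OLS fit of a cubic in pred
    return np.polyval(coefs, pred)

# Synthetic illustration: raw predictions that overstate tail risk
rng = np.random.default_rng(0)
p = rng.uniform(0.01, 0.9, 5000)
true_risk = 0.05 + 0.8 * p**2           # assumed "true" relationship
y = (rng.uniform(size=5000) < true_risk).astype(float)
p_new = recalibrate(p, y)
```

Because the fit includes an intercept, the mean recalibrated prediction matches the observed death rate exactly, while the tails are pulled toward the observed rates.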
Figure A4 presents the calibration results for the probability of any revascularization within 30 days of the AMI.
Although the Hosmer-Lemeshow test is not technically appropriate for a continuous outcome, the same grouping approach can also be used for the LOS estimates. For length of stay, the distribution of predicted values was narrower, but the fit was nonetheless quite precise. These results are shown in Figures A5 and A6 for total and index lengths of stay.
Gerald Kominski of UCLA provided the linked hospital discharge record data and Dexter Jung of CMRI supervised the linkage of those data with the Medicare enrollment files. Deborah Rennie of UCSF prepared the analytic files from those linked data. (Preparation and linkage of the discharge abstract and Medicare enrollment data upon which this article is based were performed under contract no. 500-96-P535 titled “Utilization and Quality Control Peer Review Organization for the State of California,” sponsored by the Health Care Financing Administration [HCFA], Department of Health and Human Services [DHHS], a result of the Health Care Quality Improvement Program initiated by HCFA, which has encouraged identification of quality improvement projects derived from analysis of patterns of care, and therefore required no special funding on the part of this contractor [California Medical Review, Inc.].) The content of this paper does not necessarily reflect the views or policies of CMRI or DHHS. The author assumes full responsibility for the accuracy and completeness of the ideas presented. Partial support for this project was provided by the Integrated Healthcare Association.