Nicholas A. Christakis, M.D., Ph.D., christak@hcp.med.harvard.edu, Telephone: 617-432-5890, Fax: 617-432-5891
A. James O’Malley, Ph.D., omalley@hcp.med.harvard.edu, Telephone: 617-432-3493, Fax: 617-432-2560
Jukka-Pekka Onnela, Ph.D., onnela@hcp.med.harvard.edu, Telephone: 617-432-5890, Fax: 617-432-5891
Nancy L. Keating, M.D., M.P.H., keating@hcp.med.harvard.edu, Telephone: 617-432-3093, Fax: 617-432-0173
Bruce E. Landon, M.D., M.B.A., landon@hcp.med.harvard.edu, Telephone: 617-432-3456, Fax: 617-432-0173
Mailing Address for all authors: Department of Health Care Policy, Harvard Medical School, 180 Longwood Avenue, Boston, MA 02215
There is substantial variation in the cost and intensity of care delivered by US hospitals. We assessed how the structure of patient-sharing networks of physicians affiliated with hospitals might contribute to this variation.
We constructed hospital-based professional networks based on patient-sharing ties among 61,461 physicians affiliated with 528 hospitals in 51 hospital referral regions in the US using Medicare data on clinical encounters during 2006. We estimated linear regression models to assess the relationship between measures of hospital network structure and hospital measures of spending and care intensity in the last 2 years of life.
The typical physician in an average-sized urban hospital was connected to 187 other doctors for every 100 Medicare patients shared with other doctors. For the average-sized urban hospital, an increase of one standard deviation (SD) in the median number of connections per physician was associated with a 17.8% increase in total spending, 17.4% more hospital days, and 23.8% more physician visits (all p<0.001). In addition, higher “centrality” of primary care providers within these hospital networks was associated with 14.7% fewer medical specialist visits (p<0.001) as well as lower spending on imaging and tests (−9.2% and −12.9%, respectively, for a 1-SD increase in centrality; p<0.001).
Hospital-based physician network structure has a significant relationship with an institution’s care patterns for their patients. Hospitals with doctors who have higher numbers of connections have higher costs and more intensive care, and hospitals with primary care-centered networks have lower costs and care intensity.
American regions and hospitals within those regions differ markedly in health care spending and resource use.1,2 Even after risk adjustment and price standardization, a significant amount of variation in spending and resource use remains unexplained.3,4 These findings are concerning in light of the growth in US health care costs, since hospitals with higher spending and resource use do not appear to have better outcomes and have similar performance on health care quality indicators compared to lower-spending hospitals.5–7 Prior research has shown that regional levels of health care spending and utilization are associated with physicians’ tendency towards aggressive care.8–10 It is possible that these regional and institutional patterns of high- or low-cost care may be reflected in the networks of physician interactions since, collectively, physician interactions contribute to the culture and knowledge of a region or institution. For instance, physicians rely on each other as trusted sources of medical advice and information, often to the exclusion of published research.11,12 Therefore, one unexplored area that might contribute to hospital-level variations in care is the structure of the networks of hospital-affiliated physicians as defined by physician-to-physician interactions.13 As recently shown, physician interactions may be measured by examining whether physicians treat patients in common.14
We examined how networks based on physician relationships might be associated with care delivery for patients using network analysis.15,16 Network analysis has had prior successful applications in understanding the behavior of organizations such as academic departments, company boards of directors, and artistic collaborations.17–19 Some prior research has used these methods to examine physician advice networks and the diffusion of information among physicians; however, these studies included relatively small samples of physicians or focused on a single technology or drug.20–23
We used data from the Medicare program regarding 2.6 million patients cared for by 61,461 physicians associated with 528 hospitals to study professional networks of physicians defined by patient sharing. We focused our study on networks defined by physicians affiliated with individual hospitals because of the importance of hospitals to the US health care system and the depth of data available for describing hospital performance. We hypothesized that network measures reflecting poorer coordination of care within physicians’ professional networks would be associated with higher costs and care intensity within hospitals.
To evaluate this hypothesis, we map the networks of all physicians affiliated with these nationally representative hospitals and characterize these networks with well-accepted measures from the discipline of network science that reflect aspects of care coordination. For instance, the number of physicians who share care for a patient (a measure related to physician degree within the network, described below), has been shown to be associated with increased costs and utilization in prior studies.24–26
We extend this prior work by adopting concepts developed within network science. Such measures can reveal patterns of medical care that would otherwise be difficult to measure, adding a new set of tools for insight into health care delivery.
We used encounter data from the 2006 Medicare Carrier File for 100% of patients enrolled in Medicare Part A (hospital care) and Part B (physician services, outpatient care, and durable medical equipment) in 50 randomly sampled hospital referral regions (HRRs) and the Boston HRR to define physician relationships. We excluded patients enrolled in capitated Medicare Advantage plans since we did not have claims for these patients, and since the measures of cost and intensity used (described below) were calculated for fee-for-service enrollees. We obtained descriptive information for hospitals and physicians from the 2006 American Hospital Association annual survey and American Medical Association (AMA) Masterfile. We defined physicians as primary care physicians (PCPs), medical or surgical specialists, or “other” (e.g. psychiatry).
We obtained measures of cost and care intensity for hospitals from the Dartmouth Atlas of Health Care, which were derived using data from 2001 to 2005.27 For each hospital, we examined 3 measures of health care spending (total inpatient plus outpatient, imaging, and laboratory tests) and 6 measures of utilization, including hospital days (total, intensive care unit (ICU), and general medical/surgical combined), and number of physician visits (including visits to PCPs and medical specialists). These measures were defined based on a population of patients hospitalized at least once for one of nine life-threatening chronic illnesses (e.g., congestive heart failure) who were in the last two years of life.28 All Dartmouth measures were adjusted for patient age, sex, race, type of chronic illness, and presence of multiple chronic illnesses.29 Thus, the measures represent the case-mix-adjusted cost and intensity of care for a population of older patients with roughly comparable levels of illness.
We assigned each physician with an office located in a sampled HRR (assessed using the AMA Masterfile) to a principal hospital based on where they filed the plurality of inpatient claims, or if they did not do any inpatient work, to the hospital where the plurality of patients they saw received inpatient care.30
Our sample initially included 65,757 eligible physicians in office-based patient care specialties (excluding pathologists, emergency medicine, radiologists, and anesthesiologists) affiliated with 867 general medical surgical hospitals within the selected regions. After excluding low-volume hospitals for which the outcomes measured could not be ascertained (≤400 deaths annually) and physicians with no ties within their assigned hospital (mostly applicable to physicians located at the border of an HRR who primarily used a hospital outside of our sample), our final sample included 61,461 physicians affiliated with 528 hospitals.
To define a network of relationships between the physicians in our dataset, we identified a relationship (tie) between two doctors if they each had a significant encounter with one or more common patients. These encounters included face-to-face visits or meaningful procedures with a value of at least 2 relative value units (RVUs). This was done to capture encounters where an office visit might not be billed such as those related to bundled surgical procedures. After identifying significant encounters between physicians and patients, as depicted in Figure 1A, we then created a tie between any two doctors who cared for one or more patients in common (outlined schematically in Figure 1B). The use of shared patients to identify network ties has been validated in a recent study.14
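The tie-construction step described above amounts to projecting a bipartite patient–physician encounter graph onto physicians. The Python sketch below is illustrative only (the encounter records are hypothetical, not study data), but it mirrors the paper's rule: keep encounters worth at least 2 RVUs, then connect every pair of doctors who saw a common patient.

```python
from itertools import combinations
from collections import defaultdict

def build_ties(encounters, min_rvu=2.0):
    """Project patient-physician encounters into physician-physician ties.

    encounters: iterable of (patient_id, physician_id, rvu) tuples.
    An encounter counts only if it is worth at least `min_rvu` relative
    value units, mirroring the paper's 2-RVU significance threshold.
    Returns a set of frozensets, one per pair of doctors who shared
    at least one patient.
    """
    # Group qualifying physicians by patient (the bipartite graph)
    patients = defaultdict(set)
    for patient, physician, rvu in encounters:
        if rvu >= min_rvu:
            patients[patient].add(physician)

    # Project: any two doctors seeing the same patient get a tie
    ties = set()
    for docs in patients.values():
        for a, b in combinations(sorted(docs), 2):
            ties.add(frozenset((a, b)))
    return ties

# Hypothetical encounter records: (patient, physician, RVUs billed)
encounters = [
    ("p1", "drA", 2.5), ("p1", "drB", 3.0),  # A and B share p1
    ("p2", "drB", 2.0), ("p2", "drC", 4.0),  # B and C share p2
    ("p3", "drA", 1.0), ("p3", "drC", 2.0),  # p3's visit to A is below threshold
]
print(build_ties(encounters))
```

Note that the 2-RVU filter drops drA from patient p3's circle, so no A–C tie is created; only the A–B and B–C ties survive.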
We focused on two structural measures commonly used in network analyses that we hypothesized would reflect care coordination within the network. These measures are briefly summarized below, and are schematically explained in Figure 1C with additional details presented in the Appendix.
Degree is defined as the number of ties an individual physician has in the network, or equivalently, the number of other doctors a physician is connected with through the sharing of patients. The doctors contributing to a physician’s degree can be any doctors a physician’s patients have seen in the physician’s HRR. To account for a physician’s Medicare patient volume, we adjusted degree by dividing each physician’s degree by the total number of Medicare patients the physician shared in 2006 with other doctors (controlling for a physician having higher degree simply because he sees more patients). A physician with a high adjusted degree shares his patients with a broader array of other doctors than a physician with a low adjusted degree. This measure is independent of the number of doctors seen per patient, since, for example, a physician’s fixed panel of patients could all see the same 20 specialists or could see 200 specialists. Because our analysis was at the hospital level, we then summarized this measure across the physicians in a hospital network by using the median adjusted degree of all physicians at a hospital. We hypothesized that hospitals whose physician networks had a higher median degree would have higher costs and care intensity due to the greater challenges of care coordination as a physician shares patients with more colleagues.24–26,31 A physician sharing patient care with a broad set of colleagues may have more difficulty consolidating his patients’ clinical information and guiding their care than a physician sharing patients with fewer colleagues.
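The adjusted-degree calculation and its hospital-level summary can be sketched in a few lines; the physician counts below are hypothetical and serve only to illustrate the arithmetic:

```python
from statistics import median

def adjusted_degree(degree, shared_patients, per=100):
    """Degree per `per` Medicare patients shared with other doctors."""
    return per * degree / shared_patients

# Hypothetical physicians at one hospital:
# (ties to other doctors, Medicare patients shared with any other doctor)
physicians = [(150, 80), (300, 120), (90, 60)]

adj = [adjusted_degree(d, n) for d, n in physicians]  # [187.5, 250.0, 150.0]
hospital_median_adjusted_degree = median(adj)
print(hospital_median_adjusted_degree)  # → 187.5
```

Dividing by shared-patient volume is what lets two physicians with very different panel sizes be compared on the breadth, rather than the quantity, of their sharing.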
Betweenness centrality is a measure that describes the tendency of a physician to be located in the middle of the network surrounding him.32 The betweenness centrality of a physician is calculated in the following way: consider connecting each physician to every other physician in the network going through as few intermediate relationships as possible; the betweenness centrality of a physician is proportional to the number of times he lies on any of these paths as an intermediary (see the Appendix for equation and details). Visually, doctors with high betweenness centrality lie in the middle, rather than on the periphery, of a network map visualized with standard algorithms.33 Physicians with higher betweenness centrality are well positioned in their network to have greater access to, and influence on, the flow of information among doctors in their network. In Figure 1C, the size of each circle (representing a physician) is proportional to that physician’s betweenness centrality. The calculation of betweenness centrality is confined to doctors within a hospital.
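The shortest-path counting described above is commonly computed with Brandes' algorithm. A minimal self-contained implementation for an unweighted, undirected network follows; the three-doctor path graph is a toy example (not study data) chosen so the middle doctor visibly carries all the betweenness:

```python
from collections import deque

def betweenness(graph):
    """Brandes' algorithm for unweighted, undirected graphs.

    graph: dict mapping node -> set of neighbor nodes.
    Returns unnormalized betweenness; each undirected source-target
    pair is counted from both endpoints, so totals are halved.
    """
    bc = dict.fromkeys(graph, 0.0)
    for s in graph:
        # BFS from s, tracking shortest-path counts and predecessors
        stack, preds = [], {v: [] for v in graph}
        sigma = dict.fromkeys(graph, 0.0); sigma[s] = 1.0
        dist = dict.fromkeys(graph, -1); dist[s] = 0
        queue = deque([s])
        while queue:
            v = queue.popleft(); stack.append(v)
            for w in graph[v]:
                if dist[w] < 0:
                    dist[w] = dist[v] + 1
                    queue.append(w)
                if dist[w] == dist[v] + 1:
                    sigma[w] += sigma[v]
                    preds[w].append(v)
        # Accumulate path dependencies in reverse BFS order
        delta = dict.fromkeys(graph, 0.0)
        while stack:
            w = stack.pop()
            for v in preds[w]:
                delta[v] += sigma[v] / sigma[w] * (1 + delta[w])
            if w != s:
                bc[w] += delta[w]
    return {v: b / 2 for v, b in bc.items()}  # undirected correction

# A path A - B - C: B lies on the only shortest path between A and C
graph = {"A": {"B"}, "B": {"A", "C"}, "C": {"B"}}
print(betweenness(graph))  # → {'A': 0.0, 'B': 1.0, 'C': 0.0}
```

In practice one would use a library implementation (the study used igraph in R); the sketch is only meant to make the "intermediary on shortest paths" definition concrete.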
We were interested in how central PCPs were in a hospital network relative to all other doctors in the network. To calculate this measure, we used the ratio of the average centrality of PCPs over the average centrality of all other doctors in a hospital (see Appendix). The resulting relative centrality value can be interpreted as how much more or less central PCPs are when compared to the other doctors in a hospital network. Given the importance of primary care systems for health care costs,34 we hypothesized that hospitals whose networks of patient sharing were more centered around PCPs would have lower costs and care intensity. Hospitals with networks focused around PCPs may have improved capacity for care coordination because specialists are more likely to share patients with a core set of PCPs in those systems (Figure 2).
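The relative-centrality ratio itself is a simple summary once each doctor's betweenness is known. A sketch with hypothetical betweenness values for a five-doctor hospital network:

```python
def pcp_relative_centrality(centrality, is_pcp):
    """Mean betweenness of PCPs divided by mean betweenness of all
    other doctors at the hospital. A ratio above 1 means PCPs are,
    on average, more central than their non-PCP colleagues."""
    pcp = [c for doc, c in centrality.items() if is_pcp[doc]]
    other = [c for doc, c in centrality.items() if not is_pcp[doc]]
    return (sum(pcp) / len(pcp)) / (sum(other) / len(other))

# Hypothetical betweenness values (e.g., from a Brandes-style computation)
centrality = {"pcp1": 10.0, "pcp2": 6.0,
              "spec1": 4.0, "spec2": 2.0, "spec3": 2.0}
is_pcp = {"pcp1": True, "pcp2": True,
          "spec1": False, "spec2": False, "spec3": False}
print(pcp_relative_centrality(centrality, is_pcp))
```

Here the PCPs average 8.0 against the specialists' 8/3, giving a ratio of about 3: a strongly PCP-centered toy network. Note the ratio is undefined when a hospital has no PCPs or no other doctors, which is why some hospitals were excluded from the relevant models.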
Hospital-level control variables included the number of hospital beds, number of physicians assigned to a hospital, location (urban or suburban/rural/isolated),35 teaching hospital status (major [member of the Council of Teaching Hospitals], minor [any non-major teaching hospital with medical school affiliation or residency program], none), ownership (not-for-profit, for-profit, government),36 nurse full time equivalents per 1000 inpatient days, and the percentage of admissions from Medicare and Medicaid patients. In addition, we controlled for the proportion of physicians assigned to a hospital who were PCPs and the mean shared patient volume per physician at a hospital, defined for each physician as the number of patients shared in 2006 with other doctors.
We compared our sample of hospitals with hospitals nationally using χ2 or t-tests and assessed differences in network measures across hospitals using one-way analysis of variance. We used multivariable weighted linear regression to model the effect of each network structure measure on cost or care intensity outcomes, adjusting for the hospital characteristics detailed above. To account for skewness, we log-transformed each outcome variable, and to account for the precision with which each outcome was measured, we weighted observations by the number of annual deaths used by the Dartmouth Atlas group to measure the cost and utilization data.27 We also used robust heteroscedasticity-consistent standard error estimation in model fitting37,38 to account for the possibility that an observation’s variance varies with the mean value of the predictors.
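The core of the weighted regression step can be illustrated for a single predictor (the full models were multivariable, fit in R with robust standard errors; this single-slope sketch and its hospitals, weights, and spending figures are invented for illustration):

```python
import math

def weighted_slope(x, y, w):
    """Weighted least-squares slope of y on x for one predictor.

    Here x is a standardized network measure, y a log-transformed
    outcome, and w the annual deaths used to measure each hospital's
    outcome (more deaths = more precisely measured = more weight)."""
    sw = sum(w)
    mx = sum(wi * xi for wi, xi in zip(w, x)) / sw  # weighted means
    my = sum(wi * yi for wi, yi in zip(w, y)) / sw
    num = sum(wi * (xi - mx) * (yi - my) for wi, xi, yi in zip(w, x, y))
    den = sum(wi * (xi - mx) ** 2 for wi, xi in zip(w, x))
    return num / den

# Four hypothetical hospitals: standardized degree, log(spending), weight
x = [-1.0, 0.0, 1.0, 2.0]
y = [math.log(40000), math.log(45000), math.log(52000), math.log(60000)]
w = [200, 400, 300, 100]
print(weighted_slope(x, y, w))
```

Because y is on the log scale, the fitted slope is interpretable as an approximate proportional change in spending per 1-SD change in the network measure.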
Because some small hospitals had extreme centrality ratios, we winsorized outliers, setting values below the 1st percentile to the 1st-percentile value and values above the 99th percentile to the 99th-percentile value (see Appendix).39 To enable regression coefficients to be directly compared, we centered each continuous network predictor and hospital covariate to have mean 0 and standard deviation 1 over the entire sample. To aid comparison across the models presented, regression coefficients are reported as the percent change in the outcome of interest expected for an average-sized urban hospital (the median hospital) given an increase of one standard deviation in the network measure predictor. Because network measures differed for urban and non-urban hospitals, we performed a secondary analysis of covariance for each model described above that included an interaction between the network variable of interest and urban/rural location.
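The standardization and percent-change reporting can be sketched as follows (the coefficient 0.164 is back-calculated from the reported 17.8% effect and is illustrative only, not a value taken from the study's models):

```python
import math

def standardize(xs):
    """Center to mean 0 and scale to SD 1 (population SD), as done
    for each continuous network predictor and hospital covariate."""
    n = len(xs)
    mean = sum(xs) / n
    sd = math.sqrt(sum((x - mean) ** 2 for x in xs) / n)
    return [(x - mean) / sd for x in xs]

def pct_change(beta):
    """Percent change in a log-transformed outcome associated with a
    1-SD increase in a standardized predictor: 100 * (exp(beta) - 1)."""
    return 100.0 * math.expm1(beta)

# e.g., a coefficient of ~0.164 on log(total spending) implies ~17.8%
print(round(pct_change(0.164), 1))  # → 17.8
```

The exponentiation step is why coefficients from a log-linear model are reported as percent changes rather than dollar amounts.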
Five hospitals had missing data for the general medical/surgical and ICU hospital days outcomes; in addition, 7 and 25 hospitals were missing data for PCP and medical specialist relative centrality respectively due to undefined values (see Appendix). Because these missing values were the outcomes and key predictors of interest, the hospitals with missing data were excluded from the relevant models (reflected in the sample sizes shown in Figure 3). We performed extensive sensitivity analyses to test the decisions made in the modeling process and found that our results were robust under a variety of conditions, including when accounting for Medicare Advantage penetration (data not shown, see Appendix). Complete results for all covariates included in the models are in Appendix Table 2.
All analyses were performed in R version 2.10,40 using the igraph package (version 0.5.3) for calculating network structure measures and the Zelig package (version 3.4–8) for multivariable regression models.41,42 We visualized hospital networks using the Fruchterman-Reingold algorithm as implemented in igraph.33,43 This study was approved by the institutional review board at Harvard Medical School.
We studied 528 hospitals and the 61,461 physicians caring for 2.6 million Medicare patients who comprised their associated physician networks. Compared with all general medical/surgical hospitals in the US, our sample contained larger hospitals that were more likely to be in urban settings (p<0.001 for both) (Table 1).
Among mid-sized hospitals in our sample, the mean of the hospital-level median adjusted degree was 187 (SD=86); this mean ranged from 155 (SD=57) for smaller hospitals to 281 (SD=124) for larger hospitals (p<0.001) (Table 2). Therefore, the typical physician in a mid-sized hospital shared patients with 187 other doctors for every 100 patients shared with other doctors. PCP relative centrality decreased with hospital size, from a mean of 1.11 (SD=0.87) in smaller hospitals to 0.80 (SD=0.54) in larger hospitals (p<0.001).
To illustrate the concept of relative centrality, the network graphs of three similarly sized hospitals are depicted in Figure 2. In the network for the hospital in Fig. 2A, medical specialists and surgeons are far more central, with almost all PCPs being located in the periphery of the network. In contrast, the networks of the hospitals in Figs. 2B and 2C have PCPs more prominently participating as central physicians. The relative centrality of PCPs in hospitals A, B and C are 0.35, 1.0 and 2.0, respectively.
The unadjusted relationships between median adjusted degree, PCP centrality, and total Medicare spending per hospital are depicted in Appendix Figure 1.
Adjusted relationships between hospital network structure and hospital outcomes, controlling for hospital characteristics, are presented in Figure 3. For the average-sized, urban hospital in our sample, an increase of one standard deviation (SD) in the median adjusted degree (corresponding to 107 additional doctors per 100 shared patients in the typical doctor’s set of contacts) was associated with a 17.8% (95% CI, 13.2 to 22.5) increase in total Medicare spending, 17.4% (95% CI, 12.6 to 22.4) more hospital days, and 23.8% (95% CI, 18.6 to 29.1) more physician visits in the last 2 years of life.
In contrast, a 1-SD increase in the centrality of primary care providers within an average-sized urban hospital network was associated with a decrease in overall spending of 6.0% (95% CI, −9.5 to −2.4), along with 9.2% (95% CI, −13.1 to −5.1) lower spending on imaging and 12.9% (95% CI, −17.0 to −8.6) lower spending on tests. In addition, higher PCP centrality was accompanied by 8.6% (95% CI, −19.4 to −9.7) fewer physician visits and 14.7% (95% CI, −19.4 to −9.7) fewer medical specialist visits.
In analyses examining the interaction between the network measures and urban/non-urban location, the association between median adjusted degree and all nine cost and utilization outcomes was unaffected by the urban/non-urban location of hospitals (all p>0.05 for interaction). The association between PCP relative centrality and the nine outcomes was mostly non-significant, although still negative, for non-urban hospitals (all p<0.001 for interaction except for general medical/surgical hospital days, p=0.06 and ICU days, p=0.14, results not shown).
This is the first large-scale analysis to explore how the structure of patient-sharing relationships among physicians is related to care patterns within hospitals. In addition, we present a novel method for using readily available administrative data to construct networks of physicians that will be useful for studying physician practice patterns.14 We find that the structure of physician patient-sharing networks is significantly associated with Medicare spending and care patterns. Higher adjusted degree is associated with higher spending and health care utilization even after adjusting for hospital characteristics. In contrast, higher PCP relative centrality is associated with lower spending and utilization. These results are consistent with the hypothesis that network measures reflective of poorer coordination of care within hospitals are associated with higher costs and care intensity.
We found that hospitals with physicians whose patients see a broader array of other doctors (higher adjusted degree) have higher levels of spending. They also use more hospital care, physician visits, and imaging. These associations may reflect the difficulty of care coordination as physicians have to manage information from an increasing number of colleagues, which could be either a cause or an effect of increased health care utilization.
Another possible explanation for this phenomenon might be that hospitals whose physicians have high median adjusted degree have sicker patients (who see more physicians), leading to higher costs and utilization of services. Our methods make this unlikely for two reasons. First, our outcome measures are risk-adjusted to reflect similar patient populations, so differences in costs are not reflective of differences in burden of illness.13 Second, the adjusted degree measure is distinct from the number of physicians that patients see. The difference between a broad and focused network of physicians among the doctors caring for patients is the factor measured by the median adjusted degree.
In contrast, a network measure that likely reflects greater coordination of care, PCP relative centrality, was associated with lower imaging and test spending in addition to fewer ICU days and specialist visits. These findings build upon prior state-level analyses showing that states with more PCPs have lower costs,44 but extend this work to more formal network analysis considering the relative location of PCPs within a network of their colleagues. Interestingly, PCP relative centrality did not have a significant association with costs and utilization in non-urban hospitals. One possible interpretation of this result is that urban hospitals without primary care centered networks may be more likely to use readily-available specialist services, whereas in non-urban areas, this may not be as relevant because of less access to specialists.45 Further research is needed to understand the interaction between PCP centrality in urban and non-urban settings, but this approach could provide insight into how PCPs might best be utilized to contain costs and care utilization.
A prior study showed that the average primary care physician shares patients with approximately 99 other physicians based at 53 other practices per 100 Medicare patients treated.31 That analysis, however, was based on patients assigned to individual primary care physicians. We demonstrate that, when considering all patients being cared for by all physicians, including both PCPs and specialists, physicians are connected to 155–281 doctors per 100 Medicare patients shared with other doctors. This network-based perspective illustrates the challenge of care coordination among physicians.
Our study is subject to several limitations. First, we ascertained network structure based on the presence of shared patients using administrative data. While this technique has been validated,14 we nevertheless cannot know what information or behaviors, if any, pass across the ties defined by shared patients. In addition, our data are cross-sectional and only included elderly patients insured by the Medicare program. The local network of physicians and patients in a hospital or region is likely to be in flux, and future analyses would be enhanced by longitudinal data. Furthermore, the sample of hospitals we used is representative of larger, urban hospitals rather than all US hospitals. However, because the sample included a full range of hospital sizes, and because our focus is on the relationship between variables (not population aggregates or means), the representativeness should be less of a concern. Also, we used risk-adjusted hospital-level data on costs and care intensity averaged over 2001–2005 for our outcome measures while our networks were mapped with 2006 data. This discrepancy, however, would tend to bias our results towards the null.
Next, our main dependent variables were calculated using several years of data from the Medicare program by the Dartmouth Atlas of Health Care. Although others have noted the possibility of inadequate risk adjustment or failure to account for differences in the prices paid for services in different regions,46,47 substantial variation in spending remains even after further risk adjustment.3,4 In addition, our models include several hospital-level characteristics that are likely to be associated with unmeasured case-mix, including size, urban versus rural location, and teaching hospital affiliation. With regard to prices, although the spending measures were not adjusted for regional payment differences, regional variation is reduced only modestly when taking prices into account.47 Moreover, our six utilization measures (e.g., hospitalizations) would not be affected by price differences, and the findings for these outcomes serve to validate the findings we observed for spending.
Lastly, due to the observational design of this study, our results should not necessarily be interpreted as causal. Further work is needed to determine the causal mechanisms underlying these associations. In addition, though we adjusted for numerous covariates, we cannot rule out the possibility of unobserved confounders that could help explain the mechanisms driving the associations we observe. These unmeasured confounders could reflect local medical culture and market dynamics.
In summary, we studied a large sample of physician networks to examine how network structures reflect health care in a national sample of hospitals. This analysis highlights the importance of physician relationship networks, which are embedded in institutional structures and may inform health policy and physician workforce management. We demonstrate that the characteristics of physician networks affiliated with a hospital are correlated with a hospital’s performance in a manner consistent with the hypothesis that poorer coordination of care is associated with greater spending and care intensity.