Med Care. Author manuscript; available in PMC 2013 February 1.
PMCID: PMC3260449
NIHMSID: NIHMS318646

Physician Patient-Sharing Networks and the Cost and Intensity of Care in US Hospitals

Michael L. Barnett, M.D.,1,2 Nicholas A. Christakis, M.D., Ph.D,1,3,4 A. James O’Malley, Ph.D.,1 Jukka-Pekka Onnela, Ph.D.,1 Nancy L. Keating, M.D., M.P.H.,1,2 and Bruce E. Landon, M.D., M.B.A1,3

Abstract

Background

There is substantial variation in the cost and intensity of care delivered by US hospitals. We assessed how the structure of patient-sharing networks of physicians affiliated with hospitals might contribute to this variation.

Methods

We constructed hospital-based professional networks based on patient-sharing ties among 61,461 physicians affiliated with 528 hospitals in 51 hospital referral regions in the US using Medicare data on clinical encounters during 2006. We estimated linear regression models to assess the relationship between measures of hospital network structure and hospital measures of spending and care intensity in the last 2 years of life.

Results

The typical physician in an average-sized urban hospital was connected to 187 other doctors for every 100 Medicare patients shared with other doctors. For the average-sized urban hospital, an increase of one standard deviation (SD) in the median number of connections per physician was associated with a 17.8% increase in total spending, 17.4% more hospital days, and 23.8% more physician visits (all p<0.001). In addition, higher “centrality” of primary care providers within these hospital networks was associated with 14.7% fewer medical specialist visits (p<0.001) as well as lower spending on imaging and tests (−9.2% and −12.9%, respectively, for a 1-SD increase in centrality; p<0.001).

Conclusions

Hospital-based physician network structure has a significant relationship with an institution’s care patterns for their patients. Hospitals with doctors who have higher numbers of connections have higher costs and more intensive care, and hospitals with primary care-centered networks have lower costs and care intensity.

Introduction

American regions and hospitals within those regions differ markedly in health care spending and resource use.1,2 Even after risk adjustment and price standardization, a significant amount of variation in spending and resource use remains unexplained.3,4 These findings are concerning in light of the growth in US health care costs, since hospitals with higher spending and resource use do not appear to have better outcomes and perform similarly on health care quality indicators compared to lower-spending hospitals.5–7 Prior research has shown that regional levels of health care spending and utilization are associated with physicians’ tendency towards aggressive care.8–10 These regional and institutional patterns of high- or low-cost care may be reflected in the networks of physician interactions, since physician interactions collectively contribute to the culture and knowledge of a region or institution. For instance, physicians rely on each other as trusted sources of medical advice and information, often to the exclusion of published research.11,12 Therefore, one unexplored factor that might contribute to hospital-level variation in care is the structure of the networks of hospital-affiliated physicians as defined by physician-to-physician interactions.13 As recently shown, physician interactions may be measured by examining whether or not physicians treat patients in common.14

We used network analysis to examine how networks based on physician relationships might be associated with care delivery for patients.15,16 Network analysis has previously been applied successfully to understand the behavior of organizations such as academic departments, company boards of directors, and artistic collaborations.17–19 Some prior research has used these methods to examine physician advice networks and the diffusion of information among physicians; however, these studies included relatively small samples of physicians or focused on a single technology or drug.20–23

We used data from the Medicare program regarding 2.6 million patients cared for by 61,461 physicians associated with 528 hospitals to study professional networks of physicians defined by patient sharing. We focused on networks defined by physicians affiliated with individual hospitals because of the importance of hospitals to the US health care system and the depth of data available for describing hospital performance. We hypothesized that network measures reflecting poorer coordination of care within physicians’ professional networks would be associated with higher costs and care intensity within hospitals.

To evaluate this hypothesis, we mapped the networks of all physicians affiliated with these nationally representative hospitals and characterized these networks with well-accepted measures from the discipline of network science that reflect aspects of care coordination. For instance, the number of physicians who share care for a patient (a measure related to physician degree within the network, described below) has been shown to be associated with increased costs and utilization in prior studies.24–26

We extend this prior work by adopting concepts developed within network science. Such measures can reveal patterns of medical care that would otherwise be difficult to measure, adding a new set of tools for insight into health care delivery.

Methods

Data Sources

We used encounter data from the 2006 Medicare Carrier File for 100% of patients enrolled in Medicare Part A (hospital care) and Part B (physician services, outpatient care, and durable medical equipment) in 50 randomly sampled hospital referral regions (HRRs) and the Boston HRR to define physician relationships. We excluded patients enrolled in capitated Medicare Advantage plans since we did not have claims for these patients, and since the measures of cost and intensity used (described below) were calculated for fee-for-service enrollees. We obtained descriptive information for hospitals and physicians from the 2006 American Hospital Association annual survey and American Medical Association (AMA) Masterfile. We defined physicians as primary care physicians (PCPs), medical or surgical specialists, or “other” (e.g. psychiatry).

We obtained measures of cost and care intensity for hospitals from the Dartmouth Atlas of Health Care, which were derived using data from 2001 to 2005.27 For each hospital, we examined 3 measures of health care spending (total inpatient plus outpatient, imaging, and laboratory tests) and 6 measures of utilization, including hospital days (total, intensive care unit (ICU), and general medical/surgical combined), and number of physician visits (including visits to PCPs and medical specialists). These measures were defined based on a population of patients hospitalized at least once for one of nine life-threatening chronic illnesses (e.g., congestive heart failure) who were in the last two years of life.28 All Dartmouth measures were adjusted for patient age, sex, race, type of chronic illness, and presence of multiple chronic illnesses.29 Thus, the measures represent the case-mix-adjusted cost and intensity of care for a population of older patients with roughly comparable levels of illness.

Assigning Physicians to a Primary Hospital

We assigned each physician with an office located in a sampled HRR (assessed using the AMA Masterfile) to a principal hospital based on where they filed the plurality of inpatient claims, or if they did not do any inpatient work, to the hospital where the plurality of patients they saw received inpatient care.30
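The plurality rule described above amounts to a frequency count. The following is an illustrative Python sketch with made-up identifiers, not the authors’ actual claims-processing code (which followed the method of reference 30):

```python
from collections import Counter

def assign_hospital(inpatient_claim_hospitals):
    """Assign a physician to the hospital receiving the plurality of
    their inpatient claims. Input: one hospital ID per claim filed.
    (Ties are broken arbitrarily by first occurrence in this sketch.)"""
    counts = Counter(inpatient_claim_hospitals)
    hospital, _ = counts.most_common(1)[0]
    return hospital

# Hypothetical physician who filed 3 claims at hospA, 1 each at B and C.
print(assign_hospital(["hospA", "hospB", "hospA", "hospA", "hospC"]))  # -> hospA
```

For physicians with no inpatient claims, the same counting logic would be applied to the hospitals where their patients received inpatient care.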

Our sample initially included 65,757 eligible physicians in office-based patient care specialties (excluding pathology, emergency medicine, radiology, and anesthesiology) affiliated with 867 general medical/surgical hospitals within the selected regions. After excluding low-volume hospitals for which the outcome measures could not be ascertained (≤400 deaths annually) and physicians with no ties within their assigned hospital (mostly physicians located at the border of an HRR who primarily used a hospital outside our sample), our final sample included 61,461 physicians affiliated with 528 hospitals.

Ascertaining and Measuring Hospital-Affiliated Networks

To define a network of relationships among the physicians in our dataset, we identified a relationship (tie) between two doctors if each had a significant encounter with one or more common patients. These encounters included face-to-face visits or meaningful procedures with a value of at least 2 relative value units (RVUs). The RVU threshold was chosen to capture encounters for which an office visit might not be billed, such as those related to bundled surgical procedures. After identifying significant encounters between physicians and patients, as depicted in Figure 1A, we created a tie between any two doctors who cared for one or more patients in common (outlined schematically in Figure 1B). The use of shared patients to identify network ties has been validated in a recent study.14
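The tie-construction step can be sketched in a few lines of Python. This is an illustrative toy version with hypothetical patient and physician IDs (the authors’ analysis used Medicare claims and R/igraph); the claim rows are assumed to have already been filtered to “significant” encounters:

```python
from itertools import combinations
from collections import defaultdict

# Hypothetical claim rows: (patient_id, physician_id), one per
# significant encounter (face-to-face visit or procedure >= 2 RVUs).
claims = [
    ("pt1", "drA"), ("pt1", "drB"),  # pt1 seen by A and B -> tie A-B
    ("pt2", "drB"), ("pt2", "drC"),  # pt2 seen by B and C -> tie B-C
    ("pt3", "drA"), ("pt3", "drB"), ("pt3", "drC"),
]

def build_ties(claims):
    """Create an undirected tie between every pair of physicians who had
    a significant encounter with at least one common patient."""
    doctors_by_patient = defaultdict(set)
    for patient, doctor in claims:
        doctors_by_patient[patient].add(doctor)
    ties = set()
    for doctors in doctors_by_patient.values():
        for a, b in combinations(sorted(doctors), 2):
            ties.add((a, b))
    return ties

print(sorted(build_ties(claims)))
# Every pair among drA, drB, drC shares at least one patient here,
# so the resulting network is a triangle of three ties.
```

At Medicare scale this pairwise expansion is the expensive step, since a patient seen by k physicians generates k(k−1)/2 candidate ties.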

Figure 1
Schematic of Network Identification and Measurement Methods

We focused on two structural measures commonly used in network analyses that we hypothesized would reflect care coordination within the network. These measures are briefly summarized below, and are schematically explained in Figure 1C with additional details presented in the Appendix.

Degree is defined as the number of ties an individual physician has in the network, or equivalently, the number of other doctors a physician is connected with through the sharing of patients. The doctors contributing to a physician’s degree can be any doctors the physician’s patients have seen in the physician’s HRR. To account for a physician’s Medicare patient volume, we adjusted degree by dividing each physician’s degree by the total number of Medicare patients the physician shared in 2006 with other doctors (controlling for a physician having higher degree simply because he sees more patients). A physician with a high adjusted degree shares his patients with a broader array of other doctors than a physician with a low adjusted degree. This measure is independent of the number of doctors seen per patient: for example, a physician’s fixed panel of patients could all see the same 20 specialists or could see 200 different specialists. Because our analysis was at the hospital level, we summarized this measure across the physicians in a hospital network using the median adjusted degree of all physicians at a hospital. We hypothesized that hospitals whose physician networks had a higher median adjusted degree would have higher costs and care intensity because of the greater challenges of coordinating care as a physician shares patients with more colleagues.24–26,31 A physician sharing patient care with a broad set of colleagues may have more difficulty consolidating his patients’ clinical information and guiding their care than a physician sharing patients with fewer colleagues.
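The adjustment and hospital-level summary above reduce to simple arithmetic. A minimal sketch with made-up physician counts (reported per 100 shared patients, matching the scale of the results below):

```python
from statistics import median

def adjusted_degree(raw_degree, shared_patients):
    """Degree divided by shared Medicare patient volume, scaled to
    'connections per 100 shared patients'."""
    return 100.0 * raw_degree / shared_patients

# Hypothetical physicians at one hospital:
# (physician, raw degree, patients shared with other doctors)
hospital_physicians = [
    ("drA", 60, 40),   # 150 connections per 100 shared patients
    ("drB", 90, 50),   # 180
    ("drC", 140, 70),  # 200
]

# Hospital-level summary: median adjusted degree across its physicians.
hospital_median = median(
    adjusted_degree(deg, n) for _, deg, n in hospital_physicians
)
print(hospital_median)  # -> 180.0
```

Under this scaling, a hospital value of 187 (as reported in the Results) means the typical physician connects to 187 other doctors per 100 shared patients.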

Betweenness centrality is a measure that describes the tendency of a physician to be located in the middle of the network surrounding him.32 The betweenness centrality of a physician is calculated in the following way: consider connecting each physician to every other physician in the network going through as few intermediate relationships as possible; the betweenness centrality of a physician is proportional to the number of times he lies on any of these paths as an intermediary (see the Appendix for equation and details). Visually, doctors with high betweenness centrality lie in the middle, rather than on the periphery, of a network map visualized with standard algorithms.33 Physicians with higher betweenness centrality are well positioned in their network to have greater access to, and influence on, the flow of information among doctors in their network. In Figure 1C, the size of each circle (representing a physician) is proportional to that physician’s betweenness centrality. The calculation of betweenness centrality is confined to doctors within a hospital.

We were interested in how central PCPs were in a hospital network relative to all other doctors in the network. To calculate this measure, we used the ratio of the average centrality of PCPs over the average centrality of all other doctors in a hospital (see Appendix). The resulting relative centrality value can be interpreted as how much more or less central PCPs are when compared to the other doctors in a hospital network. Given the importance of primary care systems for health care costs,34 we hypothesized that hospitals whose networks of patient sharing were more centered around PCPs would have lower costs and care intensity. Hospitals with networks focused around PCPs may have improved capacity for care coordination because specialists are more likely to share patients with a core set of PCPs in those systems (Figure 2).
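The two quantities above (betweenness centrality and the PCP relative-centrality ratio) can be illustrated on a toy graph. The authors computed betweenness with R/igraph; the brute-force shortest-path enumeration below is a hypothetical pure-Python sketch suitable only for tiny networks:

```python
from itertools import combinations
from collections import deque

# Hypothetical toy network: one PCP bridging specialist clusters.
edges = [("pcp1", "spec1"), ("pcp1", "spec2"), ("pcp1", "spec3"),
         ("spec1", "spec2"), ("spec3", "spec4")]

adj = {}
for a, b in edges:
    adj.setdefault(a, set()).add(b)
    adj.setdefault(b, set()).add(a)

def all_shortest_paths(s, t):
    """Enumerate all shortest s-t paths by breadth-first search."""
    paths, queue, best = [], deque([[s]]), None
    while queue:
        path = queue.popleft()
        if best is not None and len(path) > best:
            break
        if path[-1] == t:
            best = len(path)
            paths.append(path)
            continue
        for nxt in adj[path[-1]]:
            if nxt not in path:
                queue.append(path + [nxt])
    return paths

def betweenness(node):
    """Unnormalized betweenness: over all unordered pairs (s, t), the
    fraction of shortest s-t paths with `node` as an interior vertex."""
    total = 0.0
    for s, t in combinations(adj, 2):
        if node in (s, t):
            continue
        paths = all_shortest_paths(s, t)
        total += sum(node in p[1:-1] for p in paths) / len(paths)
    return total

pcps = {"pcp1"}
others = set(adj) - pcps
mean = lambda xs: sum(xs) / len(xs)

# Relative centrality: mean PCP betweenness over the mean betweenness of
# all other doctors; values above 1 mean PCPs are more central than peers.
relative_centrality = mean([betweenness(d) for d in pcps]) / \
                      mean([betweenness(d) for d in others])
print(round(relative_centrality, 2))  # -> 5.33
```

In this toy example the lone PCP sits on most shortest paths between specialists, so the ratio is well above 1, analogous to the PCP-centered hospital in Figure 2C.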

Figure 2
Example Hospital Networks: Illustrating Relative Centrality

Hospital Control Variables

Hospital-level control variables included the number of hospital beds, number of physicians assigned to a hospital, location (urban or suburban/rural/isolated),35 teaching hospital status (major [member of the Council of Teaching Hospitals], minor [any non-major teaching hospital with medical school affiliation or residency program], none), ownership (not-for-profit, for-profit, government),36 nurse full time equivalents per 1000 inpatient days, and the percentage of admissions from Medicare and Medicaid patients. In addition, we controlled for the proportion of physicians assigned to a hospital who were PCPs and the mean shared patient volume per physician at a hospital, defined for each physician as the number of patients shared in 2006 with other doctors.

Statistical Analysis

We compared our sample of hospitals with hospitals nationally using χ2 or t-tests and assessed differences in network measures across hospitals using one-way analysis of variance. We used multivariable weighted linear regression to model the effect of each network structure measure on cost or care intensity outcomes, adjusting for the hospital characteristics detailed above. To account for skewness, we log-transformed each outcome variable, and to account for the precision with which each outcome was measured, we weighted observations by the number of annual deaths used by the Dartmouth Atlas group to measure the cost and utilization data.27 We also used robust heteroscedasticity-consistent standard error estimation in model fitting37,38 to account for the possibility that the variance of an observation varies with the mean value of the predictors.

Because some small hospitals had excessively large or small centrality ratios, we set outliers to the 1st and 99th percentile values, respectively (see Appendix).39 To enable regression coefficients to be compared directly, we centered each continuous network predictor and hospital covariate to have mean 0 and standard deviation 1 over the entire sample. To aid comparison across the models presented, regression coefficients are reported as the percent change in the outcome of interest expected for an average-sized urban hospital (the median hospital) per one-SD increase in the network measure. Because network measures differed for urban and non-urban hospitals, we performed a secondary analysis of covariance for each model described above that included an interaction between the network variable of interest and urban/rural location.

Five hospitals had missing data for the general medical/surgical and ICU hospital days outcomes; in addition, 7 and 25 hospitals were missing data for PCP and medical specialist relative centrality, respectively, due to undefined values (see Appendix). Because the missing values involved the outcomes and key predictors of interest, hospitals with missing data were excluded from the relevant models (reflected in the sample sizes shown in Figure 3). We performed extensive sensitivity analyses to test the decisions made in the modeling process and found that our results were robust under a variety of conditions, including when accounting for Medicare Advantage penetration (data not shown; see Appendix). Complete results for all covariates included in the models are in Appendix Table 2.

Figure 3
Adjusted Estimates of Hospital Network Structure vs. Cost and Utilization Outcomes

All analyses were performed in R version 2.10,40 using the igraph package (version 0.5.3) for calculating network structure measures and the Zelig package (version 3.4–8) for multivariable regression models.41,42 We visualized hospital networks using the Fruchterman-Reingold algorithm as implemented in igraph.33,43 This study was approved by the institutional review board at Harvard Medical School.

Results

We studied 528 hospitals and the 61,461 physicians caring for 2.6 million Medicare patients who comprised their associated physician networks. Compared with all general medical/surgical hospitals in the US, our sample contained larger hospitals that were more likely to be in urban settings (p<0.001 for both) (Table 1).

Table 1
Sample Characteristics

Hospital Characteristics and Network Structure

The average median adjusted degree was 187 (SD=86) for mid-sized hospitals in our sample, ranging from 155 (SD=57) for smaller hospitals to 281 (SD=124) for larger hospitals (p<0.001) (Table 2). Thus, the typical physician in a mid-sized hospital shared patients with 187 other doctors for every 100 patients shared with other doctors. PCP relative centrality decreased with hospital size, from a mean of 1.11 (SD=0.87) in smaller hospitals to 0.80 (SD=0.54) in larger hospitals (p<0.001).

Table 2
Average Network Measures by Selected Hospital Characteristics

To illustrate the concept of relative centrality, the network graphs of three similarly sized hospitals are depicted in Figure 2. In the network for the hospital in Fig. 2A, medical specialists and surgeons are far more central, with almost all PCPs being located in the periphery of the network. In contrast, the networks of the hospitals in Figs. 2B and 2C have PCPs more prominently participating as central physicians. The relative centrality of PCPs in hospitals A, B and C are 0.35, 1.0 and 2.0, respectively.

Relationship between Hospital Networks and Care Patterns

The unadjusted relationships between median adjusted degree, PCP centrality, and total Medicare spending per hospital are depicted in Appendix Figure 1.

Adjusted relationships between hospital network structure and hospital outcomes, controlling for hospital characteristics, are presented in Figure 3. For the average-sized, urban hospital in our sample, an increase of one standard deviation (SD) in the median adjusted degree (corresponding to 107 additional doctors per 100 shared patients in the typical doctor’s number of contacts) was associated with a 17.8% (95% CI, 13.2, 22.5) increase in total Medicare spending, 17.4% (95% CI, 12.6, 22.4) more hospital days, and 23.8% (95% CI, 18.6, 29.1) more physician visits in the last 2 years of life.

In contrast, higher centrality of primary care providers within an average-sized urban hospital network was associated with a 6.0% (95% CI, −9.5, −2.4) decrease in overall spending, along with 9.2% (95% CI, −13.1, −5.1) lower spending on imaging and 12.9% (95% CI, −17.0, −8.6) lower spending on tests for a 1-SD increase. In addition, higher PCP centrality was accompanied by 8.6% (95% CI, −19.4, −9.7) fewer physician visits and 14.7% (95% CI, −19.4, −9.7) fewer medical specialist visits.

In analyses examining the interaction between the network measures and urban/non-urban location, the association between median adjusted degree and all nine cost and utilization outcomes was unaffected by the urban/non-urban location of hospitals (all p>0.05 for interaction). The association between PCP relative centrality and the nine outcomes was mostly non-significant, although still negative, for non-urban hospitals (all p<0.001 for interaction except for general medical/surgical hospital days, p=0.06 and ICU days, p=0.14, results not shown).

Discussion

This is the first large-scale analysis to explore how the structure of patient-sharing relationships among physicians is related to care patterns within hospitals. In addition, we present a novel method for using readily available administrative data to construct networks of physicians that will be useful for studying physician practice patterns.14 We find that the structure of physician patient-sharing networks is significantly associated with Medicare spending and care patterns. Higher adjusted degree is associated with higher spending and health care utilization even after adjusting for hospital characteristics. In contrast, higher PCP relative centrality is associated with lower spending and utilization. These results are consistent with the hypothesis that network measures reflective of poorer coordination of care within hospitals are associated with higher costs and care intensity.

We found that hospitals with physicians whose patients see a broader array of other doctors (higher adjusted degree) have higher levels of spending. They also use more hospital care, physician visits, and imaging. These associations may reflect the difficulty of care coordination as physicians have to manage information from an increasing number of colleagues, which could be either a cause or an effect of increased health care utilization.

Another possible explanation for this finding might be that hospitals whose physicians have high median adjusted degree have sicker patients (who see more physicians), leading to higher costs and utilization of services. Our methods make this unlikely for two reasons. First, our outcome measures are risk-adjusted to reflect similar patient populations, so differences in costs do not simply reflect differences in burden of illness.13 Second, the adjusted degree measure is distinct from the number of physicians that patients see: it captures whether the doctors caring for a physician’s patients form a broad or a focused network, not how many doctors each patient sees.

In contrast, a network measure that likely reflects greater coordination of care, PCP relative centrality, was associated with lower imaging and test spending in addition to fewer ICU days and specialist visits. These findings build upon prior state-level analyses showing that states with more PCPs have lower costs,44 but extend this work to more formal network analysis considering the relative location of PCPs within a network of their colleagues. Interestingly, PCP relative centrality did not have a significant association with costs and utilization in non-urban hospitals. One possible interpretation of this result is that urban hospitals without primary care centered networks may be more likely to use readily-available specialist services, whereas in non-urban areas, this may not be as relevant because of less access to specialists.45 Further research is needed to understand the interaction between PCP centrality in urban and non-urban settings, but this approach could provide insight into how PCPs might best be utilized to contain costs and care utilization.

A prior study showed that the average primary care physician shares patients with approximately 99 other physicians based at 53 other practices per 100 Medicare patients treated.31 That analysis, however, was based on patients assigned to individual primary care physicians. We demonstrate that, when considering all patients being cared for by all physicians, including both PCPs and specialists, physicians are connected to 155–281 doctors per 100 Medicare patients shared with other doctors. This network-based perspective illustrates the challenge of care coordination among physicians.

Our study is subject to several limitations. First, we ascertained network structure based on the presence of shared patients using administrative data. While this technique has been validated,14 we nevertheless cannot know what information or behaviors, if any, pass across the ties defined by shared patients. In addition, our data are cross-sectional and only included elderly patients insured by the Medicare program. The local network of physicians and patients in a hospital or region is likely to be in flux, and future analyses would be enhanced by longitudinal data. Furthermore, the sample of hospitals we used is representative of larger, urban hospitals rather than all US hospitals. However, because the sample included a full range of hospital sizes, and because our focus is on the relationship between variables (not population aggregates or means), the representativeness should be less of a concern. Also, we used risk-adjusted hospital-level data on costs and care intensity averaged over 2001–2005 for our outcome measures while our networks were mapped with 2006 data. This discrepancy, however, would tend to bias our results towards the null.

Next, our main dependent variables were calculated using several years of data from the Medicare program by the Dartmouth Atlas of Health Care. Although others have noted the possibility of inadequate risk adjustment or failure to account for differences in the prices paid for services in different regions,46,47 substantial variation in spending remains even after further risk adjustment.3,4 In addition, our models include several hospital-level characteristics that are likely to be associated with unmeasured case mix, including size, urban versus rural location, and teaching hospital affiliation. With regard to prices, although the spending measures were not adjusted for regional payment differences, regional variation is reduced only modestly when taking prices into account.47 Moreover, our six utilization measures (e.g., hospitalizations) would not be affected by price differences, and the findings for these outcomes serve to validate the findings we observed for spending.

Lastly, due to the observational design of this study, our results should not necessarily be interpreted as causal. Further work is needed to determine the causal mechanisms underlying these associations. In addition, though we adjusted for numerous covariates, we cannot rule out the possibility of unobserved confounders that could help explain the mechanisms driving the associations we observe. These unmeasured confounders could reflect local medical culture and market dynamics.

In summary, we studied a large sample of physician networks to examine how network structures reflect health care in a national sample of hospitals. This analysis highlights the importance of physician relationship networks – networks that are embedded in institutional structures and that may inform health policy and physician workforce management. We demonstrate that the characteristics of physician networks affiliated with a hospital are correlated with a hospital’s performance in a manner consistent with the hypothesis that poorer coordination of care is associated with greater spending and care intensity.

Supplementary Material


References

1. Fisher ES, Wennberg DE, Stukel TA, Gottlieb DJ, Lucas FL, Pinder EL. The implications of regional variations in Medicare spending. Part 1: health outcomes and satisfaction with care. Ann Intern Med. 2003;138:273–87. [PubMed]
2. Wennberg JE, Fisher ES, Stukel TA, Skinner JS, Sharp SM, Bronner KK. Use of hospitals, physician visits, and hospice care during last six months of life among cohorts loyal to highly respected hospitals in the United States. BMJ: British Medical Journal. 2004;328:607. [PMC free article] [PubMed]
3. Medicare Payment Advisory Commission. Report to Congress: measuring regional variation in service use. Washington, DC: MedPAC; 2009.
4. Zuckerman S, Waidmann T, Berenson R, Hadley J. Clarifying sources of geographic differences in Medicare spending. N Engl J Med. 2010;363:54–62. [PubMed]
5. Yasaitis L, Fisher ES, Skinner JS, Chandra A. Hospital quality and intensity of spending: is there an association? Health Aff (Millwood) 2009;28:w566–72. [PMC free article] [PubMed]
6. Barnato AE, Chang CC, Farrell MH, Lave JR, Roberts MS, Angus DC. Is survival better at hospitals with higher “end-of-life” treatment intensity? Med Care. 2010;48:125–32. [PubMed]
7. Jha AK, Chan DC, Ridgway AB, Franz C, Bates DW. Improving safety and eliminating redundant tests: cutting costs in U.S. hospitals. Health Aff (Millwood) 2009;28:1475–84. [PubMed]
8. Sirovich B, Gottlieb D, Welch H, Fisher E. Variation in the Tendency of Primary Care Physicians to Intervene. Archives of Internal Medicine. 2005;165:2252. [PubMed]
9. Sirovich B, Gallagher PM, Wennberg DE, Fisher ES. Discretionary decision making by primary care physicians and the cost of U.S. Health care. Health affairs (Project Hope) 2008;27:813–23. [PMC free article] [PubMed]
10. Lucas FL, Sirovich BE, Gallagher PM, Siewers AE, Wennberg DE. Variation in cardiologists’ propensity to test and treat: is it associated with regional variation in utilization? Circ Cardiovasc Qual Outcomes. 2010;3:253–60. [PMC free article] [PubMed]
11. Keating NL, Zaslavsky AM, Ayanian JZ. Physicians’ experiences and beliefs regarding informal consultation. JAMA. 1998;280:900–4. [PubMed]
12. Gabbay J, le May A. Evidence based guidelines or collectively constructed “mindlines”? Ethnographic study of knowledge management in primary care. BMJ. 2004;329:1013. [PMC free article] [PubMed]
13. Skinner J, Staiger D, Fisher ES. Looking back, moving forward. N Engl J Med. 2010;362:569–74. discussion 74. [PubMed]
14. Barnett ML, Landon BE, O’Malley AJ, Keating NL, Christakis NA. Mapping Physician Networks with Self-Reported and Administrative Data. Health Serv Res. 2011 [PMC free article] [PubMed]
15. Newman ME. The structure and function of complex networks. SIAM Review. 2003;45:167–256.
16. Christakis NA, Fowler JH. The spread of obesity in a large social network over 32 years. N Engl J Med. 2007;357:370–9. [PubMed]
17. Uzzi B. A social network’s changing statistical properties and the quality of human innovation. Journal of Physics A: Mathematical and Theoretical. 2008;41:224023.
18. Newman ME. Scientific collaboration networks. II. Shortest paths, weighted networks, and centrality. Phys Rev E Stat Nonlin Soft Matter Phys. 2001;64:016132. [PubMed]
19. Davis GF. The significance of board interlocks for corporate governance. Corporate Governance. 2006;4:154–9.
20. Keating NL, Ayanian JZ, Cleary PD, Marsden PV. Factors affecting influential discussions among physicians: a social network analysis of a primary care practice. J Gen Intern Med. 2007;22:794–8. [PMC free article] [PubMed]
21. Coleman J, Katz E, Menzel H. The diffusion of innovations among physicians. Sociometry. 1957;20:253–70.
22. Iyengar R, Van den Bulte C, Valente TW. Opinion leadership and social contagion in new product diffusion. Marketing Science. 2010 In Press.
23. Christakis NA, Fowler JH. Contagion in prescribing behavior among networks of doctors. Marketing Science. 2010 In Press.
24. Raddish M, Horn SD, Sharkey PD. Continuity of care: is it cost effective? Am J Manag Care. 1999;5:727–34. [PubMed]
25. Wasson JH, Sauvigne AE, Mogielnicki RP, et al. Continuity of outpatient medical care in elderly men. A randomized trial. JAMA. 1984;252:2413–7. [PubMed]
26. Valenstein P, Leiken A, Lehmann C. Test-ordering by multiple physicians increases unnecessary laboratory examinations. Arch Pathol Lab Med. 1988;112:238–41. [PubMed]
27. Research-level data of End of Life Measures from The Dartmouth Atlas of Health Care. 2008. Available at: http://intensity.dartmouth.edu/?q=node/68. Accessed May 6, 2010.
28. Iezzoni LI, Heeren T, Foley SM, Daley J, Hughes J, Coffman GA. Chronic conditions and risk of in-hospital death. Health Serv Res. 1994;29:435–60. [PMC free article] [PubMed]
29. Wennberg JE, Fisher E, Goodman DC, Skinner J. Tracking The Care of Patients with Severe Chronic Illness: The Dartmouth Atlas of Health Care, 2008. Lebanon, NH: The Dartmouth Institute for Health Policy & Clinical Practice, Center for Health Policy Research; 2008.
30. Bynum JP, Bernal-Delgado E, Gottlieb D, Fisher E. Assigning ambulatory patients and their physicians to hospitals: a method for obtaining population-based provider performance measurements. Health Serv Res. 2007;42:45–62. [PMC free article] [PubMed]
31. Pham HH, O’Malley AS, Bach PB, Saiontz-Martinez C, Schrag D. Primary care physicians’ links to other physicians through Medicare patients: the scope of care coordination. Ann Intern Med. 2009;150:236–42. [PubMed]
32. Wasserman S, Faust K. Social Network Analysis: Methods and Applications. Cambridge, MA: Cambridge University Press; 1994.
33. Fruchterman TMJ, Reingold EM. Graph Drawing by Force-directed Placement. Software-Practice and Experience. 1991;21:1129–64.
34. Starfield B, Shi L, Macinko J. Contribution of primary care to health systems and health. Milbank Q. 2005;83:457–502. [PubMed]
35. Rural-Urban Commuting Code database. University of Washington. Available at: http://depts.washington.edu/uwruca/index.php. Accessed May 6, 2010.
36. Landon BE, Normand SL, Lessler A, et al. Quality of care for the treatment of acute medical conditions in US hospitals. Arch Intern Med. 2006;166:2511–7. [PubMed]
37. White H. A heteroskedasticity-consistent covariance matrix and a direct test for heteroskedasticity. Econometrica. 1980;48:817–38.
38. Zeileis A. Econometric Computing with HC and HAC Covariance Matrix Estimators. Journal of Statistical Software. 2004;11:1–17.
39. Wilcox RR. Introduction to Robust Estimation and Hypothesis Testing. New York: Elsevier; 2005.
40. R Development Core Team. R: A language and environment for statistical computing. Vienna, Austria: R Foundation for Statistical Computing; 2009.
41. Csardi G, Nepusz T. The igraph software package for complex network research. InterJournal, Complex Systems. 2006
42. Imai K, King G, Lau O. Zelig: Everyone’s Statistical Software. R package version 3.4–8. 2010.
43. sna: Tools for Social Network Analysis. R package version 2.0–1. 2009. Available at: http://erzuli.ss.uci.edu/R.stuff. Accessed May 6, 2010.
44. Baicker K, Chandra A. Medicare spending, the physician workforce, and beneficiaries’ quality of care. Health Aff (Millwood) 2004;(Suppl Web Exclusives):W184–97. [PubMed]
45. Rosenthal MB, Zaslavsky A, Newhouse JP. The geographic distribution of physicians revisited. Health Serv Res. 2005;40:1931–52. [PMC free article] [PubMed]
46. Bach PB. A map to bad policy--hospital efficiency measures in the Dartmouth Atlas. N Engl J Med. 2010;362:569–73. discussion p 74. [PubMed]
47. Gottlieb DJ, Zhou W, Song Y, Andrews KG, Skinner JS, Sutherland JM. Prices don’t drive regional Medicare spending variations. Health Aff (Millwood) 2010;29:537–43. [PMC free article] [PubMed]