Health Serv Res. 2012 February; 47(1 Pt 2): 380–403.
PMCID: PMC3258347

Physician Social Networks and Variation in Prostate Cancer Treatment in Three Cities



Objective

To examine whether physician social networks are associated with variation in treatment for men with localized prostate cancer.

Data Source

2004–2005 Surveillance, Epidemiology and End Results-Medicare data from three cities.

Study Design

We identified the physicians who care for patients with prostate cancer and created physician networks for each city based on shared patients. Subgroups of urologists were defined as physicians with dense connections with one another via shared patients.

Principal Findings

Subgroups varied widely in their unadjusted rates of prostatectomy and in the racial/ethnic and socioeconomic composition of their patients. There was an association between urologist subgroup and receipt of prostatectomy. In city A, four subgroups had significantly lower odds of prostatectomy compared with the subgroup with the highest rate of prostatectomy after adjusting for patient clinical and sociodemographic characteristics. Similarly, two subgroups in city B and five in city C had significantly lower odds of prostatectomy compared with their respective baseline subgroups.


Conclusions

Using claims data to identify physician networks may provide insight into the observed variation in treatment patterns for men with prostate cancer.

Keywords: Physician networks, physician referral, network analysis, prostate cancer

There exist large differences in the type, quality, and costs of care that patients receive across geographic areas (The Center for Evaluative Clinical Sciences 1996) and efforts to reduce variation have figured prominently in health care reform (Gawande 2009; Epstein 2010). Unexplained variation in treatment may result, in part, from distinct physician practice styles in the setting of regional health care environments (The Center for Evaluative Clinical Sciences 1998). Provider relationships may be a key determinant of local practice styles. These relationships—both formal and informal—may be related to practice structure, underlie observed referral patterns (Kinchen et al. 2004; Forrest et al. 2006), lead to the diffusion of innovation (Coleman, Katz, and Menzel 1957; Rogers 1995), and be associated with the sharing of clinical advice (Coleman, Katz, and Menzel 1966).

In aggregate, the relationships between physicians form the building blocks of larger network structures within which health care is delivered in given geographic areas. Emerging tools allow for these networks to be mapped and analyzed (Newman, Barabasi, and Watts 2006). Evidence suggests that network structure is important in the spread of health behaviors and information (Coleman, Katz, and Menzel 1966; Christakis and Fowler 2007, 2008), and social network analysis is increasingly applied to the context of health care delivery (Keating et al. 2007; Iwashyna et al. 2009; Barnett et al. 2011).

In this study, we used social network analysis to map the relationships among doctors who treat an elderly cohort of patients with localized prostate cancer in three U.S. cities. Prostate cancer is an important case to study because it is common and its treatment varies widely across practice settings, geographic areas, and patient race/ethnicity and socioeconomic status (Lu-Yao et al. 1993; Krongrad, Lai, and Lai 1997; Lai et al. 2001; Krupski et al. 2005; Cooperberg, Broering, and Carroll 2010). In 2010, over 170,000 U.S. men were diagnosed with localized prostate cancer (Jemal et al. 2010). Although multiple treatment modalities exist, including radical prostatectomy, radiation therapy, and expectant management, the optimal treatment strategy remains unknown (NCCN 2007; Thompson, Thrasher, and Aus 2007; Walsh, DeWeese, and Eisenberger 2007; Schroeder, Roach, and Scardino 2008). The treatments vary in their rates of side effects, including incontinence and bowel or erectile dysfunction (Stanford, Feng, and Hamilton 2000; Potosky, Davis, and Hoffman 2004; Pardo, Guedea, and Aguiló 2010), and in financial costs (Wilson, Tesoro, and Elkin 2007; Snyder, Frick, and Blackford 2010). This wide variation in treatment cannot be fully explained by clinical features, and it is unlikely to reflect differences in patient preferences alone (Gilligan 2005; Dartmouth Atlas Project, Center for the Evaluative Clinical Sciences 2007). In the setting of clinical uncertainty (McNeil 2001), physicians’ relationships with peers may play an important role in treatment decisions.

In our study, physicians were considered related to one another if they shared in the provision of care for an individual patient with prostate cancer. Prior research has found that doctors with a higher number of shared patients in claims data are more likely to know and communicate with one another (Barnett et al. 2011). Our primary goal was to determine whether provider relationships in the context of the network structure were associated with patterns of prostate cancer treatment.


Methods

Study Design

The study was a retrospective, observational cohort study using registry and administrative claims data from the Surveillance, Epidemiology and End Results (SEER)-Medicare database. The study was approved by the institutional review boards at the University of Pennsylvania and the Johns Hopkins University School of Medicine.

Data Sources

The SEER-Medicare database links patient demographic and tumor-specific data collected by SEER cancer registries to longitudinal health care claims for Medicare enrollees (Potosky et al. 1993). Data on physicians’ specialties were available from the Medicare Physician Identification and Eligibility Registry (MPIER file), and practice address was determined from the 2005 American Medical Association (AMA) Masterfile. MPIER and AMA data were linked to the SEER-Medicare data through Unique Provider Identification Numbers.

Study Population

We identified men aged 65 years or older living in three cities with prostate cancer diagnosed between January 1, 2004 and December 31, 2005 in SEER, with follow-up through December 31, 2006 in Medicare. Two years of data were analyzed to allow for adequate connectivity of the networks based on our preliminary analyses. Sites were selected for the likely clustering of providers in large cities and for their diverse patient populations. Site identities are withheld to protect patient and physician confidentiality. Patients with inadequate Medicare records (i.e., those enrolled in health maintenance organizations or not enrolled in the fee-for-service Medicare program) were excluded. For the construction of the networks, we included all men without metastatic disease (N = 5,353). Based on our clinical experience, we expected that patients with metastatic disease may have substantially different patterns of diagnosis and referral.

For analyses examining the association between network structures and treatment patterns, we further limited the sample to men with AJCC 6th edition stage 1 or 2 disease. We excluded (in a sequential fashion) men with node-positive disease (N = 39), unknown stage (N = 122) or stage 3 disease (N = 257), unknown Gleason score (N = 136), and men who could not be matched to a diagnosing urologist (N = 279). The final analytic sample size was 4,520.

Definition of Variables


Prostatectomy

Prostatectomy and other primary treatments for localized prostate cancer were identified from Medicare inpatient, outpatient, and physician/supplier component files as described previously (Bekelman et al. 2007; Jang et al. 2007).

Other Variables

Patient characteristics were categorized from SEER-Medicare data. Gleason score was categorized as <7, 7, and 8–10. Prostate-specific antigen (PSA) at the time of diagnosis was classified as 4 ng/ml or less, >4 to <10, 10 or greater, or unknown. Patient comorbidities were identified by classifying all available inpatient and outpatient Medicare claims from the 90-day interval preceding prostate cancer diagnosis into 46 categories (Elixhauser et al. 1998; Silber, Rosenbaum, and Trudeau 2001; Wong, Mitra, and Hudes 2006). For clarity, comorbidity is reported as the number (0, 1, ≥2) of the possible 46 comorbidity groups identified for each patient. Race was classified from both SEER and Medicare sources: individuals were considered black if they were classified as black in either data source without a co-designation of Hispanic or Asian; white if they were classified as white in either data source without a co-designation of black, Hispanic, or Asian; and other/unknown otherwise. U.S. Census information was used as a proxy for individual measures of socioeconomic status: men were linked to their census tract (or, when not available, zip code) to determine median income.
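The comorbidity grouping described above can be sketched in a few lines. This is an illustrative sketch only: the code-to-category mapping below is hypothetical (the study uses 46 claims-based categories per Elixhauser et al. 1998), and only the counting and 0/1/≥2 bucketing mirror the text.

```python
# Hypothetical mapping from diagnosis codes to comorbidity categories; the
# study classifies claims into 46 claims-based groups (Elixhauser et al. 1998).
CODE_TO_CATEGORY = {
    "250.00": "diabetes",
    "250.02": "diabetes",
    "401.9": "hypertension",
    "428.0": "congestive heart failure",
}

def comorbidity_level(claim_codes):
    """Count distinct comorbidity categories across a patient's claims in the
    90-day window and report the 0 / 1 / >=2 grouping used in the text."""
    categories = {CODE_TO_CATEGORY[c] for c in claim_codes if c in CODE_TO_CATEGORY}
    n = len(categories)
    return "0" if n == 0 else ("1" if n == 1 else ">=2")
```

Note that two codes mapping to the same category (e.g., two diabetes codes) count as one comorbidity group, matching the distinct-category definition.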

Network Creation

Provider Definitions

In constructing networks, we focused on doctors who were most likely to be involved in the patients’ prostate cancer care, and thus most likely to interact (either directly or indirectly) with one another. Match rates were based on all men included in the network construction (N = 5,353). The following doctors were included:

Diagnosing urologist. The urologist most likely to have diagnosed the patient's prostate cancer was defined as the urologist who billed for a claim on the date of the patient's diagnosis. If no such claim was submitted, we chose the urologist who saw the patient nearest to the date of diagnosis in the 3 months before diagnosis. If no urologist was identified, we selected the urologist who saw the patient nearest to the date of diagnosis in the 3 months following diagnosis. The match rate for diagnosing urologists was 94.2 percent.

Majority urologist was defined as the urologist who billed for claims on the most days in the 9 months following diagnosis. The match rate was 93.9 percent, and in 86.1 percent of cases, the diagnosing and majority urologists were the same.

Primary care provider (PCP). For each patient, we identified the PCP according to previously published algorithms (Pham et al. 2007). Inpatient and outpatient claims from the 12 months prior to the date of diagnosis were coded using the Berenson-Eggers algorithm, with inclusion of evaluation and management visits (classified as M1a, M1b, or M6). The patient's PCP was defined as the internal medicine (without subspecialty training), family practice, or general practice physician who billed for the greatest number of visits. The match rate was 74.6 percent.

Plurality provider. As doctors from other specialties may play an important role in the referral process and clinical management, we included the doctor who billed for the greatest number of evaluation and management visits in the 12 months prior to the date of diagnosis, regardless of clinical specialty. The match rate was 94.6 percent; the plurality provider was the same as the PCP in 54.3 percent of cases and the same as the diagnosing urologist in 11.0 percent of cases.

Radiation oncologists. For patients who underwent external beam radiation and brachytherapy, we included the provider who performed the clinical planning and simulation. The match rate was 98.9 percent for patients who underwent external beam radiation therapy and 92.8 percent for brachytherapy.
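The diagnosing- and majority-urologist rules above can be sketched as simple claim scans. This is a minimal sketch with hypothetical identifiers, assuming each patient's claims are supplied as (urologist id, service date) pairs; the actual matching was performed against Medicare claim files.

```python
from collections import Counter
from datetime import date

def diagnosing_urologist(claims, dx_date):
    """claims: list of (urologist_id, service_date) tuples for one patient.
    Mirrors the rule above: prefer a claim on the diagnosis date, then the
    nearest claim in the 3 months (~90 days) before diagnosis, then the
    nearest in the 3 months after; returns None if no urologist matches."""
    on_date = [u for u, d in claims if d == dx_date]
    if on_date:
        return on_date[0]
    before = [((dx_date - d).days, u) for u, d in claims
              if 0 < (dx_date - d).days <= 90]
    if before:
        return min(before)[1]
    after = [((d - dx_date).days, u) for u, d in claims
             if 0 < (d - dx_date).days <= 90]
    if after:
        return min(after)[1]
    return None

def majority_urologist(claims):
    """Urologist billing on the most distinct service days; claims here are
    assumed to be limited to the 9 months following diagnosis."""
    days_billed = Counter(u for u, d in set(claims))  # distinct (urologist, day)
    return days_billed.most_common(1)[0][0] if days_billed else None
```

Ties (two urologists equally near the diagnosis date, or billing on the same number of days) are broken arbitrarily here; the paper does not specify its tie-breaking rule.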

Graph Construction

Networks were constructed using data from all patients without metastatic disease. A graph (network) was constructed to describe the relationships between doctors in which vertices represented doctors and edges represented shared patients between doctors. The weight of an edge was determined by the number of shared patients between a pair of doctors. Network construction was performed in R version 2.12.0 using the igraph software package (Csardi and Nepusz 2006).
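The weighted shared-patient graph described above can be built directly from patient-doctor lists. The authors used the igraph package in R; the following is a language-agnostic sketch in Python with hypothetical patient and doctor ids, showing only the edge-weight construction (edge weight = number of shared patients).

```python
from collections import defaultdict
from itertools import combinations

def build_shared_patient_network(patient_doctors):
    """patient_doctors: dict of patient id -> iterable of doctor ids.
    Returns a dict mapping each undirected doctor pair to its edge weight,
    i.e., the number of patients the two doctors share."""
    weights = defaultdict(int)
    for doctors in patient_doctors.values():
        for a, b in combinations(sorted(set(doctors)), 2):
            weights[(a, b)] += 1
    return dict(weights)

# Toy example with hypothetical ids: uroA shares two patients with pcp1.
claims = {
    "pt1": ["uroA", "pcp1"],
    "pt2": ["uroA", "pcp1", "radonc1"],
    "pt3": ["uroA", "pcp2"],
}
network = build_shared_patient_network(claims)
```

Sorting each patient's doctor set before pairing keeps the pair keys canonical, so (pcp1, uroA) and (uroA, pcp1) accumulate into the same undirected edge.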

Subgroups Definition

Subgroups define doctors who are more densely connected with one another (via shared patients) than with doctors outside the subgroup. It is hypothesized that practice style may be more similar among doctors within the same subgroup. We used the Girvan-Newman algorithm to define subgroups (called “community structures”) (Girvan and Newman 2002). This algorithm takes an iterative approach, successively removing the edges that connect disparate subgroups. Edge betweenness is calculated for the network, the edge with the highest betweenness is removed, and repeated removals progressively split the network into subgroups. A goodness-of-fit measure (modularity) is then used to determine the optimal number of subgroups (Newman 2006). Modularity measures the fraction of edges falling within subgroups minus the fraction expected to fall within them in a random network; the optimal partition is the one whose modularity is closest to 1. Each doctor is assigned to exactly one subgroup; however, a patient's doctors may span multiple subgroups. We assigned patients to the subgroup of their diagnosing urologist, as it was hypothesized that the diagnosing urologist may play an influential role in helping patients decide on treatment options.
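The edge-betweenness removal and modularity-based stopping rule can be sketched from scratch for a small unweighted graph. The authors used igraph in R; the standalone Python version below, with hypothetical node labels, is for illustration only (it evaluates each partition's modularity against the original graph and keeps the best one, a common way of operationalizing "closest to 1").

```python
from collections import defaultdict, deque

def edge_betweenness(adj, nodes):
    """Brandes-style edge betweenness for an unweighted, undirected graph."""
    eb = defaultdict(float)
    for s in nodes:
        dist, order = {s: 0}, []
        sigma = defaultdict(float); sigma[s] = 1.0
        preds = defaultdict(list)
        queue = deque([s])
        while queue:                       # BFS counting shortest paths
            v = queue.popleft(); order.append(v)
            for w in adj[v]:
                if w not in dist:
                    dist[w] = dist[v] + 1
                    queue.append(w)
                if dist[w] == dist[v] + 1:
                    sigma[w] += sigma[v]
                    preds[w].append(v)
        delta = defaultdict(float)
        for w in reversed(order):          # accumulate edge dependencies
            for v in preds[w]:
                share = sigma[v] / sigma[w] * (1.0 + delta[w])
                eb[tuple(sorted((v, w)))] += share
                delta[v] += share
    return {e: b / 2.0 for e, b in eb.items()}  # paths counted from both ends

def components(adj, nodes):
    seen, comps = set(), []
    for n in nodes:
        if n in seen:
            continue
        comp, stack = {n}, [n]; seen.add(n)
        while stack:
            v = stack.pop()
            for w in adj[v]:
                if w not in seen:
                    seen.add(w); comp.add(w); stack.append(w)
        comps.append(comp)
    return comps

def modularity(comps, edges, degree, m):
    """Fraction of edges within subgroups minus the random-network expectation."""
    return sum(
        sum(1 for u, v in edges if u in c and v in c) / m
        - (sum(degree[n] for n in c) / (2.0 * m)) ** 2
        for c in comps
    )

def girvan_newman(edges):
    nodes = sorted({n for e in edges for n in e})
    degree = defaultdict(int)
    for u, v in edges:
        degree[u] += 1; degree[v] += 1
    m = len(edges)
    adj = {n: set() for n in nodes}
    for u, v in edges:
        adj[u].add(v); adj[v].add(u)
    best, best_q = [set(nodes)], modularity([set(nodes)], edges, degree, m)
    while any(adj.values()):
        eb = edge_betweenness(adj, nodes)
        u, v = max(eb, key=eb.get)         # cut the highest-betweenness edge
        adj[u].discard(v); adj[v].discard(u)
        comps = components(adj, nodes)
        q = modularity(comps, edges, degree, m)
        if q > best_q:
            best_q, best = q, comps
    return best, best_q
```

On a toy graph of two triangles joined by a single bridge edge, the bridge has the highest betweenness and is removed first, and the best-modularity partition recovers the two triangles.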

Practice Definition

Using 2005 AMA Masterfile data, we classified diagnosing urologists as belonging in the same practice if they had the same practice address.
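Grouping urologists by shared practice address reduces to bucketing on a normalized address key. A minimal sketch with hypothetical addresses is shown below; the `ignore_suites` flag anticipates the more liberal match (same street address, different suite) described in the limitations, and the normalization is illustrative rather than the AMA Masterfile's actual format.

```python
import re
from collections import defaultdict

def group_practices(urologist_addresses, ignore_suites=False):
    """urologist_addresses: dict of urologist id -> practice address string.
    Groups urologists sharing a (normalized) address; with ignore_suites=True,
    suite designations are stripped before matching."""
    def normalize(addr):
        addr = addr.lower().strip()
        if ignore_suites:
            addr = re.sub(r"\b(suite|ste)\.?\s*\S+", "", addr)
        return re.sub(r"\s+", " ", addr).strip()

    practices = defaultdict(set)
    for uro, addr in urologist_addresses.items():
        practices[normalize(addr)].add(uro)
    return list(practices.values())
```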

Statistical Analyses

We used descriptive statistics and bivariate analyses to examine patient characteristics and network structure in each city. Logistic regression models were constructed to assess whether subgroup was associated with the odds of prostatectomy. In these analyses, the baseline subgroup in each city was the subgroup with the highest percent of patients who underwent prostatectomy. As subgroups do not span cities, separate models were constructed for each city. In the first model, we adjusted for clinical characteristics (Gleason score, tumor stage, PSA results, and comorbidity) and age, as these factors are expected to influence treatment decisions. The second model added sociodemographic features (race, community-level income, and marital status) that have previously been demonstrated to affect treatment. To account for the clustering of patients within urologists, we used generalized estimating equations with a working independence correlation structure to calculate robust standard errors (Liang and Zeger 1986). We limited our descriptive statistics and multivariable models to subgroups with at least 50 patients to highlight the importance of these larger subgroups; patients in subgroups with fewer than 50 patients were kept in the model in a dummy category. Smaller subgroups more frequently represented a single doctor's practice and thus were unlikely to shed light on formal or informal communication between doctors; lowering the threshold to subgroups with 10 patients did not significantly affect the results. Regression analyses were conducted in Stata version 11.1.
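The baseline-subgroup rule above (highest unadjusted prostatectomy rate, among subgroups with at least 50 patients) can be sketched as a simple tabulation. The regression models themselves were GEE logistic regressions fit in Stata; this sketch, using hypothetical data, covers only the descriptive step.

```python
from collections import defaultdict

def subgroup_rates(patients, min_n=50):
    """patients: list of (subgroup_id, had_prostatectomy) pairs. Returns the
    unadjusted prostatectomy rate for each subgroup with at least min_n
    patients, plus the baseline subgroup (the one with the highest rate)."""
    totals = defaultdict(int)
    events = defaultdict(int)
    for subgroup, had_rp in patients:
        totals[subgroup] += 1
        events[subgroup] += int(had_rp)
    rates = {sg: events[sg] / totals[sg] for sg in totals if totals[sg] >= min_n}
    baseline = max(rates, key=rates.get) if rates else None
    return rates, baseline
```

Subgroups below the size threshold are dropped from the rate table here; in the models they were retained as a single dummy category, as described above.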


Results

Table 1 describes characteristics of the network structures of each city. Networks were constructed using 2,420 different doctors in city A, 918 in city B, and 962 in city C. In each network, the diagnosing urologists were connected to an average of 10–12 other doctors; however, a relatively small number were very highly connected. Over 98 percent of patients in each city were part of the main component—the largest interconnected portion of the network. In the main components, there were 14 subgroups with 50 or more patients in city A, 8 in city B, and 8 in city C. Figure 1 shows the graphic representation of the network in city C.

Figure 1
In City C, Doctors Are Represented by Circles (Nodes) and Patients by Lines (Edges). Larger Sized Shapes Are Used to Denote Diagnosing Urologists. Different Shades (Colors) and Shapes Are Used for Different Subgroups, with Light Squares (Red in Color ...
Table 1
Descriptive Characteristics of the Network Structure in Three Cities, 2004–2005*

Table 2 shows the clinical and sociodemographic characteristics of the patients who were linked with a diagnosing urologist in each city and included in the final analytic sample. The cities varied in their overall rates of prostatectomy from 8.6 percent in city B to 25.3 percent in city A.

Table 2
Descriptive Characteristics of Men with Localized Prostate Cancer in Three Cities, 2004–2005

Table 3 shows the clinical and sociodemographic characteristics of patients linked to the subgroup of their diagnosing urologist. In each city, subgroups varied widely in the percent of patients who were black and the percent in the lowest income category. In city B, for example, subgroups ranged in composition from 19.6 to >90 percent white, and from 17.5 to 78.6 percent in the highest income category.

Table 3
Bivariate Analyses of Patient Sociodemographic Characteristics by Diagnosing Urologist Subgroup

Subgroups also varied in the percent of their patients who underwent a prostatectomy (e.g., in city A, 14.1–47.1 percent).

Table 4 presents the results of the multivariable regression analyses. In the final model, controlling for sociodemographic and clinical characteristics, network subgroup was significantly associated with the likelihood that a patient underwent prostatectomy in each city. In city A, four subgroups had significantly lower odds of prostatectomy compared with the baseline after adjusting for patient clinical and sociodemographic characteristics; in city B, the odds of prostatectomy were lower among two subgroups; and in city C, the odds were significantly lower among five of the subgroups.

Table 4
Odds Ratio of Prostatectomy, Adjusted for Clustering by Diagnosing Urologist

In additional analyses, we examined the relationship between subgroup membership and practice structure for the diagnosing urologists (see Appendix Table S1). We identified 197 different practices ranging in size from 1 to 4 urologists in city A, 83 practices ranging from 1 to 9 urologists in city B, and 70 practices ranging from 1 to 8 urologists in city C. In each city, the vast majority of urologists were solo practitioners (75.6 percent in city A, 74.7 percent in city B, and 77.1 percent in city C). Among the subgroups with more than 50 patients, there was an average of 13.1 urologists and 10.2 different practices per subgroup in city A, 10.4 urologists and 7.1 practices per subgroup in city B, and 10.3 urologists and 7.0 practices per subgroup in city C. We then identified the average number of subgroups per practice. Among the larger practices (≥4 providers), there were 2.0 subgroups per practice in city A, 1.5 in city B, and 2.0 in city C. Although providers in the same practice were often placed in the same subgroup, subgroups frequently represented doctors from multiple practices.


Discussion

This study finds that it is possible to use claims data to map the connections between doctors caring for patients with prostate cancer. In each city, there was a substantial group of physicians who were connected to one another through the care of patients with localized prostate cancer. Within this large component, urologist subgroup provided additional information concerning the likelihood that patients received prostatectomy, after accounting for individual patient-level characteristics and clustering by physicians. To our knowledge, network techniques have not been previously used to map cancer care delivery.

A major advantage of using a network approach in this setting is that it may not only shed light on the impact of formal institutional arrangements between physicians (e.g., practice structure) but also provide information on the informal relationships (e.g., “who knows who”) that may influence which patients see particular doctors. In integrated delivery systems and multispecialty group practices, the connections between physicians probably reflect this practice structure. However, a majority of PCPs work in small group practices (Bodenheimer and Pham 2010) and are likely to use informal relationships and information channels to guide these decisions (Kinchen et al. 2004; Forrest et al. 2006). Our work suggests that subgroups and practices are related to one another, yet distinct. While diagnosing urologists in the same practice location were frequently (although not invariably) placed in the same subgroup, larger subgroups brought together doctors from multiple practices.

These connections between physicians may be associated with the exchange of information and diffusion of innovation (Coleman, Katz, and Menzel 1966; Rogers 1995; Valente 1996). This exchange may occur directly, between doctors communicating in the care of a given patient, and indirectly, through patients relaying “messages” about their diagnosis and treatment. The network structure may also reflect other formal and informal ways that information is passed (e.g., through advice seeking, curbside consultations, and teaching conferences) (Kuo, Gifford, and Stein 1998).

While we were unable to directly assess these mechanisms, the connections between physicians who share patients likely encompass a mix of these direct and indirect communications and of formal and informal mechanisms.

In each city, a majority of urologists were connected to relatively few other doctors in their care of patients with localized prostate cancer, whereas a few were very highly connected. This skewed distribution is consistent with a power law or scale-free network, which is found in many larger networks (Barabasi and Albert 1999). It is possible that the highly connected physicians may serve as opinion leaders (Lomas et al. 1991; Soumerai, McLaughlin, and Gurwitz 1998) and be used in interventions to facilitate the spread of norms and quality standards.

The data further revealed subgroups with significant clustering by patient race and socioeconomic status. This corresponds to the well-described segregation by race/ethnicity in the health care system (Smith 1998). Prior literature has tended to focus on differences in PCPs who treat white and black patients (Bach et al. 2004) and racial differences between hospitals (Barnato et al. 2005; Groeneveld, Laufer, and Garber 2005; Jha et al. 2007; Pollack et al. in press). The current work extends this by examining the ways in which providers, primarily in the outpatient setting, cluster with one another. It is critical to examine whether subgroups have different quality of care and access to resources, as has been shown with hospitals that treat varying proportions of non-white patients (Jha et al. 2007).

Physician relationships have received increasing attention as health care reform attempts to potentially modify and codify them under primary care medical home demonstration projects and accountable care organizations (ACOs) (Fisher et al. 2007; Fisher, McClellan, and Bertko 2009; Lee et al. 2010). In these reforms, relationships between PCPs and specialists may become more clearly delineated and information flow improved. Network analysis may provide a way to model these relationships. For example, the approach may be used to demonstrate the extent to which physicians in the same ACO are currently clustered with one another in caring for patients, describe how differences in network structures may impact the ability of different ACOs to improve quality and control costs, and elucidate how reforms lead to changes in clustering over time. The observed clustering by patient race/ethnicity may also have ramifications for understanding how these reforms may work to ameliorate and/or exacerbate health care disparities (Pollack and Armstrong 2011). In addition, these reforms are expected to function through improved care coordination. With few readily available measures of care coordination (McDonald, Sundaram, and Bravata 2007), it is plausible that network approaches may be an important tool in developing such measures.

The use of claims data to construct networks presents several opportunities as well as key limitations. Typically, studies on networks rely on surveys designed to capture communication between members (Wasserman and Faust 1999). Survey administration is costly and time consuming. Employing claims data increases the ability to generate networks and to test how indirect connections (through patients and other physicians) may influence behavior. However, claims data lack direct information about referral processes and cannot capture the motivations and beliefs of PCPs, urologists, and patients.

This work has several additional limitations. First, Medicare claims are primarily restricted to people aged 65 and over. Although networks for younger patients may be different, prior studies have shown that urologist volume as calculated from Medicare data is highly correlated with total patient volume (Bach et al. 2004). In addition, patients enrolled in Medicare HMOs and those who receive their care exclusively from the Veterans Affairs system were not included. The exclusion of these patients may alter the network structure by potentially decreasing the number of doctors and lowering the ties that exist between doctors, thus biasing estimates of network-level statistics (Kossinets 2006).

Second, we defined networks based on health referral regions to limit the scope and due to constraints of the SEER-Medicare data. This may contribute to a “boundary specification problem” leading to biased measures of network structural features (Laumann, Marsden, and Prensky 1983). It remains uncertain how the inclusion of patients living outside of (but potentially proximal to) a given health referral region would affect the observed classification of subgroups. It is important to use complementary data to compare and contrast network structural features and subgroup designation, ideally testing against all-payer databases and varying the geographic specifications.

Third, the data include only those men diagnosed with prostate cancer and exclude men who are in the networks but who do not have prostate cancer (e.g., men with an elevated PSA but a negative prostate biopsy). We would expect care connections between PCPs and urologists to be similar because most men receive their cancer diagnosis after the referral has taken place.

Fourth, obtaining accurate practice variables can be challenging. We relied on preferred office address; however, because physicians may work at multiple locations, it is unlikely that our definition captured the full extent of physician practices. Allowing for a more liberal matching process (assigning doctors with the same street address but different suite numbers to the same practice) produced similar results. Importantly, we were unable to capture whether doctors at different sites of care are affiliated with one another through the same health care delivery organization. In addition, we focused solely on the practice of the diagnosing urologist, recognizing the importance of examining how multispecialty groups and health care organizations influence network structures.

Fifth, network algorithms were used to define physician subgroups. Although the Girvan-Newman algorithm is widely used, considerable controversy exists over which algorithm is best for determining community structure (Fortunato 2010). Additional research is required to determine the optimal measurement of network structure among physicians and health care professionals and, if network structure is determined to be causally associated with treatment variation, to define its underlying mechanisms.

Along with clinical characteristics and patient preferences, variation in care for men with localized prostate cancer may also be related to the doctor a patient sees, how that doctor is connected to other doctors, and how the doctor fits in the overall network structure. Using claims data represents an important opportunity to study network structure. With prior studies suggesting that networks may amplify beneficial or deleterious behaviors (Christakis and Fowler 2007; Christakis and Fowler 2008; Fowler and Christakis 2008), identifying the physician network structure is a crucial first step in understanding and modifying local variation in care.


Joint Acknowledgment/Disclosure Statement: The authors acknowledge the efforts of the Applied Research Program, NCI, and the Office of Research, Development and Information, CMS. The authors also thank Pamela Pelizzari for her research assistance.

Funding: Funding was provided by the Center for Population Health and Health Disparities at the University of Pennsylvania under Public Health Services grant P50-CA105641. Dr. Pollack's salary was supported by the National Cancer Institute (NCI) 5U54CA091409-10, Nelson (PI) followed by a career development award from the NCI and the Office of Behavioral and Social Sciences Research (1K07CA151910-01A1).

Disclosures: None.


Additional supporting information may be found in the online version of this article:

Appendix SA1: Author Matrix.

Table S1. The Number of Urologists and Practices per Subgroup.

Please note: Wiley-Blackwell is not responsible for the content or functionality of any supporting materials supplied by the authors. Any queries (other than missing material) should be directed to the corresponding author for the article.


  • Bach PB, Pham HH, Schrag D, Tate RC, Hargraves JL. “Primary Care Physicians Who Treat Blacks and Whites” New England Journal of Medicine. 2004;351(6):575–84. [PubMed]
  • Barabasi A-L, Albert R. “Emergence of Scaling in Random Networks” Science. 1999;286(5439):509–12. [PubMed]
  • Barnato AE, Lucas FL, Staiger D, Wennberg DE, Chandra A. “Hospital-Level Racial Disparities in Acute Myocardial Infarction Treatment and Outcomes” Medical Care. 2005;43(4):308–19. [PMC free article] [PubMed]
  • Barnett ML, Landon BE, O'Malley AJ, Keating NL, Christakis NA. “Predicting Informal Physician Relationships with Administrative Data” Health Services Research. 2011;46(5):1592–609. [PMC free article] [PubMed]
  • Bekelman JE, Zelefsky MJ, Jang TL, Basch EM, Schrag D. “Variation in Adherence to External Beam Radiotherapy Quality Measures among Elderly men with Localized Prostate Cancer” International Journal of Radiation Oncology, Biology, Physics. 2007;69(5):1456–66. [PMC free article] [PubMed]
  • Bodenheimer T, Pham HH. “Primary Care: Current Problems and Proposed Solutions” Health Affairs. 2010;29(5):799–805. [PubMed]
  • The Center for Evaluative Clinical Sciences. The Dartmouth Atlas of Health Care. Lebanon, NH: Dartmouth Medical School; 1996.
  • The Center for Evaluative Clinical Sciences. The Dartmouth Atlas of Health Care 1998. Lebanon, NH: Dartmouth Medical School; 1998.
  • Christakis NA, Fowler JH. “The Spread of Obesity in a Large Social Network Over 32 Years” New England Journal of Medicine. 2007;357(4):370–9. [PubMed]
  • Christakis NA, Fowler JH. “The Collective Dynamics of Smoking in a Large Social Network” New England Journal of Medicine. 2008;358(21):2249–58. [PMC free article] [PubMed]
  • Coleman J, Katz E, Menzel H. “The Diffusion of an Innovation among Physicians” Sociometry. 1957;20(4):253–70.
  • Coleman JS, Katz E, Menzel H. Medical Innovation: A Diffusion Study. New York: Bobbs Merrill; 1966.
  • Cooperberg MR, Broering JM, Carroll PR. “Time Trends and Local Variation in Primary Treatment of Localized Prostate Cancer” Journal of Clinical Oncology. 2010;28(7):1117–23. [PMC free article] [PubMed]
  • Csardi G, Nepusz T. Paper presented at the International Conference on Complex Systems. Boston, MA: 2006. The Igraph Software Package for Complex Network Research.
  • Dartmouth Atlas Project, Center for the Evaluative Clinical Sciences. 2007. “Preference-Sensitive Care. A Dartmouth Atlas Project Topic Brief” [accessed on April 20, 2009]. Available at
  • Elixhauser A, Steiner C, Harris D, Coffey R. “Comorbidity Measures for Use with Administrative Data” Medical Care. 1998;36(1):8–27. [PubMed]
  • Epstein AM. “Geographic Variation in Medicare Spending” New England Journal of Medicine. 2010;363(1):85–6. [PubMed]
  • Fisher ES, McClellan MB, Bertko J, Lieberman SM, Lee JJ, Lewis JL, Skinner JS. “Fostering Accountable Health Care: Moving Forward in Medicare” Health Affairs. 2009;28(2):w219–31. [PMC free article] [PubMed]
  • Fisher E, Staiger D, Bynum J, Gottlieb D. “Creating Accountable Care Organizations: The Extended Hospital Medical Staff” Health Affairs. 2007;26:w44–57. [PMC free article] [PubMed]
  • Forrest CB, Nutting PA, von Schrader S, Rohde C, Starfield B. “Primary Care Physician Specialty Referral Decision Making: Patient, Physician, and Health Care System Determinants” Medical Decision Making. 2006;26:76–85. [PubMed]
  • Fortunato S. “Community Detection in Graphs” Physics Reports. 2010;486:75–174.
  • Fowler JH, Christakis NA. “Dynamic Spread of Happiness in a Large Social Network: Longitudinal Analysis over 20 Years in the Framingham Heart Study” British Medical Journal. 2008;337:a2338. [PMC free article] [PubMed]
  • Gawande A. The Cost Conundrum. The New Yorker. 2009. [accessed September 19, 2011]. Available at:
  • Gilligan T. “Social Disparities and Prostate Cancer: Mapping the Gaps in Our Knowledge” Cancer Cause Control. 2005;16:45–53. [PubMed]
  • Girvan M, Newman MEJ. “Community Structure in Social and Biological Networks” Proceedings of the National Academy of Sciences. 2002;99(12):7821–6. [PubMed]
  • Groeneveld PW, Laufer SB, Garber AM. “Technology Diffusion, Hospital Variation, and Racial Disparities among Elderly Medicare Beneficiaries 1989–2000” Medical Care. 2005;43(4):320–9. [PubMed]
  • Iwashyna TJ, Christie JD, Moody J, Kahn JM, Asch DA. “The Structure of Critical Care Transfer Networks” Medical Care. 2009;47:787–93. [PMC free article] [PubMed]
  • Jang TL, Yossepowitch O, Bianco FJ, Scardino PT. “Low Risk Prostate Cancer in Men under Age 65: The Case for Definitive Treatment” Urologic Oncology: Seminars and Original Investigations. 2007;25(6):510–4. [PMC free article] [PubMed]
  • Jemal A, Siegel R, Xu J, Ward E. “Cancer Statistics, 2010” CA: A Cancer Journal for Clinicians. 2010;60:277–300. [PubMed]
  • Jha AK, Orav EJ, Li Z, Epstein AM. “Concentration and Quality of Hospitals That Care for Elderly Black Patients” Archives of Internal Medicine. 2007;167:1177–82. [PubMed]
  • Keating NL, Ayanian JZ, Cleary PD, Marsden PV. “Factors Affecting Influential Discussions among Physicians: A Social Network Analysis of a Primary Care Practice” Journal of General Internal Medicine. 2007;22:794–8. [PMC free article] [PubMed]
  • Kinchen KS, Cooper LA, Levine D, Wang NY, Powe NR. “Referral of Patients to Specialists: Factors Affecting Choice of Specialist by Primary Care Physicians” Annals of Family Medicine. 2004;2(3):245–52. [PubMed]
  • Kossinets G. “Effects of Missing Data in Social Networks” Social Networks. 2006;28:247–68.
  • Krongrad A, Lai H, Lai S. “Survival after Radical Prostatectomy” Journal of the American Medical Association. 1997;278(1):44–6. [PubMed]
  • Krupski TL, Kwan L, Afifi AA, Litwin MS. “Geographic and Socioeconomic Variation in the Treatment of Prostate Cancer” Journal of Clinical Oncology. 2005;23(31):1–8. [PubMed]
  • Kuo D, Gifford DR, Stein MD. “Curbside Consultation Practices and Attitudes among Primary Care Physicians and Medical Subspecialists” Journal of the American Medical Association. 1998;280(10):905–9. [PubMed]
  • Lai S, Lai H, Lamm S, Obek C, Krongrad A, Roos B. “Radiation Therapy in Non-Surgically-Treated Nonmetastatic Prostate Cancer: Geographic and Demographic Variation” Urology. 2001;57:510–7. [PubMed]
  • Laumann EO, Marsden PV, Prensky D. “The Boundary Specification Problem in Network Analysis.” In: Burt RS, Minor MJ, editors. Applied Network Analysis. London: Sage Publications; 1983. pp. 18–34.
  • Lee TH, Casalino LP, Fisher ES, Wilensky GR. “Creating Accountable Care Organizations” New England Journal of Medicine. 2010;363(15):e23. [PubMed]
  • Liang KY, Zeger SL. “Longitudinal Data Analysis Using Generalised Linear Models” Biometrika. 1986;73:13–22.
  • Lomas J, Enkin M, Anderson GM, Hannah WJ, Vayda E, Singer J. “Opinion Leaders vs Audit and Feedback to Implement Practice Guidelines: Delivery after Previous Cesarean Section” Journal of the American Medical Association. 1991;265(17):2202–7. [PubMed]
  • Lu-Yao GL, McLerran D, Wasson J, Wennberg JE. “An Assessment of Radical Prostatectomy. Time Trends, Geographic Variation, and Outcomes. The Prostate Patient Outcomes Research Team” Journal of the American Medical Association. 1993;269(20):2633–6. [PubMed]
  • McDonald K, Sundaram V, Bravata D, Lewis R, Lin N, Kraft SA, McKinnon M, Paguntalan H, Owens DK. Closing the Quality Gap: A Critical Analysis of Quality Improvement Strategies. Volume 7 - Care Coordination. Rockville, MD: AHRQ; 2007. [PubMed]
  • McNeil BJ. “Shattuck Lecture--Hidden Barriers to Improvement in the Quality of Care” New England Journal of Medicine. 2001;345(22):1612–20. [PubMed]
  • NCCN. Clinical Practice Guidelines in Oncology Prostate Cancer. Fort Washington, PA: NCCN; 2007.
  • Newman MEJ. “Modularity and Community Structure in Networks” Proceedings of the National Academy of Sciences. 2006;103(23):8577–82. [PubMed]
  • Newman MEJ, Barabasi A-L, Watts DJ. The Structure and Dynamics of Networks. Princeton, NJ: Princeton University Press; 2006.
  • Pardo Y, Guedea F, Aguiló F, Fernández P, Macías V, Mariño A, Hervás A, Herruzo I, Ortiz P, Ponce de León J, Craven-Bratle J, Suárez JF, Boladeras A, Pont A, Ayala A, Sancho G, Martinez E, Alonso J, Ferrer M. “Quality-of-Life Impact of Primary Treatments for Localized Prostate Cancer in Patients without Hormonal Treatment” Journal of Clinical Oncology. 2010;28(31):4687–96. [PubMed]
  • Pham HH, Schrag D, O'Malley AS, Wu B, Bach PB. “Care Patterns in Medicare and Their Implications for Pay for Performance” New England Journal of Medicine. 2007;356(11):1130–9. [PubMed]
  • Pollack CE, Armstrong K. “Accountable Care Organizations and Health Care Disparities” Journal of the American Medical Association. 2011;305:1706–7. [PubMed]
  • Pollack CE, Bekelman JE, Liao KJ, Armstrong K. “Hospital Racial Composition and the Treatment of Localized Prostate Cancer” Cancer. 2011 doi: 10.1002/cncr.26232. [Epub ahead of print] [PMC free article] [PubMed]
  • Potosky AL, Davis WW, Hoffman RM, et al. “Five-Year Outcomes after Prostatectomy or Radiotherapy for Prostate Cancer: The Prostate Cancer Outcomes Study” Journal of the National Cancer Institute. 2004;96(18):1358–67. [PubMed]
  • Potosky AL, Riley GF, Lubitz JD, Mentnech RM, Kessler LG. “Potential for Cancer Related Health Services Research Using a Linked Medicare-Tumor Registry Database” Medical Care. 1993;31(8):732–48. [PubMed]
  • Rogers EM. Diffusion of Innovations. New York: The Free Press; 1995.
  • Schroeder FH, Roach M, Scardino P. “Clinical Decisions: Management of Prostate Cancer” New England Journal of Medicine. 2008;359(24):2605–9. [PubMed]
  • Silber J, Rosenbaum P, Trudeau M, Even-Shoshan O, Chen W, Zhang X, Mosher RE. “Multivariate Matching and Bias Reduction in the Surgical Outcomes Study” Medical Care. 2001;39(10):1048–64. [PubMed]
  • Smith DB. “The Racial Segregation of Hospital Care Revisited: Medicare Discharge Patterns and Their Implications” American Journal of Public Health. 1998;88:461–3. [PubMed]
  • Snyder CF, Frick KD, Blackford AL, Herbert RJ, Neville BA, Carducci MA, Earle CC. “How Does Initial Treatment Choice Affect Short-Term and Long-Term Costs for Clinically Localized Prostate Cancer?” Cancer. 2010;116:5391–9. [PubMed]
  • Soumerai SB, McLaughlin TJ, Gurwitz JH, Guadagnoli E, Hauptman PJ, Borbas C, Morris N, McLaughlin B, Gao X, Willison DJ, Asinger R, Gobel F. “Effect of Local Medical Opinion Leaders on Quality of Care for Acute Myocardial Infarction: A Randomized Controlled Trial” Journal of the American Medical Association. 1998;279(17):1358–63. [PubMed]
  • Stanford JL, Feng Z, Hamilton AS, Gilliland FD, Stephenson RA, Eley JW, Albertsen PC, Harlan LC, Potosky AL. “Urinary and Sexual Function after Radical Prostatectomy for Clinically Localized Prostate Cancer: The Prostate Cancer Outcomes Study” Journal of the American Medical Association. 2000;283(3):354–60. [PubMed]
  • Thompson I, Thrasher JB, Aus G, Burnett AL, Canby-Hagino ED, Cookson MS, D'Amico AV, Dmochowski RR, Eton DT, Forman JD, Goldenberg SL, Hernandez J, Higano CS, Kraus SR, Moul JW, Tangen CM. “Guideline for the Management of Clinically Localized Prostate Cancer: 2007 Update” Journal of Urology. 2007;177(6):2106–31. [PubMed]
  • Valente TW. “Social Network Thresholds in the Diffusion of Innovations” Social Networks. 1996;18:69–89.
  • Walsh PC, DeWeese TL, Eisenberger MA. “Localized Prostate Cancer” New England Journal of Medicine. 2007;357(26):2696–705. [PubMed]
  • Wasserman S, Faust K. Social Network Analysis: Methods and Applications. New York: Cambridge University Press; 1999.
  • Wilson LS, Tesoro R, Elkin EP, Sadetsky N, Broering JM, Latini DM, DuChane J, Mody RR, Carroll PR. “Cumulative Cost Pattern Comparison of Prostate Cancer Treatments” Cancer. 2007;109:518–27. [PubMed]
  • Wong YN, Mitra N, Hudes G, Localio R, Schwartz JS, Wan F, Montagnet C, Armstrong K. “Survival Associated with Treatment vs Observation of Localized Prostate Cancer in Elderly Men” Journal of the American Medical Association. 2006;296(22):2683–93. [PubMed]