We gathered and merged relevant data from four sources: a private firm, the National Committee for Quality Assurance (NCQA), InterStudy, and the Dartmouth Atlas.
Plan Physician Network
Our data originated from a private firm that collected information on individual physicians and their health plan affiliations. This firm collected the data to generate a comprehensive database of plan–physician combinations for employers who wanted an easy way for their employees to search for affiliated physicians or to check if their physician was in any given plan. The firm gave us access to its proprietary electronic provider lists for 214 health maintenance organizations (HMOs) that reflect each plan's physician network for January 2001, 2002, and 2003.
We compared our physician sample against the Area Resource File (ARF) data and found that the physicians in our sample were fairly representative of all physicians in the United States during the same period in terms of geographic location. Our sample contained physicians from all 50 states plus the District of Columbia, with more than half concentrated in six states: California, New York, Ohio, Texas, Florida, and New Jersey. Moreover, over 95 percent of the physicians in the sample were located primarily in metropolitan statistical areas (MSAs). According to the ARF data for the same period, the top five states in terms of number of physicians were California, New York, Texas, Florida, and Pennsylvania (in our data, Pennsylvania was seventh); furthermore, the ARF indicated that about 98 percent of all physicians in the United States were located in MSAs.
Because the firm provided specific physician names to interested employees and benefits managers, we believe the data to be highly accurate. Nevertheless, to assess the accuracy and reliability of the dataset, we checked the affiliations of all listed physicians in two large plans. We provided the physician lists and asked the plans to verify that all listed physicians were indeed affiliated with them. We found a combined error rate of less than 1.5 percent, suggesting that although the dataset contained some noise, it was relatively accurate.
Because hospital-based physicians (radiologists, anesthesiologists, pathologists, and hospitalists) often do not appear on health plan provider lists, our measure of overlap is best interpreted as a measure of overlap among community-based physicians.
Measuring Network Overlap
We construct two measures of overlap. The first is a “plan pair” measure that captures the overlap between any pair of plans; it allows us to test how overlap affects convergence in quality performance. The second is the average number of health plans with which the physicians in a given plan have contracts; it allows us to investigate the effect of a plan's overlap with other plans on its own quality performance. In constructing these measures, we assume that physicians contracting with the same plan also belong to the same “network.” Thus, to some extent, we use the terms “physician network” and “plan affiliation” interchangeably. In reality, however, belonging to the same plan may not be equivalent to belonging to the same physician network, that is, the same independent practice association (IPA) or preferred provider organization (PPO); unfortunately, we are not able to observe the actual “network” to which each physician belongs.
A “plan pair” measure is defined for every unique combination of two plans. It is calculated as the number of physicians belonging to both plans divided by the total number of physicians belonging to either plan. For example, suppose plan A has 300 physicians and plan B has 500 physicians in their respective networks, and that 200 physicians are in both networks (and thus appear in both counts). The plan–pair overlap is then 33.3 percent, that is, 200 divided by (300 + 500 − 200). Constructing this measure is possible because our data allow us to observe each physician's plan affiliations explicitly.
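A minimal Python sketch of this calculation follows; the physician identifiers are hypothetical, chosen to reproduce the numbers in the example above.

```python
def plan_pair_overlap(network_a, network_b):
    """Share of physicians affiliated with both plans, relative to the
    union of the two networks (a Jaccard index)."""
    network_a, network_b = set(network_a), set(network_b)
    union = network_a | network_b
    if not union:
        return 0.0
    return len(network_a & network_b) / len(union)

# Hypothetical physician IDs: plan A has 300 physicians (0-299),
# plan B has 500 physicians (100-599), and 200 are in both networks.
plan_a = set(range(0, 300))
plan_b = set(range(100, 600))
print(plan_pair_overlap(plan_a, plan_b))  # 0.333..., i.e., 33.3 percent
```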
The average number of plans contracted by physicians belonging to a given plan's network captures the degree to which a plan shares its physician panel with all other competitors in the market. For example, suppose a plan contracts with five physicians, each of whom also contracts with two other plans in the market (thus, each physician contracts with three plans in total). Then the average number of plans contracted by the plan's network physicians is three (3+3+3+3+3=15 divided by 5). A greater value on this measure might suggest less incentive for the plan to undertake quality improvement initiatives at the physician level because any benefits from such initiatives would spill over to the plan's competitors.
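The same affiliation data support this plan-level measure; a minimal Python sketch, with hypothetical plan and physician labels mirroring the five-physician example above:

```python
def avg_plans_per_network_physician(plan, affiliations):
    """Average number of plans contracted by the physicians in a given
    plan's network; `affiliations` maps each physician to the set of
    plans with which he or she contracts."""
    counts = [len(plans) for plans in affiliations.values() if plan in plans]
    return sum(counts) / len(counts) if counts else 0.0

# Plan "A" contracts with five physicians, each of whom also contracts
# with two other plans (three plans per physician in total).
affiliations = {f"md{i}": {"A", f"B{i}", f"C{i}"} for i in range(5)}
print(avg_plans_per_network_physician("A", affiliations))  # 3.0
```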
Note that a plan–pair overlap value of 0 percent corresponds to a plan-level overlap value of one plan per physician; that is, if every physician in a given plan contracts with only one plan, then none of those physicians appears in any other plan's network (i.e., an exclusive provider network). Given our method of constructing the overlap measure, however, a “zero overlap” can reflect one of two situations. First, it may reflect two plans in the same market using exclusive physician networks. This is most likely when either or both of the plans in a given plan pair are predominantly group- or staff-model HMOs.
Alternatively, it could reflect two plans operating in entirely different markets, which yields zero overlap by definition. To assess whether our results are sensitive to this alternative definition of zero overlap, we obtained a separate set of results using a “full” sample that also included plan pairs that do not operate in the same market, assigning such pairs an overlap value of zero. Because this broader definition of zero overlap produced no significant difference in our results, we focused only on plan pairs that do operate in the same markets.
Plan Performance and Characteristics
We linked HEDIS and Consumer Assessment of Healthcare Providers and Systems (CAHPS) data obtained from the NCQA to the HMOs in the private firm's dataset for the corresponding years (i.e., 2001, 2002, and 2003). Ideally, we would have observed plan performance measures at the geographical market level in order to explicitly account for the market-level factors that influence them. However, because HEDIS and CAHPS are measured at the plan level rather than the market level (i.e., they reflect each plan's performance aggregated across all markets in which it operates), we were unable to achieve this level of detail in our dataset.
To account for the plan characteristics, we also merged in data from InterStudy's MSA Profiler and Competitive Edge. Because InterStudy and NCQA do not use common health plan identifiers, we matched the plans manually, relying on plan name and geographic service area. This resulted in a dataset of 189 health plans, representing about 66 percent of total commercial HMO enrollment in the United States as of 2001.
Our sample was more likely to include larger and older plans operating in large urban markets. We then added the 2002 and 2003 data to construct a dataset of these plans covering the 3-year period. We were unable to link approximately 20 percent of the plans in our network overlap data to the NCQA data, presumably because those plans did not report data to NCQA or because we could not find an appropriate match. In addition, some plans that appeared in the 2001 data dropped out in later years, presumably because of mergers or acquisitions.
Selecting Plan Performance Measures
As dependent variables, we chose a subset of eight HEDIS and four CAHPS measures for analysis. Specifically, we focused on HEDIS measures related to breast cancer screening, adolescent immunization, and diabetes care. The selection of these measures was driven by our a priori expectation that they are more likely to be influenced by individual physician characteristics and practice styles than other available measures.
To check whether our method might pick up spurious associations between physician network overlap and plan performance measures, we also included the CAHPS “Claims Processing” measure as an outcome variable, under the assumption that patients' assessments of claims processing depend on plan behavior rather than physician behavior. We therefore expect a nonsignificant relationship between the “Claims Processing” measure and the physician overlap measure.
HEDIS Data Collection Methods
Plans could use either administrative or hybrid methods to collect the data needed to compute their HEDIS scores. The administrative method relied on examining claims data to determine the relevant performance measures, while the hybrid method supplemented the claims data with chart audits of randomly selected samples of eligible enrollees. Thus, different collection methods might have led to differences in the plan performance measures (Pawlson, Scholle, and Powers 2007). We therefore created categorical variables to control for differences in HEDIS collection methods in our regression analyses.
Regional Variation and Plan Enrollee Case Mix
Plan performance scores are further affected by enrollee characteristics and other unobserved market characteristics, which lead to variation in plan quality that is not attributable to plans' quality improvement activities or to physician practice patterns. To account for this, we created four indices: a case mix index, an ambulatory care sensitive conditions (ACSC) index, a market share index, and a market dissimilarity index (MDI).
To capture the plan differences in terms of unobserved enrollee health status and severity of illnesses, we calculated the average enrollee prevalence rate of four chronic conditions (heart disease, diabetes, asthma, and hypertension) for each plan from the NCQA data. More specifically, for each of the chronic conditions, we obtained the percentage of each plan's total enrollee population identified as having that condition and calculated the average of the percentages to obtain our case mix index. A higher value of this index for a given plan thus indicated a sicker enrollee population for that plan.
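Formally, with $p_{ic}$ denoting the percentage of plan $i$'s enrollees identified as having chronic condition $c$ (notation introduced here for exposition), the index is

$$\mathrm{CaseMix}_i = \frac{1}{4}\sum_{c=1}^{4} p_{ic}.$$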
To create an index that captures the regional variation in resource utilization that may affect plan performance measures, we merged in the Dartmouth Atlas data, which contain rich information on the variation in Medicare utilization and practice patterns across regions. Although our plan performance measures are relevant only for commercial HMOs, there is strong evidence (Needleman et al. 2003; Wennberg et al. 2004; Baker, Fisher, and Wennberg 2008) that treatment patterns and quality of care observed among the Medicare population are highly correlated with those among the general population. We based our index on hospital discharge rates for ACSC. Previous research suggests that ACSC discharge rates exhibit significant geographical variation (McCall, Harlow, and Dayhoff 2001), reflecting regional differences in utilization, access to health care, market conditions, and practice norms.
To obtain the ACSC index, we used the following formula:
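A minimal sketch of its likely form, with $d_m$ denoting the ACSC discharge rate in region $m$ (from the Dartmouth Atlas) and $w_{im}$ the fraction of plan $i$'s enrollment located in region $m$, is the enrollment-weighted average

$$\mathrm{ACSC}_i = \sum_{m} w_{im}\, d_m.$$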
The degree to which the markets for physician practices and plans are concentrated in each locale may influence the degree of physician network overlap and, through it, provider practice patterns and quality. Moreover, the existence of a plan that is “dominant” in terms of local market share may induce the adoption of certain practice styles and patterns across all providers in the market.
Thus, to control for the effects of health plan market concentration on plan performance measures, we constructed a variable that captures, on average, how “dominant” plans are across the markets they serve, using the following formula:
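A minimal sketch of its likely form, with $s_{im}$ denoting plan $i$'s share of total HMO enrollment in market $m$ and $w_{im}$ the fraction of plan $i$'s own enrollment drawn from market $m$, is the enrollment-weighted average market share

$$\mathrm{Share}_i = \sum_{m} w_{im}\, s_{im}.$$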
From our data, it was not possible to directly identify all the relevant market characteristics that may affect physician network overlap and plan performance, such as the competitiveness of the physician market. Instead, for any given plan pair, we constructed an MDI presumed to capture the differences in the unobserved market conditions that the plans face. The MDI was calculated as follows.
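A formulation consistent with the properties described next, with $w_{im}$ again denoting the fraction of plan $i$'s total enrollment drawn from market $m$, is the familiar dissimilarity index

$$\mathrm{MDI}_{ij} = \frac{1}{2}\sum_{m} \left| w_{im} - w_{jm} \right|.$$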
If plans i and j served the same markets and derived exactly the same amount of “business” from each of those markets, MDI would be zero, indicating that the plans were subject to identical market conditions. On the other hand, if the plans operated in entirely different markets, MDI would approach one.
These indices were used as control variables in our regression models described in the next section. Refer to for the complete list of the variables used for the analyses and their descriptions.