Health Serv Res. 2011 October; 46(5): 1646–1662.
PMCID: PMC3207197

Understanding Variations in Medicare Consumer Assessment of Health Care Providers and Systems Scores: California as an Example

Donna O Farley, Ph.D., Marc N Elliott, Ph.D., Amelia M Haviland, Ph.D., Mary Ellen Slaughter, Ph.D., and Amy Heller, Ph.D., MPH



Objective

To understand why California has lower Consumer Assessment of Healthcare Providers and Systems (CAHPS) scores than the rest of the country, including differing patterns of CAHPS scores between Medicare Advantage (MA) and fee-for-service, effects of additional demographic characteristics of beneficiaries, and variation across MA plans within California.

Study Design/Data Collection

Using 2008 CAHPS survey data for fee-for-service Medicare beneficiaries and MA members, we compared mean case mix adjusted Medicare CAHPS scores for California and the remainder of the nation.

Principal Findings

California fee-for-service Medicare had lower scores than non-California fee-for-service on 11 of 14 CAHPS measures; California MA had lower scores only for physician services measures and higher scores for other measures. Adding race/ethnicity and urbanicity to risk adjustment improved California's standing on all measures in both MA and fee-for-service. Within the MA plans, one large plan accounted for the positive performance in California MA; other California plans performed below national averages.


Conclusions

This study shows that the mix of fee-for-service and MA enrollees, demographic characteristics of populations, and plan-specific factors can all play a role in observed regional variations. Anticipating value-based payments, further study of successful MA plans could generate lessons for enhancing patient experience for the Medicare population.

Keywords: Patient experience of care, Geographic variations, Medicare

The Medicare program, administered by the Centers for Medicare and Medicaid Services in the U.S. Department of Health and Human Services, has collected and reported data on beneficiaries' experiences of care in private health plans, Medicare Advantage (MA), since 1997 and for beneficiaries in traditional fee-for-service Medicare since 2000 (Landon et al. 2004). These data have been collected through the Medicare Consumer Assessment of Healthcare Providers and Systems (Medicare CAHPS) survey instrument.

Variations in Medicare CAHPS scores have been well-documented geographically (Zaslavsky, Zaborski, and Cleary 2004), between fee-for-service Medicare and MA plans (Landon et al. 2004), and across MA plans (Zaslavsky, Zaborski, and Cleary 2004), but the factors driving these variations have not yet been identified. Research has found that Medicare spending is not consistently associated with better quality, access, or patient satisfaction with care (Fisher et al. 2003a,b; Baicker and Chandra 2004; Landon et al. 2004; Dowd et al. 2009).

The Patient Protection and Affordable Care Act of 2010 directs the Centers for Medicare and Medicaid Services to establish a quality bonus program that would pay bonuses to MA plans for performance on clinical quality and enrollee experience of care (U.S. Congress 2010). It also has value-based purchasing provisions for many types of fee-for-service Medicare providers. Under such provisions, variations in CAHPS performance would have financial consequences for MA plans and fee-for-service providers via a system of quality bonuses associated with overall performance of “four stars” or higher on a five-star scale.

This study examines some factors that might underlie observed regional variations in CAHPS scores. We use the California experience as a case study because California is a large state and historically has been one of the lower CAHPS performers. California Medicare CAHPS scores have been lower for health plan services, while ratings of doctors have been closer to the overall national averages (Zaslavsky, Zaborski, and Cleary 2004). These differences would affect California plans' payments under a quality bonus system. Using 2010 star ratings, our estimates show that California plans attained the "four star" threshold for 18 percent of their eligible CAHPS measures, compared with 44 percent of measures for non-California MA plans.


The Medicare program consists of traditional fee-for-service Medicare, in which a majority of beneficiaries are enrolled, as well as an enrollment option for MA health plans. The Centers for Medicare and Medicaid Services contracts with several hundred managed care plans, which must fulfill reporting and performance requirements, including reporting on patient experience of care. There is no uniformly defined MA plan benefit beyond a specified minimum requirement; capitation rates vary widely across counties, and health plans vary substantially in plan structure, benefits offered, and additional services to beneficiaries (Hurley, Grossman, and Strunk 2003; Medicare Payment Advisory Commission 2003; Hurley, Strunk, and Grossman 2005). These variations among MA plans could result in different experiences across plans for enrolled beneficiaries.

The Medicare CAHPS surveys contain global-rating items, additional single-item measures, and items that are grouped to form composite scores for several domains (Goldstein et al. 2001). The CAHPS survey scores used for public reporting are case mix adjusted to control for systematic differences in response tendency associated with respondent characteristics (Martino et al. 2009; Agency for Healthcare Research and Quality 2010).

California has substantial managed care penetration and many California plans participate in the MA program (22 in 2007, 40 in 2008), with 34 percent of California Medicare beneficiaries enrolled. California's MA plans include several very large plans as well as many smaller plans. Some plan sponsors also serve markets in other parts of the country, while others are unique to California. The MA plans in California offer differing benefit structures, models of care, and monthly premiums for enrolled beneficiaries (California Health Care Foundation 2003).


We performed analyses to address the following research questions:

  1. To what extent do overall differences in CAHPS performance between California and the rest of the country hold within the fee-for-service and MA sectors?
  2. Are there additional stable characteristics of beneficiaries that would explain observed score differences between California and the rest of the country, but which are not included in the standard CAHPS case mix adjustments?
  3. Are there substantial differences in CAHPS scores among California MA plans that suggest that unique plan-level characteristics might be affecting observed California/non-California differences in scores?

We identified two factors as potential case mix adjustors (for question #2) that met three criteria: stability, potential to contribute to differences in CAHPS scores, and measurability. The first is beneficiary race/ethnicity. Previous research demonstrates that race/ethnicity is strongly and consistently associated with CAHPS scores; Asians tend to report lower scores than others, and Latinos tend to report higher global ratings but lower composite scores (Lurie et al. 2003; Weech-Maldonado et al. 2004, 2008; Elliott et al. 2009; Goldstein et al. 2010). These two groups constitute a substantially larger proportion of California residents than of the nation as a whole. The second is beneficiary location in urban versus rural areas, for which score differences may reflect response tendency (a legitimate basis for adjustment) or true differences in care (which is not) (O'Neill 2004; Reschovsky and Staiti 2005; Lutfiyya et al. 2007).

In examining differences in CAHPS scores among California MA plans, we noted that a large proportion of all California MA enrollees are in a single large MA plan, “Plan-A,” which has distinct structure and operating characteristics. We compared CAHPS scores for Plan-A to the rest of California's Medicare health plans.


Our analyses used CAHPS survey data for national samples of both members of MA plans and fee-for-service Medicare beneficiaries (some of whom were also enrolled in prescription drug plans). We performed the same analyses for 2007 and 2008 survey data, but we report only the latter here, given similar results.

Data and Measures

A total of 408,020 MA and fee-for-service beneficiaries completed the 2008 Medicare CAHPS survey. The survey used a stratified random sampling plan, with contracts (hereafter “plans”) serving as strata for MA beneficiaries and for fee-for-service beneficiaries enrolled in prescription drug plans. States served as strata for fee-for-service beneficiaries not enrolled in a prescription drug plan. The survey was administered by two waves of mailings with telephone follow-up of nonrespondents. The overall response rate was 61 percent (56 percent for fee-for-service, 65 percent for MA).

We analyzed the five global ratings (personal physician or nurse, specialists, all health care received, experiences with Medicare/plan, and experience with prescription drug coverage); additional single items on flu shot, pneumonia shot, and ease of paperwork; and five composite measures of reported care: doctor communication (four items), ease of getting needed care (two items), customer service (three items), ease of getting needed drugs (three items), and getting information on drugs (two items) (Goldstein et al. 2001). The global-rating items had 0–10 response scales, and all other items had four response options (never, sometimes, usually, always).

Each CAHPS composite score was calculated as the average of the responses for the items within the composite. To facilitate comparisons across measures of health care experiences, we linearly transformed all CAHPS scores to a 0–100 scale and then calculated weighted mean scores, weighted by the number of beneficiaries in the relevant contracts and states to account for the sample design and nonresponse.
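The rescaling and weighting steps described above can be sketched in a few lines of Python. This is a minimal illustration, not the authors' actual code; the function names and example values are hypothetical:

```python
def to_0_100(response, scale_min, scale_max):
    """Linearly rescale a survey response onto a 0-100 scale."""
    return 100.0 * (response - scale_min) / (scale_max - scale_min)

def weighted_mean(scores, weights):
    """Mean score weighted, e.g., by beneficiary counts in each contract or state."""
    return sum(s * w for s, w in zip(scores, weights)) / sum(weights)

# A global rating of 8 on the 0-10 scale becomes 80.0;
# a composite item answered "usually" (3 on the 1-4 scale) becomes about 66.7.
rescaled_rating = to_0_100(8, 0, 10)
rescaled_item = to_0_100(3, 1, 4)
```

Weighting mean scores by enrollment counts, as in the last function, is what keeps small contracts or states from being over-represented in the pooled averages.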

We case mix adjusted CAHPS measures to control for systematic differences in survey response tendency related to respondents' characteristics. Standard Medicare CAHPS case mix adjustment controls for age, education, overall self-rated health, self-rated mental health, an indicator of dual eligibility for Medicaid, and Low Income Subsidy receipt (Zaslavsky et al. 2001; Martino et al. 2009).


We first estimated linear regressions to calculate overall (combined Medicare fee-for-service and MA) mean, standard case mix adjusted Medicare CAHPS scores for California respondents and for the remainder of the country, using models with a California indicator and the case mix fixed effects noted above. We also estimated overall CAHPS scores adjusted for differences in MA/fee-for-service enrollments by adding an MA indicator to the model to assess the effect on scores of the difference in MA plan penetration between California and the rest of the nation. We tested the statistical significance of the differences in scores for each measure. We also calculated case mix adjusted scores (without the MA indicator) for each of 53 states and territories (hereafter “states”) to determine California's ranking among states on each CAHPS measure (1 = highest, 53 = lowest).
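The case mix adjusted comparison reduces to reading the coefficient on a California indicator in a linear regression that also includes the adjustor variables. The following pure-Python sketch illustrates the idea on toy data; the data and the use of a single case mix covariate are hypothetical (the study's models included the full standard adjustor set):

```python
def ols(X, y):
    """Ordinary least squares via the normal equations (X'X)b = X'y,
    solved by Gaussian elimination; adequate for a handful of predictors."""
    k = len(X[0])
    XtX = [[sum(r[i] * r[j] for r in X) for j in range(k)] for i in range(k)]
    Xty = [sum(r[i] * yi for r, yi in zip(X, y)) for i in range(k)]
    A = [row[:] + [b] for row, b in zip(XtX, Xty)]  # augmented matrix
    for col in range(k):
        piv = max(range(col, k), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        for r in range(col + 1, k):
            f = A[r][col] / A[col][col]
            for c in range(col, k + 1):
                A[r][c] -= f * A[col][c]
    beta = [0.0] * k
    for r in range(k - 1, -1, -1):
        beta[r] = (A[r][k] - sum(A[r][c] * beta[c] for c in range(r + 1, k))) / A[r][r]
    return beta

# Toy data: columns are [intercept, California indicator, one case mix covariate
# (say, self-rated health)]; y holds 0-100 CAHPS scores.
X = [[1, 1, 3], [1, 1, 4], [1, 0, 3], [1, 0, 4], [1, 1, 2], [1, 0, 5]]
y = [70.0, 78.0, 74.0, 82.0, 62.0, 90.0]
beta = ols(X, y)
# beta[1] is the case mix adjusted California-vs-rest difference in mean score;
# with this toy data, beta is approximately [50, -4, 8].
```

Adding an MA indicator column to `X`, as in the second model described above, would likewise yield its adjusted effect as an additional coefficient.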

To test research question #1, we stratified the data into MA and fee-for-service and estimated, within each type, regression models predicting each CAHPS measure from a dummy variable for coverage in California and standard case mix adjustors.

To test research question #2, we expanded the case mix adjustment to add adjustors for race/ethnicity and urbanicity (referred to as “full adjustment”). The race/ethnicity indicators—Hispanic, Native American, black, Asian or Pacific Islander, and multiracial—were derived from a Hispanic ethnicity question and a list of races allowing multiple endorsements (Goldstein et al. 2010). Four urbanicity indicator variables were based on Beale codes (Economic Research Service 2004) that define level of rural or urban status on a scale of 1 (most urban, 53 percent of population, omitted) to 9 (most rural, 1 percent of population), with values of 5–9 (12 percent of population) pooled (Butler and Beale 1994). We estimated models with these additional case mix adjustors (full adjustment models), again fitting separate models by coverage type (MA, fee-for-service). We also tested the separate effects of race/ethnicity and urbanicity by assessing changes in R2 for models that include each set individually, compared with models using only the standard case mix adjustment.
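The R2 comparison described above can be sketched as follows; the fitted values are hypothetical stand-ins for predictions from the standard-adjustment and full-adjustment models:

```python
def r_squared(y, y_hat):
    """R^2 = 1 - SSE/SST: the share of score variance explained by a model."""
    mean_y = sum(y) / len(y)
    sse = sum((yi - fi) ** 2 for yi, fi in zip(y, y_hat))
    sst = sum((yi - mean_y) ** 2 for yi in y)
    return 1.0 - sse / sst

# Hypothetical fitted values from a standard-adjustment model and from a model
# that also includes race/ethnicity and urbanicity indicators.
y = [60.0, 70.0, 80.0, 90.0]
fit_standard = [70.0, 70.0, 80.0, 80.0]
fit_full = [62.0, 68.0, 82.0, 88.0]
delta_r2 = r_squared(y, fit_full) - r_squared(y, fit_standard)
# A positive delta_r2 indicates the added adjustors explain additional variance.
```

Fitting the expanded model once with each adjustor set alone, as the study did, attributes the R2 gain separately to race/ethnicity and to urbanicity.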

To test research question #3, we examined effects of individual California health plans on the difference between MA plan scores for the state and the rest of the country. We fit two additional models to test the effect of large Plan-A on the California MA scores. In one model, we included observations only for Plan-A, and in the other model we included observations only for other California plans. These two models compared each of these groups to all MA plans in the rest of the country.


Results

Comparisons of sample demographics and other variables used in the analysis, for California and the rest of the nation, are presented in Table 1. These data highlight the higher percentages of Asian and Hispanic populations and greater concentration in urban areas in California, compared with the rest of the nation.

Table 1
Characteristics of 2008 Medicare Beneficiaries in California and the Rest of the Nation, by Medicare Advantage (MA) or Fee-for-Service (FFS) Status*

In the comparative results presented here, all differences are statistically significant (p<.05) unless otherwise noted. As shown in Table 2, California CAHPS scores in 2008 were lower than scores for the rest of the country. California ranked poorly among states in the domains of physician services and immunization, and its rankings for the other CAHPS domains varied from above average to low (column 2). Adjusting scores for MA penetration reduced but did not eliminate the differences.

Table 2
Case Mix Adjusted Mean CAHPS Scores for Medicare Beneficiaries, California (CA) versus Non-CA, 2008

The results in Table 3 reveal differences in California's standing relative to the rest of the nation within MA and within fee-for-service. With standard case mix adjustment (columns 2 and 3), California MA tended to be above the national MA average, with the consistent exception of physician services; California fee-for-service tended to be below the national fee-for-service average in all domains, though its physician services deficit was less marked than that of California MA.

Table 3
California (CA) versus Non-CA Differences in CAHPS Scores for Medicare Beneficiaries in Medicare Advantage (MA) Plans and Fee-for-Service (FFS), with Standard and Full Case Mix Adjustments, 2008

Full case mix adjustment (columns 4 and 5) improved California's standing relative to the rest of the nation on all 14 measures for both MA and fee-for-service, when compared with standard case mix adjustment. However, California MA's standing relative to the national average improved by one point or more for only three measures (pneumonia immunization, getting care quickly, and getting needed care).

For California fee-for-service, the scores for all but one CAHPS measure had been significantly lower than the national average, and these scores moved closer to the national average. The exception was getting care quickly, which had a nonsignificant positive difference under standard adjustment that became larger and significant under full adjustment.

In subsequent analyses (not shown), we found that race/ethnicity was more important than urbanicity in predicting CAHPS measures (in terms of individual-level R2) and in explaining differences of California from the rest of the nation.

Because the additional case mix adjustors had only a modest effect on California CAHPS scores, we looked within the MA plans in California to explore how a large individual plan with substantial California MA enrollment (Plan-A) might be affecting variations in performance for the California MA sector.

As shown in Table 4, Plan-A compared much more favorably to the national MA average than the rest of California plans. It was markedly above the national MA average for all domains except physician services, where it was similar to the national average. In contrast, averages for other California plans fell below the national MA averages, except for Part D measures, which slightly exceeded the national average. Thus, some portion of the effects on CAHPS scores for California plans appeared to be related to the nature of the organization and operating styles of the individual plans. The sizes of these differences, in combination with Plan-A's large California market share, had a large effect on overall comparisons of California MA to national MA.

Table 4
Effect of “Plan-A” on CAHPS Scores for Beneficiaries in Medicare Advantage (MA) Plans, California (CA) versus Non-CA, Full Case Mix Adjustments, 2008


Discussion

The results of this study highlight the complexity that often underlies apparently simple average measures used to monitor performance trends across organizations or geographic areas. In this case, we examined performance on CAHPS patient experience-of-care measures in the Medicare program, comparing U.S. states as geographic areas. In 2008, California ranked near the bottom among states on physician services and immunizations, had mixed rankings for Part D measures, and had average to above-average performance on plan services. These differences were generally small (an exception being a 4-percentage-point gap in pneumococcal immunization), but they were practically and statistically significant. These results inform our three research questions about the factors that might drive such differences.

1. To what extent do overall differences in CAHPS performance between California and the rest of the country hold within the fee-for-service and MA sectors?

We found strong differences between the Medicare MA and fee-for-service sectors. In domains of immunization, plan services, and Part D services, California MA consistently exceeded the national MA average, whereas California fee-for-service generally lagged the national average. Both sectors were below average for physician services, but more so for California MA. These results are consistent with findings from other studies that found variations in performance across different quality measures (Miller and Luft 2002; Landon et al. 2004; Gillies et al. 2006).

These differences were not unexpected because the two health care models are quite different. Beneficiaries in fee-for-service Medicare have standard Medicare benefits and can choose their physicians and other providers freely. Beneficiaries enrolled in the MA plans have expanded benefits, and the plans more actively manage their care processes and access to providers, with some plans essentially locking the beneficiary into receiving care from a single group practice. Taken as a whole, California MA plans may exhibit the traditional management characteristics of managed care more strongly than Medicare managed care plans elsewhere in the country.

2. Are there additional stable characteristics of the beneficiaries or market that would explain observed score differences between California and the rest of the country, but which are not included in the standard CAHPS case mix adjustments?

Given the large differences between California and the rest of the country in race/ethnicity and urbanicity, we anticipated that their addition as case mix adjustors might be sufficient to explain the differences. While the additional adjustors improved California scores on almost all measures, they did not fully explain the lower California scores in MA and they explained fewer than half the lower California scores in fee-for-service. The most important contributor was race/ethnicity; urbanicity had much weaker effects, making the question of whether urbanicity captured response tendency or quality of care moot for this application. The higher prevalence of Asians (who tend to provide the lowest ratings and reports of care) and Hispanics (who tend to provide high ratings but low reports) in California contributed to understating performance in California relative to the rest of the nation (Weech-Maldonado et al. 2004, 2008).

3. Are there plan-specific differences in CAHPS scores that indicate that unique plan-level characteristics might be affecting observed California/non-California differences in scores?

CAHPS performance in California was affected by organizational and operational factors among the MA plans that do not exist in the fee-for-service Medicare sector. In particular, a single California health plan substantially altered the average MA CAHPS scores for California. This plan had strong scores for all the CAHPS measures except physician services, where it resembled the national MA average. When its data were removed from the CAHPS data for California MA plans, the average scores for the remaining plans dropped substantially, falling below the national MA average on all but the Part D domains. Further, their differences from the national MA means for the physician services domains were larger than the corresponding differences in the fee-for-service sector; this was not the case for the other CAHPS domains. Thus, one cannot simply conclude that all MA plans in California perform poorly in patient experience of care. Rather, it is important to look within the group at individual health plans to better characterize patterns of performance across plans. The existence of a large plan with consistently above-average performance limits the extent to which unmeasured factors specific to California's population are likely to explain observed differences.

Clinical care processes have been found to vary across individual health plans or across types of plans, and it is reasonable to expect that similar variations would be occurring for patient experience of care. For example, differences in inpatient care utilization have been found for beneficiaries with severe chronic diseases who are in fee-for-service Medicare versus health plans (Revere and Sear 2004), as have variations across Medicaid health plans in pediatric asthma care (Dombkowski et al. 2005). Health plan ownership has been found to be associated with risk sharing processes, utilization of hospital inpatient care, catastrophic case management, and drug formularies (Ahern and Molinari 2001). Conversely, a study that examined the effects of health plan delivery system on clinical quality and patient experience of care (using CAHPS) found that the type of delivery system used affected many clinical measures, but not the CAHPS measures (Gillies et al. 2006).

Other likely sources of differences are plans' varying approaches to working with their contracted medical practices that, in turn, can affect how patients experience the care they receive from physicians in those practices. For example, studies have found that physicians were more likely to change their clinical practices if they received care management tools from a medical group or group/staff model health plan (Haggstrom and Bindman 2007), that the structure of a health plan is related to the duration of office visits by elderly patients (Hu and Reuben 2002), and that a health plan's method of paying physicians can affect patients' experiences of care (Scoggins 2002).

The available data did not permit detailed examination of the effects of particular organization-specific factors on MA plan performance on CAHPS measures. Further, our analysis focused on Medicare fee-for-service and MA plans in just one state. Additional investigations of a broader set of health plans, both Medicare and commercial, are needed to identify factors that affect variations in performance across the greater health plan population. Both qualitative and quantitative methods may help unravel the dynamics of service delivery within a number of health plans, drawing upon existing theory and published research in the organizational behavior and health service literature.

Overall, our case study results suggest some areas for improvements. The evidence for low immunization rates in California, in both fee-for-service and MA, compared with the rest of the country suggests the need for quality improvement efforts. In addition, our findings on the role of race/ethnicity case mix adjustment on the average differences between California CAHPS scores and those for the rest of the country suggest that consideration be given to adding these adjustors in some contexts. Finally, any examination of variations in performance across Medicare fee-for-service and MA will need to consider variations across MA plans.


Conclusions

This study demonstrates that a variety of factors may contribute to observed differences across geographic areas in performance on CAHPS measures. The factors involved for California were individual demographics not used in the standard case mix adjustment model, differences between fee-for-service Medicare and the MA plans, and plan-specific differences in performance across MA plans. Further, the positive performance by one MA plan had a masking effect on the overall average CAHPS scores for all California MA plans. Although such diversity probably exists for most comparisons, the contributing factors are likely to vary depending on the specific situation. For California, future research is needed to uncover specific sources of performance gaps. In addition, further study of successful MA plans could generate general lessons for providing good patient experience to California's Medicare population and perhaps to those in other states, such as New York, with historically lagging Medicare beneficiary experience.


Joint Acknowledgment/Disclosure Statement: This study was funded by CMS contract HHSM-500-2005-00028I to RAND. The authors would like to thank Elizabeth Goldstein, Ph.D., for advice and support and Aneetha Ramadas, A.B., for assistance with the preparation of the manuscript.

Disclosures: None.

Disclaimers: None.

Supporting Information

Additional supporting information may be found in the online version of this article:

Appendix SA1: Author Matrix.

Table SA1: Standard Errors for Differences in CAHPS Scores between California and Rest of the Nation, Standard and Full-Adjustment Case Mix.

Table SA2: Standard Errors for Differences in CAHPS Scores between California and Rest of the Nation, California Plans, Plan-A Only, Plans Without Plan-A.

Please note: Wiley-Blackwell is not responsible for the content or functionality of any supporting materials supplied by the authors. Any queries (other than missing material) should be directed to the corresponding author for the article.


References

  • Agency for Healthcare Research and Quality. 2010. “Instructions for Analyzing CAHPS Data,” p. 5 [accessed December 17, 2010]. Available at
  • Ahern M, Molinari C. Impact of HMO Ownership on Management Processes and Utilization Outcomes. American Journal of Managed Care. 2001;7(5):489–97. [PubMed]
  • Baicker K, Chandra A. Medicare Spending, the Physician Workforce, and Beneficiaries' Quality of Care. Health Affairs. 2004;W4:184–97. [PubMed]
  • Butler MA, Beale CL. Rural-Urban Continuum Codes for Metro and Nonmetro Counties, 1993 (Staff Report No. 9425). Washington, DC: Agriculture and Rural Economy Division, Economic Research Service, U.S. Department of Agriculture; 1994.
  • California Health Care Foundation. California Medicare HMOs: Declining Benefits and Rising Costs. Oakland, CA: California Health Care Foundation; 2003.
  • Deming WB, Stephan FF. On a Least Squares Adjustment of a Sampled Frequency Table When the Expected Marginal Totals Are Known. Annals of Mathematical Statistics. 1940;11(4):427–44.
  • Dombkowski KJ, Cabana MD, Cohn LM, Gebremariam A, Clark SJ. Geographic Variation of Asthma Quality Measures within and between Health Plans. American Journal of Managed Care. 2005;11(12):765–72. [PubMed]
  • Dowd BE, Kralewski JE, Kaissi AA, Irrgang SA. Is Patient Satisfaction Influenced by the Intensity of Medical Resource Use by Their Physicians? American Journal of Managed Care. 2009;15(5):e16–21. [PubMed]
  • Economic Research Service. 2004. “Briefing Rooms: Measuring Rurality: Rural-Urban Continuum Codes” [accessed on June 30, 2009]. Available at
  • Elliott MN, Haviland A, Kanouse DE, Hambarsoomian K, Hays RD. Adjusting for Subgroup Differences in Extreme Response Tendency When Rating Health Care: Impact on Disparity Estimates. Health Services Research. 2009;44(2, part 1):542–61. [PMC free article] [PubMed]
  • Fisher ES, Wennberg DE, Stukel TA, Gottlieb DJ, Lucas FL, Pinder EL. The Implications of Regional Variations in Medicare Spending. Part 1: The Content, Quality, and Accessibility of Care. Annals of Internal Medicine. 2003a;138(4):273–87. [PubMed]
  • Fisher ES, Wennberg DE, Stukel TA, Gottlieb DJ, Lucas FL, Pinder EL. The Implications of Regional Variations in Medicare Spending. Part 2: Health Outcomes and Satisfaction with Care. Annals of Internal Medicine. 2003b;138(4):288–98. [PubMed]
  • Gillies RR, Chenok KE, Shortell SM, Pawlson G, Wimbush JJ. The Impact of Health Plan Delivery System Organization on Clinical Quality and Patient Satisfaction. Health Services Research. 2006;41(4):1181–99. [PMC free article] [PubMed]
  • Goldstein E, Cleary PD, Langwell KM, Zaslavsky AM, Heller A. Medicare Managed Care CAHPS®: A Tool for Performance Improvement. Health Care Financing Review. 2001;22(3):101–7. [PMC free article] [PubMed]
  • Goldstein E, Elliott MN, Lehrman WG, Hambarsoomian K, Giordano LA. Racial, Ethnic and Gender Differences in Patients' Perceptions of Inpatient Care. Medical Care Research and Review. 2010;67(1):74–92. [PubMed]
  • Haggstrom DA, Bindman AB. The Influence of Care Management Tools on Physician Practice Change across Organizational Settings. Joint Commission Journal on Quality and Patient Safety. 2007;33(11):672–80. [PubMed]
  • Hu P, Reuben DB. Effects of Managed Care on the Length of Time That Elderly Patients Spend with Physicians during Ambulatory Visits: National Ambulatory Medical Care Survey. Medical Care. 2002;40(7):606–13. [PubMed]
  • Hurley RE, Grossman JM, Strunk BC. Medicare Risk Contracting/Medicare Contracting Risk: A Life-Cycle View from Twelve Markets. Health Services Research. 2003;38(1, part 2):395–417. [PMC free article] [PubMed]
  • Hurley RE, Strunk BC, Grossman JM. Geography and Destiny: Local-Market Perspectives on Developing Medicare Advantage Regional Plans. Health Affairs. 2005;24(4):1014–21. [PubMed]
  • Landon BE, Zaslavsky AM, Bernard SL, Cioffi MJ, Cleary PD. Comparison of Performance of Traditional Medicare versus Medicare Managed Care. Journal of the American Medical Association. 2004;291(14):1744–52. [PubMed]
  • Lurie N, Zhan C, Sangl J, Bierman AS, Sekscenski ES. Variation in Racial and Ethnic Differences in Consumer Assessments of Health Care. American Journal of Managed Care. 2003;9(7):502–9. [PubMed]
  • Lutfiyya NM, Bhat DK, Gandlu SR, Nguyen C, Weidenbacher-Hoper VL, Lipsky MS. A Comparison of Quality of Care Indicators in Urban Acute Care Hospitals and Rural Critical Access Hospitals in the United States. International Journal for Quality in Health Care. 2007;19(3):141–9. [PubMed]
  • Martino SC, Elliott MN, Cleary PD, Kanouse DE, Brown JA, Spritzer KL, Heller A, Hays RD. Psychometric Properties of an Instrument to Assess Medicare Beneficiaries' Prescription Drug Plan Experiences. Health Care Financing Review. 2009;30(3):41–53. [PMC free article] [PubMed]
  • Medicare Payment Advisory Commission. Report to Congress. Washington, DC: MedPAC; 2003. Market Variation: Implications for Beneficiaries and Policy Report; pp. 19–38.
  • Miller RH, Luft HS. HMO Plan Performance Update: An Analysis of the Literature. Health Affairs. 2002;21(4):63–86. [PubMed]
  • O'Neill L. The Effect of Insurance Status on Travel Time for Rural Medicare Patients. Medical Care Research and Review. 2004;61(2):187–202. [PubMed]
  • Purcell NJ, Kish L. Postcensal Estimates for Local Areas (or Domains) International Statistical Review. 1980;48(1):3–18.
  • Reschovsky JD, Staiti AB. Access and Quality: Does Rural America Lag Behind? Health Affairs. 2005;24(4):1128–39. [PubMed]
  • Revere L, Sear A. Differences in U.S. Hospital Service Utilization between Traditional Medicare and Medicare HMO Patients. Journal of Health and Human Services Administration. 2004;27(3):347–71. [PubMed]
  • Scoggins JF. The Effect of Practitioner Compensation on HMO Consumer Satisfaction. Managed Care. 2002;11(4):49–52. [PubMed]
  • U.S. Congress. The Patient Protection and Affordable Care Act, P.L. 111-148. Washington, DC: Government Printing Office; 2010.
  • Weech-Maldonado R, Elliott MN, Morales LS, Spritzer KL, Marshall GN, Hays RD. Health Plan Effects on Patient Assessments of Medicaid Managed Care among Racial/Ethnic Minorities. Journal of General Internal Medicine. 2004;19(2):136–45. [PMC free article] [PubMed]
  • Weech-Maldonado R, Elliott MN, Oluwole A, Schiller KC, Hays RD. Survey Response Style and Differential Use of CAHPS Rating Scales by Hispanics. Medical Care. 2008;46(9):963–8. [PMC free article] [PubMed]
  • Zaslavsky AM, Zaborski LB, Cleary PD. Plan, Geographical, and Temporal Variation of Consumer Assessments of Ambulatory Health Care. Health Services Research. 2004;39(5):1467–84. [PMC free article] [PubMed]
  • Zaslavsky AM, Zaborski LB, Shaul DJA, Cioffi MJ, Cleary PD. Adjusting Performance Measures to Ensure Equitable Plan Comparisons. Health Care Financing Review. 2001;22(3):109–26. [PMC free article] [PubMed]
