Objective. To examine whether known Medicaid enrollees misreport their health insurance coverage in surveys and the extent to which misreports of lack of coverage bias estimates of uninsurance.
Data Sources. Primary survey data from the Medicaid Undercount Experiment.
Study Design. We analyze new data from surveys of Medicaid enrollees in California, Florida, and Pennsylvania and summarize existing research examining bias in coverage estimates due to misreports among Medicaid enrollees.
Data Collection. Subjects were randomly drawn from Medicaid administrative records and were surveyed by telephone.
Principal Findings. Cumulative evidence shows that a small percentage of Medicaid enrollees mistakenly report being uninsured, resulting in modest upward bias in estimates of uninsurance. A somewhat larger percentage of enrollees report having some other type of coverage rather than no coverage, biasing Medicaid enrollment estimates downward but not biasing estimates of uninsurance significantly upward. Implications for policy makers' confidence in survey estimates of coverage are discussed.
There is consensus among researchers that population surveys of health insurance coverage undercount the number of individuals enrolled in Medicaid (Swartz and Purcell 1989; Holahan, Winterbottom, and Rajan 1995; Bennefield 1996; Dubay and Kenney 1996; Lewis, Elwood, and Czajka 1998; Blumberg and Cynamon 1999; Congressional Budget Office 2003). That is, estimates of the number of individuals with Medicaid coverage derived from survey data are consistently lower than the count of individuals enrolled in Medicaid obtained from administrative records. This is referred to as the “Medicaid undercount.”
The existence of a Medicaid undercount implies that Medicaid recipients do not report Medicaid coverage in surveys asking about health insurance coverage. Many assume that Medicaid enrollees either do not understand that they are enrolled or are embarrassed to report their enrollment (Klerman, Ringel, and Roth 2005). This misreport can take two forms: the person incorrectly reports having some other type of coverage, or the person misreports being uninsured. Adjustments made to correct for the discrepancy between survey and administrative counts of Medicaid participation tend to assume more of the latter: that Medicaid enrollees fail to report any coverage (Callahan and Mays 2005; Urban Institute 2006).
Here, we directly test the assumption that Medicaid enrollees inaccurately report their coverage and instead say they are uninsured in surveys. If Medicaid enrollees indicate that they lack any type of health insurance coverage, then survey estimates of uninsurance will be biased upward. If Medicaid enrollees instead indicate they have some type of coverage, then survey estimates of uninsurance are not biased by this form of measurement error. Understanding the magnitude and form of this measurement error is important: surveys are the only source of information on those lacking coverage, providing the only means of assessing the extent to which programs are reaching their target populations, and survey estimates are widely used in health services research for policy development, evaluations, and simulations (Blewett et al. 2004).
In this paper, we examine new data concerning the extent to which Medicaid enrollees accurately or inaccurately report health insurance coverage. In addition, we summarize existing evidence from a variety of studies regarding the degree of misreporting. We then calculate the extent of upward bias introduced by Medicaid enrollees misreporting uninsurance in health insurance surveys. Finally, we discuss the implications of this evidence for confidence in the use of survey estimates of uninsurance for policy analysis.
The Medicaid Undercount Experiment (MUE) conducted surveys of known Medicaid enrollees to observe the percent of Medicaid enrollees who incorrectly report their health insurance coverage. Surveys were undertaken in California (CA), Florida (FL), and Pennsylvania (PA). Survey instruments and administration (including the survey vendor and timing of implementation) closely replicated each state's general Random Digit Dial population surveys of health and health care coverage.
Study populations for the MUE were randomly drawn from state administrative records of noninstitutionalized Medicaid enrollees.1 Table 1 describes each of the MUE samples and response rates. The CA MUE included only individuals 18 years and older, and all responses were self-reports. The FL and PA MUEs were household surveys that included children, with the most knowledgeable adult providing proxy reports for children and other household adults. Because FL's general population survey targeted nonelderly households, households containing only members over age 64 were excluded from both surveys. Respondents over age 64 were therefore not representative of the elderly population generally, and their data were excluded from the analysis.
The response rates vary from state to state and are perhaps lower than desired. However, recent studies indicate that the relationship between response rates and response bias is minimal in opinion and attitude polls generally (Keeter et al. 2000, 2006; Groves 2006), and in health surveys specifically (Triplett 2002; Blumberg et al. 2005; Davern et al. 2006b; Holle et al. 2006).
After the MUE surveys were completed, respondent identification information was matched against administrative data, and our analyses of insurance coverage were conducted only on those sampled Medicaid enrollees actually enrolled at the time of the survey (see exclusions in the fourth column of Table 1). Given our focus on self-reports of health insurance coverage, we also exclude cases in which the respondent was unable to provide coverage information or answer questions about factors associated with health insurance coverage, such as health status. The MUE data were weighted to be representative of the enrollment population from which the samples were drawn. Analyses were performed using Stata statistical software, which adjusts standard errors to account for the complex survey design (StataCorp 2003).
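For readers interested in the mechanics of design-based estimation, the sketch below illustrates a weighted estimate of the misreport rate with a cluster-robust (Taylor-linearized) standard error. It is a minimal illustration with hypothetical variable names, not the study's actual code; the published analyses used Stata's survey (svy) commands, which additionally account for stratification.

```python
# Minimal sketch (hypothetical variable names): design-weighted estimate of
# the share of confirmed enrollees who report being uninsured, with a
# with-replacement PSU (cluster) variance estimator.
import numpy as np

def weighted_misreport_rate(reports_uninsured, weights, psu_ids):
    y = np.asarray(reports_uninsured, dtype=float)  # 1 = reported uninsured
    w = np.asarray(weights, dtype=float)            # survey weights
    p_hat = np.sum(w * y) / np.sum(w)               # weighted proportion

    # Taylor linearization of the ratio estimator: per-respondent scores,
    # summed within primary sampling units (clusters)
    z = w * (y - p_hat) / np.sum(w)
    cluster_totals = np.array(
        [z[psu_ids == c].sum() for c in np.unique(psu_ids)]
    )
    k = len(cluster_totals)
    var = k / (k - 1) * np.sum((cluster_totals - cluster_totals.mean()) ** 2)
    return p_hat, np.sqrt(var)

# Toy data for illustration only
rng = np.random.default_rng(0)
n = 500
y = rng.binomial(1, 0.05, n)     # ~5 percent misreport uninsurance
w = rng.uniform(0.5, 2.0, n)     # illustrative weights
psu = rng.integers(0, 50, n)     # 50 hypothetical clusters
rate, se = weighted_misreport_rate(y, w, psu)
print(f"estimated misreport rate: {rate:.3f} (SE {se:.3f})")
```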
Paralleling the general state surveys, all MUE surveys asked about respondents' health insurance coverage at the time of the survey. Each survey included a series of questions asking whether the target respondent (CA) and other members of the household (FL and PA) were currently covered by various sources of private and public health insurance. Respondents were allowed to say “yes” to multiple sources of insurance, and a verification question confirmed lack of coverage among those saying “no” to all insurance sources. The CA MUE instrument, modeled after the California Health Interview Survey (CHIS), was an omnibus health survey in which questions about insurance coverage come later in the survey (approximately eight sections into the instrument). The FL and PA surveys, whose primary focus was on health insurance coverage and access to health care, placed the coverage questions at the beginning of the survey.
Our analysis takes the results from new and existing research presenting self-reports of health insurance coverage among known enrolled populations and then calculates the impact of survey misreports of health insurance coverage among Medicaid recipients on bias in estimates of uninsurance. This synthesis is aimed at improving our understanding of the form and magnitude of bias in uninsurance estimates derived from the various studies and methodologies (experimental and matching studies).
Tables 2 and 3 are divided into sections summarizing experimental and matching studies. Column (2) of Table 2 contains the rate at which Medicaid enrollees correctly report that they have Medicaid. Column (3) shows the percent of respondents who answer the survey as though they have some other type of health insurance coverage, and column (4) provides the percent of the Medicaid population that answers the survey as though they are uninsured. In Table 3, column (1) provides the insured population count represented in each study. For example, the CA MUE sample (row 1) represented the 3,309,192 adults enrolled in California's Medicaid program (referred to as Medi-Cal) on average during the months the CA MUE survey was in the field. Column (2) presents the percent of respondents known to be enrolled in Medicaid who mistakenly report being without insurance (the same as Table 2, column [4]). Column (3) is the product of the first two columns, representing the upward bias in the count of uninsured people. For example, the count of uninsured CA adults in 2004 was approximately 344,487 too high due to 10.4 percent of adult Medi-Cal recipients reporting they had no insurance. Column (4) represents the size of the age-relevant population in the study year (e.g., based on the CPS, about 25.8 million adults lived in CA in 2004), and column (5) provides the size of the total population in the study year (e.g., about 35.4 million people lived in CA in 2004). Column (6) displays the percentage-point upward bias to age-relevant estimates of the percent uninsured (column [3]/column [4]), and column (7) shows the amount of upward bias to population estimates of uninsurance (column [3]/column [5]) due to misreports of uninsurance among the insured.
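In equation form, the Table 3 calculation works as follows (a worked restatement of the published CA figures; the 10.4 percent rate is rounded, so the product below uses roughly 10.41 percent to reproduce the reported 344,487):

$$\text{bias in uninsured count} = \text{enrolled population} \times \text{misreport rate}$$
$$3{,}309{,}192 \times 0.1041 \approx 344{,}487$$
$$\text{percentage-point bias (age-relevant)} = \frac{344{,}487}{25{,}800{,}000} \approx 1.3 \text{ points}$$
$$\text{percentage-point bias (total population)} = \frac{344{,}487}{35{,}400{,}000} \approx 1.0 \text{ point}$$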
There are three key findings from the experimental studies. First, relatively few persons known to have Medicaid coverage incorrectly indicate that they are uninsured in surveys. As shown in Table 2, on average across the seven experiments, 4.6 percent of those with coverage report no insurance, ranging from a high of 10.4 percent among adults on Medicaid in CA in 2004 to a low of 0.6 percent of adults enrolled in Medicaid in Minnesota in 2003.2
Second, the experimental results suggest greater accuracy in reports of health insurance type than previously assumed. Among those who do not report Medicaid, for the most part more report the wrong type of insurance coverage than report a lack of insurance altogether (the exception is the CA MUE). As shown in Table 2, the majority of enrollees accurately report Medicaid: 79 percent on average across the studies. The study by Call et al. (2001) shows the lowest accuracy, which can be attributed to a survey design that did not allow respondents to answer the full array of insurance type questions.3
The third major result, presented in Table 3, is that errors in reporting introduce little bias to overall estimates of uninsurance (either age-relevant or total population rates). The upward bias to estimates of uninsurance in the experiments ranged from a high of 1.3 percentage points for CA adults in 2004 to a low of 0.0 percentage points among Minnesotans in 2003, for simple averages across the seven experiments of 0.6 percentage points (age-relevant) and 0.4 percentage points (total population).
Tables 2 and 3 show substantial variation in the magnitude of misreporting uninsurance in experimental studies. Variation is expected given differences in the populations studied, sampling error, instrument design, and survey operations (Call, Davern, and Blewett 2007). For example, the FL and PA surveys ask respondents about health insurance coverage early in the instrument, which may improve reporting accuracy over the CA MUE, which includes these questions later in the survey. Another viable explanation for the higher rate of misreports among Medi-Cal recipients is that the CA MUE is an adult-only sample. Analyses by age in FL and PA (Call et al. 2006b) indicate that reports are less accurate among adult enrollees than among adults reporting for child enrollees. Finally, CA has a higher percentage of Medicaid enrollees receiving partial benefits such as emergency, family planning, or tuberculosis-related services (18.3 percent, compared with 2.4 percent in FL and 9.3 percent in PA [Call et al. 2006b]). Partial benefit enrollees are less accurate reporters than those receiving comprehensive Medicaid benefits (Kincheloe et al. 2006).
In contrast to the other experiments that use a point-in-time measure of insurance coverage, the study by Eberly, Pohl, and Davis (2005) mimics the Current Population Survey's Annual Social and Economic Supplement (CPS-ASEC) questions about previous calendar year coverage, which is associated with higher measurement error (Sudman, Bradburn, and Schwarz 1996). Contrary to expectations, their results are comparable with the other experiments (aside from Call et al. 2001). The lower-than-anticipated rate of misreporting of uninsurance among Maryland Medicaid enrollees may be due to the addition of a question to the end of the CPS-ASEC insurance coverage series providing colloquial names for state-specific public programs, which significantly increased reports of coverage over the standard CPS-ASEC instrument (Eberly, Pohl, and Davis 2005).
How do the state experimental studies compare with the matching study? To start, the matching study4 by Klerman, Ringel, and Roth (2005)5 finds a much higher rate of reporting error (second section of Table 2): 21.7 percent of adults with Medi-Cal enrollment over these 11 years were inferred to have reported no insurance in the CPS-ASEC, more than twice the rate found in the state experimental studies. This nonetheless translates into an upward bias in the estimate of uninsurance of approximately one percentage point over this period, similar to the experimental studies (the use of 11 years of data precludes our ability to provide relevant population totals for these cells in Table 3).
Some speculate that the Medicaid undercount results, in part, from people with Medicaid failing to report this coverage and instead reporting they are uninsured (Lewis, Elwood, and Czajka 1998; Klerman, Ringel, and Roth 2005). The experimental evidence shows that people on Medicaid are fairly accurate reporters of insurance coverage (upwards of 79 percent report Medicaid), and when they do fail to report Medicaid, for the most part, more report the wrong type of coverage than report being uninsured. Only a modest number of Medicaid enrollees erroneously report not having health insurance (an average of 4.6 percent across the seven experiments) resulting in similarly modest bias in estimates of uninsurance at a point in time (less than a one percentage point average increase in the rate of uninsurance).
The results of the experimental studies have several implications. First, they raise concerns about the reassignment of survey respondents from uninsured to Medicaid coverage in surveys. Two simulation models have been created to adjust the CPS-ASEC coverage estimates to match administrative counts of Medicaid: the ARC model (Callahan and Mays 2005) and the TRIM3 model (Urban Institute 2006). Both models draw a substantial proportion of cases reassigned to Medicaid from the ranks of the uninsured (54 percent for ARC compared with 32 percent for TRIM3 [Czajka 2005]), and thus assume two to three times more bias in estimates of the uninsured than indicated in the experimental studies. However, both simulation models adjust data from the CPS-ASEC, which implements an annual measure of health insurance coverage appearing toward the end of a long survey primarily concerned with labor force and program participation. The CPS-ASEC therefore likely contains more measurement error than instruments that ask about point-in-time coverage, as used by all but the Maryland experiment (Sudman, Bradburn, and Schwarz 1996). There are longstanding concerns that estimates of uninsurance from the CPS-ASEC are too high (primarily due to survey design issues), as well as uncertainty about what this uninsurance estimate truly represents (annual or point-in-time) (DeNavas-Walt, Proctor, and Lee 2005). Adjusting these estimates to match all-year Medicaid enrollment counts may therefore not be appropriate, because the estimates of other types of coverage (and of the uninsured) do not resemble all-year estimates. Of primary concern here is that states emulating this practice of adjusting survey estimates to be consistent with Medicaid enrollment data (Washington State Office of Financial Management 2003) may be introducing other unknown biases into coverage estimates. The Medicaid undercount is but one source of measurement bias in health insurance surveys.
Second, and most importantly, findings from the experimental studies should provide reassurance about the merits of using general population survey data to inform policy decisions, especially those that use point-in-time measures of health insurance coverage. The cumulative evidence indicates that respondents do a good job of self-reporting Medicaid coverage as well as whether they do or do not have insurance; therefore, harsh criticisms of survey estimates of uninsurance are unfounded (Hunter 2004; Joint Economic Committee 2004).
Several issues are worth pursuing in future research. The first is the question of comparability between survey and administrative data. Surveys are designed to answer questions about the distribution of attitudes, opinions, and characteristics of populations. Administrative data are collected for program management and payment purposes, gathering information at enrollment and redetermination periods on family and individual characteristics and statuses that can be quite dynamic. Perhaps it is unrealistic to think that data collected for such disparate purposes would be directly comparable. Both data sources likely contribute to the discrepancy in counts of public health care program enrollees (Hoffman and Holahan 2005; Call et al. 2006a; Kincheloe et al. 2006), yet both are valuable and should be used as intended.
Future research should look to potential sources of upward bias in survey estimates of Medicaid coverage and downward bias in survey estimates of uninsurance. There is evidence that some people with commercial insurance misreport that they have Medicaid coverage (Davern et al. 2006a). Further, people living in households with Medicaid enrollees sometimes report having Medicaid when there is no indication they are enrolled and they self-report no other kind of coverage (Davidson 2005). There is also reason to believe that some uninsured persons report having private coverage in surveys (Kreider and Hill 2006). Based on the design of coverage questions alone—asking about as many as eight potential sources of coverage—an uninsured person also has many chances to mistakenly report coverage or feel pressured to offer this socially desirable response. Therefore, it is plausible that surveys may in part undercount the number uninsured.
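To illustrate the scope for this last source of error (an illustrative calculation of ours, not a result from the studies cited): if each of $k = 8$ coverage questions independently carried even a small false-positive probability $p$, a truly uninsured respondent would mistakenly affirm at least one source with probability

$$1 - (1 - p)^{k}, \qquad \text{e.g., } 1 - (0.99)^{8} \approx 0.077$$

so a 1 percent per-question error rate would compound to roughly 7.7 percent of uninsured respondents being miscounted as covered, under these admittedly strong independence assumptions.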
In summary, although there is evidence that Medicaid enrollees mistakenly report being uninsured, resulting in some upward bias in estimates of uninsurance, for the most part this bias is not large. The results point to the importance of improving survey measurement to help respondents correctly report the presence or absence of health insurance coverage and its source. Working toward the best estimate of the uninsured should help refocus the debate on the needs of the uninsured rather than on their count. We conclude that policy makers can take comfort that point-in-time survey estimates of uninsurance are usefully accurate.
This study was funded by a grant from the Robert Wood Johnson Foundation's (RWJF) Changes in Health Care Financing and Organization (HCFO) Initiative. We thank John Holahan of the Urban Institute for his guidance on this project and Linda Bilheimer of the National Center for Health Statistics (formerly at RWJF) for encouraging our pursuit of this project. We appreciate the time and insights of all those who participated in this study. In California, this includes E. Richard Brown, Wei Yen, and Jennifer Kincheloe at UCLA. In Florida, we thank Paul Duncan, Allyson Hall, and Colleen Porter at the University of Florida. In Pennsylvania, this includes Brian Robertson and Patrick Madden of Market Decisions LLC; Ed Naugle and Patricia Stromberg at the Pennsylvania Insurance Department; and William Columbus and Jerry Koerner at the Pennsylvania Department of Public Welfare.
1Details about the sampling design and matching process for verifying enrollment are available in Call et al. (2006b).
2Consistent with the MUE methodology, in each experiment a respondent's enrollment at the time of the survey was confirmed, given the lag between when the samples were drawn and when the surveys were completed.
3Those responding “yes” to the Medicare question were not asked about Medicaid coverage (the first and second question in the series, respectively); these two programs are easily confused in surveys (Pascale 2001).
4Individual-level survey responses were matched with enrollment data using Social Security numbers (SSNs) for CPS-ASEC respondents between the ages of 15 and 64 for whom SSNs were available.
5A similar matching study was conducted by Card, Hildreth, and Shore-Sheppard (2004). Their analysis does not distinguish between those who report the wrong type of insurance and those who do not report any coverage and is therefore excluded from our analysis.