The two papers by Blewett and Davern (2006) and Kenney, Holahan, and Nichols (2006) in the special section of this issue discuss the collection of large representative samples to monitor health, health care, and coverage of the population from the perspective of health policy researchers whose primary focus is at the state level. I approach this issue not only as a health policy researcher but also as a former federal policy maker who was responsible for recommending to the Secretary of Health and Human Services how much of an investment the Department should make in federal health data collection and what type of information it should collect regarding health insurance coverage, spending, and access in the future. My goal is to provide the reader with an understanding of how federal policy makers view the relative importance and usefulness of federal health data collection and how these perceptions have influenced data collection to date and are likely to influence its prospects in the future.
The current condition of federal health data collection is not good. The major national health insurance surveys find themselves under increasing pressure to prove their usefulness to policy makers. They struggle to justify the continued expenditure of tens of millions of taxpayers' dollars to collect data that policy makers may or may not believe offers much “return on investment” in informing critical policy decisions. Some surveys are at serious risk of being dropped or reduced because of annual budget freezes and reductions that lead to what federal budget officials colorfully call “death by a thousand cuts.” Policy makers, faced with limited resources, have to choose among funding priorities such as those that address pandemic flu, emergency preparedness, hurricane relief, bioterrorism, and expensive data collection tools. In this fierce competition for limited funding, data collection for ongoing monitoring of health and access to care is not faring well.
Although it is understandable that ongoing data collection is hard pressed to hold its own among these compelling and competing priorities, rigorously and systematically collected data are essential if we are to make serious inroads into policy solutions for vital areas such as the uninsured, the underinsured, and the affordability of both health insurance and health care.
I agree with Kenney et al.'s and Blewett and Davern's concern that the federal statistical community has failed to generate consistent, rigorous estimates of the number of uninsured, despite there being four major federal surveys focused on health care coverage alone: the Current Population Survey (CPS), the National Health Interview Survey (NHIS), the Medical Expenditure Panel Survey (MEPS), and the Survey of Income and Program Participation (SIPP). Each survey produces very different estimates of the uninsured. For example, estimates from these four most commonly used surveys range from 19 million to 45 million—a range that is far too disparate to guide sound policy making. And while the higher figure of 45 million uninsured is widely quoted by the media, it is widely criticized by experienced researchers and analysts working for Congress, the Administration, academia, and private consulting. No briefing of the President on the uninsured should have to start with a discussion of how four different government surveys measure the uninsured in four different ways, with the result that estimates of the uninsured range from 19 million to 45 million. Unfortunately, that is precisely what happened at a briefing for President Bush that I attended in the Roosevelt Room in 2004, and it is likely that President Clinton faced a similar briefing 10 years earlier.
For a decade to have passed without health analysts being able to give policy makers a more accurate and consistent estimate of the size of the uninsured is indeed a stumbling block. No President or Congressional Committee Chair wants to author a program, for example, that targets funding an estimated 5 million uninsured children only to discover that in reality 8 million children lack insurance.
Although I therefore agree with the authors that we have a serious problem as long as we have at least four major health insurance surveys providing a vast range of estimates, I would be alarmed by an attempt to somehow merge these four surveys into one megasurvey. Each of the different surveys was designed for different purposes. As pointed out by Blewett and Davern, the CPS is an income survey that began collecting data on health insurance only after many years had passed. It was only as health insurance became a more significant percentage of a worker's total compensation package that it became clear to the CPS staff that they were missing an important part of compensation by not measuring it. In contrast, the NHIS and MEPS are health-focused surveys, whereas the SIPP is primarily a survey of participation in federal programs. Each serves its own topic areas and its own audience and is directed at providing information for more than one policy area.
Although the promise of a new American Community Survey (ACS) with a 3 million person sample may be appealing to analysts, it is hard to imagine that more than one or two health insurance questions could be included on such a survey. I contend that using the very large sample of the ACS to improve the sample methodologies of other surveys with more detailed questions may be more promising than simply condensing four surveys into one.
In addition, each of the four current surveys asks a wide range of questions that go well beyond health insurance coverage and access. Would those questions be jettisoned or would only one survey be allowed to ask questions about health insurance? If they are not jettisoned, then there would be little savings to reprogram into a new survey. If they were to be jettisoned, there would be significant opposition from the constituency interested in addressing those other questions.
The authors make a good point that some surveys are stronger than others at answering particular questions, and given their interest in making state-level estimates it is not surprising that they prefer the CPS and NHIS, with their larger sample sizes. However, when the issue under study is strengthening the ability of employment-based health insurance to cover the uninsured, the MEPS is typically more helpful. When the question is discerning whether lower health insurance rates among Hispanics are really a function of immigration, not ethnicity, the CPS holds the most promise. My point is that different questions often require different survey tools and that finding answers to a problem as complex as the uninsured will continue to require a range of analytic tools.
A single mega health insurance and health access survey—as tempting as it may appear on the surface—presents political, practical, and methodological risks that I believe would prove insurmountable. A better solution is for the health policy experts to resolve the conflicts in the way in which the four major surveys estimate the number of people without health insurance. Fortunately, work is already advancing within the government to do just that. Analysts from Census and HHS are working in consultation with Treasury, CRS and CBO to achieve consensus. During my time as Assistant Secretary for Planning and Evaluation (ASPE) at HHS, I committed significant staff resources and close to a million dollars to fund outside research on this very issue. My counterparts at the other agencies involved made similar commitments of staff and research funds. Experts within the government are working collaboratively to resolve the technical questions of undercounts, overcounts and why the surveys fail to yield similar estimates.
Currently, work to understand and account for differences among the surveys is underway at the federal level, including efforts to: match estimates from the national surveys to estimates from state-sponsored surveys; match Medicaid respondents on the national surveys to Medicaid's administrative records of those who actually received coverage; determine if there is a miscount (either an undercount or overcount) in employer-based coverage; examine questions from the different surveys to determine whether and how cognitive interpretation accounts for some differences in responses; test whether imputation of missing data versus reweighting for missing data produces more accurate results; and add questions measuring point-in-time coverage to surveys that typically ask questions only about coverage for the full year. It is essential that this work continue and be reinforced by advice and support from the research community outside the federal government. I remain optimistic that the above actions will improve the estimates from these different surveys and ultimately will lead the estimates to converge.
I should add that the issue of health insurance coverage is not the only health policy issue faced with the troubling dynamic of multiple databases, multiple estimates, and uncertainty about which survey produces the best numbers. This problem is repeated across a number of other crucial health policy areas, including drug prices, drug spending, and accurate measures of income and poverty at the national and the state levels. Each of these areas also needs efforts to ensure that those responsible for the differing surveys work together to understand and resolve the differences and determine together how to obtain the most rigorous and accurate results for informing policy.
I would like to address a second issue raised by Blewett and Davern and by Kenney et al., namely their assumption that rigorous state-level estimates are an essential priority for both federal and state officials. I would argue that although it may appropriately be the top priority for state officials, it is clearly not—nor should it be—the top priority for federal officials.
In my view, the top priority for federal survey agencies is to serve as an “essential partner” to federal policy makers, whether in the Administration or Congress, by providing them with the most rigorously designed and collected database and the most rigorous and sound analysis based on this evidence to inform federal policy making. To the extent that federal survey agencies fail to fill this role, they often find their federal data collection efforts under-funded and under-utilized. Therefore, while federal and state officials need to work together to maximize the usefulness of the data at both levels of analysis, I believe that state officials must recognize that the onus is on them to strengthen their own states' data, while federal officials focus primarily on national data and on data informing federal-level programs and policy.
The picture that emerges from Blewett and Davern's and Kenney et al.'s descriptions is of a state process already heavily subsidized by the federal government. Consider the example of the SCHIP program cited by the authors. As a direct consequence of the SCHIP program, states began gathering better coverage data. However, among federal/state partnerships for health-related programs, SCHIP is one of the most heavily subsidized by the federal partner. The federal government pays between 65 and 83.2 percent of SCHIP costs compared with between 50 and 76 percent of Medicaid costs.1 Furthermore, in order to support state efforts related to SCHIP, Congress appropriated an additional $10 million a year to increase the sample size of the CPS to enhance state-level estimates. And as also noted in the two articles, federal HRSA grants were the key source of state funding for states' data collection efforts.
Federal survey agencies have to prove to federal policy makers that they are worth the significant investment that the federal policy makers have already made. They have to prove that they can be essential partners in addressing critical policy problems. It would be ill-advised for federal survey agencies to devote their limited resources—resources that federal policy makers are already considering cutting—to enhancing the data for other audiences, such as state policy makers. In these efforts the states, academia, and the broader research community are primarily colleagues, providing advice on how to maximize the rigor and usefulness of the surveys, rather than direct constituents or clients of the federal agencies overseeing the surveys.
Despite these differences in perspective when viewed from the federal level, I would agree that there is much to be gained from building partnerships in collecting data. Rather than assuming that all have the same goals and constituents, I would argue that the best partnership between the federal government and the states is one in which the federal government sponsors large surveys and gives states an opportunity to piggyback onto these surveys. This approach has already proven extremely successful, for example, when some states paid for enhanced state samples on the federal MEPS survey.
In conclusion, I would urge those who call on the “Feds to pay more” for state data collection, without regard to its usefulness at the federal policy level, to instead recognize the vulnerability of federal health data collection to being under-funded for all purposes. As federal health policy researchers succeed in becoming more essential partners in providing policy makers with credible, timely, and consistent data, federal and state health data collection will improve.
1CRS Report for Congress, “State Children's Health Insurance Program (SCHIP): A Brief Overview,” Updated March 23, 2005, Elicia J. Herz, Specialist in Social Legislation, Bernadette Fernandez, Analyst in Social Legislation, Chris L. Peterson, Analyst in Social Legislation—Domestic Social Policy Division.