Am J Epidemiol. 2016 March 15; 183(6): 542–543.
Published online 2016 March 5. doi: 10.1093/aje/kwv239
PMCID: PMC5013928

Basu et al. Respond to “Interdisciplinary Approach for Policy Evaluation”

We appreciate the thoughtful comments from Hawkins and Baum (1) regarding our study of welfare reforms and the health of mothers (2). On 2 central points, we agree entirely: Rigorous evaluations of the health impacts of social policies should be a high scientific priority for the fields of both economics and epidemiology, and researchers will face design and data limitations that render such evaluations challenging.

Hawkins and Baum noted limitations of our data and approach and pointed out that, given the novelty of the synthetic cohort design, more detail on the method is merited. We agree that more disaggregated estimates across types of women (e.g., those who cannot find childcare), more detail on the children in affected households (e.g., participation of multiple children in preschool), and access to data from earlier years in states with early welfare reforms would be important extensions of our analysis. Unfortunately, lack of consistent relevant data prevented us from performing such analyses. Hawkins and Baum reported that bias has been found not in the point estimates of difference-in-differences models but in the standard errors (producing bias in precision). Bias in precision might result in substantial overconfidence in conclusions made about policy (e.g., the study we cited reported "significant" findings at the 5% level for up to 45% of placebo interventions (3)). However, the difference-in-differences approach will also produce biased point estimates if the control group is not exchangeable with the intervention group. The synthetic control approach is appealing for its ability to construct a more exchangeable control group, and it facilitates "placebo" testing (analogous to permutation testing) to estimate whether the effect for the studied cohort is large relative to the estimates for randomly chosen placebo groups, in order to assess whether results could be driven entirely by chance (4). Furthermore, the difference-in-differences method is limited by the need for treatment and control groups to experience parallel trends in outcomes. Yet the synthetic control approach is unbiased even when data for only a single pre-intervention period are available (4). Hawkins and Baum requested more clarity about how we defined cohort-level data from individual-level data for the synthetic control analysis.
To clarify the model specified in the Methods section, we estimated the distributions of each outcome variable, weighted by survey sample weights, for each of the 4 groups in our study (single mothers, married mothers, single nonmothers, married nonmothers) across all years of the analysis, creating a balanced panel of data to compare the outcomes across groups (summarized in Table 1 of our article (2)).
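The aggregation described above can be sketched as follows. This is an illustrative reconstruction, not the authors' actual code: the record fields, group labels, and toy values are hypothetical, and the only operation shown is the survey-weighted averaging of an outcome within each group-year cell to form a balanced panel.

```python
# Sketch: collapse individual-level survey records into one weighted
# outcome summary per group per year, yielding a balanced panel.
# Field names ('group', 'year', 'outcome', 'weight') are illustrative.
from collections import defaultdict

def weighted_panel(records):
    """records: iterable of dicts with 'group', 'year', 'outcome', 'weight'.
    Returns {(group, year): survey-weighted mean of the outcome}."""
    sums = defaultdict(float)
    weights = defaultdict(float)
    for r in records:
        key = (r["group"], r["year"])
        sums[key] += r["weight"] * r["outcome"]
        weights[key] += r["weight"]
    return {k: sums[k] / weights[k] for k in sums}

# Toy example with two of the four groups in a single year.
records = [
    {"group": "single_mothers", "year": 1995, "outcome": 1.0, "weight": 2.0},
    {"group": "single_mothers", "year": 1995, "outcome": 0.0, "weight": 1.0},
    {"group": "married_mothers", "year": 1995, "outcome": 1.0, "weight": 1.0},
]
panel = weighted_panel(records)
# panel[("single_mothers", 1995)] -> 2/3 (weighted mean)
```

Repeating this for every outcome, group, and year produces the balanced panel compared across groups in the analysis.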

Neither the difference-in-differences nor the synthetic control approach is flawless. To our knowledge, no studies to date have provided compelling evidence on when one approach is more valid. Hence, we pursued both approaches, found substantively very similar results, and considered the results more convincing because of the concordance. We believe that this use of multiple complementary methods represents an important step forward for inference. We found generally adverse associations between welfare reforms in the 1990s and health outcomes. To the extent that effect size estimates differ in some cases, we think the divergence is quite unlikely to be a critical decision point in policymaking. The tension between our approaches is accompanied by the challenges of studying population-level health outcomes from policies despite the threat of ecological fallacies, unobserved confounding factors, and heterogeneity in risk within diverse populations—challenges that loom large when studying a policy with diffuse impact, such as welfare reform. In this context, we agree with Hawkins and Baum regarding the potential benefits of cross-discipline intellectual arbitrage between epidemiology and economics to address the challenge of policy evaluation in both substantive and methodological ways.
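The two estimation ideas discussed in this exchange can be sketched minimally. This is not the authors' implementation; the functions and toy numbers are hypothetical, showing only the difference-in-differences point estimate and the placebo-rank logic used to judge whether an estimated effect is large relative to chance.

```python
# Sketch (illustrative, not the study's code) of two ideas from the text:
# a difference-in-differences point estimate, and a placebo test that
# compares the treated group's estimate against estimates for placebo
# (untreated) groups, analogous to permutation testing.

def did(pre_t, post_t, pre_c, post_c):
    """Difference-in-differences: change in treated minus change in control."""
    return (post_t - pre_t) - (post_c - pre_c)

def placebo_rank(effect, placebo_effects):
    """Fraction of placebo effects at least as large in magnitude as the
    estimated effect; a small fraction suggests the effect is unlikely
    to be driven entirely by chance."""
    as_large = sum(1 for e in placebo_effects if abs(e) >= abs(effect))
    return as_large / len(placebo_effects)

# Toy numbers: treated outcome worsens by 3, control by 1 -> DiD of -2.
effect = did(pre_t=10.0, post_t=7.0, pre_c=10.0, post_c=9.0)
placebos = [0.2, -0.5, 1.0, -0.3, 0.4]
rank = placebo_rank(effect, placebos)  # none of the placebos is as large
```

Note that the point estimate is only unbiased if the control group is exchangeable with the treated group (the parallel-trends requirement discussed above), which is the motivation for also constructing a synthetic control.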

ACKNOWLEDGMENTS

Author affiliations: Center on Poverty and Inequality, Stanford University, Stanford, California (Sanjay Basu); Institute for Economic Policy Research, Stanford University, Stanford, California (Sanjay Basu); Centers for Health Policy, Primary Care, and Outcomes Research, Stanford University, Stanford, California (Sanjay Basu); Prevention Research Center, Department of Medicine, Stanford University, Stanford, California (Sanjay Basu); Department of Public Health and Policy, London School of Hygiene and Tropical Medicine, London, United Kingdom (Sanjay Basu); Division of General Medical Disciplines, Department of Medicine, Stanford University, Stanford, California (David H. Rehkopf); Department of Epidemiology, Dalla Lana School of Public Health, University of Toronto, Toronto, Canada (Arjumand Siddiqi); Department of Social and Behavioral Sciences, Dalla Lana School of Public Health, University of Toronto, Toronto, Canada (Arjumand Siddiqi); Department of Health Behavior, Gillings School of Global Public Health, University of North Carolina at Chapel Hill, Chapel Hill, North Carolina (Arjumand Siddiqi); Department of Epidemiology and Biostatistics, University of California San Francisco, San Francisco, California (M. Maria Glymour); and Department of Social and Behavioral Sciences, Harvard School of Public Health, Boston, Massachusetts (Ichiro Kawachi).

Research reported in this publication was supported by the National Institute on Minority Health and Health Disparities of the National Institutes of Health under award number DP2MD010478.

The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health.

Conflict of interest: none declared.

REFERENCES

1. Hawkins SS, Baum CF. Invited commentary: an interdisciplinary approach for policy evaluation. Am J Epidemiol. 2016;183(6):539–541.
2. Basu S, Rehkopf DH, Siddiqi A, et al. Health behaviors, mental health, and healthcare utilization among single mothers after welfare reforms in the 1990s. Am J Epidemiol. 2016;183(6):531–538.
3. Bertrand M, Duflo E, Mullainathan S. How much should we trust differences-in-differences estimates? Q J Econ. 2004;119(1):249–275.
4. Abadie A, Diamond A, Hainmueller J. Synthetic control methods for comparative case studies: estimating the effect of California's tobacco control program. J Am Stat Assoc. 2010;105(490):493–505.
