The “stainless steel” law of evaluation states that the better designed an outcome evaluation is, the less effective the intervention appears to be. This article explores how this law may be operating in relation to systematic reviews.
Research syntheses are essential for putting studies in their proper scientific context and are increasingly common in public health, education, crime, and social welfare. A key criticism of systematic reviews, however, is that they are often unable to provide specific guidance on which interventions are effective (or even ineffective); instead, they frequently conclude that too little evidence exists to answer the question. This problem has been recognised in reviews of healthcare interventions,1 and the electronic journal Bandolier recently lamented the absence of systematic reviews containing a solid take-home message.2 The problem is even more common in reviews of social and public health interventions, however, and this paper explains why.
- Systematic reviews are often criticised for being unable to provide specific guidance
- This is often because few rigorous outcome evaluations of the relevant interventions exist for them to include
- A “stainless steel” law of systematic reviews may also be operating—namely, the more rigorous the review, the less evidence there will be that the intervention is effective
- Methods for narrative reviews, and both narrative and meta-analytic approaches to synthesising observational data, need to be improved
- Uncertainty will often remain, but systematic reviews help us to acknowledge this and to map the areas of doubt