This Review provides abstracts from a meeting held at the London School of Hygiene and Tropical Medicine, on April 11–12, 2013, to celebrate the legacy of John Snow. They describe conventional and unconventional applications of epidemiological methods to problems ranging from diarrhoeal disease, mental health, cancer, and accident care, to education, poverty, financial networks, crime, and violence. Common themes appear throughout, including recognition of the importance of Snow’s example, the philosophical and practical implications of assessment of causality, and an emphasis on the evaluation of preventive, ameliorative, and curative interventions, in a wide variety of medical and societal examples. Almost all self-described epidemiologists nowadays work within the health arena, and this is the focus of most of the societies, journals, and courses that carry the name epidemiology. The range of applications evident in these contributions might encourage some of these institutions to consider broadening their remits. In so doing, they may contribute more directly to, and learn from, non-health-related areas that use the language and methods of epidemiology to address many important problems now facing the world.
Whether or not John Snow should be regarded as the father of epidemiology is open to debate, because some of his predecessors such as William Farr might also be credited with such a distinction. However, Snow was unquestionably an essential link in the causal chain that led epidemiology to what it is now. His ingenuity in placing cholera cases on a geographical grid, and in comparing cholera incidence according to sources of household water supply, constituted groundbreaking innovations in the development of the epidemiological approach.
An ongoing series of articles in the International Journal of Epidemiology is taking stock of the status of epidemiology in the five continents.1–5 As is the situation in many epidemiological studies, a key issue is the definition of a case. No accepted definition exists of an epidemiologist, since they include a range of professionals from highly trained academics to public health practitioners in local health authorities, with variable levels of training in epidemiology but who undertake essential surveillance and monitoring functions. Regardless of the definition, however, the series in the International Journal of Epidemiology is clear in showing that epidemiologists seem to be just as unequally distributed throughout the world as is the case for income, technology, and most other worldly goods.
In the past few decades, the term epidemiology has been largely associated with aetiological investigations and the use of increasingly sophisticated statistical methods. One might even say that epidemiologists are obsessed with finding causes. However, in global terms, this pursuit is only one of several applications of our discipline. In countries where most deaths occur, we know very little about the precise number of such deaths or the diseases that caused them, and even less about the frequency of major non-fatal illnesses. Epidemiological capacity is lowest in Africa and in south Asia, which are the world’s regions with the greatest disease burden.3,4 Not only are fewer epidemiologists trained there than in other regions, but poor working conditions and low salaries contribute to the epidemiological brain drain from these areas, similar to the situation for doctors and nurses. An epidemiological divide clearly exists.
Which causes, therefore, should epidemiologists be pursuing 200 years after Snow’s birth? At a time when the post-2015 global development agenda is being established, epidemiologists can certainly make an important contribution to the cause of sustainable health and development. We definitely need more and better aetiological studies, but I would argue that we are relatively well served with these, in comparison with the unmet need for high-quality health data in the world’s poorest and sickest regions. The Brazilian scientist Mauricio Rocha e Silva was once asked about snakebite statistics in Brazil. He replied that there were no reliable data: “where there are snakes, there are no statistics; and where there are statistics, there are no snakes”. In my view, the most pressing need for epidemiology in today’s unequal world is to develop measurement capacity in the regions where our skills are needed most, to support evidence-based health planning and policy making. Capacity to measure disease burden, monitor trends, establish determinants, and assess the effect of public health interventions and programmes is scarce in such settings. We urgently need more John Snows—epidemiologists who count cases, investigate why these occurred, and, rather than waiting for others to act, become directly involved in evidence-based public health actions.
In his writings on cholera, Snow revealed his thinking about how causal inference works. His articulate arguments were laid out meticulously and with great confidence, such as when he concluded “Whilst the presumed contamination of the water of the Broad Street pump with the evacuations of cholera patients affords an exact explanation of the fearful outbreak of cholera in St James’s parish, there is no other circumstance which offers any explanation at all, whatever hypothesis of the nature and cause of the malady be adopted.”6
This confidence might seem reasonable, especially in hindsight, but Snow’s logic was not uniformly airtight. In discounting the role of “offensive effluvia”, a then-popular theory to explain how cholera spread, he noted that “many places where offensive effluvia are very abundant have been visited very lightly by cholera, whilst the comparatively open and cleanly districts of Kennington and Clapham have suffered severely. If inquiry were made, a far closer connection would be found to exist between offensive effluvia and the itch, than between these effluvia and cholera; yet as the cause of itch is well known, we are quite aware that this connection is not one of cause and effect.”6
I am reluctant to pick at Snow’s brilliant work, but I cannot help but notice that this argument, although disarmingly strong, is premised on the invalid concept that any known cause of disease precludes other factors from being causes. That is to say, it presumes that only one cause exists for a disease, and if that is known, then to seek other causes is futile. However, by way of counterexample, epidemiologists can show that many organisms can cause pneumonia, and that the role of smoking in causing lung cancer does not preclude ionising radiation or asbestos from also causing lung cancer. Furthermore, the identification of proximal causes does not rule out causes further upstream, as illustrated by Davey-Smith’s discussion of socioeconomic factors in causing cholera in the 19th century.7
Every disease has several causes, in two senses: first, many causal pathways can exist that end in the disease, starting with distal antecedents and progressing towards proximal causes; and second, each causal pathway has multiple components that act in concert to produce the effect through that mechanism. To define causes is easier than to lay out the rules for causal inference, if any such rules actually exist. Hume, Russell, Popper, and others have explained that induction—the prediction of future events on the basis of past events—can be shown to be naive and illogical. As Russell put it, “Domestic animals expect food when they see the person who feeds them. We know that all these rather crude expectations of uniformity are liable to be misleading. The man who has fed the chicken every day throughout its life at last wrings its neck instead, showing that more refined views as to the uniformity of nature would have been useful to the chicken.”8
But do better methods of causal inference exist? Popper’s approach of conjecture and refutation was an attempt to solve Hume’s problem, although not to everyone’s satisfaction. Some people, notably Feyerabend, have argued against the existence of inferential rules, and others believe that quantification of uncertainty with bias analysis,9 including Bayesian methods, is the best of all the imperfect solutions. Today, the foundation of causal inference is not much stronger than it was in Snow’s time, but the weaknesses are more evident.
Up to one in five cancer cases worldwide are now known to be caused by infection, and mainly by only seven human viruses. However, new genomic technologies are revealing hundreds of previously unknown agents. How well does epidemiology do in terms of telling us whether any of these new agents actually cause cancer?
Two very distinct stories emerge from the two most recently discovered human cancer viruses: Kaposi’s sarcoma herpesvirus and Merkel cell polyomavirus. Kaposi’s sarcoma became infamous early in the AIDS pandemic when it struck previously healthy men who have sex with men. This cancer was a medical enigma, with many people suggesting that HIV itself was the trigger. However, by the late 1980s, careful epidemiology revealed that Kaposi’s sarcoma is caused by a second, then-undiscovered, infectious agent.10 This latent agent would manifest as a cancer when an infected person becomes immunosuppressed. The prevalence of the so-called Kaposi’s sarcoma agent was predicted to be high in Africa and parts of the Middle East and the Mediterranean, but relatively low in northern Europe and America. The agent was postulated to be sexually transmitted among men who have sex with men in developed countries but poorly transmitted through blood contact, such as through transfusion. We discovered this aetiological agent by isolating two small genomic fragments of Kaposi’s sarcoma herpesvirus in 1994, which allowed the development of tests to rapidly confirm the epidemiological predictions.11 Despite initial controversy, Hill’s criteria for Kaposi’s sarcoma herpesvirus causing Kaposi’s sarcoma were fulfilled quickly and, in epidemiological terms, Kaposi’s sarcoma herpesvirus is a well behaved virus. Nowadays, the virus is the most common cause of cancer in some parts of Africa.
More than a decade later, we found another potential cancer agent,12 this time for a highly aggressive but rare skin cancer called Merkel cell carcinoma. As with Kaposi’s sarcoma, Merkel cell carcinoma is more common in immunosuppressed people. Merkel cell polyomavirus is the first polyomavirus to be convincingly linked to human cancer. However, the virus is typically a harmless, near-universal member of our skin flora. In less than 5 years since its discovery, new diagnostics are being used in patients with Merkel cell carcinoma, and promising molecular therapies are in development to treat this previously intractable cancer.
Beyond provision of the first clues that Merkel cell carcinoma might be caused by infection, quantitative epidemiology provided little help to establish causality between Merkel cell polyomavirus and Merkel cell carcinoma. Molecular studies carried this burden by convincingly showing that patients with Merkel cell polyomavirus and Merkel cell carcinoma undergo stepwise molecular changes that include loss of immune surveillance, clonal viral integration into the host genome, and mutation to the virus itself.13 The possibility that cancers can arise from mutations to a commensal virus rather than the host cell could fundamentally change our searches for the causes of human cancers. However, not all Merkel cell carcinomas contain Merkel cell polyomavirus, and the virus is found in people without the cancer, which are violations of traditional notions for infectious causality. New probabilistic or Bayesian approaches to causality—taking into account molecular biology—are urgently needed to weigh up this information.
New technologies have uncovered dozens of previously unknown human polyomaviruses, papillomaviruses, and other agents, many of which are ubiquitous but could still hold the answers to long-sought causes for some chronic human diseases.14 To avoid becoming irrelevant, modern epidemiology must develop approaches to adequately assess molecular biological information when establishing causality.
John Snow recommended handwashing and personal hygiene for the prevention of cholera almost 160 years ago.15 At about the same time, Ignaz Semmelweis showed that childbed fever could be prevented by hand disinfection.16 Generations of parents have since attempted to instil handwashing habits into their children. 10 years ago, we reviewed the evidence and concluded that handwashing with soap could reduce the risk of diarrhoea by 47% and potentially save 1 million lives in developing countries.17 The fact that handwashing can prevent disease is common knowledge; even in rural areas of developing countries with low literacy rates, more than 90% of those surveyed are aware of the importance of handwashing with soap.18 However, actual practice remains low. In countries such as India, China, Ghana, Tanzania, Peru, and Kyrgyzstan, we have recorded rates of post-toilet handwashing with soap of below 20%. In UK motorway service station toilets, sensors showed that only 64% of women and 32% of men were washing their hands with soap.19 Knowledge about handwashing has clearly not been translated into actual practice.
Most of the major health problems faced worldwide need changes in behaviour—whether it be to eat less, exercise more, sleep under a bednet, or practise safe sex.20 Although more sophisticated approaches are gaining ground, most health campaigns are still based on the assumption that changes in knowledge will lead to the desired changes in behaviour. Realisation of the weaknesses of educational, belief-based approaches has led us to seek new solutions. By reviewing both recent behavioural science and the practice of marketing in multinational companies, we developed the Evo-Eco approach to behaviour change.
Behaviour is an evolved phenomenon that has been around for far longer than human beings and predates rational thought. Most human behaviour is a consequence of ancient reflexes and motives.21,22 Much of what we do is not under conscious control but is motivated unconsciously, or done reactively in response to cues.23,24 Brains direct bodies to produce behaviour that would have been adaptive in the physical, biological, and social environments of our ancestors,25 either as a matter of routine; in response to cues, context-based rules and roles; or in response to opportunities that present themselves to meet evolutionarily important needs.
With this approach, we designed and tested a campaign to introduce handwashing with soap in villages in Andhra Pradesh, India. The campaign used emotionally affecting appeals to nurture, disgust, and status, and cues designed to change habit. Furthermore, it attempted to redefine mothers’ roles in their social settings as so-called SuperMums (SuperAmmas). The campaign avoided explicit health messaging and banned mention of disease, doctors, or diarrhoea. A cluster-randomised controlled trial showed handwashing with soap to be 19% higher in the SuperAmma intervention villages than in the control groups (from a baseline of almost complete absence), and evidence of a change in perceived norms around handwashing behaviour was noted.
If the Evo-Eco approach is now proving to be useful beyond hygiene for both public health and for market development in the private sector, this is because it addresses the many other emotional, habitual, and situational factors that affect behaviour beyond knowledge and belief.
When a cholera outbreak led to a civil disturbance in Haiti 9 months after the 2010 Haitian earthquake, cholera once again took a place at the interface of health, water, and sanitation; international travel; and global politics. The first known cases of the outbreak were reported downriver from a UN Stabilization Mission base, and suspicion fell on the UN Stabilization Forces as the source of the infection.26 When third-generation real-time DNA sequencing linked the strains of Vibrio cholerae circulating in Haiti to endemic strains in Nepal, and a member of the Swedish Diplomatic Service publicly announced this link in a Swedish newspaper, the skirmish in Haiti intensified and resulted in gunfire and death.27
In November, 2011, the Institute for Justice and Democracy in Haiti filed a claim on behalf of 5000 Haitians who had recovered from cholera, demanding that the UN provide a national water and sanitation system, pay compensation for losses due to cholera, and make a public apology.28 The UN, however, invoked its legal immunity and announced its unwillingness to compensate, basing its decision on the 1947 convention that grants the UN immunity for its actions. The UN also noted an independent report that had concluded that a series of events in Haiti, not only an importation, had led to the cholera outbreak, and that the genetic sequences of the organism are not unique to Nepal, but are also found in other parts of south Asia.29–31
Genetic sequencing has become a powerful method in the investigation of outbreaks, and it has confirmed the 19th-century understanding that linked the global spread of cholera to trade routes, returning military forces, and migration. Genetic sequencing also confirms what John Snow recognised in 1848, when he linked the introduction of cholera into London to a seaman who had travelled to the city from Hamburg—that infectious diseases respect no borders. Snow’s investigation at that time led to his theory of contagion, and he concluded that wells and water pipes would have to be kept isolated from drains, cesspools, and sewers to stop transmission. 6 years later, after cholera returned to London, his careful fieldwork and two famous maps confirmed his theory, and led to measures to stop transmission of cholera.15,32
Snow would certainly be surprised to learn that 150 years after he removed the pump handle in Soho, a cholera outbreak continues under the same unsanitary conditions he observed in 19th century London, in a world where safe water and sanitation should be within the reach of all people. Although he could not have imagined the power of 21st century genetic sequencing in identification of the probable source of the cholera outbreak in Haiti, he would certainly not have been surprised to learn that infections spread globally. He might have been disappointed that some turn this information to shame and blame. He rose above that shame and blame to create an environment that could interrupt the transmission of enteric pathogens. When will Haiti and its partners do the same?
Throughout history, only infectious diseases and violence have killed up to tens of millions of people in epidemic form. However, in the past 200 years, we have made substantial progress in the management of infectious diseases, as a result of scientific understanding of their epidemiology, microbiology, and invisible forces of transmission. Yet our understanding of violence remains stuck in notions of bad people and morality, ideas that we abandoned long ago for infectious diseases once their biological underpinnings were understood.
A scientific view of violence reveals both population and individual characteristics that closely resemble those of infectious diseases.33 Population characteristics include the tendency for event clusters, epidemic curves, and capacity for spread. The clustering seen in maps of killings in US cities resembles maps of cholera in Bangladesh. Historical graphs showing outbreaks of killing in Rwanda resemble graphs of cholera in Somalia. Spread of violence is seen in street retaliations, gang wars, UK riots, and the recent crisis in Syria.
At the individual level, exposure to violence, whether through observation of violence or through direct victimisation, leads to an increased likelihood of perpetration of violence by the individual exposed.34 This pattern has been shown for many types of violence, including child abuse, domestic violence, community violence, and suicide. Furthermore, transmission occurs across these forms of violence—eg, exposure to child abuse increases the likelihood of not only child abuse but also community violence, and vice versa. Exposure to war leads to a greater likelihood of subsequent perpetration of violence in one’s own community or family. Something is transmitted across a range of syndromes. Epidemiological characteristics of infectious diseases are also present for violence, including exposure, dose–response associations, variable susceptibilities, incubation periods, clinical syndromes, dormancy, and relapse.
The invisible processes that underlie transmission of violence are not completely known but seem to include mirror-like cortical circuits that mediate observational learning (imitation), and dopamine, pain-mediating, and other pathways that facilitate following and group behaviours.33 The effects of trauma on the limbic system further accelerate the contagious process. The brain dysregulates in response to—and to cause—transmission of violence, similar to how the intestine dysregulates salt and water absorption, facilitating the transmission and further spread of cholera.
The epidemic control model for reduction of violence begins with basic epidemiological mapping, detection and interruption of potential events, cessation of spread through behaviour change, and modification of social expectations and norms.35 Entirely new categories of disease control workers include violence interrupters, behaviour change agents, and others who are selected, trained, and supported for each of these functions in a unified system. This method, now referred to as Cure Violence, has undergone three independent assessments and has shown up to 100% reductions in retaliations in the setting of a killing, a statistical association between interruptions and drops in killings, and 34–73% reductions in shootings and killings.36,37 This method helps to validate the theory, and offers a new way to reverse the age-old problem of violence, based on an epidemiological framework and biological understanding. Cure Violence is now working in 15 cities and seven countries.
Explicitly mathematical approaches to epidemiology date from Daniel Bernoulli’s evaluation, in 1760, of the efficacy of variolation against smallpox. However, most people acknowledge John Snow’s spot map analysis (itself effectively mathematical) of the cholera epidemic in 1854 as the birth of modern epidemiology.
Mathematical modellers (myself included) have, however, been rather slow to recognise the mathematically inconvenient fact that one cannot usually treat a population as homogeneous, with all people transmitting infection at a roughly average rate. In particular, in studies of gonorrhoea by Hethcote and Yorke,38 and later studies of HIV/AIDS in its early days, investigators found it impossible to explain what was going on without acknowledging substantial heterogeneities in patterns of sexual-partner acquisition, and the consequent disproportionate effect of so-called superspreaders. For any given value of the basic reproductive number, R0, such high heterogeneity in infectiousness implies that the superspreaders are most likely to become infected, and also most likely to transmit infection. Thus, the epidemiologically relevant factor is not the average number of partners per person, but rather the mean-square number divided by the mean.39 This fact has obvious implications for intervention—namely, to focus attention on the superspreaders. However, one further complication remains. A detailed analysis depends not only on knowledge of the distribution of partner numbers, but also on the contact patterns: are they associative (those with many partners interacting mainly with similar people), disassociative (the opposite: highly active people associating mainly with those who have few partners), or merely random? In the case of HIV/AIDS, if the contact patterns are associative, the epidemic will develop more quickly, but will burn out more quickly, and fewer people are likely to become infected. By contrast, if the contact patterns are disassociative, the epidemic will develop more slowly, but more people will be infected in the long run.
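The mean-square-over-mean point can be made concrete with a short numerical sketch. The two distributions below are invented purely for illustration (they are not drawn from the gonorrhoea or HIV/AIDS studies cited): both populations have the same mean of three partners per person, but one contains a small superspreader minority.

```python
# Illustrative sketch: why heterogeneity in contact rates matters.
# For a distribution of partner numbers k, the epidemiologically relevant
# quantity is the mean-square divided by the mean, E[k^2]/E[k], not E[k].
# The populations below are assumptions chosen only for illustration.

def effective_contact_rate(partners):
    """Mean-square number of partners divided by the mean."""
    n = len(partners)
    mean = sum(partners) / n
    mean_square = sum(k * k for k in partners) / n
    return mean_square / mean

# Homogeneous population: everyone has exactly 3 partners.
homogeneous = [3] * 10_000

# Heterogeneous population with the same mean of 3:
# 90% have 1 partner, 10% are "superspreaders" with 21 partners.
heterogeneous = [1] * 9_000 + [21] * 1_000

print(effective_contact_rate(homogeneous))    # → 3.0 (equals the mean)
print(effective_contact_rate(heterogeneous))  # → 15.0 (five times the mean)
```

With identical average behaviour, the heterogeneous population behaves epidemiologically as if contact rates were five times higher, which is why interventions targeted at the superspreaders have a disproportionate effect.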
Not distracted by mathematical convenience, Snow explicitly recognised heterogeneity in transmission. In doing so, he was guided by facts: brewery workers who drank beer rather than water had lower infection rates, whereas the opposite was noted in the washerwomen, who were at high risk from handling soiled linen, and in the lady in West Hampstead, who preferred water from the Broad Street pump. Paul Fine argues convincingly that “it was this recognition of heterogeneity which allowed him [Snow] to crack the problem”.
Notably, some of this work is finding applications in studies that might be called “stability and complexity in financial ecosystems”.40 In the build-up to the recent financial crises, an increasingly elaborate set of financial devices emerged (especially derivatives), intended to optimise returns to individual institutions with seemingly little risk. In essence, no attention was paid to the possible effects on the stability of the system as a whole. An increasing amount of work draws analogies with the dynamics of ecological food webs and with networks within which infectious diseases spread. For the latter analogy, one can view the dodgy financial devices as newly emerging infectious agents. Indeed, the recent rise in financial assets and the subsequent crash have rather precisely the same shape as the typical rise and fall of cases in an outbreak of measles or other infection. Such curves also characterise past financial bubbles, such as tulip mania or the South Sea Bubble of the early 18th century.
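The epidemic-curve shape invoked in this analogy can be reproduced with a minimal discrete-time SIR model. The sketch below uses hypothetical parameter values (not fitted to measles or to any financial series) simply to show the characteristic single-peaked rise and fall of new cases per time step.

```python
# Minimal discrete-time SIR sketch of the characteristic epidemic curve:
# incidence rises, peaks, and falls back towards zero.
# Parameter values (beta, gamma, population size) are illustrative only.

def sir_incidence(n=10_000, i0=10, beta=0.4, gamma=0.2, steps=120):
    """Return new infections per time step for a simple SIR epidemic."""
    s, i = n - i0, i0
    incidence = []
    for _ in range(steps):
        new_infections = beta * s * i / n  # mass-action transmission
        new_recoveries = gamma * i         # constant per-capita recovery
        s -= new_infections
        i += new_infections - new_recoveries
        incidence.append(new_infections)
    return incidence

curve = sir_incidence()
peak = max(curve)
# The peak falls in the interior of the series: a rise, then a fall.
print("peak at step", curve.index(peak))
```

The same bell-shaped curve, plotted against price or asset volume instead of case counts, is the shape being compared with tulip mania and the South Sea Bubble.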
One basic question, of course, is how to prevent a problem that arises in one bank from cascading through the entire banking system. Here, insights from medical epidemiology have been helpful, and indeed the word superspreaders is now used often.41 Unfortunately, studies of the interconnections between big and little banks suggest that these networks are disassociative.
Another aspect of the current financial crises is the way in which low confidence can spread throughout the system, leading to liquidity hoarding (shortening or calling-in loans) and thus amplifying problems.42 Here again, the analogy with medical epidemiology is clear. I find it interesting that the science of epidemiology is so far in advance of the allegedly so-clever banking system and its so-called quants.
In short, today’s society owes more than is often realised to the iconic Snow and other epidemiological pioneers.
The profile of the global burden of disease has changed profoundly since John Snow’s time. We now know that non-communicable diseases are the leading causes of death and disability, and that their proportionate contribution to this burden is rising inexorably in tandem with the epidemiological and demographical transitions in most countries. These very diverse health conditions include mental disorders, which are arguably the most neglected of all global health challenges. Depression is the leading mental health-related contributor to the burden of disease. A substantial amount of epidemiological evidence testifies to the high frequency of this disease (about 5% prevalence in the general population)43 and its strong, bidirectional association with social disadvantage.44 Equally well documented is the effect that this disorder has on functioning (eg, depression is at the top of the list of disorders ranked according to years lived with disability)45 and on other global health priorities (eg, about a quarter of the burden of child undernutrition in developing countries is attributable to maternal depression).46 On the positive side, a strong evidence base now exists in support of the efficacy of structured psychological treatments and antidepressants for the management of depression.47 Despite this compelling evidence, however, most people with depression worldwide do not optimally benefit from these treatments. This challenge is being addressed by trials of complex mental health interventions in routine care settings.
The MANAS project sought to improve the clinical and social outcomes of people with depression and anxiety disorders (the so-called common mental disorders) in primary care in India. The intervention and trial design had to address two formidable barriers: how to detect cases in the absence of a biomarker-based diagnostic test, and how to deliver the interventions in the absence of specialist skills in primary care personnel. The intervention addressed these barriers with a brief screening questionnaire, which was previously validated against a structured diagnostic interview, to detect cases; and a task-sharing model of care with a lay counsellor, recruited from the local community, who delivered the psychosocial components of the intervention (eg, psychoeducation, case management, and interpersonal therapy) in collaboration with the primary care doctor and under the supervision of a mental health specialist. Systematic efforts based on the Medical Research Council framework48 were made to design the intervention so that it was both acceptable to key stakeholders in the health system and feasible for delivery in the context of an absence of formal training in mental health care. For example, the lay counsellors actively addressed the social difficulties experienced by many patients and used local, rather than biomedical, labels and concepts. The intervention was assessed in a cluster randomised controlled trial in public and private primary care facilities. 
The results showed that in the public primary care facilities, compared with enhanced usual care (in which the primary care doctors received the results of the screening and treatment guidelines), the prevalence of common mental disorders decreased by 30% (risk ratio 0·70, 95% CI 0·53–0·92) and the prevalence of suicide attempt or plans over 12 months decreased by 36% (0·64, 0·42–0·98).49 Despite the additional resources needed, the approach was dominant from an economic perspective.50 In the private sector, the enhanced usual care facilities showed equivalent outcomes to the intervention facility.
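For readers unfamiliar with the statistics quoted, the sketch below shows how a risk ratio and its Wald 95% CI are computed from a 2×2 table. The counts are hypothetical, chosen only to yield a risk ratio near 0·70; they are not the MANAS trial data, and a real cluster-randomised analysis would also need to account for clustering.

```python
# Sketch: risk ratio with a Wald 95% CI computed on the log scale.
# The counts below are hypothetical and are NOT the MANAS trial data;
# a genuine cluster-randomised analysis would also adjust for clustering.
import math

def risk_ratio_ci(a, n1, b, n2, z=1.96):
    """Risk ratio for a/n1 vs b/n2, with a Wald 95% CI."""
    rr = (a / n1) / (b / n2)
    se_log = math.sqrt(1/a - 1/n1 + 1/b - 1/n2)
    lo = math.exp(math.log(rr) - z * se_log)
    hi = math.exp(math.log(rr) + z * se_log)
    return rr, lo, hi

# Hypothetical: 140/500 remaining cases in the intervention arm
# versus 200/500 in the control arm.
rr, lo, hi = risk_ratio_ci(140, 500, 200, 500)
print(round(rr, 2), round(lo, 2), round(hi, 2))  # → 0.7 0.59 0.84
```

A CI that excludes 1·0, as here and as in the trial's 0·53–0·92, is what licenses the claim of a statistically significant reduction in prevalence.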
Trials of innovations to improve access to mental health care in developing countries are now having an effect both on the research agenda of global mental health (eg, task-sharing is one of the central themes of the Grand Challenges in Global Mental Health51) and on national health policies (eg, the new District Mental Health Program of the Indian Government includes a new cadre of non-specialist mental health workers to deliver care at primary health care centres).
John Snow got the handle taken off the pump. He did not estimate the burden of disease due to cholera, insist that cholera was made a public health priority, or lobby for more funding for cholera research. Rather, he “respectfully requested an interview” with the Board of Governors of St James Parish, who, on hearing his appraisal of the aetiological factors, ordered that the handle be removed from the Broad Street pump.
For maximisation of health with the resources available, the important problems are the ones that we can do something about—ie, the ones for which we have cost-effective interventions. Removal of the handle from the pump was a highly cost-effective public health intervention. In 2010, the CRASH-2 trial52 showed that an inexpensive drug called tranexamic acid safely reduces mortality in bleeding trauma patients. When given within 3 h of injury, tranexamic acid reduces the risk of bleeding to death by 30%. Treatment is highly cost effective.53 Indeed, tranexamic acid is one of the cheapest existing ways to save a life. With whom does one “respectfully request an interview” to use this information to improve health?
For new medical knowledge to improve health, health workers and patients need to hear about it, the relevant treatment must be available, and it needs to be used appropriately. The usual method of communication of knowledge is by publication in a medical journal. The CRASH-2 results were published in The Lancet in 2010 and 2011.52,54 On the day of publication, a press conference was held and the results were covered by newspapers around the world.
However, new results are news for only a day. Even with extensive coverage, no more than a small proportion of the doctors who treat trauma worldwide will hear about it. Dissemination begins the next day, and the marketing power of the multinational drug industry is one of the most effective machines to change medical hearts and minds. Here, we encounter an obstacle. Although the knowledge that tranexamic acid can save tens of thousands of lives is new, the drug was invented in the 1960s and the profits from selling a short course of a generic drug for a problem that mostly affects poor people leave many product managers underwhelmed. Three years after publication, the investigators and road traffic injury victim groups are still lobbying government, regulators, and drug firms to license tranexamic acid for use in trauma.
Military medics moved quickly to include tranexamic acid in combat-care protocols. Military deaths are highly politically sensitive and when army medical chiefs say a drug should be used, it really is used. Early use of tranexamic acid by the military influenced civilian trauma care. The UK National Health Service (NHS) also embraced tranexamic acid use, and the proportion of trauma patients who received tranexamic acid is now being audited and used as a criterion for the reimbursement of trauma units in the NHS. Tranexamic acid was also included on the WHO list of essential medicines, although WHO has little capacity to ensure that bleeding trauma patients actually receive tranexamic acid. Sadly, some health professionals could be an impediment to the implementation of cost-effective treatments. The misguided view that disease burden should control which health-care activities are prioritised, rather than a comparison of costs and effects of different interventions, could cause substantial avoidable human suffering.55 What the world does not have is the policy equivalent of the Board of Governors of St James Parish—an organisation to ensure that cost-effective interventions are implemented wherever patients can benefit from them.
A fundamental distinction can be drawn between explanations of crime and explanations of criminality. Most criminological research and theory—including that from an epidemiological perspective—have focused on the second of these issues. Researchers have sought to identify historical factors—perinatal trauma, parenting and disciplinary style, child abuse and neglect, economic deprivation, adverse schooling experiences, association with antisocial peers, and so on—that affect why some individuals or groups are at an increased risk of developing criminal dispositions.56 The primary prevention of crime is conceptualised in terms of changing the developmental antecedents judged to have created the antisocial attitudes and personality characteristics that define the criminal offender.
However, criminality does not necessarily predict crime: people with criminal dispositions do not commit crime all the time, and crime is often committed by people who do not possess criminal dispositions. By contrast with most criminological approaches, crime science is concerned with why, when, where, and how crime occurs.57 Crime is not a random event but clusters around criminogenic environments; researchers in this field seek to uncover the proximal, or situational, factors that account for the patterned distribution of crime in time and space. Primary prevention of crime might be achieved by changing the aspects of the immediate environment that facilitate or encourage crime to occur at that particular time and place—a practice known as situational crime prevention.58 Pub violence, for example, peaks at particular times of the day and on specific days of the week, and is concentrated in a small number of establishments.59 Substantial reductions in pub violence at targeted locations can be achieved with strategies such as reduction of overcrowding, enforcement of server intervention, improved training for bouncers, staggered closing times, and introduction of shatterproof glasses.
John Snow is regarded as a seminal figure in the development of the crime science approach to crime prevention. Fundamentally, Snow’s commitment to collection of data and testing of hypotheses is the basis for the problem-solving, evidence-based method that defines crime science. More specifically, Snow pioneered the concept of geographical hotspot analysis. Just as Snow mapped the distribution of cholera cases around the infamous Broad Street pump, so too crime scientists map the distribution of crime around so-called environmental crime generators. The disabling of the pump is analogous to the situational crime prevention strategies advocated by crime scientists.
The most robust research design to establish effectiveness is widely accepted to be the randomised controlled trial.60 Although the randomised controlled trial is widely used in health-care research, one of its earliest 20th-century applications was in education. In 1931, Walters61 randomly allocated students in a university setting to a mentoring programme or a control situation and then measured academic outcomes. Later, in 1940, Lindquist62 described how the natural unit of allocation in school-based research was the class or school, rather than the individual child. Furthermore, he described the appropriate statistical approach for analysis of clustered data, an approach that was not widely used in health-care cluster trials until the early 1990s.
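Lindquist's point about clustered allocation has a simple quantitative core: when whole classes or schools are randomised, correlated outcomes within a cluster inflate the variance of the treatment estimate by the design effect 1 + (m − 1)ρ, where m is the cluster size and ρ the intraclass correlation. A minimal sketch, with illustrative numbers only:

```python
def design_effect(cluster_size, icc):
    """Variance inflation from randomising clusters rather than individuals."""
    return 1 + (cluster_size - 1) * icc

def effective_n(total_n, cluster_size, icc):
    """Sample size 'as if' individually randomised, after the design effect."""
    return total_n / design_effect(cluster_size, icc)

# Illustration: 20 classes of 30 pupils, intraclass correlation 0.2
# (educational outcomes often cluster strongly within classes)
deff = design_effect(30, 0.2)            # about 6.8
n_eff = effective_n(600, 30, 0.2)        # about 88 pupils' worth of information
print(deff, n_eff)
```

Analysing the 600 pupils as if independent, which is the error Lindquist warned against, would overstate the information in the trial by a factor of nearly seven in this example.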
Immense opportunities exist for rigorous educational randomised controlled trials to be undertaken, and the design and implementation of a trial in education are, in theory, quite straightforward. For example, potential schools can be readily identified to take part in the trial, and children are generally registered with a high degree of stability within the schools; data for every child are collected regularly and comprehensively, which enables interventions to be targeted carefully at those for whom the greatest effect is anticipated. Because pretests are strongly predictive of post-tests, many educational randomised controlled trials can use this predictive value to ensure that the trial has good statistical power to detect important educational differences in outcome between randomised groups. Teachers and schools assess children routinely, and children themselves are accustomed to completing tests and assessments, which enables treatment effects to be measured easily.
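The power gain from a predictive pretest can be quantified. Under the standard approximation, adjusting for a baseline covariate correlated r with the outcome reduces residual variance, and hence the required sample size, by a factor of about (1 − r²). A sketch under that assumption, using the usual normal-approximation sample-size formula for a two-arm trial (80% power, two-sided α = 0·05):

```python
import math

def n_per_arm(effect_size, power_z=0.84, alpha_z=1.96, pretest_r=0.0):
    """Approximate n per arm to detect a standardised effect size,
    optionally adjusting for a pretest correlated r with the outcome."""
    n_unadjusted = 2 * (alpha_z + power_z) ** 2 / effect_size ** 2
    return math.ceil(n_unadjusted * (1 - pretest_r ** 2))

print(n_per_arm(0.3))                 # no pretest adjustment
print(n_per_arm(0.3, pretest_r=0.7))  # a strong pretest roughly halves it
```

With a pretest-posttest correlation of 0·7, the required sample per arm falls by about half, which is the sense in which pretests buy statistical power in educational trials.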
Nevertheless, substantial challenges exist for trials undertaken in education. The design and execution of a randomised controlled trial needs skilled researchers. Unfortunately, especially in the UK, there is a dearth of experienced trial methodologists in education and inadequate capacity to undertake such trials. Consequently, a substantial proportion of published educational randomised controlled trials have flaws in their conduct, design, or analysis, which leads to uncertainty about their conclusions.63 Simple but common errors include: failure to have a sufficiently large sample size; failure to use independent randomisation; failure to do an intention-to-treat analysis; failure to undertake blinded testing and marking; failure to prespecify the main outcome; and failure to account for clustering in the analysis. Trial good practice, as recommended by CONSORT and other groups, should be followed in educational trials.60 Educational trials should be registered and reported according to modified CONSORT criteria.
A major opportunity exists to undertake randomised controlled trials in education in the existing political climate. Real public investment in rigorous educational randomised controlled trials began in earnest in the USA in 2002, and in the UK the Education Endowment Foundation has recently begun a programme of assessment of interventions targeting disadvantaged children with randomised controlled trial designs. However, this wave of enthusiasm must not be blighted by poorly designed and executed randomised controlled trials.
Epidemiological methods are having a large and largely unhelpful effect on statistical practice in economics. In the early 1990s, several important natural experiment studies were done in economics. Snow’s work, especially as described by David Freedman,64 was often explicitly acknowledged, and admired. In one study, investigators looked at the effects of increasing the minimum wage on employment by comparing fast food restaurants in New Jersey and Pennsylvania, USA.65 Another study used the Vietnam War draft lottery to compare the subsequent earnings of those whose random draw put them at a higher or lower risk of being drafted.66 In several studies, administrative discontinuities in education were used to estimate the effects of schooling on earnings. These reports were admired for their credible identification, by contrast with much previous work that had rested on challengeable assumptions. The movement paralleled the increasing use of non-parametric methods in applied statistics.
A requirement of these methods is that the natural experiment must mimic random allocation. Snow's assignment of households to water suppliers, for example, should not have been a disguised marker of income or locational differences, a requirement he understood well and documented effectively. In economics, the credibility of the natural experiments has worn thin with time. New Jersey is different from Pennsylvania in many ways. Conditional on a bad draw in the Vietnam lottery, the selection of those who actually went to war was systematic, not random. Parents work around educational discontinuities, and wealthier parents do so more successfully. Investigators tired of endless challenges to their natural experiments, and moved towards real experiments—randomised controlled trials. This was especially the case in development economics, where randomised controlled trials were regarded as the way to discover what works in economic development—an endeavour that held the promise of abolishing poverty worldwide.67
A well designed experiment is sometimes exactly what we need. However, experiments have their own problems. Many studies are underpowered, and when the underlying treatment effects are asymmetrically distributed—as is often the case when outcomes are financial—standard statistical methods are misleading, and we get contradictory and often implausible results, which are seemingly explained by what might be called just-so stories.68 Experimental samples are rarely randomly drawn from the population that would be treated by the hypothetical policy, and an unbiased, but noisy, estimate from a randomised controlled trial of a selected small sample can be less useful than a biased but precise estimate from an observational study of a larger and more representative sample. Average treatment effects for one group might not apply to another group, or even to subgroups or individuals within the experiment. Scaling up to the population will often bring general equilibrium or feedback effects that are shut off in the randomised controlled trial, even if the scaling up can be done in a way that is faithful to the experiment. Most seriously, the result from a randomised controlled trial is entirely silent about the mechanisms at work. Economics is concerned with the discovery and testing of mechanisms; without them, we have no chance to assess out-of-sample validity, to predict what might happen under scaling up, or indeed to learn.
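The claim that an unbiased but noisy estimate can be less useful than a biased but precise one has a standard mean-squared-error formulation: MSE = bias² + variance. A toy comparison, with entirely made-up numbers, just to show that neither design wins automatically:

```python
def mse(bias, se):
    """Mean squared error of an estimator: squared bias plus variance."""
    return bias ** 2 + se ** 2

# Small, selected experimental sample: unbiased but noisy
rct = mse(bias=0.0, se=0.5)
# Large, representative observational sample: biased but precise
obs = mse(bias=0.2, se=0.1)
print(rct, obs)  # the biased estimate has the lower MSE here
```

In this invented case the observational estimate is closer to the truth on average, despite its bias; with a smaller bias advantage or a larger experiment, the ranking reverses, which is exactly why the choice is an empirical one.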
Conflicts of interest
We declare that we have no conflicts of interest.