1.  Conflicts of Interest at Medical Journals: The Influence of Industry-Supported Randomised Trials on Journal Impact Factors and Revenue – Cohort Study 
PLoS Medicine  2010;7(10):e1000354.
Andreas Lundh and colleagues investigated the effect of publication of large industry-supported trials on citations and journal income, through reprint sales, in six general medical journals
Background
Transparency in reporting of conflict of interest is an increasingly important aspect of publication in medical journals. Publication of large industry-supported trials may generate many citations and journal income through reprint sales and thereby be a source of conflicts of interest for journals. We investigated industry-supported trials' influence on journal impact factors and revenue.
Methods and Findings
We sampled six major medical journals (Annals of Internal Medicine, Archives of Internal Medicine, BMJ, JAMA, The Lancet, and New England Journal of Medicine [NEJM]). For each journal, we identified randomised trials published in 1996–1997 and 2005–2006 using PubMed, and categorized the type of financial support. Using Web of Science, we investigated citations of industry-supported trials and the influence on journal impact factors over a ten-year period. We contacted journal editors and retrieved tax information on income from industry sources. The proportion of trials with sole industry support varied between journals, from 7% in BMJ to 32% in NEJM in 2005–2006. Industry-supported trials were more frequently cited than trials with other types of support, and omitting them from the impact factor calculation decreased journal impact factors. The decrease varied considerably between journals, from 1% for BMJ to 15% for NEJM in 2007. For the two journals disclosing data, income from the sales of reprints contributed 3% and 41% of the total income for BMJ and The Lancet in 2005–2006.
Conclusions
Publication of industry-supported trials was associated with an increase in journal impact factors. Sales of reprints may provide a substantial income. We suggest that journals disclose financial information in the same way that they require it from their authors, so that readers can assess the potential effect of different types of papers on journals' revenue and impact.
Please see later in the article for the Editors' Summary
Editors' Summary
Background
Medical journals publish many different types of papers that inform doctors about the latest research advances and the latest treatments for their patients. They publish articles that describe laboratory-based research into the causes of diseases and the identification of potential new drugs. They publish the results of early clinical trials in which a few patients are given a potential new drug to check its safety. Finally and most importantly, they publish the results of randomized controlled trials (RCTs). RCTs are studies in which large numbers of patients are randomly allocated to different treatments without the patient or the clinician knowing the allocation, and the efficacy of the various treatments is compared. RCTs are the best way of determining whether a new drug is effective and have to be completed before a drug can be marketed. Because RCTs are very expensive, they are often supported by drug companies. That is, drug companies provide grants or drugs for the trial or assist with data analysis and/or article preparation.
Why Was This Study Done?
Whenever a medical journal publishes an article, the article's authors have to declare any conflicts of interest such as financial gain from the paper's publication. Conflict of interest statements help readers assess papers—an author who owns the patent for a drug, for example, might put an unduly positive spin on his/her results. The experts who review papers for journals before publication provide similar conflict of interest statements. But what about the journal editors who ultimately decide which papers get published? The International Committee of Medical Journal Editors (ICMJE), which produces medical publishing guidelines, states that: “Editors who make final decisions about manuscripts must have no personal, professional, or financial involvement in any of the issues that they might judge.” However, the publication of industry-supported RCTs might create “indirect” conflicts of interest for journals by boosting the journal's impact factor (a measure of a journal's importance based on how often its articles are cited) and its income through the sale of reprints to drug companies. In this study, the researchers investigate whether the publication of industry-supported RCTs influences the impact factors and finances of six major medical journals.
What Did the Researchers Do and Find?
The researchers determined which RCTs published in the New England Journal of Medicine (NEJM), the British Medical Journal (BMJ), The Lancet, and three other major medical journals in 1996–1997 and 2005–2006 were supported wholly, partly, or not at all by industry. They then used the online academic citation index Web of Science to calculate an approximate impact factor for each journal for 1998 and 2007 and calculated the effect of the published RCTs on the impact factor. The proportion of RCTs with sole industry support varied between journals. Thus, 32% of the RCTs published in the NEJM during both two-year periods had industry support whereas only 7% of the RCTs published in the BMJ in 2005–2006 had industry support. Industry-supported trials were more frequently cited than RCTs with other types of support and omitting industry-supported RCTs from impact factor calculations decreased all the approximate journal impact factors. For example, omitting all RCTs with industry or mixed support decreased the 2007 BMJ and NEJM impact factors by 1% and 15%, respectively. Finally, the researchers asked each journal's editor about their journal's income from industry sources. For the BMJ and The Lancet, the only journals that provided this information, income from reprint sales was 3% and 41%, respectively, of total income in 2005–2006.
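The "approximate impact factor" arithmetic described above can be sketched in a few lines. This is an illustrative reconstruction under simplified assumptions, not the authors' actual method: the citation counts are invented, and the simple ratio below glosses over the details of the real two-year citation window.

```python
# Hypothetical sketch of recomputing a journal impact factor with and without
# industry-supported trials. An impact factor is roughly: citations received
# this year to items published in the two prior years, divided by the number
# of citable items published in those years. All numbers below are invented.

def approx_impact_factor(articles):
    """articles: list of (citations_received, industry_supported) tuples."""
    return sum(citations for citations, _ in articles) / len(articles)

# A toy journal: two heavily cited industry-supported trials and three others.
journal = [(120, True), (80, True), (30, False), (25, False), (10, False)]

full_if = approx_impact_factor(journal)
non_industry = [a for a in journal if not a[1]]
reduced_if = approx_impact_factor(non_industry)
pct_drop = 100 * (full_if - reduced_if) / full_if
```

Because industry-supported trials attract disproportionately many citations, removing them lowers the ratio; the study found drops ranging from 1% (BMJ) to 15% (NEJM).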
What Do These Findings Mean?
These findings show that the publication of industry-supported RCTs was associated with an increase in the approximate impact factors of these six major medical journals. Because these journals publish numerous RCTs, this result may not be generalizable to other journals. These findings also indicate that income from reprint sales can be a substantial proportion of a journal's total income. Importantly, these findings do not imply that the decisions of editors are affected by the possibility that the publication of an industry-supported trial might improve their journal's impact factor or income. Nevertheless, the researchers suggest, journals should live up to the same principles related to conflicts of interest as those that they require from their authors and should routinely disclose information on the source and amount of income that they receive.
Additional Information
Please access these Web sites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.1000354.
This study is further discussed in a PLoS Medicine Perspective by Harvey Marcovitch
The International Committee of Medical Journal Editors provides information about the publication of medical research, including conflicts of interest
The World Association of Medical Editors also provides information on conflicts of interest in medical journals
Information about impact factors is provided by Thomson Reuters, a provider of intelligent information for businesses and professionals; Thomson Reuters also runs Web of Science
doi:10.1371/journal.pmed.1000354
PMCID: PMC2964336  PMID: 21048986
2.  Anatomy of the Epidemiological Literature on the 2003 SARS Outbreaks in Hong Kong and Toronto: A Time-Stratified Review 
PLoS Medicine  2010;7(5):e1000272.
Weijia Xing and colleagues reviewed the published epidemiological literature on SARS and show that less than a quarter of papers were published during the epidemic itself, suggesting that the research published lagged substantially behind the need for it.
Background
Outbreaks of emerging infectious diseases, especially those of a global nature, require rapid epidemiological analysis and information dissemination. The final products of those activities usually comprise internal memoranda and briefs within public health authorities and original research published in peer-reviewed journals. Using the 2003 severe acute respiratory syndrome (SARS) epidemic as an example, we conducted a comprehensive time-stratified review of the published literature to describe the different types of epidemiological outputs.
Methods and Findings
We identified and analyzed all published articles on the epidemiology of the SARS outbreak in Hong Kong or Toronto. The analysis was stratified by study design, research domain, data collection, and analytical technique. We compared the SARS-case and matched-control non-SARS articles published according to the timeline of submission, acceptance, and publication. The impact factors of the publishing journals were examined according to the time of publication of SARS articles, and the numbers of citations received by SARS-case and matched-control articles submitted during and after the epidemic were compared. Descriptive, analytical, theoretical, and experimental epidemiology concerned, respectively, 54%, 30%, 11%, and 6% of the studies. Only 22% of the studies were submitted, 8% accepted, and 7% published during the epidemic. The submission-to-acceptance and acceptance-to-publication intervals of the SARS articles submitted during the epidemic period were significantly shorter than the corresponding intervals of matched-control non-SARS articles published in the same journal issues (p<0.001 and p<0.01, respectively). The differences of median submission-to-acceptance intervals and median acceptance-to-publication intervals between SARS articles and their corresponding control articles were 106.5 d (95% confidence interval [CI] 55.0–140.1) and 63.5 d (95% CI 18.0–94.1), respectively. The median numbers of citations of the SARS articles submitted during the epidemic and over the 2 y thereafter were 17 (interquartile range [IQR] 8.0–52.0) and 8 (IQR 3.2–21.8), respectively, significantly higher than the median numbers of control article citations (15, IQR 8.5–16.5, p<0.05, and 7, IQR 3.0–12.0, p<0.01, respectively).
Conclusions
A majority of the epidemiological articles on SARS were submitted after the epidemic had ended, although the corresponding studies had relevance to public health authorities during the epidemic. To minimize the lag between research and the exigency of public health practice in the future, researchers should consider adopting common, predefined protocols and ready-to-use instruments to improve timeliness, and thus, relevance, in addition to standardizing comparability across studies. To facilitate information dissemination, journal managers should reengineer their fast-track channels, which should be adapted to the purpose of an emerging outbreak, taking into account the requirement of high standards of quality for scientific journals and competition with other online resources.
Please see later in the article for the Editors' Summary
Editors' Summary
Background
Every now and then, a new infectious disease appears in a human population or an old disease becomes much more common or more geographically widespread. Recently, several such “emerging infectious diseases” have become major public health problems. For example, HIV/AIDS, hepatitis C, and severe acute respiratory syndrome (SARS) have all emerged in the past three decades and spread rapidly round the world. When an outbreak (epidemic) of an emerging infectious disease occurs, epidemiologists (scientists who study the causes, distribution, and control of diseases in populations) swing into action, collecting and analyzing data on the new threat to human health. Epidemiological studies are rapidly launched to identify the causative agent of the new disease, to investigate how the disease spreads, to define diagnostic criteria for the disease, to evaluate potential treatments, and to devise ways to control the disease's spread. Public health officials then use the results of these studies to bring the epidemic under control.
Why Was This Study Done?
Clearly, epidemics of emerging infectious diseases can only be controlled rapidly and effectively if the results of epidemiological studies are made widely available in a timely manner. Public health bulletins (for example, the Morbidity and Mortality Weekly Report from the US Centers for Disease Control and Prevention) are an important way of disseminating information, as is the publication of original research in peer-reviewed academic journals. But how timely is this second dissemination route? Submission, peer-review, revision, re-review, acceptance, and publication of a piece of academic research can be a long process, the speed of which is affected by the responses of both authors and journals. In this study, the researchers analyze how the results of academic epidemiological research are submitted and published in journals during and after an emerging infectious disease epidemic using the 2003 SARS epidemic as an example. The first case of SARS was identified in Asia in February 2003, and the disease rapidly spread around the world. 8,098 people became ill with SARS and 774 died before the epidemic was halted in July 2003.
What Did the Researchers Do and Find?
The researchers identified more than 300 journal articles covering epidemiological research into the SARS outbreak in Hong Kong, China, and Toronto, Canada (two cities strongly affected by the epidemic) that were published online or in print between January 1, 2003 and July 31, 2007. The researchers' analysis of these articles shows that more than half of them were descriptive epidemiological studies, investigations that focused on describing the distribution of SARS; a third were analytical epidemiological studies that tried to discover the cause of SARS. Overall, 22% of the journal articles were submitted for publication during the epidemic. Only 8% of the articles were accepted for publication and only 7% were actually published during the epidemic. The median submission-to-acceptance and acceptance-to-publication intervals for SARS articles submitted during the epidemic were 55 and 77.5 days, respectively, much shorter intervals than those for non-SARS articles published in the same journal issues. After the epidemic was over, the submission-to-acceptance and acceptance-to-publication intervals for SARS articles were similar to those of non-SARS articles.
What Do These Findings Mean?
These findings show that, although the academic response to the SARS epidemic was rapid, most articles on the epidemiology of SARS were published after the epidemic was over even though SARS was a major threat to public health. Possible reasons for this publication delay include the time taken by authors to prepare and undertake their studies, to write and submit their papers, and, possibly, their tendency to first submit their results to high profile journals. The time then taken by journals to review the studies, make decisions about publication, and complete the publication process might also have delayed matters. To minimize future delays in the publication of epidemiological research on emerging infectious diseases, epidemiologists could adopt common, predefined protocols and ready-to-use instruments, which would improve timeliness and ensure comparability across studies, suggest the researchers. Journals, in turn, could improve their fast-track procedures and could consider setting up online sections that could be activated when an emerging infectious disease outbreak occurred. Finally, journals could consider altering their review system to speed up the publication process provided the quality of the final published articles was not compromised.
Additional Information
Please access these Web sites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.1000272.
The US National Institute of Allergy and Infectious Diseases provides information on emerging infectious diseases
The US Centers for Disease Control and Prevention also provides information about emerging infectious diseases, including links to other resources, and information on SARS
Wikipedia has a page on epidemiology (note that Wikipedia is a free online encyclopedia that anyone can edit; available in several languages)
The World Health Organization has information on SARS (in several languages)
doi:10.1371/journal.pmed.1000272
PMCID: PMC2864302  PMID: 20454570
3.  International Monetary Fund Programs and Tuberculosis Outcomes in Post-Communist Countries 
PLoS Medicine  2008;5(7):e143.
Background
Previous studies have indicated that International Monetary Fund (IMF) economic programs have influenced health-care infrastructure in recipient countries. The post-communist Eastern European and former Soviet Union countries experienced relatively similar political and economic changes over the past two decades, and participated in IMF programs of varying size and duration. We empirically examine how IMF programs related to changes in tuberculosis incidence, prevalence, and mortality rates among these countries.
Methods and Findings
We performed multivariate regression of two decades of tuberculosis incidence, prevalence, and mortality data against variables potentially influencing tuberculosis program outcomes in 21 post-communist countries for which comparative data are available. After correcting for confounding variables, as well as potential detection, selection, and ecological biases, we observed that participating in an IMF program was associated with increased tuberculosis incidence, prevalence, and mortality rates by 13.9%, 13.2%, and 16.6%, respectively. Each additional year of participation in an IMF program was associated with increased tuberculosis mortality rates by 4.1%, and each 1% increase in IMF lending was associated with increased tuberculosis mortality rates by 0.9%. On the other hand, we estimated a decrease in tuberculosis mortality rates of 30.7% (95% confidence interval, 18.3% to 49.5%) associated with exiting the IMF programs. IMF lending did not appear to be a response to worsened health outcomes; rather, it appeared to be a precipitant of such outcomes (Granger- and Sims-causality tests), even after controlling for potential political, socioeconomic, demographic, and health-related confounders. In contrast, non-IMF lending programs were connected with decreased tuberculosis mortality rates (−7.6%, 95% confidence interval, −1.0% to −14.1%). The associations observed between tuberculosis mortality and IMF programs were similar to those observed when evaluating the impact of IMF programs on tuberculosis incidence and prevalence. While IMF programs were connected with large reductions in generalized government expenditures, tuberculosis program coverage, and the number of physicians per capita, non-IMF lending programs were not significantly associated with these variables.
Conclusions
IMF economic reform programs are associated with significantly worsened tuberculosis incidence, prevalence, and mortality rates in post-communist Eastern European and former Soviet countries, independent of other political, socioeconomic, demographic, and health changes in these countries. Future research should attempt to examine how IMF programs may have related to other non-tuberculosis–related health outcomes.
David Stuckler and colleagues show that, in Eastern European and former Soviet Union countries, participation in International Monetary Fund economic programs has been associated with higher mortality rates from tuberculosis.
Editors' Summary
Background.
Tuberculosis—a contagious, bacterial infection—has killed large numbers of people throughout human history. Over the last century improvements in public health began to reduce the incidence (the number of new cases in the population in a given time), prevalence (the number of infected people), and mortality rate (number of people dying each year) of tuberculosis in several countries. Many authorities thought that tuberculosis had become a disease of the past. It has become increasingly clear, however, that regions impacted by health and economic changes since the 1980s have continued to face a high and sometimes increasing burden of tuberculosis. In order to boost funding and resources for combating the global tuberculosis problem, the United Nations has set a target of halting and reversing increases in global tuberculosis incidence by 2015 as one of its Millennium Development Goals. Yet one region of the world—Eastern Europe and the former Soviet Union—is not on track to achieve this goal.
Why Was This Study Done?
To achieve these targets, the World Health Organization (WHO) and tuberculosis physicians' groups promote the expansion of detection and treatment efforts against tuberculosis. But these efforts depend on the maintenance of good health infrastructure to fund and support health-care workers, clinics, and hospitals. In countries with significant financial limitations, the development and maintenance of these health system resources are often dependent upon international donations and financial lending. The International Monetary Fund (IMF) is a major source of capital for resource-deprived countries, but it is unclear whether its economic reform programs have positive or negative effects on health and health infrastructures in recipient countries. There are indications, for example, that recipient countries sometimes reduce their public-health spending to meet the economic targets set by the IMF as conditions for its loans. In this study, the researchers examine the relationship between participating in IMF lending programs of varying sizes and durations by 21 post-communist Central and Eastern European and former Soviet Union countries and changes in tuberculosis incidence, prevalence, and mortality in these countries during the past two decades.
What Did the Researchers Do and Find?
To examine how participation in IMF lending programs affected tuberculosis control in these countries, the researchers developed a series of statistical models that take into account other variables (for example, directly observed therapy programs, HIV rates, military conflict, and urbanization) that might have affected tuberculosis control. Participation in an IMF program, they report, was associated with increases in tuberculosis incidence, prevalence, and mortality rate of about 15%, which corresponds to hundreds of thousands of new cases and deaths in this region. Each additional year of participation increased tuberculosis mortality rates by 4.1%; increases in the size of the IMF loan also corresponded to greater tuberculosis mortality rates. Conversely, when countries left IMF programs, tuberculosis mortality rates dropped by roughly one-third. The authors' further statistical tests indicated that IMF lending was not a positive response to worsened tuberculosis control but precipitated this adverse outcome and that lending from non-IMF sources of funding was associated with decreases in tuberculosis mortality rates. Consistent with these results, IMF (but not non-IMF) programs were associated with reductions in government expenditures, tuberculosis program coverage, and the number of doctors per capita in each country. These findings associated with mortality were also found when analyzing tuberculosis incidence and prevalence data.
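The kind of multivariate regression described above can be sketched as follows. This is a hypothetical toy, not the authors' model: the data are synthetic, the confounder set is reduced to a single variable, and the effect size is planted at roughly the reported magnitude. With log mortality as the outcome, the exponentiated coefficient on the IMF indicator reads as an approximate percent change.

```python
import numpy as np

# Synthetic country-year panel: regress log TB mortality on an IMF-program
# indicator plus one confounder. All data and parameters are invented.
rng = np.random.default_rng(0)
n = 200
imf = rng.integers(0, 2, n).astype(float)   # 1 = year spent under an IMF program
gdp = rng.normal(0.0, 1.0, n)               # standardized confounder (e.g. log GDP)
log_mortality = 3.0 + 0.15 * imf - 0.30 * gdp + rng.normal(0.0, 0.05, n)

# Ordinary least squares with an intercept column.
X = np.column_stack([np.ones(n), imf, gdp])
beta, *_ = np.linalg.lstsq(X, log_mortality, rcond=None)

# Exponentiate the IMF coefficient to express it as a percent change.
pct_change = 100 * (np.exp(beta[1]) - 1)
```

Here `pct_change` recovers the planted ~15% association after adjusting for the confounder; the study's actual models additionally handled detection, selection, and ecological biases and tested the direction of causation.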
What Do These Findings Mean?
These findings indicate that IMF economic programs are associated with significantly worsened tuberculosis control in post-communist Central and Eastern European and former Soviet Union countries, independent of other political, health, and economic changes in these countries. Further research is needed to discover exactly which aspects of the IMF programs were associated with the adverse effects on tuberculosis control reported here and to see whether IMF loans have similar effects on tuberculosis control in other countries or on other non–tuberculosis-related health outcomes. For now, these results challenge the proposition that the forms of economic development promoted by the IMF necessarily improve public health. In particular, they put the onus on the IMF to critically evaluate the direct and indirect effects of its economic programs on public health.
Additional Information.
Please access these Web sites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.0050143.
This study is further discussed in a PLoS Medicine Perspective by Murray and King
The US National Institute of Allergy and Infectious Diseases provides information on all aspects of tuberculosis, including a brief history of the disease
The US Centers for Disease Control and Prevention provide several fact sheets and other information resources about tuberculosis
The World Health Organization provides information (in several languages) on efforts to reduce the global burden of tuberculosis, including information on the Stop TB Strategy and the 2008 report on global tuberculosis control—surveillance, planning, financing
Detailed information about the International Monetary Fund is available on its Web site
An article that asks “Does the IMF constrain health spending in poor countries?” (with a link to a response from the IMF) is provided by the Center for Global Development
doi:10.1371/journal.pmed.0050143
PMCID: PMC2488179  PMID: 18651786
4.  Modelling the Impact of Artemisinin Combination Therapy and Long-Acting Treatments on Malaria Transmission Intensity 
PLoS Medicine  2008;5(11):e226.
Background
Artemisinin derivatives used in recently introduced combination therapies (ACTs) for Plasmodium falciparum malaria significantly lower patient infectiousness and have the potential to reduce population-level transmission of the parasite. With the increased interest in malaria elimination, understanding the impact on transmission of ACT and other antimalarial drugs with different pharmacodynamics becomes a key issue. This study estimates the reduction in transmission that may be achieved by introducing different types of treatment for symptomatic P. falciparum malaria in endemic areas.
Methods and Findings
We developed a mathematical model to predict the potential impact on transmission outcomes of introducing ACT as first-line treatment for uncomplicated malaria in six areas of varying transmission intensity in Tanzania. We also estimated the impact that could be achieved by antimalarials with different efficacy, prophylactic time, and gametocytocidal effects. Rates of treatment, asymptomatic infection, and symptomatic infection in the six study areas were estimated using the model together with data from a cross-sectional survey of 5,667 individuals conducted prior to policy change from sulfadoxine-pyrimethamine to ACT. The effects of ACT and other drug types on gametocytaemia and infectiousness to mosquitoes were independently estimated from clinical trial data. Predicted percentage reductions in prevalence of infection and incidence of clinical episodes achieved by ACT were highest in the areas with low initial transmission. A 53% reduction in prevalence of infection was seen if 100% of current treatment was switched to ACT in the area where baseline slide-prevalence of parasitaemia was lowest (3.7%), compared to an 11% reduction in the highest-transmission setting (baseline slide prevalence = 57.1%). Estimated percentage reductions in incidence of clinical episodes were similar. The absolute size of the public health impact, however, was greater in the highest-transmission area, with 54 clinical episodes per 100 persons per year averted compared to five per 100 persons per year in the lowest-transmission area. High coverage was important. Reducing presumptive treatment through improved diagnosis substantially reduced the number of treatment courses required per clinical episode averted in the lower-transmission settings although there was some loss of overall impact on transmission. 
An efficacious antimalarial regimen with no specific gametocytocidal properties but a long prophylactic time was estimated to be more effective at reducing transmission than a short-acting ACT in the highest-transmission setting.
Conclusions
Our results suggest that ACTs have the potential for transmission reductions approaching those achieved by insecticide-treated nets in lower-transmission settings. ACT partner drugs and nonartemisinin regimens with longer prophylactic times could result in a larger impact in higher-transmission settings, although their long term benefit must be evaluated in relation to the risk of development of parasite resistance.
Lucy Okell and colleagues predict the impact on transmission outcomes of ACT as first-line treatment for uncomplicated malaria in six areas of varying transmission intensity in Tanzania.
Editors' Summary
Background.
Plasmodium falciparum, a mosquito-borne parasite that causes malaria, kills nearly one million people every year. When an infected mosquito bites a person, it injects a life stage of the parasite called sporozoites, which invade human liver cells where they initially develop. The liver cells then release merozoites (another life stage of the parasite). These invade red blood cells where they multiply before bursting out and infecting more red blood cells, which can cause fever and damage vital organs. Some merozoites develop into gametocytes, which infect mosquitoes when they take a blood meal. In the mosquito, the gametocytes give rise to sporozoites, thus completing the parasite's life cycle. Because malaria parasites are now resistant to many antimalarial drugs, the preferred first-line treatment for P. falciparum malaria in most countries is artemisinin combination therapy (ACT). Artemisinin derivatives are fast-acting antimalarial agents that, unlike previous first-line treatments, reduce the number of gametocytes in patients' blood, making them less infectious to mosquitoes, and therefore have more potential to reduce malaria transmission. These compounds are used in combination with another antimalarial drug to reduce the chances of P. falciparum becoming resistant to either drug.
Why Was This Study Done?
Because malaria poses such a large global public-health burden, there is considerable national and international interest in eliminating it or at least minimizing its transmission. Malaria control agencies need to know how to choose between available types of ACT as well as other antimalarials so as to not only cure malaria illness but also prevent transmission as much as possible. The financial resources available to control malaria are limited, so for planning integrated transmission reduction programs it is important for policy makers to know what contribution their treatment policy could make in addition to other control strategies (for example, the provision of insecticide-treated bed nets to reduce mosquito bites) to reducing transmission. Furthermore, in areas with high levels of malaria, it is uncertain to what extent treatment can reduce transmission since many infected people are immune and do not suffer symptoms or seek health care, but continue to transmit to others. In this study, the researchers develop a mathematical model to predict the impact on malaria transmission of the introduction of ACT and alternative first-line treatments for malaria in six regions of Tanzania with different levels of malaria transmission.
What Did the Researchers Do and Find?
The researchers developed a “deterministic compartmental” model of malaria transmission in human and mosquito populations and included numerous variables likely to affect malaria transmission (variables were based on data collected in Tanzania just before the introduction of ACT). They then used the model to estimate the impact on malaria transmission of introducing ACT or other antimalarial drugs with different properties. The model predicted that the percentage reduction in the prevalence of infection (the fraction of the population with malaria) and the incidence of infection (the number of new cases in the population per year) associated with a 100% switch to ACT would be greater in areas with low initial transmission rates than in areas with high transmission rates. For example, in the area with the lowest initial transmission rates, the model predicted that the prevalence of infection would drop by 53%, but in the area with the highest initial transmission rate, the drop would be only 11%. However, because more people get malaria in high-transmission areas, the total number of malaria illness episodes prevented would be ten times higher in the area with highest transmission than in the area with lowest transmission. The model also predicted that, in areas with high transmission, long-acting treatments which protect patients from reinfection would reduce transmission more effectively than some common currently used ACT regimens which are gametocyte-killing but short-acting. Treatments which were both long-acting and gametocyte-killing were predicted to have the biggest impact across all settings.
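A minimal version of such a deterministic compartmental model can be sketched as below. This is a hypothetical Ross-Macdonald-style toy, far simpler than the study's model (no immunity, no drug pharmacodynamics, and invented parameter values); it only illustrates how faster parasite clearance in humans feeds back into lower equilibrium prevalence.

```python
# Toy deterministic compartmental model of malaria transmission: x is the
# infected fraction of humans, y the infected fraction of mosquitoes. All
# parameter values are invented for illustration.

def equilibrium_prevalence(extra_clearance, days=2000, dt=0.5):
    m, a, b, c = 10.0, 0.3, 0.05, 0.5   # mosquitoes/human, biting rate, infectivities
    r = 0.01 + extra_clearance          # human recovery rate, boosted by treatment
    g = 0.1                             # mosquito death rate
    x, y = 0.4, 0.05                    # initial infected fractions
    for _ in range(int(days / dt)):     # forward-Euler integration to equilibrium
        dx = m * a * b * y * (1 - x) - r * x
        dy = a * c * x * (1 - y) - g * y
        x += dx * dt
        y += dy * dt
    return x

baseline = equilibrium_prevalence(0.00)   # clearance rate before policy change
with_act = equilibrium_prevalence(0.02)   # faster clearance, e.g. wider ACT use
pct_reduction = 100 * (baseline - with_act) / baseline
```

In this toy, as in the study, the same absolute improvement in clearance yields only a modest relative drop in a high-transmission setting; lowering the mosquito-to-human ratio `m` (a lower-transmission setting) makes the relative drop much larger.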
What Do These Findings Mean?
As with all mathematical models, the accuracy of the predictions made by this model depends on the many assumptions incorporated into it. In addition, because data from Tanzania were fed into the model, its predictions are to some extent specific to that area. Nevertheless, the Tanzanian setting is typical of malaria-affected sub-Saharan areas, and the authors show that varying their assumptions, and the data fed into the model, within realistic ranges does not in most cases substantially change their overall conclusions. The findings in this study suggest that in low-transmission areas, provided ACT is widely used, it may reduce malaria transmission as effectively as the widespread use of insecticide-treated bed nets. The findings also suggest that longer-acting regimens, with or without artemisinin components, might be a good way to reduce transmission in high-transmission areas, provided the development of parasite resistance can be avoided. More generally, these findings suggest that, to achieve the greatest impact on malaria transmission, public-health officials need to consider the properties of antimalarial drugs together with the level of transmission in the area when designing treatment policies.
Additional Information.
Please access these Web sites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.0050226.
This study is further discussed in a PLoS Medicine Perspective by Maciej Boni and colleagues
The MedlinePlus encyclopedia contains a page on malaria (in English and Spanish)
Information is available from the World Health Organization on malaria (in several languages)
The US Centers for Disease Control and Prevention provides information on malaria (in English and Spanish)
Information is available from the Roll Back Malaria Partnership on its approach to the global control of malaria, on artemisinin-based combination therapies, and on malaria in Tanzania
doi:10.1371/journal.pmed.0050226
PMCID: PMC2586356  PMID: 19067479
5.  Estimating the Global Clinical Burden of Plasmodium falciparum Malaria in 2007 
PLoS Medicine  2010;7(6):e1000290.
Simon Hay and colleagues derive contemporary estimates of the global clinical burden of Plasmodium falciparum malaria (the deadliest form of malaria) using cartography-based techniques.
Background
The epidemiology of malaria makes surveillance-based methods of estimating its disease burden problematic. Cartographic approaches have provided alternative malaria burden estimates, but there remains widespread misunderstanding about their derivation and fidelity. The aims of this study are to present a new cartographic technique and its application for deriving global clinical burden estimates of Plasmodium falciparum malaria for 2007, and to compare these estimates and their likely precision with those derived under existing surveillance-based approaches.
Methods and Findings
In seven of the 87 countries endemic for P. falciparum malaria, the health reporting infrastructure was deemed sufficiently rigorous for case reports to be used verbatim. In the remaining countries, the mapped extent of unstable and stable P. falciparum malaria transmission was first determined. Estimates of the plausible incidence range of clinical cases were then calculated within the spatial limits of unstable transmission. A modelled relationship between clinical incidence and prevalence was used, together with new maps of P. falciparum malaria endemicity, to estimate incidence in areas of stable transmission, and geostatistical joint simulation was used to quantify uncertainty in these estimates at national, regional, and global scales.
Combining these estimates for all areas of transmission risk resulted in 451 million (95% credible interval 349–552 million) clinical cases of P. falciparum malaria in 2007. Almost all of this burden of morbidity occurred in areas of stable transmission. More than half of all estimated P. falciparum clinical cases and associated uncertainty occurred in India, Nigeria, the Democratic Republic of the Congo (DRC), and Myanmar (Burma), where 1.405 billion people are at risk.
Recent surveillance-based methods of burden estimation were then reviewed and discrepancies in national estimates explored. When these cartographically derived national estimates were ranked according to their relative uncertainty and replaced by surveillance-based estimates in the least certain half, 98% of the global clinical burden continued to be estimated by cartographic techniques.
Conclusions and Significance
Cartographic approaches to burden estimation provide a globally consistent measure of malaria morbidity of known fidelity, and they represent the only plausible method in those malaria-endemic countries with nonfunctional national surveillance. Unacceptable uncertainty in the clinical burden of malaria in only four countries confounds our ability to evaluate needs and monitor progress toward international targets for malaria control at the global scale. National prevalence surveys in each nation would reduce this uncertainty profoundly. Opportunities for further reducing uncertainty in clinical burden estimates by hybridizing alternative burden estimation procedures are also evaluated.
Please see later in the article for the Editors' Summary
Editors' Summary
Background
Malaria is a major global public-health problem. Nearly half the world's population is at risk of malaria, and Plasmodium falciparum malaria—the deadliest form of the disease—causes about one million deaths each year. Malaria is a parasitic disease that is transmitted to people through the bite of an infected mosquito. These insects inject a parasitic form known as sporozoites into people; the sporozoites replicate briefly inside liver cells. The liver cells then release merozoites (another parasitic form), which invade red blood cells. Here, the merozoites replicate rapidly before bursting out and infecting more red blood cells. This increase in the parasitic burden causes malaria's characteristic symptoms—debilitating and recurring fevers and chills. Infected red blood cells also release gametocytes, which infect mosquitoes when they take a blood meal. In the mosquito, the gametocytes multiply and develop into sporozoites, thus completing the parasite's life cycle. Malaria can be prevented by controlling the mosquitoes that spread the parasite and by avoiding mosquito bites. Effective treatment with antimalarial drugs also helps to reduce malaria transmission.
Why Was This Study Done?
In 1998, the World Health Organization (WHO) and several other international agencies launched Roll Back Malaria, a global partnership that aims to provide a coordinated, global approach to fighting malaria. For this or any other malaria control initiative to be effective, however, an accurate picture of the global clinical burden of malaria (how many people become ill because of malaria and where they live) is needed so that resources can be concentrated where they will have the most impact. Estimates of the global burden of many infectious diseases are obtained using data collected by national surveillance systems. Unfortunately, this approach does not work very well for malaria because in places where malaria is endemic (always present), diagnosis is often inaccurate and national reporting is incomplete. In this study, therefore, the researchers use an alternative, “cartographic” method for estimating the global clinical burden of P. falciparum malaria.
What Did the Researchers Do and Find?
The researchers identified seven P. falciparum malaria-endemic countries that had sufficiently reliable health information systems to determine the national clinical malaria burden in 2007 directly. They divided the other 80 malaria endemic countries into countries with a low risk of transmission (unstable transmission) and countries with a moderate or high risk of transmission (stable transmission). In countries with unstable transmission, the researchers assumed a uniform annual clinical incidence rate of 0.1 cases per 1,000 people and multiplied this by population sizes to get disease burden estimates. In countries with stable transmission, they used a modeled relationship between clinical incidence (number of new cases in a population per year) and prevalence (the proportion of a population infected with malaria parasites) and a global malaria endemicity map (a map that indicates the risk of malaria infection in different countries) to estimate malaria incidences. Finally, they used a technique called “joint simulation” to quantify the uncertainty in these estimates. Together, these disease burden estimates gave an estimated global burden of 451 million clinical cases of P. falciparum in 2007. Most of these cases occurred in areas of stable transmission and more than half occurred in India, Nigeria, the Democratic Republic of the Congo, and Myanmar. Importantly, these four nations alone contributed nearly half of the uncertainty in the global incidence estimates.
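For the areas of unstable transmission, the calculation described above is a single multiplication: a uniform annual incidence rate times the population at risk. A minimal sketch (the population figures below are hypothetical, not from the study):

```python
# Burden estimate for areas of unstable transmission, as described above:
# a uniform annual clinical incidence of 0.1 cases per 1,000 people is
# multiplied by the population at risk. Population figures are hypothetical.

UNSTABLE_INCIDENCE = 0.1 / 1000  # clinical cases per person per year

def unstable_burden(populations_at_risk):
    """Sum estimated annual clinical cases over areas of unstable transmission."""
    return sum(pop * UNSTABLE_INCIDENCE for pop in populations_at_risk)

# Three hypothetical administrative units with unstable transmission:
cases = unstable_burden([2_500_000, 800_000, 12_000_000])
# 15,300,000 people at risk yield 1,530 estimated clinical cases per year
```

The stable-transmission estimates are far more involved (modeled incidence-prevalence relationships plus geostatistical joint simulation) and are not reproduced here.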
What Do These Findings Mean?
These findings are extremely valuable because they provide a global map of malaria cases that should facilitate the implementation and evaluation of malaria control programs. However, the estimate of the global clinical burden of P. falciparum malaria reported here is higher than the WHO estimate of 247 million cases each year that was obtained using surveillance-based methods. The discrepancy between the estimates obtained using the cartographic and the surveillance-based approach is particularly marked for India. The researchers discuss possible reasons for these discrepancies and suggest improvements that could be made to both methods to increase the validity and precision of estimates. Finally, they note that improvements in the national prevalence surveys in India, Nigeria, the Democratic Republic of the Congo, and Myanmar would greatly reduce the uncertainty associated with their estimate of the global clinical burden of malaria, an observation that should encourage efforts to improve malaria surveillance in these countries.
Additional Information
Please access these Web sites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.1000290.
A PLoS Medicine Health in Action article by Hay and colleagues, a Research Article by Guerra and colleagues, and a Research Article by Hay and colleagues provide further details about the global mapping of malaria risk
Additional national and regional level maps and more information on the global mapping of malaria are available at the Malaria Atlas Project
Information is available from the World Health Organization on malaria (in several languages)
The US Centers for Disease Control and Prevention provide information on malaria (in English and Spanish)
Information is available from the Roll Back Malaria Partnership on its approach to the global control of malaria (in English and French)
MedlinePlus provides links to additional information on malaria (in English and Spanish)
doi:10.1371/journal.pmed.1000290
PMCID: PMC2885984  PMID: 20563310
6.  Costs and Consequences of the US Centers for Disease Control and Prevention's Recommendations for Opt-Out HIV Testing 
PLoS Medicine  2007;4(6):e194.
Background
The United States Centers for Disease Control and Prevention (CDC) recently recommended opt-out HIV testing (testing without the need for risk assessment and counseling) in all health care encounters in the US for persons 13–64 years old. However, the overall costs and consequences of these recommendations have not been estimated before. In this paper, I estimate the costs and public health impact of opt-out HIV testing relative to testing accompanied by client-centered counseling, and relative to a more targeted counseling and testing strategy.
Methods and Findings
Basic methods of scenario and cost-effectiveness analysis were used, from a payer's perspective over a one-year time horizon. I found that for the same programmatic cost of US$864,207,288, targeted counseling and testing services (at a 1% HIV seropositivity rate) would be preferred to opt-out testing: targeted services would newly diagnose more HIV infections (188,170 versus 56,940), prevent more HIV infections (14,553 versus 3,644), and do so at a lower gross cost per infection averted (US$59,383 versus US$237,149). While the study is limited by uncertainty in some input parameter values, the findings were robust across a variety of assumptions about these parameter values (including the estimated HIV seropositivity rate in the targeted counseling and testing scenario).
Conclusions
While opt-out testing may be able to newly diagnose over 56,000 persons living with HIV in one year, abandoning client-centered counseling has real public health consequences in terms of HIV infections that could have been averted. Further, my analyses indicate that even when HIV seropositivity rates are as low as 0.3%, targeted counseling and testing performs better than opt-out testing on several key outcome variables. These analytic findings should be kept in mind as HIV counseling and testing policies are debated in the US.
Scenario and cost-effectiveness analyses found that for the same programmatic cost, targeted counseling and testing would diagnose more people living with HIV and prevent more HIV infections than opt-out testing.
Editors' Summary
Background.
About a quarter of a million people in the United States do not realize they are infected with HIV. Because they are unaware of their infection, they don't get the medicines they need to stay healthy, and they may also be transmitting HIV, the virus that causes AIDS, to others unwittingly. How can public health professionals best reach such people to offer them an HIV test? There are a number of different schools of thought, the two most common of which are studied in this paper.
The first school of thought holds that the best way to reach them is simply to offer every patient in every health care setting an HIV test, while giving them the option to decline. This approach is known as “opt-out testing” (because everyone gets tested unless they choose to opt out); it has recently been recommended by the Centers for Disease Control and Prevention (CDC), the leading US government agency responsible for promoting the US public's health. The CDC says that there is no need for patients to give specific written permission for the HIV test to be done, and no need for health professionals to counsel patients beforehand about what the consequences of a positive test might mean for them.
The second school of thought is that public health professionals should instead target their efforts towards those who are at increased risk of being HIV positive, such as those who inject drugs or who have had high-risk sex. Persons at risk of infection or transmission are offered counseling before the test, to assess their actual risk of HIV and to discuss what would happen in the event that the HIV test comes back positive. During counseling, people are also given advice on steps they can take to stay HIV negative if their test comes back negative, and to prevent infecting others if their test comes back positive. This approach to HIV testing is called “targeted counseling and testing.” While targeting can be done according to levels of risk behavior, counseling and testing services can also be targeted by focusing on geographic areas (e.g., cities) with high levels of HIV infection, or focusing on different types of clinics that serve persons at high risk of HIV infection and/or with little routine access to health care (such as sexually transmitted disease or drug treatment clinics, emergency rooms, or medical clinics in prison settings).
Why Was This Study Done?
The researcher, David Holtgrave, wanted to know which of these two different approaches would be better at reaching people with undiagnosed HIV infection over the course of a one-year period. He also wanted to know the costs of each approach, and which might be better at curbing the spread of HIV.
What Did the Researcher Do and Find?
He used two research techniques. One is called “scenario analysis,” which involves trying to forecast the consequences of several different possible scenarios. The other is called “cost-effectiveness analysis,” which involves comparing the costs and effects of two or more different courses of action.
According to Dr. Holtgrave's analysis, opt-out testing might reach 23% of those people who are currently unaware that they are HIV positive. The program might also prevent 9% of the 40,000 new HIV infections that occur each year in the US. The cost of averting one new infection would be US$237,149. In contrast, targeted counseling and testing might identify about 75% of people in the US now unaware they are living with HIV infection, and prevent about 36% of the new HIV infections. The cost of averting one new infection would be US$59,383. Even when the author changed several assumptions in his analysis (e.g., assumptions about levels of HIV infection or the effectiveness of counseling), he found that targeted counseling and testing still performed better (so the results are “robust” across a variety of such assumptions).
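The headline cost-effectiveness comparison reduces to dividing the same programmatic budget by each scenario's infections averted. A sketch using the figures reported in the study (small discrepancies with the published cost per infection averted reflect rounding in the reported counts):

```python
# Gross cost per HIV infection averted = programmatic cost / infections averted,
# using the figures reported in the study. The counts are themselves rounded,
# so the results can differ slightly from the published dollar figures.

PROGRAM_COST = 864_207_288  # US$, identical for both scenarios

def cost_per_infection_averted(infections_averted):
    return PROGRAM_COST / infections_averted

targeted = cost_per_infection_averted(14_553)  # ~US$59,383, as published
opt_out = cost_per_infection_averted(3_644)    # ~US$237,159 (paper: US$237,149)
```

Because the budget is held fixed, the ratio of the two costs per infection averted is just the inverse ratio of infections prevented, which is why targeted testing dominates on this measure.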
What Do These Findings Mean?
These findings suggest that targeted counseling and testing would be better than opt-out testing for reaching people with undiagnosed HIV infection and for helping to stop the spread of the virus. Opt-out testing, says the author, might even make some people increase their risky behavior. For example, if someone is injecting drugs, is given an opt-out HIV test, but is never questioned about substance use or counseled, and gets an HIV-negative result, they could easily conclude that their drug injecting is not putting them at risk of becoming HIV positive.
However, it is important to note that this study has a major limitation in that it tried to predict what might happen in the future—it did not study the actual impact of the two different types of testing on a group of people. Studies such as this one, which try to predict the future, are always based on a number of assumptions and these assumptions may turn out not to be true. So we should always be cautious in interpreting the results of a “scenario analysis.” In addition, because of the assumptions made in this study, these results are only directly applicable to the US population and hence the implications for other countries are not clear.
Additional Information.
Please access these Web sites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.0040194
In a related Perspective on this article, Ronald Valdiserri discusses the public health implications of the study
The CDC has a Web site with information on national HIV testing resources
In addition, the CDC has published its “Revised Recommendations for HIV Testing of Adults, Adolescents, and Pregnant Women in Health-Care Settings,” which lay out its proposal for opt-out testing
The international AIDS charity AVERT has a comprehensive page on HIV testing, including information on the reasons to have a test and what the test involves
Johns Hopkins University is host to a site that provides extensive information on HIV care and treatment
The University of California at San Francisco maintains HIV InSite, an authoritative Web site covering topics such as HIV prevention, care, and policy
doi:10.1371/journal.pmed.0040194
PMCID: PMC1891318  PMID: 17564488
7.  Reporting Bias in Drug Trials Submitted to the Food and Drug Administration: Review of Publication and Presentation 
PLoS Medicine  2008;5(11):e217.
Background
Previous studies of drug trials submitted to regulatory authorities have documented selective reporting of both entire trials and favorable results. The objective of this study is to determine the publication rate of efficacy trials submitted to the Food and Drug Administration (FDA) in approved New Drug Applications (NDAs) and to compare the trial characteristics as reported by the FDA with those reported in publications.
Methods and Findings
This is an observational study of all efficacy trials found in approved NDAs for New Molecular Entities (NMEs) from 2001 to 2002 inclusive and all published clinical trials corresponding to the trials within the NDAs. For each trial included in the NDA, we assessed its publication status, primary outcome(s) reported and their statistical significance, and conclusions. Seventy-eight percent (128/164) of efficacy trials contained in FDA reviews of NDAs were published. In a multivariate model, trials with favorable primary outcomes (OR = 4.7, 95% confidence interval [CI] 1.33–17.1, p = 0.018) and active controls (OR = 3.4, 95% CI 1.02–11.2, p = 0.047) were more likely to be published. Forty-one primary outcomes from the NDAs were omitted from the papers. Papers included 155 outcomes that were in the NDAs, 15 additional outcomes that favored the test drug, and two other neutral or unknown additional outcomes. Excluding outcomes with unknown significance, there were 43 outcomes in the NDAs that did not favor the NDA drug. Of these, 20 (47%) were not included in the papers. The statistical significance of five of the remaining 23 outcomes (22%) changed between the NDA and the paper, with four changing to favor the test drug in the paper (p = 0.38). Excluding unknowns, 99 conclusions were provided in both NDAs and papers, nine conclusions (9%) changed from the FDA review of the NDA to the paper, and all nine did so to favor the test drug (100%, 95% CI 72%–100%, p = 0.0039).
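Several of the headline proportions follow directly from the counts reported above and can be recomputed:

```python
# Recomputing headline proportions from the counts stated in the abstract.
from fractions import Fraction

published, total_trials = 128, 164
pub_rate = Fraction(published, total_trials)   # publication rate of NDA trials

unfavorable_in_nda = 43      # NDA primary outcomes not favoring the drug
omitted_from_papers = 20     # of those, not included in the publications
omit_rate = Fraction(omitted_from_papers, unfavorable_in_nda)

pub_pct = round(100 * float(pub_rate))    # 78% published
omit_pct = round(100 * float(omit_rate))  # 47% of unfavorable outcomes omitted
```

The multivariate odds ratios quoted above cannot be rederived from the abstract alone, since the underlying two-by-two counts are not reported there.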
Conclusions
Many trials were still not published 5 y after FDA approval. Discrepancies between the trial information reviewed by the FDA and information found in published trials tended to lead to more favorable presentations of the NDA drugs in the publications. Thus, the information that is readily available in the scientific literature to health care professionals is incomplete and potentially biased.
Lisa Bero and colleagues review the publication status of all efficacy trials carried out in support of new drug approvals from 2001 and 2002, and find that a quarter of trials remain unpublished.
Editors' Summary
Background.
All health-care professionals want their patients to have the best available clinical care—but how can they identify the optimum drug or intervention? In the past, clinicians used their own experience or advice from colleagues to make treatment decisions. Nowadays, they rely on evidence-based medicine—the systematic review and appraisal of clinical research findings. So, for example, before a new drug is approved for the treatment of a specific disease in the United States and becomes available for doctors to prescribe, the drug's sponsors (usually a pharmaceutical company) must submit a “New Drug Application” (NDA) to the US Food and Drug Administration (FDA). The NDA tells the story of the drug's development from laboratory and animal studies through to clinical trials, including “efficacy” trials in which the efficacy and safety of the new drug and of a standard drug for the disease are compared by giving groups of patients the different drugs and measuring several key (primary) “outcomes.” FDA reviewers use this evidence to decide whether to approve a drug.
Why Was This Study Done?
Although the information in NDAs is publicly available, clinicians and patients usually learn about new drugs from articles published in medical journals after drug approval. Unfortunately, drug sponsors sometimes publish the results only of the trials in which their drug performed well and in which statistical analyses indicate that the drug's improved performance was a real effect rather than a lucky coincidence. Trials in which a drug did not show a “statistically significant benefit” or where the drug was found to have unwanted side effects often remain unpublished. This “publication bias” means that the scientific literature can contain an inaccurate picture of a drug's efficacy and safety relative to other therapies. This may lead to clinicians preferentially prescribing newer, more expensive drugs that are not necessarily better than older drugs. In this study, the researchers test the hypothesis that not all the trial results in NDAs are published in medical journals. They also investigate whether there are any discrepancies between the trial data included in NDAs and in published articles.
What Did the Researchers Do and Find?
The researchers identified all the efficacy trials included in NDAs for totally new drugs that were approved by the FDA in 2001 and 2002 and searched the scientific literature for publications between July 2006 and June 2007 relating to these trials. Only three-quarters of the efficacy trials in the NDAs were published; trials with favorable outcomes were nearly five times as likely to be published as those without favorable outcomes. Although 155 primary outcomes were in both the papers and the NDAs, 41 outcomes were only in the NDAs. Conversely, 17 outcomes were only in the papers; 15 of these favored the test drug. Of the 43 primary outcomes reported in the NDAs that showed no statistically significant benefit for the test drug, only half were included in the papers; for five of the reported primary outcomes, the statistical significance differed between the NDA and the paper and generally favored the test drug in the papers. Finally, nine out of 99 conclusions differed between the NDAs and the papers; each time, the published conclusion favored the test drug.
What Do These Findings Mean?
These findings indicate that the results of many trials of new drugs are not published 5 years after FDA approval of the drug. Furthermore, unexplained discrepancies between the data and conclusions in NDAs and in medical journals are common and tend to paint a more favorable picture of the new drug in the scientific literature than in the NDAs. Overall, these findings suggest that the information on the efficacy of new drugs that is readily available to clinicians and patients through the published scientific literature is incomplete and potentially biased. The recent introduction in the US and elsewhere of mandatory registration of all clinical trials before they start and of mandatory publication in trial registers of the full results of all the predefined primary outcomes should reduce publication bias over the next few years and should allow clinicians and patients to make fully informed treatment decisions.
Additional Information.
Please access these Web sites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.0050217.
This study is further discussed in a PLoS Medicine Perspective by An-Wen Chan
PLoS Medicine recently published a related article by Ida Sim and colleagues: Lee K, Bacchetti P, Sim I (2008) Publication of clinical trials supporting successful new drug applications: A literature analysis. PLoS Med 5: e191. doi:10.1371/journal.pmed.0050191
The Food and Drug Administration provides information about drug approval in the US for consumers and for health-care professionals; detailed information about the process by which drugs are approved is on the Web site of the FDA Center for Drug Evaluation and Research (in English and Spanish)
NDAs for approved drugs can also be found on this Web site
The ClinicalTrials.gov Web site provides information about the US National Institutes of Health clinical trial registry, background information about clinical trials, and a fact sheet detailing the requirements of the FDA Amendments Act 2007 for trial registration
The World Health Organization's International Clinical Trials Registry Platform is working toward setting international norms and standards for the reporting of clinical trials (in several languages)
doi:10.1371/journal.pmed.0050217
PMCID: PMC2586350  PMID: 19067477
8.  Estimating the Number of Paediatric Fevers Associated with Malaria Infection Presenting to Africa's Public Health Sector in 2007 
PLoS Medicine  2010;7(7):e1000301.
Peter Gething and colleagues compute the number of fevers likely to present to public health facilities in Africa and the estimated number of these fevers likely to be infected with Plasmodium falciparum malaria parasites.
Background
As international efforts to increase the coverage of artemisinin-based combination therapy in public health sectors gather pace, concerns have been raised regarding their continued indiscriminate presumptive use for treating all childhood fevers. The availability of rapid-diagnostic tests to support practical and reliable parasitological diagnosis provides an opportunity to improve the rational treatment of febrile children across Africa. However, the cost effectiveness of diagnosis-based treatment polices will depend on the presumed numbers of fevers harbouring infection. Here we compute the number of fevers likely to present to public health facilities in Africa and the estimated number of these fevers likely to be infected with Plasmodium falciparum malaria parasites.
Methods and Findings
We assembled first administrative-unit level data on paediatric fever prevalence, treatment-seeking rates, and child populations. These data were combined in a geographical information system model that also incorporated an adjustment procedure for urban versus rural areas to produce spatially distributed estimates of fever burden amongst African children and the subset likely to present to public sector clinics. A second data assembly was used to estimate plausible ranges for the proportion of paediatric fevers seen at clinics positive for P. falciparum in different endemicity settings. We estimated that, of the 656 million fevers in African 0–4 y olds in 2007, 182 million (28%) were likely to have sought treatment in a public sector clinic of which 78 million (43%) were likely to have been infected with P. falciparum (range 60–103 million).
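The abstract's headline figures form a simple chain of proportions, which can be recomputed from the reported counts:

```python
# Chain of estimates reported above (all figures in millions, from the abstract).
fevers = 656          # fevers in African 0-4 year olds in 2007
sought_care = 182     # of those, presented to a public sector clinic
pf_infected = 78      # of those, infected with P. falciparum (range 60-103)

pct_sought = round(100 * sought_care / fevers)         # 28% sought public care
pct_infected = round(100 * pf_infected / sought_care)  # 43% of presenting fevers
```

The spatial model behind these totals (geographic information system layers, urban/rural adjustment, endemicity-stratified positivity ranges) is not reproduced here.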
Conclusions
Spatial estimates of childhood fevers and care-seeking rates can be combined with a relational risk model of infection prevalence in the community to estimate the degree of parasitemia in those fevers reaching public health facilities. This quantification provides an important baseline comparison of malarial and nonmalarial fevers in different endemicity settings that can contribute to ongoing scientific and policy debates about optimum clinical and financial strategies for the introduction of new diagnostics. These models are made publicly available with the publication of this paper.
Please see later in the article for the Editors' Summary
Editors' Summary
Background
Malaria—an infectious parasitic disease transmitted to people through the bite of an infected mosquito—kills about one million people (mainly children living in sub-Saharan Africa) every year. Although several parasites cause malaria, Plasmodium falciparum is responsible for most of these deaths. For the past 50 years, the main treatments for P. falciparum malaria have been chloroquine and sulfadoxine/pyrimethamine. Unfortunately, parasitic resistance to these “monotherapies” is now widespread and there has been a global upsurge in the illness and deaths caused by P. falciparum. To combat this increase, the World Health Organization recommends artemisinin combination therapy (ACT) for P. falciparum malaria in all regions with drug-resistant malaria. In ACT, artemisinin derivatives (new, fast-acting antimalarial drugs) are used in combination with another antimalarial to reduce the chances of P. falciparum becoming resistant to either drug.
Why Was This Study Done?
All African countries at risk of P. falciparum have now adopted ACT as first-line therapy for malaria in their public clinics. However, experts are concerned that ACT is often given to children who don't actually have malaria because, in many parts of Africa, health care workers assume that all childhood fevers are malaria. This practice, which became established when diagnostic facilities for malaria were very limited, increases the chances of P. falciparum becoming resistant to ACT, wastes limited drug stocks, and means that many ill children are treated inappropriately. Recently, however, rapid diagnostic tests for malaria have been developed and there have been calls to expand their use to improve the rational treatment of African children with fever. Before such an expansion is initiated, it is important to know how many African children develop fever each year, how many of these ill children attend public clinics, and what proportion of them is likely to have malaria. Unfortunately, this type of information is incompletely or unreliably collected in many parts of Africa. In this study, therefore, the researchers use a mathematical model, fed with survey data, to estimate the number of childhood fevers associated with malaria infection that presented to Africa's public clinics in 2007.
What Did the Researchers Do and Find?
The researchers used survey data on the prevalence (the proportion of a population with a specific disease) of childhood fever and on treatment-seeking behavior and data on child populations to map the distribution of fever among African children and the likelihood of these children attending public clinics for treatment. They then used a recent map of the distribution of P. falciparum infection risk to estimate what proportion of children with fever who attended clinics were likely to have had malaria in different parts of Africa. In 2007, the researchers estimate, 656 million cases of fever occurred in 0–4-year-old African children; 182 million of these were likely to have sought treatment in a public clinic; and 78 million (just under half of the cases that attended a clinic with fever) were likely to have been infected with P. falciparum. Importantly, there were marked geographical differences in the likelihood of children with fever presenting at public clinics being infected with P. falciparum. So, for example, whereas nearly 60% of the children attending public clinics with fever in Burkina Faso were likely to have had malaria, only 15% of similar children in Kenya were likely to have had this disease.
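At its core, the estimation cascade described above multiplies three quantities: fever episodes, the probability that a febrile child reaches a public clinic, and the probability that an attending febrile child carries P. falciparum. A minimal sketch of that arithmetic, with illustrative inputs (the function name and all parameter values here are invented for illustration, not the study's actual model):

```python
def clinic_malaria_burden(child_population, fevers_per_child_year,
                          treatment_seeking_prob, p_falciparum_risk):
    """Expected annual fever cascade for one region.

    Returns (fever episodes, public-clinic attendances with fever,
    attendances likely infected with P. falciparum).
    """
    fevers = child_population * fevers_per_child_year       # all fever episodes
    attendances = fevers * treatment_seeking_prob           # reach a public clinic
    infected = attendances * p_falciparum_risk              # likely P. falciparum
    return fevers, attendances, infected

# Illustrative region: 1 million children, 1.5 fevers/child/year,
# 28% attend a public clinic, 43% of attendees infected (assumed values).
fevers, attendances, infected = clinic_malaria_burden(1_000_000, 1.5, 0.28, 0.43)
```

Summing such region-level estimates over a gridded map of populations and P. falciparum risk gives continent-wide totals of the kind reported above; the marked geographical variation comes from the last factor differing sharply between, say, Burkina Faso and Kenya.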
What Do These Findings Mean?
As with all mathematical models, the accuracy of these findings depends on the assumptions included in the model and on the data fed into it. Nevertheless, these findings provide a map of the prevalence of malarial and nonmalarial childhood fevers across sub-Saharan Africa and an indication of how many of the children with fever reaching public clinics are likely to have malaria and would therefore benefit from ACT. The finding that in some countries more than 80% of children attending public clinics with fever probably don't have malaria highlights the potential benefits of introducing rapid diagnostic testing for malaria. Furthermore, these findings can now be used to quantify the resources needed for and the potential clinical benefits of different policies for the introduction of rapid diagnostic testing for malaria across Africa.
Additional Information
Please access these Web sites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.1000301.
Information is available from the World Health Organization on malaria (in several languages) and on rapid diagnostic tests for malaria
The US Centers for Disease Control and Prevention provide information on malaria (in English and Spanish)
MedlinePlus provides links to additional information on malaria (in English and Spanish)
Information on the global mapping of malaria is available at the Malaria Atlas Project
Information is available from the Roll Back Malaria Partnership on the global control of malaria (in English and French) and on artemisinin combination therapy
doi:10.1371/journal.pmed.1000301
PMCID: PMC2897768  PMID: 20625548
9.  Global Mortality Estimates for the 2009 Influenza Pandemic from the GLaMOR Project: A Modeling Study 
PLoS Medicine  2013;10(11):e1001558.
Lone Simonsen and colleagues use a two-stage statistical modeling approach to estimate the global mortality burden of the 2009 influenza pandemic from mortality data obtained from multiple countries.
Please see later in the article for the Editors' Summary
Background
Assessing the mortality impact of the 2009 influenza A H1N1 virus (H1N1pdm09) is essential for optimizing public health responses to future pandemics. The World Health Organization reported 18,631 laboratory-confirmed pandemic deaths, but the total pandemic mortality burden was substantially higher. We estimated the 2009 pandemic mortality burden through statistical modeling of mortality data from multiple countries.
Methods and Findings
We obtained weekly virology and underlying cause-of-death mortality time series for 2005–2009 for 20 countries covering ∼35% of the world population. We applied a multivariate linear regression model to estimate pandemic respiratory mortality in each collaborating country. We then used these results plus ten country indicators in a multiple imputation model to project the mortality burden in all world countries. Between 123,000 and 203,000 pandemic respiratory deaths were estimated globally for the last 9 mo of 2009. The majority (62%–85%) were attributed to persons under 65 y of age. We observed a striking regional heterogeneity, with almost 20-fold higher mortality in some countries in the Americas than in Europe. The model attributed 148,000–249,000 respiratory deaths to influenza in an average pre-pandemic season, with only 19% in persons <65 y. Limitations include lack of representation of low-income countries among single-country estimates and an inability to study subsequent pandemic waves (2010–2012).
Conclusions
We estimate that 2009 global pandemic respiratory mortality was ∼10-fold higher than the World Health Organization's laboratory-confirmed mortality count. Although the pandemic mortality estimate was similar in magnitude to that of seasonal influenza, a marked shift toward mortality among persons <65 y of age occurred, so that many more life-years were lost. The burden varied greatly among countries, corroborating early reports of far greater pandemic severity in the Americas than in Australia, New Zealand, and Europe. A collaborative network to collect and analyze mortality and hospitalization surveillance data is needed to rapidly establish the severity of future pandemics.
Editors' Summary
Background
Every winter, millions of people catch influenza—a viral infection of the airways—and hundreds of thousands of people (mainly elderly individuals) die as a result. These seasonal epidemics occur because small but frequent changes in the influenza virus mean that the immune response produced by infection with one year's virus provides only partial protection against the next year's virus. Influenza viruses also occasionally emerge that are very different. Human populations have virtually no immunity to these new viruses, which can start global epidemics (pandemics) that kill millions of people. The most recent influenza pandemic, which was first recognized in Mexico in March 2009, was caused by the 2009 influenza A H1N1 pandemic (H1N1pdm09) virus. This virus spread rapidly, and on 11 June 2009, the World Health Organization (WHO) declared that an influenza pandemic was underway. H1N1pdm09 caused a mild disease in most people it infected, but by the time WHO announced that the pandemic was over (10 August 2010), there had been 18,631 laboratory-confirmed deaths from H1N1pdm09.
Why Was This Study Done?
The modest number of laboratory-confirmed H1N1pdm09 deaths has caused commentators to wonder whether the public health response to H1N1pdm09 was excessive. However, as is the case with all influenza epidemics, the true mortality (death) burden from H1N1pdm09 is substantially higher than these figures indicate because only a minority of influenza-related deaths are definitively diagnosed through laboratory confirmation. Many influenza-related deaths result from secondary bacterial infections or from exacerbation of preexisting chronic conditions, and are not recorded as related to influenza infection. A more complete assessment of the impact of H1N1pdm09 on mortality is essential for the optimization of public health responses to future pandemics. In this modeling study (the Global Pandemic Mortality [GLaMOR] project), researchers use a two-stage statistical modeling approach to estimate the global mortality burden of the 2009 influenza pandemic from mortality data obtained from multiple countries.
What Did the Researchers Do and Find?
The researchers obtained weekly virology data from the World Health Organization FluNet database and national influenza centers to identify influenza active periods, and obtained weekly national underlying cause-of-death time series for 2005–2009 from collaborators in more than 20 countries (35% of the world's population). They used a multivariate linear regression model to measure the numbers and rates of pandemic influenza respiratory deaths in each of these countries. Then, in the second stage of their analysis, they used a multiple imputation model that took into account country-specific geographical, economic, and health indicators to project the single-country estimates to all world countries. The researchers estimated that between 123,000 and 203,000 pandemic influenza respiratory deaths occurred globally from 1 April through 31 December 2009. Most of these deaths (62%–85%) occurred in people younger than 65 years old. There was a striking regional heterogeneity in deaths, with up to 20-fold higher mortality in Central and South American countries than in European countries. Finally, the model attributed 148,000–249,000 respiratory deaths to influenza in an average pre-pandemic season. Notably, only 19% of these deaths occurred in people younger than 65 years old.
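The first stage of this approach can be illustrated in miniature: regress a country's weekly respiratory-death series on a seasonal baseline plus an influenza-virology covariate, then attribute the fitted influenza term over the pandemic weeks. A hedged sketch using simulated data (all rates, coefficients, and the simple harmonic baseline below are invented for illustration; the study's actual regression was richer):

```python
import numpy as np

rng = np.random.default_rng(0)
weeks = np.arange(260)  # five years of weekly observations

# Simulated mortality: seasonal baseline + influenza effect + noise.
baseline = 1000 + 100 * np.sin(2 * np.pi * weeks / 52)
flu_activity = np.zeros(260)
flu_activity[240:252] = np.linspace(0, 1, 12)   # a late pandemic wave proxy
true_beta = 300.0                               # deaths per unit flu activity
deaths = baseline + true_beta * flu_activity + rng.normal(0, 10, 260)

# Design matrix: intercept, annual harmonics, and the virology covariate.
X = np.column_stack([
    np.ones(260),
    np.sin(2 * np.pi * weeks / 52),
    np.cos(2 * np.pi * weeks / 52),
    flu_activity,
])
coef, *_ = np.linalg.lstsq(X, deaths, rcond=None)

# Deaths attributed to pandemic influenza = fitted flu term over the wave.
pandemic_deaths = coef[3] * flu_activity.sum()
```

Stage two of the GLaMOR approach then projected such single-country estimates to all remaining countries via multiple imputation using country-level indicators, which is what produces the global range rather than a single point estimate.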
What Do These Findings Mean?
These findings suggest that respiratory mortality from the 2009 influenza pandemic was about 10-fold higher than laboratory-confirmed mortality. The true total mortality burden is likely to be even higher because deaths that occurred late in the winter of 2009–2010 and in later pandemic waves were missed in this analysis, and only pandemic influenza deaths that were recorded as respiratory deaths were included. The lack of single-country estimates from low-income countries may also limit the accuracy of these findings. Importantly, although the researchers' estimates of mortality from H1N1pdm09 and from seasonal influenza were of similar magnitude, the shift towards mortality among younger people means that more life-years were lost during the 2009 influenza pandemic than during an average pre-pandemic influenza season. Although the methods developed by the GLaMOR project can be used to make robust and comparable mortality estimates in future influenza pandemics, the lack of timeliness of such estimates needs to be remedied. One potential remedy, suggest the researchers, would be to establish a collaborative network that analyzes timely hospitalization and/or mortality data provided by sentinel countries. Such a network should be able to provide the rapid and reliable data about the severity of pandemic threats that is needed to guide public health policy decisions.
Additional Information
Please access these websites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.1001558.
The US Centers for Disease Control and Prevention provides information about influenza for patients and professionals, including archived information on H1N1pdm09
Flu.gov, a US government website, provides access to information on seasonal and pandemic influenza, including H1N1pdm09
The World Health Organization provides information on influenza and on the global response to H1N1pdm09, including a publication on the evolution of H1N1pdm09 (some information in several languages). Information on FluNet, a global tool for influenza surveillance, is also available
Public Health England provides information on pandemic influenza and archived information on H1N1pdm09
More information for patients about H1N1pdm09 is available through Choices, an information resource provided by the UK National Health Service
More information about the GLaMOR project is available
doi:10.1371/journal.pmed.1001558
PMCID: PMC3841239  PMID: 24302890
10.  An Epidemiological Network Model for Disease Outbreak Detection 
PLoS Medicine  2007;4(6):e210.
Background
Advanced disease-surveillance systems have been deployed worldwide to provide early detection of infectious disease outbreaks and bioterrorist attacks. New methods that improve the overall detection capabilities of these systems can have a broad practical impact. Furthermore, most current generation surveillance systems are vulnerable to dramatic and unpredictable shifts in the health-care data that they monitor. These shifts can occur during major public events, such as the Olympics, as a result of population surges and public closures. Shifts can also occur during epidemics and pandemics as a result of quarantines, the worried-well flooding emergency departments or, conversely, the public staying away from hospitals for fear of nosocomial infection. Most surveillance systems are not robust to such shifts in health-care utilization, either because they do not adjust baselines and alert-thresholds to new utilization levels, or because the utilization shifts themselves may trigger an alarm. As a result, public-health crises and major public events threaten to undermine health-surveillance systems at the very times they are needed most.
Methods and Findings
To address this challenge, we introduce a class of epidemiological network models that monitor the relationships among different health-care data streams instead of monitoring the data streams themselves. By extracting the extra information present in the relationships between the data streams, these models have the potential to improve the detection capabilities of a system. Furthermore, the models' relational nature has the potential to increase a system's robustness to unpredictable baseline shifts. We implemented these models and evaluated their effectiveness using historical emergency department data from five hospitals in a single metropolitan area, recorded over a period of 4.5 y by the Automated Epidemiological Geotemporal Integrated Surveillance real-time public health–surveillance system, developed by the Children's Hospital Informatics Program at the Harvard-MIT Division of Health Sciences and Technology on behalf of the Massachusetts Department of Public Health. We performed experiments with semi-synthetic outbreaks of different magnitudes and simulated baseline shifts of different types and magnitudes. The results show that the network models provide better detection of localized outbreaks, and greater robustness to unpredictable shifts than a reference time-series modeling approach.
Conclusions
The integrated network models of epidemiological data streams and their interrelationships have the potential to improve current surveillance efforts, providing better localized outbreak detection under normal circumstances, as well as more robust performance in the face of shifts in health-care utilization during epidemics and major public events.
Most surveillance systems are not robust to shifts in health care utilization. Ben Reis and colleagues developed network models that detected localized outbreaks better and were more robust to unpredictable shifts.
Editors' Summary
Background.
The main task of public-health officials is to promote health in communities around the world. To do this, they need to monitor human health continually, so that any outbreaks (epidemics) of infectious diseases (particularly global epidemics or pandemics) or any bioterrorist attacks can be detected and dealt with quickly. In recent years, advanced disease-surveillance systems have been introduced that analyze data on hospital visits, purchases of drugs, and the use of laboratory tests to look for tell-tale signs of disease outbreaks. These surveillance systems work by comparing current data on the use of health-care resources with historical data or by identifying sudden increases in the use of these resources. So, for example, more doctors asking for tests for salmonella than in the past might presage an outbreak of food poisoning, and a sudden rise in people buying over-the-counter flu remedies might indicate the start of an influenza pandemic.
Why Was This Study Done?
Existing disease-surveillance systems don't always detect disease outbreaks, particularly in situations where there are shifts in the baseline patterns of health-care use. For example, during an epidemic, people might stay away from hospitals because of the fear of becoming infected, whereas after a suspected bioterrorist attack with an infectious agent, hospitals might be flooded with “worried well” (healthy people who think they have been exposed to the agent). Baseline shifts like these might prevent the detection of increased illness caused by the epidemic or the bioterrorist attack. Localized population surges associated with major public events (for example, the Olympics) are also likely to reduce the ability of existing surveillance systems to detect infectious disease outbreaks. In this study, the researchers developed a new class of surveillance systems called “epidemiological network models.” These systems aim to improve the detection of disease outbreaks by monitoring fluctuations in the relationships between information detailing the use of various health-care resources over time (data streams).
What Did the Researchers Do and Find?
The researchers used data collected over a 3-y period from five Boston hospitals on visits for respiratory (breathing) problems and for gastrointestinal (stomach and gut) problems, and on total visits (15 data streams in total), to construct a network model that included all the possible pair-wise comparisons between the data streams. They tested this model by comparing its ability to detect simulated disease outbreaks implanted into data collected over an additional year with that of a reference model based on individual data streams. The network approach, they report, was better at detecting localized outbreaks of respiratory and gastrointestinal disease than the reference approach. To investigate how well the network model dealt with baseline shifts in the use of health-care resources, the researchers then added in a large population surge. The detection performance of the reference model decreased in this test, but the performance of the complete network model and of models that included relationships between only some of the data streams remained stable. Finally, the researchers tested what would happen in a situation where there were large numbers of “worried well.” Again, the network models detected disease outbreaks consistently better than the reference model.
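The central idea—monitoring relationships between data streams rather than the raw counts—can be sketched with a two-stream ratio monitor. This is far simpler than the paper's full pair-wise network, and the Poisson rates and alarm threshold below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
days = 365

# A year of baseline counts: respiratory visits vs. total ED visits.
resp = rng.poisson(100, days).astype(float)
total = rng.poisson(500, days).astype(float)

# Model the *relationship* between the streams via their log-ratio.
log_ratio = np.log(resp / total)
mu, sigma = log_ratio.mean(), log_ratio.std()

def ratio_alarm(resp_today, total_today, threshold=4.0):
    """Flag a day whose respiratory share deviates from the baseline ratio."""
    z = (np.log(resp_today / total_today) - mu) / sigma
    return z > threshold

# A population surge scales both streams, so the ratio monitor stays quiet,
# whereas a localized respiratory outbreak shifts the ratio and is flagged.
surge = ratio_alarm(100 * 3, 500 * 3)      # proportional scaling: no alarm
outbreak = ratio_alarm(100 * 3, 500 + 200)  # excess respiratory visits: alarm
```

A raw-count monitor on `resp` alone would alarm on both days; conditioning on the companion stream is what buys robustness to utilization shifts.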
What Do These Findings Mean?
These findings suggest that epidemiological network systems that monitor the relationships between health-care resource-utilization data streams might detect disease outbreaks better than current systems under normal conditions and might be less affected by unpredictable shifts in the baseline data. However, because the tests of the new class of surveillance system reported here used simulated infectious disease outbreaks and baseline shifts, the network models may behave differently in real-life situations or if built using data from other hospitals. Nevertheless, these findings strongly suggest that public-health officials, provided they have sufficient computer power at their disposal, might improve their ability to detect disease outbreaks by using epidemiological network systems alongside their current disease-surveillance systems.
Additional Information.
Please access these Web sites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.0040210.
Wikipedia pages on public health (note that Wikipedia is a free online encyclopedia that anyone can edit, and is available in several languages)
A brief description from the World Health Organization of public-health surveillance (in English, French, Spanish, Russian, Arabic, and Chinese)
A detailed report from the US Centers for Disease Control and Prevention called “Framework for Evaluating Public Health Surveillance Systems for the Early Detection of Outbreaks”
The International Society for Disease Surveillance Web site
doi:10.1371/journal.pmed.0040210
PMCID: PMC1896205  PMID: 17593895
11.  Natural Ventilation for the Prevention of Airborne Contagion 
PLoS Medicine  2007;4(2):e68.
Background
Institutional transmission of airborne infections such as tuberculosis (TB) is an important public health problem, especially in resource-limited settings where protective measures such as negative-pressure isolation rooms are difficult to implement. Natural ventilation may offer a low-cost alternative. Our objective was to investigate the rates, determinants, and effects of natural ventilation in health care settings.
Methods and Findings
The study was carried out in eight hospitals in Lima, Peru; five were hospitals of “old-fashioned” design built pre-1950, and three of “modern” design, built 1970–1990. In these hospitals 70 naturally ventilated clinical rooms where infectious patients are likely to be encountered were studied. These included respiratory isolation rooms, TB wards, respiratory wards, general medical wards, outpatient consulting rooms, waiting rooms, and emergency departments. These rooms were compared with 12 mechanically ventilated negative-pressure respiratory isolation rooms built post-2000. Ventilation was measured using a carbon dioxide tracer gas technique in 368 experiments. Architectural and environmental variables were measured. For each experiment, infection risk was estimated for TB exposure using the Wells-Riley model of airborne infection. We found that opening windows and doors provided median ventilation of 28 air changes/hour (ACH), more than double that of mechanically ventilated negative-pressure rooms ventilated at the 12 ACH recommended for high-risk areas, and 18 times that with windows and doors closed (p < 0.001). Facilities built more than 50 years ago, characterised by large windows and high ceilings, had greater ventilation than modern naturally ventilated rooms (40 versus 17 ACH; p < 0.001). Even within the lowest quartile of wind speeds, natural ventilation exceeded mechanical (p < 0.001). The Wells-Riley airborne infection model predicted that in mechanically ventilated rooms 39% of susceptible individuals would become infected following 24 h of exposure to untreated TB patients of infectiousness characterised in a well-documented outbreak. This infection rate compared with 33% in modern and 11% in pre-1950 naturally ventilated facilities with windows and doors open.
Conclusions
Opening windows and doors maximises natural ventilation so that the risk of airborne contagion is much lower than with costly, maintenance-requiring mechanical ventilation systems. Old-fashioned clinical areas with high ceilings and large windows provide greatest protection. Natural ventilation costs little and is maintenance free, and is particularly suited to limited-resource settings and tropical climates, where the burden of TB and institutional TB transmission is highest. In settings where respiratory isolation is difficult and climate permits, windows and doors should be opened to reduce the risk of airborne contagion.
In eight hospitals in Lima, opening windows and doors maximised natural ventilation and lowered the risk of airborne infection. Old-fashioned clinical areas with high ceilings and large windows provide greatest protection.
Editors' Summary
Background.
Tuberculosis (TB) is a major cause of ill health and death worldwide, with around one-third of the world's population infected with the bacterium that causes it (Mycobacterium tuberculosis). One person with active tuberculosis can go on to infect many others; the bacterium is passed in tiny liquid droplets that are produced when someone with active disease coughs, sneezes, spits, or speaks. The risk of tuberculosis being transmitted in hospital settings is particularly high, because people with tuberculosis are often in close contact with very many other people. Currently, most guidelines recommend that the risk of transmission be controlled in certain areas where TB is more likely by making sure that the air in rooms is changed with fresh air between six and 12 times an hour. Air changes can be achieved with simple measures such as opening windows and doors, or by installing mechanical equipment that forces air changes and also keeps the air pressure in an isolation room lower than that outside it. Such “negative pressure,” mechanically ventilated systems are often used on tuberculosis wards to prevent air flowing from isolation rooms to other rooms outside, and so to prevent people on the tuberculosis ward from infecting others.
Why Was This Study Done?
In many parts of the world, hospitals do not have equipment even for simple air conditioning, let alone the special equipment needed for forcing high air changes in isolation rooms and wards. Instead they rely on opening windows and doors in order to reduce the transmission of TB, and this is called natural ventilation. However, it is not clear whether these sorts of measures are adequate for controlling TB transmission. It is important to find out what sorts of systems work best at controlling TB in the real world, so that hospitals and wards can be designed appropriately, within available resources.
What Did the Researchers Do and Find?
This study was based in Lima, Peru's capital city. The researchers studied a variety of rooms, including tuberculosis wards and respiratory isolation rooms, in the city's hospitals. Rooms that had only natural measures for encouraging airflow were compared with mechanically ventilated, negative pressure rooms, which were built much more recently. Naturally ventilated rooms in old hospitals were also compared with naturally ventilated rooms in newer hospitals. The researchers used a carbon dioxide tracer gas technique to measure the number of air changes per hour within each room, and based on this they estimated the risk of a person with TB infecting others using a method called the Wells-Riley equation. The results showed that natural ventilation provided surprisingly high rates of air exchange, with an average of 28 air changes per hour. Hospitals over 50 years old, which generally had large windows and high ceilings, had the highest ventilation, with an average of 40 air changes per hour. This rate compared with 17 air changes per hour in naturally ventilated rooms in modern hospitals, which tended to have lower ceilings and smaller windows. The rooms with modern mechanical ventilation were supposed to have 12 air changes per hour but in reality this was not achieved, as the systems were not maintained properly. The Wells-Riley equation predicted that if an untreated person with tuberculosis was exposed to other people, within 24 hours this person would infect 39% of the people in the mechanically ventilated room, 33% of people in the naturally ventilated new hospital rooms, and only 11% of the people in the naturally ventilated old hospital rooms.
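The Wells-Riley model used above gives the infection probability as P = 1 - exp(-Iqpt/Q), where I is the number of infectors, q the quanta generation rate, p the susceptible person's breathing rate, t the exposure time, and Q the room's fresh-air supply (volume times air changes per hour). A sketch with illustrative parameter values (the quanta rate, room volume, and breathing rate below are assumptions, not the study's measured inputs, so the risks differ from the percentages reported above):

```python
import math

def wells_riley(infectors, quanta_per_hour, breathing_m3_per_hour,
                hours, room_volume_m3, air_changes_per_hour):
    """Wells-Riley probability that a susceptible occupant becomes infected:
    P = 1 - exp(-I*q*p*t / Q), with Q the fresh-air supply in m^3/h."""
    fresh_air = room_volume_m3 * air_changes_per_hour
    dose = infectors * quanta_per_hour * breathing_m3_per_hour * hours
    return 1.0 - math.exp(-dose / fresh_air)

# Risk falls steeply as ventilation rises: one untreated infector,
# q = 13 quanta/h, p = 0.5 m^3/h, 24 h exposure, 100 m^3 room (assumed).
risks = {ach: wells_riley(1, 13, 0.5, 24, 100, ach) for ach in (12, 17, 28, 40)}
```

With these assumed inputs, predicted risk drops monotonically from roughly 12% at 12 air changes per hour to under 4% at 40, which mirrors (in direction, not magnitude) the 39%/33%/11% gradient the study reports for its own measured parameters.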
What Do These Findings Mean?
These findings suggest that natural methods of encouraging airflow (e.g., opening doors and windows) work well and in theory could reduce the likelihood of TB being carried from one person to another. Some aspects of the design of wards in old hospitals (such as large windows and high ceilings) are also likely to achieve better airflow and reduce the risk of infection. In poor countries, where mechanical ventilation systems might be too expensive to install and maintain properly, rooms that are designed to naturally achieve good airflow might be the best choice. Another advantage of natural ventilation is that it is not restricted by cost to just high-risk areas, and can therefore be used in many different parts of the hospital, including emergency departments, outpatient departments, and waiting rooms, and it is here that many infectious patients are to be found.
Additional Information.
Please access these Web sites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.0040068.
Information from the World Health Organization on tuberculosis, detailing global efforts to prevent the spread of TB
The World Health Organization publishes guidelines for the prevention of TB in health care facilities in resource-limited settings
Tuberculosis infection control in the era of expanding HIV care and treatment is discussed in an addendum to the above booklet
The US Centers for Disease Control and Prevention has published guidelines for preventing the transmission of Mycobacterium tuberculosis in health care settings
Wikipedia has an entry on nosocomial infections (diseases that are spread in hospital). Wikipedia is an internet encyclopedia anyone can edit
A PLoS Medicine Perspective by Peter Wilson, “Is Natural Ventilation a Useful Tool to Prevent the Airborne Spread of TB?” discusses the implications of this study
doi:10.1371/journal.pmed.0040068
PMCID: PMC1808096  PMID: 17326709
12.  Barriers to Provider-Initiated Testing and Counselling for Children in a High HIV Prevalence Setting: A Mixed Methods Study 
PLoS Medicine  2014;11(5):e1001649.
Rashida Ferrand and colleagues combine quantitative and qualitative methods to investigate HIV prevalence among older children receiving primary care in Harare, Zimbabwe, and reasons why providers did not pursue testing.
Please see later in the article for the Editors' Summary
Background
There is a substantial burden of HIV infection among older children in sub-Saharan Africa, the majority of whom are diagnosed after presentation with advanced disease. We investigated the provision and uptake of provider-initiated HIV testing and counselling (PITC) among children in primary health care facilities, and explored health care worker (HCW) perspectives on providing HIV testing to children.
Methods and Findings
Children aged 6 to 15 y attending six primary care clinics in Harare, Zimbabwe, were offered PITC, with guardian consent and child assent. The reasons why testing did not occur in eligible children were recorded, and factors associated with HCWs offering and children/guardians refusing HIV testing were investigated using multivariable logistic regression. Semi-structured interviews were conducted with clinic nurses and counsellors to explore these factors. Among 2,831 eligible children, 2,151 (76%) were offered PITC, and 1,534 (54.2% of those eligible) consented to HIV testing. The main reasons HCWs gave for not offering PITC were the perceived unsuitability of the accompanying guardian to provide consent for HIV testing on behalf of the child and lack of availability of staff or HIV testing kits. Children who were asymptomatic, older, or attending with a male or a younger guardian had significantly lower odds of being offered HIV testing. Male guardians were less likely to consent to their child being tested. Eighty-two children (5.3%) tested HIV-positive, with 95% linking to care. Of the 940 guardians who tested with the child, 186 (19.8%) were HIV-positive.
Conclusions
The HIV prevalence among children tested was high, highlighting the need for PITC. For PITC to be successfully implemented, clear legislation about consent and guardianship needs to be developed, and structural issues addressed. HCWs require training on counselling children and guardians, particularly male guardians, who are less likely to engage with health care services. Increased awareness of the risk of HIV infection in asymptomatic older children is needed.
Editors' Summary
Background
Over 3 million children globally are estimated to be living with HIV (the virus that causes AIDS). While HIV infection is most commonly spread through unprotected sex with an infected person, most HIV infections among children are the result of mother-to-child HIV transmission during pregnancy, delivery, or breastfeeding. Mother-to-child transmission can be prevented by administering antiretroviral therapy to mothers with HIV during pregnancy, delivery, and breastfeeding, and to their newborn babies. According to a report by the Joint United Nations Programme on HIV/AIDS published in 2012, 92% of pregnant women with HIV were living in sub-Saharan Africa and just under 60% were receiving antiretroviral therapy. Consequently, sub-Saharan Africa is the region where most children infected with HIV live.
Why Was This Study Done?
If an opportunity to prevent mother-to-child transmission around the time of birth is missed, diagnosis of HIV infection in a child or adolescent is likely to depend on HIV testing in health care facilities. Health care provider–initiated HIV testing and counselling (PITC) for children is important in areas where HIV infection is common because earlier diagnosis allows children to benefit from care that can prevent the development of advanced HIV disease. Even if a child or adolescent appears to be in good health, access to care and antiretroviral therapy provides a health benefit to the individual over the long term. The administration of HIV testing (and counselling) to children relies not only on health care workers (HCWs) offering HIV testing but also on parents or guardians consenting for a child to be tested. However, more than 30% of children in countries with severe HIV epidemics are AIDS orphans, and economic conditions in these countries cause many adults to migrate for work, leaving children under the care of extended families. This study aimed to investigate the reasons for acceptance and rejection of PITC in primary health care settings in Harare, Zimbabwe. By exploring HCW perspectives on providing HIV testing to children and adolescents, the study also sought to gain insight into factors that could be hindering implementation of testing procedures.
What Did the Researchers Do and Find?
The researchers identified all children aged 6 to 15 years at six primary care clinics in Harare who were offered HIV testing as part of routine care between 22 January and 31 May 2013. Study fieldworkers collected data on numbers of child attendances, numbers offered testing, numbers who underwent HIV testing, and reasons why HIV testing did not occur. During the study, 2,831 children attending the health clinics were eligible for PITC, and just over half (1,534, 54.2%) underwent HIV testing. Eighty-two children tested HIV-positive, and nearly all of them received counselling, medication, and follow-up care. HCWs offered the test to 76% of those eligible. The most frequent explanation given by HCWs for a diagnostic test not being offered was that the child was accompanied by a guardian not appropriate for providing consent (401 occasions, 59%). Other reasons given were a lack of available counsellors or test kits and counsellors refusing to conduct the test. The likelihood of being offered the test was lower for children not exhibiting symptoms (such as persistent skin problems), older children, and those attending with a male or a younger guardian. In addition, over 100 guardians or parents provided consent but left before the child could be tested.
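The guardian-related comparisons above come from multivariable logistic regression, which adjusts for several factors at once. As a simplified illustration of the underlying idea, a crude (unadjusted) odds ratio for "offered testing" by guardian sex can be computed from a 2×2 table. All counts below are invented for illustration, not the study's data:

```python
# Crude odds ratio for "offered HIV testing" by guardian sex.
# The counts are hypothetical; the study adjusted for other
# factors using multivariable logistic regression.
import math

# (offered, not offered) for each guardian group
offered = {"female": (1800, 400), "male": (351, 280)}

def odds_ratio(exposed, reference):
    a, b = exposed      # offered, not offered (group of interest)
    c, d = reference    # offered, not offered (reference group)
    return (a * d) / (b * c)

or_male = odds_ratio(offered["male"], offered["female"])

# Approximate 95% CI via the standard error of the log odds ratio
a, b = offered["male"]
c, d = offered["female"]
se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
lo = math.exp(math.log(or_male) - 1.96 * se)
hi = math.exp(math.log(or_male) + 1.96 * se)
print(f"OR (male vs female guardian) = {or_male:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```

An odds ratio below 1 here would correspond to the study's finding that children attending with a male guardian had lower odds of being offered testing.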
The researchers also conducted semi-structured interviews with 12 clinic nurses and counsellors (two from each clinic) to explore challenges to implementation of PITC, and recorded why testing did not take place, both when eligible children were offered the test and when HCWs declined to offer it. The interviewees identified the frequent absence or unavailability of parents or legal guardians as an obstacle, and expressed uncertainty or misconceptions about whether testing of the guardian was mandatory (versus recommended) and whether specifically a parent (if one was living) had to provide consent. The interviews also revealed HCW concerns about the availability of adequate counselling and child services, and fears that a child might experience maltreatment if he or she tested positive. HCWs also noted long waiting times and test kits being out of stock as practical hindrances to testing.
What Do These Findings Mean?
Prevalence of HIV was high among the children tested, validating the need for PITC in sub-Saharan health care settings. Although 76% of eligible attendees were offered testing, the authors note that this rate is likely higher than in routine settings, because the researchers were actively recording reasons for not offering testing and counselling, which may have encouraged health care staff to offer PITC more often than usual. The researchers outline strategies that may improve PITC rates and testing acceptance in Zimbabwe and other sub-Saharan settings. These strategies include developing clear laws and guidance concerning guardianship and proxy consent when testing older children for HIV, training HCWs on these policies, strengthening legislation to address discrimination, and increasing public awareness of HIV infection in older children.
Additional Information
Please access these websites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.1001649.
This study is further discussed in a PLOS Medicine Perspective by Davies and Kalk
The Joint United Nations Programme on HIV/AIDS publishes an annual report on the global AIDS epidemic, which provides information on progress towards eliminating new HIV infections
The World Health Organization has more information on mother-to-child transmission of HIV
The World Health Organization's website also has information about treatment for children living with HIV
Personal stories about living with HIV/AIDS, including stories from young people infected with HIV, are available through Avert, through NAM/aidsmap, and through the charity website Healthtalkonline
doi:10.1371/journal.pmed.1001649
PMCID: PMC4035250  PMID: 24866209
13.  Cost-Effectiveness of Rapid Syphilis Screening in Prenatal HIV Testing Programs in Haiti 
PLoS Medicine  2007;4(5):e183.
Background
New rapid syphilis tests permit simple and immediate diagnosis and treatment at a single clinic visit. We compared the cost-effectiveness, projected health outcomes, and annual cost of screening pregnant women using a rapid syphilis test as part of scaled-up prenatal testing to prevent mother-to-child HIV transmission in Haiti.
Methods and Findings
A decision analytic model simulated health outcomes and costs separately for pregnant women in rural and urban areas. We compared syphilis syndromic surveillance (rural standard of care), rapid plasma reagin test with results and treatment at 1-wk follow-up (urban standard of care), and a new rapid test with immediate results and treatment. Test performance data were from a World Health Organization–Special Programme for Research and Training in Tropical Diseases field trial conducted at the GHESKIO Center Groupe Haitien d'Etude du Sarcome de Kaposi et des Infections Opportunistes in Port-au-Prince. Health outcomes were projected using historical data on prenatal syphilis treatment efficacy and included disability-adjusted life years (DALYs) of newborns, congenital syphilis cases, neonatal deaths, and stillbirths. Cost-effectiveness ratios are in US dollars/DALY from a societal perspective; annual costs are in US dollars from a payer perspective. Rapid testing with immediate treatment has a cost-effectiveness ratio of $6.83/DALY in rural settings and $9.95/DALY in urban settings. Results are sensitive to regional syphilis prevalence, rapid test sensitivity, and the return rate for follow-up visits. Integrating rapid syphilis testing into a scaled-up national HIV testing and prenatal care program would prevent 1,125 congenital syphilis cases and 1,223 stillbirths or neonatal deaths annually at a cost of $525,000.
Conclusions
In Haiti, integrating a new rapid syphilis test into prenatal care and HIV testing would prevent congenital syphilis cases and stillbirths, and is cost-effective. A similar approach may be beneficial in other resource-poor countries that are scaling up prenatal HIV testing.
Analyzing data from Haiti, Bruce Schackman and colleagues report that scale-up of prenatal HIV testing programs provides a cost-effective opportunity to prevent congenital syphilis through rapid testing.
Editors' Summary
Background.
Congenital syphilis (syphilis that is passed on from a woman infected with the disease to her unborn baby) is a major preventable public health problem. Around half of all pregnancies among women infected with syphilis result in stillbirth or death of the baby shortly after birth. However, it should be possible to reduce the health burden of congenital syphilis if infections among pregnant women could be quickly and accurately diagnosed. In resource-poor countries, many syphilis infections go undiagnosed, because the tests that are normally used involve sending samples away to a laboratory for processing. This means that the diagnosis can only be confirmed, and treatment started, at the next available visit. As a result, there is a delay in starting antibiotic treatment, and some women may never receive their intended treatment at all if they cannot return for their follow-up visit. However, new tests are available that don't involve cold storage of reagents or electrical equipment, and these can be used to give an immediate result about syphilis infection even in rural or resource-poor settings. Currently, global initiatives are underway to ensure many more pregnant women are tested for HIV and to reduce the risk of HIV being passed on from a woman to her baby. These initiatives could provide an important opportunity for carrying out widespread immediate screening for syphilis during pregnancy as well. Such screening might then help reduce infant deaths substantially.
Why Was This Study Done?
Field trials evaluating rapid syphilis tests have already been carried out by the World Health Organization's Special Programme for Research and Training in Tropical Diseases. One such trial, carried out in Port-au-Prince, Haiti, evaluated the success of three different rapid syphilis tests as compared to two “gold standard” tests (older tests that are generally considered reliable, but which don't give an immediate result). These researchers wanted to use data from these trials to compare costs and predicted health outcomes of including different types of syphilis screening as part of scaled-up prenatal care. Specifically, the researchers wanted to find out whether including rapid syphilis testing as part of universal prenatal care would be cost-effective and whether it would reduce the rate of stillbirths and congenital syphilis.
What Did the Researchers Do and Find?
This research was based on data from the field trials previously carried out in Haiti. The data from these trials were used to create a model comparing three different strategies for screening pregnant women for syphilis infections. The three strategies were as follows: checking for the symptoms of syphilis (assumed to be the standard of care in rural areas); standard testing for antibody response to the syphilis bacterium, after which treatment can be provided at follow-up a week later (assumed to be the standard of care in urban areas); and, finally, rapid testing that gives an immediate result. For each strategy, the researchers predicted what the health outcomes would be. These outcomes are summarized in "disability-adjusted life years" (DALYs) that reflect the number of years of healthy life lost due to congenital syphilis among newborn babies, the number of stillbirths, and the number of neonatal deaths. Cost-effectiveness of each strategy was worked out by dividing the additional costs of testing and treatment for that strategy by the number of DALYs avoided, compared with the next-cheapest alternative. Under the model, urban and rural settings were analyzed separately. Immediate testing was more expensive than either checking for symptoms or standard testing, but was highly cost-effective in both settings: it would cost an additional $7–$10 per disability-adjusted life year averted compared with the current rural or urban standard of care. The researchers predicted that if immediate syphilis testing were provided to all pregnant women in Haiti who currently have access to prenatal care, over 1,000 congenital syphilis cases would be avoided, along with over 1,000 stillbirths and neonatal deaths, at a yearly cost of $525,000.
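The incremental cost-effectiveness calculation described above, dividing each strategy's additional cost by the additional DALYs it averts relative to the next-cheapest option, can be sketched as follows. The strategy names mirror the study, but the cost and DALY figures are invented placeholders, not the study's inputs:

```python
# Incremental cost-effectiveness ratios (ICERs): each strategy is
# compared with the next-cheapest alternative. All cost and DALY
# numbers below are hypothetical, for illustration only.
strategies = [
    # (name, total annual cost in US$, DALYs averted vs doing nothing)
    ("syndromic surveillance",    10_000,     0),
    ("RPR test, 1-wk follow-up",  40_000, 3_000),
    ("rapid test, same visit",    75_000, 8_000),
]

def icers(strats):
    """ICER of each strategy versus the previous (cheaper) one,
    in US$ per DALY averted. Assumes strats is sorted by cost."""
    out = {}
    for (_, c0, e0), (name, c1, e1) in zip(strats, strats[1:]):
        out[name] = (c1 - c0) / (e1 - e0)
    return out

for name, ratio in icers(strategies).items():
    print(f"{name}: ${ratio:.2f}/DALY averted")
```

With these placeholder numbers, the rapid test comes out at $7.00/DALY, in the same ballpark as the $6.83–$9.95/DALY ratios reported in the study.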
What Do These Findings Mean?
This model suggests that including rapid syphilis testing as part of current global initiatives for preventing mother-to-child transmission of HIV could substantially reduce infant deaths. The strategy is also likely to be cost-effective.
Additional Information.
Please access these Web sites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.0040183.
MedlinePlus encyclopedia entry on congenital syphilis
Information from the World Health Organization about congenital syphilis, including information about screening programs and new screening tests
A report is also available from the Special Programme for Research and Training in Tropical Diseases regarding rapid syphilis tests
AVERT, an international AIDS charity, provides information about preventing mother-to-child transmission of HIV
doi:10.1371/journal.pmed.0040183
PMCID: PMC1880854  PMID: 17535105
14.  The Importance of Implementation Strategy in Scaling Up Xpert MTB/RIF for Diagnosis of Tuberculosis in the Indian Health-Care System: A Transmission Model 
PLoS Medicine  2014;11(7):e1001674.
Using a modelling approach, David Dowdy and colleagues investigate how different implementation strategies for Xpert MTB/RIF within the complex, fragmented healthcare system of India may affect tuberculosis control.
Please see later in the article for the Editors' Summary
Background
India has announced a goal of universal access to quality tuberculosis (TB) diagnosis and treatment. A number of novel diagnostics could help meet this important goal. The rollout of one such diagnostic, Xpert MTB/RIF (Xpert) is being considered, but if Xpert is used mainly for people with HIV or high risk of multidrug-resistant TB (MDR-TB) in the public sector, population-level impact may be limited.
Methods and Findings
We developed a model of TB transmission, care-seeking behavior, and diagnostic/treatment practices in India and explored the impact of six different rollout strategies. Providing Xpert to 40% of public-sector patients with HIV or prior TB treatment (similar to current national strategy) reduced TB incidence by 0.2% (95% uncertainty range [UR]: −1.4%, 1.7%) and MDR-TB incidence by 2.4% (95% UR: −5.2%, 9.1%) relative to existing practice but required 2,500 additional MDR-TB treatments and 60 four-module GeneXpert systems at maximum capacity. Further including 20% of unselected symptomatic individuals in the public sector required 700 systems and reduced incidence by 2.1% (95% UR: 0.5%, 3.9%); a similar approach involving qualified private providers (providers who have received at least some training in allopathic or non-allopathic medicine) reduced incidence by 6.0% (95% UR: 3.9%, 7.9%) with similar resource outlay, but only if high treatment success was assured. Engaging 20% of all private-sector providers (qualified and informal [providers with no formal medical training]) had the greatest impact (14.1% reduction, 95% UR: 10.6%, 16.9%), but required >2,200 systems and reliable treatment referral. Improving referrals from informal providers for smear-based diagnosis in the public sector (without Xpert rollout) had substantially greater impact (6.3% reduction) than Xpert scale-up within the public sector. These findings are subject to substantial uncertainty regarding private-sector treatment patterns, patient care-seeking behavior, symptoms, and infectiousness over time; these uncertainties should be addressed by future research.
Conclusions
The impact of new diagnostics for TB control in India depends on implementation within the complex, fragmented health-care system. Transformative strategies will require private/informal-sector engagement, adequate referral systems, improved treatment quality, and substantial resources.
Editors' Summary
Background
Tuberculosis—a contagious bacterial disease that usually infects the lungs—is a global public health problem. Each year, about 8.7 million people develop active tuberculosis and about 1.4 million people die from the disease. Mycobacterium tuberculosis, the bacterium that causes tuberculosis, is spread in airborne droplets when people with active disease cough or sneeze. The characteristic symptoms of tuberculosis are a persistent cough, fever, weight loss, and night sweats. Diagnostic tests for tuberculosis include sputum smear microscopy (microscopic analysis of mucus coughed up from the lungs), the growth of M. tuberculosis from sputum samples, and new molecular tests (for example, the automated Xpert MTB/RIF test) that rapidly and accurately detect M. tuberculosis in patient samples and determine its resistance to certain antibiotics. Tuberculosis can be cured by taking several antibiotics daily for at least six months, although the recent emergence of multidrug-resistant (MDR) tuberculosis is making the disease increasingly hard to treat.
Why Was This Study Done?
About 25% of all tuberculosis cases occur in India. Most people in India with underlying tuberculosis initially seek care for cough from the private health-care sector, which comprises informal providers with no formal medical training and providers with some training in mainstream or alternative medicine. Private providers rarely investigate for tuberculosis, and patients often move between providers, with long diagnostic delays. The public sector ultimately diagnoses and treats more than half of tuberculosis cases. However, the public sector relies on sputum smear microscopy, which misses half of cases, and the full diagnostic process from symptom onset to treatment initiation can take several months, during which time individuals remain infectious. Could the rollout of molecular diagnostic tests improve tuberculosis control in India? The Indian Revised National Tuberculosis Control Programme (RNTCP) is currently introducing the Xpert MTB/RIF test (Xpert) as a rapid method for drug susceptibility testing in the public sector in people at high risk of MDR tuberculosis, but is this the most effective rollout strategy? Here, the researchers use a mathematical transmission model to investigate the likely effects of the rollout of Xpert in India using different implementation strategies.
What Did the Researchers Do and Find?
The researchers explored the impact of several rollout strategies on the incidence of tuberculosis (the number of new cases of tuberculosis in the population per year) by developing a mathematical model of tuberculosis transmission, care-seeking behavior, and diagnostic/treatment practices in India. Compared to a baseline scenario of no improved diagnostic testing, provision of Xpert to 40% of public-sector patients at high risk of MDR tuberculosis (scenario 1, the current national strategy) reduced the incidence of tuberculosis by 0.2% and the incidence of MDR tuberculosis by 2.4%. Implementation of this strategy required 2,500 additional courses of MDR tuberculosis treatment and the continuous use of 60 Xpert machines, about half the machines procured in India during 2013. Extending Xpert access to 20% of all individuals with tuberculosis symptoms seeking diagnosis in the public sector reduced tuberculosis incidence by 2.1%, and a similar extension through qualified private practitioners reduced it by 6.0%. The greatest impact (a 14.1% reduction) came from engaging 20% of all private-sector providers, both qualified and informal, but this strategy required more than 2,200 Xpert machines and reliable treatment referral. Notably, a scenario that encouraged informal providers to refer suspected tuberculosis cases to the public sector for smear-based diagnosis (no Xpert rollout) reduced tuberculosis incidence by 6.3%, a greater impact than Xpert scale-up within the public sector.
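The comparisons above rest on a compartmental transmission model in which faster diagnosis and treatment shortens the infectious period. A deliberately simplified deterministic sketch of that mechanism is below; the study's actual model also represented care-seeking pathways, provider behavior, and MDR strains, and every parameter value here is invented:

```python
# Minimal susceptible-latent-active TB transmission model (Euler
# integration). Greatly simplified relative to the paper's model:
# no care-seeking pathways, no MDR strain. All parameters invented.

def run(beta, delta, years=10, dt=0.01):
    """beta: transmission rate per year; delta: diagnosis-and-cure rate
    per year. Returns the annual incidence of active TB at the end."""
    S, L, I = 0.70, 0.29, 0.01   # susceptible, latent, active (fractions)
    eps, mu = 0.1, 0.02          # progression and background mortality rates
    for _ in range(int(years / dt)):
        new_inf = beta * S * I
        dS = mu - new_inf - mu * S + delta * I   # births; cured return to S
        dL = new_inf - (eps + mu) * L
        dI = eps * L - (delta + mu) * I
        S += dS * dt; L += dL * dt; I += dI * dt
    return eps * L

base   = run(beta=8.0, delta=1.0)   # slow, smear-based diagnosis pathway
better = run(beta=8.0, delta=2.0)   # faster diagnosis and treatment
print(f"relative incidence reduction: {100 * (1 - better / base):.1f}%")
```

Even in this toy model, doubling the rate at which active cases are diagnosed and cured lowers incidence at the population level, which is the mechanism through which the modelled Xpert and referral scenarios act.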
What Do These Findings Mean?
These findings are subject to considerable uncertainty because of the assumptions made in the transmission model about private-sector treatment patterns, patient care-seeking behavior, and infectiousness, and the quality of the data fed into the model. Nevertheless, these findings suggest that the rollout of Xpert (or other new diagnostic methods with similar characteristics) could substantially reduce the burden of tuberculosis due to poor diagnosis in India. Importantly, these findings highlight how the impact of Xpert rollout relies not only on the accuracy of the test but also on the behavior of patients and providers, the level of access to new tools, and the availability of treatment following diagnosis. Thus, to ensure that new diagnostic methods have the maximum impact on tuberculosis in India, it is necessary to engage the whole private health-care sector and to provide adequate referral systems, improved treatment quality, and increased resources across all health-care sectors.
Additional Information
Please access these websites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.1001674.
The World Health Organization (WHO) provides information (in several languages) on all aspects of tuberculosis, including general information on tuberculosis diagnostics and specific information on the roll out of the Xpert MTB/RIF test; further information about WHO's endorsement of Xpert MTB/RIF is included in a Strategic and Technical Advisory Group for Tuberculosis report; the Global Tuberculosis Report 2013 provides information about tuberculosis around the world, including in India
The Stop TB Partnership is working towards tuberculosis elimination and provides patient stories about tuberculosis (in English and Spanish); the Tuberculosis Vaccine Initiative (a not-for-profit organization) also provides personal stories about tuberculosis
The US Centers for Disease Control and Prevention has information about tuberculosis and its diagnosis (in English and Spanish)
The US National Institute of Allergy and Infectious Diseases also has detailed information on all aspects of tuberculosis
TBC India provides information about tuberculosis control in India, including information on the RNTCP
The Initiative for Promoting Affordable and Quality TB Tests promotes WHO-endorsed TB tests in India
MedlinePlus has links to further information about tuberculosis (in English and Spanish)
doi:10.1371/journal.pmed.1001674
PMCID: PMC4098913  PMID: 25025235
15.  The Limits and Intensity of Plasmodium falciparum Transmission: Implications for Malaria Control and Elimination Worldwide  
PLoS Medicine  2008;5(2):e38.
Background
The efficient allocation of financial resources for malaria control using appropriate combinations of interventions requires accurate information on the geographic distribution of malaria risk. An evidence-based description of the global range of Plasmodium falciparum malaria and its endemicity has not been assembled in almost 40 y. This paper aims to define the global geographic distribution of P. falciparum malaria in 2007 and to provide a preliminary description of its transmission intensity within this range.
Methods and Findings
The global spatial distribution of P. falciparum malaria was generated using nationally reported case-incidence data, medical intelligence, and biological rules of transmission exclusion, using temperature and aridity limits informed by the bionomics of dominant Anopheles vector species. A total of 4,278 spatially unique cross-sectional survey estimates of P. falciparum parasite rates were assembled. Extractions from a population surface showed that 2.37 billion people lived in areas at any risk of P. falciparum transmission in 2007. Globally, almost 1 billion people lived under unstable, or extremely low, malaria risk. Almost all P. falciparum parasite rates above 50% were reported in Africa in a latitude band consistent with the distribution of Anopheles gambiae s.s. Conditions of low parasite prevalence were also common in Africa, however. Outside of Africa, P. falciparum malaria prevalence is largely hypoendemic (less than 10%), with the median below 5% in the areas surveyed.
Conclusions
This new map is a plausible representation of the current extent of P. falciparum risk and the most contemporary summary of the population at risk of P. falciparum malaria within these limits. For 1 billion people at risk of unstable malaria transmission, elimination is epidemiologically feasible, and large areas of Africa are more amenable to control than appreciated previously. The release of this information in the public domain will help focus future resources for P. falciparum malaria control and elimination.
Combining extensive surveillance and climate data, as well as biological characteristics of Anopheles mosquitoes, Robert Snow and colleagues create a global map of risk for P. falciparum malaria.
Editors' Summary
Background.
Malaria is a parasitic disease that occurs in tropical and subtropical regions of the world. About 500 million cases of malaria occur every year, and one million people, mostly children living in sub-Saharan Africa, die as a result. The parasite mainly responsible for these deaths—Plasmodium falciparum—is transmitted to people through the bites of infected mosquitoes. These insects inject a life stage of the parasite called sporozoites, which invade and reproduce in human liver cells. After a few days, the liver cells release merozoites (another life stage of the parasite), which invade red blood cells. Here, they multiply before bursting out and infecting more red blood cells, causing fever and damaging vital organs. Infected red blood cells also release gametocytes, which infect mosquitoes when they take a human blood meal. In the mosquito, the gametocytes multiply and develop into sporozoites, thus completing the parasite's life cycle. Malaria can be treated with antimalarial drugs and can be prevented by controlling the mosquitoes that spread the parasite (for example, by using insecticides) and by avoiding mosquito bites (for example, by sleeping under an insecticide-treated bednet).
Why Was This Study Done?
Because malaria poses such a large global public-health burden, many national and international agencies give countries where malaria is endemic (always present) financial resources for malaria control and, where feasible, elimination. The efficient allocation of these resources requires accurate information on the geographical distribution of malaria risk, but no global map of malaria risk had been assembled in almost 40 years. In this study, which is part of the Malaria Atlas Project, the researchers have generated a new global map to show where the risk of P. falciparum transmission is moderate or high (stable transmission areas where malaria is endemic) and areas where the risk of transmission is low (unstable transmission areas where sporadic outbreaks of malaria occur).
What Did the Researchers Do and Find?
To construct their map of P. falciparum risk, the researchers collected nationally reported data on malaria cases each year and on the number of people infected in sampled communities. They also collected information about climatic conditions that affect the parasite's life cycle and consequently the likelihood of active transmission. For example, below a certain temperature, infected mosquitoes reach the end of their natural life span before the parasite has had time to turn into infectious sporozoites, which means that malaria transmission does not occur. By combining these different pieces of information with global population data, the researchers calculated that 2.37 billion people (about 35% of the world's population) live in areas where there is some risk of P. falciparum transmission, and that about 1 billion of these people live where there is a low but still present risk of malaria transmission. Furthermore, nearly all the regions where more than half of children carry P. falciparum parasites (a P. falciparum prevalence of more than 50%) are in Africa, although there are some African regions where few people are infected with P. falciparum. Outside Africa, the P. falciparum prevalence is generally below 5%.
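The population-at-risk calculation described above, overlaying biological exclusion rules on a population surface, can be sketched on toy grids. The temperature and aridity thresholds and all grid values below are invented for illustration; the study used global climate and population surfaces informed by the bionomics of Anopheles vectors:

```python
# Sum population over grid cells where transmission is biologically
# possible. The 3x3 grids and thresholds below are invented; the study
# used global temperature, aridity, and population surfaces.
import numpy as np

temperature = np.array([[10, 22, 27],
                        [15, 25, 30],
                        [ 5, 20, 28]])     # mean annual temperature, deg C
aridity     = np.array([[0.9, 0.4, 0.2],
                        [0.8, 0.3, 0.1],
                        [0.9, 0.5, 0.2]])  # aridity index (higher = drier)
population  = np.array([[2.0, 5.0, 9.0],
                        [1.0, 7.0, 4.0],
                        [0.5, 3.0, 6.0]])  # millions of people per cell

# Biological exclusion rules: transmission is ruled out where it is
# too cold for sporozoite development or too arid for the vector.
at_risk = (temperature >= 18) & (aridity <= 0.6)
pop_at_risk = population[at_risk].sum()
print(f"population at risk: {pop_at_risk:.1f} million")
```

The study's figure of 2.37 billion people at any risk of P. falciparum transmission comes from exactly this kind of masked extraction, applied to global surfaces rather than toy grids.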
What Do These Findings Mean?
The accuracy of this new map of the spatial distribution of P. falciparum malaria risk depends on the assumptions made in its assembly and the accuracy of the data fed into it. Nevertheless, by providing a contemporary indication of global patterns of P. falciparum malaria risk, this new map should be a valuable resource for agencies that are trying to control and eliminate malaria. (A similar map for the more common but less deadly P. vivax malaria would also be useful, but has not yet been constructed because less information is available and its biology is more complex.) Importantly, the map provides an estimate of the number of people who are living in areas where malaria transmission is low, areas where it should, in principle, be possible to use existing interventions to eliminate the parasite. In addition, it identifies large regions of Africa where the parasite might be more amenable to control and, ultimately, elimination than previously thought. Finally, with regular updates, this map will make it possible to monitor the progress of malaria control and elimination efforts.
Additional Information.
Please access these Web sites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.0050038.
The MedlinePlus encyclopedia contains a page on malaria (in English and Spanish)
Information is available from the World Health Organization on malaria (in English, Spanish, French, Russian, Arabic, and Chinese)
The US Centers for Disease Control and Prevention provide information on malaria (in English and Spanish)
Information is available from the Roll Back Malaria Partnership on its approach to the global control of malaria
More information is available on global mapping of malaria risk from the Malaria Atlas Project
doi:10.1371/journal.pmed.0050038
PMCID: PMC2253602  PMID: 18303939
16.  Modeling the Worldwide Spread of Pandemic Influenza: Baseline Case and Containment Interventions 
PLoS Medicine  2007;4(1):e13.
Background
The highly pathogenic H5N1 avian influenza virus, which is now widespread in Southeast Asia and which diffused recently in some areas of the Balkans region and Western Europe, has raised a public alert toward the potential occurrence of a new severe influenza pandemic. Here we study the worldwide spread of a pandemic and its possible containment at a global level taking into account all available information on air travel.
Methods and Findings
We studied a metapopulation stochastic epidemic model on a global scale that considers airline travel flow data among urban areas. We provided a temporal and spatial evolution of the pandemic with a sensitivity analysis of different levels of infectiousness of the virus and initial outbreak conditions (both geographical and seasonal). For each spreading scenario we provided the timeline and the geographical impact of the pandemic in 3,100 urban areas, located in 220 different countries. We compared the baseline cases with different containment strategies, including travel restrictions and the therapeutic use of antiviral (AV) drugs. We investigated the effect of the use of AV drugs in the event that therapeutic protocols can be carried out with maximal coverage for the populations in all countries. In view of the wide diversity of AV stockpiles in different regions of the world, we also studied scenarios in which only a limited number of countries are prepared (i.e., have considerable AV supplies). In particular, we compared different plans in which, on the one hand, only prepared and wealthy countries benefit from large AV resources, with, on the other hand, cooperative containment scenarios in which countries with large AV stockpiles make a small portion of their supplies available worldwide.
Conclusions
We show that the inclusion of air transportation is crucial in the assessment of the occurrence probability of global outbreaks. The large-scale therapeutic use of AV drugs in all hit countries could mitigate a pandemic with a reproductive rate as high as 1.9 during the first year, provided that AV supplies sufficient to treat approximately 2% to 6% of the population are deployed with efficient case detection and timely drug distribution. For highly contagious viruses (i.e., a reproductive rate as high as 2.3), even the unrealistic use of supplies corresponding to the treatment of approximately 20% of the population leaves 30%–50% of the population infected. In the case of limited AV supplies and pandemics with a reproductive rate as high as 1.9, we demonstrate that the more cooperative the strategy, the more effective the containment is in all regions of the world, including those countries that made part of their resources available for global use.
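The core mechanism of the metapopulation model described above, local epidemics in urban areas coupled by air travel, can be sketched with a deterministic two-city SIR model. The study's model was stochastic and covered 3,100 urban areas with real airline flow data; the two cities, parameter values, and travel fraction below are all invented for illustration:

```python
# Two-city metapopulation SIR model coupled by travel. A toy,
# deterministic version of the model class used in the study;
# all parameters (beta, gamma, travel fraction) are invented.

def simulate(travel_frac, days=300, dt=0.1):
    """Return the peak infectious fraction reached in city B,
    which starts with no infections and is seeded only by travel."""
    beta, gamma = 0.3, 0.1             # per-day transmission/recovery rates
    S = [0.999, 1.0]                   # city A seeded with a small outbreak
    I = [0.001, 0.0]
    R = [0.0, 0.0]
    peak_B = 0.0
    for _ in range(int(days / dt)):
        for c in range(2):             # within-city SIR dynamics
            new_inf = beta * S[c] * I[c]
            rec = gamma * I[c]
            S[c] -= new_inf * dt
            I[c] += (new_inf - rec) * dt
            R[c] += rec * dt
        # symmetric exchange of infectious travellers between the cities
        flow = travel_frac * (I[0] - I[1]) * dt
        I[0] -= flow
        I[1] += flow
        peak_B = max(peak_B, I[1])
    return peak_B

print(f"peak infectious fraction in city B, with travel: {simulate(0.01):.3f}")
print(f"peak infectious fraction in city B, no travel:   {simulate(0.0):.3f}")
```

Setting the travel fraction to zero confines the outbreak to city A, which illustrates why the authors find air transportation crucial to the probability of a global outbreak, and why travel-mediated seeding dominates the timeline of spread.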
A metapopulation stochastic epidemic model for influenza shows the need to include air transportation when assessing the occurrence probability of global outbreaks. The impact of the use of antiviral drugs is also measured.
Editors' Summary
Background.
Seasonal outbreaks (epidemics) of influenza—a viral infection of the nose, throat, and airways—affect millions of people and kill about 500,000 individuals every year. Regular epidemics occur because flu viruses frequently make small changes in the viral proteins (antigens) recognized by the human immune system. Consequently, a person's immune-system response that combats influenza one year provides incomplete protection the next year. Occasionally, a human influenza virus appears that contains large antigenic changes. People have little immunity to such viruses (which often originate in birds or animals), so they can start a global epidemic (pandemic) that kills millions of people. Experts fear that a human influenza pandemic could be triggered by the avian H5N1 influenza virus, which is present in bird flocks around the world. So far, fewer than 300 people have caught this virus, but more than 150 of them have died.
Why Was This Study Done?
Avian H5N1 influenza has not yet triggered a human pandemic, because it rarely passes between people. If it does acquire this ability, it would take 6–8 months to develop a vaccine to provide protection against this new, potentially pandemic virus. Public health officials therefore need other strategies to protect people during the first few months of a pandemic. These could include international travel restrictions and the use of antiviral drugs. However, to get the most benefit from these interventions, public-health officials need to understand how influenza pandemics spread, both over time and geographically. In this study, the researchers have used detailed information on air travel to model the global spread of an emerging influenza pandemic and its containment.
What Did the Researchers Do and Find?
The researchers incorporated data on worldwide air travel and census data from urban centers near airports into a mathematical model of the spread of an influenza pandemic. They then used this model to investigate how the spread and health effects of a pandemic flu virus depend on the season in which it emerges (influenza virus thrives best in winter), where it emerges, and how infectious it is. Their model predicts, for example, that a flu virus originating in Hanoi, Vietnam, with a reproductive number (R0) of 1.1 (a measure of how many people an infectious individual infects on average) poses a very mild global threat. However, epidemics initiated by a virus with an R0 of more than 1.5 would often infect half the population in more than 100 countries. Next, the researchers used their model to show that strict travel restrictions would have little effect on pandemic evolution. More encouragingly, their model predicts that antiviral drugs would mitigate pandemics of a virus with an R0 up to 1.9 if every country had an antiviral drug stockpile sufficient to treat 5% of its population; if the R0 was 2.3 or higher, the pandemic would not be contained even if 20% of the population could be treated. Finally, the researchers considered a realistic scenario in which only a few countries possess antiviral stockpiles. In these circumstances, compared with a “selfish” strategy in which countries only use their antiviral drugs within their borders, limited worldwide sharing of antiviral drugs would slow down the spread of a flu virus with an R0 of 1.9 by more than a year and would benefit both drug donors and recipients.
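The link the summary draws between the reproductive number R0 and how much of the population is eventually infected can be illustrated with the classic SIR final-size relation. This is a deliberate simplification, not the authors' metapopulation model, and the 62% antiviral transmission reduction used below is an illustrative assumption rather than a figure from the paper:

```python
import math

def attack_rate(r0, tol=1e-10, max_iter=10_000):
    """Solve the classic SIR final-size relation z = 1 - exp(-r0 * z)
    by fixed-point iteration; z is the fraction ever infected."""
    z = 0.5 if r0 > 1 else 0.0  # start away from the trivial root z = 0
    for _ in range(max_iter):
        z_new = 1 - math.exp(-r0 * z)
        if abs(z_new - z) < tol:
            break
        z = z_new
    return z

def effective_r0(r0, coverage, av_reduction=0.62):
    """Effective reproductive number if a fraction `coverage` of cases
    receives antivirals that cut onward transmission by `av_reduction`
    (62% is an assumed value for illustration only)."""
    return r0 * (1 - coverage * av_reduction)
```

In this simplified picture, attack_rate(1.1) stays below one-fifth of the population while attack_rate(1.5) exceeds one-half, roughly mirroring the contrast the model draws between a very mild global threat and epidemics infecting half the population in many countries.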
What Do These Findings Mean?
Like all mathematical models, this model for the global spread of an emerging pandemic influenza virus contains many assumptions (for example, about viral behavior) that might affect the accuracy of its predictions. The model also does not consider variations in travel frequency between individuals or viral spread in rural areas. Nevertheless, the model provides the most extensive global simulation of pandemic influenza spread to date. Reassuringly, it suggests that an emerging virus with a low R0 would not pose a major public-health threat, since its attack rate would be limited and would not peak for more than a year, by which time a vaccine could be developed. Most importantly, the model suggests that cooperative sharing of antiviral drugs, which could be organized by the World Health Organization, might be the best way to deal with an emerging influenza pandemic.
Additional Information.
Please access these Web sites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.0040013.
The US Centers for Disease Control and Prevention has information about influenza for patients and professionals, including key facts about avian influenza and antiviral drugs
The US National Institute of Allergy and Infectious Disease features information on seasonal, avian, and pandemic flu
The US Department of Health and Human Services provides information on pandemic flu and avian flu, including advice to travelers
World Health Organization has fact sheets on influenza and avian influenza, including advice to travelers and current pandemic flu threat
The UK Health Protection Agency has information on seasonal, avian, and pandemic influenza
The UK Department of Health has a feature article on bird flu and pandemic influenza
doi:10.1371/journal.pmed.0040013
PMCID: PMC1779816  PMID: 17253899
17.  The Global Burden of Snakebite: A Literature Analysis and Modelling Based on Regional Estimates of Envenoming and Deaths 
PLoS Medicine  2008;5(11):e218.
Background
Envenoming resulting from snakebites is an important public health problem in many tropical and subtropical countries. Few attempts have been made to quantify the burden, and recent estimates all suffer from the lack of an objective and reproducible methodology. In an attempt to provide an accurate, up-to-date estimate of the scale of the global problem, we developed a new method to estimate the disease burden due to snakebites.
Methods and Findings
The global estimates were based on regional estimates that were, in turn, derived from data available for countries within a defined region. Three main strategies were used to obtain primary data: electronic searching for publications on snakebite, extraction of relevant country-specific mortality data from databases maintained by United Nations organizations, and identification of grey literature by discussion with key informants. Countries were grouped into 21 distinct geographic regions that are as epidemiologically homogenous as possible, in line with the Global Burden of Disease 2005 study (Global Burden Project of the World Bank). Incidence rates for envenoming were extracted from publications and used to estimate the number of envenomings for individual countries; if no data were available for a particular country, the lowest incidence rate within a neighbouring country was used. Where death registration data were reliable, reported deaths from snakebite were used; in other countries, deaths were estimated on the basis of observed mortality rates and the at-risk population. We estimate that, globally, at least 421,000 envenomings and 20,000 deaths occur each year due to snakebite. These figures may be as high as 1,841,000 envenomings and 94,000 deaths. Based on the fact that envenoming occurs in about one in every four snakebites, between 1.2 million and 5.5 million snakebites could occur annually.
Conclusions
Snakebites cause considerable morbidity and mortality worldwide. The highest burden exists in South Asia, Southeast Asia, and sub-Saharan Africa.
Janaka de Silva and colleagues estimate that globally at least 421,000 envenomings and 20,000 deaths occur each year due to snakebite.
Editors' Summary
Background.
Of the 3,000 or so snake species that exist in the world, about 600 are venomous. Venomous snakes—which exist on every continent except Antarctica—immobilize their prey by injecting modified saliva (venom) that contains toxins into their prey's tissues through their fangs—specialized, hollow teeth. Snakes also use their venoms for self defense and will bite people who threaten, startle or provoke them. Snakebites caused by the families Viperidae (for example, pit vipers) and Elapidae (for example, kraits and cobras) are particularly dangerous to people. The potentially fatal effects of being “envenomed” (having venom injected) by these snakes include widespread bleeding, muscle paralysis, and tissue destruction (necrosis) around the bite site. Bites from these snakes can also cause permanent disability. For example, snakebite victims, who tend to be young and male, may have to have a limb amputated because of necrosis. The best treatment for any snakebite is to get the victim to a hospital as soon as possible where antivenoms (mixtures of antibodies that neutralize venoms) can be given.
Why Was This Study Done?
Although snakebites occur throughout the world, envenoming snakebites are thought to pose a particularly important yet largely neglected threat to public health. This is especially true in rural areas of tropical and subtropical countries where snakebites are common but where there is limited access to health care and to antivenoms. The true magnitude of the public-health threat posed by snakebites in these countries (and elsewhere in the world) is unknown, which makes it hard for public-health officials to optimize the prevention and treatment of snakebites in their respective countries. In this study, therefore, the researchers develop and apply a new method to estimate the global burden of snakebite.
What Did the Researchers Do and Find?
The researchers systematically searched the scientific literature for publications on snakebites and deaths from snakebites and extracted data on snakebite deaths in individual countries from the World Health Organization (WHO) mortality database. They also contacted Ministries of Health, National Poison Centers, and snakebite experts for unpublished information (“grey” literature) on snakebites. Together, these three approaches provided data on the number of snakebite envenomings and deaths for 135 and 162 countries, respectively. The researchers then grouped the 227 countries of the world into 21 geographical regions, each of which contained countries with similar population characteristics, and used the results of studies done in individual countries within each region to estimate the numbers of snakebite envenomings and deaths for each region. Finally, they added up these estimates to obtain an estimate of the global burden of snakebite. Using this method, the researchers estimate that, worldwide, at least 421,000 envenomings and 20,000 deaths from snakebite occur every year; the actual numbers, they suggest, could be as high as 1.8 million envenomings and 94,000 deaths. Their estimates also indicate that the highest burden of snakebite envenomings and death occurs in South and Southeast Asia and in sub-Saharan Africa, and that India is the country with the highest annual number of envenomings (81,000) and deaths (nearly 11,000).
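The extrapolation step described above (a country's own envenoming incidence where reported, otherwise the lowest rate among its neighbours) can be sketched as follows. All function and variable names, and the data structures, are assumptions for illustration, not the study's code:

```python
def regional_envenomings(incidence_per_100k, populations, neighbours):
    """Estimate total annual envenomings for a region.

    Uses a country's reported incidence (per 100,000 people) where
    available; otherwise falls back on the lowest incidence among its
    neighbours, mirroring the study's deliberately conservative choice.
    Countries with no usable data contribute nothing to the total.
    """
    total = 0.0
    for country, pop in populations.items():
        if country in incidence_per_100k:
            incidence = incidence_per_100k[country]
        else:
            rates = [incidence_per_100k[n]
                     for n in neighbours.get(country, [])
                     if n in incidence_per_100k]
            if not rates:
                continue  # no data for the country or its neighbours
            incidence = min(rates)
        total += incidence / 100_000 * pop
    return total
```

A small worked example: with incidences of 100 and 50 per 100,000 in two neighbouring countries, a third country with no data is assigned the lower rate (50), keeping the regional total a lower-bound style estimate.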
What Do These Findings Mean?
These findings indicate that snakebites cause considerable illness and death around the world. Because of the careful methods used by the researchers, their global estimates of snakebite envenomings and deaths are probably more accurate than previous estimates. However, because the researchers had to make many assumptions in their calculations and because there are so few reliable data on the numbers of snakebites and deaths from the rural tropics, the true regional and global numbers of these events may differ substantially from the estimates presented here. In particular, the regional estimates for eastern sub-Saharan Africa, a region where snakebites are very common and where antivenoms are particularly hard to obtain, are likely to be inaccurate because they are based on a single study. The researchers, therefore, call for more studies on snakebite envenoming and deaths to be done to provide the information needed to deal effectively with this neglected public-health problem.
Additional Information.
Please access these Web sites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.0050218.
This study is further discussed in a PLoS Medicine Perspective by Chippaux
The MedlinePlus Medical Encyclopedia has a page on snakebites (in English and Spanish)
The UK National Health Service Direct health encyclopedia has detailed information about all aspects of snakebites
Wikipedia has pages on venomous snakes and on snakebites (note: Wikipedia is a free online encyclopedia that anyone can edit; available in several languages)
The World Health Organization provides information about antivenoms and about efforts to increase access to antivenoms in developing countries (available in several languages)
A previous article in PLoS Medicine also discusses the neglected problem of snakebite envenoming: Gutiérrez JM, Theakston RDG, Warrell DA (2006) Confronting the Neglected Problem of Snake Bite Envenoming: The Need for a Global Partnership. PLoS Med 3(6): e150
doi:10.1371/journal.pmed.0050218
PMCID: PMC2577696  PMID: 18986210
18.  A World Malaria Map: Plasmodium falciparum Endemicity in 2007 
PLoS Medicine  2009;6(3):e1000048.
Background
Efficient allocation of resources to intervene against malaria requires a detailed understanding of the contemporary spatial distribution of malaria risk. It is exactly 40 y since the last global map of malaria endemicity was published. This paper describes the generation of a new world map of Plasmodium falciparum malaria endemicity for the year 2007.
Methods and Findings
A total of 8,938 P. falciparum parasite rate (PfPR) surveys were identified using a variety of exhaustive search strategies. Of these, 7,953 passed strict data fidelity tests for inclusion into a global database of PfPR data, age-standardized to 2–10 y for endemicity mapping. A model-based geostatistical procedure was used to create a continuous surface of malaria endemicity within previously defined stable spatial limits of P. falciparum transmission. These procedures were implemented within a Bayesian statistical framework so that the uncertainty of these predictions could be evaluated robustly. The uncertainty was expressed as the probability of predicting correctly one of three endemicity classes; previously stratified to be an informative guide for malaria control. Population at risk estimates, adjusted for the transmission modifying effects of urbanization in Africa, were then derived with reference to human population surfaces in 2007. Of the 1.38 billion people at risk of stable P. falciparum malaria, 0.69 billion were found in Central and South East Asia (CSE Asia), 0.66 billion in Africa, Yemen, and Saudi Arabia (Africa+), and 0.04 billion in the Americas. All those exposed to stable risk in the Americas were in the lowest endemicity class (PfPR2−10 ≤ 5%). The vast majority (88%) of those living under stable risk in CSE Asia were also in this low endemicity class; a small remainder (11%) were in the intermediate endemicity class (PfPR2−10 > 5 to < 40%); and the remaining fraction (1%) in high endemicity (PfPR2−10 ≥ 40%) areas. High endemicity was widespread in the Africa+ region, where 0.35 billion people are at this level of risk. Most of the rest live at intermediate risk (0.20 billion), with a smaller number (0.11 billion) at low stable risk.
Conclusions
High levels of P. falciparum malaria endemicity are common in Africa. Uniformly low endemic levels are found in the Americas. Low endemicity is also widespread in CSE Asia, but pockets of intermediate and very rarely high transmission remain. There are therefore significant opportunities for malaria control in Africa and for malaria elimination elsewhere. This 2007 global P. falciparum malaria endemicity map is the first of a series with which it will be possible to monitor and evaluate the progress of this intervention process.
Incorporating data from nearly 8,000 surveys of Plasmodium falciparum parasite rates, Simon Hay and colleagues employ a model-based geostatistical procedure to create a map of global malaria endemicity.
Editors' Summary
Background.
Malaria is one of the most common infectious diseases in the world and one of the greatest global public health problems. The Plasmodium falciparum parasite causes approximately 500 million cases each year and over one million deaths in sub-Saharan Africa. More than 40% of the world's population is at risk of malaria. The parasite is transmitted to people through the bites of infected mosquitoes. These insects inject a life stage of the parasite called sporozoites, which invade human liver cells where they reproduce briefly. The liver cells then release merozoites (another life stage of the parasite), which invade red blood cells. Here, they multiply again before bursting out and infecting more red blood cells, causing fever and damaging vital organs. The infected red blood cells also release gametocytes, which infect mosquitoes when they take a blood meal. In the mosquito, the gametocytes multiply and develop into sporozoites, thus completing the parasite's life cycle. Malaria can be prevented by controlling the mosquitoes that spread the parasite and by avoiding mosquito bites by sleeping under insecticide-treated bed nets. Effective treatment with antimalarial drugs also helps to decrease malaria transmission.
Why Was This Study Done?
In 1998, the World Health Organization and several other international agencies launched Roll Back Malaria, a global partnership that aims to reduce the human and socioeconomic costs of malaria. Targets have been raised continually since then, culminating in the Roll Back Malaria Global Malaria Action Plan of 2008, which calls for universal coverage of locally appropriate interventions by 2010 and again tables the long-term goal of malaria eradication for the international community. For malaria control and elimination initiatives to be effective, financial resources must be concentrated in regions where they will have the most impact, so it is essential to have up-to-date and accurate maps to guide effort and expenditure. In 2008, researchers of the Malaria Atlas Project constructed a map that stratified the world into three levels of malaria risk: no risk, unstable transmission risk (occasional focal outbreaks), and stable transmission risk (endemic areas where the disease is always present). Now, the researchers extend this work by describing a new evidence-based method for generating continuous maps of P. falciparum endemicity within the area of stable malaria risk over the entire world's surface. They then use this method to produce a P. falciparum endemicity map for 2007. Endemicity is important because it is a guide to the level of morbidity and mortality a population will suffer, as well as to the intensity of the interventions that will be required to bring the disease under control or, additionally, to interrupt transmission.
What Did the Researchers Do and Find?
The researchers identified nearly 8,000 surveys of P. falciparum parasite rates (PfPR; the percentage of a population with parasites detectable in their blood) completed since 1985 that met predefined criteria for inclusion into a global database of PfPR data. They then used “model-based geostatistics” to build a world map of P. falciparum endemicity for 2007 that took into account where and, importantly, when these surveys were done. Predictions were comprehensive (made for every area of stable transmission globally) and continuous (predicted as an endemicity value between 0% and 100%). The populations at risk at three levels of malaria endemicity were identified to help summarize these findings: low endemicity, where PfPR is below 5% and where it should be technically feasible to eliminate malaria; intermediate endemicity, where PfPR is between 5% and 40% and it should be theoretically possible to interrupt transmission with universal coverage of bed nets; and high endemicity, where PfPR is above 40% and suites of locally appropriate interventions will be needed to bring malaria under control. The global level of malaria endemicity is much reduced when compared with historical maps. Nevertheless, the resulting map indicates that in 2007 almost 60% of the 2.4 billion people at malaria risk were living in areas with a stable risk of P. falciparum transmission—0.69 billion people in Central and South East Asia (CSE Asia), 0.66 billion in Africa, Yemen, and Saudi Arabia (Africa+), and 0.04 billion in the Americas. The people of the Americas were all in the low endemicity class. Although most people exposed to stable risk in CSE Asia were also in the low endemicity class (88%), 11% were in the intermediate class, and 1% were in the high endemicity class. By contrast, high endemicity was most common and widespread in the Africa+ region (53%), but with significant numbers in the intermediate (30%) and low (17%) endemicity classes.
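The three-class stratification used in the study reduces to simple thresholds on PfPR. A minimal sketch follows; the class boundaries come from the paper's definitions, while the aggregation helper and any data fed to it are purely illustrative:

```python
from collections import Counter

def endemicity_class(pfpr):
    """Map a PfPR(2-10) value in percent to the study's three strata:
    low (<= 5%), intermediate (> 5% to < 40%), high (>= 40%)."""
    if pfpr <= 5:
        return "low"        # elimination technically feasible
    if pfpr < 40:
        return "intermediate"  # interruption plausible with bed nets
    return "high"           # suites of interventions required

def population_by_class(areas):
    """Tally population at risk per endemicity class from an iterable
    of (pfpr_percent, population) pairs (illustrative data only)."""
    tally = Counter()
    for pfpr, pop in areas:
        tally[endemicity_class(pfpr)] += pop
    return dict(tally)
```

Applied over predicted endemicity surfaces and gridded population counts, a tally of this shape yields the kind of per-class population-at-risk totals quoted in the summary.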
What Do These Findings Mean?
The accuracy of this new world map of P. falciparum endemicity depends on the assumptions made in its construction and critically on the accuracy of the data fed into it, but because of the statistical methods used to construct this map, it is possible to quantify the uncertainty in the results for all users. Thus, this map (which, together with the data used in its construction, will be freely available) represents an important new resource that clearly indicates areas where malaria control can be improved (for example, Africa) and other areas where malaria elimination may be technically possible. In addition, planned annual updates of the global P. falciparum endemicity map and the PfPR database by the Malaria Atlas Project will help public-health experts to monitor the progress of the malaria control community towards international control and elimination targets.
Additional Information.
Please access these Web sites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.1000048.
A PLoS Medicine Health in Action article (Hay SI, Snow RW (2006) The Malaria Atlas Project: Developing Global Maps of Malaria Risk. PLoS Med 3(12): e473) and a Research Article (Guerra CA, Gikandi PW, Tatem AJ, Noor AM, Smith DL, et al. (2008) The Limits and Intensity of Plasmodium falciparum Transmission: Implications for Malaria Control and Elimination Worldwide. PLoS Med 5(2): e38) also provide further details about the global mapping of malaria risk, and a further Research Article (Snow RW, Guerra CA, Mutheu JJ, Hay SI (2008) International Funding for Malaria Control in Relation to Populations at Risk of Stable Plasmodium falciparum Transmission. PLoS Med 5(7): e142) discusses the financing of malaria control in relation to this risk
Additional national and regional level maps and more information on the global mapping of malaria are available at the Malaria Atlas Project
The MedlinePlus encyclopedia contains a page on malaria (in English and Spanish)
Information is available from the World Health Organization on malaria (in several languages)
The US Centers for Disease Control and Prevention provide information on malaria (in English and Spanish)
Information is available from the Roll Back Malaria Partnership on its approach to the global control of malaria, and on malaria control efforts in specific parts of the world
doi:10.1371/journal.pmed.1000048
PMCID: PMC2659708  PMID: 19323591
19.  Evaluating Drug Prices, Availability, Affordability, and Price Components: Implications for Access to Drugs in Malaysia 
PLoS Medicine  2007;4(3):e82.
Background
Malaysia's stable health care system is facing challenges with increasing medicine costs. To investigate these issues a survey was carried out to evaluate medicine prices, availability, affordability, and the structure of price components.
Methods and Findings
The methodology developed by the World Health Organization (WHO) and Health Action International (HAI) was used. Price and availability data for 48 medicines were collected from 20 public sector facilities, 32 private sector retail pharmacies, and 20 dispensing doctors in four geographical regions of West Malaysia. Medicine prices were compared with international reference prices (IRPs) to obtain a median price ratio. The daily wage of the lowest paid unskilled government worker was used to gauge the affordability of medicines. Price component data were collected throughout the supply chain, and markups, taxes, and other distribution costs were identified. In private pharmacies, innovator brand (IB) prices were 16 times higher than the IRPs, while generics were 6.6 times higher. In dispensing doctor clinics, the figures were 15 times higher for innovator brands and 7.5 times higher for generics. Dispensing doctors applied high markups of 50%–76% for IBs, and up to 316% for generics. Retail pharmacy markups were also high—25%–38% and 100%–140% for IBs and generics, respectively. In the public sector, where medicines are free, availability was low even for medicines on the National Essential Drugs List. For a month's treatment for peptic ulcer disease or hypertension, people have to pay about a week's wages in the private sector.
Conclusions
The free market by definition does not control medicine prices, necessitating price monitoring and control mechanisms. Markups for generic products are greater than for IBs. Reducing the base price without controlling markups may increase profits for retailers and dispensing doctors without reducing the price paid by end users. To increase access and affordability, promotion of generic medicines and improved availability of medicines in the public sector are required.
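The conclusion that cutting the base price alone may not lower what patients pay follows from the multiplicative structure of supply-chain markups. A minimal sketch, with markup values chosen for illustration rather than taken from the survey:

```python
def retail_price(base_price, markups):
    """Apply successive percentage markups (e.g. importer, wholesaler,
    retailer) multiplicatively to a manufacturer's base price.

    Each stage charges its markup on the already marked-up price, so
    late-stage markups compound on earlier ones.
    """
    price = base_price
    for m in markups:
        price *= 1 + m
    return price
```

For example, a 50% wholesale markup followed by a 100% retail markup triples the base price; if percentage markups are uncontrolled, a reduced base price can simply be absorbed by larger markups further down the chain.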
Drug price and availability data were collected from West Malaysian public sector facilities, private sector retail pharmacies, and dispensing doctors. Mark-ups were higher on generic drugs than on innovator brands.
Editors' Summary
Background.
The World Health Organization has said that one-third of the people of the world cannot access the medicines they need. An important reason for this problem is that prices are often too high for people or government-funded health systems to afford. In developing countries, most people who need medicines have to pay for them out of their own pockets. Where the cost of drugs is covered by health systems, spending on medicines is a major part of the total healthcare budget. Governments use a variety of approaches to try to control the cost of drugs and make sure that essential medicines are affordable and not overpriced. According to the theory of “free market economics,” the costs of goods and services are determined by interactions between buyers and sellers and not by government intervention. However, free market economics does not work well at containing the costs of medicines, particularly new medicines, because new medicines are protected by patent law, which legally prevents others from making, using, or selling the medicine for a particular period of time. Therefore, without government intervention, there is nothing to push down prices.
Why Was This Study Done?
Malaysia is a middle-income country with a relatively effective public health system, but it is facing a rapid rise in drug costs. In Malaysia, medicine prices are determined by free-market economics, without any control by government. Government hospitals are expected to provide drugs free, but a substantial proportion of medicines are paid for by patients who buy them directly from private pharmacies or prescribing doctors. There is evidence that Malaysian patients have difficulties accessing the drugs they need and that cost is an important factor. Therefore, the researchers who wrote this paper wanted to examine the cost of different medicines in Malaysia, and their availability and affordability from different sources.
What Did the Researchers Do and Find?
In this research project, 48 drugs were studied, of which 28 were part of a “core list” identified by the World Health Organization as “essential drugs” on the basis of the global burden of disease. The remaining 20 reflected health care needs in Malaysia itself. The costs of each medicine were collected from government hospitals, private pharmacies, and dispensing doctors in four different regions of Malaysia. Data were collected for the “innovator brand” (made by the original patent holder) and for “generic” brands (an equivalent drug to the innovator brand, produced by a different company once the innovator brand no longer has an exclusive patent). The medicine prices were compared against international reference prices (IRP), which are the average prices offered by not-for-profit drug companies to developing countries. Finally, the researchers also compared the cost of the drugs with daily wages, in order to work out their “affordability.”
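The two headline metrics of the WHO/HAI methodology referenced here reduce to simple ratios. A hedged sketch (function names are ours, not the methodology's):

```python
import statistics

def median_price_ratio(unit_prices, irp):
    """Median of the unit prices found across surveyed outlets,
    divided by the international reference price (IRP) for the
    same medicine."""
    return statistics.median(unit_prices) / irp

def affordability_days(treatment_cost, daily_wage):
    """Number of days the lowest-paid unskilled government worker
    must work to pay for a course of treatment."""
    return treatment_cost / daily_wage
```

On these definitions, a median private-pharmacy price 16 times the IRP gives a median price ratio of 16, and a month's supply costing three days' wages yields an affordability figure of 3, matching the style of the numbers reported for ranitidine and fluoxetine above.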
The researchers found that, irrespective of the source of medicines, prices were on average very much higher than the international reference price, ranging from 2.4 times the IRP for innovator brands accessed through public hospitals, to 16 times the IRP for innovator brands accessed through private pharmacies. The availability of medicines was also very poor, with only 25% of generic medicines available on average through the public sector. The affordability of many of the medicines studied was again very poor. For example, one month's supply of ranitidine (a drug for stomach ulcers) was equivalent to around three days' wages for a low-paid government worker, and one month's supply of fluoxetine (an antidepressant) would cost around 26 days' wages.
What Do These Findings Mean?
These results show that essential drugs are very expensive in Malaysia and are not universally available. Many people would not be able to pay for essential medicines. The cost of medicines in Malaysia seems to be much higher than in areas of India and Sri Lanka, although the researchers did not attempt to collect data in order to carry out an international comparison. It is possible that the high cost and low availability in Malaysia are the result of a lack of government regulation. Overall, the findings suggest that the government should set up mechanisms to prevent drug manufacturers from increasing prices too much and thus ensure greater access to essential medicines.
Additional Information.
Please access these Web sites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.0040082.
Read a related PLoS Medicine Perspective article by Suzanne Hill
Information is available from the World Health Organization on Improving Access to Medicines
Information on medicine prices is available from Health Action International
Wikipedia has an entry on Patent (a type of intellectual property that is normally used to prevent other companies from selling a newly invented medicine). (Wikipedia is an internet encyclopedia anyone can edit.)
The Drugs for Neglected Diseases Initiative is an international collaboration between public organizations that aims to develop drugs for people suffering from neglected diseases
doi:10.1371/journal.pmed.0040082
PMCID: PMC1831730  PMID: 17388660
20.  Impact of Intermittent Screening and Treatment for Malaria among School Children in Kenya: A Cluster Randomised Trial 
PLoS Medicine  2014;11(1):e1001594.
Katherine Halliday and colleagues conducted a cluster randomized controlled trial in Kenyan school children in an area of low to moderate malaria transmission to investigate the effect of intermittent screening and treatment of malaria on health and education.
Please see later in the article for the Editors' Summary
Background
Improving the health of school-aged children can yield substantial benefits for cognitive development and educational achievement. However, there is limited experimental evidence of the benefits of alternative school-based malaria interventions or how the impacts of interventions vary according to intensity of malaria transmission. We investigated the effect of intermittent screening and treatment (IST) for malaria on the health and education of school children in an area of low to moderate malaria transmission.
Methods and Findings
A cluster randomised trial was implemented with 5,233 children in 101 government primary schools on the south coast of Kenya in 2010–2012. The intervention was delivered to children randomly selected from classes 1 and 5 who were followed up for 24 months. Once a school term, children were screened by public health workers using malaria rapid diagnostic tests (RDTs), and children (with or without malaria symptoms) found to be RDT-positive were treated with a six dose regimen of artemether-lumefantrine (AL). Given the nature of the intervention, the trial was not blinded. The primary outcomes were anaemia and sustained attention. Secondary outcomes were malaria parasitaemia and educational achievement. Data were analysed on an intention-to-treat basis.
During the intervention period, an average of 88.3% of children in intervention schools were screened at each round, of whom 17.5% were RDT-positive. 80.3% of children in the control group and 80.2% in the intervention group were followed up at 24 months. No impact of the malaria IST intervention was observed on the prevalence of anaemia at either 12 or 24 months (adjusted risk ratio [Adj.RR]: 1.03, 95% CI 0.93–1.13, p = 0.621, and Adj.RR: 1.00, 95% CI 0.90–1.11, p = 0.953, respectively), or on the prevalence of P. falciparum infection or scores of classroom attention. No effect of IST was observed on educational achievement in the older class, but an apparent negative effect was seen on spelling scores in the younger class at 9 and 24 months and on arithmetic scores at 24 months.
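For orientation on how effect estimates like those above are formed, a crude (unadjusted) risk ratio with a Wald-type 95% CI can be sketched as follows. This is only an illustration: the trial's reported Adj.RRs come from models that additionally adjust for covariates and school-level clustering, and the counts below are hypothetical, not taken from the trial.

```python
import math

def risk_ratio_ci(a, n1, b, n2, z=1.96):
    """Crude risk ratio comparing risk a/n1 (intervention) to b/n2 (control),
    with a Wald 95% CI on the log scale."""
    rr = (a / n1) / (b / n2)
    se_log = math.sqrt(1/a - 1/n1 + 1/b - 1/n2)
    lo = math.exp(math.log(rr) - z * se_log)
    hi = math.exp(math.log(rr) + z * se_log)
    return rr, lo, hi

# Hypothetical counts: 400/2,000 anaemic in intervention, 390/2,000 in control
rr, lo, hi = risk_ratio_ci(400, 2000, 390, 2000)
print(f"RR = {rr:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```

A confidence interval spanning 1.0, as here, corresponds to the trial's null finding for anaemia.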
Conclusion
In this setting in Kenya, IST as implemented in this study is not effective in improving the health or education of school children. Possible reasons for the absence of an impact are the marked geographical heterogeneity in transmission, the rapid rate of reinfection following AL treatment, the variable reliability of RDTs, and the relative contribution of malaria to the aetiology of anaemia in this setting.
Trial registration
www.ClinicalTrials.gov NCT00878007
Editors' Summary
Background
Every year, more than 200 million cases of malaria occur worldwide and more than 600,000 people, mostly children living in sub-Saharan Africa, die from this mosquito-borne parasitic infection. Malaria can be prevented by controlling the night-biting mosquitoes that transmit Plasmodium parasites and by sleeping under insecticide-treated nets to avoid mosquito bites. Infection with malaria parasites causes recurring flu-like symptoms and needs to be treated promptly with antimalarial drugs to prevent the development of anaemia (a reduction in red blood cell numbers) and potentially fatal damage to the brain and other organs. Treatment also reduces malaria transmission. In 1998, the World Health Organization and several other international bodies established the Roll Back Malaria Partnership to provide a coordinated global approach to fighting malaria. In 2008, the Partnership launched its Global Malaria Action Plan, which aims to control malaria to reduce the current burden, to eliminate malaria over time country by country, and, ultimately, to eradicate malaria.
Why Was This Study Done?
In recent years, many malaria-endemic countries (countries where malaria is always present) have implemented successful malaria control programs and reduced malaria transmission levels. In these countries, immunity to malaria is now acquired more slowly than in the past, the burden of clinical malaria is shifting from very young children to older children, and infection rates with malaria parasites are now highest among school-aged children. Chronic untreated Plasmodium infection, even when it does not cause symptoms, can negatively affect children's health, cognitive development (the acquisition of thinking skills), and educational achievement. However, little is known about how school-based malaria interventions affect the health of children or their educational outcomes. In this cluster randomized trial, the researchers investigate the effect of intermittent screening and treatment (IST) of malaria on the health and education of school children in a rural area of southern Kenya with low-to-moderate malaria transmission. Cluster randomized trials compare the outcomes of groups (“clusters”) of people randomly assigned to receive alternative interventions. IST of malaria involves periodic screening of individuals for Plasmodium infection followed by treatment of everyone who is infected, including people without symptoms, with antimalarial drugs.
What Did the Researchers Do and Find?
The researchers enrolled more than 5,000 children aged between 5 and 20 years from 101 government primary schools in Kenya into their 24-month study. Half the schools were randomly selected to receive the IST intervention (screening once a school term for infection with a malaria parasite with a rapid diagnostic test [RDT] and treatment of all RDT-positive children, with or without malaria symptoms, with six doses of artemether-lumefantrine), which was delivered to randomly selected children from classes 1 and 5 (which contained younger and older children, respectively). During the study, 17.5% of the children in the intervention schools were RDT-positive at screening on average. The prevalences of anaemia and parasitaemia (the proportion of children with anaemia and the proportion who were RDT-positive, respectively) were similar in the intervention and control groups at the 12-month and 24-month follow-up and there was no difference between the two groups in classroom attention scores at the 9-month and 24-month follow-up. The IST intervention also had no effect on educational achievement in the older class but, unexpectedly, appeared to have a negative effect on spelling and arithmetic scores in the younger class.
What Do These Findings Mean?
These findings indicate that, in this setting in Kenya, IST as implemented in this study provided no health or education benefits to school children. The finding that the educational achievement of younger children was lower in the intervention group than in the control group may be a chance finding or may indicate that apprehension about the finger prick needed to take blood for the RDT had a negative effect on the performance of younger children during educational tests. The researchers suggest that their failure to demonstrate that the school-based IST intervention they tested had any long-lasting health or education benefits may be because, in a low-to-moderate malaria transmission setting, most of the children screened did not require treatment and those who did lived in focal high transmission regions, where rapid re-infection occurred between screening rounds. Importantly, however, these findings suggest that school screening using RDT could be an efficient way to identify transmission hotspots in communities that should be targeted for malaria control interventions.
Additional Information
Please access these websites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.1001594.
This study is further discussed in a PLOS Medicine Perspective by Lorenz von Seidlein
Information is available from the World Health Organization on malaria (in several languages); the 2012 World Malaria Report provides details of the current global malaria situation
The US Centers for Disease Control and Prevention provide information on malaria (in English and Spanish), including a selection of personal stories about children with malaria
Information is available from the Roll Back Malaria Partnership on the global control of malaria and on the Global Malaria Action Plan (in English and French); its website includes a fact sheet about malaria in Kenya
MedlinePlus provides links to additional information on malaria (in English and Spanish)
More information about this trial is available
More information about malaria control in schools is provided in the toolkit
doi:10.1371/journal.pmed.1001594
PMCID: PMC3904819  PMID: 24492859
21.  Major Burden of Severe Anemia from Non-Falciparum Malaria Species in Southern Papua: A Hospital-Based Surveillance Study 
PLoS Medicine  2013;10(12):e1001575.
Ric Price and colleagues use hospital-based surveillance data to estimate the risk of severe anemia and mortality associated with endemic Plasmodium species in southern Papua, Indonesia.
Please see later in the article for the Editors' Summary
Background
The burden of anemia attributable to non-falciparum malarias in regions with Plasmodium co-endemicity is poorly documented. We compared the hematological profile of patients with and without malaria in southern Papua, Indonesia.
Methods and Findings
Clinical and laboratory data were linked for all patients presenting to a referral hospital between April 2004 and December 2012. Data were available on patient demographics, malaria diagnosis, hemoglobin concentration, and clinical outcome, but other potential causes of anemia could not be identified reliably. Of 922,120 patient episodes (837,989 as outpatients and 84,131 as inpatients), a total of 219,845 (23.8%) were associated with a hemoglobin measurement, of which 67,696 (30.8%) had malaria. Patients with P. malariae infection had the lowest hemoglobin concentration (n = 1,608, mean = 8.93 [95% CI 8.81–9.06]), followed by those with mixed species infections (n = 8,645, mean = 9.22 [95% CI 9.16–9.28]), P. falciparum (n = 37,554, mean = 9.47 [95% CI 9.44–9.50]), and P. vivax (n = 19,858, mean = 9.53 [95% CI 9.49–9.57]); p-value for all comparisons <0.001. Severe anemia (hemoglobin <5 g/dl) was present in 8,151 (3.7%) patients. Compared to patients without malaria, those with mixed Plasmodium infection were at greatest risk of severe anemia (adjusted odds ratio [AOR] 3.25 [95% CI 2.99–3.54]); AORs for severe anemia associated with P. falciparum, P. vivax, and P. malariae were 2.11 (95% CI 2.00–2.23), 1.87 (95% CI 1.74–2.01), and 2.18 (95% CI 1.76–2.67), respectively, p<0.001. Overall, 12.2% (95% CI 11.2%–13.3%) of severe anemia was attributable to non-falciparum infections compared with 15.1% (95% CI 13.9%–16.3%) for P. falciparum monoinfections. Patients with severe anemia had an increased risk of death (AOR = 5.80 [95% CI 5.17–6.50]; p<0.001). Not all patients had a hemoglobin measurement, thus limitations of the study include the potential for selection bias, and possible residual confounding in multivariable analyses.
Conclusions
In Papua P. vivax is the dominant cause of severe anemia in early infancy, mixed P. vivax/P. falciparum infections are associated with a greater hematological impairment than either species alone, and in adulthood P. malariae, although rare, is associated with the lowest hemoglobin concentration. These findings highlight the public health importance of integrated genus-wide malaria control strategies in areas of Plasmodium co-endemicity.
Editors' Summary
Background
Malaria—a mosquito-borne parasitic disease—is a global public health problem. Five parasite species cause human malaria—Plasmodium falciparum, P. vivax, P. ovale, P. malariae, and P. knowlesi. Of these, P. vivax is the commonest and most widely distributed, whereas P. falciparum causes the most deaths—about a million every year. All these parasites enter their human host when an infected mosquito takes a blood meal. The parasites migrate to the liver where they replicate and mature into a parasitic form known as merozoites. After 8–9 days, the merozoites are released from the liver cells and invade red blood cells where they replicate rapidly before bursting out and infecting more red blood cells. Malaria's recurring flu-like symptoms are caused by this cyclical increase in parasites in the blood. Malaria needs to be treated promptly with antimalarial drugs to prevent the development of potentially fatal complications. Infections with P. falciparum in particular can cause anemia (a reduction in red blood cell numbers) and can damage the brain and other vital organs by blocking the capillaries that supply these organs with blood.
Why Was This Study Done?
It is unclear what proportion of anemia is attributable to non-falciparum malarias in regions of the world where several species of malaria parasite are always present (Plasmodium co-endemicity). Public health officials in such regions need to know whether non-falciparum malarias are a major cause of anemia when designing malaria control strategies. If P. vivax, for example, is a major cause of anemia in an area where P. vivax and P. falciparum co-exist, then any malaria control strategies that are implemented need to take into account the biological differences between the parasites. In this hospital-based cohort study, the researchers investigate the burden of severe anemia from the endemic Plasmodium species in southern Papua, Indonesia.
What Did the Researchers Do and Find?
The researchers used hospital record numbers to link clinical and laboratory data for patients presenting to a referral hospital in southern Papua over an 8-year period. The hemoglobin level (an indicator of anemia) was measured in about a quarter of hospital presentations (some patients attended the hospital several times). About a third of the presentations with a hemoglobin measurement (67,696 presentations) involved clinical malaria. Patients with P. malariae infection had the lowest average hemoglobin concentration. Patients with mixed species, P. falciparum, and P. vivax infections had slightly higher average hemoglobin levels but all these levels were below the normal range for people living in Papua. Among the patients who had their hemoglobin status assessed, 3.7% had severe anemia. After allowing for other factors that alter the risk of anemia (“confounding” factors such as age), patients with mixed Plasmodium infection were more than three times as likely to have severe anemia as patients without malaria. Patients with P. falciparum, P. vivax, or P. malariae infections were about twice as likely to have severe anemia as patients without malaria. About 12.2% of severe anemia was attributable to non-falciparum infections, 15.1% was attributable to P. falciparum monoinfections, and P. vivax was the dominant cause of severe anemia in infancy. Finally, compared to patients without anemia, patients with severe anemia had nearly a 6-fold higher risk of death.
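The attributable-fraction figures quoted above can be illustrated with the case-based formula PAF = pc × (OR − 1)/OR, where pc is the proportion of cases with the exposure and OR the (adjusted) odds ratio. This is only a sketch of the general technique, not the authors' exact calculation, and the exposure proportion below is a hypothetical placeholder, not a figure from the study.

```python
def attributable_fraction(pc: float, aor: float) -> float:
    """Case-based population attributable fraction:
    pc = proportion of cases exposed, aor = adjusted odds ratio."""
    return pc * (aor - 1.0) / aor

# Hypothetical example: if 20% of severe-anemia cases had P. vivax
# monoinfection and the adjusted odds ratio were 1.87 (as reported),
# the fraction of severe anemia attributable to P. vivax would be:
paf = attributable_fraction(0.20, 1.87)
print(f"{paf:.1%}")
```

Summing such fractions over the non-falciparum exposure categories, using the observed case distribution, is one way to arrive at an overall attributable percentage of the kind reported here.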
What Do These Findings Mean?
These findings provide a comparative assessment of the pattern of anemia associated with non-falciparum malarias in Papua and an estimate of the public health importance of these malarias. Although the accuracy of these findings may be affected by residual confounding (for example, the researchers did not consider nutritional status when calculating how much malaria infection increases the risk of anemia) and other limitations of the study design, non-falciparum malarias clearly make a major contribution to the burden of anemia in southern Papua. In particular, these findings reveal the large contribution that P. vivax makes to severe anemia in infancy, show that the hematological (blood-related) impact of P. malariae is most apparent in adulthood, and suggest, in contrast to some previous reports, that mixed P. vivax/P. falciparum infection is associated with a higher risk of severe anemia than monoinfection with either species. These findings, which need to be confirmed in other settings, highlight the public health importance of implementing integrated malaria control strategies that aim to control all Plasmodium species rather than a single species in regions of Plasmodium co-endemicity.
Additional Information
Please access these websites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.1001575.
This study is further discussed in a PLOS Medicine Perspective by Gosling and Hsiang
Information is available from the World Health Organization on malaria (in several languages); the 2012 World Malaria Report provides details of the current global malaria situation
The US Centers for Disease Control and Prevention provide information on malaria (in English and Spanish), including information on different Plasmodium species and a selection of personal stories about malaria
The Malaria Vaccine Initiative has fact sheets on Plasmodium falciparum malaria and on Plasmodium vivax malaria
MedlinePlus provides links to additional information on malaria and on anemia (in English and Spanish)
Information is available from the WorldWide Antimalarial Resistance Network on antimalarial drug resistance for P. falciparum and P. vivax
doi:10.1371/journal.pmed.1001575
PMCID: PMC3866090  PMID: 24358031
22.  Evidence for the Selective Reporting of Analyses and Discrepancies in Clinical Trials: A Systematic Review of Cohort Studies of Clinical Trials 
PLoS Medicine  2014;11(6):e1001666.
In a systematic review of cohort studies, Kerry Dwan and colleagues examine the evidence for selective reporting and discrepancies in analyses between journal publications and other documents for clinical trials.
Please see later in the article for the Editors' Summary
Background
Most publications about selective reporting in clinical trials have focussed on outcomes. However, selective reporting of analyses for a given outcome may also affect the validity of findings. If analyses are selected on the basis of the results, reporting bias may occur. The aims of this study were to review and summarise the evidence from empirical cohort studies that assessed discrepant or selective reporting of analyses in randomised controlled trials (RCTs).
Methods and Findings
A systematic review was conducted and included cohort studies that assessed any aspect of the reporting of analyses of RCTs by comparing different trial documents, e.g., protocol compared to trial report, or different sections within a trial publication. The Cochrane Methodology Register, Medline (Ovid), PsycInfo (Ovid), and PubMed were searched on 5 February 2014. Two authors independently selected studies, performed data extraction, and assessed the methodological quality of the eligible studies. Twenty-two studies (containing 3,140 RCTs) published between 2000 and 2013 were included; all of them reported on discrepancies between information given in different sources. Discrepancies were found in statistical analyses (eight studies), composite outcomes (one study), the handling of missing data (three studies), unadjusted versus adjusted analyses (three studies), handling of continuous data (three studies), and subgroup analyses (12 studies). Discrepancy rates varied, ranging from 7% (3/42) to 88% (7/8) in statistical analyses, 46% (36/79) to 82% (23/28) in adjusted versus unadjusted analyses, and 61% (11/18) to 100% (25/25) in subgroup analyses. This review is limited in that none of the included studies investigated the evidence for bias resulting from selective reporting of analyses. It was not possible to combine studies to provide overall summary estimates, and so the results of studies are discussed narratively.
Conclusions
Discrepancies in analyses between publications and other study documentation were common, but reasons for these discrepancies were not discussed in the trial reports. To ensure transparency, protocols and statistical analysis plans need to be published, and investigators should adhere to these or explain discrepancies.
Editors' Summary
Background
In the past, clinicians relied on their own experience when choosing the best treatment for their patients. Nowadays, they turn to evidence-based medicine—the systematic review and appraisal of trials, studies that investigate the benefits and harms of medical treatments in patients. However, evidence-based medicine can guide clinicians only if all the results from clinical trials are published in an unbiased and timely manner. Unfortunately, the results of trials in which a new drug performs better than existing drugs are more likely to be published than those in which the new drug performs badly or has unwanted side effects (publication bias). Moreover, trial outcomes that support the use of a new treatment are more likely to be published than those that do not support its use (outcome reporting bias). Recent initiatives—such as making registration of clinical trials in a trial registry (for example, ClinicalTrials.gov) a prerequisite for publication in medical journals—aim to prevent these biases, which pose a threat to informed medical decision-making.
Why Was This Study Done?
Selective reporting of analyses of outcomes may also affect the validity of clinical trial findings. Sometimes, for example, a trial publication will include a per protocol analysis (which considers only the outcomes of patients who received their assigned treatment) rather than a pre-planned intention-to-treat analysis (which considers the outcomes of all the patients regardless of whether they received their assigned treatment). If the decision to publish the per protocol analysis is based on the results of this analysis being more favorable than those of the intention-to-treat analysis (which more closely resembles “real” life), then “analysis reporting bias” has occurred. In this systematic review, the researchers investigate the selective reporting of analyses and discrepancies in randomized controlled trials (RCTs) by reviewing published studies that assessed selective reporting of analyses in groups (cohorts) of RCTs and discrepancies in analyses of RCTs between different sources (for example, between the protocol in a trial registry and the journal publication) or different sections of a source. A systematic review uses predefined criteria to identify all the research on a given topic.
What Did the Researchers Do and Find?
The researchers identified 22 cohort studies (containing 3,140 RCTs) that were eligible for inclusion in their systematic review. All of these studies reported on discrepancies between the information provided by the RCTs in different places, but none investigated the evidence for analysis reporting bias. Several of the cohort studies reported, for example, that there were discrepancies in the statistical analyses included in the different documents associated with the RCTs included in their analysis. Other types of discrepancies reported by the cohort studies included discrepancies in the reporting of composite outcomes (an outcome in which multiple end points are combined) and in the reporting of subgroup analyses (investigations of outcomes in subgroups of patients that should be predefined in the trial protocol to avoid bias). Discrepancy rates varied among the RCTs according to the types of analyses and cohort studies considered. Thus, whereas in one cohort study discrepancies were present in the statistical test used for the analysis of the primary outcome in only 7% of the included trials, in another they were present in the subgroup analyses of every included trial.
What Do These Findings Mean?
These findings indicate that discrepancies in analyses between publications and other study documents such as protocols in trial registries are common. The reasons for these discrepancies in analyses were not discussed in trial reports but may be the result of reporting bias, errors, or legitimate departures from a pre-specified protocol. For example, a statistical analysis that is not specified in the trial protocol may sometimes appear in a publication because the journal requested its inclusion as a condition of publication. The researchers suggest that it may be impossible for systematic reviewers to distinguish between these possibilities simply by looking at the source documentation. Instead, they suggest, it may be necessary for reviewers to contact the trial authors. However, to make selective reporting of analyses more easily detectable, they suggest that protocols and analysis plans should be published and that investigators should be required to stick to these plans or explain any discrepancies when they publish their trial results. Together with other initiatives, this approach should help improve the quality of evidence-based medicine and, as a result, the treatment of patients.
Additional Information
Please access these websites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.1001666.
Wikipedia has pages on evidence-based medicine, on systematic reviews, and on publication bias (note: Wikipedia is a free online encyclopedia that anyone can edit; available in several languages)
ClinicalTrials.gov provides information about the US National Institutes of Health clinical trial registry, including background information about clinical trials
The Cochrane Collaboration is a global independent network of health practitioners, researchers, patient advocates, and others that aims to promote evidence-informed health decision-making by producing high-quality, relevant, accessible systematic reviews and other synthesized research evidence; the Cochrane Handbook for Systematic Reviews of Interventions describes the preparation of systematic reviews in detail
PLOS Medicine recently launched a Reporting Guidelines Collection, an open-access collection of reporting guidelines, commentary, and related research on guidelines from across PLOS journals that aims to help advance the efficiency, effectiveness, and equitability of the dissemination of biomedical information
doi:10.1371/journal.pmed.1001666
PMCID: PMC4068996  PMID: 24959719
23.  Reinterpreting Ethnic Patterns among White and African American Men Who Inject Heroin: A Social Science of Medicine Approach 
PLoS Medicine  2006;3(10):e452.
Background
Street-based heroin injectors represent an especially vulnerable population group subject to negative health outcomes and social stigma. Effective clinical treatment and public health intervention for this population requires an understanding of their cultural environment and experiences. Social science theory and methods offer tools to understand the reasons for economic and ethnic disparities that cause individual suffering and stress at the institutional level.
Methods and Findings
We used a cross-methodological approach that incorporated quantitative, clinical, and ethnographic data collected by two contemporaneous long-term San Francisco studies, one epidemiological and one ethnographic, to explore the impact of ethnicity on street-based heroin-injecting men 45 years of age or older who were self-identified as either African American or white. We triangulated our ethnographic findings by statistically examining 14 relevant epidemiological variables stratified by median age and ethnicity. We observed significant differences in social practices between self-identified African Americans and whites in our ethnographic social network sample with respect to patterns of (1) drug consumption; (2) income generation; (3) social and institutional relationships; and (4) personal health and hygiene. African Americans and whites tended to experience different structural relationships to their shared condition of addiction and poverty. Specifically, this generation of San Francisco injectors grew up as the children of poor rural to urban immigrants in an era (the late 1960s through 1970s) when industrial jobs disappeared and heroin became fashionable. This was also when violent segregated inner city youth gangs proliferated and the federal government initiated its “War on Drugs.” African Americans had earlier and more negative contact with law enforcement but maintained long-term ties with their extended families. Most of the whites were expelled from their families when they began engaging in drug-related crime. These historical-structural conditions generated distinct presentations of self. Whites styled themselves as outcasts, defeated by addiction. They professed to be injecting heroin to stave off “dopesickness” rather than to seek pleasure. African Americans, in contrast, cast their physical addiction as an oppositional pursuit of autonomy and pleasure. They considered themselves to be professional outlaws and rejected any appearance of abjection. 
Many, but not all, of these ethnographic findings were corroborated by our epidemiological data, highlighting the variability of behaviors within ethnic categories.
Conclusions
Bringing quantitative and qualitative methodologies and perspectives into a collaborative dialog among cross-disciplinary researchers highlights the fact that clinical practice must go beyond simple racial or cultural categories. A clinical social science approach provides insights into how sociocultural processes are mediated by historically rooted and institutionally enforced power relations. Recognizing the logical underpinnings of ethnically specific behavioral patterns of street-based injectors is the foundation for cultural competence and for successful clinical relationships. It reduces the risk of suboptimal medical care for an exceptionally vulnerable and challenging patient population. Social science approaches can also help explain larger-scale patterns of health disparities; inform new approaches to structural and institutional-level public health initiatives; and enable clinicians to take more leadership in changing public policies that have negative health consequences.
Bourgois and colleagues found that the African American and white men in their study had a different pattern of drug use and risk behaviors, adopted different strategies for survival, and had different personal histories.
Editors' Summary
Background.
There are stark differences in the health of different ethnic groups in America. For example, the life expectancy for white men is 75.4 years, but it is only 69.2 years for African American men. The reasons behind these disparities are unclear, though there are several possible explanations. Perhaps, for example, different ethnic groups are treated differently by health professionals (with some groups receiving poorer quality health care). Or maybe the health disparities are due to differences across ethnic groups in income level (we know that richer people are healthier). These disparities are likely to persist unless we gain a better understanding of how they arise.
Why Was This Study Done?
The researchers wanted to study the health of a very vulnerable community of people: heroin users living on the streets in the San Francisco Bay Area. The health status of this community is extremely poor, and its members are highly stigmatized—including by health professionals themselves. The researchers wanted to know whether African American men and white men who live on the streets have a different pattern of drug use, whether they adopt varying strategies for survival, and whether they have different personal histories. Knowledge of such differences would help the health community to provide more tailored and culturally appropriate interventions. Physicians, nurses, and social workers often treat street-based drug users, especially in emergency rooms and free clinics. These health professionals regularly report that their interactions with street-based drug users are frustrating and confrontational. The researchers hoped that their study would help these professionals to have a better understanding of the cultural backgrounds and motivations of their drug-using patients.
What Did the Researchers Do and Find?
Over the course of six years, the researchers directly observed about 70 men living on the streets who injected heroin as they went about their usual lives (this type of research is called “participant observation”). The researchers specifically looked to see whether there were differences between the white and African American men. All the men gave their consent to be studied in this way and to be photographed. The researchers also studied a database of interviews with almost 7,000 injection drug users conducted over five years, drawing out the data on differences between white and African American men. The researchers found that the white men were more likely to supplement their heroin use with inexpensive fortified wine, while African American men were more likely to supplement heroin with crack. Most of the white men were expelled from their families when they began engaging in drug-related crime, and these men tended to consider themselves as destitute outcasts. African American men had earlier and more negative contact with law enforcement but maintained long-term ties with their extended families, and these men tended to consider themselves as professional outlaws. The white men persevered less in attempting to find a vein in which to inject heroin, and so were more likely to inject the drug directly under the skin—this meant that they were more likely to suffer from skin abscesses. The white men generated most of their income from panhandling (begging for money), while the African American men generated most of their income through petty crime and/or through offering services such as washing car windows at gas stations.
What Do These Findings Mean?
Among street-based heroin users, there are important differences between white men and African American men in the type of drugs used, the method of drug use, their social backgrounds, the way in which they identify themselves, and the health risks that they take. By understanding these differences, health professionals should be better placed to provide tailored and appropriate care when these men present to clinics and emergency rooms. As the researchers say, “understanding of different ethnic populations of drug injectors may reduce difficult clinical interactions and resultant physician frustration while improving patient access and adherence to care.” One limitation of this study is that the researchers studied one specific community in one particular area of the US—so we should not assume that their findings would apply to street-based heroin users elsewhere.
Additional Information.
Please access these Web sites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.0030452.
The US Centers for Disease Control and Prevention (CDC) has a web page on HIV prevention among injection drug users
The World Health Organization has collected documents on reducing the risk of HIV in injection drug users and on harm reduction approaches
The International Harm Reduction Association has information relevant to a global audience on reducing drug-related harm among individuals and communities
US-focused information on harm reduction is available via the websites of the Harm Reduction Coalition and the Chicago Recovery Alliance
Canada-focused information can be found at the Street Works Web site
The Harm Reduction Journal publishes open-access articles
The CDC has a web page on eliminating racial and ethnic health disparities
The Drug Policy Alliance has a web page on drug policy in the United States
doi:10.1371/journal.pmed.0030452
PMCID: PMC1621100  PMID: 17076569
24.  Reducing the Impact of the Next Influenza Pandemic Using Household-Based Public Health Interventions 
PLoS Medicine  2006;3(9):e361.
Background
The outbreak of highly pathogenic H5N1 influenza in domestic poultry and wild birds has caused global concern over the possible evolution of a novel human strain [1]. If such a strain emerges, and is not controlled at source [2,3], a pandemic is likely to result. Health policy in most countries will then be focused on reducing morbidity and mortality.
Methods and Findings
We estimate the expected reduction in primary attack rates for different household-based interventions using a mathematical model of influenza transmission within and between households. We show that, for lower transmissibility strains [2,4], the combination of household-based quarantine, isolation of cases outside the household, and targeted prophylactic use of anti-virals will be highly effective and likely feasible across a range of plausible transmission scenarios. For example, for a basic reproductive number (the average number of people infected by a typically infectious individual in an otherwise susceptible population) of 1.8, assuming only 50% compliance, this combination could reduce the infection (symptomatic) attack rate from 74% (49%) to 40% (27%), requiring peak quarantine and isolation levels of 6.2% and 0.8% of the population, respectively, and an overall anti-viral stockpile of 3.9 doses per member of the population. Although contact tracing may be additionally effective, the resources required make it impractical in most scenarios.
Conclusions
National influenza pandemic preparedness plans currently focus on reducing the impact associated with a constant attack rate, rather than on reducing transmission. Our findings suggest that the additional benefits and resource requirements of household-based interventions in reducing average levels of transmission should also be considered, even when expected levels of compliance are only moderate.
Voluntary household-based quarantine and external isolation are likely to be effective in limiting the morbidity and mortality of an influenza pandemic, even if such a pandemic cannot be entirely prevented, and even if compliance with these interventions is moderate.
Editors' Summary
Background.
Naturally occurring variation in the influenza virus can lead both to localized annual epidemics and to less frequent global pandemics of catastrophic proportions. The most destructive of the three influenza pandemics of the 20th century, the so-called Spanish flu of 1918–1919, is estimated to have caused 20 million deaths. As evidenced by ongoing tracking efforts and news media coverage of H5N1 avian influenza, contemporary approaches to monitoring and communications can be expected to alert health officials and the general public to the emergence of new, potentially pandemic strains before they spread globally.
Why Was This Study Done?
In order to act most effectively on advance notice of an approaching influenza pandemic, public health workers need to know which available interventions are likely to be most effective. This study was done to estimate the effectiveness of specific preventive measures that communities might implement to reduce the impact of pandemic flu. In particular, the study evaluates methods to reduce person-to-person transmission of influenza, in the likely scenario that complete control cannot be achieved by mass vaccination and anti-viral treatment alone.
What Did the Researchers Do and Find?
The researchers developed a mathematical model—essentially a computer simulation—to simulate the course of pandemic influenza in a hypothetical population at risk for infection at home, through external peer networks such as schools and workplaces, and through general community transmission. Parameters such as the distribution of household sizes, the rate at which individuals develop symptoms from nonpandemic viruses, and the risk of infection within households were derived from demographic and epidemiologic data from Hong Kong, as well as empirical studies of influenza transmission. A model based on these parameters was then used to calculate the effects of interventions including voluntary household quarantine, voluntary individual isolation in a facility outside the home, and contact tracing (that is, asking infectious individuals to identify people whom they may have infected and then warning those people) on the spread of pandemic influenza through the population. The model also took into account the anti-viral treatment of exposed, asymptomatic household members and of individuals in isolation, and assumed that all intervention strategies were put into place before the arrival of individuals infected with the pandemic virus.
  Using this model, the authors predicted that even if only half of the population were to comply with public health interventions, the proportion infected during the first year of an influenza pandemic could be substantially reduced by a combination of household-based quarantine, isolation of actively infected individuals in a location outside the household, and targeted prophylactic treatment of exposed individuals with anti-viral drugs. Based on an influenza-associated mortality rate of 0.5% (as has been estimated for New York City in the 1918–1919 pandemic), the magnitude of the predicted benefit of these interventions is a reduction from 49% to 27% in the proportion of the population who become ill in the first year of the pandemic, which would correspond to 16,000 fewer deaths in a city the size of Hong Kong (6.8 million people). In the model, anti-viral treatment appeared to be about as effective as isolation when each was used in combination with household quarantine, but would require stockpiling 3.9 doses of anti-viral for each member of the population. Contact tracing was predicted to provide a modest additional benefit over quarantine and isolation, but also to increase considerably the proportion of the population in quarantine.
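The baseline figures quoted above are consistent with the standard final-size relation for a simple epidemic model, z = 1 − exp(−R0·z), where z is the fraction of the population ultimately infected. The sketch below is not the authors' household-structured model, which is far more detailed; it is a minimal homogeneous-mixing illustration of how an attack rate of roughly 74% follows from a basic reproductive number of 1.8:

```python
import math

def final_size(r0, tol=1e-10):
    """Solve z = 1 - exp(-r0 * z) by fixed-point iteration.

    z is the fraction of a fully susceptible, homogeneously mixing
    population ultimately infected in an epidemic with basic
    reproductive number r0. Converges for r0 > 1.
    """
    z = 0.5  # initial guess
    while True:
        z_new = 1.0 - math.exp(-r0 * z)
        if abs(z_new - z) < tol:
            return z_new
        z = z_new

# R0 = 1.8, the scenario discussed in the summary
z = final_size(1.8)
print(f"Predicted infection attack rate: {z:.0%}")  # ~73%, close to the 74% quoted
```

The household-based interventions modelled in the study work precisely by lowering the effective reproductive number, which this relation shows translates nonlinearly into a lower final attack rate.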
What Do These Findings Mean?
This study predicts that voluntary household-based quarantine and external isolation can be effective in limiting the morbidity and mortality of an influenza pandemic, even if such a pandemic cannot be entirely prevented, and even if compliance with these interventions is far from uniform. These simulations can therefore inform preparedness plans in the absence of data from actual intervention trials, which would be impossible outside (and impractical within) the context of an actual pandemic. Like all mathematical models, however, the one presented in this study relies on a number of assumptions regarding the characteristics and circumstances of the situation that it is intended to represent. For example, the authors found that the efficacy of policies to reduce the rate of infection vary according to the ease with which a given virus spreads from person to person. Because this parameter (known as the basic reproductive ratio, R0) cannot be reliably predicted for a new viral strain based on past epidemics, the authors note that in an actual influenza pandemic rapid determinations of R0 in areas already involved would be necessary to finalize public health responses in threatened areas. Further, the implementation of the interventions that appear beneficial in this model would require devoting attention and resources to practical considerations, such as how to staff isolation centers and provide food and water to those in household quarantine. However accurate the scientific data and predictive models may be, their effectiveness can only be realized through well-coordinated local, as well as international, efforts.
Additional Information.
Please access these Web sites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.0030361.
• World Health Organization influenza pandemic preparedness page
• US Department of Health and Human Services avian and pandemic flu information site
• Pandemic influenza page from the Public Health Agency of Canada
• Emergency planning page on pandemic flu from the England Department of Health
• Wikipedia entry on pandemic influenza with links to individual country resources (note: Wikipedia is a free Internet encyclopedia that anyone can edit)
doi:10.1371/journal.pmed.0030361
PMCID: PMC1526768  PMID: 16881729
25.  International Funding for Malaria Control in Relation to Populations at Risk of Stable Plasmodium falciparum Transmission 
PLoS Medicine  2008;5(7):e142.
Background
The international financing of malaria control has increased significantly in the last ten years in parallel with calls to halve the malaria burden by the year 2015. The allocation of funds to countries should reflect the size of the populations at risk of infection, disease, and death. To examine this relationship, we compare an audit of international commitments with an objective assessment of national need: the population at risk of stable Plasmodium falciparum malaria transmission in 2007.
Methods and Findings
The national distributions of populations at risk of stable P. falciparum transmission were projected to the year 2007 for each of 87 P. falciparum–endemic countries. Systematic online- and literature-based searches were conducted to audit the international funding commitments made for malaria control by major donors between 2002 and 2007. These figures were used to generate annual malaria funding allocation (in US dollars) per capita population at risk of stable P. falciparum in 2007. Almost US$1 billion are distributed each year to the 1.4 billion people exposed to stable P. falciparum malaria risk. This is less than US$1 per person at risk per year. Forty percent of this total comes from the Global Fund to Fight AIDS, Tuberculosis and Malaria. Substantial regional and national variations in disbursements exist. While the distribution of funds is found to be broadly appropriate, specific high population density countries receive disproportionately less support to scale up malaria control. Additionally, an inadequacy of current financial commitments by the international community was found: under-funding could range from 50% to 450%, depending on which global assessment of the cost required to scale up malaria control is adopted.
Conclusions
Without further increases in funding and appropriate targeting of global malaria control investment it is unlikely that international goals to halve disease burdens by 2015 will be achieved. Moreover, the additional financing requirements to move from malaria control to malaria elimination have not yet been considered by the scientific or international community.
To reach global malaria control goals, Robert Snow and colleagues argue that more international funding is needed but that it must be targeted at specific countries most at risk.
Editors' Summary
Background.
Malaria is one of the most common infectious diseases in the world and one of the greatest global public health problems. The Plasmodium falciparum parasite causes approximately 500 million cases each year and over one million deaths. More than 40% of the world's population is at risk of malaria.
The Millennium Development Goals (MDGs), established by the United Nations in 2000, include a target in Goal 6: “to have halted by 2015 and begun to reverse the incidence of malaria and other major diseases.” Following the launch of the MDG and international initiatives like Roll Back Malaria, there has been an upsurge in support for malaria control. This effort has included the formation of the Global Fund to Fight AIDS, Tuberculosis and Malaria (GFATM) and considerable funding from the US President's Malaria Initiative, the World Bank, the UK Department for International Development, USAID, and nongovernmental agencies and foundations like the Bill & Melinda Gates Foundation. But it is not yet clear how equitable or effective the financial commitments to malaria control have been.
Why Was This Study Done?
As part of the activities of the Malaria Atlas Project, the researchers had previously generated a global map of the limits of P. falciparum transmission. This map detailed areas where risk is moderate or high (stable transmission areas where malaria is endemic) and areas where the risk of transmission is low (unstable transmission areas where sporadic outbreaks of malaria may occur). Because the level of funding to control malaria should be proportionate to the size of the populations at risk, the researchers in this study appraised whether the areas of greatest need were receiving financial resources in proportion to this risk. That is, whether there is equity in how malaria funding is allocated.
What Did the Researchers Do and Find?
To assess the international financing of malaria control, the researchers conducted an audit of financial commitments to malaria control of the GFATM, national governments, and other donors for the period 2002 to 2007. To assess need, they estimated the population at risk of stable P. falciparum malaria transmission in 2007, building on their previous malaria map. Financial commitments were identified via online and literature searches, including the GFATM Web site, the World Malaria Report produced by WHO and UNICEF, and various other sources of financial information. Together these data allowed the authors to generate an estimate of the annual malaria funding allocation per capita population at risk of P. falciparum.
Of the 87 malaria-endemic countries, 76 received malaria funding commitments by the end of 2007. Overall, annual funding amounted to US$1 billion, or less than US$1 per person at risk. Forty percent came from the GFATM, and the remainder from a mix of national government and external donors. The authors found great regional variation in the levels of funding. For example, looking at just the countries approved for GFATM funding, Myanmar was awarded an average annual per capita-at-risk amount of US$0.01 while Suriname was awarded US$147. With all financial commitments combined, ten countries had per capita annual support of more than US$4 per person, but 34 countries had less than US$1, including 16 where annual malaria support was less than US$0.50 per capita. These 16 countries represent 50% of the global population at risk and include seven of the poorest countries in Africa and two of the most densely populated stable endemic countries in the world (India and Indonesia).
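The headline per-capita figure follows from simple division of the totals quoted above. A quick sketch using the summary's rounded numbers (US$1 billion per year, 1.4 billion people at risk, 40% from the GFATM):

```python
total_annual_funding = 1.0e9   # US$1 billion per year (rounded, from the summary)
population_at_risk = 1.4e9     # people exposed to stable P. falciparum risk

# Annual funding per person at risk -- the study's key equity metric
per_capita = total_annual_funding / population_at_risk
print(f"Annual malaria funding per person at risk: US${per_capita:.2f}")
# ~US$0.71, i.e. less than US$1 per person at risk per year

# Share contributed by the GFATM (40%, per the summary)
gfatm_contribution = 0.40 * total_annual_funding
print(f"GFATM contribution: US${gfatm_contribution / 1e6:.0f} million per year")
```

The same per-capita metric, computed country by country, is what exposes the disparities the authors report, such as US$0.01 for Myanmar versus US$147 for Suriname.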
What Do These Findings Mean?
The researchers found the distribution of funds across the regions affected by malaria to be generally appropriate, with the Africa region and low-population-at-risk areas such as the Americas, the Caribbean, the Middle East, and Eastern Europe receiving proportionate annual malaria support. But they also identify large shortfalls, such as in the South East Asia and Western Pacific regions, which together represent 47% of the global population at risk but received only 17% of GFATM and 24% of non-GFATM support. National government spending also falls short: for example, in Nigeria, where more than 100 million people are at risk of stable P. falciparum transmission, less than US$1 is invested per person per year. These findings illustrate how important it is to examine financial commitments against actual needs. Given the gaps between funding support and level of stable P. falciparum risk, the authors conclude that the goal to reduce the global burden of malaria by 2015 very likely will not be met with current commitments. They estimate that there remains a 50%–450% shortfall in funding needed to scale up malaria control worldwide.
Future research should assess the impact of these funding commitments and what additional resources will be needed if goals of malaria elimination are added to malaria control targets.
Additional Information.
Please access these Web sites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.0050142.
This study is discussed further in a PLoS Medicine Perspective by Anthony Kiszewski
The authors of this article have also published a global map of malaria risk; see Guerra, et al. (2008) PLoS Med 5(2) e38
Information is available from the Global Fund to Fight AIDS, Tuberculosis and Malaria
More information is available on global mapping of malaria risk from the Malaria Atlas Project
doi:10.1371/journal.pmed.0050142
PMCID: PMC2488181  PMID: 18651785