
1.  Defining Catastrophic Costs and Comparing Their Importance for Adverse Tuberculosis Outcome with Multi-Drug Resistance: A Prospective Cohort Study, Peru 
PLoS Medicine  2014;11(7):e1001675.
Tom Wingfield and colleagues investigate the relationship between catastrophic costs and tuberculosis outcomes for patients receiving free tuberculosis care in Peru.
Please see later in the article for the Editors' Summary
Background
Even when tuberculosis (TB) treatment is free, hidden costs incurred by patients and their households (TB-affected households) may worsen poverty and health. Extreme TB-associated costs have been termed “catastrophic” but are poorly defined. We studied TB-affected households' hidden costs and their association with adverse TB outcome to create a clinically relevant definition of catastrophic costs.
Methods and Findings
From 26 October 2002 to 30 November 2009, TB patients (n = 876, 11% with multi-drug-resistant [MDR] TB) and healthy controls (n = 487) were recruited to a prospective cohort study in shantytowns in Lima, Peru. Patients were interviewed prior to and every 2–4 wk throughout treatment, recording direct (household expenses) and indirect (lost income) TB-related costs. Costs were expressed as a proportion of the household's annual income. In poorer households, costs were lower but constituted a higher proportion of the household's annual income: 27% (95% CI = 20%–43%) in the least-poor households versus 48% (95% CI = 36%–50%) in the poorest. Adverse TB outcome was defined as death, treatment abandonment or treatment failure during therapy, or recurrence within 2 y. Of patients with a defined treatment outcome, 23% (166/725) had an adverse outcome. Total costs ≥20% of household annual income were defined as catastrophic because this threshold was most strongly associated with adverse TB outcome. Catastrophic costs were incurred by 345 households (39%). Having MDR TB was associated with a higher likelihood of incurring catastrophic costs (54% [95% CI = 43%–61%] versus 38% [95% CI = 34%–41%], p<0.003). Adverse outcome was independently associated with MDR TB (odds ratio [OR] = 8.4 [95% CI = 4.7–15], p<0.001), previous TB (OR = 2.1 [95% CI = 1.3–3.5], p = 0.005), days too unwell to work pre-treatment (OR = 1.01 [95% CI = 1.00–1.01], p = 0.02), and catastrophic costs (OR = 1.7 [95% CI = 1.1–2.6], p = 0.01). The adjusted population attributable fraction of adverse outcomes explained by catastrophic costs was 18% (95% CI = 6.9%–28%), similar to that of MDR TB (20% [95% CI = 14%–25%]). Sensitivity analyses demonstrated that existing catastrophic costs thresholds (≥10% or ≥15% of household annual income) were not associated with adverse outcome in our setting.
Study limitations included not measuring certain "dis-saving" variables (including the selling of household items) and gathering only 6 mo of cost-specific follow-up data for MDR TB patients.
Conclusions
Despite free TB care, having TB disease was expensive for impoverished TB patients in Peru. Incurring higher relative costs was associated with adverse TB outcome. The population attributable fraction indicated that catastrophic costs and MDR TB were associated with similar proportions of adverse outcomes. Thus TB is a socioeconomic as well as infectious problem, and TB control interventions should address both the economic and clinical aspects of this disease.
Editors' Summary
Background
Caused by the infectious microbe Mycobacterium tuberculosis, tuberculosis (or TB) is a global health problem. In 2012, an estimated 8.6 million people fell ill with TB, and 1.3 million were estimated to have died because of the disease. Poverty is widely recognized as an important risk factor for TB, and developing nations shoulder a disproportionate burden of both poverty and TB disease. For example, in Lima (the capital of Peru), the incidence of TB follows the poverty map, sparing residents living in rich areas of the city while spreading among poorer residents that live in overcrowded households.
The Peruvian government, non-profit organizations, and the World Health Organization (WHO) have extended healthcare programs to provide free diagnosis and treatment for TB, including drug-resistant strains, in Peru, but rates of new TB cases remain high. For example, in Ventanilla (an area of 16 shantytowns in northern Lima), the rate of infection during the study period, 162 new cases per 100,000 people per year, was higher than the national average. About one-third of Ventanilla's 277,895 residents live on under US$1 per day.
Why Was This Study Done?
Poverty increases the risks associated with contracting TB infection, but the disease also affects the most economically productive age group, and the income of TB-affected households often decreases post-diagnosis, exacerbating poverty. A recent WHO consultation report proposed a target of eradicating catastrophic costs for TB-affected families by 2035, but hidden TB-related costs remain understudied, and there is no international consensus defining catastrophic costs incurred by patients and households affected by TB. Lost income and the cost of transport are among hidden costs associated with free treatment programs; these costs and their potential impact on patients and their households are not well defined. Here the researchers sought to clarify and characterize TB-related costs and explore whether there is a relationship between the hidden costs associated with free TB treatment programs and the likelihood of completing treatment and becoming cured of TB.
What Did the Researchers Do and Find?
Over a seven-year period (2002–2009), the researchers recruited 876 study participants with TB diagnosed at health posts located in Ventanilla. To provide a comparative control group, a sample of 487 healthy individuals was also recruited to participate. Participants were interviewed prior to treatment, and households' TB-related direct expenses and indirect expenses (lost income attributed to TB) were recorded every 2–4 wk. Data were collected during scheduled household visits.
TB patients were poorer than controls, and analysis of the data showed that accessing free TB care was expensive for TB patients, especially those with multi-drug-resistant (MDR) TB. Despite care being free, TB patients' total expenses were similar before and during treatment (1.1 versus 1.2 times the same household's monthly income). Even though direct expenses (for example, costs of medical examinations and of medicines other than anti-TB therapy) were lower in the poorest households, their total expenses (direct plus indirect) made up a greater proportion of household annual income: 48% for the poorest households compared to 27% for the least-poor.
The researchers defined costs that were equal to or above one-fifth (20%) of household annual income as catastrophic because this threshold marked the greatest association with adverse treatment outcomes such as death, abandoning treatment, failing to respond to treatment, or TB recurrence. By calculating the population attributable fraction—the proportional reduction in population adverse treatment outcomes that could occur if a risk factor was reduced to zero—the authors estimate that adverse TB outcomes explained by catastrophic costs and MDR TB were similar: 18% for catastrophic costs and 20% for MDR TB.
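The population attributable fraction described above can be sketched numerically. A minimal illustration in Python, using hypothetical numbers and Miettinen's case-based formula (the study itself used an adjusted estimator, so this is an approximation, not the authors' method):

```python
# Population attributable fraction (PAF) via Miettinen's case-based formula:
#   PAF = p_c * (RR - 1) / RR
# where p_c is the proportion of cases (adverse outcomes) exposed to the
# risk factor, and the odds ratio stands in for the relative risk RR.
# The numbers below are illustrative, not the study's actual data.

def attributable_fraction(p_cases_exposed: float, rr: float) -> float:
    """Fraction of adverse outcomes attributable to the exposure."""
    return p_cases_exposed * (rr - 1.0) / rr

# Hypothetical: 44% of adverse-outcome patients incurred catastrophic
# costs, with an odds ratio of 1.7 used as the relative-risk estimate.
paf = attributable_fraction(0.44, 1.7)
print(f"PAF ≈ {paf:.0%}")  # prints "PAF ≈ 18%", the scale reported above
```

Note how the formula captures the verbal definition: if the exposure carries no excess risk (RR = 1), the attributable fraction is zero regardless of how common the exposure is.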
What Do These Findings Mean?
The findings of this study indicate a potential role for social protection as a means to improve TB disease control and health, as well as defining a novel, evidence-based threshold for catastrophic costs for TB-affected households of 20% or more of annual income. Addressing the economic impact of diagnosis and treatment in impoverished communities may increase the odds of curing TB.
Study limitations included that only six months of cost-related follow-up data were gathered for each participant and that "dissavings," such as the selling of household items in response to financial shock, were not recorded. Because the study was observational, the authors cannot determine the direction of the association between catastrophic costs and TB outcome. Even so, the study indicates that TB is a socioeconomic as well as an infectious problem, and that TB control interventions should address both the economic and clinical aspects of the disease.
Additional Information
Please access these websites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.1001675.
The World Health Organization provides information on all aspects of tuberculosis, including the Global Tuberculosis Report 2013
The US Centers for Disease Control and Prevention has information about tuberculosis
Médecins Sans Frontières's TB&ME blog provides patients' stories of living with MDR TB
TB Alert, a UK-based charity that promotes TB awareness worldwide, has information on TB in several European, African, and Asian languages
More information is available about the Innovation For Health and Development (IFHAD) charity and its research team's work in Peru
doi:10.1371/journal.pmed.1001675
PMCID: PMC4098993  PMID: 25025331
2.  Health and Human Rights in Chin State, Western Burma: A Population-Based Assessment Using Multistaged Household Cluster Sampling 
PLoS Medicine  2011;8(2):e1001007.
Sollom and colleagues report the findings from a household survey study carried out in Western Burma; they report a high prevalence of human rights violations such as forced labor, food theft, forced displacement, beatings, and ethnic persecution.
Background
The Chin State of Burma (also known as Myanmar) is an isolated ethnic minority area with poor health outcomes and reports of food insecurity and human rights violations. We report on a population-based assessment of health and human rights in Chin State. We sought to quantify reported human rights violations in Chin State and associations between these reported violations and health status at the household level.
Methods and Findings
Multistaged household cluster sampling was done. Heads of household were interviewed on demographics, access to health care, health status, food insecurity, forced displacement, forced labor, and other human rights violations during the preceding 12 months. Ratios of the prevalence of household hunger comparing households exposed and unexposed to each reported violation were estimated using binomial regression, and 95% confidence intervals (CIs) were constructed. Multivariable models were fitted to adjust for possible confounders. Overall, 91.9% of households (95% CI 89.7%–94.1%) reported forced labor in the past 12 months. Forty-three percent of households met FANTA-2 (Food and Nutrition Technical Assistance II project) definitions for moderate to severe household hunger. Common violations reported were food theft, livestock theft or killing, forced displacement, beatings and torture, detentions, disappearances, and religious and ethnic persecution. Self-reporting of multiple rights abuses was independently associated with household hunger.
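For a single 2×2 table, the prevalence ratios estimated above can be computed by hand. A minimal sketch in Python with made-up counts (the study's regression approach additionally adjusts for confounders and accounts for the cluster sampling, which this crude calculation does not):

```python
import math

def prevalence_ratio_ci(a, n1, c, n0, z=1.96):
    """Crude prevalence ratio with a 95% CI on the log scale.

    a / n1: hungry households / total among those reporting the violation
    c / n0: hungry households / total among those not reporting it
    """
    pr = (a / n1) / (c / n0)
    se = math.sqrt(1/a - 1/n1 + 1/c - 1/n0)  # SE of ln(PR)
    half = z * se
    lo = math.exp(math.log(pr) - half)
    hi = math.exp(math.log(pr) + half)
    return pr, lo, hi

# Hypothetical counts: 60/100 exposed households hungry vs 30/200 unexposed.
pr, lo, hi = prevalence_ratio_ci(60, 100, 30, 200)
print(f"PR = {pr:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```

A prevalence ratio is reported rather than an odds ratio because hunger is a common outcome here, and the odds ratio would overstate the association.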
Conclusions
Our findings indicate widespread self-reports of human rights violations. The nature and extent of these violations may warrant investigation by the United Nations or International Criminal Court.
Editors' Summary
Background
More than 60 years after the adoption of the Universal Declaration of Human Rights, thousands of people around the world are still deprived of their basic human rights—life, liberty, and security of person. In many countries, people live in fear of arbitrary arrest and detention, torture, forced labor, religious and ethnic persecution, forced displacement, and murder. In addition, ongoing conflicts and despotic governments deprive them of the ability to grow sufficient food (resulting in food insecurity) and deny them access to essential health care. In Burma, for example, the military junta, which seized power in 1962, frequently confiscates land unlawfully, demands forced labor, and uses violence against anyone who protests. Burma is also one of the world's poorest countries in terms of health indicators. Its average life expectancy is 54 years, its maternal mortality rate (380 deaths among women from pregnancy-related causes per 100,000 live births) is nearly ten times higher than that of neighboring Thailand, and its under-five death rate (122/1000 live births) is twice that of nearby countries. Moreover, nearly half of Burmese children under 5 are stunted, and a third of young children are underweight, indicators of malnutrition in a country that, on paper, has a food surplus.
Why Was This Study Done?
Investigators are increasingly using population-based methods to quantify the associations between human rights violations and health outcomes. In eastern Burma, for example, population-based research has recently revealed a link between human rights violations and reduced access to maternal health-care services. In this study, the researchers undertake a population-based assessment of health and human rights in Chin State, an ethnic minority area in western Burma where multiple reports of human rights abuses have been documented and from which thousands of people have fled. In particular, the researchers investigate correlations between household hunger and household experiences of human rights violations—food security in Chin State is affected by periodic expansions of rat populations that devastate crop yields, by farmers being forced by the government to grow an inedible oil crop (jatropha), and by the Burmese military regularly stealing food and livestock.
What Did the Researchers Do and Find?
Local surveyors questioned the heads of randomly selected households in Chin State about their household's access to health care and its health status, and about forced labor and other human rights violations experienced by the household during the preceding 12 months. They also asked three standard questions about food availability, the answers to which were combined to provide a measure of household hunger. Of the 621 households interviewed, 91.9% reported at least one episode of a household member being forced to work in the preceding 12 months. The Burmese military imposed two-thirds of these forced labor demands. Other human rights violations reported included beating or torture (14.8% of households), religious or ethnic persecutions (14.1% of households), and detention or imprisonment of a family member (5.9% of households). Forty-three percent of the households met the US Agency for International Development Food and Nutrition Technical Assistance (FANTA) definition for moderate to severe household hunger, and human rights violations related to food insecurity were common. For example, more than half the households were forced to give up food out of fear of violence. A statistical analysis of these data indicated that the prevalence of household hunger was 6.51 times higher in households that had experienced three food-related human rights violations than in households that had not experienced such violations.
What Do These Findings Mean?
These findings quantify the extent to which the Chin ethnic minority in Burma is subjected to multiple human rights violations and indicate the geographical spread of these abuses. Importantly, they show that the health impacts of human rights violations in Chin State are substantial. In addition, they suggest that the indirect health outcomes of human rights violations probably dwarf the mortality from direct killings. Although this study has some limitations (for example, surveyors had to work in secret and it was not safe for them to collect biological samples that could have given a more accurate indication of the health status of households than questions alone), these findings should encourage the international community to intensify its efforts to reduce human rights violations in Burma.
Additional Information
Please access these websites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.1001007.
The UN Universal Declaration of Human Rights is available in numerous languages
The Burma Campaign UK and Human Rights Watch provide detailed information about human rights violations in Burma (in several languages)
The World Health Organization provides information on health in Burma and on human rights (in several languages)
The Mae Tao clinic also provides general information about Burma and its health services (including some information in Thai)
A PLoS Medicine Research Article by Luke Mullany and colleagues provides data on human rights violations and maternal health in Burma
The Chin Human Rights Organization is working to protect and promote the rights of the Chin people
The Global Health Access Program (GHAP) provides information on health in Burma
FANTA works to improve nutrition and global food security policies
doi:10.1371/journal.pmed.1001007
PMCID: PMC3035608  PMID: 21346799
3.  Internet-Based Device-Assisted Remote Monitoring of Cardiovascular Implantable Electronic Devices 
Executive Summary
Objective
The objective of this Medical Advisory Secretariat (MAS) report was to conduct a systematic review of the available published evidence on the safety, effectiveness, and cost-effectiveness of Internet-based device-assisted remote monitoring systems (RMSs) for therapeutic cardiac implantable electronic devices (CIEDs) such as pacemakers (PMs), implantable cardioverter-defibrillators (ICDs), and cardiac resynchronization therapy (CRT) devices. The MAS evidence-based review was performed to support public financing decisions.
Clinical Need: Condition and Target Population
Sudden cardiac death (SCD) is a major cause of fatalities in developed countries. In the United States, almost half a million people die of SCD annually, more deaths than from stroke, lung cancer, breast cancer, and AIDS combined. In Canada, more than 40,000 people die each year from a cardiovascular-related cause; approximately half of these deaths are attributable to SCD.
Most cases of SCD occur in the general population typically in those without a known history of heart disease. Most SCDs are caused by cardiac arrhythmia, an abnormal heart rhythm caused by malfunctions of the heart’s electrical system. Up to half of patients with significant heart failure (HF) also have advanced conduction abnormalities.
Cardiac arrhythmias are managed by a variety of drugs, ablative procedures, and therapeutic CIEDs. The range of CIEDs includes pacemakers (PMs), implantable cardioverter-defibrillators (ICDs), and cardiac resynchronization therapy (CRT) devices. Bradycardia is the main indication for PMs and individuals at high risk for SCD are often treated by ICDs.
Heart failure (HF) is also a significant health problem and is the most frequent cause of hospitalization in those over 65 years of age. Patients with moderate to severe HF may also have cardiac arrhythmias, although the cause may be related more to heart pump or haemodynamic failure. The presence of HF, however, increases the risk of SCD five-fold, regardless of aetiology. Patients with HF who remain highly symptomatic despite optimal drug therapy are sometimes also treated with CRT devices.
With an increasing prevalence of age-related conditions such as chronic HF and the expanding indications for ICD therapy, the rate of ICD placement has been increasing dramatically. The appropriate indications for ICD placement, as well as the rate of ICD placement, are increasingly an issue. In the United States, after the introduction of expanded coverage of ICDs, a national ICD registry was created in 2005 to track these devices. A recent survey based on this national registry reported that 22.5% (25,145) of patients had received a non-evidence-based ICD and that these patients experienced significantly higher in-hospital mortality and rates of post-procedural complications.
In addition to the increased ICD device placement and the upfront device costs, there is the need for lifelong follow-up or surveillance, placing a significant burden on patients and device clinics. In 2007, over 1.6 million CIEDs were implanted in Europe and the United States, which translates to over 5.5 million patient encounters per year if the recommended follow-up practices are considered. A safe and effective RMS could potentially improve the efficiency of long-term follow-up of patients and their CIEDs.
Technology
In addition to being therapeutic devices, CIEDs have extensive diagnostic abilities. All CIEDs can be interrogated and reprogrammed during an in-clinic visit using an inductive programming wand. Remote monitoring allows patients to transmit information recorded in their devices from the comfort of their own homes. Currently, most ICD devices also have the potential to be remotely monitored. Remote monitoring (RM) can be used to check system integrity, to alert on arrhythmic episodes, and potentially to replace in-clinic follow-ups and manage disease remotely. The devices cannot currently be reprogrammed remotely, although this feature is being tested in pilot settings.
Every RMS is specifically designed by a manufacturer for their cardiac implant devices. For Internet-based device-assisted RMSs, this customization includes details such as web application, multiplatform sensors, custom algorithms, programming information, and types and methods of alerting patients and/or physicians. The addition of peripherals for monitoring weight and pressure or communicating with patients through the onsite communicators also varies by manufacturer. Internet-based device-assisted RMSs for CIEDs are intended to function as a surveillance system rather than an emergency system.
Health care providers therefore need to learn each application, and as more than one application may be used at one site, multiple applications may need to be reviewed for alarms. All RMSs deliver system integrity alerting; however, some systems seem to be better geared to fast arrhythmic alerting, whereas other systems appear to be more intended for remote follow-up or supplemental remote disease management. The different RMSs may therefore have different impacts on workflow organization because of their varying frequency of interrogation and methods of alerts. The integration of these proprietary RM web-based registry systems with hospital-based electronic health record systems has so far not been commonly implemented.
Currently there are 2 general types of RMSs: those that transmit device diagnostic information automatically and without patient assistance to secure Internet-based registry systems, and those that require patient assistance to transmit information. Both systems employ the use of preprogrammed alerts that are either transmitted automatically or at regular scheduled intervals to patients and/or physicians.
The current web applications, programming, and registry systems differ greatly between the manufacturers of transmitting cardiac devices. In Canada there are currently 4 manufacturers—Medtronic Inc., Biotronik, Boston Scientific Corp., and St Jude Medical Inc.—which have regulatory approval for remote transmitting CIEDs. Remote monitoring systems are proprietary to the manufacturer of the implant device. An RMS for one device will not work with another device, and the RMS may not work with all versions of the manufacturer’s devices.
All Internet-based device-assisted RMSs have common components. The implanted device is equipped with a micro-antenna that communicates with a small external device (at bedside or wearable) commonly known as the transmitter. Transmitters are able to interrogate programmed parameters and diagnostic data stored in the patients’ implant device. The information transfer to the communicator can occur at preset time intervals with the participation of the patient (waving a wand over the device) or it can be sent automatically (wirelessly) without their participation. The encrypted data are then uploaded to an Internet-based database on a secure central server. The data processing facilities at the central database, depending on the clinical urgency, can trigger an alert for the physician(s) that can be sent via email, fax, text message, or phone. The details are also posted on the secure website for viewing by the physician (or their delegate) at their convenience.
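The alerting pathway described above can be sketched as a simple routing rule. A hypothetical illustration in Python; the urgency tiers and channel names are assumptions for the sketch, not any manufacturer's actual interface:

```python
# Hypothetical triage of a remote-monitoring alert by clinical urgency,
# mirroring the description above: urgent events are pushed to the
# physician immediately, while routine data is only posted to the
# secure website for review at the clinician's convenience.

URGENCY_CHANNELS = {
    "red":    ["phone", "text", "email"],  # e.g. suspected lead failure
    "yellow": ["email", "fax"],            # e.g. arrhythmic episode
    "green":  [],                          # routine scheduled transmission
}

def route_alert(urgency: str) -> list[str]:
    """Return push channels; every event is also posted to the website."""
    channels = URGENCY_CHANNELS.get(urgency, [])
    return channels + ["secure website"]

print(route_alert("red"))    # physician paged on every push channel
print(route_alert("green"))  # website posting only
```

This mirrors the surveillance-not-emergency design: the website is the system of record, and push channels are layered on top only when clinical urgency warrants them.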
Research Questions
The research directions and specific research questions for this evidence review were as follows:
To identify the Internet-based device-assisted RMSs available for follow-up of patients with therapeutic CIEDs such as PMs, ICDs, and CRT devices.
To identify the potential risks, operational issues, or organizational issues related to Internet-based device-assisted RM for CIEDs.
To evaluate the safety, acceptability, and effectiveness of Internet-based device-assisted RMSs for CIEDs such as PMs, ICDs, and CRT devices.
To evaluate the safety, effectiveness, and cost-effectiveness of Internet-based device-assisted RMSs for CIEDs compared to usual outpatient in-office monitoring strategies.
To evaluate the resource implications or budget impact of RMSs for CIEDs in Ontario, Canada.
Research Methods
Literature Search
The review included a systematic review of published scientific literature and consultations with experts and manufacturers of all 4 approved RMSs for CIEDs in Canada. Information on CIED cardiac implant clinics was also obtained from Provincial Programs, a division within the Ministry of Health and Long-Term Care with a mandate for cardiac implant specialty care. Various administrative databases and registries were used to outline the current clinical follow-up burden of CIEDs in Ontario. The provincial population-based ICD database developed and maintained by the Institute for Clinical Evaluative Sciences (ICES) was used to review the current follow-up practices with Ontario patients implanted with ICD devices.
Search Strategy
A literature search was performed on September 21, 2010 using OVID MEDLINE, MEDLINE In-Process and Other Non-Indexed Citations, EMBASE, the Cumulative Index to Nursing & Allied Health Literature (CINAHL), the Cochrane Library, and the International Network of Agencies for Health Technology Assessment (INAHTA) for studies published from 1950 to September 2010. Search alerts were generated and reviewed for additional relevant literature until December 31, 2010. Abstracts were reviewed by a single reviewer and, for those studies meeting the eligibility criteria, full-text articles were obtained. Reference lists were also examined for any additional relevant studies not identified through the search.
Inclusion Criteria
published between 1950 and September 2010;
English language full-reports and human studies;
original reports including clinical evaluations of Internet-based device-assisted RMSs for CIEDs in clinical settings;
reports including standardized measurements on outcome events such as technical success, safety, effectiveness, cost, measures of health care utilization, morbidity, mortality, quality of life or patient satisfaction;
randomized controlled trials (RCTs), systematic reviews and meta-analyses, cohort and controlled clinical studies.
Exclusion Criteria
non-systematic reviews, letters, comments and editorials;
reports not involving standardized outcome events;
clinical reports not involving Internet-based device assisted RM systems for CIEDs in clinical settings;
reports involving studies testing or validating algorithms without RM;
studies with small samples (<10 subjects).
Outcomes of Interest
The outcomes of interest included: technical outcomes, emergency department visits, complications, major adverse events, symptoms, hospital admissions, clinic visits (scheduled and/or unscheduled), survival, morbidity (disease progression, stroke, etc.), patient satisfaction, and quality of life.
Summary of Findings
The MAS evidence review was performed to review available evidence on Internet-based device-assisted RMSs for CIEDs published until September 2010. The search identified 6 systematic reviews, 7 randomized controlled trials, and 19 reports for 16 cohort studies—3 of these being registry-based and 4 being multi-centered. The evidence is summarized in the 3 sections that follow.
1. Effectiveness of Remote Monitoring Systems of CIEDs for Cardiac Arrhythmia and Device Functioning
In total, 15 reports on 13 cohort studies involving investigations with 4 different RMSs for CIEDs in cardiology implant clinic groups were identified in the review. The 4 RMSs were: Care Link Network® (Medtronic Inc., Minneapolis, MN, USA); Home Monitoring® (Biotronik, Berlin, Germany); House Call 11® (St Jude Medical Inc., St Paul, MN, USA); and a manufacturer-independent RMS. Eight of these reports were with the Home Monitoring® RMS (12,949 patients), 3 were with the Care Link® RMS (167 patients), 1 was with the House Call 11® RMS (124 patients), and 1 was with a manufacturer-independent RMS (44 patients). All of the studies, except for 2 in the United States (1 with Home Monitoring® and 1 with House Call 11®), were performed in European countries.
The RMSs in the studies were evaluated with different cardiac implant device populations: ICDs only (6 studies), ICD and CRT devices (3 studies), PM and ICD and CRT devices (4 studies), and PMs only (2 studies). The patient populations were predominately male (range, 52%–87%) in all studies, with mean ages ranging from 58 to 76 years. One study population was unique in that RMSs were evaluated for ICDs implanted solely for primary prevention in young patients (mean age, 44 years) with Brugada syndrome, an inherited condition that increases the risk of sudden cardiac death in young adults.
Most of the cohort studies reported on the feasibility of RMSs in clinical settings with limited follow-up. In the short follow-up periods of the studies, the majority of the events were related to detection of medical events rather than system configuration or device abnormalities. The results of the studies are summarized below:
Interrogation of devices on the web platform, for both continuous and scheduled transmissions, was significantly quicker with remote follow-up for both nurses and physicians.
In a case-control study focusing on a Brugada population–based registry with patients followed-up remotely, there were significantly fewer outpatient visits and greater detection of inappropriate shocks. One death occurred in the control group not followed remotely and post-mortem analysis indicated early signs of lead failure prior to the event.
Two studies examined the role of RMSs in following ICD leads under regulatory advisory in a European clinical setting and noted:
– Fewer inappropriate shocks were administered in the RM group.
– Urgent in-office interrogations and surgical revisions were performed within 12 days of remote alerts.
– No signs of lead fracture were detected at in-office follow-up; all were detected at remote follow-up.
Only 1 study reported evaluating quality of life in patients followed up remotely at 3 and 6 months; no values were reported.
Patient satisfaction was evaluated in 5 cohort studies, all with short-term follow-up: 1 for the Home Monitoring® RMS, 3 for the Care Link® RMS, and 1 for the House Call 11® RMS.
– Patients reported receiving a sense of security from the transmitter, a good relationship with nurses and physicians, positive implications for their health, and satisfaction with RM and organization of services.
– Although patients reported that the system was easy to implement and required less than 10 minutes to transmit information, a variable proportion of patients (range, 9%–39%) reported that they needed the assistance of a caregiver for their transmission.
– The majority of patients would recommend RM to other ICD patients.
– Patients with hearing or other physical or mental conditions hindering the use of the system were excluded from studies, but the frequency of this was not reported.
Physician satisfaction was evaluated in 3 studies, all with the Care Link® RMS:
– Physicians reported ease of use and high satisfaction with generally short-term use of the RMS.
– Physicians reported being able to address problems identified in unscheduled patient transmissions or physician-initiated transmissions remotely, and were able to handle the majority of troubleshooting calls remotely.
– Both nurses and physicians reported a high level of satisfaction with the web registry system.
2. Effectiveness of Remote Monitoring Systems in Heart Failure Patients for Cardiac Arrhythmia and Heart Failure Episodes
Remote follow-up of HF patients implanted with ICD or CRT devices, generally managed in specialized HF clinics, was evaluated in 3 cohort studies: 1 involved the Home Monitoring® RMS and 2 involved the Care Link® RMS. In these RMSs, in addition to the standard diagnostic features, the cardiac devices continuously assess other variables such as patient activity, mean heart rate, and heart rate variability. Intra-thoracic impedance, a proxy measure for lung fluid overload, was also measured in the Care Link® studies. The overall diagnostic performance of these measures could not be evaluated, as the information was not reported for patients who did not experience intra-thoracic impedance threshold crossings or did not undergo interventions. The study results comprised descriptive information on transmissions and alerts in patients experiencing high morbidity and hospitalization over the short study periods.
3. Comparative Effectiveness of Remote Monitoring Systems for CIEDs
Seven RCTs were identified evaluating RMSs for CIEDs: 2 were for PMs (1276 patients) and 5 were for ICD/CRT devices (3733 patients). Studies performed in the clinical setting in the United States involved both the Care Link® RMS and the Home Monitoring® RMS, whereas all studies performed in European countries involved only the Home Monitoring® RMS.
3A. Randomized Controlled Trials of Remote Monitoring Systems for Pacemakers
Two trials, both multicenter RCTs, were conducted in different countries with different RMSs and study objectives. The PREFER trial was a large trial (897 patients) performed in the United States examining the ability of Care Link®, an Internet-based remote PM interrogation system, to detect clinically actionable events (CAEs) sooner than the current in-office follow-up supplemented with transtelephonic monitoring transmissions, a limited form of remote device interrogation. The trial results are summarized below:
In the 375-day mean follow-up, 382 patients were identified with at least 1 CAE—111 patients in the control arm and 271 in the remote arm.
The event rate detected per patient for every type of CAE, except for loss of atrial capture, was higher in the remote arm than the control arm.
The median time to first detection of CAEs (4.9 vs. 6.3 months) was significantly shorter in the RMS group compared to the control group (P < 0.0001).
Additionally, only 2% (3/190) of the CAEs in the control arm were detected during a transtelephonic monitoring transmission (the rest were detected at in-office follow-ups), whereas 66% (446/676) of the CAEs were detected during remote interrogation.
The second study, the OEDIPE trial, was a smaller trial (379 patients) performed in France evaluating the ability of the Home Monitoring® RMS to shorten PM post-operative hospitalization while preserving the safety achieved with conventional management involving longer hospital stays.
Implementation and operationalization of the RMS was reported to be successful in 91% (346/379) of the patients and represented 8144 transmissions.
In the RM group, 6.5% of patients (12) failed to send messages (10 due to improper use of the transmitter, 2 because of unmanageable stress). Of the 172 patients transmitting, 108 sent a total of 167 warnings during the trial, with a greater proportion of warnings attributed to medical rather than technical causes.
Forty percent had no warning message transmission and among these, 6 patients experienced a major adverse event and 1 patient experienced a non-major adverse event. Of the 6 patients having a major adverse event, 5 contacted their physician.
The mean medical reaction time was faster in the RM group (6.5 ± 7.6 days vs. 11.4 ± 11.6 days).
The mean duration of hospitalization was significantly shorter (P < 0.001) for the RM group than the control group (3.2 ± 3.2 days vs. 4.8 ± 3.7 days).
Quality of life estimates by the SF-36 questionnaire were similar for the 2 groups at 1-month follow-up.
3B. Randomized Controlled Trials Evaluating Remote Monitoring Systems for ICD or CRT Devices
The 5 studies evaluating the impact of RMSs with ICD/CRT devices were conducted in the United States and in European countries and involved 2 RMSs—Care Link® and Home Monitoring®. The objectives of the trials varied, and 3 of the trials were smaller pilot investigations.
The first of the smaller studies (151 patients) evaluated patient satisfaction, achievement of patient outcomes, and the cost-effectiveness of the Care Link® RMS compared to quarterly in-office device interrogations with 1-year follow-up.
Individual outcomes such as hospitalizations, emergency department visits, and unscheduled clinic visits were not significantly different between the study groups.
Except for a significantly higher detection of atrial fibrillation in the RM group, data on ICD detection and therapy were similar in the study groups.
Health-related quality of life evaluated by the EuroQoL at 6-month or 12-month follow-up was not different between study groups.
Patients were more satisfied with their ICD care in the clinic follow-up group than in the remote follow-up group at 6-month follow-up, but were equally satisfied at 12-month follow-up.
The second small pilot trial (20 patients) examined the impact of RM follow-up with the House Call II® system on work schedules and cost savings in patients randomized to 2 study arms varying in the degree of remote follow-up.
The total time including device interrogation, transmission time, data analysis, and physician time required was significantly shorter for the RM follow-up group.
The in-clinic waiting time was eliminated for patients in the RM follow-up group.
The physician talk time was significantly reduced in the RM follow-up group (P < 0.05).
The time for the actual device interrogation did not differ in the study groups.
The third small trial (115 patients) examined the impact of RM with the Home Monitoring® system compared to scheduled trimonthly in-clinic visits on the number of unplanned visits, total costs, health-related quality of life (SF-36), and overall mortality.
There was a 63.2% reduction in in-office visits in the RM group.
Hospitalizations or overall mortality (values not stated) were not significantly different between the study groups.
Patient-induced visits were higher in the RM group than the in-clinic follow-up group.
The TRUST Trial
The TRUST trial was a large multicenter RCT conducted at 102 centers in the United States involving the Home Monitoring® RMS for ICD devices for 1450 patients. The primary objectives of the trial were to determine if remote follow-up could be safely substituted for in-office clinic follow-up (3 in-office visits replaced) and still enable earlier physician detection of clinically actionable events.
Adherence to the protocol follow-up schedule was significantly higher in the RM group than the in-office follow-up group (93.5% vs. 88.7%, P < 0.001).
Actionability of trimonthly scheduled checks was low (6.6%) in both study groups. Overall, actionable causes were reprogramming (76.2%), medication changes (24.8%), and lead/system revisions (4%), and these were not different between the 2 study groups.
The overall mean number of in-clinic and hospital visits was significantly lower in the RM group than the in-office follow-up group (2.1 per patient-year vs. 3.8 per patient-year, P < 0.001), representing a 45% visit reduction at 12 months.
The median time from onset of first arrhythmia to physician evaluation was significantly shorter (P < 0.001) in the RM group than in the in-office follow-up group for all arrhythmias (1 day vs. 35.5 days).
The median time to detect clinically asymptomatic arrhythmia events—atrial fibrillation (AF), ventricular fibrillation (VF), ventricular tachycardia (VT), and supra-ventricular tachycardia (SVT)—was also significantly shorter (P < 0.001) in the RM group compared to the in-office follow-up group (1 day vs. 41.5 days) and was significantly quicker for each of the clinical arrhythmia events—AF (5.5 days vs. 40 days), VT (1 day vs. 28 days), VF (1 day vs. 36 days), and SVT (2 days vs. 39 days).
System-related problems occurred infrequently in both groups—in 1.5% of patients (14/908) in the RM group and in 0.7% of patients (3/432) in the in-office follow-up group.
The overall adverse event rate over 12 months was not significantly different between the 2 groups and individual adverse events were also not significantly different between the RM group and the in-office follow-up group: death (3.4% vs. 4.9%), stroke (0.3% vs. 1.2%), and surgical intervention (6.6% vs. 4.9%), respectively.
The 12-month cumulative survival was 96.4% (95% confidence interval [CI], 95.5%–97.6%) in the RM group and 94.2% (95% CI, 91.8%–96.6%) in the in-office follow-up group, and was not significantly different between the 2 groups (P = 0.174).
The CONNECT Trial
The CONNECT trial, another major multicenter RCT, involved the Care Link® RMS for ICD/CRT devices in a 15-month follow-up study of 1,997 patients at 133 sites in the United States. The primary objective of the trial was to determine whether automatically transmitted physician alerts decreased the time from the occurrence of clinically relevant events to medical decisions. The trial results are summarized below:
Of the 575 clinical alert conditions occurring in the study, 246 did not result in an automatically transmitted physician alert. Transmission failures were related to technical issues, such as the alert not being programmed or not being reset, and/or a variety of patient factors, such as the patient not being at home or the monitor not being plugged in or set up.
The overall mean time from the clinically relevant event to the clinical decision was significantly shorter (P < 0.001) by 17.4 days in the remote follow-up group (4.6 days for 172 patients) than the in-office follow-up group (22 days for 145 patients).
– The median time to a clinical decision was shorter in the remote follow-up group than in the in-office follow-up group for an AT/AF burden greater than or equal to 12 hours (3 days vs. 24 days) and a fast VF rate greater than or equal to 120 beats per minute (4 days vs. 23 days).
Although infrequent, similar low numbers of events involving low battery and VF detection/therapy turned off were noted in both groups. More alerts, however, were noted for out-of-range lead impedance in the RM group (18 vs. 6 patients), and the time to detect these critical events was significantly shorter in the RM group (same day vs. 17 days).
Total in-office clinic visits were reduced by 38% from 6.27 visits per patient-year in the in-office follow-up group to 3.29 visits per patient-year in the remote follow-up group.
Health care utilization visits (N = 6,227) that included cardiovascular-related hospitalization, emergency department visits, and unscheduled clinic visits were not significantly higher in the remote follow-up group.
The overall mean length of hospitalization was significantly shorter (P = 0.002) for those in the remote follow-up group (3.3 days vs. 4.0 days) and was shorter both for patients with ICD (3.0 days vs. 3.6 days) and CRT (3.8 days vs. 4.7 days) implants.
The mortality rate was not significantly different between the follow-up groups for the ICDs (P = 0.31) or the CRT devices with defibrillator (P = 0.46).
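The automatic alerting described in the CONNECT results above (e.g., out-of-range lead impedance triggering a same-day physician alert) can be pictured with a minimal sketch. The threshold values and messages below are illustrative assumptions only, not vendor specifications or trial protocol settings:

```python
# Toy model of device-alert logic: a transmitted lead-impedance reading
# is checked against a programmed acceptable range, and an out-of-range
# value raises an immediate physician alert instead of waiting for the
# next scheduled in-office interrogation.
LOW_OHMS, HIGH_OHMS = 200, 2000  # hypothetical programmed range

def check_lead_impedance(reading_ohms: float) -> str:
    """Classify a single impedance reading against the programmed range."""
    if reading_ohms < LOW_OHMS:
        return "ALERT: impedance low (possible insulation failure)"
    if reading_ohms > HIGH_OHMS:
        return "ALERT: impedance high (possible lead fracture)"
    return "OK"

print(check_lead_impedance(450))   # within range
print(check_lead_impedance(3200))  # out of range -> same-day alert
```

The point of the sketch is only that alert-based detection converts a periodic check into a continuous one, which is why the trials report detection times of "same day" versus weeks.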
Conclusions
There is limited clinical trial information on the effectiveness of RMSs for PMs. However, for RMSs for ICD devices, multiple cohort studies and 2 large multicenter RCTs demonstrated feasibility and significant reductions in in-office clinic follow-ups with RMSs in the first year post implantation. The detection rates of clinically significant events (and asymptomatic events) were higher, and the time to a clinical decision for these events was significantly shorter, in the remote follow-up groups than in the in-office follow-up groups. The earlier detection of clinical events in the remote follow-up groups, however, was not associated with lower morbidity or mortality rates in the 1-year follow-up. The substitution of almost all the first year in-office clinic follow-ups with RM was also not associated with an increased health care utilization such as emergency department visits or hospitalizations.
The follow-up in the trials was generally short term, up to 1 year, and thus provided only a limited assessment of potential longer-term device/lead integrity complications or issues. None of the studies compared the different RMSs, particularly RMSs involving patient-scheduled transmissions versus automatic transmissions. Patients' acceptance of and satisfaction with RM were reported to be high, but the impact of RM on patients' health-related quality of life, particularly the psychological aspects, was not evaluated thoroughly. Patients who are not technologically adept, or who have hearing or other physical or mental impairments, were identified as potentially disadvantaged by remote surveillance. Cohort studies consistently identified subgroups of patients who preferred in-office follow-up. Costs and workflow impact on the health care system were evaluated only in European or American clinical settings, and only in a limited way.
Internet-based device-assisted RMSs involve a new approach to monitoring patients, their disease progression, and their CIEDs. Remote monitoring also has the potential to improve the current postmarket surveillance systems of evolving CIEDs and their ongoing hardware and software modifications. At this point, however, there is insufficient information to evaluate the overall impact on the health care system, although the time savings and convenience to patients and physicians associated with substituting RM for in-office follow-up are more certain. The broader issues surrounding infrastructure, impacts on existing clinical care systems, and regulatory concerns need to be considered before implementing Internet-based RMSs in jurisdictions with different clinical practices.
PMCID: PMC3377571  PMID: 23074419
4.  Healthy Eating and Risks of Total and Cause-Specific Death among Low-Income Populations of African-Americans and Other Adults in the Southeastern United States: A Prospective Cohort Study 
PLoS Medicine  2015;12(5):e1001830.
Background
A healthy diet, as defined by the US Dietary Guidelines for Americans (DGA), has been associated with lower morbidity and mortality from major chronic diseases in studies conducted in predominantly non-Hispanic white individuals. It is unknown whether this association can be extrapolated to African-Americans and low-income populations.
Methods and Findings
We examined the associations of adherence to the DGA with total and cause-specific mortality in the Southern Community Cohort Study, a prospective study that recruited 84,735 American adults, aged 40–79 y, from 12 southeastern US states during 2002–2009, mostly through community health centers that serve low-income populations. The present analysis included 50,434 African-Americans, 24,054 white individuals, and 3,084 individuals of other racial/ethnic groups, among whom 42,759 participants had an annual household income less than US$15,000. Usual dietary intakes were assessed using a validated food frequency questionnaire at baseline. Adherence to the DGA was measured by the Healthy Eating Index (HEI), 2010 and 2005 editions (HEI-2010 and HEI-2005, respectively). During a mean follow-up of 6.2 y, 6,906 deaths were identified, including 2,244 from cardiovascular disease, 1,794 from cancer, and 2,550 from other diseases. A higher HEI-2010 score was associated with lower risks of disease death, with adjusted hazard ratios (HRs) of 0.80 (95% CI, 0.73–0.86) for all-disease mortality, 0.81 (95% CI, 0.70–0.94) for cardiovascular disease mortality, 0.81 (95% CI, 0.69–0.95) for cancer mortality, and 0.77 (95% CI, 0.67–0.88) for other disease mortality, when comparing the highest quintile with the lowest (all p-values for trend < 0.05). Similar inverse associations between HEI-2010 score and mortality were observed regardless of sex, race, and income (all p-values for interaction > 0.50). Several component scores in the HEI-2010, including whole grains, dairy, seafood and plant proteins, and ratio of unsaturated to saturated fatty acids, showed significant inverse associations with total mortality. HEI-2005 score was also associated with lower disease mortality, with a HR of 0.86 (95% CI, 0.79–0.93) when comparing extreme quintiles. Given the observational study design, however, residual confounding cannot be completely ruled out. 
In addition, future studies are needed to evaluate the generalizability of these findings to African-Americans of other socioeconomic status.
Conclusions
Our results showed, to our knowledge for the first time, that adherence to the DGA was associated with lower total and cause-specific mortality in a low-income population, including a large proportion of African-Americans, living in the southeastern US.
In a prospective cohort study, Wei Zheng and colleagues study the association between adherence to dietary guidelines and mortality in low-income US adults, two thirds of whom are African-Americans.
Editors' Summary
Background
Certain parts of the population, including women, children, ethnic and racial minorities, and poor people, are often underrepresented in clinical trials and in epidemiological studies (which examine the patterns, causes, and effects of health and disease conditions). In the US population, the link between diet and health has mostly been studied in non-Hispanic white individuals from middle- and high-income households. Such studies formed the basis for the Dietary Guidelines for Americans (DGA), and more recently have shown that adherence to the DGA is associated with lower levels of obesity, as well as lower risks for diabetes, cardiovascular disease (such as heart attacks and strokes), and certain cancers. To measure adherence to the DGA, the Center for Nutrition Policy and Promotion at the US Department of Agriculture developed the Healthy Eating Index (HEI) in 1995. The DGA and the HEI have been updated several times, and the HEI-2010—the latest version—reflects the 2010 DGA.
Why Was This Study Done?
Because research participants are often not representative of the entire US population, it is unknown whether the results of many studies are valid for all Americans. To remedy this situation, efforts have been made to recruit participants from previously underrepresented parts of the population and to address important health questions in such groups. For this study, the researchers wanted to examine whether adherence to the DGA was associated with better health outcomes in poor people and African-Americans, consistent with the results in wealthier non-Hispanic white individuals.
What Did the Researchers Do and Find?
The researchers analyzed data from the Southern Community Cohort Study (SCCS). The SCCS was funded by the National Cancer Institute and was initiated in 2001 with the goal of addressing unresolved questions about the causes of cancer and other chronic diseases, as well as reasons for health disparities. The SCCS recruited most of its participants from community health centers in 12 states in the southeastern US. These centers serve predominantly poor and uninsured people, including many African-Americans. Of approximately 85,000 SCCS participants, over two-thirds were African-American, and over half were poor, with an annual household income of less than US$15,000.
For this study, the researchers used a food frequency questionnaire that was designed to capture foods commonly consumed in the southeastern US, and from this calculated HEI-2010 scores for each participant. They also collected other health- and lifestyle-related information. They then followed all participants for whom they had complete information (over 77,000) for a number of years (half of them for over 6.2 years). During that period, 6,906 participants died, including 2,244 from cardiovascular disease, 1,794 from cancer, and 2,550 from other diseases. When the researchers tested for a possible association between HEI-2010 and death (controlling for other relevant factors such as age, weight, exercise, smoking, and the presence of specific chronic diseases), they found that participants with a higher HEI-2010 score (reflecting better adherence to the DGA) had a lower risk of dying in the follow-up period. Participants with the healthiest diet (those in the top one-fifth of HEI-2010 scores) had only about 80% of the risk of death of those with the unhealthiest diets (those in the bottom one-fifth of HEI-2010 scores). This reduction in the risk of death by approximately 20% was true for death from any disease, death from cancer, and death from cardiovascular disease.
What Do These Findings Mean?
The results support the validity of the DGA for healthy eating across the US population. However, the study had some limitations. For example, participants were asked only once—when they first joined the SCCS—about their diet, their household income, and other factors that can change over time, such as exercise habits and diseases they have been diagnosed with. Besides such changes, there could be other factors not captured in the study that might influence the association between diet and death. Despite these uncertainties, the findings suggest that adherence to the DGA is associated with lower total mortality and mortality from cancer or cardiovascular disease in poor US Americans in general, and in low-income African-Americans.
Additional Information
Please access these websites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.1001830.
Information is available online about the Southern Community Cohort Study
The US Department of Agriculture’s Center for Nutrition Policy and Promotion has information on the Healthy Eating Index, which is based on the Dietary Guidelines for Americans
The World Health Organization provides information on diet as part of its global strategy for diet, physical activity, and health, as well as a factsheet on healthy diet
Wikipedia has a page on race and health in the US (note that Wikipedia is a free online encyclopedia that anyone can edit)
doi:10.1371/journal.pmed.1001830
PMCID: PMC4444091  PMID: 26011727
5.  Mortality in Iraq Associated with the 2003–2011 War and Occupation: Findings from a National Cluster Sample Survey by the University Collaborative Iraq Mortality Study 
PLoS Medicine  2013;10(10):e1001533.
Based on a survey of 2,000 randomly selected households throughout Iraq, Amy Hagopian and colleagues estimate that close to half a million excess deaths are attributable to the recent Iraq war and occupation.
Please see later in the article for the Editors' Summary
Background
Previous estimates of mortality in Iraq attributable to the 2003 invasion have been heterogeneous and controversial, and none were produced after 2006. The purpose of this research was to estimate direct and indirect deaths attributable to the war in Iraq between 2003 and 2011.
Methods and Findings
We conducted a survey of 2,000 randomly selected households throughout Iraq, using a two-stage cluster sampling method to ensure the sample of households was nationally representative. We asked every household head about births and deaths since 2001, and all household adults about mortality among their siblings. We used secondary data sources to correct for out-migration. From March 1, 2003, to June 30, 2011, the crude death rate in Iraq was 4.55 per 1,000 person-years (95% uncertainty interval 3.74–5.27), more than 0.5 times higher than the death rate during the 26-mo period preceding the war, resulting in approximately 405,000 (95% uncertainty interval 48,000–751,000) excess deaths attributable to the conflict. Among adults, the risk of death rose 0.7 times higher for women and 2.9 times higher for men between the pre-war period (January 1, 2001, to February 28, 2003) and the peak of the war (2005–2006). We estimate that more than 60% of excess deaths were directly attributable to violence, with the rest associated with the collapse of infrastructure and other indirect, but war-related, causes. We used secondary sources to estimate rates of death among emigrants. Those estimates suggest we missed at least 55,000 deaths that would have been reported by households had the households remained behind in Iraq, but which instead had migrated away. Only 24 households refused to participate in the study. An additional five households were not interviewed because of hostile or threatening behavior, for a 98.55% response rate. The reliance on outdated census data and the long recall period required of participants are limitations of our study.
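The two-stage cluster sampling design described above can be sketched as follows. The cluster counts match the survey (100 clusters, 2,000 households), but the sampling frame and the equal-probability selection at stage 1 are simplifying assumptions; the study selected clusters with probability related to population:

```python
import random

random.seed(0)

# Hypothetical sampling frame: 1,000 geographical clusters of 500
# households each (assumed figures, for illustration only).
clusters = {f"cluster_{i}": [f"hh_{i}_{j}" for j in range(500)]
            for i in range(1000)}

N_CLUSTERS = 100      # stage 1: select 100 geographical clusters
HH_PER_CLUSTER = 20   # stage 2: select 20 households per cluster

stage1 = random.sample(list(clusters), N_CLUSTERS)
sample = [hh for c in stage1
          for hh in random.sample(clusters[c], HH_PER_CLUSTER)]
print(len(sample))  # 2,000 households, as in the survey
```

Sampling in two stages concentrates interviewing within a manageable number of locations while still yielding a sample that can be made nationally representative.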
Conclusions
Beyond expected rates, most mortality increases in Iraq can be attributed to direct violence, but about a third are attributable to indirect causes (such as from failures of health, sanitation, transportation, communication, and other systems). Approximately a half million deaths in Iraq could be attributable to the war.
Editors' Summary
Background
War is a major public health problem. Its health effects include violent deaths among soldiers and civilians as well as indirect increases in mortality and morbidity caused by conflict. Unlike those of other causes of death and disability, however, the consequences of war on population health are rarely studied scientifically. In conflict situations, deaths and diseases are not reliably measured and recorded, and estimating the proportion caused, directly or indirectly, by a war or conflict is challenging. Population-based mortality survey methods—asking representative survivors about deaths they know about—were developed by public health researchers to estimate death rates. By comparing death rate estimates for periods before and during a conflict, researchers can derive the number of excess deaths that are attributable to the conflict.
Why Was This Study Done?
A number of earlier studies have estimated the death toll in Iraq since the beginning of the war in March 2003. The previous studies covered different periods from 2003 to 2006 and derived different rates of overall deaths and excess deaths attributable to the war and conflict. All of them have been controversial, and their methodologies have been criticized. For this study, based on a population-based mortality survey, the researchers modified and improved their methodology in response to critiques of earlier surveys. The study covers the period from the beginning of the war in March 2003 until June 2011, including a period of high violence from 2006 to 2008. It provides population-based estimates for excess deaths in the years after 2006 and covers most of the period of the war and subsequent occupation.
What Did the Researchers Do and Find?
Interviewers trained by the researchers conducted the survey between May 2011 and July 2011 and collected data from 2,000 randomly selected households in 100 geographical clusters, distributed across Iraq's 18 governorates. The interviewers asked the head of each household about deaths among household members from 2001 to the time of the interview, including a pre-war period from January 2001 to March 2003 and the period of the war and occupation. They also asked all adults in the household about deaths among their siblings during the same period. From the first set of data, the researchers calculated the crude death rates (i.e., the number of deaths during a year per 1,000 individuals) before and during the war. They found the wartime crude death rate in Iraq to be 4.55 per 1,000, more than 50% higher than the death rate of 2.89 during the 26-month period preceding the war. By multiplying those rates by the annual Iraq population, the authors estimate the total excess Iraqi deaths attributable to the war through mid-2011 to be about 405,000. The researchers also estimated that an additional 56,000 deaths were not counted due to migration. Including this number, their final estimate is that approximately half a million people died in Iraq as a result of the war and subsequent occupation from March 2003 to June 2011.
The risk of death at the peak of the conflict in 2006 almost tripled for men and rose by 70% for women. Respondents attributed 20% of household deaths to war-related violence. Violent deaths were attributed primarily to coalition forces (35%) and militia (32%). The majority (63%) of violent deaths were from gunshots. Twelve percent were attributed to car bombs. Based on the responses from adults in the surveyed households who reported on the alive-or-dead status of their siblings, the researchers estimated the total number of deaths among adults aged 15–60 years, from March 2003 to June 2011, to be approximately 376,000; 184,000 of these deaths were attributed to the conflict, and of those, the authors estimate that 132,000 were caused directly by war-related violence.
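The excess-death arithmetic described above can be sketched in a few lines. The population and duration figures below are rough assumptions for illustration; the study used year-by-year population estimates and statistical methods to produce its point estimate and uncertainty interval:

```python
# Illustrative excess-death calculation: (wartime rate - pre-war rate)
# applied to person-years of exposure. Crude death rates are from the
# study; population and duration are approximate assumptions.
PRE_WAR_CDR = 2.89        # deaths per 1,000 person-years, pre-war
WARTIME_CDR = 4.55        # deaths per 1,000 person-years, 2003-2011
POPULATION = 32_000_000   # assumed mid-period population of Iraq
YEARS = 8.33              # March 2003 to June 2011

excess_rate = (WARTIME_CDR - PRE_WAR_CDR) / 1000   # per person-year
excess_deaths = excess_rate * POPULATION * YEARS
print(f"Approximate excess deaths: {excess_deaths:,.0f}")
```

Even with these crude inputs the result lands in the same broad range as the published estimate of roughly 405,000 excess deaths before the migration correction.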
What Do These Findings Mean?
These findings provide the most up-to-date estimates of the death toll of the Iraq war and subsequent conflict. However, given the difficult circumstances, the estimates are associated with substantial uncertainties. The researchers extrapolated from a small representative sample of households to estimate Iraq's national death toll. In addition, respondents were asked to recall events that occurred up to ten years prior, which can lead to inaccuracies. The researchers also had to rely on outdated census data (the last complete population census in Iraq dates back to 1987) for their overall population figures. Thus, to accompany their estimate of 460,000 excess deaths from March 2003 to mid-2011, the authors used statistical methods to determine the likely range of the true estimate. Based on the statistical methods, the researchers are 95% confident that the true number of excess deaths lies between 48,000 and 751,000—a large range. More than two years past the end of the period covered in this study, the conflict in Iraq is far from over and continues to cost lives at alarming rates. As discussed in an accompanying Perspective by Salman Rawaf, violence and lawlessness continue to the present day. In addition, post-war Iraq has limited capacity to re-establish and maintain its battered public health and safety infrastructure.
Additional Information
Please access these websites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.1001533
This study is further discussed in a PLOS Medicine Perspective by Salman Rawaf.
The Geneva Declaration on Armed Violence and Development website provides information on the global burden of armed violence.
The International Committee of the Red Cross provides information about war and international humanitarian law (in several languages).
Medact, a global health charity, has information on health and conflict.
Columbia University has a program on forced migration and health.
Johns Hopkins University runs the Center for Refugee and Disaster Response.
University of Washington's Health Alliance International website also has information about war and conflict.
doi:10.1371/journal.pmed.1001533
PMCID: PMC3797136  PMID: 24143140
6.  Access To Essential Maternal Health Interventions and Human Rights Violations among Vulnerable Communities in Eastern Burma 
PLoS Medicine  2008;5(12):e242.
Background
Health indicators are poor and human rights violations are widespread in eastern Burma. Reproductive and maternal health indicators have not been measured in this setting but are necessary as part of an evaluation of a multi-ethnic pilot project exploring strategies to increase access to essential maternal health interventions. The goal of this study is to estimate coverage of maternal health services prior to this project and associations between exposure to human rights violations and access to such services.
Methods and Findings
Selected communities in the Shan, Mon, Karen, and Karenni regions of eastern Burma that were accessible to community-based organizations operating from Thailand were surveyed to estimate coverage of reproductive, maternal, and family planning services, and to assess exposure to household-level human rights violations within the pilot-project target population. Two-stage cluster sampling surveys among ever-married women of reproductive age (15–45 y) documented access to essential antenatal care interventions, skilled attendance at birth, postnatal care, and family planning services. Mid-upper arm circumference, hemoglobin by color scale, and Plasmodium falciparum parasitemia by rapid diagnostic dipstick were measured. Exposure to human rights violations in the prior 12 mo was recorded. Between September 2006 and January 2007, 2,914 surveys were conducted. Eighty-eight percent of women reported a home delivery for their last pregnancy (within previous 5 y). Skilled attendance at birth (5.1%), any (39.3%) or ≥ 4 (16.7%) antenatal visits, use of an insecticide-treated bed net (21.6%), and receipt of iron supplements (11.8%) were low. At the time of the survey, more than 60% of women had hemoglobin level estimates ≤ 11.0 g/dl and 7.2% were Pf positive. Unmet need for contraceptives exceeded 60%. Violations of rights were widely reported: 32.1% of Karenni households reported forced labor and 10% of Karen households had been forced to move. Among Karen households, odds of anemia were 1.51 (95% confidence interval [CI] 0.95–2.40) times higher among women reporting forced displacement, and 7.47 (95% CI 2.21–25.3) times higher among those exposed to food security violations. The odds of receiving no antenatal care services were 5.94 (95% CI 2.23–15.8) times higher among those forcibly displaced.
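The odds ratios reported above compare the odds of an outcome between exposed and unexposed groups. A minimal sketch of that arithmetic follows; the 2×2 counts are invented for illustration and are not taken from the survey.

```python
# Hypothetical 2x2 table: exposure = forced displacement, outcome = anemia.
# These counts are illustrative only, not data from the study.
exposed_cases, exposed_noncases = 40, 60
unexposed_cases, unexposed_noncases = 100, 300

odds_exposed = exposed_cases / exposed_noncases        # odds of outcome given exposure
odds_unexposed = unexposed_cases / unexposed_noncases  # odds of outcome without exposure
odds_ratio = odds_exposed / odds_unexposed

print(round(odds_ratio, 2))  # 2.0 -> outcome odds twice as high in the exposed group
```

The confidence intervals quoted in the abstract (e.g., 95% CI 2.21–25.3) would come from a regression model or a standard-error formula on the log odds ratio, which this sketch omits.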
Conclusions
Coverage of basic maternal health interventions is woefully inadequate in these selected populations and substantially lower than even the national estimates for Burma, among the lowest in the region. Considerable political, financial, and human resources are necessary to improve access to maternal health care in these communities.
Luke Mullany and colleagues examine access to essential maternal health interventions and human rights violations within vulnerable communities in eastern Burma.
Editors' Summary
Background.
After decades of military rule, Burma has one of the world's worst health-care systems and high levels of ill health. For example, maternal mortality (deaths among women from pregnancy-related causes) is around 360 per 100,000 live births in Burma, whereas in neighboring Thailand it is only 44 per 100,000 live births. Maternal health is even worse in the Shan, Karenni, Karen and Mon states in eastern Burma where ethnic conflicts and enforced village relocations have internally displaced more than half a million people. Here, maternal mortality is thought to be about 1,000 per 100,000 live births. In an effort to improve access to life-saving maternal health interventions in these states, Burmese community-based health organizations, the Johns Hopkins Center for Public Health and Human Rights and the Global Health Access Program in the USA, and the Mae Tao Clinic (a health-worker training center in Thailand) recently set up the Mobile Obstetric Maternal Health Workers (MOM) Project. In this pilot project, local health workers from 12 communities in eastern Burma received training in antenatal care, emergency obstetrics (the care of women during childbirth), blood transfusion, and family planning at the Mae Tao Clinic. Back in Burma, these maternal health workers trained additional local health workers and traditional birth attendants. All these individuals now provide maternal health care to their communities.
Why Was This Study Done?
The effectiveness of the MOM project can only be evaluated if accurate baseline information on women's access to maternal health-care services is available. This information is also needed to ensure the wise use of scarce health-care resources. However, very little is known about reproductive and maternal health in eastern Burma. In this study, the researchers analyze the information on women's access to reproductive and maternal health-care services that was collected during the initial field implementation stage of the MOM project. In addition, they analyze whether exposure to enforced village relocations and other human rights violations affect access to maternal health-care services.
What Did the Researchers Do and Find?
Trained survey workers asked nearly 3000 ever-married women of reproductive age in the selected communities about their access to antenatal and postnatal care, skilled birth attendants, and family planning. They measured each woman's mid-upper arm circumference (an indicator of nutritional status) and tested them for anemia (iron deficiency) and infection with malaria parasites (a common cause of anemia in tropical countries). Finally, they asked the women about any recent violations of their human rights such as forced labour or relocation. Nearly 90% of the women reported a home delivery for their last baby. A skilled attendant was present at only one in 20 births and only one in three women had any antenatal care. One third of the women received postnatal care and only a third said they had access to effective contraceptives. Few women had received iron supplements or had used insecticide-treated bednets to avoid malaria-carrying mosquitos. Consequently, more than half the women were anemic and 7.2% were infected with malaria parasites. Many women also showed signs of poor nutrition. Finally, human rights violations were widely reported by the women. In Karen, the region containing most of the study communities, forced relocation tripled the risk of women developing anemia and greatly decreased their chances of receiving any antenatal care.
What Do These Findings Mean?
These findings show that access to maternal health-care interventions is extremely limited and that poor nutrition, anemia, and malaria, all of which increase the risk of pregnancy complications, are widespread in the communities in the MOM project. Because these communities had some basic health services and access to training in Thailand before the project started, these results probably underestimate the lack of access to maternal health-care services in eastern Burma. Nevertheless, it is clear that considerable political, financial, and human resources will be needed to improve maternal health in this region. Finally, the findings also reveal a link between human rights violations and reduced access to maternal health-care services. Thus, the scale of human rights violations will need to be considered when evaluating programs designed to improve maternal health in Burma and in other places where there is ongoing conflict.
Additional Information.
Please access these Web sites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.0050242.
This research article is further discussed in a PLoS Medicine Perspective by Macaya Douoguih
The World Health Organization provides information on all aspects of health in Burma (in several languages)
The Mae Tao Clinic also provides general information about Burma and its health services
More information about the MOM project is available in a previous publication by the researchers
The Burma Campaign UK and Human Rights Watch both provide detailed information about human rights violations in Burma
The United Nations Population Fund provides information about safe motherhood and ongoing efforts to save mothers' lives around the world
doi:10.1371/journal.pmed.0050242
PMCID: PMC2605890  PMID: 19108601
7.  Measuring Adult Mortality Using Sibling Survival: A New Analytical Method and New Results for 44 Countries, 1974–2006 
PLoS Medicine  2010;7(4):e1000260.
Julie Rajaratnam and colleagues describe a novel method, called the Corrected Sibling Survival method, to measure adult mortality in countries without good vital registration by use of histories taken from surviving siblings.
Background
For several decades, global public health efforts have focused on the development and application of disease control programs to improve child survival in developing populations. The need to reliably monitor the impact of such intervention programs in countries has led to significant advances in demographic methods and data sources, particularly with large-scale, cross-national survey programs such as the Demographic and Health Surveys (DHS). Although no comparable effort has been undertaken for adult mortality, the availability of large datasets with information on adult survival from censuses and household surveys offers an important opportunity to dramatically improve our knowledge about levels and trends in adult mortality in countries without good vital registration. To date, attempts to measure adult mortality from questions in censuses and surveys have generally led to implausibly low levels of adult mortality owing to biases inherent in survey data such as survival and recall bias. Recent methodological developments and the increasing availability of large surveys with information on sibling survival suggest that it may well be timely to reassess the pessimism that has prevailed around the use of sibling histories to measure adult mortality.
Methods and Findings
We present the Corrected Sibling Survival (CSS) method, which addresses both the survival and recall biases that have plagued the use of survey data to estimate adult mortality. Using logistic regression, our method directly estimates the probability of dying in a given country, by age, sex, and time period from sibling history data. The logistic regression framework borrows strength across surveys and time periods for the estimation of the age patterns of mortality, and facilitates the implementation of solutions for the underrepresentation of high-mortality families and recall bias. We apply the method to generate estimates of and trends in adult mortality, using the summary measure 45q15—the probability of a 15-y-old dying before his or her 60th birthday—for 44 countries with DHS sibling survival data. Our findings suggest that levels of adult mortality prevailing in many developing countries are substantially higher than previously suggested by other analyses of sibling history data. Generally, our estimates show the risk of adult death between ages 15 and 60 y to be about 20%–35% for females and 25%–45% for males in sub-Saharan African populations largely unaffected by HIV. In countries of Southern Africa, where the HIV epidemic has been most pronounced, as many as eight out of ten men alive at age 15 y will be dead by age 60, as will six out of ten women. Adult mortality levels in populations of Asia and Latin America are generally lower than in Africa, particularly for women. The exceptions are Haiti and Cambodia, where mortality risks are comparable to many countries in Africa. In all other countries with data, the probability of dying between ages 15 and 60 y was typically around 10% for women and 20% for men, not much higher than the levels prevailing in several more developed countries.
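The summary measure 45q15 described above can be sketched as one minus the product of survival probabilities across five-year age bands from 15 to 60. The band-specific probabilities of dying below are hypothetical inputs chosen for illustration, not estimates from the study.

```python
# Hypothetical five-year probabilities of dying (nqx) for ages 15-59.
# These inputs are illustrative only, not CSS estimates.
five_year_qx = {
    "15-19": 0.010, "20-24": 0.015, "25-29": 0.018,
    "30-34": 0.020, "35-39": 0.022, "40-44": 0.025,
    "45-49": 0.030, "50-54": 0.040, "55-59": 0.055,
}

survival = 1.0
for q in five_year_qx.values():
    survival *= (1.0 - q)  # probability of surviving each 5-year band

q45_15 = 1.0 - survival  # probability a 15-year-old dies before age 60
print(round(q45_15, 4))  # about 0.21 for these illustrative inputs
```

The CSS method itself estimates the band-specific probabilities from sibling histories via logistic regression, with corrections for recall bias and the underrepresentation of high-mortality families; the conversion to 45q15 is the simple product shown here.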
Conclusions
Our results represent an expansion of direct knowledge of levels and trends in adult mortality in the developing world. The CSS method provides grounds for renewed optimism in collecting sibling survival data. We suggest that all nationally representative survey programs with adequate sample size ought to implement this critical module for tracking adult mortality in order to more reliably understand the levels and patterns of adult mortality, and how they are changing.
Please see later in the article for the Editors' Summary
Editors' Summary
Background
Governments and international health agencies need accurate information on births and deaths in populations to help them plan health care policies and monitor the effectiveness of public-health programs designed, for example, to prevent premature deaths from preventable causes such as tobacco smoking. In developed countries, full information on births and deaths is recorded in “vital registration systems.” Unfortunately, very few developing countries have complete vital registration systems. In most African countries, for example, less than one-quarter of deaths are counted through vital registration systems. To fill this information gap, scientists have developed several methods to estimate mortality levels (the proportion of deaths in populations) and trends in mortality (how the proportion of deaths in populations changes over time) from data collected in household surveys and censuses. A household survey collects data about family members (for example, number, age, and sex) for a national sample of households randomly selected from a list of households collected in a census (a periodic count of a population).
Why Was This Study Done?
To date, global public-health efforts have concentrated on improving child survival. Consequently, methods for calculating child mortality levels and trends from surveys are well-developed and generally yield accurate estimates. By contrast, although attempts have been made to measure adult mortality using sibling survival histories (records of the sex, age if alive, or age at death, if dead, of all the children born to survey respondents' mothers that are collected in many household surveys), these attempts have often produced implausibly low estimates of adult mortality. These low estimates arise because people do not always recall deaths accurately when questioned (recall bias) and because families that have fallen apart, possibly because of family deaths, are underrepresented in household surveys (selection bias). In this study, the researchers develop a corrected sibling survival (CSS) method that addresses the problems of selection and recall bias and use their method to estimate mortality levels and trends in 44 developing countries between 1974 and 2006.
What Did the Researchers Do and Find?
The researchers used a statistical approach called logistic regression to develop the CSS method. They then used the method to estimate the probability of a 15-year-old dying before his or her 60th birthday from sibling survival data collected by the Demographic and Health Surveys program (DHS, a project started in 1984 to help developing countries collect data on population and health trends). Levels of adult mortality estimated in this way were considerably higher than those suggested by previous analyses of sibling history data. For example, the risk of adult death between the ages of 15 and 60 years was 20%–35% for women and 25%–45% for men living in sub-Saharan African countries largely unaffected by HIV and 60% for women and 80% for men living in countries in Southern Africa where the HIV epidemic is worst. Importantly, the researchers show that their mortality level estimates compare well to those obtained from vital registration data and other data sources where available. So, for example, in the Philippines, adult mortality levels estimated using the CSS method were similar to those obtained from vital registration data. Finally, the researchers used the CSS method to estimate mortality trends. These calculations reveal, for example, that there has been a 3–4-fold increase in adult mortality since the late 1980s in Zimbabwe, a country badly affected by the HIV epidemic.
What Do These Findings Mean?
These findings suggest that the CSS method, which applies a correction for both selection and recall bias, yields more accurate estimates of adult mortality in developing countries from sibling survival data than previous methods. Given their findings, the researchers suggest that sibling survival histories should be routinely collected in all future household survey programs and, if possible, these surveys should be expanded so that all respondents are asked about sibling histories—currently the DHS only collects sibling histories from women aged 15–49 years. Widespread collection of such data and their analysis using the CSS method, the researchers conclude, would help governments and international agencies track trends in adult mortality and progress toward major health and development targets.
Additional Information
Please access these Web sites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.1000260.
This study and two related PLoS Medicine Research Articles by Rajaratnam et al. and by Murray et al. are further discussed in a PLoS Medicine Perspective by Mathers and Boerma
Information is available about the Demographic and Health Surveys
The Institute for Health Metrics and Evaluation makes available high-quality information on population health, its determinants, and the performance of health systems
Grand Challenges in Global Health provides information on research into better ways for developing countries to measure their health status
The World Health Organization Statistical Information System (WHOSIS) is an interactive database that brings together core health statistics for WHO member states, including information on vital registration of deaths; the WHO Health Metrics Network is a global collaboration focused on improving sources of vital statistics
doi:10.1371/journal.pmed.1000260
PMCID: PMC2854132  PMID: 20405004
8.  Clinical Utility of Vitamin D Testing 
Executive Summary
This report from the Medical Advisory Secretariat (MAS) was intended to evaluate the clinical utility of vitamin D testing in average risk Canadians and in those with kidney disease. As a separate analysis, this report also includes a systematic literature review of the prevalence of vitamin D deficiency in these two subgroups.
This evaluation did not set out to determine the serum vitamin D thresholds that might apply to non-bone health outcomes. For bone health outcomes, no high or moderate quality evidence could be found to support a target serum level above 50 nmol/L. Similarly, no high or moderate quality evidence could be found to support vitamin D’s effects in non-bone health outcomes, other than falls.
Vitamin D
Vitamin D is a lipid-soluble vitamin that acts as a hormone. It stimulates intestinal calcium absorption and is important in maintaining adequate phosphate levels for bone mineralization, bone growth, and remodelling. It is also believed to be involved in the regulation of cell growth and proliferation, apoptosis (programmed cell death), modulation of the immune system, and other functions. Alone or in combination with calcium, vitamin D has also been shown to reduce the risk of fractures in elderly men (≥ 65 years) and postmenopausal women, and the risk of falls in community-dwelling seniors. However, a comprehensive systematic review found inconsistent results concerning the effects of vitamin D in conditions such as cancer, all-cause mortality, and cardiovascular disease. In fact, no high or moderate quality evidence could be found concerning the effects of vitamin D in such non-bone health outcomes. Given the uncertainties surrounding the effects of vitamin D in non-bone health related outcomes, it was decided that this evaluation should focus on falls and the effects of vitamin D on bone health, exclusively within average-risk individuals and patients with kidney disease.
Synthesis of vitamin D occurs naturally in the skin through exposure to ultraviolet B (UVB) radiation from sunlight, but it can also be obtained from dietary sources including fortified foods, and supplements. Foods rich in vitamin D include fatty fish, egg yolks, fish liver oil, and some types of mushrooms. Since it is usually difficult to obtain sufficient vitamin D from non-fortified foods, either due to low content or infrequent use, most vitamin D is obtained from fortified foods, exposure to sunlight, and supplements.
Clinical Need: Condition and Target Population
Vitamin D deficiency may lead to rickets in infants and osteomalacia in adults. Factors believed to be associated with vitamin D deficiency include:
darker skin pigmentation,
winter season,
living at higher latitudes,
skin coverage,
kidney disease,
malabsorption syndromes such as Crohn’s disease, cystic fibrosis, and
genetic factors.
Patients with chronic kidney disease (CKD) are at a higher risk of vitamin D deficiency due to either renal losses or decreased synthesis of 1,25-dihydroxyvitamin D.
Health Canada currently recommends that, until the daily recommended intakes (DRI) for vitamin D are updated, Canada’s Food Guide (Eating Well with Canada’s Food Guide) should be followed with respect to vitamin D intake. Issued in 2007, the Guide recommends that Canadians consume two cups (500 ml) of fortified milk or fortified soy beverages daily in order to obtain a daily intake of 200 IU. In addition, men and women over the age of 50 should take 400 IU of vitamin D supplements daily. Additional recommendations were made for breastfed infants.
A Canadian survey evaluated the median vitamin D intake derived from diet alone (excluding supplements) among 35,000 Canadians, 10,900 of whom were from Ontario. Among Ontarian males ages 9 and up, the median daily dietary vitamin D intake ranged between 196 IU and 272 IU. Among females, it varied from 152 IU to 196 IU. In boys and girls ages 1 to 3, the median daily dietary vitamin D intake was 248 IU, while among those 4 to 8 years it was 224 IU.
Vitamin D Testing
Two laboratory tests for vitamin D are available: 25-hydroxyvitamin D, referred to as 25(OH)D, and 1,25-dihydroxyvitamin D. Vitamin D status is assessed by measuring serum 25(OH)D levels, which can be assayed using radioimmunoassays, competitive protein-binding assays (CPBA), high-pressure liquid chromatography (HPLC), and liquid chromatography-tandem mass spectrometry (LC-MS/MS). These may yield different results, with inter-assay variation reaching up to 25% (at lower serum levels) and intra-assay variation reaching 10%.
The optimal serum concentration of vitamin D has not been established, and it may change across different stages of life. Similarly, there is currently no consensus on target serum vitamin D levels. There does, however, appear to be a consensus on the definition of vitamin D deficiency at 25(OH)D < 25 nmol/L, which is based on the risk of diseases such as rickets and osteomalacia. Higher target serum levels have also been proposed based on subclinical endpoints such as parathyroid hormone (PTH). Therefore, in this report, two conservative target serum levels have been adopted: 25 nmol/L (based on the risk of rickets and osteomalacia), and 40 to 50 nmol/L (based on vitamin D’s interaction with PTH).
Ontario Context
Volume & Cost
The volume of vitamin D tests done in Ontario has been increasing over the past 5 years, rising steeply from about 169,000 tests in 2007 to more than 393,400 tests in 2008. The number of tests continues to rise, with the projected number for 2009 exceeding 731,000. According to the Ontario Schedule of Benefits, the billing cost of each test is $51.7 for 25(OH)D (L606, 100 LMS units, $0.517/unit) and $77.6 for 1,25-dihydroxyvitamin D (L605, 150 LMS units, $0.517/unit). Province-wide, the total annual cost of vitamin D testing has increased from approximately $1.7M in 2004 to over $21.0M in 2008. The projected annual cost for 2009 is approximately $38.8M.
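The fee arithmetic above (LMS units times a fixed unit value) can be checked with a short sketch. Note the quoted $77.6 appears to round $77.55 (150 × $0.517) to one decimal, and the projected volume times the 25(OH)D fee reproduces the order of magnitude of the quoted $38.8M projection rather than the exact figure.

```python
# Sketch of the Ontario Schedule of Benefits arithmetic cited above:
# each test code carries a number of LMS units billed at $0.517/unit.
UNIT_VALUE = 0.517  # dollars per LMS unit

fee_25ohd = 100 * UNIT_VALUE   # L606: 25(OH)D
fee_125ohd = 150 * UNIT_VALUE  # L605: 1,25-dihydroxyvitamin D
print(round(fee_25ohd, 2), round(fee_125ohd, 2))  # 51.7 77.55

# Projected 2009 volume times the 25(OH)D fee: ~$37.8M, close to the
# quoted ~$38.8M (the gap presumably reflects the costlier
# 1,25-dihydroxyvitamin D tests in the mix).
projected_cost = 731_000 * fee_25ohd
print(round(projected_cost / 1e6, 1))  # 37.8
```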
Evidence-Based Analysis
The objective of this report is to evaluate the clinical utility of vitamin D testing in the average risk population and in those with kidney disease. As a separate analysis, the report also sought to evaluate the prevalence of vitamin D deficiency in Canada. The specific research questions addressed were thus:
What is the clinical utility of vitamin D testing in the average risk population and in subjects with kidney disease?
What is the prevalence of vitamin D deficiency in the average risk population in Canada?
What is the prevalence of vitamin D deficiency in patients with kidney disease in Canada?
Clinical utility was defined as the ability to improve bone health outcomes with the focus on the average risk population (excluding those with osteoporosis) and patients with kidney disease.
Literature Search
A literature search was performed on July 17th, 2009 using OVID MEDLINE, MEDLINE In-Process and Other Non-Indexed Citations, EMBASE, the Cumulative Index to Nursing & Allied Health Literature (CINAHL), the Cochrane Library, and the International Agency for Health Technology Assessment (INAHTA) for studies published from January 1, 1998 until July 17th, 2009. Abstracts were reviewed by a single reviewer and, for those studies meeting the eligibility criteria, full-text articles were obtained. Reference lists were also examined for any additional relevant studies not identified through the search. Articles with unknown eligibility were reviewed with a second clinical epidemiologist, then a group of epidemiologists until consensus was established. The quality of evidence was assessed as high, moderate, low or very low according to GRADE methodology.
Observational studies that evaluated the prevalence of vitamin D deficiency in Canada in the population of interest were included based on the inclusion and exclusion criteria listed below. The baseline values were used in this report in the case of interventional studies that evaluated the effect of vitamin D intake on serum levels. Studies published in grey literature were included if no studies published in the peer-reviewed literature were identified for specific outcomes or subgroups.
Considering that vitamin D status may be affected by factors such as latitude, sun exposure, food fortification, among others, the search focused on prevalence studies published in Canada. In cases where no Canadian prevalence studies were identified, the decision was made to include studies from the United States, given the similar policies in vitamin D food fortification and recommended daily intake.
Inclusion Criteria
Studies published in English
Publications that reported the prevalence of vitamin D deficiency in Canada
Studies that included subjects from the general population or with kidney disease
Studies in children or adults
Studies published between January 1998 and July 17th 2009
Exclusion Criteria
Studies that included subjects defined according to a specific disease other than kidney disease
Letters, comments, and editorials
Studies that measured the serum vitamin D levels but did not report the percentage of subjects with serum levels below a given threshold
Outcomes of Interest
Prevalence of serum vitamin D less than 25 nmol/L
Prevalence of serum vitamin D less than 40 to 50 nmol/L
Serum 25-hydroxyvitamin D was the metabolite used to assess vitamin D status. Results from adult and children studies were reported separately. Subgroup analyses according to factors that affect serum vitamin D levels (e.g., seasonal effects, skin pigmentation, and vitamin D intake) were reported if enough information was provided in the studies
Quality of Evidence
The quality of the prevalence studies was based on the method of subject recruitment and sampling, possibility of selection bias, and generalizability to the source population. The overall quality of the trials was examined according to the GRADE Working Group criteria.
Summary of Findings
Fourteen prevalence studies examining Canadian adults and children met the eligibility criteria. With the exception of one longitudinal study, the studies had a cross-sectional design. Two studies were conducted among Canadian adults with renal disease but none studied Canadian children with renal disease (though three such US studies were included). No systematic reviews or health technology assessments that evaluated the prevalence of vitamin D deficiency in Canada were identified. Two studies were published in grey literature, consisting of a Canadian survey designed to measure serum vitamin D levels and a study in infants presented as an abstract at a conference. Also included were the results of vitamin D tests performed in community laboratories in Ontario between October 2008 and September 2009 (provided by the Ontario Association of Medical Laboratories).
Different threshold levels were used in the studies; we therefore report the percentage of subjects with serum levels between 25 and 30 nmol/L and between 37.5 and 50 nmol/L. Some studies stratified the results according to factors affecting vitamin D status, and two used multivariate models to investigate the effects of these characteristics (including age, season, BMI, vitamin D intake, and skin pigmentation) on serum 25(OH)D levels. It is unclear, however, whether these studies were adequately powered for these subgroup analyses.
Study participants generally consisted of healthy, community-dwelling subjects, and most studies excluded individuals with conditions, or taking medications, that alter vitamin D or bone metabolism, such as kidney or liver disease. Although the studies were conducted in different parts of Canada, fewer were performed at Northern latitudes, i.e., above 53°N, the latitude of Edmonton.
Adults
Serum vitamin D levels of < 25 to 30 nmol/L were observed in 0% to 25.5% of the subjects included in five studies; the weighted average was 3.8% (95% CI: 3.0, 4.6). The preliminary results of the Canadian survey showed that approximately 5% of the subjects had serum levels below 29.5 nmol/L. The results of over 600,000 vitamin D tests performed in Ontarian community laboratories between October 2008 and September 2009 showed that 2.6% of adults (> 18 years) had serum levels < 25 nmol/L.
The prevalence of serum vitamin D levels below 37.5 to 50 nmol/L varied widely among studies, ranging from 8% to 73.6%, with a weighted average of 22.5%. The preliminary results of the Canadian Health Measures Survey (CHMS) showed that between 10% and 25% of subjects had serum levels below 37 to 48 nmol/L. The results of the vitamin D tests performed in community laboratories showed that 10% to 25% of individuals had serum levels between 39 and 50 nmol/L.
In an attempt to explain this inter-study variation, the study results were stratified according to factors affecting serum vitamin D levels, as summarized below. These results should be interpreted with caution as none were adjusted for other potential confounders. Adequately powered multivariate analyses would be necessary to determine the contribution of risk factors to lower serum 25(OH)D levels.
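The weighted averages quoted throughout this section pool study-level prevalences, weighting each by its sample size. A minimal sketch follows; the three studies below are invented for illustration, not the ones reviewed in this report.

```python
# Hypothetical study-level data: (sample size, prevalence of serum
# 25(OH)D below the 37.5-50 nmol/L threshold). Illustrative only.
studies = [
    (450, 0.08),
    (1200, 0.21),
    (300, 0.736),
]

total_n = sum(n for n, _ in studies)
weighted_avg = sum(n * p for n, p in studies) / total_n  # size-weighted pooling
print(round(weighted_avg, 3))  # 0.261
```

As the surrounding text cautions, such pooled averages are unadjusted for confounders; they summarize, rather than explain, inter-study variation.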
Seasonal variation
Three adult studies evaluating serum vitamin D levels in different seasons observed a trend towards a higher prevalence of serum levels < 37.5 to 50 nmol/L during the winter and spring months, specifically 21% to 39%, compared to 8% to 14% in the summer. The weighted average was 23.6% over the winter/spring months and 9.6% over summer. The difference between the seasons was not statistically significant in one study and not reported in the other two studies.
Skin Pigmentation
Four studies observed a trend toward a higher prevalence of serum vitamin D levels < 37.5 to 50 nmol/L in subjects with darker skin pigmentation compared to those with lighter skin pigmentation, with weighted averages of 46.8% among adults with darker skin colour and 15.9% among those with fairer skin.
Vitamin D intake and serum levels
Four adult studies evaluated serum vitamin D levels according to vitamin D intake and showed an overall trend toward a lower prevalence of serum levels < 37.5 to 50 nmol/L with higher vitamin D intake. One study observed a dose-response relationship between serum levels and vitamin D intake from supplements, diet (milk), and sun exposure (results not adjusted for other variables). Subjects taking 50 to 400 IU or > 400 IU of vitamin D per day had a 6% and 3% prevalence of serum vitamin D levels < 40 nmol/L, respectively, versus 29% in subjects not taking vitamin D supplements. Similarly, among subjects drinking one or two glasses of milk per day, the prevalence of serum vitamin D levels < 40 nmol/L was 15%, versus 6% in those who drank more than two glasses per day and 21% among those who did not drink milk. In contrast, another study observed little variation in serum vitamin D levels during winter according to milk intake: the proportion of subjects with levels < 40 nmol/L was 21% among those drinking 0-2 glasses per day, 26% among those drinking > 2 glasses, and 20% among non-milk drinkers.
The overall quality of evidence for the studies conducted among adults was deemed to be low, although it was considered moderate for the subgroups of skin pigmentation and seasonal variation.
Newborn, Children and Adolescents
Five Canadian studies evaluated serum vitamin D levels in newborns, children, and adolescents. In four of these, between 0% and 36% of children across age groups exhibited deficiency, with a weighted average of 6.4%. The results of over 28,000 vitamin D tests performed in children 0 to 18 years old in Ontario laboratories (Oct. 2008 to Sept. 2009) showed that 4.4% had serum levels of < 25 nmol/L.
According to two studies, 32% of infants 24 to 30 months old and 35.3% of newborns had serum vitamin D levels of < 50 nmol/L. Two studies of children 2 to 16 years old reported that 24.5% and 34% had serum vitamin D levels below 37.5 to 40 nmol/L. In both studies, older children exhibited a higher prevalence than younger children, with weighted averages of 34.4% and 10.3%, respectively. The overall weighted average of the prevalence of serum vitamin D levels < 37.5 to 50 nmol/L among pediatric studies was 25.8%. The preliminary results of the Canadian survey showed that between 10% and 25% of subjects between 6 and 11 years (n = 435) had serum levels below 50 nmol/L, while for those 12 to 19 years, 25% to 50% exhibited serum vitamin D levels below 50 nmol/L.
The effects of season, skin pigmentation, and vitamin D intake were not explored in Canadian pediatric studies. A Canadian surveillance study did, however, report 104 confirmed cases (2.9 cases per 100,000 children) of vitamin D-deficient rickets among Canadian children aged 1 to 18 between 2002 and 2004, of which 57 (55%) were from Ontario. The highest incidence occurred among children living in the North, i.e., the Yukon, Northwest Territories, and Nunavut. In 92 (89%) cases, skin pigmentation was categorized as intermediate to dark; 98 (94%) had been breastfed; and 25 (24%) were offspring of immigrants to Canada. There were no cases of rickets in children receiving ≥ 400 IU of vitamin D supplementation per day.
Overall, the quality of evidence of the studies of children was considered very low.
Kidney Disease
Adults
Two studies evaluated serum vitamin D levels in Canadian adults with kidney disease. The first included 128 patients with chronic kidney disease stages 3 to 5, 38% of whom had serum vitamin D levels of < 37.5 nmol/L (measured between April and July). This is higher than what was reported in Canadian studies of the general population during the summer months (i.e., between 8% and 14%). The second examined 419 subjects who had received a renal transplant (mean time since transplantation: 7.2 ± 6.4 years) and found a prevalence of serum vitamin D levels < 40 nmol/L of 27.3%. The authors concluded that the prevalence observed in the study population was similar to what is expected in the general population.
Children
No studies evaluating serum vitamin D levels in Canadian pediatric patients with kidney disease could be identified, although three such US studies among children with chronic kidney disease stages 1 to 5 were found. The mean age varied between 10.7 and 12.5 years in two studies and was not reported in the third. Across all three studies, the prevalence of serum vitamin D levels below the range of 37.5 to 50 nmol/L varied between 21% and 39%, which is not considerably different from what was observed in studies of healthy Canadian children (24% to 35%).
Overall, the quality of evidence in adults and children with kidney disease was considered very low.
Clinical Utility of Vitamin D Testing
A high quality comprehensive systematic review published in August 2007 evaluated the association between serum vitamin D levels and different bone health outcomes in different age groups. A total of 72 studies were included. The authors observed that there was a trend towards improvement in some bone health outcomes with higher serum vitamin D levels. Nevertheless, precise thresholds for improved bone health outcomes could not be defined across age groups. Further, no new studies on the association were identified during an updated systematic review on vitamin D published in July 2009.
With regard to non-bone health outcomes, there is no high or even moderate quality evidence supporting the effectiveness of vitamin D for outcomes such as cancer, cardiovascular outcomes, and all-cause mortality. Even allowing for residual uncertainty, there is no evidence that testing vitamin D levels encourages adherence to Health Canada’s guidelines for vitamin D intake. The serum vitamin D threshold required to prevent non-bone health related conditions cannot be established until a causal effect or correlation has been demonstrated between vitamin D levels and these conditions. This remains an ongoing research issue around which there is currently too much uncertainty to base any conclusions that would support routine vitamin D testing.
For patients with chronic kidney disease (CKD), there is again no high or moderate quality evidence supporting improved outcomes through the use of calcitriol or vitamin D analogs. In the absence of such data, the authors of the guidelines for CKD patients consider it best practice to maintain serum calcium and phosphate at normal levels, while supplementation with active vitamin D should be considered if serum PTH levels are elevated. As previously stated, the authors of guidelines for CKD patients believe that there is not enough evidence to support routine vitamin D [25(OH)D] testing. According to what is stated in the guidelines, decisions regarding the commencement or discontinuation of treatment with calcitriol or vitamin D analogs should be based on serum PTH, calcium, and phosphate levels.
Limitations associated with the evidence on vitamin D testing include ambiguities in the definition of an ‘adequate threshold level’ and both inter- and intra-assay variability. The MAS considers that both the lack of consensus on target serum vitamin D levels and these assay limitations directly undermine the clinical utility of testing. The evidence supporting the clinical utility of vitamin D testing is thus considered to be of very low quality.
Daily vitamin D intake, either through diet or supplementation, should follow Health Canada’s recommendations for healthy individuals of different age groups. For those with medical conditions such as renal disease, liver disease, and malabsorption syndromes, and for those taking medications that may affect vitamin D absorption/metabolism, physician guidance should be followed with respect to both vitamin D testing and supplementation.
Conclusions
Studies indicate that vitamin D, alone or in combination with calcium, may decrease the risk of fractures and falls among older adults.
There is no high or moderate quality evidence to support the effectiveness of vitamin D in other outcomes such as cancer, cardiovascular outcomes, and all-cause mortality.
Studies suggest that the prevalence of vitamin D deficiency in Canadian adults and children is relatively low (approximately 5%), and between 10% and 25% have serum levels below 40 to 50 nmol/L (based on very low to low grade evidence).
Given the limitations associated with serum vitamin D measurement, ambiguities in the definition of a ‘target serum level’, and the availability of clear guidelines on vitamin D supplementation from Health Canada, vitamin D testing is not warranted for the average risk population.
Health Canada has issued recommendations regarding the adequate daily intake of vitamin D, but current studies suggest that the mean dietary intake is below these recommendations. Accordingly, Health Canada’s guidelines and recommendations should be promoted.
Based on a moderate level of evidence, individuals with darker skin pigmentation appear to be at higher risk of low serum vitamin D levels than those with lighter skin pigmentation and may therefore need to be specifically targeted with respect to optimal vitamin D intake. The causal basis of this association is currently unclear.
Individuals with medical conditions such as renal and liver disease, osteoporosis, and malabsorption syndromes, as well as those taking medications that may affect vitamin D absorption/metabolism, should follow their physician’s guidance concerning both vitamin D testing and supplementation.
PMCID: PMC3377517  PMID: 23074397
9.  Effect of Health Risk Assessment and Counselling on Health Behaviour and Survival in Older People: A Pragmatic Randomised Trial 
PLoS Medicine  2015;12(10):e1001889.
Background
Potentially avoidable risk factors continue to cause unnecessary disability and premature death in older people. Health risk assessment (HRA), a method successfully used in working-age populations, is a promising method for cost-effective health promotion and preventive care in older individuals, but the long-term effects of this approach are unknown. The objective of this study was to evaluate the effects of an innovative approach to HRA and counselling in older individuals for health behaviours, preventive care, and long-term survival.
Methods and Findings
This study was a pragmatic, single-centre randomised controlled clinical trial in community-dwelling individuals aged 65 y or older registered with one of 19 primary care physician (PCP) practices in a mixed rural and urban area in Switzerland. From November 2000 to January 2002, 874 participants were randomly allocated to the intervention and 1,410 to usual care. The intervention consisted of HRA based on self-administered questionnaires and individualised computer-generated feedback reports, combined with nurse and PCP counselling over a 2-y period. Primary outcomes were health behaviours and preventive care use at 2 y and all-cause mortality at 8 y. At baseline, participants in the intervention group had a mean ± standard deviation of 6.9 ± 3.7 risk factors (including unfavourable health behaviours, health and functional impairments, and social risk factors) and 4.3 ± 1.8 deficits in recommended preventive care. At 2 y, favourable health behaviours and use of preventive care were more frequent in the intervention than in the control group (based on z-statistics from generalised estimating equation models). For example, 70% compared to 62% were physically active (odds ratio 1.43, 95% CI 1.16–1.77, p = 0.001), and 66% compared to 59% had influenza vaccinations in the past year (odds ratio 1.35, 95% CI 1.09–1.66, p = 0.005). At 8 y, based on an intention-to-treat analysis, the estimated proportion alive was 77.9% in the intervention and 72.8% in the control group, for an absolute mortality difference of 4.9% (95% CI 1.3%–8.5%, p = 0.009; based on z-test for risk difference). The hazard ratio of death comparing intervention with control was 0.79 (95% CI 0.66–0.94, p = 0.009; based on Wald test from Cox regression model), and the number needed to receive the intervention to prevent one death was 21 (95% CI 12–79). 
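The number needed to treat (NNT) reported above follows from the standard formula NNT = 1/ARR, the reciprocal of the absolute risk reduction, rounded up to a whole person. A short sketch using the trial's reported absolute mortality difference (the confidence bounds for an NNT are likewise derived from the risk-difference CI, though the exact method used by the authors is not stated here):

```python
import math

# Number needed to treat (NNT) from an absolute risk reduction (ARR).
# The trial reports an absolute mortality difference of 4.9% at 8 years.
arr = 0.049

# Round up: a fractional participant cannot receive the intervention.
nnt = math.ceil(1 / arr)
print(nnt)  # → 21, matching the reported NNT
```

In other words, roughly 21 older people would need to receive the HRA-plus-counselling programme to avert one death over eight years.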
The main limitations of the study include the single-site study design, the use of a brief self-administered questionnaire for 2-y outcome data collection, the unavailability of other long-term outcome data (e.g., functional status, nursing home admissions), and the availability of long-term follow-up data on mortality for analysis only in 2014.
Conclusions
This is the first trial to our knowledge demonstrating that a collaborative care model of HRA in community-dwelling older people not only results in better health behaviours and increased use of recommended preventive care interventions, but also improves survival. The intervention tested in our study may serve as a model of how to implement a relatively low-cost but effective programme of disease prevention and health promotion in older individuals.
Trial Registration
International Standard Randomized Controlled Trial Number: ISRCTN 28458424
In a randomized trial, Andreas Stuck and colleagues assess the effects of a collaborative care intervention on health behaviors and survival among elderly participants in Solothurn, Switzerland.
Editors' Summary
Background
The world’s population is getting older. In almost every country, the over-60 age group is growing faster than any other age group. In 2000, globally, there were about 605 million people aged 60 or more; by 2050, 2 billion people (many living in low- and middle-income countries) will be in this age group. But old age is not always a happy and healthy phase of life. Sadly, many older people find that their enjoyment of life is curtailed by chronic illnesses and increasing disability. Moreover, many older people die prematurely. In part, these adverse outcomes are linked to avoidable risk factors, particularly unhealthy lifestyles and failure to engage in preventative care. For example, older people commonly are physically inactive, smoke, drink too much alcohol, or do not have regular blood pressure checks or annual influenza vaccinations.
Why Was This Study Done?
Programs that encourage a healthy lifestyle and the uptake of preventative care among older people are a health policy priority worldwide. But what is the best way to improve health and reduce premature death among older people? One promising approach is “health risk assessment.” In this multidimensional approach, which has been used successfully among working-age populations, older individuals complete a questionnaire to provide information about their risk factors for functional status decline and are subsequently given personalized feedback on how to promote health, maintain function, or prevent disease. Previous studies showed that this approach may improve short-term outcomes such as take-up of preventive care and health behaviors, but the long-term effects on health were unknown. Here, the researchers evaluate the effects of health risk assessment plus counseling on both short-term outcomes and on long-term survival among older people by undertaking a pragmatic randomized controlled trial in Solothurn, Switzerland. A randomized controlled trial compares the outcomes of individuals randomly chosen to receive or not receive an intervention; a pragmatic trial asks whether an intervention works under real-life conditions.
What Did the Researchers Do and Find?
The researchers allocated 874 community-dwelling individuals aged 65 years or older living in a mixed rural and urban area in Switzerland to receive the intervention (the intervention group) and 1,410 individuals to receive usual care (the control group). The intervention consisted of health risk assessment based on self-administered questionnaires and individualized computer-generated feedback reports, combined with nurse and primary care physician counseling over a two-year period. At baseline, intervention group participants had about seven risk factors on average (including unfavorable health behaviors, health and functional impairments, and social risk factors) and 4–5 deficits in recommended preventative care. At two years, favorable health behaviors and use of preventative care were more frequent in the intervention group than in the control group, and these differences were statistically significant. For example, 70% of the intervention group were physically active compared to 62% of the control group, and 66% of the intervention group had had an influenza vaccination during the past 12 months compared to 59% of the control group. At eight years, 77.9% and 72.8% of the participants in the intervention and control groups, respectively, were still alive. Comparing the intervention group with the control group, the hazard ratio of death was 0.79. Finally, the researchers calculated that, to avert one death over eight years, 21 individuals would need to receive the intervention.
What Do These Findings Mean?
These findings show that implementation of a collaborative care model of health risk assessment in community-dwelling older people resulted in better health behaviors, increased use of preventative care, and improved survival. Certain aspects of the trial design may limit the interpretation of these findings. For example, a self-administered questionnaire was used to collect the two-year health behavior outcome data, and some participants may have given socially desirable answers (for example, they may have understated their alcohol intake). Also, as the study was undertaken at a single site, these findings may not be generalizable. Moreover, the study was based on complete follow-up information on survival, but no long-term follow-up data were available for functional status outcome. Overall, however, these findings suggest that the use of health risk assessment combined with personal reinforcement of health risk assessment recommendations by specially trained counselors might be an effective and relatively low-cost way to promote good health among non-disabled older people. Moreover, the researchers suggest that it might be possible to adapt this model for use in low- and middle-income countries, where the challenge of a rapidly growing population of older people is greatest.
Additional Information
This list of resources contains links that can be accessed when viewing the PDF on a device or via the online version of the article at http://dx.doi.org/10.1371/journal.pmed.1001889.
The US National Institute on Aging provides information on health and aging (in English and Spanish)
The UK National Health Service and Age UK (a not-for-profit organization) have produced a practical guide to healthy aging
The World Health Organization provides information on many aspects of aging (in several languages); the WHO Study on Global Ageing and Adult Health is compiling longitudinal information on the health and well-being of adult populations and the aging process
The United Nations Population Fund and HelpAge International publication Ageing in the Twenty-First Century is available
HelpAge International is an international non-governmental organization that helps older people claim their rights, challenge discrimination, and overcome poverty, so that they can lead dignified, secure, and healthy lives
More information on this trial, the Prevention in Older People–Assessment in Generalists’ Practices (PRO-AGE) trial, is available
Wikipedia has a page on health risk assessment (note that Wikipedia is a free online encyclopedia that anyone can edit; available in several languages)
doi:10.1371/journal.pmed.1001889
PMCID: PMC4610679  PMID: 26479077
10.  The Effectiveness of Mobile-Health Technology-Based Health Behaviour Change or Disease Management Interventions for Health Care Consumers: A Systematic Review 
PLoS Medicine  2013;10(1):e1001362.
Caroline Free and colleagues systematically review a fast-moving field, that of the effectiveness of mobile technology interventions delivered to healthcare consumers, and conclude that high-quality, adequately powered trials of optimized interventions are required to evaluate effects on objective outcomes.
Background
Mobile technologies could be a powerful medium for providing individual-level support to health care consumers. We conducted a systematic review to assess the effectiveness of mobile technology interventions delivered to health care consumers.
Methods and Findings
We searched for all controlled trials of mobile technology-based health interventions delivered to health care consumers using MEDLINE, EMBASE, PsycINFO, Global Health, Web of Science, Cochrane Library, and UK NHS HTA (Jan 1990–Sept 2010). Two authors extracted data on allocation concealment, allocation sequence, blinding, completeness of follow-up, and measures of effect. We calculated effect estimates and used random effects meta-analysis. We identified 75 trials. Fifty-nine trials investigated the use of mobile technologies to improve disease management and 26 trials investigated their use to change health behaviours. Nearly all trials were conducted in high-income countries. Four trials had a low risk of bias. Two trials of disease management had low risk of bias; in one, a trial of antiretroviral therapy (ART) adherence, text messages reduced the risk of a high viral load (>400 copies), with a relative risk (RR) of 0.85 (95% CI 0.72–0.99), but conferred no statistically significant benefit on mortality (RR 0.79 [95% CI 0.47–1.32]). In the second, a PDA-based intervention increased scores for perceived self-care agency in lung transplant patients. Two trials of health behaviour management had low risk of bias; the pooled effect of text messaging smoking cessation support on biochemically verified smoking cessation was RR 2.16 (95% CI 1.77–2.62). Interventions for other conditions showed suggestive benefits in some cases, but the results were not consistent. No evidence of publication bias was demonstrated on visual or statistical examination of the funnel plots for either disease management or health behaviours. To address the limitation of the older search, we also reviewed more recent literature.
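The relative risks quoted above are the standard ratio-of-event-proportions statistic, with a 95% CI usually computed on the log scale. A minimal sketch with illustrative counts (chosen only to echo an RR of 0.85; they are not the counts from any trial in this review):

```python
import math

# Relative risk (RR) and its 95% CI from a 2x2 trial table.
# Counts are illustrative, not taken from any trial in the review.
a, n1 = 34, 200    # events / total, intervention arm (17% event rate)
b, n2 = 40, 200    # events / total, control arm (20% event rate)

rr = (a / n1) / (b / n2)                          # ratio of event proportions
se_log_rr = math.sqrt(1/a - 1/n1 + 1/b - 1/n2)    # standard error of ln(RR)
lo = math.exp(math.log(rr) - 1.96 * se_log_rr)
hi = math.exp(math.log(rr) + 1.96 * se_log_rr)
print(f"RR {rr:.2f} (95% CI {lo:.2f}-{hi:.2f})")  # → RR 0.85 (95% CI 0.56-1.28)
```

Note how a point estimate below 1 can still come with a CI crossing 1 at this sample size; this is the same pattern as the mortality result above (RR 0.79, CI 0.47–1.32), where the observed reduction was not statistically significant.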
Conclusions
Text messaging interventions increased adherence to ART and smoking cessation and should be considered for inclusion in services. Although there is suggestive evidence of benefit in some other areas, high quality adequately powered trials of optimised interventions are required to evaluate effects on objective outcomes.
Please see later in the article for the Editors' Summary
Editors’ Summary
Background
Every year, millions of people die from cardiovascular diseases (diseases of the heart and circulation), chronic obstructive pulmonary disease (a long-term lung disease), lung cancer, HIV infection, and diabetes. These diseases are increasingly important causes of mortality (death) in low- and middle-income countries and are responsible for nearly 40% of deaths in high-income countries. For all these diseases, individuals can adopt healthy behaviors that help prevent disease onset. For example, people can lower their risk of diabetes and cardiovascular disease by maintaining a healthy body weight, and, if they are smokers, they can reduce their risk of lung cancer and cardiovascular disease by giving up cigarettes. In addition, optimal treatment of existing diseases can reduce mortality and morbidity (illness). Thus, in people who are infected with HIV, antiretroviral therapy delays the progression of HIV infection and the onset of AIDS, and in people who have diabetes, good blood sugar control can prevent retinopathy (a type of blindness) and other serious complications of diabetes.
Why Was This Study Done?
Health-care providers need effective ways to encourage "health-care consumers" to make healthy lifestyle choices and to self-manage chronic diseases. The amount of information, encouragement and support that can be conveyed to individuals during face-to-face consultations or through traditional media such as leaflets is limited, but mobile technologies such as mobile phones and portable computers have the potential to transform the delivery of health messages. These increasingly popular technologies—more than two-thirds of the world's population now owns a mobile phone—can be used to deliver health messages to people anywhere and at the most relevant times. For example, smokers trying to quit smoking can be sent regular text messages to sustain their motivation, but can also use text messaging to request extra support when it is needed. But is "mHealth," the provision of health-related services using mobile communication technology, an effective way to deliver health messages to health-care consumers? In this systematic review (a study that uses predefined criteria to identify all the research on a given topic), the researchers assess the effectiveness of mobile technology-based health behavior change interventions and disease management interventions delivered to health-care consumers.
What Did the Researchers Do and Find?
The researchers identified 75 controlled trials (studies that compare the outcomes of people who do and do not receive an intervention) of mobile technology-based health interventions delivered to health-care consumers that met their predefined criteria. Twenty-six trials investigated the use of mobile technologies to change health behaviors, 59 investigated their use in disease management, most were of low quality, and nearly all were undertaken in high-income countries. In one high-quality trial that used text messages to improve adherence to antiretroviral therapy among HIV-positive patients in Kenya, the intervention significantly reduced the patients’ viral load but did not significantly reduce mortality (the observed reduction in deaths may have happened by chance). In two high-quality UK trials, a smoking intervention based on text messaging (txt2stop) more than doubled biochemically verified smoking cessation. Other lower-quality trials indicated that using text messages to encourage physical activity improved diabetes control but had no effect on body weight. Combined diet and physical activity text messaging interventions also had no effect on weight, whereas interventions for other conditions showed suggestive benefits in some but not all cases.
What Do These Findings Mean?
These findings provide mixed evidence for the effectiveness of health intervention delivery to health-care consumers using mobile technologies. Moreover, they highlight the need for additional high-quality controlled trials of this mHealth application, particularly in low- and middle-income countries. Specifically, the demonstration that text messaging interventions increased adherence to antiretroviral therapy in a low-income setting and increased smoking cessation in a high-income setting provides some support for the inclusion of these two interventions in health-care services in similar settings. However, the effects of these two interventions need to be established in other settings and their cost-effectiveness needs to be measured before they are widely implemented. Finally, for other mobile technology–based interventions designed to change health behaviors or to improve self-management of chronic diseases, the results of this systematic review suggest that the interventions need to be optimized before further trials are undertaken to establish their clinical benefits.
Additional Information
Please access these Web sites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.1001362.
A related PLOS Medicine Research Article by Free et al. investigates the ability of mHealth technologies to improve health-care service delivery processes
Wikipedia has a page on mHealth (note: Wikipedia is a free online encyclopedia that anyone can edit; available in several languages)
mHealth: New horizons for health through mobile technologies is a global survey of mHealth prepared by the World Health Organization’s Global Observatory for eHealth (eHealth is health-care practice supported by electronic processes and communication)
The mHealth in Low-Resource Settings website, which is maintained by the Netherlands Royal Tropical Institute, provides information on the current use, potential, and limitations of mHealth in low-resource settings
More information about Txt2stop is available, the UK National Health Service Choices website provides an analysis of the Txt2stop trial and what its results mean, and the UK National Health Service Smokefree website provides a link to a Quit App for the iPhone
The US Centers for Disease Control and Prevention has launched a text messaging service that delivers regular health tips and alerts to mobile phones
doi:10.1371/journal.pmed.1001362
PMCID: PMC3548655  PMID: 23349621
11.  The Effect of India's Total Sanitation Campaign on Defecation Behaviors and Child Health in Rural Madhya Pradesh: A Cluster Randomized Controlled Trial 
PLoS Medicine  2014;11(8):e1001709.
Sumeet Patil and colleagues conduct a cluster randomized controlled trial to measure the effect of India's Total Sanitation Campaign in Madhya Pradesh on the availability of individual household latrines, defecation behaviors, and child health.
Please see later in the article for the Editors' Summary
Background
Poor sanitation is thought to be a major cause of enteric infections among young children. However, there are no previously published randomized trials to measure the health impacts of large-scale sanitation programs. India's Total Sanitation Campaign (TSC) is one such program that seeks to end the practice of open defecation by changing social norms and behaviors, and providing technical support and financial subsidies. The objective of this study was to measure the effect of the TSC implemented with capacity building support from the World Bank's Water and Sanitation Program in Madhya Pradesh on availability of individual household latrines (IHLs), defecation behaviors, and child health (diarrhea, highly credible gastrointestinal illness [HCGI], parasitic infections, anemia, growth).
Methods and Findings
We conducted a cluster-randomized, controlled trial in 80 rural villages. Field staff collected baseline measures of sanitation conditions, behaviors, and child health (May–July 2009), and revisited households 21 months later (February–April 2011) after the program was delivered. The study enrolled a random sample of 5,209 children <5 years old from 3,039 households that had at least one child <24 months at the beginning of the study. A random subsample of 1,150 children <24 months at enrollment were tested for soil transmitted helminth and protozoan infections in stool. The randomization successfully balanced intervention and control groups, and we estimated differences between groups in an intention to treat analysis. The intervention increased the percentage of households in a village with improved sanitation facilities as defined by the WHO/UNICEF Joint Monitoring Programme by an average of 19% (95% CI for difference: 12%–26%; group means: 22% control versus 41% intervention) and decreased open defecation among adults by an average of 10% (95% CI for difference: 4%–15%; group means: 73% intervention versus 84% control). However, the intervention did not improve child health measured in terms of multiple health outcomes (diarrhea, HCGI, helminth infections, anemia, growth). Limitations of the study included a relatively short follow-up period following implementation, evidence of contamination in ten of the 40 control villages, and possible bias in self-reported outcomes for diarrhea, HCGI, and open defecation behaviors.
Conclusions
The intervention led to modest increases in availability of IHLs and even more modest reductions in open defecation. These improvements were insufficient to improve child health outcomes (diarrhea, HCGI, parasite infection, anemia, growth). The results underscore the difficulty of achieving adequately large improvements in sanitation levels to deliver expected health benefits within large-scale rural sanitation programs.
Trial Registration
ClinicalTrials.gov NCT01465204
Please see later in the article for the Editors' Summary
Editors' Summary
Background
Diarrheal diseases are linked with the deaths of hundreds of thousands of young children each year in resource-limited countries. Infection with enteric pathogens (organisms such as bacteria, viruses, and parasites that infect the human intestine or gut) also affects the health and growth of many young children in these countries. A major contributor to the transmission of enteric pathogens is thought to be open defecation, which can expose individuals to direct contact with human feces containing infectious pathogens and also contaminate food and drinking water. Open defecation can be reduced by ensuring that people have access to and use toilets or latrines. Consequently, programs have been initiated in many resource-limited countries that aim to reduce open defecation by changing behaviors and by providing technical and financial support to help households build improved latrines (facilities that prevent human feces from re-entering the environment such as pit latrines with sealed squat plates; an example of an unimproved facility is a simple open hole). However, in 2011, according to the WHO/UNICEF Joint Monitoring Programme for Water Supply and Sanitation, more than 1 billion people (15% of the global population) still defecated in the open.
Why Was This Study Done?
Studies of sewerage system provision in urban areas suggest that interventions that prevent human feces from entering the environment reduce diarrheal diseases. However, little is known about how rural sanitation programs, which usually focus on providing stand-alone sanitation facilities, affect diarrheal disease, intestinal parasite infections, anemia (which can be caused by parasite infections), or growth in young children. Governments and international donors need to know whether large-scale rural sanitation programs improve child health before expending further resources on these interventions, or whether existing program design or implementation urgently needs improvement to deliver the expected health impact. In this study, the researchers investigate the effect of India's Total Sanitation Campaign (TSC) on the availability of individual household latrines, defecation behaviors, and child health in rural Madhya Pradesh, one of India's less developed states. Sixty percent of people who practice open defecation live in India, and a quarter of global child deaths from diarrheal diseases occur in the country. India's TSC, which was initiated in 1999, includes activities designed to change social norms and behaviors and provides technical and financial support for latrine building. To date, no published studies have rigorously evaluated whether the TSC improves child health.
What Did the Researchers Do and Find?
A cluster randomized controlled trial randomly assigns groups of people to receive the intervention under study and compares their outcomes with those of a control group that does not receive the intervention. The researchers enrolled 5,209 children aged under 5 years living in 3,039 households in 80 rural villages in Madhya Pradesh. Half of the villages (40), chosen at random, received the TSC (the intervention). Field staff collected data on sanitation conditions, defecation behaviors, and child health from caregivers in each household at the start of the study and again after TSC implementation in the intervention villages was complete. A random subsample of children was also tested for infection with enteric parasites. The intervention increased the percentage of households in a village with improved sanitation facilities by 19% on average: 41% of households in the intervention villages had improved latrines, compared to 22% of households in the control villages. The intervention also decreased the proportion of adults who self-reported open defecation from 84% to 73%. However, the intervention did not improve child health as measured by multiple outcomes, including gastrointestinal illness, intestinal parasite infection, and growth.
What Do These Findings Mean?
These findings indicate that in rural Madhya Pradesh, the TSC implemented with support from the WSP only slightly increased the availability of individual household latrines and only slightly decreased the practice of open defecation. Importantly, these modest improvements in sanitation and defecation behaviors were insufficient to improve health outcomes among children. The accuracy of these findings may be limited by several aspects of the study. For example, several control villages actually received the intervention, which means that these findings probably underestimate the effect the intervention would have under ideal conditions. Self-reporting of defecation behavior, availability of sanitation facilities, and gastrointestinal illnesses among children may also have biased the findings. Finally, because TSC implementation varies widely across India, these findings may not apply to other Indian states or to other TSC implementation strategies. Overall, however, these findings highlight the challenge of achieving improvements in access to sanitation, and corresponding reductions in open defecation, large enough to deliver health benefits within large-scale rural sanitation programs.
Additional Information
Please access these websites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.1001709.
This study is further discussed in a PLOS Medicine Perspective by Clarissa Brocklehurst
A PLOS Medicine Collection on water and sanitation is available
The World Health Organization (WHO) provides information on water, sanitation, and health (in several languages), on diarrhea (in several languages), and on intestinal parasites (accessed through WHO's web page on neglected tropical diseases); the 2009 WHO/UNICEF report “Diarrhea: why children are still dying and what can be done” is available online for download
The WHO/UNICEF Joint Monitoring Programme for Water Supply and Sanitation monitors progress toward improved global sanitation; its 2014 update report is available online
The children's charity UNICEF, which protects the rights of children and young people around the world, provides information on water, sanitation, and hygiene, and on diarrhea (in several languages)
doi:10.1371/journal.pmed.1001709
PMCID: PMC4144850  PMID: 25157929
12.  Online Health Information Seeking Behaviors of Hispanics in New York City: A Community-Based Cross-Sectional Study 
Background
The emergence of the Internet has increased access to health information and can facilitate active individual engagement in health care decision making. Hispanics are the fastest-growing minority group in the United States and are also the most underserved in terms of access to online health information. A growing body of literature has examined correlates of online health information seeking behaviors (HISBs), but few studies have included Hispanics.
Objective
The specific aim of this descriptive, correlational study was to examine factors associated with HISBs of Hispanics.
Methods
The study sample (N=4070) was recruited from five postal zip codes in northern Manhattan for the Washington Heights Inwood Informatics Infrastructure for Comparative Effectiveness Research project. Survey data were collected via interview by bilingual community health workers in a community center, households, and other community settings. Data were analyzed using bivariate analyses and logistic regression.
Results
Among individual respondents, online HISBs were significantly associated with higher education (OR 3.03, 95% CI 2.15-4.29, P<.001), worse health status (OR 0.42, 95% CI 0.31-0.57, P<.001), and having no hypertension (OR 0.60, 95% CI 0.43-0.84, P=.003). Online HISBs of other household members were significantly associated with the following respondent factors: female gender (OR 1.60, 95% CI 1.22-2.10, P=.001), younger age (OR 0.75, 95% CI 0.62-0.90, P=.002), being married (OR 1.36, 95% CI 1.09-1.71, P=.007), higher education (OR 1.80, 95% CI 1.40-2.32, P<.001), worse health status (OR 0.59, 95% CI 0.46-0.77, P<.001), and having serious health problems (OR 1.83, 95% CI 1.29-2.60, P=.001).
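Odds ratios like those above are typically obtained by exponentiating logistic-regression coefficients. A small sketch of that conversion follows; the coefficient and standard error shown are back-calculated for illustration and are not taken from the study.

```python
from math import exp

def or_from_logit(beta, se, z=1.96):
    """Exponentiate a logistic-regression coefficient and its Wald
    confidence limits to obtain an odds ratio with a 95% CI."""
    return exp(beta), exp(beta - z * se), exp(beta + z * se)

# Hypothetical inputs: a coefficient of about 1.109 with SE 0.178
# reproduces roughly OR 3.03 (95% CI ~2.14-4.30), similar to the
# education OR reported above.
or_, lo, hi = or_from_logit(1.109, 0.178)
```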
Conclusions
This large-scale community survey identified factors associated with online HISBs among Hispanics that merit closer examination. To enhance online HISBs among Hispanics, health care providers and policy makers need to understand the cultural context of the Hispanic population. Results of this study can provide a foundation for the development of informatics-based interventions to improve the health of Hispanics in the United States.
doi:10.2196/jmir.3499
PMCID: PMC4129127  PMID: 25092120
Internet; information seeking behavior; health behavior; consumer health information; hispanic Americans
13.  Measuring Coverage in MNCH: Population HIV-Free Survival among Children under Two Years of Age in Four African Countries 
PLoS Medicine  2013;10(5):e1001424.
Background
Population-based evaluations of programs for prevention of mother-to-child HIV transmission (PMTCT) are scarce. We measured PMTCT service coverage, regimen use, and HIV-free survival among children ≤24 mo of age in Cameroon, Côte D'Ivoire, South Africa, and Zambia.
Methods and Findings
We randomly sampled households in 26 communities and offered participation if a child had been born to a woman living there during the prior 24 mo. We tested consenting mothers with rapid HIV antibody tests and tested the children of seropositive mothers with HIV DNA PCR or rapid antibody tests. Our primary outcome was 24-mo HIV-free survival, estimated with survival analysis. In an individual-level analysis, we evaluated the effectiveness of various PMTCT regimens. In a community-level analysis, we evaluated the relationship between HIV-free survival and community PMTCT coverage (the proportion of HIV-exposed infants in each community that received any PMTCT intervention during gestation or breastfeeding). We also compared our community coverage results to those of a contemporaneous study conducted in the facilities serving each sampled community. Of 7,985 surveyed children under 2 y of age, 1,014 (12.7%) were HIV-exposed. Of these, 110 (10.9%) were HIV-infected, 851 (83.9%) were HIV-uninfected, and 53 (5.2%) were dead. HIV-free survival at 24 mo of age among all HIV-exposed children was 79.7% (95% CI: 76.4, 82.6) overall, with the following country-level estimates: Cameroon (72.6%; 95% CI: 62.3, 80.5), South Africa (77.7%; 95% CI: 72.5, 82.1), Zambia (83.1%; 95% CI: 78.4, 86.8), and Côte D'Ivoire (84.4%; 95% CI: 70.0, 92.2). In adjusted analyses, the risk of death or HIV infection was non-significantly lower in children whose mothers received a more complex regimen of either two or three antiretroviral drugs compared to those receiving no prophylaxis (adjusted hazard ratio: 0.60; 95% CI: 0.34, 1.06). Risk of death was not different for children whose mothers received a more complex regimen compared to those given single-dose nevirapine (adjusted hazard ratio: 0.88; 95% CI: 0.45, 1.72). 
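The 24-mo HIV-free survival figures above come from survival analysis, which must handle children whose follow-up ended before 24 months (censoring). A minimal Kaplan-Meier estimator, shown with toy data rather than the study's, illustrates the mechanics:

```python
def kaplan_meier(times, events):
    """Kaplan-Meier survival curve.

    times: follow-up time for each subject.
    events: 1 = event (e.g., death or HIV infection), 0 = censored.
    Returns a list of (time, survival probability) at event times.
    """
    data = sorted(zip(times, events))
    at_risk = len(data)
    s = 1.0
    curve = []
    i = 0
    while i < len(data):
        t = data[i][0]
        deaths = censored = 0
        while i < len(data) and data[i][0] == t:
            if data[i][1]:
                deaths += 1
            else:
                censored += 1
            i += 1
        if deaths:
            s *= (at_risk - deaths) / at_risk  # multiply by conditional survival
            curve.append((t, s))
        at_risk -= deaths + censored
    return curve

# Toy data: events at t=1 and t=3, censoring at t=2 and t=4.
print(kaplan_meier([1, 2, 3, 4], [1, 0, 1, 0]))  # [(1, 0.75), (3, 0.375)]
```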
Community PMTCT coverage was highest in Cameroon, where 75 of 114 HIV-exposed infants met criteria for coverage (66%; 95% CI: 56, 74), followed by Zambia (219 of 444, 49%; 95% CI: 45, 54), then South Africa (152 of 365, 42%; 95% CI: 37, 47), and then Côte D'Ivoire (3 of 53, 5.7%; 95% CI: 1.2, 16). In a cluster-level analysis, community PMTCT coverage was highly correlated with facility PMTCT coverage (Pearson's r = 0.85), and moderately correlated with 24-mo HIV-free survival (Pearson's r = 0.29). In 14 of 16 instances where both the facility and community samples were large enough for comparison, the facility-based coverage measure exceeded that observed in the community.
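The community-facility agreement quoted above is summarized with Pearson's r. As a reminder of what that statistic computes, here is a minimal pure-Python version, run on made-up cluster-level coverage pairs rather than the study's data:

```python
from math import sqrt

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical per-community coverage (facility %, community %):
facility = [70, 55, 48, 10]
community = [66, 49, 42, 6]
print(round(pearson_r(facility, community), 2))
```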
Conclusions
HIV-free survival can be estimated with community surveys and should be incorporated into ongoing country monitoring. Facility-based coverage measures correlate with those derived from community sampling, but may overestimate population coverage. The more complex regimens recommended by the World Health Organization seem to have measurable public health benefit at the population level, but power was limited and additional field validation is needed.
Please see later in the article for the Editors' Summary
Editors' Summary
Background
For a pregnant woman who is HIV-positive, the discrepancy across the world in outlook for mother and child is stark. Mother-to-child transmission of HIV during pregnancy is now less than 1% in many high-income settings, but occurs much more often in low-income countries. Three interventions have a major impact on transmission of HIV to the baby: antiretroviral drugs, mode of delivery, and type of infant feeding. The latter two are complex, as the interventions commonly used in high-income countries (cesarean section if the maternal viral load is high; exclusive formula feeding) have their own risks in low-income settings. Minimizing the risks of transmitting HIV through effective drug regimens therefore becomes particularly important. Monitoring progress on reducing the incidence of mother-to-child HIV transmission is essential, but not always easy to achieve.
Why Was This Study Done?
A research group led by Stringer and colleagues recently reported a study from four countries in Africa: Cameroon, Côte D'Ivoire, South Africa, and Zambia. The study showed that even in the health facility setting (e.g., hospitals and clinics), only half of infants whose mothers were HIV-positive received the minimum recommended drug treatment (one dose of nevirapine during labor) to prevent HIV transmission. Across the population of these countries, it is possible that fewer receive antiretroviral drugs, as the study did not include women who did not access health facilities. Therefore, the next stage of the study by this research group, reported here, involved going into the communities around these health facilities to find out how many infants under two years old had been exposed to HIV, whether they had received drugs to prevent transmission, and what proportion were alive and not infected with HIV at two years old.
What Did the Researchers Do and Find?
The researchers tested all consenting women who had delivered a baby in the last two years in the surrounding communities. If the mother was found to be HIV-positive, then the infant was also tested for HIV. The researchers then calculated how many of the infants would be alive at two years and free of HIV infection.
Most mothers (78%) agreed to testing for themselves and their infants. There were 7,985 children under two years of age in this study, of whom 13% had been born to an HIV-positive mother. Less than half (46%) of the HIV-positive mothers had received any drugs to prevent HIV transmission. Of the children with HIV-positive mothers, 11% were HIV-infected, 84% were not infected with HIV, and 5% had died. Overall, the researchers estimated that around 80% of these children would be alive at two years without HIV infection. This proportion differed non-significantly between the four countries (ranging from 73% to 84%). The researchers found higher rates of infant survival than they had expected and knew that they might have missed some infant deaths (e.g., if households with infant deaths were less likely to take part in the study).
The researchers found that their estimates of the proportion of HIV-positive mothers who received drugs to prevent transmission were fairly similar between their previous study, looking at health facilities, and this study of the surrounding communities. However, in 14 out of 16 comparisons, the estimate from the community was lower than that from the facility.
What Do These Findings Mean?
This study shows that it would be possible to estimate how many infants are surviving free of HIV infection using a study based in the community, and that these estimates may be more accurate than those for studies based in health facilities. There are still a large proportion of HIV-positive mothers who are not receiving drugs to prevent transmission to the baby. The authors suggest that using two or three drugs to prevent HIV may help to reduce transmission.
There are already community surveys conducted in many low-income countries, but they have not included routine infant testing for HIV. It is now essential that organizations providing drugs, money, and infrastructure in this field consider more accurate means of monitoring incidence of HIV transmission from mother to infant, particularly at the community level.
Additional Information
Please access these websites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.1001424.
The World Health Organization has more information on mother-to-child transmission of HIV
The United Nations Children's Fund has more information on the status of national PMTCT responses in the most affected countries
doi:10.1371/journal.pmed.1001424
PMCID: PMC3646218  PMID: 23667341
14.  The Effect of Handwashing at Recommended Times with Water Alone and With Soap on Child Diarrhea in Rural Bangladesh: An Observational Study 
PLoS Medicine  2011;8(6):e1001052.
By observing handwashing behavior in 347 households from 50 villages across rural Bangladesh in 2007, Stephen Luby and colleagues found that hand washing with soap or hand rinsing without soap before food preparation can both reduce the burden of childhood diarrhea.
Background
Standard public health interventions to improve hand hygiene in communities with high levels of child mortality encourage community residents to wash their hands with soap at five separate key times, a recommendation that would require mothers living in impoverished households to typically wash hands with soap more than ten times per day. We analyzed data from households that received no intervention in a large prospective project evaluation to assess the relationship between observed handwashing behavior and subsequent diarrhea.
Methods and Findings
Fieldworkers conducted a 5-hour structured observation and a cross-sectional survey in 347 households from 50 villages across rural Bangladesh in 2007. For the subsequent 2 years, a trained community resident visited each of the enrolled households every month and collected information on the occurrence of diarrhea in the preceding 48 hours among household residents under the age of 5 years. Compared with children living in households where persons prepared food without washing their hands, children living in households where the food preparer washed at least one hand with water only (odds ratio [OR] = 0.78; 95% confidence interval [CI] = 0.57–1.05), washed both hands with water only (OR = 0.67; 95% CI = 0.51–0.89), or washed at least one hand with soap (OR = 0.30; 95% CI = 0.19–0.47) had less diarrhea. In households where residents washed at least one hand with soap after defecation, children had less diarrhea (OR = 0.45; 95% CI = 0.26–0.77). There was no significant association between handwashing with or without soap before feeding a child, before eating, or after cleaning the anus of a child who had defecated, and subsequent child diarrhea.
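The odds ratios in this abstract come from regression models, but the arithmetic behind a crude (unadjusted) odds ratio is just a 2×2 table. A sketch with made-up counts, not the study's:

```python
from math import exp, log, sqrt

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Crude odds ratio with Woolf (log-scale) 95% CI from a 2x2 table:
    a = exposed with outcome,   b = exposed without outcome,
    c = unexposed with outcome, d = unexposed without outcome."""
    or_ = (a * d) / (b * c)
    se = sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(OR)
    return or_, exp(log(or_) - z * se), exp(log(or_) + z * se)

# Hypothetical counts: households where hands were washed with soap
# vs. not, by whether child diarrhea was recorded.
or_, lo, hi = odds_ratio_ci(12, 188, 60, 340)
```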
Conclusions
These observations suggest that handwashing before preparing food is a particularly important opportunity to prevent childhood diarrhea, and that handwashing with water alone can significantly reduce childhood diarrhea.
Please see later in the article for the Editors' Summary
Editors' Summary
Background
The resurgence of donor interest in water and sanitation as fundamental public health issues is a welcome step forward and will do much to improve the health of the 1.1 billion people worldwide without access to clean water and the 2.4 billion without access to improved sanitation. Improving hygiene practices is also very important: studies have consistently shown that handwashing with soap reduces childhood diarrheal disease. In practice, however, this is particularly difficult to achieve because it involves complex behavioral changes. Although public health programs in communities with high child mortality commonly promote handwashing with soap, the practice remains uncommon, and washing hands with water only remains the norm, partly because of the high cost of soap relative to income, the risk that conveniently placed soap will be stolen or wasted, and the inconvenience of fetching soap.
Handwashing promotion programs often focus on five “key times” for handwashing with soap: after defecation, after handling child feces or cleaning a child's anus, before preparing food, before feeding a child, and before eating. Following this advice would require busy, impoverished mothers to wash their hands with soap more than ten times a day.
Why Was This Study Done?
In addition to encouraging handwashing only at the most critical times, clarifying the value of handwashing with water alone, a behavior that seems much easier for people to practice but for which there is little evidence, may be a way forward. To guide more focused and evidence-based recommendations, the researchers evaluated the control group of a large handwashing, hygiene/sanitation, and water quality improvement program: Sanitation, Hygiene Education and Water supply-Bangladesh (SHEWA-B), organized and supported by the Bangladesh Government, UNICEF, and the UK's Department for International Development. The researchers analyzed the relationship between handwashing behavior observed at baseline and the subsequent experience of child diarrhea in participating households, to identify which specific handwashing behaviors were associated with less diarrhea in young children.
What Did the Researchers Do and Find?
The SHEWA-B intervention targeted 19.6 million people in 68 subdistricts of rural Bangladesh. In this study, with community and household consent, trained fieldworkers used a pretested instrument to observe and record handwashing behavior at key times in all enrolled households at baseline in 50 randomly selected villages; these households served as non-intervention controls against which outcomes in communities receiving the SHEWA-B program could be compared. The fieldworkers recruited community monitors, female village residents who completed 3 days of training on how to administer the monthly diarrhea survey, to record the frequency of diarrhea among children aged less than 5 years in control households for the subsequent two years. The researchers used statistical models to evaluate the association between the exposure variables (household characteristics and observed handwashing) and diarrhea.
Using these methods, the researchers found that, compared to no handwashing at all before food preparation, children living in households where the food preparer washed at least one hand with water only, washed both hands with water only, or washed at least one hand with soap had less diarrhea, with odds ratios (ORs) of 0.78, 0.67, and 0.30, respectively. In households where residents washed at least one hand with soap after defecation, children had less diarrhea (OR = 0.45), but there was no significant association between handwashing with or without soap before feeding a child, before eating, or after cleaning a child who had defecated, and subsequent child diarrhea.
What Do These Findings Mean?
These findings from 50 villages across rural Bangladesh, where fecal environmental contamination, undernutrition, and diarrhea are common, suggest that handwashing before preparing food is a particularly important opportunity to prevent childhood diarrhea, and that handwashing with water alone can significantly reduce childhood diarrhea. In contrast to current standard recommendations, these results suggest that promoting handwashing exclusively with soap may be unwarranted. Handwashing with water alone might be seen as a step on the handwashing ladder: handwashing with water is good; handwashing with soap is better. Handwashing promotion programs in rural Bangladesh should therefore not attempt to modify behavior at all five key times but should focus primarily on handwashing after defecation and before food preparation. Furthermore, research to develop and evaluate handwashing messages that account for the limited time and soap available to low-income families, and that focus on the behaviors with the strongest evidence of health benefit, could help identify more effective strategies.
Additional Information
Please access these Web sites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.1001052.
A four-part collection of Policy Forum articles published in November 2010 in PLoS Medicine, called “Water and Sanitation,” provides information on water, sanitation, and hygiene
Hygiene Central provides information on improving hygiene practices
doi:10.1371/journal.pmed.1001052
PMCID: PMC3125291  PMID: 21738452
15.  Reducing the Impact of the Next Influenza Pandemic Using Household-Based Public Health Interventions 
PLoS Medicine  2006;3(9):e361.
Background
The outbreak of highly pathogenic H5N1 influenza in domestic poultry and wild birds has caused global concern over the possible evolution of a novel human strain [1]. If such a strain emerges, and is not controlled at source [2,3], a pandemic is likely to result. Health policy in most countries will then be focused on reducing morbidity and mortality.
Methods and Findings
We estimate the expected reduction in primary attack rates for different household-based interventions using a mathematical model of influenza transmission within and between households. We show that, for lower transmissibility strains [2,4], the combination of household-based quarantine, isolation of cases outside the household, and targeted prophylactic use of anti-virals will be highly effective and likely feasible across a range of plausible transmission scenarios. For example, for a basic reproductive number (the average number of people infected by a typically infectious individual in an otherwise susceptible population) of 1.8, assuming only 50% compliance, this combination could reduce the infection (symptomatic) attack rate from 74% (49%) to 40% (27%), requiring peak quarantine and isolation levels of 6.2% and 0.8% of the population, respectively, and an overall anti-viral stockpile of 3.9 doses per member of the population. Although contact tracing may be additionally effective, the resources required make it impractical in most scenarios.
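The attack rates quoted for a basic reproductive number of 1.8 are broadly consistent with the classical final-size relation for a homogeneously mixing epidemic, z = 1 − exp(−R0·z). The sketch below solves that relation by fixed-point iteration; it is a simplification that ignores the paper's household structure and interventions:

```python
from math import exp

def final_size(r0, tol=1e-10):
    """Solve z = 1 - exp(-r0 * z) for the nonzero root (requires r0 > 1).

    z is the fraction of the population ultimately infected in the
    standard homogeneous-mixing epidemic model.
    """
    z = 0.9  # any starting value in (0, 1] converges for r0 > 1
    while True:
        z_new = 1 - exp(-r0 * z)
        if abs(z_new - z) < tol:
            return z_new
        z = z_new

print(round(final_size(1.8), 3))  # ≈ 0.732, close to the 74% quoted above
```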
Conclusions
National influenza pandemic preparedness plans currently focus on reducing the impact associated with a constant attack rate, rather than on reducing transmission. Our findings suggest that the additional benefits and resource requirements of household-based interventions in reducing average levels of transmission should also be considered, even when expected levels of compliance are only moderate.
Voluntary household-based quarantine and external isolation are likely to be effective in limiting the morbidity and mortality of an influenza pandemic, even if such a pandemic cannot be entirely prevented, and even if compliance with these interventions is moderate.
Editors' Summary
Background.
Naturally occurring variation in the influenza virus can lead both to localized annual epidemics and to less frequent global pandemics of catastrophic proportions. The most destructive of the three influenza pandemics of the 20th century, the so-called Spanish flu of 1918–1919, is estimated to have caused 20 million deaths. As evidenced by ongoing tracking efforts and news media coverage of H5N1 avian influenza, contemporary approaches to monitoring and communications can be expected to alert health officials and the general public of the emergence of new, potentially pandemic strains before they spread globally.
Why Was This Study Done?
In order to act most effectively on advance notice of an approaching influenza pandemic, public health workers need to know which available interventions are likely to be most effective. This study was done to estimate the effectiveness of specific preventive measures that communities might implement to reduce the impact of pandemic flu. In particular, the study evaluates methods to reduce person-to-person transmission of influenza, in the likely scenario that complete control cannot be achieved by mass vaccination and anti-viral treatment alone.
What Did the Researchers Do and Find?
The researchers developed a mathematical model—essentially a computer simulation—to simulate the course of pandemic influenza in a hypothetical population at risk for infection at home, through external peer networks such as schools and workplaces, and through general community transmission. Parameters such as the distribution of household sizes, the rate at which individuals develop symptoms from nonpandemic viruses, and the risk of infection within households were derived from demographic and epidemiologic data from Hong Kong, as well as empirical studies of influenza transmission. A model based on these parameters was then used to calculate the effects of interventions including voluntary household quarantine, voluntary individual isolation in a facility outside the home, and contact tracing (that is, asking infectious individuals to identify people whom they may have infected and then warning those people) on the spread of pandemic influenza through the population. The model also took into account the anti-viral treatment of exposed, asymptomatic household members and of individuals in isolation, and assumed that all intervention strategies were put into place before the arrival of individuals infected with the pandemic virus.
Using this model, the authors predicted that even if only half of the population were to comply with public health interventions, the proportion infected during the first year of an influenza pandemic could be substantially reduced by a combination of household-based quarantine, isolation of actively infected individuals in a location outside the household, and targeted prophylactic treatment of exposed individuals with anti-viral drugs. Based on an influenza-associated mortality rate of 0.5% (as has been estimated for New York City in the 1918–1919 pandemic), the magnitude of the predicted benefit of these interventions is a reduction from 49% to 27% in the proportion of the population who become ill in the first year of the pandemic, which would correspond to 16,000 fewer deaths in a city the size of Hong Kong (6.8 million people). In the model, anti-viral treatment appeared to be about as effective as isolation when each was used in combination with household quarantine, but would require stockpiling 3.9 doses of anti-viral for each member of the population. Contact tracing was predicted to provide a modest additional benefit over quarantine and isolation, but also to increase considerably the proportion of the population in quarantine.
What Do These Findings Mean?
This study predicts that voluntary household-based quarantine and external isolation can be effective in limiting the morbidity and mortality of an influenza pandemic, even if such a pandemic cannot be entirely prevented, and even if compliance with these interventions is far from uniform. These simulations can therefore inform preparedness plans in the absence of data from actual intervention trials, which would be impossible outside (and impractical within) the context of an actual pandemic. Like all mathematical models, however, the one presented in this study relies on a number of assumptions regarding the characteristics and circumstances of the situation that it is intended to represent. For example, the authors found that the efficacy of policies to reduce the rate of infection vary according to the ease with which a given virus spreads from person to person. Because this parameter (known as the basic reproductive ratio, R0) cannot be reliably predicted for a new viral strain based on past epidemics, the authors note that in an actual influenza pandemic rapid determinations of R0 in areas already involved would be necessary to finalize public health responses in threatened areas. Further, the implementation of the interventions that appear beneficial in this model would require devoting attention and resources to practical considerations, such as how to staff isolation centers and provide food and water to those in household quarantine. However accurate the scientific data and predictive models may be, their effectiveness can only be realized through well-coordinated local, as well as international, efforts.
Additional Information
Please access these Web sites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.0030361.
• World Health Organization influenza pandemic preparedness page
• US Department of Health and Human Services avian and pandemic flu information site
• Pandemic influenza page from the Public Health Agency of Canada
• Emergency planning page on pandemic flu from the England Department of Health
• Wikipedia entry on pandemic influenza with links to individual country resources (note: Wikipedia is a free Internet encyclopedia that anyone can edit)
doi:10.1371/journal.pmed.0030361
PMCID: PMC1526768  PMID: 16881729
16.  Effect of Household-Based Drinking Water Chlorination on Diarrhoea among Children under Five in Orissa, India: A Double-Blind Randomised Placebo-Controlled Trial 
PLoS Medicine  2013;10(8):e1001497.
Sophie Boisson and colleagues conducted a double-blind, randomized placebo-controlled trial in Orissa, a state in southeast India, to evaluate the effect of household water treatment in preventing diarrheal illnesses in children under five years of age.
Please see later in the article for the Editors' Summary
Background
Boiling, disinfecting, and filtering water within the home can improve the microbiological quality of drinking water among the hundreds of millions of people who rely on unsafe water supplies. However, the impact of these interventions on diarrhoea is unclear. Most studies using open trial designs have reported a protective effect on diarrhoea, while blinded studies of household water treatment in low-income settings have found no such effect. However, none of those studies were powered to detect an impact among children under five, and participants were followed up over short periods of time. The aim of this study was to measure the effect of in-home water disinfection on diarrhoea among children under five.
Methods and Findings
We conducted a double-blind randomised controlled trial between November 2010 and December 2011. The study included 2,163 households and 2,986 children under five in rural and urban communities of Orissa, India. The intervention consisted of an intensive promotion campaign and free distribution of sodium dichloroisocyanurate (NaDCC) tablets during bi-monthly household visits. An independent evaluation team visited households monthly for one year to collect health data and water samples. The primary outcome was the longitudinal prevalence of diarrhoea (3-day point prevalence) among children aged under five. Weight-for-age was also measured at each visit to assess its potential as a proxy marker for diarrhoea. Adherence was monitored each month through caregivers' reports and the presence of residual free chlorine in the child's drinking water at the time of visit. On 20% of the total household visits, children's drinking water was assayed for thermotolerant coliforms (TTC), an indicator of faecal contamination. The primary analysis was on an intention-to-treat basis. Binomial regression with a log link function and robust standard errors was used to compare prevalence of diarrhoea between arms. We used generalised estimating equations to account for clustering at the household level. The impact of the intervention on weight-for-age z scores (WAZ) was analysed using random effect linear regression.
Over the follow-up period, 84,391 child-days of observations were recorded, representing 88% of total possible child-days of observation. The longitudinal prevalence of diarrhoea among intervention children was 1.69% compared to 1.74% among controls. After adjusting for clustering within household, the prevalence ratio of the intervention to control was 0.95 (95% CI 0.79–1.13). The mean WAZ was similar among children of the intervention and control groups (−1.586 versus −1.589, respectively). Among intervention households, 51% reported their child's drinking water to be treated with the tablets at the time of visit, though only 32% of water samples tested positive for residual chlorine. Faecal contamination of drinking water was lower among intervention households than controls (geometric mean TTC count of 50 [95% CI 44–57] per 100 ml compared to 122 [95% CI 107–139] per 100 ml among controls [p<0.001] [n = 4,546]).
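The crude arm comparison can be reproduced from the reported prevalences alone. The sketch below computes only that crude ratio; the paper's primary estimate (prevalence ratio 0.95) comes from binomial regression with a log link and GEE adjustment for household clustering, which this sketch does not attempt:

```python
def prevalence_ratio(p_intervention: float, p_control: float) -> float:
    """Crude ratio of longitudinal diarrhoea prevalences between trial arms."""
    return p_intervention / p_control


# Reported 3-day point prevalences: 1.69% (intervention) vs 1.74% (control).
pr = prevalence_ratio(0.0169, 0.0174)
print(f"crude prevalence ratio: {pr:.2f}")  # prints 0.97
```

The small difference between this crude ratio (≈0.97) and the reported 0.95 reflects the adjustment for within-household correlation; either way, the confidence interval (0.79–1.13) spans 1, consistent with no detectable effect.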
Conclusions
Our study was designed to overcome the shortcomings of previous double-blinded trials of household water treatment in low-income settings. The sample size was larger, the follow-up period longer, both urban and rural populations were included, and adherence and water quality were monitored extensively over time. These results provide no evidence that the intervention was protective against diarrhoea. Low compliance and modest reduction in water contamination may have contributed to the lack of effect. However, our findings are consistent with other blinded studies of similar interventions and raise additional questions about the actual health impact of household water treatment under these conditions.
Trial Registration
ClinicalTrials.gov NCT01202383
Editors' Summary
Background
Millennium Development Goal 7 calls for halving the proportion of the global population without sustainable access to safe drinking water between 1990 and 2015. Although this target was met in 2010, according to the latest figures, 768 million people world-wide still rely on unimproved drinking water sources. Access to clean drinking water is integral to good health and a key strategy in reducing diarrhoeal illness: Currently, 1.3 million children aged less than five years die of diarrhoeal illnesses every year, with a sixth of such deaths occurring in one country—India. Although India has recently made substantial progress in improving water supplies throughout the country, currently almost 90% of the rural population does not have a water connection to their house and drinking water supplies throughout the country are extensively contaminated with human waste. A strategy internationally referred to as Household Water Treatment and Safe Storage (HWTS), which involves people boiling, chlorinating, and filtering water at home, has been recommended by the World Health Organization and UNICEF to improve water quality at the point of delivery.
Why Was This Study Done?
The WHO and UNICEF strategy to promote HWTS is based on previous studies from low-income settings that found that such interventions could reduce diarrhoeal illnesses by 30%–40%. However, these studies had several limitations, including reporting bias, short follow-up periods, and small sample sizes; importantly, blinded studies (in which both the study participants and researchers are unaware of which participants are receiving the intervention or the control) have found no evidence that HWTS is protective against diarrhoeal illnesses. So the researchers conducted a blinded study (a double-blind, randomized placebo-controlled trial) in Orissa, a state in southeast India, to address those shortcomings and evaluate the effect of household water treatment in preventing diarrhoeal illnesses in children under five years of age.
What Did the Researchers Do and Find?
The researchers conducted their study in 11 informal settlements (where the inhabitants do not benefit from public water or sewers) in the state's capital city and also in 20 rural villages. 2,163 households were randomized to receive the intervention—the promotion and free distribution of sodium dichloroisocyanurate (chlorine) disinfection tablets with instruction on how to use them—or placebo tablets that were similar in appearance and had the same effervescent base as the chlorine tablets. Trained field workers visited households every month for 12 months (between December 2010 and December 2011) to record whether any child had experienced diarrhoea in the previous three days (as reported by the primary care giver). The researchers tested compliance with the intervention by asking participants if they had treated the water and also by testing for chlorine in the water.
Using these methods, the researchers found that over the 12-month follow-up period, the longitudinal prevalence of diarrhoea among children in the intervention group was 1.69% compared to 1.74% in the control group, a non-significant finding (a finding that could have happened by chance). There was also no difference in diarrhoea prevalence among other household members in the two groups and no difference in weight for age z scores (a measurement of growth) between children in the two groups. The researchers also found that although just over half (51%) of households in the intervention group reported treating their water, on testing, only 32% of water samples tested positive for chlorine. Finally, the researchers found that water quality (as measured by thermotolerant coliforms, TTCs) was better in the intervention group than the control group.
What Do These Findings Mean?
These findings suggest that treating water with chlorine tablets had no effect on the prevalence of diarrhoea, either in children aged under five years or in other household members, in Orissa, India. However, poor compliance was a major issue, with only a third of households in the intervention group confirmed as treating their water with chlorine tablets. Furthermore, these findings are limited in that the prevalence of diarrhoea was lower than expected, which may have reduced the power to detect a potential effect of the intervention. Nevertheless, this study raises questions about the health impact of household water treatment and highlights the key challenge of poor compliance with public health interventions.
Additional Information
Please access these websites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.1001497.
The website of the World Health Organization has a section dedicated to household water treatment and safe storage, including a network to promote the use of HWTS and a toolkit to measure HWTS
The Water Institute hosts the communications portal for the International Network on Household Water Treatment and Safe Storage
doi:10.1371/journal.pmed.1001497
PMCID: PMC3747993  PMID: 23976883
17.  Estimated Effects of Different Alcohol Taxation and Price Policies on Health Inequalities: A Mathematical Modelling Study 
PLoS Medicine  2016;13(2):e1001963.
Introduction
While evidence that alcohol pricing policies reduce alcohol-related health harm is robust, and alcohol taxation increases are a WHO “best buy” intervention, there is a lack of research comparing the scale and distribution across society of health impacts arising from alternative tax and price policy options. The aim of this study is to test whether four common alcohol taxation and pricing strategies differ in their impact on health inequalities.
Methods and Findings
An econometric epidemiological model was built with England 2014/2015 as the setting. Four pricing strategies implemented on top of the current tax were equalised to give the same 4.3% population-wide reduction in total alcohol-related mortality: current tax increase, a 13.4% all-product duty increase under the current UK system; a value-based tax, a 4.0% ad valorem tax based on product price; a strength-based tax, a volumetric tax of £0.22 per UK alcohol unit (= 8 g of ethanol); and minimum unit pricing, a minimum price threshold of £0.50 per unit, below which alcohol cannot be sold. Model inputs were calculated by combining data from representative household surveys on alcohol purchasing and consumption, administrative and healthcare data on 43 alcohol-attributable diseases, and published price elasticities and relative risk functions. Outcomes were annual per capita consumption, consumer spending, and alcohol-related deaths. Uncertainty was assessed via partial probabilistic sensitivity analysis (PSA) and scenario analysis.
The pricing strategies differ as to how effects are distributed across the population, and, from a public health perspective, heavy drinkers in routine/manual occupations are a key group as they are at greatest risk of health harm from their drinking. Strength-based taxation and minimum unit pricing would have greater effects on mortality among drinkers in routine/manual occupations (particularly for heavy drinkers, where the estimated policy effects on mortality rates are as follows: current tax increase, −3.2%; value-based tax, −2.9%; strength-based tax, −6.1%; minimum unit pricing, −7.8%) and lesser impacts among drinkers in professional/managerial occupations (for heavy drinkers: current tax increase, −1.3%; value-based tax, −1.4%; strength-based tax, +0.2%; minimum unit pricing, +0.8%). Results from the PSA give slightly greater mean effects for both the routine/manual (current tax increase, −3.6% [95% uncertainty interval (UI) −6.1%, −0.6%]; value-based tax, −3.3% [UI −5.1%, −1.7%]; strength-based tax, −7.5% [UI −13.7%, −3.9%]; minimum unit pricing, −10.3% [UI −10.3%, −7.0%]) and professional/managerial occupation groups (current tax increase, −1.8% [UI −4.7%, +1.6%]; value-based tax, −1.9% [UI −3.6%, +0.4%]; strength-based tax, −0.8% [UI −6.9%, +4.0%]; minimum unit pricing, −0.7% [UI −5.6%, +3.6%]). Impacts of price changes on moderate drinkers were small regardless of income or socioeconomic group. Analysis of uncertainty shows that the relative effectiveness of the four policies is fairly stable, although uncertainty in the absolute scale of effects exists. 
Volumetric taxation and minimum unit pricing consistently outperform increasing the current tax or adding an ad valorem tax in terms of reducing mortality among the heaviest drinkers and reducing alcohol-related health inequalities (e.g., in the routine/manual occupation group, volumetric taxation reduces deaths more than increasing the current tax in 26 out of 30 probabilistic runs, minimum unit pricing reduces deaths more than volumetric tax in 21 out of 30 runs, and minimum unit pricing reduces deaths more than increasing the current tax in 30 out of 30 runs). To reduce model complexity, the analysis does not consider the largely ineffective ban on below-tax alcohol sales, special duty rates that cover only small shares of the market, or the impact of tax fraud or retailer non-compliance with minimum unit prices; these simplifications are limitations of the study.
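The model links price changes to consumption via published price elasticities. A minimal constant-elasticity sketch of that link follows; the elasticity, baseline consumption, and price-rise figures below are illustrative assumptions for demonstration, not the Sheffield Alcohol Policy Model's actual inputs:

```python
def new_consumption(base_units: float, price_change: float, elasticity: float) -> float:
    """Weekly units consumed after a proportional price change, assuming a
    constant own-price elasticity of demand (negative for normal goods).

    price_change: e.g. 0.134 for a 13.4% price rise.
    """
    return base_units * (1.0 + price_change) ** elasticity


# A hypothetical heavy drinker on 60 units/week facing a 13.4% duty-driven
# price rise, with an assumed elasticity of -0.5 (not a SAPM input):
print(f"{new_consumption(60.0, 0.134, -0.5):.1f} units/week")  # prints 56.3
```

In the full model, elasticities differ by beverage, purchase location, and drinker group, which is what drives the differential effects across policies and occupation groups reported above.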
Conclusions
Our model estimates that, compared to tax increases under the current system or introducing taxation based on product value, alcohol-content-based taxation or minimum unit pricing would lead to larger reductions in health inequalities across income groups. We also estimate that alcohol-content-based taxation and minimum unit pricing would have the largest impact on harmful drinking, with minimal effects on those drinking in moderation.
In this mathematical modelling study, Petra Meier and colleagues estimate the impact of different alcohol taxation and pricing structures on consumption, spending, and health inequalities.
Editors' Summary
Background
People have drunk alcoholic beverages throughout history. However, harmful alcohol consumption is currently responsible for around 2.7 million deaths every year and is a leading risk factor worldwide for heart disease, liver disease, and many other health problems. It also affects the well-being and health of people around those who drink, both within the household and through alcohol-related crime and road traffic crashes. As with most products, the price of alcohol influences how much people buy and consume. Alcohol affordability is an important driver of alcohol consumption, and in many countries, including the UK and US, alcohol prices have not kept pace with inflation and rising incomes. Alcohol taxes have the dual function of raising revenues and regulating alcohol prices. Different countries employ different alcohol-specific taxation structures including taxation by beverage volume, by value, or by alcohol content. Some countries also have additional price control measures that prevent the sale of very cheap alcohol.
Why Was This Study Done?
Although research shows that increases in the price of alcohol brought about by taxation or other pricing policies reduce alcohol consumption and alcohol-related harm, little is known about how alternative tax and price policy options affect the scale and distribution of alcohol-related health impacts across society. This is important because people with lower social status and/or income are disproportionately affected by alcohol-related disease. An effective public health policy to reduce alcohol harm might, therefore, help to reduce social disparities in health (health inequality), a major goal of population health policies in affluent countries.
What Did the Researchers Do and Find?
Here, the researchers use the Sheffield Alcohol Policy Model (SAPM) to investigate the effects of four common alcohol taxation and price policies on health inequalities in England during 2014/2015. SAPM is a deterministic mathematical simulation model that estimates how price changes affect individual-level alcohol consumption and how consumption changes affect the illnesses (morbidity), deaths (mortality), and economic costs associated with 43 alcohol-attributable conditions. The researchers used SAPM to simulate the effect of increasing the existing UK duty on all alcohol products by 13.4% (current tax increase), introducing a 4% tax based on product price (value-based, or ad valorem, taxation), introducing a tax of £0.22 per alcohol unit (strength-based, or volumetric, taxation), or setting a minimum price threshold of £0.50 per alcohol unit (minimum unit pricing). The magnitudes of the different policy-induced price increases modeled were chosen to result in the same overall population-wide 4.3% decrease in alcohol-related mortality. Notably, the impacts of policy changes on moderate drinkers were small, regardless of income/socioeconomic group. However, among heavy drinkers, the effects of the four policies were differentially distributed across the population. Among heavy drinkers in the lowest socioeconomic group (the population group at greatest risk of harm from alcohol use), the estimated effects on mortality rates were −3.2% for the current tax increase (that is, it reduced alcohol-related deaths by 3.2%), −2.9% for value-based taxation, −6.1% for strength-based taxation, and −7.8% for minimum unit pricing. Among heavy drinkers in the highest socioeconomic group, the corresponding effects on mortality rates were −1.3%, −1.4%, +0.2%, and +0.8%.
What Do These Findings Mean?
As with any policy modelling, the accuracy of these findings depends on the evidence base, the quality of the data incorporated into the model, and the assumptions used. Limitations to the analysis here include that, due to an absence of evidence, the researchers have not examined the impact of any tax avoidance, which could potentially vary between the policies. The study findings suggest that in England (and probably in other countries) the introduction of strength-based taxation or minimum unit pricing would lead to larger reductions in health inequalities among heavy drinkers than an increase in the current tax rate or the introduction of value-based taxation. That is, the two policy options that target cheap, high-strength alcohol are likely to outperform value-based taxation and increasing the current UK tax in terms of reducing health inequalities. Thus, although these policies might be considered “regressive” (i.e., affecting the poor more than the rich) in terms of consumption and spending, they are at the same time “progressive” in that they reduce health inequalities. Finally, these findings suggest that minimum unit pricing and strength-based taxation, unlike the other two options tested, would target harmful drinking without unnecessarily penalizing people with low incomes who drink moderate amounts of alcohol.
Additional Information
This list of resources contains links that can be accessed when viewing the PDF on a device or via the online version of the article at http://dx.doi.org/10.1371/journal.pmed.1001963.
The World Health Organization provides detailed information about alcohol, including a fact sheet on the harmful use of alcohol; its Global Status Report on Alcohol and Health 2014 provides country information on the impact of alcohol use on health and policy responses; its Global Strategy to Reduce Harmful Use of Alcohol includes information on pricing policies; the Global Information System on Alcohol and Health provides further information about alcohol control policies
The US National Institute on Alcohol Abuse and Alcoholism has information about alcohol and its effects on health; it provides interactive worksheets to help people evaluate their drinking and decide whether and how to make a change
The US Centers for Disease Control and Prevention provides information on alcohol and public health and a fact sheet on preventing excessive alcohol use
The UK National Health Service Choices website provides detailed information about drinking and alcohol, including information on the risks of drinking too much, tools for calculating alcohol consumption, and personal stories
EuroCare is an alliance of non-governmental public health and social organizations working on the prevention and reduction of alcohol-related harm in Europe; it provides information about alcohol taxation in the European Union
The UK Institute of Alcohol Studies advocates for the use of scientific evidence in policymaking to reduce alcohol-related harm and produces easily accessible briefings on alcohol policy issues
MedlinePlus provides links to many other resources on alcohol
Information about the Sheffield Alcohol Policy Model is available
The UK Chief Medical Officers’ proposed new alcohol guidelines are available
doi:10.1371/journal.pmed.1001963
PMCID: PMC4764336  PMID: 26905063
18.  Evidence for Community Transmission of Community-Associated but Not Health-Care-Associated Methicillin-Resistant Staphylococcus Aureus Strains Linked to Social and Material Deprivation: Spatial Analysis of Cross-sectional Data 
PLoS Medicine  2016;13(1):e1001944.
Background
Identifying and tackling the social determinants of infectious diseases has become a public health priority following the recognition that individuals with lower socioeconomic status are disproportionately affected by infectious diseases. In many parts of the world, epidemiologically and genotypically defined community-associated (CA) methicillin-resistant Staphylococcus aureus (MRSA) strains have emerged to become frequent causes of hospital infection. The aim of this study was to use spatial models with adjustment for area-level hospital attendance to determine the transmission niche of genotypically defined CA- and health-care-associated (HA)-MRSA strains across a diverse region of South East London and to explore a potential link between MRSA carriage and markers of social and material deprivation.
Methods and Findings
This study involved spatial analysis of cross-sectional data linked with all MRSA isolates identified by three National Health Service (NHS) microbiology laboratories between 1 November 2011 and 29 February 2012. The cohort of hospital-based NHS microbiology diagnostic services serves 867,254 usual residents in the Lambeth, Southwark, and Lewisham boroughs in South East London, United Kingdom (UK). Isolates were classified as HA- or CA-MRSA based on whole genome sequencing. All MRSA cases identified over 4 mo within the three-borough catchment area (n = 471) were mapped to small geographies and linked to area-level aggregated socioeconomic and demographic data. Disease mapping and ecological regression models were used to infer the most likely transmission niches for each MRSA genetic classification and to describe the spatial epidemiology of MRSA in relation to social determinants. Specifically, we aimed to identify demographic and socioeconomic population traits that explain cross-area extra variation in HA- and CA-MRSA relative risks following adjustment for hospital attendance data. We explored the potential for associations with the English Indices of Deprivation 2010 (including the Index of Multiple Deprivation and several deprivation domains and subdomains) and the 2011 England and Wales census demographic and socioeconomic indicators (including numbers of households by deprivation dimension) and indicators of population health. Both CA- and HA-MRSA were associated with household deprivation (CA-MRSA relative risk [RR]: 1.72 [1.03–2.94]; HA-MRSA RR: 1.57 [1.06–2.33]), which was correlated with hospital attendance (Pearson correlation coefficient [PCC] = 0.76).
HA-MRSA was also associated with poor health (RR: 1.10 [1.01–1.19]) and residence in communal care homes (RR: 1.24 [1.12–1.37]), whereas CA-MRSA was linked with household overcrowding (RR: 1.58 [1.04–2.41]) and wider barriers, which represent a combined score for household overcrowding, low income, and homelessness (RR: 1.76 [1.16–2.70]). CA-MRSA was also associated with recent immigration to the UK (RR: 1.77 [1.19–2.66]). For the area-level variation in RR for CA-MRSA, 28.67% was attributable to the spatial arrangement of target geographies, compared with only 0.09% for HA-MRSA. An advantage of our study is that it provided a representative sample of usual residents receiving care in the catchment areas. A limitation is that relationships apparent in aggregated data analyses cannot be assumed to operate at the individual level.
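The area-level relative risks above come from ecological regression, but the raw quantity that disease-mapping models smooth is simply observed cases over expected cases in each small area. This sketch illustrates that quantity; the overall figures (471 cases, 867,254 residents) are from the study, while the individual area's counts are hypothetical:

```python
def relative_risk(observed: int, population: int, overall_rate: float) -> float:
    """Observed cases divided by the number expected if the area experienced
    the overall (catchment-wide) rate; the crude input to disease mapping."""
    expected = population * overall_rate
    return observed / expected


# Catchment-wide rate: 471 MRSA cases among 867,254 usual residents.
overall = 471 / 867_254

# A hypothetical deprived LSOA of 1,500 residents with 2 cases:
rr = relative_risk(2, 1500, overall)
print(f"crude area RR = {rr:.2f}")  # prints 2.46
```

With counts this small per area, crude RRs are noisy, which is why the study uses Bayesian disease-mapping models with adjustment for hospital attendance rather than raw ratios.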
Conclusions
There was no evidence of community transmission of HA-MRSA strains, implying that HA-MRSA cases identified in the community originate from the hospital reservoir and are maintained by frequent attendance at health care facilities. In contrast, there was a high risk of CA-MRSA in deprived areas linked with overcrowding, homelessness, low income, and recent immigration to the UK, which was not explainable by health care exposure. Furthermore, areas adjacent to these deprived areas were themselves at greater risk of CA-MRSA, indicating community transmission of CA-MRSA. This ongoing community transmission could lead to CA-MRSA becoming the dominant strain types carried by patients admitted to hospital, particularly if successful hospital-based MRSA infection control programmes are maintained. These results suggest that community infection control programmes targeting transmission of CA-MRSA will be required to control MRSA in both the community and hospital. These epidemiological changes will also have implications for effectiveness of risk-factor-based hospital admission MRSA screening programmes.
Community-associated MRSA variants, rather than hospital-associated ones, are more readily transmitted in the community, and this is where control programmes should focus to limit both hospital and community infections.
Editors' Summary
Background
Addressing health inequality requires understanding the social determinants of poor health. Previous studies have suggested a link between deprived living conditions and infections with methicillin-resistant Staphylococcus aureus (MRSA), that is, strains of the common bacterium S. aureus that have acquired antibiotic resistance and are therefore more difficult to treat. MRSA was first identified in the 1960s and for years was thought of as a dangerous health-care-associated (HA-) pathogen that infects hospital patients who are predominantly older, sick, or undergoing invasive procedures. In the late 1990s, however, community-associated MRSA (CA-MRSA) emerged as a pathogen infecting healthy individuals of all ages and without recent hospital contact. Most CA-MRSA cases are contagious skin infections, and numerous outbreaks have been reported in different communities. The traditional distinction between HA-MRSA and CA-MRSA based on where transmission occurred has become problematic in recent years, because CA-MRSA transmission has also been reported in health care settings. However, as HA- and CA-MRSA strains are genetically distinct, cases can be classified by DNA sequencing regardless of where a patient got infected.
Why Was This Study Done?
With hospitals historically considered the only place of MRSA transmission, prevention efforts remain focused on health care settings. Given the changing patterns of MRSA infections, however, the need to consider HA and CA transmission settings together has been recognized. This study was designed to take a closer look at the relationship between both HA- and CA-MRSA and socioeconomic deprivation, with the ultimate aim to inform prevention efforts. The researchers selected three boroughs in South East London with a highly diverse population of approximately 850,000 residents for whom socioeconomic and demographic data were available at a high level of spatial resolution. They also had data on hospital attendance for the residents and were therefore able to account for this factor in their analysis. The study addressed the following questions: is there a link between socioeconomic deprivation and both HA- and CA-MRSA cases among the residents? What social determinants are associated with HA- and CA-MRSA cases? What are the transmission settings (i.e., community versus health care) for HA- and CA-MRSA?
What Did the Researchers Do and Find?
They analyzed data on all MRSA samples collected over 4 consecutive mo in late 2011 and early 2012 by microbiology laboratories that serve the three boroughs. Of 471 MRSA cases that occurred in residents, 392 could be classified based on genome sequencing. Of these, approximately 72% were HA-MRSA, and 26% were CA-MRSA. Approximately 2% of residents carried both HA- and CA-MRSA. All MRSA cases were mapped to 513 smaller areas (called Lower Layer Super Output Areas, or LSOAs) in the three boroughs for which extensive socioeconomic and demographic data existed. The former included data on income, employment, health, and education; the latter, data on the number of individuals per household, their ages and gender, and length of residence in the UK. MRSA cases were detected in just over half of the LSOAs in the study area. The researchers then used mathematical models to determine the most likely transmission settings for each MRSA genetic classification. They also described the spatial distributions of the two in relation to socioeconomic and demographic determinants. Both CA- and HA-MRSA were associated with household deprivation, which was itself correlated with hospital attendance. HA-MRSA was also associated with poor health and with living in communal care homes, whereas CA-MRSA was linked with household overcrowding and a combination of household overcrowding, low income, and homelessness. CA-MRSA was also associated with recent immigration to the UK. Around 27% of local variation in CA-MRSA could be explained by the spatial arrangement of LSOAs, meaning areas of high risk tended to cluster. No such clustering was observed for HA-MRSA.
What Do These Findings Mean?
The results show that residents in the most deprived areas are at greater risk for MRSA. The absence of spatial clusters of HA-MRSA suggests that transmission of genetically determined HA-MRSA occurs in hospitals, with little or no transmission in the community. The most important risk factor for acquiring HA-MRSA is therefore likely to be hospital attendance as a result of deprivation. In contrast, genetically determined CA-MRSA both affects deprived areas disproportionately, and—as the clusters imply—spreads from such areas in the community. This suggests that living in deprived conditions itself is a risk factor for acquiring CA-MRSA, as is living near deprived neighbors. Some of the CA-MRSA cases are also likely imported by recent immigrants. Whereas transmission of CA-MRSA in health care settings has been reported in a number of other studies, data from this study cannot answer whether or to what extent this is the case here. However, because of ongoing transmission in the community, and because deprived residents are both more likely to have CA-MRSA and to attend a hospital, importation of CA-MRSA strains into hospitals is an obvious concern. While the researchers intentionally located the study in an area with a very diverse population, it is not clear how generalizable the findings are to other communities, either in the UK or in other countries. Nonetheless, the results justify special focus on deprived populations in the control of MRSA and are useful for the design of specific strategies for HA-MRSA and CA-MRSA.
Additional Information
Please access these Web sites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.1001944.
Online information on MRSA from the UK National Health Service: http://www.nhs.uk/conditions/MRSA/Pages/Introduction.aspx
MRSA webpage from the US Centers for Disease Control and Prevention: http://www.cdc.gov/mrsa/
MRSA page from the San Francisco Department of Public Health: http://www.sfcdcp.org/mrsa.html
MedlinePlus provides links to information about MRSA, including sources in languages other than English: https://www.nlm.nih.gov/medlineplus/mrsa.html
doi:10.1371/journal.pmed.1001944
PMCID: PMC4727805  PMID: 26812054
19.  Community Mobilization in Mumbai Slums to Improve Perinatal Care and Outcomes: A Cluster Randomized Controlled Trial 
PLoS Medicine  2012;9(7):e1001257.
David Osrin and colleagues report findings from a cluster-randomized trial conducted in Mumbai slums; the trial aimed to evaluate whether facilitator-supported women's groups could improve perinatal outcomes.
Introduction
Improving maternal and newborn health in low-income settings requires both health service and community action. Previous community initiatives have been predominantly rural, but India is urbanizing. While working to improve health service quality, we tested an intervention in which urban slum-dweller women's groups worked to improve local perinatal health.
Methods and Findings
A cluster randomized controlled trial in 24 intervention and 24 control settlements covered a population of 283,000. In each intervention cluster, a facilitator supported women's groups through an action learning cycle in which they discussed perinatal experiences, improved their knowledge, and took local action. We monitored births, stillbirths, and neonatal deaths, and interviewed mothers at 6 weeks postpartum. The primary outcomes described perinatal care, maternal morbidity, and extended perinatal mortality. The analysis included 18,197 births over 3 years from 2006 to 2009. We found no differences between trial arms in uptake of antenatal care; reported work, rest, and diet in later pregnancy; institutional delivery; early and exclusive breastfeeding; or care-seeking. The stillbirth rate was non-significantly lower in the intervention arm (odds ratio 0.86, 95% CI 0.60–1.22), and the neonatal mortality rate was higher (1.48, 1.06–2.08). The extended perinatal mortality rate did not differ between arms (1.19, 0.90–1.57). We have no evidence that these differences could be explained by the intervention.
Conclusions
Facilitating urban community groups was feasible, and there was evidence of behaviour change, but we did not see population-level effects on health care or mortality. In cities with multiple sources of health care, but inequitable access to services, community mobilization should be integrated with attempts to deliver services for the poorest and most vulnerable, and with initiatives to improve quality of care in both public and private sectors.
Trial registration
Current Controlled Trials ISRCTN96256793
Please see later in the article for the Editors' Summary
Editors' Summary
Background
Substantial progress is being made to reduce global child mortality (deaths of children before the age of 5 years) and maternal mortality (deaths among women because of complications of pregnancy and childbirth)—two of the Millennium Development Goals agreed by world leaders in 2000 to end extreme poverty. Even so, worldwide, in 2010, 7.6 million children died before their fifth birthday and there were nearly 360,000 maternal deaths. Almost all child and maternal deaths occur in developing countries—a fifth of under-five deaths and more than a quarter of neonatal deaths (deaths during the first month of life, which account for two-fifths of all child deaths) occur in India alone. Moreover, most child and maternal deaths are caused by avoidable conditions. Specifically, the major causes of neonatal death—complications of preterm delivery, breathing problems during or after delivery, and infections of the blood (sepsis) and lungs (pneumonia)—and of maternal deaths—hemorrhage (abnormal bleeding), sepsis, unsafe abortion, obstructed labor, and hypertensive diseases of pregnancy—could all be largely prevented by improved access to reproductive health services and skilled health care workers.
Why Was This Study Done?
Experts believe that improvements to maternal and newborn health in low-income settings require both health service strengthening and community action. That is, the demand for better services, driven by improved knowledge about maternal and newborn health (perinatal issues), has to be increased in parallel with the supply of those services. To date, community mobilization around perinatal issues has largely been undertaken in rural settings but populations in developing countries are becoming increasingly urban. In India, for example, 30% of the population now lives in cities. In this cluster randomized controlled trial (a study in which groups of people are randomly assigned to receive alternative interventions and the outcomes in the differently treated “clusters” are compared), City Initiative for Newborn Health (CINH) researchers investigate the effect of an intervention designed to help women's groups in the slums of Mumbai work towards improving local perinatal health. The CINH aims to improve maternal and newborn health in slum communities by improving public health care provision and by working with community members to improve maternal and newborn care practices and care-seeking behaviors.
What Did the Researchers Do and Find?
The researchers enrolled 48 Mumbai slum communities of at least 1,000 households into their trial. In each of the 24 intervention clusters, a facilitator supported local women's groups through a 36-meeting learning cycle during which group members discussed their perinatal experiences, improved their knowledge, and took action. To measure the effect of the intervention, the researchers monitored births, stillbirths, and neonatal deaths in all the clusters and interviewed mothers 6 weeks after delivery. During the 3-year trial, there were 18,197 births in the participating settlements. The women in the intervention clusters were enthusiastic about acquiring new knowledge and made substantial efforts to reach out to other women but were less successful in undertaking collective action such as negotiations with civic authorities for more amenities. There were no differences between the intervention and control communities in the uptake of antenatal care; reported work, rest, and diet in late pregnancy; institutional delivery; or breastfeeding and care-seeking behavior. Finally, the combined rate of stillbirths and neonatal deaths (the extended perinatal mortality rate) was the same in both arms of the trial, as was maternal mortality.
What Do These Findings Mean?
These findings indicate that it is possible to facilitate the discussion of perinatal health care by urban women's groups in the challenging conditions that exist in the slums of Mumbai. However, they fail to show any measurable effect of community mobilization through the facilitation of women's groups on perinatal health at the population level. The researchers acknowledge that more intensive community activities that target the poorest, most vulnerable slum dwellers might produce measurable effects on perinatal mortality, and they conclude that, in cities with multiple sources of health care and inequitable access to services, it remains important to integrate community mobilization with attempts to deliver services to the poorest and most vulnerable, and with initiatives to improve the quality of health care in both the public and private sectors.
Additional Information
Please access these Web sites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.1001257.
The United Nations Children's Fund (UNICEF) works for children's rights, survival, development, and protection around the world; it provides information on the reduction of child mortality (Millennium Development Goal 4); its Childinfo website provides information about all the Millennium Development Goals and detailed statistics on child survival and health, newborn care, and maternal health (some information in several languages)
The World Health Organization also has information about Millennium Development Goal 4 and Millennium Development Goal 5, the reduction of maternal mortality, provides information on newborn infants, and provides estimates of child mortality rates (some information in several languages)
Further information about the Millennium Development Goals is available
Information on the City Initiative for Newborn Health and its partners and a detailed description of its trial of community mobilization in Mumbai slums to improve care during pregnancy, delivery, postnatally and for the newborn are available
Further information about the Society for Nutrition, Education and Health Action (SNEHA) is available
doi:10.1371/journal.pmed.1001257
PMCID: PMC3389036  PMID: 22802737
20.  Estimates of Outcomes Up to Ten Years after Stroke: Analysis from the Prospective South London Stroke Register 
PLoS Medicine  2011;8(5):e1001033.
Charles Wolfe and colleagues collected data from the South London Stroke Register on 3,373 first strokes registered between 1995 and 2006 and showed that between 20% and 30% of survivors have poor outcomes up to 10 years after stroke.
Background
Although stroke is acknowledged as a long-term condition, population estimates of outcomes longer term are lacking. Such estimates would be useful for planning health services and developing research that might ultimately improve outcomes. This burden of disease study provides population-based estimates of outcomes with a focus on disability, cognition, and psychological outcomes up to 10 y after initial stroke event in a multi-ethnic European population.
Methods and Findings
Data were collected from the South London Stroke Register, a prospective population-based register documenting all first-in-a-lifetime strokes since 1 January 1995 in a multi-ethnic inner-city population. The outcomes assessed are reported as estimates of need and included disability (Barthel Index <15), inactivity (Frenchay Activities Index <15), cognitive impairment (Abbreviated Mental Test <8 or Mini-Mental State Exam <24), anxiety and depression (Hospital Anxiety and Depression Scale >10), and the mental and physical domain scores of the Medical Outcomes Study 12-item short form (SF-12) health survey. Estimates were stratified by age, gender, and ethnicity, and age-adjusted using the standard European population. Plots of outcome estimates over time were constructed to examine temporal trends and sociodemographic differences. Between 1995 and 2006, 3,373 first-ever strokes were registered: 20%–30% of survivors had a poor outcome over 10 y of follow-up. The highest rate of disability was observed 7 d after stroke, and disability remained at around 110 per 1,000 stroke survivors from 3 mo to 10 y. Rates of inactivity and cognitive impairment both declined up to 1 y (280/1,000 and 180/1,000 survivors, respectively); thereafter, rates of inactivity remained stable until year 8 and then increased, whereas rates of cognitive impairment fluctuated until year 8 and then increased. Anxiety and depression showed some fluctuation over time, with rates of 350 and 310 per 1,000 stroke survivors, respectively. SF-12 scores showed little variation from 3 mo to 10 y after stroke. Inactivity was higher in males at all time points, and in white compared to black stroke survivors, although black survivors reported better outcomes in the SF-12 physical domain. No other major differences were observed by gender or ethnicity. Increased age was associated with higher rates of disability, inactivity, and cognitive impairment.
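The preset cut-offs above amount to a small deterministic rule set. As an illustrative sketch only (Python; the dictionary keys and function name are hypothetical, not the register's own code), a survivor's instrument scores could be flagged against the study's poor-outcome definitions like this:

```python
def classify_outcomes(scores):
    """Flag poor outcomes from instrument scores, using the study's preset
    cut-offs: Barthel Index <15 (disability), Frenchay Activities Index <15
    (inactivity), AMT <8 or MMSE <24 (cognitive impairment), and HADS >10
    (anxiety or depression). `scores` is a hypothetical dict keyed by
    instrument; instruments not assessed are simply skipped."""
    flags = {}
    if "barthel" in scores:
        flags["disability"] = scores["barthel"] < 15
    if "frenchay" in scores:
        flags["inactivity"] = scores["frenchay"] < 15
    if "amt" in scores or "mmse" in scores:
        # Impaired if either cognitive instrument falls below its cut-off;
        # a missing instrument defaults to a non-impaired value.
        flags["cognitive_impairment"] = (
            scores.get("amt", 8) < 8 or scores.get("mmse", 24) < 24
        )
    if "hads_anxiety" in scores:
        flags["anxiety"] = scores["hads_anxiety"] > 10
    if "hads_depression" in scores:
        flags["depression"] = scores["hads_depression"] > 10
    return flags
```

For example, a survivor with a Barthel Index of 12 and an MMSE of 26 would be flagged as disabled but not cognitively impaired.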
Conclusions
Between 20% and 30% of stroke survivors have a poor range of outcomes up to 10 y after stroke. Such epidemiological data demonstrate the sociodemographic groups that are most affected longer term and should be used to develop longer term management strategies that reduce the significant poor outcomes of this group, for whom effective interventions are currently elusive.
Please see later in the article for the Editors' Summary
Editors' Summary
Background
Every year, 15 million people have a stroke. About 5 million of these people die within a few days, and another 5 million are left disabled. Stroke occurs when the brain's blood supply is suddenly interrupted by a blood clot blocking a blood vessel in the brain (ischemic stroke, the commonest type of stroke) or by a blood vessel in the brain bursting (hemorrhagic stroke). Deprived of the oxygen normally carried to them by the blood, the brain cells near the blockage die. The symptoms of stroke depend on which part of the brain is damaged but include sudden weakness or paralysis along one side of the body, vision loss in one or both eyes, and confusion or trouble speaking or understanding speech. Anyone experiencing these symptoms should seek immediate medical attention because prompt treatment can limit the damage to the brain. Risk factors for stroke include age (three-quarters of strokes occur in people over 65 years old), high blood pressure, and heart disease.
Why Was This Study Done?
Post-stroke rehabilitation can help individuals overcome the physical disabilities caused by stroke, and drugs and behavioral counseling can reduce the risk of a second stroke. However, people can also have problems with cognition (thinking, awareness, attention, learning, judgment, and memory) after a stroke, and they can become depressed or anxious. These “outcomes” can persist for many years, but although stroke is acknowledged as a long-term condition, most existing data on stroke outcomes are limited to a year after the stroke and often focus on disability alone. Longer term, more extensive information is needed to help plan services and to help develop research to improve outcomes. In this burden of disease analysis, the researchers use follow-up data collected by the prospective South London Stroke Register (SLSR) to provide long-term population-based estimates of disability, cognition, and psychological outcomes after a first stroke. The SLSR has recorded and followed all patients of all ages in an inner area of South London after their first-ever stroke since 1995.
What Did the Researchers Do and Find?
Between 1995 and 2006, the SLSR recorded 3,373 first-ever strokes. Patients were examined within 48 hours of referral to SLSR, their stroke diagnosis was verified, and their sociodemographic characteristics (including age, gender, and ethnic origin) were recorded. Study nurses and fieldworkers then assessed the patients at three months and annually after the stroke for disability (using the Barthel Index, which measures the ability to, for example, eat unaided), inactivity (using the Frenchay Activities Index, which measures participation in social activities), and cognitive impairment (using the Abbreviated Mental Test or the Mini-Mental State Exam). Anxiety and depression and the patients' perceptions of their mental and physical capabilities were also assessed. Using preset cut-offs for each outcome, 20%–30% of stroke survivors had a poor outcome over ten years of follow-up. So, for example, 110 individuals per 1,000 population were judged disabled from three months to ten years, rates of inactivity remained constant from year one to year eight, at 280 affected individuals per 1,000 survivors, and rates of anxiety and depression fluctuated over time but affected about a third of the population. Notably, levels of inactivity were higher among men than women at all time points and were higher in white than in black stroke survivors. Finally, increased age was associated with higher rates of disability, inactivity, and cognitive impairment.
What Do These Findings Mean?
Although the accuracy of these findings may be affected by the loss of some patients to follow-up, these population-based estimates of outcome measures for survivors of a first-ever stroke for up to ten years after the event provide concrete evidence that stroke is a lifelong condition with ongoing poor outcomes. They also identify the sociodemographic groups of patients that are most affected in the longer term. Importantly, most of the measured outcomes remain relatively constant (and worse than outcomes in an age-matched non-stroke-affected population) after 3–12 months, a result that needs to be considered when planning services for stroke survivors. In other words, these findings highlight the need for health and social services to provide long-term, ongoing assessment and rehabilitation for patients for many years after a stroke.
Additional Information
Please access these Web sites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.1001033.
The US National Institute of Neurological Disorders and Stroke provides information about all aspects of stroke (in English and Spanish); the US National Institutes of Health SeniorHealth Web site has additional information about stroke
The Internet Stroke Center provides detailed information about stroke for patients, families, and health professionals (in English and Spanish)
The UK National Health Service Choices Web site also provides information about stroke for patients and their families
MedlinePlus has links to additional resources about stroke (in English and Spanish)
More information about the South London Stroke Register is available
doi:10.1371/journal.pmed.1001033
PMCID: PMC3096613  PMID: 21610863
21.  Home Telehealth for Patients With Chronic Obstructive Pulmonary Disease (COPD) 
Executive Summary
In July 2010, the Medical Advisory Secretariat (MAS) began work on a Chronic Obstructive Pulmonary Disease (COPD) evidentiary framework, an evidence-based review of the literature surrounding treatment strategies for patients with COPD. This project emerged from a request by the Health System Strategy Division of the Ministry of Health and Long-Term Care that MAS provide them with an evidentiary platform on the effectiveness and cost-effectiveness of COPD interventions.
After an initial review of health technology assessments and systematic reviews of COPD literature, and consultation with experts, MAS identified the following topics for analysis: vaccinations (influenza and pneumococcal), smoking cessation, multidisciplinary care, pulmonary rehabilitation, long-term oxygen therapy, noninvasive positive pressure ventilation for acute and chronic respiratory failure, hospital-at-home for acute exacerbations of COPD, and telehealth (including telemonitoring and telephone support). Evidence-based analyses were prepared for each of these topics. For each technology, an economic analysis was also completed where appropriate. In addition, a review of the qualitative literature on patient, caregiver, and provider perspectives on living and dying with COPD was conducted, as were reviews of the qualitative literature on each of the technologies included in these analyses.
The Chronic Obstructive Pulmonary Disease Mega-Analysis series is made up of the following reports, which can be publicly accessed at the MAS website at: http://www.hqontario.ca/en/mas/mas_ohtas_mn.html.
Chronic Obstructive Pulmonary Disease (COPD) Evidentiary Framework
Influenza and Pneumococcal Vaccinations for Patients With Chronic Obstructive Pulmonary Disease (COPD): An Evidence-Based Analysis
Smoking Cessation for Patients With Chronic Obstructive Pulmonary Disease (COPD): An Evidence-Based Analysis
Community-Based Multidisciplinary Care for Patients With Stable Chronic Obstructive Pulmonary Disease (COPD): An Evidence-Based Analysis
Pulmonary Rehabilitation for Patients With Chronic Obstructive Pulmonary Disease (COPD): An Evidence-Based Analysis
Long-term Oxygen Therapy for Patients With Chronic Obstructive Pulmonary Disease (COPD): An Evidence-Based Analysis
Noninvasive Positive Pressure Ventilation for Acute Respiratory Failure Patients With Chronic Obstructive Pulmonary Disease (COPD): An Evidence-Based Analysis
Noninvasive Positive Pressure Ventilation for Chronic Respiratory Failure Patients With Stable Chronic Obstructive Pulmonary Disease (COPD): An Evidence-Based Analysis
Hospital-at-Home Programs for Patients With Acute Exacerbations of Chronic Obstructive Pulmonary Disease (COPD): An Evidence-Based Analysis
Home Telehealth for Patients With Chronic Obstructive Pulmonary Disease (COPD): An Evidence-Based Analysis
Cost-Effectiveness of Interventions for Chronic Obstructive Pulmonary Disease Using an Ontario Policy Model
Experiences of Living and Dying With COPD: A Systematic Review and Synthesis of the Qualitative Empirical Literature
For more information on the qualitative review, please contact Mita Giacomini at: http://fhs.mcmaster.ca/ceb/faculty_member_giacomini.htm.
For more information on the economic analysis, please visit the PATH website: http://www.path-hta.ca/About-Us/Contact-Us.aspx.
The Toronto Health Economics and Technology Assessment (THETA) collaborative has produced an associated report on patient preference for mechanical ventilation. For more information, please visit the THETA website: http://theta.utoronto.ca/static/contact.
Objective
The objective of this analysis was to conduct an evidence-based assessment of home telehealth technologies for patients with chronic obstructive pulmonary disease (COPD) in order to inform recommendations regarding the access and provision of these services in Ontario. This analysis was one of several analyses undertaken to evaluate interventions for COPD. The perspective of this assessment was that of the Ontario Ministry of Health and Long-Term Care, a provincial payer of medically necessary health care services.
Clinical Need: Condition and Target Population
Canada is facing an increase in chronic respiratory diseases due in part to its aging demographic. The projected increase in COPD will put a strain on health care payers and providers. There is therefore an increasing demand for telehealth services that improve access to health care services while maintaining or improving quality and equality of care. Many telehealth technologies however are in the early stages of development or diffusion and thus require study to define their application and potential harms or benefits. The Medical Advisory Secretariat (MAS) therefore sought to evaluate telehealth technologies for COPD.
Technology
Telemedicine (or telehealth) refers to using advanced information and communication technologies and electronic medical devices to support the delivery of clinical care, professional education, and health-related administrative services.
Generally there are 4 broad functions of home telehealth interventions for COPD:
to monitor vital signs or biological health data (e.g., oxygen saturation),
to monitor symptoms, medication, or other non-biologic endpoints (e.g., exercise adherence),
to provide information (education) and/or other support services (such as reminders to exercise or positive reinforcement), and
to establish a communication link between patient and provider.
These functions often require distinct technologies, although some devices can perform a number of these diverse functions. For the purposes of this review, MAS focused on home telemonitoring and telephone only support technologies.
Telemonitoring (or remote monitoring) refers to the use of medical devices to remotely collect a patient’s vital signs and/or other biologic health data and the transmission of those data to a monitoring station for interpretation by a health care provider.
Telephone only support refers to disease/disorder management support provided by a health care provider to a patient who is at home via telephone or videoconferencing technology in the absence of transmission of patient biologic data.
Research Questions
What is the effectiveness, cost-effectiveness, and safety of home telemonitoring compared with usual care for patients with COPD?
What is the effectiveness, cost-effectiveness, and safety of telephone only support programs compared with usual care for patients with COPD?
Research Methods
Literature Search
Search Strategy
A literature search was performed on November 3, 2010 using OVID MEDLINE, MEDLINE In-Process and Other Non-Indexed Citations, EMBASE, the Cumulative Index to Nursing & Allied Health Literature (CINAHL), the Cochrane Library, and the International Agency for Health Technology Assessment (INAHTA) for studies published from January 1, 2000 until November 3, 2010. Abstracts were reviewed by a single reviewer and, for those studies meeting the eligibility criteria, full-text articles were obtained. Reference lists were also examined for any additional relevant studies not identified through the search. Articles with unknown eligibility were reviewed with a second clinical epidemiologist, and then a group of epidemiologists until consensus was established. The quality of evidence was assessed as high, moderate, low, or very low according to GRADE methodology.
Inclusion Criteria – Question #1
frequent transmission of a patient’s physiological data collected at home and without a health care professional physically present to health care professionals for routine monitoring through the use of a communication technology;
monitoring combined with a coordinated management and feedback system based on transmitted data;
telemonitoring as a key component of the intervention (subjective determination);
usual care as provided by the usual care provider for the control group;
randomized controlled trials (RCTs), controlled clinical trials (CCTs), systematic reviews, and/or meta-analyses;
published between January 1, 2000 and November 3, 2010.
Inclusion Criteria – Question #2
scheduled or frequent contact between patient and a health care professional via telephone or videoconferencing technology in the absence of transmission of patient physiological data;
monitoring combined with a coordinated management and feedback system based on transmitted data;
telephone support as a key component of the intervention (subjective determination);
usual care as provided by the usual care provider for the control group;
RCTs, CCTs, systematic reviews, and/or meta-analyses;
published between January 1, 2000 and November 3, 2010.
Exclusion Criteria
published in a language other than English;
intervention group (and not control) receiving some form of home visits by a medical professional, typically a nurse (i.e., telenursing) beyond initial technology set-up and education, to collect physiological data, or to somehow manage or treat the patient;
not recording patient or health system outcomes (e.g., technical reports testing accuracy, reliability or other development-related outcomes of a device, acceptability/feasibility studies, etc.);
not using an independent control group that received usual care (e.g., studies employing historical or periodic controls).
Outcomes of Interest
hospitalizations (primary outcome)
mortality
emergency department visits
length of stay
quality of life
other […]
Subgroup Analyses (a priori)
length of intervention (primary)
severity of COPD (primary)
Quality of Evidence
The quality of evidence assigned to individual studies was determined using a modified CONSORT Statement Checklist for Randomized Controlled Trials. (1) The CONSORT Statement was adapted to include 3 additional quality measures: the adequacy of the control group description, significant differential loss to follow-up between groups, and study attrition of 30% or greater. Individual study quality was defined based on total scores according to the CONSORT Statement checklist: very low (0 to <40%), low (≥40 to <60%), moderate (≥60 to <80%), and high (≥80 to 100%).
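Because the banding rule is deterministic, it can be made explicit in a few lines. The sketch below (Python; the function name is illustrative, not from the report) shows how the half-open bands handle the boundary scores:

```python
def consort_quality_band(score_pct):
    """Map a modified-CONSORT checklist score, expressed as a percentage of
    the total, to the report's study-quality bands:
    very low [0, 40), low [40, 60), moderate [60, 80), high [80, 100]."""
    if not 0 <= score_pct <= 100:
        raise ValueError("score_pct must lie in [0, 100]")
    if score_pct < 40:
        return "very low"
    if score_pct < 60:
        return "low"
    if score_pct < 80:
        return "moderate"
    return "high"
```

Note that each lower boundary belongs to the higher band, so a study scoring exactly 60% is rated moderate, not low.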
The quality of the body of evidence was assessed as high, moderate, low, or very low according to the GRADE Working Group criteria. The following definitions of quality were used in grading the quality of the evidence:
Summary of Findings
Six publications, representing 5 independent trials, met the eligibility criteria for Research Question #1. Three trials were RCTs reported across 4 publications, whereby patients were randomized to home telemonitoring or usual care, and 2 trials were CCTs, whereby patients or health care centers were nonrandomly assigned to intervention or usual care.
A total of 310 participants were studied across the 5 included trials. The mean age of study participants in the included trials ranged from 61.2 to 74.5 years for the intervention group and 61.1 to 74.5 years for the usual care group. The percentage of men ranged from 40% to 64% in the intervention group and 46% to 72% in the control group.
All 5 trials were performed in a moderate to severe COPD patient population. Three trials initiated the intervention following discharge from hospital. One trial initiated the intervention following a pulmonary rehabilitation program. The final trial initiated the intervention during management of patients at an outpatient clinic.
Four of the 5 trials included oxygen saturation (i.e., pulse oximetry) as one of the biological patient parameters being monitored. Additional parameters monitored included forced expiratory volume in one second, peak expiratory flow, and temperature.
There was considerable clinical heterogeneity between trials in study design, methods, and intervention/control. In relation to the telemonitoring intervention, 3 of the 5 included studies used an electronic health hub that performed multiple functions beyond the monitoring of biological parameters. One study used only a pulse oximeter device alone with modem capabilities. Finally, in 1 study, patients measured and then forwarded biological data to a nurse during a televideo consultation. Usual care varied considerably between studies.
Only one trial met the eligibility criteria for Research Question #2. The included trial was an RCT that randomized 60 patients to nurse telephone follow-up or usual care (no telephone follow-up). Participants were recruited from the medical department of an acute-care hospital in Hong Kong and began receiving follow-up after discharge from the hospital with a diagnosis of COPD (no severity restriction). The intervention itself consisted of only two 10- to 20-minute telephone calls, once between days 3 and 7 and once between days 14 and 20, involving a structured, individualized educational and supportive program led by a nurse that focused on 3 components: assessment, management options, and evaluation.
Regarding Research Question #1:
Low to very low quality evidence (according to GRADE) finds non-significant effects or conflicting effects (of significant or non-significant benefit) for all outcomes examined when comparing home telemonitoring to usual care.
There is a trend towards a significant increase in time free of hospitalization and use of other health care services with home telemonitoring, but these findings need to be confirmed in further high-quality randomized trials.
There is severe clinical heterogeneity between studies that limits summary conclusions.
The economic impact of home telemonitoring is uncertain and requires further study.
Home telemonitoring is largely dependent on local information technologies, infrastructure, and personnel, and thus the generalizability of external findings may be low. Jurisdictions wishing to replicate home telemonitoring interventions should likely test those interventions within their jurisdictional framework before adoption, or should focus on home-grown interventions that are subjected to appropriate evaluation and proven effective.
Regarding Research Question #2:
Low quality evidence finds significant benefit in favour of telephone-only support for self-efficacy and emergency department visits when compared to usual care, but non-significant results for hospitalizations and hospital length of stay.
There are very serious issues with the generalizability of the evidence and thus additional research is required.
PMCID: PMC3384362  PMID: 23074421
22.  Configuring Balanced Scorecards for Measuring Health System Performance: Evidence from 5 Years' Evaluation in Afghanistan 
PLoS Medicine  2011;8(7):e1001066.
Anbrasi Edward and colleagues report the results of a balanced scorecard performance system used to examine 29 key performance indicators over a 5-year period in Afghanistan, between 2004 and 2008.
Background
In 2004, Afghanistan pioneered a balanced scorecard (BSC) performance system to manage the delivery of primary health care services. This study examines the trends of 29 key performance indicators over a 5-year period between 2004 and 2008.
Methods and Findings
Independent evaluations of performance in six domains were conducted annually through 5,500 patient observations and exit interviews and 1,500 provider interviews in >600 facilities selected by stratified random sampling in each province. Generalized estimating equation (GEE) models were used to assess trends in BSC parameters. There was a progressive improvement in the national median scores scaled from 0–100 between 2004 and 2008 in all six domains: patient and community satisfaction with services (65.3–84.5, p<0.0001); provider satisfaction (65.4–79.2, p<0.01); capacity for service provision (47.4–76.4, p<0.0001); quality of services (40.5–67.4, p<0.0001); and overall vision for pro-poor and pro-female health services (52.0–52.6). The financial domain also showed improvement until 2007 (84.4–95.7, p<0.01), after which user fees were eliminated. By 2008, all provinces achieved the upper benchmark of the national median set in 2004.
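The benchmarking step described above can be sketched in a few lines: the national median for a domain in the baseline year (2004) becomes the fixed benchmark, and each province's later score is checked against it. The provincial scores below are illustrative placeholders, not the study's data.

```python
from statistics import median

# Hypothetical 2004 and 2008 provincial scores for one BSC domain
# (0-100 scale); these figures are made up for illustration.
scores_2004 = {"Province A": 48.0, "Province B": 52.0,
               "Province C": 40.5, "Province D": 55.0}
scores_2008 = {"Province A": 70.1, "Province B": 68.4,
               "Province C": 61.0, "Province D": 72.3}

# The 2004 national median becomes the fixed benchmark.
benchmark = median(scores_2004.values())

# A province "achieves the benchmark" if its later score meets or exceeds it.
achieved = {p: s >= benchmark for p, s in scores_2008.items()}

print(benchmark)               # 50.0
print(all(achieved.values()))  # True: every province met the 2004 benchmark
```

In the study the same comparison was made per domain against the 2004 national median, which is why the benchmark stays fixed while later years are scored against it.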
Conclusions
The BSC has been successfully employed to assess and improve health service capacity and service delivery using performance benchmarking during the 5-year period. However, scorecard reconfigurations are needed to integrate effectiveness and efficiency measures and accommodate changes in health systems policy and strategy architecture to ensure its continued relevance and effectiveness as a comprehensive health system performance measure. The process of BSC design and implementation can serve as a valuable prototype for health policy planners managing performance in similar health care contexts.
Please see later in the article for the Editors' Summary
Editors' Summary
Background
Traditionally, the performance of a health system (the complete network of health care agencies, facilities, and providers in a defined geographical region) has been measured in terms of health outcomes: how many people have been treated, how many got better, and how many died. But, nowadays, with increased demand for improved governance and accountability, policy makers are seeking comprehensive performance measures that show in detail how innovations designed to strengthen health systems are affecting service delivery and health outcomes. One such performance measure is the “balanced scorecard,” an integrated management and measurement tool that enables organizations to clarify their vision and strategy and translate them into action. The balanced scorecard—essentially a list of key performance indicators and performance benchmarks in several domains—was originally developed for industry but is now becoming a popular strategic management tool in the health sector. For example, balanced scorecards have been successfully integrated into the Dutch and Italian public health care systems.
Why Was This Study Done?
Little is known about the use of balanced scorecards in the national public health care systems of developing countries but the introduction of performance management into health system reform in fragile states in particular (developing countries where the state fails to perform the fundamental functions necessary to meet its citizens' basic needs and expectations) could help to promote governance and leadership, and facilitate essential policy changes. One fragile state that has introduced the balanced scorecard system for public health care management is Afghanistan, which emerged from decades of conflict in 2002 with some of the world's worst health indicators. To deal with an extremely high burden of disease, the Ministry of Public Health (MOPH) designed a Basic Package of Health Services (BPHS), which is delivered by nongovernmental organizations and MOPH agencies. In 2004, the MOPH introduced the National Health Service Performance Assessment (NHSPA), an annual country-wide assessment of service provision and patient satisfaction and pioneered a balanced scorecard, which uses data collected in the NHSPA, to manage the delivery of primary health care services. In this study, the researchers examine the trends between 2004 and 2008 of the 29 key performance indicators in six domains included in this balanced scorecard, and consider the potential and limitations of the scorecard as a management tool to measure and improve health service delivery in Afghanistan and other similar countries.
What Did the Researchers Do and Find?
Each year of the study, a random sample of 25 facilities (district hospitals and comprehensive and basic health centers) in 28 of Afghanistan's 34 provinces was chosen (one province did not have functional facilities in 2004 and the other five missing provinces were inaccessible because of ongoing conflicts). NHSPA surveyors collected approximately 5,000 patient observations, 5,000 exit interviews with patients or their caregivers, and 1,500 health provider interviews by observing consultations involving five children under 5 years old and five patients over 5 years old in each facility. The researchers then used this information to evaluate the key performance indicators in the balanced scorecard and a statistical method called generalized estimating equation modeling to assess trends in these indicators. They report that there was a progressive improvement in national average scores in all six domains (patient and community satisfaction with services, provider satisfaction, capacity for service provision, quality of services, overall vision for pro-poor and pro-female health services, and financial systems) between 2004 and 2008.
What Do These Findings Mean?
These findings suggest that the balanced scorecard was successfully used to improve health system capacity and service delivery through performance benchmarking over the 5-year study period. Importantly, the use of the balanced scorecard helped to show the effects of investments, facilitate policy change, and create a more evidence-based decision-making culture in Afghanistan's primary health care system. However, the researchers warn that the continuing success of the balanced scorecard in Afghanistan will depend on its ability to accommodate changes in health systems policy. Furthermore, reconfigurations of the scorecard are needed to include measures of the overall effectiveness and efficiency of the health system such as mortality rates. More generally, the researchers conclude that the balanced scorecard offers a promising measure of health system performance that could be used to examine the effectiveness of health care strategies and innovations in other fragile and developing countries.
Additional Information
Please access these Web sites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.1001066.
A 2010 article entitled An Afghan Success Story: The Balanced Scorecard and Improved Health Services in The Globe, a newsletter produced by the Department of International Health at the Johns Hopkins Bloomberg School of Public Health, provides a detailed description of the balanced scorecard used in this study
Wikipedia has a page on health systems and on balanced scorecards (note that Wikipedia is a free online encyclopedia that anyone can edit; available in several languages)
The World Health Organization country profile of Afghanistan provides information on the country's health system and burden of disease (in several languages)
doi:10.1371/journal.pmed.1001066
PMCID: PMC3144209  PMID: 21814499
23.  A quasi-experimental assessment of the effectiveness of the Community Health Strategy on health outcomes in Kenya 
BMC Health Services Research  2014;14(Suppl 1):S3.
Background
Despite focused health policies and a reform agenda, Kenya faces continuing challenges in reducing household poverty and ill health; interventions to address the Millennium Development Goals in maternal and child health, such as focused antenatal care and immunization of children, have yet to achieve success. Research has shown that addressing the demand side is critical in improving health outcomes. This paper presents a model for health systems performance improvement using a strategy that bridges the interface between the community and the health system.
Methods
The study employed a quasi-experimental design, using pre- and post-intervention surveys in intervention and control sites. The intervention was the implementation of all components of the Kenyan Community Health Strategy, guided by policy. During the two-year intervention (2011 and 2012), the strategy was introduced to selected district health management teams, service providers, and communities through a series of three-day training workshops held three times during the intervention period.
Baseline and endline surveys were conducted in intervention and control sites, where community unit assessment was undertaken to determine the status of health service utilization before and after the intervention. A community health unit consists of 1,000 households, a population of about 5,000, served by trained community health workers, each supporting about 20 to 50 households. Data were organized and analyzed using Excel, SPSS, Epi Info, Stata, and SAS.
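A pre/post design with intervention and control sites of this kind is typically analysed as a difference-in-differences: the change in the control sites estimates the shared secular trend, which is subtracted from the change in the intervention sites. A minimal sketch, with made-up utilisation figures (the paper itself reports only significance levels):

```python
# Hypothetical health-facility-delivery rates (%) from baseline and
# endline surveys; the figures are illustrative, not from the study.
pre_intervention, post_intervention = 40.0, 62.0   # intervention sites
pre_control, post_control = 41.0, 47.0             # control sites

# Difference-in-differences: change in intervention sites minus change
# in control sites, netting out the trend common to both groups.
did = (post_intervention - pre_intervention) - (post_control - pre_control)
print(did)  # 16.0 percentage points attributable to the intervention
```

In practice the study's significance tests would be run on individual-level survey data rather than on site-level summary rates, but the comparison being made is the same.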
Results
A number of health indicators, such as health facility delivery, antenatal care, water treatment, latrine use, and use of insecticide-treated nets, improved in the intervention sites compared to non-intervention sites. The difference between intervention and control sites was statistically significant (p<0.0001) for antenatal care, health facility delivery, water treatment, latrine use, use of insecticide-treated nets, presence of a clinic card, and measles vaccination. The degree of improvement across the various indicators differed by socio-demographic context: changes were greatest in the rural agrarian sites, compared to peri-urban and nomadic sites.
Conclusion
The study showed that most components of the strategy were implemented and sustained in different socio-demographic contexts, and that participatory community planning based on household information drove the improvement of health indicators.
doi:10.1186/1472-6963-14-S1-S3
PMCID: PMC4108865  PMID: 25079378
Community health strategy; community dialogue; quasi-experimental design; community health workers; health outcomes
24.  Facilitating the Recruitment of Minority Ethnic People into Research: Qualitative Case Study of South Asians and Asthma 
PLoS Medicine  2009;6(10):e1000148.
Aziz Sheikh and colleagues report on a qualitative study in the US and the UK to investigate ways to bolster recruitment of South Asians into asthma studies, including making inclusion of diverse populations mandatory.
Background
There is international interest in enhancing recruitment of minority ethnic people into research, particularly in disease areas with substantial ethnic inequalities. A recent systematic review and meta-analysis found that UK South Asians are at three times increased risk of hospitalisation for asthma when compared to white Europeans. US asthma trials are far more likely to report enrolling minority ethnic people into studies than those conducted in Europe. We investigated approaches to bolster recruitment of South Asians into UK asthma studies through qualitative research with US and UK researchers, and UK community leaders.
Methods and Findings
Interviews were conducted with 36 researchers (19 UK and 17 US) from diverse disciplinary backgrounds and ten community leaders from a range of ethnic, religious, and linguistic backgrounds, followed by self-completion questionnaires. Interviews were digitally recorded, translated where necessary, and transcribed. The Framework approach was used for analysis. Barriers to ethnic minority participation revolved around five key themes: (i) researchers' own attitudes, which ranged from empathy to antipathy to (in a minority of cases) misgivings about the scientific importance of the question under study; (ii) stereotypes and prejudices about the difficulties in engaging with minority ethnic populations; (iii) the logistical challenges posed by language, cultural differences, and research costs set against the need to demonstrate value for money; (iv) the unique contexts of the two countries; and (v) poorly developed understanding amongst some minority ethnic leaders of what research entails and aims to achieve. US researchers were considerably more positive than their UK counterparts about the importance and logistics of including ethnic minorities, which appeared to a large extent to reflect the longer-term impact of the National Institutes of Health's requirement to include minority ethnic people.
Conclusions
Most researchers and community leaders view the broadening of participation in research as important and are reasonably optimistic about the feasibility of recruiting South Asians into asthma studies provided that the barriers can be overcome. Suggested strategies for improving recruitment in the UK included a considerably improved support structure to provide academics with essential contextual information (e.g., languages of particular importance and contact with local gatekeepers), and ensuring that engagement with minority ethnic communities is both culturally appropriate and sustainable; ensuring reciprocal benefits was seen as one key way of avoiding gatekeeper fatigue. Although voluntary measures to encourage researchers may have some impact, greater impact might be achieved if UK funding bodies followed the lead of the US National Institutes of Health in requiring recruitment of ethnic minorities. Such a move is, however, likely to prove unpopular in the short to medium term with many UK academics because of the added “hassle” of engaging with more diverse populations than many have hitherto been accustomed to.
Please see later in the article for the Editors' Summary
Editors' Summary
Background
In an ideal world, everyone would have the same access to health care and the same health outcomes (responses to health interventions). However, health inequalities—gaps in health care and in health between different parts of the population—exist in many countries. In particular, people belonging to ethnic minorities in the UK, the US, and elsewhere have poorer health outcomes for several conditions than people belonging to the ethnic majority (ethnicity is defined by social characteristics such as cultural tradition or national origin). For example, in the UK, people whose ancestors came from the Indian subcontinent (also known as South Asians and comprising mainly people of Indian, Pakistani, and Bangladeshi origin) are three times as likely to be admitted to hospital for asthma as white Europeans. The reasons underpinning ethnic health inequalities are complex. Some inequalities may reflect intrinsic differences between groups of people—some ethnic minorities may inherit genes that alter their susceptibility to a specific disease. Other ethnic health inequalities may arise because of differences in socioeconomic status or because different cultural traditions affect the uptake of health care services.
Why Was This Study Done?
Minority ethnic groups are often under-represented in health research, which could limit the generalizability of research findings. That is, an asthma treatment that works well in a trial where all the participants are white Europeans might not be suitable for South Asians. Clinicians might nevertheless use the treatment in all their patients irrespective of their ethnicity and thus inadvertently increase ethnic health inequality. So, how can ethnic minorities be encouraged to enroll into research studies? In this qualitative study, the investigators try to answer this question by talking to US and UK asthma researchers and UK community leaders about how they feel about enrolling ethnic minorities into research studies. The investigators chose to compare the feelings of US and UK asthma researchers because minority ethnic people are more likely to enroll into US asthma studies than into UK studies, possibly because the US National Institutes of Health's (NIH) Revitalization Act 1993 mandates that all NIH-funded clinical research must include people from ethnic minority groups; there is no similar mandatory policy in the UK.
What Did the Researchers Do and Find?
The investigators interviewed 16 UK and 17 US asthma researchers and three UK social researchers with experience of working with ethnic minorities. They also interviewed ten community leaders from diverse ethnic, religious and linguistic backgrounds. They then analyzed the interviews using the “Framework” approach, an analytical method in which qualitative data are classified and organized according to key themes and then interpreted. By comparing the data from the UK and US researchers, the investigators identified several barriers to ethnic minority participation in health research including: the attitudes of researchers towards the scientific importance of recruiting ethnic minority people into health research studies; prejudices about the difficulties of including ethnic minorities in health research; and the logistical challenges posed by language and cultural differences. In general, the US researchers were more positive than their UK counterparts about the importance and logistics of including ethnic minorities in health research. Finally, the investigators found that some community leaders had a poor understanding of what research entails and about its aims.
What Do These Findings Mean?
These findings reveal a large gap between US and UK researchers in terms of policy, attitudes, practices, and experiences in relation to including ethnic minorities in asthma research. However, they also suggest that most UK researchers and community leaders believe that it is both important and feasible to increase the participation of South Asians in asthma studies. Although some of these findings may have been affected by the study participants sometimes feeling obliged to give “politically correct” answers, these findings are likely to be generalizable to other diseases and to other parts of Europe. Given their findings, the researchers warn that a voluntary code of practice that encourages the recruitment of ethnic minority people into health research studies is unlikely to be successful. Instead, they suggest, the best way to increase the representation of ethnic minority people in health research in the UK might be to follow the US lead and introduce a policy that requires their inclusion in such research.
Additional Information
Please access these Web sites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.1000148.
Families USA, a US nonprofit organization that campaigns for high-quality, affordable health care for all Americans, has information about many aspects of minority health in the US, including an interactive game about minority health issues
The US Agency for Healthcare Research and Quality has a section on minority health
The UK Department of Health provides information on health inequalities and a recent report on the experiences of patients in Black and minority ethnic groups
The UK Parliamentary Office of Science and Technology also has a short article on ethnicity and health
Information on the NIH Revitalization Act 1993 is available
NHS Evidence's Ethnicity and Health collection has a variety of policy, clinical, and research resources on ethnicity and health
doi:10.1371/journal.pmed.1000148
PMCID: PMC2752116  PMID: 19823568
25.  Indoor Residual Spraying in Combination with Insecticide-Treated Nets Compared to Insecticide-Treated Nets Alone for Protection against Malaria: A Cluster Randomised Trial in Tanzania 
PLoS Medicine  2014;11(4):e1001630.
Philippa West and colleagues compare Plasmodium falciparum infection prevalence in children, anemia in young children, and entomological inoculation rate between study arms.
Please see later in the article for the Editors' Summary
Background
Insecticide-treated nets (ITNs) and indoor residual spraying (IRS) of houses provide effective malaria transmission control. There is conflicting evidence about whether it is more beneficial to provide both interventions in combination. A cluster randomised controlled trial was conducted to investigate whether the combination provides added protection compared to ITNs alone.
Methods and Findings
In northwest Tanzania, 50 clusters (village areas) were randomly allocated to ITNs only or ITNs and IRS. Dwellings in the ITN+IRS arm were sprayed with two rounds of bendiocarb in 2012. Plasmodium falciparum prevalence rate (PfPR) in children 0.5–14 y old (primary outcome) and anaemia in children <5 y old (secondary outcome) were compared between study arms using three cross-sectional household surveys in 2012. Entomological inoculation rate (secondary outcome) was compared between study arms.
IRS coverage was approximately 90%. ITN use ranged from 36% to 50%. In intention-to-treat analysis, mean PfPR was 13% in the ITN+IRS arm and 26% in the ITN only arm, odds ratio = 0.43 (95% CI 0.19–0.97, n = 13,146). The strongest effect was observed in the peak transmission season, 6 mo after the first IRS. Subgroup analysis showed that ITN users were additionally protected if their houses were sprayed. Mean monthly entomological inoculation rate was non-significantly lower in the ITN+IRS arm than in the ITN only arm, rate ratio = 0.17 (95% CI 0.03–1.08).
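The reported odds ratio can be sanity-checked from the two prevalence point estimates alone. A small sketch converting each prevalence to odds and taking the ratio (the crude value happens to match the reported 0.43, though the trial's estimate comes from a model accounting for clustering by village):

```python
def odds(p):
    """Convert a proportion (0 < p < 1) to odds."""
    return p / (1.0 - p)

# PfPR point estimates from the intention-to-treat analysis.
pfpr_itn_irs = 0.13   # ITN + IRS arm
pfpr_itn_only = 0.26  # ITN only arm

# Odds ratio for infection, combined arm versus ITN-only arm.
odds_ratio = odds(pfpr_itn_irs) / odds(pfpr_itn_only)
print(round(odds_ratio, 2))  # 0.43
```

An odds ratio below 1 favours the combined intervention; note that with outcomes this common, the odds ratio (0.43) is noticeably further from 1 than the simple risk ratio (0.13/0.26 = 0.50).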
Conclusions
This is the first randomised trial to our knowledge that reports significant added protection from combining IRS and ITNs compared to ITNs alone. The effect is likely to be attributable to IRS providing added protection to ITN users as well as compensating for inadequate ITN use. Policy makers should consider deploying IRS in combination with ITNs to control transmission if local ITN strategies on their own are insufficiently effective. Given the uncertain generalisability of these findings, it would be prudent for malaria control programmes to evaluate the cost-effectiveness of deploying the combination.
Trial registration
www.ClinicalTrials.gov NCT01697852
Please see later in the article for the Editors' Summary
Editors' Summary
Background
Every year, more than 200 million cases of malaria occur worldwide, and more than 600,000 people, mainly children living in sub-Saharan Africa, die from this parasitic infection. Malaria parasites, which are transmitted to people through the bites of infected night-flying mosquitoes, cause a characteristic fever that needs to be treated promptly with antimalarial drugs to prevent anaemia (a reduction in red blood cell numbers) and organ damage. Prompt treatment also helps to reduce malaria transmission, but the mainstays of global malaria control efforts are the provision of insecticide-treated nets (ITNs) for people to sleep under to avoid mosquito bites, and indoor residual spraying (IRS) of houses with insecticides, which kills mosquitoes when they rest on sprayed indoor surfaces. Both approaches have been scaled up in the past decade. About 54% of households in Africa now own at least one ITN, and 8% of at-risk populations are protected by IRS. As a result of the widespread deployment of these preventative tools and the increased availability of effective antimalarial drugs, malaria-related deaths in Africa fell by 45% between 2000 and 2012.
Why Was This Study Done?
Some countries have chosen to use ITNs and IRS in combination, reasoning that this will increase the proportion of individuals who are protected by at least one intervention and may provide additional protection to people using both interventions rather than one alone. However, providing both interventions is costly, so it is important to know whether this rationale is correct. In this cluster randomised controlled trial (a study that compares outcomes of groups of people randomly assigned to receive different interventions) undertaken in the Muleba District of Tanzania during 2012, the researchers investigate whether ITNs plus IRS provide more protection against malaria than ITNs alone. Malaria transmission occurs throughout the year in Muleba District but peaks after the October–December and March–May rains. Ninety-one percent of the district's households own at least one ITN, and 58% of households own enough ITNs to cover all their sleeping places. Annual rounds of IRS have been conducted in the region since 2007.
What Did the Researchers Do and Find?
The researchers allocated 50 communities to the ITN intervention or to the ITN+IRS intervention. Dwellings allocated to ITN+IRS were sprayed with insecticide just before each of the malaria transmission peaks in 2012. The researchers used household surveys to collect information about ITN coverage in the study population, the proportion of children aged 0.5–14 years infected with the malaria parasite Plasmodium falciparum (the prevalence of infection), and the proportion of children under five years old with anaemia. IRS coverage in the ITN+IRS arm was approximately 90%, and 50% of the children in both intervention arms used ITNs at the start of the trial, declining to 36% at the end of the study. In an intention-to-treat analysis (which assumed that all study participants got the planned intervention), the average prevalence of infection was 13% in the ITN+IRS arm and 26% in the ITN arm. A per-protocol analysis (which considered data only from participants who received their allocated intervention) indicated that the combined intervention had a statistically significant protective effect on the prevalence of infection compared to ITNs alone (an effect that is unlikely to have arisen by chance). Finally, the proportion of young children with anaemia was lower in the ITN+IRS arm than in the ITN arm, but this effect was not statistically significant.
What Do These Findings Mean?
These findings provide evidence that IRS, when used in combination with ITNs, can provide better protection against malaria infection than ITNs used alone. This effect is likely to be the result of IRS providing added protection to ITN users as well as compensating for inadequate ITN use. The findings also suggest that the combination of interventions may reduce the prevalence of anaemia better than ITNs alone, but this result needs to be confirmed. Additional trials are also needed to investigate whether ITN+IRS compared to ITN reduces clinical cases of malaria, and whether similar effects are seen in other settings. Moreover, the cost-effectiveness of ITN+IRS and ITN alone needs to be compared. For now, though, these findings suggest that national malaria control programs should consider implementing IRS in combination with ITNs if local ITN strategies alone are insufficiently effective and cannot be improved.
Additional Information
Please access these websites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.1001630.
Information is available from the World Health Organization on malaria (in several languages), including information on insecticide-treated bed nets and indoor residual spraying; the World Malaria Report 2013 provides details of the current global malaria situation
The US Centers for Disease Control and Prevention provides information on malaria, on insecticide-treated bed nets, and on indoor residual spraying; it also provides a selection of personal stories about malaria
Information is available from the Roll Back Malaria Partnership on the global control of malaria and on the Global Malaria Action Plan (in English and French); its website includes fact sheets about malaria in Africa and about nets and insecticides
MedlinePlus provides links to additional information on malaria (in English and Spanish)
doi:10.1371/journal.pmed.1001630
PMCID: PMC3988001  PMID: 24736370
