1.  A survey of occupational cancer in the rubber and cablemaking industries: results of five-year analysis, 1967-71 
Fox, A. J., Lindars, D. C., and Owen, R. (1974). British Journal of Industrial Medicine, 31, 140-151. A survey of occupational cancer in the rubber and cablemaking industries: results of five-year analysis, 1967-71. A mortality study of 40 867 subjects employed in the rubber and cablemaking industries on 1 February 1967 is reported. No evidence is found of a continued excess risk of neoplasms of the bladder in people who entered the industry after 1949. For those employed before that date, during the period when known bladder carcinogens were in use, the SMR is higher than predicted, indicating that men are still dying with occupationally induced tumours.
An excess of all neoplasms was noted in the five years of the study. In certain sections of the industry (tyre manufacture, belting hose rubber with asbestos, and flooring industry) there is a particular excess of bronchial carcinoma. In those sections which use asbestos such an excess is not altogether surprising, but this does not apply to the tyre industry. The latter industry is sufficiently large (16 035 men in the study compared with 4 350 in the belting, hose rubber with asbestos, and flooring industry) for attention to be focused on particular operations. Two job groups are found to share the excess: moulding, press, autoclave, and pan curemen; and finished goods, packaging, and despatch. Job selection may play a part in the latter, as the work is generally considered suitable for older and perhaps less healthy people.
Crude analyses have been undertaken to indicate whether the excesses are due to regional differences or to the population comprising an abnormally high proportion of smokers. No excesses are found in other smoking-related diseases. Although the effects of differences in smoking habit and regional differences cannot be ruled out, the indications are against these factors being the primary cause.
The difficulties of this type of study are discussed. It is emphasized that the results can be used only as an indication of a problem area and the type of further study required. A more exact study would concentrate on five-year cohorts of people who left the industry between 1940 and 1960. A study of five-year cohorts of people who entered the industry in the same period would also be valuable. An attempt has been made to perform the latter, but it would be hazardous to draw too many conclusions from this because the population comprises those who 'survive' in the industry until 1967.
PMCID: PMC1009569  PMID: 4830765
2.  Noninvasive Positive Pressure Ventilation for Acute Respiratory Failure Patients With Chronic Obstructive Pulmonary Disease (COPD) 
Executive Summary
In July 2010, the Medical Advisory Secretariat (MAS) began work on a Chronic Obstructive Pulmonary Disease (COPD) evidentiary framework, an evidence-based review of the literature surrounding treatment strategies for patients with COPD. This project emerged from a request by the Health System Strategy Division of the Ministry of Health and Long-Term Care that MAS provide them with an evidentiary platform on the effectiveness and cost-effectiveness of COPD interventions.
After an initial review of health technology assessments and systematic reviews of COPD literature, and consultation with experts, MAS identified the following topics for analysis: vaccinations (influenza and pneumococcal), smoking cessation, multidisciplinary care, pulmonary rehabilitation, long-term oxygen therapy, noninvasive positive pressure ventilation for acute and chronic respiratory failure, hospital-at-home for acute exacerbations of COPD, and telehealth (including telemonitoring and telephone support). Evidence-based analyses were prepared for each of these topics. For each technology, an economic analysis was also completed where appropriate. In addition, a review of the qualitative literature on patient, caregiver, and provider perspectives on living and dying with COPD was conducted, as were reviews of the qualitative literature on each of the technologies included in these analyses.
The Chronic Obstructive Pulmonary Disease Mega-Analysis series is made up of the following reports, which can be publicly accessed at the MAS website at: http://www.hqontario.ca/en/mas/mas_ohtas_mn.html.
Chronic Obstructive Pulmonary Disease (COPD) Evidentiary Framework
Influenza and Pneumococcal Vaccinations for Patients With Chronic Obstructive Pulmonary Disease (COPD): An Evidence-Based Analysis
Smoking Cessation for Patients With Chronic Obstructive Pulmonary Disease (COPD): An Evidence-Based Analysis
Community-Based Multidisciplinary Care for Patients With Stable Chronic Obstructive Pulmonary Disease (COPD): An Evidence-Based Analysis
Pulmonary Rehabilitation for Patients With Chronic Obstructive Pulmonary Disease (COPD): An Evidence-Based Analysis
Long-term Oxygen Therapy for Patients With Chronic Obstructive Pulmonary Disease (COPD): An Evidence-Based Analysis
Noninvasive Positive Pressure Ventilation for Acute Respiratory Failure Patients With Chronic Obstructive Pulmonary Disease (COPD): An Evidence-Based Analysis
Noninvasive Positive Pressure Ventilation for Chronic Respiratory Failure Patients With Stable Chronic Obstructive Pulmonary Disease (COPD): An Evidence-Based Analysis
Hospital-at-Home Programs for Patients With Acute Exacerbations of Chronic Obstructive Pulmonary Disease (COPD): An Evidence-Based Analysis
Home Telehealth for Patients With Chronic Obstructive Pulmonary Disease (COPD): An Evidence-Based Analysis
Cost-Effectiveness of Interventions for Chronic Obstructive Pulmonary Disease Using an Ontario Policy Model
Experiences of Living and Dying With COPD: A Systematic Review and Synthesis of the Qualitative Empirical Literature
For more information on the qualitative review, please contact Mita Giacomini at: http://fhs.mcmaster.ca/ceb/faculty_member_giacomini.htm.
For more information on the economic analysis, please visit the PATH website: http://www.path-hta.ca/About-Us/Contact-Us.aspx.
The Toronto Health Economics and Technology Assessment (THETA) collaborative has produced an associated report on patient preference for mechanical ventilation. For more information, please visit the THETA website: http://theta.utoronto.ca/static/contact.
Objective
The objective of this evidence-based analysis was to examine the effectiveness, safety, and cost-effectiveness of noninvasive positive pressure ventilation (NPPV) in the following patient populations: patients with acute respiratory failure (ARF) due to acute exacerbations of chronic obstructive pulmonary disease (COPD); weaning of COPD patients from invasive mechanical ventilation (IMV); and prevention of or treatment of recurrent respiratory failure in COPD patients after extubation from IMV.
Clinical Need and Target Population
Acute Hypercapnic Respiratory Failure
Respiratory failure occurs when the respiratory system cannot oxygenate the blood and/or remove carbon dioxide from the blood. It can be either acute or chronic and is classified as either hypoxemic (type I) or hypercapnic (type II) respiratory failure. Acute hypercapnic respiratory failure frequently occurs in patients experiencing acute exacerbations of COPD, so it is the focus of this evidence-based analysis. Hypercapnic respiratory failure occurs because of a decrease in the drive to breathe, typically due to the increased work of breathing in COPD patients.
Technology
There are several treatment options for ARF. Usual medical care (UMC) attempts to facilitate adequate oxygenation and treat the cause of the exacerbation, and typically consists of supplemental oxygen and a variety of medications such as bronchodilators, corticosteroids, and antibiotics. The failure rate of UMC is high, with treatment failure estimated to occur in 10% to 50% of cases.
The alternative is mechanical ventilation, either invasive or noninvasive. Invasive mechanical ventilation involves sedating the patient, creating an artificial airway through endotracheal intubation, and attaching the patient to a ventilator. While this provides airway protection and direct access to drain sputum, it can lead to substantial morbidity, including tracheal injuries and ventilator-associated pneumonia (VAP).
While both positive and negative pressure forms of noninvasive ventilation exist, noninvasive negative pressure ventilation such as the iron lung is no longer in use in Ontario. Noninvasive positive pressure ventilation provides ventilatory support through a facial or nasal mask and reduces inspiratory work. Noninvasive positive pressure ventilation can often be used intermittently for short periods of time to treat respiratory failure, which allows patients to continue to eat, drink, talk, and participate in their own treatment decisions. In addition, patients do not require sedation, airway defence mechanisms and swallowing functions are maintained, trauma to the trachea and larynx is avoided, and the risk for VAP is reduced. Common complications are damage to facial and nasal skin, a higher incidence of gastric distension with aspiration risk, sleeping disorders, and conjunctivitis. In addition, NPPV does not allow direct access to the airway to drain secretions and requires patients to cooperate, and due to potential discomfort, compliance and tolerance may be low.
In addition to treating ARF, NPPV can be used to wean patients from IMV through the gradual removal of ventilation support until the patient can breathe spontaneously. Five percent to 30% of patients have difficulty weaning. Tapering levels of ventilatory support to wean patients from IMV can be achieved using IMV or NPPV. The use of NPPV helps to reduce the risk of VAP by shortening the time the patient is intubated.
Following extubation from IMV, ARF may recur, leading to extubation failure and the need for reintubation, which has been associated with increased risk of nosocomial pneumonia and mortality. To avoid these complications, NPPV has been proposed to help prevent ARF recurrence and/or to treat respiratory failure when it recurs, thereby preventing the need for reintubation.
Research Questions
What is the effectiveness, cost-effectiveness, and safety of NPPV for the treatment of acute hypercapnic respiratory failure due to acute exacerbations of COPD compared with
usual medical care, and
invasive mechanical ventilation?
What is the effectiveness, cost-effectiveness, and safety of NPPV compared with IMV in COPD patients after IMV for the following purposes:
weaning COPD patients from IMV,
preventing ARF in COPD patients after extubation from IMV, and
treating ARF in COPD patients after extubation from IMV?
Research Methods
Literature Search
A literature search was performed on December 3, 2010 using OVID MEDLINE, MEDLINE In-Process and Other Non-Indexed Citations, OVID EMBASE, the Cumulative Index to Nursing & Allied Health Literature (CINAHL), Wiley Cochrane, and the Centre for Reviews and Dissemination/International Network of Agencies for Health Technology Assessment (INAHTA) for studies published from January 1, 2004 until December 3, 2010. Abstracts were reviewed by a single reviewer and, for those studies meeting the eligibility criteria, full-text articles were obtained. Reference lists were also examined for any additional relevant studies not identified through the search.
Since there were numerous studies that examined the effectiveness of NPPV for the treatment of ARF due to exacerbations of COPD published before 2004, pre-2004 trials which met the inclusion/exclusion criteria for this evidence-based review were identified by hand-searching reference lists of included studies and systematic reviews.
Inclusion Criteria
English language full-reports;
health technology assessments, systematic reviews, meta-analyses, and randomized controlled trials (RCTs);
studies performed exclusively in patients with a diagnosis of COPD or studies performed with patients with a mix of conditions if results are reported for COPD patients separately;
patient population: (Question 1) patients with acute hypercapnic respiratory failure due to an exacerbation of COPD; (Question 2a) COPD patients being weaned from IMV; (Questions 2b and 2c) COPD patients who have been extubated from IMV.
Exclusion Criteria
< 18 years of age
animal studies
duplicate publications
grey literature
studies examining noninvasive negative pressure ventilation
studies comparing modes of ventilation
studies comparing patient-ventilation interfaces
studies examining outcomes not listed below, such as physiologic effects including heart rate, arterial blood gases, and blood pressure
Outcomes of Interest
mortality
intubation rates
length of stay (intensive care unit [ICU] and hospital)
health-related quality of life
breathlessness
duration of mechanical ventilation
weaning failure
complications
NPPV tolerance and compliance
Statistical Methods
When possible, results were pooled using Review Manager 5 (Version 5.1); otherwise, the results were summarized descriptively. Dichotomous data were pooled into relative risks using random effects models, and continuous data were pooled using weighted mean differences with a random effects model. Analyses using data from RCTs were done on an intention-to-treat basis; P values < 0.05 were considered significant. A priori subgroup analyses were planned for severity of respiratory failure, location of treatment (ICU or hospital ward), and mode of ventilation, with additional subgroups added as needed based on the literature. Post hoc sample size calculations were performed using STATA 10.1.
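To illustrate the pooling approach described above, the following is a minimal sketch of DerSimonian-Laird random-effects pooling of relative risks from dichotomous outcome data, conceptually similar to what Review Manager computes; the study counts below are hypothetical, and this is not the MAS analysis code.
```python
# Minimal sketch: DerSimonian-Laird random-effects pooling of relative risks.
# Study data are hypothetical (events_treat, n_treat, events_control, n_control).
import numpy as np

studies = [(5, 40, 12, 42), (8, 85, 19, 80), (3, 25, 7, 24)]

log_rr, var = [], []
for a, n1, c, n2 in studies:
    log_rr.append(np.log((a / n1) / (c / n2)))   # log relative risk
    var.append(1 / a - 1 / n1 + 1 / c - 1 / n2)  # its approximate variance
log_rr, var = np.array(log_rr), np.array(var)

w = 1 / var                                      # fixed-effect weights
y_fixed = np.sum(w * log_rr) / np.sum(w)
q = np.sum(w * (log_rr - y_fixed) ** 2)          # heterogeneity statistic Q
k = len(studies)
tau2 = max(0.0, (q - (k - 1)) / (np.sum(w) - np.sum(w**2) / np.sum(w)))  # between-study variance

w_star = 1 / (var + tau2)                        # random-effects weights
pooled = np.sum(w_star * log_rr) / np.sum(w_star)
se = np.sqrt(1 / np.sum(w_star))
print(f"Pooled RR = {np.exp(pooled):.2f} "
      f"(95% CI {np.exp(pooled - 1.96 * se):.2f} to {np.exp(pooled + 1.96 * se):.2f})")
```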
Quality of Evidence
The quality of each included study was assessed taking into consideration allocation concealment, randomization, blinding, power/sample size, withdrawals/dropouts, and intention-to-treat analyses.
The quality of the body of evidence was assessed as high, moderate, low, or very low according to the GRADE Working Group criteria. The following standard GRADE definitions of quality were used in grading the quality of the evidence: high (further research is very unlikely to change confidence in the estimate of effect); moderate (further research is likely to have an important impact on confidence in the estimate of effect and may change the estimate); low (further research is very likely to have an important impact on confidence in the estimate of effect and is likely to change the estimate); and very low (any estimate of effect is very uncertain).
Summary of Findings
NPPV for the Treatment of ARF due to Acute Exacerbations of COPD
NPPV Plus Usual Medical Care Versus Usual Medical Care Alone for First Line Treatment
A total of 1,000 participants were included in 11 RCTs; the sample size ranged from 23 to 342. The mean age of the participants ranged from approximately 60 to 72 years of age. Based on either the Global Initiative for Chronic Obstructive Lung Disease (GOLD) COPD stage criteria or the mean percent predicted forced expiratory volume in 1 second (FEV1), 4 of the studies included people with severe COPD, and there was inadequate information to classify the remaining 7 studies by COPD severity. The severity of the respiratory failure was classified into 4 categories using the study population mean pH level as follows: mild (pH ≥ 7.35), moderate (7.30 ≤ pH < 7.35), severe (7.25 ≤ pH < 7.30), and very severe (pH < 7.25). Based on these categories, 3 studies included patients with mild respiratory failure, 3 with moderate respiratory failure, 4 with severe respiratory failure, and 1 with very severe respiratory failure.
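The pH thresholds above amount to a simple classification rule; the short sketch below (the function name is hypothetical) applies them to a study population's mean pH.
```python
def classify_respiratory_failure(mean_ph: float) -> str:
    """Severity category for a study population's mean pH, per the categories above."""
    if mean_ph >= 7.35:
        return "mild"
    if mean_ph >= 7.30:
        return "moderate"
    if mean_ph >= 7.25:
        return "severe"
    return "very severe"

print(classify_respiratory_failure(7.28))  # a mean pH of 7.28 falls in the 'severe' band
```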
The studies were conducted either in the ICU (3 of 11 studies) or general or respiratory wards (8 of 11 studies) in hospitals, with patients in the NPPV group receiving bilevel positive airway pressure (BiPAP) ventilatory support, except in 2 studies, which used pressure support ventilation and volume cycled ventilation, respectively. Patients received ventilation through nasal, facial, or oronasal masks. All studies specified a protocol or schedule for NPPV delivery, but this varied substantially across the studies. For example, some studies restricted the amount of ventilation per day (e.g., 6 hours per day) and the number of days it was offered (e.g., maximum of 3 days); whereas, other studies provided patients with ventilation for as long as they could tolerate it and recommended it for much longer periods of time (e.g., 7 to 10 days). These differences are an important source of clinical heterogeneity between the studies. In addition to NPPV, all patients in the NPPV group also received UMC. Usual medical care varied between the studies, but common medications included supplemental oxygen, bronchodilators, corticosteroids, antibiotics, diuretics, and respiratory stimulators.
The individual quality of the studies ranged. Common methodological issues included lack of blinding and allocation concealment, and small sample sizes.
Need for Endotracheal Intubation
Eleven studies reported the need for endotracheal intubation as an outcome. The pooled results showed a significant reduction in the need for endotracheal intubation in the NPPV plus UMC group compared with the UMC alone group (relative risk [RR], 0.38; 95% confidence interval [CI], 0.28−0.50). When subgrouped by severity of respiratory failure, the results remained significant for the mild, severe, and very severe respiratory failure groups.
GRADE: moderate
Inhospital Mortality
Nine studies reported inhospital mortality as an outcome. The pooled results showed a significant reduction in inhospital mortality in the NPPV plus UMC group compared with the UMC group (RR, 0.53; 95% CI, 0.35−0.81). When subgrouped by severity of respiratory failure, the results remained significant for the moderate and severe respiratory failure groups.
GRADE: moderate
Hospital Length of Stay
Eleven studies reported hospital length of stay (LOS) as an outcome. The pooled results showed a significant decrease in the mean length of stay for the NPPV plus UMC group compared with the UMC alone group (weighted mean difference [WMD], −2.68 days; 95% CI, −4.41 to −0.94 days). When subgrouped by severity of respiratory failure, the results remained significant for the mild, severe, and very severe respiratory failure groups.
GRADE: moderate
Complications
Five studies reported complications. Common complications in the NPPV plus UMC group included pneumonia, gastrointestinal disorders or bleeds, skin abrasions, eye irritation, gastric insufflation, and sepsis. Similar complications were observed in the UMC group including pneumonia, sepsis, gastrointestinal disorders or bleeds, pneumothorax, and complicated endotracheal intubations. Many of the more serious complications in both groups occurred in those patients who required endotracheal intubation. Three of the studies compared complications in the NPPV plus UMC and UMC groups. While the data could not be pooled, overall, the NPPV plus UMC group experienced fewer complications than the UMC group.
GRADE: low
Tolerance/Compliance
Eight studies reported patient tolerance or compliance with NPPV as an outcome. NPPV intolerance ranged from 5% to 29%. NPPV tolerance was generally higher for patients with more severe respiratory failure. Compliance with the NPPV protocol was reported by 2 studies, which showed compliance decreases over time, even over short periods such as 3 days.
NPPV Versus IMV for the Treatment of Patients Who Failed Usual Medical Care
A total of 205 participants were included in 2 studies; the sample sizes of these studies were 49 and 156. The mean age of the patients was 71 to 73 years of age in 1 study, and the median age was 54 to 58 years of age in the second study. Based on either the GOLD COPD stage criteria or the mean percent predicted FEV1, patients in 1 study had very severe COPD. The COPD severity could not be classified in the second study. Both studies had study populations with a mean pH less than 7.23, which was classified as very severe respiratory failure in this analysis. One study enrolled patients with ARF due to acute exacerbations of COPD who had failed medical therapy. The patient population was not clearly defined in the second study, and it was not clear whether they had to have failed medical therapy before entry into the study.
Both studies were conducted in the ICU. Patients in the NPPV group received BiPAP ventilatory support through nasal or full facial masks. Patients in the IMV group received pressure support ventilation.
Common methodological issues included small sample size, lack of blinding, and unclear methods of randomization and allocation concealment. Due to the uncertainty about whether both studies included the same patient population and substantial differences in the direction and significance of the results, the results of the studies were not pooled.
Mortality
Both studies reported ICU mortality. Neither study showed a significant difference in ICU mortality between the NPPV and IMV groups, but 1 study showed a higher mortality rate in the NPPV group (21.7% vs. 11.5%) while the other study showed a lower mortality rate in the NPPV group (5.1% vs. 6.4%). One study reported 1-year mortality and showed a nonsignificant reduction in mortality in the NPPV group compared with the IMV group (26.1% vs. 46.1%).
GRADE: low to very low
Intensive Care Unit Length of Stay
Both studies reported LOS in the ICU. The results were inconsistent. One study showed a statistically significant shorter LOS in the NPPV group compared with the IMV group (5 ± 1.35 days vs. 9.29 ± 3 days; P < 0.001); whereas, the other study showed a nonsignificantly longer LOS in the NPPV group compared with the IMV group (22 ± 19 days vs. 21 ± 20 days; P = 0.86).
GRADE: very low
Duration of Mechanical Ventilation
Both studies reported the duration of mechanical ventilation (including both invasive and noninvasive ventilation). The results were inconsistent. One study showed a statistically significant shorter duration of mechanical ventilation in the NPPV group compared with the IMV group (3.92 ± 1.08 days vs. 7.17 ± 2.22 days; P < 0.001); whereas, the other study showed a nonsignificantly longer duration of mechanical ventilation in the NPPV group compared with the IMV group (16 ± 19 days vs. 15 ± 21 days; P = 0.86).
GRADE: very low
Complications
Both studies reported ventilator-associated pneumonia and tracheotomies. Both showed a reduction in ventilator-associated pneumonia in the NPPV group compared with the IMV group, but the results were only significant in 1 study (13% vs. 34.6%, P = 0.07; and 6.4% vs. 37.2%, P < 0.001, respectively). Similarly, both studies showed a reduction in tracheotomies in the NPPV group compared with the IMV group, but the results were only significant in 1 study (13% vs. 23.1%, P = 0.29; and 6.4% vs. 34.6%, P < 0.001, respectively).
GRADE: very low
Other Outcomes
One of the studies followed patients for 12 months. At the end of follow-up, patients in the NPPV group had a significantly lower rate of needing de novo oxygen supplementation at home. In addition, the IMV group experienced significant increases in functional limitations due to COPD, while no increase was seen in the NPPV group. Finally, no significant differences were observed for hospital readmissions, ICU readmissions, and patients with an open tracheotomy, between the NPPV and IMV groups.
NPPV for Weaning COPD Patients From IMV
A total of 80 participants were included in the 2 RCTs; the sample sizes of the studies were 30 and 50 patients. The mean age of the participants ranged from 58 to 69 years of age. Based on either the GOLD COPD stage criteria or the mean percent predicted FEV1, both studies included patients with very severe COPD. Both studies also included patients with very severe respiratory failure (mean pH of the study populations was less than 7.23). Chronic obstructive pulmonary disease patients receiving IMV were enrolled in the study if they failed a T-piece weaning trial (spontaneous breathing test), so they could not be directly extubated from IMV.
Both studies were conducted in the ICU. Patients in the NPPV group received weaning using either BiPAP or pressure support ventilation NPPV through a face mask, and patients in the IMV weaning group received pressure support ventilation. In both cases, weaning was achieved by tapering the ventilation level.
The individual quality of the studies ranged. Common methodological problems included unclear randomization methods and allocation concealment, lack of blinding, and small sample size.
Mortality
Both studies reported mortality as an outcome. The pooled results showed a significant reduction in ICU mortality in the NPPV group compared with the IMV group (RR, 0.47; 95% CI, 0.23−0.97; P = 0.04).
GRADE: moderate
Intensive Care Unit Length of Stay
Both studies reported ICU LOS as an outcome. The pooled results showed a nonsignificant reduction in ICU LOS in the NPPV group compared with the IMV group (WMD, −5.21 days; 95% CI, −11.60 to 1.18 days).
GRADE: low
Duration of Mechanical Ventilation
Both studies reported duration of mechanical ventilation (including both invasive and noninvasive ventilation) as an outcome. The pooled results showed a nonsignificant reduction in duration of mechanical ventilation (WMD, −3.55 days; 95% CI, −8.55 to 1.44 days).
GRADE: low
Nosocomial Pneumonia
Both studies reported nosocomial pneumonia as an outcome. The pooled results showed a significant reduction in nosocomial pneumonia in the NPPV group compared with the IMV group (RR, 0.14; 95% CI, 0.03−0.71; P = 0.02).
GRADE: moderate
Weaning Failure
One study reported a significant reduction in weaning failure in the NPPV group compared with the IMV group, although the supporting results were not reported in the publication. In this study, 1 of 25 patients in the NPPV group and 2 of 25 patients in the IMV group could not be weaned after 60 days in the ICU.
NPPV After Extubation of COPD Patients From IMV
The literature was reviewed to identify studies examining the effectiveness of NPPV compared with UMC in preventing recurrence of ARF after extubation from IMV or treating ARF that has recurred after extubation from IMV. No studies that included only COPD patients or reported results for COPD patients separately were identified for the prevention of ARF postextubation.
One study was identified for the treatment of ARF in COPD patients that recurred within 48 hours of extubation from IMV. This study included 221 patients, of whom 23 had COPD. A post hoc subgroup analysis was conducted examining the rate of reintubation in the COPD patients only. A nonsignificant reduction in the rate of reintubation was observed in the NPPV group compared with the UMC group (7 of 14 patients vs. 6 of 9 patients, P = 0.67).
GRADE: low
Conclusions
NPPV Plus UMC Versus UMC Alone for First Line Treatment of ARF due to Acute Exacerbations of COPD
Moderate quality of evidence showed that compared with UMC, NPPV plus UMC significantly reduced the need for endotracheal intubation, inhospital mortality, and the mean length of hospital stay.
Low quality of evidence showed a lower rate of complications in the NPPV plus UMC group compared with the UMC group.
NPPV Versus IMV for the Treatment of ARF in Patients Who Have Failed UMC
Due to inconsistent and low to very low quality of evidence, there was insufficient evidence to draw conclusions on the comparison of NPPV versus IMV for patients who failed UMC.
NPPV for Weaning COPD Patients From IMV
Moderate quality of evidence showed that weaning COPD patients from IMV using NPPV results in significant reductions in mortality, nosocomial pneumonia, and weaning failure compared with weaning with IMV.
Low quality of evidence showed a nonsignificant reduction in the mean LOS and mean duration of mechanical ventilation in the NPPV group compared with the IMV group.
NPPV for the Treatment of ARF in COPD Patients After Extubation From IMV
Low quality of evidence showed a nonsignificant reduction in the rate of reintubation in the NPPV group compared with the UMC group; however, there was inadequate evidence to draw conclusions on the effectiveness of NPPV for the treatment of ARF in COPD patients after extubation from IMV.
PMCID: PMC3384377  PMID: 23074436
3.  Mortality experience of workers exposed to vinyl chloride monomer in the manufacture of polyvinyl chloride in Great Britain. 
Identification particulars were obtained for over 7000 men who were at some time between 1940 and 1974 exposed to vinyl chloride monomer in the manufacture of polyvinyl chloride. Approximately 99% of these men have been traced and their mortality experience studied. The overall standardised mortality ratio, 75.4, shows a significant reduction compared with the national rates. Four cases of liver cancer were found. Two of these have been confirmed by a panel of liver pathologists as angiosarcoma and two as not angiosarcoma. There is no evidence to support the hypothesis that cancers other than those of the liver are associated with exposure to vinyl chloride monomer. The two cases of angiosarcoma were found in men who had been exposed to high concentrations of the monomer, although the second man died only eight years after first exposure. The industry in Great Britain has expanded considerably since the Second World War, with over 50% of men having entered within the last decade. Conclusions drawn about the effect of vinyl chloride monomer on the mortality experience of men in this industry must consequently be tempered by the reservation that the full impact may not yet be in evidence.
PMCID: PMC1008165  PMID: 557328
4.  Socioeconomic Factors and All Cause and Cause-Specific Mortality among Older People in Latin America, India, and China: A Population-Based Cohort Study 
PLoS Medicine  2012;9(2):e1001179.
Cleusa Ferri and colleagues studied mortality rates in over 12,000 people aged 65 years and over in Latin America, India, and China and showed that chronic diseases are the main causes of death and that education has an important effect on mortality.
Background
Even in low and middle income countries (LMICs) most deaths occur in older adults. In Europe, the effects of better education and home ownership upon mortality seem to persist into old age, but these effects may not generalise to LMICs. Reliable data on causes and determinants of mortality are lacking.
Methods and Findings
The vital status of 12,373 people aged 65 y and over was determined 3–5 y after baseline survey in sites in Latin America, India, and China. We report crude and standardised mortality rates, standardized mortality ratios comparing mortality experience with that in the United States, and estimated associations with socioeconomic factors using Cox's proportional hazards regression. Cause-specific mortality fractions were estimated using the InterVA algorithm. Crude mortality rates varied from 27.3 to 70.0 per 1,000 person-years, a 3-fold variation persisting after standardisation for demographic and economic factors. Compared with the US, mortality was much higher in urban India and rural China, much lower in Peru, Venezuela, and urban Mexico, and similar in other sites. Mortality rates were higher among men, and increased with age. Adjusting for these effects, it was found that education, occupational attainment, assets, and pension receipt were all inversely associated with mortality, and food insecurity positively associated. Mutually adjusted, only education remained protective (pooled hazard ratio 0.93, 95% CI 0.89–0.98). Most deaths occurred at home, but, except in India, most individuals received medical attention during their final illness. Chronic diseases were the main causes of death, together with tuberculosis and liver disease, with stroke the leading cause in nearly all sites.
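To make the survival-analysis step concrete, the sketch below shows a Cox proportional hazards fit of mortality against socioeconomic covariates using the Python lifelines package; the column names and values are hypothetical and are not drawn from the 10/66 dataset, and the standardisation and InterVA steps described above are not shown.
```python
# Sketch of a Cox proportional hazards fit for mortality against socioeconomic
# covariates, in the spirit of the analysis described above. Data are hypothetical.
import pandas as pd
from lifelines import CoxPHFitter

df = pd.DataFrame({
    "follow_up_years": [3.2, 4.1, 2.7, 5.0, 3.9, 1.8, 4.6, 2.2],  # time at risk
    "died":            [1,   0,   1,   0,   0,   1,   0,   1],    # event indicator
    "education_years": [0,   8,   2,   12,  6,   4,   0,   10],
    "has_pension":     [0,   1,   0,   1,   0,   1,   1,   0],
})

cph = CoxPHFitter()
cph.fit(df, duration_col="follow_up_years", event_col="died")
cph.print_summary()  # the exp(coef) column gives the hazard ratio for each covariate
```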
Conclusions
Education seems to have an important latent effect on mortality into late life. However, compositional differences in socioeconomic position do not explain differences in mortality between sites. Social protection for older people, and the effectiveness of health systems in preventing and treating chronic disease, may be as important as economic and human development.
Please see later in the article for the Editors' Summary
Editors' Summary
Background
Worldwide, half of all deaths occur in people aged 60 or older. Yet mortality among older people is a neglected topic in global health. In high income countries, where 84% of people do not die until they are aged 65 years or older, the causes of death among older people and the factors (determinants) that affect their risk of dying are well documented. In Europe, for example, the leading causes of death among older people are heart disease, stroke, and other chronic (long-term) diseases. Moreover, as in younger age groups, having a better education and owning a house reduces the risk of death among older people. By contrast, in low and middle income countries (LMICs), where three-quarters of deaths of older people occur, reliable data on the causes and determinants of death among older people are lacking, in part because many LMICs have inadequate vital registration systems—official records of all births and deaths.
Why Was This Study Done?
In many LMICs, chronic diseases are replacing communicable (infectious) diseases as the leading causes of death and disability—health experts call this the epidemiological transition (epidemiology is the study of the distribution and causes of disease in populations)—and the average age of the population is increasing (the demographic transition). Faced with these changes, which occur when countries move from a pre-industrial to an industrial economy, policy makers in LMICs need to introduce measures to improve health and reduce deaths among older people. However, to do this, they need reliable data on the causes and determinants of death in this section of the population. In this longitudinal population-based cohort study (a type of study that follows a group of people from a defined population over time), researchers from the 10/66 Dementia Research Group, which is carrying out population-based research on dementia, aging, and non-communicable diseases in LMICs, investigate the patterns of mortality among older people living in Latin America, India, and China.
What Did the Researchers Do and Find?
Between 2003 and 2005, the researchers completed a baseline survey of people aged 65 years or older living in six Latin American LMICs, China, and India. Three to five years later, they determined the vital status of 12,373 of the study participants (that is, they determined whether the individual was alive or dead) and interviewed a key informant (usually a relative) about each death using a standardized “verbal autopsy” questionnaire that includes questions about date and place of death, and about medical help-seeking and signs and symptoms noted during the final illness. Finally, they used a tool called the InterVA algorithm to calculate the most likely causes of death from the verbal autopsies. Crude mortality rates varied from 27.3 per 1,000 person-years in urban Peru to 70.0 per 1,000 person-years in urban India, a three-fold difference in mortality rates that persisted even after allowing for differences in age, sex, education, occupational attainment, and number of assets among the study sites. Compared to the US, mortality rates were much higher in urban India and rural China; much lower in urban and rural Peru, Venezuela, and urban Mexico; but similar elsewhere. Although several socioeconomic factors were associated with mortality, only a higher education status provided consistent independent protection against death in statistical analyses. Finally, chronic diseases were the main causes of death; stroke was the leading cause of death at all the sites except those in rural Peru and Mexico.
What Do These Findings Mean?
These findings identify the main causes of death among older adults in a range of LMICs and suggest that there is an association of education with mortality that extends into later life. However, these findings may not be generalizable to other LMICs or even to other sites in the LMICs studied, and because some of the information provided by key informants may have been affected by recall error, the accuracy of the findings may be limited. Nevertheless, these findings suggest how health and mortality might be improved in elderly people in LMICs. Specifically, they suggest that efforts to ensure universal access to education should confer substantial health benefits and that interventions that target social and economic vulnerability in later life and promote access to effectively organized health care (particularly for stroke) should be considered.
Additional Information
Please access these websites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.1001179.
The World Health Organization provides information on mortality around the world and projections of global mortality up to 2030
The 10/66 Dementia Research Group is building an evidence base to inform the development and implementation of policies for improving the health and social welfare of older people in LMICs, particularly people with dementia; its website includes background information about demographic and epidemiological aging in LMICs
Wikipedia has a page on the demographic transition (note: Wikipedia is a free online encyclopedia that anyone can edit; available in several languages)
Information about the InterVA tool for interpreting verbal autopsy data is available
The US Centers for Disease Control and Prevention has information about healthy aging
doi:10.1371/journal.pmed.1001179
PMCID: PMC3289608  PMID: 22389633
5.  Metal-on-Metal Total Hip Resurfacing Arthroplasty 
Executive Summary
Objective
The objective of this review was to assess the safety and effectiveness of metal on metal (MOM) hip resurfacing arthroplasty for young patients compared with that of total hip replacement (THR) in the same population.
Clinical Need
Total hip replacement has proved to be very effective for late middle-aged and elderly patients with severe degenerative diseases of the hips. As indications for THR began to include younger patients and those with a more active lifestyle, the longevity of the implant became a concern. Evidence suggests that these patients experience relatively higher rates of early implant failure and the need for revision. The Swedish hip registry, for example, has demonstrated a survival rate in excess of 80% at 20 years for those aged over 65 years, whereas this figure was 33% by 16 years in those aged under 55 years.
Hip resurfacing arthroplasty is a bone-conserving alternative to THR that restores normal joint biomechanics and load transfer. The technique has been used around the world for more than 10 years, specifically in the United Kingdom and other European countries.
The Technology
Metal-on-metal hip resurfacing arthroplasty is an alternative procedure to conventional THR in younger patients. Hip resurfacing arthroplasty is less invasive than THR and addresses the problem of preserving femoral bone stock at the initial operation. This means that future hip revisions are possible with THR if the initial MOM arthroplasty becomes less effective with time in these younger patients. The procedure involves the removal and replacement of the surface of the femoral head with a hollow metal hemisphere, which fits into a metal acetabular cup.
Hip resurfacing arthroplasty is a technically more demanding procedure than is conventional THR. In hip resurfacing, the femoral head is retained, which makes it much more difficult to access the acetabular cup. However, hip resurfacing arthroplasty has several advantages over a conventional THR with a small (28 mm) ball. First, the large femoral head reduces the chance of dislocation, so that rates of dislocation are less than those with conventional THR. Second, the range of motion with hip resurfacing arthroplasty is higher than that achieved with conventional THR.
A variety of MOM hip resurfacing implants are used in clinical practice. Six MOM hip resurfacing implants have been issued licences in Canada.
Review Strategy
A search of electronic bibliographies (OVID Medline, Medline In-Process and Other Non-Indexed Citations, Embase, Cochrane CENTRAL and DSR, INAHTA) was undertaken to identify evidence published from January 1, 1997 to October 27, 2005. The search was limited to English-language articles and human studies. The literature search yielded 245 citations. Of these, 11 met inclusion criteria (9 for effectiveness, 2 for safety).
The result of the only reported randomized controlled trial on MOM hip resurfacing arthroplasty could not be included in this assessment, because it used a cemented acetabular component, whereas in the new generation of implants, a cementless acetabular component is used. After omitting this publication, only case series remained.
Summary of Findings
Health Outcomes
The Harris hip score and SF-12 are 2 measures commonly used to report health outcomes in MOM hip resurfacing arthroplasty studies. Other scales used are the Oxford hip score and the University of California Los Angeles (UCLA) hip score.
The case series showed that the mean revision rate of MOM hip resurfacing arthroplasty is 1.5% and the incidence of femoral neck fracture is 0.67%. Across all studies, 2 cases of osteonecrosis were reported. Four studies reported improvement in Harris hip scores. However, only 1 study reported a statistically significant improvement. Three studies reported improvement in SF-12 scores, of which 2 reported a significant improvement. One study reported significant improvement in UCLA hip score. Two studies reported postoperative Oxford hip scores, but no preoperative values were reported.
None of the reviewed studies reported procedure-related deaths. Four studies reported implant survival rates ranging from 94.4% to 99.7% for a follow-up period of 2.8 to 3.5 years. Three studies reported on the range of motion. One reported improvement in all motions including flexion, extension, abduction-adduction, and rotation, and another reported improvement in flexion. Yet another reported improvement in range of motion for flexion, abduction-adduction, and rotation arc. However, the author reported a decrease in the range of motion in the arc of flexion in patients with Brooker class III or IV heterotopic bone (all patients were men).
Safety of Metal-on-Metal Hip Resurfacing Arthroplasty
There is a concern about metal wear debris and its systemic distribution throughout the body. Detectable metal concentrations in the serum and urine of patients with metal hip implants have been described as early as the 1970s, and this issue is still controversial after 35 years.
Several studies have reported high concentration of cobalt and chromium in serum and/or urine of the patients with metal hip implants. Potential toxicological effects of the elevated metal ions have heightened concerns about safety of MOM bearings. This is of particular concern in young and active patients in whom life expectancy after implantation is long.
Since 1997, 15 studies, including 1 randomized clinical trial, have reported high levels of metal ions after THR with metal implants. Some of these studies have reported higher metal levels in patients with loose implants.
Adverse Biological Effects of Cobalt and Chromium
Because patients who receive a MOM hip arthroplasty are shown to be exposed to high concentrations of metallic ions, the Medical Advisory Secretariat searched the literature for reports of adverse biological effects of cobalt and chromium. Cobalt and chromium make up the major part of the metal articulations; therefore, they are a focus of concern.
Risk of Cancer
To date, only one study has examined the incidence of cancer after MOM and polyethylene on metal total hip arthroplasties. The results were compared with those of the general population in Finland. The mean duration of follow-up for MOM arthroplasty was 15.7 years; for polyethylene arthroplasty, it was 12.5 years. The standardized incidence ratio for all cancers in the MOM group was 0.95 (95% CI, 0.79–1.13). In the polyethylene on metal group it was 0.76 (95% CI, 0.68–0.86). The combined standardized incidence ratio for lymphoma and leukemia in the patients who had MOM THR was 1.59 (95% CI, 0.82–2.77). It was 0.59 (95% CI, 0.29–1.05) for the patients who had polyethylene on metal THR. Patients with MOM THR had a significantly higher risk of leukemia. All patients who had leukemia were aged over 60 years.
Cobalt Cardiotoxicity
Epidemiological Studies of Myocardiopathy of Beer Drinkers
An unusual type of myocardiopathy, characterized by pericardial effusion, elevated hemoglobin concentrations, and congestive heart failure, occurred as an epidemic affecting 48 habitual beer drinkers in Quebec City between 1965 and 1966. This epidemic was directly related to the consumption of a popular beer containing cobalt sulfate. The epidemic appeared 1 month after cobalt sulfate was added to the beer at this specific brewery, and no further cases were seen a month after this specific chemical was no longer used in making this beer. A beer of the same name is made in Montreal, and the only difference at that time was that the Quebec brand of beer contained about 10 times more cobalt sulfate. Cobalt has been added to some Canadian beers since 1965 to improve the stability of the foam, but it has been added in larger breweries only to draught beer. However, in small breweries, such as those in Quebec City, separate batches were not brewed for bottled and draught beer; therefore, cobalt was added to all of the beer processed in this brewery.
In March 1966, a committee was appointed under the chairmanship of the Deputy Minister of Health for Quebec that included members of the department of forensic medicine of Quebec’s Ministry of Justice, epidemiologists, members of Food and Drug Directorate of Ottawa, toxicologists, biomedical researchers, pathologists, and members of provincial police. Epidemiological studies were carried out by the Provincial Ministry of Health and the Quebec City Health Department.
The association between the development of myocardiopathy and the consumption of the particular brand of beer was proven. The mortality rate of this epidemic was 46.1%, and those who survived were desperately ill and recovered only after a struggle for their lives.
Similar cases were seen in Omaha (Nebraska). The epidemic started after a cobalt additive was used in 1 of the beers marketed in Nebraska. Sixty-four patients with the clinical diagnosis of alcoholic myocardiopathy were seen during an 18-month period (1964–1965). Thirty of these patients died. The first patient became ill within 1 month after cobalt was added to the beer, and the last patient was seen within 1 month of withdrawal of cobalt.
A similar epidemic occurred in Minneapolis, Minnesota. Between 1964 and 1967, 42 patients with acute heart failure were admitted to a hospital in Minneapolis, Minnesota. Twenty of these patients were drinking 6 to 30 bottles per day of a particular brand of beer exclusively. The other 14 patients also drank the same brand of beer, but not exclusively. The mortality rate from the acute illness was 18%, but late deaths accounted for a total mortality rate of 43%. Examination of the tissue from these patients revealed markedly abnormal changes in myofibrils (heart muscles), mitochondria, and sarcoplasmic reticulum.
In Belgium, a similar epidemic was reported in 1966, in which cobalt was used in some Belgian beers. There was a difference in mortality between the Canadian and American epidemics and this series: only 1 of 24 patients died, 1.5 years after the diagnosis. In March 1965, at an international meeting in Brussels, a new heart disease in chronic beer drinkers was described. This disease consists of massive pericardial effusion, low cardiac output, raised venous pressure, and polycythemia in some cases. This syndrome was thought to be different from the 2 other forms of alcoholic heart disease (beriberi and a form characterized by myocardial fibrosis).
The mystery of the above epidemics, as stated by investigators, is that the amount of cobalt added to the beer was below the therapeutic doses used for anemia. For example, 24 pints of the Quebec brand of beer would contain 8 mg of cobalt chloride, whereas an intake of 50 to 100 mg of cobalt as an antianemic agent has been well tolerated. Thus, greater cobalt intake alone does not explain the occurrence of myocardiopathy. It seems that there are individual differences in cobalt toxicity. Other features, like subclinical alcoholic heart disease, deficient diet, and electrolyte imbalance, could have been precipitating factors that made these patients susceptible to cobalt’s toxic effects.
In the Omaha epidemic, 60% of the patients had weight loss, anorexia, and occasional vomiting and diarrhea 2 to 6 months before the onset of cardiac symptoms. In the Quebec epidemic, patients lost their appetite 3 to 6 months before the diagnosis of myocardiopathy and developed nausea in the weeks before hospital admission. In the Belgium epidemic, anorexia was one of the most predominant symptoms at the time of diagnosis, and the quality and quantity of food intake was poor. Alcohol has been shown to increase the uptake of intracoronary injected cobalt by 47%. When cobalt enters the cells, calcium exits; this shifts the cobalt to calcium ratio. The increased uptake of cobalt in alcoholic patients may explain the high incidence of cardiomyopathies in beer drinkers’ epidemics.
As all of the above suggest, it may be that prior chronic exposure to alcohol and/or a nutritionally deficient diet may have a marked synergistic effect with the cardiotoxicity of cobalt.
Conclusions
MOM hip resurfacing arthroplasty has been shown to be an effective arthroplasty procedure as tested in younger patients.
However, evidence for effectiveness is based only on 7 case series with short duration of follow-up (2.8–3.5 years). There are no RCTs or other well-controlled studies that compare MOM hip resurfacing with THR.
Revision rates reported in the MOM studies using implants currently licensed in Canada (hybrid systems, uncemented acetabular, and cemented femoral) range from 0.3% to 3.6% for a mean follow-up ranging from 2.8 to 3.5 years.
Fracture of the femoral neck is not very common; it occurs in 0.4% to 2.2% of cases (as observed in a short follow-up period).
All the studies that measured health outcomes have reported improvement in Harris Hip and SF-12 scores; 1 study reported significant reduction in pain and improvement in function, and 2 studies reported significant improvement in SF-12 scores. One study reported significant improvement in UCLA Hip scores.
Concerns remain on the potential adverse effects of metal ions. Longer-term follow-up data will help to resolve the inconsistency of findings on adverse effects, including toxicity and carcinogenicity.
Ontario-Based Economic Analysis
The device cost for MOM ranges from $4,300 to $6,000 (Cdn). Traditional hip replacement devices cost about $2,000 (Cdn). Using Ontario Case Costing Initiative data, the total estimated cost for hip resurfacing surgery, including physician fees, device fees, follow-up consultation, and postsurgery rehabilitation, is about $15,000 (Cdn).
Cost of Total Hip Replacement Surgery in Ontario
MOM hip arthroplasty is generally recommended for patients aged under 55 years because its bone-conserving advantage enables patients to “buy time” and hence helps THRs to last over the lifetime of the patient. In 2004/2005, 15.9% of patients who received THRs were aged 55 years and younger. It is estimated that there are from 600 to 1,000 annual MOM hip arthroplasty surgeries in Canada with an estimated 100 to 150 surgeries in Ontario. Given the increased public awareness of this device, it is forecasted that demand for MOM hip arthroplasty will steadily increase with a conservative estimate of demand rising to 1,400 cases by 2010 (Figure 10). The net budget impact over a 5-year period could be $500,000 to $4.7 million, mainly because of the increasing cost of the device.
Projected Number of Metal-on-Metal Hip Arthroplasty Surgeries in Ontario: to 2010
PMCID: PMC3379532  PMID: 23074495
6.  Long-Term Exposure to Silica Dust and Risk of Total and Cause-Specific Mortality in Chinese Workers: A Cohort Study 
PLoS Medicine  2012;9(4):e1001206.
A retro-prospective cohort study by Weihong Chen and colleagues provides new estimates for the risk of total and cause-specific mortality due to long-term silica dust exposure among Chinese workers.
Background
Human exposure to silica dust is very common in both working and living environments. However, the potential long-term health effects have not been well established across different exposure situations.
Methods and Findings
We studied 74,040 workers who worked at 29 metal mines and pottery factories in China for 1 y or more between January 1, 1960, and December 31, 1974, with follow-up until December 31, 2003 (median follow-up of 33 y). We estimated the cumulative silica dust exposure (CDE) for each worker by linking work history to a job–exposure matrix. We calculated standardized mortality ratios for underlying causes of death based on Chinese national mortality rates. Hazard ratios (HRs) for selected causes of death associated with CDE were estimated using the Cox proportional hazards model. The population attributable risks were estimated based on the prevalence of workers with silica dust exposure and HRs. The number of deaths attributable to silica dust exposure among Chinese workers was then calculated using the population attributable risk and the national mortality rate. We observed 19,516 deaths during 2,306,428 person-years of follow-up. Mortality from all causes was higher among workers exposed to silica dust than among non-exposed workers (993 versus 551 per 100,000 person-years). We observed significant positive exposure–response relationships between CDE (measured in milligrams/cubic meter–years, i.e., the sum of silica dust concentrations multiplied by the years of silica exposure) and mortality from all causes (HR 1.026, 95% confidence interval 1.023–1.029), respiratory diseases (1.069, 1.064–1.074), respiratory tuberculosis (1.065, 1.059–1.071), and cardiovascular disease (1.031, 1.025–1.036). Significantly elevated standardized mortality ratios were observed for all causes (1.06, 95% confidence interval 1.01–1.11), ischemic heart disease (1.65, 1.35–1.99), and pneumoconiosis (11.01, 7.67–14.95) among workers exposed to respirable silica concentrations equal to or lower than 0.1 mg/m3. After adjustment for potential confounders, including smoking, silica dust exposure accounted for 15.2% of all deaths in this study. We estimated that 4.2% of deaths (231,104 cases) among Chinese workers were attributable to silica dust exposure. The limitations of this study included a lack of data on dietary patterns and leisure time physical activity, possible underestimation of silica dust exposure for individuals who worked at the mines/factories before 1950, and a small number of deaths (4.3%) where the cause of death was based on oral reports from relatives.
Conclusions
Long-term silica dust exposure was associated with substantially increased mortality among Chinese workers. The increased risk was observed not only for deaths due to respiratory diseases and lung cancer, but also for deaths due to cardiovascular disease.
Please see later in the article for the Editors' Summary
Editors' Summary
Background
Walk along most sandy beaches and you will be walking on millions of grains of crystalline silica, one of the commonest minerals on earth and a major ingredient in glass and in ceramic glazes. Silica is also used in the manufacture of building materials, in foundry castings, and for sandblasting, and respirable (breathable) crystalline silica particles are produced during quarrying and mining. Unfortunately, silica dust is not innocuous. Several serious diseases are associated with exposure to this dust, including silicosis (a chronic lung disease characterized by scarring and destruction of lung tissue), lung cancer, and pulmonary tuberculosis (a serious lung infection). Moreover, exposure to silica dust increases the risk of death (mortality). Worryingly, recent reports indicate that in the US and Europe, about 1.7 and 3.0 million people, respectively, are occupationally exposed to silica dust, figures that are dwarfed by the more than 23 million workers who are exposed in China. Occupational silica exposure, therefore, represents an important global public health concern.
Why Was This Study Done?
Although the lung-related adverse health effects of exposure to silica dust have been extensively studied, silica-related health effects may not be limited to these diseases. For example, could silica dust particles increase the risk of cardiovascular disease (diseases that affect the heart and circulation)? Other environmental particulates, such as the products of internal combustion engines, are associated with an increased risk of cardiovascular disease, but no one knows if the same is true for silica dust particles. Moreover, although it is clear that high levels of exposure to silica dust are dangerous, little is known about the adverse health effects of lower exposure levels. In this cohort study, the researchers examined the effect of long-term exposure to silica dust on the risk of all cause and cause-specific mortality in a large group (cohort) of Chinese workers.
What Did the Researchers Do and Find?
The researchers estimated the cumulative silica dust exposure for 74,040 workers at 29 metal mines and pottery factories from 1960 to 2003 from individual work histories and more than four million measurements of workplace dust concentrations, and collected health and mortality data for all the workers. Death from all causes was higher among workers exposed to silica dust than among non-exposed workers (993 versus 551 deaths per 100,000 person-years), and there was a positive exposure–response relationship between silica dust exposure and death from all causes, respiratory diseases, respiratory tuberculosis, and cardiovascular disease. For example, the hazard ratio for all cause death was 1.026 for every increase in cumulative silica dust exposure of 1 mg/m3-year; a hazard ratio is the incidence of an event in an exposed group divided by its incidence in an unexposed group. Notably, there was significantly increased mortality from all causes, ischemic heart disease, and silicosis among workers exposed to respirable silica concentrations at or below 0.1 mg/m3, the workplace exposure limit for silica dust set by the US Occupational Safety and Health Administration. For example, the standardized mortality ratio (SMR) for silicosis among people exposed to low levels of silica dust was 11.01; an SMR is the ratio of observed deaths in a cohort to expected deaths calculated from recorded deaths in the general population. Finally, the researchers used their data to estimate that, in 2008, 4.2% of deaths among industrial workers in China (231,104 deaths) were attributable to silica dust exposure.
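For readers who want to see how these summary measures fit together, the short sketch below (Python) walks through the arithmetic in an illustrative way: cumulative dust exposure as concentration multiplied by years, the Cox-model scaling of a per-unit hazard ratio, the observed-over-expected form of an SMR, and Levin's formula for a population attributable risk. Only the 1.026 hazard ratio quoted above is taken from the study; every other number, and all function names, are hypothetical placeholders.

```python
# Illustrative sketch only; not the study's analysis code.
import math

def cumulative_dust_exposure(jobs):
    """CDE (mg/m^3-years): sum of silica concentration x years for each job held.
    `jobs` is a list of (concentration_mg_m3, years) tuples from a work history
    linked to a job-exposure matrix (hypothetical data layout)."""
    return sum(conc * years for conc, years in jobs)

def hazard_ratio_at(cde, hr_per_unit=1.026):
    """Under the Cox model, HR(CDE) = exp(beta x CDE), where exp(beta) is the
    hazard ratio per 1 mg/m^3-year (1.026 for all-cause mortality above)."""
    return math.exp(math.log(hr_per_unit) * cde)

def smr(observed_deaths, expected_deaths):
    """Standardized mortality ratio: observed deaths in the cohort divided by
    deaths expected from the reference (here, national) mortality rates."""
    return observed_deaths / expected_deaths

def population_attributable_risk(prevalence_exposed, relative_risk):
    """Levin's formula: PAR = p(RR - 1) / (1 + p(RR - 1))."""
    excess = prevalence_exposed * (relative_risk - 1.0)
    return excess / (1.0 + excess)

# Hypothetical worker: 12 years at 0.5 mg/m^3 and 8 years at 0.2 mg/m^3.
cde = cumulative_dust_exposure([(0.5, 12), (0.2, 8)])      # 7.6 mg/m^3-years
print(round(hazard_ratio_at(cde), 2))                       # ~1.22
print(round(smr(33, 3), 2))                                 # 11.0 (hypothetical counts)
print(round(population_attributable_risk(0.10, 1.5), 3))    # ~0.048 (hypothetical inputs)
```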
What Do These Findings Mean?
These findings indicate that long-term silica dust exposure is associated with substantially increased mortality among Chinese workers. They confirm that there is an exposure–response relationship between silica dust exposure and a heightened risk of death from respiratory diseases and lung cancer. That is, the risk of death from these diseases increases as exposure to silica dust increases. In addition, they show a significant relationship between silica dust exposure and death from cardiovascular diseases. Importantly, these findings suggest that even levels of silica dust that are considered safe increase the risk of death. The accuracy of these findings may be affected by the accuracy of the silica dust exposure estimates and/or by confounding (other factors shared by the people exposed to silica such as diet may have affected their risk of death). Nevertheless, these findings highlight the need to tighten regulations on workplace dust control in China and elsewhere.
Additional Information
Please access these websites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.1001206.
The American Lung Association provides information on silicosis
The US Centers for Disease Control and Prevention provides information on silica in the workplace, including links to relevant US National Institute for Occupational Health and Safety publications, and information on silicosis and other pneumoconioses
The US Occupational Safety and Health Administration also has detailed information on occupational exposure to crystalline silica
What does silicosis mean to you is a video provided by the US Mine Safety and Health Administration that includes personal experiences of silicosis; Don't let silica dust you is a video produced by the Association of Occupational and Environmental Clinics that identifies ways to reduce silica dust exposure in the workplace
The MedlinePlus encyclopedia has a page on silicosis (in English and Spanish)
The International Labour Organization provides information on health surveillance for those exposed to respirable crystalline silica
The World Health Organization has published a report about the health effects of crystalline silica and quartz
doi:10.1371/journal.pmed.1001206
PMCID: PMC3328438  PMID: 22529751
7.  Positron Emission Tomography for the Assessment of Myocardial Viability 
Executive Summary
In July 2009, the Medical Advisory Secretariat (MAS) began work on Non-Invasive Cardiac Imaging Technologies for the Assessment of Myocardial Viability, an evidence-based review of the literature surrounding different cardiac imaging modalities to ensure that appropriate technologies are accessed by patients undergoing viability assessment. This project came about when the Health Services Branch at the Ministry of Health and Long-Term Care asked MAS to provide an evidentiary platform on effectiveness and cost-effectiveness of non-invasive cardiac imaging modalities.
After an initial review of the strategy and consultation with experts, MAS identified five key non-invasive cardiac imaging technologies that can be used for the assessment of myocardial viability: positron emission tomography, cardiac magnetic resonance imaging, dobutamine echocardiography, dobutamine echocardiography with contrast, and single photon emission computed tomography.
A 2005 review conducted by MAS determined that positron emission tomography was more sensitive than dobutamine echocardiography and single photon emission computed tomography and dominated the other imaging modalities from a cost-effectiveness standpoint. However, there was inadequate evidence to compare positron emission tomography and cardiac magnetic resonance imaging. Thus, this report focuses on this comparison only. For both technologies, an economic analysis was also completed.
The Non-Invasive Cardiac Imaging Technologies for the Assessment of Myocardial Viability is made up of the following reports, which can be publicly accessed at the MAS website at: www.health.gov.on.ca/mas or at www.health.gov.on.ca/english/providers/program/mas/mas_about.html
Positron Emission Tomography for the Assessment of Myocardial Viability: An Evidence-Based Analysis
Magnetic Resonance Imaging for the Assessment of Myocardial Viability: An Evidence-Based Analysis
Objective
The objective of this analysis is to assess the effectiveness and safety of positron emission tomography (PET) imaging using F-18-fluorodeoxyglucose (FDG) for the assessment of myocardial viability. To evaluate the effectiveness of FDG PET viability imaging, the following outcomes are examined:
the diagnostic accuracy of FDG PET for predicting functional recovery;
the impact of PET viability imaging on prognosis (mortality and other patient outcomes); and
the contribution of PET viability imaging to treatment decision making and subsequent patient outcomes.
Clinical Need: Condition and Target Population
Left Ventricular Systolic Dysfunction and Heart Failure
Heart failure is a complex syndrome characterized by the heart’s inability to maintain adequate blood circulation through the body leading to multiorgan abnormalities and, eventually, death. Patients with heart failure experience poor functional capacity, decreased quality of life, and increased risk of morbidity and mortality.
In 2005, more than 71,000 Canadians died from cardiovascular disease, of which 54% were due to ischemic heart disease. Left ventricular (LV) systolic dysfunction due to coronary artery disease (CAD) is the primary cause of heart failure, accounting for more than 70% of cases. The prevalence of heart failure was estimated at one percent of the Canadian population in 1989. Since then, the increase in the older population has undoubtedly resulted in a substantial increase in cases. Heart failure is associated with a poor prognosis: one-year mortality rates were 32.9% and 31.1% for men and women, respectively, in Ontario between 1996 and 1997.
Treatment Options
In general, there are three options for the treatment of heart failure: medical treatment, heart transplantation, and revascularization for those with CAD as the underlying cause. Despite recent advances in medical treatment, mortality remains high among treated patients, while heart transplantation is limited by the availability of donor hearts and consequently has long waiting lists. The third option, revascularization, is used to restore the flow of blood to the heart via coronary artery bypass grafting (CABG) or through minimally invasive percutaneous coronary interventions (balloon angioplasty and stenting). Both methods, however, are associated with important perioperative risks, including mortality, so it is essential to properly select patients for these procedures.
Myocardial Viability
Left ventricular dysfunction may be permanent if a myocardial scar is formed, or it may be reversible after revascularization. Reversible LV dysfunction occurs when the myocardium is viable but dysfunctional (reduced contractility). Since only patients with dysfunctional but viable myocardium benefit from revascularization, the identification and quantification of the extent of myocardial viability is an important part of the work-up of patients with heart failure when determining the most appropriate treatment path. Various non-invasive cardiac imaging modalities can be used to assess patients in whom determination of viability is an important clinical issue, specifically:
dobutamine echocardiography (echo),
stress echo with contrast,
SPECT using either technetium or thallium,
cardiac magnetic resonance imaging (cardiac MRI), and
positron emission tomography (PET).
Dobutamine Echocardiography
Stress echocardiography can be used to detect viable myocardium. During the infusion of low-dose dobutamine (5–10 μg/kg/min), an improvement of contractility in hypokinetic and akinetic segments indicates the presence of viable myocardium. Alternatively, a low-high dose dobutamine protocol can be used, in which a biphasic response represents viable tissue: contractile function improves during the low-dose infusion and then deteriorates during the high-dose infusion (up to 40 μg/kg/min) because of stress-induced ischemia. Newer techniques, including echocardiography using contrast agents, harmonic imaging, and power Doppler imaging, may help to improve the diagnostic accuracy of echocardiographic assessment of myocardial viability.
Stress Echocardiography with Contrast
Intravenous contrast agents, which are high-molecular-weight inert gas microbubbles that behave like red blood cells in the vascular space, can be used during echocardiography to assess myocardial viability. These agents allow for the assessment of myocardial blood flow (perfusion) alongside contractile function (as described above); this simultaneous assessment of perfusion and function makes it possible to distinguish between stunned and hibernating myocardium.
SPECT
SPECT can be performed using thallium-201 (Tl-201), a potassium analogue, or technetium-99m-labelled tracers. When Tl-201 is injected intravenously, it is taken up by myocardial cells in proportion to regional perfusion and retained in the cell by sodium/potassium ATPase pumps in the myocyte membrane. The stress-redistribution-reinjection protocol involves three sets of images. The first two image sets (taken immediately after stress and then three to four hours after stress) identify perfusion defects that may represent scar tissue or viable tissue that is severely hypoperfused. The third set of images is taken a few minutes after the re-injection of Tl-201, once the second set of images is completed. These re-injection images identify viable tissue if the defects exhibit significant fill-in (> 10% increase in tracer uptake).
The other common Tl-201 viability imaging protocol, rest-redistribution, involves SPECT imaging performed at rest five minutes after Tl-201 is injected and again three to four hours later. Viable tissue is identified if the delayed images exhibit significant fill-in of defects identified in the initial scans (> 10% increase in uptake) or if defects are fixed but the tracer activity is greater than 50%.
There are two technetium-99m tracers: sestamibi (MIBI) and tetrofosmin. The uptake and retention of these tracers depend on regional perfusion and the integrity of cellular membranes. Viability is assessed using one set of images at rest and is defined by segments with tracer activity greater than 50%.
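As a rough illustration of how the thresholds above might be applied at the segment level, the sketch below encodes the > 10% fill-in and > 50% tracer-activity rules as simple functions. It is a simplified reading of the protocols, not clinical software; the function names, the interpretation of fill-in as percentage points of peak activity, and the example values are assumptions.

```python
# Simplified, illustrative encoding of the viability criteria described above.

def tl201_reinjection_viable(stress_uptake_pct, reinjection_uptake_pct):
    """Stress-redistribution-reinjection: a defect is read as viable if it
    'fills in' by more than 10% (interpreted here as percentage points of
    peak tracer activity) on the re-injection images."""
    return (reinjection_uptake_pct - stress_uptake_pct) > 10.0

def tl201_rest_redistribution_viable(initial_uptake_pct, delayed_uptake_pct):
    """Rest-redistribution: viable if the delayed images show > 10% fill-in,
    or if the defect is fixed but tracer activity stays above 50%."""
    fill_in = (delayed_uptake_pct - initial_uptake_pct) > 10.0
    fixed_but_active = not fill_in and delayed_uptake_pct > 50.0
    return fill_in or fixed_but_active

def tc99m_viable(rest_uptake_pct):
    """Sestamibi / tetrofosmin: a single resting image; viable if segmental
    tracer activity exceeds 50%."""
    return rest_uptake_pct > 50.0

# Hypothetical segment: 35% uptake after stress rising to 52% after re-injection
# is read as viable under the > 10% fill-in rule.
print(tl201_reinjection_viable(35.0, 52.0))   # True
print(tc99m_viable(42.0))                     # False
```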
Cardiac Magnetic Resonance Imaging
Cardiac magnetic resonance imaging (cardiac MRI) is a non-invasive, x-ray-free technique that uses a powerful magnetic field, radiofrequency pulses, and a computer to produce detailed images of the structure and function of the heart. Two types of cardiac MRI are used to assess myocardial viability: dobutamine stress magnetic resonance imaging (DSMR) and delayed contrast-enhanced cardiac MRI (DE-MRI). DE-MRI, the most commonly used technique in Ontario, uses gadolinium-based contrast agents to define the transmural extent of scar, which can be visualized based on the intensity of the image. Hyper-enhanced regions correspond to irreversibly damaged myocardium. As the extent of hyper-enhancement increases, the amount of scar increases and the likelihood of functional recovery decreases.
Cardiac Positron Emission Tomography
Positron emission tomography (PET) is a nuclear medicine technique used to image tissues based on the distinct ways in which normal and abnormal tissues metabolize positron-emitting radionuclides. Radionuclides are radioactive analogs of common physiological substrates such as sugars, amino acids, and free fatty acids that are used by the body. The only licensed radionuclide used in PET imaging for viability assessment is F-18 fluorodeoxyglucose (FDG).
During a PET scan, the radionuclides are injected into the body and, as they decay, they emit positively charged particles (positrons) that travel several millimetres into tissue and collide with orbiting electrons. This collision results in annihilation, in which the combined mass of the positron and electron is converted into energy in the form of two 511 keV gamma rays, which are emitted in opposite directions (180 degrees apart) and captured by an external array of detector elements in the PET gantry. Computer software is then used to convert the radiation emissions into images. The system is set up so that it detects only coincident gamma rays that arrive at the detectors within a predefined temporal window; single photons arriving without a pair, or outside the temporal window, do not activate the detectors. This allows for increased spatial and contrast resolution.
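The coincidence logic described above can be made concrete with a toy sketch: only photon events that arrive at two different detectors within the temporal window are kept as a coincidence, and unpaired singles are discarded. The window width, data layout, and event values below are hypothetical; real scanners implement this step in hardware and firmware.

```python
# Toy illustration of coincidence detection; not vendor reconstruction software.
from typing import List, Tuple

COINCIDENCE_WINDOW_NS = 6.0  # hypothetical temporal window, in nanoseconds

def find_coincidences(events: List[Tuple[float, int]]) -> List[Tuple[int, int]]:
    """`events` is a time-sorted list of (timestamp_ns, detector_id) photon hits.
    Two hits on different detectors closer together than the window form a
    coincidence (one line of response); everything else is rejected as a single."""
    pairs = []
    i = 0
    while i < len(events) - 1:
        t1, d1 = events[i]
        t2, d2 = events[i + 1]
        if (t2 - t1) < COINCIDENCE_WINDOW_NS and d1 != d2:
            pairs.append((d1, d2))  # accepted coincidence
            i += 2                  # both photons consumed
        else:
            i += 1                  # unpaired single: discarded
    return pairs

# Two photons 3 ns apart on different detectors are paired; a lone photon is not.
print(find_coincidences([(0.0, 12), (3.0, 187), (50.0, 44)]))  # [(12, 187)]
```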
Evidence-Based Analysis
Research Questions
What is the diagnostic accuracy of PET for detecting myocardial viability?
What is the prognostic value of PET viability imaging (mortality and other clinical outcomes)?
What is the contribution of PET viability imaging to treatment decision making?
What is the safety of PET viability imaging?
Literature Search
A literature search was performed on July 17, 2009 using OVID MEDLINE, MEDLINE In-Process and Other Non-Indexed Citations, EMBASE, the Cochrane Library, and the International Agency for Health Technology Assessment (INAHTA) for studies published from January 1, 2004 to July 16, 2009. Abstracts were reviewed by a single reviewer and, for those studies meeting the eligibility criteria, full-text articles were obtained. In addition, published systematic reviews and health technology assessments were reviewed for relevant studies published before 2004. Reference lists of included studies were also examined for any additional relevant studies not already identified. The quality of the body of evidence was assessed as high, moderate, low or very low according to GRADE methodology.
Inclusion Criteria
Criteria applying to diagnostic accuracy studies, prognosis studies, and physician decision-making studies:
English language full-reports
Health technology assessments, systematic reviews, meta-analyses, randomized controlled trials (RCTs), and observational studies
Patients with chronic, known CAD
PET imaging using FDG for the purpose of detecting viable myocardium
Criteria applying to diagnostic accuracy studies:
Assessment of functional recovery ≥3 months after revascularization
Raw data available to calculate sensitivity and specificity
Gold standard: prediction of global or regional functional recovery
Criteria applying to prognosis studies:
Mortality studies that compare revascularized patients with non-revascularized patients and patients with viable and non-viable myocardium
Exclusion Criteria
Criteria applying to diagnostic accuracy studies, prognosis studies, and physician decision-making studies:
PET perfusion imaging
< 20 patients
< 18 years of age
Patients with non-ischemic heart disease
Animal or phantom studies
Studies focusing on the technical aspects of PET
Studies conducted exclusively in patients with acute myocardial infarction (MI)
Duplicate publications
Criteria applying to diagnostic accuracy studies
Gold standard other than functional recovery (e.g., PET or cardiac MRI)
Assessment of functional recovery occurs before patients are revascularized
Outcomes of Interest
Diagnostic accuracy studies
Sensitivity and specificity
Positive and negative predictive values (PPV and NPV)
Positive and negative likelihood ratios
Diagnostic accuracy
Adverse events
Prognosis studies
Mortality rate
Functional status
Exercise capacity
Quality of Life
Influence of PET viability imaging on physician decision making
Statistical Methods
Pooled estimates of sensitivity and specificity were calculated using a bivariate, binomial generalized linear mixed model. Statistical significance was defined by P values less than 0.05, where “false discovery rate” adjustments were made for multiple hypothesis testing. Using the bivariate model parameters, summary receiver operating characteristic (sROC) curves were produced. The area under the sROC curve was estimated by numerical integration with a cubic spline (default option). Finally, pooled estimates of mortality rates were calculated using weighted means.
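Two of the steps above lend themselves to a brief numerical sketch: integrating a summary ROC curve with a cubic spline and pooling mortality rates with weighted means. The sketch below uses made-up points and study sizes purely for illustration; it does not reproduce the bivariate binomial generalized linear mixed model used for the pooled sensitivity and specificity.

```python
# Illustrative only; the sROC points and study data below are placeholders.
import numpy as np
from scipy.interpolate import CubicSpline

# 1) Area under a summary ROC curve by numerical integration with a cubic spline.
#    Points are (1 - specificity, sensitivity) along a hypothetical sROC curve.
fpr = np.array([0.0, 0.10, 0.25, 0.50, 0.75, 1.0])
tpr = np.array([0.0, 0.60, 0.80, 0.92, 0.97, 1.0])
sroc = CubicSpline(fpr, tpr)
auc = sroc.integrate(0.0, 1.0)
print(f"sROC AUC ~ {auc:.3f}")

# 2) Pooled mortality rate as a weighted mean, weighting each study by its size.
mortality_rates = np.array([0.08, 0.12, 0.05])   # hypothetical per-study rates
sample_sizes = np.array([150, 90, 220])          # hypothetical study sizes
pooled = np.average(mortality_rates, weights=sample_sizes)
print(f"Pooled mortality ~ {pooled:.3f}")
```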
Quality of Evidence
The quality of evidence assigned to individual diagnostic studies was determined using the QUADAS tool, a list of 14 questions that address internal and external validity, bias, and generalizability of diagnostic accuracy studies. Each question is scored as “yes”, “no”, or “unclear”. The quality of the body of evidence was then assessed as high, moderate, low, or very low according to the GRADE Working Group criteria. The following definitions of quality were used in grading the quality of the evidence:
Summary of Findings
A total of 40 studies met the inclusion criteria and were included in this review: one health technology assessment, two systematic reviews, 22 observational diagnostic accuracy studies, and 16 prognosis studies. The available PET viability imaging literature addresses two questions: 1) what is the diagnostic accuracy of PET imaging for the assessment of myocardial viability; and 2) what is the prognostic value of PET viability imaging. The diagnostic accuracy studies use regional or global functional recovery as the reference standard to determine the sensitivity and specificity of the technology. While regional functional recovery was most commonly used in the studies, global functional recovery is more important clinically. Due to differences in reporting and thresholds, however, it was not possible to pool the global functional recovery results.
Functional recovery, however, is a surrogate reference standard for viability, and consequently the diagnostic accuracy results may underestimate the specificity of PET viability imaging. For example, regional functional recovery may take up to a year after revascularization depending on whether the tissue is stunned or hibernating, while many of the studies assessed regional functional recovery only 3 to 6 months after revascularization. In addition, viable tissue may not recover function after revascularization because of graft occlusion or re-stenosis. Both issues may lead to apparent false positives and thus underestimate specificity. Given these limitations, the prognostic value of PET viability imaging provides the most direct and clinically useful information. This body of literature provides evidence on the comparative effectiveness of revascularization and medical therapy in patients with viable myocardium and patients without viable myocardium. In addition, the literature compares the impact of PET-guided treatment decision making with SPECT-guided or standard care treatment decision making on survival and cardiac events (including cardiac mortality, MI, hospital stays, unintended revascularization, etc.).
The main findings from the diagnostic accuracy and prognosis evidence are:
Based on the available very low quality evidence, PET is a useful imaging modality for the detection of viable myocardium. The pooled estimates of sensitivity and specificity for the prediction of regional functional recovery as a surrogate for viable myocardium are 91.5% (95% CI, 88.2% – 94.9%) and 67.8% (95% CI, 55.8% – 79.7%), respectively.
Based on the available very low quality evidence, an indirect comparison of pooled estimates of sensitivity and specificity showed no statistically significant difference in the diagnostic accuracy of PET viability imaging for regional functional recovery using perfusion/metabolism mismatch (FDG PET plus either a PET or SPECT perfusion tracer) compared with metabolism imaging with FDG PET alone.
FDG PET + PET perfusion metabolism mismatch: sensitivity, 89.9% (83.5% – 96.4%); specificity, 78.3% (66.3% – 90.2%);
FDG PET + SPECT perfusion metabolism mismatch: sensitivity, 87.2% (78.0% – 96.4%); specificity, 67.1% (48.3% – 85.9%);
FDG PET metabolism: sensitivity, 94.5% (91.0% – 98.0%); specificity, 66.8% (53.2% – 80.3%).
Given these findings, further higher quality studies are required to determine the comparative effectiveness and clinical utility of metabolism and perfusion/metabolism mismatch viability imaging with PET.
Based on very low quality of evidence, patients with viable myocardium who are revascularized have a lower mortality rate than those who are treated with medical therapy. Given the quality of evidence, however, this estimate of effect is uncertain so further higher quality studies in this area should be undertaken to determine the presence and magnitude of the effect.
While revascularization may reduce mortality in patients with viable myocardium, current moderate quality RCT evidence suggests that PET-guided treatment decisions do not result in statistically significant reductions in mortality compared with treatment decisions based on SPECT or standard care protocols. The PARR II trial by Beanlands et al. found a significant reduction in cardiac events (a composite outcome that includes cardiac death, MI, or hospital stay for cardiac cause) between the adherence-to-PET-recommendations subgroup and the standard care group (hazard ratio, 0.62; 95% confidence interval, 0.42–0.93; P = .019); however, this post-hoc subgroup analysis is hypothesis-generating, and higher quality studies are required to substantiate these findings.
The use of FDG PET plus SPECT to determine perfusion/metabolism mismatch to assess myocardial viability increases the radiation exposure compared with FDG PET imaging alone or FDG PET combined with PET perfusion imaging (total-body effective dose: FDG PET, 7 mSv; FDG PET plus PET perfusion tracer, 7.6–7.7 mSv; FDG PET plus SPECT perfusion tracer, 16–25 mSv). While the precise risk attributed to this increased exposure is unknown, there is increasing concern regarding lifetime cumulative exposure to radiation-based imaging modalities, although the incremental lifetime risk for patients who are older or have a poor prognosis may not be as great as for healthy individuals.
PMCID: PMC3377573  PMID: 23074393
8.  Utilization of DXA Bone Mineral Densitometry in Ontario 
Executive Summary
Issue
Systematic reviews and analyses of administrative data were performed to determine the appropriate use of bone mineral density (BMD) assessments using dual energy x-ray absorptiometry (DXA), and the associated trends in wrist and hip fractures in Ontario.
Background
Dual Energy X-ray Absorptiometry Bone Mineral Density Assessment
Dual energy x-ray absorptiometry bone densitometers measure bone density based on the differential absorption of 2 x-ray beams by bone and soft tissues. DXA is the gold standard for detecting and diagnosing osteoporosis, a systemic disease characterized by low bone density and altered bone structure, resulting in low bone strength and increased risk of fractures. The test is fast (approximately 10 minutes) and accurate (exceeding 90% at the hip), with low radiation exposure (1/3 to 1/5 of that from a chest x-ray). DXA densitometers are licensed as Class 3 medical devices in Canada. The World Health Organization has established criteria for osteoporosis and osteopenia based on DXA BMD measurements: osteoporosis is defined as a BMD more than 2.5 standard deviations below the mean BMD for normal young adults (i.e., T-score < –2.5), while osteopenia is defined as a BMD more than 1 but no more than 2.5 standard deviations below the mean for normal young adults (i.e., –2.5 ≤ T-score < –1). DXA densitometry is presently an insured health service in Ontario.
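As a concrete reading of the WHO thresholds above, the sketch below computes a T-score from a measured BMD and a young-adult reference and assigns a category. The reference mean and standard deviation are hypothetical; real reference values are site- and device-specific.

```python
# Minimal sketch of the WHO DXA classification described above. The young-adult
# reference mean and SD below are hypothetical placeholders.

def t_score(bmd_g_cm2, young_adult_mean, young_adult_sd):
    """T-score: number of SDs the measured BMD lies from the young-adult mean."""
    return (bmd_g_cm2 - young_adult_mean) / young_adult_sd

def who_category(t):
    """Categories as stated in the text: osteoporosis if T < -2.5;
    osteopenia if -2.5 <= T < -1; otherwise normal."""
    if t < -2.5:
        return "osteoporosis"
    if t < -1.0:
        return "osteopenia"
    return "normal"

# Hypothetical hip measurement of 0.70 g/cm^2 against a reference of 0.94 (SD 0.12).
t = t_score(0.70, 0.94, 0.12)
print(round(t, 2), who_category(t))   # -2.0 osteopenia
```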
Clinical Need
 
Burden of Disease
The Canadian Multicentre Osteoporosis Study (CaMos) found that 16% of Canadian women and 6.6% of Canadian men have osteoporosis based on the WHO criteria, with prevalence increasing with age. Osteopenia was found in 49.6% of Canadian women and 39% of Canadian men. In Ontario, it is estimated that nearly 530,000 Ontarians have some degree of osteoporosis. Osteoporosis-related fragility fractures occur most often in the wrist, femur, and pelvis. These fractures, particularly those in the hip, are associated with increased mortality and decreased functional capacity and quality of life. A Canadian study showed that at 1 year after a hip fracture, the mortality rate was 20%; another 20% of patients required institutional care, 40% were unable to walk independently, and health-related quality of life was lower due to factors such as pain, decreased mobility, and decreased ability to care for oneself. The cost of osteoporosis and osteoporotic fractures in Canada was estimated to be $1.3 billion in 1993.
Guidelines for Bone Mineral Density Testing
With 2 exceptions, almost all guidelines address only women. None of the guidelines recommend blanket population-based BMD testing. Instead, all guidelines recommend BMD testing in people at risk of osteoporosis, predominantly women aged 65 years or older. For women under 65 years of age, BMD testing is recommended only if one major or two minor risk factors for osteoporosis exist. Osteoporosis Canada did not restrict its recommendations to women, and thus their guidelines apply to both sexes. Major risk factors are age greater than or equal to 65 years, a history of previous fractures, family history (especially parental history) of fracture, and medication or disease conditions that affect bone metabolism (such as long-term glucocorticoid therapy). Minor risk factors include low body mass index, low calcium intake, alcohol consumption, and smoking.
Current Funding for Bone Mineral Density Testing
The Ontario Health Insurance Program (OHIP) Schedule presently reimburses DXA BMD at the hip and spine. Measurements at both sites are required if feasible. Patients at low risk of accelerated bone loss are limited to one BMD test within any 24-month period, but there are no restrictions on people at high risk. The total fee including the professional and technical components for a test involving 2 or more sites is $106.00 (Cdn).
Method of Review
This review consisted of 2 parts. The first part was an analysis of Ontario administrative data relating to DXA BMD, wrist and hip fractures, and use of antiresorptive drugs in people aged 65 years and older. The Institute for Clinical Evaluative Sciences extracted data from the OHIP claims database, the Canadian Institute for Health Information hospital discharge abstract database, the National Ambulatory Care Reporting System, and the Ontario Drug Benefit database using OHIP and ICD-10 codes. The data was analyzed to examine the trends in DXA BMD use from 1992 to 2005, and to identify areas requiring improvement.
The second part included systematic reviews and analyses of evidence relating to issues identified in the analyses of utilization data. Altogether, 8 reviews and qualitative syntheses were performed, consisting of 28 published systematic reviews and/or meta-analyses, 34 randomized controlled trials, and 63 observational studies.
Findings of Utilization Analysis
Analysis of administrative data showed a 10-fold increase in the number of BMD tests in Ontario between 1993 and 2005.
OHIP claims for BMD tests are presently increasing at a rate of 6 to 7% per year. Approximately 500,000 tests were performed in 2005/06 with an age-adjusted rate of 8,600 tests per 100,000 population.
Women accounted for 90 % of all BMD tests performed in the province.
In 2005/06, there was a 2-fold variation in the rate of DXA BMD tests across Local Health Integration Networks, but a 10-fold variation between the county with the highest rate (Toronto) and that with the lowest rate (Kenora). The analysis also showed that:
With the increased use of BMD, there was a concomitant increase in the use of antiresorptive drugs (as shown in people 65 years and older) and a decrease in the rate of hip fractures in people age 50 years and older.
Repeat BMD made up approximately 41% of all tests. Most of the people (>90%) who had annual BMD tests in a 2-year or 3-year period were coded as being at high risk for osteoporosis.
18% (20,865) of the people who had a repeat BMD test within a 24-month period and 34% (98,058) of the people who had one BMD test in a 3-year period were under 65 years of age, had no fracture in the year, and were coded as low-risk.
Only 19% of people age greater than 65 years underwent BMD testing and 41% received osteoporosis treatment during the year following a fracture.
Men accounted for 24% of all hip fractures and 21% of all wrist fractures, but only 10% of BMD tests. The rates of BMD tests and treatment in men after a fracture were only half of those in women.
In both men and women, the rate of hip and wrist fractures mainly increased after age 65 with the sharpest increase occurring after age 80 years.
Findings of Systematic Review and Analysis
Serial Bone Mineral Density Testing for People Not Receiving Osteoporosis Treatment
A systematic review showed that the mean rate of bone loss in people not receiving osteoporosis treatment (including postmenopausal women) is generally less than 1% per year. Higher rates of bone loss were reported for people with disease conditions or on medications that affect bone metabolism. To be considered a genuine biological change, the change in BMD between serial measurements must exceed the least significant change (variability) of the test, which ranges from 2.77% to 8% for precisions of 1% to 3%, respectively. Progression in BMD was analyzed using different baseline BMD values, rates of bone loss, precisions, and BMD values for initiating treatment. The analyses showed that serial BMD measurement every 24 months (as per OHIP policy for low-risk individuals) is not necessary for people with no major risk factors for osteoporosis, provided that the baseline BMD is normal (T-score ≥ –1) and the rate of bone loss is less than or equal to 1% per year. For someone with a normal baseline BMD and a rate of bone loss of less than 1% per year, the change in BMD is not likely to exceed the least significant change (even at 1% precision) in less than 3 years after the baseline test, and is not likely to drop to a BMD level that requires initiation of treatment in less than 16 years after the baseline test.
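The timing argument above can be reproduced with a few lines of arithmetic: the least significant change at 95% confidence is conventionally 2.77 times the precision error, and dividing it by the annual rate of loss gives the earliest interval at which a real change could be detected. The assumption below that one standard deviation of young-adult BMD is roughly 12% of the mean is illustrative only, chosen so the second figure lands near the report's 16-year estimate.

```python
# Worked sketch of the serial-testing arithmetic described above.

def least_significant_change(precision_pct):
    """LSC (95% confidence) = 2.77 x precision error of the test."""
    return 2.77 * precision_pct

def years_to_exceed_lsc(precision_pct, annual_loss_pct):
    """How long until true bone loss exceeds measurement noise."""
    return least_significant_change(precision_pct) / annual_loss_pct

def years_to_treatment_threshold(baseline_t, threshold_t,
                                 annual_loss_pct, sd_as_pct_of_mean=12.0):
    """Years for the T-score to fall from baseline to a treatment threshold,
    assuming a constant loss rate expressed as % of the young-adult mean
    (the 12% figure is a hypothetical assumption, not from the report)."""
    drop_in_pct = (baseline_t - threshold_t) * sd_as_pct_of_mean
    return drop_in_pct / annual_loss_pct

print(round(years_to_exceed_lsc(1.0, 1.0), 1))                  # ~2.8 years
print(round(years_to_treatment_threshold(-1.0, -2.5, 1.0), 1))  # ~18 years
```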
Serial Bone Mineral Density Testing in People Receiving Osteoporosis Therapy
Seven published meta-analyses of randomized controlled trials (RCTs) and 2 recent RCTs on BMD monitoring during osteoporosis therapy showed that although higher increases in BMD were generally associated with reduced risk of fracture, the change in BMD explained only a small percentage of the fracture risk reduction.
Studies showed that some people with small or no increase in BMD during treatment experienced significant fracture risk reduction, indicating that other factors such as improved bone microarchitecture might have contributed to fracture risk reduction.
There is conflicting evidence relating to the role of BMD testing in improving patient compliance with osteoporosis therapy.
Even though BMD may not be a perfect surrogate for reduction in fracture risk when monitoring responses to osteoporosis therapy, experts advised that it is still the only reliable test available for this purpose.
A systematic review conducted by the Medical Advisory Secretariat showed that the magnitude of increases in BMD during osteoporosis drug therapy varied among medications. Although most of the studies yielded mean percentage increases in BMD from baseline that did not exceed the least significant change for a 2% precision after 1 year of treatment, there were some exceptions.
Bone Mineral Density Testing and Treatment After a Fragility Fracture
A review of 3 published pooled analyses of observational studies and 12 prospective population-based observational studies showed that the presence of any prevalent fracture increases the relative risk for future fractures by approximately 2-fold or more. A review of 10 systematic reviews of RCTs and 3 additional RCTs showed that therapy with antiresorptive drugs significantly reduced the risk of vertebral fractures by 40 to 50% in postmenopausal osteoporotic women and osteoporotic men, and 2 antiresorptive drugs also reduced the risk of nonvertebral fractures by 30 to 50%. Evidence from observational studies in Canada and other jurisdictions suggests that patients who had undergone BMD measurements, particularly if a diagnosis of osteoporosis is made, were more likely to be given pharmacologic bone-sparing therapy. Despite these findings, the rate of BMD investigation and osteoporosis treatment after a fracture remained low (<20%) in Ontario as well as in other jurisdictions.
Bone Mineral Density Testing in Men
There are presently no specific Canadian guidelines for BMD screening in men. A review of the literature suggests that risk factors for fracture and the rate of vertebral deformity are similar for men and women, but the mortality rate after a hip fracture is higher in men compared with women. Two bisphosphonates had been shown to reduce the risk of vertebral and hip fractures in men. However, BMD testing and osteoporosis treatment were proportionately low in Ontario men in general, and particularly after a fracture, even though men accounted for 25% of the hip and wrist fractures. The Ontario data also showed that the rates of wrist fracture and hip fracture in men rose sharply in the 75- to 80-year age group.
Ontario-Based Economic Analysis
The economic analysis focused on analyzing the economic impact of decreasing future hip fractures by increasing the rate of BMD testing in men and women age greater than or equal to 65 years following a hip or wrist fracture. A decision analysis showed the above strategy, especially when enhanced by improved reporting of BMD tests, to be cost-effective, resulting in a cost-effectiveness ratio ranging from $2,285 (Cdn) per fracture avoided (worst-case scenario) to $1,981 (Cdn) per fracture avoided (best-case scenario). A budget impact analysis estimated that shifting utilization of BMD testing from the low risk population to high risk populations within Ontario would result in a saving of $0.85 million to $1.5 million (Cdn) to the health system. The potential net saving was estimated at $1.2 million to $5 million (Cdn) when the downstream cost-avoidance due to prevention of future hip fractures was factored into the analysis.
Other Factors for Consideration
There is a lack of standardization for BMD testing in Ontario. Two different standards are presently being used, and experts suggest that variability in results from different facilities may lead to unnecessary testing. There is also no requirement for standardized equipment, procedures, or reporting format. The current reimbursement policy for BMD testing encourages serial testing in people at low risk of accelerated bone loss; this review showed that testing every 24 months is not necessary in all such cases. The lack of a database to collect clinical data on BMD testing makes it difficult to evaluate the clinical profiles of patients tested and the outcomes of the BMD tests. There are ministry initiatives in progress under the Osteoporosis Program to address the development of a mandatory standardized requisition form for BMD tests to facilitate data collection and clinical decision-making. Work is also underway to develop guidelines for BMD testing in men and in perimenopausal women.
Conclusion
Increased use of BMD in Ontario since 1996 appears to be associated with increased use of antiresorptive medication and a decrease in hip and wrist fractures.
Data suggest that as many as 20% (98,000) of the DXA BMD tests in Ontario in 2005/06 were performed in people aged less than 65 years, with no fracture in the current year, and coded as being at low risk for accelerated bone loss; this is not consistent with current guidelines. Even though some of these people might have been incorrectly coded as low-risk, the number of tests in people truly at low risk could still be substantial.
Approximately 4% (21,000) of the DXA BMD tests in 2005/06 were repeat BMDs in low-risk individuals within a 24-month period. Even though this is in compliance with current OHIP reimbursement policies, evidence showed that serial BMD testing every 24 months is not necessary in individuals without major risk factors for fractures, provided that the baseline BMD is normal (T-score ≥ –1). In this population, BMD measurements may be repeated 3 to 5 years after the baseline test to establish the rate of bone loss, and further serial BMD tests may not be necessary for another 7 to 10 years if the rate of bone loss is no more than 1% per year. Precision of the test needs to be considered when interpreting serial BMD results.
Although changes in BMD may not be the perfect surrogate for reduction in fracture risk as a measure of response to osteoporosis treatment, experts advised that it is presently the only reliable test for monitoring response to treatment and to help motivate patients to continue treatment. Patients should not discontinue treatment if there is no increase in BMD after the first year of treatment. Lack of response or bone loss during treatment should prompt the physician to examine whether the patient is taking the medication appropriately.
Men and women who have had a fragility fracture of the hip, spine, wrist, or shoulder are at increased risk of a future fracture, but this population is presently under-investigated and under-treated. Additional efforts have to be made to communicate to physicians (particularly orthopaedic surgeons and family physicians) and the public the need for a BMD test after a fracture, and for initiating treatment if low BMD is found.
Men had a disproportionately low rate of BMD tests and osteoporosis treatment, especially after a fracture. Evidence and fracture data showed that the risk of hip and wrist fractures in men rises sharply at age 70 years.
Some counties had BMD utilization rates that were only 10% of that of the county with the highest utilization. The reasons for low utilization need to be explored and addressed.
Initiatives such as aligning reimbursement policy with current guidelines, developing specific guidelines for BMD testing in men and perimenopausal women, improving BMD reports to assist in clinical decision making, developing a registry to track BMD tests, improving access to BMD tests in remote/rural counties, establishing mechanisms to alert family physicians of fractures, and educating physicians and the public, will improve the appropriate utilization of BMD tests, and further decrease the rate of fractures in Ontario. Some of these initiatives such as developing guidelines for perimenopausal women and men, and developing a standardized requisition form for BMD testing, are currently in progress under the Ontario Osteoporosis Strategy.
PMCID: PMC3379167  PMID: 23074491
9.  Influenza and Pneumococcal Vaccinations for Patients With Chronic Obstructive Pulmonary Disease (COPD) 
Executive Summary
In July 2010, the Medical Advisory Secretariat (MAS) began work on a Chronic Obstructive Pulmonary Disease (COPD) evidentiary framework, an evidence-based review of the literature surrounding treatment strategies for patients with COPD. This project emerged from a request by the Health System Strategy Division of the Ministry of Health and Long-Term Care that MAS provide them with an evidentiary platform on the effectiveness and cost-effectiveness of COPD interventions.
After an initial review of health technology assessments and systematic reviews of COPD literature, and consultation with experts, MAS identified the following topics for analysis: vaccinations (influenza and pneumococcal), smoking cessation, multidisciplinary care, pulmonary rehabilitation, long-term oxygen therapy, noninvasive positive pressure ventilation for acute and chronic respiratory failure, hospital-at-home for acute exacerbations of COPD, and telehealth (including telemonitoring and telephone support). Evidence-based analyses were prepared for each of these topics. For each technology, an economic analysis was also completed where appropriate. In addition, a review of the qualitative literature on patient, caregiver, and provider perspectives on living and dying with COPD was conducted, as were reviews of the qualitative literature on each of the technologies included in these analyses.
The Chronic Obstructive Pulmonary Disease Mega-Analysis series is made up of the following reports, which can be publicly accessed at the MAS website at: http://www.hqontario.ca/en/mas/mas_ohtas_mn.html.
Chronic Obstructive Pulmonary Disease (COPD) Evidentiary Framework
Influenza and Pneumococcal Vaccinations for Patients With Chronic Obstructive Pulmonary Disease (COPD): An Evidence-Based Analysis
Smoking Cessation for Patients With Chronic Obstructive Pulmonary Disease (COPD): An Evidence-Based Analysis
Community-Based Multidisciplinary Care for Patients With Stable Chronic Obstructive Pulmonary Disease (COPD): An Evidence-Based Analysis
Pulmonary Rehabilitation for Patients With Chronic Obstructive Pulmonary Disease (COPD): An Evidence-Based Analysis
Long-term Oxygen Therapy for Patients With Chronic Obstructive Pulmonary Disease (COPD): An Evidence-Based Analysis
Noninvasive Positive Pressure Ventilation for Acute Respiratory Failure Patients With Chronic Obstructive Pulmonary Disease (COPD): An Evidence-Based Analysis
Noninvasive Positive Pressure Ventilation for Chronic Respiratory Failure Patients With Stable Chronic Obstructive Pulmonary Disease (COPD): An Evidence-Based Analysis
Hospital-at-Home Programs for Patients with Acute Exacerbations of Chronic Obstructive Pulmonary Disease (COPD): An Evidence-Based Analysis
Home Telehealth for Patients With Chronic Obstructive Pulmonary Disease (COPD): An Evidence-Based Analysis
Cost-Effectiveness of Interventions for Chronic Obstructive Pulmonary Disease Using an Ontario Policy Model
Experiences of Living and Dying With COPD: A Systematic Review and Synthesis of the Qualitative Empirical Literature
For more information on the qualitative review, please contact Mita Giacomini at: http://fhs.mcmaster.ca/ceb/faculty_member_giacomini.htm.
For more information on the economic analysis, please visit the PATH website: http://www.path-hta.ca/About-Us/Contact-Us.aspx.
The Toronto Health Economics and Technology Assessment (THETA) collaborative has produced an associated report on patient preference for mechanical ventilation. For more information, please visit the THETA website: http://theta.utoronto.ca/static/contact.
Objective
The objective of this analysis was to determine the effectiveness of the influenza vaccination and the pneumococcal vaccination in patients with chronic obstructive pulmonary disease (COPD) in reducing the incidence of influenza-related illness or pneumococcal pneumonia.
Clinical Need: Condition and Target Population
Influenza Disease
Influenza is a global threat, and it is believed that the risk of an influenza pandemic still exists. Three pandemics occurred in the 20th century, resulting in millions of deaths worldwide. A fourth pandemic, of H1N1 influenza, occurred in 2009 and affected countries on all continents.
Rates of serious illness due to influenza viruses are high among older people and patients with chronic conditions such as COPD. The influenza viruses spread from person to person through sneezing and coughing. Infected persons can transfer the virus even a day before their symptoms start. The incubation period is 1 to 4 days with a mean of 2 days. Symptoms of influenza infection include fever, shivering, dry cough, headache, runny or stuffy nose, muscle ache, and sore throat. Other symptoms such as nausea, vomiting, and diarrhea can occur.
Complications of influenza infection include viral pneumonia, secondary bacterial pneumonia, and other secondary bacterial infections such as bronchitis, sinusitis, and otitis media. In viral pneumonia, patients develop acute fever and dyspnea, and may further show signs and symptoms of hypoxia. The organisms most commonly involved in secondary bacterial pneumonia are Staphylococcus aureus and Haemophilus influenzae. The incidence of secondary bacterial pneumonia is highest in the elderly and in those with underlying conditions such as congestive heart disease and chronic bronchitis.
Healthy people usually recover within one week but in very young or very old people and those with underlying medical conditions such as COPD, heart disease, diabetes, and cancer, influenza is associated with higher risks and may lead to hospitalization and in some cases death. The cause of hospitalization or death in many cases is viral pneumonia or secondary bacterial pneumonia. Influenza infection can lead to the exacerbation of COPD or an underlying heart disease.
Streptococcal Pneumonia
Streptococcus pneumoniae, also known as pneumococcus, is an encapsulated Gram-positive bacterium that often colonizes the nasopharynx of healthy children and adults. Pneumococcus can be transmitted from person to person during close contact. The bacteria can cause illnesses such as otitis media and sinusitis, and may become more aggressive and affect other areas of the body such as the lungs, brain, joints, and blood stream. More severe infections caused by pneumococcus include pneumonia, bacterial sepsis, meningitis, peritonitis, arthritis, osteomyelitis, and, in rare cases, endocarditis and pericarditis.
People with impaired immune systems are susceptible to pneumococcal infection. Young children, elderly people, patients with underlying medical conditions including chronic lung or heart disease, human immunodeficiency virus (HIV) infection, sickle cell disease, and people who have undergone a splenectomy are at a higher risk for acquiring pneumococcal pneumonia.
Technology
Influenza and Pneumococcal Vaccines
Trivalent Influenza Vaccines in Canada
In Canada, 5 trivalent influenza vaccines are currently authorized for use by injection. Four of these are formulated for intramuscular use and the fifth product (Intanza®) is formulated for intradermal use.
The 4 vaccines for intramuscular use are:
Fluviral (GlaxoSmithKline), split virus, inactivated vaccine, for use in adults and children ≥ 6 months;
Vaxigrip (Sanofi Pasteur), split virus inactivated vaccine, for use in adults and children ≥ 6 months;
Agriflu (Novartis), surface antigen inactivated vaccine, for use in adults and children ≥ 6 months; and
Influvac (Abbott), surface antigen inactivated vaccine, for use in persons ≥ 18 years of age.
FluMist is a live attenuated virus vaccine in the form of an intranasal spray for persons aged 2 to 59 years. Immunization with currently available influenza vaccines is not recommended for infants less than 6 months of age.
Pneumococcal Vaccine
Pneumococcal polysaccharide vaccines were developed more than 50 years ago and have progressed from 2-valent vaccines to the current 23-valent vaccines to prevent diseases caused by 23 of the most common serotypes of S pneumoniae. Canada-wide estimates suggest that approximately 90% of cases of pneumococcal bacteremia and meningitis are caused by these 23 serotypes. Health Canada has issued licenses for 2 types of 23-valent vaccines to be injected intramuscularly or subcutaneously:
Pneumovax 23® (Merck & Co Inc. Whitehouse Station, NJ, USA), and
Pneumo 23® (Sanofi Pasteur SA, Lyon, France) for persons 2 years of age and older.
Other types of pneumococcal vaccines licensed in Canada are for pediatric use. Pneumococcal polysaccharide vaccine is injected only once; a second dose is given only under certain conditions.
Research Questions
What is the effectiveness of the influenza vaccination and the pneumococcal vaccination compared with no vaccination in COPD patients?
What is the safety of these 2 vaccines in COPD patients?
What is the budget impact and cost-effectiveness of these 2 vaccines in COPD patients?
Research Methods
Literature search
Search Strategy
A literature search was performed on July 5, 2010 using OVID MEDLINE, MEDLINE In-Process and Other Non-Indexed Citations, EMBASE, the Cumulative Index to Nursing & Allied Health Literature (CINAHL), the Cochrane Library, and the International Agency for Health Technology Assessment (INAHTA) for studies published from January 1, 2000 to July 5, 2010. The search was updated monthly through the AutoAlert function up to January 31, 2011. Abstracts were reviewed by a single reviewer and, for those studies meeting the eligibility criteria, full-text articles were obtained. Articles of unknown eligibility were reviewed with a second clinical epidemiologist and then a group of epidemiologists until consensus was established. Data extraction was carried out by the author.
Inclusion Criteria
studies comparing clinical efficacy of the influenza vaccine or the pneumococcal vaccine with no vaccine or placebo;
randomized controlled trials published between January 1, 2000 and January 31, 2011;
studies including patients with COPD only;
studies investigating the efficacy of types of vaccines approved by Health Canada;
English language studies.
Exclusion Criteria
non-randomized controlled trials;
studies investigating vaccines for other diseases;
studies comparing different variations of vaccines;
studies in which patients received 2 or more types of vaccines;
studies comparing different routes of administering vaccines;
studies not reporting clinical efficacy of the vaccine or reporting immune response only;
studies investigating the efficacy of vaccines not approved by Health Canada.
Outcomes of Interest
Primary Outcomes
Influenza vaccination: Episodes of acute respiratory illness due to the influenza virus.
Pneumococcal vaccination: Time to the first episode of community-acquired pneumonia either due to pneumococcus or of unknown etiology.
Secondary Outcomes
rate of hospitalization and mechanical ventilation
mortality rate
adverse events
Quality of Evidence
The quality of each included study was assessed taking into consideration allocation concealment, randomization, blinding, power/sample size, withdrawals/dropouts, and intention-to-treat analyses. The quality of the body of evidence was assessed as high, moderate, low, or very low according to the GRADE Working Group criteria. The following definitions of quality were used in grading the quality of the evidence:
Summary of Efficacy of the Influenza Vaccination in Immunocompetent Patients With COPD
Clinical Effectiveness
The influenza vaccination was associated with significantly fewer episodes of influenza-related acute respiratory illness (ARI). The incidence density of influenza-related ARI was:
All patients: vaccine group: (total of 4 cases) = 6.8 episodes per 100 person-years; placebo group: (total of 17 cases) = 28.1 episodes per 100 person-years, (relative risk [RR], 0.2; 95% confidence interval [CI], 0.06−0.70; P = 0.005).
Patients with severe airflow obstruction (forced expiratory volume in 1 second [FEV1] < 50% predicted): vaccine group: (total of 1 case) = 4.6 episodes per 100 person-years; placebo group: (total of 7 cases) = 31.2 episodes per 100 person-years, (RR, 0.1; 95% CI, 0.003−1.1; P = 0.04).
Patients with moderate airflow obstruction (FEV1 50%−69% predicted): vaccine group: (total of 2 cases) = 13.2 episodes per 100 person-years; placebo group: (total of 4 cases) = 23.8 episodes per 100 person-years, (RR, 0.5; 95% CI, 0.05−3.8; P = 0.5).
Patients with mild airflow obstruction (FEV1 ≥ 70% predicted): vaccine group: (total of 1 case) = 4.5 episodes per 100 person-years; placebo group: (total of 6 cases) = 28.2 episodes per 100 person-years, (RR, 0.2; 95% CI, 0.003−1.3; P = 0.06).
The Kaplan-Meier survival analysis showed a significant difference between the vaccinated group and the placebo group in the probability of not acquiring influenza-related ARI (log-rank test P value = 0.003). Overall, the vaccine effectiveness was 76%. For mild, moderate, and severe COPD, the vaccine effectiveness was 84%, 45%, and 85%, respectively.
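The 76% figure follows directly from the incidence densities reported above, as the short sketch below illustrates; the person-year denominators are back-calculated from the reported case counts and rates, since they are not given explicitly in this summary.

```python
# Sketch of the arithmetic behind the figures above: incidence density is
# cases per 100 person-years, the relative risk (RR) is the ratio of the two
# incidence densities, and vaccine effectiveness is (1 - RR) x 100.

def incidence_density(cases, person_years):
    """Episodes per 100 person-years of follow-up."""
    return 100.0 * cases / person_years

def vaccine_effectiveness(rate_vaccine, rate_placebo):
    """VE (%) = (1 - RR) x 100, with RR = rate in vaccinated / rate in placebo."""
    rr = rate_vaccine / rate_placebo
    return (1.0 - rr) * 100.0

# All-patient figures quoted above: 4 cases at 6.8/100 py vs 17 cases at 28.1/100 py.
py_vaccine = 100.0 * 4 / 6.8     # ~58.8 person-years (derived, not reported)
py_placebo = 100.0 * 17 / 28.1   # ~60.5 person-years (derived, not reported)
rate_v = incidence_density(4, py_vaccine)
rate_p = incidence_density(17, py_placebo)
print(round(vaccine_effectiveness(rate_v, rate_p)))  # ~76, matching the reported 76%
```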
With respect to hospitalization, fewer patients in the vaccine group compared with the placebo group were hospitalized due to influenza-related ARIs, although these differences were not statistically significant. The incidence density of influenza-related ARIs that required hospitalization was 3.4 episodes per 100 person-years in the vaccine group and 8.3 episodes per 100 person-years in the placebo group (RR, 0.4; 95% CI, 0.04−2.5; P = 0.3; log-rank test P value = 0.2). Also, no statistically significant differences between the 2 groups were observed for the 3 categories of severity of COPD.
Fewer patients in the vaccine group compared with the placebo group required mechanical ventilation due to influenza-related ARIs. However, these differences were not statistically significant. The incidence density of influenza-related ARIs that required mechanical ventilation was 0 episodes per 100 person-years in the vaccine group and 5 episodes per 100 person-years in the placebo group (RR, 0.0; 95% CI, 0−2.5; P = 0.1; log-rank test P value = 0.4). In addition, no statistically significant differences between the 2 groups were observed for the 3 categories of severity of COPD. The effectiveness of the influenza vaccine in preventing influenza-related ARIs and influenza-related hospitalization was not related to age, sex, severity of COPD, smoking status, or comorbid diseases.
Safety
Overall, significantly more patients in the vaccine group than the placebo group experienced local adverse reactions (vaccine: 17 [27%], placebo: 4 [6%]; P = 0.002). Significantly more patients in the vaccine group than the placebo group experienced swelling (vaccine 4, placebo 0; P = 0.04) and itching (vaccine 4, placebo 0; P = 0.04). Systemic reactions included headache, myalgia, fever, and skin rash and there were no significant differences between the 2 groups for these reactions (vaccine: 47 [76%], placebo: 51 [81%], P = 0.5).
With respect to lung function, dyspneic symptoms, and exercise capacity, there were no significant differences between the 2 groups at 1 week and at 4 weeks in: FEV1, maximum inspiratory pressure at residual volume, oxygen saturation level of arterial blood, visual analogue scale for dyspneic symptoms, and the 6 Minute Walking Test for exercise capacity.
There was no significant difference between the 2 groups with regard to the probability of not acquiring total ARIs (influenza-related and/or non-influenza-related); (log-rank test P value = 0.6).
Summary of Efficacy of the Pneumococcal Vaccination in Immunocompetent Patients With COPD
Clinical Effectiveness
The Kaplan-Meier survival analysis showed no significant differences between the group receiving the pneumococcal vaccination and the control group for time to the first episode of community-acquired pneumonia due to pneumococcus or of unknown etiology (log-rank test 1.15; P = 0.28). Overall, vaccine efficacy was 24% (95% CI, −24 to 54; P = 0.33).
With respect to the incidence of pneumococcal pneumonia, the Kaplan-Meier survival analysis showed a significant difference between the 2 groups (vaccine: 0/298; control: 5/298; log-rank test 5.03; P = 0.03).
Hospital admission rates and median length of hospital stays were lower in the vaccine group, but the difference was not statistically significant. The mortality rate was not different between the 2 groups.
Subgroup Analysis
The Kaplan-Meier survival analysis showed significant differences between the vaccine and control groups for pneumonia due to pneumococcus or of unknown etiology when the data were analyzed by subgroup (age < 65 years, and severe airflow obstruction, FEV1 < 40% predicted). The accumulated percentage of patients remaining free of pneumonia (due to pneumococcus or of unknown etiology) over time was significantly higher in the vaccine group than in the control group among patients younger than 65 years of age (log-rank test 6.68; P = 0.0097) and among patients with an FEV1 less than 40% predicted (log-rank test 3.85; P = 0.0498).
Vaccine effectiveness was 76% (95% CI, 20−93; P = 0.01) for patients who were less than 65 years of age and −14% (95% CI, −107 to 38; P = 0.8) for those who were 65 years of age or older. Vaccine effectiveness for patients with a FEV1 less than 40% predicted and FEV1 greater than or equal to 40% predicted was 48% (95% CI, −7 to 80; P = 0.08) and −11% (95% CI, −132 to 47; P = 0.95), respectively. For patients who were less than 65 years of age (FEV1 < 40% predicted), vaccine effectiveness was 91% (95% CI, 35−99; P = 0.002).
Cox modelling showed that the effectiveness of the vaccine was dependent on the age of the patient. The vaccine was not effective in patients 65 years of age or older (hazard ratio, 1.53; 95% CI, 0.61−2.17; P = 0.66) but it reduced the risk of acquiring pneumonia by 80% in patients less than 65 years of age (hazard ratio, 0.19; 95% CI, 0.06−0.66; P = 0.01).
Safety
No patients reported any local or systemic adverse reactions to the vaccine.
PMCID: PMC3384373  PMID: 23074431
10.  Biventricular Pacing (Cardiac Resynchronization Therapy) 
Executive Summary
Issue
In 2002 (before the establishment of the Ontario Health Technology Advisory Committee), the Medical Advisory Secretariat conducted a health technology policy assessment on biventricular (BiV) pacing, also called cardiac resynchronization therapy (CRT). The goal of treatment with BiV pacing is to improve cardiac output for people in heart failure (HF) with a conduction defect on ECG (wide QRS interval) by synchronizing ventricular contraction. The Medical Advisory Secretariat concluded that there was evidence of short-term (6 months) and longer-term (12 months) effectiveness in terms of cardiac function and quality of life (QoL). More recently, a hospital submitted an application to the Ontario Health Technology Advisory Committee to review CRT, and the Medical Advisory Secretariat subsequently updated its health technology assessment.
Background
Chronic HF results from any structural or functional cardiac disorder that impairs the ability of the heart to act as a pump. It is estimated that 1% to 5% of the general population (all ages) in Europe have chronic HF. (1;2) About one-half of the patients with HF are women, and about 40% of men and 60% of women with this condition are older than 75 years.
The incidence (i.e., the number of new cases in a specified period) of chronic HF is age dependent: from 1 to 5 per 1,000 people each year in the total population, to as high as 30 to 40 per 1,000 people each year in those aged 75 years and older. Hence, in an aging society, the prevalence (i.e., the number of people with a given disease or condition at any time) of HF is increasing, despite a reduction in cardiovascular mortality.
A recent study revealed that 28,702 patients were hospitalized for first-time HF in Ontario between April 1994 and March 1997. (3) Women comprised 51% of the cohort. Eighty-five percent were aged 65 years or older, and 58% were aged 75 years or older.
Patients with chronic HF experience shortness of breath, a limited capacity for exercise, and high rates of hospitalization and rehospitalization, and they die prematurely. (2;4) The New York Heart Association (NYHA) has provided a commonly used functional classification for the severity of HF (2;5):
Class I: No limitation of physical activity. No symptoms with ordinary exertion.
Class II: Slight limitations of physical activity. Ordinary activity causes symptoms.
Class III: Marked limitation of physical activity. Less than ordinary activity causes symptoms. Asymptomatic at rest.
Class IV: Inability to carry out any physical activity without discomfort. Symptoms at rest.
The National Heart, Lung, and Blood Institute estimates that 35% of patients with HF are in functional NYHA class I; 35% are in class II; 25%, class III; and 5%, class IV. (5) Surveys (2) suggest that from 5% to 15% of patients with HF have persistent severe symptoms, and that the remainder of patients with HF is evenly divided between those with mild and moderately severe symptoms.
Overall, patients with chronic, stable HF have an annual mortality rate of about 10%. (2) One-third of patients with new-onset HF will die within 6 months of diagnosis. These patients do not survive to enter the pool of those with “chronic” HF. About 60% of patients with incident HF will die within 3 years, and there is limited evidence that the overall prognosis has improved in the last 15 years.
To date, the diagnosis and management of chronic HF has concentrated on patients with the clinical syndrome of HF accompanied by severe left ventricular systolic dysfunction. Major changes in treatment have resulted from a better understanding of the pathophysiology of HF and the results of large clinical trials. Treatment for chronic HF includes lifestyle management, drugs, cardiac surgery, or implantable pacemakers and defibrillators. Despite pharmacologic advances, which include diuretics, angiotensin-converting enzyme inhibitors, beta-blockers, spironolactone, and digoxin, many patients remain symptomatic on maximally tolerated doses.
The Technology
Owing to the limitations of drug therapy, cardiac transplantation and device therapies have been used to try to improve QoL and survival of patients with chronic HF. Ventricular pacing is an emerging treatment option for patients with severe HF that does not respond well to medical therapy. Traditionally, indications for pacing include bradyarrhythmia, sick sinus syndrome, atrioventricular block, and other indications, including combined sick sinus syndrome with atrioventricular block and neurocardiogenic syncope. Recently, BiV pacing as a new, adjuvant therapy for patients with chronic HF and mechanical dyssynchrony has been investigated. Ventricular dysfunction is a sign of HF; and, if associated with severe intraventricular conduction delay, it can cause dyssynchronous ventricular contractions resulting in decreased ventricular filling. The therapeutic intent is to activate both ventricles simultaneously, thereby improving the mechanical efficiency of the ventricles.
About 30% of patients with chronic HF have intraventricular conduction defects. (6) These conduction abnormalities progress over time and lead to discoordinated contraction of an already hemodynamically compromised ventricle. Intraventricular conduction delay has been associated with clinical instability and an increased risk of death in patients with HF. (7) Hence, BiV pacing, which involves pacing the left and right ventricles simultaneously, may provide a more coordinated pattern of ventricular contraction and thereby potentially reduce QRS duration and intraventricular and interventricular asynchrony. People with advanced chronic HF, a wide QRS complex (i.e., the portion of the electrocardiogram comprising the Q, R, and S waves, together representing ventricular depolarization), a low left ventricular ejection fraction, contraction dyssynchrony in a viable myocardium, and normal sinus rhythm are the target patient group for BiV pacing. One-half of all deaths in HF patients are sudden, and the mode of death is arrhythmic in most cases. Internal cardioverter defibrillators (ICDs) combined with BiV pacemakers are therefore being increasingly considered for patients with HF who are at high risk of sudden death.
Current Implantation Technique for Cardiac Resynchronization
Conventional dual-chamber pacemakers have only 2 leads: 1 placed in the right atrium and the other in the right ventricle. The technique used for BiV pacemaker implantation also uses right atrial and ventricular pacing leads, in addition to a left ventricular lead advanced through the coronary sinus into a vein that runs along the ventricular free wall. This permits simultaneous pacing of both ventricles to allow resynchronization of the left ventricular septum and free wall.
Mode of Operation
Permanent pacing systems consist of an implantable pulse generator that contains a battery and electronic circuitry, together with 1 (single-chamber pacemaker) or 2 (dual-chamber pacemaker) leads. Leads conduct intrinsic atrial or ventricular signals to the sensing circuitry and deliver the pulse generator charge to the myocardium (muscle of the heart).
Complications of Biventricular Pacemaker Implantation
The complications that may arise when a BiV pacemaker is implanted are similar to those that occur with standard pacemaker implantation, including pneumothorax, perforation of the great vessels or the myocardium, air embolus, infection, bleeding, and arrhythmias. Moreover, left ventricular pacing through the coronary sinus can be associated with rupture of the sinus as another complication.
Conclusion of 2003 Review of Biventricular Pacemakers by the Medical Advisory Secretariat
The randomized controlled trials (RCTs) the Medical Advisory Secretariat retrieved analyzed chronic HF patients who were assessed for up to 6 months. Other studies have been prospective, but nonrandomized, not double-blinded, uncontrolled, and/or have had a limited or uncalculated sample size. Short-term studies have focused on acute hemodynamic analyses. The authors of the RCTs reported improved cardiac function and QoL up to 6 months after BiV pacemaker implantation; therefore, there is level 1 evidence that patients with ventricular dyssynchrony who remain symptomatic after medication might benefit from this technology. Based on evidence made available to the Medical Advisory Secretariat by a manufacturer, (8) it appears that these 6-month improvements are maintained at 12-month follow-up.
To date, however, there is insufficient evidence to support the routine use of combined ICD/BiV devices in patients with chronic HF with prolonged QRS intervals.
Summary of Updated Findings Since the 2003 Review
Since the Medical Advisory Secretariat’s review in 2003 of biventricular pacemakers, 2 large RCTs have been published: COMPANION (9) and CARE-HF. (10) The characteristics of each trial are shown in Table 1. The COMPANION trial had a number of major methodological limitations compared with the CARE-HF trial.
Characteristics of the COMPANION and CARE-HF Trials*
COMPANION; (9) CARE-HF. (10)
BiV indicates biventricular; ICD, implantable cardioverter defibrillator; EF, ejection fraction; QRS, the interval representing the Q, R and S waves on an electrocardiogram; FDA, United States Food and Drug Administration.
Overall, CARE-HF showed that BiV pacing significantly improves mortality, QoL, and NYHA class in patients with severe HF and a wide QRS interval (Tables 2 and 3).
CARE-HF Results: Primary and Secondary Endpoints*
BiV indicates biventricular; NNT, number needed to treat.
Cleland JGF, Daubert J, Erdmann E, Freemantle N, Gras D, Kappenberger L et al. The effect of cardiac resynchronization on morbidity and mortality in heart failure (CARE-HF). New England Journal of Medicine 2005; 352:1539-1549; Copyright 2005 Massachusetts Medical Society. All rights reserved. (10)
CARE-HF Results: NYHA Class and Quality of Life Scores*
Minnesota Living with Heart Failure scores range from 0 to 105; higher scores reflect poorer QoL.
European Quality of Life–5 Dimensions scores range from -0.594 to 1.000; 1.000 indicates fully healthy; 0, dead
Cleland JGF, Daubert J, Erdmann E, Freemantle N, Gras D, Kappenberger L et al. The effect of cardiac resynchronization on morbidity and mortality in heart failure (CARE-HF). New England Journal of Medicine 2005; 352:1539-1549; Copyright 2005 Massachusetts Medical Society. All rights reserved. (10)
GRADE Quality of Evidence
The quality of these 3 trials was examined according to the GRADE Working Group criteria, (12) (Table 4).
Quality refers to criteria such as the adequacy of allocation concealment, blinding, and follow-up.
Consistency refers to the similarity of estimates of effect across studies. If there is an important unexplained inconsistency in the results, confidence in the estimate of effect for that outcome decreases. Differences in the direction of effect, the size of the differences in effect, and the significance of the differences guide the decision about whether important inconsistency exists.
Directness refers to the extent to which the people, interventions, and outcome measures are similar to those of interest. For example, there may be uncertainty about the directness of the evidence if the people of interest are older, sicker, or have more comorbid conditions than do the people in the studies.
As stated by the GRADE Working Group, (12) the following definitions were used in grading the quality of the evidence:
High: Further research is very unlikely to change our confidence in the estimate of effect.
Moderate: Further research is likely to have an important impact on our confidence in the estimate of effect and may change the estimate.
Low: Further research is very likely to have an important impact on our confidence in the estimate of effect and is likely to change the estimate.
Very low: Any estimate of effect is very uncertain.
Quality of Evidence: CARE-HF and COMPANION
Conclusions
Overall, there is evidence that BiV pacemakers are effective for improving mortality, QoL, and functional status in patients with NYHA class III/IV HF, an EF less than 0.35, and a QRS interval greater than 120 ms who are refractory to drug therapy.
As per the GRADE Working Group, recommendations considered the following 4 main factors:
The tradeoffs, taking into account the estimated size of the effect for the main outcome, the confidence limits around those estimates, and the relative value placed on the outcome
The quality of the evidence (Table 4)
Translation of the evidence into practice in a specific setting, taking into consideration important factors that could be expected to modify the size of the expected effects such as proximity to a hospital or availability of necessary expertise
Uncertainty about the baseline risk for the population of interest
The GRADE Working Group also recommends that incremental costs of health care alternatives should be considered explicitly alongside the expected health benefits and harms. Recommendations rely on judgments about the value of the incremental health benefits in relation to the incremental costs. The last column in Table 5 shows the overall trade-off between benefits and harms and incorporates any risk/uncertainty.
For BiV pacing, the overall GRADE and strength of the recommendation is moderate: the quality of the evidence is moderate/high (because of some uncertainty due to methodological limitations in the study design, e.g., no blinding), but there is also some risk/uncertainty in terms of the estimated prevalence and wide cost-effectiveness estimates (Table 5).
For the combination BiV pacing/ICD, the overall GRADE and strength of the recommendation is weak—the quality of the evidence is low (because of uncertainty due to methodological limitations in the study design), but there is also some risk/uncertainty in terms of the estimated prevalence, high cost, and high budget impact (Table 5). There are indirect, low-quality comparisons of the effectiveness of BiV pacemakers compared with the combination BiV/ICD devices.
A stronger recommendation can be made for BiV pacing alone, compared with the combination BiV/ICD device, for patients with an EF less than or equal to 0.35, a QRS interval greater than or equal to 120 ms, and NYHA class III/IV symptoms who are refractory to optimal medical therapy (Table 5).
There is moderate/high-quality evidence that BiV pacemakers significantly improve mortality, QoL, and functional status.
There is low-quality evidence that combined BiV/ICD devices significantly improve mortality, QoL, and functional status.
To date, there are no direct comparisons of the effectiveness of BiV pacemakers compared with the combined BiV/ICD devices in terms of mortality, QoL, and functional status.
Overall GRADE and Strength of Recommendation
BiV refers to biventricular; ICD, implantable cardioverter defibrillator; NNT, number needed to treat.
PMCID: PMC3382419  PMID: 23074464
11.  The Fall and Rise of US Inequities in Premature Mortality: 1960–2002 
PLoS Medicine  2008;5(2):e46.
Background
Debates exist as to whether, as overall population health improves, the absolute and relative magnitude of income- and race/ethnicity-related health disparities necessarily increase—or decrease. We accordingly decided to test the hypothesis that health inequities widen—or shrink—in a context of declining mortality rates, by examining annual US mortality data over a 42-year period.
Methods and Findings
Using US county mortality data from 1960–2002 and county median family income data from the 1960–2000 decennial censuses, we analyzed the rates of premature mortality (deaths among persons under age 65) and infant death (deaths among persons under age 1) by quintiles of county median family income weighted by county population size. Between 1960 and 2002, as US premature mortality and infant death rates declined in all county income quintiles, socioeconomic and racial/ethnic inequities in premature mortality and infant death (both relative and absolute) shrank between 1966 and 1980, especially for US populations of color; thereafter, the relative health inequities widened and the absolute differences barely changed in magnitude. Had all persons experienced the same yearly age-specific premature mortality rates as the white population living in the highest income quintile, between 1960 and 2002, 14% of the white premature deaths and 30% of the premature deaths among populations of color would not have occurred.
Conclusions
The observed trends refute arguments that health inequities inevitably widen—or shrink—as population health improves. Instead, the magnitude of health inequalities can fall or rise; it is our job to understand why.
Nancy Krieger and colleagues found evidence of decreasing, and then increasing or stagnating, socioeconomic and racial inequities in US premature mortality and infant death from 1960 to 2002.
Editors' Summary
Background
One of the biggest aims of public health advocates and governments is to improve the health of the population. Improving health increases people's quality of life and helps the population be more economically productive. But within populations are often persistent differences (usually called “disparities” or “inequities”) in the health of different subgroups—between women and men, different income groups, and people of different races/ethnicities, for example. Researchers study these differences so that policy makers and the broader public can be informed about what to do to intervene. For example, if we know that the health of certain subgroups of the population—such as the poor—is staying the same or even worsening as the overall health of the population is improving, policy makers could design programs and devote resources to specifically target the poor.
To study health disparities, researchers use both relative and absolute measures. Relative inequities refer to ratios, while absolute inequities refer to differences. For example, if one group's average income level increases from $1,000 to $10,000 and another group's from $2,000 to $20,000, the relative inequality between the groups stays the same (i.e., the ratio of incomes between the two groups is still 2) but the absolute difference between the two groups has increased from $1,000 to $10,000.
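The sketch below simply restates the income example above in code, showing that the ratio (relative measure) stays at 2 while the difference (absolute measure) grows from $1,000 to $10,000; the same two measures apply to mortality rates.

```python
# Minimal sketch restating the income example from the text: relative
# inequity is a ratio, absolute inequity is a difference.

def relative_inequity(a: float, b: float) -> float:
    """Ratio of group b to group a."""
    return b / a

def absolute_inequity(a: float, b: float) -> float:
    """Difference between group b and group a."""
    return b - a

# Incomes from the example: group A rises from $1,000 to $10,000,
# group B from $2,000 to $20,000.
for period, (a, b) in {"before": (1_000, 2_000), "after": (10_000, 20_000)}.items():
    print(period,
          "ratio =", relative_inequity(a, b),        # stays at 2.0
          "difference =", absolute_inequity(a, b))   # grows from 1,000 to 10,000
```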
Examining the US population, Nancy Krieger and colleagues looked at trends over time in both relative and absolute differences in mortality between people in different income groups and between whites and people of color.
Why Was This Study Done?
There has been a lot of debate about whether disparities have been widening or narrowing as overall population health improves. Some research has found that both total health and health disparities are getting better with time. Other research has shown that overall health gains mask worsening disparities—such that the rich get healthier while the poor get sicker.
Having access to more data over a longer time frame meant that Krieger and colleagues could provide a more complete picture of this sometimes contradictory story. It also meant they could test their hypothesis about whether health inequities necessarily widen or shrink as population health improves, over a period (the 1960s through the 1990s) during which certain US events and policies would likely have had an impact on mortality trends.
What Did the Researchers Do and Find?
In order to investigate health inequities, the authors chose to look at two common measures of population health: rates of premature mortality (dying before the age of 65 years) and rates of infant mortality (death before the age of 1).
To determine mortality rates, the authors used death statistics data from different counties, which are routinely collected by state and national governments. To be able to rank mortality rates for different income groups, they used data on the median family incomes of people living within those counties (meaning half the families had incomes above, and half had incomes below, the median value). They calculated mortality rates for the total population and for whites versus people of color. They used data from 1960 through 2002. They compared rates for 1966–1980 with two other time periods: 1960–1965 and 1981–2002. They also examined trends in the annual mortality rates and in the annual relative and absolute disparities in these rates by county income level.
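A minimal sketch of the general approach described above, rebuilt under stated assumptions: counties are ranked by median family income, assigned to quintiles weighted by county population, and a death rate is computed per quintile. The column names and toy data are invented for illustration; this is not the authors' analysis code.

```python
# Illustrative sketch (hypothetical column names and toy data) of
# population-weighted income quintiles and per-quintile death rates.

import pandas as pd

def income_quintile_rates(df: pd.DataFrame) -> pd.Series:
    """Premature deaths per 100,000 population by population-weighted
    quintile of county median family income."""
    df = df.sort_values("median_family_income").copy()
    # Cumulative share of total population defines the weighted quintile cut points.
    cum_share = df["population"].cumsum() / df["population"].sum()
    df["income_quintile"] = pd.cut(cum_share, bins=[0, .2, .4, .6, .8, 1.0],
                                   labels=[1, 2, 3, 4, 5], include_lowest=True)
    grouped = df.groupby("income_quintile", observed=True)
    return grouped["premature_deaths"].sum() / grouped["population"].sum() * 100_000

# Toy example with made-up counties.
toy = pd.DataFrame({
    "median_family_income": [28_000, 35_000, 42_000, 55_000, 70_000],
    "population":           [120_000, 300_000, 250_000, 400_000, 150_000],
    "premature_deaths":     [720, 1_500, 1_100, 1_400, 450],
})
print(income_quintile_rates(toy))
```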
Over the whole period 1960–2002, the authors found that premature mortality (death before the age of 65) and infant mortality (death before the age of 1) decreased for all income groups. But they also found that disparities between income groups and between whites and people of color were not the same over this time period. In fact, the economic disparities narrowed then widened. First, they shrank between 1966 and 1980, especially for Americans of color. After 1980, however, the relative health inequities widened and the absolute differences did not change. The authors conclude that if all people in the US population experienced the same health gains as the most advantaged did during these 42 years (i.e., as the whites in the highest income groups), 14% of the premature deaths among whites and 30% of the premature deaths among people of color would have been prevented.
What Do These Findings Mean?
The findings provide an overview of the trends in inequities in premature and infant mortality over a long period of time. Different explanations for these trends can now be tested. The authors discuss several potential reasons for these trends, including generally rising incomes across America and changes related to specific diseases, such as the advent of HIV/AIDS, changes in smoking habits, and better management of cancer and cardiovascular disease. But they find that these do not explain the fall then rise of inequities. Instead, the authors suggest that the explanations lie in the social programs of the 1960s and the subsequent roll-back of some of these programs in the 1980s. The US “War on Poverty,” civil rights legislation, and the establishment of Medicare, all introduced in the mid-1960s, were intended to reduce socioeconomic and racial/ethnic inequalities and improve access to health care. In the 1980s there was a general cutting back of welfare state provisions in America, which included cuts to public health and antipoverty programs, tax relief for the wealthy, and worsening inequity in access to and quality of health care. Together, these wider events could explain the fall then rise in mortality disparities.
The authors say their findings are important to inform and help monitor the progress of various policies and programmes, including those such as the Healthy People 2010 initiative in America, which aims to increase the quality and years of healthy life and decrease health disparities by the end of this decade.
Additional Information.
Please access these Web sites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.0050046.
Healthy People 2010 was created by the US Department of Health and Human Services along with scientists inside and outside of government and includes a comprehensive set of disease prevention and health promotion objectives for the US to achieve by 2010, with two overarching goals: to increase quality and years of healthy life and to eliminate health disparities
Johan Mackenbach and colleagues provide an overview of mortality inequalities in six Western European countries—Finland, Sweden, Norway, Denmark, England/Wales, and Italy—and conclude that eliminating mortality inequalities requires that more cardiovascular deaths among lower socioeconomic groups be prevented, and that more attention be paid to rising death rates from lung cancer, breast cancer, respiratory disease, gastrointestinal disease, and injuries among women and men in the lower income groups
The WHO Health for All program promotes health equity
A primer on absolute versus relative differences is provided by the American College of Physicians
doi:10.1371/journal.pmed.0050046
PMCID: PMC2253609  PMID: 18303941
12.  Internet-Based Device-Assisted Remote Monitoring of Cardiovascular Implantable Electronic Devices 
Executive Summary
Objective
The objective of this Medical Advisory Secretariat (MAS) report was to conduct a systematic review of the available published evidence on the safety, effectiveness, and cost-effectiveness of Internet-based device-assisted remote monitoring systems (RMSs) for therapeutic cardiac implantable electronic devices (CIEDs) such as pacemakers (PMs), implantable cardioverter-defibrillators (ICDs), and cardiac resynchronization therapy (CRT) devices. The MAS evidence-based review was performed to support public financing decisions.
Clinical Need: Condition and Target Population
Sudden cardiac death (SCD) is a major cause of fatalities in developed countries. In the United States almost half a million people die of SCD annually, resulting in more deaths than stroke, lung cancer, breast cancer, and AIDS combined. In Canada each year more than 40,000 people die from a cardiovascular-related cause; approximately half of these deaths are attributable to SCD.
Most cases of SCD occur in the general population, typically in those without a known history of heart disease. Most SCDs are caused by cardiac arrhythmia, an abnormal heart rhythm caused by malfunctions of the heart’s electrical system. Up to half of patients with significant heart failure (HF) also have advanced conduction abnormalities.
Cardiac arrhythmias are managed by a variety of drugs, ablative procedures, and therapeutic CIEDs. The range of CIEDs includes pacemakers (PMs), implantable cardioverter-defibrillators (ICDs), and cardiac resynchronization therapy (CRT) devices. Bradycardia is the main indication for PMs and individuals at high risk for SCD are often treated by ICDs.
Heart failure (HF) is also a significant health problem and is the most frequent cause of hospitalization in those over 65 years of age. Patients with moderate to severe HF may also have cardiac arrhythmias, although the cause may be related more to heart pump or haemodynamic failure. The presence of HF, however, increases the risk of SCD five-fold, regardless of aetiology. Patients with HF who remain highly symptomatic despite optimal drug therapy are sometimes also treated with CRT devices.
With an increasing prevalence of age-related conditions such as chronic HF and the expanding indications for ICD therapy, the rate of ICD placement has been dramatically increasing. The appropriate indications for ICD placement, as well as the rate of ICD placement, are increasingly an issue. In the United States, after the introduction of expanded coverage of ICDs, a national ICD registry was created in 2005 to track these devices. A recent survey based on this national ICD registry reported that 22.5% (25,145) of patients had received a non-evidence based ICD and that these patients experienced significantly higher in-hospital mortality and post-procedural complications.
In addition to the increased ICD device placement and the upfront device costs, there is the need for lifelong follow-up or surveillance, placing a significant burden on patients and device clinics. In 2007, over 1.6 million CIEDs were implanted in Europe and the United States, which translates to over 5.5 million patient encounters per year if the recommended follow-up practices are considered. A safe and effective RMS could potentially improve the efficiency of long-term follow-up of patients and their CIEDs.
Technology
In addition to being therapeutic devices, CIEDs have extensive diagnostic abilities. All CIEDs can be interrogated and reprogrammed during an in-clinic visit using an inductive programming wand. Remote monitoring would allow patients to transmit information recorded in their devices from the comfort of their own homes. Currently most ICD devices also have the potential to be remotely monitored. Remote monitoring (RM) can be used to check system integrity, to alert on arrhythmic episodes, and to potentially replace in-clinic follow-ups and manage disease remotely. These devices cannot currently be reprogrammed remotely, although this feature is being tested in pilot settings.
Every RMS is specifically designed by a manufacturer for its cardiac implant devices. For Internet-based device-assisted RMSs, this customization includes details such as the web application, multiplatform sensors, custom algorithms, programming information, and the types and methods of alerting patients and/or physicians. The addition of peripherals for monitoring weight and pressure or communicating with patients through the onsite communicators also varies by manufacturer. Internet-based device-assisted RMSs for CIEDs are intended to function as a surveillance system rather than an emergency system.
Health care providers therefore need to learn each application, and as more than one application may be used at one site, multiple applications may need to be reviewed for alarms. All RMSs deliver system integrity alerting; however, some systems seem to be better geared to fast arrhythmic alerting, whereas other systems appear to be more intended for remote follow-up or supplemental remote disease management. The different RMSs may therefore have different impacts on workflow organization because of their varying frequency of interrogation and methods of alerts. The integration of these proprietary RM web-based registry systems with hospital-based electronic health record systems has so far not been commonly implemented.
Currently there are 2 general types of RMSs: those that transmit device diagnostic information automatically and without patient assistance to secure Internet-based registry systems, and those that require patient assistance to transmit information. Both systems employ the use of preprogrammed alerts that are either transmitted automatically or at regular scheduled intervals to patients and/or physicians.
The current web applications, programming, and registry systems differ greatly between the manufacturers of transmitting cardiac devices. In Canada there are currently 4 manufacturers—Medtronic Inc., Biotronik, Boston Scientific Corp., and St Jude Medical Inc.—which have regulatory approval for remote transmitting CIEDs. Remote monitoring systems are proprietary to the manufacturer of the implant device. An RMS for one device will not work with another device, and the RMS may not work with all versions of the manufacturer’s devices.
All Internet-based device-assisted RMSs have common components. The implanted device is equipped with a micro-antenna that communicates with a small external device (at bedside or wearable) commonly known as the transmitter. Transmitters are able to interrogate programmed parameters and diagnostic data stored in the patients’ implant device. The information transfer to the communicator can occur at preset time intervals with the participation of the patient (waving a wand over the device) or it can be sent automatically (wirelessly) without their participation. The encrypted data are then uploaded to an Internet-based database on a secure central server. The data processing facilities at the central database, depending on the clinical urgency, can trigger an alert for the physician(s) that can be sent via email, fax, text message, or phone. The details are also posted on the secure website for viewing by the physician (or their delegate) at their convenience.
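A minimal, hypothetical sketch of the alert-routing flow described above: every transmission is posted to the secure website, and alerts are additionally pushed to the physician according to clinical urgency. All class names, urgency levels, and channels here are invented for illustration and do not correspond to any manufacturer's actual system.

```python
# Hypothetical illustration of urgency-based alert routing; not any
# manufacturer's actual RMS logic.

from dataclasses import dataclass
from enum import Enum

class Urgency(Enum):
    ROUTINE = 1      # post to website only
    ACTIONABLE = 2   # also notify physician by email
    CRITICAL = 3     # also notify physician immediately (e.g., SMS/phone)

@dataclass
class Transmission:
    patient_id: str
    finding: str       # e.g., "lead impedance out of range"
    urgency: Urgency

def route_alert(tx: Transmission) -> list[str]:
    """Return the notification channels used for one transmission."""
    channels = ["secure website"]               # every transmission is posted for review
    if tx.urgency is Urgency.ACTIONABLE:
        channels.append("email to physician")
    elif tx.urgency is Urgency.CRITICAL:
        channels += ["email to physician", "SMS/phone to physician"]
    return channels

print(route_alert(Transmission("pt-001", "lead impedance out of range", Urgency.CRITICAL)))
```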
Research Questions
The research directions and specific research questions for this evidence review were as follows:
To identify the Internet-based device-assisted RMSs available for follow-up of patients with therapeutic CIEDs such as PMs, ICDs, and CRT devices.
To identify the potential risks, operational issues, or organizational issues related to Internet-based device-assisted RM for CIEDs.
To evaluate the safety, acceptability, and effectiveness of Internet-based device-assisted RMSs for CIEDs such as PMs, ICDs, and CRT devices.
To evaluate the safety, effectiveness, and cost-effectiveness of Internet-based device-assisted RMSs for CIEDs compared to usual outpatient in-office monitoring strategies.
To evaluate the resource implications or budget impact of RMSs for CIEDs in Ontario, Canada.
Research Methods
Literature Search
The review included a systematic review of published scientific literature and consultations with experts and manufacturers of all 4 approved RMSs for CIEDs in Canada. Information on CIED cardiac implant clinics was also obtained from Provincial Programs, a division within the Ministry of Health and Long-Term Care with a mandate for cardiac implant specialty care. Various administrative databases and registries were used to outline the current clinical follow-up burden of CIEDs in Ontario. The provincial population-based ICD database developed and maintained by the Institute for Clinical Evaluative Sciences (ICES) was used to review the current follow-up practices with Ontario patients implanted with ICD devices.
Search Strategy
A literature search was performed on September 21, 2010 using OVID MEDLINE, MEDLINE In-Process and Other Non-Indexed Citations, EMBASE, the Cumulative Index to Nursing & Allied Health Literature (CINAHL), the Cochrane Library, and the International Agency for Health Technology Assessment (INAHTA) for studies published from 1950 to September 2010. Search alerts were generated and reviewed for additional relevant literature until December 31, 2010. Abstracts were reviewed by a single reviewer and, for those studies meeting the eligibility criteria full-text articles were obtained. Reference lists were also examined for any additional relevant studies not identified through the search.
Inclusion Criteria
published between 1950 and September 2010;
English language full-reports and human studies;
original reports including clinical evaluations of Internet-based device-assisted RMSs for CIEDs in clinical settings;
reports including standardized measurements on outcome events such as technical success, safety, effectiveness, cost, measures of health care utilization, morbidity, mortality, quality of life or patient satisfaction;
randomized controlled trials (RCTs), systematic reviews and meta-analyses, cohort and controlled clinical studies.
Exclusion Criteria
non-systematic reviews, letters, comments and editorials;
reports not involving standardized outcome events;
clinical reports not involving Internet-based device assisted RM systems for CIEDs in clinical settings;
reports involving studies testing or validating algorithms without RM;
studies with small samples (<10 subjects).
Outcomes of Interest
The outcomes of interest included: technical outcomes, emergency department visits, complications, major adverse events, symptoms, hospital admissions, clinic visits (scheduled and/or unscheduled), survival, morbidity (disease progression, stroke, etc.), patient satisfaction, and quality of life.
Summary of Findings
The MAS evidence review was performed to review available evidence on Internet-based device-assisted RMSs for CIEDs published until September 2010. The search identified 6 systematic reviews, 7 randomized controlled trials, and 19 reports for 16 cohort studies—3 of these being registry-based and 4 being multi-centered. The evidence is summarized in the 3 sections that follow.
1. Effectiveness of Remote Monitoring Systems of CIEDs for Cardiac Arrhythmia and Device Functioning
In total, 15 reports on 13 cohort studies involving investigations with 4 different RMSs for CIEDs in cardiology implant clinic groups were identified in the review. The 4 RMSs were: Care Link Network® (Medtronic Inc., Minneapolis, MN, USA); Home Monitoring® (Biotronik, Berlin, Germany); House Call 11® (St Jude Medical Inc., St Paul, MN, USA); and a manufacturer-independent RMS. Eight of these reports were with the Home Monitoring® RMS (12,949 patients), 3 were with the Care Link® RMS (167 patients), 1 was with the House Call 11® RMS (124 patients), and 1 was with a manufacturer-independent RMS (44 patients). All of the studies, except for 2 in the United States (1 with Home Monitoring® and 1 with House Call 11®), were performed in European countries.
The RMSs in the studies were evaluated with different cardiac implant device populations: ICDs only (6 studies), ICD and CRT devices (3 studies), PM and ICD and CRT devices (4 studies), and PMs only (2 studies). The patient populations were predominately male (range, 52%–87%) in all studies, with mean ages ranging from 58 to 76 years. One study population was unique in that RMSs were evaluated for ICDs implanted solely for primary prevention in young patients (mean age, 44 years) with Brugada syndrome, an inherited condition that carries an increased risk of sudden cardiac death in young adults.
Most of the cohort studies reported on the feasibility of RMSs in clinical settings with limited follow-up. In the short follow-up periods of the studies, the majority of the events were related to detection of medical events rather than system configuration or device abnormalities. The results of the studies are summarized below:
The interrogation of devices on the web platform, both for continuous and scheduled transmissions, was significantly quicker with remote follow-up, both for nurses and physicians.
In a case-control study focusing on a Brugada population–based registry with patients followed-up remotely, there were significantly fewer outpatient visits and greater detection of inappropriate shocks. One death occurred in the control group not followed remotely and post-mortem analysis indicated early signs of lead failure prior to the event.
Two studies examined the role of RMSs in following ICD leads under regulatory advisory in a European clinical setting and noted:
– Fewer inappropriate shocks were administered in the RM group.
– Urgent in-office interrogations and surgical revisions were performed within 12 days of remote alerts.
– No signs of lead fracture were detected at in-office follow-up; all were detected at remote follow-up.
Only 1 study reported evaluating quality of life in patients followed up remotely at 3 and 6 months; no values were reported.
Patient satisfaction was evaluated in 5 cohort studies, all in short term follow-up: 1 for the Home Monitoring® RMS, 3 for the Care Link® RMS, and 1 for the House Call 11® RMS.
– Patients reported receiving a sense of security from the transmitter, a good relationship with nurses and physicians, positive implications for their health, and satisfaction with RM and organization of services.
– Although patients reported that the system was easy to implement and required less than 10 minutes to transmit information, a variable proportion of patients (range, 9%–39%) reported that they needed the assistance of a caregiver for their transmission.
– The majority of patients would recommend RM to other ICD patients.
– Patients with hearing or other physical or mental conditions hindering the use of the system were excluded from studies, but the frequency of this was not reported.
Physician satisfaction was evaluated in 3 studies, all with the Care Link® RMS:
– Physicians reported an ease of use and high satisfaction with a generally short-term use of the RMS.
– Physicians reported being able to address the problems in unscheduled patient transmissions or physician initiated transmissions remotely, and were able to handle the majority of the troubleshooting calls remotely.
– Both nurses and physicians reported a high level of satisfaction with the web registry system.
2. Effectiveness of Remote Monitoring Systems in Heart Failure Patients for Cardiac Arrhythmia and Heart Failure Episodes
Remote follow-up of HF patients implanted with ICD or CRT devices, generally managed in specialized HF clinics, was evaluated in 3 cohort studies: 1 involved the Home Monitoring® RMS and 2 involved the Care Link® RMS. In these RMSs, in addition to the standard diagnostic features, the cardiac devices continuously assess other variables such as patient activity, mean heart rate, and heart rate variability. Intra-thoracic impedance, a proxy measure for lung fluid overload, was also measured in the Care Link® studies. The overall diagnostic performance of these measures cannot be evaluated, as the information was not reported for patients who did not experience intra-thoracic impedance threshold crossings or did not undergo interventions. The trial results involved descriptive information on transmissions and alerts in patients experiencing high morbidity and hospitalization in the short study periods.
3. Comparative Effectiveness of Remote Monitoring Systems for CIEDs
Seven RCTs were identified evaluating RMSs for CIEDs: 2 were for PMs (1276 patients) and 5 were for ICD/CRT devices (3733 patients). Studies performed in the clinical setting in the United States involved both the Care Link® RMS and the Home Monitoring® RMS, whereas all studies performed in European countries involved only the Home Monitoring® RMS.
3A. Randomized Controlled Trials of Remote Monitoring Systems for Pacemakers
Two trials, both multicenter RCTs, were conducted in different countries with different RMSs and study objectives. The PREFER trial was a large trial (897 patients) performed in the United States examining the ability of Care Link®, an Internet-based remote PM interrogation system, to detect clinically actionable events (CAEs) sooner than the current in-office follow-up supplemented with transtelephonic monitoring transmissions, a limited form of remote device interrogation. The trial results are summarized below:
In the 375-day mean follow-up, 382 patients were identified with at least 1 CAE—111 patients in the control arm and 271 in the remote arm.
The event rate detected per patient for every type of CAE, except for loss of atrial capture, was higher in the remote arm than the control arm.
The median time to first detection of CAEs (4.9 vs. 6.3 months) was significantly shorter in the RMS group compared to the control group (P < 0.0001).
Additionally, only 2% (3/190) of the CAEs in the control arm were detected during a transtelephonic monitoring transmission (the rest were detected at in-office follow-ups), whereas 66% (446/676) of the CAEs were detected during remote interrogation.
The second study, the OEDIPE trial, was a smaller trial (379 patients) performed in France evaluating the ability of the Home Monitoring® RMS to shorten PM post-operative hospitalization while preserving the safety of conventional management of longer hospital stays.
Implementation and operationalization of the RMS was reported to be successful in 91% (346/379) of the patients and represented 8144 transmissions.
In the RM group 6.5% of patients failed to send messages (10 due to improper use of the transmitter, 2 with unmanageable stress). Of the 172 patients transmitting, 108 patients sent a total of 167 warnings during the trial, with a greater proportion of warnings being attributed to medical rather than technical causes.
Forty percent had no warning message transmission and among these, 6 patients experienced a major adverse event and 1 patient experienced a non-major adverse event. Of the 6 patients having a major adverse event, 5 contacted their physician.
The mean medical reaction time was faster in the RM group (6.5 ± 7.6 days vs. 11.4 ± 11.6 days).
The mean duration of hospitalization was significantly shorter (P < 0.001) for the RM group than the control group (3.2 ± 3.2 days vs. 4.8 ± 3.7 days).
Quality of life estimates by the SF-36 questionnaire were similar for the 2 groups at 1-month follow-up.
3B. Randomized Controlled Trials Evaluating Remote Monitoring Systems for ICD or CRT Devices
The 5 studies evaluating the impact of RMSs with ICD/CRT devices were conducted in the United States and in European countries and involved 2 RMSs—Care Link® and Home Monitoring®. The objectives of the trials varied and 3 of the trials were smaller pilot investigations.
The first of the smaller studies (151 patients) evaluated patient satisfaction, achievement of patient outcomes, and the cost-effectiveness of the Care Link® RMS compared to quarterly in-office device interrogations with 1-year follow-up.
Individual outcomes such as hospitalizations, emergency department visits, and unscheduled clinic visits were not significantly different between the study groups.
Except for a significantly higher detection of atrial fibrillation in the RM group, data on ICD detection and therapy were similar in the study groups.
Health-related quality of life evaluated by the EuroQoL at 6-month or 12-month follow-up was not different between study groups.
Patients were more satisfied with their ICD care in the clinic follow-up group than in the remote follow-up group at 6-month follow-up, but were equally satisfied at 12-month follow-up.
The second small pilot trial (20 patients) examined the impact of RM follow-up with the House Call 11® system on work schedules and cost savings in patients randomized to 2 study arms varying in the degree of remote follow-up.
The total time including device interrogation, transmission time, data analysis, and physician time required was significantly shorter for the RM follow-up group.
The in-clinic waiting time was eliminated for patients in the RM follow-up group.
The physician talk time was significantly reduced in the RM follow-up group (P < 0.05).
The time for the actual device interrogation did not differ in the study groups.
The third small trial (115 patients) examined the impact of RM with the Home Monitoring® system compared to scheduled trimonthly in-clinic visits on the number of unplanned visits, total costs, health-related quality of life (SF-36), and overall mortality.
There was a 63.2% reduction in in-office visits in the RM group.
Hospitalizations or overall mortality (values not stated) were not significantly different between the study groups.
Patient-induced visits were higher in the RM group than the in-clinic follow-up group.
The TRUST Trial
The TRUST trial was a large multicenter RCT conducted at 102 centers in the United States involving the Home Monitoring® RMS for ICD devices for 1450 patients. The primary objectives of the trial were to determine if remote follow-up could be safely substituted for in-office clinic follow-up (3 in-office visits replaced) and still enable earlier physician detection of clinically actionable events.
Adherence to the protocol follow-up schedule was significantly higher in the RM group than the in-office follow-up group (93.5% vs. 88.7%, P < 0.001).
Actionability of trimonthly scheduled checks was low (6.6%) in both study groups. Overall, actionable causes were reprogramming (76.2%), medication changes (24.8%), and lead/system revisions (4%), and these were not different between the 2 study groups.
The overall mean number of in-clinic and hospital visits was significantly lower in the RM group than the in-office follow-up group (2.1 per patient-year vs. 3.8 per patient-year, P < 0.001), representing a 45% visit reduction at 12 months.
The median time from onset of first arrhythmia to physician evaluation was significantly shorter (P < 0.001) in the RM group than in the in-office follow-up group for all arrhythmias (1 day vs. 35.5 days).
The median time to detect clinically asymptomatic arrhythmia events—atrial fibrillation (AF), ventricular fibrillation (VF), ventricular tachycardia (VT), and supra-ventricular tachycardia (SVT)—was also significantly shorter (P < 0.001) in the RM group compared to the in-office follow-up group (1 day vs. 41.5 days) and was significantly quicker for each of the clinical arrhythmia events—AF (5.5 days vs. 40 days), VT (1 day vs. 28 days), VF (1 day vs. 36 days), and SVT (2 days vs. 39 days).
System-related problems occurred infrequently in both groups—in 1.5% of patients (14/908) in the RM group and in 0.7% of patients (3/432) in the in-office follow-up group.
The overall adverse event rate over 12 months was not significantly different between the 2 groups and individual adverse events were also not significantly different between the RM group and the in-office follow-up group: death (3.4% vs. 4.9%), stroke (0.3% vs. 1.2%), and surgical intervention (6.6% vs. 4.9%), respectively.
The 12-month cumulative survival was 96.4% (95% confidence interval [CI], 95.5%–97.6%) in the RM group and 94.2% (95% confidence interval [CI], 91.8%–96.6%) in the in-office follow-up group, and was not significantly different between the 2 groups (P = 0.174).
The CONNECT Trial
The CONNECT trial, another major multicenter RCT, involved the Care Link® RMS for ICD/CRT devices in a 15-month follow-up study of 1,997 patients at 133 sites in the United States. The primary objective of the trial was to determine whether automatically transmitted physician alerts decreased the time from the occurrence of clinically relevant events to medical decisions. The trial results are summarized below:
Of the 575 clinical alerts sent in the study, 246 did not trigger an automatic physician alert. Transmission failures were related to technical issues such as the alert not being programmed or not being reset, and/or a variety of patient factors such as not being at home and the monitor not being plugged in or set up.
The overall mean time from the clinically relevant event to the clinical decision was significantly shorter (P < 0.001) by 17.4 days in the remote follow-up group (4.6 days for 172 patients) than the in-office follow-up group (22 days for 145 patients).
– The median time to a clinical decision was shorter in the remote follow-up group than in the in-office follow-up group for an AT/AF burden greater than or equal to 12 hours (3 days vs. 24 days) and a fast VF rate greater than or equal to 120 beats per minute (4 days vs. 23 days).
Although infrequent, similar low numbers of events involving low battery and VF detection/therapy turned off were noted in both groups. More alerts, however, were noted for out-of-range lead impedance in the RM group (18 vs. 6 patients), and the time to detect these critical events was significantly shorter in the RM group (same day vs. 17 days).
Total in-office clinic visits were reduced by 38% from 6.27 visits per patient-year in the in-office follow-up group to 3.29 visits per patient-year in the remote follow-up group.
Health care utilization visits (N = 6,227) that included cardiovascular-related hospitalization, emergency department visits, and unscheduled clinic visits were not significantly higher in the remote follow-up group.
The overall mean length of hospitalization was significantly shorter (P = 0.002) for those in the remote follow-up group (3.3 days vs. 4.0 days) and was shorter both for patients with ICD (3.0 days vs. 3.6 days) and CRT (3.8 days vs. 4.7 days) implants.
The mortality rate was not significantly different between the follow-up groups for the ICDs (P = 0.31) or the CRT devices with defibrillator (P = 0.46).
Conclusions
There is limited clinical trial information on the effectiveness of RMSs for PMs. However, for RMSs for ICD devices, multiple cohort studies and 2 large multicenter RCTs demonstrated feasibility and significant reductions in in-office clinic follow-ups with RMSs in the first year post implantation. The detection rates of clinically significant events (and asymptomatic events) were higher, and the time to a clinical decision for these events was significantly shorter, in the remote follow-up groups than in the in-office follow-up groups. The earlier detection of clinical events in the remote follow-up groups, however, was not associated with lower morbidity or mortality rates in the 1-year follow-up. The substitution of almost all the first year in-office clinic follow-ups with RM was also not associated with an increased health care utilization such as emergency department visits or hospitalizations.
The follow-up in the trials was generally short term (up to 1 year), providing only a limited assessment of potential longer-term device/lead integrity complications or issues. None of the studies compared the different RMSs, particularly the different RMSs involving patient-scheduled transmissions or automatic transmissions. Patients’ acceptance of and satisfaction with RM were reported to be high, but the impact of RM on patients’ health-related quality of life, particularly the psychological aspects, was not evaluated thoroughly. Patients who are not technologically adept or who have hearing or other physical/mental impairments were identified as potentially disadvantaged with remote surveillance. Cohort studies consistently identified subgroups of patients who preferred in-office follow-up. Costs and the workflow impact on the health care system were evaluated only in a limited way, in European or American clinical settings.
Internet-based device-assisted RMSs involve a new approach to monitoring patients, their disease progression, and their CIEDs. Remote monitoring also has the potential to improve the current postmarket surveillance systems of evolving CIEDs and their ongoing hardware and software modifications. At this point, however, there is insufficient information to evaluate the overall impact to the health care system, although the time saving and convenience to patients and physicians associated with a substitution of in-office follow-up by RM is more certain. The broader issues surrounding infrastructure, impacts on existing clinical care systems, and regulatory concerns need to be considered for the implementation of Internet-based RMSs in jurisdictions involving different clinical practices.
PMCID: PMC3377571  PMID: 23074419
13.  Mortality of HIV-Infected Patients Starting Antiretroviral Therapy in Sub-Saharan Africa: Comparison with HIV-Unrelated Mortality 
PLoS Medicine  2009;6(4):e1000066.
Comparing mortality rates between patients starting HIV treatment and the general population in four African countries, Matthias Egger and colleagues find the gap decreases over time, especially with early treatment.
Background
Mortality in HIV-infected patients who have access to highly active antiretroviral therapy (ART) has declined in sub-Saharan Africa, but it is unclear how mortality compares to the non-HIV–infected population. We compared mortality rates observed in HIV-1–infected patients starting ART with non-HIV–related background mortality in four countries in sub-Saharan Africa.
Methods and Findings
Patients enrolled in antiretroviral treatment programmes in Côte d'Ivoire, Malawi, South Africa, and Zimbabwe were included. We calculated excess mortality rates and standardised mortality ratios (SMRs) with 95% confidence intervals (CIs). Expected numbers of deaths were obtained using estimates of age-, sex-, and country-specific, HIV-unrelated, mortality rates from the Global Burden of Disease project. Among 13,249 eligible patients 1,177 deaths were recorded during 14,695 person-years of follow-up. The median age was 34 y, 8,831 (67%) patients were female, and 10,811 of 12,720 patients (85%) with information on clinical stage had advanced disease when starting ART. The excess mortality rate was 17.5 (95% CI 14.5–21.1) per 100 person-years in patients who started ART with a CD4 cell count of less than 25 cells/µl and World Health Organization (WHO) stage III/IV, compared to 1.00 (0.55–1.81) per 100 person-years in patients who started with 200 cells/µl or above and WHO stage I/II. The corresponding SMRs were 47.1 (39.1–56.6) and 3.44 (1.91–6.17). Among patients who started ART with 200 cells/µl or above in WHO stage I/II and survived the first year of ART, the excess mortality rate was 0.27 (0.08–0.94) per 100 person-years and the SMR was 1.14 (0.47–2.77).
Conclusions
Mortality of HIV-infected patients treated with combination ART in sub-Saharan Africa continues to be higher than in the general population, but for some patients excess mortality is moderate and reaches that of the general population in the second year of ART. Much of the excess mortality might be prevented by timely initiation of ART.
Please see later in the article for Editors' Summary
Editors' Summary
Background
Acquired immunodeficiency syndrome (AIDS) has killed more than 25 million people since 1981 and more than 30 million people (22 million in sub-Saharan Africa alone) are now infected with the human immunodeficiency virus (HIV), which causes AIDS. HIV destroys immune system cells (including CD4 cells, a type of lymphocyte), leaving infected individuals susceptible to other infections. Early in the AIDS epidemic, most HIV-positive people died within ten years of infection. Then, in 1996, highly active antiretroviral therapy (ART)—combinations of powerful antiretroviral drugs—was developed and the life expectancy of HIV-infected people living in affluent countries improved dramatically. Now, in industrialized countries, all-cause mortality (death from any cause) among HIV-infected patients treated successfully with ART is similar to that of the general population and the mortality rate (the number of deaths in a population per year) among patients with HIV/AIDS is comparable to that among patients with diabetes and other chronic conditions.
Why Was This Study Done?
Unfortunately, combination ART is costly, so although HIV/AIDS quickly became a chronic disease in industrialized countries, AIDS deaths continued unabated among the millions of HIV-infected people living in low- and middle-income countries. Then, in 2003, governments, international agencies and funding bodies began to implement plans to increase ART coverage in developing countries. By the end of 2007, nearly three million people living with HIV/AIDS in these countries were receiving ART—nearly a third of the people who urgently need ART. In sub-Saharan Africa more than 2 million people now receive ART and mortality in HIV-infected patients who have access to ART is declining. However, no-one knows how mortality among HIV-infected people starting ART compares with non-HIV related mortality in sub-Saharan Africa. This information is needed to ensure that appropriate health services (including access to ART) are provided in this region. In this study, the researchers compare mortality rates among HIV-infected patients starting ART with non-HIV related mortality in the general population of four sub-Saharan countries.
What Did the Researchers Do and Find?
The researchers obtained estimates of the number of HIV-unrelated deaths and information about patients during their first two years on ART at five antiretroviral treatment programs in Côte d'Ivoire, Malawi, South Africa, and Zimbabwe from the World Health Organization Global Burden of Disease (GBD) project and the International epidemiological Databases to Evaluate AIDS (IeDEA) initiative, respectively. They then calculated the excess mortality rates among the HIV-infected patients (the death rates in HIV-infected patients minus the national HIV-unrelated death rates) and the standardized mortality ratio (SMR; the number of deaths among HIV-infected patients divided by the number of deaths expected from HIV-unrelated mortality). The excess mortality rate among HIV-infected people who started ART when they had a low CD4 cell count and clinically advanced disease was 17.5 per 100 person-years of follow-up. For HIV-infected people who started ART with a high CD4 cell count and early disease, the excess mortality rate was 1.0 per 100 person-years. The SMRs over two years of ART for these two groups of HIV-infected patients were 47.1 and 3.4, respectively. Finally, patients who started ART with a high CD4 cell count and early disease who survived the first year of ART had an excess mortality of only 0.27 per 100 person-years and an SMR over two years of follow-up of only 1.14.
What Do These Findings Mean?
These findings indicate that mortality among HIV-infected people during the first two years of ART is higher than in the general population in these four sub-Saharan countries. However, for patients who start ART when they have a high CD4 count and clinically early disease, the excess mortality is moderate and similar to that associated with diabetes. Because the researchers compared the death rates among HIV-infected patients with estimates of national death rates rather than with estimates of death rates for the areas where the ART programs were located, these findings may not be completely accurate. Nevertheless, these findings support further expansion of strategies that increase access to ART in sub-Saharan Africa and suggest the excess mortality among HIV-infected patients in this region might be largely prevented by starting ART before an individual's HIV infection has progressed to advanced stages.
Additional Information
Please access these Web sites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.1000066.
Information is available from the US National Institute of Allergy and Infectious Diseases on HIV infection and AIDS
HIV InSite has comprehensive information on all aspects of HIV/AIDS
Information is available from Avert, an international AIDS charity on many aspects of HIV/AIDS including HIV and AIDS in Africa, providing AIDS drug treatment for millions, and on the stages of HIV infection
The World Health Organization provides information about universal access to HIV treatment and about the Global Burden of Disease project (in several languages)
More information about the International epidemiological Databases to evaluate AIDS initiative is available on the IeDEA Web site
doi:10.1371/journal.pmed.1000066
PMCID: PMC2667633  PMID: 19399157
14.  MORTALITY OF GASWORKERS WITH SPECIAL REFERENCE TO CANCERS OF THE LUNG AND BLADDER, CHRONIC BRONCHITIS, AND PNEUMOCONIOSIS 
The mortality of selected groups of gasworkers has been observed over a period of eight years, and a comparison has been made of the mortality from different causes among different occupational groups. Men were included in the study if they had been employed by the industry for more than five years and were between 40 and 65 years of age when the observations began. All employees and pensioners of four area Gas Boards who met these conditions were initially included; but the number was subsequently reduced to 11,499 by excluding many of the occupations which did not involve entry into the carbonizing plants or involved this only irregularly. All but 0·4% of the men were followed successfully throughout the study. Mortality rates, standardized for age, were calculated for 10 diseases, or groups of diseases, for each of three broad occupational classes, i.e., those having heavy exposure in carbonizing plants (class A), intermittent exposure or exposure to conditions in other gas-producing plants (class B), and no such exposure (class C).
The results showed that the annual death rate was highest in class A (17·2 per 1,000), intermediate in class B (14·6 per 1,000), and lowest in class C (13·7 per 1,000), the corresponding mortality for all men in England and Wales over the same period being slightly lower than the rate for class A (16·3 per 1,000). The differences between the three classes were largely accounted for by two diseases, cancer of the lung and bronchitis. For cancer of the lung the death rate (3·06 per 1,000) was 69% higher in class A than in class C; for bronchitis (2·89 per 1,000) it was 126% higher. For both diseases the mortality in class B was only slightly higher than in class C, and in both these categories the mortality was close to that observed in the country as a whole.
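The relative excesses quoted above are simple ratio comparisons of age-standardized death rates. A short sketch of that arithmetic follows; the class A lung cancer rate is taken from the text, while the class C rate is inferred from the stated 69% excess and is therefore approximate.

# Relative excess of one age-standardised death rate over another.
# Class A lung cancer rate (3.06 per 1,000) is quoted in the text; the class C
# rate of ~1.81 per 1,000 is inferred here from the stated 69% excess.
rate_class_a = 3.06   # deaths per 1,000 per year, heavy carbonizing-plant exposure
rate_class_c = 1.81   # deaths per 1,000 per year, no such exposure (inferred)

percent_excess = (rate_class_a / rate_class_c - 1) * 100
print(f"Class A exceeds class C by about {percent_excess:.0f}%")   # ~69%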
Three other causes of death showed higher death rates in the exposed classes than in the unexposed or in the country as a whole, but the numbers of deaths attributed to them were very small. The death rate from cancer of the bladder in class A was four times that in class C, but the total number of deaths was only 14. Five deaths were attributed to pneumoconiosis, four of which occurred in bricklayers (class B). One death from cancer of the scrotum occurred in a retort house worker.
For other causes of death the mortality rates were similar to or lower than the corresponding national rates.
Examination of the data separately for each area Board showed that the excess mortality from lung cancer and chronic bronchitis in retort house workers persisted in each area. For two Boards the mortality from other causes was close to that recorded for other men living in the same region; in the other two Boards it was substantially lower.
A comparison between the mortality of men who worked in horizontal retort houses and of those who worked in vertical houses suggested that the risk of lung cancer was greater in the horizontal houses and the risk of bronchitis was greater in the vertical houses, the differences being, however, not statistically significant.
In the light of these and other data, it is concluded that exposure to products of coal carbonization can give rise to cancer of the lung and to bronchitis, and probably also to cancer of the bladder. A risk of pneumoconiosis from work on the repair and setting of retorts is confirmed.
PMCID: PMC1008209  PMID: 14261702
15.  Extracorporeal Lung Support Technologies – Bridge to Recovery and Bridge to Lung Transplantation in Adult Patients 
Executive Summary
For cases of acute respiratory distress syndrome (ARDS) and progressive chronic respiratory failure, the first choice of treatment is mechanical ventilation. For decades, this method has been used to support critically ill patients in respiratory failure. Despite its life-saving potential, however, several experimental and clinical studies have suggested that ventilator-induced lung injury can adversely affect the lungs and patient outcomes. Current opinion is that by reducing the pressure and volume of gas delivered to the lungs during mechanical ventilation, the stress applied to the lungs is eased, enabling them to rest and recover. In addition, mechanical ventilation may fail to provide adequate gas exchange, so patients may suffer from severe hypoxia and hypercapnea. For these reasons, extracorporeal lung support technologies may play an important role in the clinical management of patients with lung failure, not only allowing the transfer of oxygen and carbon dioxide (CO2) but also buying the lungs the time needed to rest and heal.
Objective
The objective of this analysis was to assess the effectiveness, safety, and cost-effectiveness of extracorporeal lung support technologies in the improvement of pulmonary gas exchange and the survival of adult patients with acute pulmonary failure and those with end-stage chronic progressive lung disease as a bridge to lung transplantation (LTx). The application of these technologies in primary graft dysfunction (PGD) after LTx is beyond the scope of this review and is not discussed.
Clinical Applications of Extracorporeal Lung Support
Extracorporeal lung support technologies [i.e., Interventional Lung Assist (ILA) and extracorporeal membrane oxygenation (ECMO)] have been advocated for use in the treatment of patients with respiratory failure. These techniques do not treat the underlying lung condition; rather, they improve gas exchange while enabling the implantation of a protective ventilation strategy to prevent further damage to the lung tissues imposed by the ventilator. As such, extracorporeal lung support technologies have been used in three major lung failure case types:
As a bridge to recovery in acute lung failure – for patients with injured or diseased lungs to give their lungs time to heal and regain normal physiologic function.
As a bridge to LTx – for patients with irreversible end stage lung disease requiring LTx.
As a bridge to recovery after LTx – used as lung support for patients with PGD or severe hypoxemia.
Ex-Vivo Lung Perfusion and Assessment
Recently, the evaluation and reconditioning of donor lungs ex-vivo has been introduced into clinical practice as a method of improving the rate of donor lung utilization. Generally, about 15% to 20% of donor lungs are suitable for LTx, but these figures may increase with the use of ex-vivo lung perfusion. The ex-vivo evaluation and reconditioning of donor lungs is currently performed at the Toronto General Hospital (TGH) and preliminary results have been encouraging (Personal communication, clinical expert, December 17, 2009). If its effectiveness is confirmed, the use of the technique could lead to further expansion of donor organ pools and improvements in post-LTx outcomes.
Extracorporeal Lung support Technologies
ECMO
The ECMO system consists of a centrifugal pump, a membrane oxygenator, inlet and outlet cannulas, and tubing. Blood drawn from the patient by the pump passes through the oxygenator, where the exchange of oxygen and CO2 takes place, and the reoxygenated blood is then returned to one of the patient’s veins or arteries. Additional ports may be added for haemodialysis or ultrafiltration.
Two different techniques may be used to introduce ECMO: venoarterial and venovenous. In the venoarterial technique, cannulation is through either the femoral artery and the femoral vein, or through the carotid artery and the internal jugular vein. In the venovenous technique cannulation is through both femoral veins or a femoral vein and internal jugular vein; one cannula acts as inflow or arterial line, and the other as an outflow or venous line. Venovenous ECMO will not provide adequate support if a patient has pulmonary hypertension or right heart failure. Problems associated with cannulation during the procedure include bleeding around the cannulation site and limb ischemia distal to the cannulation site.
ILA
Interventional Lung Assist (ILA) is used to remove excess CO2 from the blood of patients in respiratory failure. The system is characterized by a novel, low-resistance gas exchange device with a diffusion membrane composed of polymethylpentene (PMP) fibres. These fibres are woven into a complex configuration that maximizes the exchange of oxygen and CO2 by simple diffusion. The system is also designed to operate without the help of an external pump, though one can be added if higher blood flow is required. The device is then applied across an arteriovenous shunt between the femoral artery and femoral vein. Depending on the size of the arterial cannula used and the mean systemic arterial pressure, a blood flow of up to 2.5 L/min can be achieved (up to 5.5 L/min with an external pump). The cannulation is performed after intravenous administration of heparin.
Recently, the first commercially available extracorporeal membrane ventilator (NovaLung GmbH, Hechingen, Germany) was approved for clinical use by Health Canada for patients in respiratory failure. The system has been used in more than 2,000 patients with various indications in Europe, and was used for the first time in North America at the Toronto General Hospital in 2006.
Evidence-Based Analysis
The research questions addressed in this report are:
Does ILA/ECMO facilitate gas exchange in the lungs of patients with severe respiratory failure?
Does ILA/ECMO improve the survival rate of patients with respiratory failure caused by a range of underlying conditions including patients awaiting LTx?
What are the possible serious adverse events associated with ILA/ECMO therapy?
To address these questions, a systematic literature search was performed on September 28, 2009 using OVID MEDLINE, MEDLINE In-Process and Other Non-Indexed Citations, EMBASE, the Cumulative Index to Nursing & Allied Health Literature (CINAHL), the Cochrane Library, and the International Network of Agencies for Health Technology Assessment (INAHTA) for studies published from January 1, 2005 to September 28, 2009. Abstracts were reviewed by a single reviewer and, for those studies meeting the eligibility criteria, full-text articles were obtained. Reference lists were also examined for any additional relevant studies not identified through the search. Articles with an unknown eligibility were reviewed with a second clinical epidemiologist and then a group of epidemiologists until consensus was established.
Inclusion Criteria
Studies in which ILA/ECMO was used as a bridge to recovery or bridge to LTx
Studies containing information relevant to the effectiveness and safety of the procedure
Studies including at least five patients
Exclusion Criteria
Studies reporting the use of ILA/ECMO for inter-hospital transfers of critically ill patients
Studies reporting the use of ILA/ECMO in patients during or after LTx
Animal or laboratory studies
Case reports
Outcomes of Interest
Reduction in partial pressure of CO2
Correction of respiratory acidosis
Improvement in partial pressure of oxygen
Improvement in patient survival
Frequency and severity of adverse events
The search yielded 107 citations in Medline and 107 citations in EMBASE. After reviewing the information provided in the titles and abstracts, eight citations were found to meet the study inclusion criteria. One study was then excluded because of an overlap in the study population with a previous study. Reference checking did not produce any additional studies for inclusion. Seven case series studies, all conducted in Germany, were thus included in this review (see Table 1).
Also included is the recently published CESAR trial, a multicentre RCT in the UK in which ECMO was compared with conventional intensive care management. The results of the CESAR trial were published when this review was initiated. In the absence of any other recent RCT on ECMO, the results of this trial were considered for this assessment and no further searches were conducted. A literature search was then conducted for the application of ECMO as a bridge to LTx (January 1, 2005 to present). A total of 127 citations on this topic were identified and reviewed, but none examined the use of ECMO as a bridge to LTx.
Quality of Evidence
To grade the quality of evidence, the grading system formulated by the GRADE working group and adopted by MAS was applied. The GRADE system classifies the quality of a body of evidence as high, moderate, low, or very low according to four key elements: study design, study quality, consistency across studies, and directness.
Results
Trials on ILA
Of the seven studies identified, six involved patients with ARDS caused by a range of underlying conditions; the seventh included only patients awaiting LTx. All studies reported the rate of gas exchange and respiratory mechanics before ILA and for up to 7 days of ILA therapy. Four studies reported the means and standard deviations of blood gas transfer and arterial blood pH, which were used for meta-analysis.
Fischer et al. reported their first experience on the use of ILA as a bridge to LTx. In their study, 12 patients at high urgency status for LTx, who also had severe ventilation refractory hypercapnea and respiratory acidosis, were connected to ILA prior to LTx. Seven patients had a systemic infection or sepsis prior to ILA insertion. Six hours after initiation of ILA, the partial pressure of CO2 in arterial blood significantly decreased (P < .05) and arterial blood pH significantly improved (P < .05) and remained stable for one week (last time point reported). The partial pressure of oxygen in arterial blood improved from 71 mmHg to 83 mmHg 6 hours after insertion of ILA. The ratio of PaO2/FiO2 improved from 135 at baseline to 168 at 24 hours after insertion of ILA but returned to baseline values in the following week.
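The PaO2/FiO2 ratio quoted above is the arterial oxygen tension divided by the fraction of inspired oxygen. A minimal sketch with hypothetical inputs (the abstract does not report the FiO2 values behind its figures):

# PaO2/FiO2 ratio: arterial oxygen tension (mmHg) divided by the inspired oxygen fraction.
# The numbers below are hypothetical and serve only to show the arithmetic.
pao2_mmhg = 84.0   # arterial partial pressure of oxygen
fio2 = 0.5         # fraction of inspired oxygen (50%)

pf_ratio = pao2_mmhg / fio2
print(f"PaO2/FiO2 = {pf_ratio:.0f}")   # 168, of the same order as the 24-hour value in the text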
Trials on ECMO
The UK-based CESAR trial was conducted to assess the effectiveness and cost of ECMO therapy for severe, acute respiratory failure. The trial protocol was published in 2006, and details of the methods used for the economic evaluation were published in 2008. The study itself was a pragmatic trial (similar to a UK trial of neonatal ECMO) in which best standard practice was compared with an ECMO protocol. The trial involved 180 patients with acute but potentially reversible respiratory failure, each with a Murray score of ≥ 3.0 or uncompensated hypercapnea at a pH of < 7.2. Enrolled patients were randomized in a 1:1 ratio to receive either conventional ventilation treatment or ECMO while on a ventilator. Conventional management included intermittent positive pressure ventilation, high-frequency oscillatory ventilation, or both. As a pragmatic trial, a specific management protocol was not followed; rather, the treatment centres were advised to follow a low-volume, low-pressure ventilation strategy. A tidal volume of 4 to 8 mL/kg body weight and a plateau pressure of < 30 cm H2O were recommended.
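The entry criteria and lung-protective settings described above translate into a simple screening check. The sketch below only illustrates that logic with the thresholds quoted in the text; it is not a reproduction of the CESAR protocol, and the function names are invented for illustration.

# Illustrative screening logic based on the CESAR entry criteria described above
# (Murray score >= 3.0 or uncompensated hypercapnea with pH < 7.2) and the
# recommended lung-protective settings. Not a clinical tool; names are invented.
def cesar_like_eligibility(murray_score: float, arterial_ph: float, reversible: bool) -> bool:
    # Acute, potentially reversible respiratory failure plus either entry criterion.
    return reversible and (murray_score >= 3.0 or arterial_ph < 7.2)

def protective_ventilation_targets(weight_kg: float) -> dict:
    # Tidal volume 4-8 mL/kg body weight; plateau pressure below 30 cm H2O.
    return {
        "tidal_volume_ml": (4 * weight_kg, 8 * weight_kg),
        "max_plateau_pressure_cmh2o": 30,
    }

print(cesar_like_eligibility(murray_score=3.2, arterial_ph=7.31, reversible=True))  # True
print(protective_ventilation_targets(70.0))  # tidal volume 280-560 mL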
Conclusions
ILA
Bridge to recovery
No RCTs or observational studies compared ILA to other treatment modalities.
Case series have shown that ILA therapy results in significant CO2 removal from arterial blood and correction of respiratory acidosis, as well as an improvement in oxygen transfer.
ILA therapy enabled a lowering of respiratory settings to protect the lungs without causing a negative impact on arterial blood CO2 and arterial blood pH.
The impact of ILA on patient long-term survival cannot be determined through the studies reviewed.
In-hospital mortality across studies ranged from 20% to 65%.
Ischemic complications were the most frequent adverse events following ILA therapy.
Leg amputation is a rare but possible outcome of ILA therapy, having occurred in about 0.9% of patients in these case series. New techniques involving the insertion of additional cannula into the femoral artery to perfuse the leg may lower this rate.
Bridge to LTx
The results of one case series (n=12) showed that ILA effectively removes CO2 from arterial blood and corrects respiratory acidosis in patients with ventilation refractory hypercapnea awaiting LTx.
Eight of the 12 patients (67%) awaiting LTx were successfully transplanted, and one-year survival for those transplanted was 80%.
Since all studies are case series, the grade of the evidence for these observations is classified as “LOW”.
ECMO
Bridge to recovery
Based on the results of a pragmatic trial and an intention-to-treat analysis, referral of patients to an ECMO-based centre significantly improves survival without disability compared with conventional ventilation. The results of the CESAR trial showed that:
For patients with information about disability, survival without severe disability was significantly higher in the ECMO arm.
Assuming that the three patients in the conventional ventilation arm who did not have information about severe disability were all disabled, the results were also significant.
Assuming that none of these patients were disabled, the results were of borderline significance.
A greater, though not statistically significant, proportion of patients in the ECMO arm survived.
The rate of serious adverse events was higher among patients in the ECMO group.
The grade of evidence for the above observations is classified as “HIGH”.
Bridge to LTx
No studies fitting the inclusion criteria were identified.
There are no accurate data on the use of ECMO in patients awaiting LTx.
Economic Analysis
The objective of the economic analysis was to determine the costs associated with extracorporeal lung support technologies for bridge to LTx in adults. A literature search was conducted for which the target population was adults eligible for extracorporeal lung support. The primary analytic perspective was that of the Ministry of Health and Long-Term Care (MOHLTC). Articles published in English and fitting the following inclusion criteria were reviewed:
Full economic evaluations including cost-effectiveness analyses (CEA), cost-utility analyses (CUA), cost-benefit analyses (CBA);
Economic evaluations reporting incremental cost-effectiveness ratios (ICERs), i.e., cost per quality-adjusted life year (QALY), per life year gained (LYG), or per event avoided; and
Studies in patients eligible for lung support technologies as a bridge to lung transplantation.
The search yielded no articles reporting comparative economic analyses.
Resource Use and Costs
Costs associated with both ILA and ECMO (outlined in Table ES-1) were obtained from the University Health Network (UHN) case costing initiative (personal communication, UHN, January 2010). Consultation with a clinical expert in the field was also conducted to verify resource utilization. The consultant was situated at the UHN in Toronto. The UHN has one ECMO machine, which cost approximately $100,000. The system is 18 years old and is used an average of 3 to 4 times a year with 35 procedures being performed over the last 9 years. The disposable cost per patient associated with ECMO is, on average, $2,200. There is a maintenance cost associated with the machine (not reported by the UHN), which is currently absorbed by the hospital’s biomedical engineering department.
The average capital cost of an ILA device is $7,100 per device, per patient, while the average cost of the reusable pump is $65,000. The UHN has performed 16 of these procedures over the last 2.5 years. Similarly, there is a maintenance cost that was not reported by the UHN but is absorbed by the hospital’s biomedical engineering department.
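A rough per-case device cost can be sketched from the figures above by amortizing capital costs over the reported procedure volumes and adding disposables. The calculation below is illustrative only: it ignores maintenance, staffing, and hospital stay, and amortizing the ILA pump over only the 16 procedures performed to date overstates its long-run per-case cost.

# Rough per-case device costs from the figures quoted above (illustrative only).
ecmo_capital = 100_000      # ECMO machine (CAD)
ecmo_cases = 35             # procedures performed over the last 9 years
ecmo_disposables = 2_200    # disposable cost per patient (CAD)
ecmo_per_case = ecmo_capital / ecmo_cases + ecmo_disposables   # ~ $5,100

ila_device = 7_100          # single-use ILA device per patient (CAD)
ila_pump = 65_000           # reusable pump (CAD)
ila_cases = 16              # procedures performed over the last 2.5 years
ila_per_case = ila_device + ila_pump / ila_cases               # ~ $11,200

print(f"ECMO device cost per case: ~${ecmo_per_case:,.0f}")
print(f"ILA device cost per case:  ~${ila_per_case:,.0f}")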
Resources Associated with Extracorporeal Lung Support Technologies
Hospital costs associated with ILA were based on the average cost incurred by the hospital for 11 cases performed in the FY 07/08 (personal communication, UHN, January 2010). The resources incurred with this hospital procedure included:
Device and disposables
OR transplant
Surgical ICU
Laboratory work
Medical imaging
Pharmacy
Clinical nutrition
Physiotherapy
Occupational therapy
Speech and language pathology
Social work
The average length of stay in hospital was 61 days for ILA (range: 5 to 164 days) and the average direct cost was $186,000 per case (range: $19,000 to $552,000). This procedure has a high staffing requirement to monitor patients in hospital, driving up the average cost per case.
PMCID: PMC3415698  PMID: 23074408
16.  Intrastromal Corneal Ring Implants for Corneal Thinning Disorders 
Executive Summary
Objective
The purpose of this project was to determine the role of corneal implants in the management of corneal thinning disease conditions. An evidence-based review was conducted to determine the safety, effectiveness and durability of corneal implants for the management of corneal thinning disorders. The evolving directions of research in this area were also reviewed.
Subject of the Evidence-Based Analysis
The primary treatment objectives for corneal implants are to normalize corneal surface topography, improve contact lens tolerability, and restore visual acuity in order to delay or defer the need for corneal transplant. Implant placement is a minimally invasive procedure that is purported to be safe and effective. The procedure is also claimed to be adjustable, reversible, and both eyes can be treated at the same time. Further, implants do not limit the performance of subsequent surgical approaches or interfere with corneal transplant. The evidence for these claims is the focus of this review.
The specific research questions for the evidence review were as follows:
Safety
Corneal Surface Topographic Effects:
Effects on corneal surface remodelling
Impact of these changes on subsequent interventions, particularly corneal transplantation (penetrating keratoplasty [PKP])
Visual Acuity
Refractive Outcomes
Visual Quality (Symptoms): such as contrast vision or decreased visual symptoms (halos, fluctuating vision)
Contact lens tolerance
Functional visual rehabilitation and quality of life
Patient satisfaction:
Disease Process:
Impact on corneal thinning process
Effect on delaying or deferring the need for corneal transplantation
Clinical Need: Target Population and Condition
Corneal ectasia (thinning) comprises a range of disorders involving either primary disease conditions such as keratoconus and pellucid marginal corneal degeneration or secondary iatrogenic conditions such as corneal thinning occurring after LASIK refractive surgery. The condition occurs when the normally round dome-shaped cornea progressively thins, causing a cone-like bulge or forward protrusion in response to the normal pressure of the eye. Thinning occurs primarily in the stromal layers and is believed to involve a breakdown of the collagen network. This bulging can lead to an irregular shape or astigmatism of the cornea and, because the anterior part of the cornea is largely responsible for the focusing of light on the retina, results in loss of visual acuity. This can make even simple daily tasks, such as driving, watching television or reading, difficult to perform.
Keratoconus (KC) is the most common form of corneal thinning disorder and is a noninflammatory chronic disease process. Although the specific causes of the biomechanical alterations that occur in KC are unknown, there is a growing body of evidence to suggest that genetic factors may play an important role. KC is a rare condition (<0.05% of the population) and is unique among chronic eye diseases as it has an early age of onset (median age of 25 years). Disease management for this condition follows a step-wise approach depending on disease severity. Contact lenses are the primary treatment of choice when there is irregular astigmatism associated with the disease. When patients can no longer tolerate contact lenses or when lenses no longer provide adequate vision, patients are referred for corneal transplant.
Keratoconus is one of the leading indications for corneal transplants and has been so for the last three decades. Yet, despite high graft survival rates of up to 20 years, there are reasons to defer receiving transplants for as long as possible. Patients with keratoconus are generally young, and long-term graft survival over a lifetime would be an important consideration. The surgery itself involves lengthy time off work, and there are potential complications from long-term steroid use following surgery, as well as the risk of developing secondary cataracts, glaucoma, etc. After transplant, recurrent KC is possible, with a need for subsequent intervention. Residual refractive errors and astigmatism can remain challenging after transplantation, and high refractive surgery and re-graft rates in KC patients have been reported. Visual rehabilitation or recovery of visual acuity after transplant may be slow and/or unsatisfactory to patients.
Description of Technology/Therapy
INTACS® (Addition Technology Inc. Sunnyvale, CA, formerly KeraVision, Inc.) are the only currently licensed corneal implants in Canada. The implants are micro-thin poly methyl methacrylate crescent shaped ring segments with a circumference arc length of 150 degrees, an external diameter of 8.10 mm, an inner diameter of 6.77 mm, and a range of different thicknesses. Implants act as passive spacers and, when placed in the cornea, cause local separation of the corneal lamellae resulting in a shortening of the arc length of the anterior corneal curvature and flattening the central cornea. Increasing segment thickness results in greater lamellar separation with increased flattening of the cornea correcting for myopia by decreasing the optical power of the eye. Corneal implants also improve corneal astigmatism but the mechanism of action for this is less well understood.
Treatment with corneal implants is considered for patients who are contact lens intolerant, have adequate corneal thickness (particularly around the area of the implant incision site), and are free of central corneal scarring. Those with central corneal scarring would not benefit from implants, and those without adequate corneal thickness, particularly in the region where the implants are being inserted, would be at increased risk of corneal perforation. Patients who want visual rehabilitation that does not include glasses or contact lenses would not be candidates for corneal ring implants.
Placement of the implants is an outpatient procedure with topical anesthesia generally performed by either corneal specialists or refractive surgeons. It involves creating tunnels in the corneal stroma to secure the implants either by a diamond knife or laser calibrated to an approximate depth of 70% of the cornea. Variable approaches have been employed by surgeons in selecting ring segment size, number and position. Generally, two segments of equal thickness are placed superiorly and inferiorly to manage symmetrical patterns of corneal thinning whereas one segment may be placed to manage asymmetric thinning patterns.
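The tunnel depth mentioned above is usually expressed as a fraction of the corneal thickness at the incision site. A minimal sketch, assuming a hypothetical pachymetry reading; only the 70% figure comes from the text.

# Target stromal tunnel depth as roughly 70% of the corneal thickness at the
# incision site. The pachymetry value below is a hypothetical placeholder.
corneal_thickness_um = 450.0                      # measured thickness in micrometres
target_depth_um = 0.70 * corneal_thickness_um
print(f"Target tunnel depth: ~{target_depth_um:.0f} µm")   # ~315 µm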
Following implantation, the major safety concerns are for potential adverse events including corneal perforation, infection, corneal infiltrates, corneal neovascularization, ring migration and extrusion and corneal thinning. Technical results can be unsatisfactory for several reasons. Treatment may result in an over or under-correction of refraction and may induce astigmatism or asymmetry of the cornea.
Progression of the corneal cone with corneal opacities is also invariably an indication for progression to corneal transplant. Other reasons for treatment failure or patient dissatisfaction include foreign body sensation, unsatisfactory visual quality with symptoms such as double vision, fluctuating vision, poor night vision or visual side effects related to ring edge or induced or unresolved astigmatism.
Evidence-Based Analysis Methods
The literature search strategy employed keywords and subject headings to capture the concepts of 1) intrastromal corneal rings and 2) corneal diseases, with a focus on keratoconus, astigmatism, and corneal ectasia. The initial search was run on April 17, 2008, and a final search was run on March 6, 2009 in the following databases: Ovid MEDLINE (1996 to February Week 4 2009), OVID MEDLINE In-Process and Other Non-Indexed Citations, EMBASE (1980 to 2009 Week 10), OVID Cochrane Library, and the Centre for Reviews and Dissemination/International Network of Agencies for Health Technology Assessment. Parallel search strategies were developed for the remaining databases. Search results were limited to human and English-language studies published between January 2000 and April 17, 2008. The resulting citations were downloaded into Reference Manager, v.11 (ISI Researchsoft, Thomson Scientific, U.S.A), and duplicates were removed. The Web sites of several other health technology agencies were also reviewed including the Canadian Agency for Drugs and Technologies in Health (CADTH), ECRI, and the United Kingdom National Institute for Clinical Excellence (NICE). The bibliographies of relevant articles were scanned.
Inclusion Criteria
English language reports and human studies
Any corneal thinning disorder
Reports with corneal implants used alone or in conjunction with other interventions
Original reports with defined study methodology
Reports including standardized measurements on outcome events such as technical success, safety, effectiveness, durability, vision quality of life or patient satisfaction
Case reports or case series for complications and adverse events
Exclusion Criteria
Non-systematic reviews, letters, comments and editorials
Reports not involving outcome events such as safety, effectiveness, durability, vision quality or patient satisfaction following an intervention with corneal implants
Reports not involving corneal thinning disorders and an intervention with corneal implants
Summary of Findings
In the MAS evidence review on intrastromal corneal ring implants, 66 reports were identified on the use of implants for management of corneal thinning disorders. Reports varied according to their primary clinical indication, type of corneal implant, and whether or not secondary procedures were used in conjunction with the implants. Implants were reported to manage post LASIK thinning and/or uncorrected refractive error and were also reported as an adjunctive intervention both during and after corneal transplant to manage recurrent thinning and/or uncorrected refractive error.
Ten pre-post cohort longitudinal follow-up studies were identified examining the safety and effectiveness of INTAC corneal implants in patients with keratoconus. Five additional cohort studies were identified using the Ferrara implant for keratoconus management but because this corneal implant is not licensed in Canada these studies were not reviewed.
The cohorts implanted with INTACS involved 608 keratoconus patients (754 eyes) followed for 1, 2 or 3 years. Three of the reports involved ≥ 2 years of follow-up with the longest having 5-year follow-up data for a small number of patients. Four of the INTAC cohort studies involved 50 or more patients; the largest involved 255 patients. Inclusion criteria for the studies were consistent and included patients who were contact lens intolerant, had adequate corneal thickness, particularly around the area of the implant incision site, and without central corneal scarring. Disease severity, thinning pattern, and corneal cone protrusions all varied and generally required different treatment approaches involving defined segment sizes and locations.
A wide range of outcome measures were reported in the cohort studies. High levels of technical success or ability to place INTAC segments were reported. Technically related complications were often delayed and generally reported as segment migration attributable to early experience. Overall, complications were infrequently reported and largely involved minor reversible events without clinical sequelae.
The outcomes reported across studies involved statistically significant and clinically relevant improvements in corneal topography, refraction and visual acuity, for both uncorrected and best-corrected visual acuity. Patients’ vision was usually restored to within normal functioning levels, and for those not achieving satisfactory correction, insertion of intraocular lenses was reported in case studies to result in additional gains in visual acuity. Vision loss (infrequently reported) was usually reversed by implant exchange or removal. The primary effects of INTACS on corneal surface remodelling were consistent with secondary improvements in refractive error and visual acuity. The improvements in visual acuity and refractive error noted at 6 months were maintained at 1- and 2-year follow-up.
Improvements in visual acuity and refractive error following insertion of INTACS, however, were not noted for all patients. Although improvements were not found to vary across age groups there were differences across stages of disease. Several reports suggested that improvements in visual acuity and refractive outcomes may not be as large or predictable in more advanced stages of KC. Some studies have suggested that the effects of INTACs were much greater in flattening the corneal surface than in correcting astigmatism. However, these studies involved small numbers of high risk patients in advanced stages of KC and conclusions made from this group are limited.
INTACS were also used for indications other than primary KC. The effects of implant insertion on corneal topography, refraction, and visual acuity in post-LASIK thinning cases were similar to those reported for KC; the evidence for this indication, however, involved only case reports and small case series. INTACS were also used successfully to treat recurrent KC after corneal transplant, but this was based on a single case report. Corneal implants were compared with corneal transplantation, but these studies were not randomized and were based on small numbers of selected patients.
The foremost limitation of the evidence base is the basic study design in the reports that involved longitudinal follow-up only for the treated group; there were no randomized trials. Follow-up in the trials (although at prescribed intervals) often had incomplete accounts of losses at follow-up and estimates of change were often not reported or based on group differences. Second, although standardized outcome measures were reported, contact lens tolerance (a key treatment objective) was infrequently specified. A third general limitation was the lack of reporting of patients’ satisfaction with their vision quality or functional vision. Outcome measures for vision quality and impact on patient quality of life were available but rarely reported and have been noted to be a limitation in ophthalmological literature in general. Fourth, the longitudinal cohort studies have not followed patients long enough to evaluate the impact of implants on the underlying disease process (follow-up beyond 3 years is limited). Additionally, only a few of these studies directly examined corneal thinning in follow-up. The overall quality of evidence determined using the GRADE hierarchy of evidence was moderate.
There is some evidence in these studies to support the claim that corneal implants do not interfere with, or increase the difficulty of, subsequent corneal transplant, at least for those performed shortly after INTAC placement. Although it is uncertain how long implants can delay the need for a corneal transplant, given that patients with KC are often young (in their twenties and thirties), delaying transplant for any number of years may still be a valuable consideration.
Conclusion
The clinical indications for corneal implants have evolved from the management of myopia in normal eyes to the management of corneal thinning disorders such as KC and thinning occurring after refractive surgery. Despite the limited evidence base for corneal implants, which consists solely of longitudinal follow-up studies, they appear to be a valuable clinical tool for improving vision in patients with corneal thinning. For patients unable to achieve functional vision, corneal implants achieved statistically significant and clinically relevant improvements in corneal topography, refraction, and visual acuity, providing a useful alternative to corneal transplant. Implants may also have a rescue function, treating corneal thinning occurring after refractive surgery in normal eyes or managing refractive errors following corneal transplant. The treatment offers several advantages: it is an outpatient procedure, is associated with minimal risk, and has high technical success rates. Both eyes can be treated at once, and the treatment is adjustable and reversible. The implants can be removed or exchanged to improve vision without limiting subsequent interventions, particularly corneal transplant.
Better reporting on vision quality, functional vision, and patient satisfaction, however, would improve evaluation of the impact of these devices. Information on the durability of the implants’ treatment effects and their effects on underlying disease processes is limited. This information is becoming more important as alternative treatment strategies, such as collagen cross-linking aimed at strengthening the underlying corneal tissue, emerge; these might prove to be more effective than, or increase the effectiveness of, the implants, particularly in advanced stages of corneal thinning.
Ontario Health System Considerations
At present there are approximately 70 ophthalmologists in Canada who have had training with corneal implants; 30 of these practice in Ontario. Industry currently sponsors the training, proctoring, and support for the procedure. The cost of the implant device ranges from $950 to $1,200 (CAD), and costs for instrumentation range from $20,000 to $30,000 (CAD), a one-time capital expenditure. There is no physician services fee code for corneal implants in Ontario, but assuming fees no higher than those for a corneal transplant, the estimated surgical cost would be $914.32 (CAD). An estimated average cost per patient, based on device costs and surgical fees, is $1,964 (CAD) (range $1,814 to $2,114) per eye. There have also been no out-of-province treatment requests. In Ontario the treatment is currently being offered in private clinics, and an increasing number of ophthalmologists are being certified in the technique by the manufacturer.
KC is a rare disease, and not all of these patients would be eligible candidates for treatment with corneal implants. Based on published population rates of KC occurrence, a prevalent population of approximately 6,545 patients and an incident population of 240 newly diagnosed cases per year can be expected. Given this small number of potential cases, the use of corneal implants would not be expected to have much impact on the Ontario healthcare system. The potential impact on the provincial budget for managing the incident population, assuming the most conservative scenario (i.e., all are eligible and all receive bilateral implants), ranges from $923 thousand to $1.1 million (CAD). This estimate would vary based on a variety of criteria including eligibility, unilateral or bilateral intervention, re-intervention, capacity, and uptake.
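The budget-impact figure above combines device costs, the assumed surgical fee, and the incident population under the bilateral-implant scenario. The sketch below reconstructs the structure of that arithmetic using the figures quoted in the text; the report's exact assumptions are not stated, so the result approximates, but does not exactly match, the published $923 thousand to $1.1 million range.

# Rough reconstruction of the budget-impact arithmetic using figures quoted above.
# The report's exact assumptions are not stated, so the result is approximate.
device_cost_low, device_cost_high = 950, 1_200    # implant cost per eye (CAD)
surgical_fee = 914.32                             # assumed surgical fee per eye (CAD)
incident_cases = 240                              # newly diagnosed KC cases per year
eyes_per_patient = 2                              # most conservative scenario: bilateral implants

cost_per_eye_low = device_cost_low + surgical_fee     # ~ $1,864
cost_per_eye_high = device_cost_high + surgical_fee   # ~ $2,114

budget_low = incident_cases * eyes_per_patient * cost_per_eye_low
budget_high = incident_cases * eyes_per_patient * cost_per_eye_high
print(f"Annual budget impact: ~${budget_low:,.0f} to ~${budget_high:,.0f} (CAD)")

With these inputs the bilateral-implant scenario works out to roughly $0.9 to $1.0 million per year, in the same neighbourhood as the published estimate.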
Keywords
Keratoconus, corneal implants, corneal topography, corneal transplant, visual acuity, refractive error
PMCID: PMC3385416  PMID: 23074513
17.  Emergence of Drug Resistance Is Associated with an Increased Risk of Death among Patients First Starting HAART 
PLoS Medicine  2006;3(9):e356.
Background
The impact of the emergence of drug-resistance mutations on mortality is not well characterized in antiretroviral-naïve patients first starting highly active antiretroviral therapy (HAART). Patients may be able to sustain immunologic function with resistant virus, and there is limited evidence that reduced sensitivity to antiretrovirals leads to rapid disease progression or death. We undertook the present analysis to characterize the determinants of mortality in a prospective cohort study with a median of nearly 5 y of follow-up. The objective of this study was to determine the impact of the emergence of drug-resistance mutations on survival among persons initiating HAART.
Methods and Findings
Participants were antiretroviral therapy naïve at entry and initiated triple combination antiretroviral therapy between August 1, 1996, and September 30, 1999. Marginal structural modeling was used to address potential confounding between time-dependent variables in the Cox proportional hazard regression models. In this analysis resistance to any class of drug was considered as a binary time-dependent exposure to the risk of death, controlling for the effect of other time-dependent confounders. We also considered each separate class of mutation as a binary time-dependent exposure, while controlling for the presence/absence of other mutations. A total of 207 deaths were identified among 1,138 participants over the follow-up period, with an all cause mortality rate of 18.2%. Among the 679 patients with HIV-drug-resistance genotyping done before initiating HAART, HIV-drug resistance to any class was observed in 53 (7.8%) of the patients. During follow-up, HIV-drug resistance to any class was observed in 302 (26.5%) participants. Emergence of any resistance was associated with mortality (hazard ratio: 1.75 [95% confidence interval: 1.27, 2.43]). When we considered each class of resistance separately, persons who exhibited resistance to non-nucleoside reverse transcriptase inhibitors had the highest risk: mortality rates were 3.02 times higher (95% confidence interval: 1.99, 4.57) for these patients than for those who did not exhibit this type of resistance.
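The analysis described above treats emergent resistance as a binary exposure that can switch on during follow-up. The sketch below shows a simplified time-dependent Cox regression on toy data using the lifelines library's CoxTimeVaryingFitter; it omits the marginal structural modelling (inverse-probability weighting for time-dependent confounders) that the study actually used, and all data and column names are invented for illustration.

import pandas as pd
from lifelines import CoxTimeVaryingFitter

# Toy long-format data: one row per interval of follow-up, with a 0/1
# time-dependent "resistance" exposure and death ("event") at interval end.
long_format = pd.DataFrame({
    "id":         [1, 1, 2, 3, 3, 4, 4, 5, 6, 6],
    "start":      [0, 12, 0, 0, 20, 0, 10, 0, 0, 30],
    "stop":       [12, 36, 28, 20, 60, 10, 18, 40, 30, 58],
    "resistance": [0, 1, 0, 0, 1, 0, 1, 0, 0, 1],
    "event":      [0, 1, 1, 0, 0, 0, 1, 1, 0, 0],
})

ctv = CoxTimeVaryingFitter()
ctv.fit(long_format, id_col="id", event_col="event", start_col="start", stop_col="stop")
ctv.print_summary()   # hazard ratio for "resistance", analogous to the 1.75 reported above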
Conclusions
We demonstrated that emergence of resistance to non-nucleoside reverse transcriptase inhibitors was associated with a greater risk of subsequent death than was emergence of protease inhibitor resistance. Future research is needed to identify the particular subpopulations of men and women at greatest risk and to elucidate the impact of resistance over a longer follow-up period.
Emergence of resistance to both non-nucleoside reverse transcriptase inhibitors and protease inhibitors was associated with a higher risk of subsequent death, but the risk was greater in patients with NNRTI-resistant HIV.
Editors' Summary
Background.
In the 1980s, infection with the human immunodeficiency virus (HIV) was effectively a death sentence. HIV causes AIDS (acquired immunodeficiency syndrome) by replicating inside immune system cells and destroying them, which leaves infected individuals unable to fight off other viruses and bacteria. The first antiretroviral drugs were developed quickly, but it soon became clear that single antiretrovirals only transiently suppress HIV infection. HIV mutates (accumulates random changes to its genetic material) very rapidly and, although most of these changes (or mutations) are bad for the virus, by chance some make it drug resistant. Highly active antiretroviral therapy (HAART), which was introduced in the mid-1990s, combines three or four antiretroviral drugs that act at different stages of the viral life cycle. For example, they inhibit the reverse transcriptase that the virus uses to replicate its genetic material, or the protease that is necessary to assemble new viruses. With HAART, the replication of any virus that develops resistance to one drug is inhibited by the other drugs in the mix. As a consequence, for many individuals with access to HAART, AIDS has become a chronic rather than a fatal disease. However, being on HAART requires patients to take several pills a day at specific times. In addition, the drugs in the HAART regimens often have side effects.
Why Was This Study Done?
Drug resistance still develops even with HAART, often because patients don't stick to the complicated regimens. The detection of resistance to one drug is usually the prompt to change a patient's drug regimen to head off possible treatment failure. Although most patients treated with HAART live for many years, some still die from AIDS. We don't know much about how the emergence of drug-resistance mutations affects mortality in patients who are starting antiretroviral therapy for the first time. In this study, the researchers looked at how the emergence of drug resistance affected survival in a group of HIV/AIDS patients in British Columbia, Canada. Here, everyone with HIV/AIDS has access to free medical attention, HAART, and laboratory monitoring, and full details of all HAART recipients are entered into a central reporting system.
What Did the Researchers Do and Find?
The researchers enrolled people who started antiretroviral therapy for the first time between August 1996 and September 1999 into the HAART Observational Medical Evaluation and Research (HOMER) cohort. They then excluded anyone who was infected with already drug-resistant HIV strains (based on the presence of drug-resistance mutations in viruses isolated from the patients) at the start of therapy. The remaining 1,138 patients were followed for an average of five years. All the patients received either two nucleoside reverse transcriptase inhibitors and a protease inhibitor, or two nucleoside and one non-nucleoside reverse transcriptase inhibitor (NNRTI). Nearly a fifth of the study participants died during the follow-up period. Most of these patients actually had drug-sensitive viruses, possibly because they had neglected taking their drugs to such an extent that there had been insufficient drug exposure to select for drug-resistant viruses. In a quarter of the patients, however, HIV strains resistant to one or more antiretroviral drugs emerged during the study (again judged by looking for mutations). Detailed statistical analyses indicated that the emergence of any drug resistance nearly doubled the risk of patients dying, and that people carrying viruses resistant to NNRTIs were three times as likely to die as those without resistance to this class of antiretroviral drug.
What Do These Findings Mean?
These results provide new information about the emergence of drug-resistant HIV during HAART and possible effects on the long-term survival of patients. In particular, they suggest that clinicians should watch carefully for the emergence of resistance to NNRTIs in their patients. Because this type of resistance is often due to poor adherence to drug regimens, these results also suggest that increased efforts should be made to ensure that patients comply with the prescribed HAART regimens, especially those whose antiretroviral therapy includes NNRTIs. As with all studies in which a group of individuals who share a common characteristic are studied over time, it is possible that some other, unmeasured difference between the patients who died and those who didn't—rather than emerging drug resistance—is responsible for the observed differences in survival. Additional studies are needed to confirm the findings here, and to investigate whether specific subpopulations of patients are at particular risk of developing drug resistance and/or dying during HAART.
Additional Information.
Please access these Web sites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.0030356.
US National Institute of Allergy and Infectious Diseases fact sheet on HIV infection and AIDS
US Department of Health and Human Services information on AIDS, including details of approved drugs for the treatment of HIV infection
US Centers for Disease Control and Prevention information on HIV/AIDS
Aidsmap, information on HIV and AIDS provided by the charity NAM, which includes details on antiretroviral drugs
doi:10.1371/journal.pmed.0030356
PMCID: PMC1569883  PMID: 16984218
18.  Gender Differences in Survival among Adult Patients Starting Antiretroviral Therapy in South Africa: A Multicentre Cohort Study 
PLoS Medicine  2012;9(9):e1001304.
Morna Cornell and colleagues investigate differences in mortality for HIV-positive men and women on antiretroviral therapy in South Africa.
Background
Increased mortality among men on antiretroviral therapy (ART) has been documented but remains poorly understood. We examined the magnitude of and risk factors for gender differences in mortality on ART.
Methods and Findings
Analyses included 46,201 ART-naïve adults starting ART between January 2002 and December 2009 in eight ART programmes across South Africa (SA). Patients were followed from initiation of ART to outcome or analysis closure. The primary outcome was mortality; secondary outcomes were loss to follow-up (LTF), virologic suppression, and CD4+ cell count responses. Survival analyses were used to examine the hazard of death on ART by gender. Sensitivity analyses were limited to patients who were virologically suppressed and patients whose CD4+ cell count reached >200 cells/µl. We compared gender differences in mortality among HIV+ patients on ART with mortality in an age-standardised HIV-negative population.
Among 46,201 adults (65% female, median age 35 years), during 77,578 person-years of follow-up, men had lower median CD4+ cell counts than women (85 versus 110 cells/µl, p<0.001), were more likely to be classified WHO stage III/IV (86 versus 77%, p<0.001), and had higher mortality in crude (8.5 versus 5.7 deaths/100 person-years, p<0.001) and adjusted analyses (adjusted hazard ratio [AHR] 1.31, 95% CI 1.22–1.41). After 36 months on ART, men were more likely than women to be truly LTF (AHR 1.20, 95% CI 1.12–1.28) but not to die after LTF (AHR 1.04, 95% CI 0.86–1.25). Findings were consistent across all eight programmes. Virologic suppression was similar by gender; women had slightly better immunologic responses than men. Notably, the observed gender differences in mortality on ART were smaller than gender differences in age-standardised death rates in the HIV-negative South African population. Over time, non-HIV mortality appeared to account for an increasing proportion of observed mortality. The analysis was limited by missing data on baseline HIV disease characteristics, and we did not directly observe mortality in the HIV-negative populations where the participating cohorts were located.
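The comparison with the HIV-negative population rests on direct age standardisation: weighting age-specific death rates by a common standard age structure before comparing men and women. A self-contained sketch with hypothetical rates and weights follows.

# Direct age standardisation: weight age-specific death rates by a standard
# age distribution, then compare the standardised rates by gender.
# All rates and weights below are hypothetical.
standard_weights = {"15-29": 0.35, "30-44": 0.40, "45-59": 0.25}   # standard population shares

male_rates   = {"15-29": 4.0, "30-44": 9.0, "45-59": 18.0}   # deaths per 1,000 person-years
female_rates = {"15-29": 2.5, "30-44": 6.0, "45-59": 13.0}

def standardised_rate(rates: dict, weights: dict) -> float:
    return sum(rates[age] * weights[age] for age in weights)

male_std = standardised_rate(male_rates, standard_weights)
female_std = standardised_rate(female_rates, standard_weights)
print(f"Men {male_std:.1f}, women {female_std:.1f} per 1,000; ratio {male_std / female_std:.2f}")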
Conclusions
HIV-infected men have higher mortality on ART than women in South African programmes, but these differences are only partly explained by more advanced HIV disease at the time of ART initiation, differential LTF and subsequent mortality, and differences in responses to treatment. The observed differences in mortality on ART may be best explained by background differences in mortality between men and women in the South African population unrelated to the HIV/AIDS epidemic.
Please see later in the article for the Editors' Summary.
Editors' Summary
Background
About 34 million people (most living in low- and middle-income countries) are currently infected with HIV, the virus that causes AIDS. HIV destroys CD4 lymphocytes and other immune system cells, leaving infected individuals susceptible to other infections. Early in the AIDS epidemic, most HIV-infected people died within 10 years of becoming infected. Then, in 1996, antiretroviral therapy (ART)—cocktails of drugs that keep HIV in check—became available. For people living in affluent countries, HIV/AIDS became a chronic condition. However, ART was expensive and, for people living in poorer countries, HIV/AIDS remained a fatal illness. In 2003, this situation was declared a global emergency, and governments and international agencies began to implement plans to increase ART coverage in resource-limited countries. Since then, ART programs in these countries have grown rapidly. In South Africa, for example, about 52% of the 3.14 million adults in need of ART were receiving an ART regimen recommended by the World Health Organization by the end of 2010.
Why Was This Study Done?
The outcomes of ART programs in resource-limited countries need to be evaluated thoroughly so that these programs can be optimized. One area of concern to ART providers is that of gender differences in survival among patients receiving treatment. In sub-Saharan Africa, for example, men are more likely to die than women while receiving ART. This gender difference in mortality may arise because men initiating ART in many African ART programs have more advanced HIV disease than women (early ART initiation is associated with better outcomes than late initiation) or because men are more likely to be lost to follow-up than women (failure to continue treatment is associated with death). Other possible explanations for gender differentials in mortality on ART include gender differences in immunologic and virologic responses to treatment (increased numbers of immune system cells and reduced amounts of virus in the blood, respectively). In this multicenter cohort study, the researchers examine the size of, and risk factors for, gender differences in mortality on ART in South Africa by examining data collected from adults starting ART at International Epidemiologic Databases to Evaluate AIDS South Africa (IeDEA-SA) collaboration sites.
What Did the Researchers Do and Find?
The researchers analyzed data collected from 46,201 ART-naïve adults who started ART between 2002 and 2009 in eight IeDEA-SA ART programs. At ART initiation, men had a lower CD4 count on average and were more likely to have advanced HIV disease than women. During the study, after allowing for factors likely to affect mortality such as HIV disease stage at initiation, men on ART had a 31% higher risk of dying than women. Men were more likely to be lost to follow-up than women, but men and women who were lost to follow-up were equally likely to die. Women had a slightly better immunological response to ART than men but virologic suppression was similar in both genders. Importantly, in analyses of mortality limited to individuals who were virologically suppressed at 12 months and to patients who had a good immunological response to ART, men still had a higher risk of death than women. However, the gender differences in mortality on ART were smaller than the gender differences in age-standardized mortality in the HIV-negative South African population.
What Do These Findings Mean?
These analyses show that among South African patients initiating ART between 2002 and 2009, men were more likely to die than women but that this gender difference in mortality on ART cannot be completely explained by gender differences in baseline characteristics, loss to follow-up, or virologic and/or immunologic responses. Instead, the observed gender differences in mortality can best be explained by background gender differences in mortality in the whole South African population. Because substantial amounts of data were missing in this study (for example, HIV disease stage was not available for all the patients), these findings need to be interpreted cautiously. Moreover, similar studies need to be done in other settings to investigate whether these findings are generalizable to the South African national ART program and to other countries. If confirmed, however, these findings suggest that the root causes of gender differences in mortality on ART may be unrelated to HIV/AIDS or to the characteristics of ART programs.
Additional Information
Please access these Web sites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.1001304.
Information is available from the US National Institute of Allergy and Infectious Diseases on HIV infection and AIDS
Information on the treatment of HIV/AIDS in South Africa is available from the Southern African HIV Clinicians Society
NAM/aidsmap provides basic information about HIV/AIDS and summaries of recent research findings on HIV care and treatment
Information is available from Avert, an international AIDS charity, on many aspects of HIV/AIDS, including information on HIV/AIDS treatment and care, and on HIV/AIDS in South Africa (in English and Spanish)
WHO provides information about universal access to AIDS treatment (in several languages); its 2010 ART guidelines can be downloaded
Information about the IeDEA-SA collaboration is available
The Treatment Action Campaign provides information on antiretroviral therapy and South African HIV statistics
Patient stories about living with HIV/AIDS are available through Avert; the nonprofit website Healthtalkonline also provides personal stories about living with HIV, including stories about taking anti-HIV drugs and the challenges of anti-HIV drugs
doi:10.1371/journal.pmed.1001304
PMCID: PMC3433409  PMID: 22973181
19.  Medical Students' Exposure to and Attitudes about the Pharmaceutical Industry: A Systematic Review 
PLoS Medicine  2011;8(5):e1001037.
A systematic review of published studies reveals that undergraduate medical students may experience substantial exposure to pharmaceutical marketing, and that this contact may be associated with positive attitudes about marketing.
Background
The relationship between health professionals and the pharmaceutical industry has become a source of controversy. Physicians' attitudes towards the industry can form early in their careers, but little is known about this key stage of development.
Methods and Findings
We performed a systematic review reported according to PRISMA guidelines to determine the frequency and nature of medical students' exposure to the drug industry, as well as students' attitudes concerning pharmaceutical policy issues. We searched MEDLINE, EMBASE, Web of Science, and ERIC from the earliest available dates through May 2010, as well as bibliographies of selected studies. We sought original studies that reported quantitative or qualitative data about medical students' exposure to pharmaceutical marketing, their attitudes about marketing practices, relationships with industry, and related pharmaceutical policy issues. Studies were separated, where possible, into those that addressed preclinical versus clinical training, and were quality rated using a standard methodology. Thirty-two studies met inclusion criteria. We found that 40%–100% of medical students reported interacting with the pharmaceutical industry. A substantial proportion of students (13%–69%) were reported as believing that gifts from industry influence prescribing. Eight studies reported a correlation between frequency of contact and favorable attitudes toward industry interactions. Students were more approving of gifts to physicians or medical students than to government officials. Certain attitudes appeared to change during medical school, though a formal time-trend analysis was not performed; for example, clinical students (53%–71%) were more likely than preclinical students (29%–62%) to report that promotional information helps educate about new drugs.
Conclusions
Undergraduate medical education provides substantial contact with pharmaceutical marketing, and the extent of such contact is associated with positive attitudes about marketing and skepticism about negative implications of these interactions. These results support future research into the association between exposure and attitudes, as well as any modifiable factors that contribute to attitudinal changes during medical education.
Please see later in the article for the Editors' Summary
Editors' Summary
Background
The complex relationship between health professionals and the pharmaceutical industry has long been a subject of discussion among physicians and policymakers. There is a growing body of evidence that suggests that physicians' interactions with pharmaceutical sales representatives may influence clinical decision making in a way that is not always in the best interests of individual patients, for example, encouraging the use of expensive treatments that have no therapeutic advantage over less costly alternatives. The pharmaceutical industry often uses physician education as a marketing tool, as in the case of Continuing Medical Education courses that are designed to drive prescribing practices.
One reason that physicians may be particularly susceptible to pharmaceutical industry marketing messages is that doctors' attitudes towards the pharmaceutical industry may form early in their careers. The socialization effect of professional schooling is strong, and plays a lasting role in shaping views and behaviors.
Why Was This Study Done?
Recently, particularly in the US, some medical schools have limited students' and faculties' contact with industry, but some have argued that these restrictions are detrimental to students' education. Given the controversy over the pharmaceutical industry's role in undergraduate medical training, consolidating current knowledge in this area may be useful for setting priorities for changes to educational practices. In this study, the researchers systematically examined studies of pharmaceutical industry interactions with medical students and whether such interactions influenced students' views on related topics.
What Did the Researchers Do and Find?
The researchers did a comprehensive literature search using appropriate search terms for all relevant quantitative and qualitative studies published before June 2010. Using strict inclusion criteria, the researchers then selected 48 articles (from 1,603 abstracts) for full review and identified 32 eligible for analysis—giving a total of approximately 9,850 medical students studying at 76 medical schools or hospitals.
Most students had some form of interaction with the pharmaceutical industry but contact increased in the clinical years, with up to 90% of all clinical students receiving some form of educational material. The highest level of exposure occurred in the US. In most studies, the majority of students in their clinical training years found it ethically permissible for medical students to accept gifts from drug manufacturers, while a smaller percentage of preclinical students reported such attitudes. Students justified their entitlement to gifts by citing financial hardship or by asserting that most other students accepted gifts. In addition, although most students believed that education from industry sources is biased, students variably reported that information obtained from industry sources was useful and a valuable part of their education.
Almost two-thirds of students reported that they were immune to bias induced by promotion, gifts, or interactions with sales representatives but also reported that fellow medical students or doctors are influenced by such encounters. Eight studies reported a relationship between exposure to the pharmaceutical industry and positive attitudes about industry interactions and marketing strategies (although not all included supportive statistical data). Finally, student opinions were split on whether physician–industry interactions should be regulated by medical schools or the government.
What Do These Findings Mean?
This analysis shows that students are frequently exposed to pharmaceutical marketing, even in the preclinical years, and that the extent of students' contact with industry is generally associated with positive attitudes about marketing and skepticism towards any negative implications of interactions with industry. Therefore, strategies to educate students about interactions with the pharmaceutical industry should directly address widely held misconceptions about the effects of marketing and other biases that can emerge from industry interactions. But education alone may be insufficient. Institutional policies, such as rules regulating industry interactions, can play an important role in shaping students' attitudes, and interventions that decrease students' contact with industry and eliminate gifts may have a positive effect on building the skills that evidence-based medical practice requires. These changes can help cultivate strong professional values and instill in students a respect for scientific principles and critical evidence review that will later inform clinical decision-making and prescribing practices.
Additional Information
Please access these Web sites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.1001037.
Further information about the influence of the pharmaceutical industry on doctors and medical students can be found at the American Medical Students Association PharmFree campaign and PharmFree Scorecard, Medsin-UK's PharmAware campaign, the nonprofit organization Healthy Skepticism, and the Web site of No Free Lunch.
doi:10.1371/journal.pmed.1001037
PMCID: PMC3101205  PMID: 21629685
20.  Guidelines, Editors, Pharma And The Biological Paradigm Shift 
Mens Sana Monographs  2007;5(1):27-30.
Private investment in biomedical research has increased over the last few decades. At most places it has been welcomed as the next best thing to technology itself. Much of the intellectual talent from academic institutions is getting absorbed in lucrative positions in industry. Applied research finds willing collaborators in venture capital funded industry, so a symbiotic growth is ensured for both.
There are significant costs involved too. As academia interacts with industry, major areas of conflict of interest especially applicable to biomedical research have arisen. They relate to disputes over patents and royalties, hostile encounters between academia and industry (as well as between public and private enterprise), legal tangles, research misconduct of various types, an antagonistic press and patient-advocate lobbies, and a general atmosphere in which commercial interests take precedence over patient welfare.
Pharma's image stinks because of a number of errors of omission and commission. Recent examples are the suppression of negative findings about Bayer's Trasylol (aprotinin) and the marketing maneuvers of Eli Lilly's Xigris (rhAPC). Whenever there is a conflict between patient vulnerability and profit motives, pharma tends to tilt towards the latter. Moreover, there are documents that bring to light how companies frequently cross the line between patient welfare and profit-seeking behaviour.
A voluntary moratorium on pharma spending to pamper drug prescribers is necessary. A code of conduct recently adopted by OPPI in India to limit pharma company expenses on junkets and trinkets is a welcome step.
Clinical practice guidelines (CPGs) are considered important because they guide the diagnostic/therapeutic regimens of a large number of medical professionals and hospitals and provide recommendations on drugs, their dosages, and criteria for selection. Along with clinical trials, they are another area of growing influence by the pharmaceutical industry. For example, a 2002 survey found that about 60% of 192 authors of clinical practice guidelines reported financial connections with the companies whose drugs were under consideration. There is a strong case for basing CPGs not just on effectiveness but also on cost-effectiveness. The various ramifications of this need to be spelt out. The work of bodies like the Appraisal of Guidelines Research and Evaluation (AGREE) Collaboration and the Guidelines Advisory Committee (GAC) is also worth a close look.
Even the actions of Foundations that work for disease amelioration have come under scrutiny. The process of setting up ‘Best Practices’ Guidelines for interactions between the pharmaceutical industry and clinicians has already begun and can have important consequences for patient care. Similarly, Good Publication Practice (GPP) guidelines for pharmaceutical companies have been set up, aimed at improving the behaviour of drug companies when reporting drug trials.
The rapidly increasing trend toward influence and control by industry has become a concern for many. It is of such importance that the Association of American Medical Colleges has issued two relatively new documents - one, in 2001, on how to deal with individual conflicts of interest; and the other, in 2002, on how to deal with institutional conflicts of interest in the conduct of clinical research. Academic Medical Centers (AMCs), as also medical education and research institutions at other places, have to adopt means that minimize their conflicts of interest.
Both medical associations and research journal editors are getting concerned with individual and institutional conflicts of interest in the conduct of clinical research and documents are now available which address these issues. The 2001 ICMJE revision calls for full disclosure of the sponsor's role in research, as well as assurance that the investigators are independent of the sponsor, are fully accountable for the design and conduct of the trial, have independent access to all trial data and control all editorial and publication decisions. However the findings of a 2002 study suggest that academic institutions routinely participate in clinical research that does not adhere to ICMJE standards of accountability, access to data and control of publication.
There is an inevitable slant towards producing not necessarily useful but marketable products that ensure industry profitability and a continuing outflow of research grants to academia. Industry supports new, not traditional, therapies, irrespective of what is effective. Whatever traditional therapy is supported is most probably supported because the company concerned has a big stake in a product there, one that has remained a ‘gold standard’ or that the player thinks still has some ‘juice’ left.
Industry sponsorship is mainly for potential medications, not for trying to determine whether there may be non-pharmacological interventions that are equally good, if not better. In the paradigm shift towards biological psychiatry, the role of industry sponsorship is not overt but is probably more pervasive than many have realised, and perhaps more pervasive than right-thinking observers would consider good for the long-term health of the branch.
An issue of major concern is protection of the interests of research subjects. Patients agree to become research subjects not only for personal medical benefit but, as an extension, to benefit the rest of the patient population and also advance medical research.
We all accept that industry profits have to be made, and investment in research and development by the pharma industry is massive. However, we must also accept there is a fundamental difference between marketing strategies for other entities and those for drugs.
The ultimate barometer is patient welfare, and no drug that compromises it can stand the test of time. So, how does it make even commercial sense in the long term to market substandard products? The greatest mistake long-term players in industry can make is to try to adopt the shady techniques of the upstart new entrant. Secrecy about marketing and sales tactics, manufacturing processes, plans for business expansion, and strategies to tackle competition is fine business practice. But it is critical that secrecy as a tactic not extend to the reporting of research findings, especially those contrary to one's own product.
Pharma has no option but to make a quality product, do comprehensive adverse reaction profiles, and market it only if it passes both tests.
Why does pharma adopt questionable tactics? The reasons are essentially two:
1. What with all the constraints, a drug comes to the pharmacy only after huge investments. There are crippling overheads and infrastructure costs to be recovered, and massive profit margins to be maintained. If these were to depend only on genuine drug discoveries, that would be taking too great a risk.
2. Industry players have to strike the right balance between profit making and credibility. In profit making, the marketing champions play their role; in credibility ratings, researchers and paid spokespersons play theirs. All is hunky dory till marketing is based on credibility. When there is nothing available to establish credibility, something is projected as credible and marketing is carried out, in the calculated hope that profits will accrue, since profit making must continue endlessly. That is what makes pharma adopt even questionable means to make profits.
Essentially, there are four types of drugs. First, drugs that work and have minimal side-effects; second, drugs which work but have serious side-effects; third, drugs that do not work and have minimal side-effects; and fourth, drugs which work minimally but have serious side-effects. It is the second and fourth types that create major hassles for industry. Often, industry may try to project the fourth type as the second to escape censure.
The major cat and mouse game being played by conscientious researchers is in exposing the third and fourth for what they are and not allowing industry to palm them off as the first and second type respectively. The other major game is in preventing the second type from being projected as the first. The third type are essentially harmless, so they attract censure all right and some merriment at the antics to market them. But they escape anything more than a light rap on the knuckles, except when they are projected as the first type.
What is necessary for industry captains and long-term players is to realise:
1. Their major propelling force can only be producing the first type.
2. They accept the second type only till they can lay their hands on the first.
3. The third type can be occasionally played around with to shore up profits, but never by projecting them as the first type.
4. The fourth type are the laggards, a real threat to credibility, and therefore do not deserve any market hype or promotion.
In finding out why most pharma indulges in questionable tactics, we are led to some interesting solutions to prevent such tactics with the least amount of hassle for all concerned, even as both profits and credibility are kept intact.
doi:10.4103/0973-1229.32176
PMCID: PMC3192391  PMID: 22058616
Academia; Pharmaceutical Industry; Clinical Practice Guidelines; Best Practice Guidelines; Academic Medical Centers; Medical Associations; Research Journals; Clinical Research; Public Welfare; Pharma Image; Corporate Welfare; Biological Psychiatry; Law Suits Against Industry
21.  Respiratory cancer in relation to occupational exposures among retired asbestos workers 
Enterline, P., de Coufle, P., and Henderson, V. (1973).British Journal of Industrial Medicine,30, 162-166. Respiratory cancer in relation to occupational exposures among retired asbestos workers. A cohort of 1 348 men who completed their working lifetime in the asbestos industry and retired with an industry pension during the period 1941-67 was observed through 1969 for deaths. The average length of employment in the asbestos industry for these men was 25 years and all had exposures to asbestos dust. In some instances these exposures were very high and continued for many years. Mortality for this cohort of men after age 65 was 14·7% higher than for the entire population of United States white men living at the same ages and time periods. This excess was due almost entirely to cancer and respiratory disease. The cancer excess was chiefly due to respiratory cancer, where mortality was 2·7 times the expected. The respiratory disease excess was entirely due to asbestosis.
A time-weighted measure of asbestos dust exposure at the time of retirement was calculated for each man. This was made up of the summed products of dust levels for each job (expressed in mppcf) and years at each level. This measure was directly related to the respiratory cancer excess at ages 65 and over, ranging from 1·7 times expected for men with less than 125 mppcf-years exposure to 5·6 times expected for men with 750 or more mppcf-years exposure. There appeared to be no direct relationship between asbestos dust exposure and respiratory cancer below 125 mppcf-years. Important increments in respiratory cancer mortality apparently occurred somewhere between 100 and 200 mppcf-years exposure.
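As a worked illustration of the exposure index described above (the sum, over jobs, of dust level in mppcf multiplied by years at that level), here is a minimal sketch in Python; the job names, dust levels, and durations are invented.

# Illustrative only: cumulative exposure in mppcf-years, i.e. the sum over a
# man's jobs of (dust level in mppcf) x (years worked at that level).
job_history = [
    # (job, dust level in mppcf, years at that level) -- hypothetical values
    ("carding",  8.0, 10),
    ("spinning", 4.5, 12),
    ("weaving",  2.0,  3),
]
mppcf_years = sum(level * years for _, level, years in job_history)
print(mppcf_years)  # 140.0 mppcf-years for this invented work history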
Separation of the effects of time from the effects of average dust level on respiratory cancer mortality showed that the contribution of each was about the same and that a time-weighted measure of asbestos dust appears to be an appropriate method for predicting respiratory cancer effects.
PMCID: PMC1009499  PMID: 4703087
22.  Point-of-Care International Normalized Ratio (INR) Monitoring Devices for Patients on Long-term Oral Anticoagulation Therapy 
Executive Summary
Subject of the Evidence-Based Analysis
The purpose of this evidence-based analysis report is to examine the safety and effectiveness of point-of-care (POC) international normalized ratio (INR) monitoring devices for patients on long-term oral anticoagulation therapy (OAT).
Clinical Need: Target Population and Condition
Long-term OAT is typically required by patients with mechanical heart valves, chronic atrial fibrillation, venous thromboembolism, myocardial infarction, stroke, and/or peripheral arterial occlusion. It is estimated that approximately 1% of the population receives anticoagulation treatment and, by applying this value to Ontario, there are an estimated 132,000 patients on OAT in the province, a figure that is expected to increase with the aging population.
Patients on OAT are regularly monitored and their medications adjusted to ensure that their INR scores remain in the therapeutic range. This can be challenging due to the narrow therapeutic window of warfarin and variation in individual responses. Optimal INR scores depend on the underlying indication for treatment and patient level characteristics, but for most patients the therapeutic range is an INR score of between 2.0 and 3.0.
The current standard of care in Ontario for patients on long-term OAT is laboratory-based INR determination with management carried out by primary care physicians or anticoagulation clinics (ACCs). Patients also regularly visit a hospital or community-based facility to provide a venous blood sample (venipuncture) that is then sent to a laboratory for INR analysis.
Experts, however, have commented that there may be under-utilization of OAT due to patient factors, physician factors, or regional practice variations and that sub-optimal patient management may also occur. There are currently no population-based Ontario data to permit the assessment of patient care, but recent systematic reviews have estimated that less than 50% of patients receive OAT on a routine basis and that patients are in the therapeutic range only 64% of the time.
Overview of POC INR Devices
POC INR devices offer an alternative to laboratory-based testing and venipuncture, enabling INR determination from a fingerstick sample of whole blood. Independent evaluations have shown POC devices to have an acceptable level of precision. They permit INR results to be determined immediately, allowing for more rapid medication adjustments.
POC devices can be used in a variety of settings including physician offices, ACCs, long-term care facilities, pharmacies, or by the patients themselves through self-testing (PST) or self-management (PSM) techniques. With PST, patients measure their INR values and then contact their physician for instructions on dose adjustment, whereas with PSM, patients adjust the medication themselves based on pre-set algorithms. These models are not suitable for all patients and require the identification and education of suitable candidates.
Potential advantages of POC devices include improved convenience to patients, better treatment compliance and satisfaction, more frequent monitoring and fewer thromboembolic and hemorrhagic complications. Potential disadvantages of the device include the tendency to underestimate high INR values and overestimate low INR values, low thromboplastin sensitivity, inability to calculate a mean normal PT, and errors in INR determination in patients with antiphospholipid antibodies with certain instruments. Although treatment satisfaction and quality of life (QoL) may improve with POC INR monitoring, some patients may experience increased anxiety or preoccupation with their disease with these strategies.
Evidence-Based Analysis Methods
Research Questions
1. Effectiveness
Does POC INR monitoring improve clinical outcomes in various settings compared to standard laboratory-based testing?
Does POC INR monitoring impact patient satisfaction, QoL, compliance, acceptability, and convenience compared to standard laboratory-based INR determination?
Settings include primary care settings with use of POC INR devices by general practitioners or nurses, ACCs, pharmacies, long-term care homes, and use by the patient either for PST or PSM.
2. Cost-effectiveness
What is the cost-effectiveness of POC INR monitoring devices in various settings compared to standard laboratory-based INR determination?
Inclusion Criteria
English-language RCTs, systematic reviews, and meta-analyses
Publication dates: 1996 to November 25, 2008
Population: patients on OAT
Intervention: anticoagulation monitoring by POC INR device in any setting including anticoagulation clinic, primary care (general practitioner or nurse), pharmacy, long-term care facility, PST, PSM or any other POC INR strategy
Minimum sample size: 50 patients
Minimum follow-up period: 3 months
Comparator: usual care defined as venipuncture blood draw for an INR laboratory test and management provided by an ACC or individual practitioner
Outcomes: Hemorrhagic events, thromboembolic events, all-cause mortality, anticoagulation control as assessed by proportion of time or values in the therapeutic range, patient reported outcomes including satisfaction, QoL, compliance, acceptability, convenience
Exclusion criteria
Non-RCTs, before-after studies, quasi-experimental studies, observational studies, case reports, case series, editorials, letters, non-systematic reviews, conference proceedings, abstracts, non-English articles, duplicate publications
Studies where POC INR devices were compared to laboratory testing to assess test accuracy
Studies where the POC INR results were not used to guide patient management
Method of Review
A search of electronic databases (OVID MEDLINE, MEDLINE In-Process & Other Non-Indexed Citations, EMBASE, The Cochrane Library, and the International Agency for Health Technology Assessment [INAHTA] database) was undertaken to identify evidence published from January 1, 1998 to November 25, 2008. Studies meeting the inclusion criteria were selected from the search results. Reference lists of selected articles were also checked for relevant studies.
Summary of Findings
Five existing reviews and 22 articles describing 17 unique RCTs met the inclusion criteria. Three RCTs examined POC INR monitoring devices with PST strategies, 11 RCTs examined PSM strategies, one RCT included both PST and PSM strategies and two RCTs examined the use of POC INR monitoring devices by health care professionals.
Anticoagulation Control
Anticoagulation control is measured by the percentage of time INR is within the therapeutic range or by the percentage of INR values in the therapeutic range. Due to the differing methodologies and reporting structures used, it was deemed inappropriate to combine the data and estimate whether the difference between groups would be significant. Instead, the results of individual studies were weighted by the number of person-years of observation and then pooled to calculate a summary measure.
Across most studies, patients in the intervention groups tended to have a higher percentage of time and values in the therapeutic target range in comparison to control patients. When the percentage of time in the therapeutic range was pooled across studies and weighted by the number of person-years of observation, the difference between the intervention and control groups was 4.2% for PSM, 7.2% for PST and 6.1% for POC use by health care practitioners. Overall, intervention patients were in the target range 69% of the time and control patients were in the therapeutic target range 64% of the time leading to an overall difference between groups of roughly 5%.
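A minimal sketch of the person-year-weighted pooling described above; the study-level percentages and person-years are invented for illustration.

# Illustrative only: pool each study's percentage of time in the therapeutic
# range, weighting by its person-years of observation, as described above.
studies = [
    # (intervention % time in range, control % time in range, person-years)
    (72.0, 65.0, 400),
    (68.0, 64.0, 250),
    (66.0, 62.0, 150),
]
total_py = sum(py for _, _, py in studies)
pooled_intervention = sum(pct * py for pct, _, py in studies) / total_py
pooled_control = sum(pct * py for _, pct, py in studies) / total_py
print(round(pooled_intervention, 1),                     # 69.6 with these invented inputs
      round(pooled_control, 1),                          # 64.1
      round(pooled_intervention - pooled_control, 1))    # 5.5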
Major Complications and Deaths
There was no statistically significant difference in the number of major hemorrhagic events between patients managed with POC INR monitoring devices and patients managed with standard laboratory testing (OR = 0.74; 95% CI: 0.52–1.04). This difference was non-significant for all POC strategies (PSM, PST, health care practitioner).
Patients managed with POC INR monitoring devices had significantly fewer thromboembolic events than usual care patients (OR = 0.52; 95% CI: 0.37–0.74). When divided by POC strategy, PSM resulted in significantly fewer thromboembolic events than usual care (OR = 0.46; 95% CI: 0.29–0.72). The observed difference in thromboembolic events for PSM remained significant when the analysis was limited to major thromboembolic events (OR = 0.40; 95% CI: 0.17–0.93), but was non-significant when the analysis was limited to minor thromboembolic events (OR = 0.73; 95% CI: 0.08–7.01). PST and GP/Nurse strategies did not result in significant differences in thromboembolic events; however, only a limited number of studies examined these interventions.
No statistically significant difference was observed in the number of deaths between POC intervention and usual care control groups (OR = 0.67; 95% CI: 0.41–1.10). This difference was non-significant for all POC strategies. Only one study reported on survival, with a 10-year survival rate of 76.1% in the usual care control group compared to 84.5% in the PSM group (P = 0.05).
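The executive summary does not state which meta-analytic model was used; as one common approach, the following sketch pools study-level odds ratios with fixed-effect inverse-variance weighting on the log scale, using invented study results.

# Illustrative only: fixed-effect, inverse-variance pooling of study-level
# odds ratios on the log scale. The ORs and confidence intervals are invented.
import math

def pool_odds_ratios(studies):
    # studies: list of (OR, lower 95% CI limit, upper 95% CI limit)
    weights, log_ors = [], []
    for or_, lo, hi in studies:
        se = (math.log(hi) - math.log(lo)) / (2 * 1.96)  # SE recovered from the CI
        weights.append(1.0 / se ** 2)
        log_ors.append(math.log(or_))
    pooled = sum(w * x for w, x in zip(weights, log_ors)) / sum(weights)
    pooled_se = math.sqrt(1.0 / sum(weights))
    return (math.exp(pooled),
            math.exp(pooled - 1.96 * pooled_se),
            math.exp(pooled + 1.96 * pooled_se))

# pooled OR and 95% CI for three hypothetical studies
print(pool_odds_ratios([(0.55, 0.30, 1.00), (0.48, 0.25, 0.92), (0.60, 0.31, 1.15)]))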
Summary Results of Meta-Analyses of Major Complications and Deaths in POC INR Monitoring Studies
Patient Satisfaction and Quality of Life
Quality of life measures were reported in eight studies comparing POC INR monitoring to standard laboratory testing using a variety of measurement tools. It was thus not possible to calculate a quantitative summary measure. The majority of studies reported favourable impacts of POC INR monitoring on QoL and found better treatment satisfaction with POC monitoring. Results from a pre-analysis patient and caregiver focus group conducted in Ontario also indicated improved patient QoL with POC monitoring.
Quality of the Evidence
Studies varied with regard to patient eligibility, baseline patient characteristics, follow-up duration, and withdrawal rates. Differential drop-out rates were observed such that the POC intervention groups tended to have a larger number of patients who withdrew. There was a lack of consistency in the definitions and reporting for OAT control and definitions of adverse events. In most studies, the intervention group received more education on the use of warfarin and performed more frequent INR testing, which may have overestimated the effect of the POC intervention. Patient selection and eligibility criteria were not always fully described and it is likely that the majority of the PST/PSM trials included a highly motivated patient population. Lastly, a large number of trials were also sponsored by industry.
Despite the observed heterogeneity among studies, there was a general consensus in findings that POC INR monitoring devices have beneficial impacts on the risk of thromboembolic events, anticoagulation control and patient satisfaction and QoL (ES Table 2).
GRADE Quality of the Evidence on POC INR Monitoring Studies
Economic Analysis
Using a 5-year Markov model, the health and economic outcomes associated with four different anticoagulation management approaches were evaluated:
Standard care: consisting of a laboratory test with a venipuncture blood draw for an INR;
Healthcare staff testing: consisting of a test with a POC INR device in a medical clinic comprised of healthcare staff such as pharmacists, nurses, and physicians following protocol to manage OAT;
PST: patient self-testing using a POC INR device and phoning in results to an ACC or family physician; and
PSM: patient self-managing using a POC INR device and self-adjustment of OAT according to a standardized protocol. Patients may also phone in to a medical office for guidance.
The primary analytic perspective was that of the MOHLTC. Only direct medical costs were considered and the time horizon of the model was five years - the serviceable life of a POC device.
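To illustrate the general structure of such an analysis, here is a minimal Markov cohort model sketch with yearly cycles over a 5-year horizon; the states, transition probabilities, costs, and discount rate are invented and do not reproduce the Secretariat's model.

# Illustrative only: a tiny Markov cohort model run over a 5-year horizon.
# States, annual transition probabilities, costs, and the discount rate are
# invented; the actual economic model was more detailed.
states = ["stable", "minor_event", "major_event", "dead"]
transition = {
    "stable":      {"stable": 0.90, "minor_event": 0.05, "major_event": 0.03, "dead": 0.02},
    "minor_event": {"stable": 0.80, "minor_event": 0.10, "major_event": 0.05, "dead": 0.05},
    "major_event": {"stable": 0.50, "minor_event": 0.10, "major_event": 0.20, "dead": 0.20},
    "dead":        {"stable": 0.00, "minor_event": 0.00, "major_event": 0.00, "dead": 1.00},
}
annual_cost = {"stable": 300, "minor_event": 1500, "major_event": 8000, "dead": 0}

cohort = {"stable": 1.0, "minor_event": 0.0, "major_event": 0.0, "dead": 0.0}
total_cost, discount_rate = 0.0, 0.05
for year in range(5):
    total_cost += sum(cohort[s] * annual_cost[s] for s in states) / (1 + discount_rate) ** year
    cohort = {t: sum(cohort[s] * transition[s][t] for s in states) for t in states}
print(round(total_cost, 2))  # expected discounted 5-year cost per patient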
From the results of the economic analysis, it was found that POC strategies are cost-effective compared to traditional INR laboratory testing. In particular, the healthcare staff testing strategy can derive potential cost savings from the use of one device for multiple patients. The PSM strategy, however, appears to be the most cost-effective method, as patients are more inclined to adjust their INRs readily (as opposed to allowing INRs to fall out of range).
Considerations for Ontario Health System
Although the use of POC devices continues to diffuse throughout Ontario, not all OAT patients are suitable for or able to practice PST/PSM. The use of POC devices is currently concentrated in institutional settings, including hospitals, ACCs, long-term care facilities, physician offices, and pharmacies, and is much less common at the patient level. It is, however, estimated that 24% of OAT patients (representing approximately 32,000 patients in Ontario) would be suitable candidates for PST/PSM strategies and willing to use a POC device.
There are several barriers to the use and implementation of POC INR monitoring devices, including factors such as lack of physician familiarity with the devices, resistance to changing established laboratory-based methods, lack of an approach for identifying suitable patients, and inadequate resources for effective patient education and training. Issues of cost and insufficient reimbursement strategies may also hinder implementation, and effective quality assurance programs would need to be developed to ensure that INR measurements are accurate and precise.
Conclusions
For a select group of patients who are highly motivated and trained, PSM resulted in significantly fewer thromboembolic events compared to conventional laboratory-based INR testing. No significant differences were observed for major hemorrhages or all-cause mortality. PST and GP/Nurse use of POC strategies are just as effective as conventional laboratory-based INR testing for thromboembolic events, major hemorrhages, and all-cause mortality. POC strategies may also result in better OAT control as measured by the proportion of time INR is in the therapeutic range and there appears to be beneficial impacts on patient satisfaction and QoL. The use of POC devices should factor in patient suitability, patient education and training, health system constraints, and affordability.
Keywords
anticoagulants, International Normalized Ratio, point-of-care, self-monitoring, warfarin.
PMCID: PMC3377545  PMID: 23074516
23.  Expanding Disease Definitions in Guidelines and Expert Panel Ties to Industry: A Cross-sectional Study of Common Conditions in the United States 
PLoS Medicine  2013;10(8):e1001500.
Background
Financial ties between health professionals and industry may unduly influence professional judgments and some researchers have suggested that widening disease definitions may be one driver of over-diagnosis, bringing potentially unnecessary labeling and harm. We aimed to identify guidelines in which disease definitions were changed, to assess whether any proposed changes would increase the numbers of individuals considered to have the disease, whether potential harms of expanding disease definitions were investigated, and the extent of members' industry ties.
Methods and Findings
We undertook a cross-sectional study of the most recent publication between 2000 and 2013 from national and international guideline panels making decisions about definitions or diagnostic criteria for common conditions in the United States. We assessed whether proposed changes widened or narrowed disease definitions, rationales offered, mention of potential harms of those changes, and the nature and extent of disclosed ties between members and pharmaceutical or device companies.
Of 16 publications on 14 common conditions, ten proposed changes widening and one narrowing definitions. For five, impact was unclear. Widening fell into three categories: creating “pre-disease”; lowering diagnostic thresholds; and proposing earlier or different diagnostic methods. Rationales included standardising diagnostic criteria and new evidence about risks for people previously considered to not have the disease. No publication included rigorous assessment of potential harms of proposed changes.
Among 14 panels with disclosures, the average proportion of members with industry ties was 75%. Twelve were chaired by people with ties. For members with ties, the median number of companies to which they had ties was seven. Companies with ties to the highest proportions of members were active in the relevant therapeutic area. Limitations arise from reliance on only disclosed ties, and exclusion of conditions too broad to enable analysis of single panel publications.
Conclusions
For the common conditions studied, a majority of panels proposed changes to disease definitions that increased the number of individuals considered to have the disease, none reported rigorous assessment of potential harms of that widening, and most had a majority of members disclosing financial ties to pharmaceutical companies.
Please see later in the article for the Editors' Summary
Editors' Summary
Background
Health professionals generally base their diagnosis of physical and mental disorders among their patients on disease definitions and diagnostic thresholds that are drawn up by expert panels and published as statements or as part of clinical practice guidelines. These disease definitions and diagnostic thresholds are reviewed and updated in response to changes in disease detection methods, treatments, medical knowledge, and, in the case of mental illness, changes in cultural norms. Sometimes, the review process widens disease definitions and lowers diagnostic thresholds. Such changes can be beneficial. For example, they might ensure that life-threatening conditions are diagnosed early when they are still treatable. But the widening of disease definitions can also lead to over-diagnosis—the diagnosis of a condition in a healthy individual that will never cause any symptoms and won't lead to an early death. Over-diagnosis can unnecessarily label people as ill, harm healthy individuals by exposing them to treatments they do not need, and waste resources that could be used to treat or prevent “genuine” illness.
Why Was This Study Done?
In recent years, evidence for widespread financial and non-financial ties between pharmaceutical companies and the health professionals involved in writing clinical practice guidelines has increased, and concern that these links may influence professional judgments has grown. As a result, a 2011 report from the US Institute of Medicine (IOM) recommended that, whenever possible, guideline developers should not have conflicts of interest, that a minority of the panel members involved in guideline development should have conflicts of interest, and that the chairs of these panels should be free of conflicts. Much less is known, however, about the ties between industry and the health professionals involved in reviewing disease definitions and whether these ties might in some way contribute to over-diagnosis. In this cross-sectional study (an investigation that takes a snapshot of a situation at a single time point), the researchers identify panels that have recently made decisions about definitions or diagnostic thresholds for conditions that are common in the US and describe the industry ties among the panel members and the changes in disease definitions proposed by the panels.
What Did the Researchers Do and Find?
The researchers identified 16 publications in which expert panels proposed changes to the disease definitions and diagnostic criteria for 14 conditions that are common in the US such as hypertension (high blood pressure) and Alzheimer disease. The proposed changes widened the disease definition for ten diseases, narrowed it for one disease, and had an unclear impact for five diseases. Reasons included in the publications for changing disease definitions included new evidence of risk for people previously considered normal (pre-hypertension) and the emergence of new biomarkers, tests, or treatments (Alzheimer disease). Only six of the panels mentioned possible harms of the proposed changes and none appeared to rigorously assess the downsides of expanding definitions. Of the 15 panels involved in the publications (one panel produced two publications), 12 included members who disclosed financial ties to multiple companies. Notably, the commonest industrial ties among these panels were to companies marketing drugs for the disease being considered by that panel. On average, 75% of panel members disclosed industry ties (range 0% to 100%) to a median of seven companies each. Moreover, similar proportions of panel members disclosed industry ties in publications released before and after the 2011 IOM report.
What Do These Findings Mean?
These findings show that, for the conditions studied, most panels considering disease definitions and diagnostic criteria proposed changes that widened disease definitions and that financial ties with pharmaceutical companies with direct interests in the therapeutic area covered by the panel were common among panel members. Because this study does not include a comparison group, these findings do not establish a causal link between industry ties and proposals to change disease definitions. Moreover, because the study concentrates on a subset of common diseases in the US setting, the generalizability of these findings is limited. Despite these and other study limitations, these findings provide new information about the ties between industry and influential medical professionals and raise questions about the current processes of disease definition. Future research, the researchers suggest, should investigate how disease definitions change over time, how much money panel members receive from industry, and how panel proposals affect the potential market of sponsors. Finally it should aim to design new processes for reviewing disease definitions that are free from potential conflicts of interest.
Additional Information
Please access these Web sites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.1001500.
A PLOS Medicine Research Article by Knüppel et al. assesses the representation of ethical issues in general clinical practice guidelines on dementia care
Wikipedia has a page on medical diagnosis (note: Wikipedia is a free online encyclopedia that anyone can edit; available in several languages)
An article on over-diagnosis by two of the study authors is available; an international conference on preventing over-diagnosis will take place this September
The 2011 US Institute of Medicine report Clinical Practice Guidelines We Can Trust is available
A PLOS Medicine Essay by Lisa Cosgrove and Sheldon Krimsky discusses the financial ties with industry of panel members involved in the preparation of the latest revision of the American Psychiatric Association's Diagnostic and Statistical Manual of Mental Disorders (DSM), which provides standard criteria for the classification of mental disorders
doi:10.1371/journal.pmed.1001500
PMCID: PMC3742441  PMID: 23966841
24.  Inequalities in Alcohol-Related Mortality in 17 European Countries: A Retrospective Analysis of Mortality Registers 
PLoS Medicine  2015;12(12):e1001909.
Background
Socioeconomic inequalities in alcohol-related mortality have been documented in several European countries, but it is unknown whether the magnitude of these inequalities differs between countries and whether these inequalities increase or decrease over time.
Methods and Findings
We collected and harmonized data on mortality from four alcohol-related causes (alcoholic psychosis, dependence, and abuse; alcoholic cardiomyopathy; alcoholic liver cirrhosis; and accidental poisoning by alcohol) by age, sex, education level, and occupational class in 20 European populations from 17 different countries, both for a recent period and for previous points in time, using data from mortality registers. Mortality was age-standardized using the European Standard Population, and measures for both relative and absolute inequality between low and high socioeconomic groups (as measured by educational level and occupational class) were calculated.
Rates of alcohol-related mortality are higher in lower educational and occupational groups in all countries. Both relative and absolute inequalities are largest in Eastern Europe, and Finland and Denmark also have very large absolute inequalities in alcohol-related mortality. For example, for educational inequality among Finnish men, the relative index of inequality is 3.6 (95% CI 3.3–4.0) and the slope index of inequality is 112.5 (95% CI 106.2–118.8) deaths per 100,000 person-years. Over time, the relative inequality in alcohol-related mortality has increased in many countries, but the main change is a strong rise of absolute inequality in several countries in Eastern Europe (Hungary, Lithuania, Estonia) and Northern Europe (Finland, Denmark) because of a rapid rise in alcohol-related mortality in lower socioeconomic groups. In some of these countries, alcohol-related causes now account for 10% or more of the socioeconomic inequality in total mortality.
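For readers unfamiliar with these measures, the following sketch shows one common way to compute a slope index of inequality (SII) and relative index of inequality (RII): regress group-specific age-standardised rates on the midpoint of each group's cumulative population share, weighting by group size. The group shares and rates below are invented and are not the study's data.

# Illustrative only: slope index of inequality (SII) and relative index of
# inequality (RII) from group-specific, age-standardised death rates, using
# weighted least squares on the midpoint of each group's cumulative population
# share (groups ordered from highest to lowest education). Values are invented.
groups = [
    # (population share, deaths per 100,000 person-years)
    (0.25, 30.0),   # high education
    (0.45, 55.0),   # mid education
    (0.30, 90.0),   # low education
]
ridits, cum = [], 0.0
for share, _ in groups:                 # midpoint of cumulative share per group
    ridits.append(cum + share / 2)
    cum += share

w = [share for share, _ in groups]      # weights = population shares
x, y = ridits, [rate for _, rate in groups]
xbar = sum(wi * xi for wi, xi in zip(w, x)) / sum(w)
ybar = sum(wi * yi for wi, yi in zip(w, y)) / sum(w)
slope = (sum(wi * (xi - xbar) * (yi - ybar) for wi, xi, yi in zip(w, x, y))
         / sum(wi * (xi - xbar) ** 2 for wi, xi in zip(w, x)))
intercept = ybar - slope * xbar

sii = slope                              # absolute rate gap, bottom vs top of the hierarchy
rii = (intercept + slope) / intercept    # predicted rate at rank 1 divided by rate at rank 0
print(round(sii, 1), round(rii, 2))      # about 83 deaths/100,000 and about 5.7 with these inputs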
Because our study relies on routinely collected underlying causes of death, it is likely that our results underestimate the true extent of the problem.
Conclusions
Alcohol-related conditions play an important role in generating inequalities in total mortality in many European countries. Countering increases in alcohol-related mortality in lower socioeconomic groups is essential for reducing inequalities in mortality. Studies of why such increases have not occurred in countries like France, Switzerland, Spain, and Italy can help in developing evidence-based policies in other European countries.
In a harmonized analysis of regional data, Johan Mackenbach and colleagues characterize three decades of alcohol-related mortality across socioeconomic groups in Europe.
Editors' Summary
Background
People have consumed alcoholic beverages throughout history, but, globally, about three million people die from alcohol-related causes every year. Alcohol consumption, particularly in higher amounts, is a risk factor for cardiovascular disease (diseases of the heart and/or blood vessels), liver cirrhosis (scarring of the liver), injuries, and many other fatal and nonfatal health problems. Alcohol also affects the well-being and health of people around those who drink, through alcohol-related crime and road traffic crashes. The impact of alcohol use on health depends on the amount of alcohol consumed and on the pattern of drinking. Most guidelines on alcohol consumption recommend that men should regularly consume no more than two alcoholic drinks per day and that women should regularly consume no more than one drink per day (a “drink” is, roughly speaking, a can of beer or a small glass of wine). The guidelines also advise people to avoid binge drinking—the consumption of five or more drinks on a single occasion for men or four or more drinks on a single occasion for women.
Why Was This Study Done?
Like many other behaviors that affect health, alcohol consumption is affected by socioeconomic status (an individual’s economic and social position in relation to others based on income, level of education, and occupation). Thus, in many European countries, the frequency of drinking and the levels of alcohol consumption are greater in higher socioeconomic groups than in lower socioeconomic groups, whereas binge drinking and other problematic forms of alcohol consumption occur more frequently in lower socioeconomic groups. Importantly, higher levels of mortality (death) from alcohol-related conditions have been documented in lower socioeconomic groups than in higher socioeconomic groups in several European countries. Here, the researchers analyze mortality registers to find out whether the magnitude of socioeconomic inequalities in alcohol-related mortality differs among European countries and whether these inequalities have changed over time. Documenting these differences and changes is important because it may help to explain socioeconomic inequalities in alcohol-related mortality and thus inform policies and interventions designed to reduce alcohol-related harm and socioeconomic inequalities in mortality.
What Did the Researchers Do and Find?
The researchers obtained data on deaths from alcoholic psychosis, dependence, and abuse; alcoholic cardiomyopathy (a type of heart disease); alcoholic liver cirrhosis; and accidental alcohol poisoning from the mortality registers of 17 European countries. Using available data on educational level and occupational class, they calculated relative and absolute socioeconomic inequalities in alcohol-related mortality (relative inequality reflects mortality differences between socioeconomic groups in terms of a proportion or percentage; absolute inequality reflects mortality differences between groups in terms of deaths per 100,000 person-years). Rates of alcohol-related mortality were higher in individuals with less education or with manual (as opposed to non-manual) occupations in all 17 countries. Both relative and absolute inequalities were largest in Eastern Europe but Finland and Denmark also had very large absolute inequalities in alcohol-related mortality. For example, among Finnish men, those with the lowest level of education were 3.6 times more likely to die from an alcohol-related cause than those with the highest level of education, and there were 112.5 more deaths per 100,000 person-years among those with the lowest level of education than among those with the highest level of education. The relative inequality in alcohol-related mortality increased over time in many countries. Moreover, the absolute inequality increased markedly in Hungary, Lithuania, Estonia, Finland, and Denmark because of a rapid rise in alcohol-related mortality in lower socioeconomic groups. By contrast, mortality from alcohol-related causes among lower educated men was stable in France, Switzerland, Spain, and Italy.
What Do These Findings Mean?
These findings suggest that alcohol-related conditions are an important contributing factor to the socioeconomic inequality in total mortality in many European countries. Indeed, in some countries (for example, Finland), alcohol-related causes account for 10% or more of the socioeconomic inequality in total mortality among men. The accuracy of these findings is likely to be affected by the use of routinely collected underlying causes of death and by other aspects of the study design. Importantly, however, these findings indicate that to reduce socioeconomic inequalities in mortality, health professionals and governments need to introduce interventions and policies designed to counter recent increases in alcohol-related mortality in lower socioeconomic groups. Further investigation of why such increases have not occurred in some countries may help in the design of these important public health initiatives.
Additional Information
This list of resources contains links that can be accessed when viewing the PDF on a device or via the online version of the article at http://dx.doi.org/10.1371/journal.pmed.1001909.
The World Health Organization provides detailed information about alcohol, including a fact sheet on the harmful use of alcohol; its Global Status Report on Alcohol and Health 2014 provides country profiles for alcohol consumption, information on the impact of alcohol use on health, and policy responses; the Global Information System on Alcohol and Health provides further information about alcohol, including information on control policies around the world
The US National Institute on Alcohol Abuse and Alcoholism has information about alcohol and its effects on health; it provides interactive worksheets to help people evaluate their drinking and decide whether and how to make a change
The US Centers for Disease Control and Prevention provides information on alcohol and public health
The UK National Health Service Choices website provides detailed information about drinking and alcohol, including information on the risks of drinking too much, tools for calculating alcohol consumption, and personal stories about alcohol use problems
MedlinePlus provides links to many other resources on alcohol
doi:10.1371/journal.pmed.1001909
PMCID: PMC4666661  PMID: 26625134
25.  Measuring Adult Mortality Using Sibling Survival: A New Analytical Method and New Results for 44 Countries, 1974–2006 
PLoS Medicine  2010;7(4):e1000260.
Julie Rajaratnam and colleagues describe a novel method, called the Corrected Sibling Survival method, to measure adult mortality in countries without good vital registration by use of histories taken from surviving siblings.
Background
For several decades, global public health efforts have focused on the development and application of disease control programs to improve child survival in developing populations. The need to reliably monitor the impact of such intervention programs in countries has led to significant advances in demographic methods and data sources, particularly with large-scale, cross-national survey programs such as the Demographic and Health Surveys (DHS). Although no comparable effort has been undertaken for adult mortality, the availability of large datasets with information on adult survival from censuses and household surveys offers an important opportunity to dramatically improve our knowledge about levels and trends in adult mortality in countries without good vital registration. To date, attempts to measure adult mortality from questions in censuses and surveys have generally led to implausibly low levels of adult mortality owing to biases inherent in survey data such as survival and recall bias. Recent methodological developments and the increasing availability of large surveys with information on sibling survival suggest that it may well be timely to reassess the pessimism that has prevailed around the use of sibling histories to measure adult mortality.
Methods and Findings
We present the Corrected Sibling Survival (CSS) method, which addresses both the survival and recall biases that have plagued the use of survey data to estimate adult mortality. Using logistic regression, our method directly estimates the probability of dying in a given country, by age, sex, and time period from sibling history data. The logistic regression framework borrows strength across surveys and time periods for the estimation of the age patterns of mortality, and facilitates the implementation of solutions for the underrepresentation of high-mortality families and recall bias. We apply the method to generate estimates of and trends in adult mortality, using the summary measure 45q15 (the probability of a 15-year-old dying before his or her 60th birthday) for 44 countries with DHS sibling survival data. Our findings suggest that levels of adult mortality prevailing in many developing countries are substantially higher than previously suggested by other analyses of sibling history data. Generally, our estimates show the risk of adult death between ages 15 and 60 y to be about 20%–35% for females and 25%–45% for males in sub-Saharan African populations largely unaffected by HIV. In countries of Southern Africa, where the HIV epidemic has been most pronounced, as many as eight out of ten men alive at age 15 y will be dead by age 60, as will six out of ten women. Adult mortality levels in populations of Asia and Latin America are generally lower than in Africa, particularly for women. The exceptions are Haiti and Cambodia, where mortality risks are comparable to many countries in Africa. In all other countries with data, the probability of dying between ages 15 and 60 y was typically around 10% for women and 20% for men, not much higher than the levels prevailing in several more developed countries.
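As a point of reference for the 45q15 notation, the sketch below shows the standard life-table way of assembling the probability that a 15-year-old dies before age 60 from 5-year probabilities of death; the 5qx values used here are hypothetical placeholders, not CSS estimates from the article.

```python
# Sketch: building the summary measure 45q15 (probability that a 15-year-old
# dies before age 60) from age-specific probabilities of death.
# The 5-year probabilities (5qx) below are hypothetical placeholders.

hypothetical_5qx = {
    15: 0.010, 20: 0.015, 25: 0.018, 30: 0.020, 35: 0.024,
    40: 0.028, 45: 0.035, 50: 0.045, 55: 0.060,
}  # probability of dying within each 5-year age group, given survival to its start

def q_45_15(five_year_qx):
    """45q15 = 1 - product over age groups 15-19 ... 55-59 of (1 - 5qx)."""
    survival = 1.0
    for age in range(15, 60, 5):
        survival *= 1.0 - five_year_qx[age]
    return 1.0 - survival

print(f"45q15 = {q_45_15(hypothetical_5qx):.3f}")
```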
Conclusions
Our results represent an expansion of direct knowledge of levels and trends in adult mortality in the developing world. The CSS method provides grounds for renewed optimism in collecting sibling survival data. We suggest that all nationally representative survey programs with adequate sample size ought to implement this critical module for tracking adult mortality in order to more reliably understand the levels and patterns of adult mortality, and how they are changing.
Please see later in the article for the Editors' Summary
Editors' Summary
Background
Governments and international health agencies need accurate information on births and deaths in populations to help them plan health care policies and monitor the effectiveness of public-health programs designed, for example, to prevent premature deaths from preventable causes such as tobacco smoking. In developed countries, full information on births and deaths is recorded in “vital registration systems.” Unfortunately, very few developing countries have complete vital registration systems. In most African countries, for example, less than one-quarter of deaths are counted through vital registration systems. To fill this information gap, scientists have developed several methods to estimate mortality levels (the proportion of deaths in populations) and trends in mortality (how the proportion of deaths in populations changes over time) from data collected in household surveys and censuses. A household survey collects data about family members (for example, number, age, and sex) for a national sample of households randomly selected from a list of households collected in a census (a periodic count of a population).
Why Was This Study Done?
To date, global public-health efforts have concentrated on improving child survival. Consequently, methods for calculating child mortality levels and trends from surveys are well-developed and generally yield accurate estimates. By contrast, although attempts have been made to measure adult mortality using sibling survival histories (records, collected in many household surveys, of the sex of each child born to a survey respondent's mother, together with each sibling's current age if alive or age at death if dead), these attempts have often produced implausibly low estimates of adult mortality. These low estimates arise because people do not always recall deaths accurately when questioned (recall bias) and because families that have fallen apart, possibly because of family deaths, are underrepresented in household surveys (selection bias). In this study, the researchers develop a corrected sibling survival (CSS) method that addresses the problems of selection and recall bias and use their method to estimate mortality levels and trends in 44 developing countries between 1974 and 2006.
What Did the Researchers Do and Find?
The researchers used a statistical approach called logistic regression to develop the CSS method. They then used the method to estimate the probability of a 15-year-old dying before his or her 60th birthday from sibling survival data collected by the Demographic and Health Surveys program (DHS, a project started in 1984 to help developing countries collect data on population and health trends). Levels of adult mortality estimated in this way were considerably higher than those suggested by previous analyses of sibling history data. For example, the risk of adult death between the ages of 15 and 60 years was 20%–35% for women and 25%–45% for men living in sub-Saharan African countries largely unaffected by HIV, and as high as 60% for women and 80% for men living in countries in Southern Africa where the HIV epidemic is worst. Importantly, the researchers show that their mortality level estimates compare well to those obtained from vital registration data and other data sources where available. So, for example, in the Philippines, adult mortality levels estimated using the CSS method were similar to those obtained from vital registration data. Finally, the researchers used the CSS method to estimate mortality trends. These calculations reveal, for example, that there has been a 3–4-fold increase in adult mortality since the late 1980s in Zimbabwe, a country badly affected by the HIV epidemic.
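To make the general shape of the estimation step concrete, here is a minimal sketch of a weighted logistic regression of a died-in-period indicator on age group, sex, and period. The simulated sibling records, variable names, and weights are all hypothetical, and this is not the published CSS implementation, which applies specific corrections for selection and recall bias rather than the placeholder weights used here.

```python
# Minimal sketch, not the published CSS method: simulated sibling records and
# illustrative weights, fit with a weighted logistic regression.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 2000  # simulated sibling records reported by survey respondents

siblings = pd.DataFrame({
    "age_group": rng.choice(["15-29", "30-44", "45-59"], size=n),
    "sex": rng.choice(["female", "male"], size=n),
    "period": rng.choice(["1990s", "2000s"], size=n),
})
# Simulate deaths with higher risk at older ages and for men (illustrative only).
base_risk = {"15-29": 0.03, "30-44": 0.06, "45-59": 0.12}
p_death = siblings["age_group"].map(base_risk) * np.where(siblings["sex"] == "male", 1.4, 1.0)
siblings["died_in_period"] = rng.binomial(1, p_death)

# Placeholder selection-bias weights: records from families with fewer surviving
# potential respondents would be up-weighted (integer frequencies used here).
siblings["weight"] = rng.choice([1, 1, 2], size=n)

# Weighted logistic regression of dying on age group, sex, and period.
model = smf.glm(
    "died_in_period ~ C(age_group) + C(sex) + C(period)",
    data=siblings,
    family=sm.families.Binomial(),
    freq_weights=siblings["weight"],
).fit()

# Predicted probability of dying in the period for one covariate pattern.
new_obs = pd.DataFrame({"age_group": ["45-59"], "sex": ["male"], "period": ["2000s"]})
print(model.predict(new_obs))
```

In practice the fitted probabilities for each age group would then be combined into a summary measure such as 45q15, as sketched earlier.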
What Do These Findings Mean?
These findings suggest that the CSS method, which applies a correction for both selection and recall bias, yields more accurate estimates of adult mortality in developing countries from sibling survival data than previous methods. Given their findings, the researchers suggest that sibling survival histories should be routinely collected in all future household survey programs and, if possible, these surveys should be expanded so that all respondents are asked about sibling histories—currently the DHS only collects sibling histories from women aged 15–49 years. Widespread collection of such data and their analysis using the CSS method, the researchers conclude, would help governments and international agencies track trends in adult mortality and progress toward major health and development targets.
Additional Information
Please access these Web sites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.1000260.
This study and two related PLoS Medicine Research Articles by Rajaratnam et al. and by Murray et al. are further discussed in a PLoS Medicine Perspective by Mathers and Boerma
Information is available about the Demographic and Health Surveys
The Institute for Health Metrics and Evaluation makes available high-quality information on population health, its determinants, and the performance of health systems
Grand Challenges in Global Health provides information on research into better ways for developing countries to measure their health status
The World Health Organization Statistical Information System (WHOSIS) is an interactive database that brings together core health statistics for WHO member states, including information on vital registration of deaths; the WHO Health Metrics Network is a global collaboration focused on improving sources of vital statistics
doi:10.1371/journal.pmed.1000260
PMCID: PMC2854132  PMID: 20405004
