1.  Inference about the expected performance of a data-driven dynamic treatment regime 
Clinical trials (London, England)  2014;11(4):408-417.
Background
A dynamic treatment regime (DTR) comprises a sequence of decision rules, one per stage of intervention, that recommends how to individualize treatment to patients based on evolving treatment and covariate history. These regimes are useful for managing chronic disorders, and fit into the larger paradigm of personalized medicine. The Value of a DTR is the expected outcome when the DTR is used to assign treatments to a population of interest.
Purpose
The Value of a data-driven DTR, estimated using data from a sequential multiple assignment randomized trial, is both a data-dependent parameter and a non-smooth function of the underlying generative distribution. These features introduce additional variability that is not accounted for by standard methods for conducting statistical inference, e.g., the bootstrap or normal approximations, if applied without adjustment. Our purpose is to provide a feasible method for constructing valid confidence intervals for this quantity of practical interest.
Methods
We propose a conceptually simple and computationally feasible method for constructing valid confidence intervals for the Value of an estimated DTR based on subsampling. The method is self-tuning by virtue of an approach called the double bootstrap. We demonstrate the proposed method using a series of simulated experiments.
Results
The proposed method offers considerable improvement in terms of coverage rates of the confidence intervals over the standard bootstrap approach.
Limitations
In this paper, we have restricted our attention to Q-learning for estimating the optimal DTR. However, other methods can be employed for this purpose; to keep the discussion focused, we have not explored these alternatives.
Conclusions
Subsampling-based confidence intervals provide much better performance compared to standard bootstrap for the Value of an estimated DTR.
doi:10.1177/1740774514537727
PMCID: PMC4265005  PMID: 24925083
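The subsampling construction described in the Methods section above can be sketched for a generic non-smooth estimator. This is a minimal illustration only: the example functional, the subsample size, and the random data are hypothetical, and the paper's double-bootstrap tuning of the subsample size is omitted.

```python
import numpy as np

def subsample_ci(data, estimator, b, alpha=0.05, n_sub=500, seed=None):
    """Subsampling confidence interval for a possibly non-smooth estimator.

    data: 1-D array; b: subsample size (b much smaller than n).
    The double-bootstrap choice of b from the paper is omitted here.
    """
    rng = np.random.default_rng(seed)
    n = len(data)
    theta_hat = estimator(data)
    # The law of sqrt(b)*(theta_b - theta_hat) approximates that of
    # sqrt(n)*(theta_hat - theta), even when theta_hat is non-regular.
    roots = []
    for _ in range(n_sub):
        idx = rng.choice(n, size=b, replace=False)  # subsample WITHOUT replacement
        roots.append(np.sqrt(b) * (estimator(data[idx]) - theta_hat))
    lo, hi = np.quantile(roots, [alpha / 2, 1 - alpha / 2])
    # Invert the root to obtain the interval for the target parameter
    return theta_hat - hi / np.sqrt(n), theta_hat - lo / np.sqrt(n)

# A non-smooth functional (max of 0 and the mean) on simulated data
x = np.random.default_rng(0).normal(0.2, 1.0, size=2000)
ci = subsample_ci(x, lambda d: max(0.0, d.mean()), b=200, seed=1)
```

Sampling without replacement is what distinguishes subsampling from the standard bootstrap, and is what makes the method valid for non-smooth targets such as the Value of an estimated DTR.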
2.  Estimation of Optimal Dynamic Treatment Regimes 
Clinical trials (London, England)  2014;11(4):400-407.
Background
Recent advances in medical research suggest that optimal treatment rules should adapt to patients over time. This has led to an increasing interest in studying dynamic treatment regimes (DTRs): sequences of individualized treatment rules, one per stage of clinical intervention, that map present patient information to a recommended treatment. There has been a recent surge of statistical work on estimating optimal DTRs from randomized and observational studies. The purpose of this paper is to review recent methodological progress and applied issues associated with estimating optimal DTRs.
Methods
We discuss Sequential Multiple Assignment Randomized Trials (SMARTs), a clinical trial design used to study treatment sequences. We use a common estimator of an optimal DTR that applies to SMART data as a platform to discuss several practical and methodological issues.
Results
We provide a limited survey of practical issues associated with modeling SMART data. We review some existing estimators of optimal dynamic treatment regimes and discuss practical issues associated with these methods, including model building, missing data, statistical inference, and choosing an outcome when only non-responders are re-randomized. We mainly focus on the estimation and inference of DTRs using SMART data. DTRs can also be constructed from observational data, which may be easier to obtain in practice; however, care must be taken to account for potential confounding.
doi:10.1177/1740774514532570
PMCID: PMC4247353  PMID: 24872361
Adaptive treatment strategies; Dynamic treatment regimes; Missing data; Personalized treatment; Q-learning; Sequential Multiple Assignment Randomized Trials; Outcome weighted learning; Augmented value maximization; Structural mean models
3.  Q-learning for estimating optimal dynamic treatment rules from observational data 
The area of dynamic treatment regimes (DTR) aims to make inference about adaptive, multistage decision-making in clinical practice. A DTR is a set of decision rules, one per interval of treatment, where each decision is a function of treatment and covariate history that returns a recommended treatment. Q-learning is a popular method from the reinforcement learning literature that has recently been applied to estimate DTRs. While, in principle, Q-learning can be used for both randomized and observational data, the focus in the literature thus far has been exclusively on the randomized treatment setting. We extend the method to incorporate measured confounding covariates, using direct adjustment and a variety of propensity score approaches. The methods are examined under various settings including non-regular scenarios. We illustrate the methods in examining the effect of breastfeeding on vocabulary testing, based on data from the Promotion of Breastfeeding Intervention Trial.
doi:10.1002/cjs.11162
PMCID: PMC3551601  PMID: 23355757
Bias; confounding; dynamic treatment regime; inverse probability of treatment weighting; non-regularity; propensity scores
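A minimal sketch of two-stage Q-learning with direct covariate adjustment, one of the confounding-adjustment strategies the abstract mentions; the propensity-score variants are omitted. The generative model and all variable names here are hypothetical, chosen only to make the recursion concrete.

```python
import numpy as np

# Hypothetical two-stage simulated data with confounding by X1 and X2:
# treatment at each stage depends on the current covariate.
rng = np.random.default_rng(0)
n = 5000
X1 = rng.normal(size=n)
A1 = (rng.random(n) < 1 / (1 + np.exp(-X1))).astype(float)   # A1 depends on X1
X2 = 0.5 * X1 + 0.3 * A1 + rng.normal(size=n)
A2 = (rng.random(n) < 1 / (1 + np.exp(-X2))).astype(float)   # A2 depends on X2
Y = X1 + X2 + A1 * (0.5 + X1) + A2 * (1.0 - X2) + rng.normal(size=n)

def ols(design, y):
    return np.linalg.lstsq(design, y, rcond=None)[0]

# Stage 2: regress Y on the full history; the confounders X1 and X2 enter
# the regression model directly ("direct adjustment").
D2 = np.column_stack([np.ones(n), X1, A1, A1 * X1, X2, A2, A2 * X2])
b2 = ols(D2, Y)
blip2 = b2[5] + b2[6] * X2          # stage-2 treatment effect given history
# Pseudo-outcome: the outcome had the optimal stage-2 decision been taken
Ytilde = Y + np.maximum(blip2, 0) - A2 * blip2

# Stage 1: regress the pseudo-outcome on the stage-1 history
D1 = np.column_stack([np.ones(n), X1, A1, A1 * X1])
b1 = ols(D1, Ytilde)
blip1 = b1[2] + b1[3] * X1          # estimated stage-1 treatment effect
# Estimated DTR: treat at each stage when the corresponding blip is positive
```

In this simulation the true stage-2 effect of treatment is 1.0 − X2, so the fitted coefficients on A2 and A2·X2 should recover approximately 1 and −1; with observational data, omitting X1 or X2 from the regressions would bias these estimates.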
4.  Identifying a set that contains the best dynamic treatment regimes 
Biostatistics (Oxford, England)  2015;17(1):135-148.
A dynamic treatment regime (DTR) is a treatment design that seeks to accommodate patient heterogeneity in response to treatment. DTRs can be operationalized by a sequence of decision rules that map patient information to treatment options at specific decision points. The sequential, multiple assignment, randomized trial (SMART) is a trial design that was developed specifically for the purpose of obtaining data that informs the construction of good (i.e. efficacious) decision rules. One of the scientific questions motivating a SMART concerns the comparison of multiple DTRs that are embedded in the design. Typical approaches for identifying the best DTRs involve all possible comparisons between DTRs that are embedded in a SMART, at the cost of greatly reduced power as the number of embedded DTRs (EDTRs) increases. Here, we propose a method that will enable investigators to use SMART study data more efficiently to identify the set that contains the most efficacious EDTRs. Our method ensures that the true best EDTRs are included in this set with at least a given probability. Simulation results are presented to evaluate the proposed method, and the Extending Treatment Effectiveness of Naltrexone SMART study data are analyzed to illustrate its application.
doi:10.1093/biostatistics/kxv025
PMCID: PMC4679070  PMID: 26243172
Double robust; Marginal structural model; Multiple comparisons with the best; SMART designs
5.  Positron Emission Tomography for the Assessment of Myocardial Viability 
Executive Summary
In July 2009, the Medical Advisory Secretariat (MAS) began work on Non-Invasive Cardiac Imaging Technologies for the Assessment of Myocardial Viability, an evidence-based review of the literature surrounding different cardiac imaging modalities to ensure that appropriate technologies are accessed by patients undergoing viability assessment. This project came about when the Health Services Branch at the Ministry of Health and Long-Term Care asked MAS to provide an evidentiary platform on effectiveness and cost-effectiveness of non-invasive cardiac imaging modalities.
After an initial review of the strategy and consultation with experts, MAS identified five key non-invasive cardiac imaging technologies that can be used for the assessment of myocardial viability: positron emission tomography, cardiac magnetic resonance imaging, dobutamine echocardiography, dobutamine echocardiography with contrast, and single photon emission computed tomography.
A 2005 review conducted by MAS determined that positron emission tomography was more sensitive than dobutamine echocardiography and single photon emission tomography, and that it dominated the other imaging modalities from a cost-effectiveness standpoint. However, there was inadequate evidence to compare positron emission tomography and cardiac magnetic resonance imaging. Thus, this report focuses on this comparison only. For both technologies, an economic analysis was also completed.
The Non-Invasive Cardiac Imaging Technologies for the Assessment of Myocardial Viability is made up of the following reports, which can be publicly accessed at the MAS website at: www.health.gov.on.ca/mas or at www.health.gov.on.ca/english/providers/program/mas/mas_about.html
Positron Emission Tomography for the Assessment of Myocardial Viability: An Evidence-Based Analysis
Magnetic Resonance Imaging for the Assessment of Myocardial Viability: An Evidence-Based Analysis
Objective
The objective of this analysis is to assess the effectiveness and safety of positron emission tomography (PET) imaging using F-18-fluorodeoxyglucose (FDG) for the assessment of myocardial viability. To evaluate the effectiveness of FDG PET viability imaging, the following outcomes are examined:
the diagnostic accuracy of FDG PET for predicting functional recovery;
the impact of PET viability imaging on prognosis (mortality and other patient outcomes); and
the contribution of PET viability imaging to treatment decision making and subsequent patient outcomes.
Clinical Need: Condition and Target Population
Left Ventricular Systolic Dysfunction and Heart Failure
Heart failure is a complex syndrome characterized by the heart’s inability to maintain adequate blood circulation through the body leading to multiorgan abnormalities and, eventually, death. Patients with heart failure experience poor functional capacity, decreased quality of life, and increased risk of morbidity and mortality.
In 2005, more than 71,000 Canadians died from cardiovascular disease, of which 54% were due to ischemic heart disease. Left ventricular (LV) systolic dysfunction due to coronary artery disease (CAD)1 is the primary cause of heart failure, accounting for more than 70% of cases. The prevalence of heart failure was estimated at one percent of the Canadian population in 1989. Since then, the increase in the older population has undoubtedly resulted in a substantial increase in cases. Heart failure is associated with a poor prognosis: one-year mortality rates were 32.9% and 31.1% for men and women, respectively, in Ontario between 1996 and 1997.
Treatment Options
In general, there are three options for the treatment of heart failure: medical treatment, heart transplantation, and revascularization for those with CAD as the underlying cause. Concerning medical treatment, despite recent advances, mortality remains high among treated patients, while heart transplantation is affected by the limited availability of donor hearts and consequently has long waiting lists. The third option, revascularization, is used to restore the flow of blood to the heart via coronary artery bypass grafting (CABG) or through minimally invasive percutaneous coronary interventions (balloon angioplasty and stenting). Both methods, however, are associated with important perioperative risks including mortality, so it is essential to properly select patients for this procedure.
Myocardial Viability
Left ventricular dysfunction may be permanent if a myocardial scar is formed, or it may be reversible after revascularization. Reversible LV dysfunction occurs when the myocardium is viable but dysfunctional (reduced contractility). Since only patients with dysfunctional but viable myocardium benefit from revascularization, the identification and quantification of the extent of myocardial viability is an important part of the work-up of patients with heart failure when determining the most appropriate treatment path. Various non-invasive cardiac imaging modalities can be used to assess patients in whom determination of viability is an important clinical issue, specifically:
dobutamine echocardiography (echo),
stress echo with contrast,
SPECT using either technetium or thallium,
cardiac magnetic resonance imaging (cardiac MRI), and
positron emission tomography (PET).
Dobutamine Echocardiography
Stress echocardiography can be used to detect viable myocardium. During the infusion of low-dose dobutamine (5 – 10 μg/kg/min), an improvement of contractility in hypokinetic and akinetic segments is indicative of the presence of viable myocardium. Alternatively, a low-high dose dobutamine protocol can be used, in which a biphasic response, characterized by improved contractile function during the low-dose infusion followed by a deterioration in contractility due to stress-induced ischemia during the high-dose infusion (dobutamine dose up to 40 μg/kg/min), represents viable tissue. Newer techniques, including echocardiography using contrast agents, harmonic imaging, and power Doppler imaging, may help to improve the diagnostic accuracy of echocardiographic assessment of myocardial viability.
Stress Echocardiography with Contrast
Intravenous contrast agents, which are high molecular weight inert gas microbubbles that act like red blood cells in the vascular space, can be used during echocardiography to assess myocardial viability. These agents allow for the assessment of contractile function (as described above) together with the simultaneous assessment of myocardial blood flow (perfusion), making it possible to distinguish between stunned and hibernating myocardium.
SPECT
SPECT can be performed using thallium-201 (Tl-201), a potassium analogue, or technetium-99m labelled tracers. When Tl-201 is injected intravenously into a patient, it is taken up by the myocardial cells through regional perfusion, and Tl-201 is retained in the cell due to sodium/potassium ATPase pumps in the myocyte membrane. The stress-redistribution-reinjection protocol involves three sets of images. The first two image sets (taken immediately after stress and then three to four hours after stress) identify perfusion defects that may represent scar tissue or viable tissue that is severely hypoperfused. The third set of images is taken a few minutes after the re-injection of Tl-201, once the second set of images is completed. Viable tissue is identified if the defects exhibit significant fill-in (> 10% increase in tracer uptake) on these re-injection images.
The other common Tl-201 viability imaging protocol, rest-redistribution, involves SPECT imaging performed at rest five minutes after Tl-201 is injected and again three to four hours later. Viable tissue is identified if the delayed images exhibit significant fill-in of defects identified in the initial scans (> 10% increase in uptake) or if defects are fixed but the tracer activity is greater than 50%.
There are two technetium-99m tracers: sestamibi (MIBI) and tetrofosmin. The uptake and retention of these tracers depend on regional perfusion and the integrity of cellular membranes. Viability is assessed using one set of images at rest and is defined by segments with tracer activity greater than 50%.
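The Tl-201 rest-redistribution reading rules above (significant fill-in of > 10%, or a fixed defect with tracer activity above 50% of peak) can be expressed as a simple per-segment decision rule. This is an illustrative sketch with hypothetical inputs, not clinical software.

```python
def tl201_segment_viable(initial_uptake, delayed_uptake):
    """Classify one myocardial segment from rest-redistribution Tl-201 SPECT.

    Uptake values are percentages of peak myocardial tracer activity
    (hypothetical inputs for illustration).
    """
    fill_in = delayed_uptake - initial_uptake
    if fill_in > 10:             # significant fill-in on the delayed images
        return True
    return delayed_uptake > 50   # fixed defect, but uptake above 50% of peak

# 15% fill-in -> viable; fixed defect at 45% -> non-viable; fixed at 58% -> viable
results = [tl201_segment_viable(35, 50),
           tl201_segment_viable(40, 45),
           tl201_segment_viable(55, 58)]
```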
Cardiac Magnetic Resonance Imaging
Cardiac magnetic resonance imaging (cardiac MRI) is a non-invasive, x-ray free technique that uses a powerful magnetic field, radio frequency pulses, and a computer to produce detailed images of the structure and function of the heart. Two types of cardiac MRI are used to assess myocardial viability: dobutamine stress magnetic resonance imaging (DSMR) and delayed contrast-enhanced cardiac MRI (DE-MRI). DE-MRI, the most commonly used technique in Ontario, uses gadolinium-based contrast agents to define the transmural extent of scar, which can be visualized based on the intensity of the image. Hyper-enhanced regions correspond to irreversibly damaged myocardium. As the extent of hyper-enhancement increases, the amount of scar increases and the likelihood of functional recovery decreases.
Cardiac Positron Emission Tomography
Positron emission tomography (PET) is a nuclear medicine technique used to image tissues based on the distinct ways in which normal and abnormal tissues metabolize positron-emitting radionuclides. Radionuclides are radioactive analogs of common physiological substrates such as sugars, amino acids, and free fatty acids that are used by the body. The only licensed radionuclide used in PET imaging for viability assessment is F-18 fluorodeoxyglucose (FDG).
During a PET scan, the radionuclides are injected into the body and, as they decay, they emit positively charged particles (positrons) that travel several millimetres into tissue and collide with orbiting electrons. This collision results in annihilation, in which the combined mass of the positron and electron is converted into energy in the form of two 511 keV gamma rays, which are emitted in opposite directions (180 degrees) and captured by an external array of detector elements in the PET gantry. Computer software is then used to convert the radiation emission into images. The system is set up so that it only registers coincident gamma rays that arrive at the detectors within a predefined temporal window; single photons arriving without a pair, or outside the temporal window, do not activate the detectors. This allows for increased spatial and contrast resolution.
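The coincidence-gating logic described above can be sketched as a toy one-dimensional filter over detection timestamps. The 6 ns window is an illustrative value, and real scanners additionally apply geometric (line-of-response) and energy criteria that are omitted here.

```python
def coincident_pairs(timestamps_ns, window_ns=6):
    """Keep only gamma-ray detections that arrive as a pair within the
    coincidence window; singles without a partner are rejected.

    A simplified sketch of the temporal gating described in the text.
    """
    ts = sorted(timestamps_ns)
    pairs, i = [], 0
    while i < len(ts) - 1:
        if ts[i + 1] - ts[i] <= window_ns:
            pairs.append((ts[i], ts[i + 1]))
            i += 2    # both photons consumed by this coincidence
        else:
            i += 1    # a single: no partner inside the window, rejected
    return pairs

# Two true coincidences plus two unpaired singles (times in nanoseconds)
events = [100, 103, 250, 400, 404, 900]
coincident_pairs(events)  # -> [(100, 103), (400, 404)]
```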
Evidence-Based Analysis
Research Questions
What is the diagnostic accuracy of PET for detecting myocardial viability?
What is the prognostic value of PET viability imaging (mortality and other clinical outcomes)?
What is the contribution of PET viability imaging to treatment decision making?
What is the safety of PET viability imaging?
Literature Search
A literature search was performed on July 17, 2009 using OVID MEDLINE, MEDLINE In-Process and Other Non-Indexed Citations, EMBASE, the Cochrane Library, and the International Agency for Health Technology Assessment (INAHTA) for studies published from January 1, 2004 to July 16, 2009. Abstracts were reviewed by a single reviewer and, for those studies meeting the eligibility criteria, full-text articles were obtained. In addition, published systematic reviews and health technology assessments were reviewed for relevant studies published before 2004. Reference lists of included studies were also examined for any additional relevant studies not already identified. The quality of the body of evidence was assessed as high, moderate, low or very low according to GRADE methodology.
Inclusion Criteria
Criteria applying to diagnostic accuracy studies, prognosis studies, and physician decision-making studies:
English language full-reports
Health technology assessments, systematic reviews, meta-analyses, randomized controlled trials (RCTs), and observational studies
Patients with chronic, known CAD
PET imaging using FDG for the purpose of detecting viable myocardium
Criteria applying to diagnostic accuracy studies:
Assessment of functional recovery ≥3 months after revascularization
Raw data available to calculate sensitivity and specificity
Gold standard: prediction of global or regional functional recovery
Criteria applying to prognosis studies:
Mortality studies that compare revascularized patients with non-revascularized patients and patients with viable and non-viable myocardium
Exclusion Criteria
Criteria applying to diagnostic accuracy studies, prognosis studies, and physician decision-making studies:
PET perfusion imaging
< 20 patients
< 18 years of age
Patients with non-ischemic heart disease
Animal or phantom studies
Studies focusing on the technical aspects of PET
Studies conducted exclusively in patients with acute myocardial infarction (MI)
Duplicate publications
Criteria applying to diagnostic accuracy studies
Gold standard other than functional recovery (e.g., PET or cardiac MRI)
Assessment of functional recovery occurs before patients are revascularized
Outcomes of Interest
Diagnostic accuracy studies
Sensitivity and specificity
Positive and negative predictive values (PPV and NPV)
Positive and negative likelihood ratios
Diagnostic accuracy
Adverse events
Prognosis studies
Mortality rate
Functional status
Exercise capacity
Quality of Life
Influence of PET viability imaging on physician decision making
Statistical Methods
Pooled estimates of sensitivity and specificity were calculated using a bivariate, binomial generalized linear mixed model. Statistical significance was defined by P values less than 0.05, where “false discovery rate” adjustments were made for multiple hypothesis testing. Using the bivariate model parameters, summary receiver operating characteristic (sROC) curves were produced. The area under the sROC curve was estimated by numerical integration with a cubic spline (default option). Finally, pooled estimates of mortality rates were calculated using weighted means.
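Two of the computational steps above, weighted-mean pooling of study-level rates and numerical integration of an sROC curve with a cubic spline, can be sketched as follows. The study counts and (FPR, TPR) points are hypothetical, and the bivariate binomial mixed-model fit itself is omitted; the sROC points stand in for values read off that fitted summary curve.

```python
import numpy as np
from scipy.interpolate import CubicSpline

def pooled_rate(events, n):
    """Pool study-level mortality rates as a sample-size-weighted mean."""
    rates = np.asarray(events) / np.asarray(n)
    return float(np.average(rates, weights=n))

def sroc_auc(fpr, tpr):
    """Area under an sROC curve by cubic-spline numerical integration."""
    order = np.argsort(fpr)
    spline = CubicSpline(np.asarray(fpr)[order], np.asarray(tpr)[order])
    return float(spline.integrate(0.0, 1.0))

# Hypothetical study-level data: deaths and sample sizes from three studies,
# and four points taken from a fitted summary ROC curve
rate = pooled_rate(events=[10, 25, 8], n=[100, 200, 80])          # 43/380
auc = sroc_auc([0.0, 0.2, 0.5, 1.0], [0.0, 0.6, 0.85, 1.0])
```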
Quality of Evidence
The quality of evidence assigned to individual diagnostic studies was determined using the QUADAS tool, a list of 14 questions that address internal and external validity, bias, and generalizability of diagnostic accuracy studies. Each question is scored as “yes”, “no”, or “unclear”. The quality of the body of evidence was then assessed as high, moderate, low, or very low according to the GRADE Working Group criteria. The following definitions of quality were used in grading the quality of the evidence:
Summary of Findings
A total of 40 studies met the inclusion criteria and were included in this review: one health technology assessment, two systematic reviews, 22 observational diagnostic accuracy studies, and 16 prognosis studies. The available PET viability imaging literature addresses two questions: 1) what is the diagnostic accuracy of PET imaging for the assessment of myocardial viability; and 2) what is the prognostic value of PET viability imaging. The diagnostic accuracy studies use regional or global functional recovery as the reference standard to determine the sensitivity and specificity of the technology. While regional functional recovery was most commonly used in the studies, global functional recovery is more important clinically. Due to differences in reporting and thresholds, however, it was not possible to pool global functional recovery.
Functional recovery, however, is a surrogate reference standard for viability and consequently, the diagnostic accuracy results may underestimate the specificity of PET viability imaging. For example, regional functional recovery may take up to a year after revascularization, depending on whether the tissue is stunned or hibernating, while many of the studies assessed regional functional recovery only 3 to 6 months after revascularization. In addition, viable tissue may not recover function after revascularization because of loss of graft patency or re-stenosis. Both issues may lead to false positives and underestimate specificity. Given these limitations, the prognostic value of PET viability imaging provides the most direct and clinically useful information. This body of literature provides evidence on the comparative effectiveness of revascularization and medical therapy in patients with viable myocardium and patients without viable myocardium. In addition, the literature compares the impact of PET-guided treatment decision making with SPECT-guided or standard care treatment decision making on survival and cardiac events (including cardiac mortality, MI, hospital stays, unintended revascularization, etc.).
The main findings from the diagnostic accuracy and prognosis evidence are:
Based on the available very low quality evidence, PET is a useful imaging modality for the detection of viable myocardium. The pooled estimates of sensitivity and specificity for the prediction of regional functional recovery as a surrogate for viable myocardium are 91.5% (95% CI, 88.2% – 94.9%) and 67.8% (95% CI, 55.8% – 79.7%), respectively.
Based on the available very low quality of evidence, an indirect comparison of pooled estimates of sensitivity and specificity showed no statistically significant difference in the diagnostic accuracy of PET viability imaging for regional functional recovery using perfusion/metabolism mismatch with FDG PET plus either a PET or SPECT perfusion tracer compared with metabolism imaging with FDG PET alone.
FDG PET + PET perfusion metabolism mismatch: sensitivity, 89.9% (83.5% – 96.4%); specificity, 78.3% (66.3% – 90.2%);
FDG PET + SPECT perfusion metabolism mismatch: sensitivity, 87.2% (78.0% – 96.4%); specificity, 67.1% (48.3% – 85.9%);
FDG PET metabolism: sensitivity, 94.5% (91.0% – 98.0%); specificity, 66.8% (53.2% – 80.3%).
Given these findings, further higher quality studies are required to determine the comparative effectiveness and clinical utility of metabolism and perfusion/metabolism mismatch viability imaging with PET.
Based on very low quality of evidence, patients with viable myocardium who are revascularized have a lower mortality rate than those who are treated with medical therapy. Given the quality of evidence, however, this estimate of effect is uncertain so further higher quality studies in this area should be undertaken to determine the presence and magnitude of the effect.
While revascularization may reduce mortality in patients with viable myocardium, current moderate quality RCT evidence suggests that PET-guided treatment decisions do not result in statistically significant reductions in mortality compared with treatment decisions based on SPECT or standard care protocols. The PARR II trial by Beanlands et al. found a significant reduction in cardiac events (a composite outcome that includes cardiac deaths, MI, or hospital stay for cardiac cause) between the adherence to PET recommendations subgroup and the standard care group (hazard ratio, 0.62; 95% confidence interval, 0.42 – 0.93; P = 0.019); however, this post-hoc sub-group analysis is hypothesis generating, and higher quality studies are required to substantiate these findings.
The use of FDG PET plus SPECT to determine perfusion/metabolism mismatch to assess myocardial viability increases the radiation exposure compared with FDG PET imaging alone or FDG PET combined with PET perfusion imaging (total-body effective dose: FDG PET, 7 mSv; FDG PET plus PET perfusion tracer, 7.6 – 7.7 mSv; FDG PET plus SPECT perfusion tracer, 16 – 25 mSv). While the precise risk attributed to this increased exposure is unknown, there is increasing concern regarding lifetime multiple exposures to radiation-based imaging modalities, although the incremental lifetime risk for patients who are older or have a poor prognosis may not be as great as for healthy individuals.
PMCID: PMC3377573  PMID: 23074393
6.  Use of personalized Dynamic Treatment Regimes (DTRs) and Sequential Multiple Assignment Randomized Trials (SMARTs) in mental health studies 
Shanghai Archives of Psychiatry  2014;26(6):376-383.
Summary
Dynamic treatment regimes (DTRs) are sequential decision rules, tailored at each point where a clinical decision is made, based on each patient’s time-varying characteristics and intermediate outcomes observed at earlier points in time. The complexity, patient heterogeneity, and chronicity of mental disorders call for learning optimal DTRs to dynamically adapt treatment to an individual’s response over time. The Sequential Multiple Assignment Randomized Trial (SMART) design allows for estimating causal effects of DTRs. Modern statistical tools have been developed to optimize DTRs based on personalized variables and intermediate outcomes using rich data collected from SMARTs; these statistical methods can also be used to recommend tailoring variables for designing future SMART studies. This paper introduces DTRs and SMARTs using two examples in mental health studies, discusses two machine learning methods for estimating optimal DTRs from SMART data, and demonstrates the performance of the statistical methods using simulated data.
doi:10.11919/j.issn.1002-0829.214172
PMCID: PMC4311115  PMID: 25642116
SMART; dynamic treatment regimes; personalized medicine; O-learning; Q-learning; double robust estimation
7.  New Statistical Learning Methods for Estimating Optimal Dynamic Treatment Regimes 
Dynamic treatment regimes (DTRs) are sequential decision rules for individual patients that can adapt over time to an evolving illness. The goal is to accommodate heterogeneity among patients and find the DTR which will produce the best long term outcome if implemented. We introduce two new statistical learning methods for estimating the optimal DTR, termed backward outcome weighted learning (BOWL) and simultaneous outcome weighted learning (SOWL). These approaches convert individualized treatment selection into either a sequential or a simultaneous classification problem, and can thus be applied by modifying existing machine learning techniques. The proposed methods are based on directly maximizing over all DTRs a nonparametric estimator of the expected long-term outcome; this is fundamentally different from regression-based methods, for example Q-learning, which indirectly attempt such maximization and rely heavily on the correctness of postulated regression models. We prove that the resulting rules are consistent, and provide finite sample bounds for the errors using the estimated rules. Simulation results suggest the proposed methods produce superior DTRs compared with Q-learning, especially in small samples. We illustrate the methods using data from a clinical trial for smoking cessation.
doi:10.1080/01621459.2014.937488
PMCID: PMC4517946  PMID: 26236062
Dynamic treatment regimes; Personalized medicine; Reinforcement learning; Q-learning; Support vector machine; Classification; Risk Bound
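The idea of recasting treatment selection as weighted classification can be sketched in a single-stage setting: each patient's observed treatment serves as the class label, weighted by the observed outcome divided by the treatment probability. This sketch uses a hand-rolled weighted logistic fit as a stand-in for the hinge-loss (SVM) formulation in the paper, clips negative outcomes to zero for simplicity, and all data are simulated.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4000
X = rng.normal(size=n)
A = rng.integers(0, 2, size=n).astype(float)      # randomized 50/50 treatment
# Outcome is larger when treatment matches the sign of X (the true optimal rule)
Y = 1.0 + (2 * A - 1) * np.sign(X) + 0.3 * rng.normal(size=n)
w = np.maximum(Y, 0.0) / 0.5                      # outcome weight / propensity

# Weighted logistic classifier fit by gradient descent: labels are the
# observed treatments, sample weights are the (clipped) outcome weights.
beta0, beta1 = 0.0, 0.0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(beta0 + beta1 * X)))
    g0 = np.sum(w * (p - A)) / n                  # weighted log-loss gradients
    g1 = np.sum(w * (p - A) * X) / n
    beta0 -= 0.5 * g0
    beta1 -= 0.5 * g1

rule = (beta0 + beta1 * X > 0)                    # estimated treatment rule
agreement = np.mean(rule == (X > 0))              # vs. the true optimal rule
```

Because high-outcome patients carry large weights, the classifier is pushed to reproduce the treatments those patients actually received, which is exactly the value-maximization idea behind BOWL and SOWL.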
8.  Continuous Subcutaneous Insulin Infusion (CSII) Pumps for Type 1 and Type 2 Adult Diabetic Populations 
Executive Summary
In June 2008, the Medical Advisory Secretariat began work on the Diabetes Strategy Evidence Project, an evidence-based review of the literature surrounding strategies for successful management and treatment of diabetes. This project came about when the Health System Strategy Division at the Ministry of Health and Long-Term Care asked the secretariat to provide an evidentiary platform for the Ministry’s newly released Diabetes Strategy.
After an initial review of the strategy and consultation with experts, the secretariat identified five key areas in which evidence was needed. Evidence-based analyses have been prepared for each of these five areas: insulin pumps, behavioural interventions, bariatric surgery, home telemonitoring, and community based care. For each area, an economic analysis was completed where appropriate and is described in a separate report.
To review these titles within the Diabetes Strategy Evidence series, please visit the Medical Advisory Secretariat Web site at http://www.health.gov.on.ca/english/providers/program/mas/mas_about.html.
Diabetes Strategy Evidence Platform: Summary of Evidence-Based Analyses
Continuous Subcutaneous Insulin Infusion Pumps for Type 1 and Type 2 Adult Diabetics: An Evidence-Based Analysis
Behavioural Interventions for Type 2 Diabetes: An Evidence-Based Analysis
Bariatric Surgery for People with Diabetes and Morbid Obesity: An Evidence-Based Summary
Community-Based Care for the Management of Type 2 Diabetes: An Evidence-Based Analysis
Home Telemonitoring for Type 2 Diabetes: An Evidence-Based Analysis
Application of the Ontario Diabetes Economic Model (ODEM) to Determine the Cost-effectiveness and Budget Impact of Selected Type 2 Diabetes Interventions in Ontario
Objective
The objective of this analysis is to review the efficacy of continuous subcutaneous insulin infusion (CSII) pumps as compared to multiple daily injections (MDI) for adults with type 1 and type 2 diabetes.
Clinical Need and Target Population
Insulin therapy is an integral component of the treatment of many individuals with diabetes. Type 1, or juvenile-onset diabetes, is a life-long disorder that commonly manifests in children and adolescents, but onset can occur at any age. It represents about 10% of the total diabetes population and involves immune-mediated destruction of insulin producing cells in the pancreas. The loss of these cells results in a decrease in insulin production, which in turn necessitates exogenous insulin therapy.
Type 2, or ‘maturity-onset’, diabetes represents about 90% of the total diabetes population and is marked by resistance to insulin or insufficient insulin secretion. The risk of developing type 2 diabetes increases with age, obesity, and lack of physical activity. The condition tends to develop gradually and may remain undiagnosed for many years. Approximately 30% of patients with type 2 diabetes eventually require insulin therapy.
CSII Pumps
In conventional therapy programs for diabetes, insulin is injected once or twice a day in some combination of short- and long-acting insulin preparations. Some patients require intensive therapy regimens known as multiple daily injection (MDI) programs, in which insulin is injected three or more times a day. This is a time-consuming process and usually requires an injection of slow-acting basal insulin in the morning or evening and frequent doses of short-acting insulin prior to eating. The most common form of slower-acting insulin used is neutral protamine Hagedorn (NPH), which reaches peak activity 3 to 5 hours after injection. There are some concerns surrounding the use of NPH at night-time, as nocturnal hypoglycemia may occur if it is injected immediately before bed. To combat nocturnal hypoglycemia and other issues related to absorption, alternative insulins have been developed, such as the slow-acting insulin glargine. Glargine has no peak action time and instead acts consistently over a twenty-four-hour period, helping reduce the frequency of hypoglycemic episodes.
Alternatively, intensive therapy regimens can be administered by continuous subcutaneous insulin infusion (CSII) pumps. These devices attempt to closely mimic the behaviour of the pancreas, continuously providing a basal level of insulin to the body with additional boluses at meal times. Modern CSII pumps consist of a small battery-driven pump that is designed to administer insulin subcutaneously through the abdominal wall via a butterfly needle. The insulin dose is adjusted in response to measured capillary glucose values in a fashion similar to MDI, and CSII is thus often seen as preferable to multiple injection therapy. There are, however, still risks associated with the use of CSII pumps. Despite their increased use, there is uncertainty around their effectiveness as compared to MDI for improving glycemic control.
Part A: Type 1 Diabetic Adults (≥19 years)
An evidence-based analysis on the efficacy of CSII pumps compared to MDI was carried out on both type 1 and type 2 adult diabetic populations.
Research Questions
Are CSII pumps more effective than MDI for improving glycemic control in adults (≥19 years) with type 1 diabetes?
Are CSII pumps more effective than MDI for improving additional outcomes related to diabetes such as quality of life (QoL)?
Literature Search
Inclusion Criteria
Randomized controlled trials, systematic reviews, meta-analysis and/or health technology assessments from MEDLINE, EMBASE, CINAHL
Adults (≥ 19 years)
Type 1 diabetes
Study evaluates CSII vs. MDI
Published between January 1, 2002, and March 24, 2009
Patient currently on intensive insulin therapy
Exclusion Criteria
Studies with <20 patients
Studies <5 weeks in duration
CSII applied only at night time and not 24 hours/day
Mixed group of diabetes patients (children, adults, type 1, type 2)
Pregnancy studies
Outcomes of Interest
The primary outcomes of interest were glycosylated hemoglobin (HbA1c) levels, mean daily blood glucose, glucose variability, and frequency of hypoglycaemic events. Other outcomes of interest were insulin requirements, adverse events, and quality of life.
Search Strategy
The literature search strategy employed keywords and subject headings to capture the concepts of:
1) insulin pumps, and
2) type 1 diabetes.
The search was run on July 6, 2008 in the following databases: Ovid MEDLINE (1996 to June Week 4 2008), OVID MEDLINE In-Process and Other Non-Indexed Citations, EMBASE (1980 to 2008 Week 26), OVID CINAHL (1982 to June Week 4 2008), the Cochrane Library, and the Centre for Reviews and Dissemination/International Network of Agencies for Health Technology Assessment. A search update was run on March 24, 2009, and studies published prior to 2002 were also examined for inclusion in the review. Parallel search strategies were developed for the remaining databases. Search results were limited to English-language studies in humans published between January 2002 and March 24, 2009. Abstracts were reviewed, and studies meeting the inclusion criteria outlined above were obtained. Reference lists were also checked for relevant studies.
Summary of Findings
The database search identified 519 relevant citations published between 1996 and March 24, 2009. Of the 519 abstracts reviewed, four RCTs and one abstract met the inclusion criteria outlined above. While efficacy outcomes were reported in each of the trials, a meta-analysis was not possible due to missing data around standard deviations of change values as well as missing data for the first period of the crossover arm of the trial. Meta-analysis was not possible on other outcomes (quality of life, insulin requirements, frequency of hypoglycemia) due to differences in reporting.
HbA1c
In studies where no baseline data were reported, the final values were used. Two studies (Hanaire-Broutin et al. 2000, Hoogma et al. 2005) reported a slight reduction in HbA1c of 0.35% and 0.22%, respectively, for CSII pumps in comparison to MDI. A slightly larger reduction in HbA1c of 0.84% was reported by DeVries et al.; however, this was the only study to include patients with poor glycemic control, marked by higher baseline HbA1c levels. One study (Bruttomesso et al. 2008) showed no difference between CSII pumps and MDI on HbA1c levels and was the only study using insulin glargine (consistent with the results of a parallel RCT in an abstract by Bolli 2004). While there was a statistically significant reduction in HbA1c in three of the four trials, there is no evidence to suggest these results are clinically significant.
Mean Blood Glucose
Three of four studies reported a statistically significant reduction in mean daily blood glucose for patients using CSII pumps, though these results were not clinically significant. One study (DeVries et al. 2002) did not report study data on mean blood glucose but noted that the differences were not statistically significant. Interpreting the study findings is difficult because blood glucose was measured differently across studies: three of four studies used a glucose diary, while one study used a memory meter. In addition, the frequency of self-monitoring of blood glucose (SMBG) varied from four to nine times per day. Measurements used to determine differences in mean daily blood glucose between the CSII pump group and the MDI group at clinic visits were collected at varying time points. Two studies used measurements from the last day prior to the final visit (Hoogma et al. 2005, DeVries et al. 2002), while one study used measurements taken during the last 30 days and another used measurements taken during the 14 days prior to the final visit of each treatment period.
Glucose Variability
All four studies showed a statistically significant reduction in glucose variability for patients using CSII pumps compared to those using MDI, though one (Bruttomesso et al. 2008) showed a significant reduction only at the morning time point. Bruttomesso et al. also used alternate measures of glucose variability and found that both the lability index and the mean amplitude of glycemic excursions (MAGE) were in concordance with the findings using the standard deviation (SD) values of mean blood glucose, but the average daily risk range (ADRR) showed no difference between the CSII pump and MDI groups.
Hypoglycemic Events
There is conflicting evidence concerning the efficacy of CSII pumps in decreasing both mild and severe hypoglycemic events. For mild hypoglycemic events, DeVries et al. observed a higher number of events per patient-week in the CSII pump group than in the MDI group, while Hoogma et al. observed a higher number of events per patient-year in the MDI group. The remaining two studies found no differences between the two groups in the frequency of mild hypoglycemic events. For severe hypoglycemic events, Hoogma et al. found an increase in events per patient-year among MDI patients; however, all of the other RCTs showed no difference between the patient groups in this respect.
Insulin Requirements and Adverse Events
In all four studies, insulin requirements were statistically significantly lower in patients receiving CSII pump treatment than in those on MDI. Adverse events were reported in three studies. DeVries et al. found no difference in ketoacidotic episodes between CSII pump and MDI users. Bruttomesso et al. reported no adverse events during the study. Hanaire-Broutin et al. found that, out of a total of 256 patients, 30 patients experienced 58 serious adverse events (SAEs) during MDI and 23 patients had 33 SAEs during CSII treatment. Most events were related to severe hypoglycemia and diabetic ketoacidosis.
Quality of Life and Patient Preference
QoL was measured in three studies and patient preference in one. All three studies found an improvement in QoL for CSII users compared to those using MDI, although various instruments were used among the studies, and possible reporting bias was evident as non-positive outcomes were not consistently reported. Moreover, there were conflicting results in the two studies using the Diabetes Treatment Satisfaction Questionnaire (DTSQ): DeVries et al. reported no difference in treatment satisfaction between CSII pump users and MDI users, while Bruttomesso et al. reported that treatment satisfaction improved among CSII pump users.
Patient preference for CSII pumps was demonstrated in just one study (Hanaire-Broutin et al. 2000), and there are considerable limitations in interpreting these data, as they were gathered through interviews and 72% of patients who preferred CSII pumps were on CSII pump therapy prior to the study. As all studies were industry sponsored, findings on QoL and patient preference must be interpreted with caution.
Quality of Evidence
Overall, the body of evidence was downgraded from high to low due to study quality and issues with directness, as identified using the GRADE quality assessment tool (see Table 1). While blinding of patients to intervention/control was not feasible in these studies, blinding of study personnel during outcome assessment and allocation concealment were generally lacking. Trials reported consistent results for the outcomes of HbA1c, mean blood glucose, and glucose variability, but the directness or generalizability of the studies, particularly with respect to the diabetic population, was questionable, as most trials used highly motivated populations with fairly good glycemic control. In addition, the populations in each of the studies varied with respect to prior treatment regimens and may not be generalizable to the population eligible for pumps in Ontario. For the outcome of hypoglycaemic events, the evidence was further downgraded to very low, since there was conflicting evidence between studies with respect to the frequency of mild and severe hypoglycaemic events in patients using CSII pumps as compared to MDI (see Table 2). The GRADE quality of evidence for the use of CSII in adults with type 1 diabetes is therefore low to very low, and any estimate of effect is uncertain.
GRADE Quality Assessment for CSII pumps vs. MDI on HbA1c, Mean Blood Glucose, and Glucose Variability for Adults with Type 1 Diabetes
Inadequate or unknown allocation concealment (3/4 studies); Unblinded assessment (all studies) however lack of blinding due to the nature of the study; No ITT analysis (2/4 studies); possible bias SMBG (all studies)
HbA1c: 3/4 studies show consistency however magnitude of effect varies greatly; Single study uses insulin glargine instead of NPH; Mean Blood Glucose: 3/4 studies show consistency however magnitude of effect varies between studies; Glucose Variability: All studies show consistency but 1 study only showed a significant effect in the morning
Generalizability in question due to varying populations: highly motivated populations, educational component of interventions/ run-in phases, insulin pen use in 2/4 studies and varying levels of baseline glycemic control and experience with intensified insulin therapy, pumps and MDI.
GRADE Quality Assessment for CSII pumps vs. MDI on Frequency of Hypoglycemic Events for Adults with Type 1 Diabetes
Inadequate or unknown allocation concealment (3/4 studies); Unblinded assessment (all studies) however lack of blinding due to the nature of the study; No ITT analysis (2/4 studies); possible bias SMBG (all studies)
Conflicting evidence with respect to mild and severe hypoglycemic events reported in studies
Generalizability in question due to varying populations: highly motivated populations, educational component of interventions/ run-in phases, insulin pen use in 2/4 studies and varying levels of baseline glycemic control and experience with intensified insulin therapy, pumps and MDI.
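The step-down logic applied above (RCT evidence starts at "high" and is downgraded one level per serious concern such as risk of bias, inconsistency, or indirectness) can be sketched as follows. This is an illustrative simplification of the GRADE instrument, not the secretariat's actual assessment tooling, and the function name is hypothetical.

```python
# The four GRADE levels of evidence, ordered from lowest to highest.
LEVELS = ["very low", "low", "moderate", "high"]

def grade_quality(start="high", downgrades=0):
    """Return the GRADE level after stepping 'start' down one level per
    serious concern (risk of bias, inconsistency, indirectness,
    imprecision, publication bias). RCT evidence starts at 'high'."""
    idx = LEVELS.index(start) - downgrades
    return LEVELS[max(idx, 0)]  # cannot drop below "very low"

# Mirroring the narrative above: two downgrades (study quality, directness)
# give "low"; a further downgrade for inconsistent hypoglycemia findings
# gives "very low".
overall = grade_quality("high", 2)        # "low"
hypoglycemia = grade_quality("high", 3)   # "very low"
```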
Economic Analysis
One article from the economic literature scan was included in the analysis. Four other economic evaluations were identified but did not meet our inclusion criteria: two did not compare CSII with MDI, and the other two used summary estimates from a mixed population with type 1 and type 2 diabetes in their economic microsimulation to estimate costs and effects over time. Articles were included if they were English-language comparisons between CSII and MDI, with quality-adjusted life years (QALYs) as the outcome, in an adult population with type 1 diabetes.
From one study, a subset of the population with type 1 diabetes was identified that may be suitable and benefit from using insulin pumps. There is, however, limited data in the literature addressing the cost-effectiveness of insulin pumps versus MDI in type 1 diabetes. Longer term models are required to estimate the long term costs and effects of pumps compared to MDI in this population.
Conclusions
CSII pumps for the treatment of adults with type 1 diabetes
Based on low-quality evidence, CSII pumps confer a statistically significant but not clinically significant reduction in HbA1c and mean daily blood glucose as compared to MDI in adults with type 1 diabetes (≥19 years).
CSII pumps also confer a statistically significant reduction in glucose variability as compared to MDI in adults with type 1 diabetes (≥19 years); however, the clinical significance is unknown.
There is indirect evidence that the use of newer long-acting insulins (e.g., insulin glargine) in MDI regimens results in a smaller difference between MDI and CSII than is seen when older insulins are used.
There is conflicting evidence regarding both mild and severe hypoglycemic events in this population when using CSII pumps as compared to MDI. These findings are based on very low-quality evidence.
There is an improved quality of life for patients using CSII pumps as compared to MDI; however, limitations exist with this evidence.
Significant limitations of the literature exist specifically:
All studies sponsored by insulin pump manufacturers
All studies used crossover design
Prior treatment regimens varied
Types of insulins used in study varied (NPH vs. glargine)
Generalizability of studies in question as populations were highly motivated and half of studies used insulin pens as the mode of delivery for MDI
One short-term study concluded that pumps are cost-effective, although this was based on limited data and longer term models are required to estimate the long-term costs and effects of pumps compared to MDI in adults with type 1 diabetes.
Part B: Type 2 Diabetic Adults
Research Questions
Are CSII pumps more effective than MDI for improving glycemic control in adults (≥19 years) with type 2 diabetes?
Are CSII pumps more effective than MDI for improving other outcomes related to diabetes such as quality of life?
Literature Search
Inclusion Criteria
Randomized controlled trials, systematic reviews, meta-analysis and/or health technology assessments from MEDLINE, Excerpta Medica Database (EMBASE), Cumulative Index to Nursing & Allied Health Literature (CINAHL)
Any person with type 2 diabetes requiring intensive insulin treatment
Published between January 1, 2000, and August 2008
Exclusion Criteria
Studies with <10 patients
Studies <5 weeks in duration
CSII applied only at night time and not 24 hours/day
Mixed group of diabetes patients (children, adults, type 1, type 2)
Pregnancy studies
Outcomes of Interest
The primary outcome of interest was a reduction in glycosylated hemoglobin (HbA1c) levels. Other outcomes of interest were mean blood glucose level, glucose variability, insulin requirements, frequency of hypoglycemic events, adverse events, and quality of life.
Search Strategy
A comprehensive literature search was performed in OVID MEDLINE, MEDLINE In-Process and Other Non-Indexed Citations, EMBASE, CINAHL, the Cochrane Library, and the International Network of Agencies for Health Technology Assessment (INAHTA) database for studies published between January 1, 2000 and August 15, 2008. Studies meeting the inclusion criteria were selected from the search results. Data on the study characteristics, patient characteristics, primary and secondary treatment outcomes, and adverse events were abstracted. Reference lists of selected articles were also checked for relevant studies. The quality of the evidence was assessed as high, moderate, low, or very low according to the GRADE methodology.
Summary of Findings
The database search identified 286 relevant citations published between 1996 and August 2008. Of the 286 abstracts reviewed, four RCTs met the inclusion criteria outlined above. Upon examination, two studies were subsequently excluded from the meta-analysis: one due to small sample size and missing data (Berthe et al.) and one due to outlier status and a high dropout rate (Wainstein et al.), which is consistent with previously reported meta-analyses on this topic (Jeitler et al. 2008; Fatourechi et al. 2009).
HbA1c
The primary outcome in this analysis was reduction in HbA1c. Both studies demonstrated that CSII pumps and MDI each reduce HbA1c, but neither treatment modality was found to be superior to the other. A random-effects meta-analysis showed a mean difference in HbA1c of −0.14% (95% CI: −0.40 to 0.13) between the two groups, which was neither statistically nor clinically significant. There was no statistical heterogeneity between the two studies (I² = 0%).
Forest plot of two parallel RCTs comparing CSII to MDI in type 2 diabetes
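The pooling used above can be illustrated with a minimal inverse-variance random-effects (DerSimonian-Laird) calculation. The study-level mean differences and standard errors below are hypothetical placeholders, not the actual trial data, so the outputs are illustrative only.

```python
import math

# Hypothetical study-level mean differences in HbA1c (%) and standard errors;
# NOT the data from the trials discussed in this report.
studies = [(-0.20, 0.20), (-0.10, 0.18)]

# Fixed-effect (inverse-variance) weights and pooled estimate.
w = [1 / se**2 for _, se in studies]
fixed = sum(wi * d for wi, (d, _) in zip(w, studies)) / sum(w)

# Cochran's Q and the I^2 heterogeneity statistic.
q = sum(wi * (d - fixed) ** 2 for wi, (d, _) in zip(w, studies))
df = len(studies) - 1
i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0

# DerSimonian-Laird between-study variance tau^2.
c = sum(w) - sum(wi**2 for wi in w) / sum(w)
tau2 = max(0.0, (q - df) / c) if c > 0 else 0.0

# Random-effects weights, pooled mean difference, and 95% CI.
w_re = [1 / (se**2 + tau2) for _, se in studies]
pooled = sum(wi * d for wi, (d, _) in zip(w_re, studies)) / sum(w_re)
se_pooled = math.sqrt(1 / sum(w_re))
ci = (pooled - 1.96 * se_pooled, pooled + 1.96 * se_pooled)
```

With no observed heterogeneity (Q below its degrees of freedom), tau² is 0 and the random-effects result coincides with the fixed-effect one, as in the I² = 0% case reported here.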
Secondary Outcomes
Mean Blood Glucose and Glucose Variability
Mean blood glucose was used as an efficacy outcome in only one study (Raskin et al. 2003). The authors found that the only time point at which blood glucose values were consistently lower in the CSII group than in the MDI group was 90 minutes after breakfast. Glucose variability was not examined in either study, and the authors reported no difference in weight gain between the CSII pump and MDI groups at the end of the study. Conflicting results were reported regarding injection site reactions between the two studies: Herman et al. reported no difference in the number of subjects experiencing site problems between the two groups, while Raskin et al. reported no injection site reactions in the MDI group but 15 such episodes among 8 participants in the CSII pump group.
Frequency of Hypoglycemic Events and Insulin Requirements
All studies reported that there were no differences in the number of mild hypoglycemic events in patients on CSII pumps versus MDI. Herman et al. also reported no differences in the number of severe hypoglycemic events in patients using CSII pumps compared to those on MDI. Raskin et al. reported that there were no severe hypoglycemic events in either group throughout the study duration. Insulin requirements were only examined in Herman et al., who found that daily insulin requirements were equal between the CSII pump and MDI treatment groups.
Quality of Life
QoL was measured by Herman et al. using the Diabetes Quality of Life Clinical Trial Questionnaire (DQOLCTQ). There were no differences reported between CSII users and MDI users for treatment satisfaction, diabetes impact, and worry-related scores. Patient satisfaction was measured by Raskin et al. using a patient satisfaction questionnaire, whose results indicated that patients in the CSII pump group had significantly greater improvement in overall treatment satisfaction at the end of the study compared to the MDI group. Although patient preference was also reported, it was only examined in the CSII pump group; thus, results indicating a greater preference for CSII pumps in this group (as compared to prior injectable insulin regimens) are biased and must be interpreted with caution.
Quality of Evidence
Overall, the body of evidence was downgraded from high to low according to study quality and issues with directness, as identified using the GRADE quality assessment tool (see Table 3). While blinding of patients to intervention/control is not feasible in these studies, blinding of study personnel during outcome assessment and allocation concealment were generally lacking. Intention-to-treat (ITT) analysis was not clearly explained in one study, and heterogeneity between study populations was evident from participants’ treatment regimens prior to study initiation. Although trials reported consistent results for HbA1c outcomes, the directness or generalizability of the studies, particularly with respect to the diabetic population, was questionable, as trials required patients to adhere to an intense SMBG regimen, which suggests that patients were highly motivated. In addition, since prior treatment regimens varied between participants (there was no requirement for patients to be on MDI), the study findings may not be generalizable to the population eligible for a pump in Ontario. The GRADE quality of evidence for the use of CSII in adults with type 2 diabetes is, therefore, low, and any estimate of effect is uncertain.
GRADE Quality Assessment for CSII pumps vs. MDI on HbA1c Adults with Type 2 Diabetes
Inadequate or unknown allocation concealment (all studies); Unblinded assessment (all studies) however lack of blinding due to the nature of the study; ITT not well explained in 1 of 2 studies
Indirect due to lack of generalizability of findings since participants varied with respect to prior treatment regimens and intensive SMBG suggests highly motivated populations used in trials.
Economic Analysis
An economic analysis of CSII pumps was carried out using the Ontario Diabetes Economic Model (ODEM) and has been previously described in the report entitled “Application of the Ontario Diabetes Economic Model (ODEM) to Determine the Cost-effectiveness and Budget Impact of Selected Type 2 Diabetes Interventions in Ontario”, part of the diabetes strategy evidence series. Based on the analysis, CSII pumps are not cost-effective for adults with type 2 diabetes, either for the age 65+ sub-group or for all patients in general. Details of the analysis can be found in the full report.
Conclusions
CSII pumps for the treatment of adults with type 2 diabetes
There is low-quality evidence demonstrating that the efficacy of CSII pumps is not superior to MDI for adults with type 2 diabetes.
There were no differences in the number of mild and severe hypoglycemic events in patients on CSII pumps versus MDI.
There are conflicting findings with respect to an improved quality of life for patients using CSII pumps as compared to MDI.
Significant limitations of the literature exist specifically:
All studies sponsored by insulin pump manufacturers
Prior treatment regimens varied
Types of insulins used in study varied (NPH vs. glargine)
Generalizability of studies in question as populations may not reflect eligible patient population in Ontario (participants not necessarily on MDI prior to study initiation, pen used in one study and frequency of SMBG required during study was high suggesting highly motivated participants)
Based on ODEM, insulin pumps are not cost-effective for adults with type 2 diabetes either for the age 65+ sub-group or for all patients in general.
PMCID: PMC3377523  PMID: 23074525
9.  Stereotactic Body Radiation Therapy for Prostate Cancer: What is the Appropriate Patient-Reported Outcome for Clinical Trial Design? 
Purpose: Stereotactic body radiation therapy (SBRT) is increasingly utilized as primary treatment for clinically localized prostate cancer. Consensus regarding the appropriate patient-reported outcome (PRO) endpoints for clinical trials evaluating radiation modalities for early stage prostate cancer is lacking. To aid in clinical trial design, this study presents PROs over a 36-month period following SBRT for clinically localized prostate cancer.
Methods: Between February 2008 and September 2010, 174 hormone-naïve patients with clinically localized prostate cancer were treated with 35–36.25 Gy SBRT (CyberKnife, Accuray) delivered in 5 fractions. Patients completed the validated Expanded Prostate Cancer Index Composite (EPIC)-26 questionnaire at baseline and all follow-ups. The proportion of patients developing a clinically significant decline in each EPIC domain score was determined. The minimally important difference (MID) was defined as a change of one-half the standard deviation from the baseline. Per Radiation Therapy Oncology Group (RTOG) 0938, we also examined the patients who experienced a decline in EPIC urinary domain summary score of >2 points (unacceptable toxicity defined as ≥60% of all patients reporting this degree of decline) and EPIC bowel domain summary score of >5 points (unacceptable toxicity defined as >55% of all patients reporting this degree of decline) from baseline to 1 year.
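The MID rule used above (a clinically significant decline is a drop of more than one-half the standard deviation of the baseline scores) can be sketched as follows. The scores below are hypothetical placeholders, not the study's EPIC data.

```python
import statistics

# Hypothetical baseline and follow-up EPIC domain scores (0-100 scale);
# NOT the data from the 174-patient cohort described here.
baseline = [82.0, 90.5, 76.0, 88.0, 95.0, 70.5, 85.0, 91.0]
followup = [78.0, 91.0, 68.0, 86.5, 90.0, 66.0, 84.0, 85.0]

# Minimally important difference: one-half the SD of the baseline scores.
mid = statistics.stdev(baseline) / 2

# A patient shows a clinically significant decline if the score drops
# from baseline by more than the MID.
declines = [b - f for b, f in zip(baseline, followup)]
significant = [d > mid for d in declines]
proportion = sum(significant) / len(significant)
```

The same threshold logic, with the per-domain MIDs of 5.5 (urinary) and 4.4 (bowel) reported in the Results, yields the proportions quoted below.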
Results: A total of 174 patients with a median age of 69 years received SBRT with a minimum follow-up of 36 months. The proportion of patients reporting a clinically significant decline (MIDs for urinary/bowel are 5.5/4.4) in EPIC urinary/bowel domain scores was 34%/30% at 6 months, 40%/32.2% at 12 months, and 32.8%/21.5% at 36 months. The proportion of patients reporting a decrease in the EPIC urinary domain summary score of >2 points was 43.2% (CI: 33.7%, 54.6%) at 6 months, 51.6% (CI: 43.4%, 59.7%) at 12 months, and 41.8% (CI: 33.3%, 50.6%) at 36 months. The proportion of patients reporting a decrease in the EPIC bowel domain summary score of >5 points was 29.6% (CI: 21.9%, 39.3%) at 6 months, 29% (CI: 22%, 36.8%) at 12 months, and 22.4% (CI: 15.7%, 30.4%) at 36 months.
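The abstract does not state how the confidence intervals for these proportions were computed; as one common choice for a binomial proportion, a Wilson score interval can be sketched as follows (illustrative, not necessarily the authors' method).

```python
import math

def wilson_ci(successes, n, z=1.96):
    """Wilson score 95% confidence interval for a binomial proportion.
    One standard option for CIs on proportions such as those above."""
    p = successes / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return center - half, center + half

# Example with made-up counts: 50 of 100 patients reporting a decline.
low, high = wilson_ci(50, 100)
```

Unlike the simple normal-approximation interval, the Wilson interval never extends below 0 or above 1, which matters for proportions near the extremes.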
Conclusion: Following prostate SBRT, clinically significant urinary symptoms are more common than bowel symptoms. Our prostate SBRT treatment protocol meets the RTOG 0938 criteria for moving forward to a Phase III trial comparing it to conventionally fractionated radiation therapy. Notably, between 12 and 36 months, the proportion of patients reporting a significant decrease in both EPIC urinary and bowel domain scores declined, suggesting a late improvement in these symptom domains. Further investigation is needed to elucidate (1) which EPIC domains bear the greatest influence on post-treatment quality of life and (2) at what time point PRO endpoint(s) should be assessed.
doi:10.3389/fonc.2015.00077
PMCID: PMC4379875  PMID: 25874188
prostate cancer; SBRT; CyberKnife; EPIC; patient-reported outcome; toxicity
10.  Mistletoe treatments for minimising side effects of anticancer chemotherapy 
Background
More than 200,000 people died of cancer in Germany in 2002. Cancer (ICD-9: 140-208, ICD-10: C00-C97) accounted for 28% of all male deaths and 22% of all female deaths. Cancer treatment consists of surgery, radiotherapy, and chemotherapy. During chemotherapy, patients may experience a wide variety of toxic effects (including life-threatening toxicity) which require treatment. The type and intensity of chemotherapy toxicity are among the limiting factors in cancer treatment. Toxic effects are also one of the factors affecting health-related quality of life (HRQOL) during chemotherapy.
Mistletoe extracts belong to the group of so-called "unconventional methods" and are used in Germany as complementary cancer treatments. It has been postulated that the addition of mistletoe to chemotherapeutic regimens could help reduce chemotherapy-induced toxicity and enhance treatment tolerability.
The German social health insurance covers the prescription of ML I standardized mistletoe extracts when those are prescribed as palliative cancer treatments with the aim of improving HRQOL.
Research questions
Does the addition of mistletoe to chemotherapeutic regimens reduce their toxicity?
Does the addition of mistletoe to chemotherapeutic regimens contribute to improved quality of life?
Does the addition of mistletoe to chemotherapeutic regimens have any effect on survival?
Does the addition of mistletoe to chemotherapeutic regimens have any effect on tumour remission?
Methods
We conducted a systematic literature search in the following databases: the Cochrane Library, DIMDI Superbase, and Dissertation Abstracts. We included systematic reviews and randomized controlled trials (RCTs). Appraisal of the literature was done by two authors independently, guided by checklists. The Jadad score was used to rate the quality of RCTs. Evidence was summarized in tables and in narrative form.
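The Jadad scoring mentioned above can be sketched as follows, assuming the standard five-point instrument (points for randomization, double blinding, and description of withdrawals, with deductions for inappropriately described methods); the function and its argument names are hypothetical, not the authors' tooling.

```python
def jadad_score(randomized, randomization_method,
                double_blind, blinding_method,
                withdrawals_described):
    """Jadad score (0-5) for RCT methodological quality.
    randomization_method / blinding_method take one of:
    "appropriate", "inappropriate", or "not described"."""
    score = 0
    if randomized:
        score += 1
        if randomization_method == "appropriate":
            score += 1      # method described and adequate
        elif randomization_method == "inappropriate":
            score -= 1      # method described but inadequate
    if double_blind:
        score += 1
        if blinding_method == "appropriate":
            score += 1
        elif blinding_method == "inappropriate":
            score -= 1
    if withdrawals_described:
        score += 1
    return max(score, 0)

# A randomized, double-blind trial with adequate methods and withdrawals
# described scores the maximum of 5.
best = jadad_score(True, "appropriate", True, "appropriate", True)
```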
Results and discussion
The literature search yielded 437 potentially relevant papers. A total of 94 papers were retrieved; of these, 48 were potentially relevant for answering the research questions and 46 for background information. In this report we summarize the results of three systematic reviews, five published RCTs, and two unpublished RCTs. A protocol of an ongoing systematic review by the Cochrane Collaboration was also identified.
The information gathered from the systematic reviews was insufficient to answer the research questions. The relevant studies identified and synthesized in these reviews were therefore appraised and extracted again. In addition, a set of recently published RCTs was identified and included in this report.
None of the RCTs defined the frequency or severity of chemotherapy-associated toxic effects as its primary outcome. Some of the RCTs did, however, report rates of toxic effects or parameters related to toxicity. The results are inconsistent across the RCTs, ranging from no effect to positive effects (i.e., a reduction) on chemotherapy toxicity. RCTs with treatment toxicity as the primary outcome are needed to answer the question of whether the addition of mistletoe extracts to chemotherapy regimens can help reduce treatment toxicity.
HRQOL was the primary outcome in four RCTs. The addition of mistletoe to chemotherapy was shown to have a positive effect on the HRQOL of women treated for breast cancer.
Conclusions
The available evidence does not allow a conclusive answer to the question of whether the addition of mistletoe to chemotherapeutic regimens can reduce their toxicity. RCTs in which the primary outcome is treatment toxicity are needed. The addition of standardized mistletoe extract to chemotherapeutic regimens in the treatment of women with breast cancer can lead to improvements in HRQOL. In light of the results from the RCTs, coverage of mistletoe in cancer treatment in Germany should be restricted to the latter indication.
PMCID: PMC3011359  PMID: 21289969
11.  Extracorporeal Photophoresis 
Executive Summary
Objective
To assess the effectiveness, safety and cost-effectiveness of extracorporeal photophoresis (ECP) for the treatment of refractory erythrodermic cutaneous T cell lymphoma (CTCL) and refractory chronic graft versus host disease (cGvHD).
Background
Cutaneous T Cell Lymphoma
Cutaneous T cell lymphoma (CTCL) is a general name for a group of disorders of the skin caused by malignant white blood cells (T lymphocytes). Cutaneous T cell lymphoma is relatively uncommon and represents slightly more than 2% of all lymphomas in the United States. The most frequently diagnosed forms of CTCL are mycosis fungoides (MF) and its leukemic variant, Sezary syndrome (SS). The relative frequency and disease-specific 5-year survival of 1,905 primary cutaneous lymphomas classified according to the World Health Organization-European Organization for Research and Treatment of Cancer (WHO-EORTC) classification are shown in Appendix 1. Mycosis fungoides had a frequency of 44% and a disease-specific 5-year survival of 88%. Sezary syndrome had a frequency of 3% and a disease-specific 5-year survival of 24%.
Cutaneous T cell lymphoma has an annual incidence of approximately 0.4 per 100,000 and it mainly occurs in the 5th to 6th decade of life, with a male/female ratio of 2:1. Mycosis fungoides is an indolent lymphoma with patients often having several years of eczematous or dermatitic skin lesions before the diagnosis is finally established. Mycosis fungoides commonly presents as chronic eczematous patches or plaques and can remain stable for many years. Early in the disease biopsies are often difficult to interpret and the diagnosis may only become apparent by observing the patient over time.
The clinical course of MF is unpredictable. Most patients will live normal lives and experience skin symptoms without serious complications. Approximately 10% of MF patients will experience progressive disease involving lymph nodes, peripheral blood, bone marrow and visceral organs. A particular syndrome in these patients involves erythroderma (intense and usually widespread reddening of the skin from dilation of blood vessels, often preceding or associated with exfoliation), and circulating tumour cells. This is known as SS. It has been estimated that approximately 5-10% of CTCL patients have SS. Patients with SS have a median survival of approximately 30 months.
Chronic Graft Versus Host Disease
Allogeneic hematopoietic cell transplantation (HCT) is a treatment used for a variety of malignant and nonmalignant diseases of the bone marrow and immune system. The procedure is often associated with serious immunological complications, particularly graft versus host disease (GvHD). A chronic form of GvHD (cGvHD) afflicts many allogeneic HCT recipients and results in dysfunction of numerous organ systems or even a profound state of immunodeficiency. Chronic GvHD is the most frequent cause of poor long-term outcome and quality of life after allogeneic HCT. The syndrome typically develops several months after transplantation, when the patient may no longer be under the direct care of the transplant team.
Approximately 50% of patients with cGvHD have limited disease and a good prognosis. Of the patients with extensive disease, approximately 60% will respond to treatment and eventually be able to discontinue immunosuppressive therapy. The remaining patients will develop opportunistic infection, or require prolonged treatment with immunosuppressive agents.
Chronic GvHD occurs in at least 30% to 50% of recipients of transplants from human leukocyte antigen-matched siblings and at least 60% to 70% of recipients of transplants from unrelated donors. Risk factors include older age of patient or donor, higher degree of histoincompatibility, unrelated versus related donor, use of hematopoietic cells obtained from the blood rather than the marrow, and previous acute GvHD. Bhushan and Collins estimated that the incidence of severe cGvHD has probably increased in recent years because of the use of more unrelated transplants, donor leukocyte infusions, nonmyeloablative transplants, and stem cells obtained from the blood rather than the marrow. The syndrome typically occurs 4 to 7 months after transplantation but may begin as early as 2 months or as late as 2 or more years after transplantation. Chronic GvHD may occur by itself, evolve from acute GvHD, or occur after resolution of acute GvHD.
The onset of the syndrome may be abrupt but is frequently insidious with manifestations evolving gradually for several weeks. The extent of involvement varies significantly from mild involvement limited to a few patches of skin to severe involvement of numerous organ systems and profound immunodeficiency. The most commonly involved tissues are the skin, liver, mouth, and eyes. Patients with limited disease have localized skin involvement, evidence of liver dysfunction, or both, whereas those with more involvement of the skin or involvement of other organs have extensive disease.
Treatment
Cutaneous T Cell Lymphoma
The optimal management of MF is undetermined because of its low prevalence, and its highly variable natural history, with frequent spontaneous remissions and exacerbations and often prolonged survival.
Nonaggressive approaches to therapy are usually warranted, with treatment aimed at improving symptoms and physical appearance while limiting toxicity. Given that multiple skin sites are usually involved, the initial treatment choices are usually topical or intralesional corticosteroids or phototherapy using psoralen (a compound found in plants which makes the skin temporarily sensitive to ultraviolet A) (PUVA). PUVA is not curative and its influence on disease progression remains uncertain. Repeated courses are usually required, which may lead to an increased risk of both melanoma and nonmelanoma skin cancer. For thicker plaques, particularly if localized, radiotherapy with superficial electrons is an option.
“Second line” therapy for early stage disease is often topical chemotherapy, radiotherapy or total skin electron beam radiation (TSEB).
Treatment of advanced-stage (IIB-IV) MF, and of refractory or rapidly progressive SS, usually consists of topical or systemic therapy.
Bone marrow transplantation and peripheral blood stem cell transplantation have been used to treat many malignant hematologic disorders (e.g., leukemias) that are refractory to conventional treatment. Reports on the use of these procedures for the treatment of CTCL are limited and mostly consist of case reports or small case series.
Chronic Graft Versus Host Disease
Patients who develop cGvHD require reinstitution of immunosuppressive medication (if already discontinued) or an increase in dosage and possibly addition of other agents. The current literature regarding cGvHD therapy is less than optimal and many recommendations about therapy are based on common practices that await definitive testing. Patients with disease that is extensive by definition but is indolent in clinical appearance may respond to prednisone. However, patients with more aggressive disease are treated with higher doses of corticosteroids and/or cyclosporine.
Numerous salvage therapies have been considered in patients with refractory cGvHD, including ECP. Due to uncertainty around salvage therapies, Bhushan and Collins suggested that ideally, patients with refractory cGvHD should be entered into clinical trials.
Two Ontario expert consultants jointly estimated that there may be approximately 30 new erythrodermic treatment resistant CTCL patients and 30 new treatment resistant cGvHD patients per year who are unresponsive to other forms of therapy and may be candidates for ECP.
Extracorporeal photopheresis is a procedure that was initially developed as a treatment for CTCL, particularly SS.
Current Technique
Extracorporeal photopheresis is an immunomodulatory technique based on pheresis of light sensitive cells. Whole blood is removed from patients followed by pheresis. Lymphocytes are separated by centrifugation to create a concentrated layer of white blood cells. The lymphocyte layer is treated with methoxsalen (a drug that sensitizes the lymphocytes to light) and exposed to UVA, following which the lymphocytes are returned to the patient. Red blood cells and plasma are returned to the patient between each cycle.
Photosensitization is achieved by administering methoxsalen to the patient orally 2 hours before the procedure, or by injecting methoxsalen directly into the leukocyte-rich fraction. The latter approach avoids potential side effects such as nausea, and provides a more consistent drug level within the machine.
In general, the procedure takes approximately 2.5 to 3.5 hours from the time the intravenous line is inserted until the white blood cells are returned to the patient.
For CTCL, the treatment schedule is generally 2 consecutive days every 4 weeks for a median of 6 months. For cGvHD, an expert in the field estimated that the treatment schedule would be 3 times a week for the 1st month, then 2 consecutive days every 2 weeks after that (i.e., 4 treatments a month) for a median of 6 to 9 months.
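As a rough tally, the schedules above imply the following session counts (a back-of-the-envelope sketch assuming 4 weeks per month; an illustration, not a clinical protocol):

```python
# Rough ECP session counts implied by the treatment schedules above.
# Assumes 4 weeks per month throughout; purely illustrative.

def ctcl_sessions(months=6):
    """CTCL: 2 consecutive days every 4 weeks."""
    return 2 * months

def cgvhd_sessions(total_months):
    """cGvHD: 3 sessions a week for the 1st month,
    then 2 consecutive days every 2 weeks (4 sessions a month)."""
    first_month = 3 * 4              # 3 per week x 4 weeks
    later = 4 * (total_months - 1)   # 4 per month thereafter
    return first_month + later

print(ctcl_sessions(6))                       # 12 sessions over a median 6-month course
print(cgvhd_sessions(6), cgvhd_sessions(9))   # 32 to 44 sessions over 6-9 months
```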
Regulatory Status
The UVAR XTS Photopheresis System is licensed by Health Canada as a Class 3 medical device (license # 7703) for the “palliative treatment of skin manifestations of CTCL.” It is not licensed for the treatment of cGvHD.
UVADEX (methoxsalen sterile solution) is not licensed by Health Canada, but can be used in Canada via the Special Access Program. (Personal communication, Therakos, February 16, 2006)
According to the manufacturer, the UVAR XTS photopheresis system licensed by Health Canada can also be used with oral methoxsalen. (Personal communication, Therakos, February 16, 2006) However, oral methoxsalen is associated with side effects, must be taken by the patient in advance of ECP, and has variable absorption in the gastrointestinal tract.
According to Health Canada, UVADEX is not approved for use in Canada. In addition, a review of the Product Monographs of the methoxsalen products that have been approved in Canada showed that none of them has been approved for oral administration in combination with the UVAR XTS photopheresis system for “the palliative treatment of the skin manifestations of cutaneous T-cell lymphoma”.
In the United States, the UVAR XTS Photopheresis System is approved by the Food and Drug Administration (FDA) for “use in the ultraviolet-A (UVA) irradiation in the presence of the photoactive drug methoxsalen of extracorporeally circulating leukocyte-enriched blood in the palliative treatment of the skin manifestations of CTCL in persons who have not been responsive to other therapy.”
UVADEX is approved by the FDA for use in conjunction with the UVAR XTS photopheresis system for “use in the ultraviolet-A (UVA) irradiation in the presence of the photoactive drug methoxsalen of extracorporeally circulating leukocyte-enriched blood in the palliative treatment of the skin manifestations of CTCL in persons who have not been responsive to other therapy.”
The use of the UVAR XTS photopheresis system or UVADEX for cGvHD is an off-label use of an FDA-approved device/drug.
Summary of Findings
The quality of the trials was examined, and the evidence was graded using the definitions stated by the GRADE Working Group.
Cutaneous T Cell Lymphoma
Overall, there is low-quality evidence that ECP improves response rates and survival in patients with refractory erythrodermic CTCL (Table 1).
Limitations in the literature related to ECP for the treatment of refractory erythrodermic CTCL include the following:
Different treatment regimens.
Variety of forms of CTCL (and not necessarily treatment resistant) - MF, erythrodermic MF, SS.
SS with peripheral blood involvement → role of T cell clonality reporting?
Case series (1 small crossover RCT with several limitations)
Small sample sizes.
Retrospective.
Response criteria not clearly defined/consistent.
Unclear how concomitant therapy contributed to responses.
Variation in definitions of concomitant therapy
Comparison to historical controls.
Some patients were excluded from analysis because of progression of disease, toxicity and other reasons.
Unclear/strange statistics
Quality of life not reported as an outcome of interest.
The reported complete response (CR) rate ranges from ~16% to 23%, and the overall reported CR/partial response (PR) rate ranges from ~33% to 80%.
The wide range in reported responses to ECP appears to be due to the variability of the patients treated and the way in which the data were presented and analyzed.
Many patients, in mostly retrospective case series, were concurrently on other therapies and were not assessed for comparability of diagnosis or disease stage (MF versus SS; erythrodermic versus not erythrodermic). Blood involvement in patients receiving ECP (e.g., T cell clonality) was not consistently reported, especially in earlier studies. The definitions of partial and complete response also are not standardized or consistent between studies.
Quality of life was reported in one study; however, the scale was developed by the authors and is not a standard validated scale.
Adverse events associated with ECP appear to be uncommon and most involve catheter related infections and hypotension caused by volume depletion.
GRADE Quality of Studies – Extracorporeal Photopheresis for Refractory Erythrodermic Cutaneous T-Cell Lymphoma
Chronic Graft-Versus-Host Disease
Overall, there is low-quality evidence that ECP improves response rates and survival in patients with refractory cGvHD (Table 2).
Patients in the studies had stem cell transplants due to a variety of hematological disorders (e.g., leukemias, aplastic anemia, thalassemia major, Hodgkin’s lymphoma, non Hodgkin’s lymphoma).
In 2001, The Blue Cross Blue Shield Technology Evaluation Centre concluded that ECP meets the TEC criteria as treatment of cGvHD that is refractory to established therapy.
The Catalan health technology assessment (also published in 2001) concluded that ECP is a new but experimental therapeutic alternative for the treatment of the erythrodermic phase of CTCL and of cGvHD after allogeneic HCT, and that this therapy should be evaluated in the framework of an RCT.
Quality of life (Lansky/Karnofsky play performance score) was reported in 1 study.
The patients in the studies were all refractory to steroids and other immunosuppressive agents, and these drugs were frequently continued concomitantly with ECP.
Criteria for assessment of organ improvement in cGvHD are variable, but PR was typically defined as >50% improvement from baseline parameters and CR as complete resolution of organ involvement.
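The typical thresholds above can be written out as a small classification rule. This is a hypothetical helper for illustration only; the reviewed studies did not share a single formal definition or severity scale:

```python
def classify_response(baseline, current):
    """Classify organ response in cGvHD from a baseline and a current
    severity score (higher = worse), using the typical definitions cited
    above: CR = complete resolution, PR = >50% improvement from baseline.
    The numeric severity scale itself is an assumption for this sketch."""
    if baseline <= 0:
        raise ValueError("baseline severity must be positive")
    if current == 0:
        return "CR"                      # complete resolution of organ involvement
    improvement = (baseline - current) / baseline
    return "PR" if improvement > 0.5 else "no response"

print(classify_response(10, 0))   # CR
print(classify_response(10, 4))   # PR (60% improvement)
print(classify_response(10, 6))   # no response (40% improvement)
```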
Follow-up was variable and incomplete among the studies.
GRADE Quality of Studies – ECP for Refractory cGvHD
Conclusion
As per the GRADE Working Group, overall recommendations consider 4 main factors.
The tradeoffs, taking into account the estimated size of the effect for the main outcome, the confidence limits around those estimates and the relative value placed on the outcome.
The quality of the evidence (Tables 1 and 2).
Translation of the evidence into practice in a specific setting, taking into consideration important factors that could be expected to modify the size of the expected effects such as proximity to a hospital or availability of necessary expertise.
Uncertainty about the baseline risk for the population of interest.
The GRADE Working Group also recommends that incremental costs of healthcare alternatives should be considered explicitly alongside the expected health benefits and harms. Recommendations rely on judgments about the value of the incremental health benefits in relation to the incremental costs. The last column in Table 3 is the overall trade-off between benefits and harms and incorporates any risk/uncertainty.
For refractory erythrodermic CTCL, the overall GRADE and strength of the recommendation is “weak” – the quality of the evidence is “low” (uncertainties due to methodological limitations in the study design in terms of study quality and directness), and the corresponding risk/uncertainty is increased due to an annual budget impact of approximately $1.5M Cdn (based on 30 patients) while the cost-effectiveness of ECP is unknown and difficult to estimate considering that there are no high quality studies of effectiveness. The device is licensed by Health Canada, but the sterile solution of methoxsalen is not licensed.
With an annual budget impact of $1.5M Cdn (based on 30 patients) and a current expenditure of $1.3M Cdn (for out-of-country treatment of 7 patients), the potential annual cost savings for 30 patients with refractory erythrodermic CTCL is about $3.8M Cdn.
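The implied arithmetic can be checked with a back-of-the-envelope calculation that scales the reported out-of-country cost for 7 patients up to 30 patients. Note that this simple linear scaling lands near, but not exactly on, the report's $3.8M figure, which presumably reflects different per-patient cost estimates or rounding:

```python
# Back-of-the-envelope check of the cost-savings claim above (all $M Cdn).
# Assumption (ours, not the report's): out-of-country cost scales linearly
# with patient count.
ontario_budget_impact = 1.5    # annual cost of treating 30 patients in Ontario
out_of_country_cost_7 = 1.3    # current annual out-of-country cost, 7 patients

per_patient = out_of_country_cost_7 / 7
out_of_country_cost_30 = per_patient * 30
savings = out_of_country_cost_30 - ontario_budget_impact

print(round(savings, 1))  # ~4.1, the same ballpark as the reported $3.8M
```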
For refractory cGvHD, the overall GRADE and strength of the recommendation is “weak” – the quality of the evidence is “low” (uncertainties due to methodological limitations in the study design in terms of study quality and directness), and the corresponding risk/uncertainty is increased due to a budget impact of approximately $1.5M Cdn while the cost-effectiveness of ECP is unknown and difficult to estimate considering that there are no high quality studies of effectiveness. Both the device and sterile solution are not licensed by Health Canada for the treatment of cGvHD.
If all the ECP procedures for patients with refractory erythrodermic CTCL and refractory cGvHD were performed in Ontario, the annual budget impact would be approximately $3M Cdn.
Overall GRADE and Strength of Recommendation (Including Uncertainty)
PMCID: PMC3379535  PMID: 23074497
12.  Phase II Trial of Neoadjuvant Docetaxel and CG1940/CG8711 Followed by Radical Prostatectomy in Patients With High-Risk Clinically Localized Prostate Cancer 
The Oncologist  2013;18(6):687-688.
Background.
Prostate cancer (PC) is the most commonly diagnosed noncutaneous malignancy in American men. PC, which exhibits a slow growth rate and multiple potential target epitopes, is an ideal candidate for immunotherapy. GVAX for prostate cancer is a cellular immunotherapy, composed of PC-3 cells (CG1940) and LNCaP cells (CG8711). Each of the components is a prostate adenocarcinoma cell line that has been genetically modified to secrete granulocyte-macrophage colony-stimulating factor. Hypothesizing that GVAX for prostate cancer could be effective in a neoadjuvant setting in patients with locally advanced disease, we initiated a phase II trial of neoadjuvant docetaxel and GVAX. For the trial, the clinical effects of GVAX were assessed in patients undergoing radical prostatectomy (RP).
Methods.
Patients received docetaxel administered at a dose of 75 mg/m2 every 3 weeks for 4 cycles. GVAX was administered 2–3 days after chemotherapy preoperatively for four courses of immunotherapy. The first dose of GVAX was a prime immunotherapy of 5×108 cells. The subsequent boost immunotherapies consisted of 3×108 cells. After RP, patients received an additional six courses of immunotherapy. Pathologic complete response, toxicity, and clinical response were assessed. The primary endpoint of the trial was a pathologic stage of pT0, which is defined as no evidence of cancer in the prostate.
Results.
Six patients completed neoadjuvant docetaxel and GVAX therapy. No serious drug-related adverse events were observed. Median change in prostate-specific antigen (PSA) following neoadjuvant therapy was 1.47 ng/ml. One patient did not undergo RP due to the discovery of positive lymph nodes during exploration. Of the five patients completing RP, four had a downstaging of their Gleason score. Undetectable PSA was achieved in three patients at 2 months after RP and in two patients at 3 years after RP.
Conclusions.
Neoadjuvant docetaxel/GVAX is safe and well tolerated in patients with high-risk locally advanced PC. No evidence of increased intraoperative hemorrhage or increased length of hospital stay postoperatively was noted. These results justify further study of neoadjuvant immunotherapy.
doi:10.1634/theoncologist.2011-0234
PMCID: PMC4063395  PMID: 23740935
13.  Influence of a six month endurance exercise program on the immune function of prostate cancer patients undergoing Antiandrogen- or Chemotherapy: design and rationale of the ProImmun study 
BMC Cancer  2013;13:272.
Background
Exercise appears to reduce prostate cancer-specific mortality risk and treatment-related side effects such as fatigue and incontinence. However, the influence of physical activity at the immunological level remains uncertain. Even prostate cancer patients undergoing palliative treatment often have a relatively long life span compared with patients with other cancer entities. To optimize exercise programs and their outcomes it is essential to investigate the underlying mechanisms. Further, it is important to discriminate between different exercise protocols and therapy regimens.
Methods/Design
The ProImmun study is a prospective multicenter patient preference randomized controlled trial investigating the influence of a 24-week endurance exercise program in 80–100 prostate cancer patients by comparing patients undergoing antiandrogen therapy combined with exercise (AE), antiandrogen therapy without exercise (A), chemotherapy with exercise (CE), or chemotherapy without exercise (C). The primary outcome of the study is a change in prostate cancer-relevant cytokines and hormones (IL-6, MIF, IGF-1, testosterone). Secondary endpoints are immune cell ratios, oxidative stress and antioxidative capacity levels, VO2 peak, fatigue, and quality of life. Patients in the intervention groups exercise five times per week, two sessions of which are supervised. During the supervised sessions, patients (AE and CE) exercise for 33 minutes on a bicycle ergometer at 70-75% of their VO2 peak. To assess long-term effects and sustainability of the intervention, two follow-up assessments are arranged 12 and 18 months after the intervention.
Discussion
The ProImmun study is the first trial to primarily investigate the immunological effects of a six-month endurance exercise program in prostate cancer patients during palliative care. Separating patients treated with antiandrogen therapy from those additionally treated with chemotherapy may allow a more specific view of the influence of endurance training interventions and of the impact of different therapy protocols on immune function.
Trial registration
German Clinical Trials Register: DRKS00004739
doi:10.1186/1471-2407-13-272
PMCID: PMC3681550  PMID: 23731674
Exercise; Prostate cancer; Immune function
14.  Effect of switching basal insulin regimen to degludec on quality of life in Japanese patients with type 1 and type 2 diabetes mellitus 
Background
Maintenance of a stable basal insulin level is important for glycemic control in the treatment of diabetes mellitus. The recently introduced insulin degludec has the longest duration of action among basal insulin formulations. The purpose of this study was to assess changes in quality of life (QOL) associated with switching the basal insulin regimen to degludec in patients with type 1 and type 2 diabetes mellitus.
Methods
This 24-week open-label intervention study included type 1 (n = 10) and type 2 (n = 20) diabetes mellitus patients, with adequately controlled hemoglobin A1c (HbA1c), who had received insulin glargine or detemir for at least 6 months. The primary outcome was change of QOL from baseline, as assessed by the Diabetes Therapy-Related QOL (DTR-QOL) questionnaire, after switching from glargine or detemir to degludec. HbA1c and other parameters were also assessed as secondary outcomes.
Results
QOL and HbA1c in patients with type 1 diabetes mellitus were unchanged during this study. In patients with type 2 diabetes mellitus, HbA1c did not change, but total DTR-QOL score was significantly improved from baseline after switching to degludec. The DTR-QOL Factor 2, “Anxiety and dissatisfaction with treatment”, was significantly improved in patients with type 2 diabetes mellitus and especially in the subgroup receiving basal supported oral therapy (BOT).
Conclusions
Switching of the basal insulin regimen from glargine or detemir to degludec significantly improved the QOL of patients with type 2 diabetes mellitus who were receiving BOT, by reducing mental stress or anxiety about their treatment.
doi:10.1186/s40780-015-0027-2
PMCID: PMC4728762  PMID: 26819737
Degludec; Glargine; Detemir; HbA1c; DTR-QOL; QOL; BOT
15.  A Randomised Controlled Trial of Artemether-Lumefantrine Versus Artesunate for Uncomplicated Plasmodium falciparum Treatment in Pregnancy 
PLoS Medicine  2008;5(12):e253.
Background
To date no comparative trials have been done, to our knowledge, of fixed-dose artemisinin combination therapies (ACTs) for the treatment of Plasmodium falciparum malaria in pregnancy. Evidence on the safety and efficacy of ACTs in pregnancy is needed as these drugs are being used increasingly throughout the malaria-affected world. The objective of this study was to compare the efficacy, tolerability, and safety of artemether-lumefantrine, the most widely used fixed ACT, with 7 d artesunate monotherapy in the second and third trimesters of pregnancy.
Methods and Findings
An open-label randomised controlled trial comparing directly observed treatment with artemether-lumefantrine 3 d (AL) or artesunate monotherapy 7 d (AS7) was conducted in Karen women in the border area of northwestern Thailand who had uncomplicated P. falciparum malaria in the second and third trimesters of pregnancy. The primary endpoint was efficacy defined as the P. falciparum PCR-adjusted cure rates assessed at delivery or by day 42 if this occurred later than delivery, as estimated by Kaplan-Meier survival analysis. Infants were assessed at birth and followed until 1 y of life. Blood sampling was performed to characterise the pharmacokinetics of lumefantrine in pregnancy. Both regimens were very well tolerated. The cure rates (95% confidence interval) for the intention to treat (ITT) population were: AS7 89.2% (82.3%–96.1%) and AL 82.0% (74.8%–89.3%), p = 0.054 (ITT); and AS7 89.7% (82.6%–96.8%) and AL 81.2% (73.6%–88.8%), p = 0.031 (per-protocol population). One-third of the PCR-confirmed recrudescent cases occurred after 42 d of follow-up. Birth outcomes and infant (up to age 1 y) outcomes did not differ significantly between the two groups. The pharmacokinetic study indicated that low concentrations of artemether and lumefantrine were the main contributors to the poor efficacy of AL.
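A PCR-adjusted cure rate of this kind is one minus the Kaplan-Meier failure estimate at the assessment time. A minimal pure-Python sketch of the estimator, using made-up data rather than the trial's, might look like this:

```python
def kaplan_meier(times, events):
    """Kaplan-Meier survival curve.
    times: follow-up day for each patient.
    events: 1 = treatment failure (e.g., PCR-confirmed recrudescence),
            0 = censored.
    Returns [(time, survival probability)] at each failure time."""
    data = sorted(zip(times, events))
    at_risk = len(data)
    surv = 1.0
    curve = []
    i = 0
    while i < len(data):
        t = data[i][0]
        failures = sum(1 for tt, e in data if tt == t and e == 1)
        removed = sum(1 for tt, _ in data if tt == t)  # failures + censored at t
        if failures:
            surv *= (at_risk - failures) / at_risk
            curve.append((t, surv))
        at_risk -= removed
        while i < len(data) and data[i][0] == t:
            i += 1
    return curve

# Illustrative data: 5 patients, failures on days 10 and 20, rest censored.
curve = kaplan_meier([10, 20, 20, 30, 42], [1, 0, 1, 0, 0])
print(curve)  # survival drops to 4/5, then 4/5 * 3/4 = 0.6 -> ~60% cure rate
```

The trial's interval estimates would come from the Greenwood variance of this curve, which is why they differ from a simple binomial confidence interval on the raw proportions.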
Conclusion
The current standard six-dose artemether-lumefantrine regimen was well tolerated and safe in pregnant Karen women with uncomplicated falciparum malaria, but efficacy was inferior to 7 d artesunate monotherapy and was unsatisfactory for general deployment in this geographic area. Reduced efficacy probably results from low drug concentrations in later pregnancy. A longer or more frequent AL dose regimen may be needed to treat pregnant women effectively and should now be evaluated. Parasitological endpoints in clinical trials of any antimalarial drug treatment in pregnancy should be extended to delivery, or to day 42 if this occurs later.
Trial Registration: Current Controlled Trials ISRCTN86353884
Rose McGready and colleagues show that an artemether-lumefantrine regimen is well tolerated and safe in pregnant Karen women with uncomplicated falciparum malaria, but efficacy is inferior to artesunate, probably because of low drug concentrations in later pregnancy.
Editors' Summary
Background.
Plasmodium falciparum, a mosquito-borne parasite that causes malaria, kills nearly one million people every year. Although most deaths occur among young children, malaria during pregnancy is also an important public-health problem. In areas where malaria transmission is high (stable transmission), women acquire a degree of immunity. Although these women are less symptomatic than women who lack natural protection, their babies are often small and sickly because malaria-related anemia (lack of red blood cells) and parasites in the placenta limit the nutrients supplied to the baby before birth. By contrast, in areas where malaria transmission is low (unstable transmission or sporadic outbreaks), women have little immunity to P. falciparum. If these women become infected during pregnancy, “uncomplicated” malaria (fever, chills, and anemia) can rapidly progress to “severe” malaria (in which vital organs are damaged), which can be fatal to the mother and/or her unborn child unless prompt and effective treatment is given.
Why Was This Study Done?
Malaria parasites are now resistant to many of the older antimalarial drugs (for example, quinine). So, since 2006, the World Health Organization (WHO) has recommended that uncomplicated malaria during the second and third trimester of pregnancy is treated with short course (3 d) fixed-dose artemisinin combination therapy (ACT; quinine is still used in early pregnancy because it is not known whether ACT damages fetal development, which mainly occurs during the first 3 mo of pregnancy). Artemisinin derivatives are fast-acting antimalarial agents that are used in combination with another antimalarial drug to reduce the chances of P. falciparum becoming resistant to either drug. The most widely used fixed-dose ACT is artemether–lumefantrine (AL) but, although several trials have examined the safety and efficacy of this treatment in non-pregnant women, little is known about how well it works in pregnant women. In this study, the researchers compare the efficacy, tolerability, and safety of AL with a 7-d course of artesunate monotherapy (AS7; another artemisinin derivative) in the treatment of uncomplicated malaria in pregnancy in northwest Thailand, an area with unstable but highly drug resistant malaria transmission.
What Did the Researchers Do and Find?
The researchers enrolled 253 women with uncomplicated malaria during the second and third trimesters of pregnancy into their open-label trial (a trial in which the patients and their health-care workers know who is receiving which drug regimen). Half the women received each type of treatment. The trial's main outcome was the “PCR-adjusted cure rate” at delivery or 42 d after treatment if this occurred after delivery. This cure rate was assessed by examining blood smears for parasites and then using a technique called PCR to determine which cases of malaria were new infections (classified as treatment successes along with negative blood smears) and which were recurrences of an old infection (classified as treatment failures). The PCR-adjusted cure rates were 89.7% and 81.2% for AS7 and AL, respectively. Both treatments were well tolerated, few side effects were seen with either treatment, and infant health and development at birth and up to 1 y old were similar with both regimens. Finally, an analysis of blood samples taken 7 d after treatment with AL showed that blood levels of lumefantrine were below those previously associated with treatment failure in about a third of the women tested.
What Do These Findings Mean?
Although these findings indicate that the AL regimen is a well tolerated and safe treatment for uncomplicated malaria in pregnant women living in northwest Thailand, the efficacy of this treatment was lower than that of artesunate monotherapy. In fact, neither treatment reached the 90% cure rate recommended by WHO for ACTs and it is likely that cure rates in a more realistic situation (that is, not in a trial where efforts are made to make sure everyone completes their treatment) would be even lower. The findings also suggest that the reduced efficacy of the AL regimen in pregnant women compared to the efficacy previously seen in non-pregnant women may be caused by lower drug blood levels during pregnancy. Thus, a higher-dose AL regimen (or an alternative ACT) may be needed to successfully treat uncomplicated malaria during pregnancy.
Additional Information.
Please access these Web sites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.0050253.
The MedlinePlus encyclopedia contains a page on malaria (in English and Spanish)
Information is available from the World Health Organization on malaria (in several languages), and their 2006 Guidelines for the Treatment of Malaria includes specific recommendations for the treatment of pregnant women
The US Centers for Disease Control and Prevention provide information on malaria and on malaria during pregnancy (in English and Spanish)
Information is available from the Roll Back Malaria Partnership on malaria during pregnancy, on artemisinin-based combination therapies, and on malaria in Thailand
doi:10.1371/journal.pmed.0050253
PMCID: PMC2605900  PMID: 19265453
16.  Internet-Based Device-Assisted Remote Monitoring of Cardiovascular Implantable Electronic Devices 
Executive Summary
Objective
The objective of this Medical Advisory Secretariat (MAS) report was to conduct a systematic review of the available published evidence on the safety, effectiveness, and cost-effectiveness of Internet-based device-assisted remote monitoring systems (RMSs) for therapeutic cardiac implantable electronic devices (CIEDs) such as pacemakers (PMs), implantable cardioverter-defibrillators (ICDs), and cardiac resynchronization therapy (CRT) devices. The MAS evidence-based review was performed to support public financing decisions.
Clinical Need: Condition and Target Population
Sudden cardiac death (SCD) is a major cause of fatalities in developed countries. In the United States almost half a million people die of SCD annually, resulting in more deaths than stroke, lung cancer, breast cancer, and AIDS combined. In Canada each year more than 40,000 people die from a cardiovascular-related cause; approximately half of these deaths are attributable to SCD.
Most cases of SCD occur in the general population typically in those without a known history of heart disease. Most SCDs are caused by cardiac arrhythmia, an abnormal heart rhythm caused by malfunctions of the heart’s electrical system. Up to half of patients with significant heart failure (HF) also have advanced conduction abnormalities.
Cardiac arrhythmias are managed by a variety of drugs, ablative procedures, and therapeutic CIEDs. The range of CIEDs includes pacemakers (PMs), implantable cardioverter-defibrillators (ICDs), and cardiac resynchronization therapy (CRT) devices. Bradycardia is the main indication for PMs and individuals at high risk for SCD are often treated by ICDs.
Heart failure (HF) is also a significant health problem and is the most frequent cause of hospitalization in those over 65 years of age. Patients with moderate to severe HF may also have cardiac arrhythmias, although the cause may be related more to heart pump or haemodynamic failure. The presence of HF, however, increases the risk of SCD five-fold, regardless of aetiology. Patients with HF who remain highly symptomatic despite optimal drug therapy are sometimes also treated with CRT devices.
With an increasing prevalence of age-related conditions such as chronic HF and the expanding indications for ICD therapy, the rate of ICD placement has been dramatically increasing. The appropriate indications for ICD placement, as well as the rate of ICD placement, are increasingly an issue. In the United States, after the introduction of expanded coverage of ICDs, a national ICD registry was created in 2005 to track these devices. A recent survey based on this national ICD registry reported that 22.5% (25,145) of patients had received a non-evidence based ICD and that these patients experienced significantly higher in-hospital mortality and post-procedural complications.
In addition to the increased ICD device placement and the upfront device costs, there is the need for lifelong follow-up or surveillance, placing a significant burden on patients and device clinics. In 2007, over 1.6 million CIEDs were implanted in Europe and the United States, which translates to over 5.5 million patient encounters per year if the recommended follow-up practices are considered. A safe and effective RMS could potentially improve the efficiency of long-term follow-up of patients and their CIEDs.
Technology
In addition to being therapeutic devices, CIEDs have extensive diagnostic abilities. All CIEDs can be interrogated and reprogrammed during an in-clinic visit using an inductive programming wand. Remote monitoring would allow patients to transmit information recorded in their devices from the comfort of their own homes. Currently most ICD devices also have the potential to be remotely monitored. Remote monitoring (RM) can be used to check system integrity, to alert on arrhythmic episodes, and potentially to replace in-clinic follow-ups and manage disease remotely. These devices cannot currently be reprogrammed remotely, although this feature is being tested in pilot settings.
Every RMS is specifically designed by a manufacturer for their cardiac implant devices. For Internet-based device-assisted RMSs, this customization includes details such as web application, multiplatform sensors, custom algorithms, programming information, and types and methods of alerting patients and/or physicians. The addition of peripherals for monitoring weight and pressure or communicating with patients through the onsite communicators also varies by manufacturer. Internet-based device-assisted RMSs for CIEDs are intended to function as a surveillance system rather than an emergency system.
Health care providers therefore need to learn each application, and as more than one application may be used at one site, multiple applications may need to be reviewed for alarms. All RMSs deliver system integrity alerting; however, some systems seem to be better geared to fast arrhythmic alerting, whereas other systems appear to be more intended for remote follow-up or supplemental remote disease management. The different RMSs may therefore have different impacts on workflow organization because of their varying frequency of interrogation and methods of alerts. The integration of these proprietary RM web-based registry systems with hospital-based electronic health record systems has so far not been commonly implemented.
Currently there are 2 general types of RMSs: those that transmit device diagnostic information automatically and without patient assistance to secure Internet-based registry systems, and those that require patient assistance to transmit information. Both types employ preprogrammed alerts that are transmitted either automatically or at regularly scheduled intervals to patients and/or physicians.
The current web applications, programming, and registry systems differ greatly between the manufacturers of transmitting cardiac devices. In Canada there are currently 4 manufacturers—Medtronic Inc., Biotronik, Boston Scientific Corp., and St Jude Medical Inc.—which have regulatory approval for remote transmitting CIEDs. Remote monitoring systems are proprietary to the manufacturer of the implant device. An RMS for one device will not work with another device, and the RMS may not work with all versions of the manufacturer’s devices.
All Internet-based device-assisted RMSs have common components. The implanted device is equipped with a micro-antenna that communicates with a small external device (at bedside or wearable) commonly known as the transmitter. Transmitters are able to interrogate programmed parameters and diagnostic data stored in the patients’ implant device. The information transfer to the communicator can occur at preset time intervals with the participation of the patient (waving a wand over the device) or it can be sent automatically (wirelessly) without their participation. The encrypted data are then uploaded to an Internet-based database on a secure central server. The data processing facilities at the central database, depending on the clinical urgency, can trigger an alert for the physician(s) that can be sent via email, fax, text message, or phone. The details are also posted on the secure website for viewing by the physician (or their delegate) at their convenience.
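The urgency-dependent alert routing described above can be sketched in code. This is a purely illustrative toy model, not the logic of any real RMS: the class names, thresholds, and routing strings are all hypothetical, chosen only to show the pattern of classifying an uploaded transmission and then either alerting the physician immediately or posting it to the secure website for later review.

```python
# Hypothetical sketch of the alert-routing step at the central server.
# All names and thresholds are illustrative assumptions, not taken from
# any manufacturer's actual remote monitoring system.

from dataclasses import dataclass
from enum import Enum


class Urgency(Enum):
    ROUTINE = 0      # posted to the secure website, reviewed at convenience
    ACTIONABLE = 1   # alert queued for the next scheduled notification
    CRITICAL = 2     # immediate alert (e.g., email, text message, or phone)


@dataclass
class Transmission:
    device_id: str
    lead_impedance_ohms: float
    arrhythmia_detected: bool


def classify(tx: Transmission) -> Urgency:
    """Toy triage rule: out-of-range lead impedance escalates to critical;
    a detected arrhythmia is actionable; everything else is routine."""
    if tx.lead_impedance_ohms < 200 or tx.lead_impedance_ohms > 2000:
        return Urgency.CRITICAL
    if tx.arrhythmia_detected:
        return Urgency.ACTIONABLE
    return Urgency.ROUTINE


def route(tx: Transmission) -> str:
    """Map a classified transmission to a notification action."""
    urgency = classify(tx)
    if urgency is Urgency.CRITICAL:
        return f"alert physician now for {tx.device_id}"
    if urgency is Urgency.ACTIONABLE:
        return f"queue alert for {tx.device_id}"
    return f"post {tx.device_id} to secure website"
```

In a real deployment this triage logic is proprietary to each manufacturer and far more elaborate; the sketch only captures the surveillance-rather-than-emergency design point made above, where most transmissions are simply posted for review at the clinician's convenience.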
Research Questions
The research directions and specific research questions for this evidence review were as follows:
To identify the Internet-based device-assisted RMSs available for follow-up of patients with therapeutic CIEDs such as PMs, ICDs, and CRT devices.
To identify the potential risks, operational issues, or organizational issues related to Internet-based device-assisted RM for CIEDs.
To evaluate the safety, acceptability, and effectiveness of Internet-based device-assisted RMSs for CIEDs such as PMs, ICDs, and CRT devices.
To evaluate the safety, effectiveness, and cost-effectiveness of Internet-based device-assisted RMSs for CIEDs compared to usual outpatient in-office monitoring strategies.
To evaluate the resource implications or budget impact of RMSs for CIEDs in Ontario, Canada.
Research Methods
Literature Search
The review included a systematic review of published scientific literature and consultations with experts and manufacturers of all 4 approved RMSs for CIEDs in Canada. Information on CIED cardiac implant clinics was also obtained from Provincial Programs, a division within the Ministry of Health and Long-Term Care with a mandate for cardiac implant specialty care. Various administrative databases and registries were used to outline the current clinical follow-up burden of CIEDs in Ontario. The provincial population-based ICD database developed and maintained by the Institute for Clinical Evaluative Sciences (ICES) was used to review the current follow-up practices with Ontario patients implanted with ICD devices.
Search Strategy
A literature search was performed on September 21, 2010 using OVID MEDLINE, MEDLINE In-Process and Other Non-Indexed Citations, EMBASE, the Cumulative Index to Nursing & Allied Health Literature (CINAHL), the Cochrane Library, and the International Agency for Health Technology Assessment (INAHTA) for studies published from 1950 to September 2010. Search alerts were generated and reviewed for additional relevant literature until December 31, 2010. Abstracts were reviewed by a single reviewer and, for those studies meeting the eligibility criteria, full-text articles were obtained. Reference lists were also examined for any additional relevant studies not identified through the search.
Inclusion Criteria
published between 1950 and September 2010;
English language full-reports and human studies;
original reports including clinical evaluations of Internet-based device-assisted RMSs for CIEDs in clinical settings;
reports including standardized measurements on outcome events such as technical success, safety, effectiveness, cost, measures of health care utilization, morbidity, mortality, quality of life or patient satisfaction;
randomized controlled trials (RCTs), systematic reviews and meta-analyses, cohort and controlled clinical studies.
Exclusion Criteria
non-systematic reviews, letters, comments and editorials;
reports not involving standardized outcome events;
clinical reports not involving Internet-based device assisted RM systems for CIEDs in clinical settings;
reports involving studies testing or validating algorithms without RM;
studies with small samples (<10 subjects).
Outcomes of Interest
The outcomes of interest included: technical outcomes, emergency department visits, complications, major adverse events, symptoms, hospital admissions, clinic visits (scheduled and/or unscheduled), survival, morbidity (disease progression, stroke, etc.), patient satisfaction, and quality of life.
Summary of Findings
The MAS evidence review was performed to review available evidence on Internet-based device-assisted RMSs for CIEDs published until September 2010. The search identified 6 systematic reviews, 7 randomized controlled trials, and 19 reports for 16 cohort studies—3 of these being registry-based and 4 being multi-centered. The evidence is summarized in the 3 sections that follow.
1. Effectiveness of Remote Monitoring Systems of CIEDs for Cardiac Arrhythmia and Device Functioning
In total, 15 reports on 13 cohort studies involving investigations with 4 different RMSs for CIEDs in cardiology implant clinic groups were identified in the review. The 4 RMSs were: Care Link Network® (Medtronic Inc., Minneapolis, MN, USA); Home Monitoring® (Biotronik, Berlin, Germany); House Call II® (St Jude Medical Inc., St Paul, MN, USA); and a manufacturer-independent RMS. Eight of these reports were with the Home Monitoring® RMS (12,949 patients), 3 were with the Care Link® RMS (167 patients), 1 was with the House Call II® RMS (124 patients), and 1 was with a manufacturer-independent RMS (44 patients). All of the studies, except for 2 in the United States (1 with Home Monitoring® and 1 with House Call II®), were performed in European countries.
The RMSs in the studies were evaluated with different cardiac implant device populations: ICDs only (6 studies), ICD and CRT devices (3 studies), PM and ICD and CRT devices (4 studies), and PMs only (2 studies). The patient populations were predominately male (range, 52%–87%) in all studies, with mean ages ranging from 58 to 76 years. One study population was unique in that RMSs were evaluated for ICDs implanted solely for primary prevention in young patients (mean age, 44 years) with Brugada syndrome, an inherited condition that increases the risk of sudden cardiac death in young adults.
Most of the cohort studies reported on the feasibility of RMSs in clinical settings with limited follow-up. In the short follow-up periods of the studies, the majority of the events were related to detection of medical events rather than system configuration or device abnormalities. The results of the studies are summarized below:
The interrogation of devices on the web platform, both for continuous and scheduled transmissions, was significantly quicker with remote follow-up, both for nurses and physicians.
In a case-control study focusing on a population-based Brugada registry with patients followed up remotely, there were significantly fewer outpatient visits and greater detection of inappropriate shocks. One death occurred in the control group not followed remotely, and post-mortem analysis indicated early signs of lead failure prior to the event.
Two studies examined the role of RMSs in following ICD leads under regulatory advisory in a European clinical setting and noted:
– Fewer inappropriate shocks were administered in the RM group.
– Urgent in-office interrogations and surgical revisions were performed within 12 days of remote alerts.
– No signs of lead fracture were detected at in-office follow-up; all were detected at remote follow-up.
Only 1 study reported evaluating quality of life in patients followed up remotely at 3 and 6 months; no values were reported.
Patient satisfaction was evaluated in 5 cohort studies, all with short-term follow-up: 1 for the Home Monitoring® RMS, 3 for the Care Link® RMS, and 1 for the House Call II® RMS.
– Patients reported receiving a sense of security from the transmitter, a good relationship with nurses and physicians, positive implications for their health, and satisfaction with RM and organization of services.
– Although patients reported that the system was easy to implement and required less than 10 minutes to transmit information, a variable proportion of patients (range, 9%–39%) reported that they needed the assistance of a caregiver for their transmission.
– The majority of patients would recommend RM to other ICD patients.
– Patients with hearing or other physical or mental conditions hindering the use of the system were excluded from studies, but the frequency of this was not reported.
Physician satisfaction was evaluated in 3 studies, all with the Care Link® RMS:
– Physicians reported an ease of use and high satisfaction with a generally short-term use of the RMS.
– Physicians reported being able to address the problems in unscheduled patient transmissions or physician-initiated transmissions remotely, and were able to handle the majority of the troubleshooting calls remotely.
– Both nurses and physicians reported a high level of satisfaction with the web registry system.
2. Effectiveness of Remote Monitoring Systems in Heart Failure Patients for Cardiac Arrhythmia and Heart Failure Episodes
Remote follow-up of HF patients implanted with ICD or CRT devices, generally managed in specialized HF clinics, was evaluated in 3 cohort studies: 1 involved the Home Monitoring® RMS and 2 involved the Care Link® RMS. In these RMSs, in addition to the standard diagnostic features, the cardiac devices continuously assess other variables such as patient activity, mean heart rate, and heart rate variability. Intra-thoracic impedance, a proxy measure for lung fluid overload, was also measured in the Care Link® studies. The overall diagnostic performance of these measures cannot be evaluated, as the information was not reported for patients who did not experience intra-thoracic impedance threshold crossings or did not undergo interventions. The trial results involved descriptive information on transmissions and alerts in patients experiencing high morbidity and hospitalization in the short study periods.
3. Comparative Effectiveness of Remote Monitoring Systems for CIEDs
Seven RCTs were identified evaluating RMSs for CIEDs: 2 were for PMs (1276 patients) and 5 were for ICD/CRT devices (3733 patients). Studies performed in the clinical setting in the United States involved both the Care Link® RMS and the Home Monitoring® RMS, whereas all studies performed in European countries involved only the Home Monitoring® RMS.
3A. Randomized Controlled Trials of Remote Monitoring Systems for Pacemakers
Two trials, both multicenter RCTs, were conducted in different countries with different RMSs and study objectives. The PREFER trial was a large trial (897 patients) performed in the United States examining the ability of Care Link®, an Internet-based remote PM interrogation system, to detect clinically actionable events (CAEs) sooner than the current in-office follow-up supplemented with transtelephonic monitoring transmissions, a limited form of remote device interrogation. The trial results are summarized below:
In the 375-day mean follow-up, 382 patients were identified with at least 1 CAE—111 patients in the control arm and 271 in the remote arm.
The event rate detected per patient for every type of CAE, except for loss of atrial capture, was higher in the remote arm than the control arm.
The median time to first detection of CAEs (4.9 vs. 6.3 months) was significantly shorter in the RMS group compared to the control group (P < 0.0001).
Additionally, only 2% (3/190) of the CAEs in the control arm were detected during a transtelephonic monitoring transmission (the rest were detected at in-office follow-ups), whereas 66% (446/676) of the CAEs were detected during remote interrogation.
The second study, the OEDIPE trial, was a smaller trial (379 patients) performed in France evaluating the ability of the Home Monitoring® RMS to shorten PM post-operative hospitalization while maintaining the safety of conventional management with longer hospital stays.
Implementation and operationalization of the RMS was reported to be successful in 91% (346/379) of the patients and represented 8144 transmissions.
In the RM group 6.5% of patients failed to send messages (10 due to improper use of the transmitter, 2 with unmanageable stress). Of the 172 patients transmitting, 108 patients sent a total of 167 warnings during the trial, with a greater proportion of warnings being attributed to medical rather than technical causes.
Forty percent had no warning message transmission and among these, 6 patients experienced a major adverse event and 1 patient experienced a non-major adverse event. Of the 6 patients having a major adverse event, 5 contacted their physician.
The mean medical reaction time was faster in the RM group (6.5 ± 7.6 days vs. 11.4 ± 11.6 days).
The mean duration of hospitalization was significantly shorter (P < 0.001) for the RM group than the control group (3.2 ± 3.2 days vs. 4.8 ± 3.7 days).
Quality of life estimates by the SF-36 questionnaire were similar for the 2 groups at 1-month follow-up.
3B. Randomized Controlled Trials Evaluating Remote Monitoring Systems for ICD or CRT Devices
The 5 studies evaluating the impact of RMSs with ICD/CRT devices were conducted in the United States and in European countries and involved 2 RMSs—Care Link® and Home Monitoring ®. The objectives of the trials varied and 3 of the trials were smaller pilot investigations.
The first of the smaller studies (151 patients) evaluated patient satisfaction, achievement of patient outcomes, and the cost-effectiveness of the Care Link® RMS compared to quarterly in-office device interrogations with 1-year follow-up.
Individual outcomes such as hospitalizations, emergency department visits, and unscheduled clinic visits were not significantly different between the study groups.
Except for a significantly higher detection of atrial fibrillation in the RM group, data on ICD detection and therapy were similar in the study groups.
Health-related quality of life evaluated by the EuroQoL at 6-month or 12-month follow-up was not different between study groups.
Patients were more satisfied with their ICD care in the clinic follow-up group than in the remote follow-up group at 6-month follow-up, but were equally satisfied at 12-month follow-up.
The second small pilot trial (20 patients) examined the impact of RM follow-up with the House Call II® system on work schedules and cost savings in patients randomized to 2 study arms varying in the degree of remote follow-up.
The total time including device interrogation, transmission time, data analysis, and physician time required was significantly shorter for the RM follow-up group.
The in-clinic waiting time was eliminated for patients in the RM follow-up group.
The physician talk time was significantly reduced in the RM follow-up group (P < 0.05).
The time for the actual device interrogation did not differ in the study groups.
The third small trial (115 patients) examined the impact of RM with the Home Monitoring® system compared to scheduled trimonthly in-clinic visits on the number of unplanned visits, total costs, health-related quality of life (SF-36), and overall mortality.
There was a 63.2% reduction in in-office visits in the RM group.
Hospitalizations or overall mortality (values not stated) were not significantly different between the study groups.
Patient-induced visits were higher in the RM group than the in-clinic follow-up group.
The TRUST Trial
The TRUST trial was a large multicenter RCT conducted at 102 centers in the United States involving the Home Monitoring® RMS for ICD devices for 1450 patients. The primary objectives of the trial were to determine if remote follow-up could be safely substituted for in-office clinic follow-up (3 in-office visits replaced) and still enable earlier physician detection of clinically actionable events.
Adherence to the protocol follow-up schedule was significantly higher in the RM group than the in-office follow-up group (93.5% vs. 88.7%, P < 0.001).
Actionability of trimonthly scheduled checks was low (6.6%) in both study groups. Overall, actionable causes were reprogramming (76.2%), medication changes (24.8%), and lead/system revisions (4%), and these were not different between the 2 study groups.
The overall mean number of in-clinic and hospital visits was significantly lower in the RM group than the in-office follow-up group (2.1 per patient-year vs. 3.8 per patient-year, P < 0.001), representing a 45% visit reduction at 12 months.
The median time from onset of first arrhythmia to physician evaluation was significantly shorter (P < 0.001) in the RM group than in the in-office follow-up group for all arrhythmias (1 day vs. 35.5 days).
The median time to detect clinically asymptomatic arrhythmia events—atrial fibrillation (AF), ventricular fibrillation (VF), ventricular tachycardia (VT), and supra-ventricular tachycardia (SVT)—was also significantly shorter (P < 0.001) in the RM group compared to the in-office follow-up group (1 day vs. 41.5 days) and was significantly quicker for each of the clinical arrhythmia events—AF (5.5 days vs. 40 days), VT (1 day vs. 28 days), VF (1 day vs. 36 days), and SVT (2 days vs. 39 days).
System-related problems occurred infrequently in both groups—in 1.5% of patients (14/908) in the RM group and in 0.7% of patients (3/432) in the in-office follow-up group.
The overall adverse event rate over 12 months was not significantly different between the 2 groups and individual adverse events were also not significantly different between the RM group and the in-office follow-up group: death (3.4% vs. 4.9%), stroke (0.3% vs. 1.2%), and surgical intervention (6.6% vs. 4.9%), respectively.
The 12-month cumulative survival was 96.4% (95% confidence interval [CI], 95.5%–97.6%) in the RM group and 94.2% (95% CI, 91.8%–96.6%) in the in-office follow-up group, and was not significantly different between the 2 groups (P = 0.174).
The CONNECT Trial
The CONNECT trial, another major multicenter RCT, involved the Care Link® RMS for ICD/CRT devices in a 15-month follow-up study of 1,997 patients at 133 sites in the United States. The primary objective of the trial was to determine whether automatically transmitted physician alerts decreased the time from the occurrence of clinically relevant events to medical decisions. The trial results are summarized below:
Of the 575 clinical alerts sent in the study, 246 did not trigger an automatic physician alert. Transmission failures were related to technical issues such as the alert not being programmed or not being reset, and/or a variety of patient factors such as not being at home and the monitor not being plugged in or set up.
The overall mean time from the clinically relevant event to the clinical decision was significantly shorter (P < 0.001) by 17.4 days in the remote follow-up group (4.6 days for 172 patients) than the in-office follow-up group (22 days for 145 patients).
– The median time to a clinical decision was shorter in the remote follow-up group than in the in-office follow-up group for an AT/AF burden greater than or equal to 12 hours (3 days vs. 24 days) and a fast VF rate greater than or equal to 120 beats per minute (4 days vs. 23 days).
Although infrequent, similar low numbers of events involving low battery and VF detection/therapy turned off were noted in both groups. More alerts, however, were noted for out-of-range lead impedance in the RM group (18 vs. 6 patients), and the time to detect these critical events was significantly shorter in the RM group (same day vs. 17 days).
Total in-office clinic visits were reduced by 38% from 6.27 visits per patient-year in the in-office follow-up group to 3.29 visits per patient-year in the remote follow-up group.
Health care utilization visits (N = 6,227) that included cardiovascular-related hospitalization, emergency department visits, and unscheduled clinic visits were not significantly higher in the remote follow-up group.
The overall mean length of hospitalization was significantly shorter (P = 0.002) for those in the remote follow-up group (3.3 days vs. 4.0 days) and was shorter both for patients with ICD (3.0 days vs. 3.6 days) and CRT (3.8 days vs. 4.7 days) implants.
The mortality rate between the study arms was not significantly different between the follow-up groups for the ICDs (P = 0.31) or the CRT devices with defibrillator (P = 0.46).
Conclusions
There is limited clinical trial information on the effectiveness of RMSs for PMs. However, for RMSs for ICD devices, multiple cohort studies and 2 large multicenter RCTs demonstrated feasibility and significant reductions in in-office clinic follow-ups with RMSs in the first year post implantation. The detection rates of clinically significant events (and asymptomatic events) were higher, and the time to a clinical decision for these events was significantly shorter, in the remote follow-up groups than in the in-office follow-up groups. The earlier detection of clinical events in the remote follow-up groups, however, was not associated with lower morbidity or mortality rates in the 1-year follow-up. The substitution of almost all the first-year in-office clinic follow-ups with RM was also not associated with increased health care utilization such as emergency department visits or hospitalizations.
The follow-up in the trials was generally short-term, up to 1 year, and thus provided only a limited assessment of potential longer-term device/lead integrity complications or issues. None of the studies compared the different RMSs, particularly RMSs involving patient-scheduled transmissions versus automatic transmissions. Patients’ acceptance of and satisfaction with RM were reported to be high, but the impact of RM on patients’ health-related quality of life, particularly the psychological aspects, was not evaluated thoroughly. Patients who are not technologically proficient, or who have hearing or other physical/mental impairments, were identified as potentially disadvantaged with remote surveillance. Cohort studies consistently identified subgroups of patients who preferred in-office follow-up. Costs and workflow impact on the health care system were evaluated only in a limited way, in European or American clinical settings.
Internet-based device-assisted RMSs involve a new approach to monitoring patients, their disease progression, and their CIEDs. Remote monitoring also has the potential to improve the current postmarket surveillance systems of evolving CIEDs and their ongoing hardware and software modifications. At this point, however, there is insufficient information to evaluate the overall impact to the health care system, although the time saving and convenience to patients and physicians associated with a substitution of in-office follow-up by RM is more certain. The broader issues surrounding infrastructure, impacts on existing clinical care systems, and regulatory concerns need to be considered for the implementation of Internet-based RMSs in jurisdictions involving different clinical practices.
PMCID: PMC3377571  PMID: 23074419
17.  Diphtheria toxin treatment of Pet-1-Cre floxed diphtheria toxin receptor mice disrupts thermoregulation without affecting respiratory chemoreception 
Neuroscience  2014;279:65-76.
In genetically modified Lmx1bf/f/p mice, selective deletion of LMX1B in Pet-1 expressing cells leads to failure of embryonic development of serotonin (5-HT) neurons. As adults, these mice have a decreased hypercapnic ventilatory response and abnormal thermoregulation. This mouse model has been valuable in defining the normal role of 5-HT neurons, but it is possible that developmental compensation reduces the severity of observed deficits. Here we studied mice genetically modified to express diphtheria toxin receptors (DTR) on Pet-1 expressing neurons (Pet-1-Cre/Floxed DTR or Pet1/DTR mice). These mice developed with a normal complement of 5-HT neurons. As adults, systemic treatment with 2–35 μg diphtheria toxin (DT) reduced the number of tryptophan hydroxylase immunoreactive (TpOH-ir) neurons in the raphe nuclei and ventrolateral medulla by 80%. There were no effects of DT on baseline ventilation (VE) or the ventilatory response to hypercapnia or hypoxia. At an ambient temperature (TA) of 24°C, all Pet1/DTR mice dropped their body temperature (TB) below 35°C after DT treatment, but the latency was shorter in males than females (3.0 ± 0.37 vs 4.57 ± 0.29 days, respectively; p < 0.001). One week after DT treatment, mice were challenged by dropping TA from 37°C to 24°C, which caused TB to decrease more in males than in females (29.7 ± 0.31°C vs 33.0 ± 1.3°C, p < 0.01). We conclude that the 20% of 5-HT neurons that remain after DT treatment in Pet1/DTR mice are sufficient to maintain normal baseline breathing and a normal response to CO2, while those affected include some essential for thermoregulation, in males more than females. In comparison to models with deficient embryonic development of 5-HT neurons, acute deletion of 5-HT neurons in adults leads to a greater defect in thermoregulation, suggesting that significant developmental compensation can occur.
doi:10.1016/j.neuroscience.2014.08.018
PMCID: PMC4443915  PMID: 25171790
serotonin; chemoreception; thermoregulation; gender differences
18.  SMART Designs in Cancer Research: Past, Present and Future 
Clinical trials (London, England)  2014;11(4):445-456.
Background
Cancer affects millions of people worldwide each year. To combat cancer and fight metastases, patients often require sequences of treatment, with each treatment chosen on the basis of the response to previous treatments and clinical characteristics that change over time. Guidelines for these individualized sequences of treatments are known as dynamic treatment regimens (DTRs), in which the initial treatment and subsequent modifications depend on the response to previous treatments, disease progression, and other patient characteristics or behaviors. To provide evidence-based DTRs, the Sequential Multiple Assignment Randomized Trial (SMART) design has emerged over the past few decades.
Purpose
To examine and learn from past SMARTs investigating cancer treatment options, to discuss potential limitations preventing the widespread use of SMARTs in cancer research, and to describe courses of action to increase the implementation of SMARTs and collaboration between statisticians and clinicians.
Conclusions
There have been SMARTs investigating treatment questions in several areas of cancer, but their novelty and perceived complexity have limited their use. By building bridges between statisticians and clinicians, clarifying research objectives, and furthering methodological work, there should be an increase in SMARTs addressing relevant cancer treatment questions. Within any area of cancer, SMARTs can develop DTRs that guide treatment decisions over the disease history and improve patient outcomes.
doi:10.1177/1740774514525691
PMCID: PMC4431956  PMID: 24733671
19.  Biventricular Pacing (Cardiac Resynchronization Therapy) 
Executive Summary
Issue
In 2002, (before the establishment of the Ontario Health Technology Advisory Committee), the Medical Advisory Secretariat conducted a health technology policy assessment on biventricular (BiV) pacing, also called cardiac resynchronization therapy (CRT). The goal of treatment with BiV pacing is to improve cardiac output for people in heart failure (HF) with conduction defect on ECG (wide QRS interval) by synchronizing ventricular contraction. The Medical Advisory Secretariat concluded that there was evidence of short (6 months) and longer-term (12 months) effectiveness in terms of cardiac function and quality of life (QoL). More recently, a hospital submitted an application to the Ontario Health Technology Advisory Committee to review CRT, and the Medical Advisory Secretariat subsequently updated its health technology assessment.
Background
Chronic HF results from any structural or functional cardiac disorder that impairs the ability of the heart to act as a pump. It is estimated that 1% to 5% of the general population (all ages) in Europe have chronic HF. (1;2) About one-half of the patients with HF are women, and about 40% of men and 60% of women with this condition are aged older than 75 years.
The incidence (i.e., the number of new cases in a specified period) of chronic HF is age dependent: from 1 to 5 per 1,000 people each year in the total population, to as high as 30 to 40 per 1,000 people each year in those aged 75 years and older. Hence, in an aging society, the prevalence (i.e., the number of people with a given disease or condition at any time) of HF is increasing, despite a reduction in cardiovascular mortality.
A recent study revealed 28,702 patients were hospitalized for first-time HF in Ontario between April 1994 and March 1997. (3) Women comprised 51% of the cohort. Eighty-five percent were aged 65 years or older, and 58% were aged 75 years or older.
Patients with chronic HF experience shortness of breath, a limited capacity for exercise, high rates of hospitalization and rehospitalization, and die prematurely. (2;4) The New York Heart Association (NYHA) has provided a commonly used functional classification for the severity of HF (2;5):
Class I: No limitation of physical activity. No symptoms with ordinary exertion.
Class II: Slight limitations of physical activity. Ordinary activity causes symptoms.
Class III: Marked limitation of physical activity. Less than ordinary activity causes symptoms. Asymptomatic at rest.
Class IV: Inability to carry out any physical activity without discomfort. Symptoms at rest.
The National Heart, Lung, and Blood Institute estimates that 35% of patients with HF are in functional NYHA class I; 35% are in class II; 25%, class III; and 5%, class IV. (5) Surveys (2) suggest that from 5% to 15% of patients with HF have persistent severe symptoms, and that the remainder of patients with HF is evenly divided between those with mild and moderately severe symptoms.
Overall, patients with chronic, stable HF have an annual mortality rate of about 10%. (2) One-third of patients with new-onset HF will die within 6 months of diagnosis. These patients do not survive to enter the pool of those with “chronic” HF. About 60% of patients with incident HF will die within 3 years, and there is limited evidence that the overall prognosis has improved in the last 15 years.
To date, the diagnosis and management of chronic HF has concentrated on patients with the clinical syndrome of HF accompanied by severe left ventricular systolic dysfunction. Major changes in treatment have resulted from a better understanding of the pathophysiology of HF and the results of large clinical trials. Treatment for chronic HF includes lifestyle management, drugs, cardiac surgery, or implantable pacemakers and defibrillators. Despite pharmacologic advances, which include diuretics, angiotensin-converting enzyme inhibitors, beta-blockers, spironolactone, and digoxin, many patients remain symptomatic on maximally tolerated doses.
The Technology
Owing to the limitations of drug therapy, cardiac transplantation and device therapies have been used to try to improve QoL and survival of patients with chronic HF. Ventricular pacing is an emerging treatment option for patients with severe HF that does not respond well to medical therapy. Traditionally, indications for pacing include bradyarrhythmia, sick sinus syndrome, atrioventricular block, and other indications, including combined sick sinus syndrome with atrioventricular block and neurocardiogenic syncope. Recently, BiV pacing as a new, adjuvant therapy for patients with chronic HF and mechanical dyssynchrony has been investigated. Ventricular dysfunction is a sign of HF; and, if associated with severe intraventricular conduction delay, it can cause dyssynchronous ventricular contractions resulting in decreased ventricular filling. The therapeutic intent is to activate both ventricles simultaneously, thereby improving the mechanical efficiency of the ventricles.
About 30% of patients with chronic HF have intraventricular conduction defects. (6) These conduction abnormalities progress over time and lead to discoordinated contraction of an already hemodynamically compromised ventricle. Intraventricular conduction delay has been associated with clinical instability and an increased risk of death in patients with HF. (7) Hence, BiV pacing, which involves pacing the left and right ventricles simultaneously, may provide a more coordinated pattern of ventricular contraction and thereby potentially reduce QRS duration and intraventricular and interventricular asynchrony. People with advanced chronic HF, a wide QRS complex (i.e., the portion of the electrocardiogram comprising the Q, R, and S waves, together representing ventricular depolarization), a low left ventricular ejection fraction, contraction dyssynchrony in a viable myocardium, and normal sinus rhythm are the target patient group for BiV pacing. One-half of all deaths in HF patients are sudden, and the mode of death is arrhythmic in most cases. Implantable cardioverter defibrillators (ICDs) combined with BiV pacemakers are therefore being increasingly considered for patients with HF who are at high risk of sudden death.
Current Implantation Technique for Cardiac Resynchronization
Conventional dual-chamber pacemakers have only 2 leads: 1 placed in the right atrium and the other in the right ventricle. The technique used for BiV pacemaker implantation also uses right atrial and ventricular pacing leads, in addition to a left ventricle lead advanced through the coronary sinus into a vein that runs along the ventricular free wall. This permits simultaneous pacing of both ventricles to allow resynchronization of the left ventricle septum and free wall.
Mode of Operation
Permanent pacing systems consist of an implantable pulse generator that contains a battery and electronic circuitry, together with 1 (single-chamber pacemaker) or 2 (dual-chamber pacemaker) leads. Leads conduct intrinsic atrial or ventricular signals to the sensing circuitry and deliver the pulse generator charge to the myocardium (muscle of the heart).
Complications of Biventricular Pacemaker Implantation
The complications that may arise when a BiV pacemaker is implanted are similar to those that occur with standard pacemaker implantation, including pneumothorax, perforation of the great vessels or the myocardium, air embolus, infection, bleeding, and arrhythmias. Moreover, left ventricular pacing through the coronary sinus can be associated with rupture of the sinus as another complication.
Conclusion of 2003 Review of Biventricular Pacemakers by the Medical Advisory Secretariat
The randomized controlled trials (RCTs) the Medical Advisory Secretariat retrieved analyzed chronic HF patients that were assessed for up to 6 months. Other studies have been prospective, but nonrandomized, not double-blinded, uncontrolled and/or have had a limited or uncalculated sample size. Short-term studies have focused on acute hemodynamic analyses. The authors of the RCTs reported improved cardiac function and QoL up to 6 months after BiV pacemaker implantation; therefore, there is level 1 evidence that patients in ventricular dyssynchrony who remain symptomatic after medication might benefit from this technology. Based on evidence made available to the Medical Advisory Secretariat by a manufacturer, (8) it appears that these 6-month improvements are maintained at 12-month follow-up.
To date, however, there is insufficient evidence to support the routine use of combined ICD/BiV devices in patients with chronic HF with prolonged QRS intervals.
Summary of Updated Findings Since the 2003 Review
Since the Medical Advisory Secretariat’s review in 2003 of biventricular pacemakers, 2 large RCTs have been published: COMPANION (9) and CARE-HF. (10) The characteristics of each trial are shown in Table 1. The COMPANION trial had a number of major methodological limitations compared with the CARE-HF trial.
Characteristics of the COMPANION and CARE-HF Trials*
COMPANION; (9) CARE-HF. (10)
BiV indicates biventricular; ICD, implantable cardioverter defibrillator; EF, ejection fraction; QRS, the interval representing the Q, R and S waves on an electrocardiogram; FDA, United States Food and Drug Administration.
Overall, CARE-HF showed that BiV pacing significantly improves mortality, QoL, and NYHA class in patients with severe HF and a wide QRS interval (Tables 2 and 3).
CARE-HF Results: Primary and Secondary Endpoints*
BiV indicates biventricular; NNT, number needed to treat.
Cleland JGF, Daubert J, Erdmann E, Freemantle N, Gras D, Kappenberger L et al. The effect of cardiac resynchronization on morbidity and mortality in heart failure (CARE-HF). New England Journal of Medicine 2005; 352:1539-1549; Copyright 2005 Massachusetts Medical Society. All rights reserved. (10)
CARE H-F Results: NYHA Class and Quality of Life Scores*
Minnesota Living with Heart Failure scores range from 0 to 105; higher scores reflect poorer QoL.
European Quality of Life–5 Dimensions scores range from -0.594 to 1.000; 1.000 indicates fully healthy; 0, dead
Cleland JGF, Daubert J, Erdmann E, Freemantle N, Gras D, Kappenberger L et al. The effect of cardiac resynchronization on morbidity and mortality in heart failure (CARE-HF). New England Journal of Medicine 2005; 352:1539-1549; Copyright 2005 Massachusetts Medical Society. All rights reserved. (10)
GRADE Quality of Evidence
The quality of these 3 trials was examined according to the GRADE Working Group criteria, (12) (Table 4).
Quality refers to criteria such as the adequacy of allocation concealment, blinding, and follow-up.
Consistency refers to the similarity of estimates of effect across studies. If there is an important unexplained inconsistency in the results, confidence in the estimate of effect for that outcome decreases. Differences in the direction of effect, the size of the differences in effect, and the significance of the differences guide the decision about whether important inconsistency exists.
Directness refers to the extent to which the people, interventions, and outcome measures are similar to those of interest. For example, there may be uncertainty about the directness of the evidence if the people of interest are older, sicker, or have more comorbid conditions than do the people in the studies.
As stated by the GRADE Working Group, (12) the following definitions were used in grading the quality of the evidence:
High: Further research is very unlikely to change our confidence in the estimate of effect.
Moderate: Further research is likely to have an important impact on our confidence in the estimate of effect and may change the estimate.
Low: Further research is very likely to have an important impact on our confidence in the estimate of effect and is likely to change the estimate.
Very low: Any estimate of effect is very uncertain.
Quality of Evidence: CARE-HF and COMPANION
Conclusions
Overall, there is evidence that BiV pacemakers are effective for improving mortality, QoL, and functional status in patients with NYHA class III/IV HF and an EF less than 0.35 and a QRS interval greater than 120 ms who are refractory to drug therapy.
As per the GRADE Working Group, recommendations considered the following 4 main factors:
The tradeoffs, taking into account the estimated size of the effect for the main outcome, the confidence limits around those estimates, and the relative value placed on the outcome
The quality of the evidence (Table 4)
Translation of the evidence into practice in a specific setting, taking into consideration important factors that could be expected to modify the size of the expected effects such as proximity to a hospital or availability of necessary expertise
Uncertainty about the baseline risk for the population of interest
The GRADE Working Group also recommends that incremental costs of health care alternatives should be considered explicitly alongside the expected health benefits and harms. Recommendations rely on judgments about the value of the incremental health benefits in relation to the incremental costs. The last column in Table 5 shows the overall trade-off between benefits and harms and incorporates any risk/uncertainty.
For BiV pacing, the overall GRADE and strength of the recommendation is moderate: the quality of the evidence is moderate/high (because of some uncertainty due to methodological limitations in the study design, e.g., no blinding), but there is also some risk/uncertainty in terms of the estimated prevalence and wide cost-effectiveness estimates (Table 5).
For the combination BiV pacing/ICD, the overall GRADE and strength of the recommendation is weak—the quality of the evidence is low (because of uncertainty due to methodological limitations in the study design), but there is also some risk/uncertainty in terms of the estimated prevalence, high cost, and high budget impact (Table 5). There are indirect, low-quality comparisons of the effectiveness of BiV pacemakers compared with the combination BiV/ICD devices.
A stronger recommendation can be made for BiV pacing alone than for the combination BiV/ICD device for patients with an EF less than or equal to 0.35, a QRS interval greater than or equal to 120 ms, and NYHA class III/IV symptoms who are refractory to optimal medical therapy (Table 5).
There is moderate/high-quality evidence that BiV pacemakers significantly improve mortality, QoL, and functional status.
There is low-quality evidence that combined BiV/ICD devices significantly improve mortality, QoL, and functional status.
To date, there are no direct comparisons of the effectiveness of BiV pacemakers compared with the combined BiV/ICD devices in terms of mortality, QoL, and functional status.
Overall GRADE and Strength of Recommendation
BiV refers to biventricular; ICD, implantable cardioverter defibrillator; NNT, number needed to treat.
PMCID: PMC3382419  PMID: 23074464
20.  Developing an efficient scheduling template of a chemotherapy treatment unit 
The Australasian Medical Journal  2011;4(10):575-588.
This study was undertaken to improve the performance of a chemotherapy treatment unit by increasing throughput and reducing the average patient waiting time. To achieve this objective, a scheduling template was built: a simple tool that can be used to schedule patients' arrivals at the clinic. A simulation model of the system was built, and several scenarios that match the arrival pattern of the patients to resource availability were designed and evaluated. After detailed analysis, one scenario provided the best system performance, and a scheduling template was developed based on it. With the new scheduling template, 22.5% more patients can be served.
Introduction
CancerCare Manitoba is a provincially mandated cancer care agency dedicated to providing quality care to those who have been diagnosed with and are living with cancer. The MacCharles Chemotherapy unit is specially built to provide chemotherapy treatment to the cancer patients of Winnipeg. To maintain excellent service, it tries to ensure that patients get their treatment in a timely manner. This goal is challenging to maintain because of the lack of a proper roster, uneven workload distribution, and inefficient resource allotment. To maintain the satisfaction of patients and healthcare providers by serving the maximum number of patients in a timely manner, it is necessary to develop an efficient scheduling template that matches the required demand with the availability of resources. This goal can be reached using simulation modelling. Simulation has proven to be an excellent modelling tool. It can be defined as building computer models that represent real-world or hypothetical systems, and then experimenting with these models to study system behaviour under different scenarios.1, 2
A study was undertaken at the Children's Hospital of Eastern Ontario to identify the issues behind the long waiting times of an emergency room.3 A 20-day field observation revealed that the availability of the staff physician, and interaction with them, affects patient wait time. Jyväskylä et al.4 used simulation to test different process scenarios, allocate resources, and perform activity-based cost analysis in the Emergency Department (ED) at the Central Hospital. The simulation also supported the study of a new operational method, named the "triage-team" method, without interrupting the main system. The proposed triage-team method categorises each patient according to the urgency of seeing the doctor and allows the patient to complete the necessary tests before being seen by the doctor for the first time. The simulation study showed that this method would decrease patient throughput time, reduce the utilisation of the specialist, and enable ordering all the tests the patient needs right after arrival, thus quickening the referral to treatment.
Santibáñez et al.5 developed a discrete event simulation model of the British Columbia Cancer Agency's ambulatory care unit, which was used to study the impact of scenarios considering different operational factors (delay in starting clinic), appointment schedules (appointment order, appointment adjustment, add-ons to the schedule), and resource allocation. It was found that the best outcomes were obtained when not one but multiple changes were implemented simultaneously. Sepúlveda et al.6 studied the M. D. Anderson Cancer Centre Orlando, a cancer treatment facility, and built a simulation model to analyse and improve the flow process and increase capacity in the main facility. Different scenarios were considered, such as transferring the laboratory and pharmacy areas, adding an extra blood draw room, and applying different patient scheduling techniques. The study showed that increasing the number of short-term (four hours or less) patients in the morning could increase chair utilisation.
Discrete event simulation also helps improve a service where staff are unaware of the behaviour of the system as a whole. Niranjon et al.7 used simulation successfully where they had to face such constraints and a lack of accessible data. Carlos et al.8 used total quality management and simulation-animation to improve the quality of an emergency room: simulation was used to cover the key points of the emergency room, and animation was used to indicate the areas of opportunity. This study revealed that long waiting times, overloaded personnel, and an increasing withdrawal rate of patients are caused by the lack of capacity in the emergency room.
Baesler et al.9 developed a methodology for a cancer treatment facility to stochastically find a global optimum for the control variables. A simulation model generated the output using a goal programming framework for all the objectives involved in the analysis; a genetic algorithm then searched for an improved solution. The control variables considered in this research were the number of treatment chairs, blood-drawing nurses, laboratory personnel, and pharmacy personnel. Guo et al.10 presented a simulation framework considering demand for appointments, patient flow logic, distribution of resources, and the scheduling rules followed by the scheduler. The objective of the study was to develop a scheduling rule ensuring that 95% of all appointment requests are seen within one week of the request being made, to increase patient satisfaction and to balance the schedule of each doctor so as to maintain a fine harmony between "busy clinic" and "quiet clinic".
Huschka et al.11 studied a healthcare system that was about to change its facility layout. In this case, a simulation study helped design the new healthcare practice by evaluating the change in layout before implementation. Historical data, such as patient arrival rates, the number of patients visiting each day, and patient flow logic, were used to build the model of the current system. Different scenarios were then designed to measure the effect of changes to the current layout on performance.
Wijewickrama et al.12 developed a simulation model to evaluate appointment schedules (AS) for second-time consultations and the patient appointment sequence (PSEQ) in a multi-facility system. Five different appointment rules (ARULE) were considered: i) Bailey; ii) 3Bailey; iii) Individual (Ind); iv) two patients at a time (2AtaTime); v) Variable-Interval (V-I). PSEQ is based on the type of patient: appointment patients (APs) and new patients (NPs). The different PSEQs studied were: i) first-come first-served; ii) appointment patients at the beginning of the clinic (APBEG); iii) new patients at the beginning of the clinic (NPBEG); iv) assigning appointed and new patients in an alternating manner (ALTER); v) assigning a new patient after every five appointment patients. Patient no-shows (0% and 5%) and patient punctuality (PUNCT) (on-time and 10 minutes early) were also considered. The study found that ALTER-Ind. and ALTER5-Ind. performed best in the 0% NOSHOW, on-time PUNCT and 5% NOSHOW, on-time PUNCT situations at reducing waiting time (WT) and idle time (IT) per patient. As NOSHOW created slack time for waiting patients, their WT tended to fall while IT rose due to unexpected cancellations. Earliness increased congestion, which in turn increased waiting time.
Ramis et al.13 conducted a study of a Medical Imaging Center (MIC) and built a simulation model used to improve the patient journey through the imaging centre by reducing wait times and making better use of the resources. The model included a graphical user interface (GUI) for providing the parameters of the centre, such as arrival rates, distances, processing times, resources, and schedules. The simulation was used to measure patient waiting times in different scenarios. The study found that assigning a common function to the resource personnel could improve patient waiting times.
The objective of this study is to develop an efficient scheduling template that maximises the number of served patients and minimises the average patient waiting time given the available resources. To accomplish this, we build a simulation model that mimics the working conditions of the clinic. We then suggest different scenarios for matching the arrival pattern of the patients with the availability of the resources, and perform full experiments to evaluate these scenarios. A simple and practical scheduling template is then built based on the identified best scenario. The developed simulation model is described in section 2, which includes descriptions of the treatment room, the types of patients, and the treatment durations. Different improvement scenarios are described in section 3, and their analysis is presented in section 4. Section 5 illustrates a scheduling template based on one of the improvement scenarios. Finally, the conclusion and future directions of our work are presented in section 6.
Simulation Model
A simulation model represents the actual system and assists in visualising and evaluating the performance of the system under different scenarios without interrupting the actual system. Building a proper simulation model of a system consists of the following steps.
Observing the system to understand the flow of the entities, key players, availability of resources and overall generic framework.
Collecting the data on the number and type of entities, time consumed by the entities at each step of their journey, and availability of resources.
Building the simulation model and confirming that it is valid. Validation can be done by confirming that each entity flows as it is supposed to and that the statistical data generated by the simulation model are similar to the collected data.
Figure 1 shows the patient flow process in the treatment room. At the patient's first appointment, the oncologist draws up the treatment plan. The treatment time varies with the patient's condition and may range from 1 to 10 hours. Based on the type of treatment, the physician or the clinical clerk books an available treatment chair for that time period.
On the day of the appointment, the patient waits until the booked chair is free. When the chair is free, a nurse from that station comes to the patient, verifies the name and date of birth, and takes the patient to a treatment chair. The nurse then connects and flushes the patient's chemotherapy line and sets up the treatment, which takes about five minutes, before leaving to serve another patient. Chemotherapy treatment lengths vary from less than an hour to 10-hour infusions. At the end of the treatment, the nurse returns, removes the line, and notifies the patient of the next appointment date and time, which also takes about five minutes. Many patients also visit the clinic for care of their PICC line (a peripherally inserted central catheter), a line through which the chemotherapy drugs are injected. A PICC line must be cleaned regularly, flushed to maintain patency, and its insertion site checked for signs of infection; this takes a nurse approximately 10-15 minutes.
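As a rough illustration of this flow, the chair-booking dynamics can be sketched as a first-come, first-served multi-server queue. This is a simplified sketch, not the study's model: the arrival times, infusion durations, and chair count below are illustrative placeholders, and nurses are not modelled as a separate constrained resource.

```python
import heapq
import random

random.seed(1)

SETUP = TEARDOWN = 5      # nurse line setup / removal, minutes
CLINIC_MINUTES = 12 * 60  # one clinic day

def run_day(arrivals, infusion_minutes, n_chairs=30):
    """First-come, first-served sketch: each patient takes the
    earliest-free chair and occupies it for setup + infusion + teardown.
    Returns each patient's waiting time in minutes."""
    free_at = [0.0] * n_chairs            # earliest time each chair is free
    heapq.heapify(free_at)
    waits = []
    for arrive, infusion in sorted(zip(arrivals, infusion_minutes)):
        chair_free = heapq.heappop(free_at)
        start = max(arrive, chair_free)   # wait only if no chair is free
        waits.append(start - arrive)
        heapq.heappush(free_at, start + SETUP + infusion + TEARDOWN)
    return waits

# Illustrative inputs: 95 arrivals over the day, infusions of 15 min to 6 h
arrivals = sorted(random.uniform(0, CLINIC_MINUTES) for _ in range(95))
durations = [random.choice([15, 60, 90, 120, 240, 360]) for _ in range(95)]
waits = run_day(arrivals, durations)
print(f"average wait: {sum(waits) / len(waits):.1f} minutes")
```

The full model described in the paper additionally tracks nurse shifts and station assignments; this sketch only captures the chair contention that drives patient waiting.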
CancerCare Manitoba provided access to the electronic scheduling system known as "ARIA", a comprehensive information and image management system provided by Varian Medical Systems that aggregates patient data into a fully electronic medical chart. This system was used to find out how many patients are booked on each clinic day, and which chair is used for how many hours. It was necessary to search each patient's history to find out how long the patient spent in which chair. Collecting a snapshot of each patient gives the complete picture of a one-day clinic schedule.
The treatment room consists of the following two main limited resources:
Treatment Chairs: Chairs that are used to seat the patients during the treatment.
Nurses: Nurses are required to inject the treatment line into the patient and remove it at the end of the treatment. They also take care of the patients when they feel uncomfortable.
The MacCharles Chemotherapy unit consists of 11 nurses and 5 stations, described as follows:
Station 1: Station 1 has six chairs (numbered 1 to 6) and two nurses. The two nurses work from 8:00 to 16:00.
Station 2: Station 2 has six chairs (7 to 12) and three nurses. Two nurses work from 8:00 to 16:00 and one nurse works from 12:00 to 20:00.
Station 3: Station 3 has six chairs (13 to 18) and two nurses. The two nurses work from 8:00 to 16:00.
Station 4: Station 4 has six chairs (19 to 24) and three nurses. One nurse works from 8:00 to 16:00. Another nurse works from 10:00 to 18:00.
Solarium Station: Solarium Station has six chairs (Solarium Stretcher 1, Solarium Stretcher 2, Isolation, Isolation emergency, Fire Place 1, Fire Place 2). There is only one nurse assigned to this station that works from 12:00 to 20:00. The nurses from other stations can help when need arises.
There is one more nurse known as the "float nurse" who works from 11:00 to 19:00. This nurse can work at any station. Table 1 summarises the working hours of chairs and nurses. All treatment stations start at 8:00 and continue until the assigned nurse for that station completes her shift.
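The shift structure above can be encoded directly to compute how many nurses are on duty in any given hour, which is the comparison underlying Table 1. Note that the schedule of Station 4's third nurse is not stated above, so only the listed shifts appear below.

```python
# Nurse shifts as stated for the five stations plus the float nurse
# (24-hour clock).  The schedule of Station 4's third nurse is not
# given in the text, so it is omitted here.
SHIFTS = (
    [(8, 16), (8, 16)]                  # Station 1
    + [(8, 16), (8, 16), (12, 20)]      # Station 2
    + [(8, 16), (8, 16)]                # Station 3
    + [(8, 16), (10, 18)]               # Station 4 (stated shifts only)
    + [(12, 20)]                        # Solarium Station
    + [(11, 19)]                        # float nurse
)

def nurses_on_duty(hour):
    """Count nurses whose shift covers the start of the given hour."""
    return sum(start <= hour < end for start, end in SHIFTS)

for h in range(8, 20):
    print(f"{h:02d}:00  {nurses_on_duty(h):2d} nurses on duty")
```

Staffing rises from 7 nurses at opening to a midday peak and falls back to 2 late in the day, which is the pattern the imbalance analysis later exploits.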
Currently, the clinic uses a scheduling template to assign patients' appointments, but due to the high demand for appointments it is no longer followed. We believe that this template can be improved based on the availability of nurses and chairs. Clinic workload data were collected over 21 days of field observation. The current scheduling template has 10 types of appointment time slots: 15-minute, 1-hour, 1.5-hour, 2-hour, 3-hour, 4-hour, 5-hour, 6-hour, 8-hour and 10-hour, and it is designed to serve 95 patients. When the scheduling template was compared with the 21 days of observations, however, it was found that the clinic is serving more patients than it is designed for. The providers therefore do not usually follow the scheduling template; indeed, they very often break the time slots to accommodate slots that do not exist in the template. Hence, some of the stations are very busy (mostly station 2) while others are underused. If the scheduling template can be improved, it will be possible to bring more patients to the clinic and reduce their waiting time without adding more resources.
In order to build or develop a simulation model of the existing system, it is necessary to collect the following data:
Types of treatment durations.
Numbers of patients in each treatment type.
Arrival pattern of the patients.
Steps that the patients have to go through in their treatment journey and required time of each step.
Using observations of 2,155 patients over 21 days of historical data, the types of treatment durations and the number of patients of each type were estimated. These data also yielded the arrival rate and the frequency distribution of the patients. The patients were categorised into six types, and the percentage of each type and its associated service-time distribution were determined.
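The arrival-pattern estimation described above can be sketched as follows. This is a minimal illustration, not the authors' actual procedure, and the `(day, hour)` observation format is an assumption:

```python
from collections import defaultdict

def hourly_arrival_pattern(observations, n_days):
    """Average number of arrivals per clock hour over the observation period.

    `observations` is a list of (day, hour) tuples recorded during field
    observation (hypothetical format); the result approximates the hourly
    arrival pattern used to drive the simulation.
    """
    counts = defaultdict(int)
    for _day, hour in observations:
        counts[hour] += 1
    return {h: counts[h] / n_days for h in sorted(counts)}
```

For instance, arrivals logged over two days as `[(1, 8), (1, 8), (2, 8), (1, 9)]` yield an average of 1.5 arrivals in the 8:00 hour and 0.5 in the 9:00 hour.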
ARENA Rockwell Simulation Software (v13) was used to build the simulation model. Entities of the model were tracked to verify that the patients move as intended. The model was run for 30 replications and statistical data were collected to validate the model. The total number of patients that go through the model was compared with the actual number of patients served during the 21 days of observation.
Improvement Scenarios
After verifying and validating the simulation model, different scenarios were designed and analysed to identify the one that can handle more patients and reduce the average patient waiting time. Based on clinic observation and discussion with the healthcare providers, the following constraints were stated:
The stations are already filled with treatment chairs, so it is not possible to fit any more chairs in the clinic. Moreover, the stakeholders are not interested in adding extra chairs.
The stakeholders and the caregivers are not interested in changing the layout of the treatment room.
Given these constraints the options that can be considered to design alternative scenarios are:
Changing the patients' arrival pattern so that it fits the nurses' availability.
Changing the nurses' schedule.
Adding one full time nurse at different starting times of the day.
Figure 2 compares the number of available nurses with the number of patient arrivals during different hours of the day. There is a rapid growth in patient arrivals (from 13 to 17) between 8:00 and 10:00, even though the clinic has the same number of nurses during this period. At 12:00 there is a sudden drop in patient arrivals even though more nurses are available. Clearly, there is an imbalance between the number of available nurses and the number of patient arrivals over the day. Balancing the demand (patient arrival rate) against the resources (available nurses) will therefore reduce patients' waiting time and increase the number of patients served. The alternative scenarios that satisfy the above constraints are listed in Table 2. These scenarios respect the following rules:
Long treatments (4 to 11 hours) have to be scheduled early in the morning to avoid overtime.
Patients of Type 1 (15-minute to 1-hour treatments) are the most common. Because their treatment times are short, they can be fitted in at any time of the day. Hence, it is recommended to bring these patients in during the middle of the day, when more nurses are available.
Nurses get tired towards the end of the clinic day. Therefore, fewer patients should be scheduled in the late hours of the day.
In Scenario 1, the patients' arrival pattern was changed to fit the nurses' schedule. This arrival pattern is shown in Table 3. Figure 3 compares the new arrival pattern with the current one. Similar patterns can be developed for the remaining scenarios.
Analysis of Results
ARENA Rockwell Simulation Software (v13) was used to develop the simulation model. There is no warm-up period because the model simulates day-to-day scenarios; the patients of any day are supposed to be served the same day. The model was run for 30 days (replications) and statistical data were collected to evaluate each scenario. Tables 4 and 5 show a detailed comparison of system performance between the current scenario and Scenario 1. The results are quite interesting. The average throughput of the system increased from 103 to 125 patients per day, and the maximum throughput can reach 135 patients. Although the average waiting time increased, the utilisation of the treatment stations increased by 15.6%. Similar analysis was performed for the remaining scenarios; due to space limitations the detailed results are not given, but Table 6 summarises and compares the different scenarios. Scenario 1 significantly increased the throughput of the system (by 21%) while still yielding an acceptably low average waiting time (13.4 minutes). It is also worth noting that adding a nurse (Scenarios 3, 4 and 5) does not significantly reduce the average waiting time or increase the system's throughput. The reason is that when all the chairs are busy, the nurses have to wait until some patients finish treatment, and consequently the other patients have to wait for their treatment to start. Therefore, hiring a nurse without adding more chairs will not reduce the waiting time or increase the throughput of the system. In this case, the only way to increase throughput is to adjust the patients' arrival pattern to the nurses' schedule.
Developing a Scheduling Template based on Scenario 1
Scenario 1 provides the best performance. However, a scheduling template is necessary for the care providers to book the patients. Therefore, a brief description of how the scheduling template is developed from this scenario is provided below.
Table 3 gives the number of patients that arrive each hour under Scenario 1. The distribution of each type of patient is shown in Table 7; it is based on the percentage of each type in the collected data. For example, between 8:00 and 9:00, 12 patients arrive, of whom 54.85% are of Type 1, 34.55% of Type 2, 15.163% of Type 3, 4.32% of Type 4, 2.58% of Type 5 and the rest of Type 6. Note that we assume the patients of each type arrive as a group at the beginning of the hourly time slot; for example, all six Type 1 patients in the 8:00 to 9:00 slot arrive at 8:00.
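Converting an hourly total from Table 3 and the type percentages of Table 7 into whole-patient counts requires a rounding rule. The sketch below uses largest-remainder rounding with hypothetical percentages; the actual tables and the rounding used by the authors may differ:

```python
def apportion(total, percents):
    """Split `total` patients across types by percentage, using
    largest-remainder rounding so the integer counts sum to `total`."""
    raw = [total * p / 100.0 for p in percents]
    counts = [int(r) for r in raw]          # floor of each share
    leftover = total - sum(counts)
    # hand the remaining patients to the largest fractional parts
    by_fraction = sorted(range(len(raw)),
                         key=lambda i: raw[i] - counts[i], reverse=True)
    for i in by_fraction[:leftover]:
        counts[i] += 1
    return counts

# Hypothetical type percentages for a 12-patient hour (Types 1-6):
print(apportion(12, [55, 30, 10, 3, 1, 1]))  # -> [7, 4, 1, 0, 0, 0]
```

Largest-remainder rounding is one simple way to keep every hourly slot summing to the planned number of arrivals; any consistent apportionment rule would serve the same purpose.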
The number of patients of each type is distributed so as to respect all the constraints described in Section 1.3. Most of the clinic's patients are of Types 1, 2 and 3, which require less treatment time than the other types; they are therefore distributed throughout the day. Patients of Types 4, 5 and 6 take longer treatment times and are scheduled at the beginning of the day to avoid overtime. Because Types 4, 5 and 6 come at the beginning of the day, most Type 1 and 2 patients come at mid-day (12:00 to 16:00). Another reason to make the treatment room busier between 12:00 and 16:00 is that the clinic has its maximum number of nurses during this period. Nurses become tired towards the end of the clinic day, which is why no patient is scheduled after 19:00.
Based on the patient arrival schedule and nurse availability, a scheduling template was built and is shown in Figure 4. To build the template, whenever a nurse is available and patients are waiting for service, a priority list of these patients is developed: they are prioritised in descending order of estimated slack time and, secondarily, by shortest service time. The secondary rule breaks the tie when two patients have the same slack. The slack time is calculated using the following equation:
Slack time = Due time - (Arrival time + Treatment time)
Due time is the clinic closing time. To explain how the process works, assume that at hour 8:00 (between 8:00 and 8:15) seven patients are scheduled in total: two patients in Station 1 (one 8-hour and one 15-minute patient), two in Station 2 (two 12-hour patients), two in Station 3 (one 2-hour and one 15-minute patient) and one in Station 4 (one 3-hour patient). According to Figure 2, seven nurses are available at 8:00, and it takes 15 minutes to set up a patient. Therefore it is not possible to schedule more than seven patients between 8:00 and 8:15, and the current scheduling also serves seven patients in this interval. The rest of the template can be justified similarly.
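The slack-time prioritisation described above can be sketched as follows, using minutes after midnight and a 20:00 due time. The ordering (descending slack, with shortest service time breaking ties) follows the text, and all names here are illustrative rather than taken from the authors' model:

```python
from dataclasses import dataclass

DUE_TIME = 20 * 60  # clinic closing time: 20:00, in minutes after midnight

@dataclass
class Patient:
    name: str
    arrival: int    # arrival time, minutes after midnight
    treatment: int  # treatment duration in minutes

def slack(p):
    # Slack time = Due time - (Arrival time + Treatment time)
    return DUE_TIME - (p.arrival + p.treatment)

def prioritise(waiting):
    # Descending slack, as stated in the text; shortest service time
    # breaks ties between patients with equal slack.
    return sorted(waiting, key=lambda p: (-slack(p), p.treatment))
```

For example, an 8-hour patient arriving at 8:00 has a slack of 240 minutes, so under this rule a 15-minute patient arriving at the same time (slack 705 minutes) would be listed first.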
doi:10.4066/AMJ.2011.837
PMCID: PMC3562880  PMID: 23386870
21.  The interrelationship between dengue incidence and diurnal ranges of temperature and humidity in a Sri Lankan city and its potential applications 
Global Health Action  2015;8:10.3402/gha.v8.29359.
Background
Temperature, humidity, and other weather variables influence dengue transmission. Published studies show how the diurnal fluctuations of temperature around different mean temperatures influence dengue transmission. There are no published studies about the correlation between diurnal range of humidity and dengue transmission.
Objective
The goals of this study were to determine the correlation between dengue incidence and diurnal fluctuations of temperature and humidity in the Sri Lankan city of Kandy and to explore the possibilities of using that information for better control of dengue.
Design
We calculated the weekly dengue incidence in Kandy during the period 2003–2012, after collecting data on all of the reported dengue patients and estimated midyear populations. Data on daily maximum and minimum temperatures and night-time and daytime humidity were obtained from two weather stations, averaged, and converted into weekly data. The number of days per week with a diurnal temperature range (DTR) of >10°C and <10°C and the number of days per week with a diurnal humidity range (DHR) of >20% and <15% were calculated. Wavelet time series analysis was performed to determine the correlation between dengue incidence and diurnal ranges of temperature and humidity.
Results
There were negative correlations between dengue incidence and a DTR >10°C and a DHR >20% with 3.3-week and 4-week lag periods, respectively. Additionally, positive correlations between dengue incidence and a DTR <10°C and a DHR <15% with 3- and 4-week lag periods, respectively, were discovered.
Conclusions
These findings are consistent with the results of previous entomological studies and theoretical models of DTR and dengue transmission correlation. It is important to conduct similar studies on diurnal fluctuations of humidity in the future. We suggest ways and means to use this information for local dengue control and to mitigate the potential effects of the ongoing global reduction of DTR on dengue incidence.
doi:10.3402/gha.v8.29359
PMCID: PMC4668265  PMID: 26632645
dengue; diurnal temperature range; humidity; diurnal range; Aedes; Sri Lanka; wavelet analysis; climate change; global warming; neglected diseases
22.  Intermittent versus continuous oxaliplatin and fluoropyrimidine combination chemotherapy for first-line treatment of advanced colorectal cancer: results of the randomised phase 3 MRC COIN trial 
The Lancet Oncology  2011;12(7):642-653.
Summary
Background
When cure is impossible, cancer treatment should focus on both length and quality of life. Maximisation of time without toxic effects could be one effective strategy to achieve both of these goals. The COIN trial assessed preplanned treatment holidays in advanced colorectal cancer to achieve this aim.
Methods
COIN was a randomised controlled trial in patients with previously untreated advanced colorectal cancer. Patients received either continuous oxaliplatin and fluoropyrimidine combination (arm A), continuous chemotherapy plus cetuximab (arm B), or intermittent (arm C) chemotherapy. In arms A and B, treatment continued until development of progressive disease, cumulative toxic effects, or the patient chose to stop. In arm C, patients who had not progressed at their 12-week scan started a chemotherapy-free interval until evidence of disease progression, when the same treatment was restarted. Randomisation was done centrally (via telephone) by the MRC Clinical Trials Unit using minimisation. Treatment allocation was not masked. The comparison of arms A and B is described in a companion paper. Here, we compare arms A and C, with the primary objective of establishing whether overall survival on intermittent therapy was non-inferior to that on continuous therapy, with a predefined non-inferiority boundary of 1·162. Intention-to-treat (ITT) and per-protocol analyses were done. This trial is registered, ISRCTN27286448.
Findings
1630 patients were randomly assigned to treatment groups (815 to continuous and 815 to intermittent therapy). Median survival in the ITT population (n=815 in both groups) was 15·8 months (IQR 9·4–26·1) in arm A and 14·4 months (8·0–24·7) in arm C (hazard ratio [HR] 1·084, 80% CI 1·008–1·165). In the per-protocol population (arm A, n=467; arm C, n=511), median survival was 19·6 months (13·0–28·1) in arm A and 18·0 months (12·1–29·3) in arm C (HR 1·087, 0·986–1·198). The upper limits of CIs for HRs in both analyses were greater than the predefined non-inferiority boundary. Preplanned subgroup analyses in the per-protocol population showed that a raised baseline platelet count, defined as 400 000 per μL or higher (271 [28%] of 978 patients), was associated with poor survival with intermittent chemotherapy: the HR for comparison of arm C and arm A in patients with a normal platelet count was 0·96 (95% CI 0·80–1·15, p=0·66), versus 1·54 (1·17–2·03, p=0·0018) in patients with a raised platelet count (p=0·0027 for interaction). In the per-protocol population, more patients on continuous than on intermittent treatment had grade 3 or worse haematological toxic effects (72 [15%] vs 60 [12%]), whereas nausea and vomiting were more common on intermittent treatment (11 [2%] vs 43 [8%]). Grade 3 or worse peripheral neuropathy (126 [27%] vs 25 [5%]) and hand–foot syndrome (21 [4%] vs 15 [3%]) were more frequent on continuous than on intermittent treatment.
Interpretation
Although this trial did not show non-inferiority of intermittent compared with continuous chemotherapy for advanced colorectal cancer in terms of overall survival, chemotherapy-free intervals remain a treatment option for some patients with advanced colorectal cancer, offering reduced time on chemotherapy, reduced cumulative toxic effects, and improved quality of life. Subgroup analyses suggest that patients with normal baseline platelet counts could gain the benefits of intermittent chemotherapy without detriment in survival, whereas those with raised baseline platelet counts have impaired survival and quality of life with intermittent chemotherapy and should not receive a treatment break.
Funding
Cancer Research UK.
doi:10.1016/S1470-2045(11)70102-4
PMCID: PMC3159416  PMID: 21641867
23.  BI-31ANALYSIS AND QUANTIFICATION OF MULTIPLE ANTIGEN EXPRESSION IN GLIOBLASTOMA 
Neuro-Oncology  2014;16(Suppl 5):v30.
Glioblastoma (GBM), one of the most common and fatal types of brain tumor, is marked by significant antigenic heterogeneity. Identification and quantification of tumor related antigens in the context of GBM tissue is an essential step towards developing antigen-targeted therapies. Immunohistochemistry (IHC) on formalin-fixed paraffin embedded (FFPE) clinical specimens is a valuable technique for evaluating antigen expression in large study cohorts. To overcome the limitations of manual semi-quantitative scoring and subjectivity in the evaluation of IHC staining, we analyzed and quantified multiple antigens across entire tumor sections using Image Pro Premier v9.1 (DAB plug-in). For each slide, dense tumor regions (DTRs, tumor cells >60%), tumor infiltration regions (TIRs, tumor cells <50%) and pseudopalisading necrosis regions (PPNs) were defined from the H&E section by a neuropathologist. We quantified the expression of tumor-associated antigens IL13Rα2, HER2, EGFR and the proliferation marker Ki67 within these defined regions for 44 brain tumor samples (35 stage IV and 9 stage III). Our results demonstrate the heterogeneous expression patterns of IL13Rα2, HER2 and EGFR in GBMs. For example, in dense tumor regions 52%, 61% and 77% of samples showed IL13Rα2, HER2 or EGFR positivity, respectively. In correlation studies, 25% of samples were triple positive, 11% of samples showed double positivity for IL13Rα2 and HER2 or IL13Rα2 and EGFR, and 25% of samples were double positive for EGFR and HER2. Less than 7% of samples were negative for all three antigens. Interestingly, a higher percentage of samples showed triple positive expression in PPN regions (43%) versus the DTR (25%) and TIR (25%) regions. Also, Ki67 positivity was higher in PPN and DTR regions. In this study we developed methods for combining pathological annotations with DAB-capturing software, which allowed us to quantify protein expression in a more precise, consistent and efficient manner.
doi:10.1093/neuonc/nou239.31
PMCID: PMC4217900
24.  Guideline on allergen-specific immunotherapy in IgE-mediated allergic diseases 
Allergo Journal International  2014;23(8):282-319.
Summary
The present guideline (S2k) on allergen-specific immunotherapy (AIT) was established by the German, Austrian and Swiss professional associations for allergy in consensus with the scientific specialist societies and professional associations in the fields of otolaryngology, dermatology and venereology, pediatric and adolescent medicine, pneumology as well as a German patient organization (German Allergy and Asthma Association; Deutscher Allergie- und Asthmabund, DAAB) according to the criteria of the Association of the Scientific Medical Societies in Germany (Arbeitsgemeinschaft der Wissenschaftlichen Medizinischen Fachgesellschaften, AWMF).
AIT is a therapy with disease-modifying effects. By administering allergen extracts, specific blocking antibodies, tolerance-inducing cells and mediators are activated. These prevent further exacerbation of the allergen-triggered immune response, block the specific immune response and attenuate the inflammatory response in tissue.
Products for SCIT or SLIT cannot be compared at present due to their heterogeneous composition, nor can allergen concentrations given by different manufacturers be compared meaningfully due to the varying methods used to measure their active ingredients. Non-modified allergens are used for SCIT in the form of aqueous or physically adsorbed (depot) extracts, as well as chemically modified allergens (allergoids) as depot extracts. Allergen extracts for SLIT are used in the form of aqueous solutions or tablets.
The clinical efficacy of AIT is measured using various scores as primary and secondary study endpoints. The EMA stipulates combined symptom and medication scores as primary endpoint. A harmonization of clinical endpoints, e. g., by using the combined symptom and medication scores (CSMS) recommended by the EAACI, is desirable in the future in order to permit the comparison of results from different studies. The current CONSORT recommendations from the ARIA/GA2LEN group specify standards for the evaluation, presentation and publication of study results.
According to the Therapy allergen ordinance (TAV), preparations containing common allergen sources (pollen from grasses, birch, alder, hazel, house dust mites, as well as bee and wasp venom) need a marketing authorization in Germany. During the marketing authorization process, these preparations are examined regarding quality, safety and efficacy. In the opinion of the authors, authorized allergen preparations with documented efficacy and safety, or preparations tradeable under the TAV for which efficacy and safety have already been documented in clinical trials meeting WAO or EMA standards, should be preferentially used. Individual formulations (NPP) enable the prescription of rare allergen sources (e.g., pollen from ash, mugwort or ambrosia, mold Alternaria, animal allergens) for specific immunotherapy. Mixing these allergens with TAV allergens is not permitted.
Allergic rhinitis and its associated co-morbidities (e. g., bronchial asthma) generate substantial direct and indirect costs. Treatment options, in particular AIT, are therefore evaluated using cost-benefit and cost-effectiveness analyses. From a long-term perspective, AIT is considered to be significantly more cost effective in allergic rhinitis and allergic asthma than pharmacotherapy, but is heavily dependent on patient compliance.
Meta-analyses provide unequivocal evidence of the efficacy of SCIT and SLIT for certain allergen sources and age groups. Data from controlled studies differ in terms of scope, quality and dosing regimens and require product-specific evaluation. Therefore, evaluating individual preparations according to clearly defined criteria is recommended. A broad transfer of the efficacy of certain preparations to all preparations administered in the same way is not endorsed. The website of the German Society for Allergology and Clinical Immunology (www.dgaki.de/leitlinien/s2k-leitlinie-sit; DGAKI: Deutsche Gesellschaft für Allergologie und klinische Immunologie) provides tables with specific information on available products for AIT in Germany, Switzerland and Austria. The tables contain the number of clinical studies per product in adults and children, the year of market authorization, underlying scoring systems, number of randomized and analyzed subjects and the method of evaluation (ITT, FAS, PP), separately given for grass pollen, birch pollen and house dust mite allergens, and the status of approval for the conduct of clinical studies with these products.
Strong evidence of the efficacy of SCIT in pollen allergy-induced allergic rhinoconjunctivitis in adulthood is well-documented in numerous trials and, in childhood and adolescence, in a few trials. Efficacy in house dust mite allergy is documented by a number of controlled trials in adults and few controlled trials in children. Only a few controlled trials, independent of age, are available for mold allergy (in particular Alternaria). With regard to animal dander allergies (primarily to cat allergens), only small studies, some with methodological deficiencies are available. Only a moderate and inconsistent therapeutic effect in atopic dermatitis has been observed in the quite heterogeneous studies conducted to date. SCIT has been well investigated for individual preparations in controlled bronchial asthma as defined by the Global Initiative for Asthma (GINA) 2007 and intermittent and mild persistent asthma (GINA 2005) and it is recommended as a treatment option, in addition to allergen avoidance and pharmacotherapy, provided there is a clear causal link between respiratory symptoms and the relevant allergen.
The efficacy of SLIT in grass pollen-induced allergic rhinoconjunctivitis is extensively documented in adults and children, whilst its efficacy in tree pollen allergy has only been shown in adults. New controlled trials (some with high patient numbers) on house dust mite allergy provide evidence of efficacy of SLIT in adults.
Compared with allergic rhinoconjunctivitis, there are only few studies on the efficacy of SLIT in allergic asthma. In this context, newer studies show an efficacy for SLIT on asthma symptoms in the subgroup of grass pollen allergic children, adolescents and adults with asthma and efficacy in primary house dust mite allergy-induced asthma in adolescents aged from 14 years and in adults.
Aspects of secondary prevention, in particular the reduction of new sensitizations and reduced asthma risk, are important rationales for choosing to initiate treatment early in childhood and adolescence. In this context, those products for which the appropriate effects have been demonstrated should be considered.
SCIT or SLIT with pollen or mite allergens can be performed in patients with allergic rhinoconjunctivitis using allergen extracts that have been proven to be effective in at least one double-blind placebo-controlled (DBPC) study. At present, clinical trials are underway for the indication in asthma due to house dust mite allergy, some of the results of which have already been published, whilst others are still awaited (see the DGAKI table “Approved/potentially completed studies” via www.dgaki.de/Leitlinien/s2k-Leitlinie-sit (according to www.clinicaltrialsregister.eu)). When establishing the indication for AIT, factors that favour clinical efficacy should be taken into consideration. Differences between SCIT and SLIT are to be considered primarily in terms of contraindications. In individual cases, AIT may be justifiably indicated despite the presence of contraindications.
SCIT injections and the initiation of SLIT are performed by a physician experienced in this type of treatment and who is able to administer emergency treatment in the case of an allergic reaction. Patients must be fully informed about the procedure and risks of possible adverse events, and the details of this process must be documented (see “Treatment information sheet”; available as a handout via www.dgaki.de/Leitlinien/s2k-Leitlinie-sit). Treatment should be performed according to the manufacturer‘s product information leaflet. In cases where AIT is to be performed or continued by a different physician to the one who established the indication, close cooperation is required in order to ensure that treatment is implemented consistently and at low risk. In general, it is recommended that SCIT and SLIT should only be performed using preparations for which adequate proof of efficacy is available from clinical trials.
Treatment adherence among AIT patients is lower than assumed by physicians, irrespective of the form of administration. Clearly, adherence is of vital importance for treatment success. Improving AIT adherence is one of the most important future goals, in order to ensure efficacy of the therapy.
Severe, potentially life-threatening systemic reactions during SCIT are possible, but – providing all safety measures are adhered to – these events are very rare. Most adverse events are mild to moderate and can be treated well.
Dose-dependent adverse local reactions occur frequently in the mouth and throat in SLIT. Systemic reactions have been described in SLIT, but are seen far less often than with SCIT. In terms of anaphylaxis and other severe systemic reactions, SLIT has a better safety profile than SCIT.
The risk and effects of adverse systemic reactions in the setting of AIT can be effectively reduced by training of personnel, adhering to safety standards and prompt use of emergency measures, including early administration of i. m. epinephrine. Details on the acute management of anaphylactic reactions can be found in the current S2 guideline on anaphylaxis issued by the AWMF (S2-AWMF-LL Registry Number 061-025).
AIT is undergoing some innovative developments in many areas (e. g., allergen characterization, new administration routes, adjuvants, faster and safer dose escalation protocols), some of which are already being investigated in clinical trials.
Cite this as Pfaar O, Bachert C, Bufe A, Buhl R, Ebner C, Eng P, Friedrichs F, Fuchs T, Hamelmann E, Hartwig-Bade D, Hering T, Huttegger I, Jung K, Klimek L, Kopp MV, Merk H, Rabe U, Saloga J, Schmid-Grendelmeier P, Schuster A, Schwerk N, Sitter H, Umpfenbach U, Wedi B, Wöhrl S, Worm M, Kleine-Tebbe J. Guideline on allergen-specific immunotherapy in IgE-mediated allergic diseases – S2k Guideline of the German Society for Allergology and Clinical Immunology (DGAKI), the Society for Pediatric Allergy and Environmental Medicine (GPA), the Medical Association of German Allergologists (AeDA), the Austrian Society for Allergy and Immunology (ÖGAI), the Swiss Society for Allergy and Immunology (SGAI), the German Society of Dermatology (DDG), the German Society of Oto-Rhino-Laryngology, Head and Neck Surgery (DGHNO-KHC), the German Society of Pediatrics and Adolescent Medicine (DGKJ), the Society for Pediatric Pneumology (GPP), the German Respiratory Society (DGP), the German Association of ENT Surgeons (BV-HNO), the Professional Federation of Paediatricians and Youth Doctors (BVKJ), the Federal Association of Pulmonologists (BDP) and the German Dermatologists Association (BVDD). Allergo J Int 2014;23:282–319
doi:10.1007/s40629-014-0032-2
PMCID: PMC4479478  PMID: 26120539
allergen-specific immunotherapy; AIT; Hyposensitization; guideline; allergen; allergen extract; allergic disease; allergic rhinitis; allergic asthma
25.  Effectiveness of Dual Focus Mutual Aid for Co-occurring Substance Use and Mental Health Disorders: A Review and Synthesis of the “Double Trouble” in Recovery Evaluation 
Substance use & misuse  2008;43(12-13):1904-1926.
Over five million adults in the U.S. have a co-occurring substance use disorder and serious psychological distress. Mutual aid (“self-help”) can usefully complement treatment, but people with co-occurring substance use and psychiatric disorders often encounter a lack of empathy and acceptance in traditional mutual aid groups. Double Trouble in Recovery (DTR) is a dual focus fellowship whose mission is to bring the benefits of mutual aid to persons recovering from co-occurring disorders. An evaluation of DTR was conducted by interviewing 310 persons attending 24 DTR meetings in New York City in 1998 and following them up for two years, in 1999 and 2000. The evaluation produced 13 articles in 12 peer reviewed journals, the main results of which are summarized here. The sample’s characteristics were: mean age, 40 years; women, 28%; black, 59%; white, 25%; Hispanic, 14%; never married, 63%; live in supported community residence, 53%; high school graduate or GED, 60%; arrested as adult, 63%; diagnoses of schizophrenia, 39%, major depression, 21%, or bipolar disorder, 20%; currently prescribed psychiatric medication, 92%; primary substance used, current or past: cocaine/crack, 42%, alcohol, 34%, or heroin, 11%. Overall, the findings indicate that DTR participation has both direct and indirect effects on several important components of recovery: drug/alcohol abstinence, psychiatric medication adherence, self-efficacy for recovery, and quality of life. The study also identified several “common” therapeutic factors (e.g., internal motivation, social support) and unique mutual aid processes (helper-therapy, reciprocal learning) that mediate the influence of DTR participation on recovery. For clinicians, these results underline the importance of fostering stable affiliation with specialized dual focus 12-step groups for their patients with co-occurring disorders, as part of a comprehensive recovery-oriented treatment approach.
doi:10.1080/10826080802297005
PMCID: PMC2923916  PMID: 19016171
co-occurring disorders; mutual aid; self-help; DTR; recovery; substance use; 12-step; addiction; mental illness
