1.  Optimising case definitions of upper limb disorder for aetiological research and prevention – a review 
Background
Experts disagree about the optimal classification of upper limb disorders (ULDs). To explore whether differences in associations with occupational risk factors offer a basis for choosing between case definitions in aetiological research and surveillance, we analysed previously published research.
Methods
Eligible reports (those with estimates of relative risk (RR) for >1 case definition relative to identical exposures) were identified from systematic reviews of ULD and occupation and by hand-searching five peer-reviewed journals published between January 1990 and June 2010. For each report we abstracted, by anatomical site, details of the case and exposure definitions employed, and we paired estimates of RR for alternative case definitions with identical occupational exposures. Pairs of case definitions were typically nested, a stricter definition being a subset of a simpler version. Differences in RR between paired definitions were expressed as the ratio of RRs, using that for the simpler definition as the denominator.
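For illustration only, a minimal Python sketch of the ratio-of-RRs calculation described above; the paired estimates and the pairing itself are invented, not data from the review.

```python
# Minimal sketch of the ratio-of-RRs calculation described above.
# The paired estimates below are invented for illustration only.
from statistics import median

# Each pair: (RR under the stricter case definition, RR under the simpler one),
# both estimated against the same occupational exposure.
paired_rrs = [(2.1, 1.8), (1.4, 1.5), (3.0, 1.6)]

# Ratio of RRs, with the simpler definition's RR as the denominator.
ratios = [strict / simple for strict, simple in paired_rrs]

print(f"median ratio of RRs: {median(ratios):.2f}")
print(f"proportion of ratios <= 1: {sum(r <= 1 for r in ratios) / len(ratios):.0%}")
```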
Results
We found 21 reports, yielding 320 pairs of RRs (82, 75 and 163 respectively at the shoulder, elbow, and distal arm). Ratios of RRs were frequently ≤1 (46%), the median ratio overall and by anatomical site being close to unity. In only 2% of comparisons did ratios reach ≥4.
Conclusion
Complex ULD case definitions (e.g. involving physical signs, more specific symptom patterns, and investigations) yield similar associations with occupational risk factors to those using simpler definitions. Thus, in population-based aetiological research and surveillance, simple case definitions should normally suffice. Data on risk factors can justifiably be pooled in meta-analyses, despite differences in case definition.
doi:10.1136/oemed-2011-100086
PMCID: PMC3427012  PMID: 22006938
2.  In Vitro Fertilization and Multiple Pregnancies 
Executive Summary
Objective
The objective of this health technology policy assessment was to determine the clinical effectiveness and cost-effectiveness of IVF for infertility treatment, as well as the role of IVF in reducing the rate of multiple pregnancies.
Clinical Need: Target Population and Condition
Typically defined as a failure to conceive after a year of regular unprotected intercourse, infertility affects 8% to 16% of reproductive age couples. The condition can be caused by disruptions at various steps of the reproductive process. Major causes of infertility include abnormalities of sperm, tubal obstruction, endometriosis, ovulatory disorder, and idiopathic infertility. Depending on the cause and patient characteristics, management options range from pharmacologic treatment to more advanced techniques referred to as assisted reproductive technologies (ART). ART include IVF and IVF-related procedures such as intra-cytoplasmic sperm injection (ICSI) and, according to some definitions, intra-uterine insemination (IUI), also known as artificial insemination. Almost invariably, an initial step in ART is controlled ovarian stimulation (COS), which leads to a significantly higher rate of multiple pregnancies after ART compared with that following natural conception. Multiple pregnancies are associated with a broad range of negative consequences for both mother and fetuses. Maternal complications include increased risk of pregnancy-induced hypertension, pre-eclampsia, polyhydramnios, gestational diabetes, fetal malpresentation requiring Caesarean section, postpartum haemorrhage, and postpartum depression. Babies from multiple pregnancies are at a significantly higher risk of early death, prematurity, and low birth weight, as well as mental and physical disabilities related to prematurity. Increased maternal and fetal morbidity leads to higher perinatal and neonatal costs of multiple pregnancies, as well as subsequent lifelong costs due to disabilities and an increased need for medical and social support.
The Technology Being Reviewed
IVF was first developed as a method to overcome bilateral Fallopian tube obstruction. The procedure includes several steps: (1) eggs are retrieved from the woman’s ovaries; (2) the eggs are exposed to sperm outside the body and fertilized; (3) the resulting embryo(s) are cultured for 3 to 5 days; and (4) the embryo(s) are transferred back to the uterus. IVF is considered to be one of the most effective treatments for infertility today. According to data from the Canadian Assisted Reproductive Technology Registry, the average live birth rate after IVF in Canada is around 30%, but there is considerable variation by maternal age and primary cause of infertility.
An important advantage of IVF is that it allows for control of the number of embryos transferred. Elective single embryo transfer in IVF cycles, adopted in many European countries, has been shown to significantly reduce the risk of multiple pregnancies while maintaining acceptable birth rates. However, when the number of embryos transferred is not limited, the rate of IVF-associated multiple pregnancies is similar to that of other treatments involving ovarian stimulation. The practice of multiple embryo transfer in IVF is often the result of pressure to increase success rates because of the high cost of the procedure. The average rate of multiple pregnancies resulting from IVF in Canada is currently around 30%.
An alternative to IVF is IUI. In spite of the lower reported success rates of IUI (pregnancy rates per cycle range from 8.7% to 17.1%), it is generally attempted before IVF because of its lower invasiveness and cost.
Two major drawbacks of IUI are that it cannot be used in cases of bilateral tubal obstruction and it does not allow much control over the risk of multiple pregnancies compared with IVF. The rate of multiple pregnancies after IUI with COS is estimated to be about 21% to 29%.
Ontario Health Insurance Plan Coverage
Currently, the Ontario Health Insurance Plan covers the cost of IVF for women with bilaterally blocked Fallopian tubes only, in which case it is funded for 3 cycles, excluding the cost of drugs. The cost of IUI is covered except for preparation of the sperm and drugs used for COS.
Diffusion of Technology
According to Canadian Assisted Reproductive Technology Registry data, in 2004 there were 25 infertility clinics across Canada offering IVF and 7,619 IVF cycles performed. In Ontario, there are 13 infertility clinics with about 4,300 IVF cycles performed annually.
Literature Review
Royal Commission Report on Reproductive Technologies
The 1993 release of the Royal Commission report on reproductive technologies, Proceed With Care, resulted in the withdrawal of most IVF funding in Ontario, where prior to 1994 IVF was fully funded. Recommendations of the Commission to withdraw IVF funding were largely based on findings of the systematic review of randomized controlled trials (RCTs) published before 1990. The review showed IVF effectiveness only in cases of bilateral tubal obstruction. As for nontubal causes of infertility, there was not enough evidence to establish whether IVF was effective or not.
Since the field of reproductive technology is constantly evolving, there have been several changes since the publication of the Royal Commission report. These include increased success rates of IVF, the introduction of ICSI in the early 1990s as a treatment for male factor infertility, and improved embryo implantation rates allowing for the transfer of a single embryo to avoid multiple pregnancies after IVF.
Studies After the Royal Commission Report: Review Strategy
Three separate literature reviews were conducted in the following areas: clinical effectiveness of IVF, cost-effectiveness of IVF, and outcomes of single embryo transfer (SET) in IVF cycles.
Clinical effectiveness of IVF: RCTs or meta-analyses of RCTs that compared live birth rates after IVF versus alternative treatments, where the cause of infertility was clearly stated or it was possible to stratify the outcome by the cause of infertility.
Cost-effectiveness of IVF: All relevant economic studies comparing IVF to alternative methods of treatment were reviewed.
Outcomes of IVF with SET: RCTs or meta-analyses of RCTs that compared live birth rates and multiple birth rates associated with transfer of single versus double embryos.
OVID MEDLINE, MEDLINE In-Process & Other Non-Indexed Citations, EMBASE, Cochrane Library, the International Agency for Health Technology Assessment database, and websites of other health technology assessment agencies were searched using specific subject headings and keywords to identify relevant studies.
Summary of Findings
Comparative Clinical Effectiveness of IVF
Overall, there is a lack of well-designed RCTs in this area, and considerable diversity exists between trials in both the definition and the measurement of outcomes. Many studies used fertility or pregnancy rates instead of live birth rates. Moreover, the denominator for rate calculation varied from study to study (e.g., rates were calculated per cycle started, per cycle completed, per couple, etc.).
Nevertheless, a few studies of sufficient quality were identified and categorized by the cause of infertility and the existing alternatives to IVF. The key findings follow:
A 2005 meta-analysis demonstrated that, in patients with idiopathic infertility, IVF was clearly superior to expectant management, but there were no statistically significant differences in live birth rates between IVF and IUI, nor between IVF and gamete-intra-Fallopian transfer.
A subset of data from a 2000 study showed no significant differences in pregnancy rates between IVF and IUI for moderate male factor infertility.
In patients with moderate male factor infertility, standard IVF was also compared with ICSI in a 2002 meta-analysis. All studies included in the meta-analysis showed superior fertilization rates with ICSI, and the pooled risk ratio for oocyte fertilization was 1.9 (95% confidence interval 1.4-2.5) in favour of ICSI. Two other RCTs in this area published after the 2002 meta-analysis had similar results and further confirmed these findings. There were no RCTs comparing IVF with ICSI in patients with severe male factor infertility, mainly because, in expert opinion, ICSI may be the only effective treatment for this group.
Cost-Effectiveness of IVF
Five economic evaluations of IVF were found, including one comprehensive systematic review of 57 health economic studies. The studies compared the cost-effectiveness of IVF with a number of alternatives, such as observation, ovarian stimulation, IUI, tubal surgery, and varicocelectomy. The cost-effectiveness of IVF was analyzed separately for different types of infertility. Most of the reviewed studies concluded that, because of its high cost, IVF has a less favourable cost-effectiveness profile than alternative treatment options. Therefore, IVF was not recommended as the first line of treatment in the majority of cases. The only two exceptions were bilateral tubal obstruction and severe male factor infertility, where an immediate offer of IVF/ICSI might be the most cost-effective option.
Clinical Outcomes After Single Versus Double Embryo Transfer Strategies of IVF
Since the SET strategy has been most widely adopted in Europe, all RCTs of SET outcomes were conducted in European countries. The major study in this area was a large 2005 meta-analysis, followed by two other published RCTs.
All of these studies reached similar conclusions:
Although a single SET cycle results in lower birth rates than a single double embryo transfer (DET) cycle, the cumulative birth rate after 2 cycles of SET (fresh + frozen-thawed embryos) was comparable to the birth rate after a single DET cycle (~40%).
SET was associated with a significant reduction in multiple births compared with DET (0.8% vs. 33.1% respectively in the largest RCT).
Most trials on SET included women younger than 36 years old with a sufficient number of embryos available for transfer that allowed for selection of the top quality embryo(s). A 2006 RCT, however, compared SET and DET strategies in an unselected group of patients without restrictions on the woman’s age or embryo quality. This study demonstrated that SET could be applied to older women.
Estimate of the Target Population
Based on results of the literature review and consultations with experts, four categories of infertile patients who may benefit from increased access to IVF/ICSI were identified:
Patients with severe male factor infertility, where IVF should be offered in conjunction with ICSI;
Infertile women with serious medical contraindications to multiple pregnancy, who should be offered IVF-SET;
Infertile patients who want to avoid the risk of multiple pregnancy and thus opt for IVF-SET; and
Patients who failed treatment with IUI and wish to try IVF.
Since, however, the latter indication does not reflect any new advances in IVF technology that would alter existing policy, it was not considered in this analysis.
Economic Analysis
Economic Review: Cost–Effectiveness of SET Versus DET
Conclusions of published studies on the cost-effectiveness of SET versus DET were not consistent. While some studies found the SET strategy more cost-effective because it avoids multiple pregnancies, other studies either found no significant difference in cost per birth between SET and DET or favoured DET as the more cost-effective option.
Ontario-Based Economic Analysis
An Ontario-based economic analysis compared cost per birth using three treatment strategies: IUI, IVF-SET, and IVF-DET. A decision-tree model assumed three cycles for each treatment option. Two separate models were considered: the first included only fresh IVF cycles, while the second combined fresh and frozen cycles. Even after accounting for cost savings from the avoidance of multiple pregnancies (short-term complications only), IVF-SET was still associated with the highest cost per birth. The approximate budget impact of covering the first three indications for IVF listed above (severe male factor infertility, women with medical contraindications to multiple pregnancy, and couples who wish to avoid the risk of multiple pregnancy) is estimated at $9.8 to $12.8 million (Cdn). Coverage of only the first two indications, namely ICSI in patients with severe male factor infertility and infertile women with serious medical contraindications to multiple pregnancy, is estimated at $3.8 to $5.5 million (Cdn).
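The decision-tree model itself is not reproduced in this summary. As a minimal sketch of the kind of calculation involved, the following compares cost per live birth over up to three cycles; all success probabilities and per-cycle costs are hypothetical placeholders, and the multiple-pregnancy complication costs included in the Ontario model are omitted.

```python
# Minimal sketch of a 3-cycle decision-tree cost-per-birth calculation.
# All probabilities and costs are hypothetical placeholders, not the
# figures used in the Ontario analysis.

def cost_per_birth(p_birth_per_cycle: float, cost_per_cycle: float,
                   max_cycles: int = 3) -> float:
    """Expected cost divided by expected births over up to max_cycles,
    with treatment stopping after the first live birth."""
    expected_cost = 0.0
    expected_births = 0.0
    p_still_trying = 1.0
    for _ in range(max_cycles):
        expected_cost += p_still_trying * cost_per_cycle
        expected_births += p_still_trying * p_birth_per_cycle
        p_still_trying *= 1.0 - p_birth_per_cycle
    return expected_cost / expected_births

# Hypothetical inputs for the three strategies compared in the analysis.
for name, p, cost in [("IUI", 0.12, 1_000.0),
                      ("IVF-SET", 0.25, 8_000.0),
                      ("IVF-DET", 0.35, 8_000.0)]:
    print(f"{name}: ${cost_per_birth(p, cost):,.0f} per birth")
```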
Other Considerations
International data show that both IVF utilization and the average number of embryos transferred in IVF cycles are influenced by IVF funding policy. The success of the SET strategy in European countries is largely due to the fact that IVF treatment is subsidized by governments.
Surveys of patients with infertility demonstrate that a significant proportion (~40%) not only do not mind having multiple babies, but consider twins an ideal outcome of infertility treatment.
A woman’s age may impose some restrictions on the implementation of a SET strategy.
Conclusions and Recommendations
A review of published studies has demonstrated that IVF-SET is an effective treatment for infertility that avoids multiple pregnancies.
However, the results of an Ontario-based economic analysis show that the cost savings associated with a reduction in multiple pregnancies after IVF-SET do not justify the cost of universal IVF-SET coverage by the province. Moreover, the province currently funds IUI, which has been shown to be as effective as IVF for certain types of infertility and is significantly less expensive.
In patients with severe male factor infertility, IVF in conjunction with ICSI may be the only effective treatment.
Thus, the 2 indications for which additional IVF access should be considered are:
IVF/ICSI for patients with severe male factor infertility
IVF-SET in infertile women with serious medical contraindications to multiple pregnancy
PMCID: PMC3379537  PMID: 23074488
3.  Point-of-Care International Normalized Ratio (INR) Monitoring Devices for Patients on Long-term Oral Anticoagulation Therapy 
Executive Summary
Subject of the Evidence-Based Analysis
The purpose of this evidence-based analysis report is to examine the safety and effectiveness of point-of-care (POC) international normalized ratio (INR) monitoring devices for patients on long-term oral anticoagulation therapy (OAT).
Clinical Need: Target Population and Condition
Long-term OAT is typically required by patients with mechanical heart valves, chronic atrial fibrillation, venous thromboembolism, myocardial infarction, stroke, and/or peripheral arterial occlusion. It is estimated that approximately 1% of the population receives anticoagulation treatment and, by applying this value to Ontario, there are an estimated 132,000 patients on OAT in the province, a figure that is expected to increase with the aging population.
Patients on OAT are regularly monitored and their medications adjusted to ensure that their INR scores remain in the therapeutic range. This can be challenging due to the narrow therapeutic window of warfarin and variation in individual responses. Optimal INR scores depend on the underlying indication for treatment and patient level characteristics, but for most patients the therapeutic range is an INR score of between 2.0 and 3.0.
The current standard of care in Ontario for patients on long-term OAT is laboratory-based INR determination, with management carried out by primary care physicians or anticoagulation clinics (ACCs). Patients regularly visit a hospital or community-based facility to provide a venous blood sample (venipuncture), which is then sent to a laboratory for INR analysis.
Experts, however, have commented that there may be under-utilization of OAT due to patient factors, physician factors, or regional practice variations, and that sub-optimal patient management may also occur. There are currently no population-based Ontario data permitting an assessment of patient care, but recent systematic reviews have estimated that less than 50% of patients receive OAT on a routine basis and that patients are in the therapeutic range only 64% of the time.
Overview of POC INR Devices
POC INR devices offer an alternative to laboratory-based testing and venipuncture, enabling INR determination from a fingerstick sample of whole blood. Independent evaluations have shown POC devices to have an acceptable level of precision. They permit INR results to be determined immediately, allowing for more rapid medication adjustments.
POC devices can be used in a variety of settings including physician offices, ACCs, long-term care facilities, pharmacies, or by the patients themselves through self-testing (PST) or self-management (PSM) techniques. With PST, patients measure their INR values and then contact their physician for instructions on dose adjustment, whereas with PSM, patients adjust the medication themselves based on pre-set algorithms. These models are not suitable for all patients and require the identification and education of suitable candidates.
Potential advantages of POC devices include improved convenience to patients, better treatment compliance and satisfaction, more frequent monitoring and fewer thromboembolic and hemorrhagic complications. Potential disadvantages of the device include the tendency to underestimate high INR values and overestimate low INR values, low thromboplastin sensitivity, inability to calculate a mean normal PT, and errors in INR determination in patients with antiphospholipid antibodies with certain instruments. Although treatment satisfaction and quality of life (QoL) may improve with POC INR monitoring, some patients may experience increased anxiety or preoccupation with their disease with these strategies.
Evidence-Based Analysis Methods
Research Questions
1. Effectiveness
Does POC INR monitoring improve clinical outcomes in various settings compared to standard laboratory-based testing?
Does POC INR monitoring impact patient satisfaction, QoL, compliance, acceptability, and convenience compared to standard laboratory-based INR determination?
Settings include primary care settings with use of POC INR devices by general practitioners or nurses, ACCs, pharmacies, long-term care homes, and use by the patient either for PST or PSM.
2. Cost-effectiveness
What is the cost-effectiveness of POC INR monitoring devices in various settings compared to standard laboratory-based INR determination?
Inclusion Criteria
English-language RCTs, systematic reviews, and meta-analyses
Publication dates: 1996 to November 25, 2008
Population: patients on OAT
Intervention: anticoagulation monitoring by POC INR device in any setting including anticoagulation clinic, primary care (general practitioner or nurse), pharmacy, long-term care facility, PST, PSM or any other POC INR strategy
Minimum sample size: 50 patients
Minimum follow-up period: 3 months
Comparator: usual care defined as venipuncture blood draw for an INR laboratory test and management provided by an ACC or individual practitioner
Outcomes: Hemorrhagic events, thromboembolic events, all-cause mortality, anticoagulation control as assessed by proportion of time or values in the therapeutic range, patient reported outcomes including satisfaction, QoL, compliance, acceptability, convenience
Exclusion criteria
Non-RCTs, before-after studies, quasi-experimental studies, observational studies, case reports, case series, editorials, letters, non-systematic reviews, conference proceedings, abstracts, non-English articles, duplicate publications
Studies where POC INR devices were compared to laboratory testing to assess test accuracy
Studies where the POC INR results were not used to guide patient management
Method of Review
A search of electronic databases (OVID MEDLINE, MEDLINE In-Process & Other Non-Indexed Citations, EMBASE, The Cochrane Library, and the International Agency for Health Technology Assessment [INAHTA] database) was undertaken to identify evidence published from January 1, 1998 to November 25, 2008. Studies meeting the inclusion criteria were selected from the search results. Reference lists of selected articles were also checked for relevant studies.
Summary of Findings
Five existing reviews and 22 articles describing 17 unique RCTs met the inclusion criteria. Three RCTs examined POC INR monitoring devices with PST strategies, 11 RCTs examined PSM strategies, one RCT included both PST and PSM strategies and two RCTs examined the use of POC INR monitoring devices by health care professionals.
Anticoagulation Control
Anticoagulation control is measured by the percentage of time INR is within the therapeutic range or by the percentage of INR values in the therapeutic range. Due to the differing methodologies and reporting structures used, it was deemed inappropriate to combine the data and estimate whether the difference between groups would be significant. Instead, the results of individual studies were weighted by the number of person-years of observation and then pooled to calculate a summary measure.
Across most studies, patients in the intervention groups tended to have a higher percentage of time and values in the therapeutic target range in comparison to control patients. When the percentage of time in the therapeutic range was pooled across studies and weighted by the number of person-years of observation, the difference between the intervention and control groups was 4.2% for PSM, 7.2% for PST and 6.1% for POC use by health care practitioners. Overall, intervention patients were in the target range 69% of the time and control patients were in the therapeutic target range 64% of the time leading to an overall difference between groups of roughly 5%.
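A minimal sketch of the person-year-weighted pooling described above; the study-level values are invented for illustration only.

```python
# Hypothetical sketch of pooling time-in-therapeutic-range (TTR) across
# studies, weighting each study by its person-years of observation.

# Each tuple: (person_years, TTR_intervention_%, TTR_control_%); invented values.
studies = [(120.0, 71.0, 66.0), (80.0, 68.0, 62.0), (200.0, 69.0, 64.5)]

total_py = sum(py for py, _, _ in studies)
pooled_interv = sum(py * ti for py, ti, _ in studies) / total_py
pooled_control = sum(py * tc for py, _, tc in studies) / total_py

print(f"pooled intervention TTR: {pooled_interv:.1f}%")
print(f"pooled control TTR: {pooled_control:.1f}%")
print(f"difference: {pooled_interv - pooled_control:.1f}%")
```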
Major Complications and Deaths
There was no statistically significant difference in the number of major hemorrhagic events between patients managed with POC INR monitoring devices and patients managed with standard laboratory testing (OR = 0.74; 95% CI: 0.52-1.04). This difference was non-significant for all POC strategies (PSM, PST, health care practitioner).
Patients managed with POC INR monitoring devices had significantly fewer thromboembolic events than usual care patients (OR = 0.52; 95% CI: 0.37-0.74). When divided by POC strategy, PSM resulted in significantly fewer thromboembolic events than usual care (OR = 0.46; 95% CI: 0.29-0.72). The observed difference in thromboembolic events for PSM remained significant when the analysis was limited to major thromboembolic events (OR = 0.40; 95% CI: 0.17-0.93), but was non-significant when the analysis was limited to minor thromboembolic events (OR = 0.73; 95% CI: 0.08-7.01). PST and GP/nurse strategies did not result in significant differences in thromboembolic events; however, only a limited number of studies examined these interventions.
No statistically significant difference was observed in the number of deaths between POC intervention and usual care control groups (OR = 0.67; 95% CI: 0.41-1.10). This difference was non-significant for all POC strategies. Only one study reported on survival, with a 10-year survival rate of 76.1% in the usual care control group compared to 84.5% in the PSM group (P = 0.05).
Summary Results of Meta-Analyses of Major Complications and Deaths in POC INR Monitoring Studies
Patient Satisfaction and Quality of Life
Quality of life measures were reported in eight studies comparing POC INR monitoring to standard laboratory testing using a variety of measurement tools. It was thus not possible to calculate a quantitative summary measure. The majority of studies reported favourable impacts of POC INR monitoring on QoL and found better treatment satisfaction with POC monitoring. Results from a pre-analysis patient and caregiver focus group conducted in Ontario also indicated improved patient QoL with POC monitoring.
Quality of the Evidence
Studies varied with regard to patient eligibility, baseline patient characteristics, follow-up duration, and withdrawal rates. Differential drop-out rates were observed such that the POC intervention groups tended to have a larger number of patients who withdrew. There was a lack of consistency in the definitions and reporting for OAT control and definitions of adverse events. In most studies, the intervention group received more education on the use of warfarin and performed more frequent INR testing, which may have overestimated the effect of the POC intervention. Patient selection and eligibility criteria were not always fully described and it is likely that the majority of the PST/PSM trials included a highly motivated patient population. Lastly, a large number of trials were also sponsored by industry.
Despite the observed heterogeneity among studies, there was a general consensus in findings that POC INR monitoring devices have beneficial impacts on the risk of thromboembolic events, anticoagulation control and patient satisfaction and QoL (ES Table 2).
GRADE Quality of the Evidence on POC INR Monitoring Studies
CI refers to confidence interval; Interv, intervention; OR, odds ratio; RCT, randomized controlled trial.
Economic Analysis
Using a 5-year Markov model, the health and economic outcomes associated with four different anticoagulation management approaches were evaluated:
Standard care: consisting of a laboratory test with a venipuncture blood draw for an INR;
Healthcare staff testing: consisting of a test with a POC INR device in a medical clinic comprised of healthcare staff such as pharmacists, nurses, and physicians following protocol to manage OAT;
PST: patient self-testing using a POC INR device and phoning in results to an ACC or family physician; and
PSM: patient self-managing using a POC INR device and self-adjustment of OAT according to a standardized protocol. Patients may also phone in to a medical office for guidance.
The primary analytic perspective was that of the MOHLTC. Only direct medical costs were considered and the time horizon of the model was five years - the serviceable life of a POC device.
From the results of the economic analysis, POC strategies were found to be cost-effective compared to traditional laboratory-based INR testing. In particular, the healthcare staff testing strategy can derive potential cost savings from the use of one device for multiple patients. The PSM strategy, however, appears to be the most cost-effective method, since patients are more inclined to adjust their dosing promptly rather than allowing INR values to fall out of range.
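The model’s states, transition probabilities, and costs are not given in this summary. Below is a minimal sketch of a 5-year cohort Markov model of the general kind described, for a single management strategy; every input is an invented placeholder, not a MOHLTC model value.

```python
# Minimal sketch of a 5-year cohort Markov model for one anticoagulation
# management strategy. States, yearly transition probabilities, and costs
# are hypothetical placeholders.

STATES = ["well", "complication", "dead"]

# transitions[s][t] = probability of moving from state s to state t in one year.
transitions = {
    "well":         {"well": 0.95, "complication": 0.04, "dead": 0.01},
    "complication": {"well": 0.70, "complication": 0.20, "dead": 0.10},
    "dead":         {"well": 0.00, "complication": 0.00, "dead": 1.00},
}
annual_cost = {"well": 500.0, "complication": 5_000.0, "dead": 0.0}

cohort = {"well": 1.0, "complication": 0.0, "dead": 0.0}  # everyone starts well
total_cost = 0.0
for year in range(5):  # 5-year horizon: the serviceable life of a POC device
    total_cost += sum(cohort[s] * annual_cost[s] for s in STATES)
    cohort = {t: sum(cohort[s] * transitions[s][t] for s in STATES)
              for t in STATES}

print(f"expected 5-year cost per patient: ${total_cost:,.0f}")
print(f"5-year survival: {1 - cohort['dead']:.1%}")
```

Running the same cohort loop with strategy-specific transition probabilities and costs would allow the four strategies to be compared on expected cost and outcomes.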
Considerations for Ontario Health System
Although the use of POC devices continues to diffuse throughout Ontario, not all OAT patients are suitable for, or capable of, PST/PSM. The use of POC is currently concentrated in institutional settings, including hospitals, ACCs, long-term care facilities, physician offices, and pharmacies, and is much less common at the patient level. It is estimated, however, that 24% of OAT patients (approximately 32,000 patients in Ontario) would be suitable candidates for PST/PSM strategies and willing to use a POC device.
There are several barriers to the use and implementation of POC INR monitoring devices, including lack of physician familiarity with the devices, resistance to changing established laboratory-based methods, lack of an approach for identifying suitable patients, and inadequate resources for effective patient education and training. Issues of cost and insufficient reimbursement may also hinder implementation, and effective quality assurance programs would need to be developed to ensure that INR measurements are accurate and precise.
Conclusions
For a select group of patients who are highly motivated and trained, PSM resulted in significantly fewer thromboembolic events compared to conventional laboratory-based INR testing. No significant differences were observed for major hemorrhages or all-cause mortality. PST and GP/Nurse use of POC strategies are just as effective as conventional laboratory-based INR testing for thromboembolic events, major hemorrhages, and all-cause mortality. POC strategies may also result in better OAT control as measured by the proportion of time INR is in the therapeutic range and there appears to be beneficial impacts on patient satisfaction and QoL. The use of POC devices should factor in patient suitability, patient education and training, health system constraints, and affordability.
Keywords
anticoagulants, International Normalized Ratio, point-of-care, self-monitoring, warfarin.
PMCID: PMC3377545  PMID: 23074516
4.  Extracorporeal Photopheresis 
Executive Summary
Objective
To assess the effectiveness, safety, and cost-effectiveness of extracorporeal photopheresis (ECP) for the treatment of refractory erythrodermic cutaneous T cell lymphoma (CTCL) and refractory chronic graft versus host disease (cGvHD).
Background
Cutaneous T Cell Lymphoma
Cutaneous T cell lymphoma (CTCL) is a general name for a group of skin-affecting disorders caused by malignant white blood cells (T lymphocytes). Cutaneous T cell lymphoma is relatively uncommon and represents slightly more than 2% of all lymphomas in the United States. The most frequently diagnosed form of CTCL is mycosis fungoides (MF) and its leukemic variant, Sezary syndrome (SS). The relative frequency and disease-specific 5-year survival of 1,905 primary cutaneous lymphomas classified according to the World Health Organization-European Organization for Research and Treatment of Cancer (WHO-EORTC) classification are shown in Appendix 1. Mycosis fungoides had a frequency of 44% and a disease-specific 5-year survival of 88%. Sezary syndrome had a frequency of 3% and a disease-specific 5-year survival of 24%.
Cutaneous T cell lymphoma has an annual incidence of approximately 0.4 per 100,000 and it mainly occurs in the 5th to 6th decade of life, with a male/female ratio of 2:1. Mycosis fungoides is an indolent lymphoma with patients often having several years of eczematous or dermatitic skin lesions before the diagnosis is finally established. Mycosis fungoides commonly presents as chronic eczematous patches or plaques and can remain stable for many years. Early in the disease biopsies are often difficult to interpret and the diagnosis may only become apparent by observing the patient over time.
The clinical course of MF is unpredictable. Most patients will live normal lives and experience skin symptoms without serious complications. Approximately 10% of MF patients will experience progressive disease involving lymph nodes, peripheral blood, bone marrow and visceral organs. A particular syndrome in these patients involves erythroderma (intense and usually widespread reddening of the skin from dilation of blood vessels, often preceding or associated with exfoliation), and circulating tumour cells. This is known as SS. It has been estimated that approximately 5-10% of CTCL patients have SS. Patients with SS have a median survival of approximately 30 months.
Chronic Graft Versus Host Disease
Allogeneic hematopoietic cell transplantation (HCT) is a treatment used for a variety of malignant and nonmalignant diseases of the bone marrow and immune system. The procedure is often associated with serious immunological complications, particularly graft versus host disease (GvHD). A chronic form of GvHD (cGvHD) afflicts many allogeneic HCT recipients and results in dysfunction of numerous organ systems or even a profound state of immunodeficiency. Chronic GvHD is the most frequent cause of poor long-term outcome and quality of life after allogeneic HCT. The syndrome typically develops several months after transplantation, when the patient may no longer be under the direct care of the transplant team.
Approximately 50% of patients with cGvHD have limited disease and a good prognosis. Of the patients with extensive disease, approximately 60% will respond to treatment and eventually be able to discontinue immunosuppressive therapy. The remaining patients will develop opportunistic infection, or require prolonged treatment with immunosuppressive agents.
Chronic GvHD occurs in at least 30% to 50% of recipients of transplants from human leukocyte antigen matched siblings and at least 60% to 70% of recipients of transplants from unrelated donors. Risk factors include older age of patient or donor, higher degree of histoincompatibility, unrelated versus related donor, use of hematopoietic cells obtained from the blood rather than the marrow, and previous acute GvHD. Bhushan and Collins estimated that the incidence of severe cGvHD has probably increased in recent years because of the use of more unrelated transplants, donor leukocyte infusions, nonmyeloablative transplants and stem cells obtained from the blood rather than the marrow. The syndrome typically occurs 4 to 7 months after transplantation but may begin as early as 2 months or as late as 2 or more years after transplantation. Chronic GvHD may occur by itself, evolve from acute GvHD, or occur after resolution of acute GvHD.
The onset of the syndrome may be abrupt but is frequently insidious with manifestations evolving gradually for several weeks. The extent of involvement varies significantly from mild involvement limited to a few patches of skin to severe involvement of numerous organ systems and profound immunodeficiency. The most commonly involved tissues are the skin, liver, mouth, and eyes. Patients with limited disease have localized skin involvement, evidence of liver dysfunction, or both, whereas those with more involvement of the skin or involvement of other organs have extensive disease.
Treatment
Cutaneous T Cell Lymphoma
The optimal management of MF is undetermined because of its low prevalence, and its highly variable natural history, with frequent spontaneous remissions and exacerbations and often prolonged survival.
Nonaggressive approaches to therapy are usually warranted, with treatment aimed at improving symptoms and physical appearance while limiting toxicity. Given that multiple skin sites are usually involved, the initial treatment is usually topical or intralesional corticosteroids, or phototherapy with psoralen (a plant-derived compound that makes the skin temporarily sensitive to ultraviolet A) plus ultraviolet A (PUVA). PUVA is not curative and its influence on disease progression remains uncertain. Repeated courses are usually required, which may lead to an increased risk of both melanoma and nonmelanoma skin cancer. For thicker plaques, particularly if localized, radiotherapy with superficial electrons is an option.
“Second line” therapy for early stage disease is often topical chemotherapy, radiotherapy or total skin electron beam radiation (TSEB).
Treatment of advanced-stage (IIB-IV) MF usually consists of topical or systemic therapy, with systemic therapy used for refractory or rapidly progressive SS.
Bone marrow transplantation and peripheral blood stem cell transplantation have been used to treat many malignant hematologic disorders (e.g., leukemias) that are refractory to conventional treatment. Reports on the use of these procedures for the treatment of CTCL are limited and mostly consist of case reports or small case series.
Chronic Graft Versus Host Disease
Patients who develop cGvHD require reinstitution of immunosuppressive medication (if already discontinued) or an increase in dosage and possibly addition of other agents. The current literature regarding cGvHD therapy is less than optimal and many recommendations about therapy are based on common practices that await definitive testing. Patients with disease that is extensive by definition but is indolent in clinical appearance may respond to prednisone. However, patients with more aggressive disease are treated with higher doses of corticosteroids and/or cyclosporine.
Numerous salvage therapies have been considered in patients with refractory cGvHD, including ECP. Due to uncertainty around salvage therapies, Bhushan and Collins suggested that ideally, patients with refractory cGvHD should be entered into clinical trials.
Two Ontario expert consultants jointly estimated that there may be approximately 30 new erythrodermic treatment resistant CTCL patients and 30 new treatment resistant cGvHD patients per year who are unresponsive to other forms of therapy and may be candidates for ECP.
Extracorporeal photopheresis is a procedure that was initially developed as a treatment for CTCL, particularly SS.
Current Technique
Extracorporeal photopheresis is an immunomodulatory technique based on pheresis of light sensitive cells. Whole blood is removed from patients followed by pheresis. Lymphocytes are separated by centrifugation to create a concentrated layer of white blood cells. The lymphocyte layer is treated with methoxsalen (a drug that sensitizes the lymphocytes to light) and exposed to UVA, following which the lymphocytes are returned to the patient. Red blood cells and plasma are returned to the patient between each cycle.
Photosensitization is achieved by administering methoxsalen to the patient orally 2 hours before the procedure, or by injecting methoxsalen directly into the leucocyte-rich fraction. The latter approach avoids potential side effects such as nausea and provides a more consistent drug level within the machine.
In general, the procedure takes approximately 2.5 to 3.5 hours from the time the intravenous line is inserted until the white blood cells are returned to the patient.
For CTCL, the treatment schedule is generally 2 consecutive days every 4 weeks for a median of 6 months. For cGvHD, an expert in the field estimated that the treatment schedule would be 3 times a week for the 1st month, then 2 consecutive days every 2 weeks after that (i.e., 4 treatments a month) for a median of 6 to 9 months.
Regulatory Status
The UVAR XTS Photopheresis System is licensed by Health Canada as a Class 3 medical device (license # 7703) for the “palliative treatment of skin manifestations of CTCL.” It is not licensed for the treatment of cGvHD.
UVADEX (sterile solution methoxsalen) is not licensed by Health Canada, but can be used in Canada via the Special Access Program. (Personal communication, Therakos, February 16, 2006)
According to the manufacturer, the UVAR XTS photopheresis system licensed by Health Canada can also be used with oral methoxsalen. (Personal communication, Therakos, February 16, 2006) However, oral methoxsalen is associated with side effects, must be taken by the patient in advance of ECP, and has variable absorption in the gastrointestinal tract.
According to Health Canada, UVADEX is not approved for use in Canada. In addition, a review of the Product Monographs of the methoxsalen products approved in Canada showed that none have been approved for oral administration in combination with the UVAR XTS photopheresis system for “the palliative treatment of the skin manifestations of cutaneous T-cell lymphoma.”
In the United States, the UVAR XTS Photopheresis System is approved by the Food and Drug Administration (FDA) for “use in the ultraviolet-A (UVA) irradiation in the presence of the photoactive drug methoxsalen of extracorporeally circulating leukocyte-enriched blood in the palliative treatment of the skin manifestations of CTCL in persons who have not been responsive to other therapy.”
UVADEX is approved by the FDA for use in conjunction with the UVAR XTS photopheresis system for “use in the ultraviolet-A (UVA) irradiation in the presence of the photoactive drug methoxsalen of extracorporeally circulating leukocyte-enriched blood in the palliative treatment of the skin manifestations of CTCL in persons who have not been responsive to other therapy.”
The use of the UVAR XTS photopheresis system or UVADEX for cGvHD is an off-label use of an FDA-approved device/drug.
Summary of Findings
The quality of the trials was examined.
As stated by the GRADE Working Group, the following definitions were used in grading the quality of the evidence.
Cutaneous T Cell Lymphoma
Overall, there is low-quality evidence that ECP improves response rates and survival in patients with refractory erythrodermic CTCL (Table 1).
Limitations in the literature related to ECP for the treatment of refractory erythrodermic CTCL include the following:
Different treatment regimens.
A variety of forms of CTCL (not necessarily treatment resistant): MF, erythrodermic MF, SS.
SS with peripheral blood involvement (the role of T cell clonality reporting is unclear).
Mostly case series (one small crossover RCT with several limitations).
Small sample sizes.
Retrospective designs.
Response criteria not clearly defined or consistently applied.
Unclear how concomitant therapy contributed to responses.
Variation in definitions of concomitant therapy.
Comparison to historical controls.
Some patients excluded from analysis because of disease progression, toxicity, or other reasons.
Unclear or unusual statistical methods.
Quality of life not reported as an outcome of interest.
The reported complete response (CR) range is ~16% to 23%, and the overall reported CR/partial response (PR) range is ~33% to 80%.
The wide range in reported responses to ECP appears to be due to the variability of the patients treated and the way in which the data were presented and analyzed.
Many patients, in mostly retrospective case series, were concurrently on other therapies and were not assessed for comparability of diagnosis or disease stage (MF versus SS; erythrodermic versus not erythrodermic). Blood involvement in patients receiving ECP (e.g., T cell clonality) was not consistently reported, especially in earlier studies. The definitions of partial and complete response also are not standardized or consistent between studies.
Quality of life was reported in one study; however, the scale was developed by the authors and is not a standard validated scale.
Adverse events associated with ECP appear to be uncommon and most involve catheter related infections and hypotension caused by volume depletion.
GRADE Quality of Studies – Extracorporeal Photopheresis for Refractory Erythrodermic Cutaneous T-Cell Lymphoma
Chronic Graft-Versus-Host Disease
Overall, there is low-quality evidence that ECP improves response rates and survival in patients with refractory cGvHD (Table 2).
Patients in the studies had stem cell transplants due to a variety of hematological disorders (e.g., leukemias, aplastic anemia, thalassemia major, Hodgkin’s lymphoma, non Hodgkin’s lymphoma).
In 2001, the Blue Cross Blue Shield Technology Evaluation Centre concluded that ECP meets the TEC criteria as a treatment for cGvHD that is refractory to established therapy.
The Catalan health technology assessment (also published in 2001) concluded that ECP is a new but experimental therapeutic alternative for the treatment of the erythrodermic phase of CTCL and of cGvHD after allogeneic HCT, and that this therapy should be evaluated in the framework of an RCT.
Quality of life (Lansky/Karnofsky play performance score) was reported in 1 study.
The patients in the studies were all refractory to steroids and other immunosuppressive agents, and these drugs were frequently continued concomitantly with ECP.
Criteria for assessment of organ improvement in cGvHD are variable, but PR was typically defined as >50% improvement from baseline parameters and CR as complete resolution of organ involvement.
Follow-up was variable and incomplete among the studies.
GRADE Quality of Studies – ECP for Refractory cGvHD
Conclusion
As per the GRADE Working Group, overall recommendations consider 4 main factors.
The tradeoffs, taking into account the estimated size of the effect for the main outcome, the confidence limits around those estimates and the relative value placed on the outcome.
The quality of the evidence (Tables 1 and 2).
Translation of the evidence into practice in a specific setting, taking into consideration important factors that could be expected to modify the size of the expected effects such as proximity to a hospital or availability of necessary expertise.
Uncertainty about the baseline risk for the population of interest.
The GRADE Working Group also recommends that incremental costs of healthcare alternatives should be considered explicitly alongside the expected health benefits and harms. Recommendations rely on judgments about the value of the incremental health benefits in relation to the incremental costs. The last column in Table 3 is the overall trade-off between benefits and harms and incorporates any risk/uncertainty.
For refractory erythrodermic CTCL, the overall GRADE and strength of the recommendation is “weak”: the quality of the evidence is “low” (uncertainties due to methodological limitations in study design, in terms of study quality and directness), and the corresponding risk/uncertainty is increased by an annual budget impact of approximately $1.5M Cdn (based on 30 patients), while the cost-effectiveness of ECP is unknown and difficult to estimate given that there are no high-quality studies of effectiveness. The device is licensed by Health Canada, but the sterile solution of methoxsalen is not.
With an annual budget impact of $1.5M Cdn (based on 30 patients) and a current expenditure of $1.3M Cdn (for out-of-country treatment of 7 patients), the potential annual cost savings based on 30 patients with refractory erythrodermic CTCL are about $3.8M Cdn.
For refractory cGvHD, the overall GRADE and strength of the recommendation is likewise “weak”: the quality of the evidence is “low” (uncertainties due to methodological limitations in study design, in terms of study quality and directness), and the corresponding risk/uncertainty is increased by a budget impact of approximately $1.5M Cdn, while the cost-effectiveness of ECP is unknown and difficult to estimate given that there are no high-quality studies of effectiveness. Neither the device nor the sterile solution is licensed by Health Canada for the treatment of cGvHD.
If all the ECP procedures for patients with refractory erythrodermic CTCL and refractory cGvHD were performed in Ontario, the annual budget impact would be approximately $3M Cdn.
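A small worked calculation, using only the figures quoted above, makes the combined budget arithmetic explicit:

```python
# Worked arithmetic from the figures quoted above (amounts in Cdn dollars).
ctcl_budget = 1_500_000   # ~$1.5M/year, based on ~30 refractory erythrodermic CTCL patients
cgvhd_budget = 1_500_000  # ~$1.5M/year, based on ~30 refractory cGvHD patients

print(f"approximate cost per patient per year: ${ctcl_budget / 30:,.0f}")
print(f"combined annual budget impact: ${ctcl_budget + cgvhd_budget:,.0f}")  # ~$3M Cdn
```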
Overall GRADE and Strength of Recommendation (Including Uncertainty)
PMCID: PMC3379535  PMID: 23074497
5.  Autologous Growth Factor Injections in Chronic Tendinopathy 
Journal of Athletic Training  2014;49(3):428-430.
Reference:
de Vos RJ, van Veldhoven PLJ, Moen MH, Weir A, Tol JL. Autologous growth factor injections in chronic tendinopathy: a systematic review. Br Med Bull. 2010;95:63–77.
Clinical Question:
The authors of this systematic review evaluated the literature to critically consider the effects of growth factors delivered through autologous whole-blood and platelet-rich plasma (PRP) injections in managing wrist-flexor and -extensor tendinopathies, plantar fasciopathy, and patellar tendinopathy. The primary question was: according to the published literature, is there sufficient evidence to support the use of growth factors delivered through autologous whole-blood and PRP injections for chronic tendinopathy?
Data Sources:
The authors performed a comprehensive, systematic literature search in October 2009 using PubMed, MEDLINE, EMBASE, CINAHL, and the Cochrane Library, without time limits. The following key words were used in different combinations: tendinopathy, tendinosis, tendinitis, tendons, tennis elbow, plantar fasciitis, platelet rich plasma, platelet transfusion, and autologous blood or injection. The search was limited to human studies in English. All bibliographies from the initial literature search were also reviewed to identify additional relevant studies.
Study Selection:
Studies were eligible based on the following criteria: (1) Articles were suitable (inclusion criteria) if the participants had been clinically diagnosed as having chronic tendinopathy; (2) the design had to be a prospective clinical study, randomized controlled trial, nonrandomized clinical trial, or prospective case series; (3) a well-described intervention in the form of a growth factor injection with either PRP or autologous whole blood was used; and (4) the outcome was reported in terms of pain or function (or both).
Data Extraction:
All titles and abstracts were assessed by 2 researchers, and all relevant articles were obtained. Two researchers independently read the full text of each article to determine if it met the inclusion criteria. If opinions differed on suitability, a third reviewer was consulted to reach consensus. The data extracted included number of participants, study design, inclusion criteria, intervention, control group, primary outcome measures (pain using a visual analog or ordinal scale or function), time of follow-up, and outcomes for intervention and control group (percentage improvement) using a standardized data-extraction form. Function was evaluated in 9 of the 11 studies using (1) the Nirschl scale (elbow function) or the modified Mayo score for wrist flexors and extensors, (2) the Victorian Institute of Sports Assessment-Patella score, a validated outcome measure for patellar tendinopathy, or the Tegner score for patellar tendinopathy, and (3) the rearfoot score from the American Orthopaedic Foot and Ankle Scale for plantar fasciopathy.
The Physiotherapy Evidence Database (PEDro) scale contains 11 items; items 2-11 receive 1 point each for a yes response. Reliability is sufficient (0.68) for the PEDro scale to be used to assess physiotherapy trials. A score of 6 or higher on the PEDro scale is considered to indicate a high-quality study; a score below 6 indicates a low-quality study. The PEDro score determined the quality (≥6 or <6) of each randomized controlled trial (RCT), nonrandomized clinical trial, or prospective case series. A qualitative analysis with 5 levels of evidence (strong, moderate, limited, conflicting, or no evidence) was used to determine recommendations for the use of the intervention. The number of high-quality or low-quality RCTs or nonrandomized clinical trials with consistent or inconsistent results determined the level of evidence (1-5).
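A minimal sketch of the PEDro scoring rule as described (one point each for a yes on items 2 through 11, with a total of 6 or more indicating a high-quality study); the example item answers are hypothetical.

```python
# Sketch of the PEDro scoring rule described above. Items 2-11 score one
# point each for "yes"; item 1 (eligibility criteria) is not counted.
# The example answers are hypothetical.

def pedro_score(item_answers: dict[int, bool]) -> int:
    """Sum one point for each 'yes' on items 2-11."""
    return sum(1 for item in range(2, 12) if item_answers.get(item, False))

def quality(score: int) -> str:
    return "high quality" if score >= 6 else "low quality"

answers = {1: True, 2: True, 3: False, 4: True, 5: False,
           6: False, 7: True, 8: True, 9: False, 10: True, 11: True}
score = pedro_score(answers)
print(f"PEDro score: {score} -> {quality(score)}")
```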
Main Results:
Using the specific search criteria, the authors identified 418 potential sources. After screening of the title or abstract (or both), they excluded 405 sources, which left 13 studies. After viewing the full text, they excluded 2 additional sources (a case report and a study in which the outcome measure was remission of symptoms and not pain or function), leaving 11 studies for analysis. Six of the 11 studies were characterized by an observational, noncontrolled design; the remaining 5 studies were controlled clinical trials, 2 of which had proper randomization.
The mean number of participants included in the studies was 40.5 (range = 20 to 100). Three of the studies were on “tennis elbow,” 1 on “golfer's elbow,” 1 on wrist extensor or flexor tendinopathy, 3 on plantar fasciopathy, and 3 on chronic patellar tendinopathy. Based on the information reported, there was no standardization of frequency or method of growth factor injection treatment or of preparation of the volume, and an optimal mixture was not described. Autologous whole-blood injections were used in 8 studies; in 5 studies, the autologous whole-blood injection was combined with a local anesthetic. In contrast, a local anesthetic was used in only 1 of the 3 PRP injection studies. The authors of the other 2 studies did not report whether a local anesthetic was used. The number of autologous whole-blood and PRP injections varied, ranging from 1 to 3. The centrifuging process was single or double for the PRP injections. In 2 studies, calcium was added to activate the platelets. A visual analogue or ordinal pain scale was used in 10 of the 11 studies. Function was evaluated in 9 of the 11 studies using (1) the Nirschl scale in 4 elbow studies or the modified Mayo score at baseline in 1 elbow study, (2) the Victorian Institute of Sports Assessment-Patella score for 1 study and the Tegner score for 2 of the patellar tendinopathy studies, and (3) the rearfoot score of the American Orthopaedic Foot and Ankle Scale for 1 plantar fasciopathy study. Only 1 study used an appropriate, disease-specific, validated tendinopathy measure (Victorian Institute of Sports Assessment-Patella).
All intervention groups reported a significant improvement in pain or function score (or both), with a mean improvement of 66% over a mean follow-up of 9.4 months. The control groups in these studies also showed a mean improvement of 57%. None of the pain benefits among the intervention groups were greater than those for the control group at final follow-up. In 4 of the studies, the control group and the autologous growth factor injection group had similar results in pain or function or both, whereas in 2 studies, the control group had greater relief in pain than the injection group.
Eleven studies were assessed using the PEDro scale. The PEDro scores for these studies ranged from 1 to 7, with an average score of 3.4. Only 3 studies had PEDro scores of ≥6 and were considered high quality. The 3 high-quality plantar fasciopathy studies used autologous growth factor injections but did not show a significant improvement over the control group. In one of the studies showing no beneficial effect, the autologous growth factor injections were compared with corticosteroids. Compared with other treatments, level 1 (strong) evidence demonstrated that autologous growth factor injections did not improve pain or function in plantar fasciopathy. The PRP injection results were based on 3 low-quality studies, 2 for the patellar tendon and 1 for the wrist flexors-extensors; level 3 (limited) evidence suggests that PRP injections improve pain or function.
Conclusions:
Strong evidence indicates that autologous growth factor injections do not improve plantar fasciopathy pain or function when combined with anesthetic agents or when compared with corticosteroid injections, dry needling, or exercise therapy. Furthermore, limited evidence suggests that PRP injections are beneficial. Except for 2 high-quality RCTs, the included studies were methodologically flawed. Additional studies should be conducted using proper control groups, randomization, blinding, and validated disability outcome measures for pain and function. Until then, the results remain speculative, because autologous whole-blood and PRP injection treatments are not standardized.
doi:10.4085/1062-6050-49.3.06
PMCID: PMC4080590  PMID: 24840581
Keywords: tendon injuries; platelet-rich plasma; injection therapy
6.  Ultraviolet Phototherapy Management of Moderate-to-Severe Plaque Psoriasis 
Executive Summary
Objective
The purpose of this evidence based analysis was to determine the effectiveness and safety of ultraviolet phototherapy for moderate-to-severe plaque psoriasis.
Research Questions
The specific research questions for the evidence review were as follows:
What is the safety of ultraviolet phototherapy for moderate-to-severe plaque psoriasis?
What is the effectiveness of ultraviolet phototherapy for moderate-to-severe plaque psoriasis?
Clinical Need: Target Population and Condition
Psoriasis is a common chronic, systemic inflammatory disease affecting the skin, nails, and occasionally the joints, with a lifelong waxing and waning course. It occurs worldwide, with a prevalence of at least 2% of the general population, making it one of the most common systemic inflammatory diseases. The immune-mediated disease has several clinical presentations, the most common (85% - 90%) being plaque psoriasis.
Characteristic features of psoriasis include scaling, redness, and elevation of the skin. Patients with psoriasis may also present with a range of disabling symptoms such as pruritus (itching), pain, bleeding, or burning associated with plaque lesions, and up to 30% are classified as having moderate-to-severe disease. Further, some patients with psoriasis are complex medical cases in whom diabetes, inflammatory bowel disease, and hypertension are more likely to be present than in control populations, and 10% also suffer from arthritis (psoriatic arthritis). The etiology of psoriasis is unknown but is thought to result from complex interactions between the environment and predisposing genes.
Management of psoriasis is related to the extent of the skin involvement, although its presence on the hands, feet, face or genitalia can present challenges. Moderate-to-severe psoriasis is managed by phototherapy and a range of systemic agents including traditional immunosuppressants such as methotrexate and cyclosporin. Treatment with modern immunosuppressant agents known as biologicals, which more specifically target the immune defects of the disease, is usually reserved for patients with contraindications and those failing or unresponsive to treatment with traditional immunosuppressants or phototherapy.
Treatment plans are based on a long-term approach to managing the disease, the patient’s expectations, individual responses, and the risk of complications. The treatment goals are severalfold, but primarily to:
1) improve physical signs and secondary psychological effects,
2) reduce inflammation and control skin shedding,
3) control physical signs as long as possible, and to
4) avoid factors that can aggravate the condition.
Approaches are generally individualized because of the variable presentation, quality of life implications, co-existent medical conditions, and triggering factors (e.g. stress, infections and medications). Individual responses and commitments to therapy also present possible limitations.
Phototherapy
Ultraviolet phototherapy units have been licensed since February 1993 as a class 2 device in Canada. Units are available as hand held devices, hand and foot devices, full-body panel, and booth styles for institutional and home use. Units are also available with a range of ultraviolet A, broad and narrow band ultraviolet B (BB-UVB and NB-UVB) lamps. After establishing appropriate ultraviolet doses, three-times weekly treatment schedules for 20 to 25 treatments are generally needed to control symptoms.
Evidence-Based Analysis Methods
The literature search strategy employed keywords and subject headings to capture the concepts of 1) phototherapy and 2) psoriasis. The search involved runs in the following databases: Ovid MEDLINE (1996 to March Week 3 2009), OVID MEDLINE In-Process and Other Non-Indexed Citations, EMBASE (1980 to 2009 Week 13), the Wiley Cochrane Library, and the Centre for Reviews and Dissemination/International Agency for Health Technology Assessment. Parallel search strategies were developed for the remaining databases. Search results were limited to English-language studies in humans published between January 1999 and March 31, 2009. Search alerts were generated and reviewed for relevant literature up until May 31, 2009.
Inclusion Criteria
English language reports and human studies
Ultraviolet phototherapy interventions for plaque-type psoriasis
Reports involving efficacy and/or safety outcome studies
Original reports with defined study methodology
Standardized measurements on outcome events such as technical success, safety, effectiveness, durability, quality of life or patient satisfaction
Exclusion Criteria
Non-systematic reviews, letters, comments and editorials
Randomized trials involving side-to-side or half body comparisons
Randomized trials not involving ultraviolet phototherapy intervention for plaque-type psoriasis
Trials involving dosing studies, pilot feasibility studies or lacking control groups
Summary of Findings
A 2000 health technology evidence report on the overall management of psoriasis by the National Institute for Health Research (NIHR) Health Technology Assessment Program of the UK was identified in the MAS evidence-based review. The report included 109 RCT studies published between 1966 and June 1999 involving four major treatment approaches – 51 on phototherapy, 32 on oral retinoids, 18 on cyclosporin and five on fumarates. The absence of RCTs on methotrexate was noted, as original studies with this agent had been performed prior to 1966.
Of the 51 RCT studies involving phototherapy, 22 involved UVA, 21 involved UVB, five involved both UVA and UVB and three involved natural light as a source of UV. The RCT studies included comparisons of treatment schedules, ultraviolet source, addition of adjuvant therapies, and comparisons between phototherapy and topical treatment schedules. Because of heterogeneity, no synthesis or meta-analysis could be performed. Overall, the reviewers concluded that the efficacy of only five therapies could be supported from the RCT-based evidence review: photochemotherapy or phototherapy; cyclosporin; systemic retinoids; combination topical vitamin D3 analogues (calcipotriol) and corticosteroids in combination with phototherapy; and fumarates. Although there was no RCT evidence supporting methotrexate, its efficacy for psoriasis is well known and it continues to be a treatment mainstay.
The conclusion of the NIHR evidence review was that both photochemotherapy and phototherapy were effective treatments for clearing psoriasis, although their comparative effectiveness was unknown. Despite the conclusions on efficacy, a number of issues were identified in the evidence review and several areas for future research were discussed to address these limitations. Trials focusing on comparative effectiveness, either between ultraviolet sources or between classes of treatment such as methotrexate versus phototherapy, were recommended to refine treatment algorithms. The need for better assessment of the cost-effectiveness of therapies, considering systemic drug costs and costs of surveillance as well as drug efficacy, was also noted. Overall, the authors concluded that phototherapy and photochemotherapy had important roles in psoriasis management and were standard therapeutic options for psoriasis offered in dermatology practices.
The MAS evidence-based review focusing on the RCT evidence for ultraviolet phototherapy management of moderate-to-severe plaque psoriasis was performed as an update to the NIHR 2000 systematic review on treatments for severe psoriasis. In this review, an additional 26 RCT reports examining phototherapy or photochemotherapy for psoriasis were identified. Among the studies were two RCTs comparing ultraviolet wavelength sources, five RCTs comparing different forms of phototherapy, four RCTs combining phototherapy with prior spa saline bathing, nine RCTs combining phototherapy with topical agents, two RCTs combining phototherapy with the systemic immunosuppressive agents methotrexate or alefacept, one RCT comparing phototherapy with an additional light source (the excimer laser), and one comparing a combination therapy with phototherapy and psychological intervention involving simultaneous audiotape sessions on mindfulness and stress reduction. Two trials also examined the effect of treatment setting on effectiveness of phototherapy, one on inpatient versus outpatient therapy and one on outpatient clinic versus home-based phototherapy.
Conclusions
The conclusions of the MAS evidence-based review are outlined in Table ES1. In summary, phototherapy provides good control of clinical symptoms in the short term for patients with moderate-to-severe plaque-type psoriasis that have failed or are unresponsive to management with topical agents. However, many of the evidence gaps identified in the NIHR 2000 evidence review on psoriasis management persisted. In particular, the lack of evidence on the comparative effectiveness and/or cost-effectiveness between the major treatment options for moderate-to-severe psoriasis remained. The evidence on effectiveness and safety of longer term strategies for disease management has also not been addressed. Evidence for the safety, effectiveness, or cost-effectiveness of phototherapy delivered in various settings is emerging but is limited. In addition, because all available treatments for psoriasis – a disease with a high prevalence, chronicity, and cost – are palliative rather than curative, strategies for disease control and improvements in self-efficacy employed in other chronic disease management strategies should be investigated.
RCT Evidence for Ultraviolet Phototherapy Treatment of Moderate-To-Severe Plaque Psoriasis
Phototherapy is an effective treatment for moderate-to-severe plaque psoriasis
Narrow band PT is more effective than broad band PT for moderate-to-severe plaque psoriasis
Oral-PUVA has a greater clinical response, requires fewer treatments and has a greater cumulative UV irradiation dose than UVB to achieve treatment effects for moderate-to-severe plaque psoriasis
Spa salt water baths prior to phototherapy did increase the short-term clinical response of moderate-to-severe plaque psoriasis but did not decrease the cumulative UV irradiation dose
Addition of topical agents (vitamin D3 calcipotriol) to NB-UVB did not increase mean clinical response or decrease treatments or cumulative UV irradiation dose
Methotrexate prior to NB-UVB in high-need psoriasis patients did significantly increase clinical response, decrease the number of treatment sessions and decrease the cumulative UV irradiation dose
Phototherapy following alefacept did increase early clinical response in moderate-to-severe plaque psoriasis
Effectiveness and safety of home NB-UVB phototherapy was not inferior to NB-UVB phototherapy provided in a clinic to patients with psoriasis referred for phototherapy. Treatment burden was lower and patient satisfaction was higher with home therapy and patients in both groups preferred future phototherapy treatments at home
Ontario Health System Considerations
A 2006 survey of ultraviolet phototherapy services in Canada identified 26 phototherapy clinics in Ontario for a population of over 12 million. At that time, there were 177 dermatologists, and phototherapy services were provided in 28% (14/50) of the province’s 50 geographic regions. The majority of the phototherapy services were reported to be located in densely populated areas; relatively few patients living in rural communities had access to these services. The inconvenience of multiple weekly visits for optimal phototherapy treatment effects poses additional burdens to those with travel difficulties related to health, job, or family-related responsibilities.
Physician OHIP billing for phototherapy services totaled 117,216 billings in 2007, representing approximately 1,800 patients in the province treated in private clinics. The number of patients treated in hospitals is difficult to estimate as physician costs are not billed directly to OHIP in this setting. Instead, phototherapy units and services provided in hospitals are funded by hospitals’ global budgets. Some hospitals in the province, however, have divested their phototherapy services, so the number of phototherapy clinics and their total capacity is currently unknown.
Technological advances have enabled changes in phototherapy treatment regimens from lengthy hospital inpatient stays to outpatient clinic visits and, more recently, to an at-home basis. When combined with a telemedicine follow-up, home phototherapy may provide an alternative strategy for improved access to service and follow-up care, particularly for those with geographic or mobility barriers. Safety and effectiveness have, however, so far been evaluated for only one phototherapy home-based delivery model. Alternate care models and settings could potentially increase service options and access, but the broader consequences of the varying cost structures and incentives that either increase or decrease phototherapy services are unknown.
Economic Analyses
The focus of the current economic analysis was to characterize the costs associated with the provision of NB-UVB phototherapy for plaque-type, moderate-to-severe psoriasis in different clinical settings, including home therapy. A literature review was conducted; no published cost-effectiveness (cost-utility) analyses were identified in this area.
Hospital, Clinic, and Home Costs of Phototherapy
Costs for NB-UVB phototherapy were based on consultations with equipment manufacturers and dermatologists. Device costs applicable to the provision of NB-UVB phototherapy in hospitals, private clinics and at a patient’s home were estimated. These costs included capital costs of purchasing NB-UVB devices (amortized over 15-20 years), maintenance costs of replacing equipment bulbs, physician costs of phototherapy treatment in private clinics ($7.85 per phototherapy treatment), and medication and laboratory costs associated with treatment of moderate-to-severe psoriasis.
NB-UVB phototherapy services provided in a hospital setting were paid for by hospitals directly. Phototherapy services in private clinic and home settings were paid for by the clinic and patient, respectively, except for physician services covered by OHIP. Indirect funding was provided to hospitals as part of global budgeting and resource allocation. Home therapy services for NB-UVB phototherapy were not covered by the MOHLTC. Coverage for home-based phototherapy, however, was in some cases provided by third-party insurers.
Device costs for NB-UVB phototherapy were estimated for two types of phototherapy units: a “booth unit” consisting of 48 bulbs used in hospitals and clinics, and a “panel unit” consisting of 10 bulbs for home use. The device costs of the booth and panel units were estimated at approximately $18,600 and $2,900, respectively; simple amortization over 15 and 20 years implied yearly costs of approximately $2,500 and $150, respectively. Replacement cost for individual bulbs was about $120, resulting in total annual maintenance costs of about $8,640 and $120 for booth and panel units, respectively.
Estimated Total Costs for Ontario
Average annual cost per patient for NB-UVB phototherapy provided in the hospital, private clinic, or at home was estimated to be $292, $810, and $365, respectively. For comparison purposes, treatment of moderate-to-severe psoriasis with methotrexate and cyclosporin amounted to $712 and $3,407 annually per patient, respectively; yearly costs for biological drugs were estimated to be $18,700 for alefacept and $20,300 for etanercept-based treatments.
Total annual costs of NB-UVB phototherapy were estimated by applying average costs to an estimated proportion of the population (age 18 or older) eligible for phototherapy treatment. The prevalence of psoriasis was estimated to be approximately 2% of the population, of which about 85% was of plaque-type psoriasis and approximately 20% to 30% was considered moderate-to-severe in disease severity. An estimate of 25% for moderate-to-severe psoriasis cases was used in the current economic analysis, resulting in a range of 29,400 to 44,200 cases. Approximately 21% of these patients were estimated to be using NB-UVB phototherapy for treatment, resulting in between 6,200 and 9,300 cases. The average number of cases (7,700) was used to calculate the associated costs for Ontario by treatment setting.
Total annual costs were as follows: $2.3 million in a hospital setting, $6.3 million in a private clinic setting, and $2.8 million for home phototherapy. Costs for phototherapy services provided in private clinics were the greatest ($810 per patient annually; total of $6.3 million annually) and differed from the same services provided in the hospital setting only by the additional physician OHIP fees for phototherapy.
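The case-count and cost arithmetic above can be retraced with a short calculation. The sketch below (Python) is illustrative only: the Ontario adult population figure is an assumption back-calculated to be consistent with the reported range, and the case range is reproduced from the 20% to 30% severity bounds rather than the 25% point estimate.

# Minimal sketch retracing the reported case and cost arithmetic.
# ASSUMPTION: the Ontario population aged 18+ (~8.7 million) is not stated
# in the report; it is chosen here so the figures line up.
adults = 8_700_000

psoriasis_prev = 0.02          # prevalence of psoriasis
plaque_share = 0.85            # proportion with plaque-type psoriasis
severity_range = (0.20, 0.30)  # proportion moderate-to-severe (reported range)
nbuvb_share = 0.21             # proportion treated with NB-UVB phototherapy

cases = [adults * psoriasis_prev * plaque_share * s for s in severity_range]
treated = [c * nbuvb_share for c in cases]
avg_treated = sum(treated) / 2

print([round(c) for c in cases])    # [29580, 44370]  (reported: 29,400 to 44,200)
print([round(t) for t in treated])  # [6212, 9318]    (reported: 6,200 to 9,300)
print(round(avg_treated))           # 7765            (reported average: 7,700)

# Annual per-patient costs by setting, applied to the average case count:
per_patient = {"hospital": 292, "clinic": 810, "home": 365}
totals = {k: round(v * avg_treated / 1e6, 1) for k, v in per_patient.items()}
print(totals)  # {'hospital': 2.3, 'clinic': 6.3, 'home': 2.8} ($ millions)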
Keywords
Psoriasis, ultraviolet radiation, phototherapy, photochemotherapy, NB-UVB, BB-UVB, PUVA
PMCID: PMC3377497  PMID: 23074532
7.  Limbal Stem Cell Transplantation 
Executive Summary
Objective
The objective of this analysis is to systematically review limbal stem cell transplantation (LSCT) for the treatment of patients with limbal stem cell deficiency (LSCD). This evidence-based analysis reviews LSCT as a primary treatment for nonpterygium LSCD conditions, and LSCT as an adjuvant therapy to excision for the treatment of pterygium.
Background
Clinical Need: Condition and Target Population
The outer surface of the eye is covered by 2 distinct cell layers: the corneal epithelial layer that overlies the cornea, and the conjunctival epithelial layer that overlies the sclera. These cell types are separated by a transitional zone known as the limbus. The corneal epithelial cells are renewed every 3 to 10 days by a population of stem cells located in the limbus.
Nonpterygium Limbal Stem Cell Deficiency
When the limbal stem cells are depleted or destroyed, LSCD develops. In LSCD, the conjunctival epithelium migrates onto the cornea (a process called conjunctivalization), resulting in a thickened, irregular, unstable corneal surface that is prone to defects, ulceration, corneal scarring, vascularization, and opacity. Patients experience symptoms including severe irritation, discomfort, photophobia, tearing, blepharospasm, chronic inflammation and redness, and severely decreased vision.
Depending on the degree of limbal stem cell loss, LSCD may be total (diffuse) or partial (local). In total LSCD, the limbal stem cell population is completely destroyed and conjunctival epithelium covers the entire cornea. In partial LSCD, some areas of the limbus are unharmed, and the corresponding areas on the cornea maintain phenotypically normal corneal epithelium.
Confirmation of the presence of conjunctivalization is necessary for LSCD diagnosis as the other characteristics and symptoms are nonspecific and indicate a variety of diseases. The definitive test for LSCD is impression cytology, which detects the presence of conjunctival epithelium and its goblet cells on the cornea. However, in the opinion of a corneal expert, diagnosis is often based on clinical assessment, and in the expert’s opinion, it is unclear whether impression cytology is more accurate and reliable than clinical assessment, especially for patients with severe LSCD.
The incidence of LSCD is not well understood. A variety of underlying disorders are associated with LSCD including chemical or thermal injuries, ultraviolet and ionizing radiation, Stevens-Johnson syndrome, multiple surgeries or cryotherapies, contact lens wear, extensive microbial infection, advanced ocular cicatricial pemphigoid, and aniridia. In addition, some LSCD cases are idiopathic. These conditions are uncommon (e.g., the prevalence of aniridia ranges from 1 in 40,000 to 1 in 100,000 people).
Pterygium
Pterygium is a wing-shaped fibrovascular tissue growth from the conjunctiva onto the cornea. Pterygium is the result of partial LSCD caused by localized ultraviolet damage to limbal stem cells. As the pterygium invades the cornea, it may cause irregular astigmatism, loss of visual acuity, chronic irritation, recurrent inflammation, double vision, and impaired ocular motility.
Pterygium occurs worldwide. Incidence and prevalence rates are highest in the “pterygium belt,” which ranges from 30 degrees north to 30 degrees south of the equator, and lower prevalence rates are found at latitudes greater than 40 degrees. The prevalence of pterygium for Caucasians residing in urban, temperate climates is estimated at 1.2%.
Existing Treatments Other Than Technology Being Reviewed
Nonpterygium Limbal Stem Cell Deficiency
In total LSCD, a patient’s limbal stem cells are completely depleted, so any successful treatment must include new stem cells. Autologous oral mucosal epithelium transplantation has been proposed as an alternative to LSCT. However, this procedure is investigational, and there is very limited level 4c evidence to support this technique (fewer than 20 eyes examined in 4 case series and 1 case report).
For patients with partial LSCD, treatment may not be necessary if their visual axis is not affected. However, if the visual axis is conjunctivalized, several disease management options exist including repeated mechanical debridement of the abnormal epithelium; intensive, nonpreserved lubrication; bandage contact lenses; autologous serum eye drops; other investigational medical treatments; and transplantation of an amniotic membrane inlay. However, these are all disease management treatments; LSCT is the only curative option.
Pterygium
The primary treatment for pterygium is surgical excision. However, recurrence is a common problem after excision using the bare sclera technique: reported recurrence rates range from 24% to 89%. Thus, a variety of adjuvant therapies have been used to reduce the risk of pterygium recurrence including LSCT, amniotic membrane transplantation (AMT), conjunctival autologous (CAU) transplantation, and mitomycin C (MMC, an antimetabolite drug).
New Technology Being Reviewed
To successfully treat LSCD, the limbal stem cell population must be repopulated. To achieve this, 4 LSCT procedures have been developed: conjunctival-limbal autologous (CLAU) transplantation; living-related conjunctival-limbal allogeneic (lr-CLAL) transplantation; keratolimbal allogeneic (KLAL) transplantation; and ex vivo expansion of limbal stem cells transplantation. Since the ex vivo expansion of limbal stem cells transplantation procedure is considered experimental, it has been excluded from the systematic review. These procedures vary by the source of donor cells and the amount of limbal tissue used. For CLAU transplants, limbal stem cells are obtained from the patient’s healthy eye. For lr-CLAL and KLAL transplants, stem cells are obtained from living-related and cadaveric donor eyes, respectively.
In CLAU and lr-CLAL transplants, 2 to 4 limbal grafts are removed from the superior and inferior limbus of the donor eye. In KLAL transplants, the entire limbus from the donor eye is used.
The recipient eye is prepared by removing the abnormal conjunctival and scar tissue. An incision is made into the conjunctival tissue into which the graft is placed, and the graft is then secured to the neighbouring limbal and scleral tissue with sutures. Some LSCT protocols include concurrent transplantation of an amniotic membrane onto the cornea.
Regulatory Status
Health Canada does not require premarket licensure for stem cells. However, they are subject to Health Canada’s clinical trial regulations until the procedure is considered accepted transplantation practice, at which time it will be covered by the Safety of Human Cells, Tissues and Organs for Transplantation Regulations (CTO Regulations).
Review Strategy
The Medical Advisory Secretariat systematically reviewed the literature to assess the effectiveness and safety of LSCT for the treatment of patients with nonpterygium LSCD and pterygium. A comprehensive search method was used to retrieve English-language journal articles from selected databases.
The GRADE approach was used to systematically and explicitly evaluate the quality of evidence and strength of recommendations.
Summary of Findings
Nonpterygium Limbal Stem Cell Deficiency
The search identified 873 citations published between January 1, 2000, and March 31, 2008. Nine studies met the inclusion criteria, and 1 additional citation was identified through a bibliography review. The review included 10 case series (3 prospective and 7 retrospective).
Patients who received autologous transplants (i.e., CLAU) achieved significantly better long-term corneal surface results compared with patients who received allogeneic transplants (lr-CLAL, P< .001; KLAL, P< .001). There was no significant difference in corneal surface outcomes between the allogeneic transplant options, lr-CLAL and KLAL (P = .328). However, human leukocyte antigen matching and systemic immunosuppression may improve the outcome of lr-CLAL compared with KLAL. Regardless of graft type, patients with Stevens-Johnson syndrome had poorer long-term corneal surface outcomes.
Concurrent AMT was associated with poorer long-term corneal surface improvements. When the effect of the AMT was removed, the difference between autologous and allogeneic transplants was much smaller.
Patients who received CLAU transplants had a significantly higher rate of visual acuity improvements compared with those who received lr-CLAL transplants (P = .002). However, to achieve adequate improvements in vision, patients with deep corneal scarring will require a corneal transplant several months after the LSCT.
No donor eye complications were observed.
Epithelial rejection and microbial keratitis were the most common long-term complications associated with LSCT (complications occurred in 6%–15% of transplantations). These complications can result in graft failure, so patients should be monitored regularly following LSCT.
Pterygium
The search yielded 152 citations published between January 1, 2000 and May 16, 2008. Six randomized controlled trials (RCTs) that evaluated LSCT as an adjuvant therapy for the treatment of pterygium met the inclusion criteria and were included in the review.
Limbal stem cell transplantation was compared with CAU, AMT, and MMC. The results showed that CLAU significantly reduced the risk of pterygium recurrence compared with CAU (relative risk [RR], 0.09; 95% confidence interval [CI], 0.01–0.69; P = .02). CLAU reduced the risk of pterygium recurrence for primary pterygium compared with MMC, but this comparison did not reach statistical significance (RR, 0.48; 95% CI, 0.21–1.10; P = .08). Both AMT and CLAU had similar low rates of recurrence (2 recurrences in 43 patients and 4 in 46, respectively), and the RR was not significant (RR, 1.88; 95% CI, 0.37–9.5; P = .45). Since sample sizes in the included studies were small, failure to detect a significant difference between LSCT and AMT or MMC could be the result of type II error. Limbal stem cell transplantation as an adjuvant to excision is a relatively safe procedure as long-term complications were rare (< 2%).
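The relative-risk figures reported above can be checked directly from the recurrence counts. The sketch below (Python) is a worked verification, not the review's own analysis; it assumes the conventional Katz log method for the 95% confidence interval, so small rounding differences from the published values are expected.

# Worked check of the CLAU vs AMT comparison reported above:
# 4 recurrences among 46 CLAU patients vs 2 among 43 AMT patients.
# ASSUMPTION: the review does not state its CI method; the standard
# Katz log-scale interval is used here.
import math

def relative_risk(events1, n1, events2, n2):
    """RR of group 1 relative to group 2, with a 95% CI on the log scale."""
    rr = (events1 / n1) / (events2 / n2)
    se = math.sqrt(1/events1 - 1/n1 + 1/events2 - 1/n2)  # SE of ln(RR)
    lo = math.exp(math.log(rr) - 1.96 * se)
    hi = math.exp(math.log(rr) + 1.96 * se)
    return rr, lo, hi

rr, lo, hi = relative_risk(4, 46, 2, 43)
print(f"RR = {rr:.2f} (95% CI, {lo:.2f}-{hi:.2f})")
# RR = 1.87 (95% CI, 0.36-9.69); published: RR 1.88 (95% CI, 0.37-9.5)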
GRADE Quality of Evidence
Nonpterygium Limbal Stem Cell Deficiency
The evidence for the analyses related to nonpterygium LSCD was based on 3 prospective and 7 retrospective case series. Thus, the GRADE quality of evidence is very low, and any estimate of effect is very uncertain.
Pterygium
The analyses examining LSCT as an adjuvant treatment option for pterygium were based on 6 RCTs. The quality of evidence for the overall body of evidence for each treatment option comparison was assessed using the GRADE approach. In each of the comparisons, the quality of evidence was downgraded due to serious or very serious limitations in study quality (individual study quality was assessed using the Jadad scale, and an assessment of allocation concealment and the degree of loss to follow-up), which resulted in low- to moderate-quality GRADE evidence ratings (low-quality evidence for the CLAU and AMT and CLAU and MMC comparisons, and moderate-quality evidence for the CLAU and CAU comparison).
Ontario Health System Impact Analysis
Nonpterygium Limbal Stem Cell Deficiency
Since 1999, Ontario’s out-of-country (OOC) program has approved and reimbursed 8 patients for LSCTs and 1 patient for LSCT consultations. Similarly, most Canadian provinces have covered OOC or out-of-province LSCTs. Several corneal experts in Ontario have the expertise to perform LSCTs.
As there are no standard guidelines for LSCT, patients who receive transplants OOC may not receive care aligned with the best evidence. To date, many of the patients from Ontario who received OOC LSCTs received concurrent AMTs, and the evidence from this analysis questions the use of this procedure. In addition, 1 patient received a cultured LSCT, a procedure that is considered investigational. Many patients with LSCD have bilateral disease and therefore require allogeneic transplants. These patients will require systemic and topical immunosuppression for several years after the transplant, perhaps indefinitely. Thus, systemic side effects associated with immunosuppression are a potential concern, and patients must be monitored regularly.
Amniotic membrane transplantation is a common addition to many ocular surface reconstruction procedures, including LSCT. Amniotic membranes are recovered from human placentas from planned, uneventful caesarean sections. Before use, serological screening of the donor’s blood should be conducted. However, there is still a theoretical risk of disease transmission associated with this procedure.
Financial Impact
For the patients who were reimbursed for OOC LSCTs, the average cost of LSCT per eye was $18,735.20 Cdn (range, $8,219.54–$33,933.32). However, the actual cost per patient is much higher as these costs do not include consultations and follow-up visits, multiple LSCTs, and any additional procedures (e.g., corneal transplants) received during the course of treatment OOC. When these additional costs were considered, the average cost per patient was $57,583 Cdn (range, $8,219.54–$130,628.20).
The estimated average total cost per patient for performing LSCT in Ontario is $2,291.48 Cdn (range, $951.48–$4,538.48) including hospital and physician fees. This cost is based on the assumption that LSCT is technically similar to a corneal transplant, an assumption which needs to be verified. The cost does not include corneal transplantations, which some proportion of patients receiving a LSCT will require within several months of the limbal transplant.
Pterygium
Pterygium recurrence rates after surgical excision are high, ranging from 24% to 89%. However, according to clinical experts, the rate of recurrence is low in Ontario. While there is evidence that the prevalence of pterygium is higher in the “pterygium belt,” there was no evidence to suggest different recurrence rates or disease severity by location or climate.
Conclusions
Nonpterygium Limbal Stem Cell Deficiency
Successful LSCTs result in corneal re-epithelialization and improved vision in patients with LSCD. However, patients who received concurrent AMT had poorer long-term corneal surface improvements. Conjunctival-limbal autologous transplantation is the treatment option of choice, but if it is not possible, living-related or cadaveric allogeneic transplants can be used. The benefits of LSCT outweigh the risks and burdens, as shown in Executive Summary Table 1. According to GRADE, these recommendations are strong with low- to very low-quality evidence.
Benefits, Risks, and Burdens – Nonpterygium Limbal Stem Cell Deficiency
Short- and long-term improvement in corneal surface (stable, normal corneal epithelium and decreased vascularization and opacity)
Improvement in vision (visual acuity and functional vision)
Long-term complications are experienced by 8% to 16% of patients
Risks associated with long-term immunosuppression for recipients of allogeneic grafts
Potential risk of induced LSCD in donor eyes
High cost of treatment (average cost per patient via OOC program is $57,583; estimated cost of procedure in Ontario is $2,291.48)
Costs are expressed in Canadian dollars.
GRADE of recommendation: Strong recommendation, low-quality or very low-quality evidence
Trade-offs: benefits clearly outweigh risks and burdens
Quality of evidence: case series studies
Strength of recommendation: strong, but may change if higher-quality evidence becomes available
Pterygium
Conjunctival-limbal autologous transplantations significantly reduced the risk of pterygium recurrence compared with CAU. No other comparison yielded statistically significant results, but CLAU reduced the risk of recurrence compared with MMC. However, the benefit of LSCT in Ontario is uncertain as the severity and recurrence of pterygium in Ontario is unknown. The complication rates suggest that CLAU is a safe treatment option to prevent the recurrence of pterygium. According to GRADE, given the balance of the benefits, risks, and burdens, the recommendations are very weak with moderate quality evidence, as shown in Executive Summary Table 2.
Benefits, Risks, and Burdens – Pterygium
Reduced recurrence; however, if recurrence is low in Ontario, this benefit might be minimal
Long-term complications rare
Increased cost
GRADE of recommendation: Very weak recommendations, moderate quality evidence.
Trade-offs: uncertainty in the estimates of benefits, risks, and burden; benefits, risks, and burden may be closely balanced
Quality of evidence: RCTs
Strength of recommendation: very weak; other alternatives may be equally reasonable
PMCID: PMC3377549  PMID: 23074512
8.  Biventricular Pacing (Cardiac Resynchronization Therapy) 
Executive Summary
Issue
In 2002 (before the establishment of the Ontario Health Technology Advisory Committee), the Medical Advisory Secretariat conducted a health technology policy assessment on biventricular (BiV) pacing, also called cardiac resynchronization therapy (CRT). The goal of treatment with BiV pacing is to improve cardiac output for people in heart failure (HF) with a conduction defect on ECG (wide QRS interval) by synchronizing ventricular contraction. The Medical Advisory Secretariat concluded that there was evidence of short-term (6 months) and longer-term (12 months) effectiveness in terms of cardiac function and quality of life (QoL). More recently, a hospital submitted an application to the Ontario Health Technology Advisory Committee to review CRT, and the Medical Advisory Secretariat subsequently updated its health technology assessment.
Background
Chronic HF results from any structural or functional cardiac disorder that impairs the ability of the heart to act as a pump. It is estimated that 1% to 5% of the general population (all ages) in Europe have chronic HF. (1;2) About one-half of the patients with HF are women, and about 40% of men and 60% of women with this condition are aged older than 75 years.
The incidence (i.e., the number of new cases in a specified period) of chronic HF is age dependent: from 1 to 5 per 1,000 people each year in the total population, to as high as 30 to 40 per 1,000 people each year in those aged 75 years and older. Hence, in an aging society, the prevalence (i.e., the number of people with a given disease or condition at any time) of HF is increasing, despite a reduction in cardiovascular mortality.
A recent study revealed that 28,702 patients were hospitalized for first-time HF in Ontario between April 1994 and March 1997. (3) Women comprised 51% of the cohort. Eighty-five percent were aged 65 years or older, and 58% were aged 75 years or older.
Patients with chronic HF experience shortness of breath, a limited capacity for exercise, and high rates of hospitalization and rehospitalization, and they die prematurely. (2;4) The New York Heart Association (NYHA) has provided a commonly used functional classification for the severity of HF (2;5):
Class I: No limitation of physical activity. No symptoms with ordinary exertion.
Class II: Slight limitations of physical activity. Ordinary activity causes symptoms.
Class III: Marked limitation of physical activity. Less than ordinary activity causes symptoms. Asymptomatic at rest.
Class IV: Inability to carry out any physical activity without discomfort. Symptoms at rest.
The National Heart, Lung, and Blood Institute estimates that 35% of patients with HF are in functional NYHA class I; 35% are in class II; 25%, class III; and 5%, class IV. (5) Surveys (2) suggest that from 5% to 15% of patients with HF have persistent severe symptoms, and that the remainder of patients with HF is evenly divided between those with mild and moderately severe symptoms.
Overall, patients with chronic, stable HF have an annual mortality rate of about 10%. (2) One-third of patients with new-onset HF will die within 6 months of diagnosis. These patients do not survive to enter the pool of those with “chronic” HF. About 60% of patients with incident HF will die within 3 years, and there is limited evidence that the overall prognosis has improved in the last 15 years.
To date, the diagnosis and management of chronic HF has concentrated on patients with the clinical syndrome of HF accompanied by severe left ventricular systolic dysfunction. Major changes in treatment have resulted from a better understanding of the pathophysiology of HF and the results of large clinical trials. Treatment for chronic HF includes lifestyle management, drugs, cardiac surgery, or implantable pacemakers and defibrillators. Despite pharmacologic advances, which include diuretics, angiotensin-converting enzyme inhibitors, beta-blockers, spironolactone, and digoxin, many patients remain symptomatic on maximally tolerated doses.
The Technology
Owing to the limitations of drug therapy, cardiac transplantation and device therapies have been used to try to improve QoL and survival of patients with chronic HF. Ventricular pacing is an emerging treatment option for patients with severe HF that does not respond well to medical therapy. Traditionally, indications for pacing include bradyarrhythmia, sick sinus syndrome, atrioventricular block, and other indications, including combined sick sinus syndrome with atrioventricular block and neurocardiogenic syncope. Recently, BiV pacing as a new, adjuvant therapy for patients with chronic HF and mechanical dyssynchrony has been investigated. Ventricular dysfunction is a sign of HF; and, if associated with severe intraventricular conduction delay, it can cause dyssynchronous ventricular contractions resulting in decreased ventricular filling. The therapeutic intent is to activate both ventricles simultaneously, thereby improving the mechanical efficiency of the ventricles.
About 30% of patients with chronic HF have intraventricular conduction defects. (6) These conduction abnormalities progress over time and lead to discoordinated contraction of an already hemodynamically compromised ventricle. Intraventricular conduction delay has been associated with clinical instability and an increased risk of death in patients with HF. (7) Hence, BiV pacing, which involves pacing the left and right ventricles simultaneously, may provide a more coordinated pattern of ventricular contraction and thereby potentially reduce QRS duration and intraventricular and interventricular asynchrony. People with advanced chronic HF, a wide QRS complex (i.e., the portion of the electrocardiogram comprising the Q, R, and S waves, together representing ventricular depolarization), a low left ventricular ejection fraction, contraction dyssynchrony in a viable myocardium, and normal sinus rhythm are the target patient group for BiV pacing. One-half of all deaths in HF patients are sudden, and the mode of death is arrhythmic in most cases. Internal cardioverter defibrillators (ICDs) combined with BiV pacemakers are therefore being increasingly considered for patients with HF who are at high risk of sudden death.
Current Implantation Technique for Cardiac Resynchronization
Conventional dual-chamber pacemakers have only 2 leads: 1 placed in the right atrium and the other in the right ventricle. The technique used for BiV pacemaker implantation also uses right atrial and ventricular pacing leads, in addition to a left ventricular lead advanced through the coronary sinus into a vein that runs along the ventricular free wall. This permits simultaneous pacing of both ventricles to allow resynchronization of the left ventricular septum and free wall.
Mode of Operation
Permanent pacing systems consist of an implantable pulse generator that contains a battery and electronic circuitry, together with 1 (single-chamber pacemaker) or 2 (dual-chamber pacemaker) leads. Leads conduct intrinsic atrial or ventricular signals to the sensing circuitry and deliver the pulse generator charge to the myocardium (muscle of the heart).
Complications of Biventricular Pacemaker Implantation
The complications that may arise when a BiV pacemaker is implanted are similar to those that occur with standard pacemaker implantation, including pneumothorax, perforation of the great vessels or the myocardium, air embolus, infection, bleeding, and arrhythmias. Moreover, left ventricular pacing through the coronary sinus can be associated with rupture of the sinus as another complication.
Conclusion of 2003 Review of Biventricular Pacemakers by the Medical Advisory Secretariat
The randomized controlled trials (RCTs) retrieved by the Medical Advisory Secretariat analyzed patients with chronic HF who were assessed for up to 6 months. Other studies have been prospective but nonrandomized, not double-blinded, uncontrolled, and/or have had a limited or uncalculated sample size. Short-term studies have focused on acute hemodynamic analyses. The authors of the RCTs reported improved cardiac function and QoL up to 6 months after BiV pacemaker implantation; therefore, there is level 1 evidence that patients in ventricular dyssynchrony who remain symptomatic after medication might benefit from this technology. Based on evidence made available to the Medical Advisory Secretariat by a manufacturer, (8) it appears that these 6-month improvements are maintained at 12-month follow-up.
To date, however, there is insufficient evidence to support the routine use of combined ICD/BiV devices in patients with chronic HF with prolonged QRS intervals.
Summary of Updated Findings Since the 2003 Review
Since the Medical Advisory Secretariat’s review in 2003 of biventricular pacemakers, 2 large RCTs have been published: COMPANION (9) and CARE-HF. (10) The characteristics of each trial are shown in Table 1. The COMPANION trial had a number of major methodological limitations compared with the CARE-HF trial.
Characteristics of the COMPANION and CARE-HF Trials*
COMPANION; (9) CARE-HF. (10)
BiV indicates biventricular; ICD, implantable cardioverter defibrillator; EF, ejection fraction; QRS, the interval representing the Q, R and S waves on an electrocardiogram; FDA, United States Food and Drug Administration.
Overall, CARE-HF showed that BiV pacing significantly improves mortality, QoL, and NYHA class in patients with severe HF and a wide QRS interval (Tables 2 and 3).
CARE-HF Results: Primary and Secondary Endpoints*
BiV indicates biventricular; NNT, number needed to treat.
Cleland JGF, Daubert J, Erdmann E, Freemantle N, Gras D, Kappenberger L et al. The effect of cardiac resynchronization on morbidity and mortality in heart failure (CARE-HF). New England Journal of Medicine 2005; 352:1539-1549; Copyright 2005 Massachusetts Medical Society. All rights reserved. (10)
CARE-HF Results: NYHA Class and Quality of Life Scores*
Minnesota Living with Heart Failure scores range from 0 to 105; higher scores reflect poorer QoL.
European Quality of Life–5 Dimensions scores range from -0.594 to 1.000; 1.000 indicates fully healthy; 0, dead
Cleland JGF, Daubert J, Erdmann E, Freemantle N, Gras D, Kappenberger L et al. The effect of cardiac resynchronization on morbidity and mortality in heart failure (CARE-HF). New England Journal of Medicine 2005; 352:1539-1549; Copyright 2005 Massachusetts Medical Society. All rights reserved. (10)
GRADE Quality of Evidence
The quality of these 3 trials was examined according to the GRADE Working Group criteria, (12) (Table 4).
Quality refers to criteria such as the adequacy of allocation concealment, blinding, and follow-up.
Consistency refers to the similarity of estimates of effect across studies. If there is an important unexplained inconsistency in the results, confidence in the estimate of effect for that outcome decreases. Differences in the direction of effect, the size of the differences in effect, and the significance of the differences guide the decision about whether important inconsistency exists.
Directness refers to the extent to which the people, interventions, and outcome measures are similar to those of interest. For example, there may be uncertainty about the directness of the evidence if the people of interest are older, sicker, or have more comorbid conditions than do the people in the studies.
As stated by the GRADE Working Group, (12) the following definitions were used in grading the quality of the evidence:
High: Further research is very unlikely to change our confidence in the estimate of effect.
Moderate: Further research is likely to have an important impact on our confidence in the estimate of effect and may change the estimate.
Low: Further research is very likely to have an important impact on our confidence in the estimate of effect and is likely to change the estimate.
Very low: Any estimate of effect is very uncertain.
Quality of Evidence: CARE-HF and COMPANION
Conclusions
Overall, there is evidence that BiV pacemakers are effective for improving mortality, QoL, and functional status in patients with NYHA class III/IV HF, an EF less than 0.35, and a QRS interval greater than 120 ms, who are refractory to drug therapy.
As per the GRADE Working Group, recommendations considered the following 4 main factors:
The tradeoffs, taking into account the estimated size of the effect for the main outcome, the confidence limits around those estimates, and the relative value placed on the outcome
The quality of the evidence (Table 4)
Translation of the evidence into practice in a specific setting, taking into consideration important factors that could be expected to modify the size of the expected effects such as proximity to a hospital or availability of necessary expertise
Uncertainty about the baseline risk for the population of interest
The GRADE Working Group also recommends that incremental costs of health care alternatives should be considered explicitly alongside the expected health benefits and harms. Recommendations rely on judgments about the value of the incremental health benefits in relation to the incremental costs. The last column in Table 5 shows the overall trade-off between benefits and harms and incorporates any risk/uncertainty.
For BiV pacing, the overall GRADE and strength of the recommendation is moderate: the quality of the evidence is moderate/high (because of some uncertainty due to methodological limitations in the study design, e.g., no blinding), but there is also some risk/uncertainty in terms of the estimated prevalence and wide cost-effectiveness estimates (Table 5).
For the combination BiV pacing/ICD, the overall GRADE and strength of the recommendation is weak—the quality of the evidence is low (because of uncertainty due to methodological limitations in the study design), but there is also some risk/uncertainty in terms of the estimated prevalence, high cost, and high budget impact (Table 5). There are indirect, low-quality comparisons of the effectiveness of BiV pacemakers compared with the combination BiV/ICD devices.
A stronger recommendation can be made for BiV pacing alone compared with the combination BiV/ICD device for patients with an EF less than or equal to 0.35, a QRS interval greater than or equal to 120 ms, and NYHA III/IV symptoms refractory to optimal medical therapy (Table 5).
There is moderate/high-quality evidence that BiV pacemakers significantly improve mortality, QoL, and functional status.
There is low-quality evidence that combined BiV/ICD devices significantly improve mortality, QoL, and functional status.
To date, there are no direct comparisons of the effectiveness of BiV pacemakers compared with the combined BiV/ICD devices in terms of mortality, QoL, and functional status.
Overall GRADE and Strength of Recommendation
BiV refers to biventricular; ICD, implantable cardioverter defibrillator; NNT, number needed to treat.
PMCID: PMC3382419  PMID: 23074464
9.  Stenting for Peripheral Artery Disease of the Lower Extremities 
Executive Summary
Background
Objective
In January 2010, the Medical Advisory Secretariat received an application from University Health Network to provide an evidentiary platform on stenting as a management option for peripheral artery disease. The purpose of this health technology assessment is to examine the effectiveness of primary stenting as a management option for peripheral artery disease of the lower extremities.
Clinical Need: Condition and Target Population
Peripheral artery disease (PAD) is a progressive disease occurring as a result of plaque accumulation (atherosclerosis) in the arterial system that carries blood to the extremities (arms and legs) as well as vital organs. The vessels that are most affected by PAD are the arteries of the lower extremities, the aorta, the visceral arterial branches, the carotid arteries and the arteries of the upper limbs. In the lower extremities, PAD affects three major arterial segments: i) aortic-iliac, ii) femoro-popliteal (FP) and iii) infra-popliteal (primarily tibial) arteries. The disease is commonly classified clinically as asymptomatic, claudication, rest pain and critical ischemia.
Although the prevalence of PAD in Canada is not known, it is estimated that 800,000 Canadians have PAD. The 2007 Trans Atlantic Intersociety Consensus (TASC) II Working Group for the Management of Peripheral Disease estimated the prevalence of PAD in Europe and North America to be 27 million, accounting for 88,000 hospitalizations involving the lower extremities. The reported prevalence of PAD among elderly individuals ranges from 12% to 29%. The National Health and Nutrition Examination Survey (NHANES) estimated the prevalence of PAD to be 14.5% among individuals 70 years of age and over.
Modifiable and non-modifiable risk factors associated with PAD include advanced age, male gender, family history, smoking, diabetes, hypertension and hyperlipidemia. PAD is a strong predictor of myocardial infarction (MI), stroke and cardiovascular death. Annually, approximately 10% of ischemic cardiovascular and cerebrovascular events can be attributed to the progression of PAD. Compared with patients without PAD, the 10-year risk of all-cause mortality is 3-fold higher in patients with PAD, who have a 4 to 5 times greater risk of dying of a cardiovascular event. The risk of coronary heart disease is 6 times greater and increases 15-fold in patients with advanced or severe PAD. Among subjects with diabetes, the risk of PAD is often severe and associated with extensive arterial calcification; in these patients, the risk of PAD increases 2- to 4-fold. The results of the Canadian public survey of knowledge of PAD demonstrated that Canadians are unaware of the morbidity and mortality associated with PAD. Despite its prevalence and cardiovascular risk implications, only 25% of PAD patients are undergoing treatment.
The diagnosis of PAD is difficult as most patients remain asymptomatic for many years. Symptoms do not present until there is at least 50% narrowing of an artery. In the general population, only 10% of persons with PAD have classic symptoms of claudication, 40% do not complain of leg pain, while the remaining 50% have a variety of leg symptoms different from classic claudication. The severity of symptoms depends on the degree of stenosis. The need to intervene is more urgent in patients with limb threatening ischemia as manifested by night pain, rest pain, ischemic ulcers or gangrene. Without successful revascularization those with critical ischemia have a limb loss (amputation) rate of 80-90% in one year.
Diagnosis of PAD is generally non-invasive and can be performed in physicians’ offices or on an outpatient basis in a hospital. The most common diagnostic procedures include: 1) the Ankle Brachial Index (ABI), a ratio of the blood pressure readings between the highest ankle pressure and the highest brachial (arm) pressure; and 2) Doppler ultrasonography, a diagnostic imaging procedure that uses a combination of ultrasound and wave form recordings to evaluate arterial flow in blood vessels. The value of the ABI can provide an assessment of the severity of the disease. Other non-invasive imaging techniques include Computed Tomography (CT) and Magnetic Resonance Angiography (MRA). Definitive diagnosis of PAD can be made by an invasive catheter-based angiography procedure, which shows the roadmap of the arteries, depicting the exact location and length of the stenosis/occlusion. Angiography is the standard method against which all other imaging procedures are compared for accuracy.
More than 70% of patients diagnosed with PAD remain stable or improve with conservative management comprising pharmacologic agents and lifestyle modifications. Significant PAD symptoms are well known to negatively influence an individual’s quality of life. For those who do not improve, revascularization methods, either invasive or non-invasive, can be used to restore peripheral circulation.
Technology Under Review
A stent is a wire mesh “scaffold” that is permanently implanted in the artery to keep the artery open; stenting can be combined with angioplasty to treat PAD. There are two types of stents, i) balloon-expandable and ii) self-expandable, available in varying lengths. The former uses an angioplasty balloon to expand and set the stent within the arterial segment. Recently, drug-eluting stents have been developed; these stents release small amounts of medication intended to reduce neointimal hyperplasia, which can cause re-stenosis at the stent site. Endovascular stenting avoids the problems of early elastic recoil, residual stenosis, and flow-limiting dissection after balloon angioplasty.
Research Questions
In individuals with PAD of the lower extremities (superficial femoral artery, infra-popliteal, crural and iliac artery stenosis or occlusion), is primary stenting more effective than percutaneous transluminal angioplasty (PTA) in improving patency?
In individuals with PAD of the lower extremities (superficial femoral artery, infra-popliteal, crural and iliac artery stenosis or occlusion), does primary stenting provide immediate success compared to PTA?
In individuals with PAD of the lower extremities (superficial femoral artery, infra-popliteal, crural and iliac artery stenosis or occlusion), is primary stenting associated with less complications compared to PTA?
In individuals with PAD of the lower extremities (superficial femoral artery, infra-popliteal, crural and iliac artery stenosis or occlusion), does primary stenting compared to PTA reduce the rate of re-intervention?
In individuals with PAD of the lower extremities (superficial femoral artery, infra-popliteal, crural and iliac artery stenosis or occlusion) is primary stenting more effective than PTA in improving clinical and hemodynamic success?
Are drug eluting stents more effective than bare stents in improving patency, reducing rates of re-interventions or complications?
Research Methods
Literature Search
A literature search was performed on February 2, 2010 using OVID MEDLINE, MEDLINE In-Process and Other Non-Indexed Citations, OVID EMBASE, the Cochrane Library, and the International Agency for Health Technology Assessment (INAHTA). Abstracts were reviewed by a single reviewer and, for those studies meeting the eligibility criteria, full-text articles were obtained. Reference lists were also examined for any additional relevant studies not identified through the search. The quality of evidence was assessed as high, moderate, low or very low according to GRADE methodology.
Inclusion Criteria
English language full-reports from 1950 to January Week 3, 2010
Comparative randomized controlled trials (RCTs), systematic reviews and meta-analyses of RCTs
Proven diagnosis of PAD of the lower extremities in all patients.
Adult patients at least 18 years of age.
Stent as at least one treatment arm.
Patency, re-stenosis, re-intervention, technical success, hemodynamic (ABI) and clinical improvement and complications as at least an outcome.
Exclusion Criteria
Non-randomized studies
Observational studies (cohort or retrospective studies) and case report
Feasibility studies
Studies that have evaluated stent but not as a primary intervention
Outcomes of Interest
The primary outcome measure was patency. Secondary measures included technical success, re-intervention, complications, hemodynamic (ankle brachial pressure index, treadmill walking distance) and clinical success or improvement according to Rutherford scale. It was anticipated, a priori, that there would be substantial differences among trials regarding the method of examination and definitions of patency or re-stenosis. Where studies reported only re-stenosis rates, patency rates were calculated as 1 minus re-stenosis rates.
Statistical Analysis
Odds ratios (for binary outcomes) or mean differences (for continuous outcomes) with 95% confidence intervals (CI) were calculated for each endpoint. An intention-to-treat (ITT) principle was used, with the total number of patients randomized to each study arm as the denominator for each proportion. Sensitivity analysis was performed using a per-protocol approach. A pooled odds ratio (POR) or pooled mean difference for each endpoint was then calculated across all trials reporting that endpoint using a fixed-effects model. PORs were calculated for comparisons of primary stenting versus PTA or other alternative procedures. The level of significance was set at alpha = 0.05. Heterogeneity was assessed using the chi-square test, I2, and visual inspection of forest plots. If heterogeneity was encountered within groups (P < 0.10), a random-effects model was used. All statistical analyses were performed using RevMan 5. Where sufficient data were available, these analyses were repeated within subgroups of patients defined by time of outcome assessment to evaluate the sustainability of treatment benefit. Results were pooled based on the diseased artery and stent type.
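As a concrete illustration of the pooling step described above, the sketch below implements inverse-variance (Woolf) fixed-effects pooling of odds ratios with a Cochran's Q/I2 heterogeneity check. It is a minimal sketch only: the review itself used RevMan 5, and the 2x2 tables in the example are hypothetical.

    import math

    def pool_odds_ratios(tables):
        """Fixed-effects (inverse-variance) pooling of odds ratios.
        Each table is (events_tx, no_events_tx, events_ctl, no_events_ctl)."""
        weights, log_ors = [], []
        for a, b, c, d in tables:
            log_ors.append(math.log((a * d) / (b * c)))
            var = 1/a + 1/b + 1/c + 1/d  # Woolf variance of the log OR
            weights.append(1 / var)
        pooled = sum(w * lo for w, lo in zip(weights, log_ors)) / sum(weights)
        se = math.sqrt(1 / sum(weights))
        ci = (math.exp(pooled - 1.96 * se), math.exp(pooled + 1.96 * se))
        q = sum(w * (lo - pooled) ** 2 for w, lo in zip(weights, log_ors))
        i2 = max(0.0, 1 - (len(tables) - 1) / q) * 100 if q > 0 else 0.0
        return math.exp(pooled), ci, i2

    # Hypothetical example: two trials of primary stenting vs PTA.
    print(pool_odds_ratios([(10, 40, 15, 35), (8, 52, 12, 48)]))

A high I2 (or a heterogeneity P < 0.10, as above) would prompt a switch to a random-effects model.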
Summary of Findings
Balloon-expandable stents vs PTA in superficial femoral artery disease
Based on a moderate quality of evidence, there is no significant difference in patency between primary stenting using balloon-expandable bare metal stents and PTA at 6, 12 and 24 months in patients with superficial femoral artery disease. The pooled ORs for patency and their corresponding 95% CIs are: 6 months 1.26 (0.74, 2.13); 12 months 0.95 (0.66, 1.38); and 24 months 0.72 (0.34, 1.55).
There is no significant difference in clinical improvement, re-interventions, peri- and postoperative complications, mortality, and amputations between primary stenting using balloon-expandable bare stents and PTA in patients with superficial femoral artery disease. The pooled ORs and their corresponding 95% CIs are: clinical improvement 0.85 (0.50, 1.42); ankle brachial index 0.01 (-0.02, 0.04); re-intervention 0.83 (0.26, 2.65); complications 0.73 (0.43, 1.22); all-cause mortality 1.08 (0.59, 1.97); and amputation rates 0.41 (0.14, 1.18).
Self-expandable stents vs PTA in superficial femoral artery disease
Based on a moderate quality of evidence, primary stenting using self-expandable bare metal stents is associated with significant improvement in patency at 6, 12 and 24 months in patients with superficial femoral artery disease. The pooled ORs for patency and their corresponding 95% CIs are: 6 months 2.35 (1.06, 5.23); 12 months 1.54 (1.01, 2.35); and 24 months 2.18 (1.00, 4.78). However, the benefit of primary stenting is not observed for clinical improvement, re-interventions, peri- and postoperative complications, mortality, and amputation in patients with superficial femoral artery disease. The pooled ORs and their corresponding 95% CIs are: clinical improvement 0.61 (0.37, 1.01); ankle brachial index 0.01 (-0.06, 0.08); re-intervention 0.60 (0.36, 1.02); complications 1.60 (0.53, 4.85); all-cause mortality 3.84 (0.74, 19.22); and amputation rates 1.96 (0.20, 18.86).
Balloon expandable stents vs PTA in iliac artery occlusive disease
Based on moderate quality of evidence, despite immediate technical success (OR 12.23; 95% CI 7.17, 20.88), primary stenting is not associated with significant improvement in patency, clinical status, treadmill walking distance, or QoL, or with reductions in re-intervention, complications, cardiovascular events, all-cause mortality, or amputation rates in patients with intermittent claudication caused by iliac artery occlusive disease. The pooled ORs and their corresponding 95% CIs are: patency 1.03 (0.56, 1.87); clinical improvement 1.08 (0.60, 1.94); walking distance 3.00 (-12.96, 18.96); re-intervention 1.16 (0.71, 1.90); complications 0.56 (0.20, 1.53); all-cause mortality 0.89 (0.47, 1.71); QoL 0.40 (-4.42, 5.52); cardiovascular events 1.16 (0.56, 2.40); and amputation rates 0.37 (0.11, 1.23). To date, no RCTs have evaluated self-expandable stents for common or external iliac artery stenosis or occlusion.
Drug-eluting stent vs balloon-expandable bare metal stents in crural arteries
Based on a very low quality of evidence, at 6 months of follow-up, sirolimus drug-eluting stents are associated with a reduction in target vessel revascularization (TVR) and re-stenosis rates compared with balloon-expandable bare metal stents in patients with atherosclerotic lesions of the crural (tibial) arteries. The ORs and their corresponding 95% CIs are: re-stenosis 0.09 (0.03, 0.28) and TVR 0.15 (0.05, 0.47). Both types of stents offer similar immediate success. Limitations of this study include a short follow-up period, a small sample, and no assessment of mortality as an outcome. Further research is needed to confirm the effectiveness and safety of drug-eluting stents in these vessels.
PMCID: PMC3377569  PMID: 23074395
10.  Endovascular Radiofrequency Ablation for Varicose Veins 
Executive Summary
Objective
The objective of the MAS evidence review was to conduct a systematic review of the available evidence on the safety, effectiveness, durability and cost–effectiveness of endovascular radiofrequency ablation (RFA) for the treatment of primary symptomatic varicose veins.
Background
The Ontario Health Technology Advisory Committee (OHTAC) met on August 26th, 2010 to review the safety, effectiveness, durability, and cost-effectiveness of RFA for the treatment of primary symptomatic varicose veins based on an evidence-based review by the Medical Advisory Secretariat (MAS).
Clinical Condition
Varicose veins (VV) are tortuous, twisted, or elongated veins. They can result from inherited valve dysfunction or decreased vein elasticity (primary venous reflux), or from valve damage caused by prior thrombotic events (secondary venous reflux). The end result is pooling of blood in the veins, increased venous pressure, and subsequent vein enlargement. As a result of high venous pressure, branch vessels balloon out, leading to varicosities (varicose veins).
Symptoms typically affect the lower extremities and include (but are not limited to) aching, swelling, throbbing, night cramps, restless legs, leg fatigue, itching, and burning. Left untreated, venous reflux tends to be progressive, often leading to chronic venous insufficiency (CVI). A number of complications are associated with untreated venous reflux, including superficial thrombophlebitis, variceal rupture, and haemorrhage. CVI often results in chronic skin changes referred to as stasis dermatitis, which comprises a spectrum of cutaneous abnormalities including edema, hyperpigmentation, eczema, lipodermatosclerosis, and stasis ulceration. Ulceration represents the disease end point for severe CVI. CVI is associated with a reduced quality of life, particularly in relation to pain, physical function, and mobility. In severe cases (VV with ulcers), QOL has been rated as bad as or worse than that in other chronic diseases such as back pain and arthritis.
Lower limb VV is a very common disease of adults, estimated to be the 7th most common reason for physician referral in the US. There is a very strong familial predisposition: the risk in offspring is 90% if both parents are affected, 45% (25% for boys, 62% for girls) if one parent is affected, and 20% when neither is affected. The prevalence of VV worldwide ranges from 5% to 15% among men and 3% to 29% among women, varying with the age, gender, and ethnicity of the study population, the survey methods, and the disease definition and measurement. The annual incidence of VV estimated from the Framingham Study was 2.6% among women and 1.9% among men and did not vary within the age range (40-89 years) studied.
Approximately 1% of the adult population has a stasis ulcer of venous origin at any one time, with 4% at risk. The majority of leg ulcer patients are elderly with simple superficial vein reflux. Stasis ulcers are often protracted medical problems that can last for several years and, despite effective compression therapy and multilayer bandaging, are associated with high recurrence rates. Recent trials involving surgical treatment of superficial vein reflux have resulted in healing and significantly reduced recurrence rates.
Endovascular Radiofrequency Ablation for Varicose Veins
RFA is an image-guided minimally invasive treatment alternative to surgical stripping of superficial venous reflux. RFA does not require an operating room or general anaesthesia and has been performed in an outpatient setting by a variety of medical specialties including surgeons and interventional radiologists. Rather than surgically removing the vein, RFA works by destroying or ablating the refluxing vein segment using thermal energy delivered through a radiofrequency catheter.
Prior to performing RFA, color-flow Doppler ultrasonography is used to confirm and map all areas of venous reflux to devise a safe and effective treatment plan. The RFA procedure involves the introduction of a guide wire into the target vein under ultrasound guidance, followed by the insertion of an introducer sheath through which the RFA catheter is advanced. Once satisfactory positioning has been confirmed with ultrasound, a tumescent anaesthetic solution is injected into the soft tissue surrounding the target vein along its entire length. This serves to anaesthetize the vein, to insulate adjacent structures, including nerves and skin, from heat damage, and to compress the vein, optimizing contact of the vessel wall with the electrodes or expanded prongs of the RF device. The RF generator is then activated and the catheter is slowly pulled along the length of the vein. At the end of the procedure, hemostasis is achieved by applying pressure to the vein entry point.
Compression stockings and bandages are applied after the procedure to reduce the risk of venous thromboembolism and to reduce postoperative bruising and tenderness. Patients are encouraged to walk immediately after the procedure. Follow-up protocols vary, with most patients returning 1 to 3 weeks later for an initial follow-up visit. At this point, the initial clinical result is assessed and occlusion of the treated vessels is confirmed with ultrasound. Patients often have a second follow-up visit 1 to 3 months after RFA, at which time clinical evaluation and ultrasound are repeated. If required, additional procedures such as phlebectomy or sclerotherapy may be performed during the RFA procedure or at any follow-up visit.
Regulatory Status
The Closure System® radiofrequency generator for endovascular thermal ablation of varicose veins was approved by Health Canada as a class 3 device in March 2005, registered under medical device license 67865. The ClosureFast RFA intravascular catheter was approved by Health Canada in November 2007, registered under medical device license 16574. The Closure System® also has regulatory approvals in Australia, Europe (CE Mark), and the United States (FDA clearance). In Ontario, RFA is not an insured service and is currently being introduced in private clinics.
Methods
Literature Search
The MAS evidence-based review was performed to support public financing decisions. The literature search was performed on March 9, 2010 using standard bibliographic databases for studies published up until March 2010.
Inclusion Criteria
English language full-reports and human studies
Original reports with defined study methodology
Reports including standardized measurements on outcome events such as technical success, safety, effectiveness, durability, quality of life or patient satisfaction
Reports involving RFA for varicose veins (great or small saphenous veins)
Randomized controlled trials (RCTs), systematic reviews and meta-analyses
Cohort and controlled clinical studies involving ≥ 1 month ultrasound imaging follow-up
Exclusion Criteria
Non-systematic reviews, letters, comments and editorials
Reports not involving outcome events such as safety, effectiveness, durability, or patient satisfaction following an intervention with RFA
Reports not involving interventions with RFA for varicose veins
Pilot studies or studies with small samples (< 50 subjects)
Summary of Findings
The MAS evidence search on the safety and effectiveness of endovascular RFA of VV identified the following evidence: three HTAs, nine systematic reviews, eight randomized controlled trials (five comparing RFA to surgery and three comparing RFA to ELT), five controlled clinical trials, and fourteen cohort case series (four of which were multicenter registry studies).
The majority (12⁄14) of the cohort studies (3,664 patients) evaluating RFA for VV involved treatment with first generation RFA catheters, and the great saphenous vein (GSV) was the target vessel in all studies. Major adverse events were uncommonly reported, and the overall pooled major adverse event rate extracted from the cohort studies was 2.9% (105⁄3,664). Imaging-defined treatment effectiveness (vein closure rates) was variable, ranging from 68% to 96% at postoperative follow-up. Vein ablation rates at 6-month follow-up were reported in four studies, with rates close to 90%. Only one study reported vein closure rates at 2 years, and only for a minority of the eligible cases. The two studies reporting on RFA with the more efficient second generation catheters involved better follow-up and reported higher ablation rates, close to 100% at 6-month follow-up, with no major adverse events. A large prospective registry trial that recruited over 1,000 patients at thirty-four largely European centers reported on treatment success in six overlapping reports on selected patient subgroups at various follow-up points up to 5 years. However, the follow-up of eligible recruited patients at all time points was low, resulting in inadequate estimates of longer term treatment efficacy.
The overall level of evidence of randomized trials comparing RFA with surgical ligation and vein stripping (n = 5) was graded as low to moderate. In all trials, RFA was performed with first generation catheters in the operating theatre under general anaesthesia, usually without tumescent anaesthesia. Procedure times were significantly longer for RFA than for surgery. Recovery was significantly quicker after RFA, both in return to usual activity and in return to work, with on average one week less work loss. Major adverse events were more frequent after surgery (1.8%, n = 4) than after RFA (0.4%, n = 1), but not significantly so. Treatment effectiveness, measured by imaging-defined vein absence or vein closure, was comparable in the two treatment groups. Significant improvements in vein symptoms and quality of life over baseline were reported for both treatment groups. Improvements in these outcomes were significantly greater in the RFA group than the surgery group in the peri-operative period but not in later follow-up. Follow-up in these trials was inadequate to evaluate longer term recurrence for either treatment. Patient satisfaction was reported to be high for both treatments but was higher for RFA.
The studies comparing endovascular treatment approaches for VV (RFA and ELT) were more limited. Three RCTs compared RFA (two with the second generation catheter) with ELT but mainly focused on peri-procedural outcomes such as pain, complications, and recovery. Vein ablation rates were not evaluated in the trials, except in one small trial involving bilateral VV. Pain responses in patients undergoing ablation were extremely variable; up to 2 weeks, mean pain levels were significantly lower with RFA than with ELT, but differences were not significant at one month. Recovery, evaluated as return to usual activity or return to work, was similar in the treatment groups. Vein symptoms and QOL improved in both groups, with improvements significantly greater in the RFA group than the ELT group at 2 weeks but not at one month. Vein ablation rates were evaluated in several controlled clinical studies comparing the treatments between centers, or within centers between individuals or over time. Comparisons in these studies were inconsistent, with vein ablation rates for RFA reported to be similar to, higher than, and lower than those with ELT.
Economic Analysis
RFA and surgical vein stripping, the main comparator reimbursed by the public system, are comparable in clinical benefits. Hence, a cost analysis was conducted to identify the differences in resources and costs between the two procedures, and a budget impact analysis (BIA) was conducted to project costs over a 5-year period in the province of Ontario. The target population of this economic analysis was patients with symptomatic varicose veins, and the primary analytic perspective was that of the Ministry of Health and Long-Term Care.
The average case cost (based on Ontario hospital costs and medical resources) for surgical vein stripping was estimated to be $1,799. To calculate a procedural cost for RFA, it was assumed that the hospital cost and physician labour fees, excluding anaesthesia and surgical assistance, were the same as for vein stripping surgery. The manufacturer also provided details on the generator, with a capital cost of $27,500 and a lifespan of 5 years, and on the disposables (catheter, sheath, guidewire), with a cost of $673 per case. The average case cost for RFA was therefore estimated to be $1,356. One-way sensitivity analysis was also conducted, with the hospital cost of RFA varied to 60% of that of vein stripping surgery (average cost per case = $627.08), to calculate the impact to the province.
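The per-case cost build-up for RFA can be made explicit. The sketch below is illustrative only: cases_per_year is an assumed throughput for amortizing the generator, since the report does not state the volume used, and shared_cost stands in for the hospital and physician costs assumed identical to vein stripping.

    def rfa_cost_per_case(shared_cost, disposables=673.0,
                          generator_capital=27_500.0, lifespan_years=5,
                          cases_per_year=100):
        """Illustrative per-case RFA cost: shared hospital/physician cost
        (excluding anaesthesia and surgical assistance) plus disposables
        plus the generator capital cost amortized over its lifespan.
        cases_per_year is a hypothetical assumption, not from the report."""
        amortized_generator = generator_capital / (lifespan_years * cases_per_year)
        return shared_cost + disposables + amortized_generator

Under these assumptions the disposables dominate the incremental cost; at 100 cases per year the amortized generator adds only $55 per case.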
Historical volumes of vein stripping surgeries in Ontario were used to project surgeries linearly up to five years into the future. Volumes for RFA and ELT were calculated from their expected capture of the surgery market, informed by clinical expert opinion and by existing private-market data from discussions with the manufacturer. RFA is expected to compete with ELT and capture some of its market. If ELT is reimbursed by the public sector, its numbers will continue to increase from previous private volumes and from share capture of the conventional surgical treatment market; RFA cases will then also increase, since RFA will capture a share of the ELT market. A budget impact to the province was then calculated by multiplying volumes by the cost of the procedure.
RFA is comparable in clinical benefits to vein stripping surgery. It has the extra upfront cost of the generator and a per-case cost for disposables, but it does not require an operating theatre, anaesthetist, or surgical assistant fees. The impact to the province is expected to be $5 million by year 5 with the introduction of the new ELT and RFA image-guided endovascular technologies alongside existing surgery for varicose veins.
Conclusion
The conclusions on the comparative outcomes between endovascular RFA and surgical ligation and saphenous vein stripping and between endovascular RFA and laser ablation for VV treatment are summarized in the table below (ES Table 1).
Outcome comparisons of RFA vs. surgery and RFA vs ELT for varicose veins
ELT refers to endovascular laser ablation; RFA, radiofrequency ablation
The outcomes of the evidence-based review on these treatments for VV based on different perspectives are summarized below:
RFA First versus Second Generation Catheters and Segmental Ablation
Ablation with second generation catheters and segmental ablation offered technical advantages, with improved ease of use and significant decreases in procedure time. RFA with second generation catheters is also no longer restricted to smaller (< 12 mm diameter) saphenous veins.
The safety profile with the new device and method of energy delivery is as good as or improved over that of the first generation device. No major adverse events were reported in two multicenter prospective cohort studies with over 500 patients at 6-month follow-up. Postoperative complications such as bruising and pain were significantly less frequent with second generation RFA catheters than with ELT in two RCTs.
RFA treatment with second generation catheters has ablation rates that are higher than with first generation catheters and more comparable with the consistently high rates of ELT.
Endovascular RFA versus Surgery
RFA has a quicker recovery, attributable to decreased pain and fewer minor complications.
In the short term, RFA was comparable to surgery in treatment effectiveness as assessed by imaging-defined anatomic outcomes such as vein closure, flow, or reflux. Other treatment outcomes such as symptomatic relief and HRQOL were significantly improved in both groups, and between-group differences in the early peri-operative period were likely influenced by pain experiences. Longer term follow-up was inadequate to evaluate recurrence after either treatment.
Patient satisfaction was high after both treatments but was higher for RFA than surgery.
Endovascular RFA versus ELT
RFA causes significantly less postoperative pain than ELT, but differences were not significant when pain was adjusted for analgesic use, and pain differences between groups did not persist at 1 month follow-up.
Treatment effectiveness, measured as symptom relief and QOL improvement, was similar between the endovascular treatments in the short term (within 1 month). Treatment effectiveness measured as imaging-defined vein ablation was not assessed in any RCT (except for bilateral VV disease) and was inconsistently reported in observational trials.
Longer term follow-up was not available to assess recurrence after either treatment.
System Outcomes – RFA Replacing Surgery or Competing with ELT
RFA may offer system advantages in that the treatment can be offered by several medical specialties in outpatient settings and does not require an operating theatre or general anaesthesia. The treatment may result in decanting of patients from the OR, with decreased pre-surgical investigations, demand on anaesthetists’ time, hospital stay, and wait time for VV treatment. It may also provide more reliable outpatient scheduling. Procedure costs may be lower for endovascular approaches than for surgery, but the budget impact may be greater with insurance of RFA because of the transfer of cases from the private market to the public payer system.
Competition between the RFA and ELT endovascular approaches is likely to continue to stimulate innovation and technical changes that advance patient care and result in competitive pricing.
PMCID: PMC3377553  PMID: 23074413
11.  Intervention randomized controlled trials involving wrist and shoulder arthroscopy: a systematic review 
Background
Although arthroscopy of upper extremity joints was initially a diagnostic tool, it is increasingly used for therapeutic interventions. Randomized controlled trials (RCTs) are considered the gold standard for assessing treatment efficacy. We aimed to review the literature for intervention RCTs involving wrist and shoulder arthroscopy.
Methods
We performed a systematic review for RCTs in which at least one arm was an intervention performed through wrist arthroscopy or shoulder arthroscopy. PubMed and Cochrane Library databases were searched up to December 2012. Two researchers reviewed each article and recorded the condition treated, randomization method, number of randomized participants, time of randomization, outcome measures, blinding, and description of dropouts and withdrawals. We used the modified Jadad scale, which considers the randomization method, blinding, and dropouts/withdrawals; scores range from 0 (lowest quality) to 5 (highest quality). The scores for the wrist and shoulder RCTs were compared with the Mann–Whitney test.
Results
The first references to both wrist and shoulder arthroscopy appeared in the late 1970s. The search found 4 wrist arthroscopy intervention RCTs (Kienböck’s disease, dorsal wrist ganglia, volar wrist ganglia, and distal radius fracture; the first 3 compared arthroscopic with open surgery). The median number of participants was 45. The search found 50 shoulder arthroscopy intervention RCTs (rotator cuff tears 22, instability 14, impingement 9, and other conditions 5). Of these, 31 compared different arthroscopic treatments, 12 compared arthroscopic with open treatment, and 7 compared arthroscopic with nonoperative treatment. The median number of participants was 60. The median modified Jadad score for the wrist RCTs was 0.5 (range 0–1) and for the shoulder RCTs 3.0 (range 0–5) (p = 0.012).
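The wrist-versus-shoulder comparison of modified Jadad scores used the Mann–Whitney test. The snippet below reproduces that kind of comparison; the score lists are hypothetical stand-ins consistent with the reported medians and ranges, not the study's actual data.

    from scipy.stats import mannwhitneyu

    wrist_scores = [0, 0, 1, 1]                        # median 0.5, range 0-1
    shoulder_scores = [3, 0, 5, 2, 4, 3, 3, 1, 4, 2]   # median 3, range 0-5
    stat, p = mannwhitneyu(wrist_scores, shoulder_scores, alternative="two-sided")
    print(f"U = {stat}, p = {p:.3f}")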
Conclusion
Despite the increasing use of wrist arthroscopy in the treatment of various wrist disorders, the efficacy of arthroscopically performed wrist interventions has been studied in only 4 randomized studies, compared with 50 randomized studies of significantly higher quality assessing interventions performed through shoulder arthroscopy.
doi:10.1186/1471-2474-15-252
PMCID: PMC4123827  PMID: 25059881
Arthroscopy; Wrist; Shoulder; Randomized trials; Jadad scale; Intervention RCT; Systematic review
12.  Noninvasive Positive Pressure Ventilation for Acute Respiratory Failure Patients With Chronic Obstructive Pulmonary Disease (COPD) 
Executive Summary
In July 2010, the Medical Advisory Secretariat (MAS) began work on a Chronic Obstructive Pulmonary Disease (COPD) evidentiary framework, an evidence-based review of the literature surrounding treatment strategies for patients with COPD. This project emerged from a request by the Health System Strategy Division of the Ministry of Health and Long-Term Care that MAS provide them with an evidentiary platform on the effectiveness and cost-effectiveness of COPD interventions.
After an initial review of health technology assessments and systematic reviews of COPD literature, and consultation with experts, MAS identified the following topics for analysis: vaccinations (influenza and pneumococcal), smoking cessation, multidisciplinary care, pulmonary rehabilitation, long-term oxygen therapy, noninvasive positive pressure ventilation for acute and chronic respiratory failure, hospital-at-home for acute exacerbations of COPD, and telehealth (including telemonitoring and telephone support). Evidence-based analyses were prepared for each of these topics. For each technology, an economic analysis was also completed where appropriate. In addition, a review of the qualitative literature on patient, caregiver, and provider perspectives on living and dying with COPD was conducted, as were reviews of the qualitative literature on each of the technologies included in these analyses.
The Chronic Obstructive Pulmonary Disease Mega-Analysis series is made up of the following reports, which can be publicly accessed at the MAS website at: http://www.hqontario.ca/en/mas/mas_ohtas_mn.html.
Chronic Obstructive Pulmonary Disease (COPD) Evidentiary Framework
Influenza and Pneumococcal Vaccinations for Patients With Chronic Obstructive Pulmonary Disease (COPD): An Evidence-Based Analysis
Smoking Cessation for Patients With Chronic Obstructive Pulmonary Disease (COPD): An Evidence-Based Analysis
Community-Based Multidisciplinary Care for Patients With Stable Chronic Obstructive Pulmonary Disease (COPD): An Evidence-Based Analysis
Pulmonary Rehabilitation for Patients With Chronic Obstructive Pulmonary Disease (COPD): An Evidence-Based Analysis
Long-term Oxygen Therapy for Patients With Chronic Obstructive Pulmonary Disease (COPD): An Evidence-Based Analysis
Noninvasive Positive Pressure Ventilation for Acute Respiratory Failure Patients With Chronic Obstructive Pulmonary Disease (COPD): An Evidence-Based Analysis
Noninvasive Positive Pressure Ventilation for Chronic Respiratory Failure Patients With Stable Chronic Obstructive Pulmonary Disease (COPD): An Evidence-Based Analysis
Hospital-at-Home Programs for Patients With Acute Exacerbations of Chronic Obstructive Pulmonary Disease (COPD): An Evidence-Based Analysis
Home Telehealth for Patients With Chronic Obstructive Pulmonary Disease (COPD): An Evidence-Based Analysis
Cost-Effectiveness of Interventions for Chronic Obstructive Pulmonary Disease Using an Ontario Policy Model
Experiences of Living and Dying With COPD: A Systematic Review and Synthesis of the Qualitative Empirical Literature
For more information on the qualitative review, please contact Mita Giacomini at: http://fhs.mcmaster.ca/ceb/faculty_member_giacomini.htm.
For more information on the economic analysis, please visit the PATH website: http://www.path-hta.ca/About-Us/Contact-Us.aspx.
The Toronto Health Economics and Technology Assessment (THETA) collaborative has produced an associated report on patient preference for mechanical ventilation. For more information, please visit the THETA website: http://theta.utoronto.ca/static/contact.
Objective
The objective of this evidence-based analysis was to examine the effectiveness, safety, and cost-effectiveness of noninvasive positive pressure ventilation (NPPV) in the following patient populations: patients with acute respiratory failure (ARF) due to acute exacerbations of chronic obstructive pulmonary disease (COPD); weaning of COPD patients from invasive mechanical ventilation (IMV); and prevention of or treatment of recurrent respiratory failure in COPD patients after extubation from IMV.
Clinical Need and Target Population
Acute Hypercapnic Respiratory Failure
Respiratory failure occurs when the respiratory system cannot oxygenate the blood and/or remove carbon dioxide from the blood. It can be either acute or chronic and is classified as either hypoxemic (type I) or hypercapnic (type II) respiratory failure. Acute hypercapnic respiratory failure frequently occurs in COPD patients experiencing acute exacerbations, so this is the focus of this evidence-based analysis. Hypercapnic respiratory failure occurs when the drive to breathe cannot keep pace with demand, typically because of the increased work of breathing in COPD patients.
Technology
There are several treatment options for ARF. Usual medical care (UMC) attempts to facilitate adequate oxygenation and treat the cause of the exacerbation; it typically consists of supplemental oxygen and a variety of medications such as bronchodilators, corticosteroids, and antibiotics. The failure rate of UMC is high, estimated at 10% to 50% of cases.
The alternative is mechanical ventilation, either invasive or noninvasive. Invasive mechanical ventilation involves sedating the patient, creating an artificial airway through endotracheal intubation, and attaching the patient to a ventilator. While this provides airway protection and direct access to drain sputum, it can lead to substantial morbidity, including tracheal injuries and ventilator-associated pneumonia (VAP).
While both positive and negative pressure noninvasive ventilation exists, noninvasive negative pressure ventilation such as the iron lung is no longer in use in Ontario. Noninvasive positive pressure ventilation provides ventilatory support through a facial or nasal mask and reduces inspiratory work. Noninvasive positive pressure ventilation can often be used intermittently for short periods of time to treat respiratory failure, which allows patients to continue to eat, drink, talk, and participate in their own treatment decisions. In addition, patients do not require sedation, airway defence mechanisms and swallowing functions are maintained, trauma to the trachea and larynx are avoided, and the risk for VAP is reduced. Common complications are damage to facial and nasal skin, higher incidence of gastric distension with aspiration risk, sleeping disorders, and conjunctivitis. In addition, NPPV does not allow direct access to the airway to drain secretions and requires patients to cooperate, and due to potential discomfort, compliance and tolerance may be low.
In addition to treating ARF, NPPV can be used to wean patients from IMV through the gradual removal of ventilation support until the patient can breathe spontaneously. Five percent to 30% of patients have difficulty weaning. Tapering levels of ventilatory support to wean patients from IMV can be achieved using IMV or NPPV. The use of NPPV helps to reduce the risk of VAP by shortening the time the patient is intubated.
Following extubation from IMV, ARF may recur, leading to extubation failure and the need for reintubation, which has been associated with increased risk of nosocomial pneumonia and mortality. To avoid these complications, NPPV has been proposed to help prevent ARF recurrence and/or to treat respiratory failure when it recurs, thereby preventing the need for reintubation.
Research Questions
What is the effectiveness, cost-effectiveness, and safety of NPPV for the treatment of acute hypercapnic respiratory failure due to acute exacerbations of COPD compared with
usual medical care, and
invasive mechanical ventilation?
What is the effectiveness, cost-effectiveness, and safety of NPPV compared with IMV in COPD patients after IMV for the following purposes:
weaning COPD patients from IMV,
preventing ARF in COPD patients after extubation from IMV, and
treating ARF in COPD patients after extubation from IMV?
Research Methods
Literature Search
A literature search was performed on December 3, 2010 using OVID MEDLINE, MEDLINE In-Process and Other Non-Indexed Citations, OVID EMBASE, the Cumulative Index to Nursing & Allied Health Literature (CINAHL), Wiley Cochrane, and the Centre for Reviews and Dissemination/International Network of Agencies for Health Technology Assessment (INAHTA) for studies published from January 1, 2004 until December 3, 2010. Abstracts were reviewed by a single reviewer and, for those studies meeting the eligibility criteria, full-text articles were obtained. Reference lists were also examined for any additional relevant studies not identified through the search.
Since there were numerous studies that examined the effectiveness of NPPV for the treatment of ARF due to exacerbations of COPD published before 2004, pre-2004 trials which met the inclusion/exclusion criteria for this evidence-based review were identified by hand-searching reference lists of included studies and systematic reviews.
Inclusion Criteria
English language full-reports;
health technology assessments, systematic reviews, meta-analyses, and randomized controlled trials (RCTs);
studies performed exclusively in patients with a diagnosis of COPD or studies performed with patients with a mix of conditions if results are reported for COPD patients separately;
patient population: (Question 1) patients with acute hypercapnic respiratory failure due to an exacerbation of COPD; (Question 2a) COPD patients being weaned from IMV; (Questions 2b and 2c) COPD patients who have been extubated from IMV.
Exclusion Criteria
< 18 years of age
animal studies
duplicate publications
grey literature
studies examining noninvasive negative pressure ventilation
studies comparing modes of ventilation
studies comparing patient-ventilation interfaces
studies examining outcomes not listed below, such as physiologic effects including heart rate, arterial blood gases, and blood pressure
Outcomes of Interest
mortality
intubation rates
length of stay (intensive care unit [ICU] and hospital)
health-related quality of life
breathlessness
duration of mechanical ventilation
weaning failure
complications
NPPV tolerance and compliance
Statistical Methods
When possible, results were pooled using Review Manager 5 Version 5.1; otherwise, the results were summarized descriptively. Dichotomous data were pooled into relative risks using random-effects models, and continuous data were pooled using weighted mean differences with a random-effects model. Analyses of data from RCTs were done using intention-to-treat protocols; P values < 0.05 were considered significant. A priori subgroup analyses were planned for severity of respiratory failure, location of treatment (ICU or hospital ward), and mode of ventilation, with additional subgroups as needed based on the literature. Post hoc sample size calculations were performed using STATA 10.1.
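For readers unfamiliar with the random-effects pooling mentioned above, the sketch below implements DerSimonian-Laird pooling of relative risks. It is illustrative only: the review used Review Manager 5, and the trial counts in the example are hypothetical.

    import math

    def pool_relative_risks(trials):
        """DerSimonian-Laird random-effects pooling of relative risks.
        Each trial is (events_tx, n_tx, events_ctl, n_ctl)."""
        log_rrs, variances = [], []
        for et, nt, ec, nc in trials:
            log_rrs.append(math.log((et / nt) / (ec / nc)))
            variances.append(1/et - 1/nt + 1/ec - 1/nc)
        w = [1 / v for v in variances]
        fixed = sum(wi * lr for wi, lr in zip(w, log_rrs)) / sum(w)
        q = sum(wi * (lr - fixed) ** 2 for wi, lr in zip(w, log_rrs))
        c = sum(w) - sum(wi * wi for wi in w) / sum(w)
        tau2 = max(0.0, (q - (len(trials) - 1)) / c)  # between-study variance
        w_star = [1 / (v + tau2) for v in variances]
        pooled = sum(wi * lr for wi, lr in zip(w_star, log_rrs)) / sum(w_star)
        se = math.sqrt(1 / sum(w_star))
        return math.exp(pooled), (math.exp(pooled - 1.96 * se),
                                  math.exp(pooled + 1.96 * se))

    # Hypothetical example: three NPPV-plus-UMC vs UMC-alone trials.
    print(pool_relative_risks([(5, 50, 12, 50), (8, 80, 20, 82), (3, 30, 7, 31)]))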
Quality of Evidence
The quality of each included study was assessed taking into consideration allocation concealment, randomization, blinding, power/sample size, withdrawals/dropouts, and intention-to-treat analyses.
The quality of the body of evidence was assessed as high, moderate, low, or very low according to the GRADE Working Group criteria and its standard definitions of quality.
Summary of Findings
NPPV for the Treatment of ARF due to Acute Exacerbations of COPD
NPPV Plus Usual Medical Care Versus Usual Medical Care Alone for First Line Treatment
A total of 1,000 participants were included in 11 RCTs; the sample size ranged from 23 to 342. The mean age of the participants ranged from approximately 60 to 72 years of age. Based on either the Global Initiative for Chronic Obstructive Lung Disease (GOLD) COPD stage criteria or the mean percent predicted forced expiratory volume in 1 second (FEV1), 4 of the studies included people with severe COPD, and there was inadequate information to classify the remaining 7 studies by COPD severity. The severity of the respiratory failure was classified into 4 categories using the study population mean pH level as follows: mild (pH ≥ 7.35), moderate (7.30 ≤ pH < 7.35), severe (7.25 ≤ pH < 7.30), and very severe (pH < 7.25). Based on these categories, 3 studies included patients with mild respiratory failure, 3 with moderate, 4 with severe, and 1 with very severe respiratory failure.
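The pH severity bands above translate directly into a small helper; this is an illustrative transcription of the categories, not code from the analysis itself.

    def respiratory_failure_severity(mean_ph):
        """Severity of respiratory failure by study-population mean pH,
        per the bands defined in this analysis."""
        if mean_ph >= 7.35:
            return "mild"
        if mean_ph >= 7.30:
            return "moderate"
        if mean_ph >= 7.25:
            return "severe"
        return "very severe"

    assert respiratory_failure_severity(7.22) == "very severe"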
The studies were conducted either in the ICU (3 of 11 studies) or general or respiratory wards (8 of 11 studies) in hospitals, with patients in the NPPV group receiving bilevel positive airway pressure (BiPAP) ventilatory support, except in 2 studies, which used pressure support ventilation and volume cycled ventilation, respectively. Patients received ventilation through nasal, facial, or oronasal masks. All studies specified a protocol or schedule for NPPV delivery, but this varied substantially across the studies. For example, some studies restricted the amount of ventilation per day (e.g., 6 hours per day) and the number of days it was offered (e.g., maximum of 3 days); whereas, other studies provided patients with ventilation for as long as they could tolerate it and recommended it for much longer periods of time (e.g., 7 to 10 days). These differences are an important source of clinical heterogeneity between the studies. In addition to NPPV, all patients in the NPPV group also received UMC. Usual medical care varied between the studies, but common medications included supplemental oxygen, bronchodilators, corticosteroids, antibiotics, diuretics, and respiratory stimulators.
The quality of the individual studies varied. Common methodological issues included lack of blinding, lack of allocation concealment, and small sample sizes.
Need for Endotracheal Intubation
Eleven studies reported the need for endotracheal intubation as an outcome. The pooled results showed a significant reduction in the need for endotracheal intubation in the NPPV plus UMC group compared with the UMC alone group (relative risk [RR], 0.38; 95% confidence interval [CI], 0.28−0.50). When subgrouped by severity of respiratory failure, the results remained significant for the mild, severe, and very severe respiratory failure groups.
GRADE: moderate
Inhospital Mortality
Nine studies reported inhospital mortality as an outcome. The pooled results showed a significant reduction in inhospital mortality in the NPPV plus UMC group compared with the UMC group (RR, 0.53; 95% CI, 0.35−0.81). When subgrouped by severity of respiratory failure, the results remained significant for the moderate and severe respiratory failure groups.
GRADE: moderate
Hospital Length of Stay
Eleven studies reported hospital length of stay (LOS) as an outcome. The pooled results showed a significant decrease in the mean length of stay for the NPPV plus UMC group compared with the UMC alone group (weighted mean difference [WMD], −2.68 days; 95% CI, −4.41 to −0.94 days). When subgrouped by severity of respiratory failure, the results remained significant for the mild, severe, and very severe respiratory failure groups.
GRADE: moderate
Complications
Five studies reported complications. Common complications in the NPPV plus UMC group included pneumonia, gastrointestinal disorders or bleeds, skin abrasions, eye irritation, gastric insufflation, and sepsis. Similar complications were observed in the UMC group including pneumonia, sepsis, gastrointestinal disorders or bleeds, pneumothorax, and complicated endotracheal intubations. Many of the more serious complications in both groups occurred in those patients who required endotracheal intubation. Three of the studies compared complications in the NPPV plus UMC and UMC groups. While the data could not be pooled, overall, the NPPV plus UMC group experienced fewer complications than the UMC group.
GRADE: low
Tolerance/Compliance
Eight studies reported patient tolerance or compliance with NPPV as an outcome. NPPV intolerance ranged from 5% to 29%. NPPV tolerance was generally higher for patients with more severe respiratory failure. Compliance with the NPPV protocol was reported by 2 studies, which showed compliance decreases over time, even over short periods such as 3 days.
NPPV Versus IMV for the Treatment of Patients Who Failed Usual Medical Care
A total of 205 participants were included in 2 studies; the sample sizes of these studies were 49 and 156. The mean age of the patients was 71 to 73 years in 1 study, and the median age was 54 to 58 years in the second. Based on either the GOLD COPD stage criteria or the mean percent predicted FEV1, patients in 1 study had very severe COPD; COPD severity could not be classified in the second study. Both studies had study populations with a mean pH less than 7.23, which was classified as very severe respiratory failure in this analysis. One study enrolled patients with ARF due to acute exacerbations of COPD who had failed medical therapy. The patient population was not clearly defined in the second study, and it was not clear whether patients had to have failed medical therapy before entry into the study.
Both studies were conducted in the ICU. Patients in the NPPV group received BiPAP ventilatory support through nasal or full facial masks. Patients in the IMV group received pressure support ventilation.
Common methodological issues included small sample size, lack of blinding, and unclear methods of randomization and allocation concealment. Due to the uncertainty about whether both studies included the same patient population and substantial differences in the direction and significance of the results, the results of the studies were not pooled.
Mortality
Both studies reported ICU mortality. Neither study showed a significant difference in ICU mortality between the NPPV and IMV groups, but 1 study showed a higher mortality rate in the NPPV group (21.7% vs. 11.5%) while the other study showed a lower mortality rate in the NPPV group (5.1% vs. 6.4%). One study reported 1-year mortality and showed a nonsignificant reduction in mortality in the NPPV group compared with the IMV group (26.1% vs. 46.1%).
GRADE: low to very low
Intensive Care Unit Length of Stay
Both studies reported LOS in the ICU. The results were inconsistent. One study showed a statistically significantly shorter LOS in the NPPV group compared with the IMV group (5 ± 1.35 days vs. 9.29 ± 3 days; P < 0.001), whereas the other study showed a nonsignificantly longer LOS in the NPPV group (22 ± 19 days vs. 21 ± 20 days; P = 0.86).
GRADE: very low
Duration of Mechanical Ventilation
Both studies reported the duration of mechanical ventilation (including both invasive and noninvasive ventilation). The results were inconsistent. One study showed a statistically significantly shorter duration of mechanical ventilation in the NPPV group compared with the IMV group (3.92 ± 1.08 days vs. 7.17 ± 2.22 days; P < 0.001), whereas the other study showed a nonsignificantly longer duration in the NPPV group (16 ± 19 days vs. 15 ± 21 days; P = 0.86).
GRADE: very low
Complications
Both studies reported ventilator-associated pneumonia and tracheotomies. Both showed a reduction in ventilator-associated pneumonia in the NPPV group compared with the IMV group, but the results were only significant in 1 study (13% vs. 34.6%, P = 0.07; and 6.4% vs. 37.2%, P < 0.001, respectively). Similarly, both studies showed a reduction in tracheotomies in the NPPV group compared with the IMV group, but the results were only significant in 1 study (13% vs. 23.1%, P = 0.29; and 6.4% vs. 34.6%, P < 0.001).
GRADE: very low
Other Outcomes
One of the studies followed patients for 12 months. At the end of follow-up, patients in the NPPV group had a significantly lower rate of needing de novo oxygen supplementation at home. In addition, the IMV group experienced significant increases in functional limitations due to COPD, while no increase was seen in the NPPV group. Finally, no significant differences were observed for hospital readmissions, ICU readmissions, and patients with an open tracheotomy, between the NPPV and IMV groups.
NPPV for Weaning COPD Patients From IMV
A total of 80 participants were included in the 2 RCTs; the sample sizes of the studies were 30 and 50 patients. The mean age of the participants ranged from 58 to 69 years. Based on either the GOLD COPD stage criteria or the mean percent predicted FEV1, both studies included patients with very severe COPD. Both studies also included patients with very severe respiratory failure (the mean pH of the study populations was less than 7.23). COPD patients receiving IMV were enrolled if they had failed a T-piece weaning trial (spontaneous breathing trial) and therefore could not be directly extubated from IMV.
Both studies were conducted in the ICU. Patients in the NPPV group received weaning using either BiPAP or pressure support ventilation NPPV through a face mask, and patients in the IMV weaning group received pressure support ventilation. In both cases, weaning was achieved by tapering the ventilation level.
The quality of the individual studies varied. Common methodological problems included unclear randomization methods and allocation concealment, lack of blinding, and small sample size.
Mortality
Both studies reported mortality as an outcome. The pooled results showed a significant reduction in ICU mortality in the NPPV group compared with the IMV group (RR, 0.47; 95% CI, 0.23−0.97; P = 0.04).
GRADE: moderate
Intensive Care Unit Length of Stay
Both studies reported ICU LOS as an outcome. The pooled results showed a nonsignificant reduction in ICU LOS in the NPPV group compared with the IMV group (WMD, −5.21 days; 95% CI, −11.60 to 1.18 days).
GRADE: low
Duration of Mechanical Ventilation
Both studies reported duration of mechanical ventilation (including both invasive and noninvasive ventilation) as an outcome. The pooled results showed a nonsignificant reduction in duration of mechanical ventilation (WMD, −3.55 days; 95% CI, −8.55 to 1.44 days).
GRADE: low
Nosocomial Pneumonia
Both studies reported nosocomial pneumonia as an outcome. The pooled results showed a significant reduction in nosocomial pneumonia in the NPPV group compared with the IMV group (RR, 0.14; 95% CI, 0.03−0.71; P = 0.02).
GRADE: moderate
Weaning Failure
One study reported a significant reduction in weaning failure in the NPPV group compared with the IMV group, although detailed results were not provided in the publication. In this study, 1 of 25 patients in the NPPV group and 2 of 25 patients in the IMV group could not be weaned after 60 days in the ICU.
NPPV After Extubation of COPD Patients From IMV
The literature was reviewed to identify studies examining the effectiveness of NPPV compared with UMC in preventing recurrence of ARF after extubation from IMV or treating acute ARF which has recurred after extubation from IMV. No studies that included only COPD patients or reported results for COPD patients separately were identified for the prevention of ARF postextubation.
One study was identified for the treatment of ARF in COPD patients that recurred within 48 hours of extubation from IMV. This study included 221 patients, of whom 23 had COPD. A post hoc subgroup analysis was conducted examining the rate of reintubation in the COPD patients only. A nonsignificant reduction in the rate of reintubation was observed in the NPPV group compared with the UMC group (7 of 14 patients vs. 6 of 9 patients, P = 0.67).
GRADE: low
Conclusions
NPPV Plus UMC Versus UMC Alone for First Line Treatment of ARF due to Acute Exacerbations of COPD
Moderate quality of evidence showed that compared with UMC, NPPV plus UMC significantly reduced the need for endotracheal intubation, inhospital mortality, and the mean length of hospital stay.
Low quality of evidence showed a lower rate of complications in the NPPV plus UMC group compared with the UMC group.
NPPV Versus IMV for the Treatment of ARF in Patients Who Have Failed UMC
Due to inconsistent and low to very low quality of evidence, there was insufficient evidence to draw conclusions on the comparison of NPPV versus IMV for patients who failed UMC.
NPPV for Weaning COPD Patients From IMV
Moderate quality of evidence showed that weaning COPD patients from IMV using NPPV results in significant reductions in mortality, nosocomial pneumonia, and weaning failure compared with weaning with IMV.
Low quality of evidence showed a nonsignificant reduction in the mean LOS and mean duration of mechanical ventilation in the NPPV group compared with the IMV group.
NPPV for the Treatment of ARF in COPD Patients After Extubation From IMV
Low quality of evidence showed a nonsignificant reduction in the rate of reintubation in the NPPV group compared with the UMC group; however, there was inadequate evidence to draw conclusions on the effectiveness of NPPV for the treatment of ARF in COPD patients after extubation from IMV.
PMCID: PMC3384377  PMID: 23074436
13.  Utilization of DXA Bone Mineral Densitometry in Ontario 
Executive Summary
Issue
Systematic reviews and analyses of administrative data were performed to determine the appropriate use of bone mineral density (BMD) assessments using dual energy x-ray absorptiometry (DXA), and the associated trends in wrist and hip fractures in Ontario.
Background
Dual Energy X-ray Absorptiometry Bone Mineral Density Assessment
Dual energy x-ray absorptiometry bone densitometers measure bone density based on the differential absorption of 2 x-ray beams by bone and soft tissue. DXA is the gold standard for detecting and diagnosing osteoporosis, a systemic disease characterized by low bone density and altered bone structure, resulting in low bone strength and increased risk of fractures. The test is fast (approximately 10 minutes) and accurate (accuracy exceeds 90% at the hip), with low radiation exposure (1/3 to 1/5 of that from a chest x-ray). DXA densitometers are licensed as Class 3 medical devices in Canada. The World Health Organization (WHO) has established criteria for osteoporosis and osteopenia based on DXA BMD measurements: osteoporosis is defined as a BMD more than 2.5 standard deviations below the mean BMD for normal young adults (i.e., T-score < −2.5), while osteopenia is defined as a BMD more than 1 but less than 2.5 standard deviations below the mean for normal young adults (i.e., T-score < −1 and ≥ −2.5). DXA densitometry is presently an insured health service in Ontario.
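The T-score arithmetic behind these definitions is straightforward. The sketch below applies the standard T-score formula and the category boundaries as given in this report; the BMD values and reference statistics in the example are hypothetical.

    def t_score(bmd, young_adult_mean, young_adult_sd):
        """T-score: patient BMD expressed in standard deviations from the
        mean BMD of normal young adults."""
        return (bmd - young_adult_mean) / young_adult_sd

    def who_category(t):
        """WHO category using the boundaries stated in this report."""
        if t < -2.5:
            return "osteoporosis"
        if t < -1.0:
            return "osteopenia"
        return "normal"

    # Hypothetical example: spine BMD 0.85 g/cm^2 vs a young adult
    # reference mean of 1.05 g/cm^2 with SD 0.11 g/cm^2 -> T ~ -1.8.
    print(who_category(t_score(0.85, 1.05, 0.11)))  # "osteopenia"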
Clinical Need
 
Burden of Disease
The Canadian Multicentre Osteoporosis Study (CaMos) found that 16% of Canadian women and 6.6% of Canadian men have osteoporosis based on the WHO criteria, with prevalence increasing with age. Osteopenia was found in 49.6% of Canadian women and 39% of Canadian men. In Ontario, it is estimated that nearly 530,000 Ontarians have some degree of osteoporosis. Osteoporosis-related fragility fractures occur most often in the wrist, femur, and pelvis. These fractures, particularly those of the hip, are associated with increased mortality and decreased functional capacity and quality of life. A Canadian study showed that at 1 year after a hip fracture, the mortality rate was 20%; another 20% of patients required institutional care, 40% were unable to walk independently, and health-related quality of life was lower due to attributes such as pain, decreased mobility, and decreased ability to self-care. The cost of osteoporosis and osteoporotic fractures in Canada was estimated to be $1.3 billion in 1993.
Guidelines for Bone Mineral Density Testing
With 2 exceptions, almost all guidelines address only women. None of the guidelines recommend blanket population-based BMD testing. Instead, all guidelines recommend BMD testing in people at risk of osteoporosis, predominantly women aged 65 years or older. For women under 65 years of age, BMD testing is recommended only if one major or two minor risk factors for osteoporosis exist. Osteoporosis Canada did not restrict its recommendations to women, and thus their guidelines apply to both sexes. Major risk factors are age greater than or equal to 65 years, a history of previous fractures, family history (especially parental history) of fracture, and medication or disease conditions that affect bone metabolism (such as long-term glucocorticoid therapy). Minor risk factors include low body mass index, low calcium intake, alcohol consumption, and smoking.
Current Funding for Bone Mineral Density Testing
The Ontario Health Insurance Program (OHIP) Schedule presently reimburses DXA BMD at the hip and spine. Measurements at both sites are required if feasible. Patients at low risk of accelerated bone loss are limited to one BMD test within any 24-month period, but there are no restrictions on people at high risk. The total fee including the professional and technical components for a test involving 2 or more sites is $106.00 (Cdn).
Method of Review
This review consisted of 2 parts. The first part was an analysis of Ontario administrative data relating to DXA BMD, wrist and hip fractures, and use of antiresorptive drugs in people aged 65 years and older. The Institute for Clinical Evaluative Sciences extracted data from the OHIP claims database, the Canadian Institute for Health Information hospital discharge abstract database, the National Ambulatory Care Reporting System, and the Ontario Drug Benefit database using OHIP and ICD-10 codes. The data were analyzed to examine trends in DXA BMD use from 1992 to 2005 and to identify areas requiring improvement.
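As a purely hypothetical sketch of the kind of trend tabulation described above, the snippet below counts claims per year from a toy OHIP-style extract. The records and column names are invented; the actual extraction at ICES used OHIP fee codes and ICD-10 codes against the full administrative databases.

    import pandas as pd

    claims = pd.DataFrame({
        "patient_id": [1, 2, 3, 4, 5, 6],
        "service_date": pd.to_datetime(
            ["1993-04-02", "1993-11-15", "1999-06-20",
             "2005-01-08", "2005-07-30", "2005-12-01"]),
        "sex": ["F", "F", "M", "F", "F", "M"],
    })
    claims["year"] = claims["service_date"].dt.year
    tests_per_year = claims.groupby("year").size()             # volume trend
    share_by_sex = claims["sex"].value_counts(normalize=True)  # sex breakdown
    print(tests_per_year)
    print(share_by_sex)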
The second part included systematic reviews and analyses of evidence relating to issues identified in the analyses of utilization data. Altogether, 8 reviews and qualitative syntheses were performed, consisting of 28 published systematic reviews and/or meta-analyses, 34 randomized controlled trials, and 63 observational studies.
Findings of Utilization Analysis
Analysis of administrative data showed a 10-fold increase in the number of BMD tests in Ontario between 1993 and 2005.
OHIP claims for BMD tests are presently increasing at a rate of 6 to 7% per year. Approximately 500,000 tests were performed in 2005/06 with an age-adjusted rate of 8,600 tests per 100,000 population.
Women accounted for 90% of all BMD tests performed in the province.
In 2005/06, there was a 2-fold variation in the rate of DXA BMD tests across local health integration networks, but a 10-fold variation between the county with the highest rate (Toronto) and that with the lowest rate (Kenora). The analysis also showed that:
With the increased use of BMD, there was a concomitant increase in the use of antiresorptive drugs (as shown in people 65 years and older) and a decrease in the rate of hip fractures in people aged 50 years and older.
Repeat BMD made up approximately 41% of all tests. Most of the people (>90%) who had annual BMD tests in a 2-year or 3-year period were coded as being at high risk for osteoporosis.
18% (20,865) of the people who had a repeat BMD test within a 24-month period and 34% (98,058) of the people who had one BMD test in a 3-year period were under 65 years of age, had no fracture in the year, and were coded as low risk.
Only 19% of people older than 65 years underwent BMD testing and 41% received osteoporosis treatment during the year following a fracture.
Men accounted for 24% of all hip fractures and 21% of all wrist fractures, but only 10% of BMD tests. The rates of BMD testing and treatment in men after a fracture were only half of those in women.
In both men and women, the rate of hip and wrist fractures mainly increased after age 65 with the sharpest increase occurring after age 80 years.
Findings of Systematic Review and Analysis
Serial Bone Mineral Density Testing for People Not Receiving Osteoporosis Treatment
A systematic review showed that the mean rate of bone loss in people not receiving osteoporosis treatment (including postmenopausal women) is generally less than 1% per year. Higher rates of bone loss were reported for people with disease conditions or on medications that affect bone metabolism. To be considered a genuine biological change, the change in BMD between serial measurements must exceed the least significant change (variability) of the testing, which ranges from 2.77% to 8% for precisions of 1% to 3%, respectively. Progression in BMD was analyzed using different baseline BMD values, rates of bone loss, precisions, and BMD values for initiating treatment. The analyses showed that serial BMD measurements every 24 months (as per OHIP policy for low-risk individuals) are not necessary for people with no major risk factors for osteoporosis, provided that the baseline BMD is normal (T-score ≥ –1) and the rate of bone loss is no more than 1% per year. For someone with a normal baseline BMD and a rate of bone loss of less than 1% per year, the change in BMD is not likely to exceed the least significant change (even for a 1% precision) in less than 3 years after the baseline test, and BMD is not likely to drop to a level that requires initiation of treatment in less than 16 years after the baseline test.
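To make the timing logic above concrete, the following minimal sketch (in Python) computes the least significant change for a given precision and the minimum years before cumulative bone loss could exceed it; the function names are ours, and the 2.77 multiplier is the standard factor for 95% confidence in least-significant-change calculations:

    def least_significant_change(precision_pct):
        """LSC at 95% confidence = 2.77 x precision error (both in %)."""
        return 2.77 * precision_pct

    def years_to_exceed_lsc(precision_pct, annual_loss_pct):
        """Years until cumulative bone loss exceeds the LSC."""
        return least_significant_change(precision_pct) / annual_loss_pct

    # Consistent with the figures above: precisions of 1% to 3% give LSCs of
    # roughly 2.8% to 8.3%, and at 1%/year loss the LSC for a 1% precision
    # is not exceeded in under about 3 years.
    for precision in (1.0, 2.0, 3.0):
        print(precision, least_significant_change(precision),
              years_to_exceed_lsc(precision, 1.0))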
Serial Bone Mineral Density Testing in People Receiving Osteoporosis Therapy
Seven published meta-analyses of randomized controlled trials (RCTs) and 2 recent RCTs on BMD monitoring during osteoporosis therapy showed that although higher increases in BMD were generally associated with reduced risk of fracture, the change in BMD explained only a small percentage of the fracture risk reduction.
Studies showed that some people with small or no increase in BMD during treatment experienced significant fracture risk reduction, indicating that other factors such as improved bone microarchitecture might have contributed to fracture risk reduction.
There is conflicting evidence relating to the role of BMD testing in improving patient compliance with osteoporosis therapy.
Even though BMD may not be a perfect surrogate for reduction in fracture risk when monitoring responses to osteoporosis therapy, experts advised that it is still the only reliable test available for this purpose.
A systematic review conducted by the Medical Advisory Secretariat showed that the magnitude of increases in BMD during osteoporosis drug therapy varied among medications. Although most of the studies yielded mean percentage increases in BMD from baseline that did not exceed the least significant change for a 2% precision after 1 year of treatment, there were some exceptions.
Bone Mineral Density Testing and Treatment After a Fragility Fracture
A review of 3 published pooled analyses of observational studies and 12 prospective population-based observational studies showed that the presence of any prevalent fracture increases the relative risk for future fractures by approximately 2-fold or more. A review of 10 systematic reviews of RCTs and 3 additional RCTs showed that therapy with antiresorptive drugs significantly reduced the risk of vertebral fractures by 40 to 50% in postmenopausal osteoporotic women and osteoporotic men, and 2 antiresorptive drugs also reduced the risk of nonvertebral fractures by 30 to 50%. Evidence from observational studies in Canada and other jurisdictions suggests that patients who had undergone BMD measurements, particularly if a diagnosis of osteoporosis was made, were more likely to be given pharmacologic bone-sparing therapy. Despite these findings, the rate of BMD investigation and osteoporosis treatment after a fracture remained low (<20%) in Ontario as well as in other jurisdictions.
Bone Mineral Density Testing in Men
There are presently no specific Canadian guidelines for BMD screening in men. A review of the literature suggests that risk factors for fracture and the rate of vertebral deformity are similar for men and women, but the mortality rate after a hip fracture is higher in men than in women. Two bisphosphonates have been shown to reduce the risk of vertebral and hip fractures in men. However, BMD testing and osteoporosis treatment were proportionately low in Ontario men in general, and particularly after a fracture, even though men accounted for 25% of the hip and wrist fractures. The Ontario data also showed that the rates of wrist fracture and hip fracture in men rose sharply in the 75- to 80-year age group.
Ontario-Based Economic Analysis
The economic analysis focused on analyzing the economic impact of decreasing future hip fractures by increasing the rate of BMD testing in men and women age greater than or equal to 65 years following a hip or wrist fracture. A decision analysis showed the above strategy, especially when enhanced by improved reporting of BMD tests, to be cost-effective, resulting in a cost-effectiveness ratio ranging from $2,285 (Cdn) per fracture avoided (worst-case scenario) to $1,981 (Cdn) per fracture avoided (best-case scenario). A budget impact analysis estimated that shifting utilization of BMD testing from the low risk population to high risk populations within Ontario would result in a saving of $0.85 million to $1.5 million (Cdn) to the health system. The potential net saving was estimated at $1.2 million to $5 million (Cdn) when the downstream cost-avoidance due to prevention of future hip fractures was factored into the analysis.
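The structure of such a cost-per-fracture-avoided calculation can be sketched as follows; this is an illustration only, since the decision model itself is not published in this summary, and the function, its inputs, and the example figures are all hypothetical:

    def cost_per_fracture_avoided(testing_cost, treatment_cost, fractures_avoided):
        """Net programme cost divided by the number of hip fractures avoided."""
        return (testing_cost + treatment_cost) / fractures_avoided

    # Made-up example: $400,000 of additional testing plus $100,000 of
    # treatment preventing 250 fractures gives $2,000 per fracture avoided.
    print(cost_per_fracture_avoided(400_000, 100_000, 250))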
Other Factors for Consideration
There is a lack of standardization for BMD testing in Ontario. Two different standards are presently being used, and experts suggest that variability in results from different facilities may lead to unnecessary testing. There is also no requirement for standardized equipment, procedures, or reporting format. The current reimbursement policy for BMD testing encourages serial testing in people at low risk of accelerated bone loss; this review showed that testing every 2 years is not necessary in all such cases. The lack of a database to collect clinical data on BMD testing makes it difficult to evaluate the clinical profiles of patients tested and the outcomes of the BMD tests. There are ministry initiatives in progress under the Osteoporosis Program to develop a mandatory standardized requisition form for BMD tests to facilitate data collection and clinical decision-making. Work is also underway to develop guidelines for BMD testing in men and in perimenopausal women.
Conclusion
Increased use of BMD in Ontario since 1996 appears to be associated with increased use of antiresorptive medication and a decrease in hip and wrist fractures.
Data suggest that as many as 20% (98,000) of the DXA BMD tests in Ontario in 2005/06 were performed in people aged less than 65 years, with no fracture in the current year, and coded as being at low risk for accelerated bone loss; this is not consistent with current guidelines. Even though some of these people might have been incorrectly coded as low-risk, the number of tests in people truly at low risk could still be substantial.
Approximately 4% (21,000) of the DXA BMD tests in 2005/06 were repeat BMD tests in low-risk individuals within a 24-month period. Even though this is in compliance with current OHIP reimbursement policies, evidence showed that serial BMD testing every 2 years is not necessary in individuals without major risk factors for fractures, provided that the baseline BMD is normal (T-score ≥ –1). In this population, BMD measurements may be repeated 3 to 5 years after the baseline test to establish the rate of bone loss, and further serial BMD tests may not be necessary for another 7 to 10 years if the rate of bone loss is no more than 1% per year. The precision of the test needs to be considered when interpreting serial BMD results.
Although changes in BMD may not be the perfect surrogate for reduction in fracture risk as a measure of response to osteoporosis treatment, experts advised that it is presently the only reliable test for monitoring response to treatment and to help motivate patients to continue treatment. Patients should not discontinue treatment if there is no increase in BMD after the first year of treatment. Lack of response or bone loss during treatment should prompt the physician to examine whether the patient is taking the medication appropriately.
Men and women who have had a fragility fracture at the hip, spine, wrist or shoulder are at increased risk of a future fracture, but this population is presently underinvestigated and undertreated. Additional efforts have to be made to communicate to physicians (particularly orthopaedic surgeons and family physicians) and the public the need for a BMD test after a fracture, and for initiating treatment if low BMD is found.
Men had a disproportionately low rate of BMD tests and osteoporosis treatment, especially after a fracture. Evidence and fracture data showed that the risk of hip and wrist fractures in men rises sharply at age 70 years.
Some counties had BMD utilization rates that were only 10% of that of the county with the highest utilization. The reasons for low utilization need to be explored and addressed.
Initiatives such as aligning reimbursement policy with current guidelines, developing specific guidelines for BMD testing in men and perimenopausal women, improving BMD reports to assist in clinical decision making, developing a registry to track BMD tests, improving access to BMD tests in remote/rural counties, establishing mechanisms to alert family physicians of fractures, and educating physicians and the public, will improve the appropriate utilization of BMD tests, and further decrease the rate of fractures in Ontario. Some of these initiatives such as developing guidelines for perimenopausal women and men, and developing a standardized requisition form for BMD testing, are currently in progress under the Ontario Osteoporosis Strategy.
PMCID: PMC3379167  PMID: 23074491
14.  Risks and Benefits of Nalmefene in the Treatment of Adult Alcohol Dependence: A Systematic Literature Review and Meta-Analysis of Published and Unpublished Double-Blind Randomized Controlled Trials 
PLoS Medicine  2015;12(12):e1001924.
Background
Nalmefene is a recent option in alcohol dependence treatment. Its approval was controversial. We conducted a systematic review and meta-analysis of the aggregated data (registered as PROSPERO 2014:CRD42014014853) to compare the harm/benefit of nalmefene versus placebo or active comparator in this indication.
Methods and Findings
Three reviewers searched for published and unpublished studies in Medline, the Cochrane Library, Embase, ClinicalTrials.gov, Current Controlled Trials, and bibliographies and by mailing pharmaceutical companies, the European Medicines Agency (EMA), and the US Food and Drug Administration. Double-blind randomized clinical trials evaluating nalmefene to treat adult alcohol dependence, irrespective of the comparator, were included if they reported (1) health outcomes (mortality, accidents/injuries, quality of life, somatic complications), (2) alcohol consumption outcomes, (3) biological outcomes, or (4) treatment safety outcomes, at 6 mo and/or 1 y. Three authors independently screened the titles and abstracts of the trials identified. Relevant trials were evaluated in full text. The reviewers independently assessed the included trials for methodological quality using the Cochrane Collaboration tool for assessing risk of bias. On the basis of the I² index or Cochrane’s Q test, fixed- or random-effects models were used to estimate risk ratios (RRs), mean differences (MDs), or standardized mean differences (SMDs) with 95% CIs. In sensitivity analyses, outcomes for participants who were lost to follow-up were included using baseline observation carried forward (BOCF); for binary measures, patients lost to follow-up were considered equal to failures (i.e., non-assessed patients were recorded as not having responded in both groups). Five randomized controlled trials (RCTs) versus placebo, with a total of 2,567 randomized participants, were included in the main analysis. None of these studies was performed in the specific population defined by the EMA approval of nalmefene, i.e., adults with alcohol dependence who consume more than 60 g of alcohol per day (for men) or more than 40 g per day (for women). No RCT compared nalmefene with another medication. Mortality at 6 mo (RR = 0.39, 95% CI [0.08; 2.01]) and 1 y (RR = 0.98, 95% CI [0.04; 23.95]) and quality of life at 6 mo (SF-36 physical component summary score: MD = 0.85, 95% CI [−0.32; 2.01]; SF-36 mental component summary score: MD = 1.01, 95% CI [−1.33; 3.34]) were not different across groups. Other health outcomes were not reported. Differences were encountered for alcohol consumption outcomes such as monthly number of heavy drinking days at 6 mo (MD = −1.65, 95% CI [−2.41; −0.89]) and at 1 y (MD = −1.60, 95% CI [−2.85; −0.35]) and total alcohol consumption at 6 mo (SMD = −0.20, 95% CI [−0.30; −0.10]). An attrition bias could not be excluded, with more withdrawals for nalmefene than for placebo, including more withdrawals for safety reasons at both 6 mo (RR = 3.65, 95% CI [2.02; 6.63]) and 1 y (RR = 7.01, 95% CI [1.72; 28.63]). Sensitivity analyses showed no differences for alcohol consumption outcomes between nalmefene and placebo, but the weight of these results should not be overestimated, as the BOCF approach to managing withdrawals was used.
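For readers unfamiliar with the pooling step, the sketch below shows generic inverse-variance fixed-effect pooling of risk ratios on the log scale, the standard approach referred to above; the study inputs are placeholders, not the nalmefene trial data:

    import math

    def pooled_rr(rrs, cis):
        """Pool risk ratios with 95% CIs [(lo, hi), ...] on the log scale."""
        log_rrs = [math.log(rr) for rr in rrs]
        # SE recovered from the CI width: (ln(hi) - ln(lo)) / (2 * 1.96)
        ses = [(math.log(hi) - math.log(lo)) / (2 * 1.96) for lo, hi in cis]
        weights = [1 / se ** 2 for se in ses]
        pooled_log = sum(w * lr for w, lr in zip(weights, log_rrs)) / sum(weights)
        pooled_se = (1 / sum(weights)) ** 0.5
        return (math.exp(pooled_log),
                math.exp(pooled_log - 1.96 * pooled_se),
                math.exp(pooled_log + 1.96 * pooled_se))

    # Placeholder studies: RR 0.80 (0.50-1.28) and RR 1.10 (0.70-1.73).
    print(pooled_rr([0.80, 1.10], [(0.50, 1.28), (0.70, 1.73)]))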
Conclusions
The value of nalmefene for treatment of alcohol addiction is not established. At best, nalmefene has limited efficacy in reducing alcohol consumption.
In a systematic review and meta-analysis, Florian Naudet and colleagues assess whether medication with the opioid antagonist nalmefene can reduce consumption and other outcomes of alcohol addiction.
Editors' Summary
Background
Many people enjoy an occasional alcoholic drink. But because alcohol is an addictive substance, some people (around one in 12 people in the US, for example) develop alcohol dependency (alcoholism). Such people have an excessive desire to drink or have lost control over their alcohol use, and may find it hard to relax or enjoy themselves without having a drink. As well as becoming psychologically dependent on alcohol, they can become physically dependent and may show withdrawal symptoms such as sweating, shaking, and nausea—or even delirium tremens, a psychotic condition that involves tremors, hallucinations, anxiety, and disorientation—when they attempt to reduce their drinking. Indeed, severely dependent drinkers often drink to relieve their withdrawal symptoms (“relief drinking”). Although alcohol dependency sometimes runs in families, it can also be triggered by stressful events, and the condition can damage health, emotional stability, finances, careers, and relationships.
Why Was This Study Done?
To reduce harm, alcohol-dependent individuals are usually advised to abstain from drinking, but controlled (moderate) drinking may also be helpful. To help people reduce their alcohol consumption, the European Medicines Agency recently approved nalmefene—a drug that blocks the body’s opioid receptors and reduces the craving for alcohol—for use in the treatment of alcohol dependence in adults who consume more than 60 g (for men) or 40 g (for women) of alcohol per day (a small glass of wine contains about 12 g of alcohol; a can of beer contains about 16 g). However, several expert bodies have concluded that nalmefene shows no benefit over naltrexone, an older treatment for alcohol dependency, and do not recommend its use for this indication. Here, the researchers investigate the risks and benefits of nalmefene in the treatment of alcohol dependency in adults by undertaking a systematic review and meta-analysis of double-blind randomized controlled trials (RCTs) of nalmefene for this indication. A systematic review uses predefined criteria to identify all the research on a given topic, and a meta-analysis combines the results of several studies; a double-blind RCT compares outcomes in people chosen at random to receive different treatments without the researchers or the participants knowing who received which treatment until the end of the trial.
What Did the Researchers Do and Find?
The researchers identified five RCTs that met the criteria for inclusion in their study. All five RCTs (which involved 2,567 participants) compared the effects of nalmefene with a placebo (dummy drug); none was undertaken in the population specified by the European Medicines Agency approval. Among the health outcomes examined in the meta-analysis, there were no differences between participants taking nalmefene and those taking placebo in mortality (death) after six months or one year of treatment, in the quality of life at six months, or in a summary score indicating mental health at six months. The RCTs included in the meta-analysis did not report other health outcomes such as accidents. Participants taking nalmefene had fewer heavy drinking days per month at six months and one year of treatment than participants taking placebo, and their total alcohol consumption was lower. However, more people withdrew from the nalmefene groups than from the placebo groups, often for safety reasons. Thus, attrition bias—selection bias caused by systematic differences between groups in withdrawals from a study that can affect the accuracy of the study’s findings—cannot be excluded. Indeed, when the researchers undertook an analysis in which they allowed for withdrawals, the alcohol consumption outcomes did not differ between the treatment groups.
What Do These Findings Mean?
These findings show that there is no high-grade evidence currently available from RCTs to support the use of nalmefene for harm reduction among people being treated for alcohol dependency. In addition, they provide little evidence to support the use of nalmefene to reduce alcohol consumption among this population. Thus, the value of nalmefene for the treatment of alcohol addiction is not established. Importantly, these findings reveal a lack of information on clinically relevant outcomes in the evidence that led to nalmefene approval by the European Medicines Agency. Thus, these findings also call into question the decisions of this and other regulatory and advisory bodies that have approved nalmefene on the basis of the available evidence from RCTs, and highlight the need for further RCTs of nalmefene compared to placebo and naltrexone for the indication specified in the market approval.
Additional Information
This list of resources contains links that can be accessed when viewing the PDF on a device or via the online version of the article at http://dx.doi.org/10.1371/journal.pmed.1001924.
The US National Institute on Alcohol Abuse and Alcoholism has information about alcohol and its effects on health (including alcohol use disorder, another name for alcohol dependency); it provides interactive worksheets to help people evaluate their drinking and decide whether and how to make a change
The UK National Health Service Choices website provides detailed information about drinking and alcohol and about alcohol dependency (including a personal story about alcohol misuse), and tools for calculating alcohol consumption
The US National Council on Alcoholism and Drug Dependence provides information about alcohol addiction and a self-test for alcohol dependence
Drinkaware is a UK-based non-profit organization that aims to improve the UK’s drinking habits; its website provides information on alcohol dependence and on other aspects of alcohol and health, and a tool for calculating alcohol intake
MedlinePlus provides links to many other resources on alcohol and on alcoholism and alcohol abuse
Wikipedia has pages on nalmefene and on naltrexone (note that Wikipedia is a free online encyclopedia that anyone can edit; available in several languages)
Details of the European Medicines Agency approval of nalmefene are available
More information about this study is available
doi:10.1371/journal.pmed.1001924
PMCID: PMC4687857  PMID: 26694529
15.  Minimally invasive surgical procedures for the treatment of lumbar disc herniation 
Introduction
Outcomes are judged unfavourable in up to 30% of patients undergoing lumbar disc surgery for herniated or protruded discs. Over the last decades this problem has stimulated the development of a number of minimally-invasive operative procedures. The aim is to relieve pressure from compromised nerve roots by mechanically removing, dissolving or evaporating disc material while leaving bony structures and surrounding tissues as intact as possible. In Germany, there are hardly any utilisation data for these new procedures – data files from the statutory health insurances demonstrate that about 5% of all lumbar disc surgeries are performed using minimally-invasive techniques. Their real proportion is thought to be much higher because many procedures are offered by private hospitals and surgeries and are paid for by private health insurers or patients themselves. So far, no comprehensive assessment comparing efficacy, safety, effectiveness and cost-effectiveness of minimally-invasive lumbar disc surgery to standard procedures (microdiscectomy, open discectomy), which could serve as a basis for coverage decisions, has been published in Germany.
Objective
Against this background, the aims of the following assessment are:
To assess safety, efficacy and effectiveness of minimally-invasive lumbar disc surgery compared to standard procedures, based on published scientific literature.
To identify and critically appraise studies comparing costs and cost-effectiveness of minimally-invasive procedures to those of standard procedures.
If necessary, to identify research and evaluation needs and to point out regulatory needs within the German health care system.
The assessment focusses on procedures that are used in elective lumbar disc surgery as alternative treatment options to microdiscectomy or open discectomy. Chemonucleolysis, percutaneous manual discectomy, automated percutaneous lumbar discectomy, laserdiscectomy and endoscopic procedures accessing the disc by a posterolateral or posterior approach are included.
Methods
In order to assess safety, efficacy and effectiveness of minimally-invasive procedures as well as their economic implications, systematic reviews of the literature are performed. A comprehensive search strategy is composed to search 23 electronic databases, among them MEDLINE, EMBASE and the Cochrane Library. Methodological quality of systematic reviews, HTA reports and primary research is assessed using checklists of the German Scientific Working Group for Health Technology Assessment. Quality and transparency of cost analyses are documented using the quality and transparency catalogues of the working group. Study results are summarised in a qualitative manner. Due to the limited number and the low methodological quality of the studies it is not possible to conduct meta-analyses. In addition to the results of controlled trials, results of recent case series are introduced and discussed.
Results
The evidence-base to assess safety, efficacy and effectiveness of minimally-invasive lumbar disc surgery procedures is rather limited:
Percutaneous manual discectomy: Six case series (four after 1998)
Automated percutaneous lumbar discectomy: Two RCT (one discontinued), twelve case series (one after 1998)
Chemonucleolysis: Five RCT, five non-randomised controlled trials, eleven case series
Percutaneous laserdiscectomy: One non-randomised controlled trial, 13 case series (eight after 1998)
Endoscopic procedures: Three RCT, 21 case series (17 after 1998)
Two economic analyses each were retrieved for chemonucleolysis and for automated percutaneous discectomy, as well as one cost-minimisation analysis comparing the costs of an endoscopic procedure to those of open discectomy.
Among all minimally-invasive procedures, chemonucleolysis is the only one whose efficacy may be judged on the basis of results from high quality randomised controlled trials (RCT). Study results suggest that the procedure may be (cost-)effectively used as an intermediate therapeutic option between conservative and operative management of small lumbar disc herniations or protrusions causing sciatica. Two RCT comparing transforaminal endoscopic procedures with microdiscectomy in patients with sciatica and small non-sequestered disc herniations show comparable short and medium term overall success rates. Concerning speed of recovery and return to work, a trend towards more favourable results for the endoscopic procedures is noted. It is doubtful, though, whether these results from the eleven- and five-year-old studies are still valid for the more advanced procedures used today. The only RCT comparing the results of automated percutaneous lumbar discectomy to those of microdiscectomy showed clearly superior results of microdiscectomy. Furthermore, the success rate of automated percutaneous lumbar discectomy reported in the RCT (29%) differs extremely from the success rates reported in case series (between 56% and 92%).
The literature search retrieves no controlled trials assessing the efficacy and/or effectiveness of laserdiscectomy, percutaneous manual discectomy or endoscopic procedures using a posterior approach in comparison to the standard procedures. Results from recent case series permit no assessment of efficacy, especially not in comparison to standard procedures. Due to highly selected patients, modifications of operative procedures, highly specialised surgical units and poorly standardised outcome assessment, results of case series are highly variable and their generalisability is low.
The results of the five economic analyses are, due to conceptual and methodological problems, of no value for decision-making in the context of the German health care system.
Discussion
Aside from low methodological study quality, three conceptual problems complicate the interpretation of results.
The first is the continuous further development of technologies, which leads to a diversity of procedures in use and prohibits generalisation of study results. Diversity is noted not only for minimally-invasive procedures but also for the standard techniques against which the new developments are to be compared. The second problem is the heterogeneity of study populations. For most studies one common inclusion criterion was "persisting sciatica after a course of conservative treatment of variable duration", but study populations differ concerning results of imaging studies, and even within every group of minimally-invasive procedures, studies define their own in- and exclusion criteria, which differ concerning degree of dislocation and sequestration of disc material. The third problem is the non-standardised assessment of outcomes, performed postoperatively after variable periods of time. Most studies report results in a dichotomous way as success or failure, while the classification of a result is performed using a variety of different assessment instruments or procedures. Very often the global subjective judgement of results by patients or surgeons is reported, and there are no scientific discussions of whether these judgements are generalisable or comparable, especially among studies conducted under differing socio-cultural conditions. Taking into account the weak evidence-base for efficacy and effectiveness of minimally-invasive procedures, it is not surprising that so far there are no dependable economic analyses.
Conclusions
Conclusions that can be drawn from the results of the present assessment refer in detail to the specified minimally-invasive procedures of lumbar disc surgery but they may also be considered exemplary for other fields where optimisation of results is attempted by technological development and widening of indications (e.g. total hip replacement).
Compared to standard technologies (open discectomy, microdiscectomy) and with the exception of chemonucleolysis, the developmental status of all other minimally-invasive procedures assessed must be termed experimental. To date there is no dependable evidence-base to recommend their use in routine clinical practice. To create such a dependable evidence-base, further research in two directions is needed: a) The studies need to include adequate patient populations, use realistic controls (e.g. standard operative procedures or continued conservative care) and use standardised measurements of meaningful outcomes after adequate periods of time. b) Studies that are able to report effectiveness of the procedures under everyday practice conditions and furthermore have the potential to detect rare adverse effects are needed. In Sweden this type of data is yielded by national quality registries. On the one hand their data are used for quality improvement measures and on the other hand they allow comprehensive scientific evaluations. Since 2000, a continuous rise in utilisation of minimally-invasive lumbar disc surgery has been observed among statutory health insurers. Examples from other areas of innovative surgical technologies (e.g. robot-assisted total hip replacement) indicate that the rise will probably continue - especially because there are no legal barriers to hinder introduction of innovative treatments into routine hospital care. Upon request by payers or providers the "Gemeinsamer Bundesausschuss" may assess a treatment's benefit, its necessity and cost-effectiveness as a prerequisite for coverage by the statutory health insurance. In the case of minimally-invasive disc surgery it would be advisable to examine the legal framework for covering procedures only if they are provided under evaluation conditions. While in Germany coverage under evaluation conditions is established practice in ambulatory health care only ("Modellvorhaben"), examples from other European countries (Great Britain, Switzerland) demonstrate that it is also feasible for hospital based interventions. In order to assure patient protection and at the same time not hinder the further development of new and promising technologies, provision under evaluation conditions could also be realised in the private health care market - although in this sector coverage is not by law linked to benefit, necessity and cost-effectiveness of an intervention.
PMCID: PMC3011322  PMID: 21289928
16.  Positron Emission Tomography for the Assessment of Myocardial Viability 
Executive Summary
In July 2009, the Medical Advisory Secretariat (MAS) began work on Non-Invasive Cardiac Imaging Technologies for the Assessment of Myocardial Viability, an evidence-based review of the literature surrounding different cardiac imaging modalities to ensure that appropriate technologies are accessed by patients undergoing viability assessment. This project came about when the Health Services Branch at the Ministry of Health and Long-Term Care asked MAS to provide an evidentiary platform on effectiveness and cost-effectiveness of non-invasive cardiac imaging modalities.
After an initial review of the strategy and consultation with experts, MAS identified five key non-invasive cardiac imaging technologies that can be used for the assessment of myocardial viability: positron emission tomography, cardiac magnetic resonance imaging, dobutamine echocardiography, dobutamine echocardiography with contrast, and single photon emission computed tomography.
A 2005 review conducted by MAS determined that positron emission tomography was more sensitive than dobutamine echocardiography and single photon emission computed tomography and dominated the other imaging modalities from a cost-effectiveness standpoint. However, there was inadequate evidence to compare positron emission tomography and cardiac magnetic resonance imaging. Thus, this report focuses on this comparison only. For both technologies, an economic analysis was also completed.
The Non-Invasive Cardiac Imaging Technologies for the Assessment of Myocardial Viability is made up of the following reports, which can be publicly accessed at the MAS website at: www.health.gov.on.ca/mas or at www.health.gov.on.ca/english/providers/program/mas/mas_about.html
Positron Emission Tomography for the Assessment of Myocardial Viability: An Evidence-Based Analysis
Magnetic Resonance Imaging for the Assessment of Myocardial Viability: An Evidence-Based Analysis
Objective
The objective of this analysis is to assess the effectiveness and safety of positron emission tomography (PET) imaging using F-18-fluorodeoxyglucose (FDG) for the assessment of myocardial viability. To evaluate the effectiveness of FDG PET viability imaging, the following outcomes are examined:
the diagnostic accuracy of FDG PET for predicting functional recovery;
the impact of PET viability imaging on prognosis (mortality and other patient outcomes); and
the contribution of PET viability imaging to treatment decision making and subsequent patient outcomes.
Clinical Need: Condition and Target Population
Left Ventricular Systolic Dysfunction and Heart Failure
Heart failure is a complex syndrome characterized by the heart’s inability to maintain adequate blood circulation through the body leading to multiorgan abnormalities and, eventually, death. Patients with heart failure experience poor functional capacity, decreased quality of life, and increased risk of morbidity and mortality.
In 2005, more than 71,000 Canadians died from cardiovascular disease, of which 54% were due to ischemic heart disease. Left ventricular (LV) systolic dysfunction due to coronary artery disease (CAD) is the primary cause of heart failure, accounting for more than 70% of cases. The prevalence of heart failure was estimated at one percent of the Canadian population in 1989. Since then, the increase in the older population has undoubtedly resulted in a substantial increase in cases. Heart failure is associated with a poor prognosis: one-year mortality rates were 32.9% and 31.1% for men and women, respectively, in Ontario between 1996 and 1997.
Treatment Options
In general, there are three options for the treatment of heart failure: medical treatment, heart transplantation, and revascularization for those with CAD as the underlying cause. Concerning medical treatment, despite recent advances, mortality remains high among treated patients, while heart transplantation is limited by the availability of donor hearts and consequently has long waiting lists. The third option, revascularization, is used to restore the flow of blood to the heart via coronary artery bypass grafting (CABG) or through minimally invasive percutaneous coronary interventions (balloon angioplasty and stenting). Both methods, however, are associated with important perioperative risks, including mortality, so it is essential to properly select patients for these procedures.
Myocardial Viability
Left ventricular dysfunction may be permanent if a myocardial scar is formed, or it may be reversible after revascularization. Reversible LV dysfunction occurs when the myocardium is viable but dysfunctional (reduced contractility). Since only patients with dysfunctional but viable myocardium benefit from revascularization, the identification and quantification of the extent of myocardial viability is an important part of the work-up of patients with heart failure when determining the most appropriate treatment path. Various non-invasive cardiac imaging modalities can be used to assess patients in whom determination of viability is an important clinical issue, specifically:
dobutamine echocardiography (echo),
stress echo with contrast,
SPECT using either technetium or thallium,
cardiac magnetic resonance imaging (cardiac MRI), and
positron emission tomography (PET).
Dobutamine Echocardiography
Stress echocardiography can be used to detect viable myocardium. During the infusion of low dose dobutamine (5–10 μg/kg/min), an improvement of contractility in hypokinetic and akinetic segments is indicative of the presence of viable myocardium. Alternatively, a low-high dose dobutamine protocol can be used, in which a biphasic response (improved contractile function during the low-dose infusion, followed by a deterioration in contractility due to stress-induced ischemia during the high dose infusion, at doses up to 40 μg/kg/min) represents viable tissue. Newer techniques, including echocardiography using contrast agents, harmonic imaging, and power Doppler imaging, may help to improve the diagnostic accuracy of echocardiographic assessment of myocardial viability.
Stress Echocardiography with Contrast
Intravenous contrast agents, which are high molecular weight inert gas microbubbles that act like red blood cells in the vascular space, can be used during echocardiography to assess myocardial viability. These agents allow for the assessment of myocardial blood flow (perfusion) in addition to contractile function (as described above); the simultaneous assessment of perfusion and function makes it possible to distinguish between stunned and hibernating myocardium.
SPECT
SPECT can be performed using thallium-201 (Tl-201), a potassium analogue, or technetium-99m-labelled tracers. When Tl-201 is injected intravenously into a patient, it is taken up by the myocardial cells through regional perfusion, and Tl-201 is retained in the cell due to sodium/potassium ATPase pumps in the myocyte membrane. The stress-redistribution-reinjection protocol involves three sets of images. The first two image sets (taken immediately after stress and then three to four hours after stress) identify perfusion defects that may represent scar tissue or viable tissue that is severely hypoperfused. The third set of images is taken a few minutes after the re-injection of Tl-201 and after the second set of images is completed. These re-injection images identify viable tissue if the defects exhibit significant fill-in (> 10% increase in tracer uptake) on the re-injection images.
The other common Tl-201 viability imaging protocol, rest-redistribution, involves SPECT imaging performed at rest five minutes after Tl-201 is injected and again three to four hours later. Viable tissue is identified if the delayed images exhibit significant fill-in of defects identified in the initial scans (> 10% increase in uptake) or if defects are fixed but the tracer activity is greater than 50%.
There are two technetium-99m tracers: sestamibi (MIBI) and tetrofosmin. The uptake and retention of these tracers are dependent on regional perfusion and the integrity of cellular membranes. Viability is assessed using one set of images at rest and is defined by segments with tracer activity greater than 50%.
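The viability rules summarised above reduce to simple thresholds; in the hedged sketch below, the >10% fill-in and >50% activity cut-offs come from the text, while the function names and inputs are illustrative:

    def tl201_segment_viable(initial_uptake_pct, delayed_uptake_pct):
        """Tl-201 rule: significant fill-in on delayed images, or a fixed
        defect whose tracer activity exceeds 50% of maximum."""
        fill_in = delayed_uptake_pct - initial_uptake_pct
        return fill_in > 10.0 or delayed_uptake_pct > 50.0

    def tc99m_segment_viable(rest_uptake_pct):
        """Technetium-99m rule: one rest image; viable if activity > 50%."""
        return rest_uptake_pct > 50.0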
Cardiac Magnetic Resonance Imaging
Cardiac magnetic resonance imaging (cardiac MRI) is a non-invasive, x-ray free technique that uses a powerful magnetic field, radio frequency pulses, and a computer to produce detailed images of the structure and function of the heart. Two types of cardiac MRI are used to assess myocardial viability: dobutamine stress magnetic resonance imaging (DSMR) and delayed contrast-enhanced cardiac MRI (DE-MRI). DE-MRI, the most commonly used technique in Ontario, uses gadolinium-based contrast agents to define the transmural extent of scar, which can be visualized based on the intensity of the image. Hyper-enhanced regions correspond to irreversibly damaged myocardium. As the extent of hyper-enhancement increases, the amount of scar increases, so the likelihood of functional recovery is lower.
Cardiac Positron Emission Tomography
Positron emission tomography (PET) is a nuclear medicine technique used to image tissues based on the distinct ways in which normal and abnormal tissues metabolize positron-emitting radionuclides. Radionuclides are radioactive analogs of common physiological substrates such as sugars, amino acids, and free fatty acids that are used by the body. The only licensed radionuclide used in PET imaging for viability assessment is F-18 fluorodeoxyglucose (FDG).
During a PET scan, the radionuclides are injected into the body and as they decay, they emit positively charged particles (positrons) that travel several millimetres into tissue and collide with orbiting electrons. This collision results in annihilation, where the combined mass of the positron and electron is converted into energy in the form of two 511 keV gamma rays, which are then emitted in opposite directions (180 degrees) and captured by an external array of detector elements in the PET gantry. Computer software is then used to convert the radiation emission into images. The system is set up so that it only detects coincident gamma rays that arrive at the detectors within a predefined temporal window, while single photons arriving without a pair or outside the temporal window do not activate the detector. This allows for increased spatial and contrast resolution.
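The coincidence logic described above can be sketched as a simple pairing of photon arrival times; this is an illustration only, and the 6 ns window and the timestamps are hypothetical values, not parameters of any particular scanner:

    def find_coincidences(timestamps_ns, window_ns=6.0):
        """Keep photon pairs arriving within the temporal window; singles
        without a partner are rejected, as described above."""
        events = sorted(timestamps_ns)
        pairs, i = [], 0
        while i < len(events) - 1:
            if events[i + 1] - events[i] <= window_ns:
                pairs.append((events[i], events[i + 1]))
                i += 2  # both photons consumed as one coincidence
            else:
                i += 1  # unpaired single: does not activate the detector
        return pairs

    print(find_coincidences([0.0, 3.5, 100.0, 250.0, 252.0]))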
Evidence-Based Analysis
Research Questions
What is the diagnostic accuracy of PET for detecting myocardial viability?
What is the prognostic value of PET viability imaging (mortality and other clinical outcomes)?
What is the contribution of PET viability imaging to treatment decision making?
What is the safety of PET viability imaging?
Literature Search
A literature search was performed on July 17, 2009 using OVID MEDLINE, MEDLINE In-Process and Other Non-Indexed Citations, EMBASE, the Cochrane Library, and the International Network of Agencies for Health Technology Assessment (INAHTA) for studies published from January 1, 2004 to July 16, 2009. Abstracts were reviewed by a single reviewer and, for those studies meeting the eligibility criteria, full-text articles were obtained. In addition, published systematic reviews and health technology assessments were reviewed for relevant studies published before 2004. Reference lists of included studies were also examined for any additional relevant studies not already identified. The quality of the body of evidence was assessed as high, moderate, low or very low according to GRADE methodology.
Inclusion Criteria
Criteria applying to diagnostic accuracy studies, prognosis studies, and physician decision-making studies:
English language full-reports
Health technology assessments, systematic reviews, meta-analyses, randomized controlled trials (RCTs), and observational studies
Patients with chronic, known CAD
PET imaging using FDG for the purpose of detecting viable myocardium
Criteria applying to diagnostic accuracy studies:
Assessment of functional recovery ≥3 months after revascularization
Raw data available to calculate sensitivity and specificity
Gold standard: prediction of global or regional functional recovery
Criteria applying to prognosis studies:
Mortality studies that compare revascularized patients with non-revascularized patients and patients with viable and non-viable myocardium
Exclusion Criteria
Criteria applying to diagnostic accuracy studies, prognosis studies, and physician decision-making studies:
PET perfusion imaging
< 20 patients
< 18 years of age
Patients with non-ischemic heart disease
Animal or phantom studies
Studies focusing on the technical aspects of PET
Studies conducted exclusively in patients with acute myocardial infarction (MI)
Duplicate publications
Criteria applying to diagnostic accuracy studies
Gold standard other than functional recovery (e.g., PET or cardiac MRI)
Assessment of functional recovery occurs before patients are revascularized
Outcomes of Interest
Diagnostic accuracy studies
Sensitivity and specificity
Positive and negative predictive values (PPV and NPV)
Positive and negative likelihood ratios
Diagnostic accuracy
Adverse events
Prognosis studies
Mortality rate
Functional status
Exercise capacity
Quality of Life
Influence of PET viability imaging on physician decision making
Statistical Methods
Pooled estimates of sensitivity and specificity were calculated using a bivariate, binomial generalized linear mixed model. Statistical significance was defined by P values less than 0.05, where “false discovery rate” adjustments were made for multiple hypothesis testing. Using the bivariate model parameters, summary receiver operating characteristic (sROC) curves were produced. The area under the sROC curve was estimated by numerical integration with a cubic spline (default option). Finally, pooled estimates of mortality rates were calculated using weighted means.
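As an illustration of the final step, the sketch below integrates a summary ROC curve numerically with a cubic spline; the (false positive rate, sensitivity) points are placeholders rather than the pooled PET estimates:

    import numpy as np
    from scipy.interpolate import CubicSpline

    fpr = np.array([0.0, 0.1, 0.2, 0.4, 0.6, 0.8, 1.0])
    sens = np.array([0.0, 0.55, 0.72, 0.86, 0.93, 0.97, 1.0])

    spline = CubicSpline(fpr, sens)   # fit the sROC points
    auc = spline.integrate(0.0, 1.0)  # area under the sROC curve
    print(round(float(auc), 3))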
Quality of Evidence
The quality of evidence assigned to individual diagnostic studies was determined using the QUADAS tool, a list of 14 questions that address internal and external validity, bias, and generalizability of diagnostic accuracy studies. Each question is scored as “yes”, “no”, or “unclear”. The quality of the body of evidence was then assessed as high, moderate, low, or very low according to the GRADE Working Group criteria. The following definitions of quality were used in grading the quality of the evidence:
High: Further research is very unlikely to change confidence in the estimate of effect.
Moderate: Further research is likely to have an important impact on confidence in the estimate of effect and may change the estimate.
Low: Further research is very likely to have an important impact on confidence in the estimate of effect and is likely to change the estimate.
Very low: Any estimate of effect is very uncertain.
Summary of Findings
A total of 40 studies met the inclusion criteria and were included in this review: one health technology assessment, two systematic reviews, 22 observational diagnostic accuracy studies, and 16 prognosis studies. The available PET viability imaging literature addresses two questions: 1) what is the diagnostic accuracy of PET imaging for the assessment of myocardial viability; and 2) what is the prognostic value of PET viability imaging? The diagnostic accuracy studies use regional or global functional recovery as the reference standard to determine the sensitivity and specificity of the technology. While regional functional recovery was most commonly used in the studies, global functional recovery is more important clinically. Due to differences in reporting and thresholds, however, it was not possible to pool global functional recovery.
Functional recovery, however, is a surrogate reference standard for viability and consequently, the diagnostic accuracy results may underestimate the specificity of PET viability imaging. For example, regional functional recovery may take up to a year after revascularization depending on whether it is stunned or hibernating tissue, while many of the studies looked at regional functional recovery 3 to 6 months after revascularization. In addition, viable tissue may not recover function after revascularization due to graft patency or re-stenosis. Both issues may lead to false positives and underestimate specificity. Given these limitations, the prognostic value of PET viability imaging provides the most direct and clinically useful information. This body of literature provides evidence on the comparative effectiveness of revascularization and medical therapy in patients with viable myocardium and patients without viable myocardium. In addition, the literature compares the impact of PET-guided treatment decision making with SPECT-guided or standard care treatment decision making on survival and cardiac events (including cardiac mortality, MI, hospital stays, unintended revascularization, etc).
The main findings from the diagnostic accuracy and prognosis evidence are:
Based on the available very low quality evidence, PET is a useful imaging modality for the detection of viable myocardium. The pooled estimates of sensitivity and specificity for the prediction of regional functional recovery as a surrogate for viable myocardium are 91.5% (95% CI, 88.2% – 94.9%) and 67.8% (95% CI, 55.8% – 79.7%), respectively.
Based on the available very low quality evidence, an indirect comparison of pooled estimates of sensitivity and specificity showed no statistically significant difference in the diagnostic accuracy of PET viability imaging for regional functional recovery using perfusion/metabolism mismatch with FDG PET plus either a PET or SPECT perfusion tracer compared with metabolism imaging with FDG PET alone:
FDG PET + PET perfusion metabolism mismatch: sensitivity, 89.9% (83.5% – 96.4%); specificity, 78.3% (66.3% – 90.2%);
FDG PET + SPECT perfusion metabolism mismatch: sensitivity, 87.2% (78.0% – 96.4%); specificity, 67.1% (48.3% – 85.9%);
FDG PET metabolism: sensitivity, 94.5% (91.0% – 98.0%); specificity, 66.8% (53.2% – 80.3%).
Given these findings, further higher quality studies are required to determine the comparative effectiveness and clinical utility of metabolism and perfusion/metabolism mismatch viability imaging with PET.
Based on very low quality evidence, patients with viable myocardium who are revascularized have a lower mortality rate than those who are treated with medical therapy. Given the quality of evidence, however, this estimate of effect is uncertain, so further higher quality studies in this area should be undertaken to determine the presence and magnitude of the effect.
While revascularization may reduce mortality in patients with viable myocardium, current moderate quality RCT evidence suggests that PET-guided treatment decisions do not result in statistically significant reductions in mortality compared with treatment decisions based on SPECT or standard care protocols. The PARR II trial by Beanlands et al. found a significant reduction in cardiac events (a composite outcome that includes cardiac death, MI, or hospital stay for cardiac cause) between the adherence-to-PET-recommendations subgroup and the standard care group (hazard ratio, 0.62; 95% confidence interval, 0.42–0.93; P = 0.019); however, this post-hoc subgroup analysis is hypothesis generating, and higher quality studies are required to substantiate these findings.
The use of FDG PET plus SPECT to determine perfusion/metabolism mismatch to assess myocardial viability increases the radiation exposure compared with FDG PET imaging alone or FDG PET combined with PET perfusion imaging (total-body effective dose: FDG PET, 7 mSv; FDG PET plus PET perfusion tracer, 7.6–7.7 mSv; FDG PET plus SPECT perfusion tracer, 16–25 mSv). While the precise risk attributed to this increased exposure is unknown, there is increasing concern regarding lifetime multiple exposures to radiation-based imaging modalities, although the incremental lifetime risk for patients who are older or have a poor prognosis may not be as great as for healthy individuals.
PMCID: PMC3377573  PMID: 23074393
17.  Greater Response to Placebo in Children Than in Adults: A Systematic Review and Meta-Analysis in Drug-Resistant Partial Epilepsy 
PLoS Medicine  2008;5(8):e166.
Background
Despite guidelines establishing the need to perform comprehensive paediatric drug development programs, pivotal trials in children with epilepsy have been completed mostly in Phase IV as a postapproval replication of adult data. However, it has been shown that the treatment response in children can differ from that in adults. It has not been investigated whether differences in drug effect between adults and children might occur in the treatment of drug-resistant partial epilepsy, although such differences may have a substantial impact on the design and results of paediatric randomised controlled trials (RCTs).
Methods and Findings
Three electronic databases were searched for RCTs investigating any antiepileptic drug (AED) in the add-on treatment of drug-resistant partial epilepsy in both children and adults. The treatment effect was compared between the two age groups using the ratio of the relative risk (RR) of the 50% responder rate between active AED treatment and placebo groups, as well as meta-regression. Differences in the response to placebo and to active treatment were explored using logistic regression. A comparable approach was used for analysing secondary endpoints, including seizure-free rate, total and adverse event-related withdrawal rates, and withdrawal rate for seizure aggravation. Five AEDs were evaluated in both adults and children with drug-resistant partial epilepsy in 32 RCTs. The treatment effect was significantly lower in children than in adults (RR ratio: 0.67 [95% confidence interval (CI) 0.51–0.89]; p = 0.02 by meta-regression). This difference was related to an age-dependent variation in the response to placebo, with a higher rate in children than in adults (19% versus 9.9%, p < 0.001), whereas no significant difference was observed in the response to active treatment (37.2% versus 30.4%, p = 0.364). The relative risk of the total withdrawal rate was also significantly lower in children than in adults (RR ratio: 0.65 [95% CI 0.43–0.98], p = 0.004 by meta-regression), due to a higher withdrawal rate for seizure aggravation in children (5.6%) than in adults (0.7%) receiving placebo (p < 0.001). Finally, there was no significant difference in the seizure-free rate between adult and paediatric studies.
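The primary comparison, the ratio of relative risks between age groups, can be sketched as follows; the inputs are placeholders, and the standard errors are recovered from the reported CIs on the log scale (a common approximation, not necessarily the authors' exact method):

    import math

    def rr_ratio(rr_child, ci_child, rr_adult, ci_adult):
        """Ratio of RRs (children/adults) with a 95% CI combined on the
        log scale."""
        se = lambda lo, hi: (math.log(hi) - math.log(lo)) / (2 * 1.96)
        log_ratio = math.log(rr_child) - math.log(rr_adult)
        se_ratio = math.hypot(se(*ci_child), se(*ci_adult))
        return (math.exp(log_ratio),
                math.exp(log_ratio - 1.96 * se_ratio),
                math.exp(log_ratio + 1.96 * se_ratio))

    # Placeholder values: RR 1.8 (1.2-2.7) in children vs 2.7 (2.1-3.5) in adults.
    print(rr_ratio(1.8, (1.2, 2.7), 2.7, (2.1, 3.5)))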
Conclusions
Children with drug-resistant partial epilepsy receiving placebo in double-blind RCTs demonstrated a significantly greater 50% responder rate than adults, probably reflecting increased placebo and regression-to-the-mean effects. Paediatric clinical trial designs should account for these age-dependent variations in the response to placebo to reduce the risk of an underestimated sample size that could result in falsely negative trials.
In a systematic review of antiepileptic drugs, Philippe Ryvlin and colleagues find that children with drug-resistant partial epilepsy enrolled in trials seem to have a greater response to placebo than adults enrolled in such trials.
Editors' Summary
Background.
Whenever an adult is given a drug to treat a specific condition, that drug will have been tested in “randomized controlled trials” (RCTs). In RCTs, a drug's effects are compared to those of another drug for the same condition (or to a placebo, dummy drug) by giving groups of adult patients the different treatments and measuring how well each drug deals with the condition and whether it has any other effects on the patients' health. However, many drugs given to children have only been tested in adults, the assumption being that children can safely take the same drugs as adults provided the dose is scaled down. This approach to treatment is generally taken in epilepsy, a common brain disorder in children in which disruptions in the electrical activity of part (partial epilepsy) or all (generalized epilepsy) of the brain cause seizures. The symptoms of epilepsy depend on which part of the brain is disrupted and can include abnormal sensations, loss of consciousness, or convulsions. Most but not all patients can be successfully treated with antiepileptic drugs, which reduce or stop the occurrence of seizures.
Why Was This Study Done?
It is increasingly clear that children and adults respond differently to many drugs, including antiepileptic drugs. For example, children often break down drugs differently from adults, so a safe dose for an adult may be fatal to a child even after scaling down for body size, or it may be ineffective because of quicker clearance from the child's body. Consequently, regulatory bodies around the world now require comprehensive drug development programs in children as well as in adults. However, for pediatric trials to yield useful results, the general differences in the treatment response between children and adults must first be determined and then allowed for in the design of pediatric RCTs. In this study, the researchers investigate whether there is any evidence in published RCTs for age-dependent differences in the response to antiepileptic drugs in drug-resistant partial epilepsy.
What Did the Researchers Do and Find?
The researchers searched the literature for reports of RCTs on the effects of antiepileptic drugs in the add-on treatment of drug-resistant partial epilepsy in children and in adults—that is, trials that compared the effects of giving an additional antiepileptic drug with those of giving a placebo by asking what fraction of patients given each treatment had a 50% reduction in seizure frequency during the treatment period compared to a baseline period (the “50% responder rate”). This “systematic review” yielded 32 RCTs, including five pediatric RCTs. The researchers then compared the treatment effect (the ratio of the 50% responder rate in the treatment arm to the placebo arm) in the two age groups using a statistical approach called “meta-analysis” to pool the results of these studies. The treatment effect, they report, was significantly lower in children than in adults. Further analysis indicated that this difference was because more children than adults responded to the placebo. Nearly 1 in 5 children had a 50% reduction in seizure rate when given a placebo compared to only 1 in 10 adults. About a third of both children and adults had a 50% reduction in seizure rate when given antiepileptic drugs.
What Do These Findings Mean?
These findings, although limited by the small number of pediatric trials done so far, suggest that children with drug-resistant partial epilepsy respond more strongly to placebo in RCTs than adults do. Although additional studies are needed to explain this observation and to discover whether anything similar occurs in other conditions, this difference between children and adults should be taken into account in the design of future pediatric trials of antiepileptic drugs, and possibly of drugs for other conditions. Specifically, to reduce the risk of false-negative results, it might be necessary to increase the size of future pediatric trials so that they have enough power to detect the effects of the drugs tested, if such effects exist.
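To see why a higher placebo response forces larger trials, a standard two-proportion sample-size approximation can be applied to illustrative responder rates based on the figures above (roughly 1 in 3 responding to drug, versus a 1-in-10 adult and 1-in-5 pediatric placebo response). This is a textbook calculation, not one performed in the study.

```python
import math

def n_per_arm(p_drug, p_placebo):
    """Approximate per-arm sample size for comparing two proportions
    (two-sided normal approximation with alpha = 0.05 and power = 0.80)."""
    z_alpha, z_beta = 1.96, 0.8416  # z-values for alpha/2 = 0.025 and power = 0.80
    p_bar = (p_drug + p_placebo) / 2
    num = (z_alpha * math.sqrt(2 * p_bar * (1 - p_bar))
           + z_beta * math.sqrt(p_drug * (1 - p_drug) + p_placebo * (1 - p_placebo))) ** 2
    return math.ceil(num / (p_drug - p_placebo) ** 2)

# Same assumed drug response (~1 in 3) but different placebo responses:
print(n_per_arm(0.33, 0.10))  # ~49 per arm with an adult-like 1-in-10 placebo response
print(n_per_arm(0.33, 0.20))  # ~180 per arm with a pediatric-like 1-in-5 placebo response
```

Under these assumptions the pediatric trial needs more than three times as many participants per arm to detect the same drug effect.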
Additional Information.
Please access these Web sites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.0050166.
This study is further discussed in a PLoS Medicine Perspective by Terry Klassen and colleagues
The European Medicines Agency provides information about the regulation of medicines for children in Europe
The US Food and Drug Administration Office of Pediatric Therapeutics provides similar information for the US
The UK Medicines and Healthcare products Regulatory Agency also provides information on why medicines need to be tested in children
The MedlinePlus encyclopedia has a page on epilepsy (in English and Spanish)
The US National Institute of Neurological Disorders and Stroke and the UK National Health Service Direct health encyclopedia both provide information on epilepsy for patients (in several languages)
Neuroscience for Kids is an educational Web site prepared by Eric Chudler (University of Washington, Seattle, US) that includes information on epilepsy and a list of links to epilepsy organizations (mainly in English but some sections in other languages as well)
doi:10.1371/journal.pmed.0050166
PMCID: PMC2504483  PMID: 18700812
18.  Strategies for Increasing Recruitment to Randomised Controlled Trials: Systematic Review 
PLoS Medicine  2010;7(11):e1000368.
Patrina Caldwell and colleagues performed a systematic review of randomized studies that compared methods of recruiting individual study participants into trials, and found that strategies that focus on increasing potential participants' awareness of the specific health problem, and that engage them, appear to increase recruitment.
Background
Recruitment of participants into randomised controlled trials (RCTs) is critical for successful trial conduct. Two previous systematic reviews on related topics identified specific interventions, but their results were inconclusive and not generalizable. The aim of our study was to evaluate the relative effectiveness of recruitment strategies for participation in RCTs.
Methods and Findings
We conducted a systematic review, following the PRISMA guideline for the reporting of systematic reviews; studies that compared methods of recruiting individual participants into an actual or mock RCT were included. We searched MEDLINE, Embase, The Cochrane Library, and reference lists of relevant studies. From over 16,000 titles or abstracts reviewed, 396 papers were retrieved and 37 studies were included, in which 18,812 of at least 59,354 people approached agreed to participate in a clinical RCT. Recruitment strategies were broadly divided into four groups: novel trial designs (eight studies), recruiter differences (eight studies), incentives (two studies), and provision of trial information (19 studies). Strategies that increased people's awareness of the health problem being studied (e.g., an interactive computer program [relative risk (RR) 1.48, 95% confidence interval (CI) 1.00–2.18], attendance at an education session [RR 1.14, 95% CI 1.01–1.28], addition of a health questionnaire [RR 1.37, 95% CI 1.14–1.66], or a video about the health condition [RR 1.75, 95% CI 1.11–2.74]), as well as monetary incentives (RR 1.39, 95% CI 1.13–1.64 to RR 1.53, 95% CI 1.28–1.84), improved recruitment. Increasing patients' understanding of the trial process, recruiter differences, and various methods of randomisation and consent design did not show a difference in recruitment. Consent rates were also higher for nonblinded trial designs, but differential loss to follow-up between groups may jeopardise the study findings. The study's main limitation was the necessity of modifying the search strategy with subsequent search updates because of changes in MEDLINE definitions. The abstracts of previous versions of this systematic review were published in 2002 and 2007.
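For readers wanting to reproduce figures of this kind, the sketch below shows the standard risk-ratio calculation with a log-scale 95% confidence interval; the recruitment counts are hypothetical and are not taken from any included trial.

```python
import math

def risk_ratio(events_a, n_a, events_b, n_b):
    """Relative risk of recruitment (strategy vs control) with a 95% CI
    from the usual normal approximation on the log scale."""
    rr = (events_a / n_a) / (events_b / n_b)
    se = math.sqrt(1 / events_a - 1 / n_a + 1 / events_b - 1 / n_b)
    lo = math.exp(math.log(rr) - 1.96 * se)
    hi = math.exp(math.log(rr) + 1.96 * se)
    return rr, lo, hi

# Hypothetical counts: 90/300 recruited with a health questionnaire vs 66/300 without
rr, lo, hi = risk_ratio(90, 300, 66, 300)
print(f"RR {rr:.2f}, 95% CI {lo:.2f} to {hi:.2f}")  # RR 1.36 (1.04 to 1.79)
```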
Conclusion
Recruitment strategies that focus on increasing potential participants' awareness of the health problem being studied, its potential impact on their health, and their engagement in the learning process appeared to increase recruitment to clinical studies. Further trials of recruitment strategies that engage participants and increase their awareness of the health problems being studied, and of the potential impact on their health, may confirm this hypothesis.
Please see later in the article for the Editors' Summary
Editors' Summary
Background
Before any health care intervention—a treatment for a disease or a measure such as vaccination that is designed to prevent an illness—is adopted by the medical community, it undergoes exhaustive laboratory-based and clinical research. In the laboratory, scientists investigate the causes of diseases, identify potential new treatments or preventive methods, and test these interventions in animals. New interventions that look hopeful are then investigated in clinical trials—studies that test these interventions in people by following a strict trial protocol or action plan. Phase I trials test interventions in a few healthy volunteers or patients to evaluate their safety and to identify possible side effects. In phase II trials, a larger group of patients receives an intervention to evaluate its safety further and to get an initial idea of its effectiveness. In phase III trials, very large groups of patients (sometimes in excess of a thousand people) are randomly assigned to receive the new intervention or an established intervention or placebo (dummy intervention). These “randomized controlled trials” or “RCTs” provide the most reliable information about the effectiveness and safety of health care interventions.
Why Was This Study Done?
Patients who participate in clinical trials must fulfill the inclusion criteria laid down in the trial protocol and must be given information about the trial, its risks, and potential benefits before agreeing to participate (informed consent). Unfortunately, many RCTs struggle to enroll the number of patients specified in their trial protocol, which can reduce a trial's ability to measure the effect of a new intervention. Inadequate recruitment can also increase costs and, in the worst cases, prevent trial completion. Several strategies have been developed to improve recruitment but it is not clear which strategy works best. In this study, the researchers undertake a systematic review (a study that uses predefined criteria to identify all the research on a given topic) of “recruitment trials”—studies that have randomly divided potential RCT participants into groups, applied different strategies for recruitment to each group, and compared recruitment rates in the groups.
What Did the Researchers Do and Find?
The researchers identified 37 randomized trials of recruitment strategies into real and mock RCTs (where no actual trial occurred). In all, 18,812 people agreed to participate in an RCT in these recruitment trials out of at least 59,354 people approached. Some of these trials investigated novel strategies for recruitment, such as changes in how patients are randomized. Others looked at the effect of recruiter differences (for example, increased contact between the health care professionals doing the recruiting and the trial investigators), the effect of offering monetary incentives to participants, and the effect of giving more information about the trial to potential participants. Recruitment strategies that improved people's awareness of the health problem being studied—provision of an interactive computer program or a video about the health condition, attendance at an educational session, or inclusion of a health questionnaire in the recruitment process—improved recruitment rates, as did monetary incentives. Increasing patients' understanding about the trial process itself, recruiter differences, and alterations in consent design and randomization generally had no effect on recruitment rates although consent rates were higher when patients knew the treatment to which they had been randomly allocated before consenting. However, differential losses among the patients in different treatment groups in such nonblinded trials may jeopardize study findings.
What Do These Findings Mean?
These findings suggest that trial recruitment strategies that focus on increasing the awareness of potential participants of the health problem being studied and its possible effects on their health, and that engage potential participants in the trial process are likely to increase recruitment to RCTs. The accuracy of these findings depends on whether the researchers identified all the published research on recruitment strategies and on whether other research on recruitment strategies has been undertaken and not published that could alter these findings. Furthermore, because about half of the recruitment trials identified by the researchers were undertaken in the US, the successful strategies identified here might not be generalizable to other countries. Nevertheless, these recruitment strategies should now be investigated further to ensure that the future evaluation of new health care interventions is not hampered by poor recruitment into RCTs.
Additional Information
Please access these Web sites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.1000368.
The ClinicalTrials.gov Web site is a searchable register of federally and privately supported clinical trials in the US and around the world, providing information about all aspects of clinical trials
The US National Institutes of Health provides information about clinical trials
The UK National Health Service Choices Web site has information for patients about clinical trials and medical research
The UK Medical Research Council Clinical Trials Units also provides information for patients about clinical trials and links to information on clinical trials provided by other organizations
MedlinePlus has links to further resources on clinical trials (in English and Spanish)
The Australian Government's National Health and Medical Research Council has information about clinical trials
WHO International Clinical Trials Registry Platform aims to ensure that all trials are publicly accessible to those making health care decisions
The Star Child Health International Forum of Standards for Research is a resource center for pediatric clinical trial design, conduct, and reporting
doi:10.1371/journal.pmed.1000368
PMCID: PMC2976724  PMID: 21085696
19.  Primaquine or other 8-aminoquinoline for reducing Plasmodium falciparum transmission 
Background
Mosquitoes become infected with Plasmodium when they ingest gametocyte-stage parasites from an infected person's blood. Plasmodium falciparum gametocytes are sensitive to the drug primaquine (PQ) and other 8-aminoquinolines (8AQ); these drugs could prevent parasite transmission from infected people to mosquitoes, and consequently reduce the incidence of malaria. However, PQ will not directly benefit the individual, and could be harmful to those with glucose-6-phosphate dehydrogenase (G6PD) deficiency.
In 2010, the World Health Organization (WHO) recommended a single dose of PQ at 0.75 mg/kg, alongside treatment for P. falciparum malaria, to reduce transmission in areas approaching malaria elimination. In 2013, the WHO revised this to 0.25 mg/kg due to concerns about safety.
Objectives
To assess whether giving PQ or an alternative 8AQ alongside treatment for P. falciparum malaria reduces malaria transmission, and to estimate the frequency of severe or haematological adverse events when PQ is given for this purpose.
Search methods
We searched the following databases up to 10 Feb 2014 for trials: the Cochrane Infectious Diseases Group Specialized Register; the Cochrane Central Register of Controlled Trials (CENTRAL), published in The Cochrane Library; MEDLINE; EMBASE; LILACS; metaRegister of Controlled Trials (mRCT); and the WHO trials search portal using 'malaria*', 'falciparum', and 'primaquine' as search terms. In addition, we searched conference proceedings and reference lists of included studies, and contacted researchers and organizations.
Selection criteria
Randomized controlled trials (RCTs) or quasi-RCTs comparing PQ (or alternative 8AQ) given as a single dose or short course alongside treatment for P. falciparum malaria with malaria treatment given without PQ/8AQ in adults or children.
Data collection and analysis
Two authors independently screened all abstracts, applied inclusion criteria, and extracted data. We sought evidence of an impact on transmission (community incidence), infectiousness (mosquitoes infected from humans) and potential infectiousness (gametocyte measures). We calculated the area under the curve (AUC) for gametocyte density over time for comparisons for which data were available. We sought data on haematological and other adverse effects, as well as secondary outcomes of asexual clearance time and recrudescence. We stratified by whether the malaria treatment regimen included an artemisinin derivative or not; by PQ dose category (low < 0.4 mg/kg; medium ≥ 0.4 to < 0.6 mg/kg; high ≥ 0.6 mg/kg); and by PQ schedules. We used the GRADE approach to assess evidence quality.
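As an illustration of the AUC outcome, the sketch below applies the trapezoidal rule to a hypothetical gametocyte-density curve; the visit days, densities, and the log transformation shown are assumptions, and the review's exact AUC computation may differ.

```python
import math

# Hypothetical follow-up data for one participant (not from any included trial)
days    = [1, 3, 8, 15, 29, 43]                  # visit days
density = [120.0, 60.0, 18.0, 4.0, 1.0, 0.0]     # gametocytes per microlitre

def trapezoid_auc(x, y):
    """Area under the curve by the trapezoidal rule."""
    return sum((x[i + 1] - x[i]) * (y[i] + y[i + 1]) / 2 for i in range(len(x) - 1))

auc = trapezoid_auc(days, density)
print(f"AUC, days 1-43: {auc:.1f} gametocyte-days/uL")
print(f"log10(AUC + 1): {math.log10(auc + 1):.2f}")  # one plausible log transform
```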
Main results
We included 17 RCTs and one quasi-RCT. Eight studies tested for G6PD status: six then excluded participants with G6PD deficiency, one included only those with G6PD deficiency, and one included all irrespective of status. The remaining ten trials either did not report on whether they tested (8), or reported that they did not test (2). Nine trials included study arms with artemisinin-based malaria treatment regimens, and eleven included study arms with non-artemisinin-based treatments.
Only two trials evaluated PQ given at low doses (0.25 mg/kg in one and 0.1 mg/kg in the other).
PQ with artemisinin-based treatments: No trials evaluated effects on malaria transmission directly (incidence, prevalence, or entomological inoculation rate), and none evaluated infectiousness to mosquitoes. For potential infectiousness, the proportion of people with detectable gametocytaemia on day eight was reduced by around two thirds with high dose PQ category (RR 0.29, 95% CI 0.22 to 0.37, seven trials, 1380 participants, high quality evidence), and with medium dose PQ category (RR 0.34, 95% CI 0.19 to 0.59, two trials, 269 participants, high quality evidence), but the trial evaluating low dose PQ category (0.1 mg/kg) did not demonstrate an effect (RR 0.67, 95% CI 0.44 to 1.02, one trial, 223 participants, low quality evidence). Reductions in log(10)AUC estimates for gametocytaemia on days 1 to 43 with medium and high doses ranged from 24.3% to 87.5%. For haemolysis, one trial reported percent change in mean haemoglobin against baseline, and did not detect a difference between the two arms (very low quality evidence).
PQ with non-artemisinin treatments: No trials assessed effects on malaria transmission directly. Two small trials from the same laboratory evaluated infectiousness to mosquitoes, and report that infectivity was eliminated on day 8 in 15/15 patients receiving high dose PQ compared to 1/15 in the control group (low quality evidence). For potential infectiousness, the proportion of people with detectable gametocytaemia on day 8 was reduced by around half with high dose PQ category (RR 0.44, 95% CI 0.27 to 0.70, three trials, 206 participants, high quality evidence), and by around a third with medium dose category (RR 0.62, 0.50 to 0.76, two trials, 283 participants, high quality evidence), but the single trial using low dose PQ category did not demonstrate a difference between groups (one trial, 59 participants, very low quality evidence). Reduction in log(10)AUC for gametocytaemia days 1 to 43 were 24.3% and 27.1% for two arms in one trial giving medium dose PQ. No trials systematically sought evidence of haemolysis.
Two trials evaluated the 8AQ bulaquine, and suggest its effects may be greater than those of PQ, but the small number of participants (n = 112) precludes a definite conclusion.
Authors' conclusions
In individual patients, PQ added to malaria treatments reduces gametocyte prevalence when given in doses greater than 0.4 mg/kg. Whether this translates into preventing people transmitting malaria to mosquitoes has rarely been tested in controlled trials, but there appeared to be a strong reduction in infectiousness in the two small studies that evaluated this. No included trials evaluated whether this policy has an impact on community malaria transmission either in low-endemic settings approaching elimination, or in highly-endemic settings where many people are infected but have no symptoms and are unlikely to be treated.
For the currently recommended low dose regimen, there is little direct evidence to be confident that the effect of reduction in gametocyte prevalence is preserved.
Most trials excluded people with G6PD deficiency, and thus there is little reliable evidence from controlled trials of the safety of PQ in single dose or short course.
PLAIN LANGUAGE SUMMARY
A single dose of primaquine added to malaria treatment to prevent malaria transmission
We conducted a review of the effects of adding a single dose (or short course) of primaquine to malaria treatment with the aim of reducing the transmission of malaria. We included 17 randomized controlled trials and one quasi-randomized controlled trial.
What is primaquine and how might it reduce transmission
Primaquine is an antimalarial drug which does not cure malaria illness, but is known to kill the gametocyte stage of the malaria parasite which infects mosquitoes when they bite humans. Primaquine is also known to have potentially serious side effects in people with an enzyme deficiency common in many malaria endemic settings (glucose-6-phosphate dehydrogenase (G6PD) deficiency). In these people, high doses of primaquine given over several days sometimes destroy red blood cells, causing anaemia and, in some cases, possibly life-threatening effects.
The World Health Organization (WHO) recommends adding a single dose of primaquine to malaria treatment with the intention of reducing malaria transmission and contributing to malaria elimination. This recommendation was made in 2010, but in 2013 the WHO amended its recommendation from a dose of 0.75 mg/kg to 0.25 mg/kg due to concerns about safety, and indirect evidence suggesting this was as effective as the higher dose. This review examines the evidence of benefits and harms of using primaquine in this way, and looks for evidence that primaquine will reduce malaria transmission in communities.
What the research says
We did not find any studies that tested whether primaquine added to malaria treatment reduces the community transmission of malaria.
For primaquine added to current treatments for malaria (artemisinin-based combination therapy), we found no studies evaluating the effects on the number of mosquitoes infected. However, primaquine does reduce the duration of infectiousness (the period that gametocytes are detected circulating in the blood) when given at doses of 0.4 mg/kg or above (high quality evidence). We found only one study using 0.1 mg/kg, but this study did not conclusively show that primaquine was still effective at this dose (low quality evidence).
When added to older treatments for malaria, two studies showed that primaquine at doses of 0.75 mg/kg reduced the number of mosquitoes infected after biting humans (low quality evidence). Doses above 0.4 mg/kg reduced the duration of detectable gametocytes (high quality evidence), but in a single study of the currently recommended 0.25 mg/kg no effect was demonstrated (very low quality evidence).
Some studies excluded patients with G6PD deficiency, some included them, and some did not comment. Overall the safety of PQ given as a single dose was poorly evaluated across all studies, so these data do not demonstrate whether the drug is safe or potentially harmful at this dosing level.
doi:10.1002/14651858.CD008152.pub4
PMCID: PMC4455224  PMID: 25693791
20.  Primaquine or other 8-aminoquinoline for reducing P. falciparum transmission 
Background
Mosquitoes become infected with Plasmodium when they ingest gametocyte-stage parasites from an infected person's blood. Plasmodium falciparum gametocytes are sensitive to the drug primaquine (PQ) and other 8-aminoquinolines (8AQ); these drugs could prevent parasite transmission from infected people to mosquitoes, and consequently reduce the incidence of malaria. However, PQ will not directly benefit the individual, and could be harmful to those with glucose-6-phosphate dehydrogenase (G6PD) deficiency.
In 2010, the World Health Organization (WHO) recommended a single dose of PQ at 0.75 mg/kg, alongside treatment for P. falciparum malaria, to reduce transmission in areas approaching malaria elimination. In 2013, the WHO revised this to 0.25 mg/kg due to concerns about safety.
Objectives
To assess whether giving PQ or an alternative 8AQ alongside treatment for P. falciparum malaria reduces malaria transmission, and to estimate the frequency of severe or haematological adverse events when PQ is given for this purpose.
Search methods
We searched the following databases up to 10 Feb 2014 for trials: the Cochrane Infectious Diseases Group Specialized Register; the Cochrane Central Register of Controlled Trials (CENTRAL), published in The Cochrane Library; MEDLINE; EMBASE; LILACS; metaRegister of Controlled Trials (mRCT); and the WHO trials search portal using 'malaria*', 'falciparum', and 'primaquine' as search terms. In addition, we searched conference proceedings and reference lists of included studies, and contacted researchers and organizations.
Selection criteria
Randomized controlled trials (RCTs) or quasi-RCTs comparing PQ (or alternative 8AQ) given as a single dose or short course alongside treatment for P. falciparum malaria with malaria treatment given without PQ/8AQ in adults or children.
Data collection and analysis
Two authors independently screened all abstracts, applied inclusion criteria, and extracted data. We sought evidence of an impact on transmission (community incidence), infectiousness (mosquitoes infected from humans) and potential infectiousness (gametocyte measures). We calculated the area under the curve (AUC) for gametocyte density over time for comparisons for which data were available. We sought data on haematological and other adverse effects, as well as secondary outcomes of asexual clearance time and recrudescence. We stratified by whether the malaria treatment regimen included an artemisinin derivative or not; by PQ dose category (low < 0.4 mg/kg; medium ≥ 0.4 to < 0.6 mg/kg; high ≥ 0.6 mg/kg); and by PQ schedules. We used the GRADE approach to assess evidence quality.
Main results
We included 17 RCTs and one quasi-RCT. Eight studies tested for G6PD status: six then excluded participants with G6PD deficiency, one included only those with G6PD deficiency, and one included all irrespective of status. The remaining ten trials either did not report on whether they tested (8), or reported that they did not test (2). Nine trials included study arms with artemisinin-based malaria treatment regimens, and eleven included study arms with non-artemisinin-based treatments.
Only two trials evaluated PQ given at low doses (0.25 mg/kg in one and 0.1 mg/kg in the other).
PQ with artemisinin-based treatments: No trials evaluated effects on malaria transmission directly (incidence, prevalence, or entomological inoculation rate), and none evaluated infectiousness to mosquitoes. For potential infectiousness, the proportion of people with detectable gametocytaemia on day eight was reduced by around two thirds with high dose PQ category (RR 0.29, 95% CI 0.22 to 0.37, seven trials, 1380 participants, high quality evidence), and with medium dose PQ category (RR 0.34, 95% CI 0.19 to 0.59, two trials, 269 participants, high quality evidence), but the trial evaluating low dose PQ category (0.1 mg/kg) did not demonstrate an effect (RR 0.67, 95% CI 0.44 to 1.02, one trial, 223 participants, low quality evidence). Reductions in log(10)AUC estimates for gametocytaemia on days 1 to 43 with medium and high doses ranged from 24.3% to 87.5%. For haemolysis, one trial reported percent change in mean haemoglobin against baseline, and did not detect a difference between the two arms (very low quality evidence).
PQ with non-artemisinin treatments: No trials assessed effects on malaria transmission directly. Two small trials from the same laboratory evaluated infectiousness to mosquitoes, and report that infectivity was eliminated on day 8 in 15/15 patients receiving high dose PQ compared to 1/15 in the control group (low quality evidence). For potential infectiousness, the proportion of people with detectable gametocytaemia on day 8 was reduced by around half with high dose PQ category (RR 0.44, 95% CI 0.27 to 0.70, three trials, 206 participants, high quality evidence), and by around a third with medium dose category (RR 0.62, 0.50 to 0.76, two trials, 283 participants, high quality evidence), but the single trial using low dose PQ category did not demonstrate a difference between groups (one trial, 59 participants, very low quality evidence). Reduction in log(10)AUC for gametocytaemia days 1 to 43 were 24.3% and 27.1% for two arms in one trial giving medium dose PQ. No trials systematically sought evidence of haemolysis.
Two trials evaluated the 8AQ bulaquine, and suggest its effects may be greater than those of PQ, but the small number of participants (n = 112) precludes a definite conclusion.
Authors' conclusions
In individual patients, PQ added to malaria treatments reduces gametocyte prevalence when given in doses greater than 0.4 mg/kg. Whether this translates into preventing people transmitting malaria to mosquitoes has rarely been tested in controlled trials, but there appeared to be a strong reduction in infectiousness in the two small studies that evaluated this. No included trials evaluated whether this policy has an impact on community malaria transmission either in low-endemic settings approaching elimination, or in highly-endemic settings where many people are infected but have no symptoms and are unlikely to be treated.
For the currently recommended low dose regimen, there is little direct evidence to be confident that the effect of reduction in gametocyte prevalence is preserved.
Most trials excluded people with G6PD deficiency, and thus there is little reliable evidence from controlled trials of the safety of PQ in single dose or short course.
PLAIN LANGUAGE SUMMARY
A single dose of primaquine added to malaria treatment to prevent malaria transmission
We conducted a review of the effects of adding a single dose (or short course) of primaquine to malaria treatment with the aim of reducing the transmission of malaria. We included 17 randomized controlled trials and one quasi-randomized controlled trial.
What is primaquine and how might it reduce transmission
Primaquine is an antimalarial drug which does not cure malaria illness, but is known to kill the gametocyte stage of the malaria parasite which infects mosquitoes when they bite humans. Primaquine is also known to have potentially serious side effects in people with an enzyme deficiency common in many malaria endemic settings (glucose-6-phosphate dehydrogenase (G6PD) deficiency). In these people, high doses of primaquine given over several days sometimes destroy red blood cells, causing anaemia and, in some cases, possibly life-threatening effects.
The World Health Organization (WHO) recommends adding a single dose of primaquine to malaria treatment with the intention of reducing malaria transmission and contributing to malaria elimination. This recommendation was made in 2010, but in 2013 the WHO amended its recommendation from a dose of 0.75 mg/kg to 0.25 mg/kg due to concerns about safety, and indirect evidence suggesting this was as effective as the higher dose. This review examines the evidence of benefits and harms of using primaquine in this way, and looks for evidence that primaquine will reduce malaria transmission in communities.
What the research says
We did not find any studies that tested whether primaquine added to malaria treatment reduces the community transmission of malaria.
For primaquine added to current treatments for malaria (artemisinin-based combination therapy), we found no studies evaluating the effects on the number of mosquitoes infected. However, primaquine does reduce the duration of infectiousness (the period that gametocytes are detected circulating in the blood) when given at doses of 0.4 mg/kg or above (high quality evidence). We found only one study using 0.1 mg/kg, but this study did not conclusively show that primaquine was still effective at this dose (low quality evidence).
When added to older treatments for malaria, two studies showed that primaquine at doses of 0.75 mg/kg reduced the number of mosquitoes infected after biting humans (low quality evidence). Doses above 0.4 mg/kg reduced the duration of detectable gametocytes (high quality evidence), but in a single study of the currently recommended 0.25 mg/kg no effect was demonstrated (very low quality evidence).
Some studies excluded patients with G6PD deficiency, some included them, and some did not comment. Overall the safety of PQ given as a single dose was poorly evaluated across all studies, so these data do not demonstrate whether the drug is safe or potentially harmful at this dosing level.
doi:10.1002/14651858.CD008152.pub3
PMCID: PMC4456193  PMID: 24979199
21.  Cancer Screening with Digital Mammography for Women at Average Risk for Breast Cancer, Magnetic Resonance Imaging (MRI) for Women at High Risk 
Executive Summary
Objective
The purpose of this review is to determine the effectiveness of 2 separate modalities, digital mammography (DM) and magnetic resonance imaging (MRI), relative to film mammography (FM), in the screening of women asymptomatic for breast cancer. A third analysis assesses the effectiveness and safety of the combination of MRI plus mammography (MRI plus FM) in screening of women at high risk. An economic analysis was also conducted.
Research Questions
How do the sensitivity and specificity of DM compare with those of FM?
How do the sensitivity and specificity of MRI compare with those of FM?
How do the recall rates compare among these screening modalities, and what effect might this have on radiation exposure? What are the risks associated with radiation exposure?
How do the sensitivity and specificity of the combination of MRI plus FM compare with those of either MRI or FM alone?
What are the economic considerations?
Clinical Need
The effectiveness of FM with respect to breast cancer mortality in the screening of asymptomatic average-risk women over the age of 50 has been established. However, based on a Medical Advisory Secretariat review completed in March 2006, screening is not recommended for women between the ages of 40 and 49 years. Guidelines published by the Canadian Task Force on Preventive Health Care recommend mammography screening every 1 to 2 years for women aged 50 years and over; hence, the inclusion of such women in organized breast cancer screening programs. In addition to the uncertainty about the effectiveness of mammography screening from the age of 40 years, there is concern over the risks associated with mammographic screening for the 10 years between the ages of 40 and 49 years.
The reported lack of effectiveness of mammography screening starting at the age of 40 years (with respect to breast cancer mortality) rests on the assumption that the ability to detect cancer decreases with increased breast tissue density. As breast density is highest in the premenopausal years (approximately 23% of postmenopausal and 53% of premenopausal women have at least 50% of the breast occupied by high-density tissue), mammography screening is not promoted in Canada or in many other countries for women under the age of 50 at average risk for breast cancer. It is important to note, however, that screening of premenopausal women (i.e., younger than 50 years of age) at high risk for breast cancer by virtue of a family history of cancer or a known genetic predisposition (e.g., having tested positive for the breast cancer genes BRCA1 and/or BRCA2) is appropriate. Thus, this review will assess the effectiveness of breast cancer screening with modalities other than film mammography, specifically DM and MRI, for both pre/perimenopausal and postmenopausal age groups.
International estimates of the epidemiology of breast cancer show that the incidence of breast cancer is increasing for all ages combined, whereas mortality is decreasing, though at a slower rate. The observed decreases in mortality rates may be attributable to screening, in addition to advances in breast cancer therapy over time. Decreases in mortality attributable to screening may be a result of the earlier detection and treatment of invasive cancers, in addition to the increased detection of ductal carcinoma in situ (DCIS), of which certain subpathologies are less lethal. Evidence from the Surveillance, Epidemiology and End Results (better known as SEER) cancer registry in the United States indicates that the age-adjusted incidence of DCIS has increased almost 10-fold over a 20-year period, from 2.7 to 25 per 100,000.
There is a 4-fold lower incidence of breast cancer in the 40 to 49 year age group than in the 50 to 69 year age group (approximately 140 per 100,000 versus 500 per 100,000 women, respectively). The sensitivity of FM is also lower among younger women (approximately 75%) than for women aged over 50 years (approximately 85%). Specificity is approximately 80% for younger women versus 90% for women over 50 years. The increased density of breast tissue in younger women is likely responsible for the decreased accuracy of FM.
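A quick Bayes calculation shows how lower prevalence and lower test accuracy combine to depress the yield of screening in younger women. The sketch below uses the approximate figures quoted above and, as a deliberate simplification, treats annual incidence as the per-screen prevalence; it is illustrative only and not part of this review's analysis.

```python
def ppv(sens, spec, prevalence):
    """Positive predictive value from sensitivity, specificity and prevalence (Bayes' rule)."""
    true_pos = sens * prevalence
    false_pos = (1 - spec) * (1 - prevalence)
    return true_pos / (true_pos + false_pos)

# Figures quoted above: ~75% sens / 80% spec at ~140 per 100,000 for ages 40-49;
# ~85% sens / 90% spec at ~500 per 100,000 for ages 50-69.
print(f"40-49: PPV = {ppv(0.75, 0.80, 140 / 100_000):.3%}")  # ~0.5% of positives are cancer
print(f"50-69: PPV = {ppv(0.85, 0.90, 500 / 100_000):.3%}")  # ~4% of positives are cancer
```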
Treatment options for breast cancer vary with the stage of disease (based on tumor size, involvement of surrounding tissue, and number of affected axillary lymph nodes) and its pathology, and may include a combination of surgery, chemotherapy and/or radiotherapy. Surgery is the first-line intervention for biopsy-confirmed tumors. The subsequent use of radiation, chemotherapy or hormonal treatments is dependent on the histopathologic characteristics of the tumor and the type of surgery. There is controversy regarding the optimal treatment of DCIS, which is considered a noninvasive tumour.
Women at high risk for breast cancer are defined as genetic carriers of the more commonly known breast cancer genes (BRCA1, BRCA2, TP53), first degree relatives of carriers, women with varying degrees of high risk family histories, and/or women with greater than 20% lifetime risk for breast cancer based on existing risk models. Genetic carriers for this disease, primarily women with BRCA1 or BRCA2 mutations, have a lifetime probability of approximately 85% of developing breast cancer. Preventive options for these women include surgical interventions such as prophylactic mastectomy and/or oophorectomy, i.e., removal of the breasts and/or ovaries. Therefore, it is important to evaluate the benefits and risks of different screening modalities to identify additional options for these women.
This Medical Advisory Secretariat review is the second of 2 parts on breast cancer screening, and concentrates on the evaluation of both DM and MRI relative to FM, the standard of care. Part I of this review (March 2006) addressed the effectiveness of screening mammography in 40 to 49 year old average-risk women. The overall objective of the present review is to determine the optimal screening modality based on the evidence.
Evidence Review Strategy
The Medical Advisory Secretariat followed its standard procedures and searched the following electronic databases: Ovid MEDLINE, EMBASE, Ovid MEDLINE In-Process & Other Non-Indexed Citations, Cochrane Central Register of Controlled Trials, Cochrane Database of Systematic Reviews, and The International Network of Agencies for Health Technology Assessment database. The subject headings and keywords searched included breast cancer, breast neoplasms, mass screening, digital mammography, and magnetic resonance imaging. The detailed search strategies can be viewed in Appendix 1.
Included in this review are articles specific to screening; evidence on diagnostic mammography was not included. The search was further restricted to English-language articles published between January 1996 and April 2006. Excluded were case reports, comments, editorials, nonsystematic reviews, and letters.
Digital Mammography: In total, 224 articles specific to DM screening were identified. These were examined against the inclusion/exclusion criteria described below, resulting in the selection and review of 5 health technology assessments (HTAs) (plus 1 update) and 4 articles specific to screening with DM.
Magnetic Resonance Imaging: In total, 193 articles specific to MRI were identified. These were examined against the inclusion/exclusion criteria described below, resulting in the selection and review of 2 HTAs and 7 articles specific to screening with MRI.
The evaluation of the addition of FM to MRI in the screening of women at high risk for breast cancer was also conducted within the context of the standard search procedures of the Medical Advisory Secretariat, as outlined above. The subject headings and keywords searched included the concepts of breast cancer, magnetic resonance imaging, mass screening, and high risk/predisposition to breast cancer. The search was further restricted to English-language articles published between September 2007 and January 15, 2010. Case reports, comments, editorials, nonsystematic reviews, and letters were not excluded.
MRI plus mammography: In total, 243 articles specific to MRI plus FM screening were identified. These were examined against the inclusion/exclusion criteria described below, resulting in the selection and review of 2 previous HTAs, and 1 systematic review of 11 paired design studies.
Inclusion Criteria
English-language articles, and English or French-language HTAs published from January 1996 to April 2006, inclusive.
Articles specific to screening of women with no personal history of breast cancer.
Studies in which DM or MRI were compared with FM, and where the specific outcomes of interest were reported.
Randomized controlled trials (RCTs) or paired studies only for assessment of DM.
Prospective, paired studies only for assessment of MRI.
Exclusion Criteria
Studies in which outcomes were not specific to those of interest in this report.
Studies in which women had been previously diagnosed with breast cancer.
Studies in which the intervention (DM or MRI) was not compared with FM.
Studies assessing DM with a sample size of less than 500.
Intervention
Digital mammography.
Magnetic resonance imaging.
Comparator
Screening with film mammography.
Outcomes of Interest
Breast cancer mortality (although no studies were found with such long follow-up).
Sensitivity.
Specificity.
Recall rates.
Summary of Findings
Digital Mammography
There is moderate quality evidence that DM is significantly more sensitive than FM in the screening of asymptomatic women aged less than 50 years, those who are premenopausal or perimenopausal, and those with heterogeneously or extremely dense breast tissue (regardless of age).
It is not known what effect these differences in sensitivity will have on the more important effectiveness outcome measure of breast cancer mortality, as there was no evidence of such an assessment.
Other factors have been put forward in favour of DM, for example, recall rates and reading and examination times. Our analysis did not show that recall rates were necessarily improved with DM, though examination times were lower than for FM. Other factors, including the storage and retrieval of screens, were not the subject of this analysis.
Magnetic Resonance Imaging
There is moderate quality evidence that the sensitivity of MRI is significantly higher than that of FM in the screening of women at high risk for breast cancer based on genetic or familial factors, regardless of age.
Radiation Risk Review
Cancer Care Ontario conducted a review of the evidence on radiation risk in mammography screening of women at high risk for breast cancer. From this review of the recent literature, and from a risk assessment that considered the potential impact of screening mammography in cohorts of women who start screening at an earlier age or who are at increased risk of developing breast cancer due to genetic susceptibility, the following conclusions can be drawn:
For women over 50 years of age, the benefits of mammography greatly outweigh the risk of radiation-induced breast cancer irrespective of the level of a woman’s inherent breast cancer risk.
Annual mammography for women aged 30 – 39 years who carry a breast cancer susceptibility gene or who have a strong family breast cancer history (defined as a first degree relative diagnosed in their thirties) has a favourable benefit:risk ratio. Mammography is estimated to detect 16 to 18 breast cancer cases for every one induced by radiation (Table 1). Initiation of screening at age 35 for this same group would increase the benefit:risk ratio to an even more favourable level of 34-50 cases detected for each one potentially induced.
Mammography for women under 30 years of age has an unfavourable benefit:risk ratio due to the challenges of detecting cancer in younger breasts, the aggressiveness of cancers at this age, the potential for radiation susceptibility at younger ages and a greater cumulative radiation exposure.
Mammography used in combination with MRI for women who carry a strong breast cancer susceptibility (e.g., BRCA1/2 carriers), if begun at age 35 and continued for 35 years, may confer a greatly improved benefit:risk ratio, estimated at about 220 to one.
While there is considerable uncertainty in the risk of radiation-induced breast cancer, the risk expressed in published studies is almost certainly conservative as the radiation dose absorbed by women receiving mammography recently has been substantially reduced by newer technology.
A CCO update of the mammography radiation risk literature for 2008 and 2009 identified one article, by Berrington de González et al., published in 2009 (Berrington de González et al., 2009, JNCI, vol. 101: 205-209). This article focuses on estimating the risk of radiation-induced breast cancer for mammographic screening of young women at high risk for breast cancer (with BRCA gene mutations). Based on an assumption that mammography reduces mortality in these high risk women by 15% to 25% or less, the authors conclude that such a reduction is not substantially greater than the risk of radiation-induced breast cancer mortality when screening before the age of 34 years. That is, there would be no net benefit from annual mammographic screening of BRCA mutation carriers at ages 25-29 years; the net benefit would be zero or small if screening occurs in 30-34 year olds, and there would be some net benefit at age 35 years or older.
The Addition of Mammography to Magnetic Resonance Imaging
The effects of the addition of FM to MRI in the screening of high risk women were also assessed, with inclusion and exclusion criteria as follows:
Inclusion Criteria
English-language articles and English or French-language HTAs published from September 2007 to January 15, 2010.
Articles specific to screening of women at high risk for breast cancer, regardless of the definition of high risk.
Studies in which accuracy data for the combination of MRI plus FM were available to be compared with those of MRI and FM alone.
RCTs or prospective, paired studies only.
Studies in which women were previously diagnosed with breast cancer were also included.
Exclusion Criteria
Studies in which outcomes were not specific to those of interest in this report.
Studies in which there were insufficient data on the accuracy of MRI plus FM.
Intervention
Both MRI and FM.
Comparators
Screening with MRI alone and FM alone.
Outcomes of Interest
Sensitivity.
Specificity.
Summary of Findings
Magnetic Resonance Imaging Plus Mammography
There is moderate GRADE-level evidence that the sensitivity of MRI plus mammography is significantly higher than that of MRI or FM alone, although the specificity remains either unchanged or decreases, in the screening of women at high risk for breast cancer based on genetic/familial factors, regardless of age.
These studies include women at high risk defined as BRCA1/2 or TP53 carriers, first degree relatives of carriers, women with varying degrees of high risk family histories, and/or >20% lifetime risk based on existing risk models. This definition of high risk accounts for approximately 2% of the female adult population in Ontario.
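The pattern of higher combined sensitivity with unchanged or reduced specificity is what one would expect when a woman is recalled if either test is positive. The sketch below illustrates this with hypothetical per-test accuracies under an independence assumption that real paired data need not satisfy; it does not reproduce the included studies' estimates.

```python
# Hypothetical per-test accuracies (not estimates from the included studies);
# independence between the two tests is a strong simplifying assumption.
sens_mri, sens_fm = 0.80, 0.40
spec_mri, spec_fm = 0.90, 0.95

# "Recall if either test is positive": sensitivity rises, specificity falls.
sens_combined = 1 - (1 - sens_mri) * (1 - sens_fm)   # 0.88, higher than either alone
spec_combined = spec_mri * spec_fm                   # 0.855, lower than either alone
print(f"combined sensitivity {sens_combined:.2f}, combined specificity {spec_combined:.3f}")
```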
PMCID: PMC3377503  PMID: 23074406
22.  Differences in Reporting of Analyses in Internal Company Documents Versus Published Trial Reports: Comparisons in Industry-Sponsored Trials in Off-Label Uses of Gabapentin 
PLoS Medicine  2013;10(1):e1001378.
Using documents obtained through litigation, S. Swaroop Vedula and colleagues compared internal company documents regarding industry-sponsored trials of off-label uses of gabapentin with the published trial reports and found discrepancies in the reporting of analyses.
Background
Details about the type of analysis (e.g., intent to treat [ITT]) and definitions (i.e., criteria for including participants in the analysis) are necessary for interpreting a clinical trial's findings. Our objective was to compare the description of types of analyses and criteria for including participants in the publication (i.e., what was reported) with descriptions in the corresponding internal company documents (i.e., what was planned and what was done). Trials were for off-label uses of gabapentin sponsored by Pfizer and Parke-Davis, and documents were obtained through litigation.
Methods and Findings
For each trial, we compared internal company documents (protocols, statistical analysis plans, and research reports, all unpublished) with publications. One author extracted data and another verified, with a third person verifying discordant items and a sample of the rest. Extracted data included the number of participants randomized and analyzed for efficacy, and the types of analyses for efficacy and safety and their definitions (i.e., criteria for including participants in each type of analysis). We identified 21 trials; 11 were published randomized controlled trials that provided the documents needed for the planned comparisons. For three trials, there was disagreement between the research report and the publication on the number of randomized participants. Seven types of efficacy analyses were described in the protocols, statistical analysis plans, and publications, including ITT and six others. The protocol or publication described ITT using six different definitions, resulting in frequent disagreements between the two documents (i.e., different numbers of participants were included in the analyses).
Conclusions
Descriptions of analyses conducted did not agree between internal company documents and what was publicly reported. Internal company documents provide extensive documentation of methods planned and used, and trial findings, and should be publicly accessible. Reporting standards for randomized controlled trials should recommend transparent descriptions and definitions of analyses performed and which study participants are excluded.
Please see later in the article for the Editors' Summary
Editors' Summary
Background
To be credible, published research must present an unbiased, transparent, and accurate description of the study methods and findings so that readers can assess all relevant information to make informed decisions about the impact of any conclusions. Therefore, research publications should conform to universally adopted guidelines and checklists. Studies to establish whether a treatment is effective, termed randomized controlled trials (RCTs), are checked against a comprehensive set of guidelines: the robustness of trial protocols is measured through the Standard Protocol Items: Recommendations for Interventional Trials (SPIRIT), and the Consolidated Standards of Reporting Trials (CONSORT) statement (which was constructed and agreed by a meeting of journal editors in 1996, and has been updated over the years) includes a 25-point checklist that covers all of the key points in reporting RCTs.
Why Was This Study Done?
Although the CONSORT statement has helped improve transparency in the reporting of the methods and findings from RCTs, the statement does not define how certain types of analyses should be conducted and which patients should be included in the analyses, for example, in an intention-to-treat analysis (in which all participants are included in the data analysis of the group to which they were assigned, whether or not they completed the intervention given to the group). So in this study, the researchers used internal company documents released in the course of litigation against the pharmaceutical company Pfizer regarding the drug gabapentin, to compare between the internal and published reports the reporting of the numbers of participants, the description of the types of analyses, and the definitions of each type of analysis. The reports involved studies of gabapentin used for medical reasons not approved for marketing by the US Food and Drug Administration, known as “off-label” uses.
What Did the Researchers Do and Find?
The researchers identified trials sponsored by Pfizer relating to four off-label uses of gabapentin and examined the internal company protocols, statistical analysis plans, research reports, and the main publications related to each trial. The researchers then compared the numbers of participants randomized and analyzed for the main (primary) outcome and the type of analysis for efficacy and safety in both the internal research report and the trial publication. The researchers identified 21 trials, 11 of which were published RCTs that had the associated documents necessary for comparison.
The researchers found that in three out of ten trials there were differences in the internal research report and the main publication regarding the number of randomized participants. Furthermore, in six out of ten trials, the researchers were unable to compare the internal research report with the main publication for the number of participants analyzed for efficacy, because the research report either did not describe the primary outcome or did not describe the type of analysis. Overall, the researchers found that seven different types of efficacy analyses were described in the protocols, statistical analysis plans, and publications, including intention-to-treat analysis. However, the protocol or publication used six different descriptions for the intention-to-treat analysis, resulting in several important differences between the internal and published documents about the number of patients included in the analysis.
What Do These Findings Mean?
These findings from a sample of industry-sponsored trials on the off-label use of gabapentin suggest that when compared to the internal research reports, the trial publications did not always accurately reflect what was actually done in the trial. Therefore, the trial publication could not be considered to be an accurate and transparent record of the numbers of participants randomized and analyzed for efficacy. These findings support the need for further revisions of the CONSORT statement, such as including explicit statements about the criteria used to define each type of analysis and the numbers of participants excluded from each type of analysis. Further guidance is also needed to ensure consistent terminology for types of analysis. Of course, these revisions will improve reporting only if authors and journals adhere to them. These findings also highlight the need for all individual patient data to be made accessible to readers of the published article.
Additional Information
Please access these Web sites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.1001378.
For more information, see the CONSORT statement website
The EQUATOR Network website is a resource center for the good reporting of health research studies and has more information about the SPIRIT initiative and the CONSORT statement
doi:10.1371/journal.pmed.1001378
PMCID: PMC3558476  PMID: 23382656
23.  Strategies for prevention of postoperative delirium: a systematic review and meta-analysis of randomized trials 
Critical Care  2013;17(2):R47.
Introduction
The ideal measures to prevent postoperative delirium have yet to be established. We conducted this systematic review and meta-analysis to clarify the significance of potential interventions.
Methods
The PRISMA statement guidelines were followed. Two researchers searched MEDLINE, EMBASE, CINAHL and the Cochrane Library for articles published in English before August 2012. Additional sources included reference lists from reviews and related articles from Google Scholar. Randomized clinical trials (RCTs) on interventions seeking to prevent postoperative delirium in adult patients were included. Data extraction and methodological quality assessment were performed using predefined data fields and a scoring system. Meta-analysis was performed for studies that used similar strategies. The primary outcome measure was the incidence of postoperative delirium. We further tested whether interventions effective in preventing postoperative delirium shortened the length of hospital stay.
Results
We identified 38 RCTs with interventions ranging from perioperative management to pharmacological, psychological or multicomponent interventions. Meta-analysis showed dexmedetomidine sedation was associated with less delirium compared to sedation produced by other drugs (two RCTs with 415 patients, pooled risk ratio (RR) = 0.39; 95% confidence interval (CI) = 0.16 to 0.95). Both typical (three RCTs with 965 patients, RR = 0.71; 95% CI = 0.54 to 0.93) and atypical antipsychotics (three RCTs with 627 patients, RR = 0.36; 95% CI = 0.26 to 0.50) decreased delirium occurrence when compared to placebo. Multicomponent interventions (two RCTs with 325 patients, RR = 0.71; 95% CI = 0.58 to 0.86) were effective in preventing delirium. No difference in the incidence of delirium was found between neuraxial and general anesthesia (four RCTs with 511 patients, RR = 0.99; 95% CI = 0.65 to 1.50), epidural and intravenous analgesia (three RCTs with 167 patients, RR = 0.93; 95% CI = 0.61 to 1.43), or acetylcholinesterase inhibitors and placebo (four RCTs with 242 patients, RR = 0.95; 95% CI = 0.63 to 1.44). Effective prevention of postoperative delirium did not shorten the length of hospital stay (10 RCTs with 1,636 patients, pooled SMD (standardized mean difference) = -0.06; 95% CI = -0.16 to 0.04).
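When working from summary figures like these, the standard error of a log risk ratio can be recovered from its reported 95% confidence interval, which is useful for checking significance or re-pooling estimates. A minimal sketch, using the typical-antipsychotic figures above:

```python
import math

def se_from_ci(lo, hi):
    """Standard error of a log risk ratio recovered from a reported 95% CI."""
    return (math.log(hi) - math.log(lo)) / (2 * 1.96)

# Typical-antipsychotic figures quoted above: RR 0.71, 95% CI 0.54 to 0.93
rr, lo, hi = 0.71, 0.54, 0.93
se = se_from_ci(lo, hi)
z = math.log(rr) / se
print(f"SE(log RR) = {se:.3f}, z = {z:.2f}")  # z about -2.5, consistent with p < 0.05
```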
Conclusions
The included studies showed great inconsistencies in the definition, incidence, severity and duration of postoperative delirium. Meta-analysis supported the conclusion that dexmedetomidine sedation, multicomponent interventions and antipsychotics were useful in preventing postoperative delirium.
doi:10.1186/cc12566
PMCID: PMC3672487  PMID: 23506796
24.  Neuroimaging for the Evaluation of Chronic Headaches 
Executive Summary
Objective
The objectives of this evidence based review are:
i) To determine the effectiveness of computed tomography (CT) and magnetic resonance imaging (MRI) scans in the evaluation of persons with a chronic headache and a normal neurological examination.
ii) To determine the comparative effectiveness of CT and MRI scans for detecting significant intracranial abnormalities in persons with chronic headache and a normal neurological exam.
iii) To determine the budget impact of CT and MRI scans for persons with a chronic headache and a normal neurological exam.
Clinical Need: Condition and Target Population
Headache disorders are generally classified as either primary or secondary, with further sub-classification into specific headache types. Primary headaches are those not caused by a disease or medical condition and include i) tension-type headache, ii) migraine, iii) cluster headache and, iv) other primary headaches, such as hemicrania continua and new daily persistent headache. Secondary headaches are those caused by an underlying medical condition. While primary headache disorders are far more frequent than secondary headache disorders, clinicians often feel an urge to order neuroimaging studies (CT and/or MRI scans) out of fear of missing uncommon secondary causes, and often to relieve patient anxiety.
Tension-type headaches are the most common primary headache disorder, and migraines are the most common severe primary headache disorder. Cluster headaches, a type of trigeminal autonomic cephalalgia, are less common than migraines and tension-type headaches. Chronic headaches are defined as headaches present for at least 3 months and occurring on at least 15 days per month. The International Classification of Headache Disorders states that for most secondary headaches the characteristics of the headache are poorly described in the literature, and that for those headache disorders where they are well described there are few diagnostically important features.
The global prevalence of headache in the adult population is estimated at 46% overall: 42% for tension-type headache and 11% for migraine. The estimated prevalence of cluster headache is 0.1%, or 1 in 1,000 persons. The prevalence of chronic daily headache is estimated at 3%.
Neuroimaging
Computed Tomography
Computed tomography (CT) is a medical imaging technique used to aid diagnosis and to guide interventional and therapeutic procedures. It allows rapid acquisition of high-resolution three-dimensional images, providing radiologists and other physicians with cross-sectional views of a person’s anatomy. CT scanning poses a risk of radiation exposure; a typical head CT on a conventional scanner delivers an effective dose of approximately 2-4 mSv.
Magnetic Resonance Imaging
Magnetic resonance imaging (MRI) is a medical imaging technique used to aid diagnosis, but unlike CT it does not use ionizing radiation. Instead, it uses a strong magnetic field to image a person’s anatomy. Compared to CT, MRI can provide increased contrast between the soft tissues of the body. Because of the persistent magnetic field, extra care is required in the magnetic resonance environment to ensure that personnel and patients are not harmed.
Research Questions
What is the effectiveness of CT and MRI scanning in the evaluation of persons with a chronic headache and a normal neurological examination?
What is the comparative effectiveness of CT and MRI scanning for detecting significant intracranial abnormality in persons with chronic headache and a normal neurological exam?
What is the budget impact of CT and MRI scans for persons with a chronic headache and a normal neurological exam?
Research Methods
Literature Search
Search Strategy
A literature search was performed on February 18, 2010 using OVID MEDLINE, MEDLINE In-Process and Other Non-Indexed Citations, EMBASE, the Cumulative Index to Nursing & Allied Health Literature (CINAHL), the Cochrane Library, and the International Network of Agencies for Health Technology Assessment (INAHTA) for studies published from January 2005 to February 2010. Abstracts were reviewed by a single reviewer and, for those studies meeting the eligibility criteria, full-text articles were obtained. Reference lists were also examined for any additional relevant studies not identified through the search. Articles of unclear eligibility were reviewed with a second clinical epidemiologist, and then a group of epidemiologists, until consensus was established.
Inclusion Criteria
Systematic reviews, randomized controlled trials, observational studies
Outpatient adult population with chronic headache and normal neurological exam
Studies reporting likelihood ratio of clinical variables for a significant intracranial abnormality
English language studies
2005-present
Exclusion Criteria
Studies which report outcomes for persons with seizures, focal symptoms, recent/new onset headache, change in presentation, thunderclap headache, and headache due to trauma
Persons with abnormal neurological examination
Case reports
Outcomes of Interest
Primary Outcome
Probability for intracranial abnormality
Secondary Outcome
Patient relief from anxiety
System service use
System costs
Detection rates for significant abnormalities in MRI and CT scans
Summary of Findings
Effectiveness
One systematic review, 1 small RCT, and 1 observational study met the inclusion and exclusion criteria. The systematic review by Detsky et al. reported the likelihood ratios of specific clinical variables for predicting significant intracranial abnormalities. The RCT by Howard et al. evaluated whether neuroimaging of persons with chronic headache increased or reduced patient anxiety. The prospective observational study by Sempere et al. provided evidence on the pre-test probability of intracranial abnormalities in persons with chronic headache, as well as minimal data on the comparative effectiveness of CT and MRI for detecting intracranial abnormalities.
Outcome 1: Pre-test Probability.
The pre-test probability is usually related to the prevalence of the disease and can be adjusted depending on the characteristics of the population. The study by Sempere et al. determined the pre-test probability (prevalence) of significant intracranial abnormalities in persons with chronic headache, defined as headache experienced for at least 4 weeks, with a normal neurological exam. The pre-test probability was 0.9% (95% CI 0.5, 1.4) in persons with chronic headache and a normal neurological exam. The highest pre-test probability, 5%, was found in persons with cluster headache; the second highest, 3.7%, was reported in persons with indeterminate-type headache. There was a 0.75% rate of incidental findings.
Likelihood ratios for detecting a significant abnormality
Clinical findings from the history and physical examination may be used as screening tests to predict abnormalities on neuroimaging. The extent to which a clinical variable is a good predictor can be captured by its likelihood ratio, which estimates how much a test result changes the odds of having a disease or condition. The positive likelihood ratio (LR+) indicates how much the odds of having the disease increase when the test is positive; the negative likelihood ratio (LR-) indicates how much the odds decrease when the test is negative.
Detsky et al. determined the likelihood ratios of specific clinical variables from 11 studies. Four clinical variables had both statistically significant positive and negative likelihood ratios: abnormal neurological exam (LR+ 5.3, LR- 0.72), undefined headache (LR+ 3.8, LR- 0.66), headache aggravated by exertion or Valsalva (LR+ 2.3, LR- 0.70), and headache with vomiting (LR+ 1.8, LR- 0.47). Two clinical variables had a statistically significant positive likelihood ratio but a non-significant negative likelihood ratio: cluster-type headache (LR+ 11, LR- 0.95) and headache with aura (LR+ 12.9, LR- 0.52). Finally, 8 clinical variables had both non-significant positive and negative likelihood ratios: headache with focal symptoms, new onset headache, quick onset headache, worsening headache, male gender, headache with nausea, increased headache severity, and migraine-type headache.
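As a concrete illustration of the arithmetic, the short Python sketch below applies one of the likelihood ratios above to the pre-test probability reported by Sempere et al.: convert the probability to odds, multiply by the LR, and convert back. The function name is ours for illustration; the two input figures come from the text.

```python
def post_test_probability(pretest_prob, likelihood_ratio):
    """Apply a likelihood ratio: probability -> odds -> scaled odds -> probability."""
    pretest_odds = pretest_prob / (1.0 - pretest_prob)
    post_odds = pretest_odds * likelihood_ratio
    return post_odds / (1.0 + post_odds)

# 0.9% pre-test probability (Sempere et al.) combined with LR+ 11 for
# cluster-type headache (Detsky et al.) gives roughly a 9% post-test probability.
print(f"{post_test_probability(0.009, 11):.3f}")  # -> 0.091
```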
Outcome 2: Relief from Anxiety
Howard et al. completed an RCT of 150 persons to determine whether neuroimaging for headaches was anxiolytic or anxiogenic. Persons were randomized to receive either an MRI scan or no scan for investigation of their headache. The study population was stratified into persons with a Hospital Anxiety and Depression Scale (HADS) score > 11 (the high anxiety and depression group) and those with a score < 11 (the low anxiety and depression group), so that there were 4 groups:
Group 1: High anxiety and depression, no scan group
Group 2: High anxiety and depression, scan group
Group 3: Low anxiety and depression, no scan group
Group 4: Low anxiety and depression, scan group
Anxiety
There was no evidence for any overall reduction in anxiety at 1 year as measured by a visual analogue scale of ‘level of worry’ when analysed by whether the person received a scan or not. Similarly, there was no interaction between anxiety and depression status and whether a scan was offered or not on patient anxiety. Anxiety did not decrease at 1 year to any statistically significant degree in the high anxiety and depression group (HADS positive) compared with the low anxiety and depression group (HADS negative).
There are serious methodological limitations in this study design that may have contributed to these negative results. First, when comparing the ‘scan’ and ‘no scan’ groups, 12 people (16%) in the ‘no scan’ group actually received a scan within the follow-up year. If scanning does reduce anxiety, this contamination of the ‘no scan’ group may have diluted the difference between groups, resulting in a non-significant difference in anxiety scores between the ‘scanned’ and ‘no scan’ groups. Second, the sample size in each of the 4 groups at 1-year follow-up was inadequate, which may have contributed to a Type II statistical error (missing a difference when one may exist) when comparing scan vs. no scan by anxiety and depression status. Therefore, based on the results and study limitations, it is inconclusive whether scanning reduces anxiety.
Outcome 3: System Services
Howard et al. considered services used and system costs as secondary outcomes. These were determined by examining primary care case notes at 1 year for consultation rates, symptoms, further investigations, and contact with secondary and tertiary care.
System Services
The authors report that the use of neurologist and psychiatrist services was significantly higher for persons not offered a scan, regardless of their anxiety and depression status (P < 0.001 for neurologist services and P = 0.033 for psychiatrist services).
Outcome 4: System Costs
System Costs
There was evidence of statistically significantly lower system costs if persons with high levels of anxiety and depression (Hospital Anxiety and Depression Scale score > 11) were provided with a scan (P = 0.03 including inpatient costs and P = 0.047 excluding inpatient costs).
Comparative Effectiveness of CT and MRI Scans
One study reported the detection rate of significant intracranial abnormalities using CT and MRI. In a cohort of 1,876 persons with non-acute headache, defined as any type of headache that had begun at least 4 weeks before enrolment, Sempere et al. reported a detection rate of 19/1,432 (1.3%) with CT and 4/444 (0.9%) with MRI. Of 119 persons with normal CT scans, 2 (1.7%) had a significant intracranial abnormality on MRI: a small meningioma and an acoustic neurinoma.
Summary
The evidence presented can be summarized as follows:
Pre-test Probability
Based on the results of Sempere et al., there is a low pre-test probability of intracranial abnormalities in persons with chronic headache (defined as headache experienced for a minimum of 4 weeks) and a normal neurological exam. The Grade quality of evidence supporting this outcome is very low.
Likelihood Ratios
Based on the systematic review by Detsky et al., there are statistically significant positive and negative likelihood ratios for the following clinical variables: abnormal neurological exam, undefined headache, headache aggravated by exertion or Valsalva, and headache with vomiting. The Grade quality of evidence supporting this outcome is very low.
Based on the systematic review by Detsky et al., there is a statistically significant positive likelihood ratio but a non-significant negative likelihood ratio for the following clinical variables: cluster headache and headache with aura. The Grade quality of evidence supporting this outcome is very low.
Based on the systematic review by Detsky et al., there are non-significant positive and negative likelihood ratios for the following clinical variables: headache with focal symptoms, new onset headache, quick onset headache, worsening headache, male gender, headache with nausea, increased headache severity, and migraine-type headache. The Grade quality of evidence supporting this outcome is very low.
Relief from Anxiety
Based on the RCT by Howard et al., it is inconclusive whether neuroimaging scans in persons with a chronic headache are anxiolytic. The Grade quality of evidence supporting this outcome is low.
System Services
Based on the RCT by Howard et al., scanning persons with chronic headache, regardless of their anxiety and/or depression level, reduces service use. The Grade quality of evidence is low.
System Costs
Based on the RCT by Howard et al., scanning persons with a score greater than 11 on the Hospital Anxiety and Depression Scale reduces system costs. The Grade quality of evidence is moderate.
Comparative Effectiveness of CT and MRI Scans
There is sparse evidence to determine the relative effectiveness of CT compared with MRI scanning for the detection of intracranial abnormalities. The Grade quality of evidence supporting this is very low.
Economic Analysis
Ontario Perspective
Volumes for neuroimaging of the head (i.e., CT and MRI scans) from the Ontario Health Insurance Plan (OHIP) dataset were used to investigate trends in the province for fiscal years (FY) 2004-2009.
Assumptions were made in order to estimate neuroimaging of the head for the indication of headache. From the literature, 27% of all head CT scans and 13% of all head MRI scans were assumed to include an indication of headache. From the same retrospective chart review and personal communication with the author, 16% of head CT scans and 4% of head MRI scans were assumed to be for the sole indication of headache. From Ministry of Health and Long-Term Care (MOHLTC) wait times data, 73% of all CT scans and 93% of all MRI scans in the province, irrespective of indication, were outpatient procedures.
The expenditure for each FY reflects the volume for that year, and since volumes have increased over the past 6 FYs, expenditure has also increased. In FY 08/09 the pay-out reached $3.0M for CT and $2.8M for MRI services of the head where headache was an indication, and $1.8M for CT and $0.9M for MRI services of the head where headache was the sole indication.
Cost per Abnormal Finding
The yield of abnormal findings for CT and MRI scans of the head for the sole indication of headache is 2% and 5%, respectively. Based on these yields, a high-level estimate of the cost per abnormal finding with neuroimaging of the head for headache only can be calculated for each FY. In FY 08/09 there were 37,434 CT and 16,197 MRI scans of the head for headache only. These volumes would generate a yield of 749 abnormal findings with CT and 910 with MRI. The expenditure for FY 08/09 was $1.8M for CT and $0.9M for MRI services. The cost per abnormal finding would therefore be $2,409 for CT and $957 for MRI. These cost-per-abnormal-finding estimates are limited because they do not factor in comparators or the consequences associated with an abnormal reading or false negatives; they consider only the cost of the neuroimaging procedure and the yield of abnormal findings with the respective procedure.
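The high-level arithmetic above can be sketched in a few lines of Python, shown below. The volumes, yields, and expenditures are taken from the report; because the published inputs are rounded, the computed values only approximate the report's published per-finding costs.

```python
def cost_per_abnormal_finding(volume, yield_rate, expenditure):
    """Expected abnormal findings = volume x yield; cost = expenditure / findings."""
    findings = volume * yield_rate
    return findings, expenditure / findings

# FY 08/09 figures as stated in the report (rounded in the source).
for label, vol, y, spend in [("CT", 37_434, 0.02, 1_800_000),
                             ("MRI", 16_197, 0.05, 900_000)]:
    n, cost = cost_per_abnormal_finding(vol, y, spend)
    print(f"{label}: ~{n:.0f} abnormal findings, ~${cost:,.0f} per finding")
```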
PMCID: PMC3377587  PMID: 23074404
25.  Internet-Based Device-Assisted Remote Monitoring of Cardiovascular Implantable Electronic Devices 
Executive Summary
Objective
The objective of this Medical Advisory Secretariat (MAS) report was to conduct a systematic review of the available published evidence on the safety, effectiveness, and cost-effectiveness of Internet-based device-assisted remote monitoring systems (RMSs) for therapeutic cardiac implantable electronic devices (CIEDs) such as pacemakers (PMs), implantable cardioverter-defibrillators (ICDs), and cardiac resynchronization therapy (CRT) devices. The MAS evidence-based review was performed to support public financing decisions.
Clinical Need: Condition and Target Population
Sudden cardiac death (SCD) is a major cause of fatalities in developed countries. In the United States almost half a million people die of SCD annually, resulting in more deaths than stroke, lung cancer, breast cancer, and AIDS combined. In Canada, more than 40,000 people die each year from a cardiovascular-related cause; approximately half of these deaths are attributable to SCD.
Most cases of SCD occur in the general population, typically in those without a known history of heart disease. Most SCDs are caused by cardiac arrhythmia, an abnormal heart rhythm caused by malfunctions of the heart’s electrical system. Up to half of patients with significant heart failure (HF) also have advanced conduction abnormalities.
Cardiac arrhythmias are managed by a variety of drugs, ablative procedures, and therapeutic CIEDs. The range of CIEDs includes pacemakers (PMs), implantable cardioverter-defibrillators (ICDs), and cardiac resynchronization therapy (CRT) devices. Bradycardia is the main indication for PMs and individuals at high risk for SCD are often treated by ICDs.
Heart failure (HF) is also a significant health problem and is the most frequent cause of hospitalization in those over 65 years of age. Patients with moderate to severe HF may also have cardiac arrhythmias, although the cause may be related more to heart pump or haemodynamic failure. The presence of HF, however, increases the risk of SCD five-fold, regardless of aetiology. Patients with HF who remain highly symptomatic despite optimal drug therapy are sometimes also treated with CRT devices.
With an increasing prevalence of age-related conditions such as chronic HF and the expanding indications for ICD therapy, the rate of ICD placement has been dramatically increasing. The appropriate indications for ICD placement, as well as the rate of ICD placement, are increasingly an issue. In the United States, after the introduction of expanded coverage of ICDs, a national ICD registry was created in 2005 to track these devices. A recent survey based on this national ICD registry reported that 22.5% (25,145) of patients had received a non-evidence based ICD and that these patients experienced significantly higher in-hospital mortality and post-procedural complications.
In addition to the increased ICD device placement and the upfront device costs, there is the need for lifelong follow-up or surveillance, placing a significant burden on patients and device clinics. In 2007, over 1.6 million CIEDs were implanted in Europe and the United States, which translates to over 5.5 million patient encounters per year if the recommended follow-up practices are considered. A safe and effective RMS could potentially improve the efficiency of long-term follow-up of patients and their CIEDs.
Technology
In addition to being therapeutic devices, CIEDs have extensive diagnostic abilities. All CIEDs can be interrogated and reprogrammed during an in-clinic visit using an inductive programming wand. Remote monitoring would allow patients to transmit information recorded in their devices from the comfort of their own homes. Currently, most ICD devices also have the potential to be monitored remotely. Remote monitoring (RM) can be used to check system integrity, to alert on arrhythmic episodes, and potentially to replace in-clinic follow-up and manage disease remotely. The devices do not currently have the capability of being reprogrammed remotely, although this feature is being tested in pilot settings.
Every RMS is specifically designed by a manufacturer for their cardiac implant devices. For Internet-based device-assisted RMSs, this customization includes details such as web application, multiplatform sensors, custom algorithms, programming information, and types and methods of alerting patients and/or physicians. The addition of peripherals for monitoring weight and pressure or communicating with patients through the onsite communicators also varies by manufacturer. Internet-based device-assisted RMSs for CIEDs are intended to function as a surveillance system rather than an emergency system.
Health care providers therefore need to learn each application, and as more than one application may be used at one site, multiple applications may need to be reviewed for alarms. All RMSs deliver system integrity alerting; however, some systems seem to be better geared to fast arrhythmic alerting, whereas other systems appear to be more intended for remote follow-up or supplemental remote disease management. The different RMSs may therefore have different impacts on workflow organization because of their varying frequency of interrogation and methods of alerts. The integration of these proprietary RM web-based registry systems with hospital-based electronic health record systems has so far not been commonly implemented.
Currently there are 2 general types of RMSs: those that transmit device diagnostic information automatically and without patient assistance to secure Internet-based registry systems, and those that require patient assistance to transmit information. Both systems employ the use of preprogrammed alerts that are either transmitted automatically or at regular scheduled intervals to patients and/or physicians.
The current web applications, programming, and registry systems differ greatly between the manufacturers of transmitting cardiac devices. In Canada there are currently 4 manufacturers—Medtronic Inc., Biotronik, Boston Scientific Corp., and St Jude Medical Inc.—which have regulatory approval for remote transmitting CIEDs. Remote monitoring systems are proprietary to the manufacturer of the implant device. An RMS for one device will not work with another device, and the RMS may not work with all versions of the manufacturer’s devices.
All Internet-based device-assisted RMSs have common components. The implanted device is equipped with a micro-antenna that communicates with a small external device (at bedside or wearable) commonly known as the transmitter. Transmitters are able to interrogate programmed parameters and diagnostic data stored in the patients’ implant device. The information transfer to the communicator can occur at preset time intervals with the participation of the patient (waving a wand over the device) or it can be sent automatically (wirelessly) without their participation. The encrypted data are then uploaded to an Internet-based database on a secure central server. The data processing facilities at the central database, depending on the clinical urgency, can trigger an alert for the physician(s) that can be sent via email, fax, text message, or phone. The details are also posted on the secure website for viewing by the physician (or their delegate) at their convenience.
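The data flow just described (implant to transmitter to secure central server to physician alert) lends itself to a brief illustration. The Python sketch below is entirely hypothetical: real systems are proprietary and manufacturer-specific, and the event names, urgency flag, and notification text here are invented solely to show the surveillance-style routing in which only clinically urgent transmissions trigger an immediate alert.

```python
from dataclasses import dataclass

@dataclass
class Transmission:
    patient_id: str
    event: str      # hypothetical event label, e.g. "lead_impedance_alert"
    urgent: bool    # set upstream by manufacturer-specific triage rules

def route(tx: Transmission) -> str:
    """Post every transmission to the registry; notify immediately only if urgent."""
    if tx.urgent:
        # Real systems notify via email, fax, text message, or phone.
        return f"ALERT physician now: patient {tx.patient_id}, {tx.event}"
    return f"Post to secure web registry for routine review: patient {tx.patient_id}"

print(route(Transmission("pt-001", "lead_impedance_alert", urgent=True)))
print(route(Transmission("pt-002", "scheduled_check", urgent=False)))
```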
Research Questions
The research directions and specific research questions for this evidence review were as follows:
To identify the Internet-based device-assisted RMSs available for follow-up of patients with therapeutic CIEDs such as PMs, ICDs, and CRT devices.
To identify the potential risks, operational issues, or organizational issues related to Internet-based device-assisted RM for CIEDs.
To evaluate the safety, acceptability, and effectiveness of Internet-based device-assisted RMSs for CIEDs such as PMs, ICDs, and CRT devices.
To evaluate the safety, effectiveness, and cost-effectiveness of Internet-based device-assisted RMSs for CIEDs compared to usual outpatient in-office monitoring strategies.
To evaluate the resource implications or budget impact of RMSs for CIEDs in Ontario, Canada.
Research Methods
Literature Search
The review included a systematic review of published scientific literature and consultations with experts and manufacturers of all 4 approved RMSs for CIEDs in Canada. Information on CIED cardiac implant clinics was also obtained from Provincial Programs, a division within the Ministry of Health and Long-Term Care with a mandate for cardiac implant specialty care. Various administrative databases and registries were used to outline the current clinical follow-up burden of CIEDs in Ontario. The provincial population-based ICD database developed and maintained by the Institute for Clinical Evaluative Sciences (ICES) was used to review the current follow-up practices with Ontario patients implanted with ICD devices.
Search Strategy
A literature search was performed on September 21, 2010 using OVID MEDLINE, MEDLINE In-Process and Other Non-Indexed Citations, EMBASE, the Cumulative Index to Nursing & Allied Health Literature (CINAHL), the Cochrane Library, and the International Network of Agencies for Health Technology Assessment (INAHTA) for studies published from 1950 to September 2010. Search alerts were generated and reviewed for additional relevant literature until December 31, 2010. Abstracts were reviewed by a single reviewer and, for those studies meeting the eligibility criteria, full-text articles were obtained. Reference lists were also examined for any additional relevant studies not identified through the search.
Inclusion Criteria
published between 1950 and September 2010;
English language full-reports and human studies;
original reports including clinical evaluations of Internet-based device-assisted RMSs for CIEDs in clinical settings;
reports including standardized measurements on outcome events such as technical success, safety, effectiveness, cost, measures of health care utilization, morbidity, mortality, quality of life or patient satisfaction;
randomized controlled trials (RCTs), systematic reviews and meta-analyses, cohort and controlled clinical studies.
Exclusion Criteria
non-systematic reviews, letters, comments and editorials;
reports not involving standardized outcome events;
clinical reports not involving Internet-based device assisted RM systems for CIEDs in clinical settings;
reports involving studies testing or validating algorithms without RM;
studies with small samples (<10 subjects).
Outcomes of Interest
The outcomes of interest included: technical outcomes, emergency department visits, complications, major adverse events, symptoms, hospital admissions, clinic visits (scheduled and/or unscheduled), survival, morbidity (disease progression, stroke, etc.), patient satisfaction, and quality of life.
Summary of Findings
The MAS evidence review was performed to review available evidence on Internet-based device-assisted RMSs for CIEDs published until September 2010. The search identified 6 systematic reviews, 7 randomized controlled trials, and 19 reports for 16 cohort studies—3 of these being registry-based and 4 being multi-centered. The evidence is summarized in the 3 sections that follow.
1. Effectiveness of Remote Monitoring Systems of CIEDs for Cardiac Arrhythmia and Device Functioning
In total, 15 reports on 13 cohort studies involving investigations with 4 different RMSs for CIEDs in cardiology implant clinic groups were identified in the review. The 4 RMSs were: Care Link Network® (Medtronic Inc., Minneapolis, MN, USA); Home Monitoring® (Biotronik, Berlin, Germany); House Call II® (St Jude Medical Inc., St Paul, MN, USA); and a manufacturer-independent RMS. Eight of these reports were with the Home Monitoring® RMS (12,949 patients), 3 were with the Care Link® RMS (167 patients), 1 was with the House Call II® RMS (124 patients), and 1 was with a manufacturer-independent RMS (44 patients). All of the studies, except for 2 in the United States (1 with Home Monitoring® and 1 with House Call II®), were performed in European countries.
The RMSs in the studies were evaluated with different cardiac implant device populations: ICDs only (6 studies), ICD and CRT devices (3 studies), PM and ICD and CRT devices (4 studies), and PMs only (2 studies). The patient populations were predominately male (range, 52%–87%) in all studies, with mean ages ranging from 58 to 76 years. One study population was unique in that RMSs were evaluated for ICDs implanted solely for primary prevention in young patients (mean age, 44 years) with Brugada syndrome, an inherited condition that carries an increased risk of sudden cardiac death in young adults.
Most of the cohort studies reported on the feasibility of RMSs in clinical settings with limited follow-up. In the short follow-up periods of the studies, the majority of the events were related to detection of medical events rather than system configuration or device abnormalities. The results of the studies are summarized below:
The interrogation of devices on the web platform, both for continuous and scheduled transmissions, was significantly quicker with remote follow-up, both for nurses and physicians.
In a case-control study based on a registry of patients with Brugada syndrome who were followed up remotely, there were significantly fewer outpatient visits and greater detection of inappropriate shocks. One death occurred in the control group not followed remotely; post-mortem analysis indicated early signs of lead failure prior to the event.
Two studies examined the role of RMSs in following ICD leads under regulatory advisory in a European clinical setting and noted:
– Fewer inappropriate shocks were administered in the RM group.
– Urgent in-office interrogations and surgical revisions were performed within 12 days of remote alerts.
– No signs of lead fracture were detected at in-office follow-up; all were detected at remote follow-up.
Only 1 study reported evaluating quality of life in patients followed up remotely at 3 and 6 months; no values were reported.
Patient satisfaction was evaluated in 5 cohort studies, all with short-term follow-up: 1 for the Home Monitoring® RMS, 3 for the Care Link® RMS, and 1 for the House Call II® RMS.
– Patients reported receiving a sense of security from the transmitter, a good relationship with nurses and physicians, positive implications for their health, and satisfaction with RM and organization of services.
– Although patients reported that the system was easy to implement and required less than 10 minutes to transmit information, a variable proportion of patients (range, 9%–39%) reported that they needed the assistance of a caregiver for their transmission.
– The majority of patients would recommend RM to other ICD patients.
– Patients with hearing or other physical or mental conditions hindering the use of the system were excluded from studies, but the frequency of this was not reported.
Physician satisfaction was evaluated in 3 studies, all with the Care Link® RMS:
– Physicians reported an ease of use and high satisfaction with a generally short-term use of the RMS.
– Physicians reported being able to address the problems in unscheduled patient transmissions or physician initiated transmissions remotely, and were able to handle the majority of the troubleshooting calls remotely.
– Both nurses and physicians reported a high level of satisfaction with the web registry system.
2. Effectiveness of Remote Monitoring Systems in Heart Failure Patients for Cardiac Arrhythmia and Heart Failure Episodes
Remote follow-up of HF patients implanted with ICD or CRT devices, generally managed in specialized HF clinics, was evaluated in 3 cohort studies: 1 involved the Home Monitoring® RMS and 2 involved the Care Link® RMS. In these RMSs, in addition to the standard diagnostic features, the cardiac devices continuously assess other variables such as patient activity, mean heart rate, and heart rate variability. Intra-thoracic impedance, a proxy measure for lung fluid overload, was also measured in the Care Link® studies. The overall diagnostic performance of these measures cannot be evaluated, as the information was not reported for patients who did not experience intra-thoracic impedance threshold crossings or did not undergo interventions. The trial results involved descriptive information on transmissions and alerts in patients experiencing high morbidity and hospitalization in the short study periods.
3. Comparative Effectiveness of Remote Monitoring Systems for CIEDs
Seven RCTs were identified evaluating RMSs for CIEDs: 2 were for PMs (1276 patients) and 5 were for ICD/CRT devices (3733 patients). Studies performed in the clinical setting in the United States involved both the Care Link® RMS and the Home Monitoring® RMS, whereas all studies performed in European countries involved only the Home Monitoring® RMS.
3A. Randomized Controlled Trials of Remote Monitoring Systems for Pacemakers
Two trials, both multicenter RCTs, were conducted in different countries with different RMSs and study objectives. The PREFER trial was a large trial (897 patients) performed in the United States examining the ability of Care Link®, an Internet-based remote PM interrogation system, to detect clinically actionable events (CAEs) sooner than the current in-office follow-up supplemented with transtelephonic monitoring transmissions, a limited form of remote device interrogation. The trial results are summarized below:
In the 375-day mean follow-up, 382 patients were identified with at least 1 CAE—111 patients in the control arm and 271 in the remote arm.
The event rate detected per patient for every type of CAE, except for loss of atrial capture, was higher in the remote arm than the control arm.
The median time to first detection of CAEs (4.9 vs. 6.3 months) was significantly shorter in the RMS group compared to the control group (P < 0.0001).
Additionally, only 2% (3/190) of the CAEs in the control arm were detected during a transtelephonic monitoring transmission (the rest were detected at in-office follow-ups), whereas 66% (446/676) of the CAEs were detected during remote interrogation.
The second study, the OEDIPE trial, was a smaller trial (379 patients) performed in France evaluating the ability of the Home Monitoring® RMS to shorten PM post-operative hospitalization while preserving the safety of conventional management of longer hospital stays.
Implementation and operationalization of the RMS was reported to be successful in 91% (346/379) of the patients and represented 8144 transmissions.
In the RM group, 6.5% of patients failed to send messages (10 due to improper use of the transmitter, 2 with unmanageable stress). Of the 172 patients transmitting, 108 sent a total of 167 warnings during the trial, with a greater proportion of warnings attributed to medical rather than technical causes.
Forty percent of patients had no warning message transmission; among these, 6 patients experienced a major adverse event and 1 patient experienced a non-major adverse event. Of the 6 patients with a major adverse event, 5 contacted their physician.
The mean medical reaction time was faster in the RM group (6.5 ± 7.6 days vs. 11.4 ± 11.6 days).
The mean duration of hospitalization was significantly shorter (P < 0.001) for the RM group than the control group (3.2 ± 3.2 days vs. 4.8 ± 3.7 days).
Quality of life estimates by the SF-36 questionnaire were similar for the 2 groups at 1-month follow-up.
3B. Randomized Controlled Trials Evaluating Remote Monitoring Systems for ICD or CRT Devices
The 5 studies evaluating the impact of RMSs with ICD/CRT devices were conducted in the United States and in European countries and involved 2 RMSs—Care Link® and Home Monitoring ®. The objectives of the trials varied and 3 of the trials were smaller pilot investigations.
The first of the smaller studies (151 patients) evaluated patient satisfaction, achievement of patient outcomes, and the cost-effectiveness of the Care Link® RMS compared to quarterly in-office device interrogations with 1-year follow-up.
Individual outcomes such as hospitalizations, emergency department visits, and unscheduled clinic visits were not significantly different between the study groups.
Except for a significantly higher detection of atrial fibrillation in the RM group, data on ICD detection and therapy were similar in the study groups.
Health-related quality of life evaluated by the EuroQoL at 6-month or 12-month follow-up was not different between study groups.
Patients were more satisfied with their ICD care in the clinic follow-up group than in the remote follow-up group at 6-month follow-up, but were equally satisfied at 12-month follow-up.
The second small pilot trial (20 patients) examined the impact of RM follow-up with the House Call II® system on work schedules and cost savings in patients randomized to 2 study arms varying in the degree of remote follow-up.
The total time including device interrogation, transmission time, data analysis, and physician time required was significantly shorter for the RM follow-up group.
The in-clinic waiting time was eliminated for patients in the RM follow-up group.
The physician talk time was significantly reduced in the RM follow-up group (P < 0.05).
The time for the actual device interrogation did not differ in the study groups.
The third small trial (115 patients) examined the impact of RM with the Home Monitoring® system compared to scheduled trimonthly in-clinic visits on the number of unplanned visits, total costs, health-related quality of life (SF-36), and overall mortality.
There was a 63.2% reduction in in-office visits in the RM group.
Hospitalizations or overall mortality (values not stated) were not significantly different between the study groups.
Patient-induced visits were higher in the RM group than the in-clinic follow-up group.
The TRUST Trial
The TRUST trial was a large multicenter RCT conducted at 102 centers in the United States involving the Home Monitoring® RMS for ICD devices for 1450 patients. The primary objectives of the trial were to determine if remote follow-up could be safely substituted for in-office clinic follow-up (3 in-office visits replaced) and still enable earlier physician detection of clinically actionable events.
Adherence to the protocol follow-up schedule was significantly higher in the RM group than the in-office follow-up group (93.5% vs. 88.7%, P < 0.001).
Actionability of trimonthly scheduled checks was low (6.6%) in both study groups. Overall, actionable causes were reprogramming (76.2%), medication changes (24.8%), and lead/system revisions (4%), and these were not different between the 2 study groups.
The overall mean number of in-clinic and hospital visits was significantly lower in the RM group than the in-office follow-up group (2.1 per patient-year vs. 3.8 per patient-year, P < 0.001), representing a 45% visit reduction at 12 months.
The median time from onset of first arrhythmia to physician evaluation was significantly shorter (P < 0.001) in the RM group than in the in-office follow-up group for all arrhythmias (1 day vs. 35.5 days).
The median time to detect clinically asymptomatic arrhythmia events—atrial fibrillation (AF), ventricular fibrillation (VF), ventricular tachycardia (VT), and supra-ventricular tachycardia (SVT)—was also significantly shorter (P < 0.001) in the RM group compared to the in-office follow-up group (1 day vs. 41.5 days) and was significantly quicker for each of the clinical arrhythmia events—AF (5.5 days vs. 40 days), VT (1 day vs. 28 days), VF (1 day vs. 36 days), and SVT (2 days vs. 39 days).
System-related problems occurred infrequently in both groups—in 1.5% of patients (14/908) in the RM group and in 0.7% of patients (3/432) in the in-office follow-up group.
The overall adverse event rate over 12 months was not significantly different between the 2 groups and individual adverse events were also not significantly different between the RM group and the in-office follow-up group: death (3.4% vs. 4.9%), stroke (0.3% vs. 1.2%), and surgical intervention (6.6% vs. 4.9%), respectively.
The 12-month cumulative survival was 96.4% (95% confidence interval [CI], 95.5%–97.6%) in the RM group and 94.2% (95% confidence interval [CI], 91.8%–96.6%) in the in-office follow-up group, and was not significantly different between the 2 groups (P = 0.174).
The CONNECT Trial
The CONNECT trial, another major multicenter RCT, involved the Care Link® RMS for ICD/CRT devices in a 15-month follow-up study of 1,997 patients at 133 sites in the United States. The primary objective of the trial was to determine whether automatically transmitted physician alerts decreased the time from the occurrence of clinically relevant events to medical decisions. The trial results are summarized below:
Of the 575 clinical alerts sent in the study, 246 did not trigger an automatic physician alert. Transmission failures were related to technical issues such as the alert not being programmed or not being reset, and/or a variety of patient factors such as not being at home and the monitor not being plugged in or set up.
The overall mean time from the clinically relevant event to the clinical decision was significantly shorter (P < 0.001) by 17.4 days in the remote follow-up group (4.6 days for 172 patients) than the in-office follow-up group (22 days for 145 patients).
– The median time to a clinical decision was shorter in the remote follow-up group than in the in-office follow-up group for an AT/AF burden greater than or equal to 12 hours (3 days vs. 24 days) and a fast VF rate greater than or equal to 120 beats per minute (4 days vs. 23 days).
Although infrequent, similar low numbers of events involving low battery and VF detection/therapy turned off were noted in both groups. More alerts, however, were noted for out-of-range lead impedance in the RM group (18 vs. 6 patients), and the time to detect these critical events was significantly shorter in the RM group (same day vs. 17 days).
Total in-office clinic visits were reduced by 38% from 6.27 visits per patient-year in the in-office follow-up group to 3.29 visits per patient-year in the remote follow-up group.
Health care utilization visits (N = 6,227) that included cardiovascular-related hospitalization, emergency department visits, and unscheduled clinic visits were not significantly higher in the remote follow-up group.
The overall mean length of hospitalization was significantly shorter (P = 0.002) for those in the remote follow-up group (3.3 days vs. 4.0 days) and was shorter both for patients with ICD (3.0 days vs. 3.6 days) and CRT (3.8 days vs. 4.7 days) implants.
The mortality rate was not significantly different between the follow-up groups for the ICD devices (P = 0.31) or the CRT devices with defibrillator (P = 0.46).
Conclusions
There is limited clinical trial information on the effectiveness of RMSs for PMs. However, for RMSs for ICD devices, multiple cohort studies and 2 large multicenter RCTs demonstrated feasibility and significant reductions in in-office clinic follow-ups with RMSs in the first year post implantation. The detection rates of clinically significant events (and asymptomatic events) were higher, and the time to a clinical decision for these events was significantly shorter, in the remote follow-up groups than in the in-office follow-up groups. The earlier detection of clinical events in the remote follow-up groups, however, was not associated with lower morbidity or mortality rates in the 1-year follow-up. The substitution of almost all the first year in-office clinic follow-ups with RM was also not associated with an increased health care utilization such as emergency department visits or hospitalizations.
The follow-up in the trials was generally short-term (up to 1 year), providing only a limited assessment of potential longer-term device/lead integrity complications or issues. None of the studies compared the different RMSs, particularly RMSs involving patient-scheduled transmissions versus automatic transmissions. Patients’ acceptance of and satisfaction with RM were reported to be high, but the impact of RM on patients’ health-related quality of life, particularly its psychological aspects, was not evaluated thoroughly. Patients who are not technologically competent, or who have hearing or other physical or mental impairments, were identified as potentially disadvantaged with remote surveillance. Cohort studies consistently identified subgroups of patients who preferred in-office follow-up. Costs and workflow impact to the health care system were evaluated only in a limited way, in European or American clinical settings.
Internet-based device-assisted RMSs involve a new approach to monitoring patients, their disease progression, and their CIEDs. Remote monitoring also has the potential to improve the current postmarket surveillance systems of evolving CIEDs and their ongoing hardware and software modifications. At this point, however, there is insufficient information to evaluate the overall impact to the health care system, although the time saving and convenience to patients and physicians associated with a substitution of in-office follow-up by RM is more certain. The broader issues surrounding infrastructure, impacts on existing clinical care systems, and regulatory concerns need to be considered for the implementation of Internet-based RMSs in jurisdictions involving different clinical practices.
PMCID: PMC3377571  PMID: 23074419
