1.  Retrospective evaluation of adverse transfusion reactions following blood product transfusion from a tertiary care hospital: A preliminary step towards hemovigilance 
Background:
The goal of hemovigilance is to increase the safety and quality of blood transfusion. Identification of adverse reactions will help in taking appropriate steps to reduce their incidence and make the blood transfusion process as safe as possible.
Aims:
To determine the frequency and types of transfusion reactions (TRs) occurring in patients, as reported to the blood bank at our institute.
Materials and Methods:
A retrospective review of all TRs reported to the blood bank at the All India Institute of Medical Sciences between December 2007 and April 2012 was performed. All TRs were evaluated in the blood bank and classified using standard definitions.
Results:
During the study period, a total of 380,658 blood and blood component units were issued by our blood bank. Of the 196 adverse reactions reported under the hemovigilance system, the most common type observed was allergic reaction, 55.1% (n = 108), followed by febrile non-hemolytic transfusion reaction (FNHTR), 35.7% (n = 70). Less frequently observed reactions were anaphylactoid reactions, 5.1% (n = 10), acute non-immune HTRs, 2.6% (n = 5), circulatory overload, 0.5% (n = 1), transfusion-related acute lung injury, 0.5% (n = 1), and delayed HTRs, 0.5% (n = 1). Not a single case of bacterial contamination was observed.
Conclusion:
The frequency of TRs in our patients was found to be 0.05% (196 out of 380,658). This may underestimate the true incidence because of under-reporting. Blood transfusion consultants should create awareness of safe transfusion practices among their clinical counterparts so that a proper hemovigilance system can be established to provide better patient care.
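The reported rates follow directly from the abstract's counts. As a quick worked illustration (a minimal Python sketch written for this listing, not code from the study):

```python
# Reproduce the rate arithmetic reported above from the raw counts.
reactions = {
    "allergic": 108,
    "FNHTR": 70,
    "anaphylactoid": 10,
    "acute non-immune HTR": 5,
    "circulatory overload": 1,
    "TRALI": 1,
    "delayed HTR": 1,
}
total_reactions = sum(reactions.values())  # 196
units_issued = 380_658

# Overall reaction frequency: 196 / 380,658 ~= 0.05%
print(f"overall rate: {100 * total_reactions / units_issued:.2f}%")

# Share of each reaction type among all reported reactions
for name, count in reactions.items():
    print(f"{name}: {100 * count / total_reactions:.1f}%")
```

Running this reproduces the figures quoted above (55.1% allergic, 35.7% FNHTR, and an overall rate of 0.05%).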
doi:10.4103/0973-6247.115564
PMCID: PMC3757769  PMID: 24014939
Adverse transfusion reactions; blood transfusion; hemovigilance
2.  Normative evaluation of blood banks in the Brazilian Amazon region in respect to the prevention of transfusion-transmitted malaria 
Objective
To evaluate blood banks in the Brazilian Amazon region with regard to structure and procedures directed toward the prevention of transfusion-transmitted malaria (TTM).
Methods
This was a normative evaluation based on the Brazilian National Health Surveillance Agency (ANVISA) Resolution RDC No. 153/2004. Ten blood banks were included in the study and classified as ‘adequate’ (≥80 points), ‘partially adequate’ (from 50 to 80 points), or ‘inadequate’ (<50 points). The following components were evaluated: ‘donor education’ (5 points), ‘clinical screening’ (40 points), ‘laboratory screening’ (40 points) and ‘hemovigilance’ (15 points).
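As an illustration of the rubric just described, the sketch below (Python; our illustration, not the study's instrument) encodes the component maxima and cut-offs. The abstract leaves the boundary at exactly 80 points ambiguous between 'adequate' and 'partially adequate'; here 80 is treated as 'adequate'.

```python
# Component score ceilings from RDC No. 153/2004 as summarized in the abstract.
COMPONENT_MAX = {
    "donor education": 5,
    "clinical screening": 40,
    "laboratory screening": 40,
    "hemovigilance": 15,
}

def classify(scores: dict) -> str:
    """Sum the four component scores (max 100) and apply the cut-offs."""
    for name, value in scores.items():
        if not 0 <= value <= COMPONENT_MAX[name]:
            raise ValueError(f"{name} score out of range: {value}")
    total = sum(scores.values())
    if total >= 80:
        return "adequate"            # >= 80 points
    if total >= 50:
        return "partially adequate"  # 50-79 points (boundary assumption)
    return "inadequate"              # < 50 points

# Example: a blood bank near the overall median score of 49.8
print(classify({"donor education": 4, "clinical screening": 26,
                "laboratory screening": 15, "hemovigilance": 4.8}))  # inadequate
```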
Results
The overall median score was 49.8 (minimum = 16; maximum = 78). Five blood banks were classified as ‘inadequate’ and five as ‘partially adequate’. The median clinical screening score was 26 (minimum = 16; maximum = 32). The median laboratory screening score was 20 (minimum = 0; maximum = 32). Eight blood banks performed laboratory tests for malaria; six tested all donations. Seven used thick smears, but only one performed this procedure in accordance with Ministry of Health requirements. One service had a Program of External Quality Evaluation for malaria testing. With regard to hemovigilance, two institutions reported having procedures to detect cases of transfusion-transmitted malaria.
Conclusion
Malaria is neglected as a blood-borne disease in the blood banks of the Brazilian Amazon region. None of the institutions were classified as ‘adequate’ in the overall classification or with regard to clinical screening and laboratory screening. Blood bank professionals, the Ministry of Health and Health Surveillance service managers need to pay more attention to this matter so that the safety procedures required by law are complied with.
doi:10.1016/j.bjhh.2014.09.002
PMCID: PMC4318476  PMID: 25453648
Health evaluation; Malaria; Blood banks; Donor selection; Blood safety
3.  Pathogen Inactivation of Platelet and Plasma Blood Components for Transfusion Using the INTERCEPT Blood System™ 
Summary
Background
The transmission of pathogens via blood transfusion is still a major threat. Expert conferences established the need for a pro-active approach and concluded that the introduction of a pathogen inactivation/reduction technology requires a thorough safety profile, a comprehensive pre-clinical and clinical development and an ongoing hemovigilance program.
Material and Methods
The INTERCEPT Blood System utilizes amotosalen and UVA light and enables the treatment of platelets and plasma in the same device. Preclinical studies of pathogen inactivation and toxicology and a thorough program of clinical studies have been conducted, and an active hemovigilance program has been established.
Results
INTERCEPT shows robust efficacy of inactivation for viruses, bacteria (including spirochetes), protozoa and leukocytes, as well as large safety margins. Furthermore, it integrates well into routine blood center operations. The clinical study program demonstrates successful use in very diverse patient groups. The hemovigilance program shows safety and tolerability in routine use. Approximately 700,000 INTERCEPT-treated products have been transfused worldwide. The system has been in clinical use since class III CE-mark registration in 2002. Its safety and efficacy have been shown in routine use and during an epidemic.
Conclusion
The INTERCEPT Blood System for platelets and plasma offers enhanced safety for the patient and protection against transfusion-transmitted infections.
doi:10.1159/000323937
PMCID: PMC3132977  PMID: 21779203
Pathogen inactivation; Platelets; Plasma; Amotosalen
4.  The importance of hemovigilance in the transmission of infectious diseases 
Background
Hemovigilance is an organized system of surveillance throughout the transfusion chain intended to evaluate information in order to prevent the appearance or recurrence of adverse reactions related to the use of blood products.
Objective
The aims of this study were to assess the late reporting of incidents related to possible seroconversion with respect to age, marital status, and ethnic background; annual variations in late reporting; the number of reports opened and closed; seroconversion of donors; and transfusions of blood products within the window period.
Methods
This retrospective, descriptive study used data on blood donations at the blood bank in Uberaba from 2004 to 2011. Socio-epidemiological characteristics of the donors and serology test results of donors and recipients were analyzed with respect to the late reporting of incidents related to possible seroconversion. The chi-square test, odds ratios, and a regression model were used for statistical analysis.
Results
From 2004 to 2011, the blood bank in Uberaba collected 117,857 blood bags, 284 (0.24%) of which were investigated for late-reported incidents. The typical donor profile was under 29 years old, unmarried, and non-White. Differences in age (p-value < 0.0001), marital status (p-value = 0.0002) and ethnic background (p-value < 0.0001) were found to be statistically significant. There was no statistical difference between men and women (0.24% and 0.23%, respectively; p-value = 0.951). The number of late-reported incidents increased until 2008, followed by a downward trend until 2011. There were twelve cases of seroconversion in subsequent donations (seven human immunodeficiency virus, four hepatitis B and one hepatitis C), with human immunodeficiency virus infection proven in only one recipient after screening.
Conclusion
The twelve cases of seroconversion in donors, with subsequent infection proven in one recipient, underscore the importance of this tool to increase transfusion safety.
doi:10.5581/1516-8484.20130040
PMCID: PMC3728130  PMID: 23904807
Blood safety; Serology; Blood donors; Blood transfusion/adverse effects; Communicable diseases/transmission; Quality assurance, health care; Retrospective studies
5.  An audit of blood bank services 
Background:
An audit is a written series of simple, direct questions which, when answered and reviewed, tell whether the laboratory is performing its procedures, activities, and policies correctly and on time.
Aim:
The aim of this study is to briefly highlight the importance of audit in blood bank services.
Materials and Methods:
An audit of blood bank services was carried out in the blood bank of a tertiary care hospital in Central India using the tool kit (comprising checklists) developed by the Directorate General of Health Services, Dhaka, and WHO (July 2008).
Results:
After going through these checklists, we observed that there was no system for assessing the training needs of staff in the blood bank. There was no provision for a duty doctor's room, expert room, medical technologist room, or duty care service. There was no checklist for routine observation of hemolysis and deterioration of blood and plasma. There was no facility for a separate private interview to exclude sexual disease in the donor. Requisition forms were not properly filled in with the indications for blood transfusion. There was no facility for notification of donors who are permanently deferred. There were no documented records for donors who were either temporarily or permanently deferred on the basis of clinical examination, history, or serological examination. It was found that aprons, caps, and masks were not worn properly except in the serology laboratory. When the requisition forms for blood transfusions were audited, it was found that many were without indications.
Conclusion:
Regular audits of blood bank services need to be initiated in all blood banks, and the results need to be discussed among the management, colleagues, and staff of the blood bank. These results will provide a good opportunity for finding strategies to improve blood bank services with appropriate and safe use of blood.
doi:10.4103/2277-9531.127568
PMCID: PMC3977393  PMID: 24741651
Audit; blood bank; checklists; quality control
6.  French Haemovigilance Data on Platelet Transfusion 
Summary
The Agence Française de Sécurité Sanitaire des Produits de Santé (Afssaps; French Health Products Safety Agency) is responsible, through its hemovigilance unit, for the organization and functioning of the national hemovigilance network. In accordance with French law, it receives all data on adverse transfusion reactions regardless of their severity. With the aim of evaluating the tolerance of two kinds of labile blood products (LBP), pooled platelet concentrates (PP) and apheresis platelet concentrates (APC), we screened the French national database from January 1, 2000 to December 31, 2006. We observed that the rate of transfusion incident reports is more than twice as high with APC (8.61 per 1,000 LBP) as with PP (4.21 per 1,000 LBP). The difference between these two ratios is statistically significant, as shown by a chi-square test (e = 21.00 with α = 5%). The risk of suffering adverse reactions of any type, except for alloimmunization, is higher with APC, and the major diagnosis related to APC is allergic reaction (1 per 200 APC issued), even though these allergic reactions are rarely serious. The new French National Hemovigilance Commission should set up a working group to evaluate this topic, and above all the impact of the additive solutions in use since 2005, in order to put forward preventive measures.
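For orientation, a comparison of two report rates like the one above can be run as a Pearson chi-square test on a 2x2 table of reports versus issued units. The sketch below (Python) is illustrative only: the unit volumes are invented, not the Afssaps figures, so the statistic will not equal the reported 21.00.

```python
def chi_square_2x2(a_events, a_total, b_events, b_total):
    """Pearson chi-square statistic for event rates in two groups (1 df)."""
    table = [
        [a_events, a_total - a_events],   # group A: events, non-events
        [b_events, b_total - b_events],   # group B: events, non-events
    ]
    grand = a_total + b_total
    col_totals = [table[0][0] + table[1][0], table[0][1] + table[1][1]]
    row_totals = [a_total, b_total]
    chi2 = 0.0
    for i in range(2):
        for j in range(2):
            expected = row_totals[i] * col_totals[j] / grand
            chi2 += (table[i][j] - expected) ** 2 / expected
    return chi2

# Hypothetical volumes giving the quoted rates of 8.61 and 4.21 per 1,000 units
stat = chi_square_2x2(a_events=861, a_total=100_000,   # APC-like rate
                      b_events=421, b_total=100_000)   # PP-like rate
print(f"chi2 = {stat:.1f}")  # compare with 3.84, the 5% critical value at 1 df
```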
doi:10.1159/000118887
PMCID: PMC3076346  PMID: 21512639
French hemovigilance; Apheresis platelet concentrate; Pooled platelet concentrate; Adverse reaction
7.  Extracorporeal Lung Support Technologies – Bridge to Recovery and Bridge to Lung Transplantation in Adult Patients 
Executive Summary
For cases of acute respiratory distress syndrome (ARDS) and progressive chronic respiratory failure, the first-choice treatment is mechanical ventilation. For decades, this method has been used to support critically ill patients in respiratory failure. Despite its life-saving potential, however, several experimental and clinical studies have suggested that ventilator-induced lung injury can adversely affect the lungs and patient outcomes. Current opinion is that by reducing the pressure and volume of gas delivered to the lungs during mechanical ventilation, the stress applied to the lungs is eased, enabling them to rest and recover. In addition, mechanical ventilation may fail to provide adequate gas exchange, and patients may thus suffer from severe hypoxia and hypercapnia. For these reasons, extracorporeal lung support technologies may play an important role in the clinical management of patients with lung failure, allowing not only the transfer of oxygen and carbon dioxide (CO2) but also buying the lungs the time needed to rest and heal.
Objective
The objective of this analysis was to assess the effectiveness, safety, and cost-effectiveness of extracorporeal lung support technologies in the improvement of pulmonary gas exchange and the survival of adult patients with acute pulmonary failure and those with end-stage chronic progressive lung disease as a bridge to lung transplantation (LTx). The application of these technologies in primary graft dysfunction (PGD) after LTx is beyond the scope of this review and is not discussed.
Clinical Applications of Extracorporeal Lung Support
Extracorporeal lung support technologies [i.e., Interventional Lung Assist (ILA) and extracorporeal membrane oxygenation (ECMO)] have been advocated for use in the treatment of patients with respiratory failure. These techniques do not treat the underlying lung condition; rather, they improve gas exchange while enabling the implantation of a protective ventilation strategy to prevent further damage to the lung tissues imposed by the ventilator. As such, extracorporeal lung support technologies have been used in three major lung failure case types:
As a bridge to recovery in acute lung failure – for patients with injured or diseased lungs to give their lungs time to heal and regain normal physiologic function.
As a bridge to LTx – for patients with irreversible end stage lung disease requiring LTx.
As a bridge to recovery after LTx – used as lung support for patients with PGD or severe hypoxemia.
Ex-Vivo Lung Perfusion and Assessment
Recently, the evaluation and reconditioning of donor lungs ex-vivo has been introduced into clinical practice as a method of improving the rate of donor lung utilization. Generally, about 15% to 20% of donor lungs are suitable for LTx, but these figures may increase with the use of ex-vivo lung perfusion. The ex-vivo evaluation and reconditioning of donor lungs is currently performed at the Toronto General Hospital (TGH) and preliminary results have been encouraging (Personal communication, clinical expert, December 17, 2009). If its effectiveness is confirmed, the use of the technique could lead to further expansion of donor organ pools and improvements in post-LTx outcomes.
Extracorporeal Lung Support Technologies
ECMO
The ECMO system consists of a centrifugal pump, a membrane oxygenator, inlet and outlet cannulas, and tubing. Blood is drawn from the patient by the centrifugal pump, and the exchange of oxygen and CO2 takes place in the oxygenator, which delivers the reoxygenated blood back into one of the patient’s veins or arteries. Additional ports may be added for haemodialysis or ultrafiltration.
Two different techniques may be used to introduce ECMO: venoarterial and venovenous. In the venoarterial technique, cannulation is through either the femoral artery and the femoral vein, or through the carotid artery and the internal jugular vein. In the venovenous technique cannulation is through both femoral veins or a femoral vein and internal jugular vein; one cannula acts as inflow or arterial line, and the other as an outflow or venous line. Venovenous ECMO will not provide adequate support if a patient has pulmonary hypertension or right heart failure. Problems associated with cannulation during the procedure include bleeding around the cannulation site and limb ischemia distal to the cannulation site.
ILA
Interventional Lung Assist (ILA) is used to remove excess CO2 from the blood of patients in respiratory failure. The system is characterized by a novel, low-resistance gas exchange device with a diffusion membrane composed of polymethylpentene (PMP) fibres. These fibres are woven into a complex configuration that maximizes the exchange of oxygen and CO2 by simple diffusion. The system is also designed to operate without the help of an external pump, though one can be added if higher blood flow is required. The device is then applied across an arteriovenous shunt between the femoral artery and femoral vein. Depending on the size of the arterial cannula used and the mean systemic arterial pressure, a blood flow of up to 2.5 L/min can be achieved (up to 5.5 L/min with an external pump). The cannulation is performed after intravenous administration of heparin.
Recently, the first commercially available extracorporeal membrane ventilator (NovaLung GmbH, Hechingen, Germany) was approved for clinical use by Health Canada for patients in respiratory failure. The system has been used in more than 2,000 patients with various indications in Europe, and was used for the first time in North America at the Toronto General Hospital in 2006.
Evidence-Based Analysis
The research questions addressed in this report are:
Does ILA/ECMO facilitate gas exchange in the lungs of patients with severe respiratory failure?
Does ILA/ECMO improve the survival rate of patients with respiratory failure caused by a range of underlying conditions including patients awaiting LTx?
What are the possible serious adverse events associated with ILA/ECMO therapy?
To address these questions, a systematic literature search was performed on September 28, 2009 using OVID MEDLINE, MEDLINE In-Process and Other Non-Indexed Citations, EMBASE, the Cumulative Index to Nursing & Allied Health Literature (CINAHL), the Cochrane Library, and the International Agency for Health Technology Assessment (INAHTA) for studies published from January 1, 2005 to September 28, 2008. Abstracts were reviewed by a single reviewer and, for those studies meeting the eligibility criteria, full-text articles were obtained. Reference lists were also examined for any additional relevant studies not identified through the search. Articles with an unknown eligibility were reviewed with a second clinical epidemiologist and then a group of epidemiologists until consensus was established.
Inclusion Criteria
Studies in which ILA/ECMO was used as a bridge to recovery or bridge to LTx
Studies containing information relevant to the effectiveness and safety of the procedure
Studies including at least five patients
Exclusion Criteria
Studies reporting the use of ILA/ECMO for inter-hospital transfers of critically ill patients
Studies reporting the use of ILA/ECMO in patients during or after LTx
Animal or laboratory studies
Case reports
Outcomes of Interest
Reduction in partial pressure of CO2
Correction of respiratory acidosis
Improvement in partial pressure of oxygen
Improvement in patient survival
Frequency and severity of adverse events
The search yielded 107 citations in Medline and 107 citations in EMBASE. After reviewing the information provided in the titles and abstracts, eight citations were found to meet the study inclusion criteria. One study was then excluded because of an overlap in the study population with a previous study. Reference checking did not produce any additional studies for inclusion. Seven case series studies, all conducted in Germany, were thus included in this review (see Table 1).
Also included is the recently published CESAR trial, a multicentre RCT in the UK in which ECMO was compared with conventional intensive care management. The results of the CESAR trial were published when this review was initiated. In the absence of any other recent RCT on ECMO, the results of this trial were considered for this assessment and no further searches were conducted. A literature search was then conducted for the application of ECMO as a bridge to LTx (January 1, 2005 to present). A total of 127 citations on this topic were identified and reviewed, but none were found to have examined the use of ECMO as a bridge to LTx.
Quality of Evidence
To grade the quality of evidence, the grading system formulated by the GRADE working group and adopted by MAS was applied. The GRADE system classifies the quality of a body of evidence as high, moderate, low, or very low according to four key elements: study design, study quality, consistency across studies, and directness.
Results
Trials on ILA
Of the seven studies identified, six involved patients with ARDS caused by a range of underlying conditions; the seventh included only patients awaiting LTx. All studies reported the rate of gas exchange and respiratory mechanics before ILA and for up to 7 days of ILA therapy. Four studies reported the means and standard deviations of blood gas transfer and arterial blood pH, which were used for meta-analysis.
Fischer et al. reported their first experience with the use of ILA as a bridge to LTx. In their study, 12 patients at high-urgency status for LTx, who also had severe ventilation-refractory hypercapnia and respiratory acidosis, were connected to ILA prior to LTx. Seven patients had a systemic infection or sepsis prior to ILA insertion. Six hours after initiation of ILA, the partial pressure of CO2 in arterial blood significantly decreased (P < .05) and arterial blood pH significantly improved (P < .05) and remained stable for one week (the last time point reported). The partial pressure of oxygen in arterial blood improved from 71 mmHg to 83 mmHg 6 hours after insertion of ILA. The ratio of PaO2/FiO2 improved from 135 at baseline to 168 at 24 hours after insertion of ILA but returned to baseline values in the following week.
Trials on ECMO
The UK-based CESAR trial was conducted to assess the effectiveness and cost of ECMO therapy for severe, acute respiratory failure. The trial protocol was published in 2006 and details of the methods used for the economic evaluation were published in 2008. The study itself was a pragmatic trial (similar to a UK trial of neonatal ECMO), in which best standard practice was compared with an ECMO protocol. The trial involved 180 patients with acute but potentially reversible respiratory failure, each with a Murray score of ≥ 3.0 or uncompensated hypercapnia at a pH of < 7.2. Enrolled patients were randomized in a 1:1 ratio to receive either conventional ventilation treatment or ECMO while on a ventilator. Conventional management included intermittent positive pressure ventilation, high frequency oscillatory ventilation, or both. As a pragmatic trial, a specific management protocol was not followed; rather, the treatment centres were advised to follow a low-volume, low-pressure ventilation strategy. A tidal volume of 4 to 8 mL/kg body weight and a plateau pressure of < 30 cm H2O were recommended.
Conclusions
ILA
Bridge to recovery
No RCTs or observational studies compared ILA to other treatment modalities.
Case series have shown that ILA therapy results in significant CO2 removal from arterial blood and correction of respiratory acidosis, as well as an improvement in oxygen transfer.
ILA therapy enabled a lowering of respiratory settings to protect the lungs without causing a negative impact on arterial blood CO2 and arterial blood pH.
The impact of ILA on patient long-term survival cannot be determined through the studies reviewed.
In-hospital mortality across studies ranged from 20% to 65%.
Ischemic complications were the most frequent adverse events following ILA therapy.
Leg amputation is a rare but possible outcome of ILA therapy, having occurred in about 0.9% of patients in these case series. New techniques involving the insertion of an additional cannula into the femoral artery to perfuse the leg may lower this rate.
Bridge to LTx
The results of one case series (n=12) showed that ILA effectively removes CO2 from arterial blood and corrects respiratory acidosis in patients with ventilation-refractory hypercapnia awaiting LTx.
Eight of the 12 patients (67%) awaiting LTx were successfully transplanted, and one-year survival for those transplanted was 80%.
Since all studies are case series, the grade of the evidence for these observations is classified as “LOW”.
ECMO
Bridge to recovery
Based on the results of a pragmatic trial and an intention-to-treat analysis, referral of patients to an ECMO-based centre significantly improves patient survival without disability compared with conventional ventilation. The results of the CESAR trial showed that:
For patients with information about disability, survival without severe disability was significantly higher in the ECMO arm.
Assuming that the three patients in the conventional ventilation arm who did not have information about severe disability were all disabled, the results were also significant.
Assuming that none of these patients were disabled, the results were of borderline significance.
A greater, though not statistically significant, proportion of patients in the ECMO arm survived.
The rate of serious adverse events was higher among patients in the ECMO group.
The grade of evidence for the above observations is classified as “HIGH”.
Bridge to LTx
No studies fitting the inclusion criteria were identified.
There are no accurate data on the use of ECMO in patients awaiting LTx.
Economic Analysis
The objective of the economic analysis was to determine the costs associated with extracorporeal lung support technologies for bridge to LTx in adults. A literature search was conducted for which the target population was adults eligible for extracorporeal lung support. The primary analytic perspective was that of the Ministry of Health and Long-Term Care (MOHLTC). Articles published in English and fitting the following inclusion criteria were reviewed:
Full economic evaluations, including cost-effectiveness analyses (CEA), cost-utility analyses (CUA), and cost-benefit analyses (CBA);
Economic evaluations reporting incremental cost-effectiveness ratios (ICERs), i.e., cost per quality-adjusted life year (QALY), life years gained (LYG), or cost per event avoided; and
Studies in patients eligible for lung support technologies as a bridge to lung transplantation.
The search yielded no articles reporting comparative economic analyses.
Resource Use and Costs
Costs associated with both ILA and ECMO (outlined in Table ES-1) were obtained from the University Health Network (UHN) case costing initiative (personal communication, UHN, January 2010). Consultation with a clinical expert in the field was also conducted to verify resource utilization. The consultant was situated at the UHN in Toronto. The UHN has one ECMO machine, which cost approximately $100,000. The system is 18 years old and is used an average of 3 to 4 times a year with 35 procedures being performed over the last 9 years. The disposable cost per patient associated with ECMO is, on average, $2,200. There is a maintenance cost associated with the machine (not reported by the UHN), which is currently absorbed by the hospital’s biomedical engineering department.
The average capital cost of an ILA device is $7,100 per device, per patient, while the average cost of the reusable pump is $65,000. The UHN has performed 16 of these procedures over the last 2.5 years. Similarly, there is a maintenance cost that was not reported by the UHN but is absorbed by the hospital’s biomedical engineering department.
Resources Associated with Extracorporeal Lung Support Technologies
Hospital costs associated with ILA were based on the average cost incurred by the hospital for 11 cases performed in the FY 07/08 (personal communication, UHN, January 2010). The resources incurred with this hospital procedure included:
Device and disposables
OR transplant
Surgical ICU
Laboratory work
Medical imaging
Pharmacy
Clinical nutrition
Physiotherapy
Occupational therapy
Speech and language pathology
Social work
The average length of stay in hospital was 61 days for ILA (range: 5 to 164 days) and the average direct cost was $186,000 per case (range: $19,000 to $552,000). This procedure has a high staffing requirement to monitor patients in hospital, driving up the average cost per case.
PMCID: PMC3415698  PMID: 23074408
8.  Performance review of the National Blood Safety Improvement Project in Korea (2004-2009) 
Blood Research  2013;48(2):139-144.
Background
In 2004, the Korean government and blood transfusion community deliberated on the issue of a national blood system reform and agreed to implement a 5-year project (2004-2009) to further improve safety measures. Our study delineates the basis of the current national blood program and analyzes the performance of this 5-year project initiated by the Korean government.
Methods
A performance review of the 5-year project was conducted from May 2009 to February 2010 using various approaches. Numerous data and documentation were collected from the Korean Red Cross and the Korean Centers for Disease Control and Prevention and reviewed by experts. Approximately 20 interviews with representatives of stakeholder groups were conducted to gather information, opinions, and perceptions. We conducted a nationwide field survey on a total of 144 blood donor centers.
Results
Among the 5 major categories of the 5-year project, blood donor recruitment, laboratory testing, and product manufacturing were improved in terms of quality performance. Specifically, the government's financial support ensured that the infrastructure of blood donor centers and blood laboratory centers improved. The pivotal role of the government contributed to improvements in the national blood program and enhanced national surveillance for blood safety.
Conclusion
Korea has made a tremendous effort, with positive outcomes, to provide safety measures for blood products transfused in its citizens. In all areas of blood management, from blood donation to transfusion, continuous developments in monitoring safety standards and practices are paramount.
doi:10.5045/br.2013.48.2.139
PMCID: PMC3698400  PMID: 23826584
Blood safety; Improvement project; Quality performance
9.  Pulmonary Transfusion Reactions 
Summary
Background
In recent years, pulmonary transfusion reactions have gained increasing importance as serious adverse transfusion events.
Methods
Review of the literature.
Results
Pulmonary transfusion reactions are not extremely rare and are, according to hemovigilance data, important causes of transfusion-induced major morbidity and death. They can be classified as primary, with predominant pulmonary injury, or secondary, as part of another transfusion reaction. Primary reactions include transfusion-related acute lung injury (TRALI), transfusion-associated circulatory overload (TACO) and transfusion-associated dyspnea (TAD). Secondary pulmonary reactions are often observed in the wake of hemolytic transfusion reactions, hypotensive/anaphylactic reactions, and transfusion-transmitted bacterial infections.
Conclusion
Knowledge and careful management of cases of pulmonary transfusion reactions are essential for correct reporting to blood services and hemovigilance systems. Careful differentiation between TRALI and TACO is important for taking adequate preventive measures.
doi:10.1159/000151349
PMCID: PMC3076325  PMID: 21512622
Acute lung injury; Transfusion reaction; Transfusion risks
10.  Patients’ positive identification systems 
Blood Transfusion  2009;7(4):313-318.
Background
Blood safety must be maintained throughout the whole transfusion chain to prevent the transfusion of incorrect blood components. The estimated risk of an incorrect transfusion is in the order of 1 per 10,000 units of blood. Although several kinds of errors contribute to “wrong blood” events, 70% of errors occur in clinical areas, with the most common being failure of the pre-transfusion bedside checking procedure.
Materials and Methods
Several methods are available to reduce such errors. The I-TRAC Plus system by Immucor consists of a bar-coded identification wristband and a handheld portable computer that identifies patients and blood bags with a scanner and prints the information on a portable printer. The labels attached to the blood order forms and to the sample tubes are read and recorded in the blood bank’s informatics system (EmoNet INSIEL). Labels showing the bar code of the assigned number, which includes the ID number of the patient, the ID number of the unit and a code identifying the kind of product and use (allogeneic or autologous), are generated and applied to the blood components. The transfusions are administered after checking the unit and the patient’s wristband using the scanner of a portable PC.
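Conceptually, the bedside step reduces to checking that the recipient ID encoded on the unit label matches the patient ID read from the wristband. The Python sketch below shows that logic only; the field names and record layout are our assumptions for illustration, not the I-TRAC Plus or EmoNet INSIEL data model.

```python
from dataclasses import dataclass

@dataclass
class WristbandScan:
    patient_id: str            # ID read from the patient's bar-coded wristband

@dataclass
class UnitLabelScan:
    patient_id: str            # intended recipient encoded on the unit label
    unit_id: str
    product_code: str          # product kind and use, e.g. allogeneic/autologous

def bedside_check(wristband: WristbandScan, unit: UnitLabelScan) -> bool:
    """Allow the transfusion only if the scanned unit was assigned to the
    patient wearing the scanned wristband."""
    if wristband.patient_id != unit.patient_id:
        print(f"MISMATCH: unit {unit.unit_id} is not assigned to this patient")
        return False
    print(f"OK: unit {unit.unit_id} ({unit.product_code}) matches patient")
    return True

# Example: the handheld blocks a transfusion when the two scans disagree
bedside_check(WristbandScan(patient_id="P-0042"),
              UnitLabelScan(patient_id="P-0099", unit_id="U-7731",
                            product_code="RBC/allogeneic"))
```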
Results
In 5 years a total of 71,400 units of blood components were transfused to 15,430 patients using the I-TRAC Plus system. The system prevented 12 cases of mis-identification of patients (5 in 2003, 0 in 2004, 1 in 2005, 1 in 2006 and 5 in 2007).
Conclusions
In 2003 we introduced the use of a bar-code matching system between a patient’s wristband and the blood bag to avoid mistakes at the bedside. In 5 years the system provided benefits by avoiding errors in the identification of patients, thus preventing “wrong blood” transfusions.
doi:10.2450/2009.0001-09
PMCID: PMC2782809  PMID: 20011643
Recipient identification; transfusion safety; mistransfusion
11.  Prevalance of ABO and Rhesus Blood Groups in Blood Donors: A Study from a Tertiary Care Teaching Hospital of Kumaon Region of Uttarakhand 
Background: ABO and Rhesus (Rh) blood group antigens are hereditary characters and are useful in population genetic studies, in resolving medico-legal issues and, more importantly, for the immunologic safety of blood during transfusion.
Aims: This study aimed to determine the distribution pattern of the ABO and Rh blood groups among blood donors in the Kumaon region of Uttarakhand and compare it with data from similar studies within India and around the world.
Design: This is a retrospective study carried out at the blood bank of Shushila Tewari Hospital of Government Medical College, Haldwani, from January 2012 to December 2013.
Materials and Methods: The study was conducted on 12,701 blood donors. ABO and Rh typing was done using the slide agglutination method with ABO and Rh antisera (Tulip Diagnostics Ltd). Doubtful cases were confirmed by the tube agglutination method and reverse grouping using known pooled A and B cells. The age group and sex of donors and the frequencies of ABO and Rh blood groups were reported as simple percentages.
Results: Most donors belonged to the 18-35 years age group (84.28%). Male donors outnumbered female donors, the ratio being 352:1. Replacement donors (99.71%) far outnumbered voluntary donors (0.91%). The most common blood group was B (32.07%) and the least common was AB (10.53%); blood groups ‘O’ and ‘A’ had the same frequency. The prevalence of Rhesus-positive and Rhesus-negative donors in the studied population was 94.49% and 5.51%, respectively. Among Rhesus-positive donors the ABO frequency order was B > O > A > AB; among Rhesus-negative donors it was B > A > O > AB.
Conclusion: Knowledge of frequencies of the different blood groups is very important for blood banks and transfusion service policies that could contribute significantly to the National Health System.
doi:10.7860/JCDR/2014/9794.5355
PMCID: PMC4316263  PMID: 25653957
ABO; Blood groups; Blood donors; Kumaon; Rhesus
12.  Teaching transfusion medicine: current situation and proposals for proper medical training 
The current curricula in medical schools and hospital residency programs worldwide lack exposure to transfusion medicine and require the reformulation of academic programs. In many countries, training in blood transfusion is not currently offered to medical students or during residency. Clinical evidence indicates that blood transfusions occur more frequently than recommended, contributing to increased risk from this procedure. The rational use of blood and its components is therefore essential, given the frequency of undesirable reactions, the increasing demand for blood products, and the cost of the process. Significant improvements in knowledge of and skills in transfusion medicine are needed by both students and residents. Improvements are needed in both background knowledge and the practical application of this knowledge to improve safety. Studies show that hemovigilance has an impact on transfusion safety and helps to prevent the occurrence of transfusion-related adverse effects. To ensure that all these aspects of blood transfusion are properly addressed, many countries have instituted hospital transfusion committees. From this perspective, interventions performed during the training of medical students and residents, even the simplest, have proven effective in the acquisition of knowledge and medical skills, thereby leading to a reduction in the inappropriate use of blood. We would therefore like to emphasize the importance of exposing medical students and residents to blood services and transfusion medicine so that they acquire adequate medical training, and to discuss some changes to the current medical curricula regarding transfusion medicine that we judge critical.
doi:10.1016/j.bjhh.2014.11.004
PMCID: PMC4318849  PMID: 25638770
Blood transfusion; Advisory committees; Blood safety
13.  Left Ventricular Assist Devices 
Executive Summary
Objective
The objective of this health technology policy assessment was to determine the effectiveness and cost-effectiveness of using implantable ventricular assist devices in the treatment of end-stage heart failure.
Heart Failure
Heart failure is a complex syndrome that impairs the ability of the heart to maintain adequate blood circulation, resulting in multiorgan abnormalities and, eventually, death. In the period of 1994 to 1997, 38,702 individuals in Ontario had a first hospital admission for heart failure. Despite reported improvement in survival, the five-year mortality rate for heart failure is about 50%.
For patients with end-stage heart failure that does not respond to medical therapy, surgical treatment or traditional circulatory assist devices, heart transplantation (in appropriate patients) is the only treatment that provides significant patient benefit.
Heart Transplant in Ontario
With a shortage in the supply of donor hearts, patients are waiting longer for a heart transplant and may die before a donor heart is available. From 1999 to 2003, 55 to 74 people received a heart transplant in Ontario each year. Another 12 to 21 people died while waiting for a suitable donor heart. Of these, 1 to 5 deaths occurred in people under 18 years old. The rate-limiting factor in heart transplant is the supply of donor hearts. Without an increase in available donor hearts, attempts at prolonging the life of some patients on the transplant wait list could have a harmful effect on other patients who are pushed down the waiting list (a knock-on effect).
LVAD Technology
Ventricular assist devices (VADs) have been developed to provide circulatory assistance to patients with end-stage heart failure. These are small pumps that usually assist the damaged left ventricle (LVADs) and may be situated within the body (intracorporeal) or outside the body (extracorporeal). Some of these devices were designed for use in the right ventricle (RVAD) or both ventricles (bi-ventricular).
LVADs have been mainly used as a “bridge-to-transplant” for patients on a transplant waiting list. As well, they have been used as a “bridge-to-recovery” in acute heart failure, but this experience is limited. There has been an increasing interest in using LVAD as a permanent (destination) therapy.
Review of LVAD by the Medical Advisory Secretariat
The Medical Advisory Secretariat’s review included a descriptive synthesis of findings from five systematic reviews and 60 reports published between January 2000 and December 2003. Additional information was obtained through consultation and by searching the websites of Health Canada, the United Network of Organ Sharing, Organ Donation Ontario, and LVAD manufacturers.
Summary of Findings
Safety and Effectiveness
Previous HTAs and current Level 3 evidence from prospective non-randomized controlled studies showed that when compared to optimal medical therapy, LVAD support significantly improved the pre-transplant survival rates of heart transplant candidates waiting for a suitable donor heart (71% for LVAD and 36% for medical therapy). Pre-transplant survival rates reported ranged from 58% to 90% (median 74%). Improved transplant rates were also reported for people who received pre-transplant LVAD support (e.g. 67% for LVAD vs 33% for medical therapy). Reported transplant rates for LVAD patients ranged from 39% to 90% (median 71%).
Patient’s age greater than 60 years and pre-existing conditions of respiratory failure associated with septicemia, ventilation, and right heart failure were independent risk factors for mortality after the LVAD implantation.
LVAD support was shown to improve the New York Heart Association (NYHA) functional classification and quality of life of patients waiting for heart transplant. LVAD also enabled approximately 41% to 49% of patients to be discharged from hospital to wait for a heart transplant at home. However, over 50% of the discharged patients required re-hospitalization due to adverse events.
Post-transplant survival rates for LVAD-bridged patients were similar to or better than the survival rates of patients bridged by medical therapy.
LVAD support has been associated with serious adverse events, including infection (median 53%, range 6%–72%), bleeding (median 35%, range 8.6%–48%), thromboembolism (5%–37%), neurologic disorders (7%–28%), right ventricular failure (11%–26%), organ dysfunction (5%–50%) and hemolysis (6%–20%). Bleeding tends to occur in the first few post-implant days and is rare thereafter. It is fatal in 2%–7% of patients. Infection and thromboembolism occurred throughout the duration of the implant, though their frequency tended to diminish with time. Device malfunction has been identified as one of the major complications. Fatalities directly attributable to the devices were about 1% in short-term LVAD use. However, mechanical failure was the second most frequent cause of death in patients on prolonged LVAD support. Malfunctions mainly involved the external components, which could often be replaced with backup components.
LVAD has been used as a bridge-to-recovery in patients suffering from acute cardiogenic shock due to cardiomyopathy, myocarditis or cardiotomy. The survival rates were reported to be lower than in bridge-to-transplant (median 26%). Some of the bridge-to-recovery patients (14%–75%) required a heart transplant or remained on prolonged LVAD support. According to an expert in the field, experience with LVAD as a bridge-to-recovery technology has been more favourable in Germany than in North America, where it is not regarded as a major indication since evidence for its effectiveness in this setting is limited.
LVAD has also been explored as a destination therapy. A small, randomized, controlled trial (level 2 evidence) showed that LVAD significantly increased the 1-year survival rate of patients who had end-stage heart failure but were not eligible for a heart transplant (51% for LVAD vs 25% for medical therapy). However, improved survival was associated with an adverse event rate 2.35 times that of medically treated patients and a higher hospital re-admission rate. The 2-year survival rate on LVAD decreased to 23%, although it was still significantly better than that of patients on medical therapy (8%). The leading causes of death were sepsis (41%) and device failure (17%).
The FDA has given conditional approval for the permanent use of HeartMate SNAP VE LVAS in patients with end-stage heart failure who are not eligible for heart transplantation, although the long-term effect of this application is not known.
In Canada, four LVAD systems have been licensed for bridge-to-transplant only. The use of LVAD support raises ethical issues because of the implications of potential explantation that could be perceived as a withdrawal of life support.
Potential Impact on the Transplant Waiting List
With the shortage of donor hearts for adults, LVAD support probably would not increase the number of patients who receive a heart transplant. If LVAD-supported candidates are prioritized for urgent heart transplant, there will be a knock-on effect: other transplant candidates without LVAD support would be pushed down the list, facing longer waits, deterioration in health status, and death before a suitable donor heart becomes available.
Under the current policy for allocating donor hearts in Ontario, patients on LVAD support would be downgraded to Status 3 with a lower priority to receive a transplant. This would likely result in an expansion of the transplant waiting list with an increasing number of patients on prolonged LVAD support, which is not consistent with the indication of LVAD use approved by Health Canada.
There is indication in the United Kingdom that LVAD support in conjunction with an urgent transplant listing in the pediatric population may decrease the number of deaths on the waiting list without a harmful knock-on effect on other transplant candidates.
Conclusion
LVAD support as a bridge-to-transplant has been shown to improve the survival rate, functional status and quality of life of patients on the heart transplant waiting list. However, due to the shortage of donor hearts and the current heart transplant algorithm, LVAD support for transplant candidates of all age groups would likely result in an expansion of the waiting list and prolonged use of LVAD with significant budget implications but without increasing the number of heart transplants. Limited level 4 evidence showed that LVAD support in children yielded survival rates comparable to those in the adult population. The introduction of LVAD in the pediatric population would be more cost-effective and might not have a negative effect on the transplant waiting list.
PMCID: PMC3387736  PMID: 23074453
14.  Emergency blood transfusion services after the 2005 earthquake in Pakistan 
Background
On 8 October 2005, an earthquake measuring 7.6 on the Richter Scale struck the Himalayan region of Kashmir and Hazara divisions, killing an estimated 73,000 people. Soon after, a situation and response analysis of the emergency blood transfusion services was carried out in the affected areas to ascertain specific needs and suggest appropriate measures to assist in the disaster plan.
Method
A semistructured questionnaire, complete with a checklist and participatory observation method, was used to collect data between 12 and 20 October 2005. Study sites were Abbotabad, Mansehra and Muzzafarabad in Pakistan, and interviewees were surgeons and blood bank personnel.
Results
Of the seven major hospitals in the area, three (43%) had a functional blood transfusion service. Although the supply of voluntary blood was abundant, shortages of individual blood groups were noted at each centre. Quality assurance standards were either non-existent or inadequate. Only three blood banks had refrigerators, and these had limited storage capacities. A complete breakdown of infrastructure coupled with frequent power failures posed a serious threat to the safety of the blood. The continued aftershocks added to the problems. Although initial estimates of blood requirement were high, the actual demand noted later was much lower.
Discussion
Timely establishment of blood banks in disaster areas is a challenging task. Mobile blood banks can be advantageous in such situations. Organisation of blood transfusion services at a national level and development of a minimum standard of quality assurance in normal times should ensure safe emergency blood transfusion services when disaster strikes.
doi:10.1136/emj.2006.036848
PMCID: PMC2658145  PMID: 17183037
15.  Safety of the Blood Supply in Latin America 
Clinical Microbiology Reviews  2005;18(1):12-29.
Appropriate selection of donors, use of sensitive screening tests, and the application of a mandatory quality assurance system are essential to maintain the safety of the blood supply. Laws, decrees, norms, and/or regulations covering most of these aspects of blood transfusion exist in 16 of the 17 countries in Latin America that are the subject of this review. In 17 countries, there is an information system that, although still incomplete (there are no official reports on adverse events and incidents), allows us to establish the progress made in the status of the blood supply since 1993. Most advances originated in increased screening coverage for infectious diseases and better quality assurance. However, in 2001 to 2002, tainted blood may have caused infections in 12 of the 17 countries; no country reached the number of donors considered adequate to avoid blood shortages, i.e., 5% of the population, or significantly decreased the number of blood banks, although larger blood banks are more efficient and take advantage of economies of scale. In those years, paid donors still existed in four countries, and replacement donors made up >75% of the blood donors in another eight countries. In addition, countries did not report the number of voluntary donors who were repeat donors, i.e., the healthiest category. In spite of the progress made, more improvements are needed.
doi:10.1128/CMR.18.1.12-29.2005
PMCID: PMC544183  PMID: 15653816
16.  Hemovigilance Program–India 
A centralized hemovigilance program to assure patient safety and promote public health was launched for the first time in India on December 10, 2012, in 60 medical colleges in the first phase, along with a well-structured program for monitoring adverse reactions associated with blood transfusion and blood product administration. The National Institute of Biologicals (NIB) will be the National Coordinating Centre for Hemovigilance. This program will be implemented under the overall ambit of the Pharmacovigilance Program of India (PvPI), which is coordinated by the Indian Pharmacopoeia Commission (IPC). All medical colleges of the country will be enrolled in this program by the year 2016 in order to establish a National Centre of Excellence for Hemovigilance at the NIB, which will act as a global knowledge platform.
doi:10.4103/0973-6247.106744
PMCID: PMC3613669  PMID: 23559771
Hemovigilance; India; Medical colleges; Transfusion Reaction Reporting Form
17.  Poor procedures and quality control among non-affiliated blood centers in Burkina Faso: an argument for expanding the reach of the national blood transfusion center 
Transfusion  2011;51(7 Pt 2):1613-1618.
Introduction
The World Health Organization (WHO) recommends the creation of national blood transfusion services. Burkina Faso has a CNTS (Centre national de transfusion sanguine - National Blood Transfusion Center) but it currently covers only 53% of the national blood supply versus 47% produced by independent hospital blood banks.
Study design
To evaluate blood collection, testing, preparation and prescription practices in the regions of Burkina Faso that are not covered by the CNTS, we conducted a cross-sectional survey.
Methodology
Data were collected by trained professionals from May to June 2009, at 42 autonomous blood centers not covered by the CNTS.
Results
Blood collection was supervised in all sites by laboratory technicians without specific training. There was no marketing of community blood donation and no mobile collection. Donation was restricted to replacement (family) donors in 21.4% of sites. Pre-donation screening of donors was performed in 63.4% of sites, but some did not use written questionnaires. Testing for HIV, hepatitis B virus and syphilis was universal, although some sites did not screen for hepatitis C virus. In 83.3% of the sites, blood typing was performed without reverse ABO typing. In 97.6% of the sites, nurses acted alone or in conjunction with a physician to order blood transfusions.
Conclusion
Shortcomings in non-CNTS blood centers argue for the development of a truly national CNTS. Such a national center should coordinate and supervise all blood transfusion activities, and is the essential first step for improving and institutionalizing blood transfusion safety and efficacy in a developing country.
doi:10.1111/j.1537-2995.2011.03222.x
PMCID: PMC3136812  PMID: 21736582
Blood transfusion; Blood donors; HIV; HBV; HCV–Africa; Burkina Faso
18.  An association between decreased cardiopulmonary complications (TRALI and TACO) and implementation of universal leukoreduction of blood transfusions 
Transfusion  2010;50(12):2738-2744.
Background
Cardiopulmonary adverse events after transfusion include acute lung injury (TRALI) and circulatory overload (TACO), which are potentially lethal and incompletely understood.
Study Design and Methods
To determine whether the incidence of TRALI and TACO was affected by leukoreduction, we conducted a retrospective before-and-after study of acute transfusion reactions for the seven years before and after the introduction of universal leukoreduction in 2000, involving 778,559 blood components.
Results
Substantial decreases occurred in the rates of TRALI (−83%; from 2.8 cases per 100,000 components pre- to 0.48 post-universal leukoreduction; p=0.01), TACO (−49%; 7.4 to 3.8 cases per 100,000; p=0.03) and febrile reactions (−35%; 11.4 to 7.4 cases per 10,000; p<0.0001). The incidence of allergic reactions remained unchanged (7.0 per 100,000 pre- and post-universal leukoreduction). These outcomes were primarily attributable to decreased TRALI/TACO associated with RBC and platelet transfusions (−64%) with notably smaller decreases associated with FFP or cryoprecipitate transfusions (−29%). The incidence of TRALI/TACO after 28,120 washed red cell and 69,325 platelet transfusions was zero.
Conclusion
These data suggest novel hypotheses for further testing in animal models, in prospective clinical trials, and via the new US Hemovigilance System: (1) Is TACO or TRALI mitigated by leukoreduction? (2) Is the mechanism of TACO more complex than excessive blood volume? (3) Does washing mitigate TRALI and TACO due to platelet and RBC transfusions?
doi:10.1111/j.1537-2995.2010.02748.x
PMCID: PMC2944002  PMID: 20561296
19.  Estimation of the prevalence and rate of acute transfusion reactions occurring in Windhoek, Namibia 
Blood Transfusion  2014;12(3):352-361.
Background
Acute transfusion reactions are probably common in sub-Saharan Africa, but transfusion reaction surveillance systems have not been widely established. In 2008, the Blood Transfusion Service of Namibia implemented a national acute transfusion reaction surveillance system, but substantial under-reporting was suspected. We estimated the actual prevalence and rate of acute transfusion reactions occurring in Windhoek, Namibia.
Methods
The percentage of transfusion events resulting in a reported acute transfusion reaction was calculated. The actual percentage and rates of acute transfusion reactions per 1,000 transfused units were estimated by reviewing patients’ records from six hospitals, which transfuse >99% of all blood in Windhoek. Patients’ records for 1,162 transfusion events occurring between 1 January and 31 December 2011 were randomly selected. Clinical and demographic information was abstracted, and the Centers for Disease Control and Prevention National Healthcare Safety Network criteria were applied to categorize acute transfusion reactions.
Results
From January 1 to December 31, 2011, there were 3,697 transfusion events (involving 10,338 blood units) in the selected hospitals. Eight (0.2%) acute transfusion reactions were reported to the surveillance system. Of the 1,162 transfusion events selected, medical records for 785 transfusion events were analysed, and 28 acute transfusion reactions were detected, of which only one had also been reported to the surveillance system. An estimated 3.4% (95% confidence interval [CI]: 2.3–4.4) of transfusion events in Windhoek resulted in an acute transfusion reaction, with an estimated rate of 11.5 (95% CI: 7.6–14.5) acute transfusion reactions per 1,000 transfused units.
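As a rough illustration of how such an estimate is formed, the Python sketch below applies a plain normal-approximation confidence interval to the abstract's counts. The study's sampling weights and variance method are not described here, so the output approximates rather than reproduces the published 3.4% (2.3-4.4) and 11.5 per 1,000 figures.

```python
import math

events, n = 28, 785                  # reactions detected / records analysed
p = events / n                       # crude prevalence per transfusion event
se = math.sqrt(p * (1 - p) / n)      # standard error, normal approximation
low, high = p - 1.96 * se, p + 1.96 * se
print(f"prevalence: {100*p:.1f}% (95% CI {100*low:.1f}-{100*high:.1f})")

# Crude rate per 1,000 transfused units, using the abstract's unit count
units_per_event = 10_338 / 3_697     # average units per transfusion event
rate = 1000 * p / units_per_event
print(f"~{rate:.1f} reactions per 1,000 units")
```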
Conclusion
The estimated actual rate of acute transfusion reactions is higher than the rate reported to the national haemovigilance system. Improved surveillance and interventions to reduce transfusion-related morbidity and mortality are required in Namibia.
doi:10.2450/2013.0143-13
PMCID: PMC4111817  PMID: 24333079
blood safety; blood transfusion; blood transfusion/adverse effects; surveillance; Namibia
20.  Determination of Rate and Causes of Wastage of Blood and Blood Products in Iranian Hospitals 
Turkish Journal of Hematology  2014;31(2):161-167.
Objective: The purpose of this study was to determine the rate and causes of wastage of blood and blood products (packed red cells, plasma, platelets, and cryoprecipitate) in Qazvin hospitals.
Materials and Methods: The study was conducted in all hospitals in Qazvin, comprising 5 teaching hospitals, 2 social welfare hospitals, 3 private hospitals, 1 charity hospital, and 1 military hospital. This descriptive study was based on available data from hospital blood banks in the province of Qazvin. The research instrument was a 2-part questionnaire. The first part was related to demographic characteristics of the hospitals and the second part elicited information about blood and blood component wastage. The collected data were then analyzed using descriptive statistical methods and SPSS 11.5.
Results: Blood wastage may occur for a number of reasons, including time expiry, wasted imports, blood medically or surgically ordered but not used, stock time expiry, hemolysis, and miscellaneous causes. Approximately 77.9% of wasted packed cell units were discarded because of time expiry. Packed cell wastage across hospitals ranged from 1.93% to 30.7%. Wastage at all hospitals averaged 9.8% among 30,913 issued blood products. Overall blood and blood product (packed red cells, plasma, platelets, and cryoprecipitate) wastage was 3,048 units, and average total wastage per participating hospital for all blood groups was 254 units per year.
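The headline figures are simple ratios and can be checked directly. A minimal sketch, with counts taken from the abstract; the hospital tally is inferred from the Materials and Methods (5+2+3+1+1 = 12 hospitals), and small rounding differences against the reported 9.8% are expected:

```python
# Checking the headline wastage figures quoted above.
issued_units = 30_913      # blood products issued, all hospitals
wasted_units = 3_048       # total wastage, all components
hospitals = 12             # inferred from the Materials and Methods

print(f"overall wastage rate: {wasted_units / issued_units:.1%}")    # ~9.9% (reported as 9.8%)
print(f"average per hospital: {wasted_units / hospitals:.0f} units/year")  # 254
```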
Conclusion: Blood transfusion is an essential part of patient care. The blood transfusion system has made significant advancements in areas such as donor management, storage of blood, cross-matching, rational use of blood, and distribution. To improve the standards of blood banks and blood transfusion services in Iran, comprehensive standards have been formulated to ensure better quality control in the collection, storage, testing, and distribution of blood and its components, addressing the major factors identified as affecting blood product wastage.
doi:10.4274/tjh.2012.0105
PMCID: PMC4102044  PMID: 25035674
Blood; Blood component; Wastage; Transfusion; Blood bank
21.  Red Blood Cell Transfusion and Mortality in Trauma Patients: Risk-Stratified Analysis of an Observational Study 
PLoS Medicine  2014;11(6):e1001664.
Using a large multicentre cohort, Pablo Perel and colleagues evaluate the association of red blood cell transfusion with mortality according to the predicted risk of death for trauma patients.
Please see later in the article for the Editors' Summary
Background
Haemorrhage is a common cause of death in trauma patients. Although transfusions are extensively used in the care of bleeding trauma patients, there is uncertainty about the balance of risks and benefits and how this balance depends on the baseline risk of death. Our objective was to evaluate the association of red blood cell (RBC) transfusion with mortality according to the predicted risk of death.
Methods and Findings
A secondary analysis of the CRASH-2 trial (which originally evaluated the effect of tranexamic acid on mortality in trauma patients) was conducted. The trial included 20,127 trauma patients with significant bleeding from 274 hospitals in 40 countries. We evaluated the association of RBC transfusion with mortality in four strata of predicted risk of death: <6%, 6%–20%, 21%–50%, and >50%. For this analysis the exposure considered was RBC transfusion, and the main outcome was death from all causes at 28 days. A total of 10,227 patients (50.8%) received at least one transfusion. We found strong evidence that the association of transfusion with all-cause mortality varied according to the predicted risk of death (p-value for interaction <0.0001). Transfusion was associated with an increase in all-cause mortality among patients with <6% and 6%–20% predicted risk of death (odds ratio [OR] 5.40, 95% CI 4.08–7.13, p<0.0001, and OR 2.31, 95% CI 1.96–2.73, p<0.0001, respectively), but with a decrease in all-cause mortality in patients with >50% predicted risk of death (OR 0.59, 95% CI 0.47–0.74, p<0.0001). Transfusion was associated with an increase in fatal and non-fatal vascular events (OR 2.58, 95% CI 2.05–3.24, p<0.0001). The risk associated with RBC transfusion was significantly increased for all the predicted risk of death categories, but the relative increase was higher for those with the lowest (<6%) predicted risk of death (p-value for interaction <0.0001). As this was an observational study, the results could have been affected by different types of confounding. In addition, we could not consider haemoglobin in our analysis. Sensitivity analyses (excluding patients who died early; conducting a propensity score analysis adjusted for use of platelets, fresh frozen plasma, and cryoprecipitate; and adjusting for country) produced similar results.
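The building block of such a stratified analysis is the stratum-specific odds ratio. Below is a minimal, unadjusted sketch; the counts are hypothetical, and the published estimates were covariate-adjusted, so this illustrates only the crude calculation one would run within each risk stratum:

```python
import math

def odds_ratio_ci(a: int, b: int, c: int, d: int, z: float = 1.96):
    """Crude odds ratio with a Wald 95% CI from a 2x2 table:
    a/b = deaths/survivors among transfused patients,
    c/d = deaths/survivors among non-transfused patients."""
    or_ = (a * d) / (b * c)
    se_log = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(or_) - z * se_log)
    hi = math.exp(math.log(or_) + z * se_log)
    return or_, lo, hi

# Hypothetical counts for a single risk stratum, for illustration only:
or_, lo, hi = odds_ratio_ci(a=120, b=880, c=40, d=960)
print(f"OR {or_:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```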
Conclusions
The association of transfusion with all-cause mortality appears to vary according to the predicted risk of death. Transfusion may reduce mortality in patients at high risk of death but increase mortality in those at low risk. The effect of transfusion in low-risk patients should be further tested in a randomised trial.
Trial registration
www.ClinicalTrials.gov NCT01746953
Editors' Summary
Background
Trauma—a serious injury to the body caused by violence or an accident—is a major global health problem. Every year, injuries caused by traffic collisions, falls, blows, and other traumatic events kill more than 5 million people (9% of annual global deaths). Indeed, for people between the ages of 5 and 44 years, injuries are among the top three causes of death in many countries. Trauma sometimes kills people through physical damage to the brain and other internal organs, but hemorrhage (serious uncontrolled bleeding) is responsible for 30%–40% of trauma-related deaths. Consequently, early trauma care focuses on minimizing hemorrhage (for example, by using compression to stop bleeding) and on restoring blood circulation after blood loss (health-care professionals refer to this as resuscitation). Red blood cell (RBC) transfusion is often used for the management of patients with trauma who are bleeding; other resuscitation products include isotonic saline and solutions of human blood proteins.
Why Was This Study Done?
Although RBC transfusion can save the lives of patients with trauma who are bleeding, there is considerable uncertainty regarding the balance of risks and benefits associated with this procedure. RBC transfusion, which is an expensive intervention, is associated with several potential adverse effects, including allergic reactions and infections. Moreover, blood supplies are limited, and the risks from transfusion are high in low- and middle-income countries, where most trauma-related deaths occur. In this study, which is a secondary analysis of data from a trial (CRASH-2) that evaluated the effect of tranexamic acid (which stops excessive bleeding) in patients with trauma, the researchers test the hypothesis that RBC transfusion may have a beneficial effect among patients at high risk of death following trauma but a harmful effect among those at low risk of death.
What Did the Researchers Do and Find?
The CRASH-2 trial included 20,127 patients with trauma and major bleeding treated in 274 hospitals in 40 countries. In their risk-stratified analysis, the researchers investigated the effect of RBC transfusion on CRASH-2 participants with a predicted risk of death (estimated using a validated model that included clinical variables such as heart rate and blood pressure) on admission to hospital of less than 6%, 6%–20%, 21%–50%, or more than 50%. That is, the researchers compared death rates among patients in each stratum of predicted risk of death who received an RBC transfusion with death rates among patients who did not receive a transfusion. Half the patients received at least one transfusion. Transfusion was associated with an increase in all-cause mortality at 28 days after trauma among patients with a predicted risk of death of less than 6% or of 6%–20%, but with a decrease in all-cause mortality among patients with a predicted risk of death of more than 50%. In absolute figures, compared to no transfusion, RBC transfusion was associated with 5.1 more deaths per 100 patients in the patient group with the lowest predicted risk of death but with 11.9 fewer deaths per 100 patients in the group with the highest predicted risk of death.
What Do These Findings Mean?
These findings show that RBC transfusion is associated with an increase in all-cause deaths among patients with trauma and major bleeding with a low predicted risk of death, but with a reduction in all-cause deaths among patients with a high predicted risk of death. In other words, these findings suggest that the effect of RBC transfusion on all-cause mortality may vary according to whether a patient with trauma has a high or low predicted risk of death. However, because the participants in the CRASH-2 trial were not randomly assigned to receive an RBC transfusion, it is not possible to conclude that receiving an RBC transfusion actually increased the death rate among patients with a low predicted risk of death. It might be that the patients with this level of predicted risk of death who received a transfusion shared other unknown characteristics (confounders) that were actually responsible for their increased death rate. Thus, to provide better guidance for clinicians caring for patients with trauma and hemorrhage, the hypothesis that RBC transfusion could be harmful among patients with trauma with a low predicted risk of death should be prospectively evaluated in a randomised controlled trial.
Additional Information
Please access these websites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.1001664.
This study is further discussed in a PLOS Medicine Perspective by Druin Burch
The World Health Organization provides information on injuries and on violence and injury prevention (in several languages)
The US Centers for Disease Control and Prevention has information on injury and violence prevention and control
The National Trauma Institute, a US-based non-profit organization, provides information about hemorrhage after trauma and personal stories about surviving trauma
The UK National Health Service Choices website provides information about blood transfusion, including a personal story about transfusion after a serious road accident
The US National Heart, Lung, and Blood Institute also provides detailed information about blood transfusions
MedlinePlus provides links to further resources on injuries, bleeding, and blood transfusion (in English and Spanish)
More information is available about CRASH-2 (in several languages)
doi:10.1371/journal.pmed.1001664
PMCID: PMC4060995  PMID: 24937305
22.  Insights into the Management of Emerging Infections: Regulating Variant Creutzfeldt-Jakob Disease Transfusion Risk in the UK and the US 
PLoS Medicine  2006;3(10):e342.
Background
Variant Creutzfeldt-Jakob disease (vCJD) is a human prion disease caused by infection with the agent of bovine spongiform encephalopathy. After the recognition of vCJD in the UK in 1996, many nations implemented policies intended to reduce the hypothetical risk of transfusion transmission of vCJD. This was despite the fact that no cases of transfusion transmission had yet been identified. In December 2003, however, the first case of vCJD in a recipient of blood from a vCJD-infected donor was announced. The aim of this study is to ascertain and compare the factors that influenced the motivation for and the design of regulations to prevent transfusion transmission of vCJD in the UK and US prior to the recognition of this case.
Methods and Findings
A document search was conducted to identify US and UK governmental policy statements and guidance, transcripts (or minutes when transcripts were not available) of scientific advisory committee meetings, research articles, and editorials published in medical and scientific journals on the topic of vCJD and blood transfusion transmission between March 1996 and December 2003. In addition, 40 interviews were conducted with individuals familiar with the decision-making process and/or the science involved. All documents and transcripts were coded and analyzed according to the methods and principles of grounded theory. Data showed that while resulting policies were based on the available science, social and historical factors played a major role in the motivation for and the design of regulations to protect against transfusion transmission of vCJD. First, recent experience with and collective guilt resulting from the transfusion-transmitted epidemics of HIV/AIDS in both countries served as a major, historically specific impetus for such policies. This history was brought to bear both by hemophilia activists and those charged with regulating blood products in the US and UK. Second, local specificities, such as the recall of blood products for possible vCJD contamination in the UK, contributed to a greater sense of urgency and a speedier implementation of regulations in that country. Third, while the results of scientific studies played a prominent role in the construction of regulations in both nations, this role was shaped by existing social and professional networks. In the UK, early focus on a European study implicating B-lymphocytes as the carrier of prion infectivity in blood led to the introduction of a policy that requires universal leukoreduction of blood components. In the US, early focus on an American study highlighting the ability of plasma to serve as a reservoir of prion infectivity led the FDA and its advisory panel to eschew similar measures.
Conclusions
The results of this study yield three important theoretical insights that pertain to the global management of emerging infectious diseases. First, because the perception and management of disease may be shaped by previous experience with disease, especially catastrophic experience, there is always the possibility for over-management of some possible routes of transmission and relative neglect of others. Second, local specificities within a given nation may influence the temporality of decision making, which in turn may influence the choice of disease management policies. Third, a preference for science-based risk management among nations will not necessarily lead to homogeneous policies. This is because the exposure to and interpretation of scientific results depends on the existing social and professional networks within a given nation. Together, these theoretical insights provide a framework for analyzing and anticipating potential conflicts in the international management of emerging infectious diseases. In addition, this study illustrates the utility of qualitative methods in investigating research questions that are difficult to assess through quantitative means.
A qualitative study of US and UK governmental policy statements on the topic of vCJD and blood transfusion transmission identified factors responsible for differences in the policies adopted.
Editors' Summary
Background.
In 1996 in the UK, a new type of human prion disease was seen for the first time. This is now known as variant Creutzfeldt-Jakob disease (vCJD). Prion diseases are rare brain diseases passed from individual to individual (or between animals) by a particular type of wrongly folded protein, and they are fatal. It was suspected that vCJD had passed to humans from cattle, and that the agent causing vCJD was the same as that causing bovine spongiform encephalopathy (or “mad cow disease”). Shortly after vCJD was recognized, authorities in many countries became concerned about the possibility that it could be transmitted from one person to another through contaminated blood supplies used for transfusion in hospitals. Even though there wasn't any evidence of actual transmission of the disease through blood before December 2003, authorities in the UK, US, and elsewhere set up regulations designed to reduce the chance of that happening. At this early stage in the epidemic, there was little in the way of scientific information about the transmission properties of the disease. Both the UK and US, however, sought to make decisions in a scientific manner. They made use of evidence as it was being produced, often before it had been published. Despite this, the UK and US decided on very different changes to their respective regulations on blood donation. Both countries chose to prevent certain people (who they thought would be at greater risk of having vCJD) from donating blood. In the UK, however, the decision was made to remove white blood cells from donated blood to reduce the risk of transmitting vCJD, while the US decided that such a step was not merited by the evidence.
Why Was This Study Done?
The researcher wanted to understand more clearly why the UK and US ended up with different policies: what role was played by science, and what role was played by non-scientific factors? She hoped that insights from this investigation would also be relevant to similar challenges in the future—for example, as many countries try to work out how to control the threat of avian flu.
What Did the Researcher Do and Find?
The researcher searched for all relevant official government documents from the US and UK, as well as scientific papers, published between the time vCJD was first identified (March 1996) and the first instance of vCJD carried through blood (December 2003). She also interviewed people who knew about vCJD management in the US and UK—for example, members of government agencies and the relevant advisory committees. From the documents and interviews, the researcher picked out and grouped shared ideas. Although these documents and interviews suggested that policy making was rooted in scientific evidence, many non-scientific factors were also important. The researcher found substantial uncertainty in the scientific evidence available at the time. The document search and interviews showed that policy makers felt guilty about a previous experience in which people had become infected with HIV/AIDS through contaminated blood and were concerned about repeating this experience. Finally, in the UK, the possibility of blood contamination was seen as a much more urgent problem than in the US, because BSE and vCJD were found there first and there were far more cases. This meant that when the UK made its decision about whether to remove white blood cells from donated blood, there was less scientific evidence available. In fact, the main study that was relied on at the time would later be questioned.
What Do These Findings Mean?
These findings show that for this particular case, science was not the only factor affecting government policies. Historical and social factors such as previous experience, sense of urgency, public pressure, and the relative importance of different scientific networks were also very important. The study predicts that in the future, infectious disease–related policy decisions are unlikely to be the same across different countries because the interpretation of scientific evidence depends, to a large extent, on social factors.
Additional Information.
Please access these Web sites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.0030342.
National Creutzfeldt-Jakob Disease Surveillance Unit, Edinburgh, UK
US Centers for Disease Control and Prevention pages about prion diseases
World Health Organization variant Creutzfeldt-Jakob disease fact sheet
US National Institute of Neurological Disorders and Stroke information about prion diseases
doi:10.1371/journal.pmed.0030342
PMCID: PMC1621089  PMID: 17076547
23.  Internet-Based Device-Assisted Remote Monitoring of Cardiovascular Implantable Electronic Devices 
Executive Summary
Objective
The objective of this Medical Advisory Secretariat (MAS) report was to conduct a systematic review of the available published evidence on the safety, effectiveness, and cost-effectiveness of Internet-based device-assisted remote monitoring systems (RMSs) for therapeutic cardiac implantable electronic devices (CIEDs) such as pacemakers (PMs), implantable cardioverter-defibrillators (ICDs), and cardiac resynchronization therapy (CRT) devices. The MAS evidence-based review was performed to support public financing decisions.
Clinical Need: Condition and Target Population
Sudden cardiac death (SCD) is a major cause of fatalities in developed countries. In the United States almost half a million people die of SCD annually, resulting in more deaths than stroke, lung cancer, breast cancer, and AIDS combined. In Canada each year more than 40,000 people die from a cardiovascular related cause; approximately half of these deaths are attributable to SCD.
Most cases of SCD occur in the general population typically in those without a known history of heart disease. Most SCDs are caused by cardiac arrhythmia, an abnormal heart rhythm caused by malfunctions of the heart’s electrical system. Up to half of patients with significant heart failure (HF) also have advanced conduction abnormalities.
Cardiac arrhythmias are managed by a variety of drugs, ablative procedures, and therapeutic CIEDs. The range of CIEDs includes pacemakers (PMs), implantable cardioverter-defibrillators (ICDs), and cardiac resynchronization therapy (CRT) devices. Bradycardia is the main indication for PMs and individuals at high risk for SCD are often treated by ICDs.
Heart failure (HF) is also a significant health problem and is the most frequent cause of hospitalization in those over 65 years of age. Patients with moderate to severe HF may also have cardiac arrhythmias, although the cause may be related more to heart pump or haemodynamic failure. The presence of HF, however, increases the risk of SCD five-fold, regardless of aetiology. Patients with HF who remain highly symptomatic despite optimal drug therapy are sometimes also treated with CRT devices.
With an increasing prevalence of age-related conditions such as chronic HF and the expanding indications for ICD therapy, the rate of ICD placement has been dramatically increasing. The appropriate indications for ICD placement, as well as the rate of ICD placement, are increasingly an issue. In the United States, after the introduction of expanded coverage of ICDs, a national ICD registry was created in 2005 to track these devices. A recent survey based on this national ICD registry reported that 22.5% (25,145) of patients had received a non-evidence based ICD and that these patients experienced significantly higher in-hospital mortality and post-procedural complications.
In addition to the increased ICD device placement and the upfront device costs, there is the need for lifelong follow-up or surveillance, placing a significant burden on patients and device clinics. In 2007, over 1.6 million CIEDs were implanted in Europe and the United States, which translates to over 5.5 million patient encounters per year if the recommended follow-up practices are considered. A safe and effective RMS could potentially improve the efficiency of long-term follow-up of patients and their CIEDs.
Technology
In addition to being therapeutic devices, CIEDs have extensive diagnostic abilities. All CIEDs can be interrogated and reprogrammed during an in-clinic visit using an inductive programming wand. Remote monitoring would allow patients to transmit information recorded in their devices from the comfort of their own homes. Currently most ICD devices also have the potential to be remotely monitored. Remote monitoring (RM) can be used to check system integrity, to alert on arrhythmic episodes, and potentially to replace in-clinic follow-ups and manage disease remotely. The devices cannot currently be reprogrammed remotely, although this feature is being tested in pilot settings.
Every RMS is specifically designed by a manufacturer for its own cardiac implant devices. For Internet-based device-assisted RMSs, this customization includes details such as the web application, multiplatform sensors, custom algorithms, programming information, and the types and methods of alerting patients and/or physicians. The addition of peripherals for monitoring weight and pressure or communicating with patients through the onsite communicators also varies by manufacturer. Internet-based device-assisted RMSs for CIEDs are intended to function as a surveillance system rather than an emergency system.
Health care providers therefore need to learn each application, and as more than one application may be used at one site, multiple applications may need to be reviewed for alarms. All RMSs deliver system integrity alerting; however, some systems seem to be better geared to fast arrhythmic alerting, whereas other systems appear to be more intended for remote follow-up or supplemental remote disease management. The different RMSs may therefore have different impacts on workflow organization because of their varying frequency of interrogation and methods of alerts. The integration of these proprietary RM web-based registry systems with hospital-based electronic health record systems has so far not been commonly implemented.
Currently there are 2 general types of RMSs: those that transmit device diagnostic information automatically and without patient assistance to secure Internet-based registry systems, and those that require patient assistance to transmit information. Both systems employ the use of preprogrammed alerts that are either transmitted automatically or at regular scheduled intervals to patients and/or physicians.
The current web applications, programming, and registry systems differ greatly between the manufacturers of transmitting cardiac devices. In Canada there are currently 4 manufacturers—Medtronic Inc., Biotronik, Boston Scientific Corp., and St Jude Medical Inc.—which have regulatory approval for remote transmitting CIEDs. Remote monitoring systems are proprietary to the manufacturer of the implant device. An RMS for one device will not work with another device, and the RMS may not work with all versions of the manufacturer’s devices.
All Internet-based device-assisted RMSs have common components. The implanted device is equipped with a micro-antenna that communicates with a small external device (at bedside or wearable) commonly known as the transmitter. Transmitters are able to interrogate programmed parameters and diagnostic data stored in the patients’ implant device. The information transfer to the communicator can occur at preset time intervals with the participation of the patient (waving a wand over the device) or it can be sent automatically (wirelessly) without their participation. The encrypted data are then uploaded to an Internet-based database on a secure central server. The data processing facilities at the central database, depending on the clinical urgency, can trigger an alert for the physician(s) that can be sent via email, fax, text message, or phone. The details are also posted on the secure website for viewing by the physician (or their delegate) at their convenience.
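The alerting pathway described above (data uploaded to a central server, with urgency-dependent notification by email, fax, text, or phone) can be pictured as a simple triage function. The sketch below is purely illustrative; all names, fields, and rules are hypothetical and far simpler than any manufacturer's actual, physician-programmable logic:

```python
from dataclasses import dataclass
from enum import Enum

class Urgency(Enum):
    ROUTINE = "post to web registry for review at convenience"
    ALERT = "notify clinic by e-mail/text/fax"
    CRITICAL = "escalate immediately by phone"

@dataclass
class Transmission:
    """Hypothetical summary of one device transmission."""
    patient_id: str
    battery_ok: bool
    lead_impedance_ok: bool
    arrhythmia_detected: bool

def triage(t: Transmission) -> Urgency:
    """Toy routing rules; real systems apply manufacturer-specific,
    physician-programmable thresholds."""
    if not (t.battery_ok and t.lead_impedance_ok):
        return Urgency.CRITICAL     # system-integrity problem
    if t.arrhythmia_detected:
        return Urgency.ALERT        # clinical event, non-emergency channel
    return Urgency.ROUTINE

print(triage(Transmission("pt-001", True, True, True)).value)
```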
Research Questions
The research directions and specific research questions for this evidence review were as follows:
To identify the Internet-based device-assisted RMSs available for follow-up of patients with therapeutic CIEDs such as PMs, ICDs, and CRT devices.
To identify the potential risks, operational issues, or organizational issues related to Internet-based device-assisted RM for CIEDs.
To evaluate the safety, acceptability, and effectiveness of Internet-based device-assisted RMSs for CIEDs such as PMs, ICDs, and CRT devices.
To evaluate the safety, effectiveness, and cost-effectiveness of Internet-based device-assisted RMSs for CIEDs compared to usual outpatient in-office monitoring strategies.
To evaluate the resource implications or budget impact of RMSs for CIEDs in Ontario, Canada.
Research Methods
Literature Search
The review included a systematic review of published scientific literature and consultations with experts and manufacturers of all 4 approved RMSs for CIEDs in Canada. Information on CIED cardiac implant clinics was also obtained from Provincial Programs, a division within the Ministry of Health and Long-Term Care with a mandate for cardiac implant specialty care. Various administrative databases and registries were used to outline the current clinical follow-up burden of CIEDs in Ontario. The provincial population-based ICD database developed and maintained by the Institute for Clinical Evaluative Sciences (ICES) was used to review the current follow-up practices with Ontario patients implanted with ICD devices.
Search Strategy
A literature search was performed on September 21, 2010 using OVID MEDLINE, MEDLINE In-Process and Other Non-Indexed Citations, EMBASE, the Cumulative Index to Nursing & Allied Health Literature (CINAHL), the Cochrane Library, and the International Network of Agencies for Health Technology Assessment (INAHTA) for studies published from 1950 to September 2010. Search alerts were generated and reviewed for additional relevant literature until December 31, 2010. Abstracts were reviewed by a single reviewer and, for those studies meeting the eligibility criteria, full-text articles were obtained. Reference lists were also examined for any additional relevant studies not identified through the search.
Inclusion Criteria
published between 1950 and September 2010;
English language full-reports and human studies;
original reports including clinical evaluations of Internet-based device-assisted RMSs for CIEDs in clinical settings;
reports including standardized measurements on outcome events such as technical success, safety, effectiveness, cost, measures of health care utilization, morbidity, mortality, quality of life or patient satisfaction;
randomized controlled trials (RCTs), systematic reviews and meta-analyses, cohort and controlled clinical studies.
Exclusion Criteria
non-systematic reviews, letters, comments and editorials;
reports not involving standardized outcome events;
clinical reports not involving Internet-based device assisted RM systems for CIEDs in clinical settings;
reports involving studies testing or validating algorithms without RM;
studies with small samples (<10 subjects).
Outcomes of Interest
The outcomes of interest included: technical outcomes, emergency department visits, complications, major adverse events, symptoms, hospital admissions, clinic visits (scheduled and/or unscheduled), survival, morbidity (disease progression, stroke, etc.), patient satisfaction, and quality of life.
Summary of Findings
The MAS evidence review was performed to review available evidence on Internet-based device-assisted RMSs for CIEDs published until September 2010. The search identified 6 systematic reviews, 7 randomized controlled trials, and 19 reports for 16 cohort studies—3 of these being registry-based and 4 being multi-centered. The evidence is summarized in the 3 sections that follow.
1. Effectiveness of Remote Monitoring Systems of CIEDs for Cardiac Arrhythmia and Device Functioning
In total, 15 reports on 13 cohort studies involving investigations with 4 different RMSs for CIEDs in cardiology implant clinic groups were identified in the review. The 4 RMSs were: Care Link Network® (Medtronic Inc., Minneapolis, MN, USA); Home Monitoring® (Biotronik, Berlin, Germany); House Call 11® (St Jude Medical Inc., St Paul, MN, USA); and a manufacturer-independent RMS. Eight of these reports were with the Home Monitoring® RMS (12,949 patients), 3 were with the Care Link® RMS (167 patients), 1 was with the House Call 11® RMS (124 patients), and 1 was with a manufacturer-independent RMS (44 patients). All of the studies, except for 2 in the United States (1 with Home Monitoring® and 1 with House Call 11®), were performed in European countries.
The RMSs in the studies were evaluated with different cardiac implant device populations: ICDs only (6 studies), ICD and CRT devices (3 studies), PM and ICD and CRT devices (4 studies), and PMs only (2 studies). The patient populations were predominantly male (range, 52%–87%) in all studies, with mean ages ranging from 58 to 76 years. One study population was unique in that RMSs were evaluated for ICDs implanted solely for primary prevention in young patients (mean age, 44 years) with Brugada syndrome, which carries an inherited increased risk of sudden cardiac death in young adults.
Most of the cohort studies reported on the feasibility of RMSs in clinical settings with limited follow-up. In the short follow-up periods of the studies, the majority of the events were related to detection of medical events rather than system configuration or device abnormalities. The results of the studies are summarized below:
The interrogation of devices on the web platform, both for continuous and scheduled transmissions, was significantly quicker with remote follow-up, both for nurses and physicians.
In a case-control study focusing on a Brugada population–based registry with patients followed-up remotely, there were significantly fewer outpatient visits and greater detection of inappropriate shocks. One death occurred in the control group not followed remotely and post-mortem analysis indicated early signs of lead failure prior to the event.
Two studies examined the role of RMSs in following ICD leads under regulatory advisory in a European clinical setting and noted:
– Fewer inappropriate shocks were administered in the RM group.
– Urgent in-office interrogations and surgical revisions were performed within 12 days of remote alerts.
– No signs of lead fracture were detected at in-office follow-up; all were detected at remote follow-up.
Only 1 study reported evaluating quality of life in patients followed up remotely at 3 and 6 months; no values were reported.
Patient satisfaction was evaluated in 5 cohort studies, all in short term follow-up: 1 for the Home Monitoring® RMS, 3 for the Care Link® RMS, and 1 for the House Call 11® RMS.
– Patients reported receiving a sense of security from the transmitter, a good relationship with nurses and physicians, positive implications for their health, and satisfaction with RM and organization of services.
– Although patients reported that the system was easy to implement and required less than 10 minutes to transmit information, a variable proportion of patients (range, 9%–39%) reported that they needed the assistance of a caregiver for their transmission.
– The majority of patients would recommend RM to other ICD patients.
– Patients with hearing or other physical or mental conditions hindering the use of the system were excluded from studies, but the frequency of this was not reported.
Physician satisfaction was evaluated in 3 studies, all with the Care Link® RMS:
– Physicians reported an ease of use and high satisfaction with a generally short-term use of the RMS.
– Physicians reported being able to address the problems in unscheduled patient transmissions or physician initiated transmissions remotely, and were able to handle the majority of the troubleshooting calls remotely.
– Both nurses and physicians reported a high level of satisfaction with the web registry system.
2. Effectiveness of Remote Monitoring Systems in Heart Failure Patients for Cardiac Arrhythmia and Heart Failure Episodes
Remote follow-up of HF patients implanted with ICD or CRT devices, generally managed in specialized HF clinics, was evaluated in 3 cohort studies: 1 involved the Home Monitoring® RMS and 2 involved the Care Link® RMS. In these RMSs, in addition to the standard diagnostic features, the cardiac devices continuously assess other variables such as patient activity, mean heart rate, and heart rate variability. Intra-thoracic impedance, a proxy measure for lung fluid overload, was also measured in the Care Link® studies. The overall diagnostic performance of these measures cannot be evaluated, as the information was not reported for patients who did not experience intra-thoracic impedance threshold crossings or did not undergo interventions. The trial results involved descriptive information on transmissions and alerts in patients experiencing high morbidity and hospitalization in the short study periods.
3. Comparative Effectiveness of Remote Monitoring Systems for CIEDs
Seven RCTs were identified evaluating RMSs for CIEDs: 2 were for PMs (1276 patients) and 5 were for ICD/CRT devices (3733 patients). Studies performed in the clinical setting in the United States involved both the Care Link® RMS and the Home Monitoring® RMS, whereas all studies performed in European countries involved only the Home Monitoring® RMS.
3A. Randomized Controlled Trials of Remote Monitoring Systems for Pacemakers
Two trials, both multicenter RCTs, were conducted in different countries with different RMSs and study objectives. The PREFER trial was a large trial (897 patients) performed in the United States examining the ability of Care Link®, an Internet-based remote PM interrogation system, to detect clinically actionable events (CAEs) sooner than the current in-office follow-up supplemented with transtelephonic monitoring transmissions, a limited form of remote device interrogation. The trial results are summarized below:
In the 375-day mean follow-up, 382 patients were identified with at least 1 CAE—111 patients in the control arm and 271 in the remote arm.
The event rate detected per patient for every type of CAE, except for loss of atrial capture, was higher in the remote arm than the control arm.
The median time to first detection of CAEs (4.9 vs. 6.3 months) was significantly shorter in the RMS group compared to the control group (P < 0.0001).
Additionally, only 2% (3/190) of the CAEs in the control arm were detected during a transtelephonic monitoring transmission (the rest were detected at in-office follow-ups), whereas 66% (446/676) of the CAEs were detected during remote interrogation.
The second study, the OEDIPE trial, was a smaller trial (379 patients) performed in France evaluating whether the Home Monitoring® RMS could shorten post-operative hospitalization after PM implantation while maintaining the safety achieved by conventional management with longer hospital stays.
Implementation and operationalization of the RMS was reported to be successful in 91% (346/379) of the patients and represented 8144 transmissions.
In the RM group 6.5% of patients failed to send messages (10 due to improper use of the transmitter, 2 with unmanageable stress). Of the 172 patients transmitting, 108 patients sent a total of 167 warnings during the trial, with a greater proportion of warnings being attributed to medical rather than technical causes.
Forty percent had no warning message transmission and among these, 6 patients experienced a major adverse event and 1 patient experienced a non-major adverse event. Of the 6 patients having a major adverse event, 5 contacted their physician.
The mean medical reaction time was faster in the RM group (6.5 ± 7.6 days vs. 11.4 ± 11.6 days).
The mean duration of hospitalization was significantly shorter (P < 0.001) for the RM group than the control group (3.2 ± 3.2 days vs. 4.8 ± 3.7 days).
Quality of life estimates by the SF-36 questionnaire were similar for the 2 groups at 1-month follow-up.
3B. Randomized Controlled Trials Evaluating Remote Monitoring Systems for ICD or CRT Devices
The 5 studies evaluating the impact of RMSs with ICD/CRT devices were conducted in the United States and in European countries and involved 2 RMSs—Care Link® and Home Monitoring®. The objectives of the trials varied and 3 of the trials were smaller pilot investigations.
The first of the smaller studies (151 patients) evaluated patient satisfaction, achievement of patient outcomes, and the cost-effectiveness of the Care Link® RMS compared to quarterly in-office device interrogations with 1-year follow-up.
Individual outcomes such as hospitalizations, emergency department visits, and unscheduled clinic visits were not significantly different between the study groups.
Except for a significantly higher detection of atrial fibrillation in the RM group, data on ICD detection and therapy were similar in the study groups.
Health-related quality of life evaluated by the EuroQoL at 6-month or 12-month follow-up was not different between study groups.
Patients were more satisfied with their ICD care in the clinic follow-up group than in the remote follow-up group at 6-month follow-up, but were equally satisfied at 12-month follow-up.
The second small pilot trial (20 patients) examined the impact of RM follow-up with the House Call 11® system on work schedules and cost savings in patients randomized to 2 study arms varying in the degree of remote follow-up.
The total time including device interrogation, transmission time, data analysis, and physician time required was significantly shorter for the RM follow-up group.
The in-clinic waiting time was eliminated for patients in the RM follow-up group.
The physician talk time was significantly reduced in the RM follow-up group (P < 0.05).
The time for the actual device interrogation did not differ in the study groups.
The third small trial (115 patients) examined the impact of RM with the Home Monitoring® system compared to scheduled trimonthly in-clinic visits on the number of unplanned visits, total costs, health-related quality of life (SF-36), and overall mortality.
There was a 63.2% reduction in in-office visits in the RM group.
Hospitalizations or overall mortality (values not stated) were not significantly different between the study groups.
Patient-induced visits were higher in the RM group than the in-clinic follow-up group.
The TRUST Trial
The TRUST trial was a large multicenter RCT conducted at 102 centers in the United States involving the Home Monitoring® RMS for ICD devices for 1450 patients. The primary objectives of the trial were to determine if remote follow-up could be safely substituted for in-office clinic follow-up (3 in-office visits replaced) and still enable earlier physician detection of clinically actionable events.
Adherence to the protocol follow-up schedule was significantly higher in the RM group than the in-office follow-up group (93.5% vs. 88.7%, P < 0.001).
Actionability of trimonthly scheduled checks was low (6.6%) in both study groups. Overall, actionable causes were reprogramming (76.2%), medication changes (24.8%), and lead/system revisions (4%), and these were not different between the 2 study groups.
The overall mean number of in-clinic and hospital visits was significantly lower in the RM group than the in-office follow-up group (2.1 per patient-year vs. 3.8 per patient-year, P < 0.001), representing a 45% visit reduction at 12 months.
The median time from onset of first arrhythmia to physician evaluation was significantly shorter (P < 0.001) in the RM group than in the in-office follow-up group for all arrhythmias (1 day vs. 35.5 days).
The median time to detect clinically asymptomatic arrhythmia events—atrial fibrillation (AF), ventricular fibrillation (VF), ventricular tachycardia (VT), and supra-ventricular tachycardia (SVT)—was also significantly shorter (P < 0.001) in the RM group compared to the in-office follow-up group (1 day vs. 41.5 days) and was significantly quicker for each of the clinical arrhythmia events—AF (5.5 days vs. 40 days), VT (1 day vs. 28 days), VF (1 day vs. 36 days), and SVT (2 days vs. 39 days).
System-related problems occurred infrequently in both groups—in 1.5% of patients (14/908) in the RM group and in 0.7% of patients (3/432) in the in-office follow-up group.
The overall adverse event rate over 12 months was not significantly different between the 2 groups and individual adverse events were also not significantly different between the RM group and the in-office follow-up group: death (3.4% vs. 4.9%), stroke (0.3% vs. 1.2%), and surgical intervention (6.6% vs. 4.9%), respectively.
The 12-month cumulative survival was 96.4% (95% confidence interval [CI], 95.5%–97.6%) in the RM group and 94.2% (95% confidence interval [CI], 91.8%–96.6%) in the in-office follow-up group, and was not significantly different between the 2 groups (P = 0.174).
The CONNECT Trial
The CONNECT trial, another major multicenter RCT, involved the Care Link® RMS for ICD/CRT devices in a 15-month follow-up study of 1,997 patients at 133 sites in the United States. The primary objective of the trial was to determine whether automatically transmitted physician alerts decreased the time from the occurrence of clinically relevant events to medical decisions. The trial results are summarized below:
Of the 575 clinical alerts sent in the study, 246 did not trigger an automatic physician alert. Transmission failures were related to technical issues such as the alert not being programmed or not being reset, and/or a variety of patient factors such as not being at home and the monitor not being plugged in or set up.
The overall mean time from the clinically relevant event to the clinical decision was significantly shorter (P < 0.001) by 17.4 days in the remote follow-up group (4.6 days for 172 patients) than the in-office follow-up group (22 days for 145 patients).
– The median time to a clinical decision was shorter in the remote follow-up group than in the in-office follow-up group for an AT/AF burden greater than or equal to 12 hours (3 days vs. 24 days) and a fast VF rate greater than or equal to 120 beats per minute (4 days vs. 23 days).
Although infrequent, similar low numbers of events involving low battery and VF detection/therapy turned off were noted in both groups. More alerts, however, were noted for out-of-range lead impedance in the RM group (18 vs. 6 patients), and the time to detect these critical events was significantly shorter in the RM group (same day vs. 17 days).
Total in-office clinic visits were reduced by 38% from 6.27 visits per patient-year in the in-office follow-up group to 3.29 visits per patient-year in the remote follow-up group.
Health care utilization visits (N = 6,227) that included cardiovascular-related hospitalization, emergency department visits, and unscheduled clinic visits were not significantly higher in the remote follow-up group.
The overall mean length of hospitalization was significantly shorter (P = 0.002) for those in the remote follow-up group (3.3 days vs. 4.0 days) and was shorter both for patients with ICD (3.0 days vs. 3.6 days) and CRT (3.8 days vs. 4.7 days) implants.
The mortality rate was not significantly different between the follow-up groups for the ICDs (P = 0.31) or the CRT devices with defibrillator (P = 0.46).
Conclusions
There is limited clinical trial information on the effectiveness of RMSs for PMs. However, for RMSs for ICD devices, multiple cohort studies and 2 large multicenter RCTs demonstrated feasibility and significant reductions in in-office clinic follow-ups with RMSs in the first year post implantation. The detection rates of clinically significant events (and asymptomatic events) were higher, and the time to a clinical decision for these events was significantly shorter, in the remote follow-up groups than in the in-office follow-up groups. The earlier detection of clinical events in the remote follow-up groups, however, was not associated with lower morbidity or mortality rates in the 1-year follow-up. The substitution of almost all the first-year in-office clinic follow-ups with RM was also not associated with increased health care utilization, such as emergency department visits or hospitalizations.
The follow-up in the trials was generally short term, up to 1 year, and provided only a limited assessment of potential longer-term device/lead integrity complications or issues. None of the studies compared the different RMSs, particularly the different RMSs involving patient-scheduled transmissions or automatic transmissions. Patients’ acceptance of and satisfaction with RM were reported to be high, but the impact of RM on patients’ health-related quality of life, particularly the psychological aspects, was not evaluated thoroughly. Patients who are not technologically competent or who have hearing or other physical/mental impairments were identified as potentially disadvantaged by remote surveillance. Cohort studies consistently identified subgroups of patients who preferred in-office follow-up. Costs and the workflow impact on the health care system were evaluated only in European or American clinical settings, and only in a limited way.
Internet-based device-assisted RMSs involve a new approach to monitoring patients, their disease progression, and their CIEDs. Remote monitoring also has the potential to improve the current postmarket surveillance systems of evolving CIEDs and their ongoing hardware and software modifications. At this point, however, there is insufficient information to evaluate the overall impact on the health care system, although the time savings and convenience to patients and physicians associated with a substitution of in-office follow-up by RM are more certain. The broader issues surrounding infrastructure, impacts on existing clinical care systems, and regulatory concerns need to be considered for the implementation of Internet-based RMSs in jurisdictions involving different clinical practices.
PMCID: PMC3377571  PMID: 23074419
24.  Legal and ethical issues in safe blood transfusion 
Indian Journal of Anaesthesia  2014;58(5):558-564.
Legal issues play a vital role in providing a framework for the Indian blood transfusion service (BTS), while ethical issues pave the way for quality. Despite licensing of all blood banks, failure to revamp the Drugs and Cosmetic Act (D and C Act) is impeding quality. Newer techniques like chemiluminescence or nucleic acid testing (NAT) find no mention in the D and C Act. Specialised products like pooled platelet concentrates or modified whole blood, therapeutic procedures like erythropheresis, plasma exchange and stem cell collection, and processing technologies like leukoreduction and irradiation are not a part of the D and C Act. A highly fragmented BTS comprising over 2,500 blood banks, coupled with a slow and tedious process of dual licensing (state and centre), is a hindrance to the smooth functioning of blood banks. The small size of blood banks compromises blood safety. New blood banks are opened in India by hospitals to meet the requirements of insurance providers, or by medical colleges as this is a Medical Council of India (MCI) requirement. Hospital-based blood banks opt for replacement donation as they are barred by law from holding camps. Demand for fresh blood, lack of components, and lack of guidelines for safe transfusion lead to continued abuse of blood. Differential pricing of blood components is difficult to explain scientifically or ethically. Accreditation of blood banks along with establishment of regional testing centres could pave the way to blood safety. The National AIDS Control Organisation (NACO) and National Blood Transfusion Council (NBTC) deserve a more proactive role in the licensing process. The Food and Drug Administration (FDA) needs to clarify that procedures or tests meant for enhancement of blood safety are not illegal.
doi:10.4103/0019-5049.144654
PMCID: PMC4260301  PMID: 25535417
Blood transfusion services; Drugs and Cosmetic Act (D and C Act); ethical; legal; National Aids Control Organisation; transfusion
25.  Artificial Discs for Lumbar and Cervical Degenerative Disc Disease – Update 
Executive Summary
Objective
To assess the safety and efficacy of artificial disc replacement (ADR) technology for degenerative disc disease (DDD).
Clinical Need
Degenerative disc disease is the term used to describe the deterioration of 1 or more intervertebral discs of the spine. The prevalence of DDD is roughly described in proportion to age such that 40% of people aged 40 years have DDD, increasing to 80% among those aged 80 years or older. Low back pain is a common symptom of lumbar DDD; neck and arm pain are common symptoms of cervical DDD. Nonsurgical treatments can be used to relieve pain and minimize disability associated with DDD. However, it is estimated that about 10% to 20% of people with lumbar DDD and up to 30% with cervical DDD will be unresponsive to nonsurgical treatments. In these cases, surgical treatment is considered. Spinal fusion (arthrodesis) is the process of fusing or joining 2 bones and is considered the surgical gold standard for DDD.
Artificial disc replacement is the replacement of the degenerated intervertebral disc with an artificial disc in people with DDD of the lumbar or cervical spine that has been unresponsive to nonsurgical treatments for at least 6 months. Unlike spinal fusion, ADR preserves movement of the spine, which is thought to reduce or prevent the development of adjacent segment degeneration. Additionally, a bone graft is not required for ADR, and this alleviates complications, including bone graft donor site pain and pseudoarthrosis. It is estimated that about 5% of patients who require surgery for DDD will be candidates for ADR.
Review Strategy
The Medical Advisory Secretariat conducted a computerized search of the literature published between 2003 and September 2005 to answer the following questions:
What is the effectiveness of ADR in people with DDD of the lumbar or cervical regions of the spine compared with spinal fusion surgery?
Does an artificial disc reduce the incidence of adjacent segment degeneration (ASD) compared with spinal fusion?
What is the rate of major complications (device failure, reoperation) with artificial discs compared with surgical spinal fusion?
One reviewer evaluated the internal validity of the primary studies using the criteria outlined in the Cochrane Musculoskeletal Injuries Group Quality Assessment Tool. The quality of allocation concealment was rated as: A, clearly yes; B, unclear; or C, clearly no. The Grading of Recommendations Assessment, Development and Evaluation (GRADE) system was used to evaluate the overall quality of the body of evidence (defined as 1 or more studies) supporting the research questions explored in this systematic review. A random effects model meta-analysis was conducted when data were available from 2 or more randomized controlled trials (RCTs) and when there was no statistical and/or clinical heterogeneity among studies. Bayesian analyses were undertaken to do the following:
Examine the influence of missing data on clinical success rates;
Compute the probability that artificial discs were superior to spinal fusion (on the basis of clinical success rates);
Examine whether the results were sensitive to the choice of noninferiority margin.
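A beta-binomial Monte Carlo computation of this kind can be sketched as follows. The success counts and uniform priors below are hypothetical (the report does not give the raw numbers); the sketch only illustrates how probabilities of superiority, and of noninferiority at chosen margins, are read off the posterior samples:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical clinical-success counts with uniform Beta(1, 1) priors;
# both the counts and the priors are assumptions for illustration.
adr_success, adr_n = 160, 280   # artificial disc replacement arm
fus_success, fus_n = 150, 280   # spinal fusion arm

# Posterior draws of each arm's clinical success rate:
adr = rng.beta(1 + adr_success, 1 + adr_n - adr_success, 100_000)
fus = rng.beta(1 + fus_success, 1 + fus_n - fus_success, 100_000)

print("P(ADR superior to fusion):  ", (adr > fus).mean())
print("P(noninferior, -10% margin):", (adr - fus > -0.10).mean())
print("P(noninferior, -15% margin):", (adr - fus > -0.15).mean())
```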
Summary of Findings
The literature search yielded 140 citations. Of these, 1 Cochrane systematic review, 1 RCT, and 10 case series were included in this review. Unpublished data from an RCT reported in the grey literature were obtained from the manufacturer of the device. The search also yielded 8 health technology assessments evaluating ADR that are also included in this review.
Six of the 8 health technology assessments concluded that there is insufficient evidence to support the use of either lumbar or cervical ADR. The results of the remaining 2 assessments (one each for lumbar and cervical ADR) led to a National Institute for Clinical Excellence guidance document supporting the safety and effectiveness of lumbar and cervical ADR with the proviso that an ongoing audit of all clinical outcomes be undertaken owing to a lack of long-term outcome data from clinical trials.
Regarding lumbar ADR, data were available from 2 noninferiority RCTs to complete a meta-analysis. The following clinical, health systems, and adverse event outcome measures were synthesized: primary outcome of clinical success, Oswestry Disability Index (ODI) scores, pain VAS scores, patient satisfaction, duration of surgery, amount of blood loss, length of hospital stay, rate of device failure, and rate of reoperation.
The meta-analysis of overall clinical success supported the noninferiority of lumbar ADR compared with spinal fusion at 24-month follow-up. Of the remaining clinical outcome measures (ODI, pain VAS scores, SF-36 scores [mental and physical components], patient satisfaction, and return to work status), only patient satisfaction and scores on the physical component scale of the SF-36 questionnaire were significantly improved in favour of lumbar ADR compared with spinal fusion at 24-month follow-up. Blood loss and surgical time showed statistical heterogeneity; therefore, meta-analysis results are not interpretable. Length of hospital stay was significantly shorter in patients receiving the ADR compared with controls. Neither the number of device failures nor the number of neurological complications at 24 months was statistically significantly different between the ADR and fusion treatment groups. However, there was a trend towards fewer neurological complications at 24 months in the ADR treatment group compared with the spinal fusion treatment group.
Results of the Bayesian analyses indicated that the influence of missing data on the outcome measure of clinical success was minimal. The Bayesian model indicated that the probability for ADR being better than spinal fusion was 79%. The probability of ADR being noninferior to spinal fusion using a -10% noninferiority bound was 92%, and using a -15% noninferiority bound was 94%. The probability of artificial discs being superior to spinal fusion in a future trial was 73%.
Six case series were reviewed, mainly to characterize the rate of major complications for lumbar ADR. The Medical Advisory Secretariat defined a major complication as any reoperation; device failure necessitating a revision, removal or reoperation; or life-threatening event. The rates of major complications ranged from 0% to 13% per device implanted. Only 1 study reported the rate of ASD, which was detected in 2 (2%) of the 100 people 11 years after surgery.
There were no RCT data available for cervical ADR; therefore, data from 4 case series were reviewed for evidence of effectiveness and safety. Because data were sparse, the effectiveness of cervical ADR compared with spinal fusion cannot be determined at this time.
The rate of major complications was assessed up to 2 years after surgery. It was found to range from 0% to 8.1% per device implanted. The rate of ASD is not reported in the clinical trial literature.
The total cost of a lumbar ADR procedure is $15,371 (Cdn; including costs related to the device, physician, and procedure). The total cost of a lumbar fusion surgery procedure is $11,311 (Cdn; including physicians’ and procedural costs).
Conclusions
Lumbar Artificial Disc Replacement
Since the 2004 Medical Advisory Secretariat health technology policy assessment, data from 2 RCTs and 6 case series assessing the effectiveness and adverse events profile of lumbar ADR to treat DDD have become available. The GRADE quality of this evidence is moderate for effectiveness and for short-term (2-year follow-up) complications; it is very low for ASD.
The effectiveness of lumbar ADR is not inferior to that of spinal fusion for the treatment of lumbar DDD. The rates for device failure and neurological complications 2 years after surgery did not differ between ADR and fusion patients. Based on a Bayesian meta-analysis, there is a 79% probability that lumbar ADR is superior to lumbar spinal fusion.
The rate of major complications after lumbar ADR is between 0% and 13% per device implanted. The rate of ASD in 1 case series was 2% over an 11-year follow-up period.
Outcome data for lumbar ADR beyond a 2-year follow-up are not yet available.
Cervical Artificial Disc Replacement
Since the 2004 Medical Advisory Secretariat health technology policy assessment, 4 case series have been added to the body of evidence assessing the effectiveness and adverse events profile of cervical ADR to treat DDD. The GRADE quality of this evidence is very low for effectiveness as well as for the adverse events profile. Sparse outcome data are available.
Because data are sparse, the effectiveness of cervical ADR compared with spinal fusion cannot be determined at this time.
The rate of major complications was assessed up to 2 years after surgery; it ranged from 0% to 8.1% per device implanted. The rate of ASD is not reported in the clinical trial literature.
PMCID: PMC3379529  PMID: 23074480
