1.  Virtual blood bank 
Virtual blood bank is a computer-controlled, electronically linked information management system that allows online ordering and real-time, remote delivery of blood for transfusion. It connects the site of testing to the point of care at a remote site in real time through networked computers, thus maintaining the integrity of immunohematology test results. It takes advantage of information and communication technologies to ensure the accuracy of patient, specimen and blood component identification and to enhance personnel traceability and system security. The built-in logic and process constraints in the design of the virtual blood bank can guide the selection of appropriate blood and minimize transfusion risk. The quality of the blood inventory is ascertained and monitored, and an audit trail for critical procedures in the transfusion process is provided by the paperless system. Thus, the virtual blood bank can help ensure that the right patient receives the right amount of the right blood component at the right time.
doi:10.4103/2153-3539.76155
PMCID: PMC3046379  PMID: 21383930
Computer crossmatch; laboratory information system; virtual blood bank
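The "built-in logic" behind a computer (electronic) crossmatch can be illustrated with a small sketch. This is a hypothetical reconstruction of the kind of rule set such a system enforces, not the actual virtual blood bank software; the field names and thresholds are our assumptions.

```python
# Hypothetical sketch of electronic-crossmatch release logic: a unit is
# released without serologic crossmatch only when the patient has a
# current, antibody-negative type and screen and the unit is ABO/RhD
# compatible. Field names and rules are illustrative assumptions.

# Donor ABO types acceptable for each recipient ABO type (red cells).
ABO_COMPATIBLE = {
    "O": {"O"},
    "A": {"A", "O"},
    "B": {"B", "O"},
    "AB": {"AB", "A", "B", "O"},
}

def electronic_crossmatch_ok(patient, unit):
    """Return True if the unit may be released by electronic crossmatch."""
    if not patient["type_and_screen_current"]:
        return False            # no valid current sample on file
    if patient["antibody_screen_positive"]:
        return False            # requires serologic crossmatch instead
    if unit["abo"] not in ABO_COMPATIBLE[patient["abo"]]:
        return False            # ABO-incompatible unit
    if patient["rhd"] == "neg" and unit["rhd"] == "pos":
        return False            # avoid RhD-positive cells for RhD-negative patient
    return True

patient = {"abo": "A", "rhd": "neg",
           "type_and_screen_current": True,
           "antibody_screen_positive": False}
unit_ok = {"abo": "O", "rhd": "neg"}
unit_bad = {"abo": "B", "rhd": "neg"}

print(electronic_crossmatch_ok(patient, unit_ok))   # True
print(electronic_crossmatch_ok(patient, unit_bad))  # False
```

Real systems layer many further constraints (special requirements, unit expiry, sample age) on top of this core check.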
2.  Implementation of and experiences with new automation 
In an environment where cost, timeliness, and quality drive the business, it is essential to look to technology for answers to these challenges. In the Novartis Pharmaceutical Quality Assurance Department, automation and robotics have become just the tools to meet these challenges. Although automation is a relatively new concept in our department, we have fully embraced it within just a few years. As our company went through a merger, there was a significant reduction in the workforce within the Quality Assurance Department through voluntary and involuntary separations. However, the workload remained constant or in some cases actually increased. So even with the reduction in laboratory personnel, we were challenged internally and from the headquarters in Basle to improve productivity while maintaining integrity in quality testing. Benchmark studies identified the Suffern site as the manufacturing site of choice among the other facilities. This is attributed to the Suffern facility employees' commitment to reducing cycle time, improving efficiency, and maintaining a high level of regulatory compliance. One of the stronger contributing factors was automation technology in the laboratories, and this technology will continue to support the site's status in the future. The Automation Group was originally formed about two years ago to meet the demand for high quality assurance testing throughput and to bring our testing group up to the industry standard. Automation began with only two people in the group, and now we have three people who are the next generation of automation scientists. Even with such a small staff, we have made great strides in laboratory automation, as we have worked extensively with each piece of equipment brought in. The implementation process of each project was often difficult because the second-generation automation group came from the laboratory without much automation experience.
However, with the involvement of the users from the ‘get-go’, we were able to successfully bring in many automation technologies. Our first experience with automation was SFA/SDAS, then Zymark TPWII, followed by Zymark Multi-dose. The future of product testing lies in automation, and we shall continue to explore possibilities for improving testing methodologies so that chemists will be less burdened with repetitive, mundane daily tasks and more focused on building quality into our products.
doi:10.1155/S1463924600000353
PMCID: PMC2562839  PMID: 18924695
3.  Autoimmune hemolytic anemia: From lab to bedside 
Autoimmune hemolytic anemia (AIHA) is not an uncommon clinical disorder and requires advanced, efficient immunohematological and transfusion support. Many AIHA patients have an underlying disorder; therefore, it is incumbent upon the clinician to investigate these patients in detail, as the underlying condition can be of a serious nature, such as a lymphoproliferative disorder or a connective tissue disorder. Despite advances in transfusion medicine, a simple immunohematological test such as the direct antiglobulin test (DAT) still remains the diagnostic hallmark of AIHA. Sensitive gel technology has enabled the immunohematologist not only to diagnose such patients serologically, but also to characterize red cell bound autoantibodies with regard to their class, subclass and titer in a rapid and simplified way. Detailed characterization of autoantibodies is important, as there is a relationship between in vivo hemolysis and the strength of the DAT; red cell bound multiple immunoglobulins; and immunoglobulin G subclass and titer. Transfusing an AIHA patient is a challenge for the immunohematologist, as difficulties are encountered in ABO grouping and cross-matching, requiring specialized serological tests such as alloadsorption or autoadsorption. At times, it may be almost impossible to find a fully matched unit for these patients. However, transfusion should not be withheld from a critically ill patient even in the absence of compatible blood. The “best match” or “least incompatible” units can be transfused to such patients under close supervision without any serious side-effects. All blood banks should have the facilities to perform the investigations required to issue “best match” packed red blood cells in AIHA. Specialized techniques such as elution and adsorption, which at times are helpful in enhancing blood safety in AIHA, should be established in all transfusion services.
doi:10.4103/0973-6247.126681
PMCID: PMC3943148  PMID: 24678166
Alloadsorption; alloantibody; autoadsorption; autoantibody; autoimmune hemolytic anemia; best match blood; flow cytometry; gel technology
4.  An immunohematological ‘Wet’ workshop 
A practical workshop on ‘Immunohematology’ was conducted in conjunction with the Indian Society of Blood Transfusion and Immunohaematology annual scientific program. The participants, from many parts of India, were able to obtain valuable practice in key areas of blood group serology and by the end of the workshop were able to carry out ‘tube’ techniques for antibody detection and identification. Column agglutination methods were also demonstrated. A preliminary questionnaire was completed by participants. Results showed a wide variety in types of pretransfusion (serologic) testing being performed. Less than half of the participants had encountered hemolytic transfusion reactions. The program was rated as excellent by most participants in response to a postworkshop evaluation questionnaire, with requests for longer and more frequent workshops. Safety of blood for transfusion depends on maintenance of high standards of both microbiological and immunohematological performance by the blood bank staff.
PMCID: PMC3168125  PMID: 21938238
Blood group serology; education; immunohematology
5.  Participation in proficiency programs and promotion of quality in transfusion services of Minas Gerais 
Objective
This study aimed at identifying associations between the participation of transfusion services in immunohematology external quality control programs and their accuracy in immunohematology testing and adaptation to technical and legal operational procedures.
Methods
From 2007 to 2009, a cross-sectional study was conducted in 219 transfusion services in the State of Minas Gerais, which participated in this investigation by responding to a questionnaire and performing a proficiency test comprising ABO and RhD phenotyping, irregular RBC antibody screening and cross-matching. Frequencies and bivariate analysis followed by binary logistic regression were used for statistical analysis.
Results
Transfusion services that participated in external quality control programs (32.4%) and those that did not (67.6%) both obtained worrying error percentages in the proficiency tests, which may significantly increase blood transfusion risks. Shortfalls related to the establishment of protocols, standards and internal quality control were also significant. On comparing the two groups, transfusion services that participated in these programs had a 2.35 times higher chance of correct results in the proficiency panel testing, a 3.16 times higher chance of keeping transfusion records and a 2.81 times higher chance of performing preventive maintenance of equipment.
Conclusion
The study showed that the factors independently associated with participation in external quality control programs suggest that more investment in internal quality control procedures is necessary and that more attention should be paid to current legislation.
doi:10.5581/1516-8484.20120009
PMCID: PMC3459603  PMID: 23049379
Quality control; Blood banks; Blood transfusion; Quality assurance, health care; Program evaluation
6.  Transfusion management using a remote-controlled, automated blood storage 
Blood Transfusion  2008;6(2):101-106.
Background
Generally, the safety of transfusion therapies for patients depends in part on the distribution of the blood products. The prevention of adverse events can be aided by technological means, which, besides improving the traceability of the process, make errors less likely. In this context, the latest frontier in automation and computerisation is the remote-controlled, automated refrigerator for blood storage.
Materials and methods
Computer cross-matching is an efficient and safe method for assigning blood components, based on Information Technology applied to typing and screening. This method can be extended to the management of an automated blood refrigerator, the programme of which is interfaced with the Transfusion Service’s information system. The connection we made in our Service between EmoNet® and Hemosafe® enables real-time, remote-controlled management of the following aspects of blood component distribution: a) release of autologous and allogeneic units already allocated to a patient, b) release of available units, which can be allocated by remote-control to known patients, in the presence of a valid computer cross-match, c) release of O-negative units of blood for emergencies.
Results
Our system combines an information database, which enables computer cross-matching, with an automated refrigerator for blood storage with controlled access managed remotely by the Transfusion Service. The effectiveness and safety of the system were validated during the 4 months of its routine use in the Transfusion Service’s outpatient department.
Conclusions
The safety and efficiency of the distribution of blood products can and must be increased by the use of technological innovations. With the EmoNet®/Hemosafe® system, the responsibility for the remote-controlled distribution of red blood cell concentrates remains with the chief of the Transfusion Services, through the use of automated computer procedures and supported by continuous training of technicians and nursing staff.
doi:10.2450/2008.0029-06
PMCID: PMC2626845  PMID: 18946954
transfusion safety; remote-control; computer cross-match
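The three release modes (a, b, c) described in the abstract above can be sketched as a dispatch function. This is our illustrative reconstruction, not the actual EmoNet®/Hemosafe® logic; all names and the simplified crossmatch stand-in are assumptions.

```python
# Illustrative sketch (our reconstruction, not the EmoNet/Hemosafe software)
# of the three remote-controlled release modes for blood components.

ABO_COMPATIBLE = {"O": {"O"}, "A": {"A", "O"}, "B": {"B", "O"},
                  "AB": {"AB", "A", "B", "O"}}

def computer_crossmatch_valid(patient, unit):
    """Simplified stand-in for a valid computer crossmatch."""
    return (unit["abo"] in ABO_COMPATIBLE[patient["abo"]]
            and not (patient["rhd"] == "neg" and unit["rhd"] == "pos"))

def authorize_release(unit, patient=None, emergency=False):
    if emergency:
        # c) O-negative units released for emergencies
        return unit["abo"] == "O" and unit["rhd"] == "neg"
    if patient is None:
        return False
    if unit.get("allocated_to") == patient["id"]:
        # a) autologous/allogeneic unit already allocated to this patient
        return True
    # b) available (unallocated) unit, allocated by remote control to a
    #    known patient in the presence of a valid computer crossmatch
    return (unit.get("allocated_to") is None
            and computer_crossmatch_valid(patient, unit))

print(authorize_release({"abo": "O", "rhd": "neg"}, emergency=True))  # True
print(authorize_release({"abo": "A", "rhd": "pos"}, emergency=True))  # False
```

In the real system, each authorization would also be logged for traceability before the refrigerator door unlocks.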
7.  Legal and ethical issues in safe blood transfusion 
Indian Journal of Anaesthesia  2014;58(5):558-564.
Legal issues play a vital role in providing a framework for the Indian blood transfusion service (BTS), while ethical issues pave the way for quality. Despite licensing of all blood banks, failure to revamp the Drugs and Cosmetics Act (D and C Act) is impeding quality. Newer techniques like chemiluminescence or nucleic acid testing (NAT) find no mention in the D and C Act. Specialised products like pooled platelet concentrates or modified whole blood, therapeutic procedures like erythropheresis, plasma exchange and stem cell collection, and processing technologies like leukoreduction and irradiation are not a part of the D and C Act. A highly fragmented BTS comprising over 2500 blood banks, coupled with a slow and tedious process of dual licensing (state and centre), is a hindrance to the smooth functioning of blood banks. The small size of blood banks compromises blood safety. New blood banks are opened in India by hospitals to meet the requirements of insurance providers, or by medical colleges as this is a Medical Council of India (MCI) requirement. Hospital-based blood banks opt for replacement donation as they are barred by law from holding camps. Demand for fresh blood, lack of components, and lack of guidelines for safe transfusion lead to continued abuse of blood. Differential pricing of blood components is difficult to explain scientifically or ethically. Accreditation of blood banks, along with the establishment of regional testing centres, could pave the way to blood safety. The National AIDS Control Organisation (NACO) and the National Blood Transfusion Council (NBTC) deserve a more proactive role in the licensing process. The Food and Drug Administration (FDA) needs to clarify that procedures or tests meant to enhance blood safety are not illegal.
doi:10.4103/0019-5049.144654
PMCID: PMC4260301  PMID: 25535417
Blood transfusion services; Drugs and Cosmetic Act (D and C Act); ethical; legal; National Aids Control Organisation; transfusion
8.  Contemporary issues in transfusion medicine informatics 
The Transfusion Medicine Service (TMS) covers diverse clinical and laboratory-based services that must be delivered with accuracy, efficiency and reliability. TMS oversight is shared by multiple regulatory agencies that cover product manufacturing and validation standards geared toward patient safety. These demands present significant informatics challenges. Over the past few decades, TMS information systems have improved to better handle blood product manufacturing, inventory, delivery, tracking and documentation. Audit trails and access to electronic databases have greatly facilitated product traceability and biovigilance efforts. Modern blood bank computing has enabled novel applications such as the electronic crossmatch, kiosk-based blood product delivery systems, and self-administered computerized blood donor interview and eligibility determination. With increasing use of barcoding technology, there has been a marked improvement in patient and specimen identification. Moreover, the emergence of national and international labeling standards such as ISBT 128 has facilitated the availability, movement and tracking of blood products across national and international boundaries. TMS has only recently begun to leverage the electronic medical record to address quality issues in transfusion practice and promote standardized documentation within institutions. With improved technology, future growth is expected in blood bank automation and product labeling with applications such as radio frequency identification devices. This article reviews several of these key informatics issues relevant to the contemporary practice of TMS.
doi:10.4103/2153-3539.74961
PMCID: PMC3046378  PMID: 21383927
Blood bank; barcode; computer; donor; electronic crossmatch; FDA; informatics; transfusion medicine; virtual
9.  Assessment of performance of professionals in immunohematology proficiency tests of the public blood bank network of the state of Minas Gerais 
Background
Despite significant advances, the practice of blood transfusion is still a complex process and subject to risks. Factors that influence the safety of blood transfusion include technical skill and knowledge in hemotherapy mainly obtained by the qualification and training of teams.
Objective
This study aimed to investigate the relationship between professional categories working in transfusion services of the public blood bank network in the State of Minas Gerais and their performance in proficiency tests.
Methods
This was an observational cross-sectional study (2007-2008) performed using a specific instrument, based on evidence and the results of immunohematology proficiency tests as mandated by law.
Results
The error rates in ABO and RhD phenotyping, irregular antibody screening and cross-matching were 12.5%, 9.6%, 43.8% and 20.1%, respectively. When considering the number of tests performed, the error rates were 4.6%, 4.2%, 26.7% and 11.0%, respectively. The error rates varied for different professional categories: biochemists, biologists and biomedical scientists (65.0%), clinical pathology technicians (44.1%) and laboratory assistants, nursing technicians and assistant nurses (74.6%). A statistically significant difference was observed when the accuracy of clinical pathology technicians was compared with those of other professionals with only high school education (p-value < 0.001). This was not seen for professionals with university degrees (p-value = 0.293).
Conclusion
These results reinforce the need to invest in training, improvement of educational programs, new teaching methods and tools for periodic evaluations, contributing to increase transfusion safety and improve hemotherapy in Brazil.
doi:10.5581/1516-8484.20120027
PMCID: PMC3459385  PMID: 23049397
Blood banks/standards; Blood transfusion; Security measures; Training courses; Evaluation; Quality control
10.  External quality assessment (EQA) in molecular immunohematology: the INSTAND proficiency test program 
Transfusion  2013;53(11 0 2):10.1111/trf.12414.
Background
Genotyping for red blood cell (RBC), platelet and granulocyte antigens is a new tool for clinical pathology, transfusion medicine services and blood banks. Proficiency in laboratory tests can be established by external quality assessments (EQAs), which are required for clinical application in many health care systems. There are few EQAs for molecular immunohematology.
Methods
We analyzed the participation and pass rates in an EQA for RBC, platelet and granulocyte antigens. This EQA was distributed by INSTAND, a large non-profit provider of proficiency tests, twice per year since fall 2006 as EQA no. 235 Immunohematology A (molecular diagnostic). The coordinators defined at the outset which alleles are mandatory for detection.
Results
The number of participants steadily increased from 51 to 73 per proficiency round by fall 2012. More than 60 institutions utilized this EQA at least once a year. Approximately 80% of them participated in the RBC, 68% in the platelet and 22% in the granulocyte systems. With the exceptions of RHD (82%) and granulocytes (85%), pass rates exceeded 93%. While the pass rate increased for the granulocyte system and decreased for the ABO system, the pass rates for the other systems changed little over 6½ years.
Conclusions
The INSTAND proficiency test program was regularly used for EQA by many institutions, particularly in Central Europe. While the technical standards and pass rates in the participating laboratories were high, there has been little improvement in pass rates since 2006.
doi:10.1111/trf.12414
PMCID: PMC3830650  PMID: 24111785
11.  Long-Term follow up after intra-Uterine transfusionS; the LOTUS study 
Background
The Leiden University Medical Center (LUMC) is the Dutch national referral centre for pregnancies complicated by haemolytic disease of the fetus and newborn (HDFN) caused by maternal alloimmunization. Yearly, 20-25 affected fetuses with severe anaemia are treated with intra-uterine blood transfusions (IUT). Mothers whose fetuses have undergone IUT for HDFN are considered high responders with regard to red blood cell (RBC) antibody formation. Most study groups report high perinatal survival, resulting in a shift in attention towards short- and long-term outcomes in surviving children.
Methods/Design
We set up a large long-term observational follow-up study (LOTUS study), in cooperation with the Sanquin Blood Supply Foundation and the LUMC departments of Obstetrics, Neonatology and ImmunoHematology & Bloodtransfusion.
The first part of this study addresses several putative mechanisms associated with blood group alloimmunization in these mothers. The second part determines the incidence of long-term neurodevelopmental impairment (NDI) and associated risk factors in children treated with IUT. All women and their live offspring who were treated with IUT for HDFN in the LUMC from 1987 to 2008 are invited to participate and, after consent, blood or saliva samples are taken. RBC and HLA antigen profiles and antibodies are determined by serologic or molecular techniques. Microchimerism populations are tested by real-time polymerase chain reaction (RT-PCR).
All children are tested for their neurological, cognitive and psychosocial development using standardised tests and questionnaires. The primary outcome is neurodevelopmental impairment (NDI), a composite outcome defined as any of the following: cerebral palsy, cognitive or psychomotor development more than 2 standard deviations below the mean, bilateral blindness and/or bilateral deafness.
Discussion
The LOTUS study includes the largest cohort of IUT patients ever studied and is the first to investigate long-term post-IUT effects in both mother and child. The results may lead to a change in transfusion policy, in particular the future avoidance of certain incompatibilities. Additionally, the LOTUS study will provide clinicians and parents with better insight into the long-term neurodevelopmental outcome of children with HDFN treated with IUTs, and may improve the quality of antenatal counselling and long-term guidance.
doi:10.1186/1471-2393-10-77
PMCID: PMC3003623  PMID: 21122095
12.  Improving Pediatric Outcomes through Intravenous and Oral Medication Standardization 
BACKGROUND
Standardization is an invaluable tool to promote safety, improve care, and decrease costs, which ultimately improves outcomes. However, a pediatric setting presents unique challenges, with its wide variety of weights, medications, and distinct needs. Our goal was to develop and implement standards in complex, high-risk areas that demonstrably improve outcomes and safety.
PROGRAM DESCRIPTION
A computerized prescriber order entry program with decision support for pediatrics was developed for parenteral nutrition prescribing. The program included dosing, calculations, calcium phosphate compatibility checks, an automated IV compounder interface, osmolarity route calculation, end product testing verification, aluminum exposure and many other quality improvements. This same electronic order program, interface to sterile compounders, and end product testing were used to standardize and make common non-manufactured intravenous solutions. The drip compounding process was reengineered to include standard concentrations, label changes, and beta-testing of a smart syringe pump with dosing ranges for pediatrics. Common standard oral doses were developed along with standard oral formulations.
CONCLUSIONS
Total parenteral nutrition (TPN) error rates decreased from 7% to less than 1%, and compatibility issues decreased from 36 to 1 per year. Neonatal osteopenia rates decreased from 15% to 2%. Results from end product testing of TPN solutions were within USP standards, showing statistically significant correlation (p<0.001). Intravenous standardization decreased error rates by 15%, and compounding time decreased by 12 minutes (64%). Drip standardization allowed for drug concentration and smart pump standardization and decreased drip errors by 73%, from 3.1 to 0.8 per 1000 doses. Compounding errors decreased from 0.66 to 0.16 per 1000 doses, and ten-fold errors decreased from 0.41 to 0.08 per 1000 doses. Eleven oral liquids, comprising 329 different doses, were standardized, decreasing the number of doses to 59 (an 83% reduction). This decreased workload by 15% and wastage by 90%, improved turnaround time by 32%, and saved $15,000/year. One hundred evidence-based standard oral formulations were developed and used in 22 different hospitals.
doi:10.5863/1551-6776-14.4.226
PMCID: PMC3460798  PMID: 23055908
continuous infusions; intravenous; oral liquids; standardization
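The decision-support checks described above, which help catch the ten-fold errors mentioned in the results, can be illustrated with a weight-based dose-range check. This is a generic sketch of the technique, not the hospital's actual CPOE logic; the drug name and limits are placeholders, not clinical recommendations.

```python
# Hypothetical weight-based dose-range check of the kind a pediatric CPOE
# system performs. "drugX" and its limits are illustrative placeholders.
DOSE_LIMITS_MG_PER_KG = {"drugX": (5.0, 10.0)}  # (min, max) mg/kg per dose

def dose_in_range(drug, dose_mg, weight_kg):
    """Flag doses outside the configured mg/kg window; this is how a
    ten-fold prescribing error gets caught before compounding."""
    lo, hi = DOSE_LIMITS_MG_PER_KG[drug]
    return lo <= dose_mg / weight_kg <= hi

print(dose_in_range("drugX", 80, 10))   # True  (8 mg/kg)
print(dose_in_range("drugX", 800, 10))  # False (ten-fold error)
```

Production systems key such limits to age band and indication as well as weight, but the core guard is this simple range comparison.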
13.  Trends in cord blood banking 
Blood Transfusion  2012;10(1):95-100.
Background
Umbilical cord blood (UCB) is a source of hematopoietic precursor cells for transplantation. The creation of UCB banks in 1992 led to the possibility of storing units of UCB for unrelated transplants. The distribution of cell contents in historical inventories is not homogeneous and many units are therefore not suitable for adults. The aim of this study was to analyse our UCB bank inventory, evaluate the units released for transplantation and calculate the cost of the current process per unit of UCB stored.
Methods
Three study periods were defined. In the first period, from January 1996 to January 2006, the total nucleated cell (TNC) count acceptable for processing was 4–6×10^8 and a manual processing system was used. In the second period, from October 2006 to July 2010, processing was automated and the acceptable TNC count varied from 8–10×10^8. In the third period, from January 2009 to June 2010, an automated Sepax-BioArchive procedure was used and the accepted initial TNC count was >10×10^8. Within each period the units were categorised according to various ranges of cryopreserved TNC counts: A, >16.2×10^8; B1, from 12.5–16.1×10^8; B2, from 5.2–12.4×10^8; and C, <5.1×10^8.
Results
The third period is the most representative of current practices, with homogeneous TNC acceptance criteria and automated processing. In this period, 15.7% of the units were category A and 25.5% were category B. Overall, the mean TNC count of units released for transplantation was 14×10^8 (range, 4.6×10^8 to 36.5×10^8). The cost of the processed UCB in 2009 was 720.41 euros per unit.
Conclusion
A UCB bank should store high-quality units, in terms of the TNC count of units issued for transplantation; have a training programme to optimise the selection of donors prior to delivery; use similar volume-reduction systems and homogeneous recovery indices; express its indicators in the same units; use validated analytical techniques; and bear in mind ethnic minorities.
doi:10.2450/2011.0032-11
PMCID: PMC3258995  PMID: 22153685
umbilical cord blood; cord blood bank; quality; cost
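The study's cryopreserved-TNC categories can be expressed as a small classification function. The published ranges leave narrow gaps (e.g. between 16.1 and 16.2×10^8); how those boundaries are closed below is our assumption.

```python
# Sketch of the cryopreserved-TNC unit categories from the study, with
# counts expressed in units of 1e8 nucleated cells. Boundary handling in
# the published range gaps is an assumption on our part.
def tnc_category(tnc_e8):
    if tnc_e8 > 16.2:
        return "A"        # >16.2e8
    if tnc_e8 >= 12.5:
        return "B1"       # 12.5-16.1e8 (plus the gap up to 16.2)
    if tnc_e8 >= 5.2:
        return "B2"       # 5.2-12.4e8
    return "C"            # <5.1e8

# The mean TNC of units actually released for transplantation was 14e8:
print(tnc_category(14.0))  # B1
```

Classifying the inventory this way is what lets a bank quantify how many stored units are large enough for adult recipients.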
14.  Corneal Donor Tissue Preparation for Endothelial Keratoplasty 
Over the past ten years, corneal transplantation surgical techniques have undergone revolutionary changes [1,2]. Since its inception, traditional full-thickness corneal transplantation has been the treatment to restore sight in those limited by corneal disease. Disadvantages of this approach include a high degree of post-operative astigmatism, lack of a predictable refractive outcome, and disturbance to the ocular surface. The development of Descemet's stripping endothelial keratoplasty (DSEK), transplanting only the posterior corneal stroma, Descemet's membrane, and endothelium, has dramatically changed the treatment of corneal endothelial disease. DSEK is performed through a smaller incision; this technique avoids 'open sky' surgery with its risk of hemorrhage or expulsion, decreases the incidence of postoperative wound dehiscence, reduces unpredictable refractive outcomes, and may decrease the rate of transplant rejection [3-6].
Initially, cornea donor posterior lamellar dissection for DSEK was performed manually [1], resulting in variable graft thickness and damage to the delicate corneal endothelial tissue during tissue processing. Automated lamellar dissection (Descemet's stripping automated endothelial keratoplasty, DSAEK) was developed to address these issues. Automated dissection utilizes the same technology as LASIK corneal flap creation, with a mechanical microkeratome blade that helps to create uniform, thin tissue grafts for DSAEK surgery with minimal corneal endothelial cell loss during tissue processing.
Eye banks have been providing full-thickness corneas for surgical transplantation for many years. In 2006, eye banks began to develop methodologies for supplying precut corneal tissue for endothelial keratoplasty. With the input of corneal surgeons, eye banks have developed thorough protocols to safely and effectively prepare posterior lamellar tissue for DSAEK surgery. This can be performed preoperatively at the eye bank. Research shows no significant difference in the quality of the tissue [7] or patient outcomes [8,9] using eye bank precut tissue versus surgeon-prepared tissue for DSAEK surgery. For most corneal surgeons, the availability of precut DSAEK corneal tissue saves time and money [10], and reduces the stress of performing the donor corneal dissection in the operating room. In part because of the ability of eye banks to provide high-quality posterior lamellar corneal tissue in a timely manner, DSAEK has become the standard of care for the surgical management of corneal endothelial disease.
The procedure that we are describing is the preparation of the posterior lamellar cornea at the eye bank for transplantation in DSAEK surgery (Figure 1).
doi:10.3791/3847
PMCID: PMC3671837  PMID: 22733178
Medicine; Issue 64; Physiology; Cornea; transplantation; DSAEK; DSEK; endothelial keratoplasty; lamellar; graft; Moria; microkeratome; precut; Fuchs dystrophy
15.  Case study of the automation options and decisions made in implementing a high-throughput cell based screen using the FLIPR™ 
This case study examines the automation and process change options available to emerging discovery/development-stage pharmaceutical companies when considering implementing sophisticated high-throughput screens. When implementing state-of-the-art screening technology, smaller companies generally face financial and personnel constraints that are more significant than those in large pharmaceutical companies. When NPS Pharmaceuticals considered installing a Molecular Devices FLIPR™ for high-throughput cell based screening, it became clear that, to make the best decision, the whole screening process at NPS Pharmaceuticals, from screen development and validation, tissue culture, compound distribution and data handling through to screening, had to be re-examined to see what automation options were possible and which, if any, made sense to implement. Large-scale automated systems were not considered due to their cost and the lack of in-house engineering infrastructure to support such systems. The current trend towards workstation-based laboratory automation suggested that a minimalist approach to laboratory automation, coupled with an improved understanding of the physical process of screening, would yield the best approach. Better understanding of the workflow within the Biomolecular Screening team enabled the group to optimize the process and decide what support equipment was needed. To install the FLIPR™, train users, set up the tissue culture protocols for cell supply, establish high-throughput screening database protocols, integrate compound distribution and re-supply, and validate the pharmacology on four cell based screens took the team 3 months. The integration of the screening team at the primary, secondary and tertiary screening stages of the target discovery project teams at NPS has enabled us to incorporate minimal automation into the Biomolecular Screening Group whilst retaining an enriching work environment.
This is reflected in our current consistent throughput of sixty-four 96-well microplates per day on the FLIPR™, a figure comparable with that achieved within most major pharmaceutical companies. This case study suggests that process optimization coupled with modern stand-alone automated workstations can achieve significant throughput in a resource-constrained environment. Significantly greater throughput could be achieved by coupling the process improvement techniques described above with 384-well microplate technology.
doi:10.1155/S1463924600000213
PMCID: PMC2562850  PMID: 18924700
16.  DNA-Based Methods in the Immunohematology Reference Laboratory 
Although hemagglutination serves the immunohematology reference laboratory well, when used alone, it has limited capability to resolve complex problems. This overview discusses how molecular approaches can be used in the immunohematology reference laboratory. In order to apply molecular approaches to immunohematology, knowledge of genes, DNA-based methods, and the molecular bases of blood groups are required. When applied correctly, DNA-based methods can predict blood groups to resolve ABO/Rh discrepancies, identify variant alleles, and screen donors for antigen-negative units. DNA-based testing in immunohematology is a valuable tool used to resolve blood group incompatibilities and to support patients in their transfusion needs.
doi:10.1016/j.transci.2010.12.011
PMCID: PMC3058268  PMID: 21257350
Antibody identification; blood group antigens; molecular methods; DNA testing; blood groups; identification of blood groups
17.  DB4US: A Decision Support System for Laboratory Information Management 
Background
Until recently, laboratory automation has focused primarily on improving hardware. Future advances will concentrate on intelligent software, since laboratories performing clinical diagnostic testing require improved information systems to address their data processing needs. In this paper, we propose DB4US, an application that automates the management of information related to laboratory quality indicators. Currently, there is a lack of ready-to-use management quality measures. This application addresses this deficiency through the extraction, consolidation, statistical analysis, and visualization of data related to demographics, reagents, and turn-around times. The design and implementation issues, as well as the technologies used for the implementation of this system, are discussed in this paper.
Objective
To develop a general methodology that integrates the computation of ready-to-use management quality measures and a dashboard to easily analyze the overall performance of a laboratory, as well as automatically detect anomalies or errors. The novelty of our approach lies in the application of integrated web-based dashboards as an information management system in hospital laboratories.
Methods
We propose a new methodology for laboratory information management based on the extraction, consolidation, statistical analysis, and visualization of data related to demographics, reagents, and turn-around times, offering a dashboard-like user web interface to the laboratory manager. The methodology comprises a unified data warehouse that stores and consolidates multidimensional data from different data sources. The methodology is illustrated through the implementation and validation of DB4US, a novel web application based on this methodology that constructs an interface to obtain ready-to-use indicators, and offers the possibility to drill down from high-level metrics to more detailed summaries. The offered indicators are calculated beforehand so that they are ready to use when the user needs them. The design is based on a set of different parallel processes to precalculate indicators. The application displays information related to tests, requests, samples, and turn-around times. The dashboard is designed to show the set of indicators on a single screen.
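The precalculation of ready-to-use indicators described above can be illustrated with a minimal sketch: one turn-around-time (TAT) indicator summarized per test type with a simple anomaly flag. The record fields, sample values, and the 240-minute threshold are illustrative assumptions, not details from the paper.

```python
# Minimal sketch of a precomputed laboratory quality indicator:
# mean turn-around time (TAT) per test type, with an anomaly flag.
# Field names, data, and threshold are illustrative assumptions.
from datetime import datetime
from statistics import mean

requests = [
    {"test": "glucose", "received": "2008-03-01T08:00", "reported": "2008-03-01T09:10"},
    {"test": "glucose", "received": "2008-03-01T08:30", "reported": "2008-03-01T09:20"},
    {"test": "TSH",     "received": "2008-03-01T08:00", "reported": "2008-03-01T13:00"},
]

def tat_minutes(row: dict) -> float:
    """Minutes elapsed between specimen receipt and result reporting."""
    fmt = "%Y-%m-%dT%H:%M"
    delta = datetime.strptime(row["reported"], fmt) - datetime.strptime(row["received"], fmt)
    return delta.total_seconds() / 60

def summarize(rows: list, threshold_min: float = 240) -> dict:
    """Mean TAT per test, flagging tests whose mean exceeds the threshold."""
    by_test: dict = {}
    for r in rows:
        by_test.setdefault(r["test"], []).append(tat_minutes(r))
    return {t: {"mean_tat": mean(v), "anomaly": mean(v) > threshold_min}
            for t, v in by_test.items()}

print(summarize(requests))
```

In a dashboard such as the one described, summaries like this would be computed ahead of time by parallel processes and stored in the data warehouse, so the interface only reads precomputed values.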
Results
DB4US was deployed for the first time in the Hospital Costa del Sol in 2008. In our evaluation we show the positive impact of this methodology for laboratory professionals, since the use of our application has reduced the time needed to produce the different statistical indicators and has also provided information that has been used to optimize the usage of laboratory resources through the discovery of anomalies in the indicators. DB4US users benefit from Internet-based communication of results, since this information is available from any computer without having to install any additional software.
Conclusions
The proposed methodology and the accompanying web application, DB4US, automates the processing of information related to laboratory quality indicators and offers a novel approach for managing laboratory-related information, benefiting from an Internet-based communication mechanism. The application of this methodology has been shown to improve the usage of time, as well as other laboratory resources.
doi:10.2196/ijmr.2126
PMCID: PMC3626127  PMID: 23608745
Automation, laboratory; Medical Informatics Applications; Data Mining; Quality Indicators, Health Care
18.  Clinical Chemistry Laboratory Automation in the 21st Century - Amat Victoria curam (Victory loves careful preparation) 
The Clinical Biochemist Reviews  2014;35(3):143-153.
The era of automation arrived with the introduction of the AutoAnalyzer using continuous flow analysis and the Robot Chemist that automated the traditional manual analytical steps. Successive generations of stand-alone analysers increased analytical speed, offered the ability to test high volumes of patient specimens, and provided large assay menus. A dichotomy developed, with a group of analysers devoted to performing routine clinical chemistry tests and another group dedicated to performing immunoassays using a variety of methodologies. Development of integrated systems greatly improved the analytical phase of clinical laboratory testing, and further automation was developed for pre-analytical procedures, such as sample identification, sorting, and centrifugation, and post-analytical procedures, such as specimen storage and archiving. All phases of testing were ultimately combined in total laboratory automation (TLA), through which all modules involved are physically linked by some kind of track system, moving samples through the process from beginning to end. A newer and very powerful analytical methodology is liquid chromatography-mass spectrometry/mass spectrometry (LC-MS/MS). LC-MS/MS has been automated, but a future automation challenge will be to incorporate LC-MS/MS into TLA configurations. Another important facet of automation is informatics, including middleware, which interfaces the analyser software to a laboratory information system (LIS) and/or hospital information system (HIS). This software includes control of the overall operation of a TLA configuration and combines analytical results with patient demographic information to provide additional clinically useful information. This review describes automation relevant to clinical chemistry, but it must be recognised that automation applies to other specialties in the laboratory, e.g. haematology, urinalysis, microbiology.
It is a given that automation will continue to evolve in the clinical laboratory, limited only by the imagination and ingenuity of laboratory scientists.
PMCID: PMC4204236  PMID: 25336760
19.  Pilot Evaluation of a Prototype Critical Care Blood Glucose Monitor in Normal Volunteers 
Background
Availability of a highly accurate in-hospital automated blood glucose (BG) monitor could facilitate implementation of intensive insulin therapy protocols through effective titration of insulin therapy, improved BG control, and avoidance of hypoglycemia. We evaluated a functional prototype BG monitor designed to perform frequent automated blood sampling for glucose monitoring.
Methods
Sixteen healthy adult volunteer subjects had intravenous catheter insertions in a forearm or hand vein and were studied for 8 hours. The prototype monitor consisted of an autosampling unit with a precise computer-controlled reversible syringe pump and a glucose analytical section. BG was referenced against a Yellow Springs Instrument (YSI) laboratory analyzer. Sampling errors for automated blood draws were assessed by calculating the percent of failed draws, and BG data were analyzed using the Bland and Altman technique.
Results
Out of 498 total sample draws, unsuccessful draws were categorized as follows: 11 (2.2%) were due to autosampler technical problems, 21 (4.2%) were due to catheter-related failures, and 37 (7.4%) were BG meter errors confirmed by a glucometer-generated error code. Blood draw difficulties or failures related to the catheter site (e.g., catheter occlusion or vein collapse) occurred in 6/15 (40%) subjects. Mean BG bias versus YSI was 0.20 ± 12.6 mg/dl, and the mean absolute relative difference was 10.4%.
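The two accuracy metrics reported above, mean bias against the YSI reference and mean absolute relative difference (MARD), are straightforward to compute from paired measurements. The paired values below are invented for illustration; only the metric definitions follow the abstract.

```python
# Sketch of the reported accuracy metrics: Bland-Altman mean bias
# (monitor minus reference) and mean absolute relative difference (MARD).
# The paired glucose values below are made up for illustration.
from statistics import mean, stdev

monitor = [102.0, 95.0, 140.0, 88.0]    # prototype monitor BG, mg/dl
reference = [100.0, 99.0, 130.0, 90.0]  # YSI reference BG, mg/dl

diffs = [m - r for m, r in zip(monitor, reference)]
bias = mean(diffs)        # mean bias of the monitor vs. the reference
spread = stdev(diffs)     # SD of the differences (Bland-Altman spread)
mard = mean(abs(m - r) / r for m, r in zip(monitor, reference)) * 100

print(f"bias = {bias:.2f} mg/dl, SD = {spread:.1f} mg/dl, MARD = {mard:.1f}%")
```

A Bland-Altman analysis would additionally plot each difference against the pair mean and mark the bias ± 1.96 SD limits of agreement.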
Conclusions
Automated phlebotomy can be performed in healthy subjects using this prototype BG monitor. The BG measurement technology had suboptimal accuracy based on a YSI reference. A more accurate BG point-of-care testing meter and strip technology have been incorporated into the future version of this monitor. Development of such a monitor could alleviate the burden of frequent BG testing and reduce the risk of hypoglycemia in patients on insulin therapy.
PMCID: PMC2787022  PMID: 20144376
automated phlebotomy; glucose monitoring; glucose sampling; intensive care unit; intensive insulin therapy
20.  Twenty-Four-Hour Ambulatory Blood Pressure Monitoring in Hypertension 
Executive Summary
Objective
The objective of this health technology assessment was to determine the clinical effectiveness and cost-effectiveness of 24-hour ambulatory blood pressure monitoring (ABPM) for hypertension.
Clinical Need: Condition and Target Population
Hypertension occurs when either systolic blood pressure, the pressure in the artery when the heart contracts, or diastolic blood pressure, the pressure in the artery when the heart relaxes between beats, are consistently high. Blood pressure (BP) that is consistently more than 140/90 mmHg (systolic/diastolic) is considered high. A lower threshold, greater than 130/80 mmHg (systolic/diastolic), is set for individuals with diabetes or chronic kidney disease.
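The diagnostic thresholds just described can be encoded directly; the function name and its boolean interface are ours, and a real assessment would of course require consistently elevated readings rather than a single measurement.

```python
# Illustrative encoding of the thresholds described above: BP consistently
# above 140/90 mmHg is high, with a lower 130/80 mmHg threshold for
# patients with diabetes or chronic kidney disease. Names are ours.

def is_hypertensive(systolic: int, diastolic: int, high_risk: bool = False) -> bool:
    """high_risk: patient has diabetes or chronic kidney disease."""
    sys_limit, dia_limit = (130, 80) if high_risk else (140, 90)
    return systolic > sys_limit or diastolic > dia_limit

print(is_hypertensive(138, 85))                  # False: below 140/90
print(is_hypertensive(138, 85, high_risk=True))  # True: above 130/80
```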
In 2006 and 2007, the age-standardized incidence rate of diagnosed hypertension in Canada was 25.8 per 1,000 (450,000 individuals were newly diagnosed). During the same time period, 22.7% of adult Canadians were living with diagnosed hypertension.
A smaller proportion of Canadians are unaware they have hypertension; therefore, the estimated number of Canadians affected by this disease may be higher. Diagnosis and management of hypertension are important, since elevated BP levels are related to the risk of cardiovascular disease, including stroke. In Canada in 2003, the costs to the health care system related to the diagnosis, treatment, and management of hypertension were over $2.3 billion (Cdn).
Technology
The 24-hour ABPM device consists of a standard inflatable cuff attached to a small computer weighing about 500 grams, which is worn over the shoulder or on a belt. The technology is noninvasive and fully automated. The device takes BP measurements every 15 to 30 minutes over a 24- to 28-hour time period, thus providing extended, continuous BP recordings even during a patient’s normal daily activities. Information on the multiple BP measurements can be downloaded to a computer.
The main detection methods used by the device are auscultation and oscillometry. The device avoids some of the pitfalls of conventional office or clinic blood pressure monitoring (CBPM) using a cuff and mercury sphygmomanometer such as observer bias (the phenomenon of measurement error when the observer overemphasizes expected results) and white coat hypertension (the phenomenon of elevated BP when measured in the office or clinic but normal BP when measured outside of the medical setting).
Research Questions
Is there a difference in patient outcome and treatment protocol using 24-hour ABPM versus CBPM for uncomplicated hypertension?
Is there a difference between the 2 technologies when white coat hypertension is taken into account?
What is the cost-effectiveness and budget impact of 24-hour ABPM versus CBPM for uncomplicated hypertension?
Research Methods
Literature Search
Search Strategy
A literature search was performed on August 4, 2011 using OVID MEDLINE, MEDLINE In-Process and Other Non-Indexed Citations, EMBASE, the Cumulative Index to Nursing and Allied Health Literature (CINAHL), the Cochrane Library, and the International Network of Agencies for Health Technology Assessment (INAHTA) for studies published from January 1, 1997 to August 4, 2011. Abstracts were reviewed by a single reviewer. For those studies meeting the eligibility criteria, full-text articles were obtained. Reference lists were also examined for any additional relevant studies not identified through the search. Articles with unknown eligibility were reviewed with a second clinical epidemiologist and then a group of epidemiologists until consensus was established. The quality of evidence was assessed as high, moderate, low, or very low according to GRADE methodology.
Inclusion Criteria
English language articles;
published between January 1, 1997 and August 4, 2011;
adults aged 18 years of age or older;
journal articles reporting on the effectiveness, cost-effectiveness, or safety for the comparison of interest;
clearly described study design and methods;
health technology assessments, systematic reviews, meta-analyses, or randomized controlled trials.
Exclusion Criteria
non-English papers;
animal or in vitro studies;
case reports, case series, or case-case studies;
studies comparing different antihypertensive therapies and evaluating their antihypertensive effects using 24-hour ABPM;
studies on home or self-monitoring of BP, and studies on automated office BP measurement;
studies in high-risk subgroups (e.g. diabetes, pregnancy, kidney disease).
Outcomes of Interest
Patient Outcomes
mortality: all cardiovascular events (e.g., myocardial infarction [MI], stroke);
non-fatal: all cardiovascular events (e.g., MI, stroke);
combined fatal and non-fatal: all cardiovascular events (e.g., MI, stroke);
all non-cardiovascular events;
control of BP (e.g. systolic and/or diastolic target level).
Drug-Related Outcomes
percentage of patients who show a reduction in, or stop, drug treatment;
percentage of patients who begin multi-drug treatment;
drug therapy use (e.g. number, intensity of drug use);
drug-related adverse events.
Quality of Evidence
The quality of the body of evidence was assessed as high, moderate, low, or very low according to the GRADE Working Group criteria.
As stated by the GRADE Working Group, the following definitions of quality were used in grading the quality of the evidence:
Summary of Findings
Short-Term Follow-Up Studies (Length of Follow-Up of ≤ 1 Year)
Based on very low quality of evidence, there is no difference between technologies for non-fatal cardiovascular events.
Based on moderate quality of evidence, ABPM resulted in improved BP control among patients with sustained hypertension compared to CBPM.
Based on low quality of evidence, ABPM resulted in hypertensive patients being more likely to stop antihypertensive therapy and less likely to proceed to multi-drug therapy compared to CBPM.
Based on low quality of evidence, there is a beneficial effect of ABPM on the intensity of antihypertensive drug use compared to CBPM.
Based on moderate quality of evidence, there is no difference between technologies in the number of antihypertensive drugs used.
Based on low to very low quality of evidence, there is no difference between technologies in the risk for a drug-related adverse event or noncardiovascular event.
Long-Term Follow-Up Study (Mean Length of Follow-Up of 5 Years)
Based on moderate quality of evidence, there is a beneficial effect of ABPM on total combined cardiovascular events compared to CBPM.
Based on low quality of evidence, there is a lack of a beneficial effect of ABPM on nonfatal cardiovascular events compared to CBPM; however, the lack of a beneficial effect is based on a borderline result.
Based on low quality of evidence, there is no beneficial effect of ABPM on fatal cardiovascular events compared to CBPM.
Based on low quality of evidence, there is no difference between technologies for the number of patients who began multi-drug therapy.
Based on low quality of evidence, there is a beneficial effect of CBPM on control of BP compared to ABPM. This result is in the opposite direction than expected.
Based on moderate quality of evidence, there is no difference between technologies in the risk for a drug-related adverse event.
PMCID: PMC3377518  PMID: 23074425
21.  Automated processing of whole blood units: operational value and in vitro quality of final blood components 
Blood Transfusion  2012;10(1):63-71.
Background
The Community Transfusion Centre in Madrid currently processes whole blood using a conventional procedure (Compomat, Fresenius) followed by automated processing of buffy coats with the OrbiSac system (CaridianBCT). The Atreus 3C system (CaridianBCT) automates the production of red blood cells, plasma and an interim platelet unit from a whole blood unit. Interim platelet units are pooled to produce a transfusable platelet unit. In this study, the Atreus 3C system was evaluated and compared to the routine method with regard to product quality and operational value.
Materials and methods
Over a 5-week period 810 whole blood units were processed using the Atreus 3C system. The attributes of the automated process were compared to those of the routine method by assessing productivity, space, equipment and staffing requirements. The data obtained were evaluated in order to estimate the impact of implementing the Atreus 3C system in the routine setting of the blood centre. Yield and in vitro quality of the final blood components processed with the two systems were evaluated and compared.
Results
The Atreus 3C system enabled higher throughput while requiring less space and employee time by decreasing the amount of equipment and processing time per unit of whole blood processed. Whole blood units processed on the Atreus 3C system gave a higher platelet yield, a similar amount of red blood cells and a smaller volume of plasma.
Discussion
These results support the conclusion that the Atreus 3C system produces blood components meeting quality requirements while providing a high operational efficiency. Implementation of the Atreus 3C system could result in a large organisational improvement.
doi:10.2450/2011.0008-11
PMCID: PMC3258991  PMID: 22044958
whole blood processing; automation; Atreus 3C system; blood components; operational value
22.  A computational pipeline for quantification of pulmonary infections in small animal models using serial PET-CT imaging 
EJNMMI Research  2013;3:55.
Background
Infectious diseases are the second leading cause of death worldwide. In order to better understand and treat them, an accurate evaluation using multi-modal imaging techniques for anatomical and functional characterizations is needed. For non-invasive imaging techniques such as computed tomography (CT), magnetic resonance imaging (MRI), and positron emission tomography (PET), there have been many engineering improvements that have significantly enhanced the resolution and contrast of the images, but there are still insufficient computational algorithms available for researchers to use when accurately quantifying imaging data from anatomical structures and functional biological processes. Since the development of such tools may potentially translate basic research into the clinic, this study focuses on the development of a quantitative and qualitative image analysis platform that provides a computational radiology perspective for pulmonary infections in small animal models. Specifically, we designed (a) a fast and robust automated and semi-automated image analysis platform and a quantification tool that can facilitate accurate diagnostic measurements of pulmonary lesions as well as volumetric measurements of anatomical structures, and incorporated (b) an image registration pipeline to our proposed framework for volumetric comparison of serial scans. This is an important investigational tool for small animal infectious disease models that can help advance researchers’ understanding of infectious diseases.
Methods
We tested the utility of our proposed methodology by using sequentially acquired CT and PET images of rabbit, ferret, and mouse models with respiratory infections of Mycobacterium tuberculosis (TB), H1N1 flu virus, and an aerosolized respiratory pathogen (necrotic TB) for a total of 92, 44, and 24 scans for the respective studies with half of the scans from CT and the other half from PET. Institutional Administrative Panel on Laboratory Animal Care approvals were obtained prior to conducting this research. First, the proposed computational framework registered PET and CT images to provide spatial correspondences between images. Second, the lungs from the CT scans were segmented using an interactive region growing (IRG) segmentation algorithm with mathematical morphology operations to avoid false positive (FP) uptake in PET images. Finally, we segmented significant radiotracer uptake from the PET images in lung regions determined from CT and computed metabolic volumes of the significant uptake. All segmentation processes were compared with expert radiologists’ delineations (ground truths). Metabolic and gross volume of lesions were automatically computed with the segmentation processes using PET and CT images, and percentage changes in those volumes over time were calculated. Standardized uptake value (SUV) analysis from PET images was conducted as a complementary quantitative metric for disease severity assessment. Thus, severity and extent of pulmonary lesions were examined through both PET and CT images using the aforementioned quantification metrics outputted from the proposed framework.
Results
Each animal study was evaluated within the same subject class, and all steps of the proposed methodology were evaluated separately. We quantified the accuracy of the proposed algorithm with respect to the state-of-the-art segmentation algorithms. For evaluation of the segmentation results, the Dice similarity coefficient (DSC) was used as an overlap measure and the Hausdorff distance as a shape dissimilarity measure. Significant correlations regarding the estimated lesion volumes were obtained both in CT and PET images with respect to the ground truths (R2=0.8922, p<0.01 and R2=0.8664, p<0.01, respectively). The segmentation accuracy (DSC (%)) was 93.4±4.5% for normal lung CT scans and 86.0±7.1% for pathological lung CT scans. Experiments showed excellent agreements (all above 85%) with expert evaluations for both structural and functional imaging modalities. Apart from quantitative analysis of each animal, we also qualitatively showed how metabolic volumes changed over time by examining serial PET/CT scans. Evaluation of the registration processes was based on precisely defined anatomical landmark points by expert clinicians. Average errors of 2.66, 3.93, and 2.52 mm (all within the resolution limits) were found in the rabbit, ferret, and mouse data, respectively. Quantitative results obtained from the proposed methodology were visually related to the progress and severity of the pulmonary infections as verified by the participating radiologists. Moreover, we demonstrated that lesions due to the infections were metabolically active and appeared multi-focal in nature, and we observed similar patterns in the CT images as well. Consolidation and ground glass opacity were the main abnormal imaging patterns and consistently appeared in all CT images. We also found that the gross and metabolic lesion volume percentages follow the same trend as the SUV-based evaluation in the longitudinal analysis.
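The two evaluation measures used above have compact definitions; a minimal sketch on toy point sets (standing in for segmented voxel masks) follows. The example data are ours, not from the study.

```python
# Minimal versions of the two segmentation evaluation measures used above:
# the Dice similarity coefficient (overlap) and the Hausdorff distance
# (shape dissimilarity), computed here on toy voxel/point sets.
from math import dist

def dice(a: set, b: set) -> float:
    """DSC = 2|A ∩ B| / (|A| + |B|) for voxel sets a and b."""
    return 2 * len(a & b) / (len(a) + len(b))

def hausdorff(a: set, b: set) -> float:
    """Symmetric Hausdorff distance between two point sets."""
    directed = lambda p, q: max(min(dist(x, y) for y in q) for x in p)
    return max(directed(a, b), directed(b, a))

auto = {(0, 0), (0, 1), (1, 0), (1, 1)}   # automated segmentation (toy)
truth = {(0, 0), (0, 1), (1, 0), (2, 0)}  # expert ground truth (toy)

print(dice(auto, truth))        # 0.75
print(hausdorff(auto, truth))   # 1.0
```

For real masks the same definitions apply, with sets of voxel coordinates drawn from the binary segmentation volumes.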
Conclusions
We explored the feasibility of using PET and CT imaging modalities in three distinct small animal models for two diverse pulmonary infections. We concluded from the clinical findings, derived from the proposed computational pipeline, that PET-CT imaging is an invaluable hybrid modality for tracking pulmonary infections longitudinally in small animals and has great potential to become routinely used in clinics. Our proposed methodology showed that automated computer-aided lesion detection and quantification of pulmonary infections in small animal models are efficient and accurate compared with the clinical standard of manual and semi-automated approaches. Automated analysis of images in pre-clinical applications can increase the efficiency and quality of pre-clinical findings that ultimately inform downstream experimental design in human clinical studies; this innovation will allow researchers and clinicians to more effectively allocate study resources with respect to research demands without compromising accuracy.
doi:10.1186/2191-219X-3-55
PMCID: PMC3734217  PMID: 23879987
Quantitative analysis; Pulmonary infections; Small animal models; PET-CT; Image segmentation; H1N1; Tuberculosis
23.  Quantitative comparison and evaluation of software packages for assessment of abdominal adipose tissue distribution by magnetic resonance imaging 
Objective
To examine five available software packages for the assessment of abdominal adipose tissue with magnetic resonance imaging, compare their features and assess the reliability of measurement results.
Design
Feature evaluation and test–retest reliability of softwares (NIHImage, SliceOmatic, Analyze, HippoFat and EasyVision) used in manual, semi-automated or automated segmentation of abdominal adipose tissue.
Subjects
A random sample of 15 obese adults with type 2 diabetes.
Measurements
Axial T1-weighted spin echo images centered at vertebral bodies of L2–L3 were acquired at 1.5 T. Five software packages were evaluated (NIHImage, SliceOmatic, Analyze, HippoFat and EasyVision), comparing manual, semi-automated and automated segmentation approaches. Images were segmented into cross-sectional area (CSA), and the areas of visceral (VAT) and subcutaneous adipose tissue (SAT). Ease of learning and use and the design of the graphical user interface (GUI) were rated. Intra-observer accuracy and agreement between the software packages were calculated using intra-class correlation. Intra-class correlation coefficient was used to obtain test–retest reliability.
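The test-retest reliability assessment above rests on the intra-class correlation coefficient; a one-way single-measurement form, ICC(1,1), can be sketched as below. The repeated VAT measurements are invented for illustration, and the study may have used a different ICC variant.

```python
# Sketch of a one-way intra-class correlation, ICC(1,1), as used for
# test-retest reliability. The repeated VAT areas are invented data;
# the study may have used a different ICC form.
from statistics import mean

def icc_1_1(ratings: list) -> float:
    """ratings: one list of repeated measurements per subject."""
    n, k = len(ratings), len(ratings[0])
    grand = mean(v for row in ratings for v in row)
    subj_means = [mean(row) for row in ratings]
    # One-way ANOVA mean squares: between subjects and within subjects.
    msb = k * sum((m - grand) ** 2 for m in subj_means) / (n - 1)
    msw = sum((v - m) ** 2
              for row, m in zip(ratings, subj_means)
              for v in row) / (n * (k - 1))
    return (msb - msw) / (msb + (k - 1) * msw)

# Test/retest VAT areas (cm^2) for three hypothetical subjects.
repeat_vat = [[150.0, 152.0], [200.0, 198.0], [120.0, 121.0]]
print(round(icc_1_1(repeat_vat), 3))
```

High values (near 1) indicate that between-subject variability dominates measurement noise, i.e., good test-retest reliability.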
Results
Three of the five evaluated programs offered a semi-automated technique to segment the images based on histogram values or a user-defined threshold. One software package allowed manual delineation only. One fully automated program demonstrated the drawbacks of uncritical automated processing. The semi-automated approaches reduced variability and measurement error, and improved reproducibility. There was no significant difference in the intra-observer agreement in SAT and CSA. The VAT measurements showed significantly lower test–retest reliability. There were some differences between the software packages in qualitative aspects, such as user friendliness.
Conclusion
Four out of five packages provided essentially the same results with respect to the inter- and intra-rater reproducibility. Our results using SliceOmatic, Analyze or NIHImage were comparable and could be used interchangeably. Newly developed fully automated approaches should be compared to one of the examined software packages.
doi:10.1038/sj.ijo.0803696
PMCID: PMC3096530  PMID: 17700582
magnetic resonance imaging; abdominal adipose tissue; software evaluation; image segmentation; visceral adipose tissue; subcutaneous adipose tissue
24.  Portable Bladder Ultrasound 
Executive Summary
Objective
The aim of this review was to assess the clinical utility of portable bladder ultrasound.
Clinical Need: Target Population and Condition
Data from the National Population Health Survey indicate prevalence rates of urinary incontinence of 2.5% in women and 1.4% in men in the general population. Prevalence of urinary incontinence is higher in women than in men, and prevalence increases with age.
Identified risk factors for urinary incontinence include female gender, increasing age, urinary tract infections (UTI), poor mobility, dementia, smoking, obesity, consuming alcohol and caffeine beverages, physical activity, pregnancy, childbirth, forceps and vacuum-assisted births, episiotomy, abdominal resection for colorectal cancer, and hormone replacement therapy.
For the purposes of this review, incontinence populations will be stratified into the following: the elderly, urology patients, postoperative patients, rehabilitation settings, and neurogenic bladder populations.
Urinary incontinence is defined as any involuntary leakage of urine. Incontinence can be classified into diagnostic clinical types that are useful in planning evaluation and treatment. The major types of incontinence are stress (physical exertion), urge (overactive bladder), mixed (combined urge and stress urinary incontinence), reflex (neurological impairment of the central nervous system), overflow (leakage due to full bladder), continuous (urinary tract abnormalities), congenital incontinence, and transient incontinence (temporary incontinence).
Postvoid residual (PVR) urine volume, which is the amount of urine in the bladder immediately after urination, represents an important component in continence assessment and bladder management to provide quantitative feedback to the patient and continence care team regarding the effectiveness of the voiding technique. Although there is no standardized definition of normal PVR urine volume, measurements greater than 100 mL to 150 mL are considered an indication for urinary retention, requiring intermittent catheterization, whereas a PVR urine volume of 100 mL to 150 mL or less is generally considered an acceptable result of bladder training.
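The PVR decision rule described above can be encoded directly; since there is no standardized cut-off, the 150 mL default below is one point in the 100-150 mL range mentioned, and the function name is ours.

```python
# Illustrative encoding of the PVR rule described above: volumes above
# roughly 100-150 mL suggest urinary retention. The 150 mL default is
# one point in that range, not a standardized value; names are ours.

def pvr_flags_retention(pvr_ml: float, cutoff_ml: float = 150.0) -> bool:
    """True if the post-void residual volume suggests urinary retention."""
    return pvr_ml > cutoff_ml

print(pvr_flags_retention(90.0))   # False: acceptable after bladder training
print(pvr_flags_retention(220.0))  # True: consider intermittent catheterization
```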
Urinary retention has been associated with poor outcomes including UTI, bladder overdistension, and higher hospital mortality rates. The standard method of determining PVR urine volumes is intermittent catheterization, which is associated with increased risk of UTI, urethral trauma and discomfort.
The Technology Being Reviewed
Portable bladder ultrasound products are transportable ultrasound devices that use automated technology to register bladder volume digitally, including PVR volume, and provide three-dimensional images of the bladder. The main clinical use of portable bladder ultrasound is as a diagnostic aid. Health care professionals (primarily nurses) administer the device to measure PVR volume and prevent unnecessary catheterization. An adjunctive use of the bladder ultrasound device is to visualize the placement and removal of catheters. Also, portable bladder ultrasound products may improve the diagnosis and differentiation of urological problems and their management and treatment, including the establishment of voiding schedules, study of bladder biofeedback, fewer UTIs, and monitoring of potential urinary incontinence after surgery or trauma.
Review Strategy
To determine the effectiveness and clinical utility of portable bladder ultrasound as reported in the published literature, the Medical Advisory Secretariat used its standard search strategy to retrieve international health technology assessments and English-language journal articles from selected databases. Nonsystematic reviews, nonhuman studies, case reports, letters, editorials, and comments were excluded.
Summary of Findings
Of the 4 included studies that examined the clinical utility of portable bladder ultrasound in the elderly population, all found the device to be acceptable. One study reported that the device underestimated catheterized bladder volume.
In patients with urology problems, 2 of the 3 studies concerning portable bladder ultrasound found the device acceptable to use. However, one study did not find the device as accurate for small PVR volume as for catheterization and another found that the device overestimated catheterized bladder volume. In the remaining study, the authors reported that when the device’s hand-held ultrasound transducers (scanheads) were aimed improperly, bladders were missed, or lateral borders of bladders were missed resulting in partial bladder volume measurements and underestimation of PVR measurements. They concluded that caution should be used in interpreting PVR volume measured by portable bladder ultrasound machines and that catheterization may be the preferred assessment modality if an accurate PVR measurement is necessary.
All 3 studies with post-operative populations found portable bladder ultrasound use to be reasonably acceptable. Two studies reported that the device overestimated catheter-derived bladder volumes, one by 7% and the other by 21 mL. The third study reported the opposite, that the device underestimated catheter bladder volume by 39 mL, but that the results remained acceptable.
In rehabilitation settings, 2 studies found portable bladder ultrasound to underestimate catheter-derived bladder volumes; yet the authors of both studies concluded that the mean errors were within acceptable limits.
In patients with neurogenic bladder problems, 2 studies found portable bladder ultrasound to be an acceptable alternative to catheterization despite the fact that it was not as accurate as catheterization for obtaining bladder volumes.
Lastly, studies examining the avoidance of negative health outcomes showed that use of portable bladder ultrasound decreased unnecessary catheterizations and UTIs. The proportion of unnecessary catheterizations avoided ranged from 16% to 47% in the selected articles, and reductions in UTIs ranged from 38% to 72%.
In sum, all but one study advocated the use of portable bladder ultrasound as an alternative to catheterization.
Economic Analysis
An economic analysis estimating the budget impact of BladderScan in complex continuing care (CCC) facilities was completed. The analysis indicated a cost-savings of $192,499 (Cdn) per year per facility, or $2,887,485 (Cdn) across all 15 CCC facilities. No economic analysis was completed for long-term care and acute care facilities owing to a lack of data.
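The province-wide figure follows directly from scaling the per-facility saving across the 15 CCC facilities; a minimal check of the arithmetic, using only the figures reported above:

```python
# Budget-impact check for the reported BladderScan cost-savings.
# Figures are taken from the summary above; this only verifies the scaling.
per_facility_savings = 192_499  # Cdn$ per year per CCC facility
num_facilities = 15             # CCC facilities in the analysis

total_savings = per_facility_savings * num_facilities
print(total_savings)  # 2887485, matching the reported $2,887,485 (Cdn)
```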
Considerations for Policy Development
Rapid diffusion of portable bladder ultrasound technology is expected. Recently, the IC5 project on improving continence care in Ontario’s complex continuing care centres piloted portable bladder ultrasound at 12 sites. Preliminary results were promising.
Many physicians and health care facilities already have portable bladder ultrasound devices. However, portable bladder ultrasound devices for PVR measurement are not in use at most health care facilities in Ontario and Canada. The Verathon Corporation (Bothell, Washington, United States), which holds the patent on BladderScan, is the sole licensed manufacturer of portable bladder ultrasound devices in Canada. This market monopoly may drive up the cost of portable bladder ultrasound, particularly given rapid expansion of the technology.
Several thousand residents of Ontario would benefit from portable bladder ultrasound. The exact number is difficult to quantify, because the incidence and prevalence of incontinence are grossly under-reported. However, long-term care and complex continuing care institutions would benefit greatly from portable bladder ultrasound, as would numerous rehabilitation units, postsurgical care units, and urology clinics.
The cost of the portable bladder ultrasound devices ranges from $17,698.90 to $19,565.95 (Cdn) (total purchase price per unit as quoted by the manufacturer). Additional costs include training packages, batteries and battery chargers, software, gel pads, and yearly warranties. Studies indicate that portable bladder ultrasound is a cost-effective technology, because it avoids costs associated with catheterization equipment, saves nursing time, and reduces catheter-related complications and UTIs.
The use of a portable bladder ultrasound device will affect patients directly in terms of health outcomes. Its use avoids the urinary tract trauma that catheterization inflicts and does not cause UTIs. In addition, patients prefer it, because it preserves dignity and reduces discomfort.
PMCID: PMC3379524  PMID: 23074481
25.  Quality indicators for discarding blood in the National Blood Center, Kuala Lumpur 
Background and Objective:
The implementation of a quality system and continuous evaluation of all activities of the Blood Transfusion Services (BTS) can help to achieve the maximum quantity and quality of safe blood. Optimizing blood collection and processing would reduce the rate of discard and improve the efficiency of the BTS. The objective of this study is to determine the rate of discard of blood and blood components and to identify its reasons at the National Blood Centre (NBC), Kuala Lumpur, during 2007, in order to introduce appropriate interventions.
Study Designs and Methods:
Data on the number of discarded whole blood units and their components, the reasons for discard, the number of blood components processed, and the number of blood units collected were obtained from the Blood Bank Information System database at the NBC and analyzed.
Results:
The total number of blood units collected in 2007 was 171,169, from which 390,636 units of components were prepared. The total number of discarded whole blood units and components was 8,968 (2.3%). Platelet concentrate recorded the highest discard rate at 6% (3,909), followed by whole blood at 3.7% (647), fresh frozen plasma (FFP) at 2.5% (2,839), and cryoprecipitate at 2% (620). The discard rates of packed red blood cells (RBCs), plasma apheresis, and platelet (PLT) apheresis were less than 1%, at 0.6% (902), 0.6% (37), and 0.29% (14), respectively. RBC contamination of PLT and plasma was the major cause of discard at 40% (3,558). Other causes included leakage (26%; 2,306), lipemia (25%; 2,208), and underweight units (4%; 353).
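The per-component discard counts reported above sum to the stated total, and the overall discard rate is that total divided by the number of component units prepared. A minimal cross-check of those figures:

```python
# Cross-check of the discard figures reported above (NBC, Kuala Lumpur, 2007).
components_discarded = {
    "platelet concentrate": 3909,
    "whole blood": 647,
    "fresh frozen plasma": 2839,
    "cryoprecipitate": 620,
    "packed RBCs": 902,
    "plasma apheresis": 37,
    "platelet apheresis": 14,
}
components_prepared = 390_636  # component units prepared in 2007

total_discarded = sum(components_discarded.values())
discard_rate_pct = round(100 * total_discarded / components_prepared, 1)

print(total_discarded)   # 8968, matching the reported total
print(discard_rate_pct)  # 2.3, matching the reported 2.3%
```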
Conclusion:
Good donor selection, staff training and evaluation, and the implementation of automation will help to improve the processes and output of the BTS. This would reduce the discard of blood components and the wastage caused by nonconformance.
doi:10.4103/0973-6247.95045
PMCID: PMC3353623  PMID: 22623837
Discard blood; National Blood Centre Kuala Lumpur; quality indicators
