Blood banking is a medical logistic activity. It attempts to bring the potentially life-saving benefits of transfusion to the patients who need them by making blood components available, safe, effective and cheap1. Blood banks try to get blood from the right donors to the right patients in a timely manner. The easiest way to assure the timely availability of blood is to have an appropriate inventory on the shelf at all times.
Standards for blood banking have evolved in response to problems observed in the past2. Donors need to be free of syphilis, hepatitis, and human immunodeficiency virus (HIV), as well as a host of other diseases. Methods for cleaning the arms of donors should work. Blood bags should contain the appropriate solutions and be sterile. Systems for the identification of donors and patients, for the determination of antigens on their blood cells and the antibodies in their sera, and for the procedures and processes used to gather and maintain this information should be robust.
Regulatory agencies, like the United States (U.S.) Food and Drug Administration (FDA), are charged to assure that blood products are safe and effective3. They attempt to assure safety by enforcing the standards noted above. They attempt to assure effectiveness by demanding demonstrations that a reasonable fraction of red cells, platelets and plasma proteins survive after storage before licensing new blood storage systems or blood modifying products like leukocyte reduction filters4,5. Yet the reasonable fraction for red cells is 75%, for platelets is 67%, and for plasma proteins is 80%. Standards, such as insisting that every unit of blood be tested for HIV, work only if the licensed tests detect the circulating viruses, a moving target.
Better understanding of the basic biology and better tests on which to base better standards are needed. If we understood the changes that occur with the storage of red cells, platelets and plasma better, we could both design better storage systems and regulate storage more effectively. If we knew more about the viruses and bacteria and the cytokines and blood breakdown products that threaten blood safety we could institute controls to further improve blood safety. This paper will explore some specific examples.
Red blood cells are the most commonly transfused blood component6. High Human Development Index (HDI) countries use 20–70 units of red cells per thousand members of their population each year. In the U.S., 15 million units are used to care for 300 million people for a use rate of 50 U per 1000 members of the population each year. This amounts to 40,000 units for the whole population each day.
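As a check on the figures above, the per-capita arithmetic can be set out in a few lines (a sketch in Python; the 15 million and 300 million figures are those quoted in the text):

```python
# U.S. red cell use figures quoted in the text.
annual_units = 15_000_000
population = 300_000_000

# Units per 1,000 members of the population per year.
use_rate_per_1000 = annual_units / population * 1_000
print(use_rate_per_1000)  # 50.0

# Average daily use across the whole population.
daily_units = annual_units / 365
print(round(daily_units))  # 41096, roughly 40,000 units per day
```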
The units are collected either as whole blood into bags containing the anticoagulant-nutrient solution citrate-phosphate-dextrose (CPD) or by apheresis into acid citrate dextrose (ACD). The whole blood is centrifuged to sediment the heavier red cells, and the red cells are separated from the rest of the blood. Separation is accomplished in two different ways7. One way involves draining the red cells out of a port in the bottom of the bag, leaving behind a few red cells, the buffy coat of white cells and platelets, and the plasma on top. This is called the buffy coat method, and the bags that support it are called top and bottom bags. The other method involves centrifuging the blood less hard so that many of the platelets remain suspended in the plasma. In this process, making concentrated red cells involves squeezing the platelet-rich plasma off the top to leave the red cells, the buffy coat of white cells and some platelet-rich plasma behind. This is called the platelet-rich plasma method of component manufacture. Typically, the concentrated red cells are then run through a leukocyte reduction filter, which removes most white cells and platelets, and an additive solution containing more nutrients is added to support longer storage and to dilute the units so that they are less viscous and flow well during emergency administration. For apheresis units, the collection method removes most of the white cells and platelets, and the additive solution is added directly to the collected red cell concentrate.
These methods of blood component separation are probably essentially equivalent, although each has advocates. The platelet-rich plasma method loses and possibly damages platelets, the buffy coat method loses some red cells, and the apheresis method is expensive. However, the expense of the apheresis method can be largely offset by collecting two units at once only from male donors over 80 kg, saving the cost of a second set of infectious disease tests and a second leukoreduction filter.
None of these methods is particularly well optimized. The use of ACD and CPD is a legacy of the days when these were the best 3-week whole blood storage solutions available8. They are acidic with a pH of 5 to 5.8 so that the dextrose does not caramelize when the solutions are autoclaved. However, mixing whole blood with these acidic anticoagulants immediately drops the pH of the resulting suspension to about 7.1 leading to rapid breakdown of red cell 2,3-diphosphoglycerate (2,3-DPG). If we simply drew whole blood into neutral citrate, we would preserve 2,3-DPG better and avoid exposing many intensive care and brain injured patients to the high dextrose loads that end up in conventional blood plasma. There is enough glucose in the blood of healthy donors to support the cells until the donated blood is processed into components. In the early days of blood banking, we stored whole blood for up to five days in citrate alone.
Over subsequent weeks of storage, red cells consume dextrose through glycolysis and the hexose monophosphate shunt to produce adenosine 5’-triphosphate (ATP) and reducing substances. The production of all of these metabolic intermediates goes down over the course of storage as the glycolytic end-products, organic acids and protons, accumulate. As pH falls, the protons specifically feed back to slow the rate of glycolysis9. Conventional red cell additive storage solutions support stored red cells for about 6 weeks but fail rapidly thereafter. Between a pH of 7, where red cell storage typically starts, and 6.5, where it ends, a unit of red cells can buffer about 7 mEq of protons. Raising the pH to 7.2 at the beginning of storage with either a less acid anticoagulant or a more basic additive solution can add another 3 mEq of buffer capacity, as can the addition of physiologic amounts of bicarbonate10. These changes can allow the continued production of ATP for 8–9 weeks and result in higher concentrations of ATP at all of the points in between. Higher concentrations of ATP support the actions of the enzymes that keep negatively charged phospholipids on the inside of the membrane, exclude calcium, and limit the loss of membrane in apoptotic vesicles. These actions extend potential shelf-life by extending viability.
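The buffer arithmetic above can be made explicit. This is a sketch only: the buffer capacities are those quoted in the text, but the proportional scaling of useful storage time with buffer capacity is an illustrative assumption, not a claim from the source.

```python
# Buffer capacities quoted in the text (mEq of protons per unit).
baseline_buffer = 7    # pH 7.0 down to 6.5 during conventional storage
higher_start_ph = 3    # extra capacity from starting at pH 7.2
bicarbonate = 3        # extra capacity from physiologic bicarbonate

# Assumption for illustration: if glycolytic acid production were
# roughly constant over time, storage duration would scale with
# available buffer capacity.
conventional_weeks = 6
weeks_one_intervention = (
    conventional_weeks * (baseline_buffer + higher_start_ph) / baseline_buffer
)
print(round(weeks_one_intervention, 1))  # 8.6, consistent with 8-9 weeks
```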
In the U.S. and Europe, the licensure of red blood cell storage systems has been based on measures of the viability and physical integrity of the stored cells4. The viability of red cells is typically measured as the fraction of cells at the end of storage that are able to circulate. Physical integrity, a necessary but not sufficient condition for viability, is measured as fractional haemolysis and will be discussed in the next section.
The standard measure of viability is the 24-hour in vivo recovery. In making this measurement, about 15 mL of stored red cells are labelled with chromium-51 and reinfused into the original donor11. Measurement of the total administered dose allows an estimation of the volume of dilution, and timed sampling at 5, 7.5, 10, 12.5, and 15 minutes allows back extrapolation to that original concentration as well. Following the subsequent concentrations allows ongoing clearance to be monitored, and recoveries greater than 75% at 24 hours are considered acceptable. Current storage systems maintaining leukocyte-reduced red cells in additive solutions provide about 84 ± 8% viability after 6 weeks of storage4.
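The back-extrapolation step can be sketched numerically. The sample counts below are invented for illustration; the real assay measures 51Cr radioactivity in timed blood samples, and clearance over the first minutes is assumed here to be log-linear:

```python
import math

# Sampling times (minutes) and hypothetical counts per mL.
times = [5, 7.5, 10, 12.5, 15]
counts = [980, 970, 961, 951, 942]

# Least-squares fit of log(counts) = a + b*t, then extrapolate to t = 0.
n = len(times)
logs = [math.log(c) for c in counts]
t_mean = sum(times) / n
l_mean = sum(logs) / n
b = sum((t - t_mean) * (l - l_mean) for t, l in zip(times, logs)) \
    / sum((t - t_mean) ** 2 for t in times)
a = l_mean - b * t_mean
c0 = math.exp(a)  # estimated concentration at the moment of infusion

# 24-hour recovery = the 24-hour sample relative to the t = 0 estimate.
count_24h = 820   # hypothetical
recovery = count_24h / c0
print(f"{recovery:.1%}")  # roughly 82% with these invented counts
```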
A major remaining problem associated with red cell storage is that viability is very different from one donor to another. In a typical study, individual donors' red cells may have viabilities at the end of storage that range from less than 60% to more than 95%, with the larger number of donors skewed toward the upper end of the distribution. Under these circumstances, it takes more than 3 units of poorly viable red cells to deliver the same number of persisting cells as 2 units of better-storing red cells. From cross-over and repeated donation studies, we know that post-storage viability is a stable characteristic of individual donors under a variety of storage conditions12.
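The dose arithmetic implied here is simple enough to state directly (a sketch using the endpoint viabilities quoted above):

```python
# Endpoint viabilities from the text's illustrative range.
poor_viability = 0.60
good_viability = 0.95

# Persisting-cell "dose" delivered by 2 well-storing units.
target_dose = 2 * good_viability       # 1.9 unit-equivalents

# Poorly-storing units needed to match that dose.
units_needed = target_dose / poor_viability
print(round(units_needed, 2))  # 3.17, i.e. more than 3 units
```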
Being able to identify those donors whose cells store well is potentially useful for recipients such as children with thalassemia or sickle cell anaemia who can become iron overloaded from repeat transfusions. From a unit with poor recovery, such a child receives all of the iron but only a fraction of the anticipated useful red cells. Giving the cells with high recovery and long survival to these children reduces the burden and cost of iron chelation therapy, which can be tens of thousands of dollars each year. We deal with this problem now by giving such children fresh cells when we can, but as the number of individuals with sickle cell anaemia who are on exchange transfusion programmes increases, this becomes more difficult. Measures that would allow a blood banker to choose the best red cells for this situation could improve care and markedly reduce health system cost.
For general surgical patients whose transfusions are replacement for blood lost, the extra iron helps them rebuild their own blood, but the load of effete red cells that must be cleared in the first 24 hours after the transfusion of poorly-stored blood may cause additional problems13. Increased incidences of both post-operative pneumonia and metastatic cancer after transfusion are well-recognized phenomena and may in part be related to the number of non-viable red cells presented to limited clearance mechanisms14. Again, a better understanding of the red cell storage lesion might improve overall blood safety.
Obvious mechanisms of the red cell storage lesion are the metabolic consequences of the increasingly acid storage environment and the oxidative injury associated with keeping oxygen, heme, and iron in the same bag15. Blood is collected at venous oxygen saturation, about 75%, where methaemoglobin and superoxide generation are maximal. In healthy cells, methaemoglobin reductase and superoxide dismutase are highly active, and secondary damage is limited. As stored red cells lose energy, the lifespan of these undesirable species increases, with increased opportunities for secondary damage. With the exception of a few specific severe enzymatic defects that limit blood donation anyway, it is not known whose red cells are most susceptible to damage or whether the addition of antioxidants such as vitamin E or N-acetyl cysteine can safely improve storage. As our sense of the complexity of the red cell increases with the identification of more than 1,500 constituent proteins, there is a need for combined conventional storage experiments and quantitative proteomics on the red cells left in a bag after the 15 mL needed for recovery measures are removed16.
The second standard measure in the licensure of red cell storage systems is the percent haemolysis. Haemoglobin is 98% of the non-water content of red cells, and when they rupture, haemoglobin is released into the suspending fluid. There, it can be detected as an increasingly red colour to the supernatant and measured spectrophotometrically. In the U.S., haemolysis must be less than 1% at the end of storage and, in Europe, less than 0.8%. These numbers are arbitrary, and typical modern red cell storage systems average less than half those values.
Several decades ago, Greenwalt and his colleagues noted that more than half of the supernatant haemoglobin was in the form of membrane-bounded microvesicles that could be isolated by ultracentrifugation17,18. These vesicles occur in at least three different forms. Some are made as immature red cells, reticulocytes, mature in the circulation and shed membrane to reduce their size and assume the mature biconcave disc form. Others are made by mature red cells as they shed oxidized lipids, and are characterized by high concentrations of these oxidized lipids and the membrane proteins stomatin and flotillin19. Finally, there are apoptotic vesicles made as energy-depleted senescent red cells undergo programmed cell death.
There are three major determinants of the amount of haemolysis in any given unit of red cells. Haemolysis increases with the duration of storage, it is reduced by leukoreduction, and individual variation accounts for most of the remainder20. Storage duration-dependent haemolysis can be reduced by suppressing apoptosis-related microvesiculation through improved glycolytic energy flux throughout the storage period. Removing most white cells early reduces red cell damage from proteases and phospholipases released when retained white cells break down21. The cause of individual variation is unknown, but in cross-over studies, it is a consistent finding associated with the individual donors.
Haemolysis is not usually a clinical problem, but it can be for patients with ongoing intravascular haemolysis or massive transfusions. A typical unit of red cells contains about 69 g of haemoglobin, one millimole of the 69 kD haemoglobin tetramer. Haemolysis is typically 0.4% at the end of storage, of which only half is free, outside of vesicles. Outside the red cell or the vesicles, the tetramer dissociates into dimers, which in turn bind to haptoglobin dimers with 1:1 stoichiometry. Haptoglobin normally circulates at 10–52 μM concentration in plasma, so it typically takes at least 15 units of red cells to overwhelm this system. However, patients with haemolytic anaemias may have already consumed their haptoglobin, and small increases in free haemoglobin can lead to greater nitric oxide scavenging and more severe pulmonary and systemic vasoconstriction22. It would be useful to be able to identify donors with very high levels of storage haemolysis.
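The haptoglobin arithmetic can be worked through explicitly. The 3 L plasma volume and the 20 μM mid-normal haptoglobin concentration below are assumptions chosen for illustration; the text itself gives only the 10–52 μM normal range.

```python
# Per-unit free haemoglobin, from the figures quoted in the text.
hb_per_unit_mmol = 1.0    # ~69 g of Hb per unit = ~1 mmol tetramer
haemolysis = 0.004        # 0.4% haemolysis at the end of storage
free_fraction = 0.5       # half of the lysed Hb is free, outside vesicles

free_tetramer_umol = hb_per_unit_mmol * 1_000 * haemolysis * free_fraction
free_dimer_umol = 2 * free_tetramer_umol  # each tetramer gives two dimers

# Haptoglobin binding capacity (assumed values, 1:1 with Hb dimers).
plasma_volume_l = 3.0     # assumed typical adult plasma volume
haptoglobin_um = 20.0     # assumed mid-normal concentration
binding_capacity_umol = haptoglobin_um * plasma_volume_l

units_to_overwhelm = binding_capacity_umol / free_dimer_umol
print(units_to_overwhelm)  # 15.0 units, matching the text's estimate
```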
Platelets are involved in the blood coagulation process and are given to treat or prevent bleeding. About 2 million doses of platelets are given every year in the U.S., where a dose consists of 300 to 400 billion platelets, the amount in 4–7 whole blood derived collections or one apheresis collection6. In Europe, the EU standard has been greater than 200 billion platelets representing the pooling of 3–5 whole blood-derived collections. Platelets are given either therapeutically to stop bleeding or prophylactically to prevent bleeding. Therapeutic uses range from polytrauma patients with massive ongoing haemorrhage to persistent gum bleeding in patients with congenital platelet dysfunction. Prophylactic platelet use ranges from maintaining low blood concentrations in leukemia and stem cell transplant patients to prevent bleeding to attempts to reach higher concentrations to limit bleeding in patients undergoing invasive bedside procedures or surgery23.
Platelets are collected in three ways. They can be centrifuged from platelet-rich plasma, isolated from buffy coats, or collected directly from the bloodstream by apheresis. There is some evidence that the buffy coat and apheresis methods provide better platelets, with the suggestion that centrifuging platelets against the plastic bag surfaces in the platelet-rich plasma method leads to partial or complete activation of some of the platelets24. Assays that measure the extent or effect of partial activation of platelets would be helpful. It has been hard to show that better platelets make a clinical difference, but this is probably because in the only clinical situation available for routine study, the provision of prophylactic platelets to patients undergoing leukemia treatment or marrow transplant, the dose of platelets within the clinical range does not matter24.
Platelets are stored in large flat bags with a high surface-to-volume ratio and on agitators to facilitate oxygen diffusion. Off agitation for more than 24 hours, the bag contents become hypoxic and metabolism shifts to anaerobic glycolysis, so the contents become acidotic and the platelets lose function25. Platelets are stored at room temperature, 20–24 ºC, because below 18 ºC their lipid bilayer membrane undergoes a phase change which allows the aggregation of surface glycoproteins26. Such cold-damaged platelets work well in in vitro physiologic tests, but are removed rapidly from the circulation after reinfusion27.
Platelets are generally stored in the plasma in which they are collected. This reduces handling of the platelets but increases the prevalence of complications of plasma exposure, immunologic transfusion reactions and hypotension from kinin exposure28. Platelet additive solutions have been developed which reduce the complications of plasma exposure and allow the plasma to be diverted to other uses, but the platelets can be damaged by the additional handling29.
It is possible to keep platelets for as long as 8–13 days, but blood banks in the U.S. are allowed to keep them for only 5 days because of bacterial contamination. This means that 4–16% of collected platelets are lost because they do not find a recipient within their limited shelf-life. The bacteria come from the skin or blood of the donor and grow slowly at first, so it is common to hold the platelet units for 24 hours before culturing them and then hold them for another 12 hours to give the cultures time to grow before the platelet units are released from the issuing blood centre to the hospital transfusion services30. Shipping the platelets from one city to another to balance inventories and use can take an additional day, so the product’s actual available shelf-life is typically only 3 days.
It would be useful to store platelets longer and detect bacterial contamination sooner31. Better platelet storage has been demonstrated with gentle methods of platelet separation from blood such as centrifugal elutriation, and storage in buffered Ringer’s acetate where the acetate directly feeds the platelet mitochondria, and in polyvinyl chloride bags where oxygen diffusion is maximized and the plasticizers may stabilize the platelet membranes. These are empiric findings, and better scientific methods all around would be helpful32,33.
At a regulatory level, the adoption of Scott Murphy’s definition of successful platelet storage, 67% of fresh autologous platelet recovery with 58% of autologous fresh platelet survival, has allowed the development of new platelet products to go forward34. Unfortunately, the adoption of these new developments has been held in check by the fear of bacterial contamination, now down to very low levels. In the last year for which data are available, there was one death from bacterial contamination of a blood component, a platelet unit, in the whole U.S.35
While the regulations assure that the average platelet unit collected by any particular method or device is of reasonable quality, the recovery and survival of units from different donors is highly variable. In a review of many measures, the recovery varied from 20–90%4. As with red cells, there is no sense of why this variability exists, but unlike the case of ATP in red cells, there is no biochemical marker of poor recovery.
Plasma is used to control or prevent bleeding and rarely for other uses, such as the acute treatment of angioedema in patients who are missing complement-1 esterase inhibitor activity or chronically in children congenitally deficient in the von Willebrand factor-cleaving protease. In the U.S., approximately 4 million units of plasma are used clinically each year.
Plasma for clinical use comes from volunteer donors in the course of routine blood collection. The plasma collection industry, which collects millions of liters of plasma from paid donors for the manufacture of albumin, coagulation factor concentrates and intravenous immunoglobulin, is a separate activity. Clinical transfusion plasma is what is left over when platelet-rich plasma or buffy coats are centrifuged “hard” to remove the platelets or when plasma is collected by apheresis at clinical blood collection centres. The plasma from whole blood is typically diluted with 1 part CPD anticoagulant to 4 parts plasma and ends up as units of 250 to 300 mL of 80% plasma. If such units are frozen within 8 hours of collection, they can be called fresh-frozen plasma (FFP) and if longer than 8 hours but less than 24 hours, they are called frozen plasma (FP). Thawed FFP can be kept in the refrigerator as FFP for up to 24 hours, but either FFP or FP can be kept thawed as thawed plasma (TP) for up to 5 days. If FFP is allowed to thaw in the refrigerator, a granular precipitate of fibrinogen, von Willebrand factor, factor VIII and factor XIII remains, which can be removed to form a 10 mL unit of cryoprecipitate. The plasma remaining after cryoprecipitate removal is used to treat patients with thrombotic thrombocytopenic purpura (TTP) or sold to commercial plasma fractionators to make pooled plasma products. Apheresis plasma can be collected in 600 mL amounts in ACD or buffered citrate, but is usually broken down into 300 mL units to give it the size of normal plasma units.
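The 80% figure quoted above follows directly from the mixing ratio (a trivial check, using the 1:4 ratio from the text):

```python
# 1 part CPD anticoagulant mixed with 4 parts plasma.
parts_cpd = 1
parts_plasma = 4

plasma_fraction = parts_plasma / (parts_plasma + parts_cpd)
print(plasma_fraction)  # 0.8, i.e. units that are 80% plasma
```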
Plasma is usually stored frozen, at −18 ºC for a year or at −65 ºC for seven years. Once thawed, it can be kept at 1–6 ºC for 5 days. Under these conditions, bacterial contamination has not been a problem. Kinin activation occurs during liquid storage to a variable degree and can cause hypotension in some patients28. Coagulation factor VIII is poorly soluble and can remain cryoprecipitated even after thawing at 30–37 ºC, but this is rarely a problem because haemophilia patients are treated with factor concentrates and trauma patients secrete factor VIII from their shocked endothelial cells. Even under conditions where whole blood is held at room temperature for 24 hours before processing and the plasma is then held thawed for 5 days, total losses of factor VIII are only about 15%. The effects of changes in the other thousands of proteins in plasma with the warm hold or thawed storage are largely unknown36.
Regulation of clinical plasma products has been based on demonstrations that they are not contaminated and that they contain at least 70% (EU) or 80% (U.S.) of normal coagulation factor VIII activity. These standards are generally easily met, but in the past, some leukocyte reduction filters have selectively removed coagulation factors or activated kinins.
Therapeutic plasma contains moderate amounts of dextrose present in the CPD anticoagulant that may present a risk to patients with brain injury and sepsis. The simple expedient of removing the sugar from the anticoagulant and collecting apheresis plasma and, perhaps even whole blood, in neutral citrate may be impossible because no company would bear the regulatory cost necessary for licensure of such a system37. This would appear to be an obvious benefit and low-risk situation of removing an unneeded ingredient in which regulatory costs should be minimized, but regulatory agencies have their own views and agendas.
Red cells, platelets and plasma all have important roles in medical care, high efficacy for their primary indications, and no obvious replacements in the foreseeable future. They represent about 1% of the overall cost of healthcare, and about 2% of the cost of tertiary care centres. Blood collection centres and hospital transfusion services will remain largely as they are in the immediate future. There are no alternatives.
Efforts are being made to grow red cells from stem cells. At this time, a peripheral blood stem cell collection can be treated with growth factors and turned into about 3 mL of red blood cells38. This is an important demonstration of principle, but not a useful production technology. More red cells than that are lost in the lines of the apheresis device used to collect the stem cells. Even when cloned “universal donor” red cells are industrially produced, they will still need to be in units managed in blood banks, and there will be a fraction of individuals, such as those of the Bombay phenotype, who will not be able to receive them.
Platelets for transfusion are even less likely to come from alternative sources of production, but donor-derived apheresis platelets already make up almost three-quarters of the national supply. Apheresis devices with higher yields and gentler handling can improve the number and quality of platelets in the supply. Better storage systems can make them last longer and be safer, but here, the needs of trauma patients for the plasma in platelet units compared to needs of highly transfused cancer patients to have the plasma removed, compete39. Changes in the carrier solution for platelets will require alternative formulations of plasma.
Frozen FFP and FP will remain standards, but their dilute nature remains a problem for trauma patients and field medical care situations. Freeze-dried plasma, which can be reconstituted in less than normal volumes of water to make concentrated solutions, offers a potential alternative40. Mixtures of bioengineered proteins are further off, have high developmental costs, and face difficult regulatory paths to licensure.
Conventional blood products appear to be with us for the foreseeable future. It is important that we learn more about them. Such knowledge will help us manage them better now and make better versions available.