Newer strategies for the use of blood and blood products: Lessons learned from recent wars


The broad indications for blood transfusion are based on the fact that transfused blood is the best substitute for blood lost in acute haemorrhages. —L. Bruce Robertson, MD, Captain, Canadian Army Medical Corps, 1916

Salt water is for cooking pasta, blood is for resuscitation. —Geir Strandenes, MD, Senior Medical Officer, Norwegian Naval Special Operations Command, 2018

As with so many other aspects of care of the injured patient, the history and knowledge acquired with respect to blood transfusion are inextricably linked to the lessons learned on the battlefield. This chapter will review the last century of blood transfusion, dating from the First World War to current practices in the military and their translation to the civilian trauma population.

It is oft quoted that the surgeon who wishes to understand trauma must first go to war. It is an unfortunate truism that military conflict(s) inevitably advance our knowledge and understanding of the management of the injured patient. The lessons learned on the battlefield are often referred to as “lessons written in blood” as the learning curve at the beginning of a conflict is often steep and unforgiving. A second unfortunate reflection is that these same lessons written in blood are all too readily forgotten at the conclusion of a battle when the majority of medical personnel involved in the care of the injured soldier leave the service and return to their civilian practices. In many cases these lessons are left on the battlefield and are not successfully translated as new standards of care in the civilian arena. This vexing paradox of “relearning lessons learned” is readily apparent when one reads the history of medical care during war over the last century. The history of blood transfusion is particularly exemplary of this issue.

The following brief excerpt is from the preface of the U.S. Army Medical Command’s publication on the history of blood transfusion during the Second World War. Its author, Lieutenant General Leonard D. Heaton, penned these words in 1957, but they could equally have been written by the authors and by those who have been privileged to care for both military and civilian casualties over the last two decades:

Medical officers who, like myself, served overseas in World War II and who observed the management of casualties with and without the use of whole blood, are peculiarly qualified to appreciate the achievements of the whole blood program. Its results unfolded before our eyes. In forward hospitals we saw men saved from death and, sometimes, almost brought back from the dead. In fixed hospitals, we received wounded men who once would have died in forward hospitals, or even on the battlefield. We received casualties with the most serious wounds in good condition. With the aid of more blood, we performed radical surgery upon them, and we watched them withstand operation and, with still more blood, recover promptly from it.

The history of the first blood transfusion dates back to approximately 1665, when Sir Christopher Wren successfully demonstrated the transfer of blood from the carotid artery of one dog to the jugular vein of another. Jean-Baptiste Denys then began a series of attempted transfusions from animal donors (sheep and calves) to humans. As one might anticipate, this was fraught with peril for the recipients. In the most notable attempt, Denys transfused the blood of a lamb into a male patient suffering from delirium; a second transfusion proved fatal. The man’s wife subsequently sued Denys for malpractice (he was acquitted), but she was later proven to have poisoned her husband with arsenic. Understandably, transfusion medicine was not warmly embraced over the ensuing centuries until further inquiries established more prudent practices. Over the next 240 years transfusion was rarely attempted. A few notable exceptions include transfusion attempts during the Civil War and the work of William Halsted. Hemorrhagic shock was a major cause of death during the Civil War, and there are two recorded transfusion attempts for wounded soldiers. Both transfusions were considered a success, as the first patient survived 3 years posttransfusion and the second died 9 days posttransfusion of infectious complications rather than of the transfusion itself. Another notable historical exception is the record of William S. Halsted, who in 1882 attended his own sister in postpartum hemorrhagic shock. Halsted harvested his own blood by syringe and immediately injected it into his sister, who subsequently survived.

The early 20th century witnessed two critical scientific advances that would clear the way for the advancement of transfusion medicine. The first was the groundbreaking work of Landsteiner (1900–1901), for which he was awarded the Nobel Prize in 1930, demonstrating the presence of isoagglutinating and isoagglutinable substances in the blood. The second was the identification by Hustin, Weil, and Lewisohn of sodium citrate as an effective anticoagulant (1914). Although both of these concepts would require further refinement and clarification over the next two decades, they were sufficient to open the door of possibility for the use of blood as a resuscitation fluid as the clouds of conflict began to gather over the European continent.

Blood transfusion was, at best, only sporadically practiced in the civilian setting during the first decades of the 20th century. As with all other conflicts, the generation of large numbers of casualties suffering from hemorrhagic shock during World War I provided the scenario for the first widespread introduction of blood as a resuscitation agent. The work of two physicians, both bearing the same last name, deserves attention and credit in this review.

Captain L.B. Robertson volunteered and was commissioned as a surgeon in the Canadian Army Medical Corps at the outbreak of hostilities in 1914. He arrived in France in September of 1915 and was “lent” to a British base station hospital for 4 months. During his service at this hospital, Robertson performed four transfusions of whole blood utilizing a transfusing syringe but without crossmatching. The volume of direct transfusion ranged from 500 to 1000 mL. He would subsequently submit his experience to the British Medical Journal (1916). It is the authors’ belief that this submission represents the first published call advocating for the use of whole blood in the resuscitation of the patient in hemorrhagic shock:

The broad indications for blood transfusion are based on the fact that transfused blood is the best substitute for blood lost in acute haemorrhages. . . . In cases of primary haemorrhages in the present war we see the condition of the shock due, not only to loss of blood, but to the injury causing the same. . . . The addition of salt solution to the circulation is at best only a temporary measure, and merely makes up for the loss of fluid, which is only one factor in the condition. The introduction of whole fresh blood into the circulation at once not only helps restore the depleted bulk of circulating fluid, but provides the patient with that particular body tissue to the depletion of which his immediate symptoms are due.

At the time Robertson was writing, the prevailing standard of care held that additional blood was not necessary for casualties in hemorrhagic shock. This erroneous conclusion may have been based on the observation that most casualties demonstrated an increased red blood cell concentration by the time they reached the clearing hospitals, often days after wounding. This led some surgeons to conclude that red blood cell volume was adequate and prompted resuscitation with saline solutions alone. The conclusion failed to recognize that most patients in hemorrhagic shock appeared hemoconcentrated not because red cell volume was preserved but because excessive fluid losses into the interstitial “third space” outpaced red cell depletion, creating a pseudo-hemoconcentration.

Captain Oswald H. Robertson (U.S. Army, Medical Officers Reserve Corps) entered Harvard Medical School, graduating in 1915. He subsequently worked with Peyton Rous on blood typing and red cell storage. He sailed with the Harvard Medical Unit to France in 1917 and was assigned to the British Expeditionary Hospital. Based upon his previous experience in Rous’s laboratory, Robertson quickly worked to establish a program to identify and crossmatch volunteer donors from the military personnel serving at the facility. In addition, Robertson began a “blood bank” by storing 500-mL units of whole blood from these donors, anticoagulated with approximately 400 mL of citrate-dextrose (Rous-Turner) solution. These units could be stored for as long as 28 days under refrigeration and were widely used during the bloody battles of the final year of the war.

It appears that during the final years of World War I, both British and American field hospitals were utilizing whole blood transfusions, developing storage banks, variously implementing procedures for crossmatching donors and recipients, and drawing up plans to create specialized medical “resuscitation teams.” Unfortunately, these concepts were incompletely catalogued and apparently not widely distributed among the medical corps leadership back home in Great Britain or the United States.

World War II

World War II represents the largest and deadliest armed conflict in history. The history of blood transfusion therapy during WWII is complex, daunting, and illustrative of the repetitive, cyclical manner in which the military and its medical corps attempt to awaken from the interwar years and prepare for an impending conflict. The interested reader is referred to an exhaustive recounting of the details of the blood transfusion program of WWII.

In the period between the First and Second World Wars, it appears that there was some continued implementation and integration of the use of whole blood and plasma as accepted therapy for the management of hemorrhagic shock in the civilian medical community. Many uncertainties, however, remained to be sorted out during the years leading up to hostilities (1935–1940). The fundamental definition of shock remained a widely debated medical topic, and precision was lacking in defining the basic mechanisms of the hemorrhagic insult. Like today, there was great debate regarding the ideal fluid(s) to restore perfusion. Blood typing was more completely understood, but the development of testing sera and crossmatching protocols was still incomplete. Sterility, packaging, and refrigeration technologies were still being developed and found wanting.

Between 1920 and the outbreak of World War II, the general belief was that plasma alone could compensate for the loss of whole blood in shock. This reflected the prevailing point of view that blood loss was not necessarily the primary cause of shock. It is not easy, looking back, to understand how these concepts were ever accepted, yet some of the most competent physicians in the country believed that plasma alone could compensate for the massive blood losses that occurred in trauma. This misconception was also a disservice to the true and important role of plasma in the therapy of shock. In addition, for logistical and practical reasons, many observers who believed that only whole blood was effective in shock appeared to concede that it would never be practical to provide it to the forward area.

When the United States entered the war, the prevailing dogma described here led to a preponderance of use of plasma, crystalloid, and albumin products during the early campaigns in both the Pacific and European theaters. This prejudice for the use of plasma was furthered by the publications of the proceedings of the first meeting of the National Research Council (May 1940). Among other findings, the council concluded, “Since the major single cause of the state of shock seems to be a decrease in the circulating medium (whole blood, plasma, or water and electrolytes, or a combination of these), therapy is based upon checking such losses and replacing body fluids by the best means at hand.”

One must also consider that in 1940 the technology for production of freeze-dried plasma had outpaced the development of similar technology that would allow for the provision of whole blood at a forward location. By mid-1941 the production technology for freeze-dried plasma was already available and provided a sterilized product, packaged in a small tin can, that required only sterile water for reconstitution. The administration set was compact, and the product could withstand harsh extremes of environmental temperature and be used by medics on the battlefield near the point of injury. In contrast, the provision of whole blood was hampered by a lack of an appropriate storage solution, inadequate sera for typing, large bulky collection and storage bottles, the logistic burden of air transport, and the lack of adequate refrigeration equipment. It would be only later in the war that the utility of the walking blood bank, first created in World War I, would be remembered and implemented.

As a result of the factors discussed previously, the majority of therapy during the early war years appears to have focused on the initial administration of plasma products while reserving whole blood for the more severe cases of hemorrhage or when plasma alone appeared to have failed in preparing the patient to withstand the rigors of surgery. The hard fact of the matter was that in 1940 and 1941, when the need arose, there was no real choice: if plasma had not been recommended and used, there would have been no agent at all for the treatment of large numbers of wounded casualties.

Opinion and practice began to change as the increasing experience with casualty care began to raise serious questions regarding the efficacy of plasma as the primary resuscitation strategy. A key thought leader in changing the paradigm was Dr. (COL) Edward D. Churchill, Chair of Surgery at the Massachusetts General Hospital. Churchill arrived in North Africa and was assigned to study the issue of resuscitation to determine if plasma alone was enough or if whole blood was an important component of care for patients suffering from hemorrhagic shock. Churchill’s report concluded the following:

The development of plasma was undoubtedly a great contribution to military medicine, but the early enthusiasm that accompanied its development had pushed aside sound clinical judgement and led to the widespread misconception that it was an effective substitute for blood in shock. In fact, the organization and development of effective methods for the management of shock had been handicapped to an embarrassing degree by this misconception, which was firmly entrenched in both administrative and professional minds.

Churchill went on to comment that plasma could temporarily improve the appearance and condition of the seriously wounded casualty, but that whole blood transfusion was invariably necessary for the patient to tolerate surgery. Churchill attempted to persuade the medical chain of command to change the protocol for resuscitation and to replace plasma with the early and more aggressive use of whole blood. Like many other military medical corps officers over the centuries, Churchill was advised to “follow the chain of command.” In his own words (from his excellent memoir, Surgeon to Soldiers):

I sent a report to the Surgeon General and gave as conclusion the problem of shock. . . . [I]t appeared that everyone in the United States was going haywire with the belief that some mysterious entity caused shock. I was not popular when I said that wound shock is blood volume loss. It is identical with hemorrhage. The wounded require the replacement of the blood loss. . . . The Surgeon General had said that we must fight the war with plasma.

Every communication had to go through “channels.” My only recourse was to talk to a New York Times reporter and say: “You must break the story that plasma is not adequate for the treatment of wounded soldiers.”

And so he did; the New York Times subsequently ran the headline “Plasma Alone Not Sufficient,” and the tide began to turn. The British were already demonstrating quite significant success with their use of whole blood in the African theater, and this would be translated into a change in policy for U.S. forces as the war front moved to Italy. Medical care in Italy proved to be a significant step in the validation of whole blood, as the Italian campaign was vicious and prolonged but also one in which established hospitals were relatively close to the front lines. These hospitals began to move to the development of walking blood banks as well as benefit from the establishment of robust supply lines that allowed shipments of whole blood from the United States to reach their wounded.

The blood program in the European theater developed along two lines. One was the increasing realization of the necessity for whole blood rather than plasma in the management of wounded men (though the complete realization did not come until after D-Day). The other was the increasing realization that local supplies of blood could not possibly meet the needs of the theater and that blood must be flown to the theater from the zone of interior (though again it was not until after D-Day that the full realization came). This transition was not without opposition, and over the course of preparation leading up to D-Day (June 1944), there appears to have been an ongoing back-and-forth between the growing realization of the necessity of blood for the wounded soldier and the logistic and leadership concerns of the Office of the Surgeon of the Army and the line officers of the Army itself. In the fall of 1943, a request was forwarded to the Office of the Surgeon of the Army to move forward with steps to assure the provision of whole blood at the forward field hospitals. This request was rejected at once on the grounds that plasma alone was sufficient for casualties, that it was impractical to provide blood at far-forward facilities, and that shipping space and air transport capability were too scarce to warrant the logistic burden.

Churchill and others continued to advocate for the necessity of blood as the primary component of resuscitation of hemorrhagic shock, and there was an ongoing debate about the ratio of concomitant plasma administration that was necessary. The consensus was that the quantities of blood provided should be based on the estimate that 20% of all combat casualties would require resuscitation, and 20% of these would require blood as well as plasma. It was estimated that 30 pints of protein fluid were necessary for every hundred wounded, in the proportion of 3 pints of plasma to 1 pint of blood.
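As a rough check on these planning figures, the arithmetic per 100 wounded (assuming, as the text implies, that both the 30-pint allotment and the 3:1 proportion are meant per 100 wounded) works out as follows:

$$
\begin{aligned}
\text{casualties requiring resuscitation} &= 100 \times 0.20 = 20 \\
\text{casualties requiring blood as well as plasma} &= 20 \times 0.20 = 4 \\
\text{plasma allotment} &= 30 \times \tfrac{3}{4} = 22.5 \text{ pints} \\
\text{whole blood allotment} &= 30 \times \tfrac{1}{4} = 7.5 \text{ pints}
\end{aligned}
$$

On this reading, the plan implies a little more than 1 pint of plasma for each casualty requiring resuscitation and a little less than 2 pints of whole blood for each casualty expected to need blood, a per-patient allocation that the original planning discussion did not state explicitly.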

(Editor’s note: It is interesting to note that this discussion presages the PROMMTT study [see later in this chapter], which would take place almost 70 years later.)

As D-Day drew nearer, unsettling thoughts about the adequacy of the arrangements for supplying blood for wounded casualties apparently began to cross the minds of those responsible for their care. Planning for the largest invasion in the history of warfare was an incredibly complex process, no less so for those in the medical planning department. By the spring of 1944, experience from the North African and Italian theaters had begun to suggest a ratio of units of whole blood to casualties sustained. The figure varied from a low of 1:1.5 to a high of 1:4. These ratios then drove the projections for the provision of blood products to the beaches of Operation Overlord in early June. The complexity was increased by the relatively short and variable shelf life of whole blood, which at the time was somewhere between 10 and 14 days. Over the ensuing year of the European campaign (June 1944 to June 1945), over 300,000 units of whole blood were transported to the U.S. Army’s European theater, and estimates are that over 85% of these units were transfused into casualties.

More important than the actual figures was the opinion of the surgeons: a medical officer on one of the 3d Auxiliary Surgical Group teams, who had previously worked in North Africa and Sicily, stated that the greatest single medical blessing in the European theater was the availability of whole blood from the blood bank, which was making it possible to operate on, and save, casualties who would never have survived on plasma alone. These same words were echoed by the surgeons of both the European and Pacific theaters, and in every major conflict since, including Korea, Vietnam, and the ongoing Global War on Terror.

Reactions to, and complications of, blood and plasma transfusions

It is still true that, whenever large numbers of transfusions are given, occasional adverse reactions will occur. Through the clarity of retrospection, the success of the whole blood transfusion program during WWII is remarkable, particularly in light of the gaps in basic science knowledge and technology that were faced by the program. The precise incidence of reactions after transfusions in World War II is not known. The circumstances in which transfusion reactions occurred did not favor accurate recording.

At the onset of World War II, reactions after transfusions were sufficiently frequent to alarm even the most enthusiastic proponents of the liberal use of whole blood. They were readily explained: blood was usually collected by an open system, and principles of sterility and of absolute cleanliness of the apparatus were enforced in only a limited number of hospitals. Many surgeons were therefore wary about using blood at all. When it began to be used in increasing amounts in the management of battle casualties, the pendulum then swung to the other extreme. Reactions were overlooked, and the widespread and highly erroneous clinical impression grew up that transfusion was an innocuous procedure. The relatively easy availability of whole blood and its widespread use provided great benefits, but it also brought inevitable misuse.

In 1942, Kilduffe and DeBakey collected from the literature 43,284 transfusions, with 80 hemolytic reactions (0.18%) and 45 deaths (0.1%). Hemolytic shock was the cause of death in 32 of the 45 fatal cases. These figures leave no doubt that incompatibility reactions were the chief cause of death after transfusion, and they vindicate the decision to use only group O blood in the massive transfusion programs set up in the Mediterranean and European Theaters of Operations, U.S. Army, and the Pacific areas in World War II.

At the end of the war, as the result of prewar knowledge and wartime experience, the causes of hemolytic reactions could be listed as intravascular hemolysis of incompatible donor cells, whether intergroup (A, B, O) or intragroup (Rh); intravascular hemolysis of recipient cells; intravascular hemolysis of compatible donor cells; and transfusion of hemolyzed blood. The utility of measuring serum titers in O whole blood was accepted and units demonstrating high titers were specifically labeled for infusion only to O recipients.

Korean conflict

When the Korean War broke out on June 25, 1950, less than 10 years after the United States had entered World War II and less than 5 years after that war had ended, the medical situation was one of disrepair and neglect. No well-organized blood bank system was in operation, but a plan for the supply of whole blood and plasma did exist. As a recurrent theme, the run-up to the Korean conflict demonstrated how quickly “lessons learned” were lost as care providers separated from service and returned to the civilian sector following World War II. In the intervening years the nonmedical, line elements of the military continued to exercise and plan for future engagements, but no continuing medical readiness plan was exercised. It is extremely unfortunate that planning had not begun earlier, for the need for whole blood arises whenever combat commences; the Korean War proved again that whole blood cannot be provided promptly and efficiently unless supplies, equipment, trained personnel, and a detailed plan for its collection, processing, transportation, and distribution have already been set up. To the credit of those involved in the military health care system of the time, there was recognition that “failure to act until an emergency entails accepting the responsibility for being unprepared.” 1

Over the first years of the conflict (1950–1951), the military health system was formally designated, under the direction of the Secretary of Defense, as the lead agency to oversee the collection, storage, and forward delivery of blood and plasma products. This same process led to a call for further inquiry and research into means of extending the storage life of whole blood. The need for blood during the initial months of ground fighting outstripped the ability of the awakening blood program to meet demand. An Armed Forces donor program was established to augment donations from civilian sources (the American Red Cross) and a limited pool of civilian and military donors in Japan. Over the ensuing 3 years of combat operations, almost 340,000 pints of whole blood were delivered to the Korean theater. The process of storage and shipment continued to use a combination of land-based refrigeration and ice storage during air transport to Japan and then Korea.

The challenges of limited storage life, transportation, and logistics of whole blood during both World War II and Korea led to ongoing research inquiry during this period. Plastic storage bags were introduced and perfected in the United States but failed to be adopted in time for use during Korea. The issue of hepatitis transmission from pooled plasma was again noted and led to the abandonment of freeze-dried plasma stockpiles. The transmission rate of hepatitis in those receiving freeze-dried plasma was noted to be as high as 12%. At the conclusion of hostilities in 1953 the process of whole blood transfusion in the military was essentially the same as at the end of World War II.
