Since the publication of the last edition of Current Therapy in Trauma and Surgical Critical Care, there have been many changes regarding the diagnosis of sepsis, the most accurate definition to capture its true incidence, and its overall burden in the United States. Additional issues concern whether an increasing incidence is associated with a lower mortality rate, and how the treatment strategy has evolved in view of our more in-depth understanding of the pathophysiological and molecular changes associated with sepsis and septic shock.
It is well known that sepsis is a frequent cause of severe disease and death globally. According to the most recent Centers for Disease Control and Prevention (CDC) report, every year 1.7 million adults in the United States develop sepsis and 270,000 Americans die from it, corresponding to a 15.9% hospital mortality rate. The clinical data used to identify patients with sepsis have evolved from the Sepsis 1 and Sepsis 2 criteria to those adopted by the Sepsis 3 committee, which rely on suspicion of infection with associated organ dysfunction, based on the Sequential Organ Failure Assessment (SOFA) score. In 1991, sepsis was defined as systemic inflammatory response syndrome (SIRS) due to a suspected or confirmed infection with two or more of the following criteria: temperature >38°C or <36°C; heart rate >90 beats per minute (bpm); respiratory rate >20/min or Paco2 <32 mm Hg; white blood cell count (WBC) >12,000 or <4000 cells/mm³, or >10% bands. Severe sepsis was defined as the progression of sepsis to organ dysfunction, tissue hypoperfusion, or hypotension, while septic shock required hypotension and organ dysfunction persisting despite volume resuscitation and requiring vasopressor support. In 2001, the definitions were updated with the inclusion of laboratory variables, and in 2004, the Surviving Sepsis Campaign (SSC) guidelines adopted the definitions to develop a protocol-driven model for the care of sepsis. In 2016, the Sepsis 3 committee introduced new definitions: (1) sepsis is a life-threatening condition caused by a dysregulated host response to infection resulting in organ dysfunction; (2) septic shock comprises circulatory, cellular, and metabolic abnormalities in septic patients, presenting as fluid-refractory hypotension requiring vasopressor therapy, with associated tissue hypoperfusion documented by a lactate level >2 mmol/L. Of note, the category previously known as severe sepsis was eliminated.
In contrast to previous estimates based on administrative coding data that suggested an increasing incidence and decreasing mortality rates, a more recent study found no change in the incidence of adult sepsis or its associated mortality rates from 2009 to 2014. However, there is widespread agreement that sepsis remains the leading cause of death in intensive care units (ICUs), and that, unfortunately, its incidence will continue to rise at an approximate yearly rate of 8% to 11% for a variety of reasons, including the increasing percentage of the population aged 80 years or older, the growing number of comorbid conditions present in patients, the increased use of cytotoxic and immunosuppressive drugs, and, more importantly, the emergence of antibiotic-resistant organisms. The increased incidence of sepsis and septic shock extends to trauma patients because of several incremental risk factors: significant comorbid conditions secondary to aging, and more patients surviving their initial injuries thanks to improved resuscitation, including the use of massive transfusion protocols, and to the timely treatment of injuries, which places them at risk of developing subsequent nosocomial infections. Additionally, trauma patients have an increased risk of developing sepsis and septic shock because of central nervous system injuries, pulmonary contusions from blunt chest injuries, preoperative shock, the frequent administration of large amounts of crystalloids, blood transfusions with their immunosuppressive effects, and the overall immunosuppressive effects of trauma-induced SIRS and of the compensatory anti-inflammatory response syndrome (CARS).
Septic shock is the final stage of a continuum that may progress in a stepwise fashion from a dysregulated response to infection, with or without the bacteremia characteristic of sepsis, to septic shock with multiple organ dysfunction leading to death. This continuum is characterized by the progression from the signs associated with SIRS to the development of organ dysfunction, hypotension, and hypoperfusion refractory to volume resuscitation, associated with an increased lactate level, compromised peripheral perfusion, oliguria, and changes in mental status. Ultimately, patients manifest sustained hypotension (systolic blood pressure [SBP] <90 mm Hg) despite adequate fluid resuscitation, or they may be normotensive on inotropic and/or vasopressor support yet inadequately resuscitated from the standpoint of the microcirculation. The transition from SIRS to sepsis and septic shock is accompanied by imbalances between oxygen supply and demand and by loss of hemodynamic coherence between the macro- and microcirculation, causing varying degrees of tissue perfusion deficits in different organs and eventually resulting in multiple-organ dysfunction syndrome (MODS).
While only 4% of patients with SIRS progress to septic shock, 71% of patients with culture-proven septic shock are initially identified as having a mild form of sepsis. It is for this reason that it is extremely important to identify at an early stage those patients at risk of progression from sepsis to septic shock. It is unclear at this time which of the available biomarkers are the best predictors, first for identifying septic patients and subsequently for identifying those at risk of progression to septic shock, in order to implement early goal-directed therapy (EGDT) aimed at preventing the development of full-blown late distributive shock, which, in our opinion, is most of the time less amenable to successful treatment.
There are at least three areas in which early diagnosis and implementation of time-sensitive therapies have been shown to improve outcome from the standpoint of overall mortality and functional recovery: the treatment of patients with myocardial infarction, stroke, and trauma. There is consensus that timely treatment, which includes four time-related phases (resuscitation, optimization, stabilization, recovery/de-escalation), is the most essential component of the treatment strategy itself. An important question pertaining to patients with documented sepsis and septic shock is whether there is evidence that their therapy should follow a similarly rigorous time-sensitive approach. A related question is whether there is mechanistic evidence at the cellular and molecular levels to support early therapy directed at restoring the macro- and microcirculation and at reversing the abnormalities identified in specific biochemical, cytokine, and chemokine markers associated with sepsis. Are there one or more sensitive and specific early therapeutic endpoints whose achievement can prevent the progression of sepsis from a protective inflammatory response to an upregulated but dysregulated and protracted response responsible for the development of MODS and its associated mortality?
Should we still undertake EGDT in view of the results of the three large multicenter trials that demonstrated no mortality benefit for patients treated with EGDT as opposed to usual care? Are the results of these trials robust and valid enough to support volume restriction and therapy targeted to conventional macrohemodynamics and clinical response, instead of evaluating the coherence between the macro- and microcirculation to assess whether the patient is responding appropriately to treatment? If one considers that an Scvo2 <70% is required to identify the subset of patients who may benefit from EGDT, and that the initial Scvo2 was 71%, 72%, and 70% in the ProCESS, ARISE, and ProMISe trials, respectively, we can conclude that approximately 50% of the patients in the usual-care arms of the three studies would not have required one or more of the steps of EGDT. Another important finding of these trials is that, while the timing of fluid administration varied among the study arms, the overall intravenous fluid volume received during the first 6 hours after enrollment was between 4 and 5 L in all groups, suggesting that early large-volume fluid resuscitation was part of usual care. Additionally, since 45%, 54%, and 35.4% of the patients in these trials had lactate <4 mmol/L, we should be cautious about extrapolating their results to the subset of septic patients who are more severely ill.
Additionally, is there a difference between trauma/surgical and medical patients who develop sepsis from the standpoint of pathophysiology, activation of molecular mechanisms, and the quantitative and qualitative response of the microcirculation to the priming event of sepsis? Is targeting the microcirculation to enhance the availability of cytosolic and mitochondrial oxygen, through an approach that emphasizes oxygen transport and output variables such as venous oxygen saturation (Svo2) and lactate, the “key” to the successful treatment of patients with sepsis and septic shock, by modulating the early activation of the innate immune response and the multiple signaling pathways that lead to the development of the persistent inflammation, immunosuppression, and catabolism syndrome? It is our opinion that septic patients should receive a time-sensitive approach not dissimilar to that used to treat patients with stroke, myocardial infarction, and trauma. While the time window for the treatment of those conditions has been clearly identified, it remains poorly defined for septic patients. In this chapter, we focus on strategies useful to identify septic patients, not necessarily trauma patients, at risk of progression toward septic shock; on the cellular and molecular reasons that justify therapy targeted at resuscitating the macro- and microcirculation within a very narrow time-sensitive window; and, finally, on a specific treatment algorithm applicable to all septic patients.
In 2016, a combined task force of the Society of Critical Care Medicine (SCCM) and the European Society of Intensive Care Medicine (ESICM) refined the definition of sepsis. The classic definitions of both Sepsis 1 and Sepsis 2 were abandoned in favor of one that defines sepsis as organ dysfunction caused by a dysregulated host response to infection. Sepsis 1, developed in 1991, defined sepsis as the presence of two or more SIRS criteria in the presence of an active infection. The four SIRS criteria were tachycardia >90 beats/min, tachypnea >20 breaths/min, temperature >38°C (100.4°F) or <36°C (96.8°F), and white blood cell count >12,000/mm³, <4000/mm³, or bandemia ≥10%. Based on the 1991 definition, a patient with a temperature of 38.1°C and a heart rate of 91 bpm was identified as being affected by sepsis. The 2001 Sepsis 2 task force expanded the diagnostic criteria of sepsis but did not change the definitions; essentially, Sepsis 2 offered no change from Sepsis 1. The Sepsis 1 and Sepsis 2 definitions remain the ones used by the Centers for Medicare and Medicaid Services to identify septic patients.
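The SIRS screen above is simple enough to express as a short sketch; the function name and input layout are our own, with the thresholds taken from the 1991 criteria.

```python
def sirs_criteria_count(temp_c, hr_bpm, rr_per_min, paco2_mm_hg, wbc_per_mm3, bands_pct):
    """Count the 1991 SIRS criteria; two or more, with a suspected or
    confirmed infection, met the Sepsis 1 definition of sepsis."""
    criteria = [
        temp_c > 38.0 or temp_c < 36.0,                                 # temperature
        hr_bpm > 90,                                                    # tachycardia
        rr_per_min > 20 or paco2_mm_hg < 32,                            # tachypnea or hypocapnia
        wbc_per_mm3 > 12_000 or wbc_per_mm3 < 4_000 or bands_pct > 10,  # WBC or bandemia
    ]
    return sum(criteria)

# The marginal patient from the text: temperature 38.1 C and heart
# rate 91 bpm already satisfy two criteria.
print(sirs_criteria_count(38.1, 91, 16, 40, 8_000, 2))  # 2
```

The example illustrates why the 1991 definition was considered overly sensitive: two trivially abnormal vital signs suffice.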
Recognizing that the definition of sepsis was too broad, the SCCM/ESICM task force sought a better tool to identify septic patients through the incorporation of risk-assessment scoring systems. Seymour et al demonstrated that the predictive value of the SOFA score ( Table 1 ) was superior to the SIRS criteria of Sepsis 1 and Sepsis 2. SOFA was incorporated into the Sepsis 3 guidelines to define organ dysfunction: an acute rise in the SOFA score of 2 points or more, the cutoff for organ dysfunction, was associated with an in-hospital mortality greater than 10%. Sepsis 3 redefined septic shock as the need for vasopressors to maintain a mean arterial pressure of 65 mm Hg or greater together with a serum lactate greater than 2 mmol/L despite adequate volume resuscitation.
Table 1: SOFA Score

| Organ System | 0 | 1 | 2 | 3 | 4 |
|---|---|---|---|---|---|
| Respiratory: Pao2/Fio2, mm Hg | ≥400 | <400 | <300 | <200 (with respiratory support) | <100 (with respiratory support) |
| Coagulation: Platelets, 10³/mm³ | ≥150 | <150 | <100 | <50 | <20 |
| Liver: Bilirubin, mg/dL | <1.2 | 1.2–1.9 | 2.0–5.9 | 6.0–11.9 | ≥12.0 |
| Cardiovascular | MAP ≥70 mm Hg | MAP <70 mm Hg | Dopamine <5 or dobutamine (any dose) | Dopamine 5.1–15, epinephrine ≤0.1, or norepinephrine ≤0.1 | Dopamine >15, epinephrine >0.1, or norepinephrine >0.1 |
| CNS: Glasgow Coma Scale | 15 | 13–14 | 10–12 | 6–9 | <6 |
| Renal: Creatinine, mg/dL (urine output, mL/day) | <1.2 | 1.2–1.9 | 2.0–3.4 | 3.5–4.9 (or <500) | >5.0 (or <200) |

Catecholamine doses are in µg/kg/min.
While the SOFA score has proven to be accurate, it requires multiple data points that may not be available at the time of the initial evaluation of the patient; hence, it may delay the identification of septic patients. This concern prompted the development of the “quick SOFA” (qSOFA) score, consisting of only three components ( Table 2 ). The presence of at least two of these components is predictive of organ dysfunction. While qSOFA has shown prognostic value in retrospective studies, concerns have arisen regarding its sensitivity in identifying early sepsis. In a recent study of 8871 emergency room patients, of whom 4176 (47.1%) had SIRS, SIRS was associated with an increased risk of organ dysfunction and with mortality even in patients without organ dysfunction. SIRS and qSOFA showed similar discrimination for organ dysfunction; however, qSOFA was more specific but less sensitive. A multi-institutional study of 184,875 ICU patients concluded that the SOFA score is superior with respect to the prediction of in-hospital mortality, and that the qSOFA score has only marginally greater prognostic accuracy for in-hospital mortality than the SIRS criteria. An emergency room study that evaluated 879 patients, however, did not confirm the prognostic superiority of qSOFA over the SIRS criteria. Regardless of these limitations, the introduction of the SOFA and qSOFA scores into the 2016 Sepsis 3 guidelines has emphasized the role of organ dysfunction in sepsis, redefining the condition and distinguishing true life-threatening sepsis from a mild inflammatory response.
Table 2: qSOFA Score

| qSOFA Criteria | Points |
|---|---|
| Systolic blood pressure ≤ 100 mm Hg | 1 |
| Respiratory rate ≥ 22/min | 1 |
| Change in mental status | 1 |
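Because qSOFA uses only three bedside observations, it can be computed without laboratory data; a minimal sketch (the function name is ours):

```python
def qsofa_score(sbp_mm_hg, rr_per_min, altered_mental_status):
    """qSOFA (Table 2): one point per criterion; a score of 2 or more
    is predictive of organ dysfunction."""
    return (
        int(sbp_mm_hg <= 100)            # systolic blood pressure <= 100 mm Hg
        + int(rr_per_min >= 22)          # respiratory rate >= 22/min
        + int(bool(altered_mental_status))  # any change in mental status
    )

# A hypotensive, tachypneic patient with intact mentation already
# meets the two-point threshold.
print(qsofa_score(95, 24, False))  # 2
```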
One of the more important aspects of the management of the patient with sepsis is how to establish an early diagnosis in order to provide specific treatment, which includes the administration of a proper amount of fluids for volume expansion (VE) as well as the administration of antibiotics within the first few hours of diagnosis, followed by source control after the initial resuscitation phase. The early diagnosis of sepsis and septic shock, as opposed to noninfectious causes of an upregulated inflammatory response, is more difficult to make in trauma patients because the injury itself universally activates the proinflammatory response in the majority of patients with severe trauma. Unfortunately, it is estimated that between 30% and 50% of antibiotics administered during the hospital stay are unnecessary. The inappropriate use of antimicrobials is responsible for an increased incidence of opportunistic infections, resistance to multiple antibiotics, and toxic side effects, with consequent increases in morbidity and health care costs.
The most common causes of sepsis in trauma and surgical patients include ventilator-associated pneumonia from prolonged mechanical ventilation, catheter-related bloodstream infections, urosepsis, and intra-abdominal infections in patients who have undergone either laparotomy for injuries to the gastrointestinal tract and/or solid organs or complex surgical procedures. An important question regarding patients at risk of infection and/or sepsis is whether hypotension alone is a sufficiently sensitive screening marker for the tissue perfusion deficits that mark the transition from infection/sepsis to septic shock. Many studies support the superiority of serial lactate measurements over other markers, including hypotension, both for identifying the progression from sepsis to septic shock and for predicting sepsis-related mortality. While the anion gap and base deficit are routinely used to risk-stratify trauma patients, they are insensitive in septic patients: normal anion gaps and base deficits have been observed in 22% and 25% of patients with mean lactate levels of 4 and 7 mmol/L, respectively. Lactate represents a useful and clinically obtainable surrogate marker of tissue hypoxia and disease severity, independent of blood pressure. Previous studies have shown that a lactate concentration >4 mmol/L in the presence of SIRS criteria significantly increases ICU admission and mortality rates in normotensive patients. Lactate can be measured in the ICU as well as in the emergency department using point-of-care devices with a turnaround time of 2 minutes, and since peripheral venous lactate can substitute for arterial lactate as long as tourniquet times are short, arterial or venous lactate levels should be obtained in surgical patients suspected to be septic in order to initiate early therapy.
Early therapy should be directed at the following endpoints as soon as feasible: (1) VE for restoration of the macro- and microcirculation; (2) administration of broad-spectrum antibiotics; (3) normalization of lactate, the venous-arterial carbon dioxide difference (Pv-aco2), and capillary refill time (CRT); and (4) source control with interventional procedures within 3 to 6 hours to prevent the development of MODS.
Among the available blood biomarkers, the two that synergistically complement clinical judgment and appear most useful for the early diagnosis of sepsis and septic shock are procalcitonin (PCT) and lactate. Both can be monitored to assess the response to therapy, although they serve different purposes in tailoring therapy to the individual patient. PCT is the more sensitive biomarker of infection, useful to differentiate bacterial sepsis from a nonbacterial etiology and to guide the duration of antimicrobial treatment, including the decision to de-escalate antibiotic therapy. Lactate is the more sensitive marker of the recruitment of cellular perfusion with fluid administration and of the balance between oxygen delivery and consumption, and, more importantly, is a better predictor of ICU and in-hospital mortality. C-reactive protein (CRP), an acute-phase protein released by the liver, increases with tissue damage, inflammation, and infection; however, its levels are elevated in most ICU patients, it is not specific for infection, it does not correlate with the severity or progression of disease, and its rise is delayed by 24 hours or more compared with PCT and other cytokines. In our opinion, therefore, CRP is less valuable for the diagnosis of sepsis in surgical and trauma patients.
Since its first description in 1993, PCT has become an increasingly utilized diagnostic marker of sepsis in the clinical setting. PCT is an acute-phase reactant protein produced primarily in the C cells of the thyroid and also in the lungs, kidneys, and liver. While PCT per se cannot isolate or detect specific pathogens, its level may be useful to estimate the probability of a severe bacterial infection. PCT is a 116-amino acid (AA) peptide with a molecular weight of 14.5 kDa, encoded by the CALC-1 gene located on chromosome 11. Under normal physiological conditions, it is cleaved into three molecules: calcitonin (CT) (32 AA), katacalcin (21 AA), and N-terminal PCT (57 AA). CT is involved in the homeostasis of calcium and phosphorus. The CALC-1 gene in thyroid C cells is induced by elevated calcium levels, glucocorticoids, calcitonin gene-related peptide (CGRP), glucagon, gastrin, or β-adrenergic stimulation. Under normal conditions, PCT cleavage is complete, resulting in undetectable levels in healthy individuals. However, the production of PCT is upregulated in response to bacterial infections through a direct pathway, induced by lipopolysaccharide or other toxic metabolites, and an indirect pathway, induced by inflammatory mediators such as interleukin-6 (IL-6) and tumor necrosis factor-alpha (TNF-α).
PCT is detectable in the serum within 4 to 6 hours after the onset of bacterial infection. It reaches its peak within 24 hours and, with appropriate treatment, starts to decline by approximately 50% per day (a 24-hour half-life). PCT has been shown to correlate with the extent and severity of microbial invasion and with the response to antimicrobial therapy. Müller and associates studied 373 patients with community-acquired pneumonia to assess the diagnostic sensitivity and specificity of PCT: as the PCT cutoff increased from >0.1 to >1.0 µg/L, sensitivity decreased from 0.90 to 0.43 while specificity rose from 0.59 to 0.96. Additionally, a PCT cutoff value >0.25 µg/L had a sensitivity of 98% for detecting bacteremia. In a separate study of critically ill patients, Müller demonstrated that PCT is a more sensitive and specific marker of sepsis than serum C-reactive protein, interleukin-6, and lactate levels. The potential role of PCT in optimizing antibiotic dosing has been investigated in several studies. Schroeder and associates examined the efficacy of PCT-guided versus conventional antibiotic therapy with respect to overall use and cost of therapy in infected patients. Using an antibiotic algorithm based on improved clinical signs of infection and a decrease of >35% from the initial PCT value, they documented that a PCT-based algorithm can reduce both the use of antibiotics and the expense of treatment.
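The kinetics described above (a peak within 24 hours, then roughly 50% decline per day under effective therapy) can be modeled as simple first-order decay, and the Schroeder de-escalation trigger as a relative drop from the initial value. This is an illustrative sketch, not a validated clinical algorithm; the function names are ours.

```python
PCT_HALF_LIFE_H = 24.0  # approximate half-life under effective antibiotic therapy

def expected_pct(initial_pct_ug_l, hours_elapsed):
    """Expected PCT (ug/L) assuming ideal first-order decay from the peak."""
    return initial_pct_ug_l * 0.5 ** (hours_elapsed / PCT_HALF_LIFE_H)

def supports_deescalation(initial_pct_ug_l, current_pct_ug_l, min_relative_drop=0.35):
    """Schroeder-style trigger: a fall of more than 35% from the initial
    value (with improving clinical signs) supports narrowing antibiotics."""
    return (initial_pct_ug_l - current_pct_ug_l) / initial_pct_ug_l > min_relative_drop

print(expected_pct(8.0, 48))            # two half-lives: 2.0
print(supports_deescalation(8.0, 4.5))  # True (a ~44% drop)
```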
However, there are limitations to PCT as a marker of infection and sepsis. Nonspecific elevations in PCT levels can occur in the absence of bacterial infection following massive stress, such as after severe trauma and complex surgery, and in patients in cardiogenic shock. Therefore, while PCT remains an efficacious biomarker of sepsis, it should be used in conjunction with other clinical parameters and the clinical judgment of the treating physician. Unfortunately, there is no single diagnostic test available, and probably never will be, that can establish the diagnosis of sepsis or septic shock.
To understand the role of lactate levels in risk-stratifying septic patients and in monitoring the response to therapy through serial measurements, as well as the pathogenetic mechanisms causing increased lactate levels in sepsis, one must understand the biochemical pathways responsible for lactate production. Under normal conditions, a cytosolic oxygen tension of 6 mm Hg and a mitochondrial oxygen tension of 1.0 to 1.2 mm Hg support aerobic metabolism; the pyruvate derived from glycolysis enters the Krebs cycle as acetyl-CoA in the mitochondrion. In the absence of sepsis, the microcirculation autoregulates through many mechanisms, including functional capillary density and the proportion of open, slow, fast, and closed capillaries, to match oxygen supply to oxygen demand at the cellular level. As shown in Figure 1, glycolysis has three rate-limiting steps, catalyzed by hexokinase, phosphofructokinase (PFK), and pyruvate kinase. PFK is the “pacemaker” of glycolysis because it exerts the major rate-limiting effect. It catalyzes the committed step of the glycolytic pathway, the phosphorylation of fructose-6-phosphate to fructose-1,6-bisphosphate. Its function is affected by the energy state of the cell (the level of adenosine triphosphate [ATP]), cytosolic pH, the level of available citrate, heat-shock proteins, endotoxin, and specific genes. While there is an ongoing debate as to whether the increased lactate level in sepsis is the result of dysoxia or of a hyperactive glycolytic pathway producing pyruvate in excess of the enzymatic capacity of the pyruvate dehydrogenase complex to generate acetyl-CoA, the net effect is an increased production of lactate from pyruvate, which cannot be stored.
There is agreement that during the first 24 hours after the onset of shock, an increasing lactate with a lactate-to-pyruvate ratio (LPR) >20 is the result of cellular hypoxia; after 24 hours, following resuscitation and stabilization, a persistent increase in lactate in the absence of an increased LPR is not of hypoxic origin but is most commonly due to an upregulated adrenergic response or increased glycolysis. Depicted in Figure 2 are the two most effective biochemical pathways that shuttle pyruvate: (1) conversion of pyruvate to lactate on an equimolar basis by lactate dehydrogenase, with conversion of lactate back to glucose in the liver and renal cortex via the Cori cycle; and (2) transamination of pyruvate to alanine, in which pyruvate accepts an amino group. Typically, the proportional conversion of pyruvate to lactate is not associated with a change in cytosolic pH, and hence not with cellular acidosis. It is important to understand the difference between lactate excess with and without acidosis ( Figure 3 ).
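The timing-dependent reading of an elevated lactate described above can be summarized in a small decision sketch (thresholds from the text; the triage logic is a simplification and the function name is ours):

```python
def interpret_elevated_lactate(lactate_mmol_l, pyruvate_mmol_l, hours_since_shock_onset):
    """Classify an elevated lactate using the lactate-to-pyruvate ratio (LPR)."""
    if lactate_mmol_l <= 2.0:
        return "lactate not elevated"
    lpr = lactate_mmol_l / pyruvate_mmol_l
    if lpr > 20:
        # During the first 24 h of shock this pattern reflects cellular hypoxia
        return "high LPR: consistent with cellular hypoxia"
    if hours_since_shock_onset > 24:
        return "normal LPR after stabilization: suspect adrenergic drive or increased glycolysis"
    return "normal LPR: nonhypoxic lactate production"

print(interpret_elevated_lactate(6.0, 0.2, 6))   # LPR 30: cellular hypoxia
print(interpret_elevated_lactate(4.0, 0.4, 36))  # LPR 10 at 36 h: nonhypoxic
```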
Lactic acid is a weak acid with a low pKa (3.86); it is only partially dissociated in water, yielding lactate ion and H+. Depending on the environmental pH, lactic acid is present either as the undissociated acid at low pH or as the ionized salt at higher pH. Under physiological circumstances, the pH is well above the pKa, so the majority of lactic acid in the body is dissociated and present as lactate. One should think of the LPR as the mirror image of the NADH/NAD+ ratio. Under normal conditions, when the energy state of the cell is within the normal range, that is, when the NADH/NAD+ ratio is normal, the LPR is <20. In the setting of an increase in lactate proportional to pyruvate with a ratio <20, there is enough energy to support the synthesis of ATP from adenosine diphosphate (ADP), inorganic phosphate (Pi), and hydrogen ions; therefore, there is no net increase in cytosolic hydrogen ions and no change in pH. In contrast, when the LPR is >20, the energy state of the cell is compromised; hydrolysis of ATP to ADP and Pi increases the hydrogen ion concentration, hence cellular acidosis. The subsequent hydrolysis of ADP to AMP and then adenosine induces relaxation of the precapillary sphincters in order to increase functional capillary density, and thereby local blood flow and oxygen availability at the cellular level, to restore the redox potential and cellular pH.
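The dissociation argument follows directly from the Henderson-Hasselbalch relationship; a quick check (the helper is our own illustration):

```python
def lactate_dissociated_fraction(ph, pka=3.86):
    """Fraction of lactic acid present as lactate ion:
    [A-] / ([HA] + [A-]) = 1 / (1 + 10**(pKa - pH))."""
    return 1.0 / (1.0 + 10.0 ** (pka - ph))

# At a physiological pH of 7.4, essentially all lactic acid is
# dissociated, which is why we speak of "lactate" rather than
# "lactic acid" in the blood.
print(f"{lactate_dissociated_fraction(7.4):.2%}")  # 99.97%
```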
Shown in Figure 4 is the relationship between oxygen delivery, consumption, and extraction (Do2, Vo2, and O2ER). Under normal conditions, Vo2 remains constant and independent of Do2 because of increases in O2ER. However, when the O2ER approaches 60%, the anaerobic threshold, Do2 and Vo2 become linearly dependent: the patient is now in a state of supply-dependent Vo2 and lactate production, and any further increase or decrease in Do2 will be accompanied by a corresponding increase or decrease in Vo2. While the anaerobic threshold, at which lactate is produced, is reached at an O2ER of 60%, this does not correspond to the maximal whole-body extraction ratio, which is 80%. Whether the slope of the Do2/Vo2 curve changes in sepsis, shifting the critical level of Do2 to the right, has been debated for years without consensus. Further controversy surrounds the need to normalize O2ER in critically ill patients. As shown in Figure 4, supply-independent Vo2 can be maintained by increasing O2ER up to 60%. Is the patient whose Vo2 is supply independent through an increased O2ER at higher risk of developing MODS than the patient whose extraction has been normalized by targeted therapy aimed at restoring systemic O2ER? Organs such as the heart, with a high baseline O2ER, can maintain a constant Vo2 only through increased flow, as opposed to organs with a low O2ER, such as the kidneys, which can maintain Vo2 through increased extraction. Is an increased global O2ER pathogenetically related to organ-specific supply/demand imbalances that can activate an inflammatory response causing MODS?
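The Do2/Vo2/O2ER relationship can be made concrete with the standard oxygen-content calculations (the 1.34 mL O2/g hemoglobin and 0.003 mL/dL/mm Hg dissolved-oxygen constants are conventional values; the function itself is our own illustration):

```python
def oxygen_transport(co_l_min, hgb_g_dl, sao2, svo2, pao2_mm_hg=90.0, pvo2_mm_hg=40.0):
    """Return (Do2, Vo2, O2ER) from cardiac output, hemoglobin, and
    arterial/venous oxygen saturations, using standard content formulas."""
    cao2 = 1.34 * hgb_g_dl * sao2 + 0.003 * pao2_mm_hg  # arterial content, mL O2/dL
    cvo2 = 1.34 * hgb_g_dl * svo2 + 0.003 * pvo2_mm_hg  # venous content, mL O2/dL
    do2 = co_l_min * cao2 * 10.0                        # delivery, mL/min
    vo2 = co_l_min * (cao2 - cvo2) * 10.0               # consumption, mL/min
    return do2, vo2, vo2 / do2

do2, vo2, o2er = oxygen_transport(co_l_min=5.0, hgb_g_dl=15.0, sao2=0.98, svo2=0.73)
print(round(do2), round(vo2), round(o2er, 2))  # roughly 998, 259, 0.26
# An O2ER near 25% sits well below the 60% anaerobic threshold; as Do2
# falls, O2ER climbs toward that threshold before Vo2 becomes supply
# dependent.
```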
We believe that while global Do2/Vo2 and flow balances can be reflected by monitoring the O2ER through continuous assessment of the central venous oxygen saturation (Scvo2), the normalization of this systemic marker cannot ensure the absence of imbalances at the microcirculatory level and, hence, of ongoing cellular dysoxia responsible for the development of organ dysfunction.
A persistently elevated lactate has been shown to be a better indicator of mortality than the oxygen transport variables (Do2, Vo2, and O2ER). Bakker and associates defined “lactime” as the time during which lactate remains above 2 mmol/L and observed that the duration of lactic acidosis was predictive of organ failure and survival. Trauma patients whose lactate normalized within 24 hours had 100% survival, whereas lactate elevation lasting longer than 6 hours was associated with an increased mortality rate. Additionally, lactate concentrations that remain elevated up to 48 hours are associated with a higher mortality rate in postoperative hemodynamically stable patients. Our own experience suggests that delayed lactate clearance is associated with increasing mortality in surgical patients; in fact, failure to normalize lactate by 96 hours is associated with 100% mortality.
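Bakker's "lactime" is easy to estimate from serial measurements; a sketch that credits each sampling interval in which lactate stays above the 2 mmol/L threshold (the half-interval credit for a mid-interval crossing is our simplification):

```python
def lactime_hours(timed_lactates, threshold=2.0):
    """Estimate the hours lactate remains above threshold from a list
    of (hour, mmol/L) samples, by simple interval accounting."""
    total = 0.0
    for (t0, l0), (t1, l1) in zip(timed_lactates, timed_lactates[1:]):
        if l0 > threshold and l1 > threshold:
            total += t1 - t0
        elif l0 > threshold or l1 > threshold:
            total += (t1 - t0) / 2.0  # crude credit when the threshold is crossed mid-interval
    return total

# Serial lactates (hour, mmol/L) after admission
series = [(0, 6.0), (6, 4.0), (12, 2.5), (18, 1.8), (24, 1.2)]
print(lactime_hours(series))  # 15.0
```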
At this point, we must caution the reader regarding the use and interpretation of arterial and/or venous lactate levels as an isolated marker of tissue perfusion and cellular dysoxia. Lactate is a complex metabolic parameter with heterogeneous sources that cannot be used as a single marker of tissue hypoxia. A persistent elevation of lactate can be caused by adrenergic-driven upregulated production from stress, by the exogenous administration of catecholamines, by downregulation of the pyruvate dehydrogenase complex from thiamine deficiency in alcoholics, or by decreased hepatic clearance.
An elevated lactate level with a normal O2ER (Scvo2 >70%) in the setting of sepsis can be seen when there is an ongoing imbalance between Do2 and demand at the microcirculatory level in regions contributing to lactate production. Typically, this distributive abnormality, characterized by a hyperdynamic hemodynamic profile with increased Do2 to low-demand regions and, conversely, decreased Do2 to high-demand regions, is seen following resuscitation at a later stage of sepsis. At the microcirculatory level, this distributive state is characterized by heterogeneous red blood cell (RBC) flow, with stagnant and low-flow capillaries adjacent to perfused fast capillaries, resulting in microcirculatory shunts with decreased O2ER. The Svo2 measured on the venous side of the fast capillaries exceeds 80%, indicating the presence of shunt.
A normal O2ER is typically not associated with increased lactate because an O2ER of 25% is below the anaerobic threshold; however, while systemic global extraction may be normal, local organ-specific extraction may have reached the anaerobic threshold, thereby contributing to lactate production. In these cases, when systemic O2ER is normal but the lactate level remains elevated after resuscitation and stabilization in the first 24 hours, one must decide whether an ongoing microcirculatory oxygen debt can be unmasked in the setting of normal Scvo2, either by direct evaluation of the microcirculation with handheld vital microscopy (HVM) or by surrogate markers such as the Pv-aco2 difference and/or capillary refill time (CRT). An additional but less accurate assessment involves the infusion of low-dose, short half-life vasodilators, such as PGI2, PGE2, or even nitroglycerin, to unmask an ongoing oxygen debt by changing functional capillary density at the level of the microcirculation, thereby readjusting the balance between Do2 and Vo2 at the organ level. Patients who respond will manifest an increased O2ER with decreasing lactate levels, whereas nonresponders will retain elevated lactate levels without a change in Scvo2, indicating an irreversible defect in oxygen utilization at the mitochondrial level. Therefore, a high Scvo2 in a patient with adequate Do2 and persistently elevated or rising lactate levels who is deteriorating clinically must be interpreted as severe and possibly uncorrectable microcirculatory dysfunction.
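The dissociation described above, elevated lactate despite a normal extraction ratio, can be made concrete with a simple calculation. As an assumption of this sketch, O2ER is approximated from arterial and central venous saturations, ignoring dissolved oxygen; the function names and cutoffs used for the flag are illustrative, not a diagnostic rule.

```python
def o2_extraction_ratio(sao2: float, scvo2: float) -> float:
    """Approximate systemic O2ER from arterial and central venous O2
    saturations given as fractions (e.g., 0.98 and 0.72), neglecting
    dissolved oxygen."""
    return (sao2 - scvo2) / sao2


def shunt_pattern_flag(sao2: float, scvo2: float, lactate: float):
    """Flag the pattern discussed in the text: lactate > 2 mmol/L despite
    Scvo2 > 70% (i.e., a normal-to-low O2ER), which suggests
    microcirculatory shunting rather than globally inadequate Do2.
    Returns (flag, o2er)."""
    o2er = o2_extraction_ratio(sao2, scvo2)
    return (lactate > 2.0 and scvo2 > 0.70), o2er


# Hypothetical septic patient after resuscitation:
flag, o2er = shunt_pattern_flag(sao2=0.98, scvo2=0.78, lactate=4.1)
# flag is True; o2er is about 0.20, i.e., below the ~25% anaerobic threshold
```

A patient flagged this way is the candidate, per the text, for direct HVM evaluation or surrogate markers rather than further escalation of global Do2.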
The microcirculation consists of microvessels with diameters < 20 µm, namely arterioles, capillaries, and postcapillary venules, together with their cellular components. It is the most distal site of oxygen transfer from the RBCs to the parenchymal cells, maintaining their functional activity via two mechanisms: (1) RBC flow (convection of oxygen-carrying RBCs); and (2) diffusion of oxygen from the RBCs to tissue cells (the diffusional component, quantified by functional capillary density [FCD]). The microvessels are almost completely lined by endothelial cells (ECs) that, by sensing metabolic and physical stimuli, regulate microvascular flow in conjunction with smooth muscle cells through the release of the vasodilator nitric oxide (NO) in order to match RBC flow to cellular oxygen requirements. One of the key components of the subcellular structure of the endothelium is the glycocalyx, a 0.2- to 0.5-µm gel-like layer made of proteoglycans, glycosaminoglycans, and plasma proteins synthesized by the ECs and present on the luminal side of the endothelium. The glycocalyx is responsible for homeostasis, hemostasis, solute transport, and immunologic functions. The autoregulation of microcirculatory flow is implemented through myogenic, metabolic, and neurohumoral mechanisms. NO is considered a key component in the maintenance and autoregulation of the homeostasis and patency of the microcirculation.
Under normal conditions, there is hemodynamic coherence between the macro- and microcirculation, in that an improvement in macrocirculatory flow from optimization of its variables results in a parallel improvement of the microcirculation, which in turn improves tissue oxygenation to match the specific oxygen demand heterogeneity of the organs' parenchymal cells. However, hemodynamic coherence is lost following an episode of shock, and most frequently in septic shock. The tissues can remain hypoperfused from lack of recruitment of microcirculatory flow despite successful resuscitation of the macrocirculation with fluid administration and vasoactive drugs. Of note, loss of coherence can occur between the different compartments of a single organ and even between groups of cells. The loss of coherence during sepsis is multifactorial; it includes abnormalities of the ECs from shedding of the glycocalyx and altered production of NO, because inducible NO synthase (iNOS) becomes heterogeneously expressed in the vascular beds of the organs, resulting in pathologic shunting of microvascular flow in some districts and in hypoperfusion of other, iNOS-deficient regions.
There are four types of alterations of the microcirculation responsible for the loss of hemodynamic coherence. Although the four types are different, they are all associated with a decrease in FCD and, hence, with a compromised ability of the microcirculation to transfer oxygen to the cells. Type 1 is characterized by heterogeneity of microcirculatory perfusion, with obstructed capillaries next to capillaries with flowing RBCs. Type 2 is hemodilution, with loss of RBC-filled capillaries and a resulting increased diffusion distance, most commonly seen in cardiac surgery patients undergoing cardiopulmonary bypass. Type 3 is vasoconstriction/tamponade, in which either vasoconstriction or increased venous pressure compromises tissue oxygenation, as observed with the excessive and protracted use of norepinephrine (NE). Type 4 is tissue edema from capillary leak, causing an increasing distance between RBCs and tissue cells, documented in patients with volume overload. The type 1 abnormality is the loss of hemodynamic coherence most commonly reported in septic patients and has been associated with organ dysfunction, worse outcomes, and increased mortality.
The gold standard for assessing tissue perfusion through evaluation of the functional status of the microcirculation is HVM. Of note, the finding of an initially low microcirculatory flow independent of systemic hemodynamics predicts the responsiveness of the microcirculation to volume expansion (VE), as opposed to the absence of fluid responsiveness when microcirculatory flow is normal by HVM. The most commonly monitored site with HVM is the sublingual microcirculation. However, the question of coherence between the sublingual and the infradiaphragmatic microcirculation, as assessed by evaluation of the intestinal microcirculation, raises the issue of differing responses of the supra- versus infradiaphragmatic microcirculation in shock based on their differential innervation, although some authors suggest that because the tongue and the intestine have the same embryologic origin, they must inherently have hemodynamic coherence.
The usual response to shock is preservation of flow to vital organs such as the heart and the brain through diversion of blood flow from the splanchnic to the systemic circulation. Does the absence of microcirculatory abnormality in the sublingual circulation assure a similar response in the intestinal microcirculation of patients in septic shock, where perfusion that is not restored in a timely fashion may prime the patient through an upregulated inflammatory response from bacterial translocation in the suboptimally perfused bowel? Is there hemodynamic coherence between systemic and microvascular flow in the supra- versus the infradiaphragmatic microcirculation? A study by Edul et al showed that in postoperative patients with abdominal sepsis, hemodynamic coherence between systemic and microvascular flow is present in the tongue but not in the gut, suggesting that in this subset of septic patients there is dissociation between the sublingual and intestinal microcirculation. Therefore, normalization of the microvascular indices of perfusion of the sublingual microcirculation with therapy targeted at restoration of systemic hemodynamics may not assure restoration of the intestinal microcirculation.
Indices of the microcirculation include total vessel density (TVD, mm/mm²), perfused vessel density (PVD, n/mm²), proportion of perfused vessels (PPV, %), heterogeneity index (HI), and microvascular flow index (MFI). Two scores are used in clinical practice: the De Backer score and the MFI. The De Backer score is based on the principle that vessel density is proportional to the number of vessels crossing arbitrary lines. The MFI score is based on the determination of the predominant type of flow in each of four quadrants, assigned as 0 = absent flow, 1 = intermittent flow, 2 = sluggish flow, and 3 = normal flow; the values of the four quadrants are averaged.
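The two bedside scores just described are simple enough to sketch directly. The function names are our own; the De Backer density here only illustrates the crossings-per-line-length principle stated in the text, not the full published grid protocol.

```python
from statistics import mean

# Flow categories of the MFI, as assigned per quadrant
FLOW_SCORES = {"absent": 0, "intermittent": 1, "sluggish": 2, "normal": 3}


def mfi(quadrant_flows) -> float:
    """Microvascular flow index: score the predominant flow type in each
    of the four quadrants of the image, then average the four values."""
    assert len(quadrant_flows) == 4, "MFI is defined over four quadrants"
    return mean(FLOW_SCORES[f] for f in quadrant_flows)


def de_backer_density(crossings: int, total_line_length_mm: float) -> float:
    """De Backer-style vessel density: number of vessels crossing the
    arbitrary grid lines divided by the total line length (vessels/mm)."""
    return crossings / total_line_length_mm


print(mfi(["normal", "sluggish", "normal", "intermittent"]))  # 2.25
```

An MFI of 2.25 in this hypothetical reading reflects predominantly continuous but partly sluggish flow; lower values indicate the heterogeneous, stagnant pattern of the type 1 abnormality.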
Despite the support for HVM in the second consensus on the assessment of the sublingual microcirculation in critically ill patients and the resulting guidelines on microcirculatory imaging with the use of HVM, its use remains subject to the interpretation of the microcirculatory images for verification of microcirculatory recruitment and to the issue of dissociation between the sublingual and intestinal microcirculation in postoperative patients with abdominal sepsis. Based on these limitations, we believe that the use of HVM limited to monitoring of the sublingual microcirculation is not generalizable at this time. Consequently, we are left with the following surrogates to monitor microcirculatory organ perfusion and anaerobic metabolism: lactate, the lactate/pyruvate ratio (LPR), capillary refill time (CRT), peripheral temperature, the venoarterial CO2 difference P(v-a)co2 (ΔPco2), and the ratio between P(v-a)co2 and the arteriovenous oxygen content difference (ΔPco2/C(a-v)o2). We believe that, at this time, taking into consideration the limitations of each parameter, combined monitoring of serial lactate, Scvo2, CRT, and P(v-a)co2 during the early phases of resuscitation and optimization of septic shock reflects the adequacy of microvascular blood flow and should be used to guide the early therapy of sepsis and septic shock.
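For concreteness, the last two surrogates listed above can be computed from routine blood gas values. This sketch uses the standard oxygen-content formula (1.34 mL O2 per g hemoglobin plus dissolved O2); the function names, the example values, and the commonly cited cutoffs (ΔPco2 > ~6 mm Hg, ratio > ~1.4 as a proposed marker of anaerobic metabolism) are assumptions of ours, drawn from the general literature rather than from this chapter.

```python
def pco2_gap(pvco2_mmhg: float, paco2_mmhg: float) -> float:
    """Venoarterial CO2 difference (ΔPco2, mm Hg); values above ~6 mm Hg
    suggest flow inadequate to wash out tissue CO2."""
    return pvco2_mmhg - paco2_mmhg


def o2_content(hb_g_dl: float, sat_fraction: float, po2_mmhg: float) -> float:
    """Blood O2 content (mL/dL): hemoglobin-bound (1.34 mL/g) plus
    dissolved (0.003 mL/dL per mm Hg)."""
    return 1.34 * hb_g_dl * sat_fraction + 0.003 * po2_mmhg


def gap_to_cavo2_ratio(pvco2, paco2, hb, sao2, svo2, pao2, pvo2) -> float:
    """ΔPco2 / C(a-v)O2; ratios above ~1.4 have been proposed as a marker
    of ongoing anaerobic metabolism."""
    cavo2 = o2_content(hb, sao2, pao2) - o2_content(hb, svo2, pvo2)
    return pco2_gap(pvco2, paco2) / cavo2


# Hypothetical values: Hb 10 g/dL, Sao2 98%, Scvo2 70%, Pao2 95, Pvo2 40,
# Pvco2 52, Paco2 44 mm Hg
ratio = gap_to_cavo2_ratio(52, 44, 10.0, 0.98, 0.70, 95, 40)
# ΔPco2 is 8 mm Hg and the ratio exceeds 1.4 in this example
```

Such a patient would show both an abnormal CO2 gap and an elevated ratio, the combination the text proposes tracking alongside lactate, Scvo2, and CRT.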
One as-yet-unanswered question in the diagnosis and treatment of sepsis and septic shock is whether surgical intra-abdominal sepsis differs from medical sepsis, most commonly related to pneumonia, from the standpoint of the microcirculation and of 30-day and long-term outcome. Is the response of the microcirculation in surgical patients, whose septic source is almost immediately and completely controlled, different from that of the patient with medical sepsis? A recent study of a cohort of 301 surgical patients documented a low 30-day mortality of 9.6%, compared with the sepsis-related mortality of 30% to 50% reported in most studies. In this study, only 13 (4%) patients died within 14 days, primarily of refractory multiple-organ failure (62%). Of the 99 patients who survived 30 days, developed chronic critical illness, and were discharged, 41 died within 12 months, as opposed to 9 of the 189 (4.8%) patients with rapid recovery. Although sepsis was managed by an evidence-based protocol, the type of and time to resuscitation and the perfusion-related variables were not addressed; additionally, the authors did not address the impact of complete versus partial source control or of the number of interventions required for source control on the development of chronic critical illness, the number of patients who developed postoperative nosocomial infections, or long-term outcome. Notwithstanding these limitations, the surgical sepsis-related mortality was only 9.6%, suggesting that there may be a difference between surgical and medical sepsis.