This chapter will:
Examine methods for measuring intermittent renal replacement therapy in critically ill patients.
Review data documenting the response to different modes of intermittent renal replacement therapy.
Examine the effect of variables that affect the efficiency of solute removal.
Recommend methods to assess and improve the adequacy of renal replacement therapy in the intensive care unit.
Hemodialysis was first used during the 1940s to sustain life in patients with acute renal failure. After a series of initial failures, it became clear that life could be prolonged and death from uremia prevented while the native kidneys recovered. In the setting of chronic renal failure, hemodialysis is an extremely effective therapy with an impressive potential for preserving life almost indefinitely despite total loss of a vital organ. The early pioneers of hemodialysis were likely impressed with its eventual success but were limited by rudimentary equipment, arterial access, and adverse reactions, so little attention was given to optimizing its adequacy. In the 1960s, soon after hemodialysis came into use for patients with chronic renal failure, attempts were made to shorten each treatment as a means of reducing cost and satisfying patients, who naturally prefer shorter treatment sessions. Symptoms such as muscle cramps and malaise often occur toward the end of treatment, and patients mistakenly believe that shortening the treatment will reduce or eliminate these symptoms. Despite less need to shorten treatment time in hospitalized patients, dialysis in the intensive care unit (ICU) setting was also shortened, to an average of 3 hours three times weekly, usually without measuring the dose. When the urea clearances achieved with this approach are compared with those achieved in the outpatient setting, it is clear that the average ICU patient treated three times weekly receives less hemodialysis than the average outpatient despite a theoretical need for at least as much dialysis (Fig. 156.1). In fact, an argument can be made for more treatment in the ICU because of the high rates of catabolism often found in critically ill patients.
Several modalities of kidney replacement are currently available, with variations in the duration and frequency of treatment. Although renal replacement therapy (RRT) in patients with acute kidney injury (AKI) prevents the complications of renal failure, including death from uremia, differences in modality as well as variations in the timing of initiation and dosing may affect clinical outcomes and ultimate survival. Both intermittent hemodialysis (IHD) and continuous renal replacement therapy (CRRT) have been used to manage patients with AKI, and the major trials comparing IHD with CRRT have shown equivalent results with respect to patient survival and recovery of renal function. Because intermittent dialysis three days a week may not be sufficient in the acute care setting, the Kidney Disease: Improving Global Outcomes (KDIGO) committee suggests using intermittent and continuous RRT as complementary therapies in patients with AKI.
The enormous experience with hemodialysis in outpatients has generated excellent tools for measuring adequacy that also can be applied to patients in the ICU. The intermittence of treatments in the outpatient setting and in the ICU facilitates measurement of small solute clearance and also provides the clinician with an opportunity to measure the patient's protein catabolic rate and water volume simply by sampling the blood at the beginning and end of the treatment. As discussed later, these opportunities are not as readily available in patients treated continuously or in patients with native kidney function. This chapter focuses on hemodialysis: more specifically, on the dose of dialysis and its adequacy in critically ill patients requiring intensive care. Although the techniques are similar to measurements used in stable outpatients, the scene is very different and the stakes are much higher in terms of risks from the underlying disease as well as the procedure. The good news is that recovery is possible and that near-normal kidney function after recovery is a strong possibility.
The primary goal of hemodialysis replacement therapy is to reduce the concentrations of small toxic solutes in the patient. The proven success of dialysis confirms that small solutes account for uremic toxicity, especially considering that earlier membranes effectively removed solutes of molecular weight only up to about 2000 Da. In stable patients, high urea generation rates and high urea removal rates are associated with better outcomes. Urea, the most abundant organic solute to accumulate, has a constant rate of production independent of blood urea levels (zero order kinetics), and ease of diffusion across membranes makes it an ideal marker of solute clearance. The source of uremic toxins is unknown, but the generation rates likely vary from solute to solute and from time to time, forcing the clinician to measure all of them if the goal is to reliably assess adequacy at any point in time. An alternative approach is to pick a representative solute such as urea and measure its removal by dialysis. Because diffusion and convection of solute across the dialyzer membrane are first-order processes (removal is proportional to the concentration), removal can be expressed as a clearance, which is the constant ratio of the removal rate to the concentration, as follows:
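Expressed symbolically, with R denoting the instantaneous solute removal rate (the equation is reconstructed here in standard notation from the definition in the text):

```latex
K = \frac{R}{C} \qquad \text{(Eq. 1)}
```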
This well-known expression is especially valuable during intermittent hemodialysis or intermittent hemofiltration, when concentrations of easily dialyzed solutes fall rapidly. If expressed as a clearance, the dialysis dose is constant, independent of absolute solute concentrations, and free from the errors caused by differences in solute generation rates.
Removing or clearing small solutes from the blood is the major accomplishment of dialysis and is clearly vital to survival of the patient with kidney failure. Other techniques can be used to control fluid balance, hormone deficiencies or excesses, electrolyte balance, or larger solutes; however, if small solute removal is inadequate, patient outcome is poor. Therefore providing an adequate clearance of small solutes must be the primary focus of any attempt to measure dialysis adequacy.
The most significant benefit afforded by dialysis treatments is removal of small dialyzable solutes from the blood of patients suffering from uremic toxicity. Higher morbidity has been ascribed to high urea levels, but treatment failures have been associated more strongly with fractional small solute clearance, measured as Kt/V urea.
As previously noted, the net flux (removal rate) of a diffusible solute across the dialyzer membrane is proportional to the solute concentration (C):
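In standard single-pool form:

```latex
\frac{dC}{dt} = -kC = -\frac{K}{V}\,C \qquad \text{(Eq. 2)}
```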
where k is the elimination constant (fractional removal rate), which can also be expressed as K/V, where K is the clearance and V is the volume of solute distribution. Integration of Eq. 2 from the beginning to the end of dialysis yields a simple expression:
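Namely, the familiar exponential decay:

```latex
C = C_0\, e^{-(K/V)\,t} \qquad \text{(Eq. 3)}
```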
where C0 is the initial predialysis concentration and C is the concentration at time t, usually at the end of dialysis. Taking logarithms of both sides of Eq. 3 yields the following simple expression:
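That is:

```latex
\frac{Kt}{V} = \ln\!\left(\frac{C_0}{C}\right) \qquad \text{(Eq. 4)}
```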
Kt/V is determined primarily from the ratio of predialysis to postdialysis solute concentrations and is a measure of the dialysis dose expressed as an integrated or average dialyzer clearance (K) throughout the entire dialysis time (t), factored for the patient's size. Size is represented by the volume of urea distribution, which is equated with total body water (V). Kt/V is essentially a clearance per dialysis and is therefore a measure of dialyzer function independent of absolute solute concentrations or removal rates.
Fig. 156.2 shows that the slope of the line connecting log urea concentrations during dialysis is the ratio K/V, which remains constant despite the rapid fall in concentration during treatment. This simplified approach, shown in Eq. 4 and depicted in Fig. 156.2, is helpful for demonstration purposes but omits several modifying variables. A more complete mass balance diagram is shown in Fig. 156.3, which also contains a slightly more complex but more accurate equation describing the rate of change in concentration (dC/dt) as a function of C, modulated to a lesser extent by changes in V, urea generation, and native kidney clearance during dialysis. An explicit solution to this equation, although more complicated, is also more accurate and has the same basic form as Eq. 3 (an exponential function of C0), as follows:
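One standard form of the variable-volume single-pool solution (a reconstruction; G denotes the urea generation rate, and the remaining symbols are defined in the text) is:

```latex
C = \left(C_0 - \frac{G}{K+B}\right)\left(\frac{V_0}{V_0 + Bt}\right)^{\tfrac{K+B}{B}} + \frac{G}{K+B} \qquad \text{(Eq. 5)}
```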
where V0 is the volume of urea distribution before dialysis, K is the sum of dialyzer clearance and native kidney clearance during dialysis (native kidney clearance alone between dialyses), and B is the rate of fluid gain between dialyses (negative during dialysis, when fluid is removed).
To measure the dose of dialysis, Eq. 5 is applied in reverse: predialysis and postdialysis serum urea nitrogen concentrations (BUN) are measured, and the parameters K/V and G are solved for by iterative computer techniques, a process called "urea modeling."
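Formal modeling requires iterative software, but a rough bedside check of the delivered dose is possible from two BUN values. The sketch below is not the formal modeling described here; it uses the widely published Daugirdas second-generation formula for single-pool Kt/V, and the numeric inputs in the example are illustrative only:

```python
import math

def sp_ktv(pre_bun, post_bun, uf_liters, post_weight_kg, hours):
    """Approximate single-pool Kt/V (Daugirdas second-generation formula).

    pre_bun, post_bun: serum urea nitrogen before/after dialysis (mg/dL)
    uf_liters: fluid removed during the treatment (L)
    post_weight_kg: postdialysis weight (kg)
    hours: treatment duration (h)
    """
    r = post_bun / pre_bun  # postdialysis/predialysis BUN ratio
    # -ln(R - 0.008*t) corrects the log term for urea generated during dialysis;
    # (4 - 3.5*R) * UF/W adds the convective (ultrafiltration) contribution.
    return -math.log(r - 0.008 * hours) + (4.0 - 3.5 * r) * (uf_liters / post_weight_kg)

# Example: BUN falls from 80 to 30 mg/dL over 3 h with 2 L removed, 70-kg patient.
print(round(sp_ktv(80, 30, 2.0, 70.0, 3.0), 2))  # roughly 1.1 for these values
```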
Urea modeling is performed by laboratories that service outpatient dialysis clinics. The clinic provides the blood samples together with the input data (“Data required”) listed in Table 156.1 , and the laboratory issues a report that includes the outcome data (“Information provided”) shown in Table 156.1 . Urea modeling is not automated in the ICU and therefore is done rarely. The patient's status and dialysis parameters often change from day to day, so a steady state of urea balance rarely is reached. This situation does not affect the calculation of dose but confounds the interpretation of urea generation and its derivative, protein catabolism.
Data required
- Predialysis BUN
- Postdialysis BUN
- Volume of fluid removed during the dialysis
- Dialyzer manufacturer and model
- Average blood flow
- Average dialysate flow
- Treatment time
- Patient height and weight

Information provided
- Effective dialyzer urea clearance
- Delivered Kt/V
- Patient's volume of urea distribution
- Patient's urea generation rate
- Patient's protein catabolic rate
- Quality assurance (comparison of prescribed with delivered Kt/V)
The urea generation rate provided by formal urea modeling can be used to calculate the patient's net protein catabolic rate by means of a simple conversion equation. As applied in the outpatient setting, a week-to-week steady state with respect to protein intake and output is assumed. Unfortunately, in the ICU setting this assumption is most often inappropriate. In critically ill patients, total parenteral nutrition (TPN) can have a significant effect on Kt/V and V. Similarly, if the measured dialysis occurs at night when the patient has no oral (or parenteral) intake, Kt/V may be falsely increased unless an adjustment is made for the lower G. To avoid these errors, modeling can be done with three BUN measurements, the third obtained at the beginning of the next dialysis treatment. In the oversimplified case of no residual clearance and no weight change, the normalized protein catabolic rate (nPCR, the patient's protein catabolic rate normalized to an ideal body weight based on V [g/kg body wt/day]) is a simple function of the rate of rise in urea concentration between treatments, as follows:
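A commonly published simplified version takes the form below (a reconstruction; the constants fold in the urea-nitrogen-to-protein conversion and a typical urea distribution volume, so treat them as approximate):

```latex
\text{nPCR} = 5420\,\frac{C_0' - C}{t_i} + 0.17 \qquad \text{(Eq. 6)}
```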
where C′0 is the second predialysis BUN (mg/mL), C is the postdialysis BUN (mg/mL), and ti is the time interval between dialyses (minutes). Measuring the third BUN also eliminates the mathematical coupling between Kt/V and nPCR found with two-BUN measurements. The generation rate and nPCR derived from Eq. 6 and from more formal urea modeling apply only to the interval between the two dialysis treatments, but they can be useful, especially in febrile, injured, or corticosteroid-treated patients, in whom rates of net protein catabolism are expected to be high, or in patients receiving parenteral nutrition.
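The three-BUN simplification described in the text can be coded directly. This is a sketch under stated assumptions: the constants 5420 and 0.17 come from the commonly published simplified equation (BUN in mg/mL, interval in minutes, typical body water), so the result is only approximate:

```python
def npcr(second_pre_bun, post_bun, interval_min):
    """Normalized protein catabolic rate (g/kg body wt/day) from the
    interdialytic BUN rise, assuming no residual clearance and no weight change.

    second_pre_bun: predialysis BUN at the start of the NEXT treatment (mg/mL)
    post_bun: postdialysis BUN from the measured treatment (mg/mL)
    interval_min: time between the two treatments (minutes)
    """
    # 5420 and 0.17 fold in the urea-nitrogen-to-protein conversion and a
    # typical urea distribution volume; treat the output as an estimate.
    return 5420.0 * (second_pre_bun - post_bun) / interval_min + 0.17

# Example: BUN rises from 30 mg/dL (0.30 mg/mL) to 70 mg/dL (0.70 mg/mL)
# over a 2-day (2880-min) interdialytic interval.
print(round(npcr(0.70, 0.30, 2880), 2))
```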
The dose by urea kinetic modeling usually is calculated in retrospect, often several days after the treatment, when the laboratory finishes measuring predialysis and postdialysis BUN values and returns the data. Because the essence of dose measurement as previously described is the dialyzer clearance of small solutes, methods have been developed to provide clearances in real time during the treatment by measuring other surrogate small solutes with online instruments. Blood-side and dialysate-side approaches have been studied; urea can be used as the clearance marker on either side of the membrane, whereas ultraviolet absorbance and conductivity have been used on the dialysate side. The latter is essentially a measure of sodium movement across the membrane and closely approximates urea clearance because both solutes are highly diffusible. Kt/V determined by conductivity, also known as ionic dialysance, has been shown to correlate with Kt/V determined by urea kinetic modeling but can vary depending on the method used to determine V. With online modeling, V is estimated either by an anthropometric formula or, preferably, determined by urea kinetic modeling, which has less variance and is patient specific. Kinetically derived values for V from blood-side and dialysate-side modeling are similar, but these modeled urea volumes are substantially lower than anthropometric estimates from the Watson and Chertow equations; anthropometry-based equations overestimate the urea distribution volume in hemodialysis patients and thus underestimate the dialysis dose. Online conductivity methods allow more frequent dose measurements in real time with no need for blood sampling. The method is based on readings from conductivity probes placed at the dialysate inlet and outlet before and after changes in the dialysate electrolyte concentration are created by the dialysate proportioning system, as shown in Fig. 156.4 and Eq. 7.
Interpretation of the readings is based on the assumption that changes in dialysate conductivity are caused by transmembrane movement of small electrolytes, mostly sodium, that behave like urea. A step-up in sodium concentration followed by a step down while measuring conductivity changes in the effluent dialysate tends to eliminate the effect of cardiopulmonary recirculation and provides a sodium clearance that is similar to or only slightly less than the simultaneously measured cross-dialyzer urea clearance. To avoid errors from changes in clearance during dialysis, multiple ionic clearance measurements must be performed throughout the treatment.
The rapid fall and subsequent rise in solute concentration caused by intermittent scheduling of treatments provides a ripe opportunity for measurement that is not available in people with native kidney function, in patients managed with continuous peritoneal dialysis, or in patients undergoing continuous renal replacement in the ICU. However, this advantage is offset by the reduction in dialysis efficiency when treatments occur infrequently.
The most efficient form of renal excretory replacement is the continuous process of solute removal provided by the native kidney. Efficiency can be defined as a ratio of effective output to energy input. In the case of dialysis, energy input is represented by the dialyzer clearance, and output is measured as a controlled reduction in solute concentrations. Regardless of the output measure selected, the effectiveness of intermittent solute removal, whether by diffusion (dialysis) or by convection (filtration), depends on the frequency of treatments, as shown in Fig. 156.5 . The reason for this dependence is threefold:
Rapid removal causes solute concentrations to fall precipitously in the patient. This decline reduces and eventually extinguishes the solute gradient across the membrane, the driving force for dialysis.
Unfettered generation of a solute between dialysis treatments raises its concentration independent of the vigor of dialysis. This accumulation of solute between treatments limits the capacity of the dialysis to control solute concentrations in the patient.
For most solutes, high-intensity (high-clearance) dialysis causes a gradient to develop within the patient, further limiting delivery of solute to the dialyzer membranes.
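The frequency dependence described above can be illustrated with a toy single-pool simulation. This is illustrative only: the parameter values (dialyzer clearance 12 L/h, urea volume 40 L, constant generation 420 mg/h) are assumptions, and volume changes, rebound, and intracorporeal gradients are ignored. Holding total weekly clearance (Kt) fixed, more frequent, shorter sessions yield a lower time-averaged solute concentration:

```python
def weekly_tac(sessions, hours_each, K=12.0, V=40.0, G=420.0, weeks=8, dt=0.05):
    """Time-averaged solute concentration (mg/L) over the final simulated week.

    Single-pool model with fixed volume: dC/dt = (G - K(t)*C) / V, where K(t)
    equals K (L/h) during scheduled sessions and 0 otherwise; G in mg/h, V in L.
    Sessions are spaced evenly over the 168-h week; explicit Euler integration.
    """
    starts = [i * 168.0 / sessions for i in range(sessions)]
    c = 1000.0  # arbitrary starting concentration (mg/L); transient dies out
    total, n = 0.0, 0
    steps = round(168.0 / dt)
    for w in range(weeks):
        for s in range(steps):
            t = s * dt  # time within the week (h)
            on_dialysis = any(t0 <= t < t0 + hours_each for t0 in starts)
            k = K if on_dialysis else 0.0
            c += dt * (G - k * c) / V
            if w == weeks - 1:  # average only the final, near-periodic week
                total += c
                n += 1
    return total / n

# Same weekly Kt (144 L): three 4-h sessions versus six 2-h sessions.
tac3 = weekly_tac(3, 4.0)
tac6 = weekly_tac(6, 2.0)
print(round(tac3), round(tac6))  # more frequent dialysis -> lower average level
```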
Early initiation or prophylactic dialysis before the development of symptoms and signs of kidney failure has failed to show a clinical benefit, including a survival benefit, in patients with end-stage renal disease (ESRD). In patients with AKI, various prospective and retrospective uncontrolled trials have suggested that earlier dialysis results in better survival, but in the pilot phase of the Standard versus Accelerated Initiation of RRT in AKI (STARRT-AKI) trial, which compared a strategy of early start (within 12 hours of eligibility) with a strategy of delayed start, no difference in the primary outcome of 90-day all-cause mortality was observed. However, the study had inadequate statistical power to draw definitive conclusions. Another large, multicenter, observational study, the Program to Improve Care in Acute Renal Disease (PICARD), found that early initiation of dialysis may be associated with a survival advantage: patients with a high degree of azotemia, defined as a BUN level greater than 76 mg/dL, had an increased risk of death at 60 days from the diagnosis of AKI.