Electrolyte balance within the human body is essential for maintenance of health. Dysregulation of electrolytes affects water homeostasis and acid-base status and often results in overt clinical signs and symptoms. The laboratory is tasked with aiding the clinician by providing accurate, timely results to narrow or confirm a diagnosis. In challenging cases in which the clinical context is lacking or conflicting, it is even more important for the laboratory to provide reliable data.
This chapter compares analytical methods and describes their advantages, disadvantages, and pitfalls in the analysis of electrolytes (including detailed discussions on sodium [Na + ], potassium [K + ], chloride [Cl − ], and bicarbonate [HCO 3 − ]) and blood gases. Sweat Cl − quantification, which plays a central role in the diagnosis of cystic fibrosis (CF) and is known to be technically challenging, is also discussed.
Maintenance of water homeostasis is vital to life for all organisms. In humans, the maintenance of water homeostasis in various body fluid compartments is primarily a function of the four major electrolytes, Na + , K + , Cl − , and HCO 3 − . These electrolytes also have a role in acid-base balance, heart and skeletal muscle function, and as cofactors for enzymes. Abnormal electrolyte concentrations may be the cause or the consequence of a variety of medical disorders. Because of their physiologic and clinical inter-relationships, this chapter discusses analysis of (1) electrolytes, (2) osmolality, (3) sweat Cl − , (4) blood gases and pH, and (5) oxygen hemodynamics.
Electrolytes are classified as anions, which are negatively charged ions that move toward an anode, or cations, which are positively charged ions that move toward a cathode. Important physiologic electrolytes include Na+, K+, calcium (Ca2+), magnesium (Mg2+), Cl−, HCO3−, phosphates (H2PO4−, HPO42−), sulfate (SO42−), and some organic anions, such as lactate. Although amino acids and proteins in solution also carry an electrical charge, they are usually considered separately from electrolytes. Proteins in serum are usually anions, and albumin accounts for most of the difference between the commonly measured cations (Na+, K+) and anions (Cl−, HCO3−), known as the anion gap. Hydrogen ion (H+) concentration is routinely measured as pH, but its concentration is so low relative to other ions (nmol/L versus mmol/L) that its role as an electrolyte is negligible for clinical purposes. The major electrolytes (Na+, K+, Cl−, HCO3−) occur primarily as free ions, whereas a significant proportion (>40%) of Ca2+, Mg2+, and trace elements is bound by proteins, mainly albumin. Determination of body fluid concentrations of the four major electrolytes (Na+, K+, Cl−, and HCO3−) is commonly performed simultaneously, and the resulting set of results is known as an electrolyte profile.
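As a simple illustration of the anion gap described above, the following Python sketch computes it from the commonly measured electrolytes. The function name and example values are illustrative only; interpretation depends on the method and reference interval in use.

```python
def anion_gap(na, cl, hco3, k=None):
    """Compute the anion gap in mmol/L.

    Uses AG = Na+ - (Cl- + HCO3-); if a K+ value is supplied, the
    alternative formula AG = (Na+ + K+) - (Cl- + HCO3-) is used instead.
    """
    cations = na + (k if k is not None else 0.0)
    return cations - (cl + hco3)

# Example with typical plasma concentrations (mmol/L)
print(anion_gap(na=140, cl=104, hco3=24))          # 12.0
print(anion_gap(na=140, cl=104, hco3=24, k=4.0))   # 16.0
```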
Serum and plasma are the usual specimens analyzed for electrolytes. Capillary blood, collected in microsample tubes or capillary tubes, or applied directly from a finger stick collection to a point-of-care device, is another common sample. Arterial or venous heparinized whole blood specimens obtained for blood gas and pH determinations may also be analyzed by direct ion-selective electrodes (ISEs). Differences in values between serum and plasma and between arterial and venous samples have been documented (for details see Chapter 5), but the difference between serum and plasma K+ is considered the most clinically significant. Heparin, as either a lithium (Li+) or ammonium salt, is required if plasma or whole blood is to be tested. The use of plasma or whole blood rather than serum has the advantage of shortening turnaround time, because it is not necessary to wait for the blood to clot. Plasma and whole blood samples also provide a distinct advantage for determining K+ concentrations, which are invariably higher in serum to a degree that depends on platelet count, especially in disease. Caution is warranted in interpreting whole blood K+, however, because significant hemolysis cannot be detected. Hemolysis of red blood cells produces erroneously high K+ concentrations that do not reflect the in vivo concentration. In addition, unhemolyzed specimens that are not promptly processed may have increased K+ concentrations because of K+ diffusion from red blood cells when whole blood samples are stored at 4 °C. Grossly lipemic blood may also be a source of analytical error (see the section on Electrolyte Exclusion Effect), with some methods requiring ultracentrifugation of lipemic serum or plasma before analysis.
Urine collection for Na + , K + , or Cl − analysis should be done without the addition of preservatives. Body fluid aspirates, feces, or gastrointestinal (GI) fluid samples may also be submitted for electrolyte analysis. (The reader is referred to Chapter 45 for an in-depth discussion of body fluid analysis.)
Sodium is the major cation in extracellular fluids, accounting for about 90% of the approximately 154 mmol of inorganic cations per liter of plasma. Consequently, sodium is responsible for almost one-half of the osmotic strength of plasma. Sodium therefore plays a central role in maintaining the normal distribution of water and osmotic pressure of the extracellular fluid compartment (ECF). The daily consumption of sodium in the diet of adults varies dramatically across countries and cultures. In the United States, mean (±SD) daily Na+ intake is 4.2 ± 1.4 g (183 ± 61 mmol) for males and 3.1 ± 1.0 g (135 ± 43 mmol) for females (approximately 3 to 17.5 g of NaCl), and it is nearly completely absorbed from the GI tract. The body requires only 1 to 2 mmol/day, and the excess is excreted by the kidneys, which are the ultimate regulators of the amount of Na+ (and thus water) in the body.
Sodium is freely filtered by the kidney glomeruli. Approximately 70% to 80% of the filtered Na+ load is actively reabsorbed in the proximal tubules, with Cl− and water passively following in an iso-osmotic and electrically neutral manner. Another 20% to 25% is reabsorbed in the loop of Henle, along with Cl− and more water. In the distal tubules, interaction of the adrenal hormone aldosterone with the coupled Na+-K+ and Na+-H+ exchange systems results directly in the reabsorption of Na+, and indirectly of Cl−, from the remaining 5% to 10% of the filtered load. Regulation of Na+ reabsorption in the distal tubules is therefore the primary determinant of the amount of Na+ excreted in urine. These processes are discussed in detail in Chapter 49.
Serum, plasma, and urine may be stored at 4 °C or may be frozen. Erythrocytes contain only one-tenth of the Na + present in plasma, so mild hemolysis usually does not cause significant errors in serum, plasma, or whole blood Na + concentrations. Lipemic samples should be ultracentrifuged and the infranatant analyzed unless a direct ISE is used (see section on Electrolyte Exclusion Effect).
Fecal and GI fluid specimens require preparation before analysis. Because significant electrolyte loss in feces only occurs when stools are liquid, only liquid stool samples should be submitted for analysis. Immediately after collection, liquid stool specimens should be clarified of particulate matter by filtration through gauze or filter paper, or by centrifugation. If not analyzed immediately, fecal and GI fluids should be stored frozen to prevent microbial growth.
Sodium concentration may be determined by (1) an electrochemical Na+ ISE, (2) enzymatic methods, (3) flame emission spectrophotometry (FES), or (4) atomic absorption spectrophotometry (AAS). FES, now very rarely performed, was the original method of Na+ determination. ISE methods are by far the most commonly used today, and excellent trueness and low imprecision (coefficients of variation of less than 1.5%) are readily achieved with modern equipment. Because Na+ and K+ are routinely assayed together, methods for their analysis are described together later in this chapter.
A typical reference interval for serum Na + is 135 to 145 mmol/L. However, the central 95% of Na + values from more than 16,000 subjects in the National Health and Nutrition Examination Survey III (NHANES III) was 136 to 146 mmol/L. The reference interval for premature newborns at 48 hours is 128 to 148 mmol/L, and the mean value for umbilical cord blood from full-term newborns is approximately 127 mmol/L (see Appendix on Reference Intervals for additional newborn ranges). Laboratories should verify that these ranges are appropriate for use. Further guidance is provided in Chapter 9 .
Urinary Na + excretion varies with diet, but for an adult consuming a typical daily amount of NaCl, the central 95% of Na + excretion ranges from 57.9 to 307.5 mmol for men and 45.7 to 238.9 mmol for women. A large diurnal variation in Na + excretion has been noted, with the rate of Na + excretion during the night being only 20% of the peak rate during the day. The Na + concentration of cerebrospinal fluid is 142 to 152 mmol/L. Mean fecal Na + excretion is less than 10 mmol/day.
Potassium is the major intracellular cation. In tissue cells, its average concentration is approximately 150 mmol/L, and in erythrocytes, the concentration is approximately 105 mmol/L. High intracellular K+ concentrations are maintained by the Na+-K+-adenosine triphosphatase (Na+-K+-ATPase) pump, which continually transports K+ into the cell against a concentration gradient. Diffusion of K+ out of the cell into the ECF and plasma occurs whenever pump activity is decreased because of (1) depletion of metabolic substrates such as glucose, (2) competition for ATP between the pump and other energy-consuming activities of the cell, or (3) slowing of cellular metabolism (as occurs with refrigeration). The importance of these considerations for sample integrity in the analysis of K+ is discussed later.
The body requirement for K+ is satisfied by the typical diet of adult males and females, which in the United States contains a mean ± SD of 3.4 ± 1.1 g (87 ± 28 mmol) and 2.5 ± 0.8 g (64 ± 21 mmol) of K+, respectively. K+ absorbed from the GI tract is rapidly distributed, with a small amount taken up by cells and most excreted by the kidneys. K+ filtered through the glomeruli is almost completely reabsorbed in the proximal tubules and is then secreted in the distal tubules in exchange for Na+ under the influence of aldosterone. Aldosterone enhances K+ secretion and Na+ reabsorption in the distal tubules by a Na+-K+ exchange mechanism. The kidneys respond to K+ loading with an increase in K+ output, so that urine collected during or after a period of high K+ intake may have K+ concentrations as high as 100 mmol/L. In contrast, the tubular response to conserve K+ is slow in the initial stages of depletion. Unlike the prompt response to conserve Na+ in deficit states, it can take up to 1 week for the tubules to reduce K+ excretion to 5 to 10 mmol/day from the typical 50 to 100 mmol/day.
Factors that regulate distal tubular secretion of K+ include intake of Na+ and K+, circulating mineralocorticoid concentration, and acid-base status. Because renal conservation mechanisms are slow to respond, K+ depletion can be an early consequence of restricted K+ intake or loss of K+ by extrarenal routes (e.g., diarrhea). Total body stores of K+ rise in the setting of chronic renal failure. In renal failure, the decreased glomerular filtration rate and the consequent decrease in distal tubular flow result in less exchange of K+ for Na+ and therefore in K+ retention.
Comments made earlier on specimens for Na+ analysis are generally applicable to those for K+ analysis, with some caveats. K+ concentrations in plasma and whole blood are 0.1 to 0.7 mmol/L lower than those in serum, and most reference intervals for serum K+ are 0.2 to 0.4 mmol/L higher than those for plasma K+. The extent of this difference depends on the platelet count because the additional K+ in serum is primarily a result of platelet rupture during clotting. This variability in the amount of additional K+ in serum makes plasma the specimen of choice.
Specimens for determining K+ concentrations in serum or plasma must be collected by methods that minimize hemolysis, because release of K+ from a small proportion of blood cells can increase K+ values by 0.5 mmol/L. An increase in K+ of 0.6% has been estimated for every 10 mg/dL (0.1 g/L) of plasma hemoglobin (Hb) released by hemolysis. Thus slight hemolysis (Hb ≈50 mg/dL or 0.5 g/L) can be expected to raise K+ values by approximately 3%, marked hemolysis (Hb ≈200 mg/dL or 2 g/L) by 12%, and gross hemolysis (Hb >500 mg/dL or 5 g/L) by as much as 30%. Use of correction factors based on a hemolysis index has been suggested for estimating K+ in hemolyzed samples, but their use has been questioned. Regardless, if a laboratory chooses to report K+ concentrations for hemolyzed samples, it is imperative that the presence of hemolysis be noted, with a comment that results are falsely elevated, whether or not an estimate of the extent of the elevation is provided. It is also important to remember that when K+ concentrations are determined by ISE on whole blood specimens using a blood gas analyzer or point-of-care device, hemolysis is not readily detected visually. When hemolysis is suspected, as in the case of an unexpected increase in K+, a portion of the specimen should be centrifuged and visually inspected as a crude check for hemolysis.
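The rule of thumb quoted above (an increase in K+ of roughly 0.6% per 10 mg/dL of plasma Hb) can be expressed as a short calculation. Because correction factors of this kind are of questioned validity, the Python sketch below is intended only to illustrate the magnitude of the effect; the function name, default factor, and example values are assumptions for illustration, not a validated correction.

```python
def hemolysis_effect_on_k(measured_k, plasma_hb_mg_dl, pct_per_10mg_dl=0.6):
    """Estimate how much of a measured K+ (mmol/L) may be due to hemolysis.

    Assumes the measured K+ is inflated by pct_per_10mg_dl percent for every
    10 mg/dL of free plasma hemoglobin; returns (estimated_excess, estimated_in_vivo_k).
    """
    fractional_increase = (plasma_hb_mg_dl / 10.0) * (pct_per_10mg_dl / 100.0)
    in_vivo_k = measured_k / (1.0 + fractional_increase)
    return measured_k - in_vivo_k, in_vivo_k

# Gross hemolysis (plasma Hb ~500 mg/dL) with a measured K+ of 5.2 mmol/L
excess, in_vivo = hemolysis_effect_on_k(5.2, 500)
print(f"~{excess:.1f} mmol/L attributable to hemolysis; ~{in_vivo:.1f} mmol/L estimated in vivo")
```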
Clinically significant preanalytical errors can occur in K+ determinations if blood samples are not processed promptly. As mentioned earlier, maintenance of the intracellular–extracellular K+ gradient depends on the activity of the Na+-K+-ATPase pump. If a whole blood specimen is maintained at 4 °C rather than 25 °C before separation, glycolysis is inhibited and the energy-dependent Na+-K+-ATPase cannot maintain the Na+/K+ gradient, so plasma K+ increases as K+ leaks from erythrocytes and other cells. The increase in plasma K+ is on the order of 0.2 mmol/L after 1.5 hours at 25 °C and can reach a mean of 1.3 mmol/L after 5 hours at 4 °C. The increase in K+ associated with storage of whole blood at 4 °C also depends on hematocrit; more pronounced increases (mean of 2.2 mmol/L) are seen in polycythemic samples, such as those from neonates.
Pseudohyperkalemia may also be observed in the setting of extreme leukocytosis, particularly in patients with chronic lymphocytic leukemia (CLL). Samples from patients with CLL with a white blood cell (WBC) count >300 × 10⁹ cells/L can show pseudohyperkalemia due to WBC rupture. Rupture of WBCs produces large differences in K+ concentration between plasma samples and serum or whole blood. This difference is hypothesized to result from WBCs remaining on top of the barrier gel in plasma separator tubes after centrifugation; however, this hypothesis has yet to be tested.
Caution is warranted, however, because a high WBC count may also produce the opposite effect, pseudohypokalemia, in patients with acute myeloid leukemia (AML). The pseudohypokalemia in AML is observed when the sample is held at room temperature and there is a delay between sample collection and separation by centrifugation: in the unseparated sample, the glycolytic activity of the WBCs shifts K+ intracellularly through the action of the Na+-K+-ATPase pump. Yet even at room temperature, extreme leukocytosis can initially cause falsely increased K+ concentrations, the extent of which depends on leukocyte count, temperature, and glucose concentration, and which has been reported to be as much as 2.0 mmol/L.
In practical terms, for the vast majority of plasma and serum samples collected from patients without extreme leukocytosis, separation of plasma or serum from cells within 1 hour of collection, with samples maintained at room temperature, is unlikely to introduce significant error. If pseudohypokalemia or pseudohyperkalemia is suspected due to leukocytosis, rapid measurement of a whole blood sample by a direct electrode method, such as those found on blood gas analyzers, may be indicated.
Finally, skeletal muscle activity causes K+ efflux from muscle cells into plasma and can cause a marked elevation in plasma K+ values. A common example occurs when an upper arm tourniquet is not released before blood drawing begins after the patient has clenched his or her fist repeatedly. Plasma K+ values can artificially increase by as much as 2 mmol/L because of this muscle activity.
Reported reference intervals for serum K+ in adults vary from 3.5 to 5.1 mmol/L, and from 3.7 to 5.9 mmol/L for newborns. For plasma, a frequently cited interval is 3.3 to 4.9 mmol/L for adults (for more information refer to the Appendix on Reference Intervals). The central 95% of plasma K+ values from more than 16,000 subjects in the NHANES III was 3.4 to 4.7 mmol/L. Laboratories should verify that these intervals are appropriate for use in their own settings. Further guidance is provided in Chapter 9.
Cerebrospinal fluid concentrations are approximately 70% those of plasma. Urinary excretion of K + varies with dietary intake, but a typical observed interval (mean ± 2 SD) is 21 to 108 mmol/day for males and 15 to 87 mmol/day for females. In severe diarrhea, GI losses may be as high as 60 mmol/day.
K+ is susceptible to several causes of preanalytical error:
Improper phlebotomy techniques can cause falsely elevated K+ values.
Storing whole blood at 4 °C will cause falsely elevated K+ values.
Ex vivo hemolysis will cause falsely elevated K+ values.
K+ concentrations are higher in serum as compared to plasma.
Although AAS, FES, and spectrophotometric methods have been used for Na+ and K+ analysis in the past, most laboratories now use ISE methods. For example, in 2019, more than 97% of the laboratories reporting proficiency testing data for Na+ and K+ to the College of American Pathologists (CAP) used ISE methods, with the remaining laboratories reporting the use of spectrophotometric methods.
Analyzers fitted with ISEs usually contain Na+ electrodes with glass membranes and K+ electrodes with liquid ion exchange membranes that incorporate valinomycin. Potentiometry is the determination of the change in electromotive force (E; potential) in a circuit between a measurement electrode (the ISE) and a reference electrode that occurs when the selected ion interacts with the membrane of the ISE. In instrument applications, the measuring system is calibrated by the introduction of calibrator solutions containing defined amounts of Na+ and K+. The potentials of the calibrators are determined, and the ΔE/Δlog concentration responses are stored in the system memory for calculating the unknown concentration when E of the unknown is measured. The response of potentiometric electrodes to analytes is a complex process that depends on the composition and the thermodynamic and kinetic properties of the sensor membrane, the bathing solution, and the interface zones between the membrane and the analyte and between the membrane and the bathing solution. For simplicity, E is described as the sum of the boundary potential at the sample/ion-sensitive film interface (EPB1), the boundary potential at the membrane/internal contact interface (EPB2), and the diffusion potential (ED) inside the membrane itself, plus a constant, C, that accounts for the potential at the internal sensor and/or contact interface: E = EPB1 + EPB2 + ED + C.
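The ΔE/Δlog concentration calibration described above can be made concrete with a minimal two-point calibration in Python. The calibrator concentrations, potentials, and function names below are hypothetical; commercial analyzers use vendor-specific multipoint calibration, temperature compensation, and selectivity corrections.

```python
import math

def calibrate_ise(c1, e1, c2, e2):
    """Fit E = intercept + slope * log10(C) from two calibrators (C in mmol/L, E in mV)."""
    slope = (e2 - e1) / (math.log10(c2) - math.log10(c1))   # mV per decade
    intercept = e1 - slope * math.log10(c1)
    return slope, intercept

def concentration_from_potential(e_sample, slope, intercept):
    """Invert the calibration to estimate the ion concentration for a measured potential."""
    return 10 ** ((e_sample - intercept) / slope)

# Hypothetical Na+ calibrators bracketing the physiologic range
slope, intercept = calibrate_ise(c1=120.0, e1=10.0, c2=160.0, e2=17.4)
print(round(slope, 1))                                                  # ~59.2 mV/decade (near-Nernstian)
print(round(concentration_from_potential(14.0, slope, intercept), 1))   # ~140 mmol/L
```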
Frequent calibration, initiated by the user or by software-controlled uptake of the calibrator, is typical of most current ISE systems. Some instruments, particularly point-of-care testing (POCT) devices and blood gas analyzers, are designed to measure Na + and K + in whole blood.
Two types of ISE methods are in use and must be distinguished (refer also to Chapter 17 for detail). With indirect ISE methods , the sample is introduced into the measurement chamber after mixing with a large volume of diluent. Indirect ISEs are the most commonly used methods on current automated high-throughput clinical chemistry systems. Indirect methods were developed early in the history of ISE technology, when dilution was necessary to present a small sample in a volume large enough to adequately cover a large electrode and to minimize the concentration of protein at the electrode surface. With direct ISE methods , the sample is presented to the electrodes without dilution. This approach became possible with the miniaturization of electrodes. Direct ISEs are used in blood gas analyzers and point-of-care devices where whole blood is directly presented to the electrodes.
Errors observed in the use of ISEs fall into three categories. First are errors caused by lack of selectivity. For instance, many Cl − electrodes lack selectivity against other halide ions. Second are errors introduced by repeated protein coating of the ion-sensitive membranes, or by contamination of the membrane or salt bridge by ions that compete or react with the selected ion and thus alter the electrode response. Such errors in ISE measurements necessitate periodic changes of the membrane as part of routine maintenance. Finally, the electrolyte exclusion effect, which applies only to indirect methods and is caused by the solvent-displacing effect of lipid and protein in the sample, results in falsely decreased values. Chapter 17 explains the differences and pitfalls of indirect versus direct ISEs in more detail.
Spectrophotometric methods are based on Na+- or K+-specific enzyme activation. However, the cost of reagents for these methods and the fact that few problems exist with ISE methods have restricted them to “niche” use, primarily in the smaller instruments used in physicians’ offices and, more recently, in “isolation” laboratories for patients with emerging infectious diseases. Kinetic spectrophotometric assays for Na+ are based on activation of the enzyme β-galactosidase by Na+ to hydrolyze o-nitrophenyl-β-d-galactopyranoside. The rate of production of o-nitrophenol (the chromophore) is measured at 420 nm.
K+-specific enzyme activation assays are illustrated by methods using pyruvate kinase, one of several K+-enhanced enzymes; the coupled reaction ultimately increases the oxidized form of nicotinamide adenine dinucleotide via lactate dehydrogenase, with a consequent decrease in absorbance at 340 nm.
Although at one time it was the most common method for Na+ and K+ analysis, FES is no longer a common laboratory method. Samples are diluted in a diluent containing a known amount of Li+ and are aspirated into a propane–air flame. Na+, K+, and Li+ ions, when excited, emit light at characteristic wavelengths. The Li+ emission signal is used as an internal standard against which the Na+ and K+ signals are compared. Limitations of this method are detailed in Chapter 16.
The electrolyte exclusion effect describes the exclusion of electrolytes from the fraction of the total plasma volume that is occupied by solids. The volume of total solids (primarily protein and lipid) in an aliquot of plasma is approximately 7%, so that approximately 93% of plasma volume is actually water. The main electrolytes (Na + , K + , Cl − , HCO 3 − ) are confined to the water phase. When a fixed volume of total plasma (e.g., 10 μL) is pipetted for dilution before flame photometry or indirect ISE analysis, only 9.3 μL of plasma water that contains the electrolytes is added to the diluent. Thus a concentration of Na + determined by flame photometry or indirect ISE to be 140 mmol/L is the concentration in the total plasma volume, not in the plasma water volume. If the plasma contains 93% water, the concentration of Na + in plasma water is [140 × (100/93)], or 150.5 mmol/L. This negative “error” in plasma electrolyte analysis has been recognized for many years. Although it is the electrolyte concentration in plasma water that is physiologic (the Na + concentration of normal saline is 154 mmol/L), it was assumed that the volume fraction of water in plasma was sufficiently constant that this difference could be ignored. All electrolyte reference intervals are based on this assumption and actually reflect concentrations in total plasma volume and not in water volume. Virtually all concentrations measured in the clinical chemistry laboratory are related to the total sample volume rather than to the water volume. This electrolyte exclusion effect becomes problematic when pathophysiologic conditions are present that alter the plasma water volume, such as hyperlipidemia or hyperproteinemia. In these settings, falsely low electrolyte values are obtained whenever samples are diluted before analysis, as in flame photometry or with indirect ISE methods ( Fig. 37.1 ).
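The arithmetic in the preceding paragraph generalizes readily. The Python sketch below (hypothetical function names, illustrative values) contrasts the plasma water concentration with what a diluting (indirect) method reports when lipemia or hyperproteinemia reduces the true water fraction below the assumed 0.93.

```python
ASSUMED_WATER_FRACTION = 0.93  # average plasma water fraction assumed by reporting conventions

def plasma_water_concentration(total_plasma_conc, water_fraction=ASSUMED_WATER_FRACTION):
    """Convert a concentration per liter of total plasma to per liter of plasma water."""
    return total_plasma_conc / water_fraction

def indirect_ise_result(water_phase_conc, actual_water_fraction):
    """Concentration a diluting (indirect) method reports: it reflects the total plasma volume."""
    return water_phase_conc * actual_water_fraction

# Normal plasma: 140 mmol/L per total volume is ~150.5 mmol/L in plasma water
print(round(plasma_water_concentration(140), 1))          # 150.5

# Severe lipemia: water fraction falls to 0.80 while the water-phase Na+ stays normal
print(round(indirect_ise_result(150.5, 0.80), 1))          # ~120.4 -> pseudohyponatremia
# A direct ISE in "flame mode" still reports ~140 (150.5 * 0.93), unaffected by the lipemia
print(round(150.5 * ASSUMED_WATER_FRACTION, 1))            # ~140.0
```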
Indirect ISE methods require the dilution of a sample in a diluent of fixed, low ionic strength so that, for Na+, the activity coefficient approaches a value of 1. Under these circumstances, the measurement of activity (a), where a = γ(concentration) and γ is the activity coefficient, is tantamount to the measurement of concentration. It is the dilution of total plasma volume and the assumption that plasma water volume is constant that render both indirect ISE and flame photometry methods equally subject to the electrolyte exclusion effect. In certain settings, such as severe hyperlipidemia from ketoacidosis or obstructive biliary cholestasis, or severe hyperproteinemia in multiple myeloma, the negative exclusion effect may be so large that laboratory results lead clinicians to believe that electrolyte concentrations are normal or low when the concentration in the water phase may actually be high or normal, respectively. In severe hypoproteinemia, the effect works in reverse, resulting in falsely high (e.g., 2% to 4%) Na+ or K+ values. Direct ISE methods still determine concentration by way of activity but do not require sample dilution. Because there is no dilution, activity is directly proportional to the concentration in the water phase, not the concentration in the total volume. To make results from direct ISEs equivalent to those from flame photometry and indirect ISEs, most direct ISE methods operate in what is commonly referred to as the “flame mode.” In this mode, the directly measured concentration in plasma water is multiplied by the average water volume fraction of plasma (0.93). Although the water fraction may vary widely, as long as the activity of the specific ion is constant, the concentration of the ion in the water phase is independent of the relative proportions of water and total solids, provided the ion is not bound by proteins. Therefore direct ISE methods are free of electrolyte exclusion effects, and the values determined by direct ISE methods, even in the flame mode, are directly proportional to activity in the water phase and define electrolyte concentrations in a more physiologic and physicochemical sense. For this reason, most clinical laboratorians have concluded that direct ISE methods are the methods of choice for electrolyte analysis. It is clear, however, that results from direct methods will continue to be converted to total plasma volume concentrations using the flame mode, which is fortunate for comparability because more than 80% of laboratories use indirect ISE methods.
Chloride is the major extracellular anion. Therefore, like Na+, it is significantly involved in the maintenance of water distribution, osmotic pressure, and anion–cation balance in the ECF. In contrast to its high ECF concentration (103 mmol/L), the concentration of Cl− in the intracellular fluid of erythrocytes is 45 to 54 mmol/L, and in the intracellular fluid of most other tissue cells it is only approximately 1 mmol/L. In gastric and intestinal secretions, Cl− is the most abundant anion.
Cl − ions are almost completely absorbed from the GI tract. They are filtered from plasma at the glomeruli and are passively reabsorbed, along with Na + , in the proximal tubules. In the thick ascending limb of the loop of Henle, Cl − is actively reabsorbed by the Na + -K + -2Cl − (NKCC) pump, which promotes passive reabsorption of Na + . Loop diuretics such as furosemide inhibit the reabsorption of Cl − via the NKCC pump. Cl − concentrations are not homeostatically controlled and passively reflect the concentration of the major ions, Na + and HCO 3 − . Additionally, they fall when pathologic concentrations of other anions (e.g., ketoacids and lactate) are present.
Chloride is determined largely by ISE, with some laboratories performing coulometric-amperometric titration (see also Chapter 17 ) for analyses requiring a broad range, such as sweat Cl − testing.
Chloride is most often measured in serum, plasma, urine, and sweat. Cl − is stable in serum and plasma. Hemolysis does not significantly alter serum or plasma Cl − concentration because the erythrocyte concentration of Cl − is approximately half of that in plasma. Because very little Cl − is protein bound, change in posture, stasis, and the use of tourniquets have little effect on its plasma concentration. Measurement of Cl − in gastric aspirates or intestinal drainages is an adjunct to parenteral replacement therapy. Fecal Cl − determination may be useful for the diagnosis of congenital hypochloremic alkalosis with hyperchloridorrhea (increased excretion of Cl − in stool).
Solvent polymeric membranes that incorporate quaternary ammonium salt anion exchangers are used to construct Cl − -selective electrodes in clinical analyzers. Although they are by far the most common methods for measuring Cl − in clinical laboratories, these electrodes have been described as having membrane instability and lot-to-lot inconsistency in terms of selectivity to other anions. Anions that tend to be problematic include other halides and organic anions, such as SCN − , which can be particularly problematic because of their ability to solubilize in the polymeric organic membrane of these electrodes.
More than 97% of the 5807 laboratories reporting results in a 2019 CAP proficiency testing survey for Cl− used ISE methods, with the remainder reporting the use of spectrophotometric methods.
The general principles of coulometry and amperometry are described in Chapter 17 . Reactions in the coulometric-amperometric determination of Cl − depend on the generation of silver ions (Ag + ) from a silver electrode at a constant rate and on the reaction of Ag + with Cl − in the sample to form insoluble AgCl.
After the stoichiometric point is reached, excess Ag+ in the mixture triggers the shutdown of the Ag+ generation system. A timing device records the elapsed time between the start and stop of Ag+ generation. Because the time interval is proportional to the amount of Cl− in the sample, the concentration of Cl− can be calculated using Faraday’s law: Q = i × t = n × F × N
where Q is the charge passed for time t at the constant current i ; n is the number of electrons involved in the electrochemical reaction; F is Faraday’s constant (96,485 coulombs/mol); and N is the number of moles of analyte in the sample.
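Using the relationship Q = i × t = n × F × N given above, the elapsed titration time can be converted directly into a Cl− concentration. The values and function name in the Python sketch below are hypothetical and chosen only to show the arithmetic.

```python
FARADAY = 96485.0  # coulombs per mole of electrons

def chloride_from_titration(current_a, elapsed_s, sample_volume_l, n_electrons=1):
    """Return the Cl- concentration (mmol/L) from a constant-current Ag+ titration.

    Q = i * t coulombs are passed; N = Q / (n * F) moles of Ag+ are generated,
    and each Ag+ consumes one Cl- (Ag+ + Cl- -> AgCl, so n = 1).
    """
    charge = current_a * elapsed_s
    moles_cl = charge / (n_electrons * FARADAY)
    return (moles_cl / sample_volume_l) * 1000.0

# Hypothetical run: 1.0 mA generating current, 100 s to the endpoint, 10 uL of sample
print(round(chloride_from_titration(current_a=1.0e-3, elapsed_s=100.0, sample_volume_l=10e-6), 1))  # ~103.6 mmol/L
```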
Applications of the coulometric-amperometric principle (often called the Cotlove chloridometer technique) are the most precise methods for measuring Cl− over the entire range of concentrations found in body fluids. This method is subject to interference from other halide ions, cyanide (CN−) and thiocyanate (SCN−) ions, sulfhydryl groups, and heavy metal contamination. Maintenance of the systems is crucial for proper operation. Today, coulometry is not commonly used for analysis of Cl− in plasma or urine, but it is maintained by some laboratories specifically for sweat Cl− analysis (see later section in this chapter).