Therapeutic drug monitoring (TDM) is the traditional term used for the activity of measuring drug concentrations to tailor the dose of the medication to an individual. The use of monitored drug therapy is generally reserved for drugs with a narrow therapeutic index (TI), with variable pharmacokinetic behavior, and for which the efficacy or toxicity is difficult to measure or detect early during therapy.
In this chapter, we review the rationale for TDM, the fundamental principles of pharmacokinetics that are required to effectively utilize drug concentration data, and analytical factors affecting concentration measurement. We also provide a broad overview of a selected group of commonly monitored drugs that includes some of the challenges and pitfalls to their monitoring.
The ability of medicines to both heal and hurt has been recognized since ancient times. This duality was immortalized in the writing of the Renaissance Swiss-German physician Paracelsus over 500 years ago:
All things are poison and nothing is without poison, only the dose makes that a thing is not a poison.
The challenge in medicine is therefore to determine the optimal dose of medicine that will help a patient with limited associated harm.
Studies in the 1990s identified that adverse events associated with drug therapy rank within the top 10 causes of death in the United States. While many of the severe or fatal events may not be preventable, more than a third appear to be preventable. In addition to the tragic life and death consequences of adverse drug events (ADEs), failed drug therapy also has a significant economic impact. The estimated cost of ADEs ranges from $17 billion to $29 billion in the United States alone. Just examining hospital-associated ADEs, the average cost of treating each preventable event is estimated to be greater than $3000, in addition to the increased length of stay. The causes of preventable drug-related adverse events vary; however, inadequate monitoring of therapy represents a major source of preventable ADEs, contributing to as much as 40% of these events. Improving monitoring strategies is therefore likely to have an important impact on both health outcomes and their associated costs.
Drug therapy may be monitored in many ways. Clinical signs and symptoms of toxicity are often an effective way to detect toxicity or treatment failure. β-Blockers represent a typical example. Blood pressure and heart rate monitoring can be used to assess efficacy and toxicity of these drugs. Both are also easily measurable in the clinical setting and even at home. As a result, there is little need to perform monitoring beyond these straightforward clinical assessments.
The efficacy and toxicity of some drugs, however, can be much more difficult to monitor on the basis of clinical signs and symptoms alone. Insulin treatment in diabetes represents a case in point. The consequences of inappropriate insulin treatment are insidious, potentially life threatening, and very difficult to detect. Excessive insulin can lead to an acute decrease in blood glucose culminating in coma, permanent brain injury, and death. Inadequate insulin dosing, although not as acutely life threatening, leads over time to vascular disease, end-stage renal failure, blindness, and neuropathy that results in significant morbidity and mortality. Laboratory testing of blood glucose, a direct biomarker of insulin’s mechanism of action, provides an ideal means to monitor insulin therapy and prevent these complications. It is so important in diabetes therapy that it has driven the commercial development of simple, point-of-care devices that patients can use to routinely monitor their therapy at home. Biomarkers of drug efficacy or toxicity such as blood glucose, while highly desirable, are not always available. In the absence of a useful biomarker of drug effect, measuring the drug itself provides a potential surrogate.
Therapeutic drug monitoring (TDM) is the traditional term used for the activity of measuring drug concentrations to tailor the dose of the medication to an individual. There is an implicit assumption in TDM of a relationship between drug concentrations and efficacy or toxicity outcomes. The use of TDM and applied pharmacokinetics to guide drug therapy began in the 1960s, coincident with the development of robust analytical techniques. Since this time, TDM has become the standard of care for monitoring therapy with many drugs, including antiepileptic drugs (AEDs), immunosuppressive drugs (ISDs), and antibiotics. TDM is a complex process that involves several members of the health care team, including pharmacists, laboratory professionals, and physicians. To justify the costs associated with TDM, it must improve clinical outcome and reduce the overall cost of drug therapy. Prospective, randomized, concentration-controlled trials (RCCTs) of TDM are limited; however, some of these RCCTs show that concentration control can improve efficacy and reduce toxicity of drug therapy. Evidence for the cost-effectiveness of TDM is lacking for most drugs, although TDM has been shown to reduce the costs of aminoglycoside therapy. TDM may also aid in the detection of non-adherence to drug therapy, which represents a frequent and important cause of preventable ADEs. This benefit may even extend to drugs not routinely monitored by traditional blood concentration monitoring.
This chapter will focus on the general principles of pharmacology and their application to TDM with a discussion of the pharmacology and TDM of some commonly monitored drugs. It is difficult to provide a comprehensive review of TDM in a single chapter. Readers are therefore referred to textbooks dedicated to the subject of TDM and applied pharmacology for more in-depth information.
Pharmacology comprises the body of knowledge surrounding chemical agents and their effects on living processes. Although this is a broad field, it has traditionally been confined to drugs that are useful in the prevention, diagnosis, and treatment of disease. Pharmacotherapeutics is the part of pharmacology concerned primarily with the application or administration of drugs to patients for the purpose of prevention and treatment of disease. For this aspect of medical practice to be effective, the pharmacodynamic and pharmacokinetic (PK) properties of drugs should be understood. Toxicology is the subdiscipline of pharmacology concerned with adverse effects of chemicals on living systems. Toxic effects and mechanisms of action may be different from therapeutic effects and mechanisms for the same drug. Similarly, at the high doses at which toxic effects may be produced, rate processes are frequently altered compared with those at therapeutic doses. For these reasons, the terms toxicodynamics and toxicokinetics are now applied to these special situations.
Pharmacodynamics encompasses the processes of interaction of pharmacologically active substances with target sites, and the biochemical and physiologic consequences leading to therapeutic or adverse effects. For many drugs, the ultimate effect or mechanism of action at the molecular level is poorly understood, if understood at all. A pharmacologic effect (i.e., the therapeutic or toxic response to a drug) may be elicited by direct interaction of the drug with the receptor controlling a specific function or by a drug-mediated alteration of the physiologic process regulating the function. For most drugs, the intensity and duration of the observed pharmacologic effect are proportional to the concentration of the drug at the receptor. In a given tissue, the site at which a drug acts to initiate events leading to a specific biological effect is called the site of action of the drug.
The mechanism of action of a drug is the biochemical or physical process that occurs at the site of action. Drug action is usually mediated through a receptor. Cellular enzymes, as well as structural or transport proteins, are important examples of drug receptors. Nonprotein macromolecules also may bind drugs, resulting in altered cellular functions controlled by membrane permeability or DNA transcription. Some drugs are chemically similar to important natural endogenous substances and may compete for binding sites. In addition, some drugs may block formation, release, uptake, or transport of essential substances. Others may produce an effect by interacting with relatively small molecules to form complexes that actively bind to receptors. These and other examples of receptor binding are more completely discussed in pharmacology texts.
Although the exact molecular interactions that give rise to the mechanism of action for many drugs remain obscure, numerous theoretical models have been developed to explain drug action. One concept postulates that a drug binds to intracellular macromolecular receptors through ionic and hydrogen bonds and van der Waals forces. This theoretical model further postulates that if the drug-receptor complex is sufficiently stable and able to modify the target system, an observable pharmacologic response will occur. As Fig. 42.1 illustrates, the response is concentration dependent until a maximal effect is reached. The plateau may be due to saturation at the receptor or overload of a transport process.
The utility of monitoring drug concentration is based on the premise that pharmacologic response correlates with the concentration of the drug at the site of action (receptor). Although attempts have been made to measure the concentration of drugs at the receptor site in a patient, in general, this approach is technically impractical, if not impossible, for most drugs. Studies have shown that for many drugs, a strong correlation exists between the serum drug concentration and the observed pharmacologic effect. In addition, years of relating blood concentrations to drug effects have demonstrated the clinical utility of drug concentration information. One must nevertheless always keep in mind that a serum drug concentration does not necessarily equal the concentration at the receptor but rather merely reflects it.
Pharmacokinetics describes the processes of uptake of drugs by the body, the distribution of the drugs into tissue, the biotransformations (i.e., metabolism) they undergo, and the elimination of the drugs and their metabolites from the body. Applied pharmacokinetics is the discipline that uses the principles of pharmacokinetics to enhance safety and effectiveness of a drug in an individual patient. It is this aspect of pharmacology that most strongly influences the interpretation of TDM results and that is dealt with in more detail in this chapter. Fig. 42.2 illustrates the conceptual relationship between pharmacodynamics and pharmacokinetics and the many factors affecting drug concentration and pharmacologic response.
A large number of factors are now recognized to have a profound influence on the pharmacokinetics of drugs and consequently on a patient’s pharmacologic response ( Box 42.1 ). For example, the consideration of the patient’s history, with particular emphasis on his or her pathophysiologic state and adjunct drug therapy, is essential at the initiation of drug therapy and TDM because these important factors may affect absorption, distribution, metabolism, and excretion of a drug.
Age
Weight
Gender
Race
Genetics (e.g., metabolic enzyme polymorphisms)
Liver disease (cirrhosis, hepatitis, cholestasis)
Kidney disease
Thyroid disease (hypothyroidism or hyperthyroidism)
Cardiovascular disease (arrhythmias, congestive heart failure)
Gastrointestinal disease (e.g., sprue or other malabsorption, peptic ulcer disease)
Cancer
Surgery
Burns
Volume status (e.g., dehydration)
Nutritional status (cachectic or anorexic)
Pregnancy or other factors affecting plasma proteins or body composition
Hemodialysis
Peritoneal dialysis
Cardiopulmonary bypass
Hypothermia or hyperthermia
Food or coadministered drug affecting extent or rate of absorption
Immediate- or extended-release formulation
Coadministered drugs affecting protein binding to plasma proteins or tissue
Food, herbs, or drugs competing for metabolism
Coadministration of drugs that induce metabolic enzymes (e.g., phenobarbital)
Coadministration of drugs that inhibit metabolic enzymes (e.g., cimetidine)
Coadministration of drug competing for renal tubular secretory pathways (e.g., probenecid, penicillin)
Coadministration of drugs enhancing renal tubular reabsorption
Most drugs administered chronically to patients are administered extravascularly. Although intramuscular and subcutaneous routes are used, the oral route accounts for most of the extravascular doses administered. The absorption process depends on the drug’s dissociating from its dosing form, dissolving in gastrointestinal fluids, and then diffusing across biological membrane barriers into the bloodstream. The rate and extent of drug absorption may vary considerably depending on the nature of the drug itself (e.g., solubility, pKa), on the matrix in which it is present, and on the physiologic environment (e.g., pH, gastrointestinal motility, vascularity).
The fraction of a drug that is absorbed into the systemic circulation is referred to as its bioavailability. The bioavailability (f) of a given drug is usually calculated by comparing, in the same subjects, the area under the plasma concentration–time curve (AUC) of an equivalent dose of the intravenous form and the oral form:

$$f = \frac{AUC_{oral}}{AUC_{IV}}$$
The bioavailability of a particular drug, if the drug is to be useful, must generally be high enough that the active component passes in sufficient amount and within a desirable time from the gut into the systemic circulation. Bioavailability of greater than 70% is most desirable for drugs to be orally useful. An exception would be a case in which the lumen of the gastrointestinal tract is the site of drug action (e.g., antibiotics used to sterilize the gut such as oral vancomycin). Low bioavailability would then be considered advantageous.
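As a simple illustration (not from the source text), the following sketch computes f from dose-normalized AUCs, which reduces to the formula above when the two doses are equal. The function name and numbers are hypothetical.

```python
def bioavailability(auc_oral, dose_oral, auc_iv, dose_iv):
    """Estimate absolute bioavailability f from dose-normalized AUCs.

    With equal oral and IV doses this reduces to AUC_oral / AUC_IV.
    Units must match between the two routes.
    """
    return (auc_oral / dose_oral) / (auc_iv / dose_iv)

# Hypothetical study: 400 mg orally gives AUC = 48 mg*h/L;
# 200 mg IV gives AUC = 40 mg*h/L.
f = bioavailability(auc_oral=48, dose_oral=400, auc_iv=40, dose_iv=200)
print(f"f = {f:.2f}")  # 0.60: 60% of the oral dose reaches the circulation
```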
Some drugs that are rapidly and completely absorbed nevertheless have low bioavailability to the systemic circulation. This is true of drugs with a high hepatic extraction rate. After oral administration, drugs that are absorbed in the lumen of the small intestine are carried by the portal vein directly to the liver. The liver may extensively metabolize a drug with a high hepatic extraction rate before it reaches the systemic circulation, leading to low oral bioavailability. This phenomenon is the first-pass effect.
In addition to the extent of absorption, the rate of absorption is also important. The absorption of a drug is generally considered a first-order process, and the absorption rate constant of a drug is usually much greater than its elimination rate constant. Efforts are now being made in the pharmaceutical industry to decrease the apparent rate of absorption of many drugs by manipulating their formulations (e.g., theophylline, tacrolimus) to produce slow-release or sustained-release products. Formulations that provide sustained release permit drugs taken orally to be taken at less frequent intervals. Conditions that may influence the extent or rate of drug absorption include abnormal gastrointestinal motility, diseases of the stomach and of the small and large intestine, gastrointestinal infections, radiation, food, and interaction with other substances in the gastrointestinal tract. One should be particularly aware of coadministered drugs that directly affect gut absorption, such as antacids, kaolin, sucralfate, cholestyramine, and antiulcer medications.
After a drug enters the vascular compartment, it interacts with various blood constituents and is carried by various transport processes to different body organs and tissues. The overall process is referred to as distribution. The factors determining the distribution pattern of a drug are binding of the drug to circulating blood components, binding to fixed receptors, passage of the drug through membrane barriers, and the ability to dissolve in structural or storage lipids. Molecular weight, pKa, lipid solubility, and other physical and chemical properties of the drug are important determinants of distribution.
Once a drug enters the systemic circulation, it distributes and comes to equilibrium with many of the blood components, such as plasma proteins. An equilibrium exists between free and protein-bound drug. It is generally believed that only the free fraction of the drug is available for distribution and elimination. In addition, only the free drug is available to cross cellular membranes or to interact with the drug receptor to elicit a biological response. Therefore changes in the protein-binding characteristics of a drug can have a profound influence on the distribution and elimination of a drug, as well as on the manner in which total plasma or serum steady-state concentrations are interpreted. Each drug has its own characteristic protein-binding pattern that depends on its physical and chemical properties. As a general rule, however, acidic drugs are bound primarily to albumin and basic drugs primarily to globulins, particularly α1-acid glycoprotein (AAG). Some drugs bind to both albumin and globulins.
Depending on its affinity for plasma proteins, a drug may be either tightly or loosely bound. A weakly bound drug can be displaced from its protein sites by a drug with a greater affinity for the plasma protein–binding sites. For example, phenytoin and valproic acid, drugs that are frequently coadministered for epilepsy, compete with each other as they bind to albumin. Because valproate is present at higher concentration, its mass causes a significant shift of phenytoin from bound to free form. Protein binding of a drug also depends on the physical characteristics of the plasma proteins and on the presence or absence of fatty acids or other drugs in the blood. Fatty acids can displace a drug from its protein-binding sites; tightly bound drugs are not displaced, but a weakly bound drug can be displaced quite rapidly by free fatty acids present in increased concentrations. It is important to recognize that even though the total drug concentration may remain unchanged, displacement of a drug from its plasma protein-binding sites increases free drug concentrations and can result in clinical toxicity. Remember that the free fraction is the form that crosses biological membranes and is available to bind to the receptor, so increasing the free fraction can produce significant toxicity.
Anything that alters the concentration of free drug in the plasma ultimately alters the amount of drug available to enter the tissues and interact with specific receptor systems. Disease states can alter free drug concentrations. For example, in uremia, the composition of plasma is altered by an increase in nonprotein nitrogen compounds, by acid-base and electrolyte imbalances, and often by a decrease in albumin; free drug concentrations are frequently increased. Patients may experience adverse effects that are a direct consequence of the increased free drug concentrations, especially if only total plasma drug concentration is monitored in these patients. For example, phenytoin is 90% bound and 10% free in healthy subjects. In uremic patients, 20 to 30% of the total plasma concentration of phenytoin may be free. In a healthy patient who has a total plasma phenytoin concentration of 15 μg/mL, the free phenytoin concentration is likely to be 1.5 μg/mL. If a uremic patient has a total concentration of 15 μg/mL, the free drug concentration may be 4.5 μg/mL. A free phenytoin concentration of 4.5 μg/mL is sufficient to precipitate severe phenytoin side effects, including lethargy and increased seizure frequency. In uremic patients, it is advisable to quantitate free phenytoin concentrations and adjust the drug dose to maintain free phenytoin concentration at approximately 2.0 μg/mL.
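A minimal sketch of the phenytoin arithmetic above; the free fractions (10% in health, up to 30% in uremia) are the illustrative values from the text, not patient-specific measurements.

```python
def free_concentration(total_conc, free_fraction):
    """Free drug concentration = total concentration x free fraction."""
    return total_conc * free_fraction

total = 15.0  # ug/mL total phenytoin, as in the example above
print(free_concentration(total, 0.10))  # healthy: ~1.5 ug/mL free
print(free_concentration(total, 0.30))  # uremic: up to ~4.5 ug/mL free,
# a potentially toxic free level despite an unchanged total concentration
```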
Alteration of protein concentration in response to acute stress can alter free drug concentration. For example, after myocardial infarction, there is a rapid rise in AAG concentration. Lidocaine is a commonly employed drug for control of arrhythmias secondary to acute myocardial infarction, but lidocaine is a basic drug that is highly bound to AAG. Doses of lidocaine adequate to control arrhythmia immediately after infarction are likely to become ineffective 48 to 72 hours later because the higher concentration of AAG that occurs after infarction diminishes the amount of free drug available to tissue. The arrhythmia reappears and because the total lidocaine plasma concentration necessary to control the arrhythmia seems to be in the toxic range, the lidocaine dose is decreased when in reality it should be increased to maintain the optimal free concentration.
Some drugs exhibit saturation of the available plasma protein–binding sites at total drug concentrations within the therapeutic range. For example, disopyramide binding is concentration dependent and varies widely among patients. Consequently, its total concentration and the observed clinical responses vary markedly among patients. Valproic acid is also a drug that shows saturation at concentrations greater than 100 μg/mL. Thus an increase of total plasma valproate concentration from 100 to 125 μg/mL represents a significant increase in the free valproate concentration.
Any change in normal physiologic status can alter free drug concentrations and thus change the distribution of drugs between plasma and tissue. Geriatric patients often exhibit hypoalbuminemia with a marked decrease in protein-binding sites for drugs. In the elderly, the classic signs of drug intoxication usually are not apparent; instead, the clinical symptoms of drug intoxication are manifested as impaired cognitive function—particularly confusion, which is a common symptom in patients with dementia. Reduction of drug dose to decrease the free drug concentrations may result in dramatic improvements in cognitive function and behavior in these patients.
Estimation of the free drug concentration will continue to be of interest in TDM. Equilibrium dialysis represents the gold-standard method for measuring the free, unbound concentration of a drug. However, this method typically requires 16 to 18 hours of incubation to achieve equilibrium, which severely limits the turnaround time for testing. Ultrafiltration techniques are useful alternatives that usually can be accomplished in a fraction of the time. In ultrafiltration, a sample of serum or plasma is forced through a filter membrane with a low molecular weight cutoff value, typically by centrifugation, to yield a protein-free sample. Provided this process is done rapidly and under appropriate temperature control, ultrafiltration can provide a useful estimate of the free drug concentration in circulating blood. Sample drawing, processing, and storage can modify dissociation equilibria for some drugs, affecting both equilibrium dialysis and ultrafiltration measurements. Measurement of drugs in oral fluid (i.e., saliva) has been advocated as an alternative to plasma or serum testing because of the ease of collection and the correlation with free drug concentration for some drugs. However, few drugs show a strong correlation between salivary concentration and free drug concentration in plasma, and collection of saliva from acutely ill patients is often more difficult than blood collection. Despite its own limitations, free drug estimation by ultrafiltration is therefore superior to estimation based on measurements in saliva.
The rate of the enzymatic process that metabolizes a drug is usually characterized by the Michaelis-Menten equation (see also Chapter 25):

$$-\frac{dC}{dt} = \frac{V_{max}\,C}{K_m + C} \tag{42.2}$$

where V_max is the maximum velocity of the reaction; K_m, the Michaelis-Menten constant, is the drug concentration at which the rate of metabolism is half of the maximum; and C is the drug concentration in blood.
Drugs are usually administered to achieve concentrations in the blood well below the K_m of a particular drug. Therefore, if K_m is much greater than C, Eq. (42.2) can be simplified to

$$-\frac{dC}{dt} = \frac{V_{max}}{K_m}\,C$$
and V_max/K_m can be written as the constant K, such that

$$-\frac{dC}{dt} = K\,C$$
where K is a simple first-order rate constant for the metabolic elimination. In other words, the rate of drug elimination from blood is proportional to the concentration of drug. First-order kinetics are characteristic of the metabolism of most drugs.
In the event that concentrations significantly exceed the K_m for a particular drug, the rate of elimination of the drug becomes independent of concentration, describing a zero-order process in which Eq. (42.2) can be approximated by

$$-\frac{dC}{dt} = V_{max}$$
Several drugs, notably phenytoin, salicylates, ethanol, and theophylline, cannot be characterized by simple first-order kinetics. Instead, the rate of metabolism of these compounds is said to be capacity-limited or nonlinear, meaning that clearance or the apparent half-life changes as concentration changes. Fig. 42.3 shows how the kinetics of elimination is linear (first order) until the capacity of the clearance pathways is reached, which occurs at concentrations that approach the K_m of the enzymatic pathways mediating metabolism. At this point, the relationship between dose and steady-state concentration becomes nonlinear. It should be evident, therefore, that important clinical considerations arise when a patient is treated with a drug that displays nonlinear kinetics. First, changes in dosing result in disproportionate changes in steady-state drug concentrations, so titration to appropriate serum concentrations must be approached conservatively. Second, because both clearance and the apparent half-life of the drug change with increasing drug concentration, the length of time required to reach a new steady-state concentration is prolonged.
All of the equations previously described for predicting dose or concentration assume linear kinetic systems; they are therefore not applicable to treatment with drugs that display nonlinear kinetics. Methods for predicting phenytoin dose and concentration using a linearized Michaelis-Menten equation have been developed and applied to individualize drug dosing regimens.
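The clinical warning above can be made concrete with the steady-state form of the Michaelis-Menten relationship: at steady state, the dosing rate equals the elimination rate, so dose_rate = V_max·C_ss/(K_m + C_ss), which solves to C_ss = K_m·dose_rate/(V_max − dose_rate). The sketch below is illustrative only, not a dosing tool; the phenytoin-like V_max and K_m values are assumed population estimates.

```python
def css_michaelis_menten(dose_rate, vmax, km):
    """Steady-state concentration for a capacity-limited drug.

    Solves dose_rate = vmax * Css / (km + Css) for Css.
    Valid only while dose_rate < vmax.
    """
    if dose_rate >= vmax:
        raise ValueError("Dosing rate meets or exceeds Vmax; no steady state exists.")
    return km * dose_rate / (vmax - dose_rate)

# Assumed phenytoin-like parameters: Vmax = 500 mg/day, Km = 4 mg/L
for rate in (300, 400, 450):  # mg/day
    print(rate, round(css_michaelis_menten(rate, vmax=500, km=4.0), 1))
# 300 -> 6.0, 400 -> 16.0, 450 -> 36.0 mg/L: modest dose increases
# produce disproportionate jumps in steady-state concentration
```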
The liver is the principal organ responsible for xenobiotic metabolism. One of its major roles is to convert lipophilic nonpolar molecules to more polar, water-soluble forms. The drug molecule (a xenobiotic) can be modified by phase I reactions, which alter chemical structure by oxidation, reduction, or hydrolysis; or by phase II reactions, which conjugate the drug (glucuronidation or sulfation) to water-soluble forms. Typically, both phase I and phase II reactions occur. Most drug metabolism takes place in the microsomal fraction of the hepatocytes, where many environmental chemicals and endogenous biochemicals are also processed by the same mechanisms.
Enzymes of the hepatic microsomal system can be induced or inhibited. Enzyme induction and inhibition have greatest significance for drugs with low to moderate hepatic extraction fractions.
Microsomal enzyme induction leads to an increase in the activity of enzymes present, most commonly through increases in the quantity of the oxidizing enzymes. The many isoenzymes of cytochrome P450 are affected variably by different enzyme-inducing drugs. Two classic and clinically relevant enzyme inducers can be contrasted.
First, phenobarbital represents the type of enzyme inducer with broad induction effects. After a latency period, production of cytochrome P450, cytochrome P450 reductase, and related enzymes is increased. In addition, liver weight, hepatic blood flow, bile flow, and production of hepatic proteins also increase. This induction apparently increases the mass of the P450 isoenzyme for which debrisoquine is a substrate, because the hepatic clearance of debrisoquine is increased after phenobarbital administration. This enzyme system is referred to as cytochrome P450 2D6. Phenobarbital induction has little effect on theophylline clearance, suggesting a different isoenzyme for theophylline metabolism.
Polycyclic aromatic hydrocarbons, such as 3-methylcholanthrene and those found in tobacco smoke, represent a second, more selective type of enzyme inducer. They induce cytochrome P450 1A, in which no change in P450 reductase occurs and a different terminal oxidase appears. After this type of induction, the clearance of theophylline but not that of antipyrine is increased. These substances have served as prototypes for the classification of enzyme inducers. Obviously, when patients are receiving a drug with a narrow TI, their dosing regimen will need to be adjusted should a known enzyme-inducing drug be added to or deleted from their therapy.
Because the drug-metabolizing enzymes of the liver are nonspecific and interact with a wide variety of endogenous and exogenous substances, it is not surprising that one drug may inhibit the metabolism of a second, coadministered drug. Several general mechanisms have been proposed to describe these events. They include substrate competition, competitive or noncompetitive inhibition, product inhibition, and repression (in which the amount of enzyme is reduced by either decreased synthesis or increased degradation). Most drug-drug interactions probably fall into the categories of substrate competition or competitive or noncompetitive inhibition. Examples of drugs that have been shown to significantly inhibit drug metabolism include chloramphenicol, cimetidine, valproic acid, allopurinol, and erythromycin. As with enzyme inducers, the addition or deletion of an inhibitory drug in a patient’s drug therapy requires appropriate TDM and dose adjustment of the affected drug.
The role of TDM becomes particularly apparent for drugs that undergo hepatic metabolism. Wide variability in the rate of metabolism of any given drug exists not only in different patients in the general population but also in the same patient at different times and in different circumstances. This variability is due to factors such as age, weight, gender, genetics, exposure to environmental substances, diet, coadministered drugs, and disease. Furthermore, unlike kidney function, in which creatinine provides a useful biomarker of function, there is no acceptable endogenous biochemical marker by which hepatic function, and consequently hepatic capability for drug clearance, can be routinely assessed before drug therapy is initiated.
The biotransformation of drugs may produce metabolites that are pharmacologically active. In such instances the metabolite should also be measured because it is contributing to the effect of the drug on the patient. Primidone and procainamide are examples of such drugs. If the metabolite is inactive, it need not be measured, but steps should be taken to ensure that it does not interfere in the analytical process. The latter problem of metabolite interference can cause significant problems for monitoring certain patients such as transplant patients receiving the ISDs cyclosporine or tacrolimus that have numerous active and inactive metabolites, which cross-react to varying degrees with the antibodies used in immunoassays for these drugs.
Excretion of drugs or chemicals from the body can occur through biliary, intestinal, pulmonary, or renal routes. Although each of these represents a possible mechanism of drug elimination, renal excretion is a major pathway for the elimination of most water-soluble drugs or metabolites and is important in TDM. Alterations in renal function may have a profound effect on the clearance and apparent half-life of the parent compound or its active metabolite(s); decreased renal function causes increased serum drug concentrations and increases the pharmacologic response.
Kidney function, in contrast to liver function, is readily and reliably evaluated by estimation of creatinine clearance. Creatinine, a product of muscle metabolism, is produced by the body at a relatively constant rate and is eliminated primarily by the kidneys through glomerular filtration. Renal clearance of creatinine at approximately 120 mL/min approximates the glomerular filtration rate of 90 to 130 mL/min (see Chapter 34). Therefore measurement of creatinine clearance on a routine basis provides an effective tool to evaluate kidney function. A strong correlation has been shown to exist between creatinine clearance and the total body clearance or elimination rate constant of drugs that depend primarily on the kidneys for their elimination. Examples of drugs whose dosing is adjusted to account for changes in creatinine clearance include gentamicin, tobramycin, amikacin, digoxin, vancomycin, cyclosporine, and tacrolimus.
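As one concrete illustration, the Cockcroft-Gault formula is a widely used bedside estimate of creatinine clearance. It is not presented in this chapter's text, so treat the sketch below as an assumption-labeled example rather than the chapter's own method.

```python
def cockcroft_gault(age_years, weight_kg, serum_creatinine_mg_dl, female=False):
    """Estimate creatinine clearance (mL/min) by the Cockcroft-Gault formula.

    CrCl = (140 - age) * weight / (72 * SCr), multiplied by 0.85 for females.
    """
    crcl = (140 - age_years) * weight_kg / (72 * serum_creatinine_mg_dl)
    return crcl * 0.85 if female else crcl

# Hypothetical patient: a 70-year-old, 60-kg woman with SCr 1.2 mg/dL
print(round(cockcroft_gault(70, 60, 1.2, female=True)))  # ~41 mL/min,
# a value that would prompt dose reduction for renally cleared drugs
```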
In pharmacokinetics, mathematical approaches are used to predict or describe certain events, usually for calculating a dosing regimen or predicting the serum drug concentration after a given drug dose. The mathematical tools most often used in clinical pharmacokinetics are compartmental models and model-independent relationships.
Model-independent relationships are becoming increasingly popular in clinical pharmacokinetics. Their main advantages are fewer relationships to remember, fewer restrictive assumptions, a more general insight into elimination mechanisms, and easier computations. However, model-independent relationships are not without disadvantages: the conceptualization of compartments or physiologic spaces may be lost, specific information that may be clinically relevant or pertinent to mechanisms of distribution or elimination can be lost, and constructing concentration-versus-time profiles can be more difficult, requiring greater numbers of samples to be collected.
The most frequently used model-independent, noncompartmental analysis approach for characterizing drug exposure uses algorithms to estimate the AUC after dosing of a drug. One of the simplest methods to estimate the AUC from timed concentration data uses the linear trapezoidal rule to divide the concentration-time curve of a drug into a series of trapezoids, the sum of which represents the AUC as diagrammed in Fig. 42.4 . Accurate AUC estimation using the trapezoidal rule usually requires intense blood sampling during the dose interval.
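A minimal sketch of the linear trapezoidal rule described above; the sampling times and concentrations are hypothetical.

```python
def auc_trapezoidal(times, concentrations):
    """Estimate AUC by the linear trapezoidal rule.

    times: sampling times (h); concentrations: measured levels (e.g., mg/L).
    Returns the AUC over the sampled interval (e.g., mg*h/L).
    """
    auc = 0.0
    for i in range(1, len(times)):
        # Area of each trapezoid: mean of adjacent levels x time step
        auc += (concentrations[i - 1] + concentrations[i]) / 2 * (times[i] - times[i - 1])
    return auc

# Hypothetical levels drawn across a 12-hour dose interval:
t = [0, 1, 2, 4, 8, 12]             # h
c = [2.0, 8.5, 7.0, 5.0, 3.0, 2.1]  # mg/L
print(round(auc_trapezoidal(t, c), 1))  # ~51.2 mg*h/L
```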
In TDM, we are rarely concerned with a drug administered as a single, one-time intravenous bolus. Drugs are administered repetitively in the usual therapeutic situations. Fig. 42.5 shows that a drug repetitively administered at a fixed dosing interval will accumulate in the body until a steady-state condition exists. Note that a typical dosing cycle is once each half-life. Steady state can be defined as that point in the dosing scheme when the amount entering the circulation (governed by dosing rate) equals the amount eliminated (governed by elimination rate).
Theoretically, the AUC for the first dose of drug, when time is extrapolated from zero to infinity, should be equal to the AUC for a dose interval (τ) at steady state (see Fig. 42.4 ). The average drug concentration at steady state (C_ss) is a frequently reported concentration for many drugs and may be calculated using the formula

$$\bar{C}_{ss} = \frac{AUC_{\tau}}{\tau}$$
where τ represents the time duration of the dose interval. The maximum concentration (C_max) and minimum concentration (C_min) of a drug during a dose interval are also frequently of interest because these concentrations may be associated with the efficacy and/or toxicity of a drug. For most drugs, the C_min at steady state (also frequently referred to as the trough concentration) is the concentration obtained immediately before the next dose at time zero (referred to as C_0); however, although this is generally the case, it is important to recognize that C_min is not necessarily equivalent to C_0.
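For example, using the hypothetical 12-hour interval and AUC from the trapezoidal sketch above:

$$\bar{C}_{ss} = \frac{AUC_{\tau}}{\tau} = \frac{51.2\ \mathrm{mg \cdot h/L}}{12\ \mathrm{h}} \approx 4.3\ \mathrm{mg/L}$$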
Knowing the AUC of a drug after a defined dose allows calculation of the model-independent parameter of clearance, which provides a useful picture of the body’s ability to eliminate a drug. Total body clearance (Cl_T, or simply Cl) is defined as the theoretical total volume of blood, serum, or plasma completely cleared of drug per unit of time. It is usually expressed in units of mL/min, L/h, mL/min/kg, or L/h/kg. Cl is the sum of the clearances contributed by each elimination route (i.e., Cl = Cl_kidney + Cl_liver + Cl_biliary + ...). Cl is typically calculated from the AUC using the formula

$$Cl = \frac{f \times Dose}{AUC_{0 \to \infty}}$$

where AUC_0→∞ is the AUC for the first dose integrated over time from zero to infinity. The variable f represents the bioavailable fraction of the drug, which is not generally known for orally administered drugs in a particular patient. Thus an apparent oral clearance (Cl_a) of a drug is calculated using

$$Cl_a = \frac{Dose}{AUC_{0 \to \infty}} = \frac{Cl}{f}$$
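Continuing the same hypothetical numbers, a short sketch of the apparent oral clearance calculation:

```python
# Hypothetical: a 400-mg oral dose with AUC(0->inf) of 51.2 mg*h/L.
# Because f is unknown, this is the apparent oral clearance
# Cl_a = Dose / AUC = Cl / f, not the true total body clearance.
dose_mg = 400.0
auc_mg_h_per_l = 51.2
cl_a = dose_mg / auc_mg_h_per_l  # L/h
print(f"Apparent oral clearance: {cl_a:.1f} L/h")  # ~7.8 L/h
```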
Although Cl is model independent, it can be related to model-dependent parameters such as the volume of distribution and elimination rate in a first-order, one-compartment model, as discussed in greater detail in the following section.
For drugs dependent solely on hepatic elimination, total body clearance (Cl_T) equals hepatic clearance (Cl_H). When the liver is considered from a purely physiologic perspective, hepatic clearance is determined by hepatic blood flow (Q) and the hepatic extraction fraction (E):

$$Cl_H = Q \times E$$
The hepatic extraction fraction of a drug reflects the affinity of that drug for hepatic microsomal enzymes; E can be determined experimentally or calculated by the equation

$$E = \frac{C_a - C_e}{C_a}$$
where C_a is the concentration of the drug in blood entering the liver and C_e is the concentration of the drug in the hepatic venous effluent. For drugs that possess a high extraction fraction, hepatic clearance approaches hepatic blood flow (Q); the total body clearance of highly extracted drugs therefore depends primarily on hepatic blood flow. These drugs usually have low bioavailability because of the first-pass effect described earlier. Lidocaine is an example of such a drug. The clearance of drugs with a low extraction fraction is less dependent on blood flow and more dependent on the quantity and quality of the hepatic microsomal enzymes. Total body clearance of these drugs is affected by hepatic function, enzyme inducers and inhibitors, and changes in free drug concentration. Readers should recognize that this is a superficial view of a complex process; several excellent reviews on this subject are available.
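A brief sketch of these relationships with hypothetical values:

```python
def hepatic_clearance(q_blood_flow, c_in, c_out):
    """Hepatic clearance Cl_H = Q * E, with E = (Ca - Ce) / Ca."""
    extraction = (c_in - c_out) / c_in
    return q_blood_flow * extraction

# Assumed values: hepatic blood flow 1.5 L/min; drug enters the liver at
# 10 mg/L and leaves at 2 mg/L (E = 0.8, i.e., a high-extraction drug).
print(hepatic_clearance(1.5, 10.0, 2.0))  # 1.2 L/min, approaching Q
```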
Compartmental models are deterministic; that is, the drug concentration in blood and time data determine or define the model. The number and values of compartments assigned to the model have no true physiologic meaning or anatomic reality. The intravascular fluid compartment (blood) usually is the anatomic reference compartment. The advantage of intravascular fluid as the reference compartment is the ease with which it may be sampled to provide a definitive profile of blood concentration of drug versus time. The actual number of compartments can be quite extensive. However, for the sake of simplicity, one-, two-, and three-compartment models are most often used.
In the simplest compartment model, the body is considered as a single compartment, as shown schematically in Fig. 42.6 . It is assumed that after introduction of a drug, the substance is rapidly and uniformly distributed throughout the body, that is, it is kinetically homogeneous within the compartment. Such a model is frequently applied to water-soluble antibiotics such as gentamicin. Fig. 42.7 illustrates graphically the relationship between the log of concentration within the compartment and time for a single-bolus injection of a drug. In this simple model of first-order elimination, the instantaneous change in the quantity of drug within the compartment is proportional to the quantity (X):

$$\frac{dX}{dt} = -k_e X$$
Integration of this equation using the Laplace transformation yields

$$X_t = X_0 e^{-k_e t} \tag{42.12}$$

where X_0 is the initial quantity of the drug within the compartment, X_t is the quantity of drug within the compartment as a function of time, and k_e is the first-order elimination rate constant. From a practical perspective, the quantity of drug within the blood compartment cannot be easily measured; instead, the concentration of drug within the compartment is the measured quantity. Dividing both sides of Eq. (42.12) by a volume of distribution (V_d) term converts this equation to

$$C_t = C_0 e^{-k_e t} \tag{42.13}$$
where C_0, the initial concentration after bolus administration (which cannot be easily measured), is estimated by extrapolating the line shown in Fig. 42.7 to zero time. From knowledge of C_0 and k_e, one can theoretically predict the concentration at any time (C_t). As shown later, most drugs are administered in repetitive doses rather than as a single bolus.
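A minimal sketch of Eq. (42.13) with hypothetical parameters, showing the expected first-order decline:

```python
import math

def conc_at_time(c0, ke, t):
    """One-compartment, first-order decline: Ct = C0 * exp(-ke * t) (Eq. 42.13)."""
    return c0 * math.exp(-ke * t)

# Assumed bolus parameters: extrapolated C0 = 8 mg/L, ke = 0.173 h^-1
for t in (0, 4, 8, 12):
    print(t, round(conc_at_time(8.0, 0.173, t), 2))
# The concentration halves roughly every 4 hours: 8.0, 4.0, 2.0, 1.0 mg/L
```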
For a drug that is assumed to be administered intravenously as a rapid bolus into a single, kinetically homogeneous compartment, C_0 is related to the compartment volume as follows:

$$C_0 = \frac{Dose}{V_d}$$
V_d is called the apparent volume of distribution because it is not a real volume in the physiologic sense, but instead is a proportionality constant that translates the absolute amount of drug present in the compartment (X) into its concentration relative to a volume. The V_d for an orally administered drug can be determined easily from concentration data using the one-compartment model after correction for bioavailability (f):

$$V_d = \frac{f \times Dose}{C_0}$$
The units of V_d are usually liters (L). Although V_d is a mathematical term and not a real physiologic parameter, it is useful for contrasting the degrees to which different types of drugs distribute. For instance, the polar, hydrophilic drug gentamicin has a V_d of approximately 0.2 L/kg of body weight, whereas the nonpolar, lipophilic drug desipramine has a V_d of approximately 34 L/kg of body weight. Gentamicin is concentrated in the blood, whereas desipramine is predominantly distributed into tissue.
Using the same assumptions of a one-compartment model as described earlier for calculation of V_d, the first-order elimination rate constant can be determined by log transformation of Eq. (42.13) to give the natural logarithmic function

$$\ln C_t = \ln C_0 - k_e t$$
Given a zero-time blood drug concentration (C_0), a concentration (C_t) at a defined later time (t), k_e can be readily determined either algebraically or graphically. For example, in a plot of ln C_t versus t, the slope of the linear relationship is −k_e. The elimination rate constant k_e represents the fraction of drug removed per unit time and has units of reciprocal time (minute⁻¹, hour⁻¹, or day⁻¹).
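In TDM practice, k_e is often estimated from two timed levels drawn during the elimination phase, and the half-life then follows from the standard relationship t_1/2 = ln 2 / k_e. A sketch with hypothetical levels:

```python
import math

def elimination_rate_constant(c1, c2, t1, t2):
    """ke from two levels on the elimination phase:
    ke = (ln c1 - ln c2) / (t2 - t1)."""
    return (math.log(c1) - math.log(c2)) / (t2 - t1)

# Hypothetical pair of levels: 6.0 mg/L at 1 h and 1.5 mg/L at 9 h post-dose
ke = elimination_rate_constant(6.0, 1.5, 1.0, 9.0)
half_life = math.log(2) / ke
print(f"ke = {ke:.3f} h^-1, t1/2 = {half_life:.1f} h")  # ke ~0.173, t1/2 ~4 h
```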