Biomarkers are used in the clinical laboratory for routine patient care and in the pharmaceutical industry during drug development, including in establishing the safety and efficacy of a candidate drug. Biomarkers are also used in clinical and epidemiologic research to gain a better insight into pathophysiology, to identify predictors of disease, and to refine treatment strategies. The in vitro diagnostic (IVD) industry develops most of the biomarker assays and makes them commercially available. The pharmaceutical and IVD industries, as well as epidemiologic researchers, often seek the help of clinical laboratories in their biomarker studies, thus providing a mutually beneficial and rewarding relationship.
This chapter describes, in detail, the various areas in which the pharmaceutical and IVD industries and epidemiologic and clinical researchers use biomarkers and illustrates the ways in which the clinical laboratory can be involved in providing such services, which can be both financially and intellectually rewarding. However, these opportunities come with their own challenges, including strict regulatory rules, extensive documentation requirements, and particular data access and storage specifications. The regulatory requirements for performing biomarker testing, the results of which may be used in premarket submissions to governmental agencies for both drugs and assay kits, are described in this chapter. The relevant documents for analytical and clinical evaluations of biomarkers are identified and discussed. Because summarizing worldwide regulations would be a daunting task, the regulatory requirements of the United States are primarily used as examples. The reader should refer to their local agencies when assessing the exact needs applicable to their situation. Overall, the goal of this chapter is to provide a general overview to those in the clinical laboratory who are interested in biomarker research collaborations.
Biochemical markers are routinely and extensively used in patient care to confirm or exclude a diagnosis, screen for a disease, monitor compliance with or a response to a treatment, or assess a prognosis. Over the last several decades, genetic markers have been added to the armamentarium of clinical laboratory tests, with similar uses. To test for these markers, clinical laboratories use reagents and kits that have been developed by the in vitro diagnostic (IVD) industry and that have been approved for clinical use by the US Food and Drug Administration (FDA) or an equivalent agency, depending on the country. The IVD industry has more than $69 billion in sales worldwide. On rare occasions, when a commercial assay is not available for a particular marker, or when an assay needs to be modified to better meet the needs of clinical decision making, clinical laboratory professionals may develop and/or adapt, validate, and offer an assay within a single laboratory for clinical use; such a test is referred to as a laboratory-developed test (LDT).
The use of both biochemical and genetic markers is certainly not restricted to clinical laboratories. The pharmaceutical industry uses biomarkers extensively during drug development to determine whether to pursue or abort the efforts to develop a drug, and to establish the safety and efficacy of a candidate drug. As part of drug development, the pharmacokinetics (PK) and pharmacodynamics (PD) of the candidate drug must be examined, thus necessitating the development of an assay for the measurement of the parent drug and its metabolites, as well as biomarkers of PD response. Furthermore, biomarkers can be used as companion diagnostics to identify those individuals who will benefit from a drug or those who are likely to experience adverse effects from a drug (personalized medicine), and on rare occasions, as surrogate markers for a primary endpoint in clinical trials.
In addition to their use in routine patient care and by the pharmaceutical industry, biomarkers are extensively used by clinical and epidemiologic researchers to gain a better insight into pathophysiology, identify predictors of disease, refine treatment strategies, and develop prognostic indicators. The National Institutes of Health (NIH) has heavily supported biomarker research; according to the NIH RePORTER database, approximately $36.5 billion was invested in biomarker research from 2009 to 2018, a staggering increase from the $4.6 billion spent during the 1999 to 2008 period. This investment has resulted in a significant intellectual output; a simple search for the term “biomarker” on PubMed found more than 875,000 publications as of February 2020. Fig. 12.1 illustrates the increase in the number of NIH-funded grants that contain the term biomarker in the title and the intellectual output, reflected by the number of resulting publications, over the past two and a half decades (figure adapted from reference).
Clinical chemists are laboratory professionals who understand all aspects of biomarker testing, including the preanalytical, analytical, and postanalytical issues, and who are trained in the de novo development and validation of tests. They have a good understanding of regulatory requirements, appreciate the clinical context of a laboratory test, and practice their profession in a systematic and methodical manner. This combination of skills makes them desirable not only to clinical laboratories but also to the pharmaceutical and IVD industries and to laboratories performing testing for large trial cohorts. For additional discussion on the training, role, and career paths of the clinical chemist, refer to Chapter 1 .
An IVD company often needs to partner with a clinical laboratory to validate and characterize the performance of its assay in a real-life setting using a specific patient population. A pharmaceutical company may prefer to contract the development of an assay to measure the concentration of a candidate drug to an outside laboratory rather than performing it in-house. Biomarker testing for safety and efficacy during a clinical trial, or in subsequent postmarketing studies or substudies, is usually done in a clinical laboratory setting. These scenarios present financial and intellectual opportunities to clinical chemists practicing in either hospital-based or freestanding clinical laboratories. However, these opportunities certainly do not come without their own challenges. Strict regulatory rules, variable workflow, extensive documentation requirements, and particular data access and storage specifications are some of the difficulties. The rewards, however, can be substantial. According to the Tufts Center for the Study of Drug Development, the capitalized research and development cost for an approved drug has increased from $179 million in the 1970s to a staggering $2.5 billion in the 2000s to early 2010s (Fig. 12.2). It was estimated that in 2006, $25.6 billion was spent on clinical trials in the United States alone, with a projection of more than $32 billion for 2011. Global spending on clinical trials is estimated to reach $69 billion a year by 2025. The growing use of real-world data, such as those gathered from electronic health record systems and wearable devices, and their analysis with artificial intelligence tools such as sophisticated machine-learning algorithms, may aid in the identification of suitable patients for recruitment and in the optimization of study design, which could reduce the cost of clinical trials. However, such an approach is still in its infancy and must be proven; until then, the cost of clinical trials remains staggering. Even the small percentage of 5 to 10% that is devoted to laboratory testing can translate into huge sums. In addition to the financial reward, the collaboration between clinical laboratories and the pharmaceutical and IVD industries yields intellectual benefit, in which clinical chemists become involved in the reporting of the findings and the publication of the manuscripts resulting from these endeavors.
This chapter describes the nature of the studies in the pharmaceutical and IVD industries, as well as pharmaceutical substudies and epidemiologic investigations, the expectations placed on the clinical laboratory to support such activities, and the regulatory requirements involved. Although the regulatory requirements differ among countries and regions, they tend to be similar in spirit. The regulatory aspects of this chapter are not intended to serve as a blueprint for those seeking regulatory guidance, nor to be comprehensive, but rather to provide the reader with an overview of the regulatory challenges and requirements in this sphere. Because the US regulations are among the most comprehensive, mature, and well understood, the FDA requirements are used as examples throughout to illustrate various points. However, in keeping with the scope and focus of this textbook, both the American and the European Union regulatory requirements for IVD products are discussed.
Pharmaceutical investigational studies provide ample opportunity for clinical laboratories to participate in clinical research of profound importance. These studies build upon early research and development of new molecular entities (NMEs), variations of known compounds and formulations, as well as new clinical uses of established drugs, in each case seeking to demonstrate safety and efficacy of the drug candidate for its proposed intended use. Because of the potential risks involved in such clinical research, every individual and organization engaging in these investigations should understand the fundamental principles of the investigational process and the ethical and statutory requirements applicable to their efforts. The following is an overview of the drug development process.
Most new drug candidates proceed through two major phases of development: preclinical and clinical trials. Preclinical trials (also known as nonclinical studies) are performed to assess the safety and tolerability of the drug in animals. These studies provide insights into the maximum tolerated dose (the maximum dose that an animal species can tolerate for a major portion of its lifetime without significant impairment or toxic effect other than carcinogenicity), the type and frequency of adverse events experienced in the framework of the animal model(s), effects on reproduction, and mutagenicity. Pharmacologic properties of the drug in animal models are also investigated. These include PK effects (the examination of the process, magnitude, and rates of absorption, distribution to organs, metabolism, and elimination of the drug [ADME]) and PD effects (the biochemical and physiologic effects of the drug). Preliminary observations of efficacy may be made in animal models of a particular disease and the inference is subsequently drawn that similar benefits may accrue in humans. However, it is generally acknowledged that observed safety, pharmacologic, and efficacy effects in animals do not always translate to humans. Completely different effects may be observed in humans than in those previously observed in animals, or the magnitude of observed effects may differ significantly between species. Variables that influence the correlation of effects between animal and human investigations include the species of animals used in the study, drug dosing concentrations and method of administration, time of observation, measurement methods, and inherent differences of biochemistry and physiology. In many cases, these variables cannot be adequately controlled to enable the accurate prediction of effects in humans based on experiences in animals. Nevertheless, the preclinical phase represents the basic foundation of the in vivo drug development effort.
Following the completion of preclinical trials, the sponsoring organization prepares an investigational new drug (IND) application for submission to the FDA (when filed in the United States; procedures vary among countries). The IND includes the results of the preclinical trials, information on the drug manufacturing process, physico-chemical characterization of the drug (molecular size, structure, stability in certain environments, and so on), and a proposed plan for clinical investigation in humans. The clinical investigational plan is divided into multiple phases ( Fig. 12.3 ). The studies constituting these phases typically are intended to satisfy “adequate tests and substantial evidence necessary for New Drug Application (NDA) approval.”
The intent of the Phase 1 PK/PD study program is to “determine the metabolism and pharmacologic actions of the drug in humans, the side effects associated with increasing doses, and, if possible, to gain early evidence on effectiveness.” The design of a Phase 1 study generally involves administration of the drug to a small number of healthy subjects (perhaps as few as 20); however, Phase 1 studies in oncology are normally done in patients with cancer, given the high toxicities associated with these drugs. The assumption is that healthy subjects are an acceptable preliminary human model for the patient population for which the drug is intended, and that the general health of the studied subjects may attenuate any adverse effects of the drug. The so-called “mechanism of action” of the drug may also be determined in this phase, although the sponsor must only report the mechanism “if known.” The mechanism of action may be, for instance, binding of a cell receptor, inhibition of an enzyme, or perhaps an effect on nucleic acid or protein synthesis. Some drugs are well characterized in this area. For example, proton pump inhibitors such as omeprazole (Prilosec OTC, Procter and Gamble, Cincinnati, OH) and related compounds are a particularly well-characterized class of drugs. These drugs act by irreversibly blocking the hydrogen/potassium adenosine triphosphatase enzyme system (the gastric proton pump) of gastric parietal cells, thereby blocking secretion of acid into the gastric lumen. This effect can be clearly demonstrated in healthy individuals and in patients with gastrointestinal (GI) disease. In the affected population, omeprazole promotes healing of GI tissue damage that results from active duodenal ulcers, gastroesophageal reflux disease, erosive esophagitis, simple heartburn, and other related conditions. According to the FDA, approximately 70% of drugs under evaluation successfully complete the Phase 1 investigation and advance to the next phase.
Phase 2 studies include “controlled clinical studies conducted to evaluate the effectiveness of the drug for a particular indication or indications in patients with the disease or condition under study and to determine the short-term side effects and risks associated with the drug. Phase 2 studies are typically well controlled, closely monitored, and conducted in a relatively small number of patients, usually involving no more than several hundred subjects.” Although Phase 2 studies are well controlled, they are not statistically powered to determine efficacy. The logic of limiting the study size is to avoid unneeded exposure to a novel drug candidate about which development knowledge is still limited. Phase 2 studies may evaluate subjects over a few months to a few years, and approximately 30% of drugs in this phase move on to the next phase of investigation. If any additional safety or preliminary efficacy concerns are revealed, a follow-up Phase 2 study may be performed. If primary safety and efficacy are established, the Phase 3 investigation can proceed.
Phase 3 studies are the final pre-NDA phase. These studies are “intended to gather the additional information about effectiveness and safety that is needed to evaluate the overall benefit–risk relationship of the drug and to provide an adequate basis for physician labeling.” These studies are statistically powered to directly address the question of efficacy through recruitment of up to several thousand subjects, if needed. In clinical areas such as cardiovascular disease (CVD), Phase 3 studies often require long-term follow-up of subjects to determine hard endpoint outcomes such as overall survival or, as in the case of various cancers, progression-free survival. As such, Phase 3 studies are the costliest of all the investigations, both because of the requirement for statistical powering and because of the long-term evaluation of certain drugs and intended uses. According to the FDA, approximately 30% of drugs completing Phase 3 studies receive subsequent approval. However, these statistics take into account NDAs for known compounds in new dosage forms and for generic drugs. Obviously, there are fewer questions of safety and efficacy for minor modifications of known drugs and for generics, and thus a higher likelihood that the subsequent NDA will be approved. When looking purely at NMEs, the cumulative rate of passing all three phases of new drug development and achieving FDA approval is less than 15%.
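To see why statistical powering drives recruitment into the thousands, consider the standard normal-approximation sample-size calculation for comparing two event proportions. The following Python sketch is illustrative only; the event rates are hypothetical, and a real trial would use a prespecified design and validated statistical software.

```python
from math import sqrt
from scipy.stats import norm

def n_per_arm(p_control: float, p_treatment: float,
              alpha: float = 0.05, power: float = 0.80) -> int:
    """Approximate per-arm sample size for a two-sided comparison of
    two event proportions using the normal approximation."""
    z_a = norm.ppf(1 - alpha / 2)   # 1.96 for a two-sided alpha of 0.05
    z_b = norm.ppf(power)           # 0.84 for 80% power
    p_bar = (p_control + p_treatment) / 2
    numerator = (z_a * sqrt(2 * p_bar * (1 - p_bar))
                 + z_b * sqrt(p_control * (1 - p_control)
                              + p_treatment * (1 - p_treatment))) ** 2
    return int(numerator / (p_control - p_treatment) ** 2) + 1

# Hypothetical example: detect a reduction in event rate from 10% to 7%.
print(n_per_arm(0.10, 0.07))   # about 1400 subjects per arm, ~2800 total
```

Even a modest treatment effect on an uncommon endpoint quickly pushes enrollment into the several-thousand range cited above.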
Clinical research for new drugs sometimes involves another phase. Phase 4 studies are postapproval studies, which may be viewed through two lenses. In one aspect, Phase 4 studies provide additional evidence of safety and “real-world” effectiveness of a drug as evaluated in an observational, noninterventional trial, which is generally run under a formalized protocol. In another aspect, Phase 4 studies represent formal postmarketing surveillance evaluations intended to gather information through an adverse event monitoring system. In both views, the intent is to detect a signal that might necessitate a regulatory action, such as a change in labeling or initiation of a risk management system. These studies may be mandated under various statutes in the United States, including the Food and Drug Administration Amendments Act of 2007, under accelerated approval requirements (often for drugs approved under the “fast track” designation for particularly life-threatening diseases, such as HIV), for deferred pediatric studies, which are required under the Pediatric Research Equity Act, and for studies in humans for drugs originally approved under the Animal Efficacy Rule. Beyond these statutory requirements, pharmaceutical sponsors may make voluntary postmarketing commitments to perform Phase 4 studies and are expected to complete these studies in good faith. Fig. 12.4 presents the numbers of Phase 1 to 4 studies in the United States in 2013 and the numbers of participants.
Biomarkers have long been used in pharmaceutical research to assess toxicity, but starting in the early 1990s, these markers received increased attention because of the sharply increasing costs of drug development. Biomarkers came to be considered tools that could help increase the efficiency of the drug development process. Currently, biomarkers are used throughout the drug development process, including in lead compound selection, dose determination, defining and understanding the mechanism of action, and identifying the patients who are most likely to benefit from the drug. Well-qualified biomarkers, especially those used as surrogate endpoints, may have a broader usefulness in the regulatory context and in clinical practice.
The usefulness of biomarkers and surrogate endpoints to further enhance the drug development process has been the subject of regulatory emphasis. In 2004, the FDA launched the Critical Path Initiative (CPI) with the release of a report titled “Innovation/Stagnation: Challenge and Opportunity on the Critical Path to New Medical Products.” The CPI was the strategy adopted by the agency to “drive innovation in the scientific processes through which medical products are developed, evaluated, and manufactured.” This first report emphasized how biomarkers and surrogate endpoints could be used to optimize the drug development process, putting biomarkers at the forefront of pharmaceutical research. More recently, the Center for Drug Evaluation and Research (CDER) within the FDA released a Guidance for Industry on the qualification of “Drug Development Tools,” which specifically presents a process for qualification of a biomarker or biomarkers for a given “context of use” in drug studies. Once qualified, the biomarker(s) may be used within that context in any drug study.
There are two main areas in which biomarkers play a strategic role in drug development: (1) in enabling decision making at the early stages of drug development (whether to pursue, abort, or accelerate the development of the drug and in dose selection), and (2) in patient selection. Most of the biomarker research findings from the early strategic stage remain confidential and inaccessible to the public. However, biomarker-enabled decisions are extremely valuable to the pharmaceutical industry because they lead to significant cost and time reductions. These early decisions could be based on lack of efficacy, presence of toxicity, off-target effects of the candidate drug, or an unfavorable PK profile of the candidate drug. For biomarkers to play a strategic role in this space, good translational models and well-validated and robust clinical assays are required. Biomarkers used in patient selection tend to be used in late-phase clinical trials (Phase 3). These biomarkers are more commonly reported in the public domain and often become companion diagnostics (covered in more detail later in this chapter).
Now that the framework of drug development and some background on the use of biomarkers have been examined, the many opportunities for clinical laboratories to participate in the process can be discussed.
Clinical laboratories are not particularly suited to animal safety studies for two reasons: (1) these studies require strict conformance to Good Laboratory Practices (GLP), and (2) the same facility housing the animals must perform the laboratory analyses. Thus these studies are often contracted to large laboratories associated with contract research organizations (CROs) that specialize in animal safety studies. In contrast, pharmaceutical sponsors may award analytical studies to clinical laboratories with recognized specialization in certain clinical or methodological areas. Such laboratories may provide expertise in complex methods such as mass spectrometry or ultra–high-performance liquid chromatography. These laboratories offer services in methods development for animal models for specific compounds and associated metabolites, provision of kits for sample/specimen collection and shipping, sample analysis and retention, and calculation of results. Thus a clinical laboratory with experience in methods development and validation will find opportunities in the preclinical/animal study phase of drug development. Specifically, the laboratory may participate in:
Development and validation of new biomarker methods in specific animal models;
Transfer and validation of existing methods for human biomarkers in one or more animal models;
Development and validation of methods for measurement of parent drug and drug metabolites for PK assessment.
Pharmaceutical studies in humans require an extra level of diligence on the part of the clinical laboratory. At a minimum, sponsors screen potential laboratories for their abilities to perform testing on general organ panels, hematology parameters, serum chemistries, and urinalysis. These results often form the basis of baseline testing and ongoing monitoring of the health of study subjects during, and sometimes after, the drug trial. The pharmaceutical sponsor will assess the ability of the laboratory to receive, accession, and store samples; to perform testing with rapid turnaround and accurate results; to issue reports on a timely basis in accordance with the study protocol; to maintain blinding if required by the study protocol; and to retain records for later review by sponsor and regulatory resources. The sponsor will also monitor the quality of testing over the study period.
Beyond basic testing, sponsors may select laboratories in much the same way for clinical studies as for preclinical studies, that is, for recognized specialization in certain clinical and methodological areas, as well as the ability to develop and validate methods. Methods development and specialty testing are of great importance in five particular situations:
In PK and PD assessments
In situations in which a biomarker or a panel of biomarkers provides critical information for the drug safety assessment
In situations in which a biomarker or panel of biomarkers provides an indication of the dose–response relationship of the drug
In situations in which a biomarker or panel of biomarkers represents a primary, secondary, or exploratory endpoint of the study, particularly when the marker(s) act as a surrogate for clinical endpoint(s)
In situations in which a biomarker is used as a companion diagnostic for personalized medicine.
Each of these areas is discussed here in turn.
It is generally recognized that PK assessments are indicative of what the body does to the drug, whereas PD assessments are indicative of what the drug does to the body.
The PK assessment involves dosing of subjects with a drug under investigation through any one of several routes (intravenous, subcutaneous, intramuscular, topical/transdermal, or peroral/sublingual). This is generally followed by peripheral blood sampling at various time intervals as appropriate for the drug. In certain circumstances, some drugs may require analysis in unusual biological fluids (e.g., cerebrospinal fluid, saliva, sweat, and so on). Analytical methods are used to measure the presence and concentrations of the parent drug and its metabolites of interest, from which PK parameters are calculated. The objective is to evaluate the ADME characteristics of the drug and its metabolites. Liquid chromatography-mass spectrometry is considered the gold standard for the measurement of small-molecule drugs (nonbiologics), whereas immunoassays are commonly used for the measurement of biological drugs (e.g., monoclonal antibodies). For study designs that require characterization of both parent drug and metabolites, mass spectrometry often provides the most accurate and efficient approach for simultaneous determination of multiple compounds. The methods used may be suitable for direct analysis of blood, plasma, or serum, or may require various sample pretreatments before analysis.
Methods already approved by regulatory agencies may exist for the measurement of the compounds of interest and be available at a CRO laboratory. For NMEs, an assay is normally developed and deployed internally before it is transferred to a CRO. In any case, the clinical laboratory should follow the Bioanalytical Methods Validation Guidance issued by the FDA or similar guidance issued by its counterpart in Europe, the European Medicines Agency (EMA) or other local agencies, depending on the country. This guidance provides the established framework for validation, including validation parameters for “chemical assays” and for microbiological and ligand-binding assays. For chemical assays, the document specifies investigations of analyte selectivity, accuracy, precision, recovery, calibration robustness, and analyte stability in short- and long-term storage, as well as stability under multiple freezing and thawing cycles. For microbiological and ligand-binding assays, the document discusses considerations of selectivity and quantification issues. The guidance further describes the desired components of an assay validation report and retention of data for later auditing and inspection. The sponsor, and therefore the clinical laboratory performing the analyses, must supply documentation to the FDA or EMA as part of the NDA, including summary information, method development and establishment information, and bioanalytical reports of the application of the method to routine analysis.
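As a concrete illustration of two of these validation parameters, the Python sketch below computes percent bias (accuracy) and percent coefficient of variation (precision) from replicate measurements at a single QC level. The data are hypothetical, and the thresholds noted in the comments reflect commonly cited bioanalytical expectations rather than the binding criteria of any specific method.

```python
from statistics import mean, stdev

def accuracy_and_precision(measured: list[float], nominal: float):
    """Percent bias (accuracy) and percent CV (precision) for replicate
    measurements of one validation/QC concentration level."""
    m = mean(measured)
    bias_pct = 100 * (m - nominal) / nominal
    cv_pct = 100 * stdev(measured) / m
    return bias_pct, cv_pct

# Hypothetical mid-level QC: nominal 50 ng/mL, five validation replicates.
bias, cv = accuracy_and_precision([48.9, 51.2, 49.5, 50.8, 47.7], 50.0)
# Bioanalytical guidance commonly expects bias and CV within +/-15%
# (+/-20% at the lower limit of quantification) for chemical assays.
print(f"bias {bias:+.1f}%, CV {cv:.1f}%")
```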
Once the method is adopted into routine analysis, accurate and thorough records of each sample analysis must be maintained. Assay runs should meet prespecified acceptance criteria for quality control to ensure validity, and individual patient results should be inspected for errors or omissions. Verified and secured data sets containing the PK results for each study subject will then be analyzed by a kineticist. Often, a pharmaceutical sponsor will have an in-house kineticist or will contract the PK analysis to a consulting kineticist; however, some laboratories offer PK analysis services.
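The run-acceptance logic itself can be mechanically simple. The following sketch implements a rule in the spirit of the widely used "4-6-15"-style criterion (at least two-thirds of all QC results, and at least half at each level, within ±15% of nominal); the actual acceptance criteria for any study must be those prespecified in the validated method.

```python
def run_accepted(qc_results: dict[str, list[tuple[float, float]]],
                 tol_pct: float = 15.0) -> bool:
    """Run-acceptance check: at least two-thirds of all QC samples, and
    at least half at each level, must fall within +/-tol_pct of nominal.
    qc_results maps a QC level name to (measured, nominal) pairs."""
    all_flags = []
    each_level_ok = True
    for level, pairs in qc_results.items():
        flags = [abs(m - n) / n * 100 <= tol_pct for m, n in pairs]
        all_flags.extend(flags)
        each_level_ok = each_level_ok and sum(flags) >= len(flags) / 2
    return each_level_ok and sum(all_flags) >= 2 * len(all_flags) / 3

# Hypothetical run with low/mid/high QCs in duplicate (ng/mL):
run = {"low":  [(5.4, 5.0), (4.6, 5.0)],
       "mid":  [(51.0, 50.0), (49.0, 50.0)],
       "high": [(410.0, 400.0), (330.0, 400.0)]}
print(run_accepted(run))  # True: 5 of 6 pass overall, each level >= half
```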
In general, the PK data will be plotted for each individual subject with time postdose on the x-axis versus drug or metabolite concentration on the y-axis. Data are fit by either model-dependent (compartmental) analysis or model-independent (noncompartmental) analysis. Typical parameters calculated and reported include the maximum concentration (Cmax), the corresponding time (tmax), the half-life (t1/2), the elimination rate constant (K), the area under the curve from time zero to the last sampling time (AUC0–t) and/or from time zero to infinity (AUC0–∞), the volume of distribution (Vd), and the clearance (CL). For additional discussion of these parameters, refer to Chapter 42. For drugs administered over long periods of time, the kineticist may determine various parameters characterizing drug accumulation and circulating concentrations at steady state. Furthermore, for combination drugs or drugs often used as part of a multidrug regimen in a particular health condition, the kineticist may model drug–drug interactions to determine whether safety or efficacy varies depending on the presence and magnitude of an interaction. PK studies and specific analyses may also be conducted in populations of special interest (e.g., hepatically or renally impaired populations). As a typical summary of a PK study, the average concentrations across all subjects at the selected time points are usually presented as a line plot with population error bars. Fig. 12.5 provides an example time-concentration PK plot for a chemotherapeutic agent dosed subcutaneously in an advanced cancer study population.
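For illustration, the following is a minimal noncompartmental sketch of several of these parameters in Python. The time-concentration profile is hypothetical, and a production PK analysis would use validated software with prespecified rules for selecting the terminal elimination phase.

```python
import numpy as np

def nca_parameters(t: np.ndarray, c: np.ndarray) -> dict:
    """Noncompartmental estimates of Cmax, tmax, terminal half-life,
    and AUC from a single-subject time-concentration profile."""
    cmax = float(c.max())
    tmax = float(t[c.argmax()])
    # AUC(0-t) by the linear trapezoidal rule over the sampled interval.
    auc_t = float(np.sum((c[1:] + c[:-1]) / 2 * np.diff(t)))
    # Elimination rate constant K from a log-linear fit of the last three
    # points (assumes they lie in the mono-exponential terminal phase).
    k = -np.polyfit(t[-3:], np.log(c[-3:]), 1)[0]
    t_half = float(np.log(2) / k)
    auc_inf = auc_t + float(c[-1]) / k       # extrapolated tail to infinity
    return {"Cmax": cmax, "tmax": tmax, "t1/2": t_half,
            "AUC0-t": auc_t, "AUC0-inf": auc_inf}

# Hypothetical oral-dose profile: times in hours, concentrations in ng/mL.
t = np.array([0.0, 0.5, 1, 2, 4, 8, 12, 24], dtype=float)
c = np.array([0.0, 12.0, 30.0, 42.0, 35.0, 18.0, 9.5, 1.2])
print(nca_parameters(t, c))
```

For an intravenous dose, clearance and volume of distribution would then follow directly as CL = Dose/AUC0–∞ and Vd = CL/K.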
The PD assessments capture the biochemical and physiologic effects of the drug and its metabolites. These effects run the spectrum from simple drug-induced variations of vital signs, such as heart rate and blood pressure, to complex modeling of drug-target binding and any downstream effects as a result of this interaction. In this regard, the PD assessment attempts to characterize the indirect effects on physiologic parameters and the specific effects on the tissue target (if known). Within this spectrum, various safety assessments may be performed, such as drug effect on liver enzymes or renal function. As with PK studies, the burden will be on the clinical laboratory to accurately test and report any parameter associated with a planned PD analysis.
Biomarkers to assess drug safety have been extensively used for decades, in both preclinical and clinical research studies. Historically, to detect drug-induced toxicity, most of the safety biomarkers used during drug development were routine clinical laboratory tests that were used to assess tissue and/or organ injury and/or function. These tests include those used to assess liver function (e.g., transaminases, bilirubin, alkaline phosphatase), kidney function (e.g., serum creatinine, creatinine clearance, cystatin C), skeletal muscle (e.g., myoglobin) or cardiomyocyte injury (e.g., creatine kinase-myocardial band, troponins I and T), and bone biomarkers (e.g., bone-specific alkaline phosphatase).
It is generally accepted that preclinical toxicology models perform well in predicting clinical toxicity; approximately 70% concordance was found in retrospective studies. The best concordance was observed for hematological, GI, and cardiovascular toxicities, whereas hepatic and hypersensitivity and/or cutaneous reactions had the poorest concordance. However, approximately 25% of drug failures in Phase 2 studies are still due to drug toxicity. For this reason, there has been a great deal of effort in drug development to discover novel and perhaps better biomarkers of organ toxicity. For instance, the Predictive Safety Testing Consortium of the Critical Path Institute has been spearheading efforts for the qualification of new biomarkers of drug-induced tissue and/or organ injury. The Predictive Safety Testing Consortium leads an extensive collaboration between a large number of pharmaceutical companies and regulatory agencies, such as the FDA, the EMA, and the Japanese Pharmaceuticals and Medical Devices Agency, and has successfully qualified several acute kidney injury (AKI) biomarkers (albumin, β2-microglobulin, clusterin, cystatin C, kidney injury molecule-1, trefoil factor-3, and total urinary protein) in preclinical animal studies. This qualification was endorsed by the FDA and EMA. Although the qualification of some of these biomarkers for clinical usefulness is currently ongoing in a systematic fashion, it has been concluded that the use of these biomarkers in clinical trials should be considered on a case-by-case basis.
There are also efforts to develop novel toxicity biomarkers for organs and tissues such as the liver, skeletal muscle, the heart, and the vasculature. Many of these efforts are being spearheaded by the TransBioLine project ( https://transbioline.com/ ). The goal of this project, which brings together academic, industry, and government organizations, is to develop novel safety biomarkers that will reliably indicate organ injury for drug development purposes. Currently, there are (1) five organ-specific work packages (WPs) covering the liver, kidneys, pancreas, blood vessels, and central nervous system, (2) a liquid biopsy WP, (3) a sample and assay development WP, and (4) a data management and analysis WP. It is expected that this project will create the needed infrastructure and processes for long-term research and development of safety biomarkers. It would not be surprising if some biomarkers initially developed to monitor drug-induced injury eventually found their way into clinical practice, thus reversing the historical flow of safety biomarkers mentioned previously. Most of these biomarkers are summarized in Table 12.1.
| Organ/Tissue Toxicity | Classical Biomarkers | Biomarkers Evaluated in Preclinical Animal Models |
|---|---|---|
| Acute kidney injury | sCr, urea, urinary albumin | Cystatin C, NGAL, KIM-1, clusterin, trefoil factor 3, GST-α, GST-π, fibrinogen, miRNAs (all in urine) |
| Liver | AST, ALT, GGT, 5′NT | GLDH, MDH, PON-1, PNP, ARG-1, SDH, GST-α, miRNA-122 |
| Muscle | AST, CK, myoglobin | sTnI, sTnT, CK protein M, Pvalb, Myl3, Fabp3, Aldoa |
| Heart | cTnI, cTnT | Natriuretic peptides, interleukin 6, myeloperoxidase, sCD40 ligand |
| Pancreas | Amylase, lipase | TAP |
| Vasculature | | VEGF, GRO/CINC-1, TIMP-1, vWFpp, NGAL, TSP-1, smooth muscle actin, calponin, transgelin |
The emergence of the fields of proteomics, genomics, and metabolomics has also signaled a new era of biomarker use. These newer methods may be applied as companion diagnostics (covered in more detail later in this chapter) to determine which patients receiving a given drug may benefit or encounter specific adverse events, and potentially, to what magnitude they may experience such effects. As an example, pharmacogenomic tests have been used to determine which patients receiving an antipsychotic drug may experience suicidal thoughts based on variants of two receptor genes. (For additional information on pharmacogenomics, refer to Chapter 73.) As another example, pharmacogenetic tests for cytochrome P450 2D6 (CYP450 2D6) polymorphisms have been cleared by the FDA (e.g., AmpliChip CYP450 test, Roche Molecular Diagnostics, Pleasanton, California) or validated as LDTs to characterize individual metabolic characteristics that might contribute to variances in the safety or efficacy of drugs metabolized by this enzyme.
Also, in some cases, established biomarkers have been investigated for new indications. For instance, cardiac troponin has been investigated as a risk marker for cardiotoxicity in women with HER2-overexpressing advanced breast cancer who are taking trastuzumab and in children who receive doxorubicin for the treatment of acute lymphoblastic leukemia. Similar potential has been demonstrated for C-reactive protein (CRP) measured with a high-sensitivity assay in the former population, with the marker displaying more than 90% sensitivity for the detection of reduced left ventricular ejection fraction resulting from long-term trastuzumab therapy.
As in the foregoing discussion of PK and PD, laboratories with method development and validation skills will attract pharmaceutical sponsors to support drug safety assessments. A laboratory that also possesses particular expertise in the emerging diagnostic fields, such as pharmacogenomics, with a broad menu of established tests will find itself in high demand for participation in drug studies.
In drug development, the preclinical phases of research often involve measurement of a broad list of biomarkers for target engagement, PD response, and safety. Some of these biomarkers can be used as safety and efficacy indicators in early clinical development to give early insight into how the compound is working in humans and to inform dose selection for Phase 2 studies. The most useful biomarkers for this purpose are those that offer a clear dose response in PK/PD assessments (see section on Pharmacokinetic and Pharmacodynamic Assessments). Many Phase 1 study designs use a multiarm format, with the study arms defined by dose level. The resulting safety and PK and/or PD data are therefore analyzed individually by study arm. As part of this investigation, the selected biomarkers will be scrutinized for indications of both beneficial effects and safety risks. Similarly, Phase 2 studies often use multiple dose levels as a final attempt to select the final drug dose to evaluate in Phase 3 studies. As such, specific biomarkers may serve as one criterion in dose selection.
As an example of biomarkers used in this fashion, the CYP450 2D6 pharmacogenetic test is examined again. The test categorizes individuals into four groups of metabolic status: poor (little or no CYP450 2D6 function), intermediate (function between poor and normal), extensive (normal function), and ultra-rapid (multiple copies of the CYP450 2D6 gene are expressed, and therefore yield a greater than normal function). These categories have profound impact in the setting of opioid therapy for pain management. For drugs metabolized by this enzyme, such as hydrocodone, safety and efficacy will vary by metabolizer status. Specifically, CYP450 2D6 converts hydrocodone to hydromorphone, a more potent opioid. Thus ultra-rapid metabolizers will experience overexposure to the narcotic effects, which may cause serious adverse events or even death. Conversely, poor metabolizers will experience a reduced analgesic benefit due to underdosing of the more potent metabolite.
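Conceptually, this categorization can be represented as a simple mapping from a CYP2D6 gene activity score (the sum of function values assigned to each allele) to a metabolizer phenotype. The cutoffs in the sketch below follow one common CPIC-style convention and are shown only to illustrate the idea; actual phenotype assignment must follow current guidelines.

```python
def cyp2d6_phenotype(activity_score: float) -> str:
    """Map a CYP2D6 activity score to a metabolizer category.
    Cutoffs are illustrative, in the spirit of CPIC-style conventions."""
    if activity_score == 0:
        return "poor"                 # little or no CYP2D6 function
    if activity_score < 1.25:
        return "intermediate"         # between poor and normal
    if activity_score <= 2.25:
        return "extensive (normal)"
    return "ultra-rapid"              # e.g., multiple gene copies

# Hypothetical diplotypes expressed as allele-function sums:
for score in (0.0, 0.5, 2.0, 3.0):
    print(score, "->", cyp2d6_phenotype(score))
```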
Thus clinical laboratories with broad and specialized menus of tests and methodologies provide critical resources for assessment of dose–response relationships in drug development. In some cases, as with the CYP450 2D6 example, the biomarker(s) will be used to categorize individual study subjects at baseline based on the potential safety and efficacy profiles of the drug in subpopulations of interest. In other cases, the biomarker(s) may be used to assess safety and efficacy by dose level after the completion of the study as a reflection of drug exposure.
The clinical development of sitagliptin is also a good example of biomarkers helping to expedite drug development through a strong dose–response relationship. Sitagliptin, a first-in-class dipeptidyl peptidase-4 (DPP-4) inhibitor, benefited from a robust engagement between the drug and its target (target engagement) and from the use of well-qualified disease-related biomarkers (i.e., glucose and glycosylated hemoglobin [HbA1c]) to achieve proof of concept, facilitate the design of clinical efficacy trials, and streamline dose focus and optimization studies. The DPP-4 enzyme inactivates glucagon-like peptide-1 (GLP-1), a gut hormone involved in the regulation of blood glucose concentrations. Thus target engagement biomarkers such as DPP-4 activity, along with biomarkers closely related to the target disease (proximal biomarkers), such as active and inactive GLP-1, were used in clinical development. These were clearly characterized in preclinical species and used for the translation strategy of sitagliptin. Specifically, preclinical experiments demonstrated that approximately 80% inhibition of DPP-4 activity was associated with maximal lowering of glucose concentrations. This also correlated with an increase in plasma GLP-1 concentration. PK/PD modeling revealed that the concentration that yielded 80% of the maximum effective response of plasma DPP-4 inhibition corresponded to a plasma sitagliptin concentration of approximately 100 nmol/L. It was also determined that a single dose of 200 mg provided DPP-4 inhibition (>80%) for 24 hours. This finding allowed the rapid determination of the ideal dose to move to the next steps in clinical development. This success was a result of the excellent usefulness of these biomarkers in preclinical species and the development of a careful translational strategy.
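This kind of reasoning can be illustrated with a simple inhibitory Emax model, I(C) = Imax × C/(IC50 + C). The IC50 in the sketch below is not a published sitagliptin parameter; it is back-calculated so that the model reproduces the chapter's figure of approximately 100 nmol/L for 80% inhibition (for this model, the concentration giving 80% of maximal effect is always 4 × IC50).

```python
def emax_inhibition(conc_nM: float, ic50_nM: float,
                    imax: float = 100.0) -> float:
    """Inhibitory Emax model: percent inhibition at a given concentration."""
    return imax * conc_nM / (ic50_nM + conc_nM)

def conc_for_inhibition(target_pct: float, ic50_nM: float) -> float:
    """Invert the model: concentration producing a target % inhibition."""
    f = target_pct / 100.0
    return ic50_nM * f / (1.0 - f)

# Illustrative only: an assumed IC50 of 25 nmol/L reproduces ~100 nmol/L
# for 80% DPP-4 inhibition, since C = IC50 * 0.8 / 0.2 = 4 * IC50.
print(conc_for_inhibition(80.0, 25.0))   # 100.0 nmol/L
print(emax_inhibition(100.0, 25.0))      # 80.0 percent
```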
Adiponectin is yet another example. Specifically, it was used in the development of peroxisome proliferator-activated receptor (PPAR) agonists in patients with type 2 diabetes. A number of PPAR agonists in the thiazolidinedione family have been developed for the treatment of diabetes. Unfortunately, at the time of development of the initial drugs, such as troglitazone, rosiglitazone, and pioglitazone, there were no target engagement biomarkers that could be used to help establish dose selection. Since the development of the initial thiazolidinediones, adiponectin has been identified as such a biomarker because it increases in a dose-dependent manner after thiazolidinedione administration. Years later, the Biomarkers Consortium ( https://fnih.org/what-we-do/biomarkers-consortium ), a public–private platform for precompetitive collaboration specific to biomarker research, endorsed adiponectin as a predictor of metabolic responses to PPAR agonists in patients with type 2 diabetes, and adiponectin became a putative target engagement biomarker for PPAR agonists.
The FDA generally prefers a clinical endpoint for a drug trial. Such endpoints may include death, disease progression, a disease-related event such as myocardial infarction, or a clinical conversion of a predisease state to the disorder of interest such as conversion from prediabetes to type 2 diabetes. Generally, the endpoints are compared between the patients randomized to the investigational drug arm and the patients randomized to a placebo or standard-of-care control arm. However, diseases that slowly evolve from risk states to clinical endpoints pose economic and operational problems for sponsors. The situation is further exacerbated by conditions of low population prevalence. How can a sponsor design a study to follow a statistically powered sampling for an outcome of low prevalence that may take 10 or more years to develop? By the time the study is completed, the company may have ceased activities due to financial issues; other drugs may have entered the market, thereby closing the window of opportunity for the drug under study; or intellectual property rights may have lapsed. For that matter, by devoting long-term resources to a given drug, the sponsor may sacrifice opportunities to explore other drug candidates. For all the foregoing reasons, pharmaceutical companies propose biomarkers as surrogate outcomes.
CVD represents a long-established condition for the application of surrogate endpoints. As an example, the statin class of lipid-lowering drugs, including pravastatin, lovastatin, fluvastatin, atorvastatin, rosuvastatin, pitavastatin, and simvastatin, has generally demonstrated efficacy in studies in which low-density lipoprotein cholesterol (LDL-C) provides a surrogate for clinical outcomes such as myocardial infarction, revascularization, or death due to coronary heart disease. Drugs that significantly reduce circulating LDL-C are inferred to reduce risk for the clinical outcomes of interest. Importantly, the linkage of biomarkers such as LDL-C to actual outcomes has also been observed directly. As an example, the Scandinavian Simvastatin Survival Study provided evidence in this regard: over 5.4 years of median follow-up, simvastatin produced a 35% decrease in LDL-C, which was associated with a 42% reduction in the risk of coronary death and a 37% reduction in the risk of revascularization procedures. Similarly, studies of antihyperglycemic medications of various classes, including insulins, glinides, thiazolidinediones, sulfonylureas, α-glucosidase inhibitors, amylin analogs, incretin mimetics, DPP-4 inhibitors, selective sodium-glucose transporter-2 inhibitors, and metformin, often use changes in HbA1c as surrogate measures of improving or worsening glycemic status.
Even with use of surrogate measures, longitudinal studies in conditions such as CVD and diabetes nevertheless require ongoing testing at multiple time points, perhaps over several years. Thus a clinical laboratory contracted to perform study-related testing will be required to maintain accurate and reproducible methods over a long period of time. In this regard, the laboratory will benefit from participation in standardization programs such as the Centers for Disease Control and Prevention (CDC) Lipid Standardization Program (LSP) or the National Glycohemoglobin Standardization Program (NGSP). The goals of these programs are generally to provide external quality assurance measures to help maintain consistency and reliability of results. For example, the NGSP seeks to standardize HbA1c test results to those of the Diabetes Control and Complications Trial and United Kingdom Prospective Diabetes Study, which established the direct relationships between HbA1c values and outcome risks in patients with diabetes. If such external programs do not exist for a given biomarker, the clinical laboratory should develop internal quality control measures to achieve the same objectives. The importance of quality control and quality assurance cannot be overstated in the situation in which biomarkers provide the primary evidence of efficacy in a drug trial.
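As one sketch of such an internal measure, the function below screens a longitudinal series of QC results against a fixed target mean and SD, flagging Westgard-style 1-3s violations and a 10x shift (10 consecutive results on the same side of the target), a common signature of long-term drift. The rules and limits chosen here are illustrative; a real laboratory would apply the rule set defined in its quality plan.

```python
def lj_flags(values: list[float], target: float, sd: float) -> list[tuple]:
    """Levey-Jennings-style screen of longitudinal QC values.
    Returns (index, rule, z-score) tuples for each flagged point."""
    flags = []
    side_run, prev_side = 0, 0
    for i, v in enumerate(values):
        z = (v - target) / sd
        if abs(z) > 3:                       # Westgard 1-3s rejection rule
            flags.append((i, "1-3s", round(z, 2)))
        side = 1 if v > target else -1 if v < target else 0
        side_run = side_run + 1 if (side == prev_side and side != 0) else 1
        prev_side = side
        if side_run >= 10:                   # Westgard 10x shift rule
            flags.append((i, "10x shift", round(z, 2)))
    return flags

# Hypothetical monthly QC values drifting upward against target 100 +/- 3:
history = [99, 101, 100, 98, 102, 101, 103, 102, 104, 103,
           104, 105, 103, 104, 106]
print(lj_flags(history, target=100.0, sd=3.0))  # 10x shift flagged at end
```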