Viral illness has a wide variety of clinical manifestations ranging from focal to systemic and affecting nearly every organ system. Infections may be asymptomatic, acute, or chronic in nature. The viruses causing these syndromes are diverse; however, unrelated viruses may cause clinically indistinguishable disease. Combined, these factors drive the importance of rapid and accurate laboratory diagnosis to enable effective patient management.
In this chapter we discuss the use of qualitative and quantitative nucleic acid amplification and detection methods to diagnose and monitor viral infections. We also provide alternative diagnostic methods for those syndromes not optimally diagnosed by molecular techniques. Specific topics covered in this chapter include viral infections of the respiratory tract, viral illness in immunocompromised populations, infections of the central nervous system (CNS), sexually transmitted viral illness, viral hepatitis, viral hemorrhagic fever (VHF), and diagnosis of vaccine-preventable viral infections. The epidemiology of key organisms responsible for each of the clinical syndromes is highlighted, and factors impacting laboratory diagnosis and interpretation of results are discussed. Examples include optimal specimen type, availability of standardized assays, utility of qualitative versus quantitative results, multiplexed detection strategies, and potential shortcomings of current diagnostic approaches.
Viruses are among the most common and potentially devastating infectious agents associated with human illness. Non-specific symptoms, asymptomatic carriage, and ease of dissemination via aerosol or contact with bodily fluids enable viruses to infect millions of people during endemic exposure or epidemic outbreaks. Global pandemics such as the 1918 to 1919 influenza pandemic claimed 50 to 100 million lives, and seasonal influenza epidemics continue to burden health care systems with thousands of hospitalizations annually. More recently, the SARS-CoV-2 pandemic, which began in China in December of 2019, has poignantly demonstrated the ability of a novel viral pathogen to rapidly spread, cause over 1 million deaths, and upset the global economy and everyday lives of people on every continent. All this despite remarkable advances in modern medicine, including antiviral and vaccine development, since the 1918 pandemic. Other viral pandemics, such as human immunodeficiency virus (HIV), have persisted for decades without a vaccine or cure and continue to be a leading cause of mortality in the developing world while requiring costly lifelong suppressive therapies. Chronic viral infection with hepatitis B, hepatitis C, or human papillomavirus (HPV) affects millions of individuals globally, and all are strongly associated with human carcinogenesis. Among the most insidious of viral pathogens are those that establish lifelong latency within the human host, including the herpesviruses and polyomaviruses. These latent infections remain largely asymptomatic but may reactivate to cause acute, life-threatening disease in the immunocompromised host.
Traditionally, viral culture methods were used to diagnose acute and chronic viral infections. These methods were insensitive, required technical expertise, and yielded results of limited utility in guiding early therapy because of the extended time to result. Additionally, many important viral pathogens lack in vitro models and cannot be efficiently cultured using standard methods. Prime examples include norovirus, a leading cause of community-acquired and health care–associated gastroenteritis, and hepatitis A and E viruses, which are common causes of acute hepatitis in developing countries. Despite these limitations, maintenance of viral culture capabilities is still of value for phenotypic evaluation of antiviral resistance and for the detection of novel viral pathogens; however, this expertise is largely centralized in regional reference, public health, or government laboratories. Direct antigen detection methods have emerged as a tool to provide more rapid results and in situ detection of noncultivable viruses, but these methods often lack sensitivity and are not available for the majority of viral infections.
Over the past decade, clinical virology has experienced a dramatic shift toward the use of molecular diagnostics. This shift has been driven equally by the benefits to patient care and, in the United States, by the increasing availability of Food and Drug Administration (FDA)-cleared molecular assays. These assays increase precision, accuracy, and commutability of molecular results through automation and standardization of test parameters including specimen type, extraction method, and thermal cycling conditions. Additionally, quantitative assays often include internal and external standards for calibration or conversion of results to an international standard for quantification. This is especially important for management of some chronic viral infections such as HIV or cytomegalovirus (CMV).
In the United States, increased availability of easy-to-use assays designated as “waived” and/or “moderate complexity” under the Clinical Laboratory Improvement Amendments (CLIA) has further expanded molecular diagnostics into nonspecialized or near point of care (POC) laboratories. These tests do not require technical expertise or interpretation of results and have a turn-around time (TAT) of 15 minutes to 3 hours, which enables early patient management decisions. More recently, the development of large multiplexed “syndromic panels” (i.e., molecular assays that simultaneously detect many pathogens associated with a particular disease syndrome) has enabled the detection of multiple viral pathogens (as well as bacterial, parasitic, and fungal pathogens) associated with a specific clinical syndrome such as respiratory and gastrointestinal tract infections. While these tests simplify ordering for physicians and consolidate testing platforms for the laboratory, they often come at an increased cost per test; further, clinical interpretation of results may be complicated when an unexpected virus is detected or when multiple pathogens are detected in a specimen.
Herein, we discuss the use of currently available qualitative and quantitative nucleic acid amplification and detection methods utilized for the diagnosis and management of patients with viral illnesses (see At a Glance ). We also discuss alternative diagnostic methods for infections that are not amenable to diagnosis using molecular methods, or for which molecular methods are not well established, standardized, or available, to highlight the continued challenges facing laboratory diagnosis of these conditions.
HIV Fourth-Generation Diagnostic Algorithm:
Initial diagnostic screening is by the fourth-generation antigen-antibody immunoassay.
Reactive screens are confirmed by an HIV-1/HIV-2 antibody differentiation immunoassay.
Patients with reactive antigen-antibody assays and nonreactive differentiation assays are confirmed by molecular testing.
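The branching logic of the fourth-generation algorithm above can be sketched in code. This is an illustrative sketch only, not clinical guidance; the function name and result strings are our own invention for demonstration.

```python
def interpret_hiv_results(ag_ab_reactive, diff_assay=None, naat_detected=None):
    """Sketch of the fourth-generation HIV diagnostic algorithm.

    ag_ab_reactive: result of the antigen-antibody combination immunoassay (bool)
    diff_assay: 'HIV-1', 'HIV-2', or 'nonreactive' from the differentiation assay
    naat_detected: HIV-1 NAAT result, used when the differentiation assay is nonreactive
    """
    # Step 1: fourth-generation antigen-antibody screening immunoassay
    if not ag_ab_reactive:
        return "Negative screen; no further testing unless acute infection is suspected"
    # Step 2: reactive screens go to the HIV-1/HIV-2 differentiation immunoassay
    if diff_assay in ("HIV-1", "HIV-2"):
        return f"{diff_assay} antibodies detected; infection confirmed"
    # Step 3: reactive screen + nonreactive differentiation -> resolve by NAAT
    if diff_assay == "nonreactive":
        if naat_detected is None:
            return "Perform HIV-1 NAAT to resolve discordant results"
        return ("Acute HIV-1 infection" if naat_detected
                else "Likely false-positive screen")
    return "Differentiation immunoassay required"
```

A discordant result (reactive screen, nonreactive differentiation assay, positive NAAT) corresponds to acute infection detected by the p24 antigen component before seroconversion.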
Most rapid antigen tests for influenza lack sensitivity, particularly in adults.
Viral culture for influenza is sensitive and specific, though time consuming and technically demanding.
Molecular testing for influenza is highly sensitive and specific.
Uniplex or limited multiplex molecular assays (i.e., detect 2–3 targets) may exhibit higher sensitivity for influenza than multiplex syndromic panels.
Patients should initially be screened for HCV using serology.
Negative serologic results should be confirmed with follow-up RNA testing if immunocompromised or acute infection suspected.
Positive serologic results should be confirmed with follow-up RNA testing.
If positive serology results are not confirmed by RNA testing and HCV infection is still suspected, testing with a different HCV antibody test or repeat molecular testing within 6 months should be considered.
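The HCV testing sequence above can likewise be expressed as simple decision logic. Again, this is an illustrative sketch, not clinical guidance; the function name and result strings are hypothetical.

```python
def next_hcv_step(ab_reactive, rna_detected=None,
                  immunocompromised=False, acute_suspected=False):
    """Sketch of the HCV serology-first testing sequence described above."""
    # Step 1: initial serologic screen
    if not ab_reactive:
        # Negative serology still warrants RNA testing in select patients
        if immunocompromised or acute_suspected:
            return "Follow up with HCV RNA testing"
        return "No current evidence of HCV infection"
    # Step 2: positive serology is confirmed by RNA testing
    if rna_detected is None:
        return "Confirm with HCV RNA testing"
    if rna_detected:
        return "Current HCV infection"
    # Step 3: positive serology, negative RNA, suspicion persists
    return ("No current infection; if suspicion persists, repeat with a "
            "different antibody assay or repeat RNA testing within 6 months")
```

Note that serology alone cannot distinguish resolved from current infection, which is why RNA confirmation is the pivotal step.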
HSV meningitis and cerebritis
HSV is a leading cause of recurrent aseptic meningitis in young adults and can be life threatening in immunocompromised individuals.
Culture of HSV from cerebrospinal fluid (CSF) in symptomatic patients is 4–20% sensitive and should not be used to rule out HSV involvement.
Serologic assessment of CSF is of limited utility because of the extended time to detection (1–4 weeks) and propensity for IgM detection during periods of asymptomatic reactivation.
Qualitative assessment of CSF for HSV is preferred for diagnosis; detection of any viral DNA is abnormal.
Considerations for viral load monitoring:
A significant change (>0.5–1.0 log10 copies or IU/mL) in consecutive viral load measurements is a better prognostic indicator than a single viral load value.
Serial viral load measurements should not be conducted more often than once every 5–7 days; the half-life of viral DNA in blood is 3–8 days.
When following viral load trends, be sure results were obtained using the same assay and sample type. Interassay variability can range from 0.5 to 1.5 log10 copies/mL.
Whole blood viral load values are approximately 0.8–1.2 log10 higher than those in matched plasma specimens.
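Because viral load changes are interpreted on a log10 scale, the significance check above amounts to a simple calculation. The helper functions below are a minimal sketch; the 0.5 log10 threshold is assay-dependent (up to 1.0 log10 for some assays), and the function names are our own.

```python
import math

def log10_change(vl_prev, vl_curr):
    """Log10 difference between two viral load values (copies/mL or IU/mL)."""
    return math.log10(vl_curr) - math.log10(vl_prev)

def is_significant_change(vl_prev, vl_curr, threshold=0.5):
    """Flag a change exceeding the chosen log10 threshold.

    A change greater than ~0.5-1.0 log10 exceeds expected interassay
    variability and is a better prognostic indicator than a single value.
    """
    return abs(log10_change(vl_prev, vl_curr)) > threshold

# A rise from 10,000 to 50,000 copies/mL is ~0.7 log10: significant.
# A rise from 10,000 to 15,000 copies/mL is ~0.18 log10: within assay noise.
```

This is also why whole blood and plasma results must not be trended together: the 0.8–1.2 log10 matrix offset alone would exceed the significance threshold.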
In unvaccinated persons, serologic detection of anti-mumps IgM is 80–90% sensitive if the specimen is collected more than 3–5 days after symptom onset.
In persons who have received the mumps vaccine, serologic detection of anti-mumps IgM is as little as 9–47% sensitive.
Detection of mumps DNA by nucleic acid amplification test (NAAT) in saliva is greater than 90% sensitive if specimen is collected ≤2 days from symptom onset.
Collection of blood (serology) and buccal swab (NAAT) is recommended to confirm diagnosis in all patients with compatible signs and symptoms.
Respiratory viruses are a taxonomically diverse group of pathogens capable of causing disease within the upper or lower respiratory tract. This group includes influenza, parainfluenza, respiratory syncytial virus (RSV), human metapneumovirus (hMPV), adenovirus, coronavirus, rhinovirus, enterovirus, and other less common viruses (e.g., human bocavirus). Infections involving the upper respiratory system often manifest as rhinitis, sinusitis, or pharyngitis, whereas infections involving the lower respiratory tract often result in bronchitis and pneumonia. In North America, the prevalence of both influenza and RSV is highest during the winter months, extending from late fall to early spring. In contrast, infections with enterovirus exhibit an opposite seasonality, occurring mainly during the summer months. Parainfluenza occurs predominantly in the fall months, with the exception of parainfluenza virus (PIV) 3, which occurs in the summer. Adenovirus and rhinovirus are common causes of respiratory infections throughout the year.
Infections with respiratory viruses are relatively common, even among immunocompetent patients. Influenza often causes fever, upper respiratory symptoms, and occasionally lower respiratory symptoms in immunocompetent hosts. In 2017, influenza was responsible for 55.0 hospitalizations per 100,000 people in the United States and 123.8 per 100,000 globally, with a disproportionate burden among elderly patients. Influenza also accounts for an estimated 6% of all outpatient physician visits, with the highest incidence among children aged 2 to 17 years. RSV classically causes bronchiolitis, a lower respiratory infection associated with fever, cough, and wheezing, most commonly in the pediatric population. Severe infections requiring hospitalization affect an estimated 57,000 to 100,000 children annually in the United States, with an estimated cost of up to $300 million. Related mortality rates in children less than 5 years of age surpass those of influenza. PIV is another important respiratory virus in this patient population. Bronchiolitis, croup, and pneumonia resulting from PIV are responsible for approximately 1.1 hospitalizations per 1000 children under the age of 5 annually. Finally, while not as severe as influenza and RSV, mild upper respiratory infections caused by rhinovirus, coronavirus, and enterovirus have a significant impact on immunocompetent populations. The economic burden of these infections is staggering, estimated at approximately $40 billion annually. More than half of these seemingly mild infections are due to rhinovirus, which has been associated with asthma exacerbations, cystic fibrosis exacerbations, chronic obstructive pulmonary disease, and other respiratory complications.
In immunosuppressed patients, illnesses due to respiratory viruses are typically more severe. For example, influenza is more likely to cause severe disease, secondary complications, and prolonged viral shedding in this population. Immunosuppressed patients also exhibit fewer clinical signs of infection, making diagnosis challenging. RSV has been reported to cause severe pneumonia in solid organ and bone marrow transplant (BMT) patients, with mortality rates as high as 70%. hMPV also appears to disproportionately affect immunosuppressed patients. Specifically, hMPV was found to be the most prevalent respiratory virus causing symptoms in lung transplant recipients. More recent data suggest that this virus is also an important pathogen in children and critically ill adult populations. Similarly, adenovirus infections in immunosuppressed hosts can be severe, resulting in pneumonia, acute respiratory distress syndrome, and occasionally death. Adenovirus has been found to infect 4.9 to 29% of patients following stem cell transplant, often resulting in severe lower respiratory infections. In lung transplant patients, adenovirus has been associated with significantly higher rejection rates, bronchiolitis obliterans, and increased overall mortality.
Historically, viral culture was commonly utilized for the diagnosis of influenza infection. This approach had moderate sensitivity with the advantage of enabling simultaneous detection of other respiratory viruses. A significant disadvantage of this approach is the need for examination by technologists trained in identifying cytopathic effect (CPE), which is the change induced by viral infection in a cultured cell line. This change can be subtle and difficult to perceive to the untrained eye. When present, CPE typically signifies a positive result and specific morphologic features can suggest a specific viral pathogen. Positive results require 4 days to obtain on average, and additional steps such as hemagglutination or immunostaining are necessary for confirmation and definitive identification of the virus. Rapid cell culture methods utilizing shell vials have been described which enable viral identification in 1 to 3 days; however, these methods remain laborious and require technical expertise.
Antigen-based influenza detection assays address many of the disadvantages associated with viral culture. These assays are available in a variety of formats, ranging from micro-well–based enzyme-linked immunosorbent assays (ELISAs) to lateral flow immunoassays (IAs). Recently developed versions of these tests provide answers within minutes and are commonly referred to as rapid influenza diagnostic tests (RIDTs). The ease of performance has resulted in CLIA “waived” designation for many of these tests, allowing for POC use. Given the rapidity and simplicity of RIDTs, they are commonly used throughout the United States. Following the 2009 H1N1 outbreak, a survey of community hospital laboratories revealed that approximately 84% utilized RIDTs. Despite widespread use, RIDTs have important limitations. Most notable is the low sensitivity, which is dependent on influenza strain but may be only 50 to 80% compared with nucleic acid amplification tests (NAATs).
Several factors contribute to the comparatively poor sensitivity of RIDTs. First, these tests are highly dependent on the presence of an adequate amount of viral antigen for detection. As such, they perform better in individuals with higher levels of viral replication during infection (i.e., children) than in those with a lower viral burden (i.e., adults). The need for adequate antigen also necessitates proper specimen collection. Nasopharyngeal swabs allow for adequate sampling of infected cells and therefore have a significantly greater sensitivity compared to specimens such as nasal swabs. The efficacy of these assays is also highly dependent upon the antigenic make-up of the influenza virus in circulation in a given year. During the 2009 H1N1 influenza epidemic, RIDTs were found to have estimated sensitivities as low as 40%. Despite variable sensitivity, RIDTs are routinely used to make decisions regarding whether or not to provide treatment for patients with influenza-like illness. During the 2009 H1N1 epidemic it was estimated that approximately 88% of patients with negative RIDTs and symptoms suggesting influenza did not receive treatment. This prompted intervention from the FDA in the form of mandated minimal performance criteria for RIDTs prior to marketing and a reclassification from Class I in vitro devices (IVDs) to Class II IVDs with special controls. During clinical trials, the performance characteristics of RIDTs must now be compared to either culture or molecular methods. The sensitivity when compared to culture must be at least 90% for influenza A and 80% for influenza B. When compared to molecular methods, the sensitivity for both influenza A and influenza B should be a minimum of 80%. Additionally, assay performance must be reassessed annually by the device manufacturer to assess the ability to detect currently circulating strains.
The adoption of molecular testing represents a significant breakthrough for influenza diagnostics (see Chapter 67 for additional discussion). Many available molecular tests have sensitivities comparable to culture with the added advantage of providing same-day results. The dominant technology employed by molecular influenza diagnostics is real-time reverse transcription polymerase chain reaction (RT-PCR) (for additional discussion on this technique, refer to Chapter 64 ). Because the virus has a segmented RNA genome, each RT-PCR assay must include a reverse transcription step to generate a complementary DNA (cDNA) template prior to target amplification and detection. A comprehensive list of FDA-cleared molecular influenza diagnostics is available at https://www.fda.gov/medical-devices/vitro-diagnostics/nucleic-acid-based-tests . At the time of preparation of this chapter, 28 molecular tests for influenza have attained FDA-cleared status. These assays are available in different formats, which impact laboratory workflow and utility. Uniplex assays designed to detect influenza without differentiating type A from B impede selection of appropriate antiviral therapy since influenza B is universally resistant to the antivirals amantadine and rimantadine. The inability to subtype influenza A may also limit the utility of these assays. This was best illustrated during the 2009 H1N1 influenza epidemic when it was revealed that a significant portion of the seasonal H1N1 viruses were oseltamivir resistant, while the 2009 H1N1 strain retained greater than 99% susceptibility. Multiplexed PCR assays such as the Cepheid Xpert Flu and Prodesse ProFLU + assay are capable of differentiating influenza A and influenza B, as well as performing limited subtyping of influenza A.
An important enhancement to recently developed influenza molecular assays is the simultaneous testing for RSV, which has a seasonal and symptomatic pattern similar to influenza. Although RSV has historically been associated primarily with disease in young children, the virus is now recognized as an important cause of respiratory disease in older adults (177,000 hospitalizations and 14,000 deaths annually in the United States). The Simplexa Flu A/B & RSV Direct assay (DiaSorin) and the Xpert Flu/RSV assay (Cepheid) both detect and differentiate influenza A and B and RSV simultaneously using multiplex PCR while maintaining high sensitivity for each target. Recently, simplification of diagnostic platforms has resulted in the development of CLIA-waived molecular influenza assays such as the Roche Cobas Liat Influenza A/B and Abbott ID NOW Influenza A & B 2 assays. The Liat Influenza A/B is performed in a closed system, provides results within 20 minutes, and demonstrates a 99.2% sensitivity compared to previously existing molecular methods. The Abbott ID NOW Influenza A & B 2 is a similar molecular test designed for near-POC use, with sensitivity and specificity of greater than 95% and 100%, respectively, when compared to currently available molecular tests. The appeal of these assays is the combination of the speed and ease of use of RIDTs with the sensitivity of molecular methods.
The positive impact of these waived molecular tests on patient care has been well documented. Martinot and colleagues demonstrated reduced time to antiviral administration, a decrease in antibacterial utilization, and a shortened length of stay in the emergency department following implementation of the Abbott ID NOW test, while Benirschke and colleagues demonstrated a similar positive impact on clinical care following implementation of the Roche Liat Influenza A/B assay in an urgent care setting. Despite these advantages, introduction of molecular testing into the POC environment necessitates careful consideration. Molecular methodologies are attractive due to their high analytical sensitivity, an attribute that also makes them more susceptible to environmental contamination than less sensitive antigen tests. Moving molecular testing from controlled laboratory environments into relatively chaotic settings, such as primary care offices or emergency departments, requires careful education regarding molecular techniques and the adoption of practices to detect and prevent environmental contamination. These risks were addressed in 2019 by the College of American Pathologists (CAP) when the adoption of specific mitigation and monitoring strategies became a formal requirement for CAP-accredited laboratories that employ CLIA-waived testing methods.
Culture has historically been considered the gold standard for the diagnosis of other respiratory viruses and shares many of the same advantages and disadvantages previously discussed for influenza culture.
Similar to influenza and RSV, molecular methods are being used more frequently to identify other respiratory viruses due to their decreased TAT and high sensitivity. In contrast to influenza and RSV, commercially available assays for the detection of other respiratory viruses are more limited. As a result, many laboratory-developed tests (LDTs) that detect one or several of these pathogens have been described. These tests are often multiplexed to simultaneously detect rhinovirus, PIV, hMPV, and various other respiratory viruses. The development and maintenance of such assays can be challenging. Differences in primer and probe design, as well as nucleic acid extraction techniques, can drastically affect assay sensitivity. Additionally, while many FDA-cleared assays are designed to be “moderate complexity” sample-to-answer tests, LDTs are highly complex and require separate extraction, amplification, and detection steps. The implementation and use of laboratory-developed molecular assays also requires a degree of technical expertise that may not be present in every clinical laboratory.
The introduction of highly multiplexed FDA-cleared molecular assays has made testing for common and uncommon respiratory pathogens more accessible to clinical laboratories. Currently, four test platforms (and their later iterations) are FDA-cleared for the detection and differentiation of greater than 10 respiratory viral pathogens. These assays come in various formats requiring different workflows. The Luminex xTAG RVP multiplex assay (Luminex, Austin, TX) uses an initial multiplex PCR to amplify numerous viral targets followed by target identification using a fluorescent-labeled bead array sorted by flow cytometry. A later version of the test, the Luminex xTAG RVP Fast, boasts a more rapid TAT but does not detect PIV or differentiate RSV A and B. The GenMark eSensor RVP (GenMark, Carlsbad, CA) is similar to the Luminex xTAG RVP in that it initially employs a multiplex PCR to amplify numerous viral targets. Targets are then sorted and identified using capture and signal probes coupled to ferrocene labels and gold electrodes arranged in a microarray. Target binding is detected using voltammetry as a current is applied to each electrode. A potential disadvantage of both of these assays is the need for an initial “off-line” end-point PCR step followed by a manual transfer of the amplified product to a second analyzer for target detection. This approach increases the total hands-on time and time to result, and creates the potential for amplicon contamination. These weaknesses have been addressed by more recent iterations of these assays. An updated version of the Luminex platform, the NxTAG RVP, enables streamlined processing of previously extracted nucleic acid without the need for amplified DNA handling. This assay was cleared by the FDA for nasopharyngeal swab testing in 2015. Initial evaluation of this assay demonstrated overall robust performance and simplified workflow; however, the authors noted a sensitivity of 66% for coronavirus types OC43 and NL63.
The BioFire FilmArray (BioFire Diagnostics, Salt Lake City, UT) and Luminex Verigene systems are also highly multiplexed but are designed to be sample-to-answer assays, eliminating the need to manipulate extracted or amplified DNA products. The FilmArray incorporates nucleic acid extraction, nested PCR, and target detection using SYBR Green fluorescence into a single-use enclosed test pouch. Target-specific primers are separated into an array of microwells allowing for the discernment of specific targets. Similarly, the Verigene system employs multiplex PCR for target amplification but uses a standard microarray-based approach involving immobilized capture probes for detection of target amplicon. Sensitivity is maximized through signal amplification using gold nanoparticle-conjugated detection probes, thus allowing for visualization by a separate automated array reader. Several studies comparing the accuracy and efficacy of all four of these systems have been conducted. A large comparative study of the eSensor, FilmArray, and xTAG systems revealed differences in sensitivities for individual targets, most notably for influenza A/B and adenovirus.
A significant advantage of large, multiplexed panels is the ability to test for many different viruses with overlapping symptomatology. This allows providers to order a single test for patients who present with respiratory symptoms, a process referred to as “syndromic testing.” Since providers do not have to individually order testing for each specific pathogen, they are more likely to detect pathogens they did not previously suspect. This can be useful in patient populations such as the immunosuppressed who are at high risk for serious respiratory infections from a wide range of pathogens. The detection of unexpected pathogens can also be a double-edged sword. If there is a low pretest probability for a particular diagnosis, false positive results will contribute to a poor positive predictive value (PPV) and may distract from the patient’s more acute pathologic process. Additionally, many of the targets present on multiplex panels are not clinically actionable since pharmacologic therapy is only available for a minority of respiratory viruses. Given the added cost of multiplex tests, patients not at risk for severe disease may be better served by symptom management and supportive care rather than an expensive diagnostic test. One potential approach is to reserve syndromic testing for immunosuppressed patients who are at a higher risk for severe infection by several respiratory viruses or immunocompetent patients presenting in acute respiratory distress. This approach requires flexibility in testing that is currently being integrated into some of the existing multiplex tests. The most recent version of the Verigene assay, the FDA-cleared Verigene RP Flex assay, allows for selective ordering and reporting of a panel of 16 respiratory pathogens. At the time of this writing, data regarding the performance of this assay are unavailable.
Over the past decade, several respiratory viruses have been identified as important public health concerns due to their disease severity and their potential for epidemic and pandemic spread. Extreme caution should be used when manipulating specimens from patients with suspected highly pathogenic respiratory viruses. In general, culture of these viruses should not be attempted, and testing should be deferred to specialized laboratories with the appropriate biocontainment facilities. As such, local and national reference laboratories should always be consulted when testing for any of the following pathogens is warranted.
Avian influenza A (H5N1) first emerged in 1997 as a potentially fatal human pathogen. At that time, virus transmission from poultry was responsible for 18 human infections in Hong Kong, leading to 6 deaths. Since then, sporadic human infections have been reported in Africa, Asia, Europe, and the Middle East, often in association with zoonotic transmission from birds. In 2013, another avian influenza type A strain (H7N9) was observed to be a cause of human infection in China. Since these initial reports, there have been greater than 600 cases, predominantly in Eastern China. Preferred specimens for testing are nasopharyngeal swabs or aspirates. The analytical sensitivity of RIDTs for the detection of avian influenza A (H5N1) and (H7N9) has been shown by several studies to be very low, requiring at least a 10^4 median tissue culture infective dose (TCID50) for positivity. Conventionally used FDA-approved molecular diagnostics for influenza have not been thoroughly evaluated for the detection of either avian influenza A (H5N1) or (H7N9). While some of these molecular tests may fail to detect the virus, others may identify these viruses as influenza A (nontypeable).
In 2014, enterovirus D68 (EV-D68) was identified as the cause of severe respiratory symptoms including wheezing and shortness of breath in a cohort of children from the Midwestern United States. This virus was initially described in 1962 and had previously been associated with sporadic outbreaks of respiratory illnesses throughout the world. The 2014 outbreak in the United States lasted 4 months and affected approximately 1152 people in 49 states. In addition to severe respiratory symptoms, neurologic sequelae such as flaccid paralysis similar to that seen in polio were later described in a small proportion of the affected patients. Subsequently, outbreaks of acute flaccid paralysis associated with EV-D68 have been identified in late summer to fall in a biennial pattern (e.g., 2014, 2016, and 2018). While no commercial assay identifies EV-D68 to the subtype level, affected patients notably tested positive for “Rhinovirus/Enterovirus” using the BioFire FilmArray Respiratory Panel. The preferred specimens for testing are nasopharyngeal, oropharyngeal, or other upper respiratory specimens.
Finally, members of the coronavirus family have also emerged from zoonotic reservoirs to become highly pathogenic human viruses. In 2003, a coronavirus was found to be responsible for an international outbreak of severe acute respiratory syndrome (SARS). The implicated virus, now referred to as SARS-CoV (SARS-associated coronavirus), was acquired by humans from horseshoe bats. A second coronavirus, MERS-CoV (Middle East respiratory syndrome–associated coronavirus), emerged in Saudi Arabia in 2012. Both viruses cause severe respiratory illness; associated mortality is approximately 10% for SARS and 30 to 40% for MERS. Multiple specimens should be submitted from upper and lower respiratory sites to enhance diagnostic sensitivity. An RT-PCR assay has been developed by the CDC and is currently available at most state laboratories. In 2019, a third highly pathogenic coronavirus emerged in Wuhan, China. Although initial data regarding this virus suggested a mortality of approximately 4%, these data also suggested an increased need for ICU care and ready transmissibility in health care settings. Unlike the previous two zoonotic coronaviruses (SARS-CoV and MERS-CoV), efficient human-to-human transmission of SARS-CoV-2 has resulted in a global pandemic. Epidemiology and diagnostic approaches for SARS-CoV-2 are discussed in detail in the following section.
Severe acute respiratory syndrome coronavirus-2 (SARS-CoV-2) is a member of the Coronaviridae family of enveloped RNA viruses, further classified as genus Betacoronavirus and subgenus Sarbecovirus. Taxonomically it shares greater than 96% genome sequence identity with a coronavirus of bat origin and, while not a direct descendant, also clusters closely with 2003 SARS-CoV. Human infections with SARS-CoV-2 were initially identified in Wuhan, China in December 2019 and rapidly spread via human-to-human transmission to every continent except Antarctica, accounting for greater than 30 million infections and over one million deaths in the ensuing months ( https://coronavirus.jhu.edu/map.html , accessed October 2020). The clinical presentation of patients infected with SARS-CoV-2 ranges from asymptomatic to severe acute respiratory distress with multi-organ failure. Patient-specific factors including age greater than 60, underlying cardiovascular disease, and diabetes have been associated with increased risk of severe disease and mortality; however, severe symptoms requiring hospitalization, respiratory support, and mortality have been reported in immunocompetent individuals of all ages. The primary route of transmission is via respiratory droplets or aerosols. SARS-CoV-2 RNA is rarely detected in blood or urine of acutely ill individuals but can be present at high concentration in stool, though there is little evidence to support transmission via the fecal-oral route.
There are multiple diagnostic approaches to identify patients with acute symptomatic or asymptomatic infection, including viral culture, NAATs such as reverse transcriptase PCR (RT-PCR), and rapid antigen detection tests (RADTs). These methods have been applied to various upper and lower respiratory specimen types including nasopharyngeal swab, nasal swab, throat swab, saliva, sputum, and bronchoalveolar lavage (BAL) fluid. In addition, serologic tests capable of detecting SARS-CoV-2 specific IgG, IgM, IgA, and total antibody have been developed to assess prior exposure to the virus. It is critical to recognize the differences in performance and optimal utility of each approach in order to correctly interpret test results.
SARS-CoV-2 can be cultured from clinical specimens using routine methods and forms visible plaques in VeroE6 and Vero CCL81 cells within 2 to 3 days of inoculation. While useful in studying the basic biology of the virus, viral culture requires biosafety level-3 (BSL-3) precautions and is not recommended for routine diagnosis. Further, the sensitivity of culture methods may be inferior to that of NAATs, as indicated by an inability to recover virus from respiratory specimens containing less than 10⁴ RNA copies/mL.
RADTs are an inexpensive and direct method to detect the presence of SARS-CoV-2 in clinical specimens. These tests are simple-to-use lateral flow assays that can be read either visually, based on the appearance of colored indicator lines, or automatically, using a simple device to detect a fluorescent signal. At the time of writing, five such assays have received emergency use authorization (EUA) from the FDA. A major benefit of these tests is the ability to deploy them outside of a high-complexity laboratory, at or near the POC, to rapidly identify persons with SARS-CoV-2 infection. Importantly, these tests have generally been approved for use in symptomatic patients within 1 week of symptom onset, when viral load (VL) is highest, to maximize sensitivity. An independent clinical evaluation of two RADTs (BD Veritor, Becton, Dickinson and Co., Sparks, MD and Sofia 2, Quidel Corp., San Diego, CA) demonstrated approximately 80 to 85% sensitivity compared to a PCR-based test for SARS-CoV-2 in nasal specimens collected from patients less than 7 days from symptom onset. Importantly, the performance of these tests has not been evaluated when used as a screening test in asymptomatic populations, in which VL (and thus sensitivity) may be lower. An analytic limit of detection (LoD) study of a commercially available RADT without EUA designation found the test to be 3 log₁₀ less sensitive than viral culture and 5 log₁₀ less sensitive than PCR. Because of these potential shortcomings in sensitivity, the CDC and WHO recommend that negative results in symptomatic patients be considered presumptive and confirmed by a PCR-based test to rule out infection. Conversely, if used to screen asymptomatic individuals (an off-label application), positive results may require PCR confirmation in low-incidence populations.
NAATs provide the most sensitive method for direct detection of SARS-CoV-2 in clinical specimens and are considered the “gold standard” for detection of SARS-CoV-2 and diagnosis of coronavirus disease 2019 (COVID-19). Approximately 6 months following the first reported transmission in the United States, greater than 200 different NAATs had been made commercially available through the FDA EUA regulatory pathway. These include high-complexity batched testing platforms, moderate-complexity sample-to-answer platforms, and “rapid” POC tests. The absolute sensitivity of these NAATs is difficult to ascertain due to the lack of a true “gold standard” and is likely influenced by multiple factors including specimen type, collection method, and test design (e.g., genomic target, detection chemistry). The majority of available tests are designed to detect two or three distinct genomic targets that are specific to SARS-CoV-2 and/or conserved among the subgenus Sarbecovirus, which includes 2003 SARS-CoV and other nonhuman SARS-CoVs. The goal of utilizing multiple assay targets is to increase sensitivity and to provide redundancy that insulates against the impact of potential viral mutation events on test performance. However, in instances in which only the pan-Sarbecovirus targets are detected, results are typically considered preliminarily positive given the possibility of cross-reactivity with other sarbecoviruses. Multiple split-sample studies have compared the performance of available laboratory-based RT-PCR tests and report 96 to 100% positive agreement (i.e., sensitivity) among these tests based on a consensus result. A rapid isothermal amplification test has been designed for use at the POC and delivers a result in as little as 5 minutes; however, reduced sensitivity of 75 to 90% has been reported compared to “traditional” laboratory-based RT-PCR assays. This difference in sensitivity is likely due to the difference in LoD between laboratory-based RT-PCR tests (39 to 779 copies/mL) and the POC test (3000 to 20,000 copies/mL).
The sensitivity of NAATs can also be influenced by the type of specimen analyzed and the point in the course of infection at which the specimen is collected. Among upper respiratory specimens, nasopharyngeal (NP), throat, saliva, and anterior nares specimens have all been evaluated. A limited number of studies have directly compared the sensitivity of viral detection in each specimen type. Results appear to support statistically similar performance among most specimen types, with slightly higher sensitivity observed in NP than in throat specimens. Viral RNA may be detected as early as 1 week prior to symptom onset but typically peaks at or near the time of recognized symptoms. Following symptom onset, the VL begins to decrease in all respiratory specimens but remains detectable in lower respiratory specimens such as sputum or BAL significantly longer than in upper respiratory specimens such as nasal or nasopharyngeal swabs. In addition, positive BAL specimens have been reported in symptomatic and immunocompromised patients with negative NP or throat results. Therefore consideration may be given to testing a lower respiratory specimen in patients with compatible symptoms and a negative NP or nares result. Importantly, persistence in the lower respiratory tract is frequently at less than 10⁴ copies/mL, which as mentioned above correlates with a failure to recover virus in culture. The significance of long-term, low-level persistence as it relates to risk of relapse or transmission is currently unclear.
The use of serologic methods to indirectly identify individuals who are currently infected with or have been exposed to SARS-CoV-2 has been widely explored. Multiple technologies including highly automated laboratory-based immunoassays (IAs) and POC “rapid” lateral flow tests are commercially available and have obtained EUA status from the US FDA ( https://www.fda.gov/medical-devices/emergency-situations-medical-devices/eua-authorized-serology-test-performance , accessed June 29, 2020). These assays most commonly utilize purified viral nucleocapsid or spike proteins to detect the presence of one or more antibody classes (e.g., IgG, IgM, IgA, total antibody). The sensitivity of these assays ranges from approximately 75% to greater than 99%, with reported specificities of approximately 90% to greater than 99%. The sensitivity of serologic tests is dependent on the time elapsed since exposure to the virus, known as the “diagnostic window.” For SARS-CoV-2, the development of specific IgM and IgG class antibodies appears to occur nearly simultaneously, with 95% of individuals generating a detectable antibody response by 14 days postexposure and essentially 100% by 21 days postexposure. This response reaches a plateau within 1 week of detection, after which both IgG and IgM titers begin to decline. The decline in IgG titer is less abrupt than that of IgM; however, at this point it is not clear how long antibody remains detectable after initial exposure. Cross-reactivity with the four “human coronaviruses” appears to be negligible for most commercially available assays, though this may vary based on assay design.
The utility and interpretation of serologic results remain in question. As of the time of writing, seroprevalence in much of the world is likely 1 to 5%. At such low prevalence, even the most specific serologic assays yield a positive predictive value (PPV) as low as 80%. Further, the extent and duration of protective immunity associated with a positive serologic result have not been established. Therefore the current utility of serologic assays lies primarily in the epidemiologic assessment of exposure to SARS-CoV-2 in various populations, which may be used to guide RT-PCR-based screening strategies, and in retrospective diagnosis of late-stage infections (>2 weeks after symptom onset) when direct detection tests, including RT-PCR, may have converted to negative.
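The prevalence dependence of PPV described above follows directly from Bayes' theorem. As a minimal sketch (the 99% sensitivity/specificity and 4% seroprevalence figures are illustrative values taken from the ranges quoted above):

```python
def ppv(sensitivity: float, specificity: float, prevalence: float) -> float:
    """Positive predictive value of a test, via Bayes' theorem."""
    true_positives = sensitivity * prevalence
    false_positives = (1 - specificity) * (1 - prevalence)
    return true_positives / (true_positives + false_positives)

# Even a 99%-sensitive, 99%-specific assay yields a PPV of only ~80%
# when seroprevalence is 4%.
print(f"{ppv(0.99, 0.99, 0.04):.2f}")  # → 0.80
```

The same calculation shows why PPV recovers in high-prevalence settings, and hence why serologic surveys are most informative in heavily affected populations.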
Influenza and RSV are the most common causes of severe respiratory disease in healthy populations.
Respiratory viral culture is time consuming, technically demanding, and no longer commonly used.
Influenza rapid antigen tests have poor sensitivity and should not be used to rule out infection.
Molecular assays are the primary method used for the diagnosis of respiratory pathogens and range from uniplex to massively multiplexed (>20 targets).
Testing for highly pathogenic emerging respiratory viruses, including MERS-CoV and pandemic influenza, should be coordinated with local and national reference laboratories.
Viruses are the most common cause of infectious disease in humans. The vast majority of viral infections are self-limited or asymptomatic in persons with normal immune function and do not require medical intervention. In contrast, these same agents can cause devastating disease in persons with impaired immune function resulting from genetic disorders, infectious agents such as HIV, hematologic malignancies, or iatrogenic immunosuppression following hematopoietic stem cell or solid organ transplantation (SOT). Severe viral syndromes in these populations can result from primary infection or from reactivation of viruses that have established lifelong latency following initial infection, notably the herpesviruses and polyomaviruses. Reactivation of these latent viruses may manifest as intermittent and asymptomatic shedding of virus in bodily fluids, or as symptomatic illness ranging from mild fever and rash to superficial lesions to fulminant multiorgan or systemic disease. Presentation and course can be impacted by the immune status of the host and the site of viral latency. Therefore several diagnostic approaches are necessary to aid in differentiating active from latent infection, to monitor disease progression, to identify potential antiviral resistance, and to stratify risk of severe disease when considering immunosuppressive therapy or evaluating potential transplant candidates.
CMV, also known as human herpesvirus 5 (HHV-5), is an obligate human pathogen and member of the Betaherpesvirinae subfamily. Seroprevalence of CMV increases with age and is reported to be 50 to 80% by midlife; however, it can approach 100% in specific populations. CMV is capable of infecting and replicating within a broad range of human tissues and cell types including differentiated epithelial, endothelial, parenchymal, smooth muscle, lymphoid, and myeloid cells. Following primary infection, CMV establishes a lifelong association with its host, existing as an episomal circularized genome within undifferentiated cells primarily of myeloid lineage. In healthy individuals, latent, non-replicating CMV can be found in 1:10⁴ to 1:10⁵ circulating monocytes. Normal physiologic processes including cellular differentiation and immune responses to infection can trigger asymptomatic viral reactivation and shedding of infective virions in nearly all bodily fluids (e.g., blood, urine, saliva, stool, semen, breast milk). Specifically, intermittent shedding of CMV in saliva has been reported in 1 to 2% of asymptomatic immunocompetent adults and in up to 46% of asymptomatic HIV-positive patients. Additionally, CMV can be transmitted in utero, resulting in congenital CMV disease, which can cause mortality or immediate or progressive hearing, visual, and mental deficits.
The majority of CMV infections in the immunocompetent host are mild and self-resolving and go unnoticed or unreported. In contrast, primary infection or reactivation of latent virus can cause devastating illness in an immunocompromised host. Specific syndromes include pneumonitis, colitis, retinitis, meningitis, and systemic viral sepsis. The risk of CMV disease is highest in individuals with advanced HIV/AIDS and in patients undergoing immunosuppressive therapy following SOT or hematopoietic stem cell transplant (HSCT). Rates of CMV disease range from 8 to 41% following SOT or HSCT, with the highest incidence seen in heart and lung transplant patients. Among opportunistic infections, the presence or absence of CMV disease is a leading factor in successful transplantation and survival.
Laboratory approaches for the diagnosis and monitoring of CMV disease include serologic, nucleic acid-based, and antigen-based methods. Serologic tests provide indirect evidence of exposure to CMV but are not used for diagnosis or monitoring of active disease. In contrast, direct detection methods such as PCR or viral antigen tests can detect and monitor active disease. Measurement of CMV antigenemia, based on quantification of the CMV structural protein pp65, was an early approach to quantifying the amount of actively replicating virus in circulation. This approach was beneficial in monitoring patients for viral relapse, but quantification lacked standardization and assays were technically demanding to perform. Subsequently, quantitative NAATs have been developed that offer a comparatively simplified workflow, standardization to an international scale, and increased sensitivity.
Serologic tests hold no utility for the diagnosis or monitoring of CMV disease. The titer of anti-CMV IgG does not correlate with active disease, and anti-CMV IgM is commonly associated with episodes of asymptomatic viral reactivation. The chief utility of CMV serology is to establish the serostatus of two specific populations: women, as part of a prenatal or perinatal screen, and potential donors and recipients prior to SOT or HSCT. The presence of CMV-specific IgG and IgM is determined using enzyme immunoassays (EIAs), chemiluminescent immunoassays (CIAs), or indirect immunofluorescence assays (IFAs). For transplant patients, determination of serostatus aids in stratifying the risk and severity of potential CMV disease and guides the dose and duration of antiviral therapy following transplant. In one study, donor-positive/recipient-negative (D+/R−) transplants carried a 19 to 31% risk of subsequent CMV disease compared with only a 2 to 3% risk in D−/R− transplants. Extension of antiviral prophylaxis in D+/R− organ recipients may reduce the incidence of late-onset CMV disease.
Beyond establishment of serostatus, differentiation between primary and past infection holds prognostic implications for the risk of congenital CMV disease. The presence of anti-CMV IgM should not be independently interpreted as evidence of primary infection because of its long-term persistence (6 to 9 months) following primary infection, its presence during episodes of viral reactivation, and cross-reactivity with other Herpesviridae; however, the absence of IgM effectively rules out recent infection. Anti-CMV IgG avidity testing can be used to distinguish primary from past infection in women with dual-positive IgG and IgM results. The presence of high-avidity IgG indicates that primary infection occurred 18 to 20 weeks or more prior to testing (i.e., past infection) and is associated with a 1 to 2% risk of congenital CMV infection, compared with a risk of 12 to 75% in women with low-avidity IgG. Despite these prognostic implications, routine serologic screening of pregnant women for CMV is not currently recommended because of the remaining potential for congenital infection regardless of maternal immune status and the lack of therapeutic interventions to prevent congenital transmission. IgG avidity testing has also been used as a prognostic tool in post-transplant patients. In this group, failure to develop high-avidity antibodies following primary infection may be associated with increased risk of severe CMV infection or organ rejection. Several ELISA-based CMV IgG avidity tests are commercially available; however, none have received FDA clearance to date. Laboratories that offer testing must conduct internal assay validation studies and may establish different thresholds to define high- and low-avidity results, which contributes to variability in the analytic and prognostic performance of these assays. Therefore results must be interpreted in the context of other laboratory and clinical findings.
Qualitative NAATs are used to aid in the diagnosis of localized infections such as pneumonia, retinitis, meningitis, gastrointestinal, or other “end organ” disease. Potential specimens include tissue biopsy, bronchoalveolar lavage (BAL), cerebrospinal fluid (CSF), and vitreous fluid. Direct detection of CMV in these specimens is important because 50 to 70% of patients with end organ disease have undetectable CMV in whole blood (WB) or plasma specimens. The high sensitivity and negative predictive value (NPV) of qualitative NAATs can be used to essentially rule out active CMV infection in cases of presumed CMV pneumonitis, retinitis, or gastrointestinal disease. Conversely, a positive result should not be used to define “proven infection” because of an inability to differentiate asymptomatic shedding from end organ disease. While higher VLs have been associated with increased specificity for disease, specimen heterogeneity and a lack of assay standardization have precluded the development of reliable quantitative thresholds to define active infection. Therefore all positive results obtained from fluids and tissues should be confirmed by a more specific method such as histologic examination and must be correlated with clinical symptoms to support a probable diagnosis of CMV disease.
There are currently no FDA-approved qualitative CMV NAATs, but several tests are commercially available as analyte specific reagents (ASRs) or have received CE marking for use in the European Union ( Table 89.1 ). An evaluation of five of these NAATs, as well as a laboratory-developed test (LDT), was conducted using 200 prospectively collected clinical samples representing respiratory, urine, CSF, biopsy tissue, and other clinical specimens. The analytic LoD ranged from 10² to 10³ copies/mL for all six tests. The clinical sensitivity of five of the six assays was 100%, with the remaining assay demonstrating only 89% (41/46) sensitivity for detection of CMV in the clinical specimens. These data support the utility of qualitative NAATs as a high-sensitivity screen for the presence of CMV in various specimen types and an effective method to rule out CMV as the etiologic agent in these cases.
| Method/Specimen | Target | TAT | Test Platforms | Sensitivity (%) | Specificity (%) |
|---|---|---|---|---|---|
| Culture | CMV | — | Routine, Shell vials | 8–48 | 99–100 |
| Antigenemia | pp65 tegument protein | 2–6 h | None | 38–100 | 55–99 |
| Qualitative NAAT | | | | | |
| Leukocytes | — | — | LDT | 94 | 50 |
| CSF, urine, tissue, BAL, throat swab | Varies by assay | 1–3 h | ASR or CE-Mark; EraGen Multicode, Liaison MDX, Elitech MGB Alert, Roche CMV ASRs, Abbott CMV | 89–100 | 97–100 |
An important factor in predicting and monitoring CMV disease is the change in blood or plasma VL. Rapid changes in VL are prognostic for stratifying risk of disease and severity of symptoms. Results of VL tests also inform decisions to initiate or discontinue therapy and aid in early recognition of antiviral resistance. Therefore the use of quantitative NAATs is recommended by several national and international guidelines as an important component of patient management.
The type of specimen tested impacts test sensitivity and the quantitative CMV VL value obtained, which in turn affects the interpretation of results. Assays using WB quantify latent, cell-associated virus in addition to infective extracellular virions. Therefore WB-based assays are more sensitive for the qualitative detection of CMV, and quantitative VL values are approximately 0.8 to 1.2 log₁₀ higher than in matched plasma specimens, albeit with a poor correlation coefficient (r value, 0.19 to 0.79). Importantly, the increased sensitivity of WB assays has not demonstrated a prognostic advantage for determination of CMV disease compared to plasma-based tests. The rate of virologic recurrence was similar in patients with undetectable VL in matched WB and plasma specimens (23.6%) and in patients with a quantifiable VL in WB but undetectable CMV in plasma (23.1%), owing to the detection of latent, inactive virus in WB specimens.
The increasing analytical sensitivity of quantitative NAATs has lowered the threshold of detection to as few as 6 to 150 copies/mL in WB or plasma specimens. At these low levels of detection, the presence of cell-associated virus or free viral DNA may contribute to a detectable VL even following effective antiviral treatment. This is supported by the finding of detectable VL in both WB (∼70%) and plasma (∼48 to 59%) specimens from asymptomatic patients. These data suggest that, even when using plasma, a low VL is not necessarily indicative of a risk of CMV disease. Further, the inability to achieve an undetectable VL can result in unnecessary antiviral therapy and increase the risk of resistance if a “treat to negative” paradigm is used.
Absolute quantitative VL thresholds ranging from 1000 to 10,000 CMV copies/mL have been evaluated as clinical decision points to define active infection and drive initiation or discontinuation of antiviral therapy. Unfortunately, the establishment of a single reliable threshold to predict disease has been hindered by a lack of standardization among commercially available and laboratory developed tests, including the use of different calibration materials, which causes interassay variability of 0.5 to 2.0 log₁₀ copies/mL.
A significant step toward standardization of CMV VL testing was made with the introduction of an international standard by the World Health Organization (WHO) in 2010. This allows laboratories to calibrate VL results to a single international unit (IU) standard regardless of which test is being used, thereby enabling more accurate comparison of viremia values across institutions. Standardized in vitro diagnostic (IVD) assays can achieve precision within as little as 0.1 log₁₀ copies/mL, which may be less variation than is observed biologically during chronic infection. Specifically, a multicenter study of an FDA-cleared plasma-based NAAT calibrated to the IU standard demonstrated a narrow 95% confidence interval (CI) of 0.14 to 0.17 log₁₀ copies/mL for specimens tested across five different laboratories. However, a comparison of ten different CMV NAATs, all calibrated to IU, demonstrated a variance of up to 2.8 log₁₀ IU/mL among clinical specimens analyzed by each assay, with over 40% of specimens giving values greater than 0.5 log₁₀ IU/mL from the mean. Further, the use of IU does not allow for interchangeable comparison of values obtained from different specimen types such as WB and plasma. These data demonstrate that even when using a standardized calibrator and reporting results in IU/mL, assay-specific factors including the use of different specimen types, CMV gene targets, cycling conditions, and nucleic acid extraction methods can cause significant inter-assay variability. Taken together, these data underscore the difficulty in obtaining commutable VL results across different laboratories and the barriers to development of a “global” CMV VL threshold for use as a clinical decision point. Until these variables can be addressed, serial VL monitoring for preemptive therapy or identification of therapy failure should be conducted by the same laboratory to avoid potentially misleading results.
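In practice, serial VL results are compared on the log₁₀ scale, and a change smaller than roughly 0.5 log₁₀ IU/mL (the deviation figure cited above) is often attributable to assay variation rather than a true change in viremia. A minimal sketch of such a comparison (the function names and the 0.5 log₁₀ default are illustrative assumptions, not drawn from any guideline):

```python
import math

def log10_change(prior_vl: float, current_vl: float) -> float:
    """Change between two viral load results, in log10 units (IU/mL)."""
    return math.log10(current_vl) - math.log10(prior_vl)

def exceeds_assay_variability(prior_vl: float, current_vl: float,
                              threshold: float = 0.5) -> bool:
    """True if the observed change is larger than typical inter-assay variation."""
    return abs(log10_change(prior_vl, current_vl)) > threshold

# A rise from 1000 to 2000 IU/mL (~0.3 log10) is within expected variation;
# a rise from 1000 to 10,000 IU/mL (1.0 log10) is not.
print(exceeds_assay_variability(1000, 2000))   # → False
print(exceeds_assay_variability(1000, 10000))  # → True
```

Comparisons like this are only meaningful when both results come from the same assay and specimen type, for the commutability reasons described above.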
An alternative approach to absolute VL thresholds is the observation of change in VL values over time. Several studies have demonstrated that a significant or rapid change in VL is a better prognostic indicator than a single value, especially when the VL is relatively low (10² to 10³ copies/mL), where higher variability in assay results is expected. CMV replication kinetics, expressed as the “doubling time” (Td), has been used to stratify the risk of disease and the necessity of antiviral therapy following transplant. No definitive Td threshold exists; however, one study found that use of a Td of less than 2 days as the criterion for initiation of therapy was associated with significantly fewer days of viremia and antiviral therapy compared with use of a VL threshold of 1000 copies/mL. While effective, this approach requires frequent (2- to 3-day interval) collection of specimens, which may be inconvenient or impractical for outpatients. Importantly, once antiviral therapy has been initiated, serial CMV VL tests should be conducted no more frequently than once every 5 to 7 days, since the half-life of CMV DNA in blood is 3 to 8 days. Further, an increase in VL can be expected within 72 h of initiation of antiviral therapy in some patients and is not indicative of therapy failure.
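Under an assumption of exponential replication, the doubling time described above can be estimated from two serial VL measurements. A sketch (the example values are illustrative; only the <2-day criterion comes from the study cited above):

```python
import math

def doubling_time(vl1: float, vl2: float, days_between: float) -> float:
    """Viral doubling time Td (days), assuming exponential growth:
    vl2 = vl1 * 2 ** (days_between / Td)."""
    if vl2 <= vl1:
        raise ValueError("Td is defined only for a rising viral load")
    return days_between * math.log(2) / math.log(vl2 / vl1)

# A VL rising from 1000 to 8000 copies/mL over 6 days has doubled three
# times, giving Td = 2.0 days, at the margin of the <2-day criterion
# for initiating therapy.
print(round(doubling_time(1000, 8000, 6), 1))  # → 2.0
```

Because the estimate depends on the ratio of two measurements, both values must come from the same assay, and as noted above, frequent (2- to 3-day) sampling is required to make the estimate reliable.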
Current assays that have attained FDA and CE regulatory status for IVD use include the Qiagen artus CMV RGQ MDx, the Roche COBAS AmpliPrep/COBAS TaqMan CMV test, and the Roche COBAS CMV test (run on cobas 6800 or 8800 systems). These assays all contain internal quantitative controls calibrated to the IU standard and are indicated for use only with plasma specimens. Each system includes automated nucleic acid extraction followed by real-time PCR amplification and detection of target DNA. The assays have an LoD of 34 to 91 IU/mL and a linear range of approximately 10² to 10⁷ IU/mL. Additional quantitative CMV assays that have attained CE marking or are commercially available as ASRs for the development of LDTs can be used on various real-time PCR platforms and demonstrate limits of detection and quantification similar to those of FDA-cleared tests. However, the use of different extraction methods, thermocyclers, and calibration materials can contribute to variability in VL results. Therefore, regardless of assay, it is important to establish a baseline VL for patients entering a health care system or for those with VL results received from a different laboratory.
Antiviral treatment or prophylaxis plays an integral role in the management of patients at risk for CMV infection and disease. The gold standard method to assess antiviral susceptibility involves observation of viral plaques or CPE in cultured cell lines in the presence of increasing concentrations of an antiviral agent. This phenotypic method has the advantage of determining a specific inhibitory concentration for an antiviral and is independent of specific resistance mechanisms. The major drawbacks to this approach are the technical expertise required to carry out viral culture, the requirement for isolated virus, and the incubation time necessary to observe CPE, which can be as long as 4 weeks. In contrast, genotypic methods rely on the detection of specific mutations in the viral genome that are associated with antiviral resistance. These methods directly analyze virus present in clinical specimens (i.e., they do not require isolation of the virus) and can be completed in as little as 1 to 2 days; however, the accuracy and reliability of results depends on knowledge of the specific mutations leading to resistance. To date, the majority of mutations conferring resistance to the commonly used antiviral agents ganciclovir, cidofovir, and foscarnet map to specific regions of two CMV genes: UL97, encoding a viral protein kinase required for activation (phosphorylation) of ganciclovir, and UL54, encoding the viral DNA polymerase. Molecular assays using PCR to amplify selected regions of the UL97 and UL54 genes, followed by Sanger sequence analysis, have been developed by clinical and reference laboratories and are widely used to identify resistant CMV in clinical specimens. While mutations in UL97 are most common and result in resistance to ganciclovir only, CMV may also develop mutations in UL54 that result in resistance to one or more antiviral agents. Therefore analysis of both UL97 and UL54 should be conducted when patients fail therapy.
Primary infection with varicella-zoster virus (VZV) manifests as a disseminated syndrome involving viral replication within lymphoid and cutaneous tissue, giving rise to the classic symptoms of fever and vesicular rash associated with chickenpox. Following primary infection, VZV establishes latency primarily within trigeminal and dorsal root sensory nerve ganglia. Reactivation of latent virus results in secondary varicella disease known as herpes zoster (HZ), which often presents as a vesicular rash in related dermatomes. VZV is highly contagious and can be transmitted through aerosolization of respiratory secretions or direct contact with vesicular lesions. Seroprevalence of VZV was reported to be 90 to 100% by adolescence even prior to the availability of the VZV vaccine in 1995.
In the compromised host, primary infection or reactivation of varicella can cause severe illness including high fever, meningitis, encephalitis, pneumonia, hepatitis, retinal necrosis, or disseminated visceral disease. Retinal necrosis is common in individuals with advanced or uncontrolled AIDS and is accompanied by CNS involvement in up to 75% of cases. Visceral HZ in compromised individuals is an immediately life-threatening condition associated with fever, multiple organ involvement, and disseminated intravascular coagulation (DIC), with mortality rates that can exceed 50%. Importantly, these symptoms may be present in the absence of the characteristic vesicular rash associated with HZ (zoster sine herpete), which can impede clinical diagnosis. Populations at the highest risk for HZ are individuals with hematopoietic malignancies, advanced HIV/AIDS, or those undergoing immunosuppressive therapy following SOT or bone marrow transplant (BMT). The risk of HZ within 4 years of SOT has been estimated at 8 to 11% overall but is significantly higher in patients greater than 60 years of age. Additionally, the risk of post-transplant HZ was 3.4 times higher in patients who were seronegative at the time of transplant. Therefore establishment of serostatus is important for stratifying risk and identifying patients who would benefit from HZ vaccination prior to transplant.
Serologic diagnosis of primary varicella is not routinely performed because it requires comparison of acute and convalescent sera, which delays diagnosis by 10 to 14 days. The establishment of varicella serostatus is, however, beneficial for screening of health care workers and for pretransplant assessment of individuals with no record of vaccination or natural varicella infection. Common methods to assess serostatus include latex agglutination (LA), ELISA, fluorescent antibody to membrane antigen (FAMA), and CIA (Table 89.2).
The FAMA test detects patient antibodies against VZV envelope antigens expressed on the membrane of infected cultured cells. FAMA has been considered the gold standard for the establishment of serostatus because of its high sensitivity and correlation with protective immunity; however, the assay is technically demanding and time consuming. These factors have prevented widespread use of this method in clinical laboratories. LA tests are inexpensive, require no additional equipment, are simple to perform, and can be completed within 10 to 15 minutes. The sensitivity of LA tests (89 to 98%) is similar to FAMA and equivalent or superior to that of traditional ELISA, but specificity is low compared to FAMA, possibly because of the subjective interpretation of agglutination reactions. Newer ELISA and CIA tests based on purified envelope glycoprotein (gpELISA) have demonstrated better sensitivity (87 to 100%) than traditional ELISA tests (72 to 98%). Additional benefits of ELISA and CIA tests include an objective result and the ability to automate testing, which enables high-throughput screening of sera.
Establishment of serostatus in immunized persons can be difficult because the antibody response to the vOka vaccine is 10-fold lower than the response to natural infection. Additionally, vaccine-induced antibodies begin to decline and may become undetectable in 5 to 30% of individuals within 5 to 15 years post vaccination, resulting in 61 to 85% sensitivity of FAMA and LA in immunized populations. Newer gpELISA tests demonstrate better performance in detecting seroconversion following immunization (87 to 99% sensitivity). However, long-term follow-up studies have not been conducted to determine whether sensitivity remains high despite declining antibody concentrations.
NAATs are the preferred method for laboratory diagnosis of acute primary varicella and secondary HZ disease because of their increased speed and sensitivity compared with culture or direct detection (e.g., direct fluorescent antibody [DFA]) methods. In patients with rash consistent with HZ, culture was found to be 20 to 53% sensitive and DFA 82% sensitive when compared to NAAT. VZV NAATs may also be multiplexed to include HSV-1 and HSV-2, which can be valuable in the assessment of cutaneous and mucocutaneous lesions given their similarity in appearance. Specifically, one study using a multiplexed VZV and HSV NAAT reported that 11% of all positive VZV detections were from male or female genital sites. In these cases, a VZV-specific test may not have been ordered based on clinical presentation and location of the lesion, supporting the value of combined-target NAATs.
Qualitative NAATs also provide a sensitive method to detect VZV in sterile specimens such as CSF, BAL fluid, or vitreous fluid, where the presence of any amount of virus is likely causal. Pulmonary varicella or HZ can be severe in adults and, if untreated, carries up to 30% mortality. Supportive therapy and early treatment with acyclovir can greatly reduce mortality; however, the clinical diagnosis of VZV pneumonitis is difficult because physical and radiographic findings are non-specific. Direct analysis of BAL specimens using NAATs has enabled rapid and definitive detection of VZV in cases of pneumonitis, resulting in early appropriate therapy to improve patient outcomes. Similarly, clinical symptoms of necrotizing retinitis are nonspecific among the various herpesviruses commonly associated with this condition (VZV, CMV, HSV). VZV NAATs have been successfully used to analyze vitreous specimens and provide a rapid and definitive diagnosis. The use of aqueous humor rather than vitreous fluid is less invasive, requires as little as 10 to 20 μL of fluid, and also appears to be acceptable for laboratory diagnosis of viral retinitis.
Several FDA-cleared or CE-marked qualitative VZV NAATs are commercially available in addition to analyte-specific reagents and published primer sequences (Table 89.3). FDA-cleared and CE-marked assays are often cleared for specific specimen types, typically cutaneous lesions, and require additional verification studies to enable reporting of results on alternative specimen types.
Manufacturer | Regulatory Status | Target | Specimen | Instrument | Quantitative (AMR) or Qualitative (LoD) | Reference |
---|---|---|---|---|---|---|
Quidel Lyra Direct HSV + VZV | FDA-IVD, CE-IVD | Not available | Vesicular lesion, swab | ABI7500Fast DX | Qualitative | Manufacturer product insert |
Roche | CE-IVD | Not available | CSF, vesicular exudate (extracted nucleic acid) | LightCycler 2.0 | Qualitative | Manufacturer product insert |
Cepheid Benelux (Affigene VZV) | CE-IVD | ORF 62 | CSF, vesicular swabs, blood/plasma, respiratory, eye swab, tissue (extracted nucleic acid) | Rotor-Gene, ABI, iCycler | Qualitative; LoD 9.3 copies/mL | |
BioFire FilmArray ME | FDA-IVD, CE-IVD | Not available | CSF | FilmArray | Qualitative; LoD ∼10³ copies/mL | |
DiaSorin Simplexa VZV Direct | FDA-IVD, CE-IVD | Not available | CSF, vesicular swabs | Liaison MDX | Qualitative; LoD ∼10³ copies/mL | Manufacturer product insert |
Quidel Solana HSV 1+2/VZV | FDA-IVD, CE-IVD | Not available | Vesicular swabs | Solana | Qualitative | Manufacturer product insert |
LDT | n/a | ORF 28, ORF 29 | CSF, vesicular swabs, blood/plasma, respiratory, eye swab, tissue (extracted nucleic acid) | Open system | Qualitative; LoD 16 copies/mL | |
LDT | n/a | ORF 62 | Vesicular lesions or crusts | ABI Prism 7700 | Qualitative; LoD 13 copies/mL | |
LDT | n/a | ORF 29 | Whole blood, plasma, serum | Perkin-Elmer 9600 | Quantitative; whole blood 80–10⁶ copies/mL, plasma 20–10⁶ copies/mL | |
Importantly, up to 5% of individuals receiving the vOka vaccine may develop a characteristic rash. Discrimination between vaccine-induced lesions and acute infection with wild-type VZV in these patients may impact prognosis and infection prevention strategies; however, none of the currently marketed assays differentiates wild-type VZV from vOka. Differentiation has been reliably achieved using additional primers that target a specific polymorphism in ORF 62 and other vaccine-specific SNPs. Currently, this testing is available through the US CDC and other specialized reference laboratories.
It is difficult to assess the absolute sensitivity of NAATs for detection of VZV in various specimens because "gold standard" reference methods, including culture, are significantly less sensitive than NAAT. Analytic studies typically report a lower LoD of 10 to 200 copies/mL; however, a lack of standardization among LDTs makes clinical comparison of assays difficult. For example, the initial description of a VZV LDT reported 94% sensitivity when compared to a composite method of culture, DFA, and serology. A subsequent study reported the same LDT to be only ∼60% sensitive compared to a commercially available VZV NAAT. This highlights the importance of standardized test methods and the impact of the chosen gold standard comparator when reviewing the performance or comparative performance of molecular assays.
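The dependence of apparent sensitivity on the choice of comparator can be made concrete with a brief arithmetic sketch. The counts below are hypothetical, chosen only so the two scenarios reproduce the 94% and ∼60% figures cited above; they are not data from the referenced studies.

```python
# Hypothetical illustration: the same assay can yield very different
# "sensitivity" figures depending on the reference method chosen.
def sensitivity(true_pos: int, false_neg: int) -> float:
    """Sensitivity = TP / (TP + FN)."""
    return true_pos / (true_pos + false_neg)

# Against an insensitive composite reference (culture/DFA/serology),
# low-copy specimens the reference misses are never scored as false negatives.
sens_vs_composite = sensitivity(true_pos=47, false_neg=3)   # 0.94

# Against a more sensitive commercial NAAT, the same assay's misses on
# low-copy specimens now count as false negatives, lowering the figure.
sens_vs_naat = sensitivity(true_pos=30, false_neg=20)       # 0.60

print(f"{sens_vs_composite:.0%} vs composite; {sens_vs_naat:.0%} vs NAAT")
```

The assay has not changed between the two calculations; only the denominator of "true positives" defined by the comparator has.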
Quantitative NAATs enable the enumeration of VZV copies/mL in WB, plasma, or other bodily fluids. These assays can have a broad dynamic range, from less than 100 copies/mL to greater than 10⁷ copies/mL, providing both sensitive and accurate determination of VZV VL. Using these sensitive NAATs, low levels of VZV DNA have been detected in peripheral blood mononuclear cells (PBMCs) isolated from 0 to 3% of asymptomatic individuals, though it is not clear whether this represents differences in test sensitivity, subclinical reactivation, or latent cell-associated virus. In patients presenting with characteristic rash, a VL threshold of 20 to 80 copies/mL was found to be 81 to 86% sensitive within 2 days following rash onset, increasing to 100% for primary varicella and 80 to 89% for HZ within 1 week of rash onset. A more important role for quantitative VZV NAAT is the evaluation of compromised patients with disseminated disease or visceral HZ sine herpete. Plasma VL can reach levels of 10³ to 10⁶ copies/mL in these patients. Higher VLs were frequently correlated with more severe disease, and in all cases a rapid decrease in VL was observed following antiviral therapy. Importantly, the detection of VZV in plasma precedes clinical symptoms in some but not all patients, limiting the utility of serial monitoring to predict disease recurrence or to initiate antiviral therapy prior to symptom onset. These data support a diagnostic and prognostic role for quantitative NAAT in the assessment of VZV in immunocompromised patients; however, it must also be noted that these studies were conducted using nonstandardized LDTs. Therefore, specific VL thresholds correlated with infection, disease state, or response to therapy are not universally applicable. Serial VL testing should be conducted at a single laboratory, and specific VL thresholds for clinical decision points must be established by individual laboratories or institutions.
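Because serial VLs span several orders of magnitude, changes are conventionally reported in log₁₀ units rather than raw copies/mL. A minimal sketch of that conversion, using hypothetical pre- and post-treatment values (not from any cited study):

```python
import math

def log10_change(baseline_copies_per_ml: float, followup_copies_per_ml: float) -> float:
    """Change between serial viral loads, expressed in log10 units."""
    return math.log10(followup_copies_per_ml) - math.log10(baseline_copies_per_ml)

# Hypothetical serial plasma VZV VLs in a disseminated-disease workup
before_treatment = 5.0e5   # copies/mL at diagnosis
after_treatment = 2.0e3    # copies/mL after starting antiviral therapy

delta = log10_change(before_treatment, after_treatment)
print(f"{delta:+.1f} log10")  # a negative value indicates decline on therapy
```

Reporting the change on the log scale is what makes results from serial draws comparable, which is also why serial testing should stay within a single assay and laboratory.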
Quantitative analysis of VZV in saliva has also been proposed as a noninvasive method to aid in the diagnosis of disseminated HZ, HZ with CNS involvement, and cases of facial palsy without rash. Detection of VZV in saliva was 72 to 100% sensitive in patients with rash and clinically diagnosed HZ. VLs ranged from 10¹ to 10⁷ copies/mL and correlated with subjective pain scores. Importantly, the detection of VZV in saliva is not necessarily indicative of acute HZ: environmental or medical stress can induce subclinical reactivation and shedding of VZV in saliva in the absence of detectable blood VL or acute disease. Given these data, saliva may have merit as a noninvasive specimen type when confirming VZV in persons with rash or when investigating atypical presentations of HZ. However, results should be interpreted in the context of other clinical findings.
Human herpesvirus 6 (HHV-6) is a member of the Betaherpesvirinae subfamily and encompasses the variants HHV-6A and HHV-6B. These variants are serologically indistinguishable; however, nucleic acid analysis has demonstrated that greater than 95% of symptomatic infections are due to HHV-6B. The primary syndrome associated with HHV-6 infection is roseola (sixth disease), an almost exclusively childhood illness that accounts for 10 to 30% of emergency department visits for acute febrile illness in children less than 2 years of age. Like all herpesviruses, HHV-6 is capable of establishing lifelong latency following initial infection, presumed to occur primarily within mononuclear cells. Latency is maintained through integration of the viral genome into the host chromosome, a characteristic unique to HHV-6 among the herpesviruses. Seroprevalence of HHV-6 varies regionally and among populations but is typically greater than 90% by adulthood.
In the immunocompromised host, latent virus can reactivate to cause severe illness including pneumonitis, CNS disease, delayed bone marrow engraftment, or graft versus host disease (GVHD). The incidence of HHV-6 reactivation ranges from ∼0 to 80% (average 30 to 50%) in SOT or BMT patients and is slightly higher following BMT. In contrast to VZV, reactivation of HHV-6 typically occurs in the first month following transplant.
Serologic methods are available to identify individuals with antibodies to HHV-6; however, because of the high seroprevalence of this virus and the requirement for comparing acute and convalescent titers, they play no practical role in the diagnosis of acute or active HHV-6 infection.
Both qualitative and quantitative NAATs have been employed for the detection of HHV-6 in clinical specimens using various targets and methodologies. There are currently no FDA-cleared NAATs for detection of HHV-6 in serum, but CE-marked assays are commercially available and several reference laboratories offer quantitative or qualitative testing (Table 89.4).
Amplification Method | Detection Method | Target | Specimen | Quantitative or Qualitative | LoD or AMR | Reference |
---|---|---|---|---|---|---|
Nested PCR | Gel electrophoresis | orf57 | Plasma | Qualitative | 10⁰–10¹ copies/mL | |
LAMP | Reaction turbidity | orf31 | Plasma | Qualitative | 10¹–10³ copies/mL | |
Endpoint PCR | Capture EIA | orf89 | Plasma | Qualitative | 10⁰ copies/mL | |
Real-time PCR | Fluorescent probe | orf67 | Plasma, tissue | Quantitative | LoD 10⁰–10¹ copies/mL; AMR 10²–10⁶ | |
Real-time PCR | Fluorescent probe | orf67 | Plasma, serum, CSF | Quantitative | LoD 10⁰–10¹ copies/mL; AMR 10³–10⁷ | |
Real-time PCR | Fluorescent probe | orf31 | Plasma, whole blood | Quantitative | LoD 10¹–10² copies/mL; AMR NR | |
Real-time PCR | Fluorescent probe | NR | Plasma | Quantitative | LoD 10²–10³ copies/mL; AMR NR | |
A multiplexed panel for qualitative detection of HHV-6 in CSF has recently been cleared by the FDA for IVD use (see the meningitis section of this chapter). The specific syndrome or site of infection dictates the most appropriate specimen for analysis by NAAT, including CSF, BAL fluid, and blood/serum (when disseminated disease is suspected).
Quantitative NAATs are the preferred method for laboratory diagnosis of HHV-6 reactivation disease in post-transplant and other at-risk populations. Whole blood, isolated PBMCs, or serum can be used to monitor HHV-6 VL. HHV-6 DNA has been detected in 30 to 90% of peripheral blood or PBMC samples obtained from asymptomatic individuals, frequently at levels of 10³ copies/mL or higher, which complicates interpretation of a single VL result in a patient with compatible symptoms. A rapid increase of 3 to 4 log₁₀ in HHV-6 VL, reaching 10⁵ to 10⁶ copies/10⁶ PBMCs, has been observed between 0 and 14 days following onset of symptoms. The delayed rise in VL precludes the use of post-transplant serial monitoring to predict HHV-6 reactivation disease; however, a rise in HHV-6 VL can be useful in confirming the involvement of HHV-6 when other viral etiologies (CMV, VZV) are also in the differential. Additionally, VL is a useful prognostic marker, since high VL is correlated with delayed tissue engraftment and VL decreases following antiviral therapy. In contrast to peripheral blood or PBMCs, HHV-6 DNA is only rarely detected in sera of asymptomatic individuals and likely represents viral DNA released from lysed PBMCs rather than active viral replication. This suggests that detection of HHV-6 DNA in serum may be more specific for active disease than detection in peripheral blood, though sensitivity may be reduced.
The lack of assay standardization and of an international calibrator complicates interpretation of quantitative VL results and impedes the establishment of a universal threshold for clinical significance. Greater than 15-fold variability between HHV-6 assays at the upper end of quantitation, and greater than 200-fold variability in VL results for reference specimens containing ∼3 log₁₀ copies/mL, have been reported. The recent development of the first WHO International Standard for HHV-6 has the potential to reduce interlaboratory and interassay variation in reported VL values; however, differences in assay target, extraction method, and PCR instrumentation still contribute to variation in values obtained from different tests. Therefore, it is recommended to use the same test and laboratory when monitoring HHV-6 VL, with importance placed on the change in VL rather than the absolute value.
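To relate the fold-variability figures above to the log₁₀ scale used for serial VL monitoring, note that an N-fold difference corresponds to log₁₀(N) log units, a simple conversion sketched here:

```python
import math

def fold_to_log10(fold: float) -> float:
    """Express an N-fold difference between assay results in log10 units."""
    return math.log10(fold)

# The interassay spreads cited above, restated on the log scale:
print(f"15-fold  ≈ {fold_to_log10(15):.1f} log10")
print(f"200-fold ≈ {fold_to_log10(200):.1f} log10")
```

A 200-fold spread between assays (≈2.3 log₁₀) is thus comparable in magnitude to the 3 to 4 log₁₀ rise associated with reactivation disease, underscoring why absolute values from different tests cannot be compared directly.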