Abstract

Background

Cytometry, or “cell measurement,” can describe any process by which individual biologic cells are counted or characterized, whether or not a human observer is involved. An apparatus used in the process is called a cytometer. From the 1950s on, cytometers, nearly all automated to some degree, have replaced microscopy in an increasing number of applications in both clinical and research laboratories. Some chemical assays can also be done using cytometers or similar instruments.

Flow cytometers, in which individual cells are measured as they pass through a series of optical or electronic sensors (or both), represent the majority of instruments now in use, and at least a plurality of clinical cytometric analyses is performed on cells from the blood and immune system. Apparatus, reagents, and other tools for cytometry now represent a multibillion-dollar market.

Although the sophistication and cost of high-end cytometers continue to increase, advances in optics and electronics during the past two decades could make cytometric technology affordable and applicable for a broader range of tasks worldwide within the next few years, including point-of-care assays for diagnosis and management of infectious diseases in both affluent and resource-poor countries.

Content

This chapter provides a historical overview of how cytometry evolved from microscopy; explains how cytometers work, with examples of what is measured and why; and considers some likely directions for future developments.

Introduction: Planet cytometry and its inhabitants

Cytometry, now broadly defined to include counting, classification, and characterization of biologic cells and similarly sized objects, began when cells were first discovered in the late 1600s. The discovery of pathogenic bacteria and of disorders such as anemias, leukemias, and malaria, in which cellular changes in the blood could be correlated with clinical course, brought microscopy into clinical use in the mid-1800s. Until about 1950, cytometry depended on human observers using microscopes. From then on, increasingly sophisticated instruments known as cytometers have replaced microscopy wherever budget and infrastructure allow. Cytometry is a complex technology, but the most complex cytometers, although considerably larger, are much simpler than the simplest cells.

Clinical chemistry, one central discipline of this textbook, is generally defined as a subset of clinical pathology that excludes measurements of cells. Chapter 16, on Optical Techniques, in the current and previous editions of this book includes a brief discussion of cytometry, emphasizing flow cytometry. That chapter is a prerequisite to this one, in which the current editors have asked me to provide more detail and allowed me relatively free rein with respect to both content and style. Unless you already know a great deal about cytometry, you may want some additional sources of information even before you continue reading this chapter.

My 2003 book, Practical Flow Cytometry, referenced in Chapter 16, has the advantage of being available free. The entire book and individual chapters can be viewed, courtesy of Beckman Coulter, at https://www.beckman.com/resources/reading-material/ebooks/practical-flow-cytometry . The first chapter provides a copiously illustrated 60-page introduction to flow cytometry with some discussion of other methodologies. Starting with the overture might help you recognize some of the themes in the opera.

Numerous books and serial periodical volumes include detailed protocols for various types of cytometric analyses. The most comprehensive series is Current Protocols in Cytometry. You may have noticed that often you can no longer get enough details to duplicate the experiments described in a typical scientific paper even if you combine the Supplemental Information with what appears in the Materials and Methods section. Current Protocols solicits articles from authors whose papers have described new methods and significant advances, and its articles are typically reviewed and revised every few years. Core cytometry laboratories in large institutions should subscribe; the cost is a small fraction of what they typically spend on instrument service contracts, and getting good data requires working protocols and working instruments.

There are now international, national, regional, and local cytometry organizations that run meetings and courses on cytometry. The International Society for the Advancement of Cytometry (ISAC) ( http://www.isac-net.org ), formerly the International Society for Analytical Cytology (ISAC), was founded as the Society for Analytical Cytology (SAC) in 1978. (Could ISAC also stand for International Society for Acronym Conservation?) By the mid-1980s, clinically oriented ISAC members had organized an annual meeting on Clinical Applications of Cytometry (CAC), separate from ISAC’s meeting; by 1992, they had formed the Clinical Cytometry Society (CCS), affiliated with ISAC as an associate society. By 2011, CCS had become the International Clinical Cytometry Society (ICCS) ( http://www.cytometry.org ).

The journal Cytometry, founded by SAC, began publishing in 1980 and fragmented into two journals as SAC fragmented into two societies. As of 2020, Cytometry Part A is the official journal of ISAC, and Cytometry Part B is the official journal of both ICCS and the European Society for Clinical Cell Analysis (ESCCA) ( http://www.escca.eu ), founded in 2006. The major cytometry societies frequently hold joint meetings.

Cytometry is also emphasized in the meetings of the International Society for Laboratory Hematology ( http://www.islh.org ), established in 1992, and in its International Journal of Laboratory Hematology.

Since 1993, active discussions of cytometric topics have been posted on an Internet mailing list set up by Paul Robinson of Purdue University; you can join the list at https://lists.purdue.edu/mailman/listinfo/cytometry . Everything that has been posted has been archived, and participants tolerate questions from newbies, but some of us may lose patience with those whose posts indicate an obvious desire to avoid doing any work to get information.

The Internet can be both a help and a hindrance. PubMed searches on Jan. 19, 2020, on “cytometry” and “flow cytometry” in the title or abstract returned 138,099 and 133,498 citations, respectively. The earliest of the “flow cytometry” citations are from 1977, making the claims on many websites that the term was introduced at a 1978 conference untenable; the meeting actually occurred in 1976. Wikipedia comes closer but asserts that fluorescence-based flow cytometry was introduced in 1968, neglecting publications dating back to 1964. These classics, representing cytometry’s oldest oldies but goodies, are in the MEDLINE database but were not retrieved in either PubMed search.

The Internet also helps perpetuate a major urban legend of cytometry: the notion that “forward scatter” (FSC), the intensity of light scattered at small angles to an illuminating beam by cells, provides a generally accurate measure of cell size. The inaccuracies often encountered in practice may be due to both cellular and instrumental factors.

Nicholas Negroponte, founding director of MIT’s Media Lab, distinguished between the properties of atoms, the fundamental units of matter, and bits, the fundamental units of information. He mentioned that you can’t eat bits; in the current big data era, however, I worry about choking on them. Cytometers generate big data, but big data can be bad data.

Cytometry and cytometers: A broad spectrum

Most of the tens of thousands of automated and semiautomated cytometers now in use in clinical and research laboratories worldwide make multiple optical or electronic measurements (or both). There are cytometers that measure single cells and others that measure single molecules, some that can measure small multicellular organisms or cell spheroids, and some that are devoted to measuring ligand binding to color-coded plastic beads. There have been and will be more cytometers in space, and there are some aboard ships and some that sit underwater for months at a time capturing images of plankton and posting them on the Internet.

The 2014 Nobel Prize in Chemistry was shared by three PhDs in physics (Eric Betzig, Stefan Hell, and William Moerner) for their work on super-resolved fluorescence microscopy. They built and used apparatus consisting of esoteric and expensive lasers and other sophisticated electro-optical devices to study the behavior of single molecules in single cells; they were and are doing cytometry, sometimes in the living room as well as the laboratory, and it works. Of course, some chemists might have complained that the Nobel Prize had gone to Hell.

At the other end of the complexity spectrum, a growing crowd of enthusiastic academics ranging from high school students to occupants of endowed chairs promise rapid and affordable point-of-care malaria diagnosis in resource-poor countries using simple attachments that allow mobile phones to take “cellfies” of blood. This is also cytometry. Some of it might work.

The simplest cytometers may measure only a single property (or, to use standard cytometric jargon, “parameter”) of each cell analyzed; some instruments of this type can be bought for a few thousand US dollars or built for a few hundred. Others may have 10 or more light sources and dozens of detectors and cost hundreds of thousands of dollars. The devil is, as always, in the details. A reasonably comprehensive list of parameters measurable by cytometry appears in Box 28.1 ; only a few of the most relevant ones are discussed here.

BOX 28.1
Cellular Parameters Measurable by Cytometry
AC, Alternating current; DC, direct current.

Intrinsic structural parameters (no probe added)

  • Cell size (DC impedance; dye exclusion volume; light scattering [with caveats])

  • Cytoplasmic granularity, vacuoles, etc. (large angle light scattering; AC impedance)

  • Birefringence (depolarized light scattering; polarized light transmission)

  • Pigment content (e.g., photosynthetic pigments)

Intrinsic functional parameters (no probe added)

  • Redox state (endogenous pyridine and flavin nucleotides)

  • Fluorescent protein expression

Extrinsic structural parameters

  • DNA content

  • DNA base ratio

  • Nucleic acid sequence (DNA and RNA)

  • RNA content (double stranded)

  • Total double-stranded nucleic acid

  • Total protein (large angle light scattering)

  • Lipids

  • Surface and internal antigens

  • Surface sugars (lectin-binding sites)

  • Gram staining status

Extrinsic functional parameters

  • Surface receptors (including phage receptors)

  • Surface charge

  • Membrane fusion or turnover

  • Cell division

  • Membrane integrity (“viability”)

  • Membrane permeability (dye, drug, substrate uptake, efflux)

  • Intracellular receptors

  • Enzyme activity (chromogenic substrates)

  • Oxidative metabolism (chromogenic substrates)

  • Sulfhydryl groups (glutathione)

  • DNA synthesis

  • Membrane potential (cytoplasmic and mitochondrial)

  • Intracellular and compartmental pH

  • Intracellular [Ca²⁺]

A rudimentary cytometer intended for counting cells and perhaps determining their “viability,” which is usually inferred from whether various indicator dyes do or do not get in or out of a cell, requires little operator training. You put your cell suspension in the magic disposable counting chamber (think consumable costs...), which may be preloaded with magic juice (think again), press the button, and the magic numbers appear. If this low-end instrument is like a kazoo, those at the high end are like large church or theater organs; they require an operator who can perform at the virtuoso level.

To buy a Stradivarius violin (“a large organ” would not be appropriate from here on) or a Formula One Ferrari (if you don’t like the musical analogy), all you need is money. To get paid to play the Stradivarius or drive the Ferrari, you need knowledge and skill, and the people who pay you can easily assess your level of competence. This is not always the case in the cytometry world or in other areas of science in which highly sophisticated apparatuses must be used to generate experimental results. When more instruments are available than there are people who know how to use them properly, incorrect information gets into the literature, because the same shortage of technical expertise causes failures of the peer review process for both manuscripts and applications for funding.

At present, when one sets out to design an instrument for clinical use, it is necessary to document the principles of its operation and the performance of its hardware and software components and to develop appropriate standards and calibrators. Cytometers in general and fluorescence flow cytometers in particular were first commercialized for research use at a time when device regulation was rudimentary. They entered clinical laboratories via the back door when they were seen to provide the best means to assess the status of patients with HIV/AIDS and various other disorders of cells from the blood and immune system. It has taken decades to implement systems for quality assurance in these areas of application; similar effort will be required to extend the range of application into other areas (e.g., diagnosis and monitoring of treatment of bacterial, parasitic, and viral diseases).

Making the best use of the present-day technology of cytometry and improving it to deal with problems that now seem ripe for it to solve require more than superficial knowledge about cells and about the physical sciences that provide the basis for analyses. Although cells have been with us for billions of years, we became aware of their existence only about 350 years ago. What could be accomplished at any time since was a function of “who knew what when,” and looking at that is as good a way as any of getting a feel for the subject. The science in cytometry, like science in general, doesn’t always “march on.”

Cells and apparatus: Finding the words

Cytometry started around the beginning of the Age of Enlightenment. Although this period, which ended around 1800, was not noted for actual improvements in illumination technology, it yielded new ways of looking at things and new ways of seeing them, with profound implications for the arts, the sciences, and philosophy.

The use of lenses (so named because they were lentil shaped) to correct vision had begun in Italy around 1300, but it was not until about 1600 that Italian and Dutch spectacle makers combined them to bring faraway objects closer, thereby inventing the telescope, and to bring objects otherwise too small to see into view, inventing the microscope. Both instruments, named by his compatriots, were used by Galileo Galilei; he might have stayed out of trouble had his interests been restricted to microscopy. Others’ work with compound (two-lens) microscopes continued in Italy and elsewhere.

The Royal Society of London was established in 1660 to promote scientific experiment. Robert Hooke, its curator of experiments, was commissioned by the Society to write a book about his own work with compound microscopes. Micrographia: or some Physiological Descriptions of Minute Bodies made by Magnifying Glasses with Observations and Inquiries Thereupon, impressively illustrated by the author, appeared in January 1665. Hooke presented images of thin longitudinal and transverse slices of cork, showing empty spaces bounded by what would now readily be recognized as the remnants of cell walls. He called the spaces cells because they resembled the more geometrically regular cells of a honeycomb. (Don’t believe the Internet about the cells in a monastery or prison.) The book, written in still-intelligible English and freely available online, remains a classic work of popular science. The famous diarist Samuel Pepys noted that on the evening of January 21, 1665, he sat up until 2 am reading it, calling it “the most ingenious book that ever I read in my life,” but Hooke does not describe or illustrate actual cells in it, and there is no evidence that he saw them until the 1670s.

He was then asked to verify reports to the Royal Society from Antoni van Leeuwenhoek, a self-taught Dutch fabric merchant who had observed blood cells, sperm, protozoa, and bacteria using simple microscopes containing only a single lens but providing much higher magnification than was available from the compound microscopes then used by Hooke and others. Micrographia contained highly magnified images of pieces of woven fabrics; it has been speculated that van Leeuwenhoek had seen these and became interested in using a microscope to assess the quality of his wares, with his initial interest progressing to a highly productive obsession that brought him worldwide fame.

Hooke examined microorganisms as van Leeuwenhoek had, in glass tubes drawn “fine as a human hair.” Both men would thus have been entitled to claim they were using “microfluidics” had the term existed. Van Leeuwenhoek concluded that microorganisms and sperm were alive based on their motility, describing them as “little animalcules,” with the “anima” part suggesting possession of a soul. He imagined he saw them copulating and giving birth, and he calculated the sizes their internal organs ought to be. He characterized some other objects he saw only as “little globules,” including yeasts, the fermentative and reproductive capacities and viability of which were not immediately obvious. Many of the other globules were likely to have been optical artifacts, which plagued microscopists well into the 1800s.

Blood cells, which van Leeuwenhoek did not observe until the mid-1670s, had been visualized in the 1660s by others using relatively high-power compound microscopes. Marcello Malpighi, whose 1661 discovery of capillaries firmly established William Harvey’s theory of blood circulation, later described rouleaux of red blood cells (erythrocytes, RBCs). It is also likely that Baruch Spinoza saw blood cells; his 1665 exchange on philosophy with Henry Oldenburg of the Royal Society envisioned a worm living in the blood that might encounter the “particles” comprising blood, lymph, and so on. Spinoza made his modest living by building microscopes with very small objective lenses, which offered greater magnification, with the associated disadvantage of a smaller depth of focus, and were capable of visualizing mammalian cells, if not bacteria. Like Galileo, he might have kept out of trouble by focusing on microscopy rather than on the wider world that occupied most of his thoughts.

The word “focus” itself appears only a few times in Micrographia, in a conjecture about refraction in the atmosphere. The Latin noun describes a hearth or fireplace. The cognate words in Romance languages, including French feu, Italian fuoco, and Spanish fuego, all mean “fire.” Because convex lenses were far more widely used as burning glasses than as microscope elements, it is reasonable to assume that the focal point, or focus, of a lens was so named because it is the point at which a fire starts most quickly. Kepler used the term in this sense in 1604, as did Boyle in 1685 and Isaac Newton in his Opticks in 1704. The use of “focus” as a verb, either to describe adjusting an optical system or in the more figurative sense in which it is used in the previous paragraph, began after 1800. In the following pages, as we begin to focus on the optics of cytometers, it will become apparent that focus is much less important in most flow cytometry and much image cytometry than in microscopy.

Hooke made striking drawings of what he saw; van Leeuwenhoek patiently described details he observed to an artist, until what he deemed an acceptable drawing emerged. Both men counted objects under the microscope, and both attempted to measure their size. Because England had an established measure of length, Hooke could express dimensions in fractions of an inch. With no equivalent standard in place across Holland, van Leeuwenhoek compared his specimens’ sizes to those of grains of sand or “the eye of a large louse,” both widely available. Thus both men can be said to have practiced cytometry. The word, however, was not coined until about 200 years later, by which time Hooke’s term for the empty spaces in cork had come to describe their living occupants and the elementary component parts of all known life forms. The “cyto-” prefix for “cell,” introduced into biologic terminology in the mid-1800s, is also found in the Greek word for cells in a honeycomb or wasps’ nest; kudos to its unknown originator. The word “cytometry” itself serves to remind us that the cell comes first; the gadgets add more to our understanding of cells than the cells do to our understanding of gadgets.

Cells: Getting to relevance

The physics in Micrographia is considerably more advanced than the chemistry in Robert Boyle’s The Skeptical Chymist, published in 1661. Although Boyle (who had employed Hooke as an assistant) had rejected the Aristotelian view of matter in favor of atomism, he retained the alchemists’ belief in transmutation, and Newton held similar views. Only carbon, sulfur, iron, tin, lead, copper, mercury, silver, and gold were known as distinct entities before the Middle Ages; arsenic, antimony, bismuth, and zinc were added to the list by 1600, but the next element recognized, phosphorus, was not discovered until 1669. Many early microscopists anticipated that improvements in optics would quickly enable them to visualize atoms. Biology was at a primitive level; van Leeuwenhoek and Francesco Redi had presented experimental evidence against spontaneous generation before 1700, but belief in the phenomenon persisted until Pasteur refuted it conclusively in 1862, and the notion of a “vital force” persisted into the 20th century.

Medicine was largely the domain of herbalists and quacks, who could offer no protection against or treatment for any human disease; plague ravaged London only a few months after Micrographia appeared and was notably documented in Pepys’s diary. One notable advance in medical treatment had, however, been made during the mid-1600s. After the Spanish and Portuguese and their African slaves had brought malaria to Central and South America, Quechua natives in Peru discovered that the disease could be cured by the bark of the Cinchona tree, and some of the Jesuits charged with converting the natives to Christianity returned to Rome with bark samples. The “Jesuit powder” became an effective, though scarce and expensive, remedy for malaria, which until the 1950s was a problem in northern and southern regions of Europe and the Americas. No similarly effective treatment for any other infectious disease appeared before 1900.

For almost 200 years from the time of Hooke and van Leeuwenhoek, microscopists remained motivated more by intellectual curiosity than by a need or desire to identify causes and mechanisms of or cures for human diseases. Today’s students of biology and medicine may learn the names of Matthias Schleiden and Theodor Schwann as principal proponents of the theory that all living things are composed of cells and of Rudolf Virchow as having set pathology firmly on a cellular foundation. The late Sir Henry Harris’ The Birth of the Cell paints a more accurate picture of the larger cast of characters involved and the problems and controversies that arose.

By the early 1800s, when the Age of Enlightenment ended, the steam engines of the Industrial Revolution were running on coke, produced by “destructive” distillation of coal, which also yielded illuminating gas, a literal source of enlightenment. Until then, the Sun had been the only available high-brightness light source for microscopy; candles and lamps, dim and dirty, were progressively less useful as illuminators for microscopy as magnification increased. Then as now, however, using oblique illumination rather than transmitted light provided an approximation to modern dark-field microscopy, often allowing visualization of objects and structures below the resolution limit of transmitted light and increasing contrast in unstained material.

Before Daguerreotype photography was combined with microscopy in the 1840s, there were no objective means of recording the results of microscopists’ observations. Although Robert Koch, a few decades later, championed the use of photography, there were no precise ways to quantify light intensity for some time thereafter. Johann Lambert had coined the word “photometrie” as the title of the book he published in 1760 that described a method for estimating intensities of different light sources by visual comparison. In 1852, August Beer demonstrated that the light absorbance of relatively dilute solutions of colored substances was proportional to the concentration of the solute, making visual semiquantitative chemical analysis by spectrometry and colorimetry possible.

Until the mid-1800s, the relatively poor optical quality of microscopes made it difficult to distinguish cellular structures from artifacts, especially when the material being observed did not contain either pigments or constituents that differed significantly from one another in refractive index. RBCs were among the first cells to be identified by microscopy because of their content of hemoglobin, the material responsible for the red color of blood. By 1872, this was known to be an iron-containing, oxygen-carrying protein of high molecular weight. Anemia, literally a lack of blood, is typically easy to recognize from the color of skin in fair-skinned individuals and of mucous membranes in almost everybody with the condition.

By 1880, calibrated chambers had been developed in which cells in suitably dilute suspensions could be counted at relatively low (100×) magnification; they were named “cytometers.” Other instruments, “hemoglobinometers,” permitted bulk determination of hemoglobin by visual colorimetric comparison with dyed standards; this was one of clinical chemistry’s first tasks. Neither blood cell counting nor hemoglobinometry would be done routinely using electronic apparatus until around 1960, and although a few modern clinical hematology counters can measure hemoglobin in individual cells using optical techniques, the majority incorporate photoelectric colorimeters and measure hemoglobin in bulk in lysed blood.

Virchow had defined leukemia as an excess of leukocytes (white blood cells [WBCs]) in the 1850s, but it was not until the late 1870s that staining of cells, advanced considerably by Paul Ehrlich’s experiments as a medical student with then newly synthesized aniline dyes, made it much easier to identify and distinguish different WBC types. The task was further facilitated by improved microscope optics such as the achromatic and apochromatic and oil-immersion lenses and substage condensers developed by Zeiss and other manufacturers. Nonetheless, it was not until the 1890s that it was accepted that cells gave rise to new cells only by mitotic division, and the role of the chromosomes (the name of which provides the clue that they were not readily visible without staining) in heredity was not elucidated until the next century.

Louis Pasteur and Koch used microscopy to establish specific microbial causes for diseases such as anthrax, tuberculosis (TB), and cholera, all of which, for one reason or another, remain of concern even in the modern world. (Paul de Kruif’s decidedly unmodern Microbe Hunters is still worth reading for this story.) Ehrlich and Koch, working together, developed practical stains to distinguish mycobacteria, which cause TB and leprosy, from other bacterial species. Inspired by them, Christian Gram developed the staining technique that, although now considerably modified, still bears his name and is universally used to classify bacteria.

Although the hematologic measurements of the 1880s could diagnose anemias and leukemias, there were few, if any, effective treatments for the former and none for the latter. In 1880, however, Alphonse Laveran discovered the protozoan parasites that cause malaria. He was able to work with unstained blood specimens, looking for a dark brown pigment, hemozoin, that had been known for more than a century to accumulate in the blood and tissues of malaria patients, and, having observed motile pigment-containing objects of various sizes and shapes in a fresh blood sample, concluded that they must represent different forms of a pathogenic parasite. The same criterion van Leeuwenhoek and Laveran used to identify living organisms works today. Motility remains a useful indicator of viability; species on other planets that might not use the same genetic code apparently shared by all terrestrial organisms might elude detection by molecular methods now in vogue, but if they can run, they can’t hide.

Laveran noted in his 1907 Nobel Lecture that his work was not widely accepted until methods for staining malaria parasites were developed in the decade after his original report. When it became relatively easy to detect the presence of parasites in a febrile patient, one could administer quinine, determined in 1821 to be the active ingredient of Cinchona bark, known for centuries to be effective for treatment, and follow the disappearance of the parasite by microscopy. As noted earlier, malaria was far more common worldwide in the 19th century than it is now, and quinine, although expensive, was available at least to the minority of patients who could reach and afford medical care.

Ironically enough, the synthetic dyes introduced for cell staining by Ehrlich and others were themselves byproducts of an earlier effort to improve malaria treatment. By the 1850s, the structure of organic compounds was beginning to be understood, and methods for synthesizing them were being developed. Aniline, a nitrogen-containing organic compound, could be made easily and cheaply from coal tar, the waste product remaining after coke and illuminating gas were produced from coal. The German chemist August Wilhelm von Hofmann, teaching in London, aware that quinine had been found to be an organic nitrogen compound, thought it might be made profitably from aniline and had his student, William Perkin, attempt a synthesis. The unexpected product was not colorless, as was quinine, but an intense purple, and, as mauve, became wildly successful as a textile dye (Queen Victoria favored it), making Perkin rich and giving rise to dye industries in England, Germany, France, and elsewhere.

By the 1870s, Ehrlich was able to obtain specimens of numerous dyes with different colors and chemical properties, which he correctly suspected might stain different parts of different cells differently based on their chemical affinities. The internal structures of unstained WBCs were not easily visible in transmitted light microscopy. Mixtures of acidic and basic dyes derived from Ehrlich’s work produce characteristic staining patterns in WBCs, defining five major types.

Three types, called granulocytes, contain cytoplasmic granules; those of eosinophils stain most intensely with acid dyes, such as eosin; those of basophils stain most intensely with basic dyes, such as methylene blue; and those of neutrophils, the most common granulocytes, stain with both dye types. The “mononuclear” WBCs, lymphocytes and monocytes, contain fewer and smaller granules. B and T lymphocytes are, respectively, the effector cells of humoral and cellular immune responses (to oversimplify tremendously); monocytes present processed antigen to lymphocytes and are also, like granulocytes, phagocytic. In preparations stained with standard stains for blood, such as Giemsa’s, it may be difficult to tell small monocytes from lymphocytes and large lymphocytes from monocytes.

After Laveran’s discovery, Ehrlich, having found methylene blue to be taken up by malaria parasites, speculated that this dye might exert selective toxicity against them, and, in 1891, reported antimalarial action in two cases. He had already established close ties to the dye industry in his search for better stains to facilitate diagnosis; his subsequent successful demonstration of what he named “chemotherapy” motivated companies to modify molecules to treat diseases. The nascent drug industry provided Ehrlich with hundreds of compounds to test for activity against syphilis; this first “high-throughput screen” yielded the first two effective synthetic antimicrobial agents, Salvarsan and Neosalvarsan.

Methylene blue itself is still used as an antimalarial, remaining effective against parasites resistant to quinine and other drugs, but does not, either by itself or combined only with eosin, produce optimal staining of malaria parasites in blood smears. Between 1891 and 1904, staining was improved by addition of azure dyes, themselves oxidation products of methylene blue, to eosin–methylene blue mixtures. Dyes in azure-augmented stains interact to color parasite cytoplasm an intense blue and nuclei a contrasting red, facilitating identification of the earliest stages of malaria parasites in RBCs. Gustav Giemsa’s 1905 dye combination, still almost universally used for malaria microscopy, is among the easiest to prepare and most consistent in staining properties; it has also remained a standard for morphologic hematology. Modern cytometry offers better alternatives.

From microscopy to cytometry, with help from Einstein

By 1900, more than 2 centuries after Hooke published Micrographia, it was clear that all known organisms were composed of cells, that those multicellular organisms that reproduced sexually originated from a single cell formed by the fusion of an egg and a sperm, and that eukaryotic cell reproduction involved mitotic nuclear division. It was accepted that many things that happened in cells were mediated by proteins and suspected that nucleic acids might be involved in heredity. The chemical and biochemical details remained unclear. Mendel’s work on inheritance was newly rediscovered, but genes had not yet been named. DNA would not be established as the genetic material until the 1940s (with some help from cytometry), and the mechanism and machinery by which information encoded in DNA could be used to direct synthesis of proteins would not be clarified until the 1960s.

Counting cells scientifically: The Poisson distribution

Before 1900, statistical methods were not rigorously applied in either clinical medicine or experimental science. The journal Biometrika began publication in 1901. In 1907, an article by “Student” demonstrated that the theoretical minimum error of cell counts with a hemocytometer varies with the square root of the number of cells actually counted, fitting the Poisson distribution described 70 years earlier. Poisson statistics apply to cells counted by any method and to other objects encountered (and counted) in cytometry, notably the photoelectrons generated by scattered or emitted light from cells interacting with cytometers’ detectors. William S. Gosset had published under the pseudonym “Student” because his employers at the Guinness brewery were concerned that their competitors might benefit as they had from statistics (and cytometry); he had counted yeasts rather than blood cells in the hemocytometer.
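In modern terms, “Student’s” result reduces to the square-root law: for a Poisson-distributed count, the variance equals the mean, so the best achievable coefficient of variation for a count of N cells is 1/√N. A minimal sketch in Python (the example counts are illustrative, not from the 1907 paper):

```python
import math

def minimum_cv(n_counted: int) -> float:
    """Best-case coefficient of variation for a Poisson count.

    The variance of a Poisson count equals its mean, so the standard
    deviation of a count N is sqrt(N) and the CV is sqrt(N)/N = 1/sqrt(N).
    """
    return 1.0 / math.sqrt(n_counted)

for n in (100, 1_000, 10_000):
    print(f"{n:>6} cells counted -> minimum CV = {minimum_cv(n):.1%}")
# 100 -> 10.0%; 1,000 -> 3.2%; 10,000 -> 1.0%
```

No instrument can do better than this statistical floor; it can only add error on top of it.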

In 1910, Ronald Ross, who had won the 1902 Nobel Prize in Medicine and would soon be knighted for his discovery that mosquitoes transmitted malaria, applied Gossett’s findings to calculate how much blood he and his fellow malariologists needed to analyze to detect small numbers of parasites with reasonable precision. The required amount, several microliters, spread thickly on a glass slide, would take an observer over an hour to examine thoroughly using a high-power oil immersion lens. This might be acceptable for research but would be difficult to implement on a regular basis for clinical use; there was, however, no technology even imaginable as a replacement for a human observer at that time.
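Ross’s reasoning can be reconstructed from the same statistics: if parasites are randomly distributed at a density of c per microliter, the number present in an examined volume V is Poisson with mean cV, so the probability of seeing at least one is 1 - e^(-cV). The sketch below is a reconstruction under that assumption, with an illustrative parasite density rather than Ross’s own figures:

```python
import math

def volume_needed_uL(parasites_per_uL: float, confidence: float) -> float:
    """Blood volume (in uL) that must be examined to find at least one
    parasite with the given probability, assuming Poisson-distributed
    parasites: solve 1 - exp(-c * V) = confidence for V."""
    return -math.log(1.0 - confidence) / parasites_per_uL

# Hypothetical low-density infection: 1 parasite per uL of blood.
print(f"{volume_needed_uL(1.0, 0.95):.1f} uL")  # ~3.0 uL
```

Several microliters at low parasite densities is exactly the regime Ross described, which is why thick films and long examination times were unavoidable.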

Counting cells using a hemocytometer and a microscope requires only that the observer be able to distinguish the cells of interest from everything else in the sample. Even that level of discrimination may not always be necessary. Consider the cellular ecology of human blood, a common sample for cytometry.

RBCs are the most abundant (∼5,000,000/μL whole blood); their very numbers require a sample to be diluted several hundredfold to keep cells separated enough to be counted. The RBC concentration in whole blood is calculated from the number counted and the known dilution factor. Normal RBC volume is approximately 90 fL.
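The arithmetic of a chamber count is simple enough to state as code. A minimal sketch; the examined volume and dilution below are typical of classical RBC hemocytometry but are assumptions for illustration, not values from the text:

```python
def cells_per_uL(cells_counted: int, volume_examined_uL: float,
                 dilution_factor: float) -> float:
    """Concentration in the undiluted sample: the count per volume actually
    examined, referred back to whole blood via the dilution factor."""
    return cells_counted / volume_examined_uL * dilution_factor

# 500 RBCs counted in 0.02 uL of a 1:200 dilution -> 5,000,000 RBC/uL.
print(f"{cells_per_uL(500, 0.02, 200):,.0f} RBC/uL")
```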

The typical WBC concentration in normal blood is 5000 to 10,000/μL, meaning that only one or two WBCs accompany each 1000 RBCs. WBCs vary in size from approximately 200 fL (lymphocytes) to more than 500 fL (monocytes), but there are larger lymphocytes and smaller monocytes. Although their hemoglobin content, lack of a nucleus, and smaller size make RBCs simple to discriminate from WBCs by microscopy or cytometry, most modern automated cell counters, which simply measure approximate cell size, do not make the distinction and instead include WBCs in RBC counts, with negligible effects on accuracy.

It had been known since the early days of hemocytometry that RBCs could be lysed and WBCs preserved for counting by diluting a blood sample with a hypotonic medium or with chemicals such as acids or detergents, and the same dilution procedure was later adapted to flow cytometric counters, which typically count WBCs in blood diluted approximately 1:10.

Blood platelets (thrombocytes), actually cell fragments that break off from large polyploid megakaryocytes, which remain in the bone marrow, are normally present in blood at concentrations of 100,000 to 400,000/μL. They are much smaller (volume ∼10 fL) than RBCs or WBCs and are therefore easily distinguishable.

Although it seemed likely in 1910 that cell counts and measurements of cells’ chemical constituents could be improved if the human eye and brain could be replaced with a more precise means of quantifying light intensity, such a means did not appear until about 20 years later.

Albert Einstein won his 1921 Nobel Prize in Physics for work on the photoelectric effect; he had shown in 1905 that the generation of electrical current by certain materials exposed to light was best explained by light itself being made of particles of defined energy, which we now call photons or quanta. Photocells—devices that either produced electric current or changed their electrical properties in response to illumination—had been described in the late 1800s, but Einstein’s work contributed to their development and commercialization as devices for light measurement. The former patent clerk received a US Patent (2,058,562) in 1936 as co-inventor of a camera in which exposure was automatically controlled using a photocell. Although the apparatus was never manufactured, photocells and related detectors were soon combined with vacuum tube electronics to increase the accuracy, sensitivity, and precision of colorimeters, spectrophotometers, and other optical instruments.

Another of Einstein’s famous 1905 papers correctly attributed Brownian motion of small particles in suspension to random atomic collisions. A colloid chemist, Richard Zsigmondy, working with Heinrich Siedentopf of Zeiss, had developed a highly sensitive dark-field “ultramicroscope” in 1903; this allowed particles with dimensions of a few nanometers to be detected and their masses estimated from the amplitude of excursions during their random motions, providing the most concrete evidence for the existence of atoms many scientists had encountered up to that time. Zsigmondy received the Nobel Prize in Chemistry in 1925, and ultramicroscopy became a useful tool for analysis of both colloidal solutions and aerosols. An ultramicroscope was later modified to permit an aerosol sample to be flowed through it intermittently, allowing particles to be observed and counted in a chamber of defined volume. The actual counting was still done by eye.

Optical cell counters conceived and delivered

A 1934 paper by Andrew Moldavan in Science , frequently cited as the first publication on flow cytometry, summarizes an unsuccessful attempt to count cells (RBCs and neutral red–stained yeasts) flowing in single file through a capillary tube using a photocell attached to the eyepiece of a transmitted light microscope. Moldavan listed several obstacles to development of a practical apparatus, including the limited sensitivity of available photodetectors. The major difficulty with his system seems to have been that the scattering and absorption of light by the cells did not reduce the transmitted light signal sufficiently to be detected reliably. He also apparently assumed that an adequate detector would generate electric pulses when cells passed through, without considering how those data could be captured and stored.

The earliest working cell counters did produce pulses robust enough that, when amplified, they could drive a chart recorder, through which a known length of paper would pass per unit time. The volume of blood analyzed per unit time could be computed from the flow rate of diluted sample and the dilution factor, and the number of cells per unit volume could then be obtained by counting the number of pulses visible on the chart trace. This is not an arrangement easily adapted to a high-volume clinical laboratory.
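In code, that bookkeeping amounts to the following (a sketch; the flow rate, counting time, and dilution are illustrative, though a 1:10 WBC dilution matches the figure given earlier):

```python
def concentration_per_uL(pulses: int, flow_uL_per_s: float,
                         seconds: float, dilution_factor: float) -> float:
    """Cell concentration in the original sample from a timed pulse count.

    The volume of diluted sample analyzed is flow rate x time; multiplying
    by the dilution factor refers the count back to undiluted blood.
    """
    volume_analyzed_uL = flow_uL_per_s * seconds
    return pulses / volume_analyzed_uL * dilution_factor

# 5,000 pulses in 10 s at 1 uL/s from a 1:10 dilution -> 5,000 WBC/uL.
print(f"{concentration_per_uL(5_000, 1.0, 10.0, 10.0):,.0f} WBC/uL")
```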

The first working flow cytometer was described in 1947 by Frank Gucker of the Chemistry Department at Northwestern University. His work, done during World War II, was sponsored by the US Army with the aim of rapid identification of biologic warfare agents (specifically anthrax spores, with less pathogenic members of the genus Bacillus serving as surrogates) in aerosols. The apparatus, clearly a descendant of the ultramicroscope, incorporated a “sheath” of filtered air to confine the air sample stream to the central portion of the flow chamber, in which it was subjected to dark-field illumination, with the optical axis of the light collection optics orthogonal to the optical axis of the illuminating beam. The light source was a Ford headlight; a photomultiplier tube (PMT), then a newly developed device, was introduced as a detector. PMTs were being mass produced at the time, not because of their exemplary performance as photodetectors (see Chapter 16 on optical techniques) but because, when they were kept in the dark and connected to high voltage, they generated noise with ideal characteristics for jamming radar. An electromechanical counter was used to eliminate the labor-intensive process described in the previous paragraph for recording cell counts. Gucker’s cytometer had about 60% probability of detecting a particle 0.6 μm in diameter; it had no way of distinguishing bacterial spores from small organic or inorganic particles that could scatter the same amount of light, which suggested that it might not do as well as a human with a microscope at finding bacteria in aerosols of battlefield dust. After a second machine was built, the original went to Harvard, where it was used for a few experiments with purer samples and eventually thrown out.

The “sheath flow” principle used in Gucker’s aerosol counter was adopted by Crosland-Taylor in England in the early 1950s for a blood cell counter in which cells in saline were detected by light scattering with dark-field illumination. During the 1950s, other industrial organizations in England, Germany, and the United States attempted to develop photoelectric cell counters.

Electronic cytometry: The Coulter counter

Wallace Coulter, an American electrical engineer, explored another means of cell detection based on the low electrical conductivity of cells. Coulter reasoned that blood cells, suspended in a conductive saline solution and passing singly through a small (<100 μm) orifice, would be detectable by the transient increases in the electrical impedance of the orifice produced as the nonconducting cells passed through, displacing the conducting saline. The impedance change is proportional to cell volume; the Coulter principle can be thought of as an electrical analog of Archimedes’ principle. Coulter built a billion-dollar company, eventually merging with Beckman. His counters, based on DC (direct current) impedance measurement, proved accurate for counting and sizing blood cells and other particles; AC (alternating current; radiofrequency) impedance measurement was incorporated in some more advanced models after it was found to provide information about cells’ internal structure. After both internal growth and acquisitions, Beckman Coulter, now itself acquired by Danaher, remains a major provider of fluorescence flow cytometers and sorters.
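A commonly quoted first-order approximation, taken here from Maxwell’s effective-medium analysis rather than from anything in Coulter’s own publications, makes the proportionality explicit. For a nonconducting particle of volume $v$ that is small compared with an orifice of cross-sectional area $A$ filled with electrolyte of resistivity $\rho$,

$$\Delta R \approx \frac{3\,\rho\,v}{2\,A^{2}},$$

so the pulse height is, to first order, proportional to cell volume; the exact coefficient and the higher-order corrections depend on particle shape and on its size relative to the orifice.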

Blood is a much cleaner sample than battlefield dust; one occasionally encounters a few objects recognizable as extraneous interfering particles (commonly described as “junk”) when doing visual counts with hemocytometers, but they are simply omitted from the count. Both optical- and impedance-based cell counters would typically count the “junk,” or at least most of it. It’s always a good idea to have a microscope somewhere near your cytometer to check for unwanted material in samples.

The earliest Coulter counters registered counts on cascaded “Dekatron” vacuum tubes, each of which could both accumulate a count between zero and nine and indicate the count by the position of a glowing spot on the circular end face of the tube. “Nixie” tubes, which could accept a decimal digital output from an electronic counter and illuminate an internal electrode in the shape of the corresponding digit, became available in the mid-1950s; both forms of display were subsequently replaced by single-digit and multidigit light-emitting diode (LED) and liquid crystal displays, which have themselves largely been supplanted by full-screen displays. There were no laboratory information systems in the 1950s; if you wanted a permanent record of a Coulter counter’s count, you wrote it down.

Getting more information: Capturing size distributions

RBC counts done by microscopy rarely involved counting more than 400 cells, which, according to Poisson statistics, would limit the minimum coefficient of variation (CV) to 5%. Automatic counters could easily and quickly provide 10,000-cell counts, lowering the “ideal” CV to 1%. Accurate diagnosis of anemias was considerably improved by the increased precision, and because Coulter counters could measure the volumes of individual RBCs, there was considerable interest in equipping them to collect and present data on the sizes and size distributions of individual cells. This initially involved some tricks played with analog electronics and chart recorders.
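The arithmetic behind these figures is again the Poisson square-root law:

$$\mathrm{CV_{min}} = \frac{\sqrt{N}}{N} = \frac{1}{\sqrt{N}}, \qquad \frac{1}{\sqrt{400}} = 5\%, \qquad \frac{1}{\sqrt{10\,000}} = 1\%.$$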

By the 1960s, however, multichannel pulse height analyzers, essentially special-purpose analog/digital computers equipped with a few kilobytes of memory, had become available as laboratory instruments, used primarily for nuclear research.

It was possible to obtain a cell volume distribution by connecting the output of the Coulter counter’s detector electronics to a pulse height analyzer; after a sample had been run, the distribution stored in memory could be displayed on an oscilloscope screen or written out on a chart recorder. Coulter Electronics soon brought out its “Channelyzer,” and other flow cytometer manufacturers incorporated analyzers made by providers of nuclear instrumentation. Cytometric data analysis far beyond this level has been done by relatively modest personal computer systems since the mid-1980s.
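The core logic of a multichannel pulse height analyzer is easy to sketch in modern code (a conceptual sketch; the channel count and full-scale value are arbitrary choices, not the specifications of the Channelyzer or any nuclear-instrumentation product):

```python
from collections import Counter

def pulse_height_histogram(peak_amplitudes, n_channels=256, full_scale=10.0):
    """Bin pulse peak amplitudes (proportional to cell volume in a Coulter
    counter) into fixed-width channels, as a 1960s analyzer did in a few
    kilobytes of memory."""
    histogram = Counter()
    for amplitude in peak_amplitudes:
        channel = min(int(amplitude / full_scale * n_channels), n_channels - 1)
        histogram[channel] += 1
    return histogram  # channel -> event count: the cell volume distribution

print(pulse_height_histogram([3.1, 3.2, 3.3, 9.0]))
# Counter({79: 1, 81: 1, 84: 1, 230: 1})
```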
