Medical and Metabolic Considerations in Athletes With Stress Fractures


Introduction

Bone stress injuries (BSI) are common injuries in athletes.

Much of this chapter is based on our work with elite athletes over the last 13 years ( Box 4.1 ).

Box 4.1
Elite Teams and Number of Athletes Seen in the Last 13 Years as of October 5, 2018

  • San Diego Chargers – 3

  • Houston Texans – 1

  • Oakland Raiders – 5

  • NY Football Giants – 2 a

    a Consulted but not seen in office

  • University of Georgia – 1 a

  • UC Davis – 1

  • Los Angeles Lakers – 1

  • Milwaukee Bucks – 1 b

    b Not seen yet but referred

  • Golden State Warriors – 4

  • Philadelphia 76ers – 2 & 1 a

  • Brooklyn Nets – 1

  • New Orleans Pelicans – 1

  • Dallas Mavericks – 1

  • Houston Rockets – 1 a

  • Charlotte Bobcats – 1 a

  • OKC Thunder – 2 a

  • Washington Wizards – 1

  • U of Portland male cross-country runner – 1

  • Minnesota Timberwolves – 1 a

  • Oakland Athletics – 10

  • Colorado Rockies – 2

  • Los Angeles Dodgers – 1

  • Los Angeles Angels – 1

  • Detroit Tigers – 1

  • Arizona Diamondbacks – 1

  • Cleveland Indians – 1 & 1 a

  • US Olympic Skating – 1 b

  • Harvard female cross-country runner – 1

  • UC Berkeley Basketball – 2

  • San Diego State Basketball – 2 a

  • Wesleyan softball player – 1

  • Sacramento Kings – 1

  • Valparaiso basketball player – 1 a

  • St. Louis Cardinals – 1 a

  • New York Knicks – 1 a

  • New York Jets – 3 a

For the purposes of this chapter, we define an elite athlete as one participating in athletics at the high school, college, or professional level. While some of the information may pertain to recreational athletes, we have made no attempt to collect information on this group of individuals.

There are numerous articles on stress fractures in military populations and settings, beginning with Breithaupt's original 1855 description of “march fractures.” However, “The findings from military recruits (many of whom are under-trained) may not generalize to athletes (many of whom are well- or overtrained) as they may represent different populations.” Therefore, we did not complete a full review of the military literature but selected from that body of work the information that is relevant to ours.

Definition of BSI

BSI represent a “presumed” spectrum of bone damage that occurs as the result of repetitive trauma that is not handled correctly by the bone repair mechanisms. For management purposes, BSI may be considered a “bone stress–failure continuum” in which “shin splints” (more properly, medial tibial stress syndrome) represents relatively mild damage at the low end of the spectrum and stress fracture represents severe damage at the other end ( Tables 4.1 and 4.2 ).

Table 4.1
Low-Risk Stress Fracture Treatment Guide
From Diehl JJ, Best TM, Kaeding CC. Classification and return-to-play considerations for stress fractures. Clin Sports Med [Internet]. 2006 Jan [cited 2018 Jul 6];25(1):17–28, Table 1.
Symptoms | Goal | Treatment Suggestions
Any level of pain | Heal injury | Titrate activity to a pain-free level for 4–8 weeks depending on the grade of injury; braces/crutches; modify risk factors
Pain with no functional limitations | Continue participation | Titrate activity to a stable or decreasing level of pain; follow closely; modify risk factors
Pain with functional limitation | Continue participation | Decrease activity to the point at which pain is decreasing and a functional level of pain has been achieved, then titrate activity to a stable or continued decreasing level of pain; modify risk factors
Limiting pain intensifies despite functional activity modification (i.e., unable to continue to perform at any reasonable functional level despite activity modification) | Heal injury | Complete rest; immobilization; surgery; modify risk factors

Table 4.2
Management of and Return-to-Play Strategies for High-Risk Stress Fractures
From Diehl JJ, Best TM, Kaeding CC. Classification and return-to-play considerations for stress fractures. Clin Sports Med [Internet]. 2006 Jan [cited 2018 Jul 6];25(1):17–28, Table 2.
Anatomic Site Complications Suggested Treatment Level of Data
Femoral neck Displacement
Nonunion
Avascular necrosis
Tension: Strict NWB or bed rest
Surgical fixation
RTP when healed
Compression: NWB until pain-free with radiographic evidence of healing, then slow activity progression
RTP after no pain on examination or with any activities
Surgical fixation (optional)
Level C (expert opinion)
Level D (case series)
Anterior tibia Nonunion
Delayed union
Fracture progression
Nonoperative: NWB until pain-free with ADL; pneumatic leg splints
RTP with slow progression after nontender and pain-free with ADL (9 mo)
Operative: Intramedullary nailing
RTP is usually faster (2–4 mo)
Level A (RCT)
Level B (nonrandomized)
Levels C and D
Medial malleolus Fracture progression
Nonunion
Nonoperative: (No fracture line)
4–6 wk pneumatic casting
Avoid impact; rehabilitation
RTP when nontender, no pain with ADL
Operative: (Fracture line, nonunion, or progression)
ORIF with bone graft
Levels C and D
Tarsal navicular Nonunion
Delayed union
Displacement
Nonoperative: NWB cast 6–8 wk, then WB cast 6–8 wk
RTP is gradual after pain-free with ADL
Orthotics and rehabilitation suggested
Operative: (Complete, nonunion)
RTP only when healed
Levels C and D
Talus Nonunion
Delayed union
Nonoperative: NWB cast 6–8 wk
RTP is gradual after pain-free with ADL
Orthotics and rehabilitation suggested
Operative: Reserved for nonunion
Level C
Patella Displacement
Fracture completion
Nonoperative: (Nondisplaced)
Long-leg NWB cast 4–6 wk
Rehabilitation following
RTP is gradual after pain-free with ADL
Operative: Horizontal—ORIF
Vertical—lateral fragment excision
RTP when healed
Level C
Sesamoids Nonunion
Delayed union
Refracture
Nonoperative: NWB 6–8 wk
RTP is gradual after pain-free with ADL
Operative: Excision if fail nonoperative
Level C
Fifth metatarsal Nonunion
Delayed union
Refracture
Nonoperative: (No fracture line)
NWB cast 4–6 wk followed by WB cast until healed
RTP after nontender and pain-free
Operative: (Fracture line, nonunion, or individual at high risk for refracture)
Intramedullary screw fixation
RTP 6–8 wk, early ROM/rehabilitation
Levels C and D
ADL, Activities of daily living; NWB, nonweight bearing; ORIF, open reduction with internal fixation; RCT, randomized controlled trial; ROM, range of motion; RTP, return to play; WB, weight bearing; wk, week.

These conditions do not necessarily occur concurrently or in temporal sequence. In fact, we are unaware of any patient with medial tibial stress syndrome who has progressed to a stress reaction or a stress fracture.

Numerous articles have defined the components of this spectrum (medial tibial stress syndrome, stress reaction, and stress fracture) (see Chapter 23 ), but there are no official diagnoses of any of these entities from the American Academy of Orthopaedic Surgeons (AAOS), the American Society for Bone and Mineral Research (ASBMR), or the American College of Sports Medicine (ACSM).

“Shin splints” is a nonspecific lay term applied to a large number of fundamentally different exercise-induced leg injuries involving repeated foot-to-ground impact. Such distinct injuries as tibial and fibular stress fractures, tibial periostitis, anterior and deep posterior compartment syndromes, popliteal artery entrapment, and tibialis posterior and anterior muscle strains or tendinitis have all been referred to under the rubric of shin splints. As part of the push toward a more specific, evidence-based diagnosis of these entities, medial tibial stress syndrome (MTSS) has been separated from the other conditions. MTSS comprises periostitis or symptomatic periosteal modeling in the vicinity of the junction of the middle and distal thirds of the medial border of the tibia. It is a relatively diffuse lesion, spreading several centimeters over the bony surface.

As a specific entity, stress reactions are studied much less than MTSS or lower extremity stress fractures. Although the causation of stress reactions, the complaints with which they present, the history, the physical findings, and the imaging by which they are diagnosed are the same as for stress fractures, the term “stress reaction” means there is no fracture line or break in the continuity of the bone. X-ray images are usually negative in an early stress reaction. A bone scan will be positive, but it cannot distinguish whether a fracture line is present. Therefore, the diagnosis is made by either magnetic resonance imaging (MRI) or computed tomography (CT) scan.

Stress fractures result from the repeated application of a stress lower than that required to fracture bone in a single loading situation.

Classification Systems of Bone Stress Injuries

The classification of stress fractures has become more complex since the initial description by Breithaupt in 1855 and Stechow's subsequent radiographic observation that the clinical findings in the feet were due to fractures. Garbuz et al. reviewed the value and role of orthopedic classification systems, which are used to characterize the nature of a problem, guide treatment decision-making, and establish an expected outcome for the natural history of a condition. This forms a basis for uniform reporting of results of surgical and nonsurgical treatments and for comparison of results from different centers.

Various classification systems have been proposed on the basis of clinical findings (e.g., patient history and physical examination); radiographic results, including scintigraphy, ultrasound, and CT; MRI and dynamic contrast-enhanced MRI; fatigue versus insufficiency (pathogenesis); high- versus low-risk fractures; and practices that involve multiple components of these aspects of fractures. Arendt and Griffiths developed a classification based on x-ray, scintigraphy, and MRI findings.

The most intensive and thorough review of this issue comes from Kaeding and his group in a series of articles starting in 2005. In their initial paper, they reviewed the pathophysiology, diagnosis, and classification of stress fractures on the basis of the separation between low-risk and high-risk stress fractures. High-risk stress fractures (see also Chapters 3 and 5 ) occur at the femoral neck, the patella, the anterior tibial diaphysis, the medial malleolus, the talus, the tarsal navicular, the proximal fifth metatarsal, and the sesamoids of the first metatarsophalangeal joint. Low-risk stress fractures include those of the femoral shaft, medial tibia, ribs, ulnar shaft, and first through fourth metatarsals. Previous studies showed that high-grade injuries (grades 3 and 4) took longer to heal than low-grade injuries (grades 1 and 2).

The management of each fracture should be individualized. “The key difference between a low-grade stress fracture at a high-risk versus low-risk location is that an individual who has a low-grade fracture at a low-risk site can be allowed to continue to compete but an individual who has a low-grade fracture at a high-risk site needs to heal before full return to activity” ( Table 4.3 ). High-risk stress fractures have more frequent complications, such as delayed union, nonunion, and refracture.

Table 4.3
Stress Fracture Classification
From Kaeding CC, Najarian RG. Stress fractures: classification and management. Phys Sportsmed [Internet]. 2010 Oct 13 [cited 2018 Jul 6];38(3):45–54, Table 2. Available from: http://www.tandfonline.com/doi/full/10.3810/psm.2010.10.1807 .
High-Risk Fractures | Low-Risk Fractures
Occur on tension side | Occur on compression side
Natural history poor | Natural history favorable
Often require aggressive treatment, including surgery or strict nonweightbearing | Often require nonsurgical treatment with rest and gradual return to weightbearing

Kaeding and Najarian continued the development of their classification system in 2010, stressing the important distinction between a high-risk and a low-risk fracture.

High-risk fractures occur on the tension side, have a poor natural history, and require aggressive treatment, whereas low-risk fractures occur on the compression side, have a favorable natural history, and frequently can be handled by nonsurgical treatment. A complete classification system of a stress injury (reaction or fracture) requires knowledge of the anatomic location and the grade of injury. Arendt and Griffiths and Fredericson have developed grades of increasing severity of these stress fractures, from 1 to 4, with the latter representing a complete fracture. At that time, they felt that the management of stress injuries should be determined by the location and grade of the injury.

Subsequently, Miller, Kaeding, and Flanigan conducted a systematic review of the literature on classification systems for stress fractures. They wanted to determine whether there was a system that was “reproducible, inexpensive, safe, broadly applicable, widely accessible, and clinically relevant to prognosis and treatment considerations.” Their review identified 27 previous systems, which were analyzed for strengths and weaknesses. Of the most commonly cited systems, none included clinical parameters. None of the systems had been tested for inter- or intraobserver agreement; therefore, their reproducibility by single or multiple observers is unknown. Of the 27 systems evaluated, 16 were applicable to the entire skeleton, whereas 11 were applicable only to a specific bone or location. The more modern classification systems included MRI as an imaging technique. Arendt and Griffiths' system has been the most often cited since 1990 and provides radiologic grading of stress fractures incorporating x-ray, bone scan, and MRI findings graded from 1 to 4. Five other systems used multiple imaging modalities. Four of the less frequently cited systems did use clinical parameters, with pain the most common symptom mentioned. None of the systems combined an assessment of the healing capacity of the fracture with a notation of the extent of structural damage. The authors concluded from their systematic review that the ideal stress fracture classification system they hoped to find did not exist.

Subsequent to that analysis, Kaeding and Miller sought to develop a system incorporating their belief that a classification should describe not only the extent of the structural damage but also the healing potential of the lesion. This is complicated by the tremendous variability of stress fracture lesions. Their classification system employs three descriptors: 1) fracture grade; 2) fracture location; and 3) imaging modality used ( Table 4.4 ).

Table 4.4
Proposed Stress Fracture Classification System
From Kaeding CC, Miller T. The comprehensive description of stress fractures: a new classification system. J Bone Jt Surg - Ser A [Internet]. 2013 Jul 3 [cited 2018 Jul 6];95(13):1214–20. Table 1. Available from: https://insights.ovid.com/crossref?an=00004623-201307030-00010 .
Grade | Pain | Radiographic Findings (CT, MRI, Bone Scan, or Radiograph) | Description
I | No | Imaging evidence of stress fracture, no fracture line | Asymptomatic stress reaction
II | Yes | Imaging evidence of stress fracture, no fracture line | Symptomatic stress reaction
III | Yes | Nondisplaced fracture line | Nondisplaced fracture
IV | Yes | Displaced fracture (≥2 mm) | Displaced fracture
V | Yes | Nonunion | Nonunion

The system they developed had the reproducibility they desired, was simple and easy to use, and formed a basis for treatment. When reporting a stress fracture, “a CT scan revealing a nondisplaced fracture line in a tarsal navicular in a healthy collegiate basketball player would be reported as a Grade-III tarsal navicular stress fracture on CT scan.” We have adopted this system for use at our center.
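
Because the Kaeding-Miller system reduces to a short decision rule over pain and imaging findings, it is straightforward to encode, for example in an injury-tracking database. Below is a minimal sketch in Python of the Table 4.4 logic; the function name and field values are our own hypothetical choices, not part of the published system.

    # A sketch of the Table 4.4 grading logic. Assumes imaging evidence of a
    # bone stress injury is already present (the premise for applying the
    # system); field names and values are hypothetical.
    def kaeding_miller_grade(pain: bool, fracture_line: str, nonunion: bool) -> str:
        """Return the Kaeding-Miller grade (I-V) for a bone stress injury.

        fracture_line: "none", "nondisplaced", or "displaced" (>= 2 mm),
        as read on CT, MRI, bone scan, or radiograph.
        """
        if nonunion:
            return "V"                    # nonunion
        if fracture_line == "displaced":
            return "IV"                   # displaced fracture (>= 2 mm)
        if fracture_line == "nondisplaced":
            return "III"                  # nondisplaced fracture line
        return "II" if pain else "I"      # symptomatic vs. asymptomatic stress reaction

    # The chapter's example: a nondisplaced tarsal navicular line on CT.
    print(kaeding_miller_grade(pain=True, fracture_line="nondisplaced", nonunion=False))  # III

The anatomic location and imaging modality are then appended when the result is reported, as in the tarsal navicular example above.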

Epidemiology of Bone Stress Injuries

For further details, see Chapters 3 and 5 .

Pathophysiology

The understanding of the pathogenesis of stress fractures has advanced since Breithaupt's original description, moving beyond the concept that they are simply due to performing repetitive tasks resulting in overuse with accumulation of microdamage. Recent advances in our understanding of bone biology give us deeper insight into the actual events ( Fig. 4.1 ).

Fig. 4.1, Proposed pathophysiology of stress fractures.

There are numerous elegant descriptions of the pathogenesis of stress fractures at the macroscopic level but very few at the microscopic or nanostructural level, because most of these cellular and subcellular evaluations are new and their findings are only now being incorporated into the overall picture.

In 1998, Harold M. Frost, one of the clearest thinkers about bone physiology, histomorphometry, and bone pathology, stated, “Bone is a fatigue-prone material.”

In 2001, Boden et al. stated that the “exact mechanical phenomenon responsible for initiating stress fractures remains unclear.” But it is clear that an increase in the duration, intensity, or frequency of physical activity, whether in military basic training or in athletics, without sufficient rest intervals may lead to increased osteoclast activation and bone resorption. Muscle fatigue may also result in excessive forces being transmitted to the bone. In 2002, Romani et al. added that “stress fractures are not the result of one specific insult. Instead, they arise as the result of repetitive applications of stresses that are lower than the stress required to fracture the bone in a single loading.” Whenever a low level of force is directed onto the bone, whether from contact with the ground or from muscle activity, it causes the bone to deform; this deformation is known as strain. The bone's stress–strain response depends on the load's direction; the bone's geometry, microarchitecture, and density; and the role of attached muscle and its contractions. “In most activities of daily living (ADLs), when the force is removed, the bone elastically rebounds to its original position. The force that a bone can endure and still rebound back to its original state without damage is within the elastic range. Forces that exceed a critical level above the elastic range are in the plastic range. Once forces reach the plastic range, a lower load causes greater deformation; it is at this level that forces summate to permanently damage the bone.”

Warden and his colleagues stated in 2006 that the “precise pathophysiology of stress fractures is unknown, and current models are based on theory.” Although the pathogenesis of stress fractures in these models is usually discussed at the macroscopic level, damage actually initiates at the level of the collagen fiber or below. Fatigue is the loss of strength and stiffness that occurs in materials subjected to repeated cyclic loads. Bone fatigue fractures (now known as stress fractures) are a complex in vivo phenomenon in which mechanical damage and biological repair both have major roles. If microdamage from bone fatigue accumulates at a slow rate, normal biological remodeling may be able to repair the damage and retain the structural integrity of the bone. However, the creation of microcracks initiates osteoclastic bone resorption, and this microdamage removal in a bone that continues to be excessively loaded with high cyclic stresses may accelerate the accumulation of fatigue damage.

With every loading of bone, and especially with repetitive episodes of mechanical loading, bone strain occurs, producing microdamage. Strain is defined as the change in length per unit length of a bone and is frequently expressed as microstrain (με). Usual strains (400–1500 με) are far below the single-load failure threshold (10,000 με). But strains below the single-load failure level can have a cumulative effect on the bone structure. Carter et al. studied the fatigue behavior of adult cortical bone; fatigue microdamage accumulates at a slow but unknown rate, and how much is too much is also unknown. Cortical bone fails in fatigue within 10³ to 10⁵ loading cycles when strains are between 5000 and 10,000 με. According to Schaffler and colleagues, strains in the physiologic range of 1000 to 1500 με in ex vivo studies have been shown to cause fatigue and microdamage but not complete fracture of cortical bone, even after 37 million loading cycles.
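
The strain figures just cited put simple bounds on how far habitual loading sits below the single-load failure threshold. The following back-of-the-envelope check in Python only restates the quoted numbers; it is an illustration, not a fatigue model.

    # Strain values quoted above, in microstrain.
    FAILURE_STRAIN = 10_000    # approximate single-load failure threshold
    HABITUAL = (400, 1500)     # strains during usual activities

    for strain in HABITUAL:
        print(f"{strain} microstrain is {FAILURE_STRAIN / strain:.1f}x below single-load failure")
    # 400 microstrain is 25.0x below single-load failure
    # 1500 microstrain is 6.7x below single-load failure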

The repetition of submaximal strains produces microdamage in the bone. Much effort has been devoted to determining how much strain can produce microdamage and how much microdamage can produce fatigue failure of bone. Schaffler et al. summarized some of this work by noting that “strains in the range of 1200–1500 microstrain and strain rate of 0.03 s⁻¹ are typical of the strain environment measured on tensile surfaces of long bone diaphysis during running. If one assumes (conservatively) a stride length of three feet for a runner. Each [sic] limb would be loaded every six feet, and each million cycles would correspond to about 1136 miles of running. Ten million load cycles correspond to more than 11,000 miles of continuous loading without the advantages of remodeling or repair.” This implies a much greater fatigue resistance for compact bone at physiological strains than would be calculated from earlier studies. The military recruit experience suggests that stress fractures occur within 6 weeks of the start of basic training, or at about 100 to 1000 miles of vigorous exercise, an estimated 100,000 to 1,000,000 loading cycles by the previous calculation. This “suggests that other mechanisms may be involved in fatigue failure.” They hypothesized that “brief stress or strain loading would lead to complete fracture.”
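
The stride arithmetic inside the Schaffler quotation is easy to reproduce. The sketch below uses only the figures given in the quote (a 3-ft stride, so each limb is loaded once every 6 ft); it simply restates the authors' conversion between loading cycles and mileage.

    FEET_PER_MILE = 5280
    FEET_PER_LOAD_CYCLE = 6   # 3-ft stride: each limb loaded every six feet

    for cycles in (1e6, 1e7):
        miles = cycles * FEET_PER_LOAD_CYCLE / FEET_PER_MILE
        print(f"{cycles:.0e} cycles ~ {miles:,.0f} miles of running")
    # 1e+06 cycles ~ 1,136 miles; 1e+07 cycles ~ 11,364 miles

    # The 100-1000 miles of basic training before stress fracture onset
    # maps back to roughly 1e5-1e6 loading cycles:
    for miles in (100, 1000):
        cycles = miles * FEET_PER_MILE / FEET_PER_LOAD_CYCLE
        print(f"{miles} miles ~ {cycles:,.0f} loading cycles")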

The excessive strains produce microcracks in the bone and microdamage resulting in collagen fiber-matrix debonding, disruption of the mineral–collagen aggregate, and collagen fiber failure.

There are three distinct types of microdamage, distinguished by differences in staining properties in bone. These staining characteristics also allow microcracks in the bone to be demonstrated. “Fatigue loading and the extent of microdamage are associated.” Microdamage is difficult to demonstrate in bone specimens and requires special staining procedures, developed initially by Frost; it is considered impossible to apply these techniques to in vivo situations. The specific interaction between mineral and collagen is poorly understood. Understanding microdamage in vivo will require the development and use of new imaging tools, such as the Trabecular Bone Score (TBS) and Texture Research Investigation Platform (TRIP) (Medimaps Group SA, Geneva, Switzerland), micro-CT (μCT), finite element analysis (FEA), and high-resolution peripheral quantitative CT (HR-pQCT).

Hughes et al. reviewed the role of adaptive bone formation in the etiology and prevention of stress fractures. The typical model of stress fracture development does not account for the actual clinical occurrence of stress fractures. By the previous calculation of Schaffler, basic training recruits are exposed to less than 1/400th the duration of loading required to develop a fatigue-induced stress fracture under in vitro loading conditions. This may reflect differences between the experimental loading conditions, in which bones may be repetitively strained in one plane, and live loading conditions, in which the soldier or the athlete is subjected to multidirectional loading; the latter may produce more abnormal loading and strain, and thus more microdamage, than the experimental model does. Additionally, the experimental model uses only a small portion of bone, whereas human bone is larger and more likely to contain weak regions than the laboratory specimen; furthermore, in vivo bones undergo remodeling in response to increased mechanical loading, which can transiently weaken the bone.

Remodeling directed at removing microdamage is referred to as targeted remodeling. This repair activity causes a temporary porosity that may contribute to stress fracture risk, although the authors say, “the link between increased porosity and stress fracture risk remains to be demonstrated experimentally. In principle, the process of bone remodeling in response to physical training is paradoxical in that it may promote stress fracture development by introducing an acute increase in porosity, but may also prevent stress fracture development by replacing fatigue-damaged bone.” The porosity that develops represents a temporary negative bone balance that exists until the resorption cavity is filled with new bone that becomes fully mineralized. However, deposition of osteoid (newly formed unmineralized bone) by osteoblasts does not immediately restore normal bone stiffness and other characteristics. What is needed is mineralization, which occurs in two phases: primary mineralization, which occurs during the first few weeks and accounts for 65%–70% of the final mineralization, and secondary mineralization, which occurs slowly over the next 8–12 months. “As stress fractures may occur within weeks of onset of physical training, newly activated remodeling cycles remain in early stages when a negative bone balance can theoretically contribute to a cycle of increased strain and accumulation of microdamage upon continued loading until stress fracture ensues.”

Targeted remodeling is a key process for replacing fatigue damage. Bone remodeling is characterized by four phases: the activation phase, when osteoclasts are recruited; the resorption phase, when osteoclasts resorb bone; the reversal phase, when osteoclasts undergo apoptosis and osteoblasts are recruited; and the formation phase, when osteoblasts lay down new organic bone matrix that subsequently mineralizes. By definition, remodeling is a process in which osteoclasts and osteoblasts work sequentially. Bone modeling, by contrast, is the process in which bones are shaped or reshaped by the independent (noncoupled anatomically or temporally) action of osteoblasts and osteoclasts. Skeletal development and growth take place by the process of bone modeling.

Adaptive bone formation deposits bone, via bone modeling (as opposed to remodeling), on the periosteal, endocortical, or trabecular surfaces in response to mechanical loading. Bone modeling is the process of bone growth that takes place in infants, children, and adolescents and involves the independent action of osteoblasts without prior osteoclastic bone resorption. Modeling involves osteocyte activation, with the osteocytes acting as mechanotransducers. Osteocytes are important regulators of bone function (and will be discussed more thoroughly below). The mechanosensing and mechanotransducing osteocytes transform a deformation of the bone matrix induced by an external force into biochemical and flow signals that lead to new bone formation. The mechanism of osteocyte stimulation is thought to be the electric streaming potential created by ionic fluid movement through the lacunocanalicular system and the cellular shear stress generated by fluid flow along the osteocyte cell body and dendritic processes. Apparently, the cell body, the primary cilia, and the dendritic processes are responsible for mechanosensation. Changes in the osteocytes and their dendrites lead to increased intra-osteocyte calcium signaling and formation of pro-osteoblastic molecules, such as prostaglandin E2 (PGE2), insulin-like growth factor I (IGF-I), nitric oxide, and adenosine triphosphate (ATP), that positively affect bone formation, and suppress osteocyte production of negative regulators of the Wnt/β-catenin pathway, such as sclerostin and dickkopf-1 (DKK1).

Deposition of bone on the periosteal surface along the diaphysis of long bones provides great mechanical advantage: long bones with mass distributed farthest from the neutral axis, i.e., wide bones, are stronger than narrower bones of similar mass. Stress fracture risk is directly affected by the properties of the skeleton, such as bone width and density, and it is thought that modification of these properties via the adaptive ability of bone may be a way of reducing an individual's risk. Warden et al. examined bone adaptation to a site-specific mechanical loading program using a rat ulna axial loading model that compared the loaded right ulna to the unloaded left ulna as a control. The mechanical loading induced bone changes that produced a significant increase in ulnar fatigue resistance. The authors found that by improving the structural properties of a bone through a mechanical loading program, the bone's fatigue resistance could be significantly improved, and they suggested that an exercise program directed at changing the structural properties of the skeleton could be employed as a possible prevention strategy for stress fractures. When the fatigue life of the trained and untrained limbs was compared, the untrained limb fractured after 15,000 cycles, whereas the trained limb failed after an average of 1.5 million cycles. This 100-fold increase in fatigue resistance after a 5-week loading regimen shows the potential impact of adaptive bone formation with physical training.
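
The fold change reported for the rat ulna model follows directly from the quoted cycle counts; the one-line check below is included only to keep the arithmetic explicit.

    untrained_cycles = 15_000      # fatigue life of the unloaded control ulna
    trained_cycles = 1_500_000     # average fatigue life after the 5-week program
    print(f"{trained_cycles / untrained_cycles:.0f}-fold increase in fatigue resistance")  # 100-fold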

Milgrom and his group performed three prospective studies of military recruits in different basic training classes to evaluate the ability of bone adaptation to lower the incidence of stress fractures. Groups of 452, 433, and 404 elite infantry recruits had their physical fitness assessed by a timed 2-km run, the maximum number of chin-ups they could perform, and the number of sit-ups they could perform in 1 minute. Pre-induction participation in sports activity was also assessed; in the third study, the questionnaire was revised to subdivide ball sports into soccer, basketball, volleyball, tennis, and handball. Of the 1118 soldiers who completed basic training, stress fracture rates in the three groups were 28.9%, 27%, and 18.8%, respectively, among individuals who did not participate in ball sports, versus 13.2%, 16.7%, and 16.3% among those who did. The investigators also inserted strain gauges into the tibias of three volunteers and found that tension, compression, and shear strain rates during rebounding were higher than during running, and were 2.16 to 4.60 times higher during rebounding and running than during walking. In Scandinavian and Israeli studies, a history of long-distance running or jogging did not affect the incidence of stress fractures in military recruits. Previous activities such as weightlifting, swimming, and martial arts did not lower the incidence of stress fractures, and a history of swimming increased the risk. In the paper presenting the three studies, recruits who had played ball sports for more than 2 years before their military training had, in the first two studies, only 50% as many stress fractures; in the third study, in which recruits were asked specifically which sport and which ball sport they played, 90% of the ball sport players played basketball, and the stress fracture rate in ball sport players was 20% of that in nonplayers (an 80% decrease). To explain this phenomenon, in vivo human tibial bone strain measurements were obtained in a number of studies, including the one cited above. The principal compression strain was 48% higher, the principal tension strain 15% higher, and the shear strain 64% higher during basketball rebounding than during running; the compression strain rate was 20% higher, the tension strain rate 6% higher, and the shear strain rate 28% higher during basketball rebounding than during running. The magnitudes of strain and strain rate are major determinants of the adaptive bone formation response to loading. The authors felt that the high strains and strain rates that occur during basketball play can cause maximal adaptive bone formation. This resulted in stiffer bone in the recruits who had played basketball for 2 years before entering the military and, thus, less bone strain during basic training and fewer stress fractures than in recruits who did not play basketball. Milgrom et al. concluded that, “On the basis of this study, a logical strategy for lowering the incidence of stress fractures in military recruits and athletes would be to adapt their bones before they begin formal training. This would involve a pretraining program, over a course of at least 2 years, of properly applied high-strain- and high-strain-rate-generating exercises that mimic the strain and strain rates that occur during basketball. Such a program would ideally stiffen the bone and not lead to stress fractures during this adaptation period.”

In our opinion, a modern understanding of the pathophysiology of stress fractures requires an understanding of the role of osteocytes in bone physiology and pathophysiology.

Osteocytes were first described by Carl Gegenbaur (also Gegenbauer), a German physician, anatomist, zoologist, and physiologist, in 1864, only 9 years after Breithaupt described “march” fractures. It would take over 150 years for stress fractures and osteocytes to come together.

In the last few years, there have been multiple reviews of the etiology, evolution, and function of osteocytes. It is clear from these studies that a matrix-producing osteoblast can become an osteocyte, become a lining cell, or undergo programmed cell death.

Hazenberg and colleagues state that human bone contains between 13,900 and 19,400 osteocytes per mm³. Buenzli and Sims give slightly different numbers of 20,000 to 30,000 osteocytes per mm³; since an estimated 5% of osteocyte lacunae are empty, this suggests an average density of 19,000 to 30,000 osteocytes per mm³ and an estimate of 42 billion osteocytes in the human skeleton. The osteocytes form an interconnected network through their dendritic processes, creating communication between individual osteocytes and the surface bone lining cells. The number of osteocytic dendritic processes varies by species. Each osteocyte has between 40 and 60 cell processes, with a cell-to-cell distance of 20–30 μm, per Hazenberg, or 18–106 processes per Buenzli, making a total of 3.7 trillion processes projecting from osteocyte cell bodies. The cumulative length of all osteocytic dendritic processes in the human skeleton is thus estimated at 175,000 km (108,740 miles). One cell process may form 12.7 termini on average, so that a single osteocyte may possess up to 1128 termini connecting with other cells; extrapolated to the whole skeleton, this calculates to 23.4 trillion osteocytic connections. Transmission of mechanical signals to the osteocytes can occur directly via cell surface receptors, through the solid matrix of the tissue due to load-induced fluid flow, or indirectly via fluid pressure and shear stresses.
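
These network figures are internally consistent at the order-of-magnitude level. In the sketch below, the skeletal bone volume and the mean process length are back-solved assumptions chosen to show how the quoted totals arise; only the density, the per-cell process count, and the resulting totals come from the sources cited above.

    DENSITY_PER_MM3 = 24_000     # within the quoted 19,000-30,000 osteocytes/mm^3
    BONE_VOLUME_MM3 = 1.75e6     # ASSUMPTION: ~1.75 L of bone tissue (back-solved)
    osteocytes = DENSITY_PER_MM3 * BONE_VOLUME_MM3
    print(f"osteocytes ~ {osteocytes:.1e}")            # ~4.2e+10, i.e., 42 billion

    PROCESSES_PER_CELL = 88      # within the quoted range of 18-106
    processes = osteocytes * PROCESSES_PER_CELL
    print(f"dendritic processes ~ {processes:.1e}")    # ~3.7e+12, i.e., 3.7 trillion

    MEAN_PROCESS_LEN_MM = 0.047  # ASSUMPTION: ~47 um mean length (back-solved)
    km = processes * MEAN_PROCESS_LEN_MM / 1e6         # mm -> km
    print(f"cumulative length ~ {km:,.0f} km")         # ~175,000 km, as quoted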

Osteocytes have numerous functions in bone. They serve as orchestrators of bone remodeling, including formation and resorption; inducers of osteoclast activation; modulators of mechanical loading via mechanosensation and mechanotransduction; sources of factors and regulators of mineral metabolism; remodelers of the perilacunar matrix; and regulators of mineralization, among other functions. Osteocyte cell death leads to skeletal fragility via the recruitment of osteoclasts to the site. Viable osteocytes secrete an as yet unknown factor or factors that inhibit osteoclast activity; when they die, osteoclasts are released from inhibition and start the process of bone resorption. Osteocytes perform these functions via the release of signaling molecules such as nitric oxide, prostaglandin E2, and ATP in response to external stimuli such as mechanical strain. Fluid fluxes in the canaliculi and, perhaps, electromechanical signals induced by mechanical loading also participate. Osteocytes are multifunctional cells, and they undertake some of these functions via an endocrine role. They are known to produce fibroblast growth factor 23 (FGF23), one of the most important osteocyte-secreted endocrine factors, which plays a role in phosphate metabolism, is a marker of early kidney failure, and can down-regulate 1α-hydroxylase, the enzyme required for the conversion of 25-hydroxyvitamin D to the active 1,25-dihydroxyvitamin D. They also produce sclerostin, an inhibitor of bone formation in the Wnt/β-catenin system. In addition, other factors involved in phosphate metabolism, such as DMP1, PHEX, and MEPE, are expressed by the osteocyte. There is also crosstalk between osteocytes and muscle cells, which likewise play a role in the response to mechanical stimuli.

Microcracks develop as a result of daily cyclic loading and are repaired by a balanced process between resorbing and forming cells. Microcracks can damage the osteocyte and its processes, inducing the osteocyte to send signals that initiate bone resorption and formation. Microdamage and bone fatigue are both associated with loss of osteocyte integrity. The role of osteocytes is being increasingly recognized in a wide variety of metabolic bone diseases. Lower osteocyte density has been shown to play a role in some patients destined to sustain a vertebral compression fracture. Apoptosis of osteocytes has been recognized as a factor in glucocorticoid-induced osteonecrosis of the hip. Parathyroid hormone (PTH) and the PTH type 1 receptor (PTH1R) play a role in osteocyte survival as well as in the mechanosensory process.

Although this research has not yet been performed directly in stress fracture patients, there is increasing understanding of how fatigue and microdamage disrupt canalicular flow and create apoptosis of osteocytes, which initiates bone resorption and remodeling, all steps in the pathogenesis of stress fractures. In 2016, we formulated the hypothesis that stress fractures may be due to disordered, dysfunctional, or diseased osteocytes, in which fluid mechanics, shear/strain forces, mechanosensory and mechanotransducer signals, and the production and release of bone growth inhibitors (DKK1, sclerostin) and bone growth stimulators (activation of the Wnt/β-catenin pathway) result in abnormal bone remodeling, including the necessary bone resorption but, perhaps, lesser or delayed bone formation, allowing bone failure to occur with subsequent development of a stress fracture. It is increasingly clear that the effect of many anti-osteoporotic drugs, such as PTH, may be mediated by their direct or indirect effects on osteocytes. (The role of PTH and other anabolic bone drugs is discussed in the section on Treatment.)

Genetic Predisposition

The ease with which it is possible to study the human genome has improved tremendously in the last several years, and the cost of doing so has markedly decreased, putting these individual analyses within reach of the general public. We now can both perform genome-wide association studies (GWAS) and study single nucleotide polymorphisms (SNPs).

Many studies have looked at the genetics of osteoporosis, the genetics of low bone mineral density (BMD), and the genetics of fragility fracture.

It has been increasingly shown that many sports injuries have a genetic basis. Some of this pioneering work has been done by Stuart K. Kim and colleagues at Stanford. Among other injuries, a genetic predisposition has been shown for medial collateral ligament (MCL) rupture, shoulder dislocation, rotator cuff injury, ankle sprains and strains, De Quervain's tenosynovitis, and plantar fasciitis. There is increasing evidence that, to some degree, there is a genetic predisposition to stress fractures.

One of the early interesting observations in this regard was reported by Singer et al. in 1990. Two 18-year-old identical twin brothers in the same basic training program in the Israel Defense Force (IDF) presented, in the sixth week of their training class, with pain in the proximal part of the left thigh that had begun 4 weeks earlier. Both brothers were in good physical health and had exercised regularly before their induction. Both underwent nuclear medicine bone scans with technetium-99m diphosphonate, which showed significant uptake in both the left and right proximal femurs along with some uptake in the tarsal bones of the right feet. Although the authors listed numerous potential clinical risk factors, they noted that genetic factors had never been considered to play a role in predisposition to stress fractures; the findings in this monozygotic twin pair suggested that genetic factors might need to be considered in the future.

In 1997, Burnstock summarized work being performed to elucidate the role of purine nucleotides as signaling molecules. Subsequent experimental studies showed that osteoclasts and, perhaps, a subpopulation of osteoblasts carry cell surface nucleotide receptors and established a role for the P2X nucleotide receptor in bone formation and resorption. P2X7 receptor-deficient mice have smaller bone diameter, lower cortical mass, and reduced periosteal bone formation. Deletion of the P2X7 receptor resulted in decreases in periosteal mineralizing surface, mineral apposition rate, and bone formation rate, consistent with reduced periosteal osteoblast number and activity. Nucleotides released from many cell types in response to mechanical stimulation are felt to mediate mechanotransduction in bone. This has now been confirmed in work by Li and Turner, which showed that the P2X7 nucleotide receptor mediates skeletal mechanotransduction. Subsequently, mutations in the cytoplasmic domain of the P2X7 receptor have been reported.

Advancing from these murine studies to humans, numerous (and increasing) functional SNPs have been identified that result in either gain or loss of function of the P2X7 receptor protein (P2X7R) and have been associated with important clinical bone alterations. Jørgensen and colleagues conducted a 10-year genetic analysis of SNPs of the P2X7R gene in the Danish Osteoporosis Prevention Study (DOPS) population. They were able to show that several of the uncommon loss-of-function variants conferred a predisposition to accelerated loss of BMD in postmenopausal women, similar to the loss produced in the knockout mice in the previously cited studies. The small number of individuals in each of three different risk groups prevented them from showing a relationship to osteoporotic fracture. Gartland and colleagues, using the Aberdeen Prospective Osteoporosis Screening Study, showed that polymorphisms in the P2X7R gene were also associated with low lumbar spine BMD, in addition to confirming accelerated bone loss in their postmenopausal women. Wesselius and his group, which included some of the aforementioned researchers, studied men and women ≥50 years of age who had presented to the osteoporosis clinic at the Maastricht University Medical Centre (MUMC), the Netherlands, following a traumatic or fragility fracture. The subjects were genotyped for 15 nonsynonymous SNPs within the P2RX7 gene. Some SNPs proved to be gain-of-function polymorphisms and were associated with higher BMD, whereas others were loss-of-function polymorphisms and were associated with lower BMD. Husted et al. studied SNPs of the P2X7R gene in a population of 462 osteoporotic women and men, referred to the Department of Endocrinology at Aarhus University Hospital, who had a T-score less than –2.5 or a low-trauma vertebral compression fracture. The effects of various genotypes on fracture risk were examined, and genotypes associated with fracture risk, BMD, and/or body weight were identified. Again, these findings were in accord with the phenotype of the knockout mouse described by Ke.

The vitamin D system includes 25-hydroxyvitamin D, its active form 1,25-dihydroxyvitamin D, a variety of enzymes involved in their formation, and a specific receptor, the vitamin D receptor (VDR), which mediates its actions. Mutations in the VDR gene are known to cause disorders such as 1,25-dihydroxyvitamin D–resistant rickets, a rare monogenic disease. Polymorphisms (more subtle sequence variations) in the VDR gene occur more frequently in the population than these severely deleterious mutations. In a study of 32 young (age range 19–30 years) stress fracture patients and 32 healthy volunteers, Chatzipapas and colleagues genotyped the study subjects for four different polymorphisms of the VDR: FokI in exon 2, BsmI and ApaI in intron 8, and TaqI in exon 9. For example, the FokI polymorphism comprised FF, Ff, and ff genotypes; stress fractures were found to be eight times more likely in subjects with the ff and Ff genotypes than in those with the FF genotype. Similar data were produced for the other polymorphisms. FokI and BsmI polymorphisms were found to be independent risk factors for stress fractures.

Korvala et al. looked at genetic predisposition to femoral neck stress fractures in a group of Finnish soldiers. All military conscripts who had suffered a femoral neck stress fracture between 1970 and 1995 were invited to a follow-up study in 2002 to 2003; 72 subjects participated. The diagnosis of stress fracture had been made on standard x-ray, nuclear medicine, or MRI criteria. A group of 120 soldiers without stress fractures served as controls. Clinically, the cases were shorter and had lower body weight and BMI than the controls. A total of 15 SNPs in six genes (COL1A1, COL1A2, CTR, IL-6, VDR, and LRP5) were genotyped. The COL1A1 rs2586488 and COL1A2 rs3216902 SNPs were associated with stress fractures in a recessive model, and risk was increased in carriers of the LRP5 rs2277268 minor allele in comparison with noncarriers. The authors felt that genetic factors might play a role in the development of stress fractures in individuals subjected to heavy exercise and mechanical loading, and that in these lighter individuals the heavy loads carried might account for the relatively higher number of femoral neck stress fractures compared with larger individuals.

Yanovich et al. studied candidate genes in Israeli soldiers with stress fractures. The study population consisted of 203 soldiers (162 males and 41 females) with no findings of stress fractures and 182 soldiers (165 males and 17 females) with known stress fractures. Of interest, 10 participants from the stress fracture group had a family history of bone disorders or stress fractures; 5 reported that their fathers had suffered stress fractures during their military service. A total of 268 candidate SNPs from 17 genes spanning 12 chromosomes were selected for study. Sequence variants in eight genes were associated with an increased risk of stress fractures; of note, variants in six genes were associated with decreased risk. One limitation of this study was that it may have been underpowered to detect significant differences: after correction for multiple comparisons, none of the associations remained significant.

Varley and coworkers evaluated the role of the RANK/RANKL/OPG pathway, which is important in osteoclastogenesis, controlling osteoclast activation, formation, and differentiation. They worked with a convenience sample of 518 elite athletes (449 males and 69 females), the Stress Fracture in Elite Athletes (SFEA) cohort, who participated in a variety of sports including soccer, cricket, track and field, running events, rowing, boxing, tennis, hockey, and gymnastics. Each sport contributed both stress fracture and nonstress fracture subjects. Genomic DNA came from saliva samples. A SNP of RANKL, rs1021188, was associated with stress fractures in the whole group, the male group, and the multiple stress fracture group. A SNP of RANK, rs3018362, was also associated with stress fracture occurrence in different groups, and the rare allele of rs4355801, in the OPG gene, had a greater association with stress fractures. They concluded that SNPs of the RANK/RANKL/OPG signaling pathway were associated with stress fractures. Although the specific function of the genotyped SNPs is not known, this pathway governs osteoclast differentiation and activation, and variants could decrease bone resorption, thus, perhaps, affecting the ability to respond to or repair microdamage.

Additionally, in another study, Varley and colleagues postulated that the P2X7 receptor gene, now recognized as a key regulator of bone remodeling, might play a role in the development of stress fractures in both military recruits and elite athletes. They studied a group of 210 Israeli Defense Force (IDF) military recruits and the 518 elite athletes (449 men and 69 women) of the SFEA cohort. This cohort was recruited from the United Kingdom and North America and was predominantly white Caucasian (83.2% of the stress fracture cases and 79.9% of the nonstress fracture controls). Both groups had had stress fractures diagnosed in standard ways, by complaints, physical examination, and appropriate imaging. In the military participants, DNA was extracted from peripheral blood leukocytes; in the SFEA cohort, genomic DNA was derived from saliva. Analyses of five P2X7R SNPs in the SFEA cohort showed that a specific SNP (rs1718119) was associated with multiple stress fractures. Thus, the findings in these two distinct populations (military cohort data not presented) are the first to demonstrate an independent association between stress fracture incidence and specific SNPs (rs1718119 and rs3751143). As described previously, in multiple Scandinavian studies these SNPs have been shown to affect various bone parameters, e.g., rate of bone loss and vertebral compression fracture. The mechanisms by which these SNPs in the P2X7R gene are involved in the production of stress fractures are unknown. However, since stress fractures are hypothesized to result from repetitive loading causing microdamage and bone fragility, the sensitivity to mechanical loading and the expression of mechanotransduction by osteocytes may be at the basis of the pathophysiology.

In 2017, further studies by Varley et al. offered newer data from the previously discussed SFEA cohort. They investigated 11 SNPs in the vicinity of genes of the Wnt signaling pathway, especially the SOST gene, which play a role in bone formation and mechanotransduction. By this point, 125 stress fractures had been reported in the SFEA cohort. Three SNPs in the SOST and VDR genes were reported as being associated with an increased incidence of stress fracture. Again, the mechanisms by which these SNPs increase stress fracture risk are not known, but since the SOST gene regulates bone formation, it is possible that the rare allele of rs1877632 alters sclerostin expression and thereby, through sclerostin's role in inhibiting Wnt signaling, reduces bone formation, impairing the response to accumulating stress fracture microdamage.

Risk Factors

In perhaps the largest study of stress fractures, Bulathsinhala et al. looked at racial and ethnic differences among 1.3 million US Army soldiers using the Total Army Injury and Health Outcomes Database (TAIHOD), a large repository of administrative (medical and demographic) data on the entire Active Duty Army (ADA) population. Ethnic origin was recorded as non-Hispanic black, non-Hispanic white, Hispanic, American Indian/Alaskan Native, Native Hawaiian/Pacific Islander, and mixed race; race was categorized as black, white, American Indian, Asian, and more than one race, with further subdivision by ethnicity (e.g., Asian comprised Chinese, Japanese, Korean, Vietnamese, Filipino, Indian, and other Asian). They identified 21,549 incident stress fractures among 1,299,332 soldiers during 5,228,525 person-years, for an overall incidence of 4.12 per 1000 person-years from 2001 to 2011. Female soldiers had a 3.6-fold higher incidence of stress fractures than male soldiers. Non-Hispanic white and Hispanic groups had a higher risk of stress fractures than non-Hispanic blacks, and non-Hispanic white men and women had the highest risk; there was further breakdown by racial and ethnic group. The youngest soldiers (<20 years) were more susceptible to stress fractures than older groups, and those with lower weight were at higher risk than those of normal weight. The reasons for these race- and ethnicity-related differences in stress fracture risk are unknown, but they are probably related to bone mineral density and bone quality, including bone size, bone architecture, and microdamage handling, all issues we now know are probably related to underlying genetics.
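
The overall incidence figure follows directly from the reported counts; a quick arithmetic check in Python:

    fractures = 21_549
    person_years = 5_228_525
    print(f"{fractures / person_years * 1000:.2f} per 1000 person-years")  # 4.12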

For further details on risk factors, see Chapters 3, 8, and 28, as well as the discussion below.

History and Physical Exam

The clinical history should make the health care professional (physician, nurse practitioner, and/or physician's assistant) suspect the presence of a stress fracture. The most important diagnostic tool is a detailed clinical history supported by a complete physical examination (for the nonsurgeon) and/or a focused physical examination (for the orthopedist). Most athletes relate an insidious onset of pain over 2–4 weeks. This is usually associated with the initiation of a training program (e.g., “I thought I would run a 5K, a half-marathon, or a marathon”), an increase in the training regimen (e.g., getting in shape for the start of a season), or a change in equipment (e.g., new running shoes). Using a checklist or a preset questionnaire ( Table 4.5 ) helps the history-taker be complete and cover the important issues. The pain is focal and localized, as opposed to medial tibial stress syndrome, in which the pain is more generalized along the anteromedial surface of the tibia.

Table 4.5
Intrinsic and Extrinsic Factors in the Causation of Stress Fractures
Adapted from Rosenthal and McMilan, Recruit Medicine, Chapter 11, 2006, ed. Bernard L. DeKoning, Office of the Surgeon General, pp. 175–202.
Intrinsic Risk Factors

  • Gender

  • Age

  • Ethnicity

  • Body Mass Index

  • Bone characteristics

  • Muscle strength

  • Pretraining fitness level

  • Lower extremity morphology

  • Nutrition factors

  • Genetics

  • Menstrual dysfunction

  • Muscle fatigue

  • Flexibility

  • Previous injury and inadequate rehabilitation

Extrinsic Risk Factors

  • Training errors

  • Training surfaces

  • Worn-out/inappropriate footwear

  • Excessive training intensity

  • Environment

Initially, the pain occurs only during the offending activity, such as running; at this point, suspicion must be high to make the diagnosis. Usually, the athlete notices pain at the end of an event, but typically the pain subsides quickly with cessation of the activity. Over the next several days to weeks, the pain progresses to occur earlier in the event and becomes more severe, although frequently the athlete tries to play through it. The pain then persists for a prolonged period after the event, eventually occurs between events without an obvious precipitating activity, and ultimately is present at rest. Throughout this progression, there is a decrease in mileage or in time spent playing the sport, such as basketball. At this point, when the athlete is finally unable to perform, he or she may complain to the trainer, other staff members, or another health care professional.

Physical examination, at this point, is usually focused on the site of pain. Often the patient can point to the site of the pain, especially in the lower extremity, and one can find local tenderness to palpation or possibly slight nodular swelling. In Matheson et al.'s series of 320 athletes, localized tenderness was found in 65.9% of cases and swelling in 24.6%. Milgrom and his group conducted a clinical assessment of femoral stress fractures in a prospective study of 372 male infantry recruits. If the response to the stress fracture history was positive, a complete physical examination was performed; each bone in the lower extremity was examined by palpation to determine whether tenderness was present. “The femurs, because they lie within a large cuff of muscles[,] were examined for tenderness by a ‘Fist Test.’ That is, pressure was applied simultaneously to the anterior aspect of both thighs, directly over the femurs, beginning distally and progressing stepwise proximally. This was done with the clenched fists of the examiner applying the weight of his upper body. An area of specific tenderness [or a] difference in sensitivity between femurs using the Fist Test was considered suggestive of a femoral stress fracture.” By using this expanded stress fracture clinical assessment (SFCA) and employing the full upper-body weight of the examiner, they uncovered more previously asymptomatic femoral stress fractures and were able to classify them more appropriately as symptomatic. Giladi and his group also looked at external rotation of the hip as a predictor of stress fractures. Each of the 295 new male infantry recruits, 18 to 20 years of age, evaluated in this study underwent pre-basic training screening that included an extensive orthopedic examination with measurements of joint motion, including the range of internal and external rotation of the hip with the hip flexed to 90°, among other measurements and assessments of ligamentous laxity. External rotation of the hip was found to have a significant relationship to all types of stress fractures (p = 0.0163), and specifically to tibial stress fractures (p = 0.0345), but not to femoral stress fractures. They divided external rotation into two categories, ≥65° and <65°, and found that recruits with external rotation ≥65° had a 1.8 times higher incidence of stress fractures than the <65° group. They hypothesized that the ≥65° group might represent those with retroverted hips, increased joint laxity, a different gait pattern, or different collagen characteristics of their bone.

Matheson and colleagues found that alignment and biomechanics of the lower extremities are significant factors in the causation of stress fractures. The frequency of varus alignment in their series was: genu varum 29%, tibial varum 18.9%, subtalar varus 71.9%, and forefoot varus 72.6%. Pronated feet were most common in tibial and tarsal stress fractures and least common in metatarsal stress fractures. Cavus feet were found most commonly in metatarsal and femoral stress fractures. All of these alignment and biomechanical abnormalities create gait difficulties, some of them quite subtle. Therefore, an individual with a stress fracture or, more likely, multiple stress fractures or stress reactions or a combination of the two may be well served by an evaluation at a human performance laboratory, including gait analysis.

Three-dimensional instrumented gait analysis (3D-GA) provides comprehensive data on normal and pathological gait: joint motions (kinematics), time–distance variables (spatiotemporal data), and joint moments and powers (kinetics). 3D-GA can be helpful for obtaining objective information for analysis of functional limitation or for follow-up over time. A number of indices have been developed, including the normalcy index (NI), hip flexor index (HFI), gait deviation index (GDI), gait profile score (GPS), and movement analysis profile (MAP). The NI is the most extensively validated and used measurement in clinical gait research and practice. Napier and colleagues conducted a systematic review of gait modifications that have been undertaken to change lower extremity gait biomechanics in runners. Several measures, including rearfoot eversion, vertical loading rate, and foot strike index, have shown an association with running-related injuries, and some of the biomechanical issues, such as cadence and foot strike, may be modifiable. They found 27 articles that investigated different gait-retraining interventions. Foot strike manipulation was the most common intervention; step frequency and step length were also common. Other manipulations studied included changes in hip, knee, and ankle kinematics; vertical and leg/lower extremity stiffness; spatiotemporal variables; step frequency; step length; and ground contact time. Of all the biomechanical variables, impact loading (the sudden force applied to the skeleton at initial contact) has demonstrated the greatest relationship with lower extremity overuse injuries.

Zadpoor and Nikooyan conducted a meta-analysis of 13 articles on the relationship between lower extremity stress fractures and the ground reaction force (GRF). The GRF is an approximate measure of the loading of the lower extremity musculoskeletal system, is fairly easy to measure, and is an important feature in the study of the kinetics of the lower extremity during running. The vertical loading rate (VLR) is defined as the slope of the initial part of the vertical GRF–time curve (between the foot strike and the vertical impact peak). According to the authors’ analysis, the included studies do not agree on whether the vertical GRF and/or loading rate are significantly different between the stress fracture groups and the control groups. However, the average VLR and the instantaneous VLR are significantly higher in the stress fracture group (p < 0.05). One limitation of the cited studies, the authors note, is that they were only short-term studies, whereas many individuals with lower extremity injuries run for long durations, so muscle fatigue may play a role in their injuries; when muscles fatigue, the amount of energy transmitted to the surrounding bones increases. Grimston and colleagues from the Human Performance Laboratory at the University of Calgary showed, in a study of a 45-minute run in subjects with a history of a tibial stress fracture (n = 5) and no stress fracture history (n = 5), that maximum lateral forces were significantly greater for the stress fracture subjects during both early and late stages of the run compared to the non-stress-fracture subjects. “This finding of increased loads during the course of a 45 min run in SF, and constant or decreased loads in NSF, may be indicative of differences in fatigue adaptation and warrants further study.”
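
To make these definitions concrete, the following is a minimal Python sketch of how the average and instantaneous VLR might be computed from a single footfall of force-plate data. The synthetic trace, the sampling rate, and the 20–80% window used for the average slope are illustrative assumptions, not the exact methods of the studies pooled by Zadpoor and Nikooyan.

import numpy as np

def loading_rates(vgrf, fs):
    """Estimate average and instantaneous vertical loading rates (VLR).

    vgrf : 1-D array of vertical ground reaction force (N) for one footfall,
           starting at foot strike.
    fs   : sampling frequency (Hz).

    Returns (average VLR, instantaneous VLR) in N/s, computed over the region
    between foot strike and the vertical impact peak.
    """
    # Locate the vertical impact peak: the first local maximum after foot strike.
    d = np.diff(vgrf)
    peaks = np.where((d[:-1] > 0) & (d[1:] <= 0))[0] + 1
    vip_idx = peaks[0]

    # Average VLR: slope over the 20-80% portion of the strike-to-peak interval
    # (a common convention; an assumption in this sketch).
    lo, hi = int(0.2 * vip_idx), int(0.8 * vip_idx)
    time = np.arange(len(vgrf)) / fs
    avg_vlr = (vgrf[hi] - vgrf[lo]) / (time[hi] - time[lo])

    # Instantaneous VLR: the steepest single-sample slope before the impact peak.
    inst_vlr = np.max(np.diff(vgrf[: vip_idx + 1]) * fs)
    return avg_vlr, inst_vlr

# Synthetic example: 1000 Hz trace with an impact transient, then the active peak.
fs = 1000
t = np.linspace(0, 0.25, int(0.25 * fs))
vgrf = 1600 * np.sin(np.pi * t / 0.25) + 500 * np.exp(-((t - 0.03) / 0.012) ** 2)
print(loading_rates(vgrf, fs))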

Of the different measurements used to assess impact loading, the average vertical loading rate (AVLR) has shown the strongest association with running-related injury risk. Step frequency, step length, and ground contact time are interrelated; typically, a greater step length and longer ground contact time have been associated with a higher incidence of stress fracture. Edwards and his group created a probabilistic stress fracture model based on the effects of stride length and running mileage. They investigated two stride lengths (preferred and –10% of preferred) and three running regimens (3, 5, and 7 miles) in 10 experienced male runners free of any lower extremity injuries. A 10% reduction in stride length resulted in a corresponding reduction in peak resultant contact force. Increasing running mileage from 3 to 5 miles resulted in an increase in stress fracture probability of 4% to 5%, and increasing running mileage from 3 to 7 miles increased stress fracture probability from 7% to 10%. Their results suggested that a 10% reduction in preferred stride length reduces the risk for a tibial stress fracture and, if adopted, would allow runners to run an additional 2 miles per day while maintaining the same low risk of fracture. They also felt that the benefits of reduced stride length are more noticeable at higher weekly running mileages. The authors stated that the “difficulty for the clinician is in identifying those runners ‘at risk’ for stress fracture that would benefit from a 10% stride length reduction. Presumably, these would be inexperienced runners beginning a weekly running routine or runners with a history of stress fracture. Poor physical fitness and low physical activity before physical training and a previous history of stress fracture are both associated with a higher risk of stress fracture development.”
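
Edwards and colleagues’ model combined bone stress estimates with probabilistic fatigue-life theory; the published details are beyond this chapter, but the underlying trade-off, lower per-stride bone stress versus more loading cycles per mile, can be illustrated with a toy calculation. In this sketch every parameter (the S-N curve constants, strides per mile, the assumed stress reduction) is an invented placeholder chosen only to show the direction of the effect, not the authors’ values.

import numpy as np

def fracture_probability(stress_mpa, miles_per_day, days=100,
                         strides_per_mile=1000.0,
                         sn_scale=240.0, sn_exponent=10.0, weibull_shape=2.0):
    """Toy probability of tibial fatigue failure after a training block.

    Uses a basic S-N (stress vs. cycles-to-failure) relation,
    N_fail = (sn_scale / stress)^sn_exponent, and a Weibull failure
    distribution over accumulated cycles. All parameters are illustrative.
    """
    cycles = miles_per_day * strides_per_mile * days        # total loading cycles
    n_fail = (sn_scale / stress_mpa) ** sn_exponent         # nominal fatigue life
    return 1.0 - np.exp(-(cycles / n_fail) ** weibull_shape)

# Preferred stride vs. 10% shorter stride: assume the shorter stride lowers
# peak tibial stress by ~7% but requires ~11% more strides per mile.
for miles in (3, 5, 7):
    p_pref = fracture_probability(60.0, miles)
    p_short = fracture_probability(60.0 * 0.93, miles, strides_per_mile=1111.0)
    print(f"{miles} mi/day: preferred {p_pref:.3f}, -10% stride {p_short:.3f}")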

Crowell and Davis studied gait retraining to reduce lower extremity loading in runners. They performed a pretraining instrumented gait analysis and then began a retraining program of eight sessions over a 2-week period in which an accelerometer was taped to the distal tibia and subjects ran on a treadmill. Subjects were instructed to “run softer”: to make their footfalls quieter and to keep their acceleration peaks below a given line on a monitor placed in front of the treadmill. Run time was gradually increased from 15 to 30 minutes over the eight sessions. Feedback was provided continuously for the first four sessions and then removed. A comparison of the pretraining and posttraining results revealed significant reductions of about 20–30% in peak positive acceleration (PPA), vertical instantaneous loading rate (VILR), vertical average loading rate (VALR), and vertical impact peak (VIP). These reductions were maintained at the 1-month follow-up. The reductions in PPA, VILR, VALR, and VIP achieved in this study were at least two times greater than those achieved through the use of cushioned shoes, foot orthoses, or shock-attenuating insoles, indicating that an individual’s ability to alter his or her own running mechanics is greater than the ability of any of these external devices to assist. Thus, lower extremity impact loading can be reduced with a gait retraining program that uses real-time visual feedback. Most gait retraining studies, however, have taken place in the laboratory and not in natural settings such as a track, a cross-country or marathon course, or a basketball court.
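
In software terms, the feedback loop of Crowell and Davis reduces to sampling the tibial accelerometer, extracting each footfall’s peak positive acceleration, and cueing the runner whenever it exceeds the target line. The sketch below simulates that idea; the 6 g target, the simulated adaptation, and the window handling are our assumptions, not the published protocol.

import random

def peak_positive_acceleration(window):
    """Peak positive axial acceleration (in g) within one footfall window."""
    return max(window)

def feedback_session(footfalls, target_g=6.0, feedback_on=True):
    """Simulate one treadmill session with optional real-time feedback.

    footfalls : iterable of per-footfall accelerometer samples (lists, in g).
    Prints a cue whenever the peak positive acceleration (PPA) exceeds the
    target line, mimicking the visual feedback monitor.
    """
    over = 0
    for i, window in enumerate(footfalls, start=1):
        ppa = peak_positive_acceleration(window)
        if ppa > target_g:
            over += 1
            if feedback_on:
                print(f"step {i:3d}: PPA {ppa:.1f} g above the line -> run softer")
    return over

# Simulated session: PPA drifts downward as the runner learns to "run softer".
random.seed(1)
session = [[random.gauss(8.0 - 0.015 * i, 0.6) for _ in range(5)]
           for i in range(200)]
print("footfalls over the line:", feedback_session(session))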

Napier and his colleagues concluded from their meta-analysis that gait retraining works in the short term, producing small to large effects on kinetic, kinematic, and spatiotemporal measures during running. Foot strike changes had the greatest effect on kinematic measures, real-time feedback had its largest effect on kinetic measures, and combined training protocols produced the biggest alteration in spatiotemporal measures. Further research on these and other interventions is still needed.

Bone Densitometry

Bone density testing, in its present form, has been available for about 30 years. Initially developed in 1961 by Cameron, and improved upon by Cameron and Sorensen, this early device measured cortical bone in the forearm by single photon absorptiometry (SPA). The first commercial devices were manufactured by Norland Corporation (now Norland at Swissray, Fort Atkinson, WI, USA). Subsequently, Mazess and others developed dual photon absorptiometry (DPA) in the early 1970s. The modern era of bone densitometry was ushered in by the development of dual-energy X-ray absorptiometry (initially DEXA; now DXA) in 1986 by Hologic, Inc. (Waltham, MA, USA) and subsequently by Mazess at Lunar Radiation Corporation (now GE Healthcare, Madison, WI, USA). Over the last 30 years, DXA has emerged as the primary clinical device for fracture risk assessment, for monitoring changes in bone health, and for diagnosing osteoporosis before or after a fracture has occurred. The International Society for Clinical Densitometry (ISCD), through its Consensus Development Conferences, has expanded the criteria for the diagnosis of osteoporosis and developed official positions on the regions of interest (ROIs) used for diagnosis and monitoring and on the principles of monitoring changes in bone density over time.

Although the vast majority of clinical DXA centers perform a minimum study of the AP spine (L1–L4) and one hip (measuring the ROIs known as the total hip and the femoral neck), densitometers have developed into sophisticated tools that can measure multiple parameters of musculoskeletal health. Many of the early papers on bone densitometry in the area of stress fractures dealt with female athletes, including those with the Female Athlete Triad.

Perhaps one of the first uses of bone densitometry in athletes was the report by Drinkwater et al. in 1984 on the bone mineral content (BMC) of amenorrheic (age 24.9 ± 1.3 years) and eumenorrheic athletes (25.5 ± 1.4 years). Twenty-eight women athletes, 14 of them amenorrheic, were studied; the 14 eumenorrheic controls were selected from a larger pool of potential subjects. SPA was used to measure forearm bone mineral density (BMD) at the one-tenth and one-fifth forearm sites, and DPA was used to measure lumbar spine BMD (L1–L4). Although forearm bone density did not differ at either site between the two groups, the BMD of the lumbar vertebrae was significantly lower in the amenorrheic group of athletes. Before the World Health Organization classification was developed, the BMD of the amenorrheic athletes was said to be equivalent to that of women 51.2 years of age; two of these athletes had a vertebral mineral density below the “fracture threshold” of 0.965 g/cm² as defined by Riggs et al.

In a follow-up study, Drinkwater and her group reported on seven of the original amenorrheic women who regained their menses within 1–10 months after 40.4 months (range 11–86 months) of amenorrhea. Although the women who regained menses still ran more miles per week than the eumenorrheic group, their mileage had decreased by 10%, their weight had increased by 1.9 kg, and their calcium intake had increased. Their bone density increased 6.2% in 14.4 months.

In 1990, Myburgh and colleagues showed that low bone density is an etiologic factor for stress fractures in athletes. In their study, 25 athletes with stress fractures were recruited from the University of Cape Town Sports Injury Clinic during a 1-year interval and compared with a group of closely matched control subjects. Bone mineral density was measured by DXA (Hologic QDR 1000) at the lumbar spine (L2–L4) and the left hip (including the left total femur, Ward [sic] triangle, femoral neck, greater trochanter, and intertrochanteric space), although now, per the ISCD Official Positions, the Ward’s triangle, greater trochanter, and intertrochanteric space are not “official” ISCD ROIs. In subjects with a history of a left femoral neck or shaft stress fracture, the contralateral side was measured, and vice versa. All measurements were made at least 6 months after the diagnosis of stress fracture and when all previously injured subjects were exercising regularly. Of the 50 subjects (25 injured athletes and 25 controls), 32 participated in road running, 2 in track, 4 in aerobics, and 12 in both aerobics and running; 38 were women. They were matched for age (32 ± 8 years), body mass, and height. Injured runners and their controls were matched for average training distance (53 ± 27 and 45 ± 17 km/wk), and nonrunners and their controls had similar training times (6 ± 4 and 5 ± 2 h/wk). In the 25 athletes with stress fractures, 7 fractures occurred in the foot, 6 in the femoral neck, 3 in the pubic rami, 3 in other areas of the femur, 4 in the tibia, and 2 in the fibula. Of the 25 injured subjects, 7 had previous shin splints and 5 had a history of one or more previous stress fractures. Focusing on the bone density results only, for the purposes of this chapter, bone mineral density was lower in injured subjects than in control subjects at the lumbar spine and the proximal femur (p < 0.02 for each). The six subjects with femoral neck stress fractures had significantly lower femoral neck bone density than their matched controls. Other clinical risk factors were also studied but are not reported here. The authors felt that the most important finding was that athletes with stress fractures had lower bone mineral densities than did well-matched control athletes, and they showed that the lower bone mineral density occurred in both the axial and appendicular skeleton. The authors concluded that “evidence suggests the etiology of stress fractures in athletes is more complex than traditionally believed.”

Giladi, Milgrom, and their colleagues reviewed identifiable risk factors for stress fractures in the Israeli Defense Force. They studied a group of 312 male military recruits during 14 weeks of basic training; after dropouts, 289 soldiers were studied. As part of the pretraining evaluation, foot and tibial radiographs were obtained to measure tibial bone width, BMC was measured by SPA at 8 cm above the ankle mortise, and BMD was measured by a Compton bone densitometer, a precursor of quantitative computed tomography (QCT), which measured the bone density (in grams per cubic centimeter) of a cancellous window in the center of the tibia. The main bone-related risk factor found was that soldiers with wider tibias sustained fewer tibial, femoral, and total stress fractures than those with narrow tibias.

Lauder et al. performed an early study on the relationship between stress fractures and bone mineral density in active-duty US Army women at Fort Lewis, WA. Of a total of 423 subjects, 190 women were available for the BMD evaluation study, 30 of whom qualified by having had one or more stress fractures in the previous 2 years. Five women were excluded for invalid entries on some data items, producing a study population of 185 women: 27 with stress fractures and 158 without, who served as controls. An extensive evaluation of demographics and risk factors for stress fracture was undertaken. BMD of the PA lumbar spine (L2–L4) and femoral neck was measured in all subjects by DXA on a Lunar DPX (now GE Healthcare) by a trained technician. Their multivariate analysis revealed, as its most significant finding, a strong inverse relationship between femoral neck BMD and the probability of a stress fracture, indicating that lower femoral neck BMD was associated with an increased likelihood of stress fracture. BMD of the lumbar spine was not a significant predictor of stress fractures.
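
The “strong inverse relationship” reported by Lauder et al. is the type of association usually quantified with logistic regression, where the fitted coefficient translates into an odds ratio for stress fracture per unit change in BMD. The sketch below fits such a model on synthetic data; the cohort, coefficients, and effect size are invented for illustration and do not reproduce the study’s actual model.

import numpy as np

rng = np.random.default_rng(0)

# Synthetic cohort: femoral neck BMD (g/cm^2) and stress fracture outcome,
# with an invented inverse relationship (lower BMD -> higher fracture odds).
n = 185
bmd = rng.normal(0.95, 0.12, n)
true_logit = 4.0 - 6.0 * bmd
y = (rng.random(n) < 1.0 / (1.0 + np.exp(-true_logit))).astype(float)

# Fit logistic regression (intercept + BMD slope) by gradient ascent on the
# log-likelihood; no external statistics library needed.
X = np.column_stack([np.ones(n), bmd])
beta = np.zeros(2)
for _ in range(20000):
    p = 1.0 / (1.0 + np.exp(-X @ beta))
    beta += 0.5 * X.T @ (y - p) / n

print(f"intercept {beta[0]:+.2f}, BMD coefficient {beta[1]:+.2f}")
# Odds ratio for stress fracture per 0.1 g/cm^2 *decrease* in BMD:
print("OR per 0.1 g/cm^2 decrease:", round(float(np.exp(-0.1 * beta[1])), 2))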

Marx and colleagues from the Hospital for Special Surgery looked at stress fracture sites as they relate to underlying bone health in athletic females. They noted that the most commonly described sites for stress fractures are cortical ones, including the tibia, the metatarsals, and the femoral shaft, whereas fractures are less common at sites of trabecular (or cancellous) bone, such as the femoral neck, pelvis, and sacrum. Based on experience at their Women’s Sports Medicine Center, they suspected that patients with stress fractures at trabecular bone sites had lower bone mineral density than those with cortical bone stress fractures. They conducted a retrospective chart review of 65 patients diagnosed with stress fractures over a 4-year period who underwent DXA scanning. They did not describe the type of DXA machine or software versions used in their study, but they did state that 15 of the DXA scans were performed at their institution and 5 at other locations, so presumably different machines from different manufacturers were involved. (At that time, some of the individual machine differences were not understood as clearly as they are today.) They utilized the World Health Organization definition of osteopenia, which had recently been formulated and has since been refined. After patients were eliminated for a variety of reasons, DXA scans were available for 9 patients with stress fractures at trabecular bone sites and 11 with stress fractures at cortical bone sites. In this small population, stress fractures at trabecular bone sites were associated with “early onset osteopenia (p = 0.01)”: eight of the nine patients with trabecular-site stress fractures had DXA scans indicating osteopenia, while only three of the patients with cortical-site stress fractures had osteopenia. They concluded that a trabecular bone site stress fracture in a young female might be a warning sign of “early onset osteopenia,” and recommended that females under age 40 who have documented stress fractures at either trabecular or cortical bone sites (with risk factors for osteopenia) undergo bone density evaluation. (Current ISCD Official Positions would refer to these DXA measurements as “below the expected range for age” if the Z-score is ≤ –2.0 and “within the expected range for age” if the Z-score is > –2.0.)

With further technologic development of DXA and increasing sophistication in our understanding of the tool, additional ROIs are available for study, such as the forearm (usually the nondominant 1/3 radius site, an official ISCD site), vertebral fracture assessment (VFA), and whole body bone mass (also known as total body bone mass [TBBM]), from which body composition measurements, including percent body fat and lean and fat mass, can be determined.

This newer understanding of DXA and the ISCD definitions of an abnormal DXA for children, adolescents, and premenopausal women have been incorporated into the 2014 Female Athlete Triad Coalition Consensus Statement on Treatment and Return to Play of the Female Athlete Triad. That panel utilized the definitions published by the ISCD as well as the ACSM criteria for female athletes involved in regular weight-bearing sports ( Box 4.2 ).

Box 4.2
Low Bone Mineral Density (BMD) Diagnosis
Data from De Souza MJ, Nattiv A, Joy E, et al. 2014 Female Athlete Triad Coalition Consensus Statement on Treatment and Return to Play of the Female Athlete Triad: 1st International Conference held in San Francisco, California, May 2012 and 2nd International Conference held in Indianapolis, Indiana, May 2013. British Journal of Sports Medicine 2014;48:289.

How is Low BMD Diagnosed?

The Panel has utilized the definitions published by the International Society for Clinical Densitometry (ISCD) for low BMD and osteoporosis in children, adolescents, and premenopausal women, as well as ACSM-suggested criteria for female athletes involved in regular weight-bearing sports. Criteria are described below for who should be considered for a DXA scan, which sites should be measured, and how often DXA should be performed.

Who Should Get DXA Scans for BMD Testing?

The Panel agreed that indications for obtaining a DXA scan for BMD testing in an athlete should follow Triad risk stratification (see Clearance and Return to Play section) and include the following:

  • (1) ≥1 “High risk” Triad Risk Factors:

  • History of a DSM-V-diagnosed eating disorder

  • BMI ≤17.5 kg/m², <85% estimated weight, OR recent weight loss of ≥10% in 1 month

  • Menarche ≥16 years of age

  • Current or history of <6 menses over 12 months

  • Two prior stress fractures, 1 high risk stress fracture, or a low-energy nontraumatic fracture

  • Prior Z-score of < –2.0 (after at least 1 year from baseline DXA)

  • OR (2) ≥1 “Moderate risk” Triad Risk Factors:

  • Current or history of DE for 6 months or greater

  • BMI between 17.5 and 18.5, <90% estimated weight, OR recent weight loss of 5% to 10% in 1 month

  • Menarche between 15 and 16 years of age

  • Current or history of 6 to 8 menses over 12 months

  • One prior stress reaction/fracture

  • Prior Z-score between –1.0 and –2.0 (after at least 1-year interval from baseline DXA)

  • (3) In addition, an athlete with a history of ≥1 nonperipheral or ≥2 peripheral long bone traumatic fractures (nonstress) should be considered for DXA testing if there are 1 or more moderate- or high-risk Triad risk factors. This will depend on the likelihood of fracture given the magnitude of the trauma (low or high impact) and the age at which the fracture occurred. Athletes on medications for 6 months or greater that may impact bone (such as depot medroxyprogesterone acetate, oral prednisone, and others) should also be considered for DXA testing.

How Often Should Athletes Get DXA Testing?

The Panel agreed that the frequency of BMD assessment by DXA will depend on the initial BMD and ongoing clinical status of the athlete. We agree with the ISCD 2013 guidelines that repeat DXA screening should be obtained when the expected change in BMD Z-scores equals or exceeds the least significant change. Those with definitive indications for DXA testing may require BMD testing every 1 to 2 years to determine if there is ongoing bone loss, and to evaluate treatment.

Which Sites Should be Screened with a DXA Scan?

Bone mineral density Z-scores (and not T-scores) should be reported for all children, adolescents, and premenopausal women.

  • (1) Adult women ≥20 years

  • Weight-bearing sites (posteroanterior spine, total hip, femoral neck)

  • Nonweight-bearing sites, namely the 33% (1/3) radius, if weight-bearing sites cannot be assessed for any reason.

  • (2) Children, adolescents, and young women <20 years

  • Posteroanterior lumbar spine bone mineral content (BMC) and areal BMD

  • Whole body less head if possible (otherwise whole body) BMC and areal BMD

  • Adjust for growth delay (with height or height age) or maturational delay (with bone age)

  • Use pediatric reference data, and when possible, report height-adjusted Z-scores.
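
The least significant change (LSC) invoked in the monitoring guidance of Box 4.2 has a standard ISCD formulation: with 95% confidence, LSC = 2.77 × the precision error established by the DXA facility’s own precision study. A minimal sketch, assuming an illustrative precision error:

def least_significant_change(precision_error_pct, factor=2.77):
    """ISCD least significant change (LSC) at 95% confidence.

    precision_error_pct : the facility's precision error (%CV) from its
                          in-house precision study.
    factor              : 2.77 = 1.96 * sqrt(2) for paired measurements.
    """
    return factor * precision_error_pct

# Assumed lumbar spine precision error of 1.1% (illustrative only).
lsc = least_significant_change(1.1)
print(f"LSC = {lsc:.2f}%  (BMD changes smaller than this are within noise)")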

Looking at another role for bone densitometry in the athletic population, Gustavsson and her colleagues studied rapid bone loss of the femoral neck after cessation of ice hockey training in young males followed over 6 years. They assessed the effects of training and detraining on the BMD of the total body, spine, and femoral neck in a cohort of adolescent male hockey players in Sweden. The study group initially consisted of 65 ice hockey players and 30 controls, with mean ages of 16.7 ± 0.6 years and 16.8 ± 0.3 years, respectively. After a mean period of 2.5 years, 59 hockey players and 30 controls agreed to participate in a first follow-up session; 12 of the ice hockey players had stopped training and were excluded, and one of the controls was excluded for extraneous medical reasons. After a mean period of 5 years and 10 months, 22 active and 21 retired hockey players and 25 controls participated in a second follow-up examination. At baseline, the hockey players averaged 9.4 ± 2.6 hours of training per week, consisting of ice hockey training or games with additional weight and aerobic training. The control group’s physical activity consisted of playing soccer and football, distance running, and some weight training. At the start of the study, all boys participated in 2 hours of physical education in school each week. The subjects were divided into different pubertal Tanner stages; all were judged to be at least Tanner stage 4. Using a Lunar DPX-L (now GE Healthcare) bone densitometer, they measured total body and spine BMD and BMC and the area of the right femoral neck at baseline and at the two follow-up examinations. The authors felt their most important finding was the effect of detraining on the femoral neck BMC in the retired players: the retired players lost significantly more bone at the femoral neck ROI between 19 and 22 years of age than the ice hockey players who continued their training. Thus, the BMD gained from training is not sustained after cessation of training.

Studies by Davey et al. carried the examination of differences in axial and appendicular bone density between stress-fractured and uninjured Royal Marine (RM) recruits even further. According to the authors, RM training is widely acknowledged as one of the most arduous and longest (32 weeks) military training programs in the world. In their study, they measured bone by DXA, by ultrasound, and by peripheral quantitative computed tomography (pQCT). In their cohort of 1090 recruits, 78 (7%) developed one or more stress fractures; 62 pairs of stress-fractured recruits and controls were assessed with DXA. The 62 fractured recruits had 79 stress fractures; 7 recruits had 2 fractures, 3 had 3 fractures, and 1 had 4 fractures. The most common sites of fracture were the metatarsals (n = 41) and tibia (n = 26). Areal BMD assessed with DXA (a two-dimensional projection) was lower at all sites in the stress fracture group compared with the control group (p < 0.01). Although they used T-scores (the group of males was aged 16–32 years, and Z-scores should have been utilized), they did state that 28 of the 62 had T-scores “below the normal range (T-score < –1.0)”; had Z-scores been appropriately used, per the ISCD, these recruits would have been classified as “normal for age,” since we do not know whether any had Z-scores < –2.0, which would have been classified as “below normal for age.” There was no difference in broadband ultrasound attenuation (BUA) of the dominant or nondominant foot at baseline (week 2) between stress-fractured and control recruits. In the recruits measured by pQCT, there were 51 pairs for the first three slices of the tibia (4%, 14%, and 38%) and 43 pairs for the 66% slice. There were structural differences between stress-fractured recruits and controls at all slices of the tibia, with the 38% slice showing the most marked difference between the groups; there was also a strong negative correlation between cross-sectional area of the tibia and BMD at this site. There were no differences in serum C-telopeptide (s-CTx), a marker of bone resorption, between the stress fracture cases and controls. They concluded that stress-fractured young male RM recruits undergoing specialized prolonged military training had lower BMD of the spine and hip, narrower tibiae, and reduced tibial strength indices compared with the study controls.

Edmondson and Schwartz reviewed non-BMD DXA measurements of the hip, including hip axis length (HAL), hip structural analysis (HSA), and finite element analysis (FEA), among other techniques. Beck and colleagues utilized an additional DXA technique that involved scanning at both the midthigh (mid-femur length) and the distal third of the lower leg (one-third of tibial length from the medial malleolus). With this technique, they prospectively followed 626 US Marine Corps recruits through 12 weeks of basic training to study cross-sectional geometric properties of the midshaft femur and mid-distal tibia. Previous studies had shown that the most important geometric properties of a long bone are the cross-sectional area (CSA) and, for bending and torsion, the cross-sectional moment of inertia (CSMI). Within a bone under a given load, the stress forces are determined by the bone’s structural geometry, while the bone’s ability to resist these forces is defined by its material properties. The CSA is an index of axial strength and is related to shear strength, while the CSMI is an index of bending rigidity. The section modulus (Z), an index of bending strength, was also calculated as the CSMI divided by half the bone width. Twenty-three of the 626 recruits (3.7%) presented with 27 lower extremity stress fractures. The most common sites were the tibia (n = 11), metatarsals (n = 7), femur (n = 5), and tarsals (n = 4); two recruits had fractures at two sites and one had fractures at three sites. Most anthropometric dimensions were significantly smaller in the stress-fractured recruits than in uninjured recruits. Small body size and a narrow long-bone diaphysis relative to body size were risk factors for the development of lower limb stress fractures during the 12-week basic training program. CSA, CSMI, and Z were significantly smaller than normal in the tibia and femur, as were pelvic and knee widths. After correcting for body weight differences, CSA, CSMI, Z, pelvic and knee width, and BMD were all significantly smaller in the fractured recruits. The authors stated that “bone structural data derived from DXA provides important new information that may be useful in the identification of subjects at higher risk for stress fractures under intense physical training conditions.”
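
The geometric indices used by Beck and colleagues can be illustrated with an idealized cross-section. The sketch below models a diaphysis as a hollow circular tube, a common engineering simplification, and computes CSA, CSMI, and the section modulus Z exactly as defined above (Z = CSMI divided by half the bone width); the dimensions are invented for illustration.

import math

def section_properties(outer_d_mm, inner_d_mm):
    """Geometric strength indices for an idealized hollow circular diaphysis.

    Returns (CSA in mm^2, CSMI in mm^4, Z in mm^3), where
    CSA  = cortical cross-sectional area (axial strength index),
    CSMI = cross-sectional moment of inertia (bending/torsional rigidity),
    Z    = section modulus = CSMI / (outer radius), a bending strength index.
    """
    ro, ri = outer_d_mm / 2.0, inner_d_mm / 2.0
    csa = math.pi * (ro**2 - ri**2)
    csmi = math.pi / 4.0 * (ro**4 - ri**4)
    z = csmi / ro
    return csa, csmi, z

# A "slender" vs. a "wide" tibia at the same cortical thickness (illustrative).
for label, outer, inner in (("slender", 22.0, 14.0), ("wide", 26.0, 18.0)):
    csa, csmi, z = section_properties(outer, inner)
    print(f"{label}: CSA {csa:7.1f} mm^2  CSMI {csmi:9.0f} mm^4  Z {z:6.0f} mm^3")

At similar cortical area, the wider tube has a markedly larger CSMI and Z, which is the geometric reason a narrow diaphysis relative to body size predisposes to stress fracture.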

Nattiv and colleagues studied the correlation of MRI grading of bone stress injuries with clinical risk factors and return to play in a 5-year prospective study of collegiate track and field athletes. In their study, DXA examinations were performed at baseline and annually. Athletes with a higher MRI grade of injury exhibited lower BMD at the total hip (p < 0.050) and radius (p < 0.047). Athletes with bone stress injuries at trabecular sites had significantly lower bone mass at the lumbar spine, femoral neck, and total hip (p < 0.001).

Other imaging modalities, such as ultrasound measurements of the calcaneus, have also been employed in studying the bone of athletes.

We utilize bone densitometry in every patient we see at our center, not to make the diagnosis of stress fracture, but to help us differentiate between those individuals with fatigue fractures and those with insufficiency fractures. Thus, at the Northern California Institute for Bone Health, Inc., which has both Lunar Prodigy Advance (GE Healthcare, Madison, WI, USA) and Hologic Discovery A (Hologic, Inc., Waltham, MA, USA) bone densitometers, we perform a comprehensive DXA study including the following ROIs: PA spine (L1–L4); both hips (for total hip and femoral neck); nondominant forearm (1/3 radius); VFA, a screening of the PA and lateral spine from T4 to L4; and TBBM, which allows analysis of lean body mass, fat mass, and percent body fat. We particularly use TBBM in athletes less than 30 years old, because they may not have achieved peak bone mass, and in female athlete triad patients and individuals with eating disorders, even if they are older than 30 years of age. There is increasing interest in the use of TBBM in National Football League (NFL, USA) players and in other athletic settings.

Bone Quality

Since the NIH Consensus Development Panel on Osteoporosis Prevention, Diagnosis and Therapy was formed in 2000, the concept that osteoporosis is a result of compromised bone strength has evolved. Both low BMD and micro-architectural deterioration of bone tissue (bone quality) combine to lead to increased bone fragility and consequent increase in fracture risk. It seems the same principles must apply to the pathophysiology of stress fractures and increased fracture risk in other metabolic bone diseases, that is, they result from a combination of bone mass and bone quality issues. Sorting out the role of these issues in the individual patient is increasingly possible.

Interest in bone quality has grown significantly in the last several decades as investigators have realized some of the limitations of DXA: 1) many fractures occur among patients with normal bone density; 2) fluoride treatment did not reduce fractures despite large increases in spinal BMD; 3) small changes in areal BMD in patients treated with antiresorptive therapy result in greater than expected decreases in future fracture risk; 4) the measured changes in BMD in patients treated with antiresorptive drugs explain only a small part of the variance in the reduction of future fracture risk; 5) reductions in future fracture risk are evident long before maximal changes in BMD are expected or can occur; and 6) patients receiving glucocorticoid therapy for a variety of illnesses have more fractures than individuals of the same BMD who have not received glucocorticoid therapy. Contributors to bone quality include, among others, trabecular architecture, the rate and extent of bone turnover, the organic and inorganic composition of the bone matrix, the type and amount of collagen crosslinks, the degree of matrix mineralization, microdamage accumulation, and cell viability. Issues of bone quality have been well studied in the bone fragility of osteoporosis ( Fig. 4.2 ). Tommasini et al. looked at the relationship between bone morphology and bone quality in male tibias and the implications for stress fracture risk. Having a narrow (i.e., more slender) tibia relative to body mass, an aspect of bone geometry, is a predictor of stress fracture risk and bone fragility in male military recruits and male athletes. This was assessed by testing the biomechanical properties of tibias from young adult males. Tibias of 17 male donors (15 white, 1 Hispanic, 1 black; 32.9 ± 10.4 years of age; range 17–46 years) were acquired from the Musculoskeletal Transplant Foundation (Edison, NJ, USA). Whole bone morphology was studied extensively; CSMI, CSA, and Z were assessed. A slenderness index (S) was calculated as an inverse ratio, so that a tibia with a large S was more slender relative to the weight and height of the individual, and a small S reflected a heavier or larger tibia. Cortical bone samples were prepared from the diaphysis of each tibia for biomechanical testing, and tissue-level mechanical properties and damageability were assessed. There were significant correlations between tibial morphology and mechanical properties in tissue brittleness and damageability: narrower bone was made up of tissue that failed in a more brittle way and accumulated more damage. Positive correlations were observed between measures of bone size and measures of tissue ductility, and negative correlations were observed between bone size (CSMI and Z) and tissue modulus. “The correlation between tissue ductility and bone size may help explain why male military recruits and male athletes with narrow bones show a higher incidence of stress fractures compared with individuals with wide bones.” “The data provide a new paradigm that may explain how variation in bone slenderness contributes to stress fracture risk.” Narrower tibias were composed of tissue that was more brittle and prone to accumulate more damage compared with tissue from wider tibias. “Having tissue that is more or less damageable may be inconsequential during day-to-day activities. However, tissue-level mechanical properties like total energy and ductility become particularly important in defining the response of bone to an extreme loading condition, such as that expected during military training….” From this study, it is now clearer why bone size is a risk factor for stress fractures.

Fig. 4.2, Increased Bone Strength

The search for how to measure bone quality clinically has been like the search for the Holy Grail (or the Holy Chalice). But in the last several years new tools (e.g., Trabecular Bone Score [TBS] [Medimaps Group, Geneva, Switzerland] and Osteoprobe [ActiveLife Scientific, Santa Barbara, CA, USA]) have become available to enable insights into bone quality.

Trabecular Bone Score (TBS)

TBS is a software program that is an add-on to the DXA software in the densitometer database. It measures a texture parameter that evaluates the pixel gray-scale variations in the DXA image of the lumbar spine. These variations may reflect bone microarchitecture, and thus TBS has become a surrogate for aspects of bone quality. The method uses the experimental variogram of the 2D projection image. TBS is calculated from the unprocessed raw data of the DXA acquisition and, like BMD computation, is based on X-ray absorption by tissues; however, TBS and BMD are calculated separately and by different methods. TBS is derived after the BMD measurement is made and at the same region of interest (PA spine). TBS is a unitless measurement. A high TBS value indicates “good” microarchitecture associated with “good” mechanical strength and a reduced fracture risk; a low TBS value indicates poor-quality microarchitecture and, therefore, increased fracture risk.
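
The commercial TBS algorithm is proprietary, but the variogram idea itself is simple: measure how the mean squared gray-level difference between pixels grows with their separation, then summarize that growth. The following sketch implements a generic variogram-slope texture index on a random image; it is not the Medimaps implementation, and its output is not a TBS value.

import numpy as np

def variogram_slope(img, max_lag=8):
    """Generic variogram-based texture index for a 2-D grayscale image.

    For each lag h, V(h) = mean of (img[i, j] - img[i, j + h])^2 over the
    image (along one axis, for brevity). Returns the slope of log V(h)
    versus log h; rough and smooth textures produce different slopes.
    """
    lags = np.arange(1, max_lag + 1)
    v = np.array([np.mean((img[:, h:] - img[:, :-h]) ** 2) for h in lags])
    slope, _ = np.polyfit(np.log(lags), np.log(v), 1)
    return slope

rng = np.random.default_rng(0)
noise = rng.normal(size=(128, 128))
# A "smoother" image: the same noise blurred by local averaging.
smooth = (noise + np.roll(noise, 1, 0) + np.roll(noise, 1, 1)) / 3.0
print("rough :", round(variogram_slope(noise), 3))
print("smooth:", round(variogram_slope(smooth), 3))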

Analogous to the World Health Organization (WHO) classification of normal, low bone mass (osteopenia), and osteoporosis, TBS values for postmenopausal Caucasian women were initially established with the advent of the software: TBS ≥1.350 is “normal”; TBS between 1.200 and 1.350 is considered to be associated with “partially degraded” microarchitecture; and TBS ≤1.200 is classified as “degraded” microarchitecture ( Table 4.6 ).

Table 4.6
Trabecular Bone Score (TBS) Interpretation: How is the Number Interpreted?

BMD category                           Corresponding TBS category
Normal (T-score > –1)                  Normal (TBS > 1.350)
Low bone mass (–2.5 < T-score < –1)    Partially degraded (1.200 < TBS < 1.350)
Osteoporosis (T-score < –2.5)          Degraded (TBS < 1.200)

BMD, bone mineral density; TBS, trabecular bone score.

These cutoff thresholds were established by a working group of TBS users from different countries. McCloskey et al. subsequently conducted a meta-analysis of TBS in fracture risk prediction, which resulted in a change of the classification thresholds: a TBS value <1.23 is “degraded”; between 1.23 and 1.31 is “partially degraded”; and >1.31 is “normal.” There was no difference between sexes. At this time, the FDA has approved only a postmenopausal Caucasian database, although other gender- and race-specific databases are available: White, Black, and Mexican American, for both females and males (personal communication). We use these additional databases in our research.
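
For chart review or research databases, the revised cutoffs translate directly into a categorization rule. A minimal sketch using the McCloskey et al. thresholds quoted above; how values falling exactly on a boundary are assigned is our assumption, since the thresholds are quoted as open ranges.

def classify_tbs(tbs):
    """Classify a lumbar spine TBS value per the McCloskey et al. thresholds."""
    if tbs < 1.23:
        return "degraded"
    if tbs <= 1.31:
        return "partially degraded"
    return "normal"

for value in (1.18, 1.27, 1.36):
    print(value, "->", classify_tbs(value))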

Most of the studies reported have dealt with older populations, as the primary use of TBS has been in establishing fracture risk in the osteoporotic population.

In 2015, Silva et al. reviewed the literature on fracture risk prediction by TBS for the International Society for Clinical Densitometry (ISCD), which established its official positions: 1) TBS is associated with vertebral, hip, and major osteoporotic fracture risk in postmenopausal women; 2) TBS is associated with hip fracture risk in men older than 50 years; and 3) TBS is associated with major osteoporotic fracture risk in men older than 50 years. Thus, the official positions apply to an older population.

Over the last several years, there have been reports of the responses of TBS to various antiresorptive and anabolic osteoporosis medications and to other disease states. Senn et al. reported on the effects of teriparatide (Forteo) and ibandronate (Boniva) on spine BMD and TBS in 210 postmenopausal women with osteoporosis: 70 treated with teriparatide 20 μg self-injected subcutaneously daily (age 68.9 ± 9.0 years) versus 140 treated with intravenous ibandronate 3 mg every 3 months (age 67.4 ± 6.5 years). Only women with evaluable DXA scans for both lumbar spine BMD and TBS at baseline and after 2 years were included in the analysis, making 65 (93%) and 122 (87%) evaluable in the teriparatide and ibandronate groups, respectively. Both groups started with TBS values in the “degraded” class (1.206 ± 0.100 versus 1.209 ± 0.100, respectively). After 24 months of therapy, lumbar spine BMD and TBS increased significantly more with teriparatide than with ibandronate (+7.6 ± 6.3% versus +2.9 ± 3.3% for BMD and +4.3 ± 6.6% versus +0.3 ± 4.1% for TBS; p < 0.0001 for both). Compared to baseline, increases in BMD were significant for both teriparatide and ibandronate, while increases in TBS were significant only for teriparatide (p < 0.0001), suggesting a stronger positive effect on bone microarchitecture with teriparatide. Saag et al. reviewed TBS results obtained from archived DXA scans from a randomized clinical trial of patients with chronic glucocorticoid-induced osteoporosis (median 7.5 mg/d prednisone for ≥90 days) treated with alendronate (Fosamax) or teriparatide. In patients treated with teriparatide, TBS was significantly increased from baseline at 18 and 24 months and had increased by 3.7% at 36 months; in patients treated with alendronate, TBS did not change significantly at any time from baseline to 36 months. Bilezikian et al. analyzed the effects of subcutaneous abaloparatide on TBS in a post hoc retrospective analysis of 138 subjects from a phase II 24-week double-blind randomized clinical trial. Baseline TBS was 1.181 ± 0.078 in the 80 μg abaloparatide subcutaneous self-injection group (which would prove to be the clinically approved dosage) versus 1.201 ± 0.068 in the 20 μg teriparatide group (the approved dose); both values are in the “degraded” class. TBS increased by 2.37% in the abaloparatide group (n = 24) versus 1.16% in the teriparatide group (n = 31) at 12 weeks, and by 5.23% versus 3.27%, respectively, at 24 weeks. Therefore, the effect of abaloparatide on TBS was greater than that of placebo and of teriparatide. An increase in TBS greater than the least significant change (LSC) was attained by 52.2% of subjects treated with abaloparatide versus 30.0% of the teriparatide group. The authors concluded that an increase in TBS in the context of anabolic treatment is associated with a reduction in fracture risk over and above what an increase in BMD would indicate (although this remains to be demonstrated), and that the results help to differentiate abaloparatide from teriparatide in terms of potential effects on bone microarchitecture as determined by its surrogate measurement, TBS. The exact implications of these differences and their significance in the treatment of stress fractures and fracture healing, if any, remain to be determined.

The effects of denosumab and other antiresorptive agents on TBS, as well as the effects of various disease states, such as Crohn’s disease, end-stage renal disease in patients on hemodialysis, primary aldosteronism, ankylosing spondylitis, and other so-called secondary causes of osteoporosis, have also been studied but are beyond the scope of this text.

The normative database extends from 20 to 80 years of age. However, only a few studies have looked at younger individuals, including young women with anorexia nervosa. In the latter study, Donaldson and colleagues reported on 57 adolescent girls aged 11–18 years with anorexia nervosa recruited from an urban eating disorders clinic, where they had undergone DXA examinations and peripheral QCT studies. According to adult normative values, the TBS was degraded in 6 participants (11%) and partially degraded in 19 (33%).

Heiniö et al. looked at the association between long-term exercise loading and TBS in different exercise loading groups. Eighty-eight Finnish female athletes competing at a national or international level and 19 habitually physically active nonathletes, with a mean age of 24.3 years (range 17–40 years), all postpubertal and premenopausal, were analyzed. The athletes represented seven different sports and five different loading regimens based on their sport-specific training history: triple jumpers and high jumpers comprised the high-impact group, soccer and squash players an odd-impact group, power lifters a high-magnitude group, endurance runners a repetitive-impact group, and swimmers a repetitive nonimpact group. Several parameters of training and fitness status were employed, including maximal isometric force and dynamic performance of the lower extremities. Endurance runners’ mean TBS was about 6% lower than that of controls, while power lifters had about 3% higher TBS than the reference group. In the high-impact group, the correlations between maximal isometric leg press force, peak jumping force, and TBS were significantly positive. The authors “found that athletes experiencing a large number of monotonous impacts (repetitive, moderate impact loading represented by endurance runners) in their training and competition had significantly lower TBS compared with all other groups including the reference group, whereas the athletes experiencing extreme axial loading (high-magnitude loading represented by power lifters) had somewhat higher crude TBS values compared with the reference group.” These TBS values were independent of lumbar spine BMD.

Thus, with increasing interest in the TBS of relatively young elite athletes, we undertook a study of our own athlete population, most of whom had been referred for recurrent bone stress injuries, including traumatic fractures, stress fractures, stress reactions, and various degrees of delayed healing or nonunion. In prospectively studying the athletes referred to us, we had noticed some athletes with degraded or partially degraded TBS values. We therefore undertook a retrospective study of the TBS values in the athletes in whom DXA studies had been performed: 10 Major League Baseball (MLB) players, 5 minor league players, 7 NBA players, 2 collegiate basketball players, 5 active and 1 retired NFL players, and 4 amenorrheic intercollegiate female runners (n = 34). All TBS databases employed were gender and ethnicity specific. One MLB player (Black) and one minor league player (White) had degraded TBS; one minor league player (White) had partially degraded TBS; two NBA players (Black) had degraded and one NBA player (White) had partially degraded TBS; and one NFL player (Black) had partially degraded TBS (n = 7), indicating that about 20% of the population we deal with had an abnormally low TBS ( Fig. 4.3 ).

Fig. 4.3, Trabecular bone score (TBS) in 2 NBA centers: A = “degraded”; B = “normal.” First panel: FDA-approved normal female Caucasian database; second and third panels: gender- and ethnicity-specific databases (male Black and male Mexican American, respectively).

All of the individuals had normal or “high” BMD based on their Z-scores, and therefore, BMD and TBS were discordant in these individuals. Although the role that an abnormally low TBS plays in the pathogenesis of bone stress injuries or fracture healing is not certain at this time, it probably indicates some underlying microstructural abnormality for which it is a surrogate, and therefore might be able to serve as a screening tool for athletes at greater risk of bone problems during their careers. What to do about this structural abnormality remains to be seen.

New approaches are constantly being developed for further evaluation of bone. A new software product for the analysis of TBS in individual bones, the Texture Research Investigation Platform (TRIP) (Medimaps Group SA, Geneva, Switzerland), is now available and will be utilized going forward. This product can take images from various modalities, including X-ray, DXA, CT, pQCT, HRpQCT, micro-CT, and MRI, and analyze them for bone quality and microarchitecture; it handles several image formats (e.g., DICOM, JPG, PNG, TIFF, BMP) and various image sizes (personal communication). We hope that TRIP will further help in the evaluation of the patient with a stress fracture, because the analysis can be localized to the bone in question rather than applied, as lumbar spine TBS is, to a bone that may not be involved in the local process.
