For additional material related to the content of this chapter, please see Chapters 3 and 94.
Emma is a 3-year-old girl with developmental delays who presents with her parents in primary care for her annual well-visit. The parents have ongoing concerns about her slow language development, coupled with frequent tantrums and rigidity. Emma continues to rely primarily on single words to communicate. When she is calm, she is social, affectionate, and engaging. Some days she engages in her speech and special education programming, but frequently she refuses to sit. She cries, scratches, and falls to the floor when prompted to engage. Similar behavior problems occur at home during most daily routines. Today, the parents express frustration, discouragement, worry, and confusion: “Why isn’t Emma making progress? How can we parent a child who isn’t learning? Will she ever catch up?”
Learning and behavior change are at the very heart of developmental and behavioral pediatrics. Whether clinicians are supporting children like Emma in the opening vignette, treating older youth with autism or mental health concerns, or managing any other challenge with implications for child health and wellbeing, clinicians from a wide range of disciplines and an even wider array of work settings are directly engaged in helping children develop skills, reach milestones, and move from maladaptive behaviors toward healthy interaction patterns. Pediatricians, psychologists, speech therapists, occupational therapists, educators, behavior analysts, social workers, and other clinicians are all in the “business of behavior change,” though they likely conceptualize their methods quite differently. Individual children (and their parents, who are learning right along with them) require individualized applications of methods drawn from across the continuum of learning paradigms.
This chapter discusses how the major theoretical foundations conceptualize learning and behavior change, and how these paradigms directly inform the ways that clinicians facilitate prosocial development in children. Following the introduction of key concepts from each of the most relevant theoretical areas, we will return periodically to Emma and her family to explore how theories link directly to clinical application. Interventions mentioned in this chapter are described in more detail elsewhere in the volume, but embedding applied ideas here is intended to demonstrate to the reader that these concepts are in play at all times in pediatric clinical contexts whether we attend to them or not. We propose that having a strong foundational understanding of learning and behavior change will improve your practice, help you to adjust and expand intervention approaches, and inevitably improve the outcomes of the children and families you serve.
Behavioral learning theory began with Pavlov’s famous work describing classical conditioning processes ( ). He demonstrated that biologically driven unconditioned responses (or reflexes; UR), such as pupillary dilation and the salivary reflex, are naturally elicited by unconditioned stimuli (US) in the environment, and that these relationships can be extended through repeated pairings to a previously neutral stimulus (NS) that does not yet elicit any type of response ( Fig. 4.1 ). This associative learning mechanism (also referred to as respondent or Pavlovian conditioning) is considered a foundational learning process. Extensions of classical conditioning proposed later by John Watson suggested that emotions, speech, thoughts, and even personalities might be conceptualized as the result of interrelated stimuli and responses unique to an individual’s learning history. His work also laid the foundation for understanding how psychopathology might develop through disadvantageous pairings. His infamous “Little Albert” experiment paired a loud noise (US) with a white rat (NS) to yield a conditioned response (CR) of crying, which serves as an experimental model for the development of childhood phobias.
Ensuing empirical work identified that not all stimulus pairings are created equal; learning through association is affected by several parameters. Stimulus contiguity describes the latency between the presentation of the NS and the US: the shorter the interstimulus interval, the stronger the ensuing association. Of note, stimuli can be paired simultaneously, with slight delays, and even in different orders (e.g., NS followed by US, or US followed by NS) and still yield an association. Contingency is also relevant to the development and strength of an association: the probability that one stimulus occurs must be high given the occurrence of the other, before the UR is emitted (e.g., a loud noise will not be successfully paired with a light cue if the rat’s fear response occurs before the light turns on). Preparedness also affects association, in that biologic readiness facilitates some associations more than others (e.g., it is evolutionarily adaptive for a single pairing of a novel food with nausea to generate an association). Associations learned through classical conditioning can be unpaired through a learning process called extinction (note that a distinct process, also called extinction, occurs in the operant learning paradigm described later). If the conditioned stimulus (CS) or US is repeatedly presented alone, the strength of their relationship decreases, and the CS becomes less effective at eliciting the CR. Spontaneous recovery is the reemergence of a CR after extinction has occurred and might be observed intermittently despite the lack of continued association between the US and CS.
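The acquisition and extinction dynamics described above can be illustrated with a toy simulation, in the spirit of classic formal models of associative learning. This sketch is not from the chapter; the learning rate, asymptote, and function names are illustrative assumptions:

```python
def update_strength(v, us_present, rate=0.3, v_max=1.0):
    """One conditioning trial: associative strength moves a fraction of the
    way toward the asymptote (v_max when the CS is paired with the US,
    0 when the CS is presented alone, as in extinction)."""
    target = v_max if us_present else 0.0
    return v + rate * (target - v)

def run_trials(n_paired, n_alone, rate=0.3):
    """Simulate acquisition (CS+US pairings) followed by extinction
    (CS presented alone); return the strength after each trial."""
    v, history = 0.0, []
    for _ in range(n_paired):
        v = update_strength(v, True, rate)
        history.append(v)
    for _ in range(n_alone):
        v = update_strength(v, False, rate)
        history.append(v)
    return history

history = run_trials(10, 10)
# Strength rises trial by trial during pairings, then decays toward zero
# during extinction, mirroring the acquisition/extinction curves in the text.
```

Note that spontaneous recovery is not captured by this minimal sketch: after extinction drives the modeled strength near zero, the real-world CR can still intermittently reappear.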
Research on classical conditioning focuses primarily on very specific (and at times reductionist) demonstrations that can be experimentally tested. The extent to which the paradigm can explain complex human learning is less clear, but the processes of classical conditioning serve as a foundation for essentially all the learning theories that follow.
Emma’s parents report that she cries whenever they try to bring her into a public restroom. Upon discussion, her mother notes that on a few occasions Emma has been quite startled by the loud automatic flush of a toilet in these settings. A few pairings of the loud flush (US) with the public bathroom (NS) were enough to generate a conditioned fear response. The family is reluctant to attempt to extinguish her conditioned response because it will require frequent trips to public bathrooms; brainstorming with the family might generate strategies such as using noise-canceling headphones during trips to ensure she is not exposed to the full volume of the flush or providing access to an electronic tablet during the trip (creating a new, positive association) and afterwards (creating an operant contingency, described later).
Operant conditioning, based on seminal work by B.F. Skinner, expands on the classical conditioning models described earlier by providing a paradigm that describes much broader interrelationships between behavior and environmental events, as well as how to predict and alter these relationships. Operant conditioning is the process by which contingencies between stimuli in the environment (antecedents), behaviors, and subsequent effects (consequences) are learned, as evidenced by the change in likelihood that those behaviors occur under similar circumstances in the future ( ). These relationships are conceptualized as a three-term contingency (also known as the antecedent-behavior-consequence [ABC] model). Antecedents set the stage and conditions for, and can even trigger, behavioral sequences; operant models describe such an antecedent as a discriminative stimulus because it specifically signals the presence of the previously learned behavioral contingency. As an example from an experimental paradigm, a pigeon might learn that when the cage light turns on, pecking the lever will release a food pellet. Because the lever does nothing when the light is off and the pigeon only pecks it when the light turns on, the pecking behavior is under the stimulus control of the light. The term consequence in the ABC model simply indicates that the stimulus is an effect of the behavior; it does not imply that the consequence is favorable or unfavorable. As indicated in Fig. 4.2 , consequences that increase the probability of that behavior being emitted again under the same antecedent conditions are said to have reinforced that behavior, and consequences that decrease the probability of the behavior occurring are said to have punished that behavior. The modifiers positive and negative (which likewise carry no implication of the consequence being favorable or not) describe whether the consequence involved the addition or removal of a stimulus.
Parents, teachers, and even doctoral-level clinicians routinely misuse these terms when discussing operant processes; in applied contexts it is far more important to specify whether the intention is to make a behavior happen more often (e.g., compliance, correct responses on a math worksheet, urinating in the toilet) or less often (e.g., aggression, self-injury, school truancy). Extended discussion of punishment is beyond the scope of this chapter because learning (the acquisition or increase of a behavior or skill) is, by definition, a reinforcement-based process.
From an operant perspective, questions about how to provide reinforcement for a behavior are central to learning. Four broad categories of reinforcement encompass all possible consequences delivered following a behavior: attention (e.g., praise, affection, and other forms of social interaction), tangibles (e.g., access to toys, foods, activities, privileges), escape/avoidance (the avoidance or delay of work or academic demands, pain/discomfort, and other undesired experiences), and automatic (including internal sensory experiences and other outcomes that are not mediated by other people). Note that simply delivering praise or a tablet following a child’s successful demonstration of a new skill may or may not reinforce that behavior (i.e., increase the likelihood that the behavior occurs again); whether the stimulus functions as a reinforcer depends on many factors, including individual preferences as well as fluctuating and relative reinforcer values (much like commodities in the stock market).
Presuming that a high-value reinforcer is available, when and how often reinforcement is delivered following a target behavior will directly impact the rate at which the behavior occurs in the future. Reinforcement schedules describe these parameters. Continuous reinforcement describes a schedule in which reinforcement is delivered following each instance of the desired behavior; this consistency is ideal for supporting the development of a new skill but is quite arduous to maintain. Noncontinuous schedules are described as intermittent reinforcement. Fig. 4.3 presents how four variations on these schedules of reinforcement delivery (denoted by the black tick marks) affect the rate of behavioral response. Fixed-ratio schedules describe reinforcement delivery following a set number of responses (e.g., “once you have done 5 math problems, you can take a break”); this produces a high response rate followed by a pause after reinforcement. Variable-ratio schedules involve reinforcement delivery after a variable number of responses with a set mean (e.g., a slot machine that pays out once every 100 pulls, on average); this produces a high and steady response rate. Schedules can also be based on time rather than number of responses. Fixed-interval approaches (e.g., providing a break every 30 minutes, regardless of how much work got done) and variable-interval approaches (e.g., praising a child for playing quietly every 5 minutes on average, with both longer and shorter intervals between praise) produce moderate but steady response rates and are more sustainable than ratio schedules.
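The two ratio schedules above can be made concrete with a brief sketch. This is a hypothetical simulation, not from the chapter; the function names and the 5-response ratio are illustrative assumptions:

```python
import random

def fixed_ratio(n_responses, ratio):
    """Fixed-ratio schedule: reinforcement after every `ratio`-th response.
    Returns the response numbers at which reinforcement is delivered."""
    return [i for i in range(1, n_responses + 1) if i % ratio == 0]

def variable_ratio(n_responses, mean_ratio, rng):
    """Variable-ratio schedule: reinforcement after a variable number of
    responses averaging `mean_ratio` (like a slot machine payout)."""
    delivered, since_last = [], 0
    next_at = rng.randint(1, 2 * mean_ratio - 1)  # uniform, mean = mean_ratio
    for i in range(1, n_responses + 1):
        since_last += 1
        if since_last >= next_at:
            delivered.append(i)
            since_last = 0
            next_at = rng.randint(1, 2 * mean_ratio - 1)
    return delivered

fr = fixed_ratio(20, 5)                        # reinforced at responses 5, 10, 15, 20
vr = variable_ratio(200, 5, random.Random(0))  # unpredictable, ~every 5th response
```

The key clinical contrast is visible even in this toy version: the fixed-ratio deliveries are perfectly predictable (inviting a post-reinforcement pause), while the variable-ratio deliveries cannot be anticipated, which is what sustains the high, steady response rate.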
The target response requirements to produce reinforcement can also be adjusted to accelerate learning and behavior change. Shaping is the process of delivering reinforcement for a behavior that is already in the individual’s repertoire (e.g., providing tablet access for wearing new glasses for 30 seconds without taking them off); once that behavior is strengthened, the behavioral requirement is incrementally increased over time and repeated successful trials (e.g., keeping the glasses on for 60 seconds, then 2 minutes, then 5 minutes, and so on) until the intended target is reached. Another mechanism for accelerating change is the use of prompting; this antecedent-focused strategy may involve the establishment of a new, clear discriminative stimulus so that the individual is more aware of the available contingencies. This approach may include coaching parents to deliver a clear “Pick up the block” instruction rather than a vague “Can you help mommy clean up?” question, adding a gesture indicating what to do and how to do it, or even providing physical hand-over-hand guidance to eliminate any ambiguity regarding what behavior is expected. Prompt fading, the gradual removal of artificial antecedent stimuli once the target behavior is established, ensures that the individual does not become overly reliant on this support (prompt dependent).
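The incremental criterion increases that define shaping can be sketched as a simple plan generator. This is purely illustrative; the doubling step and the glasses-wearing durations are assumptions for the example, not prescriptions from the text:

```python
def shaping_ladder(start, target, step_factor=2.0):
    """Build an escalating series of behavioral criteria: reinforcement is
    delivered at each criterion until it is mastered, then the requirement
    is raised, until the final target is reached."""
    criteria, current = [], float(start)
    while current < target:
        criteria.append(current)
        current = min(float(target), current * step_factor)
    criteria.append(float(target))
    return criteria

# Seconds of glasses-wearing required before reinforcement, per step:
ladder = shaping_ladder(30, 300)  # [30.0, 60.0, 120.0, 240.0, 300.0]
```

In practice the step size would be tuned to the child: a step is raised only after repeated successful trials at the current criterion, and it may need to be smaller (or temporarily lowered) if responding falters.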
Not infrequently, undesired behaviors are unintentionally prompted, reinforced, and shaped by environmental contingencies (e.g., a child who has inadvertently learned that a certain four-letter word generates continuous high-value attention from surrounding adults, especially under the stimulus control of grandma). Rather than attempting to change this behavior by adding even more attention to the mix (scolding, explanations, arguing, etc.), the family can instead focus on removing or withholding reinforcement following the target behavior. The process of removing or withholding reinforcement from a behavioral contingency is called extinction (sometimes also conceptualized as a form of negative punishment, since it is a behavior-reduction technique). When attention is the reinforcer, this approach is often referred to as planned ignoring. The speed with which extinction suppresses the behavior depends on the strength of the behavior, the individual’s learning history, and the fidelity of application (e.g., mostly ignoring but occasionally responding is equivalent to a variable schedule, which is quite reinforcing). Even perfect application of extinction will likely be followed by an extinction burst: as shown in Fig. 4.4 , a steady rate of behavioral responding immediately increases when the reinforcement is discontinued, and the response rate diminishes only gradually over time. A known limitation of clinical procedures (e.g., planned ignoring) that rely on an extinction process is that the results are often temporary, with the extinguished behavior likely to reappear after the passage of time or once the procedure is withdrawn (spontaneous recovery). To mitigate this, extinction might be augmented with a simultaneous reinforcement-based plan in which an alternative behavior is intentionally reinforced at a high rate (e.g., teaching the child to say “I love you!” to grandma instead of using inappropriate words to get her attention).
The entire field of applied behavior analysis (often used in the treatment of children with autism; see Chapter 94 ) is almost exclusively based on the basic operant processes of learning and behavior change described here, along with complex iterations and extensions of these concepts (including a whole branch devoted to applying these ideas to verbal behavior and language development). That said, operant paradigms become more difficult to test (and often less useful in applied settings) when the behaviors of interest are complex, difficult to measure, or impossible to observe directly.
After trying many discipline approaches, Emma’s parents have concluded that she simply does not understand consequences. They most often use verbal scolding and explanations in response to her aggression (“No! That hurts! You hurt daddy! Not nice.”). When Emma screams as the parents start to remove the tablet from her hands, they sometimes allow her to keep it for at least another minute. In reality, Emma appears quite adept at detecting operant contingencies; moreover, the intermittent success she experiences has strengthened her repertoire. Even her scream has been shaped to peak effectiveness, as the aversiveness of the high pitch results in her parents relenting almost immediately. With support from the care team, Emma’s parents gradually increase their awareness that Emma is in fact learning all the time from (sometimes unintentional) teaching, and that the current patterns hold clues for how to help her develop new habits. The team can also prepare the parents to anticipate the extinction burst process after they adopt a consistent, function-based approach.