J Exp Anal Behav. 2010 May; 93(3): 349–367.
PMCID: PMC2861874

Differential Reinforcement of Alternative Behavior Increases Resistance to Extinction: Clinical Demonstration, Animal Modeling, and Clinical Test of One Solution


Basic research with pigeons on behavioral momentum suggests that differential reinforcement of alternative behavior (DRA) can increase the resistance of target behavior to change. This finding suggests that clinical applications of DRA may inadvertently increase the persistence of target behavior even as they decrease its frequency. We conducted three coordinated experiments to test whether DRA has persistence-strengthening effects on clinically significant target behavior and then tested the effectiveness of a possible solution to this problem in both a nonhuman and a clinical study. Experiment 1 compared resistance to extinction following baseline rates of reinforcement versus higher DRA rates of reinforcement in a clinical study. Resistance to extinction was substantially greater following DRA. Experiment 2 tested a rat model of a possible solution to this problem. Training an alternative response in a context without reinforcement of the target response circumvented the persistence-strengthening effects of DRA. Experiment 3 translated the rat model into a novel clinical application of DRA. Training an alternative response with DRA in a separate context resulted in lower resistance to extinction than employing DRA in the context correlated with reinforcement of target behavior. The value of coordinated bidirectional translational research is discussed.

Keywords: translational research, behavioral momentum, persistence-strengthening effects of DRA, resistance to change, alternative reinforcement

Applied Behavior Analysis has proven remarkably effective at reducing a wide range of unwanted target behaviors and replacing these behaviors with prosocial alternatives (Iwata et al., 1997). Differential reinforcement of alternative behavior (DRA) has been one of the principal tools used to effect these improvements (Petscher, Rey, & Bailey, 2009). Despite the development and refinement of behavior-change technologies over the decades, Applied Behavior Analysis has made limited progress in achieving long-term maintenance and generalization of treatment gains. Osnes and Lieblein (2003) reported that most clinical studies fail to demonstrate the durability and generality of treatment effects. One impediment to resolving the problem of treatment relapse, or maintenance and generalization failure, is a lack of understanding of why target behaviors that have been eliminated in one context later resume in that or other contexts. However, developments in behavioral momentum theory during the past two decades may provide a model for understanding treatment relapse and the conditions under which it is more and less likely to occur.

Nevin's formulation of behavioral momentum theory has shown there are two independent features of the discriminated operant: (a) its ongoing response rate, and (b) the resistance of this response rate to change when disrupted by operations such as extinction, satiation, alternative reinforcement, and distraction (Nevin, 1992; Nevin & Grace, 2000; Nevin, Tota, Torquato & Shull, 1990). Ongoing response rate is known to be a function of the contingency between responding and its consequences (i.e., response–reinforcer relations). By contrast, the resistance of baseline response rate to change is a function of the total rate of reinforcement present in the context in which reinforced behavior occurs (i.e., context–reinforcer relations). These response–reinforcer and context–reinforcer functional relations have been shown to be robust and general across species and populations including pigeons (Nevin et al., 1990), rats (Mauro & Mace, 1996), goldfish (Igaki & Sakagami, 2004), adults with developmental disabilities (Mace et al., 1990) and college students (Cohen, 1996).

Demonstrations that resistance to change is a function of context–reinforcer relations have involved the use of alternative reinforcement in the form of time-contingent reinforcer deliveries and differential reinforcement of alternative behavior (DRA). Nevin et al. (1990, Experiment 2) illustrated the persistence-strengthening effects of DRA. Figure 1 diagrams their baseline multiple concurrent schedule of reinforcement. Pigeons were presented with two color-illuminated response keys mounted on an operant panel. Pecks on the right key were designated the target behavior and pecks on the left key were defined as alternative behavior. Condition A represents a typical DRA arrangement in which comparatively high rate reinforcement is provided for an alternative response (45 reinforcers/hr in this case) while the target behavior produces a lower rate of reinforcement (15 reinforcers/hr). With the alternative and target response keys illuminated green, the total reinforcement available in the green context, or the context–reinforcer relation, was 60 reinforcers/hr. Conditions B and C of the baseline schedule represented different rates of reinforcement for the target response without alternative reinforcement. Conditions A and C contained the same total number of context–reinforcer contingencies (60 reinforcers/hr) and Conditions A and B contained the same number of response–reinforcer contingencies for the target right key response (15 reinforcers/hr). This arrangement permitted a direct test of whether resistance to change was a function of response–reinforcer or context–reinforcer contingencies.

Fig 1
Diagram of the 3-component multiple concurrent schedule of reinforcement from Experiment 2 of Nevin et al. (1990). Condition A (green key condition) models DRA where reinforcement on the alternative left key is three times greater than reinforcement of ...

Nevin et al. (1990) measured resistance to change during the response disruptors of extinction and short and long periods of satiation. Although DRA in Condition A resulted in lower target response rates, resistance to change was comparable in Conditions A and C and greater than Condition B in all tests, thus supporting the hypothesis that resistance to change is a function of context–reinforcer contingencies. These findings demonstrated that although DRA can reduce occurrences of a target response it can also make the response more persistent when the DRA contingency is degraded in some fashion.

This finding from basic research has considerable implications for clinical applications of DRA and possible reduction or avoidance of treatment relapse. One factor contributing to treatment relapse may be that DRA strengthens the persistence of unwanted target behavior such that when there are compromises to DRA treatment integrity there is resurgence in target behavior. Basic research supports this speculation. For example, Podlesnik and Shahan (in press) arranged VI 120-s food reinforcement for pigeons in a two-component multiple schedule that also included additional time-contingent reinforcer deliveries in one component. Extinction eliminated responding in both components, but when baseline reinforcement was reinstated, responding recovered more quickly in the component with the added reinforcement. Clinical studies have reported similar recurrence of target behavior following its reduction via time-contingent reinforcer deliveries (Ahearn, Clark, Gardenier, Chung, & Dube, 2003; DeLeon, Williams, Gregory & Hagopian, 2005).

The purpose of the present investigation was threefold. First, we sought to establish whether DRA increases the resistance to extinction of target behavior in individuals with developmental disabilities (Experiment 1). Second, given demonstration of the persistence-strengthening effects of DRA, we developed an animal model of one possible solution to this clinical problem (Experiment 2). We chose an animal model to test the effectiveness of this solution in order to avoid placing both unwanted target behavior and prosocial alternative behavior in a clinical population on extinction without evidence that the proposed solution to persistence strengthening would be effective. Finally, in Experiment 3 the procedure used in the animal model was translated into a novel clinical intervention and its effectiveness at avoiding or reducing the persistence-strengthening effects of DRA was evaluated.



Participants and Setting

Three children with developmental disabilities who were referred for treatment of severe behavior disorders participated in the study. Andy, a 7-year-old boy, was diagnosed with autism and severe mental retardation. He was able to follow simple, one-step instructions by using gestures and pointing to pictures. Andy attended a private day school for children with autism and pervasive developmental disorders. Tom and Jackie were inpatients in a children's hospital unit. Tom, a 7-year-old boy, was diagnosed with Down syndrome and Attention Deficit Hyperactivity Disorder (ADHD). He was admitted to the unit for the treatment of food stealing, elopement, and tantrums. Jackie, a 4-year-old girl, was diagnosed with severe mental retardation, microcephaly, and moderate hearing impairment and was hospitalized for the treatment of aggression and self-injurious behavior. She was responsive to a few instructions delivered by manual signs.

Sessions for Andy were conducted in his classroom in a partitioned instructional cubicle measuring 6 m by 3 m containing a table and two chairs. Other students and teachers present in the classroom participated in their regularly scheduled activities, but were not within view of the cubicle during experimental sessions. Sessions for Tom and Jackie were conducted in a hospital room (3 m by 5 m) with two beds and a table and chairs or in a treatment room (3 m square) with a one-way observation window.

Target Behaviors, Measurement, and Interobserver Agreement

Andy's target behavior was hair pulling, defined as an extension of the hand toward the hair of another person, including contact with the hair or attempts at contact. Tom's target behavior was food stealing, defined as an extension of the hand toward another person's plate of food during mealtimes. Food contact and attempts at food contact were recorded. The target behavior for Jackie was aggression, defined as hair pulling, scratching, hitting, and head butting. Alternative responses to be reinforced during the DRA phase of the study were appropriate toy play (touching or interacting with a toy for a minimum of 5 s) for Andy and Jackie, and appropriate requests for food (asking an adult for a snack item) for Tom.

Two independent observers collected data on the target and alternative behaviors for all 3 participants. Data on these behaviors were recorded with a count-within-interval procedure using 10-s intervals and either computer software or paper and pencil. The second observer collected data on an average of 40% of the sessions across conditions and participants (range = 31% to 45%). Occurrence agreement for the target behaviors, calculated on a point-by-point basis, averaged 93% for all 3 participants (range = 89% to 100%).
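The point-by-point occurrence agreement calculation described above can be sketched as follows. This uses one common definition, in which only intervals scored by at least one observer enter the calculation; it is not necessarily the authors' exact formula, and the interval data are hypothetical.

```python
def occurrence_agreement(obs1, obs2):
    """Point-by-point occurrence agreement between two observers.

    obs1, obs2: per-interval counts, one entry per 10-s interval.
    Only intervals in which at least one observer scored an
    occurrence enter the calculation; an interval counts as an
    agreement when both observers scored at least one occurrence.
    """
    scored = [(a, b) for a, b in zip(obs1, obs2) if a > 0 or b > 0]
    if not scored:
        return 100.0  # neither observer scored any occurrences
    agreements = sum(1 for a, b in scored if a > 0 and b > 0)
    return 100.0 * agreements / len(scored)

# Hypothetical counts from two observers over ten 10-s intervals
primary = [2, 0, 1, 0, 3, 1, 0, 0, 1, 2]
secondary = [1, 0, 1, 0, 2, 1, 0, 0, 0, 2]
print(round(occurrence_agreement(primary, secondary), 1))  # → 83.3
```

Restricting the denominator to intervals with at least one scored occurrence avoids inflating agreement when most intervals contain no target behavior.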


Pre-study functional analysis

Functional behavioral assessment interviews (adapted from Iwata, Wong, Riordan, Dorsey, & Lau, 1982) and a functional analysis (Iwata, Dorsey, Slifer, Bauman, & Richman, 1982/1994) indicated that Andy's hair pulling was maintained by adult attention and Jackie's aggression was maintained by access to food. An interview and home visit suggested that Tom's food stealing was maintained by access to food.

Baseline (BL)

For Andy, two teachers in the preschool served as experimenters. Andy sat at his desk in the cubical with several toys available. One teacher sat on the floor 1 m from Andy and read a magazine. Another teacher was positioned behind Andy on the other side of the partition and was able to observe his behavior. Contingent on hair pulling or attempts to pull the hair of the teacher present in the cubicle, the teacher positioned behind the partition entered the cubicle and provided a verbal reprimand on a variable ratio 3 (VR 3) schedule. Immediately following the reprimand the teacher returned to her position behind the partition. No demands or other forms of attention were provided during this condition.

For Tom and Jackie, a therapist served as experimenter. Tom and the therapist sat at a table in a therapy room. Tom was given a hospital meal to eat. Approximately once every 30 s during Tom's meal, the therapist ate snack foods that Tom preferred from a plate positioned approximately 0.67 m from Tom. Food stealing resulted in access to bite-size pieces of food on a fixed ratio 1 (FR 1) schedule. Attempts at food stealing were not blocked and the therapist did not provide attention contingent on food stealing.

Jackie and the therapist were in a hospital room and the therapist held a box of preferred snack food in her hand. Aggressive responses resulted in the therapist providing Jackie with one piece of the snack food on a variable interval 60-s schedule (VI 60 s); a limited hold was not employed. Interval values for all VI schedules in the experiment were calculated using the Fleshler-Hoffman (1962) VI generator formulas and were signaled to the instructor via audiotape with earphones.
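The Fleshler-Hoffman (1962) progression referenced above generates a fixed set of intervals whose mean equals the scheduled VI value while approximating a constant probability of reinforcer setup over time. A minimal sketch of one common rendering of the formula (not necessarily the implementation the authors used):

```python
import math

def fleshler_hoffman(mean_interval, n_intervals):
    """Fleshler-Hoffman (1962) VI intervals.

    The nth of N intervals with scheduled mean T is
        t_n = T * [1 + ln N + (N - n) ln(N - n) - (N - n + 1) ln(N - n + 1)]
    with 0 * ln 0 taken as 0. The N intervals average exactly T and
    approximate a constant probability of reinforcer setup. In
    practice the list is shuffled before being used in a session.
    """
    T, N = float(mean_interval), n_intervals

    def xlogx(x):
        return x * math.log(x) if x > 0 else 0.0

    return [T * (1.0 + math.log(N) + xlogx(N - n) - xlogx(N - n + 1))
            for n in range(1, N + 1)]

# e.g., a VI 60-s schedule built from 12 intervals
intervals = fleshler_hoffman(60, 12)
print(round(sum(intervals) / len(intervals), 6))  # → 60.0
```

In the study, intervals generated this way were signaled to the instructor via audiotape, as described above.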

Differential reinforcement of alternative behavior (DRA)

An alternative to target behavior was reinforced under DRA schedules that were substantially richer than those supporting the target behavior. These programmed values, expressed as a percentage of the baseline rate of reinforcement for target behaviors, were 195% for Andy, 165% for Tom, and 185% for Jackie.

Andy sat at his desk with several toys available as in baseline. Immediately before each session, the teacher who had been positioned behind the partition used a graduated prompt hierarchy (gesture, verbal, physical prompts) to assist Andy to interact appropriately with the toys for approximately 15 s. Following this assistance, the teacher went to her position behind the partition. During the session, the teacher present in the cubicle provided descriptive praise statements contingent on appropriate play on a VI 30-s schedule. The baseline procedures and contingencies for reinforcing hair pulling remained in effect to more closely replicate Experiment 2 of Nevin et al. (1990).

For Tom, appropriate requests for food were reinforced on an FR 1 schedule. Contingent on an appropriate request for food, the therapist praised the request and handed one piece of preferred snack food to Tom. Food thefts continued to be reinforced as in baseline.

For Jackie, social reinforcement followed alternative behavior even though her target behavior, aggression, was maintained by access to food. We used this arrangement to represent the use of DRA in cases where the reinforcer maintaining the target behavior is unknown or maintained by automatic reinforcement. Five seconds of toy play was reinforced on a VR 2 schedule and aggression continued to produce access to food on a VI 60-s schedule as in baseline.

Extinction (EXT)

For all 3 participants, response blocking was used to effect an extinction operation. Circumstances replicated those of baseline, with two exceptions. First, attempts to pull hair, steal food, or engage in aggression were blocked by the teacher or therapist in the room. Blocking consisted of the adult placing his/her hand or arm in front of the participants' hand each time the participant attempted to engage in the target behavior. Second, the reinforcer maintaining each participant's target behavior was withheld. No other interactions took place between the experimenters and participants.

Experimental Design

The experimental conditions described above were presented to participants in a sequence of phases that arranged extinction to alternately follow baseline and DRA reinforcement in a counterbalanced order across participants. The sequence order for Andy was BL-DRA-EXT-BL-EXT. For Tom and Jackie, the order was BL-EXT-BL-DRA-EXT. The duration of each phase was determined by evidence of visual stability in the data during baseline and DRA and an apparent extinction process during extinction.

Results and Discussion

Figure 2 presents target response rates for the 3 participants across all phases of the study. In the initial baseline, Andy's hair pulls occurred at high rates (M = 186.7/hr; range = 172 to 209; top panel). The addition of the DRA intervention resulted in a sharp decrease in hair pulling after 2 sessions that generally continued over 19 sessions (M = 65.0/hr; range = 29 to 171). Implementation of extinction via response blocking in the third phase resulted in a large increase in hair pulling during the first extinction session (266/hr) that decreased steadily to near-zero levels after 36 sessions of extinction (M = 51.7/hr; range = 7.8 to 266). The return to baseline conditions in the fourth phase resulted in an immediate resumption of high and steady rates of hair pulling, although at a lower level than the initial baseline (M = 103.8/hr; range = 76 to 175). The second implementation of extinction in the subsequent phase resulted in an immediate reduction in hair pulling without an extinction burst. Hair pulling decreased to near-zero levels after three sessions and remained at those levels for the remaining eight sessions of extinction (M = 14.8/hr; range = 7.8 to 46.9).

Tom's first baseline phase resulted in variable but stable levels of food thefts (M = 74.2/hr; range = 20 to 119). The first session of the initial extinction phase produced a burst of food thefts to 371 per hour, followed by an immediate decrease in food thefts by the third session to near-zero levels that remained low during the next 10 sessions (M = 52.6/hr; range = 1 to 371). The return to baseline recaptured high rates of food thefts (M = 106.3/hr; range = 66.7 to 166.6), and the subsequent DRA intervention produced substantial reductions in food stealing over the course of 19 sessions, although food thefts resurged temporarily on the 17th session of DRA (M = 24.7/hr; range = 1 to 150). Tom's second extinction phase did not result in an initial burst, but food thefts continued intermittently in the baseline range over the first 22 sessions and took 30 sessions to occur consistently at near-zero rates.

Finally, Jackie's initial baseline rates of aggression were highly variable but stable (M = 285/hr; range = 45.5 to 636.4). Blocking aggression produced an immediate reduction in the target behavior during the first extinction phase that remained consistently at zero or near-zero levels after 8 sessions. The second baseline phase saw a resumption of high and variable rates of aggression (M = 427.02/hr; range = 72.7 to 854.6). Although the DRA intervention that followed the second baseline reduced rates of aggression, treatment effects were not as large as those observed for Andy and Tom (M = 257.0/hr; range = 62 to 632). The extinction phase that followed DRA required 34 sessions before aggression was reduced consistently to zero or near-zero levels (M = 135.3/hr; range = 0 to 895.9).

Rates of the alternative behavior during the DRA intervention varied widely across participants. The occurrence of appropriate toy play was 383/hr and 17/hr for Andy and Jackie, respectively. Mean appropriate requests for food were 177/hr for Tom.

Fig 2
Experiment 1: Rates of target behaviors during baseline, DRA and extinction conditions across participants.

The comparison of primary interest in this experiment is the relative resistance to extinction following baseline versus DRA rates of reinforcement. Figure 3 superimposes target behavior response rates during the two extinction phases expressed as the proportion of the preceding baseline response rate for all 3 participants. For all 3 participants, reducing the target behavior during extinction took at least three times as many sessions following DRA as following baseline. The number of extinction sessions following DRA versus baseline was 36 versus 10 for Andy (3.6 times as many following DRA), 36 versus 12 for Tom (3.0 times), and 37 versus 12 for Jackie (3.1 times).

Fig 3
Experiment 1: Comparison of rates of target behaviors during extinction following baseline reinforcement and DRA reinforcement. Data are expressed as the proportion of the preceding baseline response rate.

The results of Experiment 1 demonstrated the persistence-strengthening effects of DRA on target behavior. Although DRA reduced occurrences of target behaviors for all 3 participants, DRA was associated with greater resistance to change during the adjacent extinction phase compared to extinction following baseline rates of reinforcement. These results are consistent with and predicted by Experiment 2 of Nevin et al. (1990).


Experiment 1 illustrated the capacity of DRA to increase resistance to extinction of problem behavior. Experiment 2 evaluated a rat model of a possible solution to this untoward side effect of DRA. According to behavioral momentum theory, DRA increases persistence by adding reinforcement to the context in which the target behavior has a history of reinforcement. As Nevin et al. (1990, Experiment 2) demonstrated, reinforcing alternative behavior (the left key in condition A shown in Figure 1) increased the persistence of the target behavior during extinction tests (the right key in condition A, Figure 1). In our rat model, an alternative response was first trained in a context separate from the one in which the target behavior occurred. In the extinction test, the discriminative stimuli correlated with the alternative response were introduced to the context correlated with the target behavior. This practice is contrary to the widespread use of DRA in the context in which the target behavior occurs (e.g., Carr & Durand, 1985; Fisher et al., 1993). We hypothesized that the separate context training procedure would result in less resistance to extinction than DRA implemented in the target behavior context.

We chose to test our hypothesis with a rat model for two reasons. First, it permitted precise control of the subjects' historical experience with specific discriminative stimuli–reinforcer delivery pairings, which remains largely unknown in human participants. Second, by first testing this hypothesis with rats, we did not have to place alternative prosocial behavior on extinction with human clinical participants without having a basis for expecting the hypothesis would be supported.



Four experimentally-naïve male Charles River CD rats (MV 54 to MV 57) were used in this experiment. The animals were between the ages of 2 and 4 months at the beginning of the experiment and were housed individually throughout the experiment. The rats were maintained at 80% of their free-feeding weight through supplemental feeding. Water was continuously available in each animal's home cage.


Experimental sessions were conducted in four BRS/LVE two-lever operant chambers for rats (Model RTC-002). The left and right levers protruded 2.2 cm into the chamber from the right wall (operant panel) and each lever required a force of 0.15 N to operate. A bank of three jeweled lights was located 4.3 cm above each lever. Only the center light of each bank was used as the visual stimulus and provided either flickering or steady illumination. An automatic food dispenser delivered 45-mg Noyes pellets to a scoop-type chute centered on the operant panel 1.2 cm above the grid floor.

The experimental chambers were placed in MED Associates sound and light attenuating enclosures (Model ENV-015M). Additional sound attenuation was provided by a ventilation fan and a Grason-Stadler noise generator (Model 901B) that produced masking noise within 1 m of the enclosures. A computer using MED Associates notation was used to program stimuli and record responses through a MED Associates interface.


Preliminary training

Responding on the left and right levers was separately established using differential reinforcement of successive approximations. This was followed by continuous reinforcement (CRF) and fixed-ratio (FR) training in which the reinforcement schedule was gradually thinned to FR 15 across 3 to 9 days. The order in which training occurred on the left and right levers was counterbalanced across rats. Subsequently, a series of concurrent VI VI schedules was introduced and the schedules were gradually thinned from VI 10 s to VI 30 s across 10 days.


Baseline sessions consisted of three cycles of a three-component multiple concurrent schedule of reinforcement. Each component was 6 min in duration and was presented quasirandomly without replacement within a session. Components were separated by a 30-s intercomponent period during which no experimentally programmed events occurred. Thus, each cycle lasted 19.5 min and the total session duration was 58.5 min. The stimulus arrangements for the three components during baseline are illustrated in the top panel of Figure 4. Component 1 was accompanied by a darkened left light and a right light flickering at a rate of 1 flash per second (1 f/s). Both left and right lights flickered at 5 f/s during Component 2. During Component 3, the right light was darkened and the left light was on continuously. Baseline sessions were conducted daily. The experimental design involved several baseline sessions followed by a single extinction session. The number of baseline sessions for MV 54 to MV 57 was 41, 39, 20, and 39, respectively.

Fig 4
Experiment 2: Diagram of the discriminative stimuli and reinforcement rates in the three-component multiple concurrent schedule of the rat model. The top panel depicts the baseline arrangement and the bottom panel represents arrangements during the extinction ...

The arrangement of concurrent reinforcement for each of the three baseline components is also illustrated in the top panel of Figure 4. A single VI 30-s schedule operated continuously throughout the session and arranged reinforcers for either the left or right lever press through a probability gate (p). In Component 1, extinction was arranged on the left lever (p = 0) and the programmed reinforcement rate on the right lever was 24 food pellets per hr (p = .2). During Component 2, the programmed rate of reinforcement was 96 food pellets per hr on the left lever (p = .8) and 24 per hr on the right lever (p = .2). In Component 3, 96 food pellets per hr were programmed on the left lever (p = .8) and extinction was arranged on the right lever (p = 0). Thus, the combined reinforcement arranged in Components 1 and 3 equaled the rate of reinforcement provided in Component 2. Reinforcers programmed but not earned when each component ended were cancelled. For each individual animal, the response rate per hr during baseline was judged as stable by visual inspection before the extinction test was conducted. Additionally, performance was deemed stable when it met both of the following criteria: (a) response rates varied by no more than ± 15 responses/hr on the right lever and ± 10 responses/hr on the left lever in all three components for five consecutive sessions, and (b) ordinal relationships among response rates across the three components on the left and right levers remained stable for five consecutive sessions.
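The single-VI-plus-probability-gate arrangement described above can be sketched as follows. The simulation is hypothetical and uses exponential inter-setup times as a convenient approximation of a constant-probability VI (the experiment itself used a Fleshler-Hoffman progression); it illustrates how one 120-setups/hr stream is split into the programmed per-lever rates.

```python
import random

def simulate_component(p_left, hours=100.0, vi_mean_s=30.0, seed=1):
    """Simulate the probability gate on a single VI 30-s schedule.

    Each reinforcer set up by the VI timer is assigned to the left
    lever with probability p_left and to the right lever otherwise
    (p_left = 0 models extinction on the left lever). Inter-setup
    times are drawn from an exponential distribution as a stand-in
    for a constant-probability VI. Returns obtained setups/hr for
    (left, right).
    """
    rng = random.Random(seed)
    left = right = 0
    t = 0.0
    while t < hours * 3600.0:
        t += rng.expovariate(1.0 / vi_mean_s)  # time of next VI setup
        if rng.random() < p_left:
            left += 1
        else:
            right += 1
    return left / hours, right / hours

# Component 2: p = .8 left, .2 right on a 120 setups/hr stream
left_rate, right_rate = simulate_component(p_left=0.8)
print(round(left_rate), round(right_rate))  # roughly 96 and 24
```

Setting p_left = .2 with p_right implied as .8 would reverse the split, and p = 0 on either lever reproduces the extinction arrangements of Components 1 and 3.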

Extinction test

A single extinction test session was conducted after baseline performance was judged to be stable. All food reinforcement was discontinued during the extinction test by rendering the pellet dispenser inoperative. A multiple concurrent schedule similar to that used in baseline was employed during the extinction test. To allow responding in all three components to contact extinction during the beginning, middle, and end of the test, the components were presented quasirandomly within blocks of three components and data were recorded at the end of each 19.5-min block; each of the three components was presented once per block in random order. The extinction session was terminated at the completion of the block in which there had been no response on the right lever in any of the three components. Visual stimuli presented during the extinction test are displayed in the lower panel of Figure 4. Components 1 and 2 were signaled by the same visual stimuli used in baseline. A stimulus compound was introduced in a third component that combined the stimuli correlated with Components 1 and 3 during baseline. This stimulus compound modeled the clinical situation in which DRA is used to train an alternative behavior in a separate context (baseline Component 3) and the discriminative stimuli corresponding to DRA training are introduced into the context in which the target problem behavior has been reinforced (baseline Component 1). Thus, the signaled reinforcement histories in Component 2 and the compound of baseline Components 1 and 3 were equal during extinction (120/hr), and both were greater than the signaled reinforcement history correlated with Component 1 (24/hr).

Results and Discussion

The mean terminal absolute response and reinforcement rates for the left and right levers and the relative response and reinforcement rates for the right lever (target response) are displayed in Table 1. Obtained rates of reinforcement for both response alternatives (left and right levers) across all three components approximated the programmed rates of reinforcement for each alternative, suggesting experimental control over the absolute and relative rates of reinforcement. For all subjects, within-component response rates on the left lever were comparable for Components 2 and 3 that programmed 96 pellets/hr and were lowest in Component 1 (0 pellets/hr). Response rates on the right lever were highest in Component 1 (24 pellets/hr with no alternative reinforcement) and lowest in Component 3 (0 pellets/hr) for all subjects. An examination of the absolute and relative response rates in Components 1 and 2 (both arranging 24 pellets/hr) shows that for all subjects target responding was lower in the component that arranged response-contingent alternative reinforcement (i.e., DRA in Component 2).

Table 1
Absolute and relative response and reinforcement rates for the target (right) and alternative responses (left) during Baseline in Experiment 2. Each value is the mean of the final five sessions.

The log proportion of baseline responding on the right (target) lever across successive blocks of extinction for all 4 rats is presented in Figure 5. For all 4 rats, responding took the shortest amount of time to extinguish in Component 1, where the visual stimuli (1 f/s) were correlated with the least amount of programmed reinforcement (24/hr) and no alternative reinforcement. Conversely, the proportion of baseline target response rates for all subjects took longest to extinguish in Component 2, in which the visual stimuli (5 f/s) were correlated with the same overall programmed rate of reinforcement (120/hr) for the target and alternative responses. Target responding in the component that compounded the stimuli signaling baseline reinforcement for Component 1 (1 f/s) and Component 3 (constant light) was markedly lower than that in Component 2, even though the total programmed reinforcement correlated with the discriminative stimuli during extinction was equal (120/hr): for MV 54, MV 55, MV 56, and MV 57, respectively, the mean proportion of baseline response rate on the target lever was .64, .54, .76, and .52 for Component 2 and .37, .33, .33, and .31 for the compound of Components 1 and 3. Of relevance to the main experimental hypothesis, responding during extinction was comparable during Component 1 and the compound stimulus for MV 54 and MV 56, and lower in the compound stimulus than in Component 1 for MV 55.
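The dependent measure plotted in Figure 5, log proportion of baseline response rate, can be computed as sketched below; the baseline and block rates here are hypothetical.

```python
import math

def log_proportion_of_baseline(baseline_rate, block_rates):
    """Log10 proportion of baseline response rate per extinction block.

    Values near 0 indicate responding near the baseline rate;
    increasingly negative values indicate greater disruption.
    Blocks with zero responding return None, since log 0 is
    undefined (plots typically truncate such points).
    """
    return [math.log10(rate / baseline_rate) if rate > 0 else None
            for rate in block_rates]

# Hypothetical target-lever rates: baseline, then six extinction blocks
baseline = 120.0
blocks = [96.0, 60.0, 30.0, 12.0, 3.0, 0.0]
result = log_proportion_of_baseline(baseline, blocks)
print([round(v, 2) if v is not None else None for v in result])
# → [-0.1, -0.3, -0.6, -1.0, -1.6, None]
```

Expressing extinction data this way normalizes for different baseline response rates, so resistance to change can be compared across components and subjects on a common scale.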

Fig 5
Experiment 2: Log proportion of baseline response rates during six blocks of a single extinction session for Components 1, 2 and Compound of Baseline Component 1 and Component 3 stimuli across 4 rats.

The purpose of Experiment 2 was to model a possible solution to the problem of DRA increasing the resistance to extinction of unwanted target behavior. Baseline Component 1 represented reinforcement of the target response only, Component 2 simulated DRA reinforcement of alternative behavior in the same context in which the target response is also reinforced, and Component 3 represented establishing an alternative response with high-rate reinforcement in a context separate from the one correlated with reinforcement of the target behavior. When the discriminative stimuli in baseline Component 3 were placed in compound with the stimuli in Component 1, increased resistance to extinction was avoided in 3 of 4 subjects. For the 4th subject, responding during extinction in the compound stimulus was still lower than during Component 2.


Experiment 2 modeled a method of implementing DRA that attenuated its persistence-strengthening effects. The model suggests that a homologous clinical intervention may effectively avoid the persistence-strengthening effects of DRA and, hence, justify a clinical trial in which both the target and alternative responses are placed on extinction in a clinical population. As a preliminary test of this proposition, in Experiment 3 we taught a communication response using functional communication training (a form of DRA) to individuals with severe disruptive behavior in a context in which disruptive behavior had no history of reinforcement. As in Experiment 2, we then compared target responding during extinction following separate-context DRA with target responding following DRA implemented in the context in which target behavior had a history of reinforcement.


Participants and Setting

The participants were 2 males with developmental disabilities who were admitted to an inpatient hospital program for the treatment of severe behavior disorders. Mickey was 11 years old and diagnosed with moderate mental retardation. He spoke in simple sentences and engaged in severe disruptive behavior during performance of daily living skills. Paul was 21 years old and diagnosed with severe mental retardation. He was nonverbal and engaged in severe disruptive behavior during vocational tasks. Sessions were conducted in three different rooms on an inpatient hospital unit, each containing a table and task materials.

Target Behaviors, Measurement, and Interobserver Agreement

Disruptive behavior was the target for both participants. Mickey's disruptive behavior was defined as throwing task items, pounding on the table, shouting, and slapping instructors. Paul's disruptive behavior was defined as throwing or destroying task materials, pounding on the table, kicking the underside of the table with his knees, and slapping instructors. Requests were defined as speaking the phrase ‘Break please’ (Mickey) or touching a 7.6 cm by 20.3 cm card with the printed word ‘Stop’ on it (Paul).

Data on target behaviors, requests for breaks, and reinforcer deliveries were collected using a continuous count within a 10-s interval recording procedure during 10-min sessions. Interobserver agreement was calculated for a minimum of 37% of sessions. Mean occurrence agreement was 87% (range = 61% to 100%) across participants.
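The article does not specify the exact agreement algorithm; the sketch below assumes one common variant, interval-by-interval occurrence agreement, in which the smaller count is divided by the larger count for each 10-s interval in which either observer scored the behavior. The counts are hypothetical:

```python
def occurrence_agreement(obs1, obs2):
    """Mean occurrence agreement (%) between two observers' per-interval
    counts: for each interval where either observer scored at least one
    occurrence, take smaller/larger; average the ratios and multiply by 100.
    Intervals scored as zero by both observers are excluded (occurrence
    agreement, not total agreement)."""
    ratios = []
    for a, b in zip(obs1, obs2):
        if a == 0 and b == 0:
            continue  # neither observer scored an occurrence
        ratios.append(min(a, b) / max(a, b))
    return 100 * sum(ratios) / len(ratios)

# Hypothetical counts across six 10-s intervals:
print(round(occurrence_agreement([2, 0, 1, 3, 0, 1],
                                 [2, 0, 2, 3, 1, 1]), 1))  # 70.0
```

Other laboratories compute total or exact agreement instead; the smaller/larger convention shown here is simply one widely used choice.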

Pre-Study Functional Analysis

Prior to the study, functional analyses conducted with the procedures described for Andy and Jackie in Experiment 1 indicated that both participants' disruptive behavior was maintained primarily by negative reinforcement in the form of escape from demands.

Baseline Multiple Concurrent Schedule

Baseline conditions employed the multiple concurrent schedule of reinforcement depicted in the top portion of Table 2. Components were presented to participants in random order without replacement with 5 to 10 min between components, during which the experimenters did not interact with the participants. One or two sets of three components were presented per day, 5 days per week. Therapists wore different colored hospital gowns in the three schedule components. Gown colors varied by component as noted in Table 2. Each component was also conducted in a different room to promote component discrimination. In order to control for the presence of two instructors in Component 2, two instructors were present during all three components for Mickey and both wore the same colored hospital gowns.

Table 2
Discriminative stimuli (gown color and room) and programmed rates of reinforcement per hour during the baseline multiple concurrent schedule of reinforcement and extinction tests in Experiment 3.

Component 1

This component represented reinforcement of target behavior without reinforcement of alternative prosocial behavior. Different task demands were presented to the two participants: Mickey was asked to sort and fold shirts and pants, and Paul was instructed to wash tables and windows using paper towels and a spray bottle. For Mickey, two instructors were present, one standing on each side of him; the instructor on his right presented task demands while the other instructor did not interact with him. One instructor was present for Paul. Sessions began with the instructor placing the task materials on the table and saying, “Do your work, please.” Following this initial instruction, instructions to perform relevant task steps were presented using a graduated three-step prompt hierarchy (verbal prompt, modeling, physical prompt) with prompts separated by 5 s. Contingent on disruptive behavior, the instructor said, “Okay, take a break,” and moved at least 2 m away from the participant. No programmed interaction took place during the break. Escape from demands was arranged on a VI 75-s schedule of negative reinforcement for which interval values were generated using the procedure of Fleshler and Hoffman (1962). The completion of each interval was signaled to the instructor via earphones.
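The Fleshler and Hoffman (1962) progression generates a fixed set of interval durations whose arithmetic mean equals the programmed VI value while approximating a constant probability of reinforcement over time. A sketch of the progression (the choice of 12 intervals is illustrative, not taken from the study):

```python
import math

def fleshler_hoffman(mean_interval, n):
    """Return the n interval durations of the Fleshler & Hoffman (1962)
    progression for a VI schedule with the given mean, using the
    convention that 0*ln(0) = 0. Intervals are returned in ascending
    order; in practice they are sampled in random order without
    replacement."""
    def xlogx(x):
        return x * math.log(x) if x > 0 else 0.0
    return [mean_interval * (1 + math.log(n)
                             + xlogx(n - i) - xlogx(n - i + 1))
            for i in range(1, n + 1)]

ivs = fleshler_hoffman(75, 12)  # a VI 75-s schedule with 12 intervals
print(round(sum(ivs) / len(ivs), 6))  # mean is 75 s (up to rounding)
```

The sum telescopes, so the mean of the n intervals equals the programmed VI value exactly, which is the property that makes the progression preferable to ad hoc interval lists.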

Component 2

This component represented negative reinforcement of alternative behavior in the same context in which disruptive behavior was negatively reinforced. Procedures in Component 2 were identical to those of Component 1 except that an instructor provided prompts to request a break from instruction on a fixed interval 20-s (FI 20-s) schedule. Prompts were “Ask if you want a break” for Mickey and “Point to the card if you want a break” for Paul. Contingent on requests for breaks, the instructor provided a break as described in Component 1. If the participant requested a break prior to the elapse of the FI 20-s interval, the instructor said, “Okay, just a little while longer,” and continued instruction until the reinforcement interval elapsed. Disruptive behavior continued to be reinforced on the VI 75-s schedule described in Component 1. For Mickey, the instructor to his left provided instruction and reinforcement of requests for a break. The instructor to Mickey's right reinforced disruptive behavior. For Paul, the single instructor reinforced both requests and disruptive behavior on their respective schedules.

Component 3

This component represented reinforcement of alternative behavior in a context that provided no reinforcement of disruptive behavior. The task instruction described in Component 1 and reinforcement of requests for breaks described in Component 2 were replicated in the third component. However, the instructor did not provide breaks from instruction contingent on disruptive behavior.

Extinction Test

Instruction on each participant's daily living task continued as in baseline during all three components. However, all reinforcement of target disruptive behavior and of requests was discontinued in order to test the hypothesis that reinforcing a prosocial alternative behavior in a context separate from the one in which disruptive behavior was reinforced could reduce or avoid the persistence-strengthening effects of DRA. The discriminative stimuli and arranged baseline rates of reinforcement are presented in the lower portion of Table 2. The discriminative stimuli for Components 1 and 2 were identical to those in baseline. The discriminative stimuli in the third component during extinction comprised a compound of the baseline stimuli for Components 1 and 3. This represented the clinical situation in which a prosocial alternative behavior, such as communication, is first established in a context with no history of reinforcement of unwanted target behavior before the discriminative stimuli correlated with the alternative behavior are introduced into the context that has a history of reinforcement for target behavior (see lower portion of Table 2).

Results and Discussion

Figure 6 presents baseline data from the three components of the multiple concurrent schedule. For both participants, the highest rates of disruptive behavior occurred in Component 1, in which only disruptive behavior was reinforced (Mickey, M = 50.4 disruptions/hr, range = 30.0 to 93.5; Paul, M = 135.7, range = 35.0 to 292.0). Rates of disruptive behavior were substantially lower in Component 2, in which requests were reinforced 3.75 times more often than disruptive behavior (Mickey, M = 22.9 disruptions/hr, range = 5.5 to 48.3; Paul, M = 34.06, range = 5.0 to 173.0). The lowest rates of disruptive behavior occurred in Component 3, in which requests only were reinforced (Mickey, no disruptive behavior; Paul, M = 15.2 disruptions/hr, range = 0 to 75.0).

Fig 6
Experiment 3: Disruptions per hour during baseline sessions of the multiple concurrent schedule of reinforcement. Instructors wore different colored hospital gowns and conducted sessions in different rooms to differentiate the schedule components.

Resistance to extinction, expressed as proportion of baseline response rate, was greatest in Component 2 for both participants (see Figure 7), where the discriminative stimuli were identical to those in baseline (see lower portion of Table 2). Both participants showed an extinction burst in Component 2. Mickey's disruptive behavior reduced to low rates after four extinction sessions; however, Paul continued to engage in high rates of disruptive behavior in Component 2 across nine sessions of extinction. Component 2 sessions were discontinued for Paul before an extinction process was complete, for ethical reasons (his disruptive behavior had reduced to below baseline rates in Component 1 and the compound stimulus component). Resistance to extinction was considerably lower in Component 1, where the discriminative stimuli were the same as in baseline and baseline rates of reinforcement were approximately one-quarter of those in Component 2. Of primary interest to the experimental hypothesis of the study are the data on resistance to extinction during the compound of the baseline Component 1 and Component 3 stimuli. This compound stimulus was correlated with the same reinforcement rates as Component 2. However, rates of disruptive behavior during extinction were much lower than in Component 2 and, for both participants, were comparable to those in Component 1, where the added reinforcement of DRA was absent. These results replicate the rat model of Experiment 2 by showing that the persistence-strengthening effects of DRA can be reduced or avoided by using DRA in a context separate from the one correlated with a history of reinforcement of unwanted target behavior.

Fig 7
Experiment 3: Rates of disruptive behavior during extinction sessions expressed as proportion of baseline response rates for Components 1 and 2 and a Compound of Baseline Component 1 and Component 3 stimuli.


We conducted two clinical studies and one basic research experiment that were coordinated to demonstrate the persistence-strengthening effects of DRA and to develop a possible solution to the problem. Experiment 1 showed that clinical applications of DRA can increase the resistance of unwanted target behavior to extinction: the duration and response levels of the extinction processes were substantially greater following DRA reinforcement than after baseline reinforcement of target behavior. In Experiment 2, we devised a DRA treatment for target behavior in a rat model that was aimed at avoiding or reducing the persistence-strengthening effects of DRA. The source of the problem is that typical applications of DRA add reinforcement to the context in which target behavior is, or has been, reinforced. When a new alternative response such as communication is established with DRA, it is generally reinforced at a high rate, both to teach the response and to compete with reinforcement for target behavior. For example, Fisher et al. (1993) used functional communication training (FCT) to reduce unwanted target behavior with and without extinction of problem behavior. Communication responses were reinforced on an FR 1 schedule, which resulted in rates of communication ranging from 4.5 to 7.0 per min for one study participant. To avoid adding reinforcement to the context in which the target behavior occurred, we reinforced an alternative response in a different context in our rat model of DRA treatment (Component 3). During the extinction tests, we arranged a stimulus compound composed of the stimuli correlated with reinforcement of the target behavior (Component 1) and those correlated with reinforcement of alternative behavior in a context without reinforcement of the target behavior (Component 3).
Although the total reinforcement in the compound stimulus equaled that of the component that included DRA and reinforcement of target behavior (Component 2), resistance to extinction was much lower in the component with the compound stimulus. We translated these findings into a novel approach to DRA treatment in a clinical study in Experiment 3. Consistent with our rat model, using FCT to establish communication in a context without reinforcement of target behavior circumvented the persistence-strengthening effects of DRA during extinction tests.

These findings have implications for resolving some longstanding limitations of Applied Behavior Analysis interventions. Long-term maintenance and generalization of treatment gains are often not reported or achieved (Osnes & Lieblein, 2003; Stokes & Osnes, 1989). One possible reason for this is that lapses in treatment integrity can result in recurrence of problem behavior. Lapses in treatment integrity can occur when rates of DRA reinforcement drop precipitously or diminish over time, resulting in a temporary extinction operation or an increase in relative reinforcement for target behavior. Burst responding with the initiation of extinction is widely reported. For example, Lerman, Iwata and Wallace (1999) reported extinction bursts in 62% of clinical cases in which extinction of self-injurious behavior followed reinforcement-based treatments. Partial compromises to treatment integrity have also been related to the resumption of problem behavior. DiGennaro, Martens and Kleinmann (2007) identified 13 procedural steps for teachers to follow to implement a DRA intervention. They found significant inverse correlations, ranging from -.45 to -.78, between the percentage of accurately implemented treatment procedures and occurrences of target behavior for three-quarters of their participants. Vollmer, Roane, Ringdahl and Marcus (1999) systematically varied DRA treatment integrity for 3 children with developmental disabilities. For 2 participants, the levels of treatment integrity were 0/100 (baseline), 25/75, 50/50, 75/25, and 100/0 (full treatment), where, in their notation, 75/25 denotes that 75% of occurrences of the prosocial alternative behavior were reinforced and 25% of occurrences of target behavior were reinforced. Resumption of target behavior was observed when treatment integrity dropped to or below 50/50.

Basic research on resurgence predicts that such lapses in treatment integrity would result in recurrence of target behavior after successful treatment (Podlesnik, Jimenez-Gomez, & Shahan, 2006; Podlesnik & Shahan, in press). In the resurgence paradigm, after a target response is established, it is then extinguished while also reinforcing an alternative response (analogous to DRA plus extinction). If reinforcement of the alternative response is subsequently diminished, recovery of the target response is observed even though reinforcement of the target response does not resume.

A second factor known to contribute to maintenance and generalization failure is the reintroduction of stimuli that were previously correlated with pretreatment reinforcement of target behavior (Osnes & Lieblein, 2003; Stokes & Baer, 1977; Stokes & Osnes, 1989). Transferring treatment effects from behavior analysts to parents or educational personnel is impeded by the phenomena of reinstatement and renewal. In the reinstatement paradigm, a response reinforced in baseline is then eliminated by extinction or alternative reinforcement. Subsequent time-contingent delivery of the reinforcer that maintained baseline responding then results in recovery of the target response despite ongoing extinction (Shaham, Shalev, Lu, de Wit, & Stewart, 2003). DeLeon et al. (2005) demonstrated reinstatement effects in a clinical population. After extinction had reduced target behavior to low levels, introduction of reinforcers on a time-contingent schedule increased rates of target behavior despite ongoing extinction. The clinical problem this poses is that parents and educational personnel will almost certainly deliver, noncontingently, the reinforcers that previously maintained target behavior; thus, reinstatement effects should be expected.

Renewal occurs when a response is eliminated in a context different from the one correlated with baseline reinforcement, as might occur when an individual receives treatment in a clinic for unwanted behavior that occurs at home. After successful reduction of target responding, recurrence of the target response is observed when the baseline context is reintroduced. Podlesnik and Shahan (in press) demonstrated renewal effects with homing pigeons. A baseline multiple schedule of reinforcement (mult VI 120 s, VI 120 s + VT 20 s) was presented with a steady houselight illuminated, followed by extinction during a flashing houselight. After responding was eliminated in the presence of the flashing light, extinction was then applied with the baseline steady houselight in an A-B-A arrangement. Recovery was observed in both components of the second A phase but was greater in the schedule component with the added time-contingent reinforcers.

Results from Experiment 1 demonstrate that DRA can strengthen the persistence of unwanted target behavior. Basic research on reinstatement, resurgence and renewal further suggests the conditions under which these persistence-strengthening effects are likely to occur. Resurgence effects may result from lapses in treatment integrity. Reinstatement and renewal effects may interfere with efforts to transfer treatment gains first obtained in a treatment setting to a natural setting with a history of reinforcement of target behavior. The growing number of clinical demonstrations that alternative reinforcement can strengthen persistence (Ahearn et al., 2003; DeLeon et al., 2005) points to a significant clinical problem in need of solution.

In addition to the separate context solution demonstrated in Experiments 2 and 3, we can suggest two additional remedies that differ from conventional Applied Behavior Analysis practice. First, when a pretreatment functional analysis identifies a socially mediated reinforcer that maintains problem behavior, it can often be withheld in an extinction operation. Matching theory predicts that reductions in problem behavior may be achieved with low rates of DRA reinforcement when juxtaposed with extinction for target behavior (i.e., the low-rate DRA solution). This remedy contrasts with conventional practice that typically arranges high-rate reinforcement for the prosocial alternative behavior (e.g., Fisher et al., 1993; Vollmer et al., 1999).
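The matching-theory argument can be made concrete with a strict-matching sketch: the predicted share of behavior allocated to the target response equals its share of obtained reinforcement, so withholding the target reinforcer drives predicted target allocation toward zero even when the DRA rate is modest. The reinforcement rates below are hypothetical:

```python
def predicted_target_allocation(r_target, r_alt):
    """Strict-matching prediction of the proportion of behavior allocated
    to the target response, given obtained reinforcer rates per hour for
    the target and alternative responses."""
    total = r_target + r_alt
    return r_target / total if total > 0 else 0.0

# DRA without extinction: target still reinforced at 48/hr, alternative at 12/hr.
print(predicted_target_allocation(48, 12))  # 0.8 -- target behavior dominates
# DRA plus extinction: the target reinforcer is withheld entirely.
print(predicted_target_allocation(0, 12))   # 0.0 -- even low-rate DRA suffices
```

The contrast illustrates the low-rate DRA argument in the text: it is the extinction operation, not a high DRA rate, that matching theory identifies as doing the work.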

A second possible solution is a variant of the separate context solution. Establishing a new alternative response often requires high-rate reinforcement to compete with the reinforcement history for unwanted target behavior. However, if high-rate DRA is introduced in a separate context and then faded to low-rate DRA prior to introducing the DRA discriminative stimuli to the target context, this may avoid adding to the context–reinforcer contingencies that would contribute to persistence strengthening. This separate context schedule thinning solution is suggested because persistence-strengthening effects in humans appear to be sensitive to recent rates of alternative reinforcement. In Experiment 1, resistance to extinction was greater when extinction immediately followed DRA reinforcement than when it immediately followed baseline reinforcement. Ahearn et al. (2003) reported similar findings in a clinical population using response-independent reinforcer deliveries. This remedy departs from the usual practice of DRA schedule thinning in the context in which target behavior has a history of reinforcement.

The three experiments reported here represent an explicitly bidirectional approach to translational research. While the past three decades have seen growing recognition of the importance of linking basic and applied research to promote advancements in behavioral technologies and establish the broad generality of behavioral laws (e.g., Dietz, 1978; Hake, 1982; Lerman, 2003; Mace, 1994; Mace & Wacker, 1994), most translational research has focused on establishing the generality of basic research to human affairs. However, coordinated basic and applied research can stimulate both sectors to pursue research questions and paradigms less likely to be followed when basic and applied researchers work separately. Experiment 1 was stimulated by our collaboration with Tony Nevin on clinical applications of behavioral momentum. Through this collaboration it became apparent that Condition A in Experiment 2 of Nevin et al. (1990) represented a clinical application of DRA (see Figure 1). That high-rate reinforcement of prosocial behavior could inadvertently strengthen the persistence of the same unwanted target behavior it reduces was counterintuitive. Our clinical demonstration that DRA can increase resistance to extinction, in turn, stimulated further collaboration among basic and applied researchers to attempt a solution to this problem.

The present series of experiments is not without limitations. First, Experiment 1 did not replicate the extinction-following-baseline and extinction-following-DRA conditions within participants. We chose not to replicate the extinction conditions so as not to subject the participants to additional lengthy extinction processes. However, the principal finding of Experiment 1, that DRA can increase the persistence of target behavior, was replicated in both participants in the multiple schedule design employed in Experiment 3. Second, the extinction conditions in Experiments 2 and 3 presented the discriminative stimuli correlated with the target response twice as often as those of Component 2, because the Component 1 stimuli appeared both alone and in the compound of Components 1 and 3. Although greater resistance to extinction was apparent early in the extinction process for both rats and humans, replications of this research should conduct extinction only with the discriminative stimuli of Component 2 and the compound of Components 1 and 3 to eliminate this possible confound.

The present investigation suggests several areas for future study that extend the bidirectional nature of this research and address some limitations of our studies. First, we did not measure the resistance of alternative behavior to extinction. Because strengthening the persistence of prosocial behavior is an important clinical goal, this should be integral to future research protocols. Second, the generality of the present findings on DRA and those using time-contingent schedules (Ahearn et al., 2003; DeLeon et al., 2005) should be examined for other forms of alternative reinforcement used in clinical practice (such as differential reinforcement of other behavior, differential reinforcement of high rates, and differential reinforcement of low rates). Finally, the persistence-strengthening effects of alternative reinforcement may be best understood within the contexts of the reinstatement, resurgence and renewal paradigms discussed above. Explicit linkage of this research to the clinical problem of failures of treatment maintenance and generalization may lead to novel solutions to an important limitation of Applied Behavior Analysis interventions.


This research was presented at an invited symposium at the 30th annual Convention of the Association for Behavior Analysis International in Phoenix, Arizona (May, 2009). A nontechnical summary of this research appears in Mace, F.C., McComas, J.J., Mauro, B.C., Progar, P.R., Taylor, B.A., Ervin, R., & Zangrillo, A.N. (2009). The persistence-strengthening effects of DRA: An illustration of bi-directional translational research. The Behavior Analyst, 32, 293–300.


  • Ahearn W.H, Clark K.M, Gardenier N.C, Chung B.I, Dube W.V. Persistence of stereotypic behavior: Examining the effects of external reinforcers. Journal of Applied Behavior Analysis. 2003;36:439–448. [PMC free article] [PubMed]
  • Carr E.G, Durand V.M. Reducing behavior problems through functional communication training. Journal of Applied Behavior Analysis. 1985;18:111–126. [PMC free article] [PubMed]
  • Cohen S.L. Behavioral momentum of typing behavior in college students. Journal of Behavior Analysis and Therapy. 1996;1:36–51.
  • DeLeon I.G, Williams D.C, Gregory M.K, Hagopian L.P. Unexamined potential effects of the noncontingent delivery of reinforcers. European Journal of Behavior Analysis. 2005;5:57–69.
  • Dietz S.M. Current status of applied behavior analysis: Science versus technology. American Psychologist. 1978;33:805–814.
  • DiGennaro F.D, Martens B.K, Kleinmann A.E. A comparison of performance feedback procedures on teachers' treatment implementation integrity and students' inappropriate behavior in special education classrooms. Journal of Applied Behavior Analysis. 2007;40:447–461. [PMC free article] [PubMed]
  • Fisher W.W, Piazza C.C, Cataldo M, Harrell R, Jefferson G, Conner R. Functional communication training with and without extinction and punishment. Journal of Applied Behavior Analysis. 1993;26:23–36. [PMC free article] [PubMed]
  • Fleshler M, Hoffman H.S. A progression for generating variable-interval schedules. Journal of the Experimental Analysis of Behavior. 1962;5:529–530. [PMC free article] [PubMed]
  • Hake D.F. The basic–applied continuum and the possible evolution of human operant social and verbal research. The Behavior Analyst. 1982;5:21–28. [PMC free article] [PubMed]
  • Igaki T, Sakagami T. Resistance to change in goldfish. Behavioural Processes. 2004;66:139–152. [PubMed]
  • Iwata B.A, Bailey J.S, Neef N.A, Wacker D.P, Repp A.C, Shook G.L. Behavior analysis in developmental disabilities, 1968–95 from the Journal of Applied Behavior Analysis. Bloomington, IN: Society for the Experimental Analysis of Behavior; 1997.
  • Iwata B.A, Dorsey M.F, Slifer K.J, Bauman K.E, Richman G.S. Toward a functional analysis of self-injury. Journal of Applied Behavior Analysis. 1994;27:197–209. [PMC free article] [PubMed]
  • Iwata B.A, Wong S.E, Riordan M.M, Dorsey M.F, Lau M.M. Assessment and training of clinical interviewing skills: Analogue analysis and field replication. Journal of Applied Behavior Analysis. 1982;15:191–203. [PMC free article] [PubMed]
  • Lerman D.C. From the laboratory to community application: Translational research in behavior analysis. Journal of Applied Behavior Analysis. 2003;36:415–419. [PMC free article] [PubMed]
  • Lerman D.C, Iwata B.A, Wallace M.D. Side effects of extinction: Prevalence of bursting and aggression during the treatment of self-injurious behavior. Journal of Applied Behavior Analysis. 1999;32:1–8. [PMC free article] [PubMed]
  • Mace F.C. Basic research needed for stimulating the development of behavioral technologies. Journal of the Experimental Analysis of Behavior. 1994;61:529–550. [PMC free article] [PubMed]
  • Mace F.C, Lalli J.S, Shea M.C, Lalli E.P, West B.J, Roberts M, et al. The momentum of human behavior in a natural setting. Journal of the Experimental Analysis of Behavior. 1990;54:163–172. [PMC free article] [PubMed]
  • Mace F.C, Wacker D.P. Toward greater integration of basic and applied behavioral research: An introduction. Journal of Applied Behavior Analysis. 1994;27:569–574. [PMC free article] [PubMed]
  • Mauro B.C, Mace F.C. Differences in the effect of Pavlovian contingencies upon behavioral momentum using auditory versus visual stimuli. Journal of the Experimental Analysis of Behavior. 1996;65:389–399. [PMC free article] [PubMed]
  • Nevin J.A. An integrative model for the study of behavioral momentum. Journal of the Experimental Analysis of Behavior. 1992;57:301–316. [PMC free article] [PubMed]
  • Nevin J.A, Grace R.C. Behavioral momentum and the Law of Effect. Behavioral and Brain Sciences. 2000;23:73–130. [PubMed]
  • Nevin J.A, Tota M.E, Torquato R.D, Shull R.L. Alternative reinforcement increases resistance to change: Pavlovian or operant contingencies. Journal of the Experimental Analysis of Behavior. 1990;53:359–379. [PMC free article] [PubMed]
  • Osnes P.G, Lieblein T. An explicit technology of generalization. The Behavior Analyst Today. 2003;3:364–374.
  • Petscher E.S, Rey C, Bailey J.A. A review of empirical support for differential reinforcement of alternative behavior. Research in Developmental Disabilities. 2009;30:409–425. [PubMed]
  • Podlesnik C.A, Jimenez-Gomez C, Shahan T.A. Resurgence of alcohol seeking produced by discontinuing non-drug reinforcement as an animal model of drug relapse. Behavioural Pharmacology. 2006;17:369–374. [PubMed]
  • Podlesnik C.A, Shahan T.A. Behavioral momentum and relapse of extinguished operant responding. Learning and Behavior. in press. [PMC free article] [PubMed]
  • Shaham Y, Shalev U, Lu L, de Wit H, Stewart J. The reinstatement model of drug relapse: History, methodology and major findings. Psychopharmacology. 2003;168:3–20. [PubMed]
  • Stokes T.F, Baer D.M. An implicit technology of generalization. Journal of Applied Behavior Analysis. 1977;10:349–367. [PMC free article] [PubMed]
  • Stokes T.F, Osnes P.G. An operant pursuit of generalization. Behavior Therapy. 1989;20:337–355.
  • Vollmer T.R, Roane H.S, Ringdahl J.E, Marcus B.A. Evaluating treatment challenges with differential reinforcement of alternative behavior. Journal of Applied Behavior Analysis. 1999;32:9–23.
