Behav Processes. Author manuscript; available in PMC 2010 May 1.
PMCID: PMC2666111
NIHMSID: NIHMS81958

Reinforcer Satiation and Resistance to Change of Responding Maintained by Qualitatively Different Reinforcers

Abstract

In previous research on resistance to change, differential disruption of operant behavior by satiation has been used to assess the relative strength of responding maintained by different rates or magnitudes of the same reinforcer in different stimulus contexts. The present experiment examined resistance to disruption by satiation of one reinforcer type when qualitatively different reinforcers were arranged in different contexts. Rats earned either food pellets or a 15% sucrose solution on variable-interval 60-s schedules of reinforcement in the two components of a multiple schedule. Resistance to satiation was assessed by providing free access either to food pellets or the sucrose solution prior to or during sessions. Responding systematically decreased more relative to baseline in the component associated with the satiated reinforcer. These findings suggest that when qualitatively different reinforcers maintain responding, relative resistance to change depends upon the relations between reinforcers and disrupter types.

Keywords: behavioral momentum theory, qualitatively different reinforcers, disrupters, satiation, contrafreeloading, lever press, rat

The discriminated operant is considered the fundamental unit of operant behavior and is composed of a discriminative-stimulus (SD) context, a response, and a reinforcing consequence (Skinner, 1938). A number of studies have shown that, with training, animals learn associations between each of the components of the discriminated operant (see Colwill & Rescorla, 1986, for a review), and the strength of these associations comes to motivate or modulate the rate and persistence of operant behavior (e.g., Nevin & Grace, 2000; Rescorla & Solomon, 1967; Trapold & Overmier, 1972). Given that a wide range of maladaptive and undesirable behavior (e.g., crop damage by wild animals; drug abuse and overeating in humans) likely is modulated by contextual stimuli and consequences, understanding the variables that modulate such behavior is necessary to decrease its frequency and/or persistence.

Studies within the framework of behavioral momentum theory have found that responding is more resistant to disruption (e.g., satiation) in stimulus contexts associated with higher rates or larger magnitudes of identical reinforcer types (see Nevin, 1992; Nevin & Grace, 2000, for reviews). According to behavioral momentum theory, the persistence of operant behavior is governed by the Pavlovian relation between an SD context and the rate of reinforcement presented in that context (i.e., stimulus-reinforcer relation; see Podlesnik & Shahan, 2008, for a discussion of some exceptions). When disrupters are introduced similarly to both stimulus contexts, any differences in resistance to disruption across contexts should reflect only differences in the response strengthening effects of the baseline conditions of reinforcement (see Nevin, 1974, for a discussion; but see Shull & Grimes, 2006, for exceptions with extinction).

If qualitatively different reinforcers maintain responding across contexts, however, relative disruption might depend upon which of the reinforcers is devalued by satiation, even when reinforcement rates are equal across contexts. Responding might decrease more (or exclusively) in the context in which the disrupter is the same as the reinforcer type that maintains responding. However, there has been relatively little research on responding maintained by qualitatively different reinforcers within the framework of behavioral momentum theory. Studies of associative learning have shown that the effect of devaluing reinforcers through satiation depends on the specific training conditions. For instance, extended training (Dickinson et al., 1995; Holland, 2004) and the use of interval, rather than ratio, schedules (Dickinson et al., 1983) have been shown to attenuate the response-rate-decreasing effects of pairing reinforcers with illness or satiation. Conversely, other studies have shown that responding remains susceptible to devaluation despite extensive training with interval schedules when subjects are trained to make different responses for different reinforcer types (e.g., Holland, 2004; see also Colwill & Rescorla, 1985). Studies of resistance to change of operant behavior typically arrange extensive exposure to variable-interval (VI) schedules of reinforcement within different stimulus contexts arranged by multiple schedules of reinforcement (see Nevin, 1974, for a discussion). Thus, it is unclear whether satiation with one reinforcer type would be expected to differentially decrease responding for that reinforcer type using procedures typical of resistance-to-change studies.

The goal of the present experiment was to use procedures typical of those used in resistance-to-change studies to assess the effect of satiation with one reinforcer type when qualitatively different reinforcers maintain responding. Lever pressing of rats was maintained on separate VI 60-s schedules of reinforcement in a multiple schedule with food pellets presented in one component and a 15% sucrose solution in the other component. Several disrupters were examined that provided free access to either food pellets or the sucrose solution: (a) prior to the session while food and sucrose continued to be presented during the session; (b) prior to extinction sessions; and (c) throughout the session inside the operant chamber (i.e., contrafreeloading; see Inglis, Forkman, & Lazarus, 1997; Osborne, 1977, for reviews).

Method

Subjects

Four Long Evans rats obtained from Charles River (Portage, MI, USA) were maintained at approximately 80% of their adult weights (± 10 g). Rats were approximately 120 days old and experimentally naïve at the start of the experiment. Running weights were 365 g, 376 g, 376 g, and 361 g for N53, N54, N55, and N56, respectively, and were maintained by postsession feeding of Harlan Teklad (Madison, WI, USA) 8604 Rat Diet as necessary. When not in experimental sessions during their light cycle, rats were housed individually in a temperature-controlled colony with a 12:12 hr light/dark cycle (lights on at 7 a.m.). All rats had free access to water in their home cages.

Apparatus

Four Med Associates® (St. Albans, VT, USA) operant conditioning chambers were used. Each chamber was approximately 30 cm long, 24 cm wide, and 21 cm high, and housed in a sound-attenuating cubicle. The front panel of each chamber was equipped with two response levers centered 13 cm apart, a horizontal array of red, yellow, and green LEDs above each lever, a 28-V DC houselight at the top center of the panel, and a Sonalert (2900 ± 500 Hz, 75-85 dB). Between the two levers was a rectangular opening (6.5 cm wide by 4.2 cm high) centered with its bottom edge 2 cm above a grid floor and divided in half vertically. The left side of the opening provided 3-s access to a solenoid-operated dipper that delivered 0.1 ml of a 15% sucrose solution. The right side of the opening provided access to 45-mg Noyes® food pellets (Formula A/I), each of which was accompanied by an audible “click” upon delivery. During each dipper or pellet delivery, the lever LEDs and houselight were darkened and a light inside the corresponding side of the opening was turned on for 3 s. Timing of other events was suspended during this time. Extraneous noise was masked by a chamber ventilation fan and white noise. Control of experimental events and data recording was conducted with Med Associates® (St. Albans, VT, USA) interfacing and programming. Sucrose solutions were prepared bi-weekly as percent weight per volume with distilled water and table sugar and refrigerated when not in use. Sucrose reservoirs were washed following each session.

Procedure

Preliminary Training

During training sessions, only one reinforcer type was available each day. Either food pellets or sucrose was presented response-independently on a variable-time (VT) 60-s schedule. At the start of each session, the LEDs over one lever also were turned on, and pressing that lever produced either food or sucrose on a fixed-ratio (FR) 1 schedule. The LEDs over the other lever were off for the duration of that session. The response-independent and response-dependent presentations were always the same reinforcer type within a training session. For rats N53 and N55, a steady tone and houselight with the lever LEDs flashing on and off every 0.1 s signaled that responding on the right lever would produce food pellets. A pulsing tone, a flashing houselight, and lever LEDs flashing on and off every 0.5 s signaled that responding on the left lever would produce sucrose. The lever and stimulus assignments were reversed for rats N54 and N56. Training sessions ended after 60 min or after 200 reinforcer presentations, whichever came first. Rats N53 and N54 were trained with food during the first and third training sessions and with sucrose during the second and fourth training sessions. The order of training sessions was reversed for rats N55 and N56.

Baseline

After the four training sessions, all subjects were reliably pressing both levers. The VT schedule was turned off and a two-component multiple schedule was introduced. In the multiple schedule, variable-interval (VI) schedules arranged food pellets in one component (hereafter Food component) and sucrose in the other component (hereafter Sucrose component). Across sessions, the VI schedules increased in both components from VI 1-s to VI 60-s schedules, at which point baseline conditions began. All VI schedules included 13 intervals selected without replacement and constructed as described by Fleshler and Hoffman (1962). Lever and stimulus assignments in the components were the same as during training sessions. Sessions began with a 20-s blackout before the first component and all components were separated by a 20-s intercomponent interval (ICI) during which all stimuli were turned off. The Food or Sucrose component was chosen randomly following the initial blackout and strictly alternated for the rest of the session. Each component was 60 s in duration and sessions ended after a total of 40 components were presented. If a reinforcer was available but not obtained, it was presented after the first response the next time that component was presented. Sessions occurred 7 days per week at approximately the same time.
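For readers who want to reproduce the schedule values, the sketch below implements the Fleshler and Hoffman (1962) constant-probability progression in Python for 13 intervals with a 60-s mean, matching the description above. The function name and the shuffling step are illustrative assumptions and are not code from the present study.

```python
import math
import random

def fleshler_hoffman_intervals(mean_s=60.0, n=13):
    """Constant-probability VI intervals (Fleshler & Hoffman, 1962).

    Returns n interval durations (in seconds) whose arithmetic mean is
    mean_s. The convention 0 * ln(0) = 0 is used for the final interval.
    """
    def xlnx(x):
        return x * math.log(x) if x > 0 else 0.0

    return [mean_s * (1 + math.log(n) + xlnx(n - i) - xlnx(n - i + 1))
            for i in range(1, n + 1)]

# Shuffling the list approximates "selected without replacement" within
# each pass through the 13 intervals.
vi60 = fleshler_hoffman_intervals(60.0, 13)
random.shuffle(vi60)
print([round(t, 1) for t in vi60])
```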

Disruption

Prior to beginning any condition of disruption, baseline conditions were maintained until responding was stable as judged by visual inspection with no increasing or decreasing trends for at least six consecutive sessions. Responding was disrupted in several ways. During presession food (PF) or presession sucrose (PS), a 12 cm diameter by 2 cm deep porcelain dish was placed in the home cage 2 hr prior to the start of the session for 3 consecutive days. On PF days, food pellets (approximately 45 g) were placed in the dish. On PS days, the 15% sucrose solution (125 ml) was placed in the dish. In other disruption conditions, PF and PS occurred for 3 consecutive days prior to sessions of extinction (EXT). Other than extinction during the session, PF+EXT and PS+EXT were identical to the PF and PS disrupters. Both PF+EXT and PS+EXT conditions were replicated once after all other disrupters were completed. Responding also was disrupted for three consecutive sessions using a contrafreeloading procedure, in which either 45 g of food pellets (i.e., CFL F) or 125 ml of sucrose (i.e., CFL S) was available in the back of the operant chamber in a porcelain dish. Given that no rat finished all the food pellets or sucrose during any of the disruption conditions, they can be considered conditions of disruption by “free access” to food or sucrose. The order of conditions of disruption for all rats is presented in Table 1.

Table 1
Order of disrupters and sucrose solution used in each condition for all rats. Disrupters were extinction (EXT), presession food (PF), presession sucrose (PS), presession food plus extinction (PF+EXT), presession sucrose plus extinction (PS+EXT), contrafreeloading food (CFL F), and contrafreeloading sucrose (CFL S).

Results

Appendix 1 presents the number of baseline sessions prior to disruption, number of sessions during disruption, and active- and inactive-lever response rates in the Food and Sucrose components during baseline and disruption. Reinforcement rates were slightly lower than scheduled but similar in the Food component (M = 0.93 per min; SD = 0.03) and Sucrose component (M = 0.92 per min; SD = 0.03).

Appendix
Active- and inactive-lever response rates and number of baseline (BL) sessions in each condition. Conditions are presented in the order they occurred. Baselines are indicated by the following disrupter (presession food [PF], presession sucrose [PS], presession food plus extinction [PF+EXT], presession sucrose plus extinction [PS+EXT], contrafreeloading food [CFL F], contrafreeloading sucrose [CFL S], or extinction [EXT]).

Figure 1 shows response rates in the Food and Sucrose components across successive conditions of baseline prior to each disrupter. Overall, response rates were higher in the Food component than in the Sucrose component in 26 out of 32 instances. For rats N53 and N55, response rates consistently were higher in the Food component. For the first three baselines for N54, response rates consistently were higher in the Food component than in the Sucrose component, after which response rates decreased in the Food component and were similar in the two components thereafter. Response rates were similar across components for N56 but tended to be slightly higher in the Food component. In a two-way repeated-measures ANOVA (component × successive baseline), the main effect of component was significant, with higher response rates in the Food component (F[1, 24] = 32.98, p < 0.0001), but neither the main effect of successive baseline nor the component × successive baseline interaction was significant (p > 0.05).
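As an illustration of how a component × successive-baseline repeated-measures ANOVA of this kind can be set up, the sketch below uses statsmodels' AnovaRM on a long-format table. The column names and the randomly generated rates are hypothetical placeholders; this is not the analysis software or data used by the authors.

```python
import numpy as np
import pandas as pd
from statsmodels.stats.anova import AnovaRM

# Hypothetical long-format table: one row per rat x component x successive
# baseline, holding that rat's mean response rate over the last 6 sessions
# of the baseline. The rates are random placeholders, not data from the
# experiment.
rng = np.random.default_rng(0)
rats = ["N53", "N54", "N55", "N56"]
components = ["Food", "Sucrose"]
baselines = range(1, 9)
rows = [{"rat": r, "component": c, "baseline": b, "rate": rng.uniform(20, 60)}
        for r in rats for c in components for b in baselines]
df = pd.DataFrame(rows)

# Two-way repeated-measures ANOVA (component x successive baseline) with
# rats as the repeated-measures subjects.
print(AnovaRM(df, depvar="rate", subject="rat",
              within=["component", "baseline"]).fit())
```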

Figure 1
Response rates in the Food and Sucrose components across successive exposures to the baseline condition. Data points represent the means of the last 6 sessions of exposure to the conditions. Error bars represent ±1 SD. Response rates on the inactive levers in each component are also shown (see text for details).

Inactive-lever response rates also are shown in Figure 1 to demonstrate that the rats discriminated between the components. Inactive-lever response rates in the Food component are presented as solid lines and inactive-lever response rates in the Sucrose component are presented as dashed lines. Thus, the active lever in the Food component is represented by a solid line with filled points, and when that same lever is inactive in the Sucrose component the line is dashed and without points. The active lever in the Sucrose component is represented by a dashed line with empty points, and when that same lever is inactive in the Food component the line is solid and without points. Response rates consistently were higher when the levers were active than when they were inactive, indicating that responding was under the control of the discriminative stimuli. Mean discrimination indices support this conclusion. For each lever, the discrimination index was calculated as responses on that lever while its associated component was active divided by responses on that lever summed across the active and inactive components (Rat N53: M = 0.88, SD = 0.03; Rat N54: M = 0.80, SD = 0.10; Rat N55: M = 0.90, SD = 0.04; Rat N56: M = 0.86, SD = 0.05).
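A discrimination index of this form can be computed as in the minimal sketch below; the function name and example counts are hypothetical and are shown only to make the definition in the preceding paragraph concrete.

```python
def discrimination_index(responses_when_active, responses_when_inactive):
    """Proportion of a lever's responses emitted while its component was active.

    responses_when_active:   responses on the lever during its own component
    responses_when_inactive: responses on the same lever while the other
                             component was active
    A value of 0.5 indicates no discrimination; 1.0 indicates responding
    confined to the lever's own component.
    """
    total = responses_when_active + responses_when_inactive
    return responses_when_active / total if total > 0 else float("nan")

# Illustrative counts only (not data from the experiment):
print(discrimination_index(880, 120))  # -> 0.88
```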

Figure 2 shows resistance to disruption in the component in which the reinforcer was different than the disrupter (e.g., Food component with PS disrupter; y-axis) as a function of the component in which the reinforcer was the same as the disrupter (e.g., Sucrose component with PS disrupter; x-axis). Each data point represents the logarithm (log) of the mean proportion of baseline response rates for the three sessions of each disruption condition. The relative magnitude of disruption is indexed by how far the symbols decrease from 0.0 down along the y-axis and to the left along the x-axis. Therefore, points falling above the diagonal line indicate that responding decreased relatively more in the component that produced the same reinforcer type as the disrupter. Filled symbols indicate disruption by providing access to food and open symbols indicate disruption by providing access to sucrose.
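The measure plotted on both axes of Figure 2 can be computed as in the sketch below, assuming base-10 logarithms (the article says only "log"); the function name and the example rates are illustrative, not values from the experiment.

```python
import math

def log_proportion_of_baseline(disruption_rates, baseline_rate):
    """Log10 of disruption-session responding relative to baseline.

    disruption_rates: response rates from the three disruption sessions
    baseline_rate:    mean response rate from the preceding baseline
    Values near 0.0 indicate little disruption; more negative values
    indicate greater disruption.
    """
    # Mean of the three disruption-session rates expressed as a
    # proportion of the single baseline rate.
    mean_disruption = sum(disruption_rates) / len(disruption_rates)
    return math.log10(mean_disruption / baseline_rate)

# Illustrative values only: responding at roughly half of baseline.
print(round(log_proportion_of_baseline([22.0, 25.0, 28.0], 50.0), 2))  # -0.3
```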

Figure 2
Data points represent means from each 3-session condition of disruption (presession food plus extinction [PF+EXT], presession sucrose plus extinction [PS+EXT], presession food [PF], presession sucrose [PS], contrafreeloading food [CFL F], and contrafreeloading sucrose [CFL S]).

Overall, data points fell above the diagonal line in 31 out of 32 instances across rats, with the only exception being from the first PS+EXT for N53. These data indicate that responding typically was disrupted more in the component that produced the reinforcer that was the same as the disrupter. This conclusion was supported by a two-way repeated-measures ANOVA (component × disrupter) with significant main effects of component (F[1, 24] = 106.97, p < 0.0001) and disrupter (F[7, 24] = 21.22, p < 0.0001), and a component × disrupter interaction (F[7, 24] = 2.96, p < 0.05). When comparing access to food versus sucrose as disrupters, in all cases, access to food decreased responding more in both components than access to sucrose. This is indicated by the solid data points falling further down along the y-axis and to the left along the x-axis than the corresponding open data points. In addition, disruption with PF+EXT and CFL F typically produced greater amounts of disruption in both components than PF. Similarly, disruption with PS+EXT and CFL S typically produced greater amounts of disruption in both components than PS. Finally, the difference in relative resistance to PF+EXT and PS+EXT tended to increase from the first to the second PF+EXT and PS+EXT. The exception was PS+EXT for N55; those data points overlap. Overall, responding decreased more when the disrupters were the same as the reinforcers in that component than when they were different.

No food pellets remained in the pellet troughs following disruption sessions, indicating that pellets were eaten when they were earned. Such verification of consumption was not possible with the sucrose reinforcers.

Discussion

When identical reinforcer types maintain responding across contexts, disrupters such as presession satiation differentially decrease responding across components, purportedly as a direct function of the differences in strengthening effects of the baseline conditions of reinforcement (see Nevin & Grace, 2000). The present experiment examined the disruptive effects of providing free access to food or sucrose on responding maintained across components by food or sucrose reinforcement. Despite the fact that responding was extensively trained using interval schedules of reinforcement, conditions shown to decrease sensitivity to reinforcer devaluation (Dickinson et al., 1995; Holland, 2004), responding decreased to a greater extent in the component presenting the reinforcer that was the same as the disrupter type. Importantly, these findings show that responding for qualitatively different reinforcers is sensitive to reinforcer devaluation by satiation of one reinforcer type using procedures typical of studies of resistance to change. Compared with previous studies of resistance to change (see Nevin, 1992, for a review), these findings show that disrupters that satiate experimental subjects affect operant behavior differently depending on whether identical or qualitatively different reinforcer types maintain responding.

Behavioral momentum theory accounts for relative resistance to change using two terms: the strengthening effects of reinforcement and the magnitude, or force, of disrupters (see Nevin & Grace, 2000). The strengthening effect of higher baseline reinforcement rates counters the response-rate-decreasing effects of disrupters. Within the context of behavioral momentum theory, the present findings could be viewed as reflecting that the force of disruption was greater in the component in which the disrupter and reinforcer type were the same. This interpretation is consistent with the approach used to account for less resistance to extinction in components presenting very high rates of reinforcement (i.e., the partial-reinforcement-extinction effect; see Nevin et al., 2001). Further development of this approach could result in a quantitative account of the effects of satiation on responding maintained by qualitatively different reinforcers.
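For reference, the single-term form of the momentum model described by Nevin and Grace (2000) is shown below, together with a purely hypothetical extension expressing the idea that the force of disruption is greater when the disrupter matches the reinforcer; the subscripted x terms are illustrative assumptions, not equations from the present article.

```latex
\documentclass{article}
\usepackage{amsmath}
\begin{document}
% Basic momentum model (Nevin & Grace, 2000): B_x = response rate during
% disruption, B_o = baseline response rate, x = force of the disrupter,
% r = baseline reinforcement rate, a = sensitivity parameter.
\[
  \log\!\left(\frac{B_x}{B_o}\right) = \frac{-x}{r^{a}}
\]
% Hypothetical extension for qualitatively different reinforcers: the
% disruptive force in component i is larger when the satiated reinforcer
% matches the reinforcer maintaining responding in that component.
\[
  \log\!\left(\frac{B_{x,i}}{B_{o,i}}\right) = \frac{-x_i}{r_i^{a}},
  \qquad
  x_i =
  \begin{cases}
    x_{\text{same}} & \text{disrupter matches the component's reinforcer}\\
    x_{\text{diff}} & \text{otherwise, with } x_{\text{same}} > x_{\text{diff}}
  \end{cases}
\]
\end{document}
```

On such an account, the present pattern of results would be captured by a larger disruptive force when disrupter and reinforcer match, with both forces larger for food than for sucrose satiation.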

The patterns of disruption obtained in the present experiment likely were a result of the specific experimental parameters used. For instance, both food and sucrose disrupters tended to decrease responding in both contexts, rather than selectively decreasing responding only in the component that produced the same reinforcer type. The exception is the presession sucrose disrupter, with which responding decreased only in the Sucrose component. However, there was very little disruption in either component with PS, and this pattern therefore appears to reflect a general weakness of the sucrose disrupters relative to the corresponding food disrupters (e.g., PS versus PF). Thus, the differential but not exclusively selective disruptions suggest that food and sucrose attenuated the state of deprivation similarly but not identically. The lack of identical disruption across components likely is a function of the constellation of qualitative and quantitative differences between food and sucrose (e.g., solid versus liquid, taste) and their relation to current levels of food deprivation. It is important to note that different patterns of disruption likely would have occurred with differences in deprivation type (e.g., food versus water), duration of deprivation, and different configurations of reinforcers and/or disrupters.

Differences in the pattern of disruption also were found within the PF+EXT and PS+EXT disrupters. In 7 out of 8 instances with the replication of those disrupters, disruption of responding was greater in the component presenting the same reinforcer type as the disrupter than in the component presenting the different reinforcer type. The greater cumulative number of baseline training sessions completed prior to the replication could have been responsible. Counter to the attenuating effects of extensive training on reinforcer devaluation (Dickinson et al., 1995; Holland, 2004), the two reinforcer types might have become better associated with the responses and/or stimulus contexts with increased training (see Colwill & Rescorla, 1985; Holland, 2004). Alternatively, some learning about the disrupters themselves might have occurred from the first to the second presentation of those disrupters (see Anger & Anger, 1976; Balleine, 2001, for examples). Although it is unclear whether these differences resulted from experience with the baseline or the disruption conditions, they suggest that interactions between reinforcers and disrupters can change with continued exposure. Given that PF+EXT and PS+EXT were the only disrupter types that were replicated, it is unclear whether similar results would have been found with the other disrupters.

The CFL data from the present experiment might have some relevance to understanding contrafreeloading in general. Contrafreeloading occurs when animals continue to respond for a response-dependent source of food even though a freely available source of food is present (see Inglis et al., 1997; Osborne, 1977, for reviews). The information hypothesis of contrafreeloading (see Inglis et al., 1997) suggests that animals respond for food when free food is available in order to expedite the location of future profitable food patches. The present results suggest that seeking alternative food sources (i.e., the response-dependent source) occurs less frequently when the free source of food is the same as the potential alternative, response-dependent source.

In summary, the present findings at first glance might suggest a fairly straightforward relation between the qualitatively different reinforcers that maintain responding and the conditions of disruption used to decrease responding: providing free access to one food type decreases responding for that food type more than responding for other food types. However, varying experimental factors such as deprivation level, arranging reinforcer types that differ in their strengthening effects (see Mace et al., 1997), or varying the substitutability of reinforcers and disrupters (see Green & Freed, 1993) likely will also impact resistance to disruption. For instance, arranging more similar (i.e., substitutable) reinforcers across components likely would produce more uniform disruption across components. One implication is that using satiation to decrease problematic behavior in natural contexts (e.g., McComas et al., 2003) in which similar reinforcers maintain both problematic and desirable behavior could unintentionally decrease the desirable behavior as well. Therefore, such operations should be applied with consideration of the broader context in which they are used.

Acknowledgments

This experiment was part of the first author’s doctoral dissertation in Psychology at Utah State University and he thanks the committee members Scott Bates, Clint Field, Amy Odum, and Tim Slocum. The authors would like to thank Corina Jimenez-Gomez for her thoughtful comments during the preparation of this manuscript.

Footnotes

Portions of these data were presented at the 2006 annual meeting of the Society for the Quantitative Analyses of Behavior.

Publisher's Disclaimer: This is a PDF file of an unedited manuscript that has been accepted for publication. As a service to our customers we are providing this early version of the manuscript. The manuscript will undergo copyediting, typesetting, and review of the resulting proof before it is published in its final citable form. Please note that during the production process errors may be discovered which could affect the content, and all legal disclaimers that apply to the journal pertain.

References

  • Anger D, Anger K. Behavior changes during repeated eight-day extinctions. Journal of the Experimental Analysis of Behavior. 1976;26:181–190.
  • Balleine B. Incentive processes in instrumental conditioning. In: Mowrer RR, Klein SB, editors. Handbook of contemporary learning theories. Mahwah, NJ: Lawrence Erlbaum Associates; 2001. pp. 307–366.
  • Colwill RM, Rescorla RA. Instrumental responding remains sensitive to reinforcer devaluation after extensive training. Journal of Experimental Psychology: Animal Behavior Processes. 1985;11:520–526.
  • Colwill RM, Rescorla RA. Associative structures underlying instrumental learning. In: Bower GH, editor. The psychology of learning and motivation. New York: Academic Press; 1986. pp. 55–104.
  • Dickinson A, Balleine B, Watt A, Gonzalez F, Boakes RA. Motivational control after extended instrumental training. Animal Learning & Behavior. 1995;23:197–206.
  • Fleshler M, Hoffman HS. A progression for generating variable-interval schedules. Journal of the Experimental Analysis of Behavior. 1962;5:529–530.
  • Green L, Freed DE. The substitutability of reinforcers. Journal of the Experimental Analysis of Behavior. 1993;60:141–158.
  • Herrnstein RJ. On the law of effect. Journal of the Experimental Analysis of Behavior. 1970;13:243–266.
  • Holland PC. Relations between Pavlovian-instrumental transfer and reinforcer devaluation. Journal of Experimental Psychology: Animal Behavior Processes. 2004;30:104–117.
  • Inglis IR, Forkman B, Lazarus J. Free food or earned food? A review and fuzzy model of contrafreeloading. Animal Behaviour. 1997;53:1171–1191.
  • McComas JJ, Thomas A, Johnson L. The effects of presession attention on problem behavior maintained by different reinforcers. Journal of Applied Behavior Analysis. 2003;36:297–307.
  • Murphy ES, McSweeney FK, Kowal BP. Within-session decreases in operant responding as a function of pre-session feedings. Psychological Record. 2003;53:313–326.
  • Nevin JA. Response strength in multiple schedules. Journal of the Experimental Analysis of Behavior. 1974;21:389–408.
  • Nevin JA. An integrative model for the study of behavioral momentum. Journal of the Experimental Analysis of Behavior. 1992;57:301–316.
  • Nevin JA. Behavioral economics and behavioral momentum. Journal of the Experimental Analysis of Behavior. 1995;64:385–395.
  • Nevin JA. Measuring behavioral momentum. Behavioural Processes. 2002;57:187–198.
  • Nevin JA, Grace RC. Behavioral momentum and the Law of Effect. Behavioral and Brain Sciences. 2000;23:73–130.
  • Nevin JA, McLean AP, Grace RC. Resistance to extinction: Contingency termination and generalization decrement. Animal Learning & Behavior. 2001;29:176–191.
  • Osborne SR. The free food (contrafreeloading) phenomenon: A review and analysis. Animal Learning & Behavior. 1977;5:221–235.
  • Podlesnik CA, Shahan TA. Response-reinforcer relations and resistance to change. Behavioural Processes. 2008;77:109–125.
  • Rescorla RA, Solomon RL. Two-process learning theory: Relationships between Pavlovian conditioning and instrumental learning. Psychological Review. 1967;74:151–182.
  • Shull RL, Grimes JA. Resistance to extinction following variable-interval reinforcement: Reinforcer rate and amount. Journal of the Experimental Analysis of Behavior. 2006;85:23–39.
  • Skinner BF. The behavior of organisms: An experimental analysis. Cambridge, MA: Appleton-Century-Crofts; 1938.
  • Trapold MA, Overmier JB. The second learning process in instrumental learning. In: Black AH, Prokasy WF, editors. Classical conditioning II: Current research and theory. New York: Appleton-Century-Crofts; 1972. pp. 427–452.
  • Williams BA. The effects of response contingency and reinforcement identity on response suppression by alternative reinforcement. Learning and Motivation. 1989;20:204–224.