

J Appl Behav Anal. 2010 Spring; 43(1): 95–100.
PMCID: PMC2831456


Henry S Roane, Action Editor


Most research on stimulus preference and reinforcer assessment involves a preference assessment that is followed by a reinforcer assessment. Typically, the most and least preferred stimuli are tested as reinforcers. In the current study, we first quantified the reinforcing efficacies of six food items and then assessed relative preference for each item. Relative preference ranking and reinforcer efficacies showed almost perfect concordance for 1 participant and partial concordance for the other. Discordance tended to occur with the weakest reinforcers.

Keywords: developmental disabilities, preference, reinforcer assessment

An important goal of preference assessments is to identify stimuli that will function as effective positive reinforcers. Most studies have demonstrated that high-preference (HP) stimuli are effective reinforcers, although some studies have shown mixed results about low-preference (LP) stimuli functioning as reinforcers. Inconsistency among these findings may stem from variations in the methods used to assess reinforcer efficacy. For example, Roscoe, Iwata, and Kahng (1999) identified an HP and an LP stimulus based on preference assessment selection percentages using both single- and paired-stimulus presentation formats. Seven of the 8 participants showed higher response rates for the HP stimulus than the LP stimulus in the subsequent concurrent-schedule reinforcer assessment. In a single-operant reinforcer assessment using the LP stimulus, however, response rates for 6 of the 7 participants were similar to those produced by the HP stimulus under the concurrent-operants arrangement. These results suggested that LP stimuli could function as effective reinforcers under some circumstances.

Roscoe et al. (1999) suggested that this finding could have been affected either by the assessment methods used to identify stimulus preference (single- vs. paired-stimulus assessments) or by the presentation format used to evaluate reinforcer efficacy (single vs. concurrent schedules). Alternatively, because the reinforcer efficacies were unknown until after the preference assessments were conducted, it is possible that most or all items assessed would have been potent reinforcers regardless of their relative preference. Relative preference for these stimuli may have been influenced by the format with which they were evaluated. This possibility suggests that an alternative way to study the relation between reinforcing efficacy and preference is to include items with a wide range of predetermined reinforcing efficacies in preference assessments.

Although studies have shown that preference and reinforcer efficacy are positively correlated (e.g., DeLeon, Frank, Gregory, & Allman, 2009), no study has examined this relation by treating reinforcer efficacy as an independent variable. In the current study, we quantified a range of reinforcing efficacies for six stimuli. We then assessed preferences for all six reinforcers with 2 adults with developmental disabilities.


Participants and Setting

Lynn and James, 37 and 35 years old, respectively, participated. Their health records indicated that both participants had severe developmental disabilities. Neither had speech, but Lynn was able to respond to some gestures, and James was able to follow some simple instructions and used some manual signs. Five to seven individual sessions were conducted per week, and each session lasted approximately 30 min. In each session, the experimenter and participant sat at a table facing each other, and the materials needed for a particular trial were placed on the table.

Reinforcer Assessments

We used a single-operant arrangement, a fixed-ratio (FR) 1 schedule of reinforcement, and a reversal (ABAB) design to identify six food reinforcers for each participant. Caregivers nominated food items that participants had consumed previously. During baseline, the experimenter started each session by modeling the target response (i.e., pressing a round microswitch to produce an audible click), guiding the participant to press the switch, praising the participant immediately after the response, and then vocally instructing the participant to press the switch to begin the session. The experimenter did not provide any reinforcement following a switch press. However, the experimenter praised a behavior other than switch pressing (e.g., “nice sitting”) once per minute starting at 30 s and repeated the vocal instruction to press the switch once per minute starting at 1 min. During the reinforcement condition, the procedures were the same as those used during baseline except that the participant received a small piece of the food being evaluated immediately after each switch press. A desk lamp was on during all baseline sessions and off during all reinforcement sessions to facilitate discrimination between the two phases.

The experimenter recorded switch presses during each session, and those data were converted to a rate (responses per minute) by dividing the total number of responses in each session by the duration of the session (5 min). Time spent presenting and consuming the food items was excluded from the data analysis so that each participant had 5 min to emit the target response in each session. Each phase continued until a stability criterion was met, defined as three consecutive sessions in which response rates differed by less than 20% from the mean rate of those three sessions. For example, consecutive session rates of 12, 16, and 14 responses per minute (three-session M = 14) would have met the stability criterion.
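The stability rule above can be expressed as a short check. The following is a minimal Python sketch; the function name and the 20% tolerance parameter are our labels for the criterion described in the text:

```python
def is_stable(rates, tolerance=0.20):
    """Stability criterion from the reinforcer assessments: the last three
    session rates must each differ from their own mean by less than 20%."""
    if len(rates) < 3:
        return False
    last_three = rates[-3:]
    mean = sum(last_three) / 3
    # Each rate must fall within +/- 20% of the three-session mean.
    return all(abs(r - mean) < tolerance * mean for r in last_three)

# The article's example: 12, 16, and 14 responses per minute (mean = 14);
# each rate is within 2.8 (20% of 14) of the mean, so the phase may end.
print(is_stable([12, 16, 14]))  # True
```

With rates of 10, 16, and 14 (mean = 13.3), the first rate would fall outside the 20% band, so the phase would continue.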

We defined each food's reinforcing efficacy as the mean percentage change in responding from the two baseline phases to the two reinforcement phases. We determined mean percentage change by (a) using the final three sessions from each baseline phase to obtain the mean response rate for the two baseline phases, (b) using the final three sessions from each reinforcement phase to obtain the mean response rate for the two reinforcement phases, and (c) calculating the percentage change as [(mean reinforcement rate − mean baseline rate) ÷ mean baseline rate] × 100. Assessments continued until six food items with a range of positive reinforcing efficacies were identified. For Lynn, 10 food items were assessed, and six eventually met criteria for use in the subsequent preference assessment. The experimenter terminated assessment of the four excluded items because Lynn began pushing the item away, the item's reinforcing efficacy was similar to that of stimuli already identified, or the item could not be purchased. For James, all six food items assessed met the criteria for reinforcing efficacy.
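Steps (a) through (c) amount to a single percentage-change computation. A minimal Python sketch (the function name and example rates are ours, not from the study):

```python
def reinforcing_efficacy(baseline_rates, reinforcement_rates):
    """Mean percentage change from baseline to reinforcement responding.

    Each argument holds the final three session rates from each of the two
    phases of that type (i.e., six rates per condition in an ABAB design).
    """
    mean_bl = sum(baseline_rates) / len(baseline_rates)
    mean_sr = sum(reinforcement_rates) / len(reinforcement_rates)
    # Percentage change: (reinforcement mean - baseline mean) / baseline mean x 100.
    return (mean_sr - mean_bl) / mean_bl * 100

# Hypothetical rates: a baseline mean of 2 and a reinforcement mean of 10
# responses per minute yield a 400% increase.
print(reinforcing_efficacy([2, 2, 2, 2, 2, 2], [10, 10, 10, 10, 10, 10]))  # 400.0
```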

Preference Assessments

We used a paired-stimulus preference assessment (Fisher et al., 1992) to assess each participant's preferences for the six reinforcers identified above. Each food item was paired with every other food item 10 times, for a total of 150 trials. The pairings were presented over eight sessions in a quasirandomized sequence, and the foods' positions were counterbalanced across trials. On each trial, the experimenter asked a participant to pick one of the two food items presented side by side in front of him or her. If the participant approached (pointed to or touched) an item, the experimenter provided that item immediately for consumption. If the participant approached both items (which never occurred in the study) or did not respond within 5 s (which occurred once), the experimenter removed both items from the table for 5 s and repeated the trial. The experimenter recorded the item selected on each trial and calculated the preference ranking for each item by dividing the number of selections for an item by the number of trials in which that item was presented and converting the resulting ratio to a percentage.
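The selection-percentage calculation can be sketched as follows. This is a Python illustration with a toy data set; the trial representation and item names are hypothetical, not from the study:

```python
from collections import Counter

def preference_percentages(trials):
    """trials: (item_a, item_b, chosen) tuples, one per paired-stimulus trial.
    Returns each item's selections divided by its presentations, as a percentage."""
    presented = Counter()
    chosen = Counter()
    for a, b, pick in trials:
        presented[a] += 1
        presented[b] += 1
        chosen[pick] += 1
    return {item: chosen[item] / presented[item] * 100 for item in presented}

# Toy data: three items, each pair presented twice (the study used six items
# and 10 presentations per pair, for 150 trials and 50 presentations per item).
trials = [
    ("chips", "cereal", "chips"), ("chips", "cereal", "chips"),
    ("chips", "yogurt", "chips"), ("chips", "yogurt", "yogurt"),
    ("cereal", "yogurt", "cereal"), ("cereal", "yogurt", "yogurt"),
]
print(preference_percentages(trials))  # chips 75%, cereal 25%, yogurt 50%
```

Ranking the items by these percentages yields the preference hierarchy used in the comparison with reinforcing efficacy.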

Interobserver Agreement and Procedural Integrity Checks

During reinforcer assessments, a second observer scored 35% and 49% of the sessions for Lynn and James, respectively. For each session, the 5-min responding period was divided into 30 10-s intervals, and the numbers of switch presses recorded within each interval were compared. Percentage agreement per session between the experimenter and observer was calculated by dividing the smaller recorded number of switch presses by the larger number of switch presses within each 10-s interval and then averaging across all 30 intervals. The mean percentage agreement across sessions was 92% (range, 56% to 100%) for Lynn and 98% (range, 78% to 100%) for James. During preference assessments, a second observer scored 63% of the sessions for each participant. A trial was an agreement if both the observer and the experimenter recorded the same item selection; otherwise, it was a disagreement. Percentage agreement was calculated by dividing the number of agreements by the number of agreements plus disagreements. The mean trial-by-trial agreement on the selection responses per session was 99% (range, 95% to 100%) for Lynn and 100% for James. Procedural fidelity was assessed using a checklist of programmed steps for the reinforcer and preference assessments, in which the number of steps implemented correctly was divided by the total number of steps, and this ratio was converted to a percentage. The mean percentage of correct steps carried out by the experimenter was 99.8% (range, 90% to 100%) for Lynn and 99% (range, 90% to 100%) for James in the reinforcer assessment and 99% (range, 98% to 100%) for Lynn and 100% for James in the preference assessment.
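The exact-count-per-interval agreement statistic for the reinforcer assessments can be sketched as follows (Python; the article does not state how intervals with zero counts from both observers were scored, so treating them as perfect agreement is our assumption):

```python
def interval_agreement(exp_counts, obs_counts):
    """Per-interval IOA: for each 10-s interval, divide the smaller recorded
    count by the larger, then average across intervals and convert to a percentage."""
    ratios = []
    for e, o in zip(exp_counts, obs_counts):
        if e == o == 0:
            # Assumption: two zero counts are scored as perfect agreement.
            ratios.append(1.0)
        else:
            ratios.append(min(e, o) / max(e, o))
    return sum(ratios) / len(ratios) * 100

# Hypothetical counts for three intervals: agreement = (1 + 1 + 0.5) / 3 = 83.3%.
print(round(interval_agreement([3, 0, 2], [3, 0, 1]), 1))  # 83.3
```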


Figure 1 shows the results of the reinforcer assessments for Lynn and James. For all stimuli, higher rates of responding were observed in the reinforcement condition than in baseline. All reinforcers met the three-session stability criterion in each phase, except for one (cheese for Lynn), for which the second reinforcement phase was based on two rather than three sessions. Figure 2 shows the results of the reinforcer assessments (right y axis) plotted against the rankings from the preference assessment (left y axis) for each stimulus for Lynn and James. In general, reinforcer efficacy and preference ranking were positively correlated (Kendall's τ = .73 for Lynn and .47 for James). For both participants, the two most effective reinforcers were also the two most preferred items. However, some relatively less preferred items were, in fact, relatively effective reinforcers (e.g., pretzels for James), and some relatively preferred items were not effective reinforcers (e.g., cereal for Lynn). These results are consistent with previous reports that LP stimuli may function as effective reinforcers under some circumstances (e.g., Roscoe et al., 1999; Taravella, Lerman, Contrucci, & Roane, 2000). From an applied standpoint, this is a concern because potentially effective reinforcers might be excluded from use, or stimuli identified as highly preferred may prove to be ineffective reinforcers.

Figure 1
Responses per minute per session for each food item during reinforcer assessments for Lynn (left) and James (right). For each participant, the reinforcers are ordered by efficacies from the strongest (top) to the weakest (bottom). The mean for the last ...
Figure 2
Preference (left y axis) for reinforcers of different efficacies (right y axis) for Lynn (top) and James (bottom). For each participant, stimuli are ordered from the strongest to the weakest reinforcing efficacies on the x axis.

In terms of how relative preference varied across different reinforcing efficacies, the current results showed an almost perfect relation across items for Lynn and partial concordance across items for James. Preference overestimated one of the weakest reinforcers (cereal) for Lynn and three of the weakest reinforcers for James. It is not clear why discordance tended to occur among the weaker reinforcers. However, we did not repeat the reinforcer tests for those stimuli after the preference assessments. Therefore, we cannot discount the possibility that the reinforcing efficacies of some reinforcers changed after the reinforcer tests, or that weaker reinforcers may be more easily influenced by motivating variables (e.g., satiation).

This study has several limitations. First, the observed relation must be viewed with caution because of the small number of participants. Second, preference responding may have been influenced by the preceding reinforcer tests, possibly due to response priming (e.g., Ayllon & Azrin, 1968). In our study, however, the reinforcer and preference assessments occurred several days apart, making it less plausible that reinforcer testing affected preference assessment results. Third, the results may be limited by how we quantified reinforcer efficacies. Changing parameters such as reinforcement schedule, antecedent stimuli, or stability criteria would likely alter the derived reinforcing efficacies and their relation to preference.

Our results suggest several topics for further research. First, food tends to displace nonfood reinforcers in preference assessments (e.g., Bojak & Carr, 1999; DeLeon, Iwata, & Roscoe, 1997); identifying food and nonfood reinforcers with similar reinforcing efficacies and then assessing preferences for these items would permit systematic examination of relative preference for these item types. Second, Glover, Roane, Kadey, and Grow (2008) showed that progressive-ratio schedules may be effective at differentiating the reinforcing efficacies of HP and LP stimuli; these schedules may therefore be more useful than FR schedules for identifying different reinforcer efficacies. Third, it may be useful to examine the relation between reinforcer efficacy and preference using a concurrent-operants arrangement.


We thank the participants and staff at St. Amant for their cooperation throughout the study and Jennifer Thorsteinsson, Leah Enns, Breanne Byiers, Richard Patton, Quinn Senkow, Duong Nguyen, Kerri Walters, Aynsley Verbeke, and Colleen Murphy for their assistance with interobserver agreement assessments. This research was supported by Grant MOP-77604 from the Canadian Institutes of Health Research and funding from the Province of Manitoba, through the Manitoba Research and Innovation Fund.


  • Ayllon T, Azrin N.H. Reinforcer sampling: A technique for increasing the behavior of mental patients. Journal of Applied Behavior Analysis. 1968;1:13–20.
  • Bojak S.L, Carr J.E. On the displacement of leisure items by food during multiple-stimulus preference assessments. Journal of Applied Behavior Analysis. 1999;32:515–518.
  • DeLeon I.G, Frank M.A, Gregory M.K, Allman M.J. On the correspondence between preference assessment outcomes and progressive-ratio schedule assessments of stimulus value. Journal of Applied Behavior Analysis. 2009;42:729–733.
  • DeLeon I.G, Iwata B.A, Roscoe E.M. Displacement of leisure reinforcers by food during preference assessments. Journal of Applied Behavior Analysis. 1997;30:475–484.
  • Fisher W, Piazza C.C, Bowman L.G, Hagopian L.P, Owens J.C, Slevin I. A comparison of two approaches for identifying reinforcers for persons with severe and profound disabilities. Journal of Applied Behavior Analysis. 1992;25:491–498.
  • Glover A.C, Roane H.S, Kadey H.J, Grow L.L. Preference for reinforcers under progressive- and fixed-ratio schedules: A comparison of single and concurrent arrangements. Journal of Applied Behavior Analysis. 2008;41:163–176.
  • Roscoe E.M, Iwata B.A, Kahng S.W. Relative versus absolute reinforcement effects: Implications for preference assessments. Journal of Applied Behavior Analysis. 1999;32:479–493.
  • Taravella C.C, Lerman D.C, Contrucci S.A, Roane H.S. Further evaluation of low-ranked items in stimulus-choice preference assessments. Journal of Applied Behavior Analysis. 2000;33:105–108.
