The present experiment examined the effects of varying stimulus disparity and relative punisher frequencies on signal detection by humans. Participants were placed into one of two groups. Group 3 participants were presented with 1:3 and 3:1 punisher frequency ratios, while Group 11 participants were presented with 1:11 and 11:1 punisher frequency ratios. For both groups, stimulus disparity was varied across three levels (low, medium, high) for each punisher ratio. In all conditions, correct responses were intermittently reinforced (1:1 reinforcer frequency ratio). Participants were mostly biased away from the more punished alternative, with more extreme response biases found for Group 11 participants than for Group 3 participants. For both groups, estimates of discriminability increased systematically across the three disparity levels and were unaffected by the punisher ratios. Likewise, estimates of response bias and sensitivity to the punisher ratios were unaffected by changes in discriminability, supporting the assumption of parameter invariance in the Davison and Tustin (1978) model of signal detection. Overall, the present experiment found no relation between stimulus control and punisher control, and provided further evidence that punishers have parallel but opposite effects to reinforcers in signal-detection procedures.
The study of detection focuses on decision-making in situations of uncertainty. In everyday life, many situations require organisms to make choices involving the detection or identification of stimuli. For example, an animal must decide whether a plant is toxic or safe to eat, a pedestrian must decide whether it is safe to cross the road, or a quality control officer must decide whether a product meets certain production standards. In these situations, negative outcomes arising from incorrect choices and positive outcomes arising from correct choices both play an integral part in decision-making.
The signal-detection task is a discrete-trial procedure, where, on each trial, the subject is presented with one of two discriminative stimuli (S1 or S2). These can vary on some physical (e.g., intensity) or temporal (e.g., stimulus presentation duration) dimension, or can be the presence or absence of a stimulus (e.g., a sound against background noise). The subject then makes one of two available responses (B1 or B2) to identify which stimulus was presented. With the combination of two stimulus types and two response alternatives, four response outcomes are possible: responding B1 following S1 (B11), responding B2 following S1 (B12), responding B1 following S2 (B21), and responding B2 following S2 (B22). Correct responses (B11 and B22) can be reinforced, for example, with money (Johnstone & Alsop, 2000), food (McCarthy & Davison, 1979), or brain stimulation (Terman, 1970). Errors (B12 and B21) usually have no consequence; however, they can also be punished (e.g., time-out; Hume & Irwin, 1974).
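Given the notation above, the four outcome cells can be tallied directly from a trial record. A minimal sketch in Python (the trial coding and function name are illustrative, not part of the original procedure):

```python
# Tally the four signal-detection outcomes from (stimulus, response) pairs.
# Stimuli and responses are coded 1 or 2; outcome "Bij" means response j
# was made following stimulus i, so B11/B22 are correct and B12/B21 are errors.

from collections import Counter

def tally_outcomes(trials):
    """trials: iterable of (stimulus, response) pairs, each coded 1 or 2."""
    counts = Counter(f"B{s}{r}" for s, r in trials)
    return {cell: counts.get(cell, 0) for cell in ("B11", "B12", "B21", "B22")}

trials = [(1, 1), (1, 2), (2, 2), (2, 2), (2, 1), (1, 1)]
print(tally_outcomes(trials))  # {'B11': 2, 'B12': 1, 'B21': 1, 'B22': 2}
```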
Traditionally, models of signal detection have investigated performance as a function of two independent parameters: discriminability and response bias. Discriminability measures how well the subject can tell the two stimuli (S1 and S2) apart. Response bias measures the subject's tendency to make more of one response over another response, irrespective of which stimulus (S1 or S2) was presented. Response bias is often manipulated by varying the relative frequency or magnitude of reinforcement for B11 and B22 responses. Models of signal detection aim to provide measures of discriminability which are independent of changes in response bias, and measures of response bias which are unaffected by changes in discriminability (see Macmillan & Creelman, 2005).
Choice between two response alternatives is described by the generalized matching law:

log(B1/B2) = a log(R1/R2) + log c     (Equation 1)

where B1 and B2 denote the number of responses made on Alternatives 1 and 2 respectively, and R1 and R2 denote the number of reinforcers obtained by making B1 and B2 responses. The parameter c (or log c) is a measure of inherent bias independent of changes in the reinforcer ratio (e.g., side or color preference). The parameter a (termed sensitivity) measures the effect that variations in the reinforcer distribution (R1/R2) have on the subject's response distribution (B1/B2).
Davison and Tustin (1978) stated that when two stimuli (S1 and S2) are indistinguishable in a signal-detection procedure, behavior follows the generalized matching law (Equation 1). When the stimuli are more distinguishable, the subject makes more correct (B11 and B22) responses, and Davison and Tustin formulated two separate equations to describe this relation. In the case where reinforcers (R11 and R22) are intermittently obtained for correct responses, choice in detection tasks is described on S1 trials by:

log(B11/B12) = log d + a log(R11/R22) + log c     (Equation 2)
and on S2 trials by:

log(B21/B22) = −log d + a log(R11/R22) + log c     (Equation 3)
All notations are as above, and the parameter log d measures stimulus discriminability. When log d = 0, S1 and S2 are indiscriminable and Equations 2 and 3 reduce to the generalized matching law. As S1 and S2 become more disparate, the subject makes more correct responses (B11 and B22); thus, log d is additive in Equation 2 and subtractive in Equation 3. Algebraic subtraction of Equations 2 and 3 provides a bias-free measure of discriminability:

log d = 0.5 log[(B11/B12)(B22/B21)]     (Equation 4)
where all notations are as above. Equation 4 predicts that discriminability should be independent of the reinforcer ratio (R11:R22). However, Johnstone and Alsop (1999) re-analyzed a number of past detection studies and found greater estimates of log d for unequal reinforcer ratios than equal reinforcer ratios.
Algebraic addition of Equations 2 and 3 provides a discriminability-free measure of response bias. This bias is attributable to the effects of both the distribution of reinforcers and any inherent bias (log c):

log b = 0.5 log[(B11/B12)(B21/B22)] = a log(R11/R22) + log c     (Equation 5)
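The subtraction and addition steps above yield point estimates that can be computed directly from the four response counts. A minimal Python sketch (function names are illustrative; base-10 logarithms are assumed, as is conventional for these measures):

```python
import math

def log_d(B11, B12, B21, B22):
    """Bias-free discriminability (Equation 4): 0.5*log10[(B11/B12)*(B22/B21)]."""
    return 0.5 * math.log10((B11 / B12) * (B22 / B21))

def log_b(B11, B12, B21, B22):
    """Discriminability-free response bias (Equation 5):
    0.5*log10[(B11/B12)*(B21/B22)]."""
    return 0.5 * math.log10((B11 / B12) * (B21 / B22))

# A hypothetical session: fairly accurate responding with a mild bias toward B1.
print(round(log_d(50, 10, 14, 46), 3))
print(round(log_b(50, 10, 14, 46), 3))
```

With indiscriminable stimuli (all four cells equal), log d is 0 and the measures reduce to the matching-law case, as the text describes.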
where all notations are as above. Equation 5 states that response bias, log b, should follow the generalized matching law and this expression includes no term for discriminability. Thus, it predicts that the effects of the reinforcer ratio (and the sensitivity of the subject's behavior to changes to that ratio, a) should be independent of changes in discriminability (log d; McCarthy & Davison, 1980). This prediction, however, has received mixed support with some studies finding no systematic relation and others finding an inverse relation between sensitivity and discriminability. As an example, Figure 1 plots the relation between estimates of sensitivity and discriminability for three studies that varied both the reinforcer frequency ratio and the disparity between S1 and S2 (Alsop & Davison, 1991; Alsop & Porritt, 2006; McCarthy & Davison, 1984). These studies employed controlled reinforcer procedures where the obtained reinforcer ratios were constrained to match the arranged reinforcer ratios (Stubbs & Pliskoff, 1969). Because subjects made too few errors (B12 and B21 responses) at the highest discriminability levels to calculate accurate estimates of a and log d, only conditions with estimates of log d below 1.75 (i.e., around 98% correct assuming no inherent bias) were plotted.
Figure 1 (top) shows the data from McCarthy and Davison (1984), which arranged four stimulus discriminability levels, and for each level, arranged three different reinforcer frequency ratios (4:1, 1:1, and 1:4). For example, in the 4:1 condition, pigeons obtained reinforcement for correct B11 responses around four times more frequently than for correct B22 responses. McCarthy and Davison found no systematic relation between sensitivity to the reinforcer ratios and discriminability. Figure 1 (middle) plots the data from Alsop and Davison (1991), who also arranged a number of different stimulus discriminability levels with three reinforcer frequency ratios (9:1, 1:1, and 1:9). Unlike McCarthy and Davison however, they found an inverse relation between sensitivity and discriminability—as their pigeons' estimates of discriminability increased, estimates of sensitivity to the reinforcer ratios systematically decreased. Finally, Figure 1 (bottom) plots the data from Alsop and Porritt (2006), where instead of varying the reinforcer frequency ratios, they varied the reinforcer magnitude ratios (3:1, 1:1, and 1:3). For example, in the 3:1 condition, pigeons received 6-s access to food for reinforced B11 responses and only 2-s access to food for reinforced B22 responses. Similar to Alsop and Davison's finding with reinforcer frequency, Alsop and Porritt found an inverse relation between sensitivity to reinforcer magnitude and discriminability.
The mixed findings regarding the relation between sensitivity to reinforcement and discriminability are not limited to the above studies, but these provide an illustration of the conflicting outcomes. Of particular relevance to the present study, Johnstone and Alsop (2000) conducted the only study investigating the effects of stimulus disparity and reinforcer frequency with human participants. In their study, one group of participants received a 4:1 controlled reinforcer ratio across conditions while another group received a 1:4 controlled reinforcer ratio. Participants in both groups completed four conditions where the disparity between S1 and S2 stimuli was varied. Figure 2 plots the results from their study. As expected, when stimulus disparity increased, discriminability (log d) also increased (Figure 2, top). Furthermore, estimates of response bias (log b—Figure 2, bottom) were negative for the 1:4 group (black bars) and positive for the 4:1 group (grey bars), demonstrating reinforcer control (i.e., participants were biased towards responding on the alternative associated with the greater frequency of reinforcement). However, there were no systematic effects of stimulus disparity on response bias for either group. These data are consistent with the notion of an independence between reinforcer control (log b or a) and stimulus control (log d).
The studies described above varied the relative frequencies or magnitudes of reinforcers for the two types of correct responses. In fact, most detection research (using both nonhuman and human animals) has focused on the effects of positive outcomes for correct responses. In contrast, little attention has been given to the effects of negative outcomes for errors. This lack of research on aversive consequences is of concern because many real-life detection scenarios involve both positive outcomes (e.g., crossing the road safely) and negative outcomes (e.g., getting hit by a car). Recently, Lie and Alsop (2009) found that punishers had parallel but opposite effects to reinforcers using a signal-detection task with human participants. In one of their experiments, participants occasionally received points for correct responses and lost points for errors. The punisher frequency ratio was varied across four conditions (5:1, 2:1, 1:2, and 1:5) and Lie and Alsop found that participants were systematically biased away from the response alternative associated with the higher rate of punishment.
The present experiment used the same detection task as Lie and Alsop (2009) to investigate the relation between relative punisher frequency and stimulus disparity. There were two groups of participants. For each group, the relative frequency of punishment (P21:P12) was varied across two levels (1:3 and 3:1, Group 3; 1:11 and 11:1, Group 11) and across three levels of stimulus disparity. These results were compared to Johnstone and Alsop's (2000) study which found no relation between the effects of stimulus disparity and the relative frequency of reinforcers.
Twenty-four undergraduate students from the University of Otago were recruited from the psychology participant pool. Group 3 consisted of 12 females aged between 18 and 21 years (M = 19.5 years). Group 11 consisted of 11 females and 1 male aged between 20 and 24 years (M = 21.0 years).
The experiment was conducted in a small room (2.3 m × 3.0 m), where the window blinds were closed to minimize visual distractions. The participant sat facing a PC, with his or her head approximately 0.5 m from a 43-cm (17-inch) color monitor. The computer presented the instructions, ran the signal-detection program, and recorded the participant's responses. The program was written in VisualBasic 6.0. Stimuli were 12 × 12 arrays presented in the center of the white screen, with each array position occupied by either a blue or red alien cartoon character (“greeblie”) measuring approximately 8 mm wide and 9 mm high (see Figure 3, Lie & Alsop, 2009). Stimuli with more blue objects than red objects were classed as “more blue” (S1) and stimuli with more red objects than blue objects were classed as “more red” (S2). The difference between the number of blue and red objects in the array varied across three disparity levels. At the easiest level (high stimulus disparity), there were 77 of one color and 67 of the other color. At the medium level (medium disparity), there were 75 of one color and 69 of the other color. At the hardest level (low disparity), there were 73 of one color and 71 of the other color. For a particular disparity level, the program randomly determined the arrangement of blue and red greeblies within the stimulus array on each trial. Participants responded on a two-key response panel (with telegraph Morse keys) connected to the computer's USB port via a LabJack™ interface device, with the left key labeled as the response for “more blue” (B1) and the right key labeled as the response for “more red” (B2).
The first 12 participants were assigned to Group 3 and the next 12 participants were assigned to Group 11. Participants in each group completed six experimental sessions (one condition per session) no less than 24 hr apart and no more than one week apart. Before the start of the first session, participants read an information sheet and signed an informed consent form. Each session consisted of a set of instructions presented on the computer screen followed by the experimental trials. The following instructions were presented:
If the participant had no questions, the experimenter started the trials. Each trial began with a 1-s presentation of a small animated picture of a juggler in the middle of the screen (warning stimulus). A stimulus array containing either more blue or red greeblies then appeared and remained on the screen until the participant responded on the response panel, or for a maximum of 3 s. The stimulus presentation probability was .5 for all conditions; that is, participants were equally likely to be presented with S1 (more blue greeblies) or S2 (more red greeblies). If the participant had not responded after the 3-s stimulus presentation, the screen remained blank until the participant pressed a response key.
Following each response, there were three possible consequences. First, if neither a reinforcer nor a punisher was scheduled, the screen went blank for 1 s; that is, no feedback was given. This was followed by a 1-s intertrial interval (ITI) with a blank screen, and the next trial began.
Second, if the participant made a correct response (i.e., B11 or B22) and a reinforcer was scheduled for that response, the center of the screen displayed the statement: “Correct! You are one point closer to finishing the session.” The start of this presentation was accompanied by a 1-s “ta da!” sound. If the participant made a correct “more blue” response (i.e., the participant responded on the left key to a “more blue” stimulus—B11), a picture of a blue tick appeared in the lower left corner of the screen. Likewise, if it was a correct “more red” response (the participant responded on the right key to a “more red” stimulus—B22), a picture of a red tick appeared in the lower right corner of the screen. The vertical thermometer bar also went up one “space” (out of 50 spaces). The statement, colored tick, and thermometer bar remained on the screen for 3 s. This was followed by a 1-s ITI and the next trial began.
Finally, if the participant made an incorrect response and a punisher was scheduled for that response, then the center of the screen displayed the statement: “Incorrect! You are one point further from finishing the session!” The start of this presentation was accompanied by a 1-s “argh!” sound. If the participant made an incorrect “more blue” response (i.e., the participant responded on the left key to a “more red” stimulus—B21), a picture of a blue “x” appeared in the lower left corner of the screen. If the participant made an incorrect “more red” response (B12), a picture of a red “x” appeared in the lower right corner of the screen. The vertical thermometer bar went down one space. The statement, colored “x”, and thermometer bar remained on the screen for 3 s. This was followed by a 1-s ITI and the next trial began.
The session ended when the participant reached a net total of 50 points. If the participant had not reached 50 points within 50 min, the session also ended. At the end of the last trial, the screen displayed the statement: “Congratulations. You have reached the end of the session” for 4 s and the program ended.
The six experimental conditions varied stimulus disparity and the distribution of punishers for errors. Stimulus disparity was varied across three levels, high, medium, and low, as described above. Incorrect responses were intermittently punished and the distribution of punishers (P21:P12) varied across two levels at each level of stimulus disparity. For Group 3 participants, the punisher distributions were 3:1 and 1:3; for example, if the punisher distribution was 3:1, participants were three times more likely to receive P21 punishers than P12 punishers. For Group 11 participants, the punisher distributions were 11:1 and 1:11. Correct responses were intermittently reinforced and the distribution of reinforcers (point gains) was held constant and equal (1:1) across all conditions; that is, participants were equally likely to receive R11 and R22 reinforcers for correct B11 and B22 responses, respectively. The presentation order of the conditions was partially counterbalanced across all the participants for each group, with the constraint that conditions with the same punisher ratio were run consecutively (see Appendix for condition order).
The distributions of reinforcers and punishers were arranged using interdependent scheduling (e.g., McCarthy & Davison, 1984; Stubbs & Pliskoff, 1969), where the computer program randomly selected the next correct response (B11 or B22) to be reinforced, or the next incorrect response (B21 or B12) to be punished, in accordance with the arranged reinforcer and punisher ratios. The reinforcer (or punisher) then had to be received before the program selected the next response to be reinforced or punished. The scheduled rates of reinforcement and punishment differed across disparity levels to control for a potential confound: without this adjustment, the overall rates of reinforcement and punishment would vary as a function of stimulus disparity. For example, at the highest disparity level, where participants are correct more often (and thus incorrect less often), participants could obtain reinforcers at a higher overall rate, and punishers at a lower overall rate, than at lower disparity levels (where participants make more errors). For the high disparity conditions, the overall rate of reinforcement was based on a variable-interval (VI) 15-s schedule and the overall rate of punishment was based on a VI 10-s schedule. For the medium disparity conditions, reinforcement was based on a VI 10-s schedule and punishment was based on a VI 15-s schedule. For the low disparity conditions, reinforcement was based on a VI 10-s schedule and punishment was based on a VI 40-s schedule.
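The ratio-selection component of interdependent scheduling can be sketched as follows. This is a simplified illustration that omits the VI timing component; the class and parameter names are hypothetical, not from the original program:

```python
import random

class InterdependentScheduler:
    """Sketch of interdependent scheduling (cf. Stubbs & Pliskoff, 1969):
    the program picks which of two response types earns the next outcome,
    in proportion to the arranged ratio, and holds that assignment until
    the outcome is actually collected."""

    def __init__(self, ratio=(3, 1), rng=None):
        self.ratio = ratio              # e.g., (3, 1): type 0 armed 3x as often
        self.rng = rng or random.Random()
        self.armed = self._draw()

    def _draw(self):
        p0 = self.ratio[0] / sum(self.ratio)
        return 0 if self.rng.random() < p0 else 1

    def collect(self, response_type):
        """Return True if this response collects the armed outcome."""
        if response_type == self.armed:
            self.armed = self._draw()   # only then is the next outcome armed
            return True
        return False

rng = random.Random(1)
sched = InterdependentScheduler(ratio=(3, 1), rng=rng)
# Alternate response types and count outcomes delivered to each type.
counts = [0, 0]
for t in range(10000):
    r = t % 2
    if sched.collect(r):
        counts[r] += 1
print(counts)  # type 0 collects roughly three times as many outcomes
```

Because an armed outcome must be collected before the next one is assigned, the obtained outcome ratio is constrained to approximate the arranged ratio regardless of the subject's response distribution.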
The numbers of obtained reinforcers (R11 and R22) and punishers (P21 and P12) were calculated across all trials for each session. The numbers of left key (“more blue”) responses following S1 (B11) and S2 (B21), and right key (“more red”) responses following S1 (B12) and S2 (B22) were calculated across the last 120 trials from each condition for each participant. Trials before the last 120 trials were discarded to allow participants adequate contact with the reinforcers and punishers.
The Appendix shows the results for individual participants in each group. These data show that the manipulations were successful at keeping the reinforcer ratios constant and equal across conditions. Furthermore, the obtained punisher ratios approximated the arranged punisher ratios for each condition.
Individual estimates of discriminability (log d) and response bias (log b) were calculated for each participant in each condition using Equations 4 and 5 (Davison & Tustin, 1978), respectively. Because there were a few instances where participants made zero responses in the last 120 trials for a particular response type (B11, B12, B21, or B22), a correction of 0.25 was applied to all response counts for log b and log d calculations (Brown & White, 2005). These estimates were then averaged across all participants in each group for each condition. Figure 3 presents these mean estimates of discriminability (log d—top) and response bias (log b—bottom) for Group 3 (left) and Group 11 (right) for the three stimulus disparity levels and the different punisher ratios (3:1 and 1:3 for Group 3; 11:1 and 1:11 for Group 11). Figure 3 (top) shows that, as expected, estimates of discriminability significantly increased as stimulus disparity increased for both Group 3, F(2,22) = 94.45, p < .001, and Group 11, F(2,22) = 141.7, p < .001. Furthermore, there were no significant differences in discriminability between the two punisher ratios for Group 3, F(1,11) = .791, p = .39, or Group 11, F(1,11) = .229, p = .64.
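The zero-count correction can be folded into the calculation by adding 0.25 to every response cell before taking ratios, which keeps the log ratios defined when a cell is empty. A sketch, assuming base-10 logarithms (the function name is illustrative):

```python
import math

def corrected_log_d_log_b(B11, B12, B21, B22, correction=0.25):
    """Add a small constant to every cell so that zero counts do not make
    the log ratios undefined (cf. Brown & White, 2005), then compute the
    Davison-Tustin point estimates of discriminability and response bias."""
    b11, b12, b21, b22 = (x + correction for x in (B11, B12, B21, B22))
    log_d = 0.5 * math.log10((b11 / b12) * (b22 / b21))
    log_b = 0.5 * math.log10((b11 / b12) * (b21 / b22))
    return log_d, log_b

# A hypothetical participant who made no B12 errors in the last 120 trials:
d, b = corrected_log_d_log_b(60, 0, 5, 55)
print(round(d, 2), round(b, 2))
```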
Figure 3 (bottom) shows that mean response biases were more extreme for Group 11 (right) compared to Group 3 (left). The effect of punisher ratio on response bias was significant for both Group 3, F(1,11) = 4.895, p < .05, and Group 11, F(1,11) = 21.77, p < .01. Furthermore, there were no significant differences in response bias across the three stimulus disparity levels for Group 3, F(2,22) = .168, p = .85, or Group 11, F(2,22) = .748, p = .49. Despite not finding a significant effect, however, there appeared to be an increasing trend for the 1:11 punisher ratio across the three disparity levels for Group 11 (Figure 3, bottom right—black striped bars). To investigate this further, Figure 4 presents the individual response bias data for Group 11 participants. Figure 4 (left) shows that the apparent increase in mean response bias estimates across the stimulus disparity levels for the 1:11 condition was predominantly due to extreme data points from 2 participants (Participants 15 and 24). Figure 4 (right) confirms the pattern seen in Figure 3 (bottom right—grey striped bars), with no effect of disparity level on response bias estimates in the 11:1 condition.
Another way to assess the effects of punisher ratios at each level of stimulus disparity is to compare the change in bias estimates from the 3:1 (Group 3) and 11:1 (Group 11) conditions with their reversals (i.e., 1:3 and 1:11 respectively). This was done using Equation 6 for each participant at each disparity level:

log b = ap log(P12/P21) + log c     (Equation 6)
where all notations are as above. Equation 6 is analogous to Equation 5 in that it measures the sensitivity of behavior to the changes in the punisher ratios (ap). There are, of course, only two conditions for each fit of Equation 6, therefore the estimates of ap will not be very precise. However, it does allow for a rough comparison between the present experiment and previous studies which examined changes in sensitivity of behavior to changes in reinforcer ratios (a, Equation 5) across different levels of stimulus disparity. In the present experiment, positive ap estimates indicate a systematic bias away from the more frequently punished alternative (i.e., towards the less frequently punished alternative). Because there were 8 sessions (out of 144) where some participants received no punishers for one response alternative, a correction of 0.25 was applied to all punisher counts for log (P12/P21) calculations.
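With only two punisher-ratio conditions per fit, ap reduces to the slope of the line through two points: response bias against the obtained log punisher ratio. A hypothetical illustration (the numbers are invented; the 0.25 correction is applied to the punisher counts as described above):

```python
import math

def punisher_sensitivity(cond1, cond2, correction=0.25):
    """Two-condition estimate of ap: the slope of log b against the obtained
    log punisher ratio, log10(P12/P21). Each condition is a dict holding the
    response bias estimate and the obtained punisher counts."""
    def x(cond):
        return math.log10((cond["P12"] + correction) / (cond["P21"] + correction))
    return (cond1["log_b"] - cond2["log_b"]) / (x(cond1) - x(cond2))

# Hypothetical participant: biased toward B1 when B2 errors are punished
# more often (P12 high), and away from B1 in the reversed condition.
c_1to3 = {"log_b": 0.10, "P12": 30, "P21": 10}   # 1:3 (P21:P12) condition
c_3to1 = {"log_b": -0.08, "P12": 10, "P21": 30}  # 3:1 condition
print(round(punisher_sensitivity(c_1to3, c_3to1), 3))
```

A positive slope here corresponds to the systematic bias away from the more frequently punished alternative described in the text.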
Table 1 displays individuals' estimates of sensitivity to changes in the punisher ratio (ap) for the three different levels of disparity in Group 3 and Group 11. For Group 3, positive ap values (indicating a bias away from the more punished alternative) were found for 10 out of the 12 participants at the lowest level of disparity (M = 0.14). At the medium disparity level, positive ap values were found for 8 out of the 12 participants (M = 0.18). However, at the highest disparity level, a less consistent pattern was found, with 6 participants obtaining negative ap values (indicating a bias towards the more punished alternative), and the remaining 6 participants with positive ap values (M = −0.04). One-sample t-tests conducted on the ap values for each disparity level found that estimates of sensitivity were significantly greater than zero for the lowest disparity level only; Low: t(11) = 2.223, p < .05; Medium: t(11) = 2.008, p = .07; High: t(11) = −0.298, p = .77. A repeated-measures ANOVA found no significant differences in sensitivity estimates between the three disparity levels, F(2,22) = 1.306, p = .29.
For Group 11, almost all participants obtained positive ap values at all three disparity levels (Low: M = 0.26; Medium: M = 0.32; High: M = 0.33), therefore showing some sensitivity to change in the punisher ratio. Only 2 participants (Participant 16 at all three levels, and Participant 14 at the highest disparity level) had small negative ap values. One-sample t-tests found that ap values were significantly greater than zero for all three disparity levels, Low: t(11) = 5.385, p < .001, Medium: t(11) = 3.888, p < .01, High: t(11) = 3.548, p < .01. Consistent with the findings from Group 3, no significant differences in sensitivity estimates were found across the three disparity levels for Group 11 participants, F(2,22) = .617, p = .55.
Figure 5 plots the estimates of sensitivity to changes in the punisher ratio (ap) against estimates of discriminability (log d). For Group 3 (left), estimates of sensitivity appeared lower at higher discriminability values compared to lower discriminability values, but this correlation was not significant (r = −.23, n = 36, p = .18, two-tailed). For Group 11 (right), sensitivities appeared greater at higher discriminability levels than lower discriminability levels. However, like Group 3, the correlation between sensitivity and discriminability was not significant (r = .30, n = 36, p = .08, two-tailed).
The above analyses calculated sensitivity by examining the change in response bias as a function of the relative frequency of punishment (log P21/P12). However, changes in response bias can also be assessed as a function of the combined effects of the reinforcers and punishers. Researchers using standard concurrent-schedule choice procedures have proposed two competing models that attempt to combine reinforcer and punisher effects. An additive model of punishment (e.g., Deluty, 1976) predicts that the effects of punishers obtained for responding on one alternative add to the effects of reinforcers obtained for responding on the other alternative. On the other hand, a subtractive model (e.g., de Villiers, 1980; Farley & Fantino, 1978) predicts that the effects of punishers directly subtract from the effects of reinforcers for the same alternative. Lie and Alsop (2007, 2009) integrated each of the two competing models into Davison and Tustin's (1978) model of signal detection. For the additive model, response bias can be calculated by

log b = ar+p log[(R11 + q·P12)/(R22 + q·P21)] + log c     (Equation 7)
with all notation as above, and ar+p is the sensitivity of the subject's behavior to the combined additive effects of reinforcers and punishers. The scaling parameter q is used to equate the value of a punisher to the value of a reinforcer (e.g., if q = .5, then one punisher has half the subjective weighting of one reinforcer). For the subtractive model, response bias can be calculated by

log b = ar−p log[(R11 − q·P21)/(R22 − q·P12)] + log c     (Equation 8)
with all notation as above, and ar−p is the sensitivity of the subject's behavior to the combined subtractive effects of reinforcers and punishers. Research on the combined effects of reinforcers and punishers using concurrent-schedule and signal-detection procedures has found stronger support for a subtractive model over an additive model (Critchfield, Paletz, MacAleese, & Newland, 2003; de Villiers, 1980; Farley, 1980; Farley & Fantino, 1978; Lie & Alsop, 2009).
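Under these two definitions, the combined reinforcer-punisher term that response bias is regressed on differs only in where the punishers enter. A sketch assuming the algebraic forms implied by the verbal definitions above (the function name is illustrative):

```python
import math

def combined_log_ratio(R11, R22, P12, P21, q=1.0, model="subtractive"):
    """Combined reinforcer-punisher input to response bias under the two
    competing models. Additive: punishers for one alternative add to the
    other alternative's reinforcers. Subtractive: punishers subtract from
    the same alternative's reinforcers."""
    if model == "additive":
        return math.log10((R11 + q * P12) / (R22 + q * P21))
    return math.log10((R11 - q * P21) / (R22 - q * P12))

# Equal reinforcers, unequal punishers (more P12 than P21), q = 1:
add = combined_log_ratio(20, 20, 12, 4, model="additive")
sub = combined_log_ratio(20, 20, 12, 4, model="subtractive")
print(round(add, 3), round(sub, 3))
```

Note that for equal reinforcers the subtractive term is the more extreme of the two, so fitting the same obtained biases against the smaller additive term yields larger sensitivity estimates, consistent with the pattern of ar+p and ar−p values reported below.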
Table 2 displays individuals' estimates of sensitivity calculated by the additive model (ar+p) and the subtractive model (ar−p) for the three different levels of disparity for Group 3 and Group 11. For all calculations, it was assumed that q = 1 because reinforcers and punishers were equivalent in magnitude (i.e., one-point gains and losses respectively). In all cases, estimates of sensitivity calculated by the additive model (Group 3: Low: M = 0.52, Medium: M = 0.80, High: M = −0.12; Group 11: Low: M = 1.41, Medium: M = 2.48, High: M = 3.26) were more extreme than the corresponding estimates calculated by the subtractive model (Group 3: Low: M = 0.22, Medium: M = 0.33, High: M = −0.10; Group 11: Low: M = 0.63, Medium: M = 1.41, High: M = 2.19). For both Group 3 and Group 11, sensitivity estimates spanned a wide range of values (Additive: −3.44 to 10.58; Subtractive: −1.95 to 8.42).
As before, we also examined whether sensitivity estimates significantly differed from zero. For Group 3, estimates of sensitivity did not significantly differ from zero for each disparity level for either sensitivity measure (Additive: Low: t(11) = 2.052, p = .07; Medium: t(11) = 2.050, p = .07; High: t(11) = .188, p = .85; Subtractive: Low: t(11) = 1.960, p = .08; Medium: t(11) = 1.955, p = .08, High: t(10) = .281, p = .79). For Group 11 however, estimates of sensitivity were significantly greater than zero at all disparity levels for both sensitivity measures (Additive: Low: t(11) = 5.389, p < .001; Medium: t(11) = 3.295, p < .01; High: t(11) = 3.222, p < .01; Subtractive: Low: t(11) = 5.147, p < .001; Medium: t(11) = 2.860, p < .05, High: t(11) = 2.978, p < .05).
Finally, we examined whether the additive and subtractive sensitivity estimates significantly differed across the three disparity levels. For the additive model, a repeated-measures ANOVA found no significant differences in sensitivity estimates (ar+p) across the three disparity levels for Group 3 participants, F(2,22) = .972, p = .39, or Group 11 participants, F(2,22) = 3.213, p = .06. For the subtractive model however, the data violated the assumption of sphericity (Mauchly's W = .489, p < .05). Using a Greenhouse-Geisser correction, the difference in sensitivity estimates (ar−p) between disparity levels approached significance, F(1.324,14.56) = 4.210, p = .050. A cursory examination of the data found that sensitivity estimates appeared to increase as the stimulus disparity level increased (i.e., from low to high). To investigate this further, a Page's trend test was conducted; however, this test found no significant increase in ar−p estimates across increases in stimulus disparity (L = 151, n = 12, k = 3).
The present study examined the relation between stimulus control and punisher control using human participants in a signal-detection task. There were three main findings. First, the participants' behavior was found to be under stimulus control. This was demonstrated by the significant effect that varying stimulus disparity had on the participants' ability to discriminate between the stimuli (‘more blue’ versus ‘more red’). Participants made more correct responses when stimuli were more disparate than when they were less disparate (Figure 3, top). This was consistent with Johnstone and Alsop (2000—Figure 2, top) who also found an increase in discriminability with increases in stimulus disparity using human participants. While this finding was not surprising, it was important to confirm that the disparity levels were sufficient to affect participants' estimates of discriminability.
Second, behavior was also under punisher control. Overall, participants were biased away from the response alternative associated with the higher frequency of punishment for each punisher ratio. The size of this effect was also somewhat dependent on the punisher ratio in place—larger response biases were found for Group 11 (1:11 and 11:1 punisher ratios), compared to Group 3 (1:3 and 3:1 punisher ratios). Significant differences in response bias were found between the two punisher ratios for both Groups 3 and 11. Furthermore, when estimates of sensitivity to the punisher ratios (ap) were calculated for each participant, sensitivity values were significantly greater than zero across all three disparity levels for Group 11, and the lowest disparity level for Group 3. Mean sensitivity estimates (ap) for Group 11 (between .25 and .32) were similar to the mean sensitivity estimate of .20 found by Lie and Alsop (2009), who used a similar task and sample of participants. Although these sensitivity to punishment (ap) values were slightly lower than sensitivity to reinforcement (a) values found with human participants in previous studies using reinforcer manipulations (e.g., Alsop, Rowley, & Fon, 1995; Johnstone & Alsop, 2000; Lie & Alsop, 2009—mean sensitivity between .30 and .40), the effects of the punishers in the present experiment were probably attenuated by the effects of the constant and equal (1:1) distribution of reinforcers. However, this background rate of reinforcement was necessary to maintain participant responding.
Finally, the present experiment found no evidence of a relation between stimulus control and punisher control, consistent with the predictions of the Davison and Tustin (1978) model of signal detection. First, estimates of discriminability did not differ across the two punisher ratios for Group 3 or Group 11, and the mean estimates of discriminability were similar across groups at each level of disparity (Figure 3, top). Second, estimates of response bias (away from the more frequently punished alternative) did not significantly differ across the three stimulus disparity levels for either group (Figure 3, bottom). This parallels the results of Johnstone and Alsop's (2000) study, in which no changes in response bias (towards the more frequently reinforced alternative) were found across their four disparity levels (Figure 2). Johnstone and Alsop also used a similar sample of participants (university students), similar stimuli (visual arrays), and the same reinforcer type (points) as the present study. Last, estimates of sensitivity to the punisher ratios (ap) were not significantly correlated with discriminability for either group in the present study (Figure 5). While this was consistent with nonhuman research looking at sensitivity to reinforcer ratios (a) conducted by McCarthy and Davison (1984 —Figure 1, top), it was inconsistent with the research conducted by Alsop and Davison (1991—Figure 1, middle) and Alsop and Porritt (2006—Figure 1, bottom).
Although the present experiment found no relation between stimulus control and punisher control, this finding should perhaps be interpreted with some caution because of the limited number of conditions that participants received; this was unfortunately due to time constraints on subject participation. It is possible that the addition of more punisher ratios may have revealed some interaction between the parameters of interest. For example, Johnstone and Alsop (1999) found lower estimates of discriminability (log d) for equal reinforcer ratios compared to unequal reinforcer ratios with nonhuman subjects (pigeons). It is unclear whether this would also have occurred had an equal punisher ratio (1:1) been included in the present experiment. However, Lie and Alsop (2009) arranged four punisher ratios (5:1, 2:1, 1:2, 1:5) in a similar procedure, with participants drawn from the same pool as the present study, and found no significant differences in discriminability across the four ratios. This perhaps suggests that the inclusion of additional punisher ratios in the present study would have yielded similar results to Lie and Alsop (2009).
Another drawback of the limited number of conditions in the present study was that estimates of sensitivity to the punisher ratios (ap) could only be calculated across two punisher ratios for each group. The present study was designed to be similar to Johnstone and Alsop's (2000) study, where only between-group comparisons could be made between their two reinforcer ratios across different stimulus disparity levels. However, participants within each group in the present study experienced both punisher ratios, making it possible to evaluate within-subject changes in response bias between the two punisher ratios. Doing so in a standard way (i.e., calculating sensitivity to punishment estimates) allowed us to make tentative comparisons between the results of the present study and those of previous studies. Because of the limited number of conditions, however, sensitivity estimates obtained from the present study should be interpreted with caution.
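With only two punisher ratios per group, the sensitivity estimate reduces to a two-point slope: the change in response bias (log b) between the two conditions divided by the change in the log punisher ratio. A minimal sketch of that calculation (the function name and the P2/P1 orientation of the ratio are our assumptions for illustration):

```python
import math

def ap_two_point(log_b_1, log_b_2, punisher_ratio_1, punisher_ratio_2):
    """Two-point estimate of sensitivity to the punisher ratio (ap):
    the slope of log b against log10(P2/P1) across two conditions.
    punisher_ratio arguments give P2/P1 in each condition."""
    return (log_b_1 - log_b_2) / (
        math.log10(punisher_ratio_1) - math.log10(punisher_ratio_2))
```

For Group 3, for example, the two conditions correspond to P2/P1 = 3 and P2/P1 = 1/3, so the denominator is 2·log10(3) ≈ 0.95; for Group 11 it is 2·log10(11) ≈ 2.08, which is why equal bias shifts imply smaller ap values for that group.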
More generally, it is unclear why some studies, and not others, have found a relation between sensitivity (or response bias) and discriminability. The present study and those mentioned above (Alsop & Davison, 1991; Alsop & Porritt, 2006; Johnstone & Alsop, 2000; McCarthy & Davison, 1984) all employed controlled reinforcer procedures (i.e., dependent or interdependent scheduling), yet obtained mixed results. Other detection studies using controlled reinforcer procedures have found decreases in sensitivity with increases in discriminability (Davison & McCarthy, 1987; Godfrey & Davison, 1998; Nevin, Cate, & Alsop, 1993), but also no relation between sensitivity and discriminability (Godfrey & Davison, 1998). Studies that have arranged reinforcers using uncontrolled procedures (where the obtained reinforcer ratios can co-vary with the subject's behavior, i.e., independent scheduling) have also yielded conflicting results. Again, some of these studies have found decreases in sensitivity with increases in discriminability (Johnstone & Alsop, 2000; McCarthy & Davison, 1984), whereas others have found no such relation between the two parameters (e.g., McCarthy & Davison, 1980). Thus, although the present experiment found no relation between sensitivity and discriminability using a controlled punisher ratio procedure, it seems that procedural variations can influence the findings obtained. In fact, designing an experiment to adequately test the relation between discriminability and response bias appears to be quite a challenge. As described by Nevin (1984), any such experiment needs to arrange conditions which demonstrate that: (1) log d does not vary systematically across changes in the reinforcer (or punisher) ratio; (2) increases in stimulus disparity lead to consistent increases in log d; and (3) the effects of varied reinforcer (or punisher) ratios (i.e., log b, a, or ap) are unchanged by variations in log d. Furthermore, these effects need to be demonstrated across a number of stimulus modalities as well as species. Given the mixed findings described above, it appears that this has not yet been achieved.
The present experiment also calculated sensitivity using the modified Davison and Tustin (1978) equations proposed by Lie and Alsop (2007, 2009). Sensitivity to the combined additive (ar+p) and subtractive (ar−p) effects of reinforcement and punishment was calculated using Equations 7 and 8; however, the resulting estimates were not particularly sensible. In fact, the ranges of sensitivity estimates (Additive: −3.44 to 10.58; Subtractive: −1.95 to 8.42) were well outside the usual range of sensitivity to reinforcement estimates (a) obtained from human detection experiments that studied the effects of reinforcers alone (e.g., −0.01 to 0.58; Lie & Alsop, 2009). These additive and subtractive model fits were conducted under the assumption that q (the scaling parameter) was equal to 1 for all participants. While this assumption appeared reasonable because the reinforcers and punishers were equal in physical value (i.e., a one-point gain vs. a one-point loss), it is possible that their subjective values differed across participants. Had we allowed q to vary as well as ar+p (for the additive model) or ar−p (for the subtractive model), more sensible sensitivity estimates might have been obtained. However, with only two conditions in the present experiment (and two free parameters), this was not possible. Nevertheless, this was the first attempt to fit the additive and subtractive models to empirical data. Future experiments arranging more conditions could explore the quantitative predictions of the additive and subtractive punishment models further.
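Equations 7 and 8 are not reproduced in this section. As a rough sketch of the general form that additive and subtractive punishment terms take in generalized-matching-style detection models (an illustration of how the scaling parameter q enters, not necessarily Lie and Alsop's exact expressions), the S1-trial bias term can be written with the obtained reinforcers R_i and punishers P_i for each alternative:

```latex
% Additive form: punishers treated as adding to the
% reinforcement of the other response alternative.
\log\frac{B_{11}}{B_{12}} = \log d
  + a_{r+p}\,\log\frac{R_{1} + qP_{2}}{R_{2} + qP_{1}} + \log c

% Subtractive form: punishers treated as subtracting from the
% reinforcement of the same response alternative.
\log\frac{B_{11}}{B_{12}} = \log d
  + a_{r-p}\,\log\frac{R_{1} - qP_{1}}{R_{2} - qP_{2}} + \log c
```

Setting q = 1, as in the present fits, weights each punisher equally with each reinforcer inside the bracketed ratio; allowing q to vary would let the subjective value of a point loss differ from that of a point gain, at the cost of a second free parameter.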
Although the integration of punisher effects into Davison and Tustin's (1978) well-known behavioral model appears to be the most logical first step, Alsop and Davison (1991; see also Davison & Nevin, 1999) have also proposed a competing detection model based on the discriminability of stimulus–response (ds) and response–reinforcer (dr) relations. The Alsop and Davison model has been somewhat successful in capturing the effects of reinforcers independent of stimulus discriminability (Davison & Nevin, 1999; Godfrey & Davison, 1998). However, as noted by Lie and Alsop (2009), it is unclear how the Alsop and Davison model could be extended to include punisher as well as reinforcer effects. For example, would separate stimulus–response and response–reinforcer discriminability parameters be required for reinforcers and punishers? Because the present experiment found no relation between punisher effects (log b and ap) and stimulus effects (log d) using the Davison–Tustin model, and because punisher effects are more easily integrated into the Davison–Tustin model than into the Alsop–Davison model, it seems more parsimonious to use the former at this stage.
Finally, the present experiment was not without limitations. Although an effort was made to equate the overall reinforcer and punisher rates across stimulus disparity levels, this proved to be quite difficult, especially in the high disparity conditions. In fact, significant differences in overall reinforcer rates were found across the three stimulus disparity levels for Group 3, F(2,46) = 29.12, p < .001, and Group 11, F(2,26) = 19.98, p < .001. Similarly, significant differences in overall punisher rates were also found, Group 3: F(2,46) = 10.96, p < .001; Group 11: F(2,46) = 9.399, p < .001. While previous research has found that changes in overall reinforcer rates can affect sensitivity to reinforcement in concurrent-schedule procedures (Alsop & Elliffe, 1988), a wide range of reinforcer rates was used to demonstrate the effect (i.e., from 0.22 to 10 reinforcers per min). In the present experiment, mean reinforcer rates for each disparity level ranged from 2.3 to 2.7 reinforcers per min and mean punisher rates ranged from 0.60 to 1.08 punishers per min. Thus, it seems quite unlikely that such small changes in overall reinforcer and punisher rates would affect sensitivity estimates.
Because of the relatively low rates of punishment, there were eight sessions (out of a total of 144) in which a few participants received no punishers for responding on one response alternative (see Appendix). In all eight cases, participants showed a bias towards responding on the alternative where they received no punishers, suggesting that very few punishers may be necessary to influence choice behavior in this task. Upon closer inspection, seven of the eight sessions were high disparity conditions where participants were very accurate (discriminability M = 1.01) and thus made few errors. This is a difficulty inherent in arranging high disparity conditions in detection procedures that punish incorrect responses. Future studies on the effects of punishment in detection procedures should take this into consideration and either arrange a greater number of trials per condition, or make the stimuli in their highest disparity condition(s) less disparate, so that participants come into adequate contact with the punisher contingencies.
The present experiment is the first to examine the relation between punisher control and stimulus control in signal-detection procedures. While studies of signal detection have largely focused on the effects of positive outcomes for correct responses, it is also important to study the effects of negative outcomes for errors because organisms encounter both types of outcomes in many everyday situations. The present study thus provides a possible direction for future research on the effects of negative outcomes on human (as well as nonhuman) behavior in situations of uncertainty.
This research was conducted as part of the first author's doctoral thesis, supported by a University of Otago Postgraduate Scholarship. Portions of these data were presented at the 30th Annual Conference of the Society for the Quantitative Analyses of Behavior in San Diego, California.