Journal of the Experimental Analysis of Behavior
-
In five experiments we studied the effects on pigeons' key pecking of the location of four or more successive response-dependent reinforcers embedded in a schedule arranging otherwise response-independent reinforcers. In Experiment 1, high local response rates early in the session extended farther into the session as the number of response-dependent reinforcers at the beginning of the session increased. A block of four successive response-dependent reinforcers then was scheduled at the beginning, middle, or end of the session (Experiment 2), resulting in higher local response rates at those times in the session when the response-dependent reinforcers were arranged. ⋯ In Experiment 5, responding early in the session had no consequence other than allowing access to the schedule of response-independent food delivery. As in the first experiment, local rates generally were higher early in the session. The results indicate that the location of response-reinforcer dependencies precisely controls behavior and that such effects often are not captured by descriptions of behavior in terms of overall response rates.
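As an illustration only (not the authors' actual procedure), the minimal sketch below shows one way a block of response-dependent reinforcers could be positioned within a session of otherwise response-independent reinforcers; the session length of 24 reinforcers and the block size of 4 are hypothetical values chosen for the example.

```python
# Illustrative sketch of the session structure: a block of response-dependent
# reinforcers placed at the beginning, middle, or end of a session whose
# remaining reinforcers are delivered independently of responding.
# All parameter values here are assumptions for the example.

def session_plan(n_reinforcers=24, block_size=4, block_position="beginning"):
    """Return, for each reinforcer in the session, whether it is
    response-dependent (requires a key peck) or response-independent."""
    starts = {
        "beginning": 0,
        "middle": (n_reinforcers - block_size) // 2,
        "end": n_reinforcers - block_size,
    }
    start = starts[block_position]
    return [
        "response-dependent" if start <= i < start + block_size
        else "response-independent"
        for i in range(n_reinforcers)
    ]

if __name__ == "__main__":
    for position in ("beginning", "middle", "end"):
        plan = session_plan(block_position=position)
        print(position, "block starts at reinforcer", plan.index("response-dependent"))
```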
-
The effects of two alternative sources of food delivery on the key-peck responding of pigeons were examined. Pecking was maintained by a variable-interval 3-min schedule. In different conditions, either a variable-time 3-min schedule that delivered food independently of responding or an equivalent schedule that required a minimum 2-s pause between a key peck and food delivery (a differential-reinforcement-of-other-behavior schedule) was added to this baseline. ⋯ Response rates and the median delay between responses and reinforcers were negatively correlated. These results contradict earlier conclusions about the behavioral effects of alternative reinforcement. They suggest that an interpretation in terms of response-reinforcer contiguity is consistent with the data.
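To make the contrast between the two added schedules concrete, here is a minimal sketch (the event-loop framing and example times are assumptions, not the authors' apparatus code) of the difference between variable-time delivery and the 2-s pause requirement described above.

```python
# Sketch of the two added schedules: a variable-time (VT) schedule delivers
# food as soon as its interval elapses, regardless of responding, whereas the
# DRO-like variant additionally requires at least 2 s without a key peck.

PAUSE_REQUIREMENT = 2.0  # seconds since the last key peck (from the abstract)

def food_due_vt(now, interval_end, last_peck_time):
    # Response-independent: food is delivered once the interval has ended.
    return now >= interval_end

def food_due_dro(now, interval_end, last_peck_time):
    # Food is withheld until at least 2 s have elapsed since the last peck,
    # so food delivery can never closely follow a key peck.
    return now >= interval_end and (now - last_peck_time) >= PAUSE_REQUIREMENT

# Example: interval ended at t = 100 s, last peck occurred at t = 99.5 s.
print(food_due_vt(100.0, 100.0, 99.5))   # True  (food despite the recent peck)
print(food_due_dro(100.0, 100.0, 99.5))  # False (delivery held back by the pause rule)
```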
-
The effects on pigeons' key pecking of unsignaled delays of reinforcement and response-independent reinforcement were compared after either variable-interval or differential-reinforcement-of-low-rate baseline schedules. One 30-min session arranging delayed reinforcement and one 30-min session arranging response-independent reinforcement were conducted daily, 6 hr apart. ⋯ Under both schedules, response rates were lower when obtained delays were greater. These results bear on methodological and conceptual issues in comparing contingencies that change temporal response-reinforcer relations.
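The "obtained delay" referred to here is conventionally measured as the time between a food delivery and the response that most recently preceded it; a minimal sketch of that measurement follows, with invented event times used only for illustration.

```python
# Sketch of computing obtained response-reinforcer delays from event records.
# The event times below are invented; the definition (time from the most
# recent peck to the food delivery) is the conventional one, assumed here.

def obtained_delays(peck_times, food_times):
    delays = []
    for food in food_times:
        preceding = [p for p in peck_times if p <= food]
        if preceding:
            delays.append(food - max(preceding))
    return delays

pecks = [1.0, 4.0, 9.75, 15.5]   # seconds into the session (hypothetical)
foods = [5.0, 16.0]
print(obtained_delays(pecks, foods))  # [1.0, 0.5]
```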
-
The operant conditioning of response variability under free-operant and discrete-response procedures was investigated. Two pigeons received food only if their pattern of four pecks on two response keys differed from the patterns emitted on the two immediately preceding trials. Under the free-operant procedure, the keys remained illuminated and operative throughout each trial. ⋯ Variability increased under this procedure, and the pigeons obtained three fourths of the available reinforcers. Previous successes and failures to produce response variability may have been due, respectively, to the use or nonuse of a discrete-response procedure. Respondent effects inherent in the free-operant procedure may encourage the development of response stereotypy and, in turn, prevent the development of response variability.
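The variability requirement is a lag-type criterion: the current four-peck sequence across the two keys must differ from each of the two immediately preceding sequences. A minimal sketch of that check is given below; the key labels and trial patterns are made up for illustration, and it is assumed (per the abstract) that every emitted pattern, reinforced or not, enters the comparison window.

```python
# Sketch of the lag-2 variability criterion: a trial's four-peck pattern on
# the left (L) and right (R) keys earns food only if it differs from the
# patterns emitted on the two immediately preceding trials.

from collections import deque

def make_criterion(lag=2):
    recent = deque(maxlen=lag)

    def reinforce(pattern):
        """pattern: a string such as 'LRLL' recording the four pecks."""
        earned = pattern not in recent
        recent.append(pattern)  # every emitted pattern enters the window
        return earned

    return reinforce

check = make_criterion()
print(check("LLRR"))  # True  (no preceding trials to match)
print(check("LLRR"))  # False (repeats the immediately preceding trial)
print(check("LRLR"))  # True  (differs from both preceding patterns)
print(check("RRLL"))  # True  (differs from both preceding patterns)
```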
-
In a discrete-trial procedure, pigeons could choose between 2-s and 6-s access to grain by making a single key peck. In Phase 1, the pigeons obtained both reinforcers by responding on fixed-ratio schedules. In Phase 2, they received both reinforcers after simple delays, arranged by fixed-time schedules, during which no responses were required. ⋯ Varying the size of the schedule for the 2-s reinforcer across conditions yielded several such indifference points from both fixed-time and fixed-ratio conditions. The resulting "indifference curves" from the fixed-time and fixed-ratio conditions were similar in shape, suggesting that a hyperbolic equation describes the relation between ratio size and reinforcement value as well as the relation between reinforcer delay and reinforcement value. The results from Phase 3 showed that subjects chose fixed-time schedules over fixed-ratio schedules that generated the same average times between a choice response and reinforcement.
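The abstract does not state the equation itself; the hyperbolic-decay form most commonly cited in this literature is V = A / (1 + KD), where A is reinforcer amount (here, access time), D is the delay (or, by analogy, the ratio size), and K is a free parameter. Treating that form as an assumption, the sketch below shows how it predicts an indifference point between a small, short-delay reinforcer and a large, long-delay reinforcer; the parameter values are hypothetical.

```python
# Hedged sketch of the hyperbolic-decay value function V = A / (1 + K*D).
# Whether this is the exact equation fitted in the study is an assumption;
# all numerical values below are hypothetical.

def value(amount, delay, k=1.0):
    return amount / (1.0 + k * delay)

def indifference_delay(small_amount, small_delay, large_amount, k=1.0):
    """Delay to the large reinforcer at which the two values are equal:
    solve A_s / (1 + K*D_s) = A_l / (1 + K*D_l) for D_l."""
    return (large_amount * (1.0 + k * small_delay) - small_amount) / (small_amount * k)

# Example: 2-s vs 6-s access to grain, small reinforcer delayed 5 s, K = 1.
d_large = indifference_delay(2.0, 5.0, 6.0, k=1.0)
print(d_large)                                # 17.0
print(value(2.0, 5.0), value(6.0, d_large))   # equal values at indifference
```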