
# Question sheet 7a - 830:311H CONDITIONING AND LEARNING


830:311H CONDITIONING AND LEARNING
Question Sheet 7, Chapter 7 (pp. 231-250)

**Terms:** cumulative response, cumulative record, fixed ratio, variable ratio, CRF, fixed interval, variable interval, scalloping, postreinforcement pause, chain schedule (a complex schedule), link, cycle, concurrent schedule, Herrnstein matching law, reinforcement density, self-control, commitment, concurrent chain schedule, choice link, delay of gratification, value discounting function, reward discounting, tandem, mixed, multiple, and chain schedules

**Essay:**

1. Prepare a 2 x 2 table showing the simple reinforcement schedules. Draw on a single set of axes the cumulative response rates expected for each schedule. Give an example of each.
   - Table on p. 237. Draw what Rovee drew for us at the review.
   - Fixed ratio: on an FR2 schedule, the animal is reinforced for every second response.
   - Variable ratio: on VR4, the average number of responses required to earn a reward is 4; the first reward may come after 2 responses, the next after 6. Slot machines are an example.
   - Fixed interval: on FI 1 min, the first response emitted after one minute has elapsed is rewarded.
   - Variable interval: on VI 1 min, the intervals vary around a one-minute average; the first reward may be earned after 30 seconds, the next after 90 seconds.

2. Why does scalloping occur? Why is it adaptive? What schedule is it associated with?
   - Scalloping is adaptive because reward receipt resets the interval, so it would be biologically and temporally inefficient to make the most responses at the beginning of a fixed-interval period, when only one response earns the reward.
   - If the animal learns the timeout point, and that a response then produces a reward, the reward becomes the new CS and the interval is the US.
   - In scalloping, behavior begins to reflect the timing of the reinforcers: the animal responds only a little after each reward, and responding increases in anticipation of the next reward.
   - Scalloping occurs because the animal is learning to time the interval; the elapsed time becomes a conditional reinforcer, and receiving the reward is the cue to restart the interval.
   - It is associated with the fixed interval schedule.
   - It is adaptive because the animal learns the time between rewards and responds heavily only when it expects the reward to come. This saves time and energy, and the animal can do other things while waiting for the reward.

4. Why are DRL and DRH schedules called response density schedules? Indicate how they differ from simple reinforcement schedules. Draw a diagram that illustrates each of them, and give an example of each. What is a DRO schedule?
   - They are called response density schedules because reinforcement depends on the interval, or time, between two successive responses.
   - In DRH (differential reinforcement of high rates), a short interval between successive responses is required; in DRL (differential reinforcement of low rates), a long interval is required.
   - DRH leads to a high, or dense, rate of responses because a reward is earned only if the time between responses is short enough.
   - They differ from simple reinforcement schedules because two responses are required, so you can look at the time between them.
   - See diagram in notes; an example is a shopping spree.
   - DRL leads to a low rate of responses because a reward is earned only if the time between...
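The schedule rules described above can be expressed as simple predicates over a list of response times. This is a minimal Python sketch, not from the text: the function names and the list-of-booleans representation (`True` where a response is reinforced) are my own, chosen only to make the FR/FI and DRH/DRL rules concrete.

```python
def reinforced_fr(response_times, n):
    """FR n: every n-th response is reinforced, regardless of timing."""
    return [(i + 1) % n == 0 for i in range(len(response_times))]

def reinforced_fi(response_times, interval):
    """FI: the first response made after `interval` seconds have elapsed
    since the last reward (or session start) is reinforced; the reward
    resets the interval."""
    out, last_reward = [], 0.0
    for t in response_times:
        if t - last_reward >= interval:
            out.append(True)
            last_reward = t  # reward receipt restarts the interval
        else:
            out.append(False)
    return out

def reinforced_drh(response_times, max_irt):
    """DRH: a response is reinforced only if the inter-response time
    (IRT) since the previous response is short enough."""
    out = [False]  # the first response has no IRT to judge
    for prev, t in zip(response_times, response_times[1:]):
        out.append(t - prev <= max_irt)
    return out

def reinforced_drl(response_times, min_irt):
    """DRL: a response is reinforced only if the IRT is long enough."""
    out = [False]
    for prev, t in zip(response_times, response_times[1:]):
        out.append(t - prev >= min_irt)
    return out
```

For example, with response times `[1, 2, 3, 10, 11]` and a 5-second interval, `reinforced_fi` marks only the fourth response (at t = 10) as reinforced, since it is the first response after the interval has elapsed; the same times under DRL with a 5-second minimum IRT reinforce only that fourth response, while DRH with a 1.5-second maximum IRT reinforces every response whose gap from the previous one is short.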

## This note was uploaded on 04/05/2008 for the course PSYCH 311 taught by Professor Rovee-Collier during the Spring '06 term at Rutgers.

