
# Thomas G. Brown, Ph.D. - Operant Methodology: Discrete Trial, Free Operant, Basic Contingencies, Instinctive Drift



*Lecture date: 10/2/08*

## Operant methodology

- Discrete trial
- Free operant
- Basic contingencies
- Instinctive drift

## Schedules of reinforcement

Constant schedules of reinforcement: continuous reinforcement (CRF) and extinction (EXT).

- Every response is treated the same: all reinforced (CRF) or none reinforced (EXT).

Intermittent schedules of reinforcement:

- Sometimes a response is reinforced; not every time, but sometimes.
- Four simple ones: FR, FI, VR, VI.

Each schedule produces characteristic patterns: rates of responding and pauses.

What do we need to control for? Reinforcement density: frequency and magnitude. Two methods:

- Yoked control
- Within-subject design

Yoked control uses two chambers, with a VR schedule in one and a VI schedule in the other. The VI is determined by the VR: each time the animal gets food on the VR, the next response in the other chamber is reinforced. This guarantees equal reinforcement (rft) distributions while keeping the different contingencies in place.

- Responding is faster on ratio schedules: Ferster and Skinner (1957) found roughly 2-3 responses/sec on VR versus 1/sec on VI.

Within-subject design: alternate a VR component (green) and a VI component (red); each VI period is determined by the previous VR period (how long it took). Again, VR is faster than VI (Herrnstein, 1964). Why?
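The four simple schedules (FR, FI, VR, VI) described above can be sketched as minimal decision rules. This is a sketch under my own naming and simplifying assumptions (VR modeled as a per-response probability of 1/n, VI with exponentially distributed intervals of mean t); each `respond` call returns True when that response is reinforced:

```python
import random

class FixedRatio:
    """FR n: every n-th response is reinforced."""
    def __init__(self, n):
        self.n, self.count = n, 0
    def respond(self):
        self.count += 1
        if self.count >= self.n:
            self.count = 0
            return True
        return False

class VariableRatio:
    """VR n: each response is reinforced with probability 1/n (mean ratio n)."""
    def __init__(self, n):
        self.p = 1.0 / n
    def respond(self):
        return random.random() < self.p

class FixedInterval:
    """FI t: the first response at least t seconds after the last
    reinforcement (or session start) is reinforced."""
    def __init__(self, t):
        self.t, self.available_at = t, t
    def respond(self, now):
        if now >= self.available_at:
            self.available_at = now + self.t
            return True
        return False

class VariableInterval:
    """VI t: like FI, but each successive interval is drawn with mean t."""
    def __init__(self, t):
        self.t = t
        self.available_at = random.expovariate(1.0 / t)
    def respond(self, now):
        if now >= self.available_at:
            self.available_at = now + random.expovariate(1.0 / self.t)
            return True
        return False

# On FR 10, 100 responses earn exactly 10 reinforcers, whatever the pace.
fr = FixedRatio(10)
print(sum(fr.respond() for _ in range(100)))  # 10
```

The contingency difference is visible in the code: the ratio classes only count responses, while the interval classes only consult the clock.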
Why is responding faster on ratio schedules?

In ratio schedules:

- There is differential reinforcement for higher rates.
- Work faster and get food sooner.

In interval schedules:

- There is no such differential reinforcement.
- Working faster doesn't result in sooner food.

This is a molar-level analysis. It explains the "pressure" to respond faster on ratios, but there is also "pressure" to respond more slowly on interval schedules.

Inter-response time (IRT): the time between responses. A longer IRT means slower responding.

Assume an FI 60" schedule. What is the probability of rft if the IRT is 15 seconds? If 30 seconds? If 60 seconds? The longer the IRT, the more likely the interval has elapsed when the response occurs, so the "pressure" is to lengthen the IRT: to slow down and increase rft probability.

Now assume an FR 60 schedule. What is the probability of rft if the IRT is 15 seconds? If 30 seconds? If 60 seconds?
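A quick way to see the FI answer above is to simulate an animal responding with a constant IRT on FI 60" (the function name and setup are my own sketch, not from the lecture): with a 15 s IRT one response in four is reinforced, with 30 s one in two, and with 60 s every response.

```python
def rft_probability_fi(interval: float, irt: float) -> float:
    """Fraction of responses reinforced on an FI schedule when the
    animal responds with a constant inter-response time (IRT)."""
    responses = reinforced = 0
    t = 0.0
    next_setup = interval        # reinforcement becomes available here
    for _ in range(100_000):
        t += irt                 # wait one IRT, then respond
        responses += 1
        if t >= next_setup:      # first response after setup collects rft
            reinforced += 1
            next_setup = t + interval  # the interval restarts at collection
    return reinforced / responses

print(rft_probability_fi(60, 15))   # 0.25 - one response in four
print(rft_probability_fi(60, 30))   # 0.5  - one response in two
print(rft_probability_fi(60, 60))   # 1.0  - every response reinforced
```

Lengthening the IRT raises the per-response probability of reinforcement, which is exactly the molecular "pressure" to slow down on interval schedules.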
On FR, there is no "pressure" to lengthen the IRT: slowing down neither increases nor decreases rft probability for a given response.

Two levels of analysis:

- Molar-level analysis: overall rate of responding.
- Molecular-level analysis: response-to-response analysis, using the inter-response time (IRT).

Molar-level analysis:

- In ratio schedules: work faster and get food sooner.
- In interval schedules: working faster doesn't result in sooner food.
- So there is "pressure" to respond faster on ratios.

Molecular-level analysis:

- In interval schedules: slower responding (longer IRT) increases rft probability.
- In ratio schedules: slower responding does not increase the probability of reinforcement.
- So there is "pressure" to slow down on intervals.

Other schedules:

- Fixed-Time or Variable-Time (FT or VT): reinforcement delivered on a time basis, without regard to responding.
- Limited hold: a feature of interval schedules; raises the response rate.
- Response-rate (spaced-responding) schedules:
  - DRH: differential reinforcement of high rates of responding (x responses in x seconds).
  - DRL: differential reinforcement of low rates of responding (at most one response per unit of time).

| Schedule | During the 60 seconds | Reinforcement |
|---|---|---|
| DRL 60" | No responses, or the clock resets | One response results in rft |
| FI 60" | Responses irrelevant | One response results in rft |
| FT 60" | Responses irrelevant | Rft results without a response |

Goal: to reflect reality. How can we make the artificial chamber more closely resemble the real world?
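The FR case can be simulated the same way (again, a sketch with my own function name). Note that elapsed time is tracked but never consulted by the schedule, so the per-response probability of reinforcement is 1/60 no matter the IRT:

```python
def rft_probability_fr(ratio: int, irt: float) -> float:
    """Fraction of responses reinforced on an FR schedule for a constant
    IRT. Time passes, but the schedule only counts responses."""
    responses = reinforced = count = 0
    t = 0.0
    for _ in range(60_000):
        t += irt                 # time passes, but is never consulted
        responses += 1
        count += 1
        if count == ratio:       # every `ratio`-th response is reinforced
            reinforced += 1
            count = 0
    return reinforced / responses

# Same answer at any pace: 1/60 per response.
print(rft_probability_fr(60, 15) == rft_probability_fr(60, 60))  # True
```

This is the molecular-level contrast in miniature: on intervals, a longer IRT buys a higher per-response rft probability; on ratios, it buys nothing.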
## Combinations of schedules

- Life is messy (but lawful).
- Compound and complex schedules; concurrent schedules: a more robust environment. More later.

## Post-reinforcement pause

Following FR and FI reinforcement there is a pause: the post-reinforcement pause (PRP), also called the pause after reinforcement (PAR). Just after reinforcement, the probability of reinforcement is zero, with regard to the one response (usually) we're looking at.

## Non-contingent reinforcement

Skinner's "'Superstition' in the Pigeon": a fixed-time schedule.

- Non-contingent reinforcement: reinforcement not dependent on a response.

Staddon and Simmelhag (1971):

- Monitored multiple activities (16).
- Terminal responses and interim responses.

## Adjunctive behavior

Focus on facilitated behavior occurring just after the pause, but before food responding renews.

- Drinking tube: schedule-induced polydipsia.
- Wheel: schedule-induced wheel running.
- Target: schedule-induced aggression (attack).
- Neutral stimuli: schedule-induced escape.
- Inedible material: schedule-induced pica.

Properties of adjunctive behavior:

- A functional class of behaviors, seen in multiple species.
- Engendered by intermittent schedules; occurs during the post-reinforcement pause (sort of).
- Bitonic function: there is an effective range of inter-reinforcement values.
- Shows effects of satiation or reduced deprivation.
- The behavior itself has reinforcing properties.
- A response-reinforcer dependency is not needed.

Why does it occur?

- Aversive and frustrative properties of intermittent schedules; stimuli paired with long periods of unreinforced responding.
- Interrupted feeding (but the timing is late).
- Food delivery also signals the beginning of a period of low probability of reinforcement.

Tests with other fluids:

- Ethanol: suppression of drinking?
- Sucrose: facilitation of drinking?
- Aspartame vs. saccharin.

## Theories of learning and motivation

Back to square one:

- Pavlov's biological theory
- Thorndike's psychological theory
- Skinner's behavioral theory
- Hull's drive theory of behavior (drive-stimulus reduction theory)
- Bandura's theories of imitation and social reinforcement
- Inside the brain: ESB (Olds and Milner)
- The Premack Principle

## Pavlov's biological theory

Pavlov focused on biologically meaningful stimuli that produce reflexes.

- Humans and dogs struggle against confinement: the "freedom reflex."
- Appetitive UCSs like foods and aversive (defensive) UCSs like sour both produce salivation.
- Aversive UCSs like shock produce leg flexion.
- All enter into associations.

Animals are motivated to survive. Reflexes are physiological adaptations that promote survival. Neutral stimuli paired with innate reflexes alter brain connections so that reflexes occur to formerly neutral stimuli. Usually conditioned reflexes are (also) adaptive in that they promote well-being and survival.

## Thorndike's psychological theory

An early theory of human motivation. Stimuli were labeled based on their perceived effect:

- Satisfiers
- Annoyers
- Neutral

Satisfiers "stamp in" the immediately preceding responses (and connections). So, innately satisfying and annoying events provide the motivation for learning; learned responses are the inevitable result of satisfiers and annoyers, via the Law of Effect.

## Pavlov and Thorndike compared

- Both: satisfactions of appetite and freedom stand in opposition to annoyances of aversive events.
- Both: positive and negative events support parallel systems for learning new responses.
- Pavlov studied physiological responses; Thorndike studied instrumental responses.
- Similar or different? Atheoretical?
## Skinner's behavioral theory

Skinner was a strict behaviorist: questions related to motivation are unnecessary and undesirable.

- Learning is a hypothetical construct. To say an animal is motivated to learn and has memory for what was learned doesn't add anything to the observation that behavior changed as a result of experience.
- A description of behavior change based on experimental operations is a sufficient explanation.
- The "empty organism": inputs and outputs. An atheoretical position intended to make analysis more rigorous and scientific.
- Reinforcers and punishers are defined by operations and their resulting effects (a 2 x 2 matrix).
- Pavlov and Thorndike spoke of innately pleasurable (and aversive) stimuli; Skinner makes no such requirement. It is a purely operational definition. Circularity?

## Hull's drive theory

Hull was a neobehaviorist, dissatisfied with the S-R approach.

- Better to speak of drives and need states that are met through learning. Learning is driven (motivated) by the necessity of meeting physiological demands.
- S-O-R: organismic variables like thirst and hunger. No empty "black box" a la Skinner.
- Food or water deprivation sets up conditions of physiological need, which are translated into motivated behavior by specific drive states.
- Drive reduction theory: homeostatic balance; reducing a drive reinforces the behavior instrumental in securing the food or water.

A system of quantification: sER = sHR x D x V x K - (IR + sIR). Each term is operationally defined, e.g., D in terms of hours of deprivation, K in terms of reward magnitude.

Acquired incentives:

- Stimuli are paired with innate incentives (higher-order conditioning).

No drive, no behavior. To handle this zero problem:

- Non-nutritive saccharin provides for learning. Saccharin is a drive-stimulus; reducing the drive associated with it is reinforcing.
- Stimulation or satiation? Male rats prevented from ejaculating: sexual excitement itself is reinforcing.

Hull had a profound impact, distinguishing learning variables vs.
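Hull's equation, flattened in the notes above, can be written out as follows (this reconstruction and the symbol glossary follow standard presentations of Hull's system, not the lecture itself):

```latex
{}_{S}E_{R} = {}_{S}H_{R} \times D \times V \times K - (I_{R} + {}_{S}I_{R})
```

where sER is reaction potential (the tendency to respond), sHR is habit strength, D is drive (e.g., hours of deprivation), V is stimulus-intensity dynamism, K is incentive motivation (e.g., reward magnitude), IR is reactive inhibition, and sIR is conditioned inhibition.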
performance variables.

Evaluation of Hull:

- Hypothetico-deductive model: data were used to generate a formal, testable behavior theory.
- The theory was incomplete, premature, and overly ambitious, but commendable; drive reduction and acquired incentives were major contributions.

Saccharin and foreplay are reinforcing. Not surprising: many activities are. After primary needs are met, most activities we do are pleasurable even though our immediate survival doesn't depend on them.

Exercise: label behaviors as innately reinforcing or acquired through experience with the environment. Acquired motivation: the behaviors in the list give pleasure, but are not survival-based.

## Bandura's theories of imitation and social reinforcement

Social approval is powerful, but not innate or drive-reducing (a drive-stimulus?). Bandura's theory of imitation and social reinforcement was proposed to account for modeling behavior in children.

- Considers non-learning accounts of behavior.
- Argues much behavior is imitative and persists in the absence of the overt reinforcers described by Skinner and Thorndike.
- Proposes a social reinforcement theory.

Imitation:

- Stick out your tongue at an older infant. Instinctive?
- The imitated response can then be reinforced.
- Observational learning.
- From my perspective, social reinforcers are a sub-set of secondary reinforcers: smiling, paying attention, saying "good."

## The Premack Principle

Premack de-emphasizes drive and drive-stimulus reduction, and proposes that engaging in pleasurable activities is itself a reinforcing event.

Previously, a reinforcer was a stimulus event:

- Pavlov's UCS
- Thorndike's satisfier
- Hull's need-satisfying food

Now, for Premack, the effects of food can't be separated from the behavior of eating food. Both the food and the eating of food occurred during reinforcement; and eating is the reinforcer, not the food.

Experiments showed that not all behaviors are equally reinforcing, and that this varies at different times.

- Running and drinking example: manipulate through deprivation, which changes the probability of each behavior.
- The more probable of two responses will reinforce the less probable response. And this is reversible.
- If hungry, most behaviors are less likely than eating, so eating reinforces almost all other behaviors. Similarly, if thirsty …
- Children, candy vs. pinball: the preferred activity would reinforce the other, not the reverse, even though both are pleasurable. Behaviors may reverse solely because of time of day.

The principle extends our understanding of reinforcement: engaging in certain behaviors can be reinforcing. Applications in behavior modification.

## A common reinforcer?

Assume almost all behaviors humans engage in are being maintained by a common process of reinforcement. What is the nature of this common reinforcer? Pleasure? We need to look inside the "black box": Olds and Milner, 1954.
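The relational form of the Premack principle above ("the more probable of two responses will reinforce the less probable") can be sketched as a tiny predicate over measured baseline probabilities. The function name, helper, and numbers below are hypothetical illustrations, not from the lecture:

```python
# Hypothetical sketch of the Premack principle. `baseline` maps each
# activity to the fraction of free time spent on it (its momentary
# probability); any activity more probable than the target should be
# usable as a reinforcer for the target.
def premack_reinforcers(baseline: dict, target: str) -> list:
    p_target = baseline[target]
    return [b for b, p in baseline.items() if p > p_target]

# A food-deprived rat: eating is most probable, so it can reinforce
# the less probable behaviors (and not the reverse).
baseline = {"eating": 0.60, "drinking": 0.25, "running": 0.15}
print(premack_reinforcers(baseline, "running"))  # ['eating', 'drinking']
print(premack_reinforcers(baseline, "eating"))   # []
```

Reversibility falls out of the representation: change the baseline (e.g., by deprivation or time of day) and the set of effective reinforcers changes with it.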

*This note was uploaded on 10/24/2008 for the course PSY 351, taught by Professor Brown during the Spring '08 term at Utica.*
