psych learning
principles of learning exam 2 lecture 20
| Term | Definition |
|---|---|
| Schedule of reinforcement | The response requirement that must be met to obtain reinforcement |
| Ratio schedules | Certain number of responses earns a reinforcer |
| Interval schedules | First response after some amount of time earns a reinforcer. At low response rates, the rate of reinforcement increases along with the response rate; at higher response rates, the extra responding doesn’t increase reinforcement much (or at all) |
| 2 basic types of schedules | Ratio schedules, Interval schedules |
| Fixed ratio (FR) schedules | The number of responses required to earn one reinforcer is always the same (and can be learned by the subject) |
| Variable ratio (VR) schedules | The number of responses needed to earn a reinforcer varies (around some average). As with all ratio schedules, the rate of reinforcement is directly related to the rate of responding (doubling the response rate doubles reinforcement) |
| Fixed-ratio schedules | Tend to cause a distinct pattern: a post-reinforcement pause, during which there is no responding, followed by a steady burst of responding (called a ratio run) until another reinforcer is earned |
| Post-reinforcement pause | A pause in responding that typically occurs after the delivery of the reinforcer on fixed-ratio and fixed-interval schedules of reinforcement |
| Ratio run | A steady burst of responding until another reinforcer is earned |
| Continuous reinforcement schedule | A special version of the fixed ratio schedule in which each specified response is reinforced. Leads to a steady, moderate rate of responding, generally with no post-reinforcement pause. Works ok, but the organism stops responding quickly if you stop giving the reinforcement |
| Intermittent reinforcement schedule | Essentially any schedule other than a continuous reinforcement schedule; these schedules are resistant to extinction |
| Variable ratio (VR) schedules | The subject doesn’t know how many responses are required. Causes rapid responding, with no pauses unless the required number of responses is very high. This is a preferred way to get very high rates of responding |
| Gambling and VR schedules | This is EXACTLY the principle used to get people to continuously (and rapidly) put coins into slot machines: you never know when it will pay out, but the more times you spin, the more likely it is that you will “win” something |
| “Lean” schedule | Many responses required for each reinforcer |
| “Rich” schedules | Few responses required for each reinforcer |
| Ratio strain | A disruption in responding due to an overly demanding response requirement. Making a schedule leaner must be done gradually, or the individual may give up |
| Interval schedules | In interval schedules, the average amount of time (in seconds) to earn a reinforcer is in the name |
| Interval schedules | Ex: An FI5 schedule means that the first response more than 5 seconds after the last reinforcer delivery will be reinforced. Because the schedule is fixed, the interval is 5 seconds every time (so the average is also 5) |
| Fixed interval | The amount of time after each reinforcer delivery until the next reinforcer can be earned is always the same (and the subject can figure it out if they can estimate time intervals) |
| Fixed interval | – Ex: After each reinforcer, 40 seconds need to pass before a response will earn another reinforcer |
| Scallop | Responding is low after each reinforcer and then ramps up as the end of the interval approaches, because subjects have trouble estimating the time but become more sure as it passes. Ex: Cramming for an exam |
| Variable interval (VI) schedule | The amount of time after each reinforcer delivery until the next reinforcer can be earned is varied (but has some average value) – Ex: After each reinforcer, 1-59 seconds need to pass before a response will earn another reinforcer (average of 30) |
| Variable interval (VI) schedule | Causes a nice steady rate of responding without pausing, though slower than a variable ratio schedule. Works ok for training dogs and people. Advantage: because reinforcement is based on an interval, you don’t need to carry the reinforcer around in case they respond really fast |
| Transitioning from continuous reinforcement to intermittent reinforcement schedules | Generally start with a continuous reinforcement schedule for initial learning. It is hard for organisms to learn the operant relationships initially if they aren’t reinforced every time; however, they will quickly stop responding once reinforcement stops |
| Transitioning from continuous reinforcement to intermittent reinforcement schedules | Later, you transition them to a variable schedule, which causes high response rates and is resistant to extinction. VR schedules will cause faster responding, but variable interval and variable ratio schedules will both cause resistance to extinction |
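
The schedule definitions above can be sketched as simple decision rules that answer one question per response: "does this response earn a reinforcer?" This is a hypothetical Python simulation for illustration; the function names and the 1 to 2n−1 draw for the variable ratio are my own assumptions, not part of the lecture.

```python
import random

def fixed_ratio(n):
    """FR-n: every n-th response earns a reinforcer."""
    count = 0
    def respond():
        nonlocal count
        count += 1
        if count == n:
            count = 0
            return True   # reinforcer delivered
        return False
    return respond

def variable_ratio(avg):
    """VR-avg: responses required varies per reinforcer
    (drawn from 1 to 2*avg-1, so the mean requirement is avg)."""
    state = {"count": 0, "required": random.randint(1, 2 * avg - 1)}
    def respond():
        state["count"] += 1
        if state["count"] >= state["required"]:
            state["count"] = 0
            state["required"] = random.randint(1, 2 * avg - 1)
            return True
        return False
    return respond

def fixed_interval(t, clock):
    """FI-t: first response more than t seconds after the
    last reinforcer is reinforced; earlier responses earn nothing."""
    last = {"time": clock()}
    def respond():
        if clock() - last["time"] > t:
            last["time"] = clock()
            return True
        return False
    return respond

# Example: on FR5, only every 5th response pays off
fr5 = fixed_ratio(5)
print([fr5() for _ in range(10)])  # True on responses 5 and 10
```

A variable interval schedule would follow the same shape as `fixed_interval`, but redraw `t` around some average after each reinforcer. Note how, in the ratio functions, responding faster directly delivers reinforcers faster, while in `fixed_interval` extra responses before the interval elapses earn nothing, matching the response patterns described in the table.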