Learning (ch. 6)
Psych 111 Intro to Psych
| Question | Answer |
|---|---|
| A relatively permanent change in behavior that happens because of experience | Learning |
| Type of learning. Involves associations between environmental stimuli and responses. Environment is involved in your learning | Conditioning |
| Behind the idea of Classical Conditioning? | Ivan Pavlov |
| Learning to associate 2 stimuli (make a connection), so that one stimulus comes to produce a response originally produced by the other. | Classical Conditioning |
| Example: learning to walk far from the swings to avoid collisions. | Classical Conditioning |
| The stimulus that naturally evokes the unconditioned response. Ex: food naturally evokes salivation. | Unconditioned Stimulus (US) |
| Automatically produced response that occurs naturally. Ex: salivation | Unconditioned Response (UR) |
| Example: blinking when a puff of air hits the eye. | Unconditioned Response (UR) |
| Originally neutral stimulus that creates a behavior after being paired with an unconditioned stimulus. Ex: tone | Conditioned Stimulus (CS) |
| Example: bell causes blinking | Conditioned Stimulus (CS) |
| Response created by the Conditioned Stimulus. Ex: tone causes salivation. | Conditioned Response (CR) |
| Example: salivate to bell | Conditioned Response (CR) |
| CR and UR are usually ____________? | the same |
| When does conditioning work the best? | if the Conditioned Stimulus (CS) appears before the Unconditioned Stimulus (US) and both stimuli end at the same time. |
| When do subjects acquire a conditioned response? | When a Conditioned Stimulus (CS) is paired with an Unconditioned Stimulus (US) |
| Repeating the Conditioned Stimulus (CS) without the Unconditioned Stimulus (US)? | Extinction |
| Over time the Conditioned Response (CR) disappears? | Extinction |
| Weakening of a Conditioned Response (CR) | Extinction |
| Example: over time, the dog loses the defensive reflex to the metronome when it no longer gets shocks. | Extinction |
| After a response has been extinguished, it may reappear after a period of time with exposure to the Conditioned Stimulus (CS) again. | Spontaneous Recovery |
| Example: the bell stops working for a while, then it works again and the dog salivates | Spontaneous Recovery |
| Process by which a neutral stimulus comes to act as a Conditioned Stimulus (CS) by being paired with another stimulus that already creates a Conditioned Response (CR) | Higher-Order Conditioning |
| More likely to show extinction. | Higher-Order Conditioning |
| This creates a weaker Conditioned Response (CR) | Higher-Order Conditioning |
| Example: food paired with bell, then bell paired with light (goes 1 step further) | Higher-Order Conditioning |
| The tendency to respond to a new stimulus as if it were the original Conditioned Stimulus (CS) | Stimulus Generalization |
| Happens most when the new stimulus resembles the original Conditioned Stimulus (CS) | Stimulus Generalization |
| Example: Little Albert experiment | Stimulus Generalization |
| Example: sudden noise in the middle of the night associated with the alarm clock | Stimulus Generalization |
| The tendency to lack a Conditioned Response (CR) to a new stimulus that resembles the original Conditioned Stimulus (CS). | Stimulus Discrimination |
| One learns to realize the difference between similar stimuli. | Stimulus Discrimination |
| Example: we respond differently when a fire alarm wakes us up and when an alarm clock wakes us up (JUMP VS. SNOOZE) | Stimulus Discrimination |
| Classically conditioned response. Responsive to some behavioral treatments. US: chemo; UR: nausea; CS: treatment room/needles; CR: nausea in response to the room/needles | Anticipatory Nausea + Vomiting (ANV) |
| Example of the Garcia Effect. | Anticipatory Nausea + Vomiting |
| When patients enter the treatment room or see the needle they feel nauseous, but the medication is what makes them sick. | Anticipatory Nausea + Vomiting |
| Example: getting sick after eating a food and never wanting to eat that food again | Garcia Effect |
| Behavior is dependent on its consequences. Often new responses. Responses are controlled by their consequences. | Operant Conditioning |
| Another name for Operant Conditioning | Instrumental Learning |
| Proposed the law of effect. Studied cats (they needed to unlock latches to get out of a puzzle box) | Thorndike |
| States that any behavior that has good consequences will tend to be repeated, and any behavior that has bad consequences will tend to be avoided. A satisfying result strengthens and increases a behavior. | Law of Effect |
| Thinks that ALL behavior is explained by looking outside the individual. People and animals tend to repeat behaviors that have positive consequences and decrease behaviors that have negative consequences. Studied Operant Conditioning | B.F. Skinner |
| Neutral, Reinforcement, Punishment | 3 types of consequences |
| Does NOT make behavior more or less likely to occur. NO EFFECT ON BEHAVIOR. | Neutral Consequence |
| Anything that will make a response more likely to occur. | Reinforcement |
| Examples: "good job", award, praise | Reinforcement |
| Anything that will make a response LESS likely to occur | Punishment |
| Naturally satisfying. Ex: food, water, sex. | Primary Reinforcers |
| Satisfying from an association with primary reinforcers. Ex: money, praise, grades. | Secondary Reinforcers |
| Inherently unpleasant. Decreases the likelihood of a response occurring again. If something bad happens you are less likely to act that way again. | Punishment |
| A stimulus is implemented after a response so that the response happens less often. Something unpleasant happens. Ex: spanking, soap in the mouth | Positive Punishment |
| Removal of a stimulus after a response so that the response will happen less often. Something pleasant is removed. Ex: no TV, no dessert | Negative Punishment |
| Naturally unpleasant. Ex: pain, freezing temperatures. | Primary Punisher |
| Unpleasant because they become associated with primary punishers. Ex: failing grades, social disapproval. | Secondary Punisher |
| __________ helps to increase a behavior, while _____________ helps to decrease a behavior. | Reinforcement, Punishment |
| A stimulus is implemented AFTER a response so that the response will happen more. | Positive Reinforcement |
| Removal of a stimulus after a response so that the response will happen more. | Negative Reinforcement |
| Acquiring a response that DECREASES an aversive stimulation (ending it early) | Escape Learning |
| Acquiring a response that PREVENTS some aversive stimuli from happening. | Avoidance Learning |
| When it happens right after a behavior it has the strongest effect. | Reinforcement/Punishment |
| Reward/punishment happens each time the behavior happens. | Continuous |
| Example: every time the kid is bad, TV is taken away | Continuous |
| Reinforcement/punishment happens when a response happens ONLY some of the time. | Intermittent (partial) schedule |
| Ex: TV only taken away sometimes. | Intermittent (partial) schedule |
| Yields a higher response rate. | Ratio Schedules |
| Reinforcement after a certain # of responses | Ratio Schedule |
| Reinforcement after a fixed # of responses. Ex: every 4 times | Fixed Ratio Schedule |
| Example: every 4 times you do something, you will get rewarded 1 time. | Fixed Ratio Schedule |
| Reinforcement after some average # of responses. Ex: on avg. 7 | Variable-Ratio Schedule |
| Example: person trying to win a game by getting heads on a coin toss gets heads every 2 times (avg.) that she tosses the coin. Sometimes she may toss it 1 time and get heads, but other times she may have to toss it many times before getting a heads. | Variable-Ratio Schedule |
| Reinforcement happens after a particular avg. amount of time has passed and once the desired behavior has happened. | Variable-Interval Schedule |
| This type of schedule yields more resistance to extinction. | Variable-Ratio Schedule |
| Reinforcement happens after a fixed amount of time has passed since the last reinforcer. A set schedule. Ex: every 5 minutes | Fixed-Interval Schedule |
| Reinforcement happens after a variable amount of time has passed since the last reinforcer. Ex: on avg. 5 minutes (could be 3 or 7 minutes -- varies) | Variable-Interval Schedule |
| For a response to persist it should be reinforced __________, making responses harder to extinguish. | Intermittently |
| Procedure in which reinforcement is used to guide a response closer and closer to a desired response. Uses successive approximation (reinforce responses that are increasingly similar to desired response) | Shaping |
| Shape behavior toward the way you want your child to behave. | Shaping |
| Example: teaching a pigeon to turn in a circle or play ping-pong. | Shaping |
| Used to reach more complex sequence of behaviors, reinforcing various simple behaviors separately, then linking them. | Chaining |
| Shape final response in sequence and work back until sequence is learned. | Chaining |
| Reinforce separate behaviors being done in a specific sequence. | Chaining |
| Example: 1) eat dinner 2) take shower 3) brush teeth (do 1, 2, 3 and you can watch TV) | Chaining |
| Learning without realizing that you are learning. We learn every day whether we realize it or not. | Latent Learning |
| Assumes there's a higher-level cognitive process to how we learn, which impacts attitudes, beliefs, and expectations. | Observational Learning |
| Kids usually behave like their parents. Adults = Models | Observational Learning |
| Example: Albert Bandura and Bobo doll study | Observational Learning |
| Had some kids watch a video of a woman beating a Bobo doll. Those who saw the video were much more aggressive than those who didn't. Kids also used guns and other violent weapons that were placed in the room. | Albert Bandura and Bobo doll study |
| RESULTS: did what adult models did; observed their behavior and modeled it. | Albert Bandura and Bobo doll study |
| Prosocial behavior. Can be learned through modeling. | Observational Learning |
| 15 male and 15 female 1st graders watched 30 minutes of a TV show under 3 different conditions assigned randomly. IV: TV show watched: 1) someone saved Lassie, 2) neutral (no humans helping dogs), 3) Brady Bunch (positive family interactions, no dog scenes) | Lassie Study |
| Double-blind experiment. Kids were taken to a game room and given prizes based on # of points earned. Kids wore headphones and could press a "help" button if puppies barked (they couldn't keep playing if they asked for help). Kids heard the same tape (30 sec. of silence & 120 sec. of barking). | Lassie Study |
| DV: amount of time each kid pressed the "help" button during the barking, and how quickly they intervened once the barking started. RESULTS: during the 120 sec. barking period, Lassie rescue group: 77% helped; neutral group: 53%; Brady Bunch: 34%. | Lassie Study |
| The prosocial Lassie group was quickest to respond (imitating what they had seen in their TV show). | Lassie Study |
| 1) attention 2) retention 3) reproduction 4) motivation | Bandura's 4 key components to Observational Learning |
| Pay attention to what you see. | Attention |
| Store what you observe. | Retention |
| Imitate the behavior. | Reproduction |
| Be motivated to imitate the behavior because it will have a positive result. | Motivation |