🧠 Psychology – Learning & Behaviorism
PRACTICE SET 2 | 75 MCQs | Difficulty: 10/10
Every question: Scenario-Based, Passage-Based, or Complex Integration
PASSAGE 1 — Read carefully, answer Q1–Q7
"Dr. Harmon is a researcher studying emotional learning. She brings a 9-month-old infant named Tommy into the lab. Every time Tommy reaches for a wooden block (which he enjoys), a loud horn sounds behind him. After 7 pairings, Tommy cries and pulls away whenever he sees the block — even before touching it. Dr. Harmon then notices Tommy also cries when he sees similarly shaped rectangular objects — a matchbox, a small book — but does NOT cry when he sees a round ball. Later, Dr. Harmon presents the wooden block 30 times in a row without the horn. Tommy's crying gradually stops. Two weeks later, with no further training, Tommy whimpers slightly when shown the block again."
Q1. In this study, what is the UCS, UCR, CS, and CR in the correct order?
A) Block = UCS; Crying to block = UCR; Horn = CS; Crying to horn = CR
B) Horn = UCS; Crying to horn = UCR; Block = CS; Crying to block = CR
C) Crying = UCS; Block = UCR; Horn = CS; Tommy = CR
D) Block = CS; Horn = UCS; Pulling away = UCR; Crying = CR only after pairing
Answer: B) Horn = UCS; Crying to horn = UCR; Block = CS; Crying to block = CR
The horn naturally causes crying without learning (UCS → UCR). After repeated pairing with the block (neutral → CS), the block alone causes crying (CR). Note: the CR (crying to block) typically appears slightly weaker than the UCR (crying to horn) — a key distinguishing feature.
Q2. Tommy crying at rectangular objects (matchbox, small book) but NOT at a round ball illustrates which TWO processes occurring simultaneously?
A) Extinction and spontaneous recovery
B) Stimulus generalization AND stimulus discrimination
C) Higher-order conditioning and extinction
D) Vicarious conditioning and biological preparedness
Answer: B) Stimulus generalization AND stimulus discrimination
Generalization: Tommy responds to stimuli similar to the original CS (rectangular shapes). Discrimination: Tommy does NOT respond to sufficiently dissimilar stimuli (round ball). Both processes operate together — generalization broadens the response; discrimination narrows it based on dissimilarity.
Q3. The process of presenting the block 30 times without the horn, causing Tommy's crying to stop, is called:
A) Spontaneous recovery
B) Stimulus discrimination training
C) Extinction
D) Counterconditioning
Answer: C) Extinction
Extinction in classical conditioning occurs when the CS (block) is repeatedly presented without the UCS (horn). The conditioned association weakens and the CR (crying) disappears. Crucially, the original learning is not erased — it is suppressed.
Q4. Tommy whimpering two weeks later (without any retraining) is an example of:
A) Re-acquisition
B) Spontaneous recovery
C) Higher-order conditioning
D) Stimulus generalization
Answer: B) Spontaneous recovery
After a rest period following extinction, the CR returns at reduced strength — this is spontaneous recovery. It is one of the strongest pieces of evidence that extinction suppresses rather than erases the original CS-UCS association.
Q5. This experiment is MOST similar to which famous historical study?
A) Tolman's latent learning maze experiment
B) Skinner's operant chamber studies
C) Watson and Rayner's Little Albert experiment
D) Bandura's Bobo Doll experiment
Answer: C) Watson and Rayner's Little Albert experiment
Watson and Rayner conditioned Little Albert to fear a white rat by pairing it with a loud noise, a closely parallel procedure. Tommy's study is essentially a replication of the Little Albert paradigm: a neutral object paired with a fear-inducing noise to produce a conditioned emotional response in an infant.
Q6. Dr. Harmon later pairs the BLOCK (now established CS) with a flash of blue light — without ever pairing the blue light with the horn. After several pairings, Tommy shows mild distress at the blue light alone. This demonstrates:
A) Stimulus generalization to blue light
B) Second-order (higher-order) conditioning
C) Spontaneous recovery transferred to a new stimulus
D) Vicarious conditioning
Answer: B) Second-order (higher-order) conditioning
The block (CS1) is used as though it were a UCS to condition the blue light (CS2 — new neutral stimulus). CS2 never pairs directly with the UCS (horn) yet elicits a CR, because CS1 "carries" the associative strength. This is the defining feature of higher-order conditioning.
Q7. Tommy's mother tells the researchers she ALSO cries when she smells antiseptic solution, ever since a painful medical procedure years ago. She learned this fear through watching her own mother cry at hospitals, without ever having a traumatic procedure herself. Her fear was acquired through:
A) Classical conditioning via direct experience
B) Vicarious conditioning
C) Operant conditioning via negative reinforcement
D) Latent learning
Answer: B) Vicarious conditioning
Vicarious classical conditioning occurs when an observer acquires a conditioned emotional/physiological response by watching another person experience the CS-UCS pairing — without direct personal experience. The mother "learned" the fear through observation of her mother's response.
PASSAGE 2 — Read carefully, answer Q8–Q14
"A behavioral researcher places a rat named Rex in an operant chamber. Initially, Rex receives a food pellet every time he presses the lever (Condition A). Next, the researcher only delivers food after every 5th lever press (Condition B). Later, Rex receives food after an unpredictable number of presses — sometimes 3, sometimes 10, sometimes 7 (Condition C). In a final condition, a green light is turned on and off randomly — Rex only receives food when the green light is ON and he presses the lever (Condition D). After Condition C, the researcher disconnects the food mechanism entirely. Rex continues pressing the lever for a remarkably long time before stopping."
Q8. Condition A (food every press) is which schedule? What is its primary disadvantage?
A) Fixed ratio; produces rapid extinction
B) Continuous reinforcement; produces fastest extinction when stopped
C) Variable ratio; produces unpredictable behavior
D) Fixed interval; produces scallop pattern
Answer: B) Continuous reinforcement; produces fastest extinction when stopped
Continuous reinforcement (CRF) — one reinforcement per response — produces fastest initial learning but fastest extinction. Because every response has always been reinforced, non-reinforcement immediately signals "the contingency has ended," causing rapid extinction.
Q9. Condition B (food every 5th press) is which schedule, and what behavioral pattern does it produce?
A) Variable ratio; highest steady response rate
B) Fixed ratio; high response rate with a post-reinforcement pause
C) Fixed interval; scallop-shaped responding
D) Variable interval; steady moderate responding
Answer: B) Fixed ratio; high response rate with a post-reinforcement pause
FR-5 reinforces after every 5th response. This produces high response rates because more responses = more rewards, but the organism pauses briefly after each reinforcement (post-reinforcement pause) before starting the next ratio run.
Q10. Condition C (unpredictable number of presses required) is which schedule, and why does Rex press so long after extinction begins?
A) Fixed interval; post-reinforcement pauses make extinction gradual
B) Variable ratio; the unpredictable ratio creates maximum resistance to extinction (partial reinforcement effect)
C) Variable interval; time-based uncertainty slows extinction
D) Fixed ratio; ratio strain causes gradual decrease
Answer: B) Variable ratio; the unpredictable ratio creates maximum resistance to extinction (partial reinforcement effect)
VR produces the highest steady response rate AND the greatest resistance to extinction — this is the partial reinforcement effect. Rex cannot tell whether non-reinforcement means "contingency ended" or simply "this is one of those longer stretches" — so he keeps pressing far longer than under CRF.
Q11. The green light in Condition D is functioning as:
A) A conditioned stimulus triggering a reflexive response
B) A discriminative stimulus signaling when lever pressing will be reinforced
C) A secondary reinforcer paired with food
D) A punishing stimulus that suppresses pressing
Answer: B) A discriminative stimulus signaling when lever pressing will be reinforced
The green light signals "lever pressing NOW leads to food." Sᴰ (discriminative stimulus) sets the occasion for operant behavior by indicating reinforcement availability. When the light is OFF, pressing is not reinforced — Rex learns to press only during the light (stimulus control).
Q12. Skinner once observed that pigeons given food at random intervals (not contingent on any specific behavior) developed idiosyncratic rituals — turning in circles, bowing repeatedly. Before lever training began, while pellets were occasionally dropped at random times, Rex happened to press a corner of the cage right before a pellet dropped, and began pressing that corner repeatedly. This illustrates:
A) Instinctive drift — behavior reverting to species-typical patterns
B) Superstitious reinforcement — accidental contingency between arbitrary behavior and reward
C) Latent learning — unrewarded exploration forming a cognitive map
D) Biological preparedness — corner-pressing is biologically prepared in rats
Answer: B) Superstitious reinforcement — accidental contingency between arbitrary behavior and reward
Superstitious behavior develops when reinforcement is delivered non-contingently but happens to follow a particular behavior by chance. The organism "acts as if" that behavior caused the reward and repeats it. The behavior is maintained by the accidental reinforcement history.
Q13. A new researcher argues they should use CONTINUOUS punishment (shock every single incorrect behavior) to eliminate Rex's unwanted behaviors most efficiently. Based on the drawbacks of severe punishment, which response BEST critiques this approach?
A) Punishment cannot decrease behavior frequency under any circumstances
B) Continuous severe punishment suppresses behavior temporarily, may cause emotional disturbance and aggression, doesn't teach correct behavior, and may generalize to suppress all behavior or cause fear of the entire testing environment
C) Punishment is only ineffective in animals — it works reliably in humans
D) The only drawback is that punishment takes longer to work than reinforcement
Answer: B) Continuous severe punishment suppresses behavior temporarily, may cause emotional disturbance and aggression, doesn't teach correct behavior, and may generalize to suppress all behavior or cause fear of the entire testing environment
Well-documented drawbacks of severe punishment: (1) suppresses but doesn't eliminate behavior; (2) causes fear/avoidance of the punisher and setting; (3) models aggression; (4) produces emotional disturbance; (5) doesn't teach the correct alternative behavior. Effective punishment is immediate, consistent, moderate, and paired with reinforcement of alternatives.
Q14. A researcher uses Rex's behavior to train him to run a complex obstacle course: first, any movement toward the course is reinforced; then only entering the start zone; then moving to the first obstacle; and so on until the full course is run. This training method is:
A) Chaining
B) Shaping via successive approximation
C) Fixed ratio scheduling
D) Latent learning exploitation
Answer: B) Shaping via successive approximation
Shaping reinforces behaviors progressively closer to the target. Each step is reinforced while earlier approximations are extinguished, gradually building the full target behavior from scratch. This is used when the final behavior is not yet in the organism's repertoire.
PASSAGE 3 — Read carefully, answer Q15–Q21
"Dr. Voss runs a classroom-based behavior modification program. Students earn poker chips for completing assignments on time, raising their hand, and helping classmates. At the end of the week, chips can be exchanged for extra recess, snacks, or homework passes. One student, Kai, has been repeatedly sent to a small room alone for 10 minutes every time he disrupts the class. Another student, Priya, had her recess time reduced by 15 minutes for talking back. A third student, Leo, was given extra chores as a consequence of his aggressive behavior. Dr. Voss also uses a brain-activity monitoring system to help a student with ADHD learn to produce calmer, more focused brain wave patterns through real-time feedback."
Q15. The poker chip system in Dr. Voss's classroom is a:
A) Primary reinforcement system using basic biological rewards
B) Token economy based on secondary reinforcement
C) Continuous reinforcement schedule using fixed ratio
D) Systematic desensitization program
Answer: B) Token economy based on secondary reinforcement
Tokens (poker chips) are secondary (conditioned) reinforcers — they have no intrinsic value but acquire reinforcing power through association with primary/preferred reinforcers (snacks, recess). Token economies are among the most empirically supported behavior modification interventions.
Q16. Kai being placed alone in a room for 10 minutes is an example of:
A) Positive punishment — adding an aversive stimulus
B) Negative reinforcement — removing an aversive stimulus
C) Negative punishment — removing access to reinforcement (time-out)
D) Extinction — ignoring the disruptive behavior
Answer: C) Negative punishment — removing access to reinforcement (time-out)
Time-out = time-out FROM positive reinforcement. A desirable environment (classroom with peers, stimulation) is removed following the unwanted behavior. "Negative" = removing something; "Punishment" = behavior decreases. Time-out only works if the original environment is reinforcing — if class itself is aversive to Kai, time-out will backfire.
Q17. Priya losing 15 minutes of recess for talking back is:
A) Positive punishment — adding chores
B) Negative punishment — removing a pleasant activity (response cost)
C) Negative reinforcement — avoiding loss of more recess
D) Extinction — ignoring talking back
Answer: B) Negative punishment — removing a pleasant activity (response cost)
Response cost is a form of negative punishment where a specified amount of a reinforcer is removed following an undesired behavior. Recess (pleasant, rewarding) is removed following talking back, aiming to decrease the frequency of talking back.
Q18. Leo receiving extra chores for aggression is:
A) Negative punishment
B) Negative reinforcement
C) Positive punishment
D) Extinction
Answer: C) Positive punishment
Something aversive (extra chores) is ADDED following an undesired behavior (aggression) to decrease its future frequency. Positive = adding; Punishment = behavior decreases.
Q19. The brain-activity monitoring system Dr. Voss uses with the ADHD student is called:
A) Biofeedback — general physiological monitoring
B) Neurofeedback — specifically training brainwave (EEG) patterns
C) Systematic desensitization — pairing calm states with stimuli
D) Reciprocal inhibition training
Answer: B) Neurofeedback — specifically training brainwave (EEG) patterns
Neurofeedback is a specific subtype of biofeedback that provides real-time EEG information, allowing the individual to learn to operantly control their own brainwave patterns (increasing alpha/SMR waves associated with calm focus, decreasing theta waves associated with inattention). This uses operant conditioning principles on what are normally involuntary processes.
Q20. Dr. Voss wants to teach a new student to raise their hand before speaking — a behavior the student has never done. She starts by reinforcing any quiet pause before speaking, then reinforcing any hand movement upward, then reinforcing a partial raise, then requiring a full raised hand. This is:
A) Chaining — linking existing behaviors in sequence
B) Successive approximation (shaping)
C) Fixed interval reinforcement
D) Higher-order conditioning applied to behavior
Answer: B) Successive approximation (shaping)
The target behavior (hand raising) is not initially in the student's repertoire in that form. Dr. Voss reinforces progressively closer approximations — each step narrows in on the final target behavior while earlier approximations are no longer reinforced (differential reinforcement).
Q21. If Dr. Voss wanted to teach the full morning routine (enter class → hang up coat → sit down → take out materials → begin warm-up) as a linked sequence where each completed step cues the next, she would use:
A) Shaping
B) Chaining
C) A fixed ratio schedule
D) A token economy alone
Answer: B) Chaining
Chaining links already-learned or newly trained behaviors into a sequence. Each behavior serves as the discriminative stimulus for the next AND is reinforced by the opportunity to perform the next step, with the terminal reinforcer (beginning work) completing the chain. This is used to teach multi-step routines and complex skills.
PASSAGE 4 — Read carefully, answer Q22–Q28
"Dr. Tolman places three groups of rats in an identical maze.
Group 1: Runs the maze daily and receives food at the end each day. By day 5, they navigate near-perfectly.
Group 2: Runs the maze daily with NO food reward for the first 10 days. Their performance appears poor and random. On day 11, food is introduced. By day 12, their performance matches Group 1 — almost as if they had been learning all along.
Group 3: Runs the maze daily and NEVER receives food. They never improve significantly.
Dr. Tolman also later takes rats that have learned a maze, floods it with water, and opens a different entry point. The rats swim directly to the goal box location despite the altered entry — suggesting they 'know where' the goal is, not just 'which turns to make.'"
Q22. What does Group 2's sudden improvement on day 11 demonstrate?
A) The partial reinforcement effect — intermittent reward increases resistance to extinction
B) Latent learning — learning occurred during unrewarded trials but was only expressed when motivation (reward) was introduced
C) Insight learning — an "aha moment" solution appeared on day 11
D) Superstitious reinforcement — food accidentally paired with correct navigation
Answer: B) Latent learning — learning occurred during unrewarded trials but was only expressed when motivation (reward) was introduced
Latent learning is learning that occurs without obvious reinforcement and remains hidden until incentive (motivation) is provided. Group 2 formed a cognitive map of the maze during unrewarded runs; food provided the motivation to use it. This challenged behaviorism's claim that reinforcement is necessary for learning.
Q23. Group 3 (never rewarded, never improves) serves what critical purpose in Tolman's study?
A) It demonstrates that latent learning can occur even without any experience
B) It serves as the control, showing that maze exposure alone (without cognitive engagement or reward) is insufficient — confirming reinforcement matters for performance
C) It proves that all learning requires reward
D) It demonstrates that biological constraints prevent rats from learning mazes without reward
Answer: B) It serves as the control, showing that maze exposure alone (without cognitive engagement or reward) is insufficient — confirming reinforcement matters for performance
Group 3 provides the no-reward baseline that distinguishes simple exposure from latent learning: with food never introduced, performance never improves, which is what makes Group 2's abrupt jump on day 11 interpretable as latent learning rather than a gradual practice effect. The study as a whole shows that reward affects performance but not necessarily acquisition.
Q24. The finding that rats navigated directly to the goal box from a new entry point — demonstrating they knew the goal's LOCATION, not just a sequence of turns — is evidence for:
A) Stimulus generalization across maze orientations
B) Instinctive drift — rats naturally navigate to food locations
C) A cognitive map — an internal spatial representation of the environment
D) Insight learning applied to navigation
Answer: C) A cognitive map — an internal spatial representation of the environment
Tolman coined "cognitive map" to describe the internal mental representation of the spatial layout of an environment. Rats weren't running a fixed stimulus-response chain of turns — they had an overview-style representation allowing flexible navigation from any entry point.
Q25. Tolman's findings were revolutionary because they:
A) Confirmed that all behavior is determined purely by reinforcement history
B) Introduced the concept of internal cognitive representations into a field dominated by pure stimulus-response behaviorism
C) Proved that classical conditioning is more powerful than operant conditioning
D) Demonstrated that biological instincts prevent effective maze learning
Answer: B) Introduced the concept of internal cognitive representations into a field dominated by pure stimulus-response behaviorism
Tolman's work was a direct challenge to Skinner and Watson's pure behaviorism, which rejected internal mental processes. By showing that rats formed internal maps, Tolman helped pave the way for cognitive psychology and cognitive-behavioral approaches.
Q26. A student studies for an exam while listening to jazz music. During the exam there is silence. The student performs well. Later the student listens to jazz while NOT studying and doesn't think about course material at all. Which of Tolman's concepts BEST explains why the student still retained the academic content?
A) Latent learning during jazz sessions transferred to exam context
B) The jazz music was a discriminative stimulus triggering memory retrieval
C) Cognitive maps of the subject formed during studying were retained and expressed when tested
D) Spontaneous recovery of previously extinguished memory occurred
Answer: C) Cognitive maps of the subject formed during studying were retained and expressed when tested
Tolman's cognitive map concept extends beyond physical mazes — it applies to any organized internal representation of learned material. The student formed an internal representation of the content that was retained and accessed during testing, regardless of the music context.
Q27. The critical distinction Tolman's study draws between "learning" and "performance" most directly parallels which of Bandura's findings?
A) Reciprocal determinism — behavior changes environment which changes behavior
B) Children who watched a punished model showed no imitation (performance) but still knew the aggressive behaviors (learning) when incentivized
C) The four elements of observational learning must all be present for any behavior to occur
D) Vicarious reinforcement increases motivation to perform observed behaviors
Answer: B) Children who watched a punished model showed no imitation (performance) but still knew the aggressive behaviors (learning) when incentivized
Both Tolman and Bandura found the same fundamental distinction: learning (acquisition) can occur without performance, and performance depends on motivation/reward. In both cases, what was "hidden" learning became visible when incentive was provided.
Q28. Which biological constraint could explain why Group 3 rats NEVER learned the maze even with extensive exposure?
A) Instinctive drift — rats instinctively return to home territory, preventing maze learning
B) The absence of biologically relevant consequences meant no adaptive value triggered neural consolidation of the spatial information
C) Biological preparedness prevents rats from forming cognitive maps in artificial environments
D) Conditioned taste aversion interfered with maze exploration
Answer: B) The absence of biologically relevant consequences meant no adaptive value triggered neural consolidation of the spatial information
This question integrates biological constraints with cognitive learning. While Tolman showed Group 2 rats did form latent maps, Group 3 rats apparently did not encode the maze well enough — possibly because without any biological relevance (food, escape, survival), the neural machinery for spatial consolidation was not fully engaged.
PASSAGE 5 — Read carefully, answer Q29–Q35
"Mariana grew up watching her mother react with intense fear every time a thunderstorm occurred — covering her ears, hiding in a closet, and trembling. By age 8, Mariana also showed intense fear during thunderstorms despite never having had a personally traumatic storm experience. At 25, Mariana seeks therapy. Her therapist first teaches her progressive muscle relaxation. Then they construct a hierarchy: (1) seeing a photo of clouds → (2) hearing soft rain sounds → (3) watching a storm video → (4) sitting near a window in light rain → (5) standing outside in a moderate storm. After six weeks, Mariana no longer experiences fear at any level of the hierarchy."
Q29. Mariana's acquisition of storm fear through watching her mother is an example of:
A) Classical conditioning through direct experience
B) Operant conditioning — her mother's hiding behavior was negatively reinforced
C) Vicarious conditioning — emotional responses learned by observing another's reactions
D) Biological preparedness — humans are evolutionarily primed to fear storms
Answer: C) Vicarious conditioning — emotional responses learned by observing another's reactions
Mariana never experienced the UCS (traumatic storm) directly. She acquired the fear CR by observing her mother's fear response, making this vicarious classical conditioning — a form of observational learning of emotional responses.
Q30. The therapeutic process Mariana's therapist uses is called:
A) Flooding — immediate full exposure to the feared stimulus
B) Aversion therapy — pairing the feared stimulus with an aversive event
C) Systematic desensitization — gradual exposure while maintaining relaxation
D) Token economy — reinforcing approach behaviors toward storms
Answer: C) Systematic desensitization — gradual exposure while maintaining relaxation
Systematic desensitization (Wolpe) = (1) relaxation training, (2) fear hierarchy construction, (3) gradual imaginal/in vivo exposure while maintaining relaxation. It is based on reciprocal inhibition — relaxation and fear are incompatible responses that cannot coexist.
Q31. The principle underlying systematic desensitization — that relaxation and anxiety cannot occur simultaneously — is called:
A) Stimulus discrimination
B) Reciprocal inhibition
C) The Premack Principle
D) Latent inhibition
Answer: B) Reciprocal inhibition
Wolpe's reciprocal inhibition states that two incompatible physiological states cannot coexist — relaxation inhibits anxiety. By maintaining relaxation while exposing the client to feared stimuli, anxiety is progressively inhibited until the CS no longer elicits it.
Q32. Which psychologist is credited as the "Mother of Behavior Therapy" for first demonstrating this type of fear elimination clinically?
A) Mary Cover Jones — eliminated a child's phobia using gradual exposure + pleasant activity
B) Rosalie Rayner — eliminated Little Albert's fear using counterconditioning
C) Mary Ainsworth — eliminated attachment anxiety through systematic proximity
D) Anna Freud — eliminated phobias through talk therapy with children
Answer: A) Mary Cover Jones — eliminated a child's phobia using gradual exposure + pleasant activity
Jones (1924) treated "Little Peter" — a boy afraid of rabbits — by gradually bringing the rabbit closer while Peter ate his favorite food (a pleasant, incompatible state). This anticipated systematic desensitization by 30 years and established behavior therapy as a clinical discipline.
Q33. During therapy, Mariana notices that she now feels slightly uneasy at the sound of any white noise machine — a device often running in waiting rooms. The white noise had been playing softly in the therapy office during early sessions when Mariana was discussing her most distressing storm memories. The white noise becoming aversive is BEST explained by:
A) Stimulus generalization from the storm to white noise
B) Higher-order conditioning — therapy distress (CS1) conditioned the white noise (CS2)
C) Instinctive drift — humans instinctively avoid white noise
D) Conditioned taste aversion applied to sound
Answer: B) Higher-order conditioning — therapy distress (CS1) conditioned the white noise (CS2)
The storm fear/anxiety became a CS1 during therapy sessions. The white noise (neutral) was repeatedly present during these anxiety-evoking discussions, becoming a CS2 through higher-order conditioning — never directly paired with the original UCS (traumatic storm) but associated with CS1 (storm anxiety).
Q34. Mariana's therapist notes that she stopped fearing storms only in the therapy office initially, but still feared them at home. This limitation of the therapy reflects:
A) Spontaneous recovery after extinction
B) Failure of stimulus generalization — extinction in one context doesn't automatically generalize
C) Biological preparedness overriding extinction
D) Instinctive drift causing re-emergence of fear
Answer: B) Failure of stimulus generalization — extinction in one context doesn't automatically generalize
Context specificity of extinction is a well-documented phenomenon. Extinction is partially context-dependent — the CS-no UCS learning is encoded with contextual cues. Therapy in a different setting from where fear occurs means the extinction may not generalize. This is why in vivo (real-world) exposure is critical.
Q35. Mariana later reports that just THINKING about her mother hiding in the closet during storms still makes her feel anxious, though storms themselves no longer do. This persistent emotional association between her mother's behavior and anxiety is:
A) A conditioned taste aversion
B) A conditioned emotional response (CER) formed through vicarious conditioning
C) Latent learning about her mother's behavior
D) A superstitious reinforcement pattern
Answer: B) A conditioned emotional response (CER) formed through vicarious conditioning
Conditioned Emotional Responses are classically conditioned emotional reactions to previously neutral stimuli. Mariana vicariously conditioned a CER to her mother's storm-related behaviors. Even after desensitization to storms, this separately conditioned CER to the memory/image of her mother's behavior persists — illustrating that different CS-UCS associations can be independently extinguished.
PASSAGE 6 — Read carefully, answer Q36–Q43
"Professor Kaur teaches psychology and observes several students:
— Jordan studies intensively only the night before exams and coasts the rest of the semester.
— Sam studies steadily every day, never knowing when a pop quiz will occur.
— Devon gets paid $2 for every essay paragraph written.
— Alex receives a $50 bonus for every 5 lab reports submitted.
— Riley presses a vending machine button that dispenses a snack after an unpredictable number of button combinations.
Professor Kaur also designs an experiment: she tells students that for the next 8 weeks, they will receive bonus points only on Tuesdays for completing readings. Students read intensively on Mondays but barely at all on Wednesday through Saturday."
Q36. Jordan's studying behavior (cramming before exams, coasting otherwise) maps to which schedule of reinforcement?
A) Variable ratio — unpredictable study session produces reward
B) Fixed interval — scallop pattern of low responding early, high responding near the reward deadline
C) Fixed ratio — studying a set number of hours triggers a reward
D) Variable interval — studying at random times is reinforced
Answer: B) Fixed interval — scallop pattern of low responding early, high responding near the reward deadline
Fixed interval (FI) produces the "scallop" — organisms respond little immediately after reinforcement and accelerate as the interval end approaches. Jordan coasts (low response rate after the last exam) and crams intensively as the next exam (reward deadline) nears. Classic FI scallop.
Q37. Sam's steady daily studying (not knowing when pop quizzes occur) is on which schedule?
A) Fixed ratio
B) Fixed interval
C) Variable interval
D) Variable ratio
Answer: C) Variable interval
Variable interval (VI) delivers reinforcement for the first correct response after an unpredictable time period. Sam cannot predict WHEN the quiz will come, so studying consistently every day is the optimal strategy. VI produces steady, moderate response rates — the most stable of the four schedules.
Q38. Devon ($2 per paragraph) and Alex ($50 per 5 lab reports) are both on ratio schedules. What is the KEY difference between them?
A) Devon is on variable ratio; Alex is on fixed ratio
B) Devon is on continuous reinforcement; Alex is on fixed ratio
C) Both are on fixed ratio but with different ratios (FR-1 for Devon, FR-5 for Alex)
D) Devon is on fixed ratio FR-1; Alex is on fixed interval
Answer: C) Both are on fixed ratio but with different ratios (FR-1 for Devon, FR-5 for Alex)
Devon receives $2 after every paragraph = FR-1 (every single response reinforced — technically also continuous reinforcement). Alex receives $50 after every 5 reports = FR-5. Both show high rates and post-reinforcement pauses, but Alex's pauses will be proportionally larger (larger ratio = larger post-reinforcement pause).
Q39. Riley's vending machine behavior (unpredictable number of button combinations required) is on which schedule, and why is this schedule the MOST resistant to extinction?
A) Fixed ratio — high consistent response rate
B) Variable ratio — organism cannot distinguish "extinction" from "just a longer stretch without reward"
C) Variable interval — time unpredictability creates persistence
D) Fixed interval — the scallop creates sustained effort near the end
Answer: B) Variable ratio — organism cannot distinguish "extinction" from "just a longer stretch without reward"
VR produces maximum resistance to extinction because the unpredictable nature of reinforcement means any given non-reinforced trial could just be part of the normal variability. Riley has no way to know when the machine has been disconnected vs. when this is just a longer-than-usual run without reward — so pressing continues.
Q40. Professor Kaur's experiment (bonus points only on Tuesdays, students read intensively on Mondays) illustrates:
A) Variable ratio schedule producing steady responding
B) Fixed interval scallop — accelerating responses as the Tuesday deadline approaches
C) The Premack Principle — high-probability behavior reinforcing low-probability behavior
D) Latent learning — students learn the material without needing the bonus
Answer: B) Fixed interval scallop — accelerating responses as the Tuesday deadline approaches
The Tuesday bonus = FI reinforcement. Students learn that early-week reading doesn't "count" toward the approaching deadline, so they respond minimally early (Wednesday–Saturday) and intensively near the reinforcement deadline (Monday). This is the textbook FI scallop.
Q41. Professor Kaur wants to MAXIMIZE resistance to extinction for study habits after her course ends. Based on reinforcement principles, she should SWITCH from weekly scheduled exams to:
A) No reinforcement at all — extinction begins immediately
B) Continuous reinforcement — reward every single study session
C) Partial reinforcement (variable ratio or variable interval) — resistance to extinction is maximized
D) Fixed ratio reinforcement — one reward per chapter completed
Answer: C) Partial reinforcement (variable ratio or variable interval) — resistance to extinction is maximized
The partial reinforcement effect (PRE): behaviors reinforced intermittently are far more resistant to extinction than continuously reinforced behaviors. Switching to partial reinforcement after establishing the behavior creates habits that persist even when reinforcement becomes unavailable (after the course ends).
Q42. A student in Professor Kaur's class who scored 100% on the last 3 exams begins to carry a specific "lucky pencil" to every exam. He performed well on those exams with that pencil but has no logical basis for believing it helps. This is BEST explained by:
A) Instinctive drift — returning to familiar exam behaviors
B) Superstitious reinforcement — accidental contingency between carrying the pencil and exam success
C) Higher-order conditioning — the pencil became a CS2
D) Latent learning — the pencil triggered hidden knowledge
Answer: B) Superstitious reinforcement — accidental contingency between carrying the pencil and exam success
The pencil was present during exam success (a rewarding outcome). The student's carrying behavior was accidentally reinforced, leading to the superstitious belief. This mirrors Skinner's pigeons — organisms attribute causality to whatever behavior coincidentally preceded reinforcement.
Q43. Professor Kaur introduces a rule: students who are caught on phones during class lose their participation grade for the day. Two students respond differently: Marcus stops using his phone entirely; Keisha uses it MORE covertly. The consequence (losing grade) worked for Marcus but not Keisha because:
A) The punishment was too severe for Marcus but too mild for Keisha
B) For punishment to work, the behavior must be consistently detected and immediately punished — Keisha learned that covert use goes undetected, making punishment inconsistent and ineffective
C) Marcus experienced negative reinforcement; Keisha experienced positive punishment
D) Keisha was on a variable ratio schedule of phone use that made extinction impossible
Answer: B) For punishment to work, the behavior must be consistently detected and immediately punished — Keisha learned that covert use goes undetected, making punishment inconsistent and ineffective
For punishment to suppress behavior effectively, it must be: (1) immediate, (2) consistent (applied every time the behavior occurs), (3) inescapable, and (4) moderate. Keisha discovered she could avoid punishment through covert use — inconsistent punishment can actually increase the target behavior by teaching "how to avoid getting caught."
PASSAGE 7 — Read carefully, answer Q44–Q52
"Dr. Alvarez is studying social influences on behavior. She shows Group A children a video of an adult model punching a Bobo doll, then receiving praise from another adult. Group B watches the same video but the model is scolded. Group C watches a neutral video with no aggression. All children are then placed alone with a Bobo doll and various toys.
Group A children immediately imitate the model's specific punching behaviors.
Group B children show little aggression spontaneously.
Group C children show minimal aggression.
Dr. Alvarez then tells ALL children they will receive stickers if they can show her all the aggressive behaviors they remember. Both Group A and Group B demonstrate the full range of aggressive behaviors — Group B's performance matches Group A when incentivized.
Dr. Alvarez also interviews the children afterward. Some children say they 'didn't want to' hit the doll even though they knew how."
Q44. This experiment is a replication of whose work, using which paradigm?
A) Tolman's maze study — demonstrating latent learning through delayed reward
B) Bandura's Bobo Doll experiment — demonstrating observational learning and vicarious reinforcement
C) Seligman's learned helplessness — demonstrating passive acceptance of negative outcomes
D) Watson's Little Albert study — demonstrating classical conditioning of fear
Answer: B) Bandura's Bobo Doll experiment — demonstrating observational learning and vicarious reinforcement
Bandura's original Bobo Doll studies (1961, 1963) used exactly this design — model rewarded, model punished, control group — to study observational learning and the role of vicarious consequences on performance of learned behaviors.
Q45. The fact that Group B (model punished) did NOT imitate spontaneously but DID imitate when offered stickers demonstrates:
A) That Group B had never learned the behaviors
B) The critical distinction between learning (acquisition through observation) and performance (expression dependent on motivation)
C) That vicarious punishment permanently suppressed the neural circuits for aggression
D) That continuous reinforcement always overrides vicarious punishment effects
Answer: B) The critical distinction between learning (acquisition through observation) and performance (expression dependent on motivation)
Group B learned the behaviors through observation (acquisition) but didn't perform them because vicarious punishment (seeing model scolded) reduced motivation. When incentive was provided, learning was revealed. This is the same learning-vs-performance distinction as Tolman's Group 2 rats — parallel finding across species.
Q46. Group A's spontaneous imitation was driven by:
A) Direct positive reinforcement of their own aggressive behaviors
B) Vicarious reinforcement — seeing the model praised increased their motivation to imitate
C) Biological preparedness for aggression
D) Conditioned emotional response to the Bobo doll
Answer: B) Vicarious reinforcement — seeing the model praised increased their motivation to imitate
Vicarious reinforcement occurs when an observer sees a model rewarded for behavior. This increases the observer's motivation to perform that same behavior — without the observer receiving any direct reinforcement. Group A was motivated to imitate because they had vicariously experienced the social reward.
Q47. List the FOUR elements required for Group A children to successfully imitate the model, in the correct order of necessity:
A) Desire → Memory → Attention → Imitation
B) Attention → Memory → Imitation ability → Motivation/Desire
C) Memory → Attention → Desire → Imitation
D) Imitation → Attention → Memory → Desire
Answer: B) Attention → Memory → Imitation ability → Motivation/Desire
Bandura's four elements: (1) Attention — must notice and attend to the model; (2) Retention/Memory — must encode and remember behaviors; (3) Motor Reproduction/Imitation — must be physically capable of performing behaviors; (4) Motivation/Desire — must have reason to perform (vicarious reinforcement, anticipated reward). All four are necessary; absence of any one prevents imitation.
Q48. Some children said they "didn't want to" hit the Bobo doll even though they "knew how." This reflects which element of observational learning being ABSENT?
A) Attention — they weren't paying attention to the model
B) Memory — they couldn't encode the aggressive behaviors
C) Motor reproduction — they were physically incapable of the behaviors
D) Motivation/Desire — they had no incentive to perform the behavior
Answer: D) Motivation/Desire — they had no incentive to perform the behavior
"Didn't want to" directly indicates motivational absence. The children had attention (they watched), memory (they knew how), and motor ability (they were physically capable) — but lacked the desire or motivation to translate learning into performance.
Q49. Dr. Alvarez notes that children who came from homes where parents modeled prosocial behavior tended to show less aggression than those from homes where aggression was modeled. She concludes that home environment, parenting behavior, and child behavior mutually influence each other. This is Bandura's concept of:
A) Latent learning applied to social environments
B) Learned helplessness across family systems
C) Reciprocal determinism — behavior, personal factors, and environment all influence each other bidirectionally
D) Biological preparedness for prosocial behavior
Answer: C) Reciprocal determinism — behavior, personal factors, and environment all influence each other bidirectionally
Reciprocal determinism: The child's behavior influences the home environment (aggressive child changes family dynamics), the environment influences the child (aggressive home models aggression), and the child's personal factors (beliefs, cognitive schema) influence how they interpret both. It's a three-way bidirectional system.
Q50. After the Bobo doll session, one child — Marcus — is afraid of the Bobo doll itself because it was present while he watched an older child in a separate incident get hurt playing aggressively. The Bobo doll's associative value here was acquired through:
A) Operant conditioning — touching the doll was punished
B) Classical conditioning — the doll (CS) was paired with pain/fear (UCS)
C) Vicarious conditioning — he observed another child's fear of the doll
D) Higher-order conditioning using the aggression video as CS1
Answer: B) Classical conditioning — the doll (CS) was paired with pain/fear (UCS)
The doll (originally neutral → CS) was present when Marcus witnessed the injury (UCS → UCR of fear), so the doll now elicits fear (CR). Option C (vicarious conditioning) is tempting, but the question asks about the doll's "associative value," which was acquired through Marcus's own experience of the doll being paired with an aversive UCS event — making classical conditioning the more precise answer.
Q51. Which researcher would argue that Marcus's acquired fear of the Bobo doll could be most effectively treated by pairing the doll with Marcus's favorite snack while gradually increasing proximity?
A) B.F. Skinner — using positive reinforcement to override fear
B) Mary Cover Jones — using counterconditioning (pleasant activity + gradual exposure) as she did with Little Peter
C) Edward Tolman — building a cognitive map of the safe environment
D) Martin Seligman — using internal locus of control retraining
Answer: B) Mary Cover Jones — using counterconditioning (pleasant activity + gradual exposure) as she did with Little Peter
Jones's method with Little Peter (1924) — the first clinical behavior therapy — paired a feared object (rabbit) with a pleasant stimulus (food) during gradual exposure. This directly applies here: snack + gradual proximity to Bobo doll. Jones is the "Mother of Behavior Therapy" for pioneering exactly this technique.
Q52. Dr. Alvarez hypothesizes that children raised with NO models of any behavior will not develop any social behaviors. This reflects which theoretical position?
A) Biological preparedness — instincts override all modeling
B) Watson's environmental determinism — the environment exclusively shapes behavior
C) Bandura's social learning theory — behavior is acquired through observation of models; no models = no social learning
D) Tolman's cognitive map theory — without environments to map, no behavior forms
Answer: C) Bandura's social learning theory — behavior is acquired through observation of models; no models = no social learning
Bandura's Social Learning Theory posits that observational learning (modeling) is a primary mechanism of behavior acquisition. Without models, children would lack crucial templates for social behavior. This differs from Watson's pure environmental determinism in that cognition and observation mediate learning.
PASSAGE 8 — Read carefully, answer Q53–Q62
"A comprehensive psychology exam question presents the following vignette:
'Alejandro has been depressed for three years. He believes nothing he does matters — every job application, every social attempt fails regardless of effort. He has stopped trying. His therapist notes that Alejandro attributes all outcomes to fate and other people. Meanwhile, Alejandro's brother Roberto tried the same number of job applications, also failed, but applied his own analysis to his failures, made targeted improvements, and eventually succeeded. Roberto now believes the harder he works, the more likely he is to succeed.
In therapy, Alejandro watches videos of people successfully navigating job interviews using specific behaviors. His therapist uses a token system where Alejandro earns points for every job application submitted, redeemable for activities he enjoys. After 8 weeks, Alejandro begins applying more consistently and attributes some successes to his own improved efforts.'"
Q53. Alejandro's belief that "nothing I do matters, outcomes are controlled by fate" reflects:
A) Internal locus of control
B) Learned helplessness with external locus of control
C) Reciprocal determinism applied externally
D) Biological preparedness for passivity
Answer: B) Learned helplessness with external locus of control
Seligman's learned helplessness: repeated uncontrollable failure → belief that responses don't affect outcomes → passive giving-up. Rotter's external locus of control: belief that outcomes are controlled by luck, fate, or others rather than one's own behavior. Alejandro shows both — learned helplessness producing an entrenched external locus of control.
Q54. Roberto's belief — "the harder I work, the more likely I am to succeed" — reflects:
A) External locus of control
B) Superstitious reinforcement
C) Internal locus of control
D) Reciprocal inhibition
Answer: C) Internal locus of control
Internal locus of control: belief that outcomes are contingent on one's own behavior, effort, and ability. Roberto attributes success/failure to controllable internal factors. Research links internal locus of control with higher achievement, better problem-solving, and greater psychological resilience.
Q55. The key difference between Alejandro and Roberto — despite identical initial experiences — is BEST explained by:
A) Different biological preparedness for success
B) Different schedules of reinforcement in childhood
C) Attribution style and locus of control affecting how they interpret and respond to failure
D) Roberto having better cognitive maps of the job market
Answer: C) Attribution style and locus of control affecting how they interpret and respond to failure
Same objective events (repeated job rejections) produced completely different outcomes because of cognitive interpretation. Roberto attributed failure to specific, controllable, changeable factors (internal). Alejandro attributed failure to uncontrollable, global, stable factors (external) — leading to learned helplessness.
Q56. Alejandro's therapist has him watch videos of successful job interviews. What element of observational learning must be present FIRST for this intervention to work?
A) Motivation — Alejandro must want the job
B) Motor reproduction — Alejandro must be able to do the behaviors physically
C) Attention — Alejandro must actively observe and process the model's behaviors
D) Memory — Alejandro must already have the behaviors encoded
Answer: C) Attention — Alejandro must actively observe and process the model's behaviors
Attention is the first and foundational element of observational learning. Without attending to the model's behavior, no encoding occurs, no memory is formed, and no imitation is possible. A depressed individual with low motivation may struggle to deploy attention — clinically, motivation often must be addressed first — but in Bandura's model, attention remains the first element of the sequence.
Q57. The token system in Alejandro's therapy (points for applications → redeemable for enjoyable activities) is BEST described as combining:
A) Classical conditioning + systematic desensitization
B) Token economy (secondary reinforcement) + the Premack Principle (preferred activities reinforce less preferred)
C) Fixed ratio schedule + primary reinforcement
D) Shaping + conditioned emotional response reduction
Answer: B) Token economy (secondary reinforcement) + the Premack Principle (preferred activities reinforce less preferred)
Token economy: tokens (secondary reinforcers) are exchanged for backup reinforcers. Premack Principle: high-probability preferred activities (enjoyable activities Alejandro likes) are used to reinforce low-probability behaviors (submitting job applications — currently very low frequency). Both principles operate simultaneously in this intervention.
Q58. After 8 weeks, Alejandro attributes some successes to "his own improved efforts." His locus of control is:
A) Remaining fully external
B) Shifting toward internal — he is beginning to see effort-outcome contingencies
C) Becoming superstitious — attributing success to lucky behaviors
D) Demonstrating learned helplessness in reverse
Answer: B) Shifting toward internal — he is beginning to see effort-outcome contingencies
Effective treatment for learned helplessness involves creating controllable success experiences that reinforce internal attributions. Alejandro is learning that HIS behavior (applications) produces outcomes (interview opportunities) — breaking the learned helplessness and shifting locus of control inward.
Q59. Alejandro's therapist also uses biofeedback to help Alejandro recognize and reduce his physiological anxiety responses during mock interviews. The SPECIFIC type used to help him control muscle tension during anxiety is:
A) Neurofeedback — targeting brainwave patterns
B) EMG biofeedback — targeting skeletal muscle tension
C) Systematic desensitization — targeting emotional associations
D) Reciprocal inhibition — pairing tension with a relaxation response
Answer: B) EMG biofeedback — targeting skeletal muscle tension
While neurofeedback specifically targets EEG/brainwaves, EMG (electromyographic) biofeedback targets muscle tension — providing real-time feedback on muscle activity so the individual can learn to reduce tension voluntarily. This is a specific application of biofeedback appropriate for anxiety-related muscle tension.
Q60. Alejandro's learned helplessness was originally conditioned in a manner most analogous to which experimental paradigm?
A) Pavlov's classical conditioning of salivation in dogs
B) Seligman's dogs exposed to inescapable shocks who later failed to escape avoidable shocks
C) Tolman's Group 2 rats who showed latent learning
D) Bandura's Bobo Doll children who learned aggression vicariously
Answer: B) Seligman's dogs exposed to inescapable shocks who later failed to escape avoidable shocks
Seligman directly modeled human depression through learned helplessness in dogs. Dogs given inescapable shocks later sat passively and accepted shocks even when escape was easy — they had learned their behavior was ineffective. Alejandro shows the human analog: years of uncontrollable failures → passive non-trying.
Q61. The Premack Principle in Alejandro's case is: "If Alejandro _____ (low-preference behavior), then he gets to _____ (high-preference behavior)." Which fills in correctly?
A) Watches job interview videos → submits an application
B) Submits a job application → engages in an enjoyable activity he likes
C) Earns tokens → submits an application
D) Relaxes → gains access to mock interviews
Answer: B) Submits a job application → engages in an enjoyable activity he likes
Premack: High-probability (preferred) activity reinforces low-probability (non-preferred) activity. Submitting applications is the low-probability behavior Alejandro avoids; enjoyable activities are high-probability behaviors he prefers. Using access to preferred activities to reinforce non-preferred behavior is "Grandma's Rule."
Q62. Alejandro's entire treatment integrates multiple psychological frameworks. Rank these from most fundamental (underlying mechanism) to most applied (specific technique):
A) Learned helplessness theory → Locus of control shift → Token economy → Premack Principle
B) Token economy → Premack Principle → Learned helplessness → Locus of control
C) Operant conditioning → Behavior modification → Token economy → Premack Principle as specific rule
D) Classical conditioning → Systematic desensitization → Token economy → Biofeedback
Answer: C) Operant conditioning → Behavior modification → Token economy → Premack Principle as specific rule
The hierarchy: Operant conditioning is the foundational science (Skinner/Thorndike). Behavior modification is the applied field using operant principles. Token economy is a specific behavior modification technique. The Premack Principle is the specific rule determining WHICH activities serve as reinforcers within the token economy. Each level is nested within the one above.
SECTION: Direct Definition + Identification — Rapid Fire
Q63. Wolfgang Köhler placed a banana out of reach of a chimpanzee named Sultan. Sultan failed repeatedly. After a pause, Sultan suddenly stacked two boxes and climbed to reach the banana. This demonstrates:
A) Latent learning — unrewarded prior exploration paid off
B) Insight learning ("aha moment") — sudden solution through internal mental reorganization
C) Trial-and-error operant shaping
D) Stimulus generalization of tool-use behaviors
Answer: B) Insight learning ("aha moment") — sudden solution through internal mental reorganization
Insight learning (Köhler) involves sudden problem solution without step-by-step trial-and-error. There is a period of apparent failure (incubation), then abrupt solution — the "aha moment." It cannot be explained by prior reinforcement history because the solution appears all at once, suggesting internal cognitive restructuring.
Q64. A raccoon trained by Breland & Breland to deposit coins in a bank for food rewards began instead rubbing the coins together and "washing" them — reverting to food-washing instinct despite this reducing reward frequency. This illustrates:
A) Superstitious reinforcement
B) Latent learning
C) Instinctive drift — trained behavior drifting back toward species-typical instinctive patterns
D) Stimulus generalization of washing behavior
Answer: C) Instinctive drift — trained behavior drifting back toward species-typical instinctive patterns
Instinctive drift (Breland & Breland) is a biological constraint on operant learning — trained behaviors that conflict with species-typical instincts will be overridden by those instincts over time, regardless of reinforcement. The raccoon's food-washing instinct is stronger than the trained coin-depositing behavior.
Q65. Garcia and Koelling's experiment showed rats developed taste aversions to flavored water (not to noise or lights) when paired with radiation-induced nausea hours later. This violates which classical conditioning "rule" and demonstrates which concept?
A) Violates the contiguity rule (CS-UCS must be immediate); demonstrates biological preparedness
B) Violates the frequency rule (many trials needed); demonstrates latent learning
C) Violates the generalization rule; demonstrates instinctive drift
D) Violates the voluntary behavior rule; demonstrates operant conditioning
Answer: A) Violates the contiguity rule (CS-UCS must be immediate); demonstrates biological preparedness
Standard classical conditioning requires CS-UCS contiguity (close time proximity). Garcia's taste aversion formed with hours between CS (taste) and UCS (nausea) — in a single trial. This is possible because evolution specifically prepared taste-illness associations (adaptive for survival), demonstrating biological preparedness overrides standard conditioning parameters.
Q66. David Premack studied which behavior in Cebus monkeys that led to his principle?
A) He noticed high-frequency play behavior could reinforce low-frequency lever pressing
B) He found monkeys preferred variable ratio schedules to fixed interval schedules
C) He discovered monkeys could form cognitive maps of their enclosures
D) He showed monkeys acquired insight learning faster than rats
Answer: A) He noticed high-frequency play behavior could reinforce low-frequency lever pressing
Premack's original observation: some behaviors occur at high frequency naturally (play, eating) while others are low frequency (pressing a lever). If access to a high-frequency behavior is made contingent on a low-frequency behavior, the low-frequency behavior increases. This is the Premack Principle.
Q67. (Hard Identification) A child who fears dogs is gradually desensitized. At the end of treatment, the therapist says: "Your calmness (CR) now occurs in the presence of dogs (CS) instead of fear." This NEW learned association (calm to dogs) was established by pairing dogs with relaxation — which is:
A) Extinction — simply removing the fear response
B) Counterconditioning — replacing one CR (fear) with an incompatible CR (calm) to the same CS
C) Shaping — approximating calmness in the presence of dogs
D) Latent learning — the calm response was always there, waiting for motivation
Answer: B) Counterconditioning — replacing one CR (fear) with an incompatible CR (calm) to the same CS
Counterconditioning doesn't just extinguish the old CR — it actively replaces it with a new, incompatible CR to the same CS. The dog becomes a CS for calm (new CR) rather than fear (old CR). This is mechanistically more powerful than simple extinction because a competing response actively inhibits the old one.
Q68. A hospital patient receiving chemotherapy always passes a particular mural in the hallway on the way to treatment. Over several sessions, the patient begins feeling nauseated even before entering the treatment room — just from seeing the mural. Later, the patient is cured and chemotherapy stops. The patient visits the hospital for a checkup months later and feels slightly nauseated upon seeing the mural again, despite having visited multiple times without nausea. This final nausea is:
A) Extinction of the conditioned taste aversion
B) Spontaneous recovery of the conditioned nausea response after a period of rest
C) Instinctive drift causing nausea in medical settings
D) Higher-order conditioning from the hospital scent
Answer: B) Spontaneous recovery of the conditioned nausea response after a period of rest
The patient underwent extinction (visiting without nausea during checkups). After a rest period, the CR (nausea) partially returns to the CS (mural) — spontaneous recovery. This is clinically significant because it explains why "cured" phobias or aversions can re-emerge and why treatment must include strategies for managing spontaneous recovery.
Q69. (Integration — All Concepts) Professor Singh assigns students to build a complex research project over a semester. She:
- First asks students to submit just an outline (reinforces any organized attempt)
- Then requires a rough draft (outline is no longer reinforced)
- Then requires a polished draft (rough draft not reinforced)
- Finally requires the full project (full reinforcement)
Meanwhile, students who turn in each component on time earn "research coins" (redeemable for bonus points). Students who miss deadlines lose 30 minutes of the final exam time.
Identify ALL operant principles present:
A) Shaping only
B) Shaping + Token economy + Negative punishment (lost exam time)
C) Fixed ratio + Primary reinforcement + Positive punishment
D) Chaining + Continuous reinforcement + Negative reinforcement
Answer: B) Shaping + Token economy + Negative punishment (lost exam time)
Shaping: progressive reinforcement of closer approximations to the full project. Token economy: research coins (secondary reinforcers) redeemable for bonus points. Negative punishment: losing exam time (removing something positive — exam time) for missing deadlines to decrease deadline-missing behavior. A comprehensive, multi-principle behavioral intervention.
Q70. (Ultimate Integration Scenario) "A neuroscientist is studying a patient, Ezra, who has a spider phobia. Brain imaging reveals heightened amygdala activity to spider images. Ezra reports he never had a direct bad experience with a spider but his older sister screamed violently at a spider when he was 5. He now avoids all spider-related media, has nightmares, and cannot enter rooms without checking for spiders. His avoidance behavior is maintained because it prevents anxiety. A therapist uses real-time EEG monitoring to help Ezra achieve calmer brain states while viewing spider images, and simultaneously introduces gradual exposure while Ezra practices breathing."
This case integrates WHICH combination of concepts in the CORRECT sequence of how Ezra's fear developed and is maintained?
A) Biological preparedness → Vicarious conditioning → CER → Negative reinforcement maintaining avoidance → Neurofeedback + Systematic desensitization as treatment
B) Operant conditioning → Fixed ratio → Token economy → Latent learning → Insight learning as treatment
C) Instinctive drift → Superstitious reinforcement → Learned helplessness → Biofeedback as treatment
D) Higher-order conditioning → Extinction → Spontaneous recovery → Positive punishment as treatment
Answer: A) Biological preparedness → Vicarious conditioning → CER → Negative reinforcement maintaining avoidance → Neurofeedback + Systematic desensitization as treatment
Perfect case formulation: (1) Biological preparedness — humans are evolutionarily prepared to acquire spider fear rapidly; (2) Vicarious conditioning — fear acquired by observing sister, not direct experience; (3) CER — conditioned emotional response (anxiety) to spider-related stimuli; (4) Negative reinforcement — avoidance removes anxiety, increasing avoidance frequency; (5) Neurofeedback — EEG biofeedback for brain state control; (6) Systematic desensitization — gradual exposure + relaxation (breathing) addressing the conditioned fear. Every element fits.
Q71. Ivan Pavlov noticed his dogs began salivating at the sound of the researcher's footsteps before food was even presented. At the time he called this a "psychic secretion." In modern terminology, the researcher's footsteps would be classified as:
A) An unconditioned stimulus — they naturally produce salivation
B) A conditioned stimulus — they have been repeatedly paired with food (UCS) and now elicit salivation (CR)
C) A discriminative stimulus — they signal when operant lever-pressing will be reinforced
D) A secondary reinforcer — they have been paired with food and gained reinforcing properties
Answer: B) A conditioned stimulus — they have been repeatedly paired with food (UCS) and now elicit salivation (CR)
Footsteps (originally neutral) were repeatedly paired with food delivery — becoming a CS that elicited the CR (salivation) before food was even visible. This was Pavlov's accidental discovery that launched classical conditioning research. The "psychic secretion" was the conditioned salivation response.
Q72. Which of the following CORRECTLY distinguishes John Watson's position from Albert Bandura's position?
A) Watson accepted cognitive processes; Bandura rejected them
B) Watson believed behavior is entirely environmentally determined (no cognitive mediation); Bandura proposed reciprocal determinism where cognition, behavior, and environment all interact
C) Watson studied observational learning; Bandura studied environmental determinism
D) Both Watson and Bandura agreed that reinforcement is unnecessary for learning
Answer: B) Watson believed behavior is entirely environmentally determined (no cognitive mediation); Bandura proposed reciprocal determinism where cognition, behavior, and environment all interact
Watson's strict behaviorism: environment → behavior (one-way determinism; cognition irrelevant — note that "radical behaviorism" is Skinner's term, not Watson's). Bandura: Person (cognition/beliefs) ↔ Behavior ↔ Environment (three-way reciprocal). This is one of the most fundamental theoretical distinctions in the history of learning theory.
Q73. Edward Thorndike's cat in a puzzle box repeatedly made random movements. When it accidentally pressed the lever and escaped, it took progressively less time to escape on subsequent trials. This provides evidence for:
A) Insight learning — the cat had an "aha moment"
B) Latent learning — prior exploration was expressed when the door opened
C) The Law of Effect — satisfying consequences (escape) strengthen the behavior that produced them
D) Stimulus generalization — the cat generalized escape behavior from previous environments
Answer: C) The Law of Effect — satisfying consequences (escape) strengthen the behavior that produced them
The Law of Effect: behaviors producing satisfying outcomes are more likely to be repeated; behaviors producing annoying outcomes are less likely to be repeated. The cat's escape behavior was strengthened by the satisfying consequence of freedom — the foundational insight of all operant conditioning. Note: it was trial-and-error, NOT insight (unlike Köhler's chimps).
Q74. (Comparison Scenario) Two patients seek treatment for phobias:
- Patient A (spider phobia): Therapist immediately exposes them to a room full of spiders for 2 hours until anxiety subsides
- Patient B (spider phobia): Therapist teaches relaxation, constructs a hierarchy, and gradually exposes while relaxed
Identify the techniques and the underlying principle each uses:
A) Patient A = Systematic desensitization (reciprocal inhibition); Patient B = Flooding (extinction)
B) Patient A = Flooding (extinction through prolonged exposure); Patient B = Systematic desensitization (reciprocal inhibition)
C) Patient A = Aversion therapy; Patient B = Token economy
D) Patient A = Counterconditioning; Patient B = Shaping toward calmness
Answer: B) Patient A = Flooding (extinction through prolonged exposure); Patient B = Systematic desensitization (reciprocal inhibition)
Flooding: immediate full-intensity exposure maintained until fear response extinguishes — based on extinction (CS without UCS repeatedly → CR disappears). Systematic desensitization: gradual hierarchy + relaxation — based on reciprocal inhibition (relaxation and anxiety are incompatible). Both treat phobias but through different mechanisms. Flooding is faster but more distressing; SD is slower but gentler.
Q75. (Grand Integration — Final Question) "A complete learning theory course could be organized around one central insight: behavior is never simply caused by a single factor."
Which answer BEST represents this insight by correctly linking the THEORIST → CHALLENGE to simple behaviorism → MECHANISM they proposed?
A) Tolman: learning requires reinforcement → cognitive maps; Bandura: behavior is only shaped by environment → reciprocal determinism; Seligman: organisms always seek control → learned helplessness; Köhler: insight requires training → aha moment
B) Tolman: learning requires reinforcement → latent learning/cognitive maps; Bandura: behavior is only directly experienced → observational learning + reciprocal determinism; Seligman: behavior is always controllable → learned helplessness + locus of control; Köhler: learning is only gradual trial-and-error → insight learning; Garcia: all stimuli are equally conditionable → biological preparedness
C) Pavlov: all learning is voluntary → classical conditioning; Skinner: classical conditioning is more powerful than operant → operant conditioning; Watson: cognition matters → environmental determinism; Thorndike: consequences don't matter → Law of Effect
D) Bandura: reinforcement strengthens behavior → social learning reduces the need for reinforcement; Seligman: insight learning is impossible → learned helplessness proves cognitive mediation; Tolman: shaping creates complex behaviors → latent learning bypasses shaping
Answer: B) Tolman: learning requires reinforcement → latent learning/cognitive maps; Bandura: behavior is only directly experienced → observational learning + reciprocal determinism; Seligman: behavior is always controllable → learned helplessness + locus of control; Köhler: learning is only gradual trial-and-error → insight learning; Garcia: all stimuli are equally conditionable → biological preparedness
This answer correctly maps each theorist to the specific behaviorist assumption they challenged and the mechanism they proposed: Tolman challenged "no reinforcement = no learning." Bandura challenged "only direct experience matters." Seligman challenged "organisms always try to control outcomes." Köhler challenged "all learning is gradual S-R." Garcia challenged "any CS can be equally conditioned to any UCS." Together, they transformed behaviorism into modern learning theory.
📊 Practice 2 — Complete Coverage Map
| Concept | Questions |
|---|---|
| CS/UCS/CR/UCR identification from scenario | Q1, Q6 |
| Stimulus generalization + discrimination (simultaneous) | Q2, Q68 |
| Extinction | Q3, Q8 |
| Spontaneous recovery | Q4, Q68 |
| Higher-order conditioning | Q6, Q33 |
| Vicarious conditioning | Q7, Q29 |
| Little Albert parallel | Q5 |
| CER | Q35, Q70 |
| Biological preparedness | Q16, Q65 |
| Continuous vs. partial reinforcement + PRE | Q8, Q10, Q41 |
| FR, VR, FI, VI identification | Q9, Q36, Q37, Q38, Q39, Q40 |
| Superstitious reinforcement | Q12, Q42 |
| Discriminative stimulus | Q11 |
| Positive/Negative Punishment identification | Q17, Q18, Q25, Q43 |
| Positive/Negative Reinforcement identification | Q23, Q24 |
| Time-out (negative punishment) | Q16 |
| Drawbacks of punishment | Q13, Q43 |
| Token economy | Q15, Q21, Q57 |
| Shaping + successive approximation | Q14, Q20, Q69 |
| Chaining | Q21 |
| Biofeedback + Neurofeedback | Q19, Q59 |
| Systematic desensitization | Q30, Q74 |
| Reciprocal inhibition | Q31 |
| Mary Cover Jones | Q32, Q51 |
| Counterconditioning | Q67 |
| Flooding vs. SD comparison | Q74 |
| Conditioned taste aversion | Q50, Q65 |
| Tolman's latent learning (3 groups) | Q22, Q23 |
| Cognitive maps | Q24, Q25, Q26 |
| Latent learning-performance distinction | Q27, Q45 |
| Instinctive drift | Q64 |
| Insight learning (Köhler) | Q63 |
| Premack Principle | Q61, Q66 |
| Learned helplessness | Q53, Q60 |
| Locus of control (internal/external) | Q54, Q55, Q58 |
| Bandura's Bobo Doll | Q44 |
| Four elements of observational learning | Q47, Q48, Q56 |
| Vicarious reinforcement | Q46 |
| Learning vs. performance distinction | Q45, Q27 |
| Reciprocal determinism | Q49, Q72 |
| Watson vs. Bandura | Q72 |
| Thorndike Law of Effect | Q73 |
| Grand integration | Q62, Q70, Q75 |
75 scenario/passage-based questions — every concept tested through application, identification, comparison, and multi-concept integration.