I will provide all the topics below. Create questions similar to the given examples and drawn from these topics. Make them a mix of direct questions and scenario-based questions, all MCQ, with the answer right under each question and an explanation along with it. Make them the hardest, 10/10 level, cover every detail large and small, and design them so every concept becomes clear. Topics: Learning, Maturation, Behavioral Perspective (John Watson), Environmental Determinism, Classical Conditioning, Ivan Pavlov, learning to make a reflex response to a stimulus, neutral stimulus, unconditioned stimulus, conditioned stimulus, unconditioned response, conditioned response, Stimulus Discrimination, Stimulus Generalization, Extinction, Spontaneous Recovery, identifying all stimuli and responses from a passage scenario, Higher-Order Conditioning, Conditioned Emotional Response, Little Albert, Vicarious Conditioning, Biological Preparedness, Operant Conditioning, voluntary behavior learned through consequences, B. F. Skinner "Father of Modern Behaviorism", Edward Thorndike "Law of Effect", puzzle box, Reinforcing Stimulus, Punishing Stimulus, Primary Reinforcer, Secondary Reinforcer, Positive & Negative Reinforcement, Positive & Negative Punishment, distinguishing positive from negative punishment and reinforcement, Application vs. Removal, Partial Reinforcement Effect, Continuous Reinforcement, Superstitious Reinforcement, Fixed Ratio, Variable Ratio, Fixed Interval, Variable Interval, distinguishing fixed/variable ratio and interval schedules, drawbacks of severe punishment, how punishment should be applied, Discriminative Stimulus, Shaping & Successive Approximation, Chaining, Behavior Modification, Token Economy, Time-out, Neurofeedback, Biofeedback, comparing the two kinds of conditioning, Mary Cover Jones "Mother of Behavior Therapy", Desensitization, Reciprocal Inhibition, Systematic Desensitization, Conditioned Taste Aversion, Edward Tolman's rat conditioning and Latent Learning (Group 1 rewarded every day; Group 2 in the maze every day but rewarded only from day 11, demonstrating learning of the maze almost immediately after receiving reward; Group 3 never rewarded, did not learn the maze well), Biological Constraints: Instinctive Drift, Wolfgang Köhler's evidence of insight learning ("aha" moment), David Premack's "Premack Principle", Martin Seligman's Learned Helplessness, Locus of Control (internal vs. external), Albert Bandura, Social Learning Theory, the two parts of observational learning (vicarious learning and vicarious reinforcement), Bandura's "Bobo Doll" experiment and its two conditions, the Four Elements of Observational Learning (Attention, Memory, Imitation, Desire), Reciprocal Determinism.

I'll generate 75 hard MCQs covering every concept you listed — direct questions, scenario-based, passage-based, and example-style, all with explained answers. This will be long and thorough.

🧠 Psychology – Learning & Behaviorism

75 MCQs | Difficulty: 10/10


SECTION 1: Learning, Maturation & Behavioral Perspective

Q1. A child raised in isolation begins walking at 14 months without any training. A second child receives intense daily walking practice but walks at the same age. What does this BEST illustrate?
A) Classical conditioning
B) Operant conditioning
C) Maturation overriding learning
D) Observational learning
Answer: C) Maturation overriding learning Maturation refers to biologically programmed developmental changes that occur regardless of environmental input. Walking is a maturationally driven milestone — practice cannot significantly accelerate it, distinguishing it from learned behaviors.

Q2. John Watson's behavioral perspective argued that psychology should:
A) Study unconscious drives and repressed memories
B) Focus exclusively on observable, measurable behavior
C) Combine introspection with brain scanning
D) Accept free will as a variable in behavior
Answer: B) Focus exclusively on observable, measurable behavior Watson founded behaviorism and insisted psychology must be a strictly objective, experimental science. He rejected introspection and any reference to mental states, arguing only observable behavior is scientifically valid.

Q3. Environmental determinism, as advocated by Watson, holds that:
A) Genetics set hard limits on behavior that environment cannot alter
B) A person's behavior is entirely shaped by their surroundings and experiences
C) Both genes and environment equally determine behavior
D) Internal mental processes are the primary drivers of behavior
Answer: B) A person's behavior is entirely shaped by their surroundings and experiences Watson famously claimed he could take any infant and, given full control of the environment, train them to become any type of specialist — doctor, lawyer, thief — regardless of heredity. This is pure environmental determinism.

SECTION 2: Classical Conditioning & Pavlov

Q4. Ivan Pavlov initially discovered classical conditioning while studying:
A) Voluntary muscle movement in dogs
B) Salivation in dogs in response to food
C) Maze-running behavior in rats
D) Anxiety responses in children
Answer: B) Salivation in dogs in response to food Pavlov was a physiologist studying digestion. He noticed dogs began salivating before food was presented — at the sight of the food bowl or the researcher. This "psychic secretion" led him to investigate conditioned reflexes.

Q5. In Pavlov's original experiment, which of the following correctly maps all four components?
A) Bell = UCS; Salivation to food = UCR; Food = CS; Salivation to bell = CR
B) Food = UCS; Salivation to food = UCR; Bell = CS; Salivation to bell = CR
C) Bell = UCS; Food = CS; Salivation to bell = UCR; Salivation to food = CR
D) Food = CS; Bell = UCS; Salivation = both UCR and CR simultaneously
Answer: B) Food = UCS; Salivation to food = UCR; Bell = CS; Salivation to bell = CR The UCS (unconditioned stimulus) naturally produces a response without learning — food produces salivation (UCR). After pairing with the bell (neutral → CS), the bell alone elicits salivation (CR). The CR is typically weaker than the UCR.

Q6. (Scenario) Maria always burns incense before meditating. After months of this routine, she notices she feels calm and relaxed the moment she smells incense, even before she sits down. Identify the CS, UCS, CR, and UCR.
A) CS = meditation; UCS = incense; CR = calmness; UCR = calmness
B) CS = incense; UCS = meditation; CR = calmness; UCR = calmness
C) CS = incense; UCS = meditation practice; CR = relaxation to incense; UCR = relaxation from meditation
D) UCS = incense; CS = meditation; UCR = relaxation; CR = relaxation to meditation
Answer: C) CS = incense; UCS = meditation practice; CR = relaxation to incense; UCR = relaxation from meditation Meditation naturally induces relaxation (UCR). After repeated pairing with incense (neutral → CS), incense alone now triggers relaxation (CR). The UCS is the meditation itself which unconditionally produces the relaxation response.

Q7. (Passage) Read the following and answer:
"Dr. Reyes conducts an experiment. Every time a tone sounds, he gives a puff of air to participants' eyes, causing them to blink. After 40 pairings, participants blink when they hear the tone alone. Dr. Reyes then sounds the tone repeatedly without the air puff. After 20 trials, blinking to the tone stops. One week later, without any retraining, participants blink slightly when they hear the tone again."
What is the name of the phenomenon in the LAST sentence?
A) Stimulus generalization
B) Extinction
C) Spontaneous recovery
D) Higher-order conditioning
Answer: C) Spontaneous recovery After extinction (the CR disappears), a rest period can cause the CR to reappear at reduced strength — this is spontaneous recovery. It proves extinction suppresses, but does not erase, the conditioned association.

Q8. In the passage above (Q7), what does the process described in sentence 4 ("tone repeated without air puff, blinking stops") represent?
A) Stimulus discrimination
B) Extinction
C) Punishment
D) Counterconditioning
Answer: B) Extinction Extinction in classical conditioning occurs when the CS is repeatedly presented without the UCS. The association weakens and the CR eventually disappears.

Q9. A dog trained to salivate to a 1000 Hz tone also salivates to 950 Hz and 1050 Hz tones, but not to a 500 Hz tone. This illustrates:
A) Extinction of the original response
B) Stimulus discrimination at 500 Hz only
C) Stimulus generalization to similar stimuli
D) Higher-order conditioning
Answer: C) Stimulus generalization to similar stimuli Stimulus generalization is the tendency to respond to stimuli similar to the original CS. The closer the new stimulus is to the original CS, the stronger the CR — this is the generalization gradient.
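The generalization gradient mentioned here can be sketched numerically. A minimal Python sketch, assuming an illustrative Gaussian fall-off: the quiz only implies the qualitative shape, and the function name, the 100 Hz width, and the exact curve are assumptions, not established values.

```python
import math

# Toy generalization gradient: CR strength falls off with distance from the
# original CS (a 1000 Hz tone). The Gaussian shape and 100 Hz width are
# illustrative assumptions; only the qualitative ordering matters.

def cr_strength(tone_hz: float, cs_hz: float = 1000.0, width: float = 100.0) -> float:
    return math.exp(-((tone_hz - cs_hz) ** 2) / (2 * width ** 2))

print(round(cr_strength(1000.0), 2))  # original CS: maximal response -> 1.0
print(round(cr_strength(950.0), 2))   # similar tone: strong response -> 0.88
print(round(cr_strength(500.0), 2))   # distant tone: near zero -> 0.0
```

The 950 Hz and 1050 Hz tones sit close to the peak of the curve, while 500 Hz falls far down the gradient, matching the dog's behavior in Q9.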

Q10. A pigeon is trained to peck at a red light for food. The experimenter then reinforces pecking ONLY to red, never to orange or yellow. Over time the pigeon pecks exclusively at red. This process is called:
A) Extinction
B) Stimulus generalization
C) Stimulus discrimination
D) Spontaneous recovery
Answer: C) Stimulus discrimination Stimulus discrimination is learned through differential reinforcement — responding to the trained stimulus (red) is reinforced while responding to similar stimuli (orange, yellow) is not. The organism learns to distinguish between similar stimuli and respond only to the one that signals reinforcement.

Q11. Which statement about extinction is MOST accurate?
A) The original CS-UCS neural pathway is permanently destroyed
B) The conditioned response is suppressed but the original learning is retained
C) Extinction is identical to forgetting
D) Once extinction occurs, the CS can never elicit the CR again
Answer: B) The conditioned response is suppressed but the original learning is retained Evidence for retained learning includes spontaneous recovery and the fact that reacquisition (relearning) after extinction is much faster than original learning — proving the association was inhibited, not erased.

SECTION 3: Higher-Order Conditioning, CER, Little Albert

Q12. In higher-order conditioning, a new neutral stimulus becomes a CS by being paired with:
A) The UCS directly
B) An already established CS
C) A punishing stimulus
D) A primary reinforcer
Answer: B) An already established CS In higher-order (second-order) conditioning, a well-established CS is used as if it were a UCS to condition a new neutral stimulus. Example: Bell → food (1st order); light → bell (2nd order) — light now elicits salivation without ever being paired with food.

Q13. John Watson and Rosalie Rayner conditioned "Little Albert" to fear a white rat by:
A) Shocking him whenever he saw the rat
B) Making a loud noise every time he reached for the rat
C) Pairing the rat with another feared animal
D) Withholding food when the rat was present
Answer: B) Making a loud noise every time he reached for the rat The UCS was the sudden loud bang of a metal rod (which naturally caused fear/crying — UCR). Paired with the white rat (NS → CS), Albert began to cry and avoid the rat alone (CR). This demonstrated conditioned emotional responses in humans.

Q14. After conditioning, Little Albert showed fear not only of the white rat but also of a white rabbit, a Santa Claus mask, and a fur coat. This is an example of:
A) Higher-order conditioning
B) Stimulus generalization
C) Stimulus discrimination
D) Spontaneous recovery
Answer: B) Stimulus generalization Albert's fear generalized to other white, furry objects sharing perceptual features with the original CS (rat). This illustrates how fears can spread beyond the original conditioning event.

Q15. Vicarious conditioning refers to:
A) Learning a conditioned response by observing another person experience the CS-UCS pairing
B) Conditioning that requires multiple UCS exposures
C) A reflex conditioned through punishment
D) Conditioning that occurs only after extinction
Answer: A) Learning a conditioned response by observing another person experience the CS-UCS pairing In vicarious classical conditioning, you acquire an emotional or physiological response (fear, nausea, attraction) simply by watching someone else react — you don't need to experience the UCS directly yourself.

Q16. Biological preparedness explains why:
A) Any neutral stimulus can be conditioned equally with any UCS
B) Organisms are evolutionarily predisposed to form certain CS-UCS associations more rapidly
C) Extinction always takes longer than acquisition
D) Higher-order conditioning is impossible in most species
Answer: B) Organisms are evolutionarily predisposed to form certain CS-UCS associations more rapidly Garcia & Koelling's work showed rats learn taste-illness (not tone-illness) associations in one trial, even with hours between CS and UCS. Evolution wires us to rapidly associate biologically relevant stimuli — taste with sickness, appearance with danger. Not all stimuli are equally associable.

SECTION 4: Operant Conditioning

Q17. The fundamental difference between classical and operant conditioning is:
A) Classical involves animals; operant involves humans only
B) Classical conditions reflexive/involuntary responses; operant conditions voluntary behavior through consequences
C) Classical uses punishment; operant uses reinforcement
D) Classical requires a neutral stimulus; operant does not
Answer: B) Classical conditions reflexive/involuntary responses; operant conditions voluntary behavior through consequences Classical conditioning pairs stimuli to elicit automatic responses (salivation, fear). Operant conditioning changes the frequency of voluntary behaviors based on their consequences (reinforcement increases frequency; punishment decreases it).

Q18. Edward Thorndike's "Law of Effect" states:
A) Behaviors followed by neutral consequences are repeated most often
B) Behaviors followed by satisfying consequences are strengthened; those followed by unpleasant consequences are weakened
C) All behaviors are instinctive and cannot be modified
D) Punishment always leads to faster learning than reinforcement
Answer: B) Behaviors followed by satisfying consequences are strengthened; those followed by unpleasant consequences are weakened Thorndike placed cats in puzzle boxes. Cats that accidentally triggered the escape mechanism (satisfying consequence: freedom) repeated that behavior more quickly on subsequent trials. This was the foundational insight for all of operant conditioning.

Q19. B.F. Skinner is called the "Father of Modern Behaviorism" primarily because he:
A) Discovered classical conditioning and the conditioned reflex
B) Systematically developed operant conditioning principles and the Skinner box
C) Proposed the Social Learning Theory
D) First used punishment to modify behavior in clinical settings
Answer: B) Systematically developed operant conditioning principles and the Skinner box Skinner created the operant chamber (Skinner box) to precisely measure how schedules of reinforcement shape voluntary behavior. His work extended Thorndike's Law of Effect into a comprehensive science of behavior.

Q20. Which of the following is a PRIMARY reinforcer?
A) Money
B) Praise
C) Food when hungry
D) A gold star sticker
Answer: C) Food when hungry Primary reinforcers satisfy biological drives directly — food, water, warmth, sex. They do not require learning to be reinforcing. Secondary (conditioned) reinforcers like money, grades, and praise acquire value through association with primary reinforcers.

Q21. A teacher gives students tokens for good behavior, which they can exchange for prizes at the end of the week. The tokens are:
A) Primary reinforcers
B) Negative reinforcers
C) Secondary (conditioned) reinforcers
D) Punishing stimuli
Answer: C) Secondary (conditioned) reinforcers Tokens have no intrinsic value but gain reinforcing power through association with primary reinforcers (prizes, food, privileges). This is the basis of a token economy.

Q22. Identify the correct definition of POSITIVE REINFORCEMENT:
A) Adding an unpleasant stimulus to decrease behavior
B) Removing an unpleasant stimulus to increase behavior
C) Adding a pleasant stimulus to increase behavior
D) Removing a pleasant stimulus to decrease behavior
Answer: C) Adding a pleasant stimulus to increase behavior "Positive" = adding something; "Reinforcement" = behavior increases. Example: a dog sits → gets a treat → sits more often. The treat is added (+) and sitting increases (reinforcement).

Q23. A student studies hard to stop feeling anxious about exams. Studying is maintained by:
A) Positive reinforcement
B) Positive punishment
C) Negative reinforcement
D) Negative punishment
Answer: C) Negative reinforcement "Negative" = removing something; "Reinforcement" = behavior increases. Anxiety (unpleasant stimulus) is removed when studying occurs → studying increases. Negative reinforcement is NOT punishment — it always increases behavior.

Q24. (Hard Distinction) A teenager keeps their room clean to avoid losing their phone. This is BEST described as:
A) Positive reinforcement — gaining phone privileges
B) Negative reinforcement — avoiding phone removal
C) Negative punishment — phone is removed for dirty room
D) Positive punishment — chores are added for dirty room
Answer: B) Negative reinforcement — avoiding phone removal The teenager is removing/avoiding an aversive consequence (losing phone). Behavior (cleaning) is maintained by the removal of something unpleasant. This is negative reinforcement. If the phone had already been taken and cleaning gets it back, it's still negative reinforcement (removal of aversive state).

Q25. A child hits a sibling and is made to do 50 push-ups. This is:
A) Negative reinforcement
B) Negative punishment
C) Positive punishment
D) Extinction
Answer: C) Positive punishment "Positive" = something is added; "Punishment" = behavior decreases. An aversive stimulus (push-ups) is added following the behavior (hitting) to reduce its future frequency.

Q26. A child loses 30 minutes of video game time for failing to do homework. This is:
A) Negative reinforcement
B) Positive punishment
C) Negative punishment
D) Extinction
Answer: C) Negative punishment "Negative" = something is removed; "Punishment" = behavior decreases. A pleasant stimulus (game time) is taken away following the undesired behavior. Also called "response cost."

Q27. (Scenario) Four students receive different consequences for talking in class:
  • Student A: receives detention (added aversive event, behavior decreases)
  • Student B: loses recess (removed pleasant event, behavior decreases)
  • Student C: teacher ignores them completely (behavior decreases over time)
  • Student D: gets extra praise for raising hand (added pleasant event, behavior increases)
Match each student to the correct operant process:
A) A=Positive Punishment; B=Negative Punishment; C=Extinction; D=Positive Reinforcement
B) A=Negative Punishment; B=Positive Punishment; C=Positive Reinforcement; D=Extinction
C) A=Positive Punishment; B=Extinction; C=Negative Punishment; D=Positive Reinforcement
D) A=Negative Reinforcement; B=Negative Punishment; C=Extinction; D=Positive Punishment
Answer: A) A=Positive Punishment; B=Negative Punishment; C=Extinction; D=Positive Reinforcement A: Adding detention (aversive) → Positive Punishment. B: Removing recess (pleasant) → Negative Punishment. C: Removing all consequences (ignoring) → Extinction. D: Adding praise (pleasant) → Positive Reinforcement.
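The add/remove-versus-increase/decrease logic drilled in Q22–Q27 reduces to a two-by-two lookup. A minimal Python sketch (the function name and string labels are illustrative, not from the quiz):

```python
# Toy classifier for the operant-conditioning 2x2:
# "positive" = a stimulus is ADDED, "negative" = a stimulus is REMOVED;
# "reinforcement" = the behavior INCREASES, "punishment" = it DECREASES.

def classify(stimulus_change: str, behavior_change: str) -> str:
    """stimulus_change: 'added' or 'removed'; behavior_change: 'increases' or 'decreases'."""
    sign = {"added": "positive", "removed": "negative"}[stimulus_change]
    kind = {"increases": "reinforcement", "decreases": "punishment"}[behavior_change]
    return f"{sign} {kind}"

# The students from Q27:
print(classify("added", "decreases"))    # Student A (detention)   -> positive punishment
print(classify("removed", "decreases"))  # Student B (lost recess) -> negative punishment
print(classify("added", "increases"))    # Student D (praise)      -> positive reinforcement
```

Note that extinction (Student C) falls outside the 2x2: no stimulus is added or removed, so the lookup deliberately does not cover it.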

SECTION 5: Schedules of Reinforcement

Q28. The "partial reinforcement effect" means:
A) Behaviors reinforced only sometimes are MORE resistant to extinction than continuously reinforced behaviors
B) Partial reinforcement produces faster initial learning than continuous reinforcement
C) Behaviors must be reinforced on every trial to be maintained
D) Partial reinforcement always leads to superstitious behavior
Answer: A) Behaviors reinforced only sometimes are MORE resistant to extinction than continuously reinforced behaviors Continuous reinforcement (CRF) produces fastest acquisition but fastest extinction. Partial (intermittent) reinforcement creates behaviors that are very hard to extinguish because the organism has learned that non-reinforcement doesn't signal the end of reinforcement availability.

Q29. Slot machines in casinos reinforce gambling on which schedule?
A) Fixed ratio
B) Fixed interval
C) Variable ratio
D) Variable interval
Answer: C) Variable ratio Variable ratio (VR) reinforces after an unpredictable number of responses. This produces the highest and most steady response rate with the greatest resistance to extinction — exactly what makes gambling addictive. The gambler never knows which pull will pay off.

Q30. A factory worker is paid $10 for every 5 items assembled. This is a:
A) Fixed interval schedule
B) Variable ratio schedule
C) Fixed ratio schedule
D) Variable interval schedule
Answer: C) Fixed ratio schedule Fixed ratio (FR) reinforces after a set, predictable number of responses. This produces a high response rate with a characteristic "post-reinforcement pause" — the worker pauses briefly after each payment before resuming.

Q31. A teacher gives pop quizzes at unpredictable times. Students who study consistently score best. What schedule best describes students' studying behavior?
A) Fixed ratio
B) Fixed interval
C) Variable interval
D) Variable ratio
Answer: C) Variable interval Variable interval (VI) reinforces the first correct response after an unpredictable time period. It produces steady, moderate response rates — students study consistently because they don't know when the quiz will come.

Q32. Employees who receive a paycheck every two weeks are on which schedule?
A) Fixed ratio
B) Variable ratio
C) Variable interval
D) Fixed interval
Answer: D) Fixed interval Fixed interval (FI) reinforces the first response after a fixed time period. It produces a "scallop" pattern — slow responding after reinforcement, accelerating as the interval end approaches. Workers may "coast" after payday and work harder as the next payday nears.

Q33. (Scenario) A researcher observes four rats on four different schedules:
  • Rat 1: Highest and most consistent response rate, very resistant to extinction
  • Rat 2: Responds in "scallop" pattern, pauses after reinforcement
  • Rat 3: Post-reinforcement pause, then bursts of high activity
  • Rat 4: Steady moderate responding, resistant to extinction
Match rats to schedules (VR, FI, FR, VI):
A) Rat1=VR; Rat2=FI; Rat3=FR; Rat4=VI
B) Rat1=VI; Rat2=FR; Rat3=FI; Rat4=VR
C) Rat1=FR; Rat2=VR; Rat3=VI; Rat4=FI
D) Rat1=FI; Rat2=VR; Rat3=VI; Rat4=FR
Answer: A) Rat1=VR; Rat2=FI; Rat3=FR; Rat4=VI VR: highest, consistent rate, most extinction-resistant. FI: scallop pattern (low after reinforcement, high near end of interval). FR: post-reinforcement pause then high rate. VI: steady moderate rate, good extinction resistance.
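The four schedules Rats 1–4 illustrate differ only in the rule that triggers reinforcement. A minimal Python sketch of those rules, with illustrative function names and parameter values (the 5-response ratio and 60-second interval are examples, not from the quiz):

```python
import random

# Minimal reinforcement-schedule rules: each function decides whether the
# current response earns a reward. Ratio schedules count responses;
# interval schedules track time since the last reward.

def fixed_ratio(n_responses: int, ratio: int = 5) -> bool:
    # Reward every `ratio`-th response (e.g., $10 per 5 items, Q30).
    return n_responses % ratio == 0

def variable_ratio(mean_ratio: int = 5) -> bool:
    # Reward with probability 1/mean_ratio, so the required response
    # count is unpredictable (slot machines, Q29).
    return random.random() < 1 / mean_ratio

def fixed_interval(seconds_since_reward: float, interval: float = 60.0) -> bool:
    # First response after a fixed time is rewarded (biweekly paycheck, Q32).
    return seconds_since_reward >= interval

def variable_interval(seconds_since_reward: float, next_interval: float) -> bool:
    # First response after an unpredictable time is rewarded (pop quizzes, Q31).
    return seconds_since_reward >= next_interval

print(fixed_ratio(5))  # 5th item assembled -> payout
print(fixed_ratio(3))  # mid-ratio -> no payout yet
```

The behavioral signatures in Q33 fall out of these rules: predictable triggers (fixed_ratio, fixed_interval) invite post-reinforcement pauses and scalloping, while unpredictable ones (variable_ratio, variable_interval) sustain steady responding.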

Q34. Skinner noticed that pigeons given food at random intervals began performing elaborate rituals (turning in circles, bowing). This illustrates:
A) Instinctive drift
B) Superstitious reinforcement
C) Latent learning
D) Insight learning
Answer: B) Superstitious reinforcement When reinforcement is delivered non-contingently (randomly), whatever behavior happened to occur just before the reward gets accidentally reinforced. The pigeon "believes" that behavior caused the food, creating superstitious rituals.

SECTION 6: Punishment — Drawbacks & Proper Use

Q35. Which of the following is NOT a recognized drawback of severe punishment?
A) It suppresses behavior without teaching the correct alternative
B) It can cause aggression and emotional disturbance
C) It always produces permanent behavior change
D) It may generalize, causing the person to fear the punisher or setting
Answer: C) It always produces permanent behavior change Severe punishment suppresses behavior temporarily but does not eliminate the underlying motivation. It teaches what NOT to do, not what TO do. It can also cause avoidance of the punishing agent, increased aggression, and negative emotional responses.

Q36. For punishment to be most effective, it should be:
A) Severe, delayed, and inconsistent
B) Immediate, consistent, and paired with explanation of acceptable behavior
C) Unpredictable so the individual stays vigilant
D) Applied only in public settings
Answer: B) Immediate, consistent, and paired with explanation of acceptable behavior Effective punishment is: immediate (minimizes time between behavior and consequence), consistent (every occurrence punished), moderate in intensity, and combined with reinforcement of desired alternative behavior.

SECTION 7: Discriminative Stimulus, Shaping, Chaining

Q37. A discriminative stimulus (Sᴰ) signals that:
A) A behavior will be punished
B) Reinforcement is NOT available
C) A particular behavior will be reinforced in this context
D) Extinction is about to occur
Answer: C) A particular behavior will be reinforced in this context A discriminative stimulus sets the occasion for a behavior by signaling reinforcement availability. Example: an "OPEN" sign on a restaurant door is an Sᴰ — it signals that entering (behavior) will lead to being served (reinforcement).

Q38. (Scenario) A dog only sits on command when the owner holds a treat in their hand. Without the treat, the dog ignores the command. The treat held in the owner's hand is functioning as:
A) A primary reinforcer
B) A conditioned stimulus
C) A discriminative stimulus
D) A punishing stimulus
Answer: C) A discriminative stimulus The treat in the owner's hand signals to the dog that sitting will be reinforced. It cues the behavior rather than reinforcing it directly. This is a discriminative stimulus — it controls when the behavior occurs.

Q39. Shaping uses a process called successive approximation, which means:
A) Waiting for the complete target behavior before reinforcing
B) Reinforcing progressively closer approximations to the target behavior
C) Punishing behaviors that deviate from the target
D) Presenting the target behavior and asking the learner to copy it
Answer: B) Reinforcing progressively closer approximations to the target behavior Shaping is used when a target behavior is not yet in the organism's repertoire. By reinforcing behaviors that increasingly resemble the target and extinguishing earlier approximations, the full target behavior is gradually developed.

Q40. Chaining differs from shaping because:
A) Chaining teaches a single behavior; shaping teaches a sequence
B) Chaining links a series of already-learned behaviors into a sequence; shaping develops a novel behavior
C) Chaining uses punishment; shaping uses reinforcement
D) Chaining requires a discriminative stimulus; shaping does not
Answer: B) Chaining links a series of already-learned behaviors into a sequence; shaping develops a novel behavior In chaining, each response in a sequence serves as the discriminative stimulus for the next response and is reinforced by completing that next response — until the final behavior produces the terminal reinforcer. This is used to teach complex multi-step skills.

SECTION 8: Behavior Modification Techniques

Q41. A token economy is classified as a behavior modification technique based on:
A) Negative punishment — tokens are removed for bad behavior only
B) Secondary reinforcement — tokens gain value through association with primary reinforcers
C) Classical conditioning — tokens are conditioned stimuli
D) Shaping — tokens gradually approximate primary reinforcers
Answer: B) Secondary reinforcement — tokens gain value through association with primary reinforcers Tokens are secondary reinforcers. Their power comes entirely from being exchangeable for primary or other reinforcers. Token economies are widely used in psychiatric units, schools, and ABA therapy.

Q42. Time-out is BEST described as:
A) Positive punishment — adding a period of sitting still
B) Negative reinforcement — removing an aversive activity
C) Negative punishment — removing access to reinforcement
D) Extinction — ignoring behavior
Answer: C) Negative punishment — removing access to reinforcement Time-out (time-out from positive reinforcement) removes the individual from access to rewarding environments or stimuli. It is a form of negative punishment — something desirable (the reinforcing environment) is taken away.

Q43. Biofeedback uses real-time physiological data to help individuals:
A) Condition involuntary physiological responses through operant control
B) Extinguish conditioned fear responses through flooding
C) Apply token economies to self-monitoring
D) Practice vicarious conditioning through watching others
Answer: A) Condition involuntary physiological responses through operant control Biofeedback provides real-time data on heart rate, muscle tension, brain waves, etc., allowing individuals to learn to voluntarily regulate what are normally involuntary responses. Neurofeedback is a specific type targeting brainwave activity.

Q44. Neurofeedback specifically trains individuals to modify:
A) Muscle tension patterns
B) Heart rate variability
C) Brainwave activity (EEG patterns)
D) Galvanic skin response
Answer: C) Brainwave activity (EEG patterns) Neurofeedback is EEG biofeedback. Individuals see real-time displays of their brain activity and learn to produce desired brainwave patterns. Used for ADHD, epilepsy, anxiety, and peak performance training.

SECTION 9: Classical vs. Operant — Comparison

Q45. Which comparison between classical and operant conditioning is MOST accurate?
Feature        | Classical            | Operant
Response type  | Involuntary reflex   | Voluntary behavior
Key figure     | Pavlov               | Skinner/Thorndike
Mechanism      | Stimulus association | Consequence-based
Extinction     | CS without UCS       | Behavior without reinforcement
A) All four rows are correct
B) Only the "Key figure" row is correct
C) Only "Response type" and "Mechanism" are correct
D) None of the rows are correct
Answer: A) All four rows are correct This comparison accurately captures the core distinctions: classical conditioning pairs stimuli to elicit automatic responses; operant conditioning modifies voluntary behavior through consequences. Both have extinction processes but with different mechanisms.

Q46. Mary Cover Jones is called the "Mother of Behavior Therapy" because she:
A) Developed the Skinner box for animal training
B) Demonstrated the first therapeutic application of classical conditioning by eliminating a child's phobia
C) Proposed the Premack Principle
D) Conducted the Little Albert experiment
Answer: B) Demonstrated the first therapeutic application of classical conditioning by eliminating a child's phobia Jones (1924) worked with "Little Peter," a boy afraid of rabbits. Using gradual exposure while eating (a pleasant activity), she eliminated the fear — anticipating systematic desensitization by decades and establishing behavior therapy.

Q47. Reciprocal inhibition, as described by Joseph Wolpe, states that:
A) A conditioned response can be inhibited by pairing the CS with an unconditioned aversive stimulus
B) Two incompatible responses (e.g., relaxation and anxiety) cannot occur simultaneously
C) Extinction inhibits spontaneous recovery permanently
D) Classical and operant conditioning cannot occur in the same organism simultaneously
Answer: B) Two incompatible responses (e.g., relaxation and anxiety) cannot occur simultaneously Wolpe used reciprocal inhibition as the basis for systematic desensitization — if you pair the feared stimulus with deep relaxation (an incompatible response), anxiety is inhibited and eventually extinguished.

Q48. The correct sequence of steps in systematic desensitization is:
A) Exposure → Relaxation training → Hierarchy construction
B) Relaxation training → Hierarchy construction → Gradual exposure while relaxed
C) Flooding → Relaxation training → Hierarchy construction
D) Hierarchy construction → Flooding → Desensitization
Answer: B) Relaxation training → Hierarchy construction → Gradual exposure while relaxed Step 1: Client learns deep muscle relaxation. Step 2: Client and therapist construct a fear hierarchy from least to most frightening stimuli. Step 3: Client is gradually exposed to each step while maintaining relaxation, working up the hierarchy.

SECTION 10: Conditioned Taste Aversion

Q49. Conditioned taste aversion is a special form of classical conditioning because:
A) It requires hundreds of trials to develop
B) It can develop after a single CS-UCS pairing, even with a long delay between them
C) It involves operant rather than reflexive responses
D) It cannot be extinguished once formed
Answer: B) It can develop after a single CS-UCS pairing, even with a long delay between them. Garcia and Koelling showed rats developed taste aversion after ONE pairing of a flavor with illness, even when illness followed hours later. This violates the typical rules of contiguity and frequency in conditioning and reflects biological preparedness.

Q50. A cancer patient receiving chemotherapy develops a strong aversion to a food eaten before treatment. This is BEST explained by:
A) Operant conditioning — the food caused punishment
B) Biological preparedness and conditioned taste aversion
C) Stimulus discrimination
D) Insight learning
Answer: B) Biological preparedness and conditioned taste aversion. The nausea from chemotherapy (UCS → UCR) becomes associated with the food eaten before treatment (CS → CR of aversion), even though the food didn't cause the nausea. Evolution primes us to form rapid taste-illness associations as a survival mechanism.

SECTION 11: Cognitive Maps & Latent Learning (Tolman)

Q51. Edward Tolman's maze study with three rat groups demonstrated latent learning. Which group BEST demonstrated that learning had occurred without reinforcement?
A) Group 1 — rewarded every day, performed best throughout
B) Group 2 — rewarded only starting on day 11, dramatically improved almost immediately
C) Group 3 — never rewarded, never improved
D) Group 1 and Group 3 equally
Answer: B) Group 2 — rewarded only starting on day 11, dramatically improved almost immediately. Group 2 had been running the maze unrewarded, apparently learning nothing. But when reward was introduced, their performance jumped to match Group 1 almost immediately — revealing they had formed a cognitive map (latent learning) during unrewarded trials.

Q52. Tolman's concept of a "cognitive map" challenged behaviorism because it implied:
A) Reinforcement is unnecessary for operant behavior
B) Internal mental representations (not just stimulus-response connections) mediate behavior
C) Classical conditioning requires no neutral stimulus
D) Punishment is more effective than reinforcement
Answer: B) Internal mental representations (not just stimulus-response connections) mediate behavior. Tolman showed that learning is not just S-R connections but involves forming internal cognitive representations of the environment. This introduced cognition into learning theory, challenging pure behaviorism.

Q53. In Tolman's study, Group 3 (never rewarded):
A) Learned the maze as well as Group 1
B) Eventually outperformed Group 2 with extended practice
C) Did not learn the maze well, confirming reinforcement's role in performance
D) Demonstrated the highest rate of spontaneous recovery
Answer: C) Did not learn the maze well, confirming reinforcement's role in performance. Group 3, never rewarded, never improved. This shows that while learning can occur without reinforcement (Group 2), motivation (reward) is important for performance. Tolman distinguished learning (acquisition) from performance.

SECTION 12: Biological Constraints

Q54. Instinctive drift refers to:
A) Animals that spontaneously develop new instincts through conditioning
B) The tendency of trained behaviors to drift back toward species-typical instinctive behaviors over time
C) The biological limit on how many behaviors can be learned
D) Conditioned behaviors that are biologically impossible
Answer: B) The tendency of trained behaviors to drift back toward species-typical instinctive behaviors over time. Breland & Breland observed that despite successful initial training, animals' behavior would revert to instinct-based behaviors (e.g., raccoons "washing" coins instead of depositing them). Instinctual behaviors compete with and override learned behaviors.

Q55. Wolfgang Köhler's work with chimpanzees demonstrated "insight learning" — this involved:
A) Gradual trial-and-error learning over many sessions
B) A sudden "aha" moment solution to a novel problem, without prior step-by-step shaping
C) Operant shaping of language using food reinforcement
D) Formation of conditioned taste aversions
Answer: B) A sudden "aha" moment solution to a novel problem, without prior step-by-step shaping. Köhler placed bananas out of reach. Chimpanzees initially failed, then suddenly — after a pause suggesting mental processing — stacked boxes or joined sticks to retrieve the bananas. This "insight" could not be explained by simple S-R learning or shaping.

Q56. The Premack Principle, proposed by David Premack, states:
A) Any stimulus can serve as a reinforcer if it precedes behavior
B) A more preferred (high-probability) activity can reinforce a less preferred (low-probability) activity
C) Punishment is more effective than reinforcement for complex behaviors
D) Organisms can only be reinforced by primary reinforcers
Answer: B) A more preferred (high-probability) activity can reinforce a less preferred (low-probability) activity. "Grandma's Rule" is the informal version: "Eat your vegetables (low-preference) and THEN you can have dessert (high-preference)." The high-frequency preferred behavior acts as the reinforcer for the low-frequency behavior.

Q57. Martin Seligman's concept of learned helplessness was demonstrated when:
A) Dogs given uncontrollable shocks later failed to escape avoidable shocks
B) Rats that received no food eventually stopped pressing the lever
C) Pigeons trained on variable ratio became superstitious
D) Chimpanzees failed to solve insight tasks when given distractors
Answer: A) Dogs given uncontrollable shocks later failed to escape avoidable shocks. Seligman found that dogs exposed to inescapable shocks learned that their actions had no effect on outcomes. When later placed in situations where escape WAS possible, they passively accepted the shock — they had learned they were helpless. This models depression in humans.

Q58. Locus of control refers to:
A) The brain region controlling voluntary behavior
B) A person's belief about whether they control outcomes (internal) or outcomes are controlled by outside forces (external)
C) The number of discriminative stimuli an organism can respond to
D) The degree of biological preparedness for a specific behavior
Answer: B) A person's belief about whether they control outcomes (internal) or outcomes are controlled by outside forces (external). Julian Rotter proposed this concept. Internal locus: "My success/failure is due to my own effort and ability." External locus: "What happens to me is controlled by luck, fate, or powerful others." Seligman's learned helplessness is linked to an external locus of control.

Q59. A student who fails an exam thinks: "I should have studied harder — I'll do better next time." This reflects:
A) External locus of control
B) Learned helplessness
C) Internal locus of control
D) Reciprocal determinism
Answer: C) Internal locus of control. The student attributes the outcome to their own behavior (studying) — a controllable, internal factor. Internal locus of control is associated with better academic achievement, resilience, and mental health outcomes.

SECTION 13: Social Learning Theory (Bandura)

Q60. Albert Bandura's Social Learning Theory extended behaviorism by emphasizing:
A) That all learning requires direct personal experience with reinforcement
B) That behavior, personal factors, and environment interact in reciprocal determinism
C) That biological drives exclusively control behavior
D) That reinforcement is unnecessary for learning
Answer: B) That behavior, personal factors, and environment interact in reciprocal determinism. Bandura proposed that behavior, cognitive/personal factors, and environment all mutually influence each other — not a one-way causal chain. This broke from simple stimulus-response behaviorism.

Q61. In the Bobo Doll experiment, Bandura demonstrated that children:
A) Learned aggression only through direct reinforcement for hitting
B) Imitated aggressive behavior simply by observing an adult model being aggressive
C) Only imitated behavior if they were punished for non-imitation
D) Did not imitate behavior that was punished in the model
Answer: B) Imitated aggressive behavior simply by observing an adult model being aggressive. Children who watched an adult punch, kick, and verbally abuse a Bobo doll reproduced those specific behaviors without any direct reinforcement. This established observational learning as a major mechanism of behavior acquisition.

Q62. In Bandura's Bobo Doll study, a key finding about vicarious reinforcement was:
A) Children who saw the model punished showed no reduction in learning (they still knew how to be aggressive)
B) Children who saw the model punished showed complete extinction of the aggressive behavior
C) Vicarious punishment eliminated both learning and performance of aggression equally
D) Children only imitated models who were the same sex as themselves
Answer: A) Children who saw the model punished showed no reduction in learning (they still knew how to be aggressive). Children who saw the model punished performed LESS aggression spontaneously — but when offered incentives, they showed they had learned the aggressive behaviors. This distinguishes learning (acquisition) from performance, and shows vicarious punishment affects performance, not learning.

Q63. Vicarious reinforcement in observational learning means:
A) Being directly rewarded for imitating a model
B) Learning is strengthened by watching a model receive reinforcement
C) Reinforcement delivered on a variable schedule
D) Imagining receiving reinforcement without observing it
Answer: B) Learning is strengthened by watching a model receive reinforcement. When we observe a model being rewarded for a behavior, we are more motivated to imitate that behavior. This is vicarious (experienced through others) reinforcement — we learn the behavior AND the likely consequences without direct experience.

Q64. The FOUR elements required for observational learning (Bandura) are:
A) Attention, Memory, Imitation (motor reproduction), Desire (motivation)
B) Conditioning, Reinforcement, Punishment, Extinction
C) Stimulus, Response, Reinforcer, Discriminative stimulus
D) Insight, Latent learning, Cognitive map, Biological preparedness
Answer: A) Attention, Memory, Imitation (motor reproduction), Desire (motivation). All four must be present: 1) Attention — notice the behavior; 2) Memory/Retention — encode and remember it; 3) Motor Reproduction/Imitation — physically able to reproduce it; 4) Motivation/Desire — have reason to perform it (including vicarious reinforcement).

Q65. (Scenario) A student watches a YouTube tutorial on skateboarding tricks, remembers the steps, practices them, but only keeps practicing because they want to impress their friends. Which element is "wanting to impress friends"?
A) Attention
B) Memory
C) Motor reproduction
D) Motivation (desire)
Answer: D) Motivation (desire). The desire/motivation element asks: "Why would I perform this behavior?" Anticipated rewards — social approval, money, internal satisfaction — supply the motivational force to translate learning into actual performance.

Q66. Reciprocal determinism means:
A) A person's behavior is entirely determined by the environment
B) Behavior, cognitive factors, and environment each influence the others in a bidirectional cycle
C) Reinforcement and punishment reciprocally balance each other
D) Classical and operant conditioning reciprocally produce the same behaviors
Answer: B) Behavior, cognitive factors, and environment each influence the others in a bidirectional cycle. Unlike Watson's one-way environmental determinism, Bandura's reciprocal determinism is a three-way interaction: Person ↔ Behavior ↔ Environment. Your behavior changes your environment, which changes you, which changes your behavior, etc.

SECTION 14: Advanced Scenario-Based & Integration Questions

Q67. (Complex Scenario) Sarah was bitten by a dog at age 7 (a large German Shepherd). Now at age 30, she is afraid of all dogs — large and small. She is not afraid of wolves (which she's never seen in person) but is terrified of even small breeds. Her therapist first teaches her deep muscle relaxation, then has her look at pictures of tiny dogs while relaxed, then sit near a small dog in a cage, and eventually pet a small dog. Over months she is no longer afraid.
Which technique did the therapist use?
A) Token economy
B) Flooding
C) Systematic desensitization
D) Aversion therapy
Answer: C) Systematic desensitization. The therapist used relaxation training + a fear hierarchy (pictures → caged dog → petting) with gradual exposure while relaxed. This is classic systematic desensitization, based on reciprocal inhibition — relaxation and fear cannot coexist.

Q68. In the scenario above (Q67), Sarah's fear spreading to all dogs (not just German Shepherds) but NOT to wolves illustrates:
A) Extinction only
B) Stimulus generalization (to dogs) but some stimulus discrimination (wolves excluded)
C) Higher-order conditioning only
D) Instinctive drift
Answer: B) Stimulus generalization (to dogs) but some stimulus discrimination (wolves excluded). Sarah generalized her fear to the category "dogs" but discriminated wolves as sufficiently different. Stimulus generalization and discrimination operate simultaneously on a continuum — the more similar a stimulus is to the original CS, the stronger the generalized response.

Q69. (Passage) "A 10-year-old watches his older brother play violent video games and watches the brother get praised by friends for making aggressive jokes. The next day, the younger boy begins making similar aggressive jokes at school. When a teacher gives him detention, the behavior decreases. However, the boy still KNOWS how to make the aggressive jokes — he just chooses not to."
This passage BEST illustrates which combination of concepts?
A) Classical conditioning + extinction + spontaneous recovery
B) Observational learning + vicarious reinforcement + the distinction between learning and performance
C) Operant conditioning + negative punishment + latent learning
D) Shaping + token economy + learned helplessness
Answer: B) Observational learning + vicarious reinforcement + the distinction between learning and performance. The younger brother acquired the behavior through observation (observational learning) motivated by watching his brother receive social reward (vicarious reinforcement). Detention decreased the performance but not the learning — demonstrating Bandura's learning vs. performance distinction.

Q70. Which of the following BEST illustrates higher-order conditioning?
A) A dog salivates to food (UCS) and later salivates to a bell (CS1) paired with food
B) A dog that salivates to a bell (CS1) now salivates to a light (CS2) that was paired with the bell — without the light ever being paired with food
C) A dog stops salivating to the bell after the bell is presented without food
D) A dog salivates to both the bell and a similar-sounding buzzer
Answer: B) A dog that salivates to a bell (CS1) now salivates to a light (CS2) that was paired with the bell — without the light ever being paired with food. Higher-order conditioning uses an established CS1 as though it were a UCS to condition a new CS2. The CS2 (light) was never directly paired with food yet elicits salivation because CS1 (bell) "stands in" for the UCS.

Q71. (Hard Scenario) A psychologist is treating a patient with alcohol use disorder. She pairs images of alcohol (CS) with a drug that causes intense nausea (UCS). After several sessions, the patient feels nauseated just seeing alcohol. Which technique is this, and what type of conditioning underlies it?
A) Systematic desensitization; classical conditioning
B) Aversion therapy; classical conditioning
C) Token economy; operant conditioning
D) Flooding; operant conditioning
Answer: B) Aversion therapy; classical conditioning. Aversion therapy uses classical conditioning to pair a previously desired stimulus (alcohol) with an aversive UCS (nausea-inducing drug). The goal is to condition a negative CR (nausea/disgust) to the previously attractive CS (alcohol), reducing approach behavior.

Q72. Which pairing of researcher → contribution is ENTIRELY correct?
A) Pavlov=operant conditioning; Skinner=classical conditioning; Bandura=biological preparedness
B) Thorndike=Law of Effect/puzzle box; Pavlov=classical conditioning; Bandura=social learning theory
C) Tolman=shaping; Köhler=latent learning; Premack=insight learning
D) Seligman=token economy; Watson=social learning; Jones=learned helplessness
Answer: B) Thorndike=Law of Effect/puzzle box; Pavlov=classical conditioning; Bandura=social learning theory. All three are correctly matched. Thorndike studied cats in puzzle boxes and formulated the Law of Effect. Pavlov discovered classical conditioning via dogs. Bandura proposed Social Learning Theory and reciprocal determinism.

Q73. (Extreme Scenario) A rat is placed in an operant chamber. It receives food for pressing a lever. A researcher then introduces a red light. When the red light is ON, pressing the lever produces food. When the red light is OFF, pressing produces nothing. The rat quickly learns to press only when the red light is on.
The red light is functioning as:
A) A conditioned stimulus
B) A discriminative stimulus
C) A secondary reinforcer
D) A punishing stimulus
Answer: B) A discriminative stimulus. The red light signals when the operant behavior (lever pressing) will be reinforced. It controls when the behavior occurs — the defining feature of a discriminative stimulus. This differs from a CS (which triggers a reflex response directly).

Q74. (Integration) Which of the following sequences CORRECTLY maps the progression of behavior modification using shaping?
A child is learning to write the letter "A":
  1. Child randomly makes any mark → reinforced with praise
  2. Child makes a vertical line → reinforced
  3. Child makes two diagonal lines → reinforced
  4. Child makes the crossbar → reinforced
  5. Complete "A" formed → heavily reinforced
A) This is chaining, not shaping, because each step adds to the previous
B) This is shaping via successive approximations — behaviors progressively closer to the target are reinforced
C) This is a fixed ratio schedule — one reinforcement per completed letter
D) This is higher-order conditioning — each step conditions the next
Answer: B) This is shaping via successive approximations — behaviors progressively closer to the target are reinforced. Shaping requires reinforcing each behavior that is closer to the target and extinguishing earlier approximations. The sequence shown progressively narrows toward the complete letter "A" — textbook successive approximation.

Q75. (Final Integration) Marcus was shy and avoided social situations (external locus of control; believed others controlled his social success). His therapist: (1) Had him watch a confident person successfully start conversations (vicarious learning) (2) Praised him every time he attempted any social interaction (positive reinforcement) (3) Taught him that his own behavior determines social outcomes (internal locus shift) (4) Used token economy for consistent social practice
Which theoretical frameworks are represented in steps 1–4 respectively?
A) Social Learning Theory; Operant Conditioning; Cognitive/Seligman's Locus of Control; Behavior Modification
B) Classical Conditioning; Negative Reinforcement; Biological Preparedness; Chaining
C) Latent Learning; Instinctive Drift; Systematic Desensitization; Biofeedback
D) Observational Learning; Positive Punishment; Environmental Determinism; Token Economy
Answer: A) Social Learning Theory; Operant Conditioning; Cognitive/Seligman's Locus of Control; Behavior Modification. Step 1: Watching confident model = Social Learning/Observational Learning. Step 2: Praise for behavior = Positive Reinforcement (Operant). Step 3: Shifting belief about control = Internal Locus of Control (Seligman/Rotter). Step 4: Token economy = Behavior modification program based on secondary reinforcement.

📊 Topic Coverage Summary

Topic Area | Questions Covered
Learning/Maturation/Watson | Q1–Q3
Classical Conditioning (all components) | Q4–Q11
Higher-Order, CER, Little Albert, Vicarious, Preparedness | Q12–Q16
Operant Conditioning (Skinner, Thorndike, reinforcers) | Q17–Q21
+/- Reinforcement & Punishment (identify & apply) | Q22–Q27
Schedules of Reinforcement (all 4 + PRE) | Q28–Q34
Punishment drawbacks & proper use | Q35–Q36
Discriminative Stimulus, Shaping, Chaining | Q37–Q40
Behavior Modification (token economy, time-out, biofeedback) | Q41–Q44
Classical vs. Operant comparison | Q45
Mary Cover Jones, Desensitization, Reciprocal Inhibition | Q46–Q48
Conditioned Taste Aversion | Q49–Q50
Tolman (Latent Learning, Groups 1–3) | Q51–Q53
Biological Constraints (instinctive drift, insight, Premack) | Q54–Q56
Learned Helplessness, Locus of Control | Q57–Q59
Bandura: Social Learning, Bobo Doll, Vicarious, 4 Elements, Reciprocal Determinism | Q60–Q66
Integration/Advanced Scenarios | Q67–Q75

75 questions total — every concept from your list is covered with at least one question, many concepts appear in multiple question types (direct definition, scenario, passage, identification). All answers include mechanistic explanations to maximize conceptual learning.

Give practice no. 2 — same everything but a little harder, mixed and scenario-based like Section 14. Give practice two similar to Section 14. Include every concept in multiple question types (direct definition, scenario, passage, identification).

🧠 Psychology – Learning & Behaviorism

PRACTICE SET 2 | 75 MCQs | Difficulty: 10/10

Every question: Scenario-Based, Passage-Based, or Complex Integration


PASSAGE 1 — Read carefully, answer Q1–Q7

"Dr. Harmon is a researcher studying emotional learning. She brings a 9-month-old infant named Tommy into the lab. Every time Tommy reaches for a wooden block (which he enjoys), a loud horn sounds behind him. After 7 pairings, Tommy cries and pulls away whenever he sees the block — even before touching it. Dr. Harmon then notices Tommy also cries when he sees similarly shaped rectangular objects — a matchbox, a small book — but does NOT cry when he sees a round ball. Later, Dr. Harmon presents the wooden block 30 times in a row without the horn. Tommy's crying gradually stops. Two weeks later, with no further training, Tommy whimpers slightly when shown the block again."

Q1. In this study, what is the UCS, UCR, CS, and CR in the correct order?
A) Block = UCS; Crying to block = UCR; Horn = CS; Crying to horn = CR
B) Horn = UCS; Crying to horn = UCR; Block = CS; Crying to block = CR
C) Crying = UCS; Block = UCR; Horn = CS; Tommy = CR
D) Block = CS; Horn = UCS; Pulling away = UCR; Crying = CR only after pairing
Answer: B) Horn = UCS; Crying to horn = UCR; Block = CS; Crying to block = CR. The horn naturally causes crying without learning (UCS → UCR). After repeated pairing with the block (neutral → CS), the block alone causes crying (CR). Note: the CR (crying to block) typically appears slightly weaker than the UCR (crying to horn) — a key distinguishing feature.

Q2. Tommy crying at rectangular objects (matchbox, small book) but NOT at a round ball illustrates which TWO processes occurring simultaneously?
A) Extinction and spontaneous recovery
B) Stimulus generalization AND stimulus discrimination
C) Higher-order conditioning and extinction
D) Vicarious conditioning and biological preparedness
Answer: B) Stimulus generalization AND stimulus discrimination. Generalization: Tommy responds to stimuli similar to the original CS (rectangular shapes). Discrimination: Tommy does NOT respond to sufficiently dissimilar stimuli (round ball). Both processes operate together — generalization broadens the response; discrimination narrows it based on dissimilarity.

Q3. The process of presenting the block 30 times without the horn, causing Tommy's crying to stop, is called:
A) Spontaneous recovery
B) Stimulus discrimination training
C) Extinction
D) Counterconditioning
Answer: C) Extinction. Extinction in classical conditioning occurs when the CS (block) is repeatedly presented without the UCS (horn). The conditioned association weakens and the CR (crying) disappears. Crucially, the original learning is not erased — it is suppressed.

Q4. Tommy whimpering two weeks later (without any retraining) is an example of:
A) Re-acquisition
B) Spontaneous recovery
C) Higher-order conditioning
D) Stimulus generalization
Answer: B) Spontaneous recovery. After a rest period following extinction, the CR returns at reduced strength — this is spontaneous recovery. It is one of the strongest pieces of evidence that extinction suppresses rather than erases the original CS-UCS association.

Q5. This experiment is MOST similar to which famous historical study?
A) Tolman's latent learning maze experiment
B) Skinner's operant chamber studies
C) Watson and Rayner's Little Albert experiment
D) Bandura's Bobo Doll experiment
Answer: C) Watson and Rayner's Little Albert experiment. Watson conditioned Little Albert to fear a white rat using a loud noise — identical procedure. Tommy is essentially a replication of the Little Albert paradigm: a neutral object paired with a fear-inducing noise to produce a conditioned emotional response in an infant.

Q6. Dr. Harmon later pairs the BLOCK (now established CS) with a flash of blue light — without ever pairing the blue light with the horn. After several pairings, Tommy shows mild distress at the blue light alone. This demonstrates:
A) Stimulus generalization to blue light
B) Second-order (higher-order) conditioning
C) Spontaneous recovery transferred to a new stimulus
D) Vicarious conditioning
Answer: B) Second-order (higher-order) conditioning. The block (CS1) is used as though it were a UCS to condition the blue light (CS2 — new neutral stimulus). CS2 never pairs directly with the UCS (horn) yet elicits a CR, because CS1 "carries" the associative strength. This is the defining feature of higher-order conditioning.

Q7. Tommy's mother tells the researchers she ALSO cries when she smells antiseptic solution, ever since a painful medical procedure years ago. She learned this fear through watching her own mother cry at hospitals, without ever having a traumatic procedure herself. Her fear was acquired through:
A) Classical conditioning via direct experience
B) Vicarious conditioning
C) Operant conditioning via negative reinforcement
D) Latent learning
Answer: B) Vicarious conditioning. Vicarious classical conditioning occurs when an observer acquires a conditioned emotional/physiological response by watching another person experience the CS-UCS pairing — without direct personal experience. The mother "learned" the fear through observation of her mother's response.

PASSAGE 2 — Read carefully, answer Q8–Q14

"A behavioral researcher places a rat named Rex in an operant chamber. Initially, Rex receives a food pellet every time he presses the lever (Condition A). Next, the researcher only delivers food after every 5th lever press (Condition B). Later, Rex receives food after an unpredictable number of presses — sometimes 3, sometimes 10, sometimes 7 (Condition C). In a final condition, a green light is turned on and off randomly — Rex only receives food when the green light is ON and he presses the lever (Condition D). After Condition C, the researcher disconnects the food mechanism entirely. Rex continues pressing the lever for a remarkably long time before stopping."

Q8. Condition A (food every press) is which schedule? What is its primary disadvantage?
A) Fixed ratio; produces rapid extinction
B) Continuous reinforcement; produces fastest extinction when stopped
C) Variable ratio; produces unpredictable behavior
D) Fixed interval; produces scallop pattern
Answer: B) Continuous reinforcement; produces fastest extinction when stopped. Continuous reinforcement (CRF) — one reinforcement per response — produces fastest initial learning but fastest extinction. Because every response has always been reinforced, non-reinforcement immediately signals "the contingency has ended," causing rapid extinction.

Q9. Condition B (food every 5th press) is which schedule, and what behavioral pattern does it produce?
A) Variable ratio; highest steady response rate
B) Fixed ratio; high response rate with a post-reinforcement pause
C) Fixed interval; scallop-shaped responding
D) Variable interval; steady moderate responding
Answer: B) Fixed ratio; high response rate with a post-reinforcement pause. FR-5 reinforces after every 5th response. This produces high response rates because more responses = more rewards, but the organism pauses briefly after each reinforcement (post-reinforcement pause) before starting the next ratio run.

Q10. Condition C (unpredictable number of presses required) is which schedule, and why does Rex press so long after extinction begins?
A) Fixed interval; post-reinforcement pauses make extinction gradual
B) Variable ratio; the unpredictable ratio creates maximum resistance to extinction (partial reinforcement effect)
C) Variable interval; time-based uncertainty slows extinction
D) Fixed ratio; ratio strain causes gradual decrease
Answer: B) Variable ratio; the unpredictable ratio creates maximum resistance to extinction (partial reinforcement effect). VR produces the highest steady response rate AND the greatest resistance to extinction — this is the partial reinforcement effect. Rex cannot tell whether non-reinforcement means "contingency ended" or simply "this is one of those longer stretches" — so he keeps pressing far longer than under CRF.

Q11. The green light in Condition D is functioning as:
A) A conditioned stimulus triggering a reflexive response
B) A discriminative stimulus signaling when lever pressing will be reinforced
C) A secondary reinforcer paired with food
D) A punishing stimulus that suppresses pressing
Answer: B) A discriminative stimulus signaling when lever pressing will be reinforced. The green light signals "lever pressing NOW leads to food." Sᴰ (discriminative stimulus) sets the occasion for operant behavior by indicating reinforcement availability. When the light is OFF, pressing is not reinforced — Rex learns to press only during the light (stimulus control).

Q12. Skinner once observed that pigeons given food at random intervals (not contingent on any specific behavior) developed idiosyncratic rituals — turning in circles, bowing repeatedly. Rex, in Condition A before the lever was introduced, occasionally pressed a corner of the cage right before a randomly timed pellet dropped, and began pressing that corner repeatedly. This illustrates:
A) Instinctive drift — behavior reverting to species-typical patterns
B) Superstitious reinforcement — accidental contingency between arbitrary behavior and reward
C) Latent learning — unrewarded exploration forming a cognitive map
D) Biological preparedness — corner-pressing is biologically prepared in rats
Answer: B) Superstitious reinforcement — accidental contingency between arbitrary behavior and reward. Superstitious behavior develops when reinforcement is delivered non-contingently but happens to follow a particular behavior by chance. The organism "acts as if" that behavior caused the reward and repeats it. The behavior is maintained by the accidental reinforcement history.

Q13. A new researcher argues they should use CONTINUOUS punishment (shock every single incorrect behavior) to eliminate Rex's unwanted behaviors most efficiently. Based on the drawbacks of severe punishment, which response BEST critiques this approach?
A) Punishment cannot decrease behavior frequency under any circumstances
B) Continuous severe punishment suppresses behavior temporarily, may cause emotional disturbance and aggression, doesn't teach correct behavior, and may generalize to suppress all behavior or cause fear of the entire testing environment
C) Punishment is only ineffective in animals — it works reliably in humans
D) The only drawback is that punishment takes longer to work than reinforcement
Answer: B) Continuous severe punishment suppresses behavior temporarily, may cause emotional disturbance and aggression, doesn't teach correct behavior, and may generalize to suppress all behavior or cause fear of the entire testing environment. Well-documented drawbacks of severe punishment: (1) suppresses but doesn't eliminate behavior; (2) causes fear/avoidance of the punisher and setting; (3) models aggression; (4) produces emotional disturbance; (5) doesn't teach the correct alternative behavior. Effective punishment is immediate, consistent, moderate, and paired with reinforcement of alternatives.

Q14. A researcher uses Rex's behavior to train him to run a complex obstacle course: first, any movement toward the course is reinforced; then only entering the start zone; then moving to the first obstacle; and so on until the full course is run. This training method is:
A) Chaining B) Shaping via successive approximation C) Fixed ratio scheduling D) Latent learning exploitation
Answer: B) Shaping via successive approximation Shaping reinforces behaviors progressively closer to the target. Each step is reinforced while earlier approximations are extinguished, gradually building the full target behavior from scratch. This is used when the final behavior is not yet in the organism's repertoire.

PASSAGE 3 — Read carefully, answer Q15–Q21

"Dr. Voss runs a classroom-based behavior modification program. Students earn poker chips for completing assignments on time, raising their hand, and helping classmates. At the end of the week, chips can be exchanged for extra recess, snacks, or homework passes. One student, Kai, has been repeatedly sent to a small room alone for 10 minutes every time he disrupts the class. Another student, Priya, had her recess time reduced by 15 minutes for talking back. A third student, Leo, was given extra chores as a consequence of his aggressive behavior. Dr. Voss also uses a brain-activity monitoring system to help a student with ADHD learn to produce calmer, more focused brain wave patterns through real-time feedback."

Q15. The poker chip system in Dr. Voss's classroom is a:
A) Primary reinforcement system using basic biological rewards B) Token economy based on secondary reinforcement C) Continuous reinforcement schedule using fixed ratio D) Systematic desensitization program
Answer: B) Token economy based on secondary reinforcement Tokens (poker chips) are secondary (conditioned) reinforcers — they have no intrinsic value but acquire reinforcing power through association with primary/preferred reinforcers (snacks, recess). Token economies are among the most empirically supported behavior modification interventions.

Q16. Kai being placed alone in a room for 10 minutes is an example of:
A) Positive punishment — adding an aversive stimulus B) Negative reinforcement — removing an aversive stimulus C) Negative punishment — removing access to reinforcement (time-out) D) Extinction — ignoring the disruptive behavior
Answer: C) Negative punishment — removing access to reinforcement (time-out) Time-out = time-out FROM positive reinforcement. A desirable environment (classroom with peers, stimulation) is removed following the unwanted behavior. "Negative" = removing something; "Punishment" = behavior decreases. Time-out only works if the original environment is reinforcing — if class itself is aversive to Kai, time-out will backfire.

Q17. Priya losing 15 minutes of recess for talking back is:
A) Positive punishment — adding chores B) Negative punishment — removing a pleasant activity (response cost) C) Negative reinforcement — avoiding loss of more recess D) Extinction — ignoring talking back
Answer: B) Negative punishment — removing a pleasant activity (response cost) Response cost is a form of negative punishment where a specified amount of a reinforcer is removed following an undesired behavior. Recess (pleasant, rewarding) is removed following talking back, aiming to decrease the frequency of talking back.

Q18. Leo receiving extra chores for aggression is:
A) Negative punishment B) Negative reinforcement C) Positive punishment D) Extinction
Answer: C) Positive punishment Something aversive (extra chores) is ADDED following an undesired behavior (aggression) to decrease its future frequency. Positive = adding; Punishment = behavior decreases.

Q19. The brain-activity monitoring system Dr. Voss uses with the ADHD student is called:
A) Biofeedback — general physiological monitoring B) Neurofeedback — specifically training brainwave (EEG) patterns C) Systematic desensitization — pairing calm states with stimuli D) Reciprocal inhibition training
Answer: B) Neurofeedback — specifically training brainwave (EEG) patterns Neurofeedback is a specific subtype of biofeedback that provides real-time EEG information, allowing the individual to learn to operantly control their own brainwave patterns (increasing alpha/SMR waves associated with calm focus, decreasing theta waves associated with inattention). This uses operant conditioning principles on what are normally involuntary processes.

Q20. Dr. Voss wants to teach a new student to raise their hand before speaking — a behavior the student has never done. She starts by reinforcing any quiet pause before speaking, then reinforcing any hand movement upward, then reinforcing a partial raise, then requiring a full raised hand. This is:
A) Chaining — linking existing behaviors in sequence B) Successive approximation (shaping) C) Fixed interval reinforcement D) Higher-order conditioning applied to behavior
Answer: B) Successive approximation (shaping) The target behavior (hand raising) is not initially in the student's repertoire in that form. Dr. Voss reinforces progressively closer approximations — each step narrows in on the final target behavior while earlier approximations are no longer reinforced (differential reinforcement).

Q21. If Dr. Voss wanted to teach the full morning routine (enter class → hang up coat → sit down → take out materials → begin warm-up) as a linked sequence where each completed step cues the next, she would use:
A) Shaping B) Chaining C) A fixed ratio schedule D) A token economy alone
Answer: B) Chaining Chaining links already-learned or newly trained behaviors into a sequence. Each behavior serves as the discriminative stimulus for the next AND is reinforced by the opportunity to perform the next step, with the terminal reinforcer (beginning work) completing the chain. This is used to teach multi-step routines and complex skills.

PASSAGE 4 — Read carefully, answer Q22–Q28

"Dr. Tolman places three groups of rats in an identical maze. Group 1: Runs the maze daily and receives food at the end each day. By day 5, they navigate near-perfectly. Group 2: Runs the maze daily with NO food reward for the first 10 days. Their performance appears poor and random. On day 11, food is introduced. By day 12, their performance matches Group 1 — almost as if they had been learning all along. Group 3: Runs the maze daily and NEVER receives food. They never improve significantly. Dr. Tolman later trains rats in a maze, then blocks the familiar route and opens a different entry point. Rats go directly toward the goal box location despite the altered entry — suggesting they 'know where' the goal is, not just 'which turns to make.'"

Q22. What does Group 2's sudden improvement on day 11 demonstrate?
A) The partial reinforcement effect — intermittent reward increases resistance to extinction B) Latent learning — learning occurred during unrewarded trials but was only expressed when motivation (reward) was introduced C) Insight learning — an "aha moment" solution appeared on day 11 D) Superstitious reinforcement — food accidentally paired with correct navigation
Answer: B) Latent learning — learning occurred during unrewarded trials but was only expressed when motivation (reward) was introduced Latent learning is learning that occurs without obvious reinforcement and remains hidden until incentive (motivation) is provided. Group 2 formed a cognitive map of the maze during unrewarded runs; food provided the motivation to use it. This challenged behaviorism's claim that reinforcement is necessary for learning.

Q23. Group 3 (never rewarded, never improves) serves what critical purpose in Tolman's study?
A) It demonstrates that latent learning can occur even without any experience B) It serves as the control, showing that maze exposure alone (without cognitive engagement or reward) is insufficient — confirming reinforcement matters for performance C) It proves that all learning requires reward D) It demonstrates that biological constraints prevent rats from learning mazes without reward
Answer: B) It serves as the control, showing that maze exposure alone (without cognitive engagement or reward) is insufficient — confirming reinforcement matters for performance Group 3 isolates the role of reward. Group 2's sudden day-11 improvement shows that acquisition had occurred silently during unrewarded trials; Group 3 shows that without any incentive, that learning is never expressed in performance. Together, the groups demonstrate that reward affects performance (expression) but is not strictly necessary for acquisition (learning).

Q24. The finding that rats navigated directly to the goal box from a new entry point — demonstrating they knew the goal's LOCATION, not just a sequence of turns — is evidence for:
A) Stimulus generalization across maze orientations B) Instinctive drift — rats naturally navigate to food locations C) A cognitive map — an internal spatial representation of the environment D) Insight learning applied to navigation
Answer: C) A cognitive map — an internal spatial representation of the environment Tolman coined "cognitive map" to describe the internal mental representation of the spatial layout of an environment. Rats weren't running a fixed stimulus-response chain of turns — they had an overview-style representation allowing flexible navigation from any entry point.

Q25. Tolman's findings were revolutionary because they:
A) Confirmed that all behavior is determined purely by reinforcement history B) Introduced the concept of internal cognitive representations into a field dominated by pure stimulus-response behaviorism C) Proved that classical conditioning is more powerful than operant conditioning D) Demonstrated that biological instincts prevent effective maze learning
Answer: B) Introduced the concept of internal cognitive representations into a field dominated by pure stimulus-response behaviorism Tolman's work was a direct challenge to Skinner and Watson's pure behaviorism, which rejected internal mental processes. By showing that rats formed internal maps, Tolman helped pave the way for cognitive psychology and cognitive-behavioral approaches.

Q26. A student studies for an exam while listening to jazz music. During the exam there is silence. The student performs well. Later the student listens to jazz while NOT studying and doesn't think about course material at all. Which of Tolman's concepts BEST explains why the student still retained the academic content?
A) Latent learning during jazz sessions transferred to exam context B) The jazz music was a discriminative stimulus triggering memory retrieval C) Cognitive maps of the subject formed during studying were retained and expressed when tested D) Spontaneous recovery of previously extinguished memory occurred
Answer: C) Cognitive maps of the subject formed during studying were retained and expressed when tested Tolman's cognitive map concept extends beyond physical mazes — it applies to any organized internal representation of learned material. The student formed an internal representation of the content that was retained and accessed during testing, regardless of the music context.

Q27. The critical distinction Tolman's study draws between "learning" and "performance" most directly parallels which of Bandura's findings?
A) Reciprocal determinism — behavior changes environment which changes behavior B) Children who watched a punished model showed no imitation (performance) but still knew the aggressive behaviors (learning) when incentivized C) The four elements of observational learning must all be present for any behavior to occur D) Vicarious reinforcement increases motivation to perform observed behaviors
Answer: B) Children who watched a punished model showed no imitation (performance) but still knew the aggressive behaviors (learning) when incentivized Both Tolman and Bandura found the same fundamental distinction: learning (acquisition) can occur without performance, and performance depends on motivation/reward. In both cases, what was "hidden" learning became visible when incentive was provided.

Q28. Which biological constraint could explain why Group 3 rats NEVER learned the maze even with extensive exposure?
A) Instinctive drift — rats instinctively return to home territory, preventing maze learning B) The absence of biologically relevant consequences meant no adaptive value triggered neural consolidation of the spatial information C) Biological preparedness prevents rats from forming cognitive maps in artificial environments D) Conditioned taste aversion interfered with maze exploration
Answer: B) The absence of biologically relevant consequences meant no adaptive value triggered neural consolidation of the spatial information This question integrates biological constraints with cognitive learning. While Tolman showed Group 2 rats did form latent maps, Group 3 rats apparently did not encode the maze well enough — possibly because without any biological relevance (food, escape, survival), the neural machinery for spatial consolidation was not fully engaged.

PASSAGE 5 — Read carefully, answer Q29–Q35

"Mariana grew up watching her mother react with intense fear every time a thunderstorm occurred — covering her ears, hiding in a closet, and trembling. By age 8, Mariana also showed intense fear during thunderstorms despite never having had a personally traumatic storm experience. At 25, Mariana seeks therapy. Her therapist first teaches her progressive muscle relaxation. Then they construct a hierarchy: (1) seeing a photo of clouds → (2) hearing soft rain sounds → (3) watching a storm video → (4) sitting near a window in light rain → (5) standing outside in a moderate storm. After six weeks, Mariana no longer experiences fear at any level of the hierarchy."

Q29. Mariana's acquisition of storm fear through watching her mother is an example of:
A) Classical conditioning through direct experience B) Operant conditioning — her mother's hiding behavior was negatively reinforced C) Vicarious conditioning — emotional responses learned by observing another's reactions D) Biological preparedness — humans are evolutionarily primed to fear storms
Answer: C) Vicarious conditioning — emotional responses learned by observing another's reactions Mariana never experienced the UCS (traumatic storm) directly. She acquired the fear CR by observing her mother's fear response, making this vicarious classical conditioning — a form of observational learning of emotional responses.

Q30. The therapeutic process Mariana's therapist uses is called:
A) Flooding — immediate full exposure to the feared stimulus B) Aversion therapy — pairing the feared stimulus with an aversive event C) Systematic desensitization — gradual exposure while maintaining relaxation D) Token economy — reinforcing approach behaviors toward storms
Answer: C) Systematic desensitization — gradual exposure while maintaining relaxation Systematic desensitization (Wolpe) = (1) relaxation training, (2) fear hierarchy construction, (3) gradual imaginal/in vivo exposure while maintaining relaxation. It is based on reciprocal inhibition — relaxation and fear are incompatible responses that cannot coexist.

Q31. The principle underlying systematic desensitization — that relaxation and anxiety cannot occur simultaneously — is called:
A) Stimulus discrimination B) Reciprocal inhibition C) The Premack Principle D) Latent inhibition
Answer: B) Reciprocal inhibition Wolpe's reciprocal inhibition states that two incompatible physiological states cannot coexist — relaxation inhibits anxiety. By maintaining relaxation while exposing the client to feared stimuli, anxiety is progressively inhibited until the CS no longer elicits it.

Q32. Which psychologist is credited as the "Mother of Behavior Therapy" for first demonstrating this type of fear elimination clinically?
A) Mary Cover Jones — eliminated a child's phobia using gradual exposure + pleasant activity B) Rosalie Rayner — eliminated Little Albert's fear using counterconditioning C) Mary Ainsworth — eliminated attachment anxiety through systematic proximity D) Anna Freud — eliminated phobias through talk therapy with children
Answer: A) Mary Cover Jones — eliminated a child's phobia using gradual exposure + pleasant activity Jones (1924) treated "Little Peter" — a boy afraid of rabbits — by gradually bringing the rabbit closer while Peter ate his favorite food (a pleasant, incompatible state). This anticipated systematic desensitization by 30 years and established behavior therapy as a clinical discipline.

Q33. During therapy, Mariana notices that she now feels slightly uneasy at the sound of any white noise machine — a device often running in waiting rooms. The white noise had been playing softly in the therapy office during early sessions when Mariana was discussing her most distressing storm memories. The white noise becoming aversive is BEST explained by:
A) Stimulus generalization from the storm to white noise B) Higher-order conditioning — therapy distress (CS1) conditioned the white noise (CS2) C) Instinctive drift — humans instinctively avoid white noise D) Conditioned taste aversion applied to sound
Answer: B) Higher-order conditioning — therapy distress (CS1) conditioned the white noise (CS2) Storm-related memories and discussions functioned as CS1 — stimuli already conditioned to elicit anxiety. The white noise (neutral), repeatedly present during these anxiety-evoking sessions, became a CS2 through higher-order conditioning: it was never directly paired with the original UCS (a traumatic storm), only with CS1 (storm-related anxiety cues).

Q34. Mariana's therapist notes that she stopped fearing storms only in the therapy office initially, but still feared them at home. This limitation of the therapy reflects:
A) Spontaneous recovery after extinction B) Failure of stimulus generalization — extinction in one context doesn't automatically generalize C) Biological preparedness overriding extinction D) Instinctive drift causing re-emergence of fear
Answer: B) Failure of stimulus generalization — extinction in one context doesn't automatically generalize Context specificity of extinction is a well-documented phenomenon. Extinction is partially context-dependent — the CS-no UCS learning is encoded with contextual cues. Therapy in a different setting from where fear occurs means the extinction may not generalize. This is why in-vivo (real-world) exposure is critical.

Q35. Mariana later reports that just THINKING about her mother hiding in the closet during storms still makes her feel anxious, though storms themselves no longer do. This persistent emotional association between her mother's behavior and anxiety is:
A) A conditioned taste aversion B) A conditioned emotional response (CER) formed through vicarious conditioning C) Latent learning about her mother's behavior D) A superstitious reinforcement pattern
Answer: B) A conditioned emotional response (CER) formed through vicarious conditioning Conditioned Emotional Responses are classically conditioned emotional reactions to previously neutral stimuli. Mariana vicariously conditioned a CER to her mother's storm-related behaviors. Even after desensitization to storms, this separately conditioned CER to the memory/image of her mother's behavior persists — illustrating that different CS-UCS associations can be independently extinguished.

PASSAGE 6 — Read carefully, answer Q36–Q43

"Professor Kaur teaches psychology and observes several students: — Jordan studies intensively only the night before exams and coasts the rest of the semester. — Sam studies steadily every day, never knowing when a pop quiz will occur. — Devon gets paid $2 for every essay paragraph written. — Alex receives a $50 bonus for every 5 lab reports submitted. — Riley presses a vending machine button that dispenses a snack after an unpredictable number of button combinations. Professor Kaur also designs an experiment: she tells students that for the next 8 weeks, they will receive bonus points only on Tuesdays for completing readings. Students read intensively on Mondays but barely at all on Wednesday through Saturday."

Q36. Jordan's studying behavior (cramming before exams, coasting otherwise) maps to which schedule of reinforcement?
A) Variable ratio — unpredictable study session produces reward B) Fixed interval — scallop pattern of low responding early, high responding near the reward deadline C) Fixed ratio — studying a set number of hours triggers a reward D) Variable interval — studying at random times is reinforced
Answer: B) Fixed interval — scallop pattern of low responding early, high responding near the reward deadline Fixed interval (FI) produces the "scallop" — organisms respond little immediately after reinforcement and accelerate as the interval's end approaches. Jordan coasts (low response rate after the last exam) and crams intensively as the next exam (reward deadline) nears. Classic FI scallop.

Q37. Sam's steady daily studying (not knowing when pop quizzes occur) is on which schedule?
A) Fixed ratio B) Fixed interval C) Variable interval D) Variable ratio
Answer: C) Variable interval Variable interval (VI) delivers reinforcement for the first correct response after an unpredictable time period. Sam cannot predict WHEN the quiz will come, so studying consistently every day is the optimal strategy. VI produces steady, moderate response rates — the most stable of the four schedules.

Q38. Devon ($2 per paragraph) and Alex ($50 per 5 lab reports) are both on ratio schedules. What is the KEY difference between them?
A) Devon is on variable ratio; Alex is on fixed ratio B) Devon is on continuous reinforcement; Alex is on fixed ratio C) Both are on fixed ratio but with different ratios (FR-1 for Devon, FR-5 for Alex) D) Devon is on fixed ratio FR-1; Alex is on fixed interval
Answer: C) Both are on fixed ratio but with different ratios (FR-1 for Devon, FR-5 for Alex) Devon receives $2 after every paragraph = FR-1 (every single response reinforced — technically also continuous reinforcement). Alex receives $50 after every 5 reports = FR-5. Both show high rates and post-reinforcement pauses, but Alex's pauses will be proportionally larger (larger ratio = larger post-reinforcement pause).

Q39. Riley's vending machine behavior (unpredictable number of button combinations required) is on which schedule, and why is this schedule the MOST resistant to extinction?
A) Fixed ratio — high consistent response rate B) Variable ratio — organism cannot distinguish "extinction" from "just a longer stretch without reward" C) Variable interval — time unpredictability creates persistence D) Fixed interval — the scallop creates sustained effort near the end
Answer: B) Variable ratio — organism cannot distinguish "extinction" from "just a longer stretch without reward" VR produces maximum resistance to extinction because the unpredictable nature of reinforcement means any given non-reinforced trial could just be part of the normal variability. Riley has no way to know when the machine has been disconnected vs. when this is just a longer-than-usual run without reward — so pressing continues.

Q40. Professor Kaur's experiment (bonus points only on Tuesdays, students read intensively on Mondays) illustrates:
A) Variable ratio schedule producing steady responding B) Fixed interval scallop — accelerating responses as the Tuesday deadline approaches C) The Premack Principle — high-probability behavior reinforcing low-probability behavior D) Latent learning — students learn the material without needing the bonus
Answer: B) Fixed interval scallop — accelerating responses as the Tuesday deadline approaches The Tuesday bonus = FI reinforcement. Students learn that early-week reading doesn't "count" toward the approaching deadline, so they respond minimally early (Wednesday–Saturday) and intensively near the reinforcement deadline (Monday). This is the textbook FI scallop.

Q41. Professor Kaur wants to MAXIMIZE resistance to extinction for study habits after her course ends. Based on reinforcement principles, she should SWITCH from weekly scheduled exams to:
A) No reinforcement at all — extinction begins immediately B) Continuous reinforcement — reward every single study session C) Partial reinforcement (variable ratio or variable interval) — resistance to extinction is maximized D) Fixed ratio reinforcement — one reward per chapter completed
Answer: C) Partial reinforcement (variable ratio or variable interval) — resistance to extinction is maximized The partial reinforcement effect (PRE): behaviors reinforced intermittently are far more resistant to extinction than continuously reinforced behaviors. Switching to partial reinforcement after establishing the behavior creates habits that persist even when reinforcement becomes unavailable (after the course ends).

Q42. A student in Professor Kaur's class who scored 100% on the last 3 exams begins to carry a specific "lucky pencil" to every exam. He performed well on those exams with that pencil but has no logical basis for believing it helps. This is BEST explained by:
A) Instinctive drift — returning to familiar exam behaviors B) Superstitious reinforcement — accidental contingency between carrying the pencil and exam success C) Higher-order conditioning — the pencil became a CS2 D) Latent learning — the pencil triggered hidden knowledge
Answer: B) Superstitious reinforcement — accidental contingency between carrying the pencil and exam success The pencil was present during exam success (a rewarding outcome). The student's carrying behavior was accidentally reinforced, leading to the superstitious belief. This mirrors Skinner's pigeons — organisms attribute causality to whatever behavior coincidentally preceded reinforcement.

Q43. Professor Kaur introduces a rule: students who are caught on phones during class lose their participation grade for the day. Two students respond differently: Marcus stops using his phone entirely; Keisha uses it MORE covertly. The consequence (losing grade) worked for Marcus but not Keisha because:
A) The punishment was too severe for Marcus but too mild for Keisha B) For punishment to work, the behavior must be consistently detected and immediately punished — Keisha learned that covert use goes undetected, making punishment inconsistent and ineffective C) Marcus experienced negative reinforcement; Keisha experienced positive punishment D) Keisha was on a variable ratio schedule of phone use that made extinction impossible
Answer: B) For punishment to work, the behavior must be consistently detected and immediately punished — Keisha learned that covert use goes undetected, making punishment inconsistent and ineffective For punishment to suppress behavior effectively, it must be: (1) immediate, (2) consistent (applied every time the behavior occurs), (3) inescapable, and (4) moderate. Keisha discovered she could avoid punishment through covert use — inconsistent punishment can actually increase the target behavior by teaching "how to avoid getting caught."

PASSAGE 7 — Read carefully, answer Q44–Q52

"Dr. Alvarez is studying social influences on behavior. She shows Group A children a video of an adult model punching a Bobo doll, then receiving praise from another adult. Group B watches the same video but the model is scolded. Group C watches a neutral video with no aggression. All children are then placed alone with a Bobo doll and various toys. Group A children immediately imitate the model's specific punching behaviors. Group B children show little aggression spontaneously. Group C children show minimal aggression. Dr. Alvarez then tells ALL children they will receive stickers if they can show her all the aggressive behaviors they remember. Both Group A and Group B demonstrate the full range of aggressive behaviors — Group B's performance matches Group A when incentivized. Dr. Alvarez also interviews the children afterward. Some children say they 'didn't want to' hit the doll even though they knew how."

Q44. This experiment is a replication of whose work, using which paradigm?
A) Tolman's maze study — demonstrating latent learning through delayed reward B) Bandura's Bobo Doll experiment — demonstrating observational learning and vicarious reinforcement C) Seligman's learned helplessness — demonstrating passive acceptance of negative outcomes D) Watson's Little Albert study — demonstrating classical conditioning of fear
Answer: B) Bandura's Bobo Doll experiment — demonstrating observational learning and vicarious reinforcement Bandura's original Bobo Doll studies (1961, 1963) used exactly this design — model rewarded, model punished, control group — to study observational learning and the role of vicarious consequences on performance of learned behaviors.

Q45. The fact that Group B (model punished) did NOT imitate spontaneously but DID imitate when offered stickers demonstrates:
A) That Group B had never learned the behaviors B) The critical distinction between learning (acquisition through observation) and performance (expression dependent on motivation) C) That vicarious punishment permanently suppressed the neural circuits for aggression D) That continuous reinforcement always overrides vicarious punishment effects
Answer: B) The critical distinction between learning (acquisition through observation) and performance (expression dependent on motivation) Group B learned the behaviors through observation (acquisition) but didn't perform them because vicarious punishment (seeing model scolded) reduced motivation. When incentive was provided, learning was revealed. This is the same learning-vs-performance distinction as Tolman's Group 2 rats — parallel finding across species.

Q46. Group A's spontaneous imitation was driven by:
A) Direct positive reinforcement of their own aggressive behaviors B) Vicarious reinforcement — seeing the model praised increased their motivation to imitate C) Biological preparedness for aggression D) Conditioned emotional response to the Bobo doll
Answer: B) Vicarious reinforcement — seeing the model praised increased their motivation to imitate Vicarious reinforcement occurs when an observer sees a model rewarded for behavior. This increases the observer's motivation to perform that same behavior — without the observer receiving any direct reinforcement. Group A was motivated to imitate because they had vicariously experienced the social reward.

Q47. List the FOUR elements required for Group A children to successfully imitate the model, in the correct order of necessity:
A) Desire → Memory → Attention → Imitation B) Attention → Memory → Imitation ability → Motivation/Desire C) Memory → Attention → Desire → Imitation D) Imitation → Attention → Memory → Desire
Answer: B) Attention → Memory → Imitation ability → Motivation/Desire Bandura's four elements: (1) Attention — must notice and attend to the model; (2) Retention/Memory — must encode and remember behaviors; (3) Motor Reproduction/Imitation — must be physically capable of performing behaviors; (4) Motivation/Desire — must have reason to perform (vicarious reinforcement, anticipated reward). All four are necessary; absence of any one prevents imitation.

Q48. Some children said they "didn't want to" hit the Bobo doll even though they "knew how." This reflects which element of observational learning being ABSENT?
A) Attention — they weren't paying attention to the model B) Memory — they couldn't encode the aggressive behaviors C) Motor reproduction — they were physically incapable of the behaviors D) Motivation/Desire — they had no incentive to perform the behavior
Answer: D) Motivation/Desire — they had no incentive to perform the behavior "Didn't want to" directly indicates motivational absence. The children had attention (they watched), memory (they knew how), and motor ability (they were physically capable) — but lacked the desire or motivation to translate learning into performance.

Q49. Dr. Alvarez notes that children who came from homes where parents modeled prosocial behavior tended to show less aggression than those from homes where aggression was modeled. She concludes that home environment, parenting behavior, and child behavior mutually influence each other. This is Bandura's concept of:
A) Latent learning applied to social environments B) Learned helplessness across family systems C) Reciprocal determinism — behavior, personal factors, and environment all influence each other bidirectionally D) Biological preparedness for prosocial behavior
Answer: C) Reciprocal determinism — behavior, personal factors, and environment all influence each other bidirectionally Reciprocal determinism: The child's behavior influences the home environment (aggressive child changes family dynamics), the environment influences the child (aggressive home models aggression), and the child's personal factors (beliefs, cognitive schema) influence how they interpret both. It's a three-way bidirectional system.

Q50. After the Bobo doll session, one child — Marcus — is afraid of the Bobo doll itself because it was present while he watched an older child in a separate incident get hurt playing aggressively. The Bobo doll's associative value here was acquired through:
A) Operant conditioning — touching the doll was punished B) Classical conditioning — the doll (CS) was paired with pain/fear (UCS) C) Vicarious conditioning — he observed another child's fear of the doll D) Higher-order conditioning using the aggression video as CS1
Answer: B) Classical conditioning — the doll (CS) was paired with pain/fear (UCS) The doll (originally a neutral stimulus) was present when Marcus witnessed the injury (UCS eliciting a UCR of fear), so the doll now elicits fear as a CR. Option C (vicarious conditioning) is tempting, but vicarious conditioning means acquiring an emotional response by observing a model's fearful reaction to the object itself. Here no one reacted fearfully to the doll; it simply became associated with an aversive event that occurred in its presence. That makes direct classical conditioning the more precise account of the doll's "associative value."

Q51. Which researcher would argue that Marcus's acquired fear of the Bobo doll could be most effectively treated by pairing the doll with Marcus's favorite snack while gradually increasing proximity?
A) B.F. Skinner — using positive reinforcement to override fear B) Mary Cover Jones — using counterconditioning (pleasant activity + gradual exposure) as she did with Little Peter C) Edward Tolman — building a cognitive map of the safe environment D) Martin Seligman — using internal locus of control retraining
Answer: B) Mary Cover Jones — using counterconditioning (pleasant activity + gradual exposure) as she did with Little Peter Jones's method with Little Peter (1924) — the first clinical behavior therapy — paired a feared object (rabbit) with a pleasant stimulus (food) during gradual exposure. This directly applies here: snack + gradual proximity to Bobo doll. Jones is the "Mother of Behavior Therapy" for pioneering exactly this technique.

Q52. Dr. Alvarez hypothesizes that children raised with NO models of any behavior will not develop any social behaviors. This reflects which theoretical position?
A) Biological preparedness — instincts override all modeling B) Watson's environmental determinism — the environment exclusively shapes behavior C) Bandura's social learning theory — behavior is acquired through observation of models; no models = no social learning D) Tolman's cognitive map theory — without environments to map, no behavior forms
Answer: C) Bandura's social learning theory — behavior is acquired through observation of models; no models = no social learning Bandura's Social Learning Theory posits that observational learning (modeling) is a primary mechanism of behavior acquisition. Without models, children would lack crucial templates for social behavior. This differs from Watson's pure environmental determinism in that cognition and observation mediate learning.

PASSAGE 8 — Read carefully, answer Q53–Q62

"A comprehensive psychology exam question presents the following vignette:
'Alejandro has been depressed for three years. He believes nothing he does matters — every job application, every social attempt fails regardless of effort. He has stopped trying. His therapist notes that Alejandro attributes all outcomes to fate and other people. Meanwhile, Alejandro's brother Roberto submitted the same number of job applications and also failed, but he analyzed his failures, made targeted improvements, and eventually succeeded. Roberto now believes the harder he works, the more likely he is to succeed.
In therapy, Alejandro watches videos of people successfully navigating job interviews using specific behaviors. His therapist uses a token system where Alejandro earns points for every job application submitted, redeemable for activities he enjoys. After 8 weeks, Alejandro begins applying more consistently and attributes some successes to his own improved efforts.'"

Q53. Alejandro's belief that "nothing I do matters, outcomes are controlled by fate" reflects:
A) Internal locus of control B) Learned helplessness with external locus of control C) Reciprocal determinism applied externally D) Biological preparedness for passivity
Answer: B) Learned helplessness with external locus of control Seligman's learned helplessness: repeated uncontrollable failure → belief that responses don't affect outcomes → passive giving-up. Rotter's external locus of control: belief that outcomes are controlled by luck, fate, or others rather than one's own behavior. Alejandro shows both — learned helplessness producing an entrenched external locus of control.

Q54. Roberto's belief — "the harder I work, the more likely I am to succeed" — reflects:
A) External locus of control B) Superstitious reinforcement C) Internal locus of control D) Reciprocal inhibition
Answer: C) Internal locus of control Internal locus of control: belief that outcomes are contingent on one's own behavior, effort, and ability. Roberto attributes success/failure to controllable internal factors. Research links internal locus of control with higher achievement, better problem-solving, and greater psychological resilience.

Q55. The key difference between Alejandro and Roberto — despite identical initial experiences — is BEST explained by:
A) Different biological preparedness for success B) Different schedules of reinforcement in childhood C) Attribution style and locus of control affecting how they interpret and respond to failure D) Roberto having better cognitive maps of the job market
Answer: C) Attribution style and locus of control affecting how they interpret and respond to failure Same objective events (repeated job rejections) produced completely different outcomes because of cognitive interpretation. Roberto attributed failure to specific, controllable, changeable factors (internal). Alejandro attributed failure to uncontrollable, global, stable factors (external) — leading to learned helplessness.

Q56. Alejandro's therapist has him watch videos of successful job interviews. What element of observational learning must be present FIRST for this intervention to work?
A) Motivation — Alejandro must want the job B) Motor reproduction — Alejandro must be able to do the behaviors physically C) Attention — Alejandro must actively observe and process the model's behaviors D) Memory — Alejandro must already have the behaviors encoded
Answer: C) Attention — Alejandro must actively observe and process the model's behaviors Attention is the first and foundational element of observational learning. Without attending to the model's behavior, no encoding occurs, no memory is formed, and no imitation is possible. A depressed individual with low motivation may have difficulty deploying attention — which is why motivation must often be addressed first clinically, but attention is the first element in Bandura's model.

Q57. The token system in Alejandro's therapy (points for applications → redeemable for enjoyable activities) is BEST described as combining:
A) Classical conditioning + systematic desensitization B) Token economy (secondary reinforcement) + the Premack Principle (preferred activities reinforce less preferred) C) Fixed ratio schedule + primary reinforcement D) Shaping + conditioned emotional response reduction
Answer: B) Token economy (secondary reinforcement) + the Premack Principle (preferred activities reinforce less preferred) Token economy: tokens (secondary reinforcers) are exchanged for backup reinforcers. Premack Principle: high-probability preferred activities (enjoyable activities Alejandro likes) are used to reinforce low-probability behaviors (submitting job applications — currently very low frequency). Both principles operate simultaneously in this intervention.

Q58. After 8 weeks, Alejandro attributes some successes to "his own improved efforts." His locus of control is:
A) Remaining fully external B) Shifting toward internal — he is beginning to see effort-outcome contingencies C) Becoming superstitious — attributing success to lucky behaviors D) Demonstrating learned helplessness in reverse
Answer: B) Shifting toward internal — he is beginning to see effort-outcome contingencies Effective treatment for learned helplessness involves creating controllable success experiences that reinforce internal attributions. Alejandro is learning that HIS behavior (applications) produces outcomes (interview opportunities) — breaking the learned helplessness and shifting locus of control inward.

Q59. Alejandro's therapist also uses biofeedback to help Alejandro recognize and reduce his physiological anxiety responses during mock interviews. The SPECIFIC type used to help him control muscle tension during anxiety is:
A) Neurofeedback — targeting brainwave patterns B) EMG biofeedback — targeting skeletal muscle tension C) Systematic desensitization — targeting emotional associations D) Reciprocal inhibition — pairing tension with a relaxation response
Answer: B) EMG biofeedback — targeting skeletal muscle tension While neurofeedback specifically targets EEG/brainwaves, EMG (electromyographic) biofeedback targets muscle tension — providing real-time feedback on muscle activity so the individual can learn to reduce tension voluntarily. This is a specific application of biofeedback appropriate for anxiety-related muscle tension.

Q60. Alejandro's learned helplessness was originally conditioned in a manner most analogous to which experimental paradigm?
A) Pavlov's classical conditioning of salivation in dogs B) Seligman's dogs exposed to inescapable shocks who later failed to escape avoidable shocks C) Tolman's Group 2 rats who showed latent learning D) Bandura's Bobo Doll children who learned aggression vicariously
Answer: B) Seligman's dogs exposed to inescapable shocks who later failed to escape avoidable shocks Seligman directly modeled human depression through learned helplessness in dogs. Dogs given inescapable shocks later sat passively and accepted shocks even when escape was easy — they had learned their behavior was ineffective. Alejandro shows the human analog: years of uncontrollable failures → passive non-trying.

Q61. The Premack Principle in Alejandro's case is: "If Alejandro _____ (low-preference behavior), then he gets to _____ (high-preference behavior)." Which fills in correctly?
A) Watches job interview videos → submits an application B) Submits a job application → engages in an enjoyable activity he likes C) Earns tokens → submits an application D) Relaxes → gains access to mock interviews
Answer: B) Submits a job application → engages in an enjoyable activity he likes Premack: High-probability (preferred) activity reinforces low-probability (non-preferred) activity. Submitting applications is the low-probability behavior Alejandro avoids; enjoyable activities are high-probability behaviors he prefers. Using access to preferred activities to reinforce non-preferred behavior is "Grandma's Rule."

Q62. Alejandro's entire treatment integrates multiple psychological frameworks. Rank these from most fundamental (underlying mechanism) to most applied (specific technique):
A) Learned helplessness theory → Locus of control shift → Token economy → Premack Principle B) Token economy → Premack Principle → Learned helplessness → Locus of control C) Operant conditioning → Behavior modification → Token economy → Premack Principle as specific rule D) Classical conditioning → Systematic desensitization → Token economy → Biofeedback
Answer: C) Operant conditioning → Behavior modification → Token economy → Premack Principle as specific rule The hierarchy: Operant conditioning is the foundational science (Skinner/Thorndike). Behavior modification is the applied field using operant principles. Token economy is a specific behavior modification technique. The Premack Principle is the specific rule determining WHICH activities serve as reinforcers within the token economy. Each level is nested within the one above.

SECTION: Direct Definition + Identification — Rapid Fire

Q63. Wolfgang Köhler placed a banana out of reach of a chimpanzee named Sultan. Sultan failed repeatedly. After a pause, Sultan suddenly stacked two boxes and climbed to reach the banana. This demonstrates:
A) Latent learning — unrewarded prior exploration paid off B) Insight learning ("aha moment") — sudden solution through internal mental reorganization C) Trial-and-error operant shaping D) Stimulus generalization of tool-use behaviors
Answer: B) Insight learning ("aha moment") — sudden solution through internal mental reorganization Insight learning (Köhler) involves sudden problem solution without step-by-step trial-and-error. There is a period of apparent failure (incubation), then abrupt solution — the "aha moment." It cannot be explained by prior reinforcement history because the solution appears all at once, suggesting internal cognitive restructuring.

Q64. A raccoon trained by Breland & Breland to deposit coins in a bank for food rewards began instead rubbing the coins together and "washing" them — reverting to food-washing instinct despite this reducing reward frequency. This illustrates:
A) Superstitious reinforcement B) Latent learning C) Instinctive drift — trained behavior drifting back toward species-typical instinctive patterns D) Stimulus generalization of washing behavior
Answer: C) Instinctive drift — trained behavior drifting back toward species-typical instinctive patterns Instinctive drift (Breland & Breland) is a biological constraint on operant learning — trained behaviors that conflict with species-typical instincts will be overridden by those instincts over time, regardless of reinforcement. The raccoon's food-washing instinct is stronger than the trained coin-depositing behavior.

Q65. Garcia and Koelling's experiment showed rats developed taste aversions to flavored water (not to noise or lights) when paired with radiation-induced nausea hours later. This violates which classical conditioning "rule" and demonstrates which concept?
A) Violates the contiguity rule (CS-UCS must be immediate); demonstrates biological preparedness B) Violates the frequency rule (many trials needed); demonstrates latent learning C) Violates the generalization rule; demonstrates instinctive drift D) Violates the voluntary behavior rule; demonstrates operant conditioning
Answer: A) Violates the contiguity rule (CS-UCS must be immediate); demonstrates biological preparedness Standard classical conditioning requires CS-UCS contiguity (close time proximity). Garcia's taste aversion formed with hours between CS (taste) and UCS (nausea) — in a single trial. This is possible because evolution specifically prepared taste-illness associations (adaptive for survival), demonstrating biological preparedness overrides standard conditioning parameters.

Q66. David Premack studied which behavior in Cebus monkeys that led to his principle?
A) He noticed high-frequency play behavior could reinforce low-frequency lever pressing B) He found monkeys preferred variable ratio schedules to fixed interval schedules C) He discovered monkeys could form cognitive maps of their enclosures D) He showed monkeys acquired insight learning faster than rats
Answer: A) He noticed high-frequency play behavior could reinforce low-frequency lever pressing Premack's original observation: some behaviors occur at high frequency naturally (play, eating) while others are low frequency (pressing a lever). If access to a high-frequency behavior is made contingent on a low-frequency behavior, the low-frequency behavior increases. This is the Premack Principle.

Q67. (Hard Identification) A child who fears dogs is gradually desensitized. At the end of treatment, the therapist says: "Your calmness (CR) now occurs in the presence of dogs (CS) instead of fear." This NEW learned association (calm to dogs) was established by pairing dogs with relaxation — which is:
A) Extinction — simply removing the fear response B) Counterconditioning — replacing one CR (fear) with an incompatible CR (calm) to the same CS C) Shaping — approximating calmness in the presence of dogs D) Latent learning — the calm response was always there, waiting for motivation
Answer: B) Counterconditioning — replacing one CR (fear) with an incompatible CR (calm) to the same CS Counterconditioning doesn't just extinguish the old CR — it actively replaces it with a new, incompatible CR to the same CS. The dog becomes a CS for calm (new CR) rather than fear (old CR). This is mechanistically more powerful than simple extinction because a competing response actively inhibits the old one.

Q68. A hospital patient receiving chemotherapy always passes a particular mural in the hallway on the way to treatment. Over several sessions, the patient begins feeling nauseated even before entering the treatment room — just from seeing the mural. Later, the patient is cured and chemotherapy stops. The patient visits the hospital for a checkup months later and feels slightly nauseated upon seeing the mural again, despite having visited multiple times without nausea. This final nausea is:
A) Extinction of the conditioned taste aversion B) Spontaneous recovery of the conditioned nausea response after a period of rest C) Instinctive drift causing nausea in medical settings D) Higher-order conditioning from the hospital scent
Answer: B) Spontaneous recovery of the conditioned nausea response after a period of rest The patient underwent extinction (visiting without nausea during checkups). After a rest period, the CR (nausea) partially returns to the CS (mural) — spontaneous recovery. This is clinically significant because it explains why "cured" phobias or aversions can re-emerge and why treatment must include strategies for managing spontaneous recovery.

Q69. (Integration — All Concepts) Professor Singh assigns students to build a complex research project over a semester. She:
  1. First asks students to submit just an outline (reinforces any organized attempt)
  2. Then requires a rough draft (outline is no longer reinforced)
  3. Then requires a polished draft (rough draft not reinforced)
  4. Finally requires the full project (full reinforcement)
Meanwhile, students who turn in each component on time earn "research coins" (redeemable for bonus points). Students who miss deadlines lose 30 minutes of the final exam time.
Identify ALL operant principles present:
A) Shaping only B) Shaping + Token economy + Negative punishment (lost exam time) C) Fixed ratio + Primary reinforcement + Positive punishment D) Chaining + Continuous reinforcement + Negative reinforcement
Answer: B) Shaping + Token economy + Negative punishment (lost exam time) Shaping: progressive reinforcement of closer approximations to the full project. Token economy: research coins (secondary reinforcers) redeemable for bonus points. Negative punishment: losing exam time (removing something positive — exam time) for missing deadlines to decrease deadline-missing behavior. A comprehensive, multi-principle behavioral intervention.

Q70. (Ultimate Integration Scenario) "A neuroscientist is studying a patient, Ezra, who has a spider phobia. Brain imaging reveals heightened amygdala activity to spider images. Ezra reports he never had a direct bad experience with a spider but his older sister screamed violently at a spider when he was 5. He now avoids all spider-related media, has nightmares, and cannot enter rooms without checking for spiders. His avoidance behavior is maintained because it prevents anxiety. A therapist uses real-time EEG monitoring to help Ezra achieve calmer brain states while viewing spider images, and simultaneously introduces gradual exposure while Ezra practices breathing."
This case integrates WHICH combination of concepts in the CORRECT sequence of how Ezra's fear developed and is maintained?
A) Biological preparedness → Vicarious conditioning → CER → Negative reinforcement maintaining avoidance → Neurofeedback + Systematic desensitization as treatment
B) Operant conditioning → Fixed ratio → Token economy → Latent learning → Insight learning as treatment
C) Instinctive drift → Superstitious reinforcement → Learned helplessness → Biofeedback as treatment
D) Higher-order conditioning → Extinction → Spontaneous recovery → Positive punishment as treatment
Answer: A) Biological preparedness → Vicarious conditioning → CER → Negative reinforcement maintaining avoidance → Neurofeedback + Systematic desensitization as treatment Perfect case formulation: (1) Biological preparedness — humans are evolutionarily prepared to acquire spider fear rapidly; (2) Vicarious conditioning — fear acquired by observing sister, not direct experience; (3) CER — conditioned emotional response (anxiety) to spider-related stimuli; (4) Negative reinforcement — avoidance removes anxiety, increasing avoidance frequency; (5) Neurofeedback — EEG biofeedback for brain state control; (6) Systematic desensitization — gradual exposure + relaxation (breathing) addressing the conditioned fear. Every element fits.

Q71. Ivan Pavlov noticed his dogs began salivating at the sound of the researcher's footsteps before food was even presented. At the time he called this a "psychic secretion." In modern terminology, the researcher's footsteps would be classified as:
A) An unconditioned stimulus — they naturally produce salivation B) A conditioned stimulus — they have been repeatedly paired with food (UCS) and now elicit salivation (CR) C) A discriminative stimulus — they signal when operant lever-pressing will be reinforced D) A secondary reinforcer — they have been paired with food and gained reinforcing properties
Answer: B) A conditioned stimulus — they have been repeatedly paired with food (UCS) and now elicit salivation (CR) Footsteps (originally neutral) were repeatedly paired with food delivery — becoming a CS that elicited the CR (salivation) before food was even visible. This was Pavlov's accidental discovery that launched classical conditioning research. The "psychic secretion" was the conditioned salivation response.

Q72. Which of the following CORRECTLY distinguishes John Watson's position from Albert Bandura's position?
A) Watson accepted cognitive processes; Bandura rejected them B) Watson believed behavior is entirely environmentally determined (no cognitive mediation); Bandura proposed reciprocal determinism where cognition, behavior, and environment all interact C) Watson studied observational learning; Bandura studied environmental determinism D) Both Watson and Bandura agreed that reinforcement is unnecessary for learning
Answer: B) Watson believed behavior is entirely environmentally determined (no cognitive mediation); Bandura proposed reciprocal determinism where cognition, behavior, and environment all interact Watson's radical behaviorism: environment → behavior (one-way determinism; cognition irrelevant). Bandura: Person (cognition/beliefs) ↔ Behavior ↔ Environment (three-way reciprocal). This is one of the most fundamental theoretical distinctions in learning theory history.

Q73. Edward Thorndike's cat in a puzzle box repeatedly made random movements. When it accidentally pressed the lever and escaped, it took progressively less time to escape on subsequent trials. This provides evidence for:
A) Insight learning — the cat had an "aha moment" B) Latent learning — prior exploration was expressed when the door opened C) The Law of Effect — satisfying consequences (escape) strengthen the behavior that produced them D) Stimulus generalization — the cat generalized escape behavior from previous environments
Answer: C) The Law of Effect — satisfying consequences (escape) strengthen the behavior that produced them The Law of Effect: behaviors producing satisfying outcomes are more likely to be repeated; behaviors producing annoying outcomes are less likely to be repeated. The cat's escape behavior was strengthened by the satisfying consequence of freedom — the foundational insight of all operant conditioning. Note: it was trial-and-error, NOT insight (unlike Köhler's chimps).

Q74. (Comparison Scenario) Two patients seek treatment for phobias:
  • Patient A (spider phobia): Therapist immediately exposes them to a room full of spiders for 2 hours until anxiety subsides
  • Patient B (spider phobia): Therapist teaches relaxation, constructs a hierarchy, and gradually exposes while relaxed
Identify the techniques and the underlying principle each uses:
A) Patient A = Systematic desensitization (reciprocal inhibition); Patient B = Flooding (extinction) B) Patient A = Flooding (extinction through prolonged exposure); Patient B = Systematic desensitization (reciprocal inhibition) C) Patient A = Aversion therapy; Patient B = Token economy D) Patient A = Counterconditioning; Patient B = Shaping toward calmness
Answer: B) Patient A = Flooding (extinction through prolonged exposure); Patient B = Systematic desensitization (reciprocal inhibition) Flooding: immediate full-intensity exposure maintained until fear response extinguishes — based on extinction (CS without UCS repeatedly → CR disappears). Systematic desensitization: gradual hierarchy + relaxation — based on reciprocal inhibition (relaxation and anxiety are incompatible). Both treat phobias but through different mechanisms. Flooding is faster but more distressing; SD is slower but gentler.

Q75. (Grand Integration — Final Question) "A complete learning theory course could be organized around one central insight: behavior is never simply caused by a single factor."
Which answer BEST represents this insight by correctly linking the THEORIST → CHALLENGE to simple behaviorism → MECHANISM they proposed?
A) Tolman: learning requires reinforcement → cognitive maps; Bandura: behavior is only shaped by environment → reciprocal determinism; Seligman: organisms always seek control → learned helplessness; Köhler: insight requires training → aha moment
B) Tolman: learning requires reinforcement → latent learning/cognitive maps; Bandura: behavior is only directly experienced → observational learning + reciprocal determinism; Seligman: behavior is always controllable → learned helplessness + locus of control; Köhler: learning is only gradual trial-and-error → insight learning; Garcia: all stimuli are equally conditionable → biological preparedness
C) Pavlov: all learning is voluntary → classical conditioning; Skinner: classical conditioning is more powerful than operant → operant conditioning; Watson: cognition matters → environmental determinism; Thorndike: consequences don't matter → Law of Effect
D) Bandura: reinforcement strengthens behavior → social learning reduces the need for reinforcement; Seligman: insight learning is impossible → learned helplessness proves cognitive mediation; Tolman: shaping creates complex behaviors → latent learning bypasses shaping
Answer: B) Tolman: learning requires reinforcement → latent learning/cognitive maps; Bandura: behavior is only directly experienced → observational learning + reciprocal determinism; Seligman: behavior is always controllable → learned helplessness + locus of control; Köhler: learning is only gradual trial-and-error → insight learning; Garcia: all stimuli are equally conditionable → biological preparedness This answer correctly maps each theorist to the specific behaviorist assumption they challenged and the mechanism they proposed: Tolman challenged "no reinforcement = no learning." Bandura challenged "only direct experience matters." Seligman challenged "organisms always try to control outcomes." Köhler challenged "all learning is gradual S-R." Garcia challenged "any CS can be equally conditioned to any UCS." Together, they transformed behaviorism into modern learning theory.

📊 Practice 2 — Complete Coverage Map

| Concept | Questions |
| --- | --- |
| CS/UCS/CR/UCR identification from scenario | Q1, Q6 |
| Stimulus generalization + discrimination (simultaneous) | Q2, Q68 |
| Extinction | Q3, Q8 |
| Spontaneous recovery | Q4, Q68 |
| Higher-order conditioning | Q6, Q33 |
| Vicarious conditioning | Q7, Q29 |
| Little Albert parallel | Q5 |
| CER | Q35, Q70 |
| Biological preparedness | Q16, Q65 |
| Continuous vs. partial reinforcement + PRE | Q8, Q10, Q41 |
| FR, VR, FI, VI identification | Q9, Q36, Q37, Q38, Q39, Q40 |
| Superstitious reinforcement | Q12, Q42 |
| Discriminative stimulus | Q11, Q73 |
| Positive/Negative Punishment identification | Q17, Q18, Q25, Q43 |
| Positive/Negative Reinforcement identification | Q23, Q24 |
| Time-out (negative punishment) | Q16 |
| Drawbacks of punishment | Q13, Q43 |
| Token economy | Q15, Q21, Q57 |
| Shaping + successive approximation | Q14, Q20, Q69 |
| Chaining | Q21 |
| Biofeedback + Neurofeedback | Q19, Q59 |
| Systematic desensitization | Q30, Q74 |
| Reciprocal inhibition | Q31 |
| Mary Cover Jones | Q32, Q51 |
| Counterconditioning | Q67 |
| Flooding vs. SD comparison | Q74 |
| Conditioned taste aversion | Q50, Q65 |
| Tolman's latent learning (3 groups) | Q22, Q23 |
| Cognitive maps | Q24, Q25, Q26 |
| Latent learning-performance distinction | Q27, Q45 |
| Instinctive drift | Q64 |
| Insight learning (Köhler) | Q63 |
| Premack Principle | Q61, Q66 |
| Learned helplessness | Q53, Q60 |
| Locus of control (internal/external) | Q54, Q55, Q58 |
| Bandura's Bobo Doll | Q44 |
| Four elements of observational learning | Q47, Q48, Q56 |
| Vicarious reinforcement | Q46 |
| Learning vs. performance distinction | Q45, Q27 |
| Reciprocal determinism | Q49, Q72 |
| Watson vs. Bandura | Q72 |
| Thorndike Law of Effect | Q73 |
| Grand integration | Q62, Q70, Q75 |

75 scenario/passage-based questions — every concept tested through application, identification, comparison, and multi-concept integration, exactly as requested.