The classic examples of classical conditioning are Pavlov's dogs and Watson's Little Albert. In the 1890s Pavlov, a Russian physiologist, was measuring the saliva dogs produced as they were fed when he noticed that they also salivated when the person who fed them appeared without food. This is not surprising; farmers have known for thousands of years that animals become excited when they hear the sounds that signal feeding time. But Pavlov carefully observed and measured one small part of the process. He paired a tone with feeding his dogs, so that the tone occurred several times right before and during the feeding. Soon the dogs salivated to the tone, much as they did to the food (1 above). They had learned a new connection: tone with food, or tone with the saliva response.
Similarly, John B. Watson, an early American psychologist, presented an 11-month-old child, Albert, with a rat and a loud, frightening bang at the same time. After six or seven repetitions of the noise and rat together over a period of a week, the child became afraid of the rat, which he had not been before, much as he feared the noise (2 above). Actually, although very famous, Watson's experiment didn't work very well (Samelson, 1980); yet the procedure shows how one might learn to associate a neutral event, called the conditioned stimulus (strange as it may seem--the rat), with another event to which one has a strong automatic reaction, called the unconditioned stimulus (the scary loud sound). (What I find even more amazing is that Watson described three ways to remove this learned fear, but it was 40 years before psychology took his therapeutic ideas seriously.)
Eventually both the unconditioned stimulus (UCS) and the conditioned stimulus (CS) elicit similar (but, we now know, not identical) responses--an automatic, involuntary response which the person frequently (but not always) cannot control. Examples of unconditioned stimuli and responses are: pain and jerking away, a puff of air to the eye and a blink, approaching danger and fear, light and pupil constriction. Classical conditioning sounds simple. Actually, there are many complexities; that's why Pavlov persisted for 30 years. He discovered many of the basic learning processes--the necessary timing when pairing the conditioned stimulus with the unconditioned stimulus, inhibition, extinction, generalization, discrimination, higher order conditioning, and others--all still described in Introductory Psychology textbooks today. Pavlov thought he was discovering the fundamental building blocks of all behavior (and to some extent he was). He even found that animals (he didn't work with humans) went crazy--barking, struggling to get away--when two tones they had learned to tell apart were made more and more alike: one tone (CS+) had been conditioned to produce saliva and a very similar tone (CS-) had been conditioned to inhibit saliva, so when the dogs could no longer discriminate between them, they broke down. Pavlov concluded that all psychopathology was learned via classical conditioning. He wasn't always right, but he was a brilliant researcher.
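For readers who like to see a process in concrete terms, the rise and fall of a conditioned response can be sketched numerically. The short Python toy below is only an illustration of the general idea of acquisition and extinction; the learning rate, trial counts, and the simple "move a fraction of the way" rule are my own assumptions, not Pavlov's data or an established model.

```python
def condition(strength, learning_rate=0.3, maximum=1.0):
    """One CS-UCS pairing: associative strength moves part way toward maximum."""
    return strength + learning_rate * (maximum - strength)

def extinguish(strength, learning_rate=0.3):
    """One CS-alone trial (no food): strength decays toward zero."""
    return strength - learning_rate * strength

strength = 0.0
for trial in range(10):            # acquisition: tone paired with food
    strength = condition(strength)
acquired = strength                # approaches the maximum after repeated pairings

for trial in range(10):            # extinction: tone presented alone
    strength = extinguish(strength)
extinguished = strength            # decays back toward zero

print(round(acquired, 3), round(extinguished, 3))
```

Run with different trial counts, the sketch also shows why a response that took many pairings to build can fade when the pairing stops--the "extinction" Pavlov catalogued.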
How can we use this information? What are common, everyday examples of classical conditioning? The Good Humor Wagon and the bakery attract you with bells and smells previously paired with food. TV advertisers pair their product with beautiful scenes or with attractive, sexy, successful or important people in an effort to get you to like their products more. Studying may be unpleasant for John because it has been paired with frustration (hating to do it). Much of what we like or dislike is a result of classical conditioning. Let's take drinking coffee as an example.
Have you ever wondered why and how so many people become habituated to things that naturally taste bad? At first, coffee tastes awful! Yet, many people drink it regularly (me too). Cigarettes taste terrible! Alcohol too! Surely the taste of fingernails and filth under the nails isn't very good! But many college students bite their nails. How do we learn to like these things? Probably through classical conditioning. How?
I'll tell you how I learned to like coffee. My first job as a young psychologist was in a psychiatry clinic. I was the only psychologist and alone a lot. Needing to talk to someone besides patients, I started taking a coffee break with the secretaries, who were attractive and interesting. Coffee started to taste better and better because I liked the secretaries and enjoyed meeting my social needs. The clever reader might ask why I didn't come to dislike the secretaries instead of coming to like coffee. That would have been possible if the awful taste had been stronger than my social needs. I would have stopped taking breaks if none of my needs were being met.
Even though I'm aware that what I originally really liked and needed was socializing with good-looking women, not coffee, I am still, 35 years later, compelled to have a cup in the morning (only at the office, because my coffee drinking is under environmental control). I've learned to like it (and I still like women too). Indeed, coffee can now be used to change my reaction to something else. For example, if I started to eat nutritious but terrible-tasting diet cookies with my coffee, I would come to like the cookies after hundreds of pairings (this is higher order conditioning). In turn, the cookies could subsequently influence my reaction to something else, and on and on.
In my case, coffee was paired with the satisfaction of social needs. Cigarettes are often paired with relaxation, alcohol with fun activities, nail-biting with relief of anxiety while alone, work and study with the reduction of anxiety, and so on. If coffee, cigarettes, and alcohol are paired thousands of times with relaxing, then these behaviors become capable of calming us down. The body, in its wisdom, will start to use these habits as a relaxant when we are uptight. Thus, research shows that feeling stressed and helpless causes a smoker to want a cigarette more than merely smelling the smoke and seeing that a cigarette and ashtray are available. With this understanding, it isn't surprising that heavy smokers are more likely to be depressed and anxious than light smokers or non-smokers, or that bulimic women report more sexual abuse than non-bulimic women. Classical conditioning connects feelings with environmental cues and with behaviors.
The examples above mostly involve taste, but many other reactions we didn't originally have are conditioned: the music we like, the social activities we like and dislike, the people we like and dislike, the way we like to dress, the desire to be the center of attention, the reluctance to approach the opposite sex, the work we like and dislike, etc. Obviously, these subtle preferences can have an enormous impact on our lives.
Pavlov's experiments dramatically demonstrated the environment's control over behavior. We are highly responsive to cues in our environment. We see dessert and can't avoid eating it. We act differently with our mother than with our boyfriend or girlfriend. We have a place where we can really concentrate and study. We feel uptight goofing off and get back to work. In fact, classical conditioning is involved in almost everything we do (even though brushing your teeth isn't the emotional high point of your day, notice how you feel if you don't brush at the regular time). Thus, changing our environment is one of the most effective self-help methods (see ch. 11). Changing our reaction to the environment is another self-help approach based on classical conditioning methods. Indeed, learning to reduce our fears and other unwanted emotions is a major part of gaining control over our lives (see ch. 12).
Operant or Instrumental Learning
While Pavlov was studying reflexes in Russia, Edward Lee Thorndike was a graduate student at Harvard observing cats and dogs as they tried to escape from a cage he had built, with a trap door (opened by the animal pulling a string) that let them get out to food. He wanted to know which animals were the smartest and how the mind helps animals cope. From these studies, he concluded that animals (dogs, cats, and chickens) don't learn by imitation, don't reason, don't have insight, and don't have good memories. At first, this must have pleased the anti-evolutionists! But Thorndike did not glorify the human mind; in fact, he concluded that all learning, even in humans, doesn't involve the mind! Learning was for him simply the building of a connection between a situation (S) and a response (R), depending on the rewarding or punishing consequences to the animal. His basic conclusion was: rewards strengthen the preceding response and punishment weakens it.
In the 1930s B. F. Skinner built a "box" in which an animal could get a pellet of food if it learned to press a bar or peck a light. Thousands of research studies have been done on animals in the Skinner box. Therefore, the most common textbook examples of operant or instrumental conditioning are a rat pressing a bar in a Skinner box or a pigeon learning to peck a light to get food (see 4 in Table 4.1). In real life, common examples of operant conditioning are working for a weekly paycheck (5 in Table 4.1) and disciplining a child to change his or her behavior. The use of rewards and punishment has been known to humans for thousands, maybe hundreds of thousands, of years. These response tendencies may be built into the species; indeed, even animals punish their young for nursing too vigorously or for misbehaving. During the 1960s and 70s, the use of reinforcement, called behavior modification, became very popular with psychologists, especially in schools and with the mentally or emotionally handicapped.
The basic idea, straight from Thorndike, is seductively simple: reward the behavior you desire in others or in yourself. This is Skinner's key to utopia. There is also a parallel notion: if you don't understand why you do certain things, look for the possible rewards that follow the behavior (Hodgson & Miller, 1982). Then change the reinforcers if you want to change the behavior. This is a key method in self-help. Behavioral analysis (understanding the antecedents and consequences) and positive reinforcement are undoubtedly powerful and underused methods, but they are probably not the solution to all human problems. Don't other factors besides reinforcement influence behavior? What about hoped-for rewards, plans, intentions, or powerful emotions?
Nevertheless, the Skinner box has undoubtedly given the world valuable knowledge about different kinds of reinforcement schedules, i.e., the consequences of reinforcing every bar press vs. every 3rd or 10th press vs. the first press after every 30 seconds, etc. As a result, psychologists and efficiency experts know a great deal about getting the most work out of rats, certainly, and perhaps out of people in highly controlled environments. Advertisers and politicians certainly know how to sell things. But psychologists know a lot less about self-control in more complex situations where people have many alternatives and can make their own decisions and plans.
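The difference between these schedules is easy to miss in prose, so here is a small Python sketch of the two most-cited kinds. The press stream, the 10:1 ratio, and the 30-second interval are my own illustrative choices, not figures from the Skinner-box literature:

```python
def fixed_ratio(presses, ratio):
    """Fixed-ratio schedule: reward every `ratio`-th response (e.g., every 10th press)."""
    return presses // ratio

def fixed_interval(press_times, interval):
    """Fixed-interval schedule: reward the first response after each `interval` seconds."""
    rewards, next_available = 0, interval
    for t in sorted(press_times):
        if t >= next_available:
            rewards += 1
            next_available = t + interval
    return rewards

# Suppose a rat presses once per second for a minute:
press_times = list(range(1, 61))
print(fixed_ratio(len(press_times), 10))   # 6 rewards on a 10:1 ratio
print(fixed_interval(press_times, 30))     # 2 rewards on a 30-second interval
```

Notice the design difference: on the ratio schedule, working faster earns more rewards; on the interval schedule, extra presses between payoffs earn nothing--which is one reason the two schedules produce such different patterns of responding.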
Operant conditioning involves operating on the environment in very specific ways, namely, delivering reinforcers or punishment right after the "target" behavior. There are several situations in which behavior-consequence contingencies might be established:
- You may reward or punish some specific behavior of someone else, i.e. you are changing his/her environment in hopes of changing his/her behavior.
- Some specific behavior of yours may be rewarded--or punished--by someone else or by yourself.
- You may engage in some specific behavior because you expect it to yield some desired change in your environment--a payoff (5 & 6 in Table 4.1).
Furthermore, learning involves not only acquiring a new response but also learning to use that response effectively in other situations (generalization) and learning not to use it in situations where it won't work (discrimination). Thus, as with classical conditioning, the setting exerts great control over our operant behavior.
Classical and operant conditioning were not new kinds of learning invented by Pavlov and Thorndike. Conditioning has always existed; psychologists just studied and described its forms more carefully in the last 90 years. No doubt, animal trainers, parents, bosses, and lovers used rewards, punishment, and change of the environment quite effectively 10,000 years ago, much as they do today.
Other examples (5 above) of operant conditioning are salespersons working on commission and factory workers doing "piece work," where the better or faster they work, the more they get paid. Likewise, studying for grades, dressing to be attractive, being considerate to make friends, getting angry to get our way, and cleaning up our messes for approval or because we enjoy neatness are all behaviors operating on the environment. If they work (yield rewards), the behaviors are strengthened, i.e., become more likely to occur in the future, because they have been reinforced.
There are many other self-modification methods based on operant procedures: self-punishment, negative reinforcement, intrinsic satisfaction, covert (mental) rewards and punishment, extinction (no rewards or punishment after the behavior), and others discussed near the end of this chapter and in chapter 11. You should know them all.