
Punishment

Punishment occurs when a stimulus is applied and the effect is to make a behavior less frequent. Sometimes this is called positive punishment. "Positive" in this context means a stimulus is added.

Few psychologists use the word positive when discussing punishment. When you see the word punishment by itself, this means an aversive stimulus is applied.

What is punishment?

Punishment, like reinforcement, is defined by its effect. A stimulus that decreases the probability of a behavior it follows is a punisher, by definition.

Electric shock is probably the most potent punishing stimulus. It works in situations where nothing else does, for example, in cases where a person cannot stop coughing or hiccuping for days at a time, creating a medical emergency.

Brief application of small electric shocks immediately after each hiccup (or cough) can quickly suppress the troublesome behavior. In cases like that, small electric shocks are well worth enduring.

When can small electric shocks produce welcome relief?

Punishment can be inadvertent and accidental, just like reinforcement can be. On the previous page we used the example of praising a student who raises a hand in class. If the student does not raise a hand for the rest of the term, the attention was punishing, not reinforcing.

We could speculate why (perhaps the student is shy, or embarrassed by the praise) but that is irrelevant to the functional definition of punishment. If the behavior becomes less frequent after a stimulus is applied, the stimulus is punishment, by definition.

Behavioral psychologists often make a distinction between reward and reinforcement. To behaviorists in Skinner's tradition, reward refers to the intention of a stimulus, not its effect.

Giving a student a gold star is a reward. Whether the gold star is a reinforcer is a different matter. Does the rewarded behavior become more frequent? If so, then the gold star was a reinforcer. If not, the gold star was not a reinforcer.

To determine if something is a reinforcer, you have to observe its effect on behavior. If the behavior gets more frequent, then yes, it is a reinforcer. If the behavior becomes less frequent, then the stimulus is a punisher, by definition.

Money can be a punisher. If it takes away motivation in a formerly self-motivated individual, making a behavior less frequent, then it is a punisher.

Children who are rewarded for doing an activity sometimes become less likely to do it in the future. If that happens, the reward (despite the giver's intentions) functioned as a punisher.

Why do some psychologists avoid using the word "reward" as a synonym for "reinforcement"? What is a punisher, by definition?

The usefulness of these concepts is linked to their abstract quality. People have intuitions about what is reinforcing and punishing. Sometimes those intuitions are wrong.

By stepping back and analyzing the situation (Is a stimulus being added or subtracted? Is the behavior getting more frequent or less frequent?) one can categorize the situation and identify a reinforcer or a punisher.

For example, you might know somebody who teases you. You respond in a way that (you assume) will make the teasing stop, for example, by showing irritation.

But if the teasing continues, and you know how to make a behavioral analysis, you might realize your expression of irritation was serving as a reinforcer. You might think, "How could acting irritated be a reinforcer?" but that does not matter. If the behavior continues, you are evidently not punishing it with your reaction. In that case, it is time to try something else.

I noticed a few years back that a web site advertised a dramatic new technique for reducing bullying. The author found that if the bullied child protested the bullying, it did not stop. It actually encouraged more bullying.

The web site's author said he discovered a technique that worked better. He taught bullied children to react to a bully with friendliness. Remarkably enough, this reduced bullying behavior.

The author of the web site was not familiar with behavioral terminology. He reacted with surprise when I mentioned in an email that, technically, his procedure was punishing the bully.

To him, punishment had to hurt! However, the anti-bullying technique he discovered worked because friendly behavior was a punishing stimulus: it reduced the incidence of bullying.

Response Cost (Negative Punishment)

Response cost or negative punishment is another way to make behavior less frequent. It is therefore a form of punishment.

Response cost occurs when a stimulus is taken away as a consequence of behavior, and the observed effect is to reduce the frequency of the behavior. The word negative in negative punishment indicates a stimulus is removed.

What is response cost or negative punishment?

When people use the word penalty they are talking about response cost. A speeding ticket is an example of negative punishment. Money is taken away to reduce the frequency of speeding behavior.

How are penalties negative punishment?

This is a form of punishment because the stimulus (having money taken away) makes the behavior less frequent in the future. If you focus on the ticket, you could call the situation positive punishment (the police person gives you an aversive stimulus, a ticket). If you focus on the money, it is negative punishment or response cost (the ticket causes a valued stimulus, money, to be removed).

Extinction vs Response Cost

Extinction and response cost both make a behavior less frequent by taking away something good. The distinction between them is fairly subtle. In extinction the reinforcer that maintains a behavior is withheld.

This means the cause of the behavior has been analyzed. The reinforcer causing the behavior has been identified and taken away.

That produces extinction. An example would be extinguishing the bar press operant by turning off the food dispenser in a Skinner Box.

Response cost, by contrast, involves any valued stimulus being removed, whether or not it caused the behavior. If you get a speeding ticket, your money (a valued stimulus) is taken away from you.

Your money did not cause the speeding. Therefore a speeding ticket is categorized as response cost (negative punishment) rather than extinction.

How is extinction distinguished from response cost? Why is a speeding ticket response cost rather than extinction? How would you extinguish speeding?

How could you extinguish speeding? If a person drove fast for thrills, then to extinguish speeding you would have to eliminate the thrills.

This might occur naturally if, for example, a person grew bored with speeding. The result would be extinction of speeding behavior in that individual.

Avoidance and Escape Learning

The term aversive control is used to cover situations in which behavior is motivated by the threat of an unpleasant stimulus. There are two main categories of behavior under aversive control: avoidance behavior and escape behavior.

What is aversive control? What are the two main types?

Escape conditioning occurs when the animal learns to perform an operant to terminate an ongoing, aversive stimulus. It is a "get me out of here" or "shut this off" reaction, aimed at escaping pain or annoyance.

Escape behavior is defined as occurring after an aversive stimulus starts. Escape behavior is therefore negatively reinforced (reinforced by the removal of an aversive stimulus).

For example, you could get a rat to jump off a platform into some water (which a rat is normally reluctant to do) by electrifying the top of the platform. The rat will jump off the platform to escape.

When the rat jumps off, it escapes the shock. If the platform is not electrified, and is the only place to rest from swimming, the rat will stay on the platform until it gets shocked. The jump is an escape behavior.

How can escape conditioning be converted into avoidance conditioning?

Escape conditioning is converted into avoidance conditioning by giving a signal before the aversive stimulus starts. Once the animal learns that a signal comes before an aversive stimulus, it will try to avoid the aversive stimulus by performing an anticipatory avoidance behavior.

This kind of learning occurs quickly. It is very durable. For example, when we put anti-flea medication on two new kittens, they hated it.

The next time we approached them with a tube of the medication, they ran out of the room. That early learning lasted for many months. As a result, we had to use oral flea medication with those cats.

Avoidance behaviors can be very persistent. Each time the animal carries out an avoidance maneuver, it feels relief, whether or not there was an actual threat.

In that sense, avoidance behavior is self-reinforcing. It can keep going forever, even when the original threat is removed.

Why are avoidance behaviors so persistent?

This point was demonstrated in a classic experiment by psychologist Richard Solomon and colleagues (Solomon, Kamin, and Wynne, 1953). In this research, called the shuttle box experiment, Solomon placed a dog in a large cage with two chambers. A low wall separated the two chambers.

When the researchers electrified the floor on one side, the dog jumped to the other side. This was escape conditioning because the dog jumped as soon as it felt the electric shock.

If the researchers electrified the floor on the second side of the cage, the dog jumped back to the original side, which was no longer electrified.

To convert the situation into avoidance learning, the researchers sounded a tone ten seconds before the shock. The dogs learned to jump when they heard the tone, rather than waiting for the shock. Now they were performing an avoidance behavior.

The researchers found they could turn off the shock generator, and the dogs continued to jump each time they heard the tone. The behavior never extinguished. The dogs did not allow themselves to discover the shock generator was turned off.

What was Solomon's shuttle-box experiment, and what point does it illustrate?

This is an important pattern, because something similar happens to humans. Avoidance behavior is self-reinforcing. Relief is the reinforcer, and relief occurs whether or not the threat is still present. Consequently, avoidance behaviors can go on forever, even if there is no longer a reason for them.

In what sense is avoidance conditioning "self-reinforcing"?

A student who has trouble with math in high school may feel relief by avoiding math in college. Given a choice, the student might never take another math class, even if (in reality) the student would do well in a college math class.

The student feels relief each time math is avoided. The avoidance behavior could last forever unless the student summons up the courage to take math and find out "it is really not so bad."

A 2 x 2 Table of Consequences

A handy 2x2 table of consequences shows the types of reinforcement and punishment we have discussed.

                        Increases frequency      Decreases frequency
                        or probability           or probability
                        of behavior              of behavior

Stimulus is applied     Positive reinforcement   Punishment

Stimulus is removed     Negative reinforcement   Response cost
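The logic of the table can be sketched as a small decision function. This is only a hypothetical illustration of the two questions behavioral analysis asks (was a stimulus added or removed? did the behavior increase or decrease?), not software from any behavioral-analysis toolkit:

```python
def classify_consequence(stimulus_applied: bool, behavior_increases: bool) -> str:
    """Classify an operant consequence by its observed effect on behavior.

    stimulus_applied: True if a stimulus was added, False if one was removed.
    behavior_increases: True if the behavior became more frequent afterward.
    """
    if stimulus_applied:
        return "positive reinforcement" if behavior_increases else "punishment"
    else:
        return "negative reinforcement" if behavior_increases else "response cost"

# A speeding ticket: money (a valued stimulus) is removed,
# and speeding becomes less frequent afterward.
print(classify_consequence(stimulus_applied=False, behavior_increases=False))
# response cost
```

Note that both inputs are observations of what actually happened, not of anyone's intentions; that is the functional definition emphasized throughout this page.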

---------------------
Reference:

Solomon, R. L., Kamin, L. J., & Wynne, L. C. (1953). Traumatic avoidance learning: The outcomes of several extinction procedures with dogs. Journal of Abnormal and Social Psychology, 48, 291-302.


Write to Dr. Dewey at psywww@gmail.com.

