The Theory of Signal Detection

A funny thing happened to the concept of threshold on the way to the second half of the 20th Century: it disappeared. Or maybe we should say it became mobile. Experiments showed there was no magic line which, when crossed, made a stimulus perceivable. Instead, people act as if the threshold is a decision point that can be adjusted to fit the circumstances.

This conclusion came from a new field called information theory or communications theory. Information theory started after World War II as scientists tried to improve existing communication systems such as the telephone system and as they grappled with the new technology of the computer. Information theorists found that the detectability of a signal depended upon several factors that could be manipulated independently: (1) the level of background noise, (2) the strength of the stimulus, and (3) the redundancy (amount of repeated information) in the stimulus.

What factors determine the detectability of a signal?

If a person is trying to detect very weak signals in a background of noise (for example, picking out blips on a radar screen), the task is to distinguish the signal from the noise. If the signal is very faint, or the noise level is very high, the observer will make errors. There are two types of errors a person can make.

What are false positives and false negatives?

1. False positives occur if a person says yes (a positive response) but this is wrong (false) because no signal was presented. A false positive response can also be called a false alarm. If you thought you heard somebody call your name, but nobody actually did, that is a false positive.

2. False negatives occur when a person says no (a negative response) but this is false because actually a signal was presented. An example would be failing to detect a blip on a radar screen.

Psychologists found that the decision about when to say yes or no could be changed, depending on which type of error a subject wanted to avoid. A subject who wanted to avoid false positives, for example, could be extra careful never to say yes unless absolutely sure a signal had been detected.
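This trade-off can be illustrated with a small simulation (a sketch, not from the original article; the numbers for noise, signal strength, and criterion are illustrative assumptions). Sensory evidence is modeled as a random value; a signal adds a small boost to it, and the observer says "yes" whenever the evidence exceeds an adjustable criterion:

```python
import random

random.seed(1)  # fixed seed so results are repeatable

def run_trials(criterion, n=10000, noise_sd=1.0, signal_strength=1.0):
    """Simulate yes/no detection trials with a movable decision criterion.

    Half the trials contain only noise; the other half add a weak signal.
    The observer responds "yes" whenever the evidence exceeds the
    criterion. Returns the false-positive and false-negative rates.
    """
    false_pos = false_neg = 0
    for _ in range(n):
        evidence = random.gauss(0, noise_sd)        # noise-only trial
        if evidence > criterion:
            false_pos += 1                          # said "yes", no signal
    for _ in range(n):
        evidence = random.gauss(signal_strength, noise_sd)  # signal + noise
        if evidence <= criterion:
            false_neg += 1                          # said "no", signal present
    return false_pos / n, false_neg / n

# A cautious observer sets a high criterion; a liberal one sets it low.
fp_cautious, fn_cautious = run_trials(criterion=1.5)
fp_liberal, fn_liberal = run_trials(criterion=-0.5)
print(fp_cautious, fn_cautious)
print(fp_liberal, fn_liberal)
```

Raising the criterion makes false positives rarer but false negatives more common, and lowering it does the reverse: the same observer, with the same sensitivity, produces very different error patterns simply by moving the decision point.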

Why would people use different thresholds in different situations?

In some situations, false positives do not cause as much harm as false negatives. Consider a blood bank screening samples for the AIDS (HIV) virus. The initial screening of blood samples uses a very sensitive test designed to eliminate false negatives, even though that means there will be some false positives. False positives, in this case, are blood samples which test positive for the HIV virus, although later testing shows they are not really infected. These false positives have a cost—some blood is wasted—but that is a small price to pay for the security of knowing that no infected blood is given to hospital patients who receive transfusions. In other words, there must be no false negatives in this situation.

In other situations, false positives must be avoided at all costs. A hunter must learn not to shoot at everything that moves in the bushes, because the moving object might be a human or a dog. The hunter must wait until the form of the object becomes clear. The threshold for pulling the trigger must be raised so that a signal does not lead to a response unless the signal is clearly perceived. False negatives (failing to shoot) are less important than false positives (shooting the wrong thing).


Copyright © 2007 Russ Dewey