In the next couple of weeks I’ll start to teach my students about Tversky & Kahneman’s work on heuristics and biases. As a warm-up, this week I ran through a few examples of cognitive biases. I’m not sure why it came as a surprise, but I realised just how compelling examples of cognitive biases are, particularly those illustrating numerical biases.

My favourite is over thirty years old, but still seems to engage today’s students:

Disease X is found in 1 in 1000 people

There is a test for disease X that is 100% accurate at detecting the disease where it is present

The test has a 5% false positive rate. That is, in 5% of cases where the disease is not present, the test will say that it is present

If you select a person at random, test them for the disease, and receive a positive result, what is the chance that this person actually has the disease?

When presented with this question, the vast majority of people say that there is a 95% chance that the randomly selected person has the disease. What is really intriguing is that if you step students through the simple arithmetic of this problem, they have little trouble appreciating the correct answer:

Imagine testing one thousand people: one of them will have the disease and the other 999 will not. The test of the one person with the disease will yield a positive result. The tests of the remaining 999 will yield roughly 50 positive results (5% of 999). That’s 51 positive tests from 1000, even though we know that only one person actually has the disease. 1 out of 51 means that the chance of a randomly selected person with a positive test actually having the disease is 1.96%! For those interested in the psychology behind this, the problem is a demonstration of something called base-rate neglect. Put simply, people ignore how often the disease actually occurs.
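The arithmetic above can be written out as a short Python sketch (the variable names are my own, not part of the original problem):

```python
# Base-rate neglect: Disease X worked through as simple arithmetic.
population = 1000
prevalence = 1 / 1000        # 1 in 1000 people have the disease
sensitivity = 1.0            # the test always detects the disease when present
false_positive_rate = 0.05   # 5% of healthy people test positive anyway

# Of 1000 people tested: 1 true positive, about 50 false positives.
true_positives = population * prevalence * sensitivity
false_positives = population * (1 - prevalence) * false_positive_rate

# Chance that a positive test really indicates the disease.
p_disease_given_positive = true_positives / (true_positives + false_positives)
print(f"{p_disease_given_positive:.2%}")  # prints "1.96%"
```

Changing `prevalence` makes the effect obvious: the rarer the disease, the further the answer falls from the intuitive 95%.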

This can be a little easier to see in a simple diagram:

The huge discrepancy between their response and the actual answer seems to have a big impact on students, and hopefully encourages them to consciously review their responses to statistical questions. This type of problem seems like an excellent way of illustrating to students that we are not naturally disposed to deal well with statistics, and so I’m going to throw a few more in over the next few weeks. There is a second version of this problem which is equally powerful, and I’m going to use it next week to see if the students have retained the concept from the Disease X problem:

- “A cab was involved in a hit and run accident at night. Two cab companies, the Green and the Blue, operate in the city. 85% of the cabs in the city are Green and 15% are Blue.

- A witness identified the cab as Blue. The court tested the reliability of the witness under the same circumstances that existed on the night of the accident and concluded that the witness correctly identified each one of the two colors 80% of the time and failed 20% of the time.

- What is the probability that the cab involved in the accident was Blue rather than Green knowing that this witness identified it as Blue?”
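For anyone who wants to check their own intuition on the cab problem, the same base-rate arithmetic applies; here is a quick sketch (again, variable names are mine):

```python
# The cab problem: same base-rate arithmetic as Disease X.
p_blue = 0.15                  # 15% of cabs are Blue
p_green = 0.85                 # 85% of cabs are Green
p_says_blue_given_blue = 0.80  # witness correct 80% of the time
p_says_blue_given_green = 0.20 # witness wrong 20% of the time

# Ways the witness can say "Blue": a correctly identified Blue cab,
# or a misidentified Green cab.
blue_and_says_blue = p_blue * p_says_blue_given_blue
green_and_says_blue = p_green * p_says_blue_given_green

p_blue_given_says_blue = blue_and_says_blue / (blue_and_says_blue + green_and_says_blue)
print(f"{p_blue_given_says_blue:.1%}")  # prints "41.4%"
```

Despite the witness being right 80% of the time, the low base rate of Blue cabs drags the probability well below 50%.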
