Tag Archives: Statistics

My 1st rational thinking book of 2014 (and it's only July !!)

18 Jul

I love using numerical examples of rational thinking to introduce students to the concept. There is something about how badly so many of us were taught maths at school that means that when we grasp a mathematical idea we intuitively understand that we've gained a skill most people don't have. I know it's only halfway through the year, but I've just read what I'm sure will be my rational thinking book of the year, and it will be a great source of numerical examples for years to come.


Gerd Gigerenzer is Director at the Max Planck Institute for Human Development and Director of the Harding Center for Risk Literacy in Berlin, and is a big star in the academic decision-making literature. Gigerenzer is often cast as the anti-Kahneman (the author of one of my books of last year), but in reality his work is an excellent adjunct to reading Kahneman, and shows the breadth of the academic work going on in this area.

In a similar manner to Kahneman’s ‘Thinking Fast and Slow’, Gigerenzer’s ‘Risk Savvy’ summarises decades of academic work for a lay audience, and acts as something of a manifesto for creating a risk-aware population. Unlike many ‘popular’ books by leading academics I’d thoroughly recommend ‘Risk Savvy’ to anyone, and for a teacher it’s wonderful as it contains endless examples that will engage students.

If you've not come across Gigerenzer's work before (and unless you're a psychologist you probably won't have!), you can find an excellent introduction to it in this article from the BBC's website.

Another example of how little we understand numbers

17 Mar


Back in October 2013 I wrote about Kahneman's Disease X problem, and how it's a really useful way to show students that our brains are really bad at dealing with numbers. In Kahneman's version people are asked how likely it is that someone who tests positive for a rare disease (base rate 1 in 1,000, false positive rate 5%) actually has it. Most people think the correct answer is 95%, when it is actually 1.96% !
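The arithmetic behind that 1.96% is just Bayes' theorem, and is easy to sketch in a few lines of Python, assuming the standard version of the problem (a base rate of 1 in 1,000, a 5% false positive rate, and a test that catches every true case):

```python
def positive_predictive_value(base_rate, sensitivity, false_positive_rate):
    """P(has the disease | tests positive), via Bayes' theorem."""
    true_positives = sensitivity * base_rate
    false_positives = false_positive_rate * (1 - base_rate)
    return true_positives / (true_positives + false_positives)

# Disease X: 1 in 1,000 have the disease; 5% of healthy people test positive anyway
ppv = positive_predictive_value(base_rate=0.001, sensitivity=1.0, false_positive_rate=0.05)
print(f"{ppv:.2%}")  # → 1.96%
```

The intuition: in 1,000 people there is roughly 1 true positive but about 50 false positives, so a positive result is right only about 1 time in 51.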

I was reminded of this earlier in the week when I read the news of a blood test to predict Alzheimer's disease. The study by Mapstone et al., "Plasma phospholipids identify antecedent memory impairment in older adults", was published in Nature Medicine, and reports a small-scale but extremely interesting study suggesting that particular fats in the blood might be predictors of Alzheimer's disease. As you might imagine, this story appeared in the popular press under headlines like 'Blood test that can predict Alzheimer's'. The first line of the Daily Mail's version of the story was 'A simple blood test has been developed that gives healthy elderly people precious early warning they may get Alzheimer's within the next three years'.

What really interests me about this story is that buried deep in the reporting were two apparently innocuous bits of information: the sensitivity and specificity of the test were both 90%. That is, the test correctly identifies 90% of positive cases and 90% of negative cases. To the novice thinker this sounds like the test has an overall accuracy of 90%. However, as the Disease X problem shows, there is one piece of information missing from this equation, namely the chance of actually developing Alzheimer's. One of the difficulties of looking at stats for Alzheimer's is that the chance of developing the disease increases rapidly with age: 1 in 1,400 people aged 40-64 have the disease, but the figure rises to 1 in 6 for the 80+ age group. The best estimate I can find for the lifetime risk of developing Alzheimer's disease is around 15%.

If you plug all these numbers into the calculation from the Disease X scenario you come up with some interesting information on how useful this would currently be as a predictive test for Alzheimer's:



Using these figures you get a figure of around 68% accuracy for the test (rather than the 90% that the stories imply). I should say that I've been very conservative with these numbers. The press stories imply that relatively young people could test for potential Alzheimer's, and for them the base rate is not 15% but more like 1 in 1,000 !! Slot 1 in 1,000 into the above calculation, and suddenly the figures look very bad.
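As a sketch, the same Bayes calculation can be run for this test: the 90% sensitivity and specificity come from the reporting, the base rates are the estimates discussed above, and the exact figure you get depends on which base rate you plug in:

```python
def positive_predictive_value(base_rate, sensitivity, specificity):
    """P(will develop Alzheimer's | positive test), via Bayes' theorem."""
    true_pos = sensitivity * base_rate
    false_pos = (1 - specificity) * (1 - base_rate)
    return true_pos / (true_pos + false_pos)

# ~15% lifetime risk vs a base rate of roughly 1 in 1,000 for younger people
for base_rate in (0.15, 0.001):
    ppv = positive_predictive_value(base_rate, sensitivity=0.9, specificity=0.9)
    print(f"base rate {base_rate}: a positive result is right {ppv:.1%} of the time")
# base rate 0.15  → 61.4%
# base rate 0.001 → 0.9%
```

Either way, a positive result is far less trustworthy than the 90% the headlines suggest, and for a young person the test would be wrong about 99 times out of 100.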

It should be said that none of this should take away from the quality of this research. It clearly seems to be a big step on the road to a test for Alzheimer's; it's just not quite what the press is reporting. All in all this seems like an excellent real-world example to add to any demonstration of how bad we are with numbers.

The perils of scientists who go fishing (Is Oscar the cat making me depressed?)

27 Feb


My usual focus here is on materials that are generic, and thus anyone could use them, but I've been thinking this week about research methods, so this idea might really only be applicable to those teaching rational thinking at undergraduate level or above.

If you've read anything else I've written here you'll know that I have a strong focus on students understanding the power of the scientific method to find 'answers'. The current big idea of 'big data', and researchers 'mining' that data, is somewhat alien to me. The example of 'big data' currently exercising the British press is the government's idea to make the data the National Health Service holds available to researchers.

I should say that there are clearly very good reasons to make such data available to researchers. Ben Goldacre has written an excellent article in the Guardian explaining why 'big data' is so valuable. My worry about 'big data' is that it encourages researchers to go 'fishing' through mountains of data without first making an observation, forming a theory and deriving a hypothesis from that theory. I've seen a great example of this data 'fishing' this week that rather sums up my fears. 'Describing the Relationship between Cat Bites and Human Depression Using Data from an Electronic Health Record' by Hanauer et al. was published in the peer-reviewed journal PLOS ONE, and uses a very large 'data mining' sample to produce evidence for exactly what the title describes.

Now, it may well be that this paper has discovered a great unrecognised source of depression (and god knows I'm no stats genius), but I'm really bewildered by the idea of using null-hypothesis testing when you didn't actually have a hypothesis before you started the study. My worry about this sort of thing was further compounded by reading an excellent article in a recent edition of Nature that gets to the heart of the idea that many scientists really don't understand the statistics they are using. In particular the article looks at the impact on statistical significance if one factors in the plausibility of the initial hypothesis. Maybe cats really do cause depression (my own cat certainly annoys me when he decides to wake up at 5am), but just how plausible is this hypothesis, and thus how much can we trust the statistically significant result that has been found?
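The point can be sketched numerically by treating a 'significant' result like a positive diagnostic test. The numbers below are my own illustrative assumptions (the conventional 5% significance level, 80% power, and a range of prior plausibilities), not figures from the Nature article:

```python
def prob_true_given_significant(prior, alpha=0.05, power=0.8):
    """P(hypothesis is true | p < alpha), treating significance as a diagnostic test."""
    true_hits = power * prior           # true hypotheses that reach significance
    false_alarms = alpha * (1 - prior)  # false hypotheses significant by chance
    return true_hits / (true_hits + false_alarms)

# a toss-up, a long shot, and a 'cat bites cause depression'-grade hunch
for prior in (0.5, 0.1, 0.01):
    print(f"prior plausibility {prior}: {prob_true_given_significant(prior):.0%}")
# → 94%, 64%, 14%
```

In other words, a statistically significant result for an implausible hypothesis is still probably wrong, which is exactly the worry about hypothesis-free data fishing.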

Here endeth this odd detour into the land of statistics, I shall return to rants about the popular press undermining rational thinking in the near future.

The Daily Express can't make the numbers add up !!

18 Feb


The front page of yesterday's Daily Express managed to combine glorious unintended irony with some really questionable reporting of statistics. As you'll see from the image above, the lead story reported an opinion poll in which 70% of respondents wanted all immigration to be stopped immediately. It's quite entertaining that of the three people pictured on the front page, one was born in India and a second is the daughter of Russian immigrant parents (presumably of the three it is Simon Cowell that the Express are happy to have as a citizen!).

Despite the glorious irony of the pictures, it's the stats that I find really interesting. The story's headline is very clear that '70% say we must ban new migrants', and yet the very first line of the story says 'Almost three out of four Britons want immigration to be reduced or stopped completely, a poll shows' (my bold). So, within the space of one line we've gone from 70% wanting a complete ban to 70% wanting a reduction !!! If you then look at the actual survey on which the story is based, you discover that rather than 70% of respondents wanting an immediate ban on immigration, the actual figure was 21%, with a further 49% wanting immigration to be reduced.

This seems like a lovely example with which to introduce students to being discerning consumers of the media; after all, a contradiction between the headline and the very first line of the story ought to be easy to spot.

(P.S. It's just been pointed out to me that the mother of Simon Cowell's son is American, and thus it may be three of the four people pictured on the front page that the Express have an issue with !!!)

British public wrong about nearly everything, survey shows

17 Jul

One of the great pleasures of teaching what I do over a long period of time is that colleagues send me newspaper articles that provide me with raw materials for new lectures. This week I received a link to a wonderful story in The Independent headlined 'British public wrong about nearly everything' !

The story reports a survey conducted for the Royal Statistical Society and King's College London, in which the polling company Ipsos MORI questioned the great British public about facts concerning the major political issues of the day. For example, 'What proportion of public money is spent on state pensions in comparison with unemployment benefits?', or 'What percentage of under-16 girls become pregnant every year?'. In each case the public demonstrated a spectacular ignorance of the facts. Full details of the survey can be found on Ipsos MORI's website. I'm not entirely sure whether this says more about a lack of understanding of percentages than about the underlying questions, but either way it's of interest. I was particularly taken with the fact that the average response to the question about what percentage of under-16 girls become pregnant every year was 15%. Can people really think that 1 in 7 under-16 girls are pregnant at any one time ?? (The actual answer is 0.6%.) When you read stories like this it becomes clear why politicians have so little interest in evidence-based policy making. After all, the very people that elect them seem to have little understanding of evidence.

It occurred to me that this would make a lovely teaching exercise, to demonstrate to students the necessity of researching the background of a particular question before coming to a conclusion. I'm thinking about asking students both what they think and what they think 'the average man in the street' would say.

I shall try this in September and report back.
