Tag Archives: Opinion Polls

When ‘consensus’ doesn’t mean what you think it means

14 Oct


I’ve just started teaching my new cohort of students, and this week used my favourite example of questionable peer-reviewed research, in which conclusions are drawn from self-report data on penis size! As ever, even though the students were only one week into a three-year degree programme, they were well able to see that the paper, although published in a reputable peer-reviewed journal, was clearly nonsense. I was therefore really pleased to receive another great example this week from our brilliant librarian Ian Clark.

Last week saw a lot of reporting of a paper from ‘Psychology of Popular Media Culture’ suggesting that there is a consensus view that media violence leads to childhood aggression. The general tone of the reporting can be seen in this article from Time magazine. This example is in many ways far better than my favourite ‘penis-size’ paper, in that at first glance it looks entirely sensible and is published in a peer-reviewed journal from the august body that is the American Psychological Association. However, a few interesting points appear when one starts to delve:

  • The paper uses the words ‘broad consensus’ in its title, yet it appears that only 69% of the participants agreed that media violence leads to aggression. I may be a raging pedant, but when I see the phrase ‘broad consensus’ I expect something rather higher than 69%!
  • The study is essentially an opinion poll: none of the participants appears to have been asked whether they had any evidence to back up their view. Whilst opinion polls are interesting, I’m not sure a peer-reviewed scientific journal is the place for them.
  • Even if one doesn’t think the above two points are an issue, the fact that 36% of the participants in the survey had no further qualification to comment on the topic than that they were parents is truly worrying. Surely a peer-reviewed journal ought to be soliciting the views of those who have conducted evidence-based research on the question at hand.

One final point, which I won’t dwell on here but is very intriguing, is the second footnote that appears on page four of the paper:

“The version of this manuscript initially submitted and accepted was based on a different analysis, with communication scientists and media psychologists combined in one group as media researchers and identifying consensus as a significant difference from the midpoint in groups’ average responses. In reviewing an earlier draft of this manuscript, the authors of a comment on this article (Ivory et al., in press) correctly pointed out that these results could not be interpreted as consensus. The editor gave us permission to conduct a new set of analyses using a different operational definition of consensus.”
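To see why that original definition was problematic, here is a little simulation (with invented numbers, not the paper’s data) showing that a group’s average response can sit ‘significantly’ above the midpoint of a rating scale while the group itself is nowhere near consensus:

```python
# Invented numbers, not the paper's data: a mean rating can differ
# 'significantly' from the midpoint of a 1-5 agreement scale even
# though barely more than half the raters agree at all.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
midpoint = 3
# Hypothetical ratings: mean just above the midpoint, widely spread.
ratings = rng.normal(loc=3.2, scale=1.2, size=300).clip(1, 5)

t, p = stats.ttest_1samp(ratings, popmean=midpoint)
print(f"mean rating = {ratings.mean():.2f}, p = {p:.4f}")            # 'significant'
print(f"raters above midpoint = {np.mean(ratings > midpoint):.0%}")  # no consensus
```

A ‘significant difference from the midpoint’ is a claim about the mean; ‘consensus’ is a claim about near-unanimity, and the two can come apart badly.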

All in all, this seems like a great way to demonstrate to students the necessity of reading beyond the headlines, even when reading a reputable peer-reviewed journal!

British public wrong about nearly everything, survey shows

17 Jul

One of the great pleasures of teaching what I do over a long period of time is that colleagues send me newspaper articles that provide me with raw material for new lectures. This week I received a link to a wonderful story in ‘The Independent’ newspaper headlined ‘British public wrong about nearly everything’!

The story reports a survey conducted for the Royal Statistical Society and King’s College London, in which the polling company Ipsos MORI questioned the great British public about facts concerning the major political issues of the day: for example, ‘What proportion of public money is spent on state pensions in comparison with unemployment benefits?’, or ‘What percentage of girls under 16 become pregnant every year?’. In each case the public demonstrated a spectacular ignorance of the facts. Full details of the survey can be found on Ipsos MORI’s website. I’m not entirely sure whether this says more about a lack of understanding of percentages than about the underlying questions, but either way it’s of interest.

I was particularly taken with the fact that the average response to the question ‘What percentage of girls under 16 become pregnant every year?’ was 15%. Can people really think that 1 in 7 girls under 16 are pregnant at any one time? (The actual answer is 0.6%.) When you read stories like this it becomes clear why politicians have so little interest in evidence-based policy making. After all, the very people who elect them seem to have little understanding of evidence.
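For what it’s worth, the scale of that one error is easy to make concrete, using just the two figures quoted above:

```python
# The two figures quoted above: the public's average guess versus the
# actual rate of pregnancy among girls under 16.
perceived = 0.15   # average survey response: 15%
actual = 0.006     # actual figure: 0.6%

print(f"perceived: about 1 in {1 / perceived:.0f}")         # ~1 in 7
print(f"actual:    about 1 in {1 / actual:.0f}")            # ~1 in 167
print(f"overestimate: {perceived / actual:.0f}x too high")  # 25x
```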

It occurred to me that this would make a lovely teaching exercise to demonstrate to students the necessity of researching the background of a particular question before coming to a conclusion. I’m thinking of asking them both what they themselves think and what they think ‘the average man in the street’ would say.

I shall try this in September and report back.

Teaching statistical thinking might have just got easier

6 Aug

Over sixty years ago Samuel Wilks, the then President of the American Statistical Association, wrote: “Statistical thinking will one day be as necessary for efficient citizenship as the ability to read and write!” (Interestingly, he was paraphrasing the work of the British writer H. G. Wells from 1903!) Given the welter of statistics that we are confronted with on a daily basis, it seems perfectly reasonable to say that that day has arrived, and yet what I see of students entering undergraduate studies suggests that we have a long way to go in developing ‘efficient citizenship’.

My own discipline, psychology, requires students to have detailed knowledge of the statistics of null-hypothesis testing, and yet in focusing on an understanding of t-tests, ANOVAs, regression and the like, I suspect that more apparently ‘basic’ statistical ideas, such as understanding distributions and sampling, are often neglected. It has to be said that the fault does not lie entirely with those of us teaching in higher education institutions. Psychology students appear to arrive at university with an aversion to anything that looks like mathematics, which one has to assume is a product of the nature of pre-16 mathematics teaching.
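As a taste of what I mean, here is the sort of five-minute demonstration of sampling I’d like students to see early on (the ‘incomes’ are simulated purely for illustration): draw repeated samples from a heavily skewed population and look at how the sample means behave.

```python
# A five-minute classroom demo of a sampling distribution: individual
# 'incomes' are heavily skewed, yet the means of repeated samples
# cluster tightly and symmetrically around the population mean.
import numpy as np

rng = np.random.default_rng(0)
population = rng.exponential(scale=25_000, size=1_000_000)  # skewed 'incomes'

sample_means = [rng.choice(population, size=100).mean() for _ in range(2_000)]

print(f"population mean:          {population.mean():,.0f}")
print(f"mean of sample means:     {np.mean(sample_means):,.0f}")
print(f"SD of the sample means:   {np.std(sample_means):,.0f}")
```

The individual values are wildly skewed, yet the sample means pile up tightly around the true mean, which is the key intuition behind both sampling and the distributions that null-hypothesis tests rely on.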

I’ve written before about using opinion polling problems as a route into teaching about sampling, but I’ve just come across a little book that seems like a perfect way of getting students to grasp ‘statistical thinking’.

‘How to Lie with Statistics’ by Darrell Huff is a tiny (124 pages), fifty-year-old book that contains a huge range of the type of examples that I like to use when teaching, and delivers them in a style that is accessible to modern-day students. It seems to me that this book would make an excellent basis for the first few weeks of any introductory undergraduate statistics course, and ought to be compulsory reading to guarantee ‘efficient citizenship’.

US Presidential Polling Disaster

5 May

With the upcoming US presidential elections, I shall be wheeling out material about opinion polling next semester. At the most basic level, it’s worth just asking students to think about how national polls can be accurate when they survey only around 1,000 people. Students can usually derive the idea of representative samples for themselves without much prompting.
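A quick simulation makes the point nicely (simulated voters, not real polling data): the error in a random sample depends on the sample size, not on the size of the population it is drawn from.

```python
# Why ~1,000 respondents suffice: simulate an electorate in which 52%
# support candidate A, then run many independent polls of 1,000 voters.
import numpy as np

rng = np.random.default_rng(1)
true_support = 0.52

# Each poll: 1,000 random voters; record the proportion supporting A.
polls = rng.binomial(n=1000, p=true_support, size=10_000) / 1000

print(f"average poll result: {polls.mean():.3f}")   # ~0.520
print(f"typical error (SD):  {polls.std():.3f}")    # ~0.016, i.e. about +/-1.6%
# Theory agrees: sqrt(p * (1 - p) / n) = sqrt(0.52 * 0.48 / 1000) ~ 0.016
```

An error of around ±1.6% on 1,000 respondents is tolerable for most purposes, and quadrupling the sample would only halve it, which is why pollsters tend to stop near 1,000.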

I then tell them the story of the 1936 US presidential election and the spectacularly inaccurate poll conducted by ‘Literary Digest’ magazine, which I first read about in the excellent book ‘Critical Thinking about Research’. What makes this story of particular interest is that the magazine polled two million people, and that its polls in the previous four US presidential campaigns had been very accurate. The 1936 ‘Literary Digest’ poll predicted a win for the Republican candidate Alf Landon, but the actual result was a landslide win for the Democratic candidate Franklin Roosevelt, with Landon winning only two states.

Interestingly, when I ask students why they think the poll might have been so wrong, they initially tend to suspect political bias on the part of the publisher; however, with a little prompting about what was going on in America at the time (i.e. the Great Depression), they get around to asking who the magazine actually questioned.

When told that polling cards were sent to magazine subscribers, names from the telephone directory and new car owners, they soon recognise that the poll may only have questioned those who were already disposed to vote for Landon.

The whole saga of the 1936 ‘Literary Digest’ poll is a great example of huge samples not always being better: if your sample is biased in some way to start with, making it bigger just gives you an ever more precise estimate of the wrong answer. An interesting postscript to this story is that one poll from 1936, based on a far smaller sample, was accurate. It was run by a man called George Gallup, whose organisation is still working today. The Literary Digest folded soon after the 1936 election!
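The point is easy to demonstrate with a toy simulation (the numbers are invented; the bias is in the spirit of the Digest’s wealthy sampling frame):

```python
# The Literary Digest failure in miniature (invented numbers): if the
# sampling frame over-represents one group, a bigger sample just gives
# a more precise estimate of the wrong quantity.
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical electorate: 60% back Roosevelt overall, but suppose the
# magazine's frame (subscribers, phone owners, car owners) skews wealthy
# and only 35% of that frame backs him.
p_population, p_frame = 0.60, 0.35

for n in (1_000, 10_000, 2_000_000):
    poll = rng.binomial(n, p_frame) / n
    print(f"n = {n:>9,}: Roosevelt at {poll:.1%} (truth: {p_population:.0%})")
# The estimate converges, but to the frame's 35%, not the electorate's 60%.
```

No matter how large the sample grows, it homes in on the frame’s preference rather than the electorate’s: the bias never averages away.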
