Archive | October, 2014

Is the ‘flipped classroom’ the answer to all our problems?

27 Oct


Since the beginning of the summer I’ve been heavily involved in my institution’s introduction of a pilot of lecture capture technology. So far I’ve been hugely impressed by the software we’ve decided on from Panopto. This week I attended a conference run by Panopto at Senate House in London, and was intrigued to see that the debate had moved on from simply recording all lectures to the concept of the ‘flipped classroom’. Whilst the idea goes back as far as Eric Mazur’s work in the 1990s, it was in the early 2000s that genuine discussion appeared of the idea of pre-recording materials for students to watch before attending a teaching session. Indeed, the Learning & Teaching management of my own institution seem really keen that we move towards flipping our classes.

I’m genuinely torn as to whether the ‘flipped classroom’ is a good idea for me or not. I can well understand that if I were teaching Research Methods I would be rushing towards this approach, but for what I do, I still can’t decide. I’ve always been a supporter of the ‘lecture’ as a great way of demonstrating ‘the academic process’ to students, so the best of my lectures set out to make a specific point, and along the way use evidence as ‘scaffolding’ to get to that point. In my case it may well be that there are small chunks of my lectures that I could ‘flip’ in order to free up class time, rather than whole sessions. What I found interesting about the Panopto conference I attended was that the speaker giving an example of a flipped classroom seemed just as conflicted as I am. The big question surrounding ‘flipping’ seems to be ‘what do you do with the freed-up class time?’, and it was that point that seemed to be lacking from the conference presentation. (I should confess that my other fear is ‘What if they don’t all watch the flipped material before the teaching session?’)

I suspect I’ll give this a go next year on a small scale, but in the interim there are a range of examples I’m looking for:

1) Flipped classes in universities that are not highly selective

2) Flipped classes with cohorts of 200+ students

3) Flipped classes at 1st year (freshman) level, rather than final year

As an aside, for anyone considering lecture capture software, Panopto is an excellent solution. I’ve been very pleasantly surprised as to how robust the software is, and how easy it is to use.

When ‘consensus’ doesn’t mean what you think it means

14 Oct


I’ve just started teaching my new cohort of students, and this week used my favourite example of questionable peer-reviewed research, in which conclusions are drawn from self-report data on penis size! As ever, even though the students were only one week into a three-year degree programme, they were well able to see that the paper, although published in a reputable peer-reviewed journal, was clearly nonsense. I was therefore really pleased to receive another great example this week from our brilliant librarian Ian Clark.

Last week saw a lot of reporting of a paper from ‘Psychology of Popular Media Culture’ suggesting that there was a consensus view that media violence leads to childhood aggression. The general tone of the reporting can be seen in this article from Time magazine. This example is in many ways far better than my favourite ‘penis-size’ paper, in that at first glance it looks entirely sensible and is published in a peer-reviewed journal from the august body that is the American Psychological Association. However, a few interesting points appear when one starts to delve:

  • The paper uses the words ‘broad consensus’ in its title, yet it appears that only 69% of the participants agreed that media violence led to aggression. I may be a raging pedant, but when I see the phrase ‘broad consensus’ I expect something rather higher than 69%!
  • The study is essentially an opinion poll: none of the participants appear to have been asked whether they have any evidence to back up their view. Whilst opinion polls are interesting, I’m not sure a peer-reviewed scientific journal is the place for them.
  • Even if one doesn’t think that the above two points are an issue, the fact that 36% of the participants in the survey had no further qualification to comment on the topic than that they were parents is truly worrying. Surely a peer-reviewed journal ought to be soliciting the views of those who have conducted evidence-based research on the question at hand.

One final point, which I won’t dwell on here but is very intriguing, is the second footnote that appears on page four of the paper:

“The version of this manuscript initially submitted and accepted was based on a different analysis, with communication scientists and media psychologists combined in one group as media researchers and identifying consensus as a significant difference from the midpoint in groups’ average responses. In reviewing an earlier draft of this manuscript, the authors of a comment on this article (Ivory et al., in press) correctly pointed out that these results could not be interpreted as consensus. The editor gave us permission to conduct a new set of analyses using a different operational definition of consensus.”

All in all, this seems like a great way to demonstrate to students the necessity of reading beyond the headlines, even when reading a reputable peer-reviewed journal!

 

 
