The first was an article by Jack Grove quoting work showing that certain types of student feedback on teaching quality tend to reduce learning. I have come across similar findings before, but I do not know the primary literature well. Of course it is not at all surprising. I think it was Clark Glymour who remarked that every time you put an equation in a lecture to students enrolled on degrees viewed as non-mathematical, student evaluations of the lecturer go down. If you tie this sort of silly feedback to staff progression or reward, you are in a race to the bottom. I think the phenomenon is real and, at the softer edge, widespread. Learning is hard, and should be challenging. The pleasure and reward of mastery only come with effort. The teacher and learner share an asymmetrical relationship.
The second article brought a smile to my face, because my own university (like most others, I imagine) is anxious about what students say about it in the National Student Survey (NSS). Economics students at Manchester have been campaigning for their course to give greater prominence to alternative theories, especially in the light of the 2008 financial crash. They are asking students to hold off filling in their NSS feedback until a decision is reached about including a course with the title ‘Bubbles, Panics and Crashes’. The students are quoted:
The university is very keen to get students to fill out the NSS, saying that they take their feedback very seriously – but this approach does not always seem to translate to non-NSS forms of feedback, like our petition, which is quite sad…
Well, as Clay Shirky said: ‘Institutions tend to preserve the problem they were created to solve’. There is a real problem here about what sort of feedback matters, and what does not. Just as bad money drives out good, so the usual ‘happy charts’ that universities have embraced at worst drive down standards, and more usually obscure attempts to improve teaching. One issue here is that universities increasingly think they are corporations engaged in the ‘education business’, rather than institutions devoted to truth, however inconvenient that may be on occasion. You cannot pretend to understand the world selectively, choosing different standards for what is inside the classroom and what is outside it. As Bronowski said in a very different context, knowledge and our world view have to be ‘of a piece’. When we selectively pick and choose knowledge and descriptions, we call it advertising or propaganda. It is not education. So, I have little sympathy for the University of Manchester.
All of this is not unrelated (at least in my mind) to a video from Derek Muller. Muller is a physicist with a keen interest in science communication and how people learn (or, more usually, do not learn) physics. Now, a few words of warning. I am one of those people who thinks physics is indeed the queen of sciences. I remain ever in awe of how powerful a few deep concepts can be in explaining so much of the world around us (note the contrast with much modern-day academic economics). It is not that all of biology is stamp collecting (let alone dermatology), but with one or two exceptions, biology and medicine rarely approach the sort of deep structures that you find in physics. Physicists make a habit of this sort of approach. So, it is no accident that the work on the Force Concept Inventory shows a depth of thinking that few papers in medical education ever come anywhere near approaching (one exception would be some of Geoff Norman’s stuff, but then again he started off as a physicist).
The Muller video is both good and bad. He talks about the Khan Academy videos, and although he is very polite, he is cautious about their pedagogical impact. I agree. But I wish he wouldn’t emulate this naff handwriting-on-screen technique. Typed text is so much easier to read, and it is clear that the software environment is not up to what people want to achieve. Yes, in class, writing on a board acts as a brake on the all too usual PowerPoint madness, with lecturers racing through slide after slide, but this is not a reason to emulate slates again. [rant over]
What I do like, however (and why this blog post has some degree of coherence…), is his exposition of the limitations of videos as teaching aids, and his finding from his own work that students can rate material highly and think they are learning, yet objective testing shows they have learned nothing. It is not that he is against videos, but that we have to learn how to make videos. Here I am not talking about the software environment (except when it gets in the way!), but about how you approach the problem pedagogically. One of his findings is that showing misconceptions can force students to improve their knowledge. This of course highlights the importance of both domain knowledge in teaching, and knowledge of what students find difficult and what they get wrong. It is one of the great virtues of the National Academy of Sciences report How People Learn that it debunks the idea that ‘generic teaching skills’ are sufficient. Unfortunately, all too often ‘educationalists’ have a vested interest in pretending otherwise: good teaching requires domain subject knowledge, knowledge of teaching within a domain, and knowledge of learning. Most important, however, is to realise that what students know and can do is the only reliable guide to how effective your strategy is. Much of the time this will require the sorts of analytical skills that we call research. Students not filling in surveys might be telling us they know more than we thought.