Who needs the feedback most?

by reestheskin on 28/01/2016


I have written previously about how we often do not know much about our own diagnostic processes. Curiously, this does not mean we necessarily need to obtain large grants to get us out of our uncertainty. In many domains of life we can make a lot of progress using small tweaks to the system, experience, careful measures of outcomes, and interaction with the environment. The latter is crucial. In many crafts like cooking, art, or music (and medicine is a craft, if a peculiar one), we can rapidly remix inputs in the light of any output. Any teacher merely facilitates this. But what of the teachers themselves?

When students do their dermatology attachment in Edinburgh, they are with us for just 2 weeks (far too little I hear you say, but this question deserves more thought and less partisanship than it usually gets, and so I will leave it for another occasion). Most mornings they receive a 2.5 hour patient-based clinical tutorial in which patients are shown to them. In addition they will sit in a ‘business’ clinic and a surgical session within each two-week block. They can also sit an online formative assessment if they wish, and in the second week there is an MCQ quiz, which they self-mark as I go through the answers with them and talk around the topics. I have been running the quiz for about 6 years with various questions. More recently I introduced a new set of questions, more carefully tuned to a major revision in the content we deliver.

What I have found surprising is how difficult it has been to anticipate how they will fare on individual questions. I came across this several years ago when I first set the quiz, and the results with the new quiz are similar. There are a couple of dimensions to this.

First, even in a seminar group of 16 or so, I find it almost impossible to anticipate how many students will have got the right answer based on my reading of their behaviour. Yes, some questions stand out as being very easy, but given the dynamics of interaction and willingness to shout out answers, it is only when I see the written answers (at the end of the seminar they are collected in and collated over the year) that I have a good idea of what they know, and what they do not know. We have tried clickers, but had practical problems with them, so I use a simple ‘close your eyes and show fingers’ approach to reduce peer pressure. Second, when I ask other colleagues who also teach our students, they too have, objectively, a poor idea of how students will fare on particular questions, although they often think they know.

Any schoolteacher will, I expect, be surprised by the above: not by the difficulty of guessing what students know, which is to be expected, but by the expectation that you can estimate what students know in the absence of testing or coursework. But for medicine this is a real issue, because regular day-to-day assessment of performance is rare. Why? First, student numbers are large, as are the numbers of staff involved with teaching. In our own unit most staff never see most students on more than one occasion. Repeated interactions are the exception, so sampling is a problem. Second, apprenticeship has been replaced largely by ‘observation’, with little opportunity for students to carry out tasks under supervision. Alison Gopnik, the distinguished psychologist of infant learning at Berkeley, has contrasted the way people learn to cook with the way they are supposed to learn science at university: would you really lecture them for three years before letting them crack an egg? We have similar problems in medicine (and they are not simply due to patient safety issues), where the link between seeing and doing has been broken.

As has been said on more than one occasion, the core of feedback is allowing students to know how they are doing, with pointers to how they can change their state. Ditto for staff. The feedback loop is exactly that: a loop.

Nothing surprising here, I hear you say. Indeed.