Medical education: A minimalist manifesto (part 1) (TIJABP)
Clusters of events in time and space are the epidemiologist’s friend. The equivalent of buses coming in threes near a nuclear power station, so to speak. Chance? Or something else going on?
Yesterday was one of those days for me, and whilst the events were not of the same class, they made me think. I went to an interesting keynote address by Professor Trudie Roberts from the Leeds Institute of Medical Education. I then came back to my office across town and ‘wasted’ a few minutes reading a couple of wonderfully angry letters from some junior doctors on how bad their training is (?education). I then had one of my almost weekly conversations with a colleague, somebody I respect enormously in terms of his clinical abilities, in which we cast our world-weary eyes over what has happened to medical training in the UK. Finally, I glanced at the contents page of the most recent edition of Medical Education.
I do not know Trudie Roberts personally and have not read any of her work. She gave a lecture on leadership in medical education. It was a good lecture, witty and measured. I enjoyed it. The leadership bit I am always nervous about. Everybody seems to be running courses on ‘Leadership’, and apparently you just complete your modules and you are a leader. I guess there is a certificate too. I wondered whether, before Sydney Brenner and others made the 20th-century revolution in biology happen, they had been on some leadership courses first. Let alone that major prophet 2000 years ago.
In her lecture she described how she felt the field of medical education has developed and grown over the recent past. There were some familiar themes. The need for research funding; the need to justify to the ‘central university’ the value of what they do (lest they end up like those academics at Barts/Queen Mary); the question of the skill mix or difference in background of those entering the field; the issue of how you measure value that largely falls within an institution (as compared with so much other research activity, where it is your peers outside your institution that matter most). Overall, the talk was upbeat and, as I said, I enjoyed it.
When I came back to my office I read the two wonderful letters from a bunch of junior doctors. Here was reality, or so I think, in my hands (or on screen, literally). They condemn the mindless tick boxing that medical training has become.
The quotes say it all (the letters were in response to an article by Iona Heath):
I agree with Heath that something has gone very wrong with medical training.1 After six years at medical school I am sure I have had to tick (literally) more boxes than any previous generation.
I am glad that Heath acknowledges the commitment and dedication of most junior doctors. However, our conscientious nature makes clambering through these endless hoops burdensome and demoralising. Add to this the huge debt we face after six years at university and it is easy to see why some feel despondent at the start of their careers.
Unfortunately, I suspect it won’t be long before we have our commitment and dedication “snuffed out.”1 If Heath wants a doctor who thinks and questions,1 she and the royal colleges urgently need to act to change clinical education.
The other letter:
As people about to cross the student-doctor boundary we echo Heath’s sentiments.1 During the past five years, our performance has been determined by exercises and assignments that reduced us to tick boxes.
We were given formulas to perform examinations, take histories, and break bad news. We were even assessed on our abilities to reflect, and there was a formula for that too. As we start working on the wards for the first time we anticipate discovering a strategy for success in that environment. To one of the authors’ chagrin, when a junior doctor asked a consultant during a ward round about a patient’s management, the consultant retorted, “You’re not here to think, boy.”
I think it was Virchow who said something along the lines of ‘not for the first or the last time, youth was right’. (Of course, if I say youth is right, is that another Cretan Liar paradox?)
The next event in my little cluster of events was a chat with a colleague. He is a close friend and IMHO a superb doctor. He has a set of personal skills that allow him to practise in a caring profession, although I think his medical education has had nothing to do with these. But his clinical expertise is based on something that Alvan Feinstein, the late US epidemiologist, pointed out in his classic book ‘Clinical Judgement’ and that, from a different vantage point, Geoff Norman has gone on and on about: superb clinicians know more, both about disease and about patients with disease.
Finally, there was the current edition of Medical Education. I once liked this journal, mainly because Geoff Norman published a lot of work in it. It is often easy to recognise one of his papers: they are usually beautifully written, often funny, but most of all there is a raging intellect trying to make sense of the world. He ever so clearly wants to find out how things work, and how we could modify things to make them better. (He did, of course, start out as a PhD physicist.) The contrast with the current bunch of articles in Medical Education could not be more depressing: Conflict and Power…, Self-Regulated Learning…, The Hidden Curriculum, etc. All well and good, but is this what we need to sort out the mess we are in? Dismal.
Now this is not one of those ‘let’s kick all the market sellers out of the temple’ moments, but I want to sketch an alternative for how we could think about things. Warning: a few pencil lines on paper, not a completed canvas. TIJABP after all.
Despite rumours to the contrary, there is little robust consensus on what doctors need to know to get their certificates in the UK. Yes, there is the series of inspections by the GMC, external examiners, etc., but none of this in my opinion stands up to any sort of scientific scrutiny. There is understandably a lot more concern about the reliability of medical exams than there used to be but, for me at least, the trickier issue of validity has been quietly ignored. Anybody who practises medicine in the NHS knows that there is tremendous variation in the abilities of medical graduates (and, of course, tremendous variation in the abilities of different doctors). What do graduates need to know, and how can we measure it?
For many university subjects the obvious solution is for the students of one university to be examined by staff from another university (or better still, a bunch of other universities). Yes, it won’t work for all subjects, but let’s continue the chain of thought. What are doctors expected to know at graduation? And here it is no use saying, as many do for dermatology, that they should know how to diagnose psoriasis or melanoma. The issue is which cases of melanoma and psoriasis we expect them to get right, and which ones we forgive them for getting wrong. As framed, the current lists of ‘competencies’ or learning outcomes do not fit the bill. They are usually so bland and platitudinous as to be useless. Tick boxing.
I would think about these things differently. A key skill — some would say the key skill — of a doctor is to be able to take a history from a patient, come up with a sensible differential diagnosis, and plan an appropriate course of action. Action here could include therapy, or investigation, or getting advice from a more experienced colleague. Now the thought experiment. If we could get each student to be assessed on 500 cases, we would simply score them on these abilities. If the student did well, I would care little why they did well. Certification is certification, and I would be indifferent to how they behaved as a ‘reflective’ practitioner, ‘team player’, lifelong learner, etc. It is not that these attributes are unimportant during a career as a doctor, but that they are secondary characteristics, and any employer needs to make judgements about them as a person moves through their career, when there is meaningful sampling. But no, I don’t think many of these educational bolt-ons are central for undergraduates. Lack of communication skills, for example, is not something that has ever worried me about my colleagues over the last 30 years (whether anybody can meaningfully interact with patients in less than 10 minutes is an NHS issue, not necessarily a testament to lack of ability). On the other hand, doctors not knowing things terrifies me.
So how do we examine them on 500 patients? Well, the first thing is that we get rid of modular assessment, or at least we have to accept that at a particular ‘end point’ doctors have to be competent across medicine. Much of medicine now is based on activities that we can simulate artificially: photographs of skin rashes rather than the real patient, CXRs, ECGs, ultrasound images, etc. For history taking, we use simulated patients. And remember, this is not the long case of old: assessments should reflect what comprises most consultations, ten minutes or less. We need a test of doctoring skills, not of whatever NHS hospitals want cheap labour for.
The salient point of this model is that it is blind to what has gone before. If you pass the exam, you get the certificate. If it turned out you had skipped most of your course, well and good. If you pass, you pass. Now the objections will be legion. Is there not more to being a doctor? What about teaching lifelong learning, reflection, professionalism and all those other skills that are cluttering up our curriculum? Well, simple: they are hardly skills in the formal sense of the word, and they have to play second fiddle (a viola?) to what doctoring is all about: experts know things about patients and diseases that novices do not.
Of course, the thought-experiment sample size of 500 is a guess. There are the logistics to consider, but the numbers must also reflect test performance. The number may be much, much lower, but stick with the thought experiment for the moment. In John Searle’s celebrated thought experiment, he asks: if there were a program that allowed one to answer questions phrased in Chinese, in Chinese, and the program were given to, say, an English speaker, would you say that this person understood the conversation? (Shades of the Turing test, I know, in the way I have described it.) In his view the answer is no. My view is that for doctors this doesn’t matter: if the individual can perform like a doctor, she is a doctor. That is how we should judge her. Certification is distinct from what has gone on before or goes on inside. Much of what we currently fill up undergraduate training with is best left to a later stage. I may sound like the ‘they need more anatomy’ crowd, but my thesis is different. Can I now tick the box for reflection on my CME form for my annual GMC appraisal?
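A footnote for the statistically minded: the guess of 500 can be sanity-checked with some back-of-the-envelope binomial arithmetic. A minimal sketch in Python, assuming (and these are simplifications of mine, not anything in the thought experiment) that each case is scored pass/fail and that cases are independent:

```python
import math

def standard_error(p, n):
    """Standard error of an observed proportion p over n independent cases."""
    return math.sqrt(p * (1 - p) / n)

def cases_needed(p, margin, z=1.96):
    """Smallest n so that a 95% confidence interval around an observed
    proportion p has half-width no larger than `margin`."""
    return math.ceil(z ** 2 * p * (1 - p) / margin ** 2)

# With 500 cases, a candidate scoring 80% is pinned down to within
# about 3.5 percentage points either way.
print(round(1.96 * standard_error(0.8, 500), 3))  # ~0.035

# To estimate an 80% performer's score to within 5 points, some 250
# cases would do -- consistent with "the number may be much lower".
print(cases_needed(0.8, 0.05))  # 246
```

Real cases are neither equally difficult nor independent, so these figures flatter the exam; but they do suggest that the order of magnitude of the guess is defensible.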