It's always a cheap shot (but true, nonetheless) to respond to students who say their medical school is better than most other medical schools with the question: “Have you been to any other medical school?” They may have some data, or have spoken to students from elsewhere, but most of the time I think it reflects the standard psychological bias we all suffer from. I am so glad I did dermatology rather than cardiology… (I am, I will add). And so on.
There is of course another bias, and that is to imagine that the grass is always greener elsewhere. I am certainly guilty in this respect. I was always critical of my undergraduate medical degree at Newcastle, but with time I think I judged some of it too harshly. I am not referring to the fact that 10% of the clinical teaching sessions were no-shows by teaching staff who showed no interest in students. There were real problems, and there still are at most UK medical schools. I am just saying there were some good things. Let me describe two.
In the first two clinical years (years 3 and 4), Wednesday afternoons were sports afternoons. Or not, depending on your preferences — but there was no timetabled teaching. Wednesday mornings were devoted to a ‘special study research project’. This was highly informal: you decided whether you wanted to do it or not, there was no formal structure to it, and of course, no exam. It didn't count towards marks for any formal assessment. I doubt it has survived in this form, but as I look back now, I marvel at how farsighted the people were who planned it this way. If you didn't want to do one, or if, like most research, it went nowhere, you just stopped. You could stay in bed, or go to the library. Terrific. For me of course, it was how I met Sam Shuster, and that meeting determined the rest of my professional career.
The second thing was the nature of the intercalated degree. Unlike many of the Scottish schools at least, the BMedSci was a bespoke course, built bottom-up with the goal of providing training in medical science, not of allowing students to join other non-medical BSc courses. You could do it after year 2 or year 4. The idea was that being able to do it after you had clinical exposure would encourage people to do clinical science projects. So most students still wanted to become budding T cell immunologists (‘thymologists’ we called them), but people like me could tackle clinical problems (if they wished).
The degree was unmistakably a research degree. There was an intensive three-month introduction, with my syllabus including med stats, epidemiology, health economics, and computing. The class size for all of this bar the stats was n=2 (for statistics it was n=12). Teaching was in lecturers' offices over coffee. After the three-month intensive course you were left to get on with your work for the remaining nine months, with only two seminars to present, a written, individually bespoke exam, and a full-length doctoral thesis to write. I forget the exact breakdown of marks, but over 70% depended on the thesis, which was examined by an external and an internal examiner, and included a viva. The degree was marketed (not the right word) not just to high fliers, but as an escape valve for those who found much of undergraduate medicine crushingly boring. This post is a way of saying thank you, and also a marker of how higher education has changed.
On why you might prefer med school to vet school. Philip Greenspun. Or not.
A simple thought after reading yet more emails from my university about teaching ‘management’.
The following quote is by Paul Seabright, a very interesting economist.
‘It is both an admirable and melancholy fact that training and the standardisation of working methods are designed to reduce the impact of personal idiosyncrasy on the job.’
It is from his book, The Company of Strangers. Everywhere I look in medicine, I see similar shadows at work. I have written about it here (Evidence and the industrialization of medicine) and more recently in relation to skin cancer here and here (industrialise skin cancer). My guess is that we are now seeing similar things in higher education. There must be a better way?
Fundamentally, this is a problem of misplaced economic incentives. As long as the academic credential is worth more to a student than the knowledge gained in getting that credential, there will be an incentive to cheat.
The student loan system enables vast numbers of young people to study for three years. It also supports a vast higher education industry. What it does not do is enable students to find well paid jobs.
My undergraduate course in Newcastle, starting in 1976, was said to be ‘integrated’. At the time I am not certain I knew what the alternative was, but I soon cottoned on. The idea was that we wouldn't have a series of lectures on anatomy, then physiology, then pharmacology, and so on. Rather, some higher-order topic (e.g. the CVS) would be covered by staff with different areas of expertise. I assumed that no single member of staff had the necessary expertise across the board. There are pros and cons to this approach. I quite like the idea of a single individual setting out how they will cover a topic, as you have time to get used to their lecturing style and so on; and it is perhaps easier to read ahead if one person is in control. On the other hand, for many if not most topics, the traditional or historical disciplines don't make sense. In the clinical area, physicians carry out procedures, and surgeons don't just operate (although many of them would like to).
There are, however, challenges to having an integrated course. One example I remember was listening to a lecture on some type of cancer, with the surgeon explaining the basis of the classification used. The next lecturer, a pathologist, led with an opening statement about how the classification method described by the surgeon just a few minutes earlier was wrong. Of course, the surgeon had left by this time. Nobody, it seemed to me, knew what anybody else was talking about with any fine sense of granularity. I learned that the course may be said to be integrated, but the staff were not; and if the staff are not, then it isn't really integrated at all. What seemed to happen was that everybody came on, presented their monologue, and then dispersed back to their silo. I doubt the ‘course’ ever actually existed except as a timetable arranged by a course organiser with one-hour time-slots: ‘surgical bit’, ‘pathology bit’, ‘physician bit’, ‘ethics bit’, etc.
I was thinking about this because of a mistake I made recently. I had just diagnosed a case of cutaneous endometriosis, and the history of cyclical pain, bleeding and swelling of the skin lesion is quite unforgettable. It is of course very rare, and not the sort of thing we expect students to know about. But if the patient is in front of you, it makes (I thought) a memorable learning moment. So, I introduced the patient to a group of students, and I was aware, at least at a subconscious level, that they didn't seem as enthralled by the diagnosis as I was. I registered some degree of puzzlement, but thought nothing of it until I was chatting to one of the students later in the day. The students were year 4 students, the year in which they are exposed to O+G — this I knew. But what I had forgotten was that this was the first carousel of the year: none of these students would have done O+G yet, and therefore I couldn't have expected them to know about endometriosis. That is why they looked puzzled.
The lesson for me here is that one giant challenge for undergraduate medicine is that the teachers need to be ‘integrated’. But we are not. Most of us live in professional silos. A danger is that you end up with lots of Excel spreadsheets mapping out all the ‘competencies’ that you claim students need. This is a complex process, and one that I think exists in documents but not in the real world. Just as a physics teacher needs to know whether a student understands calculus, a proper clinical teacher needs to know all the things students have been exposed to. This means that all staff need to know what students learn in many other disciplines (e.g. ethics, statistics), so that any clinical exposure can build on this knowledge. We don't need another statistics slot (to give an example), but we do need all the staff to already know all the statistics the students have been taught, so that the basic nuts and bolts of clinical teaching can be enriched by these other domains, and the statistics knowledge consolidated at the same time. This of course means we have to focus on what sort of staff deliver teaching, as much as on the curriculum. I do not think we have got it right.
Scotland has 15 universities, while there are approximately 130 in the UK as a whole. Thus the percentage of research funding that goes to Scottish institutions is close to the proportion that its universities represent in the UK as a whole. Considerable research income, as well as volume of publications, is associated with medical schools and the biomedical sciences, with substantial funding for these areas coming from the Medical Research Council and the Biotechnology and Biological Sciences Research Council, as well as the Wellcome Trust and other medically related charities. Scotland has five medical schools (including the pre-clinical school at St Andrews) for a population of 5.3 million, while England has 25 and Wales two for populations of 53 million and 3.1 million respectively. On a pro rata basis with Scotland, England would have 50 medical schools and Wales three.
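The pro rata figures above follow from simple population scaling, with Scotland as the baseline. A minimal sketch, using only the population and school counts quoted in the text:

```python
# Pro rata scaling of medical-school counts by population,
# taking Scotland (5 schools, 5.3 million people) as the baseline.
def pro_rata_schools(population_m, baseline_schools=5, baseline_population_m=5.3):
    """Scale Scotland's medical-school count to another population (in millions)."""
    return round(baseline_schools * population_m / baseline_population_m)

print(pro_rata_schools(53))   # England: 50
print(pro_rata_schools(3.1))  # Wales: 3
```

The function name and its rounding convention are illustrative choices, not taken from the text, but the outputs match the figures quoted: 50 for England and three for Wales, against the actual 25 and two.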
This is a nice post about a talk by Audrey Watters on how the dream of ‘the’ portal hasn’t died (unfortunately). What do people think links are for? What do people think a web is?
One of the most interesting aspects of our work on red hair and the melanocortin 1 receptor (MC1R) was what it might say about human evolution. Although the chief justification for sequencing human DNA was understanding disease, what always seemed most interesting to me was what we would learn about ourselves. The story of man: the greatest story ever told. And it is being told, as you can read from this article in Science:
How do you make a modern European? For years, the favored recipe was this: Start with DNA from a hunter-gatherer whose ancestors lived in Europe 45,000 years ago, then add genes from an early farmer who migrated to the continent about 9000 years ago. An extensive study of ancient DNA now points to a third ingredient for most Europeans: blood from an Asian nomad who blew into central Europe perhaps only about 4000 or 5000 years ago.