Being awarded a medical degree in the UK confers certain privileges in terms of clinical practice. I think most people would therefore believe that what students are required to know (and what universities, to some extent, are required to teach) is fairly clearly laid out somewhere. This is not the case, however, certainly not in comparison with gaining a driving licence or learning to fly a plane. There are high-level descriptions, but exposure differs widely between medical schools. At the postgraduate level, things are probably clearer. Other countries do things differently. I tend to think that measures of process, as in most of education, have a limited role. They are too easy to game, for one thing, and what you have to do to ensure learning takes place is hard to capture on spreadsheets. What you can do, however, is check the outcomes: you can test that students are capable of what you want them to know and, with effort (and scale), you can do this robustly. You just have to remember that whilst you want them to pass the exam, education is about more than passing the exam.
One problem we still therefore have to grapple with is knowing what you want students to learn. One popular approach is to restrict teaching (and testing) to a limited range of conditions. The arguments for this approach are obvious: there is an awful lot of medical knowledge out there, and we have to guide students as to what we think is important. This approach leads to the ‘we expect students to know the 10 commonest presentations in speciality X’ formulation. I go along with this in my own discipline. If you look at different dermatology textbooks you quickly see that although there is some core material, the differences are enormous, and in general the level of detail in many textbooks is unrealistic in terms of course structures. Recommending a book, without lots of annotation, seems inappropriate. So, we provide very detailed guidance on what material students are expected to master (for example, see skincancer909 for skin cancer; there is similar material for ‘rashes’ on the university teaching pages (firewalled)).
There are, however, problems with this approach, and they relate to how a confident diagnosis is achieved. If you diagnose a scaly red rash as psoriasis, you are doing two things. First, you are saying that the physical signs match those you see in psoriasis; second, you are also saying that the signs match those seen in psoriasis more than they match those seen in other conditions. I am not trying to represent this formally, but the decision is a function of the relative likelihoods of psoriasis and not-psoriasis. In the schema below, I have represented the ‘core knowledge’ as circle 1. But diagnosing these conditions requires you to have knowledge of the other (non-core) conditions in circle 2. Circle 2 will usually be larger than circle 1. Then there is circle 3, representing those conditions that are either much rarer or much less important. Which of these do you mention?
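The ‘likelihood of psoriasis / not-psoriasis’ point can be made concrete with a toy calculation. The sketch below applies Bayes’ rule to a hypothetical ‘scaly red rash’ presentation; all the conditions, priors, and likelihood values are invented for illustration, not clinical figures. The point is structural: the posterior for psoriasis depends on how well the signs fit each alternative, not just on how well they fit psoriasis.

```python
# Toy Bayesian diagnosis for a 'scaly red rash' (all numbers invented).
# Posterior(condition) ∝ prior(condition) × P(signs | condition).

# Hypothetical prevalence among patients presenting with this rash.
priors = {"psoriasis": 0.40, "eczema": 0.35, "tinea": 0.15, "other": 0.10}

# Hypothetical P(observed signs | condition).
likelihoods = {"psoriasis": 0.70, "eczema": 0.20, "tinea": 0.10, "other": 0.05}

unnormalised = {d: priors[d] * likelihoods[d] for d in priors}
total = sum(unnormalised.values())
posterior = {d: unnormalised[d] / total for d in unnormalised}

for d, p in sorted(posterior.items(), key=lambda kv: -kv[1]):
    print(f"{d:10s} {p:.2f}")
```

Note that the denominator sums over every condition considered: to be confident in psoriasis you must be able to assign likelihoods to eczema, tinea, and the rest — which is exactly why knowledge of circle 2 matters.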
Of course, the content of the circles is not just a measure of frequency, but has to include a weighting for severity and for conditions ‘not to be missed’. The ability to diagnose a lesion as a basal cell carcinoma confidently means knowing that a particular lesion is not a squamous cell carcinoma, a melanoma, or one of a range of other tumours. You can only diagnose a BCC confidently when you know that the lesion is not something else. As circle 1 becomes small in comparison with circle 2, diagnostic confidence drops. It is for these reasons that the classic ‘compare and contrast’ questions, and the ability to run through a differential diagnosis, matter more for learners than for experts.
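The drop in confidence can be sketched in the same toy framework (again, all numbers are invented). An expert seeing a classic BCC can assign low likelihoods to SCC and melanoma; a learner who cannot tell the three apart is forced to assign them roughly equal likelihoods, and their posterior for BCC collapses back to the base rate. Same lesion, same priors, very different confidence.

```python
# Sketch (invented numbers): not knowing the alternatives erodes confidence.

def posterior_bcc(likelihoods, priors):
    """Posterior probability of BCC given per-condition likelihoods."""
    unnorm = {d: priors[d] * likelihoods[d] for d in priors}
    return unnorm["bcc"] / sum(unnorm.values())

# Hypothetical base rates for this kind of lesion.
priors = {"bcc": 0.60, "scc": 0.25, "melanoma": 0.15}

# Expert: the signs strongly favour BCC over the alternatives.
expert = {"bcc": 0.80, "scc": 0.10, "melanoma": 0.05}
# Learner: cannot discriminate, so every condition 'fits' equally well.
novice = {"bcc": 0.80, "scc": 0.80, "melanoma": 0.80}

print(f"expert confidence: {posterior_bcc(expert, priors):.2f}")
print(f"novice confidence: {posterior_bcc(novice, priors):.2f}")
```

With flat likelihoods the posterior simply equals the prior: if you cannot rule anything out, seeing the lesion has taught you nothing beyond the base rate.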
I do not have a solution, except to note that categorisation tasks (categorisation probably being the key skill we want students to acquire) are much more error-prone if the light you possess is so weak that most of the search space remains in darkness.