Many, many years ago I wrote a few papers about — amongst other things — the statistical naivety of the EBM gang. I enjoyed writing them, but I doubt they changed things very much. EBM, as Bruce Charlton pointed out many years ago, has many of the characteristics of a cult (or was it a zombie? You cannot kill it because it is already dead). Anyway, one of the reasons I dislike a lot of EBM advocates is that I think they do not understand what RCTs are, and of course they are often indifferent to science. Now, in one sense, these two topics are not linked. Science is meant to be about producing broad-ranging theories that both predict how the world works and explain what goes on. Sure, there may be lots of detail on the way, but that is why our understanding of DNA and genetics today is so different from that of 30 years ago.
By contrast, RCTs are usually a form of A/B testing. Vital, in many instances, but an activity that is often a terminal side road rather than a crossroads on the path to understanding how the world works. That is not to say they are unimportant, or unworthy of serious intellectual endeavour. But that endeavour is for those capable of thinking hard about statistics and design. Instead, the current academic world treats running RCTs, or enrolling people in them, as some sort of intellectual activity: it isn't; rather, it is part of professional practice, just as seeing patients is. Companies used to do it all themselves many decades ago, and they didn't expect to get financial rewards from the RAE/REF for this sort of thing. There are optimal ways to stack shelves that maths geeks get excited about, but those who do the stacking do not share in the kudos — as in the cudos [1] — of discovery.
Anyway, this is all by way of highlighting a post I came across by Frank Harrell, with the title:
Randomized Clinical Trials Do Not Mimic Clinical Practice, Thank Goodness
Harrell is the author of one of those classic books… But I think the post speaks to something basic. RCTs are not facsimiles of clinical practice, but a sort of bioassay to guide what might go on in the clinic. Metaphors, if you will; acts of persuasion, not brittle mandates. This all leaves aside worthy debates on the corruption that has overtaken many areas of clinical measurement, but others can speak to that better than I can.
[1] I really couldn’t resist.
It is not a real crisis, but perhaps not far from it. People have looked upon science as producing ‘reliable knowledge’, and now it seems as though much science is not very reliable at all. If it isn’t about truth, why should we consider it special? Well, a good question for an interested medical student to think about, but a hard one. Part of the answer lies with statistical paradigms (or at least the way we like to play within those paradigms), part with the sociology and economics of careers in science, and part with the means by which modern societies seek to control and fund ‘legitimate’ science. Let me start with a few quotes to illustrate some of the issues.
A series of simple experiments were published in June 1947 in the Proceedings of the Royal Society by Lord Rayleigh–a distinguished Fellow of the Society–purporting to show that hydrogen atoms striking a metal wire transmit to it energies up to a hundred electron volts. This, if true, would have been far more revolutionary than the discovery of atomic fission by Otto Hahn. Yet, when I asked physicists what they thought about it, they only shrugged their shoulders. They could not find fault with the experiment yet not one believed in its results, nor thought it worth while to repeat it. They just ignored it. [and they were right to do so]
The Republic of Science, Michael Polanyi
[talking about our understanding of obesity] Here’s another possibility: The 600,000 articles — along with several tens of thousands of diet books — are the noise generated by a dysfunctional research establishment. Gary Taubes.
“We could hardly get excited about an effect so feeble as to require statistics for its demonstration.” David Hubel, Nobel Laureate (quoted in Brain and Visual Perception)
The value of academics’ work is now judged on publication rates, “indicators of esteem,” “impact,” and other allegedly quantitative measures. Every few years in the UK, hundreds of thousands of pieces of academic work, stored in an unused aircraft hangar, are sifted and scored by panels of “experts.” The flow of government funds to academic departments depends on their degree of success in meeting the prescribed KPIs [key performance indicators]. Robert Skidelsky
Another computing metaphor might be useful at this stage, but it points us in a very different direction. Based on the paradigm that pharma companies have to follow to obtain drug registration, we have assumed that guides to clinical practice have to be hierarchical and bureaucratically “quality assured.” This is most obvious in countries such as the United Kingdom (UK), where the state wishes to be the sole arbiter of how people are treated (in part because much health care is taxpayer funded, but also because the state likes to assert control of health). The World Wide Web offers us another model of (clinical) expertise, one in which the idea of a single central authority assuring the truth or falsity of statements has been replaced by a community — or cacophony, depending on one’s viewpoint — of voices. Here expertise is distributed, and the measures of truth are perhaps much more nuanced and fluid, subject to change as data and clinical experience change. Curiously, it is this latter model, albeit using earlier methods of communication, that was the basis for the growth of scientific ideas and our interpretation of evidence about the world. It might be worth revisiting.