Lack of robustness of some basic science
I really am not surprised by the contents of this article in Nature. That many, if not most, findings in some areas of medical science are misleading or lacking in depth is no surprise. Cancer, or any other ‘hot’ area, will be a particular problem. The factors leading to this are many but surely include the following: the measures of scientific success are not outcome-based but intermediate measures such as publication and grant income; interest and awards are skewed to the short term; publication is less and less about honest communication, and more and more about measures of ‘output’ and career; and biology is very messy and allows all sorts of fudges, for good and bad. Scientists don’t have to invest their own money in claims about future health benefits, and there is always a tendency, even a necessity, to talk up one’s own work. This means that all the caveats and limitations of experiments are minimised. We confuse truth with ‘getting it past the reviewers’, and with getting renewal.
The scientific community assumes that the claims in a preclinical study can be taken at face value — that although there might be some errors in detail, the main message of the paper can be relied on and the data will, for the most part, stand the test of time. Unfortunately, this is not always the case. Although the issue of irreproducible data has been discussed among scientists for decades, it has recently received greater attention (see go.nature.com/q7i2up) as the costs of drug development have increased along with the number of late-stage clinical-trial failures and the demand for more effective therapies.
Over the past decade, before pursuing a particular line of research, scientists (including C.G.B.) in the haematology and oncology department at the biotechnology firm Amgen in Thousand Oaks, California, tried to confirm published findings related to that work. Fifty-three papers were deemed ‘landmark’ studies (see ‘Reproducibility of research findings’). It was acknowledged from the outset that some of the data might not hold up, because papers were deliberately selected that described something completely new, such as fresh approaches to targeting cancers or alternative clinical uses for existing therapeutics. Nevertheless, the scientific findings were confirmed in only 6 (11%) of the cases. Even knowing the limitations of preclinical research, this was a shocking result.
Some non-reproducible preclinical papers had spawned an entire field, with hundreds of secondary publications that expanded on elements of the original observation but did not actually seek to confirm or falsify its fundamental basis. More troubling, some of the research has triggered a series of clinical studies, suggesting that many patients had subjected themselves to a trial of a regimen or agent that probably would not work.