Thursday 9 January 2014

Off with the old and on with the new: the pressures against cumulative research

Yesterday I escaped a very soggy Oxford to make it down to London for a symposium on "Increasing value, reducing waste" in research. The meeting marked the publication of a special issue of the Lancet containing five papers and two commentaries, which can be downloaded here.

I was excited by the symposium because, although the focus was on medicine, it raised a number of issues that have much broader relevance for science, several of which I have written about on this blog: pre-registration of research, criteria used by high-impact journals, ethics regulation, academic backlogs, and incentives for researchers. It was impressive to see that major players in the field of medicine now recognize that there is a massive problem of waste in research. Better still, they are taking seriously the need to devise ways in which this could be fixed.

I hope to blog about more of the issues that came up at the meeting, but for today I'll confine myself to one topic that I hadn't really thought about much before, yet which I now see as important: the need for research that builds on previous research, and the current pressures working against this.

Iain Chalmers presented one of the most disturbing slides of the day, a forest plot of effect sizes found in medical trials for a treatment to prevent bleeding during surgery.
Based on Figure 3 of Chalmers et al. (2014)
Time is along the x-axis, and the horizontal line corresponds to a result where the active and control treatments do not differ. Points below the line whose fins (confidence intervals) do not cross it show a beneficial effect of treatment. The graph shows that the effectiveness of the treatment was clearly established by around 2002, yet a further 20 studies, involving several hundred patients, were reported in the literature after that date. Chalmers made the point that it is simply unethical to do a clinical trial if previous research has already established an effect. The problem is that researchers often don't check the literature to see what has already been done, and so there is wasteful repetition of studies. In the field of medicine this is particularly serious, because patients may be denied the most effective treatment if they enrol in a research project.
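
For readers less familiar with this kind of display, the logic behind such a cumulative picture can be sketched in a few lines of Python. This is a minimal illustration only: the trial years, effects, and standard errors below are invented, not the Chalmers et al. data, and the simple fixed-effect pooling stands in for the more careful methods used in real meta-analyses.

```python
# Minimal sketch of a cumulative meta-analysis plot.
# All numbers are invented for illustration; they are NOT
# the data behind Figure 3 of Chalmers et al. (2014).
import numpy as np
import matplotlib.pyplot as plt

# Hypothetical trials: year of publication, effect size on a log
# odds-ratio scale (negative = treatment better), and standard error.
years   = np.array([1990, 1993, 1996, 1999, 2002, 2005, 2008, 2011])
effects = np.array([-0.2, -0.5, -0.3, -0.6, -0.4, -0.5, -0.4, -0.5])
ses     = np.array([0.40, 0.35, 0.30, 0.25, 0.15, 0.20, 0.18, 0.15])

# Fixed-effect inverse-variance pooling, updated after each new trial.
weights    = 1.0 / ses**2
cum_effect = np.cumsum(effects * weights) / np.cumsum(weights)
cum_se     = np.sqrt(1.0 / np.cumsum(weights))

fig, ax = plt.subplots()
ax.errorbar(years, cum_effect, yerr=1.96 * cum_se, fmt="o", capsize=4)
ax.axhline(0.0, linestyle="--")  # the no-difference line
ax.set_xlabel("Year of publication")
ax.set_ylabel("Pooled effect (log odds ratio)")
ax.set_title("Cumulative meta-analysis (invented data)")
plt.show()
```

Once the pooled interval sits clearly below zero and stays there, as it does here partway through the series, each additional trial changes the picture very little, which is exactly the point Chalmers was making.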

Outside medicine, I'm not sure this is so much of an issue. In fact, as I've argued elsewhere, in psychology and neuroscience I think there's more of a problem with lack of replication. But there is certainly widespread neglect of prior research. I lose count of the number of papers I review where the introduction presents a biased view of the literature, slanted to support the authors' conclusions. For instance, if you are interested in the relation between auditory deficits and children's language disorders, it is possible to write an introduction presenting this association as an established fact, or one arguing that it has been comprehensively debunked. I have seen both.

Is this just lazy, biased or ignorant authors? In part, I suspect it is. But I think there is a deeper problem, which has to do with the insatiable demand for novelty shown by many journals, especially high-impact ones. These journals are typically under heavy pressure on page space and often allow 500 words or fewer for an introduction. Unless authors can refer to a systematic review of the topic they are working on, they are obliged to give the briefest account of prior literature. It seems we no longer value the idea that research should build on what has gone before: rather, everyone wants studies so exciting that they stand alone. Indeed, describing a study as 'incremental' is typically its death knell in a funding committee.

We need good syntheses of past research, yet these are not valued because they are not deemed novel. One point made by Iain Chalmers was that funders have in the past been reluctant to give grants for systematic reviews. Reviews also aren't rated highly in academia: for instance, I'm proud of a review on mismatch negativity that I published in Psychological Bulletin in 2007. It not only condensed and critiqued existing research, but also discovered patterns in data that had not previously been noted. However, for the REF, and for my publications list on a grant renewal, reviews don't count.

We need a rethink of our attitude to reviews. Medicine has led the way by specifying rigorous criteria for systematic reviews, so that authors can't just cherry-pick specific studies of interest. But it has also shown us that such reviews are an invaluable part of the research process. They help ensure that we do not waste resources by addressing questions that have already been answered, and they encourage us to think of research as a cumulative, developing process rather than a series of disconnected, dramatic events.

Reference
Chalmers, I., Bracken, M. B., Djulbegovic, B., Garattini, S., Grant, J., Gülmezoglu, A. M., Howells, D. W., Ioannidis, J. P. A., & Oliver, S. (2014). How to increase value and reduce waste when research priorities are set. The Lancet. doi:10.1016/S0140-6736(13)62229-1

5 comments:

  1. Thanks for drawing attention to these important issues, Dorothy. US social scientists deserve most credit for initiating (in the 1970s) improvements in the methodological quality of reviews of research. Medicine started to get its act together in the 1980s, but there is still a long way to go before there will be recognition in some high academic places that failure to review systematically what can already be known from research leads to avoidable suffering and death, and wasted resources in health care and health research.

  2. Thanks for this post, Dorothy. I agree with you, of course, about the importance of these types of systematic reviews, which often comprise or lead to major new theories. For myself, the work that I am most proud of has no new empirical data in it, yet I would like to think that some of this work has been or will be influential in my field. I also agree that there is a sense in which the desire for novelty has sometimes trumped good old-fashioned cumulative science. But I have two comments on your post.

    * My first point is that I think it is time we put to bed the idea that reviews "don't count" for the REF. The Psychological Bulletin article that you cite is a major piece of theoretical work, highly cited, and vetted by the most stringent process of peer review. I have no idea why this wouldn't meet the HEFCE definition of research. We certainly put several such pieces into our 2008 submission -- as did departments like Oxford -- and neither of us did too badly (these submissions are all publicly available on the HEFCE website, so anyone can inspect the kind of research the top departments in any field are producing). I would have thought that arguing that major theoretical work of this nature "doesn't count" is not the way to encourage it!

    * My second point is that for most of what we do, we are the people who decide what is good science: as reviewers, as panel members, as editors, etc. If people think that the wrong sort of work is getting funded, they should get involved in reviewing, or nominate themselves for membership of a funding panel. If people want to change what gets into the science magazines with high IFs, they should take up the opportunities they get to review this material (including the crucial pre-submission inquiries). In this respect, I've had the privilege over the past few years of serving on one of the main RCUK funding panels, and I've been surprised at the low level of engagement in the reviewing process ... it is not uncommon to have 10 individuals decline to review a proposal. This is our system, where we decide what gets funded, and if we think that there is a better way to do science, then we have to engage with the process.

    1. Thanks, Kathy. As we have discussed on Twitter, it does seem that, re point 1, there are differences between institutions in how they view reviews. In our dept, we were told to pick our 4 best papers, excluding reviews. Yesterday I saw a colleague from Oxford Zoology, who complained of the same directive. Reviews are not seen as 'original research', even though they may come to novel conclusions and act as landmark papers in the field. And I'm currently writing a renewal for the Wellcome Trust and am disappointed to be asked to give a publications list that excludes reviews.
      Agree also re the need to be good citizens in terms of serving on panels and acting as reviewers. But I think we need to be careful about assuming that a negative response to a review request indicates disengagement. I started logging my reviewing last year, and in 2013 I had 98 requests to review papers and 10 requests to review major grants. For the first half of the year I was also co-chairing a grants panel. I say yes to 20-25% of review requests, so I might look disengaged, but I am actually a good citizen in this regard.

  3. I completely agree with you, and I think the way forward is to concentrate on building theory that applies beyond a single specific type of experiment. With a theoretical framework in place, it will be easier to orient yourself in the literature and establish novelty without cherry-picking your intro. I also think that your suggestion of encouraging comprehensive reviews and syntheses is a great step in this direction. The people most interested in and best equipped for building theories are often not the ones best placed to follow the whole experimental literature closely. Reliable reviews could serve as a great entry point for theorists, and a way to avoid the interdisciplinitis of building irrelevant theories.

    1. Thanks for this comment. I was watching a documentary on Crick and Watson recently and found myself thinking that they would never have survived our current research climate: too much time spent thinking and theorising!
