Sunday 17 May 2015

Will traditional science journals disappear?


The Royal Society has been celebrating the 350th anniversary of Philosophical Transactions, the world's first scientific journal, by holding a series of meetings on the future of scholarly scientific publishing. I followed the whole event on social media, and was able to attend in person for one day. One of the sessions followed a Dragons' Den format, with speakers having 100 seconds to convince three dragons – Onora O'Neill, Ben Goldacre and Anita de Waard – of the fund-worthiness of a new idea for science communication. Most were light-hearted, and there was a general mood of merriment, but the session got me thinking about what kind of future I would like to see. What I came up with was radically different from our current publishing model.

Most of the components of my dream system are not new, but I've combined them into a format that I think could work. The overall idea had its origins in a blogpost I wrote in 2011, and has points in common with David Colquhoun's submission to the dragons, in that it would adopt a web-based platform run by scientists themselves. This is what already happens with the arXiv for the physical sciences and bioRxiv for biological sciences. However, my 'consensual communication' model has some important differences. Here are the steps I envisage an author going through:
1.  An initial protocol is uploaded before a study is done, consisting only of an introduction, a detailed methods section and an analysis plan, with the authors anonymised. An editor then assigns reviewers to evaluate it. This aspect of the model draws on features of registered reports, as implemented in the neuroscience journal Cortex. There are two key scientific advantages to this approach: first, reviewers are able to improve the research design, rather than criticise studies after they have been done. Second, there is a record of what the research plan was, which can then be compared to what was actually done. This does not confine the researcher to the plan, but it does make transparent the difference between planned and exploratory analyses.
2. The authors get a chance to revise the protocol in response to the reviews, and the editor judges whether the study is of an adequate standard, and if necessary solicits another round of review. When there is agreement that the study is as good as it can get, the protocol is posted as a preprint on the web, together with the non-anonymised peer reviews. At this point the identity of authors is revealed.
3. There are then two optional extra stages that could be incorporated:
a) The researcher can solicit collaborators for the study. This addresses two issues raised at the Royal Society meeting – first, many studies are underpowered; duplicating a study across several centres could help in cases where there are logistic problems in getting adequate sample sizes to give a clear answer to a research question. Second, collaborative working generally enhances reproducibility of findings.
b)  It would make sense for funding, if required, to be solicited at this point – in contrast to the current system where funders evaluate proposals that are often only sketchily described. Although funders currently review grant proposals, there is seldom any opportunity to incorporate their feedback – indeed, very often a single critical comment can kill a proposal.
4. The study is then completed, written up in full, and reviewed by the editor. Provided the authors have followed the protocol, no further review is required. The final version is deposited with the original preprint, together with the data, materials and analysis scripts.
5. Post-publication discussion of the study is then encouraged by enabling comments.
What might a panel of dragons make of this? I anticipate several questions.
Who would pay for it? Well, if arXiv is anything to go by, costs of this kind of operation are modest compared with conventional publishing. They would consist of maintaining the web-based platform, and covering the costs of editors. The open access journal PeerJ has developed an efficient e-publishing operation and charges $99 per author per submission. I anticipate a similar charge to authors would be sufficient to cover costs.
Wouldn't this give an incentive to researchers to submit poorly thought-through studies? There are two answers to that. First, half of the publication charge to authors would be required at the point of initial submission. Although this would not be large (e.g. £50) it should be high enough to deter frivolous or careless submissions. Second, because the complete trail of a submission, from pre-print to final report, would be public, there would be an incentive to preserve a reputation for competence by not submitting sloppy work.
Who would agree to be a reviewer under such a model? Why would anyone want to put their skills into improving someone else's work for no reward? I propose there could be several incentives for reviewers. First, it would be more rewarding to provide comments that improve the science, rather than just criticising what has already been done. Second, as a more concrete reward, reviewers could have submission fees waived for their own papers. Third, reviews would be public and non-anonymised, and so the reviewer's contribution to a study would be apparent. Finally, and most radically, where the editor judges that a reviewer has made a substantial intellectual contribution to a study, they could have the option of having this recognised in authorship.
Why would anyone who wasn't a troll want to comment post-publication? We can get some insights into how to optimise comments from the model of the NIH-funded platform PubMed Commons. They do not allow anonymous comments, and require that commenters have themselves authored a paper that is listed on PubMed. Commenters could also be offered incentives such as a reduction of submission costs to the platform. To this one could add ideas from commercial platforms such as eBay, where sellers are rated by customers, so you can evaluate their reputation. It should be possible to devise some kind of star rating – both for the paper being commented on, and for the person making the comment. This could provide motivation for good commenters and make it easier to identify the high-quality papers and comments.
I'm sure that any dragon from the publishing world would swallow me up in flames for these suggestions, as I am in effect suggesting a model that would take commercial publishers out of the loop. However, it seems worth serious consideration, given the enormous sums that could be saved by universities and funders by going it alone.  But the benefits would not just be financial; I think we could greatly improve science by changing the point in the research process when reviewer input occurs, and by fostering a more open and collaborative style of publishing.


This article was first published on the Guardian Science Headquarters blog on 12 May 2015

7 comments:

  1. This sounds like a solid system to me. Particularly the idea of awarding grants after the method has been published. I proposed this 'Kickstarter' type of funding some years ago on http://researchity.net.

    But I'd like to point out a few alternatives/stumbling blocks/improvements:

    1. Why charge at all? $99 may seem like a pittance to a professor at a UK university (particularly compared to the extortionate open access fees) but in many countries challenged by exchange rate inequities, this could represent a significant proportion of income - particularly if it precedes award of funding.

    2. Avoiding trolls or frivolous submissions could be achieved just as well by a Stack Exchange-like reputation system, where anonymity could even be preserved.

    3. The system could just as easily be paid for by the Government by taking a fraction of the money it funnels into libraries to purchase journals. We should stop pretending that academic publishers are anything but rent-seekers who could not survive without massive indirect subsidy from the tax base.

    4. Your approach is more suited to the experimental branches of scholarly inquiry. It could be modified for things like history, literary studies, linguistics, etc. or even literature surveys as a sort of funding filter.

    5. You still need to solve the issue of 'discoverability' and 'curation', which could be achieved by allowing for some sort of 'editorial collectives' - groups of people who regularly share the top articles submitted to the archive in a sort of 'journal-like' manner.

    6. One way to encourage participation is to include it in the evaluation of institutions for funding purposes.

    7. The other issue this could solve is the obsession with format and length limitations.

    8. You can also start at a much earlier stage and involve a larger group of stakeholders to help formulate relevant ideas. I've suggested something like that in http://researchity.net/2011/03/02/community-research-and-knowledge-exchange-network-for-neuroscience/.

  2. Very interesting suggestions. I'm not sure it's feasible in the current state of things, though. Many studies are approved by an IRB and funded under a certain protocol. Changing that protocol afterwards based on reviewers' comments might be awfully difficult. And submitting an idea of a protocol before even getting approved or funded sounds tricky.
    God knows I'm all in favor of collaboration and I find current scientific research way too individualistic, so I'm supporting you on that. But the funding problem remains. Grants are getting smaller and rarer. Obtaining a grant for 50 people is not easy, and being a small team does not mean that you will produce underpowered results.

  3. I like this model in a lot of ways. However, I would feel anxious putting forward a protocol which I had carefully thought through and spent a lot of time on BEFORE I had funding. As we know, many, many research grants are not funded, even when they are highly rated by reviewers. Once your protocol was in the public domain, what is to stop someone else adapting it and applying for funding? Or do we want to encourage that as a way of more rapidly advancing science?

    I note as well that more and more funders are allowing applicants to respond to reviewers' comments. This does allow you to make minor tweaks to your method after peer review.

  4. These are interesting suggestions; I have a few remarks though.

    1. The funding system largely relies on publication metrics, as it's a way to quickly evaluate researchers (of course, this argument can be a legitimate debate in itself, but the point is that decision makers appear to fully subscribe to this idea). A simple question thus follows: how would the journals derived from your proposal maximize their impact factor and/or their attractiveness to good studies?

    2. Starting the peer review even before the beginning of the project regularly comes up as an alternative (e.g. http://blogs.discovermagazine.com/neuroskeptic/2013/07/13/4129/#.VXba5V2-Rx0).

    But a major issue remains undiscussed (as far as I know): ideas are not so cheap, and scientific competition can be fierce. We have all encountered cases where two different labs release similar work simultaneously, and may be tempted to delay the reviewing process of their opponent if given the opportunity to do so. Making a statement of what you will do, and why and how you'll do it, increases the possibility that some of these ideas will be simultaneously tested elsewhere - perhaps by a bigger lab and by faster people who capitalize on your thought process, and don't even want to go for a slow and heavy pre-submission scheme like the one you suggest.

    3. In my opinion making de-anonymization an incentive (e.g. for reviewing) can be dangerous.

    We already work in small-world research areas, and criticising someone's work can be detrimental in many regards: young scientists' careers and their very limited number of publications are of course at play, but you also bias the whole selection of reviewers if your reviews are known to be polite or tough.

    Of course, there now exist many ways to raise scientific issues anonymously (e.g. PubPeer), but a de-anonymized system imposes a small, but I believe systematic, bias towards fawning and arguments from authority. Worse, you can imagine that future metrics will take reviewing credits into account, and will thus strengthen reviewers' inclination to soften their remarks.

    Replies
    1. Thanks for commenting. I am not swayed by your arguments but good to think through why:
      Re 1. More enlightened funders do not use the Journal Impact Factor. It is eschewed, and rightly so, by the UK's Higher Education Funding Council and the Wellcome Trust, who are signatories of DORA (the San Francisco Declaration on Research Assessment). I agree it is a current problem in some circles, but I'm optimistic the tide is turning against it: see http://cdbu.org.uk/universities-journal-impact-factors/
      re point 2, if there is pre-registration, then priority for an idea is date-stamped in the protocol. Any theft of ideas would be detectable. (I have to say, in my area, this is seldom an issue anyhow, because nobody does exactly the same thing)
      re point 3: I just disagree! There are upsides and downsides of anonymity, but the downsides are much worse in my view. You might want to check out the reviews of my most recent paper in PeerJ (which are public); I do not see any fawning; there is one ultra-nice one but it is by a very senior figure in the field who does not need to suck up to me; the most critical one is by a junior person. My attitude to the junior person is respect, not a desire to get my own back in future! And the fact that others can see her review can only benefit her reputation as someone who can critique a study well.

    2. Thanks for the reply. Just to follow up on this discussion.

      I don't think that making explicit a rule against impact-factor-based evaluation, following the strategy of the Wellcome Trust, will solve the issue. On the contrary, I would argue that impact factors are heavily influential, not so much because the scientific community is formally asked to use them (by funders or their hierarchy), but because impact factors provide a shortcut. Reading someone's papers in detail and evaluating the researcher's competence is much more time-consuming than checking the journal where his/her work has already been evaluated. So my bet would be that as long as these metrics are not supplanted by better ones, they will remain influential.

      Regarding anonymity, we apparently have different intuitions and experiences, but I agree that each solution has its own advantages.

      I believe, however, that most anonymity-induced issues can be resolved. For instance, reviewing architectures à la Stack Exchange, where reputation points come with useful questions and comments, have proved remarkably effective. By contrast, signed reviews will always come with the conflicts of interest that arise from our professional, social and economic relationships.

  5. Many thanks for the interesting link to the RS meeting. Section IIb contains a nice discussion of the physics arXiv, but I would like to have seen more on this. Many of us physicists find the dual system works very well - arXiv for distribution, but serious, detailed reviews from the relevant journal.
    Looking forward to meeting you at the Boyle Summer School!
