Saturday, 7 April 2018

Should research funding be allocated at random?


Earlier this week, a group of early-career scientists had an opportunity to quiz Jim Smith, Director of Science at the Wellcome Trust. The ECRs were attending a course on Advanced Methods for Reproducible Science that I ran with Chris Chambers and Marcus Munafo, and Jim kindly agreed to come along for an after-dinner session which started in a lecture room and ended in the bar.

Among other things, he was asked about the demand for small-scale funding. In some areas of science, a grant of £20-30K could be very useful in enabling a scientist to employ an assistant to gather data, or to buy a key piece of equipment. Jim pointed out that from a funder’s perspective, small grants are not an attractive proposition, because the costs of administering them (finding reviewers, running grant panels, etc.) are high relative to the benefits they achieve. And it’s likely that there will be far more applicants for small grants.

This made me wonder whether we might retain the benefits of small grants while dispensing with the bureaucracy. A committee would still have to scrutinise proposals to check that they met the funder's remit and were of high methodological quality; provided they did, each proposal could be entered into a pool, with winners selected at random.

Implicit in this proposal is the idea that it isn't possible to rank applications reliably. If a lottery approach meant we ended up funding weak research and denying funds to excellent projects, this would clearly be a bad thing. But ranking of research proposals by committee and/or peer review is notoriously unreliable, and it is hard to compare proposals that span a range of disciplines. Many people feel that funding is already a lottery, albeit an unintentional one, because the same grant that succeeds in one round may be rejected in the next. Interviews are problematic because they mean that a major decision – fund or not – is made on the basis of a short sample of a candidate's behaviour, and that people with great proposals but poor social skills may be turned down in favour of glib individuals who can sell themselves more effectively.

I thought it would be interesting to float this idea in a Twitter poll.  I anticipated that enthusiasm for the lottery approach might be higher among those who had been unsuccessful in getting funding, but in fact, the final result was pretty similar, regardless of funding status of the respondent: most approved of a lottery approach, with 66% in favour and 34% against.


As is often the way with Twitter, the poll encouraged people to point me to an existing literature I had not been aware of. In particular, last year, Mark Humphries (@markdhumphries) made a compelling argument for randomness in funding allocations, focusing on the expense and unreliability of current peer review systems. Hilda Bastian and others pointed me to work by Shahar Avin, who has done a detailed scholarly analysis of policy implications for random funding – in the course of which he mentions three funding systems where this has been tried. In another manuscript, Avin presented a computer simulation to compare explicit random allocation with peer review. The code is openly available, and the results from the scenarios modelled by Avin are provocative in supporting the case for including an element of randomness in funding. (Readers may also be interested in this simulation of the effect of luck on a meritocracy, which is not specific to research funding but has some relevance.) Others pointed to even more radical proposals, such as collective allocation of science funding, giving all researchers a limited amount of funding, or yoking risk to reward.
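The intuition behind such simulations can be sketched in a few lines. The following is my own toy model, not Avin's actual code: proposals have a true merit score, peer review observes merit plus noise and funds the top-ranked slice, while a lottery draws at random from everything passing a quality triage. As the review noise grows, the gap between the two schemes shrinks.

```python
import random

random.seed(1)

def simulate(n_proposals=1000, n_funded=100, review_noise=1.0):
    """Toy comparison of noisy peer review vs a triaged lottery."""
    merit = [random.gauss(0, 1) for _ in range(n_proposals)]

    # Peer review: rank by merit-plus-noise, fund the top slice.
    observed = [m + random.gauss(0, review_noise) for m in merit]
    by_review = sorted(range(n_proposals), key=lambda i: -observed[i])[:n_funded]

    # Lottery: random draw from proposals passing a quality threshold (triage).
    pool = [i for i in range(n_proposals) if merit[i] > -0.5]
    by_lottery = random.sample(pool, n_funded)

    mean = lambda idx: sum(merit[i] for i in idx) / len(idx)
    return mean(by_review), mean(by_lottery)

review_merit, lottery_merit = simulate()
print(f"mean merit funded by review:  {review_merit:.2f}")
print(f"mean merit funded by lottery: {lottery_merit:.2f}")
```

Try increasing `review_noise`: with a perfectly reliable review the ranking wins comfortably, but the noisier the assessment, the less is lost by drawing lots among adequate proposals.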

Having considered these sources and a range of additional comments on the proposal, I think it does look as if it would be worth a funder such as Wellcome Trust doing a trial of random allocation of funding for proposals meeting a quality criterion. As noted by Dylan Wiliam, the key question is whether peer review does indeed select the best proposals. To test this, those who applied for Seed Funding could be randomly directed to either stream A, where proposals undergo conventional evaluation by committee, or stream B, where the committee engages in a relatively light touch process to decide whether to enter the proposal into a lottery, which then decides its fate. Streams A and B could each have the same budget, and their outcomes could be compared a few years later.
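The proposed trial is simple enough to express directly. This sketch is hypothetical (the field names and budget figure are my own inventions, not anything a funder has specified): applicants are split at random between conventional review (stream A) and a light-touch triage followed by a lottery (stream B).

```python
import random

random.seed(42)

def assign_streams(applicants, lottery_awards=10):
    """Randomly split applicants between two funding streams;
    stream B winners are drawn by lottery from triaged entries."""
    shuffled = applicants[:]
    random.shuffle(shuffled)
    half = len(shuffled) // 2
    stream_a, stream_b = shuffled[:half], shuffled[half:]

    # Stream B: light-touch quality triage, then a random draw.
    eligible = [a for a in stream_b if a["meets_quality_bar"]]
    winners_b = random.sample(eligible, min(lottery_awards, len(eligible)))
    return stream_a, winners_b

# Hypothetical applicant pool; two-thirds pass the triage.
applicants = [{"id": i, "meets_quality_bar": i % 3 != 0} for i in range(60)]
stream_a, winners_b = assign_streams(applicants)
print(len(stream_a), "sent to conventional review;", len(winners_b), "funded by lottery")
```

Stream A's proposals would go to the usual committee; comparing the two streams' outcomes a few years on is the actual experiment.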

One reason I’d recommend this approach specifically for Seed Funding is because of the disproportionate administrative burden for small grants. There would, in principle, be no reason for not extending the idea to larger grants, but I suspect that the more money is at stake, the greater will be the reluctance to include an explicit element of chance in the funding decision. And, as Shahar Avin noted, very expensive projects need long-term support, which makes a lottery approach unsuitable.

Some of those responding to the poll noted potential drawbacks. Hazel Phillips suggested that random assignment would make it harder to include strategic concerns, such as career stage or importance of topic. But if the funder had particular priorities of this kind, they could create a separate pool for a subset of proposals that met additional criteria and that would be given a higher chance of funding. Another concern was gaming by institutions or individuals submitting numerous proposals in scattergun fashion. Again, I don’t see this as a serious objection, as (a) use of an initial quality triage would weed out proposals that were poorly motivated and (b) applicants could be limited to one proposal per round. Most of the other comments that were critical expressed concerns about the initial triage: how would the threshold for entry into the pool be set?  A triage stage may look as if one is just pushing back the decision-making problem to an earlier step, but in practice, it would be feasible to develop transparent criteria for determining which proposals didn’t get into the pool: some have methodological limitations which mean they couldn’t give a coherent answer to the question they pose; some research questions are ill-formed; others have already been answered adequately -  this blogpost by Paul Glasziou and Iain Chalmers makes a good start in identifying characteristics of research proposals that should not be considered for funding.
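The strategic-priorities point is easy to accommodate mechanically: rather than a separate pool, one could run a weighted draw in which flagged proposals hold extra tickets. A minimal sketch, with the weight and the `priority` flag purely illustrative:

```python
import random

random.seed(0)

def weighted_lottery(proposals, n_awards, priority_weight=2.0):
    """Draw winners without replacement; priority proposals get
    priority_weight tickets each, all others get one ticket."""
    winners = []
    pool = list(proposals)
    for _ in range(min(n_awards, len(pool))):
        weights = [priority_weight if p["priority"] else 1.0 for p in pool]
        pick = random.choices(pool, weights=weights, k=1)[0]
        winners.append(pick)
        pool.remove(pick)  # no proposal can win twice
    return winners

# Hypothetical round: 5 of 20 proposals flagged as strategic priorities.
proposals = [{"id": i, "priority": i < 5} for i in range(20)]
print([p["id"] for p in weighted_lottery(proposals, 4)])
```

The funder's priorities then change the odds, not the outcome: every triaged proposal retains a genuine chance.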

My view is that there are advantages for the lottery approach over and above the resource issues. First, Avin's analysis concludes that reliance on peer review leads to a bias against risk-taking, which can mean that novelty and creativity are discouraged. Second, once a proposal was in the pool, there would be no scope for bias against researchers in terms of gender or race – something that can be a particular concern when interviews are used to assess candidates. Third, the impact on the science community is also worth considering. Far less grief would be engendered by a grant rejection if you knew it was because you were unlucky, rather than because you were judged to be wanting. Furthermore, as noted by Marina Papoutsi, some institutions evaluate their staff in terms of how much grant income they bring in – a process that ignores the strong element of chance that already affects funding decisions. A lottery approach, where the randomness is explicit, would put paid to such practices.




Friday, 9 February 2018

Improving reproducibility: the future is with the young


I've recently had the pleasure of reviewing the applications to a course on Advanced Methods for Reproducible Science that I'm running in April together with Marcus Munafo and Chris Chambers.  We take a broad definition of 'Reproducibility' and cover not only ways to ensure that code and data are available for those who wish to reproduce experimental results, but also focus on how to design, analyse and pre-register studies to give replicable and generalisable findings.

There is a strong sense of change in the air. Last year, most applicants were psychologists, even though we prioritised applications in biomedical sciences, as we are funded by the Biotechnology and Biological Sciences Research Council and European College of Neuropsychopharmacology. The sense was that issues of reproducibility were not so high on the radar of disciplines outside psychology. This year things are different. We again attracted a fair number of psychologists, but we also have applicants from fields as diverse as gene expression, immunology, stem cells, anthropology, pharmacology and bioinformatics.

One thing that came across loud and clear in the letters of application to the course was dissatisfaction with the status quo. I've argued before that we have a duty to sort out poor reproducibility because it leads to enormous waste of time and talent of those who try to build on a glitzy but non-replicable result. I've edited these quotes to avoid identifying the authors, but these comments – all from PhD students or postdocs in a range of disciplines - illustrate my point:
  • 'I wanted to replicate the results of an influential intervention that has been widely adopted. Remarkably, no systematic evidence has ever been published that the approach actually works. So far, it has been extremely difficult to establish contact with initial investigators or find out how to get hold of the original data for re-analysis.' 

  • 'I attempted a replication of a widely-cited study, which failed. Although I first attributed it to a difference between experimental materials in the two studies, I am no longer sure this is the explanation.' 

  • 'I planned to use the methods of a widely cited study for a novel piece of research. The results of this previous study were strong, published in a high impact journal, and the methods apparently straightforward to implement, so this seemed like the perfect approach to test our predictions. Unfortunately, I was never able to capture the previously observed effect.' 

  • 'After working for several years in this area, I have come to the conclusion that much of the research may not be reproducible. Much of it is conducted with extremely small sample sizes, reporting implausibly large effect sizes.' 

  • 'My field is plagued by irreproducibility. Even at this early point in my career, I have been affected in my own work by this issue and I believe it would be difficult to find someone who has not themselves had some relation to the topic.' 

  • 'At the faculty I work in, I have witnessed that many people are still confused about or unaware of the very basics of reproducible research.'

Clearly, we can't generalise to all early-career researchers: those who have applied for the course are a self-selected bunch. Indeed, some of them are already trying to adopt reproducible practices, and to bring about change to the local scientific environment. I hope, though, that what we are seeing is just the beginning of a groundswell of dissatisfaction with the status quo. As Chris Chambers suggested in this podcast, I think that change will come more from the grassroots than from established scientists.

We anticipate that the greater diversity of subjects covered this year will make the course far more challenging for the tutors, but we expect it will also make it even more stimulating and fun than last year (if that is possible!). The course lasts several days and interactions between people are as important as the course content in making it work. I'm pretty sure that the problems and solutions from my own field have relevance for other types of data and methods, but I anticipate I will learn a lot from considering the challenges encountered in other disciplines.

Training early career researchers in reproducible methods does not just benefit them: those who attended the course last year have become enthusiastic advocates for reproducibility, with impacts extending beyond their local labs. We are optimistic that as the benefits of reproducible working become more widely known, the face of science will change so that fewer young people will find their careers stalled because they trusted non-replicable results.

Friday, 12 January 2018

Do you really want another referendum? Be careful what you wish for

Many people in my Twitter timeline have been calling for another referendum on Brexit. Since most of the people I follow regard Brexit as an unmitigated disaster, one can see they are desperate to adopt any measure that might stop it.

Things have now got even more interesting with arch-Brexiteer, Nigel Farage, calling yesterday for another referendum. Unless he is playing a particularly complicated game, he presumably also thinks that his side will win – and with an increased majority that will ensure that Brexit is not disrupted.

Let me be clear. I think Brexit is a disaster. But I really do not think another referendum is a good idea. If there's one thing that the last referendum demonstrated, it is that this is a terrible method for making political decisions on complicated issues.

I'm well-educated and well-read, yet at the time of the referendum, I understood very little about how the EU worked. My main information came from newspapers and social media – including articles such as this nuanced and thoughtful speech on the advantages and disadvantages of EU membership by Theresa May. (The contrast between this and her current mindless and robotic pursuit of extreme Brexit is so marked that I do wonder if she has been kidnapped and brainwashed at some point).

I was pretty sure that it would be bad for me as a scientist to lose opportunities to collaborate with European colleagues, and at a personal level I felt deeply European while also proud of the UK as a tolerant and fair-minded society. But I did not understand the complicated financial, legal, and trading arrangements between the UK and Europe, and I had no idea of possible implications for Northern Ireland – this topic was pretty much ignored by the media that I got my information from. As far as I remember, debates on the topic on TV were few and far between, and were couched as slanging matches between opposite sides – with Nigel Farage continually popping up to tell us about the dangers of unfettered immigration. I remember arguing with a Brexiteer group in Oxford Cornmarket who were distributing leaflets about the millions that would flow to the NHS if we left the EU, but who had no evidence to back up this assertion. There were some challenges to these claims on radio and TV, but the voices of impartial experts were seldom heard.

After the referendum, there were some stunning interviews with the populace exploring their reasons for voting. News reporters were despatched to Brexit hotspots, where they interviewed jubilant supporters, many of whom stated that the UK would now be cleansed of foreigners and British sovereignty restored. Some of them also mentioned funding of the NHS: the general impression was that being in the EU meant that an emasculated Britain had to put up with foreigners on British soil while at the same time giving away money to foreigners in Europe. The EU was perceived as a big bully that took from us and never gave back, and where the UK had no voice. The reporters never challenged these views, or asked about other issues, such as financial or other benefits of EU membership.

Of course there were people who supported Brexit for sound, logical reasons, but they seemed to be pretty thin on the ground. A substantial proportion of those voting seemed swayed by arguments about decreasing the number of foreigners in the UK and/or spending money on the NHS rather than 'giving it to Europe'.

Remainers who want another referendum seem to think that, now we've seen the reality of the financial costs of Brexit, and the exodus of talented Europeans from our hospitals, schools, and universities, the populace will see through the deception foisted on them in 2016. I wonder. If Nigel Farage wants a referendum, this could simply mean that he is more confident than ever of his ability to manipulate mainstream and social media to play on people's fears of foreigners. We now know more about sophisticated new propaganda methods that can be used on social media, but that does not mean we have adequate defences against them.

The only thing that would make me feel positive about a referendum would be if you had to demonstrate that you understood what you were voting for. You'd need a handful of simple questions about factual aspects of EU membership – and a person's vote would only be counted if these questions were accurately answered. This would, however, disenfranchise a high proportion of voters, and would be portrayed as an attack on democracy. So that is not going to happen. I think there's a strong risk that if we have another referendum, it will either be too close to call, or give the same result as before, and we'll be no further ahead.

But the most serious objection to another referendum is that it is a flawed method for making political decisions. As noted in this blogpost:

(A referendum requires) a complex, and often emotionally charged issue, to be reduced to a binary yes/no question.  When considering a relationship the UK has been in for over 40 years a simple yes/no or “remain/leave” question raises many complex and inter-connected questions that even professional politicians could not fully answer during or after the campaign. The EU referendum required a largely uninformed electorate to make a choice between the status quo and an extremely unpredictable outcome.

Rather than a referendum, I'd like to see decisions about EU membership made by those with considerable expertise in EU affairs who will make an honest judgement about what is in the best interests of the UK. Sadly, that does not seem to be an option offered to us.