Saturday 25 January 2014

What is educational neuroscience?

[Cartoon ©CartoonStock.com]

As someone who works at the interface of child development and neuroscience, I've been struck by the relentless rise of the sub-discipline of 'educational neuroscience'. New imaging technologies have led to a burgeoning of knowledge about the developing brain, and it is natural to want to apply this knowledge to improving children's learning. Centres for educational neuroscience have sprung up all over the place, with support from universities who see them as ticking two important boxes: interdisciplinarity and impact.

But at the heart of this enterprise, there seems to be a massive disconnect. Neuroscientists can tell you which brain regions are most involved in particular cognitive activities and how this changes with age or training. But these indicators of learning do not tell you how to achieve learning. Suppose I find out that the left angular gyrus becomes more active as children learn to read. What is a teacher supposed to do with that information?

As John Bruer pointed out back in 1997, the people who can be useful to teachers are psychologists. Psychological experiments can establish the cognitive underpinnings of skills such as reading, and can evaluate which are the most effective ways of teaching, and whether these differ from child to child. They can address questions such as whether there are optimal ages at which to teach different skills, how motivation and learning interact, and whether it is better to learn material in large chunks all at once or spaced out over intervals. At a trivial level, these could all be designated as aspects of 'educational neuroscience', insofar as the brain is necessarily involved in cognition and motivation. But they can all be studied without taking any measurements of brain function.

It is possible, of course, to look at the brain correlates of all of these things, but that's unlikely to influence what's done in the classroom. Suppose I want to see whether training in phonological awareness improves children's reading outcomes. I measure brain activation before and after training, and compare results with those of a control group who don't get the training. There are various possible patterns of results, as laid out in the table below:

Outcome   Reading improves?   Brain activation changes?
A         Yes                 Yes
B         Yes                 No
C         No                  Yes
D         No                  No

As pointed out by Coltheart and McArthur (2012), what matters to the teacher is whether the training is effective in improving reading. It's really not going to make any difference whether detectable brain changes have happened, so either outcome A or B would give good justification for adopting the training, whereas outcomes C and D would not.

Well, you might say, children differ, and the brain measures might show up differences between those who do and don't respond to training. Indeed, but how would that be useful educationally? I've seen several studies that propose brain scans might be useful in identifying which children will and won't benefit from an intervention. That's a logical possibility, but given that brain scanning costs several hundred pounds per person, it's not realistic to suggest this has any utility in the real world, especially when there are likely to be behavioural indicators that predict outcomes just as well.

So are there actual or potential examples of how knowledge of neuroscience - as opposed to psychology - might influence educational practice? I mentioned three examples in this review: neurofeedback, neuropharmacology and brain stimulation are all methods that focus directly on changing the brain in ways that might potentially affect learning, and so could validly be designated as educational neuroscience. They are, however, as yet exploratory and experimental. The last of these, brain stimulation, was described this week in a blogpost by Roi Cohen Kadosh, who notes promising early results, but emphasizes that we need more experimental work establishing both risks and benefits before we could consider direct application of this method to improving children's learning.

I'm all in favour of cognitive neuroscience and basic research that discovers more about the neural underpinnings of typical and atypical development. By all means, let's do such studies, but let's do them because we want to find out more about the brain, and not pretend it has educational relevance.

If our goal is to develop better educational interventions, then we should be directing research funds into well-designed trials of cognitive and behavioural studies of learning, rather than fixating on neuroscience. Let me leave the last word to Hirsh-Pasek and Bruer, who described a Chilean conference in 2007 on Early Education and Human Brain Development. They noted: "The Chilean educators were looking to brain science for insights about which type of preschool would be the most effective, whether children are safe in child care, and how best to teach reading. The brain research presented at the conference that day was mute on these issues. However, cognitive and behavioral science could help."

References
Bishop, D. V. M. (2013). Neuroscientific studies of intervention for language impairment in children: Interpretive and methodological problems. Journal of Child Psychology and Psychiatry, 54(3), 247-259. doi: 10.1111/jcpp.12034

Bruer, J. T. (1997). Education and the brain: A bridge too far. Educational Researcher, 26(8), 4-16. doi: 10.3102/0013189X026008004

Coltheart, M., & McArthur, G. (2012). Neuroscience, education and educational efficacy research. In M. Anderson & S. Della Sala (Eds.), Neuroscience in Education (pp. 215-221). Oxford: Oxford University Press.

This article (Figshare version) can be cited as: 
Bishop, Dorothy V M (2014): What is educational neuroscience? figshare.
http://dx.doi.org/10.6084/m9.figshare.1030405

Sunday 12 January 2014

Why does so much research go unpublished?



As described in my last blogpost, I attended an excellent symposium on waste in research this week. A recurring theme was research that never got published. Rosalind Smyth described her experience of sitting on the funding panel of a medium-sized charity. The panel went to great pains to select the most promising projects, and would end a meeting with a sense of excitement about the great work that they were able to fund. A few years down the line, though, they'd find that many of the funds had been squandered. The work had either not been done, or had been completed but not published.

In order to tackle this problem, we need to understand the underlying causes. Sometimes, as Robert Burns noted, the best-laid schemes go wrong. Until you've tried to run a few research projects, it's hard to imagine the myriad different ways in which life can conspire to mess up your plans. The eight laws of psychological research formulated by Hodgson and Rollnick are as true today as they were 25 years ago.

But much research remains unpublished despite being completed. Reasons are multiple, and the strategies needed to overcome them are varied, but here is my list of the top three problems and potential solutions.

Inconclusive results


Probably the commonest reason for inconclusive results is lack of statistical power. A study is undertaken in the fond hope that a difference will be found between condition X and condition Y, and if the difference is found, there is great rejoicing and a rush to publish. A negative result should also be of interest, provided the study was well-designed and adequately motivated. But if the sample is small, a null result is uninformative: we cannot tell whether the effect is genuinely absent or whether a real but small effect has been swamped by noise.
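To make the power problem concrete, here is a minimal simulation sketch in Python; the effect size (Cohen's d = 0.3) and the group sizes are assumptions chosen purely for illustration, not values from any particular study.

```python
# A minimal power simulation (illustrative only): how often does a real but
# modest effect (Cohen's d = 0.3, assumed here purely for illustration)
# reach p < .05 with 20 versus 200 children per group?
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
effect_size = 0.3   # assumed true standardised difference between conditions
n_sims = 5000

for n_per_group in (20, 200):
    hits = 0
    for _ in range(n_sims):
        x = rng.normal(0.0, 1.0, n_per_group)          # condition X scores
        y = rng.normal(effect_size, 1.0, n_per_group)  # condition Y scores
        _, p = stats.ttest_ind(x, y)
        hits += p < 0.05
    print(f"n = {n_per_group:3d} per group: power = {hits / n_sims:.2f}")

# Roughly: power is about 0.15 with 20 per group and about 0.85 with 200 per
# group -- a small study will usually 'fail' even when the effect is real.
```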

I think the solution to this problem lies in the hands of funding panels and researchers: quite simply, they need to take statistical power very seriously indeed and to consider carefully whether anything will be learned from a study if the anticipated effects are not obtained. If not, then the research needs to be rethought. In the fields of genetics and clinical trials, it is now recognised that multicentre collaborations are the way forward to ensure that studies are conducted with sufficient power to obtain a conclusive result.

Rejection of completed work by journals


Even well-conducted and adequately powered studies may be rejected by journals if the results are not deemed to be exciting. To solve this problem, we must look to journals. We need recognition that - provided a study is methodologically strong and well-motivated - negative results can be as informative as positive ones. Otherwise we are doomed to waste time and money pursuing false leads.  As Paul Glasziou has emphasised, failure is part of the research process. It is important to tell people about what doesn't work if we are not to repeat our mistakes.

We do now have some journals that will publish negative results, and there is a growing move toward pre-registration of studies, with guaranteed publication if the methods meet quality criteria. But there is still a lot to be done, and we need a radical change of mindset about what kinds of research results are valuable.

Lack of time


Here, I lay the blame squarely on the incentive structures that operate in universities. To get a job, or to get promoted, you need to demonstrate that you can pull in research income. In many UK institutions this is quite explicit, and promotion criteria may specify a target of X thousand pounds of research income per annum. There are few UK universities whose strategic plan does not include a statement about increasing research funding. This has changed the culture dramatically; as Fergus Millar put it: "in the modern British university, it is not that funding is sought in order to carry out research, but that research projects are formulated in order to get funding".

Of course, for research to thrive, our Universities need people who can compete for funding to support their work. But the acquisition of funding has become an end in itself, rather than a means to an end. This has the pernicious effect of driving people to apply for grant after grant, without adequately budgeting for the time it takes to analyse and write up research, or indeed to carefully think about what they are doing.  As I argued previously, even junior researchers these days have an 'academic backlog' of unwritten papers.

At the Lancet meeting there were some useful suggestions for how we might change incentive structures to avoid such waste. Malcolm MacLeod argued researchers should be evaluated not by research income and high-impact publications, but by the quality of their methods, the extent to which their research was fully reported, and the reproducibility of findings. An-Wen Chan echoed this, arguing for performance metrics that recognise full dissemination of research and use of research datasets by other groups. However, we may ask whether such proposals have any chance of being adopted when University funding is directly linked to grant income, and Universities increasingly view themselves as businesses.

I suspect we would need revised incentives to be reflected at the level of those allocating central funding before vice-chancellors took them seriously.  It would, however, be feasible for behaviour to be shaped at the supply end, if funders adopted new guidelines. For a start, they could look more carefully at the time commitments of those to whom grants are given: in my experience this is never taken into consideration, and one can see successful 'fat cats' accumulating grant after grant, as success builds on success. Funders could also monitor more closely the outcomes of grants: Chan noted that NIHR withholds 10% of research funds until a paper based on the research has been submitted for publication. Moves like this could help us change the climate so that an award of a grant would confer responsibility on the recipient to carry through the work to completion, rather than acting solely to embellish the researcher's curriculum vitae.

References

Chan, A., Song, F., Vickers, A., Jefferson, T., Dickersin, K., Gotzsche, P., Krumholz, H. M., Ghersi, D., & van der Worp, H. B. (2014). Increasing value and reducing waste: Addressing inaccessible research. Lancet (8 Jan). doi: 10.1016/S0140-6736(13)62296-5

Macleod, M. R., Michie, S., Roberts, I., Dirnagl, U., Chalmers, I., Ioannidis, J. P. A., . . . Glasziou, P. (2014). Biomedical research: increasing value, reducing waste. Lancet, 383(9912), 101-104.

Thursday 9 January 2014

Off with the old and on with the new: the pressures against cumulative research

 
Yesterday I escaped a very soggy Oxford to make it down to London for a symposium on "Increasing value, reducing waste" in research. The meeting marked the publication of a special issue of the Lancet containing five papers and two commentaries, which can be downloaded here.

I was excited by the symposium because, although the focus was on medicine, it raised a number of issues with much broader relevance for science, several of which I have raised on this blog: pre-registration of research, criteria used by high-impact journals, ethics regulation, academic backlogs, and incentives for researchers. It was impressive to see that major players in the field of medicine are now recognizing that there is a massive problem of waste in research. Better still, they are taking seriously the need to devise ways in which this could be fixed.

I hope to blog about more of the issues that came up in the meeting, but for today I'll confine myself to one topic that I hadn't really thought about much before, but which I see as important, namely the importance of doing research that builds on previous research, and the current pressures against this.

Iain Chalmers presented one of the most disturbing slides of the day, a forest plot of effect sizes found in medical trials for a treatment to prevent bleeding during surgery.
[Forest plot based on Figure 3 of Chalmers et al., 2014]
Time is along the x-axis, and the horizontal line corresponds to a result where the active and control treatments do not differ. Points that fall below the line, with confidence intervals that do not cross it, show a beneficial effect of treatment. The graph shows that the effectiveness of the treatment was clearly established by around 2002, yet a further 20 studies including several hundred patients were reported in the literature after that date. Chalmers made the point that it is simply unethical to do a clinical trial if previous research has already established an effect. The problem is that researchers often don't check the literature to see what has already been done, and so there is wasteful repetition of studies. In the field of medicine this is particularly serious because patients may be denied the most effective treatment if they enrol in a research project.
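The logic behind Chalmers' point can be made explicit with a cumulative meta-analysis, in which each new trial is pooled with everything published before it. Below is a minimal sketch using standard inverse-variance (fixed-effect) pooling; the trial data are invented for illustration and are not taken from Chalmers et al.

```python
# Illustrative sketch of a cumulative, fixed-effect meta-analysis using
# standard inverse-variance weighting. The trials below are invented for
# illustration; log odds ratios below zero favour the treatment, as in the
# forest plot described above.
import math

# (year, log odds ratio, standard error) -- hypothetical trials
trials = [
    (1994, -0.10, 0.40),
    (1996, -0.35, 0.30),
    (1999, -0.40, 0.25),
    (2002, -0.45, 0.20),
    (2006, -0.42, 0.15),
]

sum_w = 0.0
sum_we = 0.0
for year, effect, se in trials:
    w = 1.0 / se ** 2          # inverse-variance weight
    sum_w += w
    sum_we += w * effect
    pooled = sum_we / sum_w    # pooled estimate over all trials so far
    pooled_se = math.sqrt(1.0 / sum_w)
    lo, hi = pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se
    print(f"{year}: pooled effect {pooled:+.2f}, 95% CI [{lo:+.2f}, {hi:+.2f}]")

# Once the upper confidence limit stays below zero, the benefit is already
# established, and further trials that randomise patients to a control
# condition are hard to justify.
```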

Outside medicine, I'm not sure this is so much of an issue. In fact, as I've argued elsewhere, in psychology and neuroscience I think there's more of a problem with lack of replication. But there definitely is much neglect of prior research. I lose count of the number of papers I review where the introduction presents a biased view of the literature that supports the authors' conclusions. For instance, if you are interested in the relation between auditory deficit and children's language disorders, it is possible to write an introduction presenting this association as an established fact, or to write one arguing that it has been comprehensively debunked. I have seen both.

Is this just lazy, biased or ignorant authors? In part, I suspect it is. But I think there is a deeper problem which has to do with the insatiable demand for novelty shown by many journals, especially the high-impact ones. These journals typically have a lot of pressure on page space and often allow only 500 words or less for an introduction. Unless authors can refer to a systematic review of the topic they are working on, they are obliged to give the briefest account of prior literature. It seems we no longer value the idea that research should build on what has gone before: rather, everyone wants studies that are so exciting that they stand alone. Indeed, if a study is described as 'incremental' research, that is typically the death knell in a funding committee.

We need good syntheses of past research, yet these are not valued because they are not deemed novel. One point made by Iain Chalmers was that funders have in the past been reluctant to give grants for systematic reviews. Reviews also aren't rated highly in academia: for instance, I'm proud of a review on mismatch negativity that I published in Psychological Bulletin in 2007. It not only condensed and critiqued existing research, but also discovered patterns in data that had not previously been noted. However, for the REF, and for my publications list on a grant renewal, reviews don't count.

We need a rethink of our attitude to reviews. Medicine has led the way and specified rigorous criteria for systematic reviews, so that authors can't just cherrypick specific studies of interest. But it has also shown us that such reviews are an invaluable part of the research process. They help ensure that we do not waste resources by addressing questions that have already been answered, and they encourage us to think of research as a cumulative, developing process, rather than a series of disconnected, dramatic events.

Reference
Chalmers, I., Bracken, M. B., Djulbegovic, B., Garattini, S., Grant, J., Gülmezoglu, A. M., Howells, D. W., Ioannidis, J. P. A., & Oliver, S. (2014). How to increase value and reduce waste when research priorities are set. Lancet. doi: 10.1016/S0140-6736(13)62229-1

Friday 3 January 2014

A New Year's letter to academic publishers

My relationships with journals are rather like a bad marriage: a mixture of dependency and hatred. Part of the problem is that journal editors and academics often have rather different views of the process. Scientific journals could not survive without academics. We do the research, often spending several years of our lives to produce a piece of work that is then distilled into one short paper, which the fond author invariably regards as a fascinating contribution to the field. But when we try to place our work in a journal, we find that it's a buyer's market: most journals receive more submissions than they can cope with, and rejection rates are high. So there is a total mismatch: we set out naively dreaming of journals leaping at the opportunity to secure our best work, only to be met with coldness and rejection. As in the best Barbara Cartland novels, for a lucky few, persistence is ultimately rewarded, and the stony-hearted editor is won over. But many potential authors fall by the wayside long before that point.

But times are changing. We are moving from a traditional "dead tree technology" model, where journals have to be expensively printed and distributed, to electronic-only media. These not only cost less to produce, but also avoid the length limits that traditionally have forced journals to be so highly selective. Alongside the technological changes, there has been rapid growth of the Open Access movement. The main motivations behind this movement were idealistic (making science available to all) and economic (escaping the stranglehold of expensive library subscriptions to closed-access journals). It's early days, but I am starting to sense that there's another consequence of the shift, which is that, as the field opens up, publishers are starting to change how they approach authors: less as supplicants, and more as customers.

In the past, the top journals had no incentive to be accommodating to authors. There were too many of us chasing scarce page space. But there are now some new boys on the open access block, and some of them have recognised that if they want to attract people to publish with them, they should listen to what authors want. And if they want academics to continue to referee papers for no reward, then they had better treat them well too.

This really is not too hard to do. I have two main gripes with journals, a big one and a little one. The big one concerns my time. The older I get, the less patient I am with organizations that behave as if I have all the time in the world to do the small bureaucratic chores that they wish to impose on me. For instance, many journals specify pointless formatting requirements for an initial submission. I really, really resent jumping through arbitrary hoops when the world is full of interesting things I could be doing. And cutting my toenails is considerably more interesting than reformatting references.

I recently encountered a journal whose website required you to enter details (name/address/email) of all authors in order to submit a pre-submission enquiry. Surely the whole point of a pre-submission enquiry is to save time, so you can get a quick decision on whether it's likely to be worth your while battling with the submission portal! There's also the horror of journals that require signatures from all authors at the point when you submit a manuscript: seems a harmless enough requirement, except that authors are often widely dispersed - on maternity leave or sailing the Atlantic - by the time the paper is submitted. The idea is to avoid fraud, of course, but like so many ethics regulations, the main effect of this requirement is to encourage honest, law-abiding people to take up forgery.

Oh, and then there are the 'invitations to review' (makes it sound so enticing, like being invited to a party), which require you to log in in order to register your response – which for me invariably means selecting the option that I have forgotten my password, then looking at email to find out how to update the password, meanwhile getting distracted by other email messages so I forget what I was doing, and eventually returning to the site to find it now wants me to change the password and enter mandatory contact details before it will accept my response. Well, no. I'm usually a good citizen but I'm afraid I've just stopped responding to those.

You'd think the advent of electronic submission would make life easier, but in fact it can just open up a whole new world of tiny, fiddly things that you are required to do before your paper is submitted. Each individual thing is usually fairly trivial, but they do add up. So, for instance, if you'd like your authors to suggest referees, please allow them to paste in a list. DO NOT require them to cut and paste title, forename, initial, surname, email and institution into your horrible little boxes for each of six potential referees. It all takes TIME. And we have more important things in life to be getting on with. Including doing the science that allows us to get to the point of writing a paper.

Even worse, some of the requirements of journals are just historical artefacts with no more rationale than male nipples.  Here's a splendid post by Kate Jeffery which in fact was the impetus for this blogpost. I thought of Kate when, having carefully constructed a single manuscript document including figures, as instructed by the Instructions for Authors, I got to the submission portal to be strictly told that ON NO ACCOUNT must the figures be included in the main manuscript. Instead, they had to be separated, not only from the manuscript, but also from their captions (which had to be put as a list at the end of the manuscript). This makes sense ONCE THE PAPER IS ACCEPTED, when it needs to be typeset.  But not at the point of initial submission, when the paper's fate is undecided: it may well be rejected, and if not, it will certainly require revision. And meanwhile, you have referees tearing their hair out trying to link up the text, the Figures and their captions.

The smaller gripe is just about treating people with respect. I do have a preference for journal editors whose correspondence indicates that they are a human being and not an automaton. I've moaned about this before, in an old post describing a taxonomy of journal editors, but my feeling is that in the three years since I wrote that, things have got worse rather than better. Publishers and editors may think they make their referees happy by writing and telling them how useful their review of a paper has been – but the opposite effect is created if it is clear that this is a form letter that goes to all referees, however hopeless. It is really better to be ignored than to be sent an insincere, meaningless email - it just implies that the sender thinks you are stupid enough to be taken in by it.

So my message to publishers in 2014 is really very simple. The market is getting competitive and if you want to attract authors to send their best work to you, and referees to keep reviewing for you, you need to become more sensitive to our needs.  Two journals that appear to be trying hard are eLife and PeerJ, who avoid most of the bad practices I have outlined. I am hoping their example will cause others to up their game. We are mostly very simple souls who are not hard to please, but we hate having our time wasted, and we do like being treated like human beings.


Wednesday 1 January 2014

How the government spins a crisis: the blame game

[Image from: http://www.youtube.com/watch?v=PkHb9q-jpDU]
Thousands of people in the UK had a truly miserable Christmas, with extreme weather leading to flooding and power cuts. They were shocked and cold, blundering around in the dark, sometimes for as long as three days. When David Cameron went to visit Yalding in Kent on 27th December, he got an earful from local residents, who complained they had been abandoned, and had no help from the council, who had "all decided to go on holiday."

Cameron's visit was widely seen as a PR disaster: he was criticised for using the floods as a way of getting cheap publicity, and his government's cuts in spending on flood defences were commented on.

On 30th December, we had Owen Paterson, the Environment Secretary, stating that energy companies had "let customers down" in their response to the storm.

Yesterday we heard that Tim Yeo, chairman of the energy select committee, planned to summon bosses of energy companies to explain their poor performance.

Now, I have no love for the energy companies, whose rapacious pricing strategies are causing real hardship to many. But I find myself wondering what exactly they were supposed to do over the Christmas period. Presumably, if a power line comes down, it requires specialised machinery and replacement parts to be sourced and brought to the site – which may well be affected by flooding – and engineers who not only have the expertise to diagnose and correct the problem, but who are also fit and brave enough to do this in horrendous weather conditions. I doubt that large numbers of such people are just sitting around waiting to be called upon, and indeed over the Christmas period, some of them may have gone away on holiday, and others may themselves be affected by the flooding.  There was much criticism concerning the lack of information given to those affected by flooding and power cuts. But it's just not realistic to expect an organization to magic up large numbers of call centre staff out of nowhere in the middle of a crisis-ridden Christmas break. It's also worth noting that much of the valiant work of helping people deal with the flooding crisis was the responsibility of the fire service, currently under pressure from cuts to funding.

I simply don't know whether the energy companies could have done better; maybe they could have done more with live updates of information through websites, Twitter or local radio. Maybe they could have issued earlier warnings, or cancelled leave for key staff. But it concerns me that we have the Environment Secretary making a very public judgement on this matter, directing blame at energy companies, just a few days after the Prime Minister has been criticised, and long before there has been a chance to evaluate what happened, and which agencies were responsible for what, in a calm and thorough manner.

Forgive me if I seem cynical, but a rapid and punitive response seems to have become a standard reaction of government to situations where they are attracting adverse publicity. Find a scapegoat and come down on them heavily, whether it be Brodie Clark, Sharon Shoesmith or David Kelly. This deflects criticism from the government and makes them look strong. All the better if the criticism can be laid instead at the door of a person or organization who is already unpopular.

By all means, let us consider the response to the crisis to see what could have been done better. But the issues are far too important to be used as propaganda to enhance a government's popularity. Let us not be distracted from a much more important priority: calling the government to account for its policy of cutting back on measures of flood prevention.