Showing posts with label dyslexia. Show all posts

Tuesday, 8 July 2014

Bishopblog catalogue (updated 8th July 2014)


Those of you who follow this blog may have noticed a lack of thematic coherence. I write about whatever is exercising my mind at the time, which can range from technical aspects of statistics to the design of bathroom taps. I decided it might be helpful to introduce a bit of order into this chaotic melange, so here is a catalogue of posts by topic.

Language impairment, dyslexia and related disorders
The common childhood disorders that have been left out in the cold (1 Dec 2010) What's in a name? (18 Dec 2010) Neuroprognosis in dyslexia (22 Dec 2010) Where commercial and clinical interests collide: Auditory processing disorder (6 Mar 2011) Auditory processing disorder (30 Mar 2011) Special educational needs: will they be met by the Green paper proposals? (9 Apr 2011) Is poor parenting really to blame for children's school problems? (3 Jun 2011) Early intervention: what's not to like? (1 Sep 2011) Lies, damned lies and spin (15 Oct 2011) A message to the world (31 Oct 2011) Vitamins, genes and language (13 Nov 2011) Neuroscientific interventions for dyslexia: red flags (24 Feb 2012) Phonics screening: sense and sensibility (3 Apr 2012) What Chomsky doesn't get about child language (3 Sept 2012) Data from the phonics screen (1 Oct 2012) Auditory processing disorder: schisms and skirmishes (27 Oct 2012) High-impact journals (Action video games and dyslexia: critique) (10 Mar 2013) Overhyped genetic findings: the case of dyslexia (16 Jun 2013) The arcuate fasciculus and word learning (11 Aug 2013) Changing children's brains (17 Aug 2013) Raising awareness of language learning impairments (26 Sep 2013) Good and bad news on the phonics screen (5 Oct 2013) What is educational neuroscience? (25 Jan 2014) Parent talk and child language (17 Feb 2014) My thoughts on the dyslexia debate (20 Mar 2014)

Autism
Autism diagnosis in cultural context (16 May 2011) Are our ‘gold standard’ autism diagnostic instruments fit for purpose? (30 May 2011) How common is autism? (7 Jun 2011) Autism and hypersystematising parents (21 Jun 2011) An open letter to Baroness Susan Greenfield (4 Aug 2011) Susan Greenfield and autistic spectrum disorder: was she misrepresented? (12 Aug 2011) Psychoanalytic treatment for autism: Interviews with French analysts (23 Jan 2012) The ‘autism epidemic’ and diagnostic substitution (4 Jun 2012) How wishful thinking is damaging Peta's cause (9 June 2014)

Developmental disorders/paediatrics
The hidden cost of neglected tropical diseases (25 Nov 2010) The National Children's Study: a view from across the pond (25 Jun 2011) The kids are all right in daycare (14 Sep 2011) Moderate drinking in pregnancy: toxic or benign? (21 Nov 2012) Changing the landscape of psychiatric research (11 May 2014)

Genetics
Where does the myth of a gene for things like intelligence come from? (9 Sep 2010) Genes for optimism, dyslexia and obesity and other mythical beasts (10 Sep 2010) The X and Y of sex differences (11 May 2011) Review of How Genes Influence Behaviour (5 Jun 2011) Getting genetic effect sizes in perspective (20 Apr 2012) Moderate drinking in pregnancy: toxic or benign? (21 Nov 2012) Genes, brains and lateralisation (22 Dec 2012) Genetic variation and neuroimaging (11 Jan 2013) Have we become slower and dumber? (15 May 2013) Overhyped genetic findings: the case of dyslexia (16 Jun 2013)

Neuroscience
Neuroprognosis in dyslexia (22 Dec 2010) Brain scans show that… (11 Jun 2011) Time for neuroimaging (and PNAS) to clean up its act (5 Mar 2012) Neuronal migration in language learning impairments (2 May 2012) Sharing of MRI datasets (6 May 2012) Genetic variation and neuroimaging (11 Jan 2013) The arcuate fasciculus and word learning (11 Aug 2013) Changing children's brains (17 Aug 2013) What is educational neuroscience? (25 Jan 2014) Changing the landscape of psychiatric research (11 May 2014)

Statistics
Book review: biography of Richard Doll (5 Jun 2010) Book review: the Invisible Gorilla (30 Jun 2010) The difference between p < .05 and a screening test (23 Jul 2010) Three ways to improve cognitive test scores without intervention (14 Aug 2010) A short nerdy post about the use of percentiles (13 Apr 2011) The joys of inventing data (5 Oct 2011) Getting genetic effect sizes in perspective (20 Apr 2012) Causal models of developmental disorders: the perils of correlational data (24 Jun 2012) Data from the phonics screen (1 Oct 2012) Moderate drinking in pregnancy: toxic or benign? (21 Nov 2012) Flaky chocolate and the New England Journal of Medicine (13 Nov 2012) Interpreting unexpected significant results (7 June 2013) Data analysis: Ten tips I wish I'd known earlier (18 Apr 2014) Data sharing: exciting but scary (26 May 2014)

Journalism/science communication
Orwellian prize for scientific misrepresentation (1 Jun 2010) Journalists and the 'scientific breakthrough' (13 Jun 2010) Science journal editors: a taxonomy (28 Sep 2010) Orwellian prize for journalistic misrepresentation: an update (29 Jan 2011) Academic publishing: why isn't psychology like physics? (26 Feb 2011) Scientific communication: the Comment option (25 May 2011) Accentuate the negative (26 Oct 2011) Publishers, psychological tests and greed (30 Dec 2011) Time for academics to withdraw free labour (7 Jan 2012) Novelty, interest and replicability (19 Jan 2012) 2011 Orwellian Prize for Journalistic Misrepresentation (29 Jan 2012) Time for neuroimaging (and PNAS) to clean up its act (5 Mar 2012) Communicating science in the age of the internet (13 Jul 2012) How to bury your academic writing (26 Aug 2012) High-impact journals: where newsworthiness trumps methodology (10 Mar 2013) Blogging as post-publication peer review (21 Mar 2013) A short rant about numbered journal references (5 Apr 2013) Schizophrenia and child abuse in the media (26 May 2013) Why we need pre-registration (6 Jul 2013) On the need for responsible reporting of research (10 Oct 2013) A New Year's letter to academic publishers (4 Jan 2014)

Social Media
A gentle introduction to Twitter for the apprehensive academic (14 Jun 2011) Your Twitter Profile: The Importance of Not Being Earnest (19 Nov 2011) Will I still be tweeting in 2013? (2 Jan 2012) Blogging in the service of science (10 Mar 2012) Blogging as post-publication peer review (21 Mar 2013) The impact of blogging on reputation (27 Dec 2013) WeSpeechies: A meeting point on Twitter (12 Apr 2014)

Academic life
An exciting day in the life of a scientist (24 Jun 2010) How our current reward structures have distorted and damaged science (6 Aug 2010) The challenge for science: speech by Colin Blakemore (14 Oct 2010) When ethics regulations have unethical consequences (14 Dec 2010) A day working from home (23 Dec 2010) Should we ration research grant applications? (8 Jan 2011) The one hour lecture (11 Mar 2011) The expansion of research regulators (20 Mar 2011) Should we ever fight lies with lies? (19 Jun 2011) How to survive in psychological research (13 Jul 2011) So you want to be a research assistant? (25 Aug 2011) NHS research ethics procedures: a modern-day Circumlocution Office (18 Dec 2011) The REF: a monster that sucks time and money from academic institutions (20 Mar 2012) The ultimate email auto-response (12 Apr 2012) Well, this should be easy…. (21 May 2012) Journal impact factors and REF2014 (19 Jan 2013) An alternative to REF2014 (26 Jan 2013) Postgraduate education: time for a rethink (9 Feb 2013) High-impact journals: where newsworthiness trumps methodology (10 Mar 2013) Ten things that can sink a grant proposal (19 Mar 2013) Blogging as post-publication peer review (21 Mar 2013) The academic backlog (9 May 2013) Research fraud: More scrutiny by administrators is not the answer (17 Jun 2013) Discussion meeting vs conference: in praise of slower science (21 Jun 2013) Why we need pre-registration (6 Jul 2013) Evaluate, evaluate, evaluate (12 Sep 2013) High time to revise the PhD thesis format (9 Oct 2013) The Matthew effect and REF2014 (15 Oct 2013) Pressures against cumulative research (9 Jan 2014) Why does so much research go unpublished? (12 Jan 2014) The University as big business: the case of King's College London (18 June 2014)

Celebrity scientists/quackery
Three ways to improve cognitive test scores without intervention (14 Aug 2010) What does it take to become a Fellow of the RSM? (24 Jul 2011) An open letter to Baroness Susan Greenfield (4 Aug 2011) Susan Greenfield and autistic spectrum disorder: was she misrepresented? (12 Aug 2011) How to become a celebrity scientific expert (12 Sep 2011) The kids are all right in daycare (14 Sep 2011) The weird world of US ethics regulation (25 Nov 2011) Pioneering treatment or quackery? How to decide (4 Dec 2011) Psychoanalytic treatment for autism: Interviews with French analysts (23 Jan 2012) Neuroscientific interventions for dyslexia: red flags (24 Feb 2012)

Women
Academic mobbing in cyberspace (30 May 2010) What works for women: some useful links (12 Jan 2011) The burqua ban: what's a liberal response? (21 Apr 2011) C'mon sisters! Speak out! (28 Mar 2012) Psychology: where are all the men? (5 Nov 2012) Men! what you can do to improve the lot of women (25 Feb 2014) Should Rennard be reinstated? (1 June 2014)

Politics and Religion
Lies, damned lies and spin (15 Oct 2011) A letter to Nick Clegg from an ex liberal democrat (11 Mar 2012) BBC's 'extensive coverage' of the NHS bill (9 Apr 2012) Schoolgirls' health put at risk by Catholic view on vaccination (30 Jun 2012) A letter to Boris Johnson (30 Nov 2013) How the government spins a crisis (floods) (1 Jan 2014)

Humour and miscellaneous
Orwellian prize for scientific misrepresentation (1 Jun 2010) An exciting day in the life of a scientist (24 Jun 2010) Science journal editors: a taxonomy (28 Sep 2010) Parasites, pangolins and peer review (26 Nov 2010) A day working from home (23 Dec 2010) The one hour lecture (11 Mar 2011) The expansion of research regulators (20 Mar 2011) Scientific communication: the Comment option (25 May 2011) How to survive in psychological research (13 Jul 2011) Your Twitter Profile: The Importance of Not Being Earnest (19 Nov 2011) 2011 Orwellian Prize for Journalistic Misrepresentation (29 Jan 2012) The ultimate email auto-response (12 Apr 2012) Well, this should be easy…. (21 May 2012) The bewildering bathroom challenge (19 Jul 2012) Are Starbucks hiding their profits on the planet Vulcan? (15 Nov 2012) Forget the Tower of Hanoi (11 Apr 2013) How do you communicate with a communications company? (30 Mar 2014)

Thursday, 20 March 2014

My thoughts on the dyslexia debate


During February, there was widespread media coverage of a forthcoming book by Julian Elliott and Elena Grigorenko called The Dyslexia Debate. I've seen an advance copy of the book, whose central message is that the construct of dyslexia lacks coherence: quite simply, dyslexia does not constitute a natural category, in terms of cognitive profile, neurobiology or genetics.

The authors' arguments are backed by a large body of research: people have tried over many years to find something distinctive about dyslexia, without success. Some children are good at reading and others are not, but it's arbitrary where you place a cutoff to specify that a child has a problem. There's a popular belief that you can identify dyslexics in terms of a particular ability profile, and that affected children have a particular kind of brain organisation that makes them do things like reverse letters (b vs d) or have left-right confusion. In fact, those types of problem are common in typically-developing children at early stages of learning to read and appear to be as much a symptom as a cause of reading problems. Researchers have found neurobiological and genetic correlates of developmental reading problems, but the effects tend to be small and inconsistent from person to person: you could not currently diagnose dyslexia on the basis of brain scans or genetic analysis. It is, of course, possible that one day we may hit upon a new diagnostic test that does clearly differentiate a dyslexic subgroup from other poor readers, but many of us in the field are dubious as to whether this will ever happen.
The first thing to get clear is that Elliott and Grigorenko are not denying the reality of children's reading problems. Their point is a much more specific one about the way we conceptualise reading difficulties and how this affects access to services in everyday life. Their concern is that "dyslexia" implies we are dealing with a specific medical syndrome. Their view is that no such syndrome exists and it is not helpful to behave as if it does. How should we respond to this? I think we need to distinguish three questions:

1. Should we identify those in need of extra help?

Children vary in the ease with which they learn to read. Some need only the briefest exposure to books to crack the code; others struggle for years despite skilled help from expert teachers.
I think most people who have spent time with poor readers (and I would include Elliott and Grigorenko among these) would conclude that the answer to question #1 is yes. It doesn't matter whether those in the latter group have a distinct medical syndrome or not: it is up to us to ensure that all get the best teaching.

2. How should we identify those in need of extra help?

Elliott wrote a piece in the Times Higher Education where he argued that dyslexia diagnoses in universities were skyrocketing, and that some people were unfairly exploiting the system in order to get accommodations such as a laptop computer and extra time in exams. To my mind, the problem here is less to do with the "dyslexia" label, and more to do with the haphazard way in which individuals are identified, and the lack of consistent criteria for determining who needs extra help. Like Elliott, I think it is entirely right that we should make accommodations for students who have serious difficulties in processing written information at speed. However, as he highlights, the current system is based on an unsustainable idea that "dyslexia" is a distinct disorder that can be reliably identified, and which is often diagnosed on the basis of supposed markers of dyslexia that have no scientific basis. So the current system is both invalid and unfair. Instead, it would be sensible to settle on consistent criteria for allocating extra help to students who are struggling, and to ensure that extra resources are directed to those who are most needy. As Castles and colleagues have noted, there are guidelines that can be used to identify those with severe and persistent problems, but they are not well known or widely applied.

3. What terminology should we use to refer to those we identify?

So can we just agree that we need to find consistent ways of identifying poor readers and do away with the term "dyslexia"? While this might seem a logical response to the evidence, I think we should not underestimate the implications in practice. On the positive side, we'd get rid of the idea that we're dealing with a special condition that forms a distinct syndrome. Since few scientists would attempt to defend that notion, this would be a good thing. But we should also be aware of negative consequences.

Those commenting on the dyslexia debate so far have talked about it as if it is a particular issue relating to literacy difficulties, but in fact it's just one instance of a much more pervasive problem.  Other neurodevelopmental disorders such as autism spectrum disorder, specific language impairment, attention deficit hyperactivity disorder, developmental dyspraxia and dyscalculia are all beset by the same issues: there is no diagnostic biomarker, the condition is defined purely in terms of behaviour, different disorders overlap and there's no clear boundary between disorder and normality.
Similar issues have been much discussed in relation to adult psychiatric disorders, which are also diagnosed in terms of behavioural features rather than biological tests. In a fascinating paper, Kendell and Jablensky (2003) came to the conclusion that the categories of schizophrenia and depression are massively problematic in terms of validity and reliability – that is to say, just like dyslexia, they don't constitute natural categories clearly demarcated from other disorders, and furthermore, people can't even agree on who merits these diagnoses. So should we just stop using the labels? Kendell and Jablensky considered this possibility but concluded it would be impossible to abandon terms like schizophrenia and depression, on the grounds that they have utility. These labels have been used for many years by practitioners to determine the most effective intervention, and by researchers interested in discovering the underlying causes and likely outcome of a disorder. Similarly, using the construct of "dyslexia" we have discovered much about the nature of the cognitive deficits that characterise many poor readers, about underlying causes, about outcomes, and about effectiveness of intervention. For instance, we know that genes play a part in determining who is a poor reader, and that many children who have poor literacy skills also have subtle problems with oral language.
This argument, though, is not really watertight. We may congratulate ourselves on what we have learned, but on the other hand, it could be argued that there are also barriers to progress that arise from continued use of imprecise terms. It's clear to anyone who knows the research literature that findings can vary from study to study and from child to child within a study. This does not necessarily invalidate the research – it's rare to obtain perfect consistency of findings even within mainstream medicine – but it does make many people wonder whether we might obtain clearer results if we took a different approach. But then we have to consider what alternative approach would be better.

I suggested a few years ago that it might be helpful to treat neurodevelopmental disorders differently, as multidimensional composites, rather than regarding problems with reading, language, arithmetic, attention, motor skills and social behaviour as separate conditions. However, I did not really expect anyone to embrace this idea, as it would be too radical a change, and we are too wedded to current terminology.

Here too, comparisons with psychiatry are interesting. Last year, Tom Insel, director of the US National Institute of Mental Health, ruffled feathers by stating that his organisation would be reorienting its research away from traditional psychiatric diagnostic categories, to develop Research Domain Criteria, i.e. "new ways of classifying mental disorders based on dimensions of observable behavior and neurobiological measures." Yet the domains that are proposed seem to me just as arbitrary as the original diagnostic categories, and the associations between genetic, neurobiological and behavioural measures are mostly weak and poorly understood. So although Insel's vision might seem a rational way of trying to make sense of psychiatric disorders, it is years away from being clinically applicable – as he is the first to admit.  Even though multivariate, dimensional classification seems more logical, our current categories of autism, schizophrenia and dyslexia, though imperfect, may be as good as we can manage in terms of utility in day-to-day clinical practice.

Perhaps the strongest arguments in favour of retention of a term like "dyslexia" come not from science but from public perception. Like it or not, "dyslexia" has been around for over 100 years. In that time, a range of organisations have sprung up to help people with this diagnosis. Some of the most passionate defences of the dyslexia label come from those who have built up a sense of identity around this condition, and who feel they benefit from being part of a community that can offer information and support – see, for instance, this response by the International Dyslexia Association to the suggestion that "dyslexia" be removed from DSM-5.

One could, of course, argue that we shouldn't stick with a label just because it has always been there – if we were to adopt that line of argument, we'd still be talking about "maladjusted" and "educationally subnormal" children. But it's clear that many of those diagnosed with dyslexia do see this label as positive. In particular, many people worry that if we were to simply switch to a more neutral, less medical term, such as "poor readers", this could trivialise reading problems, and lead people to assume that the difficulties are just caused by poor teaching. Furthermore, legal entitlement to special help under disability legislation could disappear. This, I think, is a key part of the problem, which can get overlooked when just focusing on the scientific evidence: what you call a condition determines both how seriously people take it and where people place blame for the difficulties and responsibility for doing something about it.

To illustrate my point, see this recent piece in the Daily Mail by Peter Hitchens, which appeared under the headline: "Dyslexia is NOT a disease. It is an excuse for bad teachers". This displays a remarkably simplistic world view in which a poor reader either has a "disease", in which case they are blameless victims of an external force, or else it is someone's fault – in this case lacklustre teachers.

In his triumphalist piece against the "pseudoscience and quackery" of dyslexia, Peter Hitchens achieves exactly the opposite of what he intends. This is because he demonstrates one negative consequence of removing the label, which is that many people will no longer think that children who struggle to read need any kind of special help. Instead, we'll be told that "What they need, what we all need, is proper old-fashioned teaching."

A rather more sophisticated version of the same argument was given in the Green Paper that introduced the Government's proposed revision to legislation for Special Educational Needs (SEN) (see: my blogpost on this). There it was stated that too many children were being over-identified with SEN: “Previous measures of school performance created perverse incentives to over-identify children as having SEN. There is compelling evidence that these labels of SEN have perpetuated a culture of low expectations and have not led to the right support being put in place.” (point 22).

We really need to escape this polarised view of children's problems being caused either by a medical disease or by poor teaching. Yes, some children's reading may be held back because their teachers either don't know about or reject evidence-based methods of teaching, but it is seldom black and white, and some children fail despite intensive, high-quality teaching.

My concern is that those holding the purse-strings have a strong incentive to blame all problems on bad teaching or bad parenting, as it absolves them of any responsibility to do anything about them. We need to recognise that for most children, the causal influences are likely to be complex and may involve both constitutional factors and aspects of home and school environment. Unfortunately, most people don't seem able to deal with this complexity, and the language we use determines how problems are viewed. At present we are between a rock and a hard place. The rock is the term "dyslexia", which has inaccurate connotations of a distinct neurobiological syndrome. The hard place is a term like "poor readers" which leads people to think we are dealing with a trivial problem caused by bad teaching.

As Allen Frances argued in the case of psychiatry, we need to resist a growing tendency to use medical labels for what is essentially normal behaviour. However, he wisely notes that this should not blind us to the reality that there are people with problems that are severe, clearcut, and unlikely to go away on their own.  In the current debate, several commentators have made this point and have added that it doesn't really matter what we call them; the more important issue is to ensure affected individuals get appropriate help. But I'd suggest it does matter, because the label we use does much more than just identify a subset of people: it carries connotations of causation, blame and responsibility. While I can see all the disadvantages of the dyslexia label outlined by Elliott and Grigorenko, I think it will survive into the future because it provides many people with a positive view of their difficulties which also helps them get taken seriously. For that reason, I think we may find it easier to work with the label and try to ensure it is used in a consistent and meaningful way, rather than to argue for its abolition.

Kendell, R., & Jablensky, A. (2003). Distinguishing between the validity and utility of psychiatric diagnoses. American Journal of Psychiatry, 160(1). DOI: 10.1176/appi.ajp.160.1.4

This article (Figshare version) can be cited as:
Bishop, Dorothy V M (2014): My thoughts on the dyslexia debate. figshare


Thursday, 26 September 2013

Raising awareness of language learning impairments

A couple of years ago I did a Google search for ‘Specific language impairment’. I was appalled by what I found. The top hit was a video by a chiropractor who explained he’d read a paper about the neurological basis of language difficulties; he proceeded to mangle its contents, concluding that cranial osteopathy would help affected children.

I’ve previously described how I got together with colleagues in 2012 to try and remedy this situation, culminating in a campaign for Raising Awareness of Language Learning Impairments (RALLI). The practicalities have sometimes been challenging but I’m pleased to say that the collection of videos on our RALLI site has now attracted over 90,000 hits, providing an accessible and evidence-based source of information about developmental language impairments. As well as research-based films we have videos with practical information for parents, children and teachers.

So here, for those of you interested in this topic, is an index of what we have so far:

Background to RALLI

Research topics

Information for teachers

Support for parents and children

Spanish translations/subtitled versions
Bishop, D. V. M., Clark, B., Conti-Ramsden, G., Norbury, C., & Snowling, M. J. (2012). RALLI: An internet campaign for raising awareness of language learning impairments. Child Language Teaching & Therapy, 28(3), 259-262. DOI: 10.1177/0265659012459467

Sunday, 16 June 2013

Overhyped genetic findings: the case of dyslexia

A press release from the Yale University Press Office was recently recycled on the Research Blogging website*, announcing that its researchers had made a major breakthrough. Specifically, it said "A new study of the genetic origins of dyslexia and other learning disabilities could allow for earlier diagnoses and more successful interventions, according to researchers at Yale School of Medicine. Many students now are not diagnosed until high school, at which point treatments are less effective." The breathless account by the Press Office is hard to square with the abstract of the paper, which makes no mention of early diagnosis or intervention, but rather focuses on characterising a putative functional risk variant in the DCDC2 gene, named READ1, and establishing its association with reading and language skills.

I've discussed why this kind of thing is problematic in a previous blogpost, but perhaps a figure will help. The point is that in a large sample you can have a statistically strong association between a condition such as dyslexia and a genetic variant, but this does not mean that you can predict who will be dyslexic from their genes.

Proportions with risk variants estimated from Scerri et al (2011)
In this example, based on one of the best-replicated associations in the literature, you can see that most people with dyslexia don't have the risk version of the gene, and most people with the risk version of the gene don't have dyslexia. The effect size of an individual genetic variant can be very small even when the statistical evidence for its association with the condition is strong.
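The arithmetic behind this point is worth making explicit. The sketch below applies Bayes' rule with purely hypothetical numbers, chosen only to illustrate the logic; the prevalence and carrier rates are assumptions, not figures taken from Scerri et al or any real dataset:

```python
# Why a statistically robust association need not give a useful predictor.
# All figures below are hypothetical illustrations, not real data.

base_rate = 0.07          # assumed population prevalence of dyslexia
p_risk_given_dys = 0.30   # assumed carrier rate of the risk variant in dyslexics
p_risk_given_ok = 0.20    # assumed carrier rate in unaffected readers

# Bayes' rule: probability of dyslexia given that a child carries the variant
p_risk = base_rate * p_risk_given_dys + (1 - base_rate) * p_risk_given_ok
ppv = base_rate * p_risk_given_dys / p_risk

print(f"P(carries variant) = {p_risk:.3f}")          # 0.207
print(f"P(dyslexia | carries variant) = {ppv:.3f}")  # 0.101
```

Even though the carrier rate here is half as high again in dyslexics as in controls, a difference a large sample would detect with a very small p-value, only about one carrier in ten is dyslexic, and 70% of dyslexics do not carry the variant at all.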

So what about the results from the latest Yale press release? Do they allow for more accurate identification of dyslexia on the basis of genes? In a word, no. I was pleased to see that the authors reported the effect sizes associated with the key genetic variants, which makes it relatively easy to estimate their usefulness in screening. In addition to identifying two sequences in DCDC2 associated with risk of language or reading problems, the authors noted an interaction with a risk version of another gene, KIAA0319, such that children with risk versions in both genes were particularly likely to have problems.  The relevant figure is shown here.

Fig 3A from Powers et al (2013)

There are several points to note from this plot, bearing in mind that dyslexia or SLI would normally only be diagnosed if a child's reading or language scores were at least 1.0 SD below average.
  1. For children who have either KIAA0319 or DCDC2 risk variants, but not both, the average score on reading and language measures is at most 0.1 SD below average.
  2. For those who have both risk factors together, some tests give scores that are 0.3 SD below average, but this is only a subset of the reading/language measures. On nonword reading, often used as a diagnostic test for dyslexia, there is no evidence of any deficit in those with both risk versions of the genes. On the two language measures, the deficit hovers around 0.15 SD below the mean.
  3. The tests that show the largest deficits in those with two risk factors are measures of IQ rather than reading or language. Even here, the degree of impairment in those with two risk factors together indicates that the majority of children with this genotype would not fall in the impaired range.
  4. The number of children with the two risk factors together is very small, around 1% of the population.
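A back-of-envelope calculation shows why the majority of children with even the double risk genotype would escape diagnosis. Assuming normally distributed scores (SD = 1) and taking 0.3 SD as the group deficit, the proportion of carriers falling below the usual -1 SD cutoff can be computed directly:

```python
# Fraction scoring below a -1 SD diagnostic cutoff when a risk
# genotype shifts the group mean down by 0.3 SD (scores ~ N(mean, 1)).
from statistics import NormalDist

nd = NormalDist()
baseline = nd.cdf(-1.0)        # non-carriers: cutoff is 1.0 SD below their mean
carriers = nd.cdf(-1.0 + 0.3)  # carriers: cutoff is only 0.7 SD below theirs
print(f"non-carriers impaired: {baseline:.1%}")  # 15.9%
print(f"carriers impaired:     {carriers:.1%}")  # 24.2%
```

So even in the small group carrying both risk variants, roughly three-quarters would score above the cutoff and would not meet criteria for dyslexia or SLI.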
In sum, I think this is an interesting paper that might help us discover more about how genetic variation works to influence cognitive development by affecting brain function. The authors present the data in a way that allows us to appraise the clinical significance of the findings quite easily. However, the results indicate that, far from indicating translational potential for diagnosis and treatment, genetic effects are subtle and unlikely to be useful for this purpose.

*It is unclear to me whether the Yale University Press Office are actively involved in gatecrashing Research Blogging, or whether this is just an independent 'blogger' who is recycling press releases as if they are blogposts.

Powers, N., Eicher, J., Butter, F., Kong, Y., Miller, L., Ring, S., Mann, M., & Gruen, J. (2013). Alleles of a Polymorphic ETV6 Binding Site in DCDC2 Confer Risk of Reading and Language Impairment. The American Journal of Human Genetics. DOI: 10.1016/j.ajhg.2013.05.008
Scerri, T. S., Morris, A. P., Buckingham, L. L., Newbury, D. F., Miller, L. L., Monaco, A. P., . . . Paracchini, S. (2011). DCDC2, KIAA0319 and CMIP are associated with reading-related traits. Biological Psychiatry, 70, 237-245. doi: 10.1016/j.biopsych.2011.02.005

Thursday, 21 March 2013

Blogging as post-publication peer review: reasonable or unfair?

In a previous blogpost, I criticised a recent paper claiming that playing action video games improved reading in dyslexics. In a series of comments below the blogpost, two of the authors, Andrea Facoetti and Simone Gori, have responded to my criticisms. I thank them for taking the trouble to spell out their views and giving readers the opportunity to see another point of view. I am, however, not persuaded by their arguments, which make two main points. First, that their study was not methodologically weak and so Current Biology was right to publish it, and second, that it is unfair, and indeed unethical, to criticise a scientific paper in a blog, rather than through the regular scientific channels.
Regarding the study methodology: as I noted in that post, the principal problem with the study by Franceschini et al was that it was underpowered, with just 10 participants per group. The authors reply with an argumentum ad populum, i.e. many other studies have used equally small samples. This is undoubtedly true, but it doesn't make it right. They dismiss the paper I cited by Christley (2010) on the grounds that it was published in a low-impact journal. But the serious drawbacks of underpowered studies have been known for years, and written about in high- as well as low-impact journals (see references below).
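To make the power problem concrete, here is a back-of-the-envelope calculation. This is my own illustration, not taken from the paper: the assumed true effect of Cohen's d = 0.5 (a conventional 'medium' effect) is an assumption for the sake of argument, and the normal approximation slightly flatters the exact t-test. A short Python sketch:

```python
from statistics import NormalDist

def approx_power(d, n_per_group, alpha=0.05):
    """Normal-approximation power for a two-sample comparison
    with standardized effect size d and n participants per group."""
    nd = NormalDist()
    z_crit = nd.inv_cdf(1 - alpha / 2)
    # Noncentrality: expected z-statistic for the group difference
    ncp = d * (n_per_group / 2) ** 0.5
    # Probability the test statistic falls in either rejection region
    return (1 - nd.cdf(z_crit - ncp)) + nd.cdf(-z_crit - ncp)

# A 'medium' effect (d = 0.5) with 10 per group:
print(round(approx_power(0.5, 10), 2))  # roughly 0.2
```

In other words, even if the intervention genuinely works with a medium-sized effect, a study this small would miss it about four times out of five.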
The response by Facoetti and Gori illustrates the problem I had highlighted. In effect, they are saying that we should believe their result because it appeared in a high-impact journal, and now that it is published, the onus must be on other people to demonstrate that it is wrong. I can appreciate that it must be deeply irritating for them to have me expressing doubt about the replicability of their result, given that their paper passed peer review in a major journal and the results reach conventional levels of statistical significance. But in the field of clinical trials, the non-replicability of large initial effects from small trials has been demonstrated on numerous occasions, using empirical data - see in particular the work of Ioannidis, referenced below. The reasons for this ‘winner’s curse’ have been much discussed, but its reality is not in doubt. This is why I maintain that the paper would not have been published if it had been reviewed by scientists who had expertise in clinical trials methodology. They would have demanded more evidence than this.
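The winner's curse is easy to demonstrate by simulation. The sketch below is my own illustration, not an analysis of the Franceschini et al data: it assumes a true effect of d = 0.5 and runs thousands of simulated two-group studies with 10 participants per group, then looks at the observed effect sizes of those studies that happen to reach p < .05:

```python
import math
import random

def simulate(true_d=0.5, n=10, sims=20000, t_crit=2.101, seed=1):
    """Simulate two-group studies (n per group) with true effect true_d.
    Returns (estimated power, mean |observed d| among 'significant' studies).
    t_crit = 2.101 is the two-tailed 5% cutoff for a t-test with df = 18."""
    rng = random.Random(seed)
    n_sig, sig_ds = 0, []
    for _ in range(sims):
        a = [rng.gauss(0, 1) for _ in range(n)]
        b = [rng.gauss(true_d, 1) for _ in range(n)]
        ma, mb = sum(a) / n, sum(b) / n
        va = sum((x - ma) ** 2 for x in a) / (n - 1)
        vb = sum((x - mb) ** 2 for x in b) / (n - 1)
        sp = math.sqrt((va + vb) / 2)            # pooled SD
        t = (mb - ma) / (sp * math.sqrt(2 / n))  # two-sample t statistic
        if abs(t) > t_crit:
            n_sig += 1
            sig_ds.append(abs(mb - ma) / sp)     # observed Cohen's d
    return n_sig / sims, sum(sig_ds) / n_sig

power, mean_sig_d = simulate()
print(f"power ~ {power:.2f}; mean significant |d| ~ {mean_sig_d:.2f}")
```

Because only large observed effects can clear the significance threshold at this sample size, the studies that 'work' systematically overestimate the true effect. That is precisely why large initial effects from small trials so often shrink on replication.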
The response by the authors highlights another issue: now that the paper has been published, the expectation is that anyone who has doubts, such as me, should be responsible for checking the veracity of the findings. As we say in Britain, I should put up or shut up. Indeed, I could try to get a research grant to do a further study. However, I would probably not be allowed by my local ethics committee to do one on such a small sample and it might take a year or so to do, and would distract me from my other research. Given that I have reservations about the likelihood of a positive result, this is not an attractive option. My view is that journal editors should have recognised this as a pilot study and asked the authors to do a more extensive replication, rather than dashing into print on the basis of such slender evidence. In publishing this study, Current Biology has created a situation where other scientists must now spend time and resources to establish whether the results hold up.
To establish just how damaging this can be, consider the case of the FastForword intervention, developed on the basis of a small trial initially reported in Science in 1996. After the Science paper, the authors went directly into commercialization of the intervention, and reported only uncontrolled trials. It took until 2010 for there to be enough reasonably-sized independent randomized controlled trials to evaluate the intervention properly in a meta-analysis, at which point it was concluded that it had no beneficial effect. By this time, tens of thousands of children had been through the intervention, and hundreds of thousands of research dollars had been spent on studies evaluating FastForword.
I appreciate that those reporting exciting findings from small trials are motivated by the best of intentions – to tell the world about something that seems to help children. But the reality is that, if the initial trial is not adequately powered, it can be detrimental both to science and to the children it is designed to help, by giving such an imprecise and uncertain estimate of the effectiveness of treatment.
Finally, a comment on whether it is fair to criticise a research article in a blog, rather than going through the usual procedure of submitting an article to a journal and having it peer-reviewed prior to publication. The authors' reactions to my blogpost are reminiscent of Felisa Wolfe-Simon's response to blog-based criticisms of a paper she published in Science: "The items you are presenting do not represent the proper way to engage in a scientific discourse". Unlike Wolfe-Simon, who simply refused to engage with bloggers, Facoetti and Gori show willingness to discuss matters further and present their side of the story, but it is nevertheless clear that they do not regard a blog as an appropriate place to debate scientific studies.
I could not disagree more. As was readily demonstrated in the Wolfe-Simon case, what has come to be known as ‘post-publication peer review’ via the blogosphere can allow for new research to be rapidly discussed and debated in a way that would be quite impossible via traditional journal publishing. In addition, it brings the debate to the attention of a much wider readership. Facoetti and Gori feel I have picked on them unfairly: in fact, I found out about their paper because I was asked for my opinion by practitioners who worked with dyslexic children. They felt the results from the Current Biology study sounded too good to be true, but they could not access the paper from behind its paywall, and in any case they felt unable to evaluate it properly. I don’t enjoy criticising colleagues, but I feel that it is entirely proper for me to put my opinion out in the public domain, so that this broader readership can hear a different perspective from those put out in the press releases. And the value of blogging is that it does allow for immediate reaction, both positive and negative. I don’t censor comments, provided they are polite and on-topic, so my readers have the opportunity to read the reaction of Facoetti and Gori. 
I should emphasise that I have no personal axe to grind with the study's authors, whom I do not know. I'd be happy to revise my opinion if convincing arguments are put forward, but I think it is important that this discussion takes place in the public domain, because the issues it raises go well beyond this specific study.

Button, K. S., Ioannidis, J. P. A., Mokrysz, C., Nosek, B. A., Flint, J., Robinson, E. S. J., & Munafò, M. R. (2013). Power failure: why small sample size undermines the reliability of neuroscience. Nature Reviews Neuroscience, advance online publication. doi: 10.1038/nrn3475
Ioannidis, J. P. A. (2005). Why most published research findings are false. PLoS Medicine, 2(8), e124. doi: 10.1371/journal.pmed.0020124
Ioannidis, J. P. (2008). Why most discovered true associations are inflated. Epidemiology, 19(5), 640-648.
Ioannidis, J. P., Pereira, T. V., & Horwitz, R. I. (2013). Emergence of large treatment effects from small trials: reply. JAMA, 309(8), 768-769. PMID: 23443435