Friday 30 December 2011

Publishers, psychological tests and greed


There was an intriguing piece in the New England Journal of Medicine this week about a commonly used screening test that indicates whether someone is likely to have dementia. The Mini Mental State Examination (MMSE) is widely used throughout the world because it is quick and easy to administer. The test is very simple: you need no equipment, and the eleven items, involving questions to test orientation (e.g. “Where are we?”) and language (e.g. “What is this?” while showing the patient a wristwatch), are reproduced at the end of the original article about the MMSE, which was published in 1975.
The problem is that now the authors have taken steps to license the test, so that it has to be purchased from Psychological Assessment Resources. The cost is modest, $1.23 per test, but nevertheless more than the cost of photocopying one side of paper, which is what people have been doing for years. And of course, if people have to use only officially purchased copies of MMSE there are the additional costs of raising purchase orders, postage, storing packs of forms, and so on.
I’ve got a particular interest in this story, as I have published psychological tests, both off my own bat and through a test publishing company. I started out in the late 1970s, when I developed a test of children’s comprehension called the Test for Reception of Grammar (TROG). This was more complicated than MMSE in two important respects. It involved lots of brightly coloured pictures as well as a record form, and in order to decide if a child had comprehension problems, I needed to establish how well typical children performed at different ages. The latter process, known as test standardisation, is not a trivial task, because you have to test lots of children to get a good estimate of the range of scores as well as the average score at different ages. This early work was done as part of a study funded by the Medical Research Council (MRC), but I assumed that, if the project worked out, we’d need a test publisher, and so I contacted one.

The project involved two big costs. First there was the cost of my time and effort in devising the test, finding reliable people to test hundreds of children nationwide, analysing the results and writing the manual. The other cost was printing colour test booklets. I had assumed that the test publisher would be willing to cover this, but they weren’t. They suggested that the MRC should find another several thousand pounds to cover printing. Now this made me cross. The publisher would get for free a fully standardised test that they could sell, no doubt at vast profit, but they wanted someone else to foot the bill for production costs. MRC were actually making quite positive noises about finding the money, but I was irritated enough to explore other options. I found a local printer, learned about the arcane world of different colour separation processes, and came away with a reasonable quote. I also discovered something quite interesting. The costs were all in the initial process of creating plates: the actual printing costs were trivial. This meant that it cost no more to print 1,000 picture books than the 100 copies I needed. And the costs of printing record forms were trivial. I returned to MRC and suggested we leave the publisher out of the equation, and they agreed.

All proceeded very smoothly, but once the standardisation was completed, I had a problem. There were 900 unused copies of the picture book. I discussed with MRC what we should do. They suggested I could give them away, but this would mean the test would become obsolete as soon as all the copies were used up. In the end, we reached an agreement that I could sell the test as a kind of cottage industry, and share any profits with MRC. And so I did for about the next 15 years. I didn’t bother to copyright the test because it was cheaper to buy it from me than to photocopy it. Nevertheless, I made a nice profit, and took considerable pleasure in telling the publisher to piss off some years later when they approached me expressing interest in TROG.
My next foray into test publishing was with a four-page questionnaire, the Children’s Communication Checklist (CCC). As with TROG, I hadn’t set out to devise an assessment: it came about because there wasn’t anything out there that did what I wanted, so I had to make my own instrument. I published a paper on the CCC in 1998, and listed all the items in an Appendix. I had a problem, though. I was getting busier all the time. For some years I had been paying graduate students to look after TROG sales: the weekly trip to the post office with heavy parcels had become too much of a chore. And every time I moved house, there was the question of what to do with the stock: boxes of picture books and record forms. I also realised that TROG was getting out of date - it’s well recognised that tests need restandardising every ten years or so. I also wanted to develop a test of narrative language.  And the CCC was far from perfect and needed revamping and standardising. So I took the big step: I contacted a test publisher. A different one from before. To cut a long story short, they put money into the standardisation, covered production costs, and offered highly professional editorial support. There are now three of my tests in their catalogue. 
The upside for me? The tests are actually marketed, so sales are massive compared with my cottage industry activities. And I no longer have to keep a cellar full of cardboard boxes of stock, or concern myself with organising printing and despatching tests, or dealing with complaints from someone whose finger was cut by an injudiciously placed staple. There is a downside, though. The tests are far more expensive. Having done the publishing myself, I know a little secret of the test publishing business: they don’t make their profits from actual test materials such as coloured picture books or IQ test kit. The profits are all in the record forms. These cost peanuts to produce and are sold at a mind-boggling mark-up.
I went into the deal with the publisher with my eyes open. They are a business and I knew they’d make profit from my academic work - just as journal publishers do. I reckon they’ve done more to deserve that profit than most journal publishers, as they put money into test development. That involved taking a gamble that the tests would sell. I have benefited from having a large professional organisation promoting my work, and I do get royalties on the tests. I recycle these back to a relevant charity, and there’s something pleasing about profits from testing children’s language being ploughed back into helping children with language problems.
But my publisher’s situation is very, very different from the situation with MMSE. The only people who could plausibly argue they deserve to make money from the test are its authors: the publisher has put no money into development of the test and taken no risks. The authors appear to be claiming that the test items are their intellectual property, and that anyone who attempts to develop a similar test is infringing their copyright. But where did the MMSE items come from? A quick read of the introduction to the 1975 paper gives an answer. Most of them are based on a longer assessment described in a 1971 article by Withers and Hinton. It would seem that the main contribution of Folstein et al was to shorten an existing test. I wonder if the British Journal of Psychiatry should go after them for copyright infringement?

Newman, J., & Feldman, R. (2011). Copyright and Open Access at the Bedside. New England Journal of Medicine, 365(26), 2447-2449. DOI: 10.1056/NEJMp1110652

P.S. Another post includes some information on how MMSE was developed.

You can read more by scrolling down to "The Mini Exam with Maximal Staying Power" on this site from 2007.

Sunday 18 December 2011

NHS research ethics procedures: a modern-day Circumlocution Office


In Little Dorrit, Charles Dickens rails against the stifling effects of bureaucracy:
No public business of any kind could possibly be done at any time without the acquiescence of the Circumlocution Office.… the Circumlocution Office was down upon any ill-advised public servant who was going to do it, or who appeared to be by any surprising accident in remote danger of doing it, with a minute, and a memorandum, and a letter of instructions that extinguished him.
 Substitute “NHS research ethics procedures” for Circumlocution Office, and “researcher” for public servant, and you have a perfect description of a contemporary problem.

December 2010
My programme grant has been running now for over a year, and it’s time to gird up my loins to tackle NHS ethics. I’ve had plenty of other research to keep me busy, but I’m aware that I’ve been putting off this task after earlier aversive experiences. “Come on,” I tell myself, “you deal with unpleasant and bureaucratic tasks regularly - reviewing grants, responding to reviewer comments, completing your tax return. You really just have to treat this in the same way.”
It starts well enough. I track down a website for the Integrated Research Application System (IRAS). I start to have misgivings when it tells me that it’ll take approximately an hour to work through its e-learning training module. To my mind, any web-based form that requires training in its use needs redesigning. But I bite the bullet and work through the training. Not too bad, I think. I can handle this. I start to complete the form. I’m particularly happy to find little buttons associated with each question that explain what they want you to say. A definite improvement, as in the earlier versions you spent a lot of time trying to work out what the questions were getting at. It also cleverly adapts so that it excludes questions that aren’t relevant to your application. This turns out to be a two-edged sword, as I discover some weeks later. But at present I am progressing and in a cheerful mood.
The process is interrupted by the need to travel from Australia to the UK, Christmas, snow, massive revision to do to address reviewer comments on a paper, etc.

January 2011
Input more information, design information sheets, consent forms, etc, etc. Still feeling buoyant. The form is virtually complete, except for some information from collaborators and bits that need to be completed by Oxford R&D. I realise we want an information video for kids who can’t read; it’ll need to be approved, but we don’t want to go to all the trouble and expense of making it before getting approval. Discuss with helpful person from Oxford R&D, who suggests I write a script for approval. I also book in the film crew, shortlist and interview candidates for research assistant posts on the project, send draft to all collaborators for approval, and ask geneticist collaborator for help with some details. Am finding that progress is slower and slower, because navigating the form is so difficult: it displays one page at a time and does not scroll. You can specify a question to go to, but it’s not easy to remember which questions correspond to which numbered item, and so you end up repeatedly printing out the whole form and shuffling through a mountain of paper to find the relevant question. Keeping things consistent is a big headache.

February 2011
Two weeks’ holiday, then enter final details that were sent to me by collaborators and send the whole lot off to R&D.
The dynamic form starts to reveal its diabolic properties when I enter a new collaborator from Cardiff, only to find that the form now pops up with a new question, along the lines of “How will you meet the requirements of the Welsh Language Act 1993?”. I won’t. We’re studying language, and all our tests are in English, so only English speakers will be recruited. Explain that, and hope it works out.
But now it gets seriously worse. I’ve entered lots of clinical colleagues as “NHS Sites”, but it turns out they aren’t sites. They are Patient Identification Centres. I have to delete them all from the form. Well, I think, at least that makes life simpler. But it doesn’t. Because now they aren’t sites any more, new questions pop up. Who will do the patient recruitment, and how will we pay for it? This one is a Catch 22. Previously our research assistants have been supervised by a consultant to go through records to find relevant cases. Some places required that you get honorary NHS status, and that could necessitate fulfilling other requirements. I actually had to get vaccinated for tetanus as part of getting an NHS contract some years ago. They said it was in case I got bitten by a child, something that has not happened to me in 35 years of researching. But I digress.  Now, it seems, even a fully vaccinated, child-proofed, police-checked researcher is not allowed to go through medical records to identify cases unless patients have given prior consent. Which, of course, they won’t have, since they don’t know about the study.
“Help!” I say to my lovely clinical colleagues. “What do we do now?” Well, they have a suggestion. If I can register with something called CLRN, then they can help with patient recruitment. I’m given contact details for a research nurse affiliated with CLRN who soothes my brow and encourages me to go the CLRN route. I have to fill in something called an NIHR CSP Application Form which apparently goes to a body called the “portfolio adoption team” who can decide whether to adopt me and my project. All of these forms want a project start date and duration. I did have early April as a notional start date, but that’s beginning to look optimistic.
Late February: comments back from R&D. They have been through the application with a fine-tooth comb and picked up various things they anticipate won’t be liked by the ethics committee. I’m impressed with the thoroughness and promptness of the response, and found the people at R&D very helpful over the phone, but my goodness, there is a lot to cope with here:
First, it seems I am still in a muddle about the definition of NHS sites, so have filled in bits wrongly that need to be entered elsewhere. Am also confused about the distinction between an “outcome” and an “outcome measure”.
Then there is the question of whether I need “Site specific forms”. The word “site” is starting to cause autonomic reactions in me. Here’s what I’m told: “Please supply an NHS SSI form for each research site; Please note for Patient Identification Centres (PICs)  R&D approval is required but you do not need an SSI form for these provided no research activity takes place on that site – taking consent to take part in the project is a research activity, giving out information on the study/advertising the study is not considered a research activity.”
I also baulk at the suggestion that I should add to the information sheet: “The University has arrangements in place to provide for harm arising from participation in the study for which the University is the Research Sponsor. NHS indemnity operates in respect of the clinical treatment with which you are provided.” Since I don’t understand what this means, I doubt my participants will, and the participants aren’t receiving any clinical treatment. Out of curiosity, I paste these two sentences into a readability index website. It gives the passage a Flesch-Kincaid Grade Level of 22, with a readability score of 4 (on a scale of 0 to 100, where 100 is easy). I try to keep my information sheets at a maximum 8th-grade level, so I reword the bits I do understand and delete the bits that seem irrelevant or incomprehensible.
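(For anyone who wants to run the same kind of check on their own information sheets, here is a minimal sketch of the standard Flesch formulas in Python. It is not the website I used, whose exact counting rules I don’t know, and the syllable counter is a crude heuristic, so treat the output as a rough approximation: different tools count sentences and syllables differently, which is partly why reported grade levels vary so much.)

    import re

    def count_syllables(word):
        # Crude heuristic: count vowel groups, with a rough adjustment for a silent final 'e'.
        word = word.lower()
        n = len(re.findall(r"[aeiouy]+", word))
        if word.endswith("e") and n > 1:
            n -= 1
        return max(n, 1)

    def readability(text):
        sentences = max(len(re.findall(r"[.!?]+", text)), 1)
        words = re.findall(r"[A-Za-z']+", text)
        syllables = sum(count_syllables(w) for w in words)
        wps = len(words) / sentences   # average words per sentence
        spw = syllables / len(words)   # average syllables per word
        ease = 206.835 - 1.015 * wps - 84.6 * spw   # Flesch Reading Ease (0-100, higher = easier)
        grade = 0.39 * wps + 11.8 * spw - 15.59     # Flesch-Kincaid Grade Level
        return ease, grade

    indemnity = ("The University has arrangements in place to provide for harm arising from "
                 "participation in the study for which the University is the Research Sponsor. "
                 "NHS indemnity operates in respect of the clinical treatment with which you "
                 "are provided.")

    print(readability(indemnity))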
I reluctantly went along with the idea that I should devise an “Assent form” for children. This is like a kiddie consent form, but with easier language, to be signed by both child and researcher. They seem to be a blanket requirement these days, regardless of the level of risk posed by research procedures. I dislike the Assent form because I am not sure what purpose it serves, other than to make children nervous about what they are getting themselves into. It has no legal status, and we can’t gather psychological test data from unco-operative children. Others share my view that this requirement is incoherent and wrong. But I want to do this study, so feel I have no choice. I had a look on the web and NHS guidance sites to look at suggested wordings, and did not like them, so did a modified and simplified version I hoped would be approved. It would be interesting to do some research on Assent forms to see how they are perceived by children.

March 2011
Hooray! By the start of March, I’m ready to submit my forms. Since  IRAS is all electronic, I had assumed I would do it with a button press, but that would be too simple. Multiple copies must be sent by snail mail within a specific time frame. There has been serious research on the environmental impact of this. But first there is the question of booking an appointment with an ethics committee. There’s a whole centre devoted to this task, and they have standard questions that they ask you about the nature of the research. I was doing well with these until we got to the question about children. Yes, I was going to do research with children. Ah, well then I couldn’t go to any old ethics committee, I had to go to one with a paediatrician. And, unfortunately, there weren’t any slots on committees in Oxfordshire with paediatricians. But, said the helpful girl on the phone, I could try calling the Oxfordshire people directly and they might be able to book me in. At 12.05 I call the number I’ve been given, only to get an automated message saying the office is only open from 10 to 12. Since the following morning I’m busy (I am trying to do my regular job through all this), despair starts to set in. But I break out of a meeting to call them the next morning. The phone rings. And rings. Back to my meeting. Break out again, repeat experience. Eventually I get through. Person at end of phone takes me through the same list of questions about type of research, and finds a convenient slot with an Oxfordshire committee, which I can make if I move an appointment. Move the appointment. Get called back to say that committee can’t unfortunately take me, because they don’t do proposals with children. Am offered another slot on a day when I have arranged to examine a PhD in London. Next one in Oxford is a month later, well after the proposed start date for the research. Best they can do is to offer me a slot with a Berkshire committee, who do have a paediatrician and are just one hour’s drive away, and which is later than the original slot, but sooner than the Oxford one. I decide to go for it. I then receive a remarkable document with a lot of multicoloured writing, which gives me a booking confirmation number, and a lot of instructions.

This triggers a frantic process because you then have seven days to get all the material delivered to the ethics committee. This may not seem difficult, except that all the information sheets and consent forms need to have a little header put on them with the booking number and date, and they also want copies of things like a CV, copies of test forms and suchlike, and worse still, there have to be signatures not just from me but also from R&D, who are in a hospital a couple of miles away up a hill. Unfortunately, this coincides with a period when my PA is absent, and so I rush around like a demented cockroach getting this all together. I’d not budgeted much time for this bit, as I’d assumed submission would involve pressing a button on my computer and uploading some attachments, and my diary was full. Somehow I had to find a couple of hours for fiddling with forms, a trip up the hill for a signature the next day, and a journey to the post office to ensure it would all get delivered on time.
I also needed to get the documents to CLRN. This could be done by email, but that soon bounced back. Once more the critical distinction between sites and centres eluded me, and I was told that I had to submit corrected documents because:
 “In Part C, if the only research site is the University of Oxford and the other organisations listed are Participant Identification Centres (PICs), there should be listed under the heading Participant Identification Centre(PIC)Collaborator/Contact immediately below the University of Oxford entry, and not separately.”
So back to the form again to alter this bit. At last it is accepted. But this now triggers new emails, including one from London saying:
“We have been notified that you may be participating in the above study. If the Chief Investigator or Study Coordinator confirms this, Central and East London CLRN will be supporting you locally through the NIHR CSP process and we look forward to working with you on this project.
If this is confirmed, please email all relevant documents to me when you submit your SSI Form through IRAS. The documents you need to submit are listed on the Checklist tab within your SSI Form in IRAS…..etc etc”
The SSI form was one I thought I didn’t have to complete, so I phoned the number given in the email; the person who answered said they couldn’t comment and that I should ask Oxford, so I asked Oxford, who agreed I didn’t need to do anything.
Meanwhile, there’s yet another form that has popped up that wants to know what training in ethics the researchers have had. Since I haven’t had formal training, I’m told I can either go on a half-day course, or take an online course in five modules, each lasting around 45 minutes. I try the online course, but find most of the material is not relevant to me. It starts with pictures of concentration camp victims to emphasise why people need to be protected from researchers, then goes on to give information focussed on clinical trials. I’m not doing a clinical trial. The quizzes at the end of each module don’t seem designed to check whether you have mastered the subtleties of ethical reasoning, so much as whether you know your way around the bureaucratic maze that is involved in ethical approval, and in particular whether you understand all the acronyms.

April 2011
The six weeks from early March to mid April were joyfully free from communications with ethics people, and normal life resumed. My new staff took up their posts and we made a start on filming for an information DVD for the project, and decided that we would delay the editing stage until after the Berkshire meeting. The day of the committee meeting dawned sunny and bright and I drove off to Berkshire, where I had a perfectly reasonable chat with the ethics committee about the project for about 15 minutes. The Paediatrician was absent. I explained I wanted to assemble the information video, but was told I had to wait until I received a letter documenting changes they’d want me to make. When this arrived, about a week later, they wanted some minor rewording of one sentence. This would be trivial for a written information sheet, but entailed some refilming and careful editing. In addition, the committee raised a point that had not been discussed when I met with them, namely that they were concerned at a statement we had made saying we would give feedback to parents about their children’s language assessment if we found difficulties that had not previously been detected. This, I was told, was an incentive, and I should “soften” the language. This was seriously baffling, as you either tell someone you’ll give them feedback or you don’t. I could not see how to reword it, and I also felt the concern about incentives was just silly. I sent them a copy of a paper on this topic for good measure.

May 2011
Oh frabjous day! At last I receive a letter giving consent for the study to go ahead. I think my troubles are over, and we swing into action with those parts of the project that don’t involve NHS recruitment. But joy is short-lived. I am only just beginning to understand the multifarious ways in which it is possible to Get Things Wrong when dealing with the Circumlocution Office. I now start to have communications with the CLRN, who want copies of all documentation (including protocol, consent forms, the information video, etc etc - a total of 15 documents) and then tell me:
“The R&D Signature pages uploaded to the doc store on 27th June 2011 do not marry up with the R&D Form uploaded on 15th March 2011”
Requests for new form-filling also come in from the CCRN Portfolio. I’m getting seriously confused about who all these people are, but complete the form anyway.
And, worse still, in August I get a request from TVCLRN for a copy of the letter I sent to Berkshire in which I responded to their initial comments. I had written it at a time when my computer was malfunctioning, so it’s not with the other correspondence. I spend some time looking on other computers for an electronic copy. It seems that without a copy of this letter, they will not be satisfied. Anyhow, I think this will be simple to sort out, and phone the Berkshire ethics committee to ask if they could please send me a copy of the letter that I had written to them. Amazingly, I’m told that “due to GCP guidelines” the Berkshire ethics committee cannot give me a copy. Stalemate. I can’t actually remember how we dealt with this in the end, as my brain started to succumb to Circumlocution Overload.

The last 6 months
We have a meeting with the clinical geneticists with whom we’re collaborating, and I find that most of them are as confused as I am by the whole process. We discuss the Catch 22 situation whereby we aren’t allowed to help go through files to identify suitable patients because of ethical concerns, which means they have to take time out of their busy schedules to do so. This is where the CLRN is supposed to help, by providing research nurses who can assist, but only if we complete loads more paperwork. And having done this, after months of to-ing and fro-ing with requests for documentation or clarification, one of the CLRN centres has just written this week to say they can’t help us at all because they are a Patient Identification Centre and they need to be a PI, whatever that is. I’m currently trying to unravel what this means, and I think it means that they have to become an NHS Site - which was what I  had originally assumed when I started filling in the forms. But in order for them to do so, there are yet more forms to complete.
Meanwhile, in October, I had a request from the UKCRN saying I needed to upload monthly data on patient recruitment in a specific format, and sending me a 35 page manual explaining how to do this. Fortunately, after several exchanges on email, I was able to establish that we did not need to do this, as the hospitals we were dealing with were Patient Identification Centres rather than Sites. But now we have a PIC that wants to become a Site, who knows what new demands will appear?
And then, this week, a new complication. The geneticists who are referring children to our study need to check with the child’s GP that it is appropriate to send them the recruitment materials. But an eagle-eyed administrator spotted that this letter “was not an ethically approved form”. I was surprised at this. This is not a letter to a patient; it is a standard communication between NHS professionals. Nevertheless, my R&D contact confirmed that this letter would need approval, and that I’d have to fill in a form for a “substantial amendment”, which would then need to be approved by all the R&D sites as well as the Berkshire ethics committee.
When I expressed my despair about the process on Twitter, I had some comments from ethicists, one of whom said “If you're doing research on ppl then someone has to look after them, no?” Of course, the answer is “yes”, and in fact the project I’m working on does raise important ethical issues. As another commentator pointed out, the problem is not usually with the ethics procedures themselves: it is true that the IRAS form is much better than its predecessor, guides you through issues that you need to think about, and offers good advice. But the whole process has got tangled up in bureaucratic legal issues, and most of my problems don’t have anything to do with protecting patients and have everything to do with protecting institutions against remote possibilities of litigation.

Concluding thoughts
1. In the summer, I was contacted by a member of the public who was concerned about the way in which a medical project done at Oxford University was being used to promote unproven diagnostic tests and treatment for a serious medical condition. I recommended that my contact should write to the relevant person dealing with ethics in the University. I was sanguine that this would be taken seriously: here was an allegation of serious infringement of ethical standards, and all my dealings with our R&D department indicated they were sticklers for correct procedures. A month or so passed; they didn’t reply to the complainant. I was embarrassed by this and so wrote to point out that a serious complaint had gone uninvestigated. After a further delay we both got a bland reply that did not answer the specific questions that had been raised and just reassured us the matter was being investigated. This just confirms my cynicism about the role of our systems in protecting patients. As Thomas Sowell pointed out: “You will never understand bureaucracies until you understand that for bureaucrats procedure is everything and outcomes are nothing.”
2. The current system is deterring people from doing research. The problem is not with the individuals running the system: they’ve mostly been highly professional, helpful and competent, but they are running a modern Circumlocution Office. I’ve interacted with at least 27 people about my proposal, and that’s not counting the Research Ethics Committee members. I’m a few years off retirement and I’ve already decided that I won’t tangle with NHS Ethics again. I’m in the fortunate position that I can do research studies that don’t involve NHS patients, and I want to spend the time remaining to me engaged in the activity I like, rather than chasing pieces of paper so that someone somewhere can file them, or waiting for someone to agree that an innocuous letter from a Consultant to a GP is ethically acceptable.
3. To end on a positive note: I think there is another way. The default assumption seems to be that all researchers are unscrupulous rogues who’ll go off the rails unless continuously monitored. The system should be revamped as a mechanism for training researchers to be aware of ethical issues and helping them deal with difficult issues. For research procedures that are in common use, one can develop standard protocols that document how things should be done to ensure best practice. On this model, a researcher would indicate that their research would follow protocol X and be trusted to do the research in an ethical fashion. The  training would also ensure that researchers would recognise when a study involved ethically complex or controversial aspects that fell outside a protocol, and would be expected to seek advice from the Research Ethics Committee. The training would not revolve around learning acronyms, but would rather challenge people with case studies of ethical dilemmas to ensure that issues such as confidentiality, consent and risk were at the forefront of the researcher’s mind. This is the kind of model we use for people engaged in other activities that could pose risks to others - e.g.,  medical staff, teachers, car-drivers. Life would come to a standstill if every activity they undertook had to be scrutinised and approved. Instead, we train people to perform to a high standard, and then trust them to get on with it. We need to adopt the same approach to researchers if we are not to stifle research activity with human participants.

Further reading
Kielmann, T., Tierney, A., Porteous, R., Huby, G., Sheikh, A., & Pinnock, H. (2007). The Department of Health's research governance framework remains an impediment to multi-centre studies: findings from a national descriptive study. Journal of the Royal Society of Medicine, 100(5), 234-238. PMID: 17470931
Knowles, R. L., Bull, C., Wren, C., & Dezateux, C. (2011). Ethics, governance and consent in the UK: implications for research into the longer-term outcomes of congenital heart defects. Archives of Disease in Childhood, 96(1), 14-20.
Robinson, L., Drewery, S., Ellershaw, J., Smith, J., Whittle, S., & Murdoch-Eaton, D. (2007). Research governance: impeding both research and teaching? A survey of impact on undergraduate research opportunities. Medical Education, 41(8), 729-736.
Warlow, C. (2005). Over-regulation of clinical research: a threat to public health. Clinical Medicine, 5(1), 33-38.
Wilkinson, M., & Moore, A. (1997). Inducement in research. Bioethics, 11, 374-389.

Sunday 4 December 2011

Pioneering treatment or quackery? How to decide

My mother was only slightly older than I am now when she died of emphysema (chronic obstructive pulmonary disease). It’s a progressive condition for which there is no cure, though it can be managed by use of inhalers and oxygen. I am still angry at the discomfort she endured in her last years, as she turned from one alternative practitioner to another. It started with a zealous nutritionist who was a pupil of hers. He had a complicated list of foods she should avoid: I don’t remember much about the details, except that when she was in hospital I protested at the awful meal she’d been given - unadorned pasta and peas - only to be told that this was at her request. Meat, sauces, fats, cheese were all off the menu. My mother was a great cook who enjoyed good food, but she was seriously underweight and the unappetising meals were not helping. In that last year she also tried acupuncture, which she did not enjoy: she told me how it involved lying freezing on a couch having needles prodded into her stick-like body. Homeopathy was another source of hope, and the various remedies stacked up in the kitchen. Strangely enough, spiritual healing was resisted, even though my Uncle Syd was a practitioner. That seemed too implausible for my atheistic mother, whose view was: “If there is a God, why did he make us intelligent enough to question his existence?”
From time to time, friends and relatives of mine have asked my advice about other treatments that are out there. There is, for instance, the Stem Cell Institute in Panama, offering treatment for multiple sclerosis, spinal cord injury, osteoarthritis, rheumatoid arthritis, other autoimmune diseases, autism, and cerebral palsy. Or nutritional therapist Lucille Leader, who has a special interest in supporting patients with Parkinson's Disease, Multiple Sclerosis and Inflammatory Bowel Disease. My mother would surely have been interested in AirEnergy, a “compact machine that creates 'energised air' that feeds every cell in your body with oxygen that it can absorb and use more efficiently”.
Another source of queries is parents of the children with neurodevelopmental disorders who are the focus of my research. If you Google for treatments for dyslexia you are confronted by a plethora of options. There is the Dyslexia Treatment Centre, which offers Neurolinguistic Programming and hypnotherapy to help children with dyslexia, dyspraxia or ADHD. Meanwhile the Dore Programme markets a set of “daily physical exercises that aim to improve balance, co-ordination, concentration and social skills” to help those with dyslexia, dyspraxia, ADHD or Asperger’s syndrome. The Dawson Program offers vibrational kinesiology to correct imbalances in the body’s energy fields. I could go on, and on, and on….
So how on earth can we decide which treatments to trust and which are useless or even fraudulent? There are published lists of warning signs (e.g. ehow Health, Quackwatch), but I wonder how useful they are to the average consumer. For instance, the cartoon by scienceblogs will make skeptics laugh, but I doubt it will be much help for anyone with no science background who is looking for advice. So here’s my twopennyworth. First, a list of things you need to ignore when evaluating a treatment.
1. The sincerity of the practitioner. It’s a mistake to assume all purveyors of ineffective treatments are evil bastards out to make money out of the desperate. Many, probably most, believe honestly in what they are doing. The nutritionist who advised my mother was a charming man who did not charge her a penny - but still did her harm by ensuring her last months were spent on an inadequate and boring diet. The problem is that if practitioners don’t adopt scientific methods of evaluating treatments, they will convince themselves they are doing good, because some people get better anyway, and they’ll attribute the improvement to their method.
2. The professionalism of the website. Some dodgy treatments have very slick marketing. The Dore Treatment, which I regard as of dubious efficacy, had huge success when it first appeared. Its founder, Wynford Dore, was a businessman who had no background in neurodevelopmental disorders but knew a great deal about marketing. He ensured that if you typed ‘dyslexia treatment’ into Google his impressive website was the first thing you’d hit.
3. Fancy-looking credentials. These can be misleading if you aren’t an expert - and sometimes even if you are. My bugbear is ‘Fellow of the Royal Society of Medicine’, which sounds very impressive - similar to Fellow of the Royal Society (which really is impressive). In fact, the threshold for fellowship is pretty low, so much so that fellows are told by the RSM that they should not use FRSM on a curriculum vitae. So when you see this on someone’s list of credentials, it means the opposite of what you think: they are likely to be a charlatan. It’s also worth realising that it’s pretty easy to set up your own organisation and offer your own qualifications. I could set up the Society of Skeptical Quackbusters and offer Fellowship to anyone I choose. The letters FSSQ might look good, but carry no guarantee of anything.
4. Testimonials. There is evidence (reviewed here) that humans trust testimonials far more than facts and figures. It’s a tendency that’s hard to overcome, despite scientific training. I still find myself getting swayed if I hear someone tell me of their positive experience with some new nutritional supplement, and thinking, maybe there’s something in it. Advertisers know this: it’s one thing to say that 9 out of 10 cats prefer KittyMunch, but to make it really effective you need a cute cat going ecstatic over the food bowl. If you are deciding whether to go for a treatment you must force yourself to ignore testimonials. For a start, you don’t even know if they are genuine: anyone who regards sick and desperate people as a business opportunity is quite capable of employing actors to pose as satisfied customers. Second, you are given no information about how typical they are. You might be less impressed by the person telling you their dyslexia was cured if you knew that there were a hundred others who paid for the treatment and got no benefit. And the cancer patients who die after a miracle cure are the ones you won’t hear about.
5. Research articles. Practitioners of alternative treatments are finding that the public is getting better educated, and they may be asked about research evidence. So it’s becoming more common to find a link to ‘research’ on websites advertising treatments. The problem is that all too often this is not what it seems. This was recently illustrated by an analysis of research publications from the Burzynski clinic, which offers the opportunity to participate in expensive trials of cancer treatment. I was interested also to see the research listed on the website of FastForword, a company that markets a computerized intervention for children’s language and literacy problems. Under a long list of Foundational Research articles, they list one of my papers that fails to support their theory that phonological and auditory difficulties have common origins. More generally, the reference list contains articles that are relevant to the theory behind the intervention, but don’t necessarily support it. Few people other than me would know that. And a recent meta-analysis of randomized controlled trials of FastForword is a notable omission from the list of references provided. Overall, this website seems to exemplify a strategy that has previously been adopted in other areas such as climate change, impact of tobacco or sex differences, where you create an impression of a huge mass of scientific evidence, which can only be counteracted if painstakingly unpicked by an expert who knows the literature well enough to evaluate what’s been missed out, as well as what’s in there. It’s similar to what Ben Goldacre has termed ‘referenciness’, or the ‘Gish gallop’ technique of creationists. It’s most dangerous when employed by those who know enough about science to make it look believable. The theory behind FastForword is not unreasonable, but the evidence for it is far less compelling than the website would suggest.
So those are the things that can lull you into a false sense of acceptance. What about the red flags, warning signs that suggest you are dealing with a dodgy enterprise? None of these on its own is foolproof, but where several are present together, beware.
  1. Is there any theory behind the intervention, and if so is it deemed plausible by mainstream scientists? Don’t be impressed by sciency-sounding theories - these are often designed to mislead. Neuroscience terms are often incorporated to give superficial plausibility: I parodied this in my latest novel, with the invention of Neuropositive Nutrition, which is based on links between nutrients, the thalamus and the immune system. I suspect if I set up a website promoting it, I’d soon have customers. Unfortunately, it can be hard to sort the wheat from the chaff, but NHSChoices is good for objective, evidence-based  information. Most universities have a communications office that may be able to point you to someone who could indicate whether an intervention has any scientific credibility.  
  2. How specific is the treatment? A common feature of dodgy treatments is that they claim to work for a wide variety of conditions. Most effective treatments are rather specific in their mode of action.
  3. Does the practitioner reject conventional treatments? That’s usually a bad sign, especially if there are effective mainstream approaches.
  4. Does the practitioner embrace more than one kind of alternative treatment? I was intrigued when doing my brief research on Fellows of the Royal Society of Medicine to see how alternative interventions tend to cluster together. The same person who is offering chiropractic is often also recommending hypnotherapy, nutritional supplements and homeopathy. Since modern medical advances have all depended on adopting a scientific stance, anyone who adopts a range of methods that don’t have scientific support is likely to be a bad bet.
  5. Are those developing the intervention cautious, and interested in doing proper trials?  Do they know what a randomised controlled trial is? If they aren’t doing them, why not? See this book for an accessible explanation of why this is important.
  6. Does it look as though those promoting the intervention are deliberately exploiting people’s gullibility by relying heavily on testimonials? Use of celebrities to promote a product is a technique used by the advertising industry to manipulate people’s judgement. It’s a red flag.
  7. Are costs reasonable? Does the website give you any idea of how much they are, or do you have to phone up for information? (bad sign!). Are people tied in to long-term treatment/payment plans? Are you being asked to pay to take part in a clinical trial? (Very unusual and ethically dubious). Do you get a refund if it doesn’t work? If yes, read the terms and conditions very carefully so you understand exactly the circumstances under which you get your money back. For instance, I’ve seen a document from the Dore organisation that promised a money-back guarantee on condition there was ‘no physiological change’. That was interpreted as change on tests of balance and eye movements. These change with age and practice, and don’t necessarily mean a treatment has worked. Failing to improve in reading did not qualify you for the refund.
  8. Can the practitioner answer the question of why mainstream medicine/education has not adopted their methods? If the answer refers to others having competing interests, be very, very suspicious. Remember, mainstream practitioners want to make people better, and anyone who can offer effective treatments is going to be more successful than someone who can’t. 

Friday 25 November 2011

The weird world of US ethics regulation

There has been a lot of interest over the past week in the Burzynski Clinic, a US organisation that offers unorthodox treatment to those with cancer. To get up to speed on the backstory see this blogpost by Josephine Jones.
As someone who spends more of my time than I’d like grappling with research ethics committees, there was one aspect of this story that surprised me. According to this blogpost, the clinic is not allowed to offer medical treatment, but is allowed to recruit patients to take part in clinical trials. But this is expensive for participants. The Observer piece that started all the uproar this week described how a family needed to raise £200,000 so that their very sick little girl could undergo Burzynski’s treatment.
I had assumed that this trial hadn’t undergone ethical scrutiny, because I could not see how any committee could agree that it was ethical to charge someone enormous sums of money to take part in a research project in which there was no guarantee of benefit. I suspect that many people would pay up if they felt they’d exhausted all other options. But this doesn’t mean it’s right.
I was surprised, then, to discover that the Burzynski trial had undergone review by an Institutional Review Board (IRB - the US term for an ethics committee). A letter describing the FDA’s review of the relevant IRB is available on the web. It concludes that “the IRB did not adhere to the applicable statutory requirements and FDA regulations governing the protection of human subjects.”  There’s a detailed exposition of the failings of the Burzynski Institute IRB, but no mention of fees charged to patients. So I followed a few more links and came to a US government site that described regulatory guidelines for ethics committees, which had a specific section on Charging for Investigational Products. It seems the practice of passing on research costs to research participants is allowed in the US system.
There has been considerable debate in academic circles about the opposite situation, where participants are paid to take part in a study. I know of cases where such payments have been prohibited by an ethics committee on the grounds that they provide ‘inducement’, which is generally regarded as a Bad Thing, though there are convincing counterarguments. But I am having difficulty in tracking down any literature at all on the ethics of requiring participants to pay a fee to take part in research. Presumably this is a much rarer circumstance than cases where participants are paid, because in general people need persuading to take part in research. The only people who are likely to pay large sums to be a research participant are those who are in a vulnerable state, feeling they have nothing to lose. But these are the very people who need protection by ethics committees because it’s all too easy for unscrupulous operators to exploit their desperation. Anyone who doesn’t have approval to charge for a medical treatment could just redescribe their activities as a clinical trial and bypass regulatory controls. Surely this cannot be right.

Saturday 19 November 2011

Your Twitter Profile: The Importance of Not Being Earnest


I’m always fascinated by the profiles of people who follow me on Twitter. One of the things I love about Twitter is its ability to link me up with people who I’d never otherwise encounter. It’s great when I find someone from the other side of the world who’s interested in the same things as me. There are, of course, also those who just want to promote their product, and others, like Faringdon Motor Parts and Moaning Myrtle (@toiletmoans) whose interests in my tweets are, frankly, puzzling. But the ones that intrigue me most are the ones with profiles that create an immediate negative impression - or to put it more bluntly, make me just think "Pillock!" (If you need to look that up, you’re not from Essex).
Now language is one of my things - I work on language disorders, and over the years I’ve learned a bit about sociolinguistics - the influence of culture on language use. And that made me realise there were at least two hypotheses that could explain the occasional occurrence of offputting profiles. The first was that I am being followed by genuine pillocks. But the other was that there are cultural differences in what is regarded as an acceptable way of presenting yourself to the world. Maybe a turn of phrase that makes me think "pillock" would make someone else think "cool". And perhaps this is culturally determined.
So what, to my British ear, sets off the pillock detector? The major factor was self-aggrandisement. For instance, someone who describes themselves as "a top intellectual", "highly successful", "award-winning", or "inspirational".
But could this just be a US/UK difference? The British have a total horror of appearing boastful: the basic attitude is that if you are clever/witty/beautiful you should not need to tell people - it should be obvious. Someone who tells you how great they are is transgressing cultural norms. Either they really are great, in which case they are up themselves, as we say in Ilford, or they aren’t, in which case they are a dickhead. When I see a profile that says that someone is "interested in everything, knows nothing", "a lazy pedant", or "procrastinator extraordinaire", I think of them as a decent sort, and I can be pretty sure they are a Brit. But can this go too far? Many Brits are so anxious to avoid being seen as immodest that they present themselves with a degree of self-deprecation that can be confused by outsiders with false modesty at best, or neurotic depression at worst.
A secondary factor that sets off my negative reactions is syrupy sentiment, as evidenced in phrases such as: "empowering others", "Living my dream", or "I want to share my love". This kind of thing is generally disliked by Brits. I suspect there are two reasons for this. First, in the UK, displays of emotion are usually muted, except in major life-threatening circumstances: so much so that when someone is unabashedly emotional they are treated with suspicion and thought to be insincere. And second, Pollyannaish enthusiasm is just uncool. The appropriate take on life’s existential problems is an ironic one.
I was pleased to find my informal impressions backed up by social anthropologist Kate Fox, in her informative and witty book "Watching the English" (Hodder & Stoughton, 2004). Humour, she states, is our "default mode", and most English conversations will involve "banter, teasing, irony, understatement, humorous self-deprecation, mockery or just silliness." (p 61). She goes on to describe the Importance of Not Being Earnest rule: "Seriousness is acceptable, solemnity is prohibited. Sincerity is allowed, earnestness is strictly forbidden. Pomposity and self-importance are outlawed." (p. 62). Fox doesn’t explicitly analyse American discourse in the book, but it is revealing that she states: "the kind of hand-on-heart, gushing earnestness and pompous Bible-thumping solemnity favoured by almost all American politicians would never win a single vote in this country - we watch these speeches on our news programmes with a kind of smugly detached amusement." (p 62).
Anthropologists and linguists have analysed trends such as these in spoken discourse, but I wondered whether they could be revealed in the attenuated context of a Twitter profile. So in an idle moment (well, actually when I was supposed to be doing something else I didn’t want to do) I thought I’d try an informal analysis of my Twitter followers to see if these impressions would be borne out by the data. This is easier said than done, as I could find no simple way to download a list of followers, and so I had to be crafty about using "SaveAs" and "Search and Replace" to actually get a list I could paste into Excel, and when I did that, my triumph was short-lived: I found it’d not saved Location information. At this point, my enthusiasm for the project started to wane - and the task I was supposed to be doing was looking ever more attractive. But, having started, I decided to press on and manually enter location for the first 500 followers. (Fortunately I was able to listen to an episode of the News Quiz while doing this. I started to like all those eggs with no Location recorded). I then hid that column so it would not bias me, and coded the profiles for three features: (a) Gender (male/female/corporate/impossible to tell); (b) Self-promotion: my totally subjective rating of whether the profile triggered the pillock-detector; (c) Syrupy: another subjective judgement of whether the profile contained overly sentimental language. I had intended also to code mentions of cats - I was convinced that there was a British tendency to mention cats in one’s profile, but there were far too few to make analysis feasible. I was a victim of confirmation bias. So were my other intuitions correct? Well, yes and no.
For the analysis I just focused on followers from the US and UK. The first thing to emerge from the analysis was that pillocks were rare in both US and UK - rarer than I would have anticipated. I realised that, like mentions of cats, it’s something I had overestimated, probably because it provoked a reaction in me when it occurred. But I was pleased to see that my instincts were nonetheless correct: there were 7/97 (7.2%) pillocks in the US sample but only 2/153 (1.3%) in the UK. The sample size is really not adequate, and if I were going to seriously devote myself to sociolinguistics I’d plough on to get a much bigger sample size. But nevertheless, for what it’s worth, this is a statistically significant difference (chi square = 5.97, p = .015 if you really want to know). Syrup followed a similar pattern: again it was rare in both samples, but it was coded for 3/153 of the UK sample compared with 7/97 of the US. I’d coded gender as I had thought this might be a confounding factor, but in fact there were no differences between males and females in either pillocks or syrup. Of course, all these conclusions apply only to my followers, who are bound to be an idiosyncratic subset of people.
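(For anyone who wants to check the arithmetic, here is a minimal sketch of that chi-square test in Python, using the counts reported above; correction=False gives the uncorrected Pearson statistic, which is where the 5.97 comes from.)

    from scipy.stats import chi2_contingency

    # Rows: US, UK; columns: pillock, not-pillock (counts as reported above)
    table = [[7, 97 - 7],
             [2, 153 - 2]]

    # correction=False gives the plain Pearson chi-square, without Yates' correction
    chi2, p, dof, expected = chi2_contingency(table, correction=False)
    print(round(chi2, 2), round(p, 3))   # 5.97 0.015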
My conclusion from all this: we need to be more sensitive to cultural differences in self-expression. Looking over some of the profiles that I categorised as "pillock" I realise that I’m being grossly unfair to their owners.  After all, on a Twitter profile, the only information that people have about you comes from the profile - and your tweets. So it really is preposterous for me to react negatively against someone telling me they are an "award-winning author": that should engender my interest and respect. And, because this is a profile, and not a conversation, if they didn’t tell me, I wouldn’t know. And we really ought to cherish rather than mock those who try to bring a bit of love and kindness into the world. But somehow….
I hope that Americans reading this will get some insight into the tortuous mindset of the Brits: if we come across as dysfunctionally insecure losers it’s not that we really are - it’s that we’d rather you thought that of us than that we were boastful.

Sunday 13 November 2011

Vitamins, genes and language


Thiamine chloride  (source: Wikipedia)
In November 2003, a six-month-old boy was admitted to the emergency department of a children’s hospital in Tel Aviv. He had been vomiting daily for two months, was apathetic, and had not responded to anti-emetic drugs. The examining doctor noticed something odd about the child’s eye movements and referred him on to the neuro-ophthalmology department. A brain scan failed to detect any tumour. The doctors remembered a case they had seen 18 months earlier, where a 16-year-old girl had presented with episodic vomiting and abnormal eye movements due to vitamin B1 deficiency. They injected the child with thiamine and saw improvement after 36 hours. The vomiting stopped, and over the next six weeks the eye movements gradually normalised. When followed up 18 months later he was judged to be completely normal.
This was not, however, an isolated case. Other babies in Israel were turning up in emergency departments with similar symptoms. Where thiamine deficiency was promptly recognised and treated, outcomes were generally good, but two children died and others were left with seizures and neurological impairment. But why were they thiamine deficient? All were being fed the same kosher, non-dairy infant formula, but it contained thiamine. Or did it? Analysis of samples by the Israeli Ministry of Health revealed that levels of thiamine in this product were barely detectable, and there was an immediate product recall. The manufacturer confirmed that human error had led to thiamine being omitted when the formula had been altered.
The children who had been hospitalised were just the tip of the iceberg. Up to 1,000 infants had been fed the formula, and most of them had shown no signs of neurological problems. But a recent study reported in Brain describes a remarkable link between this early thiamine deprivation and later language development. Fattal and colleagues studied 59 children who had been fed the thiamine-deficient formula for at least one month before the age of 13 months, but who were regarded as neurologically asymptomatic. Children who had birth complications or hearing loss were excluded. The authors stress that the children were selected purely on the basis of their exposure to the deficient formula, and not according to their language abilities; all were attending regular schools. A control group of 35 children, matched on age, was selected from the same health centres.
Children were given a range of language tests when they were 5 to 7 years of age, including measures of sentence comprehension, sentence production, sentence repetition and naming. There were dramatic differences between the two groups, with the thiamine-deficient group showing deficits on all these tasks. The authors argued that the profile of performance was identical to that seen in children with a diagnosis of specific language impairment (SLI): particular difficulty with certain complex grammatical constructions, together with normal performance on a test of conceptual understanding that did not involve any language.
Figure 1. An example of a picture pair used in the comprehension task. The child is asked to point to the picture that matches a sentence, such as ‘Tar’e li et ha-yalda she-ha-isha mecayeret’ (Show me the girl that the woman draws). From Fattal et al, 2011.

I have some methodological quibbles with the paper. The authors excluded three control children who did poorly on the syntactic tests, on the grounds that they were outliers - this seems wrong-headed if the aim is to see whether syntactic problems are more common in thiamine-deficient children than in those without (the toy simulation below illustrates why). The non-language conceptual tests were too easy, with both groups scoring above 95% correct; to convince me that the thiamine-deficient children had otherwise normal abilities, the authors would need to demonstrate no difference between groups on a sensitive test of nonverbal IQ. My own experience of testing children’s grammatical abilities in English is that performance on tests such as that shown in Figure 1 can be influenced by attention and memory as well as by syntactic ability, so we need to rule out other explanations before accepting the linguistic account offered by the authors. I’d also have liked a bit more information about how the control children were recruited, to be certain they were not a ‘supernormal’ group - often a problem with volunteer samples, and something that could have been addressed if a standardised IQ test had been used. But overall, the effects demonstrated by these authors are important, given that there are so few environmental factors known to selectively affect language skills. These results raise a number of questions about children’s language impairments.
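Before turning to those questions, here is the toy simulation promised above. Both groups are drawn from exactly the same distribution, so there is no true deficit; the group sizes follow Fattal et al (59 exposed, 35 controls), but the score distribution (mean 100, SD 15) is entirely my own arbitrary assumption.

```python
# Toy simulation: both groups come from the SAME distribution (no real deficit),
# but the three lowest-scoring controls are excluded as 'outliers'.
# Group sizes follow Fattal et al (59 exposed, 35 controls); the score
# distribution (mean 100, SD 15) is an arbitrary assumption.
import numpy as np

rng = np.random.default_rng(42)
spurious_gaps = []
for _ in range(2000):
    exposed = rng.normal(100, 15, 59)
    control = rng.normal(100, 15, 35)
    control_trimmed = np.sort(control)[3:]      # drop the 3 lowest-scoring controls
    spurious_gaps.append(control_trimmed.mean() - exposed.mean())

print(f"average spurious control advantage: {np.mean(spurious_gaps):.1f} points")
```

Dropping the lowest-scoring controls hands the control group a spurious advantage of a couple of points on average, which is exactly the direction of bias that worries me here.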
The first question that struck me was whether thiamine deficiency might be implicated in other cases, beyond this rare instance. I have no expertise in this area, but the paper prompted me to seek out other reports. I learned that thiamine deficiency in infancy, known as infantile beriberi, is extremely rare in the developed world, and when it does occur it is usually because the infant is breastfeeding from a mother who is herself thiamine deficient. It is therefore important to stress that thiamine deficiency is highly unlikely to be implicated in cases of specific language impairment in Western societies. However, a recent paper reported that it is relatively common in Vientiane, Laos, where there are traditional taboos against eating certain foods in the period after giving birth. The researchers suggested that obvious cases with neurological impairments may be the extreme manifestation of a phenomenon that is widespread in milder form. If so, then the Israeli findings imply that the problem may be even more serious than originally thought, because there could be longer-term adverse effects on language development in those who are symptom-free in infancy.
The second question concerns the variation in outcomes of thiamine-deficient infants. Why, when several hundred children had been fed the deficient formula, were only some of them severely affected? An obvious possibility is the extent to which infants were fed foods other than the deficient formula. But there may also be genetic differences between children in how efficiently they process thiamine.
This brings us to the third question: could this observed link between thiamine deficiency and language impairment have relevance for genetic studies of language difficulties? Twin and family studies have indicated that specific language impairment is strongly influenced by genes. However, one seldom finds genes that have a major all-or-none effect. Rather, there are genetic risk variants that have a fairly modest and probabilistic impact on language ability.
Robinson Crusoe Island
A recent study by Villanueva et al illustrates this point. They analysed genetic variation in an isolated population on Robinson Crusoe Island, the only inhabited island in the Juan Fernandez Archipelago, 677 km to the west of Chile. At the time of the study there were 633 inhabitants, most of whom were descended from a small number of founder individuals. This population is of particular interest to geneticists because it has an unusually high rate of specific language impairment. A genome-wide analysis failed to identify any single major gene that distinguished affected from unaffected individuals. However, there was a small region of chromosome 7 where the genetic structure differed significantly between affected and unaffected cases, and which contained genetic variants that had previously been linked to language impairments in other samples. One of these, TPK1, is involved in catalysing the conversion of thiamine to thiamine pyrophosphate. It must be stressed that the association between a thiamine-related genetic variant and language impairment is probabilistic and weak, and far more research will be needed to establish whether it generalises beyond the rare population studied by Villanueva and colleagues. But this observation points the way to a potential mechanism by which a genetic variant could influence language development.
To sum up: the importance of the study by Fattal and colleagues is two-fold. First, it emphasises the extent to which there can be adverse longer-term consequences of thiamine deficiency in children who may not have obvious symptoms, an observation which may assume importance in cultures where there is inadequate nutrition in breast-feeding mothers. Second, it highlights a role of thiamine in early neurodevelopment, which may prove an important clue to neuroscientists and geneticists investigating risks for language impairment.

References
Fattal, I., Friedmann, N., & Fattal-Valevski, A. (2011). The crucial role of thiamine in the development of syntax and lexical retrieval: a study of infantile thiamine deficiency. Brain, 134(Pt 6), 1720-1739. PMID: 21558277

Villanueva, P., Newbury, D. F., Jara, L., De Barbieri, Z., Mirza, G., Palomino, H. M., Fernández, M. A., Cazier, J. B., Monaco, A. P., & Palomino, H. (2011). Genome-wide analysis of genetic susceptibility to language impairment in an isolated Chilean population. European Journal of Human Genetics, 19(6), 687-695. PMID: 21248734

Monday 31 October 2011

A message to the world

from a teenager with language difficulties

Wednesday 26 October 2011

Accentuate the negative

Suppose you run a study to compare two groups of children: say a dyslexic group and a control group. Your favourite theory predicts a difference in auditory perception, but you find no difference between the groups. What to do? You may feel a further study is needed: perhaps there were floor or ceiling effects that masked true differences. Maybe you need more participants to detect a small effect. But what if you can’t find flaws in the study and decide to publish the result? You’re likely to hit problems. Quite simply, null results are much harder to publish than positive findings. In effect, you are telling the world “Here’s an interesting theory that could explain dyslexia, but it’s wrong.” It’s not exactly an inspirational message, unless the theory is so prominent and well-accepted that the null finding is surprising. And if that is the case, then it’s unlikely that your single study is going to be convincing enough to topple the status quo. It has been recognised for years that this “file drawer problem” leads to distortion of the research literature, creating an impression that positive results are far more robust than they really are (Rosenthal, 1979).
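As an aside on the “more participants” point: group differences in this literature are often modest, and the numbers needed to detect them reliably are sobering. Here is a minimal sketch of the calculation in Python, assuming a simple two-group comparison and a purely illustrative effect size of d = 0.3 (statsmodels is just one convenient way to do it).

```python
# A rough power calculation for a two-group comparison, assuming a
# 'small-to-medium' standardised effect (d = 0.3); the effect size is
# an illustrative assumption, not a value from any particular study.
from statsmodels.stats.power import TTestIndPower

n_per_group = TTestIndPower().solve_power(effect_size=0.3, alpha=0.05, power=0.8)
print(f"participants needed per group: {n_per_group:.0f}")   # roughly 175
```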
The medical profession has become aware of the issue and it’s now becoming common practice for clinical trials to be registered before a study commences, and for journals to undertake to publish the results of methodologically strong studies regardless of outcome. In the past couple of years, two early-intervention studies with null results have been published, on autism (Green et al, 2010) and late talkers (Wake et al, 2011). Neither study creates a feel-good sensation: it’s disappointing that so much effort and good intentions failed to make a difference. But it’s important to know that, to avoid raising false hopes and wasting scarce resources on things that aren’t effective. Yet it’s unlikely that either study would have found space in a high-impact journal in the days before trial registration.
Registration can also exert an important influence in cases where conflicts of interest or other factors make researchers reluctant to publish null results. For instance, in 2007, Cyhlarova et al published a study relating membrane fatty acid levels to dyslexia in adults. This research group has a particular interest in fatty acids and neurodevelopmental disabilities, and the senior author has written a book on the topic. The researchers argued that the balance of omega-3 and omega-6 fatty acids differed between dyslexics and non-dyslexics, and concluded: “To gain a more precise understanding of the effects of omega-3 HUFA treatment, the results of this study need to be confirmed by blood biochemical analysis before and after supplementation”. They further stated that a randomised controlled trial was under way. Yet four years later no results have been published, and requests for information about the findings are met with silence. If the trial had been registered, the authors would have been required to report the results, or to explain why they could not do so.
Advance registration of research is not a feasible option for most areas of psychology, so what steps can we take to reduce publication bias? Many years ago a wise journal editor told me that publication decisions should be based on evaluation of just the Introduction and Methods sections of a paper: if an interesting hypothesis had been identified, and the methods were appropriate to test it, then the paper should be published, regardless of the results.
People often respond to this idea by saying that it would just mean the literature would be full of boring stuff. But remember, I'm not suggesting that any old rubbish should get published: a good case for doing the study has to be made in the Introduction, and the Methods have to be strong. Also, some kinds of boring results are important: minimally, publication of a null result may save some hapless graduate student from spending three years trying to demonstrate an effect that isn’t there. Estimates of effect sizes in meta-analyses are compromised if only positive findings get reported (the sketch below illustrates how large the resulting distortion can be). More seriously, if we are talking about research with clinical implications, then over-estimation of effects can lead to inappropriate interventions being adopted.
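To make the distortion concrete, here is a toy simulation of the file drawer problem: many small studies of a modest true effect are run, but only the “significant” ones reach the journals. The specific numbers (a true effect of d = 0.2, 30 participants per group, 5000 studies) are illustrative assumptions, not estimates from any real literature.

```python
# Toy simulation of the file drawer problem: only studies reaching p < .05
# get 'published', and the average published effect size is then compared
# with the true effect. All parameter values are illustrative assumptions.
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(1)
true_d, n = 0.2, 30                        # modest true effect, 30 per group
published = []
for _ in range(5000):                      # 5000 hypothetical studies
    grp1 = rng.normal(true_d, 1, n)        # 'experimental' group
    grp2 = rng.normal(0.0, 1, n)           # control group
    t, p = ttest_ind(grp1, grp2)
    d_hat = (grp1.mean() - grp2.mean()) / np.sqrt((grp1.var(ddof=1) + grp2.var(ddof=1)) / 2)
    if p < .05:                            # only 'positive' results get written up
        published.append(d_hat)

print(f"true effect size: {true_d}")
print(f"mean effect size in the 'published' studies: {np.mean(published):.2f}")
```

In runs of this kind the “published” average typically comes out at two to three times the true value: a meta-analysis of that selected literature would be badly misled.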
Things are slowly changing and it’s getting easier to publish null results. The advent of electronic journals has made a big difference because there is no longer such pressure on page space. The electronic journal PLOS One adopts a publication policy that is pretty close to that proposed by the wise editor: they state they will publish all papers that are technically sound. So my advice to those of you who have null data from well-designed experiments languishing in that file drawer: get your findings out there in the public domain.

References

Cyhlarova, E., Bell, J., Dick, J., MacKinlay, E., Stein, J., & Richardson, A. (2007). Membrane fatty acids, reading and spelling in dyslexic and non-dyslexic adults. European Neuropsychopharmacology, 17(2), 116-121. DOI: 10.1016/j.euroneuro.2006.07.003

Green, J., Charman, T., McConachie, H., Aldred, C., Slonims, V., Howlin, P., Le Couteur, A., Leadbitter, K., Hudry, K., Byford, S., Barrett, B., Temple, K., Macdonald, W., & Pickles, A. (2010). Parent-mediated communication-focused treatment in children with autism (PACT): a randomised controlled trial. The Lancet, 375(9732), 2152-2160. DOI: 10.1016/S0140-6736(10)60587-9

Rosenthal, R. (1979). The file drawer problem and tolerance for null results. Psychological Bulletin, 86(3), 638-641. DOI: 10.1037/0033-2909.86.3.638

Wake, M., Tobin, S., Girolametto, L., Ukoumunne, O. C., Gold, L., Levickis, P., Sheehan, J., Goldfeld, S., & Reilly, S. (2011). Outcomes of population based language promotion for slow to talk toddlers at ages 2 and 3 years: Let's Learn Language cluster randomised controlled trial. BMJ, 343. PMID: 21852344