Monday, 8 December 2014

Why evaluating scientists by grant income is stupid


As Fergus Millar noted in a letter to the Times last year, “in the modern British university, it is not that funding is sought in order to carry out research, but that research projects are formulated in order to get funding”.
This topsy-turvy logic has become evident in some universities, with blatant demands for staff in science subjects to match a specified quota of grant income or face redundancy. David Colquhoun’s blog is a gold-mine of information about the universities that have adopted such policies. He notes that if you are a senior figure based in the Institute of Psychiatry in London, or the medical school at Imperial College London, you are expected to bring in an average of at least £200K of grant income per annum. Warwick Medical School has a rather less ambitious threshold of £90K per annum for principal investigators and £150K per annum for co-investigators1.
So what’s wrong with that? It might be argued that in times of financial stringency, universities may need to cut staff to meet their costs, and this criterion is at least objective. The problem is that it is stupid. It damages the wellbeing of staff, the reputation of the university, and the advancement of science.
Effect on staff 
The argument about wellbeing of staff is a no-brainer, and one might have expected that those in medical schools would be particularly sensitive to the impact of job insecurity on the mental and physical health of those they employ. Sadly, those who run these institutions seem blithely unconcerned about this and instead impress upon researchers that their skills are valued only if they translate into money. This kind of stress does not only impact on those who are destined to be handed their P45 but also on those around them. Even if you’re not worried about your own job, it is hard to be cheerfully productive when surrounded by colleagues in states of high distress. I’ve argued previously that universities should be evaluated on staff satisfaction as well as student satisfaction: this is not just about the ethics of proper treatment of one’s fellow human beings; it is also common sense that if you want highly skilled people to do a good job, you need to make them feel valued and provide them with a secure working environment.
Effect on the University
The focus on research income seems driven by two considerations: a desire to bring in money, and to achieve status by being seen to bring in money. But how logical is this? Many people seem to perceive a large grant as some kind of ‘prize’, a perception reinforced by the tendency of the Times Higher Education and others to refer to ‘grant-winners’. Yet funders do not give large grants as gestures of approval: the money is not some kind of windfall. With the rare exception of infrastructure grants, the money is given to cover the cost of doing research. Even now that we have Full Economic Costing (FEC) attached to research council grants, it covers no more than 80% of the costs to universities of hosting the research. Undoubtedly, the money accrued through FEC gives institutions leeway to develop infrastructure and other beneficial resources, but it is not a freebie, and big grants cost money to implement.
So we come to the effect of research funding on a University’s reputation. I assume this is a major driver behind the policies of places like Warwick, given that it is one component of the league tables that are so popular in today’s competitive culture. But, as some institutions learn to their cost, a high ranking in such tables may count for naught if a reputation for cavalier treatment of staff makes it difficult to recruit and retain the best people.
Effect on science
The last point concerns the corrosive effect on science if the incentive structure encourages people to apply for numerous large grants. It sidelines people who want to do careful, thoughtful research in favour of those who take on more than they can cope with. There is already a great deal of waste in science, with many researchers having a backlog of unpublished work which they don’t have time to write up because they are busy writing the next grant. Four years ago I argued that we should focus on what people do with research funding rather than how much they have. On this basis, someone who achieved a great deal with modest funding would be valued more highly than someone who failed to publish many of the results from a large grant. I cannot express it better than John Ioannidis, who in a recent paper put forward a number of suggestions for improving the reproducibility of research. This was his suggested modification to our system of research incentives:
“….obtaining grants, awards, or other powers are considered negatively unless one delivers more good-quality science in proportion. Resources and power are seen as opportunities, and researchers need to match their output to the opportunities that they have been offered—the more opportunities, the more the expected (replicated and, hopefully, even translated) output. Academic ranks have no value in this model and may even be eliminated: researchers simply have to maintain a non-negative balance of output versus opportunities.”
1If his web-entry is to be believed, then Warwick’s Dean of Medicine, Professor Peter Winstanley, falls a long way from this threshold, having brought in only £75K of grant income over a period of 7 years. He won’t be made redundant though, as those with administrative responsibilities are protected.

Ioannidis, J. (2014). How to make more published research true. PLoS Medicine, 11(10). DOI: 10.1371/journal.pmed.1001747

Friday, 28 November 2014

Metricophobia among academics

Most academics loathe metrics. I’ve seldom attracted so much criticism as for my suggestion that a citation-based metric might be used to allocate funding to university departments. This suggestion was recycled this week in the Times Higher Education, after a group of researchers published predictions of REF2014 results based on departmental H-indices for four subjects.

Twitter was appalled. Philip Moriarty, in a much-retweeted plea said: “Ugh. *Please* stop giving credence to simplistic metrics like the h-index. V. damaging”. David Colquhoun, with whom I agree on many things, responded like an exorcist confronted with the spawn of the devil, arguing that any use of metrics would just encourage universities to pressurise staff to increase their H-indices.

Now, as I’ve explained before, I don’t particularly like metrics. In fact, my latest proposal is to drop both REF and metrics and simply award funding on the basis of the number of research-active people in a department. But I’ve become intrigued by the loathing of metrics that is revealed whenever a metrics-based system is suggested, particularly since some of the arguments put forward do seem rather illogical.

Odd idea #1 is that doing a study relating metrics to funding outcomes is ‘giving credence’ to metrics. It’s not. What would give credence would be if the prediction of REF outcomes from H-index turned out to be very good. We already know that whereas it seems to give reasonable predictions for sciences, it’s much less accurate for humanities. It will be interesting to see how things turn out for the REF, but it’s an empirical question.

Odd idea #2 is that use of metrics will lead to gaming. Of course it will! Gaming will be a problem for any method of allocating money. The answer to gaming, though, is to be aware of how this might be achieved and to block obvious strategies, not to dismiss any system that could potentially be gamed. I suspect the H-index is less easy to game than many other metrics - though I’m aware of one remarkable case where a journal editor has garnered an impressive H-index from papers published in his own journals, with numerous citations to his own work. In general, though, those of us without editorial control are more likely to get a high H-index from publishing smaller amounts of high-quality science than churning out pot-boilers.
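For readers unfamiliar with the metric under discussion: the H-index is simply the largest number h such that h of one's papers have at least h citations each. A minimal sketch (my own illustration, not part of the original posts), with invented citation counts:

```python
def h_index(citations):
    """Return the largest h such that h papers have at least h citations each."""
    ranked = sorted(citations, reverse=True)  # most-cited paper first
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:
            h = rank  # this paper still has at least 'rank' citations
        else:
            break
    return h

# Invented example: five papers with these citation counts.
# Four papers have at least 4 citations each, so h = 4.
print(h_index([10, 8, 5, 4, 3]))
```

This also shows why the index is relatively hard to inflate: adding many little-cited papers leaves h unchanged, since only papers cited at least h+1 times can raise it.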

Odd idea #3 is the assumption that the REF’s system of peer review is preferable to a metric. At the HEFCE metrics meeting I attended last month, almost everyone was in favour of complex, qualitative methods of assessing research. David Colquhoun argued passionately that to evaluate research you need to read the publications. To disagree with that would be like slamming motherhood and apple pie. But, as Derek Sayer has pointed out, it is inevitable that the ‘peer review’ component of the REF will be flawed, given that panel members are required to evaluate several hundred submissions in a matter of weeks. The workload is immense and cannot involve the careful consideration of the content of books or journal articles, many of which will be outside the reader’s area of expertise.

My argument is a pragmatic one: we are currently engaged in a complex evaluation exercise that is enormously expensive in time and money, that has distorted incentives in academia, and that cannot be regarded as a ‘gold standard’. So, as an empirical scientist, my view is that we should be looking hard at other options, to see whether we might be able to achieve similar results in a more cost-effective way.

Different methods can be compared in terms of the final result, and also in terms of unintended consequences. For instance, in its current manifestation, the REF encourages universities to take on research staff shortly before the deadline – as satirised by Laurie Taylor (see Appointments section of this article). In contrast, if departments were rewarded for a high H-index, there would be no incentive for such behaviour. Also, staff members who were not principal investigators but who made valuable contributions to research would be appreciated, rather than threatened with redundancy.  Use of an H-index would also avoid the invidious process of selecting staff for inclusion in the REF.

I suspect, anyhow, we will find predictions from the H-index are less good for REF than for RAE. One difficulty for Mryglod et al is that it is not clear whether the Units of Assessment they base their predictions on will correspond to those used in REF. Furthermore, in REF, a substantial proportion of the overall score comes from impact, evaluated on the basis of case studies. To quote from the REF2014 website: “Case studies may include any social, economic or cultural impact or benefit beyond academia that has taken place during the assessment period, and was underpinned by excellent research produced by the submitting institution within a given timeframe.” My impression is that impact was included precisely to capture an aspect of academic quality that was orthogonal to traditional citation-based metrics, and so this should weaken any correlation of outcomes with H-index.

Be this as it may, I’m intrigued by people’s reactions to the H-index suggestion, and wondering whether this relates to the subject one works in. For those in arts and humanities, it is particularly self-evident that we cannot capture all the nuances of departmental quality from an H-index – and indeed, it is already clear that correlations between H-index and RAE outcomes are relatively low in these disciplines. These academics work in fields where complex, qualitative analysis is essential. Interestingly, RAE outcomes in arts and humanities (as with other subjects) are pretty well predicted by departmental size, and it could be argued that this would be the most effective way of allocating funds.

Those who work in the hard sciences, on the other hand, take precision of measurement very seriously. Physicists, chemists and biologists are often working with phenomena that can be measured precisely and unambiguously. Their dislike for an H-index might, therefore, stem from awareness of its inherent flaws: it varies with subject area and can be influenced by odd things, such as high citations arising from notoriety.

Psychologists, though, sit between these extremes. The phenomena we work with are complex. Many of us strive to treat them quantitatively, but we are used to dealing with measurements that are imperfect but ‘good enough’. To take an example from my own research: years ago I wanted to measure the severity of children’s language problems, and I was using an elicitation task, where the child was shown pictures and asked to say what was happening. The test had a straightforward scoring system that gave indices of the maturity of the content and grammar of the responses. Various people, however, criticised this as too simple. I should take a spontaneous language sample, I was told, and do a full grammatical analysis. So, being young and impressionable, I did. I ended up spending hours transcribing tape-recordings from largely silent children, and hours more mapping their utterances onto a complex grammatical chart. The outcome: I got virtually the same result from the two processes – one which took ten minutes and the other which took two days.

Psychologists evaluate their measures in terms of how reliable (repeatable) they are and how validly they do what they are supposed to do. My approach to the REF is the same as my approach to the rest of my work: try to work with measures that are detailed and complex enough to be valid for their intended purpose, but no more so. To work out whether a measure fits that bill, we need to do empirical studies comparing different approaches – not just rely on our gut reaction.

Wednesday, 26 November 2014

Bishopblog catalogue (updated 26th November 2014)


Those of you who follow this blog may have noticed a lack of thematic coherence. I write about whatever is exercising my mind at the time, which can range from technical aspects of statistics to the design of bathroom taps. I decided it might be helpful to introduce a bit of order into this chaotic melange, so here is a catalogue of posts by topic.

Language impairment, dyslexia and related disorders
The common childhood disorders that have been left out in the cold (1 Dec 2010) What's in a name? (18 Dec 2010) Neuroprognosis in dyslexia (22 Dec 2010) Where commercial and clinical interests collide: Auditory processing disorder (6 Mar 2011) Auditory processing disorder (30 Mar 2011) Special educational needs: will they be met by the Green paper proposals? (9 Apr 2011) Is poor parenting really to blame for children's school problems? (3 Jun 2011) Early intervention: what's not to like? (1 Sep 2011) Lies, damned lies and spin (15 Oct 2011) A message to the world (31 Oct 2011) Vitamins, genes and language (13 Nov 2011) Neuroscientific interventions for dyslexia: red flags (24 Feb 2012) Phonics screening: sense and sensibility (3 Apr 2012) What Chomsky doesn't get about child language (3 Sept 2012) Data from the phonics screen (1 Oct 2012) Auditory processing disorder: schisms and skirmishes (27 Oct 2012) High-impact journals (Action video games and dyslexia: critique) (10 Mar 2013) Overhyped genetic findings: the case of dyslexia (16 Jun 2013) The arcuate fasciculus and word learning (11 Aug 2013) Changing children's brains (17 Aug 2013) Raising awareness of language learning impairments (26 Sep 2013) Good and bad news on the phonics screen (5 Oct 2013) What is educational neuroscience? (25 Jan 2014) Parent talk and child language (17 Feb 2014) My thoughts on the dyslexia debate (20 Mar 2014) Labels for unexplained language difficulties in children (23 Aug 2014) International reading comparisons: Is England really doing so poorly? (14 Sep 2014)

Autism
Autism diagnosis in cultural context (16 May 2011) Are our ‘gold standard’ autism diagnostic instruments fit for purpose? (30 May 2011) How common is autism? (7 Jun 2011) Autism and hypersystematising parents (21 Jun 2011) An open letter to Baroness Susan Greenfield (4 Aug 2011) Susan Greenfield and autistic spectrum disorder: was she misrepresented? (12 Aug 2011) Psychoanalytic treatment for autism: Interviews with French analysts (23 Jan 2012) The ‘autism epidemic’ and diagnostic substitution (4 Jun 2012) How wishful thinking is damaging Peta's cause (9 June 2014)

Developmental disorders/paediatrics
The hidden cost of neglected tropical diseases (25 Nov 2010) The National Children's Study: a view from across the pond (25 Jun 2011) The kids are all right in daycare (14 Sep 2011) Moderate drinking in pregnancy: toxic or benign? (21 Nov 2012) Changing the landscape of psychiatric research (11 May 2014)

Genetics
Where does the myth of a gene for things like intelligence come from? (9 Sep 2010) Genes for optimism, dyslexia and obesity and other mythical beasts (10 Sep 2010) The X and Y of sex differences (11 May 2011) Review of How Genes Influence Behaviour (5 Jun 2011) Getting genetic effect sizes in perspective (20 Apr 2012) Moderate drinking in pregnancy: toxic or benign? (21 Nov 2012) Genes, brains and lateralisation (22 Dec 2012) Genetic variation and neuroimaging (11 Jan 2013) Have we become slower and dumber? (15 May 2013) Overhyped genetic findings: the case of dyslexia (16 Jun 2013)

Neuroscience
Neuroprognosis in dyslexia (22 Dec 2010) Brain scans show that… (11 Jun 2011) Time for neuroimaging (and PNAS) to clean up its act (5 Mar 2012) Neuronal migration in language learning impairments (2 May 2012) Sharing of MRI datasets (6 May 2012) Genetic variation and neuroimaging (11 Jan 2013) The arcuate fasciculus and word learning (11 Aug 2013) Changing children's brains (17 Aug 2013) What is educational neuroscience? (25 Jan 2014) Changing the landscape of psychiatric research (11 May 2014)

Statistics
Book review: biography of Richard Doll (5 Jun 2010) Book review: the Invisible Gorilla (30 Jun 2010) The difference between p < .05 and a screening test (23 Jul 2010) Three ways to improve cognitive test scores without intervention (14 Aug 2010) A short nerdy post about the use of percentiles (13 Apr 2011) The joys of inventing data (5 Oct 2011) Getting genetic effect sizes in perspective (20 Apr 2012) Causal models of developmental disorders: the perils of correlational data (24 Jun 2012) Data from the phonics screen (1 Oct 2012) Moderate drinking in pregnancy: toxic or benign? (21 Nov 2012) Flaky chocolate and the New England Journal of Medicine (13 Nov 2012) Interpreting unexpected significant results (7 June 2013) Data analysis: Ten tips I wish I'd known earlier (18 Apr 2014) Data sharing: exciting but scary (26 May 2014) Percentages, quasi-statistics and bad arguments (21 July 2014)

Journalism/science communication
Orwellian prize for scientific misrepresentation (1 Jun 2010) Journalists and the 'scientific breakthrough' (13 Jun 2010) Science journal editors: a taxonomy (28 Sep 2010) Orwellian prize for journalistic misrepresentation: an update (29 Jan 2011) Academic publishing: why isn't psychology like physics? (26 Feb 2011) Scientific communication: the Comment option (25 May 2011) Accentuate the negative (26 Oct 2011) Publishers, psychological tests and greed (30 Dec 2011) Time for academics to withdraw free labour (7 Jan 2012) Novelty, interest and replicability (19 Jan 2012) 2011 Orwellian Prize for Journalistic Misrepresentation (29 Jan 2012) Time for neuroimaging (and PNAS) to clean up its act (5 Mar 2012) Communicating science in the age of the internet (13 Jul 2012) How to bury your academic writing (26 Aug 2012) High-impact journals: where newsworthiness trumps methodology (10 Mar 2013) Blogging as post-publication peer review (21 Mar 2013) A short rant about numbered journal references (5 Apr 2013) Schizophrenia and child abuse in the media (26 May 2013) Why we need pre-registration (6 Jul 2013) On the need for responsible reporting of research (10 Oct 2013) A New Year's letter to academic publishers (4 Jan 2014)

Social Media
A gentle introduction to Twitter for the apprehensive academic (14 Jun 2011) Your Twitter Profile: The Importance of Not Being Earnest (19 Nov 2011) Will I still be tweeting in 2013? (2 Jan 2012) Blogging in the service of science (10 Mar 2012) Blogging as post-publication peer review (21 Mar 2013) The impact of blogging on reputation ( 27 Dec 2013) WeSpeechies: A meeting point on Twitter (12 Apr 2014)

Academic life
An exciting day in the life of a scientist (24 Jun 2010) How our current reward structures have distorted and damaged science (6 Aug 2010) The challenge for science: speech by Colin Blakemore (14 Oct 2010) When ethics regulations have unethical consequences (14 Dec 2010) A day working from home (23 Dec 2010) Should we ration research grant applications? (8 Jan 2011) The one hour lecture (11 Mar 2011) The expansion of research regulators (20 Mar 2011) Should we ever fight lies with lies? (19 Jun 2011) How to survive in psychological research (13 Jul 2011) So you want to be a research assistant? (25 Aug 2011) NHS research ethics procedures: a modern-day Circumlocution Office (18 Dec 2011) The REF: a monster that sucks time and money from academic institutions (20 Mar 2012) The ultimate email auto-response (12 Apr 2012) Well, this should be easy…. (21 May 2012) Journal impact factors and REF2014 (19 Jan 2013) An alternative to REF2014 (26 Jan 2013) Postgraduate education: time for a rethink (9 Feb 2013) High-impact journals: where newsworthiness trumps methodology (10 Mar 2013) Ten things that can sink a grant proposal (19 Mar 2013) Blogging as post-publication peer review (21 Mar 2013) The academic backlog (9 May 2013) Research fraud: More scrutiny by administrators is not the answer (17 Jun 2013) Discussion meeting vs conference: in praise of slower science (21 Jun 2013) Why we need pre-registration (6 Jul 2013) Evaluate, evaluate, evaluate (12 Sep 2013) High time to revise the PhD thesis format (9 Oct 2013) The Matthew effect and REF2014 (15 Oct 2013) Pressures against cumulative research (9 Jan 2014) Why does so much research go unpublished? (12 Jan 2014) The University as big business: the case of King's College London (18 June 2014) Should vice-chancellors earn more than the prime minister? (12 July 2014) Replication and reputation: Whose career matters? (29 Aug 2014) Some thoughts on use of metrics in university research assessment (12 Oct 2014) Tuition fees must be high on the agenda before the next election (22 Oct 2014) Blaming universities for our nation's woes (24 Oct 2014) Staff satisfaction is as important as student satisfaction (13 Nov 2014)

Celebrity scientists/quackery
Three ways to improve cognitive test scores without intervention (14 Aug 2010) What does it take to become a Fellow of the RSM? (24 Jul 2011) An open letter to Baroness Susan Greenfield (4 Aug 2011) Susan Greenfield and autistic spectrum disorder: was she misrepresented? (12 Aug 2011) How to become a celebrity scientific expert (12 Sep 2011) The kids are all right in daycare (14 Sep 2011)  The weird world of US ethics regulation (25 Nov 2011) Pioneering treatment or quackery? How to decide (4 Dec 2011) Psychoanalytic treatment for autism: Interviews with French analysts (23 Jan 2012) Neuroscientific interventions for dyslexia: red flags (24 Feb 2012) Why most scientists don't take Susan Greenfield seriously (26 Sept 2014)

Women
Academic mobbing in cyberspace (30 May 2010) What works for women: some useful links (12 Jan 2011) The burqua ban: what's a liberal response (21 Apr 2011) C'mon sisters! Speak out! (28 Mar 2012) Psychology: where are all the men? (5 Nov 2012) Men! what you can do to improve the lot of women (25 Feb 2014) Should Rennard be reinstated? (1 June 2014)

Politics and Religion
Lies, damned lies and spin (15 Oct 2011) A letter to Nick Clegg from an ex liberal democrat (11 Mar 2012) BBC's 'extensive coverage' of the NHS bill (9 Apr 2012) Schoolgirls' health put at risk by Catholic view on vaccination (30 Jun 2012) A letter to Boris Johnson (30 Nov 2013) How the government spins a crisis (floods) (1 Jan 2014)

Humour and miscellaneous
Orwellian prize for scientific misrepresentation (1 Jun 2010) An exciting day in the life of a scientist (24 Jun 2010) Science journal editors: a taxonomy (28 Sep 2010) Parasites, pangolins and peer review (26 Nov 2010) A day working from home (23 Dec 2010) The one hour lecture (11 Mar 2011) The expansion of research regulators (20 Mar 2011) Scientific communication: the Comment option (25 May 2011) How to survive in psychological research (13 Jul 2011) Your Twitter Profile: The Importance of Not Being Earnest (19 Nov 2011) 2011 Orwellian Prize for Journalistic Misrepresentation (29 Jan 2012) The ultimate email auto-response (12 Apr 2012) Well, this should be easy…. (21 May 2012) The bewildering bathroom challenge (19 Jul 2012) Are Starbucks hiding their profits on the planet Vulcan? (15 Nov 2012) Forget the Tower of Hanoi (11 Apr 2013) How do you communicate with a communications company? (30 Mar 2014) Noah: A film review from 32,000 ft (28 July 2014)

Friday, 24 October 2014

Blaming universities for our nation's woes

In black below is the text of “Must Do Better”, a comment piece on UK higher education in the Times Higher Education by Jamie Martin, former special adviser to Michael Gove. In red are my thoughts on his arguments.

In an increasingly testing global race, Britain’s competitive advantage must be built on education.
What is this ‘increasingly testing global race’? Why should education be seen as part of an international competition rather than a benefit to all humankind?
Times Higher Education’s World University Rankings show that we have three of the world’s top 10 universities to augment our fast-improving schools. Sustaining a competitive edge, however, requires constant improvement and innovation. We must ask hard questions about our universities’ failures on academic rigour and widening participation, and recognise the need for reform.
Well, this seems a rather confused message. On the one hand, we are doing very well, but on the other hand we urgently need to reform.
Too many higher education courses are of poor quality. When in government, as special adviser to Michael Gove, I was shown an analysis indicating that around half of student loans will never be repaid. Paul Kirby, former head of the Number 10 Policy Unit, has argued that universities and government are engaging in sub-prime lending, encouraging students to borrow about £40,000 for a degree that will not return that investment. We lend money to all degree students on equal terms, but employers don’t perceive all university courses as equal. Taxpayers, the majority of whom have not been to university, pick up the tab when this cruel lie is exposed.
So let’s get this right. The government introduced a massive hike in tuition fees (£1,000 per annum in 1998, £3,000 p.a. in 2004, £9,000 p.a. in 2010). The idea was that people would pay for these with loans which they would pay off when they were earning above a threshold. It didn’t work because many people didn’t get high-paying jobs and now it is estimated that 45% of loans won’t be repaid.
Whose fault is this? The universities! You might think the inability of people to pay back loans is a consequence of lack of jobs due to recession, but, no, the students would all be employable if only they had been taught different things!  
With the number of firsts doubling in a decade, we need an honest debate about grade inflation and the culture of low lecture attendance and light workloads it supports. Even after the introduction of tuition fees, the Higher Education Policy Institute found that contact time averaged 14 hours a week and degrees that were “more like a part-time than a full-time job”. Unsurprisingly, many courses have tiny or even negative earnings premiums and around half of recent graduates are in non-graduate jobs five years after leaving.
An honest debate would be good. One that took into account the conclusions of this report by ONS which states: “Since the 2008/09 recession, unemployment rates have risen for all groups but the sharpest rise was experienced by non-graduates aged 21 to 30.”  This report does indeed note the 47% of recent graduates in non-graduate jobs, but points out two factors that could contribute to the trend: the increased number of graduates and decreased demand for graduate skills. There is no evidence that employers are preferring non-graduates to graduates for skilled jobs: rather there is a mismatch between the number of graduates and the number of skilled jobs.
This is partly because the system lacks diversity. Too many providers are weak imitations of the ancient universities. We have nothing to rival the brilliant polytechnics I saw in Finland, while the development of massive online open courses has been limited. The exciting New College of the Humanities, a private institution with world-class faculty, is not eligible for student loans. More universities should focus on a distinctive offer, such as cheaper shorter degrees or high-quality vocational courses.
What an intriguing wish-list: Finnish polytechnics, MOOCs, and the New College of the Humanities, which charges an eye-watering £17,640 for full-time undergraduates in 2014-15.  The latter might be seen as ‘exciting’ if you are interested in the privatisation of the higher education sector, but for those of us interested in educating the UK population, it seems more of an irrelevance – likely to become a finishing school for the children of oligarchs, rather than a serious contender for educating our populace.
If the failures on quality frustrate the mind, those on widening participation perturb the heart. Each year, the c.75,000 families on benefits send fewer students to Oxbridge than the c.100 families whose children attend Westminster School. Alan Milburn’s Social Mobility and Child Poverty Commission found that the most selective universities have actually become more socially exclusive over the past decade.
Flawed admissions processes reinforce this inequality. Evidence from the US shows that standardised test scores (the SAT), which are a strong predictor of university grades, have a relatively low correlation with socio-economic status. The high intelligence that makes you a great university student is not the sole preserve of the social elite. The AS modules favoured by university admissions officers have diluted A-level standards and are a poorer indicator of innate ability than standardised tests. Universities still prioritise performance in personal statements, Ucas forms and interviews, which correlate with helicopter parents, not with high IQ.
Criticise their record on widening access, and universities will blame the failures of the school system. Well, who walked on by while it was failing? Who failed to speak out enough about the grade inflation that especially hurt poorer pupils with no access to teachers who went beyond weakened exams? Until Mark Smith, vice-chancellor of Lancaster University, stepped forward, Gove’s decision to give universities control of A-level standards met with a muted response.
Ah, this is interesting. After a fulmination against social inequality in university admissions (well, at last a point I can agree on), Jamie Martin notes that there is an argument that blames this on failures in the school system. After all, if “The high intelligence that makes you a great university student is not the sole preserve of the social elite”, why aren’t intelligent children from working class backgrounds coming out of school with good A-levels? Why are parents abandoning the state school system? Martin seems to accept this is valid, but then goes on to argue that lower-SES students don’t get into university because everyone has good A-levels (grade inflation) – and that’s all the fault of universities for not ‘speaking out’. Is he really saying that if we had more discriminating A-levels, then the lower SES pupils would outperform private school pupils?
The first step in a prioritisation of education is to move universities into an enlarged Department for Education after the general election. The Secretary of State should immediately commission a genuinely independent review to determine which degrees are a sound investment or of strategic importance. Only these would be eligible for three-year student loans. Some shorter loans might encourage more efficient courses. Those who will brand this “philistinism” could not be more wrong: it is the traditional academic subjects that are valued by employers (philosophy at the University of Oxford is a better investment than many business courses). I am not arguing for fewer people to go to university. We need more students from poorer backgrounds taking the best degrees.
So, more reorganisation. And somehow, reducing the number of courses for which you can get a student loan is going to increase the number of students from poorer backgrounds who go to university. Just how this magic is to be achieved remains unstated.
Government should publish easy-to-use data showing Treasury forecasts on courses’ expected loan repayments, as well as quality factors such as dropout rates and contact time. It should be made much easier to start a new university or to remodel existing ones.
So here we come to the real agenda. Privatisation of higher education.
Politicians and the Privy Council should lose all control of higher education. Student choice should be the main determinant of which courses and institutions thrive.
Erm, but two paragraphs back we were told that student loans would only be available for those courses which were ‘a sound investment or of strategic importance’.
Universities should adopt standardised entrance tests. And just as private schools must demonstrate that they are worthy of their charitable status, universities whose students receive loans should have to show what action they are taking to improve state schools. The new King’s College London Maths School, and programmes such as the Access Project charity, are models to follow.
So it’s now the responsibility of universities, rather than the DfE to improve state schools?
The past decade has seen a renaissance in the state school system, because when tough questions were asked and political control reduced, brilliant teachers and heads stepped forward. It is now the turn of universities to make Britain the world’s leading education nation.
If there really has been a renaissance, the social gradient should fix itself, because parents will abandon expensive private education, and children will leave state schools with a raft of good qualifications, regardless of social background. If only….
With his ‘must do better’ arguments, Martin adopts a well-known strategy of those who wish to privatise public services: first starve them of funds, then heap on criticism to portray the sector as failing, so that a takeover by the free market appears to be the only solution. The NHS has been the focus of such a campaign, and it seems that attention is now shifting to higher education. But here Martin has a bit of a problem. As indicated in his second sentence, we are actually doing surprisingly well, with our publicly funded universities competing favourably with the wealthy private universities in the USA.

PS. For my further thoughts on tuition fees in UK universities, see here.