Sunday, 17 July 2016

Cost-benefit analysis of the Teaching Excellence Framework

The government’s new Higher Education and Research Bill gets its second reading this week. One complaint is that it has been rushed in without adequate scrutiny of some key components. I was interested, therefore, to discover that a Detailed Impact Assessment was published in June, specifically to look at the costs and benefits of the various components of the Bill. What I found was quite shocking: we were being told that the financial benefits of the new Teaching Excellence Framework (TEF) vastly outweighed its costs – yet look in detail and this is all smoke and mirrors.

In particular, the report shows that while the costs of TEF to the higher education sector (confusingly described as ‘business’) are estimated at £20 million, the direct benefits will come to £1,146 million, giving a net benefit of £1,126 million (Table 1). How could the introduction of a new bureaucratic evaluation exercise be so remarkably beneficial? I read on with bated breath.

Well, sad to relate, it’s voodoo analysis. This becomes clear if you press on to Table 12, which shows the crucial data from statistical modelling. Quite simply, the TEF generates money for institutions that get a good rating because it allows them to increase fees in line with inflation. Institutions that don’t participate in the TEF, or those that fail to get a good enough rating, will not be able to exceed the current £9,000 per annum fee, and so in real terms their income will decline over time. As far as I can make out, they are not included in Table 1. Furthermore, the increases for the compliant, successful institutions are measured relative to how they would have done if they had not been allowed to raise fees.

So to sum up:
  • You don’t need the TEF to achieve this result. You could get the same outcome by just allowing all institutions to raise fees in line with inflation.
  • As noted in the briefing to the Bill by the House of Commons: “the Bill is expected to result in a net financial benefit to higher education providers of around £1.1billion a year. This is in very large part due to the higher fees that providers with successful TEF outcomes will be able to charge students.” (p. 59)
  • The system is designed for there to be winners and losers, and the losers will inevitably see their real income falling further and further behind the winners, unless inflation is zero.
The impact assessment does consider other options, including that of allowing fee increases in line with inflation provided the institution has a satisfactory Quality Assurance rating. This is rejected on the grounds that: “whilst QA is a good starting point, reliance on QA alone and in the longer-term will not enable significant differentiation of teaching quality to help inform student decisions and encourage institutions to improve their teaching quality.” (p. 37). This makes clear that one consequence (and, one suspects, one purpose) of TEF is to facilitate the division into institutional sheep and goats, followed by starvation of the goats.

Another option, which was strongly recommended by many of those who responded to the consultation exercise on the Green Paper which preceded the bill, is to remove the link between TEF and fees. In other words, have some kind of teaching evaluation, where the motivation for taking part would be reputational rather than financial. This too is rejected as not sufficiently powerful an incentive: “the Research Excellence Framework allocates £1.5bn a year to institutions. To achieve parity of esteem and focus between teaching and research the TEF will need to have a similar level of financial implications.” However, this is rather disingenuous. There is no pot of money on offer. We are used to a country in which government supports Higher Education; now, however, the only source of income to universities for teaching is student fees, and raising fees is unpopular. The funding of universities will collapse unless they can either find alternative sources of income or continue to raise fees in line with inflation, and the TEF provides a cover story for doing the latter.

So we have a system designed to separate winners and losers, but the outcome will depend crucially on two factors: the rate of inflation and the rate of growth in student numbers. The figures in the document have been modelled assuming that the number of students at English Higher Education Institutions will increase at a rate of around 2 per cent per annum (Table 12), and that annual inflation will be around 3 per cent. If either growth in numbers or inflation is lower, then the difference between those who do and don’t get good TEF ratings (and hence the apparent financial benefits of TEF) will decline.
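To make the arithmetic concrete, here is a minimal sketch of how the modelled 'benefit' arises purely from fee indexation rather than from anything the TEF itself does. The £9,000 fee cap, 3 per cent inflation and 2 per cent growth in student numbers are the figures quoted above; the notional institution of 10,000 students and the ten-year horizon are illustrative assumptions of mine, not figures from the impact assessment.

```python
# Minimal sketch: fee income of a notional institution with and without
# permission to raise fees in line with inflation. The fee cap, inflation and
# student growth rates are the figures quoted above; the institution size and
# horizon are illustrative assumptions, not taken from the impact assessment.

BASE_FEE = 9_000          # current fee cap, pounds per student per year
INFLATION = 0.03          # assumed annual inflation
STUDENT_GROWTH = 0.02     # assumed annual growth in student numbers
STUDENTS_YEAR_0 = 10_000  # illustrative institution size

def fee_income(year: int, indexed: bool) -> float:
    """Nominal fee income in a given year, with or without fee indexation."""
    students = STUDENTS_YEAR_0 * (1 + STUDENT_GROWTH) ** year
    fee = BASE_FEE * (1 + INFLATION) ** year if indexed else BASE_FEE
    return students * fee

for year in (1, 5, 10):
    gap = fee_income(year, indexed=True) - fee_income(year, indexed=False)
    print(f"Year {year:2d}: 'benefit' of indexation = £{gap:,.0f}")

# The modelled 'net benefit' is essentially this gap summed across the sector:
# it appears whenever inflation is above zero, TEF or no TEF, and it shrinks
# if either inflation or student growth is lower than assumed.
```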

What about the anticipated costs of the TEF? We are told: “Institutions collectively will experience average annual costs of £22m as a result of familiarising, signing up and applying to the Teaching Excellence Framework, once the TEF covers discipline level assessments. This is equivalent to an average of £53,000 per institution, significantly less than the Research Excellence Framework (REF) at £230,000 per institution per year.” (p. 8). One can only assume that those writing this report have little experience of how academic institutions operate. For instance, they say that “Year One will not represent any additional administrative cost to institutions, as we will use the existing QA process.” I did a quick internet search and immediately found two universities that were already advertising for administrators to work on preparing for the TEF (on salaries of around £30-40K), as well as a consultancy agency that was touting for custom by noting the importance of being “TEF-ready”.

I have yet to get on to the section on costs and benefits of opening the market to ‘alternative providers’…

If you are concerned at the threats to Higher Education posed by the Bill, please write to your MP - there is a website here that makes it very easy to do so.

Further background reading 
Shaky foundations of the TEF
A lamentable performance by Jo Johnson
More misrepresentation in the Green Paper
The Green Paper’s level playing field risks becoming a morass
NSS and teaching excellence: wrong measure, wrongly analysed
The Higher Education and Research Bill: What's changing?
CDBU's response to the Green Paper
The Alternative White Paper

Saturday, 11 June 2016

Editorial integrity: Publishers on the front line



Thanks to some live tweeting by Anna Sharman (@sharmanedit), I've become aware that the 13th Conference of the European Association of Science Editors (EASE) is taking place in Strasbourg this weekend.
The topic is "Scientific integrity: editors on the front line", and the programme acknowledges Elsevier, who presumably have contributed funding for the conference.
It therefore seems timely to give a brief update of developments following three blogposts I wrote during February-March 2015, documenting some peculiar editorial behaviour at four journals: Research in Autism Spectrum Disorders (RASD: Elsevier), Research in Developmental Disabilities (RIDD: Elsevier), Developmental Neurorehabilitation (DN: Informa Healthcare) and Journal of Developmental and Physical Disabilities (JDPD: Springer).
To do the story full justice, you need to read these blogposts, but in brief, blogpost 1 described how Johnny Matson, the then editor of both RASD and RIDD, had published numerous articles in his own journals, and engaged in frequent self-citation, leading to his receiving a 'highly cited' badge from Thomson Reuters. In the comments on that blogpost, another intriguing factor emerged: Matson's tendency to accept papers with little or no review. This was denied by Elsevier, despite clear evidence of very short acceptance lags that were incompatible with review.
Blogpost 2 was prompted by Matson defending himself against accusations of self-citation by pointing out that he published in journals that he did not edit. I checked this out and found he had numerous papers in two other journals, DN and JDPD, and that the median lag between a paper of his being submitted and accepted in DN was one day. (JDPD does not provide data on publication lags.) I therefore looked at the editors of those journals, and found that they themselves were publishing remarkable numbers of papers in RASD and RIDD, again with extremely short publication lags. A trio of editors and editorial board members (Jeff Sigafoos, Giulio Lancioni and Mark O'Reilly) co-authored no fewer than 140 papers in RASD and RIDD between 2010 and 2014, typically with acceptance times of less than 2 weeks. Some of the papers in RIDD were not even in the topic area of developmental disabilities, but covered neurological conditions acquired in adulthood.
In blogpost 3, I turned the focus on to the publisher of RASD and RIDD, Elsevier, to query why they had not done anything about such irregular editorial practices. I did a further analysis of publication lags in RIDD, showing that they had dropped precipitously between 2008 and 2012, and that there was a small band of authors whose prolific papers were published there at amazing speed. I provided all the statistical data to support my case, including interactive spreadsheets that made it easy to determine which editors and authors had been benefiting from the slack editorial standards at these journals.
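For anyone wanting to replicate this kind of screening, a minimal sketch of the lag analysis is below, assuming the journal data have been exported to a CSV file with one row per paper. The file name and column names ('received', 'accepted') are hypothetical placeholders; the spreadsheets deposited on the Open Science Framework may be laid out differently.

```python
# Minimal sketch of the publication-lag screening described above: flag papers
# whose receipt-to-acceptance lag looks too short to be compatible with review.
# File name and column names are hypothetical placeholders.
import csv
from datetime import date
from statistics import median

THRESHOLD_DAYS = 14  # lags under two weeks treated as incompatible with review

def lag_in_days(row: dict) -> int:
    """Days between receipt and acceptance, assuming ISO dates (e.g. 2011-03-07)."""
    return (date.fromisoformat(row["accepted"]) - date.fromisoformat(row["received"])).days

with open("ridd_papers.csv", newline="") as f:
    lags = [lag_in_days(row) for row in csv.DictReader(f)]

fast_tracked = [lag for lag in lags if lag < THRESHOLD_DAYS]
print(f"Median lag: {median(lags)} days")
print(f"{len(fast_tracked)} of {len(lags)} papers accepted within {THRESHOLD_DAYS} days of receipt")
```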
There was some interesting fall-out from all of this. The second blogpost drew fire from supporters of the editors I had "outed", accusing me of bad behaviour and threatening to complain to my university. Since everything I had said was backed by evidence, this did not concern me. I received heartfelt messages of support from people who were appalled that a particular approach to autism intervention had been promoted by this group of editors, who were in effect using their status to gain the veneer of scientific credibility for work that was not in fact peer-reviewed. I was also contacted by several academics telling me that everyone knew this had been going on for years, but nobody had done anything; this level of passivity was surprising given that many were angry that these authors had reaped benefits from their staggeringly high publication rate, while those outside the charmed circle were left behind. I was urged to go further and raise my concerns with the universities employing those who were capitalising on, or engaging in, lax editorial behaviour. I do, however, have an extremely demanding job, and I hoped that I had done enough by shining a light on dubious practices and providing the full datasets as evidence. However, I now wonder if I should have been more pro-active.
I wrote to the publishers of all four journals to express my concerns, and had my correspondence acknowledged. But then? Well, not a lot.
It's clear that Elsevier has taken some action. Indeed, my first blogpost was prompted by Michelle Dawson noting on Twitter that the editorial boards of RASD and RIDD had mysteriously disappeared from the online journals. She had previously noted Matson's pattern of mega-self-citation, and I had written directly to him some months previously, with a copy to the publisher, to express concern when I realised that I was listed as a member of the editorial board of RASD. Elsevier did not acknowledge my letter, but it is possible that the changes they had started making to the editorial boards were linked to my concerns.
The first direct response I had from Elsevier was some weeks after my final blogpost, when they explained that they were looking into the situation regarding unreviewed papers, but that this was a huge job and would take a long time. They were presumably disinclined to rely on the files that I had deposited on Open Science Framework, which show the identity and submission and acceptance data for every paper in RASD and RIDD.  They did appoint new editors and a small group of associate editors for both journals, all with good track records for integrity.
I have heard on the grapevine that they are now evaluating articles published in those journals that have been identified as not having undergone peer review; some of those approached to do these evaluations have mentioned this to me. It's rather unclear how this is going to work, given that, across the two journals, there are nearly 1000 papers where the available data indicate a lag from receipt to acceptance of under 2 weeks. I guess we should be glad that at least the publisher is taking some action, albeit at a snail's pace, but I am dubious as to whether there will be any retractions.
Meanwhile, Developmental Neurorehabilitation changed publisher around the time I was writing, and is now under the care of Taylor and Francis. I wrote to the publisher explaining my concerns and received a polite reply, but then heard no more. I note that the Editor in Chief is now Wendy Machalicek, who previously co-edited the journal with Russell Lang. Lang's doctoral advisor was Mark O'Reilly, editor of JDPD, and one of the prolific trio who featured in blogpost 2. Lang himself co-authored 24 papers in RASD and 13 in RIDD, and 35 of these 39 papers were accepted within 2 weeks of receipt. Machalicek has published 11 papers in RASD and 5 in RIDD, and 12 of these 16 papers were accepted within 2 weeks of receipt.  She also did her doctorate in O'Reilly's department, and several of her papers are co-authored with him. In an editorial last year, Lang and Machalicek announced changes to the journal, some of which seem to be prompted by a desire to make the reviewing process more rigorous under the new publisher. However, one change is of particular interest: the scope of the journal will be broadened to consider "developmental disability from a lifespan perspective; wherein, it is acknowledged that development occurs throughout a person's life and a range of injuries, diseases and other impairments can cause delayed or abnormal development at any stage of life." That will be good news for Giulio Lancioni, who was previously publishing papers on coma patients, amyotrophic lateral sclerosis, and Alzheimer's disease in RIDD. He and his collaborators – Jeff Sigafoos, Mark O'Reilly, as well as Russell Lang and Johnny Matson – are all current members of the editorial board of the journal.
It seems to be business as usual at the Springer title, Journal of Developmental and Physical Disabilities. Mark O'Reilly is still the editor, with Lang and Sigafoos as associate editors; Lancioni, Machalicek and Matson are all on the editorial board. Springer's willingness to turn a blind eye to editors playing the system becomes clear when one sees that a recent title, "Review Journal of Autism and Developmental Disorders", has as Editor-in-Chief no less a personage than Johnny Matson. And, surprise, surprise, the editorial board includes Lang, Sigafoos and Lancioni.
One of the overarching problems I uncovered when navigating my way around this situation was that there is no effective route for a whistleblower who has uncovered evidence of dubious behaviour by editors. Elsevier has developed a Publishing Ethics Resource Kit, but it is designed to help editors dealing with ethical issues that arise with authors and reviewers. The general advice if you encounter an ethical problem is to contact the editor. The Committee on Publication Ethics also issues guidance, but it is an advisory body with no powers. One would hope that publishers would act with integrity when a serious problem with an editor is revealed, but if my experience is anything to go by, they are extremely reluctant to act and will weave very large carpets to brush the problems under.


Sunday, 29 May 2016

Ten serendipitous findings in psychology

The Thatcher Illusion (see below)
I'm a great fan of pre-registration of studies. It is, to my mind, the most effective safeguard against p-hacking and publication bias, the twin scourges that have led to the literature being awash with false positive findings. When combined with a more formal process, as in Registered Reports, it also allows researchers to benefit from reviewer expertise before they do the study, and to take control of the publication timeline.

But one salient objection to pre-registration comes up time and time again: if we pre-register our studies it will destroy the creative side of doing science, and turn it instead into a dull, robotic, cheerless process. We will have to anticipate what we might find, and close our eyes to what the data tell us.

Now this is both silly and untrue. For a start, there's nobody stopping anyone from doing fairly unstructured exploration, which may be the only sensible approach when entering a completely new area. The main thing in that case is to just be clear that this is what it is, and not to start applying statistical tests to the findings. If a finding has emerged from observing the data, testing it with p-values is statistically illiterate.

Nor is there any prohibition on reporting unexpected findings that emerge in the course of a study. Suppose you do a study with a pre-registered hypothesis and analysis plan, which you adhere to. Meanwhile, a most exciting, unanticipated phenomenon is observed in your experiment. If you are going down the kind of registered reports pathway used in Cortex, you report the planned experiment, and then describe the novel finding in a separate section. Hypothesis-testing and exploration are clearly delineated and no p-values are used for the latter.

In fact, with any new exciting observation, any reputable scientist would take steps to check its repeatability, to explore the conditions under which it emerges, and to attempt to develop a theory that can account for it. In effect, all that has happened is that the 'data have spoken' and suggested a new hypothesis, which could potentially be registered and evaluated in the usual way.

But would there be instances of important findings that would have been lost to history if we had started using pre-registration years ago? Because I wanted examples of serendipitous findings to test this point, I asked Twitter, and lo, Twitter delivered some cracking examples. All of these predate the notion of pre-registration by many years, but note that, in all cases, having made the initial unexpected observation – either from unstructured exploratory research, or in the course of investigating something else – the researchers went on to shore up the findings with further, hypothesis-driven experiments. What they did not do is report just the initial observation, embellished with statistics, and then move on, as if the presence of a low p-value guaranteed the truth of the result.

Here are ten phenomena well-known to psychologists that show how the combination of chance and the prepared mind can lead to important discoveries*. Where I could find one, I cite a primary source, but readers should feel free to contribute further background information.

1. Classical conditioning, Pavlov, 1902. 
The conventional account of Pavlov's discovery goes like this: he was a physiologist interested in processes of digestion and was studying the tendency of dogs to salivate when presented with food. He noted that over time, the dogs would salivate when the lab assistant entered the room, even before the food was presented, thus discovering the 'conditioned response': a response that is learned by association. A recent account is here. I was not able to find any confirmation of the serendipitous event in either Pavlov's Nobel speech or his Royal Society obituary, so it would be interesting to know if this is described anywhere in his own writings or those of his contemporaries.

One thing that I did (serendipitously) discover from the latter source was this intriguing detail, which makes it clear that Pavlov would never have had any truck with p-values, even if they had been in use in 1902: "He never employed mathematics even in its elementary form. He frequently said that mathematics is all very well but it confuses clear thinking almost to the same extent as statistics."

Suggested by @speech_woman @smomara1 @AglobeAgog 

2. Psychotropic drugs, 1950s 
Chance appears to have played an important role in the discovery of many psychotropic drugs in the early days of psychopharmacology. For instance, the first antidepressants grew out of a drug initially used to treat tuberculosis, when it was noticed that there was an unanticipated beneficial effect on mood. Even more striking is Hofmann's first-hand account of discovering the psychotropic effects of LSD, which he had developed as a potential circulatory stimulant. After experiencing strange sensations during a laboratory session, Hofmann returned to test the substances he had been working with, including LSD. "Even the first minimum dose of one quarter of a milligram induced a state of intoxication with very severe psychic disturbances, and this persisted for about 12 hours….This first planned experiment with LSD was a particularly terrifying experience because at the time, I had no means of knowing if I should ever return to everyday reality and be restored to a normal state of consciousness. It was only when I became aware of the gradual reinstatement of the old familiar world of reality that I was able to enjoy this greatly enhanced visionary experience".

Suggested by @ollirobinson @kealyj @neuroraf 

3. Orientation-sensitive receptive fields in visual cortex, 1959 
In his Nobel speech, David Hubel recounts how he and Torsten Wiesel were trying to plot receptive fields of visual cortex neurons using dots of light projected onto a screen, with only scant success, when they observed a cell that gave a massive response as a slide was inserted, creating a faint but sharp shadow on the retina. As he memorably put it, "over the audiomonitor, the cell went off like a machine gun". This initial observation led to a rich vein of research, but, again to quote from Hubel "It took us months to convince ourselves that we weren’t at the mercy of some optical artefact".

 Suggested by: @jpeelle @Anth_McGregor @J_Greenwood @theExtendedLuke @nikuss @sophiescott, @robustgar 

4. Right ear advantage in dichotic listening, 1961 
Doreen Kimura reported that when groups of digits were played to the two ears simultaneously, more were reported back from the right than the left ear (review here). This method was subsequently used for assessing cerebral lateralisation in neuropsychological patients, and a theory was developed that linked the right ear advantage to cerebral dominance for language. I have not been able to access a published account of the early work, but I recall being told during a visit to the Montreal Neurological Institute that it had taken time for the right ear advantage to be recognised as a real phenomenon and not a consequence of unbalanced headphones. The method of dichotic listening dated back to Broadbent or earlier, but it had originally been used to assess selective attention rather than cerebral lateralisation.

5. Phonological similarity effect in STM, 1964 
Conrad and Hull (1964) described what they termed 'acoustic confusions' when people were recalling short sequences of visually-presented letters, i.e. errors tended to involve letters that rhymed with the target letter, such as P, D, or G. In preparation for an article celebrating Conrad's 100th birthday, I recently listened to a recording of him describing this early work, and explaining that when such errors were observed with auditory presentation, it was assumed they were due to mishearings. Only after further experiments did it become clear that the phenomenon arose in the course of phonological recoding in short-term memory.

6. Hippocampal place cells, 1971 
In his 2014 Nobel lecture,  John O'Keefe describes a nice example of unconstrained exploratory research: "… we decided to record from electrodes … as the animal performed simple memory tasks and otherwise went about its daily business. I have to say that at this stage we were very catholic in our approach and expectations and were prepared to see that the cells fire to all types of situations and all types of memories. What we found instead was unexpected and very exciting. Over the course of several months of watching the animals behave while simultaneously listening to and monitoring hippocampal cell activity it became clear that there were two types of cells, the first similar to the one I had originally seen which had as its major correlate some non-specific higher-order aspect of movements, and the second a much more silent type which only sprang into activity at irregular intervals and whose correlate was much more difficult to identify. Looking back at the notes from this period it is clear that there were hints that the animal’s location was important but it was only on a particular day when we were recording from a very clear well isolated cell with a clear correlate that it dawned on me that these cells weren’t particularly interested in what the animal was doing or why it was doing it but rather they were interested in where it was in the environment at the time. The cells were coding for the animal’s location!" Needless to say, once the hypothesis of place cells had been formulated, O'Keefe and colleagues went on to test and develop it in a series of rigorous experiments.

7. McGurk effect, 1976 
In a famous paper, McGurk and MacDonald reported a dramatic illusion: when watching a talking head in which repeated utterances of the syllable [ba] are dubbed on to lip movements for [ga], normal adults report hearing [da]. Those who recommended this example to me mentioned that the mismatching of lips and voices arose through a dubbing error, and there was even a suggestion that a technician was disciplined for mixing up the tapes, but I've not found a source for that story. I noted with interest that the Nature paper reporting the findings does not contain a single p-value.
 
Suggested by: @criener @neuroconscience @DrMattDavis 

8. Thatcher illusion, 1980 
Peter Thompson kindly sent me an account of his discovery of the Thatcher Illusion (downloadable from here, p. 921). His goal had been to illustrate how spatial frequency information is used in vision, entailing that viewing the same image close up and at a distance will give very different percepts if low spatial frequencies are manipulated. He decided to illustrate this with pictures of Margaret Thatcher, one of which he doctored to invert the eyes and mouth, creating an impressively hideous image. He went to get sellotape to fix the material in place, but noticed that when he returned, approaching the table from the other side, the doctored images were no longer hideous when inverted. Had he had sellotape to hand, we might never have discovered this wonderful illusion.

Suggested by @J_Greenwood 

9. Repetition blindness, 1987 
Repetition blindness, described here by Nancy Kanwisher, is the phenomenon whereby people have difficulty detecting repeated words that are presented using rapid serial visual presentation (RSVP) - even when the two occurrences are nonconsecutive and differ in case. I could not find a clear account of the history of the discovery, but it seems that researchers investigating a different problem thought that some stimuli were failing to appear, and then realised these were the repeated ones.

Suggested by @PaulEDux 

10. Mirror neurons, 1992 
Giacomo Rizzolatti and colleagues were recording from cells in the macaque premotor cortex that responded when the animal reached for food, or bit a peanut. To their surprise, they noticed that, when testing the animals, the same cell that responded when the monkey picked up a peanut also responded when the experimenter did so (see here for a summary). Ultimately, they dubbed these cells 'mirror neurons' because they responded both to the animal's own actions and when the animal observed another performing a similar action. The story that mirror neurons were first identified when they started responding during a coffee break, as Rizzolatti picked up his espresso, appears to be apocryphal.

Suggested by: @brain_apps @neuroraf @ArranReader @seriousstats @jameskilner @RRocheNeuro 

 *I picked ones that I deemed the clearest and best-known examples. Many thanks to all the people who suggested others.

Tuesday, 24 May 2016

Who wants the TEF?



I'll say this for the White Paper on Higher Education "Success as a Knowledge Economy": it's not as bad as the Green Paper that preceded it. The Green Paper had me abandoning my Christmas shopping for furious tirades against the errors and illogicality that were scattered among the exhausted clichés and management speak (see here, here, here, here and here). So appalled was I at the shoddy standards evident in the Green Paper that I actually went through all the sources quoted in the first section of the White Paper and contacted the authors to ask if they were happy with how their work had been reported. I'm pleased to say that of the 12 responses I got, ten were entirely satisfied, and one had just a minor quibble. But what about the twelfth, you ask. What indeed?
When justifying the need for a Teaching Excellence Framework (TEF) last November, Jo Johnson used some extremely dodgy statistical analysis of the National Student Survey to support his case that teaching in some quarters was 'lamentable'. I was pleased to see that this reference was expunged from the White Paper. But that left a moth-eaten hole in the fabric of the argument: if students aren't dissatisfied, then do we really need a TEF? One could imagine the civil servants rushing around, desperate to find a suitably negative statistic. And so they did, citing the 2015 HEPI-HEA Student Academic Experience Survey as showing that "Many students are dissatisfied with the provision they receive, with over 60% of students feeling that all or some elements of their course are worse than expected and a third of these attributing this to concerns with teaching quality." (p 8, para 5). The same report is subsequently cited as showing that: ".. applicants are currently poorly-informed about the content and teaching structure of courses, as well as the job prospects they can expect. This can lead to regret: the recent Higher Education Academy (HEA)–Higher Education Policy Institute (HEPI) Student Academic Experience Survey found that over one third of undergraduates in England believe their course represents very poor or poor value for money." The trouble is, both of these quotes again use spin and dodgy statistics.
Let's take the 60% dissatisfaction statistic first. The executive summary of the report stated: "Most students are satisfied with their course, with 87% saying that they are very or fairly satisfied, and only 12% feeling that their course is worse than they expected. However, for those students who feel that their course is worse than expected, or worse in some ways and better than others, the number one reason is not the number of contact hours, the size of classes or any problems with feedback but the lack of effort they themselves put in." So how do we get to 60% dissatisfied? The figure is arrived at by combining the 12% who said that their experience had been worse than expected with the 49% who said that it had been better in some ways and worse in others – 61% in total. So it is literally true that there is dissatisfaction with 'some or all elements', but the presentation of the data is clearly biased to accentuate the negative. One is reminded of Hugh in 'The Thick of It' saying "I did not knowingly not tell the truth".
But it gets worse: As pointed out on the Wonkhe blog, among 'key facts' in a briefing note accompanying the White Paper, the claim was reworded to say over 60% of students said they feel their course is worse than expected. The author of the blogpost referred to this as substantial misrepresentation of the survey. This is serious because it appears that in order to make a political point, the government is spreading falsehoods that could cause reputational damage to Universities.
Moving on to perceptions of 'value for money', there are two reasons for giving this a low rating – you are paying a reasonable amount for something of poor quality, or you are paying an unreasonable amount for something of good quality. Alex Buckley, one of the authors of the report, replied to my query to say that while the numeric data were presented accurately, crucial context was omitted – context that makes it crystal clear that it was the money side of the equation that concerned students. He wrote:
"Figure 11 on page 17 of the 2015 HEPI-HEA survey report shows that students from England (paying £9k) and students from Scotland studying in Scotland (paying no fees) have very different perceptions of value for money. And Figure 12 shows that the perceptions of value for money of students from England plummeted at the time of the increase in fees. Half of 2nd year students from England in 2013 thought they were getting good or very good value for money. In 2014, when 2nd years were paying £9k, that figure was a third. (Other global perceptions of quality - satisfaction etc. - did not change). There is something troubling about the Government citing students' perceptions of value for money as a problem for the sector, when they appear to be substantially determined by Government policy, i.e. the level of fees. The survey suggests that an easy way to improve students' perceptions of the value for money of their degree would be to reduce the level of fees - presumably not the message that the Government is trying to get across."
So do students want the TEF? All the indicators say no. Chris Havergal wrote yesterday in the Times Higher about a report by David Greatbatch and Jane Holland in which students in focus groups gave decidedly lukewarm responses to questions about the usefulness of TEF. Insofar as anyone wants information about teaching quality, they want it at the level of courses rather than institutions, but, as an ONS interim review pointed out, the data is mostly too sparse to reliably differentiate among institutions at the subject level. Meanwhile, the NUS has recommended boycotting the National Student Survey, which forms a key part of the metrics to be used by TEF.
This is all rather rum, given that the government claims its reforms will put students at the heart of higher education. It seems that they have underestimated the intelligence of students, who can see through the weasel words and recognise that the main outcome of all the reforms will be further increases in fees.
It's widely anticipated that fees will rise because of the market competition that the White Paper lauds as a positive stimulus to the sector, and it was clear in the Green Paper that one goal of the reforms was to tie the TEF to a regulatory mechanism that would allow higher fees to be set by those with good TEF scores. Perhaps less widely appreciated is that the plan is for the new Office for Students to be funded largely by subscriptions paid by Higher Education Providers. They will have to find the money somewhere, and the obvious way to raise the cash will be by raising fees. So students will be at the heart of the reforms in the sense that, having already endured dramatic rises in fees and the loss of the maintenance grant, they will now also be picking up the bill for a new regulatory apparatus whose main function is to satisfy a need for information that they do not want.