
REF 2028? Think Again

  • 10 May 2022
  • By Peter Mandler
  • This blog was written by Peter Mandler, Professor of Modern Cultural History at the University of Cambridge. Peter is on Twitter @PeterMandler1. This is the fourth in a series of blogs reflecting on the REF. The full list of blogs in the series can be found here.
  • Following the announcement of the results of REF2021, join us for a short ‘In Conversation’ with David Sweeney, Executive Chair of Research England, at 3pm on Thursday. To register for this webinar, please click on the following link: https://us06web.zoom.us/j/88024528350?pwd=VHJmejcwem1hQ285ajh3Y29jM0NsQT09.

I’ve been involved with successive national ‘research assessment’ exercises now for 30 years, as contributor to and author of institutional submissions and latterly as an assessor, a member of the national panel for history in the 2008 and 2014 exercises. I used to stand up for these exercises as, first, the least worst way to distribute limited government research funding across the wider range of universities that has emerged since the early 1990s and, second, a means of preserving the essential element of peer review in that assessment. The alternatives – metrics, non-expert views, criteria having little to do with the quality of research (the putative object of assessment) – didn’t bear thinking about. So I was willing to put in literally months and months of work, collating colleagues’ work, writing long bureaucratic documents to increasingly baroque rules, and reading hundreds and hundreds of books, chapters and articles.

Now, I’m not so sure. The ‘Research Excellence Framework’ as it’s currently called (REF) is no longer all that much about excellence or even about research. In the current exercise direct assessment of research counts for only 60 per cent of the outcome; ‘environment’ (a bundle of measures of research culture) counts for 15 per cent; and ‘impact’ (an assessment of the reach and significance of research beyond academia) for 25 per cent. We are told after every exercise – we will be told after this one – that ‘they’ (meaning the Business Department, which bosses the REF, and the Treasury, which bosses the Business Department) will insist on more for impact ‘next time’. And so it may transpire, and assessment of research will take a smaller share still. 

Less assessment of research also means, by definition, less peer review. ‘Impact’ and ‘environment’ are assessed by academics too, assisted by ‘impact assessors’ from outside academia. A dirty secret of the assessment of impact and environment is that they are, again almost by definition, assessed much more sketchily and on much less evidence than research is. When I read a book of, say, 300 pages, I spend hours of my time and harness my whole career’s expertise in evaluating it. For a medium-sized department, that book would count for about 2 per cent of the research submission and therefore just over 1 per cent of the final outcome (i.e. 60 per cent of 2 per cent). When I read an ‘impact case study’, I’m not allowed to consider anything external to that document, which runs to five pages. I don’t have the same level of expertise to assess it that I do in reading the book, and even the impact assessor can only judge so much from five pages of claims, backed by a narrowly specified body of permissible evidence. And yet that five-page impact case study will count for about 6 per cent of the department’s final outcome. How can we defend – either as peer review or even as fair assessment – giving an impact case study that takes minutes to evaluate (even with multiple evaluators), on limited evidence and expertise, roughly five times the weight (and five times the cash) of a book that takes hours to evaluate, on enormous evidence and lifelong expertise? What started out as a research assessment exercise has ended up as more of a public-relations assessment exercise, with largely rhetorical documents contributing more and more to the calculus.
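
A back-of-envelope sketch of that arithmetic may help. The department profile below is my illustrative assumption – roughly 50 outputs and four impact case studies, plausible for a medium-sized submission – not a REF rule; only the 60 per cent and 25 per cent weightings come from the current exercise.

    # Back-of-envelope REF 2021 weighting arithmetic.
    # Assumed (illustrative, not REF rules): a medium-sized department
    # submitting ~50 outputs and 4 impact case studies.
    OUTPUT_WEIGHT = 0.60     # outputs' share of the final outcome
    IMPACT_WEIGHT = 0.25     # impact case studies' share of the final outcome

    outputs = 50             # assumed number of outputs submitted
    case_studies = 4         # assumed number of impact case studies

    book_share = OUTPUT_WEIGHT / outputs        # one book's share of outcome
    case_share = IMPACT_WEIGHT / case_studies   # one case study's share

    print(f"one book:       {book_share:.2%} of the final outcome")    # 1.20%
    print(f"one case study: {case_share:.2%} of the final outcome")    # 6.25%
    print(f"one case study ~= {case_share / book_share:.1f}x one book")  # 5.2x

On these assumptions, a single five-page case study carries just over five times the weight of a single book.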

(I don’t even address here the injustice, specific to book-oriented disciplines like mine, of weighting a 300-page book, which might have taken its lone author between five and ten years to write, as equal to two papers co-authored by up to 50 scientists, who might turn out a dozen or more such papers annually.)

Research assessment is thus less and less about the assessment of research. It’s also more and more about other things. Government wants it to have measurable ‘impact’, under definitions that, in order to be measurable, inevitably capture only part of it; for example, if you switch institutions, your new institution can claim no impact based on work you did while employed at the previous one. ‘Poof’ – a lifetime of research disappears from the scope of the exercise. Government has also lately been piling on other desiderata: for example, excluding from assessment work that doesn’t appear in approved forms of ‘open access’. ‘Poof’ again.

More widely, universities are using the exercise for their own purposes, sometimes very far from the assessment of excellence. Mock REF exercises and REF-generated metrics are used to evaluate staff for hiring (and firing) and promotion, even by universities that have signed up to the DORA declaration, which explicitly repudiates the use of such metrics. If REF were just peer review, that might be unobjectionable – peer review is the correct way to assess research performance. But REF as deployed internally by universities is often very far from peer review. It may be handed over to a single non-specialist evaluating an entire department. It is often liberally reinterpreted to suit managerial prerogatives. On occasion a senior manager (usually a scientist) has told me that I ‘didn’t understand’ how the History REF worked, by which he meant (usually he’s a he) that I didn’t understand how he used it.

Worst of all, REF has become an enormous bureaucratic nightmare – a steam-powered jackhammer to crack a nut. Each new iteration takes the existing template and adds more levels of complexity and direction. Just between the 2008 and 2014 exercises the estimated costs nearly quadrupled, from £66 million to £246 million. Such costs are usually justified in terms of the much larger sum being disbursed. But cost comes not only in pounds and pence. The REF now looms over the daily lives of institutions and individuals like a massive headache, insinuating itself in places where it doesn’t belong, dampening initiative and originality, and replacing the object of desire (good research) with its proxy.

The time is ripe for a root-and-branch reconsideration. Rip up the rulebook and start again. Think seriously about whether, as is often suggested, a simple headcount might lead to rough justice without the thousands of pages of boilerplate and the hundreds of meetings and exercises. And get back to basics. Anything worthy of the name ‘research excellence’ has to put excellent research, not a lengthening government or managerial wish-list, at its heart.


1 comment

  1. David Manning says:

    Thanks, Peter.

    REF is, I would suggest, even more problematic than you allow: has it not fundamentally changed the definition of high-quality research? See my blog https://wonkhe.com/blogs/how-we-get-what-we-value/
