This blog was written by Professor Simon Green, Pro-Vice-Chancellor (Research) at Aston University and a member of the Area Studies sub-panel for both REF 2014 and REF 2021.
Results day for the Research Excellence Framework (REF) is typically accompanied by two things. On the one hand, there is (understandable) jubilation amongst those institutions and departments that have done well. On the other hand, there is often widespread frustration with the administrative burdens and processes associated with the exercise, including numerous suggestions for how things might be done differently. The publication of the results for REF 2021 was no different.
Of course, such criticisms are entirely legitimate when it comes to something as significant as the REF. Everyone has their bugbears about what is, after all, a highly complex system of assessment (institutional level environment statements, anyone?). But equally, we should not lose sight of some of the genuinely positive changes made in the current system, largely in response to the Stern Review.
First and foremost, the expectation for universities to submit all staff with ‘significant responsibility for research’ was an important change compared to 2014, when institutions were permitted to make selective returns. Not only does this give a more holistic picture of the breadth and depth of any department’s research, but it is also an implicit element of the Government’s R&D People and Culture strategy. After all, selectivity is more likely to favour those who have not taken career breaks, or who are no longer in the early stages of their research careers. At Aston, we certainly viewed this requirement as an opportunity to make an inclusive submission of 100 per cent of eligible staff. We did so in full knowledge that our Grade Point Average (which determines at least some of the rankings) would not be as high as it could have been in a selective context.
In fact, this was one of the more surprising dimensions of the REF results. As it turns out, the degree of selectivity remains quite high across the sector, and particularly within some units of assessment. To be clear, this is not to suggest any wrongdoing: the rules permitted institutions to spell out in their Codes of Practice how they intended to identify those eligible staff who did not have significant responsibility for research. At the same time, there is also quite a difference between an institution which submitted 98 per cent of eligible staff and one which submitted 40 per cent.
This presents Research England with something of a dilemma in the context of the funding algorithm, which has yet to be confirmed. The easier option is simply to ignore this variation, on the grounds that each institution simply proceeded in accordance with its Code of Practice (which Research England had to approve). But the price of doing so is to create an unintended consequence for the future. If there is no financial benefit to having submitted 100 per cent of eligible staff, the next REF will surely see explicit gaming in this respect, and thereby effectively bring about selectivity by the back door. Can this really foster the inclusive research culture we want and need to create in the UK?
For me, the second innovation which has been successful is the composition of the submission. In 2014, each staff member had to submit four published outputs. In 2021, this could vary from one output to five outputs per person, as long as the overall average across any given submission was 2.5 outputs per person.
This has two important and very positive effects. Firstly, it frames any given submission not as a collection of individual performances, but as a team effort. Across a portfolio of different kinds of academic work, it may be entirely reasonable for an individual researcher to submit only a single journal article or book to the REF, if they are adding significant value in other ways (such as through outreach or public engagement). Likewise, if that single output is judged to be world leading (4*), it will be worth more to the department than three which are internationally excellent (3*). That is the second effect: it shifts the dial towards quality over quantity. Put differently, it becomes in everyone’s interests to publish fewer, higher quality outputs. In my view, this is a welcome culture change, especially for those in the early stages of their careers.
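The arithmetic behind these two claims can be sketched briefly. The sketch below is illustrative only: the 4:1 weighting between 4* and 3* outputs is an assumption chosen to mirror the kind of quality-weighted funding formula used for QR allocations, not the actual REF funding algorithm (which, as noted above, has yet to be confirmed), and the team profile is hypothetical.

```python
# Assumed quality weights for illustration only -- NOT the real
# funding algorithm. A 4:1 ratio between 4* and 3* is one plausible
# weighting under which a single 4* output outweighs three 3* outputs.
WEIGHTS = {4: 4.0, 3: 1.0, 2: 0.0, 1: 0.0}

def weighted_score(star_ratings):
    """Sum the assumed funding weight of each output's star rating."""
    return sum(WEIGHTS[s] for s in star_ratings)

# One world-leading output vs three internationally excellent ones.
print(weighted_score([4]))        # 4.0
print(weighted_score([3, 3, 3]))  # 3.0

# The 2021 composition rule: each person returns between one and five
# outputs, provided the submission averages 2.5 outputs per person.
# A hypothetical four-person unit returning 1, 2, 3 and 4 outputs:
team = [1, 2, 3, 4]
print(sum(team) / len(team))      # 2.5 -- meets the average exactly
```

Under any weighting of this shape, a department gains more from one 4* output than from three 3* outputs, which is precisely the incentive towards quality over quantity described above.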
Having now served on two REF sub-panels, and having found both rewarding experiences, I am also unconvinced by the argument that metrics could deliver the same quality of outcome. As Nick Hillman has noted elsewhere on the HEPI blog, ‘peer review is the worst form of evaluation – except for all the others that have been tried’. One of the under-appreciated aspects of the REF is that the criteria (originality, significance and rigour for outputs) were introduced as long ago as 2008. They have genuinely stood the test of time, and will be hard to replicate through metrics. The forthcoming report of the REF’s Equality and Diversity Advisory Panel is a further key element of this: its equality impact assessment will hopefully give the sector assurance that outputs were assessed fairly with respect to protected characteristics. Again, this would be less certain with a purely metrics-based approach.
Already, the Future Research Assessment Programme (FRAP) has begun its work to develop the shape of the next REF. This is to be welcomed, as the sooner universities know what they are dealing with, the better; this itself would contribute to a reduction in the administrative burden, as new requirements would not have to be incorporated into preparations at short notice. While it is inevitable and right that the next REF will bring some changes, our hope must be that the baby is not thrown out with the bathwater.
This post is part of a series of blogs reflecting on the REF. The full list of blogs in the series can be found here.