
The Research Excellence Framework: Time for change?

  • 11 May 2022
  • By Cillian Ryan and Di Bailey
  • Following the announcement of the results of REF2021 tomorrow, join us in conversation with David Sweeney, Executive Chair of Research England. Register for the webinar here.

This blog was written by Professor Cillian Ryan, Pro Vice-Chancellor (International) and Professor Di Bailey, Interim Pro Vice-Chancellor (Research and Innovation) at Nottingham Trent University.

The outcomes of the Research Excellence Framework 2021 will be announced tomorrow. This will be an occasion for modest press attention, linked to universities highlighting how well they have performed. Commentators will trumpet the outstanding quality and quantity of the UK’s research.

A consultation, led by the higher education funding bodies as part of the Future Research Assessment Programme (FRAP), closed last Friday. The FRAP notes the need for any future exercise to provide accountability for public investment in research. It intimates that Quality Related (QR) funding to universities, currently derived through an algorithm based on the REF results, could change in the future.

The REF matters to universities, but the resource-intensive nature of the exercise is beyond question: the overall cost of REF2014 is estimated to have been £246 million. While there have been some helpful developments in the way research quality is assessed between REF2014 and REF2021, the fundamental issues of how to reduce the burden and cost remain. The consultation reflects the ongoing debates in the sector regarding:

  • the extent to which a future system can utilise research metrics effectively; and
  • the balance between consistency of reporting across REF cycles and whether more frequent reporting is achievable and worthwhile.

What else could we consider in order to assess research? 

The way the FRAP consultation is framed implies that research quality is not assessed elsewhere. This could not be further from the truth. 

Peer review is wired into the assessment of outputs for the REF through panel members whose role is to read and rate outputs submitted across the subject area. But these papers and monographs have already been reviewed, for journals and publishers, by academics with more appropriate subject expertise. Whether the re-assessment of the outputs by panels is a good use of public funds and academic resources is debatable. Similar concerns come up in every post-REF review, and many advocate metrics drawn from other sources as the solution.

Similarly, applications for research grants and end-of-project research reports are assessed and graded by peer / panel review on behalf of the funding bodies. Research Councils provide feedback to higher education institutions on the grading of proposals and the percentage of applications funded. More could be made of these very thorough assessments. For this to work, one obstacle that would need to be addressed is the ‘demand management’ regulation by which Research Councils limit how many applications universities can submit.

In terms of using journal metrics such as the Impact Factor (or Source Normalised Impact per Paper) to provide a proxy measure of output quality, critics point to the lack of a perfect correlation between the ranking of a journal and the merit of any individual paper within it. One discipline – business – has created a guide which ranks academic journals in alignment with the REF ratings. A post-REF2014 study, which examined a sample of 1,000 papers from eight institutions graded by the Business and Management panel, found that only about half were awarded the same grade by REF reviewers as their journal’s ranking would have predicted.

The use of citation data as an indicator of quality attracts similar caution, since citation counts can vary for many reasons, even within disciplines. Scholars in the arts, humanities and many social sciences are particularly critical of this approach, not least because many outputs in these fields extend well beyond traditional academic journals into books and physical objects (including, for example, paintings). Even if such a metrics-based approach were adopted for science subjects, it is unlikely to work for other disciplines in the same way.

But for many subjects, one readily available option would be to combine proxy indicators of journal quality with the citation count of the submitted output, weighted for the number of years since publication, alongside quality indicators relating to research grants. This might be imperfect, but it is arguably no less imperfect than the current arrangements, and it would provide an annual score for all the outputs of a university in these areas.
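
Purely as an illustration of what such a composite measure might look like, here is a minimal sketch in Python. None of the weights, scales or names below come from the consultation; they are assumptions for the sake of example.

```python
# Illustrative sketch only: a possible composite quality score for a single
# output, combining a journal-quality proxy, age-weighted citations and a
# grant-related indicator. All weights, scales and names are assumptions.

def composite_output_score(
    journal_quality: float,   # journal guide rating, assumed on a 1-4 scale
    citations: int,           # citation count for the output
    years_since_pub: int,     # age of the output in whole years
    grant_indicator: float,   # e.g. share of linked bids judged fundable, 0-1
    w_journal: float = 0.4,
    w_citations: float = 0.4,
    w_grants: float = 0.2,
) -> float:
    # Weight citations by age so older outputs are not mechanically favoured.
    citations_per_year = citations / max(years_since_pub, 1)
    # Normalise the journal rating onto a 0-1 scale (assuming a 1-4 input).
    journal_component = (journal_quality - 1) / 3
    # Cap the citation component so one highly cited paper cannot dominate.
    citation_component = min(citations_per_year / 10, 1.0)
    return (w_journal * journal_component
            + w_citations * citation_component
            + w_grants * grant_indicator)

# Example: a paper in a 3-rated journal with 24 citations over 4 years,
# from a unit where 60 per cent of linked bids were judged fundable.
print(composite_output_score(3, 24, 4, 0.6))   # ~0.63
```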

Should we assess more frequently?

The FRAP consultation asks respondents to comment on the prioritisation of stability versus the currency of information in a future research assessment exercise. If the primary purpose of the REF is to provide universities with a predictable and recurring funding stream with which to fund, at their discretion, ground-breaking research between census dates that would not in its early stages attract external funding, then stability wins. If it is purely about current quality, then currency triumphs. Either way, it is the overly burdensome process that sits behind the exercise that is at issue.

The last REF demonstrated that 22 of the top 25 universities by institutional REF ranking (based on grade point average, or GPA) also featured in the top 25 universities ranked by research income over the REF period. Once again, the winning of research awards emerges as an important marker of research quality, one which the REF serves merely to confirm.

If we think that QR is vital, then one way forward would be to allocate QR funding in proportion to the value of Government-funded research grants awarded to universities in the previous year. This allocation could be modified to reflect the percentage of grants judged fundable and the ratings of end-of-project assessments over the same 12 months.
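
A minimal sketch of such an allocation, assuming a fixed national QR pot and an equal weighting of bid success and end-of-project ratings (both assumptions for illustration only, as are the field names):

```python
# Illustrative sketch: sharing a fixed national QR pot in proportion to each
# university's Government-funded grant income in the previous year, modified
# by the share of bids judged fundable and by the average end-of-project
# rating. Field names, scales and the 50/50 weighting are assumptions.

def qr_allocation(universities: list[dict], qr_pot: float) -> dict[str, float]:
    weighted = {}
    for u in universities:
        # Modifier blends bid success and project ratings, both on 0-1 scales.
        modifier = 0.5 * u["pct_fundable"] + 0.5 * u["avg_project_rating"]
        weighted[u["name"]] = u["grant_income"] * modifier
    total = sum(weighted.values())
    # Each university receives its weighted share of the fixed pot.
    return {name: qr_pot * w / total for name, w in weighted.items()}

example = [
    {"name": "A", "grant_income": 40e6, "pct_fundable": 0.7, "avg_project_rating": 0.8},
    {"name": "B", "grant_income": 10e6, "pct_fundable": 0.9, "avg_project_rating": 0.9},
]
print(qr_allocation(example, qr_pot=25e6))
```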

Providing universities with a consistent data set related to government priorities for research, perhaps every three years, would support a shift to more regular reporting that retains an opportunity for sector benchmarking, while enabling universities to justify their investment in, and outcomes from, QR.

A three-year cycle would enable new priorities to be set regularly, reflecting, for example, potential changes of government. The practice of providing core data to institutions is well rehearsed. Such an approach would support consistency of submissions across all Units of Assessment within institutions at the same time and provide the opportunity to recalibrate the baseline of QR given to each higher education institution, modified annually by a formula that considered its performance on bids, project completion and project impact.
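
And a minimal sketch of that annual modification, assuming equal weights for the three performance measures and an adjustment capped at plus or minus 10 per cent of baseline (both, again, illustrative assumptions):

```python
# Illustrative sketch: an annually modified QR baseline. The adjustment blends
# performance on bids, project completion and project impact (each on a 0-1
# scale); equal weights and the +/-10 per cent band are assumptions.

def adjusted_qr(baseline: float, bids: float, completion: float, impact: float) -> float:
    performance = (bids + completion + impact) / 3
    # Map average performance onto an adjustment of -10% to +10% of baseline.
    adjustment = 0.2 * (performance - 0.5)
    return baseline * (1 + adjustment)

# A university with a £5m baseline, strong bid success but middling impact.
print(adjusted_qr(5e6, bids=0.8, completion=0.7, impact=0.5))  # ~£5.17m
```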

This was the sixth in a series of blogs reflecting on the REF. The full list of blogs in the series can be found here.

Register for our webinar with David Sweeney, Executive Chair of Research England, here.
