- This HEPI blog was kindly authored by Hugh McKenna CBE, Dean of Medical School Development at Ulster University; and Roger Watson, Honorary Professor of Nursing at the University of Hull.
In many countries, most research undertaken in universities is funded by government, and governments have a responsibility to ensure that this money is well spent. Increasingly, this means that funding must be targeted at research quality. One of the means of knowing where quality research is undertaken is rigorous and systematic assessment. Governments in many countries therefore undertake periodic reviews of the research conducted in their universities.
One of the first to do so was the Thatcher administration in the UK, with the 1986 research assessment exercise, quickly followed by the 1989 Research Selectivity Exercise. The process then settled into a series of Research Assessment Exercises running from 1992 to 2008, after which the Research Excellence Framework was developed; first run in 2014, it is now in its third iteration.
Development of research assessment
The processes used to assess research in the UK have been refined through reviews, consultations and pilot testing before implementation. The UK's assessment structures and processes are robust and systematic and amongst the best in the world. Research assessment processes pioneered in the UK have therefore proved to be a great export, and versions of them can be found internationally.
The development of impact assessment
Research impact had previously been introduced in 2007 in Australia through its Excellence in Research for Australia (ERA) initiative. However, shortly after this, the government changed and impact, as a component of the ERA, was dropped. In 2009, the UK imported the Australian model, down to its use of impact case studies. The UK Treasury also saw this as an excellent addition to research assessment: taxpayers in the UK had a right to know what benefit was being achieved from their investment.
A perfect system for assessing the quality of university research will never be developed, nor will any process please everyone. However, it is notable that the UK research assessment exercises, in their various forms, have studiously avoided using metrics to judge, for example, the quality of published research. The cardinal feature of research assessment in the UK is that the exercise depends on peer review. Expert panels are appointed across the units of assessment whose members, on the strength of their reputations and achievements in their respective fields, command the confidence of the whole range of subjects assessed.
Exporting research assessment
UK research assessment procedures are most closely followed in Hong Kong where, essentially, our most recent exercises are implemented with little modification, including using the results as a means of allocating research funding to the seven universities there. The Hong Kong Research Assessment Exercise in 2020 was similar to the UK REF2014, and the Hong Kong 2026 RAE will be almost identical to the UK REF2021. It is understandable that Hong Kong should conduct research assessment this way: the Special Administrative Region of China was a British colony until 1997 and its universities continue, largely, to follow the same procedures as UK universities. To illustrate this, there are UK assessors on all of the assessment panels, including as panel chairs (Convenors), and UK consultants have been employed to assist Hong Kong universities in their RAE preparations.
At one stage there was a suggestion that the UK could market and franchise its assessment system worldwide. This never came to fruition, but some other governments were enamoured of the UK's tried and tested system. In 2014, the Swedish Research Council evaluated the quality of health research in seven ALF Regions (County Councils) across Sweden. The plan was that, by 2019, 20% of ALF funding would be linked to the results of a quality assessment exercise. Three peer review panels, composed of international academics, met in Stockholm. ALF Panel 1 judged the scientific quality of research, focusing mainly on outputs. ALF Panel 2 assessed the clinical significance and societal impact of research. Finally, ALF Panel 3 assessed the prerequisites of clinical research. The criteria reflected the UK REF approach, as did the panels' working methods. The quality ratings were based on answers to the following three questions:
- How significant is the contribution of this publication to the knowledge base in the area into which it may be classified?
- What are the potential impacts of research in this area?
- What is the reach and significance of these potential impacts?
As in Australia, the Swedish Government changed; the results of the ALF exercise were not used to allocate funding and the scheme was shelved. This outcome was also due to the university Rektors (Vice-Chancellors) voicing their discomfort with a new system that could potentially reduce their funding.
Another country that has adopted the UK system almost wholly is Poland. The Government there has watched the UK assessment approach closely. Poland began to introduce research assessment in 1991 and it now takes place every four years. As in the UK, the results inform the allocation of public funding for university research. Since 2014, the Poles have been particularly impressed with how the UK assesses research impact and have adopted it wholesale, giving impact case studies a weighting of 20%, with outputs worth 70%.
Through years of review and honing, the UK has developed one of the most advanced systems of assessing research quality internationally, and by adopting it other governments are spared having to create their own schemes. In addition, as can be seen above, some countries have learned from the REF experience and embraced aspects of it in their own approaches. However, as with every iteration since 1986, the UK system continues to evolve.
In 2021, a movement began in the EU questioning the inappropriate ways in which research assessments were carried out and the damaging hypercompetitive culture they often fostered. In 2022, this led to the establishment of the Coalition for Advancing Research Assessment (CoARA). CoARA sought to make research assessment fairer, respect the diversity of research roles, make more use of qualitative methods and end what it saw as the inappropriate use of quantitative metrics. CoARA's influence can be seen in the forthcoming REF2029.
Since its first iteration in 1986, research assessment in the UK has altered its criteria and working methods many times through reviews, consultations and attention to the needs of society. We have no doubt it will continue to do so, enhancing its international reputation and encouraging further replication. It is one of the best approaches for identifying where quality research is being conducted and hence where taxpayers' money should be targeted. If it did not exist, someone would have to invent it!
Conflict of interest statement
Hugh McKenna chaired the 2008 Research Assessment sub-panel for Nursing and Midwifery and the 2014 and 2021 Research Excellence Framework sub-panels for Allied Health Professions, Dentistry, Nursing and Pharmacy.
Roger Watson served on the 2008 Research Assessment sub-panel for Nursing and Midwifery and the 2014 Research Excellence Framework sub-panel for Allied Health Professions, Dentistry, Nursing and Pharmacy.