Generative AI and the REF: closing the gap between policy and practice
This blog was kindly authored by Liam Earney, Managing Director, HE and Research, Jisc.
The REF-AI report, funded by Research England and co-authored by Jisc and the Centre for Higher Education Transformations (CHET), was designed to provide evidence to help the sector prepare for the next REF. Its findings show that Generative AI is already shaping the approaches that universities adopt. Some approaches are cautious and exploratory, some are inventive and innovative, and most of this activity is happening quietly in the background. GenAI in research practice is no longer theoretical; it is part of the day-to-day reality of research and research assessment.
For Jisc, some of the findings in the report are unsurprising. We see every day how digital capability is uneven across the sector, and how new tools arrive before governance has had a chance to catch up. The report highlights an important gap between emerging practice and policy – a gap that the sector can now work collaboratively to close. UKRI has already issued guidance on generative AI use in funding applications and assessment, emphasising honesty, rigour, transparency, and confidentiality. Yet the REF context still lacks equivalent clarity, leaving institutions to interpret best practice alone. This work was funded by Research England to inform future guidance and support, ensuring that the sector has the evidence it needs to navigate GenAI responsibly.
The REF-AI report rightly places integrity at the heart of its recommendations. Recommendation 1 is critical to support transparency and avoid misunderstandings: every university should publish a clear policy on using Generative AI in research, and specifically in REF work. That policy should outline what is acceptable and require staff to disclose when AI has helped shape a submission.
This is about trust and about laying the groundwork for a fair assessment system. At present, too much GenAI use is happening under the radar, without shared language or common expectations. Clarity and consistency will help maintain trust in an exercise that underpins the distribution of public research funding.
Unpicking a patchwork of inconsistencies
We now have insight into real practice across UK universities. Some are already using GenAI to trawl for impact evidence, to help shape narratives, and even to review or score outputs. Others are experimenting with bespoke tools or home-grown systems designed to streamline their internal processes.
This kind of activity is usually driven by good intentions. Teams are trying to cope with rising workloads and the increased complexity that comes with each REF cycle. But when different institutions use different tools in different ways, the result is not greater clarity. It is a patchwork of inconsistent practices and a risk that those involved do not clearly understand the role GenAI has played.
The report notes that most universities still lack formal guidance and that internal policy discussions are only just beginning. In fact, practice has moved so far ahead of governance that many colleagues are unaware of how much GenAI is already embedded in their own institution’s REF preparation, or, in the case of professional services staff, how much GenAI their researchers are already using.
The sector digital divide
This is where the sector can work together, with support from Jisc and others, to help narrow that divide. The survey results tell us that many academics are deeply sceptical of GenAI in almost every part of the REF. Strong disagreement is common and, in some areas, reaches seventy per cent or more. Only a small minority sees value in GenAI for developing impact case studies.
In contrast, interviews with senior leaders reveal a growing sense that institutions cannot afford to ignore this technology. Several Pro Vice Chancellors told us that GenAI is here to stay and that the sector has a responsibility to work out how to use it safely and responsibly.
This tension is familiar to Jisc. GenAI literacy is uneven, as is confidence, and even general digital capability. Our role is to help universities navigate that unevenness. In learning and teaching, this need is well understood, with our AI literacy programme for teaching staff well established. The REF AI findings make clear that similar support will be needed for research staff.
Why national action matters
If we leave GenAI use entirely to local experimentation, we will widen the digital divide between those who can invest in bespoke tools and those who cannot. The extent to which institutions can benefit from GenAI is tightly bound to their resources and existing expertise. A national research assessment exercise cannot afford to leave that unaddressed.
We also need to address research integrity, and that should be the foundation for anything we do next. If the sector wants a safe and fair path forward, then transparency must come first. That is why Recommendation 1 matters. The report suggests universities should consider steps such as:
- define where GenAI can and cannot be used
- require disclosure of GenAI involvement in REF related work
- embed these decisions into their broader research integrity and ethics frameworks
As the report notes, current thinking about GenAI rarely connects with responsible research assessment initiatives such as DORA or CoARA; that gap has to close.
Creating the conditions for innovation
These steps do not limit innovation; they make innovation possible in a responsible way. At Jisc we already hear from institutions looking for advice on secure, trustworthy GenAI environments. They want support that will enable experimentation without compromising data protection, confidentiality or research ethics. They want clarity on how to balance efficiency gains with academic oversight. And they want to avoid replicating the mistakes of early digital adoption, where local solutions grew faster than shared standards.
The REF AI report gives the sector the evidence it needs to move from informal practice to a clear, managed approach.
The next REF will arrive at a time of major financial strain and major technological change. GenAI can help reduce burden and improve consistency, but only if it is used transparently and with a shared commitment to integrity. With the right safeguards, GenAI could support fairness in the assessment of UK research.
From Jisc’s perspective, this is the moment to work together. Universities need policies. Panels need guidance. And the sector will need shared infrastructure that levels the field rather than widening existing gaps.