The UK's only independent think tank devoted to higher education.

People, Culture and Environment in REF 2029: time to test it out

  • 4 December 2024
  • By Duncan Shermer, Head of Research Excellence Framework (REF) Evaluation and Development

We’ve made lots of progress developing the People, Culture and Environment (PCE) element for REF 2029. Working with the PCE Indicators Project Team and a range of stakeholders, we’ve developed a set of indicators, assessment criteria and an assessment approach, and we’re about to start testing them all in the PCE Pilot exercise. In the Pilot, a broad sample of HEIs will produce submissions for assessment in a selection of REF Units of Assessment (UoAs), and we will soon announce the panels who will be conducting the assessment.

The Pilot is an experiment

Lots will change over the course of the Pilot. We will be testing what is possible and feasible in the assessment of research culture; we want to understand what works and to refine the working model. As the Pilot is an experiment, there is some flexibility for us to test various things:

  • We’re starting with a long list of indicators. We’re testing them across different disciplines and with different types of HEI. We may find that some indicators are more applicable at institution level whereas others work better at UoA level. Or we may find that some indicators are more applicable in some disciplines than in others. The pilot exercise is a vehicle to narrow down the list of indicators to a focussed and robust set for use in REF 2029.
  • HEIs in the Pilot may approach their submissions differently. Different HEIs may have more data or information available. Or where HEIs are engaging with multiple UoAs, they might take different approaches to different disciplines. That will help us to understand a range of different approaches and what might work.
  • Assessment panels may also vary their approaches. Different panels may operate differently, and may vary their approach during the assessment. This helps us examine a range of approaches and develop the assessment process during the pilot.

But it’s also worth outlining what we are focussing on in the Pilot; we want to test a broad range of things, but the Pilot can’t test everything:

  • The template may change. The template for the Pilot is a prototype, which we may refine and evolve during the pilot exercise, incorporating feedback from the participating HEIs and from the pilot assessment panels.
  • PCE needs to be situated in the broader REF assessment. There may be indicators which lend themselves better to other areas of the REF assessment, such as Contribution to Knowledge and Understanding, or Engagement and Impact. This is something that we will need to examine as part of the broader REF assessment as we develop the guidance for the whole of the REF 2029 exercise.
  • Focussing on the newer elements of PCE. In previous exercises various data have been used to give us an insight into the research environment at UoA and institution level (e.g. research doctoral degrees awarded, or research income). We do not intend to re-examine these, but anticipate they will be carried forward into REF 2029 in some form.

Key questions to shape PCE

I want to stress again that what we are testing in the Pilot exercise is not the finished article; it is a starting point and will be refined and developed as the pilot progresses. However, I think it’s helpful to note what sorts of questions we will be exploring, as this gives an indication of the possible options for the assessment of PCE within REF 2029. We will be asking:

  • Which indicators do the participating institutions feel best reflect their research culture and which sources of evidence do they find most important?
  • Which metrics and evidence are difficult to provide and where is there scope for automation or provision of data from existing sources?
  • Where do participating institutions and assessment panels lack confidence in the robustness of the data and evidence?
  • Which quantitative and qualitative metrics and evidence are most helpful to the panels in reaching their conclusions?

These questions will help us to identify which metrics are important across a range of providers and UoAs, and which are pertinent in certain subject areas, or representative of certain types of institution. They will also give us an idea of the level of burden of PCE assessment, and some insight into where the likely focus of PCE audit may lie. We are likely to end up with some required metrics which will allow comparison between institutions, and some optional metrics which could be selected from a menu to suit different submissions. The pilot will help us determine the appropriate flexibility in selecting options from that menu. Do we expect you to have a starter, a main course and a pudding, or is it okay to skip the starter and have two puddings? For the purposes of the pilot, we will be asking the participating institutions and assessment panels to sample as many courses from the menu as they can, and we acknowledge that this might not make for a very balanced meal at this stage!

Want to know more?

A lot of work has been done, and a lot of ground has been covered, but we still have a long way to go and there’s still plenty to discuss in the development of PCE. Please visit ref.ac.uk and sign up to REF email updates to hear more about PCE in REF and the exciting next steps on our journey.


1 comment

  1. Helen says:

    How are UKCORI’s shortlist of 16 potential Research Integrity Indicators (https://ukcori.org/wp-content/uploads/2024/11/Indicators-of-Research-Integrity-UK-Committee-on-Research-Integrity-report.pdf) feeding into the PCE Pilot?
