
Indicators of integrity

  • 1 November 2024
  • By Grace Gottlieb

This blog was kindly authored for HEPI by Grace Gottlieb, Head of Research Policy at UCL.

Imagine you are a panel member for the 2029 Research Excellence Framework (REF), and your job is to assess whether an institution has a healthy research culture. How will you do it?

You are likely to be presented with a combination of narrative and metrics to evaluate. But what will these metrics look like? And how will they distil the complexities of culture into something concrete?

The sector is currently mobilising to generate these metrics, also known as research culture indicators. Alongside work to develop indicators for the People, Culture and Environment (PCE) element of REF, other parts of the sector are initiating indicator projects of their own.

Integrity indicators

One such project, by the UK Committee on Research Integrity, has recently generated an impressive set of 115 indicators of research integrity, with 16 identified as the most important. These are intended to enable institutions to self-assess, rather than to be used for benchmarking, but there is scope for some of the indicators to feed into PCE.

As a member of the Advisory Group to the Committee’s integrity indicators project, I gained some insight into the process involved. Developing indicators to measure research culture is harder than it sounds!

Firstly, what exactly counts as an indicator? The Committee defines an indicator as “a quantitative or qualitative factor or variable, which provides a reliable means to evaluate achievement, to reflect the changes connected to an intervention, or to help assess the performance or state of play of an actor or system”. Essentially, it is a measure that tells you something about an aspect of the research system and allows you to evaluate change in the system. If you’re looking to apply indicators across multiple research areas, they also need to balance specificity and inclusivity across different contexts.

With this in mind, the Committee developed the indicators through an extensive consultation process, drawing on multiple stakeholders and perspectives. They ran a series of workshops, including one focussed specifically on the arts and humanities, to ensure inclusivity across disciplines.

This ethos of iterative improvement is central to developing, trialling and embedding indicators in the research assessment system. The practical implications of adopting measures of culture cannot necessarily be foreseen. An emphasis on continuous refinement will therefore stand the sector in good stead.

Choices and trade-offs

In developing indicators, the Committee made a series of choices that provide food for thought.

They made a conscious decision to focus on the role of the institution in fostering an environment conducive to research integrity, rather than focussing on the role of individual researchers. As a result, they excluded indicators relating to research outputs, as these were deemed to place most of the responsibility on researchers.

This raises an interesting set of questions. How do we incentivise behaviours among researchers without risking undue burden on them? Is targeting the institutional environment more impactful in fostering research integrity than direct incentives for researchers? What balance should REF2029 strike between measuring inputs (e.g. the institutional environment) and outputs (e.g. publications affiliated with individuals) of the research system?

These tensions come to the fore in one particular area of research integrity: open research. Given that open research practices typically relate to publication of outputs (e.g. open data), it would not make sense to avoid output-focussed indicators here. Does this tip the balance towards a focus on individual researchers?

The UK Reproducibility Network (UKRN) is confronting this issue directly in a concurrent pilot project to explore open research indicators. To get around this challenge, the project aims to scope out indicators that, yes, focus on outputs, but are anonymous and aggregate. Speaking at a webinar on the UKRN project in March, Neil Jacobs, Head of the UKRN Open Research Programme, noted, “We’re not interested in this work in using indicators as a part of researcher assessment”.
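
To make the idea of anonymous, aggregate output indicators more concrete, here is a minimal sketch in Python. It assumes a hypothetical set of output records in which each entry notes only whether the output has open data and whether it is open access; the field names and figures are illustrative and are not drawn from the UKRN pilot.

    # Illustrative sketch of an anonymous, aggregate open-research indicator.
    # The records and field names are hypothetical, not UKRN's actual schema;
    # crucially, no researcher identifiers are stored or reported.
    records = [
        {"has_open_data": True,  "open_access": True},
        {"has_open_data": False, "open_access": True},
        {"has_open_data": True,  "open_access": False},
    ]

    def aggregate_share(records, field):
        """Return the institution-level share of outputs meeting a criterion."""
        return sum(1 for r in records if r[field]) / len(records)

    print(f"Open data:   {aggregate_share(records, 'has_open_data'):.0%}")
    print(f"Open access: {aggregate_share(records, 'open_access'):.0%}")

Because only institution-level proportions ever leave the analysis, an indicator of this kind can focus on outputs without becoming a form of individual researcher assessment.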

Metric diversity

The development of indicators also raises questions about what type of indicator is appropriate and how it should be evidenced. The Committee on Research Integrity opted for indicators that can be evidenced through a mix of information types – binary (yes/no), quantitative, and qualitative (narrative-based).

Provided there is clear signposting of how indicators are categorised and what type of evidence is expected, a mix of indicator types has the potential to balance burden (minimised by binary metrics) against a more granular picture of the research environment and institutional diversity (provided by qualitative indicators).
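
As a rough illustration of how such a mix might be recorded in practice, the sketch below models a single indicator carrying binary, quantitative or qualitative evidence. The class, field names and example indicators are hypothetical and do not reflect the Committee’s actual scheme.

    # Hypothetical sketch of an indicator record supporting the three
    # evidence types described above; not the Committee's actual schema.
    from dataclasses import dataclass
    from typing import Union

    @dataclass
    class Indicator:
        name: str
        evidence_type: str  # "binary", "quantitative" or "qualitative"
        evidence: Union[bool, float, str]

    indicators = [
        Indicator("Integrity training offered to all staff", "binary", True),
        Indicator("Staff completing integrity training (%)", "quantitative", 87.5),
        Indicator("How integrity concerns are raised and handled", "qualitative",
                  "Narrative description of the institution's processes..."),
    ]

    for ind in indicators:
        print(f"{ind.name} [{ind.evidence_type}]: {ind.evidence}")

Labelling the evidence type explicitly, as here, is what lets binary items keep the reporting burden low while qualitative items carry the richer narrative detail.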

This aligns with plans to balance narrative and metrics in REF PCE. A diversity of metrics was also advocated in the 2015 and 2022 Metric Tide reports.

Coalescing into coherence

It is brilliant to see multiple initiatives across the sector – spearheaded by funders, institutions and others – to develop and test indicators of aspects of research culture. Given that research culture is shaped by the various actors in the system, the evaluation of research culture should equally be co-created.

There is now both an opportunity and a challenge to integrate these various threads into a coherent whole for REF2029. The templates and guidance for PCE pilot submissions are due to be shared with the sector soon, and they should provide more insight into how the funders’ thinking has evolved on the complex question of how to measure research culture.


1 comment

  1. Sharon says:

    I think that the focus on outputs restricts opportunity and research. It used to be that research was about curiosity. With novel research you didn’t know what the outcome would be – whether an idea would work or not – but the curiosity and the journey of discovery were what ignited interest and passion. Now all funding has to be marked as having impact and outputs, which limits true curiosity and discovery and dampens the joy of research. As always, money is the root of the issue. There is no freedom for curiosity because there is no funding for research that does not deliver or develop within a fixed amount of time. It used to be that maths problems would take decades to solve, and the passion to try to discover new answers was enough. Now researchers do not have the luxury of spending that time on problem solving. If you want to incentivise researchers, you need to allow them the freedom to be curious without the constraints of time and money to burden them. Institutions should take on the weight of these concerns rather than passing them on to researchers. Nowadays it is all too self-directed, and researchers feel they carry too much responsibility and have no time for joy.
