Alexis Brown is Director of Policy and Advocacy at HEPI. HEPI has covered the Research Excellence Framework (REF) in a series of blogs by Dinah Birch, Rory Duncan, Simon Green, Andrew Linn, Cillian Ryan and Di Bailey, Bahram Bekhradnia, Peter Mandler, Andrew Wathey, Geoff Rodgers, and Nick Hillman.
This week, the impact case studies from REF 2021 were finally published. These case studies – which each eligible institution must submit as part of its REF return – attempt to give a snapshot of an academic’s (or academics’) impact, showing in a short five-page narrative the link between their research and some benefit to society, the economy, culture or health.
Good impact case studies are strange beasts. They straddle the quantifiable and the qualitative, needing both a clear and coherent narrative and, ideally, some concrete statistics. While last time round they accounted for only 20% of an institution’s REF score, in REF 2021 they are worth a full quarter of the overall total.
It is easy to be cynical about the impact agenda, especially when you get into the nitty-gritty of how these case studies are generated. ‘Grimpact’ – instances in which research negatively impacts society – is a phenomenon that surely needs more attention. Some will also say that the increasing focus on impact has distorted the purpose of academic research, which should be the production of knowledge for its own sake. This view has become less common, however, as REF logic has slowly but surely embedded itself in the UK academic and institutional psyche – perhaps offering an example of how cultural change can happen in higher education, one that other initiatives (for example, those around security) could take lessons from.
But with impact, not all disciplines are created equal. Some humanities researchers in particular may struggle to demonstrate impact as it has been defined under REF terminology, and this may have contributed in part to the fact that submissions to Panel D (broadly consisting of the humanities subjects) actually shrank this year, in contrast to other panels. This reduction could reflect recent departmental closures, as Times Higher Education suggests, or it could stem from more staff being moved onto teaching-only contracts – which incidentally reduces the number of impact case studies that these departments are required to submit.
Helen Small’s 2013 book, The Value of the Humanities, was in many ways critical of the impact agenda, suggesting it would be better for the humanities ‘to concede as little as possible to the formulaic language of the bureaucratic statistician’ – but even she was nonetheless supportive of the idea that research should have a social benefit, broadly defined. In a slightly exasperated parenthetical aside, she argued: ‘(Are academics seriously unwilling to concede that activities for which they receive public money should be partly assessed in terms of measurable benefits passed on to society?…)’. A hard statement to disagree with.
But if the battle over the justification for that assessment has been won, the terms of the exercise are still very much up for debate. This is no bad thing. A healthy argument over the modes of assessing impact can only aid the REF in future iterations. This is the benefit of peer review, among other things. Now that the case studies are out, there will also be an opportunity to mine them for the kinds of stories policymakers and the public will find compelling when the sector makes the argument for research support. Humanities scholars may take some comfort from the fact that a good impact case study is above all an exercise in storytelling, and one of the last refuges where metrics can be only part of the narrative – at least for now.