
Evaluation in a time of crisis

  • 24 March 2021
  • By Rae Tooth & Richard Shiner

This blog was kindly contributed by Rae Tooth, Chief Executive at Villiers Park Educational Trust, and Richard Shiner, Head of Access and Participation Evaluation at the Office for Students. You can find Rae and Richard on Twitter: @raetooth and @villierspark, and @richard_shiner and @officestudents.

Our respective journeys through the higher education access and participation evaluation landscape began some years ago at the Office for Fair Access (OFFA). It’s probably fair to say that in those days, evaluation was not as well embedded in universities and colleges as it is now. Things have moved on significantly since then – today there are many more roles in data monitoring, evidence evaluation and data research. Back then, being the person in an institution responsible for evaluation and data analysis could be lonely at times.

Following our time at OFFA our paths diverged, although we both continue to work in access and participation evaluation – Rae leading Villiers Park Educational Trust, a national social mobility charity, and Richard at the Office for Students (OfS). Different organisations, but with a shared commitment to, and belief in, the crucial importance of evaluation in helping to eliminate inequality in higher education.

Why evaluation matters

Access and participation work should be designed using the best available evidence about the challenges and barriers people face and the interventions that can best help to overcome them. But it’s hard to know if these interventions are working without collecting data and conducting an evaluation. Evaluation data provides a window into whether, why and how access and participation activities support people from disadvantaged and under-represented backgrounds to achieve their potential. Evaluation is also essential to ensure that limited resources are utilised most efficiently for the greatest possible impact.

The principles of good evaluation practice are the same as ever: embed evaluation in your activity from the moment you start thinking about your theory of change (which is the moment you start thinking about an intervention). Set clear, measurable outcomes. Evaluate what can be measured. Agree early what you need to know. Ensure your evidence feeds into your future decisions.

Progress in recent years

The evaluation journey has gathered pace in the last five to ten years. There is still a long way to go, but demonstrable progress has been made.

In higher education regulation, evaluation has moved up the agenda in recent years. While evaluation was always a requirement in HEFCE funding and OFFA access agreements, a shift in approach from 2014 brought a greater focus on evaluation and evidence-informed practice.

This policy shift helped set the ball rolling in the direction of the high-quality approaches and evidence we are seeing today. The OfS has raised the bar for higher education providers, and the evidence suggests that significant progress has been made in embedding evaluation into access and participation plans. The sector has made great strides thanks to a combination of this regulatory pressure and support, improvements in knowledge and capability, and the will and expertise of providers being mobilised to better effect.

The OfS’s approach to evidence and evaluation

Since its establishment, the OfS has been clear about the importance of evaluation, setting out a vision to drive improvements in access and participation through the use of high-quality evidence. In 2018, significant reforms to access and participation regulation included an increased focus on evidence-informed interventions and high-quality evaluation practice. Universities and colleges were set challenging expectations through access and participation plans.

The OfS has also invested in TASO (Transforming Access and Student Outcomes in Higher Education), a new what-works centre for the sector with a focus on generating stronger evidence to inform access and participation activity and supporting the sector to improve its evaluation practice. Finally, an example of the OfS’s commitment to evaluation is a major project to understand the impact of its national collaborative outreach programme, Uni Connect. One element of this is the recent Uni Connect meta review, which found a clear improvement in the quality of evaluation practice. The OfS will shortly publish the outcomes of its consultation on a new approach to Uni Connect and, alongside this, two new reports: one looking at learner attitudes and one surveying school and college engagement with the programme.

The contribution of the third sector: a case study from Villiers Park

The third sector has been demonstrating best practice in access and participation evaluation for some time, with organisations such as Villiers Park working with higher education providers to share expertise and experience. Higher education, and students, have benefited hugely from this collaboration.

At Villiers Park, evaluation is embedded in all interventions. As part of the charity’s practice of developing essential self-reflexive skills, the young people it works with keep a journal of their experiences. The journals have been developed using cognitively tested approaches and allow for continuous reflection. A benefit of this transparent approach to evaluation is that, along with minimising burden (not another end-of-module questionnaire!), it gives young people greater agency to change how they engage with a programme to achieve their self-identified outcomes. Additionally, as part of regular staff supervision, practitioners keep a reflexive journal so that they can continuously enhance their practice. The journals can then be analysed alongside destinations and other data to better understand not just what is happening, but why.

There are many transformations that can be tracked: growth in confidence, career planning skills, self-efficacy. All of these are intermediate outcomes that outreach practitioners can positively influence. Let’s be blunt: deciding whether to pursue a higher-level qualification is a huge decision influenced by many different factors. How confident can we really be in proclaiming that one particular summer school, say, made the critical difference? What can be measured, rather, is a young person’s confidence at the start of a programme and the progress they make during the course of it. Making spurious claims about the impact of individual projects based on limited evidence weakens the case for access and participation as a whole.
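To make the idea of measuring distance travelled concrete, here is a minimal sketch in Python. It is purely illustrative – the confidence ratings are invented, and summarising pre-to-post change with a simple paired effect size is just one of many ways such intermediate-outcome data might be analysed.

```python
# Illustrative sketch: summarising change in an intermediate outcome
# (self-reported confidence) rather than attributing a final decision
# to a single intervention. All data below are invented.

from statistics import mean, stdev

# Hypothetical 1-10 confidence ratings for the same eight participants,
# collected at the start and end of a programme.
pre = [4, 5, 3, 6, 4, 5, 2, 6]
post = [6, 6, 5, 7, 6, 7, 4, 6]

# Per-participant change ("distance travelled").
changes = [after - before for before, after in zip(pre, post)]

mean_change = mean(changes)
sd_change = stdev(changes)
# Cohen's d_z: a standardised effect size for paired (pre/post) data.
effect_size = mean_change / sd_change

print(f"Mean change in confidence: {mean_change:.2f}")
print(f"Standardised effect size (d_z): {effect_size:.2f}")
```

In practice, a validated scale and, ideally, a comparison group would be needed before drawing any conclusions; the sketch only shows the shape of the analysis.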

The importance of evaluation during the pandemic

Access and participation activity has not been immune from the state of flux that the coronavirus pandemic has brought. In response to the crisis, higher education providers moved swiftly to deliver outreach in different ways. Across the country, plenty of innovative practice has been developed at pace, from King’s College London’s work with Gypsy, Roma and Traveller children to Birkbeck’s move to rethink outreach with mature learners. Third sector organisations have worked hard to keep discussions around evaluation going through this challenging period – in summer 2020 Villiers Park held a Why Evaluate? conference, and in February 2021 it launched a new Community of Practice to showcase practitioner-led research in the further education sector, with a particular focus on social mobility and educational access.

The proliferation of genuinely innovative access and participation initiatives resulting from the pandemic was one of the surprise success stories of 2020. But the more innovative a project is, the less existing evidence there is to help predict its effectiveness. If we are to learn lasting lessons from these unprecedented times and disseminate them across the sector, we need robust evaluation and evidence.

Even if – arguably, especially if – an organisation has been forced to change its approach rapidly, it still needs to know whether the actions it took were the right ones. Yes, the evaluation process may need to be more light-touch than in normal circumstances due to a lack of time or resource – but it remains critical that all changes are thought through and evaluation measures embedded right from the start. Skipping these crucial steps is a false economy: if an intervention does not work as hoped, it could actually lead to worse outcomes for young people who were disadvantaged to start with.

The rate at which innovative but untested interventions are being rolled out, during a period in which resources are becoming increasingly tight, means that evaluation is as important as it has ever been. The job may have got more difficult, the routes may have changed, and new barriers may be getting in our way. But it is exactly in times like these that we need an even clearer understanding of the impacts we are having as the ground moves beneath our feet, to ensure that those most at risk are supported in the most effective ways.

Bringing it together – collaborating for impact

Without evaluation – without striving to understand how to design programmes and policies that truly benefit people – progress towards the goal of equal opportunities in higher education is much harder.

Here, two roads converge. Working together, drawing on the outstanding evaluation practice in the third sector and in universities and colleges, will help us realise this goal. Real progress has been made, and there is potential to build on this to ensure positive outcomes for students.

