
Measures of success – by Susan Lapworth (interim CEO of the Office for Students)

  • 30 May 2022

This is an edited version of the remarks by Susan Lapworth, Interim Chief Executive of the Office for Students (OfS), to the HEPI / Advance HE breakfast seminar in the Palace of Westminster on 24 May 2022.

Are we measuring the wrong things? Respondents to our recent consultations about regulating student outcomes sometimes suggested we might be.

Some suggested that continuation and completion are not the only measures of quality, and can be viewed, at best, as poor proxies for quality. Some suggested that these measures fail to take account of life circumstances, which may result in certain student groups not completing their studies. Others suggested that the OfS is proposing to apply too narrow a definition of ‘positive outcomes’ which fails to recognise the many benefits of higher education outside of graduate employment.

Our proposals are still subject to consultation and we’ll make final decisions about our future approach over the coming weeks.

We proposed using three indicators to measure student outcomes. First, continuation, the proportion of students who continue on their course after the first year; second, completion, the proportion who complete their course; and finally, progression, the proportion who go on to professional jobs or further study. 

If an institution has strong performance on these indicators, the OfS is unlikely to have a regulatory interest in those particular outcomes.

In those circumstances, institutions may well want to focus on the wider value that students gain from their education. They may feel that the OfS's student outcomes measures do not represent the richness of their courses, and they may be interested in other measures of value for students and other stakeholders.

But that’s only part of the picture. There are circumstances in which the OfS’s student outcome measures are essential in protecting the interests of students and taxpayers. To take just one of those indicators, we can see large institutions with thousands of students where only 75% of students on first degree courses continue from year 1 to year 2; only 64% of level 4 and 5 students continue; or only 65% of taught Master’s students continue. 

That represents a large number of students leaving their courses early. These are students who are, quite likely, leaving with disappointment and debt. Many of us would be uncomfortable defending that level of performance. 

So, are we trying to measure the wrong things? No, not for students thinking of studying at those institutions. These indicators allow the OfS to set a firm floor below which performance rightly attracts regulatory attention. Students should not be able to choose courses, and draw down student loan funding, where outcomes are weak.

But to what extent should this data be balanced by other assessments? My answer is: always. Our indicators can only ever be one element of our judgement. Consultation responses reminded us about the complexity of the higher education sector. That complexity means that any institution’s numerical performance must be understood within its context. We can’t take a blunt data-only approach.

We would want to understand if there is anything about an institution’s courses or students that has a bearing on how we should interpret its performance. We would consider an institution’s benchmarked performance to understand the characteristics of its students. We would also be interested in wider contextual factors, for example whether a rapid change in local employment trends might account for weaker performance for students’ employment outcomes. 

We would also want to understand the actions an institution has already taken to improve its performance, and the credibility and sustainability of those plans. 

Taking all of that together, we would make a rounded judgement about an institution, on the basis of its numerical performance and those contextual factors.

My final point is that none of this can be static. Our indicators must continue to be fit for purpose in a changing policy environment. For example, the Government’s ambitions for the Lifelong Loan Entitlement will significantly change the shape of higher education provision in England. And so, what and how we measure has to change too – and those are complex policy questions that we’ll need to grapple with over the coming years.

1 comment

  1. Michele Underwood says:

    I feel that you ignore the other contexts which lie behind students discontinuing.
    One, the fact that they realise after year one that they don't want to be saddled with debt, so get out whilst the amount is lower;
    Two, that HE is not for them, or not at that time: the expectation and encouragement that you must attend university at 18 is narrow-minded at best. More people should leave education for a few years and explore what is of value to them, not continue being sifted and propping up a system which values discrimination;
    Third, societal discrimination is mirrored in the university sector, so again breaking down boundaries is sometimes more wearing than it is valuable. And remember, universities don't exist outside the cultural context of their country, so if England is racist, so will be its institutions (your office for one);
    Fourthly, education should be less about quantitative measurement and more about widening people's minds. But then, I guess, if that happened there would be more challenges to the way behemoths rule.
