Towards an educational gain approach to TEF

Author:
Johnny Rich
This blog was kindly authored by Johnny Rich, Chief Executive of Push and Chief Executive of the Engineering Professors’ Council.

You can read HEPI’s other blog on the current OfS consultation here.

The Office for Students is currently consulting on plans to use the Teaching Excellence Framework to regulate fees and student numbers. There are two problems with this. Firstly, the TEF is a poor measure of what deserves to be rewarded. Secondly, even if it weren’t, using fees as rewards will damage the higher education sector.

Paul Ashwin has already dismantled the notion that TEF has the heft for such heavy-lifting. He correctly criticises its broad institution-wide sweep, its data time lags, its susceptibility to gaming and so on. At its heart, the TEF is largely dependent on metrics that are, at best, questionable proxies of how effectively universities perform their core educational purpose. These are then reflected in four cliff-edged, unnuanced ratings.

Hanging fees on this hook is a weighty burden, and it’s a hook that’s stuck to a wall with Blu-Tack. 

But before we dismiss the idea faster than a toddler being offered broccoli, it’s worth considering what it would take to make it easier to swallow. Palatable, even.

To this end, it’s worth taking a step back. The purpose of teaching – especially excellent teaching – is surely to see that learning is achieved. And, given that the current framework relies so heavily on outcomes as the indicators of teaching excellence, surely what TEF is really trying to appraise is how well universities support learning gain.

In the early days of TEF, until 2019, HEFCE explicitly led a hunt for a holy grail metric or algorithm for ‘learning gain’. The quest concluded that learning gain was not a simple one-dimensional thing. Rather than being an attribute of a course (let alone a whole university), it was inherently a measure of a relationship between a student and the education they receive. A function rather than a point on a graph.

No single metric would work for different courses, different institutions and different students.

Having one overall TEF rating per institution, with little room for context, creates an incentive structure that makes it risky for universities to try anything new.

Instead of universities asking themselves how their educational experience might be improved for their students, the safer question is 'What gets gold? Let's copy that' or 'Let's stick with that'.

And instead of thinking about how they could diversify to offer something innovative to students who have been traditionally underserved by higher education, it’s less risky to try to recruit whatever students are historically most likely to succeed.

That has a chilling effect on innovation and diversity in the sector, especially when coupled with the effect of rankings, which drive institutions to emulate the so-called 'best' and to count what's measured rather than measure what counts, as Prof Billy Wong brilliantly explained in his recent HEPI blog. It is ironic that one effect of the marketisation of higher education has been to increase homogeneity across the sector, rather than competition driving universities to seek out niches.

We need to return to the quest for a multi-dimensional measure of learning gain – or, as it is now being called, ‘educational gain’ – the distance travelled by the student in partnership with their institution. Prof Wong’s blog accompanied the publication of a paper outlining just such a new approach. This – or something similar – could give the OfS the load-bearing hook it wants.

In the spirit of offering solutions, not just criticisms of the OfS's plans, I propose that, instead of a TEF with stakes stacked high like poker chips, the OfS could define a 'suite' of metrics (most of which already exist and some of which are already used by the TEF) that it would regard as valid measures of different dimensions of educational gain. These would be benchmarked by socio-economic background, region, discipline mix – or whatever is relevant to the metric in question.

Each institution regulated by the OfS would need to state which measures from the suite it thinks should be used to judge its educational gain. Some would veer towards employment metrics, others would champion access and value-added, and others would aim for progression to further study. Most, I suspect, would pursue their own multi-faceted mix.

Whatever selection they made would be based on the institution's mission, and they would have to say not only which measures should be used, but also what targets they believe they should achieve.

The OfS’s role would be, in the first instance, to assess these educational gain ‘missions’ and decide whether they are sufficiently ambitious to deserve access to fee funding and, subsequently, to assess over time whether each institution is making satisfactory progress towards its targets.

This is not as radical as it may sound. The OfS already operates a similar approach in inviting universities to define goals from a preset list in their Access and Participation Plans, although in that instance the list is made up of risks rather than targets.

If the OfS feels the bronze/silver/gold signalling of the TEF is still important, it could still give awards based on level of achievement according to the institutions’ own sufficiently ambitious terms of success.

This would encourage, rather than dampen, diversity. It would be forward-looking rather than relying on lagged data. And it would measure success according to a sophisticated assessment of the distance travelled both by institutions and by their students.

If this were the hook from which the OfS wanted to dangle funding carrots, it would drive excellence: each autonomous institution would be encouraged to consider how to improve the education it individually offers and to chase that, instead of palely imitating familiar models.

However, even an educational gain-driven version of TEF still leaves the second problem I mentioned at the start.

How would using the TEF to regulate fees damage the sector?

On the one hand, ‘gold’ universities would win higher fees (relative to other institutions at least). Given they are succeeding on the fees they’re already receiving, it would seem an inefficient use of public funding to channel any more money in their direction, as apparently they don’t need it to deliver their already excellent teaching.

On the other hand, for those universities that are struggling, a lack of financial resource may be a significant factor, either in their lower assessment or in their ability to gain ground in future. Denying funding to those that need it most would condemn them to a spiral of decline.

The effect would be to bifurcate the system into the gold 'haves' and the bronze 'have-nots', with the gap between the two camps growing ever wider, and the silvers walking a tightrope in between, trying to ensure they can fall on the side with the safety net.

An educational gain-based approach to TEF wouldn't solve this problem, but – as I've outlined – it could provide a system to incentivise and regulate excellence, one that would mean the OfS doesn't have to resort to creating a binary divide through a well-intentioned but inefficient and unfair allocation of limited resources.
