If there is one issue on which most academics agree, it is how not to judge the value of different university courses. Unfortunately, no one can agree on the right way to do it.
This matters because the lack of consensus cedes power to people who may care less about the health of our sector.
The Government is keen to use the earnings of graduates: if a course produces low earnings, its value is questioned. But the data on earnings are poor, showing only how people have done early in their careers and ignoring self-employment and those who work abroad.
There is a more fundamental problem too. Consider nursing. Policymakers say it is such an important profession, it should be restricted to graduates. The same policymakers also, however, keep nurses’ pay down. Low pay is thus a political decision; it does not reflect the quality of the training.
A second way to judge the value of courses is to survey students. You can ask them if they are satisfied with their course, as the National Student Survey does, or whether they think they are getting good value for money or even if they think they are learning much. Yet this only provides an immediate snapshot. Many people change their opinion of their student days after graduation, once out in the big wide world.
A third way of assessing different courses is to look at the number of contact hours or average class sizes. But neither of these is an accepted test of educational value. Moreover, each invites gaming: if you measure universities by contact hours, institutions will increase their hours but expand their class sizes; if you judge them on class sizes, they will shrink their classes but cut the hours. It has been suggested that contact hours and class sizes should be combined in a new teaching intensity metric, which is – in my view – worth considering. But this is controversial and good comparable data are not currently easy to obtain.
A fourth way to measure individual courses is to judge the quality of teaching and learning through a basket of different measures. This is what the Teaching Excellence Framework does. But the new subject-level pilots have been condemned by the great and the good. The head of the Royal Statistical Society has complained there is a risk of:
distorted results, misleading rankings and a system which lacks validity and is unnecessarily vulnerable to being gamed.
So there is widespread agreement that current ways of measuring teaching and learning are poor. Yet without a way to judge the quality of different courses, the deep scepticism about the expansion of higher education that exists in the House of Commons, in much of the media and among the general public will continue.
In response, the university sector risks falling into the trap of pretending every course at every university is perfect. I do not believe this to be true. As I travel around the country visiting institutions, my perception is that every university has some excellent courses and every university has some courses that are not yet excellent.
If we pretend all is rosy, policymakers will assume they have free rein to use whatever metrics and whatever policy levers they want. One Government Minister recently suggested people with lower A-Level grades should simply not be allowed to go to university. Lord Agnew asked:
Why are we letting kids go to university with three Es at A-level? Why? It’s a lunacy.
This is a dangerous idea because A-Level grades are often a poor measure of future potential, as another Minister, the Universities Minister, has acknowledged.
Instead, institutions could put forward measures the sector believes have some validity for assessing the quality of different courses. It seems to me that the place to start should be student engagement, as this reflects how hard students work. It is not just work in lectures, labs or seminars that matters: independent learning is a defining feature of higher education. So we need to look at total workload, which combines contact hours, independent study and even placements.
At the Higher Education Policy Institute, we have been asking students about such things for over a decade, originally in conjunction with the Higher Education Academy and now with Advance HE. Our data from 2018 suggested students work on average a little over 30 hours per week. But nearly one in four students on full-time courses has a total workload of under 20 hours a week. This is around half the working hours expected of someone in full-time work. It is also half of what regulators have said in the past should be the norm for full-time study.
Surely even the most efficient and quick learner on a full-time course should not be able to complete all their work in less than half the recommended time? Perhaps a student working at such low levels is better described as part-time than full-time? Isn’t work some way above 20 hours the bare minimum that should be acceptable for someone on a full-time course? After all, we know outcomes are generally best when students work between 30 and 39 hours a week (and deteriorate a little above this, as some students seem to work harder than is good for them).
So, for me, one answer to the perennial question ‘what is a Mickey Mouse degree?’ is: any course, irrespective of discipline, that fails to make its students work sufficiently hard.
No one could pretend such a measure would provide fine-grained comparative assessments of different courses. But it might just help build a consensus around the idea that there is a minimum level of commitment necessary for successful higher-level study, as well as a minimum level beneath which a true full-time higher education experience may not exist.
I expect some people will respond by pointing out that some students need to do a considerable amount of paid employment, limiting their time for academic work. That is a sad fact, which reflects shortcomings in the maintenance support available. But we must not let it block an important conversation about student engagement. Quite the opposite: discussing workload could usefully illuminate the debate around financial support for students while we continue to await the results of the Augar review of post-18 education.
We expect all these issues, and more, to come up at the HEPI Annual Conference, which is taking place in central London on Thursday, 13th June 2019. At the event, we will be launching the 2019 HEPI / Advance HE Student Academic Experience Survey and hearing from Chris Skidmore, Nicola Dandridge, a range of vice-chancellors and many other voices. I look forward to seeing you there.
Nick Hillman is Director of the Higher Education Policy Institute.