
Why we should continue to measure ‘grade inflation’ – but ask different questions about it

  • 28 October 2024
  • By Tim Blackman

This HEPI blog was kindly authored by Tim Blackman, Vice-Chancellor of The Open University (@TimJBlackman). He writes in a personal capacity.

If education is about growth as a person, as it surely must be, then the worst message we can send a student is that because this is what you achieved in the past, that is what we expect you to achieve in the future.

In an interview some years ago I heard educationalist John Hattie comment on how students learn their place in class. They think, for example, that they are an ‘A’ student or a ‘C’ student. Hattie argued it should be the job of a teacher to upset this: to say it is not good enough to be a C student, I am going to help you become better than that.

In a piece in The Conversation last year updating his meta-analyses of what leads to successful learning, Hattie put it like this: ‘The most important thing for teachers to do is to have high expectations for all students. This means not labelling students as ‘bright’, ‘strugglers’, ‘ADHD’ or ‘autistic’, as this can lead to lower expectations in both teachers and students, but seeing all students as learners who can make leaps of growth in their learning’. 

Leaps of growth, however, are not what the English higher education sector’s regulator, the Office for Students, seems to want to see. If you were a C student at school, they expect you to be a C student at university. This is of course not explicit in OfS policy, but the sociologists reading this may be aware of Pierre Bourdieu’s concept of habitus, the habits of perception, classification and action that reproduce the dominant culture and availability of opportunities. This, I think, is what we are dealing with at the OfS. Part of this habitus is their one-size-fits-all regime of regulation by numbers, and if the numbers are missing just use the ones you have.

Hence the OfS’ recent analysis of degree classifications repeats their past approach of questioning the validity of first and upper second class classifications if they are out of line with graduates’ prior attainment at school or college. These are not directly questioned as misleading classifications, although see their media comment below; the OfS has had a habit of looking for headlines with its media releases that are often out of line with the more qualified content of its reports. What they have done is use statistical models that treat prior level 3 qualifications as an explanatory variable and degree classifications as an outcome variable. Unsaid but clearly implied is that we should doubt high degree classifications if they are not related to high prior attainment, especially if the difference is not the same across institutions or over time.

Know your place

In The Comprehensive University, I wrote that judging the quality of a higher education institution by how academically selective its admission requirements are is like judging the quality of a hospital by how well its patients are when they are admitted. While a hospital is expected to discharge patients with some added value from the hospital’s treatment, higher education institutions seem not to be expected to add value. They are expected to reproduce the patterns of prior social inequality that heavily affect the A level, Higher and BTEC grades their students arrive with. I am grateful to The Complete University Guide for providing me with some of the data I use below, but this guide helps prove the point. One of the indicators of a university’s performance is its academic entry requirements.

The OfS compares any increase over time in first and upper second class degree awards with what would be predicted by qualification on entry. They conclude that any rise in the proportion of higher classifications not predicted this way is ‘unexplained’. Although they acknowledge that ‘unexplained’, as a statistical term, means that the statistical model used is essentially deficient because it lacks explanatory variables, the Chief Executive of the OfS celebrates a reduction in ‘unexplained’ high classifications in the past couple of years, putting this down to universities’ measures to curb something they call ‘grade inflation’.
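The logic described above can be sketched in a few lines of code: regress degree outcomes on prior attainment and label the residual ‘unexplained’. To be clear, this is a minimal illustration with invented numbers, not the OfS’s actual model, which uses richer sector-level data and more categories of entry qualification.

```python
# Minimal sketch of a prediction-plus-residual model of the kind described
# above. All figures are hypothetical, invented purely for illustration.

def fit_line(x, y):
    """Ordinary least squares for y = a + b*x (simple linear regression)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
        sum((xi - mx) ** 2 for xi in x)
    a = my - b * mx
    return a, b

# Hypothetical institutions: average entry tariff vs. % firsts/2:1s awarded.
tariff = [120, 140, 160, 180, 200]   # average UCAS tariff of entrants
good_degrees = [68, 70, 75, 80, 86]  # % first/upper second awards

a, b = fit_line(tariff, good_degrees)

# The residual is what gets labelled the 'unexplained' component:
# everything the single explanatory variable fails to capture.
for t, g in zip(tariff, good_degrees):
    predicted = a + b * t
    print(f"tariff {t}: awarded {g}%, predicted {predicted:.1f}%, "
          f"'unexplained' {g - predicted:+.1f} points")
```

The point the sketch makes concrete is that ‘unexplained’ is simply whatever the chosen explanatory variable does not capture; it carries no information about whether the awards themselves were deserved.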

There are two other potential headlines:

  • Universities change assessment methods; students have less of a chance of achieving a good degree than in the past (despite working just as hard; or perhaps not, given that so many now have to take paid employment to pay the rent).
  • Universities’ quality enhancement processes stall, reversing a sustained period of rising student achievement in higher education.

The OfS links the recent decline in firsts and upper seconds to the ending of teacher-assessed grading during the pandemic. The implication is that ending this has been a good thing. At The Open University, however, we have found these measures narrowed disparities in outcomes associated with social disadvantage. As with the effect of BTECs in depressing degree outcomes compared to A levels, there are issues here with what and how universities teach and how they assess their students.

The problem is habitus

As I wrote in The Comprehensive University, ‘there is some evidence that students from different social class backgrounds use different strategies to learn, but that university lecturers recognise and reward middle-class strategies most’ (p. 40). This is also related to how academic culture values ‘being smart’ more than ‘developing smartness’, with ‘smartness’ itself imbued with social class assumptions.

The curriculum in many subjects is still too dominated by needing to memorise content rather than demonstrate being able to source knowledge and then apply it, with this dominance underpinned by ‘the prestige of learning in a research rather than a practice environment’ (p. 56). However, my main argument in The Comprehensive University was the potential for learning gain if we moved to a comprehensive university system that mixes students by prior academic attainment rather than, as at present, stratifying them into universities operating different levels of academic selection. This is essentially a social stratification system not an education system.

A response to this criticism of the OfS’ approach could be that there are unexplained patterns in the data, which are suspicious (in the sense of having nothing to do with the degree the graduates deserved). For the OfS this is more suspicious because the patterns are not the same across the sector or over time. If some universities can limit their ‘grade inflation’ why can’t others?

But surely we should expect some further analysis of what might be missing from their models. I had a look at the 2023 TEF outcomes and explored whether these showed any relationship with the proportion of ‘unexplained’ high classifications by institution. There are different ways of slicing the data for this kind of analysis but if we take the average of the last five years of ‘unexplained’ variation then Figure 1 shows little relationship between the quartile of ‘unexplained’ variation and TEF outcome. Gold institutions, for example, are equally represented in all but the second quartile, while Bronze institutions are equally represented in the lowest and third quartiles and, to a lesser extent, the second and upper quartiles. If institutions have somehow been trying to make it easier for students to do well, this is not reflected positively or negatively in these independent assessments of their student outcomes and experience.

There is, though, a relationship with admission requirements. I found a statistically significant negative correlation with the entry standards score by institution as listed in The Complete University Guide. About 16 per cent of the variation in ‘excess’ first and upper second class awards was ‘explained’ by the average UCAS tariff score of new students entering the university. This was about the same for the most recent two years and the preceding three years of higher-than-expected degree outcomes. In other words, as the tariff score falls, ‘unexplained’ increases in degree achievement tend to rise.

There may be a ceiling effect that explains some of this, that is, if there was a ‘super first’ class of degree the high prior attainment students would be monopolising that classification. But we do not know, which is different to ‘unexplained’. Teaching and assessment, for example, may have changed in ways that stop disadvantaging students who for whatever reason did not do well in school or college assessments.

Conclusion

I am not arguing that this type of analysis should not be done. I am arguing that the problem is the habitus in which it is conducted and that shapes it. Surely the OfS should be looking at how these less advantaged students are being supported to achieve better than the OfS would expect. Even if this found an explanation such as, ‘we are so overworked that we are less diligent with our marking and give the many students who struggle the benefit of the doubt’ it would be important. As Hattie’s research establishes definitively, it is teachers – properly trained and supported – who are the main influence on how well a student achieves. His results are from schools and not higher education, but there is no reason why they should not apply to higher education, and certainly no reason why school and college attainment should ‘explain’ higher education attainment if the task of education is the growth of our students.


5 comments

  1. There are two important matters in play here:
    (a) Pattern of degree classifications (and their movement over time)
    (b) Relationship between A level performance and degree achievement.

    Both ARE important matters BUT they are distinct and should be kept separate. My sense, not least from Tim Blackman’s fascinating analysis, is that the OfS is conflating the two matters. My sense, too, is that this conflation has been going on for decades in higher education policy circles.

    On (b), much work was conducted 40 years ago in examining the relationships between A level and degree performance, and sport was to be had in discerning (e.g.) differences across subjects – the relationships were weaker in the humanities as compared with the sciences.

    Furthermore, the modelling had difficulty with subjects in higher education not taught at school, or where students studied subjects in higher education different from those they took at A level, since – in either case – there could be no such relationship (between A level and degree performance).

    BUT this whole issue is premised wrongly from an educational point of view. (Here, I go much further than Hattie.) IF higher education is genuinely ‘higher’, and is bringing students into modes of openness, thinking, independence, authenticity, understanding, criticality, responsibility and Being that are not expected at A level (as it should), there is no reason to expect ANY relationship between A level and degree performance.

    To the extent that we do find stable patterns of relationships between A level and degree performance, to that extent we must suspect that we are not in the presence of a genuinely higher education at all.

    In such a situation, higher education is not living up to its name and is falling short of the expectations that may rightly be entertained of it.

    Ron Barnett ([email protected])
    PS: I failed all my exams at school and university repeatedly. It was only when I reached the research degree stage that I was able to flourish and fly. But why should I have had to wait until that stage? Why was a space for a proper self-directed education not available before then?

    (There are, of course, sociological explanations to explain this in terms of education’s ‘cooling out’ functions. But they amount to a saddening commentary on the educational value of so-called ‘education’, at least as it has come to be shaped over the last 200 years in the UK. Even higher education is held back from thinking through and providing an education that is genuinely ‘higher’ in the sense that I am intimating here.)

  2. Gavin Moodie says:

    Thanx for this informative post.

    I haven’t read up on the Office for Students’ background, so I am unsure what problem it is trying to solve – I couldn’t find it in its recent report.

    I presume that all assessors at all higher education institutions are assumed or required to assess by criteria rather than norms and that one common explanation of grade inflation is that assessment criteria have been relaxed.

    If that is the case it seems to me that there are much more direct ways of evaluating assessment standards. I understand there are thought to be problems with the external examiners’ system, so that is where I would start.

    Hattie (2015) has considered the applicability of visible learning to higher education. Schneider and Preckel (2017) undertook a systematic review of meta-analyses of variables associated with achievement in higher education to find remarkably similar results to Hattie’s for school education.

    Interestingly, Schneider and Preckel did not find much difference by discipline, tho there are few rigorous studies by discipline. Tomcho and Foels (2008) report a related study of teaching methods in psychology, but it does not report the effectiveness of different methods.

    Hattie, J. (2015). The applicability of visible learning to higher education. Scholarship of Teaching and Learning in Psychology, 1(1), 79-91.

    Schneider, M., & Preckel, F. (2017). Variables associated with achievement in higher education: A systematic review of meta-analyses. Psychological Bulletin, 143(6), 565-600.

    Tomcho, T. J., & Foels, R. (2008). Assessing effective teaching of psychology: A meta-analytic integration of learning outcomes. Teaching of Psychology, 35(4), 286-296.

  3. Gavin Moodie says:

    If there is no reason to expect any relation between A level and degree performance is there any reason to expect students to complete A levels before undertaking a degree?

  4. James Fuller says:

    A really interesting summary here, thanks for sharing. It has struck me how relatively uninterested Universities are, compared to schools, about “performance data” and I did wonder how long it would take OfS to get more “ofstedy” about performance measures. The fact is that the whole education system (SATs onwards) and the numbers that we use to judge attainment of young people are built upon extremely shaky foundations.

  5. (replying to Gavin Moodie):

    – in general, absolutely not (that students should be expected to study A levels before university). Only in a limited range of subjects, where there is definite epistemic progression from A levels, is it to be expected, and even then it may not be necessary.

    The former polytechnics and the Open University have long admitted students without A levels. Moreover, many subjects at university are simply not available at A level.

    So the relationship between A levels and degree performance SHOULD be weak, both on technical grounds and on educational grounds.
