A guest blog kindly contributed by Tim Blackman, Vice-Chancellor, Middlesex University
Degree classification and inequalities in higher education are the focus of two recent Office for Students reports and regulations. Both these issues are informed by statistical analyses by the OfS that make much of ‘unexplained variation’. However, very different approaches are taken depending on the issue. In this blog I explore why there is this difference and whether the OfS’ much vaunted commitment to using evidence may be obscuring other motivations that mean the evidence is used selectively, notably to achieve media headlines about being tough with providers.
On 19th December last year the OfS media-released its study of trends in degree classifications with the eye-catching headline, ‘Universities must get to grips with spiralling grade inflation’. The report, Analysis of degree classifications over time, shows how a number of possible explanatory variables correlate with the proportion of good degree awards at provider level for the years 2010/11 to 2016/17. Although the variables selected as ‘explanatory’, such as qualifications on entry and POLAR quintile, have little to do with the quality of teaching and learning at an institution, they are given great explanatory significance by the OfS because they correlate with the proportion of firsts and 2:1s awarded.
This is a very problematic line to take. The aim of education should surely be to reduce – even eliminate – any relationship between outcomes and factors such as prior attainment, ethnicity or deprivation, rather than reproduce these patterns. But the OfS seems to have a position on this issue that these factors should largely determine degree outcomes and have a fixed effect over time.
Thus, they take 2010/11 as their base year, work out the proportion of variation in firsts and 2:1s that their explanatory variables account for, and then look at each subsequent year to see if these variables account for the same proportion of this variation. What they find is that the variables account for less and less of the variation in degree outcomes over time. Rather than this being due to better and better learning outcomes (which might be expected given the successive rounds of annual review and quality enhancement conducted at all higher education institutions), the OfS states: ‘The analysis corroborates concerns about grade inflation across the higher education sector’ (p. 4).
Possibly. But are the OfS’ explanatory variables up to the mark? They use the term ‘unexplained variation’ to cast suspicion on this apparent improving trend in students’ academic performance.
In statistics, ‘unexplained’ means no more than that a model is missing the variables needed to account for all the variation in an outcome. Nor does ‘explained’ carry its everyday meaning: it signifies covariation between independent and dependent variables that may or may not be causal.
Almost certainly, much of the unexplained variation in degree outcomes has something to do with variations in local practice and context that are not being measured by the variables the OfS use in their models. Despite this limitation of their models, the OfS imply that this local practice can only be bad, i.e. standards have been slipping, possibly to boost league table performance. They say little about local context even though real explanation needs examination of specific cases and much more than just correlating variables.
In Table A1 of the report they list, provider by provider, the amount of variation in degree outcomes that by 2016/17 is unexplained by their explanatory variables. This varies substantially across providers and is negative in a few cases. If we take the OfS position that there is something suspicious about this, then things seem much more suspicious at, say, the Conservatoire for Dance and Drama – with a 30.5 percentage point change in the variation in firsts and 2:1s remaining unexplained – than at, say, Bishop Burton College, where the figure is -16.9.
If I had been undertaking this analysis I would have wanted to look a lot closer at the variation in Table A1. What might explain these differences in the extent to which the vast majority of providers broke free from the OfS’ determining explanatory variables? Better teaching and learning, perhaps, or students working harder – and therefore not necessarily a bad thing.
I decided to have a look at one possibility, the ethnic mix of providers’ student populations. There is interesting evidence from a lot of mainly US studies that ethnic diversity on campus improves academic attainment, marginally but enough to be statistically significant. So I ran a correlation between the unexplained variation for 2016/17 in Table A1 and the proportion of students at each provider who are white compared to non-white (taking the latter from the 2017 Bath University report Diverse Places of Learning?).
The result is intriguing: a significant correlation of -0.5, indicating that 25% of the ‘unexplained’ variation problematised by the OfS is actually ‘explained’ by ethnic diversity. As white students become less concentrated at provider level, degree outcomes tend to deviate more and more positively from the OfS’ explanatory variables. This may just be due to better performance by non-white students (in which case we should stop talking about this as ‘grade inflation’) but it is worth investigating further.
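For readers checking the arithmetic: the 25% figure is simply the square of the correlation coefficient (the coefficient of determination). A minimal illustration:

```python
# Correlation reported above between provider ethnic mix and the
# 'unexplained' variation in Table A1
r = -0.5

# The share of variance one variable accounts for in the other is the
# square of the correlation coefficient (R-squared)
variance_explained = r ** 2
print(f"{variance_explained:.0%}")  # prints 25%
```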
In fact, when framed as inequalities the OfS takes a different line about degree outcomes. In a new regulatory notice, media released on 28th February with the instruction that ‘universities must eliminate equality gaps’, they set out 53 pages of detailed requirements for providers’ access and participation plans. These include meeting national targets published last December in their report A new approach to regulating access and participation in English higher education.
In this report, a different approach to explanatory variables is taken. Rather than variables such as prior attainment being regarded as legitimate determinants of degree outcomes, they are presented as ‘structural’ factors that cause inequalities in participation rates and degree outcomes, including by disability and ethnicity. The OfS considers these should and can be eliminated, albeit over a long time period (target trajectories are presented to 2038/39, the year when inequalities will be eliminated). These are very ambitious targets which, despite the announcement of a new national ‘evidence and impact exchange’, are presented with no theory of change that explains how these lines on a series of graphs are plausible.
Just as concerning for me is the inclusion of the Behavioural Insights Team among the three partners selected to run the new exchange. This is the organisation that in 2017 used Government funding to contact high-achieving pupils from low-income families, urging them to apply to the most selective universities – effectively acting as a marketing agency for the Russell Group, and doing something that would surely be politically impossible if it were about encouraging pupils to go to grammar schools rather than comprehensive schools.
While prior attainment is regarded as a legitimate explanation for degree outcomes in the report on degree classification, it is regarded as a cause of inequality in the access and participation report. I am sure that the OfS would not want to be accused of spinning its analyses according to the arguments it wishes to take to the media, but it seems odd that it is not welcoming the increase in firsts and 2:1s that would inevitably follow from narrowing these inequalities as more disadvantaged students catch up with their more advantaged peers.
Perhaps that is not the plan. The OfS might be expecting providers to reduce the attainment of more advantaged students to avoid ‘grade inflation’. Strange as that may sound, it is not outside the realm of possibility if we look at their plans for participation. Figure 1 in A new approach to regulating access and participation in English higher education shows that the OfS is aiming to close the participation rate gap across POLAR quintiles among the most selective institutions partly by reducing participation from the highest participation POLAR quintile.
In due course this will get caught up with the policy response to the rapidly growing number of 18 year old school and college leavers in the UK population from 2020, and whether a proportionate expansion of higher education is affordable or desirable. There is an interesting reference in the OfS’ December Board minutes to its targets requiring some ‘rebalancing’ of students between providers. Perhaps they have been reading my HEPI pamphlet The Comprehensive University, which argues for just that on both equity and pedagogic grounds. But as I also argue, this would need strong policy intervention in providers’ admission decisions. While there are currently no legal powers for the government in England to determine criteria for the admission of students to higher education courses, the OfS has been chipping away at this with its pronouncements on unconditional offers, and in Scotland institutions are being expected to operate lower tariff quotas on every programme to diversify intakes.
The OfS clearly wants to reduce inequality, although it needs to explain much more about how it intends to achieve this, rather than just drawing lines on graphs with no explanation of the theory of change beyond better use of evidence and more collaboration between providers. It also needs to take a more balanced approach to its statistical analyses, seeing ‘unexplained’ variation in degree outcomes, for example, as offering potential answers to how to improve outcomes as well as potential questions about changing standards.
The OfS’ use of ‘unexplained’ variation beyond the neutral statistical meaning of the term – to imply dodgy practice in the case of good degree awards, or low-hanging fruit in the case of action to narrow inequalities – is very questionable. Regarding the latter, the OfS argues that – in contrast to ‘structural’ issues such as prior attainment – the existence of unexplained variation means ‘non-continuation and degree attainment are issues over which providers have more direct control … We are therefore setting particularly ambitious targets in these areas’.
With both grade inflation and inequalities, the OfS takes the position that the smaller the unexplained variation at provider level the better the situation. In the case of degree classifications, the more predictable that degree awards are, allegedly the more reliable they are, and predictability is regarded here as a good thing. In the case of inequalities, the less predictable inequalities are, allegedly the more potential there is to reduce those inequalities, and predictability is regarded as a bad thing.
Is the OfS just playing to the media with these different approaches, issuing ‘instructions to deliver’ with little thought to how delivery is expected to happen? In the shorter term at least, they conveniently assign all the agency to providers, a scenario in which the OfS cannot fail but institutions certainly can.