In this blog, Peter Scott, Commissioner for Fair Access, Scotland, and Professor of Higher Education Studies at the UCL Institute of Education, responds to HEPI’s recently published Debate Paper, Designing an English Social Mobility Index, written by Professor David Phoenix, Vice-Chancellor of London South Bank University.
Most league tables measure characteristics which tend to reflect, directly or indirectly, existing advantage – birth, family and attendance at a high-performing school, which for individuals are often strongly correlated with smoother access to elite jobs; and, for institutions, historical reputations that translate into more competitive student recruitment and an already established critical mass of research activity that translates into research ‘excellence’. There is always room, therefore, for a league table that goes against the grain by attempting to measure what institutions achieve in terms of ‘added value’ for their students. The Social Mobility Index proposed by David Phoenix is potentially such a league table.
Any index, and league tables based on it, must pass three tests. The first is clarity about what is being measured – in this case, social mobility. The second is the methodology adopted, both in terms of the quality and relevance of the data and the way in which that data is used (principally the weighting of various elements). The third is the credibility of its results, not simply in common-sense terms of ‘what feels right’ but also in the sense that relative positions in the league table are commensurate, i.e. institutions have achieved them for broadly similar reasons.
‘Social mobility’ is inherently a much broader and more contestable category to define and then measure than, for example, average entrance grades, employment rates or REF results. Whether or not we like these other indicators, they are straightforward and define themselves. Despite this potential slipperiness, ‘social mobility’ is very popular in wider political and public conversations. To a large extent it has displaced the rather different idea of ‘social equity’, with its more alarming resonances about the need for a sustained assault on inequality. In contrast, social mobility is assumed to be almost exclusively ‘upward’. But, if its purposes, as Professor Phoenix suggests, are to enable individuals to reach their personal and professional potential and to increase productivity, this only works if the supply of higher-paid jobs increases at the same rate as the production of graduates. Otherwise some ‘downward’ social mobility will be required to balance the ‘upward’ mobility, which is a much tougher ask.
The evidence suggests that since 2008 the labour market has struggled to generate the required number of higher-paid jobs, although any deficiency has been made good to some extent by the inflation of job titles – ‘admin assistants’ become ‘marketing executives’ and so on. Of course, it is possible the skill content of such jobs may have increased, but that is unlikely to be the whole story. On the broader canvas of the United States, middle-class incomes have stagnated in real terms for decades despite – cynics might argue, because of – remarkable gains in productivity. Without getting more deeply into these important questions, it is important to recognise that ‘social mobility’, measured in narrow economic rather than broader cultural terms, is something that higher education, let alone individual institutions, does not determine on its own.
The methodology used to construct the social mobility index in this paper is adapted from the well-established US Social Mobility Index. Two of the five variables in the US index – tuition fees and endowment – have been left out. The first is a constant, not a variable, in England, and the second is largely absent (outside Oxford and Cambridge). Three remain – access, continuation and graduate salaries. The data for the first two are derived from the data returns made by institutions in the access and participation plans they submit to the Office for Students.
On the first element in the index, access, the data is based on the proportions of students from the two bottom quintiles of the Index of Multiple Deprivation (IMD) – the 40 per cent most deprived postcodes. Rightly, the report uses IMD rather than POLAR, which, although generally highlighted by the OfS, tends to be self-referential, i.e. it measures itself. But both IMD and POLAR suffer from the disadvantage of being area-based, which raises the possibility of false positives and false negatives, particularly outside densely populated cities. In my work as Commissioner for Fair Access in Scotland I have argued strongly for using area-based metrics, because multiple deprivation is deeply embedded in communities. But there is also a case for using measures that relate to individuals, notably Free School Meals (FSMs), alongside these area-based metrics.
For its second element, continuation, the index uses continuation rates from years 1 to 2 for the same two groups of students. Although it is true that the bulk of wastage takes place at this stage, the picture is not complete because it does not cover the entire student journey.
On the third element, graduate salaries, Longitudinal Education Outcomes (LEO) salary data is used. But it has two shortcomings – it does not capture longer-term salary build-up, which is a feature of some career pathways, and it is not available at IMD level. As a result the compilers of the index use whole-institution LEO figures, which could produce a significant distortion, particularly in the case of institutions that only enrol small numbers of students from socially deprived backgrounds. Whole-institution LEO salary data is also strongly influenced by subject mix. Reward pathways are very different in, for example, art and design, engineering or nursing and other healthcare professions. A more fundamental objection to relying exclusively on graduate salaries data is that the correlation between highly paid and high-skill jobs is loose at best, and that highly paid jobs are not necessarily the ones of most value to society, as the COVID pandemic has made clear.
The potential weaknesses in all three data sources highlight the dilemma the compilers of the index face. In terms of the data they have little choice but, in an old newspaper saying, ‘to go with what they’ve got’. But these data limitations need to be borne in mind when considering the overall credibility of the league table produced by ranking institutions according to the index.
That is the third test. A Social Mobility Index that ranks King’s College London 29 places higher than the University of East London clearly has its work cut out to convince people. That doesn’t make it wrong, of course. But if you want such an index to carry real weight among policy makers and, in particular, potential students and their families and also employers – and I very much do – it has to pass some kind of common sense ‘that feels right’ test. Otherwise it will get ignored and be seen as just another marginal league table that is trying to distract attention from the ‘real’ ones, which as I said at the start measure pre-existing advantage and not much else.
This index as it is currently constructed has two weaknesses. The first is a sensitivity to its weightings that is volatile even by league table standards. Tweak the weightings just a little and you get very different results. Of course, that is an inevitable characteristic of all league tables – the devil is inside the weightings ‘black box’, which most readers never notice even when the weightings are clearly set out, as they are with this paper. The second weakness is more serious. The indices for different institutions are made up very differently. University X may perform rather modestly on access but very high on continuation and, especially, graduate salaries (above all when the average is for all graduates). University Y, in contrast, may perform wonderfully well on access, moderately on continuation and relatively poorly on graduate salaries. Yet they may appear to be delivering similar amounts of ‘social mobility’.
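The sensitivity to weightings can be made concrete with a small sketch. The institutions, metric scores and weightings below are all invented for illustration; they are not drawn from the paper's data, and the point is only that a modest tilt of weight towards one element can reorder a composite ranking.

```python
# Illustrative sketch with invented data: a composite index built from
# access, continuation and graduate-salary scores, showing how a modest
# shift in the weightings reorders the institutions.

institutions = {
    "University X": {"access": 0.30, "continuation": 0.85, "salaries": 0.90},
    "University Y": {"access": 0.90, "continuation": 0.75, "salaries": 0.50},
    "University Z": {"access": 0.60, "continuation": 0.80, "salaries": 0.70},
}

def rank(weights):
    """Order institutions by their weighted aggregate score, highest first."""
    scores = {
        name: sum(weights[metric] * value for metric, value in metrics.items())
        for name, metrics in institutions.items()
    }
    return sorted(scores, key=scores.get, reverse=True)

equal = {"access": 1 / 3, "continuation": 1 / 3, "salaries": 1 / 3}
tilted = {"access": 0.28, "continuation": 0.28, "salaries": 0.44}

print(rank(equal))   # University Y leads on equal weightings
print(rank(tilted))  # University X leads once salaries carry a little more weight
```

Here the top and bottom institutions swap places even though the weight on salaries moves by only about a tenth, which is the 'black box' problem in miniature.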
More, much more, in a similar vein could be said about the shortcomings of many more established league tables. That doesn’t stop them getting noticed, and obsessively acted upon. The need, therefore, is not to undermine – and inevitably marginalise – the social mobility index but to improve and strengthen it. We badly need league tables of universities that do not essentially play back pre-existing advantages.
If we need league tables at all…?
Everyone is intrigued by league tables, and so when we see that LSE, Birmingham City University and the University of Salford are ranked 12th, 13th and 14th respectively, we wonder why Birmingham City is “worse” than LSE, but “better” than Salford.
We rarely look more deeply, and so we don’t notice that the numbers which determine those ranks are “aggregate scores” of 2.161, 2.160, and 2.158 respectively.
Are the measures which determine those numbers so precise as to imply that 2.160 really is meaningfully different from 2.161 and 2.158?
Rank orders are highly sensitive to the uncertainty of the underlying measurements, and so perhaps compilers of league tables might like to consider how that uncertainty is taken into account.
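One way to see this is a small simulation. The noise level below (a standard deviation of 0.01 on each aggregate score) is an assumption chosen purely for illustration, not a property of the published index; the point is that when scores differ only in the third decimal place, even modest measurement uncertainty makes the rank order unstable.

```python
# Illustrative sketch: perturb three closely spaced aggregate scores
# with small Gaussian noise and count how often the rank order changes.
# The noise standard deviation (0.01) is an assumed, illustrative value.
import random

random.seed(1)

scores = {"LSE": 2.161, "Birmingham City": 2.160, "Salford": 2.158}

def noisy_order(sigma):
    """Rank the institutions after adding noise of standard deviation sigma."""
    perturbed = {name: s + random.gauss(0, sigma) for name, s in scores.items()}
    return tuple(sorted(perturbed, key=perturbed.get, reverse=True))

baseline = tuple(sorted(scores, key=scores.get, reverse=True))
trials = 10_000
changed = sum(noisy_order(sigma=0.01) != baseline for _ in range(trials))
print(f"{100 * changed / trials:.0f}% of trials reordered the three institutions")
```

Under this assumed noise level the published order survives only a minority of trials, which is why uncertainty intervals matter more than third-decimal-place differences.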
That also applies to any inferences drawn from rank orders, such as exam grades, which are determined by drawing lines across a table of rank orders compiled from examiners’ marks (https://www.hepi.ac.uk/2020/05/18/two-and-a-half-cheers-for-ofquals-standardisation-model-just-so-long-as-schools-comply/).