The UK's only independent think tank devoted to higher education.

WEEKEND READING: Should the UK be moving to post-qualification admissions?

  • 14 November 2020
  • By Anna Mountford-Zimdars

This blog was kindly contributed by Anna Mountford-Zimdars, Professor of Social Mobility and Director of the Centre for Social Mobility at the University of Exeter. You can find Anna on Twitter @AnnaM_Zimdars

In short: Maybe. But the decision should not be made now.

Periodically, policy makers and UCAS wonder: should we reform the admissions system? Would a post-qualification admissions (PQA) system be fairer, more efficient – better? Would more disadvantaged students enter selective higher education institutions if we had PQA? These questions are being raised again now, with UCAS and Universities UK (UUK) both reviewing the issue.

Many countries, indeed many of our European neighbours, successfully practise post-qualification admissions. But one must exercise caution in transposing experience from one system to another. For example, in a country like Germany – which uses PQA – few courses have restrictions on enrolment or prior attainment. So the task facing students, of enrolling themselves at their chosen university, tends to be an awful lot simpler than the current selection set-up in the UK. Notably, highly competitive courses such as medicine operate pre-qualification admissions even in Germany.

Furthermore, the way pre-qualification admissions work can vary across countries. While in the UK students have tended to receive ‘conditional’ offers based on their predicted grades (though they increasingly receive unconditional offers), the pre-qualification system in the USA results in firm admissions decisions for university places, based on achievement to date, predicted grades, references and test results. This experience illustrates that it would be possible to keep pre-qualification admissions but change how they are undertaken.

Debating PQA around the validity of predictions, however, misses a more interesting point: how do we progress students to universities, and why? School-leaving examination results have been sacrosanct both as entry tickets of entitlement to participate in higher education and in determining which institutions students are eligible to attend. This approach, however, now finds itself in the middle of a huge natural experiment: the missed GCSE and A-Level examinations in 2020, due to the COVID-19 school closures, have meant that students were admitted to higher education on the basis of teacher-predicted grades instead of examination results. We will only know in years to come how these teacher predictions map onto progress in higher education and, indeed, progression into further education and employment.

It might turn out that teacher predictions do predict outcomes in higher education. In that case, there may be calls to re-evaluate the weight given to predicted and actual examination performance. Put another way: predicted grades are predictions with errors, and there are rightly concerns that these errors systematically advantage or disadvantage particular students. But standard examinations are also measures, with error, of ability and potential for higher education, and they too may advantage and disadvantage different students.

Maybe a student who instils confidence in a teacher that they will achieve highly can also impress teachers at university and employers, and thus succeed through university and into employment even if they do not always perform as highly in examinations; conversely, the ability to ace exams does not always translate into acing, for example, the labour market. It is possible that teachers’ predictions and examinations both offer higher education providers indications, albeit slightly different ones, of an applicant’s ability and potential. Once we know how teachers’ predictions map onto university performance, it is conceivable that this could be a useful piece of information for selectors to take into account. The usual caveats around biases systematically favouring or disadvantaging certain social groups would also need to be studied and applied.

While it would be possible to undertake a retrospective study of the predictive power of teacher predictions as compared with examination results, this would be hampered by the fact that, in the past, examination results decided where students ultimately went; the present situation, in which teacher predictions are decisive, is genuinely a different scenario.

Finally, the crisis in exams this year is also an opportunity to review whether additional contextual factors should be systematically considered by universities. For example, electively home-educated children missed out on both A-Level examinations and predictions: they were left without grades. A review of admissions processes should take into account the needs of this group, and of other individuals who may have the ability and potential to succeed at university but have less standard prior educational journeys. This means looking beyond certified attainment for ways to provide opportunities to access higher education.

The pandemic, and the changes to admissions in 2020, mean that now is not the right moment to revamp the higher education admissions system. We have waited this long: it seems prudent now to wait another three years for the empirical evidence of how teacher predictions map onto higher education performance. It might be that we need to rethink more than the timing of the admissions process, and re-evaluate more broadly how and why we admit students.

1 comment

  1. Most thought-provoking… especially the question “what are we selecting and predicting for?”.

    One of the key arguments for PQA is the unreliability of teacher predictions.

    Does the evidence assume that the actual awards are ‘right’, and that all discrepancies between the awarded grades and the teachers’ predictions are attributable to teacher error? Or does it fully take into account the unreliability of the awarded grades?

    Given the incentives in the system encouraging teachers to be ‘optimistic’, it is quite likely that this happens. But given the known unreliability of actual awards, how robust are the measurements of ‘over-prediction’?
