The UK's only independent think tank devoted to higher education.

PQA: examining pre-existing systems and whether they fit the UK context

  • 20 November 2020
  • By David Hawkins

This blog was contributed by David Hawkins, Founder of ‘The University Guys’. You can find David on Twitter at @UniGuyDavid.

HEPI and The University Guys are hosting an event on ‘Should more UK students consider higher education in Europe?’ with university speakers from Ireland, Austria, Spain, Italy, Germany and Switzerland. The event takes place at 10am on Monday 7th December. To register, please go to this link: https://www.airmeet.com/e/b50f8910-227e-11eb-9abe-0fd8493f0346

As the idea of an application system based on post-qualification admissions (PQA) moves from the world of ‘not that old chestnut’ to being a realistic prospect, those of us who regularly deal with PQA application systems around the world feel considerable concern. Though I would like to think well of those involved in the various reviews that have led to this flurry of activity, I find it highly doubtful that many (if any) of them have much experience of a university admissions process beyond the UK.

As someone currently advising students navigating 16 different application systems, I feel qualified to state that what is on the table with a UK version of PQA may not meet the goals that the well-meaning proponents think it will.

To those, like me, who deal with lots of international application systems regularly, UCAS is one of the best systems in the world: it seems almost like vandalism to unravel it without actually speaking to those who know about other systems and the flaws they have. To paraphrase Winston Churchill, UCAS looks like a really bad system until you look at all the others around the world and realise it is actually one of the best.

For now, I’m going to discount the idea of a PQA system in which university terms start in January, which seems likely to founder on its costs and lost learning time, and instead focus on a system in which exam results arrive in August and university courses start a month or two afterwards.

There are two good examples of countries that run a PQA process on these timelines, with a short window between students receiving grades and then starting at university: Ireland and Australia. In both of those, the reality of processing so many decisions in such a short time means that the system becomes almost entirely data driven.

In Ireland, which runs PQA through the Central Applications Office (CAO), everything comes down to supply and demand: students are placed on courses based on their CAO points, converted from the Irish Leaving Certificate (or other international qualifications). Each course has a cut-off score, which is the points gained by the lowest-performing student to win a place on that course. If there are 56 spots on the BSc in Physiotherapy at University College Dublin, the CAO assigns those places to the highest-performing students who listed that course as their top choice. In 2020, the cut-off score for that course was 578 points: if you got 577, you were not admitted. When you bear in mind that 577 is more than the equivalent of three A*s at A Level, you can start to see how ‘points mean offers’.
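
The mechanics of the cut-off can be sketched in a few lines of code. This is a deliberately simplified illustration, not the CAO’s actual implementation – the applicant data and capacity here are invented, and the real CAO also works through each student’s ordered list of course choices – but the core ‘points mean offers’ logic is just this ranking step:

```python
# Simplified sketch of points-based allocation, loosely modelled on the
# Irish CAO: each course has a fixed capacity, places go to the
# highest-scoring applicants, and the cut-off is simply the points of
# the last admitted applicant. All data below is invented.

def allocate(applicants, capacity):
    """applicants: list of (name, points); returns (admitted, cut_off)."""
    ranked = sorted(applicants, key=lambda a: a[1], reverse=True)
    admitted = ranked[:capacity]
    cut_off = admitted[-1][1] if admitted else None
    return admitted, cut_off

applicants = [("A", 589), ("B", 578), ("C", 577), ("D", 601)]
admitted, cut_off = allocate(applicants, capacity=3)
print([name for name, _ in admitted])  # ['D', 'A', 'B']
print(cut_off)                         # 578 – applicant C, on 577, misses out by one point
```

Nothing else about the applicant – context, potential, anything beyond the points total – enters the decision.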

What that might mean for PQA in the UK is a return to raw exam marks, or the use of other admissions tests, so that universities can – in the space of just a few weeks – sift through thousands of applicants and decide who gets in. What that would do to our education system worries me, and it should worry the many teachers and school leaders who want students to do more than cram for exams: in a pure PQA system, the focus would have to be entirely on gaining the most possible marks, as one mark fewer could be the difference between getting in and not. Pity the cricket coaches trying to field teams in the summer with that pressure facing their students.

Another issue to consider is that this change won’t actually remove the current system: international students will still want to apply to UK universities on the basis of predicted grades, so universities will simply have to accept direct applications outside of UCAS (as many already do); international students won’t wait until August to know where they might be going.

Alongside this, the many thousands of UK students who apply to international universities will still be applying on the basis of predicted grades, because systems elsewhere will continue to require them.

Overall, I hope that rather than jumping into a PQA system without understanding how such systems work in other parts of the world, we look at international comparators and consider the significant implications for what goes on in UK schools in the final two years. Do we really want to follow the Irish model? Rather than making things fairer, if admission to university becomes solely about granular achievement in A Levels (and BTECs, the IB and so on), it will likely be the best-resourced families and the best-resourced schools that benefit most, even more so than they do now.

10 comments

  1. Jeremy says:

    It would certainly be wise to look at how things work in other countries before changing the UK model.

    But I find your overall argument curious. We shouldn’t go back to raw marks, because it’d make students work harder for exams? We shouldn’t have a numerical cut-off point for admissions because … what, exactly?

    In fact, the Irish system you describe sounds preferable in every way to the UK system. Real grades are more accurate than predicted grades, and raw marks would be more precise, too. There’d certainly be losers from a switch to a simpler, more objective system — cricket clubs, possibly, and those who earn a living from helping families to navigate the current complicated system — but applicants (especially applicants from poor backgrounds) and universities would be better off.

  2. I think there’s enough evidence questioning the accuracy of final marks, particularly once you step outside STEM fields, to give pause. The Irish system works for its context, but I don’t think the logical consequences of that system are what those who seek to reform UCAS have in mind. The phenomenon of ‘Grinds’ in Ireland – a huge tutoring industry seeking to maximise every possible mark in every paper, because a single mark could be the difference – is not one I think we’d like to see here. As with any tutoring, the most advantaged can benefit the most.

    As for our commercial motive: although I think we’d benefit from PQA (it would lead to a two-tier system for international and domestic applicants), it is something that morally I really don’t want to see, as I think it would embed inequities rather than allow admissions officers the flexibility to take educational context into account. There are better ways to help more disadvantaged students into higher education than PQA; it’s a sledgehammer to crack a nut.

  3. Jeremy says:

    I don’t follow your worries about every possible mark making a difference. That’s already the case, since grades are determined by marks: if you need 70% for an A* and you only get 69% then you don’t get an A*.

    Commercial motives are fine, but I don’t see a serious attempt at an argument here. If you don’t want the most advantaged to benefit the most then why are you running a company that helps rich applicants beat out poor applicants for university places?

  4. A word of caution if I may, please… except for exams structured as multiple choice questions, each with one unambiguously right answer and all others wrong, different marks will be given by different examiners.

    This isn’t because any examiner made a mistake, or was sloppy, or not supervised properly; it’s a result of legitimate differences in expert academic opinion.

    So an exam does not have ‘a’ mark of ’69’, distinguishing that script from another marked ’70’. Rather, the first script is more validly associated with a range of marks such as 69 ± 5, and the second as 70 ± 5. The reality is that the two corresponding candidates are indistinguishable.

    The (in my view unfortunate) belief that there is a single ‘right’ mark is the fundamental reason why 1 exam grade in every 4 is wrong – or, to use Dame Glenys Stacey’s more gentle words, “exam grades are reliable to one grade either way”. See, for example, https://www.hepi.ac.uk/2019/02/25/1-school-exam-grade-in-4-is-wrong-thats-the-good-news/ and https://rethinkingassessment.com/rethinking-blogs/just-how-reliable-are-exam-grades/.

    So beware any process that is based on the erroneous assumption that “this mark is right”.
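
    To make the 69 ± 5 versus 70 ± 5 point concrete, here is a quick simulation. The ±5 spread and the normal-noise model are illustrative assumptions, not measured marking data; the point is only that the two scripts’ mark distributions overlap heavily:

```python
# Illustrative only: model each examiner's mark as the script's 'true'
# quality plus random marking noise (an assumed spread of 5 marks).
# Count how often the nominally weaker script (69) receives the
# higher mark.
import random

random.seed(1)
trials = 100_000
overtakes = sum(
    random.gauss(69, 5) > random.gauss(70, 5)
    for _ in range(trials)
)
print(f"{overtakes / trials:.0%}")  # close to a coin flip (roughly 44%)
```

    If the one-mark gap were meaningful, that figure would be near 0%; instead, the ordering of the two candidates is close to chance.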

  5. Jeremy says:

    Dennis: while that’s true, it’s still the case that the candidate with 70 gets the A* and the candidate with 69 doesn’t. So, even if the process for assigning marks is imperfect, it’s still the case that every mark counts, and switching to a raw-mark system wouldn’t change that.

    Unfortunately, if exam grades for some subjects are somewhat inaccurate, predicted grades are a good deal worse: they have no standard, no moderation, no appeal, and no accountability. Those who are concerned about exam inaccuracies ought to be even more concerned about predicted grades!

  6. Hi Jeremy – thank you; yes, 70 is a bigger number than 69. But that’s not the point. The point is whether or not the difference between 69 and 70 has any meaning, or is just an artefact of the process used for measurement.

    The end-result of all exam systems is a rank order determined by the marks. But if those marks are fuzzy, as indeed they are, any resulting rank order is simply a lottery, and has no meaning – certainly no meaning as regards “this candidate scored 70 and is therefore better than that candidate who scored 69”. As an example, take a look at the chart halfway down this blog – https://www.hepi.ac.uk/2020/05/18/two-and-a-half-cheers-for-ofquals-standardisation-model-just-so-long-as-schools-comply/.

    And I agree wholeheartedly that switching to a raw mark system won’t change that one jot. Which is a good reason why switching to a raw mark system is a really bad idea.

    If one of the arguments used to justify PQA is “Because it solves the problem of unreliable teacher predictions”, then I would say “To what purpose? This simply switches the unreliability of teacher predictions for the 1-grade-in-4-is-wrong unreliability of actual grades.”

    And so I find it a great pity that not even one of the recently-proposed ideas for PQA in its various forms addresses the fundamental problem of the unreliability of actual grades, let alone wider issues about the validity of exams and grades as effective proxies for a candidate’s abilities, skills and promise.

    To me, the award of reliable assessments is an absolute prerequisite of any process that seeks, fairly, to recognise a student’s performance with some form of certificate. And where the structure of the testing process is anything other than unambiguous multiple choice, the ‘rules’ of assessment must fully take on board the inevitable fact that the underlying data are fuzzy. Which is not difficult to do.

  7. Jeremy says:

    Dennis: I’m glad we agree that raw marks don’t change things here.

    Regarding “To what purpose?”: grades are more reliable than teacher predictions, and consequently better, even if not perfect. Further, where they’re unreliable, the causes are different: differences in marks between examiners are not due to things like personal animus or favour, racism, or attempts to game a complicated system, which are just some of the causes of predicted-grade unreliability.

    There may be good reasons to keep predicted grades (although I haven’t heard any), but the fact that exam marking is imperfect is not among them.

  8. Thanks again; let me summarise my fundamental case:

    1. It is now acknowledged by Ofqual that on average, 1 in every 4 exam grades, as awarded, is ‘wrong’, with significant variability by subject, and by mark within subject. This unreliability of exam grades is now an agreed fact.

    2. My opinion is that an unreliability rate of 1 in 4 is woeful; others may disagree.

    3. I also believe that it is very easy to award assessments that approach 100% reliability, for every mark, in every subject – markedly better than the 75% average that is the case now.

    4. Given that this is easy to do, I argue it should be done, to everyone’s benefit. Including providing a much more robust platform for PQA, should that ever come to pass.

    This makes no statement about whether or not exams are ‘good things’ in the first place; rather, it is simply a pragmatic expression of a belief that, if exams are to take place, the least that can be expected is that the outcomes – the assessments as awarded (which might be grades, but don’t have to be) – should be fully reliable and trustworthy.

    This makes no reference to teacher predicted grades at all.

    A different question is “are teacher predicted grades more or less reliable than exam grades?”.

    That’s an interesting question, which is highly loaded.

    Currently, the process for UCAS predicted grades builds in incentives for teachers to overbid, and certainly they can be biased – and not just against, say, the disadvantaged (as a lingering paranoia, I am still convinced one of my teachers had it in for me all those years ago…).

    The measurement process is flawed too, for it assumes that the actual grades are right, and that any discrepancy is an error in the prediction… There’s something else too – how does a teacher learn to predict wisely when the feedback loop, so essential to all learning, contains on average 25% erroneous, random signals?

    To me, a more interesting question is “when done properly, and with integrity, are teacher assessments better measures of a student’s overall performance and potential than the grade awarded as the result of an exam sat on two hot summer days in June, even if that grade is fully reliable and trustworthy?”.

    My personal belief is “yes”, as I articulated right at the start of what turned out to be this summer’s catastrophe (https://www.hepi.ac.uk/2020/03/21/trusting-teachers-is-the-best-way-to-deliver-exam-results-this-summer-and-after/).

    Others will of course disagree, which is fine. But I think the debate is very important.

  9. Jeremy says:

    Yes, the issue of exam marking reliability can be tackled independently.

    The question here is whether switching from predicted grades to exam grades is an improvement. I haven’t seen any serious arguments that it isn’t in principle (although there are certainly some timetabling challenges).

  10. Indeed. And if those exam grades were fully reliable, that would be a good step!
