The UK's only independent think tank devoted to higher education.

The Great CAG Car Crash – What Went Wrong?

  • 12 August 2020
  • By Dennis Sherwood

This blog is the latest in a series by Dennis Sherwood, who has been tracking the 2020 results round for HEPI.

As a result of the public uproar following the ‘adjusting down’ of around 124,000 centre assessment grades (CAGs) – about one-quarter of all grades submitted – Scotland’s Education Secretary, John Swinney, has now binned ‘statistical standardisation’ and reinstated schools’ down-graded CAGs. In England, the numbers are not yet known, but a recent report produced compelling evidence, based on (the somewhat suspect) Slide 12 from Ofqual’s recent Summer Symposium, that about 40% of A-Level CAGs will be down-graded. This too is driving a build-up of public pressure, the final outcome of which is as yet unknown.

John Swinney also announced the inevitable enquiry into what went wrong, including an autopsy of the process and an attempt to get to the bottom of why so many CAGs were over-bid, for which two explanations are already on the table: ‘over-optimistic’ teachers; and discrimination against socially disadvantaged pupils.

But are these the full story?

The muddle of the over-bid CAGs needs to be untangled not only in Scotland, but in Northern Ireland, Wales and England too. And to do that, someone needs to look in detail at the relevant evidence, the CAGs, and ask two key questions:

  1. How many of the CAGs were submitted in good faith and were plausible?
  2. How many appear to have been submitted by chancers, game-players or just lazy professionals?

Let’s deal with the second question first. If the CAGs submitted by any school are way higher for the top grades than the school’s subject history, that’s evidence of, let’s say, game-playing. So, for example, a teacher who thinks ‘I can’t be bothered with all this. I’ll just submit top grades and let the board sort it out’ or someone who, fearing confrontation with irate parents, decides to submit A*s and 9s for everyone – that way, the teacher can look any parent in the eye and say, ‘I submitted a top grade! It’s not my fault the outcome was [whatever]! Blame the exam board, not me!’

Such heavily distorted submissions should be easy to spot, and I trust that there will be very very few of them.

The first question, about plausible submissions, requires more explanation. Anyone who tried to produce this year’s CAGs will have hit two apparently trivial, but in fact potentially devastating, arithmetical problems: rounding and historical variability.

Suppose, for example, that the appropriate historical average is that 30% of previous students were awarded, say, grade B. This year’s cohort is 21 students and 30% of 21 is 6.3. That’s not a whole number, which is a problem: students don’t come as decimals, but as whole numbers. So the teacher faces the dilemma of rounding down to 6 or up to 7. The rules of arithmetic say ‘round down’. But that 7th ranked student is quite good, and really deserves a B, so let’s submit 7. So reasonable; so human; so understandable.

But if, in good faith, teachers in many schools rounded up, then grade inflation is blown sky high, for this is the ‘Tragedy of the Commons’. To maintain ‘no grade inflation’, there must be as many roundings down as up, which is most unlikely.
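To make the arithmetic concrete, here is a minimal sketch in Python, using only the hypothetical numbers from the example above (a 21-student cohort and a 30% historical share of grade B), of what the kindly round-up does to the grade-B share once every school does it.

```python
# A minimal sketch of the 'Tragedy of the Commons' rounding effect,
# using only the hypothetical numbers from the example above:
# a 21-student cohort and a 30% historical share of grade B.

import math

cohort = 21
historical_share = 0.30

exact = historical_share * cohort       # 6.3 'students'
round_down = math.floor(exact)          # 6 - what 'no grade inflation' assumes
round_up = math.ceil(exact)             # 7 - the kind, human choice

print(f"Grade-B share if every school rounds down: {round_down / cohort:.1%}")  # 28.6%
print(f"Grade-B share if every school rounds up:   {round_up / cohort:.1%}")    # 33.3%
print(f"Implied inflation at grade B: "
      f"{(round_up - round_down) / cohort * 100:.1f} percentage points")
```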

There’s another consequence of rounding too, best illustrated by a rather odd-looking example, but it does make my point.

A school’s historical average grade distribution is such that 10% of its students were awarded each of the ten grades 9 to 1 and U. This year’s cohort is 9 students. So that’s 0.9 of a student in each of the 10 grades, each rounding to 1. When I add the rounded figures, the total cohort is 10. But there are only 9 students. Where did that extra ‘student’ come from? From the accumulated rounding errors. So, to correct for that, I have to deduct one ‘student’. But which one? From which grade? From grade U, of course. That way, the 9 students in the cohort are awarded grades 9 to 1, one student per grade, with no award of the U.

That makes sense. But there was a choice: I could have awarded one each of grades 8 to 1 and the U. Yet why on earth would I? And if everyone in a similar position chooses the highest grade, not the lowest, guess what happens to grade inflation…
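A minimal sketch of that accumulation, using the same hypothetical 9-student cohort and flat 10%-per-grade history, shows both the phantom tenth ‘student’ and the choice of where to take the correction.

```python
# A minimal sketch of the accumulated rounding error in the example above:
# a hypothetical 9-student cohort, with a flat 10% history across grades 9-1 and U.

grades = ["9", "8", "7", "6", "5", "4", "3", "2", "1", "U"]
cohort = 9
historical_share = 0.10

exact = {g: historical_share * cohort for g in grades}    # 0.9 of a student per grade
rounded = {g: round(n) for g, n in exact.items()}         # each 0.9 rounds to 1

print(sum(rounded.values()))    # 10 - one phantom 'student' too many

# Correct for the phantom student by deducting it from one grade.
generous = dict(rounded)
generous["U"] = 0               # drop the U: everyone gets a grade from 9 to 1
cautious = dict(rounded)
cautious["9"] = 0               # drop grade 9: everyone gets a grade from 8 to U

print(sum(generous.values()), sum(cautious.values()))     # both now total 9
```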

One more example. 

Suppose that, historically, the percentages for grade B were 40%, 20% and 30% over each of three previous years, which – since the cohorts are the same size in each year – average to the 30% used earlier.

If, instead of using the average, I use the best of these years – after all, this year’s cohort is just as good as that one, if not better – then 40% of 21 is 8.4, which I’ll round up to 9. That’s good – I’ll submit 9; that’s sure to be fine.

But alas no.

Submitting the rounded up 7, or the 9, or a compromise of 8, could all create havoc if everyone does the same. And why shouldn’t they? It’s all very reasonable…

…especially since neither the SQA nor Ofqual specified the rules!!!

If teachers had been instructed how to do the rounding, if teachers had been told just how close they had to be to the average, and if teachers had been given the same calculation tool that looked after all this techy stuff consistently and ‘behind the scenes’, then they might have submitted the CAGs-that-the-algorithm-first-thought-of, these being the ‘right answers’. And even better if they had also been allowed to submit well-evidenced outliers.
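For illustration only, here is a hypothetical sketch of what such a shared, behind-the-scenes tool might have looked like: a largest-remainder allocation, which guarantees that the whole-student counts always add up to the cohort, so every school’s rounding is handled identically. (Nothing like this was issued by the SQA or Ofqual; the function and the grade shares below are invented for the example.)

```python
# Hypothetical sketch of a shared 'behind the scenes' calculation tool:
# largest-remainder allocation of a cohort across grades, so the rounded
# counts always sum to the cohort size. Not an actual SQA/Ofqual tool.

import math

def allocate(cohort_size, historical_shares):
    """Turn fractional historical shares into whole-student grade counts."""
    exact = {g: share * cohort_size for g, share in historical_shares.items()}
    counts = {g: math.floor(n) for g, n in exact.items()}
    shortfall = cohort_size - sum(counts.values())
    # Hand the leftover students to the grades with the largest remainders.
    by_remainder = sorted(exact, key=lambda g: exact[g] - counts[g], reverse=True)
    for g in by_remainder[:shortfall]:
        counts[g] += 1
    return counts

# The worked example from the article: 21 students, 30% grade B historically
# (the other shares are invented to complete a distribution).
shares = {"A": 0.20, "B": 0.30, "C": 0.35, "D": 0.15}
print(allocate(21, shares))    # {'A': 4, 'B': 6, 'C': 8, 'D': 3} - totals 21
```

The point of the largest-remainder step is that a rounding up at one grade is offset by a rounding down at another within the same school, rather than every school quietly rounding up.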

But in the absence of these rules, teachers were aiming at moving goalposts in the dark. No wonder there have been so many misses.

My thesis is that ‘plausible overbids’ are not the fault of the teachers. To me, the blame lies at the door of the SQA and Ofqual for not making the rules clear. (Chancers and game players are another matter, of course.)

I think that ‘plausible’ and ‘gamed’ over-bids can be untangled by seeking the evidence – by looking through the CAGs and discovering the patterns, as illustrated in the Figure. And I think this should be done with urgency.

‘Plausible’ and ‘gamed’ grade distributions

In these hypothetical examples of the distribution of GCSE grades for the same subject cohort, the central black line is the historic average; the upper red line, the historic maximum; the lower blue line, the historic minimum. For the 2020 cohort, the distribution that most closely fits the historic average is shown by the yellow columns; the green columns show the grades as submitted.

On the left, no submitted grade exceeds the maximum, and only grade 1 is just below the minimum. Such a distribution is, in the context of the article, ‘plausible’. Pause for a moment to guess the grade inflation implied by submitting the ‘plausible’ green distribution rather than the exact-average yellow distribution. The answer is nearly 6 percentage points: the percentage of grades 9 to 4 in the yellow distribution is 70.2%; in the green distribution, 76.0%.

On the right, the higher grades all exceed the maximum; the lower grades are all below the minimum. This is the typical pattern of a ‘gamed’ distribution.
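For anyone who wants to reproduce the arithmetic behind that ‘nearly 6 percentage points’, here is a minimal sketch. The two distributions below are hypothetical stand-ins, not the actual data behind the Figure; it is the method, not the numbers, that matters.

```python
# Minimal sketch of the 'grades 9 to 4' calculation behind the Figure.
# The two distributions below are hypothetical stand-ins, not the actual
# figure data.

def share_9_to_4(counts):
    """Percentage of the cohort awarded grade 4 or above."""
    strong = sum(n for g, n in counts.items() if g in {"9", "8", "7", "6", "5", "4"})
    return 100 * strong / sum(counts.values())

# 'Yellow': closest whole-number fit to the historic average (hypothetical).
yellow = {"9": 2, "8": 3, "7": 5, "6": 7, "5": 9, "4": 9, "3": 7, "2": 5, "1": 2, "U": 1}
# 'Green': the submitted CAGs, nudged towards the higher grades (hypothetical).
green  = {"9": 3, "8": 4, "7": 6, "6": 8, "5": 9, "4": 8, "3": 6, "2": 4, "1": 1, "U": 1}

print(f"Yellow: {share_9_to_4(yellow):.1f}%   Green: {share_9_to_4(green):.1f}%")
print(f"Implied grade inflation: {share_9_to_4(green) - share_9_to_4(yellow):.1f} points")
```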

I have no idea what the outcome might be. Perhaps most of the over-bids will be shown to be attributable to game playing; perhaps not.

In Scotland, the decision has been taken to scrap the algorithm’s results, and to accept schools’ CAGs, even if they really were over-the-top (but, hopefully, in only a few cases…).

In England, the grades to be announced shortly will, subject to the ‘small cohort’ rule, be those determined by the algorithm, as they have always been. What has been changed by Gavin Williamson’s last minute announcement is a tweak to the rules for appeals.

Until last Thursday (6 August), the grounds for appeal were limited to technical and procedural errors. On that day, and after much pressure, the rules were widened to allow appeals if schools ‘can evidence grades are lower than expected because previous cohorts are not sufficiently representative of this year’s students’. 

Last night (11 August) came the news that the grounds for appeal had been amended a little more: schools can now appeal their awarded grades if their students’ mock results are higher. I’m puzzled by that. If an alternative to calculated grades is to be used as a criterion of ‘right / wrong’, why choose mocks when the CAGs are immediately and easily available, and already have mock results factored in? And not just mock results: pages 5, 6  and 7 of Ofqual’s Guidance notes, for example, list all the aspects of student performance that CAGs were to take into account. Are all these of no value? Has all this important evidence been discarded? Have mocks been chosen in preference to CAGs because the CAGs are all wildly ‘over-optimistic’ and just can’t be trusted?

But as I hope I have demonstrated, some CAGs might not be ‘over-optimistic’ but rather ‘plausible’. We just don’t know. And I think we should find out.

For if we did, that might provide another way out of this appalling mess.

Suppose, for a moment, that all the English CAGs are reviewed to determine which are ‘plausible’ and which are ‘gamed’. Suppose further that Ofqual adopt the rule that all CAGs that are ‘plausible’ are either confirmed (if already awarded) or re-instated (if they have been over-ruled by the model). To complete the picture, those CAGs that have been ‘gamed’ would be over-ruled by the model (as may well have already happened). And since some students of ‘gaming’ teachers might have been penalised by the award of a calculated grade, there also needs to be a free appeals process, open to any student who feels he or she has been awarded an unfair grade, and who can provide suitably robust evidence, of which mock results can be one element.

This will certainly drive some grade inflation – but I would argue that this is a consequence of Ofqual’s failure to design a wise process. The guardian of the ‘no grade inflation’ policy is responsible for its breach.

38 comments

  1. Jane says:

    I looked into the schools who had the highest number of A*A grades last year and there are three types of schools/colleges who have the lion’s share of the top grades:

    Independent
    Grammar/selective
    Sixth form colleges.

    Ofqual have stated that one type of further education establishment over-predicted CAGs more than others. My guess would be the huge sixth form colleges, which account for cohorts in the region of 1,000-plus per year, purely because it must be so hard to grade and rank lots of classes in the same subject. So maybe not gaming the system, but having difficulty in comparing vast numbers of students being taught by different teachers and trying to be fair to all.

  2. Huy Duong says:

    Hi Dennis,

    I think some of the things that went wrong are:

    1) Politicians caring too much about macroscopic indicators and not enough about injustices at the microscopic level, despite the fact that these injustices affect real individuals. Thus, they justify the system in terms of macroscopic indicators, such as “reducing grade inflation from 12% to 2%” and “the attainment gaps between the different groups have not grown”.

    Suppose we took £20,000 from the salaries of 10% of the Westminster government and gave it to another 10%, based on postcode or on their parents’ professions during the latter’s working lives – how would they like it? Their average salary would be the same, and the average salary gap between them and the rest of society would not have changed, so it’s OK then?

    2) Ofqual, with all of its statistical experts, should know better, but it uses the same lines – “reducing grade inflation from 12% to 2%”, “the attainment gaps between the different groups have not grown” – as the politicians. Not to mention that it is the gatekeeper of such data, and might have been the source of such statements entering the public domain; statements which are essentially statistical partial truths.

    3) Debate is an essential part of the democratic process. Debate needs information. But Ofqual, the keeper and in some cases originator of key information, has so far not released this information to the public, despite widespread and repeated requests. For example, it did not let the public know that about 40% (net) of A-level CAGs would be downgraded. When that information was revealed, it used its energy to argue that adjusting grades down is not downgrading!

    Crucially, it should release information that allows the public to make our own judgement on whether this 40% net downgrading is acceptable in a democratic society. Specifically, it should let the public know the statistical confidence level that the 40% of CAGs downgraded are the CORRECT ones to be downgraded. That is what the public needs to know in order to decide.

    This is like having 41 speeding drivers out of 100 on a motorway: your speed camera catches 41 cars and you fine 40 drivers. Is that acceptable? To answer that question you need to know how trustworthy the speed camera is. If it catches the wrong car 25% of the time, that’s probably not acceptable in a democratic society. If it catches the wrong car 1% of the time, that’s a lot more acceptable, but you still need a free and fair appeal procedure as a safety net.
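    A minimal sketch of that arithmetic, using the hypothetical error rates from the analogy:

    ```python
    # A minimal sketch of the speed-camera arithmetic, with the hypothetical
    # error rates used in the analogy: 40 drivers are fined in total.

    fined = 40

    for error_rate in (0.25, 0.01):
        wrongly_fined = fined * error_rate
        print(f"Camera wrong {error_rate:.0%} of the time: "
              f"on average {wrongly_fined:.1f} of the {fined} fines land on the wrong driver")
    ```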

    Without the key information I mentioned, which Ofqual should have, the debate is muddy and polarised, and the country gets into this mess.

    Yet Ofqual and some members of the government still insist on telling us that the system is fundamentally fair and is the fairest possible. Don’t TELL us that. SHOW us the statistical confidence that the downgrading doesn’t downgrade the wrong student, and let us decide.

    The Department of Health and Social Care wouldn’t impose an immunisation programme on school children without published data on toxicity and efficacy. The Department for Education and Ofqual shouldn’t use this “standardisation” without published data on its reliability.

  3. Mark says:

    Hello Jane – I’d be very interested in a link to the Ofqual statement citing a particular type of FE institution over-predicting. Many thanks.

  4. Huy Duong says:

    Hi Dennis,

    Your chart on the right is also plausible. Suppose we are talking about A-level: it is entirely plausible that the 2020 cohort is better than the best of the past 3 years, especially in non-selective schools.

    In fact, SQA’s model even allows the 2020 cohort to be better than the best of the past 4 years, by the use of the ventiles of schools above and below the school being “standardised”.

    This is data for Matthew Arnold School in Oxford: https://sites.google.com/view/2020-ofqual-grade-calculation/data-from-a-typical-comprehensive-school

    In 2018 the overall A* rate was 4%; in 2017 it was 13%. Assuming no systematic effects, it’s quite possible for this year’s overall A* rate to be anything from 3% to 15%. If we subdivide the school into A-level subject cohorts, the variation will be even greater.

  5. Jane – thank you. Your point about colleges is important, but receives very little visibility. Many colleges have cohorts of hundreds of students who have been obliged to re-sit GCSE English or Maths, having ‘failed’ the previous year – victims of the policy of “no grade inflation” which condemns about 30% of the entire cohort, every year, to an ‘award’ of 3 or lower, to ‘fail’.

    Those large college cohorts are therefore composed almost entirely of students clustered around the 4:3 grade boundary, making this year’s requirement for ranking totally impossible – https://www.tes.com/news/Coronavirus%3A%20gcses-fes-challenge-ranking-thousands-students.

    There is therefore every reason why colleges would ‘over-estimate’ their grade 4s and minimise the 3s – in my view, totally legitimately.

    From the bureaucrat’s point-of-view, however, this is a nightmare. If a candidate is awarded grade 5 rather than grade 4, this does not affect the key measure of grade inflation, the total across grades 9 – 4. But every additional grade 4, rather than grade 3, makes that measure greater.

    To me, that all reinforces the value – and importance – of getting behind the CAG figures so that they can be understood and interpreted wisely.

    For anyone reading this who has not already come across it, may I mention some truly powerful thinking led by Roy Blatchford on “The Forgotten Third” – his vivid term for the 30% of students condemned to ‘fail’, even before they enter the exam room. One of his ideas is to throw exams and grades away, replacing them with what he calls a ‘passport’ – a statement of what the student can achieve, rather than what they can’t.

    There is a report on this here https://www.ascl.org.uk/Our-view/Campaigns/The-Forgotten-Third, and a new book too https://www.amazon.co.uk/Forgotten-Third-third-thirds-succeed/dp/1913622029/ref=sr_1_1?adgrpid=102578738574&dchild=1&gclid=EAIaIQobChMI1aL50tGX6wIV6YBQBh03bQvqEAAYASAAEgIctfD_BwE&hvadid=446290100133&hvdev=c&hvlocphy=1006512&hvnetw=g&hvqmt=e&hvrand=12190947503817174763&hvtargid=kwd-916307386023&hydadcr=11440_1787660&keywords=the+forgotten+third&qid=1597303125&sr=8-1&tag=googhydr-21

  6. Mark says:

    How will the idea of using mocks to bump up a low grade at appeal impact on the use of rank order to distribute the original grades? A student ranked 75 out of a cohort of, say, 100, who is given a 3 by the algorithm because historically 75% of previous cohorts have failed that subject, could refer to the 4 they got in their mock (reflected by the 4 their CAG predicted). But that would move them up the rank order, and presumably require the least-safe grade-4 student to be downgraded to a 3 to prevent grade inflation. If this least-safe 4 didn’t get a good grade in their mock, they would have no basis of appeal after their regrade other than their CAG prediction. As you suggest, Dennis, why not simply allow CAGs – which include mock results but also projected progress through focused revision etc. – to be used in appeals for low achievers? This could be ring-fenced to those receiving fail grades, or those subsequently falling into the fail category through shifts in rank order, to avoid a free-for-all and widespread grade inflation.

  7. Huy – thanks as always for your thoughts and ideas. All great stuff as ever.

    I particularly value your point about the chart on the right which I attribute to ‘gaming’. Thank you for making me think, for I now fully appreciate that this distribution – which is tilted to the higher grades – could indeed represent a fair reality for a cohort that is genuinely brighter than the past. And even a chart which shows 100% A* – the most skewed distribution conceivable – could be valid in some circumstances, such as a school which has historically had, say, 10 students in a cohort, with grades spread across the range, but this year, just happens to have a pair of extremely bright twins!

    So my journalistic tendencies, I fear, overwhelmed my caution, and I appreciate benefiting from your wisdom!

    I still believe, though, that it would be helpful to distinguish between the two pattern types – may I suggest that my statement

    “those CAGs that have been ‘gamed’ would be over-ruled by the model”

    would be improved by replacing it with, say,

    “those CAGs that show a pattern of the type I have labelled ‘gamed’ should be scrutinised to discover whether or not the upwards shift in grades is justified, or the result of what looks to be some form of gaming. If the pattern can be justified, the CAGs should be re-instated; if not, the model’s results should over-rule the CAGs.”

    Is that rather better?

  8. simon kaufman says:

    A very helpful analysis, Dennis.

    Links to some of the stories in the Guardian Live coverage of A-level results, and examples given on the World at One on Radio 4 earlier today, show top-end outlier performance being disregarded by the algorithm – with potentially devastating consequences in one case for an inner-city, first-generation BAME entrant to Cambridge. It is ever clearer that the oft-quoted claim – whether by ministers or supposedly well-informed journalists – that the algorithm allows for some account to be taken of the past performance of the individual student, and therefore of their progression trajectory, is simply nonsense. All of the evidence put forward so far by school/college leads is that the algorithm simply replicates whole school/college performance from the last three examination cycles, taking no meaningful account of the performance profile of individual school/college/subject cohorts this year.

    The supposed ‘triple lock’ on appeals also raises the question of why thought has not been given to installing a definitive process timetable, with safeguarding guarantees for applicants protecting their ‘good standing’ as offer holders at their original institutional choices. With just a modicum of thought and joined-up thinking between the Boards, Ofqual, UCAS and UUK, this could have been modelled on the same principles as those underpinning the Adjustment phase of the UCAS cycle: protection of standing at your firm choice whilst the appeal is managed, with the right to approach and hold a fall-back place at a Clearing choice, and with all institutional choices able to exempt the applicant from numbers control (NC), so that institutional strategic management of recruitment to the NC is not compromised.

    Even odder is Williamson’s statement this morning that he has begun discussion with HEIs of a ‘Late Clearing’ phase for Autumn exam applicants. Does anyone have a clue how this could be made to work? Presumably it implies a January intake point for institutions which have never previously contemplated such a radical change to the academic year – at least for UG entrants.

  9. Thank you, Simon.

    Some schools have kindly given me some data, and a common pattern I have seen is that Ofqual have pushed down A*, usually towards the three-year minimum – just as if they used only the worst of the last three years, rather than the average.

    This then sets up a ‘waterfall’ effect, as elegantly discovered by Huy Duong, leading to his prediction that 40% of CAGs would be downgraded – a prediction that has indeed been proven to be true.

    The consequence of this is that many more bottom grades have been ‘awarded’, and I have been in contact with schools who have never had a C or D in that subject, yet suddenly find that they have this year.

    What a muddle. And even more reason to do a detailed analysis of the raw data to look for patterns.

    So here’s a mind-game… imagine that the entire dataset were handed over to the Royal Statistical Society for forensic analysis…

    And as regards all this ‘triple lock’ mumbo jumbo, it seems that the ‘mock’ idea was as big a surprise to Ofqual as it has been to us. Which big brain is dreaming all these things up and then announcing them, presumably as tweets… ?

  10. A says:

    The only caveat I would make is that the CAGs in some schools went through two or three rounds of moderation before submission, and some are reported as being lower than the school’s own predicted grades and the pupils’ mocks/coursework grades, so there are victims in there too – the final decision having been made without the teachers’ involvement.

  11. Christopher says:

    I’m really looking forward to Huy Duong’s next post…
    I’m mainly into art – I never realised such stuff could be so fascinating.* That said, I’ve heard people talk about the ‘beauty of maths’ – I just didn’t believe them…
    * and important

  12. Tania says:

    What a shambles!

    A friend of my son was one of the 0.2% to get moved down by 3 grades – he was a Y14 doing politics, as he had a subject change between Y12 and Y13. He got an A for every exam and piece of coursework, except for one B, over the course of two years. ‘Awarded’ a D. How???? It is a subject with some of the school’s better results: in 2019, 56% got A/A*; in 2017, 40%; I can’t find the data for 2018.

    And moving people up by 3 grades is even odder – how can they have so little faith in teacher judgements? Moving results by 1 grade is inevitable if you are trying to avoid grade inflation, but this is bizarre. I thought it looked like my little Maths/English scenario, where the problem comes down to subject-level moderation, but that does not fit.

    My Y11 son thinks it is ridiculous to use mocks, as a lot of people cheated – they knew last year’s papers were being used, and it is easy to get hold of papers which are locked on the exam board websites through social media sites. CAGs are much better, as teachers know who cheated. The suspicion on Mumsnet is that the UK government cannot follow a Scottish lead. My son does not like that either, as top grades handed out like smarties devalues them, but given where we are it seems to be the fairest action.

  13. Thank you A, Christopher and Tania.

    Yes, Huy is a true star – https://www.theguardian.com/education/2020/aug/14/punishment-by-statistics-the-father-who-foresaw-a-level-algorithm-flaws – and Tania, I’ve been talking to quite a few people over the last day, and have been looking at some very strange distributions.

    Some have shown quite a dramatic “waterfall” effect – by pushing the A*s down, there has been an “overflow” through the grades, all the way to the bottom. So some schools have been ‘awarded’ Ds or Es for the first time ever, and are totally puzzled why, with no history of these before, the last one or two in the rank order have ended up with such low grades.

    I think the entire data set should be handed over to the Royal Statistical Society for a deep forensic analysis.

  14. Janet Hunter says:

    Hello Dennis,
    Thank you for sharing the news story about Huy and his son. Concerned about outliers in non-selective state schools, I posted a comment on one of your earlier blog posts and followed the evolving conversations with interest. It is good to see that Thanh has his university place.
    My son was fortunate not to be affected by the downgrading. However, a friend of his, who attended a UTC, found that he was given a U for one of his subjects. There is no way that this was his CAG. I find it very worrying that the ‘waterfall effect’ that you mention has been extended past the letter grades to include a fail grade, leaving the student with nothing to show for their last two years of study in that subject. They also lose two grades’ worth of UCAS points, going from 16 for an E to zero. I doubt this is reflected in the statistics.

  15. Dave K says:

    Sadly there is yet another form of gaming possible that I suspect has affected my daughter who is a high achiever but “only” needed 3 A’s for her place at Oxford. She was CAGed at A*AA which is for her the harshest possible interpretation of her likely grades. Talking to other parents in similar positions this “lowering” of the highest CAG targets seems to have affected others with offers below their expected grades.

    At the time of submission, Ofqual implied that only exaggerated CAGs would be moderated, which may have led schools to guesstimate the number of high grades they would be allowed without bringing the moderation algorithm into play. These grades could then be manipulated/distributed to ensure that the maximum number of (chosen) children got the grades they needed for their offers (or any other potentially biased outcome desired). The ability to seal CAG lists from class teachers gave the Senior Leadership that power. Extremely hard to prove, but it could explain a lot in our high-achieving London girls’ private school.

  16. Huy Duong says:

    Thank you, Dennis, Christopher and Janet, for your kind thoughts and words.

    Given that Ofqual’s own testing data, released after the A-level results, indicates that for cohort sizes of 10-24 the final grades awarded this year have less than a 60% chance of being correct, and for cohort sizes of 25-49 less than a 70% chance, keeping grade inflation down to 2% doesn’t actually do very much for maintaining the integrity of A-level grades – yet it has been achieved at enormous human cost.

    It is arguably a misguided policy. Given that kind of inaccuracy in the final grade anyway, it would have been better to allow higher grade inflation, if not 10% then, say, 8%, to lower the human cost.

    However, one cannot expect Williamson to work that out for himself, and Ofqual is employed to do it and advise him. Unfortunately the tailors who for months have been saying to the questioning public “We are refining the details of his clothes” have left the emperor exposed.

  17. Huy Duong says:

    And the emperor’s boss, who probably knows even less than the emperor about how reliable or unreliable this year’s grades are, thought it was a good idea to close ranks with the bluster that this year’s grades are robust.

  18. Antony says:

    3 general observations and then 3 specific statistical points, if I may.

    Firstly, and most importantly, can I congratulate Huy’s son on his university place – I’m sure that if he takes after his father he will go on to have a brilliant career. Can I also suggest that Dennis should be made Head of Ofqual and Huy Education Secretary. Or the other way round if you would prefer 😉

    Secondly, this is a wonderful blog with intelligent, respectful and informed BTL debate, where people listen to each other and (gasp!) sometimes accept others’ points and change their minds. I have an 18-year-old who got her A-level results last week; she did fine and got her firm-choice uni, although with two CAG-to-award grade reductions – one fair, the other not, in our view. Many of her friends were not as lucky. So I’ve been following all this very closely, and have been so sad to see the level of ill-informed opinion and untruths present in even respected media. As for the BTL comments, words fail me. So I’ll say nothing.

    Lastly, I also wonder how much grade ‘inflation’ was caused by teachers not trusting Ofqual and the Government about how their predictions would be used (with total justification in Scotland, as it turned out). Imagine how you must feel now as a Scottish teacher or head who played it dead straight and strictly adhered to previous years’ averages, perhaps even with rounding? Imagine what you would do next year, or if this ever happened again?

    Anyway, my tuppence to add, FWIW:

    1 Huy’s thoughts on the 12% grade inflation being reduced to 2%. I fear things may be slightly worse than that: if one (rightly) accepts that the small cohorts have either total (<=5) or partial (6-15) reliance on CAGs, and that furthermore CAGs are inevitably and rightly going to have SOME grade inflation, then to get to 2% overall across all cohort sizes means that grade inflation for the larger cohorts must be less than 2%. Not sure how much less (maybe Dennis could crunch the numbers?).

    2 I think Dennis may have missed one factor in his explanation of why some grade inflation is inevitable and not 'gaming'; namely, the small but inevitable proportion of students each year who have a nightmare in the exams, are sick, etc. There are some students in the reverse situation, but due to the nature of these things far fewer. This can never be predicted at an individual level, but it will always mean that the aggregate of genuine, honest predictions will exceed actual results.

    3 How do we explain the 'Whitley Bay' situation, where a college has received scores materially lower than ANY of their previous 3 years?
    https://www.whitleybayhighschool.org/lower-school/key-publications-and-letters
    My first thought was that Ofqual must have applied a less-than-0% grade inflation to larger cohorts to ensure that overall GI was near 0% (see point 1 above). I now believe that this hasn't happened (thankfully!) but instead that it has something to do with the 'prior attainment adjustment'. Bearing in mind that in the case of Whitley Bay the 2020 cohort was actually slightly stronger than the previous 3, the only explanation I can see is that nationally averaged 'value adding' has been used, rather than the actual uplift that WB has achieved over the last 3 years. If so, that seems wrong; I would be very interested in Dennis' view on this!!

    Thanks, keep up the good work, and I'll keep hoping for a society where facts matter more than opinions or soundbites. Stay Safe!

  19. Hi Antony – thank you! And things are moving so fast that it’s probably all changed in the time since I typed “Hi Antony”! And it could well be that those jobs are soon to be vacant…

    As you say, two sad aspects of the otherwise quite sensible solution of just awarding the CAGs are that the unscrupulous have got away scot-free (ah! Scot!), whilst the truly honourable are thinking, oh dear, without playing any games, I could have given [Sam] an [x]… So I fear we’re very much in the territory of trying to find the least-worst way out of the mess, rather than what we would all agree to be the best answer.

    Your point (2) is valid, thank you: my statement that a ‘non-plausible’ distribution is gaming is wrong, for the reality could be a very bright cohort. So I should have said “a ‘non-plausible’ distribution should trigger an enquiry to distinguish between the legitimate and the gamer”. That is, I think, much better.

    Whitley Bay. There are an increasing number of bizarre award patterns – patterns that are quite different from history – coming to light, and being discussed in the press, on radio and on TV. Who knows what that algorithm actually did? And why oh why did no one (or the algorithm itself) do some simple sense-checks?

    More importantly, why did Ofqual set out with the intent to FORECAST, to PREDICT, the distribution of grades for every subject in every school in the country? And when some extremely clever person suggested it, why didn’t someone sensible say “I really don’t think that will work…”?

  20. Taryn S says:

    Hi Dr. Duong,

    Really impressed by your analysis and loved your interview in the Guardian.

    I’m with the Associated Press and we were really interested in your story and were wondering if you might be interested in speaking with me about your experience?

    Thank you!
    Taryn

  21. Patrick says:

    Can you appeal your CAG results with support from the school? If so, given that the school submitted the CAG, who would you appeal to?
    Thanks

  22. J Cunningham says:

    Why is nobody talking about schools submitting centre assessed grades that are LOWER than the UCAS Predicted Grades they gave to a student to be used in their UCAS application. UCAS predicted grades are defined as “… the grade of qualification an applicant’s school or college believes they’re likely to achieve…”. The CAG grades submitted to the exam board were defined by my school as “our best assessment of what you would have achieved if you had taken exams”. Is this not the same thing? How could a school give me UCAS Predicted Grades of AAA and then submit CAG grades of ABC to the exam board? Have they not totally and utterly failed me? PLEASE can someone shed some light on this for me as I cannot find an answer anywhere. Thanks.

  23. Patrick – much water has flowed under some most unstable bridges since you posted your comment, and let me apologise for not replying sooner. As you now know, you will be awarded your CAG. Which, in general, is good news to very many students.

    My understanding is that you cannot appeal your CAG, but if you fear that your CAG has been influenced by some form of bias and discrimination, and believe you have suitable evidence, you can take action against the school on grounds of malpractice or maladministration, as described, for example, here https://www.gov.uk/government/publications/student-guide-to-appeals-and-malpractice-or-maladministration-complaints-summer-2020

    And Mr Cunningham… thank you; I might be able to shed some light, but I fear that I might be illuminating a rather dark corner.

    One of the great confusions of this year’s process has been the muddle caused by the words ‘predicted grade’.

    A ‘predicted grade’ is the term usually used for the grades suggested by teachers in connection with UCAS admissions. This has been taking place for years, and over those years, it has been generally accepted that ‘predicted grades’ are usually higher – often by two grades – than the grades as subsequently awarded.

    Here, for example, is a statement from Dr Mark Corver, which he made at a recent webinar (http://blog.royalhistsoc.org/2020/06/22/race-update-7-rhs-virtual-workshop-on-the-impact-of-the-covid-crisis-on-bme-student-admissions-in-higher-education-18-june-2020/):

    “Predicted grades do not look anything like exam awarded grades (normally about two grades higher).”

    Mark Corver should know – he was formerly the Director of Analysis and Research at UCAS, and now runs dataHE (https://datahe.uk).

    ‘Predicted grades’ are higher than the actually awarded grades for a number of reasons, such as: (1) teachers wanting to do the best for their students; (2) the general acknowledgement that the concept is ‘the best possible grade that the student might achieve on a good day’; and (3) the fact that there is no redress to the teacher for a ‘poor prediction’. Furthermore, everyone involved, at schools and at universities, knows ‘the rules of the game’.

    This year, teachers were asked to submit “centre assessment grades” or CAGs. The rules for these were defined over three pages of Ofqual’s ‘Guidance notes’ (https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/887018/Summer_2020_Awarding_GCSEs_A_levels_-_Info_for_Heads_of_Centre_22MAY2020.pdf), where they specify that teachers must take “a realistic judgement of the grade each student would have been most likely to get if they had taken their exam(s)”.

    “Realistic” has generally been accepted as meaning “what is most likely to happen on a usual day” rather than “on a good day”, and was certainly meant to mean “don’t send in UCAS ‘predicted grades’ because we all know that they are usually over-the-top”.

    Furthermore, there is a ‘comeback’ for the CAGs – every submission had to be accompanied by a ‘declaration’, signed by the Head, to the effect that Ofqual’s rules had been faithfully complied with. That doesn’t happen with UCAS predicted grades.

    So I have a hunch that very many candidates are also asking “why are my CAGs lower than my UCAS predictions?” – as indeed many are. But the fact is that UCAS ‘predicted grades’ and this year’s CAGs are, in principle, different, with the CAGs very likely to be lower than the UCAS predictions. Which is extremely confusing, even more so when the press and the news media refer to the CAGs as ‘predictions’. Which, grammatically, they are in that the CAGs were taking a view on the future. But, in taking the CAG view, teachers were being asked to put on a different set of glasses as compared to taking the ‘UCAS prediction’ view.

    Mmm… I hope that shed some light. But as I said, that particular corner is murky.

  24. Raffaella Mori says:

    Yes, why is nobody talking about how the CAGs submitted by fair-playing schools have seriously impeded their students’ chances of getting into university because they are lower than the UCAS predicted grades!
    What is the current Appeal process for that scenario?
    If there’s no appeals process then resits are the only option.

  25. C says:

    My son is in the same position. He was predicted AAA all through his sixth form, but downgraded by his school when they submitted the CAGs. Even though his mocks were not brilliant (he didn’t do much revision as he wanted to see where his weak points were), they still assured him of his A grades, as he was one of the “brighter” students and they knew he was capable of achieving them.
    We have a feeling that his actual teachers put forward his grades to the head of department, who then looked at how he had performed through the two years. They do not know him, as they have never taught him and he did not do his GCSEs there. He does enough in class, homework and end-of-topic tests to keep on top of the subject, but he then puts all his effort into his exams. This was how he approached his GCSEs, and he got mainly 9s and 8s throughout.
    The one grade that stayed an A was the one where the head of department was also his teacher, so knew what he was capable of. Ironically, it was also his weakest subject, with the lower assessment and end-of-topic marks.
    The other students in his class, who would quite often go to him for help, are also mystified as to his CAGs.
    Basically, he feels completely let down by a school which encouraged him to aim for a higher university place, recognising that he was capable of achieving AAA – grades which he himself was quietly confident of getting if he had sat his exams.
    We have also tried to find a way to appeal, but if it has to go through the school I don’t hold out much hope of them agreeing that they got it wrong.

  26. albert wright says:

    What a wonderful series of information, questions and answers – predictive, actual, expected, CAG-based, different tinted glasses.

    The devil is definitely in the detail.

    Unfortunately, some words may not mean what we think they mean but instead may mean only what the person who expressed those words meant them to mean.

    How do we know what we don’t know?

    My favourite from the above is: “‘Realistic’ has generally been accepted as meaning ‘what is most likely to happen on a usual day’ rather than ‘on a good day’, and was certainly meant to mean ‘don’t send in UCAS predicted grades because we all know that they are usually over-the-top’.”

    When is fiction fact and vice versa?

  27. J Cunningham says:

    If it is true that, “it has been generally accepted that ‘predicted grades’ are usually higher – often by two grades – than the grades as subsequently awarded” and “everyone involved, at schools and at universities, knows ‘the rules of the game’”, then the UK education system as a whole is utterly failing students. However, being someone who works in a school, has teachers as friends and whose husband is a teacher, I do not agree that the vast majority of teachers predict grades usually higher by two grades and that everyone involved “knows the rules of the game”! Teachers know this practice would not be in the interest of the student and if, for example, my husband predicted a grade A and the student received a grade C, he would most definitely face redress for poor prediction.

    Regarding my son’s particular case, all of his KS5 school progress reports indicated a Forecast grade of AAA for his A levels. The school’s Forecast grade is defined as “… an estimate of the grade that the subject teacher believes the student is likely to achieve at the end of the year on the basis of performance so far”. Is this not the same as the guidelines issued by Ofqual, which state that a teacher must take “a realistic judgement of the grade each student would have been most likely to get if they had taken their exam(s)”?

    In addition, the school’s criteria for issuing a Forecast grade are that “The Teacher has taken into account the student’s current assessments, including interim tests, coursework, class work and home learning”. Are these not the same criteria for issuing a CAG?

  28. Christopher says:

    Yes, this is why the issue is so important. As Huy Duong indicates, it goes to the heart of what democracy is about. Some people know the rules of the game. Some people don’t even know it’s a game; a game in which people’s futures are at stake.

  29. hi everyone – I write this on 20 August, GCSE results day, and a lot of water has flowed under a lot of bridges!

    Thank you all for your insightful, and very valid, comments. Now we know the end-game, the saddest situation is that of the conscientious teacher who tried, with great integrity, to keep to the rules-as-were-understood, and felt forced to submit a CAG for the 8th ranked student as a B, rather than the A the teacher felt the student deserved.

    But the spectre of the assumed “no grade inflation”, and the implied rationing, put the teacher in a position like some poor prisoner in Stalinist Russia, coerced into signing a false confession.

    The student will never know, but will just feel disappointed; it would be very understandable for the teacher never to divulge the truth – for might that do more damage than good?

    I hope that hasn’t happened too often, but I’m sure it has happened. Even so, I still think this year’s grades are the fairest ever – if only because the benchmark of 1-grade-in-4-is-wrong is so low (https://www.hepi.ac.uk/2020/03/21/trusting-teachers-is-the-best-way-to-deliver-exam-results-this-summer-and-after/).

    That said, I am still disturbed by the “Stalinist Prisoner’s Dilemma”. Does anyone have any suggestions, please, for how this might best be addressed?

  30. Catherine says:

    Downward moderation by schools which applied their own algorithms is being totally ignored by Ofqual. The concept that all grades issued have no value because they are over-inflated is flawed. My child’s school’s honesty on 11th August, revealed in their statement that “teachers’ predictions may not be the same as their CAGs as the school had gone through several rounds of internal moderation”, illustrates this. Since the government U-turn, the school has issued another statement and is now attempting to reverse some of their own CAGs, where students’ results warranted a higher grade than their CAG. I sympathise with why the school downgraded; however, to tell a student that all your work is indicative of an A grade but your CAG is a B is reprehensible and cruel. What option is open to these students if CAG appeals are only available on the basis of clerical error or discrimination? The CAG fiasco has many stories still to be told.

  31. Catherine, thank you; that’s all very real.

    And since writing about the “Stalinist Prisoner’s Dilemma”, I have now seen this, published on 19 August, in the i – https://inews.co.uk/opinion/gcse-a-level-results-2020-downgrade-students-algorithm-quit-teaching-582789.

    I wonder how many stories there are like this. How can they be brought to light? And, most importantly, how can the corresponding injustices be most fairly remedied (where ‘most fairly’ is about ensuring that ‘legitimate’ cases are resolved, but ‘inappropriate’ – for want of a better word – cases are both deterred and eliminated)?

  32. Catherine Brio says:

    Wow! Thank you for this. I knew I was right, because I had acknowledgement from the school, but people just think you’re a sore loser. I’m both angry and relieved that someone has admitted it. I can now count 3 schools, including the one in the article, that 100% downgraded. The injustice. The poor students who worked so hard and were lambs to the slaughter. My school is trying to put some grades right, but only for small cohorts. How many students are out there, listening to the media tell them their grades are over-inflated, when theirs were downgraded? The Facebook group really needs you back. So many parents are distraught and no one believes them.

  33. Hi Catherine – thank you… yes I think a lot more is going to emerge over the next few days. Very distressing for everyone.

    Why oh why oh why didn’t Ofqual say “these are the (simple) rules, please comply with them, and if you have any outliers, submit robust evidence and expect rigorous scrutiny”, rather like the second option in the “hindsight” blog https://www.hepi.ac.uk/2018/11/06/6676/.

    That would have saved so much anguish. And it isn’t, as they say, rocket science. But they turned it into rocket science (https://www.hepi.ac.uk/2020/08/18/cags-rule-ok/#comment-34606) doing immense damage as a result. Oh dear.

    I left the group thinking that I had timed out, and that there was nothing more I could helpfully contribute…

  34. Catherine Brioche says:

    Hi Dennis. Please also see this article. I have more examples & stories emerging. https://schoolsweek.co.uk/schools-that-followed-advice-to-deflate-grades-must-now-be-given-appeal-route/

    Thanks

  35. Catherine Brioche says:

    Hi Dennis
    The legality of schools applying their own algorithm is under fire.
    https://inews.co.uk/news/education/a-levels-gcses-2020-teenagers-unfairly-downgraded-schools-algorithm-599971

  36. Hi Catherine – thank you for posting those links.

    All this is now exposing the fundamental flaws in the way students have been assessed that have been there for years, flaws that have been not just hidden, but deliberately covered up (https://www.hepi.ac.uk/2020/08/08/weekend-reading-something-important-is-missing-from-ofquals-2020-21-corporate-plan/).

    Exam grades have been unreliable for years (https://www.hepi.ac.uk/2019/02/25/1-school-exam-grade-in-4-is-wrong-thats-the-good-news/), but no one has ever known – there has never been a comparison against which the awarded grade can, or cannot, be matched. Only one grade exists, the awarded grade, and that’s that; and since 2016, Ofqual have deliberately suppressed the opportunity to determine a comparator grade, a “second opinion”, by making it harder to appeal (https://www.silverbulletmachine.com/single-post/2018/10/28/Biting-the-poisoned-cherry—why-the-process-for-school-exams-is-so-unfair).

    But this year – quite inadvertently I’m sure – Ofqual created a total set of comparators. For the first time ever, two complete, different, sets of grades are on the table – the CAGs and the algorithm’s grades – both of which are supposed to be measuring the same thing, this being, as Section 22 of the Education Act 2011 requires, “… a reliable indication of knowledge, skills and understanding…” (https://www.legislation.gov.uk/ukpga/2011/21/section/22). Note that word “reliable”.

    And, guess what, these two measures of the same thing, the CAGs and the results of the algorithm, are different!

    Which one is “right”? Why are they different? Were they based on different assumptions? Did different people approach the assignment of CAGs in different ways?

    The anguish felt by those who have been treated unfairly is real, and that unfairness needs to be redressed.

    But more fundamentally, I hope that all this muddle leads to the widespread realisation that statements such as “Ali is a grade [this], therefore…” and “Chris is a grade [that], therefore…”, with the grades based on exam marks in the way they have been for the last decade or more, are totally, totally wrong.

  37. J Cunningham says:

    “Now that teacher predictions have been allowed to stand following the Government’s U-turn, it means some pupils will have been graded more harshly than in schools where the second-guessing did not happen.”

    This is exactly what has happened with my son’s grades. He was Forecast AAA by the school, his Teacher Awarded Grades were A*AB but the school submitted ABC to the exam board. The school have totally failed him.

    I have appealed to the school; hopefully they will appeal to the exam board, and if not I will be looking into legal action.

  38. Christopher says:

    Unfortunately it looks as if the government has successfully diverted perceived responsibility for the assessment catastrophe from themselves to teachers *shock*.
