Dear DfE… Show Your Workings 

Author:
Mike Crone
This blog was kindly authored by Mike Crone, a final-year law student at the University of Reading. He is developing a series of blogs, articles and research on public law matters and the future of higher education policy.

The Issue 

You look intently at the assessment mark. 52%. That’s it. No why. No how. Just 52%. The comments read something like: ‘Good structure. Engage more critically’. Two sentences to explain twelve weeks of work. And that number now decides your classification, your next step, maybe your career. 

In the HEPI and Advance HE Student Academic Experience Survey 2025, 58% of respondents said that all or most of their teaching staff gave them useful feedback. The corollary is that for the remaining 42%, only around half, a minority or none of their teaching staff were providing useful feedback. Similarly, 51% of respondents said that all or a majority of teaching staff provided feedback on draft work, and 58% said that all or a majority of their teaching staff gave them more general feedback on their progress. While some of these figures are welcome, there is an issue of consistency: most students have a positive experience of feedback from most of their teaching staff, but there are gaps in the system. For example, 14% of students said that a minority or none of their teaching staff provided them with useful feedback. 

While these figures have improved over the last five years, they remain concerning. Where useful feedback is lacking, marks may be awarded without transparent explanation, feedback is often vague, and links to assessment rubrics may be missing or inconsistently applied. As things stand, students are not consistently shown how to improve, and even where rubrics are introduced, their effectiveness hinges on clarity, training and implementation, all of which vary widely. Students who question a result may simply be told it falls under ‘academic judgement’. 

In a system that demands students explain every idea, quote every claim, and justify every argument, surely institutions should be held to the same standard? 

This would be concerning at any time. But in 2025, it’s urgent. Ninety-two per cent of students now use generative AI tools in their studies, up from 66 per cent just a year ago, according to the HEPI–Kortext Gen AI Survey. As the Guardian reported, thousands of UK university students were caught cheating using AI in the last academic year. The pressure on universities to modernise assessment and restore student trust has never been greater. 

And as Rohan Selva-Radov highlighted in his HEPI Policy Note Non-Examinable Content: Student access to exam scripts, most students do not even see their exam scripts. If students cannot access the work being judged, feedback loses almost all its value. Transparency begins with access. Without it, fairness collapses. Rohan’s superb recommendations on page 10 of the Policy Note lay the foundations for putting this right. 

The Problem 

Assessment is the foundation of credibility in higher education. But right now, that foundation is cracking. Markers vary. Some use rubrics carefully. Others rely on instinct. A recent study of programming assignments asked 28 markers to grade the same set of student submissions. The results were wildly inconsistent, and on some criteria the level of agreement was close to random. Double marking and moderation exist, but they rarely give students clarity. Feedback still often consists of vague phrases like ‘needs depth’ or ‘some repetition’, which give no insight into how the grade was reached. 
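
To make ‘close to random’ concrete: agreement between markers is commonly measured with Fleiss’ kappa, where 1 means perfect agreement and 0 means agreement no better than chance. The minimal Python sketch below uses synthetic marks, not the study’s data, to show how 28 markers grading near-randomly across four degree bands produce a kappa close to zero.

```python
import numpy as np

def fleiss_kappa(counts: np.ndarray) -> float:
    """Fleiss' kappa for an items x categories matrix of rater counts."""
    n = counts.sum(axis=1)[0]                      # raters per item (assumed equal)
    p_item = ((counts ** 2).sum(axis=1) - n) / (n * (n - 1))
    p_bar = p_item.mean()                          # observed agreement
    p_cat = counts.sum(axis=0) / counts.sum()      # category proportions
    p_e = (p_cat ** 2).sum()                       # agreement expected by chance
    return (p_bar - p_e) / (1 - p_e)

# Synthetic illustration: 28 markers assigning 20 scripts to one of four
# bands (e.g. Fail, 2:2, 2:1, 1st) essentially at random.
rng = np.random.default_rng(0)
n_items, n_markers, n_bands = 20, 28, 4
raw = rng.integers(0, n_bands, size=(n_items, n_markers))
counts = np.stack([np.bincount(row, minlength=n_bands) for row in raw])
print(f"Fleiss' kappa: {fleiss_kappa(counts):.2f}")  # ~0: chance-level agreement
```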

This is not only a pedagogical failure. It raises legal concerns. 

Under Section 49 of the Consumer Rights Act 2015, universities must provide services with ‘reasonable care and skill’. If a student receives a grade without explanation, it risks breaching that statutory duty. Schedule 2 of the Act lists examples of unfair terms, many of which could be triggered by provisions in student handbooks or teaching contracts. 

The Equality Act 2010 goes further. Sections 20 and 21 require universities to make reasonable adjustments where a provision, criterion or practice places disabled students at a substantial disadvantage, and Schedule 13 sets out how those duties apply to higher education institutions. Vague or unstructured feedback can do exactly that, especially for neurodivergent students who may rely on clarity and structure to improve. Where feedback is not intelligible, impactful and rubric-aligned, universities may be breaching the public sector equality duty under Section 149 as well as the individual duty under Section 20. 

Meanwhile, the formats we continue to rely on (long essays and high-stakes exams) are increasingly misaligned with the world graduates inhabit. Essays reward polish and adherence to a curriculum style. Exams reward memory under pressure. Both reward conformity. Neither reflects how people learn and work today, especially in an age of technology and AI-supported thinking. 

If students are learning differently, thinking differently and writing differently, why aren’t we assessing them differently? 

The Solution 

The Department for Education (DfE) has the power to act. The Secretary of State for Education and Minister for Women and Equalities appoints the board of the Office for Students (OfS) and sets its regulatory priorities. The OfS was designed as a buffer, not a direct arm of government. But if students cannot trust how their futures are decided, then the DfE must ensure the OfS enforces transparency. This does not mean ministers marking essays. It means regulators requiring clear and fair feedback from institutions. 

First, every summative assessment should include a short, criterion-linked justification. Paragraphs should be labelled according to the rubric. If the student scored a 2:2 on structure and a 1st on analysis, they should be told so clearly and briefly. This could be as simple as colour-coding the criteria on the rubric table and then highlighting each sentence, paragraph or section of the script in the colour of the rubric area it relates to, as sketched below. 
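
To illustrate, here is a minimal sketch of that colour-coding idea in Python. The criteria, colours and bands are hypothetical, not drawn from any institution’s real rubric.

```python
# Hypothetical sketch of criterion-linked, colour-coded feedback.
# Criterion names, colours and bands are illustrative only.
RUBRIC_COLOURS = {
    "structure": "yellow",
    "analysis": "green",
    "use of authority": "blue",
}

# Each highlighted span records the criterion it evidences and the band awarded.
feedback_spans = [
    {"text": "Clear introduction, but the middle sections drift.",
     "criterion": "structure", "band": "2:2"},
    {"text": "The counter-argument in paragraph four is handled incisively.",
     "criterion": "analysis", "band": "1st"},
]

# Render a short, criterion-linked justification for the student.
for span in feedback_spans:
    colour = RUBRIC_COLOURS[span["criterion"]]
    print(f"[{colour}] {span['criterion']} ({span['band']}): {span['text']}")
```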

Second, from September 2025, Jisc is piloting AI-assisted marking tools like Graide, KEATH and TeacherMatic. These systems generate rubric-matched feedback and highlight inconsistencies. They do not replace human markers. They reveal the thinking behind a mark, or its absence. 
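
What ‘highlighting inconsistencies’ might look like in practice: compare the awarded overall mark against the mark implied by the per-criterion rubric scores, and flag scripts where the two diverge. The Python sketch below is a hypothetical illustration in that spirit; it is not the actual logic or API of Graide, KEATH or TeacherMatic, and the weights and tolerance are invented.

```python
# Hypothetical consistency check: does the overall mark follow from the rubric?
# Criterion weights and the tolerance are illustrative, not a real tool's config.
CRITERION_WEIGHTS = {"structure": 0.3, "analysis": 0.5, "use of authority": 0.2}

def implied_mark(criterion_marks: dict[str, float]) -> float:
    """Overall mark implied by the per-criterion marks and rubric weights."""
    return sum(CRITERION_WEIGHTS[c] * m for c, m in criterion_marks.items())

def flag_inconsistency(overall: float, criterion_marks: dict[str, float],
                       tolerance: float = 5.0) -> bool:
    """Flag scripts where the awarded mark diverges from the rubric-implied mark."""
    return abs(overall - implied_mark(criterion_marks)) > tolerance

# The opening example: a 52% overall mark the rubric scores do not explain.
print(flag_inconsistency(52, {"structure": 55, "analysis": 68,
                              "use of authority": 60}))  # True: implied mark is 62.5
```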

Pilots should be funded nationally. The results should be made public. If AI improves consistency and transparency, it should be integrated with safeguards and moderation. 

Third, we need fewer mega-assessments and more micro-assessments. Small, frequent tasks: oral analyses, short-answer applications, real-world simulations, timed practicals. These are harder to cheat on, easier to mark, and better at testing what matters: judgement, adaptability, and process. 

British University Vietnam has already piloted an AI-integrated assessment model with a 33 per cent increase in pass rates and a 5.9 per cent rise in overall attainment. This is not theory. It is happening. But that, precisely, is the concern. A jump in attainment might reflect grade inflation or relaxed calibration rather than increased accuracy. Recent studies complicate the AI narrative: a 2025 study in BMC Medical Education found that while AI systems like ChatGPT-4o and Gemini Flash 1.5 performed well in visually observable OSCE tasks (e.g., catheterisation, injections), they struggled with tasks involving communication or verbal interpretation, the areas where nuance matters most. 

Finally, the OfS registration conditions can be updated to require forensic marking as a basic quality measure. The QAA Quality Code can be revised to mandate ‘outcome-reason mapping’. Institutional risk and satisfaction profiles can include indicators like student trust, misconduct rates, and assessment opacity. 

It is to be noted that, as per the Competition & Markets Authority’s (CMA) guidance and the case of Clark v University of Lincolnshire and Humberside [2000] EWCA Civ 129, if assessment is not transparent, it may not be lawful, and could be left open to judicial challenge. However, it may not be wise to pursue such a challenge through an application for judicial review. The precedent set in Clark, the cases that followed it, and the CMA’s guidance all but close the door to judicial review. In turn, however, they leave open the door to a civil action for breach of contract. 

In conclusion, Dear DfE… please see me after class regarding the above. If students must show their workings, then so must academic institutions, with government support. Given the population’s ever-increasing appetite for litigation, it would seem prudent to act pre-emptively, and collaboratively, to mitigate such risks. 


Comments

  • Charrua says:

    Aye, aye!
    Now that we agree, somebody has to pay for it. Marking and feedback are not free. It is unrealistic to pretend that a person can mark a 2,000-word essay and provide constructive, valuable, accurate feedback in 15 minutes, non-stop, 8 hours a day, over 10-15 days.
    And another source of university income will need to be found: (1) staff salaries (after years of losses against inflation) are dismal when considering the opportunity cost, and (2) international students are already choosing alternatives to UK HE.
    UK society has been getting more than it has paid for, for years. Reality is finally catching up.

  • David Palfreyman says:

    A very thoughtful piece – thanks. In our The Law of Higher Education (Farrington & Palfreyman, Oxford U Press, third edition, 2021) we cover a case where the range of first marks, second marks, re-marks on appeal, re-re-marks required by the Court was all over the place from Fail to First for the same item of work – so much for the professionalism of academics when assessing student performance!

  • Paul Vincent Smith says:

    “It is to be noted that, as per the Competition & Markets Authority’s (CMA) guidance and the case of Clark v University of Lincolnshire and Humberside [2000] EWCA Civ 129, if assessment is not transparent, it may not be lawful, and could be left open to judicial challenge.”

    I’m not sure the CMA document says this at all; it addresses “the overall method(s) of assessment for the course”, but not how work is marked (a slippery thing at best) or how marks are awarded. If I’m wrong, please correct me.
