
Making the Most of a Crisis? Using The Generative AI Threat to Catalyse Transformational Innovation

  • 28 February 2024
  • By Rod Bristow
  • This HEPI blog was kindly authored by Rod Bristow, former President of Pearson UK. Rod chairs the Academic Advisory Board for the education technology business Kortext and is Visiting Professor at the UCL Institute of Education.

Josh Freeman’s fascinating new HEPI Policy Note, produced in association with Kortext, found that more than half of students have used generative AI for assessments. The note, packed full of information about the amazing opportunity afforded by AI, suggested that despite the headline, fewer than 5% of students ‘are likely to’ have used AI to cheat – a figure based on students’ own reporting.

The wide and generally balanced media commentary focused on this ‘more than half’ statistic, but primarily in a tone more of academic curiosity than indignation – doubtless reassured by the assertion that ‘fewer than 5%’ are likely to have used it to cheat.

Of course, there is no reliable way to know whether AI has been used to cheat, because detection systems cannot identify it reliably, and cheating is a major problem even when the numbers are small. You only need to see the media coverage when an A-level paper is leaked to understand how cheating affects public confidence, and to look at social media to see the fury it provokes among students. Because grade distributions are maintained (to avoid grade inflation), if some people achieve better grades than they earned, others will probably achieve worse grades than they earned. And if some people believe that cheating works, others will feel compelled to follow.

Some say this doesn’t matter because too much is made of assessments, and employers don’t value the things assessments measure. They have a case (there is a much bigger debate to be had about what assessments can reliably measure) but they are missing the point. Assessments are used to grade and award degrees, which are a proxy for what has been learned. Degrees help people get good jobs. If it is no longer necessary to acquire know-how about a subject (other than how to prompt a machine to answer questions), why bother learning, and what is the degree worth? These are obviously questions of existential importance to universities.

The HEPI report suggests the DfE commission a report into how assessment will be affected by AI. Such a report would be useful, but it is the responsibility of governing boards to take the urgent action needed to protect the currency of a degree. This can be done – as in many cases it has been. If it is not, future coverage of AI and assessment may be less kind, presenting an acute challenge to an already hard-pressed sector. There are few things more important to a university than public confidence in the standard and quality of its degrees.

Alongside this threat sits a huge and connected opportunity: to turn a potential crisis in assessment into a catalyst for learning through innovative, transformational redesign of curriculum, pedagogy and assessment. This can be done in ways fit for the technologically changed world students now find themselves in – using technology and AI to deliver a more engaging, more relevant experience as well as better outcomes. Universities which boldly embrace this challenge can unleash these benefits for their students – provided they also manage the immediate threat to the standard of the degree. Let’s not allow a potential crisis to go to waste.
