The UK's only independent think tank devoted to higher education.

How are HE leaders responding to generative AI?

  • 9 May 2023
  • By Mary Curnock Cook CBE and Nic Newman

This HEPI blog was kindly authored by Mary Curnock Cook CBE, chair of the Jisc-Emerge HE Edtech Board, and Nic Newman, Founder and partner at Emerge Education.

Given some of the recent media coverage of the rise of generative AI and its potential impact on universities, especially around assessment and academic misconduct, it would be easy to fall into the trap of thinking that university leaders are running scared in the face of ChatGPT.

However, when the Jisc-Emerge HE edtech board of higher education leaders met recently to discuss the potential and pitfalls of generative AI, instead of a discussion about the assessment arms race, there was real curiosity and enthusiasm to explore the potential of the technology and what it holds for universities and students. This echoes the positive approach to generative AI shared by Sir Tim O’Shea in a recent HEPI blog post.

This was a well-attended meeting of around 60 DVCs, PVCs (and even six vice-chancellors), most of whom identified on an opening poll as ready to understand and use generative AI in the education mission rather than ban it. 

Breaking down barriers

Generative AI could offer breakthroughs for students with disabilities, especially those with communication disabilities. It can also break down language barriers through quick translation, creating alternative versions of learning materials, and expanding – decolonising, even – the research canon to include non-English sources more easily.

Looking at the broader HE ecosystem, if generative AI models can be trained to answer students’ questions with a higher degree of accuracy than a human tutor might, the cost of education – and therefore cost barriers – could fall very significantly. Improved online learning offers with far more personalisation would accelerate access to education in developing countries. 

AI developments promise more effective ways to make sense of the rich data we have about students, to help better understand their needs and support them. For instance, data and AI can help address questions about how we capture student learning, assess learning outcomes, and identify students who may be quietly struggling.
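
As a minimal illustration of how such data might be used, the sketch below flags students whose engagement has dropped sharply relative to their earlier pattern, as a prompt for tutor outreach. The field names, metric and threshold are entirely hypothetical.

```python
# Hypothetical sketch: flag students whose recent activity has fallen well
# below their earlier average, so a tutor can check in before they disengage.

def flag_quietly_struggling(records, drop_threshold=0.5):
    """Return IDs of students whose recent activity is below
    drop_threshold times their earlier average activity."""
    flagged = []
    for student_id, earlier_avg, recent in records:
        if earlier_avg > 0 and recent < drop_threshold * earlier_avg:
            flagged.append(student_id)
    return flagged

# (student ID, average weekly logins earlier in term, logins last week)
records = [("s001", 10, 9), ("s002", 8, 2), ("s003", 5, 5)]
print(flag_quietly_struggling(records))  # → ['s002']
```

In practice any such rule would sit alongside human judgement and proper data governance; the point is only that simple signals in existing data can surface students who are quietly struggling.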

Teaching, learning and assessment

When the discussion turned to assessment, as all discussion of AI in HE inevitably does, there was a sense of relief in the room rather than panic. The arrival of ChatGPT has indeed thrown down new challenges for many standard assessment methods, including the iconic essay. But many in the meeting welcomed the opportunity to consider more authentic assessment approaches and ways to use AI to develop students’ critical analysis skills and good judgement. As one delegate put it, “Focus on catching cheating is misplaced effort – we need to focus on making assessment more authentic and enhance opportunities for learning, with whatever tools are available.”

On the panel in the meeting, Herk Kailis, CEO and founder of online assessment platform Cadmus, set out how they focus on supporting the learning journey; seen through that lens, the work needed around generative AI lies in improving assessment design components rather than upping detection capabilities. With AI, it is possible to onboard higher quality assessment that is more scalable at lower cost, which can improve outcomes.

Panel member Kate Lindsay, SVP academic services, HigherEd Partners, suggested an example of how universities could also use AI to support the development of authentic assessment tasks. Looking at Bloom’s Taxonomy, she noted that the world wide web and Wikipedia have dealt with the bottom layer of ‘knowing and understanding’, while AI is now addressing the middle layer, synthesising and applying that knowledge. Students will now need to push up into the ‘pointy bit’ of the taxonomy, to employ more human agency in terms of critical thinking and knowledge creation.

Well-designed assessments using ChatGPT might be in stages, where the first stage is to use ChatGPT 3 or 4 to help draft an essay outline and give feedback on draft paragraphs. High level cognitive skills are needed to refine the input instruction to ChatGPT so that higher quality, more accurate outputs are generated.  The next stage is to use unique human skills to develop the answer and add depth, drawing on lived experience and case studies.  AI could also be used to analyse and assess the quality of student work, providing feedback on areas where students need to improve and personalising the learning experience.
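
One way to script the staged workflow described above is to make each stage’s prompt explicit, so the scaffolding the student receives is visible and auditable. The sketch below only builds the chat messages for each stage; the model call itself is left as a stub, and all prompt wording, stage names and the `build_stage_prompt` helper are illustrative assumptions, not any institution’s actual setup.

```python
# Hypothetical sketch of a staged assessment workflow: one explicit prompt
# per stage, rather than a single monolithic request to the model.
# The actual model call is stubbed out; a real implementation would send
# `messages` to a chat-completion API.

def build_stage_prompt(stage, essay_title, draft=None):
    """Return a chat-style message list for one stage of the assessment."""
    stages = {
        "outline": f"Draft a bullet-point outline for an essay titled: {essay_title}",
        "feedback": f"Give constructive feedback on this draft paragraph:\n{draft}",
    }
    return [
        {"role": "system",
         "content": "You are a writing tutor. Give guidance and feedback, "
                    "not finished text the student can submit."},
        {"role": "user", "content": stages[stage]},
    ]

messages = build_stage_prompt("outline", "The ethics of generative AI in education")
# `messages` would then be sent to a chat-completion endpoint, e.g.:
# response = client.chat.completions.create(model="gpt-4", messages=messages)
```

Keeping the stages separate also means the later, distinctly human stages – adding depth from lived experience and case studies – remain the student’s own work, with the AI’s contribution confined to the earlier scaffolding.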

Using AI to generate learning objectives, formative assessment exercises, summary information and sources could deliver significant workload reduction for staff.  We can already anticipate that generative AI engines will soon be able to draft reports, create presentations and images, and summarise meetings for us.

While it might take a long time for automated marking of written work to be acceptable and publicly trusted, the interim development of double marking engines could reduce academic workload and significantly reduce bias. 
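
In its simplest form, a double-marking engine of the kind described could compare an automated mark with a human mark and escalate disagreements for further human review. The sketch below is a hypothetical illustration of that reconciliation rule only, not a real marking system; the tolerance value is an arbitrary assumption.

```python
# Hypothetical double-marking rule: accept the average when human and
# automated marks agree within a tolerance; otherwise escalate the script
# to a second human marker rather than trusting either mark alone.

def reconcile_marks(human_mark, ai_mark, tolerance=10):
    """Return (final_mark, needs_review). Marks are percentages."""
    if abs(human_mark - ai_mark) <= tolerance:
        return (human_mark + ai_mark) / 2, False
    return None, True

print(reconcile_marks(62, 58))  # → (60.0, False): close agreement, averaged
print(reconcile_marks(70, 45))  # → (None, True): disagreement, human review
```

The escalation path is the point: the engine reduces routine second-marking workload while routing exactly the contested cases, where bias and error matter most, back to human judgement.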

Taking a critical view

A further issue lies in the lack of transparency of current generative AI systems. Universities spend a substantial amount of time teaching students to evaluate sources critically and to exercise good judgement in identifying and processing the data and information on which they develop their ideas and views. There is a fundamental lack of openness about how ChatGPT, for example, ensures the quality of the judgements it makes and the quality of the information informing those judgements. As one delegate noted, “no human knows for a given set of inputs what the outputs of the AI will be.” This means that detecting AI content will always be an algorithmic arms race, but it also offers a real opportunity to develop students’ skills.

Since compulsory education is expected to take a conservative approach to curriculum and assessment, it is imperative that universities develop AI use as a core digital skill.

It is clear that our students will graduate into an AI-augmented world. Universities have a responsibility to prepare them for this reality, to provide them with opportunities to experiment with AI tools and understand their potential, and to teach them to employ ethical approaches to their use.

What next?

Crucially, educators themselves need to take the time to understand fully what AI is, how it works and how to use it with care, rather than sleepwalking into an AI-driven world. The sector is currently on the back foot and in reactive mode – AI is ahead of us and we are running hard to catch up.

Nevertheless, the leaders at the Jisc-Emerge HE edtech board felt there was widespread enthusiasm to explore and experiment, and drew a parallel with Covid. Should staff be simply allowed to experiment, as they did with working from home, and learn over time the best way to use AI, giving educators permission to try things out and accept that some things won’t work? Or is an institutional, top-down strategic approach necessary and will we soon be seeing Deans of AI in position? Smaller institutions and providers will need to do the same kind of work but will struggle with resources, so the argument for a sector-wide approach is a strong one.

  • If you are interested in the work of the Jisc-Emerge Edtech Board, please contact Mary Curnock Cook or Nic Newman.  Previous research reports from the Board can be found here.
  • To keep up to date with the latest AI developments for education you may want to follow the work of Jisc’s national centre for AI, which provides thought leadership reports, events, practical guidance and pilots for the latest developments in education AI.


