Given some of the recent media coverage of the rise of generative AI and its potential impact on universities, especially around assessment and academic misconduct, it would be easy to fall into the trap of thinking that university leaders are running scared in the face of ChatGPT.
However, when the Jisc-Emerge HE edtech board of higher education leaders met recently to discuss the potential and pitfalls of generative AI, instead of a discussion about the assessment arms race there was real curiosity and enthusiasm to explore what the technology holds for universities and students. This echoes the positive approach to generative AI shared by Sir Tim O’Shea in a recent HEPI blog post.
This was a well-attended meeting of around 60 DVCs, PVCs (and even six vice-chancellors), most of whom identified in an opening poll as ready to understand and use generative AI in the education mission rather than ban it.
Breaking down barriers
Generative AI could offer breakthroughs for students with disabilities, especially those with communication disabilities. It can also break down language barriers through quick translation, creating alternative versions of learning materials, and expanding – decolonising, even – the research canon to include non-English sources more easily.
Looking at the broader HE ecosystem, if generative AI models can be trained to answer students’ questions with a higher degree of accuracy than a human tutor might, the cost of education – and therefore cost barriers – could fall very significantly. Improved online learning offers with far more personalisation would accelerate access to education in developing countries.
AI developments promise more effective ways to make sense of the rich data we have about students, to help better understand their needs and support them. For instance, data and AI can help address questions about how we capture student learning, assess learning outcomes, and identify students who may be quietly struggling.
Teaching, learning and assessment
When the discussion turned to assessment, as all conversations about AI in HE inevitably do, there was a sense of relief in the room rather than panic. The arrival of ChatGPT has indeed thrown down new challenges for many standard assessment methods, including the iconic essay. But many in the meeting welcomed the opportunity to consider more authentic assessment approaches and ways to use AI to help students develop critical analysis skills and good judgement. As one delegate put it, “Focus on catching cheating is misplaced effort – we need to focus on making assessment more authentic and enhance opportunities for learning, with whatever tools are available.”
On the panel in the meeting, Herk Kailis, CEO and founder of online assessment platform Cadmus, set out how they focus on supporting the learning journey; seen through that lens, the work needed around generative AI lies in improving assessment design rather than upping detection capabilities. With AI, it is possible to deliver higher-quality assessment at greater scale and lower cost, which can improve outcomes.
Panel member Kate Lindsay, SVP academic services, HigherEd Partners, suggested an example of how universities could also use AI to support the development of authentic assessment tasks. Looking at Bloom’s Taxonomy, she noted that the world wide web and Wikipedia have dealt with the bottom layer of ‘knowing and understanding’, while AI is now addressing the middle layer, synthesising and applying that knowledge. Students will now need to push up into the ‘pointy bit’ of the taxonomy, to employ more human agency in terms of critical thinking and knowledge creation.
Well-designed assessments using ChatGPT might be staged, where the first stage is to use ChatGPT (whether powered by GPT-3 or GPT-4) to help draft an essay outline and give feedback on draft paragraphs. High-level cognitive skills are needed to refine the prompt given to ChatGPT so that higher quality, more accurate outputs are generated. The next stage is to use unique human skills to develop the answer and add depth, drawing on lived experience and case studies. AI could also be used to analyse and assess the quality of student work, providing feedback on areas where students need to improve and personalising the learning experience.
Using AI to generate learning objectives, formative assessment exercises, summary information and sources could deliver significant workload reduction for staff. We can already anticipate that generative AI engines will soon be able to draft reports, create presentations and images, and summarise meetings for us.
While it might take a long time for automated marking of written work to be acceptable and publicly trusted, the interim development of double marking engines could reduce academic workload and significantly reduce bias.
Taking a critical view
A further issue lies in the lack of transparency of current generative AI systems. Universities spend a substantial amount of time teaching students to critically evaluate sources and develop good judgement in identifying and processing the data and information on which they build their ideas and views. There is a fundamental lack of openness about how ChatGPT, for example, ensures the quality of the judgements it makes and the quality of the information informing those judgements. As one delegate noted, “no human knows for a given set of inputs what the outputs of the AI will be.” This means that detecting AI content will always be an algorithmic arms race, but it also offers a real opportunity to develop students’ skills.
With the expectation that compulsory education will take a conservative approach to curriculum and assessment, it will be imperative that universities develop AI use as a core digital skill.
It is clear that our students will graduate into an AI-augmented world and AI-augmented careers. Universities have a responsibility to prepare them for this reality, provide them with opportunities to experiment with AI tools, help them understand the tools’ potential, and teach them to employ ethical approaches to their use.
Crucially, educators themselves need to take the time fully to understand what AI is, how it works and how we use it with care, rather than sleepwalking into an AI-driven world. The sector is on the back foot at the moment and in reactive mode – AI is ahead of us and we are running hard to catch up.
Nevertheless, the leaders at the Jisc-Emerge HE edtech board felt there was widespread enthusiasm to explore and experiment, and drew a parallel with Covid. Should staff be simply allowed to experiment, as they did with working from home, and learn over time the best way to use AI, giving educators permission to try things out and accept that some things won’t work? Or is an institutional, top-down strategic approach necessary and will we soon be seeing Deans of AI in position? Smaller institutions and providers will need to do the same kind of work but will struggle with resources, so the argument for a sector-wide approach is a strong one.
- If you are interested in the work of the Jisc-Emerge Edtech Board, please contact Mary Curnock Cook or Nic Newman. Previous research reports from the Board are also available.
- To keep up to date with the latest AI developments for education you may want to follow the work of Jisc’s national centre for AI which provides thought leadership reports, events, practical guidance and pilots for the latest developments in education AI:
Join our mailing list for regular newsletters and updates: [email protected]