This blog has been kindly written for HEPI by Eve Alcock, Head of Public Affairs at the Quality Assurance Agency for Higher Education (QAA).
My Twitter algorithm on the ‘For You’ page has detected that I have an interest in ChatGPT. All day, every day, it serves me a new piece of content written by another expert about – depending on the day – its horrifying perils or wondrous opportunities.
Even in conversations with higher education providers, it is clear there is a range of views and approaches in play. Some are treating generative artificial intelligence models like essay mills, doing what they can to make clear to students that their use is prohibited. Some have opted to recognise the value these tools provide in supporting the learning process, asking students to declare their use transparently. And others, I was surprised to hear, have yet to register their existence at all – though the same probably can’t be said for those providers’ own students.
The implications of artificial intelligence for society are thorny, and higher education is no exception. The very value of a degree rests on the assumption that the student has completed all their assignments through their own ability.
- With open access to chatbots, how can lecturers be sure a student’s work is their own?
- Where a student’s academic achievement is measured through their ability to write an essay, what does it mean that technology is able to do the same?
- And if students go on to take jobs where access to and use of technologies like ChatGPT are possible, or even encouraged, what are we really preparing them for?
I hear lots of people downplaying ChatGPT because it ‘confidently states things that are wrong’ and ‘fabricates references’. News flash: this is the least sophisticated this technology will ever be, and it’s still got the sector in a spin. But it presents a real and overdue opportunity to rethink the purpose of higher education and how we deliver it – an opportunity that first emerged during the pandemic.
The impact of COVID on the campus experience has been much discussed and researched. We hear that, while digital delivery and technology-assisted learning were welcomed by students, not least for their accessibility, students still yearned for higher education experiences that were uniquely human. They wanted online lectures and relationships with their personal tutors. They wanted asynchronous learning time and spaces on campus to be in community with each other. Technology can only do so much when it comes to belonging.
As Sarah Eaton highlights in her new book, we’re looking at a future where hybrid human-AI writing is the norm. Artificial intelligence isn’t only a horrifying peril, or a wondrous opportunity; it is both and more. So the conversation we need to be having about generative AI in the sector has to reflect those shades of grey between the black and white as we embark on this transformative period of change.
That’s why QAA is running an ongoing programme of work around artificial intelligence. We’re collating useful resources and links into a single repository on our website that we update weekly. We’ll also be convening the sector through a series of webinars, creating the space to get under the skin of some of the issues I’ve mentioned here.
The webinars will be followed by further QAA guidance on the topic, following the briefing paper we published in January. You can sign up to the webinars via the links below:
- ChatGPT: To ban or not to ban? 22 March 11:00-12:00
- ChatGPT: How do I use it as a force for good? 31 March 11:00-12:00
- ChatGPT: What should assessment look like now? 18 April 10:00-11:00
As the sector finds its feet on artificial intelligence, documenting good practice and ongoing challenges will be vital. HEPI is partnering with Curio to survey the sector on approaches to technologies like ChatGPT and will be producing a report that lays out where the sector is on the issue. If you haven’t already, I’d encourage you to respond to the survey.