This HEPI guest blog was kindly authored by Mary Curnock Cook CBE, who chairs the Emerge/Jisc HE Edtech Advisory Board, and Bess Brennan, Chief of University Partnerships at Cadmus, which is running a series of collaborative roundtables with UK university leaders about the challenges and opportunities of generative AI in Higher Education.
Last month’s invitation-only meeting focused on assessment, with four HE leaders – from the University of Greenwich, Imperial College London, University of Glasgow and Queen’s University Belfast – sharing their institutional frameworks and practical solutions. Together, they took us through the cycle of assessment from the grassroots of assessment design through student and educator AI literacy to tech-led solutions that support students’ authentic progression and engagement with assessment tasks.
Central to all the discussions were the practical ways in which universities are working with students to “use AI to think with you, not for you”. The concept goes to the heart of these leaders’ efforts to move the emphasis from detection of AI misuse to recognising the place AI will have in both education and the future working lives of today’s students.
Sharing solutions
There was a clear consensus among the leaders that there is no silver bullet that will dispel the spectre of work being written or part-written by generative AI – no back-end checker can prove beyond doubt that a student has or has not misused AI in completing an assessment.
However, as the discussion showed, with care and thought backed by institutional policy and strategy, there are actions leaders can take: designing assessments that align constructively with learning outcomes; supporting and scaffolding students through assessment; delivering encouraging, timely and helpful feedback; and ensuring that assessment is iterative, inclusive and integrated with learning activities and resources.
University of Greenwich: checking assessment vulnerability
For example, the University of Greenwich is considering all its assessments from the bottom up to ensure they are relevant and authentically reflect what graduates might encounter in the workplace. As a professional technical university with a large widening participation intake, Greenwich is dedicated to developing skills in students that will be of use in the workplace – and the ability to use generative AI effectively and ethically is certainly a skill employers will increasingly value.
The university has developed a simple but effective tool for gauging how vulnerable its assessments are to misuse of generative AI. Using the AI Risk Measure Scale (ARMS), programme leaders assign a numerical risk rating to every assessment in their courses and flag those that are very high risk so they can then be modified.
As Professor Jenny Marie, Pro Vice-Chancellor of Education at the University of Greenwich, explained,
An assessment we would class as high risk of being vulnerable to AI misuse is testing something which no longer has relevance to our students and is no longer relevant to what we should be teaching at university. If you can get generative AI to produce a summary or an abstract, then why are we asking students to do that?
At the same time, Greenwich is aware that such a high-risk/low-risk framework can be a blunt tool. Although in-person exams generally carry very low risk of academic misconduct, moving all assessments to in-person exams would be a retrograde step strategically for Greenwich, which is keen to move towards more authentic assessments and a policy of ‘exam by exception’.
Imperial College London: AI stress testing assessments
At Imperial College London, a similarly close examination of the purpose and suitability of assessments is taking place. An AI stress test is being used to drive educator AI literacy and explore the robustness and resilience of existing assessments, as well as extend innovation in assessment design and foster a culture of sharing and collaboration within the college.
Taking a team approach, individual members of staff worked with educational experts from Imperial’s educational development unit and Ed Tech Lab to stress test their assessments, learning basic AI prompt engineering to explore how robust the assessments really were.
We decided we needed to help our staff with prompt engineering because there was a serious degree of complacency early on with people having a quick go at seeing what AI does and then deciding it didn’t seem to do their assessments very well – but that was not a reliable guide to whether it genuinely could be used by smart students.
Professor Alan Spivey, Associate Provost (Learning and Teaching) at Imperial
All the cases where an assessment has changed substantially have been collected into an Anatomy of Assessment resource, both to share lessons and as part of a policy of reducing the assessment burden on students. Where an assessment might be susceptible to generative AI, course leaders are asked whether it needs to take place at all and how far what is being assessed aligns with what is being taught.
University of Glasgow: building educator AI literacy
Educator AI literacy is also high on the agenda at the University of Glasgow. The Practice Enhancement Tool is a 10-minute survey for any staff member involved in teaching and assessment; it asks questions about meaningful, iterative, programmatic and inclusive assessment and prompts staff to reflect on their own practice.
The results feed a dashboard that shows individual staff members how their own practice relates to Glasgow’s learning-through-assessment framework and then steers them towards resources and people who can help. It also gives leaders, such as Professor Moira Fischbacher-Smith, Vice-Principal (Learning and Teaching), an overview of practice across the whole university:
We are trying to set a context where we think very carefully about assessment design from the outset and ask, are we over assessing? Are we making sure that the assessment we design for students is connected to their learning and connected to skills? Are those skills really surfaced through the work that they’re doing?
Queen’s University Belfast: piloting AI-augmented feedback
Taking a slightly different tack, Queen’s University Belfast is exploring whether an AI-augmented approach can enhance feedback to students by making it more encouraging and engaging. Co-designed and co-delivered with 100 psychology students at the university, who will work on the project over the long term, the study is trialling whether ‘human-in-the-loop’ AI-augmented feedback can ease staff workload and improve student experience and success. The results have surprised staff.
The students rated the quality of the AI-augmented feedback much more highly than the original feedback. They said it was much nicer, far more encouraging and provided them with very clear paths for improvement. It was easier for them to read and understand, especially when English wasn’t their first language.
Professor Judy Williams, Pro Vice-Chancellor for Education and Students at Queen’s University Belfast
The success metric
While there is no silver bullet for assessment assurance, there are certainly tech enablers to support universities – and students. For example, Herk Kailis, Cadmus CEO, explained how its timeline tool is being used.
Within the Cadmus authentic assessment online platform, end-to-end reconstruction reports can replay the construction of an assessment from start to finish, crucially looking at the process rather than the end product. The platform gives educators real-time analytics that monitor how students develop their assessment, rather than trying to catch academic misconduct at the point of submission. Educators can clearly see how many hours a student has spent writing their assessment in Cadmus, whether work was copied and pasted or transcribed into Cadmus, and whether pasted work appears to be inauthentically created.
However, while the timeline tool can show that a large chunk of text has (or has not) simply been pasted into the interface from ChatGPT, Herk Kailis revealed that when it was released to academics, they used it in class with students as an exemplar rather than as a punitive tool.
By highlighting the process students need to go through for an assessment – from how long it should take and how to get started, to building references and working through plans and checklists – the tool showed students how to break the task down, giving them confidence, scaffolding them through the steps and, ultimately, lowering barriers to engagement with the assessment.
“It highlights that it is assuring learning that should always be the goal – that’s the success metric, not catching students cheating,” concluded Herk Kailis.
Good knowledge about AI