From Detection to Development: How Universities Are Ethically Embedding AI for Learning
This HEPI blog was authored by Isabelle Bambury, Managing Director UK and Europe at Studiosity, a HEPI Partner.
The Universities UK Annual Conference always serves as a vital barometer for the higher education sector, and this year, few topics were as prominent as the role of Generative Artificial Intelligence (GenAI). A packed session, Ethical AI in Higher Education for improving learning outcomes: A policy and leadership discussion, provided a refreshing and pragmatic perspective, moving the conversation beyond academic integrity fears and towards genuine educational innovation.
Based on early findings from new independent research commissioned by Studiosity, the session’s panellists offered crucial insights and a clear path forward.
A new focus: from policing to pedagogy
For months, the discussion around GenAI has been dominated by concerns over academic misconduct and the development of detection tools. However, as HEPI Director Nick Hillman OBE highlighted, this new report takes a different tack. Its unique focus is on how AI can support active learning, rather than just how students are using it.
The findings, presented by independent researcher Rebecca Mace, show a direct correlation between the ethical use of AI for learning and improved student attainment and retention. Crucially, these positive effects were particularly noticeable among students often described as ‘non-traditional’. This reframes the conversation, positioning AI not as a threat to learning but as a powerful tool to enhance it, especially for those who need it most.
The analogy that works
The ferocious pace of AI’s introduction to the sector has undoubtedly caught many off guard. Professor Marc Griffiths, Pro-Vice Chancellor for Regional Partnerships, Engagement & Innovation at UWE Bristol, acknowledged this head-on, advocating for a dual approach of governance and ‘sand-boxing’ (the security practice of isolating and testing to make sure an application, system or platform is safe) of new technologies. Instead of simply denying access, he argued, we must test new tools and develop clear guardrails for their use.
In a welcome departure from the widely used but ultimately flawed calculator analogy (see Generative AI is not a ‘calculator for words’: 5 reasons why this idea is misleading), Professor Griffiths offered a more fitting one: the overhead projector. Like PowerPoint today, the projector was a new technology that served as a conduit for content, but it never replaced the core act of teaching and learning itself. AI, he posited, is simply another conduit. It is what we put into it, and what we get out of it, that matters.
Evidenced insights and reframing the conversation
The panel also grappled with the core questions leaders must ask themselves. Stephanie Harris, Director of Policy at Universities UK, posed two fundamental challenges:
- How can I safeguard my key product that I am offering to students?
- How can I prepare my students for the workforce if I don’t yet know how AI will be used in the future?
She stressed the importance of protecting the integrity of the educational experience to prevent an ‘erosion of trust’ between students and institutions. In response to the second question, both Steph and Marc emphasised that the answer lies not in specific tech skills but in timeless critical thinking skills that will prepare students not just for the next three years, but for the next 15. The conversation also touched on the need for universities to consider students under 16 as the future pipeline, ensuring our policies and frameworks are future-proof. Steph pointed to further prompts for leaders in a UUK-authored OfS blog, Embracing innovation in higher education: our approach to artificial intelligence, which she summarised with the commonsense shorthand ‘have fun, don’t be stupid!’.
The session drove home the importance of evidence-based insights. Dr David Pike, Head of Digital Learning at the University of Bedfordshire, shared key findings from his own research comparing outcomes for students who used Studiosity with those who did not, stating that the results were ‘very clear’ that students did improve at scale. He presented powerful data showing significant, measurable academic progress, along with a strong positive correlation with retention and progression. Dr Pike concluded that, given this demonstrated positive impact, we should be calling the technology ‘Assisted Intelligence’, because when used correctly, that is exactly what it is.
A guiding framework of values
To navigate this new landscape, Professor Griffiths laid out seven core values that must underpin institutional policy on AI:
- Academic integrity: Supporting learning, not replacing it.
- Equity of access: Addressing the real challenge of paywalls.
- Transparency: Clearly communicating how students will be supported.
- Ethical responsibility
- Empowerment and capability building
- Resilience
- Adaptability
These values offer a robust framework for leaders looking to create policies that are both consistent and fair, ensuring that AI use aligns with a university’s mission.
The policy challenge of digital inequality
The issue of equity of access was explored in greater detail by Nick Hillman, who connected the digital divide to the broader student funding landscape. He pointed out that no government has commissioned a proper review of the actual cost of being a student since 1958, even though participating fully in student life now costs upwards of £20,000 a year. He made a powerful case for increased maintenance support to match any increase in tuition fees, which would also help prevent further disparity between those who can afford premium tech tools and those who cannot. This highlights that addressing digital inequality is not just a technical challenge; it is a fundamental policy one too.
In closing
The session’s core message was clear: while the rise of AI has been rapid, the sector’s response does not have to be only reactive. By embracing a proactive, values-led approach that prioritises ethical development, equity and human-centric learning, universities can turn what was once seen as a threat into a powerful catalyst for positive change.
Studiosity is AI-for-Learning, not corrections – built to scale student success, empower educators and improve retention, while ensuring integrity and reducing institutional risk.
Comments
Gavin Moodie says:
Thanx for this.
I suggest the fundamental challenge is not ‘How can I safeguard my key product that I am offering to students?’ but ‘How can educational institutions change their education so that students, graduates, teachers and researchers maximise the potential of large language models?’
Ron Barnett says:
Just a quick plea: can we please refrain from having articles/blogs on AI that casually include an en passant – ie, obligatory – mention of critical thinking in a single sentence? Two reasons:
– (Minor reason): The matter of critical thinking has a huge and fractious literature across the last 40+ years, and it would be good if those who speak of it in academic circles could at least indicate that the matter is complex and not at all straightforward (eg, how have understandings of CT changed over the years? What is to count as CT in a populist/state-steered setting? What are the relationships between CT and critical pedagogy, critical thought, critical reflection and criticality?)
– (Major reason): What is it to be critical of LLMs when they are opaque? Their owners do not divulge their algorithmic basis, the sources that they have mined (probably illegally), or the interests that are driving their corporations (certainly not educational interests). Ie, there are very tight limits to ‘critical thinking’ in an AI environment – and, that being so, we cannot seriously be said to be concerned with critical thought, which has to be free of constraint.
What we are getting here is critical thinking on the terms of the powerful, working in their interests, and not those of planetary or even human flourishing.
Ron Barnett
(By the way, where are students encouraged to be critical of the ‘learning outcomes’ that frame their pedagogical environments? Students are obliged to work within the horizons of those learning outcomes, to which they have not been party – and again, limits are imposed on their criticality.)