Ethical AI in higher education: boosting learning, retention and progression

This blog was kindly authored by Isabelle Bambury, Managing Director UK and Europe at Studiosity, a HEPI Partner.

New research highlights a vital policy window: deploying Artificial Intelligence (AI) not as a policing tool but as a powerful mechanism to support student learning and academic persistence.

Evidence from independent researcher Dr Rebecca Mace, drawing on data generated by a mix of high-, middle- and low-tariff UK universities, suggests a compelling positive correlation between the use of ethically embedded ‘AI for Learning’ tools and student retention, academic skill development and confidence. The findings challenge the predominant narrative that focuses solely on AI detection and academic misconduct, advocating instead for a clear and supportive policy framework to harness AI’s educational benefits.

Redefining the AI conversation: from threat to partner

The initial response of higher education institutions to generative AI has been, understandably, centred on fear of disruption. However, this focus overlooks its immense potential to address perennial challenges in the sector, particularly those related to retention and academic preparedness.

Understanding the purpose and pedagogical role of different types of AI – distinguishing between AI for learning, AI for correction and AI for content generation – is crucial for responsible and effective use in higher education, and should shape both institutional policy and the student experience.

As Professor Rebecca Bunting, Vice-Chancellor of the University of Bedfordshire, notes in her Foreword to the new research:

The real conversation we should be having is not about whether students should use AI, but how it can be used ethically and effectively to improve learning outcomes for our students.

This sentiment was echoed in a recent webinar discussing the findings, where guest panellists argued that framing AI as a constant threat leads to a fundamental misunderstanding of how students perceive and use the technology.

HEPI’s Director, Nick Hillman OBE, reinforces the policy relevance of this shift in his own contribution to the new report:

The roll-out of AI is a great opportunity to improve all that higher education institutions do.

Building on research published in HEPI’s recent collection of essays on AI, he also urges policymakers to move away from simplistic binary thinking:

It is now becoming increasingly clear that AI is a tool for use by humans rather than a simple replacement for humans.

The measurable impact: confidence, skills, and retention

The new research focuses on a specific AI for Learning tool from Studiosity in which the AI acts as a learning partner, prompting reflection and supporting students in developing their own ideas, as opposed to generating content on their behalf.

The quantitative findings are striking:

  • Retention: Studiosity use correlated positively with retention and progression. Students accessing this formative feedback were significantly more likely to continue their studies than those who did not. For high-risk students in particular, higher engagement with Studiosity correlated with greater persistence. This suggests the tool acts as a ‘stabilising scaffold’, addressing not just academic gaps but also the psychological barriers (like low self-efficacy) that lead to attrition.
  • Academic skills development: Students showed measurable improvement across academic writing types, with the most significant gains observed in text analysis, scientific reports and essays. Critically, lower-performing students improved fastest, suggesting an equalising effect. This is because the Studiosity tool supports higher-order thinking skills like criticality, use of sources and complexity of language, not just mechanics.
  • Student voice and belonging: Students frequently said the Studiosity tool helped them ‘articulate their ideas more clearly’ and ‘say it right’, rather than generating thoughts for them. As one focus-group participant put it, ‘It’s not the ideas I struggled with; it’s how to start writing them down in the right way’. This function, sometimes called academic code-switching, is crucial for students from underrepresented backgrounds and is vital to fostering a sense of academic belonging.

Bridging the policy-practice divide and the need for equity

However, the research revealed a ‘concerning discrepancy’ between student perception and institutional regulation. A ‘low-trust culture’ appears to be developing, driven by vague institutional messaging, which sees students hiding their use of AI even when it is for legitimate support.

Staff often centre their concerns on policy enforcement and ‘spotting misuse’, while students focus on the personal anxiety of unintentionally crossing ‘ill-defined ethical lines’. As one student explained, ‘I would feel so guilty’ even when AI would make life easier, a sign that the guilt is ‘not rooted purely in fear of being caught, but in a deeper discomfort about presenting work as their own’.

Moreover, there is a clear equity issue. Paywalled AI tools risk deepening the digital divide and penalising students from lower-income backgrounds. Students with low AI literacy are more likely to be flagged for misconduct because they use AI clumsily, while digitally fluent students can blend AI support more subtly.

Recommendations for an ethical AI strategy

The solution is not to resist AI but to integrate it with intentionality, strategy and clarity. The research offers clear and constructive policy proposals for the sector:

  1. Choose the right tool for the job: Focus on dedicated AI for Learning tools that develop skills and maintain academic integrity, rather than all-purpose content-generating chatbots.
  2. Design clear and consistent policy: Develop nuanced policies that move beyond a binary definition of ‘cheating’ to reflect the complex and iterative ways students are now using AI, ensuring consistency across the institution.
  3. Promote transparency: Educators should disclose their own appropriate AI use to remove stigma and foster a culture of critical engagement, allowing students to speak openly about their support needs.
  4. Prioritise equitable access: Institutions should invest in institutionally funded tools to mitigate the digital literacy and economic divides, ensuring all students – especially those most at risk – have fair and transparent access to academic support.

In conclusion

The report concludes that AI offers a substantial policy opportunity to boost a student’s sense of legitimacy and belonging, directly contributing to one of the sector’s most pressing concerns: student success and retention. Policymakers should now shift their attention from policing to pedagogy. You can access a copy of the full report here.

Studiosity provides writing feedback and assessment security to support students and validate learning outcomes at hundreds of universities across five continents, with research-backed evidence of impact.

www.studiosity.com
