- By Professor Alejandro Armellini, Dean of Education and Digital Innovation at the University of Portsmouth.
Universities want to be at the cutting edge of knowledge creation, but many are grappling with a paradox: how to harness the potential of AI while minimising its pitfalls. Done well, generative AI can help institutions run more efficiently, enhance teaching quality and support students in new and exciting ways. Done poorly, it can generate misinformation, introduce bias and make students (and staff) over-reliant on technology they do not fully understand. The challenge is not whether to use AI but how to make it work for human-driven, high-quality education.
Across the sector, institutions are already putting AI to work in ways that go far beyond administrative efficiencies. At many universities, AI-driven analytics are helping identify students at risk of disengagement before they drop out. Because these systems analyse attendance, engagement and performance data, tutors can intervene earlier, offering personalised support before problems escalate. Others have deployed AI-powered feedback systems that provide students with instant formative feedback on their writing. The impact? Students improve before their assignments are due, rather than after they have been graded.
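To make the idea concrete, here is a minimal, hypothetical sketch of the kind of early-warning logic such analytics might apply. The field names, weights and thresholds (attendance_rate, vle_logins_per_week, recent_average_mark) are illustrative assumptions, not a description of any particular institution's system, which would typically be trained and validated on its own data.

```python
from dataclasses import dataclass

@dataclass
class StudentRecord:
    """Illustrative engagement data; field names and units are assumptions."""
    student_id: str
    attendance_rate: float        # 0.0-1.0, proportion of sessions attended
    vle_logins_per_week: float    # average logins to the virtual learning environment
    recent_average_mark: float    # 0-100, mean of most recent assessed work

def risk_score(record: StudentRecord) -> float:
    """Combine simple engagement signals into a 0-1 disengagement risk score.

    The weights and cut-offs below are placeholders; a real system would be
    calibrated on institutional data and checked for bias.
    """
    attendance_risk = 1.0 - record.attendance_rate
    login_risk = max(0.0, 1.0 - record.vle_logins_per_week / 5.0)   # 5+ logins a week => low risk
    marks_risk = max(0.0, (50.0 - record.recent_average_mark) / 50.0)  # marks below 50 raise risk
    return 0.4 * attendance_risk + 0.3 * login_risk + 0.3 * marks_risk

def students_to_contact(records: list[StudentRecord], threshold: float = 0.5) -> list[str]:
    """Return the student IDs a personal tutor might review first, highest risk first."""
    flagged = [(risk_score(r), r.student_id) for r in records]
    return [sid for score, sid in sorted(flagged, reverse=True) if score >= threshold]

if __name__ == "__main__":
    cohort = [
        StudentRecord("s001", attendance_rate=0.95, vle_logins_per_week=6.0, recent_average_mark=68.0),
        StudentRecord("s002", attendance_rate=0.40, vle_logins_per_week=1.0, recent_average_mark=42.0),
    ]
    print(students_to_contact(cohort))  # -> ['s002']
```

Even a toy example like this makes the governance point: the weights encode judgments about which students receive attention, which is precisely why transparency and bias-checking matter.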
Concerns about the accuracy, transparency and provenance of AI tools have been well documented. Many of them operate as ‘black boxes’, making it difficult to verify outputs or attribute sources. These challenges run counter to academic norms of evidence, citation and rigour. AI tools continue to occupy a liminal space: they promise and deliver a lot, but are not yet fully trusted. AI can get things spectacularly wrong. AI-powered recruitment tools have been found to be biased against women and minority candidates, reinforcing rather than challenging existing inequalities. AI-driven assessment tools have been criticised for amplifying bias, grading students unfairly or making errors that, when left unchallenged, can have serious consequences for academic progression.
With new applications emerging almost daily, it’s becoming harder to assess their quality, reliability and appropriateness for academic use. Some institutions rush headlong into AI adoption without considering long-term implications, while others hesitate, paralysed by the sheer number of options, risks and potential costs. Indeed, a major barrier to AI adoption at all levels in higher education is fear: fear of the unknown, fear of losing control, fear of job displacement, fear of fostering metacognitive laziness. AI challenges long-held beliefs about authorship, expertise and what constitutes meaningful engagement with learning. Its use can blur the boundaries between legitimate assistance and academic misconduct. Students express concerns about being evaluated by algorithms rather than humans. These fears are not unfounded, but they must be met with institutional transparency, clear communication, ethical guidelines and a commitment to keeping AI as an enabler, not a replacement, for human judgment and interaction. Universities are learning too.
No discussion on AI in universities would be complete without addressing the notion of ‘future-proofing’. The very idea that we can somehow freeze a moving target is, at best, naive and, at worst, an exercise in expensive futility. Universities drafting AI policies today will likely find them obsolete before the ink has dried. Many have explicitly reversed earlier AI policies. That said, having an AI policy is not without merit: it signals an institutional commitment to ethical AI use, academic integrity and responsible governance. The trick is to focus on agile, principle-based approaches that can adapt as AI continues to develop. Over-regulation risks stifling innovation, while under-regulation may lead to confusion or misuse. A good AI policy should be less about prediction and more about preparation: equipping staff and students with the skills and capabilities to navigate an AI-rich world, while creating a culture that embraces change. Large-scale curriculum and pedagogic redesign is inevitable.
Where does all this leave us? Universities must approach AI with a mix of enthusiasm and caution, ensuring that innovation does not come at the expense of academic integrity or quality. Investing in AI fluency (not just ‘literacy’) for staff and students is essential, as is institutional clarity on responsible AI use. Universities should focus on how AI can support (not replace) the fundamental principles of good teaching and learning. They must remain committed to the simple but powerful principle of teaching well, consistently well: every student, every session, every time.
AI is a tool – powerful, perhaps partly flawed, but full of potential. It is the pocket calculator of the 1970s. How universities wield it will determine whether it leads to genuine transformation or a series of expensive (and reputationally risky) missteps. The challenge, then, is to stay in control, keep the focus on successful learning experiences in their multiple manifestations, and never let AI run the show alone. After all, no algorithm has yet mastered the art of handling a seminar full of students who haven’t done the reading.