What generative AI reveals about staff capability and institutional risk in higher education
This blog was kindly authored by Dr Emma Ransome, Academic Lead for Teaching and Learning at Birmingham City University.
In a previous HEPI blog, I argued that generative AI exposes deep flaws in assessment practice, and that compliance-driven responses risk missing opportunities for genuine pedagogic reform. While this remains important, the sector's fixation on student use risks obscuring a more systemic challenge. Generative AI is not simply a student disruption; it is exposing uneven staff capability, fragile pedagogic infrastructures, and institutional incoherence in curriculum design and governance. This piece turns to the less comfortable question of staff capability and confidence in designing teaching and assessment in an AI-enabled context, and the implications this has for equity, quality and institutional risk.
Variation in staff capability is already producing uneven student experiences across programmes and institutions. If institutions continue to see AI primarily as a student behaviour problem, they risk misdiagnosing the challenge and misdirecting their interventions. The success or failure of AI integration in higher education will be determined less by student misconduct than by professional capacity and institutional design.
Generative AI as a mirror on academic practice
Jisc’s AI in tertiary education report and HEPI’s Generative AI surveys (2024, 2025, 2026) show that generative AI has disrupted assessment practices and exposed uneven pedagogic capability across institutions. Some educators are experimenting with authentic assessment, integrating AI transparently into learning tasks and developing students’ critical judgement. Others remain uncertain about what AI is, how it works, or how to redesign learning in response.
In many institutions, these differences are systemic, producing fragmented practices across programmes and departments. Generative AI therefore functions as a mirror, revealing long-standing variability in curriculum design expertise previously obscured by stable assessment conventions.
This unevenness matters because students experience university through their courses, modules and lecturers, not through institutional policy ambitions. In an AI-enabled context, this means some students receive structured guidance on ethical and effective AI use, while others encounter prohibition, ambiguity, or inconsistent enforcement. The student experience becomes contingent on the epistemic confidence of individual educators rather than on institutional intent.
The equity implications of uneven staff capability
This is an equity issue. Students who study on programmes with confident, informed staff are more likely to develop the skills to use AI critically and ethically. Students on courses where staff lack confidence may experience restrictive guidance, poorly designed assessments, or unclear expectations, potentially disadvantaging them in their learning and future employment.
Jisc’s national survey already demonstrates differential access to innovative pedagogies and digital resources across institutions and cohorts. AI capability risks becoming an additional stratifying mechanism, where some students graduate with advanced AI literacy while others are discouraged from engaging with tools that are embedded in professional practice. Uneven staff capability therefore becomes a mechanism through which educational inequality is reproduced.
Institutional risk beyond student misconduct
Regulatory discourse on generative AI has focused heavily on academic integrity and misconduct, yet inconsistent staff capability introduces a different category of institutional risk. Where policy is interpreted differently across modules, where assessments are redesigned inconsistently, or where students receive contradictory messages about AI use, institutions may face challenges related to fairness, quality assurance and student complaints.
Quality frameworks such as the TEF, OfS conditions of registration and QAA expectations increasingly emphasise consistency, transparency and student outcomes. In an AI-enabled environment, inconsistency in staff practice becomes a governance issue, not merely a professional development concern. Institutions that fail to build staff capability may struggle to demonstrate coherent educational strategies or equitable student experiences, exposing themselves to regulatory and reputational risk.
From policy to professional capacity
Many universities have responded to AI with policy documents, guidance and detection tools; these are necessary but insufficient. Few institutions have developed a coherent AI strategy that connects teaching, assessment, research, governance, digital infrastructure and workforce development. Policy without professional capacity risks symbolic compliance, and policy without strategic coherence risks fragmentation.
Designing AI-informed teaching and assessment is skilled pedagogic work that requires disciplinary judgement, curriculum design expertise and time. Staff need structured development opportunities that move beyond awareness sessions. They require frameworks for integrating AI into learning outcomes, assessment design, feedback processes and curriculum sequencing, alongside institutional permission to experiment and iterate rather than operating in a climate of fear and surveillance.
Without an explicit AI strategy aligning curriculum, professional development and governance, implementation will remain driven by individual enthusiasm rather than institutional intent.
What institutions and regulators should prioritise next
Institutions should prioritise sustained professional learning focused on AI-enabled pedagogy, embed AI within programme approval and review processes, and recognise curriculum redesign within workload models and promotion criteria. Regulators and sector bodies could support this shift by foregrounding staff capability in AI-related guidance and quality frameworks, moving beyond misconduct mitigation towards institutional responsibility for pedagogic coherence in an AI-enabled sector.
A provocation for the sector
Generative AI is often framed as a student challenge that universities must control. A more uncomfortable reading is that AI reveals long-standing variation in pedagogic capability and institutional coherence that universities have tolerated but not systematically addressed. If higher education continues to focus primarily on student behaviour, it may miss the deeper transformation required in academic practice and professional development.
The question is not simply how students should use AI, but whether universities are prepared to equip their educators to design learning for an AI-saturated world. The answer will shape not only assessment integrity, but educational equity, institutional reputation and the future relevance of higher education itself.
Read HEPI’s 2026 Generative AI Survey here, and our collection of essays ‘AI and the Future of Universities’ here.
Comments
Jonathan Alltimes says:
Does generative AI make much university teaching and assessment obsolete (for the purpose of employment)? You cannot answer such a question without contemporary detailed real examples. What is pedagogic capability? Students who use generative AI to simulate their own learning for their pedagogues are cheating themselves, as they are refusing to change their mind in readiness for the purpose of the qualification, which is the intellectual equivalent of skipping. Testing physical skills and the making of artefacts cannot be skipped.