
It’s time we moved the generative AI conversation on

  • 16 July 2025
  • By Michael Grove, Professor of Mathematics and Mathematics Education and Deputy Pro-Vice-Chancellor (Education Policy and Academic Standards) at the University of Birmingham.

We are well beyond the tipping point. Students are using generative AI – at scale. According to HEPI’s Student Generative AI Survey 2025, 92% of undergraduates report using AI tools, and 88% say they’ve used them in assessments. Yet only a third say their institution has supported them to use these tools well. For many, the message appears to be: “you’re on your own”.

The sector’s focus has largely been on mitigating risk: rewriting assessment guidance, updating misconduct policies, and publishing tool-specific statements. These are necessary steps, but alone they’re not enough.

Most students use generative AI not to cheat, but to learn. Yet this use is uneven. Some know how to prompt effectively, evaluate outputs, and integrate AI into their learning with confidence and control. Others don’t. Confidence, access, and prior exposure all vary by discipline, gender, and background. If left unaddressed, these disparities risk becoming embedded. The answer is not restriction, but thoughtful design that helps all students develop the skills to use AI critically, ethically, and with growing independence.

If generative AI is already reshaping how students learn, we must design for that reality and start treating it as a literacy to be developed. This means moving beyond module-level inconsistency and toward programme-level curriculum thinking. Not everywhere, not all at once – but with intent, clarity, and care.

We need programme-level thinking, not piecemeal policy

Most universities now have institutional policies on AI use, and many have updated assessment regulations. But module-by-module variation remains the norm. Students report receiving mixed messages – encouraged to use AI in one context, forbidden in another, with the question ignored in a third and left ambiguous in a fourth. This inconsistency breeds uncertainty and undermines both engagement and academic integrity.

A more sustainable approach requires programme-level design. This means mapping where and how generative AI is used across a degree, setting consistent expectations, and providing scaffolded opportunities for students to understand how these tools work, including how to use them ethically and responsibly. One practical method is to adopt a ‘traffic light’ or five-level framework to indicate what kinds of AI use are acceptable for each assessment – for example, preparing, editing, or co-creating content. These frameworks need not be rigid, but they must be clear and transparent for all.
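
To illustrate what such a framework might look like in practice, here is a minimal sketch (in Python) of a declared, programme-wide mapping from assessments to permitted-use levels. The level names loosely follow the preparing/editing/co-creating distinction above; the module codes, the remaining two levels, and the checking function are hypothetical – a thought experiment about consistency, not a proposed implementation.

```python
from enum import IntEnum

class AIUseLevel(IntEnum):
    """An illustrative five-level scale, from no AI use to full co-creation."""
    NONE = 0           # no generative AI permitted (e.g. an invigilated exam)
    PREPARING = 1      # AI for brainstorming or planning only
    EDITING = 2        # AI for proofreading and style suggestions
    DRAFT_SUPPORT = 3  # AI-assisted drafting, substantially revised by the student
    CO_CREATING = 4    # AI as a collaborator, with its use documented

# A hypothetical programme map: each assessment declares its permitted level,
# so expectations are set once, consistently, rather than module by module.
programme_ai_policy = {
    "MATH101 written exam": AIUseLevel.NONE,
    "MATH102 project report": AIUseLevel.EDITING,
    "MATH201 group presentation": AIUseLevel.CO_CREATING,
}

def is_permitted(assessment: str, intended_use: AIUseLevel) -> bool:
    """Check whether an intended use falls within the declared level."""
    return intended_use <= programme_ai_policy[assessment]

print(is_permitted("MATH102 project report", AIUseLevel.PREPARING))  # True
print(is_permitted("MATH101 written exam", AIUseLevel.EDITING))      # False
```

The point of expressing the policy as a single data structure is that students and staff consult one declared map rather than a patchwork of module handbooks.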

Such frameworks can provide consistency, but they are no silver bullet. In practice, students may interpret guidance differently or misjudge the boundaries between levels. A traffic-light system risks oversimplifying a complex space, particularly when ‘amber’ spans such a broad and subjective spectrum. Though helpful for transparency, such frameworks cannot reliably show whether guidance has been followed. Their value lies in prompting discussion and supporting reflective use.

Design matters more than detection

Rather than relying on unreliable detection tools or vague prohibitions, we must design assessments and learning experiences that either incorporate AI intentionally or make its misuse educationally irrelevant.

This doesn’t mean lowering standards. It means doubling down on what matters in a higher education learning experience: critical thinking, explanation, problem-solving, and the ability to apply knowledge in unfamiliar contexts. In my own discipline of mathematics, students might critique AI-generated proofs, identify errors, or reflect on how AI tools influenced their thinking. In other disciplines, students might compare AI outputs with academic sources, or use AI to explore ideas before developing their own arguments.

We must also protect space for unaided work. One model is to designate a proportion of each programme as ‘Assured’ – learning and assessment designed to demonstrate independent capability, through in-person, oral, or carefully structured formats. While some may raise concerns that this conflicts with the sector’s move toward more authentic, applied assessment, these approaches are not mutually exclusive. The challenge is to balance assured tasks with more flexible, creative, or AI-enabled formats. The rest of the curriculum can then be ‘Exploratory’, allowing students to explore AI more openly, and in doing so, broaden their skills and graduate attributes.
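
To make the Assured/Exploratory split concrete, here is a minimal sketch of how a programme team might audit it. The modules, credit weightings, and 50% target are all invented for illustration; no specific proportion is prescribed here.

```python
# Illustrative audit of the 'Assured' / 'Exploratory' split across a programme.
# All module names, credits, and the 50% threshold are hypothetical.

modules = [
    {"name": "Linear Algebra",    "credits": 20, "mode": "assured"},      # in-person exam
    {"name": "Analysis",          "credits": 20, "mode": "assured"},      # oral assessment
    {"name": "Modelling Project", "credits": 20, "mode": "exploratory"},  # open AI use
    {"name": "Statistics Lab",    "credits": 20, "mode": "exploratory"},
]

def assured_proportion(modules: list[dict]) -> float:
    """Fraction of credits assessed under 'Assured' (independent) conditions."""
    total = sum(m["credits"] for m in modules)
    assured = sum(m["credits"] for m in modules if m["mode"] == "assured")
    return assured / total

# Flag programmes where independent capability may be under-assessed.
share = assured_proportion(modules)
if share < 0.5:  # illustrative target, not a sector standard
    print(f"Warning: only {share:.0%} of this programme is Assured.")
else:
    print(f"Assured share: {share:.0%}")
```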

Curriculum design should reflect disciplinary values

Not all uses of AI are appropriate for all subjects. In mathematics, symbolic reasoning and proof can’t simply be outsourced. But that should not mean AI has no role. It can help students build glossaries, explore variants of standard problems, or compare different solution strategies. It can provoke discussion, encourage more interactive forms of learning, and surface misconceptions.

These are not abstract concerns; they are design-led questions. Every discipline must ask:

  • What kind of skills, thinking and communication do we value?
  • How might AI support, or undermine, those aims?
  • How can we help students understand the difference?

These reflections play out differently across subject areas. As recent contributions by Nick Hillman and Josh Freeman underline, generative AI is prompting us to reconsider not just how students learn, but what now actually counts as knowledge, memory, or understanding.

Without a design-led approach, AI use will default to convenience, putting the depth, rigour, and authenticity of the higher education learning experience at risk for all.

Students need to be partners in shaping this future. Many already have deep, practical experience with generative AI and can offer valuable insight into how these tools support, or disrupt, real learning. Involving students in curriculum design, guidance, and assessment policy will help ensure our responses are relevant, authentic, and grounded in the realities of how they now learn.

A call to action

The presence of generative AI in higher education is not a future scenario; it is the present reality. Students are already using these tools, for better and for worse. If we leave them to navigate this alone, we risk widening divides, losing trust, and missing the opportunity to improve how we teach, assess, and support student learning.

What’s needed now is a shift in narrative:

  • From panic to pedagogy
  • From detection to design
  • From institutional policy to consistent programme-level practice.

Generative AI won’t replace teaching. But it will reshape how students learn. It’s now time we help them do so with confidence and purpose, through thoughtful programme-level design.
