The UK's only independent think tank devoted to higher education.

Making strategic sense of generative AI

  • 12 March 2024
  • This HEPI blog was kindly authored by Chris Husbands and Janice Kay, founder partners of Higher Futures, which provides strategic leadership support to universities navigating change. Chris was formerly Vice-Chancellor of Sheffield Hallam University and Janice was formerly Provost at the University of Exeter.

The HEPI/Kortext report on students’ use of generative AI is a wake-up call: students are making extensive use of generative AI in increasingly sophisticated and discriminating ways. For academics and university leaders who are not specialists, it is hard to appreciate the sophistication of the large language models (LLMs) which underpin generative AI. All this makes thinking about AI in HE exciting – because the pace of development is so fast that it is difficult to keep up – and dangerous – because the pace of development is so fast that there seems to be little solid ground. In this blog, we offer seven propositions for institutional leadership on AI. To prepare the way, we begin with some comments on what we know about technology in education, and about strategic change in universities.

Technology and education

Over the long term, and despite extraordinary technological change, university teaching models have proved remarkably resilient. At the same time, there has been a persistent tendency to overestimate the effect of any new technology in the short run and to underestimate its effect in the long run. This means that institutional leaders can be badly attuned to the implications of novel technologies, focusing on short-term problems rather than long-term possibilities. In the short run, universities have tended to focus on the threats that generative AI poses to current models of assessment in terms of cheating, rather than on the opportunities to review and overhaul models of teaching and learning.

The tendency to conservatism in practice often isn’t helped by technology enthusiasts: almost all technology advocates over-claim the benefits and understate the implementation challenges of innovation. The hard part is embedding change, not the technology itself. In respect of AI in teaching and learning, claims have focused on three areas: the ability to accelerate the acquisition of complex ideas, the ability to personalise the learning experience, and the impact on assessment. All three need looking at in relation to each other and to the capabilities and capacities of the institution to handle complex change.

A key lesson from the past year is that there are – of course – novice and expert approaches to using AI: the more sophisticated you are as a user, the more powerful the technology becomes, and the more effective you are as a learner or as a teacher. And, despite the intense moral panic about the impact of generative AI on assessment, cheating is by no means new: there is a constant race between assessment methodologies and subversive tactics. Some consider generative AI in the same breath as essay mills, which misses the point.

Strategic change in higher education

Generative AI, with its potential and possibilities, has arrived at a time of intense financial stress on universities, with flat fees in the core undergraduate market and steep challenges in other markets, especially international. Given the rapid pace of development, the tightness of budgets and the path dependency of current delivery models, the temptation is to hold back on investment. This is a version of what we might call the electric car problem: why buy an electric car with a range of 200 miles when you know the technology will soon deliver a model with a range of 700 miles? Moreover, the experience of major IT investments in higher education has been patchy, not least because of the gap between technological and staff capabilities.

Complex organisations, and especially cash-strapped complex organisations, have an adaptive tendency to try to absorb change into the way they currently do business – to resist radical change. This isn’t in itself a bad thing and contributes to institutional resilience and persistence, provided that everyone understands, in Michael Fullan’s great phrase, that ‘the main thing is to focus on the main thing’. On the other hand, there is a danger of missing the signals emerging from disruptive change, so that failure to adapt becomes existential for institutions.

Any successful change is about strategic leadership. Making good decisions depends on a self-critical analysis of where you want to get to, where you are now, your capabilities and your weaknesses. University executives are not well equipped or trained to understand what the technology challenge means for current strategies and education – including for attracting applicants and providing opportunities for lifelong learning. Academics and professional staff are not yet well versed in the capabilities of LLMs to facilitate learning, so tend to focus on the threats. Unless the capabilities of LLMs are understood, deployed and used effectively, learners and learning will simply go elsewhere, and the rise of large-scale corporate investment in teaching and learning provision is an obvious threat.

Seven propositions for success

With those observations in mind, here are seven propositions on leading higher education in response to generative AI.

  1. First, LLM capabilities will continue to develop faster than the capacity of any single institution to keep up. Leadership teams need to be able to draw on a horizon-scanning capability, within the institution or through external sources, to provide insight and information; this needs constant updating. The capacity and capability of staff, from those on the ground to senior leaders, must be rapidly scaled up in using LLMs and generative AI to produce effective plans for changing teaching, learning and assessment methods.
  2. Second, every university has choices to make. There is too much to do and not enough resource or capability. So institutions need an adaptive, updatable but clear strategy for AI. Choices should reflect institutional mission, institutional capabilities and institutional strategy.
  3. Third, and an old-fashioned and boring point: education and technology are in constant dialogue. There are obvious challenges around assessment, and old assessment models won’t work. It was ironic last year to see one mission group reach instinctively for the idea of banning ChatGPT, only to somersault eight weeks later: it hadn’t thought through the issues. Putting education and learning first won’t mean every decision is right, because that’s not the way things work, but it does mean that decisions are made based on mission and purpose.
  4. Fourth, as ever in complex organisations, good strategic outcomes will come from a mix of bottom-up and top-down decisions. There will be imaginative developments in some parts of the university which can be scaled, and there will be strategic prioritising and resource decisions which permeate the university. Effective leadership will join these up. Institutions have to be bold at this pivot point: falling back on business as usual will lead to existential threat.
  5. Fifth, and another boring old-fashioned point: people are our key resource and they need challenge, support and development. Historically universities have under-invested in staff capability other than in research. There will be innovators and laggards in every university, enthusiasts and the apprehensive, advocates and critics. Capacity and capability need to be created at scale.
  6. Sixth, and notwithstanding all of the others, there are some obvious places to start. If you haven’t begun a review of your approach to assessment across the institution, you need to (and this will certainly be an essential ingredient of TEF2027 preparation). If you have not thought about how to mobilise resources to personalise student engagement, you need to.
  7. Seventh, it is possible that AI dissolves institutional business models and becomes existential for some institutions. That was the worry about MOOCs; just because it didn’t happen then doesn’t mean it won’t now. It was the worry about the pandemic too. It is far more likely that generative AI makes university education and learning more responsive, more accessible and more widely disseminated. None of us, now, knows where this is going. An avalanche has finally arrived – and we know, or think, that acting on our first six propositions gives the best chance of success.
