The UK's only independent think tank devoted to higher education.

Generative AI: a technology that “makes previously exclusive interactions available to all”?

  • 31 May 2024


Earlier in May 2024, HEPI, with support from learning technology service Studiosity, hosted a roundtable dinner to discuss how important the human touch is in the age of AI learning.

This blog considers some of the themes that emerged from the discussion.

Artificial intelligence tools can give students ethical, personalised support with essays, instantly. They can also identify and help those who are struggling and speed up tasks for teachers, researchers and administrators. Their importance in higher education is expected to grow rapidly. But will there still be a place for a human touch?

How universities should prepare for future AI developments, and how important it will be to retain human input, was the subject of a recent roundtable dinner hosted by HEPI with Studiosity, an online study success service that provides immediate, expert feedback and guidance at scale. The dinner, attended by senior university leaders and policymakers, was held under the Chatham House Rule, by which speakers express views on the understanding they will be unattributed.

The first point made was that new developments have regularly opened up access to education in the past, from Plato’s Academy to the invention of the World Wide Web. Generative AI could be viewed similarly as “a technology that makes previously exclusive interactions available to all”.

Nor are fears of the potential drawbacks of such technologies new. Some speakers recalled worries about the impact on education of Wikipedia and faster wifi. Others remembered the digital encyclopedia Encarta, used in secondary schools in the 1990s and early 2000s, and concerns that pupils believed it more than their teachers.

Personalised education

But there was general recognition that the opportunities generative AI presents are significant. One suggestion was that the best large language models could offer all students, at low cost, the kind of tutorial-based education offered by the best Oxbridge tutors. They could be well-read, articulate, patient, adaptable and generally reliable (even if more willing than most academics to make things up when they weren’t sure). Meanwhile, many students who struggled to read journal articles could benefit from working with AI-generated summaries, which would make the contents of the articles far more widely accessible.

A poll of 1,250 UK undergraduate students about their attitudes to AI, carried out by HEPI and Kortext earlier this year, found that while more than half had used it in assessments, it was mostly to help explain concepts rather than to aid essay writing. Most thought their institution would be able to spot work produced by AI.

Not all participants in the roundtable were confident that this was true. But many thought that it was an indication that universities had focused more on preventing students from misusing AI in their work than preparing them to use it productively.

How to prepare

So how should universities be preparing them?

The roundtable agreed that AI literacy needed to improve for both students and staff.

One speaker said that academic staff needed first to upskill themselves so that they did not feel on the back foot. Another suggested that all students should be offered a course in the fundamentals of AI, in the same way they take courses in academic writing or critical thinking. Such a course should include teaching an understanding of how AI can be susceptible to bias and hallucination.

Teaching students about prompt engineering was considered less important, since this was generally intuitive, but students did need a thorough enough knowledge of their academic subject to be able to scrutinise AI outputs effectively. A willingness to be sceptical was an important part of academic life, it was agreed, and should extend to all sources of information that students engage with, including both AI and lecture content. Students might perhaps be asked to acknowledge all their sources of formative help in an assessment – from AI learning tools such as Studiosity to guidance from tutors and peers.

Assessment points

It was agreed that assessment was a particularly tricky area when it came to AI. One roundtable participant said that it was now difficult to be certain that any coursework was a candidate’s own – a problem that predated AI but that AI had made worse. Another asked whether it was time to rethink anonymised submission: students might pause before submitting AI-generated work under their own names, and tutors might be better placed to judge whether to investigate.

But one speaker said it was unfortunate that talk around generative AI in universities was so often about students cheating since most did not. They had come to university to develop skills needed for the workplace and realised that cheating would not help them do that.

A suggested solution was to drop the current classification system, which could encourage cheating because of the high stakes involved, in favour of broader ways of recognising learning and to use AI as a driver for a more mindful assessment process.

Supporting access

Meanwhile, there was enthusiasm over what AI could do for access. Not only could it help to identify and support students who were not engaging but it could help boost their confidence by allowing them to regularly test out skills and knowledge. The ability to stay anonymous when asking basic questions or gathering initial feedback was recognised as particularly useful.

At the same time, some raised issues about unequal access to technology. One told of a recent encounter with two students, one of whom spent £80 a month on AI tools while the other had no reliable home broadband. Students also often arrived at university with unequal knowledge, as computing teachers were scarce in some schools and parts of the country.

While all those taking part in the roundtable agreed that AI had huge potential, they also recognised that it has limits. The general feeling was that it could achieve around 60% of a given task. Less clear was which 60%. Was it best used as a way of generating ideas or of polishing a final piece of work?

Threats and challenges

A few also raised concerns. Large language models such as Microsoft Copilot had not been trained on universities’ own materials, and higher education institutions would need to work out how to do this and then how to store the data assets produced, one said. Another warned that Copilot could allow free access to interrogate institutional data, much of which – including sensitive emails – may not have been properly classified.

One speaker pointed out that AI would allow outsiders into the higher education world and increase competition. Another suggested that it could pull apart the existing link between teaching and research. AI was likely to be better at teaching than some researchers and while it was true that higher education had so far survived threats from technological innovations, “there must come a point where you’re saying, how much of this job remains something that only the academic expert can do”.

This raised questions. Would it make it harder to become an academic since AI would be fulfilling more basic roles? And how would academics reach the higher rungs of their profession if the lower rungs did not exist? Would teaching become more about conveying human skills, such as the ability to work in a group, than imparting knowledge? Some predicted more emphasis on the role of the teacher as facilitator, with academics as advisors and guides.

One speaker said it was all very well having an AI personalised tutor but what about students talking to each other? Much valuable learning was done through interacting with peers.

Some recalled that key parts of their own university experiences were not just the content they learned but the experiences they had: dealing with problems, forging friendships and broadening horizons. It was those human aspects that truly shaped them and that, whatever the future role of AI, should be preserved.

  1. I have just completed a book and felt it necessary to insert a disclaimer to the effect that none of it had been generated by use of a large language model.

    But suppose that I had used such a device? Should I have inserted an explicit acknowledgement of that kind – that I had used an LLM in the ‘writing’ of the book?

    This is a profoundly serious matter as to authorship, writerly responsibility and writing ethics.

    But there are even graver issues at play. Over time, arguably, the degree of creativity will dwindle, as will the stock of new ideas and frameworks. LLMs only circulate what is already in play and then only in a limited number of outlets. So there is a double whammy for humanity: less creativity and a circulation of limited and so biased representations of the world.

    So this new technology may be harbouring nothing less than the extinction of thought when, to combat the challenges of a benighted Earth and world society, thought is needed more than ever.

    Ronald Barnett
    [email protected]

  2. Dave Valler says:

    How can I give a student a mark for their coursework when I can’t tell that they have written it?
    Of course students should use AI to learn and improve. But we simply have to have ways of judging directly at various points what individual students have learnt and what they are capable of. Not just what a machine will produce for them.
