- This HEPI blog was authored by Josh Freeman, Policy Manager at HEPI, with kind contributions from Rebecca Mace, Senior Lecturer and AI Lead at the University of West London (UWL), and UCAS.
On Monday 26th February, I was kindly invited to speak on a panel at the UCAS Teachers and Advisers conference about AI. The audience was staff who work with young people looking to enter higher education. We talked about HEPI’s recent report on students’ use of AI, and what it means for school-age pupils – about which plenty has already been written on the HEPI website.
We were given the opportunity to ask the audience about their attitudes to AI and, on the principle of never turning down a good research opportunity, I took up the offer. Using the live polling tool Slido, we asked five questions relating to AI use.
This blog discusses the results. Participants were given around 20-30 seconds to answer and each question received about 200 responses. The polling does not meet the rigorous standards of typical HEPI research – respondents might give a different answer in a busy conference hall than they would in private, for example. But the results are a useful indication of how attitudes and behaviours towards AI are forming among those who support students to enter higher education.
The first question asked how much teachers and advisers had themselves used ‘generative’ AI tools, such as ChatGPT or Google Gemini (previously Bard). We found a wide mix – including 30% who had never used the tools before and 37% who used them quite frequently.
Second, we asked the room whether they thought their students were using generative AI. Those present agreed that a significant number of students, and possibly a majority, had used these tools.
Given more time, we would want to know what form this use takes. Are students trying them out occasionally, or using them routinely?
As a panel, we noted some schools have banned AI use. Even if such a policy is desirable, this poll shows why it may be practically impossible to enforce. Students will always be able to use free online tools at home or on their smartphones even if they cannot do so on a school computer.
Third, we asked what teachers and advisers say to students about using generative AI for their personal statements. Responses fell into two camps – AI ‘embracers’, who let students use it in broad and sophisticated ways, and AI ‘avoiders’, who say nothing to their students about AI.
We might speculate that ‘avoiders’ do so for two main reasons – either because they are themselves not familiar enough with AI to develop a policy, or because they worry allowing AI use might encourage students to cheat.
The panel discussed the emotional load of the word ‘cheat’ within an academic context. One interesting thread highlighted the increasingly blurred boundaries around who owns the writing GenAI creates. As Rebecca Mace, AI Lead at UWL, noted:
It is interesting to see that some students see the work as their own, arguing they gave the GenAI the prompts and caused it to create the sentences. This notion – ownership of knowledge – will be a pressing concern for all those in education moving forward.
UCAS has produced new guidance for students on the use of ChatGPT and AI in their personal statements, which says that AI tools can be useful to:
- Brainstorm ideas;
- Help with structure; and
- Check readability.
So the group allowing students to use AI in ‘more significant ways’ appears to be following the guidance most closely.
But the guidance also says:
Generating (and then copying, pasting and submitting) all or a large part of your personal statement from an AI tool such as ChatGPT, and presenting it as your own words, could be considered cheating by universities and colleges and could affect your chances of an offer.
If UCAS guidance is followed, AI tools can help applicants writing their personal statements to inspire, clarify and articulate their own ideas. However, the personal statement is just that – personal – and must be a genuine piece of work.

So fourth, we asked the room how confident they felt spotting AI use in assessments. In results which mirror attitudes among students, a majority of whom think they would be found out, most teachers and advisers think they could spot AI in a personal statement most of the time.
UCAS is clear that it has a zero-tolerance approach to students submitting work that is not their own, including that generated by AI:
If UCAS anti-plagiarism software detects elements of a personal statement that are similar to others, the universities or colleges it is intended for may be notified.
UCAS advises that if the Fraud and Verification service detects a similarity in a personal statement, this is flagged to the applicant and the university is informed. A similarity does not necessarily indicate fraud, and a university will make a decision on the basis of a variety of information.
Detailed information on the detection software UCAS uses is available online. But it is important to note that AI detection software used by some universities, such as Turnitin, is, at least for now, quite unreliable. For example, such tools can have a high ‘false positive’ rate – that is, they often rate text written by a human as AI-generated. Using them uncritically would result in students being falsely accused of using AI.
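A quick back-of-the-envelope sketch shows why even a seemingly low false-positive rate matters at scale. The applicant volume and the 1% rate below are illustrative assumptions for the sake of the example, not figures from UCAS or Turnitin:

```python
# Back-of-the-envelope: expected false accusations from an AI detector.
# Both numbers are illustrative assumptions, not real UCAS or Turnitin figures.

human_written_statements = 700_000   # assumed: personal statements written without AI
false_positive_rate = 0.01           # assumed: detector flags 1% of human text as AI

falsely_flagged = human_written_statements * false_positive_rate
print(f"Students falsely flagged: {falsely_flagged:,.0f}")
# Even a 1% false-positive rate would wrongly implicate around 7,000 applicants,
# which is why uncritical reliance on detection scores is so risky.
```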
Any system to detect AI use must therefore involve some element of human checking, but human judgement is also fallible. This makes it all the more important that there is effective messaging from schools.
So how can teachers and advisers be so confident about detecting it? A personal statement is, of course, something deeply personal. A good personal statement conveys the passion an applicant feels for their subject. As a panel, we felt that AI, in the bland, emotionally distanced way it tends to write, does not yet effectively convey that passion. This may be what teachers and advisers can spot when they read a student’s draft.
Fifth, we asked those present whether they felt, overall, that AI is more of a challenge or an opportunity. There was cautious optimism in the room – but many were concerned AI would create challenges.
Finally, we put the same provocation to those present at both the beginning and end of the session: ‘Schools and colleges should enable students to use artificial intelligence (AI) tools.’ At the start, 81% agreed and only 9% disagreed; by the end, this had shifted to 92% agreeing and 2% disagreeing. The shift – albeit from an already high starting point – suggests that understanding more about AI, its benefits and its limits, is one way to help teachers and advisers feel more comfortable working with it.