On 26 February, HEPI and Kortext will publish the Student Generative AI Survey 2025 by Josh Freeman, with a Foreword by Professor Janice Kay CBE.
Based on a survey of 1,041 students conducted by Savanta, the report shows an unprecedented increase in the use of generative AI tools among undergraduate students from the rates recorded in last year’s survey.
Key findings:
- The proportion of students using generative AI tools such as ChatGPT for assessments has jumped from 53% last year to 88% this year. The most common uses are explaining concepts, summarising articles and suggesting research ideas. The proportion who have not used generative AI for their assessments in any of these ways has plummeted from 47% last year to just 12% this year.

- The proportion of students reporting using any AI tool has jumped from 66% last year to 92% this year. ‘Generating text’ is the most popular reason for using AI, ahead of editing work (e.g. with Grammarly) and accessing university textbooks (e.g. with Kortext).
- Just under half of students (45%) said they had used AI at school. Only 29% of higher education students agree their institution ‘encourages’ them to use AI, versus 40% who disagree.
- A third of students (34%) would put in more effort if exams were assessed partly or mainly by AI, with 29% saying they would put in less effort and 27% saying their level of effort would not change.
- The main reason students use AI is to save time (said by 51% of students), closely followed by improving the quality of their work (50%).
‘The productivity of work since has been off the chain. Absolutely brilliant.’
‘I enjoy working with AI as it makes life easier when doing assignments however I do get scared I’ll get caught.’
– Student respondents on the value of AI to them
- The main factors putting students off using AI are being accused of cheating (said by 53% of respondents) and getting false results or ‘hallucinations’ (51%). Just 15% are put off by the environmental impact of AI tools.
- Three-fifths of respondents (59%) agreed the way they are assessed has changed ‘a lot’ in response to generative AI.
- Students still generally believe their institutions have responded effectively to concerns over academic integrity, with 80% saying their institution’s policy is ‘clear’ and three-quarters (76%) saying their institution would spot the use of AI in assessments. Both are improvements on last year’s results.
- But while two-thirds of students (67%) think using AI is ‘essential’ in today’s world, only a third (36%) of students have received training in AI skills from their institution.
‘I feel like they understand how big of an impact AI is having and is being supportive enough of it but not so much that we let it do work for us. We still have to work hard.’
‘It’s still all very vague and up on the air if/when it can be used and why. It seems to be discouraged without the recognition that it will form an integral part of our working lives.’
‘They dance around the subject. It’s not banned but not advised, it’s academic misconduct if you use it but lecturers tell us they use it. Very mixed messages.’
– Student respondents, on their institution’s policy on AI
- The proportion saying university staff are ‘well-equipped’ to work with AI has jumped from 18% in 2024 to 42% in 2025.
- There is a growing digital divide in AI use, with male students, students on STEM and Health courses and more socioeconomically advantaged students more likely to use AI than others.
The report recommends that:
- Institutions urgently ‘stress-test’ their assessments to check they cannot be easily completed using AI;
- Institutions adopt a balanced policy, seeking to support students to develop AI skills while educating them on risks like hallucinations, privacy and environmental concerns;
- Institutions collaborate on AI policy and best practice, perhaps led by organisations like Universities UK.
Josh Freeman, Policy Manager at HEPI and author of the report, said:
‘It is almost unheard of to see changes in behaviour as large as this in just 12 months. The results show the extremely rapid rate of uptake of generative AI chatbots. They are now deeply embedded in higher education and many students see them as a core part of the learning process. Universities should take heed: generative AI is here to stay.
‘There are urgent lessons here for institutions. Every assessment must be reviewed in case it can be completed easily using AI. That will require bold retraining initiatives for staff in the power and potential of generative AI. Institutions will not solve any of these problems alone and should seek to share best practice with each other. Ultimately, AI tools should be harnessed to advance learning rather than inhibit it.’
In her Foreword, Professor Janice Kay CBE, Director, Higher Futures, said:
‘It is a pleasure to introduce this 2025 study, a welcome repeat of the 2024 AI survey of how full-time undergraduate students are currently using AI tools. It shows that use has soared over the past year, demonstrating that AI tools are used in varied ways in learning and assessment.
‘It is a positive sign overall: many students have learned more about using tools effectively and ethically and there is little evidence here that AI tools are being misused to cheat and play the system.
‘And yet, there are quite a lot of signs that will pose serious challenges for learners, teachers and institutions and these will need to be addressed as higher education transforms.’
Robin Gibson, Director of External Affairs at Kortext, said:
‘The rapid rise in student use of generative AI highlights the transformative role these tools are playing in higher education. As AI becomes increasingly embedded in learning, there is an opportunity to support students in developing the skills to use these technologies effectively and ethically.
‘At Kortext, we are committed to working in partnership with the sector to navigate this shift, providing whole university digital learning solutions that integrate AI inclusively and responsibly, helping prepare students for the future of work.’
For more information, please contact Josh Freeman, report author, at [email protected] / 07837 027104.
Notes for editors
- The survey is based on 1,041 online interviews with full-time undergraduate students conducted by Savanta in December 2024. The responses were weighted on demographics such as gender, institution type and year of study to ensure the results are representative of the wider student population and comparable with the previous survey. The margin of error is approximately 3%. Percentages may not sum to 100% due to rounding. The full results are available on the HEPI website.
- HEPI was founded in 2002 to influence the higher education debate with evidence. We are UK-wide, independent and non-partisan. We are funded by organisations and higher education institutions that wish to support vibrant policy discussions, as well as through our own events. HEPI is a company limited by guarantee and a registered charity.
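The quoted margin of error of approximately 3% is consistent with the standard formula for a simple random sample of 1,041 at a 95% confidence level. A quick sketch, assuming the conventional worst-case proportion of 50%:

```python
import math

n = 1041   # sample size reported in the Notes for editors
p = 0.5    # worst-case proportion, which maximises the margin of error
z = 1.96   # z-score for a 95% confidence level

# Margin of error for a simple random sample: z * sqrt(p(1-p)/n)
margin = z * math.sqrt(p * (1 - p) / n)
print(f"{margin:.1%}")  # approximately 3.0%
```

Note that the survey responses were weighted, so the effective margin of error may differ slightly from this unweighted calculation.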