
Universities: Guardians of ethical AI?

  • 1 May 2024
  • By Shadi Hijazi
  • This HEPI blog was kindly authored by Dr Shadi Hijazi, Principal Consultant at QS Quacquarelli Symonds.

Businesses are hungry for AI-powered growth, students eagerly adopt new tools, and the ethical compass risks getting lost in the scramble. But in the midst of an artificial intelligence gold rush, can universities ensure everyone plays fair?

AI is no longer a buzzword in lectures. Nearly 80% of students surveyed in the QS Generative AI Student Pulse Survey 2023 use AI platforms for their studies. But enthusiasm is tempered; many students recognise the need for caution to ensure AI’s positive impact. With world-class research and global partnerships, universities have an opportunity, and a responsibility, to shape AI’s trajectory. Yet the path is fraught with complexities.

AI and ethics: The university challenge

Universities navigate the complexities of internal operations, teaching and learning, the advancement of innovation and science, and outreach and engagement with society. Each of these areas carries its own AI implications, implementations and ethical questions for universities to address.

Universities must maintain data protection, privacy and security, ensuring the confidentiality and integrity of personal information. The UK’s Data Protection Act 2018 introduced strict rules and fines for data mismanagement, forcing a shift in data processes. This remains relevant today with the increasing deployment of AI tools.

The use of AI and machine learning in administrative tasks, such as admissions, has necessitated further measures to eliminate biases and ensure equitable treatment for all. Bias is an important risk factor in AI, addressed by guidelines and regulations such as the European Parliament’s Artificial Intelligence Act. Bias and unfairness have long been persistent problems in the academic world, and emerging technologies have also transformed how students find information and write assignments, making academic dishonesty more accessible.

AI-powered transformations bring new opportunities and attractive reductions in time and cost, but personalised learning and assessment require clear policies on transparency and accountability to foster trust among students and faculty.

Understanding and mitigating AI risk

Almost 20% of students feel worried about the use of generative AI tools, and over half of academics see ethical concerns as the main barrier to successful implementation (QS Generative AI Pulse Surveys 2023). The panic around AI replacing human effort has calmed as knowledge has grown, though there are real and mounting risks around misinformation and disinformation.

Beyond the impact of accidental errors, an improved understanding of AI technology also allows harmful actors to create more compelling and effective narratives, often opening swathes of social media accounts and using AI to deliberately spread false information en masse. Universities are well placed to engage in research and education to develop strategies for mitigating these risks.

At the University of Manchester, significant investments have been made to better understand AI modelling, deep learning, ethics and security. The Centre for Data Science and Artificial Intelligence states that ‘collaboration with people is centrally important to fundamental AI problems.’

Active agents of societal change

Universities have a duty to cultivate a deep understanding of AI’s ethical implications among students and the broader community, preparing them to navigate and shape the digital world responsibly.

At University College London’s School of Management, research into AI fundamentals has led to the integration of AI ethics into the curriculum, with a new module exploring the ethical implications of AI ownership, agency and privacy; biases; the malicious and harmful use of AI; and the rights of artificially intelligent beings.

Poised as guardians of socially responsible AI, universities have an important role to play in:

Ethical advocacy – By engaging in public discourse and policy development, universities can advocate for AI applications that uphold social justice, equity and environmental sustainability.

At The Alan Turing Institute, researchers work with policymakers to develop innovative, data-driven solutions to policy problems, and develop ethical frameworks to help the public sector apply AI ethics and safety to the design, development and deployment of algorithmic systems.

Partnering for ethical governance – At a roundtable at the House of Lords, Professor Shitij Kapur, Vice-Chancellor and President of King’s College London, said: ‘Where else in society [other than universities] are there people with the intellectual calibre to worry about the ethical impact? I think there are great possibilities […] to create a measure of the social good we could do, together.’  

By collaborating with industry, government, and civil society, universities can influence the ethical development and deployment of AI technologies on a global scale.

Bias mitigation through research – AI learns from the context of society, so it carries the same biases as the information it scrapes. In late 2023, Harvard Business Review reported that scientists investigating a widely used healthcare algorithm found that it severely underestimated the needs of Black patients, leading to significantly less care.

The future of ethical AI depends on a delicate balance. Universities, with their intellectual depth and commitment to societal well-being, are indispensable. However, success demands bold action, collaboration beyond academia, and continuous reassessment of their own evolving relationship with this transformative technology.

1 comment

  1. BP says:

    “The use of AI and machine learning in administrative tasks, such as admissions” – Does anyone have any specific examples of tools used and use cases in admissions?
