AI in higher education: balancing innovation with academic integrity

Author:
Lan Murdock
This blog was kindly authored by Lan Murdock, Senior Communications Manager, Taylor & Francis.

Artificial Intelligence (AI) is transforming the research landscape, offering tools to analyse massive datasets, uncover patterns and simplify complex findings. However, as AI is increasingly integrated into research and higher education, ensuring responsible use is not merely a technical challenge – it is a strategic necessity. This article shares the key findings from a recent HEPI Policy Note (67) on the challenges posed by AI and provides actionable recommendations for researchers and higher education leaders to help foster its ethical integration.

What higher education leaders should focus on

Higher education leaders have a unique opportunity to set the standard for responsible AI practices, ensuring this powerful technology serves the greater good.

Recommendations from the policy note Using AI to Accelerate Translational Research:

  • Invest in ethical AI research and ecosystems: Fund projects that focus on reducing bias, improving transparency and making AI tools accessible.
  • Develop clear policies: Adopt guidelines like the UKRIO’s Embracing AI with Integrity to ensure ethical practices across institutions.
  • Encourage collaboration: Foster interdisciplinary teamwork by bringing together experts from diverse fields to develop innovative and responsible AI applications.
  • Support ethical and responsible AI use in research and research communication: Resources such as the Research Integrity Toolkit, co-created by Taylor & Francis and Sense about Science with input from the UK Research Integrity Office (UKRIO), offer guidance for early career researchers to equip them to deliver and communicate research with integrity. Developed through co-creation workshops with early career researchers, the toolkit emphasises the importance of transparency, human oversight and compliance with ethical guidelines throughout the research cycle and features a section on using AI in research communication.
  • Training and development: Implement training programmes to equip researchers with the skills needed to critically evaluate AI outputs. Tailor training to different roles, including researchers and ethics reviewers, to balance AI use with the development of core academic capabilities.

Why ethical AI matters: findings from the HEPI policy report

While outlining many benefits AI brings to the translational research process, the policy note Using AI to Accelerate Translational Research also highlights key issues such as bias, reproducibility, deskilling, and accountability, raising important questions about how AI should be responsibly integrated.

Key risks of using AI in research translation

  1. Bias in AI models
    AI systems may inherit biases from the datasets used for training, reflecting historical inequalities or systemic injustices.
    • Types of bias: Data bias (unrepresentative datasets), development bias (inappropriate algorithm use) and interaction bias (improper user interactions).
    • Impact: These biases can lead to discriminatory outcomes, disadvantaging certain communities and distorting research findings.
  2. Data quality and integrity
    Poor quality or inconsistent data may compromise the accuracy and reliability of AI outputs.
    • Disparate data recording practices (e.g., varying terminology for the same test across hospitals) hinder interoperability, limiting AI’s effectiveness.
    • Opaque algorithms, often developed in commercial settings (companies may not disclose the quality or sources of the data they use), obscure methodologies, making it difficult to scrutinise and reproduce results.
  3. Deskilling of researchers
    Overreliance on AI tools may erode critical thinking and practical skills among researchers.
    • Early career researchers risk losing opportunities to develop foundational skills as AI handles complex tasks like data cleaning and analysis.
    • Without proper training, researchers may struggle to critically evaluate AI outputs, leading to a decline in research quality.
  4. Accountability challenges
    Determining responsibility when AI systems fail can be complex. For example, in healthcare, clinicians risk becoming ‘liability sinks’, absorbing legal responsibility for AI-driven errors despite having limited control over the AI’s decision-making process.
  5. Ethical concerns
    AI systems may produce inconclusive, inscrutable, or misguided evidence, raising questions about their reliability.
    • Transparency in AI processes is often lacking, making it difficult to trace errors or assign accountability.
    • Ethical challenges include unfair outcomes, transformative effects on societal norms, and difficulties in assigning liability for AI-driven decisions.

These potential risks highlight the urgent need for robust ethical governance, transparent methodologies, and interdisciplinary collaboration.

A call to action

AI has the power to revolutionise research and higher education, but realising its full potential depends on ethical integration. By fostering collaboration, transparency and accountability, we can build a sustainable ecosystem that mitigates the risks of bias, deskilling and unaccountable decision-making. Together, we can ensure that AI serves as a tool for innovation and progress, rather than a source of division or harm.

Comments

  • Jonathan Alltimes says:

    What is ethical integration?
    Let us assume it means X.
    As assumed by the author of the blog and the report and note to which the author refers, how do we know X is possible?
    Chatbots presume concepts exist or can exist and are not only imagined because of what people have codified in training sets and do not independently verify their possibility and likelihood. There is no intersubjective interpretation and corroboration of meaning.
    What is AI? No examples are described in the blog and in relation to the principles argued.

