In July, HEPI, with support from the publisher Taylor & Francis, hosted a roundtable dinner to discuss harnessing AI to advance translational research and impact. This blog considers some of the themes that emerged from the discussion.
Travel through a major railway station in the near future and you may see, alongside the boards giving train times, a video of someone using British Sign Language. This could be an AI-generated signer, turning the often difficult-to-hear station announcements into sign language so that deaf people can understand what is being said. It is just one example of how artificial intelligence is increasingly being used in the real world.
The question this roundtable focused on was how AI could be used to advance translational research: that is, how curiosity-driven research can be turned into real-world applications. What role can academic leaders and publishers play in shaping ethical, inclusive and innovative uses of AI in such research? How can AI enhance collaboration across disciplines, and what are the potential barriers, ethical dilemmas and risks involved in the process?
The discussion, attended by senior university and research leaders, publishers and funders, was held under the Chatham House rule, by which speakers express views on the understanding they will be unattributed.
Advantages and risks
Speakers agreed that AI has huge potential to allow researchers to analyse large datasets cheaply, quickly and accurately, turning research into real-world applications, as well as improving access to scientific knowledge. They noted that AI can help provide plain language summaries of research and present them in different formats, including multilingual or multimedia content, while also opening useful ways for learned societies to disseminate research findings among their member practitioners.
But risks were identified too. How could the use of AI affect creativity and critical thinking among researchers? How can academics guard against bias and ensure transparency in the data on which AI tools are based? And what about environmental concerns – the energy consumed in running AI systems and the electronic waste they generate? Most worryingly, when AI is involved in research and its application, who is ultimately accountable if something goes wrong?
Such concerns were addressed in Embracing AI with Integrity (https://ukrio.org/wp-content/uploads/Embracing-AI-with-integrity.pdf), a guide for researchers published by the UK Research Integrity Office (UKRIO) in June.
Delegates at the roundtable were told that one message to draw from this guide was that researchers using AI should be asking themselves three essential questions:
- Who owns the information being put into the AI?
- Who owns the information once it is in the AI?
- Who owns the output?
Working together
Collaboration is key, said one speaker. That means breaking down existing academic silos and inviting in the experts who will be responsible for applying AI-driven research. It is also crucial to consider the broader picture and the kind of society we want to become.
One concern the roundtable identified was that power over AI systems is concentrated in the hands of just a few people, which means that, rather than addressing societal problems, AI is creating divides in access to information and resources.
‘We are not in the age of AI we actually want’, said one speaker. ‘We are in the age of the AI that has been given to us by Big Tech.’
Tackling this issue is likely to involve the development of new regulatory and legal frameworks, particularly to establish accountability. Medical practitioners are particularly concerned about ‘where the buck stops’ and how, for example, potentially transformative AI diagnostic tools can be used safely.
Others at the roundtable were concerned that placing the bulk of ethical responsibility for AI on researchers might discourage them from testing boundaries.
‘When you do research, you can never have that control completely or you will never do novel things’, said one. Responsibility must therefore be shared between the researcher, implementer and user. That means everyone needs education in AI so they understand the tools they have been given and how to use them effectively.
Reliable data
Being able to rely on the underlying datasets used in AI is essential, said one speaker, who welcomed the government’s decision to open up public datasets through the AI Opportunities Action Plan (https://www.gov.uk/government/publications/ai-opportunities-action-plan/ai-opportunities-action-plan) and to curate a national data library (https://datalibrary.uk/).
There was a difference, it was agreed, between research driven by commercially available AI tools, where it was not possible to see ‘inside the black box’, and research based on AI tools in which the datasets and algorithms were reliable and transparent. The former, it was suggested, was like presenting a research paper that provided the introduction, results and analysis without explaining the methodology.
Educating users
Yet AI is not just about the data on which it is based but also about the competence of the people using it. How can higher education institutions ensure that students and researchers, particularly early career researchers, have the know-how they need to use AI correctly? (The Taylor & Francis AI Policy may be of interest here.)
It was pointed out that the independent review of the curriculum and assessment system in schools in England, due to publish its recommendations later this year, is likely to be a missed opportunity when it comes to ensuring that pupils enter university with AI skills.
Meanwhile, politicians are struggling to establish the right framework for AI research, as they often lack expertise in this field.
This is a problem since the field is moving so fast. It was suggested that rather than wait for action from policymakers and a regulatory framework, researchers should get on with using AI or risk the UK being left behind.
Social vision
The roundtable agreed that making decisions on all this was not just the responsibility of academia. But where academic research could be useful was in filling the gaps in AI development that big commercial companies neglected because of their focus on business models.
Here, researchers, including in the arts and humanities, could be important in deciding what society ultimately wants AI to achieve. Otherwise, one speaker suggested, it would be driven by the ‘art of the possible’.
Meanwhile, what skills do universities want researchers to have? Some raised the fear that outsourcing work to AI could mean researchers being deskilled. Evidence already suggests that the use of AI can reduce students’ metacognition – the understanding of their own thought processes.
‘If we think it’s important for researchers to be able to translate their findings, don’t let a machine do it’, said one speaker. Another questioned whether researchers should ever be using tools they do not understand.
Artificial colleagues
One suggestion was that rather than outsourcing their work to AI, researchers should be using it to enhance their existing practices.
And while some were concerned about the effect AI could have on creativity, one speaker suggested that, by calibrating AI tools to investigate concepts at the edge of scientific consensus, they could be used to spark more original approaches than a human group would achieve alone.
Another positive identified was that while biases in AI can be a problem, they can also be easier to identify than human biases.
The roundtable heard that successfully accommodating AI should be about teamwork, with AI seen as another colleague – there to advise and reason but not do all the work.
‘The AI will be the thing that detects your biases, it will be the thing that reviews your work, and it will support that process, but it shouldn’t do the thinking,’ was the message from one speaker. ‘Ultimately, that should come back to humans.’
Taylor & Francis are a partner of HEPI. Taylor & Francis supports diverse communities of experts, researchers and knowledge makers around the world to accelerate and maximize the impact of their work. We are a leader in our field, publish across all disciplines and have one of the largest Humanities and Social Sciences portfolios. Our expertise, built on an academic publishing heritage of over 200 years, advances trusted knowledge that fosters human progress. Under the Taylor & Francis, Routledge and F1000 imprints, we publish 2,700 journals, 8,000 new books each year and partner with more than 700 scholarly societies.
We will be working together to develop a HEPI Policy Note on the use of AI in advancing translational research. If you have a fantastic case study or AI-related translational approach at your institution, we would love to hear from you. To tell us more about your work, please email [email protected].