- This HEPI blog was kindly authored by Sahil Shah, Managing Director, and Ari Soonawalla, Director, both at Say No to Disinfo.
What have recent changes in AI been?
Rapid advances in AI have greatly improved its capabilities across content generation and sentiment analysis, and by drastically pushing down costs they have also lowered the barriers to entry. Machine learning is making social media monitoring, text analysis and sentiment analysis far more powerful, allowing practitioners to predict emerging social issues, the virality of news events, and which groups may be most vulnerable to misinformation (false information). Large language models (LLMs) and related generative models can already create text, photos, audio and video that are becoming more difficult to distinguish from organic content.
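To give a sense of how low the barrier to entry has become, the sketch below runs an off-the-shelf sentiment classifier over a couple of example posts. It is a minimal illustration only: it assumes the open-source `transformers` library and its default sentiment-analysis model, and is not drawn from any specific monitoring tool.

```python
# Minimal sketch: off-the-shelf sentiment analysis with the open-source
# `transformers` library (assumed here purely for illustration; any
# comparable library would do). The default model downloads on first use.
from transformers import pipeline

classifier = pipeline("sentiment-analysis")

posts = [
    "The new policy is a disgrace and people should be furious.",
    "Great to see the community coming together on this issue.",
]

for post, result in zip(posts, classifier(posts)):
    # Each result is a dict such as {"label": "NEGATIVE", "score": 0.99}
    print(f"{result['label']:>8}  {result['score']:.2f}  {post}")
```

A few lines of freely available code now perform analysis that once required a specialist team, which is precisely why the costs and barriers described above have fallen so sharply.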
What does AI mean for the creation, dissemination and amplification of mis/disinformation?
As the availability of LLMs increases and their cost falls, it becomes easier to create more personalised and effective content. As content creation becomes more automated, the financial and time costs associated with micro-targeting and hyper-personalisation fall. An improved understanding of the information environment means harmful actors can craft more compelling and effective narratives.
The spread of campaigns often relies on large numbers of accounts across social media, and the perceived authenticity of these accounts is key. Machine learning (ML) techniques allow the generation of increasingly realistic profile photos, which reduces the need for image scraping and limits the potential for reverse image searches to aid in detecting a campaign. As a result, it is possible to create credible accounts en masse to spread disinformation – false information spread deliberately.
Furthermore, advances in conversational AI, or chatbots, could allow engagement with targeted individuals to be automated. Chatbots use large volumes of data, ML and natural language processing to imitate human interaction, recognising speech and text input and generating a response. This could be used to take part in online discussions, responding to comments in ways that stimulate controversy and disputes and increase polarisation.
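As a hedged illustration of how little effort such automation now takes, the sketch below drafts a reply to a single online comment using a hosted LLM. The `openai` client and the `gpt-4o-mini` model name are assumptions made for the example; the same pattern works with any comparable API, and the point is the brevity of the loop rather than any particular provider.

```python
# Minimal sketch: generating an automated reply to an online comment with a
# hosted LLM. Assumes the `openai` Python client (v1+) and an OPENAI_API_KEY
# in the environment; the model name is illustrative only.
from openai import OpenAI

client = OpenAI()

comment = "I don't believe the official figures on this at all."

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "system",
         "content": "Write a short, conversational reply to a social media comment."},
        {"role": "user", "content": comment},
    ],
)

print(response.choices[0].message.content)
```

Wrapped in a loop over thousands of comments, the same handful of lines becomes an automated participant in online discussion, which is exactly the capability described above.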
Can we discern AI-generated information?
AI-generated disinformation may be more convincing than disinformation written by humans: new research has found that people are 3% less likely to spot false tweets generated by AI than those written by humans.
While there are technical tools that use AI to identify mis/disinformation and coordinated inauthentic behaviour, techno-fixes are limited in their effectiveness. Fixes such as reverse image searches place a high burden of effort on the user. Fact-checking is time-consuming, and an ever-decreasing proportion of information can be fact-checked as AI-generated and AI-spread disinformation proliferates. The effectiveness of detection algorithms also depends on the availability of large sets of training data and the quality of data labels. While detection is becoming more robust, it is constantly playing catch-up as offensive techniques become more and more sophisticated.
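The dependence on labelled data can be made concrete with a small sketch. The example below trains a toy text classifier on a handful of hand-labelled posts using the open-source scikit-learn library (an assumption made for illustration; real detection systems are far larger): its accuracy can only ever be as good as the volume and quality of the labels it is given.

```python
# Minimal sketch: a supervised text classifier for flagging suspect posts,
# built with scikit-learn. The labels below are toy data; real detection
# systems need large, carefully labelled corpora, which is the bottleneck
# discussed above.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = [
    "Scientists confirm miracle cure suppressed by governments",
    "Local council publishes annual budget report",
    "Leaked documents prove the election was rigged by outsiders",
    "University releases term dates for the next academic year",
]
labels = [1, 0, 1, 0]  # 1 = flagged as likely disinformation, 0 = benign

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(texts, labels)

print(model.predict(["Secret memo reveals hidden cure for all diseases"]))
```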
What are the implications of this at an individual and a societal level?
At an individual level, it means that we must all hold information with uncertainty. Disinformation is shifting from a one-size-fits-all approach to more personalised narratives which are much harder to combat. As it becomes quicker and cheaper to produce and disseminate, the information environment will become more crowded.
At a societal level, the aggregation of these individual effects may increase the spread of misinformation, erode trust in media, institutions and experts, and potentially even deepen polarisation.
Given the limited effectiveness of technical tools and the rapid development of capabilities, media literacy and education are crucial in building societal resilience to mis/disinformation.
What are the implications of this for media literacy and education?
The future world is one where disinformation is more prevalent, more personalised and harder to discern. In light of this, traditional media literacy would need to be augmented to include the following:
1) How to hold information in uncertainty. People have a preference for certainty over uncertainty (certainty effect). We may need to help people develop ‘probabilistic’ mindsets, where information may or may not be true.
2) How to interact with uncertain information. Communicating the uncertainty associated with information is critical to enable others to hold information in uncertainty, so that sharing information does not make people more certain of their beliefs.
3) How to recognise disinformation operations. Educating the public on who may be targeting them, why they do so, the techniques they use, the goals they have and how this links to particular narratives can help identify when a piece of information is more likely to be disinformation.
4) Which technical tools can be used to verify information, report mis/disinformation and support deplatforming (a brief illustration follows below).
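As one example of the kind of technical tool meant in point 4, the sketch below compares two images with a perceptual hash, the idea underpinning reverse image search: near-duplicate images produce similar hashes even after light editing. The `Pillow` and `imagehash` packages and the file names are assumptions made for illustration.

```python
# Minimal sketch: checking whether a "new" image is a re-used one via
# perceptual hashing (the principle behind reverse image search).
# Assumes the Pillow and imagehash packages; file names are illustrative.
from PIL import Image
import imagehash

original = imagehash.phash(Image.open("original_photo.jpg"))
candidate = imagehash.phash(Image.open("suspect_repost.jpg"))

# Subtracting two hashes gives the Hamming distance: a small distance
# suggests the candidate is a copy or light edit of the original image.
print("hash distance:", original - candidate)
```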
What makes an effective media literacy programme?
Effective media literacy programmes can train people to hold information in uncertainty, to interact with uncertain information and to recognise influence operations. It is critical that interventions also help users navigate technologies, for example by helping them understand data-driven automated systems and identify when such systems are being used.
It is important to start with a clear definition of media literacy in the context of the intervention, in order to guide design decisions. This is especially important when considering the role of AI in disinformation. To be most effective, media literacy interventions should be designed with a narrow scope, highly personalised to the target group’s context, and reinforced with “boosters” over time. This could be done through a modular format, with individual modules focused on key learning outcomes and key themes reinforced across modules to optimise the longevity of effect.
This can be implemented in a variety of contexts including through universities, workplaces, broader civic education and public communications, in order to reach diverse groups of people. Critically, in order to be effective, this education needs to come from different nodes within a network, over a prolonged period of time, and from trusted actors.
The education system is a critical component of an effective media literacy strategy, and should be leveraged to provide young people with the tools they need to navigate this rapidly changing and dangerous information environment. The inclusion of well scoped, targeted media literacy modules in curriculums would keep young people safer and build a more resilient society.
Excellent blog, if I may say so.
The matter of trust has not yet been given proper airtime. We are now in a world in which no text of any kind can be immediately trusted (including this one).
On receiving a paper for review, a doctoral thesis to examine, or a book proposal or manuscript to assess, I now ask of the university/publisher ‘And what is your policy on Chatbot-type engines?’ (How am I to know that this text – even in part – has not been artificially generated?) So far, I have not received anything approaching a satisfactory answer – it normally amounts to ‘We are looking into the matter.’
I especially warm – in this HEPI blog – to the identification of responses in general, and the separate sets of implications for different sectors.
There is much here to build upon. One issue to pick up in the near future is that of criticality – which should, perhaps, be the central concept for any university: just what are the implications for criticality, in a world where ‘we must all hold information with uncertainty’. (Should that not be that ‘we must hold all information with uncertainty’?)
At least, surely, the ‘critical being’ aspect of criticality must come to the fore. Universities must, more than ever, exhibit a spirit of positive scepticism in and towards all matters. And yet, surely, we are seeing a concern with criticality fading from higher education?
Kindly
Ronald Barnett
It’s sad how we always find the worst ways to use a good tech.