What does AI mean for mis/disinformation?

  
AI is increasing the volume, quality and effectiveness of disinformation. As generative AI becomes more widely available and its cost falls, threat actors can create more personalised and effective content, and the financial and time costs of micro-targeting, hyper-personalisation and amplification drop sharply.
 
The spread of disinformation campaigns often relies on large numbers of fake accounts, and the perceived authenticity of those accounts is key. Machine Learning (ML) techniques allow the generation of increasingly realistic profile photos, removing the need to scrape real images and defeating reverse image search as a detection method. Improvements in text generation through Large Language Models (LLMs) for bios and online presence enable the mass creation of credible accounts.
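To illustrate the detection route that generated photos sidestep: reverse image search is often approximated in practice by perceptual hashing, which flags the same photo reused across accounts. Below is a minimal sketch in Python, assuming the third-party Pillow and ImageHash libraries; a freshly generated face has no prior copies to match against, which is precisely why it evades this check.

```python
# Minimal sketch: flag profile photos reused across accounts via perceptual hashing.
# Assumes the third-party Pillow and ImageHash libraries (pip install Pillow ImageHash).
from PIL import Image
import imagehash

def find_reused_photos(photo_paths, max_distance=5):
    """Return pairs of photos whose perceptual hashes are near-identical."""
    hashes = {path: imagehash.phash(Image.open(path)) for path in photo_paths}
    paths = list(hashes)
    pairs = []
    for i, a in enumerate(paths):
        for b in paths[i + 1:]:
            # Subtracting two hashes gives their Hamming distance.
            if hashes[a] - hashes[b] <= max_distance:
                pairs.append((a, b))
    return pairs

# A GAN- or diffusion-generated face is unique, so duplicate matching of this
# kind, like reverse image search, finds nothing to flag.
```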
 
Advances in conversational AI (chatbots) could automate engagement with targeted individuals, recognising speech and text input and generating responses. Such bots can take part in online discussions and respond to comments to stoke controversy and increase polarisation.
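To make the low barrier to entry concrete, the following sketch shows automated reply generation with an off-the-shelf open model via the Hugging Face transformers pipeline. The model choice and prompt are illustrative assumptions only, not a description of any actual campaign tooling.

```python
# Minimal sketch: automated comment replies with an off-the-shelf language model.
# Assumes the transformers library; gpt2 stands in for any stronger open model.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

def auto_reply(comment: str) -> str:
    prompt = f"Comment: {comment}\nReply:"
    out = generator(prompt, max_new_tokens=40, do_sample=True)
    # The pipeline returns the prompt plus its continuation; keep the continuation.
    return out[0]["generated_text"][len(prompt):].strip()

print(auto_reply("I don't trust anything this company says."))
```

A stronger model and a loop over thousands of accounts is essentially all that separates this toy from a real amplification bot.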
 
What are the implications for organisations?
 
Disinformation can affect both cost/revenue drivers and operations for any part of an organisation, depending on the stakeholders targeted and the narratives used. Recent academic studies show that people struggle to distinguish genuine images from deepfakes, and recent campaigns have included deepfakes of public companies' senior management designed to manipulate stock prices.
 
Disinformation about financial performance may target investors and manipulate stock prices, for example when spread by short sellers. Product safety is another target: food manufacturers may face false claims that their upstream supply chains have suffered disease outbreaks or contamination. Claims of dubious business practices, such as corruption, can affect not only investors but also employees and the ability to attract talent.
 
AI can analyse data faster and generate hyper-targeted, personalised information and narratives. Disinformation is likely to shift from a one-size-fits-all approach to personalised narratives that are much harder to combat. This makes organisations far more vulnerable to information threats: even disgruntled employees or small activist groups will soon be able to mount targeted, sophisticated and effective campaigns. Organisations will need to be prepared.
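As a rough illustration of the targeting mechanics, the sketch below segments an audience with k-means clustering so that each segment could receive its own narrative variant. The libraries (numpy, scikit-learn) are standard, but the features and cluster count are hypothetical assumptions; the same machinery serves attackers (micro-targeting) and defenders (tailored counter-campaigns).

```python
# Minimal sketch: audience segmentation for message targeting with k-means.
# Assumes numpy and scikit-learn; the feature matrix is a hypothetical stand-in
# for demographic/psychographic attributes (age band, income, issue interests).
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
audience = rng.random((500, 4))  # 500 people x 4 hypothetical attributes

features = StandardScaler().fit_transform(audience)
segments = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(features)

# Each segment can now be paired with its own narrative variant.
for seg in range(3):
    print(f"segment {seg}: {np.sum(segments == seg)} people")
```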
  
How can organisations effectively counter disinformation?

Responding effectively to disinformation is very different from running proactive campaigns: anchoring effects, confirmation bias and other cognitive phenomena mean that efforts not grounded in evidence can backfire. There is a substantial body of evidence on counter-disinformation interventions, but it is highly fragmented and heterogeneous. Extracting and synthesising insight from it, with the assistance of big data and AI, is essential for effective counter-operations.

There is a range of interventions available to counter disinformation: some regulatory, some platform-oriented, and others that can be integrated into campaigns. These include debunking (providing corrective information targeted at specific misconceptions or false beliefs), inoculation (pre-emptive exposure to weakened forms of disinformation, whether technique-based or issue-based) and adjacent messaging (providing alternative, more hopeful narratives).
Effective counter-interventions are context-specific but are likely to have the following in common:

1)  Early detection. Information threats are much easier to respond to while they are nascent, when more effective techniques are still available. Deplatforming and prebunking can quash influence operations before disinformation narratives become widespread (see the detection sketch after this list).

2)  Target knowledge acquisition. Obtaining demographic and psychographic insight into potential targets of influence operations, e.g. customers or investors, helps design more effective counter-campaigns.

3)  Evidence-based response. As AI-generated disinformation proliferates, pre-empting particular narratives becomes more challenging. Technique-based inoculation is therefore likely to be more effective, as it targets the manipulation techniques that exploit common cognitive fallacies rather than individual claims.
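To make point 1 concrete: early detection can be as simple as monitoring mention counts of a candidate narrative for anomalous bursts. Below is a minimal sketch using a rolling z-score over daily counts, assuming pandas; the window and threshold are illustrative choices, not validated parameters.

```python
# Minimal sketch: early detection of a nascent narrative via a rolling z-score
# over daily mention counts. Assumes pandas; window and threshold are illustrative.
import pandas as pd

def burst_days(daily_mentions: pd.Series, window: int = 7, threshold: float = 3.0):
    """Return the days where mentions spike well above the recent baseline."""
    baseline = daily_mentions.rolling(window).mean().shift(1)
    spread = daily_mentions.rolling(window).std().shift(1)
    z = (daily_mentions - baseline) / spread
    return daily_mentions[z > threshold]

counts = pd.Series(
    [3, 4, 2, 5, 3, 4, 3, 4, 2, 3, 40, 85],
    index=pd.date_range("2024-01-01", periods=12, freq="D"),
)
print(burst_days(counts))  # flags the sudden jump on the final two days
```

Flagged bursts would then be triaged by analysts, since a spike may equally reflect legitimate news coverage.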