Synthetic Truths: Navigating Fake News and Artificial Intelligence

A guide to recognising AI-generated disinformation and safeguarding the right to free and reliable information.

In an increasingly saturated digital ecosystem, fake news is no longer merely poorly written text but a set of complex constructs capable of undermining people’s emotional stability. The advent of generative artificial intelligence has drastically reduced the cost of creating disinformation at massive scale. According to the World Economic Forum’s Global Risks Report 2026, AI-fuelled disinformation ranks as the number one global threat to social cohesion over the next two years.

This phenomenon is evident across every sector of an advanced society, from institutions and academia to government and the private sector, and has a particular impact on the under-30s. It is not merely a matter of fake news, but a genuine manipulation of perception that can isolate individuals and fuel hatred. UNESCO data highlight how the spread of synthetic content is making fact-checking increasingly difficult, requiring a new form of digital literacy to avoid becoming trapped in fictional realities.

The anatomy of disinformation: how AI manipulates us

Understanding how artificial intelligence works is the first step towards not feeling overwhelmed by a flood of information that seems too coherent to be false. AI algorithms are designed to maximise engagement, creating ‘filter bubbles’ that reinforce our biases and amplify sensationalist news.
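The feedback loop behind a ‘filter bubble’ can be made concrete with a minimal sketch. Everything below is illustrative (the topics, the ranking rule and the numbers are invented, not any platform’s real code): a ranker that rewards only past clicks will collapse a feed onto the user’s first interest.

```python
TOPICS = ["politics", "sport", "science", "celebrity"]

def engagement_ranker(history, candidates):
    """Rank candidate topics by predicted engagement: here, simply
    how often the user clicked each topic before (a toy proxy)."""
    counts = {t: history.count(t) for t in TOPICS}
    return sorted(candidates, key=lambda t: counts[t], reverse=True)

# Simulate a user who always clicks the top-ranked item.
history = ["politics"]            # a single initial click
for _ in range(20):
    feed = engagement_ranker(history, TOPICS[:])
    history.append(feed[0])       # the click reinforces the ranking

print(history.count("politics"), "of", len(history), "clicks are 'politics'")
# → 21 of 21 clicks are 'politics'
```

Because the ranker only rewards past behaviour, a single early interaction is enough to dominate every subsequent feed: the filter-bubble mechanism reduced to its simplest form.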

The Oxford Internet Institute has documented how coordinated disinformation campaigns use AI-powered bots to simulate a non-existent popular consensus, a practice known as ‘astroturfing’. This can leave users feeling besieged, isolated in a distorted reality. The European Union Agency for Cybersecurity (ENISA) emphasises that these systems do not ‘understand’ the truth; they merely predict sequences of data that appear plausible, which makes their capacity for persuasion all the more dangerous.
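In its crudest form, the astroturfing pattern described above can be caught with a simple coordination heuristic. The sketch below is purely illustrative (the thresholds, field layout and sample data are invented, not any real platform’s detection pipeline):

```python
from collections import defaultdict

def flag_coordinated(posts, window=60, min_accounts=3):
    """Flag identical messages posted by many distinct accounts within
    the same time window: a crude astroturfing signal. Each post is a
    tuple (account, timestamp_seconds, text)."""
    buckets = defaultdict(set)
    for account, ts, text in posts:
        buckets[(text, ts // window)].add(account)
    return [text for (text, _), accounts in buckets.items()
            if len(accounts) >= min_accounts]

posts = [
    ("bot_a", 10, "Candidate X is surging!"),
    ("bot_b", 15, "Candidate X is surging!"),
    ("bot_c", 42, "Candidate X is surging!"),
    ("user_1", 30, "Nice weather today."),
]
print(flag_coordinated(posts))  # → ['Candidate X is surging!']
```

Real campaigns paraphrase their messages precisely to evade this kind of exact-match grouping, which is why researchers rely on richer behavioural signals such as posting rhythms and account-creation patterns.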

The geopolitics of disinformation: AI as a weapon of destabilisation

Beyond the harm caused to individuals, artificial intelligence has become a central tool in modern hybrid warfare strategies, capable of influencing the fate of entire nations without firing a single shot. Coordinated networks of advanced bots are used by state actors to interfere in democratic processes and elections. According to the European Union’s intelligence service (INTCEN), the use of audio and video deepfakes to simulate compromising statements by political leaders is a rapidly expanding tactic designed to polarise public opinion and undermine trust in institutions.

This manipulation is not merely aimed at making people believe a lie, but at undermining the very notion that a shared truth exists. The NATO StratCom Centre of Excellence highlights how these ‘information laundering’ operations enable artificially generated content to infiltrate traditional media, making the defence of digital sovereignty a national security challenge.

The technological arms race: detection and digital watermarking

The response to these threats is itself technological. DARPA’s Semantic Forensics (SemaFor) programme develops detection tools that look for semantic inconsistencies in manipulated media, going beyond the pixel-level artefacts on which earlier forensics relied. In parallel, the European Commission’s Joint Research Centre (JRC) examines digital watermarking: embedding machine-readable signals in AI-generated text, images and audio so that their provenance can be verified downstream.

Neither approach is a silver bullet. Detectors must constantly catch up with ever more capable generators, and watermarks can be weakened by editing or paraphrasing the content. Watermarking is therefore best understood as one layer in a broader trust infrastructure, alongside transparency obligations to label AI-generated content, rather than a standalone fix.
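The watermarking idea named in this section’s heading can be illustrated with a toy statistical scheme, loosely inspired by published ‘green list’ text-watermarking research. The vocabulary, hash choice and generator here are invented for illustration only: the generator prefers words from a secret, context-dependent half of the vocabulary, and a detector measures how often that preference shows up.

```python
import hashlib

def green_list(prev_word, vocab):
    """Deterministic, context-dependent 'green' half of the vocabulary,
    keyed on the previous word via a hash."""
    ranked = sorted(vocab, key=lambda w: hashlib.sha256(
        (prev_word + "|" + w).encode()).hexdigest())
    return set(ranked[: len(vocab) // 2])

def green_fraction(text, vocab):
    """Fraction of words drawn from the previous word's green list.
    Unwatermarked text hovers near 0.5; watermarked text scores higher."""
    words = text.split()
    hits = sum(w in green_list(prev, vocab)
               for prev, w in zip(words, words[1:]))
    return hits / max(len(words) - 1, 1)

vocab = ["alpha", "beta", "gamma", "delta", "epsilon", "zeta"]

# A toy 'watermarked' generator: always emit a green-list word.
words = ["alpha"]
for _ in range(10):
    words.append(sorted(green_list(words[-1], vocab))[0])

print(green_fraction(" ".join(words), vocab))  # → 1.0
```

A real detector would run a significance test on this fraction rather than eyeball it, and paraphrasing or editing the text weakens the signal, which is why watermarking is paired with other provenance measures.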


Sources

DARPA. (2025). Semantic Forensics (SemaFor): Protecting the integrity of digital media. Defense Advanced Research Projects Agency. https://www.darpa.mil/program/semantic-forensics
ENISA. (2024). AI and disinfo: Threat landscape and mitigation strategies. European Union Agency for Cybersecurity. https://www.enisa.europa.eu/topics/cyber-threats/threat-landscape
EU INTCEN. (2024). Hybrid threats and the role of generative AI in geopolitical destabilization. EU Intelligence and Situation Centre. https://www.eeas.europa.eu/eeas/countering-hybrid-threats_en
JRC. (2024). AI, deepfakes and the future of digital trust: Technical solutions and watermarking. Joint Research Centre, European Commission. https://op.europa.eu/it/publication-detail/-/publication/14e669cb-fbae-11ee-a251-01aa75ed71a1/language-en (p. 10)
NATO StratCom CoE. (2024). The role of AI in state-sponsored disinformation campaigns. NATO Strategic Communications Centre of Excellence. https://www.nato.int/en/what-we-do/wider-activities/natos-approach-to-counter-information-threats
Oxford Internet Institute. (2024). The global inventory of organized social media manipulation. University of Oxford. https://demtech.oii.ox.ac.uk/wp-content/uploads/sites/12/2019/09/CyberTroop-Report19.pdf (pp. 4, 11)
UNESCO. (2024). Guidelines for the governance of digital platforms: Safeguarding freedom of expression. https://unesdoc.unesco.org/ark:/48223/pf0000387339 (p. 7)
World Economic Forum. (2026). The global risks report 2026: 21st edition. https://reports.weforum.org/docs/WEF_Global_Risks_Report_2026.pdf (p. 17)
