FAKE NEWS

A guide to recognising disinformation and safeguarding the right to free and reliable information.

Introduction

Fake news is no longer simply a matter of errors, but content designed to manipulate perceptions and emotions. Spread via social media and messaging platforms, it serves to influence public opinion, generate profit or sow confusion on sensitive issues.
Generative AI has made disinformation cheaper, more widespread and harder to spot, leading to an increase in synthetic content. According to the World Economic Forum, AI-driven disinformation represents one of the main threats to social cohesion, whilst the use of deepfakes to simulate statements by political leaders is also on the rise. The most serious risk is ‘information laundering’, namely the infiltration of false content into traditional media. This phenomenon undermines trust in institutions and weakens the very idea of a shared truth.
SEMANTIC-VISUAL INCONSISTENCY

Repetitive statistical patterns and geometric anomalies in rendering, signalling the absence of genuine understanding of logical and environmental context.

NO CERTIFIED PROVENANCE

The absence of a verifiable source or digital watermarks that certify the origin of the content.

MULTILEVEL STRATEGIC INFILTRATION

A process that legitimises synthetic or manipulated content by routing it from unverified channels through to established media outlets or institutional profiles.

Contextual coherence and visual integrity

Content generated by artificial intelligence follows repetitive statistical patterns, which differ from the natural variability of human behaviour and expression. Although it appears plausible, this content lacks genuine semantic understanding and logic, creating an ‘artificial perfection’ that enhances the credibility of sensationalist narratives.
Detecting deception relies on observing logical and geometric anomalies and rendering flaws, such as inconsistent reflections, incorrectly positioned shadows or a lack of dynamism in the surrounding environment. Analysing spatial and textual consistency therefore becomes a fundamental tool for distinguishing authentic content from deliberate manipulation, protecting the user from messages designed to destabilise their perceptions or emotions.

From synthetic traceability to information laundering

Multimedia assets lacking certified metadata or a chain of custody, such as that defined by the C2PA (Coalition for Content Provenance and Authenticity) standard, facilitate information laundering. Without watermarking or provenance data, the origin of the content cannot be verified, turning it into a high-impact threat.
In ‘information laundering’, brief snippets of content are passed from fringe channels to mainstream media, creating circular chains of citations with no verifiable facts and saturating the public debate. The defence consists of exposing the chain to neutralise its spread and protect the integrity of decision-making.
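The provenance check described above can be sketched in code. As a minimal illustration only (not a substitute for a real C2PA validator such as the official C2PA SDK), the sketch below scans a JPEG's APP11 application segments, which is where the C2PA specification embeds its JUMBF manifest boxes, and reports whether a segment carrying the `c2pa` label appears. The function names are ours, the parsing is deliberately simplified, and a matching label only suggests a manifest is present: cryptographically verifying the chain of custody still requires dedicated tooling.

```python
def find_app11_segments(data: bytes) -> list:
    """Collect the payloads of APP11 (0xFFEB) segments in a JPEG byte stream.

    C2PA manifests are embedded in JUMBF boxes carried inside APP11 segments.
    This is a simplified header walk: it stops at SOS (0xFFDA) or EOI (0xFFD9)
    and ignores entropy-coded data.
    """
    segments = []
    i = 2  # skip the SOI marker (0xFFD8)
    while i + 4 <= len(data):
        if data[i] != 0xFF:
            break  # not a marker: malformed or past the header section
        marker = data[i + 1]
        if marker in (0xD9, 0xDA):  # EOI or SOS: stop scanning
            break
        # The two-byte length field includes itself but not the marker.
        length = int.from_bytes(data[i + 2:i + 4], "big")
        if marker == 0xEB:  # APP11
            segments.append(data[i + 4:i + 2 + length])
        i += 2 + length
    return segments


def looks_like_c2pa(data: bytes) -> bool:
    """Heuristic: does any APP11 payload mention the 'c2pa' JUMBF label?"""
    return any(b"c2pa" in seg for seg in find_app11_segments(data))
```

A positive result here is only a hint that provenance data exists; an absent manifest, as the section notes, is itself the warning sign.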