DEEPFAKE
Identify fabricated content and defend a person’s authenticity against the falsification of their image and voice.
Introduction
The term ‘deepfake’ refers to multimedia content created or altered with artificial intelligence to depict real people in situations that never actually occurred. Deepfakes are generated by neural networks trained to maximise the realism of the fake. They pose a threat to freedom of information and are used for blackmail, fraud and reputational damage. Their prevalence is growing rapidly, with the number of synthetic videos online reportedly doubling every six months. Over 90% of non-consensual deepfakes involve explicit material intended to harm women.
Companies are also suffering significant losses as a result of audio deepfake scams, highlighting vulnerabilities in security protocols. According to the JRC (2024), deepfakes undermine trust in institutions, turning the manipulation of appearances into a tool for attacking digital dignity.
LATERAL "MASK" EFFECT
Disappearance or distortion of facial features when the face turns to the side.
AUDIO-VIDEO DISCONTINUITY
Lack of synchronisation between lip movement and the sounds produced.
ABSENCE OF PROVENANCE
Missing original data (metadata) certifying where and when the photo was taken.
The "mask" effect and loss of synchronisation
Identifying synthetic content in real time requires the analysis of visual and phonetic anomalies. A key indicator is the ‘mask effect’: deepfake algorithms struggle to maintain facial consistency during sudden movements, causing visible glitches or overlaps. On the audio side, tell-tale signs include metallic distortions, a lack of prosodic variation, and micro-delays between facial expressions and speech articulation.
Spectrographic analysis and voiceprint verification enable these inconsistencies to be detected: when the biometric and acoustic channels do not match, the event is classified as a security incident, immediately triggering procedures to interrupt communication and report the threat.
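The mismatch between the biometric and acoustic channels can be sketched in code. The example below is a minimal illustration, not a production detector: it assumes a lip-motion signal (e.g. mouth-opening distance per video frame) and an audio energy envelope have already been extracted at the video frame rate, and it estimates their lag by normalised cross-correlation. All function names and thresholds are illustrative assumptions.

```python
import numpy as np

def estimate_av_lag(mouth_opening, audio_envelope, max_lag=15):
    """Estimate the lag (in frames) between a lip-motion signal and the
    audio energy envelope via normalised cross-correlation.
    Both inputs are 1-D arrays sampled at the video frame rate."""
    m = (mouth_opening - mouth_opening.mean()) / (mouth_opening.std() + 1e-9)
    a = (audio_envelope - audio_envelope.mean()) / (audio_envelope.std() + 1e-9)
    best_lag, best_score = 0, -np.inf
    for lag in range(-max_lag, max_lag + 1):
        # Shift the audio envelope and measure how well it lines up.
        score = float(np.dot(m, np.roll(a, lag))) / len(m)
        if score > best_score:
            best_lag, best_score = lag, score
    return best_lag, best_score

def flag_desync(mouth_opening, audio_envelope, lag_tolerance=3, min_corr=0.5):
    """Treat the clip as a potential incident when the two channels drift
    apart by more than a few frames or barely correlate at all."""
    lag, corr = estimate_av_lag(mouth_opening, audio_envelope)
    return abs(lag) > lag_tolerance or corr < min_corr
```

In practice a real pipeline would derive the lip signal from a face tracker and the envelope from short-time audio energy; the thresholds here stand in for a calibrated decision rule.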
Inconsistency in the certified origin
A real image taken by a photographer or a witness has a ‘chain of custody’ (who took it, and how). In contrast, synthetic content and deepfakes typically spread through social networks that lack historical context and credibility; in such scenarios, the absence of source metadata is identified as a key indicator of manipulation.
The preferred technological solution lies in the adoption of the C2PA standard, which embeds a cryptographically secure ‘digital passport’ within the asset to ensure provenance transparency. In strategic contexts, any asset lacking such traceability must be subject to the Zero Trust principle, with its dissemination suspended until the source has been definitively validated, in order to preserve the integrity of institutional decision-making processes.
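As a rough first-pass triage of that ‘digital passport’, one can check whether a JPEG even carries the marker segment where C2PA manifests live: in JPEG files, C2PA embeds its JUMBF boxes in APP11 segments. The sketch below only detects the presence of such a segment; actually validating the manifest cryptographically requires a dedicated C2PA library, and the function name is an illustrative assumption.

```python
import struct

APP11 = 0xFFEB  # JPEG marker segment that carries JUMBF boxes (used by C2PA)

def has_c2pa_candidate(jpeg_bytes: bytes) -> bool:
    """Walk the JPEG marker segments and report whether any APP11 segment
    is present. Absence is the 'missing passport' signal described above;
    presence still needs cryptographic validation by a real C2PA tool."""
    if jpeg_bytes[:2] != b"\xff\xd8":           # must start with SOI
        raise ValueError("not a JPEG stream")
    i = 2
    while i + 4 <= len(jpeg_bytes):
        if jpeg_bytes[i] != 0xFF:
            break                                # entered entropy-coded data
        marker = struct.unpack(">H", jpeg_bytes[i:i + 2])[0]
        if marker == 0xFFDA:                     # start of scan: stop walking
            break
        length = struct.unpack(">H", jpeg_bytes[i + 2:i + 4])[0]
        if marker == APP11:
            return True
        i += 2 + length                          # length includes its own 2 bytes
    return False
```

Under a Zero Trust policy, a `False` result would route the asset to quarantine rather than publication, pending manual source validation.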
Let’s defend humanity
in the digital age
Let's build shared, innovative pathways
to protect those in need
