FAKE ACCOUNT
How to spot fake profiles, protect your identity and stay safe online in the age of social disinformation.
Introduction
Fake accounts pose a systemic threat to digital security, as they distort reality and undermine trust online. So-called ‘ghost armies’ operate for purposes ranging from political destabilisation to financial crime. In this context, phishing is one of the most insidious techniques: it exploits the identities of trusted organisations, such as banks, to deceive victims and steal their credentials. The user, trusting the fictitious sender, takes actions that compromise their own security.
These practices do not merely affect individuals; they can become tools for large-scale manipulation, thereby jeopardising democratic stability. To protect against this, users and platforms must play an active role, implementing measures such as two-factor authentication and a ‘zero-contact’ policy towards suspicious accounts.
VISUAL INCONSISTENCY
AND NO HISTORY
Lack of a coherent timeline, combined with algorithmically generated images or auto-generated, registration-style usernames.
PRESSURE TO
INTERACT
Luring attempts that exploit human vulnerability to establish immediate contact for financial gain.
ASTROTURFING AND
COORDINATED NETWORKS
Imbalance between following and followers, combined with synchronized posting patterns designed to simulate artificial consensus through clusters of synthetic accounts.
Anomalies in identity, engagement and content
Identifying a fake identity in digital contexts requires assessing the consistency between images, information and the profile history. The presence of generated images, implausible names and the absence of a structured biography are signs of possible inauthenticity.
These profiles often show little history: recent accounts, primarily active in sharing promotional content or suspicious links, are typically linked to attempts at manipulation or phishing. Furthermore, there is a lack of genuine human interaction, with repetitive and automated behaviour. Recognising these signs enables you to quickly trigger reporting and isolation measures, thereby strengthening your digital security.
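The red flags above lend themselves to a simple checklist. As a minimal sketch, with hypothetical profile fields and thresholds chosen purely for illustration, they can be combined into a heuristic score:

```python
from dataclasses import dataclass

@dataclass
class Profile:
    account_age_days: int
    has_bio: bool
    posts: int
    link_share_ratio: float  # fraction of posts that are promotional links

def inauthenticity_score(p: Profile) -> int:
    """Count how many of the red flags described above apply (0-4).

    Thresholds are illustrative assumptions, not established cut-offs.
    """
    score = 0
    if p.account_age_days < 30:   # very recent account
        score += 1
    if not p.has_bio:             # no structured biography
        score += 1
    if p.posts < 5:               # little or no posting history
        score += 1
    if p.link_share_ratio > 0.8:  # mostly promotional/suspicious links
        score += 1
    return score

# A fresh, bio-less account that mostly shares links trips every flag:
suspect = Profile(account_age_days=7, has_bio=False, posts=3, link_share_ratio=0.9)
print(inauthenticity_score(suspect))  # → 4
```

A high score does not prove inauthenticity; it is a prompt to apply the reporting and isolation measures described above.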
Coerced interaction for financial gain
A ‘sockpuppet’ profile, run by a human with malicious intent, attempts to quickly build trust in order to persuade the user to click on links or share sensitive data. These digital grooming techniques aim to induce risky behaviour, such as accessing malicious content.
The most effective defence is to cut off all communication immediately, ensuring that no information is provided to the attacker. It is essential to combine this with technical measures, such as multi-factor authentication (MFA) and regular reviews of privacy settings, thereby reducing the effectiveness of manipulation and actively protecting one’s digital identity.
Structural anomalies in clusters and manipulation of consensus
When analysing a suspicious profile, one must take into account the structure of its network of connections, particularly any inconsistency between the number of followers and the quality of interactions. Astroturfing is a sophisticated strategy that creates ‘ghost armies’ to simulate a spontaneous public movement.
A key indicator is the skewed ratio of followers to following: synthetic accounts follow many profiles in order to secure follow-backs, but remain isolated from real communities. As highlighted in the Graphika Report (2025), self-referential clusters in which bots follow one another confirm the artificial nature of the network.
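Both indicators, the skewed follower-to-following ratio and the self-referential cluster, can be checked mechanically on a follow graph. A minimal sketch over a small hypothetical graph (account names and the flagging threshold are invented for illustration):

```python
# Hypothetical follow graph: each key follows the accounts in its set.
follows = {
    "bot_a": {"bot_b", "bot_c", "user_1", "user_2", "user_3"},
    "bot_b": {"bot_a", "bot_c", "user_1", "user_2", "user_3"},
    "bot_c": {"bot_a", "bot_b", "user_1", "user_2", "user_3"},
    "user_1": {"user_2"},
    "user_2": {"user_1"},
    "user_3": set(),
}

def followers_of(account: str) -> set[str]:
    """Accounts that follow the given account."""
    return {a for a, followed in follows.items() if account in followed}

def skewed_ratio(account: str, threshold: float = 2.0) -> bool:
    """Flag accounts that follow far more profiles than follow them back."""
    n_followers = len(followers_of(account))
    n_following = len(follows[account])
    return n_following >= threshold * max(n_followers, 1)

def mutual_cluster(accounts: set[str]) -> set[str]:
    """Accounts in the set that all follow one another (self-referential)."""
    return {a for a in accounts
            if all(b in follows[a] and a in follows[b]
                   for b in accounts if b != a)}

print([a for a in follows if skewed_ratio(a)])        # the three bots
print(sorted(mutual_cluster({"bot_a", "bot_b", "bot_c"})))
```

In this toy graph the bots follow many accounts to fish for follow-backs yet attract few real followers, and they follow one another in a closed loop, exactly the two structural anomalies described above.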
Let’s defend humanity
in the digital age
We are building collaborative and innovative approaches to protect those in need.
