Stronger regulation is essential.
Among other things, the European Commission should designate Telegram as a Very Large Online Platform (VLOP) under the Digital Services Act (DSA).
Read the full report: aiforensics.org/work/telegra...
Posts by AI Forensics
These findings underscore the need for a holistic response to the proliferation of image-based sexual abuse (IBSA), addressing both broader societal dynamics that have normalized misogyny and the platform design choices that have enabled it.
On Telegram, risks are amplified by:
• Lax moderation choices
• Ineffective reporting mechanisms
• Selective enforcement of policies
• Premium features enabling monetization
Telegram also acts as a hub:
• Redistribution of content from TikTok & Instagram
• Reddit used as a recruitment gateway linking to Telegram channels
• Content includes CSAM and depictions of incest and rape
• Monetization: €20–€50 one-time access or €5/month subscriptions to channels and archives
• Nudifying bots are widely advertised and embedded, enabling synthetic NCII at scale
🔍 The investigation found:
• Perpetrators are predominantly young heterosexual men using Telegram’s infrastructure (incl. Premium) to organize and monetize content
• Victims are predominantly women, including partners, acquaintances, ex-partners, and public figures
🔍 We analyzed nearly 2.8 million messages across 16 groups and channels over six weeks.
We found that what is often framed as individual misconduct is in fact structured, coordinated, and transnational. 👇
🚨 NEW: Large-scale Telegram networks are spreading and monetizing non-consensual intimate images (NCII), including CSAM, in Italy and Spain.
Our investigation reveals an organized ecosystem of abuse at scale — involving 25,000 users. 🧵
📊 Read the full study to find out more about how generative AI in French political campaigns has evolved since our 2024 investigation: aiforensics.org/work/artific...
Under the Digital Services Act, platforms must mitigate electoral risks.
That includes clear labelling of AI-generated content.
The EU’s upcoming Code of Practice (Aug 2026) should clarify responsibilities on AI labelling, but enforcement will be key.
4. Use remains most pronounced on the right and far right (Renaissance, Reconquête), with one instance from the Communist Party.
3. It’s not just photorealistic images: parties are also using stylized, cartoonish AI visuals to push core messages, such as anti-immigrant narratives.
2. Generative AI is becoming normalized, increasingly replacing human-made campaign visuals.
1. Most AI-generated political content is still not labelled, even when watermarks like SynthID are present.
So what’s changed in 2026?
Our follow-up during the recent municipal elections shows the following. 👇
Back in 2024, we found French political parties using AI-generated images to amplify anti-EU and anti-immigrant narratives.
None of it was labelled as AI.
“Generative AI has arrived in France’s local elections.”
That was Le Monde’s headline just a month before the 2026 municipal vote.
But what did that actually mean in practice? 👇
To find out more, read our full 2025 Annual Report: aiforensics.org/about
But this was just one of several major investigations last year.
In 2025, our work spanned:
– AI-generated content and agentic manipulation
– the integrity of key elections
– age verification tools
– the spread of pornographic content
– networks of scam advertising and illegal targeting
Within days:
– regulators contacted us
– X removed the feature
– the European Commission opened a new DSA investigation
This is a powerful illustration of how we work.
In the final days of 2025, thousands of non-consensual AI-generated images spread across X.
We investigated it before regulators were even back from holiday.
Marc Faddoul considers all the ways that AI output can be influenced without transparency or accountability.
As elections unfold in both the Netherlands and France, use our updated Human Guide for AI Detection to make sense of the information around you.
🔎 Meanwhile, we’ll be monitoring TikTok to examine whose voices are amplified, if disinformation is spreading, and how GenAI is shaping what people see. Stay tuned!
AI Forensics remains committed to producing evidence that supports stronger policy responses to online misogyny at both national and international levels.
Thank you to @igualdadgob.bsky.social and @instmujeres.bsky.social, the French Ministry of Foreign Affairs (Ministère des Affaires étrangères), and the FEMM Committee of the @europarl.europa.eu, in particular its chair @linagalvez.eu, for the invitation to speak on these important topics.
🔎 Among the topics discussed were our investigations into deepfakes and Grok, as well as our forthcoming research on Telegram and the sharing of non-consensual intimate images.