
Posts by AI Forensics


Stronger regulation is essential.

Among other things, the European Commission should designate Telegram as a Very Large Online Platform (VLOP) under the Digital Services Act (DSA).

Read the full report: aiforensics.org/work/telegra...

1 week ago

These findings underscore the need for a holistic response to the proliferation of image-based sexual abuse (IBSA), addressing both broader societal dynamics that have normalized misogyny and the platform design choices that have enabled it.

1 week ago

On Telegram, these risks are amplified by:

• Lax moderation choices
• Ineffective reporting mechanisms
• Selective enforcement of platform policies
• Premium features that enable monetization

1 week ago

Telegram also acts as a hub:

• Content is redistributed from TikTok and Instagram
• Reddit serves as a recruitment gateway, linking users to Telegram channels

1 week ago

• Content includes CSAM and depictions of incest and rape
• Monetization: €20–€50 one-time access or €5/month subscriptions to channels and archives
• Nudifying bots are widely advertised and embedded, enabling synthetic NCII at scale

1 week ago

🔍 The investigation found:

• Perpetrators are predominantly young heterosexual men using Telegram’s infrastructure (incl. Premium) to organize and monetize content
• Victims are predominantly women, including partners, acquaintances, ex-partners, and public figures

1 week ago

🔍 We analyzed nearly 2.8 million messages across 16 groups and channels over six weeks.

We found that what is often framed as individual misconduct is in fact structured, coordinated, and transnational. 👇

1 week ago

🚨 NEW: Large-scale Telegram networks are spreading and monetizing non-consensual intimate images (NCII), including CSAM, in Italy and Spain.

Our investigation reveals an organized ecosystem of abuse at scale — involving 25,000 users. 🧵

1 week ago
Artificial Elections 2.0: Generative AI in the 2026 French Elections

This follow-up to our 2024 research on generative AI use by French political parties shows that during this year’s municipal elections, AI-generated content remained largely unlabelled and increasingl...

📊 Read the full study to find out more about how generative AI in French political campaigns has evolved since our 2024 investigation: aiforensics.org/work/artific...

2 weeks ago

Under the Digital Services Act, platforms must mitigate electoral risks.

That includes clear labelling of AI-generated content.

The EU’s upcoming Code of Practice (Aug 2026) should clarify responsibilities on AI labelling.

But enforcement will be key.

2 weeks ago

4. Use remains most pronounced on the right and far right (Renaissance, Reconquête), with one instance from the Communist Party.

2 weeks ago

3. It’s not just photorealistic images.

Parties are also using stylized, cartoonish AI visuals to push core messages, such as anti-immigrant narratives.

2 weeks ago

1. Most AI-generated political content is still not labelled, even when watermarks like SynthID are present.

2. Generative AI is becoming normalized, increasingly replacing human-made campaign visuals.

2 weeks ago

So what’s changed in 2026?

Our follow-up during the recent municipal elections shows the following. 👇

2 weeks ago

Back in 2024, we found French political parties using AI-generated images to amplify anti-EU and anti-immigrant narratives.

None of it was labelled as AI.

2 weeks ago

“Generative AI has arrived in France’s local elections.”

That was Le Monde’s headline just a month before the 2026 municipal vote.

But what did that actually mean in practice? 👇

2 weeks ago

AI Forensics 2025 Annual Report cover

To find out more, read our full 2025 Annual Report: aiforensics.org/about

4 weeks ago

But this was just one of several major investigations last year.

In 2025, our work spanned:
– AI-generated content and agentic manipulation
– the integrity of key elections
– age verification tools
– the spread of pornographic content
– networks of scam advertising and illegal targeting

4 weeks ago

Within days:
– regulators contacted us
– X removed the feature
– the European Commission opened a new DSA investigation

This is a powerful illustration of how we work.

4 weeks ago

In the final days of 2025, thousands of non-consensual AI-generated images spread across X.

We investigated it before regulators were even back from holiday.

4 weeks ago
Who’s Whispering in Your Chatbot’s Ear?

Marc Faddoul considers all the ways that AI output can be influenced without transparency or accountability.

1 month ago

As elections unfold in both the Netherlands and France, use our updated Human Guide for AI Detection to make sense of the information around you.

🔎 Meanwhile, we’ll be monitoring TikTok to examine whose voices are amplified, whether disinformation is spreading, and how GenAI is shaping what people see. Stay tuned!

1 month ago

AI Forensics remains committed to producing evidence that supports stronger policy responses to online misogyny at both national and international levels.

1 month ago

Thank you to @igualdadgob.bsky.social and @instmujeres.bsky.social, the French Ministry of Foreign Affairs (Ministère des Affaires étrangères), and the FEMM Committee of @europarl.europa.eu, in particular its chair @linagalvez.eu, for the invitation to speak on these important topics.

1 month ago

🔎 Among the topics discussed were our investigations into deepfakes and Grok, as well as our forthcoming research on Telegram and the sharing of non-consensual intimate images.

1 month ago