As elections unfold in both the Netherlands and France, use our updated Human Guide for AI Detection to make sense of the information around you.
🔎 Meanwhile, we’ll be monitoring TikTok to examine whose voices are amplified, whether disinformation is spreading, and how GenAI is shaping what people see. Stay tuned!
Posts by Natalia Stanusch
As #AI-generated content becomes increasingly widespread, users need both awareness of the evolving risks and tools to navigate an ever more misleading information environment, especially when platform safeguards fall short.
@aiforensics.org's “The Human Guide to Detecting AI Imagery” is structured as a series of steps, divided into four focus areas:
1) Before the Image: AI Telltales
2) Synthetic Artifacts in AI Imagery
3) Moving Images/Clips
4) Digital Provenance
Our guide translates #medialiteracy into a practical set of tips.
The guide can support the work of researchers, journalists, and fact-checkers.
But it can also serve as a guide in everyday scrolling, helping regular #socialmedia users and voters be better equipped to navigate the age of #AIslop.
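The fourth focus area, digital provenance, can be partly automated. As an illustration only (this is not the guide’s tooling), here is a minimal Python sketch that scans raw image bytes for common provenance hints: a C2PA Content Credentials manifest, the IPTC/XMP “trainedAlgorithmicMedia” digital-source type, and generator names sometimes left in metadata. Real verification requires a C2PA-aware library and cryptographic signature checks; this heuristic only flags the presence of hints.

```python
import re

def provenance_hints(image_bytes: bytes) -> dict:
    """Heuristic scan of raw image bytes for embedded provenance markers.

    Illustrative sketch only: it detects the *presence* of hints, it does
    not validate signatures or parse the metadata containers properly.
    """
    return {
        # C2PA Content Credentials live in a JUMBF box labelled "c2pa"
        "c2pa_manifest": b"c2pa" in image_bytes,
        # IPTC DigitalSourceType value used for generative-AI output
        "xmp_ai_source": b"trainedAlgorithmicMedia" in image_bytes,
        # Some generators write their name into XMP/EXIF software fields
        "generator_tag": bool(
            re.search(rb"(DALL-E|Midjourney|Stable Diffusion)", image_bytes, re.I)
        ),
    }
```

An image with no hints is not thereby authentic — metadata is easily stripped on upload — which is why the guide pairs provenance checks with visual telltales.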
With important elections coming up as early as next week in France, the Netherlands, and Hungary, the question of (in)adequate content moderation and its risks is again top of mind. #AI-generated content is increasingly becoming an important part of that risk.
You can find the full “Human Guide to Detecting AI Imagery” on our website:
lnkd.in/dsG7cfR3
Image courtesy: Miriam Sáenz de Tejada/Euractiv, based on AI Forensics’ “The Human Guide to Detecting AI Imagery.”
An infographic showing signs of AI-generated content on a social media platform.
Navigating between AI and non-AI content is becoming increasingly difficult. At @aiforensics.org we may not offer a magic formula, but we can offer some tools. Today, we’re publishing an updated version of The Human Guide to Detecting AI Imagery, building on our continued work on AI-generated content.
Really enjoyed this paper by @nataliastanusch.bsky.social and Richard Rogers. Their term 'techno-hagiography' (the performative writing of a mythology of AI and its spokespeople) and its relationship to temporality is apt to describe future-making in the present. journals.sagepub.com/doi/10.1177/...
Grateful to the organizers and participants for important discussions on EU platform regulation in today’s challenging political climate—and how best to use the DSA and DMA going forward.
🔍 Read our AI Search report: tinyurl.com/3zf7vvcj
🤖 Read our TikTok Research API report: tinyurl.com/4mrw3v95
🔹 @nataliastanusch.bsky.social presented our report “From ‘Googling’ to ‘Asking ChatGPT’: Governing AI Search.”
It argues for:
– expanding moderation to account for AI search systems
– distinguishing content moderation from behaviour moderation
– a complementary role for the DSA and the AI Act
🔹 Our Head of Research, Salvatore Romano, spoke on “Scraping, APIs, and Access to Platform Data.”
Drawing on our TikTok Research API investigation, he showed how unreliable or error-prone APIs can obstruct independent research—often pushing researchers back to scraping.
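One defensive pattern researchers fall back on when an official API is unreliable is retrying failed calls with exponential backoff before giving up. The sketch below is a generic illustration, not code from our investigation or TikTok’s actual API; `call` stands in for any zero-argument API request.

```python
import random
import time

def fetch_with_retry(call, max_attempts=5, base_delay=1.0):
    """Retry a flaky API call with exponential backoff and jitter.

    `call` is any zero-argument callable that returns a response or
    raises. Names and parameters here are illustrative only.
    """
    last_error = None
    for attempt in range(max_attempts):
        try:
            return call()
        except Exception as err:  # real code would catch specific errors
            last_error = err
            # Delay doubles each attempt; random jitter avoids thundering herds
            time.sleep(base_delay * (2 ** attempt + random.random()))
    raise RuntimeError(f"gave up after {max_attempts} attempts") from last_error
```

When even this kind of defensive wrapper keeps failing, researchers are left with exactly the choice Salvatore described: abandon the question or return to scraping.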
The API Was Official. The Problems Were Also Official.
Governing AI Search presentation.
At the DSA and Platform Regulation Conference.
Last week marked two years of the DSA.
To mark the occasion, AI Forensics joined the DSA & Platform Regulation Conference, organized by the @dsaobservatory.bsky.social, to share our research and reflect on the future of platform governance in the EU.
We contributed two presentations.👇
“Scraping, APIs & Access to Platform Data”: Our Head of Research Salvatore Romano will present our work on data access under the DSA.
“AI and the DSA”: Our Researcher @nataliastanusch.bsky.social will present our work on regulating and moderating AI-powered search.
See you there!
Planning on joining @dsaobservatory.bsky.social’s #DSA #Conference in #Amsterdam next week?
Come say hi and learn about some of our latest research on Tuesday, February 17 👇
🔒Safer Internet Day is a good moment to reflect on how our investigations are helping push the internet in a safer direction — and how we can continue being a catalyst for change in 2026. 🧵
Stay tuned!
We’ve received many requests in recent days for updates on our Grok report. We’re currently conducting additional monitoring and analysis and will have more to share shortly. 🔍
A year’s work with my colleague @nataliastanusch.bsky.social on “moderation” & regulation of AI search!
Researcher Natalia Stanusch presents the findings of the report “Investigating AI Chatbots.”
Head of Policy Raziye Buse Çetin presents the findings from the report "Governing AI Search."
Head of Policy Raziye Buse Çetin presenting at CDT Europe's DSA roundtable.
🔍 As the EU calls for “simplification,” our year-long study of LLM chatbots shows a growing regulatory gap.
LLMs like ChatGPT and Gemini increasingly act like search engines. But do they fall under the DSA, the AI Act, or somewhere in between?
⚠️ AI chatbots are reshaping how we search. But EU rules haven’t caught up.
Our new policy report digs into the risks of LLM-powered search and the gap in Europe’s regulatory approach.
So how do we fix it?
👇 Full report + recommendations:
tinyurl.com/2p2h77et
🚨354 automated AI accounts generated 4.5 billion views on TikTok through more than 43,000 AI-made posts.
Our investigation finds Agentic AI Accounts spreading harmful content. Yet TikTok labels only around 1.38% of these posts. ⚠️
Full report: tinyurl.com/4kkfm4nn
🎯 Targeted Ads, TikTok & Meme Resistance
New in Big Data & Society by @nataliastanusch.bsky.social
What do TikTok memes reveal about how users understand — and resist — being watched by algorithms?
👉 journals.sagepub.com/doi/10.1177/...
New AIF report: is AI-generated imagery really gaming the platforms’ algorithms? We investigated search results on #TikTok and #Instagram across countries, languages, and topics. #genai #aislop -> more in our report:
📢 In a joint civil society effort, we filed a formal DSA complaint against 𝕏 (formerly Twitter) for violating Article 26(3) — which bans targeting ads based on sensitive data like sexuality, politics, religion, health.
📄 Read the joint statement: aiforensics.org/uploads/Join...
New report just dropped: TikTok, content labelling, and genAI in the context of the 2025 Polish elections. The report has also been covered in Poland by @demagog.org.pl
🧵How TikTok failed Polish democracy during the 2025 Presidential Election
Our new investigation exposes serious gaps in TikTok's election content moderation that disproportionately harmed the Polish diaspora: aiforensics.org/work/tiktok-...
🫵 Will you help us spread the word about AIxDESIGN Festival? 🫵
🏉 We're throwing our first-ever festival and need your help reaching the people it's made for! > bit.ly/aixd-festiva...
Tag us if you share - we love to see it!
🎟️ TICKETS NOW AVAILABLE 🎟️
🎪 AIxD Festival: On Slow AI 🎪 May 1-3 in Amsterdam
Imagine a cosy space to deconstruct mainstream AI stories, dream up new ones, and figure out how we make them real. It will be a fever dream of talks, workshops, an art exhibition, and a film screening → 🔗 lu.ma/yiaccf39
It's Monday, time to repeat the week again but before you do... ✨ On a scale of esoteric Sun, how do you feel today after reading our chapter on Esoteric AI? 🌞
And in case the horrors of Capitalism persist, take a moment to vibe and summon the full chapter → 🔗 lnkd.in/gKE-HRfq 🔮
This chapter is the manifestation of Natalia and the team’s past 12 months of research into Esoteric AI ⏳ A labour of love, we’d love to hear your thoughts!
🎞️ DREAM TEAM: Sofia Vieira, Kaashvi Kothari, @nataliastanusch.bsky.social , Ploipailin Flynn, @nadiapiet.bsky.social
RELEASE ALERT 🚨 (3/9) “Esoteric AI: To disenchant the enchanted, and enchant the disenchanted” chapter is out now! 🔗 Summon the full chapter on our website → lnkd.in/gKE-HRfq 🧛