#AI4TRUST protecting against disinfo using trustable AI - join the convo now
Thanks! See you next time! #AI4TRUST
And that’s a wrap! Thank you for your participation and contribution to this thoughtful discussion! 👏
Find out more about #AI4TRUST here: www.ai4trust.eu
A5. Media literacy must be taught throughout life, and by humans, to develop critical thinking skills in everyone. AI shouldn't be the final arbiter of what’s true or make high-stakes decisions—hallucinations happen, and algorithms can’t replace human editorial judgment. #AI4TRUST
Humans must verify harm, explain context, and hold platforms accountable, while AI should only flag suspicious content, never decide truth or censor alone. Judgment must stay human. #AI4TRUST
AI should not replace the professional fact-check team.
#AI4TRUST
A5: Humans should verify sources, contextualise claims, and coordinate responses, while AI should not be relied on to judge truth independently, replace critical thinking, or handle sensitive decisions without oversight. #AI4TRUST
A5: Humans: Must provide context and ethics. Only a person should make the final call to label or remove content.
AI: Should handle the scale and speed of data but should never be the final "judge of truth" or an autonomous censor. #AI4TRUST
We’re lucky news audiences still trust humans more than AI, even as trust in journalism has dipped in recent years #AI4TRUST
www.euractiv.com/news/news-au...
A5. When the next wave hits: humans verify, explain, build trust. AI scans and maps. What AI shouldn’t do? Be the final judge of truth in messy political debates. #AI4TRUST
A5. AI can help map networks, flag anomalies and surface claims at scale, while humans assess context, intent and proportional response. But AI should never arbitrate truth alone or replace editorial judgement. Trust, accountability and public communication remain human tasks! #AI4TRUST
Q5. When the next major disinformation wave hits, what should humans do - and what should AI not be asked to do?
#AI4TRUST
Language gaps, national politics, and different platforms slow things down for sure. I think nothing is impossible, but responses may stay partly fragmented for now. #AI4TRUST
A4: Europe is building shared hubs like EDMO (European Digital Media Observatory), but responses remain fragmented by language. While English-language AI is strong, smaller languages (like Bulgarian or Polish) have fewer tools, leaving those regions more vulnerable to local lies. #AI4TRUST
A4. Europe can share detection signals, especially via networks like @edmo-eu.bsky.social. But language, platforms and politics still fragment the response. #AI4TRUST
This is where I'm optimistic. There's consensus among countries, and it's not difficult with the help of existing technology.
#AI4TRUST
A4. Europe can build a cross-border defence to protect its values: EU-wide networks and DSA systemic-risk rules enable common monitoring and coordinated responses across platforms. Europe should also create a European news streaming platform highlighting verified, diversified sources. #AI4TRUST
Europe can build a shared approach, but differences in language, platform policies, and national priorities mean responses are still likely to be uneven and partly fragmented. #AI4TRUST
A4. Although AI makes a shared EU response technically feasible, there is still fragmentation. Trust, media systems, language nuance and platform governance differ widely. Coordination would have to be political and cultural, not just technical. #AI4TRUST
Q4. Can Europe build a shared approach to detecting cross-border disinformation, or will responses remain fragmented by language and platform?
#AI4TRUST
They sometimes get things wrong.
#AI4TRUST
Journalists probably fear AI will replace nuanced judgment, fact-checkers worry about hidden biases in the algorithms, and readers? They’re just left wondering what’s real anymore. #AI4TRUST
A3. As the AI Act rolls out, worries grow: opaque tools shaping coverage, over-automation, and who’s accountable when AI gets it wrong. #AI4TRUST
In extreme circumstances, if a malign actor (could be backed by foreign adversaries) flooded the Internet with misinformation and disinformation, would that paralyze AI's ability to check disinformation?
#AI4TRUST
A3: The main fear is verification fatigue. Journalists worry AI-generated summaries will replace deep reporting, while readers fear that if everything can be faked, nothing is true. There is also a major concern about AI tools being trained on stolen journalism. #AI4TRUST
A3. Journalists & fact-checkers worry about being replaced by AI and losing their jobs, and about getting false info from AI. Publishers fear their content will be used without respect for copyright. As for readers, they might lose traceability of info - eroding trust. #AI4TRUST
A3. Concerns centre on over-reliance, opaque models, errors framed as authority, bias in training data, and AI summaries diluting original reporting. Loss of editorial control, trust, accountability and who ultimately shapes the narrative are quite worrisome. #AI4TRUST
A3 They are most worried that AI tools could produce errors, reinforce biases, reduce human oversight, and make it harder to verify information, potentially spreading misinformation rather than preventing it. #AI4TRUST
A2. AI-made disinfo can feed what LLMs learn from, and AI-text detection is inherently unreliable - so how do we counter risks created by the tool with the tool itself? AI could add value by rapidly flagging disinfo, tracking narratives and detecting manipulation, but we must enforce rigorous fact-checking. #AI4TRUST
Q3. As EU AI rules move from theory to practice, what worries journalists, fact-checkers and readers most about relying on AI tools?
#AI4TRUST