Our session will be led by @natalihelberger.bsky.social and @ferraribraun.bsky.social, and will include space for co-creating ideals, as well as a practical case study of the pilot currently running in the Dutch research sector trialling a transition to Nextcloud software.
🔗 www.aim4dem.nl/events/
Join our workshop: "Building a Shared Vision on Future Digital Infrastructures"
It's our session as part of @iasamsterdam.bsky.social's 10th anniversary, exploring how academia can transition to Big Tech alternatives. 🧵 More info & sign-up link below!
📅 19 May | 10:00 - 12:00
📍 Oude Turfmarkt 145-147
📣 Open call from the AI Hype, Hope & Humanity conference organized by the ELSA network!
Are you a researcher, practitioner, or anything in between, working on human-centered AI and related aspects of societal resilience? Come share your ideas!
👉 Submit by 24 May: www.eur.nl/en/events/ai...
👉 And here's last month's edition for a preview of what you can expect: sh1.sendinblue.com/aib4sbhdolpf...
📩 Are you signed up for our newsletter?
It's got the Lab's latest updates: publications, upcoming events, and inspiring articles to get you thinking about how AI, media, and democracy are evolving in tandem.
👉 Subscribe here, just in time for the April edition: www.aim4dem.nl/newsletter/
🌍 How is AI changing society?
@natalihelberger.bsky.social was a speaker at AI and the Future of News 2026, organized by @reutersinstitute.bsky.social, as part of an engaging panel on the impact of AI from the perspectives of law, economics, policy, and security.
👉 Read more: tinyurl.com/4j8eetpu
"The biggest threat to democracy is not AI. But anti-democratic forces, often democratically elected. And yes, they may leverage AI in their strategies"
My opening at the ADD AI Summit
Inspiring, depressing, occasionally uplifting summit
@ddc-sdu.bsky.social
@algosoc.org
@aimediademlab.bsky.social
So, further work in this area needs a more granular focus on task-specific evaluations, to capture the many diverse needs and workflows of journalists (see the sketch below).
🗞️ See more on the Journalism Benchmark Cookbook:
generative-ai-newsroom.com/the-journali...
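To make "task-specific evaluation" concrete, here is a minimal Python sketch of what a cookbook-style harness could look like: each journalistic task carries its own examples and its own value-appropriate metric, and results are reported per task rather than collapsed into one score. The task names, metrics, and data are our hypothetical illustrations, not taken from the cookbook itself.

```python
# Minimal sketch of task-specific LLM evaluation for news work.
# Every task name, metric, and example below is hypothetical.

from dataclasses import dataclass
from typing import Callable

@dataclass
class Task:
    name: str                             # e.g. "entity_extraction"
    examples: list[tuple[str, str]]       # (input, reference output) pairs
    metric: Callable[[str, str], float]   # scores one output against a reference

def exact_match(output: str, reference: str) -> float:
    """Strict metric: suits extraction-style tasks."""
    return float(output.strip().lower() == reference.strip().lower())

def token_overlap(output: str, reference: str) -> float:
    """Loose metric: suits summarization and writing-aid tasks."""
    ref_tokens = set(reference.lower().split())
    out_tokens = set(output.lower().split())
    return len(ref_tokens & out_tokens) / len(ref_tokens) if ref_tokens else 0.0

def evaluate(model: Callable[[str], str], tasks: list[Task]) -> dict[str, float]:
    # One score per task, never a single collapsed benchmark number.
    return {
        task.name: sum(task.metric(model(x), ref) for x, ref in task.examples)
        / len(task.examples)
        for task in tasks
    }

if __name__ == "__main__":
    tasks = [
        Task("entity_extraction",
             [("Mayor Jansen spoke in Utrecht on Monday.", "Jansen; Utrecht")],
             exact_match),
        Task("summarization",
             [("The council voted to expand bike lanes across the city centre.",
               "Council expands bike lanes")],
             token_overlap),
    ]
    echo_model = lambda prompt: prompt  # stand-in for a real LLM call
    print(evaluate(echo_model, tasks))
```

The design point: adding a new newsroom use case means adding a Task with a suitable metric, not redefining one monolithic benchmark.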
One of the conclusions was that a single benchmark might not be realistic for journalism, due to the wide range of use cases genAI can have in this domain, from information extraction to research and writing aid, and beyond.
Through workshops with journalists, design guidelines for LLM benchmarks (a "benchmark cookbook") were established that respect journalistic values and are especially suited to news work. In the process, challenges were identified around generalizability, data & resource access, and validity.
📊 What would journalism-specific benchmarks for LLMs look like?
This question is one of many that @ndiakopoulos.bsky.social, long-time member of our lab, has been working on with colleagues at Northwestern University and the Computational Journalism Lab.
Here are some takeaways from his work 🧵
We hope her research sounds some alarm bells in the conversation around XAI in journalism!
👉 Read more about Jasmin's work: jasminkareem.github.io
Having spoken to developers from state broadcasters, newspapers, media conglomerates, and even news aggregators, and building on research by our lab member @hannescools.bsky.social, Jasmin found that explainable AI is still not a priority in journalism.
🗞️ Explainable recommenders are not very popular with news engineers.
@jasminkareem.bsky.social – joint PhD student at @tue.nl and @uva.nl – visited us to share research on how recommender system engineers across different types of news organizations are not very keen on adopting XAI practices.
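For context on what is being declined here: an explainable recommender surfaces a human-readable reason alongside each recommendation. A toy sketch of the idea, with entirely hypothetical data and logic:

```python
# Toy sketch of an explainable news recommender: score candidate articles
# by topic overlap with a user's reading history, and attach a readable
# reason to each recommendation. Purely illustrative, not any lab's system.

from dataclasses import dataclass

@dataclass
class Article:
    title: str
    topics: set[str]

def recommend(history: list[Article], candidates: list[Article], k: int = 2):
    interests = set().union(*(a.topics for a in history)) if history else set()
    scored = []
    for art in candidates:
        shared = art.topics & interests
        scored.append((len(shared), art, shared))
    scored.sort(key=lambda t: t[0], reverse=True)
    # The explanation is produced alongside the ranking, not bolted on later.
    return [
        (art.title,
         f"recommended because you read about {', '.join(sorted(shared)) or 'nothing similar'}")
        for _score, art, shared in scored[:k]
    ]

if __name__ == "__main__":
    history = [Article("Bike lanes expand", {"mobility", "city council"})]
    candidates = [
        Article("Council debates parking fees", {"city council", "budget"}),
        Article("Local team wins derby", {"sports"}),
    ]
    for title, reason in recommend(history, candidates):
        print(title, "->", reason)
```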
What is data slicing and what are its implications for machine learning?
📄 Our member @annaschjoett.bsky.social discusses this practice in her new publication in the Cambridge Forum on AI, informed by her fieldwork with data scientists at the BBC developing news recommenders:
doi.org/10.1017/cfc....
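For readers new to the term: in machine learning evaluation, slicing commonly means partitioning a dataset into subgroups and checking performance per slice, since an aggregate metric can mask failures on specific groups. Here is a minimal sketch under that reading (whether it matches the exact practice observed at the BBC is our assumption); the column names and numbers are invented:

```python
# Minimal sketch of slice-based evaluation on a toy news dataset.
# Column names, slices, and values are invented for illustration.

import pandas as pd

def accuracy(df: pd.DataFrame) -> float:
    return float((df["prediction"] == df["label"]).mean())

def evaluate_slices(df: pd.DataFrame, slice_column: str) -> dict[str, float]:
    # One metric per subgroup, so weak slices stay visible.
    return {str(name): accuracy(group) for name, group in df.groupby(slice_column)}

if __name__ == "__main__":
    df = pd.DataFrame({
        "section":    ["politics", "politics", "sports", "sports", "culture"],
        "label":      [1, 0, 1, 1, 0],
        "prediction": [1, 0, 0, 0, 0],
    })
    print("overall accuracy:", accuracy(df))             # 0.6: looks acceptable
    print("per-slice:", evaluate_slices(df, "section"))  # sports slice fails: 0.0
```

The implication is the one the thread raises: decisions about which slices to look at shape which model failures ever become visible.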
Natali will also be at the Milton Wolf Seminar on Media & Democracy, organized by @asc.upenn.edu, for a panel on techno-feudalism: Big Tech companies as "digital lords" extracting value from user activity and from dependence on their platforms.
📅 14 April | Vienna
👉 More info: tinyurl.com/mvp86vjd
On AI in journalism, Natali is speaking at the International Journalism Festival, along with experts from industry, academia, and governance, discussing the need for responsible implementation of AI in newsrooms.
📅 16 April | 15:00 | Live-streamed & recorded!
👉 Tune in: tinyurl.com/4zwn9bp6
🗞️ What is the future of journalism in the era of AI, and how do software giants extract value from users?
These are the topics of two panels our lab director @natalihelberger.bsky.social is speaking at in the upcoming weeks 🗓️
🖋️ The winners of the IViR SF & Information Law writing competition, co-organized by @kimonkieslich.bsky.social and our director @natalihelberger.bsky.social, will be announced soon at #CPDP2026! Until then, stay tuned for the shortlisted pieces coming out on the DigiCon blog:
👇
Last week for submissions to the Natali Helberger award from the Public Tech Media Lab at @uwmadison.bsky.social, founded by our former colleague @tomasdodds.bsky.social!
👉 For PhD students working on public interest tech in journalism, read more on applying:
ptml.sjmc.wisc.edu/natali-helbe...
With this, we wrap up our reporting from the 2026 ELSA Network day, but stay tuned for a more all-encompassing summary of lab activities in the upcoming ELSA magazine this winter! 🗞️
Across projects mentioned by participants (in energy, sustainability, and public safety), impact often proved limited to awareness-raising. We need to give nature a voice, diversify research outputs beyond papers, employ participatory methods, and find fair ways to give back to participants.
🌱 Towards a Quintuple Helix approach – led by Manel Slokom and Sanne Vrijenhoek: This approach broadens impact thinking beyond government, industry, academia, and civil society to explicitly include the environment, asking not just "who to involve" but "where is our system misaligned?"
Some lessons ELSA researchers have learned are to stress the need for open discussion spaces, to have designated coordination roles (e.g. PIs, institutional support), to keep a balanced distance from stakeholders, and to push for publishing critical work despite funding and business interests.
💥 Epic ELSA failures & lessons learned – led by @laurensnaudts.bsky.social: Past lessons show how hard it is to question AI and technology while operating inside pre-defined academic, economic, and stakeholder structures. We need to avoid ethics-washing, misrepresentation, and weak communication.
A more negative aspect to keep in mind is that some stakeholders may have goals or agendas that conflict with the philosophy of ELSA research: in those situations, trusting one's "gut feeling" is essential, along with thinking ahead to possible risks and consequences.
🔥 Connecting meaningfully with stakeholders – led by Sophie Morosoli:
Engagement requires respect, clear mutual expectations, and openness to diverse forms of knowledge, including citizens' and indigenous knowledge. It's important to foster structured interaction and to give back by sharing results.
▶️ More on the ELSA way: stakeholder connections, learning from mistakes, and giving nature a voice
These are the last few takeaways from the round-tables held at the ELSA Network Day in February, where participants from AI labs across the Netherlands shared their experiences! 🧵
Thus, as a society, we must continue to think through these problems by settling on granular boundaries and definitions of acceptable AI use.
💭 Have thoughts on this? Share them with us!
& Read more on the project: www.admscentre.org.au/generative-a...
From this, normative issues arise: does establishing content provenance in this way play the positive role we think it does? Research from within our lab shows that, at least in journalism, disclosures of AI use actually tend to decrease audience trust.