
Posts by AI, Media & Democracy Lab

Events – AI, Media & Democracy Lab

Our session will be led by @natalihelberger.bsky.social and @ferraribraun.bsky.social, and will include space for co-creating ideals, as well as a practical case study of the pilot currently running in the Dutch research sector trialling a transition to Nextcloud software.
πŸ‘‰ www.aim4dem.nl/events/

2 days ago

Join our workshop: "Building a Shared Vision on Future Digital Infrastructures"

Our session is part of @iasamsterdam.bsky.social's 10th anniversary, exploring how academia can transition to Big Tech alternatives. 🧡 More info & sign-up link below!

πŸ“… 19 May | 10:00 - 12:00
πŸ“ Oude Turfmarkt 145-147

2 days ago
AI Hype, Hope and Humanity is a three-day conference bringing together researchers, policymakers, practitioners, and societal partners working on AI.

πŸ“£ Open call from the AI Hype, Hope & Humanity conference organized by the ELSA network!

Are you a researcher, practitioner, or anything in between, working on human-centered AI and related aspects of societal resilience? Come share your ideas!

πŸ‘‰ Submit by 24 May: www.eur.nl/en/events/ai...

1 week ago

πŸ“… And here's last month's edition for a preview of what you can expect: sh1.sendinblue.com/aib4sbhdolpf...

1 week ago
Newsletter – AI, Media & Democracy Lab

πŸ“© Are you signed up for our newsletter?
It's got the Lab's latest updates: publications, upcoming events, and inspiring articles to get you thinking about how AI, media, and democracy are evolving in tandem.

πŸ‘‰ Subscribe here, just in time for the April edition: www.aim4dem.nl/newsletter/

1 week ago
What is the impact of AI on society? – AI, Media & Democracy Lab

🌐 How is AI changing society?

@natalihelberger.bsky.social was a speaker at AI and the Future of News 2026, organized by @reutersinstitute.bsky.social, as part of an engaging panel on the impact of AI from perspectives of law, economics, policy, and security.

πŸ‘‰ Read more: tinyurl.com/4j8eetpu

2 weeks ago

"The biggest threat to democracy is not AI. But anti-democratic forces, often democratically elected. And yes, they may leverage AI in their strategies"

My opening at the ADD AI Summit
Inspiring, depressing, occasionally uplifting summit

@ddc-sdu.bsky.social
@algosoc.org
@aimediademlab.bsky.social

2 weeks ago
The Journalism Benchmark Cookbook: A Template for Benchmarking LLMs in Newsrooms. Our approach to creating a community-oriented AI benchmark for journalism.

So further work in this area needs a more granular focus on task-specific evaluations, to capture the many diverse needs and workflows of journalists.

πŸ‘‡οΈ See more on the Journalism Benchmark Cookbook:
generative-ai-newsroom.com/the-journali...

2 weeks ago

One of the conclusions was that a single benchmark might not be realistic for journalism, due to the large number of use cases genAI can have in this domain, from information extraction to research and writing aid, and beyond.

2 weeks ago

Through workshops with journalists, some design guidelines for LLM benchmarks (a "benchmark cookbook") were established that respect journalistic values and are especially suited to news work. In the process, challenges were identified around generalizability, data & resource access, and validity.

2 weeks ago

πŸ“Š What would journalism-specific benchmarks for LLMs look like?

This question is one of many that @ndiakopoulos.bsky.social, long-time member of our lab, has been working on with colleagues at Northwestern University and the Computational Journalism Lab.

Here are some takeaways from his work 🧡

2 weeks ago
About me

We are hoping her research might sound some alarm bells as part of the conversation around XAI in journalism!

πŸ‘‰ Read more about Jasmin's work: jasminkareem.github.io

3 weeks ago

Having spoken to developers from state broadcasters, newspapers, media conglomerates, and even news aggregators, and building on research by our lab member @hannescools.bsky.social, Jasmin found that explainable AI is still not a priority in journalism.

3 weeks ago

πŸ‘ŽοΈ Explainable recommenders are not very popular with news engineers.

@jasminkareem.bsky.social β€” joint PhD student at @tue.nl and @uva.nl β€” visited us to share research on how recommender system engineers across different types of news organizations are not very keen on adopting XAI practices.

3 weeks ago
Slicing the past to predict the future: Recasting data slicing as curatorial work in ML development and evaluation | Cambridge Forum on AI: Culture and Society, Volume 2 | Cambridge Core

What is data slicing and what are its implications for machine learning?
πŸ‘‰ Our member @annaschjoett.bsky.social discusses this practice in her new publication in the Cambridge Forum on AI, informed by her fieldwork with data scientists at the BBC developing news recommenders:
doi.org/10.1017/cfc....

3 weeks ago
2026 Seminar Tech Juggernauts: AI, Freedom of Expression, and Shifting Geopolitical Alliances

Natali will also be at the Milton Wolf Seminar on Media & Democracy organized by @asc.upenn.edu for a panel on techno-feudalism: discussing Big Tech companies as "digital lords", extracting value from user activity and dependence on platforms.
πŸ“ 14 April | Vienna
πŸ‘‰ More info: tinyurl.com/mvp86vjd

3 weeks ago
What is the future for journalism in the era of AI? International Journalism Festival

On AI in journalism, Natali is speaking at the International Journalism Festival alongside experts from industry, academia, and governance, discussing the need for responsible implementation of AI in newsrooms.

πŸ“ 16 April | 15:00 | Live-streamed & recorded!
πŸ‘‰ Tune in: tinyurl.com/4zwn9bp6

3 weeks ago

πŸŽ™οΈ What is the future of journalism in the era of AI, and how do software giants extract value from users?

These are the topics of two panels our lab director @natalihelberger.bsky.social is speaking at in the upcoming weeks πŸ‘‡οΈ

3 weeks ago
Call for Tales for the 4th edition of the IViR “Science Fiction & Information Law” writing competition! This is the fourth call for tales for the IViR Science Fiction & Information Law Writing Competition. DigiCon's Sci-fi Team is again partnering with IViR and CPDP for this stellar competition.…

πŸ†οΈ The winners of the IViR SF & Information Law writing competition, co-organized by @kimonkieslich.bsky.social and our director @natalihelberger.bsky.social, will be announced soon at #CPDP2026! Until then, stay tuned for the shortlisted pieces coming out on the DigiCon blog:

4 weeks ago
Natali Helberger Award The Natali Helberger Award recognizes doctoral students who have conducted research through interdisciplinary collaboration that advances a Public Interest Technology (PIT) perspective in…

πŸ“… Last week for submissions to the Natali Helberger award from the Public Tech Media Lab at @uwmadison.bsky.social, founded by our former colleague @tomasdodds.bsky.social!

πŸ‘‰ For PhD students working on public interest tech in journalism, read more on applying:
ptml.sjmc.wisc.edu/natali-helbe...

4 weeks ago

With this, we wrap up our reporting from the 2026 ELSA Network Day, but stay tuned for a more all-encompassing summary of lab activities in the upcoming ELSA magazine this winter! πŸ“šοΈ

1 month ago

Across projects mentioned by participants (within energy, sustainability, public safety), impact often proved limited to awareness-raising. We need to give nature a voice, diversify research outputs beyond papers, employ participatory methods, and find fair ways to give back to participants.

1 month ago

🌱 Towards a Quintuple Helix approach β€” led by Manel Slokom and Sanne Vrijenhoek: This approach broadens impact thinking beyond government, industry, academia, and civil society to explicitly include the environment, asking not just β€œwho to involve” but β€œwhere is our system misaligned?”

1 month ago

Some lessons ELSA researchers have learned: stress the need for open discussion spaces, designate coordination roles (e.g. PIs, institutional support), keep a balanced distance from stakeholders, and push to publish critical work despite funding and business interests.

1 month ago

πŸ’₯ Epic ELSA failures & lessons learned β€” led by @laurensnaudts.bsky.social: Past lessons show how hard it is to question AI and technology while operating inside pre-defined academic, economic, and stakeholder structures. We need to avoid ethics-washing, misrepresentation, and weak communication.

1 month ago

Another, more negative, aspect to keep in mind is that some stakeholders may have goals or agendas that conflict with the philosophy of ELSA research: in those situations, trusting one's "gut feeling" is essential, along with thinking ahead to possible risks and consequences.

1 month ago

πŸ‘₯ Connecting meaningfully with stakeholders β€” led by Sophie Morosoli:
Engagement requires respect, clear mutual expectations, and openness to diverse forms of knowledge, including citizens’ and indigenous knowledge. It's important to foster structured interaction, and give back by sharing results.

1 month ago

▢️ More on the ELSA way: stakeholder connections, learning from mistakes, and giving nature a voice
These are the last few takeaways from the round tables held at the ELSA Network Day in February, where participants from AI labs across the Netherlands shared their experiences! 🧡

1 month ago
Generative Authenticity – ADM+S Centre. Examining the assumptions and community impacts of proposed solutions to the problem of authenticity in Generative AI and exploring novel technical responses that contribute to more responsible,…

Thus, as a society, we must continue to think through these problems by settling on granular boundaries and definitions of acceptable AI use.

πŸ‘‰ Have thoughts on this? Share them with us!

& Read more on the project: www.admscentre.org.au/generative-a...

1 month ago

From this, normative issues arise: does establishing content provenance in this way play the positive role we think it does? Research from within our lab shows that, at least in journalism, disclosures of AI use actually tend to decrease audience trust.

1 month ago