Posts by Luke Thorburn
🧵 We are likely to need better democratic systems to navigate the challenges posed by AI advances.
Today we're releasing the Democratic Capabilities Gap Map to show what needs to happen to make better democracy a reality.
📆save the date - Build Peace is back!
We’re excited to share that Build Peace 2026 will take place November 13–16, co-organised with @uwaterloo.ca 🎉
More details on the way - conference themes & ways to get involved
@abualfatah.bsky.social @lukethorburn.com @allancheboi.bsky.social
Here is the syllabus from the most recent iteration of my Stanford Law School class on platform regulation.
docs.google.com/document/d/1...
1/
🚨New preprint and our results are rather concerning..
We find a "boiling frog" effect in AI use. Using large-scale RCTs, we provide *causal* evidence that AI assistance reduces persistence and hurts independent performance.
And these effects emerge after just 10–15 minutes of AI use!
1/
Game theory does not have a way to distinguish "conflict" from "competition." I think the difference is whether people are using destructive moves (e.g. murder) to win the game. Such moves are defections in the meta-game of peace and security.
Could social media make us less polarized instead of more?
We tested 5 algorithms on 3 platforms with 10,000 people for 6 months during the 2024 election, and found that the answer is yes.
🧵
🧑💻 New paper at #chi2026 w @lorenzspreen.eurosky.social and @stefanherzog.bsky.social
Are you worried about how social media algorithms affect people’s beliefs? We are, so we tested engagement-based ranking algorithms against alternatives in a pre-reg’d collaborative filtering experiment... 🧵
I'm pretty sure the CJEU in Russmedia just casually dismantled 80% of the DSA with one ill-considered judgment. As I read it, it substitutes in GDPR rules -- not notice and takedown rules -- for any platform where a user is likely to post content about people. 1/
curia.europa.eu/juris/docume...
AI presents a fundamental threat to our ability to use polls to assess public opinion. Bad actors who are able to infiltrate panels can flip close election polls for less than the cost of a Starbucks coffee. Models will also infer and confirm hypotheses in experiments. Current quality checks fail.
💥My report out now💥
📗 *VLOPs - how big is your Polarization Footprint? Towards a metric to give EU citizens transparency about an online systemic risk driving conflict in our societies*
Connecting #polarizationfootprint concept to systemic risk framework of Digital Services Act (#DSA)
Is social media dying? How much has Twitter changed as it became X? Which party now dominates the conversation?
Using nationally representative ANES data from 2020 & 2024, I map how the U.S. social media landscape has transformed.
Here are the key take-aways 🧵
arxiv.org/abs/2510.25417
GreenEarth is creating open source AI-driven recommender infrastructure for BlueSky. Type a prompt, see your feed change. We are here for the users, the builders, the dreamers. Join us.
greenearthsocial.substack.com/p/introducin...
Macron's remarks are notable. Some quotes: "We have been incredibly naive in entrusting our democratic space to social networks that are controlled either by large American entrepreneurs or large Chinese companies, whose interests are not at all the survival or proper functioning of our democracies."
🚨 PhD Position at the University of Amsterdam 🚨
Join my team as a computer scientist / computational social scientist working on LLMs, social media, and politics.
We offer freedom, impact, and an inspiring environment at one of Europe's leading universities.
🔗 werkenbij.uva.nl/en/vacancies...
In the literature, there are two competing explanations for "echo chambers":
1️⃣ Algorithms curate what we see (“filter bubbles”)
2️⃣ People choose like-minded peers (“selective exposure”)
Our new study suggests something surprising:
both explanations might be wrong. 🧵
arxiv.org/abs/2508.10466
Ever since I started thinking seriously about AI value alignment in 2016–17, I've been frustrated by the inadequacy of utility+RL theory to account for the richness of human values.
Glad to be part of a larger team now moving beyond those thin theories towards thicker ones.
Thanks to co-organizers @jasonburton.bsky.social,
@naomishiffman.bsky.social, and @jbakcoleman.bsky.social!
The whole conference looks great this year too! Talks from Cory Doctorow, Kate Starbird, + Glen Weyl; a workshop on futarchy w. Robin Hanson (straight after ours); and lots of papers on using AI to scaffold human coordination and collective decision-making.
ci.acm.org/2025/
🔔
Often our tech policy interventions operate from linear, top-down assumptions that don't account for the complexity they seek to govern.
To dig into this I'm co-organizing a workshop at ACM CI 2025 with Jason Burton, Joe Bak-Coleman, + Naomi Shiffman. You should come!
ci-x-tp.github.io
Proposals for Build Peace (arguably the main conference on digital peacebuilding) are now open. This year it's near Barcelona in November. Consider applying!
howtobuildpeace.org/attend-the-conference/re...
🚨 WEBINAR ALERT 🚨 Join KGI on March 25th for a live discussion on designing algorithmic feeds that put people first. As legislation and litigation around algorithms heat up, it’s never been more important to learn how they can be improved.
EVENT: Join us for Artificial Intelligence and Democratic Freedoms on April 10-11 at
@columbiauniversity.bsky.social & online. Hosted with Senior AI Advisor @sethlazar.org. Co-sponsored by the Knight Institute & @columbiaseas.bsky.social. Panel info in 🧵. RSVP: knightcolumbia.org/events/artif...
Connected by Data are seeking examples of participatory digital governance to map out such projects around the world.
connectedbydata.org/blog/2025/03/05/particip...
This was very much a joint effort with Andrew Konya, Wasim Almasri, Oded Adomi Leshem, Ariel Procaccia, Lisa Schirch, Michiel Bakker @mbakker.bsky.social, and many others.
Looking forward to seeing these kinds of technologies mature!
Link for the full paper below, which documents the whole process, including all the ethical precautions we took.
arxiv.org/abs/2503.01769
This level of agreement is particularly noteworthy because, at the beginning of the process, the substrate of trust that makes dialogue (and Track II diplomacy) possible among peacebuilders in the region had (understandably) grown fragile.
The process resulted in a joint letter to the international community with a set of five demands, each of which has at least 90% support from participants on each 'side'.
In April – July 2024, in collaboration with the Alliance for Middle-East Peace (ALLMEP), we conducted a series of online collective dialogues with civil society peacebuilders in Israel and Palestine, and used LLMs and bridging-based ranking to surface ideas that had broad support across groups.
A diagram of the process used in the paper that this thread announces. The effort consisted of four dialogue cycles with peacebuilders: three uninational dialogue cycles conducted respectively with Israeli Jews, Palestinians from the West Bank and Gaza, and Palestinian citizens of Israel, followed by a final joint dialogue involving all three groups. Each dialogue cycle involved a process to find common ground among participants, followed by a deliberation on the results of that process. The common ground process started with a collective dialogue; then bridging-based ranking was used to identify common ground 'bridging statements', which were distilled into articulate 'collective statements' via LLM and reviewed by human experts before being shared back with participants for a final vote.
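For readers unfamiliar with bridging-based ranking: the core idea is to rank statements by support *across* groups rather than total support, so that items popular with only one group can't dominate. Here is a minimal illustrative sketch in Python — the scoring rule (minimum approval rate across groups) and all names are my own simplifying assumptions, not the exact method used in the paper.

```python
def bridging_scores(votes):
    """Score each statement by its minimum approval rate across groups.

    votes: {statement: {group_name: [0/1 approval votes]}}
    A statement only scores highly if *every* group supports it,
    which is the essence of bridging-based ranking.
    """
    scores = {}
    for statement, by_group in votes.items():
        rates = [sum(v) / len(v) for v in by_group.values()]
        scores[statement] = min(rates)
    return scores

# Hypothetical toy data: statement A has broad support in both groups,
# statement B is loved by group 1 but rejected by group 2.
votes = {
    "A": {"g1": [1, 1, 1, 0], "g2": [1, 1, 0, 1]},
    "B": {"g1": [1, 1, 1, 1], "g2": [0, 0, 1, 0]},
}
scores = bridging_scores(votes)
ranked = sorted(votes, key=scores.get, reverse=True)
# "A" ranks above "B", even though "B" has more total approvals.
```

Engagement-based ranking would favour B (7 total approvals vs A's 6); bridging-based ranking favours A because its support crosses the group divide.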
🔔 (new paper!)
You might have heard of the "Habermas Machine", an AI-human pipeline that is really good at finding common ground between ideologically diverse groups, at least in lab settings.
But can this kind of approach help in real world conflicts?