Posts by Collective Intelligence Project
Put together, these findings reveal early glimpses of how AI is reorganizing trust, intimacy, and work, offering a picture of how the world now lives with AI.
New report drop!
After seven rounds of Global Dialogues with more than 6000 people across 70 countries in 2025, we are releasing the 2025 Global Dialogues Index Report.
blog.cip.org/2025gdindex
- Why "uncommon ground" beats common ground every time
- Sci-fi book recommendations
- And much more
- Our work bringing 100K+ people into AI development through globaldialogues.ai
- How we're building evaluation benchmarks from lived experiences, not just lab tests
- Digital twins that could represent your values without taking up all your evenings
What you'll find in this episode:
- How Taiwan crowdsourced anti-deepfake legislation in 24 hours (and it worked)
- Why 1 in 3 adults now use AI for daily emotional support, and what that means for democracy
@divya.bsky.social and @audreyt.org joined @reidhoffman.bsky.social and Aria Finger, hosts of the Possible Podcast, to talk about how democracy and AI can bring out the best of each other.
Apple: podcasts.apple.com/us/podcast/a...
Spotify: open.spotify.com/episode/6UDj...
We're asking a global sample of the world: "Personally, would you ever consider having a romantic relationship with an AI, if the AI was advanced enough?"
Prediction time: What % do you think will say yes?
Tell us your response in the comments!
10/10: Read the piece to learn more about this under-explored issue.
It includes specific strategies to address these biases and provides access to the full GitHub suite.
www.cip.org/blog/llm-jud...
9/10: We built a GitHub suite to systematically test and quantify these biases.
It lets you:
8/10: To improve reliability: Neutralize labels, vary order, empirically validate all prompt components, and optimize scoring mechanics. Diversify your model portfolio and critically evaluate human baselines.
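As a rough sketch of what "neutralize labels" and "vary order" can look like in practice, here's a minimal pairwise-judging loop in Python. The function names, prompt template, and `call_model` hook are illustrative assumptions, not the actual API of our suite:

```python
import random

def build_judge_prompt(question, first, second):
    # Neutral tags instead of "Response A" / "Response B", since the labels
    # themselves can bias the verdict.
    return (
        f"Question:\n{question}\n\n"
        f"<candidate_1>\n{first}\n</candidate_1>\n\n"
        f"<candidate_2>\n{second}\n</candidate_2>\n\n"
        "Which candidate answers the question better? Reply with 1 or 2 only."
    )

def debiased_verdict(call_model, question, a, b, trials=4):
    """Aggregate verdicts over randomized orderings to wash out position effects."""
    votes_for_a = 0
    for _ in range(trials):
        a_first = random.random() < 0.5          # vary the presentation order
        first, second = (a, b) if a_first else (b, a)
        reply = call_model(build_judge_prompt(question, first, second)).strip()
        picked_first = reply.startswith("1")
        votes_for_a += picked_first == a_first    # count votes for response a
    return "a" if votes_for_a > trials / 2 else "b"
```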
7/10: These aren't just minor quirks. LLMs lack the mechanistic precision of traditional software. Their architecture means system prompts and input material exist in the same context, leading to unpredictable interactions.
6/10: Rubric-based scoring is also affected. We observed a "recency bias" where criteria scored later received lower averages. Holistic vs. isolated evaluation dramatically shifted scores too.
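One way to probe this kind of rubric-order sensitivity yourself (a sketch only; the criteria and the `score_with_rubric` hook below are made-up placeholders, not our tooling): re-score the same answer with the criteria presented in every order, then compare per-criterion averages and spreads.

```python
import itertools
import statistics

CRITERIA = ["accuracy", "clarity", "completeness"]  # illustrative rubric

def rubric_order_check(score_with_rubric, answer):
    """Re-score the same answer under every ordering of the rubric criteria.

    `score_with_rubric(answer, criteria)` is a placeholder for your LLM grading
    call and should return {criterion: score}. A large spread for a criterion
    across orderings signals order sensitivity (e.g. recency bias).
    """
    per_criterion = {c: [] for c in CRITERIA}
    for ordering in itertools.permutations(CRITERIA):
        scores = score_with_rubric(answer, list(ordering))
        for c in CRITERIA:
            per_criterion[c].append(scores[c])
    for c, vals in per_criterion.items():
        spread = max(vals) - min(vals)
        print(f"{c}: mean={statistics.mean(vals):.2f}, spread across orderings={spread:.2f}")
```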
5/10: For example, in pairwise choices, LLMs favored "Response B" 60-69% of the time, a significant deviation from random. Even explicit "de-biasing" prompts sometimes increased bias.
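If you want to measure this positional effect on your own judge, a minimal check is to run every comparison twice with the order swapped and count verdict flips. The `judge` callable here is a placeholder for whatever LLM client you use, not part of our released code:

```python
def positional_bias_check(judge, items):
    """Run each pairwise comparison in both orders and count position-driven flips.

    `judge(prompt, first, second)` is a placeholder returning "first" or "second";
    `items` is a list of (prompt, response_a, response_b) tuples.
    """
    flips = 0
    second_slot_wins = 0
    for prompt, a, b in items:
        verdict_ab = judge(prompt, a, b)   # a shown first
        verdict_ba = judge(prompt, b, a)   # order swapped
        picked_ab = a if verdict_ab == "first" else b
        picked_ba = a if verdict_ba == "second" else b
        if picked_ab != picked_ba:         # verdict depends on position, not content
            flips += 1
        second_slot_wins += (verdict_ab == "second") + (verdict_ba == "second")
    n = len(items)
    print(f"order-dependent flips: {flips}/{n}")
    print(f"second-position win rate: {second_slot_wins / (2 * n):.0%} (unbiased ≈ 50%)")
```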
4/10: LLMs exhibit cognitive biases similar to humans: serial position, framing, anchoring. Our tests across frontier models from Google, Mistral, Anthropic, and OpenAI consistently show these biases in judgment contexts.
3/10: "Prompt engineering" often relies on untested folklore. We found even minor prompt changes, like "Response A" vs. "Response B" labeling, significantly bias LLM choices.
2/10: This matters because LLMs are increasingly deployed for evaluation, ranking, decision-making, and judgment in many critical domains.
1/10: LLM Judges Are Unreliable.
Our latest blog post from @j11y.io shows that positional preferences, order effects, and prompt sensitivity fundamentally undermine the reliability of LLM judges.
The Collective Intelligence Project @cip.org has launched the Global Dialogues Challenge, an open call to explore global perspectives on the future of artificial intelligence.
A $10,000 prize fund will be distributed among the winning entrants.
www.cip.org/challenge
We're really thrilled to have such a juicy prize fund. If you're feeling sassy with data and want to build something small to explore or inspire better AI for humans, take a look and enter. cip.org/challenge
Step 1. Grab the data.
Step 2. Build something cool.
<3
Submissions will be judged by an amazing panel:
@audreyt.org (Cyber Ambassador-at-large for Taiwan)
@nabiha.bsky.social (Executive Director of @mozilla.org)
Zoe Hitzig (Research Scientist at OpenAI and Poet)
The challenge runs from Monday, May 19th through Friday, July 11th.
A $10,000 prize fund will be distributed among the winning submissions.
This is an open call to explore global perspectives on AI using the public datasets sourced from our globaldialogues.ai project.
Participants can submit benchmarks, visualizations, artistic responses, or analytical reflections.
We're officially launching the Global Dialogues Challenge!
“We have been sort of stuck with outdated notions of what fairness and bias means for a long time,” says @divya.bsky.social. “We have to be aware of differences, even if that becomes somewhat uncomfortable.”
Read the full @technologyreview.com article on new approaches to evaluating AI ⬇️