Last year we shared that, with support from Google.org, we would be releasing an improved version of our AI red teaming app under an open source software license. Since then, a lot of work has gone into moving the project forward.
Read more: humane-intelligence.org/post/updates...
Where does mental health fit in how we design and evaluate AI systems? As AI becomes more embedded in daily life and work, its psychological impact—on both users and the people behind these systems—is becoming harder to ignore.
Read our recent blog post: humane-intelligence.org/post/mental-...
We’re excited to welcome Amy Sample Ward, Amy S., and Abby V. to the Humane Intelligence Board of Directors, and Dr. Lauren Damme and Dr. Diego Garcia-Olano to our Advisory Group.
Read our latest blog post to get to know them: humane-intelligence.org/post/our-new...
We recently facilitated an AI evaluation workshop at the Tribal Innovation Summit. Through adversarial stress testing, participants exposed vulnerabilities in AI models and assessed them against Indigenous data governance principles.
See the report under "Research."
humane-intelligence.org/insights/
A recent report from the National Institute of Standards and Technology explores how to assess real-world risks of AI systems by combining model testing, red teaming, and field testing to understand how AI systems behave in use. We're proud to have contributed to this work.
nvlpubs.nist.gov/nistpubs/ai/...
On April 14, we’re joining a conversation on the sidelines of the 59th Session of the Commission on Population and Development.
Mala Kumar and Annie Brown will contribute to a discussion on ontologies, generative AI, and technology-facilitated gender-based violence.
Register: lnkd.in/gpujupu9
We are pleased to share the Indigenized Adversarial Stress Testing Framework, developed by Emergence Circle, with the support of @humaneintelligence.bsky.social!
This project has tested AI systems against Indigenous values and community priorities, surfacing potential harms before deployment or adoption.
How do you know if an AI system performs well in a real-world use case? Evaluating AI isn’t just about accuracy or benchmarks; it’s about understanding context, scope, and coverage: what’s being tested, what’s missing, and how those gaps shape our conclusions.
humane-intelligence.org/post/knowled...
This Women’s History Month, we wanted to share a more personal look at the women of HI: their inspirations, unexpected talents, and the experiences that shaped their paths. A reminder that building better technology ultimately starts with understanding people.
humane-intelligence.org/post/the-wom...
Last year, we partnered with the CoNA Lab at Virginia State University and Valence AI to launch the Global Accessibility Bias Bounty. In our blog post, we discuss why this work matters and how to evaluate and build in a way that's inclusive.
humane-intelligence.org/post/reflect...
Humane Intelligence is growing, and we are excited to share updates to our Board of Directors and Advisory Group.
We welcome Admas Kanyagia and Benjamin Kinsella to our Board, and Michael Zargham and Kaitlin Thaney to our Advisory Group.
humane-intelligence.org/post/our-new...
Join Humane Intelligence and Tattle Civic Technologies at the India AI Impact Summit.
Our main event, Evaluations and Open Source Software for AI for Social Good at Scale, will be on February 20, 2026, 1:30–2:25 PM IST. The session is free; sign-up is required.
impact.indiaai.gov.in/registration
We’re still welcoming submissions from interested organizations for our paid opportunity related to the AI red teaming app. Applications are reviewed on a rolling basis, and the deadline to apply is February 6, 2026.
Details and the submission form are below. Thanks for helping us spread the word.
In a new guest blog authored by Humane Intelligence volunteers Károly Boczka and Nkechika Ibe, we explore how large language models struggle with bias, distortion, and hallucination in low-resource languages, and why this is becoming a governance issue.
humane-intelligence.org/post/ai-gove...
As the SDGs near 2030, what comes after? In this blog post, we explore how generative AI and ontologies could provide a more connected, flexible lens on human development beyond today’s SDG taxonomy.
humane-intelligence.org/post/ontolog...
As AI systems scale, the physical infrastructure behind them matters more than ever.
In a guest blog post, we explore how the concentration of data centers and compute power is reshaping geopolitics, governance, and AI evaluation practices.
humane-intelligence.org/post/changin...
Humane Intelligence has launched an expression of interest to hire an engineering firm for the backend Python development of our OSS AI red teaming app (paid opportunity).
Full description: docs.google.com/document/d/1...
Submit your interest here: docs.google.com/forms/d/e/1F...
Thank you!
As we close out 2025, we’re reflecting on a pivotal year for Humane Intelligence. From major funding milestones to new strategy, expanded bias bounties, red teaming work, and global partnerships, it was a year of growth. Read our full year-in-review:
humane-intelligence.org/post/humane-...
We're joining forces with @humaneintelligence.bsky.social to move their bias bounty program onto Zindi, a data science platform with users in more than 185 countries, with @heisingsimonsfdn.bsky.social's support.
Read the announcement here: humane-intelligence.org/post/announc...
Great news! Humane Intelligence has received funding from Google.org to accelerate the release of an open source AI red teaming application. This support will help expand access to participatory AI evaluations worldwide. Learn more:
humane-intelligence.org/post/ai-red-...
We’re partnering with Radiant Earth and Zindi, with support from the Heising-Simons Foundation, to bring our bias bounty program to Zindi’s global data science platform!
Learn more:
humane-intelligence.org/post/announc...
Following our 2024 TFGBV red-teaming work with UNESCO, our Playbook on red teaming AI for social good is now available in French and Spanish! Sharing as 16 Days of Activism begins.
EN: unesdoc.unesco.org/ark:/48223/p...
FR: unesdoc.unesco.org/ark:/48223/p...
ES: unesdoc.unesco.org/ark:/48223/p...
We are seeking volunteers to help redesign the user interface of our TFGBV taxonomy and ontology website.
We welcome contributions at any level, whether that is proposing new workflows, creating wireframes or prototypes, or building a front end.
Learn more: humane-intelligence.org/get-involved...
AI in public health remains one of the most overlooked areas in the current wave of AI investment. In her op-ed for Security Brief TechDay, Mala Kumar explains how Humane Intelligence is working to address it through our new AI in Public Health Working Group.
securitybrief.com.au/story/why-ai...
Today is the last day to submit a response to our bias bounty on accessibility!
Please find here all relevant info: humane-intelligence.org/programs-ser...
According to a recent CNBC article, people with ADHD, autism, and dyslexia report that AI assistants are helping them thrive at work.
That’s exactly what our Bias Bounty 4 is all about!
- Submissions close this Friday!
- Join here: lnkd.in/gfdtHTij
- Read the article: lnkd.in/erq96y-8
Announcing a new working group on AI in Public Health!
Humane Intelligence is collaborating with Rumi Chunara, who leads New York University’s Center for Health Data Science, to launch a new working group focused on exploring the role of AI in public health.
humane-intelligence.org/post/join-ou...
Deadline extended! Submissions for the Accessibility Bias Bounty Challenge are now due November 14 at 11:59 PM ET.
With Design and Data Science tracks and a $6,000 prize pool, participants are invited to build AI tools that prioritize impacted communities.
humane-intelligence.org/programs-ser...
There’s still time to join Bias Bounty 4: Accessibility in Digital Conferencing Facilities!
- Design and Data Science tracks
- $6,000 prize pool
- Contribute to a growing community of practice solving technical bias challenges
Submissions close 11/7.
humane-intelligence.org/programs-ser...