
Posts by Humane Intelligence

OSS AI red teaming app release updates: Read about OSS AI red teaming app updates, including our design, front-end, and backend work, as well as how to get involved.

Last year we shared that with support from Google.org, we would be releasing an improved version of our AI red teaming app under an open source software license. Since then, a lot of work has gone into moving the project forward.

Read more: humane-intelligence.org/post/updates...

1 hour ago
Mental Health as a Hidden Cost of AI Systems: A blog post about why mental health must be treated as a core consideration in AI design and evaluation.

Where does mental health fit in how we design and evaluate AI systems? As AI becomes more embedded in daily life and work, its psychological impact—on both users and the people behind these systems—is becoming harder to ignore.

Read our recent blog post: humane-intelligence.org/post/mental-...

5 days ago

We’re excited to welcome Amy Sample Ward, Amy S., and Abby V. to the Humane Intelligence Board of Directors, and Dr. Lauren Damme and Dr. Diego Garcia-Olano to our Advisory Group.

Read our latest blog post to get to know them: humane-intelligence.org/post/our-new...

1 week ago

We recently facilitated an AI evaluation workshop at the Tribal Innovation Summit. Through adversarial stress testing, participants exposed vulnerabilities in AI models and assessed them against Indigenous data governance principles.

See the report under "Research."
humane-intelligence.org/insights/

1 week ago

A recent report from the National Institute of Standards and Technology explores how to assess real-world risks of AI systems by combining model testing, red teaming, and field testing to understand how AI systems behave in use. We're proud to have contributed to this work.
nvlpubs.nist.gov/nistpubs/ai/...

2 weeks ago

On April 14, we’re joining a conversation on the sidelines of the 59th Session of the Commission on Population and Development.

Mala Kumar and Annie Brown will contribute to a discussion on ontologies, generative AI, and technology-facilitated gender-based violence.

Register: lnkd.in/gpujupu9

2 weeks ago

We are pleased to share the Indigenized Adversarial Stress Testing Framework, developed by Emergence Circle, with the support of @humaneintelligence.bsky.social!

This project has tested AI systems against Indigenous values, community priorities, and potential harms before deployment or adoption.

1 month ago

How do you know if an AI system performs well in a real-world use case? Evaluating AI isn’t just about accuracy or benchmarks, but about understanding context, scope, and coverage: what’s being tested, what’s missing, and how those gaps shape our conclusions.

humane-intelligence.org/post/knowled...

3 weeks ago
The Women of Humane Intelligence: Stories Behind the Work - Humane Intelligence. A blog post about the great women of Humane Intelligence.

This Women’s History Month, we wanted to share a more personal look at the women of HI: their inspirations, unexpected talents, and the experiences that shaped their paths. A reminder that building better technology ultimately starts with understanding people.
humane-intelligence.org/post/the-wom...

1 month ago
Reflecting on the Accessibility Bias Bounty - Humane Intelligence. Learn about our video conferencing accessibility bias bounty (data science challenge), including lessons learned and winner insights.

Last year, we partnered with the CoNA Lab at Virginia State University and Valence AI to launch the Global Accessibility Bias Bounty. In our blog post, we discuss why this work matters and how to evaluate and build in a way that's inclusive.
humane-intelligence.org/post/reflect...

1 month ago
Our New Members of the Humane Intelligence Board of Directors and Advisory Group - Humane Intelligence. Humane Intelligence is excited to welcome Admas Kanyagia and Ben Kinsella to our Board of Directors, and Michael Zargham and Kaitlin Thaney to our Advisory Group.

Humane Intelligence is growing, and we are excited to share updates to our Board of Directors and Advisory Group.

We welcome Admas Kanyagia and Benjamin Kinsella to our Board, and Michael Zargham and Kaitlin Thaney to our Advisory Group.

humane-intelligence.org/post/our-new...

2 months ago

Join Humane Intelligence and Tattle Civic Technologies at the India AI Impact Summit.

Our main event, Evaluations and Open Source Software for AI for Social Good at Scale, will be held on February 20, 2026, 1:30–2:25 PM IST. The session is free; sign-up is required.

impact.indiaai.gov.in/registration

2 months ago

We’re still welcoming submissions from interested organizations for our paid opportunity related to the AI red teaming app. Applications are reviewed on a rolling basis, and the deadline to apply is February 6, 2026.

Details and the submission form are below. Thanks for helping us spread the word.

3 months ago
AI Governance Beyond English: Low-Resource Lessons - Humane Intelligence. This is a guest blog authored by Humane Intelligence volunteers, exploring topics related to AI […]

In a new guest blog authored by Humane Intelligence volunteers Károly Boczka and Nkechika Ibe, we explore how large language models struggle with bias, distortion and hallucination in low-resource languages and why this is becoming a governance issue.

humane-intelligence.org/post/ai-gove...

2 months ago
Ontologies, generative AI and the SDGs - Humane Intelligence. Co-authors: Mala Kumar, Annie Brown, Anthony Ware, Lizzette Soria. This is a guest blog co-authored […]

As the SDGs near 2030, what comes after? In this blog post, we explore how generative AI and ontologies could provide a more connected, flexible lens on human development beyond today’s SDG taxonomy.

humane-intelligence.org/post/ontolog...

2 months ago
Changing geopolitics from data center concentration - Humane Intelligence. This is a guest blog authored by Humane Intelligence volunteers, exploring topics related to AI […]

As AI systems scale, the physical infrastructure behind them matters more than ever.
In a guest blog post, we explore how the concentration of data centers and compute power is reshaping geopolitics, governance, and AI evaluation practices.

humane-intelligence.org/post/changin...

2 months ago

Humane Intelligence has launched an expression of interest to hire an engineering firm for the backend Python development of our OSS AI red teaming app (paid opportunity).
Full description: docs.google.com/document/d/1...
Submit your interest here: docs.google.com/forms/d/e/1F...
Thank you!

3 months ago
Humane Intelligence nonprofit - 2025 end of year wrap up - Humane Intelligence. About 2.5 years ago, a few months after I left my job as Director of […]

As we close out 2025, we’re reflecting on a pivotal year for Humane Intelligence. From major funding milestones to new strategy, expanded bias bounties, red teaming work, and global partnerships, it was a year of growth. Read our full year-in-review:
humane-intelligence.org/post/humane-...

4 months ago
Announcing Bias Bounties at Scale - Humane Intelligence. Humane Intelligence nonprofit is moving our bias bounty program onto Zindi.

We're joining forces with @humaneintelligence.bsky.social to move their bias bounty program onto Zindi, a data science platform with users in more than 185 countries, with @heisingsimonsfdn.bsky.social's support.

Read the announcement here: humane-intelligence.org/post/announc...

4 months ago
AI Red Teaming App - OSS Grant Announcement, Google.org - Humane Intelligence. Read about Humane Intelligence's plans to release our AI red teaming app under an open source software license.

Great news! Humane Intelligence has received funding from Google.org to accelerate the release of an open source AI red teaming application. This support will help expand access to participatory AI evaluations worldwide. Learn more:
humane-intelligence.org/post/ai-red-...

4 months ago
Announcing Bias Bounties at Scale - Humane Intelligence. Humane Intelligence nonprofit is moving our bias bounty program onto Zindi.

We’re partnering with Radiant Earth and Zindi, with support from the Heising-Simons Foundation, to bring our bias bounty program to Zindi’s global data science platform!
Learn more:
humane-intelligence.org/post/announc...

4 months ago

Following our 2024 TFGBV red-teaming work with UNESCO, our Playbook on red teaming AI for social good is now available in French and Spanish! Sharing as 16 Days of Activism begins.

EN: unesdoc.unesco.org/ark:/48223/p...
FR: unesdoc.unesco.org/ark:/48223/p...
SP: unesdoc.unesco.org/ark:/48223/p...

4 months ago

We are seeking volunteers to help redesign the user interface of our TFGBV taxonomy and ontology website.
We welcome contributions at any level, whether that is proposing new workflows, creating wireframes or prototypes, or building a front end.

Learn more: humane-intelligence.org/get-involved...

4 months ago
Why AI in public health needs focus, funding, and community voice: A new working group urges more funding and community input to harness AI’s potential in public health, addressing social factors and equity gaps worldwide.

AI in public health remains one of the most overlooked areas in the current wave of AI investment. In her op-ed for Security Brief TechDay, Mala Kumar explains how Humane Intelligence is working to address it through our new AI in Public Health Working Group.

securitybrief.com.au/story/why-ai...

4 months ago
Bias Bounty Challenge Set 4: Improving Accessibility in Digital Conferencing Facilities. Humane Intelligence, in collaboration with CoNA Lab and Valence AI, has launched a bias bounty challenge focused on improving accessibility for neurodivergent users in virtual meeting platforms like Z...

Today is the last day to submit a response to our bias bounty on accessibility!

Please find here all relevant info: humane-intelligence.org/programs-ser...

5 months ago

According to a recent CNBC article, people with ADHD, autism, and dyslexia report that AI assistants are helping them thrive at work.

That’s exactly what our Bias Bounty 4 is all about!

- Submissions close this Friday!
- Join here: lnkd.in/gfdtHTij
- Read the article: lnkd.in/erq96y-8

5 months ago
Join our new working group - AI in Public Health! - Humane Intelligence. Join our new AI in public health working group, in partnership with NYU's Center for Health Data Science (CHDS).

Announcing a new working group on AI in Public Health!
Humane Intelligence is collaborating with Rumi Chunara, who leads New York University’s Center for Health Data Science, to launch a new working group focused on exploring the role of AI in public health.

humane-intelligence.org/post/join-ou...

5 months ago

Deadline extended! Submissions for the Accessibility Bias Bounty Challenge are now due November 14 at 11:59 PM ET.

With Design and Data Science tracks and a $6,000 prize pool, participants are invited to build AI tools that prioritize impacted communities.

humane-intelligence.org/programs-ser...

5 months ago

There’s still time to join Bias Bounty 4: Accessibility in Digital Conferencing Facilities!

- Design and Data Science tracks
- $6,000 prize pool
- Contribute to a growing community of practice solving technical bias challenges

Submissions close 11/7.

humane-intelligence.org/programs-ser...

5 months ago