What does it mean to trust what you see online? PAI's Director of AI, Trust, and Society, @cleibowicz.bsky.social, joined the Financial Times's new documentary on deepfakes to dig into exactly that. www.ft.com/video/559af1...
Posts by Claire Leibowicz
But that's not a reason to look away from how they are being used now. The key is to address how AI tools compound harm and ensure they can also provide real support while we fight for broader change. We can, and must, do both.
I got to reunite with @saltzshaker.bsky.social to develop this work, and as we write in this piece in @techpolicypress.bsky.social, AI chatbots are NOT the solution to systemic crises rooted in inequality and inaccessibility.
Questions on the table = urgent + unresolved:
→ Should chatbots proactively continue conversations with users in crisis?
→ Should they escalate beyond providing a hotline link?
→ And more philosophically, what is the role of a technology company in the emotional and informational life of its users?
We were coming together to develop initial, shared guidance for how AI chatbots should handle high-stakes mental health interactions.
Last week, @partnershipai.bsky.social convened 50+ representatives across AI companies, mental health organizations, civil society, research, government, and, most importantly, people with lived experience, including orgs like OpenAI, Crisis Text Line, and the American Psychological Association.
🚀 6 yrs ago, we predicted deepfakes would reshape how we create, consume, and trust information. That prediction is now reality.
We've developed evidence-based recommendations to safeguard trust and dignity in the AI age.
🔐Read them here: partnershiponai.org/resource/saf...
📢 In Definition magazine, @cleibowicz.bsky.social, Head of AI and Media Integrity at Partnership on AI, shares practical steps creators can take to protect themselves and their work in an era of synthetic media and increasing creative risks.
Read more: online.bright-publishing.com/view/5220269...
Just published! I contributed to @columjournreview.bsky.social's anthology on AI in journalism alongside industry leaders. A thoughtful exploration of how newsrooms are navigating this technological shift. Thanks to @mikeananny.bsky.social + @mattdpearce.com for bringing this together!
Yes!! Check out the panel this Friday -- and if you're in Perugia, drop me a line!
This rounds out the *19*-case collection developed over the past two years: a massive effort to better understand the opportunities + mitigate the risks of synthetic media.
🔗 Read the full collection here!
syntheticmedia.partnershiponai.org#case_studies
3. How transparency signals can empower users to make informed decisions, from #Google
🔗 Read Here:
partnershiponai.org/google-frame...
2. How disclosure can limit misleading and gendered content, from #Meedan
🔗 Read Here:
partnershiponai.org/meedan-frame...
1. How synthetic media impacts elections and political content, from @code4africa.bsky.social
🔗 Read Here:
partnershiponai.org/codeforafric...
🎉 Sharing a big responsible AI reporting milestone! 🎉
Proud to announce 3 new case studies from @code4africa.bsky.social, #Google, + #Meedan implementing @partnershipai.bsky.social's Synthetic Media Framework.
Explore the collection (syntheticmedia.partnershiponai.org#case_studies) + learn:
Read the full paper here: www.arxiv.org/abs/2502.04526
And check out the important implications for how we approach AI governance broadly!
#AIPolicy #AIGovernance #deepfakes #syntheticmedia #contentauthenticity
Cross-sector collaboration works when it:
- Complements (not replaces) government regulation
- Has adequate lead time
- Builds on trusted relationships
- Combines social & technical expertise
(One day I'll write a whole playbook on this!)
On technical solutions: AI labels & transparency measures? Important but not sufficient. They need to go beyond simple "AI or not" binaries to be truly useful.
Trust emerged as the foundation.
Not just among policymakers, companies & civil society - but crucially, between these groups & the public they serve. Without trust, even good policies struggle.
Time shapes everything.
Stakeholders draw on past experiences, weigh present contexts, & consider future implications when crafting AI policies. History & foresight matter as much as current tech.
First key insight: Synthetic media isn't just a tech problem.
It's fundamentally about misrepresentation & impersonation. This reframing changes how we approach solutions.
📄New Preprint!
How does AI policy *actually* get made across sectors?
I interviewed stakeholders & studied real cases of synthetic media policy to find out. Not theoretical frameworks - real collaboration between tech, civil society, media & policymakers! 🧵
www.arxiv.org/abs/2502.04526
And as the AI agents hype shows no signs of abating, let's not forget the importance of avoiding centralization of power in "personhood" solutions and of supporting user choice.
Check out the other incredible papers selected here: fpf.org/press-releas...
Honored to be in such thoughtful company, amongst my (many!) co-authors and other award winners!
So delighted that the @futureofprivacy.bsky.social selected "Personhood Credentials: Artificial Intelligence and the Value of Privacy-Preserving Tools to Distinguish Who is Real Online" as a winner of its Privacy Papers for Policymakers Award!
arxiv.org/abs/2408.07892
Thanks @thomsonfoundation.bsky.social + @cnti.bsky.social for hosting a vibrant workshop on AI comms + news!
And for featuring a preview of my forthcoming PhD research, exploring disclosures that have, and have not, served their intended purposes in other fields (shoutout to Cali's Prop 65)!
And you should order the book! From @blackwells.bsky.social! Which has the cheapest price to the US at the moment!
blackwells.co.uk/bookshop/pro...
My dear, bluesky-less friend Isaac's book comes out tomorrow, and you can read an excerpt in @nytimes.com!
Aside from being IMMENSELY proud, I've learned so much about democracy, civic solidarity, multiculturalism, and governance that transcends borders from ISB.
www.nytimes.com/2025/01/13/o...