
Posts by Gillian Hadfield

2/ With Tianmin Shu, Seth Lazar, and Dylan Hadfield-Menell: we're building institutions that let real communities, not model developers, define how AI behaves.

5 days ago 1 0 0 0
Laude | Moonshots. We asked the most consequential AI researchers in the world how they would use AI to solve humanity's hardest problems. 125 proposals and 600 researchers later, meet the Moonshots // ONE awardees.

1/ Who decides how AI systems behave? With Tianmin Shu, @sethlazar.org, and @dhadfieldmenell.bsky.social, we were named runner-up in the Laude Institute's inaugural Moonshot program and awarded a seed grant to find out. www.laude.org/moonshots

5 days ago 1 0 1 0
Virginia Pioneers Innovative AI Governance Legislation Under Governor Spanberger - Third News. Governor Spanberger's signing of landmark AI governance legislation in Virginia introduces the Independent Verification Organization framework, ensuring safer AI systems.

5/ More on Virginia’s IVO legislation: third-news.com/article/4df7...

5 days ago 0 0 0 0

4/ This is a first step, not the finish line. We don’t have adoption of IVOs yet. But a state legislature voting unanimously for a new model of AI governance, and funding the evaluation, tells you something about where this is heading.

5 days ago 0 0 1 0

3/ Virginia is now studying whether to build it. I developed this framework in my book Rules for a Flat World and a 2019 paper applying it to AI safety. Watching a legislature take it seriously, fund the evaluation, and vote unanimously is not something I take for granted.

5 days ago 0 0 1 0

2/ The idea: independent bodies, licensed by the state, that would verify AI systems satisfy safety criteria set by the state. Voluntary for companies. Verifiers answer to the state, not to industry. Building on methods we've used in financial auditing and product safety.

5 days ago 0 0 1 0

1/ Something I’ve been working toward for a long time. Virginia just enacted the first state legislation directing a formal study of Independent Verification Organizations for AI. A bipartisan 84-14 vote in the House, a unanimous 40-0 in the Senate.

5 days ago 2 1 1 0

5/ Full paper, open access: jair.org/index.php/ja...

1 week ago 2 0 0 0

4/ Policymakers read social media to gauge opinion. AI models train on internet data. Both get a distorted picture. We need to change who gets heard, not what people think.

1 week ago 2 0 1 0

3/ Ideological media orgs amplify this by signaling the other side is more extreme than it really is. Recommender systems, optimizing for engagement, sort people into communities where the loudest voices dominate.

1 week ago 2 0 1 0

2/ When rhetoric heats up, moderates face a squeeze from both sides. Allies speaking loudly substitute for your voice. Opponents shrink the reward from speaking up. Either way, moderates go quiet. No opinion change required.

1 week ago 2 1 1 0

1/ Social media makes it look like the public is deeply divided. But what if most of the public just isn’t speaking? In a new paper with Atrisha Sarkar in JAIR, we show why. We call it rational silence.

1 week ago 1 1 1 0
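The squeeze described in 2/ and 3/ above fits in a few lines of simulation. To be clear, this is not the model from the JAIR paper, just a minimal sketch under assumed payoffs: the reward for speaking grows with how strong your view is, and the cost of speaking falls hardest on moderates as discourse gets more hostile.

```python
# Toy sketch of "rational silence" -- NOT the JAIR paper's model.
# Assumed payoffs: benefit of speaking grows with |opinion| (strong views
# are rewarded by like-minded communities); cost grows with hostility and
# is borne mostly by moderates, who draw fire from both sides.
import numpy as np

rng = np.random.default_rng(0)

N = 10_000
opinions = rng.normal(0.0, 0.4, N).clip(-1, 1)  # unimodal, mostly moderate

def who_speaks(opinions, hostility):
    """Mask of agents for whom speaking has positive expected payoff."""
    benefit = np.abs(opinions)                   # engagement reward
    cost = hostility * (1.0 - np.abs(opinions))  # the squeeze on moderates
    return benefit - cost > 0

for hostility in (0.2, 0.5, 0.8):
    speakers = who_speaks(opinions, hostility)
    print(f"hostility={hostility:.1f}  "
          f"share speaking={speakers.mean():.2f}  "
          f"mean |expressed opinion|={np.abs(opinions[speakers]).mean():.2f}")
```

As hostility rises, fewer agents speak and the opinions that do get expressed look more extreme, even though the underlying distribution never changed. That is the distorted picture 4/ warns policymakers and AI training pipelines are reading.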
Building AI for the Democratic Matrix: A Technical Research Agenda for Normative Competence and Normative Institutions. To maintain democratic resilience, it is essential to build AI agents capable of choosing behaviors that mirror those of the human agents that constitute human democracies.

Democracy isn't a rulebook. It runs on daily interactions where people comply with norms and hold each other accountable. AI agents are about to join that system. We need to build them to read it. New paper with Rakshit Trivedi and Dylan Hadfield-Menell.

2 weeks ago 11 2 4 0
FAR.AI: Frontier Alignment Research. FAR.AI is an AI safety research non-profit facilitating technical breakthroughs and fostering global collaboration.

At the @FAR.AI London Alignment Workshop I made the case for Independent Verification Organizations: licensed, competing private entities that can grow our regulatory capacity to match the pace of AI capabilities. Full talk: bsky.app/profile/far....

2 weeks ago 2 0 0 0

You can get the paper at my website! gillianhadfield.org

1 month ago 1 0 0 6

Governments can’t translate “fair” or “safe” into technical specs fast enough. But leaving details to industry means the public loses its say. Regulatory markets close both gaps: governments set outcomes, private regulators compete to achieve them.

1 month ago 0 0 0 0
Regulatory Markets: The Future of AI Governance. Regulatory markets can bridge technical and democratic gaps in AI governance by pairing public oversight with private, licensed regulatory innovation.

AI systems are quickly becoming embedded throughout the economy. But we have almost none of the regulatory tools, regulatory markets among them, to manage them. Here's what I think we should do about it: www.americanbar.org/groups/scien...

1 month ago 2 0 2 0

“The most practical governance framework currently in circulation.” That’s Forbes on the Independent Verification Organization model Fathom and I have been developing. Legislation takes years; IVOs move at the pace of innovation.

1 month ago 0 0 1 0

"Why not work on what kind of new governance is needed to ensure secure, reliable, predictable use of all frontier models, from all companies?"

1 month ago 0 0 0 0
FAR.AI: Frontier Alignment Research. FAR.AI is an AI safety research non-profit facilitating technical breakthroughs and fostering global collaboration.

In London today and tomorrow for the Alignment Workshop organized by FAR.AI. Keynoting alongside Rohin Shah and Allan Dafoe. I look forward to seeing everyone in attendance! www.far.ai/events/event...

1 month ago 2 0 0 0
International AI Safety Report. The International AI Safety Report is the world's first comprehensive review of the latest science on the capabilities and risks of general-purpose AI systems. The work was overseen by an…

The 2026 AI Safety Report's biggest finding isn't the risks it catalogs. It's the evidence gap. We're trying to build AI governance with almost no science underneath. Massive investment in the research that regulatory systems depend on is overdue. internationalaisafetyreport.org

1 month ago 1 0 0 0
Screenshot of a LinkedIn post by Jack Shanahan (Retired USAF; Project Maven/DoD JAIC; NCSI MIS; SCSP Defense Partnership), posted 3 hours ago. In the post, Shanahan weighs in on the Anthropic-Pentagon dispute, noting that despite his Project Maven background, he's sympathetic to Anthropic's position. He argues no current LLM should be used in fully lethal autonomous weapons systems, calling that a reasonable red line, and opposes mass surveillance of US citizens as a second red line. He criticizes the public nature of the dispute, calls the supply chain risk designation "laughable," questions invoking the DPA against the company's will, and advocates for shared government-industry-academia governance of frontier AI models.


Why not work on new governance...

1 month ago 0 0 0 0
Announcing the "AI Agent Standards Initiative" for Interoperable and Secure Innovation. The Initiative will ensure that the next generation of AI is widely adopted with confidence, can function securely on behalf of its users, and can interoperate smoothly across the digital ecosystem.

NIST just launched an AI Agent Standards Initiative for identity, security, and interoperability. AI agents are becoming economic actors with zero legal infrastructure in place. We require businesses to register to operate. Why expect less of AI agents? buff.ly/kTU2cfX

1 month ago 1 3 1 0
IASEAI - International Association for Safe and Ethical AI. Building a global movement for safe and ethical AI. Join IASEAI to ensure AI systems operate safely and ethically, benefiting all of humanity.

In Paris this week for IASEAI (Feb 24-26). Tuesday: panel on the International AI Safety Report. Thursday: keynote on regulatory markets, a panel on AI assurance, and a talk in Seth Lazar’s workshop on normative competence. If you’re at IASEAI, come say hello!

1 month ago 4 3 0 0
Panel Members | Independent International Scientific Panel on AI. The 40 members of the Independent International Scientific Panel on AI include people from all five of the UN’s regions. They are from various backgrounds, including academia, private…

Congratulations to Yoshua Bengio and the 39 other experts appointed to the UN’s first Independent International Scientific Panel on AI. 117-2 in the General Assembly.

2 months ago 0 0 0 0
AI Won’t Automatically Make Legal Services Cheaper - Curl, Kapoor & Narayanan

Better technology doesn’t fix broken institutions. The paper discusses regulatory markets as one path forward: instead of regulating providers directly, create a market for regulation itself. Worth a careful read. buff.ly/kbfvYqN

2 months ago 2 0 0 0

New in Lawfare from Justin Curl, Sayash Kapoor, & Arvind Narayanan: AI won’t automatically make legal services cheaper. I’ve been working on this for a long time: legal markets are broken because of adversarial dynamics, credence goods problems, & regulations that protect incumbents, not consumers.

2 months ago 3 0 1 0
Live from Ashby: Adaptive AI Governance with Gillian Hadfield and Andrew Freedman. Podcast Episode · Scaling Laws · 02/17/2026 · 55m

Billions going into building AI, barely any into making sure it works for us. Talked with @kevintfrazier.bsky.social & Andrew Freedman about our proposal making its way through state legislatures to build a competitive market for AI oversight. New @scalinglaws.bsky.social podcast:

2 months ago 7 3 0 1
Talk, Judge, Cooperate: Gossip-Driven Indirect Reciprocity in Self-Interested LLM Agents. Indirect reciprocity, which means helping those who help others, is difficult to sustain among decentralized, self-interested LLM agents without reliable reputation systems. We introduce Agentic…

6/ Led by Shuhui Zhu with Yue Lin, Shriya Kaistha, Wenhao Li, Baoxiang Wang, Hongyuan Zha, and Pascal Poupart across Waterloo, Vector Institute, CUHK-Shenzhen, and Tongji. arxiv.org/abs/2602.07777

2 months ago 0 0 0 0

5/ We don't need AI agents that default to "nice." We need agents that understand when cooperation makes sense and when it doesn't. That takes institutional structure, not just training. Gossip turns out to be surprisingly powerful institutional structure.

2 months ago 0 0 1 0
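A toy simulation makes the point in 5/ concrete. This is not the framework from the paper above; the "standing" update rule and the 5% execution-error rate below are assumptions chosen for illustration. With a shared reputation channel (gossip), self-interested donors help discriminately and cooperation persists; without one, helping is pure cost and collapses.

```python
# Toy sketch of gossip-driven indirect reciprocity (donation game).
# NOT the paper's design: the standing rule and error rate are assumptions.
import random

random.seed(1)

N, ROUNDS, ERR = 50, 5000, 0.05

def run(gossip: bool) -> float:
    standing = [True] * N   # shared reputation, maintained via gossip
    helps = 0
    for _ in range(ROUNDS):
        donor, recipient = random.sample(range(N), 2)
        if gossip:
            intend = standing[recipient]            # help those in good standing
            act = intend and random.random() > ERR  # occasional execution error
            # Standing rule: refusing a bad-standing recipient is justified
            # and does not hurt the donor's own reputation.
            standing[donor] = act or not standing[recipient]
        else:
            act = False   # no reputation info: helping is pure cost, so defect
        helps += act
    return helps / ROUNDS

print(f"help rate with gossip:    {run(True):.2f}")   # cooperation sustained
print(f"help rate without gossip: {run(False):.2f}")  # collapses to zero
```

The design choice doing the work is the shared reputation channel: gossip turns each donor's choice into information the next donor can condition on, which is exactly the institutional structure, not niceness, that 5/ argues for.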