
Posts by Abigail Jacobs

Doing AI Differently | Culture × AI Workshop

I'll be at #ICML2026 on July 10-11 in Seoul to speak at the Workshop on Culture x AI: Evaluating AI as a Cultural Technology.

The workshop is currently accepting submissions, with humanities, ML, HCI, and social/cognitive sciences all welcome.

Submit papers by May 1! Join us in Seoul! 🇰🇷

6 days ago 43 15 2 2
The Code Is Not the Law: Why Claude’s Constitution Misleads
Anthropic’s appeals to constitutionalism and virtue ethics risk obscuring where the power and accountability for shaping AI behavior lie.

"The constitution is not a neutral charter standing above the firm but, rather, a company document that prioritizes its mission, market position, safety research, and normative influence all at once."

Lisa Klaassen and Ralph Schroeder detail their criticisms of Claude's Constitution.

1 week ago 27 14 0 3
The Hidden Governance of AI and Other Threats to Democracy
Apr 8, 2026, 12:10 pm - Abigail Jacobs researches structure, governance, and inequality in sociotechnical systems and the hidden assumptions in machine learning.

Join us for a Bellwether Lecture! 🔔

University of Michigan School of Information Assistant Professor azjacobs.bsky.social will discuss structure, governance, and inequality in sociotechnical systems & hidden assumptions in machine learning.

📅 April 8, 12:10-1:30 pm
📍 210 South Hall & Online

2 weeks ago 17 3 0 0

Excellent, important, and clarifying.

Historian of computing Kevin Baker teaches us about the history of aerial targeting. And, Baker says, contra news accounts, Anthropic and Claude didn't select the Minab girls' school as a target. @kevinbaker.bsky.social

4 weeks ago 24 13 1 0

Well put: “Formal review persists, but substantive discretion has migrated upstream…When objectives are embedded in architecture, administrative errors and political misjudgments are operationalized at scale.

What appears as a dispute about fairness is therefore a deeper institutional misalignment”

3 weeks ago 7 1 0 0
Where is Accountability When Governments Deploy AI?
AI systems now structure public authority; human-in-the-loop oversight is insufficient and upstream guardrails are required, argues Michael A. Santoro.

AI systems aren’t just supporting decisions—they’re structuring how public authority is exercised, writes Michael A. Santoro. “Human in the loop” isn’t enough. Accountability must be built upstream through guardrails embedded in system design, not added after the fact, he argues.

3 weeks ago 23 9 1 1

Was able to catch a bit! Always love an excuse to follow @jessicahullman.bsky.social’s work. An abuse of terms and questionable summary: the bigger picture is about type 3 errors - there’s lots of LLM work to solve ill-posed questions, when we could ask better questions / do better soc sci instead!

3 weeks ago 4 0 1 0
The Poetics of Bureaucracy
Language models are a bureaucratic technology

Coarsely summarizing perspectives on the bureaucratic culture of language models by @himself.bsky.social, @azjacobs.bsky.social, and Lily Chumley.

3 weeks ago 23 4 0 2

!!

1 month ago 0 0 0 0

If you have any interest in the future of AI, please join us for another really insightful conversation with @alondra.bsky.social. She's brilliant but better yet, she's right!

5 months ago 38 9 0 0

(modesty also requires I acknowledge this idiosyncratic phrasing from one of my earliest mentors and coauthors!)

5 months ago 0 0 0 0
Measurement as governance in and for responsible AI
Measurement of social phenomena is everywhere, unavoidably, in sociotechnical systems. This is not (only) an academic point: Fairness-related harms emerge when there is a mismatch in the measurement p...

and anyways, modesty forbids me to recommend some ill-formed thoughts: arxiv.org/abs/2109.05658

5 months ago 8 0 1 0
Auto-essentialization: Gender in automated facial analysis as extended colonial project - Morgan Klaus Scheuerman, Madeleine Pape, Alex Hanna, 2021
Scholars are increasingly concerned about social biases in facial analysis systems, particularly with regard to the tangible consequences of misidentification o...

and @morganklauss.bsky.social Madeleine Pape @alexhanna.bsky.social's important work on auto-essentialization, putting AI as governance in historical context
journals.sagepub.com/doi/full/10....

5 months ago 2 0 1 0
The Legal Regulation of A.I. Hate

to forthcoming work from Fanna Gamal in Calif Law Review (2026) studyofhate.ucla.edu/the-legal-re...

5 months ago 0 0 1 0
Oligarchy, State, and Cryptopia
Theoretical accounts of power in networked digital environments typically do not give systematic attention to the phenomenon of oligarchy—to extreme concentrati...

But the stakes! are! high!
From Julie E. Cohen, theorizing on the actors behind the scenes and their political and economic relations papers.ssrn.com/sol3/papers....

5 months ago 0 0 1 0

This picks up an earlier thread from @alisongopnik.bsky.social @himself.bsky.social Cosma Shalizi & James Evans -- AI as cultural technology. www.science.org/doi/full/10....

5 months ago 2 1 2 0
AI as Governance
Political scientists have had remarkably little to say about artificial intelligence (AI), perhaps because they are dissuaded by its technical complexity and by current debates about whether AI might ...

AI as governance -- @himself.bsky.social on how AI reshapes markets, bureaucracy, democracy...and culture. Very happy to see this getting the mainstream social science treatment.
www.annualreviews.org/content/jour... I can't believe I missed this paper coming out!

5 months ago 23 3 1 1

Wow! Sounds like a huge decision and a wonderful outcome. Good luck with the move!

8 months ago 4 0 0 0

Feeling so excited + grateful to be representing this paper at #ICML! Please stop by to talk about how to do more valid measurement for evaling gen AI systems!

Work led by the incomparable @hannawallach.bsky.social and @azjacobs.bsky.social as a part of Microsoft’s AI and Society initiative!!

9 months ago 12 2 0 0

“If ___ ran a mini nuclear power plant” seems like a strong vibe for the day

1 year ago 2 0 0 0
Opinion | Look Past Elon Musk’s Chaos. There’s Something More Sinister at Work.
Everything is content.

As ever, Tressie McMillan Cottom has the most astute analysis of how to read Musk's behavior. www.nytimes.com/2025/02/12/o...

1 year ago 19 6 2 2

"The bureaucracy is an “unelected, fourth, unconstitutional branch of government, which has, in a lot of ways, currently, more power than any elected representative,” insisted Mr. Musk, who serves as an unelected appointee with vast reach across the government."

cool

1 year ago 0 0 0 0
At Oval Office, Musk Makes Broad Claims of Federal Fraud Without Proof (Gift Article)
The billionaire, whose federal cost-cutting team has been operating in secrecy, asserted that he had uncovered waste and fraud across the bureaucracy, without providing evidence.

big day to submit an article on how "efficiency" is used to undermine legitimacy of the administrative state

www.nytimes.com/2025/02/11/u... (gift link)

1 year ago 8 1 1 0

Oh of course good people know each other

1 year ago 6 0 0 0

Thrilled to have you here regardless !

1 year ago 6 0 0 0

(Bonus rant for the enshittification of search. Me trying to fix my house, sample AI suggestions from recent tasks: “use vinegar.” “Not flammable since the 80s!”

…more clicks to experts: “OMFG do not fucking use vinegar.” “Yes still flammable so many people died that way”)

1 year ago 5 0 1 0

This! Not that individualizing the problem is the solution, but so many don’t know how big of a jump it is. Also: bring back lmgtfy (let me google that for you) as a design intervention

1 year ago 5 0 1 0

"there's a lot of qualitative work that goes into designing quantitative metrics" -- @azjacobs.bsky.social

"how do we translate between benchmark performance and what it will really be like to use a model" -- Su Lin Blodgett

1 year ago 47 8 0 1

"Overall, the starting list constitutes at best a narrow coverage of the risks the technology is likely to pose, & at worst a (partial) red herring poised to direct significant risk mitigation efforts to building on inappropriate foundations." @yjernite.bsky.social et al on the AI Act Systemic Risks

1 year ago 11 4 0 0
Evaluating Generative AI Systems is a Social Science Measurement Challenge
Across academia, industry, and government, there is an increasing awareness that the measurement tasks involved in evaluating generative AI (GenAI) systems are especially difficult. We argue that thes...

Evaluating Generative AI Systems is a Social Science Measurement Challenge: arxiv.org/abs/2411.10939

TL;DR: The ML community would benefit from learning from and drawing on the social sciences when evaluating GenAI systems.

1 year ago 4 1 1 0