So maybe the question isn’t:
“How do we build trustworthy AI?”
But:
“Which definition of trust are we operating within?”
The deeper challenge isn’t just technical or political.
It’s cognitive.
People shift between these perspectives depending on context—often without realizing it.
The implication: there is no single solution to “trustworthy AI.”
Different problems → different responses:
• Poor performance → better benchmarks
• Loss of agency → redesign systems
• Weak oversight → audits & regulation
• Power concentration → structural reform
That’s not a bug. It’s what makes coordination possible.
But it also means debates about AI governance often start from misalignment.
One way to make sense of this: treat “trust” as a boundary object.
A shared term that different groups interpret differently—while assuming alignment.
So when organizations say they’re building “trustworthy AI,” they’re often talking past each other.
Same term. Different meanings.
These perspectives don’t just differ—they conflict.
Improving metrics doesn’t solve power imbalances.
Regulation doesn’t guarantee agency.
Better UX can even increase manipulation risks.
In our paper, we identify four distinct ways people think about trust in AI:
• Trust as performance (metrics)
• Trust as relationship (agency, norms)
• Trust as institutions (regulation, auditing)
• Trust as power (or the lack of it)
Paper: rdcu.be/feRMf
The problem isn’t just technical. It’s conceptual.
Different communities are using the same word—“trust”—to mean very different things.
Everyone says they want “trustworthy AI.”
But almost no one agrees on what that actually means.
This substack post summarizes our new paper (link to paper below):
open.substack.com/pub/davidabr...
New @techpolicypress.bsky.social article by @broniatowski.bsky.social & Joseph Simons encourages policymakers to think of large-scale digital platforms as critical infrastructure. Policy debates surrounding these platforms narrowly focus on access controls or legal liability. Read at bit.ly/4bzYjDr.
We should treat large social and AI systems like critical infrastructure and adopt "building codes" for them, write David A. Broniatowski and Joseph Simons. Building codes are not suggestions; they are the baseline that ensures a structure is fit for its intended use, they write.
Screen shot of the following text: One of the striking things about the reaction to this preprint is how often it treats disclosure standards as if they must be invented from scratch. They don’t. In biomedicine, where the stakes are unmistakably high, disclosure norms are not casual, and they are not left to intuition. The ICMJE guidelines—used by most major medical journals—define conflicts of interest broadly: not only employment and direct funding, but advisory roles, collaborations, and other relationships that could reasonably be perceived to influence judgment. Under those standards, many of the ties identified in this preprint would straightforwardly qualify as conflicts that should be disclosed. And we don’t find this strange in medicine. We would not blink at the claim that co-authoring with a tobacco company scientist is a disclosable relationship when writing about tobacco control. Even if the specific paper was not funded by Philip Morris. Even if the collaborator is methodologically impeccable. Even if the conclusions are sound.
Some reflections by @broniatowski.bsky.social on our preprint and on the discussion, in @kakape.bsky.social's write-up, of what constitutes a conflict of interest.
A few thoughts and reflections below 🧪
broniatowski.substack.com/p/when-is-a-...
We don’t need louder certainty.
We need clearer responsibility — and institutions willing to bear the consequences of restoring legitimacy.
Standing up for science shouldn’t mean asking science to rule.
It should mean insisting that science not be used as political cover — and not be sacrificed when that cover fails.
That path is less satisfying emotionally. It offers fewer villains and no quick moral wins. But it’s how trust is rebuilt without turning science into a faction.
The harder path is insisting on answerability:
clear separation between advising and deciding,
visible ownership of decisions by political leaders,
and honest acknowledgment of value tradeoffs.
If we frame accountability primarily as purification — who belongs, who is beyond the pale — we risk deepening the legitimacy crisis we’re trying to solve.
But history is clear: removal never cleanly restores legitimacy. It provokes backlash, rebellion, and renewed challenges to authority. Institutions have to be prepared to absorb that — not pretend it won’t happen.
Sometimes legitimacy does require removing bad actors from positions of authority. Avoiding that conversation isn’t realistic.
Anthropologist Mary Douglas suggested that when procedural authority weakens, groups compensate by tightening identity and policing boundaries. That move can feel stabilizing, but it accelerates polarization, purity tests, and the embrace of conspiracy theories.
But there’s a risk here. Activism shifts authority from process to moral identity: trust us because we are on the right side.
This is where activism enters — understandably. When institutions fail, moral clarity feels like the only thing left to stand on.
When decisions inevitably went wrong under uncertainty, science became the fall guy — not because it lied, but because responsibility had been misallocated.
Politics has parties, elections, leadership turnover. Science doesn’t. In the blame economy, that asymmetry matters.
Experts stepped in to help. That was understandable. But science is built to advise, not to decide — and it lacks the institutional machinery to absorb blame when decisions go badly.
During COVID, scientists didn’t seize power. Political leaders delegated it — often publicly — because they were unwilling or unable to make hard, value-laden decisions themselves.
But I think we need to ask a harder question:
what kind of authority are we actually defending when we defend “science”?
I want to start with agreement: there has been real harm, real misinformation, and real bad faith from people in power. Silence was never a neutral option.