
Posts by David Broniatowski

a man in a red jacket is making a stop sign with his hand

So maybe the question isn’t:

“How do we build trustworthy AI?”

But:

“Which definition of trust are we operating within?”

1 day ago

a man wearing glasses and a black turtleneck is holding something in his hands and making a funny face.

The deeper challenge isn’t just technical or political.

It’s cognitive.

People shift between these perspectives depending on context—often without realizing it.

1 day ago

a hand giving an okay sign with the words ohh caiste below it

The implication: there is no single solution to “trustworthy AI.”

Different problems → different responses:

• Poor performance → better benchmarks
• Loss of agency → redesign systems
• Weak oversight → audits & regulation
• Power concentration → structural reform

1 day ago

a cartoon dog is sitting at a table with a cup of coffee in front of a fire with the words this is fine.

That’s not a bug. It’s what makes coordination possible.

But it also means debates about AI governance often start from misalignment.

1 day ago

a cartoon of spongebob and patrick with the word boundaries

One way to make sense of this: treat “trust” as a boundary object.

A shared term that different groups interpret differently—while assuming alignment.

1 day ago

a man is sitting at a table with a glass of wine making a funny face.

So when organizations say they’re building “trustworthy AI,” they’re often talking past each other.

Same term. Different meanings.

1 day ago

a man in a suit with a red tie and a pocket square that says 'i' on it

These perspectives don’t just differ—they conflict.

Improving metrics doesn’t solve power imbalances.
Regulation doesn’t guarantee agency.
Better UX can even increase manipulation risks.

1 day ago

a cartoon character is wearing a conical hat with the number four written on it.

In our paper, we identify four distinct ways people think about trust in AI:

• Trust as performance (metrics)
• Trust as relationship (agency, norms)
• Trust as institutions (regulation, auditing)
• Trust as power (or the lack of it)

Paper: rdcu.be/feRMf

1 day ago

a cartoon of two spidermans standing in front of a nypd van

The problem isn’t just technical. It’s conceptual.

Different communities are using the same word—“trust”—to mean very different things.

1 day ago
Why “Trustworthy AI” Means Different Things to Different People: The four perspectives shaping design, risk, and governance

Everyone says they want “trustworthy AI.”
But almost no one agrees on what that actually means.

This Substack post summarizes our new paper (link to paper below):
open.substack.com/pub/davidabr...

1 day ago
A Building Code for Digital Infrastructures We should treat large social and AI systems like critical infrastructure and adopt "building codes" for them, write David A. Broniatowski and Joseph Simons.

New @techpolicypress.bsky.social article by @broniatowski.bsky.social & Joseph Simons encourages policymakers to think of large-scale digital platforms as critical infrastructure. Policy debates surrounding these platforms narrowly focus on access controls or legal liability. Read at bit.ly/4bzYjDr.

1 month ago

We should treat large social and AI systems like critical infrastructure and adopt "building codes" for them, write David A. Broniatowski and Joseph Simons. Building codes are not suggestions; they are the baseline that ensures a structure is fit for its intended use, they write.

1 month ago
Screen shot of the following text: 

One of the striking things about the reaction to this preprint is how often it treats disclosure standards as if they must be invented from scratch.

They don’t.

In biomedicine, where the stakes are unmistakably high, disclosure norms are not casual, and they are not left to intuition. The ICMJE guidelines—used by most major medical journals—define conflicts of interest broadly: not only employment and direct funding, but advisory roles, collaborations, and other relationships that could reasonably be perceived to influence judgment.

Under those standards, many of the ties identified in this preprint would straightforwardly qualify as conflicts that should be disclosed.

And we don’t find this strange in medicine.

We would not blink at the claim that co-authoring with a tobacco company scientist is a disclosable relationship when writing about tobacco control. Even if the specific paper was not funded by Philip Morris. Even if the collaborator is methodologically impeccable. Even if the conclusions are sound.


Some reflections by @broniatowski.bsky.social on our preprint and some of the discussion around what constitutes a conflict of interest in @kakape.bsky.social's write-up.

A few thoughts and reflections below 🧪

broniatowski.substack.com/p/when-is-a-...

3 months ago

We don’t need louder certainty.
We need clearer responsibility — and institutions willing to bear the consequences of restoring legitimacy.

4 months ago

Standing up for science shouldn’t mean asking science to rule.
It should mean insisting that science not be used as political cover — and not be sacrificed when that cover fails.

4 months ago

That path is less satisfying emotionally. It offers fewer villains and no quick moral wins. But it’s how trust is rebuilt without turning science into a faction.

4 months ago

The harder path is insisting on answerability:
clear separation between advising and deciding,
visible ownership of decisions by political leaders,
and honest acknowledgment of value tradeoffs.

4 months ago

If we frame accountability primarily as purification — who belongs, who is beyond the pale — we risk deepening the legitimacy crisis we’re trying to solve.

4 months ago

But history is clear: removal never cleanly restores legitimacy. It provokes backlash, rebellion, and renewed challenges to authority. Institutions have to be prepared to absorb that — not pretend it won’t happen.

4 months ago

Sometimes legitimacy does require removing bad actors from positions of authority. Avoiding that conversation isn’t realistic.

4 months ago

Anthropologist Mary Douglas suggested that when procedural authority weakens, groups compensate by tightening identity and policing boundaries. That move can feel stabilizing — but it accelerates polarization, purity tests, and endorsement of conspiracy theories.

4 months ago

But there’s a risk here. Activism shifts authority from process to moral identity: trust us because we are on the right side.

4 months ago

This is where activism enters — understandably. When institutions fail, moral clarity feels like the only thing left to stand on.

4 months ago

When decisions inevitably went wrong under uncertainty, science became the fall guy — not because it lied, but because responsibility had been misallocated.

4 months ago

Politics has parties, elections, leadership turnover. Science doesn’t. In the blame economy, that asymmetry matters.

4 months ago

Experts stepped in to help. That was understandable. But science is built to advise, not to decide — and it lacks the institutional machinery to absorb blame when decisions go badly.

4 months ago

During COVID, scientists didn’t seize power. Political leaders delegated it — often publicly — because they were unwilling or unable to make hard, value-laden decisions themselves.

4 months ago

But I think we need to ask a harder question:
what kind of authority are we actually defending when we defend “science”?

4 months ago

I want to start with agreement: there has been real harm, real misinformation, and real bad faith from people in power. Silence was never a neutral option.

4 months ago
When Authority Slips: What Moses Can Teach Science About Legitimacy

This is a thread for people who care deeply about science and democracy — especially those who feel the pull toward “standing up for science” in a moment of real institutional failure (longer version at this substack: substack.com/home/post/p-...).

4 months ago