
Posts by Daniel S. Schiff


NEW REPORT from the Imagining the Digital Future Center:
In 160+ impassioned essays, global experts note AI is quickly becoming the invisible operating system of society. They urge an institutions-first strategy to help support human resilience.
imaginingthedigitalfuture.org/reports-and-...

2 weeks ago
Preview: Strategies for Harmonizing Fragmented AI Ethics Frameworks, Standards...
AI governance is increasingly shaped by a patchwork of ethical frameworks, standards, and regulations, with overlapping demands from technical standards bodies, industry consortia, and governments. A key piece of this puzzle is standardization: as regulators...

9/9 Time to actually embed stakeholder voices throughout.

From marginalized communities to entire countries: build real organizational infrastructure for participation so AI works for everyone. 🌍

@purduepolsci.bsky.social @GRAILcenter.bsky.social

link.springer.com/rwe/10.1007...

4 weeks ago

8/9 Strengthen auditing:
• Independent accreditation bodies
• Transparent audit results
• Practical tools bridging the principles-to-practice gap

4 weeks ago

7/9 Make governance agile:
✅ Living documents evolving with tech
✅ Rapid-response taskforces
✅ Regulatory sandboxes for testing

4 weeks ago

6/9 Layered adoption approach by orgs:
• High-level frameworks (NIST AI RMF)
• Domain guidelines (IEEE 7010)
• Operational tools (model cards)

Promotes scalability and adaptability.

4 weeks ago

5/9 🚀 The solution: Human-centered, harmonized, adaptive governance.

A roadmap: global collaboration among ISO, IEEE, and the OECD. Use crosswalks to map overlaps and reduce redundancy.

4 weeks ago

4/9 Challenge 3: The "15 competing standards" problem.

Proliferation creates decision paralysis. Organizations face a patchwork of frameworks with no clear guidance on which to follow.

4 weeks ago

3/9 Challenge 2: Voluntary standards without enforcement.

Organizations cherry-pick compliance, creating superficial audits instead of meaningful accountability.

4 weeks ago

2/9 Challenge 1: Translating values like fairness and transparency into actionable standards is hard.

These values are context-dependent. Standards risk oversimplifying or "ethics-washing" - leaving structural inequities unaddressed.

4 weeks ago

1/9 🤔 Core problem: AI advances rapidly, but governance lags in fragmented systems.

Challenges:
• Redundancy & overlap
• Decision paralysis
• Ethical concepts lost in translation
• Geopolitical divides

4 weeks ago

🚨 AI governance is fragmented chaos. Over 500 standards, overlapping frameworks, and contradictory regulations create a maze.

But there's a roadmap to harmonize this and build human-centered AI governance 🧵

link.springer.com/rwe/10.1007...

4 weeks ago

4/4 AI governance will shape society—addressing inequality and climate impact if done right, entrenching power imbalances if done wrong.

For policy practitioners and researchers: which trade-offs matter most to you?

@purduepolsci.bsky.social @GRAILcenter.bsky.social

dx.doi.org/10.2139/ssr...

1 month ago

3/4 My analysis suggests hybrid models combining:
• Centralized safety oversight
• Decentralized innovation spaces
• Broad societal goal integration

Experimentation across jurisdictions will be essential. Forthcoming in Handbook on the Global Governance of AI.

1 month ago

2/4 Current AI governance shows fragmentation everywhere:

• EU centralizing with AI Act
• US pursuing decentralized approaches
• Monitoring ranges from strict to voluntary
• Tech firms drive decisions while public input lags

The pace of AI development only complicates things further.

1 month ago

1/4 I examined 4 governance models to understand the possibilities:

✈️ Aviation: Centralized, safety-focused, but expensive
🌱 Organic standards: Decentralized, private-led, inconsistent
🌍 Climate policy: Shared, flexible, weak enforcement
💻 Open-source: Collaborative, adaptive, fragile

1 month ago

AI governance faces 5 fundamental tensions, including centralized vs. decentralized control, robust vs. minimal enforcement, and adaptive vs. enduring rules. How do we navigate these trade-offs? 🤖

My new paper draws lessons from aviation, climate & open-source governance 👇

dx.doi.org/10.2139/ssr...

1 month ago

5/5 The stakes: AI influences fundamental life outcomes. Without robust, transparent audits, we risk perpetuating harms and undermining trust.

For governance folks: What's your biggest auditing challenge—technical gaps, regulatory clarity, or stakeholder engagement?

doi.org/10.1177/205395

1 month ago

4/5 Auditors face regulatory ambiguity, data governance gaps, and interdisciplinary friction between tech, legal, and leadership teams.

Yet they're ecosystem builders—translating vague laws into actionable frameworks and pushing organizations toward better AI governance.

1 month ago

3/5 Key finding: Most audits focus narrowly on technical metrics.

Broader impacts on vulnerable communities? Often sidelined.
Public reporting of audit results? Almost nonexistent.

Transparency and stakeholder engagement remain major gaps. 📈

1 month ago

2/5 What's driving AI auditing growth?

🔹 Regulation (EU AI Act, NIST frameworks)
🔹 Reputation management (avoiding biased AI headlines)
🔹 Competitive strategy (trustworthy AI advantage)

The ecosystem spans internal teams, Big Four firms, specialized startups. @purduepolsci.bsky.social

1 month ago

1/5 We interviewed 34 AI ethics auditors across 23 organizations in 7 countries. Published in Big Data & Society. @bigdatasociety.bsky.social

The field borrows from financial auditing: planning, validating, analyzing risks, reporting. But it's still figuring out what success looks like. 📊

1 month ago

AI systems decide who gets hired, who gets loans, who receives healthcare. But who's auditing the AI? 🤖

Our new study explores the emerging field of AI ethics auditing—the people and processes trying to make AI accountable. @grailcenter.bsky.social

doi.org/10.1177/205... 🧵

1 month ago

Published in Hastings Center Report. @purduepolsci.bsky.social @GRAILcenter.bsky.social

onlinelibrary.wiley.com/doi/abs/10....

With Daniel Susser, Sara Gerke, Laura Y. Cabrera, I. Glenn Cohen, & team

1 month ago

Synthetic data should complement real-world data, not replace it. The choice ahead: Will we use this technology to bridge healthcare gaps or deepen inequities?

For governance teams & researchers working on AI in healthcare—curious what you're seeing?

#SyntheticData #AIinHealthcare #Bioethics

1 month ago

We argue synthetic data isn't a magic fix—it's a powerful tool that demands robust safeguards 🛡️

Key needs:
• Standards for accuracy & reliability
• Privacy protections
• Transparent policies
• Continued investment in diverse, real-world datasets

1 month ago

But the risks are real:
• Accuracy issues for rare disease algorithms
• Potential privacy leaks despite synthetic nature
• Bias amplification from flawed source data
• Regulatory gaps: "non-identifiable" status can let it escape oversight
• Justice concerns about sidelining real-world diversity

1 month ago

What synthetic data promises:
• Privacy protection through artificial datasets
• Inclusive modeling of rare diseases & underserved groups
• Enhanced AI training capabilities
• Scalable research opportunities

The potential is substantial ⚡

1 month ago

Enter synthetic data: AI-generated datasets that mimic real-world patterns without containing actual patient information.

Sounds perfect—private, inclusive, scalable. But our analysis in Hastings Center Report reveals significant ethical complexities 🚨

1 month ago

The challenge: Healthcare research is data-rich but insight-poor 📊

Privacy laws, demographic gaps, and underrepresentation of rare conditions prevent researchers from fully utilizing available EHRs, public datasets, and lab studies.

1 month ago

Synthetic data promises to revolutionize healthcare research—solving privacy issues, modeling rare diseases, expanding equity. But it's also an ethical minefield that demands careful navigation 🧵

onlinelibrary.wiley.com/doi/abs/10....

1 month ago