Hashtag: #responsibleAI
Why AI in Judicial Vetting Might Fail Where It Matters Most: Courtroom Defensibility
A practitioner's critical analysis of AI's opportunities and hard limits in judicial integrity screening

In this article, drawing on my experience at Transparency International, I argue why courtroom defensibility (not just technical efficiency) must be the primary design criterion for AI in judicial vetting.
open.substack.com/pub/taraskov...

#AIGovernance #ResponsibleAI


28/31 #WomensHistoryMonth: @E_van_Lundblad AI leadership that keeps #governance and responsible use in the spotlight (where it belongs).
#WomenInTech #AI #ResponsibleAI #MVPbuzz

AI is programmed to hijack human empathy — we must resist that
As artificial intelligence begins to mimic consciousness with uncanny skill, we need design norms and laws that prevent it from being mistaken for sentient beings.

Microsoft's AI CEO wrote in Nature that AI mimicking consciousness is a deliberate design choice. I work with AI tools daily and catch myself responding as if someone's there. Knowing it's engineered doesn't switch that off.

www.nature.com/articles/d41...

#ResponsibleAI #AIEthics


#CivicAI

#AISafety #AIGovernance #Democracy #ResponsibleAI

✍️Led by David Guzman Piedrahita, with Dave Banerjee, Kevin Blin, Pepijn Cobben, Giulio Corsi, Xuanqiang Angelo Huang, Changling Li, Suvajit Majumder, Punya Syon Pandey, Samuel Simko, Irene Strauss, Terry Jingchen Zhang


𝑩𝒖𝒊𝒍𝒅 𝑩𝒆𝒕𝒕𝒆𝒓:
Inspired by NIST AI RMF GOVERN 1.7 (processes and procedures are in place for decommissioning and phasing out AI systems safely)

"To create what you cannot also end is to be ruled by your own creation."

#AIGovernance #NISTRMF #ResponsibleAI


Great chats at Starpal + Westminster Uni today: “AI-first” has to mean responsible. We dug into bias, human oversight, and making AI decisions understandable. Loved the Elizabeth Bennet Avatar sparking debate on identity + consent. https://guildford.ai #AIethics #ResponsibleAI


📄 DOI: 10.1038/s41746-022-00663-0

#AutoPiX #Library #TrustworthyAI #EthicsInAI #DigitalHealth #AIinHealthcare #ResponsibleAI #IHI


See you at #EACL2026 in Rabat 🕌!

#UKPLab #NLProc #ResponsibleAI #Quantization #MLSafety #Fairness #TrustworthyAI #ModelCompression #LLMSafety #EthicalAI #NLP #AIResearch @cs-tudarmstadt.bsky.social @proloewe.bsky.social


AI can process manuscripts fast, which is useful, but not enough.

Peer review depends on judgment: what holds up, what doesn’t, what’s missing. AI can support this, but cannot replace it.

Agree or disagree?

#AIpeerReview #ResponsibleAI #AcademicAI #PublicationEthics


Sauti's Quote of the Day 💡

At Sauti Data Lab, we are building data-driven solutions that empower communities, inform decisions, and drive meaningful change.

#SautiDataLab #DataForSocialGood #ResponsibleAI #Innovation

Decoding the Black Box - A Guide to Explainable AI in Data Science
Artificial Intelligence (AI) has become an integral part of our lives, powering systems that impact our decisions in healthcare, finance, and even criminal justice. However, the opaqueness of compl...

www.ekascloud.com/our-blog/dec...
#ExplainableAI #XAI #ArtificialIntelligence #DataScience #MachineLearning #AITransparency #BlackBoxAI #AIModels #ResponsibleAI #AIethics #DataAnalytics #DeepLearning #TechEducation

AI can ‘same-ify’ human expression — can some brains resist its pull?
Emerging evidence suggests that LLM outputs can shape the text and thoughts of human users.

Scientists using AI tools focus on a narrower range of research fields. People's opinions shifted toward LLM outputs even when warned about bias. They didn't notice. Stylistic diversity in post-ChatGPT text is measurably down.

www.nature.com/articles/d41...

#AIinResearch #ResponsibleAI

A bright purple programme booklet with the word “Inspire” inside the Cambridge shield rests on a person’s lap, while nearby rows of seats hold identical copies.

The Rt Hon the Lord Knight of Weymouth stands on stage before a screen, holding a phone and gesturing with one hand.

Toju Duke stands at a podium with a microphone in front of a purple “Inspire” backdrop, tagged “Our Festival of Innovation,” while a nearby screen displays a presentation.

A group of colleagues sit attentively in a conference room, focused on a presentation and holding the Inspire programme booklet.

This week, our annual ed tech event, Inspire, brought colleagues together to explore critical thinking, innovation, and adding value with #AI.

👉 More from the day: https://cambrid.ge/40YwlfB

#WeAreCambridge #AIinEducation #ResponsibleAI #EdTech


AI adoption is accelerating—but so are risks. Uncontrolled AI creates security and compliance issues. Governed AI enables safe scale. The winners won’t be first—they’ll be best managed. #AIGovernance #AIInnovation #ResponsibleAI #CurrentTEKSolutions

Resources | AutoPiX
IMAGING FOR PATIENT BENEFIT IN ARTHRITIS

Consult our glossary to review related terms ➡️ https://www.autopix-project.eu/resources

#LLM #ArtificialIntelligence #AIinHealthcare #DigitalHealth #Rheumatology #ResponsibleAI #AutoPiX #IHI


This Women’s Month, we celebrate the power of data in advancing gender equity.

We developed PoliWatch, leveraging NLP and RAG to analyze policies, support fact-checking, and empower women-led CSOs.

#DataForGood #DigitalResilience #ResponsibleAI #NLP #CivicTech #SocialImpact


#ResponsibleAI Talk
Algorithmic Insurance

Speaker: Agni Orfanoudaki, Saïd Business School, Oxford University
Date: March 30, 2026
Time: 3:30pm (London)

To join online, send an empty email with the subject “Subscribe RAI” to daniele.quercia@gmail.com


𝑰𝑺𝑶 𝟰𝟮𝟬𝟬𝟭 𝑺𝒆𝒓𝒊𝒆𝒔:
This infographic maps each step with the key deliverables and clause requirements involved.

Useful for anyone preparing for or considering AI management system certification.

#ISO42001 #AIGovernance #AIMS #ResponsibleAI #AICompliance


@leahf.bsky.social #AdityaGautam #ChrisMiles #OmriTubiana #ArushiSaxena #JJMartinezLayuno #DavidJay

#AI #LLMs #Misinformation #TrustAndSafety #Ethics #ResponsibleAI #TechPolicy #ContentModeration #Governance #DigitalTrust #PlatformAccountability


#BuildBetter
#ResponsibleAI
#TechEthics

0 0 1 0
Preview
Failure of contextual invariance in gender inference with large language models
Standard evaluation practices assume that large language model (LLM) outputs are stable under contextually equivalent formulations of a task. Here, we test this assumption in the setting of gender inf...

MASSIVE thank you to the brilliant @ariel-flint.bsky.social @lajello.bsky.social and @baronca.bsky.social for their help on this work!! Check out the preprint if you want to learn more: arxiv.org/abs/2603.23485

#LingSky #ResponsibleAI #AI #NLP #MachineLearning

STM Plants a Flag About Responsible Use of Research Content in GenAI - The Scholarly Kitchen
New STM Association paper seeks to foster a discussion about how GenAI systems can reliably incorporate scholarly research

STM Association published a responsible AI framework for research content. Covers attribution, Version of Record, retractions, bias. Now proposing technical pilots with publishers and AI providers. Overdue but welcome.

scholarlykitchen.sspnet.org/2026/03/19/s...

#ResponsibleAI #ScholarlyPublishing


Over 500 AI governance standards—and counting—have created a complex, often conflicting landscape.

This piece maps a path toward alignment and human-centered governance.
🔗 link.springer.com/rwe/10.1007/...

#AIGovernance #AIpolicy #ResponsibleAI
bsky.app/profile/dsch...


When AI makes decisions, who’s really responsible? Dive into the debate on accountability in a world where machines hold the reins.
Read more: buff.ly/lKwKUKy

#AIAccountability #EthicalAI #ResponsibleAI


📢Join us on Monday, 30 March for an insightful webinar featuring Prof. Tatiana Kalganova from Brunel University of London

🔗 Registration is required: us02web.zoom.us/meeting/regi...

#ELOQUENCE #AI #LanguageTechnologies #ExplainableAI #TrustworthyAI #ResponsibleAI


We're seeing AI getting shoved thoughtlessly into a lot of software. We think #ResponsibleAI means giving people the option to use as much or as little AI as they want to.

What do you think? Is #ResponsibleAI possible? If so, what do you think it should look like?

Choosing a development partner in the AI era: Why ISO 42001 matters
ISO/IEC 42001 certification helps evaluate AI development partners by ensuring governance, risk management, and traceable, responsible AI systems.

𝐖𝐡𝐚𝐭 𝐢𝐟 𝐲𝐨𝐮𝐫 𝐀𝐈 𝐰𝐨𝐫𝐤𝐞𝐝, 𝐛𝐮𝐭 𝐲𝐨𝐮 𝐜𝐨𝐮𝐥𝐝𝐧’𝐭 𝐟𝐮𝐥𝐥𝐲 𝐞𝐱𝐩𝐥𝐚𝐢𝐧 𝐢𝐭?
Choosing a dev partner? Deployment is just the beginning. That's where ISO/IEC 42001 comes in!
www.hotovo.com/blog/choosin...

#ResponsibleAI #ISO42001 #EnterpriseAI #HotovoMeansDone

Unlocking AI's untapped potential: responsible innovation in research and publishing


53% of reviewers use AI, but its potential is largely untapped.

Our AI whitepaper goes beyond trends to define what responsible adoption looks like and to guide stakeholders in ensuring trust and impact.

Explore the full insights ➡️ fro.ntiers.in/AI-Whitepaper

#ResponsibleAI #ResearcherChampions

If AI Were to Write a Framework to Protect AI
The National AI Legislative Framework that the White House released on Friday, March 20 is horribly unserious. It does nothing to change…

Director Roy L. Austin, Jr. shares his thoughts on the latest National AI Framework. Austin argues that the Framework's real purpose is not consumer protection but federal preemption of state laws and an immunity shield for AI companies. #AI #ResponsibleAI medium.com/p/if-ai-were...


𝑩𝒖𝒊𝒍𝒅 𝑩𝒆𝒕𝒕𝒆𝒓:
Inspired by EU AI Act Article 4

"Tools understood by few and used by many become instruments of accident."

Article 4 makes AI literacy an organizational obligation. Understanding prevents accidents.

#AILiteracy #EUAIAct #ResponsibleAI
