
Posts by Gernot Rieder

After Harm: A Plea for Moral Repair after Algorithms Have Failed - Science and Engineering Ethics In response to growing concerns over the societal impacts of AI and algorithmic decision-making, current scholarly and legal efforts have mainly focused on identifying risks and implementing safeguard...

New paper is out! After Harm: A Plea for Moral Repair after Algorithms Have Failed.

@aktant.bsky.social & I show that post-harm scenarios have not received enough attention and argue why attending to them is essential for a satisfactory account of AI ethics and governance.

doi.org/10.1007/s119...

7 months ago
Reassembling Politics through Sensory Power? Digital Contact Tracing and the Infrastructuring of Governance When the COVID-19 pandemic hit, governments worldwide swiftly mobilized digital capabilities and infrastructures to combat the spread of the virus (see ...

#CfP: Topical Collection "Reassembling Politics through Sensory Power?" in Digital Society, co-edited by @nicbaya.bsky.social, Kjetil Rommetveit, Céline Cholez & me.

Abstract deadline: Oct. 1, 2025
Submission deadline: Dec. 31, 2025

More: link.springer.com/collections/...

#STS #philtech

7 months ago
Call for Abstracts – 7th Nordic STS Conference 2025

CfP open for 7th Nordic #STS Conference in 🇸🇪:
www.nordicsts.se/call-for-abs...

1 year ago

Hard copies of the Critical Data Studies book have finally arrived. The delivery schedule for parcels travelling from England to Ireland seems to have reverted to that of the medieval period. Great to have the book in hand: reading the printed book always feels like a different experience compared to the PDF copy.

1 year ago
MIT researchers release a repository of AI risks | TechCrunch A group of researchers at MIT and elsewhere has compiled what they claim is the most thorough database of possible risks around AI use.

"The AI risk repository, which includes over 700 AI risks grouped by causal factors (e.g. intentionality), and domains (e.g. discrimination), was born out of a desire to understand the overlaps and disconnects in AI safety research"
#AIEthics

techcrunch.com/2024/08/14/m...

1 year ago

Article by M. Ruckenstein about the necessity & practice of engaging collaboratively with technology & data in ways that create space for critical inquiry, anticipation & revision. Our Data Ethics Decision Aid is featured as an exemplary practice:
doi-org.utrechtuniversity.idm.oclc.org/10.1177/2976...

1 year ago

I'll do a workshop next week in Vienna on "everything artists and cultural practitioners need to know about this strange thing called AI", titled "debunking the tech-bro AI cult". Lecture+discussion.

Fri, 17.1.2025, 14:00-16:00, brut Wien. Please spread the word & register here:
brut-wien.at/en/Programme...

1 year ago

To all search engine researchers out there!!

1 year ago

Still a few more days to submit an abstract for this workshop.

1 year ago
Neither fair nor legal. How and why untrustworthy digital ecosystems evolve Untrustworthy technologies create systemic harms for their users, often by design, and at the societal level. However, the nature of these technologies and the reasons why they evolve have not yet bee...

'Neither fair nor legal. How and why untrustworthy digital ecosystems evolve' by Catherine Thompson, Daniel Samson and Sherah Kurnia in the Scandinavian Journal of Information Systems. About the Robodebt social welfare scandal: aisel.aisnet.org/sjis/vol36/i...

1 year ago