
Posts by AI Accountability Lab

Joint Open Letter: Preserving the Scope and Integrity of the AI Act CDT Europe joined civil society organisations and individuals in a public letter raising concerns about proposed changes to Annex I of the AI Act.

📢 In the context of the ongoing trilogue negotiations on the #AI Omnibus proposal, together with 32 other organisations and individuals, we contributed to an open letter to EU institutions, expressing our concerns about current proposals that could weaken the scope and effectiveness of the AI Act.

1 week ago

The 🇺🇸+🇪🇺 oil&gas lobbies claim the EU needs to weaken its methane regulations or face gas supply shortages.

But energy consultancy Rystad shows that to be untrue.

Oil&gas distorting reality to further its own commercial interests? @paulinesophieh.bsky.social thinks so.

Via @politico.eu

1 week ago
Dr Birhane is the founder and principal investigator of the AI Accountability Lab, which studies AI technologies and their downstream societal impact, so as to foster a greater ecology of AI accountability.

She is a member of the ADAPT Centre and an Assistant Professor at Trinity College Dublin, has featured in TIME's 100 Most Influential People in AI, and has served on the UN's AI Advisory Body and Ireland's AI Advisory Council.

We greatly look forward to hearing her give a keynote address at EdTech 26 at Dublin City University on the 3rd and 4th of June 2026.

We are delighted to welcome @abeba.bsky.social of @aial.ie to @dcuioe.bsky.social for the Irish Learning Technology Association's 25th annual EdTech conference, June 3-4, 2026. #Edtech26 #edusky We greatly look forward to her keynote talk examining #genAI tools in education! ilta.ie/edtech-2026/...

3 weeks ago

Researchers at @aial.ie, @tcddublin.bsky.social & ADAPT have released new findings addressing the lack of visibility into AI training data, despite it being an obligation under the AI Act.
Learn more: www.adaptcentre.ie/news-and-eve...
@researchireland.ie

3 weeks ago

We think existing consumer law, via the Unfair Contract Terms and Unfair Commercial Practices directives, already shows that several of these issues are unfair, but also that the situation requires further concrete action and changes, which we set out in our 12 recommendations.

18/

3 weeks ago

Issue 7: The lack of provider responsibilities is also at odds with the fact that providers decide how the models are developed, what data is used in training, and how the model is configured. An example of this is the use of Grok to produce CSAM, where X reiterated that only users are responsible for outputs.

12/

3 weeks ago

Issue 5: The manner in which users are assigned full responsibility and liability over inputs and outputs is at odds with the reality that users control only their prompts, not the underlying AI model or the system prompts, which are solely controlled by the provider.

10/

3 weeks ago

Issue 4: GenAI services come with "no warranty" and "no assurance", which raises the question of how they are being marketed, and whether that marketing is misleading to consumers. If so, this also has implications for the use of consumer protection in AI Act enforcement based on the marketed 'purposes'.

9/

3 weeks ago

Issue 3: Terms state that the service may change without notice, which is ambiguous: it could mean changes to the UI/UX, but also to the quality, underlying model, and safety mechanisms. Consumers are given no information, no controls, and no ability to decide, while providers retain complete control.

8/

3 weeks ago
Terms of (Ab)Use: An Analysis of GenAI Services Generative AI services like ChatGPT and Gemini are some of the fastest-growing consumer services. Individuals using such services must accept their terms of use before access, and conform to these ter...

New paper from @aial.ie! @harshp.com, Dick Blankvoort, Adel Shaaban, @sashamtl.bsky.social & me

We analysed 6 GenAI ToS, finding missing info, major power imbalances & user obligations that are impossible to meet without violating the terms

arxiv.org/abs/2603.18964 & aial.ie/research/ter...

1/

3 weeks ago
Support Resources for Challenging and Emotionally Taxing AI Research - Coalition for Independent Technology Research Independent technology researchers, journalists and activists who surface and confront politically charged topics such as extremism, disinformation or online hate increasingly face heightened threats,...

Independent AI researchers face rising risks - from harassment to legal intimidation. A new resource by the Coalition for Independent Technology Research & @aial.ie shares tools on safety, wellbeing, and collective care. Read more ⬇️

4 weeks ago
Support Resources for Politically Charged and Emotionally Taxing AI Research A living resource list for researchers, journalists, and civil society working on challenging and emotionally taxing topics.

at @aial.ie, we've been navigating emotionally challenging & politically contentious research. i initially set out to assemble an internal resource but following conversations w @thecoalition.bsky.social, it developed into a blogpost worth sharing w others in similar situations aial.ie/blog/support...

4 weeks ago

🚫 If you or your teen see harmful or illegal content online, be sure to report it to the platform in question. And remember, this applies even if the content is AI-generated. Find more tips and resources around online safety at www.cnam.ie/general-publ... #AIAware #OnlineSafety

4 weeks ago
The study of algorithmic bias is heavily influenced, directly or implicitly, by cognitive psychology. However, cognitive psychologists have highlighted that their field's approaches and methods for evaluating bias face numerous limitations and, in some cases, are suffering crises. Furthermore, inherent differences between human cognition and algorithmic systems mean that uncritically transferring human bias evaluations onto algorithms (like implicit association tests) can be misleading, shortsighted, and unhelpful ().

i conclude with a caution: while cognitive bias and algorithmic bias share some historical roots, uncritically applying methods from human cognition studies to model evaluation can be misleading, shortsighted, and unhelpful

12/

1 month ago
The concept of algorithmic bias also entered the mainstream, alongside the popularization of AI products, as investigations of actual deployed algorithms garnered public attention. In 2016, a team of investigative journalists from ProPublica unveiled how a recidivism algorithm that was widely used across the United States was biased against racial minorities (). Work by Safiya Umoja Noble made breakthroughs in establishing that search engines, far from being neutral tools, often reproduce systemic racism, sexism, and economic inequity. Ruha Benjamin further argued that algorithmic bias towards racial minorities is not an accident or a byproduct of design but often an inbuilt feature masked by the language of progress, efficiency, or innovation. Empirical investigations of real-world deployed systems played a significant role in the term “bias” becoming a well-known phenomenon, particularly outside academic research. Buolamwini and Gebru evaluated three commercial facial recognition technology (FRT) products, demonstrating disparate performance based on gender and skin tone. In popular media, scandals such as the Google Photos app that labeled Black people as “gorillas” (), Amazon’s recruitment algorithm that penalized women (), and the wrongful arrest of Robert Williams, a Black man misclassified by FRT (), all contributed to the popularization of bias in public understanding. As AI systems increase in scale, sophistication, and modality, especially with the emergence of generative AI, the ways and nuances in which they encode bias are still underexplored [see Large Language Models].

Black women scholars, alongside key real-world incidents, played a crucial role in bringing the concept of algorithmic bias into the mainstream

3/

1 month ago
Algorithmic Bias

fresh off the press from yours truly: oecs.mit.edu/pub/b61joemo...

I offer an overview of algorithmic bias. I trace its historical roots, examine canonical scholarship and notable real-world incidents, and explore how algorithmic bias emerged as a field of study

1/

1 month ago
Complaint

the lawsuit: knightcolumbia.org/documents/hp...

1 month ago
Trump is using immigration policy to suppress speech, lawsuit claims A new lawsuit accuses the administration of violating the First Amendment by threatening the visas of researchers for work on disinformation and content moderation of social media.

The Coalition for Independent Technology Research (CITR) has filed a lawsuit challenging the Trump administration’s censorship of noncitizen academics and independent researchers under its immigration policy: www.npr.org/2026/03/09/n...

1 month ago
GPAI Training Transparency

The team (Dick Blankvoort, @harshp.com, & Maximilian Gahntz) will be presenting this work at FAccT 🎉 in June, and you can access the preprint and analysis here: aial.ie/research/gpa...

If you have feedback and/or are interested in collaborating with them, please reach out.

end/

1 month ago
How Big AI Developers are Skirting a Mandate for Training Data Transparency We need better visibility into what data AI developers are using to train their models, write Dick Blankvoort, Harshvardhan Pandit, and Maximilian Gahntz.

This work is timely and is already being covered by media:

www.techpolicy.press/how-big-ai-d...

15/

1 month ago

Despite their declared assurances or signing of codes of conduct/practice, no big provider has published a summary. Only 4 providers have explicitly done so, and all are small orgs or open-source developers. This undercuts arguments that the obligation is burdensome or excessive.

2/

1 month ago

New paper from team @aial.ie! aial.ie/research/gpa...

The EU AI Act's Article 53(1)(d) obliges GPAI model providers to publicly provide a 'summary' of their model’s training data. The team assessed published summaries along 6 dimensions & found that all big providers failed on all 6.

1/

1 month ago
AI Accountability Lab Stewarding a greater ecology of accountability in the age of AI

As Director of @aial.ie at ADAPT & @tcddublin.bsky.social, Abeba and her team have since produced influential research on #surveillance practices, secured #EU funding to develop audit frameworks, and contributed to both national and #global #AI policy discussions. More about the lab here: aial.ie

1 month ago

Dr Abeba Birhane @abeba.bsky.social @aial.ie, ADAPT & @tcddublin.bsky.social, features in @researchireland.ie's inaugural strategy: Curiosity, Capability, Competitiveness – Charting Ireland’s Research and Innovation Future 2026–2030.

📌 Read the strategy here: www.researchireland.ie/news/researc...

1 month ago

today a class of young students (12-14 yr old) came to visit @aial.ie on campus. they were so keen to learn about AI and most importantly I was blown away by the type of questions they asked us: water consumption of data centers, how openai makes money, why RAM prices keep going up, ...

1 month ago
AI-produced material online threatens to ‘erode the foundations of democratic life’ Director of Trinity College Dublin’s AI Accountability Lab says tools such as ChatGPT and Grok are a ‘social disaster’

well, this was quick www.irishtimes.com/politics/202...

2 months ago
Reads: Most importantly, there is no AI without massive financial and ideological backing. It is therefore pointless to discuss its techniques or capabilities without asking who controls it, who benefits from it, who builds and deploys it, and what it is doing in the world. As Stafford Beer (2002) argued, the purpose of a system is what it does.

Reads: Though less explicit than Thiel’s call to replace politics with technology, major tech firms have effectively privatised core digital public goods. Platforms like Facebook, Google Search, and OpenAI’s ChatGPT operate at infrastructural scale in Ireland, shaping information, communication, and access to knowledge. Yet their algorithms remain opaque and their governance remains private, with minimal democratic accountability to the public who depend on them; effectively ceding aspects of the democratic process to commercial interests.

The monopolization of digital spaces has turned democracy into something the highest bidder can buy and is degrading the digital public goods themselves. As the AI industry, social media and search platforms grow more extractive and less trustworthy, they erode the foundations of democratic life: trust, dialogue, and accountability, blurring the line between truth and falsehood.

An example is the deepfake video falsely showing President Catherine Connolly withdrawing from the presidential race last October, which amassed over 160,000 Facebook views before being removed.

GenAI’s non-deterministic, stochastic architecture produces plausible output without regard for accuracy or truth.

This makes generative AI a societal disaster and a major threat to truth, democratic processes, information ecosystems, knowledge production, and the social fabric

Reads: For truth, democracy, and the rule of law to endure in the AI era, we need to cultivate an ecosystem of transparency and accountability. Yet governance by algorithms inherently places our digital public squares and democratic processes in the hands of those building these systems in line with their political and profit-seeking agendas. Without real mechanisms in place, talk of transparency and accountability is an empty gesture.

An internal Meta memo outlining plans to launch facial recognition in smart glasses “during a dynamic political environment where many civil society groups that we would expect to attack us would have their resources focused on other concerns” illustrates how those advocating for accountability are under-resourced, retaliated against, and targeted.

Large tech and AI companies, despite selling promises of innovation and societal benefit, monetize and undermine the very society they claim to serve. What is needed is not just regulation, but active enforcement.

Given the track record of tech giants, stricter regulation and enforcement is not “anti–freedom of speech” or anti-competitiveness. It is one of the clearest ways governments can show they serve the public interest. After all, innovation that disregards truth and democratic processes risks undermining democracy itself.

I appeared as an expert witness before the Joint Committee on AI at the Houses of the Oireachtas (the parliament of Ireland) to discuss "AI: truth and democracy" this morning. You can read my opening statement here: www.oireachtas.ie/en/publicati...

2 months ago
Blue to green gradient graphic with headshots of the speakers for the event, and text reading: "Technomoral Conversations: What's the Story with AI? Exploring AI Narratives. Join us on 11 February at 6pm in Edinburgh & online, where we will hear from Alex Taylor (University of Edinburgh), Abeba Birhane (Trinity College Dublin), Louise Amoore (Durham University) and John Thornhill (Financial Times)." There are logos for EFI, CTMF and BRAID, the co-organisers of the event.

During her visit, @abeba.bsky.social will be taking part in our Technomoral Conversations event exploring AI narratives and counter-narratives – a collaboration w/ @braiduk.bsky.social & @edfuturesinstitute.bsky.social

🗓️ 11 Feb 18.00-19.30
📍 Edinburgh Futures Institute & online
🎟️ edin.ac/3MZEm0a

2 months ago
Headshot photo of Dr Abeba Birhane in front of a bright blue background. She has long black hair, worn in braids. She is wearing glasses and a silky, pale grey dress shirt. Text reads: Dr Abeba Birhane, Trinity College Dublin.

This February, we look forward to hosting @abeba.bsky.social for one week as our Distinguished Visiting Scholar!

Dr Birhane founded and leads @aial.ie and is assistant professor of AI @tcddublin.bsky.social. She researches AI accountability with a focus on audits of AI models and training datasets.

2 months ago

if you’re passionate about AI accountability research and enjoy working in a vibrant lab with a multi-disciplinary team, but are not interested in doing traditional academic work, this position might be for you

2 months ago