

If you are concerned about the government's plans to give gardaí access to facial recognition technology, then sign our petition today and help us fight back.

www.iccl.ie/facial-recog...

1 week ago 32 20 4 0
Preview
Enforce is hiring: AI Expert
How data about European defence personnel and political leaders flows to foreign states and non-state actors

Job opportunity

We are hiring an AI expert to join ICCL Enforce

www.iccl.ie/digital-data...

2 weeks ago 8 18 0 0

Enforce at the @iccl.bsky.social is hiring.

Full-time role. Remote in the EU.

Apply early. Deadline 5pm Dublin time on 15 April 2026.

www.iccl.ie/digital-data...

#AI #AIPolicy #EUAIAct #DigitalRights #TechPolicy
@abeba.bsky.social @rocher.lc @techpolicypress.bsky.social @leevisaari.bsky.social

2 weeks ago 9 10 0 2

thank you @lilianedwards.bsky.social - we indeed cite your paper as a foundation! @jpquintais.bsky.social your paper on content moderation vis. DSA is on the reading list :) Our next steps are investigating combined DSA, GDPR, AIA, etc. Your new project looks intriguing. May we have a call?

3 weeks ago 1 0 1 0
Video

My testimony at the Oireachtas (Irish Parliament) Justice Committee yesterday evening: new documents reveal serious problems in the recruitment process for Ireland's Data Protection Commissioner.

3 weeks ago 52 35 2 7
Preview
ICCL reveals information about DPC recruitment process
ICCL is appearing before the Oireachtas Justice Committee to discuss the EU's Digital Omnibus package.

This afternoon I will speak at the Oireachtas Justice Committee. I will share new information obtained by ICCL about the latest Data Protection Commissioner recruitment process, in which a former Meta spokesperson was appointed Ireland's new Data Protection Commissioner.
www.iccl.ie/press-releas...

3 weeks ago 24 14 2 2
Preview
Terms of (Ab)Use: An Analysis of GenAI Services
Generative AI services like ChatGPT and Gemini are some of the fastest-growing consumer services. Individuals using such services must accept their terms of use before access, and conform to these ter...

New paper from @aial.ie! @harshp.com, Dick Blankvoort, Adel Shaaban, @sashamtl.bsky.social & me

We analysed 6 GenAI ToS, finding missing info, major power imbalances & user obligations that are impossible to meet without violating the terms

arxiv.org/abs/2603.18964 & aial.ie/research/ter...

1/

4 weeks ago 175 103 2 8
Post image

New paper from team @aial.ie! aial.ie/research/gpa...

Article 53(1)(d) of the EU's AI Act obliges GPAI model providers to publicly provide a 'summary' of their model’s training data. The team assessed published summaries along 6 dimensions & found that all big providers failed on all 6.

1/

1 month ago 130 74 2 3
Preview
How Big AI Developers are Skirting a Mandate for Training Data Transparency
We need better visibility into what data AI developers are using to train their models, write Dick Blankvoort, Harshvardhan Pandit, and Maximilian Gahntz.

There is a battle raging over the lack of visibility into AI training data, write Dick Blankvoort, Harshvardhan Pandit, and Maximilian Gahntz. A neglected provision in the European Union’s AI Act may prove to be the biggest break in securing more transparency from AI developers to date.

1 month ago 7 2 1 0
Preview
LLMs fail at 70% of simple office tasks
Occasionally I love ChatGPT. Like when I gave it a research paper I’d written and the itinerary for my planned trip to Australia this November and asked it to look for related art exhibitions…

Did you know that LLMs fail at around 70% of all typical office tasks like generating a report on something from a spreadsheet? The Agent Company is a simulated office environment for benchmarking LLMs - I wrote a blog post about the paper here (with links and a rant): jilltxt.net/llms-fail-at...

7 months ago 160 54 7 12
Post image

The EU must use the anti-coercion instrument now, or be humiliated. Europe must be free to enforce its own law on its own soil.

7 months ago 253 82 24 12
AI and Fraternity, Abeba Birhane, AI Accountability Lab

I envision a future where human dignity, justice, peace, kindness, care, respect, accountability, and rights and freedoms serve as the north stars that guide AI development and use. Realising these ideals can’t happen without intentional, tireless work, dialogue, and confrontation of ugly realities, even when they are uncomfortable to deal with. This starts with separating hype from reality. Pervasive narratives portray AI as a magical, fully autonomous entity approaching God-like omnipotence and omniscience. In reality, audits of AI systems reveal that they consistently fail to deliver on grandiose promises and suffer from all kinds of shortcomings, issues often swept under the rug.

AI in general, and GenAI in particular, encodes and exacerbates historical stereotypes, entrenches harmful societal norms, and amplifies injustice. A robust body of evidence demonstrates that, from hiring and welfare allocation to medical care allocation and anything in between, deployment of AI is widening inequity, disproportionately impacting people at the margins of society, and concentrating power and influence in the hands of a few. Major actors, including Google, Microsoft, Amazon, Meta, and OpenAI, have willingly aligned with authoritarian regimes and proactively abandoned their pledges to fact-check, prevent misinformation, respect diversity and equity, and refrain from using AI for weapons development, while retaliating against critique.

The aforementioned vision can’t and won’t happen without confrontation of these uncomfortable facts. This is precisely why we need active resistance and refusal of unreliable and harmful AI systems; clearly laid out regulation and enforcement; and shepherding of the AI industry towards transparency and accountability of responsible bodies. "Machine agency" must be in service of human agency and empowerment, a coexistence that isn't a continuation of modern tech corporations’ inequality-widening,


so I am one of the 12 people (including the “godfathers of AI”) who will be at the Vatican this September for a working group on the Future of AI, over two full days

here is my Vatican approved short provocation on 'AI and Fraternity' for the working group

8 months ago 530 154 29 17

(reply to review request) I won't review articles that aren't open access as I'd be performing free labour for the publisher's profit. Providing access to journals which I already access via my university is not remuneration, and besides, the community still suffers if articles are kept paywalled.

8 months ago 1 0 0 0
AI for Good [Appearance?]
Reflections on the last-minute censorship of my keynote at the AI for Good Summit 2025

A short blogpost detailing my experience of censorship at the AI for Good Summit, with links to the original and censored versions of my slides and to my talk

aial.ie/blog/2025-ai...

9 months ago 152 89 3 11
Preview
Open Joint Letter against the Delaying and Reopening of the AI Act
CDT Europe, alongside the European Consumer Organisation (BEUC), European Digital Rights (EDRi) and the European Centre for Not-for-Profit Law (ECNL), co-drafted a letter signed by 52 civil society or...

⏰ Today, CDT Europe, together with 51 experts, academics & civil society organisations, sent an open letter to the European Commission to express our concerns regarding the forthcoming Digital Simplification package, which could include revisiting the AI Act.

👇🏻 Read the full letter on our website:

9 months ago 21 12 2 3
OSF

Indeed! I wrote an article recently called "Simple now, Complex later: The Questionable Efficacy of Diluting GDPR Article 30(5)" which shows that this exemption is toothless and actually risks creating more issues! doi.org/10.31235/osf...

9 months ago 0 0 0 0
Preview
UN AI summit accused of censoring criticism of Israel and big tech over Gaza war - Geneva Solutions
A prominent AI scientist says she was pressured by the organisers of the UN’s flagship conference on AI to censor parts of her presentation that criticised Israel over its war in Gaza and the role of ...

genevasolutions.news/science-tech...

9 months ago 186 57 1 4

a couple of hours before my keynote, I went through an intense negotiation with the organisers (for over an hour) where we went through my slides and had to remove anything that mentioned 'Palestine' or 'Israel' and replace 'genocide' with 'war crimes'

1/

9 months ago 1411 672 37 63

Unethical: if there are things in the work, intended or unintended, that obscure, manipulate, or influence the "reviewer", they are by definition unethical.

9 months ago 0 0 0 0

The simplest solution to "stop the clock" on the AI Act is to not use AI. You can't have it both ways, sorry. Want to drive? Get a license. Want to use AI? Get compliant.

9 months ago 0 0 0 0
Preview
Digital sovereignty can’t be bargained away
The European Commission has tools, public support and a mandate to act on Big Tech. Trading that away for short-term calm would be a costly mistake.

Do you think that Europe should bargain away its digital sovereignty to appease Trump and the broligarchy? Strong majorities in Germany, France, and Spain are against that (YouGov).

@coricrider.com and I have a better plan:
www.politico.eu/article/digi...

9 months ago 67 27 2 5
Preview
How US Firms Are Weakening the EU AI Code of Practice | TechPolicy.Press
Instead of giving in, the Commission must ensure that the Code reflects the intent of the AI Act and safeguards public interest, write Nemitz and Oueslati.

How #US firms are weakening the #EU #AI #Code of Practice: by pressuring the European Commission to prioritise a few US firms over 1,000 stakeholders, the companies put the entire process at risk and lose credibility as actors of public interest. #AIAct #OpenAI #GAFAM www.techpolicy.press/how-us-firms...

9 months ago 10 11 0 0

Same. I've started using "social media" as my literature review source to get updates and know about work/stuff. So it's less personal and more professional.

9 months ago 1 0 0 0

I'll start with the obvious ones: data on device and the advertising identifier. I wish there were a penalty for such obvious lies 😪

9 months ago 1 0 0 0

Agree. Looks increasingly like the selling point is either to trigger fear of social judgement (language) or of being left behind (prestige), or to encourage not having to take responsibility (learning).

9 months ago 1 0 0 0
Preview
Computer-vision research powers surveillance technology - Nature
An analysis of research papers and citing patents indicates the extensive ties between computer-vision research and surveillance.

New paper hot off the press www.nature.com/articles/s41...

We analysed over 40,000 computer vision papers from CVPR (the longest-standing CV conference) & associated patents, tracing pathways from research to application. We found that 90% of papers & 86% of downstream patents power surveillance

1/

9 months ago 954 532 34 77

😲 this is becoming a worrying trend - there was also a huge absence (through exclusion) of civil society in the AI Act GenAI code of conduct workshops

9 months ago 0 0 0 0
OSF

Draft article "Simple now, Complex later: The Questionable Efficacy of Diluting GDPR Article 30(5)" questions the Commission's proposal and shows that this approach will not yield the intended results and will lead to more compliance issues for organisations. doi.org/10.31235/osf...

10 months ago 2 0 0 0

So it wasn't just me! I called Vista the "frosted glass" look and was laughing when Apple launched "liquid glass". If it's anything like Vista, with the UX also changing, it will feel pretty for a week and then we will realise it's full of friction and wastes time. Hopefully not.

10 months ago 0 0 0 0

Combine this with use of personal data to train models which could then contain it in pseudonymous form, and we're going to be in a pickle to fix the mess. We need to address that before it gets too complicated both legally and socially.

10 months ago 1 0 1 0