If you are concerned about the government's plans to give gardaí access to facial recognition technology, then sign our petition today and help us fight back.
www.iccl.ie/facial-recog...
Enforce at @iccl.bsky.social is hiring.
Full-time role. Remote in the EU.
Apply early. Deadline 5pm Dublin time on 15 April 2026.
www.iccl.ie/digital-data...
#AI #AIPolicy #EUAIAct #DigitalRights #TechPolicy
@abeba.bsky.social @rocher.lc @techpolicypress.bsky.social @leevisaari.bsky.social
thank you @lilianedwards.bsky.social - we indeed cite your paper as a foundation! @jpquintais.bsky.social your paper on content moderation vis-à-vis the DSA is on the reading list :) Our next steps are investigating combined DSA, GDPR, AIA, etc. Your new project looks intriguing. May we have a call?
My testimony at the Oireachtas (Irish Parliament) Justice Committee yesterday evening: new documents reveal serious problems in the recruitment process for Ireland's Data Protection Commissioner.
This afternoon I will speak at the Oireachtas Justice Committee. I will share new information obtained by ICCL about the latest Data Protection Commissioner recruitment process, in which a former Meta spokesperson was appointed Ireland's new Data Protection Commissioner.
www.iccl.ie/press-releas...
New paper from @aial.ie! @harshp.com, Dick Blankvoort, Adel Shaaban, @sashamtl.bsky.social & me
We analysed 6 GenAI ToS, finding missing info, major power imbalances, & user obligations that are impossible to meet without violating the terms
arxiv.org/abs/2603.18964 & aial.ie/research/ter...
1/
New paper from team @aial.ie! aial.ie/research/gpa...
The EU AI Act's Article 53(1)(d) obliges GPAI model providers to publicly provide a 'summary' of their model's training data. The team assessed published summaries along 6 dimensions & found that all big providers failed on all 6.
1/
There is a battle raging over the lack of visibility into AI training data, write Dick Blankvoort, Harshvardhan Pandit, and Maximilian Gahntz. A neglected provision in the European Union’s AI Act may prove to be the biggest break in securing more transparency from AI developers to date.
Did you know that LLMs fail at around 70% of all typical office tasks like generating a report on something from a spreadsheet? The Agent Company is a simulated office environment for benchmarking LLMs - I wrote a blog post about the paper here (with links and a rant): jilltxt.net/llms-fail-at...
The EU must use the anti-coercion instrument now, or be humiliated. Europe must be free to enforce its own law on its own soil.
AI and Fraternity, Abeba Birhane, AI Accountability Lab

I envision a future where human dignity, justice, peace, kindness, care, respect, accountability, and rights and freedoms serve as the north stars that guide AI development and use. Realising these ideals can't happen without intentional, tireless work, dialogues, and confrontations of ugly realities, even if they are uncomfortable to deal with.

This starts with deciphering hype from reality. Pervasive narratives portray AI as a magical, fully autonomous entity approaching God-like omnipotence and omniscience. In reality, audits of AI systems reveal a consistent failure to deliver on grandiose promises, and systems that suffer from all kinds of shortcomings, issues often swept under the rug. AI in general, and GenAI in particular, encodes and exacerbates historical stereotypes, entrenches harmful societal norms, and amplifies injustice. A robust body of evidence demonstrates that, from hiring and welfare allocation to medical care allocation and anything in between, deployment of AI is widening inequity, disproportionately impacting people at the margins of society, and concentrating power and influence in the hands of a few.

Major actors, including Google, Microsoft, Amazon, Meta, and OpenAI, have willingly aligned with authoritarian regimes and proactively abandoned their pledges to fact-check, prevent misinformation, respect diversity and equity, and refrain from using AI for weapons development, all while retaliating against critique. The aforementioned vision can't and won't happen without confrontation of these uncomfortable facts. This is precisely why we need active resistance and refusal of unreliable and harmful AI systems; clearly laid out regulation and enforcement; and shepherding of the AI industry towards transparency and accountability of responsible bodies. "Machine agency" must be in service of human agency and empowerment, a coexistence that isn't a continuation of modern tech corporations' inequality-widening,
so I am one of the 12 people (including the "godfathers of AI") who will be at the Vatican this September for a two-day working group on the Future of AI
here is my Vatican approved short provocation on 'AI and Fraternity' for the working group
(reply to review request) I won't review articles that aren't open access, as I'd be performing free labour for the publisher's profit. Providing access to journals which I already access via my university is not remuneration, and besides, the community still suffers if articles are kept paywalled.
A short blogpost detailing my experience of censorship at the AI for Good Summit with links to both original and censored versions of slides and links to my talk
aial.ie/blog/2025-ai...
⏰ Today, CDT Europe, together with 51 experts, academics & civil society organisations, sent an open letter to the European Commission to express our concerns regarding the forthcoming Digital Simplification package, which could include revisiting the AI Act.
👇🏻 Read the full letter on our website:
Indeed! I wrote an article recently called "Simple now, Complex later: The Questionable Efficacy of Diluting GDPR Article 30(5)" which shows that this exemption is toothless and actually risks increasing issues! doi.org/10.31235/osf...
a couple of hours before my keynote, I went through an intense negotiation with the organisers (for over an hour) where we went through my slides and had to remove anything that mentioned 'Palestine' or 'Israel' and replace 'genocide' with 'war crimes'
1/
Unethical - if there are things in the work, intended or unintended, that obscure, manipulate, or influence the "reviewer", they are by definition unethical.
The simplest solution to "stop the clock" on the AI Act is to not use AI. You can't have it both ways, sorry. Want to drive? Get a license. Want to use AI? Get compliant.
Do you think that Europe should bargain away its digital sovereignty to appease Trump and the broligarchy? Strong majorities in Germany, France, and Spain are against that (YouGov).
@coricrider.com and I have a better plan:
www.politico.eu/article/digi...
How #US firms are weakening the #EU #AI Code of Practice: by pressuring the European Commission to prioritize a few US firms over 1,000 stakeholders, the companies put the entire process at risk and lose credibility as actors of public interest. #AIAct #OpenAI #GAFAM www.techpolicy.press/how-us-firms...
Same. I've started using "social media" as my literature review source to get updates and know about work/stuff. So it's less personal and more professional.
I'll start with the obvious ones: data on device and the advertising identifier. I wish there was a penalty for such obvious lies 😪
Agree. Looks increasingly like the selling point is either to trigger the fear of social judgement (language), or being left behind (prestige), or to encourage not having to take responsibility (learning).
New paper hot off the press www.nature.com/articles/s41...
We analysed over 40,000 computer vision papers from CVPR (the longest-standing CV conference) & associated patents, tracing pathways from research to application. We found that 90% of papers & 86% of downstream patents power surveillance
1/
😲 this is becoming a worrying trend - there was also a hugely missing presence (through exclusion) of civil society in the AI Act GenAI code of conduct workshops
Draft article "Simple now, Complex later: The Questionable Efficacy of Diluting GDPR Article 30(5)" that questions the Commission's proposal and shows this approach will not yield the intended results, and will lead to more compliance issues for organisations. doi.org/10.31235/osf...
So it wasn't just me! I called Vista the "frosted glass" look and was laughing when Apple launched "liquid glass". If it's anything like Vista, with the UX also changing, it will feel pretty for a week and then we'll realise it's full of friction and wastes time. Hopefully not.
Combine this with use of personal data to train models which could then contain it in pseudonymous form, and we're going to be in a pickle to fix the mess. We need to address that before it gets too complicated both legally and socially.