
Posts by Damini Satija

Journalism under attack: Predator spyware in Angola - Amnesty International Security Lab A new investigation by the Security Lab uncovers the first forensically confirmed case of the Predator spyware being used to target civil society in Angola.

Today we have established that Predator spyware was used in 2024 to target Teixeira Cândido, a prominent Angolan journalist and press freedom activist.

This is the first forensic confirmation of its use in Angola.

securitylab.amnesty.org/latest/2026/...

2 months ago 1 0 0 0

Please share with your communities and others who may benefit! This is a recent field and it has taken a lot of collaborative learning to bring together this resource. We want to continue co-designing and co-creating this work!

4 months ago 2 0 0 0

It is designed for others (civil society groups, journalists, activists and researchers) investigating AI systems used by government and public institutions, with suggested routes for pursuing accountability.

4 months ago 1 0 1 0

This toolkit brings together years of knowledge from Amnesty International's algorithmic investigations into a clear, practical method combining legal analysis, community testimony and public records.

4 months ago 1 0 1 0
Global: Amnesty International launches an Algorithmic Accountability toolkit to enable investigators, rights defenders and activists to hold powerful actors accountable for AI-facilitated harms With the widespread use of Artificial Intelligence (AI) and automated decision-making systems (ADMs) that impact our everyday lives, it is crucial that rights defenders, activists and communities are ...

While conversations on AI regulation remain stalled and divided (even as its rollout is supercharged), it is critical to build our collective power to investigate AI's harms and seek accountability.

That is why today we have launched the Algorithmic Accountability Toolkit.

www.amnesty.org/en/latest/ne...

4 months ago 1 0 1 0
UK: Government’s unchecked use of tech and AI systems leading to exclusion of people with disabilities and other marginalized groups People with disabilities, those living in poverty or who have serious health conditions are being left in a bureaucratic limbo due to digital exclusion caused by the Department for Work and Pensions’ (...

New report from Amnesty Tech uncovering how the UK's Department for Work and Pensions has created an inaccessible social security system for people already at risk of poverty through the constant testing, rolling out, and rolling back of costly AI & other tech. www.amnesty.org/en/latest/ne...

9 months ago 2 0 0 0
UK: Encryption order threatens privacy rights The United Kingdom government’s order to Apple to allow security authorities access to encrypted cloud data severely harms the privacy rights of users in the UK and worldwide.

Here's @amnesty.org and @hrw.org's response to the UK government's order to Apple to allow security authorities access to encrypted cloud data, an extremely harmful privacy move for users in the UK and worldwide. With more from @heiferbeefmountain.bsky.social www.amnesty.org/en/latest/ne...

1 year ago 17 10 0 0
Video

Damini Satija, Director of Tech and Human Rights at Amnesty, is at the Paris #AIActionSummit.

She's sharing three things we're asking decision makers to take heed of on AI harms and regulation 👇

1 year ago 40 20 0 0
Global/France: AI Action Summit must meaningfully center binding and enforceable regulation to curb AI-driven harms Ahead of the AI Action Summit, which begins on February 10, Amnesty International’s Director of the technology and human rights programme, Damini Satija, said:  “With global leaders and tech executive...

With @daminis.bsky.social, @amnesty.org is represented at the Paris 2025 AI Action Summit. The meeting is an opportunity to make progress toward human rights-compliant AI regulation at the global level 👇
www.amnesty.org/en/latest/ne...

1 year ago 8 3 0 0

@techpolicypress.bsky.social

1 year ago 0 0 0 0
AI as Double Speak for Austerity | TechPolicy.Press Amnesty Tech's Likhita Banerji and Damini Satija say the Summit should prioritize people and communities over the whims of corporations.

In the lead-up to next week’s Paris AI Summit, @lipstickkrantikari.bsky.social and I call on government leaders and tech executives to confront how AI and austerity, often coded as ‘government efficiency’, are driving inequality and entrenching corporate power.

www.techpolicy.press/ai-as-double...

1 year ago 17 9 1 1
Post image

📢 LAST CHANCE: Apply for @amnesty.org's Digital Forensic Fellowship!

Working with our team at the Security Lab, you'll learn the tech and investigative skills needed to expose how governments abuse advanced spyware and other surveillance tech against activists and civil society.

1 year ago 15 18 1 1
Serbia: Authorities using spyware and Cellebrite forensic extraction tools to hack journalists and activists - Amnesty International Security Lab Serbian police and intelligence authorities are using advanced phone spyware alongside Cellebrite mobile phone forensic products to unlawfully target journalists, environmental activists and other ind...

🚨 NEW: Serbian authorities have used highly invasive spyware, including NSO Group’s Pegasus, as well as digital forensic tools to target activists & journalists during periods of detention or routine police interviews, @amnesty.org investigation reveals

securitylab.amnesty.org/latest/2024/...

1 year ago 3 2 0 0
How Pakistan’s VPN ban undermines rights Over the past year, Pakistan has teetered on the edge of digital oppression, and the proposed ban on ‘unregistered’ VPNs could be the tipping point toward becoming a fully...

Over the past year, Pakistan has teetered on the edge of digital oppression, and the proposed ban on ‘unregistered’ VPNs could be the tipping point toward becoming a fully mass-surveilled state.

Our team at Amnesty Tech writes for The News International

www.thenews.com.pk/print/125643...

1 year ago 3 0 0 0
Sweden: Authorities must discontinue discriminatory AI systems used by welfare agency The use of opaque artificial intelligence (AI) systems by Försäkringskassan, Sweden’s Social Insurance Agency, must be immediately discontinued, Amnesty International said today, following an investig...

🚨 We're calling on the Swedish authorities to discontinue a discriminatory AI system used to flag individuals for benefits fraud, based on a new investigation from Lighthouse Reports

More below w/ analysis from David Nolan, @amnesty.org

www.amnesty.org/en/latest/ne...

1 year ago 3 3 0 0