Posts by Amy King (she/they)

The safeguarding concerns identified in this report are pressing. Additional measures need to be taken to ensure children are not exposed to inappropriate content on X.

Read our report here: www.counterhate.com/research/x-rated

6 days ago
‘Happy (and safe) shooting!’ AI chatbots helped teen users plan violence in hundreds of tests | CNN
Daniel, a troubled American teen, turned to an AI chatbot to vent his political frustration. “Chuck Schumer is destroying America,” he typed, referring to the top Democratic lawmaker in the US Senate....

Despite many AI companies claiming that longer conversations pose a higher risk of unsafe answers, our testing revealed concerning gaps in safety features within a series of just four questions.

Users deserve better protections built in.

Read CNN's write-up here: edition.cnn.com/2026/03/11/a...

4/4

1 month ago

Chatbots condense and contextualise information that would otherwise require more tailored searching and critical thinking if a teen used a search engine. They cherry-pick data. They sometimes provide detailed answers, links, and images.

3/4

1 month ago

Not only that, but many of the chatbots are designed to congratulate users with "Great question!" and even offer suggestions for what information they can provide the user next.

And the test accounts were set up as 13- and 18-year-olds.

2/4

1 month ago

Really proud of the team's work on this report. The findings are appalling.

Chatbots are failing to connect the dots between questions about how to make people "pay", how to obtain maps of high schools or locations of politicians' offices, and advice on gun shops and models.

1/4

1 month ago

Governments must "decouple from big tech" if they want to enact effective regulation, says @ceciliarikap.bsky.social. We need alternatives to the cloud so we can divest from the entrenched power of Microsoft, Google and Amazon.

A brilliant book talk! I look forward to getting my hands on the book

1 month ago

This is *today*!
There's still time to sign up ⏰
🔗 www.tickettailor.com/events/techn...

1 month ago
C21 Literature: Journal of 21st-Century Writings | Issue: 3(12) The Century at 25 (2026)

Part one of 'The Century @25' SI of @c21literature.bsky.social, edited by @alicebennett.bsky.social @melissaschuh.bsky.social @heyitsdenisew.bsky.social and me, is out now (part two coming soon): please check it out! c21.openlibhums.org/issue/1422/i...

2 months ago

Just started doing new data analysis and I know I keep saying this, but: I really, really don't think people appreciate how much this moral panic was a deliberate and extremely expensive invention.

2 months ago
Join us
Bad actors and greedy platforms are creating a toxic environment of hate and disinformation online that has real-world impacts. Join us and be part of the campaign to clean up social media and hold platforms accountable.

Our new findings reveal Grok became an industrial-scale tool for generating sexualized images of women & girls.

But this is preventable. Platforms & AI companies have the ability to build in safety & keep choosing not to.

Get our latest research + updates in your inbox: join our email community ⤵️

2 months ago

You know Alex Pretti & Renee Good—the 2 white people ICE killed.

ICE has also killed Keith Porter, a Black man, Parady La, a Cambodian man, & 5 Latinos—Heber Sanchaz Domínguez, Victor Manuel Diaz, Luis Beltran Yanez-Cruz, Luis Gustavo Nunez Caceres, and Geraldo Lunas Campos.

9 TOTAL.

ABOLISH ICE.

2 months ago

Our January newsletter is out! ❄ technologypolicy.substack.com/p/the-tip-ne...
We're particularly excited to invite you to our upcoming event with @ceciliarikap.bsky.social on 25 Feb 3pm GMT, where she will present her findings on the Cloud ☁️ in the age of AI 👉
www.tickettailor.com/events/techn...

3 months ago

New research from my team has some staggering findings that reinforce the urgent need for tech accountability.

counterhate.com/research/gro...

3 months ago

On 25th of Feb Cecilia Rikap and I will discuss her upcoming book "The Rulers: Corporate Power in the Age of AI and the Cloud" for the @polstudiesassoc.bsky.social's @techpolicy.bsky.social group with Q&A.

1500-1630 GMT

(more details 👇🏼 and in reply)

www.tickettailor.com/events/techn...

3 months ago

The MPG x TIP conference in Bournemouth is a go! We've already had a fantastic keynote by Susan Banaducci and now we're onto the first wave of panels!

@psampg.bsky.social

3 months ago

At @techpolicy.bsky.social x @psampg.bsky.social Navigating Digital Democracy conference this week in Bournemouth! Please come say hi if you're around ☺️

3 months ago

Starting soon!!! 👇👇

4 months ago
Literature Is Not a Vibe: On ChatGPT and the Humanities | Los Angeles Review of Books
Rachele Dini discusses OpenAI’s “A Machine-Shaped Hand” and an academic sector in crisis.

Bit late to this but it's such a clever piece of AI criticism, developing a literary critique of Sam Altman's auto-metafiction story as a way to explore the grave threats to the "intellectual infrastructure" of the humanities - and higher education more broadly - posed by AI. lareviewofbooks.org/article/lite...

5 months ago
Call for Papers – Corpora & Discourse International Conference 2026

🚨 Call for Papers! 🚨 we are excited to launch the Call for Papers for the Corpora and Discourse International Conference 2026. Deadline 16 November 2025. Submit your abstracts here: wp.lancs.ac.uk/cad-2026/cal.... Please share widely! #CADS2026

7 months ago

NEW: ChatGPT-5's "safe completions" are a misnomer, generating MORE harmful content than ChatGPT-4o and prioritising prolonged user engagement over redirecting, refusing to respond to harmful prompts, or reducing the risk of users' exposure to harm.

counterhate.com/wp-content/u...

6 months ago
PSA TIP x Media and Politics - Navigating Digital Democracy | The Political Studies Association (PSA)

*PSA EVENT* 📢 Final Call for Papers📢 'Navigating #Digital #Democracy' joint annual conference by PSA #Media & #Politics Group and #Technology, Information & #Policy Specialist Groups @psampg.bsky.social @techpolicy.bsky.social
📆 Paper Proposals by 26 September
➡️

7 months ago

not me forgetting that the 26th is only days away and running to my pc to get to work on mine 😬

7 months ago
Navigating Digital Democracy Conference – BU’s Media School
You are warmly invited to submit papers for presentation at the joint annual conference of the Political Studies Association’s Media and Politics Group (MPG) & Technology, Information and Policy Group (TIP).

🚨Navigating Digital Democracy Conf– 8-9 Jan, Bournemouth.

The MPG x TIP conference is going to be ace 😍 & I strongly recommend submitting an abstract.

Topics relating to the core areas of @psampg.bsky.social & @techpolicy.bsky.social are welcome.

www.bournemouth.ac.uk/navigating-d...

7 months ago
What starts, then, as an apparently moral argument about whether to be for or against violence quickly turns into a debate about how violence is defined and who is called “violent”—and for what purposes. When a group assembles to oppose censorship or the lack of democratic freedoms, and the group is called a “mob,” or is understood as a chaotic or destructive threat to the social order, then the group is both named and figured as potentially or actually violent, at which point the state can issue a justification to defend society against this violent threat. When what follows is imprisonment, injury, or killing, the violence in the scene emerges as state violence. We can name state violence as “violent” even though it has used its own power to name and to represent the dissenting power of some group of people as “violent.” Similarly, a peaceful demonstration such as that which took place in Gezi Park in Istanbul in 2013,5 or a letter calling for peace such as the one signed by many Turkish scholars in 2016,6 can be effectively figured and represented as a “violent” act only if the state either has its own media or exercises sufficient control over the media. Under such conditions, exercising rights of assembly is called a manifestation of “terrorism,” which, in turn, calls down the state censor, clubbing and spraying by the police, termination of employment, indefinite detention, imprisonment, and exile.

the introduction of The Force of Nonviolence by Judith Butler (2020) is screamingly relevant today

7 months ago

I'm really looking forward to this! Get your abstracts in - deadline is next Friday (26th September)

7 months ago

I saw a post recently about speculative futures of generative AI - it looked brilliant but I forgot to save it for later 🫠 anyone able to help a gal out?

7 months ago

rewarding my microwave by letting it heat up a fork as a treat

8 months ago
The Dark Side of AI: Where Women Can’t Say No
When Laura Bates, founder of the Everyday Sexism Project, decided to investigate the world of artificial intelligence, she was shocked to find that generative AI is helping create a world where misogy...

Meta’s VR platforms are not safe – especially for women & minors.

Our research found abuse in the metaverse every 7 minutes, including graphic sexual content & threats of violence.

Who would feel safe in a space like that?

Our research in Marie Claire ⤵️
www.marieclaire.com.au/news/ai-miso...

8 months ago
Meta’s AI rules have let bots hold ‘sensual’ chats with children
An internal Meta policy document reveals the social-media giant’s rules for chatbots, which have permitted provocative behavior on topics including sex and race.

🤖 What many already knew. Helpful proof, reporting, documentation from @jeffhorwitz.bsky.social: Meta allowing sensual roleplay with kids, promoting discrimination, giving false medical info. But listen, there's a tech PR trick I need to draw your attention to here. 🧵
www.reuters.com/investigates...

8 months ago

I'm really grateful to have worked on this one, and more so that I have such a stellar group of colleagues who care so deeply about creating a safer internet and holding corporations accountable in their design choices for the tools and platforms they create.

8 months ago