A teen radicalized online used DeepSeek to plan his mother's murder. And this isn't an isolated incident.
How many more lives must be lost before AI companies and lawmakers take AI regulation seriously?
It's clear. We need real safeguards now.
Agree? Add your name 👇
A 13-year-old asks an AI chatbot: “How do I make them pay?” Instead of stopping him, it helps.
This scenario isn’t just hypothetical. AI-assisted violence is real and is affecting kids.
That's why we all need to play our part in pushing for safety-by-design standards from AI companies ⬇️
We found that 8 in 10 popular AI chatbots regularly assisted ‘teens’ in planning violent attacks.
That's why CCDH worked with Baroness Kidron to amend the UK Crime & Policing Bill, making it illegal for AI chatbots to assist with violence. Lawmakers need to follow her lead.
Headline from the Daily Mail: AI bot told teen to use a hammer to kill his mother
Young men are getting radicalized online & AI chatbots are helping them commit real violence. A teen spent hours consuming misogynistic content, then used DeepSeek to help plan his mother's murder.
This tragic case is exactly what our research has warned about.
An LA jury just ruled social media is addictive by design — meaning harm isn’t accidental, it’s systemic.
But one verdict isn’t enough.
Real change requires lawmakers to act. We need regulation that ensures platforms put safety-by-design first. Join us👇
The verdict is in: Meta and YouTube are addictive by design.
A landmark bellwether trial found they drove users to addiction, even at the cost of actively harming children.
This sets a precedent, but accountability isn’t enough. We need regulation to make platforms safe by design.
The New Mexico jury has decided: Meta willfully failed to protect kids from child predators on its platforms.
This is the first verdict of the landmark social media trials taking place this year.
This case sets a legal precedent, but we still need regulation to stop harms before they happen.
In Feb, Canada’s worst school shooting in 40 years occurred & the suspect reportedly used ChatGPT for planning.
In Finland, a 16-year-old used the chatbot to plan a stabbing in a high school.
AI-assisted violence is real & it's in our schools. So, what can we do? Learn more in our new blog 👇
Totally. It’s time for lawmakers to demand better from Big Tech and create and enforce legislation that holds them accountable for the harms their platforms cause.
Be part of the movement holding Big Tech accountable:
act.counterhate.com/page/138772/...
Leading AI chatbots are making it easier for ‘teen’ accounts to plan violence — even though effective safeguards already exist.
But Anthropic’s Claude shows what’s possible: it recognized and refused escalating violent requests in 68% of cases.
Read more in @theregister.com 👇
The UK House of Lords has voted to criminalize AI chatbots that promote or assist terrorism.
The move follows CCDH's report that found 8 in 10 mainstream AI chatbots were regularly willing to help plan violent attacks.
Learn more 👇
Three teenage girls are suing xAI, alleging Grok was used to generate nonconsensual sexualized images of them as minors.
CCDH research on Grok has shown how easily these tools can produce abusive content at scale.
This lawsuit shows the harmful consequences of unregulated AI companies.
8 in 10 leading AI chatbots typically assisted with planning violent attacks in our research with CNN.
Safeguards exist, but companies are choosing not to use them.
We’re calling on lawmakers to demand safety-by-design & real guardrails from AI companies before more harm is done.
Join us ⬇️
A senator just compared social media harm to Chick-fil-A closing on Sundays. The restaurant industry is actually one of the most regulated in the US.
We need Big Tech regulation to protect kids online.
A 16-year-old in Finland used ChatGPT to plan a stabbing attack on his classmates.
In CCDH testing, we found that 8 in 10 leading chatbots would regularly do the same.
The EU has tools to fix this. CCDH's Laura Kaun explains why they need to use them now 👇
Leading AI chatbots are making it easier for ‘teen’ accounts to plan violence, according to our latest report with @cnn.com.
This isn't hypothetical. Canada’s worst school shooting in 40 years happened in Feb & the suspect reportedly used ChatGPT to help plan the attack.
More on our findings 👇
Big news. UK lawmakers just voted to make it illegal for AI chatbots to assist terrorist offences – based on CCDH research.
Our testing found 8 in 10 chatbots regularly helped plan violent attacks. That research just moved Parliament — thanks to Baroness Kidron.
Watch👇
BBC's latest doc confirms what our research has shown for years: harmful content doesn't spread randomly; it's a Big Tech decision.
Outrage drives engagement, which is profitable.
Join CCDH for Big Tech accountability 👇
act.counterhate.com/page/138772/petition/1
AI chatbots are making it easier for ‘teen’ accounts to plan violence like school shootings — even though safeguards exist.
Claude shows what’s possible: it recognized & refused escalating violent requests in 68% of cases.
The tech exists to prevent harm, so why aren’t more companies using it?
8 in 10 leading chatbots assisted 'teen' users with planning violent attacks in our testing. That research is now in front of the UK Parliament.
Peers should adopt the amendment today.
Read our findings here: counterhate.com/research/kil...
🆕 AI chatbots are a threat to national security.
An amendment to the UK Crime & Policing Bill, based on CCDH research, would address this risk by making it illegal for a chatbot to assist terrorist offences.
Algorithms shape what we see and reward outrage over truth.
Bad actors exploit this system to spread harmful content at scale, fueling dangerous, even deadly, real-world consequences.
We need transparency from social media platforms to ensure accountability.
Watch to learn more 👇